Japan Geoscience Union Meeting 2025

Presentation information

[E] Poster

A (Atmospheric and Hydrospheric Sciences) » A-CG Complex & General

[A-CG41] Satellite Earth Environment Observation

Thu. May 29, 2025 5:15 PM - 7:15 PM Poster Hall (Exhibition Hall 7&8, Makuhari Messe)

Convener: Riko Oki (Japan Aerospace Exploration Agency), Yoshiaki HONDA (Center for Environmental Remote Sensing, Chiba University), Tsuneo Matsunaga (Center for Global Environmental Research and Satellite Observation Center, National Institute for Environmental Studies), Nobuhiro Takahashi (Institute for Space-Earth Environmental Research, Nagoya University)

5:15 PM - 7:15 PM

[ACG41-P06] Flood Inundation Mapping Model Trained on Global Flood Events: Incorporating Physical Flood Variables and Applications in Japan

Takumi Bannai3,4, *Yuki Kita1, Dai Yamazaki2 (1. Gaia Vision Inc., 2. Institute of Industrial Science, The University of Tokyo, 3. LTS, Inc., 4. Me-Lab, Inc.)

Keywords: Satellite flood mapping, Physics-guided machine learning, Floodplain terrain information, Synthetic Aperture Radar

In recent years, near-real-time flood inundation mapping using satellite imagery has become increasingly crucial in disaster response, as rapid satellite-data analysis significantly supports decision-making in rescue operations and early damage assessment. However, many existing studies primarily focus on case study–based evaluations of specific flood events, leaving the generalization performance of these models for spatiotemporally distant flood events insufficiently assessed. Because floods can occur worldwide and are influenced by region-specific factors, it is essential to develop models capable of maintaining high accuracy even in regions and under conditions not represented in the training data.

Therefore, this study aims to evaluate the generalization performance of a model trained on diverse flood events worldwide by examining how accurately it can map floods for events that are geospatially separated and exhibit different regional characteristics. While most satellite-based flood inundation mapping studies rely primarily on data-driven approaches, relatively few have actively integrated knowledge of physical flooding processes. To address this gap, we propose a machine learning–based approach that integrates additional flood-physics-based variables to improve the accuracy of satellite-derived flood mapping.

For SAR-based flood inundation mapping, we use Sentinel-1 SAR data (VV/VH polarizations), the MERIT DEM (elevation), and relative elevation information (FLDDIF) as inputs. We employ Sen1Floods11, a global flood database, as training data. To assess the model's generalization performance, we then evaluate it on a flood event not contained in Sen1Floods11: the 2019 flood in Japan caused by Typhoon Hagibis. For validation, we use the "inundation estimation map" surveyed by the Geospatial Information Authority of Japan. All datasets were resampled to a common horizontal resolution of 1 arcsec (approximately 30 m).
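The input preparation described above can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the function name `stack_inputs` and the array names are hypothetical, and all rasters are assumed to be already co-registered on the common 1-arcsec grid.

```python
import numpy as np

def stack_inputs(sar, dem, flddif):
    """Stack co-registered rasters into a (C, H, W) model input.

    Hypothetical helper: the arrays mirror the inputs named in the
    abstract (Sentinel-1 SAR backscatter, MERIT DEM elevation, and
    FLDDIF relative elevation), all assumed to be already resampled
    to the same 1-arcsec (~30 m) grid.
    """
    channels = [np.asarray(sar), np.asarray(dem), np.asarray(flddif)]
    if len({c.shape for c in channels}) != 1:
        raise ValueError("all input rasters must share the same grid")
    # Channel-first layout, as expected by most CNN frameworks.
    return np.stack(channels, axis=0)
```

Stacking in channel-first order keeps the physical variables aligned pixel-by-pixel with the SAR backscatter, so the segmentation network can combine them locally.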

To extract flood inundation from satellite data, we adopt U-Net, a deep learning architecture designed for image segmentation and widely used in satellite-based research. SAR, topographic, and hydrological data are provided to the model as three input channels. Our evaluation shows that the model achieved 93.1% accuracy over multiple rivers confirmed to have suffered flood damage, and the resulting visualizations were spatially coherent. This finding suggests that even a model trained on global flood events may generalize to some degree when applied to flood scenarios in Japan. However, although accuracy was high, recall was relatively low (32.4%), indicating that many inundated pixels were missed. One likely reason is that training data for localized flood events contain a large majority of non-inundated (negative) pixels, causing the model to systematically underestimate inundation.
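The gap between high accuracy and low recall under class imbalance can be made concrete with a toy pixel count. The helper below is illustrative only (not the authors' evaluation code); the example numbers are chosen to mimic the reported pattern, not to reproduce the actual evaluation.

```python
import numpy as np

def accuracy_and_recall(pred, truth):
    """Pixel-wise accuracy and recall for binary flood masks.

    Illustrative helper: with mostly dry (negative) pixels, a model
    that misses much of the flooded area can still score high
    accuracy, because the many correct negatives dominate the count.
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.sum(pred & truth)          # flooded pixels correctly detected
    fn = np.sum(~pred & truth)         # flooded pixels missed
    accuracy = np.sum(pred == truth) / truth.size
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return float(accuracy), float(recall)
```

For example, in a 100-pixel scene with 10 flooded pixels, detecting only 3 of them (with no false alarms) still yields 93% accuracy but only 30% recall, mirroring the imbalance effect described above.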

Nevertheless, when DEM or FLDDIF data were included, recall improved notably to 48.1% and 46.3%, respectively, compared with using SAR data alone. Consequently, the F1 score, which balances precision and recall, also increased. Error analysis confirmed that these physical variables reduced false-negative errors, thereby decreasing missed inundation. Preventing such omissions is crucial in disaster-detection tasks such as flood mapping. Additionally, a comparison with optical satellite images suggested that the reduction in missed areas was especially pronounced in regions with complex land use, such as rice paddies. Because these areas are susceptible to noise in SAR signals from vegetation and water-surface reflections, incorporating hydrological data likely helped correct misclassifications and improve overall detection accuracy. These findings underscore the importance of appropriately incorporating physically based information into near-real-time flood mapping with satellite data.
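The effect of the recall gain on F1 can be checked with the standard definition of F1 as the harmonic mean of precision and recall. The precision value used below is an assumed placeholder (the abstract does not report precision); only the recall values 32.4% and 48.1% come from the text.

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall (illustrative helper)."""
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)

# Holding precision fixed (0.9 is an assumed value, not a reported
# result), raising recall from 32.4% to 48.1% raises F1:
f1_sar_only = f1_score(0.9, 0.324)   # SAR channel only
f1_with_dem = f1_score(0.9, 0.481)   # SAR + DEM channels
```

Because F1 is a harmonic mean, it is pulled down strongly by whichever of precision or recall is lower, so recovering missed inundated pixels moves F1 more than the same gain in accuracy would.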