Japan Geoscience Union Meeting 2025

Presentation Information

[E] Oral Presentation

Session symbol M (Multidisciplinary and Interdisciplinary) » M-GI General Geosciences and Information Geoscience

[M-GI27] Data-driven approaches for weather and hydrological predictions

Thursday, May 29, 2025, 09:00 - 10:30, Exhibition Hall Special Venue (4) (Makuhari Messe International Exhibition Halls 7 and 8)

Conveners: Shunji Kotsuki (Center for Environmental Remote Sensing, Chiba University), Daisuke Hotta (Meteorological Research Institute), Yuki Yasuda (Institute of Science Tokyo), Tsuyoshi Sekiyama (Meteorological Research Institute, Japan Meteorological Agency); Chairperson: Yuki Yasuda (Institute of Science Tokyo)

09:15 - 09:30

[MGI27-02] Conditional Deep Diffusion Modeling for GSMaP Inpainting

*Taiko Kishikawa1, Yuka Muto1, Shunji Kotsuki2,1 (1. Center for Environmental Remote Sensing, Chiba University; 2. Institute for Advanced Academic Research, Chiba University)

Keywords: GSMaP, precipitation map, diffusion model, machine learning

The Global Satellite Mapping of Precipitation (GSMaP), provided by the Japan Aerospace Exploration Agency (JAXA), is a satellite-based precipitation retrieval product that integrates multiple satellite observations. Because the microwave sensors it relies on are carried by polar-orbiting satellites, continuous global precipitation estimation is infeasible, and the microwave-based precipitation estimates contain substantial missing regions. The current GSMaP algorithm fills these missing regions using transformation equations that interpolate from adjacent observations. However, this approach often introduces spatial discontinuities: the estimated precipitation in the filled regions does not connect smoothly with the surrounding observed regions. This issue arises because the current method prioritizes temporal consistency over spatial continuity.
To overcome these limitations, we propose a machine learning-based approach. Precipitation map inpainting can be formulated as video inpainting, a computer vision task in which missing regions are reconstructed from the temporal information of adjacent frames and the spatial cues of surrounding areas. Recent studies have framed video inpainting as a conditional generation task using diffusion models, a state-of-the-art generative approach. A conditional diffusion model built on a 3D U-Net learns spatio-temporal features from paired incomplete and complete video samples and, once trained, reconstructs fully inpainted videos from unseen data with missing regions. Furthermore, because the model is trained end-to-end, it eliminates the need to hand-design complex inpainting algorithms.
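
As a rough illustration of this formulation, the sketch below shows one DDPM-style training step for conditional video inpainting in PyTorch. It is a minimal sketch under stated assumptions, not the authors' implementation: the denoiser signature `model(x_t, t, cond)`, the linear noise schedule, and conditioning on the masked video together with the mask are all illustrative choices.

```python
# Minimal sketch of one conditional diffusion training step for video
# inpainting (illustrative only; `model` is a hypothetical spatio-temporal
# denoiser, not the authors' actual architecture).
import torch
import torch.nn.functional as F

T = 1000                                    # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)       # linear noise schedule (assumed)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def train_step(model, optimizer, video, mask):
    """video: (B, 1, L, H, W) complete precipitation maps;
    mask: (B, 1, L, H, W), 1 where observed, 0 where missing."""
    b = video.shape[0]
    t = torch.randint(0, T, (b,), device=video.device)
    noise = torch.randn_like(video)
    a_bar = alphas_bar.to(video.device)[t].view(b, 1, 1, 1, 1)
    # Forward diffusion: noise the complete video at step t.
    x_t = a_bar.sqrt() * video + (1.0 - a_bar).sqrt() * noise
    # Condition on the incomplete (masked) video and the mask itself.
    cond = torch.cat([video * mask, mask], dim=1)
    pred = model(x_t, t, cond)              # predict the added noise
    loss = F.mse_loss(pred, noise)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```
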
Our model consists of a 3D U-Net and a 3D condition encoder. The 3D U-Net learns the reverse diffusion process, predicting less noisy precipitation maps over L time steps from noisy precipitation maps with missing regions while capturing spatio-temporal features. Encoded features from the 3D condition encoder are injected into each layer of the U-Net encoder and decoder. The condition inputs, comprising infrared imagery, latitude-longitude grids, and date information, provide additional guidance for inpainting.
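
The interface between the condition encoder and the U-Net might look like the following minimal sketch; the input channel layout (infrared imagery, latitude and longitude grids, and a date channel), the channel widths, and the downsampling strides are assumptions, since the abstract does not specify them.

```python
import torch.nn as nn

class CondEncoder3D(nn.Module):
    """Hypothetical 3D condition encoder: maps stacked condition channels
    (e.g., IR imagery, lat grid, lon grid, date -> in_ch=4, assumed) to one
    feature map per U-Net resolution level."""
    def __init__(self, in_ch=4, chs=(32, 64, 128)):
        super().__init__()
        self.stages = nn.ModuleList()
        prev = in_ch
        for c in chs:
            # Halve H and W per stage, keep the temporal length L.
            self.stages.append(nn.Sequential(
                nn.Conv3d(prev, c, kernel_size=3, stride=(1, 2, 2), padding=1),
                nn.SiLU()))
            prev = c

    def forward(self, cond):                 # cond: (B, in_ch, L, H, W)
        feats = []
        h = cond
        for stage in self.stages:
            h = stage(h)
            feats.append(h)                  # one feature map per level
        return feats
```

Each level of the U-Net encoder and decoder would then combine the matching conditional feature map with its own activations, for example by addition or channel-wise concatenation.
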
For the experiments, we used hourly precipitation data for 2023 from the ERA5 dataset, provided by ECMWF, as the complete precipitation maps. Training data were generated by extracting observation masks from GSMaP and applying them to the ERA5 data. For infrared imagery, we used the GPM Merged IR product. The trained model successfully inpainted missing regions in actual GSMaP data, and we evaluated whether the proposed method achieves more spatially coherent inpainting than the conventional approach.
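
The mask-transplant step used to build training pairs can be sketched as follows; the (L, H, W) array layout, the shared grid (regridding assumed done beforehand), and the NaN convention for missing GSMaP pixels are illustrative assumptions, since the abstract does not detail the preprocessing.

```python
import numpy as np

def make_training_triple(era5_precip, gsmap_precip):
    """Transplant a real GSMaP observation mask onto a complete ERA5
    precipitation sequence. Both arrays are (L, H, W) on a common grid;
    missing GSMaP pixels are assumed to be flagged as NaN."""
    mask = np.isfinite(gsmap_precip).astype(np.float32)  # 1 = observed
    incomplete = np.nan_to_num(era5_precip) * mask       # zero out the gaps
    return incomplete, era5_precip, mask                 # (input, target, mask)
```
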