Japan Geoscience Union Meeting 2024

Presentation information

[J] Oral

S (Solid Earth Sciences) » S-CG Complex & General

[S-CG50] Driving Solid Earth Science through Machine Learning

Mon. May 27, 2024 9:00 AM - 10:15 AM Convention Hall (CH-B) (International Conference Hall, Makuhari Messe)

Convener: Hisahiko Kubo (National Research Institute for Earth Science and Disaster Resilience), Yuki Kodera (Meteorological Research Institute, Japan Meteorological Agency), Makoto Naoi (Hokkaido University), Keisuke Yano (The Institute of Statistical Mathematics); Chairperson: Kazuki Ohtake (Earthquake Research Institute, The University of Tokyo), Keisuke Yano (The Institute of Statistical Mathematics), Hisahiko Kubo (National Research Institute for Earth Science and Disaster Resilience)

9:45 AM - 10:15 AM

[SCG50-04] A deep learning-based approach for forecasting ground motion and precipitation

★Invited Papers

*Hirotaka Hachiya1,2 (1.Wakayama University, 2.Center for AIP, RIKEN)

Keywords: deep learning, spatial interpolation, precipitation forecast

Deep learning-based methods have been applied to problems in many fields, including seismology and meteorology. In this work, we present methods for 1) spatial interpolation of ground motion and 2) precipitation forecasting:

1) The acquisition of spatially continuous ground motion data is essential for assessing damaged areas and for the appropriate deployment of rescue and medical teams. Spatial interpolation methods have therefore been developed that linearly estimate the value at an unobserved point from neighbouring observed values. Meanwhile, realistic, spatially continuous ground motion data for different scenarios can be generated by 3D finite-difference methods using a high-resolution structural model, which allows supervised data to be collected even at unobserved points. This work therefore proposes a supervised spatial interpolation framework and applies advanced deep inpainting methods, in which the spatially distributed observed points are treated as a masked image and non-linearly expanded by a convolutional encoder-decoder network. However, the translation invariance of convolutions would prevent locally fine-grained interpolation, because the relationship between a target point and its surrounding observation points varies from region to region due to topography and subsurface structure. To overcome this problem, this work introduces a position-dependent partial convolution, in which the kernel weights are adjusted according to their position in the image based on a trainable position-feature map. Experimental results on ground motion data show the effectiveness of the proposed method.
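
The core operation can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: it uses a single channel, valid padding, and reduces the position-feature map to a scalar gain per output location, whereas the actual method learns richer per-position kernel adjustments inside a deep network. It shows the two essential ingredients: partial-convolution renormalisation over the observed mask, and kernel weights that depend on image position.

```python
import numpy as np

def pd_partial_conv(x, mask, kernel, pos_feat):
    """Position-dependent partial convolution (single channel, valid padding).

    x        : (H, W) image with unobserved points set to 0
    mask     : (H, W) binary mask, 1 = observed
    kernel   : (k, k) base kernel weights
    pos_feat : (H-k+1, W-k+1) position-feature map; here a scalar gain
               per output location (a simplification of the trainable
               position-feature map in the paper)
    """
    k = kernel.shape[0]
    H, W = x.shape
    out = np.zeros((H - k + 1, W - k + 1))
    new_mask = np.zeros_like(out)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            m = mask[i:i + k, j:j + k]
            n_valid = m.sum()
            if n_valid > 0:
                # Position-dependent weights: base kernel modulated by location.
                w = kernel * pos_feat[i, j]
                # Partial convolution: renormalise by the observed fraction
                # so holes in the mask do not bias the output downwards.
                out[i, j] = (w * x[i:i + k, j:j + k] * m).sum() * (k * k / n_valid)
                new_mask[i, j] = 1.0
    return out, new_mask
```

The mask update (`new_mask`) is what lets stacked layers progressively fill unobserved regions, as in standard partial-convolution inpainting.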

2) As the damage caused by heavy rain increases, there is a growing need for improved precipitation forecasting. A practical approach is the linear integration of several existing forecasts, which makes it possible to visualise the contribution of each forecast at different locations. However, current methods, such as arithmetic and Bayesian averaging, use a single weight for each forecast over the entire space, making it difficult to account for local variations in importance. Furthermore, although U-Net-based spatial forecasts have been proposed, they are limited to short-term predictions and, because of their non-linear processing, do not allow the contribution of each individual forecast to be visualised. To overcome these challenges, we propose a new integration framework based on U-Net image transformation, which generates weight images that integrate the forecasts depending on both time and location. To handle large and highly unbalanced precipitation data effectively, we introduce novel extensions to the U-Net model that address this imbalance and enable position- and time-dependent integration. Experimental results on real precipitation forecast data in Japan show that the proposed method outperforms existing integration methods.
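
The integration step itself can be sketched in a few lines of NumPy. The sketch below assumes the weight images produced by the network are normalised with a softmax over the forecast axis (the paper's actual normalisation may differ), and it omits the U-Net that would generate `weight_logits` from the input forecasts. The key point is that the combination stays linear in the forecasts, so the returned weight maps directly visualise each forecast's contribution at every location.

```python
import numpy as np

def integrate_forecasts(forecasts, weight_logits):
    """Position-dependent linear integration of multiple forecasts.

    forecasts     : (F, H, W) array of F precipitation forecast fields
    weight_logits : (F, H, W) unnormalised weight images, e.g. the
                    output of a U-Net applied to the stacked forecasts
                    (hypothetical interface for illustration)

    Returns the integrated (H, W) field and the (F, H, W) convex
    weights, which sum to 1 at every pixel.
    """
    # Softmax over the forecast axis -> per-pixel convex weights.
    e = np.exp(weight_logits - weight_logits.max(axis=0, keepdims=True))
    w = e / e.sum(axis=0, keepdims=True)
    # Linear combination: each forecast's local contribution is w[f].
    return (w * forecasts).sum(axis=0), w
```

With spatially uniform logits this reduces to arithmetic averaging; the gain of the framework is that the logits, and hence the weights, vary with location and lead time.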