Japan Geoscience Union Meeting 2024

Presentation information

[J] Poster presentation

Session symbol S (Solid Earth Sciences) » S-CG Complex & General

[S-CG50] Driving Solid Earth Science with Machine Learning

Sunday, May 26, 2024, 17:15-18:45, Poster Hall (Makuhari Messe International Exhibition Hall, Hall 6)

Conveners: Hisahiko Kubo (National Research Institute for Earth Science and Disaster Resilience), Yuki Kodera (Meteorological Research Institute, Japan Meteorological Agency), Makoto Naoi (Hokkaido University), Keisuke Yano (The Institute of Statistical Mathematics)


[SCG50-P02] Surrogate modeling of hydrothermal systems using a continual learning framework

*Kazuya Ishitsuka1 (1. Graduate School of Engineering, Kyoto University)

Keywords: deep learning, surrogate modeling, hydrothermal systems

Recent advances in deep learning make it possible to use neural networks for the surrogate modeling of subsurface systems. Thanks to the universal approximation capability of deep neural networks, a trained network can simulate physical phenomena at a lower computational cost. In this study, I developed a framework for the surrogate modeling of hydrothermal systems using continual learning. Modeling hydrothermal systems is important for understanding subsurface hydrothermal fluid migration, but one of the difficulties of such modeling is the lack of observational data. To mitigate this problem, transfer learning may be a practical option. However, transfer learning suffers from catastrophic forgetting: when a deep neural network is trained sequentially on new information, its prediction accuracy on previously learned information drops drastically. Continual learning is potentially effective against this problem because its algorithms include strategies to mitigate catastrophic forgetting.
I examined the developed framework with 2D synthetic models. Three types of synthetic hydrothermal conceptual models were first created, and 2000 numerical datasets were then generated for each conceptual model. A deep fully connected neural network was trained sequentially on the datasets derived from the different conceptual models, and the accuracy of the predicted quantities was examined. As a result, while the transfer learning framework failed to maintain prediction accuracy once another type of dataset was trained, the developed framework with continual learning successfully maintained prediction accuracy on the previously trained dataset types.
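The abstract does not specify which continual learning algorithm the framework uses, so the following is only a minimal toy sketch of the phenomenon it describes: a one-parameter model trained sequentially on two "conceptual models" (tasks) forgets the first task under plain sequential training, while a simple regularization-based continual learning strategy (an L2 penalty anchoring the weight to its previously learned value, in the spirit of methods such as elastic weight consolidation) retains much of the earlier accuracy. All functions, task definitions, and hyperparameters here are illustrative assumptions, not the author's actual setup.

```python
import numpy as np

def train(w0, xs, ys, lam=0.0, w_anchor=0.0, lr=0.01, steps=2000):
    """Gradient descent on MSE for a one-parameter model y = w * x.
    When lam > 0, an L2 penalty lam * (w - w_anchor)^2 discourages
    drifting from the weight learned on the previous task (a crude
    stand-in for a continual-learning regularizer)."""
    w = w0
    for _ in range(steps):
        grad = np.mean(2.0 * (w * xs - ys) * xs) + 2.0 * lam * (w - w_anchor)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)
y_a = 2.0 * x    # task A: first "conceptual model" (illustrative)
y_b = -1.0 * x   # task B: second "conceptual model" (illustrative)

w_a = train(0.0, x, y_a)                           # learn task A first
w_naive = train(w_a, x, y_b)                       # plain sequential training
w_cl = train(w_a, x, y_b, lam=1.0, w_anchor=w_a)   # penalize forgetting

# Error on task A after training on task B: the naive model forgets,
# the regularized model retains much of its task-A accuracy.
err_a_naive = np.mean((w_naive * x - y_a) ** 2)
err_a_cl = np.mean((w_cl * x - y_a) ** 2)
print(err_a_naive, err_a_cl)
```

The anchoring penalty trades some task-B accuracy for retained task-A accuracy; practical continual learning methods weight this penalty per parameter by its importance to earlier tasks rather than uniformly.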