Japan Geoscience Union Meeting 2023

Presentation Information

[J] Oral

Session Symbol S (Solid Earth Sciences) » S-TT Technology & Techniques

[S-TT43] The Future of Solid Earth Science Opened Up by High-Performance Computing

Mon. May 22, 2023, 09:00 - 10:15, International Conference Room (IC) (Makuhari Messe International Conference Hall)

Conveners: Takane Hori (Japan Agency for Marine-Earth Science and Technology), Yuji Yagi (Graduate School of Life and Environmental Sciences, University of Tsukuba), Katsuhiko Shiomi (National Research Institute for Earth Science and Disaster Resilience), Takanori Matsuzawa (National Research Institute for Earth Science and Disaster Resilience), Chairpersons: Takane Hori (Japan Agency for Marine-Earth Science and Technology), Katsuhiko Shiomi (National Research Institute for Earth Science and Disaster Resilience)

10:00 - 10:15

[STT43-05] TomoATT: a new HPC-ready open-source project of eikonal-equation-solver-based adjoint-state traveltime tomography for large-scale imaging of subsurface velocity heterogeneity and seismic anisotropy

*Masaru Nagaso1,2, Jing Chen1,2, Ping Tong1,2, Shucheng Wu1,2 (1. School of Physical and Mathematical Sciences, Nanyang Technological University, 2. Earth Observatory of Singapore)

Keywords: high-performance computing, adjoint-state traveltime tomography, inverse analysis, fast sweeping method

We have started an open-source project, “TomoATT”, which implements adjoint-state traveltime tomography (ATT) for revealing velocity heterogeneity and seismic anisotropy. The main objective of this project is to apply ATT to large-scale problems that require HPC systems. For this purpose, we use an eikonal equation solver for the forward/adjoint simulations, which requires far fewer computational resources than wave-equation-based solvers. In this solver, the anisotropic eikonal equation is solved in spherical coordinates using a high-order fast sweeping method. The Fréchet derivatives of the objective function are then calculated from the computed traveltime and adjoint fields. Finally, the optimization is implemented as a step-size-controlled gradient descent method with a multigrid model parameterization technique.
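
To illustrate the forward kernel, below is a minimal sketch of the classical first-order fast sweeping method (Zhao, 2005) for the isotropic eikonal equation |∇T| = s on a 2D Cartesian grid. This is a hypothetical toy example, not TomoATT's actual code: TomoATT solves the anisotropic equation in spherical coordinates with a high-order scheme, and the grid size and constant slowness here are purely illustrative.

```cpp
// Sketch of a first-order fast sweeping method for |grad T| = s on a 2D grid.
// TomoATT itself uses a high-order anisotropic variant in spherical coordinates.
#include <algorithm>
#include <cmath>
#include <vector>

constexpr int N = 101;                   // grid points per dimension (illustrative)
constexpr double h = 1.0 / (N - 1);      // grid spacing
constexpr double INF = 1e30;

// Godunov upwind update at node (i, j) for slowness s.
double local_update(const std::vector<double>& T, int i, int j, double s) {
    double a = std::min(i > 0     ? T[(i - 1) * N + j] : INF,
                        i < N - 1 ? T[(i + 1) * N + j] : INF);
    double b = std::min(j > 0     ? T[i * N + j - 1] : INF,
                        j < N - 1 ? T[i * N + j + 1] : INF);
    if (std::abs(a - b) >= s * h)        // causal information from one axis only
        return std::min(a, b) + s * h;
    // Two-sided update: root of the quadratic from the discretized equation.
    return 0.5 * (a + b + std::sqrt(2.0 * s * s * h * h - (a - b) * (a - b)));
}

// One iteration = four sweeps with alternating orderings, so characteristics
// from every direction are covered.
void fast_sweep(std::vector<double>& T, double s) {
    for (int sweep = 0; sweep < 4; ++sweep) {
        int i0 = (sweep & 1) ? N - 1 : 0, di = (sweep & 1) ? -1 : 1;
        int j0 = (sweep & 2) ? N - 1 : 0, dj = (sweep & 2) ? -1 : 1;
        for (int i = i0; i >= 0 && i < N; i += di)
            for (int j = j0; j >= 0 && j < N; j += dj)
                T[i * N + j] = std::min(T[i * N + j], local_update(T, i, j, s));
    }
}

int main() {
    std::vector<double> T(N * N, INF);
    T[(N / 2) * N + N / 2] = 0.0;        // point source at the grid center
    for (int it = 0; it < 8; ++it)       // a few iterations suffice here
        fast_sweep(T, 1.0);
}
```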

For the implementation, we introduced a hybrid, multilayer parallelization: first, the Fréchet derivatives for multiple seismic events are calculated simultaneously. Within the calculation for each event, the global domain may be divided into subdomains via MPI inter-node parallelization, which removes the memory limit of a single compute node. Finally, within each (sub)domain, the node values on a sweeping surface are computed in parallel using MPI shared memory, which eliminates the communication cost between MPI processes on the same node. In addition to this MPI parallelization scheme, we introduced memory relocation and Single Instruction, Multiple Data (SIMD) vectorization for both AVX and ARM SVE in order to mitigate the inefficient memory access patterns of stencil-based computation.
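
The intra-node shared-memory layer described above can be sketched with MPI-3 shared-memory windows, where all ranks on a node attach to a single traveltime array instead of exchanging halo messages. The communicator layout, grid size, and synchronization pattern below are assumptions for illustration only, not TomoATT's actual implementation.

```cpp
// Sketch of an MPI-3 shared-memory window: ranks on one node share a single
// traveltime array, so sweeping-surface updates need no message passing.
#include <mpi.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    // Split COMM_WORLD into per-node communicators of ranks that can share memory.
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);
    int node_rank;
    MPI_Comm_rank(node_comm, &node_rank);

    // Rank 0 on each node allocates the shared traveltime grid; others attach.
    const MPI_Aint n_nodes = 1000000;    // grid size (illustrative)
    MPI_Aint local_bytes = (node_rank == 0) ? n_nodes * sizeof(double) : 0;
    double* T = nullptr;
    MPI_Win win;
    MPI_Win_allocate_shared(local_bytes, sizeof(double), MPI_INFO_NULL,
                            node_comm, &T, &win);
    if (node_rank != 0) {                // query rank 0's base pointer
        MPI_Aint size;
        int disp;
        MPI_Win_shared_query(win, 0, &size, &disp, &T);
    }

    // Each rank updates its slice of the sweeping surface directly in T[...],
    // synchronizing between sweeps instead of exchanging halo messages.
    MPI_Win_fence(0, win);
    // ... per-rank updates of T ...
    MPI_Win_fence(0, win);

    MPI_Win_free(&win);
    MPI_Comm_free(&node_comm);
    MPI_Finalize();
}
```

With such a layout, intra-node sweep updates become plain loads and stores into shared memory, while only the halo exchange between subdomains on different nodes goes through regular MPI messages.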

We performed benchmark tests on a local machine and on HPC systems including Fugaku. The results show good scaling from small to large computational grids. Finally, we apply this numerical tool to a massive set of real seismic arrival-time data from the California and Nevada region.