10:00 〜 10:15
[STT43-05] TomoATT: a new HPC-ready open-source project for eikonal-equation-based adjoint-state traveltime tomography, enabling large-scale imaging of subsurface velocity heterogeneity and seismic anisotropy
Keywords: high-performance computing, adjoint-state traveltime tomography, inverse analysis, fast sweeping method
We have started an open-source project, "TomoATT", which implements adjoint-state traveltime tomography (ATT) for revealing velocity heterogeneity and seismic anisotropy. The main objective of this project is to apply ATT to large-scale problems that require HPC systems. For this purpose, we use an eikonal equation solver for the forward/adjoint simulations, which requires far less computational resources than wave-equation-based solvers. This solver computes the anisotropic eikonal equation in spherical coordinates using a high-order fast sweeping method. The Fréchet derivatives of the objective function are then calculated from the computed traveltime/adjoint fields. The optimization is carried out by a step-size-controlled gradient descent method with a multi-grid model parameterization technique.
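TomoATT's actual solver handles the anisotropic eikonal equation in spherical coordinates with a high-order scheme; as a minimal illustration of the underlying idea only, the following sketch implements a first-order fast sweeping method for the 2D isotropic eikonal equation |∇T| = s(x) on a Cartesian grid (all names and simplifications here are ours, not the project's):

```python
import math

def fast_sweeping_2d(n, h, slowness, source):
    """First-order fast sweeping solver for |grad T| = s on an n x n grid
    with spacing h. `slowness(i, j)` returns s at a node; `source` is the
    (i, j) index where T = 0. Illustrative sketch, not TomoATT's code."""
    INF = float("inf")
    T = [[INF] * n for _ in range(n)]
    si, sj = source
    T[si][sj] = 0.0  # the source node is fixed

    def update(i, j):
        if (i, j) == (si, sj):
            return
        # upwind neighbor values in each axis (INF outside the grid)
        a = min(T[i-1][j] if i > 0 else INF, T[i+1][j] if i < n-1 else INF)
        b = min(T[i][j-1] if j > 0 else INF, T[i][j+1] if j < n-1 else INF)
        if math.isinf(a) and math.isinf(b):
            return  # wavefront has not reached this node yet
        f = slowness(i, j) * h
        if abs(a - b) >= f:      # causality: only one upwind neighbor counts
            t_new = min(a, b) + f
        else:                    # solve the two-sided quadratic update
            t_new = 0.5 * (a + b + math.sqrt(2.0 * f * f - (a - b) ** 2))
        if t_new < T[i][j]:
            T[i][j] = t_new     # Gauss-Seidel: keep the causal minimum

    # Four sweep orderings cover all characteristic directions;
    # repeat until the fixed point is reached.
    for _ in range(3):
        for order_i in (range(n), range(n - 1, -1, -1)):
            for order_j in (range(n), range(n - 1, -1, -1)):
                for i in order_i:
                    for j in order_j:
                        update(i, j)
    return T
```

With a constant slowness, the computed traveltimes along the grid axes are exact multiples of s·h, while off-axis nodes carry the first-order discretization error that the high-order scheme in TomoATT is designed to reduce.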
For the implementation, we introduced a hybrid, multilayer parallelization: the Fréchet derivatives for multiple seismic events are calculated simultaneously. Within the calculation of each event, the global domain may be divided into subdomains as MPI inter-node parallelization, which removes the memory-size limit of a single computer. Finally, within each (sub)domain, the node values on a sweeping surface are calculated in parallel with MPI shared memory, which eliminates the communication cost between MPI processes. In addition to this MPI parallelization scheme, we also introduced memory relocation and Single Instruction, Multiple Data (SIMD) parallelization for both AVX and ARM SVE, in order to mitigate the inefficient memory access pattern of stencil-based computation.
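The reason nodes on a sweeping surface can be updated concurrently is that, within one sweep ordering, every node on the anti-diagonal plane i + j + k = level depends only on neighbors from earlier levels. As a hedged sketch of this ordering (our own illustration, not TomoATT's implementation, which distributes the plane across MPI shared-memory ranks):

```python
def sweep_levels(nx, ny, nz):
    """Enumerate the nodes of an nx x ny x nz grid plane by plane,
    where a plane collects all nodes with i + j + k == level.
    Within one plane, eikonal updates touch only neighbors from
    previous planes, so the whole plane can be processed in parallel.
    Illustrative sketch of the sweeping-surface ordering."""
    for level in range(nx + ny + nz - 2):
        plane = [(i, j, level - i - j)
                 for i in range(nx)
                 for j in range(ny)
                 if 0 <= level - i - j < nz]
        yield level, plane
```

In a shared-memory setting, each rank would take a contiguous chunk of `plane` per level; the level-by-level barrier replaces halo exchanges between processes, which is the communication saving described above.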
We performed benchmark tests on a local machine and on HPC systems including Fugaku. The results show good scaling from small to large computational grids. Finally, we applied this numerical tool to real seismic data from the California and Nevada region, with a massive amount of seismic arrival-time data.