Japan Geoscience Union Meeting 2025

Presentation information

[J] Poster

M (Multidisciplinary and Interdisciplinary) » M-GI General Geosciences, Information Geosciences & Simulations

[M-GI31] Earth and planetary informatics and data utilization

Tue. May 27, 2025 5:15 PM - 7:15 PM Poster Hall (Exhibition Hall 7&8, Makuhari Messe)

Convener: Susumu Nonogaki (Geological Survey of Japan, National Institute of Advanced Industrial Science and Technology), Ken T. Murata (National Institute of Information and Communications Technology), Keiichiro Fukazawa (Research Institute for Humanity and Nature), Yukari Kido (Japan Agency for Marine-Earth Science and Technology)

5:15 PM - 7:15 PM

[MGI31-P03] Evaluation of the implementation for GPU parallelization on geophysical simulations

*Shoji Sakoda1, Keiichiro Fukazawa2, Yasunobu Miyoshi4, Takeshi Iwashita3 (1.Graduate School of Informatics, Kyoto Univ., 2.Research Institute for Humanity and Nature, 3.Academic Center for Computing and Media Studies, Kyoto Univ., 4.Department of Earth and Planetary Sciences, Faculty of Sciences, Kyushu University)


Keywords: Parallelization, GPGPU

Numerical simulations play a decisive role in validating theories and computational methods in various scientific and engineering fields. However, large-scale simulations require long execution times due to their high computational cost. Parallel computing techniques, particularly the use of Graphics Processing Units (GPUs), have been introduced to address this issue. With their massively parallel architecture, GPUs enable significant speedup over traditional CPU-based computation. However, porting a simulation code written for CPUs to GPUs is not easy. One key issue is choosing an appropriate API for GPU computing.
CUDA is a widely used API for GPU computing. It provides low-level control over hardware resources to maximize performance, but porting a code to CUDA requires extensive modifications. In contrast, directive-based approaches such as OpenMP and OpenACC allow GPU utilization with minimal code changes and can thus reduce programming effort. Despite these advantages, comparative analysis of the performance, portability, and ease of implementation of these APIs remains limited. In addition, information on the specific optimizations needed for directive-based approaches is still insufficient.
This study evaluates CUDA, OpenMP, and OpenACC by applying them to two simulation codes: a magnetohydrodynamics (MHD) simulation and an atmospheric dynamics simulation. First, we compare the three programming methods on the MHD simulation, analyzing the execution time and the complexity of the code modification. Next, we evaluate the atmospheric dynamics simulation code modified for GPU execution and compare the performance of the GPU- and CPU-oriented codes. Furthermore, because the MHD and atmospheric simulations differ in computational models and data structures, we analyze how these differences affect GPU optimization.