Japan Geoscience Union Meeting 2015

Presentation information


Symbol S (Solid Earth Sciences) » S-TT Technology & Techniques

[S-TT55] Creating future of solid Earth science with high performance computing (HPC)

Wed. May 27, 2015 2:15 PM - 4:00 PM 103 (1F)

Convener:*Takane Hori(R&D Center for Earthquake and Tsunami, Japan Agency for Marine-Earth Science and Technology), Yoshiyuki Kaneda(Japan Agency for Marine-Earth Science and Technology), Muneo Hori(Earthquake Research Institute, University of Tokyo), Ryota Hino(International Research Institute of Disaster Science, Tohoku University), Taro Arikawa(Port and Airport Research Institute), Masaru Todoriki(Center for Integrated Disaster Information Research / Earthquake Research Institute, The University of Tokyo), Chair:Takane Hori(R&D Center for Earthquake and Tsunami, Japan Agency for Marine-Earth Science and Technology)

3:00 PM - 3:15 PM

[STT55-03] Development of high performance particle simulations of fluid and granular dynamics for contributing to human society


Keywords:DEM, parallel computing, Tsunami, ballast track, particle, SPH

Large-scale parallel computing is important for numerically reproducing actual measurement results and the dynamics of phenomena in various science and engineering areas, such as civil engineering, bioengineering, and the earth sciences. The computational performance of parallelized software tools plays a critical role in such simulation studies, because it determines the accuracy achievable at a given simulation resolution within a limited computation time. Recent massively parallel computer systems based on shared- and distributed-memory architectures employ various types of arithmetic processors, and current processor designs exhibit markedly different computational performance depending on the numerical algorithms and implementation methods employed. Parallel computing today generally uses a multi-core CPU, a graphics processing unit (GPU), or a many-integrated-core (MIC) processor. Multi-core CPUs have traditionally been used in high-performance computing, whereas GPUs were originally designed for computer graphics and provide many arithmetic cores. The common trend across current processor designs is an increasing number of cores combined with vector operations such as single-instruction, multiple-data (SIMD). In this situation, shared-memory parallelization plays a basic but critical role in exploiting the growing number of arithmetic cores efficiently.
Numerical simulation methods used in science and engineering include the finite difference method (FDM), finite element method (FEM), finite volume method (FVM), boundary element method (BEM), and particle simulation method (PSM). Among these, the PSM has the benefit of being mesh-free, allowing the computation of large-scale deformations and fractures of a continuum body without expensive remeshing. As a PSM, smoothed particle hydrodynamics (SPH) is often used for tsunami disaster simulations because of its robustness in free-surface fluid dynamics. The discrete element method (DEM) is a popular PSM for granular dynamics in which geometrical size and shape attributes are assigned to each particle. In the conventional formulation of the DEM, the Voigt model is applied in both the normal and tangential directions at each contact point. In the tangential direction, Coulomb friction is introduced to determine the maximum tangential force and the slip condition. In addition, rolling friction can be considered at the contact points. The DEM is therefore attractive for simulating granular materials such as sand, pebbles, and other grains.
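As an illustrative sketch (not the authors' implementation), the conventional DEM contact law described above, with a Voigt (spring-dashpot) element in each direction and a Coulomb cap on the tangential force, can be written as follows; all parameter values and names are hypothetical placeholders:

```python
import math

def dem_contact_force(delta_n, v_n, delta_t, v_t,
                      kn=1.0e4, cn=5.0, kt=8.0e3, ct=4.0, mu=0.5):
    """1-D sketch of a DEM contact: Voigt model (spring + dashpot) in the
    normal and tangential directions, with Coulomb friction as the slip
    limit. Stiffness, damping, and friction values are hypothetical."""
    # Normal direction: spring on the overlap plus dashpot on the normal velocity
    f_n = max(kn * delta_n + cn * v_n, 0.0)   # no tensile contact force
    # Tangential direction: Voigt model on the tangential displacement
    f_t = kt * delta_t + ct * v_t
    # Coulomb friction bounds |f_t|; exceeding the bound means slip
    f_t_max = mu * f_n
    slipping = abs(f_t) > f_t_max
    if slipping:
        f_t = math.copysign(f_t_max, f_t)
    return f_n, f_t, slipping
```

Rolling friction, mentioned above, would add an analogous moment-limiting term at each contact and is omitted here for brevity.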
However, PSM programs must be implemented carefully to avoid write-access conflicts under shared-memory parallelization, especially when accumulating the resultant force on each particle. For distributed-memory parallelization, it is also important to dynamically balance the computational load between compute nodes. To address these issues, we have proposed parallel algorithms that exploit the action-reaction law and parallelize the interaction summation with a reference table so as to avoid memory-access conflicts. We have also implemented a dynamic load-balancing algorithm that resizes the domain-decomposition regions. Our methods have been implemented on various parallel processors, including GPUs, MIC processors, the multi-core CPUs of the K computer, and the vector processors of the Earth Simulator. In this presentation, we will discuss these parallel algorithms and their applications for contributing to human society; tsunami disaster simulations that account for structure-soil-fluid interactions and the impact dynamics of ballast particles in rail tracks are important topics that require high-performance computing resources.
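A minimal sequential sketch of the conflict-free summation idea (function and variable names are ours, not from the actual software): each pair interaction is evaluated once and stored in a pair-force table together with the action-reaction sign, and each particle then sums only the table entries that reference it, so a per-particle parallel loop never writes to another particle's accumulator.

```python
def build_reference_table(pairs, n_particles):
    """For each particle, list (pair_index, sign) entries referencing it.
    sign = +1 for particle i, -1 for particle j (action-reaction law)."""
    table = [[] for _ in range(n_particles)]
    for p, (i, j) in enumerate(pairs):
        table[i].append((p, +1.0))
        table[j].append((p, -1.0))
    return table

def sum_forces(pairs, positions, n_particles, k=1.0):
    """1-D toy interaction (a hypothetical linear force) illustrating
    the two-phase, conflict-free force summation."""
    # Phase 1: evaluate each pair interaction exactly once
    pair_force = [k * (positions[j] - positions[i]) for (i, j) in pairs]
    # Phase 2: each particle reads only its own table entries, so this
    # loop can run with one thread per particle without write conflicts
    table = build_reference_table(pairs, n_particles)
    forces = [0.0] * n_particles
    for a in range(n_particles):
        forces[a] = sum(sign * pair_force[p] for (p, sign) in table[a])
    return forces
```

Because each pair force is reused with opposite signs, the interactions are computed only once while the accumulation remains free of race conditions.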