Japan Geoscience Union Meeting 2016

Presentation information


Symbol S (Solid Earth Sciences) » S-TT Technology & Techniques

[S-TT55] Creating future of solid Earth science with high performance computing (HPC)

Tue. May 24, 2016 5:15 PM - 6:30 PM Poster Hall (International Exhibition Hall HALL6)

Convener:*Takane Hori(R&D Center for Earthquake and Tsunami, Japan Agency for Marine-Earth Science and Technology), Tsuyoshi Ichimura(Earthquake Research Institute, The University of Tokyo), Ryota Hino(Graduate School of Science, Tohoku University), Taro Arikawa(Port and Airport Research Institute), Takamasa Iryo(Kobe University)

5:15 PM - 6:30 PM

[STT55-P01] Development of large-scale particle simulations for fluid and granular dynamics

*Daisuke Nishiura1, Mikito Furuichi1, Satori Tsuzuki1, Takayuki Aoki2, Hide Sakaguchi1 (1.Japan Agency for Marine-Earth Science and Technology, 2.Tokyo Institute of Technology)

Keywords:DEM, SPH, Parallel computing, Tsunami, Sandbox, Accretionary prism

Large-scale parallel computing is important for numerically reproducing measured results and the dynamics of phenomena in various science and engineering areas, such as civil engineering, bioengineering, and earth sciences. The computational performance of parallelized software tools plays a critical role in such simulation studies, because it determines the simulation resolution, and hence the computational accuracy, achievable within a limited computation time. Recent massively parallel computer systems based on shared- and distributed-memory architectures employ various types of arithmetic processors, and current processor designs exhibit markedly different computational performance depending on the numerical algorithms and implementation methods employed. Parallel computing today generally uses multi-core CPUs, graphics processing units (GPUs), or many-integrated-core (MIC) processors. Multi-core CPUs have traditionally been used in high-performance computing, whereas GPUs, with their many arithmetic cores, were originally designed for computer graphics. A common trend across current processor designs is the growing number of cores combined with vector operations such as single-instruction, multiple-data (SIMD). In this situation, shared-memory parallelization plays a basic but critical role in exploiting the increasing number of arithmetic cores efficiently.
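As a minimal illustration of the SIMD-style, data-parallel computation described above (a hypothetical sketch, not code from the presented work), the following NumPy snippet updates all particle positions and velocities with whole-array arithmetic, the access pattern that maps naturally onto vector units:

```python
import numpy as np

def update_positions(pos, vel, acc, dt):
    """Data-parallel (SIMD-style) explicit time-step update.

    Operating on whole (n, 3) arrays at once expresses the per-particle
    arithmetic as bulk vector operations, rather than a scalar loop.
    """
    vel = vel + acc * dt   # one fused multiply-add pass over all particles
    pos = pos + vel * dt
    return pos, vel

# Example: four particles with unit velocity and no acceleration.
n = 4
pos = np.zeros((n, 3))
vel = np.ones((n, 3))
acc = np.zeros((n, 3))
pos, vel = update_positions(pos, vel, acc, 0.5)  # every position advances by 0.5
```

The same array-at-a-time structure carries over to compiled HPC codes, where the per-element loop body is vectorized by the compiler or written with explicit SIMD intrinsics.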
Particle simulation methods (PSMs) have the benefit of being mesh-free, allowing the computation of large-scale deformations and fractures of a continuum body without expensive remeshing tasks. Among PSMs, smoothed particle hydrodynamics (SPH) is often used for tsunami disaster simulations because of its robustness in free-surface fluid dynamics. The discrete element method (DEM) is a popular PSM for granular dynamics, in which each particle is given geometrical size and shape attributes; the DEM is therefore attractive for simulating granular materials such as sand, pebbles, and other grains.
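To make the DEM idea concrete, here is a hypothetical sketch of a pairwise normal contact force between two spherical particles, using a linear spring-dashpot model (one common DEM contact law; the stiffness `kn` and damping `cn` values are placeholders, and the presented work may use a different model):

```python
import numpy as np

def contact_force(x1, x2, v1, v2, r1, r2, kn=1.0e4, cn=1.0):
    """Linear spring-dashpot normal contact force on particle 1.

    Particles interact only when their spheres overlap, which is what
    makes the DEM suitable for granular materials with explicit sizes.
    """
    d = x2 - x1
    dist = np.linalg.norm(d)
    overlap = (r1 + r2) - dist
    if overlap <= 0.0:
        return np.zeros(3)             # spheres not touching: no force
    n = d / dist                       # unit normal from particle 1 to 2
    vrel_n = np.dot(v2 - v1, n)        # relative velocity along the normal
    fn = kn * overlap - cn * vrel_n    # elastic spring + viscous dashpot
    return -fn * n                     # repulsive force acting on particle 1
```

A full DEM code sums such pairwise forces (plus tangential friction terms) over neighboring particles found via a cell or neighbor list, then integrates the equations of motion.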
Efficient parallel implementation of SPH and DEM is, however, known to be difficult, especially on distributed-memory architectures. Particle methods inherently suffer from workload imbalance when parallelized over fixed spatial domains, because particles move around and change the per-domain workload during a simulation run. Dynamic load balancing is therefore a key technique for performing large-scale SPH or DEM simulations. In this presentation, we introduce several parallel implementation techniques that utilize dynamic load-balancing algorithms, aimed at high-resolution simulations over large domains on massively parallel supercomputer systems. We also introduce applications of large-scale particle simulations that require high-performance computing resources, such as a tsunami disaster simulation accounting for structure–soil–fluid interactions and a sandbox simulation of the thrust dynamics of an accretionary prism.
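The load-imbalance problem described above can be illustrated with a toy sketch (hypothetical, not the algorithm used in the presented work): in a 1-D slice decomposition, subdomain boundaries are periodically moved so that each rank holds roughly the same number of particles. Here the cuts are simply placed at quantiles of the particle x-coordinates; a real distributed-memory implementation would also migrate the particle data between ranks after the boundaries move.

```python
import numpy as np

def rebalance_slices(x, nranks):
    """Return nranks + 1 slice boundaries with near-equal particle counts.

    With static (fixed) boundaries, particles drifting into one region
    overload its rank; recomputing boundaries from the current particle
    distribution restores the balance.
    """
    q = np.linspace(0.0, 1.0, nranks + 1)
    return np.quantile(x, q)  # equal-count cuts along x

# Example: particles clustered in two groups along x.
x = np.array([0.0, 1.0, 2.0, 3.0, 10.0, 10.1, 10.2, 10.3])
bounds = rebalance_slices(x, 2)
counts, _ = np.histogram(x, bins=bounds)  # 4 particles per slice
```

Production codes use analogous ideas in 2-D/3-D (e.g. recursive bisection or space-filling-curve partitioning), rebalancing only when the measured imbalance exceeds a threshold to amortize the migration cost.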