Japan Geoscience Union Meeting 2018

Presentation information

[JJ] Oral

H (Human Geosciences) » H-DS Disaster geosciences

[H-DS10] Tsunami and Tsunami Forecast

Thu. May 24, 2018 3:30 PM - 5:00 PM 105 (1F International Conference Hall, Makuhari Messe)

convener: Naotaka Yamamoto Chikasada (National Research Institute for Earth Science and Disaster Resilience), Kentaro Imai (Japan Agency for Marine-Earth Science and Technology), Hiroaki Tsushima (Meteorological Research Institute, Japan Meteorological Agency), Chairpersons: Kentaro Imai, Takuto Maeda

4:45 PM - 5:00 PM

[HDS10-29] Trans-boundary realization of the pipelined nested-grid algorithm for distributed tsunami modeling

*Alexander Vazhenin1, Kensaku Hayashi1, Andrey Marchuk2 (1.Graduate School of Computer Science, University of Aizu, 2.Tsunami Laboratory, ICMMG SB RAS)

Keywords:Tsunami Modeling, Nested Grids, Pipeline Computing, Cloud Computing

Tsunami modeling is a computationally heavy problem that calls for combining several complementary approaches. This research focuses on designing a high-speed scheme for tsunami modeling based on nested computing, in which computations are carried out on a sequence of grids of different resolutions, each embedded in the next. This reduces the total amount of computation by excluding unimportant coastal areas from the calculation. The paper describes the main features of the pipelined Tsunami Modeling Infrastructure, which supports high-speed tsunami modeling on systems with rather limited computational resources. The pipelining scheme distributes bathymetries over the available computational resources and synchronizes the processing of each area by buffering the boundaries between areas. We also describe how this scheme is adapted to cloud-based computation, yielding a flexible, reconfigurable computational scheme with a variable set of modeling zones. In contrast to existing nested algorithms, the proposed approach allows bathymetry grids produced by different developers, which typically have non-proportional grid steps and differences in bottom relief, to be integrated into a single modeling scheme. Higher modeling accuracy is achieved because boundary values are transferred between areas at every computational step.

The results presented confirm that all of the high-speed computations can be run concurrently at the laboratory level using distributed computing in combination with CUDA accelerators. We evaluated several variants of the computational scheme and of resource usage; Table 1 shows the test results. Experiments were run on a PC workstation with the following characteristics: Intel(R) Core(TM) i7-7700K CPU @ 4.20 GHz (8 logical CPUs), 64 GB memory, two GeForce GTX 1050 Ti GPUs, and an SSD.
Table 1. Testing results.

No.  Architecture                     Time (min)    Speedup
1    Sequential, SSD                  T1 = 283.3    T1/T1 = 1
2    Pipelined, SSD                   T2 = 151.0    T1/T2 = 1.87
3    Sequential, SSD, 2 CUDA boards   T3 = 60.0     T1/T3 = 4.72
4    Pipelined, SSD, 2 CUDA boards    T4 = 41.0     T1/T4 = 6.90; T3/T4 = 1.46
5    Pipelined, SSD, 1 CUDA board     T5 = 104.6    T1/T5 = 2.70; T2/T5 = 1.44
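To make the boundary-buffering scheme described above concrete, the following is a minimal Python sketch (not the authors' implementation) of a two-stage pipeline: a coarse-grid worker advances the outer area and, after every time step, pushes the values covering the nested sub-area into a bounded queue; a fine-grid worker consumes those buffered boundaries and advances the embedded grid concurrently. The solver is replaced by a toy diffusion-like update, and all grid sizes, names, and the nearest-neighbour interpolation are assumptions made for illustration.

import queue
import threading

import numpy as np

STEPS = 100
REFINE = 2   # fine grid has twice the resolution of the coarse grid

def step(h, dt=0.1):
    # Toy diffusion-like update standing in for the shallow-water solver;
    # only the pipelining structure matters for this sketch.
    h[1:-1, 1:-1] += dt * (h[2:, 1:-1] + h[:-2, 1:-1]
                           + h[1:-1, 2:] + h[1:-1, :-2]
                           - 4.0 * h[1:-1, 1:-1])
    return h

def coarse_worker(out_q):
    h = np.zeros((64, 64))
    h[32, 32] = 1.0                      # point source in the coarse area
    for _ in range(STEPS):
        h = step(h)
        # Buffer the sub-area covering the nested grid after every step,
        # so the fine grid can consume it while the coarse grid runs ahead.
        out_q.put(h[16:48, 16:48].copy())
    out_q.put(None)                      # end-of-stream marker

def fine_worker(in_q):
    h = np.zeros((32 * REFINE, 32 * REFINE))
    while (patch := in_q.get()) is not None:
        # Map the buffered coarse values onto the fine-grid edges
        # (nearest-neighbour here; any interpolation could be used).
        edge = np.kron(patch, np.ones((REFINE, REFINE)))
        h[0, :], h[-1, :] = edge[0, :], edge[-1, :]
        h[:, 0], h[:, -1] = edge[:, 0], edge[:, -1]
        h = step(h)

boundary_q = queue.Queue(maxsize=4)      # bounded buffer = the pipeline stage
producer = threading.Thread(target=coarse_worker, args=(boundary_q,))
producer.start()
fine_worker(boundary_q)
producer.join()

Because boundaries are exchanged at every step through a bounded buffer, the two workers overlap in time rather than running one grid to completion before the other, which is the source of the pipelined speedups in Table 1.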

The testing results demonstrate good performance and confirm that the scheme can be extended to cloud-based computing via a distributed multicomputer system with shared resources. The bathymetry of each area can be managed independently using the original Bathymetry and Tsunami Source Data Editor [1]. This makes it possible to efficiently support calculations that investigate the influence of underwater objects on tsunami wave parameters in designated areas.
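The distributed variant could be organized along the same lines, with each modeling zone running as an independent process (or cloud node) and boundary values exchanged over inter-process channels. The sketch below is a hypothetical Python illustration of that arrangement using two 1-D zones connected by a pipe; the zone names, sizes, and toy update are assumptions, not the infrastructure described in the paper.

import multiprocessing as mp

import numpy as np

def zone(name, n, recv, send, steps=200):
    # One modeling zone; in a cloud deployment each zone would run on its
    # own node, with the pipes replaced by network channels.
    h = np.zeros(n)
    if recv is None:
        h[n // 2] = 1.0                  # tsunami source in the first zone
    for _ in range(steps):
        if recv is not None:
            h[0] = recv.recv()           # boundary value from upstream zone
        h[1:-1] += 0.1 * (h[2:] + h[:-2] - 2.0 * h[1:-1])  # toy 1-D update
        if send is not None:
            send.send(h[-2])             # pass our boundary downstream
    print(name, "finished; max amplitude:", h.max())

if __name__ == "__main__":
    upstream_end, downstream_end = mp.Pipe()
    zones = [
        mp.Process(target=zone, args=("coarse-zone", 128, None, upstream_end)),
        mp.Process(target=zone, args=("fine-zone", 256, downstream_end, None)),
    ]
    for p in zones:
        p.start()
    for p in zones:
        p.join()

Because each zone holds only its own bathymetry and exchanges nothing but boundary values, zones can be added, removed, or re-hosted independently, which is what makes the variable set of modeling zones practical in a cloud setting.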

[1] Hayashi, K., Vazhenin, A., and Marchuk, An. (2016). "Source Data and Bathymetry Editor in Tsunami Modeling Environment", Frontiers in Artificial Intelligence and Applications, Vol. 286, pp. 235-245.