4:45 PM - 5:00 PM
[HDS10-29] Trans-boundary realization of the pipelined nested-grid algorithm for distributed tsunami modeling
Keywords: Tsunami Modeling, Nested Grids, Pipeline Computing, Cloud Computing
Tsunami modeling is a computationally heavy problem that requires combining several complementary approaches. This research focuses on designing a high-speed tsunami modeling scheme based on nested computing, in which calculations are carried out on a sequence of grids of different resolutions, each embedded into the next. This reduces the total amount of computation by excluding unimportant coastal areas from the calculation process. The paper describes the main features of the pipelined Tsunami Modeling Infrastructure, which supports high-speed tsunami modeling on systems with rather limited computational resources. The pipelining scheme is realized by distributing bathymetries over the available computational resources and synchronizing the processing of each area through buffering of the boundaries between areas, as sketched below. The paper also describes adapting this scheme to cloud-based computation, which yields a flexible and reconfigurable computational scheme with a variable set of modeling zones. In comparison with existing nested algorithms, the proposed approach supports integrating into a single modeling scheme bathymetry grids produced by different developers, which usually have non-proportional grid steps and differences in bottom relief. Higher modeling accuracy is achieved because boundaries are transferred between areas at every computational step. The results presented confirm that all high-speed computations can be carried out concurrently at the laboratory level using distributed computing in combination with CUDA accelerators. We evaluated several variants of computational schemes and resource usage; Table 1 shows the test results. The experiments were run on a PC workstation with the following characteristics: Intel(R) Core(TM) i7-7700K CPU @ 4.20 GHz (8 CPUs); 64 GB memory; 2 GeForce GTX 1050 Ti GPUs; SSD storage.
Table 1. Testing results.

nn  Architecture                      Time (min)    Speedup
1   Sequential, SSD                   T1 = 283.3    T1/T1 = 1
2   Pipelined, SSD                    T2 = 151.0    T1/T2 = 1.87
3   Sequential, SSD, 2 CUDA boards    T3 = 60.0     T1/T3 = 4.72
4   Pipelined, SSD, 2 CUDA boards     T4 = 41.0     T1/T4 = 6.90; T3/T4 = 1.46
5   Pipelined, SSD, 1 CUDA board      T5 = 104.6    T1/T5 = 2.70; T2/T5 = 1.44
The test results demonstrate good performance and confirm the possibility of extending the scheme to cloud-based computing via a distributed multicomputer system with shared resources. The bathymetry of each area can be managed independently using the original Bathymetry and Tsunami Source Data Editor [1]. This makes it possible to efficiently support calculations investigating the influence of underwater objects on tsunami wave parameters in designated areas.
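The boundary-buffering pipeline described above can be outlined with a short code sketch. The following Python fragment is a minimal, hypothetical illustration rather than the authors' implementation: each nested area runs as its own worker (standing in for a CUDA device or a cloud node), receives the parent area's boundary from a bounded buffer at every time step, advances one step, and buffers its own boundary for the finer child area. All names (Area, run_area, the placeholder solver, N_STEPS, GRID_SHAPE) and the identical per-area grid sizes are simplifying assumptions made only for illustration.

# Minimal sketch of a pipelined nested-grid scheme with boundary buffering.
# Illustrative assumption only; not the authors' actual implementation.
import threading
import queue
import numpy as np

N_STEPS = 100            # number of modeling time steps (assumed)
GRID_SHAPE = (200, 200)  # per-area grid size (assumed, identical for simplicity)

class Area:
    """One bathymetry area in the nested-grid hierarchy."""
    def __init__(self, name, resolution):
        self.name = name
        self.resolution = resolution
        # Wave-height field; a real model would also keep velocities, depths, etc.
        self.eta = np.zeros(GRID_SHAPE)

    def step(self, incoming_boundary):
        """Advance one time step using the boundary received from the coarser
        (parent) area; return the boundary to pass to the finer (child) area."""
        if incoming_boundary is not None:
            self.eta[0, :] = incoming_boundary   # impose the parent boundary
        # Placeholder for the shallow-water solver on this grid
        # (in the real system this work is offloaded to a CUDA accelerator).
        self.eta = 0.25 * (np.roll(self.eta, 1, 0) + np.roll(self.eta, -1, 0)
                           + np.roll(self.eta, 1, 1) + np.roll(self.eta, -1, 1))
        return self.eta[-1, :].copy()            # boundary for the child area

def run_area(area, in_q, out_q):
    """Worker for one area: at every step, consume the parent's boundary,
    compute the step, and buffer the resulting boundary for the child."""
    for _ in range(N_STEPS):
        boundary_in = in_q.get() if in_q is not None else None
        boundary_out = area.step(boundary_in)
        if out_q is not None:
            out_q.put(boundary_out)              # boundary buffer between areas

# Chain of nested areas, from coarse (ocean-wide) to fine (coastal).
areas = [Area("ocean", 1.0), Area("regional", 0.25), Area("coastal", 0.05)]
buffers = [queue.Queue(maxsize=4) for _ in range(len(areas) - 1)]

threads = []
for i, area in enumerate(areas):
    in_q = buffers[i - 1] if i > 0 else None
    out_q = buffers[i] if i < len(buffers) else None
    t = threading.Thread(target=run_area, args=(area, in_q, out_q))
    t.start()
    threads.append(t)

for t in threads:
    t.join()

Because each worker only needs the parent boundary for the current step, the areas process different time steps concurrently, which is the source of the pipeline speedup reported in Table 1; replacing the thread workers with processes on separate machines would correspond to the distributed, cloud-based variant discussed above.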
[1] Hayashi, K., Vazhenin, A., and Marchuk, An. (2016). "Source Data and Bathymetry Editor in Tsunami Modeling Environment", Frontiers in Artificial Intelligence and Applications, Vol. 286, pp. 235-245.