Japan Geoscience Union Meeting 2016

Presentation information

International Session (Oral)

Symbol A (Atmospheric and Hydrospheric Sciences) » A-AS Atmospheric Sciences, Meteorology & Atmospheric Environment

[A-AS02] High performance computing of next generation weather, climate, and environmental sciences using K

Sun. May 22, 2016 3:30 PM - 5:00 PM 302 (3F)

Convener:*Masaki Satoh(Atmosphere and Ocean Research Institute, The University of Tokyo), Masahide Kimoto(Atmosphere and Ocean Research Institute, The University of Tokyo), Kazuo Saito(Forecast Research Department, Meteorological Research Institute), Hiromu Seko(Meteorological Research Institute), Takemasa Miyoshi(RIKEN Advanced Institute for Computational Science), Tetsuro Tamura(Tokyo Institute of Technology), Hiroshi Niino(Dynamic Marine Meteorology Group, Department of Physical Oceanography, Atmosphere and Ocean Research Institute, The University of Tokyo), Masayuki Takigawa(Japan Agency for Marine-Earth Science and Technology), Hirofumi Tomita(AICS, RIKEN), Chihiro Kodama(Japan Agency for Marine-Earth Science and Technology), Chair:Takemasa Miyoshi(RIKEN Advanced Institute for Computational Science)

4:15 PM - 4:30 PM

[AAS02-10] Issues regarding the high-performance computing associated with the rapid-update-cycle ensemble data assimilation

*Guo-Yuan Lien1, Takemasa Miyoshi1, Seiya Nishizawa1, Ryuji Yoshida1, Hisashi Yashiro1, Takumi Honda1, Hirofumi Tomita1 (1.RIKEN Advanced Institute for Computational Science)

Keywords:LETKF, SCALE, High-performance computing

We have developed the SCALE-LETKF system, which combines the Scalable Computing for Advanced Library and Environment (SCALE)-LES model with the Local Ensemble Transform Kalman Filter (LETKF), aiming to conduct ensemble data assimilation at very high resolution with a rapid update cycle. The system has been used in several studies, including the assimilation of phased array weather radar (PAWR) data and Himawari-8 satellite radiance data. Although the peak computational speed of the K computer is high enough to run very large problems, the early version of the SCALE-LETKF system suffered from several issues that caused poor computational performance and low parallelization efficiency, or even prevented us from running large problems at all. These issues include memory overflow with huge observation files, heavy disk I/O and inter-process communication, and load imbalance among processes. Some of these issues have been solved by improving the code design, and the others are being investigated. We will discuss the issues and solutions up to the time of the presentation.