Japan Geoscience Union Meeting 2025

Presentation Information

[E] Oral Presentation

Session Symbol A (Atmospheric and Hydrospheric Sciences) » A-AS Atmospheric Sciences, Meteorology, and Atmospheric Environment

[A-AS05] Weather, Climate, and Environmental Science Advanced by High-Performance Computing

Wed. May 28, 2025, 15:30–17:00, Special Venue (5), Exhibition Hall (Makuhari Messe International Exhibition Halls 7 & 8)

Conveners: Hisashi Yashiro (National Institute for Environmental Studies), Masuo Nakano (Japan Agency for Marine-Earth Science and Technology), Tomoki Miyakawa (Atmosphere and Ocean Research Institute, The University of Tokyo), Takuya Kawabata (Meteorological Research Institute); Chairperson: Hisashi Yashiro (National Institute for Environmental Studies)

15:45–16:00

[AAS05-08] Evaluating FourCastNet with High-Resolution Data, Varied Internal Resolutions, and Data Conversion

*Aiyu Kamijo1, Tomoki Miyakawa1, Hisashi Yashiro2 (1. Atmosphere and Ocean Research Institute, The University of Tokyo; 2. National Institute for Environmental Studies)


Keywords: machine learning, numerical weather prediction, weather forecasting, SFNO, AI, HPC

1. Introduction
We focus on the machine-learning-based FourCastNet (FCN) model [2], which internally uses the Spherical Fourier Neural Operator (SFNO) [1], a neural operator acting in the wavenumber space of spherical harmonic functions. We selected this model because of its demonstrated stability for forecasts spanning at least one year [3] and the availability of libraries and prior studies supporting ensemble execution [4,5]. Most previous studies have trained such models solely on ERA5 data [6]. The present study has two objectives: first, to investigate the accuracy and training cost of the FCN model on higher-resolution data; and second, to examine how a Gaussian transformation improves prediction accuracy for non-Gaussian variables such as precipitation.

2. Experimental Setup
For the ERA5 and NICAM-AMIP [7] data (720×1440 and 1280×2560 horizontal grids, respectively), we prepared 51 atmospheric variables as two-dimensional fields transformed into σ–p coordinates, covering the 30 years from 1979 to 2008. The first 26 years were used for training, the next three years for testing, and the final year for validation. Both training and inference were performed on The University of Tokyo's Wisteria/BDEC-01 supercomputer system using up to 64 NVIDIA A100 (40 GB) GPUs.
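The chronological split above can be sketched as follows (the exact year boundaries are inferred from the stated 26/3/1-year split over 1979–2008):

```python
# Chronological train/test/validation split of the 1979-2008 dataset,
# following the 26/3/1-year division described in the text.
years = list(range(1979, 2009))   # 30 years of data
train_years = years[:26]          # 1979-2004: training
test_years = years[26:29]         # 2005-2007: testing
val_years = years[29:]            # 2008: validation
print(len(train_years), len(test_years), len(val_years))  # 26 3 1
```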
One of the hyperparameters, the scale factor (SC), determines the size of the wavenumber space used as the model's internal resolution. The internal resolution is obtained by dividing the data's grid resolution in the latitude and longitude directions by SC. Three experiments were conducted: ERA5-SC3 (ERA5 data with SC = 3), ERA5-SC5 (ERA5 data with SC = 5), and NICAM-AMIP-SC5 (NICAM-AMIP experimental data with SC = 5). Comparing ERA5-SC3 with ERA5-SC5 (the same dataset with different internal resolutions) allows us to assess the impact of internal resolution on accuracy and computational cost. In contrast, comparing ERA5-SC3 with NICAM-AMIP-SC5 (different datasets with nearly identical internal resolutions) highlights the effect of the dataset itself on the required computational resources.
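The internal resolutions implied by the scale factor can be computed directly from the grid sizes given above (the helper function name is illustrative):

```python
# Internal resolution = data grid divided by the scale factor SC
# in each direction, as described in the text.
def internal_grid(nlat, nlon, sc):
    return nlat // sc, nlon // sc

print(internal_grid(720, 1440, 3))    # ERA5-SC3:       (240, 480)
print(internal_grid(720, 1440, 5))    # ERA5-SC5:       (144, 288)
print(internal_grid(1280, 2560, 5))   # NICAM-AMIP-SC5: (256, 512)
```

Note that ERA5-SC3 (240×480) and NICAM-AMIP-SC5 (256×512) indeed have nearly identical internal resolutions, which is what makes their comparison meaningful.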
We also prepared an experiment, ERA5-SC3-N, in which the training data were transformed with a Gaussian transformation prior to training and inference, so that the resulting accuracies could be compared.
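The abstract does not specify which Gaussian transformation was used; one common choice for strongly skewed fields such as precipitation is a rank-based inverse normal transform, sketched below with Python's standard library (the function name and details are illustrative, not the authors' implementation):

```python
import statistics

def gaussianize(values):
    """Rank-based inverse normal transform: map each value to the
    standard-normal quantile of its empirical rank. Ties (e.g. many
    zero-precipitation points) are broken arbitrarily in this sketch."""
    n = len(values)
    nd = statistics.NormalDist()
    order = sorted(range(n), key=lambda i: values[i])
    out = [0.0] * n
    for rank, i in enumerate(order):
        # (rank + 0.5) / n keeps quantiles strictly inside (0, 1)
        out[i] = nd.inv_cdf((rank + 0.5) / n)
    return out

# Precipitation-like sample: mostly small values plus a heavy tail
print(gaussianize([0.0, 0.1, 5.0, 0.2, 12.0]))
```

The transform is monotone, so the rank order of the original values is preserved while the marginal distribution becomes approximately standard normal.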

3. Results
The number of trainable parameters is approximately 32 million for ERA5-SC3, 20 million for ERA5-SC5, and 34 million for NICAM-AMIP-SC5. Given that the internal resolutions of ERA5-SC3 and NICAM-AMIP-SC5 are nearly the same, these parameter counts are consistent. Moreover, the computational cost of one training epoch is about 16 GPU-hours for both ERA5-SC3 and ERA5-SC5, versus approximately 170 GPU-hours for NICAM-AMIP-SC5.
With ERA5 data, varying the internal resolution does not significantly change the required computational resources. In contrast, NICAM-AMIP data require substantially more resources than the ERA5 experiment with nearly the same internal resolution. Thus, under the conditions of this study, the resolution of the input data, rather than the internal resolution, primarily determines the required computational resources.
In this presentation, we will share the experimental results, including comparisons of model accuracy and the effects of applying a Gaussian transformation to precipitation data.

Acknowledgements:
This research was conducted using the Wisteria/BDEC-01 at the Information Technology Center, The University of Tokyo.

References:
[1] Bonev, B., et al., 2023, arXiv:2306.03838.
[2] Pathak, J., et al., 2022, arXiv:2202.11214.
[3] Watt-Meyer, O., et al., 2023, arXiv:2310.02074.
[4] Mahesh, A., et al., 2024, arXiv:2408.03100.
[5] Mahesh, A., et al., 2024, arXiv:2408.01581.
[6] Hersbach, H., et al., 2020, Q. J. R. Meteorol. Soc., 146, 1999–2049.
[7] Kodama, C., et al., 2015, J. Meteor. Soc. Japan, 93, 393–424.