Japan Geoscience Union Meeting 2023

Presentation Information

[E] Online Poster Presentation

Session symbol P (Space and Planetary Sciences) » P-EM Solar-Terrestrial Sciences, Space Electromagnetism & Space Environment

[P-EM15] Development of a research infrastructure for the study of solar-terrestrial coupling processes

Fri. May 26, 2023, 15:30 - 17:00, Online Poster Zoom Room (4) (Online Poster)

Conveners: Mamoru Yamamoto (Research Institute for Sustainable Humanosphere, Kyoto University), Yasunobu Ogawa (National Institute of Polar Research), Satonori Nozawa (Institute for Space-Earth Environmental Research, Nagoya University), Akimasa Yoshikawa (Department of Earth and Planetary Sciences, Faculty of Science, Kyushu University)

On-site poster presentation date and time (May 26, 2023, 17:15 - 18:45)

15:30 - 17:00

[PEM15-P05] Spatiotemporal sequence prediction of global ionospheric total electron content maps using deep-learning recurrent neural networks

*Peng Liu1, Tatsuhiro Yokoyama1, Mamoru Yamamoto1 (1. Research Institute for Sustainable Humanosphere, Kyoto University)

Keywords: Total electron content map, Deep learning, Spatiotemporal sequence prediction

The global ionospheric Total Electron Content (TEC) map, which indicates the total number of electrons along the signal path, is an important physical quantity for the Earth's ionosphere. Since 1995, 132,960 global TEC maps have been provided by the Centre for Orbit Determination in Europe (CODE), derived from the signal delays between globally distributed ground receivers and satellites.

Rapidly developing machine learning technologies are now leveraged to predict upcoming frames of temporal and spatiotemporal sequences. A Recurrent Neural Network (RNN) feeds the current output and network state back as input for the prediction at the next time step, so RNNs and their improved variants (for example, bidirectional multilayer RNNs) have been used for temporal sequence prediction, including global TEC map prediction. However, these models learn only the temporal trend of the sequential input data without considering the spatial association. To solve this problem, the first spatiotemporal sequence prediction model, ConvLSTM, was proposed in 2015; it uses convolution operations to learn additional spatial distribution features. After several years of development, more advanced spatiotemporal sequence prediction models such as MIM, E3D-LSTM, and PredRNN have been proposed, but they have not yet been applied to global TEC map prediction. In this work, the performance of these state-of-the-art models under different circumstances is evaluated for the first time.
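The sketch below illustrates the core idea behind ConvLSTM mentioned above: the fully connected state transitions of a standard LSTM are replaced by convolutions, so the spatial layout of each TEC map is preserved in the hidden and cell states. This is a minimal PyTorch illustration, not the implementation used in this work; the class name ConvLSTMCell, the kernel size, the padding scheme, and the 71 x 73 map grid are assumptions.

import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_channels, hidden_channels, kernel_size=3):
        super().__init__()
        padding = kernel_size // 2  # "same" padding keeps the spatial size of the TEC map
        # A single convolution produces the input, forget, output, and candidate gates.
        self.gates = nn.Conv2d(in_channels + hidden_channels,
                               4 * hidden_channels,
                               kernel_size, padding=padding)

    def forward(self, x, state):
        h, c = state                                  # hidden and cell states, shape (B, C, H, W)
        z = self.gates(torch.cat([x, h], dim=1))      # convolve input and hidden state together
        i, f, o, g = torch.chunk(z, 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        g = torch.tanh(g)
        c = f * c + i * g                             # update cell state
        h = o * torch.tanh(c)                         # new hidden state
        return h, (h, c)

# One recurrent step on a batch of TEC frames (grid size assumed for illustration).
cell = ConvLSTMCell(in_channels=1, hidden_channels=32)
x = torch.zeros(8, 1, 71, 73)                         # batch of 8 single-channel TEC maps
h = c = torch.zeros(8, 32, 71, 73)                    # initial hidden and cell states
h, (h, c) = cell(x, (h, c))

Unrolling such a cell over consecutive frames and feeding its predictions back as inputs is what allows the model to capture spatial distribution features in addition to the temporal trend learned by a plain RNN.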

If the global TEC maps provided by CODE are grouped into mutually exclusive sequences with a frame interval of 2 hours and a sequence length of 5 days, only 2,216 spatiotemporal sequences are obtained. This quantity is not sufficient to train a spatiotemporal sequence prediction model. In this work, we use sequence number augmentation to overcome this shortage: the head of the 5-day sampling window slides forward to the next frame each time a sequence is extracted. With this method, the number of spatiotemporal sequences is increased to 66,480, which makes full use of the sequence data.
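As an illustration of the sliding-window augmentation described above, the following NumPy sketch extracts sequences by moving the window head forward by a configurable stride: setting the stride equal to the sequence length reproduces the mutually exclusive grouping, while a small stride yields the augmented set. Function and variable names, the dummy archive size, and the grid dimensions are assumptions for illustration only, not the actual preprocessing pipeline.

import numpy as np

def extract_sequences(tec_maps, seq_len=60, stride=1):
    # tec_maps: array of shape (n_frames, height, width).
    # seq_len = 60 frames corresponds to 5 days at a 2-hour cadence.
    # stride = seq_len gives non-overlapping sequences; a smaller stride
    # slides the window head forward and augments the sequence count.
    n_frames = tec_maps.shape[0]
    starts = range(0, n_frames - seq_len + 1, stride)
    return np.stack([tec_maps[s:s + seq_len] for s in starts])

# Small dummy archive (720 frames = 60 days) just to contrast the two modes.
dummy = np.zeros((720, 71, 73), dtype=np.float32)   # 71 x 73 is an assumed map grid
exclusive = extract_sequences(dummy, stride=60)     # 12 non-overlapping sequences
augmented = extract_sequences(dummy, stride=1)      # 661 overlapping sequences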