Japan Geoscience Union Meeting 2021

Presentation information

[E] Poster presentation

Session symbol P (Space and Planetary Sciences) » P-PS Planetary Sciences

[P-PS02] Recent advances of Venus science and coming decades

Thu. June 3, 2021, 17:15–18:30, Ch.01

Conveners: Takehiko Satoh (Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency), Thomas Widemann (Observatoire de Paris), Kevin McGouldrick (University of Colorado Boulder), Hideo Sagawa (Kyoto Sangyo University)

17:15–18:30

[PPS02-P05] Topographic feature extraction from Akatsuki/LIR image using Conditional Generative Adversarial Network (CGAN) and Deep learning

*Masataka Imai1, Toru Kouyama1, Makoto Taguchi2, Hiroki Ando3, Masahiro Takagi3 (1. Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology; 2. College of Science, Rikkyo University; 3. Faculty of Science, Kyoto Sangyo University)

Keywords: Venus, Akatsuki, deep learning, mountain waves

Venus has a thick global cloud layer at altitudes of 40–70 km, and its atmosphere rotates from east to west at over 100 m s-1. This mysteriously fast wind is called the super-rotation. Thermal tides are global-scale atmospheric waves excited in the cloud layer by solar heating and are regarded as one of the mechanisms maintaining the super-rotation. LIR (the Longwave Infrared Camera) onboard the Akatsuki Venus orbiter observes 10 μm thermal emission from the cloud top at ~70 km altitude and has successfully retrieved the temperature perturbations associated with the diurnal and semidiurnal tides. However, zonal and meridional wind measurements covering all local times have not yet been achieved, and such measurements are critical for evaluating the angular momentum transport by the thermal tides.

LIR mainly observes thermal perturbations due to gravity waves generated by surface topography. Although LIR can also capture mesoscale features migrating with the super-rotation, using these features for cloud tracking has been challenging because of the low signal-to-noise (S/N) ratio. To distinguish such minor features from the topographic features, we conducted machine-learning-based feature extraction. We used a Conditional Generative Adversarial Network (CGAN), a model widely used in image generation whose architecture comprises two neural networks: a generator and a discriminator. The generator produces synthetic instances from the inputs, and the discriminator evaluates those instances, which drives the learning.

In this study, we first tested the reconstruction of high-pass-filtered LIR images. In the training phase, the CGAN takes as input a set of image pairs: a high-pass-filtered single LIR image (a low-S/N image in which minor features are hard to interpret) and a corresponding high-pass-filtered image in which 30 sequential images were stacked in the geographic coordinate system. The CGAN then tries to 'learn' to recover the degraded image by minimizing the difference between the recovered image and the non-degraded image. In our first trial, we reduced the spatial resolution from 0.25° to 1° in longitude and latitude to accelerate the learning and succeeded in reconstructing feature-enhanced images (Figure). Based on this result, we can distinguish the major features in the original image. Our next step is to stack the ground-truth images in a coordinate system that rotates with the background super-rotation. In this presentation, we will show the results of CGAN-based temperature perturbation extraction from LIR images and discuss the future possibility of using LIR images for cloud tracking.
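The construction of such training pairs can be sketched as follows. This is a minimal NumPy illustration, not the actual LIR pipeline: the boxcar filter width, the noise level, and the synthetic "topographic" pattern are all assumptions. It shows how high-pass filtering isolates small-scale perturbations, and how stacking N geographically aligned frames of a stationary pattern suppresses random noise by roughly a factor of √N, yielding the high-S/N target image.

```python
import numpy as np

def boxcar_smooth(img, box=15):
    """Separable boxcar mean, used here as the low-frequency background."""
    k = np.ones(box) / box
    sm = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, sm)

def high_pass(img, box=15):
    """High-pass filter: subtract the smoothed background.
    (The kernel size is an assumption; the abstract does not specify it.)"""
    return img - boxcar_smooth(img, box)

def stack_frames(frames):
    """Average frames aligned in the geographic coordinate system:
    stationary topographic features survive, random noise averages out."""
    return np.mean(frames, axis=0)

rng = np.random.default_rng(0)
lon = np.linspace(-np.pi, np.pi, 90)   # coarse 1-degree-like grid
lat = np.linspace(-np.pi, np.pi, 90)
# Hypothetical stationary "topographic" wave pattern
signal = 0.5 * np.sin(4 * lon)[None, :] * np.cos(2 * lat)[:, None]

# 30 sequential low-S/N snapshots of the same stationary pattern
frames = np.stack([signal + rng.normal(0.0, 0.5, signal.shape)
                   for _ in range(30)])

single_hp = high_pass(frames[0])               # low-S/N network input
stacked_hp = high_pass(stack_frames(frames))   # high-S/N target image

noise_single = np.std(single_hp - high_pass(signal))
noise_stacked = np.std(stacked_hp - high_pass(signal))
print(noise_single, noise_stacked)  # stacking reduces noise by ~sqrt(30)
```

In the real pipeline, the CGAN generator would map `single_hp`-like inputs toward `stacked_hp`-like targets, while the discriminator scores how plausible the reconstruction is.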