5:15 PM - 6:30 PM
[PPS02-P05] Topographic feature extraction from Akatsuki/LIR image using Conditional Generative Adversarial Network (CGAN) and Deep learning
Keywords: Venus, Akatsuki, Deep learning, Atmospheric Mountain Waves
Venus has a thick global cloud layer in the altitude range of 40–70 km, and its atmosphere rotates from east to west at speeds exceeding 100 m s-1. This mysterious fast wind is called the super-rotation. Thermal tides are global-scale atmospheric waves excited in the cloud layer by solar heating and are regarded as one of the mechanisms maintaining the super-rotation. The LIR (Longwave Infrared Camera) onboard the Akatsuki Venus orbiter observes global thermal emission at 10 μm from the cloud top at ~70 km altitude and has successfully retrieved the temperature perturbations associated with the diurnal and semidiurnal tides. However, zonal and meridional wind measurements covering all local times have not yet been achieved, and they are critical for evaluating the angular momentum transport by thermal tides.
The LIR mainly observes thermal perturbations due to gravity waves generated by surface topography. Although the LIR can also capture mesoscale features migrating with the super-rotation, it has been challenging to use these features for cloud tracking because of the low signal-to-noise (S/N) ratio. To distinguish such minor features from the topographic features, we conducted machine-learning-based feature extraction. We used a Conditional Generative Adversarial Network (CGAN), a model widely used in image generation whose architecture comprises two neural networks, a generator and a discriminator: the generator produces synthetic instances from inputs, and the discriminator evaluates those instances and thereby drives the learning. In this study, we first tested the reconstruction of high-pass-filtered LIR images. In the training phase, the CGAN takes as input a set of image pairs: a high-pass-filtered single LIR image (a low-S/N image in which minor features are hard to interpret) and a corresponding high-pass-filtered image in which 30 sequential images were stacked in the geographic coordinate system. The CGAN then 'learns' to recover the degraded image by minimizing the difference between the recovered image and the non-degraded (stacked) image. In our first trial, we reduced the spatial resolution from 0.25° to 1° in longitude and latitude to accelerate the learning and succeeded in reconstructing feature-enhanced images (Figure). Based on this result, we can distinguish the major features in the original image. Our next step is to stack the ground-truth images in a coordinate system that rotates with the background super-rotation. In this presentation, we will show the results of CGAN-based extraction of temperature perturbations from LIR images and discuss the future possibility of utilizing LIR images for cloud tracking.
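The abstract does not give implementation details, but the construction of training pairs it describes — a high-pass-filtered single low-S/N frame as input versus a stack of 30 high-pass-filtered, geographically co-registered frames as the ground truth — can be sketched as follows. The boxcar filter width, image size, and noise level here are illustrative assumptions, not the study's actual parameters:

```python
import numpy as np

def boxcar_smooth(img, k=11):
    """Separable boxcar (moving-average) smoothing of a 2-D image."""
    kern = np.ones(k) / k
    sm = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="same"), 0, sm)

def high_pass(img, k=11):
    """High-pass filter: subtract the smoothed large-scale background."""
    return img - boxcar_smooth(img, k)

def stacked_target(frames, k=11):
    """Average high-pass-filtered, co-registered frames into a low-noise target."""
    return np.mean([high_pass(f, k) for f in frames], axis=0)

# Toy demonstration: a fixed small-scale pattern (standing in for stationary
# topographic waves) buried in per-frame noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 8 * np.pi, 64)
pattern = 0.3 * np.sin(x)[None, :] * np.sin(x)[:, None]
frames = [pattern + 0.5 * rng.standard_normal((64, 64)) for _ in range(30)]

single = high_pass(frames[0])    # low-S/N input to the network
target = stacked_target(frames)  # higher-S/N ground truth (noise drops ~sqrt(30))
```

Pairs like `(single, target)` would then be fed to a pix2pix-style CGAN, whose generator minimizes a reconstruction loss against the stacked target while the discriminator judges whether a reconstruction looks like a genuine stacked image.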