Japan Geoscience Union Meeting 2019

Presentation information

[J] Poster

P (Space and Planetary Sciences) » P-PS Planetary Sciences

[P-PS08] Lunar science and exploration

Thu. May 30, 2019 3:30 PM - 5:00 PM Poster Hall (International Exhibition Hall 8, Makuhari Messe)

convener: Hiroshi Nagaoka (Japan Aerospace Exploration Agency), Masahiro Kayama (Department of Earth and Planetary Material Sciences, Faculty of Science, Tohoku University), Masaki N Nishino (Japan Aerospace Exploration Agency, Institute of Space and Astronautical Science), Tomokatsu Morota (Graduate School of Environmental Studies, Nagoya University)

[PPS08-P05] Lunar surface feature reconstruction by deep learning using synthetic data

*Koichi Tsuru1, Toru Kouyama1, Ryosuke Nakamura1 (1.National Institute of Advanced Industrial Science and Technology)

Keywords: Moon, Feature Detection, Deep Learning, SELENE

Various machine learning methods have been proposed to detect lunar surface features, such as craters and boulders, from satellite imagery. However, many of them require enormous manual effort to create a training data set, even to solve a single task.

We have therefore focused on algorithms based on Generative Adversarial Networks (GANs), which enable highly accurate image generation even from a small training set. In particular, we examined Cycle-Consistent Adversarial Networks (CycleGAN), which can learn a translation relationship between two image groups, such as a group of lunar surface images and a group of crater maps, for crater detection on the lunar surface. CycleGAN does not require paired images; it learns the relationship between the two groups in an unpaired training manner, so the images in one of the two groups can serve as simulated label data.
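As a rough, self-contained illustration of the unpaired cycle-consistency idea, the sketch below uses toy identity-like mappings `G` (image to crater map) and `F` (crater map to image) as stand-ins for the learned generators; these are assumptions for illustration, not the actual networks.

```python
import numpy as np

# Toy stand-ins for the two CycleGAN generators; in practice both are
# convolutional networks trained adversarially on *unpaired* groups of
# lunar surface images and crater maps.
def G(image):
    # image domain -> crater-map domain (simple thresholding stand-in)
    return (image > 0.5).astype(float)

def F(label):
    # crater-map domain -> image domain (simple affine stand-in)
    return label * 0.5 + 0.25

def cycle_consistency_loss(images, labels):
    """L1 cycle losses: image -> G -> F -> image and label -> F -> G -> label.

    No paired (image, label) examples are needed: each term uses samples
    from only one of the two groups.
    """
    forward = np.abs(F(G(images)) - images).mean()
    backward = np.abs(G(F(labels)) - labels).mean()
    return forward + backward
```

The loss vanishes only when each generator inverts the other on the sampled data, which is the training signal CycleGAN exploits in place of paired supervision.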

Initially, we tried to translate satellite imagery into a labeled image (i.e., a crater map), where the labeled image is produced by simulation based on a statistical crater size-frequency distribution. Although this automatic generation of label data could drastically reduce annotation cost, we found that training was neither stable nor precise, because the textures of the satellite imagery and the simple labeled image were too different. We therefore concluded that, to improve performance, it is necessary to train a network with multiple feature labels (i.e., lunar surface elevation, albedo, and shadow maps, and crater and boulder distributions) that characterize the appearance of the observed lunar surface.
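A minimal sketch of generating such a simulated crater label map is shown below; the power-law slope, size range, and crater count are illustrative assumptions, not the values used in this work.

```python
import numpy as np

def simulate_crater_map(size=256, n_craters=40, d_min=4.0, d_max=40.0,
                        slope=2.0, rng=None):
    """Render a binary crater label map whose diameters follow a
    power-law size-frequency distribution N(>D) ~ D**(-slope)."""
    rng = np.random.default_rng(rng)
    # Inverse-transform sampling of diameters from the truncated power law.
    u = rng.uniform(size=n_craters)
    d = (d_min**-slope + u * (d_max**-slope - d_min**-slope)) ** (-1.0 / slope)
    label = np.zeros((size, size), dtype=np.uint8)
    yy, xx = np.mgrid[0:size, 0:size]
    for diameter in d:
        cx, cy = rng.uniform(0, size, 2)       # random crater center
        label[(xx - cx)**2 + (yy - cy)**2 <= (diameter / 2)**2] = 1
    return label
```

Because the map is rendered from a statistical distribution rather than annotated by hand, arbitrarily many label images can be produced at no annotation cost.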

In this work, we adopted a two-step deep learning framework with fine-tuned models trained by 1) generating feature labels from a modeled surface image, which is itself rendered from given feature labels, and synthesizing those labels into a lunar surface image that is compared with the original to check network performance (at the same time, the generated feature labels are compared with the given ones); and then 2) generating feature labels from actual observed data and again synthesizing them into a lunar surface image to be compared with the actual image. A key advantage of using modeled data and labels for training is that it becomes easier to cover the complicated and enormous range of possible observation conditions required for analyzing actual observation data. In addition, highly precise inference is possible through simulation even when the existing observation data of a target object cover only a limited range of observation conditions.
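The data flow of the two steps can be sketched schematically as follows; `reconstruct` and `synthesize` below are identity-like stubs standing in for the two generator networks, and the loss terms are illustrative L1 comparisons, not the actual architecture or objectives.

```python
import numpy as np

N_FEATURES = 5  # DEM, albedo, shadow, crater, boulder maps

def reconstruct(image):
    # Stub generator: image -> stack of feature labels.
    return np.repeat(image[None, ...], N_FEATURES, axis=0)

def synthesize(features):
    # Stub generator: stack of feature labels -> image.
    return features.mean(axis=0)

def step1_pretrain_loss(modeled_image, given_features):
    """Step 1: on modeled data, supervise both the recovered feature
    labels (against the given ones) and the resynthesized image."""
    pred_features = reconstruct(modeled_image)
    resynth = synthesize(pred_features)
    feature_loss = np.abs(pred_features - given_features).mean()
    image_loss = np.abs(resynth - modeled_image).mean()
    return feature_loss + image_loss

def step2_finetune_loss(observed_image):
    """Step 2: on real imagery only the cycle back to the observed image
    can be checked, since true feature labels are unavailable."""
    resynth = synthesize(reconstruct(observed_image))
    return np.abs(resynth - observed_image).mean()
```

Step 1 exploits the fact that modeled data come with ground-truth labels for free, while step 2 adapts the networks to real imagery using only the image-level comparison.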

In the proposed framework, we employed two generative networks: one that reconstructs terrain features from satellite imagery and one that synthesizes satellite imagery from terrain features. Because cycle consistency is enforced during training, measured by the similarity between the satellite image resynthesized from the "reconstructed" features and the original image, actual terrain features are not required in the training phase.

We prepared a DEM, an albedo map, a crater map, a boulder map, and a shadow map as the features for reconstructing a target region. In this presentation, we show the results of feature reconstruction from actual observed SELENE Terrain Camera imagery.