Japan Geoscience Union Meeting 2019

Presentation information

[J] Poster presentation

Session symbol P (Space and Planetary Sciences) » P-PS Planetary Sciences

[P-PS08] Lunar Science and Exploration

Thu. May 30, 2019 3:30 PM - 5:00 PM Poster Hall (International Exhibition Hall 8, Makuhari Messe)

Conveners: Hiroshi Nagaoka (Japan Aerospace Exploration Agency), Masahiro Kayama (Department of Earth Science, Graduate School of Science, Tohoku University), Masaki Nishino (Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency), Tomokatsu Morota (Graduate School of Environmental Studies, Nagoya University)

[PPS08-P05] Reconstruction of lunar surface features by deep learning using synthetic data

*Koichi Tsuru1, Toru Kouyama1, Ryosuke Nakamura1 (1. National Institute of Advanced Industrial Science and Technology (AIST))

Keywords: Moon, surface feature detection, deep learning, SELENE

Various machine learning methods have been proposed to detect lunar surface features such as craters and boulders from satellite imagery. However, many of them require a huge amount of manual effort to create a training data set, even for a single task.

Hence, we have focused on algorithms based on Generative Adversarial Networks (GANs), which enable highly accurate image generation even from a small training set. In particular, we examined Cycle-Consistent Adversarial Networks (CycleGAN), which can learn the translation between two image groups, such as a group of lunar surface images and a group of crater maps, for crater detection on the lunar surface. CycleGAN does not require paired images: it learns the relationship between the two groups in an unpaired manner, so the images in one of the two groups can be simulated label data.

Initially, we tried to translate satellite imagery into labeled images (i.e., crater maps), where each labeled image was produced by a simulation based on a statistical crater size-frequency distribution. Although automatic generation of the label data drastically reduced the annotation cost, we found that training was unstable and imprecise because the textures of the satellite imagery and the simple labeled images were too different. We therefore concluded that, to improve performance, the network must be trained with multiple feature labels (i.e., lunar surface elevation, albedo, and shadow maps, and crater and boulder distributions) that characterize the appearance of the observed lunar surface.
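As an illustration of this kind of label simulation, the following is a minimal sketch, not the authors' actual code, of generating a binary crater-map label whose diameters follow a truncated power-law size-frequency distribution; the function name, map size, and parameter values are all hypothetical:

```python
import numpy as np

def simulate_crater_map(size=256, n_craters=40, d_min=4.0, d_max=40.0,
                        slope=2.0, seed=0):
    """Render a binary crater-rim label map whose diameters follow a
    truncated power-law size-frequency distribution, N(>D) ~ D^-slope."""
    rng = np.random.default_rng(seed)
    # Inverse-transform sampling of diameters from the truncated power law.
    u = rng.random(n_craters)
    diam = (d_min**-slope + u * (d_max**-slope - d_min**-slope)) ** (-1.0 / slope)
    label = np.zeros((size, size), dtype=np.uint8)
    yy, xx = np.mgrid[0:size, 0:size]
    for d in diam:
        cx, cy = rng.uniform(0, size, 2)  # uniformly placed crater center
        dist = np.hypot(xx - cx, yy - cy)
        label[np.abs(dist - d / 2.0) < 1.0] = 1  # ~1-pixel-wide rim annulus
    return label
```

A real pipeline would render richer morphology (rim height, floor depth, degradation states), but even this toy version shows how label maps can be produced in unlimited quantity at zero annotation cost.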

In this work, we adopted a two-step deep learning framework with fine-tuned models. In the first step, feature labels are generated from a modeled surface image that is itself rendered from given feature labels; the generated labels are then synthesized back into a lunar surface image, which is compared with the original to check network performance (the generated feature labels are also compared with the given ones). In the second step, feature labels are generated from actual observed data and again synthesized into a lunar surface image, which is compared with the actual image. A major advantage of using modeled data and labels as training data is that it becomes easier to cover the complicated and enormous range of possible observation conditions needed to analyze actual observations. In addition, simulation enables highly precise inference even when the existing observations of a target object cover only a limited range of conditions.

In the proposed framework, we employed two generative networks: one reconstructs terrain features from satellite imagery, and the other synthesizes satellite imagery from terrain features. By training with a cycle-consistency loss, measured as the similarity between the original satellite image and the image resynthesized from the "reconstructed" features, actual terrain features are not required in the training phase.
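The cycle-consistency idea can be sketched with toy stand-ins for the two generators. In this hypothetical illustration (not the actual networks), linear maps play the roles of the feature-reconstruction and image-synthesis generators; note that the loss is computed purely from the image round trip, with no ground-truth feature labels involved:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-ins for the two generators (hypothetical linear "networks"):
# g_feat maps an image patch to features, g_img maps features back to an image.
W_feat = rng.normal(scale=0.1, size=(64, 64))  # image -> features
W_img = np.linalg.pinv(W_feat)                 # features -> image (near-inverse)

def g_feat(x):
    return W_feat @ x

def g_img(f):
    return W_img @ f

def cycle_consistency_loss(x):
    """L1 distance between an image and its image->feature->image round trip.
    Evaluating this loss needs no ground-truth feature labels."""
    x_cycled = g_img(g_feat(x))
    return np.mean(np.abs(x_cycled - x))

x = rng.normal(size=64)           # a flattened 8x8 "satellite image" patch
loss = cycle_consistency_loss(x)  # near zero, since W_img approximates W_feat's inverse
```

In the actual framework the generators are deep networks and the loss is combined with adversarial terms, but the structural point is the same: supervision comes from reconstructing the input image, not from labeled terrain.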

We prepared a DEM, an albedo map, a crater map, a boulder map, and a shadow map as the features for reconstructing a target region. In this presentation, we show the results of feature reconstruction from actual SELENE Terrain Camera imagery.
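Some of these feature maps are physically linked; for instance, a shading/shadow map can be approximated from a DEM given a sun direction. The sketch below is a hypothetical hillshade-style computation under a simple Lambertian assumption, not the rendering actually used in this work:

```python
import numpy as np

def shadow_map_from_dem(dem, sun_azimuth_deg=315.0, sun_elevation_deg=30.0):
    """Hillshade-style shading: dot product of DEM surface normals with the
    sun direction, clipped to [0, 1]. A crude proxy for a shadow map."""
    az = np.radians(sun_azimuth_deg)
    el = np.radians(sun_elevation_deg)
    # Sun direction unit vector (x: east, y: north, z: up).
    sun = np.array([np.sin(az) * np.cos(el),
                    np.cos(az) * np.cos(el),
                    np.sin(el)])
    gy, gx = np.gradient(dem.astype(float))
    # Unnormalized surface normals (-dz/dx, -dz/dy, 1).
    nx, ny, nz = -gx, -gy, np.ones_like(dem, dtype=float)
    norm = np.sqrt(nx**2 + ny**2 + nz**2)
    shade = (nx * sun[0] + ny * sun[1] + nz * sun[2]) / norm
    return np.clip(shade, 0.0, 1.0)
```

A flat DEM shades uniformly to sin(elevation); cast shadows (occlusion along the sun ray) would require an additional ray-marching pass that this sketch omits.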