Japan Geoscience Union Meeting 2019

Presentation information

[E] Poster

M (Multidisciplinary and Interdisciplinary) » M-IS Intersection

[M-IS07] Astrobiology

Thu. May 30, 2019 5:15 PM - 6:30 PM Poster Hall (International Exhibition Hall 8, Makuhari Messe)

convener: Hikaru Yabuta (Hiroshima University, Department of Earth and Planetary Systems Science), Seiji Sugita (Department of Earth and Planetary Science, Graduate School of Science, The University of Tokyo), Misato Fukagawa (National Astronomical Observatory of Japan), Fujishima Kosuke (Tokyo Institute of Technology, Earth-Life Science Institute)

[MIS07-P07] Application of Machine learning and software development for initial analysis in the TANPOPO mission

*Kyoko Okudaira1, Yuto Toda1, Takashi Sonoke1, Yuichi Yaguchi1, Masashi Yoshida1, Junya Imani2, Satoshi Sasaki3, Hajime Yano4, Hirohide Demura1, Makoto Tabata5, Akihiko Yamagishi6 (1.The University of Aizu, 2.YUKI Precision, 3.Tokyo University of Technology, 4.JAXA, 5.Chiba University, 6.Tokyo University of Pharmacy and Life Science)

Keywords:TANPOPO mission, Astrobiology, Initial analysis, Machine learning, Software development

One of the goals of the TANPOPO mission is to collect cosmic dust and space debris on the International Space Station [1]. Panels of the ultra-low-density material “silica aerogel” are used as the capture medium [2]. The panels retrieved from space are sent to the laboratory, where initial analysis is conducted with a system called “CLOXS”.

CLOXS efficiently covers the processes up to sample allocation to researchers: (1) automatic imaging of the entire panel, (2) search for penetration track candidates, (3) mapping of track candidates, (4) acquisition of track shape data, and (5) automatic sample cutting. However, to further reduce researchers’ time and effort, the software needs to be improved and some of these processes automated.

This study mainly reports the application of machine learning to the discrimination of objects in aerogel images taken by CLOXS. The research goal is to classify the objects in these images into tracks and dirt using machine learning.

The objects show considerable morphological variety; in this study, tracks and dirt are each classified into four types.

Since the contrast of the images is low, the first step of our approach is image preprocessing. We compared the results of Contrast Stretching and CLAHE (Contrast Limited Adaptive Histogram Equalization).
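As an illustration, the two preprocessing methods could be applied as in the following minimal sketch. It assumes 8-bit grayscale images and the OpenCV library; the file name and parameter values are hypothetical and not taken from the actual CLOXS software.

    import cv2
    import numpy as np

    def contrast_stretch(img):
        # Linearly rescale pixel intensities to the full 0-255 range.
        return cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    def clahe_enhance(img, clip_limit=2.0, tile_grid=(8, 8)):
        # Contrast Limited Adaptive Histogram Equalization on local tiles.
        clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
        return clahe.apply(img)

    # Enhance one aerogel image both ways for comparison (hypothetical file name).
    img = cv2.imread("aerogel_tile.png", cv2.IMREAD_GRAYSCALE)
    stretched = contrast_stretch(img)
    equalized = clahe_enhance(img)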

Training data and test data were prepared for two cases, to compare classification when the images are categorized into the four types with classification when the images of all types are treated together.
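A minimal sketch of how the two labeling schemes could be prepared is given below; the class names, split ratio, and helper function are hypothetical illustrations, not the actual dataset layout.

    from sklearn.model_selection import train_test_split

    # Hypothetical fine-grained class names: four track types and four dirt types.
    FINE_LABELS = ["track_1", "track_2", "track_3", "track_4",
                   "dirt_1", "dirt_2", "dirt_3", "dirt_4"]

    def encode(labels, per_type=True):
        # per_type=True  -> 8-class labels (each track/dirt type kept separate)
        # per_type=False -> 2-class labels (all tracks vs. all dirt)
        if per_type:
            return [FINE_LABELS.index(l) for l in labels]
        return [0 if l.startswith("track") else 1 for l in labels]

    # images: list of preprocessed image arrays; labels: list of FINE_LABELS entries
    # X_train, X_test, y_train, y_test = train_test_split(
    #     images, encode(labels, per_type=True), test_size=0.2, stratify=labels)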

A convolutional neural network (CNN) is used as the machine learning method, and Python is used as the programming language.
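A small CNN of this kind could be defined as follows. The abstract does not specify the framework or architecture, so tf.keras is assumed here and the layer sizes and input shape are illustrative only.

    import tensorflow as tf

    def build_cnn(num_classes, input_shape=(128, 128, 1)):
        # A compact CNN: two convolution/pooling stages followed by a classifier head.
        model = tf.keras.Sequential([
            tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(32, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(num_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    # num_classes = 8 when each track/dirt type is learned separately, 2 otherwise.
    model = build_cnn(num_classes=8)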

In conclusion, regarding image processing, CLAHE showed higher accuracy in judging tracks to be tracks, whereas Contrast Stretching yielded a better result for dirt. This is probably because Contrast Stretching did not highlight the features sufficiently for the CNN to capture them, while CLAHE improved the contrast sufficiently.

Regarding the separation of track and dirt types, the accuracy rate was higher when tracks were divided into the different types and learned separately.

Although further evaluation and improvements may be needed, our accuracy rates meet the mission requirements: the percentage of tracks identified as tracks should be more than 95%, while the percentage of dirt identified as dirt should be more than 70% (that is, the percentage of dirt misidentified as a track should be less than 30%).
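These per-class requirements correspond to class-wise recall computed from a confusion matrix, as in the sketch below. The numbers in the example matrix are purely illustrative placeholders, not the reported results of this study.

    import numpy as np

    def class_recall(conf_matrix, class_index):
        # Fraction of objects of a given true class that were predicted correctly.
        row = conf_matrix[class_index]
        return row[class_index] / row.sum()

    # Illustrative 2x2 confusion matrix: rows = true (track, dirt), columns = predicted.
    cm = np.array([[96, 4],
                   [25, 75]])
    track_recall = class_recall(cm, 0)   # mission requirement: > 0.95
    dirt_recall = class_recall(cm, 1)    # mission requirement: > 0.70
    print(track_recall > 0.95 and dirt_recall > 0.70)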



[1] A. Yamagishi et al., “Tanpopo: Astrobiology exposure and micrometeoroid capture experiments - proposed experiments at the exposure facility of ISS-JEM”, Trans. JSASS Aerospace Tech. Japan, vol. 12, no. ists29, pp. Tk_49-Tk_55, 2014.

[2] M. Tabata et al., “Ultralow-density double-layer silica aerogel fabrication for the intact capture of cosmic dust in low-Earth orbits”, J. Sol-Gel Sci. Technol., vol. 77, no. 2, pp. 325-334, 2016.