JpGU-AGU Joint Meeting 2020

Presentation information

[E] Poster

P (Space and Planetary Sciences) » P-PS Planetary Sciences

[P-PS02] Lunar Science and Exploration

Convener: Masaki N Nishino (Japan Aerospace Exploration Agency, Institute of Space and Astronautical Science), Masahiro KAYAMA (Department of General Systems Studies, Graduate School of Arts and Sciences, The University of Tokyo), Hiroshi Nagaoka (Japan Aerospace Exploration Agency), Yusuke Nakauchi (Japan Aerospace Exploration Agency)

[PPS02-P11] Study of lunar 3-D terrain mapping using both the camera and the LIDAR loaded on the rover

*Ryuhei Yamada1, Yuichi Yaguchi1, Keitaro Naruse1 (1. The University of Aizu, Department of Computer Science and Engineering)

Keywords: Lunar terrain, 3-D mapping, LIDAR, camera, rover

Lunar landing missions are the next major step for the survey and utilization of the Moon. Over the past 15 years, remote-sensing observations by the Kaguya, LRO, GRAIL, LADEE, and Chang'e programs have built a nearly comprehensive picture of the lunar surface. Landing exploration can provide details of the local terrain, geology, and chemical composition of a selected area. Moreover, the deployment of geophysical sensors such as seismometers, thermometers, and magnetometers from a lunar lander and/or rover would provide the first opportunities since the Apollo era to investigate the lunar interior structure. In this study, we aim to construct an accurate 3-D terrain map of the area around the landing site, over a range of several hundred meters to a few kilometers, using both a camera and a 3-D LIDAR mounted on a lunar rover.

An accurate 3-D terrain map can provide the detailed size and shape distributions of boulders, the geometries of craters, and the relief of the surface around the site. This information helps us investigate the formation process and age of the area. We can also use the map, together with other sensor data, to select scientific targets and to design the rover's route to each target.
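As an illustration of the kind of analysis such a map enables, boulder diameters measured from the terrain model can be summarized as a cumulative size-frequency distribution, the standard form used in boulder and crater counting. The helper below is a hypothetical sketch (not part of the authors' pipeline), assuming diameters have already been extracted from the map:

```python
import numpy as np

def boulder_size_frequency(diameters_m, bin_edges_m):
    """Cumulative size-frequency distribution of boulders.

    For each diameter threshold d in bin_edges_m, count the number of
    boulders with diameter >= d. diameters_m is a 1-D sequence of
    boulder diameters in meters measured from the 3-D terrain map.
    """
    d = np.asarray(diameters_m, dtype=float)
    return np.array([(d >= edge).sum() for edge in bin_edges_m])

# Example: four boulders, counted at three diameter thresholds.
counts = boulder_size_frequency([0.1, 0.5, 1.2, 2.0], [0.1, 1.0, 2.0])
# counts -> [4, 2, 1]: all four boulders are >= 0.1 m, two are >= 1.0 m,
# and one is >= 2.0 m.
```

Fitting a power law to such a distribution is one common way to compare the maturity and formation history of different surface units.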

We have studied a method of constructing the 3-D terrain map from 3-D LIDAR point-cloud data using a SLAM (Simultaneous Localization and Mapping) algorithm, and we can build the 3-D map with an accuracy of a few to about 10 cm [Yamada et al., 2019]. On the other hand, a 3-D map built from LIDAR data alone is usually sparse, especially at far range. Camera images can be used to fill in these gaps and to add color information to the map. We have therefore also studied a calibration method that registers the camera images to the LIDAR point cloud so that an accurate, high-resolution 3-D map can be constructed [Yamada et al., 2020]. In this presentation, we will introduce the 3-D map construction and calibration methods. We will then perform a mapping test in an outdoor area, using both the camera and the LIDAR mounted on a wagon and/or a small rover, and present the resulting 3-D map. Finally, we will discuss applications of the method to lunar science and exploration, rover operation, and a future vision for lunar rovers.
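Once the camera–LIDAR calibration (intrinsic matrix K plus the LIDAR-to-camera extrinsics R, t) is known, the core fusion step is to project each LIDAR point into the image and attach the pixel color to it. The following is a minimal NumPy sketch of that standard projection step, under assumed pinhole-camera geometry; it is not the authors' implementation, and the function name and interface are illustrative only:

```python
import numpy as np

def colorize_point_cloud(points, image, K, R, t):
    """Attach image colors to LIDAR points via a pinhole projection.

    points : (N, 3) LIDAR points in the LIDAR frame
    image  : (H, W, 3) camera image
    K      : (3, 3) camera intrinsic matrix
    R, t   : rotation (3, 3) and translation (3,) from LIDAR to camera frame
    Returns an (M, 6) array of [x, y, z, r, g, b] rows for the points
    that project inside the image.
    """
    cam = points @ R.T + t               # transform into the camera frame
    in_front = cam[:, 2] > 0             # discard points behind the camera
    cam, pts = cam[in_front], points[in_front]
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]          # perspective division -> pixel coords
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    h, w = image.shape[:2]
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)   # keep in-image points
    colors = image[v[ok], u[ok]].astype(float)
    return np.hstack([pts[ok], colors])
```

In practice the quality of the colored map depends directly on the accuracy of R and t, which is exactly what the calibration procedure described above must provide; lens distortion, ignored here for brevity, would also be corrected before projection.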