Japan Geoscience Union Meeting 2024

Presentation information

[E] Oral

P (Space and Planetary Sciences) » P-CG Complex & General

[P-CG20] Future missions and instrumentation for space and planetary science

Mon. May 27, 2024 10:45 AM - 12:00 PM 103 (International Conference Hall, Makuhari Messe)

Convener: Masaki Kuwabara (Rikkyo University), Shoichiro Yokota (Graduate School of Science, Osaka University), Naoya Sakatani (Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency), Takefumi Mitani (Japan Aerospace Exploration Agency, Institute of Space and Astronautical Science), Chairperson: Naoya Sakatani (Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency)

11:30 AM - 11:45 AM

[PCG20-09] Study on Onboard High-Speed Machine Learning Inference using Dynamically Reconfigurable Processor

*Taisei Ukita¹, Shoya Matsuda¹, Yoshiya Kasahara¹, Yuto Morikoshi¹ (1. Kanazawa University)

Keywords: Dynamically Reconfigurable Processor, Machine Learning, Edge Computing, Plasma Wave

Automatic event classification using a machine learning model is an effective approach for improving intelligent processing on spacecraft. On the ground, GPUs (Graphics Processing Units) are generally used for machine learning training and inference. However, implementing these functions on a spacecraft requires alternative devices because of power consumption and thermal design constraints. In this study, we use a dynamically reconfigurable processor (DRP) to achieve resource-efficient, high-speed machine learning inference, aiming at onboard event classification in space.
The Renesas Electronics RZ/V2L microprocessor includes a hardware accelerator named “DRP-AI” for edge-AI computing. By dynamically changing its hardware configuration, DRP-AI combines two features: high-speed processing, as in an FPGA, and flexible computing, as in a CPU.
In this presentation, we develop a convolutional neural network (CNN) model consisting of six convolutional layers and two fully connected layers, and we evaluate its inference performance on DRP-AI. We confirmed that inference using DRP-AI on the RZ/V2L was approximately 20.3 times faster than inference on a conventional CPU operating at a clock frequency of 200 MHz. An FPGA has an advantage in inference speed because it flattens computations and processes them concurrently, whereas DRP-AI has an advantage in resource consumption because it dynamically changes its hardware configuration. We therefore conclude that the RZ/V2L can implement more complex models than an FPGA.
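For illustration, the following is a minimal PyTorch sketch of a network with six convolutional layers and two fully connected layers, matching only the layer counts described above. The channel widths, 3x3 kernels, 64x64 single-channel input, and four-class output are assumptions chosen for the example, not the model actually evaluated in this study.

    import torch
    import torch.nn as nn

    class EventCNN(nn.Module):
        # Hypothetical six-conv / two-FC network; the layer counts follow
        # the abstract, everything else (widths, kernels, input size,
        # number of classes) is an illustrative assumption.
        def __init__(self, num_classes: int = 4):
            super().__init__()
            widths = [1, 16, 32, 32, 64, 64, 128]  # assumed channel widths
            blocks = []
            for c_in, c_out in zip(widths[:-1], widths[1:]):
                blocks += [nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                           nn.ReLU(),
                           nn.MaxPool2d(2)]  # halve H and W in each block
            self.features = nn.Sequential(*blocks)
            # Six 2x2 poolings shrink the assumed 64x64 input to 1x1,
            # leaving 128 features for the two fully connected layers.
            self.classifier = nn.Sequential(nn.Linear(128, 64),
                                            nn.ReLU(),
                                            nn.Linear(64, num_classes))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = torch.flatten(self.features(x), 1)
            return self.classifier(x)

    model = EventCNN().eval()
    with torch.no_grad():
        logits = model(torch.randn(1, 1, 64, 64))  # one spectrogram-like frame
    print(logits.shape)  # torch.Size([1, 4])

In a deployment of this kind, such a model would typically be exported to an exchange format (e.g., ONNX) and converted with Renesas' DRP-AI tooling before running on the RZ/V2L; that conversion step is outside the scope of this sketch.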