*CEDRIC JEAN VINCENT CAREMEL1, WEI GUO2, KAORU SENDA1, DAISUKE TAKADA4, KOTA TSUBOUCHI3, YOSHIHIRO KAWAHARA1
(1. Graduate School of Engineering, Department of Electrical Engineering and Information Systems, The University of Tokyo, 2. Graduate School of Agricultural and Life Science, The University of Tokyo, 3. Yahoo! JAPAN Research, 4. Faculty of Food and Agricultural Sciences, Fukushima University)
Keywords: Artificial Neural Networks (ANN), Semantic Segmentation, Light Detection and Ranging (LiDAR), Terrestrial Laser Scanning (TLS), Unmanned Aerial Vehicle (UAV) Photogrammetry
Machine learning is revolutionizing plant phenotyping, and the growing importance of digital representations of crops and fields is widening the range of what new technologies and implementations can offer: close-range remote sensing systems, such as Terrestrial Laser Scanning, LiDAR, and UAV-based photogrammetry, can map outdoor environments to point cloud representations with unprecedented accuracy. However, semantic segmentation, which consists of clustering these point clouds into individual objects for further analysis, remains challenging. Manual segmentation is labor-intensive and impractical for large datasets, and point clouds representing dense vegetation, such as groves or orchards, are notoriously difficult to segment. Recently, novel Deep Learning-based methods for the semantic segmentation of large point clouds have proven effective, yet supervised networks still require the manual labeling of an equally large number of samples. Here, we propose a fast, semi-supervised method to automate the construction of training data for both scene-level and instance-level semantic segmentation. In addition, our results show that, using our approach, a simple artificial neural network can outperform state-of-the-art machine learning pipelines in both processing speed and accuracy.