JpGU-AGU Joint Meeting 2020

Presentation information

[E] Oral

S (Solid Earth Sciences) » S-TT Technology & Techniques

[S-TT50] Synthetic Aperture Radar and its application

Convener: Yohei Kinoshita (University of Tsukuba), Yu Morishita (Geospatial Information Authority of Japan), Shoko Kobayashi (Tamagawa University), Takahiro Abe (Earth Observation Research Center, Japan Aerospace Exploration Agency)

[STT50-01] Trial of deforestation detection accuracy improvement for the JJ-FAST algorithm using deep learning

*Manabu Watanabe1, Christian Naohide Koyama1, Masato Hayashi2, Isumi Nagatani2, Takeo Tadono2, Masanobu Shimada1 (1. School of Science and Engineering, Tokyo Denki University, 2. JAXA/EORC)

Keywords: Forest monitoring, deforestation detection, PALSAR-2, AI

The JICA-JAXA Forest Early Warning System in the Tropics (JJ-FAST) [1] is the first and only SAR-based deforestation detection system monitoring tropical forests globally. The deforestation detection algorithm currently used in JJ-FAST is threshold based: when the gamma_0 difference between the latest and past data exceeds an empirically determined threshold, the area is regarded as a deforestation site [2]. Eleven 1x1 deg. test areas were selected from South America, South-East Asia, and Africa and used for an accuracy evaluation. The overall accuracies of the threshold-based algorithm are estimated to exceed 80% for areas containing fewer than 100 deforestation sites per 1x1 deg. On the other hand, the accuracies, especially the producer's accuracies, are often worse for areas containing more than 100 deforestation sites. If a lower threshold is adopted, the producer's accuracies improve, while the user's accuracies decrease.
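As an illustration of the threshold test described above, a minimal sketch in Python/NumPy is given below. The array layout, dB convention, and the 3.0 dB threshold value are assumptions for illustration only, not the operational JJ-FAST settings.

```python
import numpy as np

def detect_deforestation(gamma0_past_db, gamma0_latest_db, threshold_db=3.0):
    """Flag pixels whose gamma_0 backscatter (dB) dropped by more than
    threshold_db between the past and the latest observation."""
    # Forest loss lowers L-band backscatter, so a past-minus-latest difference
    # larger than the threshold marks a candidate deforestation pixel.
    diff_db = gamma0_past_db - gamma0_latest_db
    return diff_db > threshold_db  # boolean mask of candidate pixels
```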
To address this problem, deep learning is introduced to improve the producer's accuracies for densely deforested areas. A test site was selected in Brazil, where 652 deforestation sites were detected between July 13 and August 24, 2018 within a 1x1 deg. area (upper-left lon. & lat.: W075, S08). Input chip images are made for each polarization (HH, HV, HH-HV ratio), and a color composite image is produced from the latest and past images.
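The abstract does not specify the exact channel layout of the chips, so the sketch below only illustrates one plausible arrangement: a per-channel RGB change composite with the latest acquisition in red and the past acquisition in green and blue. The normalization range is likewise an assumption.

```python
import numpy as np

def make_change_composite(band_latest_db, band_past_db, lo=-25.0, hi=0.0):
    """Build an RGB change composite chip for one channel (HH, HV, or the
    HH-HV ratio). Assumed layout: R = latest, G = B = past, so pixels where
    backscatter dropped (low latest, high past) appear cyan."""
    def normalize(img_db):
        # Clip to an assumed gamma_0 range (dB) and scale to 0..1
        return np.clip((img_db - lo) / (hi - lo), 0.0, 1.0)

    r = normalize(band_latest_db)
    g = normalize(band_past_db)
    return np.dstack([r, g, g])  # H x W x 3 chip for the classifier
```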
Detection accuracies are first estimated for the threshold method with threshold level 1: only 212 of the 652 deforestation sites are correctly detected, and 171 sites are detected falsely. When threshold level 2, which is 0.5 dB lower than threshold level 1, is adopted, the producer's accuracy improves, while the user's accuracy worsens to 40.2%. The falsely and correctly detected site images are used as training data, and deep learning is applied to improve the producer's accuracy obtained with threshold level 2. Two well-known network models, AlexNet [3] and GoogLeNet [4], are tested here. AlexNet has five convolution layers and is often used as a standard model, but no improvement is obtained on our dataset. GoogLeNet introduces the inception module, which consists of several pooling and convolution layers and builds a larger network.
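The training configuration is not described in the abstract; the following is only a minimal PyTorch/torchvision sketch of how a pretrained GoogLeNet could be fine-tuned as a binary classifier separating correct from false detections. The label scheme, hyperparameters, input size, and data pipeline are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Replace the ImageNet classifier head with a 2-class head:
# class 0 = false detection, class 1 = correct detection (assumed labels).
model = models.googlenet(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_one_epoch(loader):
    """loader is assumed to yield (chips [N, 3, 224, 224], labels [N]) batches."""
    model.train()
    for chips, labels in loader:
        optimizer.zero_grad()
        outputs = model(chips)
        # In training mode torchvision's GoogLeNet may return auxiliary logits
        logits = outputs.logits if hasattr(outputs, "logits") else outputs
        loss = criterion(logits, labels)
        loss.backward()
        optimizer.step()
```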
The results are presented in the figure below. While the user's accuracy decreases slightly, the producer's accuracy reaches 60.7%, almost twice that of the threshold method with threshold level 1. The threshold method uses only information about intensity variation, whereas deep learning uses not only the intensity variation but also texture information within and around the deforestation site. This additional information may yield better estimation accuracies.
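Under the standard definitions, the threshold-level-1 baseline follows directly from the counts quoted above (212 correct detections out of 652 sites, 171 false detections); the short Python check below reproduces the producer's accuracy that the 60.7% figure roughly doubles.

```python
def producer_user_accuracy(tp, fn, fp):
    """Producer's accuracy = TP / (TP + FN): fraction of real deforestation found.
    User's accuracy = TP / (TP + FP): fraction of detections that are real."""
    return tp / (tp + fn), tp / (tp + fp)

# Threshold level 1: 212 of 652 sites correctly detected, 171 false detections
pa, ua = producer_user_accuracy(tp=212, fn=652 - 212, fp=171)
print(f"producer's accuracy: {pa:.1%}, user's accuracy: {ua:.1%}")
# producer's accuracy: 32.5%, user's accuracy: 55.4%
```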

[1] JJ-FAST, http://www.eorc.jaxa.jp/jjfast/jj_index.html, accessed on February 17, 2019.
[2] M. Watanabe et al., "Improvement of deforestation detection algorithms used in JJ-FAST," in Proc. 2019 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2019 (INSPEC Accession Number 19138057).
[3] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proc. of NIPS, 2012.
[4] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," in Proc. of CVPR, 2015.