CIGR VI 2019

Presentation information

Oral Session

Others (including the category of JSAM and SASJ)

[4-1600-D] Other Categories (1)

Wed. Sep 4, 2019 4:00 PM - 6:15 PM Room D (4th room)

Chair: Satoshi Yamamoto (Akita Prefectural University), Kikuhito Kawasue (University of Miyazaki)

4:45 PM - 5:00 PM

[4-1600-D-04] Plant Disease Identification using Explainable Features with Deep Convolutional Neural Network

*Harshana Habaragamuwa1, Yu Oishi1, Katu Takeya1, Kenichi Tanaka1 (1. National Agriculture and Food Research Organization (Japan))

Keywords: Plant disease identification, Explainable features, Convolutional Neural Network, Auto-encoder, Deep learning

Recently, deep learning algorithms have been widely used in agricultural applications such as disease identification. However, most of these algorithms are black-box models, meaning users cannot interpret (explain) what kind of features the Convolutional Neural Network (CNN) learned in order to perform the classification task. Without interpreting the learned features, users cannot verify whether the algorithm learned the correct features, and this problem may lead to disastrous situations. Because of low interpretability, it is difficult to improve the training data, gain new knowledge from the data, improve the architecture, or predict the behavior of the algorithm under different conditions. Our objective is to develop a deep learning algorithm that, in an intermediate stage, creates explainable features that can be used to discriminate between healthy and diseased leaves. We used the PlantVillage dataset, a dataset commonly used in disease identification research, to develop and test our algorithm. This dataset consists of leaf images (healthy and diseased) from plants such as tomato, potato, and bell pepper. The algorithm consists of three stages. The first stage is unsupervised generative training using a variational auto-encoder. The second stage is supervised generative training using a variational auto-encoder, and the final stage trains a supervised classifier to discriminate between healthy and diseased leaves. The results were evaluated using the visual quality of the features, which can be visualized in the second stage of training. We also tested the final classification accuracy, because there is a trade-off between interpretability (understandability) and fidelity (classification accuracy). Our visual outputs were easy to understand compared to a conventional heat-map visualization. Our average classification accuracy was 92%, which may be acceptable given the level of interpretation supplied by our method. Our method can be used to find the features that separate a healthy leaf from a diseased one with little sacrifice in final classification accuracy. In the agricultural field, this method will help improve disease classification algorithms and reveal deficiencies in training datasets. Moreover, disease experts can predict the behavior of the algorithm in different situations and gain knowledge about the features that are characteristic of plant disease. In the future, this algorithm could be extended to other fields where safety is of paramount importance; object identification in autonomous vehicles, food safety inspection, and poisonous plant identification are prospective areas for extension.
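
To make the three-stage pipeline concrete, the following is a minimal PyTorch sketch of one plausible realization. Everything here is an assumption made for illustration: the layer sizes, the latent dimensionality, the losses, and in particular the reading of the second (supervised generative) stage as a joint reconstruction-plus-classification objective are not specified in the abstract, and the stand-in random tensors would be replaced by PlantVillage leaf crops.

# Minimal sketch of the three-stage pipeline described in the abstract.
# All architecture and hyperparameter choices here are assumptions; the
# paper's exact layer sizes, losses, and training schedule are not given.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT = 32  # assumed latent dimensionality

class ConvVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 16, 4, 2, 1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, 2, 1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(32 * 16 * 16, LATENT)
        self.fc_logvar = nn.Linear(32 * 16 * 16, LATENT)
        self.dec_fc = nn.Linear(LATENT, 32 * 16 * 16)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),     # 16 -> 32
            nn.ConvTranspose2d(16, 3, 4, 2, 1), nn.Sigmoid(),   # 32 -> 64
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        recon = self.dec(self.dec_fc(z).view(-1, 32, 16, 16))
        return recon, mu, logvar

def vae_loss(recon, x, mu, logvar):
    rec = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

# Stand-in batch: 64x64 RGB leaf crops in [0, 1]; replace with PlantVillage images.
x = torch.rand(8, 3, 64, 64)
y = torch.randint(0, 2, (8,))  # 0 = healthy, 1 = diseased

vae = ConvVAE()
opt = torch.optim.Adam(vae.parameters(), lr=1e-3)

# Stage 1: unsupervised generative training (plain VAE objective).
recon, mu, logvar = vae(x)
loss1 = vae_loss(recon, x, mu, logvar)
opt.zero_grad()
loss1.backward()
opt.step()

# Stage 2: supervised generative training, sketched here as adding a
# classification term so the latent space separates healthy/diseased
# (one plausible reading; the abstract does not spell out this stage).
head = nn.Linear(LATENT, 2)
opt2 = torch.optim.Adam(list(vae.parameters()) + list(head.parameters()), lr=1e-3)
recon, mu, logvar = vae(x)
loss2 = vae_loss(recon, x, mu, logvar) + F.cross_entropy(head(mu), y)
opt2.zero_grad()
loss2.backward()
opt2.step()

# Stage 3: supervised classifier on the (now frozen) explainable latents.
for p in vae.parameters():
    p.requires_grad_(False)
clf = nn.Linear(LATENT, 2)
opt3 = torch.optim.Adam(clf.parameters(), lr=1e-3)
with torch.no_grad():
    _, mu, _ = vae(x)
loss3 = F.cross_entropy(clf(mu), y)
opt3.zero_grad()
loss3.backward()
opt3.step()

# Interpretation: decode perturbations of individual latent units to
# visualize the pattern each unit encodes (the stage-2 visual outputs).
with torch.no_grad():
    z = torch.zeros(1, LATENT)
    z[0, 0] = 3.0  # push one latent unit; decode to see its visual feature
    feature_img = vae.dec(vae.dec_fc(z).view(-1, 32, 16, 16))

The final block hints at how stage-two features can be inspected: decoding perturbations of individual latent units yields images of the visual pattern each unit encodes, which is one common way VAE features are made human-readable, in contrast to the heat-map visualizations the abstract compares against.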