16:40 〜 16:55
[ACG43-11] Improving Detection of Tropical Cyclones by Deep Convolutional Neural Network through a Two-step Training
Keywords: tropical cyclone, deep learning
Detecting tropical cyclones (TCs) is important for mitigating the disasters they induce. The Japan Meteorological Agency operationally uses the Dvorak method, which manually estimates TC intensities from cloud patterns. With recent progress in machine learning algorithms, TC detection using deep neural networks has been explored. Matsuoka et al. (2018) used a deep convolutional neural network (DCNN) to classify simulated cloud images as TC or non-TC. They found that the classifications were less accurate when images were densely or sparsely covered by clouds.
This study aims at developing an efficient approach for training a DCNN as a TC/non-TC classifier. We first developed a VGG16-based DCNN that has one additional layer with two features before the classifier (Machine 1). We trained Machine 1 on the cloud images of Matsuoka et al. (2018) and found that the classification was less accurate when the two features were close to zero. Based on these preliminary results, we developed a new machine (Machine 2) whose fully-connected NN takes three inputs: (1) the cloud cover ratio, (2) the two features from the pre-trained Machine 1, and (3) the outputs of a standard VGG16-based DCNN. This two-step training improved detection accuracy significantly relative to the classical VGG16. Although cloud images contain the cloud cover ratio and the two features implicitly, using these "extracted features" explicitly enables efficient training with limited training data. At the conference, we will introduce the details of our DCNN approach together with preliminary unsuccessful experiments.
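The fusion step of Machine 2 can be illustrated with a minimal NumPy sketch. Everything here is a hypothetical stand-in, not the authors' code: the two Machine 1 features and the VGG16 class scores are represented by placeholder values, and the function and variable names are illustrative. The point is only the structure — three explicitly extracted inputs concatenated and fed to a small fully-connected classifier.

```python
import numpy as np

def machine2_inputs(image, m1_features, vgg_scores):
    """Assemble the three Machine 2 inputs for one cloud image.

    In the study, m1_features would come from the pre-trained
    Machine 1 and vgg_scores from a standard VGG16-based DCNN;
    here they are supplied as placeholders.
    """
    cloud_cover = image.mean()  # (1) cloud cover ratio of a 0/1 cloud mask
    # (2) two Machine 1 features, (3) VGG16 TC/non-TC outputs
    return np.concatenate([[cloud_cover], m1_features, vgg_scores])

def fully_connected_classifier(x, W, b):
    """One fully-connected layer with softmax over [TC, non-TC]."""
    logits = W @ x + b
    e = np.exp(logits - logits.max())  # shift for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
image = (rng.random((64, 64)) > 0.5).astype(float)  # toy binary cloud mask
m1_features = np.array([0.3, -0.7])                 # placeholder Machine 1 features
vgg_scores = np.array([0.6, 0.4])                   # placeholder VGG16 outputs
x = machine2_inputs(image, m1_features, vgg_scores) # 5-dimensional fused input
W = rng.standard_normal((2, 5))
b = np.zeros(2)
probs = fully_connected_classifier(x, W, b)         # [P(TC), P(non-TC)]
```

In a real implementation the fully-connected weights would be trained in the second step while the upstream feature extractors stay fixed, which is what makes the explicit features cheap to exploit with limited data.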