Japan Geoscience Union Meeting 2021

Presentation information

[J] Oral

A (Atmospheric and Hydrospheric Sciences) » A-CG Complex & General

[A-CG43] Earth & Environmental Sciences and Artificial Intelligence/Machine Learning

Thu. Jun 3, 2021 3:30 PM - 5:00 PM Ch.06 (Zoom Room 06)

convener:Tomohiko Tomita(Faculty of Advanced Science and Technology, Kumamoto University), Shigeki Hosoda(Japan Agency for Marine-Earth Science and Technology), Ken-ichi Fukui(Osaka University), Satoshi Ono(Kagoshima University), Chairperson:Shigeki Hosoda(Japan Agency for Marine-Earth Science and Technology), Tomohiko Tomita(Faculty of Advanced Science and Technology, Kumamoto University)

4:40 PM - 4:55 PM

[ACG43-11] Improving Detection of Tropical Cyclones by Deep Convolutional Neural Network through a Two-step Training

*Takeru Tsuchiya1, Shunji Kotsuki2, Ryota Kikuchi3, Takeshi Umezawa4, Noritaka Osawa4 (1.Department of Information Engineering, Chiba University, Chiba, Japan, 2.Center for Environmental Remote Sensing, Chiba University, Chiba, Japan, 3.Office of Society-Academia Collaboration for Innovation, Kyoto University, Japan, 4.Graduate School & Faculty of Engineering, Chiba University, Chiba, Japan)

Keywords: Tropical Cyclones, Deep Learning

Detecting tropical cyclones (TCs) is important for mitigating the disasters they induce. The Japan Meteorological Agency operationally uses the Dvorak method, in which TC intensities are estimated manually from cloud patterns. With the recent progress of machine learning algorithms, TC detection using deep neural networks has been explored. Matsuoka et al. (2018) used a deep convolutional neural network (DCNN) to classify simulated cloud images as TC or non-TC, and found that the classification was less accurate when the images were densely or sparsely covered by clouds.

This study aims to develop an efficient approach to training a DCNN as a TC/non-TC classifier. We first developed a VGG16-based DCNN that has one additional two-feature layer before the classifier (Machine 1). We trained Machine 1 on Matsuoka et al. (2018)'s cloud images and found that the classification was less accurate when the two features were close to zero. Based on these preliminary results, we developed a new machine (Machine 2) whose fully connected NN takes three inputs: (1) the cloud cover ratio, (2) the two features from the pre-trained Machine 1, and (3) the outputs of a standard VGG16-based DCNN. This two-step training improved detection accuracy significantly relative to the classical VGG16. Although cloud images implicitly contain the cloud cover ratio and the two features, using these extracted features explicitly enables efficient training with limited training data. At the conference, we will present the details of our DCNN approach together with preliminary unsuccessful experiments.
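The abstract describes the two machines only in prose. As a rough illustration, the PyTorch sketch below shows one plausible realization of the two-step design; the class names Machine1 and Machine2 mirror the abstract, but the layer sizes, the two-logit output heads, and the freezing of the pre-trained branch in step two are assumptions not specified by the authors.

# Minimal sketch of the two-step architecture (illustrative assumptions
# throughout; the abstract does not specify layer sizes or training details).
import torch
import torch.nn as nn
from torchvision.models import vgg16

class Machine1(nn.Module):
    """VGG16-based DCNN with an additional 2-feature layer before the classifier."""
    def __init__(self):
        super().__init__()
        base = vgg16()                      # VGG16 backbone (weights/training regime assumed)
        self.features = base.features       # convolutional feature extractor
        self.pool = base.avgpool
        self.to_two = nn.Sequential(        # assumed head producing the two features
            nn.Flatten(),
            nn.Linear(512 * 7 * 7, 2),
        )
        self.classifier = nn.Linear(2, 2)   # TC / non-TC logits from the two features

    def forward(self, x):
        two = self.to_two(self.pool(self.features(x)))
        return self.classifier(two), two    # logits plus the two intermediate features

class Machine2(nn.Module):
    """Fully connected NN over (cloud cover ratio, Machine 1 features, VGG16 outputs)."""
    def __init__(self, machine1: Machine1):
        super().__init__()
        self.machine1 = machine1            # pre-trained in step one; frozen here (assumed)
        for p in self.machine1.parameters():
            p.requires_grad = False
        self.vgg = vgg16()                  # standard VGG16 branch (1000-d output)
        self.fc = nn.Sequential(            # assumed sizes for the fully connected NN
            nn.Linear(1 + 2 + 1000, 64),
            nn.ReLU(),
            nn.Linear(64, 2),
        )

    def forward(self, x, cloud_cover_ratio):
        _, two = self.machine1(x)           # (2) two features from pre-trained Machine 1
        v = self.vgg(x)                     # (3) standard VGG16-based DCNN outputs
        z = torch.cat([cloud_cover_ratio.unsqueeze(1), two, v], dim=1)  # (1)+(2)+(3)
        return self.fc(z)                   # final TC / non-TC logits

# Illustrative two-step usage: train Machine1 first, then train Machine2
# with the Machine1 branch frozen.
m1 = Machine1()
# ... step one: train m1 on the cloud images ...
m2 = Machine2(m1)
logits = m2(torch.randn(4, 3, 224, 224), torch.rand(4))

In this reading, step one learns the two-feature representation in isolation, and step two treats it, together with the cloud cover ratio, as an explicit input so that the fully connected NN need not rediscover it from raw pixels with limited training data.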