Japan Geoscience Union Meeting 2019

Presentation information

[J] Oral

A (Atmospheric and Hydrospheric Sciences) » A-CG Complex & General

[A-CG36] Earth & Environmental Sciences and Artificial Intelligence

Thu. May 30, 2019 1:45 PM - 3:15 PM 104 (1F)

convener:Tomohiko Tomita(Faculty of Advanced Science and Technology, Kumamoto University), Ken-ichi Fukui(Osaka University), Daisuke Matsuoka(Japan Agency for Marine-Earth Science and Technology), Satoshi Ono(Kagoshima University), Chairperson:Tomohiko Tomita(Kumamoto University, Faculty of Advanced Science and Technology)

1:45 PM - 2:00 PM

[ACG36-05] Remote sensing of cloud using the deep learning based on three-dimensional radiative transfer

*Hironobu Iwabuchi1, Ryosuke Masuda1, K. Sebastian Schmidt2,3, Alessandro Damiani4, Rei Kudo5 (1.Graduate School of Science, Tohoku University, 2.Department of Atmospheric and Oceanic Sciences, University of Colorado, 3.Laboratory for Atmospheric and Space Physics, University of Colorado, 4.Center for Environmental Remote Sensing, Chiba University, 5.Meteorological Research Institute, Japan Meteorological Agency)

Keywords: deep learning, remote sensing, cloud, radiative transfer

Remote sensing is a field of active research to which machine learning techniques are applied. Machine learning models can be trained to detect characteristics and estimate properties of spatially and temporally varying geophysical features. Deep learning, which uses deep neural networks, is developing rapidly across a wide range of industrial applications and scientific research. Deep neural networks generally offer strong automatic feature extraction from multivariate, multi-modal data and good approximation of non-linear functions. A well-trained deep learning model combines high accuracy with reasonable computational speed, making it suitable for practical applications that process large amounts of data.

We will present a few applications of deep learning to remote sensing of clouds from satellite- and ground-based optical measurements. In optical remote sensing of clouds, three-dimensional (3-D) radiative transfer effects are a major source of retrieval errors. Radiative interactions operate across spatial elements of the cloudy atmosphere over a wide range of spatial scales. The radiance measured at each pixel of an image taken by a satellite-based imager or a ground-based camera is influenced not only by the cloud density and microphysical properties along the line of sight but also by the spatial distribution of clouds in a wide domain surrounding it. Retrieval of cloud properties is usually based on the independent pixel approximation, which assumes a plane-parallel, homogeneous cloud for each image pixel; the 3-D radiative interactions make it difficult to retrieve cloud properties at the pixel level from single-pixel radiances alone. The use of multiple pixels is therefore attractive, offering great potential for accurately retrieving cloud properties from an image. Convolutional neural networks (CNNs) naturally trace the 3-D radiative effects that appear across image pixels, which traditional single-pixel approaches cannot capture. We have therefore developed deep learning models that estimate the spatial distribution of cloud properties by multi-spectral, multi-pixel retrieval from satellite imagery and from images taken by a ground-based digital camera. Training data of pseudo-observation radiances are generated with a Monte Carlo 3-D radiative transfer model applied to cloud fields simulated by a high-resolution large eddy simulation model. In this setting, the deep learning model learns the multi-scale spatial structure of clouds in addition to the complicated relationships between cloud properties and radiances.
Deep CNNs show high retrieval accuracy for cloud properties such as cloud optical thickness and effective droplet radius from multi-spectral images in visible, near-infrared, and shortwave infrared channels, efficiently deriving the spatial distribution of cloud properties at many pixels at once from multi-pixel radiances. By exploiting multi-scale features in the images, it is possible to recover information lost to 3-D radiative transfer. The results show significantly better accuracy than traditional approaches.
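As a minimal illustration of the multi-pixel idea, the sketch below (plain NumPy; all layer sizes and weights are hypothetical placeholders, not the authors' actual model) shows a fully convolutional forward pass that maps a multi-spectral radiance image to per-pixel cloud-property maps. Each output pixel depends on a neighborhood of input pixels, which is what allows a CNN to account for 3-D radiative effects that spread across pixels.

```python
import numpy as np

def conv2d_same(x, w, b):
    """Naive 'same'-padded 2-D convolution.
    x: (c_in, H, W) radiances; w: (c_out, c_in, k, k) kernels; b: (c_out,) biases."""
    c_out, c_in, k, _ = w.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    _, H, W = x.shape
    out = np.empty((c_out, H, W))
    for i in range(H):
        for j in range(W):
            # Each output pixel sees a k x k neighborhood of every input channel,
            # so radiance information from surrounding pixels enters the retrieval.
            out[:, i, j] = np.tensordot(w, xp[:, i:i + k, j:j + k], axes=3) + b
    return out

rng = np.random.default_rng(0)
radiance = rng.random((3, 16, 16))   # 3 spectral channels (e.g. VIS/NIR/SWIR), 16x16 pixels

# Two toy convolutional layers with ReLU; weights are random placeholders,
# standing in for parameters that would be trained on simulated radiances.
w1, b1 = 0.1 * rng.standard_normal((8, 3, 5, 5)), np.zeros(8)
w2, b2 = 0.1 * rng.standard_normal((2, 8, 5, 5)), np.zeros(2)

hidden = np.maximum(conv2d_same(radiance, w1, b1), 0.0)
props = conv2d_same(hidden, w2, b2)  # 2 maps: e.g. optical thickness, droplet radius

print(props.shape)                   # (2, 16, 16): per-pixel property maps in one pass
```

With two stacked 5x5 layers, each retrieved pixel draws on a 9x9 receptive field of radiances; deeper or multi-scale architectures widen this further, which is the mechanism by which multi-pixel information lost to 3-D transfer can be recovered.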