Japan Geoscience Union Meeting 2019

Presentation Information

[J] Oral Presentation

Session A (Atmospheric and Hydrospheric Sciences) » A-CG Multidisciplinary Atmosphere-Ocean and Environmental Sciences / General

[A-CG36] Earth Environmental Science and Artificial Intelligence

Thu. May 30, 2019, 13:45 - 15:15, Room 104 (1F)

Conveners: Tomohiko Tomita (Faculty of Advanced Science and Technology, Kumamoto University), Ken-ichi Fukui (Osaka University), Daisuke Matsuoka (Japan Agency for Marine-Earth Science and Technology), Satoshi Ono (Kagoshima University); Chairperson: Tomohiko Tomita (Division of Basic Sciences, Faculty of Advanced Science and Technology, Kumamoto University)

13:45 - 14:00

[ACG36-05] Remote sensing of clouds using deep learning and a three-dimensional atmospheric radiative transfer model

*Hironobu Iwabuchi1, Ryosuke Masuda1, K. Sebastian Schmidt2,3, Alessandro Damiani4, Rei Kudo5 (1. Graduate School of Science, Tohoku University; 2. Department of Atmospheric and Oceanic Sciences, University of Colorado; 3. Laboratory for Atmospheric and Space Physics, University of Colorado; 4. Center for Environmental Remote Sensing, Chiba University; 5. Meteorological Research Institute)

Keywords: deep learning, remote sensing, clouds, radiative transfer

Remote sensing is a field of active research in which machine learning techniques are applied. Machine learning models can be trained to detect characteristics of geophysical features and to estimate their properties and spatiotemporal variations. Deep learning, which uses deep neural networks, is developing rapidly across a wide range of industrial applications and scientific research. Deep neural networks generally offer strong automatic feature extraction from multivariate, multimodal data and good approximation of non-linear functions. A well-trained deep learning model combines high accuracy with reasonable computational speed, making it suitable for practical processing of large volumes of data.

We will present several applications of deep learning to remote sensing of clouds from satellite- and ground-based optical measurements. In optical remote sensing of clouds, three-dimensional (3-D) radiative transfer effects are a major source of retrieval errors. Radiative interactions operate across the spatial elements of a cloudy atmosphere over a wide range of spatial scales: the radiance measured at each pixel of an image taken by a satellite imager or a ground-based camera is influenced not only by the cloud density and microphysical properties along the line of sight, but also by the spatial distribution of clouds in a wide domain surrounding it. Retrieval of cloud properties is usually based on the independent pixel approximation, which assumes a plane-parallel, homogeneous cloud for each image pixel; because of 3-D radiative interactions, however, pixel-level cloud properties are difficult to retrieve from single-pixel radiances alone. Using multiple pixels is therefore attractive, offering great potential for accurate retrieval of cloud properties from an image. Convolutional neural networks (CNNs) naturally capture the 3-D radiative effects that appear across image pixels, which traditional single-pixel approaches cannot.

We have therefore developed deep learning models that estimate the spatial distribution of cloud properties from multi-spectral, multi-pixel radiances in satellite imagery and in images taken by a ground-based digital camera. Training data of pseudo-observed radiances are generated with a Monte Carlo 3-D radiative transfer model applied to cloud fields simulated by a high-resolution large eddy simulation model. The deep learning model is thus trained to learn the multi-scale spatial structure of clouds in addition to the complicated relationships between cloud properties and radiances.
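To give a flavor of how Monte Carlo radiative transfer produces such training data, the toy sketch below traces photons through a single homogeneous slab with isotropic, non-absorbing scattering and estimates its transmittance. The function name, the slab geometry, and the photon budget are our own simplifications for illustration; the actual study uses a far more detailed 3-D Monte Carlo model on LES cloud fields.

```python
import numpy as np

def mc_transmittance(tau, n_photons=20_000, seed=0):
    """Monte Carlo estimate of the total (direct + diffuse) transmittance of
    a homogeneous slab of optical thickness `tau`, assuming isotropic
    scattering and no absorption. Illustrative toy, not the authors' model."""
    rng = np.random.default_rng(seed)
    transmitted = 0
    for _ in range(n_photons):
        z = 0.0    # optical depth travelled from the cloud top
        mu = 1.0   # direction cosine; +1 points straight down into the slab
        while True:
            z += mu * -np.log(1.0 - rng.random())  # sample a free path
            if z >= tau:          # escaped through the cloud base
                transmitted += 1
                break
            if z < 0.0:           # escaped back through the cloud top
                break
            mu = 2.0 * rng.random() - 1.0  # isotropic rescattering
    return transmitted / n_photons
```

For tau = 0 the estimate is exactly 1, and it decreases as the slab thickens; repeating such simulations over many simulated cloud fields and viewing geometries yields the pseudo-observed radiances used as training data.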
Deep CNNs show high retrieval accuracy for cloud properties such as cloud optical thickness and effective droplet radius from multi-spectral images with visible, near-infrared, and shortwave-infrared channels, efficiently deriving the spatial distribution of cloud properties at many pixels at once from multi-pixel radiances. By exploiting multi-scale features in the images, the information lost to 3-D radiative transfer can be recovered. The results show significantly better accuracy than traditional approaches.
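The multi-pixel idea can be sketched with a minimal, untrained two-layer convolutional network in NumPy: the network maps a multi-spectral radiance image to a per-pixel map of two retrieved quantities, and with two 3x3 layers each output pixel draws on a 5x5 neighborhood of radiances rather than a single pixel. The channel counts, layer sizes, and random weights are illustrative assumptions, not the architecture used in the study.

```python
import numpy as np

def conv2d(x, w, b):
    """Valid 2-D convolution: x is (C_in, H, W), w is (C_out, C_in, k, k)."""
    c_out, c_in, k, _ = w.shape
    H, W = x.shape[1] - k + 1, x.shape[2] - k + 1
    out = np.zeros((c_out, H, W))
    for i in range(H):
        for j in range(W):
            # Contract the (C_in, k, k) patch against all output filters.
            out[:, i, j] = np.tensordot(w, x[:, i:i+k, j:j+k], axes=3) + b
    return out

rng = np.random.default_rng(0)
w1 = rng.standard_normal((8, 3, 3, 3)) * 0.1  # 3 spectral channels -> 8 features
b1 = np.zeros(8)
w2 = rng.standard_normal((2, 8, 3, 3)) * 0.1  # 8 features -> 2 cloud properties
b2 = np.zeros(2)

def retrieve(radiance):
    """Image-to-image retrieval: radiance (3, H, W) -> properties (2, H, W),
    e.g. optical thickness and effective radius maps (toy, untrained)."""
    x = np.pad(radiance, ((0, 0), (2, 2), (2, 2)))  # pad so output matches input
    h = np.maximum(conv2d(x, w1, b1), 0.0)          # conv + ReLU
    return conv2d(h, w2, b2)
```

Perturbing one input pixel changes the output only within its 5x5 receptive field, which is exactly the multi-pixel context that a single-pixel lookup table cannot use; a trained network with more layers sees a correspondingly larger neighborhood.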