JpGU-AGU Joint Meeting 2017

Presentation Information

[EE] Oral Presentation

Session Symbol A (Atmospheric and Hydrospheric Sciences) » A-AS Atmospheric Sciences, Meteorology & Atmospheric Environment

[A-AS01] [EE] 3D Cloud Modeling as a Tool for 3D Radiative Transfer, and Conversely

Sunday, May 21, 2017, 10:45 - 12:15, Room 304 (International Conference Hall, 3F)

Conveners: Thomas Fauchez (Universities Space Research Association, GSFC Greenbelt), Anthony B Davis (Jet Propulsion Laboratory), Hironobu Iwabuchi (Graduate School of Science, Tohoku University), Kentaroh Suzuki (Atmosphere and Ocean Research Institute, The University of Tokyo); Chairperson: Thomas Fauchez (NASA Postdoctoral Program, USRA, Goddard Space Flight Center, USA)

11:25 - 11:40

[AAS01-08] Retrieval of optical thickness and effective droplet radius of inhomogeneous clouds using a deep neural network

*Hironobu Iwabuchi1, Rintaro Okamura1, Sebastian Schmidt2 (1. Graduate School of Science, Tohoku University, 2. University of Colorado)

Keywords: remote sensing, cloud retrieval, deep neural network

Estimation of cloud properties such as cloud optical thickness and effective droplet radius is usually based on the independent pixel approximation (IPA), which assumes a plane-parallel, homogeneous cloud for each pixel of a satellite image. Prior studies have pointed out that horizontal and vertical inhomogeneities produce significant errors in the retrieved cloud properties. The observed reflectance at each pixel is influenced by the spatial arrangement of cloud water in adjacent pixels, so the effects of adjacent clouds must be considered when estimating the cloud properties at a target pixel. We study the feasibility of a multi-spectral, multi-pixel approach to estimating cloud optical thickness and effective droplet radius using a deep neural network (DNN), a machine-learning technique capable of multi-variable estimation, automatic characterization of data, and non-linear approximation. A Monte Carlo three-dimensional radiative transfer model is used to simulate reflectances at a resolution of 280 m for large-eddy-simulation cloud fields representing boundary layer clouds. Two retrieval methods are constructed: 1) DNN-2r, which corrects IPA retrievals using the reflectances (from 3D simulations) at 0.86 and 2.13 µm, and 2) DNN-4w, which uses a so-called convolution layer and directly retrieves cloud properties from the reflectances at 0.86, 1.64, 2.13, and 3.75 µm. Both DNNs efficiently derive the spatial distribution of cloud properties over about 6×6 pixels at once from reflectances at multiple pixels, and both estimate cloud optical thickness and effective droplet radius more accurately than the IPA-based retrieval. DNN-4w robustly estimates cloud properties even for optically thick clouds, and the use of a convolution layer in the DNN appears adequate to represent three-dimensional radiative transfer effects.
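To make the convolution-based, multi-pixel idea concrete, below is a minimal sketch in PyTorch of a network in the spirit of DNN-4w. The framework, layer widths, kernel sizes, and the 10×10 input patch are all illustrative assumptions; the abstract specifies only the four input wavelengths, the use of a convolution layer, and simultaneous retrieval of optical thickness and effective droplet radius over roughly 6×6 pixels.

```python
# Minimal sketch (hypothetical architecture; only the 4 wavelengths, the
# convolution layer, and the ~6x6-pixel output follow the abstract).
import torch
import torch.nn as nn

class Dnn4wSketch(nn.Module):
    """Hypothetical multi-spectral, multi-pixel retrieval network.

    Input : reflectances at 0.86, 1.64, 2.13, 3.75 um on a pixel patch
            (batch, 4 channels, H, W), e.g. H = W = 10 so that a 6x6
            interior is retrieved using surrounding-pixel context.
    Output: cloud optical thickness and effective droplet radius
            (batch, 2 channels, 6, 6).
    """

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            # Convolution layers let each retrieved pixel "see" its neighbours,
            # which is how adjacent-pixel (3D radiative) effects enter the model.
            nn.Conv2d(4, 32, kernel_size=3),   # 10x10 -> 8x8
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3),  # 8x8 -> 6x6
            nn.ReLU(),
            nn.Conv2d(64, 2, kernel_size=1),   # 2 outputs per pixel: tau, r_eff
        )

    def forward(self, reflectance_patch):
        return self.net(reflectance_patch)

# Example: one batch of 16 placeholder reflectance patches.
model = Dnn4wSketch()
patches = torch.rand(16, 4, 10, 10)   # stand-in for simulated reflectances
tau_reff = model(patches)             # shape (16, 2, 6, 6)
print(tau_reff.shape)
```

In such a design the output patch is smaller than the input patch, so every retrieved pixel is informed by reflectances of its neighbours; training targets would come from the known cloud fields used in the 3D radiative transfer simulations.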