1:45 PM - 2:00 PM
[ACG36-05] Remote sensing of clouds using deep learning based on three-dimensional radiative transfer
Keywords: deep learning, remote sensing, cloud, radiative transfer
We will present several applications of deep learning to the remote sensing of clouds from satellite- and ground-based optical measurements. In optical remote sensing of clouds, three-dimensional (3-D) radiative transfer effects are a major source of retrieval errors. Radiative interactions operate across spatial elements of the cloudy atmosphere over a wide range of spatial scales. The radiance measured at each pixel of an image taken by a satellite-based imager or ground-based camera is influenced not only by the cloud density and microphysical properties along the line of sight but also by the spatial distribution of clouds in a wide domain surrounding it. Although cloud-property retrieval is usually based on the independent pixel approximation, which assumes a plane-parallel, homogeneous cloud for each image pixel, 3-D radiative interactions make it difficult to retrieve cloud properties at the pixel level from single-pixel radiances alone. Using multiple pixels is therefore attractive, offering great potential for accurate retrieval of cloud properties from an image. Convolutional neural networks (CNNs) naturally trace the 3-D radiative effects that appear across image pixels, which traditional single-pixel approaches cannot capture. We have thus developed deep learning models that estimate the spatial distribution of cloud properties through multi-spectral, multi-pixel retrieval from satellite imagery and from images taken by a ground-based digital camera. Training data of pseudo-observation radiances are generated by simulations with a Monte Carlo 3-D radiative transfer model applied to cloud fields produced by a large eddy simulation model with high spatial resolution. The deep learning model thereby learns the multi-scale spatial structure of clouds in addition to the complicated relationships between cloud properties and radiances.
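To illustrate how a fully convolutional network maps multi-spectral radiances at many pixels to per-pixel cloud properties, here is a minimal pure-NumPy sketch. This is not the model used in this work; the layer sizes, channel counts, and random weights are placeholders, and the point is only that each output pixel draws on a neighborhood of input pixels, unlike a single-pixel lookup:

```python
import numpy as np

def conv2d(x, w, b):
    """Same-padded 2-D convolution. x: (C_in, H, W), w: (C_out, C_in, k, k)."""
    c_out, c_in, k, _ = w.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)), mode="reflect")
    H, W = x.shape[1:]
    y = np.empty((c_out, H, W))
    for o in range(c_out):
        acc = np.zeros((H, W))
        for i in range(c_in):
            for di in range(k):
                for dj in range(k):
                    # Each output pixel accumulates contributions from a
                    # k x k neighborhood of input pixels.
                    acc += w[o, i, di, dj] * xp[i, di:di + H, dj:dj + W]
        y[o] = acc + b[o]
    return y

rng = np.random.default_rng(0)
# Pseudo-observation: 3 spectral channels (e.g. VIS/NIR/SWIR) on a 32x32 pixel grid.
radiances = rng.random((3, 32, 32))
w1 = rng.normal(0.0, 0.1, (16, 3, 3, 3)); b1 = np.zeros(16)
w2 = rng.normal(0.0, 0.1, (2, 16, 3, 3)); b2 = np.zeros(2)

h = np.maximum(conv2d(radiances, w1, b1), 0.0)   # hidden features, ReLU
props = conv2d(h, w2, b2)                         # 2 outputs per pixel,
                                                  # e.g. optical thickness and r_eff
print(props.shape)  # (2, 32, 32)
```

With two stacked 3x3 convolutions, each retrieved pixel depends on a 5x5 patch of radiances; a trained deep network with more layers widens this context further, which is what lets it account for cross-pixel 3-D radiative effects.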
Deep CNNs show high retrieval accuracy for cloud properties such as cloud optical thickness and effective droplet radius from multi-spectral images at visible, near-infrared, and shortwave-infrared channels, efficiently deriving the spatial distribution of cloud properties at multiple pixels at once from radiances at multiple pixels. By exploiting multi-scale features in the images, the networks can recover information lost to 3-D radiative transfer. Results show significantly better accuracy than traditional independent-pixel approaches.
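The role of multi-scale features can be made concrete with a receptive-field calculation: each downsampling stage multiplies the spatial extent over which a deep feature "sees" the radiance field. A small sketch for a hypothetical encoder (the kernel and stride values are illustrative, not the actual architecture used here):

```python
def receptive_field(layers):
    """Receptive field, in input pixels, of stacked (kernel, stride) layers."""
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump  # each layer widens the field by (k-1) input steps
        jump *= s             # striding/pooling multiplies the step size
    return rf

# Hypothetical encoder: 3x3 conv -> 2x2 pool -> 3x3 conv -> 2x2 pool -> 3x3 conv
print(receptive_field([(3, 1), (2, 2), (3, 1), (2, 2), (3, 1)]))  # 18
```

Three 3x3 convolutions alone see only a 7x7 patch; interleaving two 2x2 pooling stages grows the context to 18x18 pixels, which is one way a multi-scale network can capture the wide-domain radiative interactions described above.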