13:45 〜 14:00
[ACG36-05] Cloud remote sensing using deep learning and a three-dimensional atmospheric radiative transfer model
Keywords: deep learning, remote sensing, clouds, radiative transfer
Remote sensing is an active research field to which machine learning techniques are increasingly applied. Machine learning models can be trained to detect characteristics and estimate properties of spatial and temporal variations of geophysical features. Deep learning, which uses deep neural networks, is developing rapidly across a wide range of industrial applications and scientific research. Deep neural networks generally offer strong automatic feature extraction from multi-variate, multi-modal data and good approximation of non-linear functions. A well-trained deep learning model combines high accuracy with reasonable computational speed, making it suitable for practical processing of large amounts of data.
We will present several applications of deep learning to cloud remote sensing from satellite- and ground-based optical measurements. In optical remote sensing of clouds, three-dimensional (3-D) radiative transfer effects are a major source of retrieval errors. Radiative interactions operate across spatial elements of the cloudy atmosphere over a wide range of scales: the radiance measured at each pixel of an image from a satellite-based imager or ground-based camera is influenced not only by the cloud density and microphysical properties along the line of sight but also by the spatial distribution of clouds in a wide domain surrounding it. Cloud-property retrieval is usually based on the independent pixel approximation, which assumes a plane-parallel, homogeneous cloud for each image pixel; the 3-D radiative interactions make pixel-level retrieval difficult when only single-pixel radiances are used. Using multiple pixels is therefore attractive, offering great potential for accurately retrieving cloud properties from an image. Convolutional neural networks (CNNs) naturally capture the 3-D radiative effects that appear across image pixels, which traditional single-pixel approaches cannot. We have thus developed deep learning models for multi-spectral, multi-pixel retrieval of the spatial distribution of cloud properties from satellite imagery and from images taken by a ground-based digital camera. Training data of pseudo-observation radiances are generated with a Monte Carlo 3-D radiative transfer model applied to cloud fields simulated by a high-resolution large eddy simulation model. The deep learning model is thereby trained to learn the multi-scale spatial structure of clouds in addition to the complicated relationships between cloud properties and radiances.
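The multi-pixel idea can be illustrated with a minimal NumPy sketch (not the actual retrieval model; the weights here are random and purely hypothetical): a convolutional layer computes each output pixel from a neighborhood of radiance pixels, so a retrieved cloud-property value depends on the surrounding radiance field rather than on a single pixel alone.

```python
import numpy as np

def conv2d(x, w, b):
    """Valid 2-D convolution.
    x: (C_in, H, W) input radiances; w: (C_out, C_in, k, k) kernels; b: (C_out,) biases.
    Each output pixel aggregates a k-by-k neighborhood of input pixels.
    """
    c_out, c_in, k, _ = w.shape
    H_out, W_out = x.shape[1] - k + 1, x.shape[2] - k + 1
    y = np.zeros((c_out, H_out, W_out))
    for o in range(c_out):
        for i in range(H_out):
            for j in range(W_out):
                y[o, i, j] = np.sum(w[o] * x[:, i:i + k, j:j + k]) + b[o]
    return y

rng = np.random.default_rng(0)
# Pseudo-radiances in three spectral channels (visible, near-infrared, shortwave infrared)
radiances = rng.random((3, 8, 8))
# Hypothetical, untrained 3x3 kernels mapping to two property maps
w = rng.normal(size=(2, 3, 3, 3)) * 0.1
b = np.zeros(2)
# ReLU nonlinearity; channel 0 stands in for optical thickness, channel 1 for effective radius
props = np.maximum(conv2d(radiances, w, b), 0.0)
print(props.shape)  # (2, 6, 6): per-pixel property maps derived from multi-pixel radiances
```

A real retrieval network stacks many such layers (with pooling or dilation to widen the receptive field), so that multi-scale spatial context enters each pixel's estimate.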
Deep CNNs achieve high retrieval accuracy for cloud properties such as cloud optical thickness and effective droplet radius from multi-spectral images in visible, near-infrared, and shortwave infrared channels, efficiently deriving the spatial distribution of cloud properties at many pixels at once. By exploiting multi-scale features in the images, the networks can recover information lost to 3-D radiative transfer, and results show significantly better accuracy than traditional approaches.