Japan Geoscience Union Meeting 2022

Presentation Information

[E] Poster presentation

Session symbol A (Atmospheric and Hydrospheric Sciences) » A-CG Multidisciplinary atmosphere, ocean, and environmental sciences / General

[A-CG38] Satellite Earth Environment Observation

Tue. May 31, 2022, 11:00 - 13:00, Online Poster Zoom Room (11) (Ch.11)

Convener: Riko Oki (Japan Aerospace Exploration Agency), Convener: Yoshiaki Honda (Center for Environmental Remote Sensing, Chiba University), Yukari Takayabu (Atmosphere and Ocean Research Institute, The University of Tokyo), Convener: Tsuneo Matsunaga (Center for Global Environmental Research / Satellite Observation Center, National Institute for Environmental Studies), Chairperson: Yoshiaki Honda (Center for Environmental Remote Sensing, Chiba University), Yukari Takayabu (Atmosphere and Ocean Research Institute, The University of Tokyo), Tsuneo Matsunaga (Center for Global Environmental Research / Satellite Observation Center, National Institute for Environmental Studies)

11:00 - 13:00

[ACG38-P03] Improvements of cloud detection algorithm of GCOM-C with neural network method

*Waku Tanada1, Hiroshi Murakami1 (1. Japan Aerospace Exploration Agency)

Keywords: cloud detection, GCOM-C, deep learning

GCOM-C (Global Change Observation Mission - Climate), called "SHIKISAI", is a JAXA polar-orbiting satellite launched on 23 December 2017. The objective of GCOM-C is to observe environmental changes of the Earth with the Second-generation GLobal Imager (SGLI), a multi-band optical imaging radiometer. The SGLI cloud flag (CLFG) product includes cloud/clear discrimination information, cloud thermodynamic phase information, and so on. Since CLFG is also used as input data for other products, such as the aerosol product and the clear-sky TOA radiance product, it is important to ensure its accuracy. However, the Ver.2 CLFG has some known issues in which the cloud flag is misclassified under limited conditions. In this study, we investigated these issues and developed a new algorithm to reduce the misclassifications.

The known issues of the Ver.2 CLFG are misclassification between snow/ice and clear sky in the daytime (e.g., low-reflectance snow-covered regions), misclassification between heavy aerosols and clouds in the daytime (e.g., extreme wildfires), and misclassification between clouds and clear sky at nighttime (e.g., in high-latitude regions).

To improve the classification accuracy in the cases above, we newly developed a deep neural network (DNN) method. This DNN method is processed at the same time as the Ver.2 CLFG method (CLAUDIA). For each pixel, the results obtained from the two methods are compared, and the more plausible one is chosen as the final cloud flag. For areas that do not need improvement, the cloud flag remains the same as in the Ver.2 method, because the Ver.3 algorithm is designed to focus on the weak points of Ver.2.
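As a rough illustration of this merging step, the per-pixel selection could be organized as in the following Python sketch. The function name, the use of a DNN confidence score, and the threshold rule for deciding which result is "more plausible" are assumptions for illustration; the abstract does not specify the actual selection criterion.

```python
import numpy as np

def merge_cloud_flags(clfg_v2, dnn_flag, dnn_confidence, threshold=0.9):
    """Merge the Ver.2 (CLAUDIA) flag with the DNN flag pixel by pixel.

    clfg_v2, dnn_flag : 2-D integer arrays of per-pixel class codes
    dnn_confidence    : 2-D array of the DNN's predicted class probability
                        (an assumed output; not described in the abstract)
    threshold         : keep the Ver.2 flag unless the DNN is confident
    """
    merged = clfg_v2.copy()
    # Overwrite only the pixels where the two methods disagree and the
    # DNN result looks more plausible (here: high predicted probability).
    disagree = (clfg_v2 != dnn_flag) & (dnn_confidence >= threshold)
    merged[disagree] = dnn_flag[disagree]
    return merged
```

With this structure, pixels outside the known problem cases keep the Ver.2 result, which matches the stated behavior that areas not needing improvement output the same flag as Ver.2.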

We created the training input dataset and the ground-truth data for various regions on various days of the year. We manually labeled the category of each pixel by inspecting the RGB true-color image and the QA flag of the SIPR (Snow Ice PRoperties) product. The defined categories are "land", "ocean", "cloudy", "heavy aerosol", and "ice/snow". The training dataset contains more than 500,000 pixels in total.
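A minimal sketch of how such a labeled pixel dataset might be organized is given below; the per-pixel feature vector (which SGLI channels are used) is an assumption, since the abstract does not list the input bands.

```python
import numpy as np

# Category labels as defined in the text.
CATEGORIES = ["land", "ocean", "cloudy", "heavy aerosol", "ice/snow"]
LABEL_TO_INDEX = {name: i for i, name in enumerate(CATEGORIES)}

def build_training_arrays(labeled_pixels):
    """Convert manually labeled pixels into feature/label arrays.

    labeled_pixels : iterable of (features, category_name) pairs, where
                     `features` is a 1-D array of per-pixel SGLI channel
                     values (the exact channels are assumed here).
    """
    features = np.array([f for f, _ in labeled_pixels], dtype=np.float32)
    labels = np.array([LABEL_TO_INDEX[name] for _, name in labeled_pixels],
                      dtype=np.int64)
    return features, labels
```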

We used a 3-layer DNN architecture in this method, and the trained model shows a high accuracy of about 99% for all categories, except 88% for nighttime aerosol. The pixels used for verification were taken from the same tiles as the areas containing the training pixels, although exactly the same pixels were not used for validation.
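The abstract only states that the network has three layers and five output categories, so the following PyTorch sketch fills in the remaining details (input channel count, hidden layer widths, optimizer, and learning rate) as assumptions for illustration.

```python
import torch
from torch import nn

N_FEATURES = 16    # assumed number of input SGLI channels per pixel
N_CLASSES = 5      # land, ocean, cloudy, heavy aerosol, ice/snow

# A 3-layer fully connected network; the hidden sizes are illustrative guesses.
model = nn.Sequential(
    nn.Linear(N_FEATURES, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, N_CLASSES),   # per-pixel class scores
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(x, y):
    """One gradient step on a batch of labeled pixels (x: float features, y: int labels)."""
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```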

As a result, the main improvements achieved by the Ver.3 cloud flag algorithm are as follows:

1. Reduced misclassification between snow/ice and clear-sky pixels.

2. Daytime heavy aerosols can be detected as aerosols rather than as clouds.

3. Reduced misclassification between clouds and clear sky at nighttime (the number of clear-sky pixels increased). The number of valid pixels for other SGLI products, such as LST (Land Surface Temperature), is also expected to increase owing to the improved cloud flag at night. However, the accuracy of clear-sky detection at night in high latitudes still needs to be addressed in future work.