[PPS08-P12] Estimation of dust optical depth observed by MGS/TES from visible images observed by MGS/MOC using a convolutional neural network
Keywords: Mars, CNN, deep learning, MGS/MOC, MGS/TES
To understand the initiation and development of dust haze on Mars, we need the atmospheric dust distribution at horizontal scales from O(10) to O(100) km. Although visible images have been acquired by several orbiters, dust optical depth (DOD) cannot be derived from them directly, and thin dust haze cannot be distinguished from water ice clouds. DOD measured by an infrared spectrometer is therefore useful; however, the field of view (FOV) of such a spectrometer is generally too narrow to resolve the fine structures of developing dust haze.
For that reason, we develop a method for estimating the DOD at each pixel of images observed by MGS/MOC from the local patterns of the visible images themselves, using a convolutional neural network (CNN) trained with DOD observed by MGS/TES as the ground truth.
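The patch-to-DOD regression described above can be illustrated with a minimal, framework-free forward-pass sketch. All sizes, kernels, and weights here are illustrative assumptions, not the actual trained architecture:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D cross-correlation for a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def predict_dod(patch, kernels, weights, bias):
    """Forward pass: conv -> ReLU -> global average pool -> linear head,
    mapping one visible-image patch to a scalar DOD estimate."""
    feats = np.array([conv2d_valid(patch, k).clip(min=0).mean() for k in kernels])
    return float(feats @ weights + bias)

# Illustrative random patch and parameters (hypothetical, for shape-checking only).
rng = np.random.default_rng(0)
patch = rng.random((16, 16))                  # one MOC-like grayscale patch
kernels = rng.standard_normal((4, 3, 3)) * 0.1
weights = rng.standard_normal(4) * 0.1
dod = predict_dod(patch, kernels, weights, bias=0.2)
```

In practice the trained model would stack several such convolutional layers and learn the kernels and head weights by minimizing a regression loss against the TES DOD labels.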
Most of the DOD data are much smaller than 1, and such an imbalance in data volume between large and small DODs tends to result in poor estimation. We therefore divided the DOD range into regular bins, extracted the same number of samples from each bin, and trained the CNN model on the balanced data set. The validation loss was 0.148. However, it turned out that the predicted DOD depended strongly on the local brightness of the MGS/MOC images, so we tried to improve the model by normalizing the input data. The validation loss then increased from 0.148 to 0.181, i.e., the nominal performance degraded. This degradation is not a serious problem; rather, it paves the way to improving the estimation, because the dependence of the DOD estimates on local brightness has been reduced.
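The two preprocessing steps described here, balanced sampling across DOD bins and per-patch brightness normalization, can be sketched as follows. The bin edges, sample counts, and synthetic DOD distribution are illustrative assumptions:

```python
import numpy as np

def balanced_sample(dod, n_per_bin, bin_edges, rng):
    """Draw the same number of samples from each DOD bin (sampling with
    replacement when a bin holds fewer than n_per_bin values)."""
    idx = np.digitize(dod, bin_edges) - 1
    picks = []
    for b in range(len(bin_edges) - 1):
        members = np.flatnonzero(idx == b)
        if members.size == 0:
            continue  # skip empty bins
        replace = members.size < n_per_bin
        picks.append(rng.choice(members, size=n_per_bin, replace=replace))
    return np.concatenate(picks)

def normalize_patch(patch):
    """Suppress local brightness: zero mean, unit variance per patch."""
    return (patch - patch.mean()) / (patch.std() + 1e-8)

# Illustrative usage with a synthetic, small-DOD-dominated distribution.
rng = np.random.default_rng(0)
dod_values = rng.exponential(0.5, size=1000).clip(0.0, 1.999)
edges = np.linspace(0.0, 2.0, 5)          # four regular bins over [0, 2)
sel = balanced_sample(dod_values, 50, edges, rng)
```

The per-patch normalization removes the mean brightness that the first model was evidently keying on, forcing the network to rely on spatial texture instead.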
In our presentation, we will report the results of further improving the CNN model by data augmentation and by making the model structure more complex.