11:15 AM - 11:30 AM
[AAS07-09] Emulating rainfall-runoff-inundation model with deep convolutional neural network
Keywords: d4PDF, rainfall-runoff-inundation, emulator, deep learning
Predicting the spatial distribution of maximum inundation depth for individual rainfall events is important for mitigating hydrological disasters induced by extreme precipitation. Physics-based rainfall-runoff-inundation (RRI) models, the mainstream approach for predicting hydrological disasters, require massive computational resources to run simulations. Here, this study aims to develop a computationally inexpensive deep-learning emulator (hereafter, Rain2Surface) of an RRI model. This study focuses on the Omono River in Akita Prefecture and emulates the prediction of the spatial distribution of maximum inundation depth from spatial and temporal rainfall data for individual events.
Rain2Surface is built on a deep convolutional neural network. As input rainfall data, we used hourly rainfall at 13 AMeDAS stations over seven days, drawn from a 50-member ensemble of 30-year data from large-ensemble weather/climate predictions (d4PDF). The maximum inundation depth used for training was simulated in advance with the RRI model of Sayama et al. (2014). Rain2Surface consists of two components. The first component extracts feature data from the input rainfall data with one-dimensional convolutional layers and a fully connected layer. The second component expands the feature data into the spatial distribution of maximum inundation depth with two-dimensional transposed convolutional layers.
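The shape arithmetic behind this encoder-decoder design can be sketched with the standard convolution output-size formulas. This is a minimal illustration, not the authors' actual architecture: the kernel sizes, strides, feature-map size (8x8), and target grid size (128x128) below are assumptions chosen only to show how strided 1-D convolutions compress the 7-day (168-hour) rainfall series and how stride-2 transposed convolutions expand a small feature map into an inundation-depth grid.

```python
def conv1d_out(length, kernel, stride=1, padding=0):
    # output length of a standard 1-D convolution
    return (length + 2 * padding - kernel) // stride + 1

def convtranspose2d_out(size, kernel, stride=1, padding=0):
    # output size per spatial dimension of a transposed (up-)convolution
    return (size - 1) * stride - 2 * padding + kernel

# Encoder side: 168 hourly steps (7 days) of station rainfall,
# shortened by three strided 1-D convolutions (illustrative layers)
# before a fully connected layer maps the result to feature data.
t = 168
for kernel, stride in [(7, 2), (5, 2), (3, 2)]:
    t = conv1d_out(t, kernel, stride)
print(t)  # temporal length entering the fully connected layer -> 19

# Decoder side: expand a hypothetical 8x8 feature map to a 128x128
# depth grid; each stride-2 transposed convolution doubles the size.
s = 8
for _ in range(4):
    s = convtranspose2d_out(s, kernel=4, stride=2, padding=1)
print(s)  # -> 128
```

Each kernel-4/stride-2/padding-1 transposed convolution exactly doubles the spatial size, which is why a handful of such layers suffices to go from a compact feature vector to a full inundation map.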
The prediction accuracy of Rain2Surface was 20 cm in root mean square error (RMSE) and 0.93 in the coefficient of determination (γ) at a station with deep maximum inundation depth (hereafter, point A). The emulator provided more accurate predictions than our previous study (Kotsuki et al., 2020; RMSE = 29 cm, γ = 0.89), which used ensemble learning of multiple regularized regressions. This suggests that the nonlinear convolutional neural network effectively extracts the features required for the output inundation data from the input rainfall data. The approach of Kotsuki et al. (2020) requires developing an independent model for each location to predict the spatial distribution of maximum inundation depth. In contrast, this study obtains the spatial distribution of maximum inundation depth by training only a single model, namely Rain2Surface.
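The two evaluation metrics above are standard and can be computed as follows. This is a generic sketch with synthetic depth values for illustration only; it does not use the study's data, and γ is interpreted here as the usual coefficient of determination (1 minus residual sum of squares over total sum of squares).

```python
import numpy as np

def rmse(pred, obs):
    # root mean square error between predicted and reference depths
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def coeff_determination(pred, obs):
    # coefficient of determination: 1 - SS_res / SS_tot
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Synthetic maximum inundation depths (metres) at one point across events.
obs = np.array([0.0, 0.5, 1.2, 2.0, 3.1])
pred = np.array([0.1, 0.4, 1.0, 2.2, 3.0])
print(rmse(pred, obs), coeff_determination(pred, obs))
```

Both metrics are computed per location over the ensemble of events, so a single model evaluated at many grid cells yields a spatial map of accuracy.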