Keywords: Aftershock forecasting, Point process, Bayesian forecasting
Aftershock forecasting provides an important measure for mitigating earthquake damage. For this purpose, both statistics-based and physics-based models have been developed. When forecasting with these models, we usually adopt a single optimal set of parameter values, such as the maximum likelihood estimates; this is called plug-in forecasting. However, given the small and incomplete data available shortly after the main shock, the estimation of the model parameters may be accompanied by large uncertainty. In such a case, plug-in forecasting underestimates the predictive probability range, and the forecast is sometimes significantly biased relative to the actual observations. Alternatively, more robust and unbiased forecasts can be obtained by accounting for the estimation uncertainty in an appropriate way. Bayesian forecasting provides a consistent statistical framework for this, and enables us to assess the forecast uncertainty. In this talk, we will argue for the importance of evaluating the forecast uncertainty in probabilistic forecasting. As an example, we employ the epidemic type aftershock sequence (ETAS) model as the forecasting model, and we show how plug-in forecasting can fail and how Bayesian forecasting can improve the performance. We will also argue that Bayesian predictors should be tested in CSEP forecasting experiments.
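The contrast between the two approaches can be sketched as follows (a standard formulation; the notation below is assumed for illustration and is not taken from the abstract itself). Plug-in forecasting evaluates the predictive probability at a single parameter estimate $\hat{\theta}$, whereas Bayesian forecasting averages over the posterior distribution of $\theta$ given the early aftershock data $\mathcal{D}$:

```latex
% Plug-in predictive probability: conditions on the MLE only,
% ignoring parameter uncertainty.
p_{\mathrm{plug}}(y \mid \mathcal{D}) = p(y \mid \hat{\theta}),
\qquad \hat{\theta} = \arg\max_{\theta} p(\mathcal{D} \mid \theta)

% Bayesian predictive probability: integrates over the posterior,
% so parameter uncertainty widens the predictive range.
p_{\mathrm{Bayes}}(y \mid \mathcal{D})
  = \int p(y \mid \theta)\, p(\theta \mid \mathcal{D})\, d\theta,
\qquad p(\theta \mid \mathcal{D}) \propto p(\mathcal{D} \mid \theta)\, \pi(\theta)

% For reference, the (temporal) ETAS conditional intensity commonly
% takes the form
\lambda(t \mid \theta)
  = \mu + \sum_{i:\, t_i < t}
    \frac{K\, e^{\alpha (M_i - M_c)}}{(t - t_i + c)^{p}},
\qquad \theta = (\mu, K, \alpha, c, p)
```

When the posterior $p(\theta \mid \mathcal{D})$ is broad, as is typical for short early-sequence data, the Bayesian predictive distribution is wider than the plug-in one, which is precisely why the plug-in range can be too narrow.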
Reference: T. Omi, Y. Ogata, Y. Hirata, & K. Aihara, "Intermediate-term forecasting of aftershocks from an early aftershock sequence: Bayesian and ensemble forecasting approaches", JGR (in revision).