5:15 PM - 6:30 PM
[HDS09-P01] How S-net improves tsunami forecasting along the Pacific coasts of Japan
Keywords: Tsunami forecasting, S-net, Optimization
We conducted exhaustive synthetic experiments to appraise the performance of the world’s largest network of ocean-bottom pressure sensors for real-time tsunami monitoring: the Seafloor Observation Network for Earthquakes and Tsunamis along the Japan Trench (S-net), which consists of 150 seafloor observatories connected by approximately 5,800 km of submarine optical cables (Kanazawa et al. 2016; Mochizuki et al. 2018). Since its establishment in 2016, S-net has recorded only a few tsunami events, and these limited real data prevent a comprehensive appraisal of its efficacy against the multitude of plausible tsunami characteristics. Synthetic experiments are therefore needed as a proxy for, or complement to, the scarce records of actual events. In this study, we used synthetic data to estimate the accuracy of tsunami waveform inversion for forecasting coastal tsunami heights. We also applied an optimization technique to determine the optimal combination of stations with respect to earthquake magnitude. The analysis improves our understanding of how station distribution performs for sources of different sizes and locations.
Although tsunamis along the Pacific coasts of Japan can be generated by several types of causative earthquakes, e.g., outer-rise events, we limited our tsunami source scenarios to local megathrust earthquakes on the Japan Trench subduction zone with magnitudes ranging from Mw 7.7 to 9.1 at a 0.1-magnitude interval. We discretized the subduction-zone plate interface, taken from the Slab2 model (Hayes et al. 2018) with a downdip limit of 40 km, into 240 curvilinear grids with approximate sizes of ~20 km² to ~40 km². The method of Mai and Beroza (2002) was applied to characterize the complexity of earthquake slip, represented by a spatial random field with a von Karman autocorrelation function. We then calculated the sea surface displacement using the triangular dislocation in a half-space of Nikkhoo and Walter (2015), assuming a rake angle of 90°, instantaneous deformation, and the long-wave approximation.
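The following is a minimal sketch, not the authors' code, of how a stochastic slip distribution with von Karman spatial correlation can be synthesized in the wavenumber domain, loosely following Mai and Beroza (2002). It uses a rectangular proxy grid rather than the curvilinear subfaults of the study, and the grid dimensions, correlation lengths, Hurst exponent, and mean slip are illustrative assumptions.

```python
# Sketch: stochastic slip field via spectral synthesis with a von Karman
# power spectrum (assumed parameters, rectangular proxy grid).
import numpy as np

def von_karman_slip(nx=24, nz=10, dx=10.0, dz=10.0,
                    ax=40.0, az=20.0, hurst=0.75,
                    mean_slip=5.0, seed=0):
    """Return an (nz, nx) slip field [m] with von Karman spatial correlation."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(nx, d=dx) * 2.0 * np.pi        # along-strike wavenumbers
    kz = np.fft.fftfreq(nz, d=dz) * 2.0 * np.pi        # down-dip wavenumbers
    KX, KZ = np.meshgrid(kx, kz)
    # 2-D von Karman power spectral density (unnormalized)
    psd = (ax * az) / (1.0 + (KX * ax) ** 2 + (KZ * az) ** 2) ** (hurst + 1.0)
    phase = np.exp(2j * np.pi * rng.random((nz, nx)))  # random phases
    field = np.real(np.fft.ifft2(np.sqrt(psd) * phase))
    field = (field - field.mean()) / field.std()       # standardize
    return mean_slip * np.maximum(field + 1.0, 0.0)    # shift/clip to positive slip

slip = von_karman_slip()
print(slip.shape, slip.mean())
```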
To efficiently generate a large number of tsunamis virtually observed at the S-net stations, we used tsunami Green’s function summation, which leverages linear superposition (Satake 1987). We simulated tsunamis from the 240 subfaults, each with 1 m of slip, by solving the linear shallow-water equations with the numerical tsunami simulation code JAGURS (Baba et al. 2015). We set the simulation time to 5 hours and the numerical grid size to 1 arcminute, with bathymetry resampled from the GEBCO 30 arcsecond grid (Weatherall et al. 2015). The virtual tsunami observations were then obtained by applying the precalculated Green’s functions to the stochastically generated slip. In addition to the virtually observed waveforms at S-net, we also stored tsunami waveforms at specified coastal points along the 50-m isobath for validation purposes. Additionally, we performed a statistical analysis to estimate the number of samples per earthquake magnitude interval needed to efficiently represent a wide range of plausible scenarios. The analysis suggested that the variability of coastal tsunami heights is no longer significant beyond roughly 1,500 samples over the specified magnitude range.
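A minimal sketch of the Green’s function summation step is given below. It is illustrative rather than the study’s implementation: the Green’s functions are random placeholders standing in for precomputed JAGURS output, and the array shapes and 10-s sampling are assumptions.

```python
# Sketch: virtual S-net waveforms as the slip-weighted sum of unit-slip
# Green's functions (linear superposition, cf. Satake 1987).
import numpy as np

n_subfaults, n_stations, n_times = 240, 150, 1800    # 5 h at 10-s sampling (assumed)
rng = np.random.default_rng(1)

# G[i, j, t]: waveform at station j from 1 m of slip on subfault i
G = rng.normal(size=(n_subfaults, n_stations, n_times))   # placeholder for precomputed GFs
slip = rng.uniform(0.0, 10.0, size=n_subfaults)           # one stochastic slip scenario [m]

# Linear superposition over subfaults gives the virtual observations
virtual_obs = np.tensordot(slip, G, axes=(0, 0))          # shape (n_stations, n_times)
print(virtual_obs.shape)
```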
Based on the tsunami waveform inversion method for forecasting coastal tsunami heights, our study indicates that the vast coverage of S-net significantly improves the forecast skill in the study area. Even the least accurate case, at the smallest considered magnitude of Mw 7.7, yielded a mean accuracy of 99% using only tsunami data from 3–5 minutes after the earthquake. We note, however, that the accuracies stated in this study may not reflect actual operational forecast performance because of various factors (e.g., mechanical issues and the simplifications inherent in the synthetic data). Nonetheless, a comparable level of accuracy is difficult to achieve when only a few stations are available, as was the case before the S-net deployment. Finally, the optimization results show that the minimum number of stations required to maintain the accuracy attained by the existing 150-station configuration decreases from 130 to 90 as the earthquake size increases from Mw 7.7 to 9.1. The results also indicate the proportion of stations that predominantly contribute to the overall tsunami forecast skill at different earthquake magnitudes.
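The sketch below outlines an assumed version of the forecasting workflow, not the authors’ code: a damped non-negative least-squares inversion of an early S-net data window for subfault slip, followed by coastal height prediction from coastal Green’s functions. All matrices are random placeholders, and the window length, damping value, and array shapes are assumptions.

```python
# Sketch: waveform inversion of an early S-net window and coastal height forecast.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n_subfaults, n_stations, n_win, n_coast, n_times = 240, 150, 12, 100, 1800

true_slip = rng.uniform(0.0, 5.0, size=n_subfaults)
# Design matrix: unit-slip waveforms in the short data window, stacked over stations
G_win = rng.normal(size=(n_stations * n_win, n_subfaults))
d_obs = G_win @ true_slip                                  # synthetic "observed" window

# Damped non-negative least squares: append alpha*I rows to stabilize the solution
alpha = 0.1
A = np.vstack([G_win, alpha * np.eye(n_subfaults)])
b = np.concatenate([d_obs, np.zeros(n_subfaults)])
slip_est, _ = nnls(A, b)

# Forecast maximum coastal heights along the 50-m isobath from coastal GFs
G_coast = rng.normal(size=(n_coast, n_times, n_subfaults)) # placeholder coastal GFs
forecast_heights = (G_coast @ slip_est).max(axis=1)        # one value per coastal point
print(forecast_heights.shape)
```

In this framing, the station-subset optimization amounts to repeating the inversion with rows of G_win restricted to candidate station sets and keeping the smallest set whose forecast accuracy matches that of the full 150-station network.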