11:45 AM - 12:00 PM
[S04-2-05] From historical seismology to seismogenic source models, 20 years on: results and challenges
Over the past 20 years historical seismology has gone through a silent revolution that turned it from purely descriptive to fully quantitative, transformed its outcomes from vaguely subjective to solidly reproducible, and – unbeknownst to many – greatly increased its relevance far beyond the mere computation of "activity rates" in modern seismic hazard assessment (SHA) practice.
In Italy the main aspect of this revolution was spawned by the creation of "new generation catalogues", large databases providing for each earthquake all felt intensities along with precise identification of the reported sites and even a description of the earthquake effects. These catalogues allowed automatic processing of intensity data to derive the epicentral parameters, an equivalent magnitude and the geometric parameters of the prospective causative fault through a Fortran code termed Boxer (Gasperini et al., BSSA, 1999).
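To illustrate the kind of processing involved, the sketch below estimates a macroseismic epicentre and an equivalent magnitude from felt-intensity reports. This is a deliberately simplified, hypothetical example of first-order macroseismic inversion, not the actual Boxer algorithm: the data points, the barycentre approach, and the magnitude-conversion coefficients are all illustrative placeholders.

```python
# Hypothetical felt-intensity reports: (lat, lon, MCS intensity).
# Values are invented for illustration only.
reports = [
    (42.70, 13.25, 9.0),
    (42.66, 13.30, 8.5),
    (42.75, 13.20, 8.0),
    (42.50, 13.40, 6.0),
    (42.90, 13.00, 5.5),
]

def epicentre_from_intensities(reports, top_n=3):
    """Estimate the macroseismic epicentre as the barycentre of the
    highest-intensity sites (a common first-order approximation)."""
    top = sorted(reports, key=lambda r: r[2], reverse=True)[:top_n]
    lat = sum(r[0] for r in top) / len(top)
    lon = sum(r[1] for r in top) / len(top)
    return lat, lon

def equivalent_magnitude(i0, a=0.667, b=2.0):
    """Convert epicentral intensity I0 to an equivalent magnitude via a
    generic linear relation M = a*I0 + b; the coefficients here are
    placeholders, not Boxer's calibrated values."""
    return a * i0 + b

lat, lon = epicentre_from_intensities(reports)
i0 = max(r[2] for r in reports)       # epicentral intensity
mag = equivalent_magnitude(i0)        # equivalent magnitude estimate
```

The real Boxer code additionally derives the length, width and orientation of the prospective causative fault from the spatial distribution of the highest intensities.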
Increased data resolution and manageability allowed large Italian earthquake sequences to be explored in depth, often revealing an unexpected source complexity and setting new constraints on the location and magnitude of the most significant shocks (e.g. Fracassi & Valensise, BSSA, 2007; Burrato & Valensise, BSSA, 2008). This evidence contributed to unraveling the arrangement and behavior of large seismogenic fault systems, greatly supporting seismotectonic interpretations in areas dominated by great tectonic complexity and by blind faulting (Basili et al., Tectonophysics, 2008; Kastelic et al., MPG, 2013; Vannoli et al., Pageoph, 2015).
More recently, historical data have been used to calculate earthquake budgets to be compared with geodetic evidence for ongoing strain and fault slip rates, thus strengthening earthquake recurrence models devised for seismic hazard assessment (Carafa et al., GJI, 2017).
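A moment-budget comparison of this kind can be sketched as follows. The catalogue, time span and geodetic moment rate below are invented placeholders; only the magnitude-to-moment conversion (the standard Hanks & Kanamori relation) is a real formula.

```python
def seismic_moment(mw):
    """Scalar seismic moment (N*m) from moment magnitude Mw,
    using the standard Hanks & Kanamori (1979) relation."""
    return 10 ** (1.5 * mw + 9.05)

# Hypothetical historical catalogue for one fault system: (year, Mw).
catalogue = [(1703, 6.7), (1915, 7.0), (2009, 6.3)]

span_years = 2020 - 1700  # observation window of the catalogue

# Average seismic moment release rate over the catalogue window (N*m/yr).
seismic_rate = sum(seismic_moment(m) for _, m in catalogue) / span_years

# Hypothetical geodetic moment accumulation rate for the same region
# (N*m/yr), e.g. derived from strain rates and seismogenic thickness.
geodetic_rate = 2.0e17

# Fraction of the geodetically accumulated moment not yet released
# seismically; a large positive value may indicate a seismic gap.
deficit_fraction = 1.0 - seismic_rate / geodetic_rate
```

Comparing the two rates is what allows historical catalogues to constrain recurrence models: where the seismic release persistently lags the geodetic accumulation, the discrepancy must be explained by aseismic slip, catalogue incompleteness, or unrecognised faults.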
Little of this would have been possible without historical seismology data, an often overlooked wealth of information that has yet to be exploited in many seismogenic areas worldwide.