Japan Geoscience Union Meeting 2025

Presentation Information

[E] Oral

Session ID S (Solid Earth Sciences) » S-SS Seismology

[S-SS06] New trends in data acquisition, analysis and interpretation of seismicity

Fri. May 30, 2025, 13:45 - 15:15, Room 301A (Makuhari Messe International Conference Hall)

Conveners: Bogdan Enescu (Department of Geophysics, Division of Earth and Planetary Sciences, Graduate School of Science, Kyoto University), Francesco Grigoli (University of Pisa), Yosuke Aoki (Earthquake Research Institute, The University of Tokyo), Takahiko Uchide (Research Institute of Earthquake and Volcano Geology, Geological Survey of Japan, AIST); Chairpersons: Bogdan Enescu (Department of Geophysics, Division of Earth and Planetary Sciences, Graduate School of Science, Kyoto University), Francesco Grigoli (University of Pisa), Yosuke Aoki (Earthquake Research Institute, The University of Tokyo), Takahiko Uchide (Research Institute of Earthquake and Volcano Geology, Geological Survey of Japan, AIST)

14:45 - 15:00

[SSS06-05] Information Content of Earthquake Catalogs

*John B Rundle1 (1.University of California Davis)

Keywords: Earthquake Catalogs, Information Entropy, Nowcasting, Forecasting, Machine Learning

The question of whether earthquake occurrence is random in time, or perhaps chaotic with order hidden in the chaos, is of major importance to determining the risk from these events. It was shown many years ago that if aftershocks are removed from earthquake catalogs, what remains is a set of events that apparently occur at random time intervals and are therefore not predictable in time. In the present work, we enlist machine learning methods together with Receiver Operating Characteristic (ROC) analysis. With these methods, probabilities of large events and their associated information value can be computed. Here information value is defined using Shannon information entropy, which Claude Shannon (Shannon, 1948) introduced to quantify the surprise value of a communication such as a string of computer bits. Random messages can be shown to have high entropy, surprise value, or uncertainty, whereas low entropy is associated with reduced uncertainty and high reliability. An earthquake nowcast probability associated with reduced uncertainty and greater reliability is most desirable. Examples of the latter are statements that there is a 90% probability of a major earthquake within 3 years, or a 5% chance of a major earthquake within 1 year. Despite the random intervals between major earthquakes, we find that it is possible to make low-uncertainty, high-reliability statements about current hazard by the use of machine learning methods. Using stochastic earthquake simulations, we show that significant information in the catalogs arises from the "non-Poisson" power-law aftershock clustering, implying that the common practice of de-clustering observed catalogs may remove information that would otherwise be useful in forecasting and nowcasting.
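The link between entropy and reliability described above can be illustrated with a short sketch. This is not the author's code; it simply evaluates the Shannon entropy (in bits) of the two example nowcast statements, showing that a confident probability (90% or 5%) carries less uncertainty than a maximally uncertain 50/50 statement.

```python
import math

def surprise(p):
    """Shannon information ("surprise") of an event with probability p, in bits."""
    return -math.log2(p)

def entropy(probs):
    """Shannon entropy of a discrete distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# "90% probability of a major earthquake within 3 years": low entropy,
# low uncertainty, high reliability.
confident = entropy([0.9, 0.1])
# A maximally uncertain 50/50 nowcast has the highest binary entropy (1 bit).
uncertain = entropy([0.5, 0.5])
print(f"entropy of 90/10 nowcast: {confident:.3f} bits")
print(f"entropy of 50/50 nowcast: {uncertain:.3f} bits")
```

The same function shows that the "5% chance within 1 year" statement is likewise low-entropy: binary entropy is symmetric about p = 0.5, so confident statements in either direction reduce uncertainty.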
Interval statistics have been used to conclude that major earthquakes are random events in time and cannot be anticipated or predicted. Machine learning is a powerful new technique that enhances our ability to understand the information content of earthquake catalogs. We show that catalogs contain significant information on current hazard and future predictability for large earthquakes.
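The ROC analysis mentioned in the abstract can be sketched in a few lines. The data here are synthetic placeholders, not results from a real catalog: a "score" stands in for a nowcast feature (e.g., small-earthquake activity in a time window) and a binary "label" for whether a large event followed. The AUC (area under the ROC curve) is computed with the rank (Mann-Whitney) formulation: the probability that a randomly chosen positive case outscores a randomly chosen negative one.

```python
import random

def roc_auc(scores, labels):
    """AUC via the rank (Mann-Whitney) formulation: fraction of
    positive/negative pairs in which the positive case scores higher
    (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

random.seed(0)
# Synthetic labels: did a large event follow each time window? (illustrative)
labels = [int(random.random() < 0.3) for _ in range(200)]
# A score correlated with the label mimics an informative, clustered
# ("non-Poisson") catalog: AUC well above 0.5.
scores = [y + random.gauss(0, 0.8) for y in labels]
print(f"AUC, informative nowcast:   {roc_auc(scores, labels):.2f}")
# Pure noise mimics a memoryless (Poisson) catalog: AUC near 0.5 (no skill).
noise = [random.gauss(0, 1) for _ in labels]
print(f"AUC, uninformative nowcast: {roc_auc(noise, labels):.2f}")
```

An AUC of 0.5 is the chance baseline, so under this toy setup de-clustering a catalog (removing the correlated structure) would push the nowcast's AUC toward 0.5, which is one way to read the abstract's point that de-clustering may discard forecasting information.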