*John B. Rundle¹
(¹University of California, Davis)
Keywords: Artificial Intelligence, Earthquake Nowcasting, Machine Learning, Generative AI
We present a new approach to earthquake nowcasting based on science transformers (G. C. Fox et al., Geohazards, 2022). As explained in the seminal paper by Vaswani et al. (NIPS, 2017), a transformer is a type of deep learning model that learns the context of a set of time series values by tracking the relationships in a sequence of data, such as the words in a sentence. Transformers extend deep learning through the adoption of a context-sensitive protocol, "attention", which is used to tag important sequences of data and to identify relationships between those tagged data. Pretrained transformers are the foundational technology underpinning the new AI models ChatGPT (Generative Pretrained Transformer) from OpenAI.com and Bard from Google.com. In our case, we hypothesize that a transformer might be able to learn the sequence of events leading up to a major earthquake. Typically, the number of data used to train such models is in the billions or larger, so these models, when applied to earthquake problems, require data sets of a size that only long numerical earthquake simulations can provide. In this research, we are developing the Earthquake Generative Pretrained Transformer model, "QuakeGPT", in a similar vein. For simulations, we are using catalogs from a stochastic, physics-informed earthquake simulation model, "ERAMSS", which is similar to the more common ETAS models. As described in a talk elsewhere at this meeting, ERAMSS has only 3 uncorrelated parameters that are easily retrieved from the observed catalog. In the future, physics-based models such as the Virtual Quake model will be used as well. The observed data, which are the data to be anticipated with nowcasting, are taken from the USGS online catalog for California. In this talk, we discuss the architecture of QuakeGPT and report first results.
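To make the transformer concept concrete, the following is a minimal sketch of a GPT-style, decoder-only sequence model for next-event prediction on a tokenized earthquake catalog, written in PyTorch. All names, layer sizes, and the token representation (e.g., binned magnitudes or inter-event times) are illustrative assumptions; this is not the actual QuakeGPT architecture, only a sketch of the general technique the abstract describes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QuakeGPTSketch(nn.Module):
    """GPT-style decoder-only transformer; hypothetical stand-in for QuakeGPT."""
    def __init__(self, n_tokens=64, d_model=128, n_heads=4, n_layers=4, max_len=512):
        super().__init__()
        # Each integer token might encode, e.g., a binned magnitude or inter-event time.
        self.token_emb = nn.Embedding(n_tokens, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True, norm_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_tokens)  # logits for the next token

    def forward(self, tokens):                     # tokens: (batch, seq_len) int ids
        b, t = tokens.shape
        pos = torch.arange(t, device=tokens.device)
        x = self.token_emb(tokens) + self.pos_emb(pos)
        # Causal attention mask: each event attends only to earlier events.
        mask = torch.triu(torch.full((t, t), float("-inf"),
                                     device=tokens.device), diagonal=1)
        x = self.blocks(x, mask=mask)
        return self.head(x)                        # (batch, seq_len, n_tokens)

# One pretraining step: learn to predict each next event in a (simulated) catalog.
model = QuakeGPTSketch()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
seq = torch.randint(0, 64, (8, 128))               # stand-in for tokenized simulation data
logits = model(seq[:, :-1])                        # predict token i+1 from tokens up to i
loss = F.cross_entropy(logits.reshape(-1, 64), seq[:, 1:].reshape(-1))
loss.backward()
opt.step()
```

In the workflow the abstract outlines, a model of this kind would be pretrained on long simulated catalogs (such as those from ERAMSS) and then evaluated on its ability to anticipate sequences in the observed USGS catalog.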