17:15 〜 19:15
[STT43-P01] Creation of training and validation datasets of Distributed Acoustic Sensing recordings from Sanriku seafloor observation system, Japan.
Distributed Acoustic Sensing (DAS) is an emerging technology revolutionizing seismology. Ground motion can be derived from the strain-rate measurements retrieved along an optical fiber cable. DAS offers several advantages for monitoring earthquake activity, including the use of existing telecommunication infrastructure and the ability to retrieve thousands of earthquake signals. However, processing such vast volumes of data poses a challenge for human analysts. To address this issue, recent advances in DAS research aim to develop AI-based algorithms for processing DAS data. Although some deep-learning models have been developed and tested with promising results, there is still room for improvement, especially for seafloor recordings, where additional noise sources challenge current methodologies. In this work, we present the results of a procedure designed to create training and validation datasets from seafloor DAS data. The approach consists of two steps: 1) detecting earthquake signals using envelope template matching, and 2) identifying seismic phases using conventional seismological techniques, including STA/LTA, RMS, AR-AIC, and Kurtosis/Skewness. We used DAS data from the seafloor observation cable in Sanriku, Japan. Between 08:51:00 and 14:30:00 on 2022-02-28, we identified seven seismic events with magnitudes between 1.4 and 5.0 and retrieved thousands of DAS recordings with different signal-to-noise ratios. Finally, we used the resulting catalog to train a deep-learning model from scratch, leveraging models developed for speech recognition, and to fine-tune existing deep-learning methods (PhaseNet-DAS) to improve their performance on seafloor data. With this work, we expect to highlight the importance of such datasets for improving current AI-based methods for processing seafloor DAS data.
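The first step, envelope template matching, can be sketched as a sliding normalized cross-correlation between the envelope of a known event and the envelope of the continuous record. The sketch below is illustrative only: it approximates the envelope with a rectified, smoothed trace (rather than any specific transform used in the study), and the window lengths and thresholds are hypothetical, not the parameters used on the Sanriku data.

```python
import numpy as np

def envelope(x, nsmooth=20):
    """Crude signal envelope: rectify, then smooth with a moving average.
    nsmooth (samples) is an illustrative choice, not from the abstract."""
    kernel = np.ones(nsmooth) / nsmooth
    return np.convolve(np.abs(x), kernel, mode="same")

def match_envelope(template, stream, nsmooth=20):
    """Sliding normalized cross-correlation of envelopes.
    Returns one correlation coefficient per candidate window start."""
    te = envelope(template, nsmooth)
    te = te - te.mean()
    te /= np.linalg.norm(te) + 1e-12
    se = envelope(stream, nsmooth)
    m = len(template)
    cc = np.empty(len(stream) - m + 1)
    for i in range(len(cc)):
        w = se[i:i + m] - se[i:i + m].mean()
        cc[i] = np.dot(te, w) / (np.linalg.norm(w) + 1e-12)
    return cc

# Hypothetical usage: a 10 Hz burst hidden in background noise.
fs = 100.0
t = np.arange(200) / fs
template = np.sin(2 * np.pi * 10 * t)
rng = np.random.default_rng(0)
stream = 0.05 * rng.standard_normal(3000)
stream[1200:1400] += template          # event inserted at sample 1200
cc = match_envelope(template, stream)  # peak of cc marks the detection
```

Detections would then be declared where `cc` exceeds a threshold; because the envelope discards polarity and fine waveform detail, this kind of matching is tolerant to waveform distortion, which is one reason it suits noisy seafloor channels.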
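For the second step, the classic STA/LTA characteristic function compares a short-term average of signal energy against a long-term average; a phase arrival makes the ratio spike. The following is a minimal numpy sketch of the generic algorithm, with arbitrary example window lengths (the abstract does not state the values actually used).

```python
import numpy as np

def sta_lta(signal, fs, sta_win=0.5, lta_win=5.0):
    """STA/LTA ratio on squared amplitudes (energy).
    sta_win/lta_win are window lengths in seconds; the defaults here
    are illustrative, not the study's parameters."""
    nsta = int(sta_win * fs)
    nlta = int(lta_win * fs)
    energy = np.asarray(signal, dtype=float) ** 2
    # Running means of the energy via a cumulative sum.
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    sta = (csum[nsta:] - csum[:-nsta]) / nsta
    lta = (csum[nlta:] - csum[:-nlta]) / nlta
    # Keep only samples where both windows fit, ending at the same point.
    n = min(len(sta), len(lta))
    return sta[-n:] / np.maximum(lta[-n:], 1e-12)

# Hypothetical usage: low-level background with a strong 0.5 s onset.
fs = 100
sig = np.full(1000, 0.01)
sig[600:650] = 1.0
ratio = sta_lta(sig, fs)  # ratio spikes once the STA window hits the onset
```

A pick would be assigned where the ratio first crosses a trigger threshold; in practice this is combined with the other pickers named above (RMS, AR-AIC, Kurtosis/Skewness) to refine the phase-arrival estimate.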