11:30 AM - 11:45 AM
[G02-08] Trial of composing a melody using ground motion records of the 2013 earthquake near Awaji Island
Keywords: ground motion record, composing a melody, education for disaster mitigation
Understanding strong ground motion is important for protecting ourselves from it. In this report, as an outreach effort for earthquake education and disaster prevention, we attempted to express ground motion records in a form other than time-history waveforms. In particular, we sought a method for converting seismic records into a melody, so that listeners can "feel" the nature of the ground motion by ear. In this study, we applied a modified version of the method of Yamada and Okubo (2012) to the 2013 earthquake near Awaji Island, aiming to convey, in a sensory way, the differences in seismic wave propagation and in ground motion from site to site.
In this paper, we show the spatial characteristics of the ground motion from the K-NET records of the 2013 Awaji Island earthquake (277 stations) and introduce an attempt to create a melody from these data. For data processing, we computed the Fourier spectrum of the UD-component velocity record over the 20.48 s following the S-wave arrival at each station and extracted the peak spectral amplitude and the dominant frequency. The results show the expected trends in the ground motion, such as longer dominant periods in sedimentary basins.
We then used these quantities to create a musical score, controlling the strength, pitch, and placement of the notes. To determine the pitch of a note, we applied the standard relationship between key number and note frequency to the frequency characteristics of the ground motion. The peak spectral amplitude determined the loudness: it was mapped to reference values for the dynamic markings on the score. To place the notes on the score while preserving the temporal characteristics, we ordered the stations by S-wave arrival and assigned notes to time bins of either 1 s or 0.01 s per note. With 1 s bins, many stations have nearly simultaneous S-wave arrivals, so a single note position carries many overlapping pitches; with 0.01 s bins, many time slots are left empty. Each choice creates its own problems, and there is room for ingenuity in how the notes are arranged. The tempo of the piece and the choice of instrument sound are also issues for future work. We also produced a version limited to the Shikoku region, using a reduced number of observation records.
In the future, we believe that combining the melodies we created with visual representations, as well as the process of creating the melodies itself, can help people understand seismic motion in an enjoyable way. Because impressions of the results and their quality are subjective and cannot be judged objectively, we would like to continue improving the method based on the problems we encountered.
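As an illustration of the processing described above, the following minimal Python sketch goes from a UD velocity trace to notes placed on a binned time axis. It assumes K-NET's 100 samples/s recording (so 20.48 s corresponds to 2048 samples) and the standard equal-temperament relation between key number and frequency; the octave shift used to bring seismic frequencies into the audible range, the amplitude thresholds for the dynamic markings, and the helper names are illustrative placeholders, since the abstract does not specify these details.

```python
import numpy as np

FS = 100.0          # K-NET sampling rate (samples/s)
WINDOW_S = 20.48    # analysis window after the S-wave arrival (s) -> 2048 samples

def spectral_peak(ud_velocity, s_arrival_index, fs=FS, window_s=WINDOW_S):
    """Peak amplitude and dominant frequency of the Fourier spectrum of the
    UD-component velocity trace in the window following the S-wave arrival."""
    n = int(round(window_s * fs))
    segment = ud_velocity[s_arrival_index:s_arrival_index + n]
    spectrum = np.abs(np.fft.rfft(segment))
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    k = 1 + int(np.argmax(spectrum[1:]))   # skip the DC term
    return spectrum[k], freqs[k]

def frequency_to_key(freq_hz, octave_shift=0):
    """Nearest piano key number via the equal-temperament relation
    k = 49 + 12*log2(f / 440), where key 49 is A4 = 440 Hz.
    Dominant frequencies of ground motion are usually far below the audible
    range, so octave_shift (a placeholder, not taken from the study) raises
    the result by whole octaves."""
    return int(round(49 + 12 * np.log2(freq_hz / 440.0))) + 12 * octave_shift

def amplitude_to_dynamic(peak_amp, thresholds=(0.1, 1.0, 10.0)):
    """Map the peak spectral amplitude to a dynamic marking.
    The threshold values here are illustrative placeholders."""
    marks = ("p", "mp", "mf", "f")
    return marks[int(np.searchsorted(thresholds, peak_amp))]

def build_score(stations, bin_s=1.0):
    """Place one note per station on a time axis binned by S-wave arrival.
    `stations` is a list of dicts with keys 's_arrival' (s), 'key', 'dynamic'.
    With bin_s=1.0 many stations share a bin (stacked pitches); with
    bin_s=0.01 most bins are empty -- the trade-off noted in the text."""
    score = {}
    for st in stations:
        slot = int(st["s_arrival"] // bin_s)
        score.setdefault(slot, []).append((st["key"], st["dynamic"]))
    return score
```

Running build_score with bin_s = 1.0 versus 0.01 reproduces the trade-off described above: stacked, overlapping pitches in one case and many empty slots in the other.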
The contents of this report are based on the 2020 graduation research of the co-author, Ms. Shiina Yamamoto. We used K-NET data provided by the National Research Institute for Earth Science and Disaster Resilience. This research was supported in part by a Grant-in-Aid for Scientific Research (C) (19K02615). We would like to express our gratitude to all concerned.