3:45 PM - 4:00 PM
[STT44-02] Development for upgrading the generalization capability of deep-learning-based seismic damage detection of buildings using aerial photographs
★Invited Papers
Keywords: Deep Learning, Convolutional Neural Network, Remote Sensing, Damage Detection, Aerial Photograph, Bayesian Updating
The image dataset consists of multiple ortho-rectified vertical aerial photographs taken soon after the 2016 Kumamoto earthquake, the 1995 Southern Hyogo Prefecture earthquake, and the 2011 off the Pacific coast of Tohoku earthquake. By combining these images, we constructed training data and a damage discriminant model with high generalization performance.
First, we classified building damage into four levels by visual interpretation: LEVEL 1, no damage; LEVEL 2, slight damage; LEVEL 3, moderate damage; LEVEL 4, collapsed. We then divided buildings into wooden and non-wooden types based on roof shape, and organized these labels as GIS data using the building polygons provided by the Geospatial Information Authority of Japan, as sketched below.
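For illustration only (the field names below are hypothetical and not taken from the study), the labels could be organized as building-polygon attributes along these lines:

```python
from dataclasses import dataclass

# Damage levels used for visual interpretation (names as in the abstract).
DAMAGE_LEVELS = {1: "no damage", 2: "slight damage", 3: "moderate damage", 4: "collapsed"}

@dataclass
class BuildingLabel:
    """Hypothetical attribute record attached to each GSI building polygon."""
    polygon_id: str      # ID of the building polygon (Geospatial Information Authority of Japan)
    damage_level: int    # 1-4, from visual interpretation of the aerial photograph
    is_wooden: bool      # wooden vs. non-wooden, judged from the roof shape
```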
Next, we normalized the resolution of each photograph to approximately 20 centimeters per pixel and equalized image brightness using the cumulative sums of their luminance histograms.
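A minimal sketch of this preprocessing, assuming OpenCV as noted in the acknowledgement (the ground-sampling-distance argument and the use of the YCrCb luminance channel are assumptions, not details from the study):

```python
import cv2
import numpy as np

def normalize_photo(img_bgr, src_gsd_m, target_gsd_m=0.20):
    """Resample to ~20 cm/pixel and equalize brightness via the luminance CDF."""
    # Resample so that one pixel covers roughly target_gsd_m metres on the ground.
    scale = src_gsd_m / target_gsd_m
    resized = cv2.resize(img_bgr, None, fx=scale, fy=scale,
                         interpolation=cv2.INTER_AREA)

    # Equalize only the luminance channel so colours are preserved.
    ycrcb = cv2.cvtColor(resized, cv2.COLOR_BGR2YCrCb)
    y = ycrcb[:, :, 0]

    # Map luminance through its cumulative histogram (standard equalization).
    hist = np.bincount(y.ravel(), minlength=256).astype(np.float64)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    ycrcb[:, :, 0] = (cdf[y] * 255).astype(np.uint8)

    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```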
Subsequently, we extracted 80-pixel-square patch images, each covering the main part of a typical residence, from the photographs, obtaining a total of 311,281 patches in which the building damage levels are proportionally represented.
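As a sketch of the patch extraction (the centroid-in-pixel-coordinates convention and the border handling below are assumptions), a patch could be cropped around each building roughly as follows:

```python
import numpy as np

PATCH = 80  # patch side length in pixels (~16 m at 20 cm/pixel)

def crop_patch(photo, cx, cy, size=PATCH):
    """Crop a size x size patch centred on a building centroid (cx, cy) in pixels.

    `photo` is an H x W x 3 array; the centroid coordinates are assumed to have
    been converted from map coordinates using the photo's geotransform.
    """
    half = size // 2
    x0, y0 = int(cx) - half, int(cy) - half
    if x0 < 0 or y0 < 0 or x0 + size > photo.shape[1] or y0 + size > photo.shape[0]:
        return None  # skip buildings too close to the image border
    return photo[y0:y0 + size, x0:x0 + size]
```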
Furthermore, we developed a damage discriminant model based on a CNN modeled after VGG and trained it with these patch images. The model achieves a discrimination accuracy of over 70% for each damage level on validation data drawn from the three earthquakes but excluded from the training data.
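The abstract does not specify the architecture beyond its VGG-style design; a hedged Keras sketch, with assumed layer widths, optimizer, and loss, might look like:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_damage_cnn(input_shape=(80, 80, 3), num_classes=4):
    """A small VGG-style stack of 3x3 convolutions ending in a 4-class softmax."""
    model = keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, padding="same", activation="relu"),
        layers.Conv2D(128, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),  # LEVEL 1-4
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```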
In this study, the discrimination result is output as a four-color image produced by raster-scanning the photograph in 80-pixel-square windows with a stride of 20 pixels. By overlaying the building polygons on this image, we classify the damage level of each building according to threshold values on the area ratios of the damage classes within its polygon. Additionally, based on a comparison between the governmental inspection results and visual interpretation of the aerial photographs, we regard the sum of LEVEL 3 and LEVEL 4 as totally collapsed buildings, and the sum of LEVEL 2, LEVEL 3, and LEVEL 4 as totally or partially collapsed buildings.
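A sketch of the raster-scan inference and the area-ratio rule follows; the threshold values and the input scaling are hypothetical, as the abstract does not state them:

```python
import numpy as np

def predict_damage_map(photo, model, win=80, stride=20):
    """Slide an 80-pixel window with a 20-pixel stride and record the argmax class."""
    h, w = photo.shape[:2]
    ys = range(0, h - win + 1, stride)
    xs = range(0, w - win + 1, stride)
    patches = np.stack([photo[y:y + win, x:x + win]
                        for y in ys for x in xs]).astype("float32") / 255.0
    classes = model.predict(patches, verbose=0).argmax(axis=1)
    return classes.reshape(len(ys), len(xs))  # coarse grid of damage classes 0-3

def building_damage_level(classes_in_polygon, thresholds=(0.5, 0.3, 0.1)):
    """Assign a building-level class from the area ratios of each damage class.

    `thresholds` are hypothetical: the highest class whose area ratio exceeds its
    threshold wins, checked from LEVEL 4 (index 3) downwards.
    """
    counts = np.bincount(classes_in_polygon.ravel(), minlength=4)
    ratios = counts / counts.sum()
    for level, thr in zip((3, 2, 1), thresholds):
        if ratios[level] >= thr:
            return level
    return 0  # LEVEL 1: no damage
```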
With these methods, rapid damage detection over an extensive area is achieved from aerial photographs. In addition, by applying Bayesian updating, the damage estimate based on the ground-motion distribution can be refined to higher precision.
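The exact Bayesian formulation is not given in the abstract; as a generic sketch, a ground-motion-based prior over damage levels could be combined with the photo-based evidence via Bayes' rule:

```python
import numpy as np

def bayesian_update(prior, likelihood):
    """Combine a prior damage-level distribution (e.g. from ground-motion-based
    fragility curves) with the likelihood implied by the aerial-photo detection.

    Both arguments are length-4 arrays over LEVEL 1-4; the result is the
    normalized posterior P(level | observation), proportional to likelihood * prior.
    """
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Hypothetical numbers: the ground-motion model favours light damage,
# but the photo-based classifier strongly indicates collapse.
prior = np.array([0.40, 0.30, 0.20, 0.10])
likelihood = np.array([0.05, 0.10, 0.25, 0.60])
print(bayesian_update(prior, likelihood))  # posterior shifts toward LEVEL 4
```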
For these reasons, automatic damage detection from aerial photographs can improve the accuracy of ground-motion-based damage estimation and can be utilized in disaster response. However, the method still faces difficulties: damage discrimination from vertical imagery tends to underestimate the damage to buildings other than wooden buildings with tiled roofs, and airborne optical surveys cannot be conducted in cloudy weather or at night.
In future work, by utilizing images acquired from different platforms and sensors, we will develop damage detection methods with greater immediacy and higher accuracy, and we intend to apply them not only to initial disaster response but also to the recovery and reconstruction stages.
Acknowledgement:
This work was supported by the Cross-ministerial Strategic Innovation Promotion Program (SIP), "Enhancement of societal resiliency against natural disasters". We used ArcGIS to develop the training data, OpenCV and Python for image analysis, and Keras as the deep-learning framework.