Japan Geoscience Union Meeting 2023

Presentation Information

[J] Online Poster Presentation

Session Code M (Multidisciplinary and Interdisciplinary) » M-GI General Geoscience and Information Geoscience

[M-GI29] Data-driven Earth and Planetary Science

Mon. May 22, 2023, 09:00 - 10:30, Online Poster Zoom Room (3) (Online Poster)

Conveners: 桑谷 立 (Japan Agency for Marine-Earth Science and Technology), 長尾 大道 (Earthquake Research Institute, The University of Tokyo), 上木 賢太 (Japan Agency for Marine-Earth Science and Technology), 伊藤 伸一 (The University of Tokyo)

On-site poster presentation: May 21, 2023, 17:15 - 18:45

09:00 - 10:30

[MGI29-P03] Automatic facies classification using convolutional neural network for 3D outcrop data

*佐藤 瑠晟1 (1. Kyoto University)


This study establishes a method to automatically classify facies in a 3D outcrop point cloud using a convolutional neural network (CNN). Recently, 3D facies models have been widely used to characterize the spatial architecture of geological bodies. A 3D facies model is a 3D geometry composed of point clouds, meshes, or voxels that represents the spatial distribution of lithofacies in an outcrop. Despite the demand for a wide range of applications, several challenges remain in building 3D facies models with existing methods. First, the cost of operating large and expensive devices can be an obstacle to acquiring 3D outcrop data, and such data can only be acquired when researchers can physically reach the outcrop. In addition, manual classification of lithofacies from visual representations of outcrops requires specialized experience and knowledge, so objectivity and efficiency can be issues in this manual process. This study therefore proposes a method to automatically construct 3D facies models by applying a CNN to a 3D outcrop point cloud.

As a case study, we surveyed an outcrop along the Esashito coast, Hamanaka, Hokkaido Island, which exposes the Upper Cretaceous to Paleocene Akkeshi Formation of the Nemuro Group. A mass transport deposit crops out along the Esashito coast; it consists of a pebbly mudstone matrix containing blocks of alternating beds of sandstone and mudstone. First, 4,235 outcrop images were taken by a drone, and a 3D outcrop point cloud was constructed by photogrammetry. Second, the 3D point cloud was translated into a set of 2D images, each covering a 1.73 m square of the outcrop. Each 2D image was 224x224 pixels, with three color channels (RGB) and two channels describing the roughness of the outcrop: one records the distance between the actual surface and the average outcrop plane, and the other records the standard deviation of those distances. The translation proceeds as follows: (1) a median filter was applied to the point cloud; (2) subsets of the point cloud were extracted by segmenting regions at regular intervals; (3) the average outcrop plane was obtained by fitting a plane to each extracted subset; (4) all points of the outcrop surface were projected onto this average plane; and (5) the colors and roughness properties of the projected points were interpolated at the pixels of the 2D image.

After this translation from the 3D point cloud into 2D images, facies labels were manually assigned to the 2D images. Six classes were used: pebbly mudstone, alternating beds of sandstone and mudstone, vegetation, beach, topsoil, and background (areas without points). The combination of these facies labels and the 2D outcrop images was used as training data for U-Net, the CNN model employed in this study, to generate the automatic facies classification model. In training, two conditions were tested to examine the significance of surface geometry for lithofacies identification: training with RGB colors only, or training with RGB colors and the roughness metrics. Finally, the trained model was applied to the 2D images produced from the 3D outcrop data, and the 3D facies model was constructed by transcribing the facies labels of the CNN-classified 2D images back to the original 3D point cloud.
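The translation in steps (3)-(5) can be illustrated with a short sketch. The fragment below is not the authors' implementation: it assumes plane fitting by principal component analysis (via SVD), replaces the interpolation step with simple per-pixel averaging, and the function name `points_to_image` is hypothetical.

```python
import numpy as np

def points_to_image(points, colors, img_size=224, extent=1.73):
    """Project one point-cloud tile onto its best-fit plane and rasterize it
    into a 5-channel image (R, G, B, plane distance, local std of distance).
    `points` is (N, 3) in metres, `colors` is (N, 3) in [0, 1]. Illustrative only."""
    # (3) Fit the average outcrop plane by PCA: the plane normal is the
    # direction of smallest variance (last right-singular vector).
    centroid = points.mean(axis=0)
    centered = points - centroid
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    u_axis, v_axis, normal = vt  # two in-plane axes and the plane normal

    # (4) Project every point onto the plane; keep the signed offset
    # from the plane as the "roughness distance".
    uu = centered @ u_axis
    vv = centered @ v_axis
    dist = centered @ normal

    # (5) Rasterize: accumulate colors and distances per pixel, then average.
    half = extent / 2.0
    cols = np.clip(((uu + half) / extent * img_size).astype(int), 0, img_size - 1)
    rows = np.clip(((vv + half) / extent * img_size).astype(int), 0, img_size - 1)

    image = np.zeros((img_size, img_size, 5))
    count = np.zeros((img_size, img_size), dtype=np.int64)
    sq_dist = np.zeros((img_size, img_size))
    np.add.at(count, (rows, cols), 1)
    for c in range(3):
        np.add.at(image[:, :, c], (rows, cols), colors[:, c])
    np.add.at(image[:, :, 3], (rows, cols), dist)
    np.add.at(sq_dist, (rows, cols), dist ** 2)

    filled = count > 0
    image[filled] /= count[filled, None]
    sq_dist[filled] /= count[filled]
    # Per-pixel std of distances: sqrt(E[d^2] - E[d]^2), clipped for safety.
    image[:, :, 4] = np.sqrt(np.clip(sq_dist - image[:, :, 3] ** 2, 0.0, None))
    # Pixels with no points remain zero and correspond to the "background" class.
    return image
```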
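For the classification step, the abstract names U-Net but not a specific implementation. The following is a minimal PyTorch sketch of a reduced U-Net taking the 5-channel input (RGB plus the two roughness channels) and producing six facies classes; the class name `SmallUNet`, the layer widths, and the training snippet are illustrative assumptions, not the configuration actually used.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    """Two 3x3 convolutions with ReLU, as in the original U-Net."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    """A reduced U-Net (two resolution levels) for 224x224 outcrop tiles.
    in_channels = 3 (RGB only) or 5 (RGB + two roughness channels)."""
    def __init__(self, in_channels=5, n_classes=6):
        super().__init__()
        self.enc1 = conv_block(in_channels, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottom = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                      # 224x224
        e2 = self.enc2(self.pool(e1))                          # 112x112
        b = self.bottom(self.pool(e2))                         # 56x56
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))    # 112x112
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))   # 224x224
        return self.head(d1)                                   # (N, 6, 224, 224) logits

# One optimization step under the "RGB + roughness" condition;
# dropping the last two input channels gives the "RGB only" condition.
model = SmallUNet(in_channels=5, n_classes=6)
images = torch.rand(2, 5, 224, 224)             # dummy 5-channel tiles
labels = torch.randint(0, 6, (2, 224, 224))     # dummy per-pixel facies labels
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()
```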
As a result, the trained U-Net models classified the facies of the test data with high accuracy (more than 90% precision) under both training conditions. Visual comparison between the reconstructed 3D facies model and the actual outcrop shows that the spatial distribution of facies is sufficiently consistent. In the future, the method proposed in this study is expected to be widely applicable to outcrops in various regions.
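The reported precision can be computed per class from the predicted and reference label maps. Below is a minimal sketch assuming integer class ids 0-5, matching the six-class labeling above; the function name `per_class_precision` is hypothetical.

```python
import numpy as np

def per_class_precision(pred, true, n_classes=6):
    """Per-class precision, TP / (TP + FP), for integer label maps
    of identical shape (e.g. 224x224 arrays of class ids 0-5)."""
    pred, true = pred.ravel(), true.ravel()
    precision = np.full(n_classes, np.nan)  # NaN where a class is never predicted
    for c in range(n_classes):
        predicted_c = pred == c
        if predicted_c.any():
            precision[c] = np.mean(true[predicted_c] == c)
    return precision
```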