Japan Geoscience Union Meeting 2023

Presentation information

[J] Online Poster

M (Multidisciplinary and Interdisciplinary) » M-GI General Geosciences, Information Geosciences & Simulations

[M-GI29] Data-driven geosciences

Mon. May 22, 2023 9:00 AM - 10:30 AM Online Poster Zoom Room (3) (Online Poster)

convener: Tatsu Kuwatani (Japan Agency for Marine-Earth Science and Technology), Hiromichi Nagao (Earthquake Research Institute, The University of Tokyo), Kenta Ueki (Japan Agency for Marine-Earth Science and Technology), Shin-ichi Ito (The University of Tokyo)

On-site poster schedule (2023/5/21 17:15-18:45)

9:00 AM - 10:30 AM

[MGI29-P03] Automatic facies classification using convolutional neural network for 3D outcrop data

*Sato Ryusei1 (1.Kyoto University)


This study established a method to automatically classify lithofacies using a convolutional neural network (CNN) applied to 3D point clouds of outcrops. Recently, 3D facies models have been widely used to characterize the spatial structure of geological architecture. A 3D facies model is a 3D geometry composed of point clouds, meshes, or voxels representing the spatial distribution of lithofacies in an outcrop. Despite the demand for a wide range of applications, several challenges remain in building 3D facies models with existing methods. First, the cost of operating large and expensive devices can be an obstacle to acquiring 3D outcrop data, and acquisition is limited to outcrops that researchers can physically reach. In addition, researchers' specialized experience and knowledge are necessary to classify lithofacies manually from visual representations of outcrops, so objectivity and efficiency can be issues in this manual classification process. Therefore, this study proposes a method to automatically construct 3D facies models by applying a CNN to 3D outcrop point clouds.
As a case study, we surveyed the outcrop along the Esashito coast, which exposes the Upper Cretaceous to Paleocene Akkeshi Formation of the Nemuro Group in Hamanaka Town, Hokkaido. A mass transport deposit crops out along the Esashito coast; it consists of a pebbly mudstone matrix containing blocks of alternating beds of sandstone and mudstone. In this study, 4,235 outcrop images were first taken by a drone, and a 3D outcrop point cloud was constructed by photogrammetry. Second, the 3D point cloud was translated into a set of 2D images, each covering a 1.73 m square of the outcrop. Each 2D image was 224x224 pixels, with three color channels (RGB) and two channels describing the roughness of the outcrop surface.
One roughness channel records the distance between the actual surface and the average outcrop plane, and the other records the standard deviation of these distances. The translation proceeds as follows: (1) a median filter was applied to the point cloud; (2) subsets of the point cloud were extracted by segmenting it into regions at regular intervals; (3) the average outcrop plane was obtained by fitting a plane to each extracted subset; (4) all points of the outcrop surface were projected onto this average plane; and (5) the colors and roughness attributes of the projected points were interpolated at the pixels of the 2D image. After this translation from the 3D point cloud into 2D images, facies labels were manually assigned to the 2D images. Six label classes were used in this study: pebbly mudstone, alternating beds of sandstone and mudstone, vegetation, beach, top soil, and background (areas without points). The pairs of facies labels and 2D outcrop images were used as training data for U-Net, the CNN model employed in this study, to build the automatic facies classification model. In training, two conditions were tested to examine the significance of surface geometry for lithofacies identification: training with RGB colors only, and training with RGB colors plus the roughness channels. Finally, the trained model was applied to the 2D images produced from the 3D outcrop data, and the 3D facies model was constructed by transcribing the facies labels predicted by the CNN on the 2D images back to the original 3D point cloud. As a result, the trained U-Net models classified the facies of the test data with high accuracy (more than 90% precision) under both training conditions. Visual comparison between the reconstructed 3D facies model and the actual outcrop shows that they are sufficiently consistent in the spatial distribution of facies.
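The plane-fitting and rasterization steps (3)-(5) above can be sketched roughly as follows. This is a minimal illustrative implementation in NumPy, not the authors' code: the function name, the tile size, and the simplified per-image (rather than per-pixel) roughness standard deviation are all assumptions made for this sketch.

```python
import numpy as np

def cloud_to_image(points, colors, size_m=1.73, res=224):
    """Project one point-cloud tile onto its best-fit plane and rasterize it
    into a 5-channel image: RGB, signed roughness, and a roughness spread.
    `points` is (N, 3) in meters; `colors` is (N, 3) RGB in [0, 1]."""
    # (3) Fit the average outcrop plane by PCA: the normal is the direction
    # of least variance of the centered points (last right singular vector).
    centroid = points.mean(axis=0)
    centered = points - centroid
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    u_axis, v_axis, normal = vt  # two in-plane axes and the plane normal

    # (4) Project every point onto the plane; keep the signed offset along
    # the normal as the roughness value for that point.
    uv = centered @ np.stack([u_axis, v_axis], axis=1)  # in-plane coords (N, 2)
    rough = centered @ normal                           # distance to plane (N,)

    # (5) Rasterize colors and roughness onto a res x res pixel grid by
    # averaging all points that fall into each pixel (nearest-pixel binning
    # stands in for the interpolation described in the abstract).
    img = np.zeros((res, res, 5), dtype=np.float32)
    count = np.zeros((res, res), dtype=np.int64)
    pix = np.clip(((uv + size_m / 2) / size_m * res).astype(int), 0, res - 1)
    for (i, j), c, r in zip(pix, colors, rough):
        img[j, i, :3] += c
        img[j, i, 3] += r
        count[j, i] += 1
    hit = count > 0
    img[hit] /= count[hit][:, None]
    # Simplification: one global std instead of a per-pixel roughness std.
    img[..., 4] = rough.std()
    return img, hit  # pixels where hit is False correspond to "background"
```

Pixels never hit by a projected point remain zero and map to the "background" class; in practice the per-pixel roughness standard deviation would be accumulated during binning rather than taken globally.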
In the future, the method proposed in this study is expected to be widely applicable to outcrops in various regions.