JpGU-AGU Joint Meeting 2020

Presentation information

[E] Oral

S (Solid Earth Sciences) » S-TT Technology & Techniques

[S-TT53] Airborne surveys and monitoring of the Earth

convener: Shigekazu Kusumoto (Graduate School of Science and Engineering for Research, University of Toyama), Takao Koyama (Earthquake Research Institute, The University of Tokyo), Yuji Mitsuhata (Advanced Industrial Science and Technology), Shigeo Okuma (Geological Survey of Japan, National Institute of Advanced Industrial Science and Technology (AIST))

[STT53-01] Compute environment agnostic scientific research platform for exploration geophysics

★Invited Papers

*Pavel Golodoniuc¹, Davis Aaron¹, Samuel Bradley¹, Shane Mule¹, John Hille¹ (1. CSIRO)

Keywords: geophysics, visualisation, exploratory data analysis, human-computer interaction, electromagnetics, data processing

Earth scientists rely on large datasets for the analysis, interpretation and prediction of rock properties at different scales. Exploration geophysicists work with complex data and numerical codes to solve forward and inverse modelling problems. For many small- to medium-sized exploration companies, access to new analytical methods may be infeasible because of the amount of data and computing resources required. This also restricts researchers in delivering new methods to early adopters, who are more agile than large companies but can lack the resources and expertise to set up the necessary infrastructure. Herein we address both limitations by offering early adopters access to cutting-edge techniques and by providing researchers with a link to end-users.

We have developed a platform that enables visual interaction with geophysical data and allows seamless integration with complex numerical codes at various scales, from desktop data processing to the inversion of hundreds of kilometres of airborne electromagnetic data in the Cloud, on a private compute cluster, or at a High-Performance Computing facility. The architecture, in which every component is packaged as a Docker container, allows individual applications to run in different environments without recompilation. Containers are built dynamically using an open declarative scripting language and can be shared via a common Docker Registry. Containerisation also eliminates potential library dependency conflicts, as each container is an isolated execution environment configured for a specific program. Our application scales with ease from a desktop to a Docker Swarm cluster to the Cloud, and shrinks back when computing resources are no longer in use. The approach favours lightweight, Cloud-hosted applications tailored to specific needs rather than confining users to a single application. A set of Docker containers implementing complex analytical procedures can be deployed and run either locally or externally; in the latter case, the computational load can be balanced through a range of means, e.g. by growing the Docker Swarm or by delegating computationally heavy tasks to a dedicated server.
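
As a minimal sketch of how such a containerised workflow might be driven programmatically, the snippet below uses the Docker SDK for Python to build an image, push it to a registry, run it locally, and scale the same image out as a replicated Swarm service. The image name, registry address, build path and inversion command are hypothetical placeholders, not part of the platform described above.

```python
import docker

# Connect to the local Docker daemon (or a Swarm manager via DOCKER_HOST).
client = docker.from_env()

# Build an isolated execution environment for a specific numerical code.
# The Dockerfile path and image tag are illustrative placeholders.
image, _ = client.images.build(
    path="./aem-inversion",
    tag="registry.example.org/aem-inversion:latest",
)

# Share the image through a common registry so collaborators can pull
# exactly the same environment without recompiling anything.
client.images.push("registry.example.org/aem-inversion", tag="latest")

# Run the code locally on a desktop ...
client.containers.run(
    "registry.example.org/aem-inversion:latest",
    command="invert --input /data/survey_lines",   # hypothetical command
    volumes={"/local/data": {"bind": "/data", "mode": "ro"}},
    detach=True,
)

# ... or deploy the same image as a replicated service on a Docker Swarm,
# growing the replica count for heavy inversions and shrinking it afterwards.
client.services.create(
    "registry.example.org/aem-inversion:latest",
    name="aem-inversion",
    command="invert --input /data/survey_lines",
    mode=docker.types.ServiceMode("replicated", replicas=16),
)
```

Under these assumptions, scaling up or down is simply a change to the replica count, which mirrors how the platform grows the Swarm for computationally heavy tasks and releases resources when they are no longer in use.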

Our approach is cost-efficient and establishes a link between researchers and explorers. It offers access to cutting-edge research while protecting researchers' intellectual property.