JpGU-AGU Joint Meeting 2020

Presentation Information

[E] Oral Presentation

Session Code U (Union) » Union

[U-16] OC: Research Advances in Recent Disaster Studies Using Remote Sensing and Computational Methodologies

Conveners: Jun Matsumoto (Department of Geography, Graduate School of Urban Environmental Sciences, Tokyo Metropolitan University), Guido Cervone (Pennsylvania State University Main Campus); Chairpersons: Jun Matsumoto (Department of Geography, Graduate School of Urban Environmental Sciences, Tokyo Metropolitan University), Guido Cervone (Pennsylvania State University Main Campus)

[U16-01] Expanded Dimensionality Image Spectroscopy Via Machine Learning

★Invited Papers

*Guido Cervone1, Mark Salvador, Fangcao Xu1 (1. Pennsylvania State University Main Campus)

Keywords: remote sensing, target detection, machine learning

The general goal of hyperspectral image analysis is to identify with confidence surface solid materials, liquids, or atmospheric gases. The current state-of-the-science approach to hyperspectral imagery collection and exploitation is based on physics and engineering for the development and construction of the sensors, and on data science for the analysis of the data those sensors collect. The algorithms and tools used to analyze the very large volumes of collected data were developed primarily in the early 1990s, and they continue to pervade both operational use and research and development.



These concepts and solutions have demonstrated operational success, and their methods are ensconced in existing algorithms and software. While these existing solutions are based on sound physics, mathematics, and statistics, they lag behind the revolution in computational power, artificial intelligence, and agile sensors and platforms. Most importantly, they have been shown to fail under varied environmental, obscuration, and target material conditions. Many simplifying assumptions are made in this process, and today's analysis is effective at material identification only under optimal collection conditions. Under non-optimal conditions (cloudy scenes, shadows, intimate mixtures of materials, liquid spills and residues, or any combination of these), material identification of solids, liquids, or gases is ineffective, unrepeatable, or subject to unknown levels of uncertainty.



New spectral imagers are being integrated on agile collection platforms, such as gimbals, and are becoming popular on unmanned aerial systems (UAS). This overcomes the fixed nadir-looking geometries of past instruments. In addition to imaging a particular scene location, an agile sensor can also image the background and the atmosphere to improve the overall accuracy of target classification. Additionally, while the previous generation of sensors has a scene revisit time of minutes to days, the current generation, and even more so the next, offers a scene revisit time of as little as a few seconds. The ability to collect rapid sequences of hyperspectral scans from different angles provides an unprecedented dataset. It is theoretically possible to advance the state of the science by developing a more complete and accurate solution that analyzes multiple hyperspectral images simultaneously.
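
As an illustration only (not the authors' processing chain), the data product implied here can be thought of as an expanded-dimensionality array in which a rapid sequence of scans of the same scene, collected at different view angles, is stacked along an additional geometry axis. The file names, angles, and array shapes below are hypothetical.

    # Illustrative sketch: stack a rapid sequence of hyperspectral scans of the
    # same scene, taken at different view angles, into one expanded-dimensionality
    # array (angles x rows x cols x bands). Paths and angles are hypothetical.
    import numpy as np

    def load_scan(path):
        """Placeholder loader returning a (rows, cols, bands) radiance cube."""
        return np.load(path)

    angles_deg = [0, 15, 30, 45]                   # hypothetical view angles
    scans = [load_scan(f"scene_view_{a:02d}.npy") for a in angles_deg]

    # Expanded-dimensionality cube: one extra axis for the collection geometry.
    multi_angle_cube = np.stack(scans, axis=0)     # (n_angles, rows, cols, bands)
    print(multi_angle_cube.shape)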

We propose an expansion of the radiance equation to take advantage of multiple hyperspectral images, and a solution of this expansion via deep learning. The characterization of the components of the radiance equation via the expansion would drive future hyperspectral sensor performance requirements and concepts of operations. The expansion and its solution would enable identification of target materials in scenarios and environments that no current or planned hyperspectral image exploitation system is capable of addressing.
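
A minimal sketch of how such a deep learning solution might be wired, assuming a per-pixel network that ingests the same pixel observed under several view angles and predicts a material class. This is not the authors' model; the layer sizes, band count, angle count, and number of material classes are all assumptions for illustration.

    # Minimal per-pixel sketch (assumed architecture, not the authors' model):
    # input is (batch, n_angles, n_bands), output is material-class logits.
    import torch
    import torch.nn as nn

    class MultiAngleSpectralNet(nn.Module):
        def __init__(self, n_angles=4, n_bands=224, n_materials=10):
            super().__init__()
            self.encoder = nn.Sequential(          # shared per-angle spectral encoder
                nn.Linear(n_bands, 128), nn.ReLU(),
                nn.Linear(128, 64), nn.ReLU(),
            )
            self.head = nn.Sequential(             # fuses the encoded view angles
                nn.Linear(n_angles * 64, 64), nn.ReLU(),
                nn.Linear(64, n_materials),
            )

        def forward(self, x):                      # x: (batch, n_angles, n_bands)
            b, a, _ = x.shape
            z = self.encoder(x)                    # (batch, n_angles, 64)
            return self.head(z.reshape(b, a * 64)) # (batch, n_materials) logits

    # Toy usage with random data standing in for multi-angle radiance spectra.
    model = MultiAngleSpectralNet()
    logits = model(torch.randn(8, 4, 224))
    print(logits.shape)                            # torch.Size([8, 10])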

At the basis of hyperspectral imaging is the fundamental radiance equation, which defines the reflective and emissive spectral radiance components of the image scene and its 3-dimensional surroundings. Current state-of-the-science algorithms and tools analyze each hyperspectral image individually via a simplification of the radiance equation into a single spatial dimension. This individual image analysis was driven primarily by the limits of sensors and collection platforms, with optimal scene collection consisting of a nadir-looking scan of the target area.
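
For reference, one commonly used simplified form of the at-sensor radiance equation, combining a reflective and an emissive term, is shown below in LaTeX notation. This is a standard textbook grouping of terms, not necessarily the exact expansion proposed here.

    L_s(\lambda) =
      \underbrace{\tau_{\uparrow}(\lambda)\,\frac{\rho(\lambda)}{\pi}
        \Big[E_{\mathrm{sun}}(\lambda)\cos\theta_s\,\tau_{\downarrow}(\lambda)
             + E_{\mathrm{down}}(\lambda)\Big]}_{\text{reflective}}
      + \underbrace{\tau_{\uparrow}(\lambda)\,\varepsilon(\lambda)\,B(\lambda,T)}_{\text{emissive}}
      + L_{\mathrm{path}}(\lambda)

where L_s is the at-sensor spectral radiance, \rho the surface reflectance, \varepsilon the surface emissivity, B(\lambda,T) the Planck blackbody radiance at surface temperature T, \tau_{\uparrow} and \tau_{\downarrow} the upwelling and downwelling atmospheric transmittances, E_{\mathrm{sun}} and E_{\mathrm{down}} the solar and downwelling irradiances, \theta_s the solar zenith angle, and L_{\mathrm{path}} the path radiance. Analyzing multiple images of the same scene from different angles provides additional constraints on these terms beyond what a single nadir-looking scan can offer.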