JpGU-AGU Joint Meeting 2017

Presentation information

[EE] Poster

A (Atmospheric and Hydrospheric Sciences) » A-AS Atmospheric Sciences, Meteorology & Atmospheric Environment

[A-AS01] [EE] 3D Cloud Modeling as a Tool for 3D Radiative Transfer, and Conversely

Sun. May 21, 2017 1:45 PM - 3:15 PM Poster Hall (International Exhibition Hall HALL7)

Convener: Thomas Fauchez (Universities Space Research Association, GSFC Greenbelt), Anthony B. Davis (Jet Propulsion Laboratory), Hironobu Iwabuchi (Graduate School of Science, Tohoku University), Kentaroh Suzuki (Atmosphere and Ocean Research Institute, University of Tokyo)

[AAS01-P07] Analysis of optimal conditions for photo-based 3D modeling of cloud-like objects

*Maya Shimono1, Ken Hirata1, Ade Purwanto4, Kuriki Murahashi4, Hiroshi Kawamata1,2, Nobuyasu Naruse3, Yukihiro Takahashi1,4 (1. Global Science Campus, Hokkaido University, 2. Institute for the Advancement of Higher Education, Hokkaido University, 3. Shiga University of Medical Science, 4. Graduate School of Science, Hokkaido University)

Keywords: Cloud Observation, 3D Modeling, Camera Images, Precipitation Forecast

Currently, short-term rain/snow forecasting relies largely on weather radars, which observe rain/snow particles in the air and therefore cannot fully capture the clouds that may cause severe weather. New radars capable of detecting clouds have recently been developed, but their construction and maintenance are expensive. As a low-cost alternative, images of clouds captured with digital cameras could be used to locate clouds and measure their properties. Previous studies attempted to calibrate cameras using external references such as topographic features and the positions of airplanes and stars. For practical cloud monitoring, however, it is important to develop methods that observe clouds without any external calibration, and the proper photographic conditions for this task have yet to be carefully examined.
To examine the optimum conditions, images of cloud-like objects, a lump of cotton and a piece of clay, were taken from different angles with digital cameras (Nikon D5500). The images were then processed with the 3D modeling software PhotoScanPro to construct 3D models, from which the location and size of the objects were calculated. As of 15 February 2017, several preliminary experiments have been conducted. Using a lump of cotton hung from thin threads, images were captured at different dihedral angles between the cloud-camera planes (θ = 1°, 2°, 3°, ... as indicated in Fig. 1) to examine (A) whether 3D models could be constructed, (B) the accuracy of the calculated distance, and (C) the surface area of the models. In a second experiment, all-around photos of a piece of clay were captured to generate a 3D model and examine the accuracy of volume measurements (D). Finally, using the cotton, multiple photos were captured with the light source in different positions to evaluate its influence on the resulting models (E).
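For illustration only, the following minimal sketch shows how surface area and enclosed volume could be computed from a triangle mesh exported by 3D modeling software such as PhotoScanPro; the function name, the array layout, and the use of NumPy are assumptions and are not part of the authors' workflow.

    import numpy as np

    def mesh_area_volume(vertices, faces):
        """Surface area and enclosed volume of a closed, consistently oriented
        triangle mesh (vertices: (N, 3) floats; faces: (M, 3) vertex indices).
        Hypothetical helper for illustration, not a PhotoScanPro function."""
        v0 = vertices[faces[:, 0]]
        v1 = vertices[faces[:, 1]]
        v2 = vertices[faces[:, 2]]
        cross = np.cross(v1 - v0, v2 - v0)           # per-triangle normal, length = 2 * area
        area = 0.5 * np.linalg.norm(cross, axis=1).sum()
        # Divergence theorem: sum of signed tetrahedron volumes formed with the origin.
        volume = abs(np.einsum('ij,ij->i', v0, cross).sum()) / 6.0
        return area, volume

The volume term relies on the mesh being closed and consistently oriented, which is why an incomplete model (such as one missing its shaded underside) would bias the measurement.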
The results of the experiments are as follows. (A) Although it varied with the number of photos, there was a maximum dihedral angle at which a 3D model could still be constructed, and this angle increased as more photos were used. (B) With larger dihedral angles, the accuracy of the calculated distances improved, and (C) the surface areas of the produced 3D models increased. (D) In the second experiment, the calculated volume of the clay was about 30% smaller than the actual volume, most likely because the shaded bottom side of the clay left the model incomplete; this issue could be addressed by changing the camera exposure. (E) In the final experiment, the resulting 3D models differed little between the different light-source positions.
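Result (B) is consistent with basic triangulation geometry: when the two viewing directions are nearly parallel (a small dihedral angle), a small bearing error translates into a large error in the intersected distance. The following simplified 2-D simulation illustrates the trend only; the target position, bearing-noise level, camera separations, and function names are hypothetical and are not taken from the experiments.

    import numpy as np

    rng = np.random.default_rng(0)

    def triangulate(p1, a1, p2, a2):
        """Intersect two 2-D bearing rays and return the estimated point."""
        d1 = np.array([np.cos(a1), np.sin(a1)])
        d2 = np.array([np.cos(a2), np.sin(a2)])
        # Solve p1 + t1*d1 = p2 + t2*d2 for the ray parameters t1, t2.
        t1, _ = np.linalg.solve(np.column_stack([d1, -d2]), p2 - p1)
        return p1 + t1 * d1

    target = np.array([500.0, 3000.0])      # hypothetical cloud position [m]
    sigma = np.deg2rad(0.05)                # assumed bearing noise per camera

    for baseline in (50.0, 200.0, 800.0):   # wider separation = larger subtended angle
        p1, p2 = np.array([0.0, 0.0]), np.array([baseline, 0.0])
        errors = []
        for _ in range(2000):
            a1 = np.arctan2(*(target - p1)[::-1]) + rng.normal(0.0, sigma)
            a2 = np.arctan2(*(target - p2)[::-1]) + rng.normal(0.0, sigma)
            errors.append(np.linalg.norm(triangulate(p1, a1, p2, a2) - target))
        rms = float(np.sqrt(np.mean(np.square(errors))))
        print(f"baseline {baseline:5.0f} m -> RMS position error {rms:7.1f} m")

Under these assumed values the RMS position error falls roughly in inverse proportion to the camera separation, mirroring the improvement in distance accuracy observed with larger dihedral angles.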
From these results, we have identified some of the optimum conditions. For further investigation, we will examine other conditions, such as the number of cameras, the elevation angle of the cameras, the fraction of each image occupied by the object, and the camera exposure, to determine whether they affect the accuracy of the resulting 3D models and measurements. We also plan to photograph actual clouds and generate 3D models from those images in order to evaluate the validity of the preliminary experiments. Ultimately, by applying these conditions to actual photo capture, consecutive images will be acquired repeatedly to construct 3D models, the accumulation of which could help establish criteria for precipitation forecasting.