1:45 PM - 3:15 PM
[O11-P125] Automatic determination of visibility of Mt. Fuji by machine learning
Keywords: Mt. Fuji, machine learning, YOLOv9
[Motivation and Objectives]
The Astronomical and Meteorological Department of our school has conducted visibility observations for approximately 80 years. Taguchi (2018) studied the relationship between air pollution and poor visibility. In 2019, Tanaka, Baba, and Hamashima developed an automatic photographic device aimed at the city center and investigated meteorological phenomena through visual observation. Later, Shinkawa and Takezoe (2021) built a similar system oriented toward Mt. Fuji, analyzing visibility trends over the past 50 years.
In 2023, Yamanobe, Yasuhara, and Shimanuki developed an automatic visibility discrimination program for the city center, which is still in active use. However, no equivalent system has been applied to the Mt. Fuji direction since 2021. The objective of this study is therefore to develop a similar automated program for Mt. Fuji, improving the efficiency of image classification and analysis.
[Methods]
A machine learning-based program was developed using YOLOv9 to automatically determine the visibility of Mt. Fuji using images captured by the device from Shinkawa and Takezoe (2021). The process included:
1. Collection of approximately 830 images of Mt. Fuji captured by the automatic observation system.
2. Manual categorization into three visibility levels:
• Visible
• Partially visible
• Completely invisible
3. Annotation of Mt. Fuji in each image.
4. Transfer learning using the YOLOv9 official pre-trained model (gelan-c.pt).
Images were selected from January 1 to 10, 2025, ensuring variation in weather and time to avoid bias.
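The annotation step above produces YOLO-format label files, which store one line per object as a class index followed by a bounding box normalized to the image size. The sketch below illustrates that format; the class-index order (0 = visible, 1 = partially visible, 2 = completely invisible) and the helper function are assumptions for illustration, not the study's actual tooling.

```python
# Convert a pixel-space bounding box of Mt. Fuji into a YOLO label line:
# "class x_center y_center width height", all coordinates normalized to [0, 1].
# The class ordering here is an assumed mapping, not taken from the study.

CLASSES = ["visible", "partially_visible", "completely_invisible"]

def to_yolo_label(class_name, box, img_w, img_h):
    """box is (x_min, y_min, x_max, y_max) in pixels."""
    x_min, y_min, x_max, y_max = box
    x_c = (x_min + x_max) / 2 / img_w   # normalized box center
    y_c = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w         # normalized box size
    h = (y_max - y_min) / img_h
    return f"{CLASSES.index(class_name)} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# A full-frame box in a 1920x1080 image becomes a centered, full-size label.
print(to_yolo_label("visible", (0, 0, 1920, 1080), 1920, 1080))
```

One such `.txt` file per image, alongside a small dataset YAML listing the class names, is what YOLOv9 transfer learning consumes.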
[Results and Discussion]
The model achieved an overall accuracy of 93.5%, with 97.1% for "visible" and 90.0% for "partially visible" classifications. This high performance is likely due to the introduction of the "partially visible" category, which helped reduce ambiguity in human classification.
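Per-class and overall accuracy figures like these can be read off a confusion matrix of true labels versus predictions. The matrix below is a hypothetical toy example, not the study's evaluation data; it only shows how the two kinds of accuracy are computed.

```python
# Rows are true labels, columns are predicted labels. The counts below are
# invented for illustration and do not reproduce the study's results.
LABELS = ["visible", "partially_visible", "completely_invisible"]

def accuracies(cm):
    # Per-class accuracy (recall): correct predictions / all samples of that class.
    per_class = {LABELS[i]: cm[i][i] / sum(cm[i]) for i in range(len(cm))}
    # Overall accuracy: diagonal sum / total sample count.
    overall = sum(cm[i][i] for i in range(len(cm))) / sum(map(sum, cm))
    return per_class, overall

cm = [[34, 1, 0],   # true "visible"
      [2, 27, 1],   # true "partially visible"
      [0, 1, 17]]   # true "completely invisible"

per_class, overall = accuracies(cm)
print(per_class, overall)
```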
However, the dataset was relatively small (830 images), so the current model’s reliability remains limited. To improve this, further data collection and retraining will be necessary.
[Future Work]
We plan to expand the dataset to 1,500–2,000 images, retrain the model, and verify the accuracy again. Additionally, we aim to create a system that automatically sends inference results to the discrimination program and stores them in the database built by Yamanobe, Yasuhara, and Mizusawa (2024) for streamlined analysis.
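The planned storage step could be sketched with a small relational table holding one row per inference. The schema and column names below are assumptions for illustration; the actual target is the database built by Yamanobe, Yasuhara, and Mizusawa (2024), whose structure is not described here.

```python
# Minimal sketch of persisting inference results, assuming a hypothetical
# single-table schema. An in-memory SQLite database stands in for the real one.
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")  # use a file path against a real database
conn.execute("""CREATE TABLE IF NOT EXISTS visibility (
    captured_at TEXT,   -- ISO 8601 capture timestamp
    label       TEXT,   -- predicted visibility class
    confidence  REAL    -- model confidence score
)""")

def store_result(conn, label, confidence, captured_at=None):
    captured_at = captured_at or datetime.now(timezone.utc).isoformat()
    conn.execute("INSERT INTO visibility VALUES (?, ?, ?)",
                 (captured_at, label, confidence))
    conn.commit()

store_result(conn, "partially_visible", 0.87)
rows = conn.execute("SELECT label, confidence FROM visibility").fetchall()
print(rows)
```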
We also plan to develop a separate system capable of recognizing and classifying characteristic cloud formations seen over Mt. Fuji (e.g., cap clouds, apron clouds, hanging clouds) using machine learning.
[References]
Yasuhara, T., Yamano, H., Shimanuki (2024). “Automation of visibility observation and construction of meteorological observation system.” Japan Geoscience Union Meeting 2024
Yasuhara, T., Inoue, H., Toda, K., Shinkawa, R., Ushizaka, T. (2021). “Automation of visibility observation and development of meteorological observation system.” The 10th Meteorological Observation Equipment Contest
Shinkawa, R., Takezoe, R. (2020). “Mt. Fuji Observation Instrument FUYOU.” The 9th Meteorological Observation Equipment Contest
Chantatsu (2024). “[Session 19] Learning image recognition using self-made dataset with YOLOv9.” https://chantastu.hatenablog.com/entry/2024/03/14/120221
Ultralytics (2025). “YOLOv9: A Leap Forward in Object Detection Technology.” https://docs.ultralytics.com/ja/models/yolov9/
Yasuhara, T., Yamanobe, Mizusawa, S. (2024). “Automatic Identification of Visibility and Meteor Meteors.” The 7th Junior and Senior High School Informatics Research Contest
Tanaka, H., Baba, M., Hamashima, Y. (2019). “Clear Sky Automating Visibility Observations.” The 8th Meteorological Observation Equipment Contest
