Japan Geoscience Union Meeting 2016

Presentation information


Symbol H (Human Geosciences) » H-TT Technology & Techniques

[H-TT24] Geographic Information Systems and Cartography

Sun. May 22, 2016 5:15 PM - 6:30 PM Poster Hall (International Exhibition Hall HALL6)

Convener: *Mamoru Koarai (Earth Science course, College of Science, Ibaraki University), Shin Yoshikawa (Faculty of Engineering, Osaka Institute of Technology), Atsushi Suzuki (Faculty of Geo-environmental Science, Rissho University)

5:15 PM - 6:30 PM

[HTT24-P05] A Simulation Method for Visual Attention in Reading Illustrated Maps

*Masatoshi Arikawa1 (1. Center for Spatial Information Science, The University of Tokyo)

Keywords: Visual Attention, Map Reading, Simulation

We present a novel approach to developing a visual-attention-movement analysis tool for illustrated maps. The approach converts common rules of visual attention movement, originally defined in natural language, into a mathematical model: an algorithm that extracts a trajectory of visual movement from multistory dynamic potential fields representing the distribution of visual attention within a single illustrated map. The algorithm begins by composing a potential field as a combination of Gaussian kernels, each corresponding to a graphic element on the illustrated map. Because the symbolic attributes of these graphic elements and the relations between them generally lead users to determine the order in which the elements are read, the graphic elements form multiple hierarchical networks and are classified, based on cartographic knowledge, into several layers, such as labels, mountains, and rivers, so that these attributes and relations drive the dynamic change of the potential fields. The algorithm then extracts a visual attention movement on the illustrated map as the trajectory, with area, of a point moving along the valleys of the composed potential fields. Finally, we demonstrate the feasibility of our approach by comparing the visual attention movements extracted by our implemented prototype system with those recorded from real users with an eye-tracker.
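The core mechanism described above, a potential field built from Gaussian kernels whose valleys attract a moving attention point, can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the element positions, attention weights, kernel widths, and the gradient-descent step size below are all illustrative assumptions, and the hierarchical layering and dynamic field updates from the abstract are omitted.

```python
import numpy as np

def gaussian_well(pos, center, weight, sigma):
    # Negative Gaussian kernel: a "well" in the potential field
    # that attracts visual attention toward a graphic element.
    d2 = np.sum((pos - center) ** 2)
    return -weight * np.exp(-d2 / (2.0 * sigma ** 2))

def potential(pos, elements):
    # Potential field = combination of wells, one per graphic element.
    # Each element is (center, weight, sigma); all values are assumed here.
    return sum(gaussian_well(pos, np.array(c), w, s) for c, w, s in elements)

def attention_trajectory(elements, start, steps=200, lr=0.5, eps=1e-3):
    # Move a point along the valleys of the field by following the
    # negative gradient (estimated with central finite differences).
    pos = np.array(start, dtype=float)
    path = [pos.copy()]
    for _ in range(steps):
        grad = np.zeros(2)
        for i in range(2):
            e = np.zeros(2)
            e[i] = eps
            grad[i] = (potential(pos + e, elements)
                       - potential(pos - e, elements)) / (2.0 * eps)
        pos -= lr * grad
        path.append(pos.copy())
    return np.array(path)

# Hypothetical map with two graphic elements: (center, weight, sigma).
elements = [((2.0, 2.0), 1.0, 1.0), ((6.0, 5.0), 2.0, 1.5)]
traj = attention_trajectory(elements, start=(0.0, 0.0))
print(traj[-1])  # the point settles in the nearest attention well
```

In this toy setup the attention point descends into the well nearest to its starting position; modelling the reading *order* of elements, as in the abstract, would additionally require the layer-dependent dynamic reshaping of the field (e.g. damping a well once it has been visited).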