Presentation information

Oral presentation


[4O1-OS-3a] [Organized Session] OS-3

Fri. Jun 8, 2018 12:00 PM - 1:20 PM Room O (2F Kaimon)

12:40 PM - 1:00 PM

[4O1-OS-3a-02] A neural network as a unified model for explaining image features for processing various types of “shitsukan”

〇Takuya Koumura1, Masataka Sawayama1, Shin’ya Nishida1 (1. NTT Communication Science Laboratories)

Keywords: shitsukan, visual perception, deep neural network

Natural visual stimuli convey rich “shitsukan”, such as the glossiness, translucency, and material of an object. Explaining the various types of shitsukan within a unified framework is difficult because visual perception generally involves numerous features. Here we attempted to explain the visual features underlying shitsukan perception by analyzing experimental data from shitsukan discrimination tasks. We assumed that participants responded based on the image features of the stimuli. The features were computed by a deep neural network (DNN) optimized for image classification, in which progressively more complex and abstract features are represented in the higher layers. Features in the middle layers best explained the participants’ responses, suggesting that relatively complex features are used for shitsukan perception. We also found that the effective features depend on the type of shitsukan. These results suggest the effectiveness of a DNN for explaining the visual features underlying shitsukan perception.
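The layer-wise analysis described above can be illustrated with a toy sketch: for each candidate layer, fit a linear readout of that layer's features to the binary discrimination responses and ask which layer predicts held-out responses best. The code below uses synthetic random features and hypothetically assumes the responses are driven by a middle layer; the layer names, feature dimensions, and least-squares readout are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for DNN activations: five "layers", each giving a
# 50-dimensional feature vector for each of 200 stimuli (all assumptions).
n_stimuli, n_features = 200, 50
layers = {f"layer{i}": rng.normal(size=(n_stimuli, n_features))
          for i in range(1, 6)}

# Hypothetical ground truth: responses are a linear function of the
# middle layer's features, thresholded to a binary choice.
w_true = rng.normal(size=n_features)
responses = (layers["layer3"] @ w_true > 0).astype(float)

# Train/test split so a layer's score reflects generalization,
# not overfitting of the readout.
train, test = slice(0, 150), slice(150, None)

def readout_accuracy(features, responses):
    """Fit a least-squares linear readout on the training split and
    report accuracy on the held-out split."""
    targets = responses[train] * 2 - 1          # map {0, 1} -> {-1, +1}
    w, *_ = np.linalg.lstsq(features[train], targets, rcond=None)
    pred = (features[test] @ w > 0).astype(float)
    return (pred == responses[test]).mean()

scores = {name: readout_accuracy(f, responses) for name, f in layers.items()}
best = max(scores, key=scores.get)
print(best, scores[best])
```

With this synthetic setup the readout from "layer3" generalizes well while the other layers stay near chance, mirroring the paper's logic of comparing how well each layer's features explain behavior.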