Keywords: speech production, picture naming tasks, aphasia, neuropsychological assessment
Various attempts have been made in clinical practice to explain the task performance of aphasic patients. These attempts have often been based on explanatory models that originated in the 1980s. We attempted to update these models by using deep learning models to process both the visual features of picture naming tasks and their semantic features. The representations, including the penultimate layers, that convert visual inputs such as line drawings into language responses can be modeled with convolutional neural networks, while the task performance of semantically impaired patients can be explained by word embedding models. The real images frequently used to assess patients with aphasia served as visual stimuli. We adopted ResNet and VGG16 for the recognition of the line-drawing stimuli, and employed word2vec as the lexical-semantic representation. This made it possible to provide a more detailed explanation of the empirical data. The approach is expected to contribute to the interpretation and implementation of neuropsychological tests in two ways: 1) clarifying the meaning of stimulus pictures and words in aphasia tests, and 2) providing selection criteria for training materials in rehabilitation.
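The pipeline described above can be sketched as follows: a CNN's penultimate-layer activations for a picture stimulus are mapped into a word-embedding space, and the naming response is chosen as the nearest lexical entry. This is a minimal illustrative sketch, not the authors' implementation; the dimensions, the linear map `W`, the toy lexicon, and all vectors are hypothetical stand-ins (a real system would use features from a pretrained ResNet or VGG16 and trained word2vec vectors).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: e.g. a CNN penultimate layer vs. a word2vec space.
D_VIS, D_SEM = 512, 300

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for word2vec vectors of candidate naming responses.
lexicon = {w: rng.normal(size=D_SEM) for w in ["dog", "cat", "apple"]}

# A visual-to-semantic map would be learned from naming data;
# here it is random, purely for illustration.
W = rng.normal(size=(D_SEM, D_VIS)) / np.sqrt(D_VIS)

# Stand-in for penultimate-layer activations of a picture stimulus.
visual_features = rng.normal(size=D_VIS)

# Project visual features into the semantic (word-embedding) space.
semantic_pred = W @ visual_features

# Predicted naming response = nearest lexical entry by cosine similarity.
response = max(lexicon, key=lambda w: cosine(semantic_pred, lexicon[w]))
print(response)
```

Errors made by a simulated "lesioned" model (e.g. noise added to `semantic_pred`) could then be compared with patients' semantic paraphasias, which is the kind of account the abstract proposes.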