[3Yin2-04] Analyzing the submodule structure in deep learning models to reduce the search space of Neural Architecture Search
Keywords: Neural Architecture Search, Graph Neural Network, Neural Network, Explainability
Neural Architecture Search (NAS) is a method for automatically designing deep learning architectures. In NAS, the optimal architecture is selected from a search space, which is a set of architectural substructures. However, the search spaces of existing NAS methods are limited because of the enormous computational cost, so high-performance architectures may be missed. We therefore hypothesized that architectures with task-specific substructures achieve high performance, and proposed a method to identify task-specific substructures, which contributes to designing an efficient NAS search space. We developed a method based on a Graph Convolutional Network that outputs a task-specificity score for each substructure. We confirmed a correlation between task specificity and task accuracy across substructures, indicating that our method can identify task-specific substructures that contribute to accuracy. Our method is expected to contribute to the development of NAS that finds high-performance architectures with reduced computational complexity.
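The core idea above — a Graph Convolutional Network that maps each architectural substructure (represented as a graph) to a task-specificity score — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the graph encoding, the two-layer depth, the untrained weights, and the names `gcn_layer` and `task_specificity` are all assumptions made for the example.

```python
import math

def gcn_layer(adj, feats, weight):
    """One graph-convolution step (illustrative): average each node's
    features with its neighbors' (self-loops included), then apply a
    linear map followed by ReLU."""
    n = len(adj)
    agg = []
    for i in range(n):
        # neighbors of node i, plus the node itself (self-loop)
        neigh = [j for j in range(n) if adj[i][j] or j == i]
        row = [sum(feats[j][k] for j in neigh) / len(neigh)
               for k in range(len(feats[0]))]
        agg.append(row)
    out = []
    for row in agg:
        h = [sum(row[k] * weight[k][d] for k in range(len(row)))
             for d in range(len(weight[0]))]
        out.append([max(0.0, v) for v in h])  # ReLU
    return out

def task_specificity(adj, feats, w1, w2):
    """Mean-pool node embeddings after two GCN layers into a single
    scalar in (0, 1) via a logistic squash — the 'task specificity'
    of the substructure in this sketch."""
    h = gcn_layer(adj, feats, w1)
    h = gcn_layer(adj, h, w2)
    pooled = sum(sum(row) / len(row) for row in h) / len(h)
    return 1.0 / (1.0 + math.exp(-pooled))

# Hypothetical 3-node substructure graph (e.g. conv -> pool -> conv)
# with one scalar feature per node and fixed example weights.
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
feats = [[1.0], [0.5], [0.2]]
w1 = [[0.3, -0.1]]          # 1 -> 2 hidden dims
w2 = [[0.2], [0.4]]         # 2 -> 1 output dim
score = task_specificity(adj, feats, w1, w2)
```

In the proposed approach, such scores would then be correlated with per-substructure task accuracy, and only high-scoring substructures retained, shrinking the NAS search space before the (expensive) architecture search runs.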