10:50 〜 11:20
[ACG36-01] ディープニューラルネットワークモデルの最近の発展とその応用
★招待講演
キーワード:深層学習、ディープニューラルネットワーク、機械学習、人工知能
A deep neural network (DNN) is a powerful machine learning model that shows remarkable performance in various artificial intelligence domains such as computer vision and natural language processing. A DNN is composed of a number of non-linear, differentiable units with tunable parameters. In the training phase, the DNN parameters (called connection weights) are tuned so as to minimize the training loss. After training, we expect the DNN to have acquired a generalized rule for the target task, i.e., the trained DNN is expected to return appropriate outputs for unseen input data.
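As a toy sketch of this training principle (not part of the talk itself), the snippet below minimizes a squared-error training loss by gradient descent for a single tunable parameter; DNN training applies the same idea to millions of connection weights via backpropagation. The function and data names are illustrative only.

```python
def train(data, lr=0.1, steps=200):
    """Fit y = w * x by gradient descent on the mean squared error,
    mirroring how DNN connection weights are tuned to minimize loss."""
    w = 0.0  # a single "connection weight"
    for _ in range(steps):
        # gradient of L = mean((w*x - y)^2) with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad  # gradient-descent update
    return w

# toy "training set" sampled from y = 2x
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
print(train(data))  # converges toward 2.0
```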
Researchers in the deep learning community are actively proposing novel DNN architectures (i.e., network structures of DNNs) to improve the performance and applicability of DNNs. For instance, convolutional neural networks (CNNs) are frequently used for computer vision tasks, while recurrent connections and long short-term memory (LSTM) cells are suitable for handling time series and sequential data. Besides these network structures, many extended network modules and architectures have been developed.
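To make the CNN idea concrete, here is a minimal sketch (not from the talk) of the weight-sharing operation at the heart of a convolutional layer, written as a valid-mode 1-D convolution; a CNN slides the same small kernel over the whole input instead of learning a separate weight per position.

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (cross-correlation): slide a shared
    kernel over the input, producing one value per position."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# finite-difference kernel [1, 0, -1] responds to local change
print(conv1d([1, 2, 3, 4], [1, 0, -1]))  # [-2, -2]
```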
In this talk, we provide an overview of the recent progress in deep neural network architectures and their impressive applications. We also describe how to handle multimodal data in DNNs and present several studies on modality translation such as image-to-text (image captioning) and text-to-speech.
Although many DNN architectures have been proposed so far, selecting and designing an architecture remains the user's task. This task is not trivial because the appropriate architecture depends heavily on the target problem and dataset, so trial and error and expert knowledge are required. The second topic of this talk is the automatic design of DNN architectures, called neural architecture search (NAS). Recent studies show that NAS can improve the performance of DNNs depending on the dataset. In particular, we present computationally efficient NAS methods that can run on a single GPU within a reasonable computation time.
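The NAS methods in the talk are more sophisticated, but the simplest baseline conveys the idea: sample candidate architectures from a search space and keep the one that scores best under some evaluation (e.g., validation accuracy after training). The search space, function names, and scoring function below are all hypothetical placeholders for illustration.

```python
import random

def random_search_nas(evaluate, n_trials=200, seed=0):
    """Illustrative NAS baseline: random search over a small,
    hypothetical architecture search space."""
    rng = random.Random(seed)
    search_space = {"depth": [2, 4, 8],
                    "width": [16, 32, 64],
                    "activation": ["relu", "tanh"]}
    best_arch, best_score = None, float("-inf")
    for _ in range(n_trials):
        # sample one candidate architecture at random
        arch = {k: rng.choice(v) for k, v in search_space.items()}
        score = evaluate(arch)  # stands in for train-and-validate
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch

# hypothetical scoring function rewarding larger networks
best = random_search_nas(lambda a: a["depth"] * a["width"])
print(best)
```

Practical NAS methods replace both ingredients: the brute-force sampling is replaced by gradient-based or evolutionary search, and the costly full train-and-validate evaluation by cheaper proxies, which is what makes single-GPU NAS feasible.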
At the end of this talk, we would like to discuss the applicability of deep neural networks to the field of global environmental science.