Japan Geoscience Union Meeting 2019

Presentation information

[J] Oral

Session symbol A (Atmospheric and Hydrospheric Sciences) » A-CG Multidisciplinary studies of atmosphere, ocean, and environmental science: General

[A-CG36] Global environmental science and artificial intelligence

Thu. May 30, 2019, 10:45 AM - 12:15 PM, Room 106 (1F)

Conveners: Tomohiko Tomita (Faculty of Advanced Science and Technology, Kumamoto University), Ken-ichi Fukui (Osaka University), Daisuke Matsuoka (Japan Agency for Marine-Earth Science and Technology), Satoshi Ono (Kagoshima University); Chairperson: Tomohiko Tomita (Division of Basic Sciences, Faculty of Advanced Science and Technology, Kumamoto University)

10:50 AM - 11:20 AM

[ACG36-01] Recent developments in deep neural network models and their applications

★Invited talk

*Shinichi Shirakawa1 (1. Yokohama National University)

Keywords: deep learning, deep neural network, machine learning, artificial intelligence

A deep neural network (DNN) is a powerful machine learning model that shows remarkable performance in various artificial intelligence domains such as computer vision and natural language processing. A DNN is composed of many non-linear, differentiable units with tunable parameters. In the training phase, the DNN parameters (called connection weights) are tuned so as to minimize a training loss. After training, we expect the DNN to have captured a generalized rule for the target task, i.e., the trained DNN is expected to return correct outputs for unseen inputs.
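As a minimal numerical sketch of this training loop, the following illustrative example (not from the talk; all hyperparameters are arbitrary) trains a tiny two-layer network by gradient descent to minimize a mean-squared-error loss on the XOR problem, a task a linear model cannot solve:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Tunable parameters (connection weights and biases).
W1 = rng.normal(0.0, 1.0, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(10000):
    # Forward pass through non-linear, differentiable units.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Training loss (mean squared error).
    loss = np.mean((p - y) ** 2)
    # Backward pass: gradients of the loss w.r.t. each parameter.
    dp = 2.0 * (p - y) / len(X)
    dz2 = dp * p * (1 - p)
    dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
    dz1 = (dz2 @ W2.T) * (1 - h ** 2)
    dW1, db1 = X.T @ dz1, dz1.sum(axis=0)
    # Gradient descent step on the connection weights.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

pred = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(round(float(loss), 4), pred.ravel().tolist())
```

After training, the loss is far below that of a constant predictor, and the network typically recovers the XOR pattern, illustrating the "generalized rule" the text describes.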

Researchers in the deep learning community are actively proposing novel DNN architectures (i.e., network structures of DNNs) to improve the performance and applicability of DNNs. For instance, convolutional neural networks (CNNs) are frequently used for computer vision tasks, while recurrent connections and long short-term memory (LSTM) cells are suitable for handling time-series and sequential data. Beyond these network structures, many extended network modules and architectures have been developed.
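To make the CNN case concrete, the core operation of a convolutional layer is a small kernel of shared weights slid across the input. A minimal sketch (illustrative only; real CNN layers add channels, padding, and learned kernels):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: the core op of a CNN layer.

    The same small kernel (shared weights) is applied at every
    spatial position of the input.
    """
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 4x4 image with a vertical edge, and a horizontal-gradient kernel.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 1, 1]], dtype=float)
k = np.array([[-1, 1],
              [-1, 1]], dtype=float)

res = conv2d(img, k)
print(res)
```

The output responds only where the edge lies, which is why convolutional layers are effective feature extractors for images.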

In this talk, we provide an overview of recent progress in deep neural network architectures and their remarkable applications. We also review how DNNs handle multimodal data and present several studies on modality translation, such as image-to-text (image captioning) and text-to-speech.

Although many DNN architectures have been proposed, selecting and designing an architecture is still left to the user. This task is not trivial because the appropriate architecture depends heavily on the target problem and dataset, so trial and error and expert knowledge are required. The second topic of this talk is the automatic design of DNN architectures, called neural architecture search (NAS). Recent studies show that NAS can improve the performance of DNNs on a given dataset. In particular, we present computationally efficient NAS methods that run on a single GPU within a reasonable computational time.

At the end of this talk, we would like to discuss the applicability of deep neural networks to the field of global environmental science.