JSAI2020

Presentation information


General Session » J-1 Fundamental AI, theory

[4B2-GS-1] Fundamental AI, theory (1)

Fri. Jun 12, 2020 12:00 PM - 1:40 PM Room B (jsai2020online-2)

Chair: Naoki Hamada

1:20 PM - 1:40 PM

[4B2-GS-1-05] Learning of neural networks and singularities

〇Tomohiro Isshiki1 (1. CREATOR'S HEAD INC.)

Keywords: Neural network, Learning, Singularities, Neocognitron, Wavelet

The patterns to be learned in deep learning form a neuromanifold, as pointed out by Amari, and the neuromanifold has singularities. Moreover, learning does not normally work in the presence of a singular point. Nevertheless, deep learning can be performed with high accuracy by increasing the number of hidden layers in the neural network.
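The abstract does not specify a model, but the degeneracy it refers to can be seen in a minimal one-hidden-layer tanh network: when two hidden units coincide, the empirical Fisher information matrix loses rank, which is precisely the kind of singular point on the neuromanifold described by Amari. The network, input grid, and parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def grad_f(theta, x):
    # Gradient of f(x) = v1*tanh(w1*x) + v2*tanh(w2*x)
    # with respect to theta = (v1, w1, v2, w2).
    v1, w1, v2, w2 = theta
    t1, t2 = np.tanh(w1 * x), np.tanh(w2 * x)
    return np.array([t1, v1 * (1 - t1**2) * x, t2, v2 * (1 - t2**2) * x])

def fisher_rank(theta, xs):
    # Empirical Fisher information F = mean of g g^T over the inputs xs.
    G = np.stack([grad_f(theta, x) for x in xs])
    F = G.T @ G / len(xs)
    return np.linalg.matrix_rank(F)

xs = np.linspace(-2, 2, 200)

# Regular point: the two hidden units differ, so F has full rank 4.
print(fisher_rank(np.array([0.5, 1.0, -0.3, -0.7]), xs))  # 4

# Singular point: w1 == w2 makes the two units' gradients collinear,
# so the Fisher matrix degenerates and the rank drops to 2.
print(fisher_rank(np.array([0.5, 1.0, -0.3, 1.0]), xs))   # 2
```

At the singular parameter the model is unidentifiable (only v1 + v2 matters), and the flat Fisher directions are what stall gradient-based learning near such points.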
We develop the reason through the following argument, using Hironaka's theorem on the resolution of singularities.
When backpropagation is used in deep learning, learning proceeds from the upper layers, which form a low-dimensional information space, to the lower layers, which form a high-dimensional information space. Increasing the number of layers also increases the dimension of the information space. In other words, adding layers raises the dimension of the information space, backpropagation carries learning from the low-dimensional part of that space to the high-dimensional part, and the elimination of singularities guaranteed by Hironaka's theorem occurs. As a result, the singularities of the neuromanifold are eliminated. This is mathematically guaranteed by the work of Amari and Hironaka.
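The mechanism invoked here, that passing to a higher-dimensional ambient space can remove a singularity, is the content of Hironaka's theorem. A standard textbook instance (not taken from the paper) is the blow-up of the cuspidal cubic:

\[
C:\quad y^2 = x^3 \;\subset\; \mathbb{A}^2 ,
\]

which is singular at the origin. Blowing up the origin introduces an extra coordinate $t$ with $y = xt$, so the ambient space grows from $\mathbb{A}^2$ to a chart of $\mathbb{A}^2 \times \mathbb{P}^1$. Substituting gives

\[
x^2 t^2 = x^3 \;\Longrightarrow\; t^2 = x ,
\]

after dividing out the exceptional factor $x^2$. The strict transform $t^2 = x$ is a smooth parabola: the singularity has been eliminated by raising the dimension of the space in which the curve sits, which is the analogy the abstract draws with adding layers.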
In addition, there is a relationship between the neocognitron, one of the origins of the CNN, and wavelets. We prove that as well.
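The abstract does not state how the neocognitron–wavelet correspondence is proved. As a hedged illustration of why such a correspondence is plausible, one level of a Haar wavelet transform is exactly a convolutional, S-cell-style layer with fixed filters and stride 2; the Haar choice and the sample signal are assumptions for this sketch, not the paper's construction.

```python
import numpy as np

# One level of a Haar wavelet transform written as a stride-2 convolution,
# i.e. a neocognitron/CNN-style feature layer with fixed weights.
s = 1 / np.sqrt(2)
low_pass  = np.array([s,  s])   # Haar scaling (approximation) filter
high_pass = np.array([s, -s])   # Haar wavelet (detail) filter

def conv_stride2(x, w):
    # Valid correlation at stride 2: inner product of each length-2 window with w.
    return np.array([x[i:i + 2] @ w for i in range(0, len(x) - 1, 2)])

x = np.array([4.0, 2.0, 6.0, 8.0])
approx = conv_stride2(x, low_pass)   # scaled pairwise sums
detail = conv_stride2(x, high_pass)  # scaled pairwise differences

print(approx)  # [ 6/sqrt(2), 14/sqrt(2) ]
print(detail)  # [ 2/sqrt(2), -2/sqrt(2) ]
```

Because the filter pair is orthonormal, the original signal is exactly recoverable from the two channels, so this fixed-weight "layer" computes a lossless multiresolution decomposition.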
