License: Creative Commons Attribution 3.0 Unported license (CC BY 3.0)
When quoting this document, please refer to the following DOI: 10.4230/LIPIcs.ITCS.2018.22
URN: urn:nbn:de:0030-drops-83521
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2018/8352/
Panigrahy, Rina; Rahimi, Ali; Sachdeva, Sushant; Zhang, Qiuyi
Convergence Results for Neural Networks via Electrodynamics
Abstract
We study whether a depth-two neural network can learn another depth-two network using gradient descent. Assuming a linear output node, we show that the question of whether gradient descent converges to the target function is equivalent to the following question in electrodynamics: given k fixed protons in R^d and k electrons, each moving under the attractive force from the protons and the repulsive force from the remaining electrons, will all the electrons, at equilibrium, be matched up with the protons, up to a permutation? Under the standard electrical force, this follows from the classic Earnshaw's theorem. In our setting, the force is instead determined by the activation function and the input distribution. Building on this equivalence, we prove the existence of an activation function such that gradient descent learns at least one of the hidden nodes in the target network. Iterating this argument, we show that gradient descent can be used to learn the entire network one node at a time.
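The dynamics the abstract describes can be made concrete with a small simulation. The following is a minimal sketch, not the paper's construction: k electron positions follow gradient descent on a pairwise potential, attracted to k fixed protons and repelled by one another. In the paper the potential is determined by the activation function and the input distribution; here the standard Coulomb kernel 1/r in R^3 is substituted purely for illustration, and the softening constant eps is an added numerical safeguard, not part of the model.

import numpy as np

def grad_inv_r(x, y, eps=1e-9):
    """Gradient w.r.t. x of the kernel 1/||x - y||, i.e. -(x - y)/||x - y||^3."""
    diff = x - y
    r = max(np.linalg.norm(diff), eps)  # eps avoids division by zero only
    return -diff / r**3

def descend(protons, steps=50_000, lr=1e-4, seed=0):
    """Gradient descent on the electrostatic energy
       E(w) = -sum_{i,j} 1/||w_i - p_j|| + sum_{i<j} 1/||w_i - w_j||,
    so each electron w_i is attracted to every fixed proton p_j
    and repelled by every other electron."""
    rng = np.random.default_rng(seed)
    k, d = protons.shape
    electrons = rng.normal(size=(k, d))
    for _ in range(steps):
        g = np.zeros_like(electrons)
        for i in range(k):
            for j in range(k):
                # Attraction to proton p_j.
                g[i] -= grad_inv_r(electrons[i], protons[j])
                # Repulsion from the other electrons.
                if j != i:
                    g[i] += grad_inv_r(electrons[i], electrons[j])
        electrons -= lr * g
    return electrons

# Toy run with three protons in R^3: at equilibrium each electron should
# sit approximately on a distinct proton, i.e. match up to a permutation.
protons = np.array([[ 1.0, 0.0, 0.0],
                    [-1.0, 0.0, 0.0],
                    [ 0.0, 1.5, 0.0]])
print(descend(protons))

Because the Coulomb kernel is singular at zero distance, fixed-step gradient descent can overshoot when an electron gets very close to a proton, so the step size may need tuning in practice; the paper's analysis concerns the potentials induced by the activation function rather than this illustrative kernel.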
BibTeX Entry
@InProceedings{panigrahy_et_al:LIPIcs:2018:8352,
  author    = {Rina Panigrahy and Ali Rahimi and Sushant Sachdeva and Qiuyi Zhang},
  title     = {{Convergence Results for Neural Networks via Electrodynamics}},
  booktitle = {9th Innovations in Theoretical Computer Science Conference (ITCS 2018)},
  pages     = {22:1--22:19},
  series    = {Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN      = {978-3-95977-060-6},
  ISSN      = {1868-8969},
  year      = {2018},
  volume    = {94},
  editor    = {Anna R. Karlin},
  publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
  address   = {Dagstuhl, Germany},
  URL       = {http://drops.dagstuhl.de/opus/volltexte/2018/8352},
  URN       = {urn:nbn:de:0030-drops-83521},
  doi       = {10.4230/LIPIcs.ITCS.2018.22},
  annote    = {Keywords: Deep Learning, Learning Theory, Non-convex Optimization}
}
Keywords: Deep Learning, Learning Theory, Non-convex Optimization
Collection: 9th Innovations in Theoretical Computer Science Conference (ITCS 2018)
Issue Date: 2018
Date of publication: 12.01.2018