Title: Dropout is a special case of the stochastic delta rule: faster and more accurate deep learning

Authors: Noah Frazier-Logue, Stephen José Hanson

Abstract: Multi-layer neural networks have led to remarkable performance on many kinds of benchmark tasks in text, speech and image processing. Nonlinear parameter estimation in hierarchical models is known to be subject to overfitting. One approach to this overfitting and related problems (local minima, collinearity, feature discovery etc.) is called dropout (Srivastava, et al 2014, Baldi et al 2016), which removes hidden units with a Bernoulli random variable with probability $p$ over updates. In this paper we show that dropout is a special case of a more general model published originally in 1990 called the stochastic delta rule (SDR, Hanson, 1990). SDR parameterizes each weight in the network as a random variable with mean $\mu_{w_{ij}}$ and standard deviation $\sigma_{w_{ij}}$. These random variables are sampled on each forward activation, consequently creating an exponential number of potential networks with shared weights. Both parameters are updated according to prediction error, thus implementing weight noise injections that reflect a local history of prediction error and efficient model averaging. SDR therefore implements a local gradient-dependent simulated annealing per weight, converging to a Bayes-optimal network. Tests on standard benchmarks (CIFAR) using a modified version of DenseNet show that SDR outperforms standard dropout in error by over 50% and in loss by over 50%. Furthermore, the SDR implementation converges on a solution much faster, reaching an error of 5 in just 15 epochs with DenseNet-40, compared to standard DenseNet-40's 94 epochs.
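The abstract's description of SDR can be sketched in a few lines: each weight is a Gaussian random variable, a concrete weight matrix is sampled on every forward pass, and both the mean and the standard deviation are updated from the gradient, with the standard deviation decaying multiplicatively so the injected noise anneals as prediction error falls. The learning rates, the decay factor `zeta`, and the toy gradient below are illustrative assumptions, not the paper's exact hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-layer sizes, for illustration only
n_in, n_out = 4, 3
mu = rng.normal(0.0, 0.1, (n_in, n_out))   # mean of each weight, mu_{w_ij}
sigma = np.full((n_in, n_out), 0.05)       # std of each weight, sigma_{w_ij}

def sdr_forward(x):
    # Sample one concrete network per forward pass:
    # w_ij ~ N(mu_ij, sigma_ij), an implicit ensemble with shared parameters.
    w = mu + sigma * rng.standard_normal(mu.shape)
    return x @ w, w

def sdr_update(grad_w, lr_mu=0.1, lr_sigma=0.01, zeta=0.99):
    # Both parameters track prediction error: the mean follows the gradient,
    # while the std grows with |gradient| and decays by zeta < 1 each step,
    # giving the per-weight, gradient-dependent annealing described above.
    global mu, sigma
    mu = mu - lr_mu * grad_w
    sigma = zeta * (sigma + lr_sigma * np.abs(grad_w))

x = rng.standard_normal(n_in)
y, w = sdr_forward(x)
grad = np.outer(x, y)  # placeholder gradient standing in for backprop
sdr_update(grad)
```

As `sigma` shrinks toward zero the sampled network collapses onto its mean weights, which is the sense in which dropout's Bernoulli masking is a special (binary-noise) case of this per-weight Gaussian scheme.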

Reddit discussion: https://www.reddit.com/r//comments/96vgvr/r_180803578_dropout_is_a_special_case_of_the/

