Authors: Jin-Cheng Li, Wing W. Y. Ng, Daniel S. Yeung, Patrick P. K. Chan
Publish Date: 2013/09/26
Volume: 5, Issue: 1, Pages: 73-83
Abstract
Deep neural networks provide more expressive power than shallow ones. However, current activation functions cannot propagate error efficiently via gradient descent as the number of hidden layers increases. Current activation functions, e.g. the sigmoid, have large saturation regions which are insensitive to changes of a hidden neuron's input and yield gradient diffusion. To relieve these problems, we propose a bifiring activation function in this work. The bifiring function is a differentiable function with a very small saturation region. Experimental results show that deep neural networks with the proposed activation function yield faster training, better error propagation, and better testing accuracies on seven image datasets.
Keywords:
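The saturation problem the abstract describes can be made concrete with a small numerical sketch. This is not the authors' bifiring function (its definition is not given here); it only illustrates, under standard definitions, why the sigmoid's gradient vanishes in its saturation regions and why that compounds across layers:

```python
import math

def sigmoid(x):
    """Standard logistic sigmoid."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    """Derivative of the sigmoid: s(x) * (1 - s(x))."""
    s = sigmoid(x)
    return s * (1.0 - s)

# The gradient peaks at 0.25 (at x = 0) and is nearly zero for large |x|,
# i.e. inside the saturation regions the abstract refers to.
print(sigmoid_grad(0.0))   # 0.25
print(sigmoid_grad(10.0))  # ~4.5e-05 -- saturated, almost no signal

# Backpropagated error through many sigmoid layers is scaled by a product of
# such factors, so even at the 0.25 peak the signal shrinks geometrically
# with depth -- the "gradient diffusion" the abstract mentions.
print(0.25 ** 10)          # ~9.5e-07 after 10 layers
```

An activation with a much smaller saturation region, as the bifiring function is described to have, keeps these per-layer factors away from zero over a wider input range.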