Training feed-forward neural networks using the gradient descent method with the optimal stepsize
2012 (English). In: Journal of Computational Information Systems, ISSN 1553-9105, Vol. 8, no. 4, pp. 1359-1371. Article in journal (Refereed), Published
The most widely used algorithm for training multilayer feedforward networks, Error BackPropagation (EBP), is by nature an iterative gradient descent algorithm. A variable stepsize is the key to fast convergence of BP networks. A new optimal stepsize algorithm is proposed for accelerating the training process. It modifies the objective function to reduce the computational complexity of the Jacobian, and consequently of the Hessian matrix, and thereby directly computes the optimal iterative stepsize. The improved backpropagation algorithm helps alleviate the problem of slow convergence and oscillations. The analysis indicates that backpropagation with optimal stepsize (BPOS) is more efficient on large-scale sample sets. Numerical experiments on pattern recognition and function approximation problems show that the proposed algorithm converges fast and has low computational complexity.
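The core idea of an optimal stepsize can be illustrated on a quadratic objective, where an exact line search along the negative gradient has the closed form α* = gᵀg / (gᵀHg). The following is a minimal sketch of that idea on a toy linear least-squares problem; it is not the paper's BPOS algorithm, which derives the stepsize from a modified objective and the network's Jacobian, and all variable names here are illustrative assumptions.

```python
import numpy as np

# Toy least-squares problem: f(w) = 0.5 * ||X w - y||^2
# (a stand-in for network training; gradient g = X^T (X w - y),
# Hessian H = X^T X, both exact for this quadratic objective).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true

def loss(w):
    r = X @ w - y
    return 0.5 * r @ r

w = np.zeros(3)
for _ in range(100):
    g = X.T @ (X @ w - y)      # gradient of the quadratic objective
    Hg = X.T @ (X @ g)         # Hessian-vector product without forming H
    denom = g @ Hg
    if denom <= 1e-12:         # gradient numerically zero: converged
        break
    alpha = (g @ g) / denom    # optimal stepsize for an exact line search
    w = w - alpha * g

final_loss = loss(w)
```

Computing the stepsize this way costs only one extra Hessian-vector product per iteration, which is the kind of saving the paper pursues by simplifying the Jacobian computation.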
Research subject Operation and Maintenance
Identifiers
URN: urn:nbn:se:ltu:diva-8599
Local ID: 71e87e73-f977-4680-a137-32b843648626
OAI: oai:DiVA.org:ltu-8599
DiVA: diva2:981537
Approved; 2012; 2012-04-20 (andbra). Bibliographically approved 2016-09-29.