Measure-based Learning Algorithms: An Analysis of Back-propagated Neural Networks
Blekinge Institute of Technology, School of Engineering, Department of Interaction and System Design.
2008 (English). Independent thesis, Advanced level (degree of Master, One Year). Student thesis.
Abstract [en]

In this thesis we present a theoretical investigation of the feasibility of using a problem-specific inductive bias for back-propagated neural networks. We argue that if a learning algorithm is biased towards optimizing a certain performance measure, it is plausible to assume that it will achieve a higher score when evaluated with that particular measure. We use the term measure function for a multi-criteria evaluation function that can also serve as an inherent function within a learning algorithm, customizing its bias to a specific problem; hence the term measure-based learning algorithms. We discuss the characteristics of the most commonly used performance measures and establish similarities among them. These characteristics and similarities are then correlated with the characteristics of the back-propagation algorithm in order to explore the applicability of introducing a measure function into back-propagated neural networks. Our study shows that certain characteristics of the error back-propagation mechanism and the inherent gradient search method limit the set of measures that can serve as the measure function. We also highlight the significance of taking the representational bias of the neural network into account when developing methods for measure-based learning. The overall analysis shows that measure-based learning is a promising area of research with potential for further exploration, and we suggest directions for future research that might help realize measure-based neural networks.
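The abstract's central limitation — that gradient search restricts which performance measures can serve as the measure function — can be illustrated with a minimal sketch. This is not code from the thesis; the names (measure, measure_grad) and the one-layer network are illustrative assumptions. Gradient descent requires the measure to be differentiable with respect to the network's outputs: mean squared error qualifies, while a ranking measure such as AUC, or plain classification accuracy, does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = 2x + 1 plus a little noise.
X = rng.uniform(-1, 1, size=(64, 1))
y = 2.0 * X + 1.0 + 0.05 * rng.standard_normal((64, 1))

# A pluggable "measure function" and its gradient w.r.t. the predictions.
# Gradient search works only because this pair exists; a non-differentiable
# measure (accuracy, AUC) has no usable measure_grad.
def measure(pred, target):
    return np.mean((pred - target) ** 2)

def measure_grad(pred, target):
    return 2.0 * (pred - target) / len(target)

# One linear layer trained by plain gradient descent on the measure.
w = np.zeros((1, 1))
b = np.zeros(1)
lr = 0.5
for _ in range(500):
    pred = X @ w + b              # forward pass
    g = measure_grad(pred, y)     # d(measure)/d(pred), propagated back
    w -= lr * (X.T @ g)           # chain rule through the linear layer
    b -= lr * g.sum(axis=0)

print(float(w[0, 0]), float(b[0]))  # parameters should approach 2 and 1
```

Swapping in a different differentiable measure changes only the two functions above, which is the customization of procedural bias the abstract describes; the gradient-based update rule itself is unchanged.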

Abstract [sv]

The study investigates the feasibility of using a generic inductive bias for back-propagated artificial neural networks, one that could incorporate any single problem-specific performance metric, or a combination of them, to be optimized. We have identified several limitations of both the standard error back-propagation mechanism and the inherent gradient search approach. These limitations suggest exploring methods other than back-propagation, as well as using global search methods instead of gradient search. We also emphasize the importance of taking the representational bias of the neural network into consideration, since only a combination of procedural and representational bias can provide optimal solutions.

Place, publisher, year, edition, pages
2008, 35 p.
Keyword [en]
Supervised learning, Inductive Bias, Artificial Neural Networks
National Category
Computer Science
URN: urn:nbn:se:bth-4795. Local ID: diva2:832143.
Available from: 2015-04-22. Created: 2008-06-23. Last updated: 2015-06-30. Bibliographically approved.

Open Access in DiVA

fulltext (244 kB)
File name: FULLTEXT01.pdf. File size: 244 kB. Checksum: SHA-512.
Type: fulltext. Mimetype: application/pdf.
