Highly parallel computers for artificial neural networks
Luleå tekniska universitet.
1995 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Over a number of years, the two fields of artificial neural networks (ANNs) and highly parallel computing have both evolved rapidly. This thesis explores the possibility of combining these fields, investigating the design and use of highly parallel computers for ANN calculations. A new system architecture, REMAP (Real-time, Embedded, Modular, Adaptive, Parallel processor), is presented as a candidate platform for future action-oriented systems. With this architecture, multi-modular networks of cooperating and competing ANNs can be realized. For action-oriented systems, concepts such as real-time interaction with the environment, embeddedness, and learning with self-organization are important. The thesis identifies the requirements for efficient mapping of ANN algorithms onto the suggested architecture. This has been accomplished through studies of ANN implementations on general-purpose parallel computers, as well as through designs of new parallel systems particularly suited to ANN computing. The suggested architecture incorporates highly parallel, communicating processing modules, each constructed as a linear SIMD (Single Instruction stream, Multiple Data stream) array, internally connected in a ring topology but also supporting broadcast and reduction operations. Many of the analyzed ANN models are similar in structure and can be studied in a unified context; a new superclass of ANN models, called localized learning systems (LLSs), is therefore suggested and defined. A parallel computer implementation of LLSs is analyzed, and the importance of the reduction operations is recognized. The study of various LLS models, and of other commonly used ANN models not contained in the LLS class, such as the multilayer perceptron with error back-propagation, establishes REMAP modules as an excellent architecture for many different ANN models, useful in the design of action-oriented systems.
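The module organization the abstract describes — a linear SIMD array of processing elements joined in a ring, with broadcast and reduction support — can be sketched in a toy simulation. This is an illustrative sketch only: the class name, the 4-element size, and the shift-and-accumulate reduction scheme are assumptions for demonstration, not the architecture as defined in the thesis.

```python
# Toy simulation of a REMAP-style module: a linear SIMD array of
# processing elements (PEs) connected in a ring, supporting broadcast
# and a global reduction. All names here are hypothetical.

class RingSIMDModule:
    def __init__(self, n_pes):
        self.regs = [0.0] * n_pes  # one local register per PE

    def broadcast(self, value):
        # SIMD broadcast: the same operand reaches every PE at once.
        self.regs = [value] * len(self.regs)

    def ring_shift(self):
        # Each PE passes its value to its neighbour; the last PE
        # wraps around to the first -- the ring topology.
        self.regs = self.regs[-1:] + self.regs[:-1]

    def reduce_sum(self):
        # Global reduction via n-1 ring shifts: each PE accumulates
        # the value it receives, so every PE ends with the total.
        acc = list(self.regs)
        for _ in range(len(self.regs) - 1):
            self.ring_shift()
            acc = [a + r for a, r in zip(acc, self.regs)]
        self.regs = acc
        return acc[0]

module = RingSIMDModule(4)
module.regs = [1.0, 2.0, 3.0, 4.0]
total = module.reduce_sum()
print(total)  # 10.0
```

Reductions of this shift-and-accumulate kind are the operations whose importance the abstract highlights for LLS implementations, since they combine partial results from all PEs (e.g. partial sums in a weighted-sum or distance computation) into a single value.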

Place, publisher, year, edition, pages
Luleå: Luleå tekniska universitet, 1995. 205 p.
Series
Doctoral thesis / Luleå University of Technology… → 31 dec 1996, ISSN 0348-8373 ; 162
National Category
Signal Processing
Research subject
Signal Processing
Identifiers
URN: urn:nbn:se:ltu:diva-25655
Local ID: a6af4d90-f427-11db-ac9f-000ea68e967b
OAI: oai:DiVA.org:ltu-25655
DiVA: diva2:998809
Note
Approved; 1995; 20070426 (ysko)
Available from: 2016-09-30 Created: 2016-09-30 Bibliographically approved

Open Access in DiVA

fulltext (106058 kB), 10 downloads
File information
File name: FULLTEXT01.pdf
File size: 106058 kB
Checksum: SHA-512
55ae00ea285ae217352495a37e3f8460c6d485084c4c8145827eb803243ed830f97b838489665cd8937d75527c367eb54ddadf4ab46ef935eda9907e01a52638
Type: fulltext
Mimetype: application/pdf


