Implementation of Hierarchical Temporal Memory on a Many-core Architecture
Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE).
2013 (English). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
Abstract [en]

This thesis makes use of a many-core architecture developed by the Adapteva company to implement a parallel version of the Hierarchical Temporal Memory Cortical Learning Algorithm (HTM CLA). The HTM algorithm is a recent machine learning model that shows promise for pattern recognition and inference. Due to its complexity, sufficiently large simulations are time-consuming to perform on a sequential processor; this thesis therefore investigates the feasibility of using many-core processors to run HTM simulations.

A parallel implementation of the HTM algorithm was written in C for the proposed many-core platform. To evaluate its performance, metrics such as speedup, efficiency and scalability were measured on simple pattern recognition tasks. An implementation of the HTM algorithm on a single-core computer established the baseline for computing the speedup and efficiency of the parallel implementation, and thus for evaluating its scalability.

Three mapping methods, block-based, column-based and row-based, were selected from the many possible ways to parallelize the HTM. In the experiment with a small set of training examples, the row-based mapping achieved the best performance, with a high speedup owing to the lesser influence of training-example variability, and scaled well across different numbers of cores. The experiment with a relatively large set of training examples, however, gave almost identical results for all three mapping methods: in contrast with the small experiment, the full-set experiment used much more diverse input, and the mapping method did not influence the average running time for this training set. All three mappings showed almost perfect scalability, with speedup increasing linearly with the number of cores, for the dataset and HTM size used.
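The speedup and efficiency referred to in the abstract are the standard parallel-performance metrics: speedup S(p) = T_serial / T_parallel(p) and efficiency E(p) = S(p) / p for p cores. The following minimal C sketch shows how they are computed; the timing values are placeholders for illustration, not measurements from the thesis.

#include <stdio.h>

/* Standard parallel-performance metrics used in the thesis's evaluation:
 * speedup S(p) = T_serial / T_parallel(p), efficiency E(p) = S(p) / p.
 * The timing values in main() are placeholders, not thesis results. */
static double speedup(double t_serial, double t_parallel)
{
    return t_serial / t_parallel;
}

static double efficiency(double t_serial, double t_parallel, int cores)
{
    return speedup(t_serial, t_parallel) / cores;
}

int main(void)
{
    double t1  = 120.0;  /* assumed single-core baseline, seconds */
    double t16 = 8.5;    /* assumed 16-core run, seconds */
    printf("speedup:    %.2f\n", speedup(t1, t16));
    printf("efficiency: %.2f\n", efficiency(t1, t16, 16));
    return 0;
}

An efficiency near 1.0 at every core count corresponds to the "almost perfect scalability" the abstract reports.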

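The thesis's source code is not reproduced in this record, but the three mapping methods it compares can be illustrated with a hypothetical C sketch that assigns each HTM column in a REGION_ROWS x REGION_COLS grid to one of NCORES cores. All names, the grid dimensions and the 16-core count (chosen to match Adapteva's Epiphany-16) are assumptions for illustration, not the thesis's actual configuration.

#include <stdio.h>

#define REGION_ROWS 16   /* assumed height of the HTM column grid */
#define REGION_COLS 16   /* assumed width of the HTM column grid */
#define NCORES      16   /* e.g. an Adapteva Epiphany 16-core chip */

/* Row-based: whole rows of the grid are kept on one core. */
static int map_row_based(int r, int c)
{
    (void)c;
    return r * NCORES / REGION_ROWS;
}

/* Column-based: whole columns of the grid are kept on one core. */
static int map_column_based(int r, int c)
{
    (void)r;
    return c * NCORES / REGION_COLS;
}

/* Block-based: the grid is tiled into rectangular blocks, one per core
 * (a 4x4 tiling here, matching NCORES = 16). */
static int map_block_based(int r, int c)
{
    int blocks_per_side = 4;  /* sqrt(NCORES) */
    int br = r / (REGION_ROWS / blocks_per_side);
    int bc = c / (REGION_COLS / blocks_per_side);
    return br * blocks_per_side + bc;
}

int main(void)
{
    /* Which core owns HTM column (3, 10) under each mapping? */
    printf("row-based:    core %d\n", map_row_based(3, 10));
    printf("column-based: core %d\n", map_column_based(3, 10));
    printf("block-based:  core %d\n", map_block_based(3, 10));
    return 0;
}

The mappings differ only in which grid cells share a core; as the abstract notes, which one performs best depends on how the variability of the training examples is distributed across the grid.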
Place, publisher, year, edition, pages
2013, 80 p.
Series
Halmstad University Dissertations
National Category
Engineering and Technology
Identifiers
URN: urn:nbn:se:hh:diva-21597
Local ID: IDE1279
OAI: oai:DiVA.org:hh-21597
DiVA: diva2:609351
Subject / course
Computer science and engineering
Presentation
2012-12-20, E320, Halmstad University, Halmstad, 10:15 (English)
Uppsok
Technology
Available from: 2013-03-25. Created: 2013-03-05. Last updated: 2013-03-25. Bibliographically approved.

Open Access in DiVA

fulltext (1985 kB), 6174 downloads
File information
File name: FULLTEXT05.pdf
File size: 1985 kB
Checksum (SHA-512):
91de644d1e00d9ec3704a7c522c8c00fc403b4f6cbdbeb292d7eb54f66572a675dfea8f30380d806eff1af6cc4a335139f9ed94fe0993444ab937c5a53fe90ad
Type: fulltext
Mimetype: application/pdf


Total: 6174 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.

Total: 2699 hits