Analogical mapping with sparse distributed memory: a simple model that learns to generalize from examples
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab. ORCID iD: 0000-0001-5662-825X
2014 (English). In: Cognitive Computation, ISSN 1866-9956, E-ISSN 1866-9964, Vol. 6, no 1, p. 74-88. Article in journal (Refereed). Published.
Abstract [en]

We present a computational model for the analogical mapping of compositional structures that combines two existing ideas known as holistic mapping vectors and sparse distributed memory. The model enables integration of structural and semantic constraints when learning mappings of the type x_i → y_i and computing analogies x_j → y_j for novel inputs x_j. The model has a one-shot learning process, is randomly initialized and has three exogenous parameters: the dimensionality D of representations, the memory size S and the probability χ for activation of the memory. After learning three examples the model generalizes correctly to novel examples. We find minima in the probability of generalization error for certain values of χ, S and the number of different mapping examples learned. These results indicate that the optimal size of the memory scales with the number of different mapping examples learned and that the sparseness of the memory is important. The optimal dimensionality of binary representations is of the order 10^4, which is consistent with a known analytical estimate and the synapse count for most cortical neurons. We demonstrate that the model can learn analogical mappings of generic two-place relationships and we calculate the error probabilities for recall and generalization.
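
To make the abstract's construction concrete, the following is a minimal sketch in Python of how holistic mapping vectors and a sparse distributed memory can be combined, assuming binary spatter-code style representations in which the mapping vector of an example pair is the element-wise XOR of x and y, and the memory consists of S random hard locations with integer counters. The class name MappingSDM, the helper noisy, and the shared-prototype toy example are illustrative stand-ins, not taken from the paper; the parameters D, S and chi correspond to the dimensionality, memory size and activation probability named in the abstract.

import math
import numpy as np
from statistics import NormalDist

# Illustrative sketch (not the authors' implementation): a sparse distributed
# memory that stores holistic mapping vectors m = x XOR y at hard locations
# activated by the input x, and retrieves a mapping for new inputs by summing
# counters and thresholding (majority rule).
class MappingSDM:
    def __init__(self, D=10_000, S=1_000, chi=0.01, seed=0):
        rng = np.random.default_rng(seed)
        # S random binary hard locations (addresses) and their counters.
        self.addresses = rng.integers(0, 2, size=(S, D), dtype=np.uint8)
        self.counters = np.zeros((S, D), dtype=np.int32)
        # Hamming radius chosen so that roughly a fraction chi of the locations
        # activate for a random address (normal approximation to the binomial).
        self.radius = int(NormalDist(mu=D / 2, sigma=math.sqrt(D) / 2).inv_cdf(chi))

    def _active(self, x):
        # Boolean mask of hard locations within the activation radius of x.
        return np.count_nonzero(self.addresses != x, axis=1) <= self.radius

    def learn(self, x, y):
        m = np.bitwise_xor(x, y)                              # holistic mapping vector for (x, y)
        self.counters[self._active(x)] += 2 * m.astype(np.int32) - 1   # {0,1} -> {-1,+1}

    def map(self, x):
        total = self.counters[self._active(x)].sum(axis=0)    # superposed mapping vectors
        m_hat = (total > 0).astype(np.uint8)                  # majority rule back to binary
        return np.bitwise_xor(x, m_hat)                       # y_hat = x XOR m_hat


# Toy usage: three training pairs share one mapping vector m_true, and the
# inputs are noisy copies of a common prototype (a crude stand-in for the
# shared compositional structure that the paper's model exploits).
rng = np.random.default_rng(1)
D = 10_000
sdm = MappingSDM(D=D)
m_true = rng.integers(0, 2, D, dtype=np.uint8)
prototype = rng.integers(0, 2, D, dtype=np.uint8)

def noisy(v, flips=200):
    w = v.copy()
    w[rng.choice(D, size=flips, replace=False)] ^= 1
    return w

for _ in range(3):
    x = noisy(prototype)
    sdm.learn(x, np.bitwise_xor(x, m_true))

x_new = noisy(prototype)                 # novel input with the same underlying structure
y_hat = sdm.map(x_new)
print(np.mean(y_hat == np.bitwise_xor(x_new, m_true)))   # near 1.0 when retrieval succeeds

In this toy setting the three learned pairs share a single underlying mapping vector, so querying with a novel but structurally similar input superposes the stored counters and recovers the mapping by majority rule. The paper's model operates on compositional structures rather than noisy copies of a prototype, so this sketch only illustrates the storage and retrieval mechanics, not the reported generalization results.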

Place, publisher, year, edition, pages
2014. Vol. 6, no 1, p. 74-88
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering
Research subject
Industrial Electronics
Identifiers
URN: urn:nbn:se:ltu:diva-14994
DOI: 10.1007/s12559-013-9206-3
Local ID: e732e1a6-a530-451e-adb5-036bf89d53c7
OAI: oai:DiVA.org:ltu-14994
DiVA, id: diva2:987967
Note
Validated; 2014; 20130125 (bleemr). Available from: 2016-09-29. Created: 2016-09-29. Last updated: 2018-05-04. Bibliographically approved.

Open Access in DiVA

fulltext (514 kB), 138 downloads
File information
File name: FULLTEXT01.pdf
File size: 514 kB
Checksum (SHA-512):
43e140892580a6c1ffacc0ab84e51c040d48925ea52297e17c9f818b9c0693bdce4df710c30659a823a4c0ac435d1b6b8e5cc335a3c44724b91dd6bb953c74f1
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text

Search in DiVA

By author/editor
Emruli, Blerim; Sandin, Fredrik
By organisation
Embedded Internet Systems Lab
In the same journal
Cognitive Computation
Other Electrical Engineering, Electronic Engineering, Information Engineering
