Analogical mapping with sparse distributed memory: a simple model that learns to generalize from examples
2014 (English). In: Cognitive Computation, ISSN 1866-9956, E-ISSN 1866-9964, Vol. 6, no. 1, pp. 74-88. Article in journal (refereed), published.
We present a computational model for the analogical mapping of compositional structures that combines two existing ideas known as holistic mapping vectors and sparse distributed memory. The model enables integration of structural and semantic constraints when learning mappings of the type x_i → y_i and computing analogies x_j → y_j for novel inputs x_j. The model has a one-shot learning process, is randomly initialized and has three exogenous parameters: the dimensionality D of representations, the memory size S and the probability χ for activation of the memory. After learning three examples the model generalizes correctly to novel examples. We find minima in the probability of generalization error for certain values of χ, S and the number of different mapping examples learned. These results indicate that the optimal size of the memory scales with the number of different mapping examples learned and that the sparseness of the memory is important. The optimal dimensionality of binary representations is of the order 10^4, which is consistent with a known analytical estimate and the synapse count for most cortical neurons. We demonstrate that the model can learn analogical mappings of generic two-place relationships and we calculate the error probabilities for recall and generalization.
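The holistic mapping vectors mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's full model (it omits the sparse distributed memory and the activation probability χ); it only shows, under the assumption of Kanerva-style binary spatter codes, how an elementwise-XOR mapping vector M = x ⊕ y maps x to y, and how bundling several mapping vectors by bitwise majority vote yields a noisy but recoverable mapping. All names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # dimensionality of binary representations; ~1e4 is the order reported as optimal

def rand_vec():
    """Random dense binary vector of dimensionality D."""
    return rng.integers(0, 2, D, dtype=np.uint8)

def hamming(a, b):
    """Normalized Hamming distance: 0.5 for unrelated random vectors."""
    return float(np.mean(a != b))

# Three mapping examples x_i -> y_i (here: unrelated random pairs)
pairs = [(rand_vec(), rand_vec()) for _ in range(3)]

# Holistic mapping vector for one pair is the elementwise XOR (binding):
# M_i ^ x_i == y_i exactly, since XOR is its own inverse.
maps = [x ^ y for x, y in pairs]

# Bundle the three mapping vectors by bitwise majority vote.
bundle = (np.sum(maps, axis=0) > 1).astype(np.uint8)

# Applying the bundled mapping to x_1 recovers a noisy copy of y_1:
# each bit of the bundle agrees with M_1 with probability 3/4, so the
# normalized distance to y_1 is ~0.25, versus ~0.5 to an unrelated vector.
x1, y1 = pairs[0]
d_correct = hamming(bundle ^ x1, y1)
d_random = hamming(bundle ^ x1, rand_vec())
print(d_correct, d_random)
```

In the paper's model the mapping vectors are instead stored in and retrieved from a sparse distributed memory of size S, which is what enables generalization to novel inputs rather than mere recall.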
Research subject: Industrial Electronics
Identifiers: URN: urn:nbn:se:ltu:diva-14994; DOI: 10.1007/s12559-013-9206-3; Local ID: e732e1a6-a530-451e-adb5-036bf89d53c7; OAI: oai:DiVA.org:ltu-14994; DiVA: diva2:987967
Validated; 2014; bibliographically approved.