Analogical mapping and inference with binary spatter codes and sparse distributed memory
2013 (English). In: The 2013 International Joint Conference on Neural Networks (IJCNN), Dallas, Texas, 4–9 Aug 2013. Piscataway, NJ: IEEE Communications Society, 2013, pp. 1–8. Conference paper (refereed)
Analogy-making is a key function of human cognition. Therefore, the development of computational models of analogy that learn automatically from examples can lead to significant advances in cognitive systems. Analogies require complex, relational representations of learned structures, which is challenging for both symbolic and neurally inspired models. Vector symbolic architectures (VSAs) are a class of connectionist models for the representation and manipulation of compositional structures, which can be used to model analogy. We study a novel VSA network for the analogical mapping of compositional structures, which integrates an associative memory known as sparse distributed memory (SDM). The SDM enables non-commutative binding of compositional structures, which makes it possible to predict novel patterns in sequences. To demonstrate this property we apply the network to a commonly used intelligence test called Raven’s Progressive Matrices. We present results of simulation experiments for the Raven’s task and calculate the probability of prediction error at the 95% confidence level. We find that non-commutative binding requires sparse activation of the SDM and that 10–20% concept-specific activation of neurons is optimal. The optimal dimensionality of the binary distributed representations of the VSA is on the order of 10^4, which is comparable with previous results and with the average synapse count of neurons in the cerebral cortex.
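The abstract's point about non-commutative binding can be made concrete with a minimal sketch of the standard binary spatter code (BSC) operations. This is an illustration of generic BSC binding by elementwise XOR, not the paper's SDM-based network; the vector dimensionality of 10^4 is taken from the abstract, and all function names are hypothetical. Note that XOR binding is commutative (`bind(a, b) == bind(b, a)`), which is exactly the limitation that motivates the SDM extension for sequence prediction.

```python
import random

random.seed(0)
D = 10_000  # dimensionality on the order of 10^4, as the abstract reports optimal


def rand_hv():
    """Random dense binary hypervector (a binary spatter code)."""
    return [random.randint(0, 1) for _ in range(D)]


def bind(a, b):
    """BSC binding: elementwise XOR. Commutative and self-inverse."""
    return [x ^ y for x, y in zip(a, b)]


def hamming(a, b):
    """Normalized Hamming distance in [0, 1]."""
    return sum(x != y for x, y in zip(a, b)) / D


role, filler = rand_hv(), rand_hv()
pair = bind(role, filler)

# Unbinding with the role recovers the filler exactly (XOR is self-inverse).
recovered = bind(pair, role)
print(hamming(recovered, filler))  # → 0.0

# The bound pair is quasi-orthogonal to an unrelated random vector (~0.5 away).
print(0.4 < hamming(pair, rand_hv()) < 0.6)  # → True

# XOR binding is commutative, so order information is lost without an
# auxiliary mechanism such as the SDM used in the paper.
print(bind(role, filler) == bind(filler, role))  # → True
```

Because `bind(role, filler)` equals `bind(filler, role)`, a pure-XOR BSC cannot distinguish "A before B" from "B before A"; the paper's contribution is to recover such order-sensitive (non-commutative) composition via sparsely activated SDM.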
Place, publisher, year, edition, pages: Piscataway, NJ: IEEE Communications Society, 2013, pp. 1–8
Series: Proceedings of ... International Joint Conference on Neural Networks, ISSN 2161-4393
Research subject: Industrial Electronics
Identifiers: URN: urn:nbn:se:ltu:diva-40730; Local ID: ff850dbb-d6f7-426b-844b-2e2a6fe4303c; ISBN: 978-1-4673-6128-6; OAI: oai:DiVA.org:ltu-40730; DiVA: diva2:1014251
Conference: International Joint Conference on Neural Networks, 04/08/2013 – 09/08/2013
Approved; 2013; 20130408 (bleemr). Bibliographically approved 2016-10-03.