Spike-Based Bayesian-Hebbian Learning in Cortical and Subcortical Microcircuits
Tully, Philip (KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST); University of Edinburgh, School of Informatics). ORCID iD: 0000-0001-8796-3237
2017 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Cortical and subcortical microcircuits are continuously modified throughout life. Despite these ongoing changes, the networks stubbornly maintain their functions, persisting even though destabilizing synaptic and nonsynaptic mechanisms should ostensibly propel them towards runaway excitation or quiescence. What dynamical phenomena act together to balance such learning with information processing? What types of activity patterns do they underpin, and how do these patterns relate to our perceptual experiences? What enables learning and memory operations to occur despite such massive and constant neural reorganization? Progress towards answering many of these questions can be made through large-scale neuronal simulations.

In this thesis, a Hebbian learning rule for spiking neurons inspired by statistical inference is introduced. The spike-based version of the Bayesian Confidence Propagation Neural Network (BCPNN) learning rule involves changes in both synaptic strengths and intrinsic neuronal currents. The model is motivated by molecular cascades whose functional outcomes are mapped onto biological mechanisms such as Hebbian and homeostatic plasticity, neuromodulation, and intrinsic excitability. Temporally interacting memory traces enable spike-timing dependence, a learning regime that is stable yet competitive, postsynaptic activity regulation, spike-based reinforcement learning, and intrinsically graded persistent firing levels.

The thesis seeks to demonstrate how multiple interacting plasticity mechanisms can coordinate reinforcement, auto-associative, and hetero-associative learning within large-scale, spiking, plastic neuronal networks. Spiking neural networks can represent information in the form of probability distributions, and a biophysical realization of Bayesian computation can help reconcile disparate experimental observations.
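
For concreteness, the trace cascade behind the rule can be sketched numerically. The following is a minimal single-synapse sketch, assuming an Euler discretization and illustrative time constants; the variable names and values are not taken from the thesis, whose full specification is in the appended papers:

```python
import numpy as np

def bcpnn_step(s_i, s_j, z, e, p, dt=1.0,
               tau_z=10.0, tau_e=100.0, tau_p=1000.0, kappa=1.0):
    """One Euler step of a BCPNN-style trace cascade for a synapse
    i -> j. s_i, s_j are pre-/postsynaptic spike indicators (0 or 1);
    z, e, p are dicts of traces. All constants are placeholders."""
    # Fast Z traces low-pass filter the spike trains
    z["i"] += dt / tau_z * (s_i - z["i"])
    z["j"] += dt / tau_z * (s_j - z["j"])
    # Slower E (eligibility) traces filter the Z traces
    e["i"] += dt / tau_e * (z["i"] - e["i"])
    e["j"] += dt / tau_e * (z["j"] - e["j"])
    e["ij"] += dt / tau_e * (z["i"] * z["j"] - e["ij"])
    # P traces estimate (co-)activation probabilities; kappa is a
    # plasticity-modulating gain (e.g. a reward signal)
    for k in ("i", "j", "ij"):
        p[k] += dt / tau_p * kappa * (e[k] - p[k])
    # Bayesian weight and intrinsic bias from the probability estimates
    eps = 1e-8
    w_ij = np.log((p["ij"] + eps) / ((p["i"] + eps) * (p["j"] + eps)))
    beta_j = np.log(p["j"] + eps)
    return w_ij, beta_j
```

In this reading, the weight tends towards log P(i,j)/(P(i)P(j)) and the bias towards log P(j), which is how the same trace machinery can drive both synaptic strengths and intrinsic neuronal currents.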

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2017. 89 p.
Series
TRITA-CSC-A, ISSN 1653-5723; 2017:11
Keyword [en]
Bayes' rule, synaptic plasticity and memory modeling, intrinsic excitability, naïve Bayes classifier, spiking neural networks, Hebbian learning, neuromorphic engineering, reinforcement learning, temporal sequence learning, attractor network
National Category
Computer Systems
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-205568
ISBN: 978-91-7729-351-4 (print)
OAI: oai:DiVA.org:kth-205568
DiVA: diva2:1089220
Public defence
2017-05-09, F3, Lindstedtsvägen 26, Stockholm, 13:00 (English)
Opponent
Supervisors
Note

QC 20170421

Available from: 2017-04-21. Created: 2017-04-19. Last updated: 2017-04-21. Bibliographically approved.
List of papers
1. Probabilistic computation underlying sequence learning in a spiking attractor memory network
2013 (English). In: BMC Neuroscience (Online), ISSN 1471-2202, E-ISSN 1471-2202, Vol. 14, Suppl. 1. Article in journal (Refereed). Published.
Place, publisher, year, edition, pages
BioMed Central, 2013
National Category
Bioinformatics (Computational Biology)
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-205609 (URN)
10.1186/1471-2202-14-S1-P236 (DOI)
Note

QC 20170421

Available from: 2017-04-20. Created: 2017-04-20. Last updated: 2017-04-21. Bibliographically approved.
2. Synaptic and nonsynaptic plasticity approximating probabilistic inference
2014 (English). In: Frontiers in Synaptic Neuroscience, ISSN 1663-3563, Vol. 6, article 8. Article in journal (Refereed). Published.
Abstract [en]

Learning and memory operations in neural circuits are believed to involve molecular cascades of synaptic and nonsynaptic changes that lead to a diverse repertoire of dynamical phenomena at higher levels of processing. Hebbian and homeostatic plasticity, neuromodulation, and intrinsic excitability all conspire to form and maintain memories. But it is still unclear how these seemingly redundant mechanisms could jointly orchestrate learning in a more unified system. To this end, a Hebbian learning rule for spiking neurons inspired by Bayesian statistics is proposed. In this model, synaptic weights and intrinsic currents are adapted on-line upon arrival of single spikes, which initiate a cascade of temporally interacting memory traces that locally estimate probabilities associated with relative neuronal activation levels. Trace dynamics enable synaptic learning to readily demonstrate a spike-timing dependence, stably return to a set-point over long time scales, and remain competitive despite this stability. Beyond unsupervised learning, linking the traces with an external plasticity-modulating signal enables spike-based reinforcement learning. At the postsynaptic neuron, the traces are represented by an activity-dependent ion channel that is shown to regulate the input received by a postsynaptic cell and generate intrinsic graded persistent firing levels. We show how spike-based Hebbian-Bayesian learning can be performed in a simulated inference task using integrate-and-fire (IAF) neurons that are Poisson-firing and background-driven, similar to the preferred regime of cortical neurons. Our results support the view that neurons can represent information in the form of probability distributions, and that probabilistic inference could be a functional by-product of coupled synaptic and nonsynaptic mechanisms operating over several timescales. The model provides a biophysical realization of Bayesian computation by reconciling several observed neural phenomena whose combined functional effects have so far been only partially understood.
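
The claim that neurons represent information as probability distributions has a compact log-domain reading: a unit's total input is an intrinsic bias term plus trace-weighted synaptic terms, i.e. a naïve Bayes log-posterior. A sketch with invented numbers (not code or parameters from the paper):

```python
import numpy as np

def bcpnn_support(z_pre, w, beta_j):
    """Log-domain input 'support' of postsynaptic unit j: the intrinsic
    bias approximates log P(j) and each synaptic weight approximates
    log[P(i,j) / (P(i) P(j))], weighted by the presynaptic Z trace."""
    return beta_j + np.dot(w, z_pre)

# Hypothetical example: three presynaptic traces driving one unit
z_pre = np.array([0.8, 0.1, 0.0])   # filtered presynaptic activity
w = np.array([1.2, -0.5, 0.3])      # learned log-odds weights
print(bcpnn_support(z_pre, w, beta_j=-1.0))  # 0.96 - 0.05 - 1.0 = -0.09
```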

Keyword
Bayes' rule, synaptic plasticity and memory modeling, intrinsic excitability, naïve Bayes classifier, spiking neural networks, Hebbian learning
National Category
Natural Sciences; Medical and Health Sciences
Research subject
Computer Science; Theoretical Chemistry and Biology; Biological Physics
Identifiers
urn:nbn:se:kth:diva-165806 (URN)
10.3389/fnsyn.2014.00008 (DOI)
2-s2.0-84904738936 (Scopus ID)
Projects
BrainScaleS; Erasmus Mundus EuroSPIN
Funder
VINNOVA; EU, FP7, Seventh Framework Programme, EU-FP7-FET-269921; Swedish National Infrastructure for Computing (SNIC); Swedish Research Council, VR-621-2009-3807
Note

This document is protected by copyright and was first published by Frontiers. All rights reserved. It is reproduced with permission.

QC 20150430

Available from: 2015-04-29. Created: 2015-04-29. Last updated: 2017-04-19. Bibliographically approved.
3. Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware
2016 (English). In: Frontiers in Neuroanatomy, ISSN 1662-5129, E-ISSN 1662-5129, Vol. 10, article 37. Article in journal (Refereed). Published.
Abstract [en]

SpiNNaker is a digital neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning, since the rule requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10⁴ neurons and 5.1 × 10⁷ plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, to match the run-time of our SpiNNaker simulation, the supercomputer uses approximately 45× more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.
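
The event-driven implementation rests on the trace equations being linear, so between spikes they have a closed-form exponential solution that can be applied lazily when an event arrives rather than integrated at every time-step. A single-trace sketch of the idea (class and method names are assumptions, not the SpiNNaker code):

```python
import math

class EventDrivenTrace:
    """Low-pass spike trace advanced analytically at events only,
    instead of being stepped at every simulation time-step."""
    def __init__(self, tau):
        self.tau = tau      # decay time constant
        self.value = 0.0    # trace value at time self.last_t
        self.last_t = 0.0

    def on_spike(self, t, increment=1.0):
        # Closed-form decay since the last event, then the spike jump
        self.value *= math.exp(-(t - self.last_t) / self.tau)
        self.value += increment
        self.last_t = t

    def read(self, t):
        # Trace value at any later time, without intermediate stepping
        return self.value * math.exp(-(t - self.last_t) / self.tau)
```

The chained Z, E, and P equations admit an analytical solution in the same way (a sum of exponentials), which is what removes the need to update every synapse's state variables each time-step.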

Keyword
SpiNNaker, learning, plasticity, digital neuromorphic hardware, Bayesian confidence propagation neural network (BCPNN), event-driven simulation, fixed-point accuracy
National Category
Computer Systems
Identifiers
urn:nbn:se:kth:diva-185975 (URN)
10.3389/fnana.2016.00037 (DOI)
000373595100002 (ISI)
2-s2.0-84966267433 (Scopus ID)
Note

QC 20160509

Available from: 2016-05-09. Created: 2016-04-29. Last updated: 2017-04-19. Bibliographically approved.
4. Spike-Based Bayesian-Hebbian Learning of Temporal Sequences
2016 (English). In: PLoS Computational Biology, ISSN 1553-734X, E-ISSN 1553-7358, Vol. 12, no. 5, e1004954. Article in journal (Refereed). Published.
Abstract [en]

Many cognitive and motor functions are enabled by the temporal representation and processing of stimuli, but it remains an open issue how neocortical microcircuits can reliably encode and replay such sequences of information. To better understand this, a modular attractor memory network is proposed in which meta-stable sequential attractor transitions are learned through changes to synaptic weights and intrinsic excitabilities via the spike-based Bayesian Confidence Propagation Neural Network (BCPNN) learning rule. We find that the formation of distributed memories, embodied by increased periods of firing in pools of excitatory neurons, together with asymmetrical associations between these distinct network states, can be acquired through plasticity. The model's feasibility is demonstrated using simulations of adaptive exponential integrate-and-fire (AdEx) model neurons. We show that the learning and speed of sequence replay depend on a confluence of biophysically relevant parameters, including stimulus duration, level of background noise, ratio of synaptic currents, and strengths of short-term depression and adaptation. Moreover, sequence elements are shown to flexibly participate multiple times in the sequence, suggesting that spiking attractor networks of this type can support an efficient combinatorial code. The model provides a principled approach towards understanding how multiple interacting plasticity mechanisms can coordinate hetero-associative learning in unison.
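
The replay mechanism can be caricatured far more coarsely than the paper's AdEx simulations: strong learned self-connections hold a pool active, weaker asymmetric forward connections nominate its successor, and adaptation evicts the current winner. A toy winner-take-all sketch in which every number is invented for illustration:

```python
import numpy as np

n = 4                                        # four attractor pools
W = 2.0 * np.eye(n) + 1.0 * np.eye(n, k=-1)  # self + forward links; W[j, i] is i -> j
adapt = np.zeros(n)                          # adaptation level per pool
active = 0                                   # start replay in pool 0
sequence = [active]
for _ in range(30):
    adapt[active] += 0.3                # adaptation builds in the active pool
    adapt *= 0.95                       # and decays everywhere
    drive = W[:, active] - 2.0 * adapt  # recurrent drive minus adaptation
    active = int(np.argmax(drive))      # winner-take-all state update
    sequence.append(active)
print(sequence)  # pools dwell briefly, then hand over: 0 -> 1 -> 2 -> 3
```

In the paper this destabilizing role is played by neural adaptation and short-term depression, with stimulus duration and background noise among the parameters setting replay speed.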

National Category
Computer and Information Science
Identifiers
urn:nbn:se:kth:diva-190519 (URN)
10.1371/journal.pcbi.1004954 (DOI)
000379348100041 (ISI)
2-s2.0-84975865045 (Scopus ID)
Funder
Swedish Research Council, VR-621-2012-3502; VINNOVA; Swedish e-Science Research Center; EU, FP7, Seventh Framework Programme, DFF-1330-00226; EU, FP7, Seventh Framework Programme, EU-FP7-FET-269921
Note

QC 20160817

Available from: 2016-08-17. Created: 2016-08-12. Last updated: 2017-04-19. Bibliographically approved.
5. Functional Relevance of Different Basal Ganglia Pathways Investigated in a Spiking Model with Reward-Dependent Plasticity
(English). Manuscript (preprint) (Other academic).
National Category
Neurosciences
Identifiers
urn:nbn:se:kth:diva-185159 (URN)
Note

QS 2016

Available from: 2016-04-11. Created: 2016-04-11. Last updated: 2017-04-19. Bibliographically approved.

Open Access in DiVA

fulltext (10600 kB, application/pdf)
