---
layout: notes
title: 29. Hebbian Learning
date: 2017-05-06
author: OctoMiao
summary: Simplest learning rule, a.k.a. correlation-based learning
references:
  - name: Spiking Neuron Models, Chapter 10
weight: 29
---

## Outline

## General ideas about the Hebbian learning rule

  1. Hebbian learning: correlation-based learning.
  2. Long-term potentiation: a persistent increase in synaptic efficacy.
  3. Strong high-frequency stimulation that evokes postsynaptic spikes increases the efficacy.
  4. The change in synaptic weight persists for many hours, hence the name long-term potentiation.
  5. Joint activity of the pre- and postsynaptic neurons, i.e., correlated activity, can increase the synaptic weight.
  6. The presynaptic spike has to precede the postsynaptic spike in order to increase the efficacy.

## Rate-based Hebbian learning

  1. Locality: the change of a synaptic efficacy depends only on local variables, namely the pre- and postsynaptic firing rates and the efficacy itself.

    $$ \frac{d}{dt}w_{ij} = F(w_{ij};\nu_i,\nu_j). $$

  2. Cooperativity: the change depends on the joint timing of the pre- and postsynaptic activity. Naively we would simply build a correlation term into $F$, but the book instead Taylor-expands $F$ about $\nu_i = \nu_j = 0$:

    $$ F(w_{ij}; \nu_i, \nu_j) = c_0(w_{ij}) + c_1^{\text{pre}}(w_{ij}) \nu_j + c_1^{\text{post}}(w_{ij}) \nu_i + c^{\text{corr}}(w_{ij}) \nu_i \nu_j + \dots $$

    I think we should expand around some constant baseline activity instead of the zeros the book uses; both choices should give the same result. The second-order cross term is bilinear in the two rates, which is exactly the joint-activity property we need. Keeping only that term, we have

    $$ \frac{d}{dt} w_{ij} = c^{\text{corr}}(w_{ij}) \nu_i \nu_j. $$

  3. We could choose $c^{\text{corr}}(w_{ij}) = \gamma (w_{ij}^{\text{max}} - w_{ij})$. As $w_{ij}$ approaches its maximum $w_{ij}^{\text{max}}$, the learning saturates.

  4. Weights decay, so we also need the zeroth-order term of the Taylor expansion, $c_0(w_{ij})$. We could choose $c_0(w_{ij}) = - \gamma_0 w_{ij}$.

  5. Final form: combine the decay and correlation terms (Eqn. 10.6 in the book); a simulation sketch of the combined rule follows this list.

  6. Competition: the growth of one synaptic weight comes at the expense of the weakening of others.

  7. Other learning rules are possible (Eqn. 10.7).
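
A minimal Python sketch of the combined rule, assuming the choices from points 3 and 4 above, i.e. $\frac{d}{dt} w_{ij} = -\gamma_0 w_{ij} + \gamma (w_{ij}^{\text{max}} - w_{ij}) \nu_i \nu_j$; all constants and rates are illustrative picks, not values from the book.

```python
# Sketch of the rate-based Hebbian rule, assuming the combined form
#   dw/dt = -gamma_0 * w + gamma * (w_max - w) * nu_post * nu_pre
# (decay term c_0 plus correlation term c_corr with a soft upper bound).
# All constants and rates below are illustrative, not values from the book.

gamma_0 = 0.1   # decay rate of the weight (1/s)
gamma = 0.01    # learning rate of the correlation term
w_max = 1.0     # soft upper bound on the weight
dt = 1e-3       # Euler integration time step (s)

def hebbian_step(w, nu_pre, nu_post):
    """One Euler step of the rate-based Hebbian rule."""
    dw = -gamma_0 * w + gamma * (w_max - w) * nu_pre * nu_post
    return w + dt * dw

# Correlated pre/post activity drives w toward its soft bound w_max ...
w = 0.0
for _ in range(5000):                                   # 5 s of joint activity
    w = hebbian_step(w, nu_pre=20.0, nu_post=20.0)      # rates in Hz
print(f"after correlated activity: w = {w:.3f}")

# ... and once both rates are zero, only the decay term -gamma_0 * w remains.
for _ in range(5000):                                   # 5 s of silence
    w = hebbian_step(w, nu_pre=0.0, nu_post=0.0)
print(f"after 5 s of decay:        w = {w:.3f}")
```

The soft-bound factor $(w_{ij}^{\text{max}} - w_{ij})$ is what makes the weight saturate instead of growing without limit.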

## Spike-Timing-Dependent Plasticity

  1. If spike trains are applied, we would expect the net effect to be a summation of the single-spike rule over all pre/post spike pairs (Eqn. 10.13 and Eqn. 10.14); see the sketch below.
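
A minimal Python sketch of that summation, assuming a pair-based rule with an exponential learning window $W(s)$, $s = t^{\text{post}} - t^{\text{pre}}$: potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise. The window shape, amplitudes, and time constants are illustrative assumptions, not the book's exact Eqns. 10.13 and 10.14.

```python
import numpy as np

# Pair-based STDP sketch: the total weight change is the sum of an assumed
# exponential learning window W(s) over all pre/post spike pairs.
# Amplitudes and time constants below are illustrative, not book values.

A_plus, A_minus = 0.01, 0.012        # potentiation / depression amplitudes
tau_plus, tau_minus = 0.020, 0.020   # window time constants (s)

def stdp_window(s):
    """Learning window W(s) for a single spike pair, s = t_post - t_pre."""
    if s > 0:   # pre before post -> potentiation
        return A_plus * np.exp(-s / tau_plus)
    else:       # post before (or simultaneous with) pre -> depression
        return -A_minus * np.exp(s / tau_minus)

def total_weight_change(pre_spikes, post_spikes):
    """Sum the single-pair rule over all pre/post spike pairs."""
    return sum(stdp_window(t_post - t_pre)
               for t_pre in pre_spikes
               for t_post in post_spikes)

# Presynaptic spikes leading postsynaptic ones by 5 ms -> net potentiation.
pre = np.arange(0.0, 0.5, 0.050)     # 20 Hz presynaptic train (s)
post = pre + 0.005                   # post fires 5 ms after each pre spike
print(f"pre leads post: dw = {total_weight_change(pre, post):+.4f}")

# Reverse the timing -> net depression.
print(f"post leads pre: dw = {total_weight_change(post, pre):+.4f}")
```

This reproduces the asymmetry from point 6 of the general ideas: only pre-before-post pairings strengthen the synapse.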