rename of neuron variables: CaSpk* -> Ca*; CaLrn, NrnCa -> LearnCa; Beta1,2 on GPU and fix order of Neuron indexes; consolidate all sim metadata in config fields; deep_fsa stats in place, but something still off relative to original.
rcoreilly committed Dec 1, 2024
1 parent 1903362 commit f57a940
Showing 78 changed files with 5,976 additions and 3,192 deletions.
2 changes: 1 addition & 1 deletion Deep.md
@@ -30,7 +30,7 @@ The predictive pulvinar TRC is created and associated with the *driver* layer, a

This package has 3 primary specialized Layer types:

- * `SuperLayer`: implements the superficial layer 2-3 neurons, which function just like standard axon.Layer neurons, and always represent the _current state_ of things. They learn continuously from predictive learning error signals, are widely interconnected with other cortical areas, and form the basis for the learned representations in other layers. As a computational simplification, they can also directly compute the Burst activation signal that reflects the deep layer 5IB bursting activation, via thresholding of the superficial layer activations (Bursting is thought to have a higher threshold). Activity is represented by the `CaSpkP` value -- `Act` is used only for display purposes!
+ * `SuperLayer`: implements the superficial layer 2-3 neurons, which function just like standard axon.Layer neurons, and always represent the _current state_ of things. They learn continuously from predictive learning error signals, are widely interconnected with other cortical areas, and form the basis for the learned representations in other layers. As a computational simplification, they can also directly compute the Burst activation signal that reflects the deep layer 5IB bursting activation, via thresholding of the superficial layer activations (Bursting is thought to have a higher threshold). Activity is represented by the `CaP` value -- `Act` is used only for display purposes!

* `CTLayer`: implements the layer 6 regular spiking CT corticothalamic neurons that project into the thalamus. They receive the Burst activation via a `CTCtxtPath` pathway type, and integrate that in the CtxtGe value, which is added to other excitatory conductance inputs to drive the overall activation of these neurons. Due to the bursting nature of the Burst inputs, this causes these CT layer neurons to reflect what the superficial layers encoded on the *previous* timestep -- thus they represent a temporally delayed context state.

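As a rough Go sketch of the Burst thresholding that `SuperLayer` is described as computing above: the function below is illustrative only (the `thrRel`/`thrAbs` parameters are assumptions modeled on a relative-plus-absolute threshold scheme, not the actual axon API).

```go
// burst returns the Burst signal for one superficial-layer unit: its CaP
// activity if it clears the threshold, else zero, reflecting the higher
// threshold attributed to layer 5IB bursting. All names are illustrative.
func burst(caP, layerMaxCaP, thrRel, thrAbs float32) float32 {
	thr := thrRel * layerMaxCaP // threshold relative to the layer maximum
	if thr < thrAbs {
		thr = thrAbs // never below the absolute floor
	}
	if caP < thr {
		return 0
	}
	return caP
}
```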
2 changes: 1 addition & 1 deletion GPU.md
@@ -206,7 +206,7 @@ There is a hard max storage buffer limit of 4 GiB (uint32), and `MaxStorageBuffe
+ `Layer.Act.` -> `Layer.Acts.`
+ `Layer.Acts.GABAB.` -> `Layer.Acts.GabaB.`
+ `Layer.Acts.Spike.` -> `Layer.Acts.Spikes.`
- + `Layer.Learn.CaLrn.` -> `Layer.Learn.CaLearn.`
+ + `Layer.Learn.LearnCa.` -> `Layer.Learn.CaLearn.`



4 changes: 2 additions & 2 deletions PCoreBG.md
@@ -88,7 +88,7 @@ The key challenge in BG learning is that the `da` term typically comes significa

* `Tr += sn.Act * rn.Act`

- (we actually use `sn.CaSpkD` and `rn.GeIntMax`: CaSpkD is a spiking Ca variable, and GeIntMax captures the max activity over the trial, because MSN firing is transient).
+ (we actually use `sn.CaD` and `rn.GeIntMax`: CaD is a spiking Ca variable, and GeIntMax captures the max activity over the trial, because MSN firing is transient).
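A minimal Go sketch of this trace accumulation, assuming an illustrative `Synapse` struct and free-standing function (the real axon data layout differs):

```go
// Synapse carries the eligibility trace that waits for the later DA/ACh
// "learn now" signal; illustrative only.
type Synapse struct {
	Tr float32
}

// accumTrace implements `Tr += sn.CaD * rn.GeIntMax` from the text:
// sending spiking-Ca activity times the receiver's max activity over
// the trial.
func accumTrace(sy *Synapse, snCaD, rnGeIntMax float32) {
	sy.Tr += snCaD * rnGeIntMax
}
```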

And then we leverage the _reward salience_ firing properties of cholinergic interneurons (CINs, AKA TANs = tonically active neurons) to provide a later "learn now" signal by firing in proportion to the non-discounted, positive rectified US or CS value (i.e., whenever any kind of reward or punishment signal arrives, or is indicated by a CS). Thus, at the point of high ACh firing, which coincides with DA release, we get:

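The update that follows "we get:" is collapsed in this diff view, so the sketch below is an assumption rather than the file's equation: a generic form of trace-based dopamine learning that applies phasic DA to the accumulated trace at the moment of the ACh signal.

```go
// dWtFromTrace is an assumed generic form (the actual equation is not
// visible in this diff): weight change = learning rate x dopamine x trace,
// gated by the ACh "learn now" signal.
func dWtFromTrace(lr, ach, da, tr float32) float32 {
	if ach <= 0 {
		return 0 // no ACh signal, no learning this trial
	}
	return lr * da * tr
}
```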
@@ -100,7 +100,7 @@ and the trace is effectively reset by a decay factor:

One further wrinkle is that the BG will become permanently stuck if there is no gating at all -- trial and error learning requires "trials" of activity to learn! Thus, we introduce a slow "NoGate" learning case on trials where no neurons gated within the layer:

- * `Tr += -NoGateLRate * ACh * rn.SpkMax * sn.CaSpkD`
+ * `Tr += -NoGateLRate * ACh * rn.SpkMax * sn.CaD`

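And a matching sketch of the NoGate update, where only the formula itself comes from the text (the pointer-based wrapper is illustrative):

```go
// noGateTrace implements `Tr += -NoGateLRate * ACh * rn.SpkMax * sn.CaD`:
// a slow negative trace update on trials where no unit in the layer gated,
// which keeps the BG from getting permanently stuck without any gating.
func noGateTrace(tr *float32, noGateLRate, ach, rnSpkMax, snCaD float32) {
	*tr += -noGateLRate * ach * rnSpkMax * snCaD
}
```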

# Other models
59 changes: 29 additions & 30 deletions README.md


85 changes: 35 additions & 50 deletions axon/act-layer.go

