update links to use /book instead of /ed4
kkoreilly committed Aug 18, 2024
1 parent 829d92f · commit 9ecbb00
Showing 2 changed files with 4 additions and 4 deletions.
examples/inhib/README.md (6 changes: 3 additions & 3 deletions)
@@ -80,7 +80,7 @@ A more intuitive (but somewhat inaccurate in the details) way of understanding t

# Roles of Feedforward and Feedback Inhibition

- Next we assess the importance and properties of the feedforward versus feedback inhibitory pathways by manipulating their relative strengths. The control panel has two parameters that determine the relative contribution of the feedforward and feedback inhibitory pathways: `FFinhibWtScale` applies to the feedforward weights from the input to the inhibitory units, and `FBinhibWtScale` applies to the feedback weights from the hidden layer to the inhibitory units. These parameters (specifically the .rel components of them) uniformly scale the strengths of an entire pathway of connections from one layer to another, and are the arbitrary `WtScale.Rel` (r_k) relative scaling parameters described in *Net Input Detail* Appendix in [CCN TExtbook](https://github.com/CompCogNeuro/ed4).
+ Next we assess the importance and properties of the feedforward versus feedback inhibitory pathways by manipulating their relative strengths. The control panel has two parameters that determine the relative contribution of the feedforward and feedback inhibitory pathways: `FFinhibWtScale` applies to the feedforward weights from the input to the inhibitory units, and `FBinhibWtScale` applies to the feedback weights from the hidden layer to the inhibitory units. These parameters (specifically the .rel components of them) uniformly scale the strengths of an entire pathway of connections from one layer to another, and are the arbitrary `WtScale.Rel` (r_k) relative scaling parameters described in *Net Input Detail* Appendix in [CCN Textbook](https://github.com/CompCogNeuro/book).

* Set `FFInhibWtScale` to 0, effectively eliminating the feedforward excitatory inputs to the inhibitory neurons from the input layer (i.e., eliminating feedforward inhibition).

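The `WtScale.Rel` (r_k) scaling this hunk's paragraph describes normalizes each pathway's relative strength by the sum over all pathways coming into a layer, so setting one pathway's rel value to 0 silences that pathway entirely. Here is a minimal sketch of that normalization, using a hypothetical `relScale` helper as a simplified stand-in for the actual emergent implementation:

```go
package main

import "fmt"

// relScale returns the effective contribution fraction of each pathway
// given its WtScale.Rel (r_k) value: r_k normalized by the sum over all
// pathways into the layer. A simplified illustration, not the library code.
func relScale(rel []float64) []float64 {
	sum := 0.0
	for _, r := range rel {
		sum += r
	}
	out := make([]float64, len(rel))
	for i, r := range rel {
		if sum > 0 {
			out[i] = r / sum
		}
	}
	return out
}

func main() {
	// Pathways into the inhibitory units: feedforward (from the input
	// layer) and feedback (from the hidden layer), equal by default.
	ff, fb := 1.0, 1.0
	fmt.Println(relScale([]float64{ff, fb})) // [0.5 0.5]

	// Setting FFinhibWtScale to 0 removes the feedforward contribution.
	fmt.Println(relScale([]float64{0, fb})) // [0 1]
}
```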
@@ -94,7 +94,7 @@ These exercises should help you to see that a combination of both feedforward an

## Time Constants and Feedforward Anticipation

- We just saw that feedforward inhibition is important for anticipating and offsetting the excitation coming from the inputs to the hidden layer. In addition to this feedforward inhibitory connectivity, the anticipatory effect depends on a difference between excitatory and inhibitory neurons in their rate of updating, which is controlled by the `Dt.GTau` parameters `HiddenGTau` and `InhibGTau` in the control panel (see [CCN Textbook](https://github.com/CompCogNeuro/ed4), Chapter 2). As you can see, the excitatory neurons are updated at tau of 40 (slower), while the inhibitory are at 20 (faster) -- these numbers correspond roughly to how many cycles it takes for a substantial amount of change happen. The faster updating of the inhibitory neurons allows them to more quickly become activated by the feedforward input, and send anticipatory inhibition to the excitatory hidden units before they actually get activated.
+ We just saw that feedforward inhibition is important for anticipating and offsetting the excitation coming from the inputs to the hidden layer. In addition to this feedforward inhibitory connectivity, the anticipatory effect depends on a difference between excitatory and inhibitory neurons in their rate of updating, which is controlled by the `Dt.GTau` parameters `HiddenGTau` and `InhibGTau` in the control panel (see [CCN Textbook](https://github.com/CompCogNeuro/book), Chapter 2). As you can see, the excitatory neurons are updated at tau of 40 (slower), while the inhibitory are at 20 (faster) -- these numbers correspond roughly to how many cycles it takes for a substantial amount of change happen. The faster updating of the inhibitory neurons allows them to more quickly become activated by the feedforward input, and send anticipatory inhibition to the excitatory hidden units before they actually get activated.

* To verify this, click on Defaults, set `InhibGTau` to 40 (instead of the 20 default), and then Run.

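These tau parameters amount to a simple exponential integration: each cycle, the conductance moves toward its driving input by a fraction 1/tau, so tau 20 closes the gap roughly twice as fast as tau 40. A minimal sketch under that assumption (the `step` helper is illustrative, not the library's exact update rule):

```go
package main

import "fmt"

// step integrates conductance g toward its driving input with time constant
// tau (in cycles): larger tau means slower updating.
func step(g, drive, tau float64) float64 {
	return g + (drive-g)/tau
}

func main() {
	gExc, gInh := 0.0, 0.0 // HiddenGTau = 40, InhibGTau = 20
	for cycle := 1; cycle <= 40; cycle++ {
		gExc = step(gExc, 1, 40)
		gInh = step(gInh, 1, 20)
		if cycle%10 == 0 {
			fmt.Printf("cycle %2d: exc=%.2f inh=%.2f\n", cycle, gExc, gInh)
		}
	}
	// The inhibitory trace rises roughly twice as fast, which is what lets
	// feedforward inhibition anticipate the excitatory drive.
}
```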
@@ -148,7 +148,7 @@ This reduces the amount of inhibition on the excitatory neurons. Note that this

# Exploration of FFFB Inhibition

- You should run this section after having read the *FFFB Inhibition Function* section of the [CCN Textbook](https://github.com/CompCogNeuro/ed4).
+ You should run this section after having read the *FFFB Inhibition Function* section of the [CCN Textbook](https://github.com/CompCogNeuro/book).

* Reset the parameters to their default values using the `Defaults` button, click the `BidirNet` on to use that, and then Test to get the initial state of the network. This should reproduce the standard activation graph for the case with actual inhibitory neurons.

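As a rough guide while reading: the FFFB function replaces the explicit inhibitory neurons with a single computed layer-level inhibition, combining a feedforward term driven by the layer's average net input and a feedback term driven by its average activation. The sketch below is schematic only; the parameter names and values (`gi`, `ff`, `fb`, `ff0`, `fbTau`) are assumptions, and the textbook section gives the definitive form:

```go
package main

import (
	"fmt"
	"math"
)

// fffb computes layer inhibition from the average net input (feedforward)
// and average activation (feedback). A schematic sketch, not the exact
// library code; parameter names and defaults are illustrative assumptions.
type fffb struct {
	gi, ff, fb, ff0, fbTau float64
	fbs                    float64 // time-integrated feedback inhibition
}

func (f *fffb) step(avgNetin, avgAct float64) float64 {
	ffs := f.ff * math.Max(avgNetin-f.ff0, 0) // feedforward: driven by net input
	f.fbs += (f.fb*avgAct - f.fbs) / f.fbTau  // feedback: integrates layer activity
	return f.gi * (ffs + f.fbs)
}

func main() {
	inhib := fffb{gi: 1.8, ff: 1.0, fb: 1.0, ff0: 0.1, fbTau: 1.4}
	for cycle := 0; cycle < 5; cycle++ {
		gi := inhib.step(0.5, 0.2)
		fmt.Printf("cycle %d: Gi=%.3f\n", cycle, gi)
	}
}
```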
examples/neuron/README.md (2 changes: 1 addition & 1 deletion)
@@ -29,7 +29,7 @@ Here is a quick overview of each of the variables -- we'll go through them indiv

# Spiking Behavior

- The default parameters that you just ran show the spiking behavior of an axon Neuron. This is implementing a modified version of the Adaptive Exponential function (see [CCN Textbook](https://github.com/CompCogNeuro/ed4)) or AdEx model, which has been shown to provide a very good reproduction of the firing behavior of real cortical pyramidal neurons. As such, this is a good representation of what real neurons do.
+ The default parameters that you just ran show the spiking behavior of an axon Neuron. This is implementing a modified version of the Adaptive Exponential function (see [CCN Textbook](https://github.com/CompCogNeuro/book)) or AdEx model, which has been shown to provide a very good reproduction of the firing behavior of real cortical pyramidal neurons. As such, this is a good representation of what real neurons do.

At the broadest level, you can see the periodic purple spikes that fire as the membrane potential gets over the firing threshold, and it is then reset back to the rest level, from which it then climbs back up again, to repeat the process again and again. Looking at the overall rate of spiking as indexed by the spacing between spikes (i.e., the *ISI* or inter-spike-interval), you can see that the spacing increases over time, and thus the rate decreases over time. This is due to the **adaptation** property of the AdEx model -- the spike rate adapts over time (note: we are using a different form of adaptation than used in the original AdEx model).

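For reference, the standard AdEx dynamics combine a leak current, an exponential spike-initiation term, and an adaptation current w that is incremented at each spike and decays slowly, which is what stretches the inter-spike intervals out over time. The sketch below uses generic textbook-style parameter values, not the modified version this simulation implements:

```go
package main

import (
	"fmt"
	"math"
)

// A minimal AdEx (Adaptive Exponential) neuron sketch, for intuition about
// the spiking and adaptation behavior described above. Parameter values are
// generic illustrative numbers, not this simulation's settings.
func main() {
	var (
		c      = 281.0  // membrane capacitance (pF)
		gL     = 30.0   // leak conductance (nS)
		eL     = -70.6  // leak/rest potential (mV)
		vT     = -50.4  // rheobase threshold (mV)
		deltaT = 2.0    // spike-initiation slope factor (mV)
		tauW   = 144.0  // adaptation time constant (ms)
		a      = 4.0    // subthreshold adaptation (nS)
		b      = 80.5   // spike-triggered adaptation increment (pA)
		iInj   = 1000.0 // injected current (pA)
		dt     = 0.1    // integration step (ms)
	)
	v, w := eL, 0.0
	lastSpike := 0.0
	for t := 0.0; t < 500; t += dt {
		// Leak + exponential spike initiation + input - adaptation.
		dv := (-gL*(v-eL) + gL*deltaT*math.Exp((v-vT)/deltaT) + iInj - w) / c
		dw := (a*(v-eL) - w) / tauW
		v += dv * dt
		w += dw * dt
		if v > 0 { // spike: reset potential, increment adaptation
			v = eL
			w += b
			fmt.Printf("spike at %6.1f ms, ISI %5.1f ms\n", t, t-lastSpike)
			lastSpike = t
		}
	}
	// The printed inter-spike intervals lengthen as w builds up -- the
	// adaptation effect described above.
}
```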
