Merge pull request #300 from kanishk16/doxygen_doc_fixes
Thanks for all your hard work on this @kanishk16
neworderofjamie authored May 29, 2020
2 parents dcc20ae + 28e923f commit fe3e901
Showing 4 changed files with 161 additions and 67 deletions.
33 changes: 15 additions & 18 deletions doxygen/01_Installation.dox
@@ -10,13 +10,13 @@ Point your browser to \a https://github.com/genn-team/genn/releases
and download a release from the list by clicking the relevant source
code button. Note that GeNN is only distributed in the form of source
code due to its code generation design. Binary distributions would not
make sense in this framework and are not provided.
make sense in this framework and hence are not provided.
After downloading continue to install GeNN as described in the \ref installing section below.

\section GitSnapshot Obtaining a Git snapshot

If it is not yet installed on your system, download and install Git
(\a http://git-scm.com/). Then clone the GeNN repository from Github
(\a http://git-scm.com/).Then clone the GeNN repository from Github
\code
git clone https://github.com/genn-team/genn.git
\endcode
@@ -29,14 +29,14 @@ source version upon which the next release will be based. There are other
branches in the repository that are used for specific development
purposes and are opened and closed without warning.
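
For example, after cloning you can make sure you are on the default branch and pull the latest changes before building anything (a generic git sketch, nothing GeNN-specific):
\code
cd genn
# switch to the default branch and fetch the latest changes
git checkout master
git pull
\endcode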

As an alternative to using git you can also download the full content of
GeNN sources clicking on the "Download ZIP" button on the bottom right
An alternative to using git is to download the full content of
GeNN sources by clicking on the "Download ZIP" button on the bottom right
of the GeNN Github page (\a https://github.com/genn-team/genn).

\section installing Installing GeNN

Installing GeNN comprises a few simple steps to create the GeNN
development environment.
development environment:
\note
While GeNN models are normally simulated using CUDA on NVIDIA GPUs, if you want to use GeNN on a machine without an NVIDIA GPU, you can skip steps v and vi and use GeNN in "CPU_ONLY" mode.

@@ -49,7 +49,7 @@ repository.
\code
export PATH=$PATH:$HOME/GeNN/bin
\endcode
to your login script (e.g. `.profile` or `.bashrc`. If you are using
to your login script (e.g. `.profile` or `.bashrc`). If you are using
Windows, the path should be a Windows path as it will be
interpreted by the Visual C++ compiler `cl`, and environment
variables are best set using `SETX` in a Windows cmd window.
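
As an illustration only (the directory shown is an assumption and should be replaced with wherever you placed GeNN), the user PATH could be extended from a cmd window with something like:
\code
rem append the GeNN bin directory to the current PATH and store it for future sessions
setx PATH "%PATH%;C:\Users\me\GeNN\bin"
\endcode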
@@ -65,22 +65,19 @@ repository.
For Windows, download Microsoft Visual Studio Community Edition from
\a https://www.visualstudio.com/en-us/downloads/download-visual-studio-vs.aspx.
When installing Visual Studio, one should select the 'Desktop
development with C++' configuration' and the 'Windows 8.1 SDK' and 'Windows
development with C++' configuration and the 'Windows 8.1 SDK' and 'Windows
Universal CRT' individual components.
Mac users should download and set up Xcode from
\a https://developer.apple.com/xcode/index.html
Linux users should install the GNU compiler collection gcc and g++
, Linux users should install the GNU compiler collection gcc and g++
from their Linux distribution repository, or alternatively from
\a https://gcc.gnu.org/index.html
Be sure to pick CUDA and C++ compiler versions which are compatible
with each other. The latest C++ compiler is not necessarily
compatible with the latest CUDA toolkit.


(v) If your machine has a GPU and you haven't installed CUDA already,
obtain a fresh installation of the NVIDIA CUDA toolkit from
\a https://developer.nvidia.com/cuda-downloads
Again, be sure to pick CUDA and C++ compiler versions which are compatible
with each other. The latest C++ compiler is not necessarily
with each other. The latest C++ compiler need not necessarily be
compatible with the latest CUDA toolkit.
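
On Linux or Mac, a quick way to check which versions are currently installed is to query the compiler and the CUDA toolkit directly (standard tool invocations, not GeNN-specific):
\code
# report the installed GNU C++ compiler and CUDA toolkit versions
g++ --version
nvcc --version
\endcode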

(vi) Set the `CUDA_PATH` variable if it is not already set by the
@@ -91,24 +88,24 @@
in your login script (or, if CUDA is installed in a non-standard
location, the appropriate path to the main CUDA directory).
For most people, this will be done by the CUDA install script and
the default value of /usr/local/cuda is fine. In Windows, CUDA_PATH
is normally already set after installing the CUDA toolkit. If not,
the default value of /usr/local/cuda is fine. In Windows, usually CUDA_PATH
is already set after installing the CUDA toolkit. If not,
set this variable with:
\code
setx CUDA_PATH C:\path\to\cuda
\endcode

This normally completes the installation. Windows useres must close
This normally completes the installation. Windows users must close
and reopen their command window to ensure variables set using `SETX`
are initialised.
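
To check from a fresh command window that the variables are visible, something like the following can be used (standard Windows commands; the output will depend on your installation):
\code
rem print the CUDA_PATH seen by this window
echo %CUDA_PATH%
rem locate the CUDA compiler, if the toolkit installer added it to the PATH
where nvcc
\endcode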

If you are using GeNN in Windows, the Visual Studio development
environment must be set up within every instance of the CMD.EXE command
window used. One can open an instance of CMD.EXE with the development
environment already set up by navigating to Start - All Programs -
Microsoft Visual Studio - Visual Studio Tools - x64 Native Tools Command Prompt. You may wish to
Microsoft Visual Studio - Visual Studio Tools - x64 Native Tools Command Prompt. You may also wish to
create a shortcut for this tool on the desktop, for convenience.


-----
\link Installation Top\endlink | \link Quickstart Next\endlink
20 changes: 10 additions & 10 deletions doxygen/02_Quickstart.dox
@@ -14,20 +14,20 @@
//----------------------------------------------------------------------------
/*! \page Quickstart Quickstart

GeNN is based on the idea of code generation for the involved GPU or
CPU simulation code for neuronal network models but leaves a lot of
freedom how to use the generated code in the final
application. To facilitate the use of GeNN on the background of this
GeNN is based on the idea of code generation for the code that simulates neuronal network models,
either on GPU or CPU. At the same time, it leaves a lot of freedom to users
as to how to use the generated code in their final applications.
To facilitate the use of GeNN on the background of this
philosophy, it comes with a number of complete examples containing both
the model description code that is used by GeNN for code generation
and the "user side code" to run the generated model and safe the
results. Some of the example models such as the \ref ex_mbody use an `generate_run` executable which automates the building and simulation of the model.
Using these executables, running these complete examples should be achievable in a few minutes.
and the "user side code" to run the generated model as well as save the
results. Some of the example models, such as the \ref ex_mbody, use a `generate_run` executable which automates the building and simulation of the model.
Using these executables, running the complete examples should be achievable in a few minutes.
The necessary steps are described below.

\section example Running an Example Model
\subsection unix_quick Unix
In order to build the `generate_run` executable as well as any additional tools required for the model, open a
In order to build the `generate_run` executable as well as other additional tools required for the model, open a
shell and navigate to the `userproject/MBody1_project` directory.
Then type
\code
@@ -60,11 +60,11 @@ generate_run --cpu-only test1
\endcode

\subsection quick_visualising Visualising results
These steps will build and simulate a model of the locust olfactory system with default parameters of 100 projection neurons,
These steps build and simulate a model of the locust olfactory system with default parameters of 100 projection neurons,
1000 Kenyon cells, 20 lateral horn interneurons and 100 output neurons in the mushroom body lobes.
\note If the model isn't built in CPU_ONLY mode it will be simulated on an automatically chosen GPU.

The generate_run tool generates input patterns and writes them to file, compiles and runs the model using these files as inputs and finally output the
The generate_run executable generates input patterns and writes them to files, compiles and runs the model using these files as inputs and finally outputs the
resulting spiking activity. For more information on the options passed to this command see the \ref ex_mbody section.
The results of the simulation can be plotted with
\code
103 changes: 102 additions & 1 deletion userproject/MBody1_project/README.txt
@@ -2,6 +2,107 @@
Locust olfactory system (Nowotny et al. 2005)
=============================================

This project implements the insect olfaction model by Nowotny et al. that demonstrates
self-organized clustering of odours in a simulation of the insect antennal lobe and
mushroom body. As provided, the model works with conductance based Hodgkin-Huxley neurons
and several different synapse types: conductance based (but pulse-coupled) excitatory
synapses, graded inhibitory synapses and synapses with a simplified STDP rule.
This example project contains a helper executable called "generate_run", which
prepares input pattern data, before compiling and executing the model.

To compile it, navigate to genn/userproject/MBody1_project and type:

msbuild ..\userprojects.sln /t:generate_mbody1_runner /p:Configuration=Release

for Windows users, or:

make

for Linux, Mac and other UNIX users.


USAGE
-----

generate_run [OPTIONS] <outname>

Mandatory arguments:
outname: The base name of the output location and output files

Optional arguments:
--debug: Builds a debug version of the simulation and attaches the debugger
--cpu-only: Uses CPU rather than CUDA backend for GeNN
--timing: Uses GeNN's timing mechanism to measure performance and displays it at the end of the simulation
--ftype: Sets the floating point precision of the model to either float or double (defaults to float)
--gpu-device: Sets which GPU device to use for the simulation (defaults to -1 which picks automatically)
--num-al: Number of neurons in the antennal lobe (AL), the input neurons to this model (defaults to 100)
--num-kc: Number of Kenyon cells (KC) in the "hidden layer" (defaults to 1000)
--num-lhi: Number of lateral horn interneurons, implementing gain control (defaults to 20)
--num-dn: Number of decision neurons (DN) in the output layer (defaults to 100)
--gscale: A general rescaling factor for synaptic strength (defaults to 0.0025)
--bitmask: Use bitmasks to represent sparse PN->KC connectivity rather than dense connectivity
--delayed-synapses: Rather than use constant delays of DT throughout, use delays of (5 * DT) ms on KC->DN and
of (3 * DT) ms on DN->DN synapse populations

An example invocation of generate_run using these defaults and recording results with a base name of `test' would be:

generate_run.exe test

for Windows users, or:

./generate_run test

for Linux, Mac and other UNIX users.

Such a command would generate a locust olfaction model with 100 antennal lobe neurons,
1000 mushroom body Kenyon cells, 20 lateral horn interneurons and 100 mushroom body
output neurons, and launch a simulation of it on a CUDA-enabled GPU using single
precision floating point numbers. All output files will be prefixed with "test"
and will be created under the "test" directory. The model that is run is defined
in `model/MBody1.cc`, debugging is switched off and the model would be simulated using
float (single precision floating point) variables.

In more detail, what the generate_run program does is:
a) use other tools to generate input patterns.

b) build the source code for the model by writing neuron numbers into
./model/sizes.h, and executing "genn-buildmodel.sh ./model/MBody1.cc".

c) compile the generated code by invoking "make clean && make" and
run it, e.g. "./classol_sim r1" (see the sketch below).
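
Put differently, a rough manual equivalent of steps (b) and (c), assuming the neuron
numbers have already been written to ./model/sizes.h, would be:

genn-buildmodel.sh ./model/MBody1.cc
make clean && make
./classol_sim r1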

Another example of an invocation that runs the simulation using CPU rather than GPU,
records timing information and uses bitmask connectivity would be:

generate_run.exe --cpu-only --timing --bitmask test

for Windows users, or:

./generate_run --cpu-only --timing --bitmask test

for Linux, Mac and other UNIX users.
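
Options that take values can be combined with the flags in the same way; for example, to double the
number of Kenyon cells and use double precision (the numbers are purely illustrative, and the
space-separated value syntax shown here is an assumption):

./generate_run --num-kc 2000 --ftype double test2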

As provided, the model outputs `test.dn.st', `test.kc.st', `test.lhi.st' and `test.pn.st' files
which contain the spiking activity observed in each population in the simulation. There are two
columns in these ASCII files, the first one containing the time of a spike and the second one,
the ID of the neuron that spiked. MATLAB users can use the scripts in the `matlab` directory to plot
the results of a simulation and Python users can use the plot_spikes.py script in userproject/python.
For more about the model itself and the scientific insights gained from it, see Nowotny et al. referenced below.


MODEL INFORMATION
-----------------

For information regarding the locust olfaction model implemented in this example project, see:

T. Nowotny, R. Huerta, H. D. I. Abarbanel, and M. I. Rabinovich Self-organization in the
olfactory system: One shot odor recognition in insects, Biol Cyber, 93 (6): 436-446 (2005),
doi:10.1007/s00422-005-0019-7
=======

Locust olfactory system (Nowotny et al. 2005)
=============================================

This project implements the insect olfaction model by Nowotny et
al. that demonstrates self-organized clustering of odours in a
simulation of the insect antennal lobe and mushroom body. As provided
@@ -99,4 +200,4 @@ For information regarding the locust olfaction model implemented in this example

T. Nowotny, R. Huerta, H. D. I. Abarbanel, and M. I. Rabinovich Self-organization in the
olfactory system: One shot odor recognition in insects, Biol Cyber, 93 (6): 436-446 (2005),
doi:10.1007/s00422-005-0019-7
72 changes: 34 additions & 38 deletions userproject/Model_Schmuker_2014_classifier_project/README.txt
@@ -1,66 +1,62 @@
Author: Alan Diamond, University of Sussex, 2014
A neuromorphic network for generic multivariate data classification
===================================================================

This project recreates using GeNN the spiking classifier design used in the paper
This project recreates the spiking classifier proposed in the paper by
Michael Schmuker, Thomas Pfeil and Martin Paul Nawrot using GeNN. The classifier design is based on an
abstraction of the insect olfactory system. This example uses the IRIS standard data set
as a test for the classifier.

"A neuromorphic network for generic multivariate data classification"
Authors: Michael Schmuker, Thomas Pfeil, Martin Paul Nawrot

The classifier design is based on an abstraction of the insect olfactory system.
This example uses the IRIS stadard data set as a test for the classifier
To build the model using the GeNN meta compiler, navigate to genn/userproject/Model_Schmuker_2014_project and type:

BUILD / RUN INSTRUCTIONS

Install GeNN from the internet released build, following instruction on setting your PATH etc

Start a terminal session

cd to this project directory (userproject/Model_Schmuker_2014_project)
genn-buildmodel.bat Model_Schmuker_2014_classifier.cc

To build the model using the GENN meta compiler type:
for Windows users (add -d for a debug build), or:

genn-buildmodel.sh Model_Schmuker_2014_classifier.cc

for Linux, Mac and other UNIX systems, or:
for Linux, Mac and other UNIX users.

genn-buildmodel.bat Model_Schmuker_2014_classifier.cc
You would only have to do this at the start, or when you change your actual network model,
i.e. on editing the file, Model_Schmuker_2014_classifier.cc

for Windows systems (add -d for a debug build).
Then to compile the experiment and the GeNN created C/CUDA code type:

You should only have to do this at the start, or when you change your actual network model (i.e. editing the file Model_Schmuker_2014_classifier.cc )
msbuild Schmuker2014_classifier.vcxproj /p:Configuration=Release

Then to compile the experiment plus the GeNN created C/CUDA code type:-
for Windows users (change Release to Debug if using debug mode), or:

make

for Linux, Mac and other UNIX users (add DEBUG=1 if using debug mode), or:
for Linux, Mac and other UNIX users (add DEBUG=1 if using debug mode).

msbuild Schmuker2014_classifier.vcxproj /p:Configuration=Release

for Windows users (change Release to Debug if using debug mode).
Once it compiles you should be able to run the classifier against the included Iris dataset by typing:

Once it compiles you should be able to run the classifier against the included Iris dataset.
Schmuker2014_classifier .

type
for Windows users, or:

./experiment .

for Linux, Mac and other UNIX systems, or:

Schmuker2014_classifier .

for Windows systems.
for Linux, Mac and other UNIX systems.

This is how it works roughly.
The experiment (experiment.cu) controls the experiment at a high level. It mostly does this by instructing the classifier (Schmuker2014_classifier.cu) which does the grunt work.
This is roughly how it works. The experiment (experiment.cu) controls the experiment at a high level.
It mostly does this by instructing the classifier (Schmuker2014_classifier.cu) which does the grunt work.

So the experiment first tells the classifier to set up the GPU with the model and synapse data.
So the experiment first tells the classifier to set up the GPU with the model and the synapse data.

Then it chooses the training and test set data.

It runs through the training set , with plasticity ON , telling the classifier to run with the specfied observation and collecting the classifier decision.
It runs through the training set with plasticity ON, telling the classifier to run with the specified
observation and collecting the classifier decisions.

Then it runs through the test set with plasticity OFF and collects the results in various reporting files.

At the highest level it also has a loop where you can cycle through a list of parameter values e.g. some threshold value for the classifier to use. It will then report on the performance for each value. You should be aware that some parameter changes won't actually affect the classifier unless you invoke a re-initialisation of some sort. E.g. anything to do with VRs will require the input data cache to be reset between values, anything to do with non-plastic synapse weights won't get cleared down until you upload a changed set to the GPU etc.
At the highest level, it also has a loop where you can cycle through a list of parameter values e.g. some
threshold value for the classifier to use. It will then report on the performance for each value.
You should be aware that some parameter changes won't actually affect the classifier unless you invoke
a re-initialisation of some sort. E.g. anything to do with VRs will require the input data cache to be
reset between values, anything to do with non-plastic synapse weights won't get cleared down until you
upload a changed set to the GPU etc.

You should also note there is no option currently to run on CPU, this is not due to the demanding task, it just hasn't been tweaked yet to allow for this (small change).
You should also note there is currently no option to run on CPU; this is not because the task is too
demanding, it just hasn't been tweaked yet to allow for this (a small change).
