Releases: genn-team/genn
GeNN 4.0.1
This release fixes several small bugs found in GeNN 4.0.0 and implements some small features:
User Side Changes
- Improved detection and handling of errors when specifying model parameters and values in PyGeNN.
- The SpineML simulator is now implemented as a library which can be used directly from user applications as well as from the command-line tool.
Bug fixes:
- Fixed typo in the `GeNNModel.push_var_to_device` function in PyGeNN.
- Fixed broken support for Visual C++ 2013.
- Fixed zero-copy mode.
- Fixed typo in tutorial 2.
GeNN 4.0.0
This release is the result of a second round of fairly major refactoring which we hope will make GeNN easier to use and allow it to be extended more easily in future.
However, especially if you have been using GeNN 2.XX syntax, it breaks backward compatibility.
User Side Changes
- Totally new build system: `make install` can be used to install GeNN to a system location on Linux and Mac, and Windows projects work much better in the Visual Studio IDE.
- Python interface now supports Windows and can be installed using binary 'wheels'.
- No need to call `initGeNN()` at the start and `model.finalize()` at the end of all models.
- Initialisation system simplified: if you specify a value or initialiser for a variable or sparse connectivity, it will be initialised by your chosen backend. If you mark it as uninitialised, it is up to you to initialise it in user code between the calls to `initialize()` and `initializeSparse()` (where it will be copied to the device).
- `genn-create-user-project` helper scripts to create Makefiles or MSBuild projects for building user code.
- State variables can now be pushed and pulled individually using the `pull<var name><neuron or synapse name>FromDevice()` and `push<var name><neuron or synapse name>ToDevice()` functions.
- Management of extra global parameter arrays has been somewhat automated.
- `GENN_PREFERENCES` is no longer a namespace; it is now a global struct, so members need to be accessed with `.` rather than `::`.
- `NeuronGroup`, `SynapseGroup`, `CurrentSource` and `NNmodel` all previously exposed a lot of methods that the user wasn't supposed to call but could. These have now all been made protected and are exposed to GeNN internals using derived classes (`NeuronGroupInternal`, `SynapseGroupInternal`, `CurrentSourceInternal`, `ModelSpecInternal`) that make them public using `using` directives.
- Auto-refractory behaviour was previously controlled using `GENN_PREFERENCES::autoRefractory`; this is now controlled on a per-neuron-model basis using the `SET_NEEDS_AUTO_REFRACTORY` macro.
- The functions used for pushing and pulling have been unified somewhat; this means that the `copyStateToDevice` and `copyStateFromDevice` functions no longer copy spikes, and `push<neuron or synapse name>SpikesToDevice` and `pull<neuron or synapse name>SpikesFromDevice` no longer copy spike times or spike-like events.
- Standard models of a leaky integrate-and-fire neuron (`NeuronModels::LIF`) and of an exponentially shaped postsynaptic current (`PostsynapticModels::ExpCurr`) have been added.
- When a model is built using the CUDA backend, the device it was built for is stored using its PCI bus ID, so it will always use the same device.
- The `-s` option for `buildmodel.sh` allows the C++ standard used to build the model to be set.
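As a sketch of the per-variable push and pull functions introduced above, the generated names follow the `push<var name><population name>ToDevice()` pattern; the population `Exc` and variable `V` here are hypothetical:

```cpp
// Hypothetical names: population "Exc", state variable "V".
// Copy V to the device after modifying it on the host...
pushVExcToDevice();
// ...and copy it back to the host, e.g. for recording.
pullVExcFromDevice();
```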
Deprecations
- Yale-format sparse matrices are no longer supported.
- GeNN 2.X syntax for implementing neuron and synapse models is no longer supported.
- The `$(addtoinSyn) = X; $(updatelinsyn);` idiom in weight update models has been replaced by the function-style `$(addToInSyn, X);`.
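As an illustration of the last change, a weight update model that adds a conductance `g` (an illustrative variable name) to the postsynaptic input moves from the old two-statement idiom to a single function-style call:

```cpp
// GeNN 2.X idiom (no longer supported):
//   $(addtoinSyn) = $(g); $(updatelinsyn);
// GeNN 4.0.0 replacement, e.g. inside a weight update model definition:
SET_SIM_CODE("$(addToInSyn, $(g));\n");
```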
GeNN 3.3.0
This release is intended as the last service release for GeNN 3.X.X.
Fixes for serious bugs may be backported if requested but, otherwise, development will be switching to GeNN 4.
User Side Changes
- Postsynaptic models can now have Extra Global Parameters.
- Gamma distribution can now be sampled using `$(gennrand_gamma, a)`. This can be used to initialise variables using `InitVarSnippet::Gamma`.
- Experimental Python interface: all features of GeNN are now exposed to Python through the `pygenn` module.
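A minimal sketch of initialising a variable from a gamma distribution; the parameter values are illustrative and the parameter names are assumptions, so check the `InitVarSnippet::Gamma` documentation for the exact interface:

```cpp
// Draw a variable's initial values from a gamma distribution on the backend
InitVarSnippet::Gamma::ParamValues gammaParams(
    4.0,    // shape "a" (assumed parameter name)
    1.0);   // scale "b" (assumed parameter name)
auto vInit = initVar<InitVarSnippet::Gamma>(gammaParams);
```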
Bug fixes:
- Devices with Streaming Multiprocessor version 2.1 (compute capability 2.0) now work correctly in Windows.
- Seeding of on-device RNGs now works correctly.
- Improvements to accuracy of memory usage estimates provided by code generator.
GeNN 3.2.0
This release extends the initialisation system introduced in 3.1.0 to support the initialisation of sparse synaptic connectivity, adds support for networks with more sophisticated models of synaptic plasticity and delay as well as including several other small features, optimisations and bug fixes for certain system configurations. This release supports GCC >= 4.9.1 on Linux, Visual Studio >= 2013 on Windows and recent versions of Clang on Mac OS X.
User Side Changes
- Sparse synaptic connectivity can now be initialised using small snippets of code run either on GPU or CPU. This can save significant amounts of initialisation time for large models.
- New 'ragged matrix' data structure for representing sparse synaptic connections, which supports initialisation using the new sparse synaptic connectivity initialisation system and enables future optimisations.
- Added support for pre- and postsynaptic state variables for weight update models to allow more efficient implementation of trace-based STDP rules. See the documentation for more details.
- Added support for devices with Compute Capability 7.0 (Volta) to block-size optimizer.
- Added support for a new class of 'current source' model which allows non-synaptic input to be efficiently injected into neurons.
- Added support for heterogeneous dendritic delays.
- Added support for (homogeneous) synaptic back-propagation delays using `SynapseGroup::setBackPropDelaySteps`.
- For long simulations, using single precision to represent simulation time does not work well. Added `NNmodel::setTimePrecision` to allow the data type used to represent time to be set independently.
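The sparse connectivity initialisation mentioned above might be used along these lines, a minimal sketch assuming GeNN's `InitSparseConnectivitySnippet::FixedProbability` snippet with the other synapse population arguments elided:

```cpp
// Connect pre/postsynaptic neuron pairs with 10% probability; the sparse
// structure is then built by the chosen backend (GPU or CPU)
InitSparseConnectivitySnippet::FixedProbability::ParamValues fixedProb(0.1);
// ...passed as the final argument when adding a synapse population:
// model.addSynapsePopulation<...>(...,
//     initConnectivity<InitSparseConnectivitySnippet::FixedProbability>(fixedProb));
```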
Optimisations
- The `GENN_PREFERENCES::mergePostsynapticModels` flag can be used to enable the merging together of postsynaptic models from a neuron population's incoming synapse populations, which improves performance and saves memory.
- On devices with compute capability > 3.5, GeNN now uses the read-only cache to improve the performance of the postsynaptic learning kernel.
Bug fixes:
- Fixed bug enabling support for CUDA 9.1 and 9.2 on Windows.
- Fixed bug in SynDelay example where membrane voltage went to NaN.
- Fixed bug in code generation of `SCALAR_MIN` and `SCALAR_MAX` values.
- Fixed bug in substitution of transcendental functions with single-precision variants.
- Fixed various issues involving using spike times with delayed synapse projections.
GeNN 3.1.1
This release fixes several small bugs found in GeNN 3.1.0 and implements some small features:
User Side Changes
- Added new synapse matrix types `SPARSE_GLOBALG_INDIVIDUAL_PSM`, `DENSE_GLOBALG_INDIVIDUAL_PSM` and `BITMASK_GLOBALG_INDIVIDUAL_PSM` to handle the case where synapses with no individual state have a postsynaptic model with state variables, e.g. an alpha synapse.
Bug fixes:
- Correctly handle aliases which refer to other aliases in SpineML models.
- Fixed issues with presynaptically parallelised synapse populations where the postsynaptic population is small enough for input to be accumulated in shared memory.
GeNN 3.1.0
This release builds on the changes made in 3.0.0 to further streamline the process of building models with GeNN and includes several bug fixes for certain system configurations.
User Side Changes
- Support for simulating models described using the SpineML model description language with GeNN.
- Neuron models can now sample from uniform, normal, exponential or log-normal distributions - these calls are translated to cuRAND when run on GPUs and calls to the C++11 library when run on CPU.
- Model state variables can now be initialised using small snippets of code run either on GPU or CPU. This can save significant amounts of initialisation time for large models.
- New MSBuild build system for Windows - makes developing user code from within Visual Studio much more streamlined.
Bug fixes:
- Workaround for bug found in Glibc 2.23 and 2.24 which causes poor performance on some 64-bit Linux systems (namely on Ubuntu 16.04 LTS).
- Fixed bug encountered when using extra global variables in weight updates.
- Fixed bug in SpineML code generation found in 3.1.0RC.
GeNN 3.0.0
This release is the result of some fairly major refactoring of GeNN which we hope will make it more user-friendly and maintainable in the future.
User Side Changes
- Entirely new syntax for defining models - hopefully terser and less error-prone (see updated documentation and examples for details).
- Continuous integration testing using Jenkins - automated testing and code coverage calculation calculated automatically for Github pull requests etc.
- Support for using zero-copy memory for model variables. Especially on devices such as the NVIDIA Jetson TX1, which has no physical GPU memory, this can significantly improve performance when recording data or injecting it into the simulation from external sensors.
Service release 2.2.3
This release includes minor new features and several bug fixes for certain system configurations.
User Side Changes
- Transitioned feature tests to use Google Test framework.
- Added support for CUDA shader model 6.X.
Bug fixes:
- Fixed problem using GeNN on systems running 32-bit Linux kernels on a 64-bit architecture (Nvidia Jetson modules running old software for example).
- Fixed problem linking against CUDA on Mac OS X El Capitan due to SIP (System Integrity Protection).
- Fixed problems with support code relating to its scope and usage in spike-like event threshold code.
- Disabled use of C++ regular expressions on older versions of GCC.
Please refer to the full documentation for further details, tutorials and complete code documentation.
Service release 2.2.2
Release Notes for GeNN v2.2.2
This release includes minor new features and several bug fixes for certain system configurations.
User Side Changes
- Added support for the new version (2.0) of the Brian simulation package for Python.
- Added a mechanism for setting user-defined flags for the C++ compiler and NVCC compiler, via `GENN_PREFERENCES`.
Bug fixes:
- Fixed a problem with `atomicAdd()` redefinitions on certain CUDA runtime versions and GPU configurations.
- Fixed an incorrect bracket placement bug in code generation for certain models.
- Fixed an incorrect neuron group indexing bug in the learning kernel, for certain models.
- The dry-run compile phase now stores temporary files in the current directory, rather than the temp directory, solving issues on some systems.
- The `LINK_FLAGS` and `INCLUDE_FLAGS` in the common Windows makefile include 'makefile_common_win.mk' are now appended to, rather than being overwritten, fixing issues with custom user makefiles on Windows.
NB: If you are migrating from release 2.1 or earlier please refer to the release notes for Release 2.2 and 2.2.1 as there are a number of alterations you will need to implement for the migration.
Please refer to the full documentation for further details, tutorials and complete code documentation.
Service release 2.2.1
Release Notes for GeNN v2.2.1
This bugfix release fixes some critical bugs which occur on certain system configurations.
Bug fixes:
- (important) Fixed a Windows-specific bug where the CL compiler terminates, incorrectly reporting that the nested scope limit has been exceeded, when a large number of device variables need to be initialised.
- (important) Fixed a bug where, in certain circumstances, outdated generateALL objects are used by the Makefiles, rather than being cleaned and replaced by up-to-date ones.
- (important) Fixed an 'atomicAdd' redeclared or missing bug, which happens on certain CUDA architectures when using the newest CUDA 8.0 RC toolkit.
- (minor) The SynDelay example project now correctly reports spike indexes for the input group.
NB: If you are migrating from release 2.1 or earlier please refer to the release notes for Release 2.2 as there are a number of alterations you will need to implement for the migration.
Please refer to the full documentation for further details, tutorials and complete code documentation.