Commit

activate spack env
Marmaduke Woodman committed Sep 3, 2024
1 parent b6bedbb commit c8d5ee8
Showing 2 changed files with 26 additions and 76 deletions.
3 changes: 2 additions & 1 deletion .github/workflows/build.yml
@@ -5,7 +5,7 @@ jobs:
pip-tvbk:
runs-on: ubuntu-latest
strategy:
- fail-fast: false
+ fail-fast: true
matrix:
python-version: [ "3.10", ]

@@ -61,6 +61,7 @@ jobs:
shell: spack-bash {0}
run: |
cd tvb_kernels
+ spack env activate .
pip install .
pytest
99 changes: 24 additions & 75 deletions tvb_kernels/README.md
@@ -6,85 +6,34 @@ This is a library of computational kernels for TVB.

in order of priority

- - [ ] sparse delay coupling functions
+ - [x] sparse delay coupling functions
- [ ] fused heun neural mass model step functions
- [ ] neural ODE
- [ ] bold / tavg monitors

## building

For now, a `make` invocation is enough: it calls `mkispc.sh` to build
the ISPC kernels, then CMake to build the nanobind wrappers. The next
step will be to convert this to a pure CMake-based process.

## variants

### batch variants

It may be desirable to evaluate a kernel over a batch of simulations in one call.
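As a purely illustrative sketch (not the actual tvb_kernels API), a batched variant could simply carry a leading batch dimension on the state array:

```python
import numpy as np

def coupling_batched(weights, states, a):
    """Batched variant: `states` has shape (n_batch, n_node); one call, many sims."""
    return a * (states @ weights.T)
```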

### parameters

Parameters may be global or "spatialized", meaning a different
value for every node. Kernels should provide separate calls
for both cases.
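For example, the two cases might look like the following minimal sketch; the function names and signatures are invented for illustration, not part of the library:

```python
import numpy as np

def coupling_global(weights, state, a):
    """Global parameter: a single scalar `a` applied to every node."""
    return a * (weights @ state)

def coupling_spatialized(weights, state, a):
    """Spatialized parameter: `a` is an array with one value per node."""
    a = np.asarray(a)
    assert a.shape == state.shape
    return a * (weights @ state)
```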

## matlab

Since some users work in MATLAB, it could be useful to wrap some of the
algorithms as MEX files.

## working notes

### inner loop

for the inner loop, excluding monitors etc., all that's needed is
- cx_all
- step: stochastic Heun on the local model
- buf update
which will then be used internally by TVB.
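To make that concrete, here is a minimal NumPy sketch of those three pieces under assumed names and a toy linear model (`inner_loop`, the buffer layout, and the dfun are illustrative, not the library's actual API):

```python
import numpy as np

def inner_loop(weights, lags, buf, x, dt, n_steps, rng, sigma=0.01):
    """Illustrative inner loop: coupling (cx_all), stochastic Heun step, buf update."""
    n_node = weights.shape[0]
    horizon = buf.shape[0]
    idx = np.arange(n_node)
    for t in range(n_steps):
        # cx_all: delayed, weighted sum of afferent activity for every node
        cx = np.array([weights[i] @ buf[(t - lags[i]) % horizon, idx]
                       for i in range(n_node)])
        # step: stochastic Heun on the local model (a linear toy dfun here)
        f = lambda y: -y + cx
        noise = sigma * np.sqrt(dt) * rng.standard_normal(n_node)
        xp = x + dt * f(x) + noise
        x = x + 0.5 * dt * (f(x) + f(xp)) + noise
        # buf update: write the new state into the circular delay buffer
        buf[t % horizon] = x
    return x
```

For instance, `rng = np.random.default_rng(42)` and a `buf` prefilled with initial history would drive one such run.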

### scalable design

#### parameter sweeps

separating the data from the work arrays allows defining
the data once and reusing it, which is important for large
connectivities.

the work arrays can then be allocated for each simulation to run.
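A rough sketch of that separation (the class and function names are hypothetical):

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class ConnectomeData:
    """Defined once and shared read-only by every simulation in a sweep."""
    weights: np.ndarray   # (n_node, n_node)
    lags: np.ndarray      # (n_node, n_node), delays in steps

@dataclass
class WorkArrays:
    """Allocated per simulation; state and delay buffer are written every step."""
    state: np.ndarray     # (n_node,)
    buf: np.ndarray       # (horizon, n_node)

def make_work(data: ConnectomeData) -> WorkArrays:
    n = data.weights.shape[0]
    horizon = int(data.lags.max()) + 1
    return WorkArrays(state=np.zeros(n), buf=np.zeros((horizon, n)))
```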

#### components

enabling a multi-component design requires some modifications.
as a useful example, consider a model with the following elements:

- cortical field with modified epileptor
- subcortical regions with Jansen-Rit
- cortico-cortical excitatory connectivity
- subcortical excitatory connectivity
- cortical->subcortical inhibitory connectivity
- subcortical->cortical inhibitory connectivity
- dopamine region
- dopa->cortical connectivity

which implies some notions we already have. let's use the
word *domain* for field, regions, etc., and *projection* for
the different connectivities:

- a domain has one or more input *ports* which sum afferent
values from various projections
- a domain has one or more output *ports* which expose efferent
values read by various projections
- a single projection may transport one or more values

changes required:

- proj should not own the afferent cx buffer; rather, take it
  as an operator argument (like t) or just hold a pointer
- a given model should own the input port buffer
- projections should own the delay buffer
- delay buffers should have num_cvars as their last dimension
- once a step is taken, push the new state to "listening" projections

## building

No stable versions are published yet, so install from source with
`pip install .`, or build just a wheel with `pip wheel .`.

For development, prefer an editable install:
```
pip install nanobind scikit-build-core[pyproject]
pip install --no-build-isolation -Ceditable.rebuild=true -ve .
```

## roadmap

- improve API (ownership, checking, etc.)
- variants
- single
- batches
- spatialized parameters
- CUDA ports for JAX, CuPy and Torch users
- MEX functions?
- scalable components
- domains
- projections
- small vm to automate inner loop
