Hackathon 2023
Schedule: September 25-28, 2023 (Monday - Thursday), 8:30 AM to 5:00 PM
Venue: Yale School of Medicine: Steiner Room, SHM L-210, 333 Cedar St, New Haven, CT
This is just a proposal to initiate the discussion!
Day 1: Monday, September 25
8:30 AM: Welcome (15 mins)
Welcome & Logistics - Michael & Robert
8:45 AM: Opening Session (Optional, 30 mins)
Inviting 2-3 prominent external NEURON users/labs to share their NEURON use cases and provide feedback on challenges and wishlists for NEURON's future development. The presentations could follow a predefined structure to address specific questions of interest to us.
- Talk 1: NEURON Usage @ Allen, Feedback, and Wishlist
- Talk 2: NEURON Usage @ ..., Feedback, and Wishlist
9:15 AM: Hackathon Goals (15 mins)
An overview of the proposed technical projects and the objectives for the hackathon. Participants are also welcome to work on other tasks if they haven't proposed a topic.
9:30 AM - 12:00 PM: Hack (2.5 hours)
12:00 PM - 1:00 PM: Lunch (1 hour)
1:00 PM - 4:00 PM: Hack (3 hours)
4:00 PM - 4:15 PM: Coffee Break (15 mins)
4:15 PM - 4:30 PM: Daily Recap (15 mins)
A brief summary of each work package, including achievements, challenges faced, and suggestions for smoother progress.
4:30 PM - 5:00 PM: Lightning Talks (30 mins)
Brief talks and demos to share knowledge about NEURON development.
- Talk 1
- Talk 2
Day 2: Tuesday, September 26
8:30 AM - 12:00 PM: Hack (3.5 hours)
12:00 PM - 1:00 PM: Lunch (1 hour)
1:00 PM - 4:00 PM: Hack (3 hours)
4:00 PM - 4:15 PM: Coffee Break (15 mins)
4:15 PM - 4:30 PM: Daily Recap (15 mins)
4:30 PM - 5:00 PM: Lightning Talks (30 mins)
- Talk 1
- Talk 2
Day 3: Wednesday, September 27
8:30 AM - 12:00 PM: Hack (3.5 hours)
12:00 PM - 1:30 PM: Hackathon Lunch (1.5 hours)
An extended lunch break, involving all participants of the hackathon.
1:30 PM - 4:00 PM: Hack (2.5 hours)
4:00 PM - 4:15 PM: Coffee Break (15 mins)
4:15 PM - 4:30 PM: Daily Recap (15 mins)
4:30 PM - 5:00 PM: Lightning Talks (30 mins)
- Talk 1
- Talk 2
Day 4: Thursday, September 28
8:30 AM - 12:00 PM: Hack (3.5 hours)
12:00 PM - 1:00 PM: Lunch (1 hour)
1:00 PM - 2:00 PM: Hack (1 hour)
2:00 PM - 4:00 PM: Presentations (2 hours)
A short presentation (10-15 mins) from each work package covering achievements, current status, and next steps.
4:00 PM - 4:15 PM: Coffee Break (15 mins)
4:15 PM - 4:45 PM: Next Steps / Discussions
4:45 PM - 5:00 PM: Closing Remarks
We think it is better to settle beforehand on a few topics that we consider worth tackling and well suited to the format of a hackathon, rather than create a huge list of which most will be left untouched (we're still brainstorming topics, so please feel free to add more). These topics could be worked on alone or, ideally, in pairs, to benefit from being together in one place.
Topic: Replace nocmodl with the NMODL Framework
Currently, NMODL code is generated using the old nocmodl transpiler. Meanwhile, the NMODL Framework has become an extremely versatile and mature modern DSL transpiler for NMODL, which we are already using successfully for CoreNEURON. We'd like to make it the only transpiler in the NEURON toolchain, replacing nocmodl. Recently, a lot of work has been done to support language features that were initially left aside (cf. #959 and #958). Ultimately, we will need to write a new code generator in NMODL (following the model of the CoreNEURON-compatible code generator) that produces code compatible with NEURON. We don't expect this item to be fully completed and made production-ready during the hackathon, but starting this work together, in person, would give us a good initial boost and help resolve many of the initial questions that will come up.
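To make the scope concrete, here is a purely illustrative sketch of the kind of lowering such a code generator performs: an NMODL DERIVATIVE-style state equation turned into a plain C++ loop over the mechanism's instance data. The struct and function names are invented for this sketch and are not the actual generator output or NEURON's internal API.

```cpp
// Purely illustrative: NMODL source such as
//     DERIVATIVE states { m' = (minf - m) / mtau }
// could be lowered to a C++ kernel looping over all instances of the
// mechanism.  MechData and nrn_state_sketch are invented names.
#include <cmath>
#include <cstddef>

struct MechData {       // SoA view of one mechanism's instance data
    double* m;
    double* minf;
    double* mtau;
    std::size_t count;
};

void nrn_state_sketch(MechData& md, double dt) {
    for (std::size_t i = 0; i < md.count; ++i) {
        // cnexp-style exact integration step for m' = (minf - m) / mtau
        md.m[i] += (1.0 - std::exp(-dt / md.mtau[i])) * (md.minf[i] - md.m[i]);
    }
}
```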
Topic: Python API Improvements
During the 2022 Hackathon, @Helveg, @ramcdougal, @ferdonline and @nrnhines (see https://github.com/orgs/neuronsimulator/projects/3/views/1?pane=issue&itemId=5659828) started working on improvements to NEURON's Python API. Over the last couple of years, Python has become the preferred interface for driving NEURON models and simulations for many users. While the HOC interpreter will probably have to remain the underlying engine for NEURON, we think the Python interface should become a first-class citizen and the main programmable interface for NEURON. This item is about building on top of the work that has already been done, extending it, and ideally getting to a point where we can merge some of the improvements that were started last time.
Some of the issues that are potentially related are documented here:
The work already done can be found here:
A useful documentation task that is related can be found here:
Topic: Code Cleanup
NEURON has accrued a lot of legacy and dead code. Over the last couple of years, various developers have put significant work into code cleanup, but there is still a lot to go through. Specifically:
- Get rid of incompatibly licensed code
- Remove code paths that are never used and are essentially dead
- Remove disabled code that we know will not be used again or that will be completely replaced in the future (we can always revisit it via the git history)
- Remove branches that we won't be working on anymore, i.e., abandoned work
A number of possible next steps have been described here: Data Structures / Future Work
Topic: Windows Support via WSL
Maintaining the code and build process for MinGW is a pain. While a completely native Windows build of NEURON is possible, we think the required work is too involved and might not be worth it. Instead, the Windows Subsystem for Linux (WSL) could be a great solution that would allow us to focus our efforts on maintaining our Linux (and mostly compatible macOS) code and dropping all MinGW-related code.
The task here is to work with Windows users attending the hackathon to test NEURON on WSL, covering its installation and use, to evaluate how usable it is and what the potential pitfalls are, and to document them. If, with some minor extra work, this works well, we could then decide on a timeline for promoting WSL as the new way of using NEURON on Windows and phasing out MinGW support.
Topic: Reduce CoreNEURON Model File Count
Background:
For CoreNEURON execution with file-based model transfer, up to 4 files are created per rank: 2 for the neuron model, 1 for gap junctions, and 1 for report mapping. While this setup is suitable for a small number of MPI ranks, large-scale simulations with BBP models have highlighted an issue: such runs generate a substantial number of files. For example, 800 nodes × 40 ranks per node × 32 model-building steps × 3 files per rank leads to a total of 3,072,000 files, which significantly impacts I/O performance on compute clusters.
Hackathon Goal:
The objective for this task during the hackathon is as follows:
- Revise the current approach of generating 2 to 4 files per rank.
- Implement an alternative method: either generate a single file for all ranks or adopt a single file per compute node.
Note: We are concentrating on straightforward enhancements, not introducing MPI I/O or other complex implementations. Instead, we'll work with the existing setup while adopting a single-file approach (or minimizing the file count) and writing the model data at pre-calculated offsets. This won't directly reduce the number of write I/O operations, but it will address the challenge of metadata operations and the excessive number of files.
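A minimal sketch of the single-file idea follows, assuming each rank already has its model data as one byte buffer. The function name and the use of MPI_Exscan plus POSIX pwrite() are our illustration, not the existing nrncore_write.cpp code; plain MPI collectives compute the offsets, so no MPI I/O is involved.

```cpp
// Hypothetical sketch: each rank computes the size of its model data,
// derives its byte offset in one shared file via an exclusive prefix sum,
// and writes there with pwrite().  Error handling is omitted for brevity.
#include <mpi.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdint>
#include <vector>

void write_model_single_file(const std::vector<char>& model_bytes,
                             const char* path, MPI_Comm comm) {
    int rank;
    MPI_Comm_rank(comm, &rank);

    // Exclusive prefix sum of per-rank sizes gives each rank's offset.
    std::int64_t my_size = static_cast<std::int64_t>(model_bytes.size());
    std::int64_t my_offset = 0;
    MPI_Exscan(&my_size, &my_offset, 1, MPI_INT64_T, MPI_SUM, comm);
    if (rank == 0) my_offset = 0;  // MPI_Exscan leaves rank 0's result undefined

    // All ranks open the same file and write at disjoint offsets.
    int fd = open(path, O_WRONLY | O_CREAT, 0644);
    pwrite(fd, model_bytes.data(), model_bytes.size(), my_offset);
    close(fd);
}
```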
Preparatory Tasks:
- Review the existing implementation of nrncore_write.cpp in NEURON.
- Enhance the current implementation to accurately calculate the model size:
- Refer to the `part1().rankbytes` variable.
- Note potential discrepancies in the current model size calculation due to additional factors: 1) writing specific elements in ASCII format and 2) inclusion of extra IDs for error validation when writing int/float vectors.
- Prior to the workshop, develop a prototype for calculating the model size and offset. Explore the possibility of updating the `part2()` function to calculate the model size without performing the actual writing (a sketch of this idea follows below).
These details are based on ongoing discussions with @sergiorg-hpc and his work in progress.
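As a hedged illustration of the size-without-writing idea, the write routines could be driven by a sink that either emits bytes or only counts them, so a first pass computes sizes (and hence offsets) and a second pass writes. The Sink type, write_int_vector(), and the check-id value are invented for this sketch, not NEURON's actual implementation.

```cpp
// Illustrative "dry run" pattern: the same routine serves both a counting
// pass (f == nullptr) and a writing pass, keeping the two in sync.
#include <cstdio>
#include <cstddef>

struct Sink {
    std::FILE* f = nullptr;  // nullptr => counting mode, no actual I/O
    std::size_t bytes = 0;
    void write(const void* data, std::size_t n) {
        if (f) std::fwrite(data, 1, n, f);
        bytes += n;
    }
};

void write_int_vector(Sink& s, const int* v, std::size_t n) {
    int check_id = 42;  // stand-in for the extra validation id NEURON writes
    s.write(&check_id, sizeof(check_id));
    s.write(v, n * sizeof(int));
}
```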
Topic: Auto-Vectorization in NEURON
Background:
In terms of execution speed, CoreNEURON holds a 2-4x CPU advantage over NEURON due to: 1) the SoA memory layout for mechanism/model data, and 2) auto-vectorization enabled by proper compiler hints. Recent work by Olli Lupton introduced the SoA memory layout to NEURON, showing up to 2x improvements. The remaining CPU performance gap likely results from a lack of auto-vectorization.
Hackathon Goal:
During the hackathon, we aim to:
- Verify whether the CPU performance gap is due to a lack of auto-vectorization.
- Introduce `#pragma ivdep` or `#pragma omp simd` to check whether the compiler auto-vectorizes the kernels, potentially matching CoreNEURON's CPU performance.
- If a performance gap remains, investigate the cause.
Note that we prefer simple experiments that fit within a day or two. Just to give an idea, I can think of:
- Begin with a ringtest, our standard benchmark (and channel-benchmark later if needed).
- Insert `#pragma ...` in the `nrn_state()` and `nrn_cur()` kernels in `hh.cpp` (see the sketch below).
- And then:
- (Re-)compile the test
- Review vectorization report from the (Intel) compiler
- Compare NEURON vs CoreNEURON performance
The main motivation for this task is to validate our assumption that NEURON, with the recent SoA data-structure changes and improved NMODL code generation, can achieve performance similar to CoreNEURON. Note that we had auto-vectorization with MOD2C-generated kernels, and that required only minimal changes (a few lines). Thus, we believe NEURON with a generated kernel can attain comparable performance.
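A hedged sketch of the experiment follows: an hh-like state update over SoA arrays with a SIMD hint. The function name and signature are invented here and do not match the actual `hh.cpp` kernels; only the pragma placement is the point.

```cpp
// Sketch: a SIMD hint on an hh-like m-gate update over SoA arrays.
// Build with OpenMP SIMD support (e.g. -fopenmp-simd) and inspect the
// compiler's vectorization report to confirm the loop was vectorized.
#include <cmath>
#include <cstddef>

void nrn_state_hh_sketch(double* __restrict m,
                         const double* __restrict minf,
                         const double* __restrict mtau,
                         double dt, std::size_t n) {
    // Asserts no loop-carried dependences so the compiler may emit SIMD code.
    #pragma omp simd
    for (std::size_t i = 0; i < n; ++i) {
        m[i] += (1.0 - std::exp(-dt / mtau[i])) * (minf[i] - m[i]);
    }
}
```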
Preparatory Tasks:
- Check how we use the ringtest and channel-benchmark for benchmarking NEURON vs. CoreNEURON.
- Build NEURON+CoreNEURON with Caliper instrumentation and the Intel compiler. Understand the generated reports and how we cross-check the speedup of MOD file kernels.
- As a reference, check how we were printing pragmas on the MOD2C side.
- Check how to introduce pragmas in nocmodl: note that there are two versions of kernel printing in `noccout.c`; we should look at the code in `c_out_vectorize()`, not `c_out()`.
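For orientation, the sketch below shows schematically what emitting a pragma ahead of a generated loop looks like from the transpiler's side. The function, the loop shape, and the variable names are invented for illustration; they are not the actual `noccout.c` code.

```cpp
// Illustrative only: a transpiler emits the vectorization hint immediately
// before each generated mechanism loop, then the loop header and body.
#include <cstdio>

void print_state_kernel(std::FILE* out, const char* body) {
    std::fprintf(out, "#pragma ivdep\n");
    std::fprintf(out, "for (int i = 0; i < nodecount; ++i) {\n");
    std::fprintf(out, "    %s\n", body);
    std::fprintf(out, "}\n");
}
```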
Possible Lightning Talks (5-10 min how-tos):
- Refactor src/gnu (https://github.com/neuronsimulator/nrn/issues/1330)
- Modern C++ idioms used in NEURON
- Continuous integration
- Live debugging
- NEURON documentation
  - nrn.readthedocs.io
- NEURON building and CI
  - Wheel building, PyPI, neuron-nightly
  - nrn-modeldb-ci
- Testing
  - How to write a test
  - How to run one (or a few, see all output, run directly)
  - Python idioms for tests
  - Extra coverage resulting from a new test
Things to look at before the hackathon:
- NMODL sympy and Eigen
- NEURON developer meetings