Hi, curious about the timeline for this? Clusterless decoding is taking a while for me (over an hour for 50-second recording snippets). I passed use_gpu=True to predict, but it doesn't seem to have an effect. Do I need to specify a GPU-based encoding model for this to work as expected?
edit: wow, that was the answer... it went from over an hour to finishing in 5 seconds once I set the algorithm to 'multiunit_likelihood_gpu'
Ah, glad you found it. Generally, make sure cupy is installed and set use_gpu=True in the predict function to speed up the state-space part of the model; you can also set the likelihood algorithm to use the GPU (e.g. sorted_spikes_algorithm="spiking_likelihood_kde_gpu").
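For anyone landing here later, a minimal sketch of putting both pieces together for the clusterless case. The algorithm string "multiunit_likelihood_gpu" and the use_gpu=True predict flag come from this thread; the ClusterlessClassifier class, the clusterless_algorithm keyword, and the fit/predict call shapes are assumptions about the library's API and may differ by version, so check the docs for your install.

```python
# Sketch only: class/parameter names other than those quoted in this thread are assumed.
from replay_trajectory_classification import ClusterlessClassifier

# cupy must be installed for the GPU code paths to be available
# (the library imports it internally; no explicit import is needed here).

# position, multiunits, and time below are the user's own data
# (position samples, clusterless mark features, and time axis).

classifier = ClusterlessClassifier(
    clusterless_algorithm="multiunit_likelihood_gpu",  # GPU likelihood, per the fix above
)
classifier.fit(position, multiunits)

# use_gpu=True accelerates the state-space (filter/smoother) part of the model.
results = classifier.predict(multiunits, time=time, use_gpu=True)
```

For sorted-spikes decoding, the analogous setting quoted above is sorted_spikes_algorithm="spiking_likelihood_kde_gpu" on the corresponding classifier.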