Alter `nn_isolation` to use existing principal components #3342
base: main
Conversation
I think the only real issue is that the values are now different. So from my perspective it would make the most sense to wait until your other PR is merged, which allows computing individual metrics without redoing all quality metrics, so that a user can go back and recompute if needed. Would we want a warning for at least a version or two saying the implementation has changed, so nn_isolation is not directly comparable with versions < 0.101.x?
Yeah, that makes sense. Note: a bug (fixed in #3138) meant
Could be a good idea. It would also be good to make a quality-metrics policy in the future: do we include lots of implementations, or just "the best" implementation? A good example is isi_violations: https://spikeinterface.readthedocs.io/en/latest/modules/qualitymetrics/isi_violations.html I think the Llobet approximation has been shown to be more accurate and is the same speed as the UMS2000 one. Should we keep both metrics, or make a policy to decide which to keep? And if we find errors in some quality metrics and fix them, do we warn about the change for a version? We could be generous and allow for everything. Then I could call this metric
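For context on what the isi_violations discussion above is about, here is a minimal sketch of the quantity all of those estimators start from: the fraction of inter-spike intervals that fall below the refractory-period threshold. The function name and threshold default are illustrative, not the SpikeInterface implementation, and this deliberately stops short of the UMS2000 or Llobet contamination estimates, which convert the raw count into a false-positive rate.

```python
import numpy as np

def isi_violation_fraction(spike_times, isi_threshold=0.0015):
    """Fraction of inter-spike intervals shorter than the refractory threshold.

    Illustrative only: estimators such as UMS2000 or the Llobet approximation
    go one step further and turn this raw count into a contamination estimate.
    """
    isis = np.diff(np.sort(spike_times))
    if isis.size == 0:
        return 0.0
    return float(np.sum(isis < isi_threshold) / isis.size)

# A clean 5 Hz train has no sub-1.5 ms intervals:
clean = np.arange(0, 10, 0.2)
print(isi_violation_fraction(clean))  # 0.0
```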
I wrote the silhouette score code and added the approximation (from Hruschka) because my computer didn't have enough RAM for the original implementation. Haha. My issue is that if someone (maybe not for nn_isolation, but for other metrics) was using some cutoff and we change the implementation so that their values are all lower, their cutoff no longer works and they have to redo the process of figuring out a new one. I'm not sure of the best solution, whether it be a warning or something else, so I think this is a good topic of discussion for the meeting!
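The memory issue mentioned above is why the "simplified" (centroid-based) silhouette exists: the exact score needs the full pairwise-distance matrix, O(n²) memory, while the simplified variant only needs distances to cluster centroids. A minimal sketch, assuming the Hruschka-style formulation (this is not the SpikeInterface code):

```python
import numpy as np

def simplified_silhouette(X, labels):
    """Centroid-based ('simplified') silhouette score.

    Uses O(n_samples * n_clusters) memory instead of the O(n_samples^2)
    pairwise-distance matrix of the exact score:
      a(i) = distance to own cluster centroid
      b(i) = distance to the nearest other centroid
      s(i) = (b - a) / max(a, b)
    """
    uniq = np.unique(labels)
    centroids = np.stack([X[labels == u].mean(axis=0) for u in uniq])
    # distance from every point to every centroid, shape (n_samples, n_clusters)
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    own = labels[:, None] == uniq[None, :]
    a = d[own]                                 # one own-centroid distance per row
    b = np.where(own, np.inf, d).min(axis=1)   # nearest other centroid
    return float(np.mean((b - a) / np.maximum(a, b)))
```

Two well-separated clusters score close to 1; heavily overlapping clusters drift toward 0.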
Updated the nn_isolation metric to use the already-calculated PCs. The previous implementation of the nn_isolation metric recomputes principal components for each pair of units being compared. This is prohibitively slow, which is why this quality metric is currently removed from the default quality_metric list.
Instead, this PR uses the principal components already calculated by compute_principal_components, which includes all spikes. I think the new implementation is consistent with the references describing the quality metric (https://pubmed.ncbi.nlm.nih.gov/28910621 and https://pubmed.ncbi.nlm.nih.gov/33473216/). Please correct me if you disagree!
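As a rough sketch of what a nearest-neighbor isolation computation does (all names here are illustrative, not the SpikeInterface implementation): pool the PC projections of two units and, for each spike, ask what fraction of its nearest neighbors come from the same unit. A score near 1 means well isolated; near 0.5 means fully mixed.

```python
import numpy as np

def nn_isolation_sketch(pcs_a, pcs_b, n_neighbors=4):
    """Mean fraction of same-unit nearest neighbors in the pooled PC space.

    Brute-force kNN for clarity; pcs_a / pcs_b are (n_spikes, n_dims) arrays
    of PC projections for the two units being compared.
    """
    X = np.vstack([pcs_a, pcs_b])
    labels = np.array([0] * len(pcs_a) + [1] * len(pcs_b))
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)                  # exclude each spike itself
    nn = np.argsort(d, axis=1)[:, :n_neighbors]  # indices of nearest neighbors
    return float((labels[nn] == labels[:, None]).mean())
```

The PR's point is about *which* PC space this runs in: the old code fit PCA on just the two units' spikes per comparison, while the new code reuses the PCs already fit on all spikes.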
It’s also:
The isolation scores are generally worse in the new implementation, because the PCA is applied to all spikes, not just those in the two clusters being compared.
Also updated docstrings and docs to (hopefully) clarify what the metric is calculating.
Benchmark code. Note: num_units is the most important parameter, since there is a max_spikes limit (so a long duration doesn't affect things) and the method uses sparsity (so num_channels doesn't affect things).
Old times:
New times:
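The timing tables themselves aren't reproduced here, but the shape of the benchmark described above can be sketched as a generic harness: scale num_units while holding spikes-per-unit fixed (mimicking the max_spikes cap), and time an arbitrary metric function. `metric_fn` is a placeholder, not the actual PR benchmark code.

```python
import time
import numpy as np

def time_metric(metric_fn, num_units_list, spikes_per_unit=500, n_dims=5, seed=0):
    """Time metric_fn on synthetic per-unit PC arrays while scaling num_units,
    the dominant cost driver per the note above."""
    rng = np.random.default_rng(seed)
    times = {}
    for num_units in num_units_list:
        # one well-separated Gaussian blob of PC projections per unit
        pcs = [rng.normal(loc=10.0 * u, scale=1.0, size=(spikes_per_unit, n_dims))
               for u in range(num_units)]
        t0 = time.perf_counter()
        metric_fn(pcs)
        times[num_units] = time.perf_counter() - t0
    return times
```

Example: `time_metric(my_metric, [4, 8, 16])` returns elapsed seconds per unit count, making the scaling with num_units easy to eyeball.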