
Unify template and quality metrics #3537

Merged

Conversation

chrishalcrow
Collaborator

Begin unifying template and quality metrics.

We recently discussed unifying the template and quality metric extensions. Here’s a proposal for starting that process. They’re already quite similar: the big difference is how they deal with parameters. Quality metrics have a parameter set for each metric, through the qm_params kwarg:

qm_params = {
    'sd_ratio': {
        'censored_period_ms': 4.0
    },
    'synchrony': {
        'synchrony_sizes': (2, 4)
    }
}

while the template metrics have one set of parameters shared between all metrics, through the metrics_kwargs kwarg:

metrics_kwargs = {
    'recovery_window_ms': 0.4,
    'spread_smooth_um': 20
}
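
At the call site, the two extensions are invoked along these lines (a hedged sketch: sorting_analyzer is a placeholder, and exact signatures may differ between versions):

from spikeinterface.qualitymetrics import compute_quality_metrics
from spikeinterface.postprocessing import compute_template_metrics

# per-metric parameters via qm_params
qm = compute_quality_metrics(
    sorting_analyzer,  # placeholder SortingAnalyzer
    metric_names=["sd_ratio", "synchrony"],
    qm_params=qm_params,
)

# one shared parameter set via metrics_kwargs
tm = compute_template_metrics(
    sorting_analyzer,
    metrics_kwargs=metrics_kwargs,
)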

This PR proposes unifying the parameter handling of the two metrics extensions, following the more flexible qm_params pattern and calling the kwarg metric_params in both cases. During the transition period, if we receive the old template-metrics metrics_kwargs, we propagate it to all metrics. Hence, the template metrics kwargs would now look like:

metric_params = {
    'spread': {
        'censored_period_ms': 4.0
    },
    'exp_decay': {
        'censored_period_ms': 100,
        'peak_width_ms': 0.11
    }
}

We also extend the load_params function for ComputeQualityMetrics and ComputeTemplateMetrics for backwards compatibility, and add a get_default_tm_params function to match the existing get_default_qm_params.
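
For illustration, here's a minimal sketch of the backwards-compatibility idea (the function name and structure are hypothetical, not the exact SpikeInterface implementation):

from copy import deepcopy

def _load_legacy_params(params, metric_names):
    # Hypothetical sketch: expand an old-style shared metrics_kwargs dict
    # into the new per-metric metric_params structure.
    if "metrics_kwargs" in params:
        # old notation: one kwargs dict shared by all metrics,
        # so copy it into every metric's slot
        return {name: deepcopy(params["metrics_kwargs"]) for name in metric_names}
    # new notation: already per-metric
    return params.get("metric_params", {})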

This simplifies things like checking whether metrics have already been computed, and makes some neat refactoring possible (TODO). It would also make the code in the curation metrics PR simpler.

Bigger picture, this standardisation is a necessary step if we want to make a more generic MetricExtension class/concept. @alejoe91 discussed a SpikeMetrics class, and it could be a good place for plugins and integration with other projects (e.g. one could imagine a UnitMatchMetrics class which computes the metrics used in UnitMatch).

This is a breaking change, so it's worth discussing. A less disruptive option: keep the metrics_kwargs and qm_params names, but just change the structure.

@chrishalcrow chrishalcrow added enhancement New feature or request postprocessing Related to postprocessing module qualitymetrics Related to qualitymetrics module labels Nov 14, 2024
Member

@alejoe91 alejoe91 left a comment


LGTM @chrishalcrow! Thanks!

@alejoe91 alejoe91 marked this pull request as ready for review December 2, 2024 14:03
@alejoe91
Member

alejoe91 commented Dec 2, 2024

@zm711 wanna take a look or ok to merge?

Collaborator

@zm711 zm711 left a comment


One fix and a couple questions.

src/spikeinterface/postprocessing/template_metrics.py (outdated; resolved)

metric_params = {}
for metric_name in metric_names:
    metric_params[metric_name] = deepcopy(metrics_kwargs)
Collaborator


A deep copy is expensive, but there's no way around it, I guess, right?

Collaborator Author


Yeah, this is the backwards-compatibility case, where the individual metrics_kwargs dict (which used to be forced to be the same for every template metric) gets copied repeatedly into metric_params. Without a deepcopy, metric_params would just be a dict of references to the metrics_kwargs dict, so if one entry were altered, they all would be. Luckily, the dict is just a dict of kwargs (so it's small), and this code only runs on instantiation if someone uses the old notation, so it shouldn't be a performance issue.
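
A tiny self-contained demonstration of the aliasing problem the deepcopy avoids:

from copy import deepcopy

shared = {"peak_width_ms": 0.11}
metric_names = ["spread", "exp_decay"]

# Without deepcopy, every metric points at the same underlying dict:
aliased = {name: shared for name in metric_names}
aliased["spread"]["peak_width_ms"] = 0.2
print(aliased["exp_decay"]["peak_width_ms"])  # 0.2 -- altered everywhere

# With deepcopy, each metric gets an independent copy:
shared = {"peak_width_ms": 0.11}
independent = {name: deepcopy(shared) for name in metric_names}
independent["spread"]["peak_width_ms"] = 0.2
print(independent["exp_decay"]["peak_width_ms"])  # 0.11 -- unchanged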

return metric_params


# a dict converting the name of the metric for computation to the output of that computation
Collaborator


So for templates we don't have the same structure as for quality metrics where we have a separate py file with all these names and a list of metrics?

Collaborator Author


Yeah, it's a bit different. Here we're copying the qm_compute_name_to_column_names dict, which is the important dict for getting the actual dataframe column names. The other stuff in quality_metric_list.py is mostly used for the pca/non-pca split. We could follow the quality_metrics structure and make a templatemetrics folder inside src/spikeinterface/postprocessing? @alejoe91 thoughts?
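
For context, the kind of mapping being discussed looks roughly like this, where each key is the name used to compute a metric and the value lists the dataframe column(s) that computation produces (entries below are illustrative, not the actual contents of the SpikeInterface dict):

# Illustrative sketch only -- one computation can produce several columns.
tm_compute_name_to_column_names = {
    "peak_to_valley": ["peak_to_valley"],
    "exp_decay": ["exp_decay"],
    "velocity": ["velocity_above", "velocity_below"],
}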

src/spikeinterface/postprocessing/template_metrics.py (outdated; resolved)
@alejoe91 alejoe91 merged commit d991382 into SpikeInterface:main Dec 3, 2024
15 checks passed