Unify template and quality metrics #3537
Conversation
LGTM @chrishalcrow! Thanks!
@zm711 wanna take a look or ok to merge?
One fix and a couple questions.
```python
metric_params = {}
for metric_name in metric_names:
    metric_params[metric_name] = deepcopy(metrics_kwargs)
return metric_params
```
a deep copy is expensive, but there's no way around it I guess right?
Yeah, this is the backwards-compatibility case, where the individual `metrics_kwargs` dict (which used to be forced to be the same for every template metric) gets copied repeatedly into `metric_params`. If there isn't a deepcopy, `metric_params` is just a dict of references to the `metrics_kwargs` dict, so if one gets altered, they all do. Luckily, the dict is just a dict of kwargs (so it's small) and this code only runs on instantiation if someone uses the old notation, so it shouldn't be a performance issue.
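To spell out the reference-sharing issue, a minimal sketch (the kwarg and metric names here are placeholders, not the real template-metric parameters):

```python
from copy import deepcopy

# Placeholder kwargs and metric names, for illustration only
metrics_kwargs = {"peak_sign": "neg", "upsampling_factor": 10}
metric_names = ["metric_a", "metric_b"]

# Without deepcopy: every value is a reference to the same dict,
# so mutating one entry mutates them all
shared = {name: metrics_kwargs for name in metric_names}
shared["metric_a"]["peak_sign"] = "pos"
print(shared["metric_b"]["peak_sign"])  # "pos" -- changed everywhere

# With deepcopy: each metric gets an independent copy of the kwargs
independent = {name: deepcopy(metrics_kwargs) for name in metric_names}
independent["metric_a"]["peak_sign"] = "neg"
print(independent["metric_b"]["peak_sign"])  # still "pos" -- unaffected
```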
```python
# a dict converting the name of the metric for computation to the output of that computation
```
So for templates we don't have the same structure as for quality metrics, where we have a separate .py file with all these names and a list of metrics?
Yeah, it's a bit different. Here we're copying the `qm_compute_name_to_column_names` dict, which is the important dict for getting the actual dataframe column names. The other stuff in `quality_metric_list.py` is mostly used for the pca/non-pca split. We could follow the quality_metrics structure and make a `templatemetrics` folder inside `src/spikeinterface/postprocessing`? @alejoe91 thoughts?
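For context, a minimal sketch of the kind of mapping being discussed (entries are illustrative, not the exact contents of the real dict): each computation name maps to the dataframe columns it produces, which is how one computation can expand into several columns:

```python
# Illustrative sketch only -- not the exact contents of the real
# qm_compute_name_to_column_names dict in spikeinterface
qm_compute_name_to_column_names = {
    "num_spikes": ["num_spikes"],          # one computation -> one column
    "drift": ["drift_ptp", "drift_std"],   # one computation -> several columns
}

# Dataframe column names for a set of requested computations
requested = ["num_spikes", "drift"]
columns = [col for name in requested for col in qm_compute_name_to_column_names[name]]
print(columns)  # ['num_spikes', 'drift_ptp', 'drift_std']
```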
Begin unifying template and quality metrics.
We recently discussed unifying the template and quality metric extensions. Here's a proposal for starting that process. They're already quite similar: the big difference is how they deal with parameters. Quality metrics have a parameter set for each metric, through the `qm_params` kwarg:
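For instance, something like this (metric and parameter names are indicative, not exhaustive; `sorting_analyzer` is assumed to be an existing SortingAnalyzer):

```python
# Indicative sketch: each quality metric gets its own parameter dict
sorting_analyzer.compute(
    "quality_metrics",
    metric_names=["snr", "isi_violation"],
    qm_params={
        "snr": {"peak_sign": "neg"},
        "isi_violation": {"isi_threshold_ms": 1.5},
    },
)
```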
while the template metrics have one set of parameters shared between all metrics, through the `metrics_kwargs` kwarg:
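Roughly like this (again with indicative parameter names):

```python
# Indicative sketch: one kwargs dict shared by every template metric
# (sorting_analyzer is an existing SortingAnalyzer)
sorting_analyzer.compute(
    "template_metrics",
    metric_names=["peak_to_valley", "half_width"],
    metrics_kwargs={"peak_relative_threshold": 0.2, "peak_width_ms": 0.1},
)
```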
This PR is a proposal to unify the parameter handling of the two metrics extensions, following the more flexible `qm_params` pattern, calling it `metric_params` in both cases. In the transition period, if we get the template metrics' `metrics_kwargs`, we propagate these to all metrics. Hence, the template metrics kwargs would now look like:
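Something along these lines (parameter names indicative, as above):

```python
# Indicative sketch of the unified pattern: the old shared kwargs are
# propagated so each template metric now has its own (identical) dict
sorting_analyzer.compute(
    "template_metrics",
    metric_names=["peak_to_valley", "half_width"],
    metric_params={
        "peak_to_valley": {"peak_relative_threshold": 0.2, "peak_width_ms": 0.1},
        "half_width": {"peak_relative_threshold": 0.2, "peak_width_ms": 0.1},
    },
)
```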
We also extend the `load_params` function for `ComputeQualityMetric` and `ComputeTemplateMetric` for backwards compatibility, and add a `get_default_tm_params` function to match the existing `get_default_qm_params`.

This simplifies, and makes some neat refactoring (TODO) possible for, e.g. checking if metrics have already been computed. It would also make the code in the curation metrics PR simpler.
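As a rough sketch of that simplification (a hypothetical helper, not actual spikeinterface API): with both extensions storing per-metric parameters under the same key, the "already computed?" check becomes one uniform dict comparison instead of extension-specific logic:

```python
# Hypothetical helper, not actual spikeinterface API
def metrics_already_computed(extension_params, metric_names, metric_params):
    """Return True if every requested metric was already computed
    with the same parameters, regardless of extension type."""
    existing = extension_params.get("metric_params", {})
    return all(
        name in existing and existing[name] == metric_params.get(name, {})
        for name in metric_names
    )
```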
Bigger picture, this standardisation is a necessary step if we want to make a more generic MetricExtension class/concept. @alejoe91 discussed a SpikeMetrics class, and it could be a good place for plugins and integration with other projects (e.g. one could imagine a UnitMatchMetrics class which computes the metrics used in UnitMatch).
Breaking change, so worth discussion. One less disruptive option: keep the `metrics_kwargs` and `qm_params` names, but just change the structure.