Refactor metrics into its own module #4183
Conversation
```python
metrics = pd.DataFrame(index=all_unit_ids, columns=old_metrics.columns)

metrics.loc[not_new_ids, :] = old_metrics.loc[not_new_ids, :]
metrics.loc[new_unit_ids_f, :] = self._compute_metrics(
```
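Restated as a self-contained pandas sketch (the toy unit ids and metric values here are mine, not from the PR), the pattern in this hunk is: allocate a frame over all unit ids, copy surviving rows from the old metrics, then fill the new units.

```python
import numpy as np
import pandas as pd

# Toy stand-ins for the analyzer's data (illustrative, not real metrics).
old_metrics = pd.DataFrame({"snr": [5.0, 3.2]}, index=["u1", "u2"])
all_unit_ids = ["u1", "u3"]   # u2 was merged away, u3 is a new unit
not_new_ids = ["u1"]
new_unit_ids = ["u3"]

# Same pattern as the diff: allocate, copy old rows, fill new rows.
metrics = pd.DataFrame(index=all_unit_ids, columns=old_metrics.columns)
metrics.loc[not_new_ids, :] = old_metrics.loc[not_new_ids, :]
metrics.loc[new_unit_ids, :] = np.nan  # stand-in for self._compute_metrics(...)
```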
Hello, this is a new thing. It'd be great to check whether we can compute a metric before we try to, for the following situation:
Suppose you originally compute a metric using spikeinterface version 103 (or some fork that you've made yourself... ahem).
Then you open your analyzer in si-gui using version 102. There was a new metric introduced in 103, which 102 doesn't know about. When you try to merge, it errors because it can't compute the new metric. So you can't do any merging at all due to the inability to compute one metric.
Or you no longer have the recording when you open the analyzer, so you can't compute sd_ratio or something...
Instead, I'd like to warn if we can't compute a metric and stick in a NaN. We could do that here by checking that metric_names are in self.metric_list.
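A minimal sketch of that warn-and-fill approach, assuming `metric_list` holds objects with a `metric_name` attribute as in the suggestions below (the helper name is hypothetical):

```python
import warnings

def filter_computable_metrics(metric_names, metric_list):
    """Return only the metric names this version knows how to compute.

    Unknown metrics trigger a warning instead of an error; the caller can
    then leave their columns as NaN rather than failing the merge/split.
    """
    available = {m.metric_name for m in metric_list}
    unknown = [name for name in metric_names if name not in available]
    if unknown:
        warnings.warn(
            f"Cannot compute metrics {unknown} with this version; "
            "their values will be left as NaN.",
            UserWarning,
            stacklevel=2,
        )
    return [name for name in metric_names if name in available]
```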
Oh, I think I meant to write this at line 1207, about the merging step, but it also applies to splits!
Nag nag nag
Same here:

```python
available_metric_names = [m.metric_name for m in self.metric_list]
metric_names = [m for m in self.params["metric_names"] if m in available_metric_names]
```
This looks great - love the [...]. I think this is a good chance to remove [...]. I'd vote to take the chance to make multi-channel template metrics included by default: they're very helpful.
I agree! Maybe we can make it the default for channel counts > 64?
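One way that channel-count condition could look (the helper name and the 64-channel threshold here are purely illustrative, not part of the PR):

```python
def include_multi_channel_by_default(num_channels, threshold=64):
    # Hypothetical helper: enable multi-channel template metrics by default
    # only on high-channel-count probes, where they are most informative.
    return num_channels > threshold
```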
```python
    get_default_qm_params,
import warnings

warnings.warn(
```
I don't get the deprecation warning if I do e.g.

```python
from spikeinterface.qualitymetrics import compute_quality_metrics
```

because of some `import *` magic. This should fix it in almost all cases:

```python
if __name__ not in ('__main__', 'builtins'):
    warnings.warn(
        "The module 'spikeinterface.qualitymetrics' is deprecated and will be removed in 0.105.0. "
        "Please use 'spikeinterface.metrics.quality' instead.",
        DeprecationWarning,
        stacklevel=2,
    )
```
cough cough
```python
metrics = pd.DataFrame(index=all_unit_ids, columns=old_metrics.columns)

metrics.loc[not_new_ids, :] = old_metrics.loc[not_new_ids, :]
metrics.loc[new_unit_ids, :] = self._compute_metrics(
```
This will error if we don't know how to compute the metrics in metric_names. So if a metric changes name between versions, we get an error and can't merge/split. I think we should only give _compute_metrics the intersection of metric_names and self.metric_list?
Thoughts??
This should work:

```python
available_metric_names = [m.metric_name for m in self.metric_list]
metric_names = [m for m in self.params["metric_names"] if m in available_metric_names]
```
Yeah, sounds good to me
This PR includes a major refactor of the metrics concept.

It defines a `BaseMetric` with the core metadata of individual metrics, including dtypes, column names, extension dependence, and a compute function. Another `BaseMetricExtension` contains a collection of `BaseMetric`s and deals with most of the machinery.

The `template_metrics`, `quality_metrics`, and a new `spiketrain_metrics` extension are now in the `metrics` module. The latter only includes `num_spikes` and `firing_rate`, which are also imported as quality metrics.

Still finalizing tests, but this should be 90% done.