Conversation

@alejoe91
Member

This PR includes a major refactor of the metrics concept.

It defines a BaseMetric, which holds the core metadata of an individual metric, including dtypes, column names, extension dependencies, and a compute function.
Another BaseMetricExtension contains a collection of BaseMetrics and deals with most of the machinery, including:

  • setting params
  • checking dependencies and removing metrics whose dependencies are not met
  • computing metrics
  • deleting, merging, splitting metrics
  • preparing data that can be shared across metrics (e.g., PCA for PCA metrics, peak info and templates for template metrics)
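The BaseMetric idea described above can be sketched roughly as follows. This is an illustrative mock, not the actual SpikeInterface API: the field names, the firing_rate helper, and the data dict are all assumptions made for the example.

```python
# Hypothetical sketch of the BaseMetric concept: each metric bundles its
# metadata (name, dtype, column names, extension dependencies) with a
# compute function. Names are illustrative, not the real API.
from dataclasses import dataclass, field
from typing import Callable, Optional


@dataclass
class BaseMetric:
    metric_name: str
    dtype: str
    column_names: list
    depends_on: list = field(default_factory=list)  # required extensions
    compute: Optional[Callable] = None


def firing_rate(spike_counts, duration_s):
    # spikes per second for each unit
    return {unit: n / duration_s for unit, n in spike_counts.items()}


fr_metric = BaseMetric(
    metric_name="firing_rate",
    dtype="float64",
    column_names=["firing_rate"],
    compute=lambda data: firing_rate(data["spike_counts"], data["duration_s"]),
)

# shared "prepared data" handed to the metric's compute function
data = {"spike_counts": {0: 100, 1: 50}, "duration_s": 10.0}
rates = fr_metric.compute(data)  # → {0: 10.0, 1: 5.0}
```

A BaseMetricExtension-style container would then iterate over a list of such objects, drop the ones whose depends_on extensions are missing, and assemble the results into a DataFrame.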

The template_metrics, quality_metrics, and a new spiketrain_metrics extensions are now in the metrics module. The latter only includes num_spikes and firing_rate, which are also imported as quality metrics.

Still finalizing tests, but this should be 90% done

@alejoe91 alejoe91 added the qualitymetrics Related to qualitymetrics module label Oct 22, 2025
metrics = pd.DataFrame(index=all_unit_ids, columns=old_metrics.columns)

metrics.loc[not_new_ids, :] = old_metrics.loc[not_new_ids, :]
metrics.loc[new_unit_ids_f, :] = self._compute_metrics(
Member

Hello, this is a new thing. It'd be great to check if we can compute this before we try to do it, for the following situation:

Suppose you originally compute a metric using spikeinterface version 103 (or some fork that you've made yourself... ahem).
Then you open your analyzer in si-gui using version 102. There was a new metric introduced in 103, which 102 doesn't know about. When you try to merge, it errors because it can't compute the new metric. So you can't do any merging at all, just because one metric can't be computed.
Or you no longer have the recording when you open, so you can't compute sd_ratio or something....

Instead, I'd like to warn if we can't compute and stick in a NaN. We could do that here by checking that metric_names are in self.metric_list.
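A rough sketch of the warn-and-NaN behaviour being proposed. The merge_metrics function and the placeholder recomputation are hypothetical, just to show the shape: unknown metrics get a warning and NaN columns instead of raising, so merges still go through.

```python
# Sketch: keep columns for metrics this version cannot compute, filled
# with NaN, instead of erroring out during a merge. Names are illustrative.
import warnings

import numpy as np
import pandas as pd


def merge_metrics(old_metrics, computable_names, unit_ids):
    merged = pd.DataFrame(index=unit_ids, columns=old_metrics.columns, dtype="float64")
    for name in old_metrics.columns:
        if name in computable_names:
            merged[name] = 1.0  # placeholder for the real recomputation
        else:
            warnings.warn(f"Cannot compute metric '{name}'; filling with NaN")
            merged[name] = np.nan
    return merged


# 'new_metric' was computed by a newer version and is unknown here
old = pd.DataFrame({"snr": [2.0], "new_metric": [0.5]}, index=[0])
result = merge_metrics(old, computable_names={"snr"}, unit_ids=[0, 1])
```

After the call, result still has a new_metric column (all NaN) and the merge completes instead of failing.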

Member

Oh, I think I meant to write this at line 1207 about the merging step, but also applies to splits!

Member

Nag nag nag

Member Author

Same here:

available_metric_names = [m.metric_name for m in self.metric_list]
metric_names = [m for m in self.params["metric_names"] if m in available_metric_names]

@chrishalcrow
Copy link
Member

This looks great - love the prepare_data idea! And this refactor will make it less awkward to develop new metrics. And the pca file is much neater now - nice.

I think this is a good chance to remove the compute_{metric_name} stuff from the docs (modules/qualitymetrics) and replace it with analyzer.compute("quality_metrics", metric_names={metric_name}) as our recommended method. More awkward, but much better for provenance etc.

I'd vote to take the chance to make multi channel template metrics included by default: they're very helpful.

@alejoe91
Copy link
Member Author

I'd vote to take the chance to make multi channel template metrics included by default: they're very helpful.

I agree! Maybe we can make it the default for channel counts > 64?

@alejoe91 alejoe91 added this to the 0.104.0 milestone Oct 31, 2025
get_default_qm_params,
import warnings

warnings.warn(
Member

@chrishalcrow chrishalcrow Oct 31, 2025

I don't get the deprecation warning if I do e.g.

from spikeinterface.qualitymetrics import compute_quality_metrics

Because of some import * magic. This should fix it in almost all cases:

if __name__ not in ('__main__', 'builtins'):
    warnings.warn(
        "The module 'spikeinterface.qualitymetrics' is deprecated and will be removed in 0.105.0. "
        "Please use 'spikeinterface.metrics.quality' instead.",
        DeprecationWarning,
        stacklevel=2,
    )

Member

cough cough

metrics = pd.DataFrame(index=all_unit_ids, columns=old_metrics.columns)

metrics.loc[not_new_ids, :] = old_metrics.loc[not_new_ids, :]
metrics.loc[new_unit_ids, :] = self._compute_metrics(
Member

This will error if we don't know how to compute the metrics in metric_names. So if a metric changes name between versions, we get an error and can't merge/split. I think we should only give _compute_metrics the intersection of metric_names and self.metric_list?

Member

Thoughts??

Member Author

This should work:

available_metric_names = [m.metric_name for m in self.metric_list]

metric_names = [m for m in self.params["metric_names"] if m in available_metric_names]
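For illustration, the filtering snippet behaves like this with mocked metric objects (SimpleNamespace stands in for the BaseMetric instances in self.metric_list; the metric names are made up):

```python
# Mock of the proposed fix: only keep the metric names the current
# version actually knows how to compute.
from types import SimpleNamespace

# stand-ins for self.metric_list and self.params
metric_list = [SimpleNamespace(metric_name=n) for n in ("snr", "firing_rate")]
params = {"metric_names": ["snr", "firing_rate", "brand_new_metric"]}

available_metric_names = [m.metric_name for m in metric_list]
metric_names = [m for m in params["metric_names"] if m in available_metric_names]
# "brand_new_metric" is silently dropped, so merge/split can proceed
```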

Member

Yeah, sounds good to me

@alejoe91 alejoe91 marked this pull request as ready for review November 25, 2025 09:30