
More image metrics

Released by @Borda · 20 Mar 2025

This release of TorchMetrics delivers a range of new features and enhancements across multiple domains, further solidifying its position as a leading tool for machine learning metrics. In the image domain, notable additions include the ARNIQA and DeepImageStructureAndTextureSimilarity metrics, which provide new ways to assess image quality and structural similarity. In addition, the CLIPScore metric now supports more models and processors, expanding its versatility in image-text alignment tasks.
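As a quick illustration of the image-text alignment use case, here is a minimal CLIPScore sketch following the library's documented usage; the model string is just one of the supported checkpoints, and the random tensor stands in for a real image.

```python
import torch
from torchmetrics.multimodal.clip_score import CLIPScore

# Score how well a caption matches an image (random placeholder image here).
metric = CLIPScore(model_name_or_path="openai/clip-vit-base-patch16")
image = torch.randint(255, (3, 224, 224), generator=torch.manual_seed(42))
score = metric(image, "a photo of a cat")
print(score)  # scalar tensor; higher means better image-text alignment
```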

Beyond image analysis, the regression package gains the JensenShannonDivergence metric, a symmetric, bounded measure for comparing probability distributions. The clustering package also sees a notable update with the introduction of the ClusterAccuracy metric, which helps evaluate how well predicted cluster assignments agree with reference labels.
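A minimal sketch of the new JensenShannonDivergence metric is shown below; the import path follows from it living in the regression package, and the call pattern (predicted and target probability distributions over the last dimension) is assumed to mirror the existing KLDivergence metric rather than being confirmed by these notes.

```python
import torch
from torchmetrics.regression import JensenShannonDivergence

# Two batches of probability distributions (each row sums to 1).
preds = torch.tensor([[0.35, 0.25, 0.40], [0.10, 0.70, 0.20]])
target = torch.tensor([[0.30, 0.30, 0.40], [0.05, 0.80, 0.15]])

# Assumed call pattern, mirroring torchmetrics.regression.KLDivergence.
jsd = JensenShannonDivergence()
print(jsd(preds, target))  # symmetric divergence; 0 means identical distributions
```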

In the realm of classification, the Equal Error Rate (EER) metric has been added; it reports the operating point at which the false acceptance and false rejection rates are equal, a standard summary for verification-style tasks. Furthermore, the MeanAveragePrecision metric now includes a functional interface, so the score can be computed in a single call without instantiating the modular class.
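A rough sketch of the new EER metric in the binary case is below; the class name BinaryEER and its argument conventions are assumptions based on the library's usual Binary*/Multiclass*/Multilabel* naming and are not confirmed by these notes.

```python
import torch
# Assumed import path and class name; the release only confirms an EER metric was added.
from torchmetrics.classification import BinaryEER

preds = torch.tensor([0.10, 0.40, 0.35, 0.80])  # predicted probabilities
target = torch.tensor([0, 0, 1, 1])             # ground-truth binary labels

# EER is the operating point where the false acceptance rate equals the false rejection rate.
eer = BinaryEER()
print(eer(preds, target))
```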

These updates collectively enhance the capabilities of TorchMetrics, making it an even more comprehensive and indispensable resource for machine learning practitioners and researchers.

[1.7.0] - 2025-03-20

Added

  • Additions to image domain:
    • Added ARNIQA metric (#2953)
    • Added DeepImageStructureAndTextureSimilarity (#2993)
    • Added support for more models and processors in CLIPScore (#2978)
  • Added JensenShannonDivergence metric to regression package (#2992)
  • Added ClusterAccuracy metric to cluster package (#2777)
  • Added Equal Error Rate (EER) to classification package (#3013)
  • Added functional interface to MeanAveragePrecision metric (#3011)

Changed

  • Making num_classes optional for one-hot inputs in MeanIoU (#3012)
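A minimal sketch of what this change enables, assuming MeanIoU's existing input_format="one-hot" option: with one-hot inputs the class count can be inferred from the channel dimension, so num_classes is omitted below.

```python
import torch
from torchmetrics.segmentation import MeanIoU

# Build valid one-hot masks of shape (batch, num_classes, H, W) from index labels.
pred_labels = torch.randint(0, 3, (2, 16, 16))
true_labels = torch.randint(0, 3, (2, 16, 16))
preds = torch.nn.functional.one_hot(pred_labels, num_classes=3).movedim(-1, 1)
target = torch.nn.functional.one_hot(true_labels, num_classes=3).movedim(-1, 1)

# num_classes is no longer required for one-hot inputs; it is inferred from dim 1.
miou = MeanIoU(input_format="one-hot")
print(miou(preds, target))
```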

Removed

  • Removed Dice from classification (#3017)

Fixed

  • Fixed edge case in integration between class-wise wrapper and metric tracker (#3008)
  • Fixed IndexError in MultiClassAccuracy when using top_k with a single sample (#3021)

Key Contributors

@Isalia20, @LorenzoAgnolucci, @nathanpainchaud, @rittik9, @SkafteNicki

If we forgot someone due to not matching commit email with GitHub account, let us know :]


Full Changelog: v1.6.0...v1.7.0