
feat!: Implement covariance-based impact calculations #519


Open · wants to merge 23 commits into master

Conversation

@MoAly98 (Collaborator) commented Mar 12, 2025

This closes #513. It replaces the previous PR (#515), whose commit history had become unmanageable.

To-do for this:

  • Save the impact method used in ranking results as metadata
  • Create alternative-looking plots per method
  • Add CLI support for method specification
  • Add the pure statistical-uncertainty impact to the impacts summary

Some coverage is expected to be missing in this PR, since the following features are implemented but not yet exposed in the API:

  • Impacts per modifier type are not tested
  • Impact summaries are not tested

These should be addressed in a follow-up PR in which the API is exposed and tests are added. That can probably be a PR of its own, where we return a grouped impact table/plot.

* BREAKING CHANGE: the default ranking method switches to covariance-based impacts
* Add an option in `fit.ranking` to specify which impact-calculation method should be used, supporting NP shifting and the covariance-based method
* Add a placeholder (no implementation yet) for computing impacts by shifting global observables (auxiliary data)
* Modularise the computation of impacts via the different methods
* Return an impacts summary (total systematic / statistical / normfactor / MC-stat impacts) from impact calculations
* All impact calculations return the parameter impacts per modifier type, as well as the combined list of impacts across all modifier types
* The `RankingResults` object now records which impact method was used
* A different plotting aesthetic is implemented for covariance-based impacts
* Ranking plot file names now carry the impact method as a suffix
* The impact-computation method can be specified from the CLI
* Stat-only impacts are computed and stored in the impacts summary
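For readers unfamiliar with the covariance-based approach, the core idea can be sketched as follows. This is a minimal illustration of the standard approximation, not the PR's actual implementation; the function name and indexing convention are assumptions. The impact of a nuisance parameter θᵢ on the POI is approximated as cov(POI, θᵢ) / σ(θᵢ), i.e. the correlated shift of the POI per one-sigma shift of θᵢ, read directly off the post-fit covariance matrix instead of re-running fits with shifted parameters.

```python
import numpy as np

def covariance_impacts(cov, poi_index=0):
    """Approximate per-parameter impacts on the POI from a covariance matrix.

    Impact of parameter i on the POI: cov(POI, theta_i) / sigma(theta_i),
    which equals rho(POI, theta_i) * sigma(POI).
    """
    cov = np.asarray(cov, dtype=float)
    # one-sigma uncertainties of all parameters from the diagonal
    sigmas = np.sqrt(np.diag(cov))
    # row of covariances between the POI and every parameter
    impacts = cov[poi_index] / sigmas
    # drop the POI's own entry (it would just be sigma(POI))
    return np.delete(impacts, poi_index)
```

For example, with `cov = [[4.0, 1.0], [1.0, 1.0]]` and the POI at index 0, the single nuisance parameter has impact 1.0. Because this uses only the covariance matrix from one fit, it is much cheaper than the NP-shifting method, at the cost of assuming a Gaussian (quadratic) likelihood around the minimum.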

@MoAly98 MoAly98 requested a review from alexander-held March 12, 2025 13:50

codecov bot commented Mar 12, 2025

Codecov Report

Attention: Patch coverage is 98.65772% with 2 lines in your changes missing coverage. Please review.

Project coverage is 99.90%. Comparing base (1576a4d) to head (f02df78).
Report is 2 commits behind head on master.

Files with missing lines Patch % Lines
src/cabinetry/fit/__init__.py 99.21% 0 Missing and 1 partial ⚠️
src/cabinetry/visualize/plot_result.py 94.11% 0 Missing and 1 partial ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##            master     #519      +/-   ##
===========================================
- Coverage   100.00%   99.90%   -0.10%     
===========================================
  Files           22       22              
  Lines         2096     2208     +112     
  Branches       291      313      +22     
===========================================
+ Hits          2096     2206     +110     
- Partials         0        2       +2     


Labels: enhancement (New feature or request)

Successfully merging this pull request may close these issues:

Implement covariance-based parameter impacts