Add dataset and model permutation selection feature #106
tintinrevient wants to merge 3 commits into main from
Conversation
@JCZuurmond it is ready for your review of the CLI. I will add the tests tomorrow. The only major change is in
✅ Supervised models have all passed validation.

Metric,Value
Overall ACC,0.0
Overall RACCU,0.005050505050505051
Overall RACC,0.0
Kappa,0.0
Gwet AC1,-0.005076142131979714
Bennett S,-0.005076142131979696
Kappa Standard Error,0.0
Kappa Unbiased,-0.005076142131979696
Scott PI,-0.005076142131979696
Kappa No Prevalence,-1.0
Kappa 95% CI,"(0.0, 0.0)"
Standard Error,0.0
95% CI,"(0.0, 0.0)"
Chi-Squared,None
Phi-Squared,None
Cramer V,None
Response Entropy,6.62935662007962
Reference Entropy,6.62935662007962
Cross Entropy,0
Joint Entropy,6.62935662007962
Conditional Entropy,-0.0
Mutual Information,6.62935662007962
KL Divergence,None
Lambda B,1.0
Lambda A,1.0
Chi-Squared DF,38809
Overall J,"(0.0, 0.0)"
Hamming Loss,1.0
Zero-one Loss,99
NIR,0.010101010101010102
P-Value,1
Overall CEN,0.0
Overall MCEN,0.0
Overall MCC,0.0
RR,0.5
CBA,0.0
AUNU,None
AUNP,None
RCI,1.0
Pearson C,None
TPR Micro,0.0
TPR Macro,None
CSI,None
ARI,None
TNR Micro,0.9949238578680203
TNR Macro,0.9949494949494949
Bangdiwala B,None
Krippendorff Alpha,0.0
SOA1(Landis & Koch),Slight
SOA2(Fleiss),Poor
SOA3(Altman),Poor
SOA4(Cicchetti),Poor
SOA5(Cramer),None
SOA6(Matthews),Negligible
SOA7(Lambda A),Perfect
SOA8(Lambda B),Perfect
SOA9(Krippendorff Alpha),Low
SOA10(Pearson C),None
FPR Macro,0.005050505050505083
FNR Macro,None
PPV Macro,None
NPV Macro,0.9949494949494949
ACC Macro,0.98989898989899
F1 Macro,0.0
FPR Micro,0.005076142131979711
FNR Micro,1.0
PPV Micro,0.0
F1 Micro,0.0
NPV Micro,0.9949238578680203
Spearman,0.667965367965368

✅ Zero-shot models have all passed validation.

Metric,Value
Overall ACC,0.0
Overall RACCU,0.00010007998789141575
Overall RACC,0.0
Kappa,0.0
Gwet AC1,-0.00010009004298543876
Bennett S,-0.00010009008107296567
Kappa Standard Error,0.0
Kappa Unbiased,-0.00010009000489789399
Scott PI,-0.00010009000489789399
Kappa No Prevalence,-1.0
Kappa 95% CI,"(0.0, 0.0)"
Standard Error,0.0
95% CI,"(0.0, 0.0)"
Chi-Squared,None
Phi-Squared,None
Cramer V,None
Response Entropy,12.286557761608659
Reference Entropy,12.286549508613042
Cross Entropy,0
Joint Entropy,12.286549508613042
Conditional Entropy,-0.0
Mutual Information,12.286557761608659
KL Divergence,None
Lambda B,1.0
Lambda A,1.0
Chi-Squared DF,99820081
Overall J,"(0.0, 0.0)"
Hamming Loss,0.9999999999999999
Zero-one Loss,4996
NIR,0.00020016012810248197
P-Value,1
Overall CEN,0.0
Overall MCEN,0.0
Overall MCC,0.0
RR,0.5
CBA,0.0
AUNU,None
AUNP,None
RCI,1.0000006717097922
Pearson C,None
TPR Micro,0.0
TPR Macro,None
CSI,None
ARI,None
TNR Micro,0.999899909957026
TNR Macro,0.9998999199359487
Bangdiwala B,None
Krippendorff Alpha,7.616744806910965e-11
SOA1(Landis & Koch),Slight
SOA2(Fleiss),Poor
SOA3(Altman),Poor
SOA4(Cicchetti),Poor
SOA5(Cramer),None
SOA6(Matthews),Negligible
SOA7(Lambda A),Perfect
SOA8(Lambda B),Perfect
SOA9(Krippendorff Alpha),Low
SOA10(Pearson C),None
FPR Macro,0.00010008006405126668
FNR Macro,None
PPV Macro,None
NPV Macro,0.9998999200121163
ACC Macro,0.999799839948065
F1 Macro,0.0
FPR Micro,0.00010009004297395485
FNR Micro,1.0
PPV Micro,0.0
F1 Micro,0.0
NPV Micro,0.999899909957026
Spearman,
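The two reports above look like confusion-matrix summaries in the style of a library such as pycm. The pattern of Overall ACC 0.0 together with Kappa 0.0 arises when predicted labels never coincide with reference labels. A minimal stdlib sketch of those two headline metrics (function names and toy data are mine, not from the PR):

```python
from collections import Counter

def accuracy(y_true, y_pred):
    """Fraction of exact matches between reference and response labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def cohens_kappa(y_true, y_pred):
    """Agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    n = len(y_true)
    p_o = accuracy(y_true, y_pred)
    true_freq = Counter(y_true)
    pred_freq = Counter(y_pred)
    # Expected chance agreement from the two label marginals.
    p_e = sum(true_freq[c] * pred_freq.get(c, 0) for c in true_freq) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Fully disjoint label sets: p_o = 0 and p_e = 0, so both metrics are 0.0,
# matching the ACC/Kappa pattern in the validation reports.
y_true = ["a", "b", "c", "d"]
y_pred = ["e", "f", "g", "h"]
print(accuracy(y_true, y_pred))      # 0.0
print(cohens_kappa(y_true, y_pred))  # 0.0
```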
JCZuurmond left a comment:
It's in the right direction. As discussed, we prefer to pull the listing capabilities out of pg2-benchmark (following the Unix philosophy). See below for example user stories:
```shell
# User story 1: pipe datasets into a benchmark with a given set of models
pg2-dataset list --query ".kind = 'awesome'" | uv run dvc ../path/to/public_models.local.yml

# User story 2: pipe models into a benchmark with a given set of datasets
yq ./models/**/*.md --query ".type = 'one_shot'" | uv run dvc ../path/to/most_popular_models.local.yml

# User story 3: dynamically create a benchmark
pg2-dataset list --query ".kind = 'awesome'" --format json > datasets.json
yq ./models/**/*.md --query ".type = 'one_shot'" --format json > models.json
uv run dvc ../path/to/benchmark.local.yml  # the benchmark points to datasets.json and models.json
```
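The `--query` filters in the user stories above could boil down to simple equality matching on dataset metadata records. A tiny sketch of that idea (the record fields and the `matches` helper are hypothetical, not the actual pg2-dataset implementation):

```python
import json

# Hypothetical metadata records, as a `pg2-dataset list --format json` might emit.
datasets = [
    {"name": "ds1", "kind": "awesome"},
    {"name": "ds2", "kind": "other"},
]

def matches(record, field, value):
    """Stand-in for a `--query ".kind = 'awesome'"` style equality filter."""
    return record.get(field) == value

# Keep only records whose `kind` field equals "awesome".
selected = [d for d in datasets if matches(d, "kind", "awesome")]
print(json.dumps(selected))
```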
| """Parameters file for benchmark configuration.""" | ||
|
|
||
|
|
||
| class GameType(str, Enum): |
There was a problem hiding this comment.
Nice point, I'm going to change it.
| """Default location for model card files relative to model root directory.""" | ||
|
|
||
|
|
||
| class DatasetPath: |
There was a problem hiding this comment.
Looks like should be configurable - use pydantic_settings ?
The DatasetPath will be changed or removed in the future, since the uniform archive format is `.pgdata`.

I'm breaking this down into 3 PRs; the first one is ready: ProteinGym/proteingym-base#306. The purpose of this PR is to first list datasets in a local path (a dataset might also exist in S3, GCP, a DVC registry, or somewhere else; we can extend this later). So currently it lists datasets in a local path in the `.pgdata` archive format; then we can export them in JSON or YAML format to prefill `dvc.yaml`, so the local DVC benchmark can work.
The first one is here: ProteinGym/proteingym-base#306
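Listing `.pgdata` archives under a local path and exporting them as JSON, as described above, could be as simple as a recursive glob. A hedged sketch (the function name and demo file names are mine; the real listing presumably also reads dataset metadata):

```python
import json
import tempfile
from pathlib import Path

def list_datasets(root):
    """List .pgdata archives under `root`, relative to it, in sorted order."""
    return sorted(str(p.relative_to(root)) for p in Path(root).rglob("*.pgdata"))

# Demo against a throwaway directory with two empty archive files.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "a.pgdata").touch()
    (root / "sub").mkdir()
    (root / "sub" / "b.pgdata").touch()
    print(json.dumps(list_datasets(root)))  # ["a.pgdata", "sub/b.pgdata"]
```

The JSON output is what would prefill `dvc.yaml` for the local DVC benchmark.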
Closing this PR, since it breaks down into 3 PRs:

Changes
Resolves #92 and #93
The user can select a model and dataset permutation interactively with the command below:
Afterwards, the user can run benchmarking as usual:
The major file changes are all in `__main__.py`, and all the fixed `dvc.yaml` and `params.yaml` files are replaced with configurable permutations, powered by a Jinja template.

Checklist
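Generating `dvc.yaml` stages for every model and dataset permutation, as this PR describes, amounts to rendering a template over a cross product. A stdlib sketch of that idea (the PR uses Jinja; `string.Template`, the stage layout, and the model/dataset names here are illustrative stand-ins):

```python
from itertools import product
from string import Template

# Hypothetical selections; the PR presumably collects these interactively.
datasets = ["proteingym_subs", "proteingym_indels"]
models = ["esm2", "tranception"]

# Minimal stand-in for the Jinja template that prefills dvc.yaml stages.
stage = Template(
    "  ${model}_${dataset}:\n"
    "    cmd: pg2-benchmark run --model ${model} --dataset ${dataset}\n"
)

# One stage per (model, dataset) permutation.
lines = ["stages:"]
for model, dataset in product(models, datasets):
    lines.append(stage.substitute(model=model, dataset=dataset))
print("\n".join(lines))
```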