Hi, I have a couple of questions regarding the two thermostability models published on Hugging Face:
- https://huggingface.co/SaProtHub/Model-Thermostability-650M
- https://huggingface.co/SaProtHub/Thermostability_35M
I have downloaded both to evaluate them. The README for the 650M model includes some training info and its test Spearman, but the 35M README has neither.
When I evaluated the 35M model, I got a Spearman of 0.87 on validation and 0.91 on test, which is much better than the 0.697 reported for the 650M model in the paper (or 0.706 in the model's README). Was the 35M model trained on the same dataset splits?
Also, when I evaluated the downloaded 650M model in exactly the same way as the 35M model, its outputs/predictions were all zeros.
So, why does the 35M model perform so well, and how do I get the 650M model to return non-zero predictions?
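For reference, this is roughly how I scored the predictions (a minimal sketch; the surrounding model-loading code is omitted, and the all-zero check is just how I caught the 650M issue):

```python
import numpy as np
from scipy.stats import spearmanr


def score_predictions(preds, labels):
    """Return the Spearman correlation between predictions and labels.

    Raises if the model emitted the degenerate all-zero output I saw
    with the 650M checkpoint, so the problem is flagged explicitly
    instead of silently producing a meaningless correlation.
    """
    preds = np.asarray(preds, dtype=float)
    labels = np.asarray(labels, dtype=float)
    if np.allclose(preds, 0.0):
        raise ValueError("model returned all-zero predictions")
    rho, _ = spearmanr(preds, labels)
    return rho
```

With the 35M model this returns the 0.87/0.91 figures above; with the 650M model it hits the all-zero branch.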
Alternatively, is it possible to rerun the training from the paper (or of the published models) with their original configs, and where would I find those configs?
Thanks!