[DNM] Move docking scorers #54
base: pydantic_2
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅
Do we know how and where these scorers are being used? I ask here because I don't really know how they are currently used. Just thinking: while they are indeed ML, we are now effectively making the `ml` subpackage depend on `docking`. I know this may make the docking CI pass, but maybe it's just moving the problem somewhere else. If it just moves the issue around without really solving the CI, I'd suggest that this probably requires further discussion with the users of these scorers and these API points. I hope this makes sense.
@ijpulidos This is a good point. I think they are basically only used in the small- and large-scale docking workflows (which live in workflows and also depend on dataviz, etc.). There might be some logic in moving the scorer base class to `data`. This is maybe worth a larger discussion. An even easier solution is to put both the ML scorers in. I think the dependency problem does have to do with the way we've made the scorers with complicated logic like this: Perhaps we can keep the ML scorers in the ml subpackage but write them without using the docking package, by entirely removing the dependency on the DockingResults object and moving a base scorer class to `data`.
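One possible shape for that decoupling, sketched here purely as illustration (the names `ScorerBase` and `DummyMLScorer` and the untyped inputs list are hypothetical, not the project's actual API): a base class living in a low-level package that deliberately knows nothing about DockingResults, so ML scorers can subclass it without importing docking.

```python
from abc import ABC, abstractmethod
from typing import Any

class ScorerBase(ABC):
    """Hypothetical base class that could live in a low-level package
    (e.g. data) and carry no import of docking."""

    score_type: str = "base"

    @abstractmethod
    def score(self, inputs: list[Any]) -> list[float]:
        """Return one score per input. Inputs are deliberately untyped so
        the interface has no dependency on DockingResults objects."""
        ...

class DummyMLScorer(ScorerBase):
    """A concrete ML-side scorer would subclass the base without
    touching the docking package."""

    score_type = "dummy-ml"

    def score(self, inputs: list[Any]) -> list[float]:
        # Placeholder: a real implementation would run model inference.
        return [0.0 for _ in inputs]

print(DummyMLScorer().score(["a", "b"]))
```

With a layout like this, only the package that defines the inputs (docking, spectrum, etc.) needs to know their concrete type; the scorer interface stays agnostic.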
I do think, with our stated goals for the next two weeks, making ML depend on docking to make docking not depend on ML is worth it, since the primary goals are to make it easy to use spectrum, docking, modeling, and data, and to make the CI pass for all of those. A larger refactor can happen later, but this is definitely worth discussing more (maybe with an eye on how to incorporate free energy calculation "scores", Boltz-2 scores, and whatnot in the future).
Just to clarify, the options as I see them are:
Option 4 is not mutually exclusive with options 1, 2, and 3, but one of these four needs to happen in the next two weeks.
Update: I also need the ChemGauss scorer for spectrum (and the PR I want to merge before the first release), so I'm leaning towards option 1 as a temporary fix. I agree we should copy the whole base of the scorers to `data` to make it independent of docking (but keep the scorers there), but that should probably go into the next release.
Since the current plan is to release without ml, I suggest we modify the cli-ci test to xfail the ml testing. Just add in
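A minimal sketch of what that xfail marking could look like, assuming a pytest-parametrized CLI test (the test function and subcommand names here are hypothetical stand-ins, not the real cli-ci test):

```python
import pytest

# Hypothetical stand-in for the cli-ci test matrix; only the xfail-marking
# technique is the point, not the actual subcommand list.
@pytest.mark.parametrize(
    "subcommand",
    [
        "docking",
        "spectrum",
        # Marked as an expected failure until ml ships in a release:
        pytest.param("ml", marks=pytest.mark.xfail(reason="releasing without ml")),
    ],
)
def test_cli_subcommand(subcommand):
    # Placeholder assertion standing in for invoking the real CLI;
    # it fails for "ml", which pytest then reports as XFAIL, not FAIL.
    assert subcommand != "ml"
```

The `xfail` mark keeps the ml leg in the matrix (so we notice when it starts passing) without turning the whole CI run red.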
From discussion with @apayne97 and @mariacm12, we thought a fast way forward in these terms is to move the ML scorers to their own module, still in the. This branch is a good starting point for a further restructure of the scorers. As in, having some base and agnostic classes for scorers in `data`.
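For that further restructure, a hypothetical layout (all paths are assumptions for illustration, not the repository's actual structure) might separate things like this:

```text
data/
    scorers_base.py   # agnostic base/meta scorer classes; no docking or ml imports
ml/
    scorers.py        # ML scorers subclassing the base; may import docking
docking/
    scorers.py        # physics-based scorers (e.g. ChemGauss); no ml imports
```

The key property is that the import arrows only point downward toward `data`, so docking and ml never need to import each other.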
Description
Fixes #15
The goal is to make it so that docking doesn't depend on ml at all, and therefore the docking CLI will pass.
As of 2025-07-02 we've decided to go with option 4, but for a future release. We will make the minimal change necessary in a separate PR.