Algorithm similarity eval - general implementation #214
base: master
Conversation
In addition to some minor suggestions, the one major comment is that if we have 4 algorithms with 2 parameter combinations each, we want an 8 x 8 output instead of a 4 x 4 output.
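To illustrate the reviewer's point: the similarity matrix should have one row and column per algorithm-parameter combination, not per algorithm. A minimal sketch of pairwise Jaccard similarity over edge sets is below; the labels and edge sets are hypothetical stand-ins, not the project's actual data, and the repo's own `ml.jaccard_similarity_eval` may differ in details.

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Jaccard index |A & B| / |A | B|; defined as 0.0 when both sets are empty."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Hypothetical pathway outputs: 4 algorithms x 2 parameter combinations each.
# Treating each combination as its own entry yields an 8 x 8 matrix, not 4 x 4.
pathways = {
    "algoA-p1": {("s", "a"), ("a", "t")},
    "algoA-p2": {("s", "a"), ("a", "b"), ("b", "t")},
    "algoB-p1": {("s", "b"), ("b", "t")},
    "algoB-p2": {("s", "a"), ("a", "t")},
    "algoC-p1": {("s", "c"), ("c", "t")},
    "algoC-p2": {("s", "a"), ("c", "t")},
    "algoD-p1": {("s", "a"), ("a", "t"), ("s", "t")},
    "algoD-p2": {("s", "t")},
}

labels = sorted(pathways)
matrix = [[jaccard(pathways[r], pathways[c]) for c in labels] for r in labels]
```

The resulting square matrix (diagonal of 1.0, symmetric) is what would then be rendered as the heatmap.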
algo_similarity_heatmap = SEP.join([out_dir, '{dataset}-ml', 'jaccard-heatmap.png'])
run:
    summary_df = ml.summarize_networks(input.pathways)
    jaccard = ml.jaccard_similarity_eval(summary_df, output.algo_similarity_matrix, output.algo_similarity_heatmap)
Don't need to store the output if it isn't used
Just to make sure I understand correctly - should I be changing lines 33 & 34 to:
jaccard = ml.jaccard_similarity_eval(ml.summarize_networks(input.pathways), output.algo_similarity_matrix, output.algo_similarity_heatmap)
As discussed, this is the general implementation only; once it is approved, I will work on the per_algo implementation.