Standardise configs/models for comparison #241
Coverage report generated by python-coverage-comment-action.
IFenton left a comment:
With your changes to the default latent space size etc., this now runs very slowly on my computer (0.11 it/s vs. 3.74 it/s). Would it make sense to change some of those defaults for the sample runs?
I'm also curious as to why you changed how the hemisphere was being obtained?
The biggest shift seems to have been changing the size of the encoder latent space - when I changed that back to the old values it ran at a much more reasonable speed. So maybe we just need to change that. As to how, maybe we have a …
Done in 54e36e1
IFenton left a comment:
LGTM. It would be good to add a quick note to the README about the new quick_test config.
Done in cde8d30
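To illustrate the idea discussed above, here is a minimal sketch of what a quick_test config might look like next to the defaults. All parameter names and values here are hypothetical for illustration; they are not taken from the repository:

```python
# Hypothetical config sketch: a quick_test variant that shrinks the
# encoder latent space and epoch count so sample runs finish quickly.
# Names and values are illustrative, not the repository's actual config.
default_config = {
    "latent_dim": 256,   # larger latent space: higher capacity, slower
    "epochs": 100,
}

quick_test_config = {
    **default_config,    # start from the defaults...
    "latent_dim": 32,    # ...then shrink the latent space for speed
    "epochs": 2,         # and run only a couple of epochs
}

def summarise(cfg):
    """Return a short human-readable summary of a config dict."""
    return f"latent_dim={cfg['latent_dim']}, epochs={cfg['epochs']}"

print(summarise(quick_test_config))
```

Deriving the quick_test variant from the default dict keeps the two in sync: any default not explicitly overridden is inherited automatically.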
test_metrics, test_loss and val_loss in DDPM for easier comparison with other models
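The standardised metric names could be collected along these lines (a sketch only; the actual logging mechanism in the repository is not shown here, and the values below are made up):

```python
# Hypothetical sketch: record losses under the standardised names
# (test_loss, val_loss) so DDPM runs can be compared with other models
# that log the same keys. Values are illustrative placeholders.
metrics = {}

def log_metric(name, value):
    """Append a value to the named metric's history."""
    metrics.setdefault(name, []).append(value)

# During validation / testing of the DDPM (illustrative values):
log_metric("val_loss", 0.42)
log_metric("test_loss", 0.39)

print(sorted(metrics))
```

Because every model logs under the same keys, downstream comparison scripts can look up `test_loss` and `val_loss` without per-model special cases.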