Hi @kk98kk, I didn't realize discussions were open here, sorry! Take a look at this paper: https://arxiv.org/pdf/2507.10747. In particular, see section 2.2.1, second paragraph.
Hi, thanks for the paper. I went through it earlier, but I'm still unclear about how the dataset is split among the train, validation, and test folders for the DoMINO architecture. As I understand it, the workflow requires three folders: train, validation, and test.
For the test folder, it's mentioned that it contains 10% of the dataset.
The test runs seem to be explicitly defined; however, I couldn't find any information on how the remaining runs are split between training and validation.
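Since the official train/validation split doesn't seem to be documented, one way to get a reproducible split in the meantime is to partition the non-test runs deterministically with a fixed seed. The sketch below is only an assumption for reproducibility, not the split the DoMINO authors actually used; the 10% validation fraction, the seed, and the run-ID naming are all hypothetical.

```python
# Hypothetical sketch: deterministically split the runs that are NOT in the
# held-out test set into training and validation subsets. This is NOT the
# official DoMINO/DrivAerML split; the val_fraction, seed, and run names
# below are assumptions chosen only so the split is reproducible.
import random

def split_runs(all_runs, test_runs, val_fraction=0.1, seed=42):
    """Return (train, val) lists built from the runs not held out for testing."""
    remaining = sorted(set(all_runs) - set(test_runs))
    rng = random.Random(seed)        # fixed seed -> identical split every time
    rng.shuffle(remaining)
    n_val = max(1, int(len(remaining) * val_fraction))
    return remaining[n_val:], remaining[:n_val]

# Usage with made-up run IDs (the real dataset uses its own run numbering):
runs = [f"run_{i}" for i in range(1, 11)]
test = ["run_3", "run_7"]
train, val = split_runs(runs, test)
assert not set(train) & set(val)                 # no train/val overlap
assert not (set(train) | set(val)) & set(test)   # test runs stay held out
```

Pinning the seed and sorting before shuffling keeps the split stable across machines, which is the main thing needed for reproducibility until the official run lists are published.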
Hi everyone,
I’m currently exploring the DoMINO model and trying to reproduce some of the results.
I found this section about the dataset split:
https://github.com/NVIDIA/physicsnemo-cfd/blob/main/workflows/bench_example/drivaer_ml_files/README.md#drivaerml-dataset-files
From what I can see, it lists which runs are used for training and testing, but I couldn’t find any information on how the training data itself was further divided into training and validation subsets.
Does anybody know how that split was done (e.g., explicitly which runs were used for training, validation, and testing)?
Having that detail would be very helpful for reproducibility.
Thanks a lot in advance!