The validation data is available <a href="https://doi.org/10.5281/zenodo.15699958" target="_blank">here</a>; it has the same structure as the test data. The data folder structure is shown below, and further details can be found on the <a href="https://doi.org/10.5281/zenodo.15699958" target="_blank">Zenodo page</a>. The validation data differs from the training data in that: 1) the images and transformations for each scan are stored separately in two folders; 2) the added file `dataset_keys.h5` contains the paths to all scans in the dataset.
```bash
Freehand_US_data_val_2025/
│
├── frames/
│   ├── 050/
│   │   ├── RH_rotation.h5  # US frames in rotating scan of right forearm, subject 050
│   │   └── LH_rotation.h5  # US frames in rotating scan of left forearm, subject 050
│   ├── 051/
│   │   ├── RH_rotation.h5  # US frames in rotating scan of right forearm, subject 051
│   │   └── LH_rotation.h5  # US frames in rotating scan of left forearm, subject 051
│   ├── ...
│
├── transfs/
│   ├── 050/
│   │   ├── RH_rotation.h5  # transformations (from tracker tool space to optical camera space) in rotating scan of right forearm, subject 050
│   │   └── LH_rotation.h5  # transformations (from tracker tool space to optical camera space) in rotating scan of left forearm, subject 050
│   ├── 051/
│   │   ├── RH_rotation.h5  # transformations (from tracker tool space to optical camera space) in rotating scan of right forearm, subject 051
│   │   └── LH_rotation.h5  # transformations (from tracker tool space to optical camera space) in rotating scan of left forearm, subject 051
│   ├── ...
│
├── landmarks/
│   ├── landmark_050.h5     # landmark coordinates in scans of subject 050
│   ├── landmark_051.h5     # landmark coordinates in scans of subject 051
│   ├── ...
│
├── calib_matrix.csv        # calibration matrix
└── dataset_keys.h5         # contains paths of all scans for the dataset
```
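As a quick orientation, the sketch below shows one way to walk this layout with `h5py`, using `dataset_keys.h5` to enumerate scans and then opening the matching frame and transformation files. The key names inside the HDF5 files are assumptions here, so inspect the actual keys on your copy of the data first.

```python
# Sketch only: assumes h5py is installed; dataset/group names inside the
# HDF5 files are assumptions -- check them with list(f.keys()) first.
import os
import h5py

data_root = "Freehand_US_data_val_2025"

with h5py.File(os.path.join(data_root, "dataset_keys.h5"), "r") as f:
    scan_keys = list(f.keys())  # identifiers/paths of all scans in the dataset
    print("number of scans:", len(scan_keys))

# Hypothetical example: one scan of subject 050 (right forearm, rotating scan)
with h5py.File(os.path.join(data_root, "frames", "050", "RH_rotation.h5"), "r") as f:
    print("frame datasets:", list(f.keys()))      # verify actual dataset names

with h5py.File(os.path.join(data_root, "transfs", "050", "RH_rotation.h5"), "r") as f:
    print("transform datasets:", list(f.keys()))  # tracker-tool-to-camera transforms
```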
The Challenge will take place on Sep. 27, 2025 during the <a href="https://miccai-ultrasound.github.io/#/asmus25" target="_blank">ASMUS Workshop</a>. (Details for location, presentation, and event format: TBC.)
Additional training and validation data from TUS-REC2024:
## 4. Build Docker image and submit
* [Submission guideline](submission.html)
* <a href="https://github.com/QiLi111/TUS-REC2025-Challenge_baseline/blob/main/submission/README.md#instructions-for-docker" target="_blank">An example Docker image</a>
<!-- ## 7. Track the [leaderboard](leaderboard.html) -->
---
# Submission Process
* Participants are encouraged to familiarize themselves with the fundamentals of building and running Docker images; however, advanced Docker expertise is not required. We have provided a <a href="https://github.com/QiLi111/TUS-REC2025-Challenge_baseline/blob/main/submission/README.md#instructions-for-docker" target="_blank">basic Docker image</a> to help you get started, which can predict DDFs on the validation/test dataset. The source code is also available in the <a href="https://github.com/QiLi111/TUS-REC2025-Challenge_baseline/tree/main/submission" target="_blank">`submission`</a> folder.
* Participants are expected to replace the content of the <a href="https://github.com/QiLi111/TUS-REC2025-Challenge_baseline/blob/main/submission/predict_ddfs.py" target="_blank">`predict_ddfs`</a> function with their own algorithm; this function is called in <a href="https://github.com/QiLi111/TUS-REC2025-Challenge_baseline/blob/a818cdb708049b6a2209b7dbde6759ef1c8af0e8/submission/test.py#L39" target="_blank">test.py</a>. The function takes one entire scan as input and outputs four DDFs. There is no requirement on how the algorithm is designed internally, for example, whether it is a learning-based method; uses frame-, sequence-, or scan-based processing; or assumes rigid, affine, or nonrigid transformations.
* The requirements of the `predict_ddfs` function are described below (a minimal interface sketch follows this list):
  * Input:
    * `frames`: all frames in the scan; numpy array with a shape of [N,480,640], where N is the number of frames in this scan.
    * `landmark`: locations of 100 landmarks in the scan; numpy array with a shape of [100,3]. Each row denotes one landmark; the first column is the frame index (starting from 0) and the remaining two columns are the 2D coordinates of the landmark in the image coordinate system (starting from 1, to maintain consistency with the calibration process). For example, a row of [10,200,100] indicates a landmark in the frame with index 10, located at coordinates [200,100].
    * `data_path_calib`: path to the calibration matrix.
    * `device`: device to run the model on, provided in <a href="https://github.com/QiLi111/TUS-REC2025-Challenge_baseline/blob/a818cdb708049b6a2209b7dbde6759ef1c8af0e8/submission/test.py#L26" target="_blank">this line</a>.
  * Output:
    * `GP`: global displacement vectors for all pixels, i.e., the DDF from each current frame to the first frame, in mm; the first frame is regarded as the reference frame. The DDF should be a numpy array with a shape of [N-1,3,307200], where N-1 is the number of frames in the scan excluding the first frame, "3" denotes the x, y, and z axes, and 307200 is the number of pixels in a frame. The order of the flattened 307200 pixels is defined in the function <a href="https://github.com/QiLi111/TUS-REC2025-Challenge_baseline/blob/a818cdb708049b6a2209b7dbde6759ef1c8af0e8/submission/utils/plot_functions.py#L6" target="_blank">`reference_image_points`</a>.
    * `GL`: global displacement vectors for landmarks, in mm; numpy array with a shape of [3,100], where 100 is the number of landmarks in a scan.
    * `LP`: local displacement vectors for all pixels, i.e., the DDF from each current frame to the previous frame, in mm; the previous frame is regarded as the reference frame. The DDF should be a numpy array with a shape of [N-1,3,307200], with the same axis and pixel-order conventions as `GP`.
    * `LL`: local displacement vectors for landmarks, in mm; numpy array with a shape of [3,100], where 100 is the number of landmarks in a scan.
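The sketch below illustrates one possible `predict_ddfs` stub that matches the interface described above. The zero-filled outputs and shape assertions are placeholders for illustration only; this is not the baseline implementation, and the exact call site should be verified against `test.py` in the baseline repository.

```python
# Illustrative stub only: returns zero displacements with the expected
# shapes so the surrounding pipeline can run end to end.
# Replace the body with your own algorithm.
import numpy as np

def predict_ddfs(frames, landmark, data_path_calib, device):
    # frames:   [N, 480, 640] array of US frames
    # landmark: [100, 3] array -> (frame index, 2D image coordinates) per landmark
    assert frames.ndim == 3 and frames.shape[1:] == (480, 640)
    assert landmark.shape == (100, 3)

    n = frames.shape[0]
    num_pixels = 480 * 640  # 307200, flattened in the order given by
                            # utils/plot_functions.reference_image_points

    # Placeholder predictions (all zeros); your model's outputs go here.
    GP = np.zeros((n - 1, 3, num_pixels), dtype=np.float32)  # global, pixels
    GL = np.zeros((3, 100), dtype=np.float32)                # global, landmarks
    LP = np.zeros((n - 1, 3, num_pixels), dtype=np.float32)  # local, pixels
    LL = np.zeros((3, 100), dtype=np.float32)                # local, landmarks
    return GP, GL, LP, LL
```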
> **_NOTE:_**
> * If you are not sure about data dimensions, the coordinate system, the transformation direction, etc., please refer to the <a href="https://github.com/QiLi111/TUS-REC2025-Challenge_baseline/blob/main/submission/baseline_model/Prediction.py" target="_blank">example code</a> in the `baseline_model` folder.
> * We have provided two functions, which can generate the four DDFs from global and local transformations, in <a href="https://github.com/QiLi111/TUS-REC2025-Challenge_baseline/blob/main/submission/utils/Transf2DDFs.py" target="_blank">`Transf2DDFs.py`</a>.
> * Only modify the implementation of the `predict_ddfs` function. It is okay to add files, but please do not change existing files outside the `baseline_model` folder.
> * The order of the four DDFs and the order of the 307200 pixels cannot be changed, and all four outputs must be numpy arrays. Please ensure your prediction does not contain null values; otherwise, the final score cannot be generated (a quick sanity-check sketch follows this note block).
> * Your model is expected to run on a single GPU, with GPU memory usage not exceeding 32 GB when running the Docker container.
> * Participants are required to dockerize their trained network/algorithm/method and submit it via a file-sharing link (e.g., OneDrive, Dropbox) to the organizers through this <a href="https://forms.office.com/e/dj1g5TKyaj" target="_blank">form</a>.
> * Participants are allowed to make multiple distinct submissions (they must not be merely simple variations in hyperparameter values), and the best result will be selected for the competition. The number of submissions per team is limited to 5, to discourage submissions that differ only in hyperparameter values.
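As a convenience, here is a minimal self-check sketch, under the assumption that your `predict_ddfs` returns the four arrays in the order `GP, GL, LP, LL`. It only verifies shapes, array types, and the absence of NaN/Inf values; it is not part of the official evaluation.

```python
# Minimal self-check for the four predicted DDFs (not the official evaluation).
import numpy as np

def check_ddfs(GP, GL, LP, LL, num_frames):
    expected_pixels = 480 * 640  # 307200 pixels per frame
    assert GP.shape == (num_frames - 1, 3, expected_pixels), GP.shape
    assert LP.shape == (num_frames - 1, 3, expected_pixels), LP.shape
    assert GL.shape == (3, 100), GL.shape
    assert LL.shape == (3, 100), LL.shape
    for name, arr in [("GP", GP), ("GL", GL), ("LP", LP), ("LL", LL)]:
        assert isinstance(arr, np.ndarray), f"{name} must be a numpy array"
        assert np.isfinite(arr).all(), f"{name} contains NaN/Inf values"
    print("All DDF shapes and values look valid.")
```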
Receipt of all submissions will be acknowledged via email within two working days, and evaluations will be posted on the leaderboard once completed.
The evaluation code, together with the baseline models, is publicly available <a href="https://github.com/QiLi111/TUS-REC2025-Challenge_baseline" target="_blank">here</a>. Participating teams are encouraged, though not obligated, to share their code publicly. Links to any available source code will be provided.