Commit d497d90 (1 parent: e638896)

update submission process and link for validation data

File tree: 6 files changed (+77, −17 lines)


TUS-REC2024/index.md (0 additions, 3 deletions)

@@ -55,9 +55,6 @@ The results from all participants will be made publicly available on leaderboard
 - The first-place and runner-up achievers will receive additional certificates.
 - Participants who successfully participated in the challenge will be awarded certificates of participation.
 
-<!-- ## Discussion Board
-
-TBA on GitHub -->
 
 ## Organizers
 

data.md (42 additions, 1 deletion)

@@ -87,7 +87,7 @@ Freehand_US_data_train_2025/
 │   ├── landmark_001.h5 # landmarks in scans of subject 001
 │   ├── ...
-── calib_matrix.csv # calibration matrix
+── calib_matrix.csv # calibration matrix
 
 ```
 <!-- ├── dataset_keys.h5 # the paths to all the scans of the data set -->

@@ -111,3 +111,44 @@ Freehand_US_data_train_2025/
 * <a href="https://zenodo.org/doi/10.5281/zenodo.11355499" target="_blank">Training data (Part 3)</a>
 * <a href="https://zenodo.org/doi/10.5281/zenodo.12979481" target="_blank">Validation data</a>
 
+## Validation Data
+
+The validation data, which has the same structure as the test data, is available <a href="https://doi.org/10.5281/zenodo.15699958" target="_blank">here</a>. The folder structure is shown below; further details can be found on the <a href="https://doi.org/10.5281/zenodo.15699958" target="_blank">Zenodo page</a>. The validation data differs from the training data in two ways: 1) the images and transformations for each scan are stored in two separate folders; 2) an additional file, `dataset_keys.h5`, contains the paths to all scans in the dataset.
+
+```bash
+Freehand_US_data_val_2025/
+├── frames/
+│   ├── 050/
+│   │   ├── RH_rotation.h5 # US frames in rotating scan of right forearm, subject 050
+│   │   └── LH_rotation.h5 # US frames in rotating scan of left forearm, subject 050
+│   ├── 051/
+│   │   ├── RH_rotation.h5 # US frames in rotating scan of right forearm, subject 051
+│   │   └── LH_rotation.h5 # US frames in rotating scan of left forearm, subject 051
+│   ├── ...
+├── transfs/
+│   ├── 050/
+│   │   ├── RH_rotation.h5 # transformations (from tracker tool space to optical camera space) in rotating scan of right forearm, subject 050
+│   │   └── LH_rotation.h5 # transformations (from tracker tool space to optical camera space) in rotating scan of left forearm, subject 050
+│   ├── 051/
+│   │   ├── RH_rotation.h5 # transformations (from tracker tool space to optical camera space) in rotating scan of right forearm, subject 051
+│   │   └── LH_rotation.h5 # transformations (from tracker tool space to optical camera space) in rotating scan of left forearm, subject 051
+│   ├── ...
+├── landmarks/
+│   ├── landmark_050.h5 # landmark coordinates in scans of subject 050
+│   ├── landmark_051.h5 # landmark coordinates in scans of subject 051
+│   ├── ...
+├── calib_matrix.csv # calibration matrix
+└── dataset_keys.h5 # contains paths of all scans for the dataset
+```
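To make the layout concrete, the tree above maps to per-scan file paths as sketched below. `scan_paths` is a hypothetical helper (not part of the released code); only the folder and file names mirror the tree above.

```python
from pathlib import Path


def scan_paths(root: str, subject: str, scan: str) -> dict:
    """Map the validation-data layout to the files for one scan of one subject.

    Hypothetical helper: frames and transformations live in parallel folders,
    landmarks are stored per subject, and the calibration matrix is shared.
    """
    base = Path(root)
    return {
        "frames": base / "frames" / subject / f"{scan}.h5",
        "transfs": base / "transfs" / subject / f"{scan}.h5",
        "landmarks": base / "landmarks" / f"landmark_{subject}.h5",
        "calib": base / "calib_matrix.csv",
    }


paths = scan_paths("Freehand_US_data_val_2025", "050", "RH_rotation")
print(paths["frames"].as_posix())  # Freehand_US_data_val_2025/frames/050/RH_rotation.h5
```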

index.md (4 additions, 3 deletions)

@@ -26,9 +26,10 @@ reconstruction for rotating scans, suitable for modern learning-based data-driven

 ## Main resources
 * <a href="https://zenodo.org/records/15119085" target="_blank">Full challenge description</a>
-* <a href="https://zenodo.org/records/15224704" target="_blank">Train data</a><!-- * [Validation data](TBA) -->
+* <a href="https://zenodo.org/records/15224704" target="_blank">Training data</a>
+* <a href="https://doi.org/10.5281/zenodo.15699958" target="_blank">Validation data</a>
 * <a href="https://github.com/QiLi111/TUS-REC2025-Challenge_baseline" target="_blank">Baseline code</a>
-* <a href="TBA" target="_blank">Submission/Evaluation code</a> [TBA]
+* <a href="https://github.com/QiLi111/TUS-REC2025-Challenge_baseline/tree/main/submission" target="_blank">Submission/Evaluation code</a>

 ## Timeline

@@ -43,7 +44,7 @@ The TUS-REC2025 challenge is an open call event, accepting new submissions after
 | Sep. 01, 2025 | Winners Announcement |
 | Sep. 23, 2025 | TUS-REC2025 Challenge Events at MICCAI 2025 |

-The Challenge will take place on Sep. 23, 2025 during the <a href="https://miccai-ultrasound.github.io/#/asmus25" target="_blank">ASMUS Workshop</a>. (Details for location, presentation, and event format: TBC.)
+The Challenge will take place on Sep. 27, 2025 during the <a href="https://miccai-ultrasound.github.io/#/asmus25" target="_blank">ASMUS Workshop</a>. (Details for location, presentation, and event format: TBC.)

 ## The Task


participate.md (2 additions, 2 deletions)

@@ -21,7 +21,7 @@ nav_order: 4
 We will send the links of data to you once we receive your registration information.

 * <a href="https://zenodo.org/records/15224704" target="_blank">Training data</a>
-<!-- * <a href="TBA" target="_blank">Validation data</a> [TBA] -->
+* <a href="https://doi.org/10.5281/zenodo.15699958" target="_blank">Validation data</a>

 Additional training and validation data from TUS-REC2024:

@@ -39,7 +39,7 @@ Additional training and validation data from TUS-REC2024:
 ## 4. Build Docker image and submit

 * [Submission guideline](submission.html)
-* <a href="TBA" target="_blank">An example docker</a> [TBA]
+* <a href="https://github.com/QiLi111/TUS-REC2025-Challenge_baseline/blob/main/submission/README.md#instructions-for-docker" target="_blank">An example Docker image</a>

 <!-- ## 7. Track the [leaderboard](leaderboard.html)

policies.md (1 addition, 1 deletion)

@@ -8,7 +8,7 @@ nav_order: 9

 - Submitted methods must be fully automatic.
 - Public and private data are permitted; however, their use must be disclosed by participants.
-- Organizers may participate but not eligible for awards and not listed in leaderboard.
+- Members of the organizers' institutes may participate, but they are not eligible for awards and are not listed in the leaderboard.
 - All participants must belong to teams, even if a team consists of only one member, and each participant can only be a member of one team.


submission.md (28 additions, 7 deletions)

@@ -5,13 +5,34 @@ nav_order: 8
 ---

 # Submission Process
-Participants are required to dockerize their trained network/algorithm/method and submit them via a file-sharing link (e.g., OneDrive, Dropbox) to the organizers via this <a href="TBA" target="_blank">form [TBA]</a>. Participants are encouraged to familiarize themselves with the fundamentals of building and running Docker images; however, advanced Docker expertise is not required. A basic Docker image will be provided to help you get started. The detailed information of the release and usage of the Docker image will be announced in our website later.

-<!-- The evaluation code, together with the baseline models, is publicly available [here](https://github.com/QiLi111/tus-rec-challenge_baseline). Participating teams are encouraged, though not obligated, to share their code publicly. Links to any available source code will be provided. -->
+* Participants are encouraged to familiarize themselves with the fundamentals of building and running Docker images; however, advanced Docker expertise is not required. We have provided a <a href="https://github.com/QiLi111/TUS-REC2025-Challenge_baseline/blob/main/submission/README.md#instructions-for-docker" target="_blank">basic Docker image</a> to help you get started; it can predict DDFs on the validation/test dataset. The source code is also available in the <a href="https://github.com/QiLi111/TUS-REC2025-Challenge_baseline/tree/main/submission" target="_blank">`submission`</a> folder.
+* Participants are expected to replace the body of the <a href="https://github.com/QiLi111/TUS-REC2025-Challenge_baseline/blob/main/submission/predict_ddfs.py" target="_blank">`predict_ddfs`</a> function, which is called in <a href="https://github.com/QiLi111/TUS-REC2025-Challenge_baseline/blob/a818cdb708049b6a2209b7dbde6759ef1c8af0e8/submission/test.py#L39" target="_blank">test.py</a>, with their own algorithm. The function takes one entire scan as input and outputs four DDFs. There is no requirement on how the algorithm is designed internally: it may or may not be learning-based; it may use frame-, sequence-, or scan-based processing; and it may assume rigid, affine, or nonrigid transformations.
+* The interface of the `predict_ddfs` function is as follows:
+  * Input:
+    * `frames`: all frames in the scan; a numpy array of shape [N,480,640], where N is the number of frames in the scan.
+    * `landmark`: locations of 100 landmarks in the scan; a numpy array of shape [100,3]. Each row denotes one landmark; the three columns are the frame index (starting from 0) and the 2D coordinates of the landmark in the image coordinate system (starting from 1, for consistency with the calibration process). For example, the row [10,200,100] denotes a landmark in frame 10, located at coordinates [200,100].
+    * `data_path_calib`: path to the calibration matrix.
+    * `device`: device to run the model on, provided in <a href="https://github.com/QiLi111/TUS-REC2025-Challenge_baseline/blob/a818cdb708049b6a2209b7dbde6759ef1c8af0e8/submission/test.py#L26" target="_blank">this line</a>.
+  * Output:
+    * `GP`: global displacement vectors for all pixels, i.e., the DDF from each frame to the first frame (the reference frame), in mm; a numpy array of shape [N-1,3,307200], where N-1 is the number of frames in the scan excluding the first, 3 denotes the x, y, and z axes, and 307200 is the number of pixels in a frame. The order of the flattened 307200 pixels is defined in the function <a href="https://github.com/QiLi111/TUS-REC2025-Challenge_baseline/blob/a818cdb708049b6a2209b7dbde6759ef1c8af0e8/submission/utils/plot_functions.py#L6" target="_blank">`reference_image_points`</a>.
+    * `GL`: global displacement vectors for the landmarks, in mm; a numpy array of shape [3,100], where 100 is the number of landmarks in a scan.
+    * `LP`: local displacement vectors for all pixels, i.e., the DDF from each frame to the previous frame (the reference frame), in mm; a numpy array of shape [N-1,3,307200], with the same layout and pixel order as `GP`.
+    * `LL`: local displacement vectors for the landmarks, in mm; a numpy array of shape [3,100].
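To make the required signature and array shapes concrete, here is a minimal stub that returns zero-filled placeholders. It only illustrates the interface described above; the argument names follow the specification, and the zero outputs are not a working predictor.

```python
import numpy as np


def predict_ddfs(frames, landmark, data_path_calib, device=None):
    """Placeholder predictor returning zero DDFs with the required shapes.

    frames:   numpy array [N, 480, 640] - all frames in one scan
    landmark: numpy array [100, 3]      - frame index + 2D landmark coordinates
    """
    n = frames.shape[0]
    num_pix = 480 * 640                  # 307200 pixels per frame
    GP = np.zeros((n - 1, 3, num_pix))   # global pixel DDF (to first frame), mm
    GL = np.zeros((3, 100))              # global landmark DDF, mm
    LP = np.zeros((n - 1, 3, num_pix))   # local pixel DDF (to previous frame), mm
    LL = np.zeros((3, 100))              # local landmark DDF, mm
    return GP, GL, LP, LL


# Shape check on a dummy scan of 5 frames:
GP, GL, LP, LL = predict_ddfs(np.zeros((5, 480, 640)), np.zeros((100, 3)), "calib_matrix.csv")
print(GP.shape, GL.shape, LP.shape, LL.shape)  # (4, 3, 307200) (3, 100) (4, 3, 307200) (3, 100)
```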
+> **_NOTE:_**
+> * If you are unsure about data dimensions, coordinate systems, or transformation directions, please refer to the <a href="https://github.com/QiLi111/TUS-REC2025-Challenge_baseline/blob/main/submission/baseline_model/Prediction.py" target="_blank">example code</a> in the `baseline_model` folder.
+> * We have provided two functions that generate the four DDFs from global and local transformations, in <a href="https://github.com/QiLi111/TUS-REC2025-Challenge_baseline/blob/main/submission/utils/Transf2DDFs.py" target="_blank">`Transf2DDFs.py`</a>.
+> * Only modify the implementation of the `predict_ddfs` function. Adding files is fine, but please do not change existing files outside the `baseline_model` folder.
+> * The order of the four DDFs and the order of the 307200 pixels must not be changed, and all four outputs must be numpy arrays. Please ensure your predictions contain no null values; otherwise the final score cannot be generated.
+> * Your model is expected to run on a single GPU, with GPU memory usage not exceeding 32 GB when running the Docker image.
+> * Participants are required to dockerize their trained network/algorithm/method and submit it via a file-sharing link (e.g., OneDrive, Dropbox) to the organizers through this <a href="https://forms.office.com/e/dj1g5TKyaj" target="_blank">form</a>.
+> * Participants may make multiple distinct submissions (they must not be merely simple variations in hyperparameter values), and the best result will be selected for the competition. The number of submissions per team is limited to 5.

-<!-- The algorithm is expected to take the entire scan as input and output two different sets of transformation-representing displacement vectors as results, a set of displacement vectors on individual pixels and a set of displacement vectors on provided landmarks. There is no requirement on how the algorithm is designed internally, for example, whether it is learning-based method; frame-, sequence- or scan-based processing; or, rigid-, affine- or nonrigid transformation assumptions. Details are explained further in "Metric" section. -->

-> **_NOTE:_** 
-> * We are planning to provide a small validation set, which allows participants to tune their models using these unseen data, and also perform a self-evaluation on the validation data for a sanity check using their Docker images. The participants are allowed to make multiple distinct submissions (but must ensure they are not merely simple variations in hyperparameter values), and the best result will be selected for competing. The number of submissions for each team is limited to 5 to preserve variations in hyperparameters.
-<!-- > * We expect your model to run on a single GPU, and make sure the GPU memory is below 32G when running docker. -->
-> * Your model is expected to run on a single GPU, with GPU memory usage not exceeding 32GB when running docker.
+Please contact [`[email protected]`](mailto:[email protected]) if you encounter any problem during submission.
+
+Receipt of each submission will be acknowledged via email within two working days, and evaluations will be posted on the leaderboard once completed.
+
+The evaluation code, together with the baseline models, is publicly available <a href="https://github.com/QiLi111/TUS-REC2025-Challenge_baseline" target="_blank">here</a>. Participating teams are encouraged, though not obligated, to share their code publicly. Links to any available source code will be provided.
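For intuition on how a transformation becomes a DDF: a displacement field can be obtained by applying a homogeneous transform to point coordinates and subtracting the originals. The sketch below is only illustrative (it is not the provided `Transf2DDFs.py` implementation) and assumes a 4x4 homogeneous transform and point coordinates already expressed in mm.

```python
import numpy as np


def transform_to_ddf(transform, points):
    """Apply a 4x4 homogeneous transform to 3D points; return displacements.

    transform: [4, 4] homogeneous transformation matrix
    points:    [3, P] 3D point coordinates (e.g., pixel positions in mm)
    returns:   [3, P] displacement vectors (transformed minus original), in mm
    """
    homo = np.vstack([points, np.ones((1, points.shape[1]))])  # [4, P] homogeneous coords
    moved = (transform @ homo)[:3]                             # [3, P] transformed points
    return moved - points


# A pure translation of (1, 2, 3) mm displaces every point by that vector:
T = np.eye(4)
T[:3, 3] = [1.0, 2.0, 3.0]
ddf = transform_to_ddf(T, np.zeros((3, 4)))
print(ddf)  # each column is [1, 2, 3]
```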
