**Trackerless 3D Freehand Ultrasound Reconstruction (TUS-REC) Challenge**

>### 📝 TUS-REC2024 Challenge Paper Released on arXiv
>We are pleased to announce that the TUS-REC2024 Challenge paper is now available on arXiv: <a href="https://doi.org/10.48550/arXiv.2506.21765" target="_blank">https://doi.org/10.48550/arXiv.2506.21765</a>.

---

>### 🔐 TUS-REC2024 Dataset Usage Policy Update
>The TUS-REC2024 training and validation datasets are publicly available for research purposes, provided the challenge paper is properly cited, as specified on the <a href="https://doi.org/10.5281/zenodo.11178508" target="_blank">Zenodo page</a>. Please note that the TUS-REC2025 Challenge datasets are not yet permitted for public use; at present, they are intended solely for use within the scope of the TUS-REC2025 Challenge.

---

>### 🧑‍💻 Participant Code Repositories Available
>Code repositories from TUS-REC2024 Challenge participants are now publicly accessible [here](leaderboard.html).

Reconstructing 2D ultrasound (US) images into a 3D volume enables 3D representations of anatomy to be generated, which benefit a wide range of downstream tasks such as quantitative biometric measurement, multimodal registration, 3D visualisation, and interventional guidance. Although substantial progress has been made recently through both non-deep-learning- and deep-learning-based approaches, this application remains challenging due to 1) inherent accumulated error, as frame-to-frame transformation errors accumulate over time when reconstructing long sequences of US frames, and 2) a lack of publicly accessible data with synchronised spatial location, often obtained from tracking devices, for benchmarking performance and for training learning-based methods. The TUS-REC challenge aims to provide a benchmark for freehand US reconstruction, using publicly available in vivo US data from the forearms of one hundred volunteers acquired with multiple predefined scanning protocols, targeted at improving reconstruction performance in this challenging task. The outcomes of the challenge include 1) open-sourcing one of the first and largest tracked US datasets with accurate positional information, and 2) establishing one of the first benchmarks for 3D US reconstruction, suitable for modern learning-based, data-driven approaches.
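
To make the accumulated-error point concrete, the following minimal sketch (an illustration only, not part of the challenge codebase; the function name and the convention of 4x4 homogeneous matrices are assumptions) shows how per-frame relative transforms are chained into global poses, so that an error in any single estimate propagates to every subsequent frame.

```python
import numpy as np

def compose_global_transforms(relative_transforms):
    """Chain per-frame relative transforms into global poses.

    Assumes relative_transforms[i] is a 4x4 homogeneous matrix mapping
    frame i+1 into the coordinates of frame i. Because each global pose
    is the product of all preceding relative estimates, a small error in
    any one of them drifts through the rest of the sequence.
    """
    global_transforms = [np.eye(4)]  # frame 0 defines the reference frame
    for T_rel in relative_transforms:
        global_transforms.append(global_transforms[-1] @ T_rel)
    return global_transforms
```

This chaining is why long sequences are particularly hard: the pose of the final frame reflects the product of all intermediate estimates rather than any single prediction.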
The TUS-REC challenge is an open-call event, accepting new submissions after the conference deadline. The fixed challenge submission timeline is associated with [MICCAI 2024](https://conferences.miccai.org/2024/en/), with the following [challenge timeline](#timeline).

| 2 | <a href="https://github.com/ISRU-DKFZ/RecuVol" target="_blank">RecuVol</a> | ISRU-DKFZ | Nektarios Winter, Caelan Haney, Phuc Nguyen, Lucas Steinberger | German Cancer Research Center, Heidelberg, Germany | **0.817±0.140** | 0.790±0.205 | 0.844±0.153 | 0.835±0.131 | 0.799±0.169 | 6.858±3.526 | 5.978±3.719 | 0.101±0.016 | 0.088±0.021 | 17.173±1.800 |
| 3 | -- | ZJR | Yuan Zhao, Mingjie Jiang, Bowen Ren | City University of Hong Kong, Hong Kong; Hong Kong Centre for Cerebro-cardiovascular Health Engineering (COCHE), Hong Kong | **0.754±0.145** | 0.886±0.182 | 0.622±0.169 | 0.757±0.135 | 0.751±0.175 | 5.970±3.523 | 5.167±3.682 | 0.111±0.016 | 0.096±0.022 | 46.956±5.617 |
| 4 | <a href="https://github.com/guhong3648/US3D" target="_blank">US3D</a> | AMI-Lab | SiYeoul Lee, SeonHo Kim, MinKyung Seo, MinWoo Kim | Pusan National University, South Korea | **0.573±0.240** | 0.548±0.322 | 0.598±0.246 | 0.595±0.233 | 0.551±0.270 | 9.388±5.358 | 8.459±5.699 | 0.112±0.024 | 0.100±0.033 | 16.964±2.015 |
| 6 | <a href="https://github.com/QiLi111/tus-rec-challenge_baseline" target="_blank">Baseline</a> | Baseline | TUS-REC Organisation Team | University College London, United Kingdom | **0.146±0.159** | 0.236±0.273 | 0.056±0.106 | 0.125±0.148 | 0.167±0.186 | 12.490±5.462 | 11.129±5.838 | 0.135±0.024 | 0.118±0.031 | 8.135±0.996 |
| 7 | -- | USTC | Su Li, Haibo Yu, Ling Chang, Lei Zhang, Xujiong Ye | University of Science and Technology of China, China; University of Lincoln, United Kingdom | **-15.192±11.731** | -9.713±5.251 | -20.671±22.378 | -14.627±5.437 | -15.758±21.892 | 92.109±19.549 | 85.843±22.733 | 0.835±0.113 | 0.856±1.379 | 7.471±0.907 |

> Note: All scores (the larger the better) are normalised using the range of scores across all teams, excluding submissions below baseline performance. The raw values of the DDF errors (the smaller the better) before normalisation are also listed for reference. All values are rounded to 3 decimal places.
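
One plausible reading of this normalisation is sketched below under stated assumptions: the min-max form, the function name, and the exact exclusion rule are inferred from the note above, not taken from the organisers' code. It rescales each raw DDF error so that a lower error maps to a higher score.

```python
import numpy as np

def normalise_scores(raw_errors, baseline_error):
    """Map raw DDF errors (smaller is better) to normalised scores
    (larger is better), using only submissions at or above baseline
    performance to define the normalisation range.
    """
    raw_errors = np.asarray(raw_errors, dtype=float)
    kept = raw_errors[raw_errors <= baseline_error]  # drop below-baseline entries
    lo, hi = kept.min(), kept.max()
    # Best (smallest) kept error scores 1.0; worst kept error scores 0.0.
    # Errors outside the kept range fall below zero, consistent with the
    # negative scores in the table above.
    return (hi - raw_errors) / (hi - lo)
```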

>### ✅ Validation Dataset and Submission Guideline Released
>Participants can now access the official <a href="https://github-pages.ucl.ac.uk/tus-rec-challenge/submission.html" target="_blank">submission guideline</a>. <a href="https://github.com/QiLi111/TUS-REC2025-Challenge_baseline/tree/main/submission#instructions-for-docker" target="_blank">A baseline Docker image</a> is provided as a starting point, along with the <a href="https://doi.org/10.5281/zenodo.15699958" target="_blank">validation dataset</a> for local testing and evaluation.