
Commit e247561 (1 parent: b18e9c3)

Update README.md

1 file changed: README.md (54 additions, 25 deletions)
@@ -5,18 +5,26 @@

# Introduction

Running since 2019, this task was focused during the first two years on the classification of temporally segmented videos of single table tennis strokes.
Since the third edition of the task, two subtasks have been proposed. This year, the task is merged with SwimTrack and offers subtasks on both swimming and table tennis sports. This baseline focuses on only two of the subtasks:

***Subtask 2.1:*** is a more challenging subtask proposed since last year: the goal is to detect whether a stroke has been performed, whatever its class, and to extract its temporal boundaries. The aim is to distinguish moments of interest in a game (players performing strokes) from irrelevant moments (picking up the ball, having a break…). This subtask can be a preliminary step for later recognizing the stroke that has been performed.

***Subtask 3.1:*** is a classification task: participants are required to build a classification system that automatically labels video segments according to the performed stroke. There are 20 possible stroke classes and an additional non-stroke class.

The organizers encourage combining the methods developed for the subtasks, as well as cross-disciplinary approaches. Participants are also invited to use the provided baseline as a starting point in their investigation. Finally, participants are encouraged to make their code public with their submission.

# Leaderboard

## Subtask 2.1: Stroke Detection in Table Tennis

The detection subtask is evaluated with regard to the Global IoU metric and the mAP (higher is better), mAP being the ranking metric.

| Model | IoU | mAP |
| :---: | :---: | :---: |
| Baseline | **.515** | **.131** |
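As a rough illustration of the overlap measure involved, temporal IoU between two stroke segments can be sketched as below. This is only a sketch, not the task's official scoring script; the `(start, end)` frame-tuple segment format and the way Global IoU aggregates over a video are assumptions.

```python
def segment_iou(pred, gt):
    """Temporal IoU between two segments given as (start, end) frame tuples."""
    inter = max(0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def global_iou(preds, gts):
    """Global IoU over one video: total intersection over total union.

    Assumes segments within each list do not overlap each other
    (an assumption of this sketch, not a documented property of the metric).
    """
    inter = sum(max(0, min(p[1], g[1]) - max(p[0], g[0]))
                for p in preds for g in gts)
    total = (sum(e - s for s, e in preds)
             + sum(e - s for s, e in gts) - inter)
    return inter / total if total > 0 else 0.0
```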
## Subtask 3.1: Stroke Classification in Table Tennis

The performance of each model is presented in terms of accuracy. The ranking metric is the overall accuracy.

@@ -27,13 +35,7 @@ The performance of each model is presented in terms of accuracy. The ranking met

| NathanSadoun | .814 | **.949** | .932 | .915 |
| SSN-SVJ | .814 | .924 | **.941** | **.924** |

# Baseline

To help participants with their submission (processing videos, annotation files, and deep learning techniques), we provide in this repository a baseline formatted to process the data provided by the task organizers.
@@ -128,15 +130,6 @@ In addition, the classification task model was tested to perform segmentation an

## Performance

### Detection subtask

The detection subtask is evaluated with regard to the Global IoU metric and the mAP (higher is better).
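For intuition on the mAP side of the evaluation, one common way to score ranked temporal detections is to greedily match them to ground truth above an IoU threshold and average the precision at each recall step. This sketch is an assumption about the metric's shape, not the baseline's evaluation code; the 0.5 threshold and the greedy matching are hypothetical choices.

```python
def average_precision(pred_segments, gt_segments, iou_thr=0.5):
    """AP for one video. Predictions are (start, end, score) tuples,
    ground truths are (start, end); the IoU threshold is an assumption."""
    def iou(p, g):
        inter = max(0.0, min(p[1], g[1]) - max(p[0], g[0]))
        union = (p[1] - p[0]) + (g[1] - g[0]) - inter
        return inter / union if union > 0 else 0.0

    preds = sorted(pred_segments, key=lambda s: -s[2])  # rank by confidence
    matched, tp, fp, precisions = set(), 0, 0, []
    for start, end, _ in preds:
        best, best_i = 0.0, -1
        for i, g in enumerate(gt_segments):
            if i not in matched and iou((start, end), g) > best:
                best, best_i = iou((start, end), g), i
        if best_i >= 0 and best >= iou_thr:
            matched.add(best_i)
            tp += 1
            precisions.append(tp / (tp + fp))  # precision at this recall step
        else:
            fp += 1
    return sum(precisions) / len(gt_segments) if gt_segments else 0.0
```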
@@ -186,6 +179,14 @@ Here a sliding window with step one is used on the test videos. The outputs are

| V2 Class. Neg VS all | .000506 | .00173 | .00237 |
| V2 Class. Neg VS sum(all) | .00145 | .00185 | .00261 |

### Classification subtask

The performance of each model is presented according to each decision method, in terms of global classification accuracy, in the following table.

| Model | No Window | Vote | Mean | Gaussian |
| :---: | :---: | :---: | :---: | :---: |
| V1 | .847 | .839 | .856 | .856 |
| V2 | .856 | .822 | .831 | **.864** |

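The window-based decision methods in the table above (Vote, Mean, Gaussian) can be sketched as follows. This is a toy illustration, not the baseline's actual implementation: the `(T, C)` per-frame score layout and the Gaussian width are assumptions.

```python
import numpy as np

def fuse_window(scores, method="mean"):
    """Fuse per-frame class scores within a sliding window into one decision.

    scores: (T, C) array of per-frame class probabilities (assumed shape).
    Returns the index of the winning class.
    """
    T, C = scores.shape
    if method == "vote":
        # Each frame votes for its argmax class; the most-voted class wins.
        votes = np.bincount(scores.argmax(axis=1), minlength=C)
        return int(votes.argmax())
    if method == "gaussian":
        # Weight frames by a Gaussian centred on the window, then average.
        t = np.arange(T)
        w = np.exp(-0.5 * ((t - (T - 1) / 2) / (T / 4)) ** 2)
        return int((scores * w[:, None]).sum(axis=0).argmax())
    # "mean": plain average of the scores over the window.
    return int(scores.mean(axis=0).argmax())
```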
# Submission

@@ -257,12 +258,12 @@ For example:

Thank you for your participation.

## Working Note Paper

After your submission, you will be asked to submit a Working Note paper to share your method, implementation, and results. We strongly advise you to make your implementation available on GitHub and share its link. Please report the baseline results for comparison.

The guidelines for writing your paper are available [here](https://docs.google.com/document/d/1HcAx14RVuxqDEi-1SJJRwhHhzC_V-Ktpw-9jn5dg0-0/edit#heading=h.b40noxg68mvn). The LaTeX template can be downloaded from [here](https://drive.google.com/file/d/1hWorTTyJzLBiFJmtTzvF78YBNnSShw3W). Please update the shorttitle command to fit our task with:

`\renewcommand{\shorttitle}{SportsVideo}`

Please cite the overview paper describing the task and the baseline paper. See next section.

@@ -271,6 +272,34 @@ Please cite the overview paper describing the task and the baseline paper. See n

To cite this work, we invite you to include some previous work. Find the BibTeX below.

```
@inproceedings{conf/mediaeval/2023/baseline,
  author    = {Pierre{-}Etienne Martin},
  title     = {Baseline Method for the Sport Task of MediaEval 2023 3D CNNs using Attention Mechanisms for Table Tennis Stroke Detection and Classification},
  booktitle = {Working Notes Proceedings of the MediaEval 2023 Workshop, Amsterdam,
               The Netherlands and Online, 1-2 February 2024},
  series    = {{CEUR} Workshop Proceedings},
  publisher = {CEUR-WS.org},
  year      = {2023}
}

@inproceedings{conf/mediaeval/2023/sporttask,
  author    = {Aymeric Erades and
               Pierre{-}Etienne Martin and
               Romain Vuillemot and
               Boris Mansencal and
               Renaud P{\'{e}}teri and
               Julien Morlier and
               Stefan Duffner and
               Jenny Benois{-}Pineau},
  title     = {Sports{V}ideo: A Multimedia Dataset for Event and Position Detection in Table Tennis and Swimming},
  booktitle = {Working Notes Proceedings of the MediaEval 2023 Workshop, Amsterdam,
               The Netherlands and Online, 1-2 February 2024},
  series    = {{CEUR} Workshop Proceedings},
  publisher = {CEUR-WS.org},
  year      = {2023}
}

@inproceedings{mediaeval/Martin/2022/overview,
  author    = {Pierre{-}Etienne Martin and
               Jordan Calandre and