# Introduction

Running since 2019, this task focused during its first two years on the classification of temporally segmented videos of single table tennis strokes.

Since the third edition of the task, two subtasks have been proposed. This year, the task is merged with SwimTrack and offers subtasks on both swimming and table tennis. This baseline focuses on only two of the subtasks:

***Subtask 2.1:*** is a more challenging subtask proposed since last year: the goal here is to detect whether a stroke has been performed, whatever its class, and to extract its temporal boundaries. The aim is to distinguish moments of interest in a game (players performing strokes) from irrelevant moments (picking up the ball, having a break…). This subtask can be a preliminary step for later recognizing the stroke that has been performed.

***Subtask 3.1:*** is a classification task: participants are required to build a classification system that automatically labels video segments according to the performed stroke. There are 20 possible stroke classes and an additional non-stroke class.

The organizers encourage combining the methods developed for the subtasks, as well as cross-disciplinary approaches. Participants are also invited to use the provided baseline as a starting point for their investigation. Finally, participants are encouraged to make their code public with their submission.

# Leaderboard
## Subtask 2.1: Stroke Detection in Table Tennis

The detection subtask is evaluated with regard to the Global IoU metric and the mAP (higher is better), mAP being the ranking metric.

| Model | IoU | mAP |
| :---: | :---: | :---: |
| Baseline |**.515**|**.131**|
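
For intuition on the metric, here is a minimal sketch of how a temporal IoU between a predicted stroke segment and a ground-truth segment could be computed (the segment representation and values are illustrative only; the organizers' evaluation remains the reference):

```python
def temporal_iou(pred, gt):
    """IoU of two temporal segments given as (start_frame, end_frame) tuples."""
    intersection = max(0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - intersection
    return intersection / union if union > 0 else 0.0

# Illustrative values: a detection partially overlapping a ground-truth stroke.
print(temporal_iou((120, 200), (150, 230)))  # ≈ 0.45
```

The mAP is then computed over detections ranked by confidence, a detection typically counting as correct only when its IoU with a ground-truth stroke exceeds a threshold.
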
## Subtask 3.1: Stroke Classification in Table Tennis
The performance of each model is presented in terms of accuracy. The ranking metric is the overall accuracy.

| NathanSadoun | .814 |**.949**| .932 | .915 |
| SSN-SVJ | .814 | .924 |**.941**|**.924**|
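
As a simple illustration, the overall accuracy is just the fraction of correctly labelled segments; the arrays below are made-up examples rather than task data:

```python
import numpy as np

# Made-up ground-truth and predicted class ids (20 stroke classes + 1 non-stroke class).
y_true = np.array([0, 3, 3, 7, 20, 12])
y_pred = np.array([0, 3, 5, 7, 20, 12])

overall_accuracy = (y_true == y_pred).mean()
print(f"{overall_accuracy:.3f}")  # 0.833
```
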

# Baseline

In order to help participants process the videos and annotation files and apply deep learning techniques for their submissions, we provide in this repository a baseline formatted to work with the data provided by the task organizers.

## Performance

### Detection subtask
The detection subtask is evaluated with regard to the Global IoU metric and the mAP (higher is better).

Here a sliding window with step one is used on the test videos.

| V2 Class. Neg VS all | .000506 | .00173 | .00237 |
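
As a rough sketch of how window-wise stroke scores might be turned into temporal segments, the helper below thresholds a per-frame score curve and keeps sufficiently long runs (the score array, threshold and minimum length are hypothetical and not the baseline's actual post-processing):

```python
def scores_to_segments(frame_scores, threshold=0.5, min_length=30):
    """Group consecutive frames whose stroke score exceeds the threshold into
    (start_frame, end_frame) segments, discarding very short detections."""
    segments, start = [], None
    for i, score in enumerate(frame_scores):
        if score >= threshold and start is None:
            start = i
        elif score < threshold and start is not None:
            if i - start >= min_length:
                segments.append((start, i))
            start = None
    if start is not None and len(frame_scores) - start >= min_length:
        segments.append((start, len(frame_scores)))
    return segments

# Synthetic example: one long high-score region yields one detected segment.
scores = [0.1] * 50 + [0.9] * 80 + [0.2] * 40
print(scores_to_segments(scores))  # [(50, 130)]
```
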

### Classification subtask

Performance of each model is presented according to each decision method in terms of global classification accuracy in the following table.

| Model | No Window | Vote | Mean | Gaussian |
| :---: | :---: | :---: | :---: | :---: |
| V1 | .847 | .839 | .856 | .856 |
| V2 | .856 | .822 | .831 |**.864**|
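
For illustration, here is a minimal sketch of what these decision rules could look like when aggregating per-window class probabilities; the array shapes, names and Gaussian width are assumptions rather than the baseline's exact implementation ("No Window" would then correspond to a single decision on the centred window):

```python
import numpy as np

def aggregate_windows(window_probs, method="gaussian", sigma=2.0):
    """Turn per-window class probabilities (n_windows x n_classes) into one class id."""
    window_probs = np.asarray(window_probs, dtype=float)
    n_windows, n_classes = window_probs.shape
    if method == "vote":      # majority vote over per-window argmax decisions
        votes = np.bincount(window_probs.argmax(axis=1), minlength=n_classes)
        return int(votes.argmax())
    if method == "mean":      # average the probabilities across windows
        return int(window_probs.mean(axis=0).argmax())
    if method == "gaussian":  # weight windows by a Gaussian centred on the segment
        centre = (n_windows - 1) / 2.0
        weights = np.exp(-((np.arange(n_windows) - centre) ** 2) / (2.0 * sigma ** 2))
        return int((weights[:, None] * window_probs).sum(axis=0).argmax())
    raise ValueError(f"unknown method: {method}")
```
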
# Submission
Thank you for your participation.

## Working Note Paper

After your submission, you will be asked to submit a Working Note paper to share your method, implementation and results. We strongly advise you to make your implementation available on GitHub and share its link. Please report the baseline results for comparison.

A guideline for writing your paper is available [there](https://docs.google.com/document/d/1HcAx14RVuxqDEi-1SJJRwhHhzC_V-Ktpw-9jn5dg0-0/edit#heading=h.b40noxg68mvn). The LaTeX template can be downloaded from [here](https://drive.google.com/file/d/1hWorTTyJzLBiFJmtTzvF78YBNnSShw3W). Please update the `\shorttitle` command to fit our task with:

`\renewcommand{\shorttitle}{SportsVideo}`

Please cite the overview paper describing the task and the baseline paper. See next section.

To cite this work, we invite you to also include some previous work. Find the BibTeX below.

```
@inproceedings{conf/mediaeval/2023/baseline,
  author    = {Pierre{-}Etienne Martin},
  title     = {Baseline Method for the Sport Task of MediaEval 2023: 3D CNNs using Attention Mechanisms for Table Tennis Stroke Detection and Classification},
  booktitle = {Working Notes Proceedings of the MediaEval 2023 Workshop, Amsterdam,
               The Netherlands and Online, 1-2 February 2024},
  series    = {{CEUR} Workshop Proceedings},
  publisher = {CEUR-WS.org},
  year      = {2023}
}

@inproceedings{conf/mediaeval/2023/sporttask,
  author    = {Aymeric Erades and
               Pierre{-}Etienne Martin and
               Romain Vuillemot and
               Boris Mansencal and
               Renaud P{\'{e}}teri and
               Julien Morlier and
               Stefan Duffner and
               Jenny Benois{-}Pineau},
  title     = {Sports{V}ideo: A Multimedia Dataset for Event and Position Detection in Table Tennis and Swimming},
  booktitle = {Working Notes Proceedings of the MediaEval 2023 Workshop, Amsterdam,