3. Access MAMKit in your Python code:
```python
import mamkit
```
Then, retrieve the data using the following code:
**Note**: By "updated version," we mean that the datasets have undergone a refinement in the alignment process, which has resulted in adjustments to the number of samples included compared to the original versions published in the referenced papers.
# Evaluation
For argumentative fallacy detection, we will compute the binary F1-score on predicted sentence-level labels.
For argumentative fallacy classification, we will compute the macro F1-score on predicted sentence-level labels.
Metrics will be computed on the hidden test set to determine the best system for each sub-task and input mode.
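Both metrics can be reproduced locally with scikit-learn's `f1_score`; the following is a minimal sketch with made-up sentence-level labels (the label values and fallacy category names are illustrative only, not taken from the shared task data, and the organizers do not mandate any particular library):

```python
from sklearn.metrics import f1_score

# Fallacy detection: binary sentence-level labels, 1 = fallacious (hypothetical data).
y_true_det = [1, 0, 1, 1, 0, 0]
y_pred_det = [1, 0, 0, 1, 0, 1]
binary_f1 = f1_score(y_true_det, y_pred_det, average="binary")  # F1 of the positive class

# Fallacy classification: one category per sentence (hypothetical category names).
y_true_cls = ["AdHominem", "AppealToEmotion", "AdHominem", "Slogans"]
y_pred_cls = ["AdHominem", "AdHominem", "AdHominem", "Slogans"]
macro_f1 = f1_score(y_true_cls, y_pred_cls, average="macro")  # unweighted mean of per-class F1

print(binary_f1, macro_f1)
```

Note that macro averaging weights every fallacy class equally, so rare classes matter as much as frequent ones.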
Evaluation will be performed via the [CodaLab platform](https://codalab.lisn.upsaclay.fr/competitions/22739).
On CodaLab, participants will find the leaderboard, along with the results of the provided baselines.
Submission guidelines can be found under the *Evaluation* section of the CodaLab competition page.
🚨 **Important**: On the evaluation website, you will also find a link to a **mandatory participation survey**.
Filling out this survey is required in order to participate in the task.
We also provide the survey link here for convenience: [https://tinyurl.com/limesurvey-argfallacy](https://tinyurl.com/limesurvey-argfallacy)
All teams whose submissions are evaluated must commit to submitting a system description paper. You can choose between two options:
- **Non-Archival Paper**: A 2-page paper describing your system, with unlimited pages for appendices and bibliography. These papers will *not* be published in the workshop proceedings, but your system will be mentioned in the Overview Paper of the shared task, upon acceptance.
- **Archival Paper**: A 4-page paper describing your system, also with unlimited pages for appendices and bibliography. These papers *will* be published in the official ACL workshop proceedings and must be presented at the workshop (poster or oral session).
⚠️ *In accordance with ACL policy, at least one team member must register for the workshop in order to present an archival paper, if it is accepted for publication in the ACL proceedings.*
All papers must use the official [ACL style templates](https://github.com/acl-org/acl-style-files), available in both LaTeX and Word. We strongly recommend using the official [Overleaf template](https://www.overleaf.com/project/5f64f1fb97c4c50001b60549) for convenience.
Submissions will be made via a dedicated submission website, which will be published soon.
- 🗓️ **Submissions open**: May 1st, 2025 (the day after the end of the evaluation period)
- 🗓️ **Submissions close**: May 15th, 2025
- 📢 **Notification of acceptance**: May 20th, 2025
- 📝 **Camera-ready deadline**: May 25th, 2025
**Important notes**:
- All accepted **archival papers** will be presented during the workshop’s poster session and require at least one registered author.
- **Non-archival papers** do *not* require registration and are not presented at the workshop, but their systems will be acknowledged in the Overview Paper.
We look forward to receiving your submissions!
# Key Dates (Anywhere on Earth)
- **Release of Training Data**: February 25th
- **Release of Test Set**: ~~March 24th~~ → April 7th
- **Evaluation Start**: ~~April 14th~~ → April 21st