28th European Conference on Artificial Intelligence (ECAI), pages 4594–4602, 2025\
[DOI](https://doi.org/10.3233/FAIA251362)
**Untangling Hate Speech Definitions: A Semantic Componential Analysis Across Cultures and Domains.**\
Katerina Korre, Arianna Muti, Federico Ruggeri, and Alberto Barrón-Cedeño. 2025.\
In Findings of the Association for Computational Linguistics: NAACL 2025, pages 3184–3198, Albuquerque, New Mexico. Association for Computational Linguistics.\
**TWOLAR: a TWO-step LLM-Augmented distillation method for passage Reranking**\
Marco Lippi and Paolo Torroni\
## Workshops
### 2025
**Sustainable Italian LLM Evaluation: Community Perspectives and Methodological Guidelines**\
Luca Moroni, Gianmarco Pappacoda, Edoardo Barba, Simone Conia, Andrea Galassi, Bernardo Magnini, Roberto Navigli, Paolo Torroni, and Roberto Zanoli. 2025.\
In Proceedings of the Eleventh Italian Conference on Computational Linguistics (CLiC-it 2025), pages 747–759, Cagliari, Italy. CEUR Workshop Proceedings.\
**Do Large Language Models understand how to be judges?**\
Nicolò Donati, Paolo Torroni, and Giuseppe Savino. 2025.\
In Proceedings of the 2nd LUHME Workshop, pages 85–102, Bologna, Italy. UP - Universidade do Porto, LIACC - Laboratório de Inteligência Artificial e Ciência de Computadores da Universidade do Porto, CLUP - Centro de Linguística da Universidade do Porto, UEF - The University of Eastern Finland and UAH - Universidad de Alcalá.\
**Overview of the CLEF-2025 CheckThat! Lab: Subjectivity, Fact-Checking, Claim Normalization, and Retrieval**\
Firoj Alam, Julia Maria Struß, Tanmoy Chakraborty, Stefan Dietze, Salim Hafid, Katerina Korre, Arianna Muti, Preslav Nakov, Federico Ruggeri, Sebastian Schellhammer, Vinay Setty, Megha Sundriyal, Konstantin Todorov & V. Venktesh.\
Conference and Labs of the Evaluation Forum (CLEF), 2025.\
**Dynamic Demonstrations Selection for Few-Shot Legal Argument Mining**\
AMELR: First Argument Mining and Empirical Legal Research Workshop\
[PDF](https://ceur-ws.org/Vol-4089/paper2.pdf)
### 2024
**A Grice-ful Examination of Offensive Language: Using NLP Methods to Assess the Co-operative Principle.**\
Katerina Korre, Federico Ruggeri, and Alberto Barrón-Cedeño. 2024.\
In Proceedings of the 1st LUHME Workshop, pages 12–19, Santiago de Compostela, Spain. CLUP - Centro de Linguística da Universidade do Porto, FLUP - Faculdade de Letras da Universidade do Porto.\
**Detecting Arguments in CJEU Decisions on Fiscal State Aid.**\
Giulia Grundler, Piera Santin, Andrea Galassi, Federico Galli, Francesco Godano, Francesca Lagioia, Elena Palmieri, Federico Ruggeri, Giovanni Sartor, and Paolo Torroni. 2022.\
In Proceedings of the 9th Workshop on Argument Mining, pages 143–157, Online and in Gyeongju, Republic of Korea. International Conference on Computational Linguistics.\
We are in close contact with teams of legal experts who can provide their expertise.
### - Transformers and LLMs for the detection and classification of unfair clauses -
**Description:**
For several years, we have been working on tools for the automatic detection of unfair clauses in Terms of Service and Privacy Policy documents in the English language (see CLAUDETTE and PRIMA on the [Projects page](/projects)).
We have already conducted several studies on this topic, and we are interested in applying new effective methods and techniques.
Right now, we are focused on LLMs, but we are also interested in alternative techniques.
However, properly integrating this type of information is particularly challenging.
The standard approach for training a machine learning model on a task is to provide an annotated dataset $(\mathcal{X}, \mathcal{Y})$.
The dataset is built by providing unlabeled data $\mathcal{X}$ to a group of annotators previously trained on a set of annotation guidelines $\mathcal{G}$.
Annotators label data $\mathcal{X}$ via a given class set $\mathcal{C}$.
The main issue of this approach is that annotators define the mapping from data $\mathcal{X}$ to the class set $\mathcal{C}$ via the guidelines $\mathcal{G}$, while machine learning models are trained to learn the same mapping without guidelines $\mathcal{G}$.
Consequently, these models can learn any kind of mapping from $\mathcal{X}$ to $\mathcal{C}$ that better fits given data.
Our idea is to directly provide guidelines $\mathcal{G}$ to models without any access to class labels during training.
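As a toy illustration of this idea, each class in $\mathcal{C}$ can be represented by its guideline text from $\mathcal{G}$, and an input assigned to the class whose guideline it is most similar to, with no access to gold labels at any point. The bag-of-words similarity and the two guideline strings below are hypothetical placeholders, not our actual models or guidelines:

```python
# Guideline-conditioned zero-shot classification sketch: instead of
# learning a mapping from X to C from labeled examples, compare each
# input against the guideline text describing each class.
from collections import Counter


def bow(text):
    """Lowercased bag-of-words counts."""
    return Counter(text.lower().split())


def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0


def classify(x, guidelines):
    """Assign x to the class whose guideline text it overlaps with most."""
    xv = bow(x)
    return max(guidelines, key=lambda c: cosine(xv, bow(guidelines[c])))


# Hypothetical guidelines G for a two-class set C.
guidelines = {
    "hate": "text that attacks or demeans a group based on identity",
    "not_hate": "text that expresses opinions without attacking a group",
}

print(classify("this message attacks a group because of their identity", guidelines))
```

In practice the lexical similarity would be replaced by a learned encoder, but the training signal comes from $\mathcal{G}$ rather than from labels.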
Our aim is to evaluate how machine learning models are affected by different definitions.
**References:**
**Untangling Hate Speech Definitions: A Semantic Componential Analysis Across Cultures and Domains.**\
Katerina Korre, Arianna Muti, Federico Ruggeri, and Alberto Barrón-Cedeño. 2025.\
In Findings of the Association for Computational Linguistics: NAACL 2025, pages 3184–3198, Albuquerque, New Mexico. Association for Computational Linguistics.\
We are mainly focused on interpretability by design in text classification.
Current topics of interest:
**Selective Rationalization:**\
The process of learning by providing highlights as explanations is denoted as selective rationalization.
Highlights are a subset of input texts meant to be interpretable by a user and faithfully describe the inference process of a classification model.
A popular architecture for selective rationalization is the Select-then-Predict Pipeline (SPP): a generator selects the rationale to be fed to a predictor.
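A minimal sketch of an SPP follows; the token scorer and sentiment lexicon are hypothetical stand-ins for the learned generator and predictor networks:

```python
# Toy Select-then-Predict Pipeline (SPP): the generator selects a
# rationale (a highlight, i.e. a small subset of input tokens), and the
# predictor classifies from that rationale alone, never the full input.

def generator(tokens, score, k=3):
    """Select the k highest-scoring tokens, preserving input order."""
    top = sorted(range(len(tokens)), key=lambda i: score(tokens[i]), reverse=True)[:k]
    return [tokens[i] for i in sorted(top)]


def predictor(rationale, positive_words):
    """Classify using only the selected rationale."""
    return "positive" if any(t in positive_words for t in rationale) else "negative"


# Hypothetical lexicon-based scorer standing in for a trained generator.
positive_words = {"great", "excellent", "love"}
score = lambda t: 1.0 if t in positive_words else 0.0

tokens = "the plot was thin but the acting was great".split()
rationale = generator(tokens, score)          # the highlight shown to the user
print(rationale, predictor(rationale, positive_words))
```

Because the predictor only ever sees the rationale, the highlight is faithful by construction: whatever the pipeline predicts is explained by the selected tokens alone.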