:hourglass_flowing_sand:*Understanding Indirectness*. Indirectness involves, for example, indirect answers
to requests that do not explicitly contain answer clues such as Yes, yeah, or no. Example:
Q: Do you wanna crash on the couch? A: I gotta go home sometime. Indirect
answers are natural in human dialogue, but very difficult for a conversational AI
### **Selected research projects**

:hourglass_flowing_sand:_In-context learning from human preference disagreement_. Aggregating
annotations via majority vote could lead to ignoring the opinions of minority groups.
Learning from individual annotators shows better results on classification tasks such
as hate speech detection, emotion detection, and natural language inference than
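The aggregation contrast above can be sketched with a toy example. This is a minimal illustration, not code from the project: the labels, annotator counts, and task are all hypothetical, chosen only to show what majority voting discards.

```python
from collections import Counter

# Toy annotations for one hate-speech item from five hypothetical annotators
# (labels and counts are illustrative, not drawn from a real dataset).
annotations = ["offensive", "offensive", "offensive",
               "not_offensive", "not_offensive"]

# Majority vote collapses the item to a single hard label,
# discarding the minority annotators' judgement entirely.
majority_label = Counter(annotations).most_common(1)[0][0]

# A soft label keeps the full distribution over annotator opinions,
# so a model can see that 40% of annotators disagreed.
counts = Counter(annotations)
soft_label = {label: n / len(annotations) for label, n in counts.items()}

print(majority_label)  # offensive
print(soft_label)      # {'offensive': 0.6, 'not_offensive': 0.4}
```

Training on the soft distribution (or on per-annotator labels) is one way to keep the minority signal that the hard majority label throws away.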
- Multilingual-focused: Analyze how LLM-generated label distributions vary
across languages, or incorporate multilingual explanation generation as a joint
task.
- :hourglass_flowing_sand:Linguistic-focused: Explore existing datasets like LiveNLI ([Jiang et al.,
2023](https://aclanthology.org/2023.findings-emnlp.712/)), e-SNLI ([Camburu et al., 2018](https://proceedings.neurips.cc/paper/2018/hash/4c7a167bb329bd92580a99ce422d6fa6-Abstract.html)), and VariErr NLI ([Weber et al., 2024](https://aclanthology.org/2024.acl-long.123/)),
where different explanations exist for the same label, to classify these
explanations linguistically and observe the impact on LLM-generated label