
Commit 0c9848f

Update thesis.md
1 parent ff2df6c commit 0c9848f

File tree: 1 file changed, +5 −5 lines


_pages/thesis.md

Lines changed: 5 additions & 5 deletions
```diff
@@ -201,7 +201,7 @@ possible here:
 </div>
 <div class="accordion-content">
 {{ "### **Selected Research Projects**
-- *NLP for Job Market Analysis*. Job postings are a rich resource to understand the
+- :hourglass_flowing_sand: *NLP for Job Market Analysis*. Job postings are a rich resource to understand the
 dynamics of the labor market including which skills are demanded, which is also
 important from an educational viewpoint. Recently, the emerging line of work on
 computational job market analysis or NLP for human resources has started to
@@ -290,7 +290,7 @@ lexical signals (and combinations thereof) including, e.g., TF-IDF and QLM (redu
 scope, BSc) and semantic signals obtained from semantic similarity models or LLMs
 (full scope, MSc). This thesis is suitable for students who do not have access to large
 GPUs. **Level: BSc or MSc**.
-- *Do LLMs suffer from lexical biases in learning to rank (L2R)?* LLMs are used
+- :hourglass_flowing_sand: *Do LLMs suffer from lexical biases in learning to rank (L2R)?* LLMs are used
 ubiquitously in virtually all areas of NLP. This includes information retrieval (IR),
 where LLMs are used to, e.g., judge query-document pairs to predict relevance (see
 Fig. 6 in [Yutao et al. (2023)](https://arxiv.org/pdf/2308.07107)). In the context of cross-lingual IR (CLIR), we previously
@@ -313,7 +313,7 @@ engineering, in-context L2R or instruction-tuned LLMs. **Level: MSc.**
 </div>
 <div class="accordion-content">
 {{ "### **Selected Research Projects**
-- *Understanding Indirectness*. Indirectness involves for example indirect answers
+- :hourglass_flowing_sand: *Understanding Indirectness*. Indirectness involves for example indirect answers
 to requests that do not explicitly contain answer clues like Yes, yeah or no. Example:
 Q: Do you wanna crash on the couch? A: I gotta go home sometime. Indirect
 answers are natural in human dialogue, but very difficult for a conversational AI
@@ -409,7 +409,7 @@ EMNLP](https://arxiv.org/abs/2211.02570), [Yang et al., 2024](https://arxiv.org/
 
 ### **Selected research projects**
 
-- _In-context learning from human preference disagreement_. Aggregating
+- :hourglass_flowing_sand: - _In-context learning from human preference disagreement_. Aggregating
 annotations via majority vote could lead to ignoring the opinions of minority groups.
 Learning from individual annotators shows a better result on classification tasks such
 as hate speech detection, emotion detection and natural language inference than
@@ -432,7 +432,7 @@ EMNLP](https://arxiv.org/abs/2211.02570), [Yang et al., 2024](https://arxiv.org/
 - Multilingual-focused: Analyze how LLM-generated label distributions vary
 across languages or incorporate multilingual explanation generation as a joint
 task.
-- Linguistic-focused: Explore existing datasets like liveNLI ([Jiang et al.,
+- :hourglass_flowing_sand: Linguistic-focused: Explore existing datasets like liveNLI ([Jiang et al.,
 2023](https://aclanthology.org/2023.findings-emnlp.712/)), e-SNLI ([Camburu et al., 2018](https://proceedings.neurips.cc/paper/2018/hash/4c7a167bb329bd92580a99ce422d6fa6-Abstract.html)), and VariErr NLI ([Weber et al., 2024](https://aclanthology.org/2024.acl-long.123/)),
 where different explanations exist for the same label, to classify these
 explanations linguistically and observe the impact on LLM-generated label
```
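One of the patched thesis topics contrasts lexical ranking signals such as TF-IDF with semantic ones. As a rough illustration of what a lexical signal is, here is a minimal TF-IDF scorer in plain Python; the tiny corpus, the query, and the smoothing choice are illustrative assumptions, not part of the thesis description.

```python
import math
from collections import Counter

def tfidf_scores(query, docs):
    """Score documents against a query with a plain TF-IDF sum.

    A minimal lexical-signal baseline: term frequency in each document,
    weighted by inverse document frequency over the (tiny) corpus.
    """
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    # Document frequency of each term across the corpus.
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))
    # Smoothed IDF so no term ever divides by zero.
    idf = {t: math.log((1 + n) / (1 + df[t])) + 1 for t in df}
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        scores.append(sum(tf[t] * idf.get(t, 0.0) for t in query.lower().split()))
    return scores

docs = [
    "job postings reveal which skills are demanded in the labor market",
    "a recipe for sourdough bread",
]
scores = tfidf_scores("skills demanded in job postings", docs)
# The on-topic posting shares query terms; the unrelated one shares none.
assert scores[0] > scores[1]
```

A purely lexical scorer like this only rewards exact term overlap, which is exactly the kind of bias the L2R topic above asks students to probe in LLM-based rankers.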
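The preference-disagreement topic turns on how annotator labels are aggregated. A minimal sketch of the contrast it describes, majority vote versus keeping the per-annotator label distribution (soft labels); the `hate`/`not-hate` annotations below are made up for illustration.

```python
from collections import Counter

def majority_vote(labels):
    """Collapse the annotators' labels to the single most frequent one."""
    return Counter(labels).most_common(1)[0][0]

def soft_label(labels):
    """Keep the full distribution over labels instead of collapsing it."""
    counts = Counter(labels)
    return {label: count / len(labels) for label, count in counts.items()}

# Hypothetical annotations for one item: a 3-vs-2 disagreement.
labels = ["hate", "hate", "hate", "not-hate", "not-hate"]
assert majority_vote(labels) == "hate"  # the minority view is discarded
assert soft_label(labels) == {"hate": 0.6, "not-hate": 0.4}  # it is preserved here
```

Training on the soft distribution rather than the argmax is one simple way to keep minority annotators' signal in the loss, which is the motivation the topic description gives.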
