Now that we know a translation model is beneficial, we would like to make it more robust.
Specifically:
- We find that the model works decently when the input is a single word or a short sentence, but not when the input is a long sentence or a paragraph. (In practice we split the input into sentences before translating, but this is not ideal, since context-dependent information is lost; see the sketch below.)
- The model may not be robust to simple semantic variations ("desk" vs. "table"), likely because it is trained from scratch in a low-data setting.
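
For concreteness, here is a minimal sketch of the sentence-splitting workaround. The regex segmenter and the `translate_sentence` callback are hypothetical stand-ins, not existing APIs; the point is that each sentence is translated without access to its neighbors:

```python
import re

def split_sentences(text: str) -> list[str]:
    # Naive split on terminal punctuation; a real pipeline would use a
    # proper segmenter, but the failure mode is the same either way.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def translate_paragraph(text: str, translate_sentence) -> str:
    # `translate_sentence` is a hypothetical single-sentence model call.
    # Pronoun referents, topic, and other cross-sentence context are
    # lost because each sentence is translated independently.
    return " ".join(translate_sentence(s) for s in split_sentences(text))
```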
To address these issues, we propose curating multiple data sources and fine-tuning LLMs.
- The parallel data from SignBank+ is of good quality (though not perfect).
- We can use monolingual data alongside language models to generate synthetic sentence-level data. This would be similar to this paper, replacing the "rule-based" approach with a large language model (see the first sketch after this list).
- Key phrases can be extracted from the SignBank+ data and understood as "template + slots". Templates that include fingerspelling can be used to generate high-quality synthetic data by replacing the fingerspelled entity (see the second sketch after this list).
- Large sign language translation datasets can be automatically segmented and transcribed. This would create a large multilingual, document-level parallel dataset with low-quality SignWriting.
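
As a rough illustration of replacing the rule-based approach with an LLM: the sketch below assumes an OpenAI-style chat client and a SignBank+-derived lexicon mapping spoken-language words to FSW strings; the model name and prompt are placeholders, not a settled design:

```python
from openai import OpenAI  # assuming an OpenAI-style client; any LLM would do

client = OpenAI()

def synthesize_pair(lexicon_entries: dict[str, str]) -> tuple[str, str]:
    """Compose a natural sentence from dictionary entries, paired with
    the concatenation of their SignWriting (FSW) transcriptions.

    `lexicon_entries` maps a word to its FSW string, e.g. from SignBank+.
    """
    words = list(lexicon_entries)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": "Write one short, natural sentence using exactly "
                       f"these words, in this order: {', '.join(words)}",
        }],
    )
    sentence = response.choices[0].message.content.strip()
    # Naive target-side composition: concatenate signs in the same order.
    # This inherits the rule-based assumption that word order carries
    # over, which is the main source of noise in such synthetic data.
    signwriting = " ".join(lexicon_entries[w] for w in words)
    return sentence, signwriting
```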
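And a minimal sketch of the template-and-slots idea, assuming a hypothetical `fingerspell` function that converts a word to its fingerspelled SignWriting; the `{SLOT}` placeholder convention is illustrative:

```python
def fill_template(text_template: str,
                  fsw_template: str,
                  fingerspell,  # hypothetical: word -> fingerspelled FSW
                  entities: list[str]) -> list[tuple[str, str]]:
    """Expand one 'template + slot' pair into many synthetic pairs by
    swapping the fingerspelled entity.

    Both templates contain a literal "{SLOT}" placeholder, e.g.
      text_template = "My name is {SLOT}"
      fsw_template  = "<FSW for MY NAME> {SLOT}"
    """
    pairs = []
    for entity in entities:
        text = text_template.replace("{SLOT}", entity)
        fsw = fsw_template.replace("{SLOT}", fingerspell(entity))
        pairs.append((text, fsw))
    return pairs
```

Since the surrounding sign context is unchanged, pairs generated this way should be high quality wherever the extracted template itself is correct.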
Once the data is collected, we will need to find a training recipe that works across
multiple languages and varying data proportions, for either translation direction (see the sampling sketch below).
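
One common recipe for handling uneven data proportions is temperature-based sampling over sources, as used in multilingual models such as mT5; a sketch with made-up corpus sizes:

```python
def mixing_weights(sizes: dict[str, int], temperature: float = 0.3) -> dict[str, float]:
    """Temperature-based source sampling: p_i ~ n_i^T.

    T = 1 keeps natural proportions; T -> 0 approaches uniform, which
    up-samples small, high-quality sources (e.g. SignBank+) relative to
    large, noisy ones (e.g. auto-transcribed corpora).
    """
    scaled = {name: size ** temperature for name, size in sizes.items()}
    total = sum(scaled.values())
    return {name: value / total for name, value in scaled.items()}

# Illustrative (made-up) sizes per data source:
sizes = {"signbank_plus": 120_000, "llm_synthetic": 1_000_000,
         "templates": 300_000, "auto_transcribed": 5_000_000}
print(mixing_weights(sizes))
```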
We would treat the existing models as baselines and evaluate SignWriting output using signwriting-evaluation.
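
To avoid misstating the signwriting-evaluation API here, the sketch below uses sacrebleu's chrF over FSW strings as a generic placeholder; in practice we would swap in the SignWriting-aware metrics from that package:

```python
from sacrebleu.metrics import CHRF  # placeholder metric, not signwriting-evaluation

chrf = CHRF()

def evaluate(hypotheses: list[str], references: list[str]) -> float:
    # Character-level chrF over FSW strings as a rough baseline score;
    # replace with signwriting-evaluation's metrics for real comparisons.
    return chrf.corpus_score(hypotheses, [references]).score
```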