Hi,
I have two questions regarding your LLaMA-3 fine-tuning work:
1. Release of the fine-tuned model: Will the fine-tuned LLaMA-3 model used in your experiments be made publicly available? I am interested in reproducing some of your results.
2. Instruction prompt: I noticed that the instruction prompt used in the training script (train_lora_combined.py, line 150) differs from the prompt example shown in Table 10 of your paper (NAACL 2025, https://aclanthology.org/2025.naacl-long.451.pdf). Could you explain the reason for this difference? Is the script's prompt the final one used in your experiments, or just an example?
Thank you.