Hi, thank you for the great work on LLaSA (NAACL '25)! The tabular models you've developed are very promising and valuable to the community.
Given the high cost of training these models, it would be extremely helpful if you could release the checkpoints for the models reported in your paper, specifically:
- Table 1: LLaSA 7B-M
- Table 2: under both the Freeze LLM and LoRA Tuning settings:
  - LLaSA-Phi 3B
  - LLaSA-LLaMA2 7B
  - LLaSA-Mistral 7B
  - LLaSA-LLaMA3 8B
This would greatly facilitate reproducibility, accelerate follow-up research, and broaden the impact of your work.
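In case full model uploads are impractical, releasing just the LoRA adapter weights for the LoRA Tuning rows would already help a lot. Here is a minimal sketch of how the community could then load them; the Hub repo IDs are hypothetical placeholders, and this ignores any extra table-encoder modules your codebase attaches, which would presumably ship alongside the adapters:

```python
# Hedged sketch: repo IDs below are placeholders, not actual releases.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"          # assumed base model
adapter_id = "your-org/LLaSA-Mistral-7B-lora"  # hypothetical adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the released LoRA weights
```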
Looking forward to your response, and thank you again for sharing your work!