
Want to fine-tune Llama 3.2-1B on MMLU, ARC-Challenge, and GSM8K (math) #2132

Open
@sorobedio

Description

Hello, everyone,

I’m new to fine-tuning large language models (LLMs), but I have experience with PyTorch. I’m planning to fully fine-tune (rather than use LoRA) the Llama 3.2-1B base and instruct models on the MMLU, ARC-Challenge, and GSM8K (math) datasets, and then evaluate the resulting models.

Could you please guide me on managing these datasets and share any working examples or resources to get started? Any initial push would be greatly appreciated.
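For context, here is how I currently plan to turn records from each dataset into prompt/target pairs for supervised fine-tuning. This is only a sketch; the field names assume the standard Hugging Face dataset schemas (cais/mmlu, allenai/ai2_arc, openai/gsm8k), and the records I'm testing with are hand-written, not downloaded data:

```python
# Sketch: format one record from each dataset into a (prompt, target) pair
# for supervised full fine-tuning. Field names assume the usual Hugging Face
# schemas; please correct me if the actual loaders differ.

def format_mmlu(example):
    # MMLU record: question (str), choices (list of 4 str), answer (index 0-3)
    letters = ["A", "B", "C", "D"]
    options = "\n".join(f"{l}. {c}" for l, c in zip(letters, example["choices"]))
    prompt = f"Question: {example['question']}\n{options}\nAnswer:"
    return prompt, f" {letters[example['answer']]}"

def format_arc(example):
    # ARC-Challenge record: question, choices = {"label": [...], "text": [...]},
    # answerKey (the label of the correct choice)
    pairs = zip(example["choices"]["label"], example["choices"]["text"])
    options = "\n".join(f"{l}. {t}" for l, t in pairs)
    prompt = f"Question: {example['question']}\n{options}\nAnswer:"
    return prompt, f" {example['answerKey']}"

def format_gsm8k(example):
    # GSM8K record: question, answer (worked solution ending in "#### <number>")
    return f"Question: {example['question']}\nAnswer:", f" {example['answer']}"
```

My intent is to concatenate prompt and target into one training sequence (masking the prompt tokens in the loss), so any pointers on whether this matches how the existing dataset builders expect things would help.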

Thank you!
