Fine-Tuning LLM Models

This repository contains fine-tuning experiments for multiple Transformer and LLM-based models on suicide ideation and mental health text classification, using public Kaggle datasets.

All models are trained with parameter-efficient fine-tuning techniques (e.g., LoRA and QLoRA) to reduce compute and memory usage.
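
For the encoder-style models (BERT, RoBERTa, DeBERTa), a run looks roughly like the sketch below. This is a minimal illustration assuming the Hugging Face `transformers`, `peft`, and `datasets` libraries; the CSV filename, column names, and hyperparameters are placeholders rather than the repository's exact configuration.

```python
# Minimal LoRA fine-tuning sketch for a BERT-style classifier.
# File name, columns, and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Inject LoRA adapters: only the low-rank update matrices are trained,
# while the original pretrained weights stay frozen.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,                                # rank of the low-rank update
    lora_alpha=16,                      # scaling factor for the update
    lora_dropout=0.1,
    target_modules=["query", "value"],  # BERT self-attention projections
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()      # typically well under 1% trainable

# Hypothetical Kaggle CSV with "text" and "label" columns.
data = load_dataset("csv", data_files="suicide_detection.csv")["train"]
data = data.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
    batched=True,
)
data = data.train_test_split(test_size=0.1)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="bert-lora",
        per_device_train_batch_size=16,
        learning_rate=2e-4,
        num_train_epochs=3,
    ),
    train_dataset=data["train"],
    eval_dataset=data["test"],
    tokenizer=tokenizer,                # pads batches via DataCollatorWithPadding
)
trainer.train()
```

Because only the adapter matrices receive gradients, the optimizer state shrinks accordingly, which is what keeps compute and memory usage low.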


Models Covered

  • BERT
  • RoBERTa
  • DeBERTa
  • T5
  • BART
  • GPT-2
  • LLaMA-2
  • Mistral-7B
  • Phi-2
  • MentalLLaMA

Techniques Used

  • Transfer learning with pretrained NLP models
  • Parameter-efficient fine-tuning (LoRA, QLoRA; see the QLoRA sketch after this list)
  • Hugging Face Transformers & Trainer APIs
  • Kaggle-based mental health datasets
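
For the larger decoder-only models (e.g., LLaMA-2, Mistral-7B), QLoRA pairs a 4-bit quantized, frozen base model with trainable LoRA adapters. The following is a minimal sketch assuming the `transformers`, `peft`, and `bitsandbytes` libraries; the model ID and hyperparameters are illustrative, not the repository's exact settings.

```python
# Minimal QLoRA sketch: 4-bit quantized base weights + LoRA adapters.
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Quantize the frozen base weights to 4-bit NormalFloat (NF4).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)  # casts norms, enables checkpointing

# Trainable LoRA adapters on the attention projections.
lora_config = LoraConfig(
    task_type="CAUSAL_LM",
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

From here the model plugs into the same Trainer loop as above; gradients flow only through the adapters while the 4-bit base stays frozen.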
