This repository contains fine-tuning experiments with multiple Transformer- and LLM-based models for suicide-ideation and mental-health text classification on public Kaggle datasets.
All models are trained with parameter-efficient fine-tuning techniques (e.g., LoRA) to reduce compute and memory usage; a minimal adapter setup is sketched below.
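The following is a minimal sketch of such an adapter setup, assuming the Hugging Face `transformers` and `peft` libraries. The model name, label count, and LoRA hyperparameters are illustrative placeholders rather than the exact settings used in these experiments.

```python
# Illustrative only: attach a LoRA adapter to a pretrained encoder for
# binary suicide-ideation classification. Model name and hyperparameters
# are placeholders, not the repository's exact configuration.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,          # keeps the classification head trainable
    r=8,                                 # low-rank adapter dimension
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],   # BERT attention projections
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()       # only the adapter and head are trainable
```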
Models covered (see the loading sketch after this list):

- BERT
- RoBERTa
- DeBERTa
- T5
- BART
- GPT-2
- LLaMA-2
- Mistral-7B
- Phi-2
- MentalLLaMA
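The models above span three architecture families: encoder-only, encoder-decoder, and decoder-only. The sketch below uses illustrative model IDs to show how each family is commonly loaded for a classification task; it reflects general `transformers` usage rather than this repository's exact configuration.

```python
# Illustrative loading for each architecture family; model IDs are examples.
from transformers import (
    AutoModelForSequenceClassification,  # encoder-only: BERT, RoBERTa, DeBERTa
    AutoModelForSeq2SeqLM,               # encoder-decoder: T5, BART
)

# Encoder-only models take a classification head directly.
encoder_clf = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

# Encoder-decoder models are usually framed as text-to-text,
# generating the label as a string (e.g. "suicide" / "non-suicide").
seq2seq = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

# Decoder-only LLMs (GPT-2, LLaMA-2, Mistral-7B, Phi-2) can also take a
# sequence-classification head; GPT-2 needs a pad token set explicitly.
decoder_clf = AutoModelForSequenceClassification.from_pretrained("gpt2", num_labels=2)
decoder_clf.config.pad_token_id = decoder_clf.config.eos_token_id
```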
Key techniques and tools (an end-to-end training sketch follows this list):

- Transfer learning with pretrained NLP models
- Parameter-efficient fine-tuning (LoRA)
- Hugging Face Transformers & Trainer APIs
- Kaggle-based mental health datasets
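Below is a compact end-to-end training sketch tying these pieces together: loading a Kaggle-style CSV, tokenizing it, and running the Hugging Face `Trainer` on a LoRA-wrapped model. The CSV file name, the `text`/`label` column names, and all hyperparameters are assumptions for illustration, not the repository's actual settings.

```python
# Illustrative end-to-end pipeline; file name, column names, and
# hyperparameters are assumptions, not the repository's settings.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          DataCollatorWithPadding, Trainer, TrainingArguments)
from peft import LoraConfig, TaskType, get_peft_model

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = get_peft_model(
    AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2),
    LoraConfig(task_type=TaskType.SEQ_CLS, r=8, lora_alpha=16,
               target_modules=["query", "value"]),
)

# Hypothetical Kaggle export with "text" and "label" columns,
# where "label" is already encoded as 0/1.
dataset = load_dataset("csv", data_files="suicide_detection.csv")["train"]
dataset = dataset.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)
dataset = dataset.train_test_split(test_size=0.1, seed=42)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="outputs", per_device_train_batch_size=16,
                           num_train_epochs=3, learning_rate=2e-4, logging_steps=50),
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    data_collator=DataCollatorWithPadding(tokenizer),
)
trainer.train()
```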