Fintext_NLP is a collection of applied projects exploring natural language processing in financial contexts. The goal is to evaluate how transformer-based models can be adapted, fine-tuned, or extended for tasks such as sentiment analysis, and through techniques such as domain adaptation and transfer learning, specifically within the language of markets, news, and financial commentary.
Projects are modular, experiment-driven, and focused on bridging the gap between general-purpose language models and domain-specific performance in finance.
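As a concrete illustration of the core task, the sketch below runs three-class financial sentiment classification with a Hugging Face checkpoint. The `ProsusAI/finbert` model and the example sentence are illustrative assumptions, not necessarily the checkpoints or data used in the notebook, and a PyTorch backend is assumed.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Illustrative checkpoint: a public finance-tuned BERT. The notebook's
# actual models may differ.
checkpoint = "ProsusAI/finbert"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

sentence = "Quarterly operating profit rose 20% year over year."
inputs = tokenizer(sentence, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to its label string.
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```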
Navigate to: (Main Notebook)

1. Introduction and Objective
2. Exploratory Data Analysis (EDA)
3. Training Classifier Head Only
   - 3.1 Experiment 1: Model Performance Across Varying Subsets (Subset Fine-Tuning Experiments)
   - 3.2 Experiment 2: Addressing Class Imbalance (Class Imbalance Experiments)
   - 3.3 Aside: Working with TFDS - TensorFlow Datasets (TFDS Demonstration)
4. Supervised Fine-Tuning with Fixed Classifier Head
   - 4.1 Load Trained Model, Freeze Classifier Head, Unfreeze Encoder
5. Full Fine-Tuning of Entire Model (Pre-Trained + Classifier Head)
   - 5.1 Unfreezing
   - 5.2 Experiment 3: Detailed Fine-Tuning (Hyperparameter Fine-Tuning Experiments)
   - 5.3 Experiment 4: Fine-Tuning with Custom Head Architecture
6. Low-Rank Adaptation (LoRA)
7. Evaluating Performance on Financial PhraseBank Variants
8. Evaluating Alternative Hugging Face Models on Financial PhraseBank
9. In-Context Learning Experiments
10. NLP for Commodities Trading
11. Conclusion
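Sections 3 through 5 differ mainly in which parameters are trainable. As a minimal sketch of the Section 3 setup (training the classifier head only), the snippet below freezes the pre-trained encoder so that gradients flow only through the head; the DistilBERT backbone and three-label configuration are illustrative assumptions, not necessarily the notebook's configuration. Section 4 then inverts the freeze (head fixed, encoder trainable), and Section 5 unfreezes everything.

```python
from transformers import AutoModelForSequenceClassification

# Illustrative assumptions: DistilBERT backbone, three sentiment labels.
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=3
)

# Freeze the pre-trained encoder; only the classification head
# (pre_classifier + classifier) remains trainable.
for param in model.distilbert.parameters():
    param.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable:,} of {total:,} parameters")
```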
This repository serves as a modular framework for testing and comparing various adaptation strategies for transformer models in financial NLP. Techniques are extensible to other datasets and domains.
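For instance, the Low-Rank Adaptation approach of Section 6 ports to most Hugging Face classification checkpoints. Below is a minimal sketch using the peft library; the rank, scaling factor, dropout, and target modules are illustrative assumptions rather than the notebook's settings.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=3  # illustrative assumptions
)

# Illustrative LoRA hyperparameters; q_lin and v_lin are DistilBERT's
# query and value projection layers.
config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_lin", "v_lin"],
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only LoRA matrices and the head train
```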