Solve various NLP tasks with modern architectures and tooling, from foundational embeddings to transformer-based models like BERT and GPT, using methodologies such as fine-tuning and prompt engineering to maximize task performance.

NLP-LLMs-Fine-Tuning

A hands-on collection of Jupyter notebooks and Python examples that walk through the most popular NLP tasks, from classical pattern matching with regular expressions to modern, parameter-efficient fine-tuning of transformer models using LoRA adapters. The repository may be helpful for those just getting started with NLP, offering practical tutorials and end-to-end pipelines. Its contents are summarized below:

  • Word2Vec from Scratch: Implements the skip-gram model to train word vectors from scratch, then evaluates them and compares the results with existing LLMs (a minimal skip-gram sketch follows this list).
  • Stable Diffusion with LoRA: Fine-tunes a Stable Diffusion model using low-rank adapters for efficient image generation, adapting the model to a specific style or theme (the low-rank adapter trick is sketched below).
  • Modern NLP Techniques: Hands-on examples of transformer-based tasks such as prompt engineering as well as architecture analysis and modification (a prompt-engineering sketch appears after this list).
  • An overview of other popular NLP tasks, such as sentiment analysis, NER, assessing the quality of text embedding models, and more (standard pipeline calls for these are sketched at the end).
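
To give a flavor of the skip-gram idea behind the Word2Vec notebook, here is a minimal PyTorch sketch. The toy corpus, window size, embedding dimension, and training loop are illustrative assumptions, not values from the notebook, and negative sampling is omitted for brevity.

```python
import torch
import torch.nn as nn

# Toy corpus and vocabulary (illustrative; the notebook uses a real dataset).
corpus = "the quick brown fox jumps over the lazy dog".split()
vocab = sorted(set(corpus))
word2idx = {w: i for i, w in enumerate(vocab)}

# Build (center, context) training pairs with a context window of 2.
window = 2
pairs = [
    (word2idx[corpus[i]], word2idx[corpus[j]])
    for i in range(len(corpus))
    for j in range(max(0, i - window), min(len(corpus), i + window + 1))
    if i != j
]

class SkipGram(nn.Module):
    """Skip-gram: predict context words from a center word."""
    def __init__(self, vocab_size, dim=16):
        super().__init__()
        self.in_embed = nn.Embedding(vocab_size, dim)  # center-word vectors
        self.out_layer = nn.Linear(dim, vocab_size)    # scores over context words

    def forward(self, center):
        return self.out_layer(self.in_embed(center))

model = SkipGram(len(vocab))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

centers = torch.tensor([c for c, _ in pairs])
contexts = torch.tensor([c for _, c in pairs])

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(centers), contexts)
    loss.backward()
    optimizer.step()

# The learned word vectors live in model.in_embed.weight.
```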
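
The low-rank adapter trick used in the Stable Diffusion notebook can be illustrated on a plain linear layer. This is a generic sketch of the LoRA update y = Wx + (alpha/r) · B(A(x)), not the notebook's actual diffusers/peft training code; the rank and scaling values are placeholder choices.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B(A(x)), where A and B have rank r."""
    def __init__(self, base: nn.Linear, r=4, alpha=8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False   # pretrained weights stay frozen
        self.lora_a = nn.Linear(base.in_features, r, bias=False)
        self.lora_b = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # adapter starts as a no-op
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

# Example: adapt a frozen 768->768 projection (dimensions are illustrative,
# chosen to resemble an attention projection in a diffusion U-Net).
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} / {total}")  # only low-rank factors train
```

Because the adapter adds far fewer parameters than the frozen base layer, fine-tuning touches only a small fraction of the model, which is what makes LoRA efficient.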
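
For the prompt-engineering material, a minimal sketch using the Hugging Face transformers text-generation pipeline, contrasting a bare prompt with a few-shot prompt. The model name and prompts are placeholders; the notebooks may use different models and tasks.

```python
from transformers import pipeline

# A small open model as a placeholder; swap in whichever model the task needs.
generator = pipeline("text-generation", model="gpt2")

# Compare a bare prompt against one that adds an instruction and an example.
zero_shot = "Translate to French: cheese"
few_shot = (
    "Translate English to French.\n"
    "English: sea otter\nFrench: loutre de mer\n"
    "English: cheese\nFrench:"
)

for prompt in (zero_shot, few_shot):
    out = generator(prompt, max_new_tokens=10, do_sample=False)
    print(out[0]["generated_text"])
```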
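
Finally, tasks like sentiment analysis and NER are commonly run through standard transformers pipelines. The snippet below is a sketch using the library's default checkpoints, which the notebooks may well override with task-specific models.

```python
from transformers import pipeline

# Sentiment analysis: returns a label (e.g. POSITIVE/NEGATIVE) with a score.
sentiment = pipeline("sentiment-analysis")
print(sentiment("This repository is a great starting point for NLP."))

# Named entity recognition: aggregation_strategy="simple" merges word-piece
# tokens back into whole entity spans.
ner = pipeline("ner", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```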
