
# KubWizzard

Fine-tuning an instruction-tuned Mistral 7B model on instruction-code pairs, specifically kubectl commands.
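As an illustration of the approach, here is a minimal LoRA fine-tuning sketch in Python. The base checkpoint name, prompt template, hyperparameters, and the toy instruction-code pair are assumptions, not the project's exact recipe.

```python
# Minimal LoRA fine-tuning sketch for instruction/kubectl pairs (assumed setup).
import torch
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-Instruct-v0.2"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# Attach small LoRA adapters instead of updating all 7B parameters.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"]))

# Toy instruction-code pair; the real data comes from the kubeget dataset.
pairs = [{"instruction": "List all pods in the default namespace.",
          "output": "kubectl get pods -n default"}]

def to_features(example):
    # Assumed Mistral-style instruction prompt format.
    text = f"[INST] {example['instruction']} [/INST] {example['output']}"
    return tokenizer(text, truncation=True, max_length=512)

ds = Dataset.from_list(pairs).map(to_features,
                                  remove_columns=["instruction", "output"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="kubwizzard-lora", num_train_epochs=1,
                           per_device_train_batch_size=1, learning_rate=2e-4),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```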

## Dataset

We used a dataset generated by the subproject kubeget, which scrapes the official kubectl documentation and augments the gathered data using the OpenAI GPT APIs.
You can find the dataset on Hugging Face here.
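The dataset can be pulled directly from the Hugging Face Hub with the `datasets` library; the repository id and column names below are placeholders, since the README does not spell them out.

```python
# Loading the kubectl instruction dataset from the Hugging Face Hub.
from datasets import load_dataset

# Hypothetical repo id; replace with the dataset linked above.
ds = load_dataset("example-org/kubectl-instructions", split="train")
print(ds[0])  # expected shape: {"instruction": "...", "output": "kubectl ..."}
```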

## Notebooks

The fine-tuning process of the base model is documented in this notebook.

| Target | Notebook |
| ------- | -------- |
| Kubectl | Colab |
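For completeness, a hedged inference sketch for the fine-tuned model; the adapter directory, base checkpoint, and prompt format are assumptions and may differ from what the notebooks use.

```python
# Generating a kubectl command with the fine-tuned adapter (assumed paths).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "mistralai/Mistral-7B-Instruct-v0.2"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(model, "kubwizzard-lora")  # hypothetical adapter dir

prompt = "[INST] Show the logs of the pod named web-0. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```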