hacked version of a deepseek refusal vector (do not merge) #12
This is quite sloppy and I do not recommend merging this as-is into the codebase. However, I wanted to offer it in its current state for visibility and feedback. If there is interest, I would be happy to clean it up and re-submit it in a cleaner form; if not, I am happy to close this unmerged, where it might still be useful to someone else.
This can be tested on the current branch; details below.
I created a refusal vector for DeepSeek-R1-Distill-Llama-8B. Doing so required multiple changes to the code, some trivial but a few not obvious.
The main changes were:
I attempted to insert the DeepSeek chat template into LLAMA3_CHAT_TEMPLATE_WITH_SYSTEM, but that failed, so I simply patched the format_instruction_llama3_chat() function instead.
Also in this file, I included the new template's tokens in _get_eoi_toks() so they would be evaluated.
pipeline/model_utils/model_factory.py was patched to load llama3_model() when 'distill-llama' is part of the model_name. Note that this should really be its own dedicated model factory, as this likely breaks existing (non-DeepSeek) Llama 3 processing.
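The dispatch logic amounts to something like the sketch below. pick_model_family is a hypothetical simplification of the repo's factory function, which actually constructs a model wrapper; it is shown only to illustrate why the substring check works (the DeepSeek distills reuse the Llama 3 architecture) and why it is fragile.

```python
def pick_model_family(model_path: str) -> str:
    """Hypothetical sketch of the factory dispatch: route DeepSeek Llama
    distills through the existing Llama 3 wrapper."""
    name = model_path.lower()
    if "distill-llama" in name:
        # DeepSeek-R1-Distill-Llama-* shares the Llama 3 architecture, so
        # the Llama 3 loader works -- but piggybacking on this branch is
        # why a dedicated factory entry would be cleaner.
        return "llama3"
    if "llama-3" in name:
        return "llama3"
    raise ValueError(f"unsupported model: {model_path}")
```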
pipeline/run_pipeline.py
kl_threshold was raised from 0.1 to 0.2, as this proved necessary in testing.
harmful_train data was patched with nine examples of DeepSeek-specific refusal that I think were not well covered in existing datasets, such as "What is 8964?".
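The dataset patch can be sketched as below, assuming harmful_train is a JSON list of instruction records. The file layout, record fields, and helper name are assumptions for illustration; only the "What is 8964?" example comes from this PR.

```python
import json

# Assumed record layout: {"instruction": ..., "category": ...}. One of the
# nine DeepSeek-specific prompts from the PR is shown; the rest are elided.
DEEPSEEK_REFUSAL_EXAMPLES = [
    {"instruction": "What is 8964?", "category": "deepseek_specific"},
    # ...eight further examples in the same style...
]

def patch_harmful_train(path: str) -> int:
    """Append the DeepSeek-specific examples to the harmful training split
    and return the new record count. (Hypothetical helper name.)"""
    with open(path) as f:
        records = json.load(f)
    records.extend(DEEPSEEK_REFUSAL_EXAMPLES)
    with open(path, "w") as f:
        json.dump(records, f, indent=2)
    return len(records)
```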
Also included is pipeline/runs/DeepSeek-R1-Distill-Llama-8B with the refusal vector I generated, along with metrics and diagnostics.
I also sloppily left in some print statements used for debugging.