fixtokens

A living knowledge base for a collaborative research project on fixing issues with BPE tokenization in language models.

Goal

Systematically investigate and address known deficiencies in Byte Pair Encoding (BPE) tokenization, progressing from lightweight interventions to training-based solutions.
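As a concrete illustration of the kind of deficiency in scope, here is a minimal, self-contained BPE trainer and encoder (a toy sketch, not code from this repo). Trained on a tiny corpus containing only the "words" `2022` and `2023`, greedy merges produce inconsistent segmentations for numerically adjacent strings: one year becomes a single token while its neighbors fragment.

```python
from collections import Counter

def train_bpe(words, num_merges):
    """Learn BPE merges from {word: frequency}; symbols start as characters."""
    vocab = {tuple(w): f for w, f in words.items()}
    merges = []
    for _ in range(num_merges):
        # Count all adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for syms, f in vocab.items():
            for a, b in zip(syms, syms[1:]):
                pairs[(a, b)] += f
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # greedy: most frequent pair wins
        merges.append(best)
        # Apply the winning merge everywhere in the vocabulary.
        new_vocab = {}
        for syms, f in vocab.items():
            out, i = [], 0
            while i < len(syms):
                if i < len(syms) - 1 and (syms[i], syms[i + 1]) == best:
                    out.append(syms[i] + syms[i + 1])
                    i += 2
                else:
                    out.append(syms[i])
                    i += 1
            new_vocab[tuple(out)] = f
        vocab = new_vocab
    return merges

def encode(word, merges):
    """Segment a word by replaying the learned merges in order."""
    syms = list(word)
    for a, b in merges:
        i = 0
        while i < len(syms) - 1:
            if syms[i] == a and syms[i + 1] == b:
                syms[i:i + 2] = [a + b]
            else:
                i += 1
    return syms

merges = train_bpe({"2022": 10, "2023": 10}, num_merges=3)
print(encode("2022", merges))  # ['2022']      -- a single token
print(encode("2024", merges))  # ['202', '4']  -- an unseen neighbor fragments
```

Real tokenizers show the same effect at scale: which numbers, words, or code identifiers happen to be merged into single tokens is an artifact of corpus frequency, not of any semantic regularity.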

Research Phases

| Phase | Approach | Invasiveness |
|-------|----------|--------------|
| 1 | Training-free heuristics (pre-tokenization, vocabulary pruning) | Non-invasive |
| 2 | Auxiliary prediction model for tokenization scoring | Moderate |
| 3 | RL training: learn a tokenization reward model | Invasive |
| 4 | Domain-specific corpus ablations | Variable |
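A Phase 1 training-free heuristic can be sketched as a pre-tokenization pass that runs before BPE. The example below is a hypothetical digit-splitting rule (in the spirit of the single-digit splitting used by several open models, not necessarily the rule this project will adopt): runs of digits are broken into one pre-token per digit, so BPE can never merge across them and every number is segmented uniformly.

```python
import re

def pretokenize(text):
    """Split text into pre-tokens, forcing each digit into its own piece.

    BPE is then applied within each pre-token independently, so numbers
    like 2022 and 2024 receive identical per-digit segmentations instead
    of frequency-dependent chunks.
    """
    pieces = []
    # The capturing group keeps digit runs in the split output.
    for chunk in re.split(r"(\d+)", text):
        if chunk.isdigit():
            pieces.extend(chunk)   # one pre-token per digit character
        elif chunk:                # skip empty strings from re.split
            pieces.append(chunk)
    return pieces

print(pretokenize("year 2024!"))  # ['year ', '2', '0', '2', '4', '!']
```

Because the rule is purely a boundary constraint on the input, it requires no retraining of the merge table, which is what makes this phase non-invasive.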

Target model families for experiments: Qwen3.5 and OLMo v2, chosen because their pretraining distributions are known.

Repository Structure

docs/
  research-plan.md       — phased roadmap with open questions
  related-work/
    index.md             — annotated bibliography
    *.md                 — individual paper notes

Quick Links
