Status: Alpha • Focused on identifier (variable/function/class) renaming with ML-assisted suggestions.
Designed to grow into a full refactoring toolkit for Java & Python inside your IDE.
RefineID helps you spot unclear names and rename them safely without leaving VS Code. It shows inline CodeLens actions with suggestions, lets you accept, reject, or hide them, and can (optionally) use a cloud or local LLM.
- Inline suggestions (CodeLens): See rename suggestions above each identifier definition.
- Explore alternatives: `See more` reveals up to three additional candidates so you can choose the best fit.
- One-click refactor: `Confirm` applies a safe rename across all occurrences in the file.
- Respect developer intent: `Reject` permanently suppresses suggestions for that exact symbol.
- Keep it tidy: `Hide` hides the CodeLens for the current session; `Undo` reverts the last applied change.
- Local-first suggestions: By default, suggestions come from a local model via your backend.
- Ollama / Cloud support: You can run with a local Ollama model or your own cloud LLM key.
- Skip-rule gating (backend): Filters out identifiers that should not be renamed (framework or generated code, etc.).
- Global commands: Reset ignored identifiers, reset hidden CodeLens & show all suggestions.
See the full skip rules in /docs/backend-skip-rules.md.
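As a rough sketch of how skip-rule gating can work (the rule names and shapes below are illustrative, not the backend's actual implementation — see the doc above for the real rules):

```python
import re

# Hypothetical skip rules: each returns True if the identifier should NOT be renamed.
SKIP_RULES = [
    lambda name, ctx: ctx.get("generated", False),                      # generated code
    lambda name, ctx: name.startswith("__") and name.endswith("__"),    # dunder names
    lambda name, ctx: re.fullmatch(r"[A-Z0-9_]+", name) is not None,    # constants
    lambda name, ctx: len(name) <= 1 and ctx.get("kind") == "loop_var", # loop counters
]

def should_skip(name: str, ctx: dict) -> bool:
    """Return True if any skip rule matches, i.e. no suggestion is shown."""
    return any(rule(name, ctx) for rule in SKIP_RULES)

print(should_skip("__init__", {}))  # True: dunder name, never suggested
print(should_skip("tmp_val", {}))   # False: ordinary identifier, eligible
```

The point of gating on the backend is that filtered identifiers never produce CodeLens noise in the editor at all.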
- Watch: https://www.youtube.com/watch?v=Fn8IY5FQWtw
- Shows: switching modes, getting suggestions, user actions...
The following diagram illustrates how RefineID determines whether an identifier is Good or Bad, and when suggestions are shown to the user:
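In code form, the same decision can be sketched roughly as follows (the threshold, function names, and scoring are illustrative placeholders, not RefineID's real model):

```python
def suggest_names(name: str) -> list:
    """Placeholder for the ML model call; returns candidate renames."""
    return [f"{name}_renamed"]

def classify_identifier(name: str, quality_score: float, threshold: float = 0.5):
    """Identifiers scoring below the threshold are 'Bad' and get suggestions;
    'Good' identifiers show no CodeLens at all."""
    if quality_score >= threshold:
        return "Good", []
    # One main suggestion plus up to three alternatives via "See more"
    return "Bad", suggest_names(name)[:4]
```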
- Requirements
- Docker Desktop (or Docker Engine) and Docker Compose
- VS Code
- Prepare environment
- PowerShell:
  ```powershell
  Copy-Item coderefine-backend\.env.example coderefine-backend\.env
  ```

- Bash:

  ```bash
  cp coderefine-backend/.env.example coderefine-backend/.env
  ```

- Build and start

  ```bash
  docker compose up -d --build
  ```

- Health check

  ```bash
  curl -fsS http://127.0.0.1:8000/health
  ```

- Requirements
- Python 3.11, pip
- Create a virtualenv and install deps
- Windows (PowerShell)
  ```powershell
  cd coderefine-backend
  python -m venv .venv
  . .venv\Scripts\activate
  pip install --upgrade pip
  pip install --index-url https://download.pytorch.org/whl/cpu torch==2.7.1
  pip install -r requirements.ml.txt
  ```

- macOS/Linux (Bash)

  ```bash
  cd coderefine-backend
  python -m venv .venv && . .venv/bin/activate
  pip install --upgrade pip
  pip install --index-url https://download.pytorch.org/whl/cpu torch==2.7.1
  pip install -r requirements.ml.txt
  ```

- Point the backend to the model folder
- Set MODEL_DIR to the project path `coderefine-backend/app/ml/checkpoints/EIR_v1` so files land inside your repo:
- PowerShell:

  ```powershell
  $env:MODEL_DIR = "$PWD\coderefine-backend\app\ml\checkpoints\EIR_v1"
  ```
- Bash:
  ```bash
  export MODEL_DIR="$(pwd)/coderefine-backend/app/ml/checkpoints/EIR_v1"
  ```
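Either way, you can sanity-check the variable before launching the server. A quick illustrative check (not part of the project):

```python
import os
from pathlib import Path

def check_model_dir() -> Path:
    """Fail fast if MODEL_DIR is unset or points at a missing folder."""
    model_dir = os.environ.get("MODEL_DIR")
    if not model_dir:
        raise RuntimeError("MODEL_DIR is not set")
    path = Path(model_dir)
    if not path.is_dir():
        raise RuntimeError(f"MODEL_DIR does not exist: {path}")
    return path
```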
- (Optional) Pre-download the model from Hugging Face
- Using the CLI (requires huggingface_hub):
- PowerShell:
  ```powershell
  huggingface-cli download eyaJELJLI/EIR_v1 --local-dir "$env:MODEL_DIR" --local-dir-use-symlinks False
  ```
- Bash:
  ```bash
  huggingface-cli download eyaJELJLI/EIR_v1 --local-dir "$MODEL_DIR" --local-dir-use-symlinks False
  ```

- Or a Python one-liner:

  ```bash
  python -c "import os; from huggingface_hub import snapshot_download; os.makedirs(os.environ['MODEL_DIR'], exist_ok=True); snapshot_download('eyaJELJLI/EIR_v1', local_dir=os.environ['MODEL_DIR'], local_dir_use_symlinks=False)"
  ```
- Run the API (Uvicorn)
- From `coderefine-backend` with the venv active:

  ```bash
  uvicorn app.main:app --host 0.0.0.0 --port 8000
  ```
- Health check
  ```bash
  curl -fsS http://127.0.0.1:8000/health
  ```

- Option 1 - Marketplace (recommended): In VS Code, open Extensions, search for RefineID, or use the marketplace link (e.g., https://marketplace.visualstudio.com/items?itemName=refineid-assistant.refineid) and install.
- Option 2 - VSIX from this repo: In VS Code, run `Extensions: Install from VSIX` and select `dist/refineid-0.0.3.vsix`.
- Option 3 - From source (development):

  ```bash
  cd coderefine
  npm ci
  npm run compile
  ```

  Press F5 in VS Code to launch an Extension Development Host.
Open Settings -> Extensions -> RefineID and fill in the following:
| Setting | Example value | Description |
|---|---|---|
| `refineid.mode` | `local` / `ollama` / `cloud` | Backend provider |
| `refineid.backendUrl` | `http://localhost:8000` | FastAPI server |
| `refineid.ollama.apiBase` | `http://localhost:11434/v1` | Ollama endpoint |
| `refineid.ollama.model` | `llama3:latest` | Model from `ollama list` |
| `refineid.cloud.apiBase` | `https://api.openai.com/v1` | For cloud mode |
| `refineid.cloud.model` | `gpt-4o-mini` | Cloud model name |
| `refineid.cloud.apiKey` | (your key) | Stored securely |
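For reference, the same settings can be written directly into `settings.json` (the values here are the example values from the table, shown for Ollama mode):

```json
{
  "refineid.mode": "ollama",
  "refineid.backendUrl": "http://localhost:8000",
  "refineid.ollama.apiBase": "http://localhost:11434/v1",
  "refineid.ollama.model": "llama3:latest"
}
```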
Note (Ollama URL)
- If the backend runs in Docker, set `refineid.ollama.apiBase` to `http://host.docker.internal:11434/v1`.
- If the backend runs without Docker, use `http://localhost:11434/v1`.
If using Ollama, on the host:
```bash
ollama serve &
ollama pull qwen2.5-coder:7b-instruct
# optional:
# ollama pull <model that you pulled>
```

- Open a Python or Java file.
- Save once to trigger analysis.
- Look for the RefineID gutter icon.
- Hover an identifier and choose Confirm, Reject, See more, Hide, or Undo.
| Command | Description |
|---|---|
| RefineID: Reset Ignored Identifiers | Clear the rejected list |
| RefineID: Reset Hidden CodeLens | Show hidden CodeLens again |
- Do I need a Hugging Face token?
- No, not for the public model repo used by Local mode.
- Can I use my own model?
- The default local model is downloaded automatically (via Docker) into the project volume. If you prefer your own LLM, drop its files into the model checkpoint folder used by the backend (e.g., `coderefine-backend/app/ml/checkpoints/<your_model>`), point the backend to it, and restart.
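If the backend expects a Hugging Face-style checkpoint layout (an assumption — check the backend's model loader for the actual requirements), you can verify the folder before restarting:

```python
from pathlib import Path

# Minimal file check: a Hugging Face-style checkpoint normally carries at
# least a config.json. The exact required set depends on the backend's
# loader, so this list is an illustrative assumption.
EXPECTED = {"config.json"}

def missing_files(checkpoint_dir: str) -> set:
    """Return expected checkpoint files that are absent from the folder."""
    present = {p.name for p in Path(checkpoint_dir).iterdir()}
    return EXPECTED - present
```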
