
SMART-Dal/RefineID

RefineID: VS Code Refactoring Assistant (Java & Python)

Status: Alpha • Focused on identifier (variable/function/class) renaming with ML-assisted suggestions.
Designed to grow into a full refactoring toolkit for Java & Python inside your IDE.

RefineID helps you spot unclear names and rename them safely without leaving VS Code. It shows inline CodeLens actions with suggestions, lets you accept, reject, or hide them, and can optionally use a cloud or local LLM.




Features

  • Inline suggestions (CodeLens): See rename suggestions above each identifier definition.
  • Explore alternatives: See more reveals up to three additional candidates so you can choose the best fit.
  • One-click refactor: Confirm applies a safe rename across occurrences in the file.
  • Respect developer intent: Reject permanently suppresses suggestions for that exact symbol.
  • Keep it tidy: Hide hides the CodeLens for this session; Undo reverts the last applied change.
  • Local-first suggestions: Default path returns suggestions from a local model via your backend.
  • Ollama / Cloud support: You can run with a local Ollama model or your own cloud LLM key.
  • Skip-rule gating (backend): Filters out identifiers that should not be renamed (framework or generated code, etc.).
  • Global commands: Reset ignored identifiers and reset hidden CodeLens to show all suggestions again.

See the full skip rules in /docs/backend-skip-rules.md.
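To make the gating concrete, here is a minimal Python sketch of the kind of predicate list the backend might apply. The rule names and patterns below are illustrative assumptions, not the shipped rule set; see /docs/backend-skip-rules.md for the real rules.

```python
import re

# Hypothetical skip rules, illustrating the kind of gating the backend applies.
# These names and patterns are examples only, not the actual shipped rule set.
SKIP_RULES = [
    ("dunder", lambda name: name.startswith("__") and name.endswith("__")),
    ("single-letter name", lambda name: len(name) == 1),
    ("constant", lambda name: re.fullmatch(r"[A-Z][A-Z0-9_]*", name) is not None),
    ("generated-code marker", lambda name: name.startswith("_gen_")),
]

def should_skip(identifier: str) -> bool:
    """Return True if any skip rule matches, i.e. no suggestion is shown."""
    return any(rule(identifier) for _, rule in SKIP_RULES)
```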

Demo video


Identifier Decision Pipeline

The following diagram illustrates how RefineID determines whether an identifier is Good or Bad, and when suggestions are shown to the user:

(Diagram: Identifier Decision Pipeline)
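The stages can be sketched as a tiny decision function: skip rules run first, then a Good/Bad quality score decides whether suggestions surface. The `classify` score below is a hypothetical stand-in for the real EIR_v1 model, and the single skip check stands in for the full rule set.

```python
def classify(identifier: str) -> float:
    """Placeholder quality score (0 = bad name, 1 = good name).

    A real implementation would call the EIR_v1 model via the backend.
    """
    return 1.0 if "_" in identifier or len(identifier) > 3 else 0.0

def decide(identifier: str, threshold: float = 0.5) -> str:
    """Hypothetical pipeline: skip rules first, then the Good/Bad classifier."""
    if identifier.startswith("__"):   # stand-in for the full skip-rule set
        return "skipped"
    if classify(identifier) >= threshold:
        return "good"                 # no CodeLens shown
    return "bad"                      # suggestions surfaced to the user
```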


Quick Start

Option A - Docker Compose (recommended)

  1. Requirements
  • Docker Desktop (or Docker Engine) and Docker Compose
  • VS Code
  2. Prepare environment
  • PowerShell:
    Copy-Item coderefine-backend\.env.example coderefine-backend\.env
  • Bash:
    cp coderefine-backend/.env.example coderefine-backend/.env
  3. Build and start
    docker compose up -d --build
  4. Health check
    curl -fsS http://127.0.0.1:8000/health

Option B - No Docker (Uvicorn)

  1. Requirements
  • Python 3.11, pip
  2. Create a virtualenv and install deps
  • Windows (PowerShell)
cd coderefine-backend
python -m venv .venv
. .venv\Scripts\Activate.ps1
pip install --upgrade pip
pip install --index-url https://download.pytorch.org/whl/cpu torch==2.7.1
pip install -r requirements.ml.txt
  • macOS/Linux (Bash)
cd coderefine-backend
python -m venv .venv && . .venv/bin/activate
pip install --upgrade pip
pip install --index-url https://download.pytorch.org/whl/cpu torch==2.7.1
pip install -r requirements.ml.txt
  3. Point the backend to the model folder
  • Set MODEL_DIR to the project path coderefine-backend/app/ml/checkpoints/EIR_v1 so files land inside your repo:
    • PowerShell:
    $env:MODEL_DIR = "$PWD\coderefine-backend\app\ml\checkpoints\EIR_v1"
    • Bash:
    export MODEL_DIR="$(pwd)/coderefine-backend/app/ml/checkpoints/EIR_v1"
  4. (Optional) Pre-download the model from Hugging Face
  • Using the CLI (requires huggingface_hub):
    • PowerShell:
    huggingface-cli download eyaJELJLI/EIR_v1 --local-dir "$env:MODEL_DIR" --local-dir-use-symlinks False
    • Bash:
    huggingface-cli download eyaJELJLI/EIR_v1 --local-dir "$MODEL_DIR" --local-dir-use-symlinks False
  • Or as a Python one-liner:
    python -c "import os; from huggingface_hub import snapshot_download; os.makedirs(os.environ['MODEL_DIR'], exist_ok=True); snapshot_download('eyaJELJLI/EIR_v1', local_dir=os.environ['MODEL_DIR'], local_dir_use_symlinks=False)"
  5. Run the API (Uvicorn)
  • From coderefine-backend with the venv active:
    uvicorn app.main:app --host 0.0.0.0 --port 8000
  6. Health check
    curl -fsS http://127.0.0.1:8000/health
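Once the server is up, you can poll /health programmatically instead of calling curl by hand. This is a small standalone helper, not part of RefineID; the `fetch` parameter is injectable purely so the retry logic can be exercised without a running server.

```python
import time
import urllib.request

def wait_for_health(url: str = "http://127.0.0.1:8000/health",
                    attempts: int = 10, delay: float = 1.0,
                    fetch=None) -> bool:
    """Poll the backend health endpoint until it answers 200, or give up."""
    if fetch is None:
        def fetch(u):
            # Real HTTP GET; returns the response status code.
            with urllib.request.urlopen(u, timeout=2) as resp:
                return resp.status
    for _ in range(attempts):
        try:
            if fetch(url) == 200:
                return True
        except OSError:
            pass  # backend not up yet; retry after a short delay
        time.sleep(delay)
    return False
```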

Install the VS Code Extension

  • Option 1 - Marketplace (recommended): In VS Code, open Extensions and search for RefineID, or use the marketplace link (e.g., https://marketplace.visualstudio.com/items?itemName=refineid-assistant.refineid), and install.
  • Option 2 - VSIX from this repo: In VS Code, run Extensions: Install from VSIX and select dist/refineid-0.0.3.vsix.
  • Option 3 - From source (development):
    cd coderefine
    npm ci
    npm run compile
    • Press F5 in VS Code to launch an Extension Development Host

Configure the VS Code Extension

Open Settings -> Extensions -> RefineID and fill in the following:

| Setting | Example value | Description |
| --- | --- | --- |
| refineid.mode | local / ollama / cloud | Backend provider |
| refineid.backendUrl | http://localhost:8000 | FastAPI server |
| refineid.ollama.apiBase | http://localhost:11434/v1 | Ollama endpoint |
| refineid.ollama.model | llama3:latest | Model from `ollama list` |
| refineid.cloud.apiBase | https://api.openai.com/v1 | For cloud mode |
| refineid.cloud.model | gpt-4o-mini | Cloud model name |
| refineid.cloud.apiKey | (your key) | Stored securely |

Note (Ollama URL)

  • If the backend runs in Docker, set refineid.ollama.apiBase to http://host.docker.internal:11434/v1.
  • If the backend runs without Docker, use http://localhost:11434/v1.

If using Ollama, start it on the host and pull a model:

ollama serve &
ollama pull qwen2.5-coder:7b-instruct
# optional: pull any other model you plan to select in settings
# ollama pull llama3:latest
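The Docker note above boils down to a one-line decision, sketched here as a hypothetical helper (not part of the extension): inside a container, `localhost` refers to the container itself, so the special DNS name `host.docker.internal` is needed to reach Ollama on the host.

```python
def ollama_api_base(backend_in_docker: bool) -> str:
    """Pick the Ollama endpoint the backend can actually reach."""
    host = "host.docker.internal" if backend_in_docker else "localhost"
    return f"http://{host}:11434/v1"
```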

Use It

  1. Open a Python or Java file.
  2. Save once to trigger analysis.
  3. Look for the RefineID gutter icon.
  4. Hover an identifier and choose Confirm, Reject, See more, Hide, or Undo.

Commands (Command Palette)

| Command | Description |
| --- | --- |
| RefineID: Reset Ignored Identifiers | Clear the rejected list |
| RefineID: Reset Hidden CodeLens | Show hidden CodeLens again |

FAQ

  • Do I need a Hugging Face token?
    • No, not for the public model repo used by Local mode.
  • Can I use my own model?
    • The default local model is downloaded automatically (via Docker) into the project volume. To use your own LLM, drop its files into the model checkpoint folder the backend reads (e.g., coderefine-backend/app/ml/checkpoints/<your_model>), point the backend to it, and restart.
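Before restarting the backend against a custom checkpoint, a quick sanity check that the folder exists and contains the expected files can save a confusing startup failure. The required file list below is an assumption (config.json is typical for Hugging Face checkpoints); adjust it for your model.

```python
from pathlib import Path

# Assumed minimal file set; extend with your model's weight files as needed.
REQUIRED = ("config.json",)

def checkpoint_ready(model_dir: str) -> bool:
    """Return True if the checkpoint folder exists and has the expected files."""
    root = Path(model_dir)
    return root.is_dir() and all((root / f).is_file() for f in REQUIRED)
```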

About

Repository for MITACS project - User-centric LLM-powered software productivity tools
