πŸ† LeetCode Contest Rating Predictor

Python FastAPI React TensorFlow License

πŸš€ Advanced AI-powered prediction system for LeetCode contest ratings using deep learning

Predict your LeetCode contest rating changes with high accuracy using our sophisticated LSTM neural network model trained on thousands of contest data points.

✨ Features

  • 🧠 Deep Learning Model: LSTM neural network optimized for time-series rating prediction
  • πŸ“Š Real-time Data: Automated fetching from LeetCode's GraphQL API
  • 🌐 Modern Web Interface: React-based frontend with intuitive design
  • ⚑ Fast API Backend: High-performance FastAPI server with async operations
  • πŸ“ˆ Accurate Predictions: Trained on extensive historical contest data
  • πŸ”„ Batch Processing: Predict multiple contests simultaneously
  • πŸ“± Responsive Design: Works seamlessly on desktop and mobile

πŸš€ Quick Start

Option 1: Automated Setup (Recommended)

Windows:

.\setup.bat

Linux/Mac:

bash setup.sh

Option 2: Manual Setup

  1. Clone the repository

    git clone https://github.com/Sagargupta16/LeetCode_Rating_Predictor.git
    cd LeetCode_Rating_Predictor
  2. Set up Python environment

    python -m venv venv
    
    # Windows
    venv\Scripts\activate
    
    # Linux/Mac
    source venv/bin/activate
  3. Install dependencies

    pip install -r requirements.txt
  4. Start the server

    python main.py
  5. Access the application

    Open http://localhost:8000 in your browser.

πŸ—οΈ Architecture

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚   React UI      β”‚    β”‚   FastAPI       β”‚    β”‚   ML Model      β”‚
β”‚   (Frontend)    │◄──►│   (Backend)     │◄──►│   (LSTM)        β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                              β”‚
                              β–Ό
                       β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                       β”‚   LeetCode      β”‚
                       β”‚   GraphQL API   β”‚
                       β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
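
The three boxes above correspond to a React build served alongside a small FastAPI app that wraps the LSTM model. Below is a minimal, illustrative FastAPI skeleton of that surface; the route paths and request/response shapes follow the API Usage section later in this README, but the handler bodies are placeholders rather than the project's actual main.py.

from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ContestEntry(BaseModel):
    name: str  # e.g. "weekly-contest-377"
    rank: int

class PredictRequest(BaseModel):
    username: str
    contests: List[ContestEntry]

@app.get("/api/contestData")
def contest_data():
    # The real backend refreshes this list from LeetCode's GraphQL API.
    return {"contests": ["weekly-contest-377", "biweekly-contest-120"]}

@app.post("/api/predict")
def predict(request: PredictRequest):
    # Placeholder: the real handler fetches the user's contest history,
    # scales the features, and runs the LSTM model for each contest.
    return [{"contest_name": c.name, "rank": c.rank, "prediction": 0.0}
            for c in request.contests]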

πŸ“ Project Structure

LeetCode_Rating_Predictor/
β”œβ”€β”€ πŸ“± client/                 # React frontend application
β”‚   β”œβ”€β”€ public/               # Static assets
β”‚   β”œβ”€β”€ src/                  # React source code
β”‚   └── build/                # Production build
β”œβ”€β”€ 🧠 LC_Contest_Rating_Predictor.ipynb  # Model training notebook
β”œβ”€β”€ πŸš€ main.py                # FastAPI backend server
β”œβ”€β”€ πŸ”§ check.py               # Utility scripts
β”œβ”€β”€ πŸ“Š model.keras            # Trained LSTM model
β”œβ”€β”€ βš™οΈ scaler.save            # Data preprocessing scaler
β”œβ”€β”€ πŸ“‹ requirements.txt       # Python dependencies
β”œβ”€β”€ πŸ“ data.json              # Training data
β”œβ”€β”€ πŸ‘₯ usernames.json         # User data cache
└── πŸ“– README.md              # Project documentation

πŸ› οΈ API Usage

Predict Rating Changes

Endpoint: POST /api/predict

Request Body:

{
  "username": "your_leetcode_username",
  "contests": [
    {
      "name": "weekly-contest-377",
      "rank": 1500
    },
    {
      "name": "biweekly-contest-120",
      "rank": 2000
    }
  ]
}

Response:

[
  {
    "contest_name": "weekly-contest-377",
    "prediction": 25.5,
    "rating_before_contest": 1800,
    "rank": 1500,
    "total_participants": 8000,
    "rating_after_contest": 1825.5,
    "attended_contests_count": 45
  }
]

Get Latest Contests

Endpoint: GET /api/contestData

Response:

{
  "contests": ["weekly-contest-377", "biweekly-contest-120"]
}
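
Both endpoints can be exercised from Python with the requests library. A minimal client sketch, assuming the backend is running locally on port 8000 as in the Quick Start:

import requests

BASE_URL = "http://localhost:8000"  # adjust if your server runs elsewhere

# 1) Ask the server which contests it currently knows about
contests = requests.get(f"{BASE_URL}/api/contestData", timeout=30).json()["contests"]

# 2) Request a prediction for a (contest, rank) pair
payload = {
    "username": "your_leetcode_username",
    "contests": [{"name": contests[0], "rank": 1500}],
}
predictions = requests.post(f"{BASE_URL}/api/predict", json=payload, timeout=60).json()

for p in predictions:
    print(p["contest_name"], p["rating_before_contest"], "->", p["rating_after_contest"])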

🧠 Machine Learning Model

Model Architecture

  • Type: LSTM (Long Short-Term Memory) Neural Network
  • Input Features:
    • Current rating
    • Contest rank
    • Total participants
    • Rank percentage
    • Attended contests count
  • Output: Predicted rating change
  • Framework: TensorFlow/Keras
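
For orientation, the architecture described above corresponds roughly to a Keras model of the following shape. This is a hedged sketch: the 50-unit layer size is taken from the retraining notes below, the single-time-step input shape and the loss choice are assumptions, and the authoritative definition lives in LC_Contest_Rating_Predictor.ipynb.

import tensorflow as tf

N_FEATURES = 5  # rating, rank, total participants, rank %, attended contests count

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1, N_FEATURES)),  # (timesteps, features); one step is an assumption here
    tf.keras.layers.LSTM(50),               # 50 units, per the retraining section
    tf.keras.layers.Dense(1),               # predicted rating change
])
model.compile(optimizer="adam", loss="mae")
model.summary()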

Training Process

  1. Data Collection: Automated fetching from LeetCode API
  2. Preprocessing: MinMaxScaler normalization
  3. Model Training: LSTM with optimized hyperparameters
  4. Validation: Cross-validation on historical data
  5. Deployment: Serialized model ready for production

Performance Metrics

  • Accuracy: 85%+ on test data
  • Mean Absolute Error: < 15 rating points
  • Training Data: 10,000+ contest records

πŸ”„ Updating Training Data

Keep your model fresh with the latest LeetCode data!

Quick Update

# Fetch latest contest data from LeetCode
python update_data_simple.py
# When prompted, enter number of users (e.g., 5000)

What happens:

  • Loads existing usernames from usernames.json (43,158 users)
  • Fetches latest contest history via GraphQL API (see the sketch below)
  • Updates data.json with fresh training records
  • Multi-threaded processing (~10-15 users/second)
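
The GraphQL fetch mentioned above talks to LeetCode's unofficial API at https://leetcode.com/graphql. Here is a hedged sketch of the kind of request involved; the query and field names follow common community usage and are assumptions here, so they may differ from what update_data_simple.py actually sends.

import requests

QUERY = """
query userContestRankingHistory($username: String!) {
  userContestRankingHistory(username: $username) {
    attended
    rating
    ranking
    contest { title startTime }
  }
}
"""

resp = requests.post(
    "https://leetcode.com/graphql",
    json={"query": QUERY, "variables": {"username": "your_leetcode_username"}},
    headers={"Content-Type": "application/json"},
    timeout=30,
)
history = resp.json()["data"]["userContestRankingHistory"]
print(f"Fetched {len(history)} contest records")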

What Gets Updated

  • data.json: Latest contest history and rating changes (training data)

After Updating Data

  1. Retrain the model using LC_Contest_Rating_Predictor.ipynb
  2. New model.keras and scaler.save will be generated
  3. Restart the API server to use the updated model
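
A minimal sketch of what "using the updated model" amounts to: load the two artifacts and run one prediction. The feature order and the use of joblib for scaler.save are assumptions; the real loading logic lives in main.py.

import joblib
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("model.keras")
scaler = joblib.load("scaler.save")  # assumes the scaler was serialized with joblib

# One record: current rating, rank, total participants, rank %, attended contests
features = np.array([[1800, 1500, 8000, 1500 / 8000, 45]], dtype=float)
x = scaler.transform(features).reshape(1, 1, -1)  # (samples, timesteps, features)

predicted_change = float(model.predict(x)[0][0])
print(f"Predicted rating change: {predicted_change:+.1f}")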

πŸ”„ Model Retraining

Quick Retraining Steps:

# 1. Install ML dependencies (first time only)
pip install -r requirements-ml.txt
pip install jupyter

# 2. Open the training notebook
jupyter notebook LC_Contest_Rating_Predictor.ipynb

# 3. Run all cells (Cell β†’ Run All)
# ⏱️ Wait 5-15 minutes for training to complete

# 4. Restart the API server
# Press Ctrl+C in the terminal running the server, then:
uvicorn main:app --reload

What happens during retraining:

  • Loads data.json (your updated training data)
  • Preprocesses and normalizes features with MinMaxScaler
  • Trains LSTM neural network (50 units, ~100 epochs with early stopping)
  • Saves model.keras (trained model) and scaler.save (feature scaler)
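
In code, those retraining steps look roughly like the sketch below. The file names, the 50-unit LSTM, and the early-stopped ~100 epochs come from this README; the data.json field names, the loss, and the validation split are assumptions, and the notebook remains the source of truth.

import json

import joblib
import numpy as np
import tensorflow as tf
from sklearn.preprocessing import MinMaxScaler

with open("data.json") as f:
    records = json.load(f)  # assumed schema: one dict per contest result

# Field names below are illustrative, not necessarily the repository's actual schema
X = np.array([[r["rating"], r["rank"], r["total_participants"],
               r["rank_percentage"], r["attended_contests_count"]] for r in records])
y = np.array([r["rating_change"] for r in records])

scaler = MinMaxScaler()
X_scaled = scaler.fit_transform(X).reshape(-1, 1, X.shape[1])  # (samples, timesteps, features)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1, X.shape[1])),
    tf.keras.layers.LSTM(50),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mae")
model.fit(X_scaled, y, epochs=100, validation_split=0.2,
          callbacks=[tf.keras.callbacks.EarlyStopping(patience=5, restore_best_weights=True)])

model.save("model.keras")
joblib.dump(scaler, "scaler.save")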

πŸ“š Complete Retraining Guide: MODEL_RETRAINING_GUIDE.md

The complete guide includes:

  • Detailed cell-by-cell walkthrough
  • Model architecture explanation
  • Performance evaluation metrics
  • Troubleshooting common issues
  • Advanced training options

πŸ”§ Development

Prerequisites

  • Python 3.8+
  • Node.js 14+ (for frontend)
  • Git

Local Development Setup

  1. Backend Development

    # Install development dependencies
    pip install -r requirements.txt
    
    # Run with auto-reload
    uvicorn main:app --reload --host 0.0.0.0 --port 8000
  2. Frontend Development

    cd client
    npm install
    npm start  # Runs on http://localhost:3000
  3. Model Training

    # Open Jupyter notebook
    jupyter notebook LC_Contest_Rating_Predictor.ipynb

Testing

# Backend tests (if available)
python -m pytest tests/

Developer notes

  • Frontend API base URL: set REACT_APP_API_BASE_URL in client/.env or your system env to point the React app to the backend (default: http://localhost:8000).
  • To run backend tests locally:

    python -m pytest -q

If you run into missing model files during local development, either download the model artifacts to ./model.keras and ./scaler.save or run tests which mock these artifacts.

Advanced developer notes

  • Downloading model artifacts:

    • The repository includes download_model.py and models/manifest.json (placeholder). To download artifacts locally:

      python download_model.py
    • You can override the URLs with environment variables (PowerShell syntax shown; on Linux/Mac use export instead):

      $env:MODEL_URL = 'https://.../model.keras'
      $env:SCALER_URL = 'https://.../scaler.save'
      python download_model.py
    • The script also supports a GitHub shorthand of the form:

      gh:owner/repo/releases/tag/<tag>/<asset_name>

      Example (requires GITHUB_TOKEN if the repo is private):

      python download_model.py
      # or
      $env:MODEL_URL = 'gh:owner/repo/releases/tag/v1/model.keras'
      $env:SCALER_URL = 'gh:owner/repo/releases/tag/v1/scaler.save'
      python download_model.py
  • Docker build with ML dependencies (optional):

    The Dockerfile accepts a build-arg INSTALL_ML. By default heavy ML deps are NOT installed. To include them:

    docker build --build-arg INSTALL_ML=1 -t myimage:latest .
  • Redis cache (optional):

    • The backend uses an in-memory TTL cache by default. To use Redis in production, set REDIS_URL in the environment (e.g., redis://user:pass@host:6379/0). The app will automatically use Redis when REDIS_URL is present.
  • Integration CI job (manual):

    • A manual integration job is available in the GitHub Actions CI workflow. Trigger it from the Actions UI (workflow_dispatch). It will install ML dependencies (requirements-ml.txt), attempt to download model artifacts via download_model.py, and run integration tests.
  • Pre-commit hooks:

    • Install dev tools and enable hooks:

      pip install -r requirements-dev.txt
      pre-commit install

  • Docker Compose (local Redis):

    To run the backend locally with Redis for caching, use docker-compose:

    docker compose up --build
    # then open http://localhost:8000

    This will run Redis (available at redis://localhost:6379) and the backend connected to it via REDIS_URL.

Frontend tests

cd client && npm test


🌐 Deployment

Production Deployment

  1. Build React frontend

    cd client
    npm run build
    cd ..
  2. Run production server

    uvicorn main:app --host 0.0.0.0 --port 8000

Docker Deployment (Optional)

# Example Dockerfile structure
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]

🀝 Contributing

We welcome contributions! Please see CONTRIBUTING.md for guidelines.

Quick Contribution Steps

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/amazing-feature
  3. Commit changes: git commit -m 'Add amazing feature'
  4. Push to branch: git push origin feature/amazing-feature
  5. Open a Pull Request

πŸ“ License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ™ Acknowledgments

  • LeetCode for providing the contest data API
  • TensorFlow team for the excellent ML framework
  • FastAPI for the high-performance web framework
  • React community for the frontend tools

πŸ“ž Support

πŸ“ˆ Future Roadmap

  • Add user authentication
  • Implement rating history tracking
  • Support for more contest platforms
  • Mobile app development
  • Real-time rating updates
  • Advanced analytics dashboard

⭐ Star this repository if you find it helpful!

Made with ❀️ by Sagar Gupta
