Glitch-O-Meter

A competitive coding evaluation platform that automatically audits student code through syntax checks and semantic analysis, powered by a local AI model.

Quick Start

Prerequisites

  • Python: 3.9+ (tested with Python 3.12.1)
  • Node.js: 16+ (for frontend, setup in Sprint 4)
  • Git: For repository cloning

Backend Setup

  1. Create and activate virtual environment:

    cd backend
    python3 -m venv .venv
    source .venv/bin/activate  # On Windows: .venv\Scripts\activate
  2. Install dependencies:

    pip install -r requirements.txt
  3. Start the Flask server:

    python run.py

    Server will start on http://localhost:5000

  4. Verify server is running:

    curl http://localhost:5000/health
    # Expected response: {"status": "ok"}

Testing the Webhook

Send a test GitHub webhook to validate the integration:

curl -X POST http://localhost:5000/webhook/github \
  -H "Content-Type: application/json" \
  -d '{
    "repository": {"name": "test-repo", "full_name": "owner/test-repo"},
    "ref": "refs/heads/main",
    "commits": [{"id": "abc123", "message": "Test commit"}]
  }'

Expected response:

{
  "status": "received",
  "repository": "test-repo",
  "commits_count": 1
}
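The webhook behavior above (and the empty-payload 400 described under Troubleshooting) can be sketched as a plain payload-handling function. This is an illustrative sketch only; the function name and return shape are assumptions, not the actual `webhook_handler.py` API:

```python
# Hypothetical sketch of the webhook payload logic; handle_github_webhook
# is an illustrative name, not the project's actual handler.

def handle_github_webhook(payload: dict) -> tuple[dict, int]:
    """Validate a GitHub push payload and return (body, http_status)."""
    if not payload:
        # Matches the troubleshooting note: empty JSON yields a 400.
        return {"error": "Empty payload"}, 400

    repo = payload.get("repository", {})
    commits = payload.get("commits", [])
    return {
        "status": "received",
        "repository": repo.get("name"),
        "commits_count": len(commits),
    }, 200
```

A Flask route would then only need to parse the request JSON and pass it to this function.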

Project Structure

backend/
├── app/
│   ├── __init__.py          # Flask app factory
│   ├── api/
│   │   └── webhook_handler.py  # GitHub webhook endpoint
│   ├── core/                # Audit orchestration (Sprint 2-3)
│   │   ├── auditor.py
│   │   ├── syntax_auditor.py
│   │   └── semantic_auditor.py
│   ├── database/            # Database operations (Sprint 4)
│   │   └── mongo_client.py
│   └── utils/               # Helper utilities
├── sandbox/                 # Cloned student repos (auto-generated)
├── hidden_test_suites/      # Secret test files
├── config.py                # Configuration management
├── run.py                   # Entry point
└── requirements.txt         # Python dependencies

frontend/
├── public/                  # Static assets
├── src/                     # React components
└── package.json             # Node dependencies (Sprint 4)

Configuration

Backend Environment Variables

Create a .env file in the backend/ directory to customize settings:

# Flask settings
FLASK_ENV=development
FLASK_HOST=localhost
FLASK_PORT=5000
SECRET_KEY=your-secret-key

# Database (Sprint 4)
MONGO_URI=mongodb://localhost:27017/glitch_o_meter

# Ollama AI (Sprint 3)
OLLAMA_API_ENDPOINT=http://localhost:11434/api/generate
OLLAMA_MODEL=qwen2.5-coder:7b

# Execution
TEST_TIMEOUT=10

All settings have sensible defaults; .env is optional for development.
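For reference, the defaults-with-overrides pattern in config.py might look like the following sketch. The attribute names mirror the .env keys above; the class itself and its default values are assumptions:

```python
# Illustrative sketch of defaults-with-overrides config loading;
# the Config class is an assumption, not the project's actual config.py.
import os


class Config:
    FLASK_ENV = os.environ.get("FLASK_ENV", "development")
    FLASK_HOST = os.environ.get("FLASK_HOST", "localhost")
    FLASK_PORT = int(os.environ.get("FLASK_PORT", "5000"))
    SECRET_KEY = os.environ.get("SECRET_KEY", "dev-secret-key")
    MONGO_URI = os.environ.get(
        "MONGO_URI", "mongodb://localhost:27017/glitch_o_meter"
    )
    OLLAMA_API_ENDPOINT = os.environ.get(
        "OLLAMA_API_ENDPOINT", "http://localhost:11434/api/generate"
    )
    OLLAMA_MODEL = os.environ.get("OLLAMA_MODEL", "qwen2.5-coder:7b")
    TEST_TIMEOUT = int(os.environ.get("TEST_TIMEOUT", "10"))
```

With this pattern, every setting works out of the box and any .env entry simply shadows the default.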

Development Workflow

Running the Server with Auto-Reload

The Flask development server includes auto-reload by default. Modify any Python file and the server will automatically restart.

cd backend && source .venv/bin/activate
python run.py

Code Quality

PEP 8 Compliance Check

cd backend
.venv/bin/pip install flake8
.venv/bin/flake8 app/ run.py config.py

Full Linting

.venv/bin/flake8 --count --select=E,W,F app/

API Endpoints

Current (Sprint 1)

Method  Endpoint           Description
GET     /health            Health check
POST    /webhook/github    GitHub webhook receiver

Planned Endpoints (Sprints 2-5)

  • POST /api/audit - Submit code for evaluation
  • GET /api/scores - Fetch team leaderboard
  • GET /api/scores/<team_id> - Get team details

Sprint Progress

  • Sprint 1 ✅: Project initialization, Flask setup, webhook listener

    • ✅ Directory structure
    • ✅ Python virtual environment
    • ✅ Flask application factory
    • ✅ GitHub webhook endpoint
    • ✅ Basic configuration system
  • Sprint 2: Syntax auditor (pytest/jest execution)

  • Sprint 3: Semantic auditor (Ollama AI integration)

  • Sprint 4: Scoring algorithm, MongoDB, React frontend

  • Sprint 5: Performance testing and finalization

See docs/todo.md for detailed sprint breakdown.

Troubleshooting

Port 5000 already in use

If port 5000 is in use, specify a different port:

FLASK_PORT=8080 python run.py

Virtual environment not activating

Ensure you're in the backend/ directory and run:

source .venv/bin/activate  # macOS/Linux
.venv\Scripts\activate      # Windows

Webhook endpoint returns 400

Ensure the JSON payload is valid:

curl -X POST http://localhost:5000/webhook/github \
  -H "Content-Type: application/json" \
  -d '{}'  # Empty JSON

Should return: {"error": "Empty payload"} with 400 status.

Requirements & Constraints

  • Sandbox execution: Always use the subprocess module, never eval() or exec()
  • Test timeout: 10 seconds maximum for any test execution
  • Local-only: All services (MongoDB, Ollama) run on localhost
  • GPU support: Designed to run offline on a single machine with a GPU (RTX 3060)
  • Code style: Strict PEP 8 compliance for all Python code
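The sandbox-execution and timeout constraints above can be sketched as a single helper: student tests run in a subprocess with a hard timeout, never via eval() or exec(). The function and parameter names here are illustrative assumptions, not the actual auditor API:

```python
# Sketch of the sandbox constraints: subprocess with a hard timeout.
# run_sandboxed is a hypothetical name, not the project's real API.
import subprocess
import sys
from typing import Optional


def run_sandboxed(args: list, timeout: int = 10) -> "tuple[Optional[int], str]":
    """Run a command with a timeout; return (exit_code, stdout).

    A timeout is reported as exit_code None so the auditor can score
    it as a failure instead of hanging.
    """
    try:
        result = subprocess.run(
            args,
            capture_output=True,
            text=True,
            timeout=timeout,  # enforce the 10-second maximum
        )
        return result.returncode, result.stdout
    except subprocess.TimeoutExpired:
        return None, "test execution timed out"
```

For example, a pytest run could be invoked as `run_sandboxed([sys.executable, "-m", "pytest", "sandbox/repo"], timeout=10)`.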

Technologies

  • Backend: Python 3.12, Flask 3.0, Werkzeug
  • Database: MongoDB (Sprint 4)
  • AI: Ollama with qwen2.5-coder:7b model (Sprint 3)
  • Frontend: React (Sprint 4)
  • Testing: PyTest, Jest, Flake8

License

Internal project - Team use only
