This guide covers deploying CodeScan AI to production using Vercel (frontend) and Render (backend).
Before deploying, ensure you have:
- GitHub repository with CodeScan AI code
- Vercel account (https://vercel.com)
- Render account (https://render.com)
- API keys for:
  - Groq API (https://console.groq.com)
  - Google Gemini (https://makersuite.google.com/app/apikey)
  - Hugging Face (optional, https://huggingface.co/settings/tokens)
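Before starting, you can sanity-check that the required keys are exported in your shell. A minimal sketch (the variable names match the environment settings used later in this guide; `HUGGING_FACE_API_KEY` is optional, so it is excluded from the required set):

```python
import os

def missing_keys(required, env=os.environ):
    """Return the names of required environment variables that are unset or empty."""
    return [name for name in required if not env.get(name)]

REQUIRED = ["GROQ_API_KEY", "GEMINI_API_KEY"]  # HUGGING_FACE_API_KEY is optional

if __name__ == "__main__":
    missing = missing_keys(REQUIRED)
    print("Missing keys:", ", ".join(missing) if missing else "none")
```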
- Visit https://vercel.com/new
- Click "Import Git Repository"
- Select your CodeScan AI GitHub repository
- Vercel will auto-detect it as a Vite project
Build Command:
npm run build
Output Directory:
dist
Root Directory:
frontend
In Vercel Project Settings → Environment Variables, add:
VITE_API_URL=https://api.codescan-ai.com
VITE_SOCKET_URL=https://api.codescan-ai.com
VITE_APP_NAME=CodeScan AI

For Development:
VITE_API_URL=http://localhost:5000
VITE_SOCKET_URL=http://localhost:5000

Vercel will auto-deploy on every push to the main branch. You can also manually trigger deployments from the Vercel dashboard.
Your frontend will be available at:
https://codescan-ai.vercel.app
- Visit https://dashboard.render.com
- Click "New +" → "Web Service"
- Connect your GitHub repository
- Configure the service:
Name: codescan-api
Environment: Python 3.10
Build Command:
pip install -r requirements.txt
Start Command:
gunicorn --worker-class eventlet -w 1 run:app
Root Directory:
backend
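In the start command, gunicorn's `run:app` target means: import the module `run` and serve its `app` attribute. A stand-in sketch of the contract (in CodeScan AI the real `app` would be the Flask application, not this stub):

```python
# run.py — gunicorn resolves "run:app" to the `app` attribute of this module.
# The stub below only illustrates the WSGI callable gunicorn expects; the
# actual project exposes its Flask app under the same name.
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]
```

The `--worker-class eventlet -w 1` flags are the configuration Flask-SocketIO's documentation recommends for serving WebSockets under gunicorn.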
In Render Service Settings → Environment, add:
# Flask Config
FLASK_ENV=production
SECRET_KEY=your-super-secret-key-here
# Database
DATABASE_URL=sqlite:////var/data/codescan.db
# AI Provider Keys
GROQ_API_KEY=your_groq_api_key
GEMINI_API_KEY=your_gemini_api_key
HUGGING_FACE_API_KEY=your_huggingface_token
# Redis (will be added separately)
REDIS_URL=redis://default:password@redis-instance.onrender.com:6379
CELERY_BROKER_URL=redis://default:password@redis-instance.onrender.com:6379/0
CELERY_RESULT_BACKEND=redis://default:password@redis-instance.onrender.com:6379/1
# CORS Settings
FRONTEND_URL=https://codescan-ai.vercel.app
CORS_ORIGINS=https://codescan-ai.vercel.app,http://localhost:5173
# JWT Config
JWT_SECRET_KEY=your-jwt-secret-key
JWT_ACCESS_TOKEN_EXPIRES=900
JWT_REFRESH_TOKEN_EXPIRES=604800

- In Render dashboard, go to Service Settings
- Add a Disk volume:
  - Name: data
  - Mount Path: /var/data
  - Size: 1 GB (or more for production)

This ensures your SQLite database persists across deployments.
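On the Python side, the environment variables above are typically loaded through a config object. A hedged sketch (the class and attribute names are illustrative, not necessarily what CodeScan AI's codebase uses):

```python
import os

class ProductionConfig:
    """Illustrative mapping of the environment variables above onto Flask config."""
    SECRET_KEY = os.environ.get("SECRET_KEY")
    SQLALCHEMY_DATABASE_URI = os.environ.get(
        "DATABASE_URL", "sqlite:////var/data/codescan.db"
    )
    JWT_SECRET_KEY = os.environ.get("JWT_SECRET_KEY")
    # Expiry values are in seconds: 900 s = 15 minutes, 604800 s = 7 days
    JWT_ACCESS_TOKEN_EXPIRES = int(os.environ.get("JWT_ACCESS_TOKEN_EXPIRES", "900"))
    JWT_REFRESH_TOKEN_EXPIRES = int(os.environ.get("JWT_REFRESH_TOKEN_EXPIRES", "604800"))
```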
- In Render dashboard, click "Add-ons"
- Create a Redis instance:
  - Name: codescan-redis
  - Plan: Free (or paid for production)
- Copy the Redis URL to the REDIS_URL environment variable
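Before wiring the copied URL into Celery, you can verify it is well-formed. A small parser sketch using only the standard library (the live `ping()` step needs the `redis` package and a reachable instance, so it is left as a comment):

```python
import os
from urllib.parse import urlparse

def parse_redis_url(url):
    """Split a redis:// URL into the parts Render shows in its dashboard."""
    parts = urlparse(url)
    if parts.scheme not in ("redis", "rediss"):
        raise ValueError(f"not a Redis URL: {url!r}")
    return {
        "host": parts.hostname,
        "port": parts.port or 6379,          # Redis default port
        "db": int(parts.path.lstrip("/") or 0),  # trailing /0, /1, ... selects the db
    }

if __name__ == "__main__":
    url = os.environ.get("REDIS_URL", "redis://default:password@localhost:6379/0")
    info = parse_redis_url(url)
    print(f"Redis at {info['host']}:{info['port']}, db {info['db']}")
    # Optional live check (requires `pip install redis`):
    # import redis; redis.from_url(url).ping()
```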
Render will auto-deploy on every push. Your API will be available at:
https://codescan-api.onrender.com
- In Render, create a new Background Worker
- Configure:
Name: codescan-worker
Build Command:
pip install -r requirements.txt
Start Command:
celery -A celery_worker.celery worker --loglevel=info --concurrency=2
Environment Variables: (same as the Flask API service)
The worker will automatically connect to the Redis instance using the REDIS_URL environment variable.
Already included. No additional setup needed.
For larger deployments, migrate to PostgreSQL:
- In Render, add PostgreSQL Add-on
- Update the DATABASE_URL environment variable
- Run migrations:
flask db upgrade

Both Vercel and Render provide automatic SSL certificates. HTTPS is enabled by default.
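One pitfall when updating DATABASE_URL for PostgreSQL: some hosting providers hand out URLs beginning with `postgres://`, while SQLAlchemy 1.4+ only accepts the `postgresql://` dialect name. A small normalizer sketch:

```python
def normalize_db_url(url):
    """Rewrite the deprecated postgres:// scheme to postgresql:// for SQLAlchemy 1.4+."""
    if url.startswith("postgres://"):
        return "postgresql://" + url[len("postgres://"):]
    return url
```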
To enable email notifications for scan results:
- Set up SendGrid integration in Render
- Add to environment variables:
SENDGRID_API_KEY=your_sendgrid_api_key
SENDGRID_FROM_EMAIL=noreply@codescan-ai.com

Before going to production:
- Set strong SECRET_KEY and JWT_SECRET_KEY
- Enable HTTPS (automatic on Vercel/Render)
- Configure CORS to allow only your frontend domain
- Set up API rate limiting
- Rotate API keys regularly
- Set up monitoring and alerts
- Enable database backups
- Use environment variables for all secrets
- Set up error logging (Sentry/LogRocket)
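Strong values for SECRET_KEY and JWT_SECRET_KEY should be generated, not invented by hand. Python's standard-library secrets module does this in one line:

```python
import secrets

# 64 bytes of cryptographic randomness, URL-safe encoded — suitable for
# SECRET_KEY or JWT_SECRET_KEY. Generate a different value for each.
print(secrets.token_urlsafe(64))
```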
View logs in Render dashboard:
Service → Logs
Integrate Sentry for error tracking:
import sentry_sdk
from sentry_sdk.integrations.flask import FlaskIntegration

sentry_sdk.init(
    dsn="your_sentry_dsn",
    integrations=[FlaskIntegration()],
    traces_sample_rate=0.1,
)

Create .github/workflows/deploy.yml:
name: Deploy to Production

on:
  push:
    branches: [ main ]

jobs:
  deploy-frontend:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Deploy to Vercel
        run: |
          npm install -g vercel
          vercel deploy --prod --token ${{ secrets.VERCEL_TOKEN }}
  deploy-backend:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Deploy to Render
        run: |
          curl https://api.render.com/deploy/srv-${{ secrets.RENDER_API_ID }}

Error: ModuleNotFoundError: No module named 'flask'
Solution:
pip install -r requirements.txt

Error: sqlite3.OperationalError: unable to open database file
Solution: Ensure /var/data volume is mounted and writable
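This SQLite error usually means the directory in the `sqlite:////` path does not exist or is not writable. A quick check sketch (the path is parameterized so the same helper works outside Render):

```python
import os

def check_db_dir(path="/var/data"):
    """Return True if `path` exists (creating it if needed) and is writable."""
    os.makedirs(path, exist_ok=True)
    return os.access(path, os.W_OK)
```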
Error: redis.exceptions.ConnectionError
Solution: Verify Redis URL in environment variables and network connectivity
Error: WebSocket connection fails
Solution:
- Ensure FRONTEND_URL is correctly set
- Check CORS configuration
- Verify WebSocket is enabled in Render (default: enabled)
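When checking the CORS configuration, note that CORS_ORIGINS is stored as a comma-separated string, while Flask-SocketIO's `cors_allowed_origins` parameter takes a list. A small parsing sketch (the environment variable name matches the setting above; the SocketIO call in the comment is the usual wiring, shown as an assumption about the app's setup):

```python
import os

def allowed_origins(raw=None):
    """Split the comma-separated CORS_ORIGINS value into a clean list of origins."""
    raw = raw if raw is not None else os.environ.get("CORS_ORIGINS", "")
    return [origin.strip() for origin in raw.split(",") if origin.strip()]

# Typical wiring (hypothetical, depends on the app's setup):
#   socketio = SocketIO(app, cors_allowed_origins=allowed_origins())
```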
For production load, use Render's auto-scaling:
In Render Service Settings:
- Enable "Auto-Deploy"
- Set instance count to 2-3
- Configure load balancer (included)
Migrate SQLite → PostgreSQL:
# Export data
sqlite3 codescan.db .dump > backup.sql
# Create PostgreSQL instance on Render
# Update DATABASE_URL
# Run migrations
flask db upgrade

| Component | Free Tier | Pro Tier | Notes |
|---|---|---|---|
| Vercel Frontend | 100 GB bandwidth/month | $20/month | Includes SSL, CDN |
| Render API | $7/month | $12+/month | Auto-scales, includes SSL |
| Render Redis | Free | $5+/month | Managed Redis |
| Render PostgreSQL | N/A | $15+/month | For scaling |
| Celery Worker | $7/month | $12+/month | Background jobs |
Total Estimated Cost: $7-40/month depending on scale
- Vercel Support: https://vercel.com/support
- Render Support: https://render.com/docs
- CodeScan AI Issues: https://github.com/pauldev-hub/CodeScan-AI/issues
Both Vercel and Render support zero-downtime deployments:
- New version is deployed to a canary instance
- Health checks pass
- Traffic gradually shifts to new version
- Old version is terminated
If deployment fails:
Vercel: Go to Deployments, select previous version, click "Make Production"
Render: Go to Deploys, select previous deployment, click "Deploy"
After deploying, verify each of the following:
- Frontend loads at custom domain
- API responds to health checks
- Authentication works
- Scans complete successfully
- Results export (PDF, JSON, CSV)
- Real-time chat works
- Error logging is active
- Backups are scheduled
- Monitoring/alerts are set up
- Performance is within SLA