
Commit 827d902

Merge commit: 2 parents 0d2d83c + 984327e


44 files changed: +465,542 / -43,873 lines

Use_Cases/Congestion Prediction/Data preprocessing.ipynb

Lines changed: 1744 additions & 467 deletions

Use_Cases/Congestion Prediction/EVAT_Congestion_with_baselines_models.ipynb

Lines changed: 4097 additions & 258 deletions
Lines changed: 366 additions & 0 deletions

# EV Congestion Prediction API

A FastAPI-based REST API for real-time EV charging station congestion forecasting using a trained RandomForest model.

## Features

- **Real-time Predictions** - Forecast 3-hour arrival counts for charging stations
- **Automatic Feature Engineering** - Fetches and processes external data automatically
- **External Data Integration** - Weather, holidays, events, pedestrian counts
- **Batch Predictions** - Predict for multiple stations in a single request
- **Auto-generated Documentation** - Interactive API docs at `/docs`
- **Health Monitoring** - Health check endpoint for service monitoring

## Installation

### 1. Install Dependencies

```bash
pip install -r requirements_api.txt
```

### 2. Ensure Model File Exists

Place your trained `random_forest_model.pkl` in the same directory as `model_api.py`.

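For reference, a minimal sketch of how `model_api.py` might load the model at startup (an assumption for illustration; it presumes the file was saved with `joblib`, and the actual loading code may differ):

```python
# Hypothetical startup hook that loads the trained RandomForest model.
# Assumes joblib serialization; adjust if plain pickle was used instead.
from pathlib import Path

import joblib
from fastapi import FastAPI

app = FastAPI(title="EV Congestion Prediction API")
MODEL_PATH = Path(__file__).parent / "random_forest_model.pkl"
model = None


@app.on_event("startup")
def load_model() -> None:
    """Load the model once so /health can report model_loaded accurately."""
    global model
    if MODEL_PATH.exists():
        model = joblib.load(MODEL_PATH)
```
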
## Usage

### Start the API Server

```bash
# Development mode (auto-reload)
uvicorn model_api:app --reload --port 8000

# Production mode
uvicorn model_api:app --host 0.0.0.0 --port 8000 --workers 4
```

The API will be available at: `http://localhost:8000`

## API Endpoints

### 1. Health Check

```bash
GET /health
```

**Response:**

```json
{
  "status": "healthy",
  "model_loaded": true,
  "timestamp": "2026-01-07T10:30:00"
}
```

### 2. Model Information

```bash
GET /model/info
```

**Response:**

```json
{
  "model_type": "RandomForestRegressor",
  "n_estimators": 300,
  "max_depth": 15,
  "features_count": 23,
  "features": ["hour", "dayofweek", ...]
}
```

### 3. Single Station Prediction

```bash
POST /predict
Content-Type: application/json

{
  "station_id": "674f97ff3dc8e5d2ac00867a",
  "timestamp": "2026-01-07T14:00:00"
}
```

`timestamp` is optional and defaults to the current time.

**Response:**

```json
{
  "station_id": "674f97ff3dc8e5d2ac00867a",
  "predicted_arrivals": 2.45,
  "timestamp": "2026-01-07T14:00:00",
  "hour": 14,
  "dayofweek": 2,
  "is_weekend": false,
  "is_holiday": false,
  "is_major_event": true,
  "temperature_c": 22.5,
  "precipitation_mm": 0.0,
  "pedestrian_count": 1250.0
}
```

### 4. Batch Prediction

```bash
POST /predict/batch
Content-Type: application/json

{
  "station_ids": [
    "674f97ff3dc8e5d2ac00867a",
    "674f98013dc8e5d2ac00894a",
    "674f97ff3dc8e5d2ac008456"
  ],
  "timestamp": "2026-01-07T14:00:00"
}
```

`timestamp` is optional and defaults to the current time.

**Response:**

```json
{
  "predictions": [
    {
      "station_id": "674f97ff3dc8e5d2ac00867a",
      "predicted_arrivals": 2.45,
      ...
    },
    {
      "station_id": "674f98013dc8e5d2ac00894a",
      "predicted_arrivals": 1.82,
      ...
    }
  ],
  "count": 3,
  "timestamp": "2026-01-07T14:00:00"
}
```

## Interactive API Documentation

FastAPI provides automatic interactive documentation:

- **Swagger UI**: http://localhost:8000/docs
- **ReDoc**: http://localhost:8000/redoc

## Example Usage with Python

```python
import requests

# Single prediction
response = requests.post(
    "http://localhost:8000/predict",
    json={"station_id": "674f97ff3dc8e5d2ac00867a"}
)
result = response.json()
print(f"Predicted arrivals: {result['predicted_arrivals']:.2f}")

# Batch prediction
response = requests.post(
    "http://localhost:8000/predict/batch",
    json={
        "station_ids": [
            "674f97ff3dc8e5d2ac00867a",
            "674f98013dc8e5d2ac00894a"
        ]
    }
)
results = response.json()
for pred in results['predictions']:
    print(f"{pred['station_id']}: {pred['predicted_arrivals']:.2f}")
```

## Example Usage with cURL

```bash
# Health check
curl http://localhost:8000/health

# Single prediction
curl -X POST http://localhost:8000/predict \
  -H "Content-Type: application/json" \
  -d '{"station_id": "674f97ff3dc8e5d2ac00867a"}'

# Batch prediction
curl -X POST http://localhost:8000/predict/batch \
  -H "Content-Type: application/json" \
  -d '{
    "station_ids": [
      "674f97ff3dc8e5d2ac00867a",
      "674f98013dc8e5d2ac00894a"
    ]
  }'
```

## Features Automatically Engineered

The API automatically fetches and engineers the following features:

### Temporal Features

- `hour` - Hour of day (0-23)
- `dayofweek` - Day of week (0=Monday, 6=Sunday)
- `is_weekend` - Weekend indicator

### External Data Features

- **Weather** (from Open-Meteo API)
  - Temperature (max, min, average)
  - Precipitation
  - Wind speed
- **Holidays** (Victoria, Australia)
  - Public holiday indicator
- **Major Events** (Melbourne-specific)
  - Australian Open
  - AFL Season & Grand Final
  - Melbourne Cup
  - Australian Grand Prix
  - Boxing Day Test
- **Pedestrian Counts** (Melbourne pedestrian counting system)
  - Foot traffic for the prediction hour

### Derived Features

- Interaction features (weekend × hour, temperature × precipitation)
- Lag features (set to zero for real-time prediction)

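To make the temporal and derived features concrete, here is a simplified sketch of how they could be computed for a single prediction timestamp (names such as `build_temporal_features`, `weekend_x_hour`, and `temp_x_precip` are illustrative assumptions; the real pipeline also merges the external weather, holiday, event, and pedestrian data described above):

```python
# Simplified sketch of the temporal and interaction features described above.
# The actual pipeline in model_api.py also fetches external data sources.
from datetime import datetime


def build_temporal_features(ts: datetime, temperature_c: float, precipitation_mm: float) -> dict:
    hour = ts.hour
    dayofweek = ts.weekday()          # 0 = Monday, 6 = Sunday
    is_weekend = int(dayofweek >= 5)  # Saturday or Sunday
    return {
        "hour": hour,
        "dayofweek": dayofweek,
        "is_weekend": is_weekend,
        # Interaction features
        "weekend_x_hour": is_weekend * hour,
        "temp_x_precip": temperature_c * precipitation_mm,
        # Lag features are set to zero for real-time prediction
        "arrivals_lag_1": 0.0,
    }


# Example: features for 2 pm on Wednesday 7 January 2026
print(build_temporal_features(datetime(2026, 1, 7, 14), 22.5, 0.0))
```
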
## Architecture

```
┌─────────────┐
│   Client    │
└──────┬──────┘
       │ HTTP POST /predict
       ▼
┌─────────────────────────────────────┐
│           FastAPI Server            │
│  ┌───────────────────────────────┐  │
│  │ Feature Engineering Pipeline  │  │
│  │  • Temporal features          │  │
│  │  • External data fetching     │  │
│  │  • Feature interactions       │  │
│  └───────────────────────────────┘  │
│  ┌───────────────────────────────┐  │
│  │      RandomForest Model       │  │
│  │      (300 trees, depth=15)    │  │
│  └───────────────────────────────┘  │
└──────┬──────────────────────────────┘
       │ Prediction Response (JSON)
       ▼
┌─────────────┐
│   Client    │
└─────────────┘
```

## Error Handling

The API includes robust error handling (a minimal sketch of the corresponding handlers follows below):

- **503 Service Unavailable**: Model not loaded
- **500 Internal Server Error**: Prediction or processing failure
- **422 Unprocessable Entity**: Invalid request format

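A sketch of how these status codes might be produced in a FastAPI handler (illustrative only; the handler body, names, and response shape are assumptions rather than the actual `model_api.py` code):

```python
# Illustrative sketch: producing the error codes listed above in FastAPI.
# The 422 response is generated automatically by request validation.
from fastapi import FastAPI, HTTPException

app = FastAPI()
model = None  # set at startup; stays None if loading failed


@app.post("/predict")
async def predict(request: dict):
    if model is None:
        raise HTTPException(status_code=503, detail="Model not loaded")
    try:
        # ... feature engineering and model.predict(...) would run here ...
        return {"predicted_arrivals": 0.0}
    except Exception as exc:
        raise HTTPException(status_code=500, detail=f"Prediction failed: {exc}")
```
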
## Logging

The API logs important events:

- Model loading status
- Prediction requests
- External API calls
- Errors and warnings

## Performance Considerations

- **External API Caching**: Consider caching weather/pedestrian data (see the sketch below)
- **Batch Predictions**: Use the batch endpoint for multiple stations
- **Async Operations**: The API uses async handlers for concurrent requests
- **Timeouts**: External API calls have 10-second timeouts with fallback defaults

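As suggested in the caching bullet above, a small in-memory TTL cache around the external fetch calls is one simple option (a sketch under assumed names; the real service may cache differently or not at all):

```python
# Hypothetical TTL cache for external data fetches (weather, pedestrian counts).
import time

_CACHE: dict[str, tuple[float, object]] = {}
TTL_SECONDS = 600  # re-fetch a given key at most every 10 minutes


def cached_fetch(key: str, fetch_fn):
    """Return a cached value for `key`, refreshing it once older than TTL_SECONDS."""
    now = time.time()
    hit = _CACHE.get(key)
    if hit is not None and now - hit[0] < TTL_SECONDS:
        return hit[1]
    value = fetch_fn()
    _CACHE[key] = (now, value)
    return value


# Example usage with a hypothetical weather fetcher:
# weather = cached_fetch("weather:melbourne", lambda: fetch_weather("Melbourne"))
```
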
## Production Deployment

### Using Docker

```dockerfile
FROM python:3.10-slim

WORKDIR /app

COPY requirements_api.txt .
RUN pip install --no-cache-dir -r requirements_api.txt

COPY model_api.py random_forest_model.pkl ./

EXPOSE 8000

CMD ["uvicorn", "model_api:app", "--host", "0.0.0.0", "--port", "8000"]
```

Build and run:

```bash
docker build -t ev-prediction-api .
docker run -p 8000:8000 ev-prediction-api
```

### Using systemd (Linux)

Create `/etc/systemd/system/ev-prediction-api.service`:

```ini
[Unit]
Description=EV Congestion Prediction API
After=network.target

[Service]
Type=simple
User=www-data
WorkingDirectory=/opt/ev-prediction-api
ExecStart=/usr/bin/uvicorn model_api:app --host 0.0.0.0 --port 8000 --workers 4
Restart=always

[Install]
WantedBy=multi-user.target
```

Enable and start:

```bash
sudo systemctl enable ev-prediction-api
sudo systemctl start ev-prediction-api
```

## Monitoring

Monitor the API health:

```bash
# Simple health check
watch -n 5 'curl -s http://localhost:8000/health | jq'

# With logging
tail -f /var/log/ev-prediction-api.log
```

## Troubleshooting

### Model Not Loading

- Ensure `random_forest_model.pkl` is in the correct directory
- Check file permissions
- Verify scikit-learn version compatibility

### External API Failures

- The API uses fallback default values when external APIs fail
- Check network connectivity
- Review API rate limits

### Prediction Errors

- Validate input `station_id` format
- Check timestamp format (ISO 8601)
- Review logs for detailed error messages

## License

MIT License - See LICENSE file for details

## Support

For issues and questions:

- Check the API documentation at `/docs`
- Review logs for error details
- Ensure all dependencies are installed
