This project demonstrates the deployment of a Machine Learning (ML) model pipeline with functionality for model prediction, retraining, bulk data uploads, and more. The deployment includes Dockerized web applications for both the frontend and backend, hosted on cloud platforms: the Dockerized application was deployed on Google Cloud Platform using Cloud Run.
Model Prediction
- Predict on a single data point (e.g., an image or selected CSV features).
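As a minimal sketch, a single prediction can also be requested directly against the API; the endpoint path, port, and payload fields below are assumptions for illustration, not the project's actual contract.

```python
# Hypothetical single-point prediction request; adjust the URL and fields to the real API.
import requests

payload = {"features": [5.1, 3.5, 1.4, 0.2]}  # one row of selected CSV features
response = requests.post("http://localhost:5000/predict", json=payload, timeout=10)
response.raise_for_status()
print(response.json())  # e.g. {"prediction": "class_a", "confidence": 0.93}
```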
Data Visualization
- Interpret and visualize at least three features of the dataset, each with a meaningful story.
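For example, one simple way to tell those stories is to plot feature distributions; the sketch below assumes a placeholder CSV path and column names.

```python
# Sketch: histograms of three features from the training data (placeholder names).
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("data/train/train.csv")            # placeholder path
features = ["feature_1", "feature_2", "feature_3"]  # placeholder column names

fig, axes = plt.subplots(1, len(features), figsize=(15, 4))
for ax, column in zip(axes, features):
    df[column].hist(ax=ax, bins=30)
    ax.set_title(f"Distribution of {column}")
plt.tight_layout()
plt.show()
```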
Data Upload
- Bulk upload data (CSV, images, or other formats) for retraining.
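Bulk uploads can also be scripted; the sketch below posts a CSV to a hypothetical /upload endpoint (the endpoint name and form field are assumptions).

```python
# Hypothetical bulk CSV upload of new retraining data.
import requests

with open("new_samples.csv", "rb") as f:
    response = requests.post(
        "http://localhost:5000/upload",
        files={"file": ("new_samples.csv", f, "text/csv")},
        timeout=120,
    )
response.raise_for_status()
print(response.json())  # e.g. {"rows_ingested": 1200}
```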
Model Retraining
- Trigger a model retraining process via a user-friendly interface.
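Retraining is triggered from the UI, but the same action can be exercised against the API; the /retrain endpoint below is an assumption for illustration.

```python
# Hypothetical retraining trigger; the endpoint name is an assumption.
import requests

response = requests.post("http://localhost:5000/retrain", timeout=600)
response.raise_for_status()
print(response.json())  # e.g. {"status": "retraining started"}
```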
Flood Simulation
- Simulate requests using Locust to evaluate response time and latency under different loads.
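A minimal locustfile for this kind of flood test might look like the sketch below (host, endpoint, and payload are assumptions); run it with `locust -f locustfile.py --host http://localhost:5000`.

```python
# locustfile.py (sketch): each simulated user repeatedly hits the prediction endpoint.
from locust import HttpUser, task, between

class PredictionUser(HttpUser):
    wait_time = between(1, 3)  # pause 1-3 seconds between tasks

    @task
    def predict(self):
        # Payload shape is a placeholder; match the real /predict contract.
        self.client.post("/predict", json={"features": [5.1, 3.5, 1.4, 0.2]})
```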
Links
- Frontend: Deployed Frontend URL
- Backend: Deployed Backend URL
- Docker Image: DockerHub Link
- Video Folder: Google Drive Link
Install Docker
- Ensure Docker is installed on your machine. If not, download it from the Docker Official Website.
Pull the Docker Image
```bash
docker pull your-docker-image-name
```
Run the Container
```bash
docker run -d -p 80:80 your-docker-image-name
```
Access the Application
- Open your browser and navigate to http://localhost to interact with the app.

Run Locally (Without Docker)
```bash
git clone https://github.com/edupred.git
cd project_name
python -m venv venv
source venv/bin/activate   # On Windows, use `venv\Scripts\activate`
pip install -r requirements.txt
python app.py
```
Navigate to http://localhost:5000 to access the application.
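For orientation, the serving layer in app.py typically looks something like the Flask sketch below; this is a hedged illustration, not the project's actual routes, model path, or response format.

```python
# Sketch of a minimal Flask serving layer (not the project's actual app.py).
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

with open("models/model_name.pkl", "rb") as f:  # path as in the project structure below
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]
    prediction = model.predict([features])[0]
    return jsonify({"prediction": str(prediction)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```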
Project Structure

```
Project_name/
│
├── README.md                 # Project description and setup instructions
│
├── notebook/
│   ├── project_name.ipynb    # Preprocessing, model training, and evaluation
│
├── src/
│   ├── preprocessing.py      # Preprocessing logic
│   ├── model.py              # Model training and evaluation
│   └── prediction.py         # Model prediction logic
│
├── data/
│   ├── train/                # Training dataset
│   └── test/                 # Testing dataset
│
└── models/
    ├── model_name.pkl        # Saved model in Pickle format
    └── model_name.tf         # Saved model in TensorFlow format
```
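Both saved formats can be loaded at prediction time; the sketch below uses the file names from the tree above and assumes model_name.tf is a TensorFlow SavedModel directory.

```python
# Sketch: loading the saved artifacts listed above.
import pickle

import tensorflow as tf

with open("models/model_name.pkl", "rb") as f:
    pickled_model = pickle.load(f)

tf_model = tf.keras.models.load_model("models/model_name.tf")
```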
The application was tested under high traffic using Locust to simulate a flood of concurrent requests.
- Python 3.10 or later.
- Docker installed.
- Dependencies specified in `requirements.txt`.
- Metrics: Accuracy, Precision, Recall, F1-Score.
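The reported metrics can be reproduced with scikit-learn; the toy labels below are for illustration only, not the project's results.

```python
# Toy example of computing the evaluation metrics with scikit-learn.
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

y_true = [0, 1, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1]

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1-Score :", f1_score(y_true, y_pred))
```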
