This is a CLIP-Based Fashion Recommender with MCP.
- Image upload
- Submit button
- Display clothing tags + recommendations
A user uploads a clothing image → AWS Rekognition detects garments → CLIP encodes and tags the crops → similar items are recommended
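The pipeline above can be sketched end to end as plain functions. This is an illustrative outline only: the function names and return values below are stand-ins, not the project's actual APIs, and each stub's body marks what the real stage would do.

```python
# Illustrative stubs for each pipeline stage; the real versions live in the
# backend controllers and call AWS Rekognition / CLIP.
def detect_garments(image_bytes):
    # Real version: AWS Rekognition detect_labels, filtered to garment labels.
    return [{"Left": 0.2, "Top": 0.1, "Width": 0.4, "Height": 0.6}]

def crop_garment(image_bytes, box):
    # Real version: crop the image around the bounding box (PIL).
    return ("crop", box)

def tag_with_clip(crop):
    # Real version: CLIP zero-shot tagging against the configured tag list.
    return ["Hoodie", "Streetwear"]

def recommend(tags):
    # Real version: similarity search over CLIP embeddings.
    return [f"similar-to-{t}" for t in tags]

def run_pipeline(image_bytes):
    tags = [t for box in detect_garments(image_bytes)
            for t in tag_with_clip(crop_garment(image_bytes, box))]
    return {"tags": tags, "recommendations": recommend(tags)}
```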

/project-root
│
├── /backend
│   ├── Dockerfile
│   ├── /app
│   │   ├── /aws
│   │   │   └── rekognition_wrapper.py     # AWS Rekognition logic
│   │   ├── /utils
│   │   │   └── image_utils.py             # Bounding box crop utils
│   │   ├── /controllers
│   │   │   ├── clothing_detector.py       # Coordinates Rekognition + cropping
│   │   │   ├── clothing_controller.py
│   │   │   ├── clothing_tagging.py
│   │   │   └── tag_extractor.py           # Pending: define core CLIP functionality
│   │   ├── /tests
│   │   │   ├── test_rekognition_wrapper.py
│   │   │   └── test_clothing_tagging.py
│   │   ├── /routes
│   │   │   └── clothing_routes.py
│   │   ├── /schemas
│   │   │   └── clothing_schemas.py
│   │   ├── /config
│   │   │   ├── tag_list_en.py             # Tag mapping (tool: https://jsoncrack.com/editor)
│   │   │   ├── database.py
│   │   │   ├── settings.py
│   │   │   └── api_keys.py
│   │   ├── server.py                      # FastAPI app code
│   │   └── requirements.txt
│   └── .env
│
├── /frontend
│   ├── Dockerfile
│   ├── package.json
│   ├── package-lock.json
│   ├── /public
│   │   └── index.html
│   ├── /src
│   │   ├── /components
│   │   │   ├── ImageUpload.jsx
│   │   │   ├── DetectedTags.jsx
│   │   │   └── Recommendations.jsx
│   │   ├── /utils
│   │   │   └── api.js
│   │   ├── App.js                         # Main React component
│   │   ├── index.js
│   │   ├── index.css
│   │   ├── tailwind.config.js
│   │   └── postcss.config.js
│   └── .env
├── docker-compose.yml
└── README.md
python -m venv venv
source venv/bin/activate # On macOS or Linux
venv\Scripts\activate # On Windows
pip install -r requirements.txt
uvicorn backend.app.server:app --reload
Once the server is running and the database is connected, you should see the following message in the console:
Database connected
INFO: Application startup complete.

npm install
npm start
Once running, the server logs a confirmation and opens the app in your browser: http://localhost:3000/

- FastAPI server is up and running (24 Apr)
- Database connection is set up (24 Apr)
- Backend architecture is functional (24 Apr)
- Basic front-end UI for uploading picture (25 Apr)
PYTHONPATH=. pytest backend/app/tests/test_rekognition_wrapper.py

- Tested the Rekognition integration logic independently using a mock; verified it extracts bounding boxes only when labels match the garment set
- Confirmed the folder structure and PYTHONPATH=. work smoothly with pytest from the project root
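The filtering behavior that test verifies might look roughly like the sketch below. The garment set and function name are assumptions for illustration, but the mocked response follows the shape Rekognition's detect_labels actually returns (Labels → Instances → BoundingBox).

```python
# Hypothetical version of the label-filtering logic in rekognition_wrapper.py:
# keep bounding boxes only for labels in the garment set.
GARMENT_LABELS = {"Shirt", "T-Shirt", "Pants", "Dress", "Jacket"}  # assumed set

def extract_garment_boxes(detect_labels_response):
    boxes = []
    for label in detect_labels_response["Labels"]:
        if label["Name"] in GARMENT_LABELS:
            for inst in label.get("Instances", []):
                boxes.append(inst["BoundingBox"])
    return boxes

# Mocked Rekognition response: one garment label, one non-garment label.
mock_response = {
    "Labels": [
        {"Name": "Shirt",
         "Instances": [{"BoundingBox": {"Left": 0.1, "Top": 0.1, "Width": 0.5, "Height": 0.5}}]},
        {"Name": "Tree",
         "Instances": [{"BoundingBox": {"Left": 0.0, "Top": 0.0, "Width": 1.0, "Height": 1.0}}]},
    ]
}
# Only the Shirt box survives the filter.
assert extract_garment_boxes(mock_response) == [
    {"Left": 0.1, "Top": 0.1, "Width": 0.5, "Height": 0.5}
]
```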
PYTHONPATH=. pytest backend/app/tests/test_clothing_tagging.py

- Detecting garments using AWS Rekognition
- Cropping the image around detected bounding boxes
- Tagging the cropped image using CLIP
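One detail worth noting for the cropping step: Rekognition returns BoundingBox values as ratios of the image dimensions, so they must be scaled to pixel coordinates before cropping. A minimal sketch (the helper name is illustrative):

```python
def to_pixel_box(box: dict, width: int, height: int) -> tuple:
    """Convert Rekognition's relative BoundingBox to absolute pixel coords."""
    left = int(box["Left"] * width)
    top = int(box["Top"] * height)
    right = int((box["Left"] + box["Width"]) * width)
    bottom = int((box["Top"] + box["Height"]) * height)
    return (left, top, right, bottom)

# Example with the bounding box used in the mocks, on a 200x400 image:
print(to_pixel_box({"Left": 0.1, "Top": 0.1, "Width": 0.5, "Height": 0.5}, 200, 400))
# → (20, 40, 120, 240), in the (left, top, right, bottom) order PIL's Image.crop expects
```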
7. Mock testing for the full image-tagging pipeline (image bytes → AWS Rekognition (detect garments) → crop images → CLIP (predict tags)) + error handling
| Negative Test Case | Description |
|---|---|
| No Detection Result | AWS doesn't detect any garments → should return an empty list. |
| Image Not Clothing | CLIP returns vague or empty tags → verify fallback behavior. |
| AWS Returns Exception | Simulate rekognition.detect_labels throwing an error → check the try/except. |
| Corrupted Image File | Simulate a broken (non-JPEG) image → verify it raises an error or gives a hint. |
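As a sketch of the "AWS Returns Exception" row: if the controller takes its detector as an injectable callable, the try/except fallback can be exercised without touching AWS at all. The names below are illustrative, not the project's actual functions.

```python
def failing_detector(image_bytes):
    # Simulates rekognition.detect_labels raising (e.g. service outage).
    raise RuntimeError("Rekognition unavailable")

def tag_clothing(image_bytes, detector):
    try:
        return detector(image_bytes)
    except Exception:
        return []  # fallback: empty result instead of an unhandled 500

# The simulated AWS error is swallowed and an empty list comes back.
assert tag_clothing(b"fake-jpeg", failing_detector) == []
```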
PYTHONPATH=. pytest backend/app/tests/test_clothing_tagging.py

- detect_garments: simulates AWS Rekognition returning one bounding box: {"Left": 0.1, "Top": 0.1, "Width": 0.5, "Height": 0.5}
- crop_by_bounding_box: simulates the cropping step returning a dummy "cropped_image" object
- get_tags_from_clip: simulates CLIP returning a list of tags: ["T-shirt", "Cotton", "Casual"]
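Wiring those three mocks together for the happy path could look like the sketch below, using unittest.mock.Mock. It assumes the tagging controller calls the three stages in sequence; the orchestrating function is a stand-in for the real one in clothing_tagging.py.

```python
from unittest.mock import Mock

# The three mocked stages, returning exactly the values described above.
detect_garments = Mock(return_value=[{"Left": 0.1, "Top": 0.1, "Width": 0.5, "Height": 0.5}])
crop_by_bounding_box = Mock(return_value="cropped_image")
get_tags_from_clip = Mock(return_value=["T-shirt", "Cotton", "Casual"])

def tag_image(image_bytes):
    # Stand-in for the real orchestration: detect → crop → tag each garment.
    tags = []
    for box in detect_garments(image_bytes):
        crop = crop_by_bounding_box(image_bytes, box)
        tags.extend(get_tags_from_clip(crop))
    return tags

assert tag_image(b"fake-jpeg") == ["T-shirt", "Cotton", "Casual"]
detect_garments.assert_called_once_with(b"fake-jpeg")
crop_by_bounding_box.assert_called_once()
get_tags_from_clip.assert_called_once_with("cropped_image")
```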
Next Step:
- Evaluate CLIP's tagging accuracy on sample clothing images
- Fine-tune the tagging system for better recommendations
- Test the backend integration with real-time user data
- Set up monitoring for model performance
- Front-end demo