This is the Cinni-Proto-Backend, which combines several technologies tailored for search and recommendation in the fashion industry:
- ChromaDB
- SQLite
- Custom CLIP models
- Custom OpenAI agents (using implementations of the ReAct paper)
- Google Vision for object segmentation: we will build on this if we can't find a satisfactory clothing-segmentation model to mask and segment each item in a text or image query.
- Python version: 3.10
- Conda is preferred over venv/virtualenv
- Install the required Python packages:

  ```shell
  pip install -r requirements.txt
  ```
- Environment variables:

  ```shell
  export OPENAI_API_KEY="yourkey"
  export GOOGLE_API_KEY="yourkey"
  export FLASK_APP=app.py
  ```
- Run the app:

  ```shell
  python app.py
  ```
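The core search flow above (embed a query with CLIP, then retrieve nearest items from ChromaDB) can be sketched with a minimal stand-in. The toy vectors and catalog names below are illustrative, not the project's actual schema, and plain cosine similarity is used in place of real CLIP embeddings and a ChromaDB collection:

```python
import math

# Toy stand-ins for CLIP embeddings; in the real backend these would come
# from the custom CLIP model and live in a ChromaDB collection.
catalog = {
    "red summer dress": [0.9, 0.1, 0.0],
    "blue denim jacket": [0.1, 0.8, 0.3],
    "black leather boots": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    # Cosine similarity, a common metric for comparing CLIP embeddings.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query_embedding, k=2):
    # Rank catalog items by similarity to the query embedding -- roughly
    # what a ChromaDB `collection.query(..., n_results=k)` call returns.
    ranked = sorted(
        catalog,
        key=lambda name: cosine(catalog[name], query_embedding),
        reverse=True,
    )
    return ranked[:k]

# A query embedding that sits close to "red summer dress" in this toy space.
print(search([0.8, 0.2, 0.1]))  # → ['red summer dress', 'blue denim jacket']
```

In the actual pipeline, the query embedding would be produced by the CLIP text or image encoder, and ChromaDB would handle storage and nearest-neighbor retrieval.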
Further reading and resources:

- Exploring CLIP: Bridging Text and Images (Read more on arXiv | Direct PDF): A comprehensive study on how CLIP models bridge the gap between text descriptions and image content, enabling new ways to handle multimodal tasks.
- Advancements in Neural Networks for Fashion (Read more on arXiv): Discusses recent neural network advancements specifically applied to the fashion industry, enhancing capabilities in image recognition and recommendation systems.
- A Review of Modern Fashion Recommender Systems (Direct PDF): Provides a detailed review of recommender systems operating within the fashion industry, covering challenges like sparse datasets, the need for personalization, and how visual and contextual data from various sources can significantly enhance recommendation strategies.
- OpenAI CLIP (Visit OpenAI): OpenAI's CLIP is a neural network trained on a variety of (image, text) pairs. It learns visual concepts from natural language supervision.
- FashionML (Visit GitHub repository): A GitHub repository showcasing machine learning techniques applied to fashion datasets. Useful for developers looking into fashion-related ML projects.