4. Save the top-10 recommendations for each evaluated user in `reports/user_<id>_recommendations.csv`
#### Analysis and Visualization
1. Provide visualizations comparing SVD and PMF predictions for the same user.
2. Offer insights into how the models differ in recommending movies for specific users based on their ratings history.
3. Save the following plots under `reports/`:
- `user_comparison.png` — SVD vs PMF predictions for a selected user
- `top_recommendations.png` — Histogram (or bar chart) of top recommended movies
#### Streamlit Dashboard
1. For a selected user, display:
- Movie recommendations from both the **SVD** and **PMF** models.
- Visual comparison of the SVD vs. PMF predictions for the user.
2. Ensure real-time interaction, with recommendations and visualizations updating dynamically based on user input.
3. The app must run successfully via: `streamlit run app.py`
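A minimal shape for `app.py` could look like the sketch below. Everything model-related is an assumption (the `predict_all` helper is hypothetical and must be replaced with your trained SVD/PMF prediction code); the point is the structure: widget input at the top, recomputation on every rerun, side-by-side output.

```python
import sys

def top_k(scores, k=10):
    """Rank a {movie_id: predicted_rating} dict and return the k best pairs."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

def run_dashboard():
    import streamlit as st

    st.title("SVD vs PMF Movie Recommender")
    # Streamlit reruns the whole script on input change, which gives the
    # required real-time behavior for free.
    user_id = st.number_input("User ID", min_value=1, value=1)

    # predict_all is a hypothetical helper: replace with your own code that
    # scores every unseen movie for `user_id` with the trained model.
    svd_scores = predict_all("svd", user_id)
    pmf_scores = predict_all("pmf", user_id)

    col1, col2 = st.columns(2)
    col1.subheader("SVD")
    col1.table(top_k(svd_scores))
    col2.subheader("PMF")
    col2.table(top_k(pmf_scores))

if "streamlit" in sys.modules:  # true when launched via `streamlit run app.py`
    run_dashboard()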
### Project Repository Structure
```
matrix-factorization-project/
│   ├── matrix_creation.py
│   ├── recommendation.py
│
├── reports/
│   ├── model_metrics.json
│   ├── pmf_convergence.png
│   ├── rmse_comparison.png
│   ├── predicted_vs_actual.png
│   ├── user_comparison.png
│   ├── top_recommendations.png
│   └── user_<id>_recommendations.csv
│
├── app.py
├── requirement.txt
├── Movie_Recommender_System.ipynb
```
**Movie_Recommender_System.ipynb**: A notebook for initial experiments, data exploration, and visualization of the model training and recommendations.

**README.md**: Project documentation with an overview of the recommender system, instructions for setup and running the dashboard, and additional resources.
### Timeline (1-2 weeks)
**Week 1:**
**Days 1-2:** Load and preprocess the dataset, create the user-item interaction matrix.

**Days 3-4:** Implement and train the SVD model.

**Days 5-7:** Implement and train the PMF model, visualize MSE vs. iterations for PMF.
**Week 2:**
**Days 1-2:** Compare SVD and PMF models, evaluate using MSE.

**Days 3-4:** Implement recommendation generation for both models.

**Days 5-7:** Build the Streamlit dashboard, create visualizations, and finalize the project.
### Tips
Remember, a great recommender system needs to understand both the users and the content. Keep in mind the trade-off between model complexity and interpretability. Here are some additional considerations:
---

The following audit checklist is from `subjects/ai/vision-track/audit/README.md`.
###### Is a `requirements.txt` file included with all dependencies and specific library versions required to run the project?
###### Import test: `python -c "import torch, supervision, cv2, streamlit"`
##### Data Processing and Exploratory Data Analysis
###### Does the Jupyter notebook (`VisionTrack_Analysis.ipynb`) include EDA showcasing data distribution, object detection samples, and preprocessing methods?
###### Does data preprocessing include resizing and normalization, ensuring compatibility with YOLO model input formats?
- Validate YOLO-compatible annotations (`.txt` files with `class x_center y_center width height`).
- Confirm frames are resized and normalized properly before inference.
##### Model Implementation
###### Is the YOLO model implemented for person detection with configuration options for detection thresholds and class-specific tuning?
###### Does the project include logic for tracking and counting entries and exits within specified regions of interest (ROIs)?
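One common way to implement this logic is to remember each tracker ID's previous inside/outside state for the ROI and count transitions. A minimal sketch, assuming a rectangular ROI and per-frame (track id, centroid) updates from the tracker:

```python
class ROICounter:
    """Count entries into and exits out of a rectangular region of interest."""

    def __init__(self, x1, y1, x2, y2):
        self.roi = (x1, y1, x2, y2)
        self.state = {}   # track_id -> bool: was the object inside last frame?
        self.entries = 0
        self.exits = 0

    def update(self, track_id, cx, cy):
        """Feed one tracked object's centroid for the current frame."""
        x1, y1, x2, y2 = self.roi
        inside = x1 <= cx <= x2 and y1 <= cy <= y2
        prev = self.state.get(track_id)
        if prev is not None and inside != prev:
            if inside:
                self.entries += 1   # crossed from outside to inside
            else:
                self.exits += 1     # crossed from inside to outside
        self.state[track_id] = inside
```

Polygonal ROIs work the same way; only the `inside` test changes (e.g. a point-in-polygon check).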
###### Check that trained weights are saved in: `models/checkpoints/best.pt`
##### Streamlit App Development
###### Is the **Streamlit** app implemented to display video feeds with overlaid detection, tracking, and counting information?
###### Are evaluation metrics presented, showcasing precision, recall, and F1-score to assess the effectiveness of detection and tracking?
###### Check:
- Require metrics file:
```
reports/performance_metrics.json
```
- Validate JSON includes:
```json
{
  "detection_precision": ...,
  "detection_recall": ...,
  "f1_score": ...,
  "average_fps_per_stream": ...,
  "average_latency_ms": ...
}
```
- Add minimum thresholds:
  - Precision ≥ 0.85
  - Recall ≥ 0.80
  - F1 ≥ 0.85
  - FPS ≥ 15 (720p)
- Add check that metrics are visible in Streamlit dashboard (FPS + latency shown live).
##### Additional Considerations
###### Is the codebase documented with comments and explanations for readability and maintainability?