
Commit faa512a

Merge branch 'main' into feature/rkatakol/ibvs_reverse_proxy
2 parents e57b306 + 831bf03 commit faa512a

File tree: 82 files changed, +1237 −248 lines changed

manufacturing-ai-suite/industrial-edge-insights-vision/.gitignore

Lines changed: 2 additions & 1 deletion
```diff
@@ -3,4 +3,5 @@ resources/
 *.xml
 *.avi
 *.mp4
-*.h264
+*.h264
+apps/*/configs/nginx/ssl/
```

manufacturing-ai-suite/industrial-edge-insights-vision/README.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -20,7 +20,7 @@ It consists of the following microservices:


 <div style="text-align: center;">
-<img src=defect-detection-arch-diagram.png width=800>
+<img src=industrial-edge-insights-vision-architecture.drawio.svg width=800>
 </div>

 ### Directory structure
```
Lines changed: 1 addition & 0 deletions
```diff
@@ -1,2 +1,3 @@
 allow_anonymous true
 listener 1883
+
```
Lines changed: 237 additions & 0 deletions
All 237 lines are additions (a new file), so the configuration is shown in full:

```nginx
events {
    worker_connections 1024;
}

# MQTT TCP proxy
stream {
    upstream mqtt_tcp {
        server mqtt-broker:1883;
    }

    server {
        listen 1883;  # Nginx listens on 1883 for TCP MQTT
        proxy_pass mqtt_tcp;
    }
}

http {
    upstream dlstreamer {
        server dlstreamer-pipeline-server:8080;
    }

    upstream prometheus {
        server prometheus:9090;
    }

    upstream mediamtx {
        server mediamtx-server:8889;
    }

    upstream mediamtx-webrtc {
        server mediamtx-server:8189;
    }

    upstream model_registry {
        server model_registry:8111;
    }

    upstream minio {
        server mraas-minio:8000;
    }

    upstream otel_collector_grpc {
        server otel-collector:4317;
    }

    upstream otel_collector_http {
        server otel-collector:4318;
    }

    # HTTP server - redirect to HTTPS
    server {
        listen 80;
        return 301 https://$host$request_uri;
    }

    # HTTPS server
    server {
        listen 443 ssl;
        server_name localhost;

        client_max_body_size 500M;

        # SSL configuration
        ssl_certificate /etc/nginx/ssl/server.crt;
        ssl_certificate_key /etc/nginx/ssl/server.key;

        # SSL security settings
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        # Security headers
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
        add_header X-Content-Type-Options nosniff;
        add_header X-Frame-Options SAMEORIGIN;
        add_header X-XSS-Protection "1; mode=block";

        # DL Streamer Pipeline Server
        location /api/ {
            proxy_pass http://dlstreamer/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        # Prometheus
        location /prometheus/ {
            proxy_pass http://prometheus;  # upstream already has port
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        # Model Registry
        location /registry/ {
            proxy_pass http://model_registry/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        # MinIO
        location /minio/ {
            proxy_pass http://minio/minio/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            # Allow CORS for MinIO UI
            add_header Access-Control-Allow-Origin *;
            add_header Access-Control-Allow-Methods "GET, POST, OPTIONS";
            add_header Access-Control-Allow-Headers "Authorization, Content-Type";

            # Handle preflight requests
            if ($request_method = OPTIONS) {
                return 204;
            }
        }

        # OTEL Collector HTTP
        location /otel-http/ {
            proxy_pass http://otel_collector_http/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        # OTEL Collector gRPC
        location /otel-grpc/ {
            grpc_pass grpc://otel_collector_grpc;
        }

        # MediaMTX streams with dynamic paths
        location /mediamtx/ {
            proxy_pass http://mediamtx/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            # WebSocket support for WebRTC
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }

        # MediaMTX WebRTC endpoint (port 8189)
        location /webrtc/ {
            proxy_pass http://mediamtx-webrtc/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            # WebSocket support for WebRTC
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }

        # MediaMTX stream paths and WHEP/WHIP endpoints
        location ~ ^/([^/]+)/(whep|whip)(/.*)?$ {
            proxy_pass http://mediamtx/$1/$2$3;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            # WebSocket support for WebRTC
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";

            # CORS headers for WebRTC
            add_header Access-Control-Allow-Origin *;
            add_header Access-Control-Allow-Methods "GET, POST, OPTIONS";
            add_header Access-Control-Allow-Headers "Content-Type, Authorization";

            # Handle preflight requests
            if ($request_method = OPTIONS) {
                add_header Access-Control-Allow-Origin *;
                add_header Access-Control-Allow-Methods "GET, POST, OPTIONS";
                add_header Access-Control-Allow-Headers "Content-Type, Authorization";
                return 204;
            }
        }

        # Default landing page
        location / {
            return 200 '<!DOCTYPE html>
<html>
<head>
    <title>Manufacturing Vision AI Application</title>
    <style>
        body { font-family: Arial, sans-serif; margin: 40px; }
        .service { margin: 20px 0; padding: 15px; border: 1px solid #ddd; border-radius: 5px; }
        a { color: #0066cc; text-decoration: none; }
        a:hover { text-decoration: underline; }
    </style>
</head>
<body>
    <h1>Manufacturing Vision AI Application</h1>
    <p>External facing services:</p>
    <div class="service">
        <h3><a href="/api/pipelines">DL Streamer Pipeline Server API</a></h3>
        <p>DL Streamer Pipeline Server pipelines</p>
    </div>
    <div class="service">
        <h3><a href="/prometheus/">Prometheus</a></h3>
        <p>Metrics monitoring and scraping</p>
    </div>
    <div class="service">
        <h3><a href="/registry/models">Model Registry</a></h3>
        <p>Manage and version AI models</p>
    </div>
    <div class="service">
        <h3><a href="/minio/">MinIO</a></h3>
        <p>Object storage (S3-compatible)</p>
    </div>
</body>
</html>';
            add_header Content-Type text/html;
        }

        # Health check endpoint
        location /health {
            access_log off;
            return 200 "healthy\n";
            add_header Content-Type text/plain;
        }
    }
}
```
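The HTTPS server block expects a certificate pair at `/etc/nginx/ssl/`, and the new `.gitignore` entry suggests these live under `apps/*/configs/nginx/ssl/` on the host. A minimal sketch for generating a self-signed pair for local testing — the output directory and `openssl` invocation are assumptions, not part of this commit:

```shell
# Sketch (assumption): generate a self-signed cert for the reverse proxy.
# The output directory mirrors the new .gitignore entry; adjust the app
# directory name to match your deployment.
SSL_DIR="apps/pallet-defect-detection/configs/nginx/ssl"
mkdir -p "$SSL_DIR"

# 365-day RSA key + cert with CN=localhost (matches server_name in nginx.conf)
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -subj "/CN=localhost" \
  -keyout "$SSL_DIR/server.key" \
  -out "$SSL_DIR/server.crt"
```

Browsers will warn about the self-signed certificate; for production, mount a certificate from a trusted CA at the same paths instead.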

manufacturing-ai-suite/industrial-edge-insights-vision/apps/pallet-defect-detection/docs/user-guide/Overview.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -12,7 +12,7 @@ This sample application consists of the following microservices: DL Streamer Pip

 You start the pallet defect detection pipeline with a REST request using Client URL (cURL). The REST request will return a pipeline instance ID. DL Streamer Pipeline Server then sends the images with overlaid bounding boxes through webrtc protocol to webrtc browser client. This is done via the MediaMTX server used for signaling. Coturn server is used to facilitate NAT traversal and ensure that the webrtc stream is accessible on a non-native browser client and helps in cases where firewall is enabled. DL Streamer Pipeline Server also sends the images to S3 compliant storage. The Open Telemetry Data exported by DL Streamer Pipeline Server to Open Telemetry Collector is scraped by Prometheus and can be seen on Prometheus UI. Any desired AI model from the Model Registry Microservice (which can interact with Postgres, Minio and Geti Server for getting the model) can be pulled into DL Streamer Pipeline Server and used for inference in the sample application.

-![Architecture and high-level representation of the flow of data through the architecture](./images/defect-detection-arch-diagram.png)
+![Architecture and high-level representation of the flow of data through the architecture](./images/industrial-edge-insights-vision-architecture.drawio.svg)

 Figure 1: Architecture diagram

```

manufacturing-ai-suite/industrial-edge-insights-vision/apps/pallet-defect-detection/docs/user-guide/get-started.md

Lines changed: 7 additions & 2 deletions
````diff
@@ -29,6 +29,11 @@ If not, follow the [installation guide for docker engine](https://docs.docker.co
 ```bash
 HOST_IP=<HOST_IP> # IP address of server where DLStreamer Pipeline Server is running.

+MR_PSQL_PASSWORD= #PostgreSQL service & client adapter e.g. intel1234
+
+MR_MINIO_ACCESS_KEY= # MinIO service & client access key e.g. intel1234
+MR_MINIO_SECRET_KEY= # MinIO service & client secret key e.g. intel1234
+
 MTX_WEBRTCICESERVERS2_0_USERNAME=<username> # WebRTC credentials e.g. intel1234
 MTX_WEBRTCICESERVERS2_0_PASSWORD=<password>

@@ -120,11 +125,11 @@ If not, follow the [installation guide for docker engine](https://docs.docker.co
 Extracting payload for pipeline: pallet_defect_detection
 Found 1 payload(s) for pipeline: pallet_defect_detection
 Payload for pipeline 'pallet_defect_detection' {"source":{"uri":"file:///home/pipeline-server/resources/videos/warehouse.avi","type":"uri"},"destination":{"frame":{"type":"webrtc","peer-id":"pdd"}},"parameters":{"detection-properties":{"model":"/home/pipeline-server/resources/models/pallet-defect-detection/model.xml","device":"CPU"}}}
-Posting payload to REST server at http://<HOST_IP>:8080/pipelines/user_defined_pipelines/pallet_defect_detection
+Posting payload to REST server at https://<HOST_IP>/api/pipelines/user_defined_pipelines/pallet_defect_detection
 Payload for pipeline 'pallet_defect_detection' posted successfully. Response: "4b36b3ce52ad11f0ad60863f511204e2"
 ```

-> **NOTE:** This will start the pipeline. To view the inference stream on WebRTC, open a browser and navigate to http://<HOST_IP>:8889/pdd/ for Pallet Defect Detection
+> **NOTE:** This will start the pipeline. To view the inference stream on WebRTC, open a browser and navigate to https://<HOST_IP>/mediamtx/pdd/ for Pallet Defect Detection

 8. Get the status of running pipeline instance(s):

````
manufacturing-ai-suite/industrial-edge-insights-vision/apps/pallet-defect-detection/docs/user-guide/how-to-change-input-video-source.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -2,7 +2,7 @@

 Typically, a pipeline is started with a cURL request with JSON payload containing source, destination and parameters. For example, the following cURL request start an AI pipeline on a file inferencing on pallet defect detection model.

-curl http://<HOST_IP>:8080/pipelines/user_defined_pipelines/<pipeline_name> -X POST -H 'Content-Type: application/json' -d '{
+curl -k https://<HOST_IP>/api/pipelines/user_defined_pipelines/<pipeline_name> -X POST -H 'Content-Type: application/json' -d '{
 "source": {
 "uri": "file:///home/pipeline-server/resources/videos/warehouse.avi",
 "type": "uri"
```
manufacturing-ai-suite/industrial-edge-insights-vision/apps/pallet-defect-detection/docs/user-guide/how-to-enable-mlops.md

Lines changed: 6 additions & 7 deletions
````diff
@@ -84,7 +84,7 @@ With this feature, during runtime, you can download a new model from the registr

 2. Run the following curl command to upload the local model.
 ```sh
-curl -L -X POST "http://<HOST_IP>:32002/models" \
+curl -k -L -X POST "https://<HOST_IP>/registry/models" \
 -H 'Content-Type: multipart/form-data' \
 -F 'name="YOLO_Test_Model"' \
 -F 'precision="fp32"' \
@@ -100,27 +100,26 @@ With this feature, during runtime, you can download a new model from the registr
 3. Check if the model is uploaded successfully.

 ```sh
-curl 'http://<HOST_IP>:32002/models'
+curl -k 'https://<HOST_IP>/registry/models'
 ```

 ### Steps to use the new model

 1. List all the registered models in the model registry
 ```sh
-curl 'http://<HOST_IP>:32002/models'
+curl -k 'https://<HOST_IP>/registry/models'
 ```
 If you do not have a model available, follow the steps [here](#upload-a-model-to-model-registry) to upload a sample model in Model Registry

 2. Check the instance ID of the currently running pipeline to use it for the next step.
 ```sh
-curl --location -X GET http://<HOST_IP>:8080/pipelines/status
+curl -k --location -X GET https://<HOST_IP>/api/pipelines/status
 ```
-> NOTE- Replace the port in the curl request according to the deployment method i.e. default 8080 for compose based.

 3. Restart the model with a new model from Model Registry.
 The following curl command downloads the model from Model Registry using the specs provided in the payload. Upon download, the running pipeline is restarted with replacing the older model with this new model. Replace the `<instance_id_of_currently_running_pipeline>` in the URL below with the id of the pipeline instance currently running.
 ```sh
-curl 'http://<HOST_IP>:8080/pipelines/user_defined_pipelines/pallet_defect_detection_mlops/{instance_id_of_currently_running_pipeline}/models' \
+curl -k 'https://<HOST_IP>/api/pipelines/user_defined_pipelines/pallet_defect_detection_mlops/{instance_id_of_currently_running_pipeline}/models' \
 --header 'Content-Type: application/json' \
 --data '{
 "project_name": "pallet-defect-detection",
@@ -143,5 +142,5 @@ With this feature, during runtime, you can download a new model from the registr

 5. You can also stop any running pipeline by using the pipeline instance "id"
 ```sh
-curl --location -X DELETE http://<HOST_IP>:8080/pipelines/{instance_id}
+curl -k --location -X DELETE https://<HOST_IP>/api/pipelines/{instance_id}
 ```
````
manufacturing-ai-suite/industrial-edge-insights-vision/apps/pallet-defect-detection/docs/user-guide/how-to-manage-pipelines.md

Lines changed: 4 additions & 4 deletions
````diff
@@ -4,7 +4,7 @@ This section describes how to create custom AI pipelines for the sample applicat

 ## Create Pipelines

-The AI pipelines are defined by the `pipeline-server-config.json` file present under the configs subdirectory of a particular application directory (for docker compose deployment) and similary inside the helm directory (for helm based deployment. Please also note that the port in the cURL/REST requests needs to be changed from 8080 to 30107 for helm based deployment).
+The AI pipelines are defined by the `pipeline-server-config.json` file present under the configs subdirectory of a particular application directory (for docker compose deployment) and similary inside the helm directory (for helm based deployment).

 The following is an example of the pallet defect detection pipeline, which is included in the `pipeline-server-config.json` file.
 ```sh
@@ -55,7 +55,7 @@ Follow this procedure to start the pipeline.

 In this example, a pipeline included in this sample application is `pallet_defect_detection`. Start this pipeline with the following cURL command.

-curl http://<HOST_IP>:8080/pipelines/user_defined_pipelines/pallet_defect_detection -X POST -H 'Content-Type: application/json' -d '{
+curl -k https://<HOST_IP>/api/pipelines/user_defined_pipelines/pallet_defect_detection -X POST -H 'Content-Type: application/json' -d '{
 "source": {
 "uri": "file:///home/pipeline-server/resources/videos/warehouse.avi",
 "type": "uri"
@@ -83,15 +83,15 @@ Request the pipeline statistics with this cURL command.

 Replace `HOST_IP` with the IP address of your system.

-curl --location -X GET http://<HOST_IP>:8080/pipelines/status
+curl -k --location -X GET https://<HOST_IP>/api/pipelines/status

 ## Stop the Pipeline

 Stop the pipeline with the following cURL command.

 Replace `HOST_IP` with the IP address of your system and `instance_id` with the instance ID (without quotes) of the running pipeline.

-curl --location -X DELETE http://<HOST_IP>:8080/pipelines/{instance_id}
+curl -k --location -X DELETE https://<HOST_IP>/api/pipelines/{instance_id}

 > **Note**
 > The instance ID is shown in the Terminal when the [pipeline was started](#start-the-pipeline) or when [pipeline statistics were requested](#get-statistics-of-the-running-pipelines).
````
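When scripting against the status endpoint, the instance IDs can be pulled out with a small `jq` filter. This is a sketch — `<HOST_IP>` is a placeholder, `jq` is assumed to be installed, and the `.id` field name is an assumption about the Pipeline Server's status response shape:

```shell
# Sketch: list running pipeline instance IDs via the reverse proxy.
# Assumes each status entry carries an "id" field.
curl -k https://<HOST_IP>/api/pipelines/status | jq -r '.[].id'
```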
