Commit f0733b7 ("updated docs", parent c7dc8bd)

3 files changed: 123 additions & 2 deletions

microservices/dlstreamer-pipeline-server/docs/user-guide/how-to-deploy-with-helm.md

Lines changed: 10 additions & 1 deletion
@@ -55,7 +55,16 @@ Update the below fields in `values.yaml` file in the helm chart
### Run default sample

Once the pods are up, we will send a pipeline request to DL Streamer Pipeline Server to run a detection model on a warehouse video.

Copy the resources, such as the video and model, from the local directory to the `dlstreamer-pipeline-server` pod to make them available for launching the pipeline.
```sh
POD_NAME=$(kubectl get pods -n apps -o jsonpath='{.items[*].metadata.name}' | tr ' ' '\n' | grep dlstreamer-pipeline-server | head -n 1)

kubectl cp <edge-ai-libraries/microservices/dlstreamer-pipeline-server>/resources/models/geti/ $POD_NAME:/home/pipeline-server/resources/models/geti/ -c dlstreamer-pipeline-server -n apps

kubectl cp <edge-ai-libraries/microservices/dlstreamer-pipeline-server>/resources/videos/warehouse.avi $POD_NAME:/home/pipeline-server/resources/videos/ -c dlstreamer-pipeline-server -n apps
```

We will send the below curl request to run the inference. It comprises a source file path, which is `warehouse.avi`; a destination, with metadata directed to a JSON file in `/tmp/results.jsonl` and frames streamed over RTSP with id `pallet_defect_detection`. Additionally, we will also provide the Geti model path that would be used for detecting defective boxes on the video file.
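For reference, a request of the shape described above could be sketched as follows. The payload field names and the model subpath are assumptions for illustration, not the documented schema; check them against the REST API reference before use.

```sh
# Illustrative request body assembled from the description above.
# Field names and paths are assumptions; adjust them to your deployment.
PAYLOAD='{
  "source": {
    "uri": "file:///home/pipeline-server/resources/videos/warehouse.avi",
    "type": "uri"
  },
  "destination": {
    "metadata": {"type": "file", "path": "/tmp/results.jsonl", "format": "json-lines"},
    "frame": {"type": "rtsp", "path": "pallet_defect_detection"}
  },
  "parameters": {
    "detection-properties": {"model": "/home/pipeline-server/resources/models/geti/deployment"}
  }
}'

# Sanity-check that the body is valid JSON before posting it:
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload OK"

# Send it once the service is reachable (NodePort 30007 in this guide):
# curl http://localhost:30007/pipelines/user_defined_pipelines/pallet_defect_detection \
#   -X POST -H 'Content-Type: application/json' -d "$PAYLOAD"
```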
Lines changed: 104 additions & 0 deletions
@@ -0,0 +1,104 @@
# Troubleshooting

## Using REST API in Image Ingestor mode has high first-inference latency

This is expected behavior, observed only for the first inference. Subsequent inferences will be considerably faster.
For inference on GPU, the first inference might be even slower. Latencies of up to 15 seconds have been observed for image request inference on GPU.
When in `sync` mode, we suggest providing a `timeout` value that accommodates the first-inference latency, to avoid the request timing out.
Read [here](./advanced-guide/detailed_usage/rest_api/restapi_reference_guide.md#post-pipelinesnameversioninstance_id) to learn more about the API.
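As an illustration, a `sync`-mode request could carry such a timeout alongside its payload; the field name and placement here are assumptions, so confirm them against the REST API reference linked above.

```sh
{
  "timeout": 30
}
```

A 30-second value comfortably covers the roughly 15-second worst-case first GPU inference noted above.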

## Axis RTSP camera freezes or pipeline stops

Restart the DL Streamer Pipeline Server container with the pipeline that has this RTSP source.

## Deploying with Intel GPU K8S Extension

If you're deploying a GPU-based pipeline (for example, with VA-API elements like `vapostproc`, `vah264dec`, etc., and/or with `device=GPU` in `gvadetect` in `dlstreamer_pipeline_server_config.json`) with the Intel GPU K8S Extension, set the below details in `helm/values.yaml` appropriately in order to utilize the underlying GPU.
```sh
gpu:
  enabled: true
  type: "gpu.intel.com/i915"
  count: 1
```

## Deploying without Intel GPU K8S Extension

If you're deploying a GPU-based pipeline (for example, with VA-API elements like `vapostproc`, `vah264dec`, etc., and/or with `device=GPU` in `gvadetect` in `dlstreamer_pipeline_server_config.json`) without the Intel GPU K8S Extension, set the below details in `helm/values.yaml` appropriately in order to utilize the underlying GPU.
```sh
privileged_access_required: true
```

## Using RTSP/WebRTC streaming, S3_write or MQTT fails with GPU elements in pipeline

If you are using GPU elements in the pipeline, RTSP/WebRTC streaming, S3_write and MQTT will not work because these expect CPU buffers. \
Add `vapostproc ! video/x-raw` before the appsink element, or before the `jpegenc` element (in case you are using S3_write), in the GPU pipeline.
```sh
# Sample pipeline
"pipeline": "{auto_source} name=source ! parsebin ! vah264dec ! vapostproc ! video/x-raw(memory:VAMemory) ! gvadetect name=detection model-instance-id=inst0 ! queue ! gvafpscounter ! gvametaconvert add-empty-results=true name=metaconvert ! gvametapublish name=destination ! vapostproc ! video/x-raw ! appsink name=appsink"
```

## RTSP streaming fails if you are using udfloader

If you are using the udfloader pipeline, RTSP streaming will not work because the RTSP pipeline does not support RGB, BGR or Mono formats.
If you are using the `udfloader` pipeline or an `RGB`, `BGR` or `GRAY8` format in the pipeline, add `videoconvert ! video/x-raw, format=(string)NV12` before the `appsink` element in the pipeline.
```sh
# Sample pipeline
"pipeline": "{auto_source} name=source ! decodebin ! videoconvert ! video/x-raw,format=RGB ! udfloader name=udfloader ! gvametaconvert add-empty-results=true name=metaconvert ! gvametapublish name=destination ! videoconvert ! video/x-raw, format=(string)NV12 ! appsink name=appsink"
```

## Resolving Time Sync Issues in Prometheus

If you see the following warning in Prometheus, it indicates a time sync issue.

**Warning: Error fetching server time: Detected xxx.xxx seconds time difference between your browser and the server.**

You can follow the steps below to synchronize system time using NTP.
1. **Install systemd-timesyncd** if not already installed:
   ```bash
   sudo apt install systemd-timesyncd
   ```

2. **Check service status**:
   ```bash
   systemctl status systemd-timesyncd
   ```

3. **Configure an NTP server** (if behind a corporate proxy):
   ```bash
   sudo nano /etc/systemd/timesyncd.conf
   ```
   Add:
   ```ini
   [Time]
   NTP=corp.intel.com
   ```
   Replace `corp.intel.com` with an NTP server that is supported on your network.

4. **Restart the service**:
   ```bash
   sudo systemctl restart systemd-timesyncd
   ```

5. **Verify the status**:
   ```bash
   systemctl status systemd-timesyncd
   ```

This should resolve the time discrepancy in Prometheus.

## WebRTC Stream on web browser

The firewall may prevent you from viewing the video stream in a web browser. Disable the firewall using this command.
```sh
sudo ufw disable
```

## Error Logs

View the container logs using this command.
```sh
docker logs -f <CONTAINER_NAME>
```

microservices/dlstreamer-pipeline-server/helm/README.md

Lines changed: 9 additions & 1 deletion
@@ -3,7 +3,7 @@
## Steps to deploy the helm chart:

- Note: In case you do not have a k8s cluster, please follow the steps mentioned in 'Setup k8s cluster' below before deploying the helm chart.
- Get into the helm directory (where this README.md exists)
  `cd helm`
- Update the below fields in `values.yaml` file in the helm chart
@@ -14,6 +14,14 @@
  `helm install dlsps . -n apps --create-namespace`
- Check if Deep Learning Streamer Pipeline Server is running fine
  `kubectl get pods --namespace apps` and monitor its logs using `kubectl logs -f <pod_name> -n apps`
- Copy the resources, such as the video and model, from the local directory to the `dlstreamer-pipeline-server` pod to make them available for launching the pipeline.
  ```sh
  POD_NAME=$(kubectl get pods -n apps -o jsonpath='{.items[*].metadata.name}' | tr ' ' '\n' | grep dlstreamer-pipeline-server | head -n 1)

  kubectl cp ../resources/models/geti/ $POD_NAME:/home/pipeline-server/resources/models/geti/ -c dlstreamer-pipeline-server -n apps

  kubectl cp ../resources/videos/warehouse.avi $POD_NAME:/home/pipeline-server/resources/videos/ -c dlstreamer-pipeline-server -n apps
  ```
- Send the curl command to start the pallet defect detection pipeline
  ``` sh
  curl http://localhost:30007/pipelines/user_defined_pipelines/pallet_defect_detection -X POST -H 'Content-Type: application/json' -d '{
  ```
