15 changes: 7 additions & 8 deletions microservices/dlstreamer-pipeline-server/README.md
@@ -134,18 +134,13 @@ Add the following lines in [.env file](./docker/.env) if you are behind a proxy.
Update the following lines in [.env file](./docker/.env) for choosing the right base image and also for naming the image that gets built.

``` sh
# For Ubuntu 22.04: ghcr.io/open-edge-platform/edge-ai-libraries/deb-final-img-ubuntu22:candidate1407
# For Ubuntu 24.04: ghcr.io/open-edge-platform/edge-ai-libraries/deb-final-img-ubuntu24:candidate1407
# See .env file for example values
BASE_IMAGE=

# For Ubuntu 22.04 and optimized image: intel/dlstreamer-pipeline-server:3.1.0-ubuntu22
# For Ubuntu 24.04 and optimized image: intel/dlstreamer-pipeline-server:3.1.0-ubuntu24
# For Ubuntu 22.04 and extended image: intel/dlstreamer-pipeline-server:3.1.0-extended-ubuntu22
# For Ubuntu 24.04 and extended image: intel/dlstreamer-pipeline-server:3.1.0-extended-ubuntu24
# See .env file for example values
DLSTREAMER_PIPELINE_SERVER_IMAGE=

# For optimized image: dlstreamer-pipeline-server
# For extended image: dlstreamer-pipeline-server-extended
# See .env file for example values
BUILD_TARGET=
```
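
For reference, a filled-in `.env` for an Ubuntu 22.04 optimized build might look like the following (values taken from the examples above; substitute the ones matching your setup):

```sh
BASE_IMAGE=ghcr.io/open-edge-platform/edge-ai-libraries/deb-final-img-ubuntu22:candidate1407
DLSTREAMER_PIPELINE_SERVER_IMAGE=intel/dlstreamer-pipeline-server:3.1.0-ubuntu22
BUILD_TARGET=dlstreamer-pipeline-server
```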

@@ -184,6 +179,10 @@ Refer [here](https://docs.openedgeplatform.intel.com/edge-ai-libraries/dlstreame
```
---

## Troubleshooting
- [Troubleshooting Guide](docs/user-guide/troubleshooting-guide.md)

---
## Learn More

- Understand the components, services, architecture, and data flow, in the [Overview](https://docs.openedgeplatform.intel.com/edge-ai-libraries/dlstreamer-pipeline-server/main/user-guide/Overview.html)
@@ -238,4 +238,5 @@ ntp
bitrate
kbps
kb
VA-API
Troubleshooting
@@ -450,7 +450,7 @@ Parameters default value in pipeline definitions can be set in section in one of
}
```

1. **Read default value from environment variable**
2. **Read default value from environment variable**

A default value can be set using environment variable for the element property using `default` key.

@@ -475,10 +475,6 @@ Parameters default value in pipeline definitions can be set in section in one of
}
```

Set `DETECTION_DEVICE` environment variable at Pipeline Server start.
```bash
./docker/run.sh -e DETECTION_DEVICE=GPU
```

#### Parameters and FFmpeg Filters

@@ -111,96 +111,7 @@ For alternative ways to set up the microservice, see:
- [How to Deploy with Helm](./how-to-deploy-with-helm.md)

## Troubleshooting

- **Using REST API in Image Ingestor mode has high first-inference latency**

This is expected behavior and affects only the first inference; subsequent inferences are considerably faster.
For inference on GPU, the first inference might be even slower; latencies of up to 15 seconds have been observed for image inference requests on GPU.
When in `sync` mode, we suggest providing a `timeout` value large enough to accommodate the first-inference latency and avoid request timeouts.
Read [here](./advanced-guide/detailed_usage/rest_api/restapi_reference_guide.md#post-pipelinesnameversioninstance_id) to learn more about the API.
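
A sync-mode request might then look like the following sketch (hypothetical — the port and every field except `timeout` are assumptions; see the REST reference linked above for the actual schema):

```sh
# Hypothetical request; adjust pipeline name, instance id, and payload to your deployment.
curl -X POST http://localhost:8080/pipelines/user_defined_pipelines/<pipeline>/<instance_id> \
  -H 'Content-Type: application/json' \
  -d '{"source": {"path": "/path/to/image.jpg", "type": "file"}, "timeout": 20}'
```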


- **Axis RTSP camera freezes or pipeline stops**

Restart the DL Streamer Pipeline Server container running the pipeline that uses this RTSP source.
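
For example, a minimal restart might look like this (a sketch assuming the default container name; adjust to your deployment):

```sh
docker restart dlstreamer-pipeline-server
```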


- **Deploying with Intel GPU K8S Extension on ITEP**

If you're deploying a GPU-based pipeline (for example, with VA-API elements like `vapostproc`, `vah264dec` etc., and/or with `device=GPU` in `gvadetect` in `dlstreamer_pipeline_server_config.json`) with Intel GPU K8S Extension on ITEP, set the below details in the file `helm/values.yaml` appropriately in order to utilize the underlying GPU.
```sh
gpu:
enabled: true
type: "gpu.intel.com/i915"
count: 1
```

- **Deploying without Intel GPU K8S Extension**

If you're deploying a GPU-based pipeline (for example, with VA-API elements like `vapostproc`, `vah264dec` etc., and/or with `device=GPU` in `gvadetect` in `dlstreamer_pipeline_server_config.json`) without Intel GPU K8S Extension, set the below details in the file `helm/values.yaml` appropriately in order to utilize the underlying GPU.
```sh
privileged_access_required: true
```

- **Using RTSP/WebRTC streaming, S3_write or MQTT fails with GPU elements in pipeline**

If you are using GPU elements in the pipeline, RTSP/WebRTC streaming, S3_write and MQTT will not work because these expect CPU buffers. \
Add `vapostproc ! video/x-raw` before the `appsink` element, or before the `jpegenc` element (in case you are using S3_write), in the GPU pipeline.
```sh
# Sample pipeline

"pipeline": "{auto_source} name=source ! parsebin ! vah264dec ! vapostproc ! video/x-raw(memory:VAMemory) ! gvadetect name=detection model-instance-id=inst0 ! queue ! gvafpscounter ! gvametaconvert add-empty-results=true name=metaconvert ! gvametapublish name=destination ! vapostproc ! video/x-raw ! appsink name=appsink"
```

- **RTSP streaming fails if you are using udfloader**

If you are using a udfloader pipeline, RTSP streaming will not work because the RTSP pipeline does not support RGB, BGR or mono formats.
If you are using a udfloader pipeline, or RGB, BGR or GRAY8 formats in the pipeline, add `videoconvert ! video/x-raw, format=(string)NV12` before the `appsink` element.
```sh
# Sample pipeline

"pipeline": "{auto_source} name=source ! decodebin ! videoconvert ! video/x-raw,format=RGB ! udfloader name=udfloader ! gvametaconvert add-empty-results=true name=metaconvert ! gvametapublish name=destination ! videoconvert ! video/x-raw, format=(string)NV12 ! appsink name=appsink"
```

- **Resolving Time Sync Issues in Prometheus**

If you see the following warning in Prometheus, it indicates a time sync issue.

**Warning: Error fetching server time: Detected xxx.xxx seconds time difference between your browser and the server.**

You can follow the steps below to synchronize system time using NTP.
1. **Install systemd-timesyncd** if not already installed:
```bash
sudo apt install systemd-timesyncd
```

2. **Check service status**:
```bash
systemctl status systemd-timesyncd
```

3. **Configure an NTP server** (if behind a corporate proxy):
```bash
sudo nano /etc/systemd/timesyncd.conf
```
Add:
```ini
[Time]
NTP=corp.intel.com
```
Replace `corp.intel.com` with an NTP server that is supported on your network.

4. **Restart the service**:
```bash
sudo systemctl restart systemd-timesyncd
```

5. **Verify the status**:
```bash
systemctl status systemd-timesyncd
```

This should resolve the time discrepancy in Prometheus.
- [Troubleshooting Guide](./troubleshooting-guide.md)

## Known Issues

@@ -24,18 +24,13 @@ You can build either an optimized or an extended DL Streamer Pipeline Server image
3. Update the following lines in `[WORKDIR]/edge-ai-libraries/microservices/dlstreamer-pipeline-server/docker/.env` for choosing the right base image and also for naming the image that gets built.

``` sh
# For Ubuntu 22.04: ghcr.io/open-edge-platform/edge-ai-libraries/deb-final-img-ubuntu22:candidate1407
# For Ubuntu 24.04: ghcr.io/open-edge-platform/edge-ai-libraries/deb-final-img-ubuntu24:candidate1407
# See .env file for example values
BASE_IMAGE=

# For Ubuntu 22.04 and optimized image: intel/dlstreamer-pipeline-server:3.1.0-ubuntu22
# For Ubuntu 24.04 and optimized image: intel/dlstreamer-pipeline-server:3.1.0-ubuntu24
# For Ubuntu 22.04 and extended image: intel/dlstreamer-pipeline-server:3.1.0-extended-ubuntu22
# For Ubuntu 24.04 and extended image: intel/dlstreamer-pipeline-server:3.1.0-extended-ubuntu24
# See .env file for example values
DLSTREAMER_PIPELINE_SERVER_IMAGE=

# For optimized image: dlstreamer-pipeline-server
# For extended image: dlstreamer-pipeline-server-extended
# See .env file for example values
BUILD_TARGET=
```
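
As an illustration, a `.env` configured for an Ubuntu 24.04 extended build could look like this (values drawn from the examples above; adjust as needed):

```sh
BASE_IMAGE=ghcr.io/open-edge-platform/edge-ai-libraries/deb-final-img-ubuntu24:candidate1407
DLSTREAMER_PIPELINE_SERVER_IMAGE=intel/dlstreamer-pipeline-server:3.1.0-extended-ubuntu24
BUILD_TARGET=dlstreamer-pipeline-server-extended
```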

@@ -10,11 +10,20 @@
Refer to online tutorials such as <https://adamtheautomator.com/install-kubernetes-ubuntu> to set up a Kubernetes cluster with Ubuntu 22.04 as the host OS.
- For helm installation, refer to [helm website](https://helm.sh/docs/intro/install/)
- Clone the Edge-AI-Libraries repository from open edge platform and change to the helm directory inside the DL Streamer Pipeline Server project.

```sh
cd [WORKDIR]
git clone https://github.com/open-edge-platform/edge-ai-libraries.git
cd edge-ai-libraries/microservices/dlstreamer-pipeline-server/helm
```

## Quick try out
Follow the steps in this section to quickly pull the latest pre-built DL Streamer Pipeline Server helm chart and run a sample use case.

### Pull the helm chart
### Pull the helm chart (Optional)

- Note: Download the helm chart only if you are not using the one provided in `edge-ai-libraries/microservices/dlstreamer-pipeline-server/helm`.

- Download the helm chart with the following command

@@ -46,10 +55,11 @@ Update the below fields in `values.yaml` file in the helm chart

### Run default sample

Once the pods are up, we will send a pipeline request to DL Streamer Pipeline Server to run a detection model on a warehouse video. Both the model and video are provided as default sample in the docker image.
Once the pods are up, we will send a pipeline request to DL Streamer Pipeline Server to run a detection model on a warehouse video.
Resources such as the video and model are copied into the `dlstreamer-pipeline-server` pod by `initContainers`.

We will send the below curl request to run the inference.
It comprises of a source file path which is `warehouse.avi`, a destination, with metadata directed to a json fine in `/tmp/resuts.jsonl` and frames streamed over RTSP with id `pallet_defect_detection`. Additionally, we will also provide the GETi model path that would be used for detecting defective boxes on the video file.
It comprises a source file path, `warehouse.avi`; a destination, with metadata directed to a JSON file at `/tmp/resuts.jsonl` and frames streamed over RTSP with id `pallet_defect_detection`. Additionally, we will provide the Geti model path to be used for detecting defective boxes in the video file.

Open another terminal and send the following curl request
```sh
@@ -108,6 +118,8 @@ To check the pipeline status and stop the pipeline, send the following requests,

Now you have successfully run the DL Streamer Pipeline Server container and sent a curl request to start a pipeline within the microservice, which runs the Geti-based pallet defect detection model on a sample warehouse video. You have also checked the status of the pipeline to confirm everything worked as expected, and finally stopped the pipeline.

## Troubleshooting
- [Troubleshooting Guide](./troubleshooting-guide.md)

## Summary

@@ -53,7 +53,7 @@ Ensure to build/pull the DL Streamer Pipeline Server extended image i.e., `intel

Below is an example that shows how to subscribe to the published data.

- Install ROS2 Humble on Ubuntu22 and source it. Install pythyon and related dependencies too.
- Install ROS2 Humble on Ubuntu22 and source it. Install python and related dependencies too.
```sh
# Install ROS2 Humble
sudo apt update && sudo apt install -y curl gnupg lsb-release
@@ -91,73 +91,73 @@ Below is an example that shows how to subscribe to the published data.

- Save the below sample subscriber script as `ros_subscriber.py`
```python
#!/usr/bin/env python3
import sys
import rclpy
from rclpy.node import Node
from std_msgs.msg import String
import json
import base64
import cv2
import numpy as np
import re

class SimpleSubscriber(Node):
    def __init__(self, topic_name):
        super().__init__('simple_subscriber')
        self.topic_name = topic_name
        self.subscription = self.create_subscription(
            String,
            topic_name,
            self.listener_callback,
            10
        )
        self.counter = 0
        # clean topic name for filename (remove /)
        self.topic_safe = re.sub(r'[^a-zA-Z0-9_]', '_', topic_name)
        print(f"Subscribed to topic: {topic_name}")

    def listener_callback(self, msg):
        try:
            data = json.loads(msg.data)

            metadata = data.get("metadata", {})
            print(f"Metadata on topic {self.topic_name}: {metadata}")

            image_b64 = data.get("blob", "")
            if image_b64:
                img_bytes = base64.b64decode(image_b64)
                np_arr = np.frombuffer(img_bytes, np.uint8)
                img = cv2.imdecode(np_arr, cv2.IMREAD_COLOR)

                if img is not None:
                    filename = f"{self.topic_safe}_{self.counter}.jpg"
                    cv2.imwrite(filename, img)
                    print(f"Image from topic {self.topic_name} saved to {filename}")
                    self.counter += 1
                else:
                    print("Failed to decode image.")
            else:
                print("No image data in message.")

        except Exception as e:
            print(f"Error on topic {self.topic_name}: {e}")

def main(args=None):
    rclpy.init(args=args)

    # get topic from command line
    topic_name = '/dlstreamer_pipeline_results'  # default
    if len(sys.argv) > 1:
        topic_name = sys.argv[1]

    node = SimpleSubscriber(topic_name)
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```

- Run the sample subscriber script as follows and view the metadata being printed and frames being saved.
@@ -8,6 +8,7 @@
overview-architecture
system-requirements
get-started
troubleshooting-guide

.. toctree::
:caption: How to
Expand Down