|
126 | 126 | ------------------------------------------- |
127 | 127 | Environment variables loaded from /home/intel/IRD/edge-ai-suites/manufacturing-ai-suite/industrial-edge-insights-vision/helm/temp_apps/pallet-defect-detection/pdd1/.env |
128 | 128 | Running sample app: pallet-defect-detection |
129 | | - Using Helm deployment - curl commands will use: 10.223.23.150:30443 |
| 129 | + Using Helm deployment - curl commands will use: <HOST_IP>:<NGINX_HTTPS_PORT> |
130 | 130 | Checking status of dlstreamer-pipeline-server... |
131 | 131 | Server reachable. HTTP Status Code: 200 |
132 | 132 | Getting list of loaded pipelines... |
|
150 | 150 | ------------------------------------------- |
151 | 151 | Environment variables loaded from /home/intel/IRD/edge-ai-suites/manufacturing-ai-suite/industrial-edge-insights-vision/helm/temp_apps/pallet-defect-detection/pdd2/.env |
152 | 152 | Running sample app: pallet-defect-detection |
153 | | - Using Helm deployment - curl commands will use: 10.223.23.150:30444 |
| 153 | + Using Helm deployment - curl commands will use: <HOST_IP>:<NGINX_HTTPS_PORT> |
154 | 154 | Checking status of dlstreamer-pipeline-server... |
155 | 155 | Server reachable. HTTP Status Code: 200 |
156 | 156 | Getting list of loaded pipelines... |
|
174 | 174 | ------------------------------------------- |
175 | 175 | Environment variables loaded from /home/intel/IRD/edge-ai-suites/manufacturing-ai-suite/industrial-edge-insights-vision/helm/temp_apps/weld-porosity/weld1/.env |
176 | 176 | Running sample app: weld-porosity |
177 | | - Using Helm deployment - curl commands will use: 10.223.23.150:30445 |
| 177 | + Using Helm deployment - curl commands will use: <HOST_IP>:<NGINX_HTTPS_PORT> |
178 | 178 | Checking status of dlstreamer-pipeline-server... |
179 | 179 | Server reachable. HTTP Status Code: 200 |
180 | 180 | Getting list of loaded pipelines... |
|
307 | 307 | 2. Start the pipeline for <INSTANCE_NAME>: |
308 | 308 |
|
309 | 309 | ```bash |
310 | | - ./sample_start.sh -i <INSTANCE_NAME> -p <PIPELINE_NAME> |
| 310 | + ./sample_start.sh helm -i <INSTANCE_NAME> -p <PIPELINE_NAME> |
311 | 311 | ``` |
312 | 312 |
|
313 | 313 | Output: |
|
343 | 343 | 1. Fetch the list of pipeline for <INSTANCE_NAME>: |
344 | 344 |
|
345 | 345 | ```bash |
346 | | - ./sample_list.sh -i <INSTANCE_NAME> |
| 346 | + ./sample_list.sh helm -i <INSTANCE_NAME> |
347 | 347 | ``` |
348 | 348 |
|
349 | 349 | Example Output: |
|
375 | 375 |
|
376 | 376 | ```text |
377 | 377 | Instance name set to: pdd1 |
378 | | - Custom payload file set to: custom_payload_corrected.json |
| 378 | + Custom payload file set to: custom_payload.json |
379 | 379 | Starting specified pipeline(s)... |
380 | 380 | Found SAMPLE_APP: pallet-defect-detection for INSTANCE_NAME: pdd1 |
381 | 381 | Environment variables loaded from /home/intel/IRD/edge-ai-suites/manufacturing-ai-suite/industrial-edge-insights-vision/helm/temp_apps/pallet-defect-detection/pdd1/.env |
382 | 382 | Running sample app: pallet-defect-detection |
383 | 383 | Using Helm deployment - curl commands will use: <HOST_IP>:<NGINX_HTTPS_PORT> |
384 | 384 | Checking status of dlstreamer-pipeline-server... |
385 | 385 | Server reachable. HTTP Status Code: 200 |
386 | | - Loading payload from custom_payload_corrected.json |
| 386 | + Loading payload from custom_payload.json |
387 | 387 | Payload loaded successfully. |
388 | 388 | Starting pipeline: pallet_defect_detection_gpu |
389 | 389 | Launching pipeline: pallet_defect_detection_gpu |
@@ -740,3 +740,127 @@ Applications can take advantage of S3 publish feature from DL Streamer Pipeline |
740 | 740 | ```sh |
741 | 741 | ./run.sh helm_uninstall |
742 | 742 | ``` |
| 743 | +
|
| 744 | +## MLOps using Model Download |
| 745 | +
|
| 746 | +1. Run all the steps in the [section above](#setup-the-application) to set up the application.
| 747 | +
|
| 748 | +2. Install the Helm chart:
| 749 | +
|
| 750 | + ```sh |
| 751 | + ./run.sh helm_install |
| 752 | + ``` |
| 753 | +
|
| 754 | +3. Copy resources such as videos and models from the local directory to the `dlstreamer-pipeline-server` pod so that they are available to the application when launching pipelines.
| 755 | +
|
| 756 | + ```sh |
| 757 | + # Below is an example for Pallet Defect Detection. Please adjust the source path of models and videos appropriately for other sample applications. |
| 758 | +
|
| 759 | + POD_NAME=$(kubectl get pods -n <INSTANCE_NAME> -o jsonpath='{.items[*].metadata.name}' | tr ' ' '\n' | grep deployment-dlstreamer-pipeline-server | head -n 1) |
| 760 | +
|
| 761 | + kubectl cp resources/pallet-defect-detection/videos/warehouse.avi $POD_NAME:/home/pipeline-server/resources/videos/ -c dlstreamer-pipeline-server -n <INSTANCE_NAME> |
| 762 | +
|
| 763 | + kubectl cp resources/pallet-defect-detection/models/* $POD_NAME:/home/pipeline-server/resources/models/ -c dlstreamer-pipeline-server -n <INSTANCE_NAME> |
| 764 | + ``` |
| 765 | +
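The pod-name lookup in step 3 works because `kubectl ... -o jsonpath='{.items[*].metadata.name}'` prints all pod names space-separated; `tr`, `grep`, and `head` then pick the first `dlstreamer-pipeline-server` pod. A quick local simulation of that filter (the pod names below are made up for illustration):

```shell
# Simulated output of:
#   kubectl get pods -n <INSTANCE_NAME> -o jsonpath='{.items[*].metadata.name}'
# (these pod names are examples, not real cluster output)
pods="coturn-0 deployment-dlstreamer-pipeline-server-7f9d4b-xkq2p mediamtx-0"

# Same filter as in step 3: split on spaces, keep the first matching pod name.
POD_NAME=$(echo "$pods" | tr ' ' '\n' | grep deployment-dlstreamer-pipeline-server | head -n 1)
echo "$POD_NAME"
```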
|
| 766 | +4. Modify the payload in `helm/temp_apps/<SAMPLE_APP>/<INSTANCE_NAME>/payload.json` to launch an instance of the MLOps pipeline.
| 767 | + |
| 768 | + Below is an example for pallet-defect-detection. Please modify the payload for other sample applications. |
| 769 | +
|
| 770 | + ```json |
| 771 | + [ |
| 772 | + { |
| 773 | + "pipeline": "pallet_defect_detection_mlops", |
| 774 | + "payload":{ |
| 775 | + "source": { |
| 776 | + "uri": "file:///home/pipeline-server/resources/videos/warehouse.avi", |
| 777 | + "type": "uri" |
| 778 | + }, |
| 779 | + "destination": { |
| 780 | + "frame": { |
| 781 | + "type": "webrtc", |
| 782 | + "peer-id": "pdd" |
| 783 | + } |
| 784 | + }, |
| 785 | + "parameters": { |
| 786 | + "detection-properties": { |
| 787 | + "model": "/home/pipeline-server/resources/models/pallet-defect-detection/deployment/Detection/model/model.xml", |
| 788 | + "device": "CPU" |
| 789 | + } |
| 790 | + } |
| 791 | + } |
| 792 | + } |
| 793 | + ] |
| 794 | + ``` |
| 795 | +
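One easy mistake when hand-editing `payload.json` is breaking the JSON syntax. A minimal sketch of a pre-flight check, assuming `python3` is available (a temporary stand-in file is created here; in practice point `PAYLOAD` at the real `helm/temp_apps/<SAMPLE_APP>/<INSTANCE_NAME>/payload.json`):

```shell
# Stand-in payload file for demonstration; use the real payload.json in practice.
PAYLOAD=$(mktemp)
echo '[{"pipeline": "pallet_defect_detection_mlops"}]' > "$PAYLOAD"

# json.tool exits non-zero on malformed JSON, so "payload OK" only prints
# when the edited file is still valid JSON.
python3 -m json.tool "$PAYLOAD" > /dev/null && echo "payload OK"
```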
|
| 796 | +5. Start the pipeline with the above payload. |
| 797 | +
|
| 798 | + Below is an example for starting an instance for pallet-defect-detection: |
| 799 | +
|
| 800 | + ```sh |
| 801 | + ./sample_start.sh helm -i <INSTANCE_NAME> -p pallet_defect_detection_mlops |
| 802 | + ``` |
| 803 | + Note the instance-id returned; it is needed to stop the pipeline in step 8.
| 804 | +
|
| 805 | +6. Download and prepare the model. Below is an example for pallet-defect-detection; modify `MODEL_URL` for the other sample applications.
| 806 | + >NOTE- For the sake of simplicity, the following curl command merely simulates the download. In production, the new model would be fetched by the Model Download microservice.
| 807 | +
|
| 808 | + ```sh |
| 809 | + export MODEL_URL='https://github.com/open-edge-platform/edge-ai-resources/raw/a7c9522f5f936c47de8922046db7d7add13f93a0/models/INT8/pallet_defect_detection.zip' |
| 810 | +
|
| 811 | + curl -L "$MODEL_URL" -o "$(basename $MODEL_URL)" |
| 812 | +
|
| 813 | + unzip "$(basename $MODEL_URL)" -d new-model # downloaded model is now extracted to `new-model` directory. |
| 814 | + ``` |
| 815 | +
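The `$(basename $MODEL_URL)` expansion in the snippet above names the downloaded archive after the last path segment of the URL, so the extracted zip matches what the server published:

```shell
# Same URL as in step 6; basename strips everything up to the last '/'.
MODEL_URL='https://github.com/open-edge-platform/edge-ai-resources/raw/a7c9522f5f936c47de8922046db7d7add13f93a0/models/INT8/pallet_defect_detection.zip'
basename "$MODEL_URL"   # the archive file name used by curl -o
```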
|
| 816 | +7. Copy the new model to the `dlstreamer-pipeline-server` pod so that it is available to the application when launching the pipeline.
| 817 | +
|
| 818 | + ```sh |
| 819 | + |
| 820 | + POD_NAME=$(kubectl get pods -n <INSTANCE_NAME> -o jsonpath='{.items[*].metadata.name}' | tr ' ' '\n' | grep deployment-dlstreamer-pipeline-server | head -n 1) |
| 821 | +
|
| 822 | + kubectl cp new-model $POD_NAME:/home/pipeline-server/resources/models/ -c dlstreamer-pipeline-server -n <INSTANCE_NAME> |
| 823 | + ``` |
| 824 | + >NOTE- If there are multiple `sample_apps` in `config.yml`, repeat steps 6 and 7 for each sample app and instance.
| 825 | +
|
| 826 | +
|
| 827 | +8. Stop the existing pipeline before restarting it with the new model. Use the instance-id noted in step 5.
| 828 | + ```sh |
| 829 | + curl -k --location -X DELETE https://<HOST_IP>:<NGINX_HTTPS_PORT>/api/pipelines/{instance_id} |
| 830 | + ``` |
| 831 | +
|
| 832 | +9. Modify the payload in `helm/temp_apps/<SAMPLE_APP>/<INSTANCE_NAME>/payload.json` to launch an instance of the MLOps pipeline with the new model.
| 833 | + |
| 834 | + Below is an example for pallet-defect-detection. Please modify the payload for other sample applications. |
| 835 | +
|
| 836 | + ```json |
| 837 | + [ |
| 838 | + { |
| 839 | + "pipeline": "pallet_defect_detection_mlops", |
| 840 | + "payload":{ |
| 841 | + "source": { |
| 842 | + "uri": "file:///home/pipeline-server/resources/videos/warehouse.avi", |
| 843 | + "type": "uri" |
| 844 | + }, |
| 845 | + "destination": { |
| 846 | + "frame": { |
| 847 | + "type": "webrtc", |
| 848 | + "peer-id": "pdd" |
| 849 | + } |
| 850 | + }, |
| 851 | + "parameters": { |
| 852 | + "detection-properties": { |
| 853 | + "model": "/home/pipeline-server/resources/models/new-model/deployment/Detection/model/model.xml", |
| 854 | + "device": "CPU" |
| 855 | + } |
| 856 | + } |
| 857 | + } |
| 858 | + } |
| 859 | + ] |
| 860 | + ```
|
| 861 | +10. View the WebRTC stream at `https://<HOST_IP>:<NGINX_HTTPS_PORT>/mediamtx/<peer-str-id>/`, replacing `<peer-str-id>` with the `peer-id` value from the payload used to start the pipeline (`pdd` in the example above).
| 862 | +
|
| 863 | +
|
| 864 | +## Troubleshooting |
| 865 | +
|
| 866 | +- [Troubleshooting Guide](../troubleshooting.md) |