
Get Started

  • Time to Complete: 30 minutes
  • Programming Language: Python 3

Prerequisites

Set up the application

The following instructions assume the Docker engine is correctly set up on the host system. If not, follow the Docker Engine installation guide at docker.com.
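Before continuing, it may help to confirm that both the Docker engine and the Compose plugin respond. The probe below is a small sketch, not part of the suite's scripts; it prints `missing: ...` for anything that still needs installing.

```shell
# Check that the Docker engine and the Compose plugin are usable.
# Prints "ok: ..." or "missing: ..." for each probe command.
for probe in "docker --version" "docker compose version"; do
  if $probe >/dev/null 2>&1; then
    echo "ok: $probe"
  else
    echo "missing: $probe"
  fi
done
```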

  1. Clone the edge-ai-suites repository and change into the industrial-edge-insights-vision directory. This directory contains the utility scripts used in the instructions that follow.

Go to a target directory of your choice and clone the suite. To clone a specific release branch, replace main with the desired tag. To learn more about partial cloning, see the Repository Cloning guide.

```bash
git clone --filter=blob:none --sparse --branch main https://github.com/open-edge-platform/edge-ai-suites.git
cd edge-ai-suites
git sparse-checkout set manufacturing-ai-suite
cd manufacturing-ai-suite/industrial-edge-insights-vision
```
  2. Set the app-specific environment variable file:

    cp .env_weld-porosity .env
  3. Edit the following environment variables in the .env file:

    HOST_IP=<HOST_IP>   # IP address of server where DL Streamer Pipeline Server is running.
    
    MINIO_ACCESS_KEY=   # MinIO service & client access key e.g. intel1234
    MINIO_SECRET_KEY=   # MinIO service & client secret key e.g. intel1234
    
    MTX_WEBRTCICESERVERS2_0_USERNAME=<username>  # WebRTC credentials e.g. intel1234
    MTX_WEBRTCICESERVERS2_0_PASSWORD=<password>
    
    # application directory
    SAMPLE_APP=weld-porosity
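    These edits can also be scripted with sed instead of an editor. The sketch below runs against a throwaway demo file so nothing real is touched; the addresses and credentials are placeholders, and you would point `ENV_FILE` at `.env` to apply the same substitutions for real.

```shell
# Sketch: fill in .env values non-interactively with sed.
# demo.env and the values below are placeholders for illustration only.
ENV_FILE=demo.env
printf 'HOST_IP=<HOST_IP>\nMINIO_ACCESS_KEY=\nMINIO_SECRET_KEY=\n' > "$ENV_FILE"
sed -i \
  -e 's/^HOST_IP=.*/HOST_IP=192.168.1.10/' \
  -e 's/^MINIO_ACCESS_KEY=.*/MINIO_ACCESS_KEY=intel1234/' \
  -e 's/^MINIO_SECRET_KEY=.*/MINIO_SECRET_KEY=intel1234/' \
  "$ENV_FILE"
cat "$ENV_FILE"
```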
  4. Install prerequisites. Run with sudo if needed.

    ./setup.sh

    This sets up application prerequisites, downloads artifacts, and sets executable permissions for scripts. Downloaded resource directories are made available to the application automatically via volume mounts in the Docker Compose file.

Deploy the Application

  1. Start the Docker application:

    docker compose up -d

    The Docker daemon should start automatically at boot. If it has not, start it manually:

    sudo systemctl start docker

    NOTE: If you're running multiple instances of the app, start the services using ./run.sh up instead.
  2. Fetch the list of pipelines loaded and available to launch:

    ./sample_list.sh

    This lists the pipelines loaded in DL Streamer Pipeline Server.

    Example Output:

    # Example output for Weld Porosity Classification
    Environment variables loaded from /home/intel/OEP/new/edge-ai-suites/manufacturing-ai-suite/industrial-edge-insights-vision/.env
    Running sample app: weld-porosity
    Checking status of dlstreamer-pipeline-server...
    Server reachable. HTTP Status Code: 200
    Loaded pipelines:
    [
        ...
        {
            "description": "DL Streamer Pipeline Server pipeline",
            "name": "user_defined_pipelines",
            "parameters": {
            "properties": {
                "classification-properties": {
                "element": {
                    "format": "element-properties",
                    "name": "classification"
                }
                }
            },
            "type": "object"
            },
            "type": "GStreamer",
            "version": "weld_porosity_classification"
        },
        ...
    ]
  3. Start the sample application with a pipeline.

    ./sample_start.sh -p weld_porosity_classification

    This command looks up the payload for the pipeline specified with the -p argument in the payload.json file and launches a pipeline instance in DL Streamer Pipeline Server. Refer to the table to learn about the different options available.

    IMPORTANT: Before you run sample_start.sh script, make sure that jq is installed on your system. See the troubleshooting guide for more details.

    Output:

    # Example output for Weld Porosity Classification
    Environment variables loaded from /home/intel/OEP/new/edge-ai-suites/manufacturing-ai-suite/industrial-edge-insights-vision/.env
    Running sample app: weld-porosity
    Checking status of dlstreamer-pipeline-server...
    Server reachable. HTTP Status Code: 200
    Loading payload from /home/intel/OEP/new/edge-ai-suites/manufacturing-ai-suite/industrial-edge-insights-vision/apps/weld-porosity/payload.json
    Payload loaded successfully.
    Starting pipeline: weld_porosity_classification
    Launching pipeline: weld_porosity_classification
    Extracting payload for pipeline: weld_porosity_classification
    Found 1 payload(s) for pipeline: weld_porosity_classification
    Payload for pipeline 'weld_porosity_classification' {"source":{"uri":"file:///home/pipeline-server/resources/videos/welding.avi","type":"uri"},"destination":{"frame":{"type":"webrtc","peer-id":"weld"}},"parameters":{"classification-properties":{"model":"/home/pipeline-server/resources/models/weld-porosity/deployment/Classification/model/model.xml","device":"CPU"}}}
    Posting payload to REST server at https://<HOST_IP>/api/pipelines/user_defined_pipelines/weld_porosity_classification
    Payload for pipeline 'weld_porosity_classification' posted successfully. Response: "6d06422c5c7511f091f03266c7df2abf"
    

    NOTE: This starts the pipeline. The inference stream can be viewed over WebRTC in a browser at the following URL:

    https://<HOST_IP>/mediamtx/weld/

    If you're running multiple instances of the app, include the NGINX_HTTPS_PORT number in the URL for your app instance, i.e. replace <HOST_IP> with <HOST_IP>:<NGINX_HTTPS_PORT>.
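The payload lookup that sample_start.sh performs can be sketched with jq. The JSON layout below is a hypothetical stand-in written to /tmp, not the suite's actual payload.json schema; the real file lives under apps/weld-porosity/.

```shell
# Sketch: select the payload for a named pipeline with jq.
# /tmp/payload.json and its structure are assumptions for illustration.
PIPELINE=weld_porosity_classification
cat > /tmp/payload.json <<'EOF'
[
  {"pipeline": "weld_porosity_classification",
   "payload": {"source": {"uri": "file:///home/pipeline-server/resources/videos/welding.avi", "type": "uri"}}}
]
EOF
jq -c --arg p "$PIPELINE" '.[] | select(.pipeline == $p) | .payload' /tmp/payload.json
```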
  4. Get the status of running pipeline instance(s).

    ./sample_status.sh

    This command lists the status of pipeline instances launched during the lifetime of the sample application.

    Output:

    # Example output for Weld Porosity Classification
    Environment variables loaded from /home/intel/OEP/new/edge-ai-suites/manufacturing-ai-suite/industrial-edge-insights-vision/.env
    Running sample app: weld-porosity
    [
    {
        "avg_fps": 30.20765374167394,
        "elapsed_time": 2.0193545818328857,
        "id": "0714ca6e5c7611f091f03266c7df2abf",
        "message": "",
        "start_time": 1752032244.3554578,
        "state": "RUNNING"
    }
    ]
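The id field in this status output is what ./sample_stop.sh --id expects. A grep-based sketch of pulling it out of saved status output is shown below; the heredoc stands in for redirected ./sample_status.sh output, and jq '.[].id' would be more robust if jq is installed.

```shell
# Sketch: extract instance IDs from saved status output.
# /tmp/status.json here is a stand-in; a real run would redirect
# the script's output, e.g. ./sample_status.sh > /tmp/status.json
cat > /tmp/status.json <<'EOF'
[
  {"avg_fps": 30.2, "id": "0714ca6e5c7611f091f03266c7df2abf", "state": "RUNNING"}
]
EOF
grep -o '"id": "[^"]*"' /tmp/status.json | cut -d'"' -f4
```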
  5. Stop pipeline instances.

    ./sample_stop.sh

    This command stops all instances currently in the RUNNING state and responds with their last status.

    Output:

    # Example output for Weld Porosity Classification
    No pipelines specified. Stopping all pipeline instances
    Environment variables loaded from /home/intel/OEP/new/edge-ai-suites/manufacturing-ai-suite/industrial-edge-insights-vision/.env
    Running sample app: weld-porosity
    Checking status of dlstreamer-pipeline-server...
    Server reachable. HTTP Status Code: 200
    Instance list fetched successfully. HTTP Status Code: 200
    Found 1 running pipeline instances.
    Stopping pipeline instance with ID: 0714ca6e5c7611f091f03266c7df2abf
    Pipeline instance with ID '0714ca6e5c7611f091f03266c7df2abf' stopped successfully. Response: {
    "avg_fps": 30.00652493520486,
    "elapsed_time": 4.965576171875,
    "id": "0714ca6e5c7611f091f03266c7df2abf",
    "message": "",
    "start_time": 1752032244.3554578,
    "state": "RUNNING"
    }

    To stop a specific instance, identify it with the --id argument. For example, ./sample_stop.sh --id 0714ca6e5c7611f091f03266c7df2abf

  6. Stop the Docker application.

    If you're running multiple instances of the app, stop the services using ./run.sh down instead.

    docker compose down -v

    This brings down the application's services and removes any volumes.

Further Reading

Troubleshooting