
Get Started

The Smart Parking application uses AI-driven video analytics to optimize parking management. It provides a modular architecture that integrates seamlessly with various input sources and leverages AI models to deliver accurate and actionable insights.

By following this guide, you will learn how to:

  • Set up the sample application: Use Docker Compose to quickly deploy the application in your environment.
  • Run a predefined pipeline: Execute a pipeline to see the smart parking application in action.
  • Access the application's features and user interfaces: Explore the Grafana dashboard, Node-RED interface, and DL Streamer Pipeline Server to monitor, analyze and customize workflows.

Prerequisites

Set up and First Use

  1. Clone the Suite:

    Go to the target directory of your choice and clone the suite. If you want to clone a specific release branch, replace main with the desired tag. To learn more about partial cloning, check the Repository Cloning guide.

    git clone --filter=blob:none --sparse --branch main https://github.com/open-edge-platform/edge-ai-suites.git
    cd edge-ai-suites
    git sparse-checkout set metro-ai-suite
    cd metro-ai-suite/metro-vision-ai-app-recipe/
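After the sparse checkout, it can be worth confirming that the directory you need was actually pulled in before running the installer. The check below is a small sketch of our own (the helper function name is not part of the suite); it only tests for the path used in this guide.

```shell
# Hypothetical helper: confirm the sparse checkout produced the
# metro-vision-ai-app-recipe directory under the given repo root.
check_suite_dir() {
    [ -d "$1/metro-ai-suite/metro-vision-ai-app-recipe" ]
}

# Usage, from the directory where you ran `git clone`:
# check_suite_dir edge-ai-suites && echo "Sparse checkout OK"
```

If the directory is missing, re-run `git sparse-checkout set metro-ai-suite` inside the cloned repository.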
  2. Setup Application and Download Assets:

    • Use the installation script to configure the application and download required models:

      ./install.sh smart-parking

Note: For environments requiring a specific host IP address (such as when using Edge Manageability Toolkit or deploying across different network interfaces), you can explicitly specify the IP address: ./install.sh smart-parking <HOST_IP> (replace <HOST_IP> with your target IP address).
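When scripting the installation, a quick sanity check on the `<HOST_IP>` argument can catch typos before they propagate into the deployed configuration. The validation helper below is our own sketch, not part of install.sh:

```shell
# Hypothetical sketch: validate an IPv4 address string before passing it
# to install.sh. Checks the dotted-quad shape and that each octet <= 255.
valid_ipv4() {
    local ip="$1" octet
    [[ "$ip" =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}$ ]] || return 1
    IFS='.' read -ra parts <<< "$ip"
    for octet in "${parts[@]}"; do
        (( octet <= 255 )) || return 1
    done
}

# Usage (the IP below is a placeholder; substitute your own):
# valid_ipv4 "192.168.1.50" && ./install.sh smart-parking 192.168.1.50
```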

Run the Application

  1. Start the Application:

    • Download container images with Application microservices and run with Docker Compose:

      docker compose up -d
    • Check the status of the microservices. The application starts the services listed below; to verify that all of them are in the Running state:

      docker ps

      Expected Services:

      • Grafana Dashboard
      • DL Streamer Pipeline Server
      • MQTT Broker
      • Node-RED (for applications without Intel® SceneScape)
      • Intel® SceneScape services (for Smart Intersection only)
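The presence check for the expected services can be automated. The sketch below is an assumption-laden helper of our own: the service name fragments are guesses at what `docker ps` will show, so adjust them to the container names in your deployment.

```shell
# Hypothetical sketch: report any expected service missing from
# `docker ps` output. Name fragments are assumptions; edit to match
# the container names your deployment actually uses.
check_services() {
    local ps_output="$1" svc missing=0
    for svc in grafana pipeline-server mqtt node-red; do
        if ! grep -qi "$svc" <<< "$ps_output"; then
            echo "MISSING: $svc"
            missing=1
        fi
    done
    return "$missing"
}

# Usage: check_services "$(docker ps --format '{{.Names}}')"
```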
  2. Run Predefined Pipelines:

    • Start video streams to run video inference pipelines:

      ./sample_start.sh
    • To check the status of the pipelines:

      ./sample_status.sh
    • To stop the pipelines without waiting for the video streams to finish replaying:

      ./sample_stop.sh

      Note: This stops all pipelines and streams. DO NOT run it while you still want to observe smart parking detections.
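Pipeline state can also be checked programmatically against the Pipeline Server's REST endpoint listed later in this guide (https://localhost/api/pipelines/status). The parsing below is a rough grep-based sketch of our own and assumes the response JSON reports a "state" field with values such as RUNNING:

```shell
# Hypothetical sketch: count pipelines reported as RUNNING in the
# status JSON. Assumes a `"state": "RUNNING"` field per pipeline.
count_running() {
    grep -o '"state": *"RUNNING"' <<< "$1" | wc -l
}

# Usage (the -k flag skips TLS verification for the local self-signed cert):
# status_json="$(curl -sk https://localhost/api/pipelines/status)"
# echo "Running pipelines: $(count_running "$status_json")"
```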

  3. View the Application Output:

    • Open a browser and go to https://localhost/grafana to access the Grafana dashboard.
      • Replace localhost with your host IP if you are accessing the dashboard remotely.
    • Log in with the following credentials:
      • Username: admin
      • Password: admin
    • Check under the Dashboards section for the application-specific preloaded dashboard.
    • Expected Results: The dashboard displays real-time video streams with AI overlays and detection metrics.

Access the Application and Components

Nginx Dashboard

  • URL: https://localhost

Grafana UI

  • URL: https://localhost/grafana
  • Log in with credentials:
    • Username: admin
    • Password: admin (You will be prompted to change it on first login.)
  • In the Grafana UI, the dashboard displays the cars detected in the parking lot.

Node-RED UI

  • URL: https://localhost/nodered/

DL Streamer Pipeline Server

  • REST API: https://localhost/api/pipelines/status
  • WebRTC: https://localhost/mediamtx/object_detection_1/

Stop the Application

  • To stop the application microservices, use the following command:

    docker compose down

Other Deployment Options

Supporting Resources