The Smart Parking application uses AI-driven video analytics to optimize parking management. It provides a modular architecture that integrates seamlessly with various input sources and leverages AI models to deliver accurate and actionable insights.
By following this guide, you will learn how to:
- Set up the sample application: Use Docker Compose to quickly deploy the application in your environment.
- Run a predefined pipeline: Execute a pipeline to see the Smart Parking application in action.
- Access the application's features and user interfaces: Explore the Grafana dashboard, Node-RED interface, and DL Streamer Pipeline Server to monitor, analyze and customize workflows.
Prerequisites:
- Verify that your system meets the minimum requirements.
- Install Docker by following the Docker Installation Guide.
Clone the Suite:
Go to the target directory of your choice and clone the suite. If you want to clone a specific release branch, replace `main` with the desired tag. To learn more about partial cloning, check the Repository Cloning guide.

```shell
git clone --filter=blob:none --sparse --branch main https://github.com/open-edge-platform/edge-ai-suites.git
cd edge-ai-suites
git sparse-checkout set metro-ai-suite
cd metro-ai-suite/metro-vision-ai-app-recipe/
```
Set Up the Application and Download Assets:

Use the installation script to configure the application and download the required models:

```shell
./install.sh smart-parking
```
Note: For environments requiring a specific host IP address (such as when using the Edge Manageability Toolkit or deploying across different network interfaces), you can specify the IP address explicitly, replacing `<HOST_IP>` with your target IP address:

```shell
./install.sh smart-parking <HOST_IP>
```
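If you are unsure which address to pass, one way to discover the host's primary IPv4 address on Linux is sketched below. This is a convenience sketch, not part of the installer; the fallback to loopback is an assumption for illustration.

```shell
# Sketch: pick the first IPv4 address reported for this host (Linux).
HOST_IP=$(hostname -I 2>/dev/null | awk '{print $1}')
HOST_IP=${HOST_IP:-127.0.0.1}   # fall back to loopback if none is found
echo "Using HOST_IP=$HOST_IP"
# Then pass it to the installer:
# ./install.sh smart-parking "$HOST_IP"
```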
Start the Application:

Download the container images for the application microservices and start them with Docker Compose:

```shell
docker compose up -d
```
Check the Status of Microservices:

The application starts the following microservices. To check that all microservices are in the Running state:

```shell
docker ps
```
Expected Services:
- Grafana Dashboard
- DL Streamer Pipeline Server
- MQTT Broker
- Node-RED (for applications without Intel® SceneScape)
- Intel® SceneScape services (for Smart Intersection only)
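As a quick sanity check, the loop below compares the list of running containers against a list of expected service names. The names used here are assumptions for illustration; adjust them to match the container names in your compose file. The `running` list is hard-coded so the sketch is self-contained; in real use, populate it from `docker ps` as shown in the comment.

```shell
# Sketch: verify expected services appear among running containers.
# Real usage: running=$(docker ps --format '{{.Names}}')
running="grafana dlstreamer-pipeline-server mqtt-broker node-red"
for svc in grafana mqtt-broker node-red; do
  case " $running " in
    *" $svc "*) echo "$svc: running" ;;
    *)          echo "$svc: MISSING" ;;
  esac
done
```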
Run Predefined Pipelines:
Start video streams to run the video inference pipelines:

```shell
./sample_start.sh
```
To check the status of the pipelines:

```shell
./sample_status.sh
```
Stop Pipelines:

To stop the pipelines without waiting for the video streams to finish replaying:

```shell
./sample_stop.sh
```

Note: This stops all pipelines and streams. Do not run it if you want to see Smart Parking detection in action.
View the Application Output:

- Open a browser and go to `https://localhost/grafana` to access the Grafana dashboard. Change `localhost` to your host IP if you are accessing it remotely.
- Log in with the following credentials:
  - Username: `admin`
  - Password: `admin` (you will be prompted to change it on first login)
- Check under the Dashboards section for the application-specific preloaded dashboard.
- Expected Results: The dashboard displays real-time video streams with AI overlays and detection metrics.
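To script a check that Grafana is reachable, you can hit its health endpoint (Grafana exposes `/api/health`). The routing through the `https://localhost/grafana` prefix is taken from this guide; since the call only works with the stack running, it is shown commented out, and an illustrative healthy response is parsed instead.

```shell
# With the stack running:
#   curl -ks https://localhost/grafana/api/health
# Illustrative healthy response (fields per Grafana's health API):
health='{"database":"ok","version":"10.0.0"}'
case "$health" in
  *'"database":"ok"'*) echo "grafana healthy" ;;
  *)                   echo "grafana unhealthy" ;;
esac
```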
Service Endpoints:

- Application: `https://localhost`
- Grafana: `https://localhost/grafana` (log in with the credentials above; the dashboard displays the detected cars in the parking lot)
- Node-RED: `https://localhost/nodered/`
- REST API: `https://localhost/api/pipelines/status`
- WebRTC: `https://localhost/mediamtx/object_detection_1/`
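The REST API endpoint can also be polled from the command line. The snippet below extracts a pipeline state from a status response; the exact JSON shape shown is an assumption for illustration, and the actual `curl` call (commented out) requires the stack to be running.

```shell
# Query the pipeline status endpoint (run on the host; -k accepts the
# self-signed certificate):
#   curl -k https://localhost/api/pipelines/status
# Illustrative response (the exact JSON shape is an assumption):
status='[{"id":"object_detection_1","state":"RUNNING"}]'
# Extract the state field:
echo "$status" | grep -o '"state":"[A-Z]*"' | cut -d'"' -f4
```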
To stop the application microservices, use the following command:

```shell
docker compose down
```
- Deploy Using Helm: Use Helm to deploy the application to a Kubernetes cluster for scalable and production-ready deployments.
- Deploy with Edge Orchestrator: Use a simplified edge application deployment process.
- Troubleshooting: Find detailed steps to resolve common issues during deployments.
- DL Streamer Pipeline Server: a Python-based Intel microservice providing video ingestion and deep learning inference functions.