Note that many of these services have built-in ML models, and thus do not need to be run alongside an ML model service.
## Machine inference
You can use `viam-server` to deploy and run ML models directly on your machines.
You can run inference on your machine in the following ways:
- with a vision service
- manually in application logic with an SDK
Entry-level devices such as the Raspberry Pi 4 can run small ML models, such as TensorFlow Lite (TFLite) models.
More powerful hardware, including the Jetson Xavier or Raspberry Pi 5 with an AI HAT+, can process larger AI models, including TensorFlow and ONNX.
{{< tabs >}}
{{% tab name="Vision service" %}}
Vision services apply an ML model to a stream of images from a camera to generate bounding boxes or classifications.
{{% alert title="Tip" color="tip" %}}
Some vision services include their own ML models, and thus do not require a deployed ML model.
If your vision service does not include an ML model, you must [deploy an ML model to your machine](/data-ai/ai/deploy/) to use that service.
{{% /alert %}}
To use a vision service:
1. Visit the **CONFIGURE** page of the Viam app.
1. Click the **+** icon next to your main machine part and select **Component or service**.
1. Type in the name of the service and select a vision service.
1. If your vision service does not include an ML model, [deploy an ML model to your machine](/data-ai/ai/deploy/) to use that service.
1. Configure the service based on your use case (see the example configuration after this list).
1. To view the deployed vision service, use the live detection feed in the Viam app.
   The feed shows an overlay of detected objects or classifications on top of a live camera feed.
   On the **CONFIGURE** or **CONTROL** pages for your machine, expand the **Test** area of the service panel to view the feed.
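For reference, a vision service configured this way appears in your machine's JSON configuration roughly as in the following sketch. The names `vision-1` and `mlmodel-1` are placeholders for whatever you named your vision and ML model services, and this assumes the `viam:vision:mlmodel` model:

```json
{
  "name": "vision-1",
  "namespace": "rdk",
  "type": "vision",
  "model": "mlmodel",
  "attributes": {
    "mlmodel_name": "mlmodel-1"
  }
}
```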
{{< imgproc src="/tutorials/data-management/blue-star.png" alt="Detected blue star" resize="x200" class="shadow" >}}
{{< imgproc src="/tutorials/filtered-camera-module/viam-figure-preview.png" alt="Detection of a viam figure with a confidence score of 0.97" resize="x200" class="shadow" >}}
For instance, you could use [`viam:vision:mlmodel`](/operate/reference/services/vision/mlmodel/) with the `EfficientDet-COCO` ML model to detect a variety of objects, including people, bicycles, and apples, in a camera feed.
Alternatively, you could use [`viam-soleng:vision:openalpr`](https://app.viam.com/module/viam-soleng/viamalpr) to detect license plates in images.
Since this service includes its own ML model, there is no need to configure a separate ML model.
After adding a vision service, you can use a vision service API method with a classifier or a detector to get inferences programmatically.
For more information, see the APIs for [ML Model](/dev/reference/apis/services/ml/) and [Vision](/dev/reference/apis/services/vision/).
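As a minimal sketch of what that looks like with the Python SDK, the following gets detections from a configured vision service. The service name `vision-1`, camera name `camera-1`, and the connection placeholders are assumptions; substitute the names and credentials from your own machine's **CONNECT** tab:

```python
import asyncio

from viam.robot.client import RobotClient
from viam.services.vision import VisionClient


async def main():
    # Connect to the machine using its address and API key
    # (placeholders below).
    opts = RobotClient.Options.with_api_key(
        api_key="<API-KEY>",
        api_key_id="<API-KEY-ID>",
    )
    machine = await RobotClient.at_address("<MACHINE-ADDRESS>", opts)

    # "vision-1" and "camera-1" are hypothetical names for your
    # configured vision service and camera.
    detector = VisionClient.from_robot(machine, "vision-1")
    detections = await detector.get_detections_from_camera("camera-1")

    # Each detection carries a class name, a confidence score, and
    # bounding box coordinates.
    for detection in detections:
        print(f"{detection.class_name}: {detection.confidence:.2f}")

    await machine.close()


asyncio.run(main())
```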
{{% /tab %}}
{{% tab name="SDK" %}}
With the Viam SDK, you can pass image data to an ML model service, read the output annotations, and react to output in your own code.
Use the [`Infer`](/dev/reference/apis/services/ml/#infer) method of the ML Model API to make inferences.
{{< card link="/dev/reference/apis/services/ml/" customTitle="ML Model API" noimage="True" >}}
`infer` returns a list of detected classes or bounding boxes depending on the output of the ML model you specified, as well as a list of confidence values for those classes or boxes.
This method returns bounding box output using proportional coordinates between 0 and 1, with the origin `(0, 0)` in the top left of the image and `(1, 1)` in the bottom right.
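The following Python sketch shows one way to call `Infer`. It assumes an ML model service named `mlmodel-1` and an input tensor named `image` with a 300x300 RGB shape; the actual tensor names, shapes, and dtypes for your model come from its metadata, which you can inspect as shown:

```python
import asyncio

import numpy as np
from viam.robot.client import RobotClient
from viam.services.mlmodel import MLModelClient


async def main():
    opts = RobotClient.Options.with_api_key(
        api_key="<API-KEY>",
        api_key_id="<API-KEY-ID>",
    )
    machine = await RobotClient.at_address("<MACHINE-ADDRESS>", opts)

    # "mlmodel-1" is a placeholder name for your ML model service.
    mlmodel = MLModelClient.from_robot(machine, "mlmodel-1")

    # Check the model's metadata for the expected input tensor
    # names, shapes, and data types before building inputs.
    metadata = await mlmodel.metadata()
    print(metadata.input_info)

    # "image" and the shape below are hypothetical; match them to
    # your model's metadata. A real application would pass pixel
    # data from a camera here instead of zeros.
    input_tensors = {"image": np.zeros((1, 300, 300, 3), dtype=np.uint8)}

    # infer returns a dictionary of named output tensors, which
    # encode the classes or bounding boxes and their confidences.
    output_tensors = await mlmodel.infer(input_tensors)
    for name, tensor in output_tensors.items():
        print(name, tensor.shape)

    await machine.close()


asyncio.run(main())
```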
{{% /tab %}}
{{< /tabs >}}

### `infer`

The `infer` command enables you to run [cloud inference](/data-ai/ai/run-inference/#cloud-inference) on data. Cloud inference runs in the cloud instead of on a local machine.
| Flag | Description | Required? |
| ---- | ----------- | --------- |
| `--binary-data-id` | The binary data ID of the image you want to run inference on. | **Required** |
| `--model-name` | The name of the model that you want to run in the cloud. | **Required** |
| `--model-version` | The version of the model that you want to run in the cloud. To find the latest version string for a model, visit the [registry page](https://app.viam.com/registry?type=ML+Model) for that model. You can find the latest version string in the **Version history** section, for instance "2024-02-16T12-55-32". Pass this value as a string, using double quotes. | **Required** |
| `--org-id` | The organization ID of the organization that will run the inference. | **Required** |
| `--model-org-id` | The organization ID of the organization that owns the model. | **Required** |
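For example, an invocation might look like the following sketch. The flags come from the table above, but the `viam infer` subcommand path and all placeholder values are assumptions; check `viam --help` for the exact invocation on your CLI version:

```sh
viam infer \
    --binary-data-id <BINARY-DATA-ID> \
    --model-name <MODEL-NAME> \
    --model-version "2024-02-16T12-55-32" \
    --org-id <ORG-ID> \
    --model-org-id <MODEL-ORG-ID>
```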
### `locations`
The `locations` command allows you to manage the [locations](/manage/reference/organize/) that you have access to.