CircularNet provides an image-analysis model. It detects _material types_ and
_material forms_. The model utilizes a Mask R-CNN algorithm for image training
and implements ResNet or MobileNet as the convolutional neural networks for
image classification tasks.

The model is loaded once to achieve accurate predictions. When working with
images, each image undergoes preprocessing before the model uses it for
prediction. In the case of video files, the video is split into individual
frames at a given frame rate. These frames are then processed in the same
sequential manner as images.
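
Frame extraction happens inside the pipeline, but as a rough illustration of
the idea, the following command splits a video into frames at 15 frames per
second. This is a sketch only: it assumes `ffmpeg` is installed, and the file
and folder names are placeholders.

```
# Illustrative only; the pipeline performs this step internally.
mkdir -p frames
ffmpeg -i input.mp4 -vf fps=15 frames/frame_%05d.png
```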

The predictions from the model result in two distinct outputs, which are then
post-processed and combined into a single comprehensive output. This output
includes critical information such as the number of detected objects, their
bounding boxes, class names, class IDs, and masks for each object. Further […]
flow and real-time updates. A [prediction pipeline](./learn-about-pipeline) for
Google Cloud pushes the data directly to storage buckets and BigQuery tables,
which you can connect to the dashboard for [visualization and analysis](/official/projects/waste_identification_ml/circularnet-docs/content/view-data/).
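
Once the results land in BigQuery, you can inspect them from the command line.
The following is a minimal sketch that assumes the example project, dataset,
and table IDs used later in this guide:

```
# Preview the first rows of the prediction results table.
bq query --use_legacy_sql=false \
  'SELECT * FROM `my-project.circularnet_dataset.circularnet_table` LIMIT 10'
```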

On the other hand, direct data transfer to the cloud for edge device
implementations needs a client-side configuration. A [prediction pipeline](/official/projects/waste_identification_ml/circularnet-docs/content/learn-about-pipeline)
for devices lets you load the model and store image analysis results locally.

This section describes how to apply the specialized CircularNet model using
a prediction pipeline on the client side to prepare and analyze the images
you […]

---

`official/projects/waste_identification_ml/circularnet-docs/content/analyze-data/prediction-pipeline-in-cloud.md`

**Important:** Run the previous command in the `server` folder, which
contains the `triton_server.sh` script.

1. Exit the `server` folder and open the `client` folder in the
   `prediction_pipeline` directory:

    ```
    cd ..
    cd client/
    ```

    This folder contains the `pipeline_images.py` and `pipeline_videos.py` […]

1. If you have to modify the scripts to provide your specific paths and values
   for the prediction pipeline, edit the corresponding parameter values in the
   script. The following example modifies the image pipeline script:

    ```
    vim run_images.sh
    ```

    The Vim editor displays the following parameters:

    ```
    --input_directory=<path-to-input-bucket>
    --output_directory=<path-to-output-bucket>
    --fps=<frames-per-second>
    --height=<height>
    --width=<width>
    --model=<circularnet-model>
    --score=<score>
    --search_range=<search-range>
    --memory=<memory>
    […]
    ```

    Replace the following (a filled-in example follows the list):

    - `<path-to-input-bucket>`: The path to [the Cloud Storage input bucket you
      created](#create-the-cloud-storage-input-and-output-buckets), for example
      `gs://my-input-bucket/`.
    - `<path-to-output-bucket>`: The path to [the Cloud Storage output bucket
      you created](#create-the-cloud-storage-input-and-output-buckets), for
      example `gs://my-output-bucket/`.
    - `<frames-per-second>`: The rate at which you want to capture images from
      videos to split videos into frames, for example, `15`.
    - `<height>`: The height in pixels of the image or video frames that the
      model expects for prediction, for example, `512`.
    - `<width>`: The width in pixels of the image or video frames that the
      model expects for prediction, for example, `1024`.
    - `<circularnet-model>`: The name of the CircularNet model in the Triton
      inference server that you want to call, for example,
      `Jan2025_ver2_merged_1024_1024`.
    - `<score>`: The threshold for model prediction, for example, `0.70`.
    - `<search-range>`: The number of pixels up to which you want to track an
      object across consecutive frames, for example, `100`.
    - `<memory>`: The number of frames up to which you want to keep tracking
      an object, for example, `20`.
    - `<project-id>`: The ID of your Google Cloud project, for example,
      `my-project`.
    - `<dataset-id>`: The ID that you want to assign to a BigQuery dataset to
      store prediction results, for example, `circularnet_dataset`.
    - `<table-id>`: The ID that you want to assign to a BigQuery table to
      store prediction results, for example, `circularnet_table`. If the table
      already exists in your Google Cloud project, the pipeline appends
      results to that table.
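
    For reference, this is how the visible parameters might look once filled
    in with the example values above. This is a sketch only; your paths, model
    name, and the elided parameters will differ.

    ```
    --input_directory=gs://my-input-bucket/
    --output_directory=gs://my-output-bucket/
    --fps=15
    --height=512
    --width=1024
    --model=Jan2025_ver2_merged_1024_1024
    --score=0.70
    --search_range=100
    --memory=20
    ```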

    **Note:** If your input files are not videos but images, replace
    `run_gcp_videos.sh` in the command with `run_gcp_images.sh` and remove
    the […]

1. Run the prediction pipeline:

    ```
    bash run_images.sh
    ```

    **Note:** If you have a large number of input files, you can run the
    pipeline in a `screen` session in the background without worrying about
    the terminal closing. First, launch the `screen` session with the
    `screen -R client` command. A new session shell launches. Then, run the
    `bash run_images.sh` script in the new shell, as sketched below.
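
    For example, a minimal `screen` workflow might look like the following.
    The session name `client` matches the note above; the detach keystroke is
    standard `screen` behavior.

    ```
    screen -R client        # create or reattach to a session named "client"
    bash run_images.sh      # run the pipeline inside the session
    # Press Ctrl+A, then D, to detach and leave the pipeline running.
    screen -r client        # reattach later to check progress
    ```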

The script also creates a `logs` folder inside the `client` folder that saves
the logs with the troubleshooting results and records from the model.
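
A quick way to follow the most recent log while the pipeline runs is shown
below. This is a sketch; the exact log file names depend on the pipeline run.

```
# Follow the newest file in the logs folder.
tail -f "$(ls -t logs/* | head -n 1)"
```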

---

`official/projects/waste_identification_ml/circularnet-docs/content/analyze-data/prediction-pipeline-in-edge.md`

---

`official/projects/waste_identification_ml/circularnet-docs/content/deploy-cn/before-you-begin.md`

[…] one of the following options: <br><br>

[…]

</li>
<li><p><a href="https://cloud.google.com/compute/docs/gpus/create-gpu-vm-general-purpose">Create a Compute Engine virtual machine (VM) that has an NVIDIA T4 GPU attached</a>. Use the following settings on your VM:</p><br>
[…]
<li><strong>Operating system</strong>: Deep Learning on Linux</li>
<li><strong>Version</strong>: Deep Learning VM with CUDA 11.3 preinstalled, Debian 11, Python 3.10. You can choose any <i>M</i> number with this configuration, for example, M126.</li>
<li><strong>Boot disk type</strong>: Balanced persistent disk</li>
<li><strong>Size (GB)</strong>: 300 GB</li>
</ul>
</li>
<li><strong>Security</strong>: Navigate to the <b>Identity and API access</b> section and select the following:
<ul>
<li><strong>Service accounts</strong>: Compute Engine default service account</li>
<li><strong>Access scopes</strong>: Allow full access to all Cloud APIs</li>
</ul>
</li>
<li><strong>Networking</strong>: Navigate to the <b>Firewall</b> section and select the following:
<ul>
<li>Allow HTTP traffic</li>
<li>Allow HTTPS traffic</li>
</ul>
</li>
</ul>
<p><strong>Note</strong>: Give your VM a name that is easy to remember and deploy in a region and a zone close to your physical location that allows GPUs.</p><br>
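
If you prefer the command line, a VM with roughly these settings could be
created with `gcloud`. This is a sketch under assumptions: the VM name, zone,
machine type, and Deep Learning VM image family are illustrative placeholders;
confirm the exact image family in the console before running it.

```
# Illustrative only: verify the image family and adjust the name, zone, and
# machine type for your project before running.
gcloud compute instances create circularnet-vm \
  --zone=us-central1-a \
  --machine-type=n1-standard-8 \
  --accelerator=type=nvidia-tesla-t4,count=1 \
  --maintenance-policy=TERMINATE \
  --image-project=deeplearning-platform-release \
  --image-family=common-cu113-debian-11-py310 \
  --boot-disk-type=pd-balanced \
  --boot-disk-size=300GB \
  --scopes=https://www.googleapis.com/auth/cloud-platform \
  --tags=http-server,https-server
```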

---

`official/projects/waste_identification_ml/circularnet-docs/content/deploy-cn/start-server.md`

This script loads as many models as you want at the same time. Later, you can
choose which model you want to send your request to from the client side. For
more information, see [Prepare and analyze images](/official/projects/waste_identification_ml/circularnet-docs/content/analyze-data/).

For example, when you start analyzing images, you can send them from the
client to the following model in the Triton server you created:

- `Jan2025_ver2_merged_1024_1024`: shows the material type and form using
  ResNet for classification on images of 1024 x 1024 pixels.

You have finished setting up the Triton inference server. The server keeps
running on the backend and your terminal window lets you run new commands to
interact with it. It takes some time for the server to be up and running.
Wait for the **Status ready** message from the server before launching the
client.
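
You can also poll the server's readiness over HTTP. A minimal check, assuming
the Triton server exposes its default HTTP port (8000) on the VM:

```
# Returns HTTP 200 when the Triton server reports ready.
curl -s -o /dev/null -w "%{http_code}\n" localhost:8000/v2/health/ready
```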

You can confirm the server is running by opening a `screen` session:

1. List the `screen` sessions:
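
    For example, with the standard listing command:

    ```
    screen -ls
    ```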

[…] server. The models must show a `READY` status on the `screen` session when
they are successfully deployed.

1. If you want to exit the `screen` session without stopping the server, press
   `Ctrl+A` followed by `D`.