cd edge-ai-suites-instance1/metro-ai-suite/smart-traffic-intersection-agent/
```

2. Edit the deployment configuration file for instance #1:

   ```bash
   nano src/config/deployment_instance.json
   ```

   Update the `name`, `latitude`, and `longitude` values as required. The following is a sample value for the instance #1 deployment config:

   ```json
   {
       "name": "intersection_1",
       "latitude": 37.5879818,
       "longitude": -122.0534334,
       "agent_backend_port": "8081",
       "agent_ui_port": "7860"
   }
   ```

   > **TIP:** Leave `agent_backend_port` and `agent_ui_port` empty to avoid port conflicts. Random ports will then be assigned, and the application URLs with the assigned ports are shown when setup finishes.

3. Run the setup for instance #1:

   ```bash
   source setup.sh --setup
   ```
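Before running setup, it can help to sanity-check the edited file. The following is a minimal Python sketch; the required keys come from the sample above, but the validation rules themselves are illustrative assumptions, not part of the suite:

```python
import json

# Illustrative validator for a deployment_instance.json-style config.
# Required keys match the sample config; the range checks are assumptions.
def validate_config(cfg: dict) -> bool:
    for key in ("name", "latitude", "longitude"):
        if key not in cfg:
            raise ValueError(f"missing required key: {key}")
    if not -90 <= cfg["latitude"] <= 90:
        raise ValueError("latitude out of range")
    if not -180 <= cfg["longitude"] <= 180:
        raise ValueError("longitude out of range")
    return True

# The sample deployment config from the guide.
sample = json.loads("""
{
    "name": "intersection_1",
    "latitude": 37.5879818,
    "longitude": -122.0534334,
    "agent_backend_port": "8081",
    "agent_ui_port": "7860"
}
""")
print(validate_config(sample))  # True
```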
cd edge-ai-suites-instance2/metro-ai-suite/smart-traffic-intersection-agent/
```

2. Edit the deployment configuration for instance #2:

   ```bash
   nano src/config/deployment_instance.json
   ```

   The following is a sample value for the instance #2 deployment configuration:

   ```json
   {
       "name": "intersection_2",
       "latitude": 37.33874,
       "longitude": -121.8852525,
       "agent_backend_port": "8082",
       "agent_ui_port": "7861"
   }
   ```

   > **TIP:** Leave `agent_backend_port` and `agent_ui_port` empty to avoid port conflicts. Random ports will then be assigned, and the application URLs with the assigned ports are shown when setup finishes.

   > **IMPORTANT:** See this [disclaimer](#disclaimer-for-using-third-party-ai-models) before using any AI model.

3. Run the setup for instance #2:

   ```bash
   source setup.sh --setup
   ```
Ensure each instance has its `deployment_instance.json` updated with:

- A unique value for the `name` field
- Unique `latitude` and `longitude` coordinates
- Different `agent_backend_port` and `agent_ui_port` values to avoid port conflicts. This is optional; if not specified, an ephemeral port is picked automatically.
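The uniqueness requirements above can be checked mechanically. A hedged sketch follows; the helper name and conflict rules are hypothetical, mirroring the checklist rather than any script shipped with the suite:

```python
# Sketch: report whether two instance configs violate the uniqueness checklist.
def configs_conflict(a: dict, b: dict) -> bool:
    same_name = a["name"] == b["name"]
    same_coords = (a["latitude"], a["longitude"]) == (b["latitude"], b["longitude"])
    # Empty port strings mean "pick an ephemeral port", so only filled-in ports can clash.
    same_ports = any(
        a[k] and a[k] == b[k] for k in ("agent_backend_port", "agent_ui_port")
    )
    return same_name or same_coords or same_ports

# The two sample configs from the guide.
inst1 = {"name": "intersection_1", "latitude": 37.5879818, "longitude": -122.0534334,
         "agent_backend_port": "8081", "agent_ui_port": "7860"}
inst2 = {"name": "intersection_2", "latitude": 37.33874, "longitude": -121.8852525,
         "agent_backend_port": "8082", "agent_ui_port": "7861"}
print(configs_conflict(inst1, inst2))  # False
```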
Deploy new instances only if you have the required resources.

To spin up more instances, say `n` new instances, repeat the steps in [Set up Instance #2](#set-up-instance-2), changing to a new directory each time.
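When scripting `n` new instances, non-conflicting names and ports can be derived programmatically. The sketch below assumes a simple numbering scheme that extends the guide's examples (8081/7860 for instance 1, 8082/7861 for instance 2); the helper and its base ports are illustrative, not defined by the suite:

```python
# Hypothetical helper: derive config values for additional instances.
def make_instance_config(i: int) -> dict:
    return {
        "name": f"intersection_{i}",
        "agent_backend_port": str(8080 + i),
        "agent_ui_port": str(7859 + i),
    }

# Instances 3..5 get ports 8083..8085 and 7862..7864.
configs = [make_instance_config(i) for i in range(3, 6)]
print(configs[0])  # {'name': 'intersection_3', 'agent_backend_port': '8083', 'agent_ui_port': '7862'}
```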
### Disclaimer for Using Third-Party AI Models

Compliance with all license obligations and responsible use of a third-party AI model is the user's responsibility.
## Advanced Environment Configuration
Advanced users who need more control can set the following environment variables before running the setup script to override the default behaviour:
export LOG_LEVEL=DEBUG

# Select iGPU as the accelerator to perform VLM inference. By default, it is set to CPU.
export VLM_DEVICE=GPU

# Change the VLM model name. The default value set in setup.sh is microsoft/Phi-3.5-vision-instruct.