@@ -2,38 +2,50 @@ TensorFlow Serving on ARM
 =========================
 
 TensorFlow Serving cross-compile project targeting linux on common arm cores from
-a linux amd64 (x86_64) host.
+a linux amd64 / x86_64 build host.
 
-## Overview
-
-**Upstream Project:** [tensorflow/serving](https://github.com/tensorflow/serving)
+## Contents
+* [Overview](#overview)
+* [Docker Images](#docker-images)
+* [Build From Source](#build-from-source)
+* [Legacy Builds](#legacy-builds)
+* [Disclosures](#disclosures)
+* [Disclaimer](#disclaimer)
 
-**Usage Documentation:** [TensorFlow Serving with Docker](https://www.tensorflow.org/tfx/serving/docker)
+## Overview
 
-This project is basically a giant build wrapper around [tensorflow/serving](https://github.com/tensorflow/serving)
-with the intention of making it easy to cross-build CPU optimized model server
+This project provides an alternative build strategy for
+[tensorflow/serving](https://github.com/tensorflow/serving)
+with the intention of making it relatively easy to cross-build CPU optimized model server
 docker images targeting common linux arm platforms. Additionally, a set of docker
-images is produced for some of the most popular linux arm platforms and hosted on
+image build targets is maintained and built for some of the popular linux arm platforms and hosted on
 Docker Hub.
 
-## The Docker Images
+**Upstream Project:** [tensorflow/serving](https://github.com/tensorflow/serving)
+
+## Docker Images
 
 **Hosted on Docker Hub:** [emacski/tensorflow-serving](https://hub.docker.com/r/emacski/tensorflow-serving)
 
+**Usage Documentation:** [TensorFlow Serving with Docker](https://www.tensorflow.org/tfx/serving/docker)
+
+**Note:** The project images are designed to be functionally equivalent to their upstream counterparts.
+
 ### Quick Start
 
 On many consumer / developer 64-bit and 32-bit arm platforms you can simply:
 ```sh
 docker pull emacski/tensorflow-serving:latest
 # or
-docker pull emacski/tensorflow-serving:2.1.0
+docker pull emacski/tensorflow-serving:2.2.0
 ```
 
-Refer to [TensorFlow Serving with Docker](https://www.tensorflow.org/tfx/serving/docker) for usage.
+Refer to [TensorFlow Serving with Docker](https://www.tensorflow.org/tfx/serving/docker)
+for configuration and setting up a model for serving.
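For orientation, serving a model with one of these images follows the same pattern as the upstream serving images. A minimal sketch, assuming a SavedModel exported to `$PWD/models/my_model` (`my_model` and the host path are hypothetical placeholders, not part of the project):

```shell
# serve a SavedModel over the REST API on port 8501
# "my_model" and the host path are hypothetical placeholder values
docker run -t --rm -p 8501:8501 \
    -v "$PWD/models/my_model:/models/my_model" \
    -e MODEL_NAME=my_model \
    emacski/tensorflow-serving:latest
```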
 
 ### Images
 
-`emacski/tensorflow-serving:[Tag]`
+#### `emacski/tensorflow-serving:[Tag]`
 
 |**Tag**|**ARM Core Compatibility**|
 |---------|----------------------------|
@@ -46,12 +58,12 @@ Refer to [TensorFlow Serving with Docker](https://www.tensorflow.org/tfx/serving
 Example
 ```bash
 # on beaglebone black
-docker pull emacski/tensorflow-serving:2.1.0-linux_arm_armv7-a_neon_vfpv3
+docker pull emacski/tensorflow-serving:2.2.0-linux_arm_armv7-a_neon_vfpv3
 ```
 
 ### Aliases
 
-`emacski/tensorflow-serving:[Alias]`
+#### `emacski/tensorflow-serving:[Alias]`
 
 |**Alias**|**Tag**|**Notes**|
 |-----------|---------|-----------|
@@ -62,10 +74,10 @@ docker pull emacski/tensorflow-serving:2.1.0-linux_arm_armv7-a_neon_vfpv3
 | <nobr>`latest-linux_arm64`</nobr> | <nobr>`[Latest-Version]-linux_arm64`</nobr> | |
 | <nobr>`latest-linux_arm`</nobr> | <nobr>`[Latest-Version]-linux_arm`</nobr> | |
 
-Example
+Examples
 ```bash
 # on Raspberry Pi 3 B+
-docker pull emacski/tensorflow-serving:2.1.0-linux_arm64
+docker pull emacski/tensorflow-serving:2.2.0-linux_arm64
 # or
 docker pull emacski/tensorflow-serving:latest-linux_arm64
 ```
@@ -80,10 +92,12 @@ docker pull emacski/tensorflow-serving:latest-linux_arm64
 | <nobr>`emacski/tensorflow-serving:latest-linux_arm64`</nobr> | `linux` | `arm64` |
 | <nobr>`emacski/tensorflow-serving:latest-linux_amd64`</nobr> | `linux` | `amd64` |
 
-Example
+Examples
 ```bash
 # on Raspberry Pi 3 B+
 docker pull emacski/tensorflow-serving
+# or
+docker pull emacski/tensorflow-serving:latest
 # the actual image used is emacski/tensorflow-serving:latest-linux_arm64
 # itself actually being emacski/tensorflow-serving:[Latest-Version]-linux_arm64_armv8-a
 ```
@@ -99,31 +113,31 @@ docker pull emacski/tensorflow-serving
 Example
 ```sh
 # on Raspberry Pi 3 B+
-docker pull emacski/tensorflow-serving:2.1.0
-# the actual image used is emacski/tensorflow-serving:2.1.0-linux_arm64
-# itself actually being emacski/tensorflow-serving:2.1.0-linux_arm64_armv8-a
+docker pull emacski/tensorflow-serving:2.2.0
+# the actual image used is emacski/tensorflow-serving:2.2.0-linux_arm64
+# itself actually being emacski/tensorflow-serving:2.2.0-linux_arm64_armv8-a
 ```
 
 ### Debug Images
 
-As of version 2.1.0, debug images are also built and published to docker hub.
+As of version `2.0.0`, debug images are also built and published to docker hub.
 These images are identical to the non-debug images with the addition of busybox
 utils. The utils are located at `/busybox/bin` which is also included in the
-image `PATH` env variable.
+image's system `PATH`.
 
 For any image above, add `debug` after the `[Version]` and before the platform
 suffix (if one is required) in the image tag.
 
-Examples
 ```sh
 # multi-arch
-docker pull emacski/tensorflow-serving:2.1.0-debug
+docker pull emacski/tensorflow-serving:2.2.0-debug
 # specific image
-docker pull emacski/tensorflow-serving:2.1.0-debug-linux_arm64_armv8-a
+docker pull emacski/tensorflow-serving:2.2.0-debug-linux_arm64_armv8-a
 # specific alias
 docker pull emacski/tensorflow-serving:latest-debug-linux_arm64
 ```
 
+Example Usage
 ```sh
 # start a new container with an interactive ash (busybox) shell
 docker run -ti --entrypoint /busybox/bin/sh emacski/tensorflow-serving:latest-debug-linux_arm64
@@ -133,93 +147,129 @@ docker run -ti --entrypoint sh emacski/tensorflow-serving:latest-debug-linux_arm
 docker exec -ti my_running_container /busybox/bin/sh
 ```
 
-## Building from Source
+[Back to Top](#contents)
+
+## Build From Source
 
-**Host Build Requirements:**
+### Build / Development Environment
+
+**Build Host Platform:** `linux_amd64` (`x86_64`)
+
+**Build Host Requirements:**
 * git
 * docker
 
-### Build / Development Environment
+For each version / release, a self-contained build environment `devel` image is
+created and published. This image contains all necessary tools and dependencies
+required for building project artifacts.
 
 ```bash
 git clone git@github.com:emacski/tensorflow-serving-arm.git
-
 cd tensorflow-serving-arm
+
+# pull devel
 docker pull emacski/tensorflow-serving:latest-devel
-# or
+# or build devel
 docker build -t emacski/tensorflow-serving:latest-devel -f tensorflow_model_server/tools/docker/Dockerfile .
 ```
 
-### Build Examples
-
-The following examples assume that the commands are executed within the `devel` container:
+All of the build examples assume that the commands are executed within the `devel`
+container:
 ```bash
 # interactive shell
 docker run --rm -ti \
     -w /code -v $PWD:/code \
     -v /var/run/docker.sock:/var/run/docker.sock \
     emacski/tensorflow-serving:latest-devel /bin/bash
+# or
 # non-interactive
 docker run --rm \
     -w /code -v $PWD:/code \
     -v /var/run/docker.sock:/var/run/docker.sock \
     emacski/tensorflow-serving:latest-devel [example_command]
 ```
 
-#### Build Project Docker Images
+### Config Groups
+
+The following bazel config groups represent the build options used for each target
+platform (found in `.bazelrc`). These config groups are mutually exclusive and
+only one should be specified in a build command as a `--config` option.
+
+| Name | Type | Info |
+|------|------|------|
+| `linux_amd64` | Base | can be used for [custom builds](#build-image-for-custom-arm-target) |
+| `linux_arm64` | Base | can be used for [custom builds](#build-image-for-custom-arm-target) |
+| `linux_arm` | Base | can be used for [custom builds](#build-image-for-custom-arm-target) |
+| **`linux_amd64_avx_sse4.2`** | **Project** | inherits from `linux_amd64` |
+| **`linux_arm64_armv8-a`** | **Project** | inherits from `linux_arm64` |
+| **`linux_arm64_armv8.2-a`** | **Project** | inherits from `linux_arm64` |
+| **`linux_arm_armv7-a_neon_vfpv3`** | **Project** | inherits from `linux_arm` |
+| **`linux_arm_armv7-a_neon_vfpv4`** | **Project** | inherits from `linux_arm` |
+
+### Build Project Image Target
+
+#### `//tensorflow_model_server:project_image.tar`
+
+Build a project-maintained model server docker image targeting one of the platforms
+specified by a project config group as listed above. The resulting image can be
+found as a tar file in bazel's output directory.
+
 ```bash
-bazel build //tensorflow_model_server:linux_amd64_avx_sse4.2 --config=linux_amd64_avx_sse4.2
-bazel build //tensorflow_model_server:linux_arm64_armv8-a --config=linux_arm64_armv8-a
-bazel build //tensorflow_model_server:linux_arm64_armv8.2-a --config=linux_arm64_armv8.2-a
-bazel build //tensorflow_model_server:linux_arm_armv7-a_neon_vfpv3 --config=linux_arm_armv7-a_neon_vfpv3
-bazel build //tensorflow_model_server:linux_arm_armv7-a_neon_vfpv4 --config=linux_arm_armv7-a_neon_vfpv4
+bazel build //tensorflow_model_server:project_image.tar --config=linux_arm64_armv8-a
+# or
+bazel build //tensorflow_model_server:project_image.tar --config=linux_arm_armv7-a_neon_vfpv4
 # each build creates a docker loadable image tar in bazel's output dir
 ```
 
-#### Build and Load Project Images
+### Load Project Image Target
 
-```bash
-bazel run //tensorflow_model_server:linux_amd64_avx_sse4.2 --config=linux_amd64_avx_sse4.2
-bazel run //tensorflow_model_server:linux_arm64_armv8-a --config=linux_arm64_armv8-a
-bazel run //tensorflow_model_server:linux_arm64_armv8.2-a --config=linux_arm64_armv8.2-a
-bazel run //tensorflow_model_server:linux_arm_armv7-a_neon_vfpv3 --config=linux_arm_armv7-a_neon_vfpv3
-bazel run //tensorflow_model_server:linux_arm_armv7-a_neon_vfpv4 --config=linux_arm_armv7-a_neon_vfpv4
-```
+#### `//tensorflow_model_server:project_image`
+
+Same as above, but additionally bazel attempts to load the resulting image onto
+the host, making it immediately available to the host's docker.
 
 **Note:** host docker must be available to the build container for final images
 to be available on the host automatically.
 
-#### Build Project Binaries
-It's not recommended to use these binaries as standalone executables as they are built specifically to run in their respective containers.
 ```bash
-bazel build //tensorflow_model_server --config=linux_amd64_avx_sse4.2
-bazel build //tensorflow_model_server --config=linux_arm64_armv8-a
-bazel build //tensorflow_model_server --config=linux_arm64_armv8.2-a
-bazel build //tensorflow_model_server --config=linux_arm_armv7-a_neon_vfpv3
-bazel build //tensorflow_model_server --config=linux_arm_armv7-a_neon_vfpv4
+bazel run //tensorflow_model_server:project_image --config=linux_arm64_armv8-a
+# or
+bazel run //tensorflow_model_server:project_image --config=linux_arm_armv7-a_neon_vfpv4
 ```
 
-#### Build Docker Image for Custom ARM target
-Just specify the `image.tar` target and base arch config group and custom compile options.
+### Build Project Binary Target
+
+#### `//tensorflow_model_server`
+
+Build the model server binary targeting one of the platforms specified by a project
+config group as listed above.
 
-For `linux_arm64` and `linux_arm` options see: https://releases.llvm.org/9.0.0/tools/clang/docs/CrossCompilation.html
+**Note:** It's not recommended to use these binaries as standalone executables
+as they are built specifically to run in their respective containers, but they may
+work on Debian 10-like systems.
 
-Example building an image tuned for Cortex-A72
 ```bash
-bazel build //tensorflow_model_server:image.tar --config=linux_arm64 --copt=-mcpu=cortex-a72
-# resulting image tar: bazel-bin/tensorflow_model_server/image.tar
+bazel build //tensorflow_model_server --config=linux_arm64_armv8-a
+# or
+bazel build //tensorflow_model_server --config=linux_arm_armv7-a_neon_vfpv4
 ```
 
-## Disclaimer
+### Build Image for Custom ARM Target
 
-* Not an ARM expert
-* Not a Bazel expert (but I know a little bit more now)
-* Not a TensorFlow expert
-* Personal project, so testing is minimal
+#### `//tensorflow_model_server:custom_image.tar`
 
-Should any of those scare you, I recommend NOT using anything here.
-Additionally, any help to improve things is always appreciated.
+Can be used to fine-tune builds for specific platforms. Use a "Base" type
+[config group](#config-groups) and custom compile options. For `linux_arm64` and
+`linux_arm` options see: https://releases.llvm.org/10.0.0/tools/clang/docs/CrossCompilation.html
+
+```bash
+# building an image tuned for Cortex-A72
+bazel build //tensorflow_model_server:custom_image.tar --config=linux_arm64 --copt=-mcpu=cortex-a72
+# look for custom_image.tar in bazel's output directory
+```
+
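The resulting tar can then be loaded into the host's docker manually; a minimal sketch (the `bazel-bin` path below mirrors the output location of the former `image.tar` target and is an assumption here):

```shell
# load the image tar produced by bazel into the local docker image store
docker load -i bazel-bin/tensorflow_model_server/custom_image.tar
```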
+[Back to Top](#contents)
 
 ## Legacy Builds
 
@@ -229,7 +279,8 @@ Additionally, any help to improve things is always appreciated.
 * `v1.13.0`
 * `v1.14.0`
 
-**Note:** a tag exists for both `v1.14.0` and `1.14.0` as this was the current upstream tensorflow/serving version when this project was refactored
+**Note:** a tag exists for both `v1.14.0` and `1.14.0` as this was the current
+upstream tensorflow/serving version when this project was refactored
 
 ### Legacy Docker Images
 
 The following tensorflow serving versions were built using the legacy project
@@ -239,3 +290,28 @@ structure and are still available on DockerHub.
 * `emacski/tensorflow-serving:[Version]-arm32v7_vfpv3`
 
 Versions: `1.11.1`, `1.12.0`, `1.13.0`, `1.14.0`
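For example, pulling one of these legacy images (a sketch assuming the naming pattern and version list above):

```shell
# pull a legacy 32-bit vfpv3 image for tensorflow serving 1.14.0
docker pull emacski/tensorflow-serving:1.14.0-arm32v7_vfpv3
```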
+
+[Back to Top](#contents)
+
+## Disclosures
+
+This project uses llvm / clang toolchains for c++ cross-compiling. By
+default, the model server is statically linked to llvm's libc++. To dynamically
+link against gnu libstdc++, include the build option `--config=gnulibcpp`.
+
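As a sketch of combining this with one of the platform config groups from the build section (the specific group chosen here is illustrative):

```shell
# build the model server dynamically linked against gnu libstdc++
bazel build //tensorflow_model_server \
    --config=linux_arm64_armv8-a \
    --config=gnulibcpp
```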
+The base docker images used in this project come from another project I
+maintain called [Discolix](https://github.com/discolix/discolix) (distroless for arm).
+
+[Back to Top](#contents)
+
+## Disclaimer
+
+* Not an ARM expert
+* Not a Bazel expert (but I know a little bit more now)
+* Not a TensorFlow expert
+* Personal project, so testing is minimal
+
+Should any of those scare you, I recommend NOT using anything here.
+Additionally, any help to improve things is always appreciated.
+
+[Back to Top](#contents)