@@ -1,7 +1,3 @@
----
----
-<style>hr {display:none;}</style>
-
 # Serving Dynamically Updated TensorFlow Model with Batching
 
 This tutorial shows you how to use TensorFlow Serving components to build a
@@ -10,7 +6,7 @@ TensorFlow model. You'll also learn how to use TensorFlow Serving
 batcher to do batched inference. The code examples in this tutorial focus on the
 discovery, batching, and serving logic. If you just want to use TensorFlow
 Serving to serve a single version model without batching, see
-[TensorFlow Serving basic tutorial](serving_basic).
+[TensorFlow Serving basic tutorial](serving_basic.md).
 
 This tutorial uses the simple Softmax Regression model introduced in the
 TensorFlow tutorial for handwritten image (MNIST data) classification. If you
@@ -33,7 +29,7 @@ This tutorial steps through the following tasks:
 4. Serve request with TensorFlow Serving manager.
 5. Run and test the service.
 
-Before getting started, please complete the [prerequisites](setup#prerequisites).
+Before getting started, please complete the [prerequisites](setup.md#prerequisites).
 
 ## Train And Export TensorFlow Model
 
@@ -58,7 +54,7 @@ $>bazel-bin/tensorflow_serving/example/mnist_export --training_iteration=2000 --
 
 As you can see in `mnist_export.py`, the training and exporting is done the
 same way it is in the
-[TensorFlow Serving basic tutorial](serving_basic). For
+[TensorFlow Serving basic tutorial](serving_basic.md). For
 demonstration purposes, you're intentionally dialing down the training
 iterations for the first run and exporting it as v1, while training it normally
 for the second run and exporting it as v2 to the same parent directory -- as we
@@ -128,8 +124,8 @@ monitors cloud storage instead of local storage, or you could build a version
 policy plugin that does version transition in a different way -- in fact, you
 could even build a custom model plugin that serves non-TensorFlow models. These
 topics are out of scope for this tutorial, however, you can refer to the
-[custom source](custom_source) and
-[custom servable](custom_servable) documents for more information.
+[custom source](custom_source.md) and
+[custom servable](custom_servable.md) documents for more information.
 
 ## Batching
 
@@ -232,7 +228,7 @@ To put all these into the context of this tutorial:
 `DoClassifyInBatch` is then just about requesting `SessionBundle` from the
 manager and uses it to run inference. Most of the logic and flow is very similar
 to the logic and flow described in the
-[TensorFlow Serving basic tutorial](serving_basic), with just a few
+[TensorFlow Serving basic tutorial](serving_basic.md), with just a few
 key changes:
 
 * The input tensor now has its first dimension set to variable batch size at
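
For context on the batching the hunks above refer to: at its core, batched inference means grouping individual requests into one batch and running the model once per batch instead of once per request. A minimal sketch of that chunking pattern in plain Python — the function and parameter names here are illustrative, not TensorFlow Serving's actual batch-scheduler API:

```python
from typing import Callable, List

def batch_requests(requests: List[float],
                   batch_size: int,
                   run_inference: Callable[[List[float]], List[float]]) -> List[float]:
    """Group individual requests into batches of at most batch_size and run
    inference once per batch, concatenating the per-batch results.
    Illustrative sketch only; not a TensorFlow Serving API."""
    results: List[float] = []
    for start in range(0, len(requests), batch_size):
        batch = requests[start:start + batch_size]  # final batch may be smaller
        results.extend(run_inference(batch))
    return results

# Example: stand-in "inference" that doubles each input
print(batch_requests([1.0, 2.0, 3.0, 4.0, 5.0], 2, lambda b: [x * 2 for x in b]))
```

This is why the exported model's input tensor needs a variable first dimension, as the last hunk notes: the batch size seen at serving time is not fixed.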