Commit 31784e7

committed
One more minor documentation fix.
1 parent 19f5356 commit 31784e7

File tree: 1 file changed (+6, -10 lines)

tensorflow_serving/g3doc/serving_advanced.md (+6, -10)
@@ -1,7 +1,3 @@
----
----
-<style>hr{display:none;}</style>
-
 # Serving Dynamically Updated TensorFlow Model with Batching
 
 This tutorial shows you how to use TensorFlow Serving components to build a
@@ -10,7 +6,7 @@ TensorFlow model. You'll also learn how to use TensorFlow Serving
 batcher to do batched inference. The code examples in this tutorial focus on the
 discovery, batching, and serving logic. If you just want to use TensorFlow
 Serving to serve a single version model without batching, see
-[TensorFlow Serving basic tutorial](serving_basic).
+[TensorFlow Serving basic tutorial](serving_basic.md).
 
 This tutorial uses the simple Softmax Regression model introduced in the
 TensorFlow tutorial for handwritten image (MNIST data) classification. If you
@@ -33,7 +29,7 @@ This tutorial steps through the following tasks:
 4. Serve request with TensorFlow Serving manager.
 5. Run and test the service.
 
-Before getting started, please complete the [prerequisites](setup#prerequisites).
+Before getting started, please complete the [prerequisites](setup.md#prerequisites).
 
 ## Train And Export TensorFlow Model
 
@@ -58,7 +54,7 @@ $>bazel-bin/tensorflow_serving/example/mnist_export --training_iteration=2000 --
 
 As you can see in `mnist_export.py`, the training and exporting is done the
 same way it is in the
-[TensorFlow Serving basic tutorial](serving_basic). For
+[TensorFlow Serving basic tutorial](serving_basic.md). For
 demonstration purposes, you're intentionally dialing down the training
 iterations for the first run and exporting it as v1, while training it normally
 for the second run and exporting it as v2 to the same parent directory -- as we
@@ -128,8 +124,8 @@ monitors cloud storage instead of local storage, or you could build a version
 policy plugin that does version transition in a different way -- in fact, you
 could even build a custom model plugin that serves non-TensorFlow models. These
 topics are out of scope for this tutorial, however, you can refer to the
-[custom source](custom_source) and
-[custom servable](custom_servable) documents for more information.
+[custom source](custom_source.md) and
+[custom servable](custom_servable.md) documents for more information.
 
 ## Batching
 
@@ -232,7 +228,7 @@ To put all these into the context of this tutorial:
 `DoClassifyInBatch` is then just about requesting `SessionBundle` from the
 manager and uses it to run inference. Most of the logic and flow is very similar
 to the logic and flow described in the
-[TensorFlow Serving basic tutorial](serving_basic), with just a few
+[TensorFlow Serving basic tutorial](serving_basic.md), with just a few
 key changes:
 
 * The input tensor now has its first dimension set to variable batch size at
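
To make the flow described in that last hunk concrete, here is a minimal C++ sketch of what a `DoClassifyInBatch`-style helper boils down to: request a `SessionBundle` handle from the manager and run the session once over an input tensor whose first dimension carries the batch size. This is not the tutorial's actual example code; the servable name "mnist", the tensor names "x" and "y", and the include paths are assumptions that vary between TensorFlow Serving releases.

// A minimal sketch, not the tutorial's example binary. Assumes a Manager that
// already serves a SessionBundle servable named "mnist"; tensor names "x"/"y"
// and include paths are illustrative and differ across releases.
#include <utility>
#include <vector>

#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/lib/core/status.h"
#include "tensorflow_serving/core/manager.h"
#include "tensorflow_serving/core/servable_handle.h"
#include "tensorflow_serving/session_bundle/session_bundle.h"

namespace tf = tensorflow;
namespace serving = tensorflow::serving;

tf::Status ClassifyBatch(serving::Manager* manager,
                         const std::vector<std::vector<float>>& images,
                         std::vector<tf::Tensor>* outputs) {
  // Ask the manager for the latest loaded version of the "mnist" servable.
  serving::ServableHandle<serving::SessionBundle> bundle;
  const tf::Status lookup_status = manager->GetServableHandle(
      serving::ServableRequest::Latest("mnist"), &bundle);
  if (!lookup_status.ok()) return lookup_status;

  // Pack the whole batch into a single input tensor: the first dimension is
  // the (variable) batch size, the second the flattened 28x28 image.
  const int batch_size = static_cast<int>(images.size());
  tf::Tensor input(tf::DT_FLOAT, tf::TensorShape({batch_size, 784}));
  auto input_matrix = input.matrix<float>();
  for (int i = 0; i < batch_size; ++i) {
    for (int j = 0; j < 784; ++j) {
      input_matrix(i, j) = images[i][j];
    }
  }

  // One Session::Run call serves the entire batch.
  return bundle->session->Run({{"x", input}}, {"y"}, {}, outputs);
}

Packing the requests this way is what lets a single Session::Run amortize per-request overhead across the whole batch, which is the point of the batching discussion in this document.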

0 commit comments
