Commit f43702f

No public description
PiperOrigin-RevId: 725233618
1 parent 6afb844 commit f43702f

4 files changed: +9 -9 lines changed

Diff for: docs/nlp/customize_encoder.ipynb (+1 -1)
@@ -497,7 +497,7 @@
 "source": [
 "#### Customize Feedforward Layer\n",
 "\n",
-"Similiarly, one could also customize the feedforward layer.\n",
+"Similarly, one could also customize the feedforward layer.\n",
 "\n",
 "See [the source of `nlp.layers.GatedFeedforward`](https://github.com/tensorflow/models/blob/master/official/nlp/modeling/layers/gated_feedforward.py) for how to implement a customized feedforward layer.\n",
 "\n",

Diff for: docs/vision/object_detection.ipynb (+4 -4)
@@ -66,7 +66,7 @@
 "This tutorial demonstrates how to:\n",
 "\n",
 "1. Use models from the Tensorflow Model Garden(TFM) package.\n",
-"2. Fine-tune a pre-trained RetinanNet with ResNet-50 as backbone for object detection.\n",
+"2. Fine-tune a pre-trained RetinaNet with ResNet-50 as backbone for object detection.\n",
 "3. Export the tuned RetinaNet model"
 ]
 },
@@ -323,7 +323,7 @@
 "\n",
 "Use the `retinanet_resnetfpn_coco` experiment configuration, as defined by `tfm.vision.configs.retinanet.retinanet_resnetfpn_coco`.\n",
 "\n",
-"The configuration defines an experiment to train a RetinanNet with Resnet-50 as backbone, FPN as decoder. Default Configuration is trained on [COCO](https://cocodataset.org/) train2017 and evaluated on [COCO](https://cocodataset.org/) val2017.\n",
+"The configuration defines an experiment to train a RetinaNet with Resnet-50 as backbone, FPN as decoder. Default Configuration is trained on [COCO](https://cocodataset.org/) train2017 and evaluated on [COCO](https://cocodataset.org/) val2017.\n",
 "\n",
 "There are also other alternative experiments available such as\n",
 "`retinanet_resnetfpn_coco`, `retinanet_spinenet_coco`, `fasterrcnn_resnetfpn_coco` and more. One can switch to them by changing the experiment name argument to the `get_exp_config` function.\n",
@@ -538,7 +538,7 @@
 "id": "m-QW7DoKbD8z"
 },
 "source": [
-"### Create category index dictionary to map the labels to coressponding label names."
+"### Create category index dictionary to map the labels to corresponding label names."
 ]
 },
 {
@@ -573,7 +573,7 @@
 },
 "source": [
 "### Helper function for visualizing the results from TFRecords.\n",
-"Use `visualize_boxes_and_labels_on_image_array` from `visualization_utils` to draw boudning boxes on the image."
+"Use `visualize_boxes_and_labels_on_image_array` from `visualization_utils` to draw bounding boxes on the image."
 ]
 },
 {
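
The cells touched in this file revolve around two pieces of the TFM API. A condensed sketch of how they fit together, assuming the `tensorflow_models` package; the category names are hypothetical placeholders, not the tutorial's label map:

import tensorflow_models as tfm

# Load the RetinaNet + ResNet-50 FPN experiment named in the hunk above.
exp_config = tfm.core.exp_factory.get_exp_config('retinanet_resnetfpn_coco')

# A category index maps integer class ids to display names, e.g.:
category_index = {
    1: {'id': 1, 'name': 'bottle'},   # placeholder label
    2: {'id': 2, 'name': 'can'},      # placeholder label
}

# visualization_utils.visualize_boxes_and_labels_on_image_array(
#     image, boxes, classes, scores, category_index, ...)
# then consumes this dictionary to draw labeled bounding boxes.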

Diff for: docs/vision/semantic_segmentation.ipynb (+3 -3)
@@ -341,7 +341,7 @@
 "\n",
 "Use the `mnv2_deeplabv3_pascal` experiment configuration, as defined by `tfm.vision.configs.semantic_segmentation.mnv2_deeplabv3_pascal`.\n",
 "\n",
-"Please find all the registered experiements [here](https://www.tensorflow.org/api_docs/python/tfm/core/exp_factory/get_exp_config)\n",
+"Please find all the registered experiments [here](https://www.tensorflow.org/api_docs/python/tfm/core/exp_factory/get_exp_config)\n",
 "\n",
 "The configuration defines an experiment to train a [DeepLabV3](https://arxiv.org/pdf/1706.05587.pdf) model with MobilenetV2 as backbone and [ASPP](https://arxiv.org/pdf/1606.00915v2.pdf) as decoder.\n",
 "\n",
@@ -420,7 +420,7 @@
 "exp_config.task.train_data.dtype = 'float32'\n",
 "exp_config.task.train_data.output_size = [HEIGHT, WIDTH]\n",
 "exp_config.task.train_data.preserve_aspect_ratio = False\n",
-"exp_config.task.train_data.seed = 21 # Reproducable Training Data\n",
+"exp_config.task.train_data.seed = 21 # Reproducible Training Data\n",
 "\n",
 "# Validation Data Config\n",
 "exp_config.task.validation_data.input_path = val_data_tfrecords\n",
@@ -429,7 +429,7 @@
 "exp_config.task.validation_data.output_size = [HEIGHT, WIDTH]\n",
 "exp_config.task.validation_data.preserve_aspect_ratio = False\n",
 "exp_config.task.validation_data.groundtruth_padded_size = [HEIGHT, WIDTH]\n",
-"exp_config.task.validation_data.seed = 21 # Reproducable Validation Data\n",
+"exp_config.task.validation_data.seed = 21 # Reproducible Validation Data\n",
 "exp_config.task.validation_data.resize_eval_groundtruth = True # To enable validation loss"
 ]
 },
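
For reference, the two seed lines fixed above sit inside a larger config-override cell; a condensed sketch of that pattern, with the resolution and input path as placeholders:

import tensorflow_models as tfm

exp_config = tfm.core.exp_factory.get_exp_config('mnv2_deeplabv3_pascal')

HEIGHT, WIDTH = 512, 512                    # placeholder output resolution
train_data_tfrecords = 'train-*.tfrecord'   # placeholder input path

exp_config.task.train_data.input_path = train_data_tfrecords
exp_config.task.train_data.output_size = [HEIGHT, WIDTH]
exp_config.task.train_data.preserve_aspect_ratio = False
exp_config.task.train_data.seed = 21  # fixed seed, so shuffling is reproducible

exp_config.task.validation_data.seed = 21
exp_config.task.validation_data.resize_eval_groundtruth = True  # enables validation loss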

Diff for: official/projects/waste_identification_ml/circularnet-docs/content/_index.md (+1 -1)
@@ -27,7 +27,7 @@
 * [Aperture size (f-number)](/official/projects/waste_identification_ml/circularnet-docs/content/system-req/choose-camera/factors.md#aperture-size-f-number)
 * [Shutter speed](/official/projects/waste_identification_ml/circularnet-docs/content/system-req/choose-camera/factors.md#shutter-speed)
 * [Table of specifications](/official/projects/waste_identification_ml/circularnet-docs/content/system-req/choose-camera/table-of-specs.md)
-* [Choose edge device hardware](/official/projects/waste_identification_ml/circularnet-docs/content/system-req/choose-edge-device.md)
+* [Choose edge device hardware](/official/projects/waste_identification_ml/circularnet-docs/content/system-req/choose-edge-device/_index.md)

 **[Deploy CircularNet](/official/projects/waste_identification_ml/circularnet-docs/content/deploy-cn/_index.md)**
