Move "Defaults to" to end of arg docstring and standardise values
#1848
base: master
Changes from 12 commits: c5b31ea, 9ae3a40, b4c5fb7, 3a6ffc8, 9cc9478, ecec330, f515646, 92a45c7, 3ba1d76, 38731bf, 1b6678d, a45e15b, 4ed54cf, 9ce9fc1, 201819d
```diff
@@ -11,6 +11,7 @@ build/
 *.egg-info
 __pycache__/
 *.so
+venv

 #VS Code files and container
 .vscode/
```
```diff
@@ -146,7 +146,7 @@
     batch size based on the number of accelerators being used.
     """

-    # Try to detect an available TPU. If none is present, defaults to
+    # Try to detect an available TPU. If none is present. Defaults to
     # MirroredStrategy
     try:
         tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect()
```

Contributor: Let's undo this change for all the comments in examples.
```diff
@@ -303,10 +303,11 @@ def lr_warmup_cosine_decay(
     learning rate, after an optional holding period.

     args:
-        - [float] start_lr: default 0.0, the starting learning rate at the beginning
-            of training from which the warmup starts
-        - [float] target_lr: default 1e-2, the target (initial) learning rate from
-            which you'd usually start without a LR warmup schedule
+        - [float] start_lr: the starting learning rate at the beginning
+            of training from which the warmup starts. Defaults to `0.0`.
+        - [float] target_lr: the target (initial) learning rate from
+            which you'd usually start without a LR warmup schedule.
+            Defaults to `1e-2`.
         - [int] warmup_steps: number of training steps to warm up for expressed in
             batches
         - [int] total_steps: the total steps (epochs * number of batches per epoch)
```
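The hunk above only reorders the docstring, but for readers unfamiliar with the function being documented, a warmup-plus-cosine-decay schedule can be sketched as follows. This is a hypothetical standalone version whose names and defaults mirror the docstring; the actual keras_cv implementation differs:

```python
import math

def lr_warmup_cosine_decay(step, start_lr=0.0, target_lr=1e-2,
                           warmup_steps=100, total_steps=1000, hold=0):
    """Linear warmup from start_lr to target_lr, then cosine decay to 0."""
    if step < warmup_steps:
        # Linear interpolation during the warmup phase.
        return start_lr + (target_lr - start_lr) * step / warmup_steps
    if step < warmup_steps + hold:
        # Optional holding period at the target learning rate.
        return target_lr
    # Cosine decay over the remaining training steps.
    progress = (step - warmup_steps - hold) / (total_steps - warmup_steps - hold)
    return 0.5 * target_lr * (1 + math.cos(math.pi * progress))
```

At `step=0` this returns `start_lr`, at `step=warmup_steps` it reaches `target_lr`, and it decays smoothly to zero by `total_steps`.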
```diff
@@ -81,12 +81,12 @@ def load(
     batch_size: how many instances to include in batches after loading.
         Should only be specified if img_size is specified (so that images
         can be resized to the same size before batching).
-    shuffle: whether to shuffle the dataset, defaults to True.
+    shuffle: whether to shuffle the dataset. Defaults to `True`.
     shuffle_buffer: the size of the buffer to use in shuffling.
     reshuffle_each_iteration: whether to reshuffle the dataset on every
-        epoch, defaults to False.
-    img_size: the size to resize the images to, defaults to None, indicating
-        that images should not be resized.
+        epoch. Defaults to `False`.
+    img_size: the size to resize the images to, when None, this indicates
+        that images should not be resized. Defaults to `None`.

     Returns:
         tf.data.Dataset containing ImageNet. Each entry is a dictionary
```

Contributor: when

Author: Makes sense to me:

Contributor: Sorry I just meant use backticks for
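Applied consistently, the convention the PR standardises reads like this. The snippet below is a toy stand-in, not the real keras_cv loader; it only exists to show the trailing "Defaults to `X`." sentence at the end of each argument description:

```python
def load(shuffle=True, reshuffle_each_iteration=False, img_size=None):
    """Toy loader (not the real keras_cv function) showing the convention.

    Args:
        shuffle: whether to shuffle the dataset. Defaults to `True`.
        reshuffle_each_iteration: whether to reshuffle the dataset on
            every epoch. Defaults to `False`.
        img_size: the size to resize the images to; when `None`, images
            are not resized. Defaults to `None`.
    """
    # Return the resolved options so the behaviour is inspectable.
    return {
        "shuffle": shuffle,
        "reshuffle_each_iteration": reshuffle_each_iteration,
        "img_size": img_size,
    }
```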
```diff
@@ -484,8 +484,8 @@ def load(
         dataset. Defaults to `sbd_train`.
     data_dir: string, local directory path for the loaded data. This will be
         used to download the data file, and unzip. It will be used as a
-        cache directory. Defaults to None, and `~/.keras/pascal_voc_2012`
-        will be used.
+        cache directory. When `None`: `~/.keras/pascal_voc_2012`
+        will be used. Defaults to `None`.
     """
     supported_split_value = [
         "train",
```

Contributor: Maybe best to just say defaults to

Author: But it doesn't default to that value? - It computes to it sure, but defaults to
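The disagreement above is about a parameter that defaults to `None` but is then resolved to a computed path. A minimal sketch of that pattern, with a hypothetical helper name (the real loader's logic is more involved):

```python
import os

def resolve_data_dir(data_dir=None):
    # The parameter genuinely defaults to None; None is only later
    # resolved to the cache directory, which is why the docstring says
    # "Defaults to `None`" rather than naming the computed path.
    if data_dir is None:
        data_dir = os.path.join(
            os.path.expanduser("~"), ".keras", "pascal_voc_2012"
        )
    return data_dir
```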
```diff
@@ -110,7 +110,7 @@ def convert_format(keypoints, source, target, images=None, dtype=None):
         Required when transforming from a rel format to a non-rel
         format.
     dtype: the data type to use when transforming the boxes.
-        Defaults to None, i.e. `keypoints` dtype.
+        When `None` uses a `keypoints` dtype. Defaults to `None`.
     """

     source = source.lower()
```

Contributor: Defaults to the dtype of
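The `dtype=None` fallback being discussed is a common pattern: pass a dtype through when given, otherwise keep the input's dtype. A minimal sketch with a simplified signature (this is not the actual keras_cv `convert_format`):

```python
import numpy as np

def convert_format(keypoints, dtype=None):
    # When `dtype` is None, the dtype of `keypoints` is used unchanged;
    # otherwise the array is cast to the requested dtype.
    arr = np.asarray(keypoints)
    target = arr.dtype if dtype is None else np.dtype(dtype)
    return arr.astype(target)
```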
> why drop the github.com prefix? Will GitHub automagically render this link for us in the UI? That would be cool

> @ianstenbit Oh actually I think it was flake8 picking up a long line