This repository was archived by the owner on Mar 10, 2026. It is now read-only.
benchmarks/vectorized_channel_shuffle.py (4 changes: 2 additions & 2 deletions)

@@ -38,7 +38,7 @@ class OldChannelShuffle(BaseImageAugmentationLayer):
     `(..., height, width, channels)`, in `"channels_last"` format

     Args:
-        groups: Number of groups to divide the input channels, defaults to 3.
+        groups: Number of groups to divide the input channels. Defaults to `3`.
         seed: Integer. Used to create a random seed.

     Call arguments:
@@ -48,7 +48,7 @@ class OldChannelShuffle(BaseImageAugmentationLayer):
         ` or (width, height, channels)`, with dtype
         tf.float32 / tf.uint8
         training: A boolean argument that determines whether the call should be
-            run in inference mode or training mode, defaults to True.
+            run in inference mode or training mode. Defaults to `True`.

     Usage:
     ```python
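The `groups` argument documented in this hunk controls how the channel axis is partitioned. The actual layer shuffles randomly at call time; the plain-Python sketch below is a hypothetical, deterministic ShuffleNet-style interleave, shown only to illustrate what `groups=3` means for one pixel's channel vector:

```python
def channel_shuffle(pixel, groups=3):
    """Deterministic sketch of group-wise channel shuffling.

    pixel: flat list of channel values; its length must be divisible
    by `groups` (the real layer enforces the same constraint).
    """
    channels = len(pixel)
    if channels % groups != 0:
        raise ValueError("channels must be divisible by groups")
    per_group = channels // groups
    # Reshape to (groups, per_group), then read column by column,
    # which interleaves one channel from each group.
    grid = [pixel[g * per_group:(g + 1) * per_group] for g in range(groups)]
    return [grid[g][i] for i in range(per_group) for g in range(groups)]

print(channel_shuffle([0, 1, 2, 3, 4, 5], groups=3))  # [0, 2, 4, 1, 3, 5]
```

The real layer draws a random permutation per image instead of this fixed interleave, but the channels-divisible-by-groups requirement is the same.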
benchmarks/vectorized_jittered_resize.py (11 changes: 5 additions & 6 deletions)

@@ -87,15 +87,14 @@ class OldJitteredResize(BaseImageAugmentationLayer):
         This factor is used to scale the input image.
         To replicate the results of the MaskRCNN paper pass `(0.8, 1.25)`.
     crop_size: (Optional) the size of the image to crop from the scaled
-        image, defaults to `target_size` when not provided.
+        image, when not provided. Defaults to `target_size`.
     bounding_box_format: The format of bounding boxes of input boxes.
         Refer to
-        https://github.com/keras-team/keras-cv/blob/master/keras_cv/bounding_box/converters.py
+        keras-team/keras-cv/blob/master/keras_cv/bounding_box/converters.py
         for more details on supported bounding box formats.
-    interpolation: String, the interpolation method, defaults to
-        `"bilinear"`. Supports `"bilinear"`, `"nearest"`, `"bicubic"`,
-        `"area"`, `"lanczos3"`, `"lanczos5"`, `"gaussian"`,
-        `"mitchellcubic"`.
+    interpolation: String, the interpolation method. Supports `"bilinear"`,
+        `"nearest"`, `"bicubic"`, `"area"`, `"lanczos3"`, `"lanczos5"`,
+        `"gaussian"`, `"mitchellcubic"`. Defaults to `"bilinear"`.
     seed: (Optional) integer to use as the random seed.
     """

Review comment (Contributor): why drop the github.com prefix? Will GitHub automagically render this link for us in the UI? That would be cool

Review comment (Author): @ianstenbit Oh actually I think it was flake8 picking up a long line
benchmarks/vectorized_mosaic.py (7 changes: 4 additions & 3 deletions)

@@ -50,15 +50,16 @@ class OldMosaic(BaseImageAugmentationLayer):
         sampled between the two values for every image augmented. If a
         single float is used, a value between `0.0` and the passed float is
         sampled. In order to ensure the value is always the same, please
-        pass a tuple with two identical floats: `(0.5, 0.5)`. Defaults to
-        (0.25, 0.75).
+        pass a tuple with two identical floats: `(0.5, 0.5)`.
+        Defaults to `(0.25, 0.75)`.
     bounding_box_format: a case-insensitive string (for example, "xyxy") to
         be passed if bounding boxes are being augmented by this layer.
         Each bounding box is defined by at least these 4 values. The inputs
         may contain additional information such as classes and confidence
         after these 4 values but these values will be ignored and returned
         as is. For detailed information on the supported formats, see the
-        [KerasCV bounding box documentation](https://keras.io/api/keras_cv/bounding_box/formats/). Defaults to None.
+        [KerasCV bounding box documentation](https://keras.io/api/keras_cv/bounding_box/formats/).
+        Defaults to `None`.
     seed: integer, used to create a random seed.

     References:
benchmarks/vectorized_random_brightness.py (10 changes: 5 additions & 5 deletions)

@@ -46,11 +46,11 @@ class OldRandomBrightness(BaseImageAugmentationLayer):
         is provided, eg, 0.2, then -0.2 will be used for lower bound and 0.2
         will be used for upper bound.
     value_range: Optional list/tuple of 2 floats for the lower and upper limit
-        of the values of the input data, defaults to [0.0, 255.0]. Can be
-        changed to e.g. [0.0, 1.0] if the image input has been scaled before
-        this layer. The brightness adjustment will be scaled to this range, and
-        the output values will be clipped to this range.
-    seed: optional integer, for fixed RNG behavior.
+        of the values of the input data. Can be changed to e.g. [0.0, 1.0] if
+        the image input has been scaled before this layer. The brightness
+        adjustment will be scaled to this range, and the output values will be
+        clipped to this range. Defaults to `[0.0, 255.0]`.
+    seed: optional integer, for fixed RNG behavior. Defaults to [0.0, 255.0]
     Inputs: 3D (HWC) or 4D (NHWC) tensor, with float or int dtype. Input pixel
         values can be of any range (e.g. `[0., 1.)` or `[0, 255]`)
     Output: 3D (HWC) or 4D (NHWC) tensor with brightness adjusted based on the
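The `value_range` semantics described in this hunk (scale the sampled brightness factor to the span of the range, then clip the result back into the range) can be sketched in plain Python. `apply_brightness` is a hypothetical helper, not the layer's actual implementation:

```python
def apply_brightness(pixel, factor, value_range=(0.0, 255.0)):
    # The sampled factor is in [-1, 1]; scale it by the span of the
    # value range, then clip the adjusted pixel back into the range.
    lo, hi = value_range
    adjusted = pixel + factor * (hi - lo)
    return min(hi, max(lo, adjusted))

print(apply_brightness(100.0, 0.5))             # 100 + 0.5 * 255 = 227.5
print(apply_brightness(200.0, 0.5))             # 327.5, clipped to 255.0
print(apply_brightness(0.5, -0.5, (0.0, 1.0)))  # scaled inputs: clipped to 0.0
```

This is why the docstring suggests changing `value_range` to `[0.0, 1.0]` for pre-scaled inputs: a factor of 0.5 then shifts by 0.5 rather than by 127.5.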
benchmarks/vectorized_random_flip.py (6 changes: 3 additions & 3 deletions)

@@ -58,9 +58,9 @@ class OldRandomFlip(BaseImageAugmentationLayer):

     Arguments:
         mode: String indicating which flip mode to use. Can be `"horizontal"`,
-            `"vertical"`, or `"horizontal_and_vertical"`, defaults to
-            `"horizontal"`. `"horizontal"` is a left-right flip and `"vertical"` is
-            a top-bottom flip.
+            `"vertical"`, or `"horizontal_and_vertical"`. `"horizontal"` is a
+            left-right flip and `"vertical"` is a top-bottom flip.
+            Defaults to `"horizontal"`.
         seed: Integer. Used to create a random seed.
         bounding_box_format: The format of bounding boxes of input dataset.
             Refer to
benchmarks/vectorized_random_zoom.py (4 changes: 2 additions & 2 deletions)

@@ -56,10 +56,10 @@ class OldRandomZoom(BaseImageAugmentationLayer):
     represented as a single float, this value is used for both the upper and
     lower bound. For instance, `width_factor=(0.2, 0.3)` result in an output
     zooming out between 20% to 30%. `width_factor=(-0.3, -0.2)` result in an
-    output zooming in between 20% to 30%. Defaults to `None`, i.e., zooming
+    output zooming in between 20% to 30%. When None: zooming
     vertical and horizontal directions by preserving the aspect ratio. If
     height_factor=0 and width_factor=None, it would result in images with
-    no zoom at all.
+    no zoom at all. Defaults to `None`.
     fill_mode: Points outside the boundaries of the input are filled according
         to the given mode (one of `{"constant", "reflect", "wrap", "nearest"}`).
         - *reflect*: `(d c b a | a b c d | d c b a)` The input is extended by
benchmarks/vectorized_randomly_zoomed_crop.py (4 changes: 2 additions & 2 deletions)

@@ -52,8 +52,8 @@ class OldRandomlyZoomedCrop(BaseImageAugmentationLayer):
         tasks, this should be `(3/4, 4/3)`. To perform a no-op provide the
         value `(1.0, 1.0)`.
     interpolation: (Optional) A string specifying the sampling method for
-        resizing, defaults to "bilinear".
-    seed: (Optional) Used to create a random seed, defaults to None.
+        resizing. Defaults to "bilinear".
+    seed: (Optional) Used to create a random seed. Defaults to `None`.
     """

    def __init__(
@@ -153,7 +153,7 @@
     batch size based on the number of accelerators being used.
     """

-# Try to detect an available TPU. If none is present, defaults to
+# Try to detect an available TPU. If none is present. Defaults to
 # MirroredStrategy
 try:
     tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect()

Review comment (Contributor): Let's undo this change for all the comments in examples.
@@ -49,7 +49,7 @@

 # parameters from FasterRCNN [paper](https://arxiv.org/pdf/1506.01497.pdf)

-# Try to detect an available TPU. If none is present, defaults to
+# Try to detect an available TPU. If none is present. Defaults to
 # MirroredStrategy
 try:
     tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect()
examples/training/object_detection/pascal_voc/retinanet.py (2 changes: 1 addition & 1 deletion)

@@ -54,7 +54,7 @@

 # parameters from RetinaNet [paper](https://arxiv.org/abs/1708.02002)

-# Try to detect an available TPU. If none is present, defaults to
+# Try to detect an available TPU. If none is present. Defaults to
 # MirroredStrategy
 try:
     tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect()
@@ -70,7 +70,7 @@
     logging.info("mixed precision training enabled")
     keras.mixed_precision.set_global_policy("mixed_float16")

-# Try to detect an available TPU. If none is present, defaults to
+# Try to detect an available TPU. If none is present. Defaults to
 # MirroredStrategy
 try:
     tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect()
keras_cv/bounding_box/converters.py (2 changes: 1 addition & 1 deletion)

@@ -366,7 +366,7 @@ def convert_format(
         converters to compute relative pixel values of the bounding box
         dimensions. Required when transforming from a rel format to a
         non-rel format.
-    dtype: the data type to use when transforming the boxes, defaults to
+    dtype: the data type to use when transforming the boxes. Defaults to
         `tf.float32`.
     """
     if isinstance(boxes, dict):
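The converter this hunk documents translates boxes between formats such as `xyxy` and center `xywh`. A plain-Python sketch of one such conversion pair follows; the helpers are hypothetical and sidestep the tensor handling and `dtype` casting the real `convert_format` performs:

```python
def xyxy_to_center_xywh(box):
    # box: [left, top, right, bottom] in absolute pixel coordinates
    x1, y1, x2, y2 = box
    return [(x1 + x2) / 2.0, (y1 + y2) / 2.0, x2 - x1, y2 - y1]

def center_xywh_to_xyxy(box):
    # box: [center_x, center_y, width, height]
    cx, cy, w, h = box
    return [cx - w / 2.0, cy - h / 2.0, cx + w / 2.0, cy + h / 2.0]

print(xyxy_to_center_xywh([10, 20, 30, 60]))  # [20.0, 40.0, 20, 40]
```

The two helpers are exact inverses, which is the invariant any format-conversion pair in `converters.py` must satisfy.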
keras_cv/bounding_box/to_ragged.py (3 changes: 2 additions & 1 deletion)

@@ -43,7 +43,8 @@ def to_ragged(bounding_boxes, sentinel=-1, dtype=tf.float32):
     bounding_boxes: a Tensor of bounding boxes. May be batched, or
         unbatched.
     sentinel: The value indicating that a bounding box does not exist at the
-        current index, and the corresponding box is padding, defaults to -1.
+        current index, and the corresponding box is padding.
+        Defaults to `-1`.
     dtype: the data type to use for the underlying Tensors.
 Returns:
     dictionary of `tf.RaggedTensor` or 'tf.Tensor' containing the filtered
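The sentinel mechanic described here (padding rows marked with `-1` are dropped per image, leaving ragged per-image box lists) can be sketched without TensorFlow. This hypothetical list-based version shows only the filtering rule, not the `tf.RaggedTensor` construction:

```python
def drop_sentinel_boxes(batched_boxes, sentinel=-1):
    # Each image's boxes are padded to a common length with rows of the
    # sentinel value; drop every row whose entries are all the sentinel.
    return [
        [box for box in image if any(v != sentinel for v in box)]
        for image in batched_boxes
    ]

padded = [
    [[10, 10, 20, 20], [-1, -1, -1, -1]],  # one real box, one pad row
    [[0, 0, 5, 5], [2, 2, 8, 8]],          # two real boxes, no padding
]
print(drop_sentinel_boxes(padded))
```

After filtering, images carry different numbers of boxes, which is exactly why the real function returns ragged tensors.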
keras_cv/datasets/imagenet/load.py (8 changes: 4 additions & 4 deletions)

@@ -81,12 +81,12 @@ def load(
     batch_size: how many instances to include in batches after loading.
         Should only be specified if img_size is specified (so that images
        can be resized to the same size before batching).
-    shuffle: whether to shuffle the dataset, defaults to True.
+    shuffle: whether to shuffle the dataset. Defaults to `True`.
     shuffle_buffer: the size of the buffer to use in shuffling.
     reshuffle_each_iteration: whether to reshuffle the dataset on every
-        epoch, defaults to False.
-    img_size: the size to resize the images to, defaults to None, indicating
-        that images should not be resized.
+        epoch. Defaults to `False`.
+    img_size: the size to resize the images to, when None, this indicates
+        that images should not be resized. Defaults to `None`.

 Returns:
     tf.data.Dataset containing ImageNet. Each entry is a dictionary

Review comment (Contributor): when None

Review comment (Author): Makes sense to me: "the size to resize the images to, when None, this indicates that images should not be resized. Defaults to None."

Review comment (Contributor): Sorry I just meant use backticks for None. GitHub is rendering my backticks which makes these comments less clear 😓
keras_cv/datasets/pascal_voc/load.py (6 changes: 3 additions & 3 deletions)

@@ -66,10 +66,10 @@ def load(
         for more details on supported bounding box formats.
     batch_size: how many instances to include in batches after loading
     shuffle_buffer: the size of the buffer to use in shuffling.
-    shuffle_files: (Optional) whether to shuffle files, defaults to
-        True.
+    shuffle_files: (Optional) whether to shuffle files. Defaults to
+        `True`.
     dataset: (Optional) the PascalVOC dataset to load from. Should be either
-        'voc/2007' or 'voc/2012', defaults to 'voc/2007'.
+        'voc/2007' or 'voc/2012'. Defaults to 'voc/2007'.

 Returns:
     tf.data.Dataset containing PascalVOC. Each entry is a dictionary
keras_cv/datasets/pascal_voc/segmentation.py (4 changes: 2 additions & 2 deletions)

@@ -484,8 +484,8 @@ def load(
         dataset. Defaults to `sbd_train`.
     data_dir: string, local directory path for the loaded data. This will be
         used to download the data file, and unzip. It will be used as a
-        cache directory. Defaults to None, and `~/.keras/pascal_voc_2012`
-        will be used.
+        cache directory. When `None`: `~/.keras/pascal_voc_2012`
+        will be used. Defaults to `None`.
     """
     supported_split_value = [
         "train",

Review comment (Contributor): Maybe best to just say defaults to ~/.keras/pascal_voc_2012?

Review comment (Author): But it doesn't default to that value? It computes to it, sure, but defaults to None.
keras_cv/datasets/waymo/load.py (4 changes: 2 additions & 2 deletions)

@@ -48,11 +48,11 @@ def load(
         tfrecords in the Waymo Open Dataset, or a list of strings pointing
         to the tfrecords themselves
     transformer: a Python function which transforms a Waymo Open Dataset
-        Frame object into tensors, defaults to convert range image to point
+        Frame object into tensors. Defaults to convert range image to point
         cloud.
     output_signature: the type specification of the tensors created by the
         transformer. This is often a dictionary from feature column names to
-        tf.TypeSpecs, defaults to point cloud representations of Waymo Open
+        tf.TypeSpecs. Defaults to point cloud representations of Waymo Open
         Dataset data.

 Returns:
keras_cv/keypoint/converters.py (2 changes: 1 addition & 1 deletion)

@@ -110,7 +110,7 @@ def convert_format(keypoints, source, target, images=None, dtype=None):
         Required when transforming from a rel format to a non-rel
         format.
     dtype: the data type to use when transforming the boxes.
-        Defaults to None, i.e. `keypoints` dtype.
+        When `None` uses a `keypoints` dtype. Defaults to `None`.
     """

     source = source.lower()

Review comment (Contributor): Defaults to the dtype of keypoints?
keras_cv/layers/feature_pyramid.py (10 changes: 5 additions & 5 deletions)

@@ -55,17 +55,17 @@ class FeaturePyramid(keras.layers.Layer):
     max_level: a python int for the highest level of the pyramid for
         feature extraction.
     num_channels: an integer representing the number of channels for the FPN
-        operations, defaults to 256.
+        operations. Defaults to 256.
     lateral_layers: a python dict with int keys that matches to each of the
         pyramid level. The values of the dict should be `keras.Layer`, which
         will be called with feature activation outputs from backbone at each
-        level. Defaults to None, and a `keras.Conv2D` layer with kernel 1x1
-        will be created for each pyramid level.
+        level. When None: a `keras.Conv2D` layer with kernel 1x1
+        will be created for each pyramid level. Defaults to `None`.
     output_layers: a python dict with int keys that matches to each of the
         pyramid level. The values of the dict should be `keras.Layer`, which
         will be called with feature inputs and merged result from upstream
-        levels. Defaults to None, and a `keras.Conv2D` layer with kernel 3x3
-        will be created for each pyramid level.
+        levels. When `None`: a `keras.Conv2D` layer with kernel 3x3
+        will be created for each pyramid level. Defaults to `None`.

 Sample Usage:
 ```python

Review comment (Contributor): When None

Review comment (Contributor): (Meant to indicate that backticks should be used)
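The `lateral_layers`/`output_layers` hooks above plug into the FPN's top-down pathway: each level's (lateral) features are merged with the upsampled result from the level above. A deliberately tiny, hypothetical 1-D sketch of that merge order follows; the real layer operates on 4-D feature maps with `Conv2D` laterals and outputs:

```python
def upsample2x(row):
    # Nearest-neighbour upsample of a 1-D feature row.
    return [v for v in row for _ in range(2)]

def fpn_merge(features):
    # features: dict mapping pyramid level -> 1-D feature row, where
    # coarser (higher) levels are half the length of the level below.
    levels = sorted(features)                     # e.g. [3, 4, 5]
    outputs = {levels[-1]: features[levels[-1]]}  # start at the coarsest level
    for lvl in reversed(levels[:-1]):
        # Top-down pathway: upsample the coarser output and add it in.
        up = upsample2x(outputs[lvl + 1])
        outputs[lvl] = [a + b for a, b in zip(features[lvl], up)]
    return outputs

feats = {3: [1, 1, 1, 1], 4: [2, 2], 5: [3]}
print(fpn_merge(feats))  # {5: [3], 4: [5, 5], 3: [6, 6, 6, 6]}
```

The dict-of-levels shape mirrors the layer's int-keyed `lateral_layers` and `output_layers` arguments.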
keras_cv/layers/fusedmbconv.py (2 changes: 1 addition & 1 deletion)

@@ -66,7 +66,7 @@ class FusedMBConvBlock(layers.Layer):
     activation: default "swish", the activation function used between
         convolution operations
     survival_probability: float, the optional dropout rate to apply before
-        the output convolution, defaults to 0.8
+        the output convolution. Defaults to 0.8

 Returns:
     A `tf.Tensor` representing a feature map, passed through the FusedMBConv
keras_cv/layers/mbconv.py (2 changes: 1 addition & 1 deletion)

@@ -79,7 +79,7 @@ def __init__(
     activation: default "swish", the activation function used between
         convolution operations
     survival_probability: float, the optional dropout rate to apply
-        before the output convolution, defaults to 0.8
+        before the output convolution. Defaults to 0.8

 Returns:
     A `tf.Tensor` representing a feature map, passed through the MBConv
keras_cv/layers/object_detection/anchor_generator.py (6 changes: 3 additions & 3 deletions)

@@ -47,7 +47,7 @@ class AnchorGenerator(keras.layers.Layer):
     strides: iterable of ints that represent the anchor stride size between
         center of anchors at each scale.
     clip_boxes: whether to clip generated anchor boxes to the image
-        size, defaults to `False`.
+        size. Defaults to `False`.

 Usage:
 ```python

@@ -213,8 +213,8 @@ class _SingleAnchorGenerator:
     stride: A single int represents the anchor stride size between center of
         each anchor.
     clip_boxes: Boolean to represent whether the anchor coordinates should be
-        clipped to the image size, defaults to `False`.
-    dtype: (Optional) The data type to use for the output anchors, defaults to
+        clipped to the image size. Defaults to `False`.
+    dtype: (Optional) The data type to use for the output anchors. Defaults to
         'float32'.

 """
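The `strides` and `clip_boxes` arguments documented in these hunks can be illustrated with a hypothetical plain-Python sketch: strides place one anchor center per grid cell (offset to the cell's middle, the usual convention), and clipping constrains coordinates to the image extent. This is not the layer's implementation, only the geometry:

```python
def anchor_centers(image_size, stride):
    # One anchor center per stride step along an axis, offset by
    # stride / 2 so anchors sit at cell centers.
    return [stride / 2 + i * stride for i in range(image_size // stride)]

def clip_box(box, image_size):
    # clip_boxes=True behaviour: constrain every coordinate to [0, image_size].
    return [min(max(v, 0), image_size) for v in box]

print(anchor_centers(8, 4))         # [2.0, 6.0]
print(clip_box([-3, 2, 10, 6], 8))  # [0, 2, 8, 6]
```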
@@ -32,13 +32,13 @@ class MultiClassNonMaxSuppression(keras.layers.Layer):
     confidence.
     iou_threshold: a float value in the range [0, 1] representing the minimum
         IoU threshold for two boxes to be considered same for suppression.
-        Defaults to 0.5.
+        Defaults to `0.5`.
     confidence_threshold: a float value in the range [0, 1]. All boxes with
-        confidence below this value will be discarded, defaults to 0.5.
+        confidence below this value will be discarded. Defaults to `0.5`.
     max_detections: the maximum detections to consider after nms is applied. A
-        large number may trigger significant memory overhead, defaults to 100.
+        large number may trigger significant memory overhead. Defaults to `100`.
     max_detections_per_class: the maximum detections to consider per class
-        after nms is applied, defaults to 100.
+        after nms is applied. Defaults to `100`.
     """  # noqa: E501

     def __init__(
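The three thresholds documented in this hunk drive the standard greedy NMS rule. The sketch below is not KerasCV's implementation (which uses TensorFlow ops and per-class handling); it is a hypothetical plain-Python version showing how `confidence_threshold`, `iou_threshold`, and `max_detections` interact:

```python
def iou(a, b):
    # Intersection-over-union for two xyxy boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5, confidence_threshold=0.5,
        max_detections=100):
    # Drop low-confidence boxes, then greedily keep the highest-scoring
    # box and suppress any remaining box that overlaps it too much.
    order = sorted((i for i, s in enumerate(scores) if s >= confidence_threshold),
                   key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if len(keep) >= max_detections:
            break
        if all(iou(boxes[i], boxes[j]) < iou_threshold for j in keep):
            keep.append(i)
    return keep

boxes = [[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]]
scores = [0.9, 0.8, 0.4]
print(nms(boxes, scores))  # [0]: box 1 suppressed (IoU ~0.68), box 2 below confidence
```

Raising `iou_threshold` to 0.7 would keep box 1 as well, which is why the default of 0.5 matters.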
keras_cv/layers/object_detection/roi_sampler.py (4 changes: 2 additions & 2 deletions)

@@ -53,9 +53,9 @@ class _ROISampler(keras.layers.Layer):
     background_class: the background class which is used to map returned the
         sampled ground truth which is classified as background.
     num_sampled_rois: the number of sampled proposals per image for
-        further (loss) calculation, defaults to 256.
+        further (loss) calculation. Defaults to `256`.
     append_gt_boxes: boolean, whether gt_boxes will be appended to rois
-        before sample the rois, defaults to True.
+        before sample the rois. Defaults to True.
     """  # noqa: E501

     def __init__(

Review comment (Contributor): `True`
keras_cv/layers/preprocessing/aug_mix.py (9 changes: 5 additions & 4 deletions)

@@ -44,15 +44,16 @@ class AugMix(BaseImageAugmentationLayer):
     `keras_cv.FactorSampler`. A value is sampled from the provided
     range. If a float is passed, the range is interpreted as
     `(0, severity)`. This value represents the level of strength of
-    augmentations and is in the range [0, 1]. Defaults to 0.3.
+    augmentations and is in the range [0, 1]. Defaults to `0.3`.
     num_chains: an integer representing the number of different chains to
-        be mixed, defaults to 3.
+        be mixed. Defaults to 3.
     chain_depth: an integer or range representing the number of
         transformations in the chains. If a range is passed, a random
         `chain_depth` value sampled from a uniform distribution over the
-        given range is called at the start of the chain. Defaults to [1,3].
+        given range is called at the start of the chain.
+        Defaults to `[1,3]`.
     alpha: a float value used as the probability coefficients for the
-        Beta and Dirichlet distributions, defaults to 1.0.
+        Beta and Dirichlet distributions. Defaults to 1.0.
     seed: Integer. Used to create a random seed.

 References:

Review comment (Contributor): Let's be consistent with whether we put backticks around numbers. It seems like in most cases we omit them, so let's do that throughout.
keras_cv/layers/preprocessing/channel_shuffle.py (2 changes: 1 addition & 1 deletion)

@@ -32,7 +32,7 @@ class ChannelShuffle(VectorizedBaseImageAugmentationLayer):
     `(..., height, width, channels)`, in `"channels_last"` format

 Args:
-    groups: Number of groups to divide the input channels, defaults to 3.
+    groups: Number of groups to divide the input channels. Defaults to 3.
     seed: Integer. Used to create a random seed.

 Usage:
keras_cv/layers/preprocessing/cut_mix.py (4 changes: 2 additions & 2 deletions)

@@ -28,8 +28,8 @@ class CutMix(BaseImageAugmentationLayer):
 Args:
     alpha: Float between 0 and 1. Inverse scale parameter for the gamma
         distribution. This controls the shape of the distribution from which
-        the smoothing values are sampled. Defaults to 1.0, which is a
-        recommended value when training an imagenet1k classification model.
+        the smoothing values are sampled. 1.0 is a recommended value when
+        training an imagenet1k classification model. Defaults to `1.0`.
     seed: Integer. Used to create a random seed.
 References:
     - [CutMix paper]( https://arxiv.org/abs/1905.04899).
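The `alpha` parameter above shapes the distribution from which CutMix samples its mixing ratio `lam`; given `lam`, the pasted patch covers a `1 - lam` fraction of the image area, so each patch side scales by `sqrt(1 - lam)`. A hedged sketch of just that geometry (random patch placement and label mixing in the real layer are omitted):

```python
import math

def cut_size(lam, height, width):
    # Patch area fraction is (1 - lam), so each side of the patch
    # scales by sqrt(1 - lam) relative to the image.
    ratio = math.sqrt(1.0 - lam)
    return int(height * ratio), int(width * ratio)

print(cut_size(0.75, 100, 100))  # (50, 50): the patch covers a quarter of the area
```

With the documented `alpha=1.0`, `lam` is sampled uniformly on [0, 1], so patch sizes vary across the full range.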