Fix Ivy Failing Test: jax - norms.batch_norm #28920
Closed
Commits (11)
- e03de1c: random.multinomial tests (Ajay6601)
- 2b33ef8: fix: remove unnecessary files (Sam-Armstrong)
- 51846fe: fixing failing tests for norms.batch_norm (Ajay6601)
- daad014: Merge upstream updates into topic branch to incorporate … (Ajay6601)
- 8bf9975: Merge branch 'ivy-llc:main' into main (Ajay6601)
- afce618: reformat layers-conv1d-transpose (Ajay6601)
- 9b31223: reformat (Ajay6601)
- f9364d4: Update ivy/functional/backends/numpy/layers.py (Ajay6601)
- faebe60: Update layers.py (Ajay6601)
- 7c12301: Update layers.py (Ajay6601)
- 7beddf4: Merge branch 'ivy-llc:main' into main (Ajay6601)
Diff (TensorFlow backend `conv1d_transpose`; the signature is retyped to Ivy array types and a docstring is added):

```diff
@@ -160,44 +160,86 @@ def conv1d(


 @with_unsupported_dtypes({"2.15.0 and below": ("bfloat16", "complex")}, backend_version)
 def conv1d_transpose(
-    x: Union[tf.Tensor, tf.Variable],
-    filters: Union[tf.Tensor, tf.Variable],
+    x: Union[ivy.Array, ivy.NativeArray],
+    filters: Union[ivy.Array, ivy.NativeArray],
     strides: Union[int, Tuple[int]],
     padding: str,
     /,
     *,
-    output_shape: Optional[Union[ivy.NativeShape, Sequence[int]]] = None,
+    output_shape: Optional[Union[ivy.Shape, ivy.NativeShape]] = None,
     filter_format: str = "channel_last",
     data_format: str = "NWC",
     dilations: Union[int, Tuple[int]] = 1,
-    bias: Optional[Union[tf.Tensor, tf.Variable]] = None,
-    out: Optional[Union[tf.Tensor, tf.Variable]] = None,
-):
+    bias: Optional[ivy.Array] = None,
+    out: Optional[ivy.Array] = None,
+) -> ivy.Array:
+    """Compute a 1-D transpose convolution given 3-D input x and filters arrays.
+
+    Parameters
+    ----------
+    x
+        Input image *[batch_size,w,d_in]* or *[batch_size,d_in,w]*.
+    filters
+        Convolution filters *[fw,d_out,d_in]*.
+    strides
+        The stride of the sliding window for each dimension of input.
+    padding
+        Either 'SAME' (padding so that the output's shape is the same as the
+        input's), or 'VALID' (padding so that the output's shape is `output_shape`).
+    output_shape
+        Shape of the output (Default value = None)
+    filter_format
+        Either "channel_first" or "channel_last". "channel_first" corresponds to
+        the "IOW" input data format, while "channel_last" corresponds to "WOI".
+    data_format
+        The ordering of the dimensions in the input, one of "NWC" or "NCW". "NWC"
+        corresponds to input with shape (batch_size, width, channels), while "NCW"
+        corresponds to input with shape (batch_size, channels, width).
+    dilations
+        The dilation factor for each dimension of input. (Default value = 1)
+    bias
+        Bias array of shape *[d_out]*.
+    out
+        optional output array, for writing the result to. It must have a shape
+        that the inputs broadcast to.
+
+    Returns
+    -------
+    ret
+        The result of the transpose convolution operation.
+    """
     if ivy.dev(x) == "cpu" and (
         (dilations > 1) if isinstance(dilations, int) else any(d > 1 for d in dilations)
     ):
         raise ivy.utils.exceptions.IvyException(
             "Tensorflow does not support dilations greater than 1 when device is cpu"
         )

     permuted_x = False
     if data_format == "NCW" and ivy.dev(x) == "cpu":
         x = tf.transpose(x, (0, 2, 1))
         data_format = "NWC"
         permuted_x = True

     if filter_format == "channel_first":
         filters = tf.transpose(filters, (2, 1, 0))

     output_shape, padding = _transpose_out_pad(
         x.shape, filters.shape, strides, padding, 1, dilations, data_format
     )

     res = tf.nn.conv1d_transpose(
         x, filters, output_shape, strides, padding, data_format, dilations
     )

     if bias is not None:
         if data_format[1] == "C":
             bias = tf.reshape(bias, [1, -1, 1])
         res = tf.math.add(res, bias)

     if permuted_x:
         res = tf.transpose(res, (0, 2, 1))

     return res
```
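To make the documented shapes concrete, here is a minimal usage sketch (not part of the PR diff above). It assumes the public `ivy.conv1d_transpose` API with the signature documented in the docstring; the shapes, backend choice, and values are chosen purely for illustration.

```python
# Hedged usage sketch (illustration only, not from the PR):
# exercises conv1d_transpose with the shapes described in the docstring.
import ivy

ivy.set_backend("tensorflow")  # the diff above touches the TensorFlow backend

x = ivy.random_normal(shape=(1, 10, 3))       # NWC input: [batch_size, w, d_in]
filters = ivy.random_normal(shape=(3, 4, 3))  # channel_last filters: [fw, d_out, d_in]

res = ivy.conv1d_transpose(
    x,
    filters,
    2,        # strides
    "SAME",   # 'SAME' padding: output width == input width * strides
    data_format="NWC",
    dilations=1,
)

print(res.shape)  # expected (1, 20, 4): width is upsampled by the stride
```

With 'VALID' padding instead, the output width would also depend on the filter width, which is why the docstring points to `output_shape` for that case.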
Sam-Armstrong (reviewer):
I don't understand why you've changed this to refer to ivy.Container when these methods are on ivy.Array. Can you revert your changes to this file?
Ajay6601 (author):
Thank you, Sam. I have changed it.
I thought that in Ivy there are separate implementations for:
- regular array operations (on ivy.Array), and
- container operations (on ivy.Container), which apply a function to each nested array.
So when working with a single array rather than a container, you would use ivy.conv1d_transpose or ivy.static_conv1d_transpose directly on an ivy.Array. That's why I thought a container was needed when handling multiple arrays, but after reading through the docs again I understand it now.
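For readers following the thread, a minimal sketch of the Array/Container distinction discussed above. It assumes the public ivy.Container and ivy.array APIs; the keys and values are made up for illustration.

```python
# Hedged sketch (illustration only): ivy.Container maps operations over its
# nested leaf arrays, whereas a plain ivy.Array is handled directly.
import ivy

ivy.set_backend("numpy")

# A container holding two leaf arrays under the (made-up) keys 'a' and 'b'.
c = ivy.Container(a=ivy.array([1.0, 2.0]), b=ivy.array([3.0, 4.0]))

# Arithmetic on the container is applied to every leaf array.
print(c * 2)

# A single ivy.Array goes through the regular array path of the same function.
x = ivy.array([1.0, 2.0])
print(ivy.multiply(x, 2))
```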