Commit 18a5428

[WebNN] Remove workarounds for TFLite backend (#23406)
The WebNN `cpu` device type may now target backends other than TFLite, such as CoreML. The legacy special-case workarounds for the TFLite backend are therefore removed; the cases they papered over are implementation issues and should be allowed to fail as is. The WebNN EP should instead adhere to WebNN API conformance: all WebNN ops are assumed to be supported, so the per-device-type op support status columns are removed from webnn-operators.md as well.
1 parent f4dc965 commit 18a5428
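With the device-type-specific workarounds gone, an application simply picks a device type when creating the session and lets the browser's WebNN implementation report what it supports. A minimal sketch of selecting the WebNN EP in onnxruntime-web; the option field names (`name`, `deviceType`, `powerPreference`) follow the onnxruntime-web WebNN EP documentation and are assumptions, not taken from this commit:

```javascript
// Hedged sketch: WebNN EP session options for onnxruntime-web.
// Field names are assumptions based on the onnxruntime-web WebNN EP docs.
const webnnEpOptions = {
  name: 'webnn',
  deviceType: 'cpu',         // 'cpu' may now map to TFLite, CoreML, etc.
  powerPreference: 'default',
};

// In a browser this would be used as:
//   const session = await ort.InferenceSession.create('model.onnx', {
//     executionProviders: [webnnEpOptions],
//   });
console.log(webnnEpOptions.name, webnnEpOptions.deviceType);
```

Which backend actually serves a given device type is now entirely up to the browser's WebNN implementation.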

File tree: 5 files changed (+106 −197 lines)

js/web/docs/webnn-operators.md

Lines changed: 105 additions & 103 deletions
@@ -6,108 +6,110 @@ operators and the supported opset domain/versions in **WebNN EP** by ONNX Runtim
 
 (**Note**: ONNX Runtime only *guarantees* support for models stamped with opset version 7 or above for opset domain 'ai.onnx'.)
 
-[WebNN API](https://webmachinelearning.github.io/webnn) provides two device types `cpu` and `gpu` to leverage different on-device accelerators. WebNN API implementation in Chromium uses TFLite XNNPack delegate backend for `cpu` device type and DirectML backend for `gpu` device type. [The op support status](https://webmachinelearning.github.io/webnn-status/) behind these two backends is inconsistent.
+The [WebNN API](https://webmachinelearning.github.io/webnn) is available in the latest versions of Chrome and Edge on Windows,
+Linux, macOS, Android, and ChromeOS behind an <i>"Enables WebNN API"</i> flag. The operator support status may vary across these
+platforms. Check the [WebNN status](https://webmachinelearning.github.io/webnn-status/) for the latest implementation details.
 
 
-| Operator | Opset | WebNN API | WebNN CPU | WebNN GPU | Comments |
-|:------:|:------:|:------:|:-:|:-:|:------|
-| Abs | ai.onnx(7-12, 13+) | abs | || |
-| Add | ai.onnx(7-12, 13, 14+) | add | || |
-| And | ai.onnx(7+) | logicalAnd | || |
-| ArgMax | ai.onnx(7-10, 11, 12, 13+) | argMax | || |
-| ArgMin | ai.onnx(7-10, 11, 12, 13+) | argMin | || |
-| AveragePool | ai.onnx(7-9, 10, 11, 12-18, 19+) | averagePool2d | || Only supports 4-D input, 2-D 'kernel_shape', 'count_include_pad' value is 0 |
-| BatchNormalization | ai.onnx(7-8, 9-13, 14, 15+) | batchNormalization | || Only supports 'training_mode' value is 0, one output |
-| Cast | ai.onnx(7-8, 9-12, 13-18, 19-20, 21+) | cast | || WebNN CPU backend doesn't support casting to uint64 data type |
-| Ceil | ai.onnx(7-12, 13+) | ceil | || |
-| Clip | ai.onnx(7-10, 11, 12, 13+) | clamp | || WebNN CPU backend only supports 3 specific ranges: [0.0, infinity], [-1.0, 1.0], [0.0, 6.0] (Chromium issue: https://issues.chromium.org/issues/326156496) |
-| Concat | ai.onnx(7-10, 11-12, 13+) | concat | || |
-| Conv | ai.onnx(7-10, 11+) | conv2d | || Only supports 3-D or 4-D input and 'W' (weight) |
-| ConvTranspose | ai.onnx(7-10, 11+) | convTranspose2d | || Only supports 3-D or 4-D input and 'W' (weight). WebNN CPU backend only supports default dilations and group |
-| Cos | ai.onnx(7+) | cos | || |
-| CumSum | ai.onnx(11-13, 14+) | cumulativeSum | || 'axis' input should be a constant |
-| Div | ai.onnx(7-12, 13, 14+) | div | || |
-| DequantizeLinear | ai.onnx(10-12, 13-18, 19-20, 21-22, 23+) | dequantizeLinear | || The shape of x_scale should be a subsample of the shape of input |
-| Dropout | ai.onnx(7-9, 10-11, 12, 13-21, 22+) | identity | || Only supports test mode |
-| Einsum | ai.onnx(12+) | reshape, transpose, matmul, reduceSum, mul, triangular | || |
-| Elu | ai.onnx(7+) | elu | || WebNN CPU backend only supports 'alpha' value is 1.0 |
-| Equal | ai.onnx(7-10, 11-12, 13-18, 19+) | equal | || |
-| Erf | ai.onnx(7-9, 10-12, 13+) | erf | || |
-| Exp | ai.onnx(7-12, 13+) | exp | || |
-| Expand | ai.onnx(8-12, 13+) | expand | || 'shape' input should be a constant |
-| Flatten | ai.onnx(7-8, 9-10, 11-12, 13-20, 21+) | reshape | || |
-| Floor | ai.onnx(7-12, 13+) | floor | || |
-| Gather | ai.onnx(7-10, 11-12, 13+) | gather | || |
-| GatherElements | ai.onnx(11-12, 13+) | gatherElements | || |
-| GatherND | ai.onnx(11, 12, 13+) | gatherND | || Only supports 'batch_dims' == 0 |
-| Gelu | ai.onnx(20+) | gelu | || |
-| Gemm | ai.onnx(7-8, 9-10, 11-12, 13+) | gemm | || Only supports 1-D 'C' input |
-| GlobalAveragePool | ai.onnx(7+) | averagePool2d | || Only supports 4-D input |
-| GlobalMaxPool | ai.onnx(7+) | maxPool2d | || Only supports 4-D input |
-| GlobalLpPool| ai.onnx(7+) | l2Pool2d | || Only supports 4-D input, 'p' value is 2 |
-| Greater | ai.onnx(7-8, 9-12, 13+) | greater | || |
-| GreaterOrEqual | ai.onnx(12-15, 16+) | greaterOrEqual | || |
-| GRU | ai.onnx(7-13, 14-21, 22+) | gru | || Only supports 'layout' == 0. 'clip' is not supported. The activation functions in 'activations' must be one of 'Relu', 'Tanh', 'Sigmoid'. Forward and backward activations must be the same if bidirectional. 'sequence_lens' if present should be constant with values equal to the first dimension length of input 'X' |
-| HardSigmoid | ai.onnx(7+) | hardSigmoid | || |
-| HardSwish | ai.onnx(14+) | hardSwish | || |
-| Identity | ai.onnx(7-13, 14-15, 16-18, 19-20, 21+) | identity | || |
-| InstanceNormalization | ai.onnx(7+) | instanceNormalization | || |
-| LayerNormalization | ai.onnx(7-16, 17+) | layerNormalization | || |
-| LeakyRelu | ai.onnx(7-15, 16+) | leakyRelu | || |
-| Less | ai.onnx(7-8, 9-12, 13+) | lesser | || |
-| LessOrEqual | ai.onnx(12-15, 16+) | lesserOrEqual | || |
-| Log | ai.onnx(7-12, 13+) | log | || |
-| LpPool | ai.onnx(7-10, 11-17, 18+) | l2Pool2d | || Only supports 4-D input, 2-D 'kernel_shape', 'p' value is 2 |
-| LRN | ai.onnx(7-12, 13+) | pad, averagePool2d, transpose, add, mul, pow, div | || |
-| LSTM | ai.onnx(7-13, 14-21, 22+) | lstm | || Only supports 'layout' == 0, 'input_forget' == 0. 'clip' is not supported. The activation functions in 'activations' must be one of 'Relu', 'Tanh', 'Sigmoid'. Forward and backward activations must be the same if bidirectional. 'sequence_lens' if present should be constant with values equal to the first dimension length of input 'X' |
-| MatMul | ai.onnx(7-8, 9-12, 13+) | matmul | || |
-| Max | ai.onnx(7, 8-11, 12, 13+) | max | || |
-| MaxPool | ai.onnx(7, 8-9, 10, 11, 12+) | maxPool2d | || Only supports 4-D input, 2-D 'kernel_shape', 'storage_order' != 1, one output |
-| Min | ai.onnx(7, 8-11, 12, 13+) | min | || |
-| Mul | ai.onnx(7-12, 13, 14+) | mul | || |
-| Neg | ai.onnx(7-12, 13+) | neg | || |
-| Not | ai.onnx(7+) | logicalNot | || |
-| Or | ai.onnx(7+) | logicalOr | || |
-| Pad | ai.onnx(7-10, 11-12, 13-17, 18, 19-20, 21+) | pad | || modes == 'wrap' is not supported |
-| Pow | ai.onnx(7-11, 12, 13-14, 15+) | pow | || |
-| PRelu | ai.onnx(7-8, 9-15, 16+) | prelu | || WebNN CPU backend restricts the last dimension of input and slope to be same (Chromium issue: https://issues.chromium.org/issues/335517470) |
-| QuantizeLinear | ai.onnx(10-12, 13-18, 19-20, 21-22, 23+) | quantizeLinear | || The shape of x_scale should be a subsample of the shape of input |
-| Reciprocal | ai.onnx(7-12, 13+) | reciprocal | || |
-| ReduceL1 | ai.onnx(7-10, 11-12, 13-17, 18+) | reduceL1 | || Input 'axes' if present should be a constant |
-| ReduceL2 | ai.onnx(7-10, 11-12, 13-17, 18+) | reduceL2 | || Input 'axes' if present should be a constant |
-| ReduceLogSum| ai.onnx(7-10, 11-12, 13-17, 18+) | reduceLogSum|| | Input 'axes' if present should be a constant |
-| ReduceLogSumExp | ai.onnx(7-10, 11-12, 13-17, 18+) | reduceLogSumExp | || Input 'axes' if present should be a constant |
-| ReduceMax | ai.onnx(7-10, 11, 12, 13-17, 18-19, 20+) | reduceMax | || Input 'axes' if present should be a constant |
-| ReduceMean | ai.onnx(7-10, 11-12, 13-17, 18+) | reduceMean | || Input 'axes' if present should be a constant |
-| ReduceMin | ai.onnx(7-10, 11, 12, 13-17, 18-19, 20+) | reduceMin | || Input 'axes' if present should be a constant |
-| ReduceProd | ai.onnx(7-10, 11-12, 13-17, 18+) | reduceProduct | || Input 'axes' if present should be a constant |
-| ReduceSum | ai.onnx(7-10, 11-12, 13+) | reduceSum | || Input 'axes' if present should be a constant |
-| ReduceSumSquare | ai.onnx(7-10, 11-12, 13-17, 18+) | reduceSumSquare | || Input 'axes' if present should be a constant |
-| Relu | ai.onnx(7-12, 13, 14+) | relu | || |
-| Reshape | ai.onnx(7-12, 13, 14-18, 19-20, 21+) | reshape | || Input 'shape' should be a constant, 0 dimension value in 'shape' is not supported |
-| Resize | ai.onnx(11-12, 13-17, 18, 19+) | resample2d | || Only supports 4-D input, antialias == 0, exclude_outside == 0, keep_aspect_ratio_policy == 'stretch', 'linear' and 'nearest' modes, input 'scales' and 'sizes' if present must be a constant |
-| RotaryEmbedding | com.microsoft(1+) | add, concat, gather, mul, reshape, split | || |
-| ScatterElements | ai.onnx(11-12, 13-15, 16-17, 18+) | scatterElements | || Only supports 'reduction' == 'none' |
-| ScatterND | ai.onnx(11-12, 13-15, 16-17, 18+) | scatterND | || Only supports 'reduction' == 'none' |
-| Shape | ai.onnx(7-12, 13-14, 15-18, 19-20, 21+) | slice | || |
-| SimplifiedLayerNormalization | ai.onnx(1+) | pow, reduceMean, add, sqrt, div, mul | || |
-| Sigmoid | ai.onnx(7-12, 13+) | sigmoid | || |
-| Sign | ai.onnx(9-12, 13+) | sign | || |
-| SkipSimplifiedLayerNormalization | com.microsoft(1+) | pow, reduceMean, add, sqrt, div, mul | || |
-| Softplus | ai.onnx(7+) | softplus | || |
-| Softsign | ai.onnx(7+) | softsign | || |
-| Sin | ai.onnx(7+) | sin | || |
-| Slice | ai.onnx(7-9, 10, 11-12, 13+) | slice, reverse | || Input 'starts', 'ends', 'axes', and 'steps' if present must be a constant |
-| Softmax | ai.onnx(7-10, 11-12, 13+) | softmax | || |
-| Split | ai.onnx(7-10, 11-12, 13-17, 18+) | split | || Input 'split' if present should be a constant |
-| Sqrt | ai.onnx(7-12, 13+) | sqrt | || |
-| Squeeze | ai.onnx(7-10, 11-12, 13-20, 21+) | reshape | || Input 'axes' if present should be a constant |
-| Sub | ai.onnx(7-12, 13, 14+) | sub | || |
-| Tan | ai.onnx(7+) | tan | || |
-| Tanh | ai.onnx(7-12, 13+) | tanh | || |
-| Tile | ai.onnx(7-12, 13+) | tile | || Input 'repeats' should be a constant |
-| Transpose | ai.onnx(7-12, 13-20, 21+) | transpose | || |
-| Trilu | ai.onnx(14+) | triangular | || Input 'k' (option 'diagonal' for WebNN) if present should be a constant |
-| Unsqueeze | ai.onnx(7-10, 11-12, 13-20, 21+) | reshape | || |
-| Where | ai.onnx(7-8, 9-15, 16+) | where | || |
-| Xor | ai.onnx(7+) | logicalXor | || |
+| Operator | Opset | WebNN API | Comments |
+|:------:|:------:|:------:|:------|
+| Abs | ai.onnx(7-12, 13+) | abs | |
+| Add | ai.onnx(7-12, 13, 14+) | add | |
+| And | ai.onnx(7+) | logicalAnd | |
+| ArgMax | ai.onnx(7-10, 11, 12, 13+) | argMax | |
+| ArgMin | ai.onnx(7-10, 11, 12, 13+) | argMin | |
+| AveragePool | ai.onnx(7-9, 10, 11, 12-18, 19+) | averagePool2d | Only supports 4-D input, 2-D 'kernel_shape', 'count_include_pad' value is 0 |
+| BatchNormalization | ai.onnx(7-8, 9-13, 14, 15+) | batchNormalization | Only supports 'training_mode' value is 0, one output |
+| Cast | ai.onnx(7-8, 9-12, 13-18, 19-20, 21+) | cast | |
+| Ceil | ai.onnx(7-12, 13+) | ceil | |
+| Clip | ai.onnx(7-10, 11, 12, 13+) | clamp | |
+| Concat | ai.onnx(7-10, 11-12, 13+) | concat | |
+| Conv | ai.onnx(7-10, 11+) | conv2d | Only supports 3-D or 4-D input and 'W' (weight) |
+| ConvTranspose | ai.onnx(7-10, 11+) | convTranspose2d | Only supports 3-D or 4-D input and 'W' (weight) |
+| Cos | ai.onnx(7+) | cos | |
+| CumSum | ai.onnx(11-13, 14+) | cumulativeSum | 'axis' input should be a constant |
+| Div | ai.onnx(7-12, 13, 14+) | div | |
+| DequantizeLinear | ai.onnx(10-12, 13-18, 19-20, 21-22, 23+) | dequantizeLinear | The shape of x_scale should be a subsample of the shape of input |
+| Dropout | ai.onnx(7-9, 10-11, 12, 13-21, 22+) | identity | Only supports test mode |
+| Einsum | ai.onnx(12+) | reshape, transpose, matmul, reduceSum, mul, triangular | |
+| Elu | ai.onnx(7+) | elu | |
+| Equal | ai.onnx(7-10, 11-12, 13-18, 19+) | equal | |
+| Erf | ai.onnx(7-9, 10-12, 13+) | erf | |
+| Exp | ai.onnx(7-12, 13+) | exp | |
+| Expand | ai.onnx(8-12, 13+) | expand | 'shape' input should be a constant |
+| Flatten | ai.onnx(7-8, 9-10, 11-12, 13-20, 21+) | reshape | |
+| Floor | ai.onnx(7-12, 13+) | floor | |
+| Gather | ai.onnx(7-10, 11-12, 13+) | gather | |
+| GatherElements | ai.onnx(11-12, 13+) | gatherElements | |
+| GatherND | ai.onnx(11, 12, 13+) | gatherND | Only supports 'batch_dims' == 0 |
+| Gelu | ai.onnx(20+) | gelu | |
+| Gemm | ai.onnx(7-8, 9-10, 11-12, 13+) | gemm | Only supports 1-D 'C' input |
+| GlobalAveragePool | ai.onnx(7+) | averagePool2d | Only supports 4-D input |
+| GlobalMaxPool | ai.onnx(7+) | maxPool2d | Only supports 4-D input |
+| GlobalLpPool| ai.onnx(7+) | l2Pool2d | Only supports 4-D input, 'p' value is 2 |
+| Greater | ai.onnx(7-8, 9-12, 13+) | greater | |
+| GreaterOrEqual | ai.onnx(12-15, 16+) | greaterOrEqual | |
+| GRU | ai.onnx(7-13, 14-21, 22+) | gru | Only supports 'layout' == 0. 'clip' is not supported. The activation functions in 'activations' must be one of 'Relu', 'Tanh', 'Sigmoid'. Forward and backward activations must be the same if bidirectional. 'sequence_lens' if present should be constant with values equal to the first dimension length of input 'X' |
+| HardSigmoid | ai.onnx(7+) | hardSigmoid | |
+| HardSwish | ai.onnx(14+) | hardSwish | |
+| Identity | ai.onnx(7-13, 14-15, 16-18, 19-20, 21+) | identity | |
+| InstanceNormalization | ai.onnx(7+) | instanceNormalization | |
+| LayerNormalization | ai.onnx(7-16, 17+) | layerNormalization | |
+| LeakyRelu | ai.onnx(7-15, 16+) | leakyRelu | |
+| Less | ai.onnx(7-8, 9-12, 13+) | lesser | |
+| LessOrEqual | ai.onnx(12-15, 16+) | lesserOrEqual | |
+| Log | ai.onnx(7-12, 13+) | log | |
+| LpPool | ai.onnx(7-10, 11-17, 18+) | l2Pool2d | Only supports 4-D input, 2-D 'kernel_shape', 'p' value is 2 |
+| LRN | ai.onnx(7-12, 13+) | pad, averagePool2d, transpose, add, mul, pow, div | |
+| LSTM | ai.onnx(7-13, 14-21, 22+) | lstm | Only supports 'layout' == 0, 'input_forget' == 0. 'clip' is not supported. The activation functions in 'activations' must be one of 'Relu', 'Tanh', 'Sigmoid'. Forward and backward activations must be the same if bidirectional. 'sequence_lens' if present should be constant with values equal to the first dimension length of input 'X' |
+| MatMul | ai.onnx(7-8, 9-12, 13+) | matmul | |
+| Max | ai.onnx(7, 8-11, 12, 13+) | max | |
+| MaxPool | ai.onnx(7, 8-9, 10, 11, 12+) | maxPool2d | Only supports 4-D input, 2-D 'kernel_shape', 'storage_order' != 1, one output |
+| Min | ai.onnx(7, 8-11, 12, 13+) | min | |
+| Mul | ai.onnx(7-12, 13, 14+) | mul | |
+| Neg | ai.onnx(7-12, 13+) | neg | |
+| Not | ai.onnx(7+) | logicalNot | |
+| Or | ai.onnx(7+) | logicalOr | |
+| Pad | ai.onnx(7-10, 11-12, 13-17, 18, 19-20, 21+) | pad | modes == 'wrap' is not supported |
+| Pow | ai.onnx(7-11, 12, 13-14, 15+) | pow | |
+| PRelu | ai.onnx(7-8, 9-15, 16+) | prelu | |
+| QuantizeLinear | ai.onnx(10-12, 13-18, 19-20, 21-22, 23+) | quantizeLinear | The shape of x_scale should be a subsample of the shape of input |
+| Reciprocal | ai.onnx(7-12, 13+) | reciprocal | |
+| ReduceL1 | ai.onnx(7-10, 11-12, 13-17, 18+) | reduceL1 | Input 'axes' if present should be a constant |
+| ReduceL2 | ai.onnx(7-10, 11-12, 13-17, 18+) | reduceL2 | Input 'axes' if present should be a constant |
+| ReduceLogSum| ai.onnx(7-10, 11-12, 13-17, 18+) | reduceLogSum | Input 'axes' if present should be a constant |
+| ReduceLogSumExp | ai.onnx(7-10, 11-12, 13-17, 18+) | reduceLogSumExp | Input 'axes' if present should be a constant |
+| ReduceMax | ai.onnx(7-10, 11, 12, 13-17, 18-19, 20+) | reduceMax | Input 'axes' if present should be a constant |
+| ReduceMean | ai.onnx(7-10, 11-12, 13-17, 18+) | reduceMean | Input 'axes' if present should be a constant |
+| ReduceMin | ai.onnx(7-10, 11, 12, 13-17, 18-19, 20+) | reduceMin | Input 'axes' if present should be a constant |
+| ReduceProd | ai.onnx(7-10, 11-12, 13-17, 18+) | reduceProduct | Input 'axes' if present should be a constant |
+| ReduceSum | ai.onnx(7-10, 11-12, 13+) | reduceSum | Input 'axes' if present should be a constant |
+| ReduceSumSquare | ai.onnx(7-10, 11-12, 13-17, 18+) | reduceSumSquare | Input 'axes' if present should be a constant |
+| Relu | ai.onnx(7-12, 13, 14+) | relu | |
+| Reshape | ai.onnx(7-12, 13, 14-18, 19-20, 21+) | reshape | Input 'shape' should be a constant, 0 dimension value in 'shape' is not supported |
+| Resize | ai.onnx(11-12, 13-17, 18, 19+) | resample2d | Only supports 4-D input, antialias == 0, exclude_outside == 0, keep_aspect_ratio_policy == 'stretch', 'linear' and 'nearest' modes, input 'scales' and 'sizes' if present must be a constant |
+| RotaryEmbedding | com.microsoft(1+) | add, concat, gather, mul, reshape, split | |
+| ScatterElements | ai.onnx(11-12, 13-15, 16-17, 18+) | scatterElements | Only supports 'reduction' == 'none' |
+| ScatterND | ai.onnx(11-12, 13-15, 16-17, 18+) | scatterND | Only supports 'reduction' == 'none' |
+| Shape | ai.onnx(7-12, 13-14, 15-18, 19-20, 21+) | slice | |
+| SimplifiedLayerNormalization | ai.onnx(1+) | pow, reduceMean, add, sqrt, div, mul | |
+| Sigmoid | ai.onnx(7-12, 13+) | sigmoid | |
+| Sign | ai.onnx(9-12, 13+) | sign | |
+| SkipSimplifiedLayerNormalization | com.microsoft(1+) | pow, reduceMean, add, sqrt, div, mul | |
+| Softplus | ai.onnx(7+) | softplus | |
+| Softsign | ai.onnx(7+) | softsign | |
+| Sin | ai.onnx(7+) | sin | |
+| Slice | ai.onnx(7-9, 10, 11-12, 13+) | slice, reverse | Input 'starts', 'ends', 'axes', and 'steps' if present must be a constant |
+| Softmax | ai.onnx(7-10, 11-12, 13+) | softmax | |
+| Split | ai.onnx(7-10, 11-12, 13-17, 18+) | split | Input 'split' if present should be a constant |
+| Sqrt | ai.onnx(7-12, 13+) | sqrt | |
+| Squeeze | ai.onnx(7-10, 11-12, 13-20, 21+) | reshape | Input 'axes' if present should be a constant |
+| Sub | ai.onnx(7-12, 13, 14+) | sub | |
+| Tan | ai.onnx(7+) | tan | |
+| Tanh | ai.onnx(7-12, 13+) | tanh | |
+| Tile | ai.onnx(7-12, 13+) | tile | Input 'repeats' should be a constant |
+| Transpose | ai.onnx(7-12, 13-20, 21+) | transpose | |
+| Trilu | ai.onnx(14+) | triangular | Input 'k' (option 'diagonal' for WebNN) if present should be a constant |
+| Unsqueeze | ai.onnx(7-10, 11-12, 13-20, 21+) | reshape | |
+| Where | ai.onnx(7-8, 9-15, 16+) | where | |
+| Xor | ai.onnx(7+) | logicalXor | |
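The replacement paragraph above points readers at a browser flag, so code that targets WebNN directly typically feature-detects it first. A hedged sketch: `navigator.ml.createContext({ deviceType })` is the WebNN API entry point, while the injectable `nav` parameter is an illustration-only convenience so the function can also run outside a browser:

```javascript
// Hedged sketch: detect WebNN and prefer a GPU context, falling back to CPU.
// `nav` is an illustration-only parameter; in a browser just call it with no
// arguments and the real `navigator` is used.
async function createWebnnContext(nav = globalThis.navigator) {
  if (!nav || !('ml' in nav)) {
    return null; // WebNN is unavailable (e.g. the browser flag is off)
  }
  try {
    return await nav.ml.createContext({ deviceType: 'gpu' });
  } catch {
    return await nav.ml.createContext({ deviceType: 'cpu' });
  }
}

// Outside a browser there is no `navigator.ml`, so this logs `null`.
createWebnnContext({}).then((ctx) => console.log(ctx));
```

Per-operator limits are then whatever the chosen backend reports, which is exactly why the CPU/GPU status columns no longer belong in the table.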

onnxruntime/core/providers/webnn/builders/impl/activation_op_builder.cc

Lines changed: 0 additions & 28 deletions
@@ -17,10 +17,6 @@ class ActivationOpBuilder : public BaseOpBuilder {
  private:
   Status AddToModelBuilderImpl(ModelBuilder& model_builder, const Node& node,
                                const logging::Logger& logger) const override ORT_MUST_USE_RESULT;
-
-  // Operator support related.
-  bool IsOpSupportedImpl(const InitializedTensorSet& initializers, const Node& node,
-                         WebnnDeviceType device_type, const logging::Logger& logger) const override;
 };
 
 // Add operator related.
@@ -68,30 +64,6 @@ Status ActivationOpBuilder::AddToModelBuilderImpl(ModelBuilder& model_builder,
   return Status::OK();
 }
 
-// Operator support related.
-bool ActivationOpBuilder::IsOpSupportedImpl(const InitializedTensorSet& /* initializers */,
-                                            const Node& node,
-                                            WebnnDeviceType device_type,
-                                            const logging::Logger& logger) const {
-  const auto& input_defs = node.InputDefs();
-  const auto& op_type = node.OpType();
-
-  std::vector<int64_t> input_shape;
-  if (!GetShape(*input_defs[0], input_shape, logger))
-    return false;
-
-  if (op_type == "Elu" && device_type == WebnnDeviceType::CPU) {
-    NodeAttrHelper helper(node);
-    float alpha = helper.Get("alpha", 1.0f);
-    if (alpha != 1.0f) {
-      LOGS(logger, VERBOSE) << "WebNN CPU backend only supports Elu's alpha == 1.0";
-      return false;
-    }
-  }
-
-  return true;
-}
-
 void CreateActivationOpBuilder(const std::string& op_type, OpBuilderRegistrations& op_registrations) {
   if (op_registrations.op_builder_map.count(op_type) > 0)
     return;
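The deleted `IsOpSupportedImpl` rejected Elu on the CPU device type whenever `alpha != 1.0`; with it gone, any alpha is forwarded to WebNN's `elu` op. For reference, the ELU formula that check was guarding (a standalone illustration, not code from the repo):

```javascript
// Standard ELU: x for x > 0, otherwise alpha * (exp(x) - 1).
// The removed TFLite workaround had rejected alpha values other than 1.0
// on the CPU device type; the op itself is defined for any alpha.
function elu(x, alpha = 1.0) {
  return x > 0 ? x : alpha * (Math.exp(x) - 1);
}

console.log(elu(2.0, 0.5));  // positive inputs pass through unchanged
console.log(elu(-1.0, 0.5)); // negative side is scaled by alpha
```

If a particular backend still cannot handle some alpha, that now surfaces as a backend implementation issue rather than an EP-side rejection.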
