[M147] Allow redundant reduction axes, and out of bounds reduction axes
TFLite allows the same axis to be specified in a reduction multiple times, and expects the axis list to be treated like a set, i.e. repeated axes should be deduplicated. Because of the way reshaping works, it can be difficult at delegation or subgraph creation time to determine whether this is happening, so we can't simply decline to delegate such ops.
TFLite also allows specifying reduction of a "scalar" with a reduction axis list of {0} (not {}). This is a super annoying behavior, but we need to handle it, because we can't determine if this is happening at delegation time. I think it is reasonable to simply allow out of bounds reduction axes, and just ignore them. This effectively treats each out-of-bounds reduction axis as an implied dimension of extent 1, which a lot of other things do already (e.g. binary elementwise ops).
As a result of this, I realized that the sum => mean rewrite we do currently is not safe from reshaping. The graph might be equivalent to a mean at the time of construction, but become not so after reshaping. (This would almost certainly be a bug in the client code, but it's a bug that we should not silently fix for the client.)
In addition, I think the sum => mean rewrite probably has negligible impact on performance. I locally modified the layer norm benchmark to use a sum + multiply to implement the two means, and the performance impact is actually an improvement (though very small, and that doesn't really make sense):
```
name time/op time/op vs base
FP32LayerNorm/M:128/N:256/K:512/NormMask:1/process_time/real_time 73.64m ± 2% 74.00m ± 1% ~ (p=0.394 n=6)
FP32LayerNorm/M:128/N:256/K:512/NormMask:2/process_time/real_time 69.82m ± 1% 69.31m ± 1% -0.74% (p=0.041 n=6)
FP32LayerNorm/M:128/N:256/K:512/NormMask:3/process_time/real_time 69.82m ± 2% 69.29m ± 1% ~ (p=0.485 n=6)
FP32LayerNorm/M:128/N:256/K:512/NormMask:4/process_time/real_time 62.28m ± 1% 61.31m ± 1% -1.56% (p=0.009 n=6)
FP32LayerNorm/M:128/N:256/K:512/NormMask:5/process_time/real_time 61.80m ± 1% 61.77m ± 1% ~ (p=0.699 n=6)
FP32LayerNorm/M:128/N:256/K:512/NormMask:6/process_time/real_time 60.46m ± 2% 59.42m ± 2% -1.72% (p=0.041 n=6)
FP32LayerNorm/M:128/N:256/K:512/NormMask:7/process_time/real_time 60.01m ± 2% 59.42m ± 3% ~ (p=0.240 n=6)
geomean 65.21m 64.71m -0.76%
```
To fix this issue, I've replaced this rewrite with a `widen_fp16_accumulators` rewrite that leaves the subgraph mostly intact, but changes the types of the intermediate tensors to fp32 and inserts a convert to fp16 after the division.
(Cherry-picked from commit de3504fd8cfcedf194cd0ae43afb37cdff824aa2.)
PiperOrigin-RevId: 884165426
Bug: 492350403
Change-Id: I74b0d03c6ce57674ca9d16e9f93e7f9f3a37108d

XNNPACK is a highly optimized solution for neural network inference on ARM, x86, WebAssembly, and RISC-V platforms. XNNPACK is not intended for direct use by deep learning practitioners and researchers; instead it provides low-level performance primitives for accelerating high-level machine learning frameworks, such as TensorFlow Lite, TensorFlow.js, PyTorch, ONNX Runtime, ExecuTorch, and MediaPipe.
XNNPACK implements the following neural network operators:
All operators in XNNPACK support NHWC layout, but additionally allow a custom stride along the Channel dimension. Thus, operators can consume a subset of channels in the input tensor, and produce a subset of channels in the output tensor, providing zero-cost Channel Split and Channel Concatenation operations.
The table below presents single-threaded performance of the XNNPACK library on three generations of MobileNet models and three generations of Pixel phones.
| Model | Pixel, ms | Pixel 2, ms | Pixel 3a, ms |
|---|---|---|---|
| FP32 MobileNet v1 1.0X | 82 | 86 | 88 |
| FP32 MobileNet v2 1.0X | 49 | 53 | 55 |
| FP32 MobileNet v3 Large | 39 | 42 | 44 |
| FP32 MobileNet v3 Small | 12 | 14 | 14 |
The following table presents multi-threaded (using as many threads as there are big cores) performance of the XNNPACK library on three generations of MobileNet models and three generations of Pixel phones.
| Model | Pixel, ms | Pixel 2, ms | Pixel 3a, ms |
|---|---|---|---|
| FP32 MobileNet v1 1.0X | 43 | 27 | 46 |
| FP32 MobileNet v2 1.0X | 26 | 18 | 28 |
| FP32 MobileNet v3 Large | 22 | 16 | 24 |
| FP32 MobileNet v3 Small | 7 | 6 | 8 |
Benchmarked on March 27, 2020 with `end2end_bench --benchmark_min_time=5` on an Android/ARM64 build with Android NDK r21 (`bazel build -c opt --config android_arm64 :end2end_bench`) and neural network models with randomized weights and inputs.
The table below presents multi-threaded performance of the XNNPACK library on three generations of MobileNet models and three generations of Raspberry Pi boards.
| Model | RPi Zero W (BCM2835), ms | RPi 2 (BCM2836), ms | RPi 3+ (BCM2837B0), ms | RPi 4 (BCM2711), ms | RPi 4 (BCM2711, ARM64), ms |
|---|---|---|---|---|---|
| FP32 MobileNet v1 1.0X | 3919 | 302 | 114 | 72 | 77 |
| FP32 MobileNet v2 1.0X | 1987 | 191 | 79 | 41 | 46 |
| FP32 MobileNet v3 Large | 1658 | 161 | 67 | 38 | 40 |
| FP32 MobileNet v3 Small | 474 | 50 | 22 | 13 | 15 |
| INT8 MobileNet v1 1.0X | 2589 | 128 | 46 | 29 | 24 |
| INT8 MobileNet v2 1.0X | 1495 | 82 | 30 | 20 | 17 |
Benchmarked on Feb 8, 2022 with `end2end-bench --benchmark_min_time=5` on a Raspbian Buster build with CMake (`./scripts/build-local.sh`) and neural network models with randomized weights and inputs. INT8 inference was evaluated with the per-channel quantization schema.
XNNPACK is based on the QNNPACK library. Over time its codebase diverged significantly, and the XNNPACK API is no longer compatible with QNNPACK.