Commit 3a6d2a3

fix(benchmarks): Bump max_decoding_message_size to 32MiB to fix batch processor benchmarks (#2730)
# Change Summary

Batch processor benchmarks had a 100% signal drop rate because batched payloads exceeded the decompression limit on the backend engine. Bumping the limit fixes the issue for both the continuous and nightly benchmarks.

## What issue does this PR close?

* Closes #2729

## How are these changes tested?

Ran all scenarios locally and observed the dropped rate being 0 (screenshot omitted).

## Are there any user-facing changes?

No
1 parent dd34485 commit 3a6d2a3

1 file changed: 1 addition & 0 deletions


tools/pipeline_perf_test/test_suites/integration/templates/configs/backend/config.yaml.j2

```diff
@@ -42,6 +42,7 @@ groups:
 {%- if receiver_type == "otlp" %}
   protocols:
     grpc:
+      max_decoding_message_size: 32MiB
       listening_addr: "{{ listen_addr }}"
 {%- for k, v in extra_config.items() %}
   {{ k }}: {{ v }}
```
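For context, here is a sketch of what the template fragment above might render to for an OTLP receiver. The `listen_addr` value and the `extra_config` item are illustrative placeholders, not values from this commit:

```yaml
# Hypothetical rendered output of config.yaml.j2 when receiver_type == "otlp".
# Only max_decoding_message_size comes from this commit; the address and the
# extra_config entry below are made-up examples.
protocols:
  grpc:
    max_decoding_message_size: 32MiB     # raised limit so large compressed batches are not dropped
    listening_addr: "127.0.0.1:4317"     # substituted from {{ listen_addr }}
    some_extra_key: some_value           # one entry per extra_config item
```

Because the added line sits under the `grpc:` key, it only affects the gRPC protocol of the OTLP receiver; an HTTP protocol block, if present, would keep its own limits.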
