Describe the bug
We are using the Fluentd Kafka output plugin (rdkafka2) to send data to Kafka, and we noticed behavior of max_send_limit_bytes when compression is enabled that we would like to confirm is expected or a known limitation.
Our configuration enables gzip compression with a 1MB send limit:
<match **>
  @type rdkafka2
  brokers <broker-list>
  compression_codec gzip
  max_send_limit_bytes 1m
</match>
We observed that max_send_limit_bytes is enforced against the uncompressed payload size. Even when the compressed batch is below 1MB, Fluentd refuses to send the data if the uncompressed size exceeds 1MB. This indicates that the size check happens before rdkafka applies compression.
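The effect can be illustrated outside Fluentd with plain Ruby (Zlib is from the standard library; the 1 MiB constant and the sample payload are ours for illustration, not the plugin's code): a highly compressible batch can exceed the limit uncompressed while its compressed form fits comfortably.

```ruby
require "zlib"

# Hypothetical stand-in for max_send_limit_bytes 1m
MAX_SEND_LIMIT_BYTES = 1 * 1024 * 1024

# Repetitive log data: > 1 MiB raw, but compresses very well.
payload    = "repeated log line\n" * 70_000
compressed = Zlib::Deflate.deflate(payload)

puts payload.bytesize > MAX_SEND_LIMIT_BYTES    # true  -> record would be rejected
puts compressed.bytesize > MAX_SEND_LIMIT_BYTES # false -> would fit after compression
```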
Can you please confirm if this behavior is expected?
Additionally, if this is the intended behavior, is there any Fluentd or rdkafka configuration that allows enforcing the 1MB limit after compression is applied?
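One avenue we considered is forwarding librdkafka client options through the plugin. The rdkafka_options block below reflects our understanding of the pass-through syntax; message.max.bytes is a real librdkafka setting, but we have not confirmed whether it is evaluated before or after compression, so this is only a sketch of what we tried to investigate:

```
<match **>
  @type rdkafka2
  brokers <broker-list>
  compression_codec gzip
  rdkafka_options {
    "message.max.bytes": 1048576
  }
</match>
```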
To Reproduce
NA
Expected behavior
NA
Your Environment
- Fluentd version: 1.19.0
- TD Agent version: 6.0.0
- fluent-plugin-kafka version: 0.19.5
- ruby-kafka version: 1.5.0
- rdkafka version: 0.21.0
- Operating system: rocky8
- Kernel version: 5.14
Your Configuration
NA
Your Error Log
NA
Additional context
No response