max_send_limit_bytes setting is not working as expected #523

Open
pratikshavyas opened this issue Dec 23, 2024 · 2 comments

pratikshavyas commented Dec 23, 2024

Describe the bug

As per the README: max_send_limit_bytes - default: nil - Max byte size to send message to avoid MessageSizeTooLarge. For example, if you set 1000000 (message.max.bytes in kafka), messages larger than 1000000 bytes will be dropped.
However, even though I have set max_send_limit_bytes to 5 MB, I see produce requests of 20 MB and 16 MB being sent to Kafka.
Why is fluentd not limiting the message size? Does anything else need to be configured here? Please advise as soon as possible.

To Reproduce

configuration for the plugin:

<match *>
  @type copy
  <store ignore_error>
    <buffer>
      @type file
      @log_level error
      path /logskafka
      timekey 1d
      flush_thread_count 4
      chunk_limit_size 2MB
      overflow_action drop_oldest_chunk
      flush_mode interval
      flush_interval 5s
      total_limit_size 2GB
      max_send_limit_bytes 5000000
    </buffer>
    @type kafka2
    @log_level error
    brokers service:8097
    topic_key topic
    default_topic messages
  </store>
</match>
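
To drive an oversized record through this config, something like the following can be used (a sketch, not part of the original report: it assumes the fluent-logger gem and a forward <source> listening on localhost:24224, which is not shown above):

require 'fluent-logger'

# Emit a single ~6 MB record, well above the 5 MB max_send_limit_bytes,
# so the drop (or lack of it) is easy to observe on the Kafka side.
log = Fluent::Logger::FluentLogger.new(nil, host: 'localhost', port: 24224)

oversized = { 'topic' => 'messages', 'payload' => 'x' * 6_000_000 }
puts(log.post('test.oversized', oversized) ? 'record accepted by fluentd' : 'send failed')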

Expected behavior

Fluentd should limit messages sent to Kafka to 5 MB, since max_send_limit_bytes is set to 5000000.

Your Environment

- Fluentd version: 1.17.0
- fluent-plugin-kafka version: 0.19.2

Your Configuration

Same as the configuration shown under "To Reproduce" above.

Your Error Log

No logs

Additional context

No response

pratikshavyas (Author) commented:

#323 (comment) As per this comment, it looks like the documentation does not have the right information and needs an update. That comment is also a bit ambiguous. Kindly check on this and explain which configurations affect the message size and how they should be set to avoid the MessageSizeTooLarge exception.

daipom moved this to Triage in Fluentd Kanban on Jan 7, 2025

Watson1978 commented Feb 12, 2025

I think you should set the value on the output plugin instead of in the buffer section.

<match *>
  @type copy
  <store ignore_error>
    <buffer>
      @type file
      @log_level error
      path /logskafka
      timekey 1d
      flush_thread_count 4
      chunk_limit_size 2MB
      overflow_action drop_oldest_chunk
      flush_mode interval
      flush_interval 5s
      total_limit_size 2GB
      # max_send_limit_bytes 5000000
    </buffer>

    @type kafka2
    @log_level error
    brokers service:8097
    topic_key topic
    default_topic messages

    max_send_limit_bytes 5000000
  </store>
</match>
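
The reason the placement matters, as far as I can tell, is that max_send_limit_bytes is a parameter of the kafka2 output plugin, not of the buffer, so inside <buffer> it never reaches the code that enforces it. At the output level, kafka2 checks each formatted record's byte size during flush and skips records over the limit, roughly like the following (an illustrative sketch, not the plugin's actual source):

require 'json'

MAX_SEND_LIMIT_BYTES = 5_000_000

records = [
  { 'msg' => 'small event' },
  { 'msg' => 'x' * 6_000_000 }, # oversized: would otherwise hit MessageSizeTooLarge
]

records.each do |record|
  record_buf = JSON.generate(record) # stand-in for the configured formatter
  if record_buf.bytesize > MAX_SEND_LIMIT_BYTES
    warn "record size (#{record_buf.bytesize} bytes) exceeds max_send_limit_bytes, skipping"
    next
  end
  puts "would send #{record_buf.bytesize} bytes to Kafka" # producer call would go here
end

Note also that, as I understand it, the broker enforces its own limit (message.max.bytes, or the topic-level max.message.bytes override), so max_send_limit_bytes should be kept at or below the broker-side limit to actually avoid MessageSizeTooLarge.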
