Allow output plugins to configure a max chunk size #1938
Comments
Hi @PettitWesley, I am wondering what the current progress on this issue is right now?
@JeffLuoo AFAIK, no work has been done on it yet.
The same problem occurs when sending GELF over HTTP to Graylog. Is there anything I should put in the configuration file to get all messages into Graylog? :)
@Robert-turbo were you able to solve this problem somehow?
@ciastooo I started to use the tcp output, with Traefik and a TCP route (with SSL) in front of Graylog.
@PettitWesley Is this problem solved in any recent version?
@mohitjangid1512 No work has been done AFAIK on chunk sizing for outputs.
Alternatively, compromising between options 1 and 2, we could write some middleware that handles chunk splitting and chunk-fragment retries and wraps the flush function. This could potentially limit changes to the AWS plugins and not require any changes to core code. The middleware would take the parameters flush_function_ptr, chunk_fragment_size, and middleware_context, and consist of:
On retry, the chunk will be looked up and the chunk fragments will be resumed at index successful_chunks + 1. This would require no code change in each plugin's "flush" function; instead, an additional function called flush_wrapper would be created that calls the middleware with the flush function's pointer, chunk_fragment_size, and middleware_context. Just a thought.
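A minimal sketch in C of the wrapper idea described above. The names here (flush_wrapper, flush_middleware_ctx, MW_OK, the simplified flush signature) are hypothetical illustrations, not part of Fluent Bit's actual API, and a real version would need to split on record boundaries rather than raw byte offsets.

```c
#include <stdio.h>
#include <stddef.h>
#include <string.h>

/* Return codes mirroring the plugin's OK / RETRY semantics (illustrative) */
#define MW_OK    0
#define MW_RETRY 1

/* Signature of the wrapped plugin flush function (simplified, hypothetical) */
typedef int (*flush_fn)(const void *data, size_t bytes, void *plugin_ctx);

/* Per-chunk state the middleware keeps across retries (hypothetical) */
struct flush_middleware_ctx {
    size_t chunk_fragment_size;   /* max bytes sent per call */
    size_t successful_fragments;  /* fragments already delivered */
    void  *plugin_ctx;            /* opaque plugin context */
};

/*
 * flush_wrapper: split the chunk into fragments of at most
 * chunk_fragment_size bytes and call the real flush once per fragment.
 * On a retry, delivery resumes at successful_fragments, so fragments
 * that already succeeded are not re-sent.
 */
static int flush_wrapper(flush_fn flush, const void *chunk, size_t chunk_bytes,
                         struct flush_middleware_ctx *mw)
{
    size_t frag  = mw->chunk_fragment_size;
    size_t total = (chunk_bytes + frag - 1) / frag;

    for (size_t i = mw->successful_fragments; i < total; i++) {
        const char *start = (const char *) chunk + i * frag;
        size_t len = (i == total - 1) ? chunk_bytes - i * frag : frag;

        if (flush(start, len, mw->plugin_ctx) != MW_OK) {
            return MW_RETRY;          /* progress is kept for the retry */
        }
        mw->successful_fragments = i + 1;
    }
    return MW_OK;
}

/* Stub flush standing in for a real plugin flush, to show the call shape */
static int stub_flush(const void *data, size_t bytes, void *plugin_ctx)
{
    (void) data; (void) plugin_ctx;
    printf("sending fragment of %zu bytes\n", bytes);
    return MW_OK;
}

int main(void)
{
    char chunk[2500];
    memset(chunk, 'x', sizeof(chunk));

    struct flush_middleware_ctx mw = {
        .chunk_fragment_size  = 1000,
        .successful_fragments = 0,
        .plugin_ctx           = NULL,
    };
    return flush_wrapper(stub_flush, chunk, sizeof(chunk), &mw);
}
```

The middleware_context would also need to be keyed by chunk so that a retried flush can find the progress recorded by the previous attempt.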
This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 5 days. Maintainers can add the
This issue is still creating problems in some of our workflows. |
This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 5 days. Maintainers can add the
This issue is still creating problems in some of our workflows. |
Agreed. Surprised it's not implemented in the same manner as Fluentd.
Would you mind describing how it is implemented in Fluentd? I am not familiar with it, but I know that this is a very sensitive issue.
Hi @leonardo-albertovich @edsiper @PettitWesley, Fluentd's buffer has configuration to limit chunk size: https://docs.fluentd.org/configuration/buffer-section (see
Hello, I just wonder whether this is a work in progress? We are also hitting this issue. We are using Fluent Bit and our output is an AWS Kinesis data stream, which has a limit that one record must be under 1 MB, and we have found chunks larger than that. As a result, we are seeing terrible data loss.
Waiting on a solution for that also.
Is this ever going to be addressed?
We would also like to see this feature integrated into Fluent Bit somehow. We are sending some logs to a Loki instance and would like to limit the maximum size of the request.
@edsiper @leonardo-albertovich I have opened #9385 as a proof of concept for a potential solution. Hopefully it will be useful for starting a discussion of how this can be fixed more properly.
Beautiful, I'll review the PR as soon as possible. Thanks for taking the time to tackle this hairy issue.
@ryanohnemus and I have been iterating on this and have found some issues that my PR doesn't address. Ryan found an important one: filter plugins can easily add a lot of data even to chunks that have been split up, and we wind up right back at the problem of chunks ending up over the input chunk limit. I've been trying to come up with alternatives but am running into snags on each. Here's what I have so far:
- Break up the input chunk in engine dispatch, creating flush tasks for each broken-down part of the chunk. The snag I ran into here is that the
- Fix this just in
Waiting on a solution for this.
Reading through the pros and cons of the proposed solutions, my understanding is that there is no easy solution to enforce the max chunk size throughout the pipeline, as additional data can be added at different stages of the pipeline. If there is no single solution that addresses all scenarios/use cases, could we prioritize enforcing a max chunk size in input plugins, as proposed in #9385, which I think addresses the major use case?
@shuaich While the change by @braydonk helps break up input chunk sizes, which may help some users get past the issue, it still doesn't resolve the issue entirely. As an input chunk (
That flb_output_flush object is used to call the output instance's flush callback, which is required to call FLB_OUTPUT_RETURN, which signals (OK, RETRY, or ERROR) back to the engine. I think a solution would be to have the ability to split the
We'd first have to break up the processed_event_chunk into multiple chunks, call the output instance's flush callback for the first processed chunk, modify the FLB_OUTPUT_RETURN logic to mark progress in the overall task (how many records were actually processed), and then, if it returns FLB_OK, recursively call the output instance's flush callback until all chunks are processed or a retry/error occurs, before finally signaling back to the engine and killing the coroutine. This feels messy, and it gets worse if you get an FLB_RETRY return on one of these chunks: you'd have to ensure that the next flb_output_flush_create is aware of the progress and only creates new processed_event_chunk(s) (which are unfortunately recreated on each retry, AFAIK) for the records still waiting to be processed (i.e. if on the first pass the output instance handled 1 of 3 chunks properly, with chunk 2 returning FLB_RETRY, the second pass through this logic should only create the 2 remaining chunks). It seems doable but would be a moderate-to-heavy lift, and we should wait for the engine maintainers' thoughts on a solution, IMO. That being said, @braydonk's change would help with the starting size of the
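A rough sketch in C of the iteration described above. The structs and names here (processed_fragment, task_progress, fragment_flush_cb) are hypothetical illustrations, not Fluent Bit's real engine types, and the FLB_* constants are only stand-ins for the return codes mentioned in the comment.

```c
#include <stddef.h>

/* Stand-ins for the engine return codes mentioned above (values illustrative) */
#define FLB_OK    1
#define FLB_RETRY 2

/* Hypothetical view of one already-processed (post-filter) event chunk */
struct processed_fragment {
    const void *data;
    size_t      bytes;
};

/* Hypothetical per-task progress, which would have to survive retries */
struct task_progress {
    size_t fragments_total;
    size_t fragments_done;   /* fragments the output already accepted */
};

/* The output instance's flush callback, reduced to a plain function pointer */
typedef int (*fragment_flush_cb)(const struct processed_fragment *frag, void *out_ctx);

/*
 * Flush the remaining fragments of a task, resuming where a previous
 * attempt stopped. Returns FLB_OK only when every fragment was accepted;
 * any other return leaves fragments_done pointing at the next fragment,
 * so a retry does not recreate or resend the ones already delivered.
 */
static int flush_task_fragments(struct processed_fragment *frags,
                                struct task_progress *progress,
                                fragment_flush_cb flush_cb, void *out_ctx)
{
    for (size_t i = progress->fragments_done; i < progress->fragments_total; i++) {
        int ret = flush_cb(&frags[i], out_ctx);
        if (ret != FLB_OK) {
            return ret;                    /* e.g. FLB_RETRY: resume later at i */
        }
        progress->fragments_done = i + 1;
    }
    return FLB_OK;
}
```

As the comment points out, the hard part is not this loop but keeping the progress record alive across retries, since the processed chunks are recreated on every retry today.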
Thank you @ryanohnemus for more context and details. |
@edsiper and I discussed this recently; opening an issue to track it.
Problem
Many APIs have a limit on the amount of data they can ingest per request. For example, #1187 discusses that the DataDog HTTP API has a 2MB payload limit. A single request is made per flush, and occasionally Fluent Bit can send a chunk which is over 2 MB.
Some APIs have a limit on the number of log messages they can accept per HTTP request. For example, Amazon CloudWatch has a 10,000 log message limit per PutLogEvents call. Amazon Kinesis Firehose and Amazon Kinesis Data Streams have a much smaller batch limit of 500 events.
Consequently, plugins have to implement logic to split a single chunk into multiple requests (or accept that occasionally large chunks will fail to be sent). This becomes troublesome when a single API request fails in the set. If the plugin issues a retry, the whole chunk will get retried. The fractions of the chunk that got successfully uploaded will thus be sent multiple times.
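To make the retry problem concrete, here is a minimal sketch in C of the splitting logic a plugin has to carry today. The names (send_batch, flush_chunk) are illustrative and not taken from any specific plugin; only the 500-event batch limit comes from the Kinesis example above.

```c
#include <stddef.h>

#define MAX_BATCH_EVENTS 500   /* e.g. the Kinesis Data Streams per-request limit */

/* Stand-in for the real network call: send one batch of events, 0 on success */
static int send_batch(const void **events, size_t count, void *plugin_ctx)
{
    (void) events; (void) count; (void) plugin_ctx;
    return 0;
}

/*
 * Flush one chunk by splitting it into API-sized batches. If any batch
 * fails, the plugin can only ask the engine to retry the WHOLE chunk,
 * so the batches that already succeeded will be sent again on the retry.
 */
static int flush_chunk(const void **events, size_t total_events, void *plugin_ctx)
{
    for (size_t off = 0; off < total_events; off += MAX_BATCH_EVENTS) {
        size_t count = total_events - off;
        if (count > MAX_BATCH_EVENTS) {
            count = MAX_BATCH_EVENTS;
        }
        if (send_batch(&events[off], count, plugin_ctx) != 0) {
            return -1;   /* maps to "retry the entire chunk" */
        }
    }
    return 0;
}
```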
Possible Solutions
Ideal solution: Output Plugins specify a max chunk size
Ideally, plugins should only have to make a single request per flush. This keeps the logic in the plugin very simple and straightforward. The common task of splitting chunks into right-sized pieces could be placed in the core of Fluent Bit.
Each output plugin could give Fluent Bit a max chunk size.
Implementing this would involve some complexity. Fluent Bit should not allocate additional memory to split chunks into smaller pieces. Instead, it can pass a pointer to a fraction of the chunk to an output and track when the entire chunk has been successfully sent.
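A sketch of what the engine side of this could look like, assuming a hypothetical max_chunk_size field declared by the output plugin and a per-chunk offset tracked by core. None of these names exist in Fluent Bit today, and a real implementation would split on record boundaries rather than raw bytes.

```c
#include <stddef.h>

/* Hypothetical plugin descriptor: the plugin declares its limit up front */
struct out_plugin {
    size_t max_chunk_size;   /* e.g. 2 * 1024 * 1024 for a 2 MB API limit */
    int  (*flush)(const void *data, size_t bytes, void *ctx);
    void  *ctx;
};

/* Hypothetical per-chunk delivery state kept by the engine; no data is copied */
struct chunk_delivery {
    const char *data;
    size_t      size;
    size_t      offset;      /* bytes already acknowledged by the plugin */
};

/*
 * Hand the plugin successive windows of at most max_chunk_size bytes by
 * pointer, advancing the offset only after a successful flush, so a retry
 * resumes exactly where delivery stopped instead of resending everything.
 */
static int deliver_chunk(struct out_plugin *out, struct chunk_delivery *cd)
{
    while (cd->offset < cd->size) {
        size_t len = cd->size - cd->offset;
        if (len > out->max_chunk_size) {
            len = out->max_chunk_size;
        }
        if (out->flush(cd->data + cd->offset, len, out->ctx) != 0) {
            return -1;       /* engine retries later from cd->offset */
        }
        cd->offset += len;
    }
    return 0;
}
```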
Non-ideal, but easy solution
The most important issue is retries. If each flush had a unique ID associated with it, plugins could internally track whether a flush is a first attempt or a retry, and then track whether the entirety of a chunk had been sent or not.
This is not a good idea, as it makes the plugin very complicated; I've included it for the sake of completeness.
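For completeness, a sketch of the bookkeeping the flush-ID approach would force into each plugin. The flush_id parameter, the lookup table, and the helper names are all hypothetical; the point of the sketch is mainly to show why this logic does not belong in every plugin.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical per-flush bookkeeping a plugin would have to keep itself */
struct flush_state {
    uint64_t flush_id;       /* unique ID the engine would pass with each flush */
    size_t   batches_sent;   /* batches already delivered for this chunk */
    int      in_use;
};

static struct flush_state states[64];   /* toy lookup table, per plugin instance */

static struct flush_state *get_flush_state(uint64_t flush_id)
{
    for (size_t i = 0; i < 64; i++) {   /* existing entry: this flush is a retry */
        if (states[i].in_use && states[i].flush_id == flush_id) {
            return &states[i];
        }
    }
    for (size_t i = 0; i < 64; i++) {   /* otherwise: first attempt, new entry */
        if (!states[i].in_use) {
            states[i] = (struct flush_state){ .flush_id = flush_id, .in_use = 1 };
            return &states[i];
        }
    }
    return NULL;
}

/* Stand-in for sending batch number idx of this chunk; 0 on success */
static int send_batch_n(size_t idx, void *ctx) { (void) idx; (void) ctx; return 0; }

/* On a retry the plugin recognizes the flush_id and skips finished batches */
static int flush_with_id(uint64_t flush_id, size_t total_batches, void *ctx)
{
    struct flush_state *st = get_flush_state(flush_id);
    if (st == NULL) {
        return -1;
    }
    for (size_t i = st->batches_sent; i < total_batches; i++) {
        if (send_batch_n(i, ctx) != 0) {
            return -1;            /* engine retries with the same flush_id */
        }
        st->batches_sent = i + 1;
    }
    st->in_use = 0;               /* chunk fully delivered; free the slot */
    return 0;
}
```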