
[8.16](backport #41356) Restore memory queue's internal event cleanup after a batch is vended #41363

Merged
merged 1 commit into 8.16 from mergify/bp/8.16/pr-41356 on Oct 22, 2024

Conversation

mergify[bot]
Contributor

@mergify mergify bot commented Oct 22, 2024

Fix #41355, where event data in the memory queue was not being freed when event batches were acknowledged, but only gradually as the queue buffer was overwritten by later events. This gave the same effect as if all Beat instances, even low-volume ones, were running with a full (saturated) event queue.

The root cause, found by @swiatekm, is [this PR](#39584), an unrelated cleanup of old code that accidentally removed one live call along with the deprecated ones. (There was an old `FreeEntries` hook in pipeline batches that was only used for deprecated shipper configs, but the cleanup also removed the `FreeEntries` call _inside_ the queue, which was essential for releasing event memory.)
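For context, here is a minimal Go sketch of the idea behind the restored cleanup; it is not the actual beats code, and the `memQueue`, `entry`, and `freeEntries` names are hypothetical. The point is that clearing a batch's slots as soon as it is acknowledged makes the event payloads eligible for garbage collection immediately, instead of only when the ring buffer wraps around and overwrites them.

```go
// Simplified illustration (not the beats implementation) of why clearing
// queue slots on batch acknowledgment matters: without it, large event
// payloads stay reachable until the ring buffer overwrites those slots,
// which on a low-volume Beat can take a long time.
package main

import "fmt"

type entry struct {
	event any // event payload; potentially large
}

type memQueue struct {
	buf  []entry
	head int // index of the oldest un-acked entry
}

// freeEntries releases the payloads of the oldest n entries once the
// consumer has acknowledged them. This plays the role of the FreeEntries
// call that the cleanup accidentally removed: zeroing the slots lets the
// events be garbage-collected right away.
func (q *memQueue) freeEntries(n int) {
	for i := 0; i < n; i++ {
		q.buf[(q.head+i)%len(q.buf)] = entry{}
	}
	q.head = (q.head + n) % len(q.buf)
}

func main() {
	q := &memQueue{buf: make([]entry, 8)}
	q.buf[0] = entry{event: make([]byte, 1 << 20)} // simulate a 1 MiB event
	q.freeEntries(1)                               // on ACK: payload is released now,
	fmt.Println(q.buf[0].event == nil)             // not when the slot is overwritten
}
```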

Checklist

  • My code follows the style guidelines of this project
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have made corresponding changes to the default configuration files
  • I have added tests that prove my fix is effective or that my feature works
  • I have added an entry in CHANGELOG.next.asciidoc or CHANGELOG-developer.next.asciidoc.

Related issues


This is an automatic backport of pull request #41356 done by [Mergify](https://mergify.com).

…#41356)

Fix #41355, where event data in the memory queue was not being freed when event batches were acknowledged, but only gradually as the queue buffer was overwritten by later events. This gave the same effect as if all beat instances, even low-volume ones, were running with a full / saturated event queue.

The root cause, found by @swiatekm, is [this PR](#39584), an unrelated cleanup of old code that accidentally included one live call along with the deprecated ones. (There was an old `FreeEntries` hook in pipeline batches that was only used for deprecated shipper configs, but the cleanup also removed the `FreeEntries` call _inside_ the queue which was essential for releasing event memory.)

(cherry picked from commit fdb912a)
@mergify mergify bot requested a review from a team as a code owner October 22, 2024 12:58
@mergify mergify bot added the backport label Oct 22, 2024
@mergify mergify bot requested review from AndersonQ and removed request for a team October 22, 2024 12:58
@mergify mergify bot assigned faec Oct 22, 2024
@mergify mergify bot requested a review from khushijain21 October 22, 2024 12:58
@botelastic botelastic bot added the needs_team (Indicates that the issue/PR needs a Team:* label) label Oct 22, 2024
@pierrehilbert pierrehilbert added the Team:Elastic-Agent-Data-Plane (Label for the Agent Data Plane team) label Oct 22, 2024
@botelastic botelastic bot removed the needs_team (Indicates that the issue/PR needs a Team:* label) label Oct 22, 2024
@elasticmachine
Collaborator

Pinging @elastic/elastic-agent-data-plane (Team:Elastic-Agent-Data-Plane)

@faec faec merged commit ba5187c into 8.16 Oct 22, 2024
140 of 142 checks passed
@faec faec deleted the mergify/bp/8.16/pr-41356 branch October 22, 2024 17:51
Labels
backport
Team:Elastic-Agent-Data-Plane (Label for the Agent Data Plane team)
Projects
None yet
Development

Successfully merging this pull request may close these issues.

4 participants