
feat: support for multiple exporters #2535

Open
wants to merge 7 commits into main from feat/support-for-multiple-exporters

Conversation

@hoangnv-bkhn (Contributor) commented on Oct 17, 2024

Description

  • Support for configuring multiple exporters simultaneously.
  • How it was resolved: each data exporter is initialized with its own scheduler, so the exporters operate independently of each other (see the sketch below).
  • How to test the change: refer to the Example.
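For illustration, a minimal Go sketch of what a multi-exporter configuration could look like with this change. The `DataExporters` field is an assumption based on this proposal (it is not part of the released API); the retriever and exporter types are the existing ones from the module. Each exporter keeps its own flush interval and buffer size, so a batch-oriented sink and a near-real-time sink can be tuned independently.

```go
package main

import (
	"log"
	"time"

	ffclient "github.com/thomaspoignant/go-feature-flag"
	"github.com/thomaspoignant/go-feature-flag/exporter/fileexporter"
	"github.com/thomaspoignant/go-feature-flag/exporter/webhookexporter"
	"github.com/thomaspoignant/go-feature-flag/retriever/fileretriever"
)

func main() {
	err := ffclient.Init(ffclient.Config{
		PollingInterval: 10 * time.Second,
		Retriever:       &fileretriever.Retriever{Path: "flags.yaml"},
		// Hypothetical field illustrating what this PR proposes: a list of
		// exporters, each initialized with its own schedule so they run
		// independently of each other.
		DataExporters: []ffclient.DataExporter{
			{
				// Batch-oriented sink: longer flush interval, larger buffer.
				FlushInterval:    5 * time.Minute,
				MaxEventInMemory: 10000,
				Exporter:         &fileexporter.Exporter{OutputDir: "./events/"},
			},
			{
				// Near-real-time sink: short flush interval, small buffer.
				FlushInterval:    10 * time.Second,
				MaxEventInMemory: 100,
				Exporter:         &webhookexporter.Exporter{EndpointURL: "https://example.com/events"},
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer ffclient.Close()
}
```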

Closes issue(s)

Resolves #2505

Checklist

  • I have tested this code
  • I have added unit tests to cover this code
  • I have updated the documentation (README.md and /website/docs)
  • I have followed the contributing guide


netlify bot commented Oct 17, 2024

Deploy Preview for go-feature-flag-doc-preview ready!

🔨 Latest commit: dc24bcf
🔍 Latest deploy log: https://app.netlify.com/sites/go-feature-flag-doc-preview/deploys/671b5b7b6942aa0008daf56b
😎 Deploy Preview: https://deploy-preview-2535--go-feature-flag-doc-preview.netlify.app


codecov bot commented Oct 17, 2024

Codecov Report

Attention: Patch coverage is 84.74576% with 18 lines in your changes missing coverage. Please review.

Project coverage is 85.56%. Comparing base (c1c2389) to head (f99def8).

Files with missing lines:
  cmd/relayproxy/service/gofeatureflag.go — Patch 68.42%, 13 missing and 5 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2535      +/-   ##
==========================================
+ Coverage   84.81%   85.56%   +0.75%     
==========================================
  Files         100      100              
  Lines        4346     4414      +68     
==========================================
+ Hits         3686     3777      +91     
+ Misses        535      506      -29     
- Partials      125      131       +6     


@hoangnv-bkhn marked this pull request as draft on October 17, 2024 10:29
@hoangnv-bkhn force-pushed the feat/support-for-multiple-exporters branch from 1235b8a to b64acf6 on October 17, 2024 14:35
@hoangnv-bkhn marked this pull request as ready for review on October 17, 2024 14:40
@thomaspoignant (Owner) commented
@hoangnv-bkhn I haven't looked in detail at the whole PR, but there is something I would have done differently.

In your proposal you are duplicating the logic of the DataExporter by allowing users to have different dataExporterSchedulers, which lets them set a different flush interval and max buffer per exporter.

While I understand why you've done that, I think it will be easier to have only one flush interval for all the exporters.
Have you considered adding an exporters field to the DataExporter struct?

Any reason why you have not followed that path?
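For clarity, a rough sketch of the alternative being suggested here, assuming a hypothetical Exporters field on the DataExporter struct (it does not exist in the current API) and using the module's exporter.Exporter interface for the element type: one shared flush interval and one shared buffer limit, fanned out to several exporters.

```go
package sketch

import (
	"time"

	"github.com/thomaspoignant/go-feature-flag/exporter"
)

// Hypothetical shape of the suggested alternative; the Exporters field is an
// assumption, not part of the current API.
type DataExporter struct {
	FlushInterval    time.Duration       // one schedule shared by all exporters
	MaxEventInMemory int64               // one buffer limit shared by all exporters
	Exporters        []exporter.Exporter // the same batch is fanned out on every flush
}
```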

@hoangnv-bkhn (Contributor, Author) commented on Oct 22, 2024


@thomaspoignant
Actually, that was the first approach I thought of and implemented.
But then I realized that users might need to set different values for the flush interval and the max buffer for different exporters.
So I changed it this way to give users maximum configurability.
This allows each exporter to be configured independently, so performance can be tailored to the specific needs of each data export target (different data sinks may have different requirements).

For example, with S3 or Google Cloud Storage, it may be better to use a longer flush interval and a higher maximum buffer because they are designed for batch uploads: sending data in larger batches reduces the number of requests and improves throughput.
With a webhook, shorter intervals may be more appropriate, especially for real-time processing, and a lower maximum event count may be preferable to avoid delays in sending data. Webhooks typically expect immediate responses, so batching too many events can result in timeouts or dropped requests.


@hoangnv-bkhn (Contributor, Author) commented
@thomaspoignant
[Kind reminder] What do you think of my idea?

If you think that having all exporters share the same flush interval and max buffer configuration makes more sense at the moment, we can reconsider it.

@thomaspoignant (Owner) commented
@hoangnv-bkhn Sorry, I am lagging behind on reviews these days.
I understand why you prefer this option, and I tend to agree that it is a good idea.

My main concern with your solution is that you add the events to each exporter, which means the memory usage is multiplied by the number of exporters, and I want to avoid that as much as possible.
We could maybe have a common cache for every exporter, but that may be tricky for handling failures, etc.

I am still trying to understand what the best solution is here, but I am happy to get your input.
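To make the shared-cache idea concrete, here is a rough, self-contained sketch; the types and names are placeholders rather than the project's API. A single in-memory slice holds the pending events, and the same batch is handed to every sink on flush, so memory is not multiplied by the number of exporters. Per-exporter failure handling, the tricky part mentioned above, is deliberately left out.

```go
package sketch

import (
	"context"
	"sync"
)

// event and eventSink are placeholders standing in for the project's
// feature-event type and exporter interface.
type event struct{ Key string }

type eventSink interface {
	Export(ctx context.Context, events []event) error
}

// sharedBuffer keeps one copy of the pending events and fans the same batch
// out to every sink when it is flushed.
type sharedBuffer struct {
	mu     sync.Mutex
	events []event
	sinks  []eventSink
}

func (b *sharedBuffer) add(e event) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.events = append(b.events, e)
}

func (b *sharedBuffer) flush(ctx context.Context) {
	b.mu.Lock()
	batch := b.events
	b.events = nil
	b.mu.Unlock()

	for _, s := range b.sinks {
		// Failure handling (retrying or re-queueing for only the failing
		// sink) is intentionally omitted; that is the hard part.
		_ = s.Export(ctx, batch)
	}
}
```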

@hoangnv-bkhn (Contributor, Author) commented
@thomaspoignant
In case you missed my message, I pinged you on the Slack channel.
Would love to hear your thoughts.

@github-actions bot added the Stale label on Dec 4, 2024
@github-actions bot removed the Stale label on Dec 4, 2024