When a topic is deleted, connected clients should have their connection closed #3836
Howdy, thanks for the assistance in Discord so far helping me figure my way through this. A viable solution looks to me like the following, but I'd love some thoughts/gut checks before implementing: In ... The next question is how to emit the event whenever the SC sends over a deleted-topic notification. Both the consumer connection and the SC dispatcher connection have access to the shared global context ... Sorry for the wall of text! Let me know if that sounds like a good plan and whether I should continue with an implementation.
Just to clarify, there are two I/O connections to the Fluvio cluster. The first is to the SC, which manages metadata; the second is to the SPU, which handles the actual consumer/producer traffic. The SC should not be handling SPU connections. It is the responsibility of the SPU to manage its own connections; this is fundamentally how Fluvio works. Here's a suggestion: just add a new ... Thanks for working on this; it is not a trivial exercise.
Understood on the two I/O connections to Fluvio. I appreciate the suggestion and would like a little clarification, if possible. I'm not entirely sure what you mean by expiring the OffsetChangeListener. I interpret that to mean something like changing the listener to return a ... It does not seem so easy to figure out which ...
It would be a challenge to change ...
I posted a PR to explore these ideas and would appreciate some feedback to get it up to standards. I still have tests to add around this feature and would appreciate guidance on what's required for it to be acceptable. The current solution registers connections (OffsetPublishers) with the shared leader state. This allows the ScDispatcher to trivially signal a new offset value that represents an error (TopicDelete = -2). One drawback is that the SharedLeaderState will accumulate (and, worse, keep alive) OffsetPublisher objects, resulting in a small memory leak for each connection. Perhaps I can downgrade these to weak Arcs to alleviate that problem. Anyway, I got it working locally and would love some feedback!
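The registration scheme described above, with the weak-Arc idea applied, can be sketched roughly like this. Note that `OffsetPublisher` and `LeaderState` here are simplified stand-ins for Fluvio's actual types, not the real API; the sentinel value `-2` for a deleted topic is taken from the comment itself:

```rust
use std::sync::{Arc, Mutex, Weak};

// Sentinel offset signalling that the topic was deleted (TopicDelete = -2
// per the PR discussion above).
const TOPIC_DELETED: i64 = -2;

// Hypothetical stand-in for an OffsetPublisher: holds the latest offset
// that a consumer connection is watching.
struct OffsetPublisher {
    current: Mutex<i64>,
}

impl OffsetPublisher {
    fn new() -> Self {
        Self { current: Mutex::new(0) }
    }
    fn update(&self, offset: i64) {
        *self.current.lock().unwrap() = offset;
    }
    fn current(&self) -> i64 {
        *self.current.lock().unwrap()
    }
}

// Hypothetical stand-in for the shared leader state: it holds *weak*
// handles so that dropped consumer connections do not accumulate as the
// small memory leak mentioned above.
#[derive(Default)]
struct LeaderState {
    publishers: Mutex<Vec<Weak<OffsetPublisher>>>,
}

impl LeaderState {
    fn register(&self, publisher: &Arc<OffsetPublisher>) {
        self.publishers
            .lock()
            .unwrap()
            .push(Arc::downgrade(publisher));
    }

    // Called when the SC dispatcher learns the topic was deleted:
    // signal every still-alive publisher and prune dead weak handles.
    fn signal_topic_deleted(&self) {
        let mut list = self.publishers.lock().unwrap();
        list.retain(|weak| match weak.upgrade() {
            Some(publisher) => {
                publisher.update(TOPIC_DELETED);
                true
            }
            None => false, // connection already gone; drop the entry
        });
    }
}

fn main() {
    let state = LeaderState::default();

    let live = Arc::new(OffsetPublisher::new());
    state.register(&live);

    let gone = Arc::new(OffsetPublisher::new());
    state.register(&gone);
    drop(gone); // simulate a consumer that disconnected

    state.signal_topic_deleted();
    assert_eq!(live.current(), TOPIC_DELETED);
    println!(
        "live publishers: {}",
        state.publishers.lock().unwrap().len()
    );
}
```

Because `Arc::downgrade` never keeps the publisher alive, a disconnected consumer's `OffsetPublisher` is freed as soon as its last strong `Arc` drops, and the `retain` pass cleans the dead entry out of the list on the next signal.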
Provided comments. Using a ...
Appreciate the comments! I'll take a look at that and will knock out the tests and changes over the next day or two, hopefully.
Okay, so with the merged PR (#3861), the consumer side of things is taken care of. Now maybe I can look into the producer side. Also, the error message printed isn't exactly the expected message described, so I can look into changing the consumer CLI command to produce that error as expected (instead of "the topic is deleted"). If there's some other higher priority, I'd be happy to work on that as well.
Nice work @jdafont, and thank you for looking into the producer as well.
Both producers and consumers should have their connections closed. This issue mainly affects consumers streaming records from a topic.
Preferably with a specific error (TopicDeleted or TopicNotAvailable).
Setup:
Action:
Expectation:
The SPU is already notified for topic deletes from the SC because it has to do work to delete the topic. Any connections associated with that topic need to be closed down by the SPU.
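One way the SPU could tear down those connections on a topic-delete notification can be sketched as follows. `ConnectionTracker` and the one-shutdown-channel-per-connection scheme are illustrative assumptions for this sketch, not Fluvio's actual implementation:

```rust
use std::collections::HashMap;
use std::sync::mpsc::{channel, Receiver, Sender};

// Hypothetical sketch: the SPU tracks one shutdown channel per open
// connection, keyed by topic name. When the SC notifies it of a topic
// delete, it signals every connection on that topic to close.
struct ConnectionTracker {
    by_topic: HashMap<String, Vec<Sender<()>>>,
}

impl ConnectionTracker {
    fn new() -> Self {
        Self {
            by_topic: HashMap::new(),
        }
    }

    // Register a connection; the returned receiver fires when the
    // connection must shut down.
    fn register(&mut self, topic: &str) -> Receiver<()> {
        let (tx, rx) = channel();
        self.by_topic.entry(topic.to_string()).or_default().push(tx);
        rx
    }

    // Called when the SC notifies the SPU that `topic` was deleted.
    fn on_topic_deleted(&mut self, topic: &str) {
        if let Some(senders) = self.by_topic.remove(topic) {
            for tx in senders {
                // Ignore send errors: the connection may already be closed.
                let _ = tx.send(());
            }
        }
    }
}

fn main() {
    let mut tracker = ConnectionTracker::new();
    let conn = tracker.register("orders"); // "orders" is a made-up topic
    tracker.on_topic_deleted("orders");
    // The connection task would observe this signal and close with an
    // error such as TopicDeleted.
    assert!(conn.recv().is_ok());
    println!("connection signalled to close");
}
```

In a real async server this role would likely be filled by an async channel or cancellation token per connection task; the point is only that the SPU, which already learns about the delete, owns the map from topic to live connections.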
This is a task of intermediate difficulty; mentoring from Infinyon is available if you are interested in working on this issue.