Add support for downstream labeled-response #40
Open

delthas wants to merge 1 commit into emersion:master from delthas:feature-labeled-response
Conversation
I figured I'd open an MR rather than a patch since this is somewhat large and may take multiple roundtrips.
delthas force-pushed the feature-labeled-response branch from 4bde00d to 78724ae on March 8, 2023 at 15:20
Updated.
This adds support for downstream labeled-response. Instead of doing specific processing at each SendMessage call site, the idea is to add some logic to SendMessage itself. The cap is advertised only in single-upstream mode, when the upstream server supports it too.

----

When handling a downstream message, we store the label of the message we're currently processing in the downstream struct. (This should be understood as data whose lifetime is limited to the processing of one message; it is reset when we're done handling the message.)

During the handling of that downstream message, whenever we send out messages to this downstream:
- if we're not handling a particular label, send the message as is;
- else, if this is the first message we're sending, store/buffer it;
- else, open a batch, send/flush the pending message, and send our message in that batch.

Then, at the end of the processing:
- if we're not handling a particular label, do nothing;
- if we have a pending message, send it without a batch;
- if we had opened a batch, close it;
- if we didn't send anything, send an ACK.

This enables us to always send the minimum number of messages:
- 2+ messages generate a BATCH;
- 1 message generates a single message with the label;
- 0 messages generate an ACK with the label.

While we may send messages to many downstreams, obviously only the downstream from which the message we're processing was received is affected.

----

A special case affects the above behavior and makes it differ from a typical server implementation. soju handles many commands by forwarding the command to the server, ending the downstream message handling without responding anything, then forwarding the server response later, when we receive it. In this case we must not send an empty ACK at the end of our routine; instead we must send nothing, copy the message label onto the message we're sending to the server, and then, when we get the response (or batches) from the server, forward that to the downstream.
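The buffering rules above can be sketched roughly as follows. This is a minimal illustration, not soju's actual API: the `labelBuffer` type, its fields, and the fixed `"b1"` batch ID are all hypothetical.

```go
package main

import "fmt"

// labelBuffer is a hypothetical sketch of the per-message labeled-response
// state kept on a downstream connection.
type labelBuffer struct {
	label   string   // label of the message currently being handled ("" if none)
	pending string   // first reply, buffered until we know whether a batch is needed
	batch   string   // batch ID once a batch is opened ("" while closed)
	sent    []string // lines written to the downstream
}

// send implements the per-reply logic described above.
func (b *labelBuffer) send(msg string) {
	switch {
	case b.label == "":
		b.sent = append(b.sent, msg) // no label: send as is
	case b.pending == "" && b.batch == "":
		b.pending = msg // first labeled reply: buffer it
	case b.batch == "":
		// second reply: open a batch, flush the pending reply, then this one
		b.batch = "b1"
		b.sent = append(b.sent,
			fmt.Sprintf("@label=%s BATCH +%s labeled-response", b.label, b.batch),
			fmt.Sprintf("@batch=%s %s", b.batch, b.pending),
			fmt.Sprintf("@batch=%s %s", b.batch, msg))
		b.pending = ""
	default:
		b.sent = append(b.sent, fmt.Sprintf("@batch=%s %s", b.batch, msg))
	}
}

// finish implements the end-of-processing logic described above.
func (b *labelBuffer) finish() {
	switch {
	case b.label == "":
		// not handling a label: nothing to do
	case b.pending != "":
		// exactly one reply: send it with the label, no batch
		b.sent = append(b.sent, fmt.Sprintf("@label=%s %s", b.label, b.pending))
		b.pending = ""
	case b.batch != "":
		b.sent = append(b.sent, fmt.Sprintf("BATCH -%s", b.batch))
		b.batch = ""
	default:
		// no replies at all: acknowledge the label
		b.sent = append(b.sent, fmt.Sprintf("@label=%s ACK", b.label))
	}
	b.label = ""
}

func main() {
	// One labeled command producing two replies: a batch is opened around them.
	b := &labelBuffer{label: "foo"}
	b.send("BAR")
	b.send("BAZ")
	b.finish()
	for _, m := range b.sent {
		fmt.Println(m)
	}
}
```

The key point the sketch shows is why the first reply is buffered: until a second reply arrives, we cannot know whether to emit a bare labeled message or to open a batch.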
Examples:

    dc->soju: @Label=foo BAR
    soju->uc: @Label=foo BAR
    uc->soju: @Label=foo BAZ
    soju->dc: @Label=foo BAZ

    dc->soju: @Label=foo BAR
    soju->uc: @Label=foo BAR
    uc->soju: @Label=foo BATCH +b labeled-response
    soju->dc: @Label=foo BATCH +b labeled-response
    uc->soju: @Batch=b BAZ
    soju->dc: @Batch=b BAZ
    uc->soju: BATCH -b
    soju->dc: BATCH -b

(This is a bit more complicated in practice because we wrap downstream labels with our own labels.)

The patch therefore adds a DeferredResponse method to downstreamConn, which basically tells it not to send an empty ACK at the end of the routine, because we'll send a proper response later.

----

When handling upstream messages, we extract the downstream label from the server's labeled response, if it exists. We also extract the downstream batch tag if it corresponds to a label. We store that info in the downstreamConn, much like during downstream message handling. We forward BATCH + and BATCH - as is, even keeping the same batch IDs. This way, we also support cases where the server responds with a single message but our logic turns that into multiple messages, or no messages. (We'd buffer the first message, open a batch, send the messages, close it, etc., just like in the downstream message case.)

----

Some cases are a bit more difficult to implement and not really useful. (They are also not mandatory according to the spec, since this is best-effort.) Those were left as TODOs for now.
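The deferred-response path can be illustrated with a small sketch. Everything here is hypothetical: the `downstream` type, the `"soju-"` label prefix used for wrapping, and the helper names are illustrative, not soju's real implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// downstream is a hypothetical stand-in for downstreamConn, keeping just the
// labeled-response state relevant to the deferred case.
type downstream struct {
	label    string   // label of the command currently being handled
	deferred bool     // true if the response will be sent later
	out      []string // lines written to the downstream
}

// DeferredResponse marks the current labeled command as answered later, so
// that finishing the handler sends nothing instead of an empty ACK.
func (dc *downstream) DeferredResponse() { dc.deferred = true }

// finish runs at the end of a command handler: an unanswered label gets an
// ACK unless the response was deferred.
func (dc *downstream) finish() {
	if dc.label != "" && !dc.deferred {
		dc.out = append(dc.out, fmt.Sprintf("@label=%s ACK", dc.label))
	}
	dc.label, dc.deferred = "", false
}

// forwardUpstream wraps the downstream label in our own label (the "soju-"
// prefix is purely illustrative) before sending the command to the server.
func forwardUpstream(dc *downstream, cmd string) string {
	dc.DeferredResponse()
	return fmt.Sprintf("@label=soju-%s %s", dc.label, cmd)
}

// routeReply maps the server's wrapped label back to the original downstream
// label so the reply (or reply batch) can be forwarded as is.
func routeReply(reply string) string {
	return strings.Replace(reply, "@label=soju-", "@label=", 1)
}

func main() {
	dc := &downstream{label: "foo"}
	up := forwardUpstream(dc, "WHO #chan")
	dc.finish() // deferred: no empty ACK is emitted
	fmt.Println(up)
	fmt.Println(routeReply("@label=soju-foo 315 nick #chan :End of WHO"))
}
```

The wrapping step matters because the upstream connection is shared: the reply's label must identify both which downstream asked and which of its labels to echo back.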
delthas force-pushed the feature-labeled-response branch from 78724ae to 7507260 on July 24, 2024 at 16:45
Updated.
BTW, I have this running in production on my soju (with labeled-response enabled in senpai, wrt checking stability 😋)
Gentle ping 🎀
Ready for review. 🎗️