
[RPC-Spec-V2] chainHead_v1_follow should be "bounded" #5871

Open
niklasad1 opened this issue Sep 30, 2024 · 2 comments

Comments

niklasad1 (Member) commented Sep 30, 2024

chainHead_v1_follow uses unbounded channels to send out messages on the JSON-RPC connection, which may use lots of memory if the client is slow and can't keep up with the server, i.e. substrate may keep lots of messages in memory.

Currently three parts are unbounded:

  1. import_notification_stream
  2. finality_notification_stream
  3. internal chainHead_v1 follow channel

As I see it, we have two options:

  1. Close the subscription/connection if the client can't keep up with the server.
  2. Replace older items in the queue.

For chainHead_follow, replacing older items is not a good idea because one may lose important state, so only option 1) is viable.

Thus, I propose that we create a dedicated fixed-size buffer, poll items from the combined stream, push them to the buffer, and if the buffer limit is exceeded, emit a stop event and close down the subscription.
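For illustration, a rough sketch of what such a bounded forwarding loop could look like (the names `FollowEvent`, `EventSink` and `MAX_BUFFERED_EVENTS` are assumed stand-ins, not the actual substrate/jsonrpsee types): events from the merged notification stream go into a fixed-size buffer, the buffer is drained as fast as the client accepts events, and the subscription is terminated with a stop event once the limit is exceeded.

```rust
use std::collections::VecDeque;

use futures::{Stream, StreamExt};

/// Illustrative limit; the real value would need benchmarking.
const MAX_BUFFERED_EVENTS: usize = 16;

#[derive(Debug, Clone)]
enum FollowEvent {
    NewBlock(u64),
    Finalized(u64),
    Stop,
}

/// Stand-in for the JSON-RPC subscription sink.
trait EventSink {
    /// Try to push an event to the client without blocking; gives the event
    /// back if the client cannot accept it right now.
    fn try_send(&mut self, event: FollowEvent) -> Result<(), FollowEvent>;
}

async fn run_follow_subscription<S, K>(mut events: S, mut sink: K)
where
    S: Stream<Item = FollowEvent> + Unpin,
    K: EventSink,
{
    let mut buffer: VecDeque<FollowEvent> = VecDeque::new();

    while let Some(event) = events.next().await {
        buffer.push_back(event);

        // Drain as many buffered events as the client currently accepts.
        while let Some(ev) = buffer.pop_front() {
            if let Err(ev) = sink.try_send(ev) {
                // Client is not keeping up; put the event back and stop draining.
                buffer.push_front(ev);
                break;
            }
        }

        // Buffer limit exceeded: emit a stop event and close the subscription
        // instead of letting memory grow unboundedly.
        if buffer.len() > MAX_BUFFERED_EVENTS {
            let _ = sink.try_send(FollowEvent::Stop);
            return;
        }
    }
}
```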

/cc @paritytech/subxt-team

jsdw (Contributor) commented Sep 30, 2024

> Thus, I propose that we create a dedicated buffer and if it's exceeded then we throw a stop event and close down the subscription.

I thought we were basically doing this already with the pipe_from_stream thing; is that not correct?

I'm basically in favour of having some buffer that drains as fast as the user is accepting messages, and we send a stop notification if the buffer fills up with events (i.e. the user isn't consuming them quickly enough).

niklasad1 (Member, Author) commented

> I thought we were basically doing this already with the pipe_from_stream thing; is that not correct?

I think we were, but it has been changed to this, where it takes one item at a time and can block on send.await, buffering up items in memory.

I think we could just use pipe_from_stream, but we need to tweak it to be able to re-use the sink to send out the stop event.
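A minimal sketch of that tweak (assuming a bounded tokio mpsc channel as a stand-in for the subscription sink; this is not the real pipe_from_stream API): the helper hands the sender back to the caller when the client lags, so a final stop event can still be pushed on it.

```rust
use futures::{Stream, StreamExt};
use tokio::sync::mpsc;

/// Outcome of piping the merged event stream into a bounded channel.
enum Piped<T> {
    /// The upstream stream finished; nothing more to send.
    Done(mpsc::Sender<T>),
    /// The bounded channel filled up (client too slow); the sender is handed
    /// back so the caller can still push a final stop event on it.
    Lagged(mpsc::Sender<T>),
}

async fn pipe_from_stream_bounded<S, T>(mut stream: S, sink: mpsc::Sender<T>) -> Piped<T>
where
    S: Stream<Item = T> + Unpin,
{
    while let Some(item) = stream.next().await {
        // `try_send` never blocks: if the bounded channel is full, bail out
        // instead of awaiting the send and buffering items in memory.
        if sink.try_send(item).is_err() {
            return Piped::Lagged(sink);
        }
    }
    Piped::Done(sink)
}
```

The caller could then match on `Piped::Lagged(sink)` and send the stop event through the returned sender before dropping the subscription.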

> I'm basically in favour of having some buffer that drains as fast as the user is accepting messages, and we send a stop notification if the buffer fills up with events (ie the user isn't consuming them quickly enough)

Yupp, agree
