Threading-related fixes from upstream romanz/electrs #74
Conversation
utACK 16636d1. Verified that the changes in this PR seem to match the changes in the upstream PRs.
Changes were taken from the latest romanz/electrs rpc.rs implementation prior to the major refactoring in v0.9.0 that significantly diverged the codebases (https://github.com/romanz/electrs/blob/af6ff09a275ec12b6fd0d6a101637f4710902a3c/src/rpc.rs).

The relevant changes include (not a complete list):

- romanz#284
- romanz#233
- romanz@a3bfdda
- romanz#195
- romanz#523 (the only post-v0.9 change, a very minor one)

This fixes a memory leak that could be reproduced using the following script, which opens and closes 500k connections with a concurrency of 20:

```sh
$ seq 1 500000 | xargs -I {} -n 1 -P 20 sh -c 'echo '\''{"id":{},"method":"server.version","params":[]}'\'' | nc 127.0.0.1 50001 -v -N'
```

Before the fixes, memory usage would continue to grow as more connections were made, reaching around 35MB for 500k connections. After the fixes, memory usage is steady at around 25MB and doesn't grow with more connections.
Force-pushed from 16636d1 to d257ca2 (added commit: Add paged mempool txids endpoint).
Thanks. There is still a thread leak if the client does not explicitly close the connection and instead terminates it with a single RST. PS: It was a genius decision to disable the Issues section.
Thanks, will investigate.
Issues re-enabled.
An easy way to reproduce is to set up haproxy with layer-4 TCP checks (straightforward) and you will see the leak. I'm running electrs in Docker, if that matters.
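For reference, a minimal haproxy sketch of the kind of layer-4 check described above (the backend name, server name, and address are assumptions, not from this thread). Each health probe opens a plain TCP connection to electrs and haproxy typically tears it down with an RST rather than a FIN handshake, which is what triggers the leak:

```
# Hypothetical haproxy backend with layer-4 (TCP connect) health checks.
backend electrs
    mode tcp
    option tcp-check
    server electrs1 127.0.0.1:50001 check inter 2s fall 2 rise 2
```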
Great |
I was able to reproduce this behaviour with a python program that sends RST on close:
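The original reproduction program was not preserved in this thread; the following is a minimal sketch of that kind of client (the host, port, and loop count are assumptions). It uses `SO_LINGER` with a zero linger timeout, which makes `close()` abort the connection with an RST segment instead of the normal FIN handshake:

```python
import socket
import struct


def connect_and_rst(host: str, port: int) -> None:
    """Open a TCP connection and tear it down with an RST (no FIN)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, port))
    # SO_LINGER with l_onoff=1 and l_linger=0 makes close() abort the
    # connection with an RST instead of performing a graceful shutdown.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                 struct.pack("ii", 1, 0))
    s.close()
```

Calling `connect_and_rst("127.0.0.1", 50001)` in a loop against a local electrs (50001 is the default Electrum RPC port) should leave the server's client count growing, matching the behaviour reported below.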
Electrs does not clean up in this case; after running the above script to completion a few times, we can see that the client count is not reduced.