
Threading-related fixes from upstream romanz/electrs #74

Merged
merged 1 commit into Blockstream:new-index on May 17, 2024

Conversation

shesek
Collaborator

@shesek shesek commented Mar 18, 2024

Changes were taken from the latest romanz/electrs rpc.rs implementation prior to the major refactoring in v0.9.0 that significantly diverged the codebases (af6ff09).

The relevant changes include (not a complete list):

- romanz#284
- romanz#233
- romanz@a3bfdda
- romanz#195
- romanz#523 (the only post-v0.9 change, a very minor one)

This fixes a memory leak that could be reproduced using the following script, which opens and closes 500k connections with a concurrency of 20:

$ seq 1 500000 | xargs -I {} -n 1 -P 20 sh -c 'echo '\''{"id":{},"method":"server.version","params":[]}'\''| nc 127.0.0.1 50001 -v -N'

Before the fixes, memory usage kept growing as more connections were made, reaching around 35MB after 500k connections. After the fixes, memory usage stays steady at around 25MB and doesn't grow with additional connections.
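
For context, here is a minimal Rust sketch (illustrative only, not the actual upstream changes) of the general pattern this class of fix follows: per-connection threads are reaped once their client disconnects, instead of their handles and associated state accumulating for the lifetime of the server.

// Illustrative sketch only -- not the actual electrs code.
use std::io::Read;
use std::net::{TcpListener, TcpStream};
use std::thread;

fn handle_client(mut stream: TcpStream) {
    let mut buf = [0u8; 1024];
    // Read until the peer disconnects (clean EOF or error), then return so
    // the thread can exit and be joined.
    while let Ok(n) = stream.read(&mut buf) {
        if n == 0 {
            break;
        }
        // ... parse and answer the Electrum JSON-RPC request here ...
    }
}

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:50001")?;
    let mut handles: Vec<thread::JoinHandle<()>> = Vec::new();

    for stream in listener.incoming() {
        let stream = stream?;
        handles.push(thread::spawn(move || handle_client(stream)));

        // Reap threads whose clients have already disconnected; joining them
        // releases whatever per-connection state they still own, so memory
        // stays flat no matter how many clients have come and gone.
        let (finished, running): (Vec<_>, Vec<_>) =
            handles.drain(..).partition(|h| h.is_finished());
        for h in finished {
            let _ = h.join();
        }
        handles = running;
    }
    Ok(())
}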

@jamesdorfman

jamesdorfman commented Mar 18, 2024

utACK 16636d1.

Verified that the changes in this PR seem to match the changes in the upstream PRs.

junderw pushed a commit to junderw/electrs that referenced this pull request May 1, 2024
Collaborator

@philippem philippem left a comment


tACK

@shesek shesek merged commit 24d34fe into Blockstream:new-index May 17, 2024
1 check passed
@sammy007

There is still a thread leak if the client does not explicitly close the connection and instead terminates it with a single RST.

PS: It was a genius decision to disable the Issues section.

@philippem
Collaborator

philippem commented May 28, 2024

> There is still a thread leak if the client does not explicitly close the connection and instead terminates it with a single RST.

Thanks, will investigate.

> PS: It was a genius decision to disable the Issues section.

Issues re-enabled.

@sammy007

sammy007 commented May 28, 2024

> There is still a thread leak if the client does not explicitly close the connection and instead terminates it with a single RST.

> Thanks, will investigate.

An easy way to reproduce this is to set up HAProxy with layer-4 TCP checks (straightforward) and watch the thread count increase in htop. Then add the linger option to the TCP checks and you will see no thread increase. By default HAProxy uses a bare TCP RST, and it only closes the connection cleanly when the linger option is specified. This means electrs leaks threads.

I'm running electrs in Docker, in case that's relevant.
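
For anyone trying to reproduce this, here is a minimal HAProxy sketch of the setup described above; the backend name, address, port, and check interval are illustrative assumptions.

backend electrs
    mode tcp
    option tcp-check
    # Without 'linger', HAProxy tears the check connection down with a bare
    # RST (its default), which is the case that leaks threads in electrs;
    # adding 'linger' makes it close the connection cleanly instead.
    tcp-check connect linger
    server electrs1 127.0.0.1:50001 check inter 2s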

> PS: It was a genius decision to disable the Issues section.

> Issues re-enabled.

Great

@philippem
Collaborator

philippem commented May 28, 2024

I was able to reproduce this behaviour with a Python program that sends an RST on close:

import socket
import struct
import time

# Create a TCP socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Set the SO_LINGER option to send a RST signal on close
linger = struct.pack('ii', 1, 0)
# Enable linger with 0 timeout to send RST
sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, linger)

# Connect to electrs on localhost
server_address = ('localhost', 50001)
sock.connect(server_address)

try:
    message = '''{"id": 3, "method": "server.banner", "params": []}
'''
    msg = message.encode('utf-8')
    print(msg)
    sock.sendall(msg)

    response = sock.recv(1024)

    print(response)
    time.sleep(1)


finally:
    # Close the socket, triggering a RST signal
    sock.close()

Electrs does not clean up in this case; after running the above script to completion a few times, we can see that the client count is not reduced:

 % curl localhost:54224 | grep electrum_clients
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 32386  100 32386    0     0  15.5M      0 --:--:-- --:--:-- --:--:-- 30.8M
# HELP electrum_clients # of Electrum clients
# TYPE electrum_clients gauge
electrum_clients 5
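
For illustration, a minimal Rust sketch (not electrs's actual code) of how a per-connection reader loop can treat a peer RST the same as a clean EOF, so the connection thread exits and its bookkeeping (such as the electrum_clients gauge) can be cleaned up:

use std::io::{BufRead, BufReader, ErrorKind};
use std::net::TcpStream;

fn reader_loop(stream: TcpStream) {
    let mut reader = BufReader::new(stream);
    let mut line = String::new();
    loop {
        line.clear();
        match reader.read_line(&mut line) {
            // Clean shutdown: the peer sent a FIN.
            Ok(0) => break,
            Ok(_) => {
                // ... handle one Electrum JSON-RPC request line here ...
            }
            // A bare RST surfaces as an error; treat it as a disconnect
            // rather than leaving the thread alive.
            Err(e) if matches!(
                e.kind(),
                ErrorKind::ConnectionReset | ErrorKind::ConnectionAborted
            ) => break,
            Err(e) => {
                eprintln!("connection error: {}", e);
                break;
            }
        }
    }
}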


@shesek
Collaborator Author

shesek commented May 29, 2024

@sammy007 Thanks for the report! This should be fixed by #88.

> PS: It was a genius decision to disable the Issues section.

FWIW, the idea was that the Blockstream/esplora issue tracker should also be used for electrs issues. But I think it's a good idea to enable it here too.
