[BUG REPORT] Updating Evidences doesn't guarantee an atomic write #1457

Open
nodiesBlade opened this issue Jul 21, 2022 · 1 comment
Labels
bug Something isn't working

Comments

@nodiesBlade (Contributor) commented Jul 21, 2022

Describe the bug
The current implementation of SetProof has a race condition: evidences are not written into cache storage atomically. As a result, evidences can overwrite each other whenever concurrent relays are handled, and the node is not paid for the full amount of work when it submits a claim.

Consider this scenario:

  1. Two relays, A and B, are sent to the node concurrently.
  2. The node validates both relay A and relay B.
  3. The node retrieves the evidence to update relay A's proof, then grabs the evidence again to update relay B's proof.
  4. The node now holds one local copy of the evidence for relay A and another for relay B (two separate evidences in memory at this point!).
  5. The local copy of evidence A is updated with its proof, then written to cache storage.
  6. The local copy of evidence B is updated with its proof, then written to cache storage, overwriting the evidence just written for relay A.
  7. Evidence storage now contains only one proof, where two are expected.

Proof of concept test here (I wrote it quickly, so I might have missed something). A simplified illustration of the race is sketched below.
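The following is a minimal, self-contained sketch of the lost update; the evidence type, store, and getEvidence/setEvidence helpers are hypothetical stand-ins, not the actual pocket-core code. The store itself is assumed to be safe for individual reads and writes; only the read-modify-write sequence is unsynchronized.

```go
package main

import (
	"fmt"
	"sync"
)

// Hypothetical stand-ins for the evidence cache storage.
type evidence struct {
	Proofs []string
}

var (
	storeMu sync.Mutex
	store   = map[string]evidence{}
)

func getEvidence(key string) evidence {
	storeMu.Lock()
	defer storeMu.Unlock()
	return store[key]
}

func setEvidence(key string, ev evidence) {
	storeMu.Lock()
	defer storeMu.Unlock()
	store[key] = ev
}

// setProof mirrors the scenario above: grab a local copy (steps 3-4),
// update it, then write the whole copy back (steps 5-6).
func setProof(key, proof string) {
	ev := getEvidence(key)
	ev.Proofs = append(ev.Proofs, proof)
	setEvidence(key, ev)
}

func main() {
	var wg sync.WaitGroup
	for _, p := range []string{"relay A proof", "relay B proof"} {
		wg.Add(1)
		go func(proof string) {
			defer wg.Done()
			setProof("session-1", proof)
		}(p)
	}
	wg.Wait()
	// May print 1 instead of the expected 2 whenever both goroutines read
	// the same snapshot before either writes it back (step 7).
	fmt.Println("proofs stored:", len(store["session-1"].Proofs))
}
```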

Expected behavior
We need a way to ensure that proofs are written to the evidence storage atomically. There are multiple ways to do this. One option is to add a lock inside the SetProof function and run every call to SetProof in its own goroutine; by moving it to a goroutine, lock contention won't block the response back to the caller (i.e. under high load).
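A minimal sketch of that first option, reusing the illustrative getEvidence/setEvidence stand-ins from the sketch above (none of these names are the actual pocket-core API):

```go
// Hypothetical: a lock that serializes the whole read-modify-write sequence.
var setProofMu sync.Mutex

func setProofLocked(key, proof string) {
	setProofMu.Lock()
	defer setProofMu.Unlock()

	ev := getEvidence(key)
	ev.Proofs = append(ev.Proofs, proof)
	setEvidence(key, ev)
}

// The relay handler can then fire it off without blocking the response:
//
//	go setProofLocked(sessionKey, proof)
```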

The more robust way is to set up a queue using channels that buffers relay proofs to a worker, which updates the evidences one by one, similar to the producer/consumer problem.
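A rough sketch of the channel-based approach, under the same assumptions as the sketches above: relay handlers enqueue proofs onto a buffered channel, and a single worker drains it, so every update sees the previous write.

```go
// Hypothetical producer/consumer setup, using the same stand-ins as above.
type proofJob struct {
	key   string
	proof string
}

// Relay handlers (the producers) enqueue jobs; the buffer absorbs bursts
// without blocking the response to the caller.
var proofQueue = make(chan proofJob, 1024)

// A single worker (the consumer) applies proofs one at a time, so no
// read-modify-write can overlap with another.
func startProofWorker() {
	go func() {
		for job := range proofQueue {
			ev := getEvidence(job.key)
			ev.Proofs = append(ev.Proofs, job.proof)
			setEvidence(job.key, ev)
		}
	}()
}

// In the relay handler:
//
//	proofQueue <- proofJob{key: sessionKey, proof: relayProof}
```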

@nodiesBlade added the bug (Something isn't working) label Jul 21, 2022
@nodiesBlade changed the title from "[WIP] [BUG REPORT] Updating Evidences doesn't result into an atomic write" to "[BUG REPORT] Updating Evidences doesn't result into an atomic write" Jul 21, 2022
@nodiesBlade changed the title from "[BUG REPORT] Updating Evidences doesn't result into an atomic write" to "[BUG REPORT] Updating Evidences doesn't guarantee an atomic write" Jul 21, 2022
jorgecuesta added a commit to pokt-scan/pocket-core that referenced this issue Oct 12, 2022
Fix a high memory consumption that is also part of issue pokt-network#1457.

Under high request load (1000 rps or more), RAM usage would balloon to roughly 40 GB. After the fix for pokt-network#1457 with the worker pool, the node stays under 14 GB of RAM in my local tests.
jorgecuesta added a commit to pokt-scan/pocket-core that referenced this issue Oct 12, 2022
jorgecuesta added a commit to pokt-scan/pocket-core that referenced this issue Nov 18, 2022
jorgecuesta added a commit to pokt-scan/pocket-core that referenced this issue Nov 23, 2022
oten91 pushed a commit that referenced this issue Nov 28, 2022
oten91 pushed a commit that referenced this issue Dec 1, 2022
@POKT-Discourse

This issue has been mentioned on Pocket Network Forum. There might be relevant details there:

https://forum.pokt.network/t/poktscan-geo-mesh-reimbursement/3945/3

jorgecuesta added a commit to pokt-scan/pocket-core that referenced this issue Mar 22, 2023
jorgecuesta added a commit to pokt-scan/pocket-core that referenced this issue Jun 8, 2023
jorgecuesta added a commit to pokt-scan/pocket-core that referenced this issue Jun 21, 2023
jorgecuesta added a commit to pokt-scan/pocket-core that referenced this issue Jul 10, 2023
jorgecuesta added a commit to pokt-scan/pocket-core that referenced this issue Jul 10, 2023
jorgecuesta added a commit to pokt-scan/pocket-core that referenced this issue Jul 10, 2023
jorgecuesta added a commit to pokt-scan/pocket-core that referenced this issue Jul 10, 2023
jorgecuesta added a commit to pokt-scan/pocket-core that referenced this issue Jul 10, 2023
jorgecuesta added a commit to pokt-scan/pocket-core that referenced this issue Jul 10, 2023
nodiesBlade pushed a commit to pokt-scan/pocket-core that referenced this issue Jul 16, 2023
jorgecuesta added a commit to pokt-scan/pocket-core that referenced this issue Jan 22, 2024
jorgecuesta added a commit to pokt-scan/pocket-core that referenced this issue Jan 30, 2024
jorgecuesta added a commit to pokt-scan/pocket-core that referenced this issue Mar 6, 2024
jorgecuesta added a commit to pokt-scan/pocket-core that referenced this issue Mar 8, 2024
jorgecuesta added a commit to pokt-scan/pocket-core that referenced this issue Mar 8, 2024
jorgecuesta added a commit to pokt-scan/pocket-core that referenced this issue May 2, 2024
jorgecuesta added a commit to pokt-scan/pocket-core that referenced this issue Oct 9, 2024