-
This can happen if the db hasn't got the private transaction at the point the member validates the on-chain transaction. You can work around it using https://docs.goquorum.consensys.net/configure-and-manage/manage/enhanced-permissions
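If it helps, here is a minimal Go sketch of a client-side guard against that race: the consuming party polls its own node for the transaction receipt before relying on the resulting private state. It uses go-ethereum's `ethclient` (GoQuorum is geth-based, so the standard JSON-RPC calls apply); the RPC URL and transaction hash are placeholders, not values from this thread.

```go
// Hedged sketch: wait until this node has a receipt for a (private)
// transaction before acting on the resulting state. RPC URL and tx hash
// below are placeholders for illustration only.
package main

import (
	"context"
	"errors"
	"fmt"
	"log"
	"time"

	"github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

// waitForReceipt polls the node until the receipt is available or the
// timeout expires.
func waitForReceipt(client *ethclient.Client, txHash common.Hash, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	for {
		receipt, err := client.TransactionReceipt(ctx, txHash)
		if err == nil {
			fmt.Printf("receipt found, status=%d\n", receipt.Status)
			return nil
		}
		if !errors.Is(err, ethereum.NotFound) {
			return err // a real RPC/network error, not just "not seen yet"
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for receipt %s", txHash.Hex())
		case <-time.After(2 * time.Second):
			// retry
		}
	}
}

func main() {
	client, err := ethclient.Dial("http://localhost:8545") // placeholder URL
	if err != nil {
		log.Fatal(err)
	}
	txHash := common.HexToHash("0x0") // placeholder: hash of the private transaction
	if err := waitForReceipt(client, txHash, 2*time.Minute); err != nil {
		log.Fatal(err)
	}
}
```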
-
Setup Details:
We are running GoQuorum in a high-availability setup on AKS.
Quorum Version: 22.7.4
Number of Parties: 2 (say Party A and Party B)
Quorum: 3 pods for each party
Scenario:
We have observed multiple occurrences of the merkle root check failed error in the Quorum logs on both parties in the setup described above. We have also encountered some transactions that failed with status 0x0 around the same time. Could these failed transactions be the cause of the merkle root check failed error in the Quorum logs?
Below is one of the blocks where status 0x0 was seen.
Note: The merkle root check failed error has not always occurred on the same pod; it has been observed on different pods in different occurrences.
Sample log from a Quorum pod: Merkle root check failed actual=3f8d3b..886510 expect=787840..74d076
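For reference, a hedged Go sketch of how one could scan a block's transactions for 0x0 receipts, using go-ethereum's `ethclient`; the RPC URL and block number are placeholders, not values from this thread:

```go
// Hedged sketch: list the transactions in a given block and flag any whose
// receipt status is 0 (i.e. 0x0). RPC URL and block number are placeholders.
package main

import (
	"context"
	"fmt"
	"log"
	"math/big"

	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	client, err := ethclient.Dial("http://localhost:8545") // placeholder URL
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()

	block, err := client.BlockByNumber(ctx, big.NewInt(123456)) // placeholder block number
	if err != nil {
		log.Fatal(err)
	}
	for _, tx := range block.Transactions() {
		receipt, err := client.TransactionReceipt(ctx, tx.Hash())
		if err != nil {
			log.Fatal(err)
		}
		if receipt.Status == types.ReceiptStatusFailed {
			fmt.Printf("tx %s in block %d failed (status 0x0)\n", tx.Hash().Hex(), block.NumberU64())
		}
	}
}
```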
How do we validate that the chain is not corrupted, or that it has auto-repaired, after we get this error? We have not seen any syncing issues.
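One basic consistency check is to compare the block hash and public state root at the same height across pods of both parties: if the headers agree, the public chain has not diverged at that height (the private state is tracked locally on each node and is not part of the public block header, so this does not cover private state). A minimal Go sketch, with placeholder RPC URLs and block number:

```go
// Hedged sketch: compare the header hash and public state root at the same
// height on two nodes (e.g. a Party A pod and a Party B pod). The RPC URLs
// and block number are placeholders for illustration only.
package main

import (
	"context"
	"fmt"
	"log"
	"math/big"

	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	ctx := context.Background()
	a, err := ethclient.Dial("http://party-a-pod:8545") // placeholder URL
	if err != nil {
		log.Fatal(err)
	}
	b, err := ethclient.Dial("http://party-b-pod:8545") // placeholder URL
	if err != nil {
		log.Fatal(err)
	}

	height := big.NewInt(123456) // placeholder: height where the error was seen
	ha, err := a.HeaderByNumber(ctx, height)
	if err != nil {
		log.Fatal(err)
	}
	hb, err := b.HeaderByNumber(ctx, height)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("party A: hash=%s stateRoot=%s\n", ha.Hash().Hex(), ha.Root.Hex())
	fmt.Printf("party B: hash=%s stateRoot=%s\n", hb.Hash().Hex(), hb.Root.Hex())
	if ha.Hash() == hb.Hash() {
		fmt.Println("headers match: public chain is consistent at this height")
	} else {
		fmt.Println("headers differ: investigate further")
	}
}
```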