Fix spent nullifier detection for non-linear scanning #876
@ebfull and I talked this over in a pairing session, and discussed various options. The least-bad option I want to go with is as follows.

What this should mean is that if a wallet is mostly scanning linearly, but with occasional non-linear jumps forward (e.g. the "update latest shard tree" scan ranges that #872 suggests), we will only store the nullifier map for those non-linear sections. If a wallet chooses to scan blocks in complete reverse order, then we will end up storing the entire nullifier map from the current chain tip back to the "scan start" (the wallet birthday, or the last time we were fully scanned). In the worst case this means storing the entire nullifier set, which could be GiB of data. However, the caller could detect the database size increase and switch to linear scanning until the remaining range is filled in, at which point the nullifier map would be entirely pruned.
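A minimal in-memory sketch of that nullifier-map idea, assuming non-linearly scanned blocks have their revealed nullifiers recorded and pruned once scanned history becomes contiguous. All names here (`NullifierMap`, `TxLocation`, etc.) are hypothetical illustrations, not the actual data API or the schema that the fix ultimately uses:

```rust
use std::collections::BTreeMap;

type Nullifier = [u8; 32];

/// Where a nullifier was revealed on-chain (hypothetical type).
#[derive(Clone, Copy)]
struct TxLocation {
    block_height: u32,
    tx_index: u16,
}

/// Nullifiers observed while scanning ranges ahead of fully-scanned history.
#[derive(Default)]
struct NullifierMap(BTreeMap<Nullifier, TxLocation>);

impl NullifierMap {
    /// Record every nullifier revealed in a non-linearly scanned block,
    /// because we may later decrypt an older note that one of them spends.
    fn insert_block(&mut self, height: u32, revealed: impl Iterator<Item = (u16, Nullifier)>) {
        for (tx_index, nf) in revealed {
            self.0.insert(nf, TxLocation { block_height: height, tx_index });
        }
    }

    /// When an earlier range is scanned and new notes are decrypted, check
    /// their nullifiers against the map to detect spends we already scanned past.
    fn check_spent(&self, nf: &Nullifier) -> Option<TxLocation> {
        self.0.get(nf).copied()
    }

    /// Once scanned history is contiguous from the scan start up to `height`,
    /// every note received at or below that height has been decrypted, so
    /// retained nullifiers at or below it can never match a future decryption
    /// and can be pruned.
    fn prune_fully_scanned(&mut self, height: u32) {
        self.0.retain(|_, loc| loc.block_height > height);
    }
}
```

This captures the cost trade-off described above: the map only grows for the non-linearly scanned sections, and shrinks back to empty once linear scanning catches up.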
Fixed in #878.
#872 introduced support for non-linear scanning. Clients can call `suggest_scan_ranges` and fetch blocks in that order, to make unspent notes spendable faster. However, just after the PR merged I noticed a flaw: if you scan block range `Y..Z` before block range `X..Y`, then any notes that were received in the range `X..Y` and spent in the range `Y..Z` won't be detected as spent. This is because their nullifiers weren't known at the time `Y..Z` was scanned (so they weren't being looked for), and the range `Y..Z` isn't re-checked after `X..Y` is scanned.

(I noticed this because I tried fetching all given block ranges in reverse order, which led to my `zec-sqlite-cli` wallet greatly over-reporting balance, which is Bad.)

DAGSync doesn't have this problem by design (see #776 for related comments). But we need a way to ensure we don't have this problem for the pre-DAGSync approach we are retrofitting into the data API.