From b590b0173d33abcc721b266c6e681d9aebd78e23 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Javier=20Rodr=C3=ADguez=20Chatruc?= <49622509+jrchatruc@users.noreply.github.com> Date: Thu, 25 Jan 2024 12:16:53 -0300 Subject: [PATCH] Merge main (#65) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * feat(merkle tree): Remove enumeration index assignment from Merkle tree (#551) ## What ❔ Since enumeration indices are now fully stored in Postgres, it makes sense not to duplicate their assignment in the Merkle tree. Instead, the tree could take enum indices as inputs (a sketch of the resulting input shape follows below). ## Why ❔ This allows simplifying the tree logic and unifying "normal" L1 batch processing and tree recovery. (This unification is not a part of this PR; it'll be implemented separately.) ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. * feat(en): Support arbitrary genesis block for external nodes (#537) ## What ❔ Support a non-zero genesis block specified in the executor configuration. Check whether this block exists on initialization; validate its correspondence if it does, and persist consensus fields if it doesn't. ## Why ❔ This is necessary to support gossip-based syncing in practice; we likely won't back-sign all blocks in all envs. ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. * fix(witness_generator): Disable BWIP dependency (#573) This revert is done to facilitate the boojum upgrade on mainnet2. Without this, old provers would halt and the boojum upgrade could take longer than anticipated. `waiting_for_artifacts` forced witness jobs to wait for a BWIP run; `queued` makes them run instantly. - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. 
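To illustrate the Merkle tree change from #551 above: a hypothetical sketch of the tree input once enumeration indices are assigned in Postgres and merely passed in. The names and fields here are assumptions for illustration, not the crate's actual API:

```rust
/// Hypothetical tree input: the enumeration index arrives pre-assigned
/// from Postgres instead of being computed inside the tree.
pub struct TreeEntry {
    pub key: [u8; 32],   // hashed storage key
    pub value: [u8; 32], // storage value
    pub leaf_index: u64, // enumeration index assigned in Postgres
}

/// The tree consumes entries as-is and no longer maintains its own
/// enumeration counter, which lets "normal" L1 batch processing and
/// tree recovery share one input path.
pub fn extend_tree(entries: Vec<TreeEntry>) {
    for entry in entries {
        // insert `entry.value` at `entry.key` under `entry.leaf_index`...
        let _ = entry;
    }
}
```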
--------- Signed-off-by: Danil Co-authored-by: Danil * chore(main): release core 18.4.0 (#560) :robot: I have created a release *beep* *boop* --- ## [18.4.0](https://github.com/matter-labs/zksync-era/compare/core-v18.3.1...core-v18.4.0) (2023-12-01) ### Features * adds spellchecker workflow, and corrects misspelled words ([#559](https://github.com/matter-labs/zksync-era/issues/559)) ([beac0a8](https://github.com/matter-labs/zksync-era/commit/beac0a85bb1535b05c395057171f197cd976bf82)) * **en:** Support arbitrary genesis block for external nodes ([#537](https://github.com/matter-labs/zksync-era/issues/537)) ([15d7eaf](https://github.com/matter-labs/zksync-era/commit/15d7eaf872e222338810243865cec9dff7f6e799)) * **merkle tree:** Remove enumeration index assignment from Merkle tree ([#551](https://github.com/matter-labs/zksync-era/issues/551)) ([e2c1b20](https://github.com/matter-labs/zksync-era/commit/e2c1b20e361e6ee2f5ac69cefe75d9c5575eb2f7)) * Restore commitment test in Boojum integration ([#539](https://github.com/matter-labs/zksync-era/issues/539)) ([06f510d](https://github.com/matter-labs/zksync-era/commit/06f510d00f855ddafaebb504f7ea799700221072)) ### Bug Fixes * Change no pending batches 404 error into a success response ([#279](https://github.com/matter-labs/zksync-era/issues/279)) ([e8fd805](https://github.com/matter-labs/zksync-era/commit/e8fd805c8be7980de7676bca87cfc2d445aab9e1)) * **vm:** Expose additional types and traits ([#563](https://github.com/matter-labs/zksync-era/issues/563)) ([bd268ac](https://github.com/matter-labs/zksync-era/commit/bd268ac02bc3530c1d3247cb9496c3e13c2e52d9)) * **witness_generator:** Disable BWIP dependency ([#573](https://github.com/matter-labs/zksync-era/issues/573)) ([e05d955](https://github.com/matter-labs/zksync-era/commit/e05d955036c76a29f9b6e900872c69e20278e045)) --- This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please). * ci: Runs spellcheck in merge queue. (#574) ## What ❔ Runs spellcheck in merge queue. ## Why ❔ ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * chore: fix typo (#575) ## What ❔ - fix typo ## Why ❔ - fix typo ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. --------- Co-authored-by: Igor Aleksanov * chore: fix typos in document (#577) ## What ❔ - fixed typo ## Why ❔ fix typos in document ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. 
--------- Co-authored-by: Igor Aleksanov * chore: Fix typos (#567) Hi, I have just resolved conflict #432 Co-authored-by: Igor Aleksanov * feat: Add metric to CallTracer for calculating maximum depth of the calls (#535) ## What ❔ Add a metric to CallTracer for calculating the maximum depth of the calls ## Why ❔ We need to know what our limits are (a sketch of the depth tracking follows below). ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. * feat: Add various metrics to the Prover subsystems (#541) ## What ❔ 1. Add various metrics to the Prover subsystems, especially: * the oldest block that wasn't sent to the prover (`fri_prover.oldest_unprocessed_block`) * the oldest block that didn't go through basic/leaf/node aggregation levels (`fri_prover.oldest_unprocessed_block_by_round`) * how much time is spent waiting for an available prover to send data to (`prover_fri_witness_vector_generator.prover_waiting_time`) * the count of attempts to send data to the prover (`prover_fri_witness_vector_generator.prover_attempts_count`) 2. Refactor metrics in the prover to use vise. ## Why ❔ We have some metric coverage on the prover subsystem, but it's incomplete. ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. * chore: fix wrong line (#592) ## What ❔ fix wrong line ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * chore(docs): fixed docs typo (#588) ## What ❔ - Hello, fixed typo ## Why ❔ - fixed typo ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. --------- Co-authored-by: Roman Brodetski * chore(docs): fix typos in document (#589) ## What ❔ Hello, I corrected the typo. ## Why ❔ - fixed typo ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. --------- Co-authored-by: Igor Aleksanov * chore: fix typo (#587) ## What ❔ fixed typos ## Why ❔ fixed typos ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. 
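To illustrate the CallTracer metric from #535 above: a minimal sketch of tracking the maximum call depth. The hook names are hypothetical stand-ins for the tracer's actual entry points in the VM crates:

```rust
/// Tracks the maximum call depth observed while tracing a transaction.
#[derive(Default)]
pub struct CallDepthTracker {
    current_depth: usize,
    max_depth: usize,
}

impl CallDepthTracker {
    /// Called when the tracer sees a call frame being entered.
    pub fn on_call_start(&mut self) {
        self.current_depth += 1;
        self.max_depth = self.max_depth.max(self.current_depth);
    }

    /// Called when a call frame is exited.
    pub fn on_call_end(&mut self) {
        self.current_depth = self.current_depth.saturating_sub(1);
    }

    /// Value to report to the metrics sink (e.g. a histogram) at tx end.
    pub fn max_depth(&self) -> usize {
        self.max_depth
    }
}
```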
--------- Co-authored-by: Igor Aleksanov * chore(docs): fix broken link (#590) ## What ❔ fixed broken link ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. --------- Co-authored-by: Igor Aleksanov * feat: faster and less noisy zk fmt (#513) ## What ❔ I've added caching to prettier and changed it so that the noisy output about changed files is redirected to /dev/null. `zk fmt` is 3 times faster after those changes ## Why ❔ `zk fmt` output was too verbose and we didn't use a cache ## Checklist - [X] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [X] Code has been formatted via `zk fmt` and `zk lint`. * chore: the errors in the document have been corrected (#583) ## What ❔ the errors in the document have been corrected ## Why ❔ the errors in the document have been corrected ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. --------- Co-authored-by: Igor Aleksanov * chore(docs): the errors in the document have been corrected. (#461) ## What ❔ - the errors in the document have been corrected. ## Why ❔ - the errors in the document have been corrected. ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. --------- Co-authored-by: perekopskiy <53865202+perekopskiy@users.noreply.github.com> * chore: update document (#601) ## What ❔ - update document ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. --------- Co-authored-by: Igor Aleksanov Co-authored-by: perekopskiy <53865202+perekopskiy@users.noreply.github.com> * chore: fixed typos in documentation (#603) ## What ❔ - fixed typos in documentation ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. --------- Co-authored-by: perekopskiy <53865202+perekopskiy@users.noreply.github.com> * chore: remove incorrect branch prompts (#594) ## What ❔ remove incorrect branch prompts ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. 
- [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. --------- Co-authored-by: Igor Aleksanov * fix: Sync protocol version between consensus and server blocks (#568) ## What ❔ Aligns the protocol version for consensus blocks with that of `SyncBlock`s. ## Why ❔ Required for gossip-based block syncing to work correctly. ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * chore(main): release core 18.5.0 (#593) :robot: I have created a release *beep* *boop* --- ## [18.5.0](https://github.com/matter-labs/zksync-era/compare/core-v18.4.0...core-v18.5.0) (2023-12-05) ### Features * Add metric to CallTracer for calculating maximum depth of the calls ([#535](https://github.com/matter-labs/zksync-era/issues/535)) ([19c84ce](https://github.com/matter-labs/zksync-era/commit/19c84ce624d53735133fa3b12c7f980e8c14260d)) * Add various metrics to the Prover subsystems ([#541](https://github.com/matter-labs/zksync-era/issues/541)) ([58a4e6c](https://github.com/matter-labs/zksync-era/commit/58a4e6c4c22bd7f002ede1c6def0dc260706185e)) ### Bug Fixes * Sync protocol version between consensus and server blocks ([#568](https://github.com/matter-labs/zksync-era/issues/568)) ([56776f9](https://github.com/matter-labs/zksync-era/commit/56776f929f547b1a91c5b70f89e87ef7dc25c65a)) --- This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please). * chore: fix link (#576) ## What ❔ fix link ## Why ❔ - fix link ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. 
--------- Co-authored-by: Igor Aleksanov Co-authored-by: perekopskiy <53865202+perekopskiy@users.noreply.github.com> * chore(main): release prover 9.1.0 (#460) :robot: I have created a release *beep* *boop* --- ## [9.1.0](https://github.com/matter-labs/zksync-era/compare/prover-v9.0.0...prover-v9.1.0) (2023-12-05) ### Features * Add various metrics to the Prover subsystems ([#541](https://github.com/matter-labs/zksync-era/issues/541)) ([58a4e6c](https://github.com/matter-labs/zksync-era/commit/58a4e6c4c22bd7f002ede1c6def0dc260706185e)) * adds spellchecker workflow, and corrects misspelled words ([#559](https://github.com/matter-labs/zksync-era/issues/559)) ([beac0a8](https://github.com/matter-labs/zksync-era/commit/beac0a85bb1535b05c395057171f197cd976bf82)) * **dal:** Do not load config from env in DAL crate ([#444](https://github.com/matter-labs/zksync-era/issues/444)) ([3fe1bb2](https://github.com/matter-labs/zksync-era/commit/3fe1bb21f8d33557353f447811ca86c60f1fe51a)) * **en:** Implement gossip fetcher ([#371](https://github.com/matter-labs/zksync-era/issues/371)) ([a49b61d](https://github.com/matter-labs/zksync-era/commit/a49b61d7769f9dd7b4cbc4905f8f8a23abfb541c)) * **hyperchain:** Adding prover related commands to zk stack ([#440](https://github.com/matter-labs/zksync-era/issues/440)) ([580cada](https://github.com/matter-labs/zksync-era/commit/580cada003bdfe2fff686a1fc3ce001b4959aa4d)) * **job-processor:** report attempts metrics ([#448](https://github.com/matter-labs/zksync-era/issues/448)) ([ab31f03](https://github.com/matter-labs/zksync-era/commit/ab31f031dfcaa7ddf296786ddccb78e8edd2d3c5)) * **merkle tree:** Allow random-order tree recovery ([#485](https://github.com/matter-labs/zksync-era/issues/485)) ([146e4cf](https://github.com/matter-labs/zksync-era/commit/146e4cf2f8d890ff0a8d33229e224442e14be437)) * **witness-generator:** add logs to leaf aggregation job ([#542](https://github.com/matter-labs/zksync-era/issues/542)) ([7e95a3a](https://github.com/matter-labs/zksync-era/commit/7e95a3a66ea48be7b6059d34630e22c503399bdf)) ### Bug Fixes * Change no pending batches 404 error into a success response ([#279](https://github.com/matter-labs/zksync-era/issues/279)) ([e8fd805](https://github.com/matter-labs/zksync-era/commit/e8fd805c8be7980de7676bca87cfc2d445aab9e1)) * **ci:** Use the same nightly rust ([#530](https://github.com/matter-labs/zksync-era/issues/530)) ([67ef133](https://github.com/matter-labs/zksync-era/commit/67ef1339d42786efbeb83c22fac99f3bf5dd4380)) * **crypto:** update shivini to switch to era-cuda ([#469](https://github.com/matter-labs/zksync-era/issues/469)) ([38bb482](https://github.com/matter-labs/zksync-era/commit/38bb4823c7b5e0e651d9f531feede66c24afd19f)) * Sync protocol version between consensus and server blocks ([#568](https://github.com/matter-labs/zksync-era/issues/568)) ([56776f9](https://github.com/matter-labs/zksync-era/commit/56776f929f547b1a91c5b70f89e87ef7dc25c65a)) --- This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please). 
Co-authored-by: Artem Makhortov <13339874+artmakh@users.noreply.github.com> * chore(main): release prover 10.0.0 (#608) :robot: I have created a release *beep* *boop* --- ## [10.0.0](https://github.com/matter-labs/zksync-era/compare/prover-v9.1.0...prover-v10.0.0) (2023-12-05) ### ⚠ BREAKING CHANGES * boojum integration ([#112](https://github.com/matter-labs/zksync-era/issues/112)) * Update to protocol version 17 ([#384](https://github.com/matter-labs/zksync-era/issues/384)) ### Features * Add various metrics to the Prover subsystems ([#541](https://github.com/matter-labs/zksync-era/issues/541)) ([58a4e6c](https://github.com/matter-labs/zksync-era/commit/58a4e6c4c22bd7f002ede1c6def0dc260706185e)) * adds spellchecker workflow, and corrects misspelled words ([#559](https://github.com/matter-labs/zksync-era/issues/559)) ([beac0a8](https://github.com/matter-labs/zksync-era/commit/beac0a85bb1535b05c395057171f197cd976bf82)) * boojum integration ([#112](https://github.com/matter-labs/zksync-era/issues/112)) ([e76d346](https://github.com/matter-labs/zksync-era/commit/e76d346d02ded771dea380aa8240da32119d7198)) * **boojum:** Adding README to prover directory ([#189](https://github.com/matter-labs/zksync-era/issues/189)) ([c175033](https://github.com/matter-labs/zksync-era/commit/c175033b48a8da4969d88b6850dd0247c4004794)) * **config:** Extract everything not related to the env config from zksync_config crate ([#245](https://github.com/matter-labs/zksync-era/issues/245)) ([42c64e9](https://github.com/matter-labs/zksync-era/commit/42c64e91e13b6b37619f1459f927fa046ef01097)) * **core:** Split config definitions and deserialization ([#414](https://github.com/matter-labs/zksync-era/issues/414)) ([c7c6b32](https://github.com/matter-labs/zksync-era/commit/c7c6b321a63dbcc7f1af045aa7416e697beab08f)) * **dal:** Do not load config from env in DAL crate ([#444](https://github.com/matter-labs/zksync-era/issues/444)) ([3fe1bb2](https://github.com/matter-labs/zksync-era/commit/3fe1bb21f8d33557353f447811ca86c60f1fe51a)) * **en:** Implement gossip fetcher ([#371](https://github.com/matter-labs/zksync-era/issues/371)) ([a49b61d](https://github.com/matter-labs/zksync-era/commit/a49b61d7769f9dd7b4cbc4905f8f8a23abfb541c)) * **fri-prover:** In witness - panic if protocol version is not available ([#192](https://github.com/matter-labs/zksync-era/issues/192)) ([0403749](https://github.com/matter-labs/zksync-era/commit/040374900656c854a7b9de32e5dbaf47c1c47889)) * **hyperchain:** Adding prover related commands to zk stack ([#440](https://github.com/matter-labs/zksync-era/issues/440)) ([580cada](https://github.com/matter-labs/zksync-era/commit/580cada003bdfe2fff686a1fc3ce001b4959aa4d)) * **job-processor:** report attempts metrics ([#448](https://github.com/matter-labs/zksync-era/issues/448)) ([ab31f03](https://github.com/matter-labs/zksync-era/commit/ab31f031dfcaa7ddf296786ddccb78e8edd2d3c5)) * **merkle tree:** Allow random-order tree recovery ([#485](https://github.com/matter-labs/zksync-era/issues/485)) ([146e4cf](https://github.com/matter-labs/zksync-era/commit/146e4cf2f8d890ff0a8d33229e224442e14be437)) * **merkle tree:** Snapshot recovery for Merkle tree ([#163](https://github.com/matter-labs/zksync-era/issues/163)) ([9e20703](https://github.com/matter-labs/zksync-era/commit/9e2070380e6720d84563a14a2246fc18fdb1f8f9)) * Rewrite server binary to use `vise` metrics ([#120](https://github.com/matter-labs/zksync-era/issues/120)) 
([26ee1fb](https://github.com/matter-labs/zksync-era/commit/26ee1fbb16cbd7c4fad334cbc6804e7d779029b6)) * Update to protocol version 17 ([#384](https://github.com/matter-labs/zksync-era/issues/384)) ([ba271a5](https://github.com/matter-labs/zksync-era/commit/ba271a5f34d64d04c0135b8811685b80f26a8c32)) * **vm:** Move all vm versions to the one crate ([#249](https://github.com/matter-labs/zksync-era/issues/249)) ([e3fb489](https://github.com/matter-labs/zksync-era/commit/e3fb4894d08aa98a84e64eaa95b51001055cf911)) * **witness-generator:** add logs to leaf aggregation job ([#542](https://github.com/matter-labs/zksync-era/issues/542)) ([7e95a3a](https://github.com/matter-labs/zksync-era/commit/7e95a3a66ea48be7b6059d34630e22c503399bdf)) ### Bug Fixes * Change no pending batches 404 error into a success response ([#279](https://github.com/matter-labs/zksync-era/issues/279)) ([e8fd805](https://github.com/matter-labs/zksync-era/commit/e8fd805c8be7980de7676bca87cfc2d445aab9e1)) * **ci:** Use the same nightly rust ([#530](https://github.com/matter-labs/zksync-era/issues/530)) ([67ef133](https://github.com/matter-labs/zksync-era/commit/67ef1339d42786efbeb83c22fac99f3bf5dd4380)) * **crypto:** update deps to include circuit fixes ([#402](https://github.com/matter-labs/zksync-era/issues/402)) ([4c82015](https://github.com/matter-labs/zksync-era/commit/4c820150714dfb01c304c43e27f217f17deba449)) * **crypto:** update shivini to switch to era-cuda ([#469](https://github.com/matter-labs/zksync-era/issues/469)) ([38bb482](https://github.com/matter-labs/zksync-era/commit/38bb4823c7b5e0e651d9f531feede66c24afd19f)) * **crypto:** update snark-vk to be used in server and update args for proof wrapping ([#240](https://github.com/matter-labs/zksync-era/issues/240)) ([4a5c54c](https://github.com/matter-labs/zksync-era/commit/4a5c54c48bbc100c29fa719c4b1dc3535743003d)) * **docs:** Add links to setup-data keys ([#360](https://github.com/matter-labs/zksync-era/issues/360)) ([1d4fe69](https://github.com/matter-labs/zksync-era/commit/1d4fe697e4e98a8e64642cde4fe202338ce5ec61)) * **path:** update gpu prover setup data path to remove extra gpu suffix ([#454](https://github.com/matter-labs/zksync-era/issues/454)) ([2e145c1](https://github.com/matter-labs/zksync-era/commit/2e145c192b348b2756acf61fac5bfe0ca5a6575f)) * **prover-fri:** Update setup loading for node agg circuit ([#323](https://github.com/matter-labs/zksync-era/issues/323)) ([d1034b0](https://github.com/matter-labs/zksync-era/commit/d1034b05754219b603508ef79c114d908c94c1e9)) * **prover-logging:** tasks_allowed_to_finish set to true for 1 off jobs ([#227](https://github.com/matter-labs/zksync-era/issues/227)) ([0fac66f](https://github.com/matter-labs/zksync-era/commit/0fac66f5ff86cc801ea0bb6f9272cb397cd03a95)) * Sync protocol version between consensus and server blocks ([#568](https://github.com/matter-labs/zksync-era/issues/568)) ([56776f9](https://github.com/matter-labs/zksync-era/commit/56776f929f547b1a91c5b70f89e87ef7dc25c65a)) * Update prover to use the correct storage oracle ([#446](https://github.com/matter-labs/zksync-era/issues/446)) ([835dd82](https://github.com/matter-labs/zksync-era/commit/835dd828ef5610a446ec8c456e4df1def0e213ab)) --- This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please). * refactor: Removed protobuf encoding from zksync_types (#562) ## What ❔ Removed protobuf encoding from zksync_types. 
## Why ❔ To make zksync_types have fewer dependencies. * fix: use powers array in plonkSetup function (#508) ## What ❔ This PR modifies the `plonkSetup` function in `run.ts` to use the `powers` array when downloading key files. ## Why ❔ Previously, the function forgot to use the `powers` argument. Now, it will download keys for any powers specified in the `powers` array. ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [NA] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. --------- Co-authored-by: Igor Aleksanov * fix: Fix database connections in house keeper (#610) ## What ❔ Use correct connections for databases in house keeper. ## Why ❔ Databases are divided into two on mainnet and testnet ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * feat(contract-verifier): Support verification for zksolc v1.3.17 (#606) ## What ❔ Adds support for zksolc v1.3.17 to contract-verifier. ## Why ❔ Contract-verifier should support the latest version ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * chore(main): release core 18.6.0 (#613) :robot: I have created a release *beep* *boop* --- ## [18.6.0](https://github.com/matter-labs/zksync-era/compare/core-v18.5.0...core-v18.6.0) (2023-12-05) ### Features * **contract-verifier:** Support verification for zksolc v1.3.17 ([#606](https://github.com/matter-labs/zksync-era/issues/606)) ([b65fedd](https://github.com/matter-labs/zksync-era/commit/b65fedd6894497a4c9fbf38d558ccfaca535d1d2)) ### Bug Fixes * Fix database connections in house keeper ([#610](https://github.com/matter-labs/zksync-era/issues/610)) ([aeaaecb](https://github.com/matter-labs/zksync-era/commit/aeaaecb54b6bd3f173727531418dc242357b2aee)) --- This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please). * chore: Mainnet upgrade calldata (#564) ## What ❔ Includes mainnet upgrade preparation as well as some minor fixes for the upgrade tool ## Why ❔ ## Checklist - [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [ ] Code has been formatted via `zk fmt` and `zk lint`. --------- Co-authored-by: koloz * chore: Update generated Prover FRI CPU setup-data keys from branch main (#609) "Update generated Prover FRI CPU setup-data keys from branch main" * perf(external-node): Use async miniblock sealing in external IO (#611) ## What ❔ External IO uses async miniblock sealing. ## Why ❔ Execution of transactions and miniblock sealing (writing data to postgres) happen in parallel, so performance is better. 
## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * chore: fix document path (#615) ## What ❔ fix document path ## Why ❔ ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. --------- Co-authored-by: Igor Aleksanov * chore: Remove era-reviewers from codeowners (#618) ## What ❔ Removes era-reviewers group from codeowners. ## Why ❔ - Too noisy. - We have internal processes for that anyway. ## Checklist - [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [ ] Code has been formatted via `zk fmt` and `zk lint`. - [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * chore(main): release core 18.6.1 (#616) :robot: I have created a release *beep* *boop* --- ## [18.6.1](https://github.com/matter-labs/zksync-era/compare/core-v18.6.0...core-v18.6.1) (2023-12-06) ### Performance Improvements * **external-node:** Use async miniblock sealing in external IO ([#611](https://github.com/matter-labs/zksync-era/issues/611)) ([5cf7210](https://github.com/matter-labs/zksync-era/commit/5cf7210dc77bb615944352f23ed39fad324b914f)) --- This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please). * feat(hyperchain-wizard): zkStack CLI GPU support (#612) ## What ❔ Support for creating a zk hyperchain via the zk CLI with GPU-based provers ## Why ❔ ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [X] Code has been formatted via `zk fmt` and `zk lint`. - [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. --------- Co-authored-by: Igor Borodin * fix: Cursor not moving correctly after poll in `get_filter_changes` (#546) ## What ❔ When polling filter changes, add 1 to the actual `from_block` value ## Why ❔ Otherwise, the last block that was included in a poll would also be included in the next one (i.e., the next poll should start at the last included block + 1). ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. --------- Co-authored-by: Fedor Sakharov * fix: update google cloud dependencies that do not depend on rsa (#622) ## What ❔ This PR updates the dependencies of `google-cloud-storage` and `google-cloud-auth`. 
The changes are as follows: - From google-cloud-storage = "0.12.0" to google-cloud-storage = "0.15.0" - From google-cloud-auth = "0.11.0" to google-cloud-auth = "0.13.0" Relevant google-cloud changes: https://github.com/yoshidan/google-cloud-rust/pull/217 ## Why ❔ The primary reason for these updates is to address a security vulnerability associated with the `rsa` crate, as indicated by a recent `cargo-deny` check. The vulnerability (Marvin Attack, RUSTSEC-2023-0071) was detected in rsa v0.6.1, which is a dependency of `google-cloud-storage v0.12.0`. By updating to `google-cloud-storage v0.15.0`, we eliminate the use of the `rsa` crate, as the newer version of `google-cloud-storage` does not depend on it. Similarly, `google-cloud-auth` is updated for compatibility. Cargo deny output: ``` error[vulnerability]: Marvin Attack: potential key recovery through timing sidechannels ┌─ /Users/dustinbrickwood/Documents/dev/dut/forks/foundry-zksync/Cargo.lock:759:1 │ 759 │ rsa 0.6.1 registry+https://github.com/rust-lang/crates.io-index │ --------------------------------------------------------------- security vulnerability detected │ = ID: RUSTSEC-2023-0071 = Advisory: https://rustsec.org/advisories/RUSTSEC-2023-0071 = ### Impact Due to a non-constant-time implementation, information about the private key is leaked through timing information which is observable over the network. An attacker may be able to use that information to recover the key. ### Patches No patch is yet available, however work is underway to migrate to a fully constant-time implementation. ### Workarounds The only currently available workaround is to avoid using the `rsa` crate in settings where attackers are able to observe timing information, e.g. local use on a non-compromised computer is fine. ### References This vulnerability was discovered as part of the "[Marvin Attack]", which revealed several implementations of RSA including OpenSSL had not properly mitigated timing sidechannel attacks. [Marvin Attack]: https://people.redhat.com/~hkario/marvin/ = Announcement: https://github.com/RustCrypto/RSA/issues/19#issuecomment-1822995643 = Solution: No safe upgrade is available! 
= rsa v0.6.1 └── google-cloud-storage v0.12.0 └── zksync_object_store v0.1.0 ├── zksync_core v0.1.0 │ └── era_test_node v0.1.0-alpha.12 │ └── era_revm v0.0.1-alpha │ ├── foundry-common v0.2.0 │ │ ├── anvil v0.2.0 │ │ │ ├── (dev) forge v0.2.0 │ │ │ └── (dev) zkforge v0.2.0 │ │ ├── cast v0.2.0 │ │ ├── chisel v0.2.0 │ │ ├── forge v0.2.0 (*) │ │ ├── foundry-cli v0.2.0 │ │ │ ├── cast v0.2.0 (*) │ │ │ ├── chisel v0.2.0 (*) │ │ │ ├── forge v0.2.0 (*) │ │ │ ├── zkcast v0.2.0 │ │ │ │ └── zkforge v0.2.0 (*) │ │ │ └── zkforge v0.2.0 (*) │ │ ├── foundry-debugger v0.2.0 │ │ │ ├── forge v0.2.0 (*) │ │ │ ├── foundry-cli v0.2.0 (*) │ │ │ └── zkforge v0.2.0 (*) │ │ ├── foundry-evm v0.2.0 │ │ │ ├── anvil v0.2.0 (*) │ │ │ ├── anvil-core v0.2.0 │ │ │ │ └── anvil v0.2.0 (*) │ │ │ ├── cast v0.2.0 (*) │ │ │ ├── chisel v0.2.0 (*) │ │ │ ├── forge v0.2.0 (*) │ │ │ ├── foundry-cli v0.2.0 (*) │ │ │ ├── foundry-debugger v0.2.0 (*) │ │ │ ├── zkcast v0.2.0 (*) │ │ │ └── zkforge v0.2.0 (*) │ │ ├── foundry-test-utils v0.2.0 │ │ │ ├── (dev) cast v0.2.0 (*) │ │ │ ├── (dev) forge v0.2.0 (*) │ │ │ ├── (dev) zkcast v0.2.0 (*) │ │ │ └── (dev) zkforge v0.2.0 (*) │ │ ├── (dev) foundry-utils v0.2.0 │ │ │ ├── anvil v0.2.0 (*) │ │ │ ├── anvil-core v0.2.0 (*) │ │ │ ├── cast v0.2.0 (*) │ │ │ ├── chisel v0.2.0 (*) │ │ │ ├── forge v0.2.0 (*) │ │ │ ├── forge-doc v0.2.0 │ │ │ │ ├── forge v0.2.0 (*) │ │ │ │ └── zkforge v0.2.0 (*) │ │ │ ├── foundry-cli v0.2.0 (*) │ │ │ ├── foundry-debugger v0.2.0 (*) │ │ │ ├── (dev) foundry-evm v0.2.0 (*) │ │ │ ├── foundry-test-utils v0.2.0 (*) │ │ │ ├── zkcast v0.2.0 (*) │ │ │ └── zkforge v0.2.0 (*) │ │ ├── zkcast v0.2.0 (*) │ │ └── zkforge v0.2.0 (*) │ └── foundry-evm v0.2.0 (*) └── zksync_prover_utils v0.1.0 ├── zksync_core v0.1.0 (*) └── zksync_verification_key_generator_and_server v0.1.0 └── zksync_core v0.1.0 (*) ``` ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * docs: Include command to create rich L2 wallets. (#569) ## What ❔ Improve documentation by including the command to create rich L2 wallets. ## Why ❔ Save other people time figuring out the exact invocation. ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [ ] Code has been formatted via `zk fmt` and `zk lint`. - [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * chore: Enforce uniform import structure (#617) ## What ❔ ...using `zk fmt` command by supplying relevant command-line args to rustfmt. These args work on stable Rust (at least for now) despite being unstable. ## Why ❔ More structured imports are easier to read. ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * fix(job-processor): `max_attepts_reached` metric (#626) ## What ❔ The `max_attepts_reached` metric is now reported on job start rather than on failure. With this change, the metric will be reported not only if the last attempt failed but also if the job got stuck/stopped/etc. 
## Why ❔ Reporting the `max_attepts_reached` metric in all cases (a sketch of the report-on-start logic follows below). ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * chore(vm): Expose more pubs and make inmemory clonable (#632) ## What ❔ Expose `AppFramestack` as public and make `InMemoryStorage` clonable. ## Why ❔ To support era-test-node ## Checklist - [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [ ] Code has been formatted via `zk fmt` and `zk lint`. - [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. --------- Signed-off-by: Danil * chore: remove old witness generator (#619) * fix: improve docs repositories (#570) ## What ❔ * Add new/missing zkSync repositories * Add missing descriptions * Remove deprecated repositories ## Why ❔ To make the list up-to-date ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * docs(setup): Added TL;DR instructions for new zkstack setup (#621) ## What ❔ * Added a TL;DR set of instructions needed to set up the system to run the zk stack from scratch. ## Why ❔ * To have a list of commands in one place. * chore: upgrades local test network to cancun+deneb compatible one (#580) ## What ❔ Upgrades the local testnet to a `Cancun+Deneb`-compatible one. So far: Cancun gets enabled: ``` 2023-12-01 21:57:49 INFO [12-01|20:57:49.152] Merge configured: 2023-12-01 21:57:49 INFO [12-01|20:57:49.152] - Hard-fork specification: https://github.com/ethereum/execution-specs/blob/master/network-upgrades/mainnet-upgrades/paris.md 2023-12-01 21:57:49 INFO [12-01|20:57:49.152] - Network known to be merged: true 2023-12-01 21:57:49 INFO [12-01|20:57:49.152] - Total terminal difficulty: 0 2023-12-01 21:57:49 INFO [12-01|20:57:49.152] 2023-12-01 21:57:49 INFO [12-01|20:57:49.152] Post-Merge hard forks (timestamp based): 2023-12-01 21:57:49 INFO [12-01|20:57:49.152] - Shanghai: @1701464272 (https://github.com/ethereum/execution-specs/blob/master/network-upgrades/mainnet-upgrades/shanghai.md) 2023-12-01 21:57:49 INFO [12-01|20:57:49.152] - Cancun: @1701464272 ``` The new network has been built into CI workflows. ## Why ❔ ## Checklist - [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [ ] Code has been formatted via `zk fmt` and `zk lint`. - [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. 
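To illustrate the `max_attepts_reached` change from #626 above: a minimal sketch of reporting the metric when a job starts its final attempt, so jobs that later get stuck or stopped are counted too, not only ones whose last attempt fails outright. The types and names are stand-ins, not the actual job-processor API:

```rust
/// Stand-in for the real vise-based metrics struct.
pub struct JobMetrics {
    pub max_attempts_reached: u64,
}

/// Report at the *start* of the final attempt instead of on failure.
pub fn on_job_start(attempt: u32, max_attempts: u32, metrics: &mut JobMetrics) {
    if attempt >= max_attempts {
        // Previously this was emitted only when the last attempt failed,
        // which missed stuck/stopped jobs.
        metrics.max_attempts_reached += 1;
    }
    // ...run the job...
}
```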
* chore(zk init): Removed plonk setup (#638) ## What ❔ * Removed plonk setup (where we downloaded the CRS keys) ## Why ❔ * It is needed only for prover setup (and there it is handled within prover_setup.ts directly) * it will save a lot of network bandwidth and time during zk stack setup * chore(CI): Speeding up docker builds in CI (#640) ## What ❔ * Removed zk contract compilation that was run before prover docker builds * Removed docker builds for old prover and old circuit_synthesizer * Downloading CRS file only for the snark wrapper / compressor job ## Why ❔ * to speed up docker CI that runs on every PR * chore(zk): finishing migration to docker compose (#646) ## What ❔ * replaced docker-compose with docker compose ## Why ❔ * we moved to docker compose (instead of docker-compose), which is the newer version * chore(ci): Pre-download compilers, as a workaround for old hardhat plugins (#645) ## What ❔ Add a hacky way to pre-download the needed compilers ## Why ❔ Hardhat plugins, which we are using right now, are pretty often rate-limited by GitHub when downloading compilers, due to an unoptimized download procedure. This issue was fixed in later versions of the plugins, but we can't update _some_ of the hardhat plugins due to changes in their deps. This pre-download will help us until we can update them. ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [ ] Code has been formatted via `zk fmt` and `zk lint`. - [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * feat(en): Remove `SyncBlock.root_hash` (#633) ## What ❔ Removes `root_hash` field from `SyncBlock`. ## Why ❔ It's not used anywhere, is set to a dummy value (the miniblock hash), and doesn't make sense in general (state hashes are computed at the L1 batch level, not the miniblock level). ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * docs: New protocol specification (#641) ## What ❔ New docs, focusing on protocol specs. Note: don't merge yet, waiting for review. ## Why ❔ We want to have good docs so people can understand the zkVM and its benefits. ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. --------- Co-authored-by: kelemeno Co-authored-by: MexicanAce Co-authored-by: Jack Hamer <47187316+JackHamer09@users.noreply.github.com> Co-authored-by: Marcin M <128217157+mm-zk@users.noreply.github.com> Co-authored-by: Fedor Sakharov * feat(zk tool): Added yarn & directory checks (#188) ## What ❔ * zk will now print a warning if you run it from outside of the $ZKSYNC_HOME directory * it will also warn you if your yarn has the wrong version ## Why ❔ * we use ZKSYNC_HOME to figure out which binaries to use, so if you are working on a couple of versions of era in multiple different directories, and forget to update ZKSYNC_HOME, then you might be pushing the wrong binaries without knowing. 
* if your version of yarn is incorrect, you're going to get a lot of strange TypeScript compilation errors along the way. These are only printed as warnings; they don't prevent you from running the tool. ![image](https://github.com/matter-labs/zksync-era/assets/128217157/e8186e36-953d-4307-9449-048bdd589321) * fix: follow up metrics fixes (#648) ## What ❔ Fixes for previous metrics PRs. * Fix oldest unproved block by round (we have `manually_skipped` status in DB, is this correct?), SQL query and number of rounds * Rename some API server metrics * The filter count may work incorrectly? ## Why ❔ For metrics to work correctly. ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * chore(ci): Pre-download one more version of vyper compiler (#652) ## What ❔ Pre-download one more version of the vyper compiler ## Why ❔ Workaround for rate limiting ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [ ] Code has been formatted via `zk fmt` and `zk lint`. - [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * feat(en): Check block hash correspondence (#572) ## What ❔ Checks that the block hash received by EN corresponds to the locally computed block hash. The check occurs reasonably early (i.e., before the block is executed). ## Why ❔ Allows ensuring that there's nothing fundamentally wrong with the EN implementation. ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * fix: fix docs error (#635) ## What ❔ fix docs error ## Why ❔ ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. --------- Co-authored-by: Igor Aleksanov Co-authored-by: Igor Borodin Co-authored-by: AnastasiiaVashchuk <72273339+AnastasiiaVashchuk@users.noreply.github.com> * chore(core/lib): typo fix (#624) A few minor typo fixes: 3 comments, 1 error message --------- Co-authored-by: AnastasiiaVashchuk <72273339+AnastasiiaVashchuk@users.noreply.github.com> * chore: remove tslint related comments (#636) ## What ❔ This pull request aims to remove TSLint-related comments and configurations from the codebase. ## Why ❔ The removal is necessary because the repository has transitioned to ESLint, rendering TSLint-related comments and configurations obsolete. This cleanup ensures a more streamlined and consistent codebase, aligning with the migration to ESLint for improved code quality and maintainability. ## Checklist - [X] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [X] Tests for the changes have been added / updated. 
- [X] Documentation comments have been added / updated. - [X] Code has been formatted via `zk fmt` and `zk lint`. - [X] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. --------- Co-authored-by: AnastasiiaVashchuk <72273339+AnastasiiaVashchuk@users.noreply.github.com> * feat: Snapshot Creator (#498) ## What ❔ Snapshot creator is a small command-line tool for creating a snapshot of a zkSync node so that an EN node is able to initialize to a certain L1 batch. Snapshots do not contain the full transaction history, but rather a minimal subset of information needed to bootstrap an EN node (a sketch of such a layout follows below). ## Checklist - [X] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [X] Tests for the changes have been added / updated. - [X] Documentation comments have been added / updated. - [X] Code has been formatted via `zk fmt` and `zk lint`. * chore(ci): Pre-download more versions of compilers + less verbose (#653) ## What ❔ Pre-download more versions of compilers ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [ ] Code has been formatted via `zk fmt` and `zk lint`. - [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * feat(contract-verifier): Add zksolc v1.3.18 (#654) ## What ❔ Adds zksolc v1.3.18 to contract verifier ## Why ❔ Adds zksolc v1.3.18 to contract verifier ## Checklist - [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [ ] Code has been formatted via `zk fmt` and `zk lint`. - [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * fix: Follow up metrics fixes vol.2 (#656) ## What ❔ Fix the oldest-not-generated-batch query ## Why ❔ For metrics to work correctly ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * fix: removing sqlx check from pre-push (#658) Removing this check for now; I will re-add it as an opt-in check, probably with other checks bundled in. It's best not to require a built project on every push. 
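To illustrate the Snapshot Creator idea from #498 above: a hypothetical sketch of the minimal data a snapshot might carry. The struct names and fields are illustrative assumptions, not the actual snapshot format:

```rust
/// Header describing which state the snapshot captures (hypothetical).
pub struct SnapshotHeader {
    pub l1_batch_number: u32,
    pub miniblock_number: u32,
    /// The storage state is split into chunks stored as separate objects.
    pub storage_logs_chunk_count: u64,
}

/// One chunk of the storage state (hypothetical layout). No transaction
/// history is included -- only what an EN needs to initialize at the batch.
pub struct SnapshotStorageLogsChunk {
    /// (hashed key, value, L1 batch of initial write, enumeration index)
    pub storage_logs: Vec<([u8; 32], [u8; 32], u32, u64)>,
}
```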
* chore(main): release core 18.7.0 (#620) :robot: I have created a release *beep* *boop* --- ## [18.7.0](https://github.com/matter-labs/zksync-era/compare/core-v18.6.1...core-v18.7.0) (2023-12-12) ### Features * **contract-verifier:** Add zksolc v1.3.18 ([#654](https://github.com/matter-labs/zksync-era/issues/654)) ([77f91fe](https://github.com/matter-labs/zksync-era/commit/77f91fe253a0876e56de4aee47071fe249386fc7)) * **en:** Check block hash correspondence ([#572](https://github.com/matter-labs/zksync-era/issues/572)) ([28f5642](https://github.com/matter-labs/zksync-era/commit/28f5642c35800997879bc549fca9e960c4516d21)) * **en:** Remove `SyncBlock.root_hash` ([#633](https://github.com/matter-labs/zksync-era/issues/633)) ([d4cc6e5](https://github.com/matter-labs/zksync-era/commit/d4cc6e564642b4c49ef4a546cd1c86821327683c)) * Snapshot Creator ([#498](https://github.com/matter-labs/zksync-era/issues/498)) ([270edee](https://github.com/matter-labs/zksync-era/commit/270edee34402ecbd1761bc1fca559ef2205f71e8)) ### Bug Fixes * Cursor not moving correctly after poll in `get_filter_changes` ([#546](https://github.com/matter-labs/zksync-era/issues/546)) ([ec5907b](https://github.com/matter-labs/zksync-era/commit/ec5907b70ff7d868a05b685a1641d96dc4fa9d69)) * fix docs error ([#635](https://github.com/matter-labs/zksync-era/issues/635)) ([883c128](https://github.com/matter-labs/zksync-era/commit/883c1282f7771fb16a41d45391b74243021271e3)) * follow up metrics fixes ([#648](https://github.com/matter-labs/zksync-era/issues/648)) ([a317c7a](https://github.com/matter-labs/zksync-era/commit/a317c7ab68219cb376d08c8d1ec210c63b3c269f)) * Follow up metrics fixes vol.2 ([#656](https://github.com/matter-labs/zksync-era/issues/656)) ([5c1aea2](https://github.com/matter-labs/zksync-era/commit/5c1aea2a94d7eded26c3a4ae4973ff983c15e7fa)) * **job-processor:** `max_attepts_reached` metric ([#626](https://github.com/matter-labs/zksync-era/issues/626)) ([dd9b308](https://github.com/matter-labs/zksync-era/commit/dd9b308be9b0a6e37aad75f6f54b98e30a2ae14e)) * update google cloud dependencies that do not depend on rsa ([#622](https://github.com/matter-labs/zksync-era/issues/622)) ([8a8cad6](https://github.com/matter-labs/zksync-era/commit/8a8cad6ce62f2d34bb34adcd956f6920c08f94b8)) --- This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please). * ci: removes docker-compose-runner.yml (#649) ## What ❔ ## Why ❔ ## Checklist - [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [ ] Code has been formatted via `zk fmt` and `zk lint`. - [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * chore(en): adds version metric collecting (#655) ## What ❔ This PR adds a metric that logs release version and protocol version of the external node. That is needed for provisioning purposes. ## Checklist - [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [ ] Code has been formatted via `zk fmt` and `zk lint`. - [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. 
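To illustrate the version-metric entry from #655 above: the usual way to expose build information is a constant gauge set to 1, with the versions travelling in the labels. The node's own metrics use the `vise` crate; purely for illustration, here is the same pattern with the `prometheus` crate and a hypothetical metric name:

```rust
use prometheus::{register_int_gauge_vec, IntGaugeVec};

/// Registers a constant "info" gauge set to 1; the release and protocol
/// versions are exposed as labels rather than as the metric value.
pub fn register_version_info(
    release_version: &str,
    protocol_version: &str,
) -> prometheus::Result<IntGaugeVec> {
    let info = register_int_gauge_vec!(
        "external_node_info", // hypothetical metric name
        "Release and protocol version of the external node",
        &["release_version", "protocol_version"]
    )?;
    info.with_label_values(&[release_version, protocol_version])
        .set(1);
    Ok(info)
}
```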
* ci(local-node): replace deprecated node setup (#642) ## What ❔ As in commit 33174aa59 and `docker/zk-environment/Dockerfile`, use the repo to set up node instead of `curl | sh`. ## Why ❔ The old method is deprecated, and this saves the one minute of waiting time previously spent displaying the deprecation banner. ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [ ] Code has been formatted via `zk fmt` and `zk lint`. - [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. Signed-off-by: Harald Hoyer * chore(en): Add sepolia en config (#663) ## What ❔ - marks goerli testnet EN config as deprecated - adds sepolia testnet EN config - removes `EN_BOOTLOADER_HASH`, `EN_DEFAULT_AA_HASH` from configs as these vars are not used anymore ## Why ❔ Prepare docs for sepolia testnet EN launch ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * ci: Remove obsolete CI workflows (#664) ## What ❔ - Removes Docker image builds with old provers with bundled setup keys - Removes obsolete manual standalone External Node builds (already integrated into Core workflows) ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [ ] Code has been formatted via `zk fmt` and `zk lint`. - [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * feat(merkle tree): Snapshot recovery in metadata calculator (#607) ## What ❔ Implements tree recovery based on Postgres data in the metadata calculator. ## Why ❔ We obviously need tree recovery. Driving it from the metadata calculator (i.e., choreography) allows restoring multiple tree instances from the same snapshot. ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * feat: Implemented 1 validator consensus for the main node (#554) ## What ❔ Implemented 1 validator consensus for the main node. It is NOT enabled yet; the corresponding configuration still needs to be provided. It is responsible for populating the consensus column in the Miniblocks table. Consensus will be running asynchronously for now, not affecting the miniblock production at all. Fixes BFT-388. ## Why ❔ Reaching a consensus with just 1 validator doesn't provide much value; however, we want to start running the consensus code in production (in a "shadow" mode) to make sure it works as expected before we start relying on it. * feat(dal): Make ConnectionPoolBuilder owned (#676) ## What ❔ Removes the lifetime parameter in `ConnectionPoolBuilder` ## Why ❔ To make it easier to store the builder in long-lived structures. ## Checklist - [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs). 
* fix: dropping installed filters (#670)

## What ❔
Fix dropping installed filters:
* use an LRU cache instead of a `HashMap` for installed filters

## Why ❔
Sometimes filters are dropped very quickly.

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.
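An illustrative sketch of the idea (not the actual server code): evicting installed filters with an LRU policy instead of an unbounded map, so recently polled filters are the last to be dropped. Assumes the `lru` crate (0.8+ API, capacity as `NonZeroUsize`); the `Filter` type is made up:

```rust
use std::num::NonZeroUsize;

use lru::LruCache;

#[derive(Debug)]
enum Filter {
    Block { last_seen_block: u64 },
    PendingTransactions,
}

fn main() {
    // Capacity-bounded storage: inserting beyond capacity evicts the least
    // recently used filter rather than a random or freshly installed one.
    let mut filters: LruCache<u64, Filter> = LruCache::new(NonZeroUsize::new(2).unwrap());
    filters.put(1, Filter::Block { last_seen_block: 100 });
    filters.put(2, Filter::PendingTransactions);

    // Polling a filter (as `eth_getFilterChanges` would) marks it as
    // recently used, protecting it from eviction.
    let _ = filters.get(&1);

    // This insertion evicts filter 2, the least recently used entry.
    filters.put(3, Filter::Block { last_seen_block: 200 });
    assert!(filters.get(&2).is_none());
    assert!(filters.get(&1).is_some());
}
```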
* chore(zk): clean volumes directory from container to fix permissions on linux (#673)

## What ❔
Currently, `zk init` may fail on Linux due to Docker creating files with root permissions.

## Checklist
- [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* feat(core): Merge bounded and unbounded gas adjuster (#678)

## What ❔
The bounded gas adjuster was used everywhere except in `EthTxManager`, where it serves as a tx params provider rather than an L1 gas provider. It is safe to merge the two and keep a single entity.

## Why ❔
One less entity to maintain.

## Checklist
- [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* feat(api): Sunset API translator (#675)

## What ❔
- Removes the API translator component, the `zks_getLogsWithVirtualBlocks` method, and related no-longer-needed code.

## Why ❔
- The component was introduced for the period of the block.timestamp migration; it's no longer needed.

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* chore(main): release core 18.8.0 (#674)

:robot: I have created a release *beep* *boop*
---
## [18.8.0](https://github.com/matter-labs/zksync-era/compare/core-v18.7.0...core-v18.8.0) (2023-12-13)

### Features

* **api:** Sunset API translator ([#675](https://github.com/matter-labs/zksync-era/issues/675)) ([846fd33](https://github.com/matter-labs/zksync-era/commit/846fd33a74734520ae1bb57d8bc8abca71e16f25))
* **core:** Merge bounded and unbounded gas adjuster ([#678](https://github.com/matter-labs/zksync-era/issues/678)) ([f3c3bf5](https://github.com/matter-labs/zksync-era/commit/f3c3bf53b3136b2fe8c17638c83fda3328fd6033))
* **dal:** Make ConnectionPoolBuilder owned ([#676](https://github.com/matter-labs/zksync-era/issues/676)) ([1153c42](https://github.com/matter-labs/zksync-era/commit/1153c42f9d0e7cfe78da64d4508974e74afea4ee))
* Implemented 1 validator consensus for the main node ([#554](https://github.com/matter-labs/zksync-era/issues/554)) ([9c59838](https://github.com/matter-labs/zksync-era/commit/9c5983858d9dd84de360e6a082369a06bb58e924))
* **merkle tree:** Snapshot recovery in metadata calculator ([#607](https://github.com/matter-labs/zksync-era/issues/607)) ([f49418b](https://github.com/matter-labs/zksync-era/commit/f49418b24cdfa905e571568cb3393296c951e903))

### Bug Fixes

* dropping installed filters ([#670](https://github.com/matter-labs/zksync-era/issues/670)) ([985c737](https://github.com/matter-labs/zksync-era/commit/985c7375f6fa192b45473d8ba0b7dacb9314a482))
---
This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).

* fix(local-node): debug and fix local-node setup (#666)

* test(snapshot creator): Add basic tests (#677)

## What ❔
Adds basic Rust tests for the snapshot creator component. Also, brushes up the component.

## Why ❔
Rust tests are easier to run than TS integration tests, and they provide more control over inputs (i.e., the storage state).

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* chore: remove witgen related tables (#671)

* feat: Dockerfile and step in CI to build snapshots-creator image (#686)

* fix(en): Downgrade miniblock hash equality assertion to warning (#695)

## What ❔
Subject.

## Why ❔
There is a valid scenario in which we might get a miniblock hash mismatch, namely after a reorg.

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* feat: Add ecadd and ecmul to the list of precompiles upon genesis (#669)

## What ❔
Add ecadd and ecmul to the list of system contracts to be added upon genesis.

## Why ❔

## Checklist
- [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.
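A hedged sketch of the idea behind #669. The types, the helper, and the bytecode placeholders are all hypothetical; only the addresses (0x06 for ecadd, 0x07 for ecmul, mirroring the Ethereum precompiles) come from the PR's subject matter:

```rust
/// Minimal stand-in for an L2 address (20 bytes).
type Address = [u8; 20];

struct SystemContract {
    address: Address,
    bytecode: Vec<u8>,
}

/// Builds an address whose only nonzero byte is the lowest one.
fn address_from_low_byte(byte: u8) -> Address {
    let mut addr = [0u8; 20];
    addr[19] = byte;
    addr
}

/// Hypothetical genesis contract list; ecadd and ecmul live at 0x06 and 0x07.
fn genesis_system_contracts() -> Vec<SystemContract> {
    vec![
        SystemContract { address: address_from_low_byte(0x01), bytecode: vec![/* ecrecover */] },
        SystemContract { address: address_from_low_byte(0x02), bytecode: vec![/* sha256 */] },
        // Added upon genesis by #669:
        SystemContract { address: address_from_low_byte(0x06), bytecode: vec![/* ecadd */] },
        SystemContract { address: address_from_low_byte(0x07), bytecode: vec![/* ecmul */] },
    ]
}

fn main() {
    for contract in genesis_system_contracts() {
        println!("0x{:02x}: {} bytes", contract.address[19], contract.bytecode.len());
    }
}
```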
* feat: zk fmt sqlx-queries (#533)

## What ❔
Added an option to format queries using `zk fmt sqlx-queries`. It uses https://github.com/sql-formatter-org/sql-formatter as the formatter.

## Checklist
- [X] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [X] Code has been formatted via `zk fmt` and `zk lint`.

* feat: proto serialization/deserialization of snapshots creator objects (#667)

## What ❔
Changes the serialization format from JSON to protobuf for the snapshots creator. **I still need to update the integration tests.**

* perf: remove unnecessary to_vec (#702)

## What ❔
Remove an unnecessary method call.

## Why ❔
It might improve performance, but probably not; better not to have it anyway.

* chore(docs): add YouTube to official links (#705)

## What ❔
Added YouTube to official links.

* fix: override file only needed in format_sql.ts (#707)

Makes the `format_sql.ts` script overwrite a file only if there was some formatting to do. This change is needed to prevent the Rust build system from dropping build caches when files did not actually change.

* feat(api): Do not return receipt if tx was not included to the batch (#706)

## What ❔
For compatibility with Ethereum, we have to return a `Receipt` only if the tx is included in a miniblock, i.e., its block hash is not null (see the sketch below).

## Why ❔
For compatibility with Ethereum and to avoid breaking external clients, such as MetaMask, which do not expect that the block hash could be null.

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

---------

Signed-off-by: Danil
Co-authored-by: perekopskiy
Co-authored-by: Roman Brodetski
Co-authored-by: perekopskiy <53865202+perekopskiy@users.noreply.github.com>
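A sketch of the behavior introduced by #706, with illustrative types rather than the actual API server ones: a receipt is returned only once the transaction is included in a miniblock, i.e., once its block hash is known.

```rust
struct StoredReceipt {
    block_hash: Option<[u8; 32]>,
    status: u64,
}

struct RpcReceipt {
    block_hash: [u8; 32],
    status: u64,
}

/// Mimics `eth_getTransactionReceipt`: Ethereum clients (e.g., MetaMask)
/// expect either a full receipt or `null`, never a receipt whose block
/// hash is null.
fn get_transaction_receipt(stored: Option<StoredReceipt>) -> Option<RpcReceipt> {
    let stored = stored?;
    // Not yet included in a miniblock => behave as if there is no receipt.
    let block_hash = stored.block_hash?;
    Some(RpcReceipt { block_hash, status: stored.status })
}

fn main() {
    // Executed but not yet included: the RPC must answer `null`.
    let pending = StoredReceipt { block_hash: None, status: 1 };
    assert!(get_transaction_receipt(Some(pending)).is_none());

    let included = StoredReceipt { block_hash: Some([0xab; 32]), status: 1 };
    assert!(get_transaction_receipt(Some(included)).is_some());
}
```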
* chore(main): release core 18.9.0 (#700)

:robot: I have created a release *beep* *boop*
---
## [18.9.0](https://github.com/matter-labs/zksync-era/compare/core-v18.8.0...core-v18.9.0) (2023-12-19)

### Features

* Add ecadd and ecmul to the list of precompiles upon genesis ([#669](https://github.com/matter-labs/zksync-era/issues/669)) ([0be35b8](https://github.com/matter-labs/zksync-era/commit/0be35b82fc63e88b6d709b644e437194f7559483))
* **api:** Do not return receipt if tx was not included to the batch ([#706](https://github.com/matter-labs/zksync-era/issues/706)) ([625d632](https://github.com/matter-labs/zksync-era/commit/625d632934ac63ad7479de50d65f83e6f144c7dd))
* proto serialization/deserialization of snapshots creator objects ([#667](https://github.com/matter-labs/zksync-era/issues/667)) ([9f096a4](https://github.com/matter-labs/zksync-era/commit/9f096a4dd362fbd74a35fa1e9af4f111f69f4317))
* zk fmt sqlx-queries ([#533](https://github.com/matter-labs/zksync-era/issues/533)) ([6982343](https://github.com/matter-labs/zksync-era/commit/69823439675411b3239ef0a24c6bfe4d3610161b))

### Bug Fixes

* **en:** Downgrade miniblock hash equality assertion to warning ([#695](https://github.com/matter-labs/zksync-era/issues/695)) ([2ef3ec8](https://github.com/matter-labs/zksync-era/commit/2ef3ec804573ba4bbf8f44f19a3b5616b297c796))

### Performance Improvements

* remove unnecessary to_vec ([#702](https://github.com/matter-labs/zksync-era/issues/702)) ([c55a658](https://github.com/matter-labs/zksync-era/commit/c55a6582eae3af7f92cdeceb4e50b81701665f96))
---
This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).

* feat: Remove data fetchers (#694)

## What ❔
Removes the data fetchers from the codebase. The tables they've been writing to are left intact, since we need to support the RPC methods that interact with them for a while; this way, these methods will be accessing the latest available data there. Additionally, as discussed before, `get_well_known_token_addresses` no longer checks for `well_known = true`.

## Why ❔
This component is deprecated.

## Checklist
- [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* chore(ci): use canonical prysm images in local testnet (#687)

## What ❔
The canonical images are multiarch after prysmaticlabs/prysm#13324, i.e., they now include Apple Silicon builds.

## Why ❔

## Checklist
- [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.
* chore: merge contracts and system-contracts repos (#672)

Updated paths and commands to use the new merged `era-contracts` submodule: https://github.com/matter-labs/era-contracts/pull/98

* chore(tests): restore revert test (#714)

## What ❔
Restores the revert test.

## Why ❔
Restores the revert test.

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* feat(api): remove jsonrpc backend (#693)

## What ❔
The new release of `jsonrpsee` removes all known blockers for removing the `jsonrpc` backend.

## Checklist
- [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* fix: remove leftovers after #693 (#720)

## What ❔
Removes leftover files.

## Checklist
- [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* feat: Remove zks_getConfirmedTokens method (#719)

## What ❔
Removes the `zks_getConfirmedTokens` method.

## Why ❔
This method was internal only and is no longer needed. The change isn't considered to be breaking, as this method has no measured usage on mainnet.

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* fix(prover): update rescue_poseidon version (#722)

* chore: fix doc (#716)

## What ❔
fix doc

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* docs: corrected spelling in zk_intuition.md file (#710)

Removed an unwanted letter.

## What ❔
Removed a letter in the sentence.

## Why ❔
Because the sentence was incorrect.

## Checklist
- [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* fix(prover): Add logging for prover + WVGs (#723)

## What ❔
Add better logging for the prover and WVGs.

## Why ❔
During incidents, the core team noticed that logging was missing. Logging was added as a hotfix via a different patch on the deployed version; these changes are now being backported (and improved) on the main branch.

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.
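Illustrative only: the kind of structured logging added around prover / witness vector generator jobs in #723. The job type, field names, and processing function are made up; the sketch assumes the `tracing` and `tracing-subscriber` crates:

```rust
use tracing::{info, warn};

struct ProverJob {
    id: u64,
    circuit_id: u8,
}

fn process_job(job: &ProverJob) -> Result<(), String> {
    info!(job_id = job.id, circuit_id = job.circuit_id, "starting prover job");
    // ... actual proving would happen here ...
    Ok(())
}

fn main() {
    tracing_subscriber::fmt::init();
    let job = ProverJob { id: 42, circuit_id: 3 };
    if let Err(err) = process_job(&job) {
        // During incidents, a message like this makes it possible to tell
        // which job failed and why, without attaching a debugger.
        warn!(job_id = job.id, %err, "prover job failed");
    }
}
```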
* fix(prover): update rescue_poseidon version (#726)

* feat(en): Make reorg detector work with pruned data (#712)

## What ❔
Modifies the reorg detector so that it works with pruned node data during snapshot recovery.

## Why ❔
Part of preparing the EN code to support snapshot recovery.

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* chore(main): release prover 10.0.1 (#724)

:robot: I have created a release *beep* *boop*
---
## [10.0.1](https://github.com/matter-labs/zksync-era/compare/prover-v10.0.0...prover-v10.0.1) (2023-12-21)

### Bug Fixes

* **prover:** Add logging for prover + WVGs ([#723](https://github.com/matter-labs/zksync-era/issues/723)) ([d7ce14c](https://github.com/matter-labs/zksync-era/commit/d7ce14c5d0434326a1ebf406d77c20676ae526ae))
* **prover:** update rescue_poseidon version ([#726](https://github.com/matter-labs/zksync-era/issues/726)) ([3db25cb](https://github.com/matter-labs/zksync-era/commit/3db25cbea80180a1238531944d7e45a5408003dc))
---
This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).

* ci: add CUDA_ARCH arg to build prover template (#730)

* ci: revert adding CUDA_ARCH arg to build prover template (#735)

Reverts matter-labs/zksync-era#730

* chore(docs): Update list of repositories (#728)

## What ❔
* Updated the list of repositories in repositories.md

## Why ❔
* Since the last update, we moved everything to open source and introduced boojum (with its own set of repositories).

---------

Co-authored-by: Shahar Kaminsky

* fix(prover): Reduce amount of prover connections per prover subcomponent (#734)

## What ❔
Adds a configurable limit on the number of DB connections per prover subcomponent.

## Why ❔
During a surge of TPS, the prover database was pummeled with connections. This change makes sure the database is protected from rogue connections. If the configured limit turns out to be wrong, the worst that happens is that prover subcomponents get throttled.

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* ci: add cuda arch param for prover-fri-gpu (#736)

## What ❔
Subj

## Why ❔
Subj

## Checklist
- [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.
* chore: vise version bump (#727)

* feat: applied status snapshots dal (#679)

Adds an applied-snapshots DAL that will be used to track the status of the snapshot applied by an EN node (a hedged sketch follows below).

* chore(main): release prover 10.0.2 (#737)

:robot: I have created a release *beep* *boop*
---
## [10.0.2](https://github.com/matter-labs/zksync-era/compare/prover-v10.0.1...prover-v10.0.2) (2023-12-21)

### Bug Fixes

* **prover:** Reduce amount of prover connections per prover subcomponent ([#734](https://github.com/matter-labs/zksync-era/issues/734)) ([d38aa85](https://github.com/matter-labs/zksync-era/commit/d38aa8590c60a04278599a470debeb00e37c2395))
---
This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).

* refactor: Remove RocksDB backup leftovers (#721)

## What ❔
Removes leftovers of the RocksDB backup functionality.

## Why ❔
Support for backups was removed long ago, and we currently don't have plans to restore it.

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* fix: close Docker PostgreSQL to external connections (#741)

PostgreSQL should only be accessible from the localhost interface; otherwise, malicious actors can change or delete the database. Bringing the `zk init` settings in line with the `zk stack init` settings: https://github.com/matter-labs/zksync-era/blob/main/docker-compose-zkstack-common.yml#L28

* chore: fix link (#740)

## What ❔
fixed link

## Why ❔

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* chore: Update how_l2_messaging_works.md (#739)

## What ❔
fix typo

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* chore: Retire the lightweight tree component (#732)

## What ❔
Removes the component-based option to enable the lightweight tree. Also, it drops the support for the `_new` suffix for the tree component.

## Why ❔
The tree can be configured via configuration options in a more elegant way.

## Checklist
- [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* feat(api): Add metrics for `jsonrpsee` subscriptions (#733)

## What ❔
A small follow-up PR for #693 that introduces some metrics allowing us to track the health of the new approach. Also, it does minor refactoring for the API server builder so it's more typesafe.

## Why ❔
Observability and maintainability.

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.
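A hedged sketch of what an "applied snapshots" DAL method could look like; the actual `zksync_dal` implementation, table name, and schema differ. Assumes `sqlx` with the Postgres feature and a hypothetical `snapshots_applier_status` table:

```rust
use sqlx::PgPool;

/// Records which snapshot an EN has applied and whether recovery finished.
pub async fn set_applied_snapshot_status(
    pool: &PgPool,
    l1_batch_number: i64,
    is_finished: bool,
) -> sqlx::Result<()> {
    // Upsert keeps exactly one row describing the snapshot the node applied.
    sqlx::query(
        "INSERT INTO snapshots_applier_status (id, l1_batch_number, is_finished, updated_at) \
         VALUES (1, $1, $2, now()) \
         ON CONFLICT (id) DO UPDATE \
         SET l1_batch_number = $1, is_finished = $2, updated_at = now()",
    )
    .bind(l1_batch_number)
    .bind(is_finished)
    .execute(pool)
    .await?;
    Ok(())
}
```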
* fix(prover): increase DB polling interval for witness vector generators (#697)

The prover DB is under heavy load. The query that witness vector generators run to get a new prover job is slow. In theory, we shouldn't be running any witness vector gens for circuits that don't have any jobs queued, but query insights suggest otherwise. This increases the polling interval, so that even if there are no jobs, we don't run the query too often.

* fix: directory path in 01_initialization.md (#657)

Co-authored-by: AnastasiiaVashchuk <72273339+AnastasiiaVashchuk@users.noreply.github.com>

* chore: update docs (#631)

## What ❔
update docs

## Why ❔

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

---------

Co-authored-by: Igor Aleksanov

* fix: silence cat errors on Show Logs step (#708)

## What ❔
Silencing errors like:
```
Run ci_run cat server.log
cat: server.log: No such file or directory
Error: Process completed with exit code 1.
```
which IMO are just additional needless noise that degrades the UX of development.

* fix: added waiting for prometheus to finish (#745)

## What ❔
Added waiting for the Prometheus task to finish (see the sketch below).

## Why ❔
We need to wait for Prometheus to push one last 'batch of metrics'.

* chore: fix dev discussions link (#750)

## What ❔
- Addresses a typo in a link

## Why ❔
- The link was incorrect

## Checklist
- [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* fix: all Docker configs no longer expose PostgreSQL externally (#749)

PostgreSQL should only be accessible from the localhost interface; otherwise, malicious actors can change or delete the database. Per discussion with Shahar, fixing all the remaining configs.

Related: https://github.com/matter-labs/zksync-era/pull/741
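A sketch of the idea behind #745, not the actual node code: before exiting, give the metrics push task a bounded amount of time to deliver the last batch of metrics instead of aborting it mid-flight. Assumes the `tokio` runtime; the task body is a stand-in:

```rust
use std::time::Duration;

use tokio::time::timeout;

async fn push_metrics_loop() {
    // In the real node this would push metrics to a Prometheus push
    // gateway until asked to stop; here we just simulate the final push.
    tokio::time::sleep(Duration::from_millis(100)).await;
}

#[tokio::main]
async fn main() {
    let pusher = tokio::spawn(push_metrics_loop());

    // ... node runs, then receives a stop signal ...

    // Wait for one final push, but never hang shutdown indefinitely.
    match timeout(Duration::from_secs(5), pusher).await {
        Ok(_) => println!("final metrics batch pushed"),
        Err(_) => eprintln!("metrics pusher did not finish in time; exiting anyway"),
    }
}
```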
* fix(EN): temporary produce a warning on pubdata mismatch with L1 (#758)

* chore(main): release core 18.10.0 (#713)

:robot: I have created a release *beep* *boop*
---
## [18.10.0](https://github.com/matter-labs/zksync-era/compare/core-v18.9.0...core-v18.10.0) (2023-12-25)

### Features

* **api:** Add metrics for `jsonrpsee` subscriptions ([#733](https://github.com/matter-labs/zksync-era/issues/733)) ([39fd71c](https://github.com/matter-labs/zksync-era/commit/39fd71cc2a0ffda45933fc99c4dac6d9beb92ad0))
* **api:** remove jsonrpc backend ([#693](https://github.com/matter-labs/zksync-era/issues/693)) ([b3f0417](https://github.com/matter-labs/zksync-era/commit/b3f0417fd4512f98d7e579eb5b3b03c7f4b92e18))
* applied status snapshots dal ([#679](https://github.com/matter-labs/zksync-era/issues/679)) ([2e9f23b](https://github.com/matter-labs/zksync-era/commit/2e9f23b46c31a9538d4a55bed75c5df3ed8e8f63))
* **en:** Make reorg detector work with pruned data ([#712](https://github.com/matter-labs/zksync-era/issues/712)) ([c4185d5](https://github.com/matter-labs/zksync-era/commit/c4185d5b6526cc9ec42e6941d76453cb693988bd))
* Remove data fetchers ([#694](https://github.com/matter-labs/zksync-era/issues/694)) ([f48d677](https://github.com/matter-labs/zksync-era/commit/f48d6773e1e30fede44075f8862c68e7a8173cbb))
* Remove zks_getConfirmedTokens method ([#719](https://github.com/matter-labs/zksync-era/issues/719)) ([9298b1b](https://github.com/matter-labs/zksync-era/commit/9298b1b916ad5f81160c66c061370f804d129d97))

### Bug Fixes

* added waiting for prometheus to finish ([#745](https://github.com/matter-labs/zksync-era/issues/745)) ([eed330d](https://github.com/matter-labs/zksync-era/commit/eed330dd2e47114d9d0ea29c074259a0bc016f78))
* **EN:** temporary produce a warning on pubdata mismatch with L1 ([#758](https://github.com/matter-labs/zksync-era/issues/758)) ([0a7a4da](https://github.com/matter-labs/zksync-era/commit/0a7a4da52926d1db8dfe72aef78390cba3754627))
* **prover:** Add logging for prover + WVGs ([#723](https://github.com/matter-labs/zksync-era/issues/723)) ([d7ce14c](https://github.com/matter-labs/zksync-era/commit/d7ce14c5d0434326a1ebf406d77c20676ae526ae))
* remove leftovers after [#693](https://github.com/matter-labs/zksync-era/issues/693) ([#720](https://github.com/matter-labs/zksync-era/issues/720)) ([e93aa35](https://github.com/matter-labs/zksync-era/commit/e93aa358c43e60d5640224e5422a40d91cd4b9a0))
---
This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).

* fix(sequencer): don't stall blockchain on failed L1 tx (#759)

* chore(main): release core 18.10.1 (#760)

:robot: I have created a release *beep* *boop*
---
## [18.10.1](https://github.com/matter-labs/zksync-era/compare/core-v18.10.0...core-v18.10.1) (2023-12-25)

### Bug Fixes

* **sequencer:** don't stall blockchain on failed L1 tx ([#759](https://github.com/matter-labs/zksync-era/issues/759)) ([50cd7c4](https://github.com/matter-labs/zksync-era/commit/50cd7c41f71757a3f2ffb36a6c1e1fa6b4372703))
---
This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).

* fix(vm): Get pubdata bytes from vm (#756)

## What ❔

## Why ❔

## Checklist
- [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

---------

Signed-off-by: Danil
* chore(main): release core 18.10.2 (#761)

:robot: I have created a release *beep* *boop*
---
## [18.10.2](https://github.com/matter-labs/zksync-era/compare/core-v18.10.1...core-v18.10.2) (2023-12-25)

### Bug Fixes

* **vm:** Get pubdata bytes from vm ([#756](https://github.com/matter-labs/zksync-era/issues/756)) ([6c6f1ab](https://github.com/matter-labs/zksync-era/commit/6c6f1ab078485669002e50197b35ab1b6a38cdb9))
---
This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).

* fix(core): do not unwrap unexisting calldata in commitment and regenerate it (#762)

## What ❔
Solve panics on nonexistent calldata (see the sketch below).

## Checklist
- [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* chore(main): release core 18.10.3 (#763)

:robot: I have created a release *beep* *boop*
---
## [18.10.3](https://github.com/matter-labs/zksync-era/compare/core-v18.10.2...core-v18.10.3) (2023-12-25)

### Bug Fixes

* **core:** do not unwrap unexisting calldata in commitment and regenerate it ([#762](https://github.com/matter-labs/zksync-era/issues/762)) ([ec104ef](https://github.com/matter-labs/zksync-era/commit/ec104ef01136d1a455f40163c2ced92dbc5917e2))
---
This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).

* feat: Revert "feat: Remove zks_getConfirmedTokens method" (#765)

Reverts matter-labs/zksync-era#719

* chore(main): release core 18.11.0 (#766)

:robot: I have created a release *beep* *boop*
---
## [18.11.0](https://github.com/matter-labs/zksync-era/compare/core-v18.10.3...core-v18.11.0) (2023-12-25)

### Features

* Revert "feat: Remove zks_getConfirmedTokens method" ([#765](https://github.com/matter-labs/zksync-era/issues/765)) ([6e7ed12](https://github.com/matter-labs/zksync-era/commit/6e7ed124e816f5ba1d2ba3e8efaf281cd2c055dd))
---
This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).

* feat(get-tokens): filter tokens by `well_known` (#767)

Co-authored-by: Fedor Sakharov

* chore(main): release core 18.12.0 (#768)

:robot: I have created a release *beep* *boop*
---
## [18.12.0](https://github.com/matter-labs/zksync-era/compare/core-v18.11.0...core-v18.12.0) (2023-12-25)

### Features

* **get-tokens:** filter tokens by `well_known` ([#767](https://github.com/matter-labs/zksync-era/issues/767)) ([9c99e13](https://github.com/matter-labs/zksync-era/commit/9c99e13ca0a4de678a4ce5bf7e2d5880d79c0e66))
---
This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).
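A hypothetical illustration of the fix pattern in #762: instead of unwrapping calldata that may be missing for older batches, regenerate it on the fly. `BatchMetadata`, `regenerate_calldata`, and `commitment_calldata` are made-up names, not the actual zksync-era API:

```rust
struct BatchMetadata {
    calldata: Option<Vec<u8>>,
}

/// Stand-in for rebuilding commitment calldata from batch data.
fn regenerate_calldata(batch_number: u64) -> Vec<u8> {
    batch_number.to_be_bytes().to_vec()
}

fn commitment_calldata(batch_number: u64, meta: &BatchMetadata) -> Vec<u8> {
    // Before the fix, this was effectively `meta.calldata.clone().unwrap()`,
    // which panicked whenever the calldata was absent.
    match &meta.calldata {
        Some(calldata) => calldata.clone(),
        None => regenerate_calldata(batch_number),
    }
}

fn main() {
    let missing = BatchMetadata { calldata: None };
    // No panic: the calldata is regenerated instead.
    assert!(!commitment_calldata(7, &missing).is_empty());
}
```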
* chore(docs): corrected spelling in `status.ts` file (#753)

## What ❔
corrected spelling in the `status.ts` file

## Why ❔
Spelling was incorrect

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* chore: fix link (#752)

## What ❔
fix link

## Why ❔
Inaccessible

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* docs: Update 01_initialization.md (#769)

Remove the comma after "doc": "The goal of this doc is to show you some more details on how zkSync works internally."

## What ❔

## Why ❔

## Checklist
- [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* fix: fix path (#751)

## What ❔
fixed path

## Why ❔

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* fix: fix several typos (#754)

## What ❔
Fixing "possiblity" to "possibility"

## Why ❔
The modifications are straightforward and aim to enhance the accuracy of the documentation.

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

Co-authored-by: Fedor Sakharov

* docs: Update 01_initialization.md (#770)

Add "at" after "look": "Now let's take a look at what's inside:"

## What ❔

## Why ❔

## Checklist
- [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* docs: fix typo (#746)

## What ❔
fix some minor typos in the readme.md and overview.md files

## Why ❔
I think it’s important to let users read the most accurate documentation

* chore: update contracts.md (#773)

## What ❔
fixed typo

## Why ❔

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.
* chore: fix error link (#744)

## What ❔
fix error link

## Why ❔
404

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* chore(docs): fix docs (#778)

## What ❔
fix docs

## Why ❔
404 image

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

---------

Co-authored-by: Igor Aleksanov

* fix: fix error link (#777)

## What ❔
fix link

## Why ❔

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* chore: fix row (#776)

## What ❔
fix row

## Why ❔
Error row

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

---------

Co-authored-by: Fedor Sakharov

* chore: update how_l2_messaging_works.md (#779)

## What ❔

## Why ❔

## Checklist
- [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* chore(docs): fix typo (#781)

## What ❔
fixed typo

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* chore: spelling and grammar fixes (#782)

## What ❔

## Why ❔

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

---------

Co-authored-by: Igor Aleksanov

* chore: update bootloader.md (#775)

## What ❔
fixed typo

## Why ❔

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.
* fix: fix docs (#783)

## What ❔
fix docs

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* chore(docs): several typos in the documentation (#774)

## What ❔
I fixed some typos in the documentation.

## Why ❔

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* feat: Remove generic bounds on L1GasPriceProvider (#792)

## What ❔
- Replaces uses of `G: L1GasPriceProvider` with `dyn L1GasPriceProvider` (see the sketch below).
- ...as a result, `StateKeeper` and `TxSender` (and thus all the API structures) are now generic-free.

## Why ❔
- Prerequisite for the modular system (generic types are hard for a dynamically-configured setup).
- Invoking `L1GasPriceProvider` is not related to any hot loops where dynamic dispatch would be noticeable, so having it as a generic is somewhat meaningless anyway.

## Checklist
- [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* ci: Optimize contracts build (#715)

## What ❔
Moves the contracts build to a separate job and reuses the artifacts in further build jobs.

## Why ❔
To decrease CI build times.

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* ci: fix job dependecies (#796)

## What ❔
Fix CI workflows for building core components.

## Why ❔
CI for core components was broken.

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* ci: fix docker.ts for build prover images

* fix(hyperchain wizard): Do Not Return Upon EthSender Lag (#794)

## What ❔
This PR changes the wizard to not return from `zk status prover` if there is a lag between blocks committed by the eth sender and blocks sealed by the state keeper.

## Why ❔
Before this PR, if there was a lag, users couldn't see prover stats. Without this `return`, they now can.

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.
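A simplified sketch of the refactoring direction in #792; the trait, method, and struct bodies are reduced stand-ins, not the actual zksync-era signatures. Replacing a generic bound with a trait object removes the type parameter from every structure that transitively held a gas price provider:

```rust
trait L1GasPriceProvider {
    fn estimate_effective_gas_price(&self) -> u64;
}

struct GasAdjuster;

impl L1GasPriceProvider for GasAdjuster {
    fn estimate_effective_gas_price(&self) -> u64 {
        30_000_000_000 // 30 gwei, illustrative
    }
}

// Before: `struct TxSender<G: L1GasPriceProvider> { provider: G, ... }`,
// which forced `TxSender`, the API server, etc. to carry `G` everywhere.
// After: generic-free, at the cost of one virtual call outside any hot loop.
struct TxSender {
    provider: Box<dyn L1GasPriceProvider>,
}

fn main() {
    let sender = TxSender { provider: Box::new(GasAdjuster) };
    println!("gas price: {}", sender.provider.estimate_effective_gas_price());
}
```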
* feat: Remove TPS limiter from TX Sender (#793)

## What ❔
Removes the TPS limiter from the TX Sender component.

## Why ❔
- It didn't work properly: the limit is applied per API server, and API servers are horizontally scalable.
- It doesn't seem to be configured in any of our envs.
- It doesn't really make sense in the context of ENs.
- We have other mempool protection measures anyway (tx dry-run, nonce limits, etc.).

## Checklist
- [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* feat(contract-verifier): add zksolc v1.3.19 (#797)

## What ❔
- added zksolc v1.3.19 for the contract verifier

## Why ❔
- zksolc v1.3.19 is now the latest

* chore(main): release core 18.13.0 (#795)

:robot: I have created a release *beep* *boop*
---
## [18.13.0](https://github.com/matter-labs/zksync-era/compare/core-v18.12.0...core-v18.13.0) (2024-01-02)

### Features

* **contract-verifier:** add zksolc v1.3.19 ([#797](https://github.com/matter-labs/zksync-era/issues/797)) ([2635570](https://github.com/matter-labs/zksync-era/commit/26355705c8c084344464458f3275c311c392c47f))
* Remove generic bounds on L1GasPriceProvider ([#792](https://github.com/matter-labs/zksync-era/issues/792)) ([edf071d](https://github.com/matter-labs/zksync-era/commit/edf071d39d4dd8e297fd2fb2244574d5e0537b38))
* Remove TPS limiter from TX Sender ([#793](https://github.com/matter-labs/zksync-era/issues/793)) ([d0e9296](https://github.com/matter-labs/zksync-era/commit/d0e929652eb431f6b1bc20f83d7c21d2a978293a))
---
This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).

* feat(prover): Remove circuit-synthesizer (#801)

This change removes the circuit synthesizer, which belongs to the old prover subsystems. All old prover subsystems are to be removed, as Era uses boojum now.

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* feat(vm)!: Release v19 - remove allowlist (#747)

## What ❔
This upgrade mainly contains contract changes (more description of those [here](https://github.com/matter-labs/era-contracts/pull/133)), while the changes that impact the `zksync-era` repo mainly concern how testing / contract preprocessing is done: the system contracts are now preprocessed into the `contracts-preprocessed` folder, and all the artifacts should be taken from there.

## Why ❔

## Checklist
- [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* feat(merkle tree): Finalize metadata calculator snapshot recovery logic (#798)

## What ❔
- Integrates reading snapshot metadata in the metadata calculator.
- Tests the full recovery lifecycle for the metadata calculator.
## Why ❔
Right now, the metadata calculator never enters recovery mode (since there's no snapshot metadata stored in Postgres yet).

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* feat(eth_sender): Remove generic bounds on L1TxParamsProvider in EthSender (#799)

## What ❔
Follow-up to https://github.com/matter-labs/zksync-era/pull/792

## Why ❔
Same reasons.

## Checklist
- [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* refactor(metadata_calculator): Make config owned (#808)

## What ❔
Makes the metadata calculator config owned, and makes it use `Box` instead of `ObjectStoreFactory`.

## Why ❔
- Prerequisite for the ZK Stack modular thingy.
- Makes it easy to store the config object in a dynamic context.
- Makes it more abstract w.r.t. resources required for instantiation.

## Checklist
- [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* refactor(state_keeper): Abstract ConditionalSealer (#803)

## What ❔
Defines a new trait, `ConditionalSealer`, and provides two implementations: `SequencerSealer` and `NoopSealer` (a simplified sketch follows below).

## Why ❔
- Prerequisite for the ZK Stack configuration system.
- Leaves a single constructor for the state keeper.
- Removes `StateKeeperConfig` use from `TxSender`.
- Potentially makes it possible to create a `ConditionalSealer` that contains some kind of sealing logic but does not rely on `StateKeeperConfig`.

## Checklist
- [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.
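A sketch following the shape described in #803, heavily simplified: a single `ConditionalSealer` trait with a real implementation for the sequencer and a no-op implementation. The criterion logic and `BatchData` type here are toy stand-ins, not the actual state keeper code:

```rust
#[derive(Debug)]
enum SealResolution {
    NoSeal,
    IncludeAndSeal,
}

/// Simplified stand-in for the data a sealer inspects.
struct BatchData {
    tx_count: usize,
}

trait ConditionalSealer: Send + Sync {
    fn should_seal(&self, data: &BatchData) -> SealResolution;
}

/// Applies real sealing criteria (here: a toy tx-count limit).
struct SequencerSealer {
    max_txs: usize,
}

impl ConditionalSealer for SequencerSealer {
    fn should_seal(&self, data: &BatchData) -> SealResolution {
        if data.tx_count >= self.max_txs {
            SealResolution::IncludeAndSeal
        } else {
            SealResolution::NoSeal
        }
    }
}

/// Never seals; useful where sealing decisions are made elsewhere.
struct NoopSealer;

impl ConditionalSealer for NoopSealer {
    fn should_seal(&self, _data: &BatchData) -> SealResolution {
        SealResolution::NoSeal
    }
}

fn main() {
    // The state keeper only sees `dyn ConditionalSealer`, so swapping
    // implementations requires no changes to its constructor.
    let sealers: Vec<Box<dyn ConditionalSealer>> =
        vec![Box::new(SequencerSealer { max_txs: 2 }), Box::new(NoopSealer)];
    let data = BatchData { tx_count: 3 };
    for sealer in &sealers {
        println!("{:?}", sealer.should_seal(&data));
    }
}
```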
* feat(prover): Remove old prover (#810)

This is one of many commits; the most notable upcoming ones:
- remove database tables via migrations
- move prover-related data from core to prover (i.e., prover_utils)

This comes as part of the migration to boojum. The old prover is decommissioned; this PR removes what's left of the main components of the old prover subsystems. It includes:
- remove old prover
- remove old prover protocol version
- untangle old prover and fri prover common dependencies
- and many more

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* refactor(eth-sender): Make `EthInterface` object-safe (#807)

## What ❔
- Makes the `EthInterface` trait object-safe.
- Extends its mock implementation (`MockEthereum`) so that it returns real values for the `get_tx()` method.

## Why ❔
Required to use `EthInterface` in the consistency checker.

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* fix(prover): Remove old prover subsystems tables (#812)

These tables are unused and should be removed. Note that reverting back is impossible for a production system, but may still be desired for troubleshooting reasons. The changes are part of the migration to boojum from the old system.

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* feat(vm): Add boojum integration folder (#805)

## What ❔
In preparation for 1.4.1, we first copy the boojum integration folder (without adding it to the workspace) so that the next PR, where the new VM is actually integrated, is easier to review.

## Why ❔

## Checklist
- [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* feat(snapshot creator): Make snapshot creator fault-tolerant (#691)

## What ❔
Makes the snapshot creator tolerant to failures. The snapshot creator now checks whether there's a partially created snapshot on start. If there is, the creator continues producing this snapshot instead of starting from scratch.

## Why ❔
Since the snapshot creator can run for an extended amount of time, not having fault tolerance is unmaintainable in the long run.

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* feat: adds spellchecker to zk tool (#748)

## What ❔
- Adds `zk spellcheck` to the zk tool for convenient spell checks

## Why ❔
- Adding the tool before including its usage in the CI workflow to ensure it's available in the published Docker container for #681

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

---------

Co-authored-by: Fedor Sakharov

* refactor: Remove generics in eth_watch and eth_sender (#815)

## What ❔
Replaces generic parameters in `eth_watch` and `eth_sender` with `Arc` or `Box`.

## Why ❔
- Prerequisite for the ZK Stack framework.
## Checklist
- [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* feat(vm): Separate boojum integration vm (#806)

## What ❔
Changes needed to make the boojum integration VM work as a separate VM version from "latest".

## Why ❔

## Checklist
- [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* refactor(metadata_calculator): Move ObjectStore out from MetadataCalculatorConfig (#816)

## What ❔
See title.

## Why ❔
- There was some clunky back-and-forth for `MetadataCalculatorModeConfig` <-> `MerkleTreeMode` because of the old component configuration.
- The ObjectStore may not be available at the time when `MetadataCalculatorModeConfig` is created (prerequisite for the ZK Stack thing).
- Less code!

## Checklist
- [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* fix(db): Fix parsing statement timeout from env (#818)

## What ❔
Fixes parsing the statement timeout from the environment for the main node (see the sketch below).

## Why ❔
Without a statement timeout, we may have long-running DB queries for API servers.

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* refactor(config): Remove ChainConfig structure (#821)

## What ❔
It wasn't used anywhere.

## Why ❔
Dead code bad.

## Checklist
- [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* feat(vm): Make utils version-dependent (#809)

## What ❔
In the new 1.4.1 version, the fee formulas as well as the max txs per batch will change. We'll prepare the main branch for it first to reduce the diff.

## Why ❔

## Checklist
- [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* feat(en): Make consistency checker work with pruned data (#742)

## What ❔
Modifies the consistency checker so that it works with pruned node data during snapshot recovery. Adds tests for the checker.

## Why ❔
Part of preparing the EN code to support snapshot recovery.

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.
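A sketch of the kind of parsing fixed in #818. The environment variable name and the fail-loud policy below are illustrative assumptions, not necessarily what the main node does:

```rust
use std::env;
use std::time::Duration;

/// Reads a statement timeout (in whole seconds) from the environment.
/// Returns `None` if the variable is unset.
fn statement_timeout_from_env(var: &str) -> Option<Duration> {
    let raw = env::var(var).ok()?;
    // Parse errors must not be silently swallowed: a typo here would leave
    // API servers without a statement timeout and allow long-running queries.
    let secs: u64 = raw
        .parse()
        .unwrap_or_else(|_| panic!("{var} must be an integer number of seconds, got {raw:?}"));
    Some(Duration::from_secs(secs))
}

fn main() {
    // E.g., with `DATABASE_STATEMENT_TIMEOUT_SEC=300` in the environment,
    // this yields a 300-second timeout; otherwise, fall back to a default.
    let timeout = statement_timeout_from_env("DATABASE_STATEMENT_TIMEOUT_SEC")
        .unwrap_or(Duration::from_secs(60));
    println!("statement timeout: {timeout:?}");
}
```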
* fix(prover): Remove prover-utils from core (#819)

Last batch of removing the old prover from core. Further improvements to the core <-> prover integration will follow; these will be tackled as part of the prover <-> core separation.

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.

* refactor(object_store): Wrap ObjectStore in Arc instead of Box (#820)

## What ❔
Changes `Box` to `Arc`.

## Why ❔
`ObjectStore` is meant to be a point of customization in ZK Stack, but with the current approach the only way to get it is via `ObjectStoreFactory`, and it yields objects packed in `Box`, i.e., each instance is unique. In order to make it customizable, `ObjectStore` should be universally shareable (to allow it to be created once and then copied), and this is the first step toward that. Later, the `ObjectStoreFactory` type will be removed completely to grant the initialization code full control over the initial type.

## Checklist
- [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.
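A simplified illustration of the `Box` -> `Arc` switch in #820, with a toy trait and components rather than the real ones: an `Arc<dyn ObjectStore>` can be created once and cheaply shared by every component, which a `Box<dyn ObjectStore>` cannot.

```rust
use std::sync::Arc;

trait ObjectStore: Send + Sync {
    fn put(&self, key: &str, value: Vec<u8>);
}

struct MockObjectStore;

impl ObjectStore for MockObjectStore {
    fn put(&self, key: &str, value: Vec<u8>) {
        println!("stored {} bytes under {key}", value.len());
    }
}

struct MetadataCalculator {
    store: Arc<dyn ObjectStore>,
}

struct SnapshotCreator {
    store: Arc<dyn ObjectStore>,
}

fn main() {
    let store: Arc<dyn ObjectStore> = Arc::new(MockObjectStore);
    // One instance, many owners: cloning the Arc copies a pointer, not the store.
    let _calculator = MetadataCalculator { store: Arc::clone(&store) };
    let _creator = SnapshotCreator { store: Arc::clone(&store) };
    store.put("witness_inputs/batch_1", vec![0u8; 16]);
}
```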
* chore(main): release core 19.0.0 (#802) :robot: I have created a release *beep* *boop* --- ## [19.0.0](https://github.com/matter-labs/zksync-era/compare/core-v18.13.0...core-v19.0.0) (2024-01-05) ### ⚠ BREAKING CHANGES * **vm:** Release v19 - remove allowlist ([#747](https://github.com/matter-labs/zksync-era/issues/747)) ### Features * **en:** Make consistency checker work with pruned data ([#742](https://github.com/matter-labs/zksync-era/issues/742)) ([ae6e18e](https://github.com/matter-labs/zksync-era/commit/ae6e18e5412cadefbc03307a476d6b96c41f04e1)) * **eth_sender:** Remove generic bounds on L1TxParamsProvider in EthSender ([#799](https://github.com/matter-labs/zksync-era/issues/799)) ([29a4f52](https://github.com/matter-labs/zksync-era/commit/29a4f5299c95e0b338010a6baf83f196ece3a530)) * **merkle tree:** Finalize metadata calculator snapshot recovery logic ([#798](https://github.com/matter-labs/zksync-era/issues/798)) ([c83db35](https://github.com/matter-labs/zksync-era/commit/c83db35f0929a412bc4d89fbee1448d32c54a83f)) * **prover:** Remove circuit-synthesizer ([#801](https://github.com/matter-labs/zksync-era/issues/801)) ([1426b1b](https://github.com/matter-labs/zksync-era/commit/1426b1ba3c8b700e0531087b781ced0756c12e3c)) * **prover:** Remove old prover ([#810](https://github.com/matter-labs/zksync-era/issues/810)) ([8be1925](https://github.com/matter-labs/zksync-era/commit/8be1925b18dcbf268eb03b8ea5f07adfd5330876)) * **snapshot creator:** Make snapshot creator fault-tolerant ([#691](https://github.com/matter-labs/zksync-era/issues/691)) ([286c7d1](https://github.com/matter-labs/zksync-era/commit/286c7d15a623604e01effa7119de3362f0fb4eb9)) * **vm:** Add boojum integration folder ([#805](https://github.com/matter-labs/zksync-era/issues/805)) ([4071e90](https://github.com/matter-labs/zksync-era/commit/4071e90578e0fc8c027a4d2a30d09d96db942b4f)) * **vm:** Make utils version-dependent ([#809](https://github.com/matter-labs/zksync-era/issues/809)) ([e5fbcb5](https://github.com/matter-labs/zksync-era/commit/e5fbcb5dfc2a7d2582f40a481c861fb2f4dd5fb0)) * **vm:** Release v19 - remove allowlist ([#747](https://github.com/matter-labs/zksync-era/issues/747)) ([0e2bc56](https://github.com/matter-labs/zksync-era/commit/0e2bc561b9642b854718adcc86087a3e9762cf5d)) * **vm:** Separate boojum integration vm ([#806](https://github.com/matter-labs/zksync-era/issues/806)) ([61712a6](https://github.com/matter-labs/zksync-era/commit/61712a636f69be70d75719c04f364d679ef624e0)) ### Bug Fixes * **db:** Fix parsing statement timeout from env ([#818](https://github.com/matter-labs/zksync-era/issues/818)) ([3f663ec](https://github.com/matter-labs/zksync-era/commit/3f663eca2f38f4373339ad024e6578099c693af6)) * **prover:** Remove old prover subsystems tables ([#812](https://github.com/matter-labs/zksync-era/issues/812)) ([9d0aefc](https://github.com/matter-labs/zksync-era/commit/9d0aefc1ef4992e19d7b15ec1ce34697e61a3464)) * **prover:** Remove prover-utils from core ([#819](https://github.com/matter-labs/zksync-era/issues/819)) ([2ceb911](https://github.com/matter-labs/zksync-era/commit/2ceb9114659f4c4583c87b1bbc8ee230eb1c44db)) --- This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please). 
Co-authored-by: EmilLuta * chore(eth-sender): Added documentation to commit data (#731) ## What ❔ * Added documentation to the calldata that we generate in the state keeper when calling commit and prove batches * Also removed unused 'dead_code' annotations that are no longer needed after we rolled out boojum ## Why ❔ * To increase the understanding of the code, and to make it easier to find during code search. --------- Co-authored-by: Roman Brodetski * feat(state-keeper): circuits seal criterion (#729) ## What ❔ - the number of circuits needed is estimated in the VM - replaces `ComputationGasCriterion` with the more precise `CircuitsCriterion` ## Why ❔ Introduce `CircuitsCriterion` for efficient batch sealing (a sketch of such a criterion appears below, after the spelling PRs) ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * feat: fix spelling in dev comments in `core/lib/multivm` - continued (#682) **Series of PRs: This is part of a series of PRs aimed at enhancing spelling accuracy in this repository. See:** - [ ] https://github.com/matter-labs/zksync-era/pull/681 - [ ] https://github.com/matter-labs/zksync-era/pull/683 - [ ] https://github.com/matter-labs/zksync-era/pull/684 - [ ] https://github.com/matter-labs/zksync-era/pull/685 Once merged, a final PR will enable `dev_comments: true` in the cargo-spellcheck config file. ## What ❔ - **Spelling Corrections in `core/lib/multivm`:** This PR focuses on rectifying spelling errors in the developer comments within the `core/lib/multivm` directory. - Updates dictionary ## Why ❔ - **Enhancing Code Quality:** The `core/lib/multivm` directory currently has several spelling mistakes in the developer comments. Correcting these errors will enhance the readability and overall quality of our code documentation. ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. --------- Co-authored-by: Fedor Sakharov * feat: fix spelling in dev comments in `core/lib/*` - continued (#684) **Series of PRs: This is part of a series of PRs aimed at enhancing spelling accuracy in this repository. See:** - [ ] https://github.com/matter-labs/zksync-era/pull/681 - [ ] https://github.com/matter-labs/zksync-era/pull/682 - [ ] https://github.com/matter-labs/zksync-era/pull/683 - [ ] https://github.com/matter-labs/zksync-era/pull/685 Once merged, a final PR will enable `dev_comments: true` in the cargo-spellcheck config file. ## What ❔ - **Spelling Corrections**: This PR focuses on rectifying spelling errors found in the developer comments within multiple core libraries: `core/lib/types/`, `core/lib/eth_client/`, `core/lib/eth_signer`, `core/lib/mempool`, `core/lib/merkle_tree`, `core/lib/state` - Updates dictionary ## Why ❔ - **Consistency and Readability:** The addressed directories contain numerous spelling inaccuracies.
Correcting these enhances the readability and consistency of our documentation, ensuring clear understanding for current and future developers. ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. --------- Co-authored-by: Fedor Sakharov * feat: fix spelling in dev comments in `core/lib/zksync_core` - continued (#685) **Series of PRs: This is part of a series of PRs aimed at enhancing spelling accuracy in this repository. See:** - [ ] https://github.com/matter-labs/zksync-era/pull/681 - [ ] https://github.com/matter-labs/zksync-era/pull/682 - [ ] https://github.com/matter-labs/zksync-era/pull/683 - [ ] https://github.com/matter-labs/zksync-era/pull/684 Once merged, a final PR will enable `dev_comments: true` in the cargo-spellcheck config file. ## What ❔ - **Spelling Corrections**: This PR focuses on rectifying spelling errors found in the developer comments within core libraries: `core/lib/zksync_core` - Updates dictionary ## Why ❔ - **Consistency and Readability:** The addressed directories contain numerous spelling inaccuracies. Correcting these enhances the readability and consistency of our documentation, ensuring clear understanding for current and future developers. ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * feat: fix spelling in dev comments in `core/lib/*` - continued (#683) **Series of PRs: This is part of a series of PRs aimed at enhancing spelling accuracy in this repository. See:** - [ ] https://github.com/matter-labs/zksync-era/pull/681 - [ ] https://github.com/matter-labs/zksync-era/pull/682 - [ ] https://github.com/matter-labs/zksync-era/pull/684 - [ ] https://github.com/matter-labs/zksync-era/pull/685 Once merged, a final PR will enable `dev_comments: true` in the cargo-spellcheck config file. ## What ❔ - **Spelling Corrections**: This PR focuses on rectifying spelling errors found in the developer comments within multiple core libraries: `core/lib/basic_types/`, `core/lib/config/src/configs/`, `core/lib/constants/`, `core/lib/contracts/`, and `core/lib/dal/`. - Updates dictionary ## Why ❔ - **Consistency and Readability:** The addressed directories contain numerous spelling inaccuracies. Correcting these enhances the readability and consistency of our documentation, ensuring clear understanding for current and future developers. ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.
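As referenced from the circuits seal criterion PR (#729) above, here is a minimal sketch of the idea behind `CircuitsCriterion`; the capacity constant, seal share, and resolution enum are assumptions for illustration, not the state keeper's real values:

```rust
// Illustrative capacity numbers; the real `CircuitsCriterion` lives in the
// state keeper and uses the VM's own circuit statistics.
const BATCH_CIRCUIT_CAPACITY: f32 = 48_000.0;
const SEAL_SHARE: f32 = 0.95; // seal a bit early to keep a safety margin

enum SealResolution {
    NoSeal,
    IncludeAndSeal,
    ExcludeAndSeal,
}

fn circuits_criterion(used_before_tx: f32, used_by_tx: f32) -> SealResolution {
    if used_by_tx > BATCH_CIRCUIT_CAPACITY {
        // A transaction that can never fit even into an empty batch is excluded.
        SealResolution::ExcludeAndSeal
    } else if used_before_tx + used_by_tx > BATCH_CIRCUIT_CAPACITY * SEAL_SHARE {
        SealResolution::IncludeAndSeal
    } else {
        SealResolution::NoSeal
    }
}

fn main() {
    match circuits_criterion(44_000.0, 3_000.0) {
        SealResolution::IncludeAndSeal => println!("include the tx and seal the batch"),
        SealResolution::ExcludeAndSeal => println!("exclude the tx and seal the batch"),
        SealResolution::NoSeal => println!("keep the batch open"),
    }
}
```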
--------- Co-authored-by: Fedor Sakharov * feat: address remaining spelling issues in dev comments and turns on dev_comments in cfg (#827) Related to: https://github.com/matter-labs/zksync-era/pull/681 ## What ❔ - Sets dev_comments to true - Addresses new / remaining spelling issues in dev comments ## Why ❔ - Dev comments should be checked for spelling issues for improved readability ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. --------- Co-authored-by: Fedor Sakharov * ci: add arm64 build for en (#811) ## What ❔ Refactored workflows with multi-platform support ## Why ❔ For multi-platform builds ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * ci: fix loop for merge docker manifest * ci: revert arm64 and merge manifest job * chore(dal): bumps sqlx dependency to 0.7.3 (#725) ## What ❔ Bumps `sqlx` dependency to `0.7.3`. ## Why ❔ ## Checklist - [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [ ] Code has been formatted via `zk fmt` and `zk lint`. - [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. --------- Co-authored-by: Alex Ostrovski * fix: fixes markdown table formatting in `docs/specs/zk_evm/vm_specification/compiler/overview.md#instruction-set` (#829) ## What ❔ - Fixes markdown table formatting ## Why ❔ - The formatting of the table was unreadable - The prettier ignore is used to prevent zk fmt from disrupting markdown **Before:** ![Screenshot 2024-01-07 at 6 09 40 PM](https://github.com/matter-labs/zksync-era/assets/29983536/8e6a3bae-6aff-4c57-a4b4-62e9fc6c4a6a) **After:** ![Screenshot 2024-01-07 at 6 10 16 PM](https://github.com/matter-labs/zksync-era/assets/29983536/b091a048-14cb-4e91-a33f-cd9f183ab429) ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * ci: Fixed zk-environment workflow (#831) ## What ❔ ## Why ❔ ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * ci: Fixed zk-environment workflow typos and npm issues. (#833) ## What ❔ ## Why ❔ ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`. - [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * feat(state-keeper): Reject transactions that fail to publish bytecodes (#832) ## What ❔ The state keeper and API reject transactions that fail to publish bytecodes ## Checklist - [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [ ] Code has been formatted via `zk fmt` and `zk lint`. - [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * feat: adds cargo spellcheck and cspell to Dockerfile for zk env (#835) ## What ❔ - Adds cargo-spellcheck to Dockerfile for zk env - Adds cspell installation to Dockerfile for zk env ## Why ❔ - Needed as a dependency for the zk spellcheck tool ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * chore(docs): setup-dev improvements (#836) ## What ❔ Small improvements to setup-dev docs ## Why ❔ Eliminate steps which can result in wrong versions of packages being installed ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * chore(ci): return the lint checks of sqlx (#830) ## What ❔ ## Why ❔ ## Checklist - [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [ ] Code has been formatted via `zk fmt` and `zk lint`. - [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * ci: fix merge manifest job (#828) ## What ❔ Fix the job for merging docker manifests ## Why ❔ In case of multi-platform builds ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * fix(vm): fix circuit tracer (#837) ## What ❔ The `finish_cycle` method of the circuit tracer wasn't called, and there were bugs in it. ## Why ❔ Fix tracer ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.
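To illustrate the circuit tracer fix (#837) above, a minimal sketch of the intended call pattern, assuming a heavily simplified tracer trait (the real multivm tracer infrastructure is generic over VM state and history modes and far richer):

```rust
// Simplified tracer shape; names are illustrative.
trait Tracer {
    fn finish_cycle(&mut self);
}

#[derive(Default)]
struct CircuitTracer {
    estimated_circuits: f32,
}

impl Tracer for CircuitTracer {
    fn finish_cycle(&mut self) {
        // Accumulate the (placeholder) circuit cost of the finished cycle.
        // The bug fixed in #837 was that this hook was never invoked, so
        // per-cycle statistics were silently dropped.
        self.estimated_circuits += 0.001;
    }
}

fn run_vm_cycles(tracer: &mut dyn Tracer, cycles: u32) {
    for _ in 0..cycles {
        // ... execute one VM cycle ...
        tracer.finish_cycle(); // must run after every cycle
    }
}

fn main() {
    let mut tracer = CircuitTracer::default();
    run_vm_cycles(&mut tracer, 10);
    println!("estimated circuits: {}", tracer.estimated_circuits);
}
```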
* ci: Revert fix merge manifest job (#839) Reverts matter-labs/zksync-era#828 * feat: Integrate `cspell` checker in CI for `/docs` and update PR template (#681) **Series of PRs: This is the first in a series of PRs aimed at enhancing spelling accuracy in this repository.** See: - [ ] https://github.com/matter-labs/zksync-era/pull/682 - [ ] https://github.com/matter-labs/zksync-era/pull/683 - [ ] https://github.com/matter-labs/zksync-era/pull/684 - [ ] https://github.com/matter-labs/zksync-era/pull/685 - [ ] https://github.com/matter-labs/zksync-era/pull/827 Once merged, a final PR will enable `dev_comments: true` in the cargo-spellcheck config file. ## What ❔ - **Introduction of cspell checker:** This PR integrates the cspell checker into the CI, specifically targeting the `/docs` directory to identify and correct spelling errors. - Updates PR template to include `cspell` command - Updates dictionary ## Why ❔ - **Improving Documentation Quality:** The `/docs` directory currently contains numerous spelling errors. By resolving these and integrating a spellcheck step in our CI, we aim to maintain high-quality, error-free documentation. - **Preventing Future Spelling Mistakes:** The cspell checker will help in automatically detecting and preventing spelling errors in future contributions. ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. --------- Co-authored-by: Oleksandr Stepanov * chore: fix link (#784) ## What ❔ image ## Why ❔ ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * chore(docs): spelling and grammar fixes (#789) ## What ❔ ## Why ❔ ## Checklist - [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [ ] Code has been formatted via `zk fmt` and `zk lint`. - [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. --------- Co-authored-by: Igor Aleksanov * chore: change fri_prover witness vector receiver port (#772) ## What ❔ This PR sets the default witness_vector_receiver_port to 3316 ## Why ❔ 4000 is the beacon chain RPC port https://github.com/matter-labs/zksync-era/blob/3a0ec355f4ade8641d52ccdcc9ed03753fcb7882/docker-compose.yml#L96 `TcpListener::bind` in [prover_fri](https://github.com/matter-labs/zksync-era/blob/3a0ec355f4ade8641d52ccdcc9ed03753fcb7882/prover/prover_fri/src/socket_listener.rs#L58) will fail if a beacon chain node is running. Let's define the port as 3316, consistent with the env_config below https://github.com/matter-labs/zksync-era/blob/3a0ec355f4ade8641d52ccdcc9ed03753fcb7882/core/lib/env_config/src/fri_prover.rs#L32 ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. Co-authored-by: perekopskiy <53865202+perekopskiy@users.noreply.github.com> * chore: fix error link (#814) ## What ❔ - fix error link ## Why ❔ - The link is no longer accessible ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. --------- Co-authored-by: Fedor Sakharov Co-authored-by: perekopskiy <53865202+perekopskiy@users.noreply.github.com> * chore: fix unreachable link (#840) ## What ❔ Unreachable link ## Why ❔ image ## Checklist - [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [ ] Code has been formatted via `zk fmt` and `zk lint`. - [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. --------- Co-authored-by: perekopskiy <53865202+perekopskiy@users.noreply.github.com> * feat(vm): Add batch input abstraction (#817) ## What ❔ The new fee model will allow us to have a separate L1 gas price and pubdata price (and it is needed for EIP4844). That's why the `BatchFeeInput` struct is needed: it provides the required fields for both VM versions. The version provided in this PR will require no additional changes on the server side (i.e. the config params that we have right now are enough; the full integration of the config params for the new fee model will be added in the main PR: https://github.com/matter-labs/zksync-era/pull/791/. While the defined enum allows for both l1-gas-price pegged and independent pubdata pricing, in this PR only L1Pegged is ever instantiated (a sketch of this enum follows below). ❗ A new JSON-RPC method has been added: `zks_getMainNodeFeeParams`. It will be used by the EN and will serve the same purpose as `zks_getL1GasPrice`, i.e. retrieving fee info from the main node, but it is extended to provide the fair L2 gas price as well as other config params that for now will be defined on the main node. `zks_getL1GasPrice` will continue working as usual so as not to interrupt the existing external nodes, but it may be removed in the long run. ## Why ❔ Splitting the integration of the 1.4.1 & the new fee model (https://github.com/matter-labs/zksync-era/pull/791/) into separate smaller PRs ## Checklist - [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [ ] Code has been formatted via `zk fmt` and `zk lint`. - [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * feat(core): removes multiple tokio runtimes and worker number setting. (#826) ## What ❔ ## Why ❔ ## Checklist - [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [ ] Code has been formatted via `zk fmt` and `zk lint`. - [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.
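As referenced in the batch input abstraction PR (#817) above, a sketch of the `BatchFeeInput` idea; the field names and the 17-gas-per-pubdata-byte peg are assumptions for illustration, not the exact repo definitions:

```rust
// Sketch only; names and the pubdata peg below are assumptions.
struct L1PeggedBatchFeeInput {
    l1_gas_price: u64,
    fair_l2_gas_price: u64,
}

struct PubdataIndependentBatchFeeInput {
    l1_gas_price: u64,
    fair_l2_gas_price: u64,
    fair_pubdata_price: u64,
}

enum BatchFeeInput {
    // Pubdata price is derived from the L1 gas price; the only variant
    // instantiated in this PR.
    L1Pegged(L1PeggedBatchFeeInput),
    // Pubdata is priced independently (the case needed for EIP-4844 blobs).
    PubdataIndependent(PubdataIndependentBatchFeeInput),
}

impl BatchFeeInput {
    fn fair_pubdata_price(&self) -> u64 {
        match self {
            // Assumed peg: 17 L1 gas per pubdata byte.
            Self::L1Pegged(input) => input.l1_gas_price * 17,
            Self::PubdataIndependent(input) => input.fair_pubdata_price,
        }
    }
}

fn main() {
    let fee_input = BatchFeeInput::L1Pegged(L1PeggedBatchFeeInput {
        l1_gas_price: 30_000_000_000,
        fair_l2_gas_price: 250_000_000,
    });
    println!("fair pubdata price: {}", fee_input.fair_pubdata_price());
}
```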
* chore: adds development guide for zk spellcheck (#847) ## What ❔ - Adds relevant docs related to spellchecker ## Why ❔ - Provides devs with info related to the tool and how it is used ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `zk spellcheck`. * fix: oldest unpicked batch (#692) ## What ❔ Set the sealed batch number if no unpicked jobs were found ## Why ❔ Sometimes there are no such jobs in the DB, so it remains the same. ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * ci: fix merge manifest (#848) ## What ❔ Fix the job for merging released docker manifests ## Why ❔ For multi-platform builds ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `zk spellcheck`. * fix(state-keeper): Updates manager keeps track of fictive block metrics (#843) ## What ❔ The updates manager keeps track of fictive block metrics ## Why ❔ Some metrics (l1_gas, estimated_circuits) are needed in the context of the whole batch and were previously calculated incorrectly (a sketch of the accumulation appears below) ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. * fix: address issue with spellchecker not checking against prover workspace (#855) ## What ❔ - Updates `zk spellcheck` command to explicitly run cargo-spellcheck in `prover/` - Updates cspell glob pattern to cover all `.md` files to ensure READMEs are picked up - Updates cspell config to exclude `node_modules`, `target` and `CHANGELOGs` .md files ## Why ❔ - Given prover packages are not included in the root Cargo.toml, `cargo-spellcheck` was ignoring them. Running it explicitly during `zk spellcheck` ensures these files are also checked - To ensure all READMEs are checked, the cspell glob pattern was changed accordingly - Given changelogs are auto-generated, checking them for spelling issues is not pragmatic ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `zk spellcheck`.
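A minimal sketch of the metric accumulation referenced in #843 above, with illustrative names; the point is only that the fictive, batch-closing block must be folded into the batch totals like any other block:

```rust
// Illustrative shapes for the #843 fix.
#[derive(Default, Clone, Copy)]
struct BlockMetrics {
    l1_gas: u64,
    estimated_circuits: f32,
}

#[derive(Default)]
struct UpdatesManager {
    batch_metrics: BlockMetrics,
}

impl UpdatesManager {
    fn seal_block(&mut self, block: BlockMetrics) {
        // Fold every sealed block, fictive or not, into the running totals;
        // skipping the fictive block is what produced wrong batch metrics.
        self.batch_metrics.l1_gas += block.l1_gas;
        self.batch_metrics.estimated_circuits += block.estimated_circuits;
    }
}

fn main() {
    let mut manager = UpdatesManager::default();
    manager.seal_block(BlockMetrics { l1_gas: 120_000, estimated_circuits: 1.5 });
    manager.seal_block(BlockMetrics { l1_gas: 80_000, estimated_circuits: 0.9 });
    // The fictive block that closes the batch still carries metrics.
    manager.seal_block(BlockMetrics { l1_gas: 40_000, estimated_circuits: 0.2 });
    println!("batch l1_gas = {}", manager.batch_metrics.l1_gas);
}
```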
* chore(main): release core 19.1.0 (#823) :robot: I have created a release *beep* *boop* --- ## [19.1.0](https://github.com/matter-labs/zksync-era/compare/core-v19.0.0...core-v19.1.0) (2024-01-10) ### Features * address remaining spelling issues in dev comments and turns on dev_comments in cfg ([#827](https://github.com/matter-labs/zksync-era/issues/827)) ([1fd0afd](https://github.com/matter-labs/zksync-era/commit/1fd0afdcd9b6c344e1f5dac93fda5aa25c106b2f)) * **core:** removes multiple tokio runtimes and worker number setting. ([#826](https://github.com/matter-labs/zksync-era/issues/826)) ([b8b190f](https://github.com/matter-labs/zksync-era/commit/b8b190f886f1d13602a0b2cc8a2b8525e68b1033)) * fix spelling in dev comments in `core/lib/*` - continued ([#683](https://github.com/matter-labs/zksync-era/issues/683)) ([0421fe6](https://github.com/matter-labs/zksync-era/commit/0421fe6b3e9629fdad2fb88ad5710200825adc91)) * fix spelling in dev comments in `core/lib/*` - continued ([#684](https://github.com/matter-labs/zksync-era/issues/684)) ([b46c2e9](https://github.com/matter-labs/zksync-era/commit/b46c2e9cbbcd048647f998810c8d550f8ad0c1f4)) * fix spelling in dev comments in `core/lib/multivm` - continued ([#682](https://github.com/matter-labs/zksync-era/issues/682)) ([3839d39](https://github.com/matter-labs/zksync-era/commit/3839d39eb6b6d111ec556948c88d1eb9c6ab5e4a)) * fix spelling in dev comments in `core/lib/zksync_core` - continued ([#685](https://github.com/matter-labs/zksync-era/issues/685)) ([70c3feb](https://github.com/matter-labs/zksync-era/commit/70c3febbf0445d2e0c22a942eaf643828aee045d)) * **state-keeper:** circuits seal criterion ([#729](https://github.com/matter-labs/zksync-era/issues/729)) ([c4a86bb](https://github.com/matter-labs/zksync-era/commit/c4a86bbbc5697b5391a517299bbd7a5e882a7314)) * **state-keeper:** Reject transactions that fail to publish bytecodes ([#832](https://github.com/matter-labs/zksync-era/issues/832)) ([0a010f0](https://github.com/matter-labs/zksync-era/commit/0a010f0a6f6682cedc49cb12ab9f9dfcdbccf68e)) * **vm:** Add batch input abstraction ([#817](https://github.com/matter-labs/zksync-era/issues/817)) ([997db87](https://github.com/matter-labs/zksync-era/commit/997db872455351a484c3161d0a733a4bc59dd684)) ### Bug Fixes * oldest unpicked batch ([#692](https://github.com/matter-labs/zksync-era/issues/692)) ([a6c869d](https://github.com/matter-labs/zksync-era/commit/a6c869d88c64a986405bbdfb15cab88e910d1e03)) * **state-keeper:** Updates manager keeps track of fictive block metrics ([#843](https://github.com/matter-labs/zksync-era/issues/843)) ([88fd724](https://github.com/matter-labs/zksync-era/commit/88fd7247c377efce703cd1caeffa4ecd61ed0d7f)) * **vm:** fix circuit tracer ([#837](https://github.com/matter-labs/zksync-era/issues/837)) ([83fc7be](https://github.com/matter-labs/zksync-era/commit/83fc7be3cb9f4d3082b8b9fa8b8f568330bf744f)) --- This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please). * chore(docs): Adding a README with a list of VM versions (#857) ## What ❔ * Adding a list with the current VM versions ## Why ❔ * To make it easy to see the chronological order (and what was changed).
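Building on the VM versions README (#857) and the version-dependent utils change (#809) above, a sketch of how a chronologically ordered version enum enables version-dependent constants; the variant names loosely mirror the VM history and the pre-1.4.1 cap is a placeholder, not the repo's exact value:

```rust
// Chronologically ordered version enum; deriving `PartialOrd` makes
// "this version or newer" checks easy.
#[derive(Clone, Copy, PartialEq, PartialOrd)]
enum VmVersion {
    M5,
    M6,
    Vm1_3_2,
    VmVirtualBlocks,
    VmBoojumIntegration,
    Vm1_4_1,
}

fn max_txs_in_batch(version: VmVersion) -> usize {
    if version >= VmVersion::Vm1_4_1 {
        10_000 // the cap raised with 1.4.1
    } else {
        1_024 // placeholder value for the older versions
    }
}

fn main() {
    println!("{}", max_txs_in_batch(VmVersion::VmBoojumIntegration));
    println!("{}", max_txs_in_batch(VmVersion::Vm1_4_1));
}
```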
* fix(vm): `inspect_transaction_with_bytecode_compression` for old VMs (#862) ## What ❔ Fixes `inspect_transaction_with_bytecode_compression` for vm_1_3_2 and vm_m6 to work correctly with all tx execution modes ## Why ❔ Fix bug ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `zk spellcheck`. * chore(main): release core 19.1.1 (#864) :robot: I have created a release *beep* *boop* --- ## [19.1.1](https://github.com/matter-labs/zksync-era/compare/core-v19.1.0...core-v19.1.1) (2024-01-12) ### Bug Fixes * **vm:** `inspect_transaction_with_bytecode_compression` for old VMs ([#862](https://github.com/matter-labs/zksync-era/issues/862)) ([077c0c6](https://github.com/matter-labs/zksync-era/commit/077c0c689317fa33c9bf3623942b565e8471f418)) --- This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please). * chore(local-node): adds npm to Dockerfile (#865) ## What ❔ ## Why ❔ ## Checklist - [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [ ] Code has been formatted via `zk fmt` and `zk lint`. - [ ] Spellcheck has been run via `zk spellcheck`. * chore(zk): adds sql-formatter to zk dependencies (#866) ## What ❔ Building the newest image fails with

```
 > [12/17] RUN cd /infrastructure/zk && yarn && yarn build && cd /:
26.23 [3/4] Linking dependencies...
26.23 warning " > zksync-web3@0.15.5" has incorrect peer dependency "ethers@~5.7.0".
32.77 [4/4] Building fresh packages...
38.07 success Saved lockfile.
38.08 Done in 37.55s.
38.39 yarn run v1.22.21
38.42 $ tsc
40.31 src/format_sql.ts(3,24): error TS2307: Cannot find module 'sql-formatter' or its corresponding type declarations.
40.34 error Command failed with exit code 1.
40.34 info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
------
Dockerfile:50
--------------------
  48 |
  49 |     # Build `zk` tool
  50 | >>> RUN cd /infrastructure/zk && yarn && yarn build && cd /
  51 |     # Build `local-setup-preparation` tool
  52 |     RUN cd /infrastructure/local-setup-preparation && yarn && cd /
```

## Why ❔ ## Checklist - [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [ ] Code has been formatted via `zk fmt` and `zk lint`. - [ ] Spellcheck has been run via `zk spellcheck`. * feat: rewritten gossip sync to be async from block processing (#711) * Simplified configs to not require genesis block. * Removed consensus fields from jsonrpc sync api, to prevent having gaps in miniblock quorum certificates. * Moved consensus fields to a separate table to make first/last entry lookup fast and reliable.
* Migrated consensus integration to new era-consensus API: https://github.com/matter-labs/era-consensus/pull/53 * Deduplicated code between EN consensus integration and main node consensus integration * Implemented support for cloning a db in tests * fix(local-node): deploy erc20 tokens to fix the deployment (#872) ## What ❔ Currently the run of [local-node](https://github.com/matter-labs/local-node) fails in the logs with:

```
2024-01-15 07:36:06 Deploying TransparentUpgradeableProxy
2024-01-15 07:36:09 TransparentUpgradeableProxy deployed, gasUsed: 678557
2024-01-15 07:36:09 CONTRACTS_L1_ERC20_BRIDGE_PROXY_ADDR=0x1242f30c5c146ed47194BF6D56013A70D9B7C1A0
2024-01-15 07:36:09 Error: TypeError: Cannot read properties of undefined (reading 'address')
2024-01-15 07:36:09     at Deployer. (/contracts/ethereum/src.ts/deploy.ts:318:92)
2024-01-15 07:36:09     at step (/contracts/ethereum/src.ts/deploy.ts:33:23)
2024-01-15 07:36:09     at Object.next (/contracts/ethereum/src.ts/deploy.ts:14:53)
2024-01-15 07:36:09     at /contracts/ethereum/src.ts/deploy.ts:8:71
2024-01-15 07:36:09     at new Promise ()
2024-01-15 07:36:09     at __awaiter (/contracts/ethereum/src.ts/deploy.ts:4:12)
2024-01-15 07:36:09     at Deployer.deployWethBridgeImplementation (/contracts/ethereum/src.ts/deploy.ts:390:16)
2024-01-15 07:36:09     at Deployer. (/contracts/ethereum/src.ts/deploy.ts:439:16)
2024-01-15 07:36:09     at step (/contracts/ethereum/src.ts/deploy.ts:33:23)
2024-01-15 07:36:09     at Object.next (/contracts/ethereum/src.ts/deploy.ts:14:53)
error Command failed with exit code 1.
2024-01-15 07:36:09 info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
2024-01-15 07:36:09 Skip contract verification on localhost
```

Weirdly, however, the overall run of the setup doesn't fail after this. What this failure does is prevent the further steps in the L1 contract deployment, so important stuff such as `VERIFIER_TIMELOCK` does not get deployed. Further down the road the environment variables get populated from the two files: https://github.com/matter-labs/zksync-era/blob/3af4644f428af0328cdea0fbae8a8f965489c6c4/docker/local-node/entrypoint.sh#L44-L45 However, the `.init.dev` file does not contain the `CONTRACT_VERIFIER_TIMELOCK_ADDR` variable, so it gets populated with the `0xFC073319977e314F251EAE6ae6bE76B0B3BAeeCF` address from `/etc/env/dev.env`, and all transactions such as `commitBlocks` go to this address, which breaks block commit/verify/execution. The reason for the failure above is that the WETH bridge deployment needs to look up the WETH token in `/etc/env/localhost.json`, and because the ERC20 deployment step was missing, this file is also empty. So this fix adds an ERC20 deployment step to address this. ## Why ❔ ## Checklist - [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [ ] Code has been formatted via `zk fmt` and `zk lint`. - [ ] Spellcheck has been run via `zk spellcheck`.
* feat: adds `zk linkcheck` to zk tool and updates zk env for `zk linkcheck` ci usage (#868) Related PRs: - https://github.com/matter-labs/zksync-era/pull/869 - https://github.com/matter-labs/zksync-era/pull/870 ## What ❔ - Adds `zk linkcheck` to zk tool - Adds required dependencies to make use of `zk linkcheck` in zk env - Adds `zk linkcheck` docs - Updates `spellcheck` dir to be general `checks-config` dir to include spellcheck and link configuration files - Fixes issue with `zk spellcheck` exit code ## Why ❔ - `zk linkcheck` will ensure no dead links exist in the repo and prevent unnecessary PRs - Required to install dependencies, similar to `zk spellcheck` - Relevant docs to outline `zk linkcheck` usage ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `zk spellcheck`. * chore(hyperchains): fix zk stack cli (#876) ## What ❔ Fixes zk stack cli when spinning up a hyperchain without a WETH bridge ## Why ❔ Because it was broken. Some changes over the last month removed a param that used to exist ## Checklist - [X] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [X] Tests for the changes have been added / updated. - [X] Documentation comments have been added / updated. - [X] Code has been formatted via `zk fmt` and `zk lint`. - [X] Spellcheck has been run via `zk spellcheck`. * chore(docs): Update boojum_gadgets.md - Fix typo (#871) * ci: add copying wit-vector-gen image to cross-region registries (#883) ## What ❔ A job for copying the witness-vector-generator image between GAR registries ## Why ❔ To have access to the image from different registries ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `zk spellcheck`. * feat(en): Make batch status updater work with pruned data (#863) ## What ❔ Modifies the batch status updater so that it works with pruned node data during snapshot recovery. ## Why ❔ Part of preparing the EN code to support snapshot recovery. ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `zk spellcheck`. * feat(contract-verifier): Support zkVM solc contract verification (#854) ## What ❔ - Downloads zkVM solc binaries in verifier image - Adds support for new zksolc versions ## Why ❔ Support zkVM solc contract verification ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `zk spellcheck`.
* chore(main): release core 19.2.0 (#867) :robot: I have created a release *beep* *boop* --- ## [19.2.0](https://github.com/matter-labs/zksync-era/compare/core-v19.1.1...core-v19.2.0) (2024-01-17) ### Features * adds `zk linkcheck` to zk tool and updates zk env for `zk linkcheck` ci usage ([#868](https://github.com/matter-labs/zksync-era/issues/868)) ([d64f584](https://github.com/matter-labs/zksync-era/commit/d64f584f6d505b19cd6424928e9dc68e370e17fd)) * **contract-verifier:** Support zkVM solc contract verification ([#854](https://github.com/matter-labs/zksync-era/issues/854)) ([1ed5a95](https://github.com/matter-labs/zksync-era/commit/1ed5a95462dbd73151acd8afbc4ab6158a2aecda)) * **en:** Make batch status updater work with pruned data ([#863](https://github.com/matter-labs/zksync-era/issues/863)) ([3a07890](https://github.com/matter-labs/zksync-era/commit/3a07890dacebf6179636c44d7cce1afd21ab49eb)) * rewritten gossip sync to be async from block processing ([#711](https://github.com/matter-labs/zksync-era/issues/711)) ([3af4644](https://github.com/matter-labs/zksync-era/commit/3af4644f428af0328cdea0fbae8a8f965489c6c4)) --- This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please). * chore(deps): bumps h2 to avoid vulnerability alert (#895) ## What ❔ ## Why ❔ ## Checklist - [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [ ] Code has been formatted via `zk fmt` and `zk lint`. - [ ] Spellcheck has been run via `zk spellcheck`. * fix: addresses broken links in preparation for ci link check (#869) Related PRs: - https://github.com/matter-labs/zksync-era/pull/868 - https://github.com/matter-labs/zksync-era/pull/870 ## What ❔ - Addresses broken links in the repo ## Why ❔ - Should not have dead links in the repo - Addressing issues with links prior to `zk linkcheck` usage in ci ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `zk spellcheck`. * fix: Incorrect exposing of log indexes (#896) ## What ❔ Expose log indexes in transactions correctly ## Why ❔ So that they behave correctly ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `zk spellcheck`. * feat(api): Make Web3 API server work with pruned data (#838) ## What ❔ Modifies the Web3 API server so that it works with pruned node data during snapshot recovery. ## Why ❔ Part of preparing the EN code to support snapshot recovery. ## Checklist - [x] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [x] Tests for the changes have been added / updated. - [x] Documentation comments have been added / updated. - [x] Code has been formatted via `zk fmt` and `zk lint`. - [x] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.
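A sketch of the pruning-aware check that #838 above implies the Web3 API needs, with illustrative types: requests below the snapshot boundary should yield a clear error rather than empty results.

```rust
// Illustrative pruning-aware check for Web3 API handlers.
struct PruningInfo {
    // First block still retained locally after snapshot recovery.
    first_retained_block: u64,
}

#[derive(Debug)]
enum Web3Error {
    PrunedBlock(u64),
}

fn check_block_available(info: &PruningInfo, block: u64) -> Result<(), Web3Error> {
    if block < info.first_retained_block {
        // Answering from pruned history would silently return empty data.
        return Err(Web3Error::PrunedBlock(block));
    }
    Ok(())
}

fn main() {
    let info = PruningInfo { first_retained_block: 1_000 };
    assert!(check_block_available(&info, 999).is_err());
    assert!(check_block_available(&info, 1_000).is_ok());
}
```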
* feat(vm)!: fee model updates + 1.4.1 (#791) ## What ❔ Adds a new VM version `VM_1_4_1` that will work with `zk_evm@1.4.1` and the [following contracts](https://github.com/matter-labs/era-contracts/pull/159). Generally, it contains the following features (on the server side): - The new fee model - 1.4.1 integration - The Increase of the maximal number of transactions per batch to 10 thousand. ❗ Requires DB migration and new config params before being deployed❗ ## Why ❔ ## Checklist - [ ] PR title corresponds to the body of PR (we generate changelog entries from PRs). - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [ ] Code has been formatted via `zk fmt` and `zk lint`. - [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`. --------- Co-authored-by: AntonD3 Co-authored-by: AntonD3 <74021421+AntonD3@users.noreply.github.com> Co-authored-by: Emil Co-authored-by: Fedor Sakharov Co-authored-by: perekopskiy Co-authored-by: zksync-admin-bot2 Co-authored-by: perekopskiy <53865202+perekopskiy@users.noreply.github.com> * Fixes * Update verifier key hash * Remove todo! * Update submodule * Update submodule --------- Signed-off-by: Danil Signed-off-by: Harald Hoyer Co-authored-by: Alex Ostrovski Co-authored-by: EmilLuta Co-authored-by: Danil Co-authored-by: zksync-era-bot <147085853+zksync-era-bot@users.noreply.github.com> Co-authored-by: Yury Akudovich Co-authored-by: Karma <148863819+0xKarm@users.noreply.github.com> Co-authored-by: Igor Aleksanov Co-authored-by: Todd <148772493+toddfil@users.noreply.github.com> Co-authored-by: Santala <31094102+tranhoaison@users.noreply.github.com> Co-authored-by: Lech <88630083+Artemka374@users.noreply.github.com> Co-authored-by: gorden <148852660+gordera@users.noreply.github.com> Co-authored-by: yilirong1992 <42017468+yilirong1992@users.noreply.github.com> Co-authored-by: Roman Brodetski Co-authored-by: Salad <148864073+Saladerl@users.noreply.github.com> Co-authored-by: min <52465594+yilimin999@users.noreply.github.com> Co-authored-by: Jean <148654781+oxJean@users.noreply.github.com> Co-authored-by: Tomasz Grześkiewicz Co-authored-by: penghuarong <42017444+penghuarong@users.noreply.github.com> Co-authored-by: umi <66466781+yiliminiqihang@users.noreply.github.com> Co-authored-by: perekopskiy <53865202+perekopskiy@users.noreply.github.com> Co-authored-by: Doll <148654386+Dollyerls@users.noreply.github.com> Co-authored-by: Jack <87960263+ylmin@users.noreply.github.com> Co-authored-by: Artem Makhortov <13339874+artmakh@users.noreply.github.com> Co-authored-by: pompon0 Co-authored-by: Zijing Zhang <50045289+pluveto@users.noreply.github.com> Co-authored-by: Stanislav Bezkorovainyi Co-authored-by: koloz Co-authored-by: zksync-admin-bot2 <91326834+zksync-admin-bot2@users.noreply.github.com> Co-authored-by: Igor Borodin Co-authored-by: Fedor Sakharov Co-authored-by: Dustin Brickwood Co-authored-by: Thomas Knauth Co-authored-by: AnastasiiaVashchuk <72273339+AnastasiiaVashchuk@users.noreply.github.com> Co-authored-by: Jack Hamer <47187316+JackHamer09@users.noreply.github.com> Co-authored-by: Marcin M <128217157+mm-zk@users.noreply.github.com> Co-authored-by: kelemeno <34402761+kelemeno@users.noreply.github.com> Co-authored-by: kelemeno Co-authored-by: MexicanAce Co-authored-by: Alex <149157638+AlexBill01@users.noreply.github.com> Co-authored-by: Ford <153042616+guerrierindien@users.noreply.github.com> Co-authored-by: momodaka <463435681@qq.com> Co-authored-by: Harald Hoyer 
Co-authored-by: Joonatan Saarhelo Co-authored-by: 0xMBL <152177955+0xMBL@users.noreply.github.com> Co-authored-by: perekopskiy Co-authored-by: Roman Brodetski Co-authored-by: Bence Haromi <56651250+benceharomi@users.noreply.github.com> Co-authored-by: Bhaskar Kashyap <31563474+bskrksyp9@users.noreply.github.com> Co-authored-by: Maksym Co-authored-by: Shahar Kaminsky Co-authored-by: Ivan Bogatyy Co-authored-by: Dushyant Goswami <66271106+Dushyantgoswami@users.noreply.github.com> Co-authored-by: CrytoInsight <150222426+CrytoInsight@users.noreply.github.com> Co-authored-by: 0xDev <84038223+avocadodefi@users.noreply.github.com> Co-authored-by: Money Bund <148772726+MoneyBund@users.noreply.github.com> Co-authored-by: Mattew <149158367+MattewGraham@users.noreply.github.com> Co-authored-by: 0xblackbox <105477847+0xblackbox@users.noreply.github.com> Co-authored-by: web3-Jack <148616461+web3jacker@users.noreply.github.com> Co-authored-by: Joslis <148769700+Joslis@users.noreply.github.com> Co-authored-by: Tleao <148654938+Tleaoo@users.noreply.github.com> Co-authored-by: Roman Petriv Co-authored-by: Alex Ostrovski Co-authored-by: Aleksandr Stepanov Co-authored-by: JW Co-authored-by: 0xWizar <150222244+0xWizar@users.noreply.github.com> Co-authored-by: Ramon "9Tails" Canales Co-authored-by: mankzan <81523456+mankzan@users.noreply.github.com> Co-authored-by: AntonD3 Co-authored-by: AntonD3 <74021421+AntonD3@users.noreply.github.com> Co-authored-by: Emil Co-authored-by: zksync-admin-bot2 Co-authored-by: Jmunoz --- .dockerignore | 10 +- .eslintignore | 2 - .githooks/pre-push | 4 +- .github/pull_request_template.md | 1 + .github/release-please/manifest.json | 4 +- .github/workflows/build-contracts.yml | 83 + .github/workflows/build-core-template.yml | 99 +- .github/workflows/build-docker-from-tag.yml | 20 +- .../workflows/build-external-node-docker.yml | 51 - .github/workflows/build-gar-reusable.yml | 98 - .github/workflows/build-local-node-docker.yml | 28 +- .github/workflows/build-prover-template.yml | 72 +- .github/workflows/check-spelling.yml | 41 + .github/workflows/ci-core-lint-reusable.yml | 5 +- .github/workflows/ci-core-reusable.yml | 166 +- .github/workflows/ci-docs-reusable.yml | 4 +- .github/workflows/ci-prover-reusable.yml | 6 +- .github/workflows/ci.yml | 13 +- .github/workflows/nodejs-license.yaml | 3 +- .github/workflows/release-test-stage.yml | 22 +- .github/workflows/vm-perf-comparison.yml | 5 +- .github/workflows/vm-perf-to-prometheus.yml | 2 +- .github/workflows/zk-environment-publish.yml | 16 +- .gitignore | 1 - .gitmodules | 3 - .markdownlintignore | 2 - .prettierignore | 1 - .solhintignore | 3 +- CODEOWNERS | 1 - Cargo.lock | 1956 +-- Cargo.toml | 4 +- README.md | 18 +- bin/ci_localnet_up | 12 + bin/ci_run | 2 +- bin/zk | 43 +- checks-config/cspell.json | 47 + checks-config/era.cfg | 69 + checks-config/era.dic | 883 ++ checks-config/links.json | 29 + contracts | 2 +- core/CHANGELOG.md | 247 +- core/bin/block_reverter/src/main.rs | 12 +- core/bin/contract-verifier/src/main.rs | 7 +- core/bin/contract-verifier/src/verifier.rs | 25 +- .../bin/contract-verifier/src/zksolc_utils.rs | 6 +- .../contract-verifier/src/zkvyper_utils.rs | 6 +- core/bin/external_node/Cargo.toml | 4 + core/bin/external_node/src/config/mod.rs | 60 +- core/bin/external_node/src/main.rs | 155 +- core/bin/external_node/src/metrics.rs | 11 + .../src/main.rs | 7 +- core/bin/rocksdb_util/Cargo.toml | 22 - core/bin/rocksdb_util/src/main.rs | 85 - core/bin/snapshots_creator/Cargo.toml | 30 + 
core/bin/snapshots_creator/README.md | 24 + core/bin/snapshots_creator/src/chunking.rs | 69 + core/bin/snapshots_creator/src/creator.rs | 338 + core/bin/snapshots_creator/src/main.rs | 110 + core/bin/snapshots_creator/src/metrics.rs | 43 + core/bin/snapshots_creator/src/tests.rs | 460 + .../src/consistency.rs | 1 - .../storage_logs_dedup_migration/src/main.rs | 4 +- .../src/intrinsic_costs.rs | 19 +- .../system-constants-generator/src/main.rs | 58 +- .../system-constants-generator/src/utils.rs | 47 +- .../Cargo.toml | 37 - .../README.md | 39 - .../data/verification_0_key.json | 399 - .../data/verification_10_key.json | 399 - .../data/verification_11_key.json | 399 - .../data/verification_12_key.json | 399 - .../data/verification_13_key.json | 399 - .../data/verification_14_key.json | 399 - .../data/verification_15_key.json | 399 - .../data/verification_16_key.json | 399 - .../data/verification_17_key.json | 399 - .../data/verification_18_key.json | 399 - .../data/verification_1_key.json | 399 - .../data/verification_2_key.json | 399 - .../data/verification_3_key.json | 399 - .../data/verification_4_key.json | 399 - .../data/verification_5_key.json | 399 - .../data/verification_6_key.json | 399 - .../data/verification_7_key.json | 399 - .../data/verification_8_key.json | 399 - .../data/verification_9_key.json | 399 - .../src/commitment_generator.rs | 37 - .../src/json_to_binary_vk_converter.rs | 31 - .../src/lib.rs | 188 - .../src/main.rs | 45 - .../src/tests.rs | 66 - core/bin/verified_sources_fetcher/src/main.rs | 1 + core/bin/zksync_server/src/main.rs | 31 +- core/lib/basic_types/src/lib.rs | 37 +- core/lib/circuit_breaker/src/l1_txs.rs | 3 +- core/lib/circuit_breaker/src/lib.rs | 1 - core/lib/commitment_utils/Cargo.toml | 1 + core/lib/commitment_utils/src/lib.rs | 32 +- core/lib/config/src/configs/api.rs | 48 +- core/lib/config/src/configs/chain.rs | 75 +- .../config/src/configs/circuit_synthesizer.rs | 42 - .../config/src/configs/contract_verifier.rs | 3 +- core/lib/config/src/configs/database.rs | 30 +- core/lib/config/src/configs/eth_sender.rs | 4 +- core/lib/config/src/configs/eth_watch.rs | 3 +- core/lib/config/src/configs/fetcher.rs | 45 - .../src/configs/fri_proof_compressor.rs | 3 +- core/lib/config/src/configs/fri_prover.rs | 4 +- .../config/src/configs/fri_prover_gateway.rs | 3 +- .../config/src/configs/fri_prover_group.rs | 2 +- core/lib/config/src/configs/mod.rs | 32 +- .../config/src/configs/proof_data_handler.rs | 3 +- core/lib/config/src/configs/prover.rs | 61 - core/lib/config/src/configs/prover_group.rs | 66 - .../config/src/configs/snapshots_creator.rs | 18 + core/lib/config/src/configs/utils.rs | 4 +- core/lib/config/src/lib.rs | 5 +- core/lib/constants/Cargo.toml | 2 +- core/lib/constants/src/blocks.rs | 4 +- core/lib/constants/src/contracts.rs | 12 +- core/lib/constants/src/crypto.rs | 10 +- core/lib/constants/src/ethereum.rs | 19 +- core/lib/constants/src/system_context.rs | 16 +- core/lib/contracts/src/lib.rs | 75 +- core/lib/contracts/src/test_contracts.rs | 6 +- core/lib/crypto/README.md | 2 +- core/lib/crypto/src/hasher/blake2.rs | 2 +- core/lib/crypto/src/hasher/keccak.rs | 3 +- core/lib/crypto/src/hasher/sha256.rs | 2 +- ...f6e1df560ab1e8935564355236e90b6147d2f.json | 15 + ...381d82925556a4801f678b8786235d525d784.json | 16 + ...07703b2581dda4fe3c00be6c5422c78980c4b.json | 20 + ...e57a83f37da8999849377dfad60b44989be39.json | 107 + ...82d56bbace9013f1d95ea0926eef1fb48039b.json | 34 + ...a1b4ba7fe355ebc02ea49a054aa073ce324ba.json | 15 + 
...4f3670813e5a5356ddcb7ac482a0201d045f7.json | 108 + ...c2563a3e061bcc6505d8c295b1b2517f85f1b.json | 20 + ...c7e3b6bdb2a08149d5f443310607d3c78988c.json | 22 + ...2b5ea100f59295cd693d14ee0d5ee089c7981.json | 20 + ...994723ebe57b3ed209069bd3db906921ef1a3.json | 28 + ...1a2ee78eb59628d0429e89b27d185f83512be.json | 28 + ...33d196907ebd599e926d305e5ef9f26afa2fa.json | 24 + ...4710daa723e2d9a23317c347f6081465c3643.json | 52 + ...1e4ee6682a89fb86f3b715a240805d44e6d87.json | 15 + ...c6f03469d78bf4f45f5fd1eaf37171db2f04a.json | 20 + ...dcde721ca1c652ae2f8db41fb75cecdecb674.json | 22 + ...fced141fb29dd8b6c32dd2dc3452dc294fe1f.json | 23 + ...e5f7edcafa4fc6757264a96a46dbf7dd1f9cc.json | 31 + ...8048558243ff878b59586eb3a8b22794395d8.json | 259 + ...bc6e326e15dca141050bc9d7dacae98a430e8.json | 22 + ...dfbf962636347864fc365fafa46c7a2da5f30.json | 22 + ...a52daa202279bf612a9278f663deb78bc6e41.json | 22 + ...986511265c541d81b1d21f0a751ae1399c626.json | 72 + ...661a097308d9f81024fdfef24a14418202730.json | 22 + ...700a95e4c37a7a18531b3cdf120394cb055b9.json | 22 + ...aff3a06b7a9c1866132d62e4449fa9436c7c4.json | 15 + ...263298abc0521734f807fbf45484158b167b2.json | 20 + ...c68e8e15a831f1a45dc3b2240d5c6809d5ef2.json | 82 + ...28a20420763a3a22899ad0e5f4b953b615a9e.json | 25 + ...a2d505a1aabf52ff4136d2ed1b39c70dd1632.json | 230 + ...a36408510bb05c15285dfa7708bc0b044d0c5.json | 259 + ...bfdf10fb1cb43781219586c169fb976204331.json | 22 + ...9f3f0b5d629fdb5c36ea1bfb93ed246be968e.json | 88 + ...f50e423eb6134659fe52bcc2b27ad16740c96.json | 14 + ...7ea964610e131d45f720f7a7b2a130fe9ed89.json | 17 + ...5d98a3d9f7df162ae22a98c3c7620fcd13bd2.json | 80 + ...62786c58e54f132588c48f07d9537cf21d3ed.json | 22 + ...7b84c5fd52f0352f0f0e311d054cb9e45b07e.json | 22 + ...0a01a6c7cbe9297cbb55d42533fddc18719b6.json | 20 + ...1610ffa7f169d560e79e89b99eedf681c6773.json | 16 + ...0fa6d65c05d8144bdfd143480834ead4bd8d5.json | 76 + ...4d26e48e968c79834858c98b7a7f9dfc81910.json | 14 + ...be3b85419fde77038dd3de6c55aa9c55d375c.json | 61 + ...5f6760cfeac185330d1d9c5cdb5b383ed8ed4.json | 30 + ...099c4591ce3f8d51f3df99db0165e086f96af.json | 22 + ...5fc7f60618209d0132e7937a1347b7e94b212.json | 20 + ...710ce910d0c9138d85cb55e16459c757dea03.json | 20 + ...b2d0413d0f89c309de4b31254c309116ea60c.json | 17 + ...de995ddddc621ee2149f08f905af2d8aadd03.json | 34 + ...492cf2545944291dd0d42b937c0d0c7eefd47.json | 106 + ...cd9b0128f82306e27e4c7258116082a54ab95.json | 29 + ...b19a3a7feec950cb3e503588cf55d954a493a.json | 22 + ...8eeab159533211d2ddbef41e6ff0ba937d04a.json | 14 + ...16065fcad42c6b621efb3a135a16b477dcfd9.json | 86 + ...6ca08324bc86a72e4ea85c9c7962a6c8c9e30.json | 16 + ...5234d536b064951a67d826ac49b7a3a095a1a.json | 28 + ...717b0dd3cec64b0b29f59b273f1c6952e01da.json | 22 + ...eda1da000848ed4abf309b68989da33e1ce5a.json | 124 + ...07997b7e24682da56f84d812da2b9aeb14ca2.json | 40 + ...6381a9c4165e2727542eaeef3bbedd13a4f20.json | 15 + ...42d4ef93ca238fd2fbc9155e6d012d0d1e113.json | 15 + ...23efc18e51e6d4092188e081f4fafe92fe0ef.json | 34 + ...095596297be0d728a0aa2d9b180fb72de222b.json | 22 + ...83f115cd97586bd795ee31f58fc14e56d58cb.json | 14 + ...52964b268b93c32fd280c49bf9f932884300d.json | 20 + ...52554ccfb5b83f00efdc12bed0f60ef439785.json | 25 + ...f45d57f1f8b2558fdb374bf02e84d3c825a23.json | 20 + ...bffc78d92adb3c1e3ca60b12163e38c67047e.json | 22 + ...feb5f094160627bc09db4bda2dda9a8c11c44.json | 15 + ...5fb7a093b73727f75e0cb7db9cea480c95f5c.json | 35 + ...76085385c3a79d72f49669b4215a9454323c2.json | 22 + ...78f42a877039094c3d6354928a03dad29451a.json | 15 
+ ...40e73a2778633bbf9d8e591ec912634639af9.json | 232 + ...eb89c728357c0744f209090824695a5d35fa3.json | 22 + ...4d6b3b1d15b818974e4e403f4ac418ed40c2c.json | 26 + ...119a0a60dc9804175b2baf8b45939c74bd583.json | 15 + ...05d7d1c9e649663f6e9444c4425821d0a5b71.json | 22 + ...da2b4afc314a3b3e78b041b44c8a020a0ee12.json | 14 + ...5ef6676b2eac623c0f78634cae9072fe0498a.json | 16 + ...3fe77e649e9d110043d2ac22005dd61cfcfb9.json | 22 + ...95f3823926f69cb1d54874878e6d6c301bfd8.json | 20 + ...e9ebdda648667d48d6b27ddf155f2fe01d77a.json | 16 + ...b04ba5868d1897553df85aac15b169ebb0732.json | 28 + ...22ff6372f63ecadb504a329499b02e7d3550e.json | 26 + ...26b0b3185a231bbf0b8b132a1a95bc157e827.json | 34 + ...f9044ae85b579c7051301b40bd5f94df1f530.json | 15 + ...7e5afa28d855d87ea2f8c93e79c436749068a.json | 258 + ...17a7ffc993cf436ad3aeeae82ed3e330b07bd.json | 20 + ...279601234a489f73d843f2f314252ed4cb8b0.json | 28 + ...ac77b63879ab97a32eed8392b48cc46116a28.json | 14 + ...96ec397767a063dc21aa3add974cb9b070361.json | 16 + ...6387f2f3eda07a630503324bd6576dbdf8231.json | 22 + ...6769dbb04d3a61cf232892236c974660ffe64.json | 35 + ...b249026665c9fb17b6f53a2154473547cbbfd.json | 22 + ...489745438eae73a07b577aa25bd08adf95354.json | 16 + ...8b87ead36f593488437c6f67da629ca81e4fa.json | 14 + ...4cee5c602d275bb812022cc8fdabf0a60e151.json | 56 + ...4de79b67833f17d252b5b0e8e00cd6e75b5c1.json | 20 + ...5da82065836fe17687ffad04126a6a8b2b27c.json | 15 + ...7931a02fe5ffaf2c4dc2f1e7a48c0e932c060.json | 50 + ...77cb8f1340b483faedbbc2b71055aa5451cae.json | 20 + ...d772d8801b0ae673b7173ae08a1fa6cbf67b2.json | 59 + ...6ea7ef2c1178b1b0394a17442185b79f2d77d.json | 22 + ...c6d47e8e1b4b5124c82c1f35d405204fcf783.json | 82 + ...afca34d61347e0e2e29fb07ca3d1b8b1f309c.json | 18 + ...37d8d542b4f14cf560972c005ab3cc13d1f63.json | 23 + ...ec427a099492905a1feee512dc43f39d10047.json | 15 + ...9a8f447824a5ab466bb6eea1710e8aeaa2c56.json | 15 + ...a7ea81d8d10c76bbae540013667e13230e2ea.json | 22 + ...d94f28b7b2b60d551d552a9b0bab1f1791e39.json | 22 + ...a8d4768c6a803a1a90889e5a1b8254c315231.json | 22 + ...252f26335699a4e7f0347d7e894320245271d.json | 15 + ...ecd50a04799ffd36be0e17c56f47fcdbc8f60.json | 14 + ...40e7eb9f598999c28015a504605f88bf84b33.json | 88 + ...6009503182c300877e74a8539f231050e6f85.json | 15 + ...17afe4e38bcfe5ce93bf229d68622066ab8a1.json | 94 + ...5bb2036d21f60d4c6934f9145730ac35c04de.json | 20 + ...63f089c91aead2bc9abb284697e65840f1e8f.json | 16 + ...5db07667280abef27cc73159d2fd9c95b209b.json | 256 + ...c4055c22895316ce68d9d41619db7fcfb7563.json | 100 + ...97b16599abaf51df0f19effe1a536376cf6a6.json | 28 + ...7effac442434c6e734d977e6682a7484abe7f.json | 35 + ...cd0bc0e8923a6bae64f22f09242766835ee0c.json | 74 + ...3d55ff2b1510868dfe80d14fffa3f5ff07b83.json | 15 + ...52aeb5f06c26f68d131dd242f6ed68816c513.json | 22 + ...f00e82b0b3ad9ae36bf4fe44d7e85b74c6f49.json | 20 + ...d4cb46ca093129707ee14f2fa42dc1800cc9e.json | 27 + ...1a525c87d390df21250ab4dce08e09be72591.json | 98 + ...3ce80f9b2b27758651ccfc09df61a4ae8a363.json | 88 + ...8927f9e976319a305e0928ff366d535a97104.json | 92 + ...843eb48b5e26ee92cd8476f50251e8c32610d.json | 26 + ...b734d5d255e36e48668b3bfc25a0f86ea52e7.json | 40 + ...e7ee767e4c98706246eb113498c0f817f5f38.json | 17 + ...91a8d67abcdb743a619ed0d1b9c16206a3c20.json | 12 + ...3260b49ce42daaf9dbe8075daf0a8e0ad9914.json | 12 + ...ab01cb2c55bf86d2c8c99abb1b7ca21cf75c0.json | 14 + ...8a58479487686d774e6b2b1391347bdafe06d.json | 29 + ...ad7fe3464f2619cee2d583accaa829aa12b94.json | 38 + ...2826095e9290f0c1157094bd0c44e06012e42.json | 
232 + ...98129b44534062f524823666ed432d2fcd345.json | 12 + ...f396f9d193615d70184d4327378a7fc8a5665.json | 30 + ...26313c255ce356a9a4e25c098484d3129c914.json | 14 + ...7f92c4cfad9313b1461e12242d9becb59e0b0.json | 22 + ...ce56b53cc4ca0a8c6ee7cac1b9a5863000be3.json | 256 + ...8bf92c7f2c55386c8208e3a82b30456abd5b4.json | 90 + ...4a42b4ead6964edd17bfcacb4a828492bba60.json | 20 + ...dcf2324f380932698386aa8f9751b1fa24a7b.json | 15 + ...6bf94a013e49bf50ce23f4de4968781df0796.json | 15 + ...26dc7bb98e0f7feaf14656307e20bd2bb0b6c.json | 14 + ...1caa4cca66d6ad74b2cd1a34ea5f7bc1e6909.json | 28 + ...715e903f3b399886c2c73e838bd924fed6776.json | 18 + ...50d625f057acf1440b6550375ce5509a816a6.json | 107 + ...b139e943f3ad2a41387b610606a42b7f03283.json | 29 + ...7227120a8279db1875d26ccae5ee0785f46a9.json | 22 + ...1578db18c29cdca85b8b6aad86fe2a9bf6bbe.json | 32 + ...77b39429d56072f63b3530c576fb31d7a56f9.json | 18 + ...534014b9ab9ca7725e14fb17aa050d9f35eb8.json | 23 + ...f03c0aad3ac30d85176e0a4e35f72bbb21b12.json | 256 + ...19a00c170cf7725d95dd6eb8b753fa5facec8.json | 235 + ...0c5f885240d99ea69140a4636b00469d08497.json | 22 + ...c70aff49e60b7def3d93b9206e650c259168b.json | 20 + ...63595237a0c54f67b8c669dfbb4fca32757e4.json | 20 + ...1af63b86e8da6d908d48998012231e1d66a60.json | 29 + ...b4207b69cc48b4ba648e797211631be612b69.json | 28 + ...567e2667c63a033baa6b427bd8a0898c08bf2.json | 22 + ...02c690c33686c889d318b1d64bdd7fa6374db.json | 20 + ...c8dc4bd57e32dfefe1017acba9a15fc14b895.json | 36 + ...cbb724af0f0216433a70f19d784e3f2afbc9f.json | 22 + ...b8ba0b771cb45573aca81db254f6bcfc17c77.json | 20 + ...b17305343df99ebc55f240278b5c4e63f89f5.json | 22 + ...2767e112b95f4d103c27acd6f7ede108bd300.json | 16 + ...4125a2633ddb6abaa129f2b12389140d83c3f.json | 40 + ...f0a9676e26f422506545ccc90b7e8a36c8d47.json | 35 + ...d9b6a1a6fe629f9a02c8e33330a0efda64068.json | 32 + ...4561347b81f8931cc2addefa907c9aa9355e6.json | 82 + ...1e30606fddf6631c859ab03819ec476bcf005.json | 22 + ...578179e6269c6ff36877cedee264067ccdafc.json | 65 + ...693831e2ab6d6d3c1805df5aa51d76994ec19.json | 16 + ...e3b0fd67aa1f5f7ea0de673a2fbe1f742db86.json | 22 + ...c9317c01d6e35db3b07d0a31f436e7e3c7c40.json | 14 + ...8e221a8ef9014458cc7f1dbc60c056a0768a0.json | 16 + ...a36532fee1450733849852dfd20e18ded1f03.json | 15 + ...d2e2326d7ace079b095def52723a45b65d3f3.json | 15 + ...2cfbaf09164aecfa5eed8d7142518ad96abea.json | 22 + ...5ca679a0459a012899a373c67393d30d12601.json | 14 + ...d92f5a547150590b8c221c4065eab11175c7a.json | 20 + ...d7cbb0bc526ebe61a07f735d4ab587058b22c.json | 22 + ...a69138206dfeb41f3daff4a3eef1de0bed4e4.json | 16 + ...5d93e391600ab9da2e5fd4e8b139ab3d77583.json | 15 + ...6c6e7b4595f3e7c3dca1d1590de5437187733.json | 29 + ...c9a64904026506914abae2946e5d353d6a604.json | 23 + ...e2757dbc13be6b30f5840744e5e0569adc66e.json | 22 + ...9f41220c51f58a03c61d6b7789eab0504e320.json | 32 + ...477dcf21955e0921ba648ba2e751dbfc3cb45.json | 38 + ...9bdc9efc6b89fc0444caf8271edd7dfe4a3bc.json | 20 + ...cd6ca26aa2fa565fcf208b6985f461c1868f2.json | 28 + ...63f436a4f16dbeb784d0d28be392ad96b1c49.json | 14 + ...895006e23ec73018f4b4e0a364a741f5c9781.json | 22 + ...f724216807ffd481cd6f7f19968e42e52b284.json | 14 + ...1ac4ab2d73bda6022700aeb0a630a2563a4b4.json | 15 + ...f8dcbcb014b4f808c6232abd9a83354c995ac.json | 35 + ...663a9c5ea62ea7c99a77941eb8f05d4454125.json | 18 + ...7b56187686173327498ac75424593547c19c5.json | 22 + ...9871731c41cef01da3d6aaf2c53f7b17c47b2.json | 23 + ...f8c12deeca6b8843fe3869cc2b02b30da5de6.json | 22 + ...49b6370c211a7fc24ad03a5f0e327f9d18040.json 
| 22 + ...d964d4bb39b9dcd18fb03bc11ce2fb32b7fb3.json | 83 + ...61b86b1abd915739533f9982fea9d8e21b9e9.json | 14 + ...0103263af3ff5cb6c9dc5df59d3cd2a5e56b4.json | 17 + ...0c2cfff08e6fef3c3824d20dfdf2d0f73e671.json | 34 + ...5386a2fd53492f3df05545edbfb0ec0f059d2.json | 15 + ...697e36b3c2a997038c30553f7727cdfa17361.json | 20 + ...651aa072f030c70a5e6de38843a1d9afdf329.json | 16 + ...180abbd76956e073432af8d8500327b74e488.json | 22 + ...9c9a85e3c24fca1bf87905f7fc68fe2ce3276.json | 16 + ...6d5ee4341edadb8a86b459a07427b7e265e98.json | 136 + ...a4b4e2af48907fa9321064ddb24ac02ab17cd.json | 20 + ...5661d1df4ec053797d75cfd32272be4f485e7.json | 54 + ...b742349e78e6e4ce3e7c9a0dcf6447eedc6d8.json | 94 + ...912d57f8eb2a38bdb7884fc812a2897a3a660.json | 35 + ...709dbee508ad6d1cae43e477cf1bef8cb4aa9.json | 23 + ...f995b654adfe328cb825a44ad36b4bf9ec8f2.json | 94 + ...379f3b2e9ff1bc6e8e38f473fb4950c8e4b77.json | 20 + ...a653852f7e43670076eb2ebcd49542a870539.json | 14 + ...c1d66aeaeb6e2d36fddcf96775f01716a8a74.json | 14 + ...432d865852afe0c60e11a2c1800d30599aa61.json | 14 + ...8aa0f8525832cb4075e831c0d4b23c5675b99.json | 24 + ...e899e360650afccb34f5cc301b5cbac4a3d36.json | 15 + ...9d0e2d571533d4d5f683919987b6f8cbb00e0.json | 15 + ...38d60d6e93bcb34fd20422e227714fccbf6b7.json | 34 + ...8d43c31ec7441a7f6c5040e120810ebbb72f7.json | 21 + ...64420a25eef488de2b67e91ed39657667bd4a.json | 26 + ...2b7c4d79cbd404e0267794ec203df0cbb169d.json | 20 + ...2c212918b31dabebf08a84efdfe1feee85765.json | 20 + ...0d7eaeeb4549ed59b58f8d984be2a22a80355.json | 22 + ...ac429aac3c030f7e226a1264243d8cdae038d.json | 17 + ...03d8bd356f2f9cc249cd8b90caa5a8b5918e3.json | 23 + ...bed26c831730944237b74bed98869c83b3ca5.json | 28 + ...cb21a635037d89ce24dd3ad58ffaadb59594a.json | 20 + ...b4afd384063ae394a847b26304dd18d490ab4.json | 28 + ...41368993009bb4bd90c2ad177ce56317aa04c.json | 257 + ...3e67f08f2ead5f55bfb6594e50346bf9cf2ef.json | 32 + ...32432295b2f7ee40bf13522a6edbd236f1970.json | 29 + ...3b438f14b68535111cf2cedd363fc646aac99.json | 20 + ...0015eeb3ef2643ceda3024504a471b24d1283.json | 254 + ...8242aad3e9a47400f6d6837a35f4c54a216b9.json | 20 + ...fd17f833fb15b58b0110c7cc566946db98e76.json | 94 + ...3d2839f0f928c06b8020eecec38e895f99b42.json | 28 + ...813d2b2d411bd5faf8306cd48db500532b711.json | 29 + ...33c83477d65a6f8cb2673f67b3148cd95b436.json | 20 + ...75da2787332deca5708d1d08730cdbfc73541.json | 136 + ...7e88abd0f8164c2413dc83c91c29665ca645e.json | 35 + ...e8ee5e54b170de9da863bbdbc79e3f206640b.json | 14 + ...2060fbea775dc185f639139fbfd23e4d5f3c6.json | 15 + ...76fb01e6629e8c982c265f2af1d5000090572.json | 20 + ...c82627a936f7ea9f6c354eca4bea76fac6b10.json | 20 + ...2e2ca92fdfdf01cfd0b11f5ce24f0458a5e48.json | 26 + ...550b8db817630d1a9341db4a1f453f12e64fb.json | 34 + ...3aa4239d3a8be71b47178b4b8fb11fe898b31.json | 16 + ...70a4e629b2a1cde641e74e4e55bb100df809f.json | 22 + ...8f8af02aa297d85a2695c5f448ed14b2d7386.json | 19 + ...f35d55901fb1e6805f23413ea43e3637540a0.json | 28 + ...b957e92cd375ec33fe16f855f319ffc0b208e.json | 118 + ...8c7ffd9a04eae27afbdf37a6ba8ff7ac85f3b.json | 22 + ...6bc6419ba51514519652e055c769b096015f6.json | 22 + ...cb35d4d07d47f33fe1a5b9e9fe1f0ae09b705.json | 28 + ...71ababa66e4a443fbefbfffca72b7540b075b.json | 15 + ...85b19e1294b3aa6e084d2051bb532408672be.json | 12 + ...91a9984685eaaaa0a8b223410d560a15a3034.json | 61 + ...6686e655206601854799139c22c017a214744.json | 19 + ...78a8a0ec739f4ddec8ffeb3a87253aeb18d30.json | 14 + ...c3465e2211ef3013386feb12d4cc04e0eade9.json | 60 + 
...15aaade450980719933089824eb8c494d64a4.json | 15 + ...4312b2b780563a9fde48bae5e51650475670f.json | 82 + ...d056cc9bae25bc070c5de8424f88fab20e5ea.json | 28 + ...bae63443c51deb818dd0affd1a0949b161737.json | 16 + ...5ff66f2c3b2b83d45360818a8782e56aa3d66.json | 36 + ...6efe8d41759bbdcdd6d39db93208f2664f03a.json | 22 + ...cfdc050b85e6d773b13f9699f15c335d42593.json | 22 + ...4867ed765dcb9dc60754df9df8700d4244bfb.json | 44 + ...17db60405a887f9f7fa0ca60aa7fc879ce630.json | 16 + ...227ecaa45d76d1ba260074898fe57c059a15a.json | 232 + ...5e6f8b7f88a0894a7f9e27fc26f93997d37c7.json | 24 + ...63a1e89945f8d5e0f4da42ecf6313c4f5967e.json | 20 + ...d425bac8ed3d2780c60c9b63afbcea131f9a0.json | 15 + ...c7dfb7aad7261e5fc402d845aedc3b91a4e99.json | 23 + ...304e8a35fd65bf37e976b7106f57c57e70b9b.json | 16 + ...a58f82acd817932415f04bcbd05442ad80c2b.json | 23 + ...5d9ec9b3f628c3a4cf5e10580ea6e5e3a2429.json | 14 + ...eb09b538a67d1c39fda052c4f4ddb23ce0084.json | 22 + ...5223f4599d4128db588d8645f3d106de5f50b.json | 20 + core/lib/dal/Cargo.toml | 24 +- core/lib/{zksync_core => dal}/build.rs | 6 +- ...1013163109_create_snapshots_table.down.sql | 1 + ...231013163109_create_snapshots_table.up.sql | 9 + ...ate_consensus_replica_state_table.down.sql | 1 + ...reate_consensus_replica_state_table.up.sql | 6 + ...witness_generation_related_tables.down.sql | 83 + ...d_witness_generation_related_tables.up.sql | 4 + .../20231213192041_snapshot-recovery.down.sql | 1 + .../20231213192041_snapshot-recovery.up.sql | 13 + ...20231225083442_add-pub-data-input.down.sql | 1 + .../20231225083442_add-pub-data-input.up.sql | 1 + ...20231229181653_fair_pubdata_price.down.sql | 1 + .../20231229181653_fair_pubdata_price.up.sql | 2 + ...ove_consensus_fields_to_new_table.down.sql | 2 + ..._move_consensus_fields_to_new_table.up.sql | 11 + ...5908_remove_old_prover_subsystems.down.sql | 52 + ...125908_remove_old_prover_subsystems.up.sql | 5 + ...21833_l1-batch-predicted-circuits.down.sql | 2 + ...4121833_l1-batch-predicted-circuits.up.sql | 2 + core/lib/dal/sqlx-data.json | 12413 +--------------- core/lib/dal/src/accounts_dal.rs | 25 +- .../src/basic_witness_input_producer_dal.rs | 129 +- core/lib/dal/src/blocks_dal.rs | 1595 +- core/lib/dal/src/blocks_web3_dal.rs | 597 +- core/lib/dal/src/connection/holder.rs | 5 +- core/lib/dal/src/connection/mod.rs | 215 +- core/lib/dal/src/consensus_dal.rs | 238 + core/lib/dal/src/contract_verification_dal.rs | 284 +- core/lib/dal/src/eth_sender_dal.rs | 296 +- core/lib/dal/src/events_dal.rs | 50 +- core/lib/dal/src/events_web3_dal.rs | 76 +- core/lib/dal/src/fri_gpu_prover_queue_dal.rs | 147 +- core/lib/dal/src/fri_proof_compressor_dal.rs | 223 +- core/lib/dal/src/fri_protocol_versions_dal.rs | 40 +- core/lib/dal/src/fri_prover_dal.rs | 470 +- .../fri_scheduler_dependency_tracker_dal.rs | 70 +- core/lib/dal/src/fri_witness_generator_dal.rs | 836 +- core/lib/dal/src/gpu_prover_queue_dal.rs | 170 - core/lib/dal/src/healthcheck.rs | 10 +- core/lib/dal/src/instrument.rs | 4 +- core/lib/dal/src/lib.rs | 98 +- core/lib/dal/src/metrics.rs | 4 +- core/lib/dal/src/models/mod.rs | 1 + .../src/models}/proto/mod.proto | 8 +- core/lib/dal/src/models/proto/mod.rs | 2 + core/lib/dal/src/models/storage_block.rs | 148 +- core/lib/dal/src/models/storage_eth_tx.rs | 11 +- core/lib/dal/src/models/storage_event.rs | 2 +- core/lib/dal/src/models/storage_log.rs | 12 +- .../src/models/storage_protocol_version.rs | 4 +- .../dal/src/models/storage_prover_job_info.rs | 15 +- core/lib/dal/src/models/storage_sync.rs | 247 +- 
core/lib/dal/src/models/storage_token.rs | 1 - .../lib/dal/src/models/storage_transaction.rs | 33 +- .../models/storage_verification_request.rs | 10 +- .../src/models/storage_witness_job_info.rs | 14 +- core/lib/dal/src/proof_generation_dal.rs | 126 +- core/lib/dal/src/protocol_versions_dal.rs | 282 +- .../lib/dal/src/protocol_versions_web3_dal.rs | 25 +- core/lib/dal/src/prover_dal.rs | 627 - core/lib/dal/src/snapshot_recovery_dal.rs | 135 + core/lib/dal/src/snapshots_creator_dal.rs | 129 + core/lib/dal/src/snapshots_dal.rs | 259 + core/lib/dal/src/storage_dal.rs | 100 +- core/lib/dal/src/storage_logs_dal.rs | 451 +- core/lib/dal/src/storage_logs_dedup_dal.rs | 85 +- core/lib/dal/src/storage_web3_dal.rs | 264 +- core/lib/dal/src/sync_dal.rs | 243 +- core/lib/dal/src/system_dal.rs | 2 +- core/lib/dal/src/tests/mod.rs | 455 +- core/lib/dal/src/time_utils.rs | 4 +- core/lib/dal/src/tokens_dal.rs | 131 +- core/lib/dal/src/tokens_web3_dal.rs | 32 +- core/lib/dal/src/transactions_dal.rs | 857 +- core/lib/dal/src/transactions_web3_dal.rs | 359 +- core/lib/dal/src/witness_generator_dal.rs | 930 -- core/lib/env_config/src/alerts.rs | 3 +- core/lib/env_config/src/api.rs | 20 +- core/lib/env_config/src/chain.rs | 185 +- .../lib/env_config/src/circuit_synthesizer.rs | 51 - core/lib/env_config/src/contracts.rs | 3 +- core/lib/env_config/src/database.rs | 42 +- core/lib/env_config/src/fetcher.rs | 68 - .../env_config/src/fri_proof_compressor.rs | 3 +- core/lib/env_config/src/fri_prover.rs | 3 + core/lib/env_config/src/lib.rs | 5 +- core/lib/env_config/src/object_store.rs | 25 + core/lib/env_config/src/prover.rs | 197 - core/lib/env_config/src/prover_group.rs | 149 - core/lib/env_config/src/snapshots_creator.rs | 9 + core/lib/env_config/src/test_utils.rs | 3 +- core/lib/env_config/src/utils.rs | 3 +- core/lib/eth_client/Cargo.toml | 11 +- core/lib/eth_client/src/clients/generic.rs | 170 + core/lib/eth_client/src/clients/http/mod.rs | 10 +- core/lib/eth_client/src/clients/http/query.rs | 76 +- .../eth_client/src/clients/http/signing.rs | 86 +- core/lib/eth_client/src/clients/mock.rs | 491 +- core/lib/eth_client/src/clients/mod.rs | 12 +- core/lib/eth_client/src/lib.rs | 93 +- core/lib/eth_client/src/types.rs | 114 +- core/lib/eth_signer/src/json_rpc_signer.rs | 33 +- core/lib/eth_signer/src/lib.rs | 11 +- core/lib/eth_signer/src/pk_signer.rs | 20 +- core/lib/eth_signer/src/raw_ethereum_tx.rs | 16 +- core/lib/health_check/src/lib.rs | 9 +- core/lib/mempool/src/mempool_store.rs | 5 +- core/lib/mempool/src/tests.rs | 32 +- core/lib/mempool/src/types.rs | 28 +- core/lib/merkle_tree/Cargo.toml | 2 +- core/lib/merkle_tree/README.md | 2 +- .../lib/merkle_tree/examples/loadtest/main.rs | 37 +- core/lib/merkle_tree/examples/recovery.rs | 17 +- core/lib/merkle_tree/src/consistency.rs | 49 +- core/lib/merkle_tree/src/domain.rs | 167 +- core/lib/merkle_tree/src/errors.rs | 3 +- core/lib/merkle_tree/src/getters.rs | 114 +- core/lib/merkle_tree/src/hasher/mod.rs | 37 +- core/lib/merkle_tree/src/hasher/nodes.rs | 3 +- core/lib/merkle_tree/src/hasher/proofs.rs | 62 +- core/lib/merkle_tree/src/lib.rs | 47 +- core/lib/merkle_tree/src/metrics.rs | 3 +- core/lib/merkle_tree/src/pruning.rs | 30 +- core/lib/merkle_tree/src/recovery.rs | 63 +- core/lib/merkle_tree/src/storage/mod.rs | 79 +- core/lib/merkle_tree/src/storage/patch.rs | 29 +- core/lib/merkle_tree/src/storage/proofs.rs | 224 +- core/lib/merkle_tree/src/storage/rocksdb.rs | 10 +- .../merkle_tree/src/storage/serialization.rs | 14 +- 
core/lib/merkle_tree/src/storage/tests.rs | 140 +- core/lib/merkle_tree/src/types/internal.rs | 38 +- core/lib/merkle_tree/src/types/mod.rs | 101 +- core/lib/merkle_tree/src/utils.rs | 5 - .../merkle_tree/tests/integration/common.rs | 53 +- .../tests/integration/consistency.rs | 8 +- .../merkle_tree/tests/integration/domain.rs | 94 +- .../tests/integration/merkle_tree.rs | 129 +- .../merkle_tree/tests/integration/recovery.rs | 50 +- core/lib/mini_merkle_tree/README.md | 4 +- core/lib/mini_merkle_tree/benches/tree.rs | 1 - core/lib/mini_merkle_tree/src/lib.rs | 4 +- core/lib/multivm/Cargo.toml | 8 +- core/lib/multivm/src/glue/history_mode.rs | 22 +- core/lib/multivm/src/glue/mod.rs | 4 +- core/lib/multivm/src/glue/tracers/mod.rs | 58 +- core/lib/multivm/src/glue/types/mod.rs | 1 + .../src/glue/types/vm/block_context_mode.rs | 24 +- .../src/glue/types/vm/tx_execution_mode.rs | 8 +- .../src/glue/types/vm/vm_block_result.rs | 21 +- .../types/vm/vm_partial_execution_result.rs | 7 +- .../glue/types/vm/vm_tx_execution_result.rs | 7 +- .../multivm/src/glue/types/zk_evm_1_4_1.rs | 65 + .../traits/tracers/dyn_tracers/mod.rs | 1 + .../traits/tracers/dyn_tracers/vm_1_3_3.rs | 6 +- .../traits/tracers/dyn_tracers/vm_1_4_0.rs | 8 +- .../traits/tracers/dyn_tracers/vm_1_4_1.rs | 33 + core/lib/multivm/src/interface/traits/vm.rs | 35 +- .../src/interface/types/errors/halt.rs | 11 +- .../types/errors/tx_revert_reason.rs | 6 +- .../types/errors/vm_revert_reason.rs | 4 +- .../interface/types/inputs/l1_batch_env.rs | 26 +- .../types/outputs/execution_result.rs | 14 +- .../types/outputs/execution_state.rs | 6 +- .../types/outputs/finished_l1batch.rs | 1 + .../src/interface/types/outputs/mod.rs | 15 +- .../src/interface/types/outputs/statistic.rs | 1 + core/lib/multivm/src/lib.rs | 25 +- .../src/tracers/call_tracer/metrics.rs | 15 + .../multivm/src/tracers/call_tracer/mod.rs | 51 +- .../call_tracer/vm_boojum_integration/mod.rs | 216 + .../src/tracers/call_tracer/vm_latest/mod.rs | 39 +- .../call_tracer/vm_refunds_enhancement/mod.rs | 37 +- .../call_tracer/vm_virtual_blocks/mod.rs | 44 +- .../multivm/src/tracers/multivm_dispatcher.rs | 19 +- .../src/tracers/storage_invocation/mod.rs | 1 + .../vm_boojum_integration/mod.rs | 35 + .../storage_invocation/vm_latest/mod.rs | 17 +- .../vm_refunds_enhancement/mod.rs | 15 +- .../vm_virtual_blocks/mod.rs | 15 +- core/lib/multivm/src/tracers/validator/mod.rs | 28 +- .../multivm/src/tracers/validator/types.rs | 8 +- .../validator/vm_boojum_integration/mod.rs | 201 + .../src/tracers/validator/vm_latest/mod.rs | 46 +- .../validator/vm_refunds_enhancement/mod.rs | 33 +- .../validator/vm_virtual_blocks/mod.rs | 30 +- core/lib/multivm/src/utils.rs | 249 + core/lib/multivm/src/versions/README.md | 11 + core/lib/multivm/src/versions/mod.rs | 1 + .../src/versions/vm_1_3_2/bootloader_state.rs | 2 +- .../vm_1_3_2/errors/tx_revert_reason.rs | 8 +- .../vm_1_3_2/errors/vm_revert_reason.rs | 10 +- .../src/versions/vm_1_3_2/event_sink.rs | 12 +- .../src/versions/vm_1_3_2/history_recorder.rs | 10 +- .../multivm/src/versions/vm_1_3_2/memory.rs | 28 +- core/lib/multivm/src/versions/vm_1_3_2/mod.rs | 31 +- .../src/versions/vm_1_3_2/oracle_tools.rs | 19 +- .../versions/vm_1_3_2/oracles/decommitter.rs | 24 +- .../src/versions/vm_1_3_2/oracles/mod.rs | 9 +- .../versions/vm_1_3_2/oracles/precompile.rs | 7 +- .../src/versions/vm_1_3_2/oracles/storage.rs | 31 +- .../vm_1_3_2/oracles/tracer/bootloader.rs | 20 +- .../versions/vm_1_3_2/oracles/tracer/call.rs | 46 +- 
.../versions/vm_1_3_2/oracles/tracer/mod.rs | 26 +- .../vm_1_3_2/oracles/tracer/one_tx.rs | 22 +- .../oracles/tracer/transaction_result.rs | 18 +- .../versions/vm_1_3_2/oracles/tracer/utils.rs | 27 +- .../vm_1_3_2/oracles/tracer/validation.rs | 40 +- .../src/versions/vm_1_3_2/pubdata_utils.rs | 16 +- .../multivm/src/versions/vm_1_3_2/refunds.rs | 35 +- .../src/versions/vm_1_3_2/test_utils.rs | 10 +- .../src/versions/vm_1_3_2/tests/bootloader.rs | 548 +- .../src/versions/vm_1_3_2/tests/mod.rs | 2 + .../src/versions/vm_1_3_2/tests/upgrades.rs | 106 +- .../src/versions/vm_1_3_2/tests/utils.rs | 34 +- .../src/versions/vm_1_3_2/transaction_data.rs | 155 +- .../multivm/src/versions/vm_1_3_2/utils.rs | 19 +- core/lib/multivm/src/versions/vm_1_3_2/vm.rs | 62 +- .../src/versions/vm_1_3_2/vm_instance.rs | 105 +- .../versions/vm_1_3_2/vm_with_bootloader.rs | 102 +- .../versions/vm_boojum_integration/README.md | 44 + .../bootloader_state/l2_block.rs | 87 + .../bootloader_state/mod.rs | 8 + .../bootloader_state/snapshot.rs | 25 + .../bootloader_state/state.rs | 295 + .../bootloader_state/tx.rs | 49 + .../bootloader_state/utils.rs | 177 + .../vm_boojum_integration/constants.rs | 144 + .../implementation/bytecode.rs | 58 + .../implementation/execution.rs | 137 + .../implementation/gas.rs | 43 + .../implementation/logs.rs | 74 + .../implementation/mod.rs | 7 + .../implementation/snapshots.rs | 89 + .../implementation/statistics.rs | 72 + .../implementation/tx.rs | 68 + .../src/versions/vm_boojum_integration/mod.rs | 34 + .../old_vm/event_sink.rs | 263 + .../vm_boojum_integration/old_vm/events.rs | 146 + .../old_vm/history_recorder.rs | 811 + .../vm_boojum_integration/old_vm/memory.rs | 327 + .../vm_boojum_integration/old_vm/mod.rs | 8 + .../old_vm/oracles/decommitter.rs | 236 + .../old_vm/oracles/mod.rs | 8 + .../old_vm/oracles/precompile.rs | 114 + .../old_vm/oracles/storage.rs | 2 +- .../vm_boojum_integration/old_vm/utils.rs | 221 + .../vm_boojum_integration/oracles/mod.rs | 1 + .../vm_boojum_integration/oracles/storage.rs | 509 + .../vm_boojum_integration/tests/bootloader.rs | 56 + .../tests/bytecode_publishing.rs | 43 + .../tests/call_tracer.rs | 92 + .../vm_boojum_integration/tests/circuits.rs | 44 + .../vm_boojum_integration/tests/default_aa.rs | 76 + .../vm_boojum_integration/tests/gas_limit.rs | 47 + .../tests/get_used_contracts.rs | 109 + .../tests/invalid_bytecode.rs | 120 + .../tests/is_write_initial.rs | 48 + .../tests/l1_tx_execution.rs | 139 + .../vm_boojum_integration/tests/l2_blocks.rs | 437 + .../vm_boojum_integration/tests/mod.rs | 22 + .../tests/nonce_holder.rs | 188 + .../tests/precompiles.rs | 136 + .../vm_boojum_integration/tests/refunds.rs | 167 + .../tests/require_eip712.rs | 165 + .../vm_boojum_integration/tests/rollbacks.rs | 263 + .../tests/simple_execution.rs | 81 + .../tests/tester/inner_state.rs | 130 + .../vm_boojum_integration/tests/tester/mod.rs | 7 + .../tests/tester/transaction_test_info.rs | 217 + .../tests/tester/vm_tester.rs | 295 + .../tests/tracing_execution_error.rs | 54 + .../vm_boojum_integration/tests/upgrade.rs | 362 + .../vm_boojum_integration/tests/utils.rs | 111 + .../tracers/circuits_capacity.rs | 85 + .../tracers/circuits_tracer.rs | 192 + .../tracers/default_tracers.rs | 311 + .../tracers/dispatcher.rs | 126 + .../vm_boojum_integration/tracers/mod.rs | 16 + .../tracers/pubdata_tracer.rs | 212 + .../vm_boojum_integration/tracers/refunds.rs | 352 + .../tracers/result_tracer.rs | 246 + .../vm_boojum_integration/tracers/traits.rs | 47 + 
.../vm_boojum_integration/tracers/utils.rs | 225 + .../types/internals/mod.rs | 9 + .../types/internals/pubdata.rs | 124 + .../types/internals/snapshot.rs | 11 + .../types/internals/transaction_data.rs | 358 + .../types/internals/vm_state.rs | 183 + .../vm_boojum_integration/types/l1_batch.rs | 43 + .../vm_boojum_integration/types/mod.rs | 2 + .../vm_boojum_integration/utils/fee.rs | 53 + .../vm_boojum_integration/utils/l2_blocks.rs | 95 + .../vm_boojum_integration/utils/logs.rs | 25 + .../vm_boojum_integration/utils/mod.rs | 6 + .../vm_boojum_integration/utils/overhead.rs | 351 + .../utils/transaction_encoding.rs | 16 + .../src/versions/vm_boojum_integration/vm.rs | 190 + .../vm_latest/bootloader_state/l2_block.rs | 16 +- .../vm_latest/bootloader_state/snapshot.rs | 6 +- .../vm_latest/bootloader_state/state.rs | 31 +- .../versions/vm_latest/bootloader_state/tx.rs | 21 +- .../vm_latest/bootloader_state/utils.rs | 42 +- .../src/versions/vm_latest/constants.rs | 91 +- .../vm_latest/implementation/bytecode.rs | 11 +- .../vm_latest/implementation/execution.rs | 48 +- .../versions/vm_latest/implementation/gas.rs | 9 +- .../versions/vm_latest/implementation/logs.rs | 25 +- .../vm_latest/implementation/snapshots.rs | 9 +- .../vm_latest/implementation/statistics.rs | 18 +- .../versions/vm_latest/implementation/tx.rs | 25 +- .../lib/multivm/src/versions/vm_latest/mod.rs | 43 +- .../versions/vm_latest/old_vm/event_sink.rs | 16 +- .../src/versions/vm_latest/old_vm/events.rs | 2 +- .../vm_latest/old_vm/history_recorder.rs | 18 +- .../src/versions/vm_latest/old_vm/memory.rs | 30 +- .../vm_latest/old_vm/oracles/decommitter.rs | 31 +- .../versions/vm_latest/old_vm/oracles/mod.rs | 2 +- .../vm_latest/old_vm/oracles/precompile.rs | 79 +- .../src/versions/vm_latest/old_vm/utils.rs | 30 +- .../src/versions/vm_latest/oracles/storage.rs | 77 +- .../versions/vm_latest/tests/bootloader.rs | 18 +- .../vm_latest/tests/bytecode_publishing.rs | 14 +- .../versions/vm_latest/tests/call_tracer.rs | 22 +- .../src/versions/vm_latest/tests/circuits.rs | 44 + .../versions/vm_latest/tests/default_aa.rs | 28 +- .../src/versions/vm_latest/tests/gas_limit.rs | 16 +- .../vm_latest/tests/get_used_contracts.rs | 28 +- .../vm_latest/tests/is_write_initial.rs | 14 +- .../vm_latest/tests/l1_tx_execution.rs | 83 +- .../src/versions/vm_latest/tests/l2_blocks.rs | 62 +- .../src/versions/vm_latest/tests/mod.rs | 4 +- .../versions/vm_latest/tests/nonce_holder.rs | 23 +- .../versions/vm_latest/tests/precompiles.rs | 136 + .../src/versions/vm_latest/tests/refunds.rs | 20 +- .../vm_latest/tests/require_eip712.rs | 36 +- .../src/versions/vm_latest/tests/rollbacks.rs | 29 +- .../vm_latest/tests/simple_execution.rs | 10 +- .../vm_latest/tests/tester/inner_state.rs | 24 +- .../tests/tester/transaction_test_info.rs | 12 +- .../vm_latest/tests/tester/vm_tester.rs | 54 +- .../tests/tracing_execution_error.rs | 15 +- .../src/versions/vm_latest/tests/upgrade.rs | 52 +- .../src/versions/vm_latest/tests/utils.rs | 25 +- .../vm_latest/tracers/circuits_capacity.rs | 85 + .../vm_latest/tracers/circuits_tracer.rs | 192 + .../vm_latest/tracers/default_tracers.rs | 81 +- .../versions/vm_latest/tracers/dispatcher.rs | 17 +- .../src/versions/vm_latest/tracers/mod.rs | 3 + .../vm_latest/tracers/pubdata_tracer.rs | 38 +- .../src/versions/vm_latest/tracers/refunds.rs | 71 +- .../vm_latest/tracers/result_tracer.rs | 36 +- .../src/versions/vm_latest/tracers/traits.rs | 17 +- .../src/versions/vm_latest/tracers/utils.rs | 30 +- 
.../versions/vm_latest/types/internals/mod.rs | 3 +- .../vm_latest/types/internals/pubdata.rs | 35 +- .../vm_latest/types/internals/snapshot.rs | 2 +- .../types/internals/transaction_data.rs | 72 +- .../vm_latest/types/internals/vm_state.rs | 52 +- .../src/versions/vm_latest/types/l1_batch.rs | 17 +- .../src/versions/vm_latest/utils/fee.rs | 77 +- .../src/versions/vm_latest/utils/l2_blocks.rs | 12 +- .../src/versions/vm_latest/utils/logs.rs | 10 +- .../src/versions/vm_latest/utils/overhead.rs | 353 +- .../vm_latest/utils/transaction_encoding.rs | 3 +- core/lib/multivm/src/versions/vm_latest/vm.rs | 73 +- .../src/versions/vm_m5/bootloader_state.rs | 2 +- .../versions/vm_m5/errors/tx_revert_reason.rs | 8 +- .../versions/vm_m5/errors/vm_revert_reason.rs | 8 +- .../multivm/src/versions/vm_m5/event_sink.rs | 7 +- .../src/versions/vm_m5/history_recorder.rs | 11 +- core/lib/multivm/src/versions/vm_m5/memory.rs | 26 +- core/lib/multivm/src/versions/vm_m5/mod.rs | 27 +- .../src/versions/vm_m5/oracle_tools.rs | 19 +- .../src/versions/vm_m5/oracles/decommitter.rs | 19 +- .../multivm/src/versions/vm_m5/oracles/mod.rs | 9 +- .../src/versions/vm_m5/oracles/precompile.rs | 7 +- .../src/versions/vm_m5/oracles/storage.rs | 33 +- .../src/versions/vm_m5/oracles/tracer.rs | 49 +- .../src/versions/vm_m5/pubdata_utils.rs | 23 +- .../lib/multivm/src/versions/vm_m5/refunds.rs | 36 +- .../lib/multivm/src/versions/vm_m5/storage.rs | 5 +- .../multivm/src/versions/vm_m5/test_utils.rs | 15 +- .../src/versions/vm_m5/tests/bootloader.rs | 422 +- .../src/versions/vm_m5/transaction_data.rs | 112 +- core/lib/multivm/src/versions/vm_m5/utils.rs | 16 +- core/lib/multivm/src/versions/vm_m5/vm.rs | 45 +- .../multivm/src/versions/vm_m5/vm_instance.rs | 103 +- .../src/versions/vm_m5/vm_with_bootloader.rs | 94 +- .../src/versions/vm_m6/bootloader_state.rs | 2 +- .../versions/vm_m6/errors/tx_revert_reason.rs | 8 +- .../versions/vm_m6/errors/vm_revert_reason.rs | 10 +- .../multivm/src/versions/vm_m6/event_sink.rs | 12 +- .../src/versions/vm_m6/history_recorder.rs | 9 +- core/lib/multivm/src/versions/vm_m6/memory.rs | 30 +- .../src/versions/vm_m6/oracle_tools.rs | 22 +- .../src/versions/vm_m6/oracles/decommitter.rs | 24 +- .../multivm/src/versions/vm_m6/oracles/mod.rs | 9 +- .../src/versions/vm_m6/oracles/precompile.rs | 7 +- .../src/versions/vm_m6/oracles/storage.rs | 36 +- .../vm_m6/oracles/tracer/bootloader.rs | 20 +- .../src/versions/vm_m6/oracles/tracer/call.rs | 52 +- .../src/versions/vm_m6/oracles/tracer/mod.rs | 26 +- .../versions/vm_m6/oracles/tracer/one_tx.rs | 22 +- .../oracles/tracer/transaction_result.rs | 18 +- .../versions/vm_m6/oracles/tracer/utils.rs | 27 +- .../vm_m6/oracles/tracer/validation.rs | 47 +- .../src/versions/vm_m6/pubdata_utils.rs | 23 +- .../lib/multivm/src/versions/vm_m6/refunds.rs | 37 +- .../lib/multivm/src/versions/vm_m6/storage.rs | 5 +- .../multivm/src/versions/vm_m6/test_utils.rs | 13 +- .../src/versions/vm_m6/tests/bootloader.rs | 544 +- .../src/versions/vm_m6/transaction_data.rs | 163 +- core/lib/multivm/src/versions/vm_m6/utils.rs | 24 +- core/lib/multivm/src/versions/vm_m6/vm.rs | 66 +- .../multivm/src/versions/vm_m6/vm_instance.rs | 111 +- .../src/versions/vm_m6/vm_with_bootloader.rs | 101 +- .../bootloader_state/l2_block.rs | 14 +- .../bootloader_state/snapshot.rs | 6 +- .../bootloader_state/state.rs | 23 +- .../bootloader_state/tx.rs | 5 +- .../bootloader_state/utils.rs | 29 +- .../vm_refunds_enhancement/constants.rs | 33 +- .../implementation/bytecode.rs | 11 +- 
.../implementation/execution.rs | 21 +- .../implementation/gas.rs | 7 +- .../implementation/logs.rs | 20 +- .../implementation/snapshots.rs | 15 +- .../implementation/statistics.rs | 13 +- .../implementation/tx.rs | 25 +- .../versions/vm_refunds_enhancement/mod.rs | 39 +- .../old_vm/event_sink.rs | 12 +- .../old_vm/history_recorder.rs | 14 +- .../vm_refunds_enhancement/old_vm/memory.rs | 30 +- .../old_vm/oracles/decommitter.rs | 28 +- .../old_vm/oracles/precompile.rs | 7 +- .../vm_refunds_enhancement/old_vm/utils.rs | 21 +- .../vm_refunds_enhancement/oracles/storage.rs | 39 +- .../tests/require_eip712.rs | 2 +- .../tests/tester/inner_state.rs | 4 +- .../vm_refunds_enhancement/tests/utils.rs | 4 +- .../tracers/default_tracers.rs | 42 +- .../tracers/dispatcher.rs | 15 +- .../vm_refunds_enhancement/tracers/refunds.rs | 51 +- .../tracers/result_tracer.rs | 36 +- .../vm_refunds_enhancement/tracers/traits.rs | 17 +- .../vm_refunds_enhancement/tracers/utils.rs | 23 +- .../types/internals/transaction_data.rs | 44 +- .../types/internals/vm_state.rs | 46 +- .../vm_refunds_enhancement/types/l1_batch.rs | 14 +- .../vm_refunds_enhancement/utils/fee.rs | 40 +- .../vm_refunds_enhancement/utils/l2_blocks.rs | 12 +- .../vm_refunds_enhancement/utils/overhead.rs | 146 +- .../utils/transaction_encoding.rs | 3 +- .../src/versions/vm_refunds_enhancement/vm.rs | 42 +- .../bootloader_state/l2_block.rs | 14 +- .../bootloader_state/snapshot.rs | 6 +- .../bootloader_state/state.rs | 25 +- .../vm_virtual_blocks/bootloader_state/tx.rs | 5 +- .../bootloader_state/utils.rs | 29 +- .../versions/vm_virtual_blocks/constants.rs | 33 +- .../implementation/bytecode.rs | 11 +- .../implementation/execution.rs | 24 +- .../vm_virtual_blocks/implementation/gas.rs | 7 +- .../vm_virtual_blocks/implementation/logs.rs | 20 +- .../implementation/snapshots.rs | 13 +- .../implementation/statistics.rs | 15 +- .../vm_virtual_blocks/implementation/tx.rs | 25 +- .../src/versions/vm_virtual_blocks/mod.rs | 38 +- .../vm_virtual_blocks/old_vm/event_sink.rs | 12 +- .../old_vm/history_recorder.rs | 14 +- .../vm_virtual_blocks/old_vm/memory.rs | 30 +- .../old_vm/oracles/decommitter.rs | 28 +- .../old_vm/oracles/precompile.rs | 7 +- .../old_vm/oracles/storage.rs | 32 +- .../vm_virtual_blocks/old_vm/utils.rs | 21 +- .../vm_virtual_blocks/tests/require_eip712.rs | 2 +- .../tests/tester/inner_state.rs | 4 +- .../versions/vm_virtual_blocks/tests/utils.rs | 4 +- .../tracers/default_tracers.rs | 44 +- .../vm_virtual_blocks/tracers/dispatcher.rs | 19 +- .../vm_virtual_blocks/tracers/refunds.rs | 54 +- .../tracers/result_tracer.rs | 42 +- .../vm_virtual_blocks/tracers/traits.rs | 17 +- .../vm_virtual_blocks/tracers/utils.rs | 28 +- .../types/internals/transaction_data.rs | 44 +- .../types/internals/vm_state.rs | 46 +- .../vm_virtual_blocks/types/l1_batch_env.rs | 15 +- .../versions/vm_virtual_blocks/utils/fee.rs | 41 +- .../vm_virtual_blocks/utils/l2_blocks.rs | 12 +- .../vm_virtual_blocks/utils/overhead.rs | 128 +- .../utils/transaction_encoding.rs | 3 +- .../src/versions/vm_virtual_blocks/vm.rs | 45 +- core/lib/multivm/src/vm_instance.rs | 38 +- core/lib/object_store/Cargo.toml | 10 +- core/lib/object_store/src/file.rs | 9 +- core/lib/object_store/src/gcs.rs | 28 +- core/lib/object_store/src/metrics.rs | 4 +- core/lib/object_store/src/mock.rs | 8 +- core/lib/object_store/src/objects.rs | 164 +- core/lib/object_store/src/raw.rs | 28 +- core/lib/object_store/tests/integration.rs | 1 - core/lib/prometheus_exporter/Cargo.toml | 4 +- 
core/lib/prometheus_exporter/src/lib.rs | 4 +-
core/lib/prover_utils/Cargo.toml | 26 -
.../lib/prover_utils/src/gcs_proof_fetcher.rs | 23 -
core/lib/prover_utils/src/lib.rs | 126 -
core/lib/prover_utils/src/region_fetcher.rs | 110 -
core/lib/queued_job_processor/Cargo.toml | 2 +-
core/lib/queued_job_processor/src/lib.rs | 33 +-
core/lib/state/Cargo.toml | 2 +-
core/lib/state/README.md | 4 +-
core/lib/state/src/cache/metrics.rs | 6 +-
core/lib/state/src/in_memory.rs | 12 +-
core/lib/state/src/lib.rs | 2 +-
core/lib/state/src/postgres/metrics.rs | 4 +-
core/lib/state/src/postgres/mod.rs | 11 +-
core/lib/state/src/postgres/tests.rs | 7 +-
core/lib/state/src/rocksdb/metrics.rs | 4 +-
core/lib/state/src/rocksdb/mod.rs | 25 +-
core/lib/state/src/shadow_storage.rs | 2 +-
core/lib/state/src/storage_view.rs | 10 +-
core/lib/state/src/test_utils.rs | 11 +-
core/lib/state/src/witness.rs | 3 +-
core/lib/storage/Cargo.toml | 2 +-
core/lib/storage/src/db.rs | 16 +-
core/lib/storage/src/metrics.rs | 8 +-
core/lib/test_account/src/lib.rs | 23 +-
core/lib/types/Cargo.toml | 16 +-
core/lib/types/build.rs | 4 +-
core/lib/types/src/aggregated_operations.rs | 12 +-
core/lib/types/src/api/en.rs | 9 +-
core/lib/types/src/api/mod.rs | 29 +-
core/lib/types/src/block.rs | 157 +-
core/lib/types/src/circuit.rs | 3 +-
core/lib/types/src/commitment.rs | 90 +-
.../types/src/contract_verification_api.rs | 3 +-
core/lib/types/src/eth_sender.rs | 5 +-
core/lib/types/src/event.rs | 27 +-
core/lib/types/src/fee.rs | 1 +
core/lib/types/src/fee_model.rs | 227 +
core/lib/types/src/l1/mod.rs | 13 +-
core/lib/types/src/l2/mod.rs | 45 +-
core/lib/types/src/l2_to_l1_log.rs | 21 +-
core/lib/types/src/lib.rs | 16 +-
.../lib/types/src/priority_op_onchain_data.rs | 4 +-
core/lib/types/src/proofs.rs | 43 +-
core/lib/types/src/proto/mod.proto | 21 +-
core/lib/types/src/proto/mod.rs | 1 +
core/lib/types/src/protocol_version.rs | 46 +-
core/lib/types/src/prover_server_api/mod.rs | 11 +-
core/lib/types/src/snapshots.rs | 198 +
core/lib/types/src/storage/log.rs | 5 +-
core/lib/types/src/storage/mod.rs | 2 +-
.../types/src/storage/witness_block_state.rs | 6 +-
.../types/src/storage/writes/compression.rs | 3 +-
core/lib/types/src/storage/writes/mod.rs | 15 +-
.../types/src/storage_writes_deduplicator.rs | 11 +-
core/lib/types/src/system_contracts.rs | 25 +-
core/lib/types/src/transaction_request.rs | 53 +-
core/lib/types/src/tx/execute.rs | 5 +-
core/lib/types/src/tx/mod.rs | 13 +-
.../eip712_signature/member_types.rs | 9 +-
.../eip712_signature/struct_builder.rs | 5 +-
.../tx/primitives/eip712_signature/tests.rs | 21 +-
.../eip712_signature/typed_structure.rs | 12 +-
.../tx/primitives/eip712_signature/utils.rs | 3 +-
.../src/tx/primitives/packed_eth_signature.rs | 15 +-
core/lib/types/src/tx/tx_execution_info.rs | 10 +-
core/lib/types/src/utils.rs | 15 +-
core/lib/types/src/vk_transform.rs | 4 +-
core/lib/types/src/vm_trace.rs | 12 +-
core/lib/types/src/vm_version.rs | 3 +-
core/lib/utils/Cargo.toml | 4 +-
core/lib/utils/src/bytecode.rs | 20 +-
core/lib/utils/src/convert.rs | 11 +-
core/lib/utils/src/http_with_retries.rs | 3 +-
core/lib/utils/src/misc.rs | 5 +-
core/lib/vlog/src/lib.rs | 14 +-
core/lib/web3_decl/Cargo.toml | 4 +-
core/lib/web3_decl/src/error.rs | 8 +-
core/lib/web3_decl/src/lib.rs | 4 +-
core/lib/web3_decl/src/namespaces/debug.rs | 8 +-
core/lib/web3_decl/src/namespaces/eth.rs | 26 +-
core/lib/web3_decl/src/namespaces/mod.rs | 18 +-
.../lib/web3_decl/src/namespaces/snapshots.rs | 28 +
core/lib/web3_decl/src/namespaces/zks.rs | 10 +-
core/lib/web3_decl/src/types.rs | 63 +-
core/lib/zksync_core/Cargo.toml | 31 +-
.../contract_verification/api_decl.rs | 3 +-
.../contract_verification/api_impl.rs | 1 -
.../contract_verification/metrics.rs | 4 +-
.../api_server/contract_verification/mod.rs | 18 +-
.../src/api_server/execution_sandbox/apply.rs | 119 +-
.../src/api_server/execution_sandbox/error.rs | 6 +-
.../api_server/execution_sandbox/execute.rs | 235 +-
.../src/api_server/execution_sandbox/mod.rs | 148 +-
.../api_server/execution_sandbox/testonly.rs | 74 +
.../src/api_server/execution_sandbox/tests.rs | 155 +
.../api_server/execution_sandbox/tracers.rs | 12 +-
.../api_server/execution_sandbox/validate.rs | 65 +-
.../execution_sandbox/vm_metrics.rs | 12 +-
.../zksync_core/src/api_server/healthcheck.rs | 7 +-
.../src/api_server/tree/metrics.rs | 4 +-
.../zksync_core/src/api_server/tree/mod.rs | 15 +-
.../zksync_core/src/api_server/tree/tests.rs | 16 +-
.../src/api_server/tx_sender/mod.rs | 537 +-
.../src/api_server/tx_sender/proxy.rs | 8 +-
.../src/api_server/tx_sender/result.rs | 16 +-
.../src/api_server/tx_sender/tests.rs | 139 +
.../batch_limiter_middleware.rs | 145 -
.../api_server/web3/backend_jsonrpc/error.rs | 44 -
.../api_server/web3/backend_jsonrpc/mod.rs | 4 -
.../web3/backend_jsonrpc/namespaces/debug.rs | 97 -
.../web3/backend_jsonrpc/namespaces/en.rs | 40 -
.../web3/backend_jsonrpc/namespaces/eth.rs | 510 -
.../web3/backend_jsonrpc/namespaces/mod.rs | 6 -
.../web3/backend_jsonrpc/namespaces/net.rs | 37 -
.../web3/backend_jsonrpc/namespaces/web3.rs | 22 -
.../web3/backend_jsonrpc/namespaces/zks.rs | 359 -
.../web3/backend_jsonrpc/pub_sub.rs | 62 -
.../batch_limiter_middleware.rs | 93 +
.../api_server/web3/backend_jsonrpsee/mod.rs | 27 +-
.../backend_jsonrpsee/namespaces/debug.rs | 7 +-
.../web3/backend_jsonrpsee/namespaces/en.rs | 7 +-
.../web3/backend_jsonrpsee/namespaces/eth.rs | 11 +-
.../web3/backend_jsonrpsee/namespaces/mod.rs | 1 +
.../backend_jsonrpsee/namespaces/snapshots.rs | 28 +
.../web3/backend_jsonrpsee/namespaces/zks.rs | 27 +-
.../src/api_server/web3/metrics.rs | 31 +-
.../zksync_core/src/api_server/web3/mod.rs | 752 +-
.../src/api_server/web3/namespaces/debug.rs | 150 +-
.../src/api_server/web3/namespaces/en.rs | 27 +-
.../src/api_server/web3/namespaces/eth.rs | 234 +-
.../web3/namespaces/eth_subscribe.rs | 147 -
.../src/api_server/web3/namespaces/mod.rs | 11 +-
.../api_server/web3/namespaces/snapshots.rs | 108 +
.../src/api_server/web3/namespaces/zks.rs | 227 +-
.../zksync_core/src/api_server/web3/pubsub.rs | 457 +
.../src/api_server/web3/pubsub_notifier.rs | 191 -
.../zksync_core/src/api_server/web3/state.rs | 467 +-
.../zksync_core/src/api_server/web3/tests.rs | 145 -
.../src/api_server/web3/tests/debug.rs | 164 +
.../src/api_server/web3/tests/filters.rs | 261 +
.../src/api_server/web3/tests/mod.rs | 811 +
.../src/api_server/web3/tests/snapshots.rs | 101 +
.../src/api_server/web3/tests/vm.rs | 237 +
.../src/api_server/web3/tests/ws.rs | 666 +
.../src/basic_witness_input_producer/mod.rs | 28 +-
.../vm_interactions.rs | 15 +-
.../lib/zksync_core/src/block_reverter/mod.rs | 29 +-
core/lib/zksync_core/src/consensus/mod.rs | 212 +-
core/lib/zksync_core/src/consensus/payload.rs | 99 -
.../zksync_core/src/consensus/proto/mod.rs | 2 -
.../zksync_core/src/consensus/storage/mod.rs | 434 +
.../src/consensus/storage/testonly.rs | 47 +
.../lib/zksync_core/src/consensus/testonly.rs | 387 +
core/lib/zksync_core/src/consensus/tests.rs | 326 +
.../src/consistency_checker/mod.rs | 393 +-
...it_l1_batch_200000_testnet_goerli.calldata | Bin 0 -> 35012 bytes ...it_l1_batch_351000-351004_mainnet.calldata | Bin 0 -> 72932 bytes ...mit_l1_batch_4470_testnet_sepolia.calldata | Bin 0 -> 1956 bytes .../src/consistency_checker/tests/mod.rs | 605 + .../zksync_core/src/data_fetchers/error.rs | 83 - core/lib/zksync_core/src/data_fetchers/mod.rs | 34 - .../src/data_fetchers/token_list/mock.rs | 80 - .../src/data_fetchers/token_list/mod.rs | 134 - .../src/data_fetchers/token_list/one_inch.rs | 57 - .../data_fetchers/token_price/coingecko.rs | 167 - .../src/data_fetchers/token_price/mock.rs | 50 - .../src/data_fetchers/token_price/mod.rs | 134 - .../zksync_core/src/eth_sender/aggregator.rs | 50 +- core/lib/zksync_core/src/eth_sender/error.rs | 3 +- .../src/eth_sender/eth_tx_aggregator.rs | 124 +- .../src/eth_sender/eth_tx_manager.rs | 64 +- .../src/eth_sender/grafana_metrics.rs | 0 .../lib/zksync_core/src/eth_sender/metrics.rs | 5 +- .../src/eth_sender/publish_criterion.rs | 5 +- core/lib/zksync_core/src/eth_sender/tests.rs | 107 +- core/lib/zksync_core/src/eth_watch/client.rs | 63 +- .../event_processors/governance_upgrades.rs | 19 +- .../src/eth_watch/event_processors/mod.rs | 9 +- .../event_processors/priority_ops.rs | 4 +- .../eth_watch/event_processors/upgrades.rs | 7 +- core/lib/zksync_core/src/eth_watch/metrics.rs | 4 +- core/lib/zksync_core/src/eth_watch/mod.rs | 44 +- core/lib/zksync_core/src/eth_watch/tests.rs | 27 +- core/lib/zksync_core/src/fee_model.rs | 392 + .../zksync_core/src/gas_tracker/constants.rs | 4 +- core/lib/zksync_core/src/genesis.rs | 40 +- .../src/house_keeper/blocks_state_reporter.rs | 48 +- .../fri_proof_compressor_job_retry_manager.rs | 3 +- .../fri_proof_compressor_queue_monitor.rs | 22 +- .../fri_prover_job_retry_manager.rs | 3 +- .../house_keeper/fri_prover_queue_monitor.rs | 61 +- .../fri_scheduler_circuit_queuer.rs | 2 +- ...ri_witness_generator_jobs_retry_manager.rs | 3 +- .../fri_witness_generator_queue_monitor.rs | 2 +- .../house_keeper/gpu_prover_queue_monitor.rs | 67 - core/lib/zksync_core/src/house_keeper/mod.rs | 6 +- .../src/house_keeper}/periodic_job.rs | 2 +- .../house_keeper/prover_job_retry_manager.rs | 57 - .../src/house_keeper/prover_queue_monitor.rs | 84 - ...waiting_to_queued_fri_witness_job_mover.rs | 2 +- .../waiting_to_queued_witness_job_mover.rs | 96 - .../witness_generator_queue_monitor.rs | 122 - .../gas_adjuster/bounded_gas_adjuster.rs | 46 - .../gas_adjuster/erc_20_fetcher.rs | 5 +- .../src/l1_gas_price/gas_adjuster/mod.rs | 50 +- .../src/l1_gas_price/gas_adjuster/tests.rs | 9 +- .../src/l1_gas_price/main_node_fetcher.rs | 28 +- core/lib/zksync_core/src/l1_gas_price/mod.rs | 13 +- .../zksync_core/src/l1_gas_price/singleton.rs | 24 +- core/lib/zksync_core/src/lib.rs | 532 +- .../src/metadata_calculator/helpers.rs | 328 +- .../src/metadata_calculator/metrics.rs | 43 +- .../src/metadata_calculator/mod.rs | 156 +- .../src/metadata_calculator/recovery/mod.rs | 425 + .../src/metadata_calculator/recovery/tests.rs | 354 + .../src/metadata_calculator/tests.rs | 184 +- .../src/metadata_calculator/updater.rs | 183 +- core/lib/zksync_core/src/metrics.rs | 15 +- .../zksync_core/src/proof_data_handler/mod.rs | 11 +- .../proof_data_handler/request_processor.rs | 46 +- .../lib/zksync_core/src/reorg_detector/mod.rs | 325 +- .../zksync_core/src/reorg_detector/tests.rs | 496 + .../src/state_keeper/batch_executor/mod.rs | 118 +- .../state_keeper/batch_executor/tests/mod.rs | 6 +- .../batch_executor/tests/tester.rs | 21 +- 
.../src/state_keeper/extractors.rs | 3 +- .../zksync_core/src/state_keeper/io/common.rs | 19 +- .../src/state_keeper/io/mempool.rs | 110 +- .../zksync_core/src/state_keeper/io/mod.rs | 26 +- .../src/state_keeper/io/seal_logic.rs | 82 +- .../src/state_keeper/io/tests/mod.rs | 104 +- .../src/state_keeper/io/tests/tester.rs | 68 +- .../zksync_core/src/state_keeper/keeper.rs | 54 +- .../src/state_keeper/mempool_actor.rs | 43 +- .../zksync_core/src/state_keeper/metrics.rs | 21 +- core/lib/zksync_core/src/state_keeper/mod.rs | 56 +- .../seal_criteria/conditional_sealer.rs | 108 +- .../criteria/geometry_seal_criteria.rs | 233 +- .../seal_criteria/criteria/mod.rs | 9 +- .../seal_criteria/criteria/pubdata_bytes.rs | 2 +- .../seal_criteria/criteria/slots.rs | 10 +- .../criteria/tx_encoding_size.rs | 22 +- .../src/state_keeper/seal_criteria/mod.rs | 4 +- .../zksync_core/src/state_keeper/tests/mod.rs | 137 +- .../src/state_keeper/tests/tester.rs | 81 +- .../lib/zksync_core/src/state_keeper/types.rs | 20 + .../state_keeper/updates/l1_batch_updates.rs | 15 +- .../state_keeper/updates/miniblock_updates.rs | 54 +- .../src/state_keeper/updates/mod.rs | 59 +- .../src/sync_layer/batch_status_updater.rs | 366 - .../sync_layer/batch_status_updater/mod.rs | 470 + .../sync_layer/batch_status_updater/tests.rs | 442 + core/lib/zksync_core/src/sync_layer/client.rs | 5 +- .../zksync_core/src/sync_layer/external_io.rs | 144 +- .../lib/zksync_core/src/sync_layer/fetcher.rs | 81 +- .../lib/zksync_core/src/sync_layer/genesis.rs | 1 - .../src/sync_layer/gossip/buffered/mod.rs | 340 - .../src/sync_layer/gossip/buffered/tests.rs | 287 - .../src/sync_layer/gossip/conversions.rs | 31 +- .../src/sync_layer/gossip/metrics.rs | 29 - .../zksync_core/src/sync_layer/gossip/mod.rs | 93 - .../src/sync_layer/gossip/storage/mod.rs | 219 - .../src/sync_layer/gossip/storage/tests.rs | 127 - .../src/sync_layer/gossip/tests.rs | 339 - .../src/sync_layer/gossip/utils.rs | 48 - .../lib/zksync_core/src/sync_layer/metrics.rs | 19 +- core/lib/zksync_core/src/sync_layer/mod.rs | 5 +- .../zksync_core/src/sync_layer/sync_action.rs | 27 +- .../zksync_core/src/sync_layer/sync_state.rs | 10 +- core/lib/zksync_core/src/sync_layer/tests.rs | 176 +- core/lib/zksync_core/src/temp_config_store.rs | 9 +- core/lib/zksync_core/src/utils/mod.rs | 132 + core/lib/zksync_core/src/utils/testonly.rs | 206 + .../src/witness_generator/basic_circuits.rs | 645 - .../src/witness_generator/leaf_aggregation.rs | 340 - .../zksync_core/src/witness_generator/mod.rs | 88 - .../src/witness_generator/node_aggregation.rs | 378 - .../precalculated_merkle_paths_provider.rs | 261 - .../src/witness_generator/scheduler.rs | 376 - .../src/witness_generator/storage_oracle.rs | 46 - .../src/witness_generator/tests.rs | 296 - .../src/witness_generator/utils.rs | 27 - .../cross_external_nodes_checker/README.md | 2 +- .../src/checker.rs | 13 +- .../src/config.rs | 3 +- .../src/divergence.rs | 1 + .../src/helpers.rs | 7 +- .../cross_external_nodes_checker/src/main.rs | 13 +- .../src/pubsub_checker.rs | 18 +- core/tests/loadnext/README.md | 6 +- .../src/account/api_request_executor.rs | 1 - core/tests/loadnext/src/account/mod.rs | 19 +- .../loadnext/src/account/pubsub_executor.rs | 6 +- .../src/account/tx_command_executor.rs | 18 +- core/tests/loadnext/src/account_pool.rs | 5 +- core/tests/loadnext/src/command/api.rs | 5 +- core/tests/loadnext/src/command/tx_command.rs | 1 - core/tests/loadnext/src/config.rs | 25 +- core/tests/loadnext/src/corrupted_tx.rs | 20 +- 
core/tests/loadnext/src/executor.rs | 25 +-
core/tests/loadnext/src/fs_utils.rs | 11 +-
core/tests/loadnext/src/main.rs | 8 +-
core/tests/loadnext/src/report.rs | 2 +-
.../src/report_collector/metrics_collector.rs | 2 +-
.../loadnext/src/report_collector/mod.rs | 4 +-
.../operation_results_collector.rs | 4 +-
core/tests/loadnext/src/rng.rs | 3 +-
core/tests/loadnext/src/utils.rs | 1 +
.../tests/revert-and-restart.test.ts | 25 +-
core/tests/ts-integration/README.md | 4 +-
.../contracts/counter/zkVM_bytecode.txt | 1 +
.../custom-account/custom-paymaster.sol | 2 +-
.../custom-account/interfaces/IPaymaster.sol | 2 +-
core/tests/ts-integration/hardhat.config.ts | 6 +-
core/tests/ts-integration/package.json | 5 +-
.../ts-integration/scripts/compile-yul.ts | 2 +-
core/tests/ts-integration/src/system.ts | 2 +-
core/tests/ts-integration/src/test-master.ts | 2 +-
.../tests/api/contract-verification.test.ts | 33 +-
.../tests/api/snapshots-creator.test.ts | 86 +
.../ts-integration/tests/api/web3.test.ts | 30 +-
.../ts-integration/tests/contracts.test.ts | 24 +
.../tests/custom-account.test.ts | 14 +-
.../tests/custom-erc20-bridge.test.ts | 26 +-
core/tests/ts-integration/tests/fees.test.ts | 46 +-
core/tests/ts-integration/tests/l1.test.ts | 58 +-
.../ts-integration/tests/l2-weth.test.ts | 4 +-
.../ts-integration/tests/mempool.test.ts | 15 +-
.../tests/ts-integration/tests/system.test.ts | 22 +-
core/tests/upgrade-test/tests/upgrade.test.ts | 14 +-
core/tests/vm-benchmark/README.md | 7 +-
.../vm-benchmark/benches/diy_benchmark.rs | 3 +-
core/tests/vm-benchmark/harness/src/lib.rs | 33 +-
.../vm-benchmark/src/compare_iai_results.rs | 5 +-
core/tests/vm-benchmark/src/find_slowest.rs | 1 +
.../src/iai_results_to_prometheus.rs | 1 +
.../tests/vm-benchmark/src/with_prometheus.rs | 3 +-
deny.toml | 1 -
docker-compose-cpu-runner.yml | 2 +-
docker-compose-gpu-runner-cuda-12-0.yml | 2 +-
docker-compose-gpu-runner.yml | 2 +-
docker-compose-runner-nightly.yml | 26 +-
docker-compose-runner.yml | 38 -
docker-compose-zkstack-common.yml | 35 +
docker-compose.yml | 135 +-
docker/contract-verifier/Dockerfile | 10 +-
docker/contract-verifier/install-all-solc.sh | 8 +
docker/external-node/Dockerfile | 14 +-
docker/geth/jwtsecret | 1 +
docker/geth/standard-dev.json | 14 +-
docker/local-node/Dockerfile | 15 +-
docker/local-node/entrypoint.sh | 2 +-
docker/proof-fri-compressor/Dockerfile | 2 +-
docker/prover-fri-gateway/Dockerfile | 2 +-
docker/prover-fri/Dockerfile | 2 +-
docker/prover-gar/Dockerfile | 21 -
docker/prover-gpu-fri/Dockerfile | 2 +-
docker/prover/Dockerfile | 63 -
docker/prysm/config.yml | 30 +
docker/server-v2/Dockerfile | 14 +-
.../Dockerfile | 19 +-
docker/witness-generator/Dockerfile | 2 +-
docker/witness-vector-generator/Dockerfile | 2 +-
.../20.04_amd64_cuda_11_8.Dockerfile | 13 +-
.../20.04_amd64_cuda_12_0.Dockerfile | 11 +-
docker/zk-environment/Dockerfile | 7 +-
docs/advanced/README.md | 14 -
docs/advanced/blocks_and_batches.md | 85 -
.../advanced/01_initialization.md | 8 +-
docs/{ => guides}/advanced/02_deposits.md | 4 +-
docs/{ => guides}/advanced/03_withdrawals.md | 4 +-
.../guides/advanced/0_alternative_vm_intro.md | 311 +
.../advanced/advanced_debugging.md | 0
.../advanced/compression.md} | 19 +-
docs/{ => guides}/advanced/contracts.md | 8 +-
.../advanced/deeper_overview.md} | 22 +-
.../advanced/fee_model.md} | 6 +-
docs/{ => guides}/advanced/how_call_works.md | 7 +-
.../advanced/how_l2_messaging_works.md | 20 +-
.../advanced/how_transaction_works.md | 2 +-
docs/{ => guides}/advanced/prover_keys.md | 4 +-
docs/{ => guides}/advanced/pubdata.md | 39 +-
docs/{ => guides}/advanced/zk_intuition.md | 23 +-
docs/{ => guides}/architecture.md | 1 -
docs/{ => guides}/development.md | 67 +
docs/{ => guides}/external-node/01_intro.md | 0
.../external-node/02_configuration.md | 6 +-
docs/{ => guides}/external-node/03_running.md | 0
.../external-node/04_observability.md | 0
.../external-node/05_troubleshooting.md | 0
.../external-node/06_components.md | 0
.../prepared_configs/mainnet-config.env | 5 -
.../testnet-goerli-config-deprecated.env} | 5 -
.../testnet-sepolia-config.env | 92 +
docs/{ => guides}/launch.md | 31 +-
docs/guides/repositories.md | 80 +
docs/{ => guides}/setup-dev.md | 65 +-
docs/repositories.md | 75 -
docs/specs/README.md | 36 +
docs/specs/blocks_batches.md | 274 +
docs/specs/data_availability/README.md | 5 +
docs/specs/data_availability/compression.md | 125 +
docs/specs/data_availability/overview.md | 19 +
docs/specs/data_availability/pubdata.md | 446 +
.../specs/data_availability/reconstruction.md | 7 +
.../data_availability/validium_zk_porter.md | 7 +
docs/specs/img/L2_Components.png | Bin 0 -> 75908 bytes
docs/specs/img/diamondProxy.jpg | Bin 0 -> 127297 bytes
docs/specs/img/governance.jpg | Bin 0 -> 161578 bytes
docs/specs/img/zk-the-collective-action.jpeg | Bin 0 -> 260019 bytes
docs/specs/introduction.md | 19 +
docs/specs/l1_l2_communication/README.md | 5 +
docs/specs/l1_l2_communication/l1_to_l2.md | 170 +
docs/specs/l1_l2_communication/l2_to_l1.md | 72 +
.../overview_deposits_withdrawals.md | 13 +
docs/specs/l1_smart_contracts.md | 289 +
docs/specs/overview.md | 38 +
docs/specs/prover/README.md | 9 +
.../boojum_function_check_if_satisfied.md | 97 +
docs/specs/prover/boojum_gadgets.md | 189 +
docs/specs/prover/circuit_testing.md | 59 +
docs/specs/prover/circuits/README.md | 17 +
.../specs/prover/circuits/code_decommitter.md | 208 +
docs/specs/prover/circuits/demux_log_queue.md | 226 +
docs/specs/prover/circuits/ecrecover.md | 320 +
docs/specs/prover/circuits/img/diagram.png | Bin 0 -> 12668 bytes
docs/specs/prover/circuits/img/flowchart.png | Bin 0 -> 187572 bytes
docs/specs/prover/circuits/img/image.png | Bin 0 -> 227383 bytes
.../prover/circuits/keccak_round_function.md | 203 +
.../prover/circuits/l1_messages_hasher.md | 149 +
docs/specs/prover/circuits/log_sorter.md | 351 +
docs/specs/prover/circuits/main_vm.md | 342 +
docs/specs/prover/circuits/overview.md | 123 +
docs/specs/prover/circuits/ram_permutation.md | 195 +
.../prover/circuits/sha256_round_function.md | 369 +
.../prover/circuits/sort_decommitments.md | 232 +
docs/specs/prover/circuits/sorting.md | 56 +
.../circuits/sorting_and_deduplicating.md | 9 +
.../prover/circuits/storage_application.md | 213 +
docs/specs/prover/circuits/storage_sorter.md | 282 +
docs/specs/prover/getting_started.md | 21 +
.../Check_if_satisfied(1).png | Bin 0 -> 96306 bytes
.../Check_if_satisfied(11).png | Bin 0 -> 92473 bytes
.../Check_if_satisfied(12).png | Bin 0 -> 109109 bytes
.../Check_if_satisfied(13).png | Bin 0 -> 122534 bytes
.../Check_if_satisfied(14).png | Bin 0 -> 67201 bytes
.../Check_if_satisfied(16).png | Bin 0 -> 84431 bytes
.../Check_if_satisfied(17).png | Bin 0 -> 114302 bytes
.../Check_if_satisfied(2).png | Bin 0 -> 143563 bytes
.../Check_if_satisfied(3).png | Bin 0 -> 261820 bytes
.../Check_if_satisfied(4).png | Bin 0 -> 151081 bytes
.../Check_if_satisfied(7).png | Bin 0 -> 189514 bytes
.../Check_if_satisfied(8).png | Bin 0 -> 124859 bytes
.../Check_if_satisfied(9).png | Bin 0 -> 90694 bytes
.../Check_if_satisfied.png | Bin 0 -> 147955 bytes
.../img/circuit_testing/Contest(10).png | Bin 0 -> 139480 bytes
.../img/circuit_testing/Contest(11).png | Bin 0 -> 85347 bytes
.../img/circuit_testing/Contest(12).png | Bin 0 -> 152645 bytes
.../prover/img/circuit_testing/Contest(4).png | Bin 0 -> 91702 bytes
.../prover/img/circuit_testing/Contest(5).png | Bin 0 -> 265470 bytes
.../prover/img/circuit_testing/Contest(6).png | Bin 0 -> 87853 bytes
.../prover/img/circuit_testing/Contest(7).png | Bin 0 -> 207684 bytes
.../prover/img/circuit_testing/Contest(8).png | Bin 0 -> 92569 bytes
.../prover/img/circuit_testing/Contest(9).png | Bin 0 -> 135392 bytes
.../circuit.png" | Bin 0 -> 14473 bytes
docs/specs/prover/overview.md | 63 +
docs/specs/prover/zk_terminology.md | 103 +
docs/specs/the_hyperchain/README.md | 5 +
docs/specs/the_hyperchain/hyperbridges.md | 42 +
.../the_hyperchain/img/contractsExternal.png | Bin 0 -> 262845 bytes
docs/specs/the_hyperchain/img/deployWeth.png | Bin 0 -> 81010 bytes
docs/specs/the_hyperchain/img/depositWeth.png | Bin 0 -> 86769 bytes
.../specs/the_hyperchain/img/hyperbridges.png | Bin 0 -> 179481 bytes
.../the_hyperchain/img/hyperbridging.png | Bin 0 -> 113383 bytes
docs/specs/the_hyperchain/img/newChain.png | Bin 0 -> 96670 bytes
docs/specs/the_hyperchain/overview.md | 6 +
docs/specs/the_hyperchain/shared_bridge.md | 247 +
docs/specs/zk_evm/README.md | 9 +
docs/specs/zk_evm/account_abstraction.md | 40 +
docs/specs/zk_evm/bootloader.md | 334 +
docs/specs/zk_evm/fee_model.md | 522 +
docs/specs/zk_evm/precompiles.md | 298 +
docs/specs/zk_evm/system_contracts.md | 316 +
docs/specs/zk_evm/vm_overview.md | 27 +
.../EraVM_formal_specification.pdf | Bin 0 -> 1385808 bytes
docs/specs/zk_evm/vm_specification/README.md | 5 +
.../vm_specification/compiler/README.md | 8 +
.../compiler/code_separation.md | 76 +
.../compiler/evmla_translator.md | 755 +
.../compiler/exception_handling.md | 101 +
.../compiler/instructions/README.md | 7 +
.../compiler/instructions/evm/README.md | 15 +
.../compiler/instructions/evm/arithmetic.md | 330 +
.../compiler/instructions/evm/bitwise.md | 194 +
.../compiler/instructions/evm/block.md | 159 +
.../compiler/instructions/evm/call.md | 27 +
.../compiler/instructions/evm/create.md | 16 +
.../compiler/instructions/evm/environment.md | 294 +
.../compiler/instructions/evm/logging.md | 27 +
.../compiler/instructions/evm/logical.md | 188 +
.../compiler/instructions/evm/memory.md | 61 +
.../compiler/instructions/evm/overview.md | 23 +
.../compiler/instructions/evm/return.md | 56 +
.../compiler/instructions/evm/sha3.md | 49 +
.../compiler/instructions/evm/stack.md | 47 +
.../compiler/instructions/evmla.md | 85 +
.../instructions/extensions/README.md | 5 +
.../compiler/instructions/extensions/call.md | 70 +
.../instructions/extensions/overview.md | 7 +
.../instructions/extensions/verbatim.md | 73 +
.../compiler/instructions/overview.md | 50 +
.../compiler/instructions/yul.md | 92 +
.../vm_specification/compiler/overview.md | 142 +
.../compiler/system_contracts.md | 125 +
.../vm_specification/img/arch-overview.png | Bin 0 -> 57728 bytes
.../img/arithmetic_opcode.png | Bin 0 -> 15320 bytes
.../zkSync_era_virtual_machine_primer.md | 637 +
.../zksync_testharness_test.json | 20 +-
.../contracts/precompiles/precompiles.sol | 21 +
etc/env/base/api.toml | 1 +
etc/env/base/chain.toml | 33 +-
etc/env/base/circuit_synthesizer.toml | 10 -
etc/env/base/contracts.toml | 4 +-
etc/env/base/fetcher.toml | 16 -
etc/env/base/fri_prover.toml | 3 +-
 etc/env/base/object_store.toml | 7 +
 etc/env/base/prover.toml | 74 -
 etc/env/base/prover_group.toml | 17 -
 etc/env/base/rust.toml | 6 +-
 etc/env/ext-node-docker.toml | 2 +-
 etc/env/ext-node.toml | 2 +-
 ...=> docker-compose-hyperchain-template.hbs} | 83 +-
 .../fee_estimate.yul/fee_estimate.yul.zbin | Bin 0 -> 65888 bytes
 .../vm_1_4_1/gas_test.yul/gas_test.yul.zbin | Bin 0 -> 64352 bytes
 .../playground_batch.yul.zbin | Bin 0 -> 66144 bytes
 .../proved_batch.yul/proved_batch.yul.zbin | Bin 0 -> 64928 bytes
 .../vm_remove_allowlist/commit | 1 +
 .../fee_estimate.yul/fee_estimate.yul.zbin | Bin 0 -> 68000 bytes
 .../gas_test.yul/gas_test.yul.zbin | Bin 0 -> 66464 bytes
 .../playground_batch.yul.zbin | Bin 0 -> 68192 bytes
 .../proved_batch.yul/proved_batch.yul.zbin | Bin 0 -> 67104 bytes
 etc/system-contracts | 1 -
 etc/tokens/native_erc20.json | 2 +-
 .../1699353977-boojum/mainnet2/crypto.json | 11 +
 .../1699353977-boojum/mainnet2/facetCuts.json | 177 +
 .../1699353977-boojum/mainnet2/facets.json | 18 +
 .../1699353977-boojum/mainnet2/l2Upgrade.json | 323 +
 .../mainnet2/transactions.json | 235 +
 .../1699353977-boojum/testnet2/crypto.json | 11 +
 .../1699353977-boojum/testnet2/facetCuts.json | 177 +
 .../1699353977-boojum/testnet2/facets.json | 18 +
 .../1699353977-boojum/testnet2/l2Upgrade.json | 323 +
 .../testnet2/transactions.json | 235 +
 .../1702392522-allowlist-removal/common.json | 5 +
 .../mainnet2/facetCuts.json | 165 +
 .../mainnet2/facets.json | 18 +
 .../mainnet2/l2Upgrade.json | 323 +
 .../mainnet2/transactions.json | 230 +
 .../stage2/facetCuts.json | 165 +
 .../stage2/facets.json | 18 +
 .../stage2/l2Upgrade.json | 323 +
 .../stage2/transactions.json | 230 +
 .../testnet-sepolia/facetCuts.json | 165 +
 .../testnet-sepolia/facets.json | 18 +
 .../testnet-sepolia/l2Upgrade.json | 323 +
 .../testnet-sepolia/transactions.json | 230 +
 .../testnet2/facetCuts.json | 165 +
 .../testnet2/facets.json | 18 +
 .../testnet2/l2Upgrade.json | 323 +
 .../testnet2/transactions.json | 230 +
 .../local-setup-preparation/README.md | 6 +
 .../local-setup-preparation/package.json | 5 +-
 .../local-setup-preparation/src/index.ts | 2 +-
 infrastructure/protocol-upgrade/README.md | 2 +-
 infrastructure/protocol-upgrade/package.json | 5 +-
 .../protocol-upgrade/pre-boojum/IZkSync.d.ts | 1 -
 .../pre-boojum/IZkSyncFactory.ts | 1 -
 .../protocol-upgrade/src/crypto/crypto.ts | 2 +-
 .../protocol-upgrade/src/crypto/deployer.ts | 2 +-
 .../src/l1upgrade/deployer.ts | 2 +-
 .../protocol-upgrade/src/l1upgrade/facets.ts | 2 +-
 .../src/l2upgrade/deployer.ts | 2 +-
 .../src/l2upgrade/transactions.ts | 8 +-
 .../protocol-upgrade/src/transaction.ts | 32 +-
 infrastructure/protocol-upgrade/src/utils.ts | 6 +-
 infrastructure/zk/package.json | 10 +-
 infrastructure/zk/src/clean.ts | 12 +-
 infrastructure/zk/src/config.ts | 4 -
 infrastructure/zk/src/contract.ts | 25 +-
 infrastructure/zk/src/docker.ts | 102 +-
 infrastructure/zk/src/fmt.ts | 62 +-
 infrastructure/zk/src/format_sql.ts | 169 +
 infrastructure/zk/src/hyperchain_wizard.ts | 110 +-
 infrastructure/zk/src/index.ts | 4 +
 infrastructure/zk/src/init.ts | 24 +-
 infrastructure/zk/src/linkcheck.ts | 25 +
 infrastructure/zk/src/lint.ts | 28 +-
 infrastructure/zk/src/prover_setup.ts | 72 +-
 infrastructure/zk/src/run/run.ts | 44 +-
 infrastructure/zk/src/spellcheck.ts | 44 +
 infrastructure/zk/src/status.ts | 21 +-
 infrastructure/zk/src/test/integration.ts | 13 +
 infrastructure/zk/src/up.ts | 40 +-
 infrastructure/zk/src/utils.ts | 3 +-
 package.json | 14 +-
 prover/CHANGELOG.md | 94 +
 prover/Cargo.lock | 2517 ++-
 prover/Cargo.toml | 3 -
 prover/README.md | 15 +-
 prover/circuit_synthesizer/Cargo.toml | 38 -
 .../src/circuit_synthesizer.rs | 362 -
 prover/circuit_synthesizer/src/main.rs | 105 -
 prover/proof_fri_compressor/Cargo.toml | 7 +-
 prover/proof_fri_compressor/src/compressor.rs | 65 +-
 .../src/initial_setup_keys.rs | 59 +
 prover/proof_fri_compressor/src/main.rs | 29 +-
 prover/proof_fri_compressor/src/metrics.rs | 17 +
 prover/prover/Cargo.toml | 56 -
 prover/prover/README.md | 8 -
 prover/prover/src/artifact_provider.rs | 23 -
 prover/prover/src/main.rs | 25 -
 prover/prover/src/prover.rs | 315 -
 prover/prover/src/prover_params.rs | 36 -
 prover/prover/src/run.rs | 257 -
 prover/prover/src/socket_listener.rs | 119 -
 .../src/synthesized_circuit_provider.rs | 82 -
 prover/prover_fri/Cargo.toml | 12 +-
 prover/prover_fri/README.md | 26 +-
 .../src/gpu_prover_job_processor.rs | 97 +-
 prover/prover_fri/src/lib.rs | 1 +
 prover/prover_fri/src/main.rs | 70 +-
 prover/prover_fri/src/metrics.rs | 44 +
 prover/prover_fri/src/prover_job_processor.rs | 93 +-
 prover/prover_fri/src/socket_listener.rs | 71 +-
 prover/prover_fri/src/utils.rs | 54 +-
 prover/prover_fri/tests/basic_test.rs | 11 +-
 prover/prover_fri_gateway/Cargo.toml | 3 +-
 .../src/api_data_fetcher.rs | 12 +-
 prover/prover_fri_gateway/src/main.rs | 9 +-
 prover/prover_fri_gateway/src/metrics.rs | 11 +
 .../src/proof_gen_data_fetcher.rs | 7 +-
 .../prover_fri_gateway/src/proof_submitter.rs | 7 +-
 prover/prover_fri_types/Cargo.toml | 2 +-
 prover/prover_fri_types/src/lib.rs | 32 +-
 prover/prover_fri_utils/Cargo.toml | 7 +-
 prover/prover_fri_utils/src/lib.rs | 33 +-
 prover/prover_fri_utils/src/metrics.rs | 37 +
 prover/prover_fri_utils/src/region_fetcher.rs | 51 +
 prover/prover_fri_utils/src/socket_utils.rs | 12 +-
 prover/setup-data-cpu-keys.json | 6 +-
 prover/setup-data-gpu-keys.json | 6 +-
 .../setup_key_generator_and_server/Cargo.toml | 42 -
 .../data/.gitkeep | 0
 .../setup_key_generator_and_server/src/lib.rs | 61 -
 .../src/main.rs | 67 -
 .../Cargo.toml | 8 +-
 .../data/finalization_hints_basic_1.bin | Bin 240 -> 240 bytes
 .../data/finalization_hints_basic_5.bin | Bin 216 -> 216 bytes
 .../data/finalization_hints_basic_7.bin | Bin 216 -> 3288 bytes
 .../data/finalization_hints_leaf_9.bin | Bin 120 -> 120 bytes
 .../data/finalization_hints_scheduler.bin | Bin 216 -> 216 bytes
 .../snark_verification_scheduler_key.json | 32 +-
 .../data/verification_basic_1_key.json | 136 +-
 .../data/verification_basic_5_key.json | 136 +-
 .../data/verification_basic_7_key.json | 173 +-
 .../data/verification_basic_9_key.json | 128 +-
 .../data/verification_leaf_3_key.json | 128 +-
 .../data/verification_leaf_7_key.json | 128 +-
 .../data/verification_leaf_9_key.json | 128 +-
 .../data/verification_scheduler_key.json | 136 +-
 .../data/witness_artifacts.json | 23 +-
 .../src/commitment_generator.rs | 8 +-
 .../src/commitment_utils.rs | 19 +-
 .../src/lib.rs | 96 +-
 .../src/main.rs | 20 +-
 .../src/setup_data_generator.rs | 40 +-
 .../src/tests.rs | 20 +-
 .../src/utils.rs | 118 +-
 .../src/vk_commitment_helper.rs | 3 +-
 .../src/vk_generator.rs | 27 +-
 prover/witness_generator/Cargo.toml | 10 +-
 prover/witness_generator/README.md | 4 +-
 .../witness_generator/src/basic_circuits.rs | 111 +-
 .../witness_generator/src/leaf_aggregation.rs | 91 +-
 prover/witness_generator/src/lib.rs | 2 +
 prover/witness_generator/src/main.rs | 53 +-
 prover/witness_generator/src/metrics.rs | 34 +
 .../witness_generator/src/node_aggregation.rs | 86 +-
 .../precalculated_merkle_paths_provider.rs | 15 +-
 prover/witness_generator/src/scheduler.rs | 82 +-
 .../witness_generator/src/storage_oracle.rs | 6 +-
 prover/witness_generator/src/tests.rs | 12 +-
 prover/witness_generator/src/utils.rs | 49 +-
 prover/witness_generator/tests/basic_test.rs | 22 +-
 prover/witness_vector_generator/Cargo.toml | 4 +-
 .../witness_vector_generator/src/generator.rs | 77 +-
 prover/witness_vector_generator/src/lib.rs | 2 +
 prover/witness_vector_generator/src/main.rs | 34 +-
 .../witness_vector_generator/src/metrics.rs | 19 +
 .../tests/basic_test.rs | 4 +-
 sdk/zksync-rs/src/abi/update-abi.sh | 6 +-
 sdk/zksync-rs/src/error.rs | 2 +-
 sdk/zksync-rs/src/ethereum/mod.rs | 83 +-
 sdk/zksync-rs/src/lib.rs | 21 +-
 sdk/zksync-rs/src/operations/mod.rs | 22 +-
 sdk/zksync-rs/src/operations/transfer.rs | 15 +-
 sdk/zksync-rs/src/operations/withdraw.rs | 6 +-
 sdk/zksync-rs/src/signer.rs | 11 +-
 sdk/zksync-rs/src/utils.rs | 1 -
 sdk/zksync-rs/src/wallet.rs | 5 +-
 yarn.lock | 1240 +-
 1636 files changed, 77217 insertions(+), 51634 deletions(-)
 create mode 100644 .github/workflows/build-contracts.yml
 delete mode 100644 .github/workflows/build-external-node-docker.yml
 delete mode 100644 .github/workflows/build-gar-reusable.yml
 create mode 100644 .github/workflows/check-spelling.yml
 create mode 100755 bin/ci_localnet_up
 create mode 100644 checks-config/cspell.json
 create mode 100644 checks-config/era.cfg
 create mode 100644 checks-config/era.dic
 create mode 100644 checks-config/links.json
 create mode 100644 core/bin/external_node/src/metrics.rs
 delete mode 100644 core/bin/rocksdb_util/Cargo.toml
 delete mode 100644 core/bin/rocksdb_util/src/main.rs
 create mode 100644 core/bin/snapshots_creator/Cargo.toml
 create mode 100644 core/bin/snapshots_creator/README.md
 create mode 100644 core/bin/snapshots_creator/src/chunking.rs
 create mode 100644 core/bin/snapshots_creator/src/creator.rs
 create mode 100644 core/bin/snapshots_creator/src/main.rs
 create mode 100644 core/bin/snapshots_creator/src/metrics.rs
 create mode 100644 core/bin/snapshots_creator/src/tests.rs
 delete mode 100644 core/bin/verification_key_generator_and_server/Cargo.toml
 delete mode 100644 core/bin/verification_key_generator_and_server/README.md
 delete mode 100644 core/bin/verification_key_generator_and_server/data/verification_0_key.json
 delete mode 100644 core/bin/verification_key_generator_and_server/data/verification_10_key.json
 delete mode 100644 core/bin/verification_key_generator_and_server/data/verification_11_key.json
 delete mode 100644 core/bin/verification_key_generator_and_server/data/verification_12_key.json
 delete mode 100644 core/bin/verification_key_generator_and_server/data/verification_13_key.json
 delete mode 100644 core/bin/verification_key_generator_and_server/data/verification_14_key.json
 delete mode 100644 core/bin/verification_key_generator_and_server/data/verification_15_key.json
 delete mode 100644 core/bin/verification_key_generator_and_server/data/verification_16_key.json
 delete mode 100644 core/bin/verification_key_generator_and_server/data/verification_17_key.json
 delete mode 100644 core/bin/verification_key_generator_and_server/data/verification_18_key.json
 delete mode 100644 core/bin/verification_key_generator_and_server/data/verification_1_key.json
 delete mode 100644 core/bin/verification_key_generator_and_server/data/verification_2_key.json
 delete mode 100644 core/bin/verification_key_generator_and_server/data/verification_3_key.json
 delete mode 100644 core/bin/verification_key_generator_and_server/data/verification_4_key.json
 delete mode 100644 core/bin/verification_key_generator_and_server/data/verification_5_key.json
 delete mode 100644 core/bin/verification_key_generator_and_server/data/verification_6_key.json
 delete mode 100644 core/bin/verification_key_generator_and_server/data/verification_7_key.json
 delete mode 100644 core/bin/verification_key_generator_and_server/data/verification_8_key.json
 delete mode 100644 core/bin/verification_key_generator_and_server/data/verification_9_key.json
 delete mode 100644 core/bin/verification_key_generator_and_server/src/commitment_generator.rs
 delete mode 100644 core/bin/verification_key_generator_and_server/src/json_to_binary_vk_converter.rs
 delete mode 100644 core/bin/verification_key_generator_and_server/src/lib.rs
 delete mode 100644 core/bin/verification_key_generator_and_server/src/main.rs
 delete mode 100644 core/bin/verification_key_generator_and_server/src/tests.rs
 delete mode 100644 core/lib/config/src/configs/circuit_synthesizer.rs
 delete mode 100644 core/lib/config/src/configs/fetcher.rs
 delete mode 100644 core/lib/config/src/configs/prover.rs
 delete mode 100644 core/lib/config/src/configs/prover_group.rs
 create mode 100644 core/lib/config/src/configs/snapshots_creator.rs
 create mode 100644 core/lib/dal/.sqlx/query-00b88ec7fcf40bb18e0018b7c76f6e1df560ab1e8935564355236e90b6147d2f.json
 create mode 100644 core/lib/dal/.sqlx/query-012bed5d34240ed28c331c8515c381d82925556a4801f678b8786235d525d784.json
 create mode 100644 core/lib/dal/.sqlx/query-015350f8d729ef490553550a68f07703b2581dda4fe3c00be6c5422c78980c4b.json
 create mode 100644 core/lib/dal/.sqlx/query-01ac5343beb09ec5bd45b39d560e57a83f37da8999849377dfad60b44989be39.json
 create mode 100644 core/lib/dal/.sqlx/query-01e4cde73867da612084c3f6fe882d56bbace9013f1d95ea0926eef1fb48039b.json
 create mode 100644 core/lib/dal/.sqlx/query-01f72dfc1eee6360a8ef7809874a1b4ba7fe355ebc02ea49a054aa073ce324ba.json
 create mode 100644 core/lib/dal/.sqlx/query-02285b8d0bc76c8cfd259872ac24f3670813e5a5356ddcb7ac482a0201d045f7.json
 create mode 100644 core/lib/dal/.sqlx/query-026ab7dd7407f10074a2966b5eac2563a3e061bcc6505d8c295b1b2517f85f1b.json
 create mode 100644 core/lib/dal/.sqlx/query-03c585c7e9f918e608757496088c7e3b6bdb2a08149d5f443310607d3c78988c.json
 create mode 100644 core/lib/dal/.sqlx/query-040eaa878c3473f5edc73b77e572b5ea100f59295cd693d14ee0d5ee089c7981.json
 create mode 100644 core/lib/dal/.sqlx/query-04fbbd198108d2614a3b29fa795994723ebe57b3ed209069bd3db906921ef1a3.json
 create mode 100644 core/lib/dal/.sqlx/query-05267e9774056bb0f984918ab861a2ee78eb59628d0429e89b27d185f83512be.json
 create mode 100644 core/lib/dal/.sqlx/query-07310d96fc7e258154ad510684e33d196907ebd599e926d305e5ef9f26afa2fa.json
 create mode 100644 core/lib/dal/.sqlx/query-083991abb3f1c2183d1bd1fb2ad4710daa723e2d9a23317c347f6081465c3643.json
 create mode 100644 core/lib/dal/.sqlx/query-08e59ed8e2fd1a74e19d8bf0d131e4ee6682a89fb86f3b715a240805d44e6d87.json
 create mode 100644 core/lib/dal/.sqlx/query-0914f0ad03d6a8c55d287f94917c6f03469d78bf4f45f5fd1eaf37171db2f04a.json
 create mode 100644 core/lib/dal/.sqlx/query-0a3c928a616b5ebc0b977bd773edcde721ca1c652ae2f8db41fb75cecdecb674.json
 create mode 100644 core/lib/dal/.sqlx/query-0a3cb11f5bdcb8da31dbd4e3016fced141fb29dd8b6c32dd2dc3452dc294fe1f.json
 create mode 100644 core/lib/dal/.sqlx/query-0a53fc3c90a14038c9f3f32c3e2e5f7edcafa4fc6757264a96a46dbf7dd1f9cc.json
 create mode 100644 core/lib/dal/.sqlx/query-0aaefa9d5518ed1a2d8f735435e8048558243ff878b59586eb3a8b22794395d8.json
 create mode 100644 core/lib/dal/.sqlx/query-0bdcf87f6910c7222b621f76f71bc6e326e15dca141050bc9d7dacae98a430e8.json
 create mode 100644 core/lib/dal/.sqlx/query-0c899c68886f76a232ffac0454cdfbf962636347864fc365fafa46c7a2da5f30.json
 create mode 100644 core/lib/dal/.sqlx/query-0c95fbfb3a816bd49fd06e3a4f0a52daa202279bf612a9278f663deb78bc6e41.json
 create mode 100644 core/lib/dal/.sqlx/query-0d13b8947b1bafa9e5bc6fdc70a986511265c541d81b1d21f0a751ae1399c626.json
 create mode 100644 core/lib/dal/.sqlx/query-10959c91f01ce0da196f4c6eaf0661a097308d9f81024fdfef24a14418202730.json
 create mode 100644 core/lib/dal/.sqlx/query-11af69fc254e54449b64c086667700a95e4c37a7a18531b3cdf120394cb055b9.json
 create mode 100644 core/lib/dal/.sqlx/query-12ab208f416e2875f89e558f0d4aff3a06b7a9c1866132d62e4449fa9436c7c4.json
 create mode 100644 core/lib/dal/.sqlx/query-12ab8ba692a42f528450f2adf8d263298abc0521734f807fbf45484158b167b2.json
 create mode 100644 core/lib/dal/.sqlx/query-136569d7eb4037fd77e0fac2246c68e8e15a831f1a45dc3b2240d5c6809d5ef2.json
 create mode 100644 core/lib/dal/.sqlx/query-15858168fea6808c6d59d0e6d8f28a20420763a3a22899ad0e5f4b953b615a9e.json
 create mode 100644 core/lib/dal/.sqlx/query-1689c212d411ebd99a22210519ea2d505a1aabf52ff4136d2ed1b39c70dd1632.json
 create mode 100644 core/lib/dal/.sqlx/query-16e62660fd14f6d3731e69fa696a36408510bb05c15285dfa7708bc0b044d0c5.json
 create mode 100644 core/lib/dal/.sqlx/query-1766c0a21ba5918dd08f4babd8dbfdf10fb1cb43781219586c169fb976204331.json
 create mode 100644 core/lib/dal/.sqlx/query-1862d3a78e4e9068df1b8ce3bbe9f3f0b5d629fdb5c36ea1bfb93ed246be968e.json
 create mode 100644 core/lib/dal/.sqlx/query-18820f4ab0c3d2cc9187c5660f9f50e423eb6134659fe52bcc2b27ad16740c96.json
 create mode 100644 core/lib/dal/.sqlx/query-19314d74e94b610e2da6d728ca37ea964610e131d45f720f7a7b2a130fe9ed89.json
 create mode 100644 core/lib/dal/.sqlx/query-19545806b8f772075096e69f8665d98a3d9f7df162ae22a98c3c7620fcd13bd2.json
 create mode 100644 core/lib/dal/.sqlx/query-19b89495be8aa735db039ccc8a262786c58e54f132588c48f07d9537cf21d3ed.json
 create mode 100644 core/lib/dal/.sqlx/query-1ad3bbd791f3ff0d31683bf59187b84c5fd52f0352f0f0e311d054cb9e45b07e.json
 create mode 100644 core/lib/dal/.sqlx/query-1b4ebbfc96b4fd66ecbe64a6be80a01a6c7cbe9297cbb55d42533fddc18719b6.json
 create mode 100644 core/lib/dal/.sqlx/query-1bc6597117db032b87df33040d61610ffa7f169d560e79e89b99eedf681c6773.json
 create mode 100644 core/lib/dal/.sqlx/query-1c60010ded4e79886890a745a050fa6d65c05d8144bdfd143480834ead4bd8d5.json
 create mode 100644 core/lib/dal/.sqlx/query-1c994d418ada78586de829fc2d34d26e48e968c79834858c98b7a7f9dfc81910.json
 create mode 100644 core/lib/dal/.sqlx/query-1d2cc4b485536af350089cf7950be3b85419fde77038dd3de6c55aa9c55d375c.json
 create mode 100644 core/lib/dal/.sqlx/query-1d6b698b241cb6c5efd070a98165f6760cfeac185330d1d9c5cdb5b383ed8ed4.json
 create mode 100644 core/lib/dal/.sqlx/query-1dcb3afb0c1947f92981f61d95c099c4591ce3f8d51f3df99db0165e086f96af.json
 create mode 100644 core/lib/dal/.sqlx/query-1ea37ef1c3df72e5e9c50cfa1675fc7f60618209d0132e7937a1347b7e94b212.json
 create mode 100644 core/lib/dal/.sqlx/query-1ed2d7e5e98b15420a21650809d710ce910d0c9138d85cb55e16459c757dea03.json
 create mode 100644 core/lib/dal/.sqlx/query-1f25016c41169aa4ab14db2faf7b2d0413d0f89c309de4b31254c309116ea60c.json
 create mode 100644 core/lib/dal/.sqlx/query-1f46524410ce0f193dc6547499bde995ddddc621ee2149f08f905af2d8aadd03.json
 create mode 100644 core/lib/dal/.sqlx/query-1f75f2d88c1d2496e48b02f374e492cf2545944291dd0d42b937c0d0c7eefd47.json
 create mode 100644 core/lib/dal/.sqlx/query-2003dcf7bc807c7d345368538accd9b0128f82306e27e4c7258116082a54ab95.json
 create mode 100644 core/lib/dal/.sqlx/query-2028ba507f3ccd474f0261e571eb19a3a7feec950cb3e503588cf55d954a493a.json
 create mode 100644 core/lib/dal/.sqlx/query-20f84f9ec21459d8c7ad53241758eeab159533211d2ddbef41e6ff0ba937d04a.json
 create mode 100644 core/lib/dal/.sqlx/query-23be43bf705d679ca751c89353716065fcad42c6b621efb3a135a16b477dcfd9.json
 create mode 100644 core/lib/dal/.sqlx/query-245dc5bb82cc82df38e4440a7746ca08324bc86a72e4ea85c9c7962a6c8c9e30.json
 create mode 100644 core/lib/dal/.sqlx/query-24722ee4ced7f03e60b1b5ecaaa5234d536b064951a67d826ac49b7a3a095a1a.json
 create mode 100644 core/lib/dal/.sqlx/query-249cb862d44196cb6dc3945e907717b0dd3cec64b0b29f59b273f1c6952e01da.json
 create mode 100644 core/lib/dal/.sqlx/query-25aad4298d2459ef5aea7c4ea82eda1da000848ed4abf309b68989da33e1ce5a.json
 create mode 100644 core/lib/dal/.sqlx/query-26cb272c2a46a267c47681e0f1f07997b7e24682da56f84d812da2b9aeb14ca2.json
 create mode 100644 core/lib/dal/.sqlx/query-26e0b7eb1871d94ddc98254fece6381a9c4165e2727542eaeef3bbedd13a4f20.json
 create mode 100644 core/lib/dal/.sqlx/query-2737fea02599cdc163854b1395c42d4ef93ca238fd2fbc9155e6d012d0d1e113.json
 create mode 100644 core/lib/dal/.sqlx/query-2757b30c4641a346eb0226c706223efc18e51e6d4092188e081f4fafe92fe0ef.json
 create mode 100644 core/lib/dal/.sqlx/query-280cf015e40353e2833c0a70b77095596297be0d728a0aa2d9b180fb72de222b.json
 create mode 100644 core/lib/dal/.sqlx/query-293258ecb299be5f5e81696d14883f115cd97586bd795ee31f58fc14e56d58cb.json
 create mode 100644 core/lib/dal/.sqlx/query-2955e976281f9cbd98b7378c5ab52964b268b93c32fd280c49bf9f932884300d.json
 create mode 100644 core/lib/dal/.sqlx/query-2b626262c8003817ee02978f77452554ccfb5b83f00efdc12bed0f60ef439785.json
 create mode 100644 core/lib/dal/.sqlx/query-2c827c1c3cfa3552b90d4746c5df45d57f1f8b2558fdb374bf02e84d3c825a23.json
 create mode 100644 core/lib/dal/.sqlx/query-2d0c2e9ec4187641baef8a33229bffc78d92adb3c1e3ca60b12163e38c67047e.json
 create mode 100644 core/lib/dal/.sqlx/query-2d1e0f2e043c193052c9cc20f9efeb5f094160627bc09db4bda2dda9a8c11c44.json
 create mode 100644 core/lib/dal/.sqlx/query-2d31fcce581975a82d6156b52e35fb7a093b73727f75e0cb7db9cea480c95f5c.json
 create mode 100644 core/lib/dal/.sqlx/query-2d862097cfae49a1fb28ec0a05176085385c3a79d72f49669b4215a9454323c2.json
 create mode 100644 core/lib/dal/.sqlx/query-2d87b294817859e42258136b1cb78f42a877039094c3d6354928a03dad29451a.json
 create mode 100644 core/lib/dal/.sqlx/query-2dd7dbaeb2572404451e78a96f540e73a2778633bbf9d8e591ec912634639af9.json
 create mode 100644 core/lib/dal/.sqlx/query-2ddba807ac8ec5260bf92c77073eb89c728357c0744f209090824695a5d35fa3.json
 create mode 100644 core/lib/dal/.sqlx/query-2e0ea9434195270cc65cdca1f674d6b3b1d15b818974e4e403f4ac418ed40c2c.json
 create mode 100644 core/lib/dal/.sqlx/query-2e5b9ae1b81b0abfe7a962c93b3119a0a60dc9804175b2baf8b45939c74bd583.json
 create mode 100644 core/lib/dal/.sqlx/query-2eb25bfcfc1114de825dc4eeb0605d7d1c9e649663f6e9444c4425821d0a5b71.json
 create mode 100644 core/lib/dal/.sqlx/query-2eb617f3e34ac5b21f925053a45da2b4afc314a3b3e78b041b44c8a020a0ee12.json
 create mode 100644 core/lib/dal/.sqlx/query-31334f2878b1ac7d828d5bc22d65ef6676b2eac623c0f78634cae9072fe0498a.json
 create mode 100644 core/lib/dal/.sqlx/query-3191f5ba16af041123ffa941ad63fe77e649e9d110043d2ac22005dd61cfcfb9.json
 create mode 100644 core/lib/dal/.sqlx/query-31f12a8c44124bb2ce31889ac5295f3823926f69cb1d54874878e6d6c301bfd8.json
 create mode 100644 core/lib/dal/.sqlx/query-322d919ff1ef4675623a58af2b0e9ebdda648667d48d6b27ddf155f2fe01d77a.json
 create mode 100644 core/lib/dal/.sqlx/query-32792c6aee69cb8c8b928a209a3b04ba5868d1897553df85aac15b169ebb0732.json
 create mode 100644 core/lib/dal/.sqlx/query-33d6be45b246523ad76f9ae512322ff6372f63ecadb504a329499b02e7d3550e.json
 create mode 100644 core/lib/dal/.sqlx/query-3490fe0b778a03c73111bf8cbf426b0b3185a231bbf0b8b132a1a95bc157e827.json
 create mode 100644 core/lib/dal/.sqlx/query-35b87a3b7db0af87c6a95e9fe7ef9044ae85b579c7051301b40bd5f94df1f530.json
 create mode 100644 core/lib/dal/.sqlx/query-3671f23665664b8d6acf97e4f697e5afa28d855d87ea2f8c93e79c436749068a.json
 create mode 100644 core/lib/dal/.sqlx/query-3b013b93ea4a6766162c9f0c60517a7ffc993cf436ad3aeeae82ed3e330b07bd.json
 create mode 100644 core/lib/dal/.sqlx/query-3b0af308b0ce95a13a4eed40834279601234a489f73d843f2f314252ed4cb8b0.json
 create mode 100644 core/lib/dal/.sqlx/query-3b3fbcffd2702047045c2f358e8ac77b63879ab97a32eed8392b48cc46116a28.json
 create mode 100644 core/lib/dal/.sqlx/query-3b4d5009ec22f54cc7d305aa11d96ec397767a063dc21aa3add974cb9b070361.json
 create mode 100644 core/lib/dal/.sqlx/query-3c1d5f985be7e378211aa339c2c6387f2f3eda07a630503324bd6576dbdf8231.json
 create mode 100644 core/lib/dal/.sqlx/query-3c3abbf689fa64c6da7de69fd916769dbb04d3a61cf232892236c974660ffe64.json
 create mode 100644 core/lib/dal/.sqlx/query-3c60ca71b8a3b544f5fe9d7f2fbb249026665c9fb17b6f53a2154473547cbbfd.json
 create mode 100644 core/lib/dal/.sqlx/query-3e170eea3a5ea5c7389c15f76c6489745438eae73a07b577aa25bd08adf95354.json
 create mode 100644 core/lib/dal/.sqlx/query-3ec365c5c81f4678a905ae5bbd48b87ead36f593488437c6f67da629ca81e4fa.json
 create mode 100644 core/lib/dal/.sqlx/query-41c9f45d6eb727aafad0d8c18024cee5c602d275bb812022cc8fdabf0a60e151.json
 create mode 100644 core/lib/dal/.sqlx/query-43c7e352d09f69de1a182196aea4de79b67833f17d252b5b0e8e00cd6e75b5c1.json
 create mode 100644 core/lib/dal/.sqlx/query-46c4696fff5a4b8cc5cb46b05645da82065836fe17687ffad04126a6a8b2b27c.json
 create mode 100644 core/lib/dal/.sqlx/query-47c2f23d9209d155f3f32fd21ef7931a02fe5ffaf2c4dc2f1e7a48c0e932c060.json
 create mode 100644 core/lib/dal/.sqlx/query-481d3cdb6c9a90843b240dba84377cb8f1340b483faedbbc2b71055aa5451cae.json
 create mode 100644 core/lib/dal/.sqlx/query-4d263992ed6d5abbd7d3ca43af9d772d8801b0ae673b7173ae08a1fa6cbf67b2.json
 create mode 100644 core/lib/dal/.sqlx/query-4d50dabc25d392e6b9d0dbe0e386ea7ef2c1178b1b0394a17442185b79f2d77d.json
 create mode 100644 core/lib/dal/.sqlx/query-4d84bb4e180b7267bee5e3c1f83c6d47e8e1b4b5124c82c1f35d405204fcf783.json
 create mode 100644 core/lib/dal/.sqlx/query-4d92a133a36afd682a84fbfd75aafca34d61347e0e2e29fb07ca3d1b8b1f309c.json
 create mode 100644 core/lib/dal/.sqlx/query-525123d4ec2b427f1c171f30d0937d8d542b4f14cf560972c005ab3cc13d1f63.json
 create mode 100644 core/lib/dal/.sqlx/query-532a80b0873871896dd318beba5ec427a099492905a1feee512dc43f39d10047.json
 create mode 100644 core/lib/dal/.sqlx/query-534822a226068cde83ad8c30b569a8f447824a5ab466bb6eea1710e8aeaa2c56.json
 create mode 100644 core/lib/dal/.sqlx/query-53c04fd528752c0e0ef7ffa1f68a7ea81d8d10c76bbae540013667e13230e2ea.json
 create mode 100644 core/lib/dal/.sqlx/query-53f78fdee39b113d2f55f6f951bd94f28b7b2b60d551d552a9b0bab1f1791e39.json
 create mode 100644 core/lib/dal/.sqlx/query-5503575d9377785894de6cf6139a8d4768c6a803a1a90889e5a1b8254c315231.json
 create mode 100644 core/lib/dal/.sqlx/query-556f9b9e82d3a9399660dfa4bbf252f26335699a4e7f0347d7e894320245271d.json
 create mode 100644 core/lib/dal/.sqlx/query-55b0b4c569c0aaf9741afc85400ecd50a04799ffd36be0e17c56f47fcdbc8f60.json
 create mode 100644 core/lib/dal/.sqlx/query-5659480e5d79dab3399e35539b240e7eb9f598999c28015a504605f88bf84b33.json
 create mode 100644 core/lib/dal/.sqlx/query-5821f1446983260168cec366af26009503182c300877e74a8539f231050e6f85.json
 create mode 100644 core/lib/dal/.sqlx/query-5880a85667ccc26d392ff6272e317afe4e38bcfe5ce93bf229d68622066ab8a1.json
 create mode 100644 core/lib/dal/.sqlx/query-58aed39245c72d231b268ce83105bb2036d21f60d4c6934f9145730ac35c04de.json
 create mode 100644 core/lib/dal/.sqlx/query-59cb0dd78fadc121e2b1ebbc8a063f089c91aead2bc9abb284697e65840f1e8f.json
 create mode 100644 core/lib/dal/.sqlx/query-5aaed2a975042cc9b7b9d88e5fd5db07667280abef27cc73159d2fd9c95b209b.json
 create mode 100644 core/lib/dal/.sqlx/query-5c7b6b58261faa0a164181987eec4055c22895316ce68d9d41619db7fcfb7563.json
 create mode 100644 core/lib/dal/.sqlx/query-5d493cbce749cc5b56d4069423597b16599abaf51df0f19effe1a536376cf6a6.json
 create mode 100644 core/lib/dal/.sqlx/query-5e781f84ec41edd0941fa84de837effac442434c6e734d977e6682a7484abe7f.json
 create mode 100644 core/lib/dal/.sqlx/query-5f6885b5457aaa78e10917ae5b8cd0bc0e8923a6bae64f22f09242766835ee0c.json
 create mode 100644 core/lib/dal/.sqlx/query-5f8fc05ae782846898295d210dd3d55ff2b1510868dfe80d14fffa3f5ff07b83.json
 create mode 100644 core/lib/dal/.sqlx/query-61b2b858d4636809c21838635aa52aeb5f06c26f68d131dd242f6ed68816c513.json
 create mode 100644 core/lib/dal/.sqlx/query-61bc330d6d1b5fddec78342c1b0f00e82b0b3ad9ae36bf4fe44d7e85b74c6f49.json
 create mode 100644 core/lib/dal/.sqlx/query-65cc4517c3693c8bdb66b332151d4cb46ca093129707ee14f2fa42dc1800cc9e.json
 create mode 100644 core/lib/dal/.sqlx/query-66554ab87e5fe4776786217d1f71a525c87d390df21250ab4dce08e09be72591.json
 create mode 100644 core/lib/dal/.sqlx/query-6692ff6c0fbb2fc94f5cd2837a43ce80f9b2b27758651ccfc09df61a4ae8a363.json
 create mode 100644 core/lib/dal/.sqlx/query-66e012ce974c38d9fe84cfc7eb28927f9e976319a305e0928ff366d535a97104.json
 create mode 100644 core/lib/dal/.sqlx/query-68936a53e5b80576f3f341523e6843eb48b5e26ee92cd8476f50251e8c32610d.json
 create mode 100644 core/lib/dal/.sqlx/query-68c891ee9d71cffe709731f2804b734d5d255e36e48668b3bfc25a0f86ea52e7.json
 create mode 100644 core/lib/dal/.sqlx/query-6ae2ed34230beae0e86c584e293e7ee767e4c98706246eb113498c0f817f5f38.json
 create mode 100644 core/lib/dal/.sqlx/query-6b327df84d2b3b31d02db35fd5d91a8d67abcdb743a619ed0d1b9c16206a3c20.json
 create mode 100644 core/lib/dal/.sqlx/query-6bd3094be764e6378fe52b5bb533260b49ce42daaf9dbe8075daf0a8e0ad9914.json
 create mode 100644 core/lib/dal/.sqlx/query-6c0d03b1fbe6f47546bc34c6b2eab01cb2c55bf86d2c8c99abb1b7ca21cf75c0.json
 create mode 100644 core/lib/dal/.sqlx/query-708b2b3e40887e6d8d2d7aa20448a58479487686d774e6b2b1391347bdafe06d.json
 create mode 100644 core/lib/dal/.sqlx/query-70979db81f473950b2fae7816dbad7fe3464f2619cee2d583accaa829aa12b94.json
 create mode 100644 core/lib/dal/.sqlx/query-72a4f50355324cce85ebaef9fa32826095e9290f0c1157094bd0c44e06012e42.json
 create mode 100644 core/lib/dal/.sqlx/query-72ff9df79e78129cb96d14ece0198129b44534062f524823666ed432d2fcd345.json
 create mode 100644 core/lib/dal/.sqlx/query-73c4bf1e35d49faaab9f7828e80f396f9d193615d70184d4327378a7fc8a5665.json
 create mode 100644 core/lib/dal/.sqlx/query-7560ba61643a8ec8eeefbe6034226313c255ce356a9a4e25c098484d3129c914.json
 create mode 100644 core/lib/dal/.sqlx/query-759b80414b5bcbfe03a0e1e15b37f92c4cfad9313b1461e12242d9becb59e0b0.json
 create mode 100644 core/lib/dal/.sqlx/query-75a3cf6f502ebb1a0e92b672dc6ce56b53cc4ca0a8c6ee7cac1b9a5863000be3.json
 create mode 100644 core/lib/dal/.sqlx/query-75f6eaa518e7840374c4e44b0788bf92c7f2c55386c8208e3a82b30456abd5b4.json
 create mode 100644 core/lib/dal/.sqlx/query-75fa24c29dc312cbfa89bf1f4a04a42b4ead6964edd17bfcacb4a828492bba60.json
 create mode 100644 core/lib/dal/.sqlx/query-76cb9ad97b70d584b19af194576dcf2324f380932698386aa8f9751b1fa24a7b.json
 create mode 100644 core/lib/dal/.sqlx/query-77a43830ca31eac85a3c03d87696bf94a013e49bf50ce23f4de4968781df0796.json
 create mode 100644 core/lib/dal/.sqlx/query-77b35855fbb989f6314469b419726dc7bb98e0f7feaf14656307e20bd2bb0b6c.json
 create mode 100644 core/lib/dal/.sqlx/query-78978c19282961c5b3dc06352b41caa4cca66d6ad74b2cd1a34ea5f7bc1e6909.json
 create mode 100644 core/lib/dal/.sqlx/query-7a2145e2234a7896031bbc1ce82715e903f3b399886c2c73e838bd924fed6776.json
 create mode 100644 core/lib/dal/.sqlx/query-7a8fffe8d4e3085e00c98f770d250d625f057acf1440b6550375ce5509a816a6.json
 create mode 100644 core/lib/dal/.sqlx/query-7fccc28bd829bce334f37197ee6b139e943f3ad2a41387b610606a42b7f03283.json
 create mode 100644 core/lib/dal/.sqlx/query-806b82a9effd885ba537a2a1c7d7227120a8279db1875d26ccae5ee0785f46a9.json
 create mode 100644 core/lib/dal/.sqlx/query-8182690d0326b820d23fba49d391578db18c29cdca85b8b6aad86fe2a9bf6bbe.json
 create mode 100644 core/lib/dal/.sqlx/query-81869cb392e9fcbb71ceaa857af77b39429d56072f63b3530c576fb31d7a56f9.json
 create mode 100644 core/lib/dal/.sqlx/query-83a931ceddf34e1c760649d613f534014b9ab9ca7725e14fb17aa050d9f35eb8.json
 create mode 100644 core/lib/dal/.sqlx/query-84c804db9d60a4c1ebbce5e3dcdf03c0aad3ac30d85176e0a4e35f72bbb21b12.json
 create mode 100644 core/lib/dal/.sqlx/query-852aa5fe1c3b2dfe875cd4adf0d19a00c170cf7725d95dd6eb8b753fa5facec8.json
 create mode 100644 core/lib/dal/.sqlx/query-8625ca45ce76b8c8633d390e35e0c5f885240d99ea69140a4636b00469d08497.json
 create mode 100644 core/lib/dal/.sqlx/query-877d20634068170326ab5801b69c70aff49e60b7def3d93b9206e650c259168b.json
 create mode 100644 core/lib/dal/.sqlx/query-878c9cdfd69ad8988d049041edd63595237a0c54f67b8c669dfbb4fca32757e4.json
 create mode 100644 core/lib/dal/.sqlx/query-88c629334e30bb9f5c81c858aa51af63b86e8da6d908d48998012231e1d66a60.json
 create mode 100644 core/lib/dal/.sqlx/query-8903ba5db3f87851c12da133573b4207b69cc48b4ba648e797211631be612b69.json
 create mode 100644 core/lib/dal/.sqlx/query-894665c2c467bd1aaeb331b112c567e2667c63a033baa6b427bd8a0898c08bf2.json
 create mode 100644 core/lib/dal/.sqlx/query-8a7a57ca3d4d65da3e0877c003902c690c33686c889d318b1d64bdd7fa6374db.json
 create mode 100644 core/lib/dal/.sqlx/query-8b9e5d525c026de97c0a732b1adc8dc4bd57e32dfefe1017acba9a15fc14b895.json
 create mode 100644 core/lib/dal/.sqlx/query-8f5e89ccadd4ea1da7bfe9793a1cbb724af0f0216433a70f19d784e3f2afbc9f.json
 create mode 100644 core/lib/dal/.sqlx/query-90f7657bae05c4bad6902c6bfb1b8ba0b771cb45573aca81db254f6bcfc17c77.json
 create mode 100644 core/lib/dal/.sqlx/query-9334df89c9562d4b35611b8e5ffb17305343df99ebc55f240278b5c4e63f89f5.json
 create mode 100644 core/lib/dal/.sqlx/query-95ea0522a3eff6c0d2d0b1c58fd2767e112b95f4d103c27acd6f7ede108bd300.json
 create mode 100644 core/lib/dal/.sqlx/query-966dddc881bfe6fd94b56f587424125a2633ddb6abaa129f2b12389140d83c3f.json
 create mode 100644 core/lib/dal/.sqlx/query-9955b9215096f781442153518c4f0a9676e26f422506545ccc90b7e8a36c8d47.json
 create mode 100644 core/lib/dal/.sqlx/query-995cecd37a5235d1acc2e6fc418d9b6a1a6fe629f9a02c8e33330a0efda64068.json
 create mode 100644 core/lib/dal/.sqlx/query-99acb091650478fe0feb367b1d64561347b81f8931cc2addefa907c9aa9355e6.json
 create mode 100644 core/lib/dal/.sqlx/query-99d9ee2a0d0450acefa0d9b6c031e30606fddf6631c859ab03819ec476bcf005.json
 create mode 100644 core/lib/dal/.sqlx/query-99dd6f04e82585d81ac23bc4871578179e6269c6ff36877cedee264067ccdafc.json
 create mode 100644 core/lib/dal/.sqlx/query-9b90f7a7ffee3cd8439f90a6f79693831e2ab6d6d3c1805df5aa51d76994ec19.json
 create mode 100644 core/lib/dal/.sqlx/query-9c2a5f32c627d3a5c6f1e87b31ce3b0fd67aa1f5f7ea0de673a2fbe1f742db86.json
 create mode 100644 core/lib/dal/.sqlx/query-9cfcde703a48b110791d2ae1103c9317c01d6e35db3b07d0a31f436e7e3c7c40.json
 create mode 100644 core/lib/dal/.sqlx/query-9de5acb3de1b96ff8eb62a6324e8e221a8ef9014458cc7f1dbc60c056a0768a0.json
 create mode 100644 core/lib/dal/.sqlx/query-9ef2f43e6201cc00a0e1425a666a36532fee1450733849852dfd20e18ded1f03.json
 create mode 100644 core/lib/dal/.sqlx/query-a0e2b2c034cc5f668f0b3d43b94d2e2326d7ace079b095def52723a45b65d3f3.json
 create mode 100644 core/lib/dal/.sqlx/query-a2d02b71e3dcc29a2c0c20b44392cfbaf09164aecfa5eed8d7142518ad96abea.json
 create mode 100644 core/lib/dal/.sqlx/query-a4861c931e84d897c27f666de1c5ca679a0459a012899a373c67393d30d12601.json
 create mode 100644 core/lib/dal/.sqlx/query-a48c92f557e5e3a2674ce0dee9cd92f5a547150590b8c221c4065eab11175c7a.json
 create mode 100644 core/lib/dal/.sqlx/query-a4a4b0bfbe05eac100c42a717e8d7cbb0bc526ebe61a07f735d4ab587058b22c.json
 create mode 100644 core/lib/dal/.sqlx/query-a4fcd075b68467bb119e49e6b20a69138206dfeb41f3daff4a3eef1de0bed4e4.json
 create mode 100644 core/lib/dal/.sqlx/query-a74d029f58801ec05d8d14a3b065d93e391600ab9da2e5fd4e8b139ab3d77583.json
 create mode 100644 core/lib/dal/.sqlx/query-a83f853b1d63365e88975a926816c6e7b4595f3e7c3dca1d1590de5437187733.json
 create mode 100644 core/lib/dal/.sqlx/query-a84ee70bec8c03bd51e1c6bad44c9a64904026506914abae2946e5d353d6a604.json
 create mode 100644 core/lib/dal/.sqlx/query-a91c23c4d33771122cec2589c6fe2757dbc13be6b30f5840744e5e0569adc66e.json
 create mode 100644 core/lib/dal/.sqlx/query-aa91697157517322b0dbb53dca99f41220c51f58a03c61d6b7789eab0504e320.json
 create mode 100644 core/lib/dal/.sqlx/query-aaf4fb97c95a5290fb1620cd868477dcf21955e0921ba648ba2e751dbfc3cb45.json
 create mode 100644 core/lib/dal/.sqlx/query-ac505ae6cfc744b07b52997db789bdc9efc6b89fc0444caf8271edd7dfe4a3bc.json
 create mode 100644 core/lib/dal/.sqlx/query-ada54322a28012b1b761f3631c4cd6ca26aa2fa565fcf208b6985f461c1868f2.json
 create mode 100644 core/lib/dal/.sqlx/query-aeda34b1beadca72e3e600ea9ae63f436a4f16dbeb784d0d28be392ad96b1c49.json
 create mode 100644 core/lib/dal/.sqlx/query-aefea1f3e87f28791cc547f193a895006e23ec73018f4b4e0a364a741f5c9781.json
 create mode 100644 core/lib/dal/.sqlx/query-af72fabd90eb43fb315f46d7fe9f724216807ffd481cd6f7f19968e42e52b284.json
 create mode 100644 core/lib/dal/.sqlx/query-afc24bd1407dba82cd3dc9e7ee71ac4ab2d73bda6022700aeb0a630a2563a4b4.json
 create mode 100644 core/lib/dal/.sqlx/query-b17c71983da060f08616e001b42f8dcbcb014b4f808c6232abd9a83354c995ac.json
 create mode 100644 core/lib/dal/.sqlx/query-b23ddb16513d69331056b94d466663a9c5ea62ea7c99a77941eb8f05d4454125.json
 create mode 100644 core/lib/dal/.sqlx/query-b321c5ba22358cbb1fd9c627f1e7b56187686173327498ac75424593547c19c5.json
 create mode 100644 core/lib/dal/.sqlx/query-b33e8da69281efe7750043e409d9871731c41cef01da3d6aaf2c53f7b17c47b2.json
 create mode 100644 core/lib/dal/.sqlx/query-b367ecb1ebee86ec598c4079591f8c12deeca6b8843fe3869cc2b02b30da5de6.json
 create mode 100644 core/lib/dal/.sqlx/query-b3d71dbe14bcd94131b29b64dcb49b6370c211a7fc24ad03a5f0e327f9d18040.json
 create mode 100644 core/lib/dal/.sqlx/query-b4304b9afb9f838eee1fe95af5fd964d4bb39b9dcd18fb03bc11ce2fb32b7fb3.json
 create mode 100644 core/lib/dal/.sqlx/query-b452354c888bfc19b5f4012582061b86b1abd915739533f9982fea9d8e21b9e9.json
 create mode 100644 core/lib/dal/.sqlx/query-b4794e6a0c2366d5d95ab373c310103263af3ff5cb6c9dc5df59d3cd2a5e56b4.json
 create mode 100644 core/lib/dal/.sqlx/query-b49478150dbc8731c531ef3eddc0c2cfff08e6fef3c3824d20dfdf2d0f73e671.json
 create mode 100644 core/lib/dal/.sqlx/query-b4a0444897b60c7061363a48b2b5386a2fd53492f3df05545edbfb0ec0f059d2.json
 create mode 100644 core/lib/dal/.sqlx/query-b5fd77f515fe168908cc90e44d0697e36b3c2a997038c30553f7727cdfa17361.json
 create mode 100644 core/lib/dal/.sqlx/query-b678edd9f6ea97b8f086566811f651aa072f030c70a5e6de38843a1d9afdf329.json
 create mode 100644 core/lib/dal/.sqlx/query-b75e3d2fecbf5d85e93848b7a35180abbd76956e073432af8d8500327b74e488.json
 create mode 100644 core/lib/dal/.sqlx/query-b7bf6999002dd89dc1224468ca79c9a85e3c24fca1bf87905f7fc68fe2ce3276.json
 create mode 100644 core/lib/dal/.sqlx/query-bb1904a01a3860b5440ae23763d6d5ee4341edadb8a86b459a07427b7e265e98.json
 create mode 100644 core/lib/dal/.sqlx/query-bd51c9d93b103292f5acbdb266ba4b4e2af48907fa9321064ddb24ac02ab17cd.json
 create mode 100644 core/lib/dal/.sqlx/query-bd74435dc6dba3f4173858682ee5661d1df4ec053797d75cfd32272be4f485e7.json
 create mode 100644 core/lib/dal/.sqlx/query-be16d820c124dba9f4a272f54f0b742349e78e6e4ce3e7c9a0dcf6447eedc6d8.json
 create mode 100644 core/lib/dal/.sqlx/query-bfb80956a18eabf266f5b5a9d62912d57f8eb2a38bdb7884fc812a2897a3a660.json
 create mode 100644 core/lib/dal/.sqlx/query-bfc84bcf0985446b337467dd1da709dbee508ad6d1cae43e477cf1bef8cb4aa9.json
 create mode 100644 core/lib/dal/.sqlx/query-c038cecd8184e5e8d9f498116bff995b654adfe328cb825a44ad36b4bf9ec8f2.json
 create mode 100644 core/lib/dal/.sqlx/query-c03df29f4661fa47c1412bd82ba379f3b2e9ff1bc6e8e38f473fb4950c8e4b77.json
 create mode 100644 core/lib/dal/.sqlx/query-c10cf20825de4d24300c7ec50d4a653852f7e43670076eb2ebcd49542a870539.json
 create mode 100644 core/lib/dal/.sqlx/query-c139df45a977290d1c2c7987fb9c1d66aeaeb6e2d36fddcf96775f01716a8a74.json
 create mode 100644 core/lib/dal/.sqlx/query-c14837e92dbb02f2fde7109f524432d865852afe0c60e11a2c1800d30599aa61.json
 create mode 100644 core/lib/dal/.sqlx/query-c192377c08abab9306c5b0844368aa0f8525832cb4075e831c0d4b23c5675b99.json
 create mode 100644 core/lib/dal/.sqlx/query-c23d5ff919ade5898c6a912780ae899e360650afccb34f5cc301b5cbac4a3d36.json
 create mode 100644 core/lib/dal/.sqlx/query-c2fe6a5476e69c9588eec73baba9d0e2d571533d4d5f683919987b6f8cbb00e0.json
 create mode 100644 core/lib/dal/.sqlx/query-c36abacc705a2244d423599779e38d60d6e93bcb34fd20422e227714fccbf6b7.json
 create mode 100644 core/lib/dal/.sqlx/query-c41312e01aa66897552e8be9acc8d43c31ec7441a7f6c5040e120810ebbb72f7.json
 create mode 100644 core/lib/dal/.sqlx/query-c4ea7812861a283448095acbb1164420a25eef488de2b67e91ed39657667bd4a.json
 create mode 100644 core/lib/dal/.sqlx/query-c5656667e5610ffb33e7b977ac92b7c4d79cbd404e0267794ec203df0cbb169d.json
 create mode 100644 core/lib/dal/.sqlx/query-c5d6e1d5d834409bd793c8ce1fb2c212918b31dabebf08a84efdfe1feee85765.json
 create mode 100644 core/lib/dal/.sqlx/query-c6d523c6ae857022318350a2f210d7eaeeb4549ed59b58f8d984be2a22a80355.json
 create mode 100644 core/lib/dal/.sqlx/query-c706a49ff54f6b424e24d061fe7ac429aac3c030f7e226a1264243d8cdae038d.json
 create mode 100644 core/lib/dal/.sqlx/query-c809f42a221b18a767e9dd0286503d8bd356f2f9cc249cd8b90caa5a8b5918e3.json
 create mode 100644 core/lib/dal/.sqlx/query-c9e05ebc7b61c1f409c330bc110bed26c831730944237b74bed98869c83b3ca5.json
 create mode 100644 core/lib/dal/.sqlx/query-ca9d06141265b8524ee28c55569cb21a635037d89ce24dd3ad58ffaadb59594a.json
 create mode 100644 core/lib/dal/.sqlx/query-cb98d84fc34af1e4a4c2f427c5bb4afd384063ae394a847b26304dd18d490ab4.json
 create mode 100644 core/lib/dal/.sqlx/query-cddf48514aa2aa249d0530d44c741368993009bb4bd90c2ad177ce56317aa04c.json
 create mode 100644 core/lib/dal/.sqlx/query-ce5779092feb8a3d3e2c5e395783e67f08f2ead5f55bfb6594e50346bf9cf2ef.json
 create mode 100644 core/lib/dal/.sqlx/query-cea9fe027a6a0ada827f23b48ac32432295b2f7ee40bf13522a6edbd236f1970.json
 create mode 100644 core/lib/dal/.sqlx/query-d14b52df2cd9f9e484c60ba00383b438f14b68535111cf2cedd363fc646aac99.json
 create mode 100644 core/lib/dal/.sqlx/query-d1b261f4057e4113b96eb87c9e20015eeb3ef2643ceda3024504a471b24d1283.json
 create mode 100644 core/lib/dal/.sqlx/query-d3b09cbcddf6238b358d32d57678242aad3e9a47400f6d6837a35f4c54a216b9.json
 create mode 100644 core/lib/dal/.sqlx/query-d70cfc158e31dd2d5c942d24f81fd17f833fb15b58b0110c7cc566946db98e76.json
 create mode 100644 core/lib/dal/.sqlx/query-d712707e47e143c52330ea6e0513d2839f0f928c06b8020eecec38e895f99b42.json
 create mode 100644 core/lib/dal/.sqlx/query-d7e8eabd7b43ff62838fbc847e4813d2b2d411bd5faf8306cd48db500532b711.json
 create mode 100644 core/lib/dal/.sqlx/query-d7ed82f0d012f72374edb2ebcec33c83477d65a6f8cb2673f67b3148cd95b436.json
 create mode 100644 core/lib/dal/.sqlx/query-d8e0f98a67ffb53a1caa6820f8475da2787332deca5708d1d08730cdbfc73541.json
 create mode 100644 core/lib/dal/.sqlx/query-d8e3ee346375e4b6a8b2c73a3827e88abd0f8164c2413dc83c91c29665ca645e.json
 create mode 100644 core/lib/dal/.sqlx/query-da51a5220c2b964303292592c34e8ee5e54b170de9da863bbdbc79e3f206640b.json
 create mode 100644 core/lib/dal/.sqlx/query-db3e74f0e83ffbf84a6d61e560f2060fbea775dc185f639139fbfd23e4d5f3c6.json
 create mode 100644 core/lib/dal/.sqlx/query-dc16d0fac093a52480b66dfcb5976fb01e6629e8c982c265f2af1d5000090572.json
 create mode 100644 core/lib/dal/.sqlx/query-dc481f59aae632ff6f5fa23f5c5c82627a936f7ea9f6c354eca4bea76fac6b10.json
 create mode 100644 core/lib/dal/.sqlx/query-dc764e1636c4e958753c1fd54562e2ca92fdfdf01cfd0b11f5ce24f0458a5e48.json
 create mode 100644 core/lib/dal/.sqlx/query-dd55e46dfa5ba3692d9620088a3550b8db817630d1a9341db4a1f453f12e64fb.json
 create mode 100644 core/lib/dal/.sqlx/query-dea22358feed1418430505767d03aa4239d3a8be71b47178b4b8fb11fe898b31.json
 create mode 100644 core/lib/dal/.sqlx/query-df00e33809768120e395d8f740770a4e629b2a1cde641e74e4e55bb100df809f.json
 create mode 100644 core/lib/dal/.sqlx/query-df3b08549a11729fb475341b8f38f8af02aa297d85a2695c5f448ed14b2d7386.json
 create mode 100644 core/lib/dal/.sqlx/query-e073cfdc7a00559994ce04eca15f35d55901fb1e6805f23413ea43e3637540a0.json
 create mode 100644 core/lib/dal/.sqlx/query-e3479d12d9dc97001cf03dc42d9b957e92cd375ec33fe16f855f319ffc0b208e.json
 create mode 100644 core/lib/dal/.sqlx/query-e5a90d17b2c25744df4585b53678c7ffd9a04eae27afbdf37a6ba8ff7ac85f3b.json
 create mode 100644 core/lib/dal/.sqlx/query-e63cc86a8d527dae2905b2af6a66bc6419ba51514519652e055c769b096015f6.json
 create mode 100644 core/lib/dal/.sqlx/query-e71c39b93ceba5416ff3d988290cb35d4d07d47f33fe1a5b9e9fe1f0ae09b705.json
 create mode 100644 core/lib/dal/.sqlx/query-e74a34a59e6afda689b0ec9e19071ababa66e4a443fbefbfffca72b7540b075b.json
 create mode 100644 core/lib/dal/.sqlx/query-e76217231b4d896118e9630de9485b19e1294b3aa6e084d2051bb532408672be.json
 create mode 100644 core/lib/dal/.sqlx/query-e9adf5b5a1ab84c20a514a7775f91a9984685eaaaa0a8b223410d560a15a3034.json
 create mode 100644 core/lib/dal/.sqlx/query-e9ca863d6e77edd39a9fc55700a6686e655206601854799139c22c017a214744.json
 create mode 100644 core/lib/dal/.sqlx/query-ea904aa930d602d33b6fbc1bf1178a8a0ec739f4ddec8ffeb3a87253aeb18d30.json
 create mode 100644 core/lib/dal/.sqlx/query-ec04b89218111a5dc8d5ade506ac3465e2211ef3013386feb12d4cc04e0eade9.json
 create mode 100644 core/lib/dal/.sqlx/query-edc61e1285bf6d3837acc67af4f15aaade450980719933089824eb8c494d64a4.json
 create mode 100644 core/lib/dal/.sqlx/query-ee17d2b3edfe705d14811e3938d4312b2b780563a9fde48bae5e51650475670f.json
 create mode 100644 core/lib/dal/.sqlx/query-ef331469f78c6ff68a254a15b55d056cc9bae25bc070c5de8424f88fab20e5ea.json
 create mode 100644 core/lib/dal/.sqlx/query-ef687be83e496d6647e4dfef9eabae63443c51deb818dd0affd1a0949b161737.json
 create mode 100644 core/lib/dal/.sqlx/query-f012d0922265269746396dac8f25ff66f2c3b2b83d45360818a8782e56aa3d66.json
 create mode 100644 core/lib/dal/.sqlx/query-f1a90090c192d68367e799188356efe8d41759bbdcdd6d39db93208f2664f03a.json
 create mode 100644 core/lib/dal/.sqlx/query-f22c5d136fe68bbfcee60beb304cfdc050b85e6d773b13f9699f15c335d42593.json
 create mode 100644 core/lib/dal/.sqlx/query-f39372e37160df4897f62a800694867ed765dcb9dc60754df9df8700d4244bfb.json
 create mode 100644 core/lib/dal/.sqlx/query-f4362a61ab05af3d71a3232d2f017db60405a887f9f7fa0ca60aa7fc879ce630.json
 create mode 100644 core/lib/dal/.sqlx/query-f63586d59264eab7388ad1de823227ecaa45d76d1ba260074898fe57c059a15a.json
 create mode 100644 core/lib/dal/.sqlx/query-f717ca5d0890759496739a678955e6f8b7f88a0894a7f9e27fc26f93997d37c7.json
 create mode 100644 core/lib/dal/.sqlx/query-f91790ae5cc4b087bf942ba52dd63a1e89945f8d5e0f4da42ecf6313c4f5967e.json
 create mode 100644 core/lib/dal/.sqlx/query-f922c0718c9dda2f285f09cbabad425bac8ed3d2780c60c9b63afbcea131f9a0.json
 create mode 100644 core/lib/dal/.sqlx/query-fcc108fd59203644ff86ded0505c7dfb7aad7261e5fc402d845aedc3b91a4e99.json
 create mode 100644 core/lib/dal/.sqlx/query-fcddeb96dcd1611dedb2091c1be304e8a35fd65bf37e976b7106f57c57e70b9b.json
 create mode 100644 core/lib/dal/.sqlx/query-fde16cd2d3de03f4b61625fa453a58f82acd817932415f04bcbd05442ad80c2b.json
 create mode 100644 core/lib/dal/.sqlx/query-fdffa5841554286a924b217b5885d9ec9b3f628c3a4cf5e10580ea6e5e3a2429.json
 create mode 100644 core/lib/dal/.sqlx/query-fe501f86f4bf6c5b8ccc2e039a4eb09b538a67d1c39fda052c4f4ddb23ce0084.json
 create mode 100644 core/lib/dal/.sqlx/query-fec7b791e371a4c58350b6537065223f4599d4128db588d8645f3d106de5f50b.json
 rename core/lib/{zksync_core => dal}/build.rs (64%)
 create mode 100644 core/lib/dal/migrations/20231013163109_create_snapshots_table.down.sql
 create mode 100644 core/lib/dal/migrations/20231013163109_create_snapshots_table.up.sql
 create mode 100644 core/lib/dal/migrations/20231128123456_create_consensus_replica_state_table.down.sql
 create mode 100644 core/lib/dal/migrations/20231128123456_create_consensus_replica_state_table.up.sql
 create mode 100644 core/lib/dal/migrations/20231208134254_drop_old_witness_generation_related_tables.down.sql
 create mode 100644 core/lib/dal/migrations/20231208134254_drop_old_witness_generation_related_tables.up.sql
 create mode 100644 core/lib/dal/migrations/20231213192041_snapshot-recovery.down.sql
 create mode 100644 core/lib/dal/migrations/20231213192041_snapshot-recovery.up.sql
 create mode 100644 core/lib/dal/migrations/20231225083442_add-pub-data-input.down.sql
 create mode 100644 core/lib/dal/migrations/20231225083442_add-pub-data-input.up.sql
 create mode 100644 core/lib/dal/migrations/20231229181653_fair_pubdata_price.down.sql
 create mode 100644 core/lib/dal/migrations/20231229181653_fair_pubdata_price.up.sql
 create mode 100644 core/lib/dal/migrations/20240103123456_move_consensus_fields_to_new_table.down.sql
 create mode 100644 core/lib/dal/migrations/20240103123456_move_consensus_fields_to_new_table.up.sql
 create mode 100644 core/lib/dal/migrations/20240103125908_remove_old_prover_subsystems.down.sql
 create mode 100644 core/lib/dal/migrations/20240103125908_remove_old_prover_subsystems.up.sql
 create mode 100644 core/lib/dal/migrations/20240104121833_l1-batch-predicted-circuits.down.sql
 create mode 100644 core/lib/dal/migrations/20240104121833_l1-batch-predicted-circuits.up.sql
 create mode 100644 core/lib/dal/src/consensus_dal.rs
 delete mode 100644 core/lib/dal/src/gpu_prover_queue_dal.rs
 rename core/lib/{zksync_core/src/consensus => dal/src/models}/proto/mod.proto (69%)
 create mode 100644 core/lib/dal/src/models/proto/mod.rs
 delete mode 100644 core/lib/dal/src/prover_dal.rs
 create mode 100644 core/lib/dal/src/snapshot_recovery_dal.rs
 create mode 100644 core/lib/dal/src/snapshots_creator_dal.rs
 create mode 100644 core/lib/dal/src/snapshots_dal.rs
 delete mode 100644 core/lib/dal/src/witness_generator_dal.rs
 delete mode 100644 core/lib/env_config/src/circuit_synthesizer.rs
 delete mode 100644 core/lib/env_config/src/fetcher.rs
 delete mode 100644 core/lib/env_config/src/prover.rs
 delete mode 100644 core/lib/env_config/src/prover_group.rs
 create mode 100644 core/lib/env_config/src/snapshots_creator.rs
 create mode 100644 core/lib/eth_client/src/clients/generic.rs
 create mode 100644 core/lib/multivm/src/glue/types/zk_evm_1_4_1.rs
 create mode 100644 core/lib/multivm/src/interface/traits/tracers/dyn_tracers/vm_1_4_1.rs
 create mode 100644 core/lib/multivm/src/tracers/call_tracer/metrics.rs
 create mode 100644 core/lib/multivm/src/tracers/call_tracer/vm_boojum_integration/mod.rs
 create mode 100644 core/lib/multivm/src/tracers/storage_invocation/vm_boojum_integration/mod.rs
 create mode 100644 core/lib/multivm/src/tracers/validator/vm_boojum_integration/mod.rs
 create mode 100644 core/lib/multivm/src/utils.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/README.md
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/bootloader_state/l2_block.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/bootloader_state/mod.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/bootloader_state/snapshot.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/bootloader_state/state.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/bootloader_state/tx.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/bootloader_state/utils.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/constants.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/implementation/bytecode.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/implementation/execution.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/implementation/gas.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/implementation/logs.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/implementation/mod.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/implementation/snapshots.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/implementation/statistics.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/implementation/tx.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/mod.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/old_vm/event_sink.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/old_vm/events.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/old_vm/history_recorder.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/old_vm/memory.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/old_vm/mod.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/old_vm/oracles/decommitter.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/old_vm/oracles/mod.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/old_vm/oracles/precompile.rs
 rename core/lib/multivm/src/versions/{vm_latest => vm_boojum_integration}/old_vm/oracles/storage.rs (99%)
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/old_vm/utils.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/oracles/mod.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/oracles/storage.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tests/bootloader.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tests/bytecode_publishing.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tests/call_tracer.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tests/circuits.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tests/default_aa.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tests/gas_limit.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tests/get_used_contracts.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tests/invalid_bytecode.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tests/is_write_initial.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tests/l1_tx_execution.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tests/l2_blocks.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tests/mod.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tests/nonce_holder.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tests/precompiles.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tests/refunds.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tests/require_eip712.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tests/rollbacks.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tests/simple_execution.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tests/tester/inner_state.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tests/tester/mod.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tests/tester/transaction_test_info.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tests/tester/vm_tester.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tests/tracing_execution_error.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tests/upgrade.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tests/utils.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tracers/circuits_capacity.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tracers/circuits_tracer.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tracers/default_tracers.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tracers/dispatcher.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tracers/mod.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tracers/pubdata_tracer.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tracers/refunds.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tracers/result_tracer.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tracers/traits.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/tracers/utils.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/types/internals/mod.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/types/internals/pubdata.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/types/internals/snapshot.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/types/internals/transaction_data.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/types/internals/vm_state.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/types/l1_batch.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/types/mod.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/utils/fee.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/utils/l2_blocks.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/utils/logs.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/utils/mod.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/utils/overhead.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/utils/transaction_encoding.rs
 create mode 100644 core/lib/multivm/src/versions/vm_boojum_integration/vm.rs
 create mode 100644 core/lib/multivm/src/versions/vm_latest/tests/circuits.rs
 create mode 100644 core/lib/multivm/src/versions/vm_latest/tests/precompiles.rs
 create mode 100644 core/lib/multivm/src/versions/vm_latest/tracers/circuits_capacity.rs
 create mode 100644 core/lib/multivm/src/versions/vm_latest/tracers/circuits_tracer.rs
 delete mode 100644 core/lib/prover_utils/Cargo.toml
 delete mode 100644 core/lib/prover_utils/src/gcs_proof_fetcher.rs
 delete mode 100644 core/lib/prover_utils/src/lib.rs
 delete mode 100644 core/lib/prover_utils/src/region_fetcher.rs
 create mode 100644 core/lib/types/src/fee_model.rs
 create mode 100644 core/lib/types/src/snapshots.rs
 create mode 100644 core/lib/web3_decl/src/namespaces/snapshots.rs
 create mode 100644 core/lib/zksync_core/src/api_server/execution_sandbox/testonly.rs
 create mode 100644 core/lib/zksync_core/src/api_server/execution_sandbox/tests.rs
 create mode 100644 core/lib/zksync_core/src/api_server/tx_sender/tests.rs
 delete mode 100644 core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/batch_limiter_middleware.rs
 delete mode 100644 core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/error.rs
 delete mode 100644 core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/mod.rs
 delete mode 100644 core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/namespaces/debug.rs
 delete mode 100644 core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/namespaces/en.rs
 delete mode 100644 core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/namespaces/eth.rs
 delete mode 100644 core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/namespaces/mod.rs
 delete mode 100644 core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/namespaces/net.rs
 delete mode 100644 core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/namespaces/web3.rs
 delete mode 100644 core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/namespaces/zks.rs
 delete mode 100644 core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/pub_sub.rs
 create mode 100644 core/lib/zksync_core/src/api_server/web3/backend_jsonrpsee/batch_limiter_middleware.rs
 create mode 100644 core/lib/zksync_core/src/api_server/web3/backend_jsonrpsee/namespaces/snapshots.rs
 delete mode 100644 core/lib/zksync_core/src/api_server/web3/namespaces/eth_subscribe.rs
 create mode 100644 core/lib/zksync_core/src/api_server/web3/namespaces/snapshots.rs
 create mode 100644 core/lib/zksync_core/src/api_server/web3/pubsub.rs
 delete mode 100644 core/lib/zksync_core/src/api_server/web3/pubsub_notifier.rs
 delete mode 100644 core/lib/zksync_core/src/api_server/web3/tests.rs
 create mode 100644 core/lib/zksync_core/src/api_server/web3/tests/debug.rs
 create mode 100644 core/lib/zksync_core/src/api_server/web3/tests/filters.rs
 create mode 100644 core/lib/zksync_core/src/api_server/web3/tests/mod.rs
 create mode 100644 core/lib/zksync_core/src/api_server/web3/tests/snapshots.rs
 create mode 100644 core/lib/zksync_core/src/api_server/web3/tests/vm.rs
 create mode 100644 core/lib/zksync_core/src/api_server/web3/tests/ws.rs
 delete mode 100644 core/lib/zksync_core/src/consensus/payload.rs
 delete mode 100644 core/lib/zksync_core/src/consensus/proto/mod.rs
 create mode 100644 core/lib/zksync_core/src/consensus/storage/mod.rs
 create mode 100644 core/lib/zksync_core/src/consensus/storage/testonly.rs
 create mode 100644 core/lib/zksync_core/src/consensus/testonly.rs
 create mode 100644 core/lib/zksync_core/src/consensus/tests.rs
 create mode 100644 core/lib/zksync_core/src/consistency_checker/tests/commit_l1_batch_200000_testnet_goerli.calldata
 create mode 100644 core/lib/zksync_core/src/consistency_checker/tests/commit_l1_batch_351000-351004_mainnet.calldata
 create mode 100644 core/lib/zksync_core/src/consistency_checker/tests/commit_l1_batch_4470_testnet_sepolia.calldata
 create mode 100644 core/lib/zksync_core/src/consistency_checker/tests/mod.rs
 delete mode 100644 core/lib/zksync_core/src/data_fetchers/error.rs
 delete mode 100644 core/lib/zksync_core/src/data_fetchers/mod.rs
 delete mode 100644 core/lib/zksync_core/src/data_fetchers/token_list/mock.rs
 delete mode 100644 core/lib/zksync_core/src/data_fetchers/token_list/mod.rs
 delete mode 100644 core/lib/zksync_core/src/data_fetchers/token_list/one_inch.rs
 delete mode 100644 core/lib/zksync_core/src/data_fetchers/token_price/coingecko.rs
 delete mode 100644 core/lib/zksync_core/src/data_fetchers/token_price/mock.rs
 delete mode 100644 core/lib/zksync_core/src/data_fetchers/token_price/mod.rs
 delete mode 100644 core/lib/zksync_core/src/eth_sender/grafana_metrics.rs
 create mode 100644 core/lib/zksync_core/src/fee_model.rs
 delete mode 100644 core/lib/zksync_core/src/house_keeper/gpu_prover_queue_monitor.rs
 rename core/lib/{prover_utils/src => zksync_core/src/house_keeper}/periodic_job.rs (97%)
 delete mode 100644 core/lib/zksync_core/src/house_keeper/prover_job_retry_manager.rs
 delete mode 100644 core/lib/zksync_core/src/house_keeper/prover_queue_monitor.rs
 delete mode 100644 core/lib/zksync_core/src/house_keeper/waiting_to_queued_witness_job_mover.rs
 delete mode 100644 core/lib/zksync_core/src/house_keeper/witness_generator_queue_monitor.rs
 delete mode 100644 core/lib/zksync_core/src/l1_gas_price/gas_adjuster/bounded_gas_adjuster.rs
 create mode 100644 core/lib/zksync_core/src/metadata_calculator/recovery/mod.rs
 create mode 100644 core/lib/zksync_core/src/metadata_calculator/recovery/tests.rs
 create mode 100644 core/lib/zksync_core/src/reorg_detector/tests.rs
 delete mode 100644 core/lib/zksync_core/src/sync_layer/batch_status_updater.rs
 create mode 100644 core/lib/zksync_core/src/sync_layer/batch_status_updater/mod.rs
 create mode 100644 core/lib/zksync_core/src/sync_layer/batch_status_updater/tests.rs
 delete mode 100644 core/lib/zksync_core/src/sync_layer/gossip/buffered/mod.rs
 delete mode 100644 core/lib/zksync_core/src/sync_layer/gossip/buffered/tests.rs
 delete mode 100644 core/lib/zksync_core/src/sync_layer/gossip/metrics.rs
 delete mode 100644 core/lib/zksync_core/src/sync_layer/gossip/mod.rs
 delete mode 100644 core/lib/zksync_core/src/sync_layer/gossip/storage/mod.rs
 delete mode 100644 core/lib/zksync_core/src/sync_layer/gossip/storage/tests.rs
 delete mode 100644 core/lib/zksync_core/src/sync_layer/gossip/tests.rs
 delete mode 100644 core/lib/zksync_core/src/sync_layer/gossip/utils.rs
 create mode 100644 core/lib/zksync_core/src/utils/mod.rs
 create mode 100644 core/lib/zksync_core/src/utils/testonly.rs
 delete mode 100644 core/lib/zksync_core/src/witness_generator/basic_circuits.rs
 delete mode 100644 core/lib/zksync_core/src/witness_generator/leaf_aggregation.rs
 delete mode 100644 core/lib/zksync_core/src/witness_generator/mod.rs
 delete mode 100644 core/lib/zksync_core/src/witness_generator/node_aggregation.rs
 delete mode 100644 core/lib/zksync_core/src/witness_generator/precalculated_merkle_paths_provider.rs
 delete mode 100644 core/lib/zksync_core/src/witness_generator/scheduler.rs
 delete mode 100644 core/lib/zksync_core/src/witness_generator/storage_oracle.rs
 delete mode 100644 core/lib/zksync_core/src/witness_generator/tests.rs
 delete mode 100644 core/lib/zksync_core/src/witness_generator/utils.rs
 create mode 100644 core/tests/ts-integration/contracts/counter/zkVM_bytecode.txt
 create mode 100644 core/tests/ts-integration/tests/api/snapshots-creator.test.ts
 delete mode 100644 docker-compose-runner.yml
 create mode 100644 docker-compose-zkstack-common.yml
 mode change 100644 => 100755 docker/contract-verifier/install-all-solc.sh
 create mode 100644 docker/geth/jwtsecret
 delete mode 100644 docker/prover-gar/Dockerfile
 delete mode 100644 docker/prover/Dockerfile
 create mode 100644 docker/prysm/config.yml
 rename docker/{circuit-synthesizer => snapshots-creator}/Dockerfile (50%)
 delete mode 100644 docs/advanced/README.md
 delete mode 100644 docs/advanced/blocks_and_batches.md
 rename docs/{ => guides}/advanced/01_initialization.md (94%)
 rename docs/{ => guides}/advanced/02_deposits.md (97%)
 rename docs/{ => guides}/advanced/03_withdrawals.md (97%)
 create mode 100644 docs/guides/advanced/0_alternative_vm_intro.md
 rename docs/{ => guides}/advanced/advanced_debugging.md (100%)
 rename docs/{advanced/bytecode_compression.md => guides/advanced/compression.md} (83%)
 rename docs/{ => guides}/advanced/contracts.md (92%)
 rename docs/{advanced/prover.md => guides/advanced/deeper_overview.md} (96%)
 rename docs/{advanced/gas_and_fees.md => guides/advanced/fee_model.md} (96%)
 rename docs/{ => guides}/advanced/how_call_works.md (94%)
 rename docs/{ => guides}/advanced/how_l2_messaging_works.md (90%)
 rename docs/{ => guides}/advanced/how_transaction_works.md (99%)
 rename docs/{ => guides}/advanced/prover_keys.md (97%)
 rename docs/{ => guides}/advanced/pubdata.md (88%)
 rename docs/{ => guides}/advanced/zk_intuition.md (88%)
 rename docs/{ => guides}/architecture.md (98%)
 rename docs/{ => guides}/development.md (61%)
 rename docs/{ => guides}/external-node/01_intro.md (100%)
 rename docs/{ => guides}/external-node/02_configuration.md (95%)
 rename docs/{ => guides}/external-node/03_running.md (100%)
 rename docs/{ => guides}/external-node/04_observability.md (100%)
 rename docs/{ => guides}/external-node/05_troubleshooting.md (100%)
 rename docs/{ => guides}/external-node/06_components.md (100%)
 rename docs/{ => guides}/external-node/prepared_configs/mainnet-config.env (94%)
 rename docs/{external-node/prepared_configs/testnet-config.env => guides/external-node/prepared_configs/testnet-goerli-config-deprecated.env} (94%)
 create mode 100644 docs/guides/external-node/prepared_configs/testnet-sepolia-config.env
 rename docs/{ => guides}/launch.md (90%)
 create mode 100644 docs/guides/repositories.md
 rename docs/{ => guides}/setup-dev.md (81%)
 delete mode 100644 docs/repositories.md
 create mode 100644 docs/specs/README.md
 create mode 100644 docs/specs/blocks_batches.md
 create mode 100644 docs/specs/data_availability/README.md
 create mode 100644 docs/specs/data_availability/compression.md
 create mode 100644 docs/specs/data_availability/overview.md
 create mode 100644 docs/specs/data_availability/pubdata.md
 create mode 100644 docs/specs/data_availability/reconstruction.md
 create mode 100644 docs/specs/data_availability/validium_zk_porter.md
 create mode 100644 docs/specs/img/L2_Components.png
 create mode 100644 docs/specs/img/diamondProxy.jpg
 create mode 100644 docs/specs/img/governance.jpg
 create mode 100644 docs/specs/img/zk-the-collective-action.jpeg
 create mode 100644 docs/specs/introduction.md
 create mode 100644 docs/specs/l1_l2_communication/README.md
 create mode 100644 docs/specs/l1_l2_communication/l1_to_l2.md
 create mode 100644 docs/specs/l1_l2_communication/l2_to_l1.md
 create mode 100644 docs/specs/l1_l2_communication/overview_deposits_withdrawals.md
 create mode 100644 docs/specs/l1_smart_contracts.md
 create mode 100644 docs/specs/overview.md
 create mode 100644 docs/specs/prover/README.md
 create mode 100644 docs/specs/prover/boojum_function_check_if_satisfied.md
 create mode 100644 docs/specs/prover/boojum_gadgets.md
 create mode 100644 docs/specs/prover/circuit_testing.md
 create mode 100644 docs/specs/prover/circuits/README.md
 create mode 100644 docs/specs/prover/circuits/code_decommitter.md
 create mode 100644 docs/specs/prover/circuits/demux_log_queue.md
 create mode 100644 docs/specs/prover/circuits/ecrecover.md
 create mode 100644 docs/specs/prover/circuits/img/diagram.png
 create mode 100644 docs/specs/prover/circuits/img/flowchart.png
 create mode 100644 docs/specs/prover/circuits/img/image.png
 create mode 100644 docs/specs/prover/circuits/keccak_round_function.md
 create mode
100644 docs/specs/prover/circuits/l1_messages_hasher.md create mode 100644 docs/specs/prover/circuits/log_sorter.md create mode 100644 docs/specs/prover/circuits/main_vm.md create mode 100644 docs/specs/prover/circuits/overview.md create mode 100644 docs/specs/prover/circuits/ram_permutation.md create mode 100644 docs/specs/prover/circuits/sha256_round_function.md create mode 100644 docs/specs/prover/circuits/sort_decommitments.md create mode 100644 docs/specs/prover/circuits/sorting.md create mode 100644 docs/specs/prover/circuits/sorting_and_deduplicating.md create mode 100644 docs/specs/prover/circuits/storage_application.md create mode 100644 docs/specs/prover/circuits/storage_sorter.md create mode 100644 docs/specs/prover/getting_started.md create mode 100644 docs/specs/prover/img/boojum_function_check_if_satisfied/Check_if_satisfied(1).png create mode 100644 docs/specs/prover/img/boojum_function_check_if_satisfied/Check_if_satisfied(11).png create mode 100644 docs/specs/prover/img/boojum_function_check_if_satisfied/Check_if_satisfied(12).png create mode 100644 docs/specs/prover/img/boojum_function_check_if_satisfied/Check_if_satisfied(13).png create mode 100644 docs/specs/prover/img/boojum_function_check_if_satisfied/Check_if_satisfied(14).png create mode 100644 docs/specs/prover/img/boojum_function_check_if_satisfied/Check_if_satisfied(16).png create mode 100644 docs/specs/prover/img/boojum_function_check_if_satisfied/Check_if_satisfied(17).png create mode 100644 docs/specs/prover/img/boojum_function_check_if_satisfied/Check_if_satisfied(2).png create mode 100644 docs/specs/prover/img/boojum_function_check_if_satisfied/Check_if_satisfied(3).png create mode 100644 docs/specs/prover/img/boojum_function_check_if_satisfied/Check_if_satisfied(4).png create mode 100644 docs/specs/prover/img/boojum_function_check_if_satisfied/Check_if_satisfied(7).png create mode 100644 docs/specs/prover/img/boojum_function_check_if_satisfied/Check_if_satisfied(8).png create mode 100644 docs/specs/prover/img/boojum_function_check_if_satisfied/Check_if_satisfied(9).png create mode 100644 docs/specs/prover/img/boojum_function_check_if_satisfied/Check_if_satisfied.png create mode 100644 docs/specs/prover/img/circuit_testing/Contest(10).png create mode 100644 docs/specs/prover/img/circuit_testing/Contest(11).png create mode 100644 docs/specs/prover/img/circuit_testing/Contest(12).png create mode 100644 docs/specs/prover/img/circuit_testing/Contest(4).png create mode 100644 docs/specs/prover/img/circuit_testing/Contest(5).png create mode 100644 docs/specs/prover/img/circuit_testing/Contest(6).png create mode 100644 docs/specs/prover/img/circuit_testing/Contest(7).png create mode 100644 docs/specs/prover/img/circuit_testing/Contest(8).png create mode 100644 docs/specs/prover/img/circuit_testing/Contest(9).png create mode 100644 "docs/specs/prover/img/intro_to_zkSync\342\200\231s_ZK/circuit.png" create mode 100644 docs/specs/prover/overview.md create mode 100644 docs/specs/prover/zk_terminology.md create mode 100644 docs/specs/the_hyperchain/README.md create mode 100644 docs/specs/the_hyperchain/hyperbridges.md create mode 100644 docs/specs/the_hyperchain/img/contractsExternal.png create mode 100644 docs/specs/the_hyperchain/img/deployWeth.png create mode 100644 docs/specs/the_hyperchain/img/depositWeth.png create mode 100644 docs/specs/the_hyperchain/img/hyperbridges.png create mode 100644 docs/specs/the_hyperchain/img/hyperbridging.png create mode 100644 docs/specs/the_hyperchain/img/newChain.png create mode 
100644 docs/specs/the_hyperchain/overview.md create mode 100644 docs/specs/the_hyperchain/shared_bridge.md create mode 100644 docs/specs/zk_evm/README.md create mode 100644 docs/specs/zk_evm/account_abstraction.md create mode 100644 docs/specs/zk_evm/bootloader.md create mode 100644 docs/specs/zk_evm/fee_model.md create mode 100644 docs/specs/zk_evm/precompiles.md create mode 100644 docs/specs/zk_evm/system_contracts.md create mode 100644 docs/specs/zk_evm/vm_overview.md create mode 100644 docs/specs/zk_evm/vm_specification/EraVM_formal_specification.pdf create mode 100644 docs/specs/zk_evm/vm_specification/README.md create mode 100644 docs/specs/zk_evm/vm_specification/compiler/README.md create mode 100644 docs/specs/zk_evm/vm_specification/compiler/code_separation.md create mode 100644 docs/specs/zk_evm/vm_specification/compiler/evmla_translator.md create mode 100644 docs/specs/zk_evm/vm_specification/compiler/exception_handling.md create mode 100644 docs/specs/zk_evm/vm_specification/compiler/instructions/README.md create mode 100644 docs/specs/zk_evm/vm_specification/compiler/instructions/evm/README.md create mode 100644 docs/specs/zk_evm/vm_specification/compiler/instructions/evm/arithmetic.md create mode 100644 docs/specs/zk_evm/vm_specification/compiler/instructions/evm/bitwise.md create mode 100644 docs/specs/zk_evm/vm_specification/compiler/instructions/evm/block.md create mode 100644 docs/specs/zk_evm/vm_specification/compiler/instructions/evm/call.md create mode 100644 docs/specs/zk_evm/vm_specification/compiler/instructions/evm/create.md create mode 100644 docs/specs/zk_evm/vm_specification/compiler/instructions/evm/environment.md create mode 100644 docs/specs/zk_evm/vm_specification/compiler/instructions/evm/logging.md create mode 100644 docs/specs/zk_evm/vm_specification/compiler/instructions/evm/logical.md create mode 100644 docs/specs/zk_evm/vm_specification/compiler/instructions/evm/memory.md create mode 100644 docs/specs/zk_evm/vm_specification/compiler/instructions/evm/overview.md create mode 100644 docs/specs/zk_evm/vm_specification/compiler/instructions/evm/return.md create mode 100644 docs/specs/zk_evm/vm_specification/compiler/instructions/evm/sha3.md create mode 100644 docs/specs/zk_evm/vm_specification/compiler/instructions/evm/stack.md create mode 100644 docs/specs/zk_evm/vm_specification/compiler/instructions/evmla.md create mode 100644 docs/specs/zk_evm/vm_specification/compiler/instructions/extensions/README.md create mode 100644 docs/specs/zk_evm/vm_specification/compiler/instructions/extensions/call.md create mode 100644 docs/specs/zk_evm/vm_specification/compiler/instructions/extensions/overview.md create mode 100644 docs/specs/zk_evm/vm_specification/compiler/instructions/extensions/verbatim.md create mode 100644 docs/specs/zk_evm/vm_specification/compiler/instructions/overview.md create mode 100644 docs/specs/zk_evm/vm_specification/compiler/instructions/yul.md create mode 100644 docs/specs/zk_evm/vm_specification/compiler/overview.md create mode 100644 docs/specs/zk_evm/vm_specification/compiler/system_contracts.md create mode 100644 docs/specs/zk_evm/vm_specification/img/arch-overview.png create mode 100644 docs/specs/zk_evm/vm_specification/img/arithmetic_opcode.png create mode 100644 docs/specs/zk_evm/vm_specification/zkSync_era_virtual_machine_primer.md create mode 100644 etc/contracts-test-data/contracts/precompiles/precompiles.sol delete mode 100644 etc/env/base/circuit_synthesizer.toml delete mode 100644 etc/env/base/fetcher.toml delete mode 100644 
etc/env/base/prover.toml delete mode 100644 etc/env/base/prover_group.toml rename etc/hyperchains/{docker-compose-hyperchain-template => docker-compose-hyperchain-template.hbs} (68%) create mode 100644 etc/multivm_bootloaders/vm_1_4_1/fee_estimate.yul/fee_estimate.yul.zbin create mode 100644 etc/multivm_bootloaders/vm_1_4_1/gas_test.yul/gas_test.yul.zbin create mode 100644 etc/multivm_bootloaders/vm_1_4_1/playground_batch.yul/playground_batch.yul.zbin create mode 100644 etc/multivm_bootloaders/vm_1_4_1/proved_batch.yul/proved_batch.yul.zbin create mode 100644 etc/multivm_bootloaders/vm_remove_allowlist/commit create mode 100644 etc/multivm_bootloaders/vm_remove_allowlist/fee_estimate.yul/fee_estimate.yul.zbin create mode 100644 etc/multivm_bootloaders/vm_remove_allowlist/gas_test.yul/gas_test.yul.zbin create mode 100644 etc/multivm_bootloaders/vm_remove_allowlist/playground_batch.yul/playground_batch.yul.zbin create mode 100644 etc/multivm_bootloaders/vm_remove_allowlist/proved_batch.yul/proved_batch.yul.zbin delete mode 160000 etc/system-contracts create mode 100644 etc/upgrades/1699353977-boojum/mainnet2/crypto.json create mode 100644 etc/upgrades/1699353977-boojum/mainnet2/facetCuts.json create mode 100644 etc/upgrades/1699353977-boojum/mainnet2/facets.json create mode 100644 etc/upgrades/1699353977-boojum/mainnet2/l2Upgrade.json create mode 100644 etc/upgrades/1699353977-boojum/mainnet2/transactions.json create mode 100644 etc/upgrades/1699353977-boojum/testnet2/crypto.json create mode 100644 etc/upgrades/1699353977-boojum/testnet2/facetCuts.json create mode 100644 etc/upgrades/1699353977-boojum/testnet2/facets.json create mode 100644 etc/upgrades/1699353977-boojum/testnet2/l2Upgrade.json create mode 100644 etc/upgrades/1699353977-boojum/testnet2/transactions.json create mode 100644 etc/upgrades/1702392522-allowlist-removal/common.json create mode 100644 etc/upgrades/1702392522-allowlist-removal/mainnet2/facetCuts.json create mode 100644 etc/upgrades/1702392522-allowlist-removal/mainnet2/facets.json create mode 100644 etc/upgrades/1702392522-allowlist-removal/mainnet2/l2Upgrade.json create mode 100644 etc/upgrades/1702392522-allowlist-removal/mainnet2/transactions.json create mode 100644 etc/upgrades/1702392522-allowlist-removal/stage2/facetCuts.json create mode 100644 etc/upgrades/1702392522-allowlist-removal/stage2/facets.json create mode 100644 etc/upgrades/1702392522-allowlist-removal/stage2/l2Upgrade.json create mode 100644 etc/upgrades/1702392522-allowlist-removal/stage2/transactions.json create mode 100644 etc/upgrades/1702392522-allowlist-removal/testnet-sepolia/facetCuts.json create mode 100644 etc/upgrades/1702392522-allowlist-removal/testnet-sepolia/facets.json create mode 100644 etc/upgrades/1702392522-allowlist-removal/testnet-sepolia/l2Upgrade.json create mode 100644 etc/upgrades/1702392522-allowlist-removal/testnet-sepolia/transactions.json create mode 100644 etc/upgrades/1702392522-allowlist-removal/testnet2/facetCuts.json create mode 100644 etc/upgrades/1702392522-allowlist-removal/testnet2/facets.json create mode 100644 etc/upgrades/1702392522-allowlist-removal/testnet2/l2Upgrade.json create mode 100644 etc/upgrades/1702392522-allowlist-removal/testnet2/transactions.json create mode 100644 infrastructure/zk/src/format_sql.ts create mode 100644 infrastructure/zk/src/linkcheck.ts create mode 100644 infrastructure/zk/src/spellcheck.ts delete mode 100644 prover/circuit_synthesizer/Cargo.toml delete mode 100644 prover/circuit_synthesizer/src/circuit_synthesizer.rs delete 
mode 100644 prover/circuit_synthesizer/src/main.rs create mode 100644 prover/proof_fri_compressor/src/initial_setup_keys.rs create mode 100644 prover/proof_fri_compressor/src/metrics.rs delete mode 100644 prover/prover/Cargo.toml delete mode 100644 prover/prover/README.md delete mode 100644 prover/prover/src/artifact_provider.rs delete mode 100644 prover/prover/src/main.rs delete mode 100644 prover/prover/src/prover.rs delete mode 100644 prover/prover/src/prover_params.rs delete mode 100644 prover/prover/src/run.rs delete mode 100644 prover/prover/src/socket_listener.rs delete mode 100644 prover/prover/src/synthesized_circuit_provider.rs create mode 100644 prover/prover_fri/src/metrics.rs create mode 100644 prover/prover_fri_gateway/src/metrics.rs create mode 100644 prover/prover_fri_utils/src/metrics.rs create mode 100644 prover/prover_fri_utils/src/region_fetcher.rs delete mode 100644 prover/setup_key_generator_and_server/Cargo.toml delete mode 100644 prover/setup_key_generator_and_server/data/.gitkeep delete mode 100644 prover/setup_key_generator_and_server/src/lib.rs delete mode 100644 prover/setup_key_generator_and_server/src/main.rs rename {core/lib/prover_utils => prover/vk_setup_data_generator_server_fri}/src/vk_commitment_helper.rs (99%) create mode 100644 prover/witness_generator/src/metrics.rs create mode 100644 prover/witness_vector_generator/src/metrics.rs diff --git a/.dockerignore b/.dockerignore index 6ddfa835189..8268fb1049e 100644 --- a/.dockerignore +++ b/.dockerignore @@ -30,12 +30,12 @@ contracts/.git !infrastructure/local-setup-preparation !infrastructure/zk !sdk/zksync-rs -!etc/system-contracts/bootloader/build/artifacts -!etc/system-contracts/contracts/artifacts -!etc/system-contracts/contracts/precompiles/artifacts -!etc/system-contracts/artifacts-zk +!contracts/system-contracts/bootloader/build/artifacts +!contracts/system-contracts/contracts-preprocessed/artifacts +!contracts/system-contracts/contracts-preprocessed/precompiles/artifacts +!contracts/system-contracts/artifacts-zk !etc/multivm_bootloaders !cargo !bellman-cuda -!core/bin/verification_key_generator_and_server/data/ !prover/vk_setup_data_generator_server_fri/data/ +!.github/release-please/manifest.json diff --git a/.eslintignore b/.eslintignore index 6bde0a2282e..51ebe5ef69e 100644 --- a/.eslintignore +++ b/.eslintignore @@ -4,9 +4,7 @@ node_modules build/ dist/ volumes/ -.tslintrc.js bellman-cuda # Ignore contract submodules contracts -etc/system-contracts \ No newline at end of file diff --git a/.githooks/pre-push b/.githooks/pre-push index eb1acbb693c..25312c14c10 100755 --- a/.githooks/pre-push +++ b/.githooks/pre-push @@ -1,4 +1,4 @@ -#!/bin/sh +#!/bin/bash # # Pre-push hook verifying that inappropriate code will not be pushed. @@ -8,7 +8,7 @@ NC='\033[0m' # No Color # Check that prettier formatting rules are not violated. if ! zk fmt --check; then - echo -e "${RED}Commit error!${NC}" + echo -e "${RED}Push error!${NC}" echo "Please format the code via 'zk fmt', cannot push unformatted code" exit 1 fi diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md index dba6efd2fdf..764b85bacca 100644 --- a/.github/pull_request_template.md +++ b/.github/pull_request_template.md @@ -18,3 +18,4 @@ - [ ] Tests for the changes have been added / updated. - [ ] Documentation comments have been added / updated. - [ ] Code has been formatted via `zk fmt` and `zk lint`. +- [ ] Spellcheck has been run via `zk spellcheck`. 
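(Context for the checklist line above: `zk spellcheck` is the wrapper introduced by the new infrastructure/zk/src/spellcheck.ts and exercised by the check-spelling workflow added later in this patch. A minimal local sketch, assuming the repository's `zk` CLI is built and on PATH:

    # Sketch only: mirrors the "Build zk" / "Run spellcheck" CI steps below.
    # Outside the CI container the ci_run wrapper is not needed.
    zk
    zk spellcheck
)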
diff --git a/.github/release-please/manifest.json b/.github/release-please/manifest.json index 7deaec6a597..e481bd598d3 100644 --- a/.github/release-please/manifest.json +++ b/.github/release-please/manifest.json @@ -1,5 +1,5 @@ { "sdk/zksync-rs": "0.4.0", - "core": "18.3.0", - "prover": "9.0.0" + "core": "19.2.0", + "prover": "10.1.0" } diff --git a/.github/workflows/build-contracts.yml b/.github/workflows/build-contracts.yml new file mode 100644 index 00000000000..c0699d00268 --- /dev/null +++ b/.github/workflows/build-contracts.yml @@ -0,0 +1,83 @@ +name: Build contracts +on: + workflow_call: + inputs: + compilers: + description: 'JSON of required compilers and their versions' + type: string + required: false + default: '[{ "zksolc": ["1.3.14", "1.3.16", "1.3.17", "1.3.1", "1.3.7", "1.3.18", "1.3.19", "1.3.21"] } , { "zkvyper": ["1.3.13"] }]' + +jobs: + build-images: + name: Build and upload contracts + runs-on: [matterlabs-ci-runner] + steps: + - uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # v3 + with: + submodules: "recursive" + - name: setup-env + run: | + echo ZKSYNC_HOME=$(pwd) >> $GITHUB_ENV + echo CI=1 >> $GITHUB_ENV + echo $(pwd)/bin >> $GITHUB_PATH + echo CI=1 >> .env + echo IN_DOCKER=1 >> .env + + # TODO: Remove once we can upgrade hardhat-plugins + - name: pre-download compilers + run: | + # Download needed versions of vyper compiler + # Not sanitized due to unconventional path and tags + mkdir -p ./hardhat-nodejs/compilers-v2/vyper/linux + wget -nv -O ./hardhat-nodejs/compilers-v2/vyper/linux/0.3.10 https://github.com/vyperlang/vyper/releases/download/v0.3.10/vyper.0.3.10+commit.91361694.linux + wget -nv -O ./hardhat-nodejs/compilers-v2/vyper/linux/0.3.3 https://github.com/vyperlang/vyper/releases/download/v0.3.3/vyper.0.3.3+commit.48e326f0.linux + chmod +x ./hardhat-nodejs/compilers-v2/vyper/linux/0.3.10 + chmod +x ./hardhat-nodejs/compilers-v2/vyper/linux/0.3.3 + + COMPILERS_JSON='${{ inputs.compilers }}' + echo "$COMPILERS_JSON" | jq -r '.[] | to_entries[] | .key as $compiler | .value[] | "\(.),\($compiler)"' | while IFS=, read -r version compiler; do + mkdir -p "./hardhat-nodejs/compilers-v2/$compiler" + wget -nv -O "./hardhat-nodejs/compilers-v2/$compiler/${compiler}-v${version}" "https://github.com/matter-labs/${compiler}-bin/releases/download/v${version}/${compiler}-linux-amd64-musl-v${version}" + chmod +x "./hardhat-nodejs/compilers-v2/$compiler/${compiler}-v${version}" + done + + - name: start-services + run: | + echo "IMAGE_TAG_SUFFIX=${{ env.IMAGE_TAG_SUFFIX }}" >> .env + mkdir -p ./volumes/postgres + docker compose up -d zk postgres + ci_run sccache --start-server + + - name: build contracts + run: | + ci_run git config --global --add safe.directory /usr/src/zksync + ci_run git config --global --add safe.directory /usr/src/zksync/sdk/binaryen + ci_run git config --global --add safe.directory /usr/src/zksync/contracts/system-contracts + ci_run git config --global --add safe.directory /usr/src/zksync/contracts + ci_run zk + ci_run zk clean --all + ci_run zk run yarn + ci_run cp etc/tokens/{test,localhost}.json + ci_run zk compiler all + ci_run zk contract build + ci_run zk f yarn run l2-contracts build + + - name: upload contracts + uses: actions/upload-artifact@c7d193f32edcb7bfad88892161225aeda64e9392 + with: + name: contracts + path: | + ./contracts/system-contracts/**/artifacts/ + ./contracts/system-contracts/**/artifacts-zk/ + ./contracts/l1-contracts/**/artifacts/ + ./contracts/l1-contracts/**/artifacts-zk/ +
./contracts/l2-contracts/**/artifacts/ + ./contracts/l2-contracts/**/artifacts-zk/ + compression-level: 0 + + - name: Show sccache stats + if: always() + run: | + ci_run sccache --show-stats + ci_run cat /tmp/sccache_log.txt diff --git a/.github/workflows/build-core-template.yml b/.github/workflows/build-core-template.yml index c4ae27faf9c..d4a7963aa52 100644 --- a/.github/workflows/build-core-template.yml +++ b/.github/workflows/build-core-template.yml @@ -9,10 +9,6 @@ on: description: "DOCKERHUB_TOKEN" required: true inputs: - image_tag: - description: "Tag of a built image to deploy" - type: string - required: true image_tag_suffix: description: "Optional suffix to override tag name generation" type: string @@ -22,21 +18,26 @@ on: type: string default: "push" required: false - jobs: build-images: name: Build and Push Docker Images env: - image_tag: ${{ inputs.image_tag }} IMAGE_TAG_SUFFIX: ${{ inputs.image_tag_suffix }} - runs-on: [matterlabs-ci-runner] + runs-on: ${{ fromJSON('["matterlabs-ci-runner", "matterlabs-ci-runner-arm"]')[contains(matrix.platforms, 'arm')] }} strategy: matrix: - component: - - server-v2 - - external-node - - contract-verifier - - cross-external-nodes-checker + components: + - server-v2 + - external-node + - contract-verifier + - cross-external-nodes-checker + - snapshots-creator + platforms: + - linux/amd64 + include: + - components: external-node + platforms: linux/arm64 + steps: - uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # v3 with: @@ -50,42 +51,90 @@ jobs: echo CI=1 >> .env echo IN_DOCKER=1 >> .env + - name: Download contracts + uses: actions/download-artifact@v4 + with: + name: contracts + path: ./contracts/ + - name: start-services run: | echo "IMAGE_TAG_SUFFIX=${{ env.IMAGE_TAG_SUFFIX }}" >> .env - docker-compose -f docker-compose-runner.yml up -d zk geth postgres + mkdir -p ./volumes/postgres + docker compose up -d zk postgres ci_run sccache --start-server - name: init run: | ci_run git config --global --add safe.directory /usr/src/zksync ci_run git config --global --add safe.directory /usr/src/zksync/sdk/binaryen - ci_run git config --global --add safe.directory /usr/src/zksync/etc/system-contracts + ci_run git config --global --add safe.directory /usr/src/zksync/contracts/system-contracts ci_run git config --global --add safe.directory /usr/src/zksync/contracts - ci_run zk - ci_run zk clean --all - ci_run zk run yarn - ci_run cp etc/tokens/{test,localhost}.json - ci_run zk compiler all - ci_run zk contract build - ci_run zk f yarn run l2-contracts build + ci_run zk || true + ci_run yarn zk build ci_run curl -LO https://storage.googleapis.com/matterlabs-setup-keys-us/setup-keys/setup_2\^26.key - name: login to Docker registries - if: github.event_name != 'pull_request' && (github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/')) run: | ci_run docker login -u ${{ secrets.DOCKERHUB_USER }} -p ${{ secrets.DOCKERHUB_TOKEN }} - ci_run gcloud auth configure-docker us-docker.pkg.dev,asia-docker.pkg.dev,europe-docker.pkg.dev -q + ci_run gcloud auth configure-docker us-docker.pkg.dev -q - name: update-images env: DOCKER_ACTION: ${{ inputs.action }} - COMPONENT: ${{ matrix.component }} + COMPONENT: ${{ matrix.components }} + PLATFORM: ${{ matrix.platforms }} run: | ci_run rustup default nightly-2023-08-21 - ci_run zk docker $DOCKER_ACTION $COMPONENT -- --public + platform=$(echo $PLATFORM | tr '/' '-') + ci_run zk docker $DOCKER_ACTION --custom-tag=${IMAGE_TAG_SUFFIX} --platform=${PLATFORM} $COMPONENT - name: Show sccache 
stats if: always() run: | ci_run sccache --show-stats ci_run cat /tmp/sccache_log.txt + + create_manifest: + name: Create release manifest + runs-on: matterlabs-ci-runner + needs: build-images + if: ${{ inputs.action == 'push' }} + strategy: + matrix: + component: + - name: server-v2 + platform: linux/amd64 + - name: external-node + platform: linux/amd64,linux/arm64 + - name: contract-verifier + platform: linux/amd64 + - name: cross-external-nodes-checker + platform: linux/amd64 + - name: snapshots-creator + platform: linux/amd64 + env: + IMAGE_TAG_SUFFIX: ${{ inputs.image_tag_suffix }} + steps: + - uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # v3 + with: + submodules: "recursive" + - name: login to Docker registries + run: | + docker login -u ${{ secrets.DOCKERHUB_USER }} -p ${{ secrets.DOCKERHUB_TOKEN }} + gcloud auth configure-docker us-docker.pkg.dev -q + + - name: Create Docker manifest + run: | + docker_repositories=("matterlabs/${{ matrix.component.name }}" "us-docker.pkg.dev/matterlabs-infra/matterlabs-docker/${{ matrix.component.name }}") + platforms=${{ matrix.component.platform }} + for repo in "${docker_repositories[@]}"; do + platform_tags="" + for platform in ${platforms//,/ }; do + platform=$(echo $platform | tr '/' '-') + platform_tags+=" --amend ${repo}:${IMAGE_TAG_SUFFIX}-${platform}" + done + for manifest in "${repo}:${IMAGE_TAG_SUFFIX}" "${repo}:2.0-${IMAGE_TAG_SUFFIX}" "${repo}:latest" "${repo}:latest2.0"; do + docker manifest create ${manifest} ${platform_tags} + docker manifest push ${manifest} + done + done diff --git a/.github/workflows/build-docker-from-tag.yml b/.github/workflows/build-docker-from-tag.yml index a5bc7884f28..023aff6cd95 100644 --- a/.github/workflows/build-docker-from-tag.yml +++ b/.github/workflows/build-docker-from-tag.yml @@ -49,16 +49,20 @@ jobs: run: | ./prover/extract-setup-data-keys.sh >> $GITHUB_OUTPUT + build-contracts: + name: Build contracts + if: contains(github.ref_name, 'core') + uses: ./.github/workflows/build-contracts.yml + build-push-core-images: name: Build and push image - needs: [setup] + needs: [setup, build-contracts] uses: ./.github/workflows/build-core-template.yml if: contains(github.ref_name, 'core') secrets: DOCKERHUB_USER: ${{ secrets.DOCKERHUB_USER }} DOCKERHUB_TOKEN: ${{ secrets.DOCKERHUB_TOKEN }} with: - image_tag: ${{ needs.setup.outputs.image_tag }} image_tag_suffix: ${{ needs.setup.outputs.image_tag_suffix }} build-push-prover-images: @@ -67,23 +71,13 @@ jobs: uses: ./.github/workflows/build-prover-template.yml if: contains(github.ref_name, 'prover') with: - image_tag: ${{ needs.setup.outputs.image_tag }} image_tag_suffix: ${{ needs.setup.outputs.image_tag_suffix }} ERA_BELLMAN_CUDA_RELEASE: ${{ vars.ERA_BELLMAN_CUDA_RELEASE }} + CUDA_ARCH: "60;70;75;89" secrets: DOCKERHUB_USER: ${{ secrets.DOCKERHUB_USER }} DOCKERHUB_TOKEN: ${{ secrets.DOCKERHUB_TOKEN }} - build-gar-prover: - name: Build GAR prover - needs: [setup, build-push-prover-images] - uses: ./.github/workflows/build-gar-reusable.yml - if: contains(github.ref_name, 'prover') - with: - setup_keys_id: bccc7de - image_tag_suffix: ${{ needs.setup.outputs.image_tag_suffix }} - push_asia: true - push_europe: true build-gar-prover-fri-gpu: name: Build GAR prover FRI GPU diff --git a/.github/workflows/build-external-node-docker.yml b/.github/workflows/build-external-node-docker.yml deleted file mode 100644 index cccbc84c6c9..00000000000 --- a/.github/workflows/build-external-node-docker.yml +++ /dev/null @@ -1,51 +0,0 @@ -name: External Node - 
Build & push docker image -on: - workflow_dispatch: - inputs: - image_tag: - description: "Tag of a built image to deploy (latest2.0 by default)" - type: string - required: false - default: "latest2.0" - -jobs: - build-images: - name: External Node - Build and Push Docker Image - runs-on: [matterlabs-ci-runner] - steps: - - uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # v3 - with: - submodules: "recursive" - - - name: setup-env - run: | - echo ZKSYNC_HOME=$(pwd) >> $GITHUB_ENV - echo CI=1 >> $GITHUB_ENV - echo $(pwd)/bin >> $GITHUB_PATH - echo CI=1 >> .env - echo IN_DOCKER=1 >> .env - - - name: start-services - run: | - docker-compose -f docker-compose-runner.yml up -d zk geth postgres - - - name: init - run: | - ci_run git config --global --add safe.directory /usr/src/zksync - ci_run git config --global --add safe.directory /usr/src/zksync/sdk/binaryen - ci_run git config --global --add safe.directory /usr/src/zksync/etc/system-contracts - ci_run git config --global --add safe.directory /usr/src/zksync/contracts - - ci_run zk - ci_run zk run yarn - ci_run cp etc/tokens/{test,localhost}.json - ci_run zk compiler all - ci_run zk contract build - ci_run zk f yarn run l2-contracts build - - - name: update-image - run: | - ci_run docker login -u ${{ secrets.DOCKERHUB_USER }} -p ${{ secrets.DOCKERHUB_TOKEN }} - ci_run zk docker build server-v2 - ci_run gcloud auth configure-docker us-docker.pkg.dev -q - ci_run zk docker push external-node --custom-tag ${{ inputs.image_tag }} diff --git a/.github/workflows/build-gar-reusable.yml b/.github/workflows/build-gar-reusable.yml deleted file mode 100644 index c06119629ac..00000000000 --- a/.github/workflows/build-gar-reusable.yml +++ /dev/null @@ -1,98 +0,0 @@ -name: Workflow template for Build Prover builtin Setup Keys - -on: - workflow_call: - inputs: - image_tag_suffix: - description: "Commit sha or git tag for Docker tag" - required: true - type: string - setup_keys_id: - description: "Commit sha for downloading keys from bucket dir" - required: true - type: string - push_asia: - description: "Push images to Asia GAR" - required: false - default: false - type: boolean - push_europe: - description: "Push images to EU GAR" - required: false - default: false - type: boolean - -jobs: - build-gar-prover: - name: Build GAR prover - runs-on: [matterlabs-ci-runner] - strategy: - fail-fast: false - matrix: - setup_keys: - [ - { prover_id: "0", keys_ids: "0,18" }, - { prover_id: "1", keys_ids: "1,4" }, - { prover_id: "2", keys_ids: "2,5" }, - { prover_id: "3", keys_ids: "6,7" }, - { prover_id: "4", keys_ids: "8,9" }, - { prover_id: "5", keys_ids: "10,11" }, - { prover_id: "6", keys_ids: "12,13" }, - { prover_id: "7", keys_ids: "14,15" }, - { prover_id: "8", keys_ids: "16,17" }, - { prover_id: "9", keys_ids: "3" }, - ] - steps: - - uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # v3 - with: - submodules: "recursive" - - - name: Download Setup Keys - run: | - gsutil cp gs://matterlabs-setup-keys-us/setup-keys/setup_2\^26.key docker/prover-gar/setup_2\^26.key - IFS=', ' read -r -a keys_ids <<< "${{ matrix.setup_keys.keys_ids }}" - printf "%s\n" "${keys_ids[@]}"| xargs -n 1 -P 8 -I {} gsutil cp -P gs://matterlabs-zksync-v2-infra-blob-store/prover_setup_keys/${{ inputs.setup_keys_id }}/setup_{}_key.bin docker/prover-gar/ - - - name: Login to us-central1 GAR - run: | - gcloud auth print-access-token --lifetime=7200 --impersonate-service-account=gha-ci-runners@matterlabs-infra.iam.gserviceaccount.com | docker login -u 
oauth2accesstoken --password-stdin https://us-docker.pkg.dev - - - name: Set up QEMU - uses: docker/setup-qemu-action@v2 - - - name: Set up Docker Buildx - uses: docker/setup-buildx-action@v2 - - - name: Build and push - uses: docker/build-push-action@v4 - with: - context: docker/prover-gar - build-args: | - PROVER_IMAGE=${{ inputs.image_tag_suffix }} - push: true - tags: | - us-docker.pkg.dev/matterlabs-infra/matterlabs-docker/prover-v2-gar:2.0-${{ inputs.image_tag_suffix }}-prover-${{ matrix.setup_keys.prover_id }}-${{ inputs.setup_keys_id }} - - - name: Login to asia-southeast1 GAR - if: "${{ inputs.push_asia }}" - run: | - gcloud auth print-access-token --lifetime=7200 --impersonate-service-account=gha-ci-runners@matterlabs-infra.iam.gserviceaccount.com | docker login -u oauth2accesstoken --password-stdin https://asia-docker.pkg.dev - - - name: Push image to Asia - if: "${{ inputs.push_asia }}" - run: | - docker buildx imagetools create \ - --tag asia-docker.pkg.dev/matterlabs-infra/matterlabs-docker/prover-v2-gar:2.0-${{ inputs.image_tag_suffix }}-prover-${{ matrix.setup_keys.prover_id }}-${{ inputs.setup_keys_id }} \ - us-docker.pkg.dev/matterlabs-infra/matterlabs-docker/prover-v2-gar:2.0-${{ inputs.image_tag_suffix }}-prover-${{ matrix.setup_keys.prover_id }}-${{ inputs.setup_keys_id }} - - - name: Login to EU GAR - if: "${{ inputs.push_europe }}" - run: | - gcloud auth print-access-token --lifetime=7200 --impersonate-service-account=gha-ci-runners@matterlabs-infra.iam.gserviceaccount.com | docker login -u oauth2accesstoken --password-stdin https://europe-docker.pkg.dev - - - name: Push image to EU - if: "${{ inputs.push_europe }}" - run: | - docker buildx imagetools create \ - --tag europe-docker.pkg.dev/matterlabs-infra/matterlabs-docker/prover-v2-gar:2.0-${{ inputs.image_tag_suffix }}-prover-${{ matrix.setup_keys.prover_id }}-${{ inputs.setup_keys_id }} \ - us-docker.pkg.dev/matterlabs-infra/matterlabs-docker/prover-v2-gar:2.0-${{ inputs.image_tag_suffix }}-prover-${{ matrix.setup_keys.prover_id }}-${{ inputs.setup_keys_id }} diff --git a/.github/workflows/build-local-node-docker.yml b/.github/workflows/build-local-node-docker.yml index 9880361206c..4e48fc5c88f 100644 --- a/.github/workflows/build-local-node-docker.yml +++ b/.github/workflows/build-local-node-docker.yml @@ -7,6 +7,11 @@ on: type: string required: false default: "latest2.0" + compilers: + description: 'JSON of required compilers and their versions' + type: string + required: false + default: '[{ "zksolc": ["1.3.14", "1.3.16", "1.3.17", "1.3.1", "1.3.7", "1.3.18", "1.3.19", "1.3.21"] } , { "zkvyper": ["1.3.13"] }]' jobs: build-images: @@ -25,15 +30,34 @@ jobs: echo CI=1 >> .env echo IN_DOCKER=1 >> .env + # TODO: Remove once we can upgrade hardhat-plugins + - name: pre-download compilers + run: | + # Download needed versions of vyper compiler + # Not sanitized due to unconventional path and tags + mkdir -p ./hardhat-nodejs/compilers-v2/vyper/linux + wget -nv -O ./hardhat-nodejs/compilers-v2/vyper/linux/0.3.10 https://github.com/vyperlang/vyper/releases/download/v0.3.10/vyper.0.3.10+commit.91361694.linux + wget -nv -O ./hardhat-nodejs/compilers-v2/vyper/linux/0.3.3 https://github.com/vyperlang/vyper/releases/download/v0.3.3/vyper.0.3.3+commit.48e326f0.linux + chmod +x ./hardhat-nodejs/compilers-v2/vyper/linux/0.3.10 + chmod +x ./hardhat-nodejs/compilers-v2/vyper/linux/0.3.3 + + COMPILERS_JSON='${{ inputs.compilers }}' + echo "$COMPILERS_JSON" | jq -r '.[] | to_entries[] | .key as $compiler | .value[] |
"\(.),\($compiler)"' | while IFS=, read -r version compiler; do + mkdir -p "./hardhat-nodejs/compilers-v2/$compiler" + wget -nv -O "./hardhat-nodejs/compilers-v2/$compiler/${compiler}-v${version}" "https://github.com/matter-labs/${compiler}-bin/releases/download/v${version}/${compiler}-linux-amd64-musl-v${version}" + chmod +x "./hardhat-nodejs/compilers-v2/$compiler/${compiler}-v${version}" + done + - name: start-services run: | - docker-compose -f docker-compose-runner.yml up -d zk geth postgres + mkdir -p ./volumes/postgres + docker compose up -d zk postgres - name: init run: | ci_run git config --global --add safe.directory /usr/src/zksync ci_run git config --global --add safe.directory /usr/src/zksync/sdk/binaryen - ci_run git config --global --add safe.directory /usr/src/zksync/etc/system-contracts + ci_run git config --global --add safe.directory /usr/src/zksync/contracts/system-contracts ci_run git config --global --add safe.directory /usr/src/zksync/contracts ci_run zk diff --git a/.github/workflows/build-prover-template.yml b/.github/workflows/build-prover-template.yml index 4151d03a697..c1a46a55ed2 100644 --- a/.github/workflows/build-prover-template.yml +++ b/.github/workflows/build-prover-template.yml @@ -13,10 +13,6 @@ on: description: "ERA_BELLMAN_CUDA_RELEASE" type: string required: true - image_tag: - description: "Tag of a built image to deploy" - type: string - required: true image_tag_suffix: description: "Optional suffix to override tag name generation" type: string @@ -31,22 +27,25 @@ on: type: boolean default: false required: false + CUDA_ARCH: + description: "CUDA Arch to build" + type: string + default: "89" + required: false jobs: build-images: name: Build and Push Docker Images env: - image_tag: ${{ inputs.image_tag }} IMAGE_TAG_SUFFIX: ${{ inputs.image_tag_suffix }} RUNNER_COMPOSE_FILE: "docker-compose-runner-nightly.yml" ERA_BELLMAN_CUDA_RELEASE: ${{ inputs.ERA_BELLMAN_CUDA_RELEASE }} + CUDA_ARCH: ${{ inputs.CUDA_ARCH }} runs-on: [matterlabs-ci-runner] strategy: matrix: component: - witness-generator - - prover-v2 - - circuit-synthesizer - prover-fri - prover-gpu-fri - witness-vector-generator @@ -68,29 +67,28 @@ jobs: - name: start-services run: | echo "IMAGE_TAG_SUFFIX=${{ env.IMAGE_TAG_SUFFIX }}" >> .env - docker-compose -f docker-compose-runner.yml up -d zk geth postgres + mkdir -p ./volumes/postgres + docker compose up -d zk postgres ci_run sccache --start-server - name: init run: | ci_run git config --global --add safe.directory /usr/src/zksync - ci_run git config --global --add safe.directory /usr/src/zksync/sdk/binaryen - ci_run git config --global --add safe.directory /usr/src/zksync/etc/system-contracts + ci_run git config --global --add safe.directory /usr/src/zksync/contracts/system-contracts ci_run git config --global --add safe.directory /usr/src/zksync/contracts ci_run zk - ci_run zk clean --all - ci_run zk run yarn - ci_run cp etc/tokens/{test,localhost}.json - ci_run zk compiler all - ci_run zk contract build - ci_run zk f yarn run l2-contracts build + + # We need the CRS only for the fri compressor. 
+ - name: download CRS + if: matrix.component == 'proof-fri-compressor' + run: | ci_run curl -LO https://storage.googleapis.com/matterlabs-setup-keys-us/setup-keys/setup_2\^26.key - name: login to Docker registries if: github.event_name != 'pull_request' && (github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/')) run: | ci_run docker login -u ${{ secrets.DOCKERHUB_USER }} -p ${{ secrets.DOCKERHUB_TOKEN }} - ci_run gcloud auth configure-docker us-docker.pkg.dev,asia-docker.pkg.dev,europe-docker.pkg.dev -q + ci_run gcloud auth configure-docker us-docker.pkg.dev -q # We need to run this only when ERA_BELLMAN_CUDA_RELEASE is not available # In our case it happens only when PR is created from fork @@ -166,7 +164,7 @@ jobs: ci_run echo [workspace] > Cargo.toml ci_run echo members = [\"prover/${underscored_name}\"] >> Cargo.toml ci_run cp prover/Cargo.lock Cargo.lock - PASSED_ENV_VARS="ERA_BELLMAN_CUDA_RELEASE" \ + PASSED_ENV_VARS="ERA_BELLMAN_CUDA_RELEASE,CUDA_ARCH" \ ci_run zk docker $DOCKER_ACTION $COMPONENT else ci_run zk docker $DOCKER_ACTION $COMPONENT @@ -177,3 +175,41 @@ jobs: run: | ci_run sccache --show-stats ci_run cat /tmp/sccache_log.txt + + copy-images: + name: Copy images between docker registries + env: + IMAGE_TAG_SUFFIX: ${{ inputs.image_tag_suffix }} + runs-on: matterlabs-ci-runner + needs: build-images + if: ${{ inputs.action == 'push' }} + strategy: + matrix: + component: + - witness-vector-generator + steps: + + - name: Set up QEMU + uses: docker/setup-qemu-action@v3 + + - name: Set up Docker Buildx + uses: docker/setup-buildx-action@v3 + + - name: Login to us-central1 GAR + run: | + gcloud auth print-access-token --lifetime=7200 --impersonate-service-account=gha-ci-runners@matterlabs-infra.iam.gserviceaccount.com | docker login -u oauth2accesstoken --password-stdin https://us-docker.pkg.dev + + - name: Login and push to Asia GAR + run: | + gcloud auth print-access-token --lifetime=7200 --impersonate-service-account=gha-ci-runners@matterlabs-infra.iam.gserviceaccount.com | docker login -u oauth2accesstoken --password-stdin https://asia-docker.pkg.dev + docker buildx imagetools create \ + --tag asia-docker.pkg.dev/matterlabs-infra/matterlabs-docker/${{ matrix.component }}:2.0-${{ inputs.image_tag_suffix }} \ + us-docker.pkg.dev/matterlabs-infra/matterlabs-docker/${{ matrix.component }}:2.0-${{ inputs.image_tag_suffix }} + + - name: Login and push to Europe GAR + run: | + gcloud auth print-access-token --lifetime=7200 --impersonate-service-account=gha-ci-runners@matterlabs-infra.iam.gserviceaccount.com | docker login -u oauth2accesstoken --password-stdin https://europe-docker.pkg.dev + docker buildx imagetools create \ + --tag europe-docker.pkg.dev/matterlabs-infra/matterlabs-docker/${{ matrix.component }}:2.0-${{ inputs.image_tag_suffix }} \ + us-docker.pkg.dev/matterlabs-infra/matterlabs-docker/${{ matrix.component }}:2.0-${{ inputs.image_tag_suffix }} + diff --git a/.github/workflows/check-spelling.yml b/.github/workflows/check-spelling.yml new file mode 100644 index 00000000000..0a3bce24cb7 --- /dev/null +++ b/.github/workflows/check-spelling.yml @@ -0,0 +1,41 @@ +name: Check Spelling + +on: + push: + branches: + - main + pull_request: + merge_group: + +env: + CARGO_TERM_COLOR: always + +jobs: + spellcheck: + runs-on: [matterlabs-ci-runner] + steps: + - uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # v3 + with: + submodules: "recursive" + - name: Use Node.js + uses: actions/setup-node@v3 + with: + node-version: 18 + + - name: Setup 
environment + run: | + echo ZKSYNC_HOME=$(pwd) >> $GITHUB_ENV + echo $(pwd)/bin >> $GITHUB_PATH + echo IN_DOCKER=1 >> .env + + - name: Start services + run: | + docker compose up -d zk + + - name: Build zk + run: | + ci_run zk + + - name: Run spellcheck + run: | + ci_run zk spellcheck diff --git a/.github/workflows/ci-core-lint-reusable.yml b/.github/workflows/ci-core-lint-reusable.yml index 1ca6746b2d8..541049cdeb0 100644 --- a/.github/workflows/ci-core-lint-reusable.yml +++ b/.github/workflows/ci-core-lint-reusable.yml @@ -20,8 +20,8 @@ jobs: - name: Start services run: | - docker-compose -f docker-compose-runner.yml pull - docker-compose -f docker-compose-runner.yml up --build -d zk + mkdir -p ./volumes/postgres + docker compose up -d zk postgres ci_run sccache --start-server - name: Setup db @@ -37,4 +37,3 @@ jobs: ci_run zk lint ts --check ci_run zk lint md --check ci_run zk db check-sqlx-data - diff --git a/.github/workflows/ci-core-reusable.yml b/.github/workflows/ci-core-reusable.yml index 7ad0e54074c..03ede7fa732 100644 --- a/.github/workflows/ci-core-reusable.yml +++ b/.github/workflows/ci-core-reusable.yml @@ -1,6 +1,12 @@ name: Workflow template for CI jobs for Core Components on: workflow_call: + inputs: + compilers: + description: 'JSON of required compilers and their versions' + type: string + required: false + default: '[{ "zksolc": ["1.3.14", "1.3.16", "1.3.17", "1.3.1", "1.3.7", "1.3.18", "1.3.19", "1.3.21"] } , { "zkvyper": ["1.3.13"] }]' jobs: lint: @@ -20,10 +26,27 @@ jobs: echo $(pwd)/bin >> $GITHUB_PATH echo IN_DOCKER=1 >> .env + # TODO: Remove once we can upgrade hardhat-plugins + - name: pre-download compilers + run: | + # Download needed versions of vyper compiler + # Not sanitized due to unconventional path and tags + mkdir -p ./hardhat-nodejs/compilers-v2/vyper/linux + wget -nv -O ./hardhat-nodejs/compilers-v2/vyper/linux/0.3.10 https://github.com/vyperlang/vyper/releases/download/v0.3.10/vyper.0.3.10+commit.91361694.linux + wget -nv -O ./hardhat-nodejs/compilers-v2/vyper/linux/0.3.3 https://github.com/vyperlang/vyper/releases/download/v0.3.3/vyper.0.3.3+commit.48e326f0.linux + chmod +x ./hardhat-nodejs/compilers-v2/vyper/linux/0.3.10 + chmod +x ./hardhat-nodejs/compilers-v2/vyper/linux/0.3.3 + + COMPILERS_JSON='${{ inputs.compilers }}' + echo "$COMPILERS_JSON" | jq -r '.[] | to_entries[] | .key as $compiler | .value[] | "\(.),\($compiler)"' | while IFS=, read -r version compiler; do + mkdir -p "./hardhat-nodejs/compilers-v2/$compiler" + wget -nv -O "./hardhat-nodejs/compilers-v2/$compiler/${compiler}-v${version}" "https://github.com/matter-labs/${compiler}-bin/releases/download/v${version}/${compiler}-linux-amd64-musl-v${version}" + chmod +x "./hardhat-nodejs/compilers-v2/$compiler/${compiler}-v${version}" + done + + - name: Start services run: | - docker-compose -f docker-compose-runner.yml pull - docker-compose -f docker-compose-runner.yml up --build -d geth zk postgres + ci_localnet_up ci_run sccache --start-server - name: Init @@ -63,15 +86,14 @@ jobs: - name: Start services run: | - docker-compose -f docker-compose-runner.yml pull - docker-compose -f docker-compose-runner.yml up --build -d geth zk postgres + ci_localnet_up ci_run sccache --start-server - name: Init run: | ci_run git config --global --add safe.directory /usr/src/zksync ci_run git config --global --add safe.directory /usr/src/zksync/sdk/binaryen - ci_run git config --global --add safe.directory /usr/src/zksync/etc/system-contracts + ci_run git config --global --add safe.directory
/usr/src/zksync/contracts/system-contracts ci_run git config --global --add safe.directory /usr/src/zksync/contracts ci_run zk @@ -80,19 +102,21 @@ jobs: # `sleep 30` because we need to wait until server added all the tokens - name: Run server run: | - ci_run zk server --uring --components api,tree,eth,data_fetcher,state_keeper,housekeeper &>server.log & + ci_run zk server --uring --components api,tree,eth,state_keeper,housekeeper &>server.log & ci_run sleep 30 - name: Perform loadtest run: ci_run zk run loadtest - - name: Show logs + - name: Show server.log logs + if: always() + run: ci_run cat server.log || true + + - name: Show sccache logs if: always() run: | - ci_run cat server.log ci_run sccache --show-stats ci_run cat /tmp/sccache_log.txt - integration: runs-on: [matterlabs-ci-runner] @@ -113,20 +137,24 @@ jobs: run: | sudo apt update && sudo apt install wget -y - mkdir -p $(pwd)/etc/solc-bin/0.8.21 - wget https://github.com/ethereum/solc-bin/raw/gh-pages/linux-amd64/solc-linux-amd64-v0.8.21%2Bcommit.d9974bed - mv solc-linux-amd64-v0.8.21+commit.d9974bed $(pwd)/etc/solc-bin/0.8.21/solc - chmod +x $(pwd)/etc/solc-bin/0.8.21/solc - - mkdir -p $(pwd)/etc/zksolc-bin/v1.3.16 - wget https://github.com/matter-labs/zksolc-bin/raw/main/linux-amd64/zksolc-linux-amd64-musl-v1.3.16 - mv zksolc-linux-amd64-musl-v1.3.16 $(pwd)/etc/zksolc-bin/v1.3.16/zksolc - chmod +x $(pwd)/etc/zksolc-bin/v1.3.16/zksolc - - mkdir -p $(pwd)/etc/vyper-bin/0.3.3 - wget -O vyper0.3.3 https://github.com/vyperlang/vyper/releases/download/v0.3.3/vyper.0.3.3%2Bcommit.48e326f0.linux - mv vyper0.3.3 $(pwd)/etc/vyper-bin/0.3.3/vyper - chmod +x $(pwd)/etc/vyper-bin/0.3.3/vyper + mkdir -p $(pwd)/etc/solc-bin/0.8.23 + wget https://github.com/ethereum/solc-bin/raw/gh-pages/linux-amd64/solc-linux-amd64-v0.8.23%2Bcommit.f704f362 + mv solc-linux-amd64-v0.8.23+commit.f704f362 $(pwd)/etc/solc-bin/0.8.23/solc + chmod +x $(pwd)/etc/solc-bin/0.8.23/solc + + mkdir -p $(pwd)/etc/solc-bin/zkVM-0.8.23-1.0.0 + wget https://github.com/matter-labs/era-solidity/releases/download/0.8.23-1.0.0/solc-linux-amd64-0.8.23-1.0.0 -O $(pwd)/etc/solc-bin/zkVM-0.8.23-1.0.0/solc + chmod +x $(pwd)/etc/solc-bin/zkVM-0.8.23-1.0.0/solc + + mkdir -p $(pwd)/etc/zksolc-bin/v1.3.21 + wget https://github.com/matter-labs/zksolc-bin/raw/main/linux-amd64/zksolc-linux-amd64-musl-v1.3.21 + mv zksolc-linux-amd64-musl-v1.3.21 $(pwd)/etc/zksolc-bin/v1.3.21/zksolc + chmod +x $(pwd)/etc/zksolc-bin/v1.3.21/zksolc + + mkdir -p $(pwd)/etc/vyper-bin/0.3.10 + wget -O vyper0.3.10 https://github.com/vyperlang/vyper/releases/download/v0.3.10/vyper.0.3.10%2Bcommit.91361694.linux + mv vyper0.3.10 $(pwd)/etc/vyper-bin/0.3.10/vyper + chmod +x $(pwd)/etc/vyper-bin/0.3.10/vyper mkdir -p $(pwd)/etc/zkvyper-bin/v1.3.13 wget https://github.com/matter-labs/zkvyper-bin/raw/main/linux-amd64/zkvyper-linux-amd64-musl-v1.3.13 @@ -135,15 +163,14 @@ jobs: - name: Start services run: | - docker-compose -f docker-compose-runner.yml pull - docker-compose -f docker-compose-runner.yml up --build -d geth zk postgres + ci_localnet_up ci_run sccache --start-server - name: Init run: | ci_run git config --global --add safe.directory /usr/src/zksync ci_run git config --global --add safe.directory /usr/src/zksync/sdk/binaryen - ci_run git config --global --add safe.directory /usr/src/zksync/etc/system-contracts + ci_run git config --global --add safe.directory /usr/src/zksync/contracts/system-contracts ci_run git config --global --add safe.directory /usr/src/zksync/contracts ci_run zk ci_run zk init @@ -180,13 
+207,25 @@ jobs: ci_run sleep 10 ci_run zk test i upgrade - - name: Show logs + - name: Show server.log logs + if: always() + run: ci_run cat server.log || true + + - name: Show contract_verifier.log logs + if: always() + run: ci_run cat contract_verifier.log || true + + - name: Show revert.log logs + if: always() + run: ci_run cat core/tests/revert-test/revert.log || true + + - name: Show upgrade.log logs + if: always() + run: ci_run cat core/tests/upgrade-test/upgrade.log || true + + - name: Show sccache logs if: always() run: | - ci_run cat server.log - ci_run cat contract_verifier.log - ci_run cat core/tests/revert-test/revert.log - ci_run cat core/tests/upgrade-test/upgrade.log ci_run sccache --show-stats ci_run cat /tmp/sccache_log.txt @@ -211,20 +250,24 @@ jobs: run: | sudo apt update && sudo apt install wget -y - mkdir -p $(pwd)/etc/solc-bin/0.8.21 - wget https://github.com/ethereum/solc-bin/raw/gh-pages/linux-amd64/solc-linux-amd64-v0.8.21%2Bcommit.d9974bed - mv solc-linux-amd64-v0.8.21+commit.d9974bed $(pwd)/etc/solc-bin/0.8.21/solc - chmod +x $(pwd)/etc/solc-bin/0.8.21/solc - - mkdir -p $(pwd)/etc/zksolc-bin/v1.3.16 - wget https://github.com/matter-labs/zksolc-bin/raw/main/linux-amd64/zksolc-linux-amd64-musl-v1.3.16 - mv zksolc-linux-amd64-musl-v1.3.16 $(pwd)/etc/zksolc-bin/v1.3.16/zksolc - chmod +x $(pwd)/etc/zksolc-bin/v1.3.16/zksolc - - mkdir -p $(pwd)/etc/vyper-bin/0.3.3 - wget -O vyper0.3.3 https://github.com/vyperlang/vyper/releases/download/v0.3.3/vyper.0.3.3%2Bcommit.48e326f0.linux - mv vyper0.3.3 $(pwd)/etc/vyper-bin/0.3.3/vyper - chmod +x $(pwd)/etc/vyper-bin/0.3.3/vyper + mkdir -p $(pwd)/etc/solc-bin/0.8.23 + wget https://github.com/ethereum/solc-bin/raw/gh-pages/linux-amd64/solc-linux-amd64-v0.8.23%2Bcommit.f704f362 + mv solc-linux-amd64-v0.8.23+commit.f704f362 $(pwd)/etc/solc-bin/0.8.23/solc + chmod +x $(pwd)/etc/solc-bin/0.8.23/solc + + mkdir -p $(pwd)/etc/solc-bin/zkVM-0.8.23-1.0.0 + wget https://github.com/matter-labs/era-solidity/releases/download/0.8.23-1.0.0/solc-linux-amd64-0.8.23-1.0.0 -O $(pwd)/etc/solc-bin/zkVM-0.8.23-1.0.0/solc + chmod +x $(pwd)/etc/solc-bin/zkVM-0.8.23-1.0.0/solc + + mkdir -p $(pwd)/etc/zksolc-bin/v1.3.21 + wget https://github.com/matter-labs/zksolc-bin/raw/main/linux-amd64/zksolc-linux-amd64-musl-v1.3.21 + mv zksolc-linux-amd64-musl-v1.3.21 $(pwd)/etc/zksolc-bin/v1.3.21/zksolc + chmod +x $(pwd)/etc/zksolc-bin/v1.3.21/zksolc + + mkdir -p $(pwd)/etc/vyper-bin/0.3.10 + wget -O vyper0.3.10 https://github.com/vyperlang/vyper/releases/download/v0.3.10/vyper.0.3.10%2Bcommit.91361694.linux + mv vyper0.3.10 $(pwd)/etc/vyper-bin/0.3.10/vyper + chmod +x $(pwd)/etc/vyper-bin/0.3.10/vyper mkdir -p $(pwd)/etc/zkvyper-bin/v1.3.11 wget https://github.com/matter-labs/zkvyper-bin/raw/main/linux-amd64/zkvyper-linux-amd64-musl-v1.3.11 @@ -233,15 +276,14 @@ jobs: - name: Start services run: | - docker-compose -f docker-compose-runner.yml pull - docker-compose -f docker-compose-runner.yml up --build -d geth zk postgres + ci_localnet_up ci_run sccache --start-server - name: Init run: | ci_run git config --global --add safe.directory /usr/src/zksync ci_run git config --global --add safe.directory /usr/src/zksync/sdk/binaryen - ci_run git config --global --add safe.directory /usr/src/zksync/etc/system-contracts + ci_run git config --global --add safe.directory /usr/src/zksync/contracts/system-contracts ci_run git config --global --add safe.directory /usr/src/zksync/contracts ci_run zk ci_run zk init @@ -261,7 +303,7 @@ jobs: # TODO(PLA-653): Restore bridge 
tests for EN. - name: Integration tests - run: ci_run zk test i server --testPathIgnorePatterns 'contract-verification|custom-erc20-bridge' + run: ci_run zk test i server --testPathIgnorePatterns 'contract-verification|custom-erc20-bridge|snapshots-creator' - name: Run Cross EN Checker run: ci_run zk run cross-en-checker @@ -288,12 +330,28 @@ jobs: ci_run zk env docker CHECK_EN_URL="http://0.0.0.0:3060" ci_run zk test i upgrade - - name: Show logs + - name: Show server.log logs + if: always() + run: ci_run cat server.log || true + + - name: Show ext-node.log logs + if: always() + run: ci_run cat ext-node.log || true + + - name: Show contract_verifier.log logs + if: always() + run: ci_run cat contract_verifier.log || true + + - name: Show revert.log logs + if: always() + run: ci_run cat core/tests/revert-test/revert.log || true + + - name: Show upgrade.log logs + if: always() + run: ci_run cat core/tests/upgrade-test/upgrade.log || true + + - name: Show sccache logs if: always() run: | - ci_run cat server.log - ci_run cat ext-node.log - ci_run cat core/tests/revert-test/revert.log - ci_run cat core/tests/upgrade-test/upgrade.log ci_run sccache --show-stats ci_run cat /tmp/sccache_log.txt diff --git a/.github/workflows/ci-docs-reusable.yml b/.github/workflows/ci-docs-reusable.yml index c0c9690542a..68d4c1adb94 100644 --- a/.github/workflows/ci-docs-reusable.yml +++ b/.github/workflows/ci-docs-reusable.yml @@ -19,8 +19,8 @@ jobs: - name: Start services run: | - docker-compose -f docker-compose-runner.yml pull - docker-compose -f docker-compose-runner.yml up --build -d zk + mkdir -p ./volumes/postgres + docker compose up -d zk postgres - name: Lints run: | diff --git a/.github/workflows/ci-prover-reusable.yml b/.github/workflows/ci-prover-reusable.yml index aa83cabc902..cc1246cbf9a 100644 --- a/.github/workflows/ci-prover-reusable.yml +++ b/.github/workflows/ci-prover-reusable.yml @@ -22,7 +22,8 @@ jobs: - name: Start services run: | docker-compose -f ${RUNNER_COMPOSE_FILE} pull - docker-compose -f ${RUNNER_COMPOSE_FILE} up --build -d zk + mkdir -p ./volumes/postgres + docker-compose -f ${RUNNER_COMPOSE_FILE} up --build -d zk postgres ci_run sccache --start-server - name: Init @@ -55,7 +56,8 @@ jobs: - name: Start services run: | docker-compose -f ${RUNNER_COMPOSE_FILE} pull - docker-compose -f ${RUNNER_COMPOSE_FILE} up --build -d zk + mkdir -p ./volumes/postgres + docker-compose -f ${RUNNER_COMPOSE_FILE} up --build -d zk postgres ci_run sccache --start-server - name: Init diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 2812f28778a..f15efa8a6a3 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -55,7 +55,6 @@ jobs: - '.github/workflows/build-core-template.yml' - '.github/workflows/ci-core-reusable.yml' - '.github/workflows/ci-core-lint-reusable.yml' - - 'docker-compose-runner.yml' - 'Cargo.toml' - 'Cargo.lock' - '!**/*.md' @@ -97,13 +96,18 @@ jobs: name: CI for Docs uses: ./.github/workflows/ci-docs-reusable.yml + build-contracts: + name: Build contracts + needs: changed_files + if: ${{ (needs.changed_files.outputs.core == 'true' || needs.changed_files.outputs.all == 'true') && !contains(github.ref_name, 'release-please--branches') }} + uses: ./.github/workflows/build-contracts.yml + build-core-images: name: Build core images - needs: changed_files + needs: build-contracts if: ${{ (needs.changed_files.outputs.core == 'true' || needs.changed_files.outputs.all == 'true') && !contains(github.ref_name, 'release-please--branches') }} uses:
@@ -116,7 +120,6 @@ jobs:
     if: ${{ (needs.changed_files.outputs.prover == 'true' || needs.changed_files.outputs.all == 'true') && !contains(github.ref_name, 'release-please--branches') }}
     uses: ./.github/workflows/build-prover-template.yml
     with:
-      image_tag: ${{ needs.setup.outputs.image_tag }}
       image_tag_suffix: ${{ needs.setup.outputs.image_tag_suffix }}
       action: "build"
       ERA_BELLMAN_CUDA_RELEASE: ${{ vars.ERA_BELLMAN_CUDA_RELEASE }}
@@ -129,7 +132,7 @@ jobs:
     name: Github Status Check
     runs-on: ubuntu-latest
     if: always() && !cancelled()
-    needs: [ci-for-core-lint, ci-for-core, ci-for-prover, ci-for-docs, build-core-images, build-prover-images]
+    needs: [ci-for-core-lint, ci-for-core, ci-for-prover, ci-for-docs, build-contracts, build-core-images, build-prover-images]
     steps:
       - name: Status
         run: |

diff --git a/.github/workflows/nodejs-license.yaml b/.github/workflows/nodejs-license.yaml
index 5980eaae544..80a2eb276b3 100644
--- a/.github/workflows/nodejs-license.yaml
+++ b/.github/workflows/nodejs-license.yaml
@@ -19,8 +19,9 @@ env:
     Public Domain;
     WTFPL;
     Unlicense;
+    BlueOak-1.0.0;
   # It has to be one line, there must be no space between packages.
-  EXCLUDE_PACKAGES: testrpc@0.0.1;uuid@2.0.1;
+  EXCLUDE_PACKAGES: testrpc@0.0.1;uuid@2.0.1;@cspell/dict-en-common-misspellings@2.0.0;

 jobs:
   generate-matrix:

diff --git a/.github/workflows/release-test-stage.yml b/.github/workflows/release-test-stage.yml
index c8f7a1e9eee..fae77fca0b5 100644
--- a/.github/workflows/release-test-stage.yml
+++ b/.github/workflows/release-test-stage.yml
@@ -62,13 +62,18 @@ jobs:
         run: |
           ./prover/extract-setup-data-keys.sh >> $GITHUB_OUTPUT

+  build-contracts:
+    name: Build contracts
+    needs: changed_files
+    if: needs.changed_files.outputs.core == 'true' || needs.changed_files.outputs.all == 'true'
+    uses: ./.github/workflows/build-contracts.yml
+
   build-push-core-images:
     name: Build and push images
-    needs: [setup, changed_files]
+    needs: [setup, build-contracts]
     uses: ./.github/workflows/build-core-template.yml
     if: needs.changed_files.outputs.core == 'true' || needs.changed_files.outputs.all == 'true'
     with:
-      image_tag: ${{ needs.setup.outputs.image_tag }}
       image_tag_suffix: ${{ needs.setup.outputs.image_tag_suffix }}
     secrets:
       DOCKERHUB_USER: ${{ secrets.DOCKERHUB_USER }}
@@ -80,24 +85,13 @@ jobs:
     uses: ./.github/workflows/build-prover-template.yml
     if: needs.changed_files.outputs.prover == 'true' || needs.changed_files.outputs.all == 'true'
     with:
-      image_tag: ${{ needs.setup.outputs.image_tag }}
       image_tag_suffix: ${{ needs.setup.outputs.image_tag_suffix }}
       ERA_BELLMAN_CUDA_RELEASE: ${{ vars.ERA_BELLMAN_CUDA_RELEASE }}
+      CUDA_ARCH: "60;70;75;89"
     secrets:
       DOCKERHUB_USER: ${{ secrets.DOCKERHUB_USER }}
       DOCKERHUB_TOKEN: ${{ secrets.DOCKERHUB_TOKEN }}

-  build-gar-prover:
-    name: Build GAR prover
-    needs: [setup, build-push-core-images, build-push-prover-images]
-    uses: ./.github/workflows/build-gar-reusable.yml
-    if: needs.changed_files.outputs.prover == 'true' || needs.changed_files.outputs.all == 'true'
-    with:
-      setup_keys_id: bccc7de
-      image_tag_suffix: ${{ needs.setup.outputs.image_tag_suffix }}
-      push_asia: false
-      push_europe: false
-
   build-gar-prover-fri-gpu:
     name: Build GAR prover FRI GPU
     needs: [setup, build-push-prover-images]
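For context on the new `CUDA_ARCH: "60;70;75;89"` value above: it reads like the semicolon-separated list of NVIDIA compute capabilities that CMake's `CMAKE_CUDA_ARCHITECTURES` accepts. Since build-prover-template.yml is not shown in this patch, the following is only an assumed sketch of how such a `workflow_call` input would be declared on the receiving side; the description and default are placeholders:

```yaml
# Assumed shape of the corresponding input in build-prover-template.yml
# (not part of this patch):
on:
  workflow_call:
    inputs:
      CUDA_ARCH:
        description: "Semicolon-separated NVIDIA compute capabilities, e.g. 60;70;75;89"
        type: string
        required: false
        default: "89"  # placeholder default
```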
diff --git a/.github/workflows/vm-perf-comparison.yml b/.github/workflows/vm-perf-comparison.yml
index f80248a723e..52f65a801d3 100644
--- a/.github/workflows/vm-perf-comparison.yml
+++ b/.github/workflows/vm-perf-comparison.yml
@@ -38,7 +38,7 @@ jobs:
       - name: init
         run: |
-          docker-compose -f docker-compose-runner.yml up -d zk
+          docker compose up -d zk

       - name: run benchmarks on base branch
         shell: bash
@@ -46,9 +46,10 @@ jobs:
           ci_run zk
           ci_run zk compiler system-contracts
           ci_run cargo bench --package vm-benchmark --bench iai | tee base-iai
+          ci_run yarn workspace system-contracts clean

       - name: checkout PR
-        run: git checkout --force FETCH_HEAD
+        run: git checkout --force FETCH_HEAD --recurse-submodules

       - name: run benchmarks on PR
         shell: bash

diff --git a/.github/workflows/vm-perf-to-prometheus.yml b/.github/workflows/vm-perf-to-prometheus.yml
index d2a6594ffca..8bf905d7c0b 100644
--- a/.github/workflows/vm-perf-to-prometheus.yml
+++ b/.github/workflows/vm-perf-to-prometheus.yml
@@ -28,7 +28,7 @@ jobs:
       - name: init
         run: |
-          docker-compose -f docker-compose-runner.yml up -d zk
+          docker compose up -d zk
           ci_run zk
           ci_run zk compiler system-contracts

diff --git a/.github/workflows/zk-environment-publish.yml b/.github/workflows/zk-environment-publish.yml
index 44768241ccd..0551b15aac5 100644
--- a/.github/workflows/zk-environment-publish.yml
+++ b/.github/workflows/zk-environment-publish.yml
@@ -5,14 +5,14 @@ on:
     branches:
       - main
     paths:
-      - "docker/zk-environment/*"
-      - ".github/workflows/zk-environment.publish.yml"
+      - "docker/zk-environment/**"
+      - ".github/workflows/zk-environment-publish.yml"
   pull_request:
     branches:
       - main
     paths:
-      - "docker/zk-environment/*"
-      - ".github/workflows/zk-environment.publish.yml"
+      - "docker/zk-environment/**"
+      - ".github/workflows/zk-environment-publish.yml"

 concurrency:
   group: ${{ github.workflow }}-${{ github.event_name == 'pull_request' && github.event.pull_request.number || github.sha }}
@@ -23,7 +23,7 @@ jobs:
     outputs:
       zk_environment: ${{ steps.changed-files-yaml.outputs.zk_env_any_changed }}
       zk_environment_cuda_11_8: ${{ steps.changed-files-yaml.outputs.zk_env_cuda_11_8_any_changed }}
-      zk_environment_cuda_12: ${{ steps.changed-files-yaml.outputs.zk_env_cuda_12_any_changed }}
+      zk_environment_cuda_12_0: ${{ steps.changed-files-yaml.outputs.zk_env_cuda_12_any_changed }}
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # v3
@@ -37,13 +37,13 @@ jobs:
           files_yaml: |
             zk_env:
               - docker/zk-environment/Dockerfile
-              - .github/workflows/zk-environment.publish.yml
+              - .github/workflows/zk-environment-publish.yml
             zk_env_cuda_11_8:
               - docker/zk-environment/20.04_amd64_cuda_11_8.Dockerfile
-              - .github/workflows/zk-environment.publish.yml
+              - .github/workflows/zk-environment-publish.yml
             zk_env_cuda_12:
               - docker/zk-environment/20.04_amd64_cuda_12_0.Dockerfile
-              - .github/workflows/zk-environment.publish.yml
+              - .github/workflows/zk-environment-publish.yml

   get_short_sha:
     if: needs.changed_files.outputs.zk_environment == 'true'

diff --git a/.gitignore b/.gitignore
index eff8079e75d..20c5973e8f4 100644
--- a/.gitignore
+++ b/.gitignore
@@ -27,7 +27,6 @@ todo

 Cargo.lock
 !/Cargo.lock
-!/core/bin/verification_key_generator_and_server/Cargo.lock
 !/infrastructure/zksync-crypto/Cargo.lock
 !/prover/Cargo.lock

diff --git a/.gitmodules b/.gitmodules
index 47c6f801432..445344c3f20 100644
--- a/.gitmodules
+++ b/.gitmodules
@@ -1,6 +1,3 @@
-[submodule "etc/system-contracts"]
-path = etc/system-contracts
-url = https://github.com/matter-labs/era-system-contracts.git
 [submodule "contracts"]
 path = contracts
 url =
https://github.com/matter-labs/era-contracts.git diff --git a/.markdownlintignore b/.markdownlintignore index 1305d94819c..ca8b9f5ef3c 100644 --- a/.markdownlintignore +++ b/.markdownlintignore @@ -7,5 +7,3 @@ bellman-cuda # Ignore contract submodules contracts -etc/system-contracts - diff --git a/.prettierignore b/.prettierignore index a68a572643c..5138b38cc6c 100644 --- a/.prettierignore +++ b/.prettierignore @@ -5,4 +5,3 @@ CHANGELOG.md # Ignore contract submodules contracts -etc/system-contracts diff --git a/.solhintignore b/.solhintignore index 24b618546bb..9e42f99d742 100644 --- a/.solhintignore +++ b/.solhintignore @@ -1,3 +1,2 @@ # Ignore contract submodules -contracts -etc/system-contracts \ No newline at end of file +contracts \ No newline at end of file diff --git a/CODEOWNERS b/CODEOWNERS index 8cde1cc1ade..eea7f1fa137 100644 --- a/CODEOWNERS +++ b/CODEOWNERS @@ -1,4 +1,3 @@ -* @matter-labs/era-reviewers .github/release-please/** @RomanBrodetski @perekopskiy @Deniallugo @popzxc **/CHANGELOG.md @RomanBrodetski @perekopskiy @Deniallugo @popzxc CODEOWNERS @RomanBrodetski @perekopskiy @Deniallugo @popzxc diff --git a/Cargo.lock b/Cargo.lock index 073f25b71bb..6aae513b69a 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -9,13 +9,13 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "617a8268e3537fe1d8c9ead925fca49ef6400927ee7bc26750e90ecee14ce4b8" dependencies = [ "bitflags 1.3.2", - "bytes 1.5.0", + "bytes", "futures-core", "futures-sink", "memchr", "pin-project-lite", "tokio", - "tokio-util 0.7.9", + "tokio-util", "tracing", ] @@ -44,11 +44,11 @@ dependencies = [ "actix-rt", "actix-service", "actix-utils", - "ahash 0.8.5", + "ahash 0.8.7", "base64 0.21.5", "bitflags 2.4.1", "brotli", - "bytes 1.5.0", + "bytes", "bytestring", "derive_more", "encoding_rs", @@ -68,7 +68,7 @@ dependencies = [ "sha1", "smallvec", "tokio", - "tokio-util 0.7.9", + "tokio-util", "tracing", "zstd 0.12.4", ] @@ -118,7 +118,7 @@ dependencies = [ "actix-utils", "futures-core", "futures-util", - "mio 0.8.9", + "mio", "socket2 0.5.5", "tokio", "tracing", @@ -160,8 +160,8 @@ dependencies = [ "actix-service", "actix-utils", "actix-web-codegen", - "ahash 0.8.5", - "bytes 1.5.0", + "ahash 0.8.7", + "bytes", "bytestring", "cfg-if 1.0.0", "cookie", @@ -230,7 +230,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d122413f284cf2d62fb1b7db97e02edb8cda96d769b16e443a4f6195e35662b0" dependencies = [ "crypto-common", - "generic-array 0.14.7", + "generic-array", ] [[package]] @@ -288,7 +288,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "be14c7498ea50828a38d0e24a765ed2effe92a705885b57d029cd67d45744072" dependencies = [ "cipher 0.2.5", - "opaque-debug 0.3.0", + "opaque-debug", ] [[package]] @@ -298,7 +298,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ea2e11f5e94c2f7d386164cc2aa1f97823fed6f259e486940a71c174dd01b0ce" dependencies = [ "cipher 0.2.5", - "opaque-debug 0.3.0", + "opaque-debug", ] [[package]] @@ -314,9 +314,9 @@ dependencies = [ [[package]] name = "ahash" -version = "0.8.5" +version = "0.8.7" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "cd7d5a2cecb58716e47d67d5703a249964b14c7be1ec3cad3affc295b2d1c35d" +checksum = "77c3a9648d43b9cd48db467b3f87fdd6e146bcc88ab0180006cef2179fe11d01" dependencies = [ "cfg-if 1.0.0", "getrandom 0.2.10", @@ -349,6 +349,12 @@ dependencies = [ "alloc-no-stdlib", ] +[[package]] +name = "allocator-api2" +version = "0.2.16" +source = 
"registry+https://github.com/rust-lang/crates.io-index" +checksum = "0942ffc6dcaadf03badf6e6a2d0228460359d5e34b57ccdc720b7382dfbd5ec5" + [[package]] name = "android-tzdata" version = "0.1.1" @@ -376,7 +382,7 @@ version = "0.12.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d52a9bb7ec0cf484c551830a7ce27bd20d67eac647e1befb56b0be4ee39a55d2" dependencies = [ - "winapi 0.3.9", + "winapi", ] [[package]] @@ -505,11 +511,13 @@ dependencies = [ [[package]] name = "async-lock" -version = "2.8.0" +version = "3.2.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "287272293e9d8c41773cec55e365490fe034813a2f172f502d6ddcf75b2f582b" +checksum = "7125e42787d53db9dd54261812ef17e937c95a51e4d291373b670342fa44310c" dependencies = [ - "event-listener", + "event-listener 4.0.0", + "event-listener-strategy", + "pin-project-lite", ] [[package]] @@ -547,13 +555,23 @@ dependencies = [ [[package]] name = "atoi" -version = "0.4.0" +version = "2.0.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "616896e05fc0e2649463a93a15183c6a16bf03413a7af88ef1285ddedfa9cda5" +checksum = "f28d99ec8bfea296261ca1af174f24225171fea9664ba9003cbebee704810528" dependencies = [ "num-traits", ] +[[package]] +name = "atomic-write-file" +version = "0.1.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "edcdbedc2236483ab103a53415653d6b4442ea6141baf1ffa85df29635e88436" +dependencies = [ + "nix", + "rand 0.8.5", +] + [[package]] name = "atty" version = "0.2.14" @@ -562,7 +580,7 @@ checksum = "d9b39be18770d11421cdb1b9947a45dd3f37e93092cbf377614828a319d5fee8" dependencies = [ "hermit-abi 0.1.19", "libc", - "winapi 0.3.9", + "winapi", ] [[package]] @@ -589,7 +607,7 @@ dependencies = [ "async-trait", "axum-core", "bitflags 1.3.2", - "bytes 1.5.0", + "bytes", "futures-util", "http", "http-body", @@ -618,7 +636,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "759fa577a247914fd3f7f76d62972792636412fbfd634cd452f6a385a74d2d2c" dependencies = [ "async-trait", - "bytes 1.5.0", + "bytes", "futures-util", "http", "http-body", @@ -661,6 +679,12 @@ version = "0.1.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "349a06037c7bf932dd7e7d1f653678b2038b9ad46a74102f1fc7bd7872678cce" +[[package]] +name = "base16ct" +version = "0.2.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "4c7f02d4ea65f2c1853089ffd8d2787bdbc63de2f0d29dedbcf8ccdfa0ccd4cf" + [[package]] name = "base64" version = "0.13.1" @@ -736,11 +760,11 @@ dependencies = [ [[package]] name = "bigdecimal" -version = "0.2.2" +version = "0.3.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d1e50562e37200edf7c6c43e54a08e64a5553bfb59d9c297d5572512aa517256" +checksum = "a6773ddc0eafc0e509fb60e48dff7f450f8e674a0686ae8605e8d9901bd5eefa" dependencies = [ - "num-bigint 0.3.3", + "num-bigint 0.4.4", "num-integer", "num-traits", "serde", @@ -796,6 +820,9 @@ name = "bitflags" version = "2.4.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "327762f6e5a765692301e5bb513e0d9fef63be86bbc14528052b1cd3e6f03e07" +dependencies = [ + "serde", +] [[package]] name = "bitmaps" @@ -838,7 +865,7 @@ checksum = "0a4e37d16930f5459780f5621038b6382b9bb37c19016f39fb6b5808d831f174" dependencies = [ "crypto-mac 0.8.0", "digest 0.9.0", - "opaque-debug 0.3.0", + "opaque-debug", ] [[package]] @@ -900,26 +927,14 @@ dependencies = [ "constant_time_eq", ] -[[package]] -name = 
"block-buffer" -version = "0.7.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c0940dc441f31689269e10ac70eb1002a3a1d3ad1390e030043662eb7fe4688b" -dependencies = [ - "block-padding 0.1.5", - "byte-tools", - "byteorder", - "generic-array 0.12.4", -] - [[package]] name = "block-buffer" version = "0.9.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "4152116fd6e9dadb291ae18fc1ec3575ed6d84c29642d97890f4b4a3417297e4" dependencies = [ - "block-padding 0.2.1", - "generic-array 0.14.7", + "block-padding", + "generic-array", ] [[package]] @@ -928,7 +943,7 @@ version = "0.10.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3078c7629b62d3f0439517fa394996acacc5cbc91c5a20d8c658e77abd503a71" dependencies = [ - "generic-array 0.14.7", + "generic-array", ] [[package]] @@ -937,19 +952,10 @@ version = "0.7.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "57a0e8073e8baa88212fb5823574c02ebccb395136ba9a164ab89379ec6072f0" dependencies = [ - "block-padding 0.2.1", + "block-padding", "cipher 0.2.5", ] -[[package]] -name = "block-padding" -version = "0.1.5" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "fa79dedbb091f449f1f39e53edf88d5dbe95f895dae6135a8d7b881fb5af73f5" -dependencies = [ - "byte-tools", -] - [[package]] name = "block-padding" version = "0.2.1" @@ -1000,7 +1006,7 @@ dependencies = [ "derivative", "ethereum-types 0.14.1", "firestorm", - "itertools", + "itertools 0.10.5", "lazy_static", "num-modular", "num_cpus", @@ -1015,6 +1021,30 @@ dependencies = [ "unroll", ] +[[package]] +name = "borsh" +version = "1.3.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "26d4d6dafc1a3bb54687538972158f07b2c948bc57d5890df22c0739098b3028" +dependencies = [ + "borsh-derive", + "cfg_aliases", +] + +[[package]] +name = "borsh-derive" +version = "1.3.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "bf4918709cc4dd777ad2b6303ed03cb37f3ca0ccede8c1b0d28ac6db8f4710e0" +dependencies = [ + "once_cell", + "proc-macro-crate 2.0.1", + "proc-macro2 1.0.69", + "quote 1.0.33", + "syn 2.0.38", + "syn_derive", +] + [[package]] name = "brotli" version = "3.4.0" @@ -1036,16 +1066,6 @@ dependencies = [ "alloc-stdlib", ] -[[package]] -name = "bstr" -version = "1.7.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c79ad7fb2dd38f3dabd76b09c6a5a20c038fc0213ef1e9afd30eb777f120f019" -dependencies = [ - "memchr", - "serde", -] - [[package]] name = "bumpalo" version = "3.14.0" @@ -1059,10 +1079,26 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c3ac9f8b63eca6fd385229b3675f6cc0dc5c8a5c8a54a59d4f52ffd670d87b0c" [[package]] -name = "byte-tools" -version = "0.3.1" +name = "bytecheck" +version = "0.6.11" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8b6372023ac861f6e6dc89c8344a8f398fb42aaba2b5dbc649ca0c0e9dbcb627" +dependencies = [ + "bytecheck_derive", + "ptr_meta", + "simdutf8", +] + +[[package]] +name = "bytecheck_derive" +version = "0.6.11" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e3b5ca7a04898ad4bcd41c90c5285445ff5b791899bb1b0abdd2a2aa791211d7" +checksum = "a7ec4c6f261935ad534c0c22dbef2201b45918860eb1c574b972bd213a76af61" +dependencies = [ + "proc-macro2 1.0.69", + "quote 1.0.33", + "syn 1.0.109", +] [[package]] name = "bytecount" @@ -1076,16 +1112,6 @@ version = "1.5.0" source = 
"registry+https://github.com/rust-lang/crates.io-index" checksum = "1fd0f2584146f6f2ef48085050886acf353beff7305ebd1ae69500e27c67f64b" -[[package]] -name = "bytes" -version = "0.4.12" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "206fdffcfa2df7cbe15601ef46c813fce0965eb3286db6b56c583b814b51c81c" -dependencies = [ - "byteorder", - "iovec", -] - [[package]] name = "bytes" version = "1.5.0" @@ -1098,7 +1124,7 @@ version = "1.3.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "238e4886760d98c4f899360c834fa93e62cf7f721ac3c2da375cbdf4b8679aae" dependencies = [ - "bytes 1.5.0", + "bytes", ] [[package]] @@ -1180,6 +1206,12 @@ version = "1.0.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd" +[[package]] +name = "cfg_aliases" +version = "0.1.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "fd16c4719339c4530435d38e511904438d07cce7950afa3718a84ac36c10e89e" + [[package]] name = "chacha20" version = "0.9.1" @@ -1252,7 +1284,7 @@ version = "0.2.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "12f8e7987cbd042a63249497f41aed09f8e65add917ea6566effbc56578d6801" dependencies = [ - "generic-array 0.14.7", + "generic-array", ] [[package]] @@ -1276,7 +1308,21 @@ dependencies = [ "serde", "snark_wrapper", "zk_evm 1.4.0", - "zkevm_circuits", + "zkevm_circuits 1.4.0", +] + +[[package]] +name = "circuit_definitions" +version = "0.1.0" +source = "git+https://github.com/matter-labs/era-zkevm_test_harness.git?branch=v1.4.1#44975f894aff0893b5f98e34d0e364375390bcb8" +dependencies = [ + "crossbeam 0.8.2", + "derivative", + "seq-macro", + "serde", + "snark_wrapper", + "zk_evm 1.4.1", + "zkevm_circuits 1.4.1", ] [[package]] @@ -1415,13 +1461,17 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "acbf1af155f9b9ef647e42cdc158db4b64a1b61f743629225fde6f3e0be2a7c7" [[package]] -name = "combine" -version = "4.6.6" +name = "compile-fmt" +version = "0.1.0" +source = "git+https://github.com/slowli/compile-fmt.git?rev=c6a41c846c9a6f70cdba4b44c9f3922242ffcf12#c6a41c846c9a6f70cdba4b44c9f3922242ffcf12" + +[[package]] +name = "concurrent-queue" +version = "2.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "35ed6e9d84f0b51a7f52daf1c7d71dd136fd7a3f41a8462b8cdb8c78d920fad4" +checksum = "d16048cd947b08fa32c24458a22f5dc5e835264f689f4f5653210c69fd107363" dependencies = [ - "bytes 1.5.0", - "memchr", + "crossbeam-utils 0.8.16", ] [[package]] @@ -1436,12 +1486,6 @@ dependencies = [ "windows-sys 0.45.0", ] -[[package]] -name = "const-oid" -version = "0.7.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e4c78c047431fee22c1a7bb92e00ad095a02a983affe4d8a72e2a2c62c1b94f3" - [[package]] name = "const-oid" version = "0.9.5" @@ -1527,18 +1571,18 @@ dependencies = [ [[package]] name = "crc" -version = "2.1.0" +version = "3.0.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "49fc9a695bca7f35f5f4c15cddc84415f66a74ea78eef08e90c5024f2b540e23" +checksum = "86ec7a15cbe22e59248fc7eadb1907dab5ba09372595da4d73dd805ed4417dfe" dependencies = [ "crc-catalog", ] [[package]] name = "crc-catalog" -version = "1.1.1" +version = "2.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ccaeedb56da03b09f598226e25e80088cb4cd25f316e6e4df7d695f0feeb1403" +checksum = 
"19d374276b40fb8bbdee95aef7c7fa6b5316ec764510eb64b8dd0e2ed0d7e7f5" [[package]] name = "crc32fast" @@ -1561,7 +1605,7 @@ dependencies = [ "ciborium", "clap 3.2.25", "criterion-plot", - "itertools", + "itertools 0.10.5", "lazy_static", "num-traits", "oorandom", @@ -1582,7 +1626,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6b50826342786a51a89e2da3a28f1c32b06e387201bc2d19791f622c673706b1" dependencies = [ "cast", - "itertools", + "itertools 0.10.5", ] [[package]] @@ -1748,23 +1792,13 @@ version = "0.2.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7a81dae078cea95a014a339291cec439d2f232ebe854a9d672b796c6afafa9b7" -[[package]] -name = "crypto-bigint" -version = "0.3.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "03c6a1d5fa1de37e071642dfa44ec552ca5b299adb128fab16138e24b548fd21" -dependencies = [ - "generic-array 0.14.7", - "subtle", -] - [[package]] name = "crypto-bigint" version = "0.4.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ef2b4b23cddf68b89b8f8069890e8c270d54e2d5fe1b143820234805e4cb17ef" dependencies = [ - "generic-array 0.14.7", + "generic-array", "rand_core 0.6.4", "subtle", "zeroize", @@ -1776,8 +1810,10 @@ version = "0.5.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "740fe28e594155f10cfc383984cbefd529d7396050557148f79cb0f621204124" dependencies = [ + "generic-array", "rand_core 0.6.4", "subtle", + "zeroize", ] [[package]] @@ -1786,7 +1822,7 @@ version = "0.1.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1bfb12502f3fc46cca1bb51ac28df9d618d813cdc3d2f25b9fe775a34af26bb3" dependencies = [ - "generic-array 0.14.7", + "generic-array", "rand_core 0.6.4", "typenum", ] @@ -1797,17 +1833,17 @@ version = "0.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b584a330336237c1eecd3e94266efb216c56ed91225d634cb2991c5f3fd1aeab" dependencies = [ - "generic-array 0.14.7", + "generic-array", "subtle", ] [[package]] name = "crypto-mac" -version = "0.10.1" +version = "0.10.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "bff07008ec701e8028e2ceb8f83f0e4274ee62bd2dbdc4fefff2e9a91824081a" +checksum = "4857fd85a0c34b3c3297875b747c1e02e06b6a0ea32dd892d8192b9ce0813ea6" dependencies = [ - "generic-array 0.14.7", + "generic-array", "subtle", ] @@ -1862,6 +1898,36 @@ dependencies = [ "windows-sys 0.48.0", ] +[[package]] +name = "curl" +version = "0.4.44" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "509bd11746c7ac09ebd19f0b17782eae80aadee26237658a6b4808afb5c11a22" +dependencies = [ + "curl-sys", + "libc", + "openssl-probe", + "openssl-sys", + "schannel", + "socket2 0.4.10", + "winapi", +] + +[[package]] +name = "curl-sys" +version = "0.4.70+curl-8.5.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "3c0333d8849afe78a4c8102a429a446bfdd055832af071945520e835ae2d841e" +dependencies = [ + "cc", + "libc", + "libz-sys", + "openssl-sys", + "pkg-config", + "vcpkg", + "windows-sys 0.48.0", +] + [[package]] name = "curve25519-dalek" version = "4.1.1" @@ -1935,7 +2001,7 @@ dependencies = [ "hashbrown 0.14.2", "lock_api", "once_cell", - "parking_lot_core 0.9.9", + "parking_lot_core", ] [[package]] @@ -1948,24 +2014,13 @@ dependencies = [ "uuid", ] -[[package]] -name = "der" -version = "0.5.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = 
"6919815d73839e7ad218de758883aae3a257ba6759ce7a9992501efbb53d705c" -dependencies = [ - "const-oid 0.7.1", - "crypto-bigint 0.3.2", - "pem-rfc7468", -] - [[package]] name = "der" version = "0.6.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f1a467a65c5e759bce6e65eaf91cc29f466cdc57cb65777bd646872a8a1fd4de" dependencies = [ - "const-oid 0.9.5", + "const-oid", "zeroize", ] @@ -1975,7 +2030,8 @@ version = "0.7.8" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "fffa369a668c8af7dbf8b5e56c9f744fbd399949ed171606040001947de40b1c" dependencies = [ - "const-oid 0.9.5", + "const-oid", + "pem-rfc7468", "zeroize", ] @@ -2013,22 +2069,13 @@ dependencies = [ "syn 1.0.109", ] -[[package]] -name = "digest" -version = "0.8.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f3d0c8c8752312f9713efd397ff63acb9f85585afbf179282e720e7704954dd5" -dependencies = [ - "generic-array 0.12.4", -] - [[package]] name = "digest" version = "0.9.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d3dd60d1080a57a05ab032377049e0591415d2b31afd7028356dbf3cc6dcb066" dependencies = [ - "generic-array 0.14.7", + "generic-array", ] [[package]] @@ -2038,35 +2085,16 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9ed9a281f7bc9b7576e61468ba615a66a5c8cfdff42420a70aa82701a3b1e292" dependencies = [ "block-buffer 0.10.4", + "const-oid", "crypto-common", "subtle", ] [[package]] -name = "dirs" -version = "4.0.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ca3aa72a6f96ea37bbc5aa912f6788242832f75369bdfdadcb0e38423f100059" -dependencies = [ - "dirs-sys", -] - -[[package]] -name = "dirs-sys" -version = "0.3.7" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1b1d1d91c932ef41c0f2663aa8b0ca0342d444d842c06914aa0a7e352d0bada6" -dependencies = [ - "libc", - "redox_users", - "winapi 0.3.9", -] - -[[package]] -name = "dotenv" -version = "0.15.0" +name = "dotenvy" +version = "0.15.7" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "77c90badedccf4105eca100756a0b1289e191f6fcbdadd3cee1d2f614f97da8f" +checksum = "1aaf95b3e5c8f23aa320147307562d361db0ae0d51242340f558153b4eb2439b" [[package]] name = "dtoa" @@ -2081,11 +2109,25 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "413301934810f597c1d19ca71c8710e99a3f1ba28a0d2ebc01551a2daeea3c5c" dependencies = [ "der 0.6.1", - "elliptic-curve", - "rfc6979", + "elliptic-curve 0.12.3", + "rfc6979 0.3.1", "signature 1.6.4", ] +[[package]] +name = "ecdsa" +version = "0.16.9" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ee27f32b5c5292967d2d4a9d7f1e0b0aed2c15daded5a60300e4abb9d8020bca" +dependencies = [ + "der 0.7.8", + "digest 0.10.7", + "elliptic-curve 0.13.7", + "rfc6979 0.4.0", + "signature 2.2.0", + "spki 0.7.2", +] + [[package]] name = "ed25519" version = "2.2.3" @@ -2125,16 +2167,35 @@ version = "0.12.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e7bb888ab5300a19b8e5bceef25ac745ad065f3c9f7efc6de1b91958110891d3" dependencies = [ - "base16ct", + "base16ct 0.1.1", "crypto-bigint 0.4.9", "der 0.6.1", "digest 0.10.7", - "ff", - "generic-array 0.14.7", - "group", + "ff 0.12.1", + "generic-array", + "group 0.12.1", "pkcs8 0.9.0", "rand_core 0.6.4", - "sec1", + "sec1 0.3.0", + "subtle", + "zeroize", +] + +[[package]] +name = "elliptic-curve" +version = "0.13.7" +source = 
"registry+https://github.com/rust-lang/crates.io-index" +checksum = "e9775b22bc152ad86a0cf23f0f348b884b26add12bf741e7ffc4d4ab2ab4d205" +dependencies = [ + "base16ct 0.2.0", + "crypto-bigint 0.5.3", + "digest 0.10.7", + "ff 0.13.0", + "generic-array", + "group 0.13.0", + "pkcs8 0.10.2", + "rand_core 0.6.4", + "sec1 0.7.3", "subtle", "zeroize", ] @@ -2223,6 +2284,17 @@ dependencies = [ "version_check", ] +[[package]] +name = "etcetera" +version = "0.8.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "136d1b5283a1ab77bd9257427ffd09d8667ced0570b6f938942bc7568ed5b943" +dependencies = [ + "cfg-if 1.0.0", + "home", + "windows-sys 0.48.0", +] + [[package]] name = "ethabi" version = "18.0.0" @@ -2301,10 +2373,25 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0206175f82b8d6bf6652ff7d71a1e27fd2e4efde587fd368662814d6ec1d9ce0" [[package]] -name = "fake-simd" -version = "0.1.2" +name = "event-listener" +version = "4.0.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "770d968249b5d99410d61f5bf89057f3199a077a04d087092f58e7d10692baae" +dependencies = [ + "concurrent-queue", + "parking", + "pin-project-lite", +] + +[[package]] +name = "event-listener-strategy" +version = "0.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e88a8acf291dafb59c2d96e8f59828f3838bb1a70398823ade51a84de6a6deed" +checksum = "958e4d70b6d5e81971bebec42271ec641e7ff4e170a6fa605f2b8a8b65cb97d3" +dependencies = [ + "event-listener 4.0.0", + "pin-project-lite", +] [[package]] name = "fastrand" @@ -2331,6 +2418,16 @@ dependencies = [ "subtle", ] +[[package]] +name = "ff" +version = "0.13.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ded41244b729663b1e574f1b4fb731469f69f79c17667b5d776b16cda0479449" +dependencies = [ + "rand_core 0.6.4", + "subtle", +] + [[package]] name = "ff_ce" version = "0.14.3" @@ -2374,7 +2471,7 @@ dependencies = [ "cc", "lazy_static", "libc", - "winapi 0.3.9", + "winapi", ] [[package]] @@ -2429,6 +2526,17 @@ dependencies = [ "miniz_oxide", ] +[[package]] +name = "flume" +version = "0.11.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "55ac459de2512911e4b674ce33cf20befaba382d05b62b008afc1c8b57cbf181" +dependencies = [ + "futures-core", + "futures-sink", + "spin 0.9.8", +] + [[package]] name = "fnv" version = "1.0.7" @@ -2474,7 +2582,7 @@ dependencies = [ "digest 0.9.0", "hex", "indexmap 1.9.3", - "itertools", + "itertools 0.10.5", "lazy_static", "num-bigint 0.4.4", "num-derive 0.2.5", @@ -2506,7 +2614,7 @@ dependencies = [ "digest 0.9.0", "hex", "indexmap 1.9.3", - "itertools", + "itertools 0.10.5", "lazy_static", "num-bigint 0.4.4", "num-derive 0.2.5", @@ -2527,22 +2635,6 @@ version = "0.1.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a06f77d526c1a601b7c4cdd98f54b5eaabffc14d5f2f0296febdc7f357c6d3ba" -[[package]] -name = "fuchsia-zircon" -version = "0.3.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2e9763c69ebaae630ba35f74888db465e49e259ba1bc0eda7d06f4a067615d82" -dependencies = [ - "bitflags 1.3.2", - "fuchsia-zircon-sys", -] - -[[package]] -name = "fuchsia-zircon-sys" -version = "0.3.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3dcaa9ae7725d12cdb85b3ad99a434db70b468c09ded17e012d86b5c1010f7a7" - [[package]] name = "funty" version = "1.1.0" @@ -2606,13 +2698,13 @@ dependencies = [ [[package]] name = "futures-intrusive" 
-version = "0.4.2" +version = "0.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a604f7a68fbf8103337523b1fadc8ade7361ee3f112f7c680ad179651616aed5" +checksum = "1d930c203dd0b6ff06e0201a4a2fe9149b43c684fd4420555b26d21b1a02956f" dependencies = [ "futures-core", "lock_api", - "parking_lot 0.11.2", + "parking_lot", ] [[package]] @@ -2673,15 +2765,6 @@ dependencies = [ "slab", ] -[[package]] -name = "generic-array" -version = "0.12.4" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ffdf9f34f1447443d37393cc6c2b8313aebddcd96906caf34e54c68d8e57d7bd" -dependencies = [ - "typenum", -] - [[package]] name = "generic-array" version = "0.14.7" @@ -2690,6 +2773,7 @@ checksum = "85649ca51fd72272d7821adaf274ad91c288277713d9c18820d8499a7ff69e9a" dependencies = [ "typenum", "version_check", + "zeroize", ] [[package]] @@ -2720,7 +2804,7 @@ version = "0.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d930750de5717d2dd0b8c0d42c076c0e884c81a73e6cab859bbd2339c71e3e40" dependencies = [ - "opaque-debug 0.3.0", + "opaque-debug", "polyval", ] @@ -2736,24 +2820,11 @@ version = "0.3.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d2fabcfbdc87f4758337ca535fb41a6d701b65693ce38287d856d1674551ec9b" -[[package]] -name = "globset" -version = "0.4.13" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "759c97c1e17c55525b57192c06a267cda0ac5210b222d6b82189a2338fa1c13d" -dependencies = [ - "aho-corasick", - "bstr", - "fnv", - "log", - "regex", -] - [[package]] name = "gloo-net" -version = "0.3.1" +version = "0.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a66b4e3c7d9ed8d315fd6b97c8b1f74a7c6ecbbc2320e65ae7ed38b7068cc620" +checksum = "43aaa242d1239a8822c15c645f02166398da4f8b5c4bae795c1f5b44e9eee173" dependencies = [ "futures-channel", "futures-core", @@ -2784,9 +2855,9 @@ dependencies = [ [[package]] name = "gloo-utils" -version = "0.1.7" +version = "0.2.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "037fcb07216cb3a30f7292bd0176b050b7b9a052ba830ef7d5d65f6dc64ba58e" +checksum = "0b5555354113b18c547c1d3a98fbf7fb32a9ff4f6fa112ce823a21641a0ba3aa" dependencies = [ "js-sys", "serde", @@ -2797,9 +2868,9 @@ dependencies = [ [[package]] name = "google-cloud-auth" -version = "0.11.0" +version = "0.13.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "644f40175857d0b8d7b6cad6cd9594284da5041387fa2ddff30ab6d8faef65eb" +checksum = "af1087f1fbd2dd3f58c17c7574ddd99cd61cbbbc2c4dc81114b8687209b196cb" dependencies = [ "async-trait", "base64 0.21.5", @@ -2819,9 +2890,9 @@ dependencies = [ [[package]] name = "google-cloud-metadata" -version = "0.3.2" +version = "0.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "96e4ad0802d3f416f62e7ce01ac1460898ee0efc98f8b45cd4aab7611607012f" +checksum = "cc279bfb50487d7bcd900e8688406475fc750fe474a835b2ab9ade9eb1fc90e2" dependencies = [ "reqwest", "thiserror", @@ -2830,13 +2901,14 @@ dependencies = [ [[package]] name = "google-cloud-storage" -version = "0.12.0" +version = "0.15.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "215abab97e07d144428425509c1dad07e57ea72b84b21bcdb6a8a5f12a5c4932" +checksum = "ac04b29849ebdeb9fb008988cc1c4d1f0c9d121b4c7f1ddeb8061df124580e93" dependencies = [ "async-stream", + "async-trait", "base64 0.21.5", - "bytes 1.5.0", + "bytes", "futures-util", "google-cloud-auth", 
"google-cloud-metadata", @@ -2844,10 +2916,10 @@ dependencies = [ "hex", "once_cell", "percent-encoding", + "pkcs8 0.10.2", "regex", "reqwest", - "ring", - "rsa", + "ring 0.17.7", "serde", "serde_json", "sha2 0.10.8", @@ -2878,7 +2950,7 @@ dependencies = [ "futures-timer", "no-std-compat", "nonzero_ext", - "parking_lot 0.12.1", + "parking_lot", "quanta 0.9.3", "rand 0.8.5", "smallvec", @@ -2890,27 +2962,38 @@ version = "0.12.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5dfbfb3a6cfbd390d5c9564ab283a0349b9b9fcd46a706c1eb10e0db70bfbac7" dependencies = [ - "ff", + "ff 0.12.1", + "rand_core 0.6.4", + "subtle", +] + +[[package]] +name = "group" +version = "0.13.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f0f9ef7462f7c099f518d754361858f86d8a07af53ba9af0fe635bbccb151a63" +dependencies = [ + "ff 0.13.0", "rand_core 0.6.4", "subtle", ] [[package]] name = "h2" -version = "0.3.21" +version = "0.3.24" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "91fc23aa11be92976ef4729127f1a74adf36d8436f7816b185d18df956790833" +checksum = "bb2c4422095b67ee78da96fbb51a4cc413b3b25883c7717ff7ca1ab31022c9c9" dependencies = [ - "bytes 1.5.0", + "bytes", "fnv", "futures-core", "futures-sink", "futures-util", "http", - "indexmap 1.9.3", + "indexmap 2.1.0", "slab", "tokio", - "tokio-util 0.7.9", + "tokio-util", "tracing", ] @@ -2936,26 +3019,20 @@ dependencies = [ [[package]] name = "hashbrown" -version = "0.11.2" +version = "0.12.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ab5ef0d4909ef3724cc8cce6ccc8572c5c817592e9285f5464f8e86f8bd3726e" +checksum = "8a9ee70c43aaf417c914396645a0fa852624801b24ebb7ae78fe8272889ac888" dependencies = [ "ahash 0.7.7", ] -[[package]] -name = "hashbrown" -version = "0.12.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8a9ee70c43aaf417c914396645a0fa852624801b24ebb7ae78fe8272889ac888" - [[package]] name = "hashbrown" version = "0.13.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "33ff8ae62cd3a9102e5637afc8452c55acf3844001bd5374e0b0bd7b6616c038" dependencies = [ - "ahash 0.8.5", + "ahash 0.8.7", ] [[package]] @@ -2963,14 +3040,18 @@ name = "hashbrown" version = "0.14.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f93e7192158dbcda357bdec5fb5788eebf8bbac027f3f33e719d29135ae84156" +dependencies = [ + "ahash 0.8.7", + "allocator-api2", +] [[package]] name = "hashlink" -version = "0.7.0" +version = "0.8.4" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7249a3129cbc1ffccd74857f81464a323a152173cdb134e0fd81bc803b29facf" +checksum = "e8094feaf31ff591f651a2664fb9cfd92bba7a60ce3197265e9482ebe753c8f7" dependencies = [ - "hashbrown 0.11.2", + "hashbrown 0.14.2", ] [[package]] @@ -2990,7 +3071,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "06683b93020a07e3dbcf5f8c0f6d40080d725bea7936fc01ad345c01b97dc270" dependencies = [ "base64 0.21.5", - "bytes 1.5.0", + "bytes", "headers-core", "http", "httpdate", @@ -3061,7 +3142,7 @@ version = "0.10.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c1441c6b1e930e2817404b5046f1f989899143a12bf92de603b69f4e0aee1e15" dependencies = [ - "crypto-mac 0.10.1", + "crypto-mac 0.10.0", "digest 0.9.0", ] @@ -3091,7 +3172,7 @@ checksum = "3c731c3e10504cc8ed35cfe2f1db4c9274c3d35fa486e3b31df46f068ef3e867" dependencies = [ "libc", "match_cfg", - "winapi 0.3.9", 
+ "winapi", ] [[package]] @@ -3100,7 +3181,7 @@ version = "0.2.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "bd6effc99afb63425aff9b05836f029929e345a6148a14b7ecd5ab67af944482" dependencies = [ - "bytes 1.5.0", + "bytes", "fnv", "itoa", ] @@ -3111,7 +3192,7 @@ version = "0.4.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d5f38f16d184e36f2408a55281cd658ecbd3ca05cce6d6510a176eca393e26d1" dependencies = [ - "bytes 1.5.0", + "bytes", "http", "pin-project-lite", ] @@ -3146,7 +3227,7 @@ version = "0.14.27" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ffb1cfd654a8219eaef89881fdb3bb3b1cdc5fa75ded05d6933b2b382e395468" dependencies = [ - "bytes 1.5.0", + "bytes", "futures-channel", "futures-core", "futures-util", @@ -3174,10 +3255,10 @@ dependencies = [ "http", "hyper", "log", - "rustls", - "rustls-native-certs", + "rustls 0.21.7", + "rustls-native-certs 0.6.3", "tokio", - "tokio-rustls", + "tokio-rustls 0.24.1", ] [[package]] @@ -3186,7 +3267,7 @@ version = "0.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d6183ddfa99b85da61a140bea0efc93fdf56ceaa041b37d553518030827f9905" dependencies = [ - "bytes 1.5.0", + "bytes", "hyper", "native-tls", "tokio", @@ -3334,7 +3415,7 @@ version = "0.1.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a0c10553d664a4d0bcff9f4215d0aac67a639cc68ef660840afe309b807bc9f5" dependencies = [ - "generic-array 0.14.7", + "generic-array", ] [[package]] @@ -3360,15 +3441,6 @@ dependencies = [ "cfg-if 1.0.0", ] -[[package]] -name = "iovec" -version = "0.1.4" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b2b3ea6ff95e175473f8ffe6a7eb7c00d054240321b84c57051175fe3c1e075e" -dependencies = [ - "libc", -] - [[package]] name = "ipnet" version = "2.9.0" @@ -3377,9 +3449,12 @@ checksum = "8f518f335dce6725a761382244631d86cf0ccb2863413590b31338feb467f9c3" [[package]] name = "ipnetwork" -version = "0.17.0" +version = "0.20.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "02c3eaab3ac0ede60ffa41add21970a7df7d91772c03383aac6c2c3d53cc716b" +checksum = "bf466541e9d546596ee94f9f69590f89473455f88372423e0008fc1a7daf100e" +dependencies = [ + "serde", +] [[package]] name = "iri-string" @@ -3411,6 +3486,15 @@ dependencies = [ "either", ] +[[package]] +name = "itertools" +version = "0.12.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "25db6b064527c5d482d0423354fcd07a89a2dfe07b67892e62411946db7f07b0" +dependencies = [ + "either", +] + [[package]] name = "itoa" version = "1.0.9" @@ -3435,20 +3519,6 @@ dependencies = [ "wasm-bindgen", ] -[[package]] -name = "jsonrpc-client-transports" -version = "18.0.0" -source = "git+https://github.com/matter-labs/jsonrpc.git?branch=master#12c53e3e20c09c2fb9966a4ef1b0ea63de172540" -dependencies = [ - "derive_more", - "futures 0.3.28", - "jsonrpc-core 18.0.0 (git+https://github.com/matter-labs/jsonrpc.git?branch=master)", - "jsonrpc-pubsub", - "log", - "serde", - "serde_json", -] - [[package]] name = "jsonrpc-core" version = "18.0.0" @@ -3464,105 +3534,11 @@ dependencies = [ "serde_json", ] -[[package]] -name = "jsonrpc-core" -version = "18.0.0" -source = "git+https://github.com/matter-labs/jsonrpc.git?branch=master#12c53e3e20c09c2fb9966a4ef1b0ea63de172540" -dependencies = [ - "futures 0.3.28", - "futures-executor", - "futures-util", - "log", - "serde", - "serde_derive", - "serde_json", -] - -[[package]] -name = 
"jsonrpc-core-client" -version = "18.0.0" -source = "git+https://github.com/matter-labs/jsonrpc.git?branch=master#12c53e3e20c09c2fb9966a4ef1b0ea63de172540" -dependencies = [ - "futures 0.3.28", - "jsonrpc-client-transports", -] - -[[package]] -name = "jsonrpc-derive" -version = "18.0.0" -source = "git+https://github.com/matter-labs/jsonrpc.git?branch=master#12c53e3e20c09c2fb9966a4ef1b0ea63de172540" -dependencies = [ - "proc-macro-crate 0.1.5", - "proc-macro2 1.0.69", - "quote 1.0.33", - "syn 1.0.109", -] - -[[package]] -name = "jsonrpc-http-server" -version = "18.0.0" -source = "git+https://github.com/matter-labs/jsonrpc.git?branch=master#12c53e3e20c09c2fb9966a4ef1b0ea63de172540" -dependencies = [ - "futures 0.3.28", - "hyper", - "jsonrpc-core 18.0.0 (git+https://github.com/matter-labs/jsonrpc.git?branch=master)", - "jsonrpc-server-utils", - "log", - "net2", - "parking_lot 0.11.2", - "unicase", -] - -[[package]] -name = "jsonrpc-pubsub" -version = "18.0.0" -source = "git+https://github.com/matter-labs/jsonrpc.git?branch=master#12c53e3e20c09c2fb9966a4ef1b0ea63de172540" -dependencies = [ - "futures 0.3.28", - "jsonrpc-core 18.0.0 (git+https://github.com/matter-labs/jsonrpc.git?branch=master)", - "lazy_static", - "log", - "parking_lot 0.11.2", - "rand 0.7.3", - "serde", -] - -[[package]] -name = "jsonrpc-server-utils" -version = "18.0.0" -source = "git+https://github.com/matter-labs/jsonrpc.git?branch=master#12c53e3e20c09c2fb9966a4ef1b0ea63de172540" -dependencies = [ - "bytes 1.5.0", - "futures 0.3.28", - "globset", - "jsonrpc-core 18.0.0 (git+https://github.com/matter-labs/jsonrpc.git?branch=master)", - "lazy_static", - "log", - "tokio", - "tokio-stream", - "tokio-util 0.6.10", - "unicase", -] - -[[package]] -name = "jsonrpc-ws-server" -version = "18.0.0" -source = "git+https://github.com/matter-labs/jsonrpc.git?branch=master#12c53e3e20c09c2fb9966a4ef1b0ea63de172540" -dependencies = [ - "futures 0.3.28", - "jsonrpc-core 18.0.0 (git+https://github.com/matter-labs/jsonrpc.git?branch=master)", - "jsonrpc-server-utils", - "log", - "parity-ws", - "parking_lot 0.11.2", - "slab", -] - [[package]] name = "jsonrpsee" -version = "0.19.0" +version = "0.21.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e5f3783308bddc49d0218307f66a09330c106fbd792c58bac5c8dc294fdd0f98" +checksum = "9579d0ca9fb30da026bac2f0f7d9576ec93489aeb7cd4971dd5b4617d82c79b2" dependencies = [ "jsonrpsee-client-transport", "jsonrpsee-core", @@ -3572,14 +3548,15 @@ dependencies = [ "jsonrpsee-types", "jsonrpsee-wasm-client", "jsonrpsee-ws-client", + "tokio", "tracing", ] [[package]] name = "jsonrpsee-client-transport" -version = "0.19.0" +version = "0.21.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "abc5630e4fa0096f00ec7b44d520701fda4504170cb85e22dca603ae5d7ad0d7" +checksum = "3f9f9ed46590a8d5681975f126e22531698211b926129a40a2db47cbca429220" dependencies = [ "futures-channel", "futures-util", @@ -3587,21 +3564,23 @@ dependencies = [ "http", "jsonrpsee-core", "pin-project", - "rustls-native-certs", + "rustls-native-certs 0.7.0", + "rustls-pki-types", "soketto", "thiserror", "tokio", - "tokio-rustls", - "tokio-util 0.7.9", + "tokio-rustls 0.25.0", + "tokio-util", "tracing", - "webpki-roots 0.24.0", + "url", + "webpki-roots 0.26.0", ] [[package]] name = "jsonrpsee-core" -version = "0.19.0" +version = "0.21.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5aaa4c4d5fb801dcc316d81f76422db259809037a86b3194ae538dd026b05ed7" +checksum = 
"776d009e2f591b78c038e0d053a796f94575d66ca4e77dd84bfc5e81419e436c" dependencies = [ "anyhow", "async-lock", @@ -3609,15 +3588,14 @@ dependencies = [ "beef", "futures-timer", "futures-util", - "globset", "hyper", "jsonrpsee-types", - "parking_lot 0.12.1", + "parking_lot", + "pin-project", "rand 0.8.5", "rustc-hash", "serde", "serde_json", - "soketto", "thiserror", "tokio", "tokio-stream", @@ -3627,9 +3605,9 @@ dependencies = [ [[package]] name = "jsonrpsee-http-client" -version = "0.19.0" +version = "0.21.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "aa7165efcbfbc951d180162ff28fe91b657ed81925e37a35e4a396ce12109f96" +checksum = "78b7de9f3219d95985eb77fd03194d7c1b56c19bce1abfcc9d07462574b15572" dependencies = [ "async-trait", "hyper", @@ -3642,16 +3620,17 @@ dependencies = [ "tokio", "tower", "tracing", + "url", ] [[package]] name = "jsonrpsee-proc-macros" -version = "0.19.0" +version = "0.21.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "21dc12b1d4f16a86e8c522823c4fab219c88c03eb7c924ec0501a64bf12e058b" +checksum = "d94b7505034e2737e688e1153bf81e6f93ad296695c43958d6da2e4321f0a990" dependencies = [ "heck 0.4.1", - "proc-macro-crate 1.3.1", + "proc-macro-crate 2.0.1", "proc-macro2 1.0.69", "quote 1.0.33", "syn 1.0.109", @@ -3659,43 +3638,46 @@ dependencies = [ [[package]] name = "jsonrpsee-server" -version = "0.19.0" +version = "0.21.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "6e79d78cfd5abd8394da10753723093c3ff64391602941c9c4b1d80a3414fd53" +checksum = "5cc7c6d1a2c58f6135810284a390d9f823d0f508db74cd914d8237802de80f98" dependencies = [ "futures-util", + "http", "hyper", "jsonrpsee-core", "jsonrpsee-types", + "pin-project", + "route-recognizer", "serde", "serde_json", "soketto", + "thiserror", "tokio", "tokio-stream", - "tokio-util 0.7.9", + "tokio-util", "tower", "tracing", ] [[package]] name = "jsonrpsee-types" -version = "0.19.0" +version = "0.21.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "00aa7cc87bc42e04e26c8ac3e7186142f7fd2949c763d9b6a7e64a69672d8fb2" +checksum = "3266dfb045c9174b24c77c2dfe0084914bb23a6b2597d70c9dc6018392e1cd1b" dependencies = [ "anyhow", "beef", "serde", "serde_json", "thiserror", - "tracing", ] [[package]] name = "jsonrpsee-wasm-client" -version = "0.19.0" +version = "0.21.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "0fe953c2801356f214d3f4051f786b3d11134512a46763ee8c39a9e3fa2cc1c0" +checksum = "30f36d27503d0efc0355c1630b74ecfb367050847bf7241a0ed75fab6dfa96c0" dependencies = [ "jsonrpsee-client-transport", "jsonrpsee-core", @@ -3704,14 +3686,15 @@ dependencies = [ [[package]] name = "jsonrpsee-ws-client" -version = "0.19.0" +version = "0.21.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5c71b2597ec1c958c6d5bc94bb61b44d74eb28e69dc421731ab0035706f13882" +checksum = "073c077471e89c4b511fa88b3df9a0f0abdf4a0a2e6683dd2ab36893af87bb2d" dependencies = [ "http", "jsonrpsee-client-transport", "jsonrpsee-core", "jsonrpsee-types", + "url", ] [[package]] @@ -3722,7 +3705,7 @@ checksum = "6971da4d9c3aa03c3d8f3ff0f4155b534aad021292003895a469716b2a230378" dependencies = [ "base64 0.21.5", "pem", - "ring", + "ring 0.16.20", "serde", "serde_json", "simple_asn1", @@ -3735,28 +3718,32 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "72c1e0b51e7ec0a97369623508396067a486bd0cbed95a2659a4b863d28cfc8b" dependencies = [ "cfg-if 1.0.0", - "ecdsa", - 
"elliptic-curve", + "ecdsa 0.14.8", + "elliptic-curve 0.12.3", "sha2 0.10.8", ] [[package]] -name = "keccak" -version = "0.1.4" +name = "k256" +version = "0.13.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8f6d5ed8676d904364de097082f4e7d240b571b67989ced0240f08b7f966f940" +checksum = "3f01b677d82ef7a676aa37e099defd83a28e15687112cafdd112d60236b6115b" dependencies = [ - "cpufeatures", + "cfg-if 1.0.0", + "ecdsa 0.16.9", + "elliptic-curve 0.13.7", + "once_cell", + "sha2 0.10.8", + "signature 2.2.0", ] [[package]] -name = "kernel32-sys" -version = "0.2.2" +name = "keccak" +version = "0.1.4" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7507624b29483431c0ba2d82aece8ca6cdba9382bff4ddd0f7490560c056098d" +checksum = "8f6d5ed8676d904364de097082f4e7d240b571b67989ced0240f08b7f966f940" dependencies = [ - "winapi 0.2.8", - "winapi-build", + "cpufeatures", ] [[package]] @@ -3771,7 +3758,7 @@ version = "1.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e2abad23fbc42b3700f2f279844dc832adb2b2eb069b2df918f455c4e18cc646" dependencies = [ - "spin", + "spin 0.5.2", ] [[package]] @@ -3799,7 +3786,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b67380fd3b2fbe7527a606e18729d21c6f3951633d0500574c4dc22d2d638b9f" dependencies = [ "cfg-if 1.0.0", - "winapi 0.3.9", + "winapi", ] [[package]] @@ -3822,6 +3809,17 @@ dependencies = [ "libz-sys", ] +[[package]] +name = "libsqlite3-sys" +version = "0.27.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "cf4e226dcd58b4be396f7bd3c20da8fdee2911400705297ba7d2d7cc2c30f716" +dependencies = [ + "cc", + "pkg-config", + "vcpkg", +] + [[package]] name = "libz-sys" version = "1.1.12" @@ -3829,6 +3827,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d97137b25e321a73eef1418d1d5d2eda4d77e12813f8e6dead84bc52c5870a7b" dependencies = [ "cc", + "libc", "pkg-config", "vcpkg", ] @@ -3964,6 +3963,12 @@ dependencies = [ "logos-codegen", ] +[[package]] +name = "lru" +version = "0.12.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2994eeba8ed550fd9b47a0b38f0242bc3344e496483c6180b69139cc2fa5d1d7" + [[package]] name = "mach" version = "0.3.2" @@ -4064,7 +4069,7 @@ version = "0.21.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "fde3af1a009ed76a778cb84fdef9e7dbbdf5775ae3e4cc1f434a6a307f6f76c5" dependencies = [ - "ahash 0.8.5", + "ahash 0.8.7", "metrics-macros", "portable-atomic", ] @@ -4182,25 +4187,6 @@ dependencies = [ "adler", ] -[[package]] -name = "mio" -version = "0.6.23" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4afd66f5b91bf2a3bc13fad0e21caedac168ca4c707504e75585648ae80e4cc4" -dependencies = [ - "cfg-if 0.1.10", - "fuchsia-zircon", - "fuchsia-zircon-sys", - "iovec", - "kernel32-sys", - "libc", - "log", - "miow", - "net2", - "slab", - "winapi 0.2.8", -] - [[package]] name = "mio" version = "0.8.9" @@ -4213,30 +4199,6 @@ dependencies = [ "windows-sys 0.48.0", ] -[[package]] -name = "mio-extras" -version = "2.0.6" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "52403fe290012ce777c4626790c8951324a2b9e3316b3143779c72b029742f19" -dependencies = [ - "lazycell", - "log", - "mio 0.6.23", - "slab", -] - -[[package]] -name = "miow" -version = "0.2.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = 
"ebd808424166322d4a38da87083bfddd3ac4c131334ed55856112eb06d46944d" -dependencies = [ - "kernel32-sys", - "net2", - "winapi 0.2.8", - "ws2_32-sys", -] - [[package]] name = "multimap" version = "0.8.3" @@ -4250,7 +4212,7 @@ dependencies = [ "anyhow", "ethabi", "hex", - "itertools", + "itertools 0.10.5", "once_cell", "thiserror", "tokio", @@ -4259,6 +4221,9 @@ dependencies = [ "zk_evm 1.3.1", "zk_evm 1.3.3 (git+https://github.com/matter-labs/era-zk_evm.git?tag=v1.3.3-rc2)", "zk_evm 1.4.0", + "zk_evm 1.4.1", + "zkevm_test_harness 1.4.0", + "zkevm_test_harness 1.4.1", "zksync_contracts", "zksync_eth_signer", "zksync_state", @@ -4286,17 +4251,6 @@ dependencies = [ "tempfile", ] -[[package]] -name = "net2" -version = "0.2.39" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b13b648036a2339d06de780866fbdfda0dde886de7b3af2ddeba8b14f4ee34ac" -dependencies = [ - "cfg-if 0.1.10", - "libc", - "winapi 0.3.9", -] - [[package]] name = "nix" version = "0.27.1" @@ -4343,7 +4297,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "77a8165726e8236064dbb45459242600304b42a5ea24ee2948e18e023bf7ba84" dependencies = [ "overload", - "winapi 0.3.9", + "winapi", ] [[package]] @@ -4432,6 +4386,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1ba157ca0885411de85d6ca030ba7e2a83a28636056c7c699b07c8b6f7383214" dependencies = [ "num-traits", + "serde", ] [[package]] @@ -4510,6 +4465,7 @@ dependencies = [ "num-bigint 0.4.4", "num-integer", "num-traits", + "serde", ] [[package]] @@ -4574,12 +4530,6 @@ version = "11.1.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0ab1bc2a289d34bd04a330323ac98a1b4bc82c9d9fcb1e66b63caa84da26b575" -[[package]] -name = "opaque-debug" -version = "0.2.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2839e79665f131bdb5782e51f2c6c9599c133c6098982a54c794358bf432529c" - [[package]] name = "opaque-debug" version = "0.3.0" @@ -4647,7 +4597,7 @@ checksum = "006e42d5b888366f1880eda20371fedde764ed2213dc8496f49622fa0c99cd5e" dependencies = [ "log", "serde", - "winapi 0.3.9", + "winapi", ] [[package]] @@ -4688,7 +4638,7 @@ dependencies = [ [[package]] name = "pairing_ce" version = "0.28.5" -source = "git+https://github.com/matter-labs/pairing.git?rev=f55393f#f55393fd366596eac792d78525d26e9c4d6ed1ca" +source = "git+https://github.com/matter-labs/pairing.git?rev=f55393fd366596eac792d78525d26e9c4d6ed1ca#f55393fd366596eac792d78525d26e9c4d6ed1ca" dependencies = [ "byteorder", "cfg-if 1.0.0", @@ -4786,33 +4736,10 @@ dependencies = [ ] [[package]] -name = "parity-ws" -version = "0.11.1" +name = "parking" +version = "2.2.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5983d3929ad50f12c3eb9a6743f19d691866ecd44da74c0a3308c3f8a56df0c6" -dependencies = [ - "byteorder", - "bytes 0.4.12", - "httparse", - "log", - "mio 0.6.23", - "mio-extras", - "rand 0.7.3", - "sha-1 0.8.2", - "slab", - "url", -] - -[[package]] -name = "parking_lot" -version = "0.11.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7d17b78036a60663b797adeaee46f5c9dfebb86948d1255007a1d6be0271ff99" -dependencies = [ - "instant", - "lock_api", - "parking_lot_core 0.8.6", -] +checksum = "bb813b8af86854136c6922af0598d719255ecb2179515e6e7730d468f05c9cae" [[package]] name = "parking_lot" @@ -4821,21 +4748,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = 
"3742b2c103b9f06bc9fff0a37ff4912935851bee6d36f3c02bcc755bcfec228f" dependencies = [ "lock_api", - "parking_lot_core 0.9.9", -] - -[[package]] -name = "parking_lot_core" -version = "0.8.6" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "60a2cfe6f0ad2bfc16aefa463b497d5c7a5ecd44a23efa72aa342d90177356dc" -dependencies = [ - "cfg-if 1.0.0", - "instant", - "libc", - "redox_syscall 0.2.16", - "smallvec", - "winapi 0.3.9", + "parking_lot_core", ] [[package]] @@ -4873,7 +4786,7 @@ version = "0.6.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b3b8c0d71734018084da0c0354193a5edfb81b20d2d57a92c5b154aefc554a4a" dependencies = [ - "crypto-mac 0.10.1", + "crypto-mac 0.10.0", ] [[package]] @@ -4883,7 +4796,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "bf916dd32dd26297907890d99dc2740e33f6bd9073965af4ccff2967962f5508" dependencies = [ "base64ct", - "crypto-mac 0.10.1", + "crypto-mac 0.10.0", "hmac 0.10.1", "password-hash", "sha2 0.9.9", @@ -4906,9 +4819,9 @@ dependencies = [ [[package]] name = "pem-rfc7468" -version = "0.3.1" +version = "0.7.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "01de5d978f34aa4b2296576379fcc416034702fd94117c56ffd8a1a767cefb30" +checksum = "88b39c9bfcfc231068454382784bb460aae594343fb030d46e9f50a645418412" dependencies = [ "base64ct", ] @@ -5008,24 +4921,13 @@ checksum = "8b870d8c151b6f2fb93e84a13146138f05d02ed11c7e7c54f8826aaaf7c9f184" [[package]] name = "pkcs1" -version = "0.3.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a78f66c04ccc83dd4486fd46c33896f4e17b24a7a3a6400dedc48ed0ddd72320" -dependencies = [ - "der 0.5.1", - "pkcs8 0.8.0", - "zeroize", -] - -[[package]] -name = "pkcs8" -version = "0.8.0" +version = "0.7.5" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7cabda3fb821068a9a4fab19a683eac3af12edf0f34b94a8be53c4972b8149d0" +checksum = "c8ffb9f10fa047879315e6625af03c164b16962a5368d724ed16323b68ace47f" dependencies = [ - "der 0.5.1", - "spki 0.5.4", - "zeroize", + "der 0.7.8", + "pkcs8 0.10.2", + "spki 0.7.2", ] [[package]] @@ -5095,7 +4997,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8159bd90725d2df49889a078b54f4f79e87f1f8a8444194cdca81d38f5393abf" dependencies = [ "cpufeatures", - "opaque-debug 0.3.0", + "opaque-debug", "universal-hash", ] @@ -5107,7 +5009,7 @@ checksum = "d52cff9d1d4dee5fe6d03729099f4a310a41179e0a10dbf542039873f2e826fb" dependencies = [ "cfg-if 1.0.0", "cpufeatures", - "opaque-debug 0.3.0", + "opaque-debug", "universal-hash", ] @@ -5167,21 +5069,22 @@ dependencies = [ [[package]] name = "proc-macro-crate" -version = "0.1.5" +version = "1.3.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1d6ea3c4595b96363c13943497db34af4460fb474a95c43f4446ad341b8c9785" +checksum = "7f4c021e1093a56626774e81216a4ce732a735e5bad4868a03f3ed65ca0c3919" dependencies = [ - "toml", + "once_cell", + "toml_edit 0.19.15", ] [[package]] name = "proc-macro-crate" -version = "1.3.1" +version = "2.0.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7f4c021e1093a56626774e81216a4ce732a735e5bad4868a03f3ed65ca0c3919" +checksum = "97dc5fea232fc28d2f597b37c4876b348a40e33f3b02cc975c8d006d78d94b1a" dependencies = [ - "once_cell", - "toml_edit 0.19.15", + "toml_datetime", + "toml_edit 0.20.2", ] [[package]] @@ -5240,7 +5143,7 @@ checksum = 
"3c99afa9a01501019ac3a14d71d9f94050346f55ca471ce90c799a15c58f61e2" dependencies = [ "dtoa", "itoa", - "parking_lot 0.12.1", + "parking_lot", "prometheus-client-derive-encode", ] @@ -5273,7 +5176,7 @@ version = "0.12.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f4fdd22f3b9c31b53c060df4a0613a1c7f062d4115a2b984dd15b1858f7e340d" dependencies = [ - "bytes 1.5.0", + "bytes", "prost-derive", ] @@ -5283,9 +5186,9 @@ version = "0.12.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8bdf592881d821b83d471f8af290226c8d51402259e9bb5be7f9f8bdebbb11ac" dependencies = [ - "bytes 1.5.0", + "bytes", "heck 0.4.1", - "itertools", + "itertools 0.10.5", "log", "multimap", "once_cell", @@ -5306,7 +5209,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "265baba7fabd416cf5078179f7d2cbeca4ce7a9041111900675ea7c4cb8a4c32" dependencies = [ "anyhow", - "itertools", + "itertools 0.10.5", "proc-macro2 1.0.69", "quote 1.0.33", "syn 2.0.38", @@ -5343,7 +5246,7 @@ version = "0.5.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "00bb76c5f6221de491fe2c8f39b106330bbd9762c6511119c07940e10eb9ff11" dependencies = [ - "bytes 1.5.0", + "bytes", "miette", "prost", "prost-reflect", @@ -5364,6 +5267,26 @@ dependencies = [ "thiserror", ] +[[package]] +name = "ptr_meta" +version = "0.1.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0738ccf7ea06b608c10564b31debd4f5bc5e197fc8bfe088f68ae5ce81e7a4f1" +dependencies = [ + "ptr_meta_derive", +] + +[[package]] +name = "ptr_meta_derive" +version = "0.1.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "16b845dbfca988fa33db069c0e230574d15a3088f147a87b64c7589eb662c9ac" +dependencies = [ + "proc-macro2 1.0.69", + "quote 1.0.33", + "syn 1.0.109", +] + [[package]] name = "pulldown-cmark" version = "0.9.3" @@ -5388,7 +5311,7 @@ dependencies = [ "raw-cpuid", "wasi 0.10.2+wasi-snapshot-preview1", "web-sys", - "winapi 0.3.9", + "winapi", ] [[package]] @@ -5404,7 +5327,7 @@ dependencies = [ "raw-cpuid", "wasi 0.11.0+wasi-snapshot-preview1", "web-sys", - "winapi 0.3.9", + "winapi", ] [[package]] @@ -5456,7 +5379,7 @@ dependencies = [ "libc", "rand_core 0.3.1", "rdrand", - "winapi 0.3.9", + "winapi", ] [[package]] @@ -5475,7 +5398,7 @@ dependencies = [ "rand_os", "rand_pcg", "rand_xorshift", - "winapi 0.3.9", + "winapi", ] [[package]] @@ -5484,8 +5407,6 @@ version = "0.7.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6a6b1679d49b24bbfe0c803429aa1874472f50d9b363131f0e89fc356b544d03" dependencies = [ - "getrandom 0.1.16", - "libc", "rand_chacha 0.2.2", "rand_core 0.5.1", "rand_hc 0.2.0", @@ -5600,7 +5521,7 @@ checksum = "1166d5c91dc97b88d1decc3285bb0a99ed84b05cfd0bc2341bdf2d43fc41e39b" dependencies = [ "libc", "rand_core 0.4.2", - "winapi 0.3.9", + "winapi", ] [[package]] @@ -5614,7 +5535,7 @@ dependencies = [ "libc", "rand_core 0.4.2", "rdrand", - "winapi 0.3.9", + "winapi", ] [[package]] @@ -5683,15 +5604,6 @@ dependencies = [ "rand_core 0.3.1", ] -[[package]] -name = "redox_syscall" -version = "0.2.16" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "fb5a58c1855b4b6819d59012155603f0b22ad30cad752600aadfcb695265519a" -dependencies = [ - "bitflags 1.3.2", -] - [[package]] name = "redox_syscall" version = "0.3.5" @@ -5710,17 +5622,6 @@ dependencies = [ "bitflags 1.3.2", ] -[[package]] -name = "redox_users" -version = "0.4.3" -source = 
"registry+https://github.com/rust-lang/crates.io-index" -checksum = "b033d837a7cf162d7993aded9304e30a83213c648b6e389db233191f891e5c2b" -dependencies = [ - "getrandom 0.2.10", - "redox_syscall 0.2.16", - "thiserror", -] - [[package]] name = "regex" version = "1.10.2" @@ -5771,7 +5672,16 @@ version = "0.5.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3acd125665422973a33ac9d3dd2df85edad0f4ae9b00dafb1a05e43a9f5ef8e7" dependencies = [ - "winapi 0.3.9", + "winapi", +] + +[[package]] +name = "rend" +version = "0.4.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a2571463863a6bd50c32f94402933f03457a3fbaf697a707c5be741e459f08fd" +dependencies = [ + "bytecheck", ] [[package]] @@ -5781,7 +5691,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "046cd98826c46c2ac8ddecae268eb5c2e58628688a5fc7a2643704a73faba95b" dependencies = [ "base64 0.21.5", - "bytes 1.5.0", + "bytes", "encoding_rs", "futures-core", "futures-util", @@ -5800,16 +5710,16 @@ dependencies = [ "once_cell", "percent-encoding", "pin-project-lite", - "rustls", - "rustls-pemfile", + "rustls 0.21.7", + "rustls-pemfile 1.0.3", "serde", "serde_json", "serde_urlencoded", "system-configuration", "tokio", "tokio-native-tls", - "tokio-rustls", - "tokio-util 0.7.9", + "tokio-rustls 0.24.1", + "tokio-util", "tower-service", "url", "wasm-bindgen", @@ -5823,7 +5733,7 @@ dependencies = [ [[package]] name = "rescue_poseidon" version = "0.4.1" -source = "git+https://github.com/matter-labs/rescue-poseidon.git?branch=poseidon2#c4a788471710bdb7aa0f59e8756b45ef93cdd2b2" +source = "git+https://github.com/matter-labs/rescue-poseidon.git?branch=poseidon2#2e5e8afb152adc326fcf776a71ad3735fa7f3186" dependencies = [ "addchain", "arrayvec 0.7.4", @@ -5873,6 +5783,16 @@ dependencies = [ "zeroize", ] +[[package]] +name = "rfc6979" +version = "0.4.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f8dd2a808d456c4a54e300a23e9f5a67e122c3024119acbfd73e3bf664491cb2" +dependencies = [ + "hmac 0.12.1", + "subtle", +] + [[package]] name = "ring" version = "0.16.20" @@ -5882,10 +5802,24 @@ dependencies = [ "cc", "libc", "once_cell", - "spin", - "untrusted", + "spin 0.5.2", + "untrusted 0.7.1", "web-sys", - "winapi 0.3.9", + "winapi", +] + +[[package]] +name = "ring" +version = "0.17.7" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "688c63d65483050968b2a8937f7995f443e27041a0f7700aa59b0822aedebb74" +dependencies = [ + "cc", + "getrandom 0.2.10", + "libc", + "spin 0.9.8", + "untrusted 0.9.0", + "windows-sys 0.48.0", ] [[package]] @@ -5896,7 +5830,36 @@ checksum = "2eca4ecc81b7f313189bf73ce724400a07da2a6dac19588b03c8bd76a2dcc251" dependencies = [ "block-buffer 0.9.0", "digest 0.9.0", - "opaque-debug 0.3.0", + "opaque-debug", +] + +[[package]] +name = "rkyv" +version = "0.7.43" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "527a97cdfef66f65998b5f3b637c26f5a5ec09cc52a3f9932313ac645f4190f5" +dependencies = [ + "bitvec 1.0.1", + "bytecheck", + "bytes", + "hashbrown 0.12.3", + "ptr_meta", + "rend", + "rkyv_derive", + "seahash", + "tinyvec", + "uuid", +] + +[[package]] +name = "rkyv_derive" +version = "0.7.43" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b5c462a1328c8e67e4d6dbad1eb0355dd43e8ab432c6e227a43657f16ade5033" +dependencies = [ + "proc-macro2 1.0.69", + "quote 1.0.33", + "syn 1.0.109", ] [[package]] @@ -5905,7 +5868,7 @@ version = "0.5.2" source = 
"registry+https://github.com/rust-lang/crates.io-index" checksum = "bb919243f34364b6bd2fc10ef797edbfa75f33c252e7998527479c6d6b47e1ec" dependencies = [ - "bytes 1.5.0", + "bytes", "rustc-hex", ] @@ -5920,37 +5883,47 @@ dependencies = [ ] [[package]] -name = "rocksdb_util" -version = "0.1.0" -dependencies = [ - "anyhow", - "clap 4.4.6", - "tempfile", - "zksync_config", - "zksync_env_config", - "zksync_storage", -] +name = "route-recognizer" +version = "0.3.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "afab94fb28594581f62d981211a9a4d53cc8130bbcbbb89a0440d9b8e81a7746" [[package]] name = "rsa" -version = "0.6.1" +version = "0.9.5" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4cf22754c49613d2b3b119f0e5d46e34a2c628a937e3024b8762de4e7d8c710b" +checksum = "af6c4b23d99685a1408194da11270ef8e9809aff951cc70ec9b17350b087e474" dependencies = [ - "byteorder", + "const-oid", "digest 0.10.7", "num-bigint-dig", "num-integer", - "num-iter", "num-traits", "pkcs1", - "pkcs8 0.8.0", + "pkcs8 0.10.2", "rand_core 0.6.4", - "smallvec", + "signature 2.2.0", + "spki 0.7.2", "subtle", "zeroize", ] +[[package]] +name = "rust_decimal" +version = "1.33.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "06676aec5ccb8fc1da723cc8c0f9a46549f21ebb8753d3915c6c41db1e7f1dc4" +dependencies = [ + "arrayvec 0.7.4", + "borsh", + "bytes", + "num-traits", + "rand 0.8.5", + "rkyv", + "serde", + "serde_json", +] + [[package]] name = "rustc-demangle" version = "0.1.23" @@ -5998,11 +5971,25 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "cd8d6c9f025a446bc4d18ad9632e69aec8f287aa84499ee335599fabd20c3fd8" dependencies = [ "log", - "ring", - "rustls-webpki", + "ring 0.16.20", + "rustls-webpki 0.101.6", "sct", ] +[[package]] +name = "rustls" +version = "0.22.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "fe6b63262c9fcac8659abfaa96cac103d28166d3ff3eaf8f412e19f3ae9e5a48" +dependencies = [ + "log", + "ring 0.17.7", + "rustls-pki-types", + "rustls-webpki 0.102.0", + "subtle", + "zeroize", +] + [[package]] name = "rustls-native-certs" version = "0.6.3" @@ -6010,7 +5997,20 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a9aace74cb666635c918e9c12bc0d348266037aa8eb599b5cba565709a8dff00" dependencies = [ "openssl-probe", - "rustls-pemfile", + "rustls-pemfile 1.0.3", + "schannel", + "security-framework", +] + +[[package]] +name = "rustls-native-certs" +version = "0.7.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8f1fb85efa936c42c6d5fc28d2629bb51e4b2f4b8a5211e297d599cc5a093792" +dependencies = [ + "openssl-probe", + "rustls-pemfile 2.0.0", + "rustls-pki-types", "schannel", "security-framework", ] @@ -6024,14 +6024,41 @@ dependencies = [ "base64 0.21.5", ] +[[package]] +name = "rustls-pemfile" +version = "2.0.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "35e4980fa29e4c4b212ffb3db068a564cbf560e51d3944b7c88bd8bf5bec64f4" +dependencies = [ + "base64 0.21.5", + "rustls-pki-types", +] + +[[package]] +name = "rustls-pki-types" +version = "1.0.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e7673e0aa20ee4937c6aacfc12bb8341cfbf054cdd21df6bec5fd0629fe9339b" + [[package]] name = "rustls-webpki" version = "0.101.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3c7d5dece342910d9ba34d259310cae3e0154b873b35408b787b59bce53d34fe" 
dependencies = [ - "ring", - "untrusted", + "ring 0.16.20", + "untrusted 0.7.1", +] + +[[package]] +name = "rustls-webpki" +version = "0.102.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "de2635c8bc2b88d367767c5de8ea1d8db9af3f6219eba28442242d9ab81d1b89" +dependencies = [ + "ring 0.17.7", + "rustls-pki-types", + "untrusted 0.9.0", ] [[package]] @@ -6101,24 +6128,44 @@ version = "0.7.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d53dcdb7c9f8158937a7981b48accfd39a43af418591a5d008c7b22b5e1b7ca4" dependencies = [ - "ring", - "untrusted", + "ring 0.16.20", + "untrusted 0.7.1", ] +[[package]] +name = "seahash" +version = "4.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "1c107b6f4780854c8b126e228ea8869f4d7b71260f962fefb57b996b8959ba6b" + [[package]] name = "sec1" version = "0.3.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3be24c1842290c45df0a7bf069e0c268a747ad05a192f2fd7dcfdbc1cba40928" dependencies = [ - "base16ct", + "base16ct 0.1.1", "der 0.6.1", - "generic-array 0.14.7", + "generic-array", "pkcs8 0.9.0", "subtle", "zeroize", ] +[[package]] +name = "sec1" +version = "0.7.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d3e97a565f76233a6003f9f5c54be1d9c5bdfa3eccfb189469f11ec4901c47dc" +dependencies = [ + "base16ct 0.2.0", + "der 0.7.8", + "generic-array", + "pkcs8 0.10.2", + "subtle", + "zeroize", +] + [[package]] name = "secp256k1" version = "0.20.3" @@ -6302,6 +6349,12 @@ dependencies = [ "uuid", ] +[[package]] +name = "seq-macro" +version = "0.3.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a3f0bf26fd526d2a95683cd0f87bf103b8539e2ca1ef48ce002d67aad59aa0b4" + [[package]] name = "serde" version = "1.0.189" @@ -6389,18 +6442,6 @@ dependencies = [ "syn 1.0.109", ] -[[package]] -name = "sha-1" -version = "0.8.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f7d94d0bede923b3cea61f3f1ff57ff8cdfd77b400fb8f9998949e0cf04163df" -dependencies = [ - "block-buffer 0.7.3", - "digest 0.8.1", - "fake-simd", - "opaque-debug 0.2.3", -] - [[package]] name = "sha-1" version = "0.9.8" @@ -6411,18 +6452,7 @@ dependencies = [ "cfg-if 1.0.0", "cpufeatures", "digest 0.9.0", - "opaque-debug 0.3.0", -] - -[[package]] -name = "sha-1" -version = "0.10.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f5058ada175748e33390e40e872bd0fe59a19f265d0158daa551c5a88a76009c" -dependencies = [ - "cfg-if 1.0.0", - "cpufeatures", - "digest 0.10.7", + "opaque-debug", ] [[package]] @@ -6446,7 +6476,7 @@ dependencies = [ "cfg-if 1.0.0", "cpufeatures", "digest 0.9.0", - "opaque-debug 0.3.0", + "opaque-debug", ] [[package]] @@ -6479,7 +6509,7 @@ dependencies = [ "block-buffer 0.9.0", "digest 0.9.0", "keccak", - "opaque-debug 0.3.0", + "opaque-debug", ] [[package]] @@ -6541,9 +6571,16 @@ version = "2.2.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "77549399552de45a898a580c1b41d445bf730df867cc44e6c0233bbc4b8329de" dependencies = [ + "digest 0.10.7", "rand_core 0.6.4", ] +[[package]] +name = "simdutf8" +version = "0.1.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f27f6278552951f1f2b8cf9da965d10969b2efdea95a6ec47987ab46edfe263a" + [[package]] name = "similar" version = "2.3.0" @@ -6611,6 +6648,26 @@ dependencies = [ "serde", ] +[[package]] +name = "snapshots_creator" +version = "0.1.0" +dependencies = [ + 
"anyhow", + "futures 0.3.28", + "prometheus_exporter", + "rand 0.8.5", + "tokio", + "tracing", + "vise", + "vlog", + "zksync_config", + "zksync_dal", + "zksync_env_config", + "zksync_object_store", + "zksync_types", + "zksync_utils", +] + [[package]] name = "snark_wrapper" version = "0.1.0" @@ -6644,7 +6701,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9f7916fc008ca5542385b89a3d3ce689953c143e9304a9bf8beec1de48994c0d" dependencies = [ "libc", - "winapi 0.3.9", + "winapi", ] [[package]] @@ -6664,13 +6721,13 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "41d1c5305e39e09653383c2c7244f2f78b3bcae37cf50c64cb4789c9f5096ec2" dependencies = [ "base64 0.13.1", - "bytes 1.5.0", + "bytes", "futures 0.3.28", "http", "httparse", "log", "rand 0.8.5", - "sha-1 0.9.8", + "sha-1", ] [[package]] @@ -6680,13 +6737,12 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6e63cff320ae2c57904679ba7cb63280a3dc4613885beafb148ee7bf9aa9042d" [[package]] -name = "spki" -version = "0.5.4" +name = "spin" +version = "0.9.8" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "44d01ac02a6ccf3e07db148d2be087da624fea0221a16152ed01f0496a6b0a27" +checksum = "6980e8d7511241f8acf4aebddbb1ff938df5eebe98691418c4468d0b72a96a67" dependencies = [ - "base64ct", - "der 0.5.1", + "lock_api", ] [[package]] @@ -6717,85 +6773,94 @@ checksum = "c85070f382340e8b23a75808e83573ddf65f9ad9143df9573ca37c1ed2ee956a" [[package]] name = "sqlformat" -version = "0.1.8" +version = "0.2.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b4b7922be017ee70900be125523f38bdd644f4f06a1b16e8fa5a8ee8c34bffd4" +checksum = "ce81b7bd7c4493975347ef60d8c7e8b742d4694f4c49f93e0a12ea263938176c" dependencies = [ - "itertools", + "itertools 0.12.0", "nom", "unicode_categories", ] [[package]] name = "sqlx" -version = "0.5.13" +version = "0.7.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "551873805652ba0d912fec5bbb0f8b4cdd96baf8e2ebf5970e5671092966019b" +checksum = "dba03c279da73694ef99763320dea58b51095dfe87d001b1d4b5fe78ba8763cf" dependencies = [ "sqlx-core", "sqlx-macros", + "sqlx-mysql", + "sqlx-postgres", + "sqlx-sqlite", ] [[package]] name = "sqlx-core" -version = "0.5.13" +version = "0.7.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e48c61941ccf5ddcada342cd59e3e5173b007c509e1e8e990dafc830294d9dc5" +checksum = "d84b0a3c3739e220d94b3239fd69fb1f74bc36e16643423bd99de3b43c21bfbd" dependencies = [ - "ahash 0.7.7", + "ahash 0.8.7", "atoi", - "base64 0.13.1", "bigdecimal", - "bitflags 1.3.2", "byteorder", - "bytes 1.5.0", + "bytes", "chrono", "crc", "crossbeam-queue 0.3.8", - "dirs", + "dotenvy", "either", - "event-listener", + "event-listener 2.5.3", "futures-channel", "futures-core", "futures-intrusive", + "futures-io", "futures-util", "hashlink", "hex", - "hkdf", - "hmac 0.12.1", - "indexmap 1.9.3", + "indexmap 2.1.0", "ipnetwork", - "itoa", - "libc", "log", - "md-5", "memchr", - "num-bigint 0.3.3", + "native-tls", "once_cell", "paste", "percent-encoding", - "rand 0.8.5", + "rust_decimal", "serde", "serde_json", - "sha-1 0.10.1", "sha2 0.10.8", "smallvec", "sqlformat", - "sqlx-rt", - "stringprep", "thiserror", + "tokio", "tokio-stream", + "tracing", "url", - "whoami", ] [[package]] name = "sqlx-macros" -version = "0.5.13" +version = "0.7.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = 
"bc0fba2b0cae21fc00fe6046f8baa4c7fcb49e379f0f592b04696607f69ed2e1" +checksum = "89961c00dc4d7dffb7aee214964b065072bff69e36ddb9e2c107541f75e4f2a5" dependencies = [ - "dotenv", + "proc-macro2 1.0.69", + "quote 1.0.33", + "sqlx-core", + "sqlx-macros-core", + "syn 1.0.109", +] + +[[package]] +name = "sqlx-macros-core" +version = "0.7.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d0bd4519486723648186a08785143599760f7cc81c52334a55d6a83ea1e20841" +dependencies = [ + "atomic-write-file", + "dotenvy", "either", "heck 0.4.1", "hex", @@ -6806,21 +6871,126 @@ dependencies = [ "serde_json", "sha2 0.10.8", "sqlx-core", - "sqlx-rt", + "sqlx-mysql", + "sqlx-postgres", + "sqlx-sqlite", "syn 1.0.109", + "tempfile", + "tokio", "url", ] [[package]] -name = "sqlx-rt" -version = "0.5.13" +name = "sqlx-mysql" +version = "0.7.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e37195395df71fd068f6e2082247891bc11e3289624bbc776a0cdfa1ca7f1ea4" +dependencies = [ + "atoi", + "base64 0.21.5", + "bigdecimal", + "bitflags 2.4.1", + "byteorder", + "bytes", + "chrono", + "crc", + "digest 0.10.7", + "dotenvy", + "either", + "futures-channel", + "futures-core", + "futures-io", + "futures-util", + "generic-array", + "hex", + "hkdf", + "hmac 0.12.1", + "itoa", + "log", + "md-5", + "memchr", + "once_cell", + "percent-encoding", + "rand 0.8.5", + "rsa", + "rust_decimal", + "serde", + "sha1", + "sha2 0.10.8", + "smallvec", + "sqlx-core", + "stringprep", + "thiserror", + "tracing", + "whoami", +] + +[[package]] +name = "sqlx-postgres" +version = "0.7.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d6ac0ac3b7ccd10cc96c7ab29791a7dd236bd94021f31eec7ba3d46a74aa1c24" +dependencies = [ + "atoi", + "base64 0.21.5", + "bigdecimal", + "bitflags 2.4.1", + "byteorder", + "chrono", + "crc", + "dotenvy", + "etcetera", + "futures-channel", + "futures-core", + "futures-io", + "futures-util", + "hex", + "hkdf", + "hmac 0.12.1", + "home", + "ipnetwork", + "itoa", + "log", + "md-5", + "memchr", + "num-bigint 0.4.4", + "once_cell", + "rand 0.8.5", + "rust_decimal", + "serde", + "serde_json", + "sha1", + "sha2 0.10.8", + "smallvec", + "sqlx-core", + "stringprep", + "thiserror", + "tracing", + "whoami", +] + +[[package]] +name = "sqlx-sqlite" +version = "0.7.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4db708cd3e459078f85f39f96a00960bd841f66ee2a669e90bf36907f5a79aae" +checksum = "210976b7d948c7ba9fced8ca835b11cbb2d677c59c79de41ac0d397e14547490" dependencies = [ - "native-tls", - "once_cell", - "tokio", - "tokio-native-tls", + "atoi", + "chrono", + "flume", + "futures-channel", + "futures-core", + "futures-executor", + "futures-intrusive", + "futures-util", + "libsqlite3-sys", + "log", + "percent-encoding", + "serde", + "sqlx-core", + "tracing", + "url", + "urlencoding", ] [[package]] @@ -6918,9 +7088,9 @@ dependencies = [ [[package]] name = "subtle" -version = "2.4.1" +version = "2.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "6bdef32e8150c2a081110b42772ffe7d7c9032b606bc226c8260fd97e0976601" +checksum = "81cdd64d312baedb58e21336b31bc043b77e01cc99033ce76ef539f78e965ebc" [[package]] name = "syn" @@ -6955,6 +7125,18 @@ dependencies = [ "unicode-ident", ] +[[package]] +name = "syn_derive" +version = "0.1.8" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "1329189c02ff984e9736652b1631330da25eaa6bc639089ed4915d25446cbe7b" +dependencies = [ + 
"proc-macro-error", + "proc-macro2 1.0.69", + "quote 1.0.33", + "syn 2.0.38", +] + [[package]] name = "sync_vm" version = "1.3.3" @@ -6965,7 +7147,7 @@ dependencies = [ "derivative", "franklin-crypto 0.0.5 (git+https://github.com/matter-labs/franklin-crypto?branch=dev)", "hex", - "itertools", + "itertools 0.10.5", "num-bigint 0.4.4", "num-derive 0.3.3", "num-integer", @@ -7248,11 +7430,11 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d0c014766411e834f7af5b8f4cf46257aab4036ca95e9d2c144a10f59ad6f5b9" dependencies = [ "backtrace", - "bytes 1.5.0", + "bytes", "libc", - "mio 0.8.9", + "mio", "num_cpus", - "parking_lot 0.12.1", + "parking_lot", "pin-project-lite", "signal-hook-registry", "socket2 0.5.5", @@ -7287,31 +7469,28 @@ version = "0.24.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c28327cf380ac148141087fbfb9de9d7bd4e84ab5d2c28fbc911d753de8a7081" dependencies = [ - "rustls", + "rustls 0.21.7", "tokio", ] [[package]] -name = "tokio-stream" -version = "0.1.14" +name = "tokio-rustls" +version = "0.25.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "397c988d37662c7dda6d2208364a706264bf3d6138b11d436cbac0ad38832842" +checksum = "775e0c0f0adb3a2f22a00c4745d728b479985fc15ee7ca6a2608388c5569860f" dependencies = [ - "futures-core", - "pin-project-lite", + "rustls 0.22.1", + "rustls-pki-types", "tokio", ] [[package]] -name = "tokio-util" -version = "0.6.10" +name = "tokio-stream" +version = "0.1.14" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "36943ee01a6d67977dd3f84a5a1d2efeb4ada3a1ae771cadfaa535d9d9fc6507" +checksum = "397c988d37662c7dda6d2208364a706264bf3d6138b11d436cbac0ad38832842" dependencies = [ - "bytes 1.5.0", "futures-core", - "futures-sink", - "log", "pin-project-lite", "tokio", ] @@ -7322,7 +7501,7 @@ version = "0.7.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1d68074620f57a0b21594d9735eb2e98ab38b17f80d3fcb189fca266771ca60d" dependencies = [ - "bytes 1.5.0", + "bytes", "futures-core", "futures-io", "futures-sink", @@ -7331,37 +7510,28 @@ dependencies = [ "tracing", ] -[[package]] -name = "toml" -version = "0.5.11" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f4f7f0dd8d50a853a531c426359045b1998f04219d88799810762cd4ad314234" -dependencies = [ - "serde", -] - [[package]] name = "toml_datetime" -version = "0.6.5" +version = "0.6.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3550f4e9685620ac18a50ed434eb3aec30db8ba93b0287467bca5826ea25baf1" +checksum = "7cda73e2f1397b1262d6dfdcef8aafae14d1de7748d66822d3bfeeb6d03e5e4b" [[package]] name = "toml_edit" -version = "0.14.4" +version = "0.19.15" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5376256e44f2443f8896ac012507c19a012df0fe8758b55246ae51a2279db51f" +checksum = "1b5bb770da30e5cbfde35a2d7b9b8a2c4b8ef89548a7a6aeab5c9a576e3e7421" dependencies = [ - "combine", - "indexmap 1.9.3", - "itertools", + "indexmap 2.1.0", + "toml_datetime", + "winnow", ] [[package]] name = "toml_edit" -version = "0.19.15" +version = "0.20.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1b5bb770da30e5cbfde35a2d7b9b8a2c4b8ef89548a7a6aeab5c9a576e3e7421" +checksum = "396e4d48bbb2b7554c944bde63101b5ae446cff6ec4a24227428f15eb72ef338" dependencies = [ "indexmap 2.1.0", "toml_datetime", @@ -7383,7 +7553,7 @@ dependencies = [ "rand 0.8.5", "slab", "tokio", - "tokio-util 0.7.9", + 
"tokio-util", "tower-layer", "tower-service", "tracing", @@ -7398,7 +7568,7 @@ dependencies = [ "async-compression", "base64 0.21.5", "bitflags 2.4.1", - "bytes 1.5.0", + "bytes", "futures-core", "futures-util", "http", @@ -7411,7 +7581,7 @@ dependencies = [ "percent-encoding", "pin-project-lite", "tokio", - "tokio-util 0.7.9", + "tokio-util", "tower", "tower-layer", "tower-service", @@ -7638,6 +7808,12 @@ version = "0.7.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a156c684c91ea7d62626509bce3cb4e1d9ed5c4d978f7b4352658f96a4c26b4a" +[[package]] +name = "untrusted" +version = "0.9.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8ecb6da28b8a351d773b68d5825ac39017e680750f980f3a1a85cd8dd28a47c1" + [[package]] name = "ureq" version = "2.8.0" @@ -7725,8 +7901,9 @@ checksum = "49874b5167b65d7193b8aba1567f5c7d93d001cafc34600cee003eda787e483f" [[package]] name = "vise" version = "0.1.0" -source = "git+https://github.com/matter-labs/vise.git?rev=dd05139b76ab0843443ab3ff730174942c825dae#dd05139b76ab0843443ab3ff730174942c825dae" +source = "git+https://github.com/matter-labs/vise.git?rev=1c9cc500e92cf9ea052b230e114a6f9cce4fb2c1#1c9cc500e92cf9ea052b230e114a6f9cce4fb2c1" dependencies = [ + "compile-fmt", "elsa", "linkme", "once_cell", @@ -7737,7 +7914,7 @@ dependencies = [ [[package]] name = "vise-exporter" version = "0.1.0" -source = "git+https://github.com/matter-labs/vise.git?rev=dd05139b76ab0843443ab3ff730174942c825dae#dd05139b76ab0843443ab3ff730174942c825dae" +source = "git+https://github.com/matter-labs/vise.git?rev=1c9cc500e92cf9ea052b230e114a6f9cce4fb2c1#1c9cc500e92cf9ea052b230e114a6f9cce4fb2c1" dependencies = [ "hyper", "metrics-exporter-prometheus", @@ -7750,7 +7927,7 @@ dependencies = [ [[package]] name = "vise-macros" version = "0.1.0" -source = "git+https://github.com/matter-labs/vise.git?rev=dd05139b76ab0843443ab3ff730174942c825dae#dd05139b76ab0843443ab3ff730174942c825dae" +source = "git+https://github.com/matter-labs/vise.git?rev=1c9cc500e92cf9ea052b230e114a6f9cce4fb2c1#1c9cc500e92cf9ea052b230e114a6f9cce4fb2c1" dependencies = [ "proc-macro2 1.0.69", "quote 1.0.33", @@ -7928,7 +8105,7 @@ checksum = "5388522c899d1e1c96a4c307e3797e0f697ba7c77dd8e0e625ecba9dd0342937" dependencies = [ "arrayvec 0.7.4", "base64 0.21.5", - "bytes 1.5.0", + "bytes", "derive_more", "ethabi", "ethereum-types 0.14.1", @@ -7937,10 +8114,10 @@ dependencies = [ "headers", "hex", "idna", - "jsonrpc-core 18.0.0 (registry+https://github.com/rust-lang/crates.io-index)", + "jsonrpc-core", "log", "once_cell", - "parking_lot 0.12.1", + "parking_lot", "pin-project", "reqwest", "rlp", @@ -7953,18 +8130,18 @@ dependencies = [ [[package]] name = "webpki-roots" -version = "0.24.0" +version = "0.25.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b291546d5d9d1eab74f069c77749f2cb8504a12caa20f0f2de93ddbf6f411888" -dependencies = [ - "rustls-webpki", -] +checksum = "14247bb57be4f377dfb94c72830b8ce8fc6beac03cf4bf7b9732eadd414123fc" [[package]] name = "webpki-roots" -version = "0.25.2" +version = "0.26.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "14247bb57be4f377dfb94c72830b8ce8fc6beac03cf4bf7b9732eadd414123fc" +checksum = "0de2cfda980f21be5a7ed2eadb3e6fe074d56022bea2cdeb1a62eb220fc04188" +dependencies = [ + "rustls-pki-types", +] [[package]] name = "which" @@ -7983,16 +8160,6 @@ name = "whoami" version = "1.4.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = 
"22fc3756b8a9133049b26c7f61ab35416c130e8c09b660f5b3958b446f52cc50" -dependencies = [ - "wasm-bindgen", - "web-sys", -] - -[[package]] -name = "winapi" -version = "0.2.8" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "167dc9d6949a9b857f3451275e911c3f44255842c1f7a76f33c55103a909087a" [[package]] name = "winapi" @@ -8004,12 +8171,6 @@ dependencies = [ "winapi-x86_64-pc-windows-gnu", ] -[[package]] -name = "winapi-build" -version = "0.1.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2d315eee3b34aca4797b2da6b13ed88266e6d612562a0c46390af8299fc699bc" - [[package]] name = "winapi-i686-pc-windows-gnu" version = "0.4.0" @@ -8022,7 +8183,7 @@ version = "0.1.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f29e6f9198ba0d26b4c9f07dbe6f9ed633e1f3d5b8b414090084349e46a52596" dependencies = [ - "winapi 0.3.9", + "winapi", ] [[package]] @@ -8191,16 +8352,6 @@ dependencies = [ "windows-sys 0.48.0", ] -[[package]] -name = "ws2_32-sys" -version = "0.2.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d59cefebd0c892fa2dd6de581e937301d8552cb44489cdff035c6187cb63fa5e" -dependencies = [ - "winapi 0.2.8", - "winapi-build", -] - [[package]] name = "wyz" version = "0.2.0" @@ -8227,18 +8378,18 @@ dependencies = [ [[package]] name = "zerocopy" -version = "0.7.11" +version = "0.7.31" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4c19fae0c8a9efc6a8281f2e623db8af1db9e57852e04cde3e754dd2dc29340f" +checksum = "1c4061bedbb353041c12f413700357bec76df2c7e2ca8e4df8bac24c6bf68e3d" dependencies = [ "zerocopy-derive", ] [[package]] name = "zerocopy-derive" -version = "0.7.11" +version = "0.7.31" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "fc56589e9ddd1f1c28d4b4b5c773ce232910a6bb67a70133d61c9e347585efe9" +checksum = "b3c129550b3e6de3fd0ba67ba5c81818f9805e58b8d7fee80a3a59d2c9fc601a" dependencies = [ "proc-macro2 1.0.69", "quote 1.0.33", @@ -8271,7 +8422,7 @@ version = "1.3.1" source = "git+https://github.com/matter-labs/era-zk_evm.git?tag=v1.3.1-rc2#0a7c775932db4839ff6b7fb0db9bdb3583ab54c0" dependencies = [ "blake2 0.10.6 (git+https://github.com/RustCrypto/hashes.git?rev=1f727ce37ff40fa0cce84eb8543a45bdd3ca4a4e)", - "k256", + "k256 0.11.6", "lazy_static", "num 0.4.1", "serde", @@ -8293,7 +8444,7 @@ dependencies = [ "serde", "serde_json", "static_assertions", - "zk_evm_abstractions", + "zk_evm_abstractions 0.1.0", "zkevm_opcode_defs 1.3.2", ] @@ -8308,7 +8459,7 @@ dependencies = [ "serde", "serde_json", "static_assertions", - "zk_evm_abstractions", + "zk_evm_abstractions 0.1.0", "zkevm_opcode_defs 1.3.2", ] @@ -8323,21 +8474,49 @@ dependencies = [ "serde", "serde_json", "static_assertions", - "zk_evm_abstractions", + "zk_evm_abstractions 0.1.0", "zkevm_opcode_defs 1.3.2", ] +[[package]] +name = "zk_evm" +version = "1.4.1" +source = "git+https://github.com/matter-labs/era-zk_evm.git?branch=v1.4.1#6250dbf64b2d14ced87a127735da559f27a432d5" +dependencies = [ + "anyhow", + "lazy_static", + "num 0.4.1", + "serde", + "serde_json", + "static_assertions", + "zk_evm_abstractions 1.4.1", + "zkevm_opcode_defs 1.4.1", +] + [[package]] name = "zk_evm_abstractions" version = "0.1.0" -source = "git+https://github.com/matter-labs/era-zk_evm_abstractions.git#15a2af404902d5f10352e3d1fac693cc395fcff9" +source = "git+https://github.com/matter-labs/era-zk_evm_abstractions.git#32dd320953841aa78579d9da08abbc70bcaed175" dependencies = [ "anyhow", + "num_enum", 
"serde", "static_assertions", "zkevm_opcode_defs 1.3.2", ] +[[package]] +name = "zk_evm_abstractions" +version = "1.4.1" +source = "git+https://github.com/matter-labs/era-zk_evm_abstractions.git?branch=v1.4.1#0aac08c3b097ee8147e748475117ac46bddcdcef" +dependencies = [ + "anyhow", + "num_enum", + "serde", + "static_assertions", + "zkevm_opcode_defs 1.4.1", +] + [[package]] name = "zkevm-assembly" version = "1.3.2" @@ -8357,6 +8536,25 @@ dependencies = [ "zkevm_opcode_defs 1.3.2", ] +[[package]] +name = "zkevm-assembly" +version = "1.3.2" +source = "git+https://github.com/matter-labs/era-zkEVM-assembly.git?branch=v1.4.1#50282016d01bd2fd147021dd558209778db2268b" +dependencies = [ + "env_logger 0.9.3", + "hex", + "lazy_static", + "log", + "nom", + "num-bigint 0.4.4", + "num-traits", + "sha3 0.10.8", + "smallvec", + "structopt", + "thiserror", + "zkevm_opcode_defs 1.4.1", +] + [[package]] name = "zkevm_circuits" version = "1.4.0" @@ -8368,7 +8566,7 @@ dependencies = [ "cs_derive 0.1.0 (git+https://github.com/matter-labs/era-boojum.git?branch=main)", "derivative", "hex", - "itertools", + "itertools 0.10.5", "rand 0.4.6", "rand 0.8.5", "serde", @@ -8377,6 +8575,27 @@ dependencies = [ "zkevm_opcode_defs 1.3.2", ] +[[package]] +name = "zkevm_circuits" +version = "1.4.1" +source = "git+https://github.com/matter-labs/era-zkevm_circuits.git?branch=v1.4.1#70234e99c2492740226b9f40091e7fccc7ef28e9" +dependencies = [ + "arrayvec 0.7.4", + "bincode", + "boojum", + "cs_derive 0.1.0 (git+https://github.com/matter-labs/era-boojum.git?branch=main)", + "derivative", + "hex", + "itertools 0.10.5", + "rand 0.4.6", + "rand 0.8.5", + "seq-macro", + "serde", + "serde_json", + "smallvec", + "zkevm_opcode_defs 1.4.1", +] + [[package]] name = "zkevm_opcode_defs" version = "1.3.1" @@ -8396,12 +8615,26 @@ dependencies = [ "bitflags 2.4.1", "blake2 0.10.6 (git+https://github.com/RustCrypto/hashes.git?rev=1f727ce37ff40fa0cce84eb8543a45bdd3ca4a4e)", "ethereum-types 0.14.1", - "k256", + "k256 0.11.6", "lazy_static", "sha2 0.10.6", "sha3 0.10.6", ] +[[package]] +name = "zkevm_opcode_defs" +version = "1.4.1" +source = "git+https://github.com/matter-labs/era-zkevm_opcode_defs.git?branch=v1.4.1#ba8228ff0582d21f64d6a319d50d0aec48e9e7b6" +dependencies = [ + "bitflags 2.4.1", + "blake2 0.10.6 (registry+https://github.com/rust-lang/crates.io-index)", + "ethereum-types 0.14.1", + "k256 0.13.2", + "lazy_static", + "sha2 0.10.8", + "sha3 0.10.8", +] + [[package]] name = "zkevm_test_harness" version = "1.3.3" @@ -8426,7 +8659,7 @@ dependencies = [ "test-log", "tracing", "zk_evm 1.3.3 (git+https://github.com/matter-labs/era-zk_evm.git?branch=v1.3.3)", - "zkevm-assembly", + "zkevm-assembly 1.3.2 (git+https://github.com/matter-labs/era-zkEVM-assembly.git?branch=v1.3.2)", ] [[package]] @@ -8435,7 +8668,7 @@ version = "1.4.0" source = "git+https://github.com/matter-labs/era-zkevm_test_harness.git?branch=v1.4.0#43aeb53d7d9c909508a98f9fc140edff0e9d2357" dependencies = [ "bincode", - "circuit_definitions", + "circuit_definitions 0.1.0 (git+https://github.com/matter-labs/era-zkevm_test_harness.git?branch=v1.4.0)", "codegen 0.2.0", "crossbeam 0.8.2", "derivative", @@ -8449,7 +8682,36 @@ dependencies = [ "structopt", "test-log", "tracing", - "zkevm-assembly", + "zkevm-assembly 1.3.2 (git+https://github.com/matter-labs/era-zkEVM-assembly.git?branch=v1.3.2)", +] + +[[package]] +name = "zkevm_test_harness" +version = "1.4.1" +source = 
"git+https://github.com/matter-labs/era-zkevm_test_harness.git?branch=v1.4.1#44975f894aff0893b5f98e34d0e364375390bcb8" +dependencies = [ + "bincode", + "circuit_definitions 0.1.0 (git+https://github.com/matter-labs/era-zkevm_test_harness.git?branch=v1.4.1)", + "codegen 0.2.0", + "crossbeam 0.8.2", + "curl", + "derivative", + "env_logger 0.9.3", + "hex", + "lazy_static", + "rand 0.4.6", + "rayon", + "reqwest", + "rescue_poseidon 0.4.1 (git+https://github.com/matter-labs/rescue-poseidon.git?branch=poseidon2)", + "serde", + "serde_json", + "smallvec", + "snark_wrapper", + "structopt", + "test-log", + "tracing", + "walkdir", + "zkevm-assembly 1.3.2 (git+https://github.com/matter-labs/era-zkEVM-assembly.git?branch=v1.4.1)", ] [[package]] @@ -8506,6 +8768,7 @@ dependencies = [ name = "zksync_commitment_utils" version = "0.1.0" dependencies = [ + "multivm", "zkevm_test_harness 1.4.0", "zksync_types", "zksync_utils", @@ -8514,7 +8777,7 @@ dependencies = [ [[package]] name = "zksync_concurrency" version = "0.1.0" -source = "git+https://github.com/matter-labs/era-consensus.git?rev=ed71b2e817c980a2daffef6a01885219e1dc6fa0#ed71b2e817c980a2daffef6a01885219e1dc6fa0" +source = "git+https://github.com/matter-labs/era-consensus.git?rev=5727a3e0b22470bb90092388f9125bcb366df613#5727a3e0b22470bb90092388f9125bcb366df613" dependencies = [ "anyhow", "once_cell", @@ -8541,9 +8804,10 @@ dependencies = [ [[package]] name = "zksync_consensus_bft" version = "0.1.0" -source = "git+https://github.com/matter-labs/era-consensus.git?rev=ed71b2e817c980a2daffef6a01885219e1dc6fa0#ed71b2e817c980a2daffef6a01885219e1dc6fa0" +source = "git+https://github.com/matter-labs/era-consensus.git?rev=5727a3e0b22470bb90092388f9125bcb366df613#5727a3e0b22470bb90092388f9125bcb366df613" dependencies = [ "anyhow", + "async-trait", "once_cell", "rand 0.8.5", "thiserror", @@ -8561,14 +8825,14 @@ dependencies = [ [[package]] name = "zksync_consensus_crypto" version = "0.1.0" -source = "git+https://github.com/matter-labs/era-consensus.git?rev=ed71b2e817c980a2daffef6a01885219e1dc6fa0#ed71b2e817c980a2daffef6a01885219e1dc6fa0" +source = "git+https://github.com/matter-labs/era-consensus.git?rev=5727a3e0b22470bb90092388f9125bcb366df613#5727a3e0b22470bb90092388f9125bcb366df613" dependencies = [ "anyhow", "blst", "ed25519-dalek", "ff_ce", "hex", - "pairing_ce 0.28.5 (git+https://github.com/matter-labs/pairing.git?rev=f55393f)", + "pairing_ce 0.28.5 (git+https://github.com/matter-labs/pairing.git?rev=f55393fd366596eac792d78525d26e9c4d6ed1ca)", "rand 0.4.6", "rand 0.8.5", "sha3 0.10.8", @@ -8579,10 +8843,9 @@ dependencies = [ [[package]] name = "zksync_consensus_executor" version = "0.1.0" -source = "git+https://github.com/matter-labs/era-consensus.git?rev=ed71b2e817c980a2daffef6a01885219e1dc6fa0#ed71b2e817c980a2daffef6a01885219e1dc6fa0" +source = "git+https://github.com/matter-labs/era-consensus.git?rev=5727a3e0b22470bb90092388f9125bcb366df613#5727a3e0b22470bb90092388f9125bcb366df613" dependencies = [ "anyhow", - "prost", "rand 0.8.5", "tracing", "vise", @@ -8594,14 +8857,12 @@ dependencies = [ "zksync_consensus_storage", "zksync_consensus_sync_blocks", "zksync_consensus_utils", - "zksync_protobuf", - "zksync_protobuf_build", ] [[package]] name = "zksync_consensus_network" version = "0.1.0" -source = "git+https://github.com/matter-labs/era-consensus.git?rev=ed71b2e817c980a2daffef6a01885219e1dc6fa0#ed71b2e817c980a2daffef6a01885219e1dc6fa0" +source = 
"git+https://github.com/matter-labs/era-consensus.git?rev=5727a3e0b22470bb90092388f9125bcb366df613#5727a3e0b22470bb90092388f9125bcb366df613" dependencies = [ "anyhow", "async-trait", @@ -8625,7 +8886,7 @@ dependencies = [ [[package]] name = "zksync_consensus_roles" version = "0.1.0" -source = "git+https://github.com/matter-labs/era-consensus.git?rev=ed71b2e817c980a2daffef6a01885219e1dc6fa0#ed71b2e817c980a2daffef6a01885219e1dc6fa0" +source = "git+https://github.com/matter-labs/era-consensus.git?rev=5727a3e0b22470bb90092388f9125bcb366df613#5727a3e0b22470bb90092388f9125bcb366df613" dependencies = [ "anyhow", "bit-vec", @@ -8633,6 +8894,7 @@ dependencies = [ "prost", "rand 0.8.5", "serde", + "thiserror", "tracing", "zksync_concurrency", "zksync_consensus_crypto", @@ -8644,7 +8906,7 @@ dependencies = [ [[package]] name = "zksync_consensus_storage" version = "0.1.0" -source = "git+https://github.com/matter-labs/era-consensus.git?rev=ed71b2e817c980a2daffef6a01885219e1dc6fa0#ed71b2e817c980a2daffef6a01885219e1dc6fa0" +source = "git+https://github.com/matter-labs/era-consensus.git?rev=5727a3e0b22470bb90092388f9125bcb366df613#5727a3e0b22470bb90092388f9125bcb366df613" dependencies = [ "anyhow", "async-trait", @@ -8652,6 +8914,7 @@ dependencies = [ "rand 0.8.5", "thiserror", "tracing", + "vise", "zksync_concurrency", "zksync_consensus_roles", "zksync_protobuf", @@ -8661,7 +8924,7 @@ dependencies = [ [[package]] name = "zksync_consensus_sync_blocks" version = "0.1.0" -source = "git+https://github.com/matter-labs/era-consensus.git?rev=ed71b2e817c980a2daffef6a01885219e1dc6fa0#ed71b2e817c980a2daffef6a01885219e1dc6fa0" +source = "git+https://github.com/matter-labs/era-consensus.git?rev=5727a3e0b22470bb90092388f9125bcb366df613#5727a3e0b22470bb90092388f9125bcb366df613" dependencies = [ "anyhow", "thiserror", @@ -8676,7 +8939,7 @@ dependencies = [ [[package]] name = "zksync_consensus_utils" version = "0.1.0" -source = "git+https://github.com/matter-labs/era-consensus.git?rev=ed71b2e817c980a2daffef6a01885219e1dc6fa0#ed71b2e817c980a2daffef6a01885219e1dc6fa0" +source = "git+https://github.com/matter-labs/era-consensus.git?rev=5727a3e0b22470bb90092388f9125bcb366df613#5727a3e0b22470bb90092388f9125bcb366df613" dependencies = [ "thiserror", "zksync_concurrency", @@ -8744,13 +9007,9 @@ dependencies = [ "futures 0.3.28", "governor", "hex", - "itertools", - "jsonrpc-core 18.0.0 (git+https://github.com/matter-labs/jsonrpc.git?branch=master)", - "jsonrpc-core-client", - "jsonrpc-derive", - "jsonrpc-http-server", - "jsonrpc-pubsub", - "jsonrpc-ws-server", + "itertools 0.10.5", + "jsonrpsee", + "lru", "metrics", "multivm", "num 0.3.1", @@ -8774,9 +9033,12 @@ dependencies = [ "zksync_commitment_utils", "zksync_concurrency", "zksync_config", + "zksync_consensus_bft", + "zksync_consensus_crypto", "zksync_consensus_executor", "zksync_consensus_roles", "zksync_consensus_storage", + "zksync_consensus_utils", "zksync_contracts", "zksync_dal", "zksync_eth_client", @@ -8787,8 +9049,6 @@ dependencies = [ "zksync_mini_merkle_tree", "zksync_object_store", "zksync_protobuf", - "zksync_protobuf_build", - "zksync_prover_utils", "zksync_queued_job_processor", "zksync_state", "zksync_storage", @@ -8796,7 +9056,6 @@ dependencies = [ "zksync_test_account", "zksync_types", "zksync_utils", - "zksync_verification_key_generator_and_server", "zksync_web3_decl", ] @@ -8824,9 +9083,10 @@ dependencies = [ "bigdecimal", "bincode", "hex", - "itertools", - "num 0.3.1", + "itertools 0.10.5", + "num 0.4.1", "once_cell", + "prost", "rand 0.8.5", "serde", 
"serde_json", @@ -8837,8 +9097,12 @@ dependencies = [ "tracing", "url", "vise", + "zksync_consensus_roles", + "zksync_consensus_storage", "zksync_contracts", "zksync_health_check", + "zksync_protobuf", + "zksync_protobuf_build", "zksync_system_constants", "zksync_types", "zksync_utils", @@ -8859,11 +9123,10 @@ dependencies = [ name = "zksync_eth_client" version = "0.1.0" dependencies = [ - "anyhow", "async-trait", - "hex", - "jsonrpc-core 18.0.0 (registry+https://github.com/rust-lang/crates.io-index)", + "jsonrpc-core", "serde", + "static_assertions", "thiserror", "tokio", "tracing", @@ -8883,7 +9146,7 @@ dependencies = [ "async-trait", "futures 0.3.28", "hex", - "jsonrpc-core 18.0.0 (registry+https://github.com/rust-lang/crates.io-index)", + "jsonrpc-core", "parity-crypto", "reqwest", "rlp", @@ -8905,10 +9168,13 @@ dependencies = [ "envy", "futures 0.3.28", "prometheus_exporter", + "semver", "serde", + "serde_json", "tokio", "tracing", "url", + "vise", "vlog", "zksync_basic_types", "zksync_config", @@ -8988,21 +9254,25 @@ dependencies = [ "anyhow", "async-trait", "bincode", + "flate2", "google-cloud-auth", "google-cloud-storage", "http", + "prost", + "serde_json", "tempdir", "tokio", "tracing", "vise", "zksync_config", + "zksync_protobuf", "zksync_types", ] [[package]] name = "zksync_protobuf" version = "0.1.0" -source = "git+https://github.com/matter-labs/era-consensus.git?rev=ed71b2e817c980a2daffef6a01885219e1dc6fa0#ed71b2e817c980a2daffef6a01885219e1dc6fa0" +source = "git+https://github.com/matter-labs/era-consensus.git?rev=5727a3e0b22470bb90092388f9125bcb366df613#5727a3e0b22470bb90092388f9125bcb366df613" dependencies = [ "anyhow", "bit-vec", @@ -9020,7 +9290,7 @@ dependencies = [ [[package]] name = "zksync_protobuf_build" version = "0.1.0" -source = "git+https://github.com/matter-labs/era-consensus.git?rev=ed71b2e817c980a2daffef6a01885219e1dc6fa0#ed71b2e817c980a2daffef6a01885219e1dc6fa0" +source = "git+https://github.com/matter-labs/era-consensus.git?rev=5727a3e0b22470bb90092388f9125bcb366df613#5727a3e0b22470bb90092388f9125bcb366df613" dependencies = [ "anyhow", "heck 0.4.1", @@ -9033,25 +9303,6 @@ dependencies = [ "syn 2.0.38", ] -[[package]] -name = "zksync_prover_utils" -version = "0.1.0" -dependencies = [ - "anyhow", - "async-trait", - "ctrlc", - "futures 0.3.28", - "regex", - "reqwest", - "tokio", - "toml_edit 0.14.4", - "tracing", - "zksync_config", - "zksync_object_store", - "zksync_types", - "zksync_utils", -] - [[package]] name = "zksync_queued_job_processor" version = "0.1.0" @@ -9088,7 +9339,7 @@ name = "zksync_state" version = "0.1.0" dependencies = [ "anyhow", - "itertools", + "itertools 0.10.5", "mini-moka", "rand 0.8.5", "tempfile", @@ -9153,7 +9404,7 @@ dependencies = [ "codegen 0.1.0", "ethereum-types 0.12.1", "hex", - "num 0.3.1", + "num 0.4.1", "num_enum", "once_cell", "parity-crypto", @@ -9168,8 +9419,10 @@ dependencies = [ "tokio", "zk_evm 1.3.3 (git+https://github.com/matter-labs/era-zk_evm.git?tag=v1.3.3-rc2)", "zk_evm 1.4.0", + "zk_evm 1.4.1", "zkevm_test_harness 1.3.3", "zksync_basic_types", + "zksync_config", "zksync_consensus_roles", "zksync_contracts", "zksync_mini_merkle_tree", @@ -9187,9 +9440,9 @@ dependencies = [ "bigdecimal", "futures 0.3.28", "hex", - "itertools", + "itertools 0.10.5", "metrics", - "num 0.3.1", + "num 0.4.1", "reqwest", "serde", "serde_json", @@ -9201,32 +9454,13 @@ dependencies = [ "zksync_basic_types", ] -[[package]] -name = "zksync_verification_key_generator_and_server" -version = "0.1.0" -dependencies = [ - "anyhow", - "bincode", - 
"circuit_testing", - "ff_ce", - "hex", - "itertools", - "once_cell", - "serde_json", - "structopt", - "tracing", - "vlog", - "zksync_prover_utils", - "zksync_types", -] - [[package]] name = "zksync_web3_decl" version = "0.1.0" dependencies = [ "bigdecimal", "chrono", - "itertools", + "itertools 0.10.5", "jsonrpsee", "rlp", "serde", diff --git a/Cargo.toml b/Cargo.toml index 75a4c7237d2..cd823972a01 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -5,10 +5,9 @@ members = [ "core/bin/contract-verifier", "core/bin/external_node", "core/bin/merkle_tree_consistency_checker", - "core/bin/rocksdb_util", + "core/bin/snapshots_creator", "core/bin/storage_logs_dedup_migration", "core/bin/system-constants-generator", - "core/bin/verification_key_generator_and_server", "core/bin/verified_sources_fetcher", "core/bin/zksync_server", # Libraries @@ -33,7 +32,6 @@ members = [ "core/lib/state", "core/lib/storage", "core/lib/types", - "core/lib/prover_utils", "core/lib/utils", "core/lib/vlog", "core/lib/multivm", diff --git a/README.md b/README.md index 5d34006846a..4d658d75d44 100644 --- a/README.md +++ b/README.md @@ -11,13 +11,14 @@ write smart contracts in C++, Rust and other popular languages. The following questions will be answered by the following resources: -| Question | Resource | -| ------------------------------------------------------- | --------------------------------------- | -| What do I need to develop the project locally? | [development.md](docs/development.md) | -| How can I set up my dev environment? | [setup-dev.md](docs/setup-dev.md) | -| How can I run the project? | [launch.md](docs/launch.md) | -| What is the logical project structure and architecture? | [architecture.md](docs/architecture.md) | -| Where can I find developer docs? | [docs](https://v2-docs.zksync.io/dev/) | +| Question | Resource | +| ------------------------------------------------------- | ---------------------------------------------- | +| What do I need to develop the project locally? | [development.md](docs/guides/development.md) | +| How can I set up my dev environment? | [setup-dev.md](docs/guides/setup-dev.md) | +| How can I run the project? | [launch.md](docs/guides/launch.md) | +| What is the logical project structure and architecture? | [architecture.md](docs/guides/architecture.md) | +| Where can I find protocol specs? | [specs](docs/specs/README.md) | +| Where can I find developer docs? | [docs](https://era.zksync.io/docs/) | ## Policies @@ -29,7 +30,7 @@ The following questions will be answered by the following resources: zkSync Era is distributed under the terms of either - Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or ) -- MIT license ([LICENSE-MIT](LICENSE-MIT) or ) +- MIT license ([LICENSE-MIT](LICENSE-MIT) or ) at your option. @@ -42,6 +43,7 @@ at your option. 
 - [Twitter for Devs](https://twitter.com/zkSyncDevs)
 - [Discord](https://join.zksync.dev/)
 - [Mirror](https://zksync.mirror.xyz/)
+- [YouTube](https://www.youtube.com/@zkSync-era)
 
 ## Disclaimer
 
diff --git a/bin/ci_localnet_up b/bin/ci_localnet_up
new file mode 100755
index 00000000000..7f9701a9a14
--- /dev/null
+++ b/bin/ci_localnet_up
@@ -0,0 +1,12 @@
+#!/usr/bin/env bash
+
+set -e
+
+cd $ZKSYNC_HOME
+
+mkdir -p ./volumes/postgres ./volumes/geth/keystore ./volumes/prysm/beacon ./volumes/prysm/validator
+cp ./docker/prysm/config.yml ./volumes/prysm/config.yml
+cp ./docker/geth/jwtsecret ./volumes/geth/jwtsecret
+cp ./docker/geth/password.sec ./volumes/geth/password.sec
+cp ./docker/geth/keystore/UTC--2019-04-06T21-13-27.692266000Z--8a91dc2d28b689474298d91899f0c1baf62cb85b ./volumes/geth/keystore/
+docker-compose --profile runner up -d --wait
diff --git a/bin/ci_run b/bin/ci_run
index 18f11e33fec..709e057bafa 100755
--- a/bin/ci_run
+++ b/bin/ci_run
@@ -2,7 +2,7 @@
 # Runs the command from within CI docker-compose environment.
 cd $ZKSYNC_HOME
-compose_file="${RUNNER_COMPOSE_FILE:-docker-compose-runner.yml}"
+compose_file="${RUNNER_COMPOSE_FILE:-docker-compose.yml}"
 
 # Pass environment variables explicitly if specified
 if [ ! -z "$PASSED_ENV_VARS" ]; then
diff --git a/bin/zk b/bin/zk
index e6dd1567b38..34c4f846b8d 100755
--- a/bin/zk
+++ b/bin/zk
@@ -1,9 +1,48 @@
 #!/usr/bin/env bash
+RED='\033[0;31m'
+WHITE_BOLD='\033[1;37m'
+NC='\033[0m' # No Color
+
+# Checks that the user's current directory is inside $ZKSYNC_HOME.
+# We depend on this variable in multiple places, so running from a different directory might have
+# surprising side effects (like loading the wrong binaries).
+check_subdirectory() {
+    if [[ -z "$ZKSYNC_HOME" ]]; then
+        echo -e "${RED}Error: ZKSYNC_HOME is not set.${NC}"
+        return 1
+    fi
+
+    ZKSYNC_HOME_ABS=$(realpath "$ZKSYNC_HOME")
+    CURRENT_DIR_ABS=$(realpath .)
+
+    if [[ "$CURRENT_DIR_ABS" != "$ZKSYNC_HOME_ABS"* ]]; then
+        echo -e "${RED}Warning: You are not in a subdirectory of ZKSYNC_HOME ($ZKSYNC_HOME_ABS).${NC}"
+        return 1
+    fi
+    return 0
+}
+
+
+# Currently, many parts of our zk TypeScript tooling are checked & verified with yarn v1.22.19 and might fail with newer versions of yarn.
+check_yarn_version() {
+    desired_version="1.22"
+    installed_version=$(yarn --version | cut -d'.' -f1,2)
+
+    if [ "$installed_version" != "$desired_version" ]; then
+        echo -e "${RED}Warning: Yarn is not at the desired version ($desired_version). Installed version is ($installed_version).${NC}"
+        echo -e "This might cause errors - we recommend running: ${WHITE_BOLD} yarn set version $desired_version.${NC}"
+    fi
+}
+
+# We must do these checks here, in the shell script; otherwise people's consoles will be flooded with errors
+# and it will be hard for them to see what went wrong.
+check_subdirectory
+check_yarn_version
 cd $ZKSYNC_HOME
 yarn && yarn zk build
-
-if [ -n "$1" ]; then
+
+if [ ! -z "$1" ]; then
   # can't start this with yarn since it has quirks with `--` as an argument
   node -- $ZKSYNC_HOME/infrastructure/zk/build/index.js "$@"
 fi
diff --git a/checks-config/cspell.json b/checks-config/cspell.json
new file mode 100644
index 00000000000..bafb5e036d0
--- /dev/null
+++ b/checks-config/cspell.json
@@ -0,0 +1,47 @@
+{
+  "language": "en",
+  "ignorePaths": [
+    "**/CHANGELOG.md",
+    "**/node_modules/**",
+    ".github/**",
+    ".firebase/**",
+    ".yarn/**",
+    "dist/**",
+    "**/contracts/**",
+    "**/target/**"
+  ],
+  "dictionaries": [
+    "typescript",
+    "cpp",
+    "npm",
+    "filetypes",
+    "cpp",
+    "en_GB",
+    "en_US",
+    "node",
+    "bash",
+    "fonts",
+    "npm",
+    "cryptocurrencies",
+    "companies",
+    "rust",
+    "html",
+    "css",
+    "entities",
+    "softwareTerms",
+    "misc",
+    "fullstack",
+    "softwareTerms",
+    "zksync",
+    "nuxt",
+    "viem"
+  ],
+  "dictionaryDefinitions": [
+    {
+      "name": "zksync",
+      "addWords": true,
+      "path": "./era.dic"
+    }
+  ],
+    "allowCompoundWords": true
+  }
\ No newline at end of file
diff --git a/checks-config/era.cfg b/checks-config/era.cfg
new file mode 100644
index 00000000000..c8a6baba820
--- /dev/null
+++ b/checks-config/era.cfg
@@ -0,0 +1,69 @@
+# Project settings where a Cargo.toml exists and is passed
+# ${CARGO_MANIFEST_DIR}/.config/spellcheck.toml
+
+# Also take into account developer comments
+dev_comments = true
+
+# Skip the README.md file as defined in the cargo manifest
+skip_readme = false
+
+[Hunspell]
+# lang and name of `.dic` file
+lang = "en_US"
+# OS specific additives
+# Linux: [ /usr/share/myspell ]
+# Windows: []
+# macOS [ /home/alice/Libraries/hunspell, /Libraries/hunspell ]
+
+# Additional search paths, which take precedence over the default
+# os specific search dirs, searched in order, defaults last
+search_dirs = ["."]
+
+# Adds additional dictionaries, can be specified as
+# absolute paths or relative in the search dirs (in this order).
+# Relative paths are resolved relative to the configuration file
+# which is used.
+# Refer to `man 5 hunspell`
+# or https://www.systutorials.com/docs/linux/man/4-hunspell/#lbAE
+# on how to define a custom dictionary file.
+extra_dictionaries = ["era.dic"]
+
+# If set to `true`, the OS specific default search paths
+# are skipped and only explicitly specified ones are used.
+skip_os_lookups = false
+
+# Use the builtin dictionaries if none were found
+# in the configured lookup paths.
+# Usually combined with `skip_os_lookups=true`
+# to enforce the `builtin` usage for consistent
+# results across distributions and CI runs.
+# Setting this will still use the dictionaries
+# specified in `extra_dictionaries = [..]`
+# for topic specific lingo.
+use_builtin = true
+
+
+[Hunspell.quirks]
+# Transforms words that are provided by the tokenizer
+# into word fragments based on the capture groups which are to
+# be checked.
+# If no capture groups are present, the matched word is whitelisted.
+transform_regex = ["^'([^\\s])'$", "^[0-9]+x$"]
+# Accepts `alphabeta` variants if the checker provides a replacement suggestion
+# of `alpha-beta`.
+allow_concatenation = true
+# And the counterpart, which accepts words with dashes, when the suggestion has
+# recommendations without the dashes. This is less common.
+allow_dashed = false + +[NlpRules] +# Allows the user to override the default included +# exports of LanguageTool, with other custom +# languages + +# override_rules = "/path/to/rules_binencoded.bin" +# override_tokenizer = "/path/to/tokenizer_binencoded.bin" + +[Reflow] +# Reflows doc comments to adhere to a given maximum line width limit. +max_line_length = 80 diff --git a/checks-config/era.dic b/checks-config/era.dic new file mode 100644 index 00000000000..f0ec14591b1 --- /dev/null +++ b/checks-config/era.dic @@ -0,0 +1,883 @@ +42 +<= +=> +== +-> +<- ++ +- +* +\ += +/ +|| +< +> +% +^ +0x00 +0x01 +0x02 +0x20 +~10x +u32 +u64 +u8 +1B +H256 +10e18 +10^9 +2^32 +2^128 +2^24 +10^32 +10^* +2^16 +2^64 +10^8 +U256 +12.5% +5% +10% +20% +*% +90% +f64 +k +M +kb +50M +2M +130µs +– +18kb +128kb +10k +100k +120k +800k +24k +500k +50k +120kb +18kb +12GB +20GB +500B +100M +~100us +10ms +1_000ms +1us +~100 +gwei + +ABI +vlog +const +L2 +L2s +L1 +json +l1 +SystemConfig +TODO +se +ZKSYNC_HOME +MultiVMTracer +vm_virtual_blocks +eth_node +EthCall +BaseSystemContracts +eth_calls +refactor +WS +env +url +GasAdjuster +base_fee +base_fee_per_gas +ERC20 +Finalizer +Backoff +middleware +parallelization +precompute +precomputed +Postgres +parallelized +parallelize +job_id +API +APIs +async +pointwise +observability +atomics +integrations +stdout +GCS +websocket +struct +localhost +TOML +config +finalizer +boolean +prover +timestamp +H160 +zkSync +AccessList +miniblock +member₁ +member₂ +memberₙ +merkle +eth +Ethereum +deployer +RPC +tx +txs +subtrees +subtree +unfinalizable +meterer +Timedout +bootloader +bootloader's +testkit +Sepolia +Goerli +miniblock +miniblocks +MempoolIO +mempool +latencies +OracleTools +StorageOracle +zk_evm +zkEVM +src +utils +ptr +RefCell +Rc +StorageView +VM_HOOK_POSITION +VM_HOOKS_PARAMS_COUNT +PAYMASTER_CONTEXT_SLOTS +PrecompilerProcessor +MAX_POSTOP_SLOTS +postOp +type +opcode +KnownCodesStorage +param +HistoryDisabled +HistoryEnabled +sorted_timestamps +known_bytecodes +returndata +namespaces +StateDiffRecord +BYTES_PER_ENUMERATION_INDEX +derived_key +prefill +reorg +precompile +Init +init +enqueued +stage2 +testnets +ethCalls +generable +Serde +tokenize +EOAs +zeroized +value + +// zkSync-related words +matterlabs +zkweb +zksync +blockchain +zkscan +zkscrypto +PubSub +loadtest +BigUint +radix +state_keeper +MIN_PAYMASTER_BALANCE +PrometheusCollector +RetryCollector +ScriptCollector +MetricsCollector +OperationResultsCollector +ReportCollector +filesystem +hasher +Hasher +grafana +prometheus +serializer +serializable +deserializer +Deserializes +deserializes +serializing +deserializing +deserialization +configs +operation_number +hashed_key +deduplication +mutexes +mutex +Blake2s +Blake2 +web3 +Testnets +miniblock_number +hashed_key +tuples +\x19Ethereum +libzkscrypto +EOA +MultiVM +nonces +fri +rollup +pubkey +JSON +keccak256 +pubdata +timestamps +keccak +musig +len +calldata +DApp +metadata +boojum +deps +Precalculated +WASM +DefaultPrecompilesProcessor +LSB +DDoS +refactored +tuple +HistoryMode +vm +VM +VMs +VM's +MSB +Enum +PublishProof +jsrpc +backends +ethsig +ethop +decentralization +rollups +zkrollup +unencrypted +permissionless +trustlessness +IERC +Schnorr +MuSig +Merkle +decentralised +mainchain +offchain +processed +zcli +blockchains +sidechain +sidechains +tokenomics +validator +validator's +validator +Validators +CHAINID +PREVRANDAO +ECDSA +EIP712 +EIP1559 +EIPs +eth_estimateGas +eth_call +versa +blake2 +AR16MT +Preimages +EN's +SystemContext +StorageOracle +intrinsics 
+chunked +chunking +deadbeef01 +deadbeef0 +deadbeef +unsynced +computable +DevEx +Workspace +NFT +preimage +subcalls +hashmaps +monotonicity +subquery +RPCs +programmatically +stdin +stderr +Linter +SmallRng +ZkPorter +StateDiffs +HashMaps +encodings +CTPOP +decommitter +Decommitter +Decommitments +Decommitment +decommitment +decommitments +Decommit +decommit +decommits +DecommiterOracle +DecommitmentProcessor +decommitted +decommit +decommitting +Demuxer +demultiplex +recid +inversed +plux +Binop +Arithmetization +arithmetization +nocapture +Plonky +permissioned +mathbb +Invb +REDC +iszero +skept +ECADD +ECMUL +preds +inttoptr +syncvm +nasm +rodata +ISZERO +JUMPI +ethir +ptrtoint +lshr +getu +zext +noprofile +umin +cccond +ccret +prodm +prodl +prodeh +prodh +interm +signv +ashr +noalias +immediates +prode +StorageBatchInfo +CommitBatchInfo +IExecutor + +// Names +Vyper +stimate +samount +Stichting +Kingsfordweg +RSIN +ABDK +Alef +Zcon +Paypal +Numio +MLTT +USDCs +dapi +validiums +validium +Validium +sharded +pepe +Arweave +Streamr +dutterbutter +NixOS +CLI +SQLx +Rustup +nextest +NTFS +toolchains +toolchain +IDE +M1 +M2 +MacOS +OpenSSL +Xcode +LLVM +nvm +LTS +logout +WSL +orchestrator +TypeScript +Cryptographical +cryptographical +microservices +Executables +subcomponents +v2 +v1 +rmSync +SSL +setup_2^26 +uncomment +toml +GCP +dev +workspace +subcommand +Kubernetes +Etherscan +cryptographic +hashers +MacBook +DDR5 +~ + +// Used libraries +numberish +arrayify +hexlify +markdownlint +ethersproject +nomicfoundation +nomiclabs +Consensys +zkforge +zkcast +Eigen +IPFS + +// Used programming language words +printf +charsets +println +fatalf +allowfullscreen +inttypes +zbin +Panicf +Deri +DERI +Furucombo +kwargs +scaleb +isinstance +RocksDB +mload +secp +porco +rosso +insize +MLOAD +sload +sload +uadd +nocallback +nosync +swrite +Devs +insta +NFTF + +// ETC +gitter +signup +signups +precompiled +checkmark +Vitalik +Buterin +roadmap +majeure +conveniens +reimplementing +subsecond +supermajority +gemeente +unauthorised +Ethereum's +SDKs +EVM's +EVM +Göerli +ETHUSDC +USDCUSD +ETHUS +USDCUS +ETHUSD +Arbitrum +Adamantium +Immunefi +Winternitz +ewasm +Evmla +UUPS +Uups +TLDR +BLAKE2s +bytes32 +enumeration_index +backend +enum +num_initial +to_check_storage +source_storage +prepend +deduplicated +user_l2_to_l1_logs +L1Messeger +params +provers +zk +substring +reverter +wei +deduplicate +testnet +mainnet +performant +opcodes +USDC +USD +DBs +unexecutable +RLP +DAL +zkSync's +l2_to_l1 +PoW +coinbase +FIXME +ASC +DESC +Versioning +initializer +refactoring +prefetch +unformatted + +// crypto events +Edcon + +// Famous crypto people +Gluchowski +Vitalik's +Buterin's +multisignature +onchain +convertion +Keyhash +Armeabi +scijava +gluk +@Deniallugo's +emilluta + +// Programming related words +backfill +bytecode +bytecodes +impl +subrange +timeframe +leaf_count +mkdir +librocksdb +zksolc +zksyncrobot +precompiles +vyper +zkvyper +undol +applyl +Upgradability +Initializable +Hola +mundo +ISTN +Zerion +Maverik +zk_evm_1_3_3 +vk +vks +CORS +verifier +crypto +callee +Subcalls +Vec +vecs +L1Messenger +SystemL2ToL1Log +witness_inputs +StateKeeper +enum_index +virtual_block_start_batch +virtual_block_finish_l2_block +maxFeePerGas +maxPriorityFeePerGas +structs +all_circuit +OversizedData +M5 +eth_sign +geth +ethers +js +recovery_id +&self +ETHSignature +recover_signer +BlockNumber +(de) +{result +DebugCall} +CREATE2 +memtables +memtable +PostgreSQL +OneTx +DefaultTracer +Tx1 +Tx2 +TxN +VmStopped +Unversioned 
+versioned +l2_block +submodule +enums +deserialized +deserialize +hashmap +vm_m5 +SDK +1M +dir +SSD +getter +Getters +WebSocket +gasLimit +MiBs +MiB +GiB +GiBs +pubsub +\x19Ethereum +nibbles–node +ZkSyncTree +invariants +LEB128 +workflow +L1Batch +runtime +Tokio +Blobstore +S3 +AWS +ExternalIO +ClosedFormInputWrapper +AggregationWrapper +(de)serializer +typesafe +LRU +ns +Q3 +loadnext +args +with_arg +node_aggregation_job +scheduler_job +leaf_aggregation_job +MAX_ATTEMPTs +fsync +TEST_DATABASE_URL +newest_block +block_count +contracts_verification_info +RNG +jsonrpsee +l1_batch +Namespace +ExecutionStatus +VmStarted +reproducibility +CFs +key–value +enum_index_migration_cursor +block_number +initial_writes +errored +FactoryDeps +de +StorageView's +Yul +eth_txs +eth_tx +ExecuteBlock +PublishProofBlocksOnchain +CommitBlocks +entrypoint +gas_limit +TxSender +UX +BasicWitnessInputProducer +eth_tx_history +PENDING_BLOCK +from_block +namespace +PriorityQueue +Görli +Ropsten +Rinkeby +tokio +threadpool +IntrinsicGas +InsufficientFundsForTransfer +ChainId +eth_getLogs +façade +virtual_blocks_per_miniblock +virtual_block_interval +max_overhead +total_gas_limit +cloneable +timestamped +healthcheck +Healthcheck +HealthCheck +readonly +upgrader +startup +BFT +PingCAP +witgen +ok +hacky +ceil +Infura +synth + +AUTOGENERATED +x19Ethereum +block_timestamp +SYSTEM_BLOCK_INFO_BLOCK_NUMBER_MULTIPLIER +MAX_L2_TX_GAS_LIMIT +MAX_TX_ERGS_LIMIT +OneTxTracer +multicall +Multicall's +Multicall3 +proxied +scalers +updatable +instantiation +unexecuted +transactional +benchmarking +virtual_blocks_interval +dal +codebase +compactions +M6 +compiler_common +noop +tokenized +rustc +sqlx +zkevm +Boojum +Sepolia +psql +Cuda +cuda +hdcaa +impls +abda +edaf +unsynchronized +CUDA +gcloud +NVME +OTLP +multiVM +Deduplicator +lobkc +sread +myfunction +merklelization +beaf +subcalls +unallowed +Nuxt +Merklized +satisfiability +demultiplex +precompile +statekeeper +matchers +lifecycle +dedup +deduped +crаsh +protobuf +L1Tx +EIP +DecommittmentProcessor +decommitment +tokenized +Aggregator +DecommittmentProcessor +decommitment +hardcoded +plookup +shivini +EIP4844 +KZG diff --git a/checks-config/links.json b/checks-config/links.json new file mode 100644 index 00000000000..ed336a66590 --- /dev/null +++ b/checks-config/links.json @@ -0,0 +1,29 @@ +{ + "ignorePatterns": [ + { + "pattern": "^https://github\\.com/matter-labs/zksync-2-dev/" + }, + { + "pattern": "^https://www\\.notion\\.so/" + }, + { + "pattern": "^https://github\\.com/matter-labs/zksync-era/compare/" + }, + { + "pattern": "^https://twitter\\.com/zksync" + }, + { + "pattern": "^https://twitter\\.com/zkSyncDevs" + }, + { + "pattern": "^https://github\\.com/matter-labs/zk_evm" + }, + { + "pattern": "^https://sepolia\\.etherscan\\.io/tx/0x18c2a113d18c53237a4056403047ff9fafbf772cb83ccd44bb5b607f8108a64c" + }, + { + "pattern": "^https://github\\.com/matter-labs/zksync-era/commit/" + } + ], + "aliveStatusCodes": [0, 200, 206, 304] +} \ No newline at end of file diff --git a/contracts b/contracts index f97c03ac20b..79f4c20c2c5 160000 --- a/contracts +++ b/contracts @@ -1 +1 @@ -Subproject commit f97c03ac20b9c5b7246cedb544fa3fa4f85460b4 +Subproject commit 79f4c20c2c58c2134823e15a9dda38137af0d03d diff --git a/core/CHANGELOG.md b/core/CHANGELOG.md index 1b182d12dfc..25e51fef18e 100644 --- a/core/CHANGELOG.md +++ b/core/CHANGELOG.md @@ -1,5 +1,248 @@ # Changelog +## [19.2.0](https://github.com/matter-labs/zksync-era/compare/core-v19.1.1...core-v19.2.0) (2024-01-17) + + +### 
Features + +* adds `zk linkcheck` to zk tool and updates zk env for `zk linkcheck` ci usage ([#868](https://github.com/matter-labs/zksync-era/issues/868)) ([d64f584](https://github.com/matter-labs/zksync-era/commit/d64f584f6d505b19cd6424928e9dc68e370e17fd)) +* **contract-verifier:** Support zkVM solc contract verification ([#854](https://github.com/matter-labs/zksync-era/issues/854)) ([1ed5a95](https://github.com/matter-labs/zksync-era/commit/1ed5a95462dbd73151acd8afbc4ab6158a2aecda)) +* **en:** Make batch status updater work with pruned data ([#863](https://github.com/matter-labs/zksync-era/issues/863)) ([3a07890](https://github.com/matter-labs/zksync-era/commit/3a07890dacebf6179636c44d7cce1afd21ab49eb)) +* rewritten gossip sync to be async from block processing ([#711](https://github.com/matter-labs/zksync-era/issues/711)) ([3af4644](https://github.com/matter-labs/zksync-era/commit/3af4644f428af0328cdea0fbae8a8f965489c6c4)) + +## [19.1.1](https://github.com/matter-labs/zksync-era/compare/core-v19.1.0...core-v19.1.1) (2024-01-12) + + +### Bug Fixes + +* **vm:** `inspect_transaction_with_bytecode_compression` for old VMs ([#862](https://github.com/matter-labs/zksync-era/issues/862)) ([077c0c6](https://github.com/matter-labs/zksync-era/commit/077c0c689317fa33c9bf3623942b565e8471f418)) + +## [19.1.0](https://github.com/matter-labs/zksync-era/compare/core-v19.0.0...core-v19.1.0) (2024-01-10) + + +### Features + +* address remaining spelling issues in dev comments and turns on dev_comments in cfg ([#827](https://github.com/matter-labs/zksync-era/issues/827)) ([1fd0afd](https://github.com/matter-labs/zksync-era/commit/1fd0afdcd9b6c344e1f5dac93fda5aa25c106b2f)) +* **core:** removes multiple tokio runtimes and worker number setting. ([#826](https://github.com/matter-labs/zksync-era/issues/826)) ([b8b190f](https://github.com/matter-labs/zksync-era/commit/b8b190f886f1d13602a0b2cc8a2b8525e68b1033)) +* fix spelling in dev comments in `core/lib/*` - continued ([#683](https://github.com/matter-labs/zksync-era/issues/683)) ([0421fe6](https://github.com/matter-labs/zksync-era/commit/0421fe6b3e9629fdad2fb88ad5710200825adc91)) +* fix spelling in dev comments in `core/lib/*` - continued ([#684](https://github.com/matter-labs/zksync-era/issues/684)) ([b46c2e9](https://github.com/matter-labs/zksync-era/commit/b46c2e9cbbcd048647f998810c8d550f8ad0c1f4)) +* fix spelling in dev comments in `core/lib/multivm` - continued ([#682](https://github.com/matter-labs/zksync-era/issues/682)) ([3839d39](https://github.com/matter-labs/zksync-era/commit/3839d39eb6b6d111ec556948c88d1eb9c6ab5e4a)) +* fix spelling in dev comments in `core/lib/zksync_core` - continued ([#685](https://github.com/matter-labs/zksync-era/issues/685)) ([70c3feb](https://github.com/matter-labs/zksync-era/commit/70c3febbf0445d2e0c22a942eaf643828aee045d)) +* **state-keeper:** circuits seal criterion ([#729](https://github.com/matter-labs/zksync-era/issues/729)) ([c4a86bb](https://github.com/matter-labs/zksync-era/commit/c4a86bbbc5697b5391a517299bbd7a5e882a7314)) +* **state-keeper:** Reject transactions that fail to publish bytecodes ([#832](https://github.com/matter-labs/zksync-era/issues/832)) ([0a010f0](https://github.com/matter-labs/zksync-era/commit/0a010f0a6f6682cedc49cb12ab9f9dfcdbccf68e)) +* **vm:** Add batch input abstraction ([#817](https://github.com/matter-labs/zksync-era/issues/817)) ([997db87](https://github.com/matter-labs/zksync-era/commit/997db872455351a484c3161d0a733a4bc59dd684)) + + +### Bug Fixes + +* oldest unpicked batch 
([#692](https://github.com/matter-labs/zksync-era/issues/692)) ([a6c869d](https://github.com/matter-labs/zksync-era/commit/a6c869d88c64a986405bbdfb15cab88e910d1e03)) +* **state-keeper:** Updates manager keeps track of fictive block metrics ([#843](https://github.com/matter-labs/zksync-era/issues/843)) ([88fd724](https://github.com/matter-labs/zksync-era/commit/88fd7247c377efce703cd1caeffa4ecd61ed0d7f)) +* **vm:** fix circuit tracer ([#837](https://github.com/matter-labs/zksync-era/issues/837)) ([83fc7be](https://github.com/matter-labs/zksync-era/commit/83fc7be3cb9f4d3082b8b9fa8b8f568330bf744f)) + +## [19.0.0](https://github.com/matter-labs/zksync-era/compare/core-v18.13.0...core-v19.0.0) (2024-01-05) + + +### ⚠ BREAKING CHANGES + +* **vm:** Release v19 - remove allowlist ([#747](https://github.com/matter-labs/zksync-era/issues/747)) + +### Features + +* **en:** Make consistency checker work with pruned data ([#742](https://github.com/matter-labs/zksync-era/issues/742)) ([ae6e18e](https://github.com/matter-labs/zksync-era/commit/ae6e18e5412cadefbc03307a476d6b96c41f04e1)) +* **eth_sender:** Remove generic bounds on L1TxParamsProvider in EthSender ([#799](https://github.com/matter-labs/zksync-era/issues/799)) ([29a4f52](https://github.com/matter-labs/zksync-era/commit/29a4f5299c95e0b338010a6baf83f196ece3a530)) +* **merkle tree:** Finalize metadata calculator snapshot recovery logic ([#798](https://github.com/matter-labs/zksync-era/issues/798)) ([c83db35](https://github.com/matter-labs/zksync-era/commit/c83db35f0929a412bc4d89fbee1448d32c54a83f)) +* **prover:** Remove circuit-synthesizer ([#801](https://github.com/matter-labs/zksync-era/issues/801)) ([1426b1b](https://github.com/matter-labs/zksync-era/commit/1426b1ba3c8b700e0531087b781ced0756c12e3c)) +* **prover:** Remove old prover ([#810](https://github.com/matter-labs/zksync-era/issues/810)) ([8be1925](https://github.com/matter-labs/zksync-era/commit/8be1925b18dcbf268eb03b8ea5f07adfd5330876)) +* **snapshot creator:** Make snapshot creator fault-tolerant ([#691](https://github.com/matter-labs/zksync-era/issues/691)) ([286c7d1](https://github.com/matter-labs/zksync-era/commit/286c7d15a623604e01effa7119de3362f0fb4eb9)) +* **vm:** Add boojum integration folder ([#805](https://github.com/matter-labs/zksync-era/issues/805)) ([4071e90](https://github.com/matter-labs/zksync-era/commit/4071e90578e0fc8c027a4d2a30d09d96db942b4f)) +* **vm:** Make utils version-dependent ([#809](https://github.com/matter-labs/zksync-era/issues/809)) ([e5fbcb5](https://github.com/matter-labs/zksync-era/commit/e5fbcb5dfc2a7d2582f40a481c861fb2f4dd5fb0)) +* **vm:** Release v19 - remove allowlist ([#747](https://github.com/matter-labs/zksync-era/issues/747)) ([0e2bc56](https://github.com/matter-labs/zksync-era/commit/0e2bc561b9642b854718adcc86087a3e9762cf5d)) +* **vm:** Separate boojum integration vm ([#806](https://github.com/matter-labs/zksync-era/issues/806)) ([61712a6](https://github.com/matter-labs/zksync-era/commit/61712a636f69be70d75719c04f364d679ef624e0)) + + +### Bug Fixes + +* **db:** Fix parsing statement timeout from env ([#818](https://github.com/matter-labs/zksync-era/issues/818)) ([3f663ec](https://github.com/matter-labs/zksync-era/commit/3f663eca2f38f4373339ad024e6578099c693af6)) +* **prover:** Remove old prover subsystems tables ([#812](https://github.com/matter-labs/zksync-era/issues/812)) ([9d0aefc](https://github.com/matter-labs/zksync-era/commit/9d0aefc1ef4992e19d7b15ec1ce34697e61a3464)) +* **prover:** Remove prover-utils from core 
([#819](https://github.com/matter-labs/zksync-era/issues/819)) ([2ceb911](https://github.com/matter-labs/zksync-era/commit/2ceb9114659f4c4583c87b1bbc8ee230eb1c44db)) + +## [18.13.0](https://github.com/matter-labs/zksync-era/compare/core-v18.12.0...core-v18.13.0) (2024-01-02) + + +### Features + +* **contract-verifier:** add zksolc v1.3.19 ([#797](https://github.com/matter-labs/zksync-era/issues/797)) ([2635570](https://github.com/matter-labs/zksync-era/commit/26355705c8c084344464458f3275c311c392c47f)) +* Remove generic bounds on L1GasPriceProvider ([#792](https://github.com/matter-labs/zksync-era/issues/792)) ([edf071d](https://github.com/matter-labs/zksync-era/commit/edf071d39d4dd8e297fd2fb2244574d5e0537b38)) +* Remove TPS limiter from TX Sender ([#793](https://github.com/matter-labs/zksync-era/issues/793)) ([d0e9296](https://github.com/matter-labs/zksync-era/commit/d0e929652eb431f6b1bc20f83d7c21d2a978293a)) + +## [18.12.0](https://github.com/matter-labs/zksync-era/compare/core-v18.11.0...core-v18.12.0) (2023-12-25) + + +### Features + +* **get-tokens:** filter tokens by `well_known` ([#767](https://github.com/matter-labs/zksync-era/issues/767)) ([9c99e13](https://github.com/matter-labs/zksync-era/commit/9c99e13ca0a4de678a4ce5bf7e2d5880d79c0e66)) + +## [18.11.0](https://github.com/matter-labs/zksync-era/compare/core-v18.10.3...core-v18.11.0) (2023-12-25) + + +### Features + +* Revert "feat: Remove zks_getConfirmedTokens method" ([#765](https://github.com/matter-labs/zksync-era/issues/765)) ([6e7ed12](https://github.com/matter-labs/zksync-era/commit/6e7ed124e816f5ba1d2ba3e8efaf281cd2c055dd)) + +## [18.10.3](https://github.com/matter-labs/zksync-era/compare/core-v18.10.2...core-v18.10.3) (2023-12-25) + + +### Bug Fixes + +* **core:** do not unwrap unexisting calldata in commitment and regenerate it ([#762](https://github.com/matter-labs/zksync-era/issues/762)) ([ec104ef](https://github.com/matter-labs/zksync-era/commit/ec104ef01136d1a455f40163c2ced92dbc5917e2)) + +## [18.10.2](https://github.com/matter-labs/zksync-era/compare/core-v18.10.1...core-v18.10.2) (2023-12-25) + + +### Bug Fixes + +* **vm:** Get pubdata bytes from vm ([#756](https://github.com/matter-labs/zksync-era/issues/756)) ([6c6f1ab](https://github.com/matter-labs/zksync-era/commit/6c6f1ab078485669002e50197b35ab1b6a38cdb9)) + +## [18.10.1](https://github.com/matter-labs/zksync-era/compare/core-v18.10.0...core-v18.10.1) (2023-12-25) + + +### Bug Fixes + +* **sequencer:** don't stall blockchain on failed L1 tx ([#759](https://github.com/matter-labs/zksync-era/issues/759)) ([50cd7c4](https://github.com/matter-labs/zksync-era/commit/50cd7c41f71757a3f2ffb36a6c1e1fa6b4372703)) + +## [18.10.0](https://github.com/matter-labs/zksync-era/compare/core-v18.9.0...core-v18.10.0) (2023-12-25) + + +### Features + +* **api:** Add metrics for `jsonrpsee` subscriptions ([#733](https://github.com/matter-labs/zksync-era/issues/733)) ([39fd71c](https://github.com/matter-labs/zksync-era/commit/39fd71cc2a0ffda45933fc99c4dac6d9beb92ad0)) +* **api:** remove jsonrpc backend ([#693](https://github.com/matter-labs/zksync-era/issues/693)) ([b3f0417](https://github.com/matter-labs/zksync-era/commit/b3f0417fd4512f98d7e579eb5b3b03c7f4b92e18)) +* applied status snapshots dal ([#679](https://github.com/matter-labs/zksync-era/issues/679)) ([2e9f23b](https://github.com/matter-labs/zksync-era/commit/2e9f23b46c31a9538d4a55bed75c5df3ed8e8f63)) +* **en:** Make reorg detector work with pruned data ([#712](https://github.com/matter-labs/zksync-era/issues/712)) 
([c4185d5](https://github.com/matter-labs/zksync-era/commit/c4185d5b6526cc9ec42e6941d76453cb693988bd)) +* Remove data fetchers ([#694](https://github.com/matter-labs/zksync-era/issues/694)) ([f48d677](https://github.com/matter-labs/zksync-era/commit/f48d6773e1e30fede44075f8862c68e7a8173cbb)) +* Remove zks_getConfirmedTokens method ([#719](https://github.com/matter-labs/zksync-era/issues/719)) ([9298b1b](https://github.com/matter-labs/zksync-era/commit/9298b1b916ad5f81160c66c061370f804d129d97)) + + +### Bug Fixes + +* added waiting for prometheus to finish ([#745](https://github.com/matter-labs/zksync-era/issues/745)) ([eed330d](https://github.com/matter-labs/zksync-era/commit/eed330dd2e47114d9d0ea29c074259a0bc016f78)) +* **EN:** temporary produce a warning on pubdata mismatch with L1 ([#758](https://github.com/matter-labs/zksync-era/issues/758)) ([0a7a4da](https://github.com/matter-labs/zksync-era/commit/0a7a4da52926d1db8dfe72aef78390cba3754627)) +* **prover:** Add logging for prover + WVGs ([#723](https://github.com/matter-labs/zksync-era/issues/723)) ([d7ce14c](https://github.com/matter-labs/zksync-era/commit/d7ce14c5d0434326a1ebf406d77c20676ae526ae)) +* remove leftovers after [#693](https://github.com/matter-labs/zksync-era/issues/693) ([#720](https://github.com/matter-labs/zksync-era/issues/720)) ([e93aa35](https://github.com/matter-labs/zksync-era/commit/e93aa358c43e60d5640224e5422a40d91cd4b9a0)) + +## [18.9.0](https://github.com/matter-labs/zksync-era/compare/core-v18.8.0...core-v18.9.0) (2023-12-19) + + +### Features + +* Add ecadd and ecmul to the list of precompiles upon genesis ([#669](https://github.com/matter-labs/zksync-era/issues/669)) ([0be35b8](https://github.com/matter-labs/zksync-era/commit/0be35b82fc63e88b6d709b644e437194f7559483)) +* **api:** Do not return receipt if tx was not included to the batch ([#706](https://github.com/matter-labs/zksync-era/issues/706)) ([625d632](https://github.com/matter-labs/zksync-era/commit/625d632934ac63ad7479de50d65f83e6f144c7dd)) +* proto serialization/deserialization of snapshots creator objects ([#667](https://github.com/matter-labs/zksync-era/issues/667)) ([9f096a4](https://github.com/matter-labs/zksync-era/commit/9f096a4dd362fbd74a35fa1e9af4f111f69f4317)) +* zk fmt sqlx-queries ([#533](https://github.com/matter-labs/zksync-era/issues/533)) ([6982343](https://github.com/matter-labs/zksync-era/commit/69823439675411b3239ef0a24c6bfe4d3610161b)) + + +### Bug Fixes + +* **en:** Downgrade miniblock hash equality assertion to warning ([#695](https://github.com/matter-labs/zksync-era/issues/695)) ([2ef3ec8](https://github.com/matter-labs/zksync-era/commit/2ef3ec804573ba4bbf8f44f19a3b5616b297c796)) + + +### Performance Improvements + +* remove unnecessary to_vec ([#702](https://github.com/matter-labs/zksync-era/issues/702)) ([c55a658](https://github.com/matter-labs/zksync-era/commit/c55a6582eae3af7f92cdeceb4e50b81701665f96)) + +## [18.8.0](https://github.com/matter-labs/zksync-era/compare/core-v18.7.0...core-v18.8.0) (2023-12-13) + + +### Features + +* **api:** Sunset API translator ([#675](https://github.com/matter-labs/zksync-era/issues/675)) ([846fd33](https://github.com/matter-labs/zksync-era/commit/846fd33a74734520ae1bb57d8bc8abca71e16f25)) +* **core:** Merge bounded and unbounded gas adjuster ([#678](https://github.com/matter-labs/zksync-era/issues/678)) ([f3c3bf5](https://github.com/matter-labs/zksync-era/commit/f3c3bf53b3136b2fe8c17638c83fda3328fd6033)) +* **dal:** Make ConnectionPoolBuilder owned 
([#676](https://github.com/matter-labs/zksync-era/issues/676)) ([1153c42](https://github.com/matter-labs/zksync-era/commit/1153c42f9d0e7cfe78da64d4508974e74afea4ee)) +* Implemented 1 validator consensus for the main node ([#554](https://github.com/matter-labs/zksync-era/issues/554)) ([9c59838](https://github.com/matter-labs/zksync-era/commit/9c5983858d9dd84de360e6a082369a06bb58e924)) +* **merkle tree:** Snapshot recovery in metadata calculator ([#607](https://github.com/matter-labs/zksync-era/issues/607)) ([f49418b](https://github.com/matter-labs/zksync-era/commit/f49418b24cdfa905e571568cb3393296c951e903)) + + +### Bug Fixes + +* dropping installed filters ([#670](https://github.com/matter-labs/zksync-era/issues/670)) ([985c737](https://github.com/matter-labs/zksync-era/commit/985c7375f6fa192b45473d8ba0b7dacb9314a482)) + +## [18.7.0](https://github.com/matter-labs/zksync-era/compare/core-v18.6.1...core-v18.7.0) (2023-12-12) + + +### Features + +* **contract-verifier:** Add zksolc v1.3.18 ([#654](https://github.com/matter-labs/zksync-era/issues/654)) ([77f91fe](https://github.com/matter-labs/zksync-era/commit/77f91fe253a0876e56de4aee47071fe249386fc7)) +* **en:** Check block hash correspondence ([#572](https://github.com/matter-labs/zksync-era/issues/572)) ([28f5642](https://github.com/matter-labs/zksync-era/commit/28f5642c35800997879bc549fca9e960c4516d21)) +* **en:** Remove `SyncBlock.root_hash` ([#633](https://github.com/matter-labs/zksync-era/issues/633)) ([d4cc6e5](https://github.com/matter-labs/zksync-era/commit/d4cc6e564642b4c49ef4a546cd1c86821327683c)) +* Snapshot Creator ([#498](https://github.com/matter-labs/zksync-era/issues/498)) ([270edee](https://github.com/matter-labs/zksync-era/commit/270edee34402ecbd1761bc1fca559ef2205f71e8)) + + +### Bug Fixes + +* Cursor not moving correctly after poll in `get_filter_changes` ([#546](https://github.com/matter-labs/zksync-era/issues/546)) ([ec5907b](https://github.com/matter-labs/zksync-era/commit/ec5907b70ff7d868a05b685a1641d96dc4fa9d69)) +* fix docs error ([#635](https://github.com/matter-labs/zksync-era/issues/635)) ([883c128](https://github.com/matter-labs/zksync-era/commit/883c1282f7771fb16a41d45391b74243021271e3)) +* follow up metrics fixes ([#648](https://github.com/matter-labs/zksync-era/issues/648)) ([a317c7a](https://github.com/matter-labs/zksync-era/commit/a317c7ab68219cb376d08c8d1ec210c63b3c269f)) +* Follow up metrics fixes vol.2 ([#656](https://github.com/matter-labs/zksync-era/issues/656)) ([5c1aea2](https://github.com/matter-labs/zksync-era/commit/5c1aea2a94d7eded26c3a4ae4973ff983c15e7fa)) +* **job-processor:** `max_attepts_reached` metric ([#626](https://github.com/matter-labs/zksync-era/issues/626)) ([dd9b308](https://github.com/matter-labs/zksync-era/commit/dd9b308be9b0a6e37aad75f6f54b98e30a2ae14e)) +* update google cloud dependencies that do not depend on rsa ([#622](https://github.com/matter-labs/zksync-era/issues/622)) ([8a8cad6](https://github.com/matter-labs/zksync-era/commit/8a8cad6ce62f2d34bb34adcd956f6920c08f94b8)) + +## [18.6.1](https://github.com/matter-labs/zksync-era/compare/core-v18.6.0...core-v18.6.1) (2023-12-06) + + +### Performance Improvements + +* **external-node:** Use async miniblock sealing in external IO ([#611](https://github.com/matter-labs/zksync-era/issues/611)) ([5cf7210](https://github.com/matter-labs/zksync-era/commit/5cf7210dc77bb615944352f23ed39fad324b914f)) + +## [18.6.0](https://github.com/matter-labs/zksync-era/compare/core-v18.5.0...core-v18.6.0) (2023-12-05) + + +### Features + +* 
**contract-verifier:** Support verification for zksolc v1.3.17 ([#606](https://github.com/matter-labs/zksync-era/issues/606)) ([b65fedd](https://github.com/matter-labs/zksync-era/commit/b65fedd6894497a4c9fbf38d558ccfaca535d1d2)) + + +### Bug Fixes + +* Fix database connections in house keeper ([#610](https://github.com/matter-labs/zksync-era/issues/610)) ([aeaaecb](https://github.com/matter-labs/zksync-era/commit/aeaaecb54b6bd3f173727531418dc242357b2aee)) + +## [18.5.0](https://github.com/matter-labs/zksync-era/compare/core-v18.4.0...core-v18.5.0) (2023-12-05) + + +### Features + +* Add metric to CallTracer for calculating maximum depth of the calls ([#535](https://github.com/matter-labs/zksync-era/issues/535)) ([19c84ce](https://github.com/matter-labs/zksync-era/commit/19c84ce624d53735133fa3b12c7f980e8c14260d)) +* Add various metrics to the Prover subsystems ([#541](https://github.com/matter-labs/zksync-era/issues/541)) ([58a4e6c](https://github.com/matter-labs/zksync-era/commit/58a4e6c4c22bd7f002ede1c6def0dc260706185e)) + + +### Bug Fixes + +* Sync protocol version between consensus and server blocks ([#568](https://github.com/matter-labs/zksync-era/issues/568)) ([56776f9](https://github.com/matter-labs/zksync-era/commit/56776f929f547b1a91c5b70f89e87ef7dc25c65a)) + +## [18.4.0](https://github.com/matter-labs/zksync-era/compare/core-v18.3.1...core-v18.4.0) (2023-12-01) + + +### Features + +* adds spellchecker workflow, and corrects misspelled words ([#559](https://github.com/matter-labs/zksync-era/issues/559)) ([beac0a8](https://github.com/matter-labs/zksync-era/commit/beac0a85bb1535b05c395057171f197cd976bf82)) +* **en:** Support arbitrary genesis block for external nodes ([#537](https://github.com/matter-labs/zksync-era/issues/537)) ([15d7eaf](https://github.com/matter-labs/zksync-era/commit/15d7eaf872e222338810243865cec9dff7f6e799)) +* **merkle tree:** Remove enumeration index assignment from Merkle tree ([#551](https://github.com/matter-labs/zksync-era/issues/551)) ([e2c1b20](https://github.com/matter-labs/zksync-era/commit/e2c1b20e361e6ee2f5ac69cefe75d9c5575eb2f7)) +* Restore commitment test in Boojum integration ([#539](https://github.com/matter-labs/zksync-era/issues/539)) ([06f510d](https://github.com/matter-labs/zksync-era/commit/06f510d00f855ddafaebb504f7ea799700221072)) + + +### Bug Fixes + +* Change no pending batches 404 error into a success response ([#279](https://github.com/matter-labs/zksync-era/issues/279)) ([e8fd805](https://github.com/matter-labs/zksync-era/commit/e8fd805c8be7980de7676bca87cfc2d445aab9e1)) +* **vm:** Expose additional types and traits ([#563](https://github.com/matter-labs/zksync-era/issues/563)) ([bd268ac](https://github.com/matter-labs/zksync-era/commit/bd268ac02bc3530c1d3247cb9496c3e13c2e52d9)) +* **witness_generator:** Disable BWIP dependency ([#573](https://github.com/matter-labs/zksync-era/issues/573)) ([e05d955](https://github.com/matter-labs/zksync-era/commit/e05d955036c76a29f9b6e900872c69e20278e045)) + +## [18.3.1](https://github.com/matter-labs/zksync-era/compare/core-v18.3.0...core-v18.3.1) (2023-11-28) + + +### Bug Fixes + +* **external-node:** Check txs at insert time instead of read time ([#555](https://github.com/matter-labs/zksync-era/issues/555)) ([9ea02a1](https://github.com/matter-labs/zksync-era/commit/9ea02a1b2e7c861882f10c8cbe1997f6bb96d9cf)) +* Update comments post-hotfix ([#556](https://github.com/matter-labs/zksync-era/issues/556)) 
([339e450](https://github.com/matter-labs/zksync-era/commit/339e45035e85eba7d60b533221be92ce78643705)) + ## [18.3.0](https://github.com/matter-labs/zksync-era/compare/core-v18.2.0...core-v18.3.0) (2023-11-28) @@ -200,7 +443,7 @@ ### Features * Implement dynamic L2-to-L1 log tree depth ([#126](https://github.com/matter-labs/zksync-era/issues/126)) ([7dfbc5e](https://github.com/matter-labs/zksync-era/commit/7dfbc5eddab94cd24f96912e0d43ba36e1cf363f)) -* **vm:** Introduce new way of returning from the tracer [#2569](https://github.com/matter-labs/zksync-era/issues/2569) ([#116](https://github.com/matter-labs/zksync-era/issues/116)) ([cf44a49](https://github.com/matter-labs/zksync-era/commit/cf44a491a324199b4cf457d28658da44b6dafc61)) +* **vm:** Introduce new way of returning from the tracer [#2569](https://github.com/matter-labs/zksync-2-dev/issues/2569) ([#116](https://github.com/matter-labs/zksync-era/issues/116)) ([cf44a49](https://github.com/matter-labs/zksync-era/commit/cf44a491a324199b4cf457d28658da44b6dafc61)) * **vm:** Restore system-constants-generator ([#115](https://github.com/matter-labs/zksync-era/issues/115)) ([5e61bdc](https://github.com/matter-labs/zksync-era/commit/5e61bdc75b2baa03004d4d3e801170c094766964)) ## [15.0.1](https://github.com/matter-labs/zksync-2-dev/compare/core-v15.0.0...core-v15.0.1) (2023-09-27) @@ -236,7 +479,7 @@ * **prover-fri:** added picked-by column in prover fri related tables ([#2600](https://github.com/matter-labs/zksync-2-dev/issues/2600)) ([9e604ab](https://github.com/matter-labs/zksync-2-dev/commit/9e604abf3bae11b6f583f2abd39c07a85dc20f0a)) * update verification keys, protocol version 15 ([#2602](https://github.com/matter-labs/zksync-2-dev/issues/2602)) ([2fff59b](https://github.com/matter-labs/zksync-2-dev/commit/2fff59bab00849996864b68e932739135337ebd7)) * **vlog:** Rework the observability configuration subsystem ([#2608](https://github.com/matter-labs/zksync-2-dev/issues/2608)) ([377f0c5](https://github.com/matter-labs/zksync-2-dev/commit/377f0c5f734c979bc990b429dff0971466872e71)) -* **vm:** Multivm tracer support ([#2601](https://github.com/matter-labs/zksync-2-dev/issues/2601)) ([4a7467b](https://github.com/matter-labs/zksync-2-dev/commit/4a7467b1b1556bfd795792dbe280bcf28c93a58f)) +* **vm:** MultiVM tracer support ([#2601](https://github.com/matter-labs/zksync-2-dev/issues/2601)) ([4a7467b](https://github.com/matter-labs/zksync-2-dev/commit/4a7467b1b1556bfd795792dbe280bcf28c93a58f)) ## [8.7.0](https://github.com/matter-labs/zksync-2-dev/compare/core-v8.6.0...core-v8.7.0) (2023-09-19) diff --git a/core/bin/block_reverter/src/main.rs b/core/bin/block_reverter/src/main.rs index 3958f4dec11..f7cbc20f554 100644 --- a/core/bin/block_reverter/src/main.rs +++ b/core/bin/block_reverter/src/main.rs @@ -1,15 +1,13 @@ use anyhow::Context as _; use clap::{Parser, Subcommand}; use tokio::io::{self, AsyncReadExt}; - use zksync_config::{ContractsConfig, DBConfig, ETHClientConfig, ETHSenderConfig, PostgresConfig}; -use zksync_dal::ConnectionPool; -use zksync_env_config::FromEnv; -use zksync_types::{L1BatchNumber, U256}; - use zksync_core::block_reverter::{ BlockReverter, BlockReverterEthConfig, BlockReverterFlags, L1ExecutedBatchesRevert, }; +use zksync_dal::ConnectionPool; +use zksync_env_config::FromEnv; +use zksync_types::{L1BatchNumber, U256}; #[derive(Debug, Parser)] #[command(author = "Matter Labs", version, about = "Block revert utility", long_about = None)] @@ -33,8 +31,8 @@ enum Command { /// L1 batch number used to rollback to. 
        #[arg(long)]
        l1_batch_number: u32,
-        /// Priority fee used for rollback ethereum transaction.
-        // We operate only by priority fee because we want to use base fee from ethereum
+        /// Priority fee used for rollback Ethereum transaction.
+        // We operate only by priority fee because we want to use base fee from Ethereum
         // and send transaction as soon as possible without any resend logic
         #[arg(long)]
         priority_fee_per_gas: Option<u64>,
diff --git a/core/bin/contract-verifier/src/main.rs b/core/bin/contract-verifier/src/main.rs
index 05ee51139dd..477b4dc0722 100644
--- a/core/bin/contract-verifier/src/main.rs
+++ b/core/bin/contract-verifier/src/main.rs
@@ -1,16 +1,15 @@
 use std::cell::RefCell;
 
 use anyhow::Context as _;
+use futures::{channel::mpsc, executor::block_on, SinkExt, StreamExt};
 use prometheus_exporter::PrometheusExporterConfig;
+use tokio::sync::watch;
 use zksync_config::{configs::PrometheusConfig, ApiConfig, ContractVerifierConfig, PostgresConfig};
 use zksync_dal::ConnectionPool;
 use zksync_env_config::FromEnv;
 use zksync_queued_job_processor::JobProcessor;
 use zksync_utils::wait_for_tasks::wait_for_tasks;
-use futures::{channel::mpsc, executor::block_on, SinkExt, StreamExt};
-use tokio::sync::watch;
-
 use crate::verifier::ContractVerifier;
 
 pub mod error;
@@ -179,7 +178,7 @@ async fn main() -> anyhow::Result<()> {
     let contract_verifier = ContractVerifier::new(verifier_config, pool);
     let tasks = vec![
-        // todo PLA-335: Leftovers after the prover DB split.
+        // TODO PLA-335: Leftovers after the prover DB split.
         // The prover connection pool is not used by the contract verifier, but we need to pass it
         // since `JobProcessor` trait requires it.
         tokio::spawn(contract_verifier.run(stop_receiver.clone(), opt.jobs_number)),
diff --git a/core/bin/contract-verifier/src/verifier.rs b/core/bin/contract-verifier/src/verifier.rs
index e34b4784c1c..4bb2e7745ee 100644
--- a/core/bin/contract-verifier/src/verifier.rs
+++ b/core/bin/contract-verifier/src/verifier.rs
@@ -1,7 +1,9 @@
-use std::collections::HashMap;
-use std::env;
-use std::path::Path;
-use std::time::{Duration, Instant};
+use std::{
+    collections::HashMap,
+    env,
+    path::Path,
+    time::{Duration, Instant},
+};
 
 use anyhow::Context as _;
 use chrono::Utc;
@@ -9,7 +11,6 @@ use ethabi::{Contract, Token};
 use lazy_static::lazy_static;
 use regex::Regex;
 use tokio::time;
-
 use zksync_config::ContractVerifierConfig;
 use zksync_dal::{ConnectionPool, StorageProcessor};
 use zksync_env_config::FromEnv;
@@ -22,11 +23,11 @@ use zksync_types::{
     Address,
 };
 
-use crate::error::ContractVerifierError;
-use crate::zksolc_utils::{
-    Optimizer, Settings, Source, StandardJson, ZkSolc, ZkSolcInput, ZkSolcOutput,
+use crate::{
+    error::ContractVerifierError,
+    zksolc_utils::{Optimizer, Settings, Source, StandardJson, ZkSolc, ZkSolcInput, ZkSolcOutput},
+    zkvyper_utils::{ZkVyper, ZkVyperInput},
 };
-use crate::zkvyper_utils::{ZkVyper, ZkVyperInput};
 
 lazy_static! {
    static ref DEPLOYER_CONTRACT: Contract = zksync_contracts::deployer_contract();
@@ -74,6 +75,12 @@ impl ContractVerifier {
         );
 
         if artifacts.bytecode != deployed_bytecode {
+            tracing::info!(
+                "Bytecode mismatch req {}, deployed: 0x{}, compiled 0x{}",
+                request.id,
+                hex::encode(deployed_bytecode),
+                hex::encode(artifacts.bytecode)
+            );
             return Err(ContractVerifierError::BytecodeMismatch);
         }
diff --git a/core/bin/contract-verifier/src/zksolc_utils.rs b/core/bin/contract-verifier/src/zksolc_utils.rs
index 4fba999453c..560bacb809f 100644
--- a/core/bin/contract-verifier/src/zksolc_utils.rs
+++ b/core/bin/contract-verifier/src/zksolc_utils.rs
@@ -1,8 +1,6 @@
+use std::{collections::HashMap, io::Write, path::PathBuf, process::Stdio};
+
 use serde::{Deserialize, Serialize};
-use std::collections::HashMap;
-use std::io::Write;
-use std::path::PathBuf;
-use std::process::Stdio;
 
 use crate::error::ContractVerifierError;
diff --git a/core/bin/contract-verifier/src/zkvyper_utils.rs b/core/bin/contract-verifier/src/zkvyper_utils.rs
index 33a99f256f9..c597f78d458 100644
--- a/core/bin/contract-verifier/src/zkvyper_utils.rs
+++ b/core/bin/contract-verifier/src/zkvyper_utils.rs
@@ -1,8 +1,4 @@
-use std::collections::HashMap;
-use std::fs::File;
-use std::io::Write;
-use std::path::PathBuf;
-use std::process::Stdio;
+use std::{collections::HashMap, fs::File, io::Write, path::PathBuf, process::Stdio};
 
 use crate::error::ContractVerifierError;
diff --git a/core/bin/external_node/Cargo.toml b/core/bin/external_node/Cargo.toml
index 7fba647913a..272bcfc081c 100644
--- a/core/bin/external_node/Cargo.toml
+++ b/core/bin/external_node/Cargo.toml
@@ -26,6 +26,8 @@ zksync_web3_decl = { path = "../../lib/web3_decl" }
 zksync_types = { path = "../../lib/types" }
 vlog = { path = "../../lib/vlog" }
 
+vise = { git = "https://github.com/matter-labs/vise.git", version = "0.1.0", rev = "1c9cc500e92cf9ea052b230e114a6f9cce4fb2c1" }
+
 anyhow = "1.0"
 tokio = { version = "1", features = ["full"] }
 futures = "0.3"
@@ -33,4 +35,6 @@ serde = { version = "1.0", features = ["derive"] }
 envy = "0.4"
 url = "2.4"
 clap = { version = "4.2.4", features = ["derive"] }
+serde_json = "1"
+semver = "1"
 tracing = "0.1"
diff --git a/core/bin/external_node/src/config/mod.rs b/core/bin/external_node/src/config/mod.rs
index 00ae9d1da1b..6bf06f04930 100644
--- a/core/bin/external_node/src/config/mod.rs
+++ b/core/bin/external_node/src/config/mod.rs
@@ -1,17 +1,17 @@
+use std::{env, time::Duration};
+
 use anyhow::Context;
 use serde::Deserialize;
-use std::{env, time::Duration};
 use url::Url;
-
-use zksync_basic_types::{Address, L1ChainId, L2ChainId, MiniblockNumber};
+use zksync_basic_types::{Address, L1ChainId, L2ChainId};
 use zksync_core::api_server::{
-    tx_sender::TxSenderConfig, web3::state::InternalApiConfig, web3::Namespace,
+    tx_sender::TxSenderConfig,
+    web3::{state::InternalApiConfig, Namespace},
 };
 use zksync_types::api::BridgeAddresses;
-
 use zksync_web3_decl::{
     jsonrpsee::http_client::{HttpClient, HttpClientBuilder},
-    namespaces::{EnNamespaceClient, EthNamespaceClient, ZksNamespaceClient},
+    namespaces::{EthNamespaceClient, ZksNamespaceClient},
 };
 
 #[cfg(test)]
@@ -30,8 +30,6 @@ pub struct RemoteENConfig {
     pub l2_testnet_paymaster_addr: Option<Address>,
    pub l2_chain_id: L2ChainId,
     pub l1_chain_id: L1ChainId,
-
-    pub fair_l2_gas_price: u64,
 }
 
 impl RemoteENConfig {
@@ -63,15 +61,6 @@ impl RemoteENConfig {
             .context("Failed to fetch L1 chain ID")?
             .as_u64(),
         );
-        let current_miniblock = client
-            .get_block_number()
-            .await
-            .context("Failed to fetch block number")?;
-        let block_header = client
-            .sync_l2_block(MiniblockNumber(current_miniblock.as_u32()), false)
-            .await
-            .context("Failed to fetch last miniblock header")?
-            .expect("Block is known to exist");
 
         Ok(Self {
             diamond_proxy_addr,
@@ -82,7 +71,6 @@ impl RemoteENConfig {
             l2_weth_bridge_addr: bridges.l2_weth_bridge,
             l2_chain_id,
             l1_chain_id,
-            fair_l2_gas_price: block_header.l2_fair_gas_price,
         })
     }
 }
@@ -105,10 +93,10 @@ pub struct OptionalENConfig {
     /// Max possible size of an ABI encoded tx (in bytes).
     #[serde(default = "OptionalENConfig::default_max_tx_size")]
     pub max_tx_size: usize,
-    /// Max number of cache misses during one VM execution. If the number of cache misses exceeds this value, the api server panics.
+    /// Max number of cache misses during one VM execution. If the number of cache misses exceeds this value, the API server panics.
     /// This is a temporary solution to mitigate API request resulting in thousands of DB queries.
     pub vm_execution_cache_misses_limit: Option<usize>,
-    /// Inbound transaction limit used for throttling.
+    /// Note: Deprecated option, no longer in use. Kept only so we can warn if it is still set.
     pub transactions_per_sec_limit: Option<u32>,
     /// Limit for fee history block range.
     #[serde(default = "OptionalENConfig::default_fee_history_limit")]
     pub fee_history_limit: u64,
@@ -154,6 +142,15 @@ pub struct OptionalENConfig {
     /// The max possible number of gas that `eth_estimateGas` is allowed to overestimate.
     #[serde(default = "OptionalENConfig::default_estimate_gas_acceptable_overestimation")]
     pub estimate_gas_acceptable_overestimation: u32,
+    /// Whether to use the compatibility mode for gas estimation for L1->L2 transactions.
+    /// During the migration to the 1.4.1 fee model, there will be a period when the server
+    /// already uses the 1.4.1 fee model while the L1 contracts still expect transactions
+    /// to use the previous fee model with a much higher overhead.
+    ///
+    /// When set to `true`, the API will ensure that the returned gasLimit carries an overhead
+    /// high enough for both the old and the new fee models when estimating L1->L2 transactions.
+    #[serde(default = "OptionalENConfig::default_l1_to_l2_transactions_compatibility_mode")]
+    pub l1_to_l2_transactions_compatibility_mode: bool,
     /// The multiplier to use when suggesting gas price. Should be higher than one,
     /// otherwise if the L1 prices soar, the suggested gas price won't be sufficient to be included in block
     #[serde(default = "OptionalENConfig::default_gas_price_scale_factor")]
     pub gas_price_scale_factor: f64,
@@ -190,6 +187,14 @@ pub struct OptionalENConfig {
     /// Number of keys that is processed by enum_index migration in State Keeper each L1 batch.
     #[serde(default = "OptionalENConfig::default_enum_index_migration_chunk_size")]
     pub enum_index_migration_chunk_size: usize,
+    /// Capacity of the queue for asynchronous miniblock sealing. Once this many miniblocks are queued,
+    /// sealing will block until some of the miniblocks from the queue are processed.
+    /// 0 means that sealing is synchronous; this is mostly useful for performance comparison, testing etc.
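+    /// As an illustration (our numbers, not normative): with the default capacity of 10,
+    /// the state keeper can run up to 10 miniblocks ahead of Postgres persistence; queueing
+    /// an 11th miniblock then blocks until the sealer catches up.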
+ #[serde(default = "OptionalENConfig::default_miniblock_seal_queue_capacity")] + pub miniblock_seal_queue_capacity: usize, } impl OptionalENConfig { @@ -221,6 +223,10 @@ impl OptionalENConfig { 1_000 } + const fn default_l1_to_l2_transactions_compatibility_mode() -> bool { + true + } + const fn default_gas_price_scale_factor() -> f64 { 1.2 } @@ -288,6 +294,10 @@ impl OptionalENConfig { 5000 } + const fn default_miniblock_seal_queue_capacity() -> usize { + 10 + } + pub fn polling_interval(&self) -> Duration { Duration::from_millis(self.polling_interval) } @@ -329,7 +339,7 @@ impl OptionalENConfig { pub fn api_namespaces(&self) -> Vec { self.api_namespaces .clone() - .unwrap_or_else(|| Namespace::NON_DEBUG.to_vec()) + .unwrap_or_else(|| Namespace::DEFAULT.to_vec()) } pub fn max_response_body_size(&self) -> usize { @@ -346,8 +356,6 @@ pub struct RequiredENConfig { pub ws_port: u16, /// Port on which the healthcheck REST server is listening. pub healthcheck_port: u16, - /// Number of threads per API server - pub threads_per_server: usize, /// Address of the Ethereum node API. /// Intentionally private: use getter method as it manages the missing port. eth_client_url: String, @@ -428,7 +436,7 @@ impl ExternalNodeConfig { .context("Unable to fetch required config values from the main node")?; // We can query them from main node, but it's better to set them explicitly - // as well to avoid connecting to wrong envs unintentionally. + // as well to avoid connecting to wrong environment variables unintentionally. let eth_chain_id = HttpClientBuilder::default() .build(required.eth_client_url()?) .expect("Unable to build HTTP client for L1 client") @@ -520,13 +528,15 @@ impl From for TxSenderConfig { .unwrap(), gas_price_scale_factor: config.optional.gas_price_scale_factor, max_nonce_ahead: config.optional.max_nonce_ahead, - fair_l2_gas_price: config.remote.fair_l2_gas_price, vm_execution_cache_misses_limit: config.optional.vm_execution_cache_misses_limit, // We set these values to the maximum since we don't know the actual values // and they will be enforced by the main node anyway. 
            max_allowed_l2_tx_gas_limit: u32::MAX,
             validation_computational_gas_limit: u32::MAX,
             chain_id: config.remote.l2_chain_id,
+            l1_to_l2_transactions_compatibility_mode: config
+                .optional
+                .l1_to_l2_transactions_compatibility_mode,
         }
     }
 }
diff --git a/core/bin/external_node/src/main.rs b/core/bin/external_node/src/main.rs
index 52f3353dc07..07ee4140602 100644
--- a/core/bin/external_node/src/main.rs
+++ b/core/bin/external_node/src/main.rs
@@ -1,12 +1,13 @@
-use anyhow::Context;
-use clap::Parser;
-use tokio::{sync::watch, task, time::sleep};
-
 use std::{sync::Arc, time::Duration};
 
-use futures::{future::FusedFuture, FutureExt};
+use anyhow::Context as _;
+use clap::Parser;
+use futures::{future::FusedFuture, FutureExt as _};
+use metrics::EN_METRICS;
 use prometheus_exporter::PrometheusExporterConfig;
+use tokio::{sync::watch, task, time::sleep};
 use zksync_basic_types::{Address, L2ChainId};
+use zksync_config::configs::database::MerkleTreeMode;
 use zksync_core::{
     api_server::{
         execution_sandbox::VmConcurrencyLimiter,
@@ -16,13 +17,14 @@ use zksync_core::{
     },
     block_reverter::{BlockReverter, BlockReverterFlags, L1ExecutedBatchesRevert},
     consistency_checker::ConsistencyChecker,
-    l1_gas_price::MainNodeGasPriceFetcher,
-    metadata_calculator::{
-        MetadataCalculator, MetadataCalculatorConfig, MetadataCalculatorModeConfig,
-    },
+    l1_gas_price::MainNodeFeeParamsFetcher,
+    metadata_calculator::{MetadataCalculator, MetadataCalculatorConfig},
     reorg_detector::ReorgDetector,
     setup_sigint_handler,
-    state_keeper::{L1BatchExecutorBuilder, MainBatchExecutorBuilder, ZkSyncStateKeeper},
+    state_keeper::{
+        seal_criteria::NoopSealer, L1BatchExecutorBuilder, MainBatchExecutorBuilder,
+        MiniblockSealer, MiniblockSealerHandle, ZkSyncStateKeeper,
+    },
     sync_layer::{
         batch_status_updater::BatchStatusUpdater, external_io::ExternalIO, fetcher::FetcherCursor,
         genesis::perform_genesis_if_needed, ActionQueue, MainNodeClient, SyncState,
@@ -35,6 +37,10 @@ use zksync_storage::RocksDB;
 use zksync_utils::wait_for_tasks::wait_for_tasks;
 
 mod config;
+mod metrics;
+
+const RELEASE_MANIFEST: &str =
+    std::include_str!("../../../../.github/release-please/manifest.json");
 
 use crate::config::ExternalNodeConfig;
 
@@ -47,6 +53,7 @@ async fn build_state_keeper(
     connection_pool: ConnectionPool,
     sync_state: SyncState,
     l2_erc20_bridge_addr: Address,
+    miniblock_sealer_handle: MiniblockSealerHandle,
     stop_receiver: watch::Receiver<bool>,
     chain_id: L2ChainId,
 ) -> ZkSyncStateKeeper {
@@ -67,12 +74,14 @@ async fn build_state_keeper(
         save_call_traces,
         false,
         config.optional.enum_index_migration_chunk_size,
+        true,
     ));
 
     let main_node_url = config.required.main_node_url().unwrap();
     let main_node_client = <dyn MainNodeClient>::json_rpc(&main_node_url)
         .expect("Failed creating JSON-RPC client for main node");
     let io = ExternalIO::new(
+        miniblock_sealer_handle,
         connection_pool,
         action_queue,
         sync_state,
@@ -83,7 +92,12 @@ async fn build_state_keeper(
     )
     .await;
 
-    ZkSyncStateKeeper::without_sealer(stop_receiver, Box::new(io), batch_executor_base)
+    ZkSyncStateKeeper::new(
+        stop_receiver,
+        Box::new(io),
+        batch_executor_base,
+        Box::new(NoopSealer),
+    )
 }
 
 async fn init_tasks(
@@ -95,6 +109,14 @@
     HealthCheckHandle,
     watch::Receiver<bool>,
 )> {
+    let release_manifest: serde_json::Value = serde_json::from_str(RELEASE_MANIFEST)
+        .expect("release manifest is a valid json document; qed");
+    let release_manifest_version = release_manifest["core"].as_str().expect(
+        "a release-please manifest with \"core\" version field was specified at build time; qed.",
+    );
+
+    let version = semver::Version::parse(release_manifest_version)
+        .expect("version in manifest is a correct semver format; qed");
     let main_node_url = config
         .required
         .main_node_url()
@@ -102,10 +124,35 @@
     let (stop_sender, stop_receiver) = watch::channel(false);
     let mut healthchecks: Vec<Box<dyn CheckHealth>> = Vec::new();
     // Create components.
-    let gas_adjuster = Arc::new(MainNodeGasPriceFetcher::new(&main_node_url));
+    let fee_params_fetcher = Arc::new(MainNodeFeeParamsFetcher::new(&main_node_url));
     let sync_state = SyncState::new();
     let (action_queue_sender, action_queue) = ActionQueue::new();
+
+    let mut task_handles = vec![];
+    let (miniblock_sealer, miniblock_sealer_handle) = MiniblockSealer::new(
+        connection_pool.clone(),
+        config.optional.miniblock_seal_queue_capacity,
+    );
+    task_handles.push(tokio::spawn(miniblock_sealer.run()));
+    let pool = connection_pool.clone();
+    task_handles.push(tokio::spawn(async move {
+        loop {
+            let protocol_version = pool
+                .access_storage()
+                .await
+                .unwrap()
+                .protocol_versions_dal()
+                .last_used_version_id()
+                .await
+                .map(|version| version as u16);
+
+            EN_METRICS.version[&(format!("{}", version), protocol_version)].set(1);
+
+            tokio::time::sleep(Duration::from_secs(10)).await;
+        }
+    }));
+
     let state_keeper = build_state_keeper(
         action_queue,
         config.required.state_cache_path.clone(),
@@ -113,6 +160,7 @@
         connection_pool.clone(),
         sync_state.clone(),
         config.remote.l2_erc20_bridge_addr,
+        miniblock_sealer_handle,
         stop_receiver.clone(),
         config.remote.l2_chain_id,
     )
@@ -138,19 +186,17 @@
         stop_receiver.clone(),
     );
 
-    let metadata_calculator = MetadataCalculator::new(&MetadataCalculatorConfig {
-        db_path: &config.required.merkle_tree_path,
-        mode: MetadataCalculatorModeConfig::Full {
-            store_factory: None,
-        },
+    let metadata_calculator_config = MetadataCalculatorConfig {
+        db_path: config.required.merkle_tree_path.clone(),
+        mode: MerkleTreeMode::Full,
         delay_interval: config.optional.metadata_calculator_delay(),
         max_l1_batches_per_iter: config.optional.max_l1_batches_per_tree_iter,
         multi_get_chunk_size: config.optional.merkle_tree_multi_get_chunk_size,
         block_cache_capacity: config.optional.merkle_tree_block_cache_size(),
         memtable_capacity: config.optional.merkle_tree_memtable_capacity(),
         stalled_writes_timeout: config.optional.merkle_tree_stalled_writes_timeout(),
-    })
-    .await;
+    };
+    let metadata_calculator = MetadataCalculator::new(metadata_calculator_config, None).await;
     healthchecks.push(Box::new(metadata_calculator.tree_health_check()));
 
     let consistency_checker = ConsistencyChecker::new(
@@ -172,7 +218,7 @@
             .await
             .context("failed to build a connection pool for BatchStatusUpdater")?,
     )
-    .await;
+    .context("failed initializing batch status updater")?;
 
     // Run the components.
     let tree_stop_receiver = stop_receiver.clone();
@@ -180,31 +226,24 @@
         .build()
         .await
         .context("failed to build a tree_pool")?;
-    // todo: PLA-335
-    // Note: This pool isn't actually used by the metadata calculator, but it has to be provided anyway.
-    let prover_tree_pool = ConnectionPool::singleton(&config.postgres.database_url)
-        .build()
-        .await
-        .context("failed to build a prover_tree_pool")?;
-    let tree_handle =
-        task::spawn(metadata_calculator.run(tree_pool, prover_tree_pool, tree_stop_receiver));
+    let tree_handle = task::spawn(metadata_calculator.run(tree_pool, tree_stop_receiver));
 
     let consistency_checker_handle = tokio::spawn(consistency_checker.run(stop_receiver.clone()));
     let updater_handle = task::spawn(batch_status_updater.run(stop_receiver.clone()));
     let sk_handle = task::spawn(state_keeper.run());
     let fetcher_handle = tokio::spawn(fetcher.run());
-    let gas_adjuster_handle = tokio::spawn(gas_adjuster.clone().run(stop_receiver.clone()));
+    let fee_params_fetcher_handle =
+        tokio::spawn(fee_params_fetcher.clone().run(stop_receiver.clone()));
 
     let (tx_sender, vm_barrier, cache_update_handle) = {
-        let mut tx_sender_builder =
+        let tx_sender_builder =
             TxSenderBuilder::new(config.clone().into(), connection_pool.clone())
                 .with_main_connection_pool(connection_pool.clone())
                 .with_tx_proxy(&main_node_url);
 
-        // Add rate limiter if enabled.
-        if let Some(tps_limit) = config.optional.transactions_per_sec_limit {
-            tx_sender_builder = tx_sender_builder.with_rate_limiter(tps_limit);
+        if config.optional.transactions_per_sec_limit.is_some() {
+            tracing::warn!("`transactions_per_sec_limit` option is deprecated and ignored");
         };
 
         let max_concurrency = config.optional.vm_concurrency_limit;
@@ -224,7 +263,7 @@
 
         let tx_sender = tx_sender_builder
             .build(
-                gas_adjuster,
+                fee_params_fetcher,
                 Arc::new(vm_concurrency_limiter),
                 ApiContracts::load_from_disk(), // TODO (BFT-138): Allow to dynamically reload API contracts
                 storage_caches,
             )
             .await
     };
 
     let http_server_handles =
-        ApiBuilder::jsonrpc_backend(config.clone().into(), connection_pool.clone())
+        ApiBuilder::jsonrpsee_backend(config.clone().into(), connection_pool.clone())
             .http(config.required.http_port)
             .with_filter_limit(config.optional.filters_limit)
             .with_batch_request_size_limit(config.optional.max_batch_request_size)
             .with_response_body_size_limit(config.optional.max_response_body_size())
-            .with_threads(config.required.threads_per_server)
             .with_tx_sender(tx_sender.clone(), vm_barrier.clone())
             .with_sync_state(sync_state.clone())
             .enable_api_namespaces(config.optional.api_namespaces())
             .build(stop_receiver.clone())
             .await
             .context("Failed initializing HTTP JSON-RPC server")?;
 
     let ws_server_handles =
-        ApiBuilder::jsonrpc_backend(config.clone().into(), connection_pool.clone())
+        ApiBuilder::jsonrpsee_backend(config.clone().into(), connection_pool.clone())
             .ws(config.required.ws_port)
             .with_filter_limit(config.optional.filters_limit)
             .with_subscriptions_limit(config.optional.subscriptions_limit)
             .with_batch_request_size_limit(config.optional.max_batch_request_size)
             .with_response_body_size_limit(config.optional.max_response_body_size())
             .with_polling_interval(config.optional.polling_interval())
-            .with_threads(config.required.threads_per_server)
             .with_tx_sender(tx_sender, vm_barrier)
             .with_sync_state(sync_state)
             .enable_api_namespaces(config.optional.api_namespaces())
@@ -271,7 +308,6 @@
         healthchecks,
     );
 
-    let mut task_handles = vec![];
     if let Some(port) = config.optional.prometheus_port {
         let prometheus_task = PrometheusExporterConfig::pull(port).run(stop_receiver.clone());
         task_handles.push(tokio::spawn(prometheus_task));
@@ -284,7 +320,7 @@
         fetcher_handle,
         updater_handle,
        tree_handle,
-        gas_adjuster_handle,
+        fee_params_fetcher_handle,
     ]);
 
     task_handles.push(consistency_checker_handle);
@@ -354,7 +390,6 @@ async fn main() -> anyhow::Result<()> {
         .build()
         .await
         .context("failed to build a connection_pool")?;
-
     if opt.revert_pending_l1_batch {
         tracing::info!("Rolling pending L1 batch back..");
         let reverter = BlockReverter::new(
@@ -365,12 +400,15 @@ async fn main() -> anyhow::Result<()> {
             L1ExecutedBatchesRevert::Allowed,
         );
 
-        let mut connection = connection_pool.access_storage().await.unwrap();
+        let mut connection = connection_pool.access_storage().await?;
         let sealed_l1_batch_number = connection
             .blocks_dal()
             .get_sealed_l1_batch_number()
             .await
-            .unwrap();
+            .context("Failed getting sealed L1 batch number")?
+            .context(
+                "Cannot roll back pending L1 batch since there are no L1 batches in Postgres",
+            )?;
         drop(connection);
 
         tracing::info!("Rolling back to l1 batch number {sealed_l1_batch_number}");
@@ -384,9 +422,7 @@ async fn main() -> anyhow::Result<()> {
     }
 
     let sigint_receiver = setup_sigint_handler();
-
     tracing::warn!("The external node is in the alpha phase, and should be used with caution.");
-
     tracing::info!("Started the external node");
     tracing::info!("Main node URL is: {}", main_node_url);
 
@@ -408,23 +444,20 @@ async fn main() -> anyhow::Result<()> {
     let reorg_detector = ReorgDetector::new(&main_node_url, connection_pool.clone(), stop_receiver);
     let mut reorg_detector_handle = tokio::spawn(reorg_detector.run()).fuse();
+    let mut reorg_detector_result = None;
 
     let particular_crypto_alerts = None;
     let graceful_shutdown = None::<future::Ready<()>>;
     let tasks_allowed_to_finish = false;
-    let mut reorg_detector_last_correct_batch = None;
     tokio::select! {
         _ = wait_for_tasks(task_handles, particular_crypto_alerts, graceful_shutdown, tasks_allowed_to_finish) => {},
         _ = sigint_receiver => {
             tracing::info!("Stop signal received, shutting down");
         },
-        last_correct_batch = &mut reorg_detector_handle => {
-            if let Ok(last_correct_batch) = last_correct_batch {
-                reorg_detector_last_correct_batch = last_correct_batch;
-            } else {
-                tracing::error!("Reorg detector actor failed");
-            }
+        result = &mut reorg_detector_handle => {
+            tracing::info!("Reorg detector terminated, shutting down");
+            reorg_detector_result = Some(result);
         }
     };
 
@@ -433,13 +466,23 @@ async fn main() -> anyhow::Result<()> {
     shutdown_components(stop_sender, health_check_handle).await;
 
     if !reorg_detector_handle.is_terminated() {
-        if let Ok(Some(last_correct_batch)) = reorg_detector_handle.await {
-            reorg_detector_last_correct_batch = Some(last_correct_batch);
-        }
+        reorg_detector_result = Some(reorg_detector_handle.await);
     }
+    let reorg_detector_last_correct_batch = reorg_detector_result.and_then(|result| match result {
+        Ok(Ok(last_correct_batch)) => last_correct_batch,
+        Ok(Err(err)) => {
+            tracing::error!("Reorg detector failed: {err}");
+            None
+        }
+        Err(err) => {
+            tracing::error!("Reorg detector panicked: {err}");
+            None
+        }
+    });
 
     if let Some(last_correct_batch) = reorg_detector_last_correct_batch {
-        tracing::info!("Performing rollback to block {}", last_correct_batch);
+        tracing::info!("Performing rollback to L1 batch #{last_correct_batch}");
+
         let reverter = BlockReverter::new(
             config.required.state_cache_path,
             config.required.merkle_tree_path,
diff --git a/core/bin/external_node/src/metrics.rs b/core/bin/external_node/src/metrics.rs
new file mode 100644
index 00000000000..1d493dd0087
--- /dev/null
+++ b/core/bin/external_node/src/metrics.rs
@@ -0,0 +1,14 @@
+use vise::{Gauge, LabeledFamily, Metrics};
+
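+// Usage sketch (illustrative values; mirrors the periodic loop in `init_tasks` in main.rs):
+// `EN_METRICS.version[&("19.2.0".to_owned(), Some(18))].set(1);` publishes an "info"-style
+// gauge set to 1 for the (server_version, protocol_version) pair currently running.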
+#[derive(Debug, Metrics)] +#[metrics(prefix = "external_node")] +pub(crate) struct EnMetrics { + #[metrics(labels = ["server_version", "protocol_version"])] + pub version: LabeledFamily<(String, Option), Gauge, 2>, +} + +#[vise::register] +pub(crate) static EN_METRICS: vise::Global = vise::Global::new(); diff --git a/core/bin/merkle_tree_consistency_checker/src/main.rs b/core/bin/merkle_tree_consistency_checker/src/main.rs index b132bda87fa..60a4feb750e 100644 --- a/core/bin/merkle_tree_consistency_checker/src/main.rs +++ b/core/bin/merkle_tree_consistency_checker/src/main.rs @@ -1,8 +1,7 @@ -use anyhow::Context as _; -use clap::Parser; - use std::{path::Path, time::Instant}; +use anyhow::Context as _; +use clap::Parser; use zksync_config::DBConfig; use zksync_env_config::FromEnv; use zksync_merkle_tree::domain::ZkSyncTree; @@ -29,7 +28,7 @@ impl Cli { tracing::info!("Verifying consistency of Merkle tree at {db_path}"); let start = Instant::now(); let db = RocksDB::new(Path::new(db_path)); - let tree = ZkSyncTree::new_lightweight(db); + let tree = ZkSyncTree::new_lightweight(db.into()); let l1_batch_number = if let Some(number) = self.l1_batch { L1BatchNumber(number) diff --git a/core/bin/rocksdb_util/Cargo.toml b/core/bin/rocksdb_util/Cargo.toml deleted file mode 100644 index 75d1afe1979..00000000000 --- a/core/bin/rocksdb_util/Cargo.toml +++ /dev/null @@ -1,22 +0,0 @@ -[package] -name = "rocksdb_util" -version = "0.1.0" -edition = "2021" -authors = ["The Matter Labs Team "] -homepage = "https://zksync.io/" -repository = "https://github.com/matter-labs/zksync-era" -license = "MIT OR Apache-2.0" -keywords = ["blockchain", "zksync"] -categories = ["cryptography"] -publish = false # We don't want to publish our binaries. - -[dependencies] -zksync_config = { path = "../../lib/config" } -zksync_env_config = { path = "../../lib/env_config" } -zksync_storage = { path = "../../lib/storage" } - -anyhow = "1.0" -clap = { version = "4.2.4", features = ["derive"] } - -[dev-dependencies] -tempfile = "3.0.2" diff --git a/core/bin/rocksdb_util/src/main.rs b/core/bin/rocksdb_util/src/main.rs deleted file mode 100644 index 30d3d42e771..00000000000 --- a/core/bin/rocksdb_util/src/main.rs +++ /dev/null @@ -1,85 +0,0 @@ -use anyhow::Context as _; -use clap::{Parser, Subcommand}; - -use zksync_config::DBConfig; -use zksync_env_config::FromEnv; -use zksync_storage::rocksdb::{ - backup::{BackupEngine, BackupEngineOptions, RestoreOptions}, - Env, Error, Options, DB, -}; - -#[derive(Debug, Parser)] -#[command(author = "Matter Labs", version, about = "RocksDB management utility", long_about = None)] -struct Cli { - #[command(subcommand)] - command: Command, -} - -#[derive(Debug, Subcommand)] -enum Command { - /// Creates new backup of running RocksDB instance. - #[command(name = "backup")] - Backup, - /// Restores RocksDB from backup. 
- #[command(name = "restore-from-backup")] - Restore, -} - -fn create_backup(config: &DBConfig) -> Result<(), Error> { - let mut engine = BackupEngine::open( - &BackupEngineOptions::new(&config.merkle_tree.backup_path)?, - &Env::new()?, - )?; - let db_dir = &config.merkle_tree.path; - let db = DB::open_for_read_only(&Options::default(), db_dir, false)?; - engine.create_new_backup(&db)?; - engine.purge_old_backups(config.backup_count) -} - -fn restore_from_latest_backup(config: &DBConfig) -> Result<(), Error> { - let mut engine = BackupEngine::open( - &BackupEngineOptions::new(&config.merkle_tree.backup_path)?, - &Env::new()?, - )?; - let db_dir = &config.merkle_tree.path; - engine.restore_from_latest_backup(db_dir, db_dir, &RestoreOptions::default()) -} - -fn main() -> anyhow::Result<()> { - let db_config = DBConfig::from_env().context("DBConfig::from_env()")?; - match Cli::parse().command { - Command::Backup => create_backup(&db_config).context("create_backup"), - Command::Restore => { - restore_from_latest_backup(&db_config).context("restore_from_latest_backup") - } - } -} - -#[cfg(test)] -mod tests { - use super::*; - use tempfile::TempDir; - - #[test] - fn backup_restore_workflow() { - let backup_dir = TempDir::new().expect("failed to get temporary directory for RocksDB"); - let temp_dir = TempDir::new().expect("failed to get temporary directory for RocksDB"); - let mut db_config = DBConfig::from_env().unwrap(); - db_config.merkle_tree.path = temp_dir.path().to_str().unwrap().to_string(); - db_config.merkle_tree.backup_path = backup_dir.path().to_str().unwrap().to_string(); - let db_dir = &db_config.merkle_tree.path; - - let mut options = Options::default(); - options.create_if_missing(true); - let db = DB::open(&options, db_dir).unwrap(); - db.put(b"key", b"value").expect("failed to write to db"); - - create_backup(&db_config).expect("failed to create backup"); - // Drop original database - drop((db, temp_dir)); - - restore_from_latest_backup(&db_config).expect("failed to restore from backup"); - let db = DB::open(&Options::default(), db_dir).unwrap(); - assert_eq!(db.get(b"key").unwrap().unwrap(), b"value"); - } -} diff --git a/core/bin/snapshots_creator/Cargo.toml b/core/bin/snapshots_creator/Cargo.toml new file mode 100644 index 00000000000..fe18233e7d9 --- /dev/null +++ b/core/bin/snapshots_creator/Cargo.toml @@ -0,0 +1,30 @@ +[package] +name = "snapshots_creator" +version = "0.1.0" +edition = "2021" +authors = ["The Matter Labs Team "] +homepage = "https://zksync.io/" +repository = "https://github.com/matter-labs/zksync-era" +license = "MIT OR Apache-2.0" +keywords = ["blockchain", "zksync"] +categories = ["cryptography"] +publish = false # We don't want to publish our binaries. 
+
+[dependencies]
+vise = { git = "https://github.com/matter-labs/vise.git", version = "0.1.0", rev = "1c9cc500e92cf9ea052b230e114a6f9cce4fb2c1" }
+prometheus_exporter = { path = "../../lib/prometheus_exporter" }
+zksync_config = { path = "../../lib/config" }
+zksync_dal = { path = "../../lib/dal" }
+zksync_env_config = { path = "../../lib/env_config" }
+zksync_utils = { path = "../../lib/utils" }
+zksync_types = { path = "../../lib/types" }
+zksync_object_store = { path = "../../lib/object_store" }
+vlog = { path = "../../lib/vlog" }
+
+anyhow = "1.0"
+tokio = { version = "1", features = ["full"] }
+tracing = "0.1"
+futures = "0.3"
+
+[dev-dependencies]
+rand = "0.8"
diff --git a/core/bin/snapshots_creator/README.md b/core/bin/snapshots_creator/README.md
new file mode 100644
index 00000000000..03167b80359
--- /dev/null
+++ b/core/bin/snapshots_creator/README.md
@@ -0,0 +1,24 @@
+# Snapshots Creator
+
+The snapshot creator is a small command-line tool for creating a snapshot of a zkSync node, which an EN node can use
+to initialize itself to a certain L1 batch.
+
+Snapshots do not contain the full transaction history, but rather a minimal subset of information needed to bootstrap
+an EN node.
+
+Usage (local development):\
+First, run `zk env dev`;\
+then the creator can be run using:
+`zk run snapshots_creator`
+
+Snapshot contents can be stored, depending on the blob store config, either in the local filesystem or in GCS.
+
+## Snapshot format
+
+Each snapshot consists of three types of objects (see
+[snapshots.rs](https://github.com/matter-labs/zksync-era/blob/main/core/lib/types/src/snapshots.rs)): a header, storage
+log chunks, and factory dependencies:
+
+- Snapshot header (currently returned by the `snapshots` namespace of the JSON-RPC API)
+- Snapshot storage log chunks (most likely to be stored as gzipped protobuf files, but this part is still WIP)
+- Factory dependencies (most likely to be stored as protobufs in the very near future)
diff --git a/core/bin/snapshots_creator/src/chunking.rs b/core/bin/snapshots_creator/src/chunking.rs
new file mode 100644
index 00000000000..047a6a23d24
--- /dev/null
+++ b/core/bin/snapshots_creator/src/chunking.rs
@@ -0,0 +1,69 @@
+use std::ops;
+
+use zksync_types::{H256, U256};
+use zksync_utils::u256_to_h256;
+
+pub(crate) fn get_chunk_hashed_keys_range(
+    chunk_id: u64,
+    chunk_count: u64,
+) -> ops::RangeInclusive<H256> {
+    assert!(chunk_count > 0);
+    let mut stride = U256::MAX / chunk_count;
+    let stride_minus_one = if stride < U256::MAX {
+        stride += U256::one();
+        stride - 1
+    } else {
+        stride // `stride` is really 1 << 256 == U256::MAX + 1
+    };
+
+    let start = stride * chunk_id;
+    let (mut end, is_overflow) = stride_minus_one.overflowing_add(start);
+    if is_overflow {
+        end = U256::MAX;
+    }
+    u256_to_h256(start)..=u256_to_h256(end)
+}
+
+#[cfg(test)]
+mod tests {
+    use zksync_utils::h256_to_u256;
+
+    use super::*;
+
+    #[test]
+    fn chunking_is_correct() {
+        for chunks_count in (2..10).chain([42, 256, 500, 1_001, 12_345]) {
+            println!("Testing chunks_count={chunks_count}");
+            let chunked_ranges: Vec<_> = (0..chunks_count)
+                .map(|chunk_id| get_chunk_hashed_keys_range(chunk_id, chunks_count))
+                .collect();
+
+            assert_eq!(*chunked_ranges[0].start(), H256::zero());
+            assert_eq!(
+                *chunked_ranges.last().unwrap().end(),
+                H256::repeat_byte(0xff)
+            );
+            for window in chunked_ranges.windows(2) {
+                let [prev_chunk, next_chunk] = window else {
+                    unreachable!();
+                };
+                assert_eq!(
+                    h256_to_u256(*prev_chunk.end()) + 1,
+                    h256_to_u256(*next_chunk.start())
+                );
+            }
+
+            let chunk_sizes: Vec<_> = chunked_ranges
+                .iter()
+                .map(|chunk| h256_to_u256(*chunk.end()) - h256_to_u256(*chunk.start()) + 1)
+                .collect();
+
+            // Check that chunk sizes are roughly equal. Due to how chunks are constructed, the sizes
+            // of all chunks except for the last one are the same, and the last chunk size may be slightly smaller;
+            // the difference in sizes is less than the number of chunks.
+            let min_chunk_size = chunk_sizes.iter().copied().min().unwrap();
+            let max_chunk_size = chunk_sizes.iter().copied().max().unwrap();
+            assert!(max_chunk_size - min_chunk_size < U256::from(chunks_count));
+        }
+    }
+}
diff --git a/core/bin/snapshots_creator/src/creator.rs b/core/bin/snapshots_creator/src/creator.rs
new file mode 100644
index 00000000000..51a14ce2cca
--- /dev/null
+++ b/core/bin/snapshots_creator/src/creator.rs
@@ -0,0 +1,338 @@
+//! [`SnapshotCreator`] and tightly related types.
+
+use std::sync::Arc;
+
+use anyhow::Context as _;
+use tokio::sync::Semaphore;
+use zksync_config::SnapshotsCreatorConfig;
+use zksync_dal::{ConnectionPool, StorageProcessor};
+use zksync_object_store::ObjectStore;
+use zksync_types::{
+    snapshots::{
+        SnapshotFactoryDependencies, SnapshotMetadata, SnapshotStorageLogsChunk,
+        SnapshotStorageLogsStorageKey,
+    },
+    L1BatchNumber, MiniblockNumber,
+};
+use zksync_utils::ceil_div;
+
+#[cfg(test)]
+use crate::tests::HandleEvent;
+use crate::{
+    chunking::get_chunk_hashed_keys_range,
+    metrics::{FactoryDepsStage, StorageChunkStage, METRICS},
+};
+
+/// Encapsulates progress of creating a particular storage snapshot.
+#[derive(Debug)]
+struct SnapshotProgress {
+    l1_batch_number: L1BatchNumber,
+    /// `true` if the snapshot is new (i.e., its progress is not recovered from Postgres).
+    is_new_snapshot: bool,
+    chunk_count: u64,
+    remaining_chunk_ids: Vec<u64>,
+}
+
+impl SnapshotProgress {
+    fn new(l1_batch_number: L1BatchNumber, chunk_count: u64) -> Self {
+        Self {
+            l1_batch_number,
+            is_new_snapshot: true,
+            chunk_count,
+            remaining_chunk_ids: (0..chunk_count).collect(),
+        }
+    }
+
+    fn from_existing_snapshot(snapshot: &SnapshotMetadata) -> Self {
+        let remaining_chunk_ids = snapshot
+            .storage_logs_filepaths
+            .iter()
+            .enumerate()
+            .filter_map(|(chunk_id, path)| path.is_none().then_some(chunk_id as u64))
+            .collect();
+
+        Self {
+            l1_batch_number: snapshot.l1_batch_number,
+            is_new_snapshot: false,
+            chunk_count: snapshot.storage_logs_filepaths.len() as u64,
+            remaining_chunk_ids,
+        }
+    }
+}
+
+/// Creator of a single storage snapshot.
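+///
+/// A minimal usage sketch (for illustration only; the actual wiring, including the
+/// `#[cfg(test)]` event listener field, lives in `main.rs` in this same diff):
+///
+/// ```ignore
+/// let creator = SnapshotCreator { blob_store, master_pool, replica_pool };
+/// creator.run(config, MIN_CHUNK_COUNT).await?;
+/// ```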
+#[derive(Debug)] +pub(crate) struct SnapshotCreator { + pub blob_store: Arc, + pub master_pool: ConnectionPool, + pub replica_pool: ConnectionPool, + #[cfg(test)] + pub event_listener: Box, +} + +impl SnapshotCreator { + async fn connect_to_replica(&self) -> anyhow::Result> { + self.replica_pool + .access_storage_tagged("snapshots_creator") + .await + } + + async fn process_storage_logs_single_chunk( + &self, + semaphore: &Semaphore, + miniblock_number: MiniblockNumber, + l1_batch_number: L1BatchNumber, + chunk_id: u64, + chunk_count: u64, + ) -> anyhow::Result<()> { + let _permit = semaphore.acquire().await?; + #[cfg(test)] + if self.event_listener.on_chunk_started().should_exit() { + return Ok(()); + } + + let hashed_keys_range = get_chunk_hashed_keys_range(chunk_id, chunk_count); + let mut conn = self.connect_to_replica().await?; + + let latency = + METRICS.storage_logs_processing_duration[&StorageChunkStage::LoadFromPostgres].start(); + let logs = conn + .snapshots_creator_dal() + .get_storage_logs_chunk(miniblock_number, hashed_keys_range) + .await + .context("Error fetching storage logs count")?; + drop(conn); + let latency = latency.observe(); + tracing::info!( + "Loaded chunk {chunk_id} ({} logs) from Postgres in {latency:?}", + logs.len() + ); + + let latency = + METRICS.storage_logs_processing_duration[&StorageChunkStage::SaveToGcs].start(); + let storage_logs_chunk = SnapshotStorageLogsChunk { storage_logs: logs }; + let key = SnapshotStorageLogsStorageKey { + l1_batch_number, + chunk_id, + }; + let filename = self + .blob_store + .put(key, &storage_logs_chunk) + .await + .context("Error storing storage logs chunk in blob store")?; + let output_filepath_prefix = self + .blob_store + .get_storage_prefix::(); + let output_filepath = format!("{output_filepath_prefix}/{filename}"); + let latency = latency.observe(); + + let mut master_conn = self + .master_pool + .access_storage_tagged("snapshots_creator") + .await?; + master_conn + .snapshots_dal() + .add_storage_logs_filepath_for_snapshot(l1_batch_number, chunk_id, &output_filepath) + .await?; + #[cfg(test)] + self.event_listener.on_chunk_saved(); + + let tasks_left = METRICS.storage_logs_chunks_left_to_process.dec_by(1) - 1; + tracing::info!( + "Saved chunk {chunk_id} (overall progress {}/{chunk_count}) in {latency:?} to location: {output_filepath}", + chunk_count - tasks_left as u64 + ); + Ok(()) + } + + async fn process_factory_deps( + &self, + miniblock_number: MiniblockNumber, + l1_batch_number: L1BatchNumber, + ) -> anyhow::Result { + let mut conn = self.connect_to_replica().await?; + + tracing::info!("Loading factory deps from Postgres..."); + let latency = + METRICS.factory_deps_processing_duration[&FactoryDepsStage::LoadFromPostgres].start(); + let factory_deps = conn + .snapshots_creator_dal() + .get_all_factory_deps(miniblock_number) + .await?; + drop(conn); + let latency = latency.observe(); + tracing::info!("Loaded {} factory deps in {latency:?}", factory_deps.len()); + + tracing::info!("Saving factory deps to GCS..."); + let latency = + METRICS.factory_deps_processing_duration[&FactoryDepsStage::SaveToGcs].start(); + let factory_deps = SnapshotFactoryDependencies { factory_deps }; + let filename = self + .blob_store + .put(l1_batch_number, &factory_deps) + .await + .context("Error storing factory deps in blob store")?; + let output_filepath_prefix = self + .blob_store + .get_storage_prefix::(); + let output_filepath = format!("{output_filepath_prefix}/{filename}"); + let latency = latency.observe(); + 
tracing::info!( + "Saved {} factory deps in {latency:?} to location: {output_filepath}", + factory_deps.factory_deps.len() + ); + + Ok(output_filepath) + } + + /// Returns `Ok(None)` if the created snapshot would coincide with `latest_snapshot`. + async fn initialize_snapshot_progress( + config: &SnapshotsCreatorConfig, + min_chunk_count: u64, + latest_snapshot: Option<&SnapshotMetadata>, + conn: &mut StorageProcessor<'_>, + ) -> anyhow::Result> { + // We subtract 1 so that after restore, EN node has at least one L1 batch to fetch + let sealed_l1_batch_number = conn.blocks_dal().get_sealed_l1_batch_number().await?; + let sealed_l1_batch_number = sealed_l1_batch_number.context("No L1 batches in Postgres")?; + anyhow::ensure!( + sealed_l1_batch_number != L1BatchNumber(0), + "Cannot create snapshot when only the genesis L1 batch is present in Postgres" + ); + let l1_batch_number = sealed_l1_batch_number - 1; + + let latest_snapshot_l1_batch_number = + latest_snapshot.map(|snapshot| snapshot.l1_batch_number); + if latest_snapshot_l1_batch_number == Some(l1_batch_number) { + tracing::info!( + "Snapshot at expected L1 batch #{l1_batch_number} is already created; exiting" + ); + return Ok(None); + } + + let distinct_storage_logs_keys_count = conn + .snapshots_creator_dal() + .get_distinct_storage_logs_keys_count(l1_batch_number) + .await?; + let chunk_size = config.storage_logs_chunk_size; + // We force the minimum number of chunks to avoid situations where only one chunk is created in tests. + let chunk_count = + ceil_div(distinct_storage_logs_keys_count, chunk_size).max(min_chunk_count); + + tracing::info!( + "Selected storage logs chunking for L1 batch {l1_batch_number}: \ + {chunk_count} chunks of expected size {chunk_size}" + ); + Ok(Some(SnapshotProgress::new(l1_batch_number, chunk_count))) + } + + /// Returns `Ok(None)` if a snapshot should not be created / resumed. + async fn load_or_initialize_snapshot_progress( + &self, + config: &SnapshotsCreatorConfig, + min_chunk_count: u64, + ) -> anyhow::Result> { + let mut master_conn = self + .master_pool + .access_storage_tagged("snapshots_creator") + .await?; + let latest_snapshot = master_conn + .snapshots_dal() + .get_newest_snapshot_metadata() + .await?; + drop(master_conn); + + let pending_snapshot = latest_snapshot + .as_ref() + .filter(|snapshot| !snapshot.is_complete()); + if let Some(snapshot) = pending_snapshot { + Ok(Some(SnapshotProgress::from_existing_snapshot(snapshot))) + } else { + Self::initialize_snapshot_progress( + config, + min_chunk_count, + latest_snapshot.as_ref(), + &mut self.connect_to_replica().await?, + ) + .await + } + } + + pub async fn run( + self, + config: SnapshotsCreatorConfig, + min_chunk_count: u64, + ) -> anyhow::Result<()> { + let latency = METRICS.snapshot_generation_duration.start(); + + let Some(progress) = self + .load_or_initialize_snapshot_progress(&config, min_chunk_count) + .await? + else { + // No snapshot creation is necessary; a snapshot for the current L1 batch is already created + return Ok(()); + }; + + let mut conn = self.connect_to_replica().await?; + let (_, last_miniblock_number_in_batch) = conn + .blocks_dal() + .get_miniblock_range_of_l1_batch(progress.l1_batch_number) + .await? 
+            .context("Error fetching last miniblock number")?;
+        drop(conn);
+
+        METRICS.storage_logs_chunks_count.set(progress.chunk_count);
+        tracing::info!(
+            "Creating snapshot for storage logs up to miniblock {last_miniblock_number_in_batch}, \
+             L1 batch {}",
+            progress.l1_batch_number
+        );
+
+        if progress.is_new_snapshot {
+            let factory_deps_output_file = self
+                .process_factory_deps(last_miniblock_number_in_batch, progress.l1_batch_number)
+                .await?;
+
+            let mut master_conn = self
+                .master_pool
+                .access_storage_tagged("snapshots_creator")
+                .await?;
+            master_conn
+                .snapshots_dal()
+                .add_snapshot(
+                    progress.l1_batch_number,
+                    progress.chunk_count,
+                    &factory_deps_output_file,
+                )
+                .await?;
+        }
+
+        METRICS
+            .storage_logs_chunks_left_to_process
+            .set(progress.remaining_chunk_ids.len());
+        let semaphore = Semaphore::new(config.concurrent_queries_count as usize);
+        let tasks = progress.remaining_chunk_ids.into_iter().map(|chunk_id| {
+            self.process_storage_logs_single_chunk(
+                &semaphore,
+                last_miniblock_number_in_batch,
+                progress.l1_batch_number,
+                chunk_id,
+                progress.chunk_count,
+            )
+        });
+        futures::future::try_join_all(tasks).await?;
+
+        METRICS
+            .snapshot_l1_batch
+            .set(progress.l1_batch_number.0.into());
+
+        let elapsed = latency.observe();
+        tracing::info!("snapshot_generation_duration: {elapsed:?}");
+        tracing::info!("snapshot_l1_batch: {}", METRICS.snapshot_l1_batch.get());
+        tracing::info!(
+            "storage_logs_chunks_count: {}",
+            METRICS.storage_logs_chunks_count.get()
+        );
+        Ok(())
+    }
+}
diff --git a/core/bin/snapshots_creator/src/main.rs b/core/bin/snapshots_creator/src/main.rs
new file mode 100644
index 00000000000..0571500615b
--- /dev/null
+++ b/core/bin/snapshots_creator/src/main.rs
@@ -0,0 +1,110 @@
+//! Snapshot creator utility. Intended to run on a schedule, with each run creating a new snapshot.
+//!
+//! # Assumptions
+//!
+//! The snapshot creator is fault-tolerant; if it stops in the middle of creating a snapshot,
+//! this snapshot will be continued from roughly the same point after the restart. If this is
+//! undesired, remove the `snapshots` table record corresponding to the pending snapshot.
+//!
+//! It is assumed that the snapshot creator is run as a singleton process (no more than 1 instance
+//! at a time).
+
+use anyhow::Context as _;
+use prometheus_exporter::PrometheusExporterConfig;
+use tokio::{sync::watch, task::JoinHandle};
+use zksync_config::{configs::PrometheusConfig, PostgresConfig, SnapshotsCreatorConfig};
+use zksync_dal::ConnectionPool;
+use zksync_env_config::{object_store::SnapshotsObjectStoreConfig, FromEnv};
+use zksync_object_store::ObjectStoreFactory;
+
+use crate::creator::SnapshotCreator;
+
+mod chunking;
+mod creator;
+mod metrics;
+#[cfg(test)]
+mod tests;
+
+async fn maybe_enable_prometheus_metrics(
+    stop_receiver: watch::Receiver<bool>,
+) -> anyhow::Result<Option<JoinHandle<anyhow::Result<()>>>> {
+    let prometheus_config = PrometheusConfig::from_env().ok();
+    if let Some(prometheus_config) = prometheus_config {
+        let exporter_config = PrometheusExporterConfig::push(
+            prometheus_config.gateway_endpoint(),
+            prometheus_config.push_interval(),
+        );
+
+        tracing::info!("Starting prometheus exporter with config {prometheus_config:?}");
+        let prometheus_exporter_task = tokio::spawn(exporter_config.run(stop_receiver));
+        Ok(Some(prometheus_exporter_task))
+    } else {
+        tracing::info!("Starting without prometheus exporter");
+        Ok(None)
+    }
+}
+
+/// Minimum number of storage log chunks to produce.
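+///
+/// As a worked example of the logic in `SnapshotCreator::initialize_snapshot_progress`, the actual
+/// chunk count is `ceil_div(distinct_storage_logs_keys_count, storage_logs_chunk_size)` clamped
+/// from below by this constant, so even tiny databases produce several chunks (this is forced to
+/// avoid single-chunk snapshots in tests).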
+const MIN_CHUNK_COUNT: u64 = 10; + +#[tokio::main] +async fn main() -> anyhow::Result<()> { + let (stop_sender, stop_receiver) = watch::channel(false); + + tracing::info!("Starting snapshots creator"); + #[allow(deprecated)] // TODO (QIT-21): Use centralized configuration approach. + let log_format = vlog::log_format_from_env(); + #[allow(deprecated)] // TODO (QIT-21): Use centralized configuration approach. + let sentry_url = vlog::sentry_url_from_env(); + #[allow(deprecated)] // TODO (QIT-21): Use centralized configuration approach. + let environment = vlog::environment_from_env(); + + let prometheus_exporter_task = maybe_enable_prometheus_metrics(stop_receiver).await?; + let mut builder = vlog::ObservabilityBuilder::new().with_log_format(log_format); + if let Some(sentry_url) = sentry_url { + builder = builder + .with_sentry_url(&sentry_url) + .context("Invalid Sentry URL")? + .with_sentry_environment(environment); + } + let _guard = builder.build(); + + let object_store_config = + SnapshotsObjectStoreConfig::from_env().context("SnapshotsObjectStoreConfig::from_env()")?; + let blob_store = ObjectStoreFactory::new(object_store_config.0) + .create_store() + .await; + + let postgres_config = PostgresConfig::from_env().context("PostgresConfig")?; + let creator_config = + SnapshotsCreatorConfig::from_env().context("SnapshotsCreatorConfig::from_env")?; + + let replica_pool = ConnectionPool::builder( + postgres_config.replica_url()?, + creator_config.concurrent_queries_count, + ) + .build() + .await?; + + let master_pool = ConnectionPool::singleton(postgres_config.master_url()?) + .build() + .await?; + + let creator = SnapshotCreator { + blob_store, + master_pool, + replica_pool, + #[cfg(test)] + event_listener: Box::new(()), + }; + creator.run(creator_config, MIN_CHUNK_COUNT).await?; + + tracing::info!("Finished running snapshot creator!"); + stop_sender.send(true).ok(); + if let Some(prometheus_exporter_task) = prometheus_exporter_task { + prometheus_exporter_task + .await? + .context("Prometheus did not finish gracefully")?; + } + Ok(()) +} diff --git a/core/bin/snapshots_creator/src/metrics.rs b/core/bin/snapshots_creator/src/metrics.rs new file mode 100644 index 00000000000..5eb1984712e --- /dev/null +++ b/core/bin/snapshots_creator/src/metrics.rs @@ -0,0 +1,43 @@ +//! Metrics for the snapshot creator. + +use std::time::Duration; + +use vise::{Buckets, EncodeLabelSet, EncodeLabelValue, Family, Gauge, Histogram, Metrics, Unit}; + +#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, EncodeLabelValue, EncodeLabelSet)] +#[metrics(label = "stage", rename_all = "snake_case")] +pub(crate) enum FactoryDepsStage { + LoadFromPostgres, + SaveToGcs, +} + +#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, EncodeLabelValue, EncodeLabelSet)] +#[metrics(label = "stage", rename_all = "snake_case")] +pub(crate) enum StorageChunkStage { + LoadFromPostgres, + SaveToGcs, +} + +#[derive(Debug, Metrics)] +#[metrics(prefix = "snapshots_creator")] +pub(crate) struct SnapshotsCreatorMetrics { + /// Number of chunks in the most recently generated snapshot. Set when a snapshot generation starts. + pub storage_logs_chunks_count: Gauge, + /// Number of chunks left to process for the snapshot being currently generated. + pub storage_logs_chunks_left_to_process: Gauge, + /// Total latency of snapshot generation. + #[metrics(buckets = Buckets::LATENCIES, unit = Unit::Seconds)] + pub snapshot_generation_duration: Histogram, + /// L1 batch number for the most recently generated snapshot. 
Set *after* the snapshot + /// is fully generated. + pub snapshot_l1_batch: Gauge, + /// Latency of storage log chunk processing split by stage. + #[metrics(buckets = Buckets::LATENCIES, unit = Unit::Seconds)] + pub storage_logs_processing_duration: Family>, + /// Latency of factory deps processing split by stage. + #[metrics(buckets = Buckets::LATENCIES, unit = Unit::Seconds)] + pub factory_deps_processing_duration: Family>, +} + +#[vise::register] +pub(crate) static METRICS: vise::Global = vise::Global::new(); diff --git a/core/bin/snapshots_creator/src/tests.rs b/core/bin/snapshots_creator/src/tests.rs new file mode 100644 index 00000000000..d061b090670 --- /dev/null +++ b/core/bin/snapshots_creator/src/tests.rs @@ -0,0 +1,460 @@ +//! Lower-level tests for the snapshot creator component. + +use std::{ + collections::{HashMap, HashSet}, + fmt, + sync::{ + atomic::{AtomicUsize, Ordering}, + Arc, + }, +}; + +use rand::{thread_rng, Rng}; +use zksync_dal::StorageProcessor; +use zksync_object_store::ObjectStore; +use zksync_types::{ + block::{BlockGasCount, L1BatchHeader, MiniblockHeader}, + snapshots::{ + SnapshotFactoryDependencies, SnapshotFactoryDependency, SnapshotStorageLog, + SnapshotStorageLogsChunk, SnapshotStorageLogsStorageKey, + }, + AccountTreeId, Address, L1BatchNumber, MiniblockNumber, ProtocolVersion, StorageKey, + StorageLog, H256, +}; + +use super::*; + +const TEST_CONFIG: SnapshotsCreatorConfig = SnapshotsCreatorConfig { + storage_logs_chunk_size: 1_000_000, + concurrent_queries_count: 10, +}; +const SEQUENTIAL_TEST_CONFIG: SnapshotsCreatorConfig = SnapshotsCreatorConfig { + storage_logs_chunk_size: 1_000_000, + concurrent_queries_count: 1, +}; + +#[derive(Debug)] +struct TestEventListener { + stop_after_chunk_count: usize, + processed_chunk_count: AtomicUsize, +} + +impl TestEventListener { + fn new(stop_after_chunk_count: usize) -> Self { + Self { + stop_after_chunk_count, + processed_chunk_count: AtomicUsize::new(0), + } + } +} + +impl HandleEvent for TestEventListener { + fn on_chunk_started(&self) -> TestBehavior { + let should_stop = + self.processed_chunk_count.load(Ordering::SeqCst) >= self.stop_after_chunk_count; + TestBehavior::new(should_stop) + } + + fn on_chunk_saved(&self) { + self.processed_chunk_count.fetch_add(1, Ordering::SeqCst); + } +} + +impl SnapshotCreator { + fn for_tests(blob_store: Arc, pool: ConnectionPool) -> Self { + Self { + blob_store, + master_pool: pool.clone(), + replica_pool: pool, + event_listener: Box::new(()), + } + } + + fn stop_after_chunk_count(self, stop_after_chunk_count: usize) -> Self { + Self { + event_listener: Box::new(TestEventListener::new(stop_after_chunk_count)), + ..self + } + } +} + +#[derive(Debug)] +pub(crate) struct TestBehavior { + should_exit: bool, +} + +impl TestBehavior { + fn new(should_exit: bool) -> Self { + Self { should_exit } + } + + pub fn should_exit(&self) -> bool { + self.should_exit + } +} + +pub(crate) trait HandleEvent: fmt::Debug { + fn on_chunk_started(&self) -> TestBehavior { + TestBehavior::new(false) + } + + fn on_chunk_saved(&self) { + // Do nothing + } +} + +impl HandleEvent for () {} + +fn gen_storage_logs(rng: &mut impl Rng, count: usize) -> Vec { + (0..count) + .map(|_| { + let key = StorageKey::new(AccountTreeId::from_fixed_bytes(rng.gen()), H256(rng.gen())); + StorageLog::new_write_log(key, H256(rng.gen())) + }) + .collect() +} + +fn gen_factory_deps(rng: &mut impl Rng, count: usize) -> HashMap> { + (0..count) + .map(|_| { + let factory_len = 32 * rng.gen_range(32..256); + let mut factory 
= vec![0_u8; factory_len]; + rng.fill_bytes(&mut factory); + (H256(rng.gen()), factory) + }) + .collect() +} + +#[derive(Debug, Default)] +struct ExpectedOutputs { + deps: HashSet, + storage_logs: HashSet, +} + +async fn create_miniblock( + conn: &mut StorageProcessor<'_>, + miniblock_number: MiniblockNumber, + block_logs: Vec, +) { + let miniblock_header = MiniblockHeader { + number: miniblock_number, + timestamp: 0, + hash: H256::from_low_u64_be(u64::from(miniblock_number.0)), + l1_tx_count: 0, + l2_tx_count: 0, + base_fee_per_gas: 0, + gas_per_pubdata_limit: 0, + batch_fee_input: Default::default(), + base_system_contracts_hashes: Default::default(), + protocol_version: Some(Default::default()), + virtual_blocks: 0, + }; + + conn.blocks_dal() + .insert_miniblock(&miniblock_header) + .await + .unwrap(); + conn.storage_logs_dal() + .insert_storage_logs(miniblock_number, &[(H256::zero(), block_logs)]) + .await; +} + +async fn create_l1_batch( + conn: &mut StorageProcessor<'_>, + l1_batch_number: L1BatchNumber, + logs_for_initial_writes: &[StorageLog], +) { + let mut header = L1BatchHeader::new( + l1_batch_number, + 0, + Address::default(), + Default::default(), + Default::default(), + ); + header.is_finished = true; + conn.blocks_dal() + .insert_l1_batch(&header, &[], BlockGasCount::default(), &[], &[], 0) + .await + .unwrap(); + conn.blocks_dal() + .mark_miniblocks_as_executed_in_l1_batch(l1_batch_number) + .await + .unwrap(); + + let mut written_keys: Vec<_> = logs_for_initial_writes.iter().map(|log| log.key).collect(); + written_keys.sort_unstable(); + conn.storage_logs_dedup_dal() + .insert_initial_writes(l1_batch_number, &written_keys) + .await; +} + +async fn prepare_postgres( + rng: &mut impl Rng, + conn: &mut StorageProcessor<'_>, + block_count: u32, +) -> ExpectedOutputs { + conn.protocol_versions_dal() + .save_protocol_version_with_tx(ProtocolVersion::default()) + .await; + + let mut outputs = ExpectedOutputs::default(); + for block_number in 0..block_count { + let logs = gen_storage_logs(rng, 100); + create_miniblock(conn, MiniblockNumber(block_number), logs.clone()).await; + + let factory_deps = gen_factory_deps(rng, 10); + conn.storage_dal() + .insert_factory_deps(MiniblockNumber(block_number), &factory_deps) + .await; + + // Since we generate `logs` randomly, all of them are written the first time. + create_l1_batch(conn, L1BatchNumber(block_number), &logs).await; + + if block_number + 1 < block_count { + let factory_deps = + factory_deps + .into_values() + .map(|bytecode| SnapshotFactoryDependency { + bytecode: bytecode.into(), + }); + outputs.deps.extend(factory_deps); + + let hashed_keys: Vec<_> = logs.iter().map(|log| log.key.hashed_key()).collect(); + let expected_l1_batches_and_indices = conn + .storage_logs_dal() + .get_l1_batches_and_indices_for_initial_writes(&hashed_keys) + .await; + + let logs = logs.into_iter().map(|log| { + let (l1_batch_number_of_initial_write, enumeration_index) = + expected_l1_batches_and_indices[&log.key.hashed_key()]; + SnapshotStorageLog { + key: log.key, + value: log.value, + l1_batch_number_of_initial_write, + enumeration_index, + } + }); + outputs.storage_logs.extend(logs); + } + } + outputs +} + +#[tokio::test] +async fn persisting_snapshot_metadata() { + let pool = ConnectionPool::test_pool().await; + let mut rng = thread_rng(); + let object_store_factory = ObjectStoreFactory::mock(); + let object_store = object_store_factory.create_store().await; + + // Insert some data to Postgres. 
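+    // Note: `prepare_postgres` below seeds miniblocks / L1 batches #0..=#9, so the sealed L1 batch
+    // is #9 and the snapshot is created for batch #8 (the creator subtracts 1 from the sealed batch).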
+ let mut conn = pool.access_storage().await.unwrap(); + prepare_postgres(&mut rng, &mut conn, 10).await; + + SnapshotCreator::for_tests(object_store, pool.clone()) + .run(TEST_CONFIG, MIN_CHUNK_COUNT) + .await + .unwrap(); + + // Check snapshot metadata in Postgres. + let snapshots = conn + .snapshots_dal() + .get_all_complete_snapshots() + .await + .unwrap(); + assert_eq!(snapshots.snapshots_l1_batch_numbers.len(), 1); + let snapshot_l1_batch_number = snapshots.snapshots_l1_batch_numbers[0]; + assert_eq!(snapshot_l1_batch_number, L1BatchNumber(8)); + + let snapshot_metadata = conn + .snapshots_dal() + .get_snapshot_metadata(snapshot_l1_batch_number) + .await + .unwrap() + .expect("No snapshot metadata"); + assert_eq!(snapshot_metadata.l1_batch_number, snapshot_l1_batch_number); + let factory_deps_path = &snapshot_metadata.factory_deps_filepath; + assert!(factory_deps_path.ends_with(".proto.gzip")); + assert_eq!( + snapshot_metadata.storage_logs_filepaths.len(), + MIN_CHUNK_COUNT as usize + ); + for path in &snapshot_metadata.storage_logs_filepaths { + let path = path + .as_ref() + .unwrap() + .strip_prefix("storage_logs_snapshots/") + .unwrap(); + assert!(path.ends_with(".proto.gzip")); + } +} + +#[tokio::test] +async fn persisting_snapshot_factory_deps() { + let pool = ConnectionPool::test_pool().await; + let mut rng = thread_rng(); + let object_store_factory = ObjectStoreFactory::mock(); + let object_store = object_store_factory.create_store().await; + let mut conn = pool.access_storage().await.unwrap(); + let expected_outputs = prepare_postgres(&mut rng, &mut conn, 10).await; + + SnapshotCreator::for_tests(object_store, pool.clone()) + .run(TEST_CONFIG, MIN_CHUNK_COUNT) + .await + .unwrap(); + let snapshot_l1_batch_number = L1BatchNumber(8); + + let object_store = object_store_factory.create_store().await; + let SnapshotFactoryDependencies { factory_deps } = + object_store.get(snapshot_l1_batch_number).await.unwrap(); + let actual_deps: HashSet<_> = factory_deps.into_iter().collect(); + assert_eq!(actual_deps, expected_outputs.deps); +} + +#[tokio::test] +async fn persisting_snapshot_logs() { + let pool = ConnectionPool::test_pool().await; + let mut rng = thread_rng(); + let object_store_factory = ObjectStoreFactory::mock(); + let object_store = object_store_factory.create_store().await; + let mut conn = pool.access_storage().await.unwrap(); + let expected_outputs = prepare_postgres(&mut rng, &mut conn, 10).await; + + SnapshotCreator::for_tests(object_store, pool.clone()) + .run(TEST_CONFIG, MIN_CHUNK_COUNT) + .await + .unwrap(); + let snapshot_l1_batch_number = L1BatchNumber(8); + + let object_store = object_store_factory.create_store().await; + assert_storage_logs(&*object_store, snapshot_l1_batch_number, &expected_outputs).await; +} + +async fn assert_storage_logs( + object_store: &dyn ObjectStore, + snapshot_l1_batch_number: L1BatchNumber, + expected_outputs: &ExpectedOutputs, +) { + let mut actual_logs = HashSet::new(); + for chunk_id in 0..MIN_CHUNK_COUNT { + let key = SnapshotStorageLogsStorageKey { + l1_batch_number: snapshot_l1_batch_number, + chunk_id, + }; + let chunk: SnapshotStorageLogsChunk = object_store.get(key).await.unwrap(); + actual_logs.extend(chunk.storage_logs.into_iter()); + } + assert_eq!(actual_logs, expected_outputs.storage_logs); +} + +#[tokio::test] +async fn recovery_workflow() { + let pool = ConnectionPool::test_pool().await; + let mut rng = thread_rng(); + let object_store_factory = ObjectStoreFactory::mock(); + let object_store = 
object_store_factory.create_store().await; + let mut conn = pool.access_storage().await.unwrap(); + let expected_outputs = prepare_postgres(&mut rng, &mut conn, 10).await; + + SnapshotCreator::for_tests(object_store, pool.clone()) + .stop_after_chunk_count(0) + .run(SEQUENTIAL_TEST_CONFIG, MIN_CHUNK_COUNT) + .await + .unwrap(); + + let snapshot_l1_batch_number = L1BatchNumber(8); + let snapshot_metadata = conn + .snapshots_dal() + .get_snapshot_metadata(snapshot_l1_batch_number) + .await + .unwrap() + .expect("No snapshot metadata"); + assert!(snapshot_metadata + .storage_logs_filepaths + .iter() + .all(Option::is_none)); + + let object_store = object_store_factory.create_store().await; + let SnapshotFactoryDependencies { factory_deps } = + object_store.get(snapshot_l1_batch_number).await.unwrap(); + let actual_deps: HashSet<_> = factory_deps.into_iter().collect(); + assert_eq!(actual_deps, expected_outputs.deps); + + // Process 2 storage log chunks, then stop. + SnapshotCreator::for_tests(object_store, pool.clone()) + .stop_after_chunk_count(2) + .run(SEQUENTIAL_TEST_CONFIG, MIN_CHUNK_COUNT) + .await + .unwrap(); + + let snapshot_metadata = conn + .snapshots_dal() + .get_snapshot_metadata(snapshot_l1_batch_number) + .await + .unwrap() + .expect("No snapshot metadata"); + assert_eq!( + snapshot_metadata + .storage_logs_filepaths + .iter() + .flatten() + .count(), + 2 + ); + + // Process the remaining chunks. + let object_store = object_store_factory.create_store().await; + SnapshotCreator::for_tests(object_store, pool.clone()) + .run(SEQUENTIAL_TEST_CONFIG, MIN_CHUNK_COUNT) + .await + .unwrap(); + + let object_store = object_store_factory.create_store().await; + assert_storage_logs(&*object_store, snapshot_l1_batch_number, &expected_outputs).await; +} + +#[tokio::test] +async fn recovery_workflow_with_varying_chunk_size() { + let pool = ConnectionPool::test_pool().await; + let mut rng = thread_rng(); + let object_store_factory = ObjectStoreFactory::mock(); + let object_store = object_store_factory.create_store().await; + let mut conn = pool.access_storage().await.unwrap(); + let expected_outputs = prepare_postgres(&mut rng, &mut conn, 10).await; + + SnapshotCreator::for_tests(object_store, pool.clone()) + .stop_after_chunk_count(2) + .run(SEQUENTIAL_TEST_CONFIG, MIN_CHUNK_COUNT) + .await + .unwrap(); + + let snapshot_l1_batch_number = L1BatchNumber(8); + let snapshot_metadata = conn + .snapshots_dal() + .get_snapshot_metadata(snapshot_l1_batch_number) + .await + .unwrap() + .expect("No snapshot metadata"); + assert_eq!( + snapshot_metadata + .storage_logs_filepaths + .iter() + .flatten() + .count(), + 2 + ); + + let config_with_other_size = SnapshotsCreatorConfig { + storage_logs_chunk_size: 1, // << should be ignored + ..SEQUENTIAL_TEST_CONFIG + }; + let object_store = object_store_factory.create_store().await; + SnapshotCreator::for_tests(object_store, pool.clone()) + .run(config_with_other_size, MIN_CHUNK_COUNT) + .await + .unwrap(); + + let object_store = object_store_factory.create_store().await; + assert_storage_logs(&*object_store, snapshot_l1_batch_number, &expected_outputs).await; +} diff --git a/core/bin/storage_logs_dedup_migration/src/consistency.rs b/core/bin/storage_logs_dedup_migration/src/consistency.rs index 3c63c8c81a7..dc0b3da389c 100644 --- a/core/bin/storage_logs_dedup_migration/src/consistency.rs +++ b/core/bin/storage_logs_dedup_migration/src/consistency.rs @@ -1,5 +1,4 @@ use clap::Parser; - use zksync_config::PostgresConfig; use zksync_dal::ConnectionPool; use 
zksync_env_config::FromEnv; diff --git a/core/bin/storage_logs_dedup_migration/src/main.rs b/core/bin/storage_logs_dedup_migration/src/main.rs index 7277c231e43..179685c4002 100644 --- a/core/bin/storage_logs_dedup_migration/src/main.rs +++ b/core/bin/storage_logs_dedup_migration/src/main.rs @@ -1,7 +1,6 @@ use std::collections::hash_map::{Entry, HashMap}; use clap::Parser; - use zksync_config::PostgresConfig; use zksync_dal::ConnectionPool; use zksync_env_config::FromEnv; @@ -58,7 +57,8 @@ async fn main() { .blocks_dal() .get_sealed_miniblock_number() .await - .unwrap(); + .unwrap() + .expect("Cannot start migration for Postgres recovered from snapshot"); println!( "Migration started for miniblock range {}..={}", opt.start_from_miniblock, sealed_miniblock diff --git a/core/bin/system-constants-generator/src/intrinsic_costs.rs b/core/bin/system-constants-generator/src/intrinsic_costs.rs index e15abf7d134..4f5e988e7b1 100644 --- a/core/bin/system-constants-generator/src/intrinsic_costs.rs +++ b/core/bin/system-constants-generator/src/intrinsic_costs.rs @@ -1,16 +1,16 @@ //! //! The script that returns the L2 gas price constants is that calculates the constants currently used by the -//! bootloader as well as L1 smart contracts. It should be used to edit the config file located in the etc/system-contracts/SystemConfig.json +//! bootloader as well as L1 smart contracts. It should be used to edit the config file located in the contracts/system-contracts/SystemConfig.json //! as well as contracts/SystemConfig.json //! +use multivm::utils::get_bootloader_encoding_space; +use zksync_types::{ethabi::Address, IntrinsicSystemGasConstants, ProtocolVersionId, U256}; + use crate::utils::{ execute_internal_transfer_test, execute_user_txs_in_test_gas_vm, get_l1_tx, get_l1_txs, - get_l2_txs, + get_l2_txs, metrics_from_txs, TransactionGenerator, }; -use crate::utils::{metrics_from_txs, TransactionGenerator}; -use multivm::vm_latest::constants::BOOTLOADER_TX_ENCODING_SPACE; -use zksync_types::{ethabi::Address, IntrinsicSystemGasConstants, U256}; #[derive(Debug, Clone, Copy, PartialEq)] pub(crate) struct VmSpentResourcesResult { @@ -81,7 +81,7 @@ pub(crate) fn l2_gas_constants() -> IntrinsicSystemGasConstants { true, ); - // This price does not include the overhead for the transaction itself, but rather auxilary parts + // This price does not include the overhead for the transaction itself, but rather auxiliary parts // that must be done by the transaction and it can not be enforced by the operator to not to accept // the transaction if it does not cover the minimal costs. let min_l1_tx_price = empty_l1_tx_result.gas_consumed - bootloader_intrinsic_gas; @@ -107,7 +107,7 @@ pub(crate) fn l2_gas_constants() -> IntrinsicSystemGasConstants { let delta_from_544_bytes = lengthier_tx_result.gas_consumed - empty_l1_tx_result.gas_consumed; - // The number of public data per factory dep should not depend on the size/structure of the factory + // The number of public data per factory dependencies should not depend on the size/structure of the factory // dependency, since the dependency has already been published on L1. let tx_with_more_factory_deps_result = execute_user_txs_in_test_gas_vm( vec![get_l1_tx( @@ -129,7 +129,8 @@ pub(crate) fn l2_gas_constants() -> IntrinsicSystemGasConstants { tx_with_more_factory_deps_result.pubdata_published - empty_l1_tx_result.pubdata_published; // The number of the bootloader memory that can be filled up with transactions. 
- let bootloader_tx_memory_size_slots = BOOTLOADER_TX_ENCODING_SPACE; + let bootloader_tx_memory_size_slots = + get_bootloader_encoding_space(ProtocolVersionId::latest().into()); IntrinsicSystemGasConstants { l2_tx_intrinsic_gas, @@ -179,7 +180,7 @@ fn get_intrinsic_overheads_for_tx_type(tx_generator: &TransactionGenerator) -> I let bootloader_intrinsic_pubdata = result_0.pubdata_published; // For various small reasons the overhead for the first transaction and for all the subsequent ones - // might differ a bit, so we will calculate both and will use the maximum one as the result for l2 txs. + // might differ a bit, so we will calculate both and will use the maximum one as the result for L2 txs. let (tx1_intrinsic_gas, tx1_intrinsic_pubdata) = get_intrinsic_price(result_0, result_1); let (tx2_intrinsic_gas, tx2_intrinsic_pubdata) = get_intrinsic_price(result_1, result_2); diff --git a/core/bin/system-constants-generator/src/main.rs b/core/bin/system-constants-generator/src/main.rs index ed906e1c9bb..3c5c056ffe7 100644 --- a/core/bin/system-constants-generator/src/main.rs +++ b/core/bin/system-constants-generator/src/main.rs @@ -1,24 +1,30 @@ use std::fs; +use codegen::{Block, Scope}; +use multivm::{ + utils::{get_bootloader_encoding_space, get_bootloader_max_txs_in_batch}, + vm_latest::constants::MAX_PUBDATA_PER_BLOCK, +}; use serde::{Deserialize, Serialize}; use zksync_types::{ - IntrinsicSystemGasConstants, GUARANTEED_PUBDATA_IN_TX, L1_GAS_PER_PUBDATA_BYTE, - MAX_GAS_PER_PUBDATA_BYTE, MAX_NEW_FACTORY_DEPS, MAX_TXS_IN_BLOCK, + zkevm_test_harness::zk_evm::zkevm_opcode_defs::{ + circuit_prices::{ + ECRECOVER_CIRCUIT_COST_IN_ERGS, KECCAK256_CIRCUIT_COST_IN_ERGS, + SHA256_CIRCUIT_COST_IN_ERGS, + }, + system_params::MAX_TX_ERGS_LIMIT, + }, + IntrinsicSystemGasConstants, ProtocolVersionId, GUARANTEED_PUBDATA_IN_TX, + L1_GAS_PER_PUBDATA_BYTE, MAX_NEW_FACTORY_DEPS, REQUIRED_L1_TO_L2_GAS_PER_PUBDATA_BYTE, }; +// For configs we will use the default value of `800_000` to represent the rough amount of L1 gas +// needed to cover the batch expenses. 
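+// (Previously this value was imported as `BLOCK_OVERHEAD_L1_GAS` from
+// `multivm::vm_latest::constants`; the import removed further down in this diff shows its old source.)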
+const BLOCK_OVERHEAD_L1_GAS: u32 = 800_000; + mod intrinsic_costs; mod utils; -use codegen::Block; -use codegen::Scope; -use multivm::vm_latest::constants::{ - BLOCK_OVERHEAD_GAS, BLOCK_OVERHEAD_L1_GAS, BOOTLOADER_TX_ENCODING_SPACE, MAX_PUBDATA_PER_BLOCK, -}; -use zksync_types::zkevm_test_harness::zk_evm::zkevm_opcode_defs::circuit_prices::{ - ECRECOVER_CIRCUIT_COST_IN_ERGS, KECCAK256_CIRCUIT_COST_IN_ERGS, SHA256_CIRCUIT_COST_IN_ERGS, -}; -use zksync_types::zkevm_test_harness::zk_evm::zkevm_opcode_defs::system_params::MAX_TX_ERGS_LIMIT; - // Params needed for L1 contracts #[derive(Copy, Clone, Debug, Serialize, Deserialize)] #[serde(rename_all = "SCREAMING_SNAKE_CASE")] @@ -28,7 +34,6 @@ struct L1SystemConfig { priority_tx_max_pubdata: u32, fair_l2_gas_price: u64, l1_gas_per_pubdata_byte: u32, - block_overhead_l2_gas: u32, block_overhead_l1_gas: u32, max_transactions_in_block: u32, bootloader_tx_encoding_space: u32, @@ -54,11 +59,13 @@ pub fn generate_l1_contracts_system_config(gas_constants: &IntrinsicSystemGasCon priority_tx_max_pubdata: (L1_TX_DECREASE * (MAX_PUBDATA_PER_BLOCK as f64)) as u32, fair_l2_gas_price: FAIR_L2_GAS_PRICE_ON_L1_CONTRACT, l1_gas_per_pubdata_byte: L1_GAS_PER_PUBDATA_BYTE, - block_overhead_l2_gas: BLOCK_OVERHEAD_GAS, block_overhead_l1_gas: BLOCK_OVERHEAD_L1_GAS, - max_transactions_in_block: MAX_TXS_IN_BLOCK as u32, - bootloader_tx_encoding_space: BOOTLOADER_TX_ENCODING_SPACE, - + max_transactions_in_block: get_bootloader_max_txs_in_batch( + ProtocolVersionId::latest().into(), + ) as u32, + bootloader_tx_encoding_space: get_bootloader_encoding_space( + ProtocolVersionId::latest().into(), + ), l1_tx_intrinsic_l2_gas: gas_constants.l1_tx_intrinsic_gas, l1_tx_intrinsic_pubdata: gas_constants.l1_tx_intrinsic_pubdata, l1_tx_min_l2_gas_base: gas_constants.l1_tx_min_gas_base, @@ -66,7 +73,7 @@ pub fn generate_l1_contracts_system_config(gas_constants: &IntrinsicSystemGasCon l1_tx_delta_factory_deps_l2_gas: gas_constants.l1_tx_delta_factory_dep_gas, l1_tx_delta_factory_deps_pubdata: gas_constants.l1_tx_delta_factory_dep_pubdata, max_new_factory_deps: MAX_NEW_FACTORY_DEPS as u32, - required_l2_gas_price_per_pubdata: MAX_GAS_PER_PUBDATA_BYTE, + required_l2_gas_price_per_pubdata: REQUIRED_L1_TO_L2_GAS_PER_PUBDATA_BYTE, }; serde_json::to_string_pretty(&l1_contracts_config).unwrap() @@ -79,7 +86,6 @@ struct L2SystemConfig { guaranteed_pubdata_bytes: u32, max_pubdata_per_block: u32, max_transactions_in_block: u32, - block_overhead_l2_gas: u32, block_overhead_l1_gas: u32, l2_tx_intrinsic_gas: u32, l2_tx_intrinsic_pubdata: u32, @@ -97,15 +103,18 @@ pub fn generate_l2_contracts_system_config(gas_constants: &IntrinsicSystemGasCon let l2_contracts_config = L2SystemConfig { guaranteed_pubdata_bytes: GUARANTEED_PUBDATA_IN_TX, max_pubdata_per_block: MAX_PUBDATA_PER_BLOCK, - max_transactions_in_block: MAX_TXS_IN_BLOCK as u32, - block_overhead_l2_gas: BLOCK_OVERHEAD_GAS, + max_transactions_in_block: get_bootloader_max_txs_in_batch( + ProtocolVersionId::latest().into(), + ) as u32, block_overhead_l1_gas: BLOCK_OVERHEAD_L1_GAS, l2_tx_intrinsic_gas: gas_constants.l2_tx_intrinsic_gas, l2_tx_intrinsic_pubdata: gas_constants.l2_tx_intrinsic_pubdata, l1_tx_intrinsic_l2_gas: gas_constants.l1_tx_intrinsic_gas, l1_tx_intrinsic_pubdata: gas_constants.l1_tx_intrinsic_pubdata, max_gas_per_transaction: MAX_TX_ERGS_LIMIT, - bootloader_memory_for_txs: BOOTLOADER_TX_ENCODING_SPACE, + bootloader_memory_for_txs: get_bootloader_encoding_space( + ProtocolVersionId::latest().into(), + ), refund_gas: 
gas_constants.l2_tx_gas_for_refund_transfer, keccak_round_cost_gas: KECCAK256_CIRCUIT_COST_IN_ERGS, sha256_round_cost_gas: SHA256_CIRCUIT_COST_IN_ERGS, @@ -222,7 +231,10 @@ fn update_l1_system_constants(intrinsic_gas_constants: &IntrinsicSystemGasConsta fn update_l2_system_constants(intrinsic_gas_constants: &IntrinsicSystemGasConstants) { let l2_system_config = generate_l2_contracts_system_config(intrinsic_gas_constants); - save_file("etc/system-contracts/SystemConfig.json", l2_system_config); + save_file( + "contracts/system-contracts/SystemConfig.json", + l2_system_config, + ); } fn main() { diff --git a/core/bin/system-constants-generator/src/utils.rs b/core/bin/system-constants-generator/src/utils.rs index fc576ff44ee..993a995e619 100644 --- a/core/bin/system-constants-generator/src/utils.rs +++ b/core/bin/system-constants-generator/src/utils.rs @@ -1,28 +1,29 @@ -use once_cell::sync::Lazy; -use std::cell::RefCell; -use std::rc::Rc; - -use multivm::interface::{ - dyn_tracers::vm_1_4_0::DynTracer, tracer::VmExecutionStopReason, L1BatchEnv, L2BlockEnv, - SystemEnv, TxExecutionMode, VmExecutionMode, VmInterface, -}; -use multivm::vm_latest::{ - constants::{BLOCK_GAS_LIMIT, BOOTLOADER_HEAP_PAGE}, - BootloaderState, HistoryEnabled, HistoryMode, SimpleMemory, ToTracerPointer, Vm, VmTracer, - ZkSyncVmState, +use std::{cell::RefCell, rc::Rc}; + +use multivm::{ + interface::{ + dyn_tracers::vm_1_4_1::DynTracer, tracer::VmExecutionStopReason, L1BatchEnv, L2BlockEnv, + SystemEnv, TxExecutionMode, VmExecutionMode, VmInterface, + }, + vm_latest::{ + constants::{BLOCK_GAS_LIMIT, BOOTLOADER_HEAP_PAGE}, + BootloaderState, HistoryEnabled, HistoryMode, SimpleMemory, ToTracerPointer, Vm, VmTracer, + ZkSyncVmState, + }, + zk_evm_1_4_1::aux_structures::Timestamp, }; +use once_cell::sync::Lazy; use zksync_contracts::{ load_sys_contract, read_bootloader_code, read_sys_contract_bytecode, read_zbin_bytecode, BaseSystemContracts, ContractLanguage, SystemContractCode, }; use zksync_state::{InMemoryStorage, StorageView, WriteStorage}; use zksync_types::{ - block::legacy_miniblock_hash, ethabi::Token, fee::Fee, l1::L1Tx, l2::L2Tx, + block::MiniblockHasher, ethabi::Token, fee::Fee, fee_model::BatchFeeInput, l1::L1Tx, l2::L2Tx, utils::storage_key_for_eth_balance, AccountTreeId, Address, Execute, L1BatchNumber, - L1TxCommonData, L2ChainId, MiniblockNumber, Nonce, ProtocolVersionId, StorageKey, Timestamp, - Transaction, BOOTLOADER_ADDRESS, H256, SYSTEM_CONTEXT_ADDRESS, - SYSTEM_CONTEXT_GAS_PRICE_POSITION, SYSTEM_CONTEXT_TX_ORIGIN_POSITION, U256, - ZKPORTER_IS_AVAILABLE, + L1TxCommonData, L2ChainId, MiniblockNumber, Nonce, ProtocolVersionId, StorageKey, Transaction, + BOOTLOADER_ADDRESS, H256, SYSTEM_CONTEXT_ADDRESS, SYSTEM_CONTEXT_GAS_PRICE_POSITION, + SYSTEM_CONTEXT_TX_ORIGIN_POSITION, U256, ZKPORTER_IS_AVAILABLE, }; use zksync_utils::{bytecode::hash_bytecode, bytes_to_be_words, u256_to_h256}; @@ -163,8 +164,8 @@ pub(super) fn get_l1_txs(number_of_txs: usize) -> (Vec, Vec Vec { read_zbin_bytecode(format!( - "etc/system-contracts/bootloader/tests/artifacts/{}.yul/{}.yul.zbin", - test, test + "contracts/system-contracts/bootloader/tests/artifacts/{}.yul.zbin", + test )) } @@ -173,14 +174,16 @@ fn default_l1_batch() -> L1BatchEnv { previous_batch_hash: None, number: L1BatchNumber(1), timestamp: 100, - l1_gas_price: 50_000_000_000, // 50 gwei - fair_l2_gas_price: 250_000_000, // 0.25 gwei + fee_input: BatchFeeInput::l1_pegged( + 50_000_000_000, // 50 gwei + 250_000_000, // 0.25 gwei + ), fee_account: Address::random(), 
enforced_base_fee: None, first_l2_block: L2BlockEnv { number: 1, timestamp: 100, - prev_block_hash: legacy_miniblock_hash(MiniblockNumber(0)), + prev_block_hash: MiniblockHasher::legacy_hash(MiniblockNumber(0)), max_virtual_blocks_to_create: 100, }, } diff --git a/core/bin/verification_key_generator_and_server/Cargo.toml b/core/bin/verification_key_generator_and_server/Cargo.toml deleted file mode 100644 index b49683424a4..00000000000 --- a/core/bin/verification_key_generator_and_server/Cargo.toml +++ /dev/null @@ -1,37 +0,0 @@ -[package] -name = "zksync_verification_key_generator_and_server" -version = "0.1.0" -edition = "2018" -license = "MIT OR Apache-2.0" - -[lib] -name = "zksync_verification_key_server" -path = "src/lib.rs" - -[[bin]] -name = "zksync_verification_key_generator" -path = "src/main.rs" - -[[bin]] -name = "zksync_json_to_binary_vk_converter" -path = "src/json_to_binary_vk_converter.rs" - -[[bin]] -name = "zksync_commitment_generator" -path = "src/commitment_generator.rs" - -[dependencies] -zksync_types = { path = "../../lib/types" } -zksync_prover_utils = { path = "../../lib/prover_utils" } -vlog = { path = "../../lib/vlog" } -circuit_testing = { git = "https://github.com/matter-labs/era-circuit_testing.git", branch = "main" } -itertools = "0.10.5" -bincode = "1.3.3" - -anyhow = "1.0" -serde_json = "1.0.85" -hex = "0.4.3" -structopt = "0.3.26" -ff = { package = "ff_ce", version = "0.14.1" } -once_cell = "1.8.0" -tracing = "0.1" diff --git a/core/bin/verification_key_generator_and_server/README.md b/core/bin/verification_key_generator_and_server/README.md deleted file mode 100644 index efe7d3f99a4..00000000000 --- a/core/bin/verification_key_generator_and_server/README.md +++ /dev/null @@ -1,39 +0,0 @@ -# Verification keys - -We currently have around 20 different circuits like: Scheduler, Leaf, KeccakPrecompile etc (for the full list - look at -CircuitType enum in sync_vm repo). - -Each such circuit requires a separate verification key. - -This crate fulfills 2 roles: - -- it has the binaries that can generate the updated versions of the keys (for example if VM code changes) -- it provides the libraries that can be used by other components that need to use these keys (for example provers) - - behaving like a key server. - -Moreover, all these keys are submitted as code within the repo in `verification_XX_key.json` files. - -## zksync_verification_key_server - -This is the library that can be used by other components to fetch the verification key for a given circuit (using -`get_vk_for_circuit_type` function). - -## zksync_verification_key_generator - -The main binary that generates verification key for given circuits. Most of the heavy lifting is done by the -`create_vk_for_padding_size_log_2` method from circuit_testing repo. - -The results are written to the `verification_XX_key.json` files in the current repository. - -## zksync_json_to_binary_vk_converter - -Converts the local json verification keys into the binary format (and stores them in the output directory). - -## zksync_commitment_generator - -This tool takes the 3 commitments (one for all the basic circuits, one for node and one for leaf), computed based on the -current verification keys - and updates the contract.toml config file (which is located in etc/env/base/contracts.toml). 
- -These commitments are later used in one of the circuit breakers - to compare their values to the commitments that L1 -contract holds (so that we can 'panic' properly - if we notice that our server's commitments differ from the L1 -contracts - which would result in failed L1 transactions). diff --git a/core/bin/verification_key_generator_and_server/data/verification_0_key.json b/core/bin/verification_key_generator_and_server/data/verification_0_key.json deleted file mode 100644 index c3262193a4f..00000000000 --- a/core/bin/verification_key_generator_and_server/data/verification_0_key.json +++ /dev/null @@ -1,399 +0,0 @@ -{ - "n": 67108863, - "num_inputs": 1, - "state_width": 4, - "num_witness_polys": 0, - "gate_setup_commitments": [ - { - "x": [ - 14745348174000482855, - 2839037062185937123, - 3369862715588854899, - 1495909583940713128 - ], - "y": [ - 6859454683840363585, - 11340551061368171664, - 9528805406487149561, - 3414144677220223705 - ], - "infinity": false - }, - { - "x": [ - 9215749870136224396, - 18418669114332753377, - 13140219601461030180, - 2381098845928447331 - ], - "y": [ - 8834765081837029169, - 4424842234296363904, - 13294547557836067005, - 414624398145171890 - ], - "infinity": false - }, - { - "x": [ - 2148575411987453084, - 16730180692461995258, - 12423475767707134837, - 3014264170083149730 - ], - "y": [ - 10870860158804422503, - 14060279526953529989, - 2266257082861680293, - 22356173050560284 - ], - "infinity": false - }, - { - "x": [ - 17803008042411335770, - 5713064950476621403, - 17979342410816871746, - 491265656076548841 - ], - "y": [ - 9823492080506672630, - 3637386621225409615, - 8776978043600973097, - 2514196809208915768 - ], - "infinity": false - }, - { - "x": [ - 3768479078383323179, - 16153057542709544671, - 10578964798085613273, - 2831188075764800753 - ], - "y": [ - 2387514805820590694, - 15085489652142686165, - 8141513931186597223, - 1582376980242699819 - ], - "infinity": false - }, - { - "x": [ - 5395455814671474247, - 5013790368139874617, - 8671649443504728767, - 839142828943885970 - ], - "y": [ - 11231626069154926735, - 5078347962234771017, - 17373886182204596447, - 513647957075879347 - ], - "infinity": false - }, - { - "x": [ - 8940485327950054531, - 9156997542069636576, - 14316753178152000598, - 3357551869664255582 - ], - "y": [ - 14102490706504125272, - 4494991810930729808, - 15532318871086968466, - 1537365238286274178 - ], - "infinity": false - }, - { - "x": [ - 13914906478277300859, - 6213896071228541481, - 4364409818367302306, - 659097390118096039 - ], - "y": [ - 7328372274594390887, - 2650332638498669615, - 15455628473476960005, - 3119379427019958230 - ], - "infinity": false - } - ], - "gate_selectors_commitments": [ - { - "x": [ - 9438200511694036157, - 11094162170960057340, - 9123678872696723713, - 2950597355117190054 - ], - "y": [ - 6153972960518016517, - 8045683598100955864, - 13410633858416643489, - 988361678931464913 - ], - "infinity": false - }, - { - "x": [ - 805964423710846142, - 13603470797942296854, - 11292123377140077447, - 1455913517812009773 - ], - "y": [ - 4541622738043214385, - 8186357170000535775, - 4765839113294831637, - 3026863977499737494 - ], - "infinity": false - } - ], - "permutation_commitments": [ - { - "x": [ - 1851039213129741497, - 11907960788190413713, - 2882727828085561070, - 1451278944954982956 - ], - "y": [ - 15245785050592773860, - 1774295027236395480, - 3373069120056880915, - 1080245109458702174 - ], - "infinity": false - }, - { - "x": [ - 9366052859968548005, - 12275028918364559591, - 2472023927159177225, - 
1052535074027277666 - ], - "y": [ - 2428574557555628629, - 15067392861858369528, - 16949255188095910778, - 2297925771936569168 - ], - "infinity": false - }, - { - "x": [ - 17016009610362956206, - 4047659663396753591, - 1832464593155416403, - 2725142957049914767 - ], - "y": [ - 12447928856414787240, - 3072280375285720285, - 12294239288643819494, - 613511140380288958 - ], - "infinity": false - }, - { - "x": [ - 6312774245791141720, - 496150993329472460, - 12773767122915456934, - 3404402910494500531 - ], - "y": [ - 13852578578747731084, - 9030931732410275304, - 17159996848865265705, - 1696956882146098553 - ], - "infinity": false - } - ], - "total_lookup_entries_length": 1073530, - "lookup_selector_commitment": { - "x": [ - 4441974708940861232, - 11325614820129407652, - 7273013871150456559, - 2270181644629652201 - ], - "y": [ - 3070631142979677922, - 15247189094202672776, - 12651459662740804392, - 1832216259472686694 - ], - "infinity": false - }, - "lookup_tables_commitments": [ - { - "x": [ - 631990924006796604, - 16139625628991115157, - 13331739325995827711, - 1062301837743594995 - ], - "y": [ - 15303054606290800139, - 15906872095881647437, - 7093896572295020249, - 1342952934989901142 - ], - "infinity": false - }, - { - "x": [ - 7983921919542246393, - 13296544189644416678, - 17081022784392007697, - 1980832835348244027 - ], - "y": [ - 10874958134865200330, - 7702740658637630534, - 14052057929798961943, - 3193353539419869016 - ], - "infinity": false - }, - { - "x": [ - 1114587284824996932, - 4636906500482867924, - 15328247172597030456, - 87946895873973686 - ], - "y": [ - 15573033830207915877, - 5194694185599035278, - 2562407345425607214, - 2782078999306862675 - ], - "infinity": false - }, - { - "x": [ - 18225112781127431982, - 18048613958187123807, - 7325490730844456621, - 1953409020724855888 - ], - "y": [ - 7577000130125917198, - 6193701449695751861, - 4102082927677054717, - 395350071385269650 - ], - "infinity": false - } - ], - "lookup_table_type_commitment": { - "x": [ - 7312875299592476003, - 313526216906044060, - 13914875394436353152, - 3424388477700656316 - ], - "y": [ - 2572062173996296044, - 5984767625164919974, - 12005537293370417131, - 616463121946800406 - ], - "infinity": false - }, - "non_residues": [ - [ - 5, - 0, - 0, - 0 - ], - [ - 7, - 0, - 0, - 0 - ], - [ - 10, - 0, - 0, - 0 - ] - ], - "g2_elements": [ - { - "x": { - "c0": [ - 5106727233969649389, - 7440829307424791261, - 4785637993704342649, - 1729627375292849782 - ], - "c1": [ - 10945020018377822914, - 17413811393473931026, - 8241798111626485029, - 1841571559660931130 - ] - }, - "y": { - "c0": [ - 5541340697920699818, - 16416156555105522555, - 5380518976772849807, - 1353435754470862315 - ], - "c1": [ - 6173549831154472795, - 13567992399387660019, - 17050234209342075797, - 650358724130500725 - ] - }, - "infinity": false - }, - { - "x": { - "c0": [ - 9089143573911733168, - 11482283522806384523, - 13585589533905622862, - 79029415676722370 - ], - "c1": [ - 5692040832573735873, - 16884514497384809355, - 16717166481813659368, - 2742131088506155463 - ] - }, - "y": { - "c0": [ - 9604638503594647125, - 1289961608472612514, - 6217038149984805214, - 2521661352385209130 - ], - "c1": [ - 17168069778630926308, - 11309277837895768996, - 15154989611154567813, - 359271377050603491 - ] - }, - "infinity": false - } - ] -} \ No newline at end of file diff --git a/core/bin/verification_key_generator_and_server/data/verification_10_key.json b/core/bin/verification_key_generator_and_server/data/verification_10_key.json deleted file mode 100644 
index ec9d3727bff..00000000000 --- a/core/bin/verification_key_generator_and_server/data/verification_10_key.json +++ /dev/null @@ -1,399 +0,0 @@ -{ - "n": 67108863, - "num_inputs": 1, - "state_width": 4, - "num_witness_polys": 0, - "gate_setup_commitments": [ - { - "x": [ - 4364720487844379181, - 17010766725144227333, - 1022678199111276314, - 1146578362772127376 - ], - "y": [ - 10340654727439455072, - 12691578856596245032, - 837883495763401146, - 2135776887902289239 - ], - "infinity": false - }, - { - "x": [ - 14564870240038241482, - 16001391704613609683, - 16397364612792898214, - 1316914335235774452 - ], - "y": [ - 2386942353392090183, - 4642131766714508143, - 16789479723446408276, - 2261353401184907401 - ], - "infinity": false - }, - { - "x": [ - 6081056006818109026, - 14051483412950926523, - 8605392534710099348, - 1527183574619010123 - ], - "y": [ - 3896696527234063839, - 12862398541231039501, - 1005646628007936886, - 3479645512156004366 - ], - "infinity": false - }, - { - "x": [ - 11266242489999219523, - 8100856016495224488, - 6788749864393617587, - 482299081118345826 - ], - "y": [ - 225211373180020785, - 6498635074385582091, - 4274055525472487569, - 2578651815252093838 - ], - "infinity": false - }, - { - "x": [ - 10378455392293934375, - 13391940670290769236, - 10463014668466536299, - 472544008986099462 - ], - "y": [ - 1502016714118108544, - 14252801754530793876, - 2203844491975584716, - 1116114255465135672 - ], - "infinity": false - }, - { - "x": [ - 9703616742074407567, - 9691703077434834222, - 7366620887865105973, - 36165572355418066 - ], - "y": [ - 7430304832706471782, - 5173267152399523091, - 14416699599905226230, - 2681204653630184824 - ], - "infinity": false - }, - { - "x": [ - 9347312215430913530, - 13606433894103359668, - 14013475178334262360, - 2947181048682744075 - ], - "y": [ - 4001199390012145932, - 4622813642635649819, - 16433672063298879053, - 1247842462976799965 - ], - "infinity": false - }, - { - "x": [ - 1639425503718708209, - 8242804754724970899, - 11043260258533730377, - 2245145560504199089 - ], - "y": [ - 14202551139064230506, - 4307109380979442947, - 13141687433511141087, - 1913204959448290015 - ], - "infinity": false - } - ], - "gate_selectors_commitments": [ - { - "x": [ - 17540836040216578409, - 14577118461028955096, - 2300935836423716880, - 427649651480863044 - ], - "y": [ - 13066723755606073272, - 17324941433857131282, - 1679499122173566851, - 3298750515604566671 - ], - "infinity": false - }, - { - "x": [ - 14709152157752642079, - 13510549649315108277, - 3019440420162710858, - 627188607991231846 - ], - "y": [ - 16615706301470133997, - 915024441316023047, - 13798541787831785917, - 3340602253234308653 - ], - "infinity": false - } - ], - "permutation_commitments": [ - { - "x": [ - 12626704863583094704, - 3308474372162220296, - 16088806788444947642, - 636430705662147361 - ], - "y": [ - 17052785040105865748, - 11203395113497209978, - 2939609765212411460, - 3167290643533167611 - ], - "infinity": false - }, - { - "x": [ - 3075146465184418179, - 11559452521956513155, - 1656597085428845901, - 1618447062156730856 - ], - "y": [ - 2010693621773175313, - 2977509893150409878, - 9431891659616951962, - 1776222288355278384 - ], - "infinity": false - }, - { - "x": [ - 6408318860212838666, - 9847136022608767026, - 18080834927350013528, - 3306285138140631107 - ], - "y": [ - 16064928058583899597, - 461689523483649779, - 13572099112445223829, - 1563453638232968523 - ], - "infinity": false - }, - { - "x": [ - 327171445663828020, - 12706053900720413614, - 9237483585964880752, - 
1960293149538216528 - ], - "y": [ - 11030775691809003651, - 11089052388657955457, - 3209890793790993499, - 1198867574642866523 - ], - "infinity": false - } - ], - "total_lookup_entries_length": 5202052, - "lookup_selector_commitment": { - "x": [ - 781239045644769777, - 14316527640474633593, - 2443643435827373112, - 3049372365263474427 - ], - "y": [ - 4073012743593667819, - 16009537994875540924, - 11173412503242869179, - 1513208421597995174 - ], - "infinity": false - }, - "lookup_tables_commitments": [ - { - "x": [ - 697552212563769686, - 7709943502535418760, - 15019345407325619175, - 3433081085078580257 - ], - "y": [ - 8668947019840357731, - 14698901351824712883, - 15088598879190660424, - 2873081208166433946 - ], - "infinity": false - }, - { - "x": [ - 7893133928909060673, - 7064922516930129957, - 3592836702741304814, - 2239702595710114437 - ], - "y": [ - 7691360541875191519, - 11379321785127235277, - 6653616064071569031, - 2555434628517540774 - ], - "infinity": false - }, - { - "x": [ - 6243944238013052821, - 7908243182210136125, - 17178099109525791299, - 2553622184721264566 - ], - "y": [ - 736121280088239428, - 6158073429758170526, - 11217302997977204117, - 2594798912020899417 - ], - "infinity": false - }, - { - "x": [ - 2064240298596094591, - 16917726764104887991, - 11042784977532408536, - 3377647228930170830 - ], - "y": [ - 10635525052494768819, - 387400048616497096, - 9379200582543310995, - 1571766153703296253 - ], - "infinity": false - } - ], - "lookup_table_type_commitment": { - "x": [ - 7603211811706190713, - 2486982239745271096, - 11528266448545919500, - 3080741880407152411 - ], - "y": [ - 7967754771633653173, - 6016822892450616749, - 9688696792558711613, - 2682562048141398047 - ], - "infinity": false - }, - "non_residues": [ - [ - 5, - 0, - 0, - 0 - ], - [ - 7, - 0, - 0, - 0 - ], - [ - 10, - 0, - 0, - 0 - ] - ], - "g2_elements": [ - { - "x": { - "c0": [ - 5106727233969649389, - 7440829307424791261, - 4785637993704342649, - 1729627375292849782 - ], - "c1": [ - 10945020018377822914, - 17413811393473931026, - 8241798111626485029, - 1841571559660931130 - ] - }, - "y": { - "c0": [ - 5541340697920699818, - 16416156555105522555, - 5380518976772849807, - 1353435754470862315 - ], - "c1": [ - 6173549831154472795, - 13567992399387660019, - 17050234209342075797, - 650358724130500725 - ] - }, - "infinity": false - }, - { - "x": { - "c0": [ - 9089143573911733168, - 11482283522806384523, - 13585589533905622862, - 79029415676722370 - ], - "c1": [ - 5692040832573735873, - 16884514497384809355, - 16717166481813659368, - 2742131088506155463 - ] - }, - "y": { - "c0": [ - 9604638503594647125, - 1289961608472612514, - 6217038149984805214, - 2521661352385209130 - ], - "c1": [ - 17168069778630926308, - 11309277837895768996, - 15154989611154567813, - 359271377050603491 - ] - }, - "infinity": false - } - ] -} \ No newline at end of file diff --git a/core/bin/verification_key_generator_and_server/data/verification_11_key.json b/core/bin/verification_key_generator_and_server/data/verification_11_key.json deleted file mode 100644 index ec60b1b5c70..00000000000 --- a/core/bin/verification_key_generator_and_server/data/verification_11_key.json +++ /dev/null @@ -1,399 +0,0 @@ -{ - "n": 67108863, - "num_inputs": 1, - "state_width": 4, - "num_witness_polys": 0, - "gate_setup_commitments": [ - { - "x": [ - 6404793958941109752, - 600086648940026770, - 17621036346050218167, - 648286585825030202 - ], - "y": [ - 15536368541166505022, - 13874331483468128999, - 15299774519724050181, - 694528839710637549 - ], - 
"infinity": false - }, - { - "x": [ - 8437895530551083583, - 9515418928119648176, - 13043255827139294721, - 2995712510038409810 - ], - "y": [ - 2599666661350767554, - 5213004864468121936, - 3448071048439343925, - 3372727479169634860 - ], - "infinity": false - }, - { - "x": [ - 4949545806128010634, - 7991544258837652527, - 13984289231122041826, - 435264553263929947 - ], - "y": [ - 5315155210033461895, - 5269954775753247626, - 8365554241810378947, - 3038338810517586456 - ], - "infinity": false - }, - { - "x": [ - 10765735847634894938, - 996016141851615448, - 17905928073714218280, - 1382306444325686451 - ], - "y": [ - 2138154197587423296, - 10332772886666867909, - 18365120064743353477, - 3036329558617382049 - ], - "infinity": false - }, - { - "x": [ - 10826908009799408310, - 17008417534705779156, - 6763973494549063072, - 2085829964414931488 - ], - "y": [ - 8778528796073273991, - 3575354418973385595, - 7700555759899743641, - 2991788183234680231 - ], - "infinity": false - }, - { - "x": [ - 4838537981048085423, - 17733460364049897496, - 2406410363431464143, - 317979983533551325 - ], - "y": [ - 1063783130085451648, - 17468950496650586998, - 1638492556781126884, - 2655791721465286744 - ], - "infinity": false - }, - { - "x": [ - 9900079822056413611, - 2971494295919434281, - 3851188096409515874, - 1674965457600938162 - ], - "y": [ - 278026997091552202, - 4169606578927284200, - 4285297176993939496, - 1835673146863992148 - ], - "infinity": false - }, - { - "x": [ - 14972922803706426724, - 1950002897609593521, - 14885502244328862256, - 2711533695106895845 - ], - "y": [ - 6445273103061253271, - 13093783937225622775, - 16913300898726970338, - 3338984185497324237 - ], - "infinity": false - } - ], - "gate_selectors_commitments": [ - { - "x": [ - 7023363902839996761, - 10470701207992157969, - 15655647820064667897, - 1574806151825297776 - ], - "y": [ - 5374465760860613169, - 17808737811039085287, - 9497881147171478776, - 2496973717640690197 - ], - "infinity": false - }, - { - "x": [ - 11667333913021610767, - 981513539224109240, - 906325130343873228, - 2938085706999497365 - ], - "y": [ - 12114685726509803851, - 8176447551157079615, - 4677211732718215770, - 612959750791398009 - ], - "infinity": false - } - ], - "permutation_commitments": [ - { - "x": [ - 5178916486603003859, - 12440762249350081718, - 17531240512375127539, - 562979322442547791 - ], - "y": [ - 13269831614205338393, - 14075713698585784838, - 5009519510530479124, - 346033861980045408 - ], - "infinity": false - }, - { - "x": [ - 9815443577325313677, - 10727907015331332054, - 7582395371050260833, - 1746872659838481572 - ], - "y": [ - 3973552805135639320, - 14426732004648741961, - 8133164322153358522, - 2668541869556858228 - ], - "infinity": false - }, - { - "x": [ - 4868257934818957423, - 11529848268525929099, - 7089666284160764141, - 796901367628793969 - ], - "y": [ - 991195814042705325, - 1559922382138761102, - 15616159453482282503, - 1031107741111093289 - ], - "infinity": false - }, - { - "x": [ - 17936772813090339705, - 10208762457499980701, - 14796710996322725970, - 638550977107438851 - ], - "y": [ - 5073905611192321777, - 2956648407808816974, - 7778989780119416172, - 2955106321082932072 - ], - "infinity": false - } - ], - "total_lookup_entries_length": 7960377, - "lookup_selector_commitment": { - "x": [ - 1083743271968869166, - 3134203175755215736, - 5835502497758804469, - 3010956977291777466 - ], - "y": [ - 3645612220088813035, - 32844736552579976, - 5426466326302260857, - 1489565191618899261 - ], - "infinity": false - }, - 
"lookup_tables_commitments": [ - { - "x": [ - 5825422128268478267, - 9219263846299851036, - 3879231702557190566, - 1702488722758880769 - ], - "y": [ - 18311881100262470992, - 5742998199368802392, - 18106865487471159417, - 502191980176920012 - ], - "infinity": false - }, - { - "x": [ - 17195892082859417081, - 7890531942603584793, - 2381805632820057528, - 3173232410464566465 - ], - "y": [ - 16359614627947132075, - 3459600273035137079, - 4550762061432972122, - 3394559699318358224 - ], - "infinity": false - }, - { - "x": [ - 1716103379277390185, - 18097936269579187542, - 16357329729761063450, - 1508640059338197502 - ], - "y": [ - 11014806739603983364, - 4396503314588777389, - 9397245609635151055, - 1703957955248411380 - ], - "infinity": false - }, - { - "x": [ - 4770171350693477354, - 17110558673192292253, - 9799800677557311408, - 761984875463445481 - ], - "y": [ - 1560561403388310063, - 31331275310848146, - 287152055803835484, - 457826332542037277 - ], - "infinity": false - } - ], - "lookup_table_type_commitment": { - "x": [ - 11327495732840772606, - 7407664417001729515, - 9486600059857658309, - 3060296564241189838 - ], - "y": [ - 7624492872489320847, - 18248981556039704277, - 3877205757853252152, - 939885486002612376 - ], - "infinity": false - }, - "non_residues": [ - [ - 5, - 0, - 0, - 0 - ], - [ - 7, - 0, - 0, - 0 - ], - [ - 10, - 0, - 0, - 0 - ] - ], - "g2_elements": [ - { - "x": { - "c0": [ - 5106727233969649389, - 7440829307424791261, - 4785637993704342649, - 1729627375292849782 - ], - "c1": [ - 10945020018377822914, - 17413811393473931026, - 8241798111626485029, - 1841571559660931130 - ] - }, - "y": { - "c0": [ - 5541340697920699818, - 16416156555105522555, - 5380518976772849807, - 1353435754470862315 - ], - "c1": [ - 6173549831154472795, - 13567992399387660019, - 17050234209342075797, - 650358724130500725 - ] - }, - "infinity": false - }, - { - "x": { - "c0": [ - 9089143573911733168, - 11482283522806384523, - 13585589533905622862, - 79029415676722370 - ], - "c1": [ - 5692040832573735873, - 16884514497384809355, - 16717166481813659368, - 2742131088506155463 - ] - }, - "y": { - "c0": [ - 9604638503594647125, - 1289961608472612514, - 6217038149984805214, - 2521661352385209130 - ], - "c1": [ - 17168069778630926308, - 11309277837895768996, - 15154989611154567813, - 359271377050603491 - ] - }, - "infinity": false - } - ] -} \ No newline at end of file diff --git a/core/bin/verification_key_generator_and_server/data/verification_12_key.json b/core/bin/verification_key_generator_and_server/data/verification_12_key.json deleted file mode 100644 index fec076f39ed..00000000000 --- a/core/bin/verification_key_generator_and_server/data/verification_12_key.json +++ /dev/null @@ -1,399 +0,0 @@ -{ - "n": 67108863, - "num_inputs": 1, - "state_width": 4, - "num_witness_polys": 0, - "gate_setup_commitments": [ - { - "x": [ - 456514006020943025, - 9595480195714948127, - 12254096252487404245, - 1742692690750856358 - ], - "y": [ - 16294223586064957217, - 3958270970168887906, - 11264067544872898258, - 1692817687935973108 - ], - "infinity": false - }, - { - "x": [ - 1359655052308122459, - 13840124148496555776, - 1774237333490664500, - 2964872651584750318 - ], - "y": [ - 11907598503482948769, - 8700506041798646988, - 15081040576888859990, - 3096802642049924528 - ], - "infinity": false - }, - { - "x": [ - 2884314851670818573, - 13442465544210396156, - 5937955495868181363, - 2486997439179977778 - ], - "y": [ - 9309776793338098458, - 14492906371677122697, - 8837309186596588911, - 1081143755093508499 - ], - 
"infinity": false - }, - { - "x": [ - 2655654413304275855, - 4244723109566147837, - 12150359360501203194, - 3338981627918702615 - ], - "y": [ - 2522870072161287404, - 17341373219317210182, - 13058930363994599297, - 210373422168410518 - ], - "infinity": false - }, - { - "x": [ - 16728834675380740056, - 2139390496020366235, - 9480389182940223467, - 2279560291896695719 - ], - "y": [ - 12461418813218976432, - 357566005384566098, - 5295578385080568808, - 1801243085576438875 - ], - "infinity": false - }, - { - "x": [ - 8716201428771436123, - 3392394702404760386, - 9990956922582058945, - 1388317411153212399 - ], - "y": [ - 11666415392681680155, - 10553517485129490455, - 16061047708722635939, - 2386622646140901822 - ], - "infinity": false - }, - { - "x": [ - 16162432560623854812, - 15537581062716888632, - 12927223782958923606, - 2800634589869451227 - ], - "y": [ - 5345141365329635916, - 2224393250977631865, - 396527108738048188, - 2298318725146167177 - ], - "infinity": false - }, - { - "x": [ - 18372685954785469756, - 10436523365152935441, - 15509622927999798123, - 2050428620045833325 - ], - "y": [ - 4996265985148335658, - 6073112270434155721, - 4873288683270752338, - 503179567393027927 - ], - "infinity": false - } - ], - "gate_selectors_commitments": [ - { - "x": [ - 4986139828502830074, - 8644425445976253042, - 4851433922656693398, - 1419574698085640872 - ], - "y": [ - 16192186537521161947, - 16183885683582261905, - 1655718756619164666, - 3420236094426390604 - ], - "infinity": false - }, - { - "x": [ - 10727231722644915889, - 13777116005624794169, - 1422623412369619026, - 1701279717637612575 - ], - "y": [ - 6503647097427010249, - 6381043883853023011, - 15391366286376907281, - 1261207976874708261 - ], - "infinity": false - } - ], - "permutation_commitments": [ - { - "x": [ - 11852073725466955067, - 179170887563176222, - 17529899074897279348, - 2496783194148289461 - ], - "y": [ - 15490041181991978284, - 6745436372504113852, - 7017978386715410058, - 3482556315200370895 - ], - "infinity": false - }, - { - "x": [ - 1330152738947291505, - 1668990644246591877, - 6805443255260621096, - 1309987766073890626 - ], - "y": [ - 18322300356676620444, - 8225233874302527542, - 5744327785164342590, - 410571567010522636 - ], - "infinity": false - }, - { - "x": [ - 13968210937929584911, - 17067601391996082961, - 4861463652254416951, - 2147834012714370408 - ], - "y": [ - 9012483356698219484, - 8660929519763525826, - 17744882010750642463, - 331423342438323189 - ], - "infinity": false - }, - { - "x": [ - 1352282553299127274, - 8587971715415488300, - 2471024479841756772, - 1239586065229072559 - ], - "y": [ - 1597792022909153930, - 5020991346876715357, - 5622801511814109910, - 1916460940163680567 - ], - "infinity": false - } - ], - "total_lookup_entries_length": 46287674, - "lookup_selector_commitment": { - "x": [ - 11573469000684493293, - 15304040816406013002, - 9206902553183544808, - 2597693769113957036 - ], - "y": [ - 10538181061926273477, - 5239567589495426242, - 3627181047901924882, - 302644994241575377 - ], - "infinity": false - }, - "lookup_tables_commitments": [ - { - "x": [ - 5134795695995115566, - 12287750992060803275, - 3112021177339560487, - 2737779104829043419 - ], - "y": [ - 12960786984497012138, - 17246059378047870426, - 11486754204718893642, - 46104506716724806 - ], - "infinity": false - }, - { - "x": [ - 148472607159578301, - 1393814398025790148, - 13651878286378332448, - 3460878321325997474 - ], - "y": [ - 10791022888598424744, - 1931353219232076143, - 12342018346439101174, - 23632989633122111 - ], - 
"infinity": false - }, - { - "x": [ - 1355031833403957875, - 10754997913401276231, - 8672292473740482178, - 3014145653612856517 - ], - "y": [ - 3728402825933673134, - 16492594359417243041, - 14619929139939206930, - 2894280666048705144 - ], - "infinity": false - }, - { - "x": [ - 11362104917939269301, - 3050269804312222606, - 17884269955997757593, - 2804911625130359365 - ], - "y": [ - 9563576475625880180, - 9736108320914226650, - 11545696954602328389, - 1108440262014676246 - ], - "infinity": false - } - ], - "lookup_table_type_commitment": { - "x": [ - 5367643753678334453, - 18149093736372716410, - 1335188566370936146, - 668596617655217713 - ], - "y": [ - 9984652217894703540, - 16253861114794085212, - 2139268495406835151, - 710303505771002735 - ], - "infinity": false - }, - "non_residues": [ - [ - 5, - 0, - 0, - 0 - ], - [ - 7, - 0, - 0, - 0 - ], - [ - 10, - 0, - 0, - 0 - ] - ], - "g2_elements": [ - { - "x": { - "c0": [ - 5106727233969649389, - 7440829307424791261, - 4785637993704342649, - 1729627375292849782 - ], - "c1": [ - 10945020018377822914, - 17413811393473931026, - 8241798111626485029, - 1841571559660931130 - ] - }, - "y": { - "c0": [ - 5541340697920699818, - 16416156555105522555, - 5380518976772849807, - 1353435754470862315 - ], - "c1": [ - 6173549831154472795, - 13567992399387660019, - 17050234209342075797, - 650358724130500725 - ] - }, - "infinity": false - }, - { - "x": { - "c0": [ - 9089143573911733168, - 11482283522806384523, - 13585589533905622862, - 79029415676722370 - ], - "c1": [ - 5692040832573735873, - 16884514497384809355, - 16717166481813659368, - 2742131088506155463 - ] - }, - "y": { - "c0": [ - 9604638503594647125, - 1289961608472612514, - 6217038149984805214, - 2521661352385209130 - ], - "c1": [ - 17168069778630926308, - 11309277837895768996, - 15154989611154567813, - 359271377050603491 - ] - }, - "infinity": false - } - ] -} \ No newline at end of file diff --git a/core/bin/verification_key_generator_and_server/data/verification_13_key.json b/core/bin/verification_key_generator_and_server/data/verification_13_key.json deleted file mode 100644 index 73ffbd21200..00000000000 --- a/core/bin/verification_key_generator_and_server/data/verification_13_key.json +++ /dev/null @@ -1,399 +0,0 @@ -{ - "n": 67108863, - "num_inputs": 1, - "state_width": 4, - "num_witness_polys": 0, - "gate_setup_commitments": [ - { - "x": [ - 17551054392858982554, - 6093238351564742844, - 9461983640740135929, - 665917981733823732 - ], - "y": [ - 5039211542045701927, - 14102316155129161178, - 7599318237652648682, - 1484263542771007309 - ], - "infinity": false - }, - { - "x": [ - 14015566113565304739, - 12895182424777444911, - 5150482782915031712, - 3280776276671330755 - ], - "y": [ - 5503211683737487414, - 5857977821275887356, - 1294122171191120577, - 2917900236095606783 - ], - "infinity": false - }, - { - "x": [ - 11180353512945796758, - 5467792637578213396, - 14862660111090994534, - 1678570344676416345 - ], - "y": [ - 16496106534540891926, - 4355829424666415263, - 8379906815867503783, - 2141225531456729878 - ], - "infinity": false - }, - { - "x": [ - 10512618919562577175, - 8909238001556772501, - 8669074760108324520, - 3259590816167766101 - ], - "y": [ - 15477336671232249792, - 10209451912771766896, - 13672268903388741173, - 682487251336397201 - ], - "infinity": false - }, - { - "x": [ - 14233534177298597555, - 14428793231398751908, - 18070433438826750034, - 1176819688107481869 - ], - "y": [ - 9251234182098356520, - 17131606126090989402, - 17185633762130361526, - 70013401388751862 - ], - 
"infinity": false - }, - { - "x": [ - 14148566925658671094, - 812517577375883951, - 5030512299767107864, - 44275794325016754 - ], - "y": [ - 3275438385460491589, - 12366768737850140720, - 10754478223029148744, - 64366431004577735 - ], - "infinity": false - }, - { - "x": [ - 5646513434714516506, - 12578668031398681290, - 6956692825033783810, - 536471110695536326 - ], - "y": [ - 876079378616587621, - 9787032999740439668, - 14965634813605966164, - 367083452910738472 - ], - "infinity": false - }, - { - "x": [ - 10902302115259229513, - 14044271471332330954, - 14571826360674828773, - 733766328575554031 - ], - "y": [ - 8186695183963076514, - 621472878958955881, - 14756382569165412398, - 3165780226323675661 - ], - "infinity": false - } - ], - "gate_selectors_commitments": [ - { - "x": [ - 17780673306296332984, - 10355922416617009060, - 5077451999006954761, - 2644291606399153501 - ], - "y": [ - 884498752701137122, - 731399349168706916, - 4286165746592754883, - 3279732117855760703 - ], - "infinity": false - }, - { - "x": [ - 11012802284910829398, - 7859388231941271159, - 17586341808458361180, - 1386364899721133297 - ], - "y": [ - 15634369655108108777, - 3858480397682251762, - 17706291110507066608, - 1663421415693803071 - ], - "infinity": false - } - ], - "permutation_commitments": [ - { - "x": [ - 18134041530736321349, - 4345724579806003155, - 2324407857452293002, - 2319164124977213120 - ], - "y": [ - 14302129084811449335, - 8588677756442252515, - 3323846949783670865, - 2109729211841784387 - ], - "infinity": false - }, - { - "x": [ - 14486843004985564085, - 10799247040254992370, - 7658639806933647132, - 2215292564171027727 - ], - "y": [ - 14258341133968554193, - 11685656973533320944, - 14111972937744219524, - 1172604679688980794 - ], - "infinity": false - }, - { - "x": [ - 12872375111956991701, - 14049784009914403066, - 15325016171856456312, - 2811875539960405333 - ], - "y": [ - 5711194902040443430, - 13827091592207472460, - 17950028361571343192, - 1672758585097311581 - ], - "infinity": false - }, - { - "x": [ - 11717525586585736911, - 730672019767199816, - 3010255132348992613, - 2780587454575324896 - ], - "y": [ - 1473124157542628664, - 1573646910034288561, - 10026766074599473146, - 563223750818543582 - ], - "infinity": false - } - ], - "total_lookup_entries_length": 42547753, - "lookup_selector_commitment": { - "x": [ - 4539928924349895484, - 2792770915461027618, - 11611697420465472575, - 1384307956752801018 - ], - "y": [ - 8840366360901511807, - 8892919985613263102, - 11941090149541110830, - 1930352681887390920 - ], - "infinity": false - }, - "lookup_tables_commitments": [ - { - "x": [ - 631990924006796604, - 16139625628991115157, - 13331739325995827711, - 1062301837743594995 - ], - "y": [ - 15303054606290800139, - 15906872095881647437, - 7093896572295020249, - 1342952934989901142 - ], - "infinity": false - }, - { - "x": [ - 7983921919542246393, - 13296544189644416678, - 17081022784392007697, - 1980832835348244027 - ], - "y": [ - 10874958134865200330, - 7702740658637630534, - 14052057929798961943, - 3193353539419869016 - ], - "infinity": false - }, - { - "x": [ - 1114587284824996932, - 4636906500482867924, - 15328247172597030456, - 87946895873973686 - ], - "y": [ - 15573033830207915877, - 5194694185599035278, - 2562407345425607214, - 2782078999306862675 - ], - "infinity": false - }, - { - "x": [ - 18225112781127431982, - 18048613958187123807, - 7325490730844456621, - 1953409020724855888 - ], - "y": [ - 7577000130125917198, - 6193701449695751861, - 4102082927677054717, - 395350071385269650 - ], - 
"infinity": false - } - ], - "lookup_table_type_commitment": { - "x": [ - 4121704254446914578, - 13863658665929861884, - 15362282368839162345, - 2762703036966024619 - ], - "y": [ - 102846692212239082, - 14904466746900448136, - 16872429770359000841, - 1687152581020907098 - ], - "infinity": false - }, - "non_residues": [ - [ - 5, - 0, - 0, - 0 - ], - [ - 7, - 0, - 0, - 0 - ], - [ - 10, - 0, - 0, - 0 - ] - ], - "g2_elements": [ - { - "x": { - "c0": [ - 5106727233969649389, - 7440829307424791261, - 4785637993704342649, - 1729627375292849782 - ], - "c1": [ - 10945020018377822914, - 17413811393473931026, - 8241798111626485029, - 1841571559660931130 - ] - }, - "y": { - "c0": [ - 5541340697920699818, - 16416156555105522555, - 5380518976772849807, - 1353435754470862315 - ], - "c1": [ - 6173549831154472795, - 13567992399387660019, - 17050234209342075797, - 650358724130500725 - ] - }, - "infinity": false - }, - { - "x": { - "c0": [ - 9089143573911733168, - 11482283522806384523, - 13585589533905622862, - 79029415676722370 - ], - "c1": [ - 5692040832573735873, - 16884514497384809355, - 16717166481813659368, - 2742131088506155463 - ] - }, - "y": { - "c0": [ - 9604638503594647125, - 1289961608472612514, - 6217038149984805214, - 2521661352385209130 - ], - "c1": [ - 17168069778630926308, - 11309277837895768996, - 15154989611154567813, - 359271377050603491 - ] - }, - "infinity": false - } - ] -} \ No newline at end of file diff --git a/core/bin/verification_key_generator_and_server/data/verification_14_key.json b/core/bin/verification_key_generator_and_server/data/verification_14_key.json deleted file mode 100644 index e8c42d407e3..00000000000 --- a/core/bin/verification_key_generator_and_server/data/verification_14_key.json +++ /dev/null @@ -1,399 +0,0 @@ -{ - "n": 67108863, - "num_inputs": 1, - "state_width": 4, - "num_witness_polys": 0, - "gate_setup_commitments": [ - { - "x": [ - 6916434521451934576, - 614815553772638285, - 3742595993843812033, - 2823214088432624432 - ], - "y": [ - 11642815096362884283, - 18063950820723921281, - 6353943092001719992, - 3201898419478369298 - ], - "infinity": false - }, - { - "x": [ - 10647237757917239762, - 1269177049592707998, - 2650053775033150725, - 582198744757304104 - ], - "y": [ - 9804667267596536998, - 493663115027956828, - 13953159385227792767, - 1568248765042207679 - ], - "infinity": false - }, - { - "x": [ - 7910659438561833906, - 12456422925439856914, - 10869604528749370003, - 1213616301038416610 - ], - "y": [ - 2606202790862698157, - 6809934263763206210, - 17472080335242458272, - 2884639755368519501 - ], - "infinity": false - }, - { - "x": [ - 14211325859682683183, - 11018598407116786751, - 10064425366978091674, - 2748595948091261209 - ], - "y": [ - 13960202853590116423, - 1211975538022172568, - 16303435518817750320, - 1634234707214097860 - ], - "infinity": false - }, - { - "x": [ - 4528591178982443847, - 16310104707629911601, - 5532120103079323919, - 1347877820087040669 - ], - "y": [ - 17983603511717948746, - 9529659424488112452, - 7820918413906679254, - 1819855238351369466 - ], - "infinity": false - }, - { - "x": [ - 14415562798118912210, - 6550719056383417327, - 424281724891761932, - 1264340531903932141 - ], - "y": [ - 7768057951329404686, - 15024442753889769568, - 9676935351692818899, - 1492251668690310932 - ], - "infinity": false - }, - { - "x": [ - 2619366878850208112, - 12150914745315976156, - 8375197026043390274, - 1935272977563031501 - ], - "y": [ - 5381369692389055354, - 17978011500330472972, - 17420193441326928998, - 479187691463910357 - ], - 
"infinity": false - }, - { - "x": [ - 8720830951139717797, - 15985700059986022675, - 11876530273787337931, - 421322430672290976 - ], - "y": [ - 9700690437922183179, - 1976785701667862157, - 16634886936358874061, - 3002178567925406588 - ], - "infinity": false - } - ], - "gate_selectors_commitments": [ - { - "x": [ - 8284083154661042764, - 11776500066398184343, - 868620904897679124, - 2988582549909766892 - ], - "y": [ - 10794129605563176627, - 15487634480061313925, - 17194646451372113884, - 2087686927573540537 - ], - "infinity": false - }, - { - "x": [ - 7916190330285050096, - 11731220788334102406, - 6221883233572429550, - 2552280229203107267 - ], - "y": [ - 10510502959728300366, - 14682539966609739595, - 8275243146917870162, - 164811532254637923 - ], - "infinity": false - } - ], - "permutation_commitments": [ - { - "x": [ - 195850038587200624, - 10136289160450054078, - 4386512701252721226, - 219366815902177323 - ], - "y": [ - 12042545079209848932, - 599057886584676736, - 14545610403811537682, - 498958995843318019 - ], - "infinity": false - }, - { - "x": [ - 4721932753701441297, - 1676671918244393403, - 6943597542294442696, - 50994782040503038 - ], - "y": [ - 8321420884695240511, - 10606883887907326697, - 11471075822795411018, - 1311422627151559437 - ], - "infinity": false - }, - { - "x": [ - 85448132386017640, - 13016912343020112485, - 11647418800345296605, - 1741562939125330787 - ], - "y": [ - 10753835454658443286, - 8646325836340244979, - 7348777908140142985, - 2196062626460604424 - ], - "infinity": false - }, - { - "x": [ - 2125624295892265840, - 12754141819506101591, - 8789168208880604752, - 947087620272222934 - ], - "y": [ - 12566258871261234263, - 12307504590191426495, - 6700589767183706452, - 1828704371386663334 - ], - "infinity": false - } - ], - "total_lookup_entries_length": 42212029, - "lookup_selector_commitment": { - "x": [ - 7709849601046260359, - 6836713108454667472, - 17360769186231334246, - 2348971634881039863 - ], - "y": [ - 13380830060569421804, - 15446653016734774164, - 17884501636917484387, - 1386904567459265970 - ], - "infinity": false - }, - "lookup_tables_commitments": [ - { - "x": [ - 631990924006796604, - 16139625628991115157, - 13331739325995827711, - 1062301837743594995 - ], - "y": [ - 15303054606290800139, - 15906872095881647437, - 7093896572295020249, - 1342952934989901142 - ], - "infinity": false - }, - { - "x": [ - 7983921919542246393, - 13296544189644416678, - 17081022784392007697, - 1980832835348244027 - ], - "y": [ - 10874958134865200330, - 7702740658637630534, - 14052057929798961943, - 3193353539419869016 - ], - "infinity": false - }, - { - "x": [ - 1114587284824996932, - 4636906500482867924, - 15328247172597030456, - 87946895873973686 - ], - "y": [ - 15573033830207915877, - 5194694185599035278, - 2562407345425607214, - 2782078999306862675 - ], - "infinity": false - }, - { - "x": [ - 18225112781127431982, - 18048613958187123807, - 7325490730844456621, - 1953409020724855888 - ], - "y": [ - 7577000130125917198, - 6193701449695751861, - 4102082927677054717, - 395350071385269650 - ], - "infinity": false - } - ], - "lookup_table_type_commitment": { - "x": [ - 6960699536013090594, - 2075384204892265266, - 12053931571725248687, - 1371193846897305849 - ], - "y": [ - 8904850119058507432, - 10465598889525773001, - 16159541505228012497, - 1982452464017823539 - ], - "infinity": false - }, - "non_residues": [ - [ - 5, - 0, - 0, - 0 - ], - [ - 7, - 0, - 0, - 0 - ], - [ - 10, - 0, - 0, - 0 - ] - ], - "g2_elements": [ - { - "x": { - "c0": [ - 5106727233969649389, - 
7440829307424791261, - 4785637993704342649, - 1729627375292849782 - ], - "c1": [ - 10945020018377822914, - 17413811393473931026, - 8241798111626485029, - 1841571559660931130 - ] - }, - "y": { - "c0": [ - 5541340697920699818, - 16416156555105522555, - 5380518976772849807, - 1353435754470862315 - ], - "c1": [ - 6173549831154472795, - 13567992399387660019, - 17050234209342075797, - 650358724130500725 - ] - }, - "infinity": false - }, - { - "x": { - "c0": [ - 9089143573911733168, - 11482283522806384523, - 13585589533905622862, - 79029415676722370 - ], - "c1": [ - 5692040832573735873, - 16884514497384809355, - 16717166481813659368, - 2742131088506155463 - ] - }, - "y": { - "c0": [ - 9604638503594647125, - 1289961608472612514, - 6217038149984805214, - 2521661352385209130 - ], - "c1": [ - 17168069778630926308, - 11309277837895768996, - 15154989611154567813, - 359271377050603491 - ] - }, - "infinity": false - } - ] -} \ No newline at end of file diff --git a/core/bin/verification_key_generator_and_server/data/verification_15_key.json b/core/bin/verification_key_generator_and_server/data/verification_15_key.json deleted file mode 100644 index 356dbb3c531..00000000000 --- a/core/bin/verification_key_generator_and_server/data/verification_15_key.json +++ /dev/null @@ -1,399 +0,0 @@ -{ - "n": 67108863, - "num_inputs": 1, - "state_width": 4, - "num_witness_polys": 0, - "gate_setup_commitments": [ - { - "x": [ - 3227382513538635502, - 10189582412003011525, - 1928710987967879299, - 1641062823248805930 - ], - "y": [ - 3271795224553087841, - 14036363906521936156, - 10253705337161624780, - 3091191233208402889 - ], - "infinity": false - }, - { - "x": [ - 3541471743181642086, - 8117051273006688414, - 685909872467163024, - 2614724468827209722 - ], - "y": [ - 1096952120887201428, - 8197980407203032569, - 3949713006885563085, - 2838982585728277197 - ], - "infinity": false - }, - { - "x": [ - 12432945880074879560, - 13444859845042471186, - 16599097070979057001, - 3064039790213026567 - ], - "y": [ - 3745088406100356357, - 11715355314289478148, - 2282946417129489745, - 1619614407449915711 - ], - "infinity": false - }, - { - "x": [ - 6864310053920223866, - 11095455024311706186, - 12229748247000682102, - 2475016349586561501 - ], - "y": [ - 2946781066962542712, - 14275500021265062654, - 7624481756022778467, - 1439658776940615826 - ], - "infinity": false - }, - { - "x": [ - 13589273139905087785, - 10411035015021574213, - 7322465558208873130, - 1805943743448229826 - ], - "y": [ - 13035238946064559886, - 8309482746549063820, - 14229757515324464781, - 1676135665275665956 - ], - "infinity": false - }, - { - "x": [ - 84006308859404982, - 13783127238980064918, - 14101945786439708601, - 3343881426944938693 - ], - "y": [ - 11959320721291234482, - 7288504259378326725, - 9638777183731403514, - 1648453409181088010 - ], - "infinity": false - }, - { - "x": [ - 10987163680360734145, - 3374907765066907489, - 14421201974855570464, - 3148542489906320493 - ], - "y": [ - 17180031485000081847, - 1609372527008367113, - 6050341427989573858, - 477684541505306009 - ], - "infinity": false - }, - { - "x": [ - 2257028353691713628, - 6330174784373016532, - 1686021628649718039, - 2159927805963705967 - ], - "y": [ - 10814125155819336479, - 9673780307204445954, - 7995606758095566598, - 2252251279727988680 - ], - "infinity": false - } - ], - "gate_selectors_commitments": [ - { - "x": [ - 12209724104183572477, - 11631007075974892904, - 18407423517909669447, - 1123848354500646471 - ], - "y": [ - 4749227851055533192, - 16918951234067984229, - 
5345146076707243019, - 2836719468222132526 - ], - "infinity": false - }, - { - "x": [ - 7250866110466496804, - 16022969863388101391, - 16334300930347324147, - 2232272485807431638 - ], - "y": [ - 257675104580526310, - 8044331403028603186, - 2070174268860891010, - 412313474208091695 - ], - "infinity": false - } - ], - "permutation_commitments": [ - { - "x": [ - 6736882681315025594, - 13400430183084617843, - 17182588928882896917, - 413858188107207402 - ], - "y": [ - 11944170108613027081, - 10598841640624895850, - 9086311820289524704, - 994240611047161478 - ], - "infinity": false - }, - { - "x": [ - 9500318283622871785, - 5480449932874899465, - 13224510306395939252, - 1891329668301281157 - ], - "y": [ - 7314078756040350933, - 1023294602177498218, - 16475078688698425911, - 1793945182112302214 - ], - "infinity": false - }, - { - "x": [ - 17207548058425781429, - 2519222249126358251, - 16087595361924038018, - 3470846273906312296 - ], - "y": [ - 7578361094884620755, - 7082109151721400218, - 13675372677342046523, - 3204472226310685459 - ], - "infinity": false - }, - { - "x": [ - 7036282717341939568, - 3035419720331773758, - 6765191455902729185, - 1301973211946290083 - ], - "y": [ - 697377419426635450, - 14612037890797520515, - 11746079616766057625, - 1031190413179598818 - ], - "infinity": false - } - ], - "total_lookup_entries_length": 6391155, - "lookup_selector_commitment": { - "x": [ - 17111915492430945419, - 17971275185478677346, - 14211391044159602918, - 2381455978713737016 - ], - "y": [ - 13971515893527127207, - 7078722574057096191, - 6337080743811431820, - 757015217034494132 - ], - "infinity": false - }, - "lookup_tables_commitments": [ - { - "x": [ - 5825422128268478267, - 9219263846299851036, - 3879231702557190566, - 1702488722758880769 - ], - "y": [ - 18311881100262470992, - 5742998199368802392, - 18106865487471159417, - 502191980176920012 - ], - "infinity": false - }, - { - "x": [ - 17195892082859417081, - 7890531942603584793, - 2381805632820057528, - 3173232410464566465 - ], - "y": [ - 16359614627947132075, - 3459600273035137079, - 4550762061432972122, - 3394559699318358224 - ], - "infinity": false - }, - { - "x": [ - 1716103379277390185, - 18097936269579187542, - 16357329729761063450, - 1508640059338197502 - ], - "y": [ - 11014806739603983364, - 4396503314588777389, - 9397245609635151055, - 1703957955248411380 - ], - "infinity": false - }, - { - "x": [ - 4770171350693477354, - 17110558673192292253, - 9799800677557311408, - 761984875463445481 - ], - "y": [ - 1560561403388310063, - 31331275310848146, - 287152055803835484, - 457826332542037277 - ], - "infinity": false - } - ], - "lookup_table_type_commitment": { - "x": [ - 12452920133699897102, - 6896642231513345496, - 4655495116895575043, - 1453525729114564853 - ], - "y": [ - 3574087764464303986, - 10141819911397868785, - 2342639320036978232, - 556196027732983028 - ], - "infinity": false - }, - "non_residues": [ - [ - 5, - 0, - 0, - 0 - ], - [ - 7, - 0, - 0, - 0 - ], - [ - 10, - 0, - 0, - 0 - ] - ], - "g2_elements": [ - { - "x": { - "c0": [ - 5106727233969649389, - 7440829307424791261, - 4785637993704342649, - 1729627375292849782 - ], - "c1": [ - 10945020018377822914, - 17413811393473931026, - 8241798111626485029, - 1841571559660931130 - ] - }, - "y": { - "c0": [ - 5541340697920699818, - 16416156555105522555, - 5380518976772849807, - 1353435754470862315 - ], - "c1": [ - 6173549831154472795, - 13567992399387660019, - 17050234209342075797, - 650358724130500725 - ] - }, - "infinity": false - }, - { - "x": { - "c0": [ - 9089143573911733168, - 
11482283522806384523, - 13585589533905622862, - 79029415676722370 - ], - "c1": [ - 5692040832573735873, - 16884514497384809355, - 16717166481813659368, - 2742131088506155463 - ] - }, - "y": { - "c0": [ - 9604638503594647125, - 1289961608472612514, - 6217038149984805214, - 2521661352385209130 - ], - "c1": [ - 17168069778630926308, - 11309277837895768996, - 15154989611154567813, - 359271377050603491 - ] - }, - "infinity": false - } - ] -} \ No newline at end of file diff --git a/core/bin/verification_key_generator_and_server/data/verification_16_key.json b/core/bin/verification_key_generator_and_server/data/verification_16_key.json deleted file mode 100644 index 356dbb3c531..00000000000 --- a/core/bin/verification_key_generator_and_server/data/verification_16_key.json +++ /dev/null @@ -1,399 +0,0 @@ -{ - "n": 67108863, - "num_inputs": 1, - "state_width": 4, - "num_witness_polys": 0, - "gate_setup_commitments": [ - { - "x": [ - 3227382513538635502, - 10189582412003011525, - 1928710987967879299, - 1641062823248805930 - ], - "y": [ - 3271795224553087841, - 14036363906521936156, - 10253705337161624780, - 3091191233208402889 - ], - "infinity": false - }, - { - "x": [ - 3541471743181642086, - 8117051273006688414, - 685909872467163024, - 2614724468827209722 - ], - "y": [ - 1096952120887201428, - 8197980407203032569, - 3949713006885563085, - 2838982585728277197 - ], - "infinity": false - }, - { - "x": [ - 12432945880074879560, - 13444859845042471186, - 16599097070979057001, - 3064039790213026567 - ], - "y": [ - 3745088406100356357, - 11715355314289478148, - 2282946417129489745, - 1619614407449915711 - ], - "infinity": false - }, - { - "x": [ - 6864310053920223866, - 11095455024311706186, - 12229748247000682102, - 2475016349586561501 - ], - "y": [ - 2946781066962542712, - 14275500021265062654, - 7624481756022778467, - 1439658776940615826 - ], - "infinity": false - }, - { - "x": [ - 13589273139905087785, - 10411035015021574213, - 7322465558208873130, - 1805943743448229826 - ], - "y": [ - 13035238946064559886, - 8309482746549063820, - 14229757515324464781, - 1676135665275665956 - ], - "infinity": false - }, - { - "x": [ - 84006308859404982, - 13783127238980064918, - 14101945786439708601, - 3343881426944938693 - ], - "y": [ - 11959320721291234482, - 7288504259378326725, - 9638777183731403514, - 1648453409181088010 - ], - "infinity": false - }, - { - "x": [ - 10987163680360734145, - 3374907765066907489, - 14421201974855570464, - 3148542489906320493 - ], - "y": [ - 17180031485000081847, - 1609372527008367113, - 6050341427989573858, - 477684541505306009 - ], - "infinity": false - }, - { - "x": [ - 2257028353691713628, - 6330174784373016532, - 1686021628649718039, - 2159927805963705967 - ], - "y": [ - 10814125155819336479, - 9673780307204445954, - 7995606758095566598, - 2252251279727988680 - ], - "infinity": false - } - ], - "gate_selectors_commitments": [ - { - "x": [ - 12209724104183572477, - 11631007075974892904, - 18407423517909669447, - 1123848354500646471 - ], - "y": [ - 4749227851055533192, - 16918951234067984229, - 5345146076707243019, - 2836719468222132526 - ], - "infinity": false - }, - { - "x": [ - 7250866110466496804, - 16022969863388101391, - 16334300930347324147, - 2232272485807431638 - ], - "y": [ - 257675104580526310, - 8044331403028603186, - 2070174268860891010, - 412313474208091695 - ], - "infinity": false - } - ], - "permutation_commitments": [ - { - "x": [ - 6736882681315025594, - 13400430183084617843, - 17182588928882896917, - 413858188107207402 - ], - "y": [ - 11944170108613027081, - 
10598841640624895850, - 9086311820289524704, - 994240611047161478 - ], - "infinity": false - }, - { - "x": [ - 9500318283622871785, - 5480449932874899465, - 13224510306395939252, - 1891329668301281157 - ], - "y": [ - 7314078756040350933, - 1023294602177498218, - 16475078688698425911, - 1793945182112302214 - ], - "infinity": false - }, - { - "x": [ - 17207548058425781429, - 2519222249126358251, - 16087595361924038018, - 3470846273906312296 - ], - "y": [ - 7578361094884620755, - 7082109151721400218, - 13675372677342046523, - 3204472226310685459 - ], - "infinity": false - }, - { - "x": [ - 7036282717341939568, - 3035419720331773758, - 6765191455902729185, - 1301973211946290083 - ], - "y": [ - 697377419426635450, - 14612037890797520515, - 11746079616766057625, - 1031190413179598818 - ], - "infinity": false - } - ], - "total_lookup_entries_length": 6391155, - "lookup_selector_commitment": { - "x": [ - 17111915492430945419, - 17971275185478677346, - 14211391044159602918, - 2381455978713737016 - ], - "y": [ - 13971515893527127207, - 7078722574057096191, - 6337080743811431820, - 757015217034494132 - ], - "infinity": false - }, - "lookup_tables_commitments": [ - { - "x": [ - 5825422128268478267, - 9219263846299851036, - 3879231702557190566, - 1702488722758880769 - ], - "y": [ - 18311881100262470992, - 5742998199368802392, - 18106865487471159417, - 502191980176920012 - ], - "infinity": false - }, - { - "x": [ - 17195892082859417081, - 7890531942603584793, - 2381805632820057528, - 3173232410464566465 - ], - "y": [ - 16359614627947132075, - 3459600273035137079, - 4550762061432972122, - 3394559699318358224 - ], - "infinity": false - }, - { - "x": [ - 1716103379277390185, - 18097936269579187542, - 16357329729761063450, - 1508640059338197502 - ], - "y": [ - 11014806739603983364, - 4396503314588777389, - 9397245609635151055, - 1703957955248411380 - ], - "infinity": false - }, - { - "x": [ - 4770171350693477354, - 17110558673192292253, - 9799800677557311408, - 761984875463445481 - ], - "y": [ - 1560561403388310063, - 31331275310848146, - 287152055803835484, - 457826332542037277 - ], - "infinity": false - } - ], - "lookup_table_type_commitment": { - "x": [ - 12452920133699897102, - 6896642231513345496, - 4655495116895575043, - 1453525729114564853 - ], - "y": [ - 3574087764464303986, - 10141819911397868785, - 2342639320036978232, - 556196027732983028 - ], - "infinity": false - }, - "non_residues": [ - [ - 5, - 0, - 0, - 0 - ], - [ - 7, - 0, - 0, - 0 - ], - [ - 10, - 0, - 0, - 0 - ] - ], - "g2_elements": [ - { - "x": { - "c0": [ - 5106727233969649389, - 7440829307424791261, - 4785637993704342649, - 1729627375292849782 - ], - "c1": [ - 10945020018377822914, - 17413811393473931026, - 8241798111626485029, - 1841571559660931130 - ] - }, - "y": { - "c0": [ - 5541340697920699818, - 16416156555105522555, - 5380518976772849807, - 1353435754470862315 - ], - "c1": [ - 6173549831154472795, - 13567992399387660019, - 17050234209342075797, - 650358724130500725 - ] - }, - "infinity": false - }, - { - "x": { - "c0": [ - 9089143573911733168, - 11482283522806384523, - 13585589533905622862, - 79029415676722370 - ], - "c1": [ - 5692040832573735873, - 16884514497384809355, - 16717166481813659368, - 2742131088506155463 - ] - }, - "y": { - "c0": [ - 9604638503594647125, - 1289961608472612514, - 6217038149984805214, - 2521661352385209130 - ], - "c1": [ - 17168069778630926308, - 11309277837895768996, - 15154989611154567813, - 359271377050603491 - ] - }, - "infinity": false - } - ] -} \ No newline at end of file diff --git 
a/core/bin/verification_key_generator_and_server/data/verification_17_key.json b/core/bin/verification_key_generator_and_server/data/verification_17_key.json deleted file mode 100644 index 4886f501712..00000000000 --- a/core/bin/verification_key_generator_and_server/data/verification_17_key.json +++ /dev/null @@ -1,399 +0,0 @@ -{ - "n": 67108863, - "num_inputs": 1, - "state_width": 4, - "num_witness_polys": 0, - "gate_setup_commitments": [ - { - "x": [ - 17914331890341023175, - 5200903915088916638, - 7417971632353510341, - 989671567770015891 - ], - "y": [ - 2927207345798721401, - 12686845373576710402, - 977520799157489114, - 1882223742569339495 - ], - "infinity": false - }, - { - "x": [ - 17162848902278956536, - 16169550484471334725, - 10830640611178609260, - 1347016616567630867 - ], - "y": [ - 6224316231648682710, - 10518372790293065661, - 4887066336660303630, - 703109868065750569 - ], - "infinity": false - }, - { - "x": [ - 15783141083967762454, - 16153855592853073081, - 5667838393811413602, - 1552498518850981979 - ], - "y": [ - 4220445586486275972, - 13196202402039716924, - 17506868028821343237, - 2718319833724164541 - ], - "infinity": false - }, - { - "x": [ - 4896615254637588846, - 5804270398165250639, - 10274952983674590649, - 1937027782721476561 - ], - "y": [ - 14180244016629518742, - 1376497406583367686, - 11268467489552574214, - 2331396669725958189 - ], - "infinity": false - }, - { - "x": [ - 191294939748295885, - 2804205121966814820, - 3897841028303648224, - 3406986167359695085 - ], - "y": [ - 6000542982074572633, - 1697448874567677325, - 10313504031977824294, - 320347014349001728 - ], - "infinity": false - }, - { - "x": [ - 6817435454105168413, - 15823888625999007373, - 9766931118761036330, - 3392959293697897728 - ], - "y": [ - 3549039265311512008, - 4758653036115592629, - 219467419355603781, - 83059544477934848 - ], - "infinity": false - }, - { - "x": [ - 5038171725639341807, - 6859992384823395611, - 15284967171349293554, - 16807092603996758 - ], - "y": [ - 16504201956683368367, - 12931995037356002803, - 16812826192957092842, - 3169839139097845275 - ], - "infinity": false - }, - { - "x": [ - 7140480682142203727, - 9518528852331365100, - 6189914959408603471, - 535939568308325781 - ], - "y": [ - 5944679084532939174, - 17280810090456322382, - 3743919877743496107, - 1235924204609568068 - ], - "infinity": false - } - ], - "gate_selectors_commitments": [ - { - "x": [ - 1929812895882850703, - 10386198218814398503, - 17007521659662498274, - 1093092717342753672 - ], - "y": [ - 14834187133095267171, - 15506032964234961178, - 7626816120460943443, - 871778379365004315 - ], - "infinity": false - }, - { - "x": [ - 15660406110329165813, - 8146521122567923995, - 2421739551937359002, - 3037598346026174089 - ], - "y": [ - 526124545966722472, - 1168331442853419483, - 4128095883471549051, - 2951909971734725955 - ], - "infinity": false - } - ], - "permutation_commitments": [ - { - "x": [ - 6206240620508019400, - 3690935139087147193, - 15230272164329216928, - 2140680869789406894 - ], - "y": [ - 14967331981004447304, - 1624146052760537503, - 8986435052862626311, - 334011853307313390 - ], - "infinity": false - }, - { - "x": [ - 4342223064246074020, - 2037946044543710684, - 9057698479075332373, - 1955362957846693345 - ], - "y": [ - 13253375713250043938, - 6754658208742468331, - 9339617748652368850, - 3066524060291544175 - ], - "infinity": false - }, - { - "x": [ - 17765629723696241082, - 14243015821582305127, - 922013493526048847, - 186830516636733479 - ], - "y": [ - 14465184942185208224, - 
11235596895177038197, - 5490682932088517686, - 1253279069662324930 - ], - "infinity": false - }, - { - "x": [ - 9369367805867402420, - 12663806522952881709, - 10184609326459106945, - 1664572000409921348 - ], - "y": [ - 4383960972942823390, - 6526609131568596717, - 1343118583674917141, - 113408414321095416 - ], - "infinity": false - } - ], - "total_lookup_entries_length": 6306340, - "lookup_selector_commitment": { - "x": [ - 8662938005624859815, - 9126108646717466191, - 14321121874090966307, - 2777446762308933634 - ], - "y": [ - 12555265159079607081, - 9054928862248682392, - 2784170007581120117, - 1769718192676345815 - ], - "infinity": false - }, - "lookup_tables_commitments": [ - { - "x": [ - 631990924006796604, - 16139625628991115157, - 13331739325995827711, - 1062301837743594995 - ], - "y": [ - 15303054606290800139, - 15906872095881647437, - 7093896572295020249, - 1342952934989901142 - ], - "infinity": false - }, - { - "x": [ - 7983921919542246393, - 13296544189644416678, - 17081022784392007697, - 1980832835348244027 - ], - "y": [ - 10874958134865200330, - 7702740658637630534, - 14052057929798961943, - 3193353539419869016 - ], - "infinity": false - }, - { - "x": [ - 1114587284824996932, - 4636906500482867924, - 15328247172597030456, - 87946895873973686 - ], - "y": [ - 15573033830207915877, - 5194694185599035278, - 2562407345425607214, - 2782078999306862675 - ], - "infinity": false - }, - { - "x": [ - 18225112781127431982, - 18048613958187123807, - 7325490730844456621, - 1953409020724855888 - ], - "y": [ - 7577000130125917198, - 6193701449695751861, - 4102082927677054717, - 395350071385269650 - ], - "infinity": false - } - ], - "lookup_table_type_commitment": { - "x": [ - 12644448349947379666, - 16345179309557779118, - 10854030671875297787, - 1358228639202695992 - ], - "y": [ - 2673142241557152443, - 11674634738064487673, - 12992693662201776412, - 1888958170754620568 - ], - "infinity": false - }, - "non_residues": [ - [ - 5, - 0, - 0, - 0 - ], - [ - 7, - 0, - 0, - 0 - ], - [ - 10, - 0, - 0, - 0 - ] - ], - "g2_elements": [ - { - "x": { - "c0": [ - 5106727233969649389, - 7440829307424791261, - 4785637993704342649, - 1729627375292849782 - ], - "c1": [ - 10945020018377822914, - 17413811393473931026, - 8241798111626485029, - 1841571559660931130 - ] - }, - "y": { - "c0": [ - 5541340697920699818, - 16416156555105522555, - 5380518976772849807, - 1353435754470862315 - ], - "c1": [ - 6173549831154472795, - 13567992399387660019, - 17050234209342075797, - 650358724130500725 - ] - }, - "infinity": false - }, - { - "x": { - "c0": [ - 9089143573911733168, - 11482283522806384523, - 13585589533905622862, - 79029415676722370 - ], - "c1": [ - 5692040832573735873, - 16884514497384809355, - 16717166481813659368, - 2742131088506155463 - ] - }, - "y": { - "c0": [ - 9604638503594647125, - 1289961608472612514, - 6217038149984805214, - 2521661352385209130 - ], - "c1": [ - 17168069778630926308, - 11309277837895768996, - 15154989611154567813, - 359271377050603491 - ] - }, - "infinity": false - } - ] -} \ No newline at end of file diff --git a/core/bin/verification_key_generator_and_server/data/verification_18_key.json b/core/bin/verification_key_generator_and_server/data/verification_18_key.json deleted file mode 100644 index 0987039dd1f..00000000000 --- a/core/bin/verification_key_generator_and_server/data/verification_18_key.json +++ /dev/null @@ -1,399 +0,0 @@ -{ - "n": 67108863, - "num_inputs": 1, - "state_width": 4, - "num_witness_polys": 0, - "gate_setup_commitments": [ - { - "x": [ - 8828437332483635107, - 
13777915698231175292, - 11504510351588004199, - 2516385517175522236 - ], - "y": [ - 1530453459325046685, - 2126477283125660971, - 6874073688275717548, - 2971751478402184988 - ], - "infinity": false - }, - { - "x": [ - 3490885152333630169, - 4123320877294819459, - 5138828731030738163, - 3039569146695764058 - ], - "y": [ - 10725322881860790776, - 1512262420257872325, - 10563843054743673205, - 447776577449487981 - ], - "infinity": false - }, - { - "x": [ - 14957646468235752771, - 6216555943494703122, - 7827110015048654177, - 2702223139144227095 - ], - "y": [ - 505353369980003046, - 9687811614109626117, - 5346740791392836415, - 1340467989233731971 - ], - "infinity": false - }, - { - "x": [ - 3201028595190213325, - 9659059230246338206, - 901122635500995415, - 765851963674764103 - ], - "y": [ - 10609226610841230792, - 8145519080052709505, - 17851750066177581293, - 362176586681460505 - ], - "infinity": false - }, - { - "x": [ - 13374935211181268625, - 1347742735582506393, - 4588995338963087243, - 94453217016201562 - ], - "y": [ - 4077548225372117006, - 11859845367084549583, - 2736752177668563039, - 1134818940315684409 - ], - "infinity": false - }, - { - "x": [ - 9467178015658262369, - 10545965721679492606, - 5726831550010619228, - 2051827871593168334 - ], - "y": [ - 6169140154733194545, - 5574043976386236933, - 12140759986363309479, - 1521273866181786590 - ], - "infinity": false - }, - { - "x": [ - 9642818207174528085, - 15617465062711953088, - 11263174413902929450, - 639683138088730423 - ], - "y": [ - 15150652293369779803, - 11338278639695990684, - 12204993260723588081, - 2039902155290309382 - ], - "infinity": false - }, - { - "x": [ - 7292405600450693833, - 573142590034645775, - 1583019100043676600, - 1978695840953226358 - ], - "y": [ - 5154489367309996043, - 8763740977657654022, - 9821219773990064941, - 2636875463267519559 - ], - "infinity": false - } - ], - "gate_selectors_commitments": [ - { - "x": [ - 2075450237700219880, - 2920304484074114568, - 8294843245052708759, - 555293007149161182 - ], - "y": [ - 6360019558055677441, - 7673047654179899818, - 10263007591992092214, - 2148859098846651643 - ], - "infinity": false - }, - { - "x": [ - 3970783323754285443, - 13019363829879217592, - 18197490676081603277, - 630296172623407012 - ], - "y": [ - 7987745494904024640, - 9631048689610078757, - 1592818072678520163, - 2678374240960081558 - ], - "infinity": false - } - ], - "permutation_commitments": [ - { - "x": [ - 3055966415338102721, - 18231075292903695376, - 9187400351012014001, - 2311743062653684305 - ], - "y": [ - 2553578246375478674, - 930511927228692161, - 2271826946385879571, - 3124263363559878329 - ], - "infinity": false - }, - { - "x": [ - 6936812562216228782, - 15195638439305648290, - 17827467578192758430, - 2674740411261002393 - ], - "y": [ - 9738743088557108685, - 17225541903460577384, - 16627013813461429872, - 494410407050490065 - ], - "infinity": false - }, - { - "x": [ - 10570962909758341245, - 18167360144953681397, - 2744925075742623060, - 736412139310579435 - ], - "y": [ - 13849279071386536985, - 10093748777935480433, - 904764951143479286, - 138814932031469939 - ], - "infinity": false - }, - { - "x": [ - 4533871929444677010, - 10106157783629999301, - 4178648893377901718, - 3164693318611048089 - ], - "y": [ - 12699039702383686311, - 4388078229442418460, - 8961813905523894854, - 570254591975307765 - ], - "infinity": false - } - ], - "total_lookup_entries_length": 18884644, - "lookup_selector_commitment": { - "x": [ - 15022814412717317376, - 17444332185630324119, - 
14685665421775887958, - 906494215348891007 - ], - "y": [ - 9833778905776399360, - 1648124311168457783, - 3500435402371619753, - 2370413643071351216 - ], - "infinity": false - }, - "lookup_tables_commitments": [ - { - "x": [ - 631990924006796604, - 16139625628991115157, - 13331739325995827711, - 1062301837743594995 - ], - "y": [ - 15303054606290800139, - 15906872095881647437, - 7093896572295020249, - 1342952934989901142 - ], - "infinity": false - }, - { - "x": [ - 7983921919542246393, - 13296544189644416678, - 17081022784392007697, - 1980832835348244027 - ], - "y": [ - 10874958134865200330, - 7702740658637630534, - 14052057929798961943, - 3193353539419869016 - ], - "infinity": false - }, - { - "x": [ - 1114587284824996932, - 4636906500482867924, - 15328247172597030456, - 87946895873973686 - ], - "y": [ - 15573033830207915877, - 5194694185599035278, - 2562407345425607214, - 2782078999306862675 - ], - "infinity": false - }, - { - "x": [ - 18225112781127431982, - 18048613958187123807, - 7325490730844456621, - 1953409020724855888 - ], - "y": [ - 7577000130125917198, - 6193701449695751861, - 4102082927677054717, - 395350071385269650 - ], - "infinity": false - } - ], - "lookup_table_type_commitment": { - "x": [ - 8321950609730151216, - 18010887235457883784, - 17038267498493175776, - 1380842840607309871 - ], - "y": [ - 3264160671000273944, - 16611917363401804468, - 8505391859632632917, - 2149881676646664319 - ], - "infinity": false - }, - "non_residues": [ - [ - 5, - 0, - 0, - 0 - ], - [ - 7, - 0, - 0, - 0 - ], - [ - 10, - 0, - 0, - 0 - ] - ], - "g2_elements": [ - { - "x": { - "c0": [ - 5106727233969649389, - 7440829307424791261, - 4785637993704342649, - 1729627375292849782 - ], - "c1": [ - 10945020018377822914, - 17413811393473931026, - 8241798111626485029, - 1841571559660931130 - ] - }, - "y": { - "c0": [ - 5541340697920699818, - 16416156555105522555, - 5380518976772849807, - 1353435754470862315 - ], - "c1": [ - 6173549831154472795, - 13567992399387660019, - 17050234209342075797, - 650358724130500725 - ] - }, - "infinity": false - }, - { - "x": { - "c0": [ - 9089143573911733168, - 11482283522806384523, - 13585589533905622862, - 79029415676722370 - ], - "c1": [ - 5692040832573735873, - 16884514497384809355, - 16717166481813659368, - 2742131088506155463 - ] - }, - "y": { - "c0": [ - 9604638503594647125, - 1289961608472612514, - 6217038149984805214, - 2521661352385209130 - ], - "c1": [ - 17168069778630926308, - 11309277837895768996, - 15154989611154567813, - 359271377050603491 - ] - }, - "infinity": false - } - ] -} \ No newline at end of file diff --git a/core/bin/verification_key_generator_and_server/data/verification_1_key.json b/core/bin/verification_key_generator_and_server/data/verification_1_key.json deleted file mode 100644 index 0310303d2a5..00000000000 --- a/core/bin/verification_key_generator_and_server/data/verification_1_key.json +++ /dev/null @@ -1,399 +0,0 @@ -{ - "n": 67108863, - "num_inputs": 1, - "state_width": 4, - "num_witness_polys": 0, - "gate_setup_commitments": [ - { - "x": [ - 7601801432079276288, - 15201863322122857773, - 8806193975262404580, - 2590787273683229105 - ], - "y": [ - 16702527967956763728, - 6181870639994435984, - 1867123357108619315, - 2767403024411663364 - ], - "infinity": false - }, - { - "x": [ - 2455316591212726341, - 2027771240685247927, - 10685588854446154162, - 3030775657966372875 - ], - "y": [ - 18300009037843703356, - 1612973442135305251, - 10693350009422283513, - 1442590213691840716 - ], - "infinity": false - }, - { - "x": [ - 12311884457715965312, - 
10390638194798557018, - 11306832124741148566, - 300716765354847473 - ], - "y": [ - 9707964220031061231, - 14753080439380196493, - 5717535245627190368, - 702219636062983319 - ], - "infinity": false - }, - { - "x": [ - 7758453297146426337, - 1673770484163252092, - 14607544807007157753, - 857313958429629763 - ], - "y": [ - 14921629410308576937, - 15298335487420996140, - 2704982045392946878, - 2611590721009022852 - ], - "infinity": false - }, - { - "x": [ - 14311011031579784592, - 15625526098906078640, - 1319146597092063841, - 774276845418764858 - ], - "y": [ - 3893523842912943845, - 18146056093503974553, - 11030513442747849089, - 389965813625175232 - ], - "infinity": false - }, - { - "x": [ - 7007915445081129178, - 2401922490835966325, - 418720827124106725, - 2770268368066902308 - ], - "y": [ - 12116308634970006696, - 14528630571959109449, - 9950799281726780069, - 724152027617190422 - ], - "infinity": false - }, - { - "x": [ - 2442021019274420960, - 16295185893380203674, - 2439146651414642189, - 2243335375830582173 - ], - "y": [ - 3782090054162740071, - 4704457281172608987, - 4410900061257118309, - 764611777065564766 - ], - "infinity": false - }, - { - "x": [ - 17964884224938230037, - 7876675311267561320, - 16762398450655445790, - 1210707988542142007 - ], - "y": [ - 10470358785861361347, - 9485656365593190672, - 6046378362748740079, - 2457285875935475197 - ], - "infinity": false - } - ], - "gate_selectors_commitments": [ - { - "x": [ - 17157526827088368172, - 11284084393440625999, - 9351565798611728109, - 3234841809825307363 - ], - "y": [ - 8319704714678793930, - 4159327153032521498, - 15356346081767327573, - 3239913585027348493 - ], - "infinity": false - }, - { - "x": [ - 15456321646261647359, - 15891438700803416959, - 3317730603133051465, - 2641175705943818316 - ], - "y": [ - 1411951218052246200, - 1661720531643832913, - 13537400120511760371, - 2292851110898807736 - ], - "infinity": false - } - ], - "permutation_commitments": [ - { - "x": [ - 10328956753700766823, - 2827084848292920926, - 6753362467616392790, - 3266354497443915853 - ], - "y": [ - 4786671171082888838, - 11071539213550223285, - 3886224490311829958, - 1435384580945051012 - ], - "infinity": false - }, - { - "x": [ - 6970901872301032061, - 11845499850875638451, - 12523013241874863158, - 564589203700245768 - ], - "y": [ - 9149991346853645253, - 10833082414663634622, - 10032445307744641248, - 3184550747076826571 - ], - "infinity": false - }, - { - "x": [ - 2899501934612768796, - 7289832407727333580, - 15398305180487198919, - 2955735241334744486 - ], - "y": [ - 4963499698281910643, - 5723522390488208800, - 3637467607919864741, - 339118267031086794 - ], - "infinity": false - }, - { - "x": [ - 16561673014946600686, - 6893642268089467710, - 11554023210615815565, - 122477375056362239 - ], - "y": [ - 15978560303000591303, - 6087766803442805629, - 6114779478264008006, - 2753348573959524636 - ], - "infinity": false - } - ], - "total_lookup_entries_length": 30899639, - "lookup_selector_commitment": { - "x": [ - 4819118611809066421, - 16205075690681881406, - 8088108199972047891, - 2462381205202312681 - ], - "y": [ - 9403235417076804812, - 11746452954984920263, - 5479393366572364588, - 2168476120537571525 - ], - "infinity": false - }, - "lookup_tables_commitments": [ - { - "x": [ - 1589280911861251894, - 2000192568988587993, - 18399902493387281635, - 1843483375839232315 - ], - "y": [ - 14712825033319581746, - 11500494123399487569, - 4370642671010258701, - 567620704393396341 - ], - "infinity": false - }, - { - "x": [ - 0, - 0, - 0, - 0 - ], - 
"y": [ - 1, - 0, - 0, - 0 - ], - "infinity": true - }, - { - "x": [ - 0, - 0, - 0, - 0 - ], - "y": [ - 1, - 0, - 0, - 0 - ], - "infinity": true - }, - { - "x": [ - 5989740765536181742, - 7510673671757970234, - 7988398980529338112, - 2047433943537325290 - ], - "y": [ - 14952889876146512965, - 17141012675484923048, - 328206788961236528, - 866564802795139 - ], - "infinity": false - } - ], - "lookup_table_type_commitment": { - "x": [ - 4824978155651454377, - 12191454623887257586, - 12973919510878979890, - 52932438992466171 - ], - "y": [ - 17857145998747603901, - 2092039184434926372, - 11018504664231591204, - 1321736242331612854 - ], - "infinity": false - }, - "non_residues": [ - [ - 5, - 0, - 0, - 0 - ], - [ - 7, - 0, - 0, - 0 - ], - [ - 10, - 0, - 0, - 0 - ] - ], - "g2_elements": [ - { - "x": { - "c0": [ - 5106727233969649389, - 7440829307424791261, - 4785637993704342649, - 1729627375292849782 - ], - "c1": [ - 10945020018377822914, - 17413811393473931026, - 8241798111626485029, - 1841571559660931130 - ] - }, - "y": { - "c0": [ - 5541340697920699818, - 16416156555105522555, - 5380518976772849807, - 1353435754470862315 - ], - "c1": [ - 6173549831154472795, - 13567992399387660019, - 17050234209342075797, - 650358724130500725 - ] - }, - "infinity": false - }, - { - "x": { - "c0": [ - 9089143573911733168, - 11482283522806384523, - 13585589533905622862, - 79029415676722370 - ], - "c1": [ - 5692040832573735873, - 16884514497384809355, - 16717166481813659368, - 2742131088506155463 - ] - }, - "y": { - "c0": [ - 9604638503594647125, - 1289961608472612514, - 6217038149984805214, - 2521661352385209130 - ], - "c1": [ - 17168069778630926308, - 11309277837895768996, - 15154989611154567813, - 359271377050603491 - ] - }, - "infinity": false - } - ] -} \ No newline at end of file diff --git a/core/bin/verification_key_generator_and_server/data/verification_2_key.json b/core/bin/verification_key_generator_and_server/data/verification_2_key.json deleted file mode 100644 index 79b16257213..00000000000 --- a/core/bin/verification_key_generator_and_server/data/verification_2_key.json +++ /dev/null @@ -1,399 +0,0 @@ -{ - "n": 67108863, - "num_inputs": 1, - "state_width": 4, - "num_witness_polys": 0, - "gate_setup_commitments": [ - { - "x": [ - 5518783475412319303, - 13900056820557691891, - 3293972357974626054, - 2215936931279678502 - ], - "y": [ - 7955917949806788616, - 13341003959544330056, - 2090626280536970058, - 340565138339520735 - ], - "infinity": false - }, - { - "x": [ - 14185170917510557830, - 8046892618400404954, - 16599645397148333553, - 2994187418830549588 - ], - "y": [ - 7234254448777026502, - 8445782435526889669, - 14116370103157060862, - 2248206929083565209 - ], - "infinity": false - }, - { - "x": [ - 11154659552703848544, - 12941656139895069323, - 17062140236305086427, - 722110816848028084 - ], - "y": [ - 5009717036998782771, - 827592822749515890, - 15966856850732642654, - 618036931564479654 - ], - "infinity": false - }, - { - "x": [ - 5157594213696692987, - 15014090155482426422, - 706425002062263449, - 3203486979181293219 - ], - "y": [ - 14363949081622225749, - 9001876918808042476, - 1615414451418136701, - 444697301726425121 - ], - "infinity": false - }, - { - "x": [ - 9176460251336839321, - 17295305184785757140, - 7831134341003191604, - 2666806971657364559 - ], - "y": [ - 2598277252699259004, - 11916936738177575234, - 2912317122505195338, - 2404138220482962548 - ], - "infinity": false - }, - { - "x": [ - 11575910134534349159, - 14192914809594698195, - 18267718409201448839, - 142641722814285206 - ], 
- "y": [ - 5883506329268908990, - 2832339585209792351, - 14642260147093833347, - 392817691249359885 - ], - "infinity": false - }, - { - "x": [ - 12908012748245269010, - 6525727331816152736, - 16979431824428028279, - 2845131870310951239 - ], - "y": [ - 1571963770034876851, - 17602700402136611105, - 13310928253737079884, - 3347891464097055062 - ], - "infinity": false - }, - { - "x": [ - 832167803175150309, - 11457734167413059640, - 13250442890410377059, - 2814079984479722654 - ], - "y": [ - 1463471541691279258, - 1744973157713476297, - 1204969522442685286, - 1269233371856967282 - ], - "infinity": false - } - ], - "gate_selectors_commitments": [ - { - "x": [ - 10352656458395970023, - 3995520406692994966, - 13084432248093257522, - 2302839365715839904 - ], - "y": [ - 8225034751786073151, - 16771047952615636124, - 616708265068224682, - 186403683175385821 - ], - "infinity": false - }, - { - "x": [ - 4270731028924703792, - 3128341040439802084, - 15083522049785140229, - 2261189689222904761 - ], - "y": [ - 8781157350107493893, - 14766318733918494793, - 9428422381369337621, - 419743052593117743 - ], - "infinity": false - } - ], - "permutation_commitments": [ - { - "x": [ - 11112968480130414212, - 11913364106966677596, - 36671493864905181, - 496058283903160224 - ], - "y": [ - 9691136012048916590, - 12909186572206021308, - 1700657689434945171, - 3072265811815532764 - ], - "infinity": false - }, - { - "x": [ - 11360744654540534278, - 9830357778413675465, - 5192069313646589173, - 113131628631742646 - ], - "y": [ - 5515513518975242303, - 323890392099446701, - 2255482865429449468, - 2322464724330067577 - ], - "infinity": false - }, - { - "x": [ - 3414259545645111239, - 5416149397109634837, - 12993204506510556426, - 2894091844446687144 - ], - "y": [ - 4731949297479191167, - 1043460441127916951, - 16890401788673829290, - 1356564712828723527 - ], - "infinity": false - }, - { - "x": [ - 8993182433738017869, - 11441314659459910136, - 8181494681500166120, - 1591321336872387140 - ], - "y": [ - 5278254820002084488, - 17932571960593236295, - 7626453034762681225, - 3463596506399756742 - ], - "infinity": false - } - ], - "total_lookup_entries_length": 30783671, - "lookup_selector_commitment": { - "x": [ - 1336161834228740427, - 15823221750660268452, - 13689567356831376139, - 1839611883700311389 - ], - "y": [ - 14875759795137726191, - 20318096045504920, - 8816565555629805366, - 75556627728969178 - ], - "infinity": false - }, - "lookup_tables_commitments": [ - { - "x": [ - 1589280911861251894, - 2000192568988587993, - 18399902493387281635, - 1843483375839232315 - ], - "y": [ - 14712825033319581746, - 11500494123399487569, - 4370642671010258701, - 567620704393396341 - ], - "infinity": false - }, - { - "x": [ - 0, - 0, - 0, - 0 - ], - "y": [ - 1, - 0, - 0, - 0 - ], - "infinity": true - }, - { - "x": [ - 0, - 0, - 0, - 0 - ], - "y": [ - 1, - 0, - 0, - 0 - ], - "infinity": true - }, - { - "x": [ - 5989740765536181742, - 7510673671757970234, - 7988398980529338112, - 2047433943537325290 - ], - "y": [ - 14952889876146512965, - 17141012675484923048, - 328206788961236528, - 866564802795139 - ], - "infinity": false - } - ], - "lookup_table_type_commitment": { - "x": [ - 3408213281770836085, - 15382444791373914560, - 16110552627056571461, - 1161688479331593061 - ], - "y": [ - 13379188756114722390, - 12926267823879081751, - 14282599792449107495, - 3244837013658545871 - ], - "infinity": false - }, - "non_residues": [ - [ - 5, - 0, - 0, - 0 - ], - [ - 7, - 0, - 0, - 0 - ], - [ - 10, - 0, - 0, - 0 - ] - ], - "g2_elements": [ - { - 
"x": { - "c0": [ - 5106727233969649389, - 7440829307424791261, - 4785637993704342649, - 1729627375292849782 - ], - "c1": [ - 10945020018377822914, - 17413811393473931026, - 8241798111626485029, - 1841571559660931130 - ] - }, - "y": { - "c0": [ - 5541340697920699818, - 16416156555105522555, - 5380518976772849807, - 1353435754470862315 - ], - "c1": [ - 6173549831154472795, - 13567992399387660019, - 17050234209342075797, - 650358724130500725 - ] - }, - "infinity": false - }, - { - "x": { - "c0": [ - 9089143573911733168, - 11482283522806384523, - 13585589533905622862, - 79029415676722370 - ], - "c1": [ - 5692040832573735873, - 16884514497384809355, - 16717166481813659368, - 2742131088506155463 - ] - }, - "y": { - "c0": [ - 9604638503594647125, - 1289961608472612514, - 6217038149984805214, - 2521661352385209130 - ], - "c1": [ - 17168069778630926308, - 11309277837895768996, - 15154989611154567813, - 359271377050603491 - ] - }, - "infinity": false - } - ] -} \ No newline at end of file diff --git a/core/bin/verification_key_generator_and_server/data/verification_3_key.json b/core/bin/verification_key_generator_and_server/data/verification_3_key.json deleted file mode 100644 index 613c65dec32..00000000000 --- a/core/bin/verification_key_generator_and_server/data/verification_3_key.json +++ /dev/null @@ -1,399 +0,0 @@ -{ - "n": 67108863, - "num_inputs": 1, - "state_width": 4, - "num_witness_polys": 0, - "gate_setup_commitments": [ - { - "x": [ - 4247884029119603815, - 14048318895702359089, - 1617022869923646571, - 1004300266779052296 - ], - "y": [ - 17868528514201987465, - 4244261302597587354, - 10221573892940475912, - 2482382880446840010 - ], - "infinity": false - }, - { - "x": [ - 6238506840459074871, - 18254983327500098151, - 12976360180164130634, - 1219856697105853614 - ], - "y": [ - 1359994609126438238, - 17827470346804056210, - 16773833510918183872, - 2604619773311417557 - ], - "infinity": false - }, - { - "x": [ - 5480908979724966765, - 3393255975447524652, - 10371160681199271551, - 3483125449532424455 - ], - "y": [ - 6910224697959110691, - 8190986918875328214, - 18233342390114194740, - 371038657258361111 - ], - "infinity": false - }, - { - "x": [ - 1589636458242554884, - 17321835409586313003, - 13993520794641679178, - 1266542986497561712 - ], - "y": [ - 5397891169353072140, - 5878548729835574296, - 15706893227817678651, - 1769961527856953483 - ], - "infinity": false - }, - { - "x": [ - 17541435070606794744, - 2655627213950653916, - 11216216944579921605, - 1313780180047509779 - ], - "y": [ - 16950319453735037870, - 1632204383055288188, - 15201163922365522932, - 2864472556240937346 - ], - "infinity": false - }, - { - "x": [ - 11997977223945303553, - 14325590013978700522, - 15557533141347230729, - 3289139360100222484 - ], - "y": [ - 2276406350677881932, - 12276125258173429823, - 6135372778488654786, - 2960027660870022236 - ], - "infinity": false - }, - { - "x": [ - 8889079782908651911, - 9444258938063781000, - 6152157289837951831, - 2046144251434758098 - ], - "y": [ - 3506685845878604982, - 480610274681523215, - 17898829927408725055, - 478373452366390807 - ], - "infinity": false - }, - { - "x": [ - 9543795530837745598, - 5641706788025454992, - 2058665597673045347, - 3199980849578540913 - ], - "y": [ - 2134420461745303677, - 11079036403297001210, - 13973590059437528369, - 2236186172656440899 - ], - "infinity": false - } - ], - "gate_selectors_commitments": [ - { - "x": [ - 17082763384512425754, - 5415974525679408765, - 2982831717715582652, - 2185533346241584143 - ], - "y": [ - 
889517497459248510, - 11305258809453581163, - 14785916458686019285, - 712045239932611417 - ], - "infinity": false - }, - { - "x": [ - 1486326951928055275, - 17648143945822975405, - 8789056175543467342, - 1582641302957127155 - ], - "y": [ - 16130216435506275947, - 186882025793811656, - 5333388052689527168, - 2555185016165074595 - ], - "infinity": false - } - ], - "permutation_commitments": [ - { - "x": [ - 6775436174991417687, - 1962133343483010121, - 3639644700285584252, - 2751431324201714590 - ], - "y": [ - 16721581791017871189, - 2572212631009994187, - 12263629829130796245, - 1194783809693078725 - ], - "infinity": false - }, - { - "x": [ - 9781583375044732502, - 17099127122236789849, - 15683598159868779227, - 2137916464125382410 - ], - "y": [ - 11971077938028623721, - 14460546631248863771, - 3674726360546135290, - 2587006282919627488 - ], - "infinity": false - }, - { - "x": [ - 2258960665841769264, - 11476106728738999555, - 2154715457718708453, - 1652460267728538717 - ], - "y": [ - 591013691648424928, - 2747643213972148016, - 4382285331965077793, - 700518369290275435 - ], - "infinity": false - }, - { - "x": [ - 17029386353507514799, - 12736838109975824615, - 17948233540620781856, - 1661567367117856229 - ], - "y": [ - 5088293739561490025, - 257269786506894093, - 7029871828271960168, - 2982592857123453815 - ], - "infinity": false - } - ], - "total_lookup_entries_length": 15390957, - "lookup_selector_commitment": { - "x": [ - 3143229288506876352, - 14398478555351850494, - 17971061391349533728, - 2397240458539623423 - ], - "y": [ - 2507720097747632492, - 4897824016944146490, - 8535810669426357324, - 2617442440174156771 - ], - "infinity": false - }, - "lookup_tables_commitments": [ - { - "x": [ - 12925597216490182210, - 13030942092034120135, - 17733316148446765999, - 112547709703624791 - ], - "y": [ - 13293415162200038331, - 13010565234555563811, - 15476251035925496743, - 2588541998389664114 - ], - "infinity": false - }, - { - "x": [ - 11118240121224901946, - 9394562257959111170, - 9026436993514314918, - 1751747619588842429 - ], - "y": [ - 6039590802345873394, - 17531716309156986038, - 1711770599161550805, - 1941094644175870288 - ], - "infinity": false - }, - { - "x": [ - 17999903301086933877, - 10468070608989378923, - 3479353092436121335, - 607756992244480908 - ], - "y": [ - 10863079642303790364, - 4737012301447477097, - 4605789209164294308, - 1430572887755557386 - ], - "infinity": false - }, - { - "x": [ - 4609762018249049814, - 4113097757442144437, - 4725434011535510809, - 2977599521231955696 - ], - "y": [ - 14636094180551257630, - 8819447661702130886, - 1091706295519429215, - 56675985696303183 - ], - "infinity": false - } - ], - "lookup_table_type_commitment": { - "x": [ - 7406705046881629689, - 13550366909312172285, - 11707241152492715411, - 1951231993396003315 - ], - "y": [ - 649840467305243342, - 10916062129580101841, - 7643158916474300887, - 1216418901317802861 - ], - "infinity": false - }, - "non_residues": [ - [ - 5, - 0, - 0, - 0 - ], - [ - 7, - 0, - 0, - 0 - ], - [ - 10, - 0, - 0, - 0 - ] - ], - "g2_elements": [ - { - "x": { - "c0": [ - 5106727233969649389, - 7440829307424791261, - 4785637993704342649, - 1729627375292849782 - ], - "c1": [ - 10945020018377822914, - 17413811393473931026, - 8241798111626485029, - 1841571559660931130 - ] - }, - "y": { - "c0": [ - 5541340697920699818, - 16416156555105522555, - 5380518976772849807, - 1353435754470862315 - ], - "c1": [ - 6173549831154472795, - 13567992399387660019, - 17050234209342075797, - 650358724130500725 - ] - }, - "infinity": false 
- }, - { - "x": { - "c0": [ - 9089143573911733168, - 11482283522806384523, - 13585589533905622862, - 79029415676722370 - ], - "c1": [ - 5692040832573735873, - 16884514497384809355, - 16717166481813659368, - 2742131088506155463 - ] - }, - "y": { - "c0": [ - 9604638503594647125, - 1289961608472612514, - 6217038149984805214, - 2521661352385209130 - ], - "c1": [ - 17168069778630926308, - 11309277837895768996, - 15154989611154567813, - 359271377050603491 - ] - }, - "infinity": false - } - ] -} \ No newline at end of file diff --git a/core/bin/verification_key_generator_and_server/data/verification_4_key.json b/core/bin/verification_key_generator_and_server/data/verification_4_key.json deleted file mode 100644 index 8d42dcd66a7..00000000000 --- a/core/bin/verification_key_generator_and_server/data/verification_4_key.json +++ /dev/null @@ -1,399 +0,0 @@ -{ - "n": 67108863, - "num_inputs": 1, - "state_width": 4, - "num_witness_polys": 0, - "gate_setup_commitments": [ - { - "x": [ - 15923176050075197, - 8963905519117333456, - 5333091548039957996, - 1660697180439834807 - ], - "y": [ - 13105864494044341635, - 10079874572012628853, - 4164109084931753781, - 1860950003357484648 - ], - "infinity": false - }, - { - "x": [ - 8216018177730810417, - 13660800917029254431, - 2933384097067755755, - 2823425599268575868 - ], - "y": [ - 8768863192718196559, - 10146282684570870426, - 8275806247588563419, - 605489936306033583 - ], - "infinity": false - }, - { - "x": [ - 4277344855257545209, - 11172040917478096607, - 4489086903928758598, - 289283798032159440 - ], - "y": [ - 10444137083253378550, - 12133212848977612596, - 6748791972701343485, - 286274227999569844 - ], - "infinity": false - }, - { - "x": [ - 8861797510071553254, - 12734094237204882518, - 13692967202881086499, - 641906135411222522 - ], - "y": [ - 6831762763487302461, - 11965405347371646114, - 6218256502970252800, - 3201462388136754725 - ], - "infinity": false - }, - { - "x": [ - 12385743015818134054, - 16282219738575446638, - 3256359841301423419, - 505673042938576760 - ], - "y": [ - 6744956686738207932, - 8994291190634790001, - 16789606231722015883, - 2027930268272962928 - ], - "infinity": false - }, - { - "x": [ - 13671822069226357541, - 818021157447551159, - 10542481209144358852, - 2459295197762128786 - ], - "y": [ - 1072649761929447549, - 6089126583512618706, - 1178131210084507361, - 1066836948212725576 - ], - "infinity": false - }, - { - "x": [ - 16878956366815094090, - 364977461173568122, - 5439594588743996145, - 1265442855735725449 - ], - "y": [ - 11461704536083653156, - 660278441271820299, - 4314245569905306892, - 1438663846765259508 - ], - "infinity": false - }, - { - "x": [ - 9038539654045396650, - 539827912679485452, - 15399544523862100757, - 1256406598444490417 - ], - "y": [ - 5422113905848106255, - 4943961807853536385, - 10022409325033689104, - 3200702511424842211 - ], - "infinity": false - } - ], - "gate_selectors_commitments": [ - { - "x": [ - 7750990741566547331, - 12040155777441846781, - 3000981333322867315, - 2393292192734976436 - ], - "y": [ - 3394853839941291504, - 944019051205640111, - 1104911864338577098, - 2127308956089601096 - ], - "infinity": false - }, - { - "x": [ - 4735140124663926465, - 16935779121597983173, - 17111626619540374574, - 2327973550601526140 - ], - "y": [ - 8990848735371189388, - 4589751206662798166, - 7575424772436241307, - 2798852347400154642 - ], - "infinity": false - } - ], - "permutation_commitments": [ - { - "x": [ - 4765077060699177749, - 15235935045874519477, - 2022237788491579392, - 354385727984957703 - 
], - "y": [ - 11620113321350620961, - 2521830680983779826, - 14047226057605943635, - 2718701882953208503 - ], - "infinity": false - }, - { - "x": [ - 12967015398643083015, - 1100660813730542482, - 7835181433213557652, - 803165211156388599 - ], - "y": [ - 8557385569712401227, - 535900682745452035, - 16083571717847325979, - 396765644246918860 - ], - "infinity": false - }, - { - "x": [ - 6868107733370365435, - 17106601841261210672, - 12219407605084986215, - 2345246684976405066 - ], - "y": [ - 17532412968783851743, - 9996315626158111485, - 17970945522106166231, - 1003764081419207606 - ], - "infinity": false - }, - { - "x": [ - 7011201477832405407, - 8818123127103997131, - 2979445003396953339, - 318603240233076406 - ], - "y": [ - 11712108043964996282, - 3474989587891133574, - 3983451673298542860, - 1181581919257021598 - ], - "infinity": false - } - ], - "total_lookup_entries_length": 8484642, - "lookup_selector_commitment": { - "x": [ - 27459247093738343, - 1785927757103538268, - 14972116880195568621, - 1034224917068963325 - ], - "y": [ - 17453858127001596558, - 6200103235089742197, - 16245568162666829501, - 651193715230511441 - ], - "infinity": false - }, - "lookup_tables_commitments": [ - { - "x": [ - 697552212563769686, - 7709943502535418760, - 15019345407325619175, - 3433081085078580257 - ], - "y": [ - 8668947019840357731, - 14698901351824712883, - 15088598879190660424, - 2873081208166433946 - ], - "infinity": false - }, - { - "x": [ - 7893133928909060673, - 7064922516930129957, - 3592836702741304814, - 2239702595710114437 - ], - "y": [ - 7691360541875191519, - 11379321785127235277, - 6653616064071569031, - 2555434628517540774 - ], - "infinity": false - }, - { - "x": [ - 6243944238013052821, - 7908243182210136125, - 17178099109525791299, - 2553622184721264566 - ], - "y": [ - 736121280088239428, - 6158073429758170526, - 11217302997977204117, - 2594798912020899417 - ], - "infinity": false - }, - { - "x": [ - 2064240298596094591, - 16917726764104887991, - 11042784977532408536, - 3377647228930170830 - ], - "y": [ - 10635525052494768819, - 387400048616497096, - 9379200582543310995, - 1571766153703296253 - ], - "infinity": false - } - ], - "lookup_table_type_commitment": { - "x": [ - 14868101692362122308, - 8135288013508071846, - 9460482611527381887, - 512823635961282598 - ], - "y": [ - 8358211286664762188, - 3532634521932288534, - 5862145521507736138, - 1807935137626658536 - ], - "infinity": false - }, - "non_residues": [ - [ - 5, - 0, - 0, - 0 - ], - [ - 7, - 0, - 0, - 0 - ], - [ - 10, - 0, - 0, - 0 - ] - ], - "g2_elements": [ - { - "x": { - "c0": [ - 5106727233969649389, - 7440829307424791261, - 4785637993704342649, - 1729627375292849782 - ], - "c1": [ - 10945020018377822914, - 17413811393473931026, - 8241798111626485029, - 1841571559660931130 - ] - }, - "y": { - "c0": [ - 5541340697920699818, - 16416156555105522555, - 5380518976772849807, - 1353435754470862315 - ], - "c1": [ - 6173549831154472795, - 13567992399387660019, - 17050234209342075797, - 650358724130500725 - ] - }, - "infinity": false - }, - { - "x": { - "c0": [ - 9089143573911733168, - 11482283522806384523, - 13585589533905622862, - 79029415676722370 - ], - "c1": [ - 5692040832573735873, - 16884514497384809355, - 16717166481813659368, - 2742131088506155463 - ] - }, - "y": { - "c0": [ - 9604638503594647125, - 1289961608472612514, - 6217038149984805214, - 2521661352385209130 - ], - "c1": [ - 17168069778630926308, - 11309277837895768996, - 15154989611154567813, - 359271377050603491 - ] - }, - "infinity": false - } - ] -} \ No newline 
at end of file diff --git a/core/bin/verification_key_generator_and_server/data/verification_5_key.json b/core/bin/verification_key_generator_and_server/data/verification_5_key.json deleted file mode 100644 index b9a31b919f1..00000000000 --- a/core/bin/verification_key_generator_and_server/data/verification_5_key.json +++ /dev/null @@ -1,399 +0,0 @@ -{ - "n": 67108863, - "num_inputs": 1, - "state_width": 4, - "num_witness_polys": 0, - "gate_setup_commitments": [ - { - "x": [ - 12322129650547620518, - 4320033807979823995, - 4503809593276792861, - 630958448551597950 - ], - "y": [ - 4947307957322067889, - 1897773243457379956, - 1563584362302565484, - 802109862761172056 - ], - "infinity": false - }, - { - "x": [ - 5860641327684713918, - 16885915425353665713, - 7037370194263044401, - 1837438863045303696 - ], - "y": [ - 13386292219804271609, - 4960073609197619993, - 7328379249582994262, - 191728769121948464 - ], - "infinity": false - }, - { - "x": [ - 9390502900121613993, - 17218409610830310329, - 4830832371938391322, - 1805131323553685028 - ], - "y": [ - 15707040961083920686, - 16216062707384374953, - 16957058843586642758, - 1341814870249072628 - ], - "infinity": false - }, - { - "x": [ - 969252611989285232, - 181405773082212747, - 11110666465356509832, - 1888802363524687207 - ], - "y": [ - 5293477339288357424, - 12076391347720360980, - 11422893229655154394, - 3165450734777404812 - ], - "infinity": false - }, - { - "x": [ - 642192487369089358, - 9585449571929647331, - 3847960352134961209, - 984199510163128792 - ], - "y": [ - 13950390676065893881, - 975256099594703300, - 253120832016214204, - 1860679841584192219 - ], - "infinity": false - }, - { - "x": [ - 3564548447861991296, - 6278944799487206913, - 1163701992635366786, - 3214877162977671335 - ], - "y": [ - 13131873482361140204, - 14012120801722220187, - 13254371011592477950, - 1082108070640175604 - ], - "infinity": false - }, - { - "x": [ - 14190764189814537607, - 18412181832598818289, - 17213387738194113336, - 1662783623959823461 - ], - "y": [ - 7987199081435644988, - 17119136750046780209, - 8770669323846078492, - 3183489396270587333 - ], - "infinity": false - }, - { - "x": [ - 14638218826597535389, - 16409988612234258347, - 5025411344133541245, - 603088654230685360 - ], - "y": [ - 12538363432956258836, - 6558875956959901550, - 2415879426147965883, - 750702584304895055 - ], - "infinity": false - } - ], - "gate_selectors_commitments": [ - { - "x": [ - 2599908293582905760, - 13534206398743622493, - 15926090086034346074, - 467418127379229858 - ], - "y": [ - 9529512934078774185, - 1459270552041127965, - 13418846370362665102, - 2270996612016337371 - ], - "infinity": false - }, - { - "x": [ - 7264275706530137047, - 5590205367072257545, - 17891440127697345143, - 360638857846382524 - ], - "y": [ - 17983779934218975397, - 1625779403076670241, - 1474025795387210129, - 1716171421120825643 - ], - "infinity": false - } - ], - "permutation_commitments": [ - { - "x": [ - 9354841115000244260, - 12887310615208346489, - 1120617137774653400, - 424227936372254439 - ], - "y": [ - 3626714025954019309, - 4480975902927818206, - 10093567956580931634, - 2779897825000836477 - ], - "infinity": false - }, - { - "x": [ - 1864884782104066211, - 1247154271168453374, - 9982166936353409582, - 1177339527115773898 - ], - "y": [ - 9932597332303163060, - 1888682277213109000, - 11684220277443154622, - 3062389133489783806 - ], - "infinity": false - }, - { - "x": [ - 9943021177878836437, - 9004866876172522532, - 14085451328492136137, - 1567186274425392936 - ], - "y": [ - 
7148906168793986389, - 4780330524752436486, - 10067456648871712650, - 179752856567560382 - ], - "infinity": false - }, - { - "x": [ - 14745822832390509907, - 13862030626549782961, - 10000268356302875837, - 705042314567833799 - ], - "y": [ - 11091254259539384938, - 11733968109785394056, - 11099103738494585500, - 1527456782567955191 - ], - "infinity": false - } - ], - "total_lookup_entries_length": 35330543, - "lookup_selector_commitment": { - "x": [ - 12333191731462980214, - 17841370099698959347, - 12878670991018181621, - 2894319630687016858 - ], - "y": [ - 76816727314643395, - 3214684791046221459, - 878301108738499830, - 126016925902987736 - ], - "infinity": false - }, - "lookup_tables_commitments": [ - { - "x": [ - 911668445361375614, - 12752365066512000136, - 11550232015863976467, - 2053619216798992367 - ], - "y": [ - 4194339833917391280, - 1643071887467668153, - 3377480965202592691, - 1664272901450533719 - ], - "infinity": false - }, - { - "x": [ - 2999316735203966181, - 5189676006781764591, - 14324679313847304783, - 1264086978509739587 - ], - "y": [ - 8714172036038650967, - 10907167170124829028, - 8950970593162102458, - 1596853051185997037 - ], - "infinity": false - }, - { - "x": [ - 1146500486770850326, - 13562754408872334896, - 14063471769392190265, - 3387351506820193517 - ], - "y": [ - 6677788829230735422, - 15425668102208730571, - 5341291772716012975, - 539156410041791428 - ], - "infinity": false - }, - { - "x": [ - 18159886519320172405, - 4286826840324377773, - 16364826089434525345, - 228697666397725767 - ], - "y": [ - 4850633487261444791, - 6327421534074497160, - 12883776034588695446, - 1510314148471267214 - ], - "infinity": false - } - ], - "lookup_table_type_commitment": { - "x": [ - 18245233954308230592, - 8193493714287610439, - 6521078295132558240, - 861511081336275611 - ], - "y": [ - 4275834222266292944, - 13179071278128968874, - 5943013356852335765, - 2456639561657053045 - ], - "infinity": false - }, - "non_residues": [ - [ - 5, - 0, - 0, - 0 - ], - [ - 7, - 0, - 0, - 0 - ], - [ - 10, - 0, - 0, - 0 - ] - ], - "g2_elements": [ - { - "x": { - "c0": [ - 5106727233969649389, - 7440829307424791261, - 4785637993704342649, - 1729627375292849782 - ], - "c1": [ - 10945020018377822914, - 17413811393473931026, - 8241798111626485029, - 1841571559660931130 - ] - }, - "y": { - "c0": [ - 5541340697920699818, - 16416156555105522555, - 5380518976772849807, - 1353435754470862315 - ], - "c1": [ - 6173549831154472795, - 13567992399387660019, - 17050234209342075797, - 650358724130500725 - ] - }, - "infinity": false - }, - { - "x": { - "c0": [ - 9089143573911733168, - 11482283522806384523, - 13585589533905622862, - 79029415676722370 - ], - "c1": [ - 5692040832573735873, - 16884514497384809355, - 16717166481813659368, - 2742131088506155463 - ] - }, - "y": { - "c0": [ - 9604638503594647125, - 1289961608472612514, - 6217038149984805214, - 2521661352385209130 - ], - "c1": [ - 17168069778630926308, - 11309277837895768996, - 15154989611154567813, - 359271377050603491 - ] - }, - "infinity": false - } - ] -} \ No newline at end of file diff --git a/core/bin/verification_key_generator_and_server/data/verification_6_key.json b/core/bin/verification_key_generator_and_server/data/verification_6_key.json deleted file mode 100644 index 34419df1770..00000000000 --- a/core/bin/verification_key_generator_and_server/data/verification_6_key.json +++ /dev/null @@ -1,399 +0,0 @@ -{ - "n": 67108863, - "num_inputs": 1, - "state_width": 4, - "num_witness_polys": 0, - "gate_setup_commitments": [ - { - "x": [ - 
11033020679838791108, - 14920056278440370765, - 8156477685651219112, - 2935096142913695825 - ], - "y": [ - 12780055516709256833, - 966513406268819160, - 9584266886886532866, - 892347068344972829 - ], - "infinity": false - }, - { - "x": [ - 4044870432040348042, - 10630300946926732771, - 3143480015080245177, - 323917785885883620 - ], - "y": [ - 2297905282612888789, - 8206728682979815807, - 10628767928228215441, - 3062326525278498604 - ], - "infinity": false - }, - { - "x": [ - 14760731158538087565, - 9176522400170689419, - 9855180338242634009, - 2456568616568530201 - ], - "y": [ - 5168103953295979961, - 397013651969935557, - 13864468728668213717, - 2925074735515169158 - ], - "infinity": false - }, - { - "x": [ - 13613691592548742743, - 11339389230513898784, - 4864282628000142183, - 2568915564796772962 - ], - "y": [ - 13074021698952750513, - 14891339562597317806, - 6145754680491802845, - 913243322463864468 - ], - "infinity": false - }, - { - "x": [ - 9607983563343027008, - 1604609357347728263, - 6735137627175405143, - 91305611485454778 - ], - "y": [ - 2068449139446365265, - 6171753015906067998, - 16290186276604645197, - 420889087081901603 - ], - "infinity": false - }, - { - "x": [ - 15994614598808477960, - 5137738490508028659, - 6599503545391493738, - 3293094250487745346 - ], - "y": [ - 3246688300070721763, - 8836841286539929132, - 1231014124908407748, - 3042941126579517307 - ], - "infinity": false - }, - { - "x": [ - 12550390789117808745, - 14001030013656521177, - 16383284077678821701, - 1815317458772356897 - ], - "y": [ - 10125044837604978181, - 7468984969058409331, - 592554137766258541, - 2877688586321491725 - ], - "infinity": false - }, - { - "x": [ - 12238091769471133989, - 184716847866634800, - 5888077423956723698, - 609118759536864800 - ], - "y": [ - 7725369615076384544, - 7561073323636510559, - 10473734750023783127, - 861766554781597742 - ], - "infinity": false - } - ], - "gate_selectors_commitments": [ - { - "x": [ - 1206127807467530207, - 3510053718168412786, - 7933459343694333819, - 3179950874373950282 - ], - "y": [ - 5784856107466398982, - 395767970566909293, - 11244200096534021583, - 2068407511544404377 - ], - "infinity": false - }, - { - "x": [ - 4044617248058764838, - 11957266999135308674, - 17621747993137866783, - 990156155955733134 - ], - "y": [ - 17234504892477991728, - 17558826298225495489, - 9349531438753716103, - 2656409262947709594 - ], - "infinity": false - } - ], - "permutation_commitments": [ - { - "x": [ - 4308597000331285311, - 12130199317436319902, - 3842336010209461436, - 191866453597778475 - ], - "y": [ - 2144400171783010971, - 13016087318985913183, - 7166370365336301922, - 2216888390030560212 - ], - "infinity": false - }, - { - "x": [ - 4661184458541745063, - 12423889401726065791, - 11959346001895915074, - 779668716585305501 - ], - "y": [ - 16401363790535442499, - 7367694133722005848, - 8015837005184593399, - 454166987511489961 - ], - "infinity": false - }, - { - "x": [ - 858215262803403659, - 1405268530667707386, - 7763962169005921611, - 2845435536097215865 - ], - "y": [ - 10639490331338262540, - 6397733211512468794, - 968161689973799899, - 2054756257253905633 - ], - "infinity": false - }, - { - "x": [ - 17338818659525246480, - 13318488425310212471, - 10548319374858973842, - 87084958643052105 - ], - "y": [ - 2279840344577984658, - 15197280761751903251, - 16019225334594459873, - 149925650787595538 - ], - "infinity": false - } - ], - "total_lookup_entries_length": 3054916, - "lookup_selector_commitment": { - "x": [ - 4844230422625825285, - 956290027823441223, - 
763010695794739308, - 2426170829255106638 - ], - "y": [ - 13850520521470006763, - 9003994589054655373, - 10310690204425503422, - 3012516431885755457 - ], - "infinity": false - }, - "lookup_tables_commitments": [ - { - "x": [ - 5825422128268478267, - 9219263846299851036, - 3879231702557190566, - 1702488722758880769 - ], - "y": [ - 18311881100262470992, - 5742998199368802392, - 18106865487471159417, - 502191980176920012 - ], - "infinity": false - }, - { - "x": [ - 17195892082859417081, - 7890531942603584793, - 2381805632820057528, - 3173232410464566465 - ], - "y": [ - 16359614627947132075, - 3459600273035137079, - 4550762061432972122, - 3394559699318358224 - ], - "infinity": false - }, - { - "x": [ - 1716103379277390185, - 18097936269579187542, - 16357329729761063450, - 1508640059338197502 - ], - "y": [ - 11014806739603983364, - 4396503314588777389, - 9397245609635151055, - 1703957955248411380 - ], - "infinity": false - }, - { - "x": [ - 4770171350693477354, - 17110558673192292253, - 9799800677557311408, - 761984875463445481 - ], - "y": [ - 1560561403388310063, - 31331275310848146, - 287152055803835484, - 457826332542037277 - ], - "infinity": false - } - ], - "lookup_table_type_commitment": { - "x": [ - 16775586915653722908, - 9787338077086882544, - 8381721730521821042, - 2974660093975661578 - ], - "y": [ - 3011389235487891234, - 15409507493813096391, - 17416460976276029026, - 324418288749844627 - ], - "infinity": false - }, - "non_residues": [ - [ - 5, - 0, - 0, - 0 - ], - [ - 7, - 0, - 0, - 0 - ], - [ - 10, - 0, - 0, - 0 - ] - ], - "g2_elements": [ - { - "x": { - "c0": [ - 5106727233969649389, - 7440829307424791261, - 4785637993704342649, - 1729627375292849782 - ], - "c1": [ - 10945020018377822914, - 17413811393473931026, - 8241798111626485029, - 1841571559660931130 - ] - }, - "y": { - "c0": [ - 5541340697920699818, - 16416156555105522555, - 5380518976772849807, - 1353435754470862315 - ], - "c1": [ - 6173549831154472795, - 13567992399387660019, - 17050234209342075797, - 650358724130500725 - ] - }, - "infinity": false - }, - { - "x": { - "c0": [ - 9089143573911733168, - 11482283522806384523, - 13585589533905622862, - 79029415676722370 - ], - "c1": [ - 5692040832573735873, - 16884514497384809355, - 16717166481813659368, - 2742131088506155463 - ] - }, - "y": { - "c0": [ - 9604638503594647125, - 1289961608472612514, - 6217038149984805214, - 2521661352385209130 - ], - "c1": [ - 17168069778630926308, - 11309277837895768996, - 15154989611154567813, - 359271377050603491 - ] - }, - "infinity": false - } - ] -} \ No newline at end of file diff --git a/core/bin/verification_key_generator_and_server/data/verification_7_key.json b/core/bin/verification_key_generator_and_server/data/verification_7_key.json deleted file mode 100644 index 406afcf4f0f..00000000000 --- a/core/bin/verification_key_generator_and_server/data/verification_7_key.json +++ /dev/null @@ -1,399 +0,0 @@ -{ - "n": 67108863, - "num_inputs": 1, - "state_width": 4, - "num_witness_polys": 0, - "gate_setup_commitments": [ - { - "x": [ - 14104278525941001335, - 6652111379088654370, - 12369045377338511525, - 969809670184836151 - ], - "y": [ - 10111598525423302991, - 15018239425425696172, - 3683372413830991953, - 1023765059890131543 - ], - "infinity": false - }, - { - "x": [ - 11576486884237685781, - 16315823052257401029, - 9860864515877414033, - 3179959598270002012 - ], - "y": [ - 487035971539979311, - 5573003039451484772, - 15711637819381564577, - 1904127920269177012 - ], - "infinity": false - }, - { - "x": [ - 18299921128106602792, - 
211731469708793711, - 17645028854462121436, - 675870769139913517 - ], - "y": [ - 15146647508675165454, - 18353083579110652488, - 12704645658780892142, - 2929235299763077823 - ], - "infinity": false - }, - { - "x": [ - 11570586127780196277, - 2363872676317471379, - 7386811009552915084, - 959006902628416514 - ], - "y": [ - 17455735716787098890, - 14879699386306994564, - 5628100821420984321, - 2862659911936763739 - ], - "infinity": false - }, - { - "x": [ - 8746328571248006135, - 17089435014355939378, - 8764506524471462449, - 1810135458362589443 - ], - "y": [ - 14070512019208911265, - 8756287737315170424, - 14821473955626613, - 1559545289765661890 - ], - "infinity": false - }, - { - "x": [ - 2113591086436573082, - 12629483649401688389, - 11845953673798951216, - 3081238281103628853 - ], - "y": [ - 727696133406005469, - 14413827745813557208, - 6425035421156126073, - 291513487083052109 - ], - "infinity": false - }, - { - "x": [ - 15346257923988607256, - 10403316660718504706, - 7158515894996917286, - 2702098910103276762 - ], - "y": [ - 16559143492878738107, - 12716298061927369795, - 12296985344891017351, - 2814996798832983835 - ], - "infinity": false - }, - { - "x": [ - 2213195001372039295, - 8878300942582564036, - 10524986226191936528, - 1815326540993196034 - ], - "y": [ - 11397120982692424098, - 4455537142488107627, - 14205354993332845055, - 2313809587433567240 - ], - "infinity": false - } - ], - "gate_selectors_commitments": [ - { - "x": [ - 14849046431510808003, - 11699893139960418168, - 6000246307731364190, - 3362832011707902866 - ], - "y": [ - 3242560497217933852, - 11672398501106836413, - 987926723326096281, - 2451226739475091625 - ], - "infinity": false - }, - { - "x": [ - 9272095445402359796, - 1201046264826394411, - 7424934554242366462, - 1125893484262333608 - ], - "y": [ - 15903920299684884420, - 17703294385387204708, - 2256937129195345942, - 1905295733884217610 - ], - "infinity": false - } - ], - "permutation_commitments": [ - { - "x": [ - 7591926766688292250, - 10457199375342460747, - 3214976192729961314, - 1412860682249358355 - ], - "y": [ - 16894260140402496006, - 3666374878391815131, - 15124268261678582348, - 1340579262756129480 - ], - "infinity": false - }, - { - "x": [ - 2963934507934439034, - 17415763666461861018, - 6331792462137338053, - 3122358526111186727 - ], - "y": [ - 15040784043381591388, - 7188410244350767315, - 14077554108063383431, - 1704329843327300001 - ], - "infinity": false - }, - { - "x": [ - 7967507884960122293, - 13509230570773443525, - 11125712791473385552, - 2241808950326876268 - ], - "y": [ - 10594180941877323940, - 17179032413109513856, - 17941607623778808075, - 646138820984886096 - ], - "infinity": false - }, - { - "x": [ - 4729534828155895283, - 15489050734511381239, - 4847364931161261393, - 2461584260035042491 - ], - "y": [ - 15255817542606978857, - 6517429187947361297, - 17127878630247240853, - 3389541567226838859 - ], - "infinity": false - } - ], - "total_lookup_entries_length": 40724289, - "lookup_selector_commitment": { - "x": [ - 5449769839889646584, - 2072406321611922291, - 9391796773218391195, - 2377769168011090955 - ], - "y": [ - 1789189431152658324, - 2639430755172378798, - 136577695530283091, - 3045539535973502646 - ], - "infinity": false - }, - "lookup_tables_commitments": [ - { - "x": [ - 631990924006796604, - 16139625628991115157, - 13331739325995827711, - 1062301837743594995 - ], - "y": [ - 15303054606290800139, - 15906872095881647437, - 7093896572295020249, - 1342952934989901142 - ], - "infinity": false - }, - { - "x": [ - 
7983921919542246393, - 13296544189644416678, - 17081022784392007697, - 1980832835348244027 - ], - "y": [ - 10874958134865200330, - 7702740658637630534, - 14052057929798961943, - 3193353539419869016 - ], - "infinity": false - }, - { - "x": [ - 1114587284824996932, - 4636906500482867924, - 15328247172597030456, - 87946895873973686 - ], - "y": [ - 15573033830207915877, - 5194694185599035278, - 2562407345425607214, - 2782078999306862675 - ], - "infinity": false - }, - { - "x": [ - 18225112781127431982, - 18048613958187123807, - 7325490730844456621, - 1953409020724855888 - ], - "y": [ - 7577000130125917198, - 6193701449695751861, - 4102082927677054717, - 395350071385269650 - ], - "infinity": false - } - ], - "lookup_table_type_commitment": { - "x": [ - 12639039925867405095, - 9606685454938605275, - 7802675863289639223, - 1948831418843225802 - ], - "y": [ - 11059150608777595761, - 10458812733010634961, - 16772660325487078311, - 340608886692078192 - ], - "infinity": false - }, - "non_residues": [ - [ - 5, - 0, - 0, - 0 - ], - [ - 7, - 0, - 0, - 0 - ], - [ - 10, - 0, - 0, - 0 - ] - ], - "g2_elements": [ - { - "x": { - "c0": [ - 5106727233969649389, - 7440829307424791261, - 4785637993704342649, - 1729627375292849782 - ], - "c1": [ - 10945020018377822914, - 17413811393473931026, - 8241798111626485029, - 1841571559660931130 - ] - }, - "y": { - "c0": [ - 5541340697920699818, - 16416156555105522555, - 5380518976772849807, - 1353435754470862315 - ], - "c1": [ - 6173549831154472795, - 13567992399387660019, - 17050234209342075797, - 650358724130500725 - ] - }, - "infinity": false - }, - { - "x": { - "c0": [ - 9089143573911733168, - 11482283522806384523, - 13585589533905622862, - 79029415676722370 - ], - "c1": [ - 5692040832573735873, - 16884514497384809355, - 16717166481813659368, - 2742131088506155463 - ] - }, - "y": { - "c0": [ - 9604638503594647125, - 1289961608472612514, - 6217038149984805214, - 2521661352385209130 - ], - "c1": [ - 17168069778630926308, - 11309277837895768996, - 15154989611154567813, - 359271377050603491 - ] - }, - "infinity": false - } - ] -} \ No newline at end of file diff --git a/core/bin/verification_key_generator_and_server/data/verification_8_key.json b/core/bin/verification_key_generator_and_server/data/verification_8_key.json deleted file mode 100644 index b8511e17b75..00000000000 --- a/core/bin/verification_key_generator_and_server/data/verification_8_key.json +++ /dev/null @@ -1,399 +0,0 @@ -{ - "n": 67108863, - "num_inputs": 1, - "state_width": 4, - "num_witness_polys": 0, - "gate_setup_commitments": [ - { - "x": [ - 1834112096176967541, - 5137529514715617427, - 6540843391881340212, - 3033401888759110412 - ], - "y": [ - 8910602970094475216, - 13169513767982514776, - 5761530093694221441, - 2733318557350866268 - ], - "infinity": false - }, - { - "x": [ - 4701064149158432365, - 5425087325981406309, - 7911131985858828309, - 1683257627049186617 - ], - "y": [ - 13565328904521460918, - 17013189171844282257, - 4897087111183007258, - 2345861178674095559 - ], - "infinity": false - }, - { - "x": [ - 17285353863442654170, - 17787410547699779811, - 4803131526909484890, - 1607731426619418092 - ], - "y": [ - 3219378920021652314, - 11046862703797106703, - 10595836629242151972, - 2970963661532337787 - ], - "infinity": false - }, - { - "x": [ - 6619857367954187649, - 8023974497004524989, - 10088058961892288757, - 938018804109053807 - ], - "y": [ - 15549411064757453720, - 1776820811429478220, - 8222111141823917842, - 290593315633281086 - ], - "infinity": false - }, - { - "x": [ - 
3338931670632164423, - 11330459786926502111, - 13560408114559586439, - 233279858410037466 - ], - "y": [ - 9757980615881472290, - 6475296714459436577, - 15954545788543926629, - 2522580407814024231 - ], - "infinity": false - }, - { - "x": [ - 2168501453409628158, - 16417992951888116942, - 1994813140597965849, - 1938552030580060698 - ], - "y": [ - 2393885012813093493, - 5109365147685051030, - 4449898145078443978, - 996506294158321126 - ], - "infinity": false - }, - { - "x": [ - 8163446935422765754, - 17127634458571165785, - 18101155318188210010, - 1502677094108070955 - ], - "y": [ - 4184320355428455210, - 15479528531137595907, - 8455846016430686855, - 2570922865513301289 - ], - "infinity": false - }, - { - "x": [ - 407579941387952352, - 17088458915370169940, - 16892753644011369852, - 2421666516533613805 - ], - "y": [ - 597435837737447683, - 18122233368438707442, - 4844832744563923839, - 396103093107107006 - ], - "infinity": false - } - ], - "gate_selectors_commitments": [ - { - "x": [ - 16242434178832819081, - 2218928756172422054, - 5871927983870638422, - 810020555846721779 - ], - "y": [ - 9387856576677982883, - 5119490172321159350, - 14295435318421985120, - 1325809191818871673 - ], - "infinity": false - }, - { - "x": [ - 5933965238687071287, - 10681704800081225943, - 14555731010498897395, - 959799154476325145 - ], - "y": [ - 1501632601560034962, - 9401704677918783964, - 12292111854761501889, - 858616662661742045 - ], - "infinity": false - } - ], - "permutation_commitments": [ - { - "x": [ - 12841507457971520539, - 6525486152471484441, - 3744486588589217686, - 2769451038405535407 - ], - "y": [ - 14145668232228974364, - 9864097401535863500, - 12665512227995054273, - 1710776254334161256 - ], - "infinity": false - }, - { - "x": [ - 12108157388466567796, - 12008825937320240484, - 11228446795405478904, - 1520424921904150640 - ], - "y": [ - 18157047055378899649, - 10836823561088895074, - 583613418617515639, - 2570085764232471205 - ], - "infinity": false - }, - { - "x": [ - 3117226099128838157, - 10181632193024509490, - 1215328570209780930, - 1536961491401844084 - ], - "y": [ - 11646905141441654681, - 6168936708987385450, - 14459621573162108487, - 2047975568887748173 - ], - "infinity": false - }, - { - "x": [ - 12034664246790330785, - 12032082546920592595, - 12002839514296456095, - 3009479689157977152 - ], - "y": [ - 180421277197569955, - 5815678523367268562, - 11718416396488597085, - 408186057258055191 - ], - "infinity": false - } - ], - "total_lookup_entries_length": 34384753, - "lookup_selector_commitment": { - "x": [ - 3872970821419373956, - 13556503327407661223, - 12832313376327677595, - 211677646774476601 - ], - "y": [ - 17281673428499585093, - 235933066531227024, - 17890327653152417391, - 2551853991532334733 - ], - "infinity": false - }, - "lookup_tables_commitments": [ - { - "x": [ - 14943975734974680929, - 9516136771242606543, - 6695719565456036638, - 3449077049666620393 - ], - "y": [ - 11678209093898264827, - 4499447145490933412, - 6317798459829178953, - 1439219764789809864 - ], - "infinity": false - }, - { - "x": [ - 13501290183905491407, - 17914451638435951710, - 5188762915201956497, - 1220375585898114161 - ], - "y": [ - 14519533874806433487, - 409100046306023, - 2203176115240501563, - 3105700623762337563 - ], - "infinity": false - }, - { - "x": [ - 13968159480895722732, - 6973568812120893251, - 6250254745096478587, - 2299355969860561070 - ], - "y": [ - 7695944005480078577, - 12009671787784557856, - 13727042561077817002, - 219052945806305675 - ], - "infinity": false - }, - { - "x": [ - 
4871629130106420314, - 4091595855728790015, - 1851744390500340594, - 3123168382710331270 - ], - "y": [ - 9703969956757970162, - 1215036492891076659, - 11876727836856213678, - 2640893636590396388 - ], - "infinity": false - } - ], - "lookup_table_type_commitment": { - "x": [ - 10299044894603982393, - 4664166516779563250, - 13124827128688646542, - 3361599897730972314 - ], - "y": [ - 18259946931458798404, - 10145479316480429602, - 15446978899103328376, - 265382288883021070 - ], - "infinity": false - }, - "non_residues": [ - [ - 5, - 0, - 0, - 0 - ], - [ - 7, - 0, - 0, - 0 - ], - [ - 10, - 0, - 0, - 0 - ] - ], - "g2_elements": [ - { - "x": { - "c0": [ - 5106727233969649389, - 7440829307424791261, - 4785637993704342649, - 1729627375292849782 - ], - "c1": [ - 10945020018377822914, - 17413811393473931026, - 8241798111626485029, - 1841571559660931130 - ] - }, - "y": { - "c0": [ - 5541340697920699818, - 16416156555105522555, - 5380518976772849807, - 1353435754470862315 - ], - "c1": [ - 6173549831154472795, - 13567992399387660019, - 17050234209342075797, - 650358724130500725 - ] - }, - "infinity": false - }, - { - "x": { - "c0": [ - 9089143573911733168, - 11482283522806384523, - 13585589533905622862, - 79029415676722370 - ], - "c1": [ - 5692040832573735873, - 16884514497384809355, - 16717166481813659368, - 2742131088506155463 - ] - }, - "y": { - "c0": [ - 9604638503594647125, - 1289961608472612514, - 6217038149984805214, - 2521661352385209130 - ], - "c1": [ - 17168069778630926308, - 11309277837895768996, - 15154989611154567813, - 359271377050603491 - ] - }, - "infinity": false - } - ] -} \ No newline at end of file diff --git a/core/bin/verification_key_generator_and_server/data/verification_9_key.json b/core/bin/verification_key_generator_and_server/data/verification_9_key.json deleted file mode 100644 index 75de5f75c78..00000000000 --- a/core/bin/verification_key_generator_and_server/data/verification_9_key.json +++ /dev/null @@ -1,399 +0,0 @@ -{ - "n": 67108863, - "num_inputs": 1, - "state_width": 4, - "num_witness_polys": 0, - "gate_setup_commitments": [ - { - "x": [ - 15041888416700822899, - 15908701850433687369, - 6928173929840686173, - 501601364708497325 - ], - "y": [ - 9443860646360881208, - 15174745959183347299, - 3341918218952258763, - 1470216750942469587 - ], - "infinity": false - }, - { - "x": [ - 1713492202424532619, - 5921868784153327820, - 3919870428680620477, - 2459274846398943915 - ], - "y": [ - 8012717129874416534, - 13032363221581987781, - 9462161206147300944, - 1151760065513271967 - ], - "infinity": false - }, - { - "x": [ - 6636128327108235840, - 9362733145474272574, - 7779132015244601843, - 474802631021936400 - ], - "y": [ - 3900992471196218787, - 113851245079995197, - 7493904056590361535, - 3140468871801097229 - ], - "infinity": false - }, - { - "x": [ - 4340102674797800902, - 8715432707094353745, - 4331145745081713603, - 45456583984841487 - ], - "y": [ - 18326546742044058782, - 15443239165658185296, - 9765917874876721196, - 687859761729374839 - ], - "infinity": false - }, - { - "x": [ - 10804694580890857975, - 10550068287306981825, - 14956274043654722561, - 3060589920124935341 - ], - "y": [ - 17010223672048359580, - 263749806111642373, - 8349695975133446526, - 2826070525773268002 - ], - "infinity": false - }, - { - "x": [ - 16133249269780245267, - 4275571784340824698, - 6262619645627758753, - 3231281899173719188 - ], - "y": [ - 11839616617849449709, - 7142633755989890055, - 10840735473548209733, - 2847350786075278882 - ], - "infinity": false - }, - { - "x": [ - 
16258572583186965203, - 1354691125575792689, - 17235265854934968790, - 1252220109588505888 - ], - "y": [ - 9336541637487074271, - 18402912967310224930, - 13223187653117829136, - 2979297976786733465 - ], - "infinity": false - }, - { - "x": [ - 8525686695522099028, - 4103157564078645049, - 18392570749492199187, - 2911539491816599180 - ], - "y": [ - 114653447583918953, - 10470307038453386601, - 11189850644566793538, - 1298227034210846592 - ], - "infinity": false - } - ], - "gate_selectors_commitments": [ - { - "x": [ - 2069700145549311928, - 4250782333685017927, - 14207216715687122978, - 1145927286048477791 - ], - "y": [ - 9341202692364554712, - 12346939747104737180, - 2826478533799125818, - 2279570556437452275 - ], - "infinity": false - }, - { - "x": [ - 12388902775325386546, - 1277383964095999647, - 10535796018183893831, - 3359866702323175506 - ], - "y": [ - 16500893366957272235, - 2806147688388338314, - 8233156072220488773, - 2867848844627212711 - ], - "infinity": false - } - ], - "permutation_commitments": [ - { - "x": [ - 17521183961631816299, - 18327810537117645266, - 16586212795163003556, - 3052771534158410452 - ], - "y": [ - 8441310283734453731, - 14146088755801181801, - 17480253356603213989, - 3217948944323396651 - ], - "infinity": false - }, - { - "x": [ - 16076801532842923524, - 7514743296775639295, - 2571323986448120255, - 184367540214459973 - ], - "y": [ - 13389643967183613114, - 17108261756464256828, - 11145735340309739417, - 2142196980030893874 - ], - "infinity": false - }, - { - "x": [ - 8034683328666433725, - 5436036566901194392, - 18053257213361014053, - 2821377847227509494 - ], - "y": [ - 14471305228212723444, - 8894846184648865892, - 7047725473055235530, - 2413388400332075493 - ], - "infinity": false - }, - { - "x": [ - 14026981588443304814, - 14671946927765496183, - 13387079215022495926, - 2554705188091675830 - ], - "y": [ - 440116222237740520, - 1630168477189852269, - 17833425794232523381, - 908824471705597078 - ], - "infinity": false - } - ], - "total_lookup_entries_length": 41494904, - "lookup_selector_commitment": { - "x": [ - 13889323383351416990, - 17887386740570674124, - 5463612855590268091, - 2434255340534820869 - ], - "y": [ - 2436699678434218349, - 11251365794004058995, - 11023509005141034197, - 2867854671852170604 - ], - "infinity": false - }, - "lookup_tables_commitments": [ - { - "x": [ - 631990924006796604, - 16139625628991115157, - 13331739325995827711, - 1062301837743594995 - ], - "y": [ - 15303054606290800139, - 15906872095881647437, - 7093896572295020249, - 1342952934989901142 - ], - "infinity": false - }, - { - "x": [ - 7983921919542246393, - 13296544189644416678, - 17081022784392007697, - 1980832835348244027 - ], - "y": [ - 10874958134865200330, - 7702740658637630534, - 14052057929798961943, - 3193353539419869016 - ], - "infinity": false - }, - { - "x": [ - 1114587284824996932, - 4636906500482867924, - 15328247172597030456, - 87946895873973686 - ], - "y": [ - 15573033830207915877, - 5194694185599035278, - 2562407345425607214, - 2782078999306862675 - ], - "infinity": false - }, - { - "x": [ - 18225112781127431982, - 18048613958187123807, - 7325490730844456621, - 1953409020724855888 - ], - "y": [ - 7577000130125917198, - 6193701449695751861, - 4102082927677054717, - 395350071385269650 - ], - "infinity": false - } - ], - "lookup_table_type_commitment": { - "x": [ - 3832160677272803715, - 2122279734318217808, - 811690144328522684, - 1416829483108546006 - ], - "y": [ - 10041279311991435550, - 14702496983143623186, - 4419862575487552747, - 
1429817244630465543 - ], - "infinity": false - }, - "non_residues": [ - [ - 5, - 0, - 0, - 0 - ], - [ - 7, - 0, - 0, - 0 - ], - [ - 10, - 0, - 0, - 0 - ] - ], - "g2_elements": [ - { - "x": { - "c0": [ - 5106727233969649389, - 7440829307424791261, - 4785637993704342649, - 1729627375292849782 - ], - "c1": [ - 10945020018377822914, - 17413811393473931026, - 8241798111626485029, - 1841571559660931130 - ] - }, - "y": { - "c0": [ - 5541340697920699818, - 16416156555105522555, - 5380518976772849807, - 1353435754470862315 - ], - "c1": [ - 6173549831154472795, - 13567992399387660019, - 17050234209342075797, - 650358724130500725 - ] - }, - "infinity": false - }, - { - "x": { - "c0": [ - 9089143573911733168, - 11482283522806384523, - 13585589533905622862, - 79029415676722370 - ], - "c1": [ - 5692040832573735873, - 16884514497384809355, - 16717166481813659368, - 2742131088506155463 - ] - }, - "y": { - "c0": [ - 9604638503594647125, - 1289961608472612514, - 6217038149984805214, - 2521661352385209130 - ], - "c1": [ - 17168069778630926308, - 11309277837895768996, - 15154989611154567813, - 359271377050603491 - ] - }, - "infinity": false - } - ] -} \ No newline at end of file diff --git a/core/bin/verification_key_generator_and_server/src/commitment_generator.rs b/core/bin/verification_key_generator_and_server/src/commitment_generator.rs deleted file mode 100644 index ed859bcb436..00000000000 --- a/core/bin/verification_key_generator_and_server/src/commitment_generator.rs +++ /dev/null @@ -1,37 +0,0 @@ -use anyhow::Context as _; -use zksync_prover_utils::vk_commitment_helper::{ - get_toml_formatted_value, read_contract_toml, write_contract_toml, -}; -use zksync_verification_key_server::generate_commitments; - -fn main() -> anyhow::Result<()> { - tracing::info!("Starting commitment generation!"); - read_and_update_contract_toml() -} - -fn read_and_update_contract_toml() -> anyhow::Result<()> { - let mut contract_doc = read_contract_toml().context("read_contract_toml()")?; - let ( - basic_circuit_commitment_hex, - leaf_aggregation_commitment_hex, - node_aggregation_commitment_hex, - ) = generate_commitments(); - contract_doc["contracts"]["RECURSION_CIRCUITS_SET_VKS_HASH"] = - get_toml_formatted_value(basic_circuit_commitment_hex); - contract_doc["contracts"]["RECURSION_LEAF_LEVEL_VK_HASH"] = - get_toml_formatted_value(leaf_aggregation_commitment_hex); - contract_doc["contracts"]["RECURSION_NODE_LEVEL_VK_HASH"] = - get_toml_formatted_value(node_aggregation_commitment_hex); - tracing::info!("Updated toml content: {:?}", contract_doc.to_string()); - write_contract_toml(contract_doc).context("write_contract_toml") -} - -#[cfg(test)] -mod test { - use super::*; - - #[test] - fn test_read_and_update_contract_toml() { - read_and_update_contract_toml().unwrap(); - } -} diff --git a/core/bin/verification_key_generator_and_server/src/json_to_binary_vk_converter.rs b/core/bin/verification_key_generator_and_server/src/json_to_binary_vk_converter.rs deleted file mode 100644 index 65a2e3361bf..00000000000 --- a/core/bin/verification_key_generator_and_server/src/json_to_binary_vk_converter.rs +++ /dev/null @@ -1,31 +0,0 @@ -use bincode::serialize_into; -use std::fs::File; -use std::io::BufWriter; -use structopt::StructOpt; -use zksync_verification_key_server::get_vk_for_circuit_type; - -#[derive(Debug, StructOpt)] -#[structopt( - name = "json existing json VK's to binary vk", - about = "converter tool" -)] -struct Opt { - /// Binary output path of verification keys. 
- #[structopt(short)] - output_bin_path: String, -} - -fn main() { - let opt = Opt::from_args(); - println!("Converting existing json keys to binary"); - generate_bin_vks(opt.output_bin_path); -} - -fn generate_bin_vks(output_path: String) { - for circuit_type in 1..=18 { - let filename = format!("{}/verification_{}.key", output_path, circuit_type); - let vk = get_vk_for_circuit_type(circuit_type); - let mut f = BufWriter::new(File::create(filename).unwrap()); - serialize_into(&mut f, &vk).unwrap(); - } -} diff --git a/core/bin/verification_key_generator_and_server/src/lib.rs b/core/bin/verification_key_generator_and_server/src/lib.rs deleted file mode 100644 index 2b05363595b..00000000000 --- a/core/bin/verification_key_generator_and_server/src/lib.rs +++ /dev/null @@ -1,188 +0,0 @@ -use ff::to_hex; -use once_cell::sync::Lazy; -use std::collections::HashMap; -use std::path::Path; -use std::str::FromStr; -use zksync_types::zkevm_test_harness::abstract_zksync_circuit::concrete_circuits::ZkSyncCircuit; -use zksync_types::zkevm_test_harness::bellman::bn256::Bn256; -use zksync_types::zkevm_test_harness::bellman::plonk::better_better_cs::setup::VerificationKey; -use zksync_types::zkevm_test_harness::witness::oracle::VmWitnessOracle; - -use itertools::Itertools; -use structopt::lazy_static::lazy_static; -use zksync_types::circuit::SCHEDULER_CIRCUIT_INDEX; -use zksync_types::circuit::{ - GEOMETRY_CONFIG, LEAF_CIRCUIT_INDEX, LEAF_SPLITTING_FACTOR, NODE_CIRCUIT_INDEX, - NODE_SPLITTING_FACTOR, SCHEDULER_UPPER_BOUND, -}; -use zksync_types::protocol_version::{L1VerifierConfig, VerifierParams}; -use zksync_types::vk_transform::generate_vk_commitment; -use zksync_types::zkevm_test_harness::witness; -use zksync_types::zkevm_test_harness::witness::full_block_artifact::BlockBasicCircuits; -use zksync_types::zkevm_test_harness::witness::recursive_aggregation::{ - erase_vk_type, padding_aggregations, -}; -use zksync_types::zkevm_test_harness::witness::vk_set_generator::circuits_for_vk_generation; -use zksync_types::H256; - -#[cfg(test)] -mod tests; - -lazy_static! 
{ - static ref COMMITMENTS: Lazy = Lazy::new(|| { circuit_commitments() }); -} - -pub fn get_vks_for_basic_circuits( -) -> HashMap>>> { - // 3-17 are the ids of basic circuits - (3..=18) - .map(|circuit_type| (circuit_type, get_vk_for_circuit_type(circuit_type))) - .collect() -} - -pub fn get_vk_for_circuit_type( - circuit_type: u8, -) -> VerificationKey>> { - let filepath = get_file_path(circuit_type); - tracing::info!("Fetching verification key from path: {}", filepath); - let text = std::fs::read_to_string(&filepath) - .unwrap_or_else(|_| panic!("Failed reading verification key from path: {}", filepath)); - serde_json::from_str::>>>( - &text, - ) - .unwrap_or_else(|_| { - panic!( - "Failed deserializing verification key from path: {}", - filepath - ) - }) -} - -pub fn save_vk_for_circuit_type( - circuit_type: u8, - vk: VerificationKey>>, -) { - let filepath = get_file_path(circuit_type); - tracing::info!("saving verification key to: {}", filepath); - std::fs::write(filepath, serde_json::to_string_pretty(&vk).unwrap()).unwrap(); -} - -pub fn get_ordered_vks_for_basic_circuits( - circuits: &BlockBasicCircuits, - verification_keys: &HashMap< - u8, - VerificationKey>>, - >, -) -> Vec>>> { - circuits - .clone() - .into_flattened_set() - .iter() - .map(|circuit| { - let circuit_id = circuit.numeric_circuit_type(); - verification_keys - .get(&circuit_id) - .unwrap_or_else(|| { - panic!("no VK for circuit number {:?}", circuit.short_description()) - }) - .clone() - }) - .collect() -} - -pub fn get_vks_for_commitment( - verification_keys: HashMap< - u8, - VerificationKey>>, - >, -) -> Vec>>> { - // We need all the vks sorted by their respective circuit ids - verification_keys - .into_iter() - .sorted_by_key(|(id, _)| *id) - .map(|(_, val)| val) - .collect() -} - -pub fn get_circuits_for_vk() -> Vec>> { - ensure_setup_key_exist(); - let padding_aggregations = padding_aggregations(NODE_SPLITTING_FACTOR); - circuits_for_vk_generation( - GEOMETRY_CONFIG, - LEAF_SPLITTING_FACTOR, - NODE_SPLITTING_FACTOR, - SCHEDULER_UPPER_BOUND, - padding_aggregations, - ) -} - -fn ensure_setup_key_exist() { - if !Path::new("setup_2^26.key").exists() { - panic!("File setup_2^26.key is required to be present in current directory for verification keys generation. 
\ndownload from https://storage.googleapis.com/matterlabs-setup-keys-us/setup-keys/setup_2^26.key"); - } -} -fn get_file_path(circuit_type: u8) -> String { - let zksync_home = std::env::var("ZKSYNC_HOME").unwrap_or_else(|_| "/".into()); - format!( - "{}/core/bin/verification_key_generator_and_server/data/verification_{}_key.json", - zksync_home, circuit_type - ) -} - -pub fn generate_commitments() -> (String, String, String) { - let (_, basic_circuit_commitment, _) = - witness::recursive_aggregation::form_base_circuits_committment(get_vks_for_commitment( - get_vks_for_basic_circuits(), - )); - - let leaf_aggregation_vk = get_vk_for_circuit_type(LEAF_CIRCUIT_INDEX); - let node_aggregation_vk = get_vk_for_circuit_type(NODE_CIRCUIT_INDEX); - - let (_, leaf_aggregation_vk_commitment) = - witness::recursive_aggregation::compute_vk_encoding_and_committment(erase_vk_type( - leaf_aggregation_vk, - )); - - let (_, node_aggregation_vk_commitment) = - witness::recursive_aggregation::compute_vk_encoding_and_committment(erase_vk_type( - node_aggregation_vk, - )); - let basic_circuit_commitment_hex = format!("0x{}", to_hex(&basic_circuit_commitment)); - let leaf_aggregation_commitment_hex = format!("0x{}", to_hex(&leaf_aggregation_vk_commitment)); - let node_aggregation_commitment_hex = format!("0x{}", to_hex(&node_aggregation_vk_commitment)); - tracing::info!( - "basic circuit commitment {:?}", - basic_circuit_commitment_hex - ); - tracing::info!( - "leaf aggregation commitment {:?}", - leaf_aggregation_commitment_hex - ); - tracing::info!( - "node aggregation commitment {:?}", - node_aggregation_commitment_hex - ); - ( - basic_circuit_commitment_hex, - leaf_aggregation_commitment_hex, - node_aggregation_commitment_hex, - ) -} - -fn circuit_commitments() -> L1VerifierConfig { - let (basic, leaf, node) = generate_commitments(); - let scheduler = generate_vk_commitment(get_vk_for_circuit_type(SCHEDULER_CIRCUIT_INDEX)); - L1VerifierConfig { - params: VerifierParams { - recursion_node_level_vk_hash: H256::from_str(&node).expect("invalid node commitment"), - recursion_leaf_level_vk_hash: H256::from_str(&leaf).expect("invalid leaf commitment"), - recursion_circuits_set_vks_hash: H256::from_str(&basic) - .expect("invalid basic commitment"), - }, - recursion_scheduler_level_vk_hash: scheduler, - } -} - -pub fn get_cached_commitments() -> L1VerifierConfig { - **COMMITMENTS -} diff --git a/core/bin/verification_key_generator_and_server/src/main.rs b/core/bin/verification_key_generator_and_server/src/main.rs deleted file mode 100644 index 30ffb0574d4..00000000000 --- a/core/bin/verification_key_generator_and_server/src/main.rs +++ /dev/null @@ -1,45 +0,0 @@ -use std::collections::HashSet; -use std::env; -use zksync_types::zkevm_test_harness::abstract_zksync_circuit::concrete_circuits::ZkSyncCircuit; -use zksync_types::zkevm_test_harness::bellman::bn256::Bn256; -use zksync_types::zkevm_test_harness::bellman::plonk::better_better_cs::cs::PlonkCsWidth4WithNextStepAndCustomGatesParams; -use zksync_types::zkevm_test_harness::witness::oracle::VmWitnessOracle; -use zksync_verification_key_server::{get_circuits_for_vk, save_vk_for_circuit_type}; - -/// Creates verification keys for the given circuit. 
-fn main() { - let args: Vec = env::args().collect(); - - let circuit_types: HashSet = if args.len() > 1 { - [get_and_ensure_valid_circuit_type(args[1].clone())].into() - } else { - (3..17).collect() - }; - tracing::info!("Starting verification key generation!"); - get_circuits_for_vk() - .into_iter() - .filter(|c| circuit_types.contains(&c.numeric_circuit_type())) - .for_each(generate_verification_key); -} - -fn get_and_ensure_valid_circuit_type(circuit_type: String) -> u8 { - tracing::info!("Received circuit_type: {:?}", circuit_type); - circuit_type - .parse::() - .expect("Please specify a circuit type in range [1, 17]") -} - -fn generate_verification_key(circuit: ZkSyncCircuit>) { - let res = circuit_testing::create_vk_for_padding_size_log_2::< - Bn256, - _, - PlonkCsWidth4WithNextStepAndCustomGatesParams, - >(circuit.clone(), 26) - .unwrap(); - save_vk_for_circuit_type(circuit.numeric_circuit_type(), res); - tracing::info!( - "Finished VK generation for circuit {:?} (id {:?})", - circuit.short_description(), - circuit.numeric_circuit_type() - ); -} diff --git a/core/bin/verification_key_generator_and_server/src/tests.rs b/core/bin/verification_key_generator_and_server/src/tests.rs deleted file mode 100644 index 8f013bad200..00000000000 --- a/core/bin/verification_key_generator_and_server/src/tests.rs +++ /dev/null @@ -1,66 +0,0 @@ -use crate::{get_vk_for_circuit_type, get_vks_for_basic_circuits, get_vks_for_commitment}; -use itertools::Itertools; -use serde_json::Value; -use std::collections::HashMap; -use zksync_types::zkevm_test_harness::abstract_zksync_circuit::concrete_circuits::ZkSyncCircuit; -use zksync_types::zkevm_test_harness::bellman::bn256::Bn256; -use zksync_types::zkevm_test_harness::bellman::plonk::better_better_cs::setup::VerificationKey; - -use zksync_types::zkevm_test_harness::witness::oracle::VmWitnessOracle; - -#[test] -fn test_get_vk_for_circuit_type() { - for circuit_type in 1..=18 { - get_vk_for_circuit_type(circuit_type); - } -} - -#[test] -fn test_get_vks_for_basic_circuits() { - let circuit_type_to_vk = get_vks_for_basic_circuits(); - let circuit_types: Vec = circuit_type_to_vk.into_keys().sorted().collect::>(); - let expected: Vec = (3..=18).collect(); - assert_eq!( - expected, circuit_types, - "circuit types must be in the range [3, 17]" - ); -} - -#[test] -fn test_get_vks_for_commitment() { - let vk_5 = get_vk_for_circuit_type(5); - let vk_2 = get_vk_for_circuit_type(2); - let vk_3 = get_vk_for_circuit_type(3); - let map = HashMap::from([ - (5u8, vk_5.clone()), - (2u8, vk_2.clone()), - (3u8, vk_3.clone()), - ]); - let vks = get_vks_for_commitment(map); - let expected = vec![vk_2, vk_3, vk_5]; - compare_vks( - expected, - vks, - "expected verification key to be in order 2, 3, 5", - ); -} - -fn get_vk_json(vk: &VerificationKey>>) -> Value { - serde_json::to_value(vk).unwrap() -} - -fn get_vk_jsons( - vks: Vec>>>, -) -> Vec { - vks.into_iter().map(|vk| get_vk_json(&vk)).collect() -} - -fn compare_vks( - first: Vec>>>, - second: Vec>>>, - error_message: &str, -) { - let first_json = get_vk_jsons(first); - let second_json = get_vk_jsons(second); - assert_eq!(first_json, second_json, "{:?}", error_message); -} diff --git a/core/bin/verified_sources_fetcher/src/main.rs b/core/bin/verified_sources_fetcher/src/main.rs index 6bb6ee66cee..cc53229329f 100644 --- a/core/bin/verified_sources_fetcher/src/main.rs +++ b/core/bin/verified_sources_fetcher/src/main.rs @@ -1,4 +1,5 @@ use std::io::Write; + use zksync_config::PostgresConfig; use zksync_dal::ConnectionPool; use 
zksync_env_config::FromEnv; diff --git a/core/bin/zksync_server/src/main.rs b/core/bin/zksync_server/src/main.rs index f2aed9c75c2..ffaa08ea090 100644 --- a/core/bin/zksync_server/src/main.rs +++ b/core/bin/zksync_server/src/main.rs @@ -1,8 +1,7 @@ -use anyhow::Context as _; -use clap::Parser; - use std::{str::FromStr, time::Duration}; +use anyhow::Context as _; +use clap::Parser; use zksync_config::{ configs::{ api::{HealthCheckConfig, MerkleTreeApiConfig, Web3JsonRpcConfig}, @@ -13,16 +12,14 @@ use zksync_config::{ fri_prover_group::FriProverGroupConfig, house_keeper::HouseKeeperConfig, FriProofCompressorConfig, FriProverConfig, FriWitnessGeneratorConfig, PrometheusConfig, - ProofDataHandlerConfig, ProverGroupConfig, WitnessGeneratorConfig, + ProofDataHandlerConfig, WitnessGeneratorConfig, }, ApiConfig, ContractsConfig, DBConfig, ETHClientConfig, ETHSenderConfig, ETHWatchConfig, - FetcherConfig, GasAdjusterConfig, ObjectStoreConfig, PostgresConfig, ProverConfigs, + GasAdjusterConfig, ObjectStoreConfig, PostgresConfig, }; - -use zksync_core::temp_config_store::TempConfigStore; use zksync_core::{ - genesis_init, initialize_components, is_genesis_needed, setup_sigint_handler, Component, - Components, + genesis_init, initialize_components, is_genesis_needed, setup_sigint_handler, + temp_config_store::TempConfigStore, Component, Components, }; use zksync_env_config::FromEnv; use zksync_storage::RocksDB; @@ -44,7 +41,7 @@ struct Cli { /// Comma-separated list of components to launch. #[arg( long, - default_value = "api,tree,eth,data_fetcher,state_keeper,witness_generator,housekeeper,basic_witness_input_producer" + default_value = "api,tree,eth,state_keeper,housekeeper,basic_witness_input_producer" )] components: ComponentsToRun, } @@ -113,7 +110,6 @@ async fn main() -> anyhow::Result<()> { fri_witness_generator_config: FriWitnessGeneratorConfig::from_env().ok(), prometheus_config: PrometheusConfig::from_env().ok(), proof_data_handler_config: ProofDataHandlerConfig::from_env().ok(), - prover_group_config: ProverGroupConfig::from_env().ok(), witness_generator_config: WitnessGeneratorConfig::from_env().ok(), api_config: ApiConfig::from_env().ok(), contracts_config: ContractsConfig::from_env().ok(), @@ -121,9 +117,7 @@ async fn main() -> anyhow::Result<()> { eth_client_config: ETHClientConfig::from_env().ok(), eth_sender_config: ETHSenderConfig::from_env().ok(), eth_watch_config: ETHWatchConfig::from_env().ok(), - fetcher_config: FetcherConfig::from_env().ok(), gas_adjuster_config: GasAdjusterConfig::from_env().ok(), - prover_configs: ProverConfigs::from_env().ok(), object_store_config: ObjectStoreConfig::from_env().ok(), }; @@ -154,16 +148,9 @@ async fn main() -> anyhow::Result<()> { opt.components.0 }; - // OneShotWitnessGenerator is the only component that is not expected to run indefinitely - // if this value is `false`, we expect all components to run indefinitely: we panic if any component returns. - let is_only_oneshot_witness_generator_task = matches!( - components.as_slice(), - [Component::WitnessGenerator(Some(_), _)] - ); - // Run core actors. 
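    // Editorial note (not part of the patch): with one-shot witness generation gone, the
    // `is_only_oneshot_witness_generator_task` flag above is removed, `initialize_components`
    // loses its third argument, and `tasks_allowed_to_finish` below is hard-coded to `false`,
    // so every remaining component is expected to run until shutdown and a returning task
    // is treated as a failure.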
let (core_task_handles, stop_sender, cb_receiver, health_check_handle) = - initialize_components(&configs, components, is_only_oneshot_witness_generator_task) + initialize_components(&configs, components) .await .context("Unable to start Core actors")?; @@ -172,7 +159,7 @@ async fn main() -> anyhow::Result<()> { let particular_crypto_alerts = None::>; let graceful_shutdown = None::>; - let tasks_allowed_to_finish = is_only_oneshot_witness_generator_task; + let tasks_allowed_to_finish = false; tokio::select! { _ = wait_for_tasks(core_task_handles, particular_crypto_alerts, graceful_shutdown, tasks_allowed_to_finish) => {}, _ = sigint_receiver => { diff --git a/core/lib/basic_types/src/lib.rs b/core/lib/basic_types/src/lib.rs index 86cc8c59221..5c8b4e6ee69 100644 --- a/core/lib/basic_types/src/lib.rs +++ b/core/lib/basic_types/src/lib.rs @@ -2,25 +2,25 @@ //! //! Most of them are just re-exported from the `web3` crate. +use std::{ + convert::{Infallible, TryFrom, TryInto}, + fmt, + num::ParseIntError, + ops::{Add, Deref, DerefMut, Sub}, + str::FromStr, +}; + +use serde::{de, Deserialize, Deserializer, Serialize}; +pub use web3::{ + self, ethabi, + types::{Address, Bytes, Log, TransactionRequest, H128, H160, H2048, H256, U128, U256, U64}, +}; + #[macro_use] mod macros; - pub mod basic_fri_types; pub mod network; -use serde::{de, Deserialize, Deserializer, Serialize}; -use std::convert::{Infallible, TryFrom, TryInto}; -use std::fmt; -use std::num::ParseIntError; -use std::ops::{Add, Deref, DerefMut, Sub}; -use std::str::FromStr; - -pub use web3; -pub use web3::ethabi; -pub use web3::types::{ - Address, Bytes, Log, TransactionRequest, H128, H160, H2048, H256, U128, U256, U64, -}; - /// Account place in the global state tree is uniquely identified by its address. /// Binary this type is represented by 160 bit big-endian representation of account address. #[derive(Debug, Clone, Copy, Eq, PartialEq, Serialize, Deserialize, Hash, Ord, PartialOrd)] @@ -77,7 +77,7 @@ impl TryFrom for AccountTreeId { } } -/// ChainId in the ZkSync network. +/// ChainId in the zkSync network. #[derive(Copy, Clone, Debug, Serialize, PartialEq, Eq, PartialOrd, Ord, Hash)] pub struct L2ChainId(u64); @@ -115,9 +115,9 @@ impl FromStr for L2ChainId { impl L2ChainId { /// The maximum value of the L2 chain ID. - // 2^53 - 1 is a max safe integer in JS. In ethereum JS libs chain ID should be the safe integer. + // `2^53 - 1` is a max safe integer in JS. In Ethereum JS libraries chain ID should be the safe integer. // Next arithmetic operation: subtract 36 and divide by 2 comes from `v` calculation: - // v = 2*chainId + 36, that should be save integer as well. + // `v = 2*chainId + 36`, that should be save integer as well. 
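    // Editorial check (illustration, not part of the patch): MAX = 4503599627370477, so
    // `v = 2 * MAX + 36 = 2^53 - 2` still fits in JS's safe-integer range, while
    // `2 * (MAX + 1) + 36 = 2^53` would already exceed it.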
const MAX: u64 = ((1 << 53) - 1 - 36) / 2; pub fn max() -> Self { @@ -222,9 +222,10 @@ impl Default for PriorityOpId { #[cfg(test)] mod tests { - use super::*; use serde_json::from_str; + use super::*; + #[test] fn test_from_str_valid_decimal() { let input = "42"; diff --git a/core/lib/circuit_breaker/src/l1_txs.rs b/core/lib/circuit_breaker/src/l1_txs.rs index 5279106637e..5d3c4dc9ccf 100644 --- a/core/lib/circuit_breaker/src/l1_txs.rs +++ b/core/lib/circuit_breaker/src/l1_txs.rs @@ -1,6 +1,7 @@ -use crate::{CircuitBreaker, CircuitBreakerError}; use zksync_dal::ConnectionPool; +use crate::{CircuitBreaker, CircuitBreakerError}; + #[derive(Debug)] pub struct FailedL1TransactionChecker { pub pool: ConnectionPool, diff --git a/core/lib/circuit_breaker/src/lib.rs b/core/lib/circuit_breaker/src/lib.rs index 878114f0d04..4c84f857a29 100644 --- a/core/lib/circuit_breaker/src/lib.rs +++ b/core/lib/circuit_breaker/src/lib.rs @@ -4,7 +4,6 @@ use anyhow::Context as _; use futures::channel::oneshot; use thiserror::Error; use tokio::sync::watch; - use zksync_config::configs::chain::CircuitBreakerConfig; pub mod l1_txs; diff --git a/core/lib/commitment_utils/Cargo.toml b/core/lib/commitment_utils/Cargo.toml index 801f90ea0f2..bb286d6216a 100644 --- a/core/lib/commitment_utils/Cargo.toml +++ b/core/lib/commitment_utils/Cargo.toml @@ -13,3 +13,4 @@ categories = ["cryptography"] zksync_types = { path = "../../lib/types" } zksync_utils = { path = "../../lib/utils" } zkevm_test_harness = { git = "https://github.com/matter-labs/era-zkevm_test_harness.git", branch = "v1.4.0" } +multivm = { path = "../../lib/multivm" } diff --git a/core/lib/commitment_utils/src/lib.rs b/core/lib/commitment_utils/src/lib.rs index ac6dc8ff917..3c2db1471cc 100644 --- a/core/lib/commitment_utils/src/lib.rs +++ b/core/lib/commitment_utils/src/lib.rs @@ -1,23 +1,33 @@ //! Utils for commitment calculation. 
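//! Editorial sketch (not part of the patch): after the hunk below, callers pass a
//! `ProtocolVersionId` instead of a raw `is_pre_boojum` flag, e.g.
//! `events_queue_commitment(&queue, version)`, which yields `Some(H256)` for
//! boojum-era protocol versions and `None` for earlier ones.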
+use multivm::utils::get_used_bootloader_memory_bytes; use zkevm_test_harness::witness::utils::{ events_queue_commitment_fixed, initial_heap_content_commitment_fixed, }; -use zksync_types::{LogQuery, H256, U256, USED_BOOTLOADER_MEMORY_BYTES}; +use zksync_types::{LogQuery, ProtocolVersionId, H256, U256}; use zksync_utils::expand_memory_contents; -pub fn events_queue_commitment(events_queue: &Vec, is_pre_boojum: bool) -> Option { - (!is_pre_boojum).then(|| H256(events_queue_commitment_fixed(events_queue))) +pub fn events_queue_commitment( + events_queue: &Vec, + protocol_version: ProtocolVersionId, +) -> Option { + (!protocol_version.is_pre_boojum()).then(|| H256(events_queue_commitment_fixed(events_queue))) } pub fn bootloader_initial_content_commitment( initial_bootloader_contents: &[(usize, U256)], - is_pre_boojum: bool, + protocol_version: ProtocolVersionId, ) -> Option { - (!is_pre_boojum).then(|| { - let full_bootloader_memory = - expand_memory_contents(initial_bootloader_contents, USED_BOOTLOADER_MEMORY_BYTES); - H256(initial_heap_content_commitment_fixed( - &full_bootloader_memory, - )) - }) + let expanded_memory_size = if protocol_version.is_pre_boojum() { + return None; + } else { + get_used_bootloader_memory_bytes(protocol_version.into()) + }; + + let full_bootloader_memory = + expand_memory_contents(initial_bootloader_contents, expanded_memory_size); + let commitment = H256(initial_heap_content_commitment_fixed( + &full_bootloader_memory, + )); + + Some(commitment) } diff --git a/core/lib/config/src/configs/api.rs b/core/lib/config/src/configs/api.rs index 3b23abea43c..a06b9fa53df 100644 --- a/core/lib/config/src/configs/api.rs +++ b/core/lib/config/src/configs/api.rs @@ -1,9 +1,9 @@ -use serde::Deserialize; +use std::{net::SocketAddr, num::NonZeroU32, time::Duration}; -use std::{net::SocketAddr, time::Duration}; +use serde::Deserialize; +use zksync_basic_types::H256; pub use crate::configs::PrometheusConfig; -use zksync_basic_types::H256; /// API configuration. #[derive(Debug, Deserialize, Clone, PartialEq)] @@ -38,15 +38,11 @@ pub struct Web3JsonRpcConfig { pub subscriptions_limit: Option, /// Interval between polling db for pubsub (in ms). pub pubsub_polling_interval: Option, - /// number of threads per server - pub threads_per_server: u32, /// Tx nonce: how far ahead from the committed nonce can it be. pub max_nonce_ahead: u32, /// The multiplier to use when suggesting gas price. Should be higher than one, /// otherwise if the L1 prices soar, the suggested gas price won't be sufficient to be included in block pub gas_price_scale_factor: f64, - /// Inbound transaction limit used for throttling - pub transactions_per_sec_limit: Option, /// Timeout for requests (in s) pub request_timeout: Option, /// Private keys for accounts managed by node @@ -55,9 +51,17 @@ pub struct Web3JsonRpcConfig { pub estimate_gas_scale_factor: f64, /// The max possible number of gas that `eth_estimateGas` is allowed to overestimate. pub estimate_gas_acceptable_overestimation: u32, + /// Whether to use the compatibility mode for gas estimation for L1->L2 transactions. + /// During the migration to the 1.4.1 fee model, there will be a period, when the server + /// will already have the 1.4.1 fee model, while the L1 contracts will still expect the transactions + /// to use the previous fee model with much higher overhead. + /// + /// When set to `true`, the API will ensure to return gasLimit is high enough overhead for both the old + /// and the new fee model when estimating L1->L2 transactions. 
+ pub l1_to_l2_transactions_compatibility_mode: bool, /// Max possible size of an ABI encoded tx (in bytes). pub max_tx_size: usize, - /// Max number of cache misses during one VM execution. If the number of cache misses exceeds this value, the api server panics. + /// Max number of cache misses during one VM execution. If the number of cache misses exceeds this value, the API server panics. /// This is a temporary solution to mitigate API request resulting in thousands of DB queries. pub vm_execution_cache_misses_limit: Option, /// Max number of VM instances to be concurrently spawned by the API server. @@ -71,12 +75,6 @@ pub struct Web3JsonRpcConfig { /// Latest values cache size in MiBs. The default value is 128 MiB. If set to 0, the latest /// values cache will be disabled. pub latest_values_cache_size_mb: Option, - /// Override value for the amount of threads used for HTTP RPC server. - /// If not set, the value from `threads_per_server` is used. - pub http_threads: Option, - /// Override value for the amount of threads used for WebSocket RPC server. - /// If not set, the value from `threads_per_server` is used. - pub ws_threads: Option, /// Limit for fee history block range. pub fee_history_limit: Option, /// Maximum number of requests in a single batch JSON RPC request. Default is 500. @@ -86,7 +84,7 @@ pub struct Web3JsonRpcConfig { /// Maximum number of requests per minute for the WebSocket server. /// The value is per active connection. /// Note: For HTTP, rate limiting is expected to be configured on the infra level. - pub websocket_requests_per_minute_limit: Option, + pub websocket_requests_per_minute_limit: Option, /// Tree API url, currently used to proxy `getProof` calls to the tree pub tree_api_url: Option, } @@ -105,22 +103,19 @@ impl Web3JsonRpcConfig { filters_limit: Some(10000), subscriptions_limit: Some(10000), pubsub_polling_interval: Some(200), - threads_per_server: 1, max_nonce_ahead: 50, gas_price_scale_factor: 1.2, - transactions_per_sec_limit: Default::default(), request_timeout: Default::default(), account_pks: Default::default(), estimate_gas_scale_factor: 1.2, estimate_gas_acceptable_overestimation: 1000, + l1_to_l2_transactions_compatibility_mode: true, max_tx_size: 1000000, vm_execution_cache_misses_limit: Default::default(), vm_concurrency_limit: Default::default(), factory_deps_cache_size_mb: Default::default(), initial_writes_cache_size_mb: Default::default(), latest_values_cache_size_mb: Default::default(), - http_threads: Default::default(), - ws_threads: Default::default(), fee_history_limit: Default::default(), max_batch_request_size: Default::default(), max_response_body_size_mb: Default::default(), @@ -183,14 +178,6 @@ impl Web3JsonRpcConfig { self.latest_values_cache_size_mb.unwrap_or(128) * super::BYTES_IN_MEGABYTE } - pub fn http_server_threads(&self) -> usize { - self.http_threads.unwrap_or(self.threads_per_server) as usize - } - - pub fn ws_server_threads(&self) -> usize { - self.ws_threads.unwrap_or(self.threads_per_server) as usize - } - pub fn fee_history_limit(&self) -> u64 { self.fee_history_limit.unwrap_or(1024) } @@ -204,9 +191,10 @@ impl Web3JsonRpcConfig { self.max_response_body_size_mb.unwrap_or(10) * super::BYTES_IN_MEGABYTE } - pub fn websocket_requests_per_minute_limit(&self) -> u32 { + pub fn websocket_requests_per_minute_limit(&self) -> NonZeroU32 { // The default limit is chosen to be reasonably permissive. 
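 // Editorial note (not part of the patch): switching the limit to `NonZeroU32` makes a
 // configured value of 0 fail deserialization up front instead of reaching the rate limiter.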
- self.websocket_requests_per_minute_limit.unwrap_or(6000) + self.websocket_requests_per_minute_limit + .unwrap_or(NonZeroU32::new(6000).unwrap()) } pub fn tree_api_url(&self) -> Option { @@ -232,8 +220,6 @@ pub struct ContractVerificationApiConfig { pub port: u16, /// URL to access REST server. pub url: String, - /// number of threads per server - pub threads_per_server: u32, } impl ContractVerificationApiConfig { diff --git a/core/lib/config/src/configs/chain.rs b/core/lib/config/src/configs/chain.rs index 95392c8df83..d35c6ed52b5 100644 --- a/core/lib/config/src/configs/chain.rs +++ b/core/lib/config/src/configs/chain.rs @@ -1,25 +1,7 @@ -/// External uses -use serde::Deserialize; -use std::str::FromStr; -/// Built-in uses -use std::time::Duration; -// Local uses -use zksync_basic_types::network::Network; -use zksync_basic_types::{Address, L2ChainId}; +use std::{str::FromStr, time::Duration}; -#[derive(Debug, Deserialize, Clone, PartialEq)] -pub struct ChainConfig { - /// L1 parameters configuration. - pub network: NetworkConfig, - /// State keeper / block generating configuration. - pub state_keeper: StateKeeperConfig, - /// Operations manager / Metadata calculator. - pub operations_manager: OperationsManagerConfig, - /// mempool configuration - pub mempool: MempoolConfig, - /// circuit breaker configuration - pub circuit_breaker: CircuitBreakerConfig, -} +use serde::Deserialize; +use zksync_basic_types::{network::Network, Address, L2ChainId}; #[derive(Debug, Deserialize, Clone, PartialEq)] pub struct NetworkConfig { @@ -44,6 +26,25 @@ impl NetworkConfig { } } +/// An enum that represents the version of the fee model to use. +/// - `V1`, the first model that was used in zkSync Era. In this fee model, the pubdata price must be pegged to the L1 gas price. +/// Also, the fair L2 gas price is expected to only include the proving/computation price for the operator and not the costs that come from +/// processing the batch on L1. +/// - `V2`, the second model that was used in zkSync Era. There the pubdata price might be independent from the L1 gas price. Also, +/// The fair L2 gas price is expected to both the proving/computation price for the operator and the costs that come from +/// processing the batch on L1. +#[derive(Debug, Clone, Copy, Deserialize, PartialEq, Eq)] +pub enum FeeModelVersion { + V1, + V2, +} + +impl Default for FeeModelVersion { + fn default() -> Self { + Self::V1 + } +} + #[derive(Debug, Deserialize, Clone, PartialEq, Default)] pub struct StateKeeperConfig { /// The max number of slots for txs in a block before it should be sealed by the slots sealer. @@ -76,13 +77,31 @@ pub struct StateKeeperConfig { pub close_block_at_geometry_percentage: f64, /// Denotes the percentage of L1 params used in L2 block that triggers L2 block seal. pub close_block_at_eth_params_percentage: f64, - /// Denotes the percentage of L1 gas used in l2 block that triggers L2 block seal. + /// Denotes the percentage of L1 gas used in L2 block that triggers L2 block seal. pub close_block_at_gas_percentage: f64, pub fee_account_addr: Address, - /// The price the operator spends on 1 gas of computation in wei. - pub fair_l2_gas_price: u64, + /// The minimal acceptable L2 gas price, i.e. the price that should include the cost of computation/proving as well + /// as potentially premium for congestion. + pub minimal_l2_gas_price: u64, + /// The constant that represents the possibility that a batch can be sealed because of overuse of computation resources. + /// It has range from 0 to 1. 
If it is 0, the compute will not depend on the cost for closing the batch. + /// If it is 1, the gas limit per batch will have to cover the entire cost of closing the batch. + pub compute_overhead_part: f64, + /// The constant that represents the possibility that a batch can be sealed because of overuse of pubdata. + /// It has range from 0 to 1. If it is 0, the pubdata will not depend on the cost for closing the batch. + /// If it is 1, the pubdata limit per batch will have to cover the entire cost of closing the batch. + pub pubdata_overhead_part: f64, + /// The constant amount of L1 gas that is used as the overhead for the batch. It includes the price for batch verification, etc. + pub batch_overhead_l1_gas: u64, + /// The maximum amount of gas that can be used by the batch. This value is derived from the circuits limitation per batch. + pub max_gas_per_batch: u64, + /// The maximum amount of pubdata that can be used by the batch. Note that if the calldata is used as pubdata, this variable should not exceed 128kb. + pub max_pubdata_per_batch: u64, + + /// The version of the fee model to use. + pub fee_model_version: FeeModelVersion, /// Max number of computational gas that validation step is allowed to take. pub validation_computational_gas_limit: u32, @@ -118,7 +137,13 @@ impl StateKeeperConfig { close_block_at_gas_percentage: 0.95, fee_account_addr: Address::from_str("0xde03a0B5963f75f1C8485B355fF6D30f3093BDE7") .unwrap(), - fair_l2_gas_price: 250000000, + compute_overhead_part: 0.0, + pubdata_overhead_part: 1.0, + batch_overhead_l1_gas: 800_000, + max_gas_per_batch: 200_000_000, + max_pubdata_per_batch: 100_000, + minimal_l2_gas_price: 100000000, + fee_model_version: FeeModelVersion::V2, validation_computational_gas_limit: 300000, save_call_traces: true, virtual_blocks_interval: 1, diff --git a/core/lib/config/src/configs/circuit_synthesizer.rs b/core/lib/config/src/configs/circuit_synthesizer.rs deleted file mode 100644 index 8df0f4bbd1a..00000000000 --- a/core/lib/config/src/configs/circuit_synthesizer.rs +++ /dev/null @@ -1,42 +0,0 @@ -use std::time::Duration; - -use serde::Deserialize; - -/// Configuration for the witness generation -#[derive(Debug, Deserialize, Clone, PartialEq)] -pub struct CircuitSynthesizerConfig { - /// Max time for circuit to be synthesized - pub generation_timeout_in_secs: u16, - /// Max attempts for synthesizing circuit - pub max_attempts: u32, - /// Max time before an `reserved` prover instance in considered as `available` - pub gpu_prover_queue_timeout_in_secs: u16, - /// Max time to wait to get a free prover instance - pub prover_instance_wait_timeout_in_secs: u16, - // Time to wait between 2 consecutive poll to get new prover instance. - pub prover_instance_poll_time_in_milli_secs: u16, - /// Configurations for prometheus - pub prometheus_listener_port: u16, - pub prometheus_pushgateway_url: String, - pub prometheus_push_interval_ms: Option, - // Group id for this synthesizer, synthesizer running the same circuit types shall have same group id. 
- pub prover_group_id: u8, -} - -impl CircuitSynthesizerConfig { - pub fn generation_timeout(&self) -> Duration { - Duration::from_secs(self.generation_timeout_in_secs as u64) - } - - pub fn prover_instance_wait_timeout(&self) -> Duration { - Duration::from_secs(self.prover_instance_wait_timeout_in_secs as u64) - } - - pub fn gpu_prover_queue_timeout(&self) -> Duration { - Duration::from_secs(self.gpu_prover_queue_timeout_in_secs as u64) - } - - pub fn prover_instance_poll_time(&self) -> Duration { - Duration::from_millis(self.prover_instance_poll_time_in_milli_secs as u64) - } -} diff --git a/core/lib/config/src/configs/contract_verifier.rs b/core/lib/config/src/configs/contract_verifier.rs index 5c2a1608c8f..db3c8fa1b52 100644 --- a/core/lib/config/src/configs/contract_verifier.rs +++ b/core/lib/config/src/configs/contract_verifier.rs @@ -1,6 +1,5 @@ -// Built-in uses use std::time::Duration; -// External uses + use serde::Deserialize; #[derive(Debug, Deserialize, Clone, PartialEq)] diff --git a/core/lib/config/src/configs/database.rs b/core/lib/config/src/configs/database.rs index d257e661eb3..578cd6be46a 100644 --- a/core/lib/config/src/configs/database.rs +++ b/core/lib/config/src/configs/database.rs @@ -1,8 +1,8 @@ +use std::time::Duration; + use anyhow::Context as _; use serde::{Deserialize, Serialize}; -use std::time::Duration; - /// Mode of operation for the Merkle tree. /// /// The mode does not influence how tree data is stored; i.e., a mode can be switched on the fly. @@ -23,9 +23,6 @@ pub struct MerkleTreeConfig { /// Path to the RocksDB data directory for Merkle tree. #[serde(default = "MerkleTreeConfig::default_path")] pub path: String, - /// Path to merkle tree backup directory. - #[serde(default = "MerkleTreeConfig::default_backup_path")] - pub backup_path: String, /// Operation mode for the Merkle tree. If not specified, the full mode will be used. #[serde(default)] pub mode: MerkleTreeMode, @@ -53,7 +50,6 @@ impl Default for MerkleTreeConfig { fn default() -> Self { Self { path: Self::default_path(), - backup_path: Self::default_backup_path(), mode: MerkleTreeMode::default(), multi_get_chunk_size: Self::default_multi_get_chunk_size(), block_cache_size_mb: Self::default_block_cache_size_mb(), @@ -69,10 +65,6 @@ impl MerkleTreeConfig { "./db/lightweight-new".to_owned() // named this way for legacy reasons } - fn default_backup_path() -> String { - "./db/backups".to_owned() - } - const fn default_multi_get_chunk_size() -> usize { 500 } @@ -120,30 +112,12 @@ pub struct DBConfig { // ^ Filled in separately in `Self::from_env()`. We cannot use `serde(flatten)` because it // doesn't work with 'envy`. pub merkle_tree: MerkleTreeConfig, - /// Number of backups to keep. - #[serde(default = "DBConfig::default_backup_count")] - pub backup_count: usize, - /// Time interval between performing backups. - #[serde(default = "DBConfig::default_backup_interval_ms")] - pub backup_interval_ms: u64, } impl DBConfig { fn default_state_keeper_db_path() -> String { "./db/state_keeper".to_owned() } - - const fn default_backup_count() -> usize { - 5 - } - - const fn default_backup_interval_ms() -> u64 { - 60_000 - } - - pub fn backup_interval(&self) -> Duration { - Duration::from_millis(self.backup_interval_ms) - } } /// Collection of different database URLs and general PostgreSQL options. 
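The new `StateKeeperConfig` fields in the `chain.rs` hunk above replace the single `fair_l2_gas_price` knob with a small pricing model. A minimal sketch of how those parameters might combine into a fair L2 gas price follows; the formula is an assumption for illustration, not the server's actual implementation, and `pubdata_overhead_part` with `max_pubdata_per_batch` would enter an analogous pubdata-price term.

```rust
// Editorial sketch, not part of the patch. Assumes the `StateKeeperConfig` fields
// introduced in the `chain.rs` hunk above.
fn fair_l2_gas_price(l1_gas_price: u64, cfg: &StateKeeperConfig) -> u64 {
    // Amortized L1 cost (in wei) of sealing one batch.
    let batch_overhead_wei = l1_gas_price as u128 * cfg.batch_overhead_l1_gas as u128;
    // Share of that overhead recovered through the computation dimension, spread over
    // the maximum gas a single batch may use.
    let overhead_per_gas =
        cfg.compute_overhead_part * batch_overhead_wei as f64 / cfg.max_gas_per_batch as f64;
    cfg.minimal_l2_gas_price + overhead_per_gas as u64
}
```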
diff --git a/core/lib/config/src/configs/eth_sender.rs b/core/lib/config/src/configs/eth_sender.rs index 3d036483347..cd44daed17f 100644 --- a/core/lib/config/src/configs/eth_sender.rs +++ b/core/lib/config/src/configs/eth_sender.rs @@ -1,8 +1,6 @@ -// Built-in uses use std::time::Duration; -// External uses + use serde::Deserialize; -// Workspace uses use zksync_basic_types::H256; /// Configuration for the Ethereum sender crate. diff --git a/core/lib/config/src/configs/eth_watch.rs b/core/lib/config/src/configs/eth_watch.rs index 93d73ddf6bf..05afebf81c3 100644 --- a/core/lib/config/src/configs/eth_watch.rs +++ b/core/lib/config/src/configs/eth_watch.rs @@ -1,6 +1,5 @@ -// Built-in uses use std::time::Duration; -// External uses + use serde::Deserialize; /// Configuration for the Ethereum sender crate. diff --git a/core/lib/config/src/configs/fetcher.rs b/core/lib/config/src/configs/fetcher.rs deleted file mode 100644 index b1a5fca4b24..00000000000 --- a/core/lib/config/src/configs/fetcher.rs +++ /dev/null @@ -1,45 +0,0 @@ -use serde::Deserialize; -use std::time::Duration; - -#[derive(Debug, Deserialize, Clone, Copy, PartialEq)] -pub enum TokenListSource { - OneInch, - Mock, -} - -#[derive(Debug, Deserialize, Clone, Copy, PartialEq)] -pub enum TokenPriceSource { - CoinGecko, - CoinMarketCap, - Mock, -} - -#[derive(Debug, Deserialize, Clone, Copy, PartialEq)] -pub enum TokenTradingVolumeSource { - Uniswap, - Mock, -} - -#[derive(Debug, Deserialize, Clone, PartialEq)] -pub struct SingleFetcherConfig { - /// Indicator of the API to be used for getting information. - pub source: TYPE, - /// URL of the API to use for fetching data. Not used for `mock` source. - pub url: String, - // Interval for fetching API data in seconds. Basically, how ofter do we need to poll third-part APIs. - pub fetching_interval: u64, -} - -impl SingleFetcherConfig { - pub fn fetching_interval(&self) -> Duration { - Duration::from_secs(self.fetching_interval) - } -} - -/// Configuration for the third-party API data fetcher. 
-#[derive(Debug, Deserialize, Clone, PartialEq)] -pub struct FetcherConfig { - pub token_list: SingleFetcherConfig, - pub token_price: SingleFetcherConfig, - pub token_trading_volume: SingleFetcherConfig, -} diff --git a/core/lib/config/src/configs/fri_proof_compressor.rs b/core/lib/config/src/configs/fri_proof_compressor.rs index bbf58f2d1c6..4b4e062dee2 100644 --- a/core/lib/config/src/configs/fri_proof_compressor.rs +++ b/core/lib/config/src/configs/fri_proof_compressor.rs @@ -1,6 +1,7 @@ -use serde::Deserialize; use std::time::Duration; +use serde::Deserialize; + /// Configuration for the fri proof compressor #[derive(Debug, Deserialize, Clone, PartialEq)] pub struct FriProofCompressorConfig { diff --git a/core/lib/config/src/configs/fri_prover.rs b/core/lib/config/src/configs/fri_prover.rs index aab358a4ada..a5e99a40737 100644 --- a/core/lib/config/src/configs/fri_prover.rs +++ b/core/lib/config/src/configs/fri_prover.rs @@ -1,6 +1,7 @@ -use serde::Deserialize; use std::time::Duration; +use serde::Deserialize; + #[derive(Debug, Deserialize, Clone, PartialEq)] pub enum SetupLoadMode { FromDisk, @@ -21,6 +22,7 @@ pub struct FriProverConfig { pub witness_vector_generator_thread_count: Option, pub queue_capacity: usize, pub witness_vector_receiver_port: u16, + pub zone_read_url: String, // whether to write to public GCS bucket for https://github.com/matter-labs/era-boojum-validator-cli pub shall_save_to_public_bucket: bool, diff --git a/core/lib/config/src/configs/fri_prover_gateway.rs b/core/lib/config/src/configs/fri_prover_gateway.rs index 652c7d1bc0f..86723ff3043 100644 --- a/core/lib/config/src/configs/fri_prover_gateway.rs +++ b/core/lib/config/src/configs/fri_prover_gateway.rs @@ -1,6 +1,7 @@ -use serde::Deserialize; use std::time::Duration; +use serde::Deserialize; + #[derive(Debug, Deserialize, Clone, PartialEq)] pub struct FriProverGatewayConfig { pub api_url: String, diff --git a/core/lib/config/src/configs/fri_prover_group.rs b/core/lib/config/src/configs/fri_prover_group.rs index 71ed5d1f7d9..856ff59809f 100644 --- a/core/lib/config/src/configs/fri_prover_group.rs +++ b/core/lib/config/src/configs/fri_prover_group.rs @@ -1,6 +1,6 @@ -use serde::Deserialize; use std::collections::HashSet; +use serde::Deserialize; use zksync_basic_types::basic_fri_types::CircuitIdRoundTuple; /// Configuration for the grouping of specialized provers. 
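For reference, the `SingleFetcherConfig<TYPE>` in the deleted `configs/fetcher.rs` above was generic over the source enum, so one struct served token lists, prices, and trading volumes alike. A small usage sketch under that now-removed API, assuming its types were still in scope (the URL and interval are hypothetical values):

```rust
use std::time::Duration;

// Editorial sketch based on the deleted `fetcher.rs` above; not part of the patch.
let token_price = SingleFetcherConfig::<TokenPriceSource> {
    source: TokenPriceSource::CoinGecko,
    url: "https://api.coingecko.com".to_string(), // unused for the `Mock` source
    fetching_interval: 30,
};
assert_eq!(token_price.fetching_interval(), Duration::from_secs(30));
```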
diff --git a/core/lib/config/src/configs/mod.rs b/core/lib/config/src/configs/mod.rs index 0c2ecc46103..fc0e7eb6d4d 100644 --- a/core/lib/config/src/configs/mod.rs +++ b/core/lib/config/src/configs/mod.rs @@ -1,29 +1,34 @@ // Public re-exports pub use self::{ - alerts::AlertsConfig, api::ApiConfig, chain::ChainConfig, - circuit_synthesizer::CircuitSynthesizerConfig, contract_verifier::ContractVerifierConfig, - contracts::ContractsConfig, database::DBConfig, database::PostgresConfig, - eth_client::ETHClientConfig, eth_sender::ETHSenderConfig, eth_sender::GasAdjusterConfig, - eth_watch::ETHWatchConfig, fetcher::FetcherConfig, - fri_proof_compressor::FriProofCompressorConfig, fri_prover::FriProverConfig, - fri_prover_gateway::FriProverGatewayConfig, fri_witness_generator::FriWitnessGeneratorConfig, - fri_witness_vector_generator::FriWitnessVectorGeneratorConfig, object_store::ObjectStoreConfig, - proof_data_handler::ProofDataHandlerConfig, prover::ProverConfig, prover::ProverConfigs, - prover_group::ProverGroupConfig, utils::PrometheusConfig, + alerts::AlertsConfig, + api::ApiConfig, + contract_verifier::ContractVerifierConfig, + contracts::ContractsConfig, + database::{DBConfig, PostgresConfig}, + eth_client::ETHClientConfig, + eth_sender::{ETHSenderConfig, GasAdjusterConfig}, + eth_watch::ETHWatchConfig, + fri_proof_compressor::FriProofCompressorConfig, + fri_prover::FriProverConfig, + fri_prover_gateway::FriProverGatewayConfig, + fri_witness_generator::FriWitnessGeneratorConfig, + fri_witness_vector_generator::FriWitnessVectorGeneratorConfig, + object_store::ObjectStoreConfig, + proof_data_handler::ProofDataHandlerConfig, + snapshots_creator::SnapshotsCreatorConfig, + utils::PrometheusConfig, witness_generator::WitnessGeneratorConfig, }; pub mod alerts; pub mod api; pub mod chain; -pub mod circuit_synthesizer; pub mod contract_verifier; pub mod contracts; pub mod database; pub mod eth_client; pub mod eth_sender; pub mod eth_watch; -pub mod fetcher; pub mod fri_proof_compressor; pub mod fri_prover; pub mod fri_prover_gateway; @@ -33,8 +38,7 @@ pub mod fri_witness_vector_generator; pub mod house_keeper; pub mod object_store; pub mod proof_data_handler; -pub mod prover; -pub mod prover_group; +pub mod snapshots_creator; pub mod utils; pub mod witness_generator; diff --git a/core/lib/config/src/configs/proof_data_handler.rs b/core/lib/config/src/configs/proof_data_handler.rs index e3efd6b7a4d..b773efbd7df 100644 --- a/core/lib/config/src/configs/proof_data_handler.rs +++ b/core/lib/config/src/configs/proof_data_handler.rs @@ -1,6 +1,7 @@ -use serde::Deserialize; use std::time::Duration; +use serde::Deserialize; + #[derive(Debug, Deserialize, Clone, Copy, PartialEq)] pub enum ProtocolVersionLoadingMode { FromDb, diff --git a/core/lib/config/src/configs/prover.rs b/core/lib/config/src/configs/prover.rs deleted file mode 100644 index 45ed7100f9f..00000000000 --- a/core/lib/config/src/configs/prover.rs +++ /dev/null @@ -1,61 +0,0 @@ -use std::time::Duration; - -use serde::Deserialize; - -/// Configuration for the prover application -#[derive(Debug, Deserialize, Clone, PartialEq)] -pub struct ProverConfig { - /// Port to which the Prometheus exporter server is listening. - pub prometheus_port: u16, - /// Currently only a single (largest) key is supported. 
We'll support different ones in the future - pub initial_setup_key_path: String, - /// https://storage.googleapis.com/matterlabs-setup-keys-us/setup-keys/setup_2\^26.key - pub key_download_url: String, - /// Max time for proof to be generated - pub generation_timeout_in_secs: u16, - /// Number of threads to be used concurrent proof generation. - pub number_of_threads: u16, - /// Max attempts for generating proof - pub max_attempts: u32, - // Polling time in mill-seconds. - pub polling_duration_in_millis: u64, - // Path to setup keys for individual circuit. - pub setup_keys_path: String, - // Group id for this prover, provers running the same circuit types shall have same group id. - pub specialized_prover_group_id: u8, - // Number of setup-keys kept in memory without swapping - // number_of_setup_slots = (R-C*A-4)/S - // R is available ram - // C is the number of parallel synth - // A is the size of Assembly that is 12gb - // S is the size of the Setup that is 20gb - // constant 4 is for the data copy with gpu - pub number_of_setup_slots: u8, - /// Port at which server would be listening to receive incoming assembly - pub assembly_receiver_port: u16, - /// Socket polling time for receiving incoming assembly - pub assembly_receiver_poll_time_in_millis: u64, - /// maximum number of assemblies that are kept in memory, - pub assembly_queue_capacity: usize, -} - -/// Prover configs for different machine types that are currently supported. -#[derive(Debug, Deserialize, Clone, PartialEq)] -pub struct ProverConfigs { - // used by witness-generator - pub non_gpu: ProverConfig, - // https://gcloud-compute.com/a2-highgpu-2g.html - pub two_gpu_forty_gb_mem: ProverConfig, - // https://gcloud-compute.com/a2-ultragpu-1g.html - pub one_gpu_eighty_gb_mem: ProverConfig, - // https://gcloud-compute.com/a2-ultragpu-2g.html - pub two_gpu_eighty_gb_mem: ProverConfig, - // https://gcloud-compute.com/a2-ultragpu-4g.html - pub four_gpu_eighty_gb_mem: ProverConfig, -} - -impl ProverConfig { - pub fn proof_generation_timeout(&self) -> Duration { - Duration::from_secs(self.generation_timeout_in_secs as u64) - } -} diff --git a/core/lib/config/src/configs/prover_group.rs b/core/lib/config/src/configs/prover_group.rs deleted file mode 100644 index 2d40d47ba8c..00000000000 --- a/core/lib/config/src/configs/prover_group.rs +++ /dev/null @@ -1,66 +0,0 @@ -use serde::Deserialize; - -/// Configuration for the grouping of specialized provers. -/// This config would be used by circuit-synthesizer and provers. -#[derive(Debug, Deserialize, Clone, PartialEq)] -pub struct ProverGroupConfig { - pub group_0_circuit_ids: Vec, - pub group_1_circuit_ids: Vec, - pub group_2_circuit_ids: Vec, - pub group_3_circuit_ids: Vec, - pub group_4_circuit_ids: Vec, - pub group_5_circuit_ids: Vec, - pub group_6_circuit_ids: Vec, - pub group_7_circuit_ids: Vec, - pub group_8_circuit_ids: Vec, - pub group_9_circuit_ids: Vec, - pub region_read_url: String, - // This is used while running the provers/synthesizer in non-gcp cloud env. - pub region_override: Option, - pub zone_read_url: String, - // This is used while running the provers/synthesizer in non-gcp cloud env. 
- pub zone_override: Option, - pub synthesizer_per_gpu: u16, -} - -impl ProverGroupConfig { - pub fn get_circuit_ids_for_group_id(&self, group_id: u8) -> Option> { - match group_id { - 0 => Some(self.group_0_circuit_ids.clone()), - 1 => Some(self.group_1_circuit_ids.clone()), - 2 => Some(self.group_2_circuit_ids.clone()), - 3 => Some(self.group_3_circuit_ids.clone()), - 4 => Some(self.group_4_circuit_ids.clone()), - 5 => Some(self.group_5_circuit_ids.clone()), - 6 => Some(self.group_6_circuit_ids.clone()), - 7 => Some(self.group_7_circuit_ids.clone()), - 8 => Some(self.group_8_circuit_ids.clone()), - 9 => Some(self.group_9_circuit_ids.clone()), - _ => None, - } - } - - pub fn is_specialized_group_id(&self, group_id: u8) -> bool { - group_id <= 9 - } - - pub fn get_group_id_for_circuit_id(&self, circuit_id: u8) -> Option { - let configs = [ - &self.group_0_circuit_ids, - &self.group_1_circuit_ids, - &self.group_2_circuit_ids, - &self.group_3_circuit_ids, - &self.group_4_circuit_ids, - &self.group_5_circuit_ids, - &self.group_6_circuit_ids, - &self.group_7_circuit_ids, - &self.group_8_circuit_ids, - &self.group_9_circuit_ids, - ]; - configs - .iter() - .enumerate() - .find(|(_, group)| group.contains(&circuit_id)) - .map(|(group_id, _)| group_id as u8) - } -} diff --git a/core/lib/config/src/configs/snapshots_creator.rs b/core/lib/config/src/configs/snapshots_creator.rs new file mode 100644 index 00000000000..2f37c5d3afd --- /dev/null +++ b/core/lib/config/src/configs/snapshots_creator.rs @@ -0,0 +1,18 @@ +use serde::Deserialize; + +#[derive(Debug, Clone, PartialEq, Deserialize)] +pub struct SnapshotsCreatorConfig { + #[serde(default = "snapshots_creator_storage_logs_chunk_size_default")] + pub storage_logs_chunk_size: u64, + + #[serde(default = "snapshots_creator_concurrent_queries_count")] + pub concurrent_queries_count: u32, +} + +fn snapshots_creator_storage_logs_chunk_size_default() -> u64 { + 1_000_000 +} + +fn snapshots_creator_concurrent_queries_count() -> u32 { + 25 +} diff --git a/core/lib/config/src/configs/utils.rs b/core/lib/config/src/configs/utils.rs index bfa9e7e7f3e..977a48e82d2 100644 --- a/core/lib/config/src/configs/utils.rs +++ b/core/lib/config/src/configs/utils.rs @@ -1,7 +1,7 @@ -use serde::Deserialize; - use std::{env, time::Duration}; +use serde::Deserialize; + #[derive(Debug, Deserialize, Clone, PartialEq)] pub struct PrometheusConfig { /// Port to which the Prometheus exporter server is listening. 
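The new `SnapshotsCreatorConfig` above leans on `serde`'s per-field defaults. A quick illustration (not part of the patch) of the fallback behavior when both fields are omitted:

```rust
// Both fields are missing from the input, so the `serde(default = "...")`
// functions named on the struct supply the values.
let cfg: SnapshotsCreatorConfig = serde_json::from_str("{}").unwrap();
assert_eq!(cfg.storage_logs_chunk_size, 1_000_000);
assert_eq!(cfg.concurrent_queries_count, 25);
```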
diff --git a/core/lib/config/src/lib.rs b/core/lib/config/src/lib.rs index aa83577aad8..d139596b80b 100644 --- a/core/lib/config/src/lib.rs +++ b/core/lib/config/src/lib.rs @@ -1,9 +1,8 @@ #![allow(clippy::upper_case_acronyms, clippy::derive_partial_eq_without_eq)] pub use crate::configs::{ - ApiConfig, ChainConfig, ContractVerifierConfig, ContractsConfig, DBConfig, ETHClientConfig, - ETHSenderConfig, ETHWatchConfig, FetcherConfig, GasAdjusterConfig, ObjectStoreConfig, - PostgresConfig, ProverConfig, ProverConfigs, + ApiConfig, ContractVerifierConfig, ContractsConfig, DBConfig, ETHClientConfig, ETHSenderConfig, + ETHWatchConfig, GasAdjusterConfig, ObjectStoreConfig, PostgresConfig, SnapshotsCreatorConfig, }; pub mod configs; diff --git a/core/lib/constants/Cargo.toml b/core/lib/constants/Cargo.toml index e7e12206da2..0d3d09f83a7 100644 --- a/core/lib/constants/Cargo.toml +++ b/core/lib/constants/Cargo.toml @@ -20,5 +20,5 @@ num = "0.3.1" serde = { version = "1.0", features = ["derive"] } serde_json = "1.0" once_cell = "1.13.0" -bigdecimal = "0.2.2" +bigdecimal = "0.3.0" hex = "0.4" diff --git a/core/lib/constants/src/blocks.rs b/core/lib/constants/src/blocks.rs index 7579b408f0c..5f7f83c2de2 100644 --- a/core/lib/constants/src/blocks.rs +++ b/core/lib/constants/src/blocks.rs @@ -1,7 +1,7 @@ use zksync_basic_types::H256; -// By design we don't have a term: uncle blocks. Hence we have to use rlp hash -// from empty list for ethereum compatibility. +// By design we don't have a term: uncle blocks. Hence we have to use RLP hash +// from empty list for Ethereum compatibility. pub const EMPTY_UNCLES_HASH: H256 = H256([ 0x1d, 0xcc, 0x4d, 0xe8, 0xde, 0xc7, 0x5d, 0x7a, 0xab, 0x85, 0xb5, 0x67, 0xb6, 0xcc, 0xd4, 0x1a, 0xd3, 0x12, 0x45, 0x1b, 0x94, 0x8a, 0x74, 0x13, 0xf0, 0xa1, 0x42, 0xfd, 0x40, 0xd4, 0x93, 0x47, diff --git a/core/lib/constants/src/contracts.rs b/core/lib/constants/src/contracts.rs index d74c0e0319a..9d167f7346a 100644 --- a/core/lib/constants/src/contracts.rs +++ b/core/lib/constants/src/contracts.rs @@ -95,6 +95,16 @@ pub const SHA256_PRECOMPILE_ADDRESS: Address = H160([ 0x00, 0x00, 0x00, 0x02, ]); +pub const EC_ADD_PRECOMPILE_ADDRESS: Address = H160([ + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x06, +]); + +pub const EC_MUL_PRECOMPILE_ADDRESS: Address = H160([ + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x07, +]); + pub const ERC20_TRANSFER_TOPIC: H256 = H256([ 221, 242, 82, 173, 27, 226, 200, 155, 105, 194, 176, 104, 252, 55, 141, 170, 149, 43, 167, 241, 99, 196, 161, 22, 40, 245, 90, 77, 245, 35, 179, 239, @@ -103,5 +113,5 @@ pub const ERC20_TRANSFER_TOPIC: H256 = H256([ // TODO (SMA-240): Research whether using zero address is ok pub const MINT_AND_BURN_ADDRESS: H160 = H160::zero(); -// The storage_log.value database value for a contract that was deployed in a failed transaction. +// The `storage_log.value` database value for a contract that was deployed in a failed transaction. 
pub const FAILED_CONTRACT_DEPLOYMENT_BYTECODE_HASH: H256 = H256::zero(); diff --git a/core/lib/constants/src/crypto.rs b/core/lib/constants/src/crypto.rs index e9ed44ff308..8b97af9d237 100644 --- a/core/lib/constants/src/crypto.rs +++ b/core/lib/constants/src/crypto.rs @@ -19,14 +19,10 @@ pub const MAX_BYTES_PER_PACKED_SLOT: u64 = 65; pub static GAS_PER_SLOT: Lazy = Lazy::new(|| BigUint::from(MAX_BYTES_PER_PACKED_SLOT) * BigUint::from(GAS_PER_PUBDATA_BYTE)); -pub const MAX_TXS_IN_BLOCK: usize = 1024; - pub const MAX_NEW_FACTORY_DEPS: usize = 32; pub const PAD_MSG_BEFORE_HASH_BITS_LEN: usize = 736; -/// The size of the bootloader memory in bytes which is used by the protocol. -/// While the maximal possible size is a lot higher, we restric ourselves to a certain limit to reduce -/// the requirements on RAM. -pub const USED_BOOTLOADER_MEMORY_BYTES: usize = 1 << 24; -pub const USED_BOOTLOADER_MEMORY_WORDS: usize = USED_BOOTLOADER_MEMORY_BYTES / 32; +/// To avoid DDoS we limit the size of the transactions size. +/// TODO(X): remove this as a constant and introduce a config. +pub const MAX_ENCODED_TX_SIZE: usize = 1 << 24; diff --git a/core/lib/constants/src/ethereum.rs b/core/lib/constants/src/ethereum.rs index 13cdd32d5c1..d9a137c7c22 100644 --- a/core/lib/constants/src/ethereum.rs +++ b/core/lib/constants/src/ethereum.rs @@ -5,23 +5,18 @@ pub const PRIORITY_EXPIRATION: u64 = 50000; pub const MAX_L1_TRANSACTION_GAS_LIMIT: u64 = 300000; pub static ETHEREUM_ADDRESS: Address = Address::zero(); -/// This the number of pubdata such that it should be always possible to publish -/// from a single transaction. Note, that these pubdata bytes include only bytes that are -/// to be published inside the body of transaction (i.e. excluding of factory deps). -pub const GUARANTEED_PUBDATA_PER_L1_BATCH: u64 = 4000; - /// The maximum number of pubdata per L1 batch. This limit is due to the fact that the Ethereum /// nodes do not accept transactions that have more than 128kb of pubdata. -/// The 18kb margin is left in case of any inpreciseness of the pubdata calculation. +/// The 18kb margin is left in case of any impreciseness of the pubdata calculation. pub const MAX_PUBDATA_PER_L1_BATCH: u64 = 110000; -// TODO: import from zkevm_opcode_defs once VM1.3 is supported +// TODO: import from `zkevm_opcode_defs` once `VM1.3` is supported pub const MAX_L2_TX_GAS_LIMIT: u64 = 80000000; -// The users should always be able to provide `MAX_GAS_PER_PUBDATA_BYTE` gas per pubdata in their -// transactions so that they are able to send at least GUARANTEED_PUBDATA_PER_L1_BATCH bytes per -// transaction. -pub const MAX_GAS_PER_PUBDATA_BYTE: u64 = MAX_L2_TX_GAS_LIMIT / GUARANTEED_PUBDATA_PER_L1_BATCH; - // The L1->L2 are required to have the following gas per pubdata byte. pub const REQUIRED_L1_TO_L2_GAS_PER_PUBDATA_BYTE: u64 = 800; + +// The default gas per pubdata byte for L2 transactions, that is used, for instance, when we need to +// insert some default value for type 2 transactions. +// It is a realistic value, but it is large enough to fill into any batch regardless of the pubdata price. 
+pub const DEFAULT_L2_TX_GAS_PER_PUBDATA_BYTE: u64 = 50_000; diff --git a/core/lib/constants/src/system_context.rs b/core/lib/constants/src/system_context.rs index c142d6e73ec..6a90469fb1f 100644 --- a/core/lib/constants/src/system_context.rs +++ b/core/lib/constants/src/system_context.rs @@ -35,7 +35,7 @@ pub const SYSTEM_CONTEXT_DIFFICULTY_POSITION: H256 = H256([ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x05, ]); -// 2500000000000000. THe number is chosen for compatibility with other L2s. +// 2500000000000000. The number is chosen for compatibility with other L2s. pub const SYSTEM_CONTEXT_DIFFICULTY: H256 = H256([ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0xE1, 0xBC, 0x9B, 0xF0, 0x40, 0x00, @@ -48,7 +48,7 @@ pub const SYSTEM_CONTEXT_BASE_FEE_POSITION: H256 = H256([ // Tenth of a gwei in wei. 1 gwei is 10^9 wei, so 0.1 gwei is 10^8 wei. const TENTH_OF_GWEI: u64 = 10u64.pow(8); -// The base fee in wei. u64 as u32 would limit this price to be ~4.3 gwei. +// The base fee in wei. Using u32 instead of u64 would limit this price to approximately 4.3 gwei. pub const SYSTEM_CONTEXT_MINIMAL_BASE_FEE: u64 = TENTH_OF_GWEI; pub const SYSTEM_CONTEXT_BLOCK_INFO_POSITION: H256 = H256([ @@ -78,18 +78,12 @@ pub const SYSTEM_CONTEXT_CURRENT_L2_BLOCK_HASHES_POSITION: H256 = H256([ pub const SYSTEM_CONTEXT_STORED_L2_BLOCK_HASHES: u32 = 257; -// It is equal to SYSTEM_CONTEXT_CURRENT_L2_BLOCK_HASHES_POSITION + SYSTEM_CONTEXT_STORED_L2_BLOCK_HASHES +// It is equal to `SYSTEM_CONTEXT_CURRENT_L2_BLOCK_HASHES_POSITION + SYSTEM_CONTEXT_STORED_L2_BLOCK_HASHES` pub const CURRENT_VIRTUAL_BLOCK_INFO_POSITION: H256 = H256([ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x0c, ]); -// It is equal to CURRENT_VIRTUAL_BLOCK_INFO_POSITION + 1 -pub const VIRTUIAL_BLOCK_UPGRADE_INFO_POSITION: H256 = H256([ - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x0d, -]); - -/// Block info is stored compactly as SYSTEM_BLOCK_INFO_BLOCK_NUMBER_MULTIPLIER * block_number + block_timestamp. -/// This number is equal to 2**128 +/// Block info is stored compactly as `SYSTEM_BLOCK_INFO_BLOCK_NUMBER_MULTIPLIER * block_number + block_timestamp`. +/// The multiplier is equal to `2**128`. pub const SYSTEM_BLOCK_INFO_BLOCK_NUMBER_MULTIPLIER: U256 = U256([0, 0, 1, 0]); diff --git a/core/lib/contracts/src/lib.rs b/core/lib/contracts/src/lib.rs index 01dce6a98f9..3ed725baa7d 100644 --- a/core/lib/contracts/src/lib.rs +++ b/core/lib/contracts/src/lib.rs @@ -3,17 +3,18 @@ //! Careful: some of the methods are reading the contracts based on the ZKSYNC_HOME environment variable.
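[Editor's note] To make the compact block-info encoding from the `system_context.rs` hunk above concrete, here is a minimal illustrative sketch, not part of the patch. It assumes the `primitive-types` `U256` that `zksync_basic_types` re-exports; the helper names `pack_block_info`/`unpack_block_info` are used for illustration only.

```rust
use primitive_types::U256;

// 2^128, written as little-endian u64 limbs; this matches
// SYSTEM_BLOCK_INFO_BLOCK_NUMBER_MULTIPLIER in the hunk above.
const BLOCK_NUMBER_MULTIPLIER: U256 = U256([0, 0, 1, 0]);

/// Packs a (block number, block timestamp) pair into one 256-bit word:
/// `multiplier * block_number + block_timestamp`.
fn pack_block_info(block_number: u64, block_timestamp: u64) -> U256 {
    U256::from(block_number) * BLOCK_NUMBER_MULTIPLIER + U256::from(block_timestamp)
}

/// Inverts the packing. Assumes the timestamp occupies the low 128 bits
/// and fits into a u64 (true for Unix timestamps).
fn unpack_block_info(packed: U256) -> (u64, u64) {
    let block_number = (packed / BLOCK_NUMBER_MULTIPLIER).as_u64();
    let block_timestamp = (packed % BLOCK_NUMBER_MULTIPLIER).as_u64();
    (block_number, block_timestamp)
}
```

For example, `unpack_block_info(pack_block_info(100, 1_700_000_000))` round-trips to `(100, 1_700_000_000)`.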
#![allow(clippy::derive_partial_eq_without_eq)] + +use std::{ + fs::{self, File}, + path::{Path, PathBuf}, +}; + use ethabi::{ ethereum_types::{H256, U256}, Contract, Function, }; use once_cell::sync::Lazy; use serde::{Deserialize, Serialize}; -use std::{ - fs::{self, File}, - path::{Path, PathBuf}, -}; - use zksync_utils::{bytecode::hash_bytecode, bytes_to_be_words}; pub mod test_contracts; @@ -25,19 +26,19 @@ pub enum ContractLanguage { } const GOVERNANCE_CONTRACT_FILE: &str = - "contracts/ethereum/artifacts/cache/solpp-generated-contracts/governance/IGovernance.sol/IGovernance.json"; + "contracts/l1-contracts/artifacts/cache/solpp-generated-contracts/governance/IGovernance.sol/IGovernance.json"; const ZKSYNC_CONTRACT_FILE: &str = - "contracts/ethereum/artifacts/cache/solpp-generated-contracts/zksync/interfaces/IZkSync.sol/IZkSync.json"; + "contracts/l1-contracts/artifacts/cache/solpp-generated-contracts/zksync/interfaces/IZkSync.sol/IZkSync.json"; const MULTICALL3_CONTRACT_FILE: &str = - "contracts/ethereum/artifacts/cache/solpp-generated-contracts/dev-contracts/Multicall3.sol/Multicall3.json"; + "contracts/l1-contracts/artifacts/cache/solpp-generated-contracts/dev-contracts/Multicall3.sol/Multicall3.json"; const VERIFIER_CONTRACT_FILE: &str = - "contracts/ethereum/artifacts/cache/solpp-generated-contracts/zksync/Verifier.sol/Verifier.json"; + "contracts/l1-contracts/artifacts/cache/solpp-generated-contracts/zksync/Verifier.sol/Verifier.json"; const IERC20_CONTRACT_FILE: &str = - "contracts/ethereum/artifacts/cache/solpp-generated-contracts/common/interfaces/IERC20.sol/IERC20.json"; + "contracts/l1-contracts/artifacts/cache/solpp-generated-contracts/common/interfaces/IERC20.sol/IERC20.json"; const FAIL_ON_RECEIVE_CONTRACT_FILE: &str = - "contracts/ethereum/artifacts/cache/solpp-generated-contracts/zksync/dev-contracts/FailOnReceive.sol/FailOnReceive.json"; + "contracts/l1-contracts/artifacts/cache/solpp-generated-contracts/zksync/dev-contracts/FailOnReceive.sol/FailOnReceive.json"; const L2_BRIDGE_CONTRACT_FILE: &str = - "contracts/zksync/artifacts-zk/cache-zk/solpp-generated-contracts/bridge/interfaces/IL2Bridge.sol/IL2Bridge.json"; + "contracts/l2-contracts/artifacts-zk/contracts-preprocessed/bridge/interfaces/IL2Bridge.sol/IL2Bridge.json"; const LOADNEXT_CONTRACT_FILE: &str = "etc/contracts-test-data/artifacts-zk/contracts/loadnext/loadnext_contract.sol/LoadnextContract.json"; const LOADNEXT_SIMPLE_CONTRACT_FILE: &str = @@ -69,7 +70,7 @@ pub fn load_contract + std::fmt::Debug>(path: P) -> Contract { pub fn load_sys_contract(contract_name: &str) -> Contract { load_contract(format!( - "etc/system-contracts/artifacts-zk/cache-zk/solpp-generated-contracts/{0}.sol/{0}.json", + "contracts/system-contracts/artifacts-zk/contracts-preprocessed/{0}.sol/{0}.json", contract_name )) } @@ -146,6 +147,10 @@ pub fn deployer_contract() -> Contract { load_sys_contract("ContractDeployer") } +pub fn l1_messenger_contract() -> Contract { + load_sys_contract("L1Messenger") +} + pub fn eth_contract() -> Contract { load_sys_contract("L2EthToken") } @@ -189,17 +194,17 @@ pub static DEFAULT_SYSTEM_CONTRACTS_REPO: Lazy = /// fetching contracts that are located there. /// As most of the static methods in this file, is loading data based on ZKSYNC_HOME environment variable. pub struct SystemContractsRepo { - // Path to the root of the system contracts repo. + // Path to the root of the system contracts repository. 
pub root: PathBuf, } impl SystemContractsRepo { - /// Returns the default system contracts repo with directory based on the ZKSYNC_HOME environment variable. + /// Returns the default system contracts repository with directory based on the ZKSYNC_HOME environment variable. pub fn from_env() -> Self { let zksync_home = std::env::var("ZKSYNC_HOME").unwrap_or_else(|_| ".".into()); let zksync_home = PathBuf::from(zksync_home); SystemContractsRepo { - root: zksync_home.join("etc/system-contracts"), + root: zksync_home.join("contracts/system-contracts"), } } pub fn read_sys_contract_bytecode( @@ -210,11 +215,11 @@ impl SystemContractsRepo { ) -> Vec { match lang { ContractLanguage::Sol => read_bytecode_from_path(self.root.join(format!( - "artifacts-zk/cache-zk/solpp-generated-contracts/{0}{1}.sol/{1}.json", + "artifacts-zk/contracts-preprocessed/{0}{1}.sol/{1}.json", directory, name ))), ContractLanguage::Yul => read_zbin_bytecode_from_path(self.root.join(format!( - "contracts/{0}artifacts/{1}.yul/{1}.yul.zbin", + "contracts-preprocessed/{0}artifacts/{1}.yul.zbin", directory, name ))), } @@ -223,8 +228,8 @@ impl SystemContractsRepo { pub fn read_bootloader_code(bootloader_type: &str) -> Vec { read_zbin_bytecode(format!( - "etc/system-contracts/bootloader/build/artifacts/{}.yul/{}.yul.zbin", - bootloader_type, bootloader_type + "contracts/system-contracts/bootloader/build/artifacts/{}.yul.zbin", + bootloader_type )) } @@ -336,7 +341,7 @@ impl BaseSystemContracts { BaseSystemContracts::load_with_bootloader(bootloader_bytecode) } - /// BaseSystemContracts with playground bootloader - used for handling 'eth_calls'. + /// BaseSystemContracts with playground bootloader - used for handling eth_calls. pub fn playground() -> Self { let bootloader_bytecode = read_playground_batch_bootloader_bytecode(); BaseSystemContracts::load_with_bootloader(bootloader_bytecode) @@ -364,7 +369,19 @@ impl BaseSystemContracts { BaseSystemContracts::load_with_bootloader(bootloader_bytecode) } - /// BaseSystemContracts with playground bootloader - used for handling 'eth_calls'. + pub fn playground_post_allowlist_removal() -> Self { + let bootloader_bytecode = read_zbin_bytecode("etc/multivm_bootloaders/vm_remove_allowlist/playground_batch.yul/playground_batch.yul.zbin"); + BaseSystemContracts::load_with_bootloader(bootloader_bytecode) + } + + pub fn playground_post_1_4_1() -> Self { + let bootloader_bytecode = read_zbin_bytecode( + "etc/multivm_bootloaders/vm_1_4_1/playground_batch.yul/playground_batch.yul.zbin", + ); + BaseSystemContracts::load_with_bootloader(bootloader_bytecode) + } + + /// BaseSystemContracts with playground bootloader - used for handling eth_calls. 
pub fn estimate_gas() -> Self { let bootloader_bytecode = read_bootloader_code("fee_estimate"); BaseSystemContracts::load_with_bootloader(bootloader_bytecode) @@ -398,6 +415,20 @@ impl BaseSystemContracts { BaseSystemContracts::load_with_bootloader(bootloader_bytecode) } + pub fn estimate_gas_post_allowlist_removal() -> Self { + let bootloader_bytecode = read_zbin_bytecode( + "etc/multivm_bootloaders/vm_remove_allowlist/fee_estimate.yul/fee_estimate.yul.zbin", + ); + BaseSystemContracts::load_with_bootloader(bootloader_bytecode) + } + + pub fn estimate_gas_post_1_4_1() -> Self { + let bootloader_bytecode = read_zbin_bytecode( + "etc/multivm_bootloaders/vm_1_4_1/fee_estimate.yul/fee_estimate.yul.zbin", + ); + BaseSystemContracts::load_with_bootloader(bootloader_bytecode) + } + pub fn hashes(&self) -> BaseSystemContractsHashes { BaseSystemContractsHashes { bootloader: self.bootloader.hash, diff --git a/core/lib/contracts/src/test_contracts.rs b/core/lib/contracts/src/test_contracts.rs index 9db4051cfdb..eab1587f833 100644 --- a/core/lib/contracts/src/test_contracts.rs +++ b/core/lib/contracts/src/test_contracts.rs @@ -1,8 +1,8 @@ -use crate::get_loadnext_contract; -use ethabi::ethereum_types::U256; -use ethabi::{Bytes, Token}; +use ethabi::{ethereum_types::U256, Bytes, Token}; use serde::Deserialize; +use crate::get_loadnext_contract; + #[derive(Debug, Clone, Deserialize)] pub struct LoadnextContractExecutionParams { pub reads: usize, diff --git a/core/lib/crypto/README.md b/core/lib/crypto/README.md index 2a4f3d4b9c5..e224b2732d3 100644 --- a/core/lib/crypto/README.md +++ b/core/lib/crypto/README.md @@ -7,4 +7,4 @@ `zksync_crypto` is a part of zkSync stack, which is distributed under the terms of both the MIT license and the Apache License (Version 2.0). -See [LICENSE-APACHE](../../LICENSE-APACHE), [LICENSE-MIT](../../LICENSE-MIT) for details. +See [LICENSE-APACHE](../../../LICENSE-APACHE), [LICENSE-MIT](../../../LICENSE-MIT) for details. 
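[Editor's note] As a usage note on the `BaseSystemContracts` loaders changed above: a hypothetical sketch, not part of the patch, of how a caller might load one of the new bootloader variants. It assumes `ZKSYNC_HOME` points at a checkout with the new `contracts/system-contracts` layout, so the `.zbin` artifact paths resolve.

```rust
use zksync_contracts::BaseSystemContracts;

fn main() {
    // Load the post-1.4.1 playground bootloader added in this change;
    // the bytecode is read from etc/multivm_bootloaders relative to ZKSYNC_HOME.
    let contracts = BaseSystemContracts::playground_post_1_4_1();

    // `hashes()` returns the bootloader and default account code hashes,
    // as stored per protocol version.
    let hashes = contracts.hashes();
    println!("bootloader hash: {:?}", hashes.bootloader);
}
```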
diff --git a/core/lib/crypto/src/hasher/blake2.rs b/core/lib/crypto/src/hasher/blake2.rs index 70d8c9797e8..97d3fbb8a1e 100644 --- a/core/lib/crypto/src/hasher/blake2.rs +++ b/core/lib/crypto/src/hasher/blake2.rs @@ -1,7 +1,7 @@ use blake2::{Blake2s256, Digest}; +use zksync_basic_types::H256; use crate::hasher::Hasher; -use zksync_basic_types::H256; #[derive(Default, Clone, Debug)] pub struct Blake2Hasher; diff --git a/core/lib/crypto/src/hasher/keccak.rs b/core/lib/crypto/src/hasher/keccak.rs index e4c441328de..d3baab873f9 100644 --- a/core/lib/crypto/src/hasher/keccak.rs +++ b/core/lib/crypto/src/hasher/keccak.rs @@ -1,6 +1,7 @@ -use crate::hasher::Hasher; use zksync_basic_types::{web3::signing::keccak256, H256}; +use crate::hasher::Hasher; + #[derive(Default, Clone, Debug)] pub struct KeccakHasher; diff --git a/core/lib/crypto/src/hasher/sha256.rs b/core/lib/crypto/src/hasher/sha256.rs index 73e593ead72..b976c79d210 100644 --- a/core/lib/crypto/src/hasher/sha256.rs +++ b/core/lib/crypto/src/hasher/sha256.rs @@ -1,7 +1,7 @@ use sha2::{Digest, Sha256}; +use zksync_basic_types::H256; use crate::hasher::Hasher; -use zksync_basic_types::H256; #[derive(Debug, Default, Clone, Copy)] pub struct Sha256Hasher; diff --git a/core/lib/dal/.sqlx/query-00b88ec7fcf40bb18e0018b7c76f6e1df560ab1e8935564355236e90b6147d2f.json b/core/lib/dal/.sqlx/query-00b88ec7fcf40bb18e0018b7c76f6e1df560ab1e8935564355236e90b6147d2f.json new file mode 100644 index 00000000000..49a533897ce --- /dev/null +++ b/core/lib/dal/.sqlx/query-00b88ec7fcf40bb18e0018b7c76f6e1df560ab1e8935564355236e90b6147d2f.json @@ -0,0 +1,15 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE scheduler_witness_jobs_fri\n SET\n status = 'successful',\n updated_at = NOW(),\n time_taken = $1\n WHERE\n l1_batch_number = $2\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Time", + "Int8" + ] + }, + "nullable": [] + }, + "hash": "00b88ec7fcf40bb18e0018b7c76f6e1df560ab1e8935564355236e90b6147d2f" +} diff --git a/core/lib/dal/.sqlx/query-012bed5d34240ed28c331c8515c381d82925556a4801f678b8786235d525d784.json b/core/lib/dal/.sqlx/query-012bed5d34240ed28c331c8515c381d82925556a4801f678b8786235d525d784.json new file mode 100644 index 00000000000..fbeefdfbf95 --- /dev/null +++ b/core/lib/dal/.sqlx/query-012bed5d34240ed28c331c8515c381d82925556a4801f678b8786235d525d784.json @@ -0,0 +1,16 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE l1_batches\n SET\n eth_commit_tx_id = $1,\n updated_at = NOW()\n WHERE\n number BETWEEN $2 AND $3\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int4", + "Int8", + "Int8" + ] + }, + "nullable": [] + }, + "hash": "012bed5d34240ed28c331c8515c381d82925556a4801f678b8786235d525d784" +} diff --git a/core/lib/dal/.sqlx/query-015350f8d729ef490553550a68f07703b2581dda4fe3c00be6c5422c78980c4b.json b/core/lib/dal/.sqlx/query-015350f8d729ef490553550a68f07703b2581dda4fe3c00be6c5422c78980c4b.json new file mode 100644 index 00000000000..d8495583ba9 --- /dev/null +++ b/core/lib/dal/.sqlx/query-015350f8d729ef490553550a68f07703b2581dda4fe3c00be6c5422c78980c4b.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n MAX(id) AS \"max?\"\n FROM\n protocol_versions\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "max?", + "type_info": "Int4" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + null + ] + }, + "hash": "015350f8d729ef490553550a68f07703b2581dda4fe3c00be6c5422c78980c4b" +} diff --git 
a/core/lib/dal/.sqlx/query-01ac5343beb09ec5bd45b39d560e57a83f37da8999849377dfad60b44989be39.json b/core/lib/dal/.sqlx/query-01ac5343beb09ec5bd45b39d560e57a83f37da8999849377dfad60b44989be39.json new file mode 100644 index 00000000000..8ca4bb693c2 --- /dev/null +++ b/core/lib/dal/.sqlx/query-01ac5343beb09ec5bd45b39d560e57a83f37da8999849377dfad60b44989be39.json @@ -0,0 +1,107 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE node_aggregation_witness_jobs_fri\n SET\n status = 'in_progress',\n attempts = attempts + 1,\n updated_at = NOW(),\n processing_started_at = NOW(),\n picked_by = $2\n WHERE\n id = (\n SELECT\n id\n FROM\n node_aggregation_witness_jobs_fri\n WHERE\n status = 'queued'\n AND protocol_version = ANY ($1)\n ORDER BY\n l1_batch_number ASC,\n depth ASC,\n id ASC\n LIMIT\n 1\n FOR UPDATE\n SKIP LOCKED\n )\n RETURNING\n node_aggregation_witness_jobs_fri.*\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "id", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "l1_batch_number", + "type_info": "Int8" + }, + { + "ordinal": 2, + "name": "circuit_id", + "type_info": "Int2" + }, + { + "ordinal": 3, + "name": "depth", + "type_info": "Int4" + }, + { + "ordinal": 4, + "name": "status", + "type_info": "Text" + }, + { + "ordinal": 5, + "name": "attempts", + "type_info": "Int2" + }, + { + "ordinal": 6, + "name": "aggregations_url", + "type_info": "Text" + }, + { + "ordinal": 7, + "name": "processing_started_at", + "type_info": "Timestamp" + }, + { + "ordinal": 8, + "name": "time_taken", + "type_info": "Time" + }, + { + "ordinal": 9, + "name": "error", + "type_info": "Text" + }, + { + "ordinal": 10, + "name": "created_at", + "type_info": "Timestamp" + }, + { + "ordinal": 11, + "name": "updated_at", + "type_info": "Timestamp" + }, + { + "ordinal": 12, + "name": "number_of_dependent_jobs", + "type_info": "Int4" + }, + { + "ordinal": 13, + "name": "protocol_version", + "type_info": "Int4" + }, + { + "ordinal": 14, + "name": "picked_by", + "type_info": "Text" + } + ], + "parameters": { + "Left": [ + "Int4Array", + "Text" + ] + }, + "nullable": [ + false, + false, + false, + false, + false, + false, + true, + true, + true, + true, + false, + false, + true, + true, + true + ] + }, + "hash": "01ac5343beb09ec5bd45b39d560e57a83f37da8999849377dfad60b44989be39" +} diff --git a/core/lib/dal/.sqlx/query-01e4cde73867da612084c3f6fe882d56bbace9013f1d95ea0926eef1fb48039b.json b/core/lib/dal/.sqlx/query-01e4cde73867da612084c3f6fe882d56bbace9013f1d95ea0926eef1fb48039b.json new file mode 100644 index 00000000000..4a229dbe78d --- /dev/null +++ b/core/lib/dal/.sqlx/query-01e4cde73867da612084c3f6fe882d56bbace9013f1d95ea0926eef1fb48039b.json @@ -0,0 +1,34 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n l1_batch_number,\n factory_deps_filepath,\n storage_logs_filepaths\n FROM\n snapshots\n WHERE\n l1_batch_number = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "l1_batch_number", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "factory_deps_filepath", + "type_info": "Text" + }, + { + "ordinal": 2, + "name": "storage_logs_filepaths", + "type_info": "TextArray" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false, + false, + false + ] + }, + "hash": "01e4cde73867da612084c3f6fe882d56bbace9013f1d95ea0926eef1fb48039b" +} diff --git a/core/lib/dal/.sqlx/query-01f72dfc1eee6360a8ef7809874a1b4ba7fe355ebc02ea49a054aa073ce324ba.json b/core/lib/dal/.sqlx/query-01f72dfc1eee6360a8ef7809874a1b4ba7fe355ebc02ea49a054aa073ce324ba.json new 
file mode 100644 index 00000000000..e28c68abc28 --- /dev/null +++ b/core/lib/dal/.sqlx/query-01f72dfc1eee6360a8ef7809874a1b4ba7fe355ebc02ea49a054aa073ce324ba.json @@ -0,0 +1,15 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE storage\n SET\n value = u.value\n FROM\n UNNEST($1::bytea[], $2::bytea[]) AS u (key, value)\n WHERE\n u.key = hashed_key\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "ByteaArray", + "ByteaArray" + ] + }, + "nullable": [] + }, + "hash": "01f72dfc1eee6360a8ef7809874a1b4ba7fe355ebc02ea49a054aa073ce324ba" +} diff --git a/core/lib/dal/.sqlx/query-02285b8d0bc76c8cfd259872ac24f3670813e5a5356ddcb7ac482a0201d045f7.json b/core/lib/dal/.sqlx/query-02285b8d0bc76c8cfd259872ac24f3670813e5a5356ddcb7ac482a0201d045f7.json new file mode 100644 index 00000000000..41a37726f48 --- /dev/null +++ b/core/lib/dal/.sqlx/query-02285b8d0bc76c8cfd259872ac24f3670813e5a5356ddcb7ac482a0201d045f7.json @@ -0,0 +1,108 @@ +{ + "db_name": "PostgreSQL", + "query": "\n WITH\n sl AS (\n SELECT\n *\n FROM\n storage_logs\n WHERE\n storage_logs.address = $1\n AND storage_logs.tx_hash = $2\n ORDER BY\n storage_logs.miniblock_number DESC,\n storage_logs.operation_number DESC\n LIMIT\n 1\n )\n SELECT\n transactions.hash AS tx_hash,\n transactions.index_in_block AS index_in_block,\n transactions.l1_batch_tx_index AS l1_batch_tx_index,\n transactions.miniblock_number AS \"block_number!\",\n transactions.error AS error,\n transactions.effective_gas_price AS effective_gas_price,\n transactions.initiator_address AS initiator_address,\n transactions.data -> 'to' AS \"transfer_to?\",\n transactions.data -> 'contractAddress' AS \"execute_contract_address?\",\n transactions.tx_format AS \"tx_format?\",\n transactions.refunded_gas AS refunded_gas,\n transactions.gas_limit AS gas_limit,\n miniblocks.hash AS \"block_hash\",\n miniblocks.l1_batch_number AS \"l1_batch_number?\",\n sl.key AS \"contract_address?\"\n FROM\n transactions\n JOIN miniblocks ON miniblocks.number = transactions.miniblock_number\n LEFT JOIN sl ON sl.value != $3\n WHERE\n transactions.hash = $2\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "tx_hash", + "type_info": "Bytea" + }, + { + "ordinal": 1, + "name": "index_in_block", + "type_info": "Int4" + }, + { + "ordinal": 2, + "name": "l1_batch_tx_index", + "type_info": "Int4" + }, + { + "ordinal": 3, + "name": "block_number!", + "type_info": "Int8" + }, + { + "ordinal": 4, + "name": "error", + "type_info": "Varchar" + }, + { + "ordinal": 5, + "name": "effective_gas_price", + "type_info": "Numeric" + }, + { + "ordinal": 6, + "name": "initiator_address", + "type_info": "Bytea" + }, + { + "ordinal": 7, + "name": "transfer_to?", + "type_info": "Jsonb" + }, + { + "ordinal": 8, + "name": "execute_contract_address?", + "type_info": "Jsonb" + }, + { + "ordinal": 9, + "name": "tx_format?", + "type_info": "Int4" + }, + { + "ordinal": 10, + "name": "refunded_gas", + "type_info": "Int8" + }, + { + "ordinal": 11, + "name": "gas_limit", + "type_info": "Numeric" + }, + { + "ordinal": 12, + "name": "block_hash", + "type_info": "Bytea" + }, + { + "ordinal": 13, + "name": "l1_batch_number?", + "type_info": "Int8" + }, + { + "ordinal": 14, + "name": "contract_address?", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Bytea", + "Bytea", + "Bytea" + ] + }, + "nullable": [ + false, + true, + true, + true, + true, + true, + false, + null, + null, + true, + false, + true, + false, + true, + false + ] + }, + "hash": 
"02285b8d0bc76c8cfd259872ac24f3670813e5a5356ddcb7ac482a0201d045f7" +} diff --git a/core/lib/dal/.sqlx/query-026ab7dd7407f10074a2966b5eac2563a3e061bcc6505d8c295b1b2517f85f1b.json b/core/lib/dal/.sqlx/query-026ab7dd7407f10074a2966b5eac2563a3e061bcc6505d8c295b1b2517f85f1b.json new file mode 100644 index 00000000000..d98798241f7 --- /dev/null +++ b/core/lib/dal/.sqlx/query-026ab7dd7407f10074a2966b5eac2563a3e061bcc6505d8c295b1b2517f85f1b.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n number\n FROM\n l1_batches\n LEFT JOIN eth_txs_history AS prove_tx ON (l1_batches.eth_prove_tx_id = prove_tx.eth_tx_id)\n WHERE\n prove_tx.confirmed_at IS NOT NULL\n ORDER BY\n number DESC\n LIMIT\n 1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "number", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false + ] + }, + "hash": "026ab7dd7407f10074a2966b5eac2563a3e061bcc6505d8c295b1b2517f85f1b" +} diff --git a/core/lib/dal/.sqlx/query-03c585c7e9f918e608757496088c7e3b6bdb2a08149d5f443310607d3c78988c.json b/core/lib/dal/.sqlx/query-03c585c7e9f918e608757496088c7e3b6bdb2a08149d5f443310607d3c78988c.json new file mode 100644 index 00000000000..9c811e9f87c --- /dev/null +++ b/core/lib/dal/.sqlx/query-03c585c7e9f918e608757496088c7e3b6bdb2a08149d5f443310607d3c78988c.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n storage_refunds\n FROM\n l1_batches\n WHERE\n number = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "storage_refunds", + "type_info": "Int8Array" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + true + ] + }, + "hash": "03c585c7e9f918e608757496088c7e3b6bdb2a08149d5f443310607d3c78988c" +} diff --git a/core/lib/dal/.sqlx/query-040eaa878c3473f5edc73b77e572b5ea100f59295cd693d14ee0d5ee089c7981.json b/core/lib/dal/.sqlx/query-040eaa878c3473f5edc73b77e572b5ea100f59295cd693d14ee0d5ee089c7981.json new file mode 100644 index 00000000000..c0e0c777cc5 --- /dev/null +++ b/core/lib/dal/.sqlx/query-040eaa878c3473f5edc73b77e572b5ea100f59295cd693d14ee0d5ee089c7981.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n l1_batch_number\n FROM\n snapshots\n WHERE\n NOT (''::TEXT = ANY (storage_logs_filepaths))\n ORDER BY\n l1_batch_number DESC\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "l1_batch_number", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false + ] + }, + "hash": "040eaa878c3473f5edc73b77e572b5ea100f59295cd693d14ee0d5ee089c7981" +} diff --git a/core/lib/dal/.sqlx/query-04fbbd198108d2614a3b29fa795994723ebe57b3ed209069bd3db906921ef1a3.json b/core/lib/dal/.sqlx/query-04fbbd198108d2614a3b29fa795994723ebe57b3ed209069bd3db906921ef1a3.json new file mode 100644 index 00000000000..00f94f7c864 --- /dev/null +++ b/core/lib/dal/.sqlx/query-04fbbd198108d2614a3b29fa795994723ebe57b3ed209069bd3db906921ef1a3.json @@ -0,0 +1,28 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n MIN(miniblocks.number) AS \"min?\",\n MAX(miniblocks.number) AS \"max?\"\n FROM\n miniblocks\n WHERE\n l1_batch_number = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "min?", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "max?", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + null, + null + ] + }, + "hash": "04fbbd198108d2614a3b29fa795994723ebe57b3ed209069bd3db906921ef1a3" +} diff --git 
a/core/lib/dal/.sqlx/query-05267e9774056bb0f984918ab861a2ee78eb59628d0429e89b27d185f83512be.json b/core/lib/dal/.sqlx/query-05267e9774056bb0f984918ab861a2ee78eb59628d0429e89b27d185f83512be.json new file mode 100644 index 00000000000..81b6ad9687b --- /dev/null +++ b/core/lib/dal/.sqlx/query-05267e9774056bb0f984918ab861a2ee78eb59628d0429e89b27d185f83512be.json @@ -0,0 +1,28 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n *\n FROM\n call_traces\n WHERE\n tx_hash IN (\n SELECT\n hash\n FROM\n transactions\n WHERE\n miniblock_number = $1\n )\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "tx_hash", + "type_info": "Bytea" + }, + { + "ordinal": 1, + "name": "call_trace", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false, + false + ] + }, + "hash": "05267e9774056bb0f984918ab861a2ee78eb59628d0429e89b27d185f83512be" +} diff --git a/core/lib/dal/.sqlx/query-07310d96fc7e258154ad510684e33d196907ebd599e926d305e5ef9f26afa2fa.json b/core/lib/dal/.sqlx/query-07310d96fc7e258154ad510684e33d196907ebd599e926d305e5ef9f26afa2fa.json new file mode 100644 index 00000000000..a293d217645 --- /dev/null +++ b/core/lib/dal/.sqlx/query-07310d96fc7e258154ad510684e33d196907ebd599e926d305e5ef9f26afa2fa.json @@ -0,0 +1,24 @@ +{ + "db_name": "PostgreSQL", + "query": "INSERT INTO eth_txs_history (eth_tx_id, base_fee_per_gas, priority_fee_per_gas, tx_hash, signed_raw_tx, created_at, updated_at, confirmed_at) VALUES ($1, 0, 0, $2, '\\x00', now(), now(), $3) RETURNING id", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "id", + "type_info": "Int4" + } + ], + "parameters": { + "Left": [ + "Int4", + "Text", + "Timestamp" + ] + }, + "nullable": [ + false + ] + }, + "hash": "07310d96fc7e258154ad510684e33d196907ebd599e926d305e5ef9f26afa2fa" +} diff --git a/core/lib/dal/.sqlx/query-083991abb3f1c2183d1bd1fb2ad4710daa723e2d9a23317c347f6081465c3643.json b/core/lib/dal/.sqlx/query-083991abb3f1c2183d1bd1fb2ad4710daa723e2d9a23317c347f6081465c3643.json new file mode 100644 index 00000000000..e2c3a710556 --- /dev/null +++ b/core/lib/dal/.sqlx/query-083991abb3f1c2183d1bd1fb2ad4710daa723e2d9a23317c347f6081465c3643.json @@ -0,0 +1,52 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE basic_witness_input_producer_jobs\n SET\n status = $1,\n updated_at = NOW(),\n time_taken = $3,\n error = $4\n WHERE\n l1_batch_number = $2\n AND status != $5\n RETURNING\n basic_witness_input_producer_jobs.attempts\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "attempts", + "type_info": "Int2" + } + ], + "parameters": { + "Left": [ + { + "Custom": { + "name": "basic_witness_input_producer_job_status", + "kind": { + "Enum": [ + "Queued", + "ManuallySkipped", + "InProgress", + "Successful", + "Failed" + ] + } + } + }, + "Int8", + "Time", + "Text", + { + "Custom": { + "name": "basic_witness_input_producer_job_status", + "kind": { + "Enum": [ + "Queued", + "ManuallySkipped", + "InProgress", + "Successful", + "Failed" + ] + } + } + } + ] + }, + "nullable": [ + false + ] + }, + "hash": "083991abb3f1c2183d1bd1fb2ad4710daa723e2d9a23317c347f6081465c3643" +} diff --git a/core/lib/dal/.sqlx/query-08e59ed8e2fd1a74e19d8bf0d131e4ee6682a89fb86f3b715a240805d44e6d87.json b/core/lib/dal/.sqlx/query-08e59ed8e2fd1a74e19d8bf0d131e4ee6682a89fb86f3b715a240805d44e6d87.json new file mode 100644 index 00000000000..0c3ca92c10c --- /dev/null +++ b/core/lib/dal/.sqlx/query-08e59ed8e2fd1a74e19d8bf0d131e4ee6682a89fb86f3b715a240805d44e6d87.json @@ -0,0 +1,15 @@ +{ + "db_name": 
"PostgreSQL", + "query": "\n INSERT INTO\n proof_generation_details (l1_batch_number, status, proof_gen_data_blob_url, created_at, updated_at)\n VALUES\n ($1, 'ready_to_be_proven', $2, NOW(), NOW())\n ON CONFLICT (l1_batch_number) DO NOTHING\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8", + "Text" + ] + }, + "nullable": [] + }, + "hash": "08e59ed8e2fd1a74e19d8bf0d131e4ee6682a89fb86f3b715a240805d44e6d87" +} diff --git a/core/lib/dal/.sqlx/query-0914f0ad03d6a8c55d287f94917c6f03469d78bf4f45f5fd1eaf37171db2f04a.json b/core/lib/dal/.sqlx/query-0914f0ad03d6a8c55d287f94917c6f03469d78bf4f45f5fd1eaf37171db2f04a.json new file mode 100644 index 00000000000..b8e36d10906 --- /dev/null +++ b/core/lib/dal/.sqlx/query-0914f0ad03d6a8c55d287f94917c6f03469d78bf4f45f5fd1eaf37171db2f04a.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n l1_batch_number\n FROM\n proof_generation_details\n WHERE\n status NOT IN ('generated', 'skipped')\n ORDER BY\n l1_batch_number ASC\n LIMIT\n 1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "l1_batch_number", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false + ] + }, + "hash": "0914f0ad03d6a8c55d287f94917c6f03469d78bf4f45f5fd1eaf37171db2f04a" +} diff --git a/core/lib/dal/.sqlx/query-0a3c928a616b5ebc0b977bd773edcde721ca1c652ae2f8db41fb75cecdecb674.json b/core/lib/dal/.sqlx/query-0a3c928a616b5ebc0b977bd773edcde721ca1c652ae2f8db41fb75cecdecb674.json new file mode 100644 index 00000000000..f0e439d0e0b --- /dev/null +++ b/core/lib/dal/.sqlx/query-0a3c928a616b5ebc0b977bd773edcde721ca1c652ae2f8db41fb75cecdecb674.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "SELECT COUNT(*) FROM storage_logs WHERE miniblock_number = $1", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "count", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + null + ] + }, + "hash": "0a3c928a616b5ebc0b977bd773edcde721ca1c652ae2f8db41fb75cecdecb674" +} diff --git a/core/lib/dal/.sqlx/query-0a3cb11f5bdcb8da31dbd4e3016fced141fb29dd8b6c32dd2dc3452dc294fe1f.json b/core/lib/dal/.sqlx/query-0a3cb11f5bdcb8da31dbd4e3016fced141fb29dd8b6c32dd2dc3452dc294fe1f.json new file mode 100644 index 00000000000..854e34b4f18 --- /dev/null +++ b/core/lib/dal/.sqlx/query-0a3cb11f5bdcb8da31dbd4e3016fced141fb29dd8b6c32dd2dc3452dc294fe1f.json @@ -0,0 +1,23 @@ +{ + "db_name": "PostgreSQL", + "query": "\n INSERT INTO\n protocol_versions (\n id,\n timestamp,\n recursion_scheduler_level_vk_hash,\n recursion_node_level_vk_hash,\n recursion_leaf_level_vk_hash,\n recursion_circuits_set_vks_hash,\n bootloader_code_hash,\n default_account_code_hash,\n verifier_address,\n upgrade_tx_hash,\n created_at\n )\n VALUES\n ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, NOW())\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int4", + "Int8", + "Bytea", + "Bytea", + "Bytea", + "Bytea", + "Bytea", + "Bytea", + "Bytea", + "Bytea" + ] + }, + "nullable": [] + }, + "hash": "0a3cb11f5bdcb8da31dbd4e3016fced141fb29dd8b6c32dd2dc3452dc294fe1f" +} diff --git a/core/lib/dal/.sqlx/query-0a53fc3c90a14038c9f3f32c3e2e5f7edcafa4fc6757264a96a46dbf7dd1f9cc.json b/core/lib/dal/.sqlx/query-0a53fc3c90a14038c9f3f32c3e2e5f7edcafa4fc6757264a96a46dbf7dd1f9cc.json new file mode 100644 index 00000000000..00379abe6df --- /dev/null +++ b/core/lib/dal/.sqlx/query-0a53fc3c90a14038c9f3f32c3e2e5f7edcafa4fc6757264a96a46dbf7dd1f9cc.json @@ -0,0 +1,31 @@ +{ + "db_name": "PostgreSQL", + 
"query": "\n INSERT INTO\n transactions (\n hash,\n is_priority,\n initiator_address,\n gas_limit,\n max_fee_per_gas,\n gas_per_pubdata_limit,\n data,\n priority_op_id,\n full_fee,\n layer_2_tip_fee,\n contract_address,\n l1_block_number,\n value,\n paymaster,\n paymaster_input,\n tx_format,\n l1_tx_mint,\n l1_tx_refund_recipient,\n received_at,\n created_at,\n updated_at\n )\n VALUES\n (\n $1,\n TRUE,\n $2,\n $3,\n $4,\n $5,\n $6,\n $7,\n $8,\n $9,\n $10,\n $11,\n $12,\n $13,\n $14,\n $15,\n $16,\n $17,\n $18,\n NOW(),\n NOW()\n )\n ON CONFLICT (hash) DO NOTHING\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Bytea", + "Bytea", + "Numeric", + "Numeric", + "Numeric", + "Jsonb", + "Int8", + "Numeric", + "Numeric", + "Bytea", + "Int4", + "Numeric", + "Bytea", + "Bytea", + "Int4", + "Numeric", + "Bytea", + "Timestamp" + ] + }, + "nullable": [] + }, + "hash": "0a53fc3c90a14038c9f3f32c3e2e5f7edcafa4fc6757264a96a46dbf7dd1f9cc" +} diff --git a/core/lib/dal/.sqlx/query-0aaefa9d5518ed1a2d8f735435e8048558243ff878b59586eb3a8b22794395d8.json b/core/lib/dal/.sqlx/query-0aaefa9d5518ed1a2d8f735435e8048558243ff878b59586eb3a8b22794395d8.json new file mode 100644 index 00000000000..688a7373d05 --- /dev/null +++ b/core/lib/dal/.sqlx/query-0aaefa9d5518ed1a2d8f735435e8048558243ff878b59586eb3a8b22794395d8.json @@ -0,0 +1,259 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n number,\n l1_batches.timestamp,\n is_finished,\n l1_tx_count,\n l2_tx_count,\n fee_account_address,\n bloom,\n priority_ops_onchain_data,\n hash,\n parent_hash,\n commitment,\n compressed_write_logs,\n compressed_contracts,\n eth_prove_tx_id,\n eth_commit_tx_id,\n eth_execute_tx_id,\n merkle_root_hash,\n l2_to_l1_logs,\n l2_to_l1_messages,\n used_contract_hashes,\n compressed_initial_writes,\n compressed_repeated_writes,\n l2_l1_compressed_messages,\n l2_l1_merkle_root,\n l1_gas_price,\n l2_fair_gas_price,\n rollup_last_leaf_index,\n zkporter_is_available,\n l1_batches.bootloader_code_hash,\n l1_batches.default_aa_code_hash,\n base_fee_per_gas,\n aux_data_hash,\n pass_through_data_hash,\n meta_parameters_hash,\n protocol_version,\n compressed_state_diffs,\n system_logs,\n events_queue_commitment,\n bootloader_initial_content_commitment,\n pubdata_input\n FROM\n l1_batches\n LEFT JOIN commitments ON commitments.l1_batch_number = l1_batches.number\n JOIN protocol_versions ON protocol_versions.id = l1_batches.protocol_version\n WHERE\n eth_commit_tx_id IS NULL\n AND number != 0\n AND protocol_versions.bootloader_code_hash = $1\n AND protocol_versions.default_account_code_hash = $2\n AND commitment IS NOT NULL\n AND (\n protocol_versions.id = $3\n OR protocol_versions.upgrade_tx_hash IS NULL\n )\n ORDER BY\n number\n LIMIT\n $4\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "number", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "timestamp", + "type_info": "Int8" + }, + { + "ordinal": 2, + "name": "is_finished", + "type_info": "Bool" + }, + { + "ordinal": 3, + "name": "l1_tx_count", + "type_info": "Int4" + }, + { + "ordinal": 4, + "name": "l2_tx_count", + "type_info": "Int4" + }, + { + "ordinal": 5, + "name": "fee_account_address", + "type_info": "Bytea" + }, + { + "ordinal": 6, + "name": "bloom", + "type_info": "Bytea" + }, + { + "ordinal": 7, + "name": "priority_ops_onchain_data", + "type_info": "ByteaArray" + }, + { + "ordinal": 8, + "name": "hash", + "type_info": "Bytea" + }, + { + "ordinal": 9, + "name": "parent_hash", + "type_info": "Bytea" + }, + { + "ordinal": 10, + "name": 
"commitment", + "type_info": "Bytea" + }, + { + "ordinal": 11, + "name": "compressed_write_logs", + "type_info": "Bytea" + }, + { + "ordinal": 12, + "name": "compressed_contracts", + "type_info": "Bytea" + }, + { + "ordinal": 13, + "name": "eth_prove_tx_id", + "type_info": "Int4" + }, + { + "ordinal": 14, + "name": "eth_commit_tx_id", + "type_info": "Int4" + }, + { + "ordinal": 15, + "name": "eth_execute_tx_id", + "type_info": "Int4" + }, + { + "ordinal": 16, + "name": "merkle_root_hash", + "type_info": "Bytea" + }, + { + "ordinal": 17, + "name": "l2_to_l1_logs", + "type_info": "ByteaArray" + }, + { + "ordinal": 18, + "name": "l2_to_l1_messages", + "type_info": "ByteaArray" + }, + { + "ordinal": 19, + "name": "used_contract_hashes", + "type_info": "Jsonb" + }, + { + "ordinal": 20, + "name": "compressed_initial_writes", + "type_info": "Bytea" + }, + { + "ordinal": 21, + "name": "compressed_repeated_writes", + "type_info": "Bytea" + }, + { + "ordinal": 22, + "name": "l2_l1_compressed_messages", + "type_info": "Bytea" + }, + { + "ordinal": 23, + "name": "l2_l1_merkle_root", + "type_info": "Bytea" + }, + { + "ordinal": 24, + "name": "l1_gas_price", + "type_info": "Int8" + }, + { + "ordinal": 25, + "name": "l2_fair_gas_price", + "type_info": "Int8" + }, + { + "ordinal": 26, + "name": "rollup_last_leaf_index", + "type_info": "Int8" + }, + { + "ordinal": 27, + "name": "zkporter_is_available", + "type_info": "Bool" + }, + { + "ordinal": 28, + "name": "bootloader_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 29, + "name": "default_aa_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 30, + "name": "base_fee_per_gas", + "type_info": "Numeric" + }, + { + "ordinal": 31, + "name": "aux_data_hash", + "type_info": "Bytea" + }, + { + "ordinal": 32, + "name": "pass_through_data_hash", + "type_info": "Bytea" + }, + { + "ordinal": 33, + "name": "meta_parameters_hash", + "type_info": "Bytea" + }, + { + "ordinal": 34, + "name": "protocol_version", + "type_info": "Int4" + }, + { + "ordinal": 35, + "name": "compressed_state_diffs", + "type_info": "Bytea" + }, + { + "ordinal": 36, + "name": "system_logs", + "type_info": "ByteaArray" + }, + { + "ordinal": 37, + "name": "events_queue_commitment", + "type_info": "Bytea" + }, + { + "ordinal": 38, + "name": "bootloader_initial_content_commitment", + "type_info": "Bytea" + }, + { + "ordinal": 39, + "name": "pubdata_input", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Bytea", + "Bytea", + "Int4", + "Int8" + ] + }, + "nullable": [ + false, + false, + false, + false, + false, + false, + false, + false, + true, + true, + true, + true, + true, + true, + true, + true, + true, + false, + false, + false, + true, + true, + true, + true, + false, + false, + true, + true, + true, + true, + false, + true, + true, + true, + true, + true, + false, + true, + true, + true + ] + }, + "hash": "0aaefa9d5518ed1a2d8f735435e8048558243ff878b59586eb3a8b22794395d8" +} diff --git a/core/lib/dal/.sqlx/query-0bdcf87f6910c7222b621f76f71bc6e326e15dca141050bc9d7dacae98a430e8.json b/core/lib/dal/.sqlx/query-0bdcf87f6910c7222b621f76f71bc6e326e15dca141050bc9d7dacae98a430e8.json new file mode 100644 index 00000000000..e5ed4813072 --- /dev/null +++ b/core/lib/dal/.sqlx/query-0bdcf87f6910c7222b621f76f71bc6e326e15dca141050bc9d7dacae98a430e8.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n hash\n FROM\n l1_batches\n WHERE\n number = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "hash", + "type_info": "Bytea" + } + ], + 
"parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + true + ] + }, + "hash": "0bdcf87f6910c7222b621f76f71bc6e326e15dca141050bc9d7dacae98a430e8" +} diff --git a/core/lib/dal/.sqlx/query-0c899c68886f76a232ffac0454cdfbf962636347864fc365fafa46c7a2da5f30.json b/core/lib/dal/.sqlx/query-0c899c68886f76a232ffac0454cdfbf962636347864fc365fafa46c7a2da5f30.json new file mode 100644 index 00000000000..35c1633fc55 --- /dev/null +++ b/core/lib/dal/.sqlx/query-0c899c68886f76a232ffac0454cdfbf962636347864fc365fafa46c7a2da5f30.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n virtual_blocks\n FROM\n miniblocks\n WHERE\n number = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "virtual_blocks", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false + ] + }, + "hash": "0c899c68886f76a232ffac0454cdfbf962636347864fc365fafa46c7a2da5f30" +} diff --git a/core/lib/dal/.sqlx/query-0c95fbfb3a816bd49fd06e3a4f0a52daa202279bf612a9278f663deb78bc6e41.json b/core/lib/dal/.sqlx/query-0c95fbfb3a816bd49fd06e3a4f0a52daa202279bf612a9278f663deb78bc6e41.json new file mode 100644 index 00000000000..100761f54b4 --- /dev/null +++ b/core/lib/dal/.sqlx/query-0c95fbfb3a816bd49fd06e3a4f0a52daa202279bf612a9278f663deb78bc6e41.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n protocol_version\n FROM\n l1_batches\n WHERE\n number = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "protocol_version", + "type_info": "Int4" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + true + ] + }, + "hash": "0c95fbfb3a816bd49fd06e3a4f0a52daa202279bf612a9278f663deb78bc6e41" +} diff --git a/core/lib/dal/.sqlx/query-0d13b8947b1bafa9e5bc6fdc70a986511265c541d81b1d21f0a751ae1399c626.json b/core/lib/dal/.sqlx/query-0d13b8947b1bafa9e5bc6fdc70a986511265c541d81b1d21f0a751ae1399c626.json new file mode 100644 index 00000000000..8b5605f078a --- /dev/null +++ b/core/lib/dal/.sqlx/query-0d13b8947b1bafa9e5bc6fdc70a986511265c541d81b1d21f0a751ae1399c626.json @@ -0,0 +1,72 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE gpu_prover_queue_fri\n SET\n instance_status = 'reserved',\n updated_at = NOW(),\n processing_started_at = NOW()\n WHERE\n id IN (\n SELECT\n id\n FROM\n gpu_prover_queue_fri\n WHERE\n specialized_prover_group_id = $2\n AND zone = $3\n AND (\n instance_status = 'available'\n OR (\n instance_status = 'reserved'\n AND processing_started_at < NOW() - $1::INTERVAL\n )\n )\n ORDER BY\n updated_at ASC\n LIMIT\n 1\n FOR UPDATE\n SKIP LOCKED\n )\n RETURNING\n gpu_prover_queue_fri.*\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "id", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "instance_host", + "type_info": "Inet" + }, + { + "ordinal": 2, + "name": "instance_port", + "type_info": "Int4" + }, + { + "ordinal": 3, + "name": "instance_status", + "type_info": "Text" + }, + { + "ordinal": 4, + "name": "specialized_prover_group_id", + "type_info": "Int2" + }, + { + "ordinal": 5, + "name": "zone", + "type_info": "Text" + }, + { + "ordinal": 6, + "name": "created_at", + "type_info": "Timestamp" + }, + { + "ordinal": 7, + "name": "updated_at", + "type_info": "Timestamp" + }, + { + "ordinal": 8, + "name": "processing_started_at", + "type_info": "Timestamp" + } + ], + "parameters": { + "Left": [ + "Interval", + "Int2", + "Text" + ] + }, + "nullable": [ + false, + false, + false, + false, + false, + true, + false, + false, + true + ] + }, + "hash": 
"0d13b8947b1bafa9e5bc6fdc70a986511265c541d81b1d21f0a751ae1399c626" +} diff --git a/core/lib/dal/.sqlx/query-10959c91f01ce0da196f4c6eaf0661a097308d9f81024fdfef24a14418202730.json b/core/lib/dal/.sqlx/query-10959c91f01ce0da196f4c6eaf0661a097308d9f81024fdfef24a14418202730.json new file mode 100644 index 00000000000..8f929a7a733 --- /dev/null +++ b/core/lib/dal/.sqlx/query-10959c91f01ce0da196f4c6eaf0661a097308d9f81024fdfef24a14418202730.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n verification_info\n FROM\n contracts_verification_info\n WHERE\n address = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "verification_info", + "type_info": "Jsonb" + } + ], + "parameters": { + "Left": [ + "Bytea" + ] + }, + "nullable": [ + true + ] + }, + "hash": "10959c91f01ce0da196f4c6eaf0661a097308d9f81024fdfef24a14418202730" +} diff --git a/core/lib/dal/.sqlx/query-11af69fc254e54449b64c086667700a95e4c37a7a18531b3cdf120394cb055b9.json b/core/lib/dal/.sqlx/query-11af69fc254e54449b64c086667700a95e4c37a7a18531b3cdf120394cb055b9.json new file mode 100644 index 00000000000..ed211d7dc9d --- /dev/null +++ b/core/lib/dal/.sqlx/query-11af69fc254e54449b64c086667700a95e4c37a7a18531b3cdf120394cb055b9.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE proof_generation_details\n SET\n status = 'picked_by_prover',\n updated_at = NOW(),\n prover_taken_at = NOW()\n WHERE\n l1_batch_number = (\n SELECT\n l1_batch_number\n FROM\n proof_generation_details\n WHERE\n status = 'ready_to_be_proven'\n OR (\n status = 'picked_by_prover'\n AND prover_taken_at < NOW() - $1::INTERVAL\n )\n ORDER BY\n l1_batch_number ASC\n LIMIT\n 1\n FOR UPDATE\n SKIP LOCKED\n )\n RETURNING\n proof_generation_details.l1_batch_number\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "l1_batch_number", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [ + "Interval" + ] + }, + "nullable": [ + false + ] + }, + "hash": "11af69fc254e54449b64c086667700a95e4c37a7a18531b3cdf120394cb055b9" +} diff --git a/core/lib/dal/.sqlx/query-12ab208f416e2875f89e558f0d4aff3a06b7a9c1866132d62e4449fa9436c7c4.json b/core/lib/dal/.sqlx/query-12ab208f416e2875f89e558f0d4aff3a06b7a9c1866132d62e4449fa9436c7c4.json new file mode 100644 index 00000000000..5441bce3e01 --- /dev/null +++ b/core/lib/dal/.sqlx/query-12ab208f416e2875f89e558f0d4aff3a06b7a9c1866132d62e4449fa9436c7c4.json @@ -0,0 +1,15 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE node_aggregation_witness_jobs_fri\n SET\n status = 'failed',\n error = $1,\n updated_at = NOW()\n WHERE\n id = $2\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Text", + "Int8" + ] + }, + "nullable": [] + }, + "hash": "12ab208f416e2875f89e558f0d4aff3a06b7a9c1866132d62e4449fa9436c7c4" +} diff --git a/core/lib/dal/.sqlx/query-12ab8ba692a42f528450f2adf8d263298abc0521734f807fbf45484158b167b2.json b/core/lib/dal/.sqlx/query-12ab8ba692a42f528450f2adf8d263298abc0521734f807fbf45484158b167b2.json new file mode 100644 index 00000000000..556867a21ff --- /dev/null +++ b/core/lib/dal/.sqlx/query-12ab8ba692a42f528450f2adf8d263298abc0521734f807fbf45484158b167b2.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n l1_address\n FROM\n tokens\n WHERE\n well_known = FALSE\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "l1_address", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false + ] + }, + "hash": 
"12ab8ba692a42f528450f2adf8d263298abc0521734f807fbf45484158b167b2" +} diff --git a/core/lib/dal/.sqlx/query-136569d7eb4037fd77e0fac2246c68e8e15a831f1a45dc3b2240d5c6809d5ef2.json b/core/lib/dal/.sqlx/query-136569d7eb4037fd77e0fac2246c68e8e15a831f1a45dc3b2240d5c6809d5ef2.json new file mode 100644 index 00000000000..fc33c969303 --- /dev/null +++ b/core/lib/dal/.sqlx/query-136569d7eb4037fd77e0fac2246c68e8e15a831f1a45dc3b2240d5c6809d5ef2.json @@ -0,0 +1,82 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n *\n FROM\n protocol_versions\n WHERE\n id = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "id", + "type_info": "Int4" + }, + { + "ordinal": 1, + "name": "timestamp", + "type_info": "Int8" + }, + { + "ordinal": 2, + "name": "recursion_scheduler_level_vk_hash", + "type_info": "Bytea" + }, + { + "ordinal": 3, + "name": "recursion_node_level_vk_hash", + "type_info": "Bytea" + }, + { + "ordinal": 4, + "name": "recursion_leaf_level_vk_hash", + "type_info": "Bytea" + }, + { + "ordinal": 5, + "name": "recursion_circuits_set_vks_hash", + "type_info": "Bytea" + }, + { + "ordinal": 6, + "name": "bootloader_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 7, + "name": "default_account_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 8, + "name": "verifier_address", + "type_info": "Bytea" + }, + { + "ordinal": 9, + "name": "upgrade_tx_hash", + "type_info": "Bytea" + }, + { + "ordinal": 10, + "name": "created_at", + "type_info": "Timestamp" + } + ], + "parameters": { + "Left": [ + "Int4" + ] + }, + "nullable": [ + false, + false, + false, + false, + false, + false, + false, + false, + false, + true, + false + ] + }, + "hash": "136569d7eb4037fd77e0fac2246c68e8e15a831f1a45dc3b2240d5c6809d5ef2" +} diff --git a/core/lib/dal/.sqlx/query-15858168fea6808c6d59d0e6d8f28a20420763a3a22899ad0e5f4b953b615a9e.json b/core/lib/dal/.sqlx/query-15858168fea6808c6d59d0e6d8f28a20420763a3a22899ad0e5f4b953b615a9e.json new file mode 100644 index 00000000000..ac0e433a919 --- /dev/null +++ b/core/lib/dal/.sqlx/query-15858168fea6808c6d59d0e6d8f28a20420763a3a22899ad0e5f4b953b615a9e.json @@ -0,0 +1,25 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n id\n FROM\n prover_fri_protocol_versions\n WHERE\n recursion_circuits_set_vks_hash = $1\n AND recursion_leaf_level_vk_hash = $2\n AND recursion_node_level_vk_hash = $3\n AND recursion_scheduler_level_vk_hash = $4\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "id", + "type_info": "Int4" + } + ], + "parameters": { + "Left": [ + "Bytea", + "Bytea", + "Bytea", + "Bytea" + ] + }, + "nullable": [ + false + ] + }, + "hash": "15858168fea6808c6d59d0e6d8f28a20420763a3a22899ad0e5f4b953b615a9e" +} diff --git a/core/lib/dal/.sqlx/query-1689c212d411ebd99a22210519ea2d505a1aabf52ff4136d2ed1b39c70dd1632.json b/core/lib/dal/.sqlx/query-1689c212d411ebd99a22210519ea2d505a1aabf52ff4136d2ed1b39c70dd1632.json new file mode 100644 index 00000000000..7b939d137db --- /dev/null +++ b/core/lib/dal/.sqlx/query-1689c212d411ebd99a22210519ea2d505a1aabf52ff4136d2ed1b39c70dd1632.json @@ -0,0 +1,230 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n *\n FROM\n transactions\n WHERE\n miniblock_number IS NOT NULL\n AND l1_batch_number IS NULL\n ORDER BY\n miniblock_number,\n index_in_block\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "hash", + "type_info": "Bytea" + }, + { + "ordinal": 1, + "name": "is_priority", + "type_info": "Bool" + }, + { + "ordinal": 2, + "name": "full_fee", + "type_info": "Numeric" + }, + { + "ordinal": 
3, + "name": "layer_2_tip_fee", + "type_info": "Numeric" + }, + { + "ordinal": 4, + "name": "initiator_address", + "type_info": "Bytea" + }, + { + "ordinal": 5, + "name": "nonce", + "type_info": "Int8" + }, + { + "ordinal": 6, + "name": "signature", + "type_info": "Bytea" + }, + { + "ordinal": 7, + "name": "input", + "type_info": "Bytea" + }, + { + "ordinal": 8, + "name": "data", + "type_info": "Jsonb" + }, + { + "ordinal": 9, + "name": "received_at", + "type_info": "Timestamp" + }, + { + "ordinal": 10, + "name": "priority_op_id", + "type_info": "Int8" + }, + { + "ordinal": 11, + "name": "l1_batch_number", + "type_info": "Int8" + }, + { + "ordinal": 12, + "name": "index_in_block", + "type_info": "Int4" + }, + { + "ordinal": 13, + "name": "error", + "type_info": "Varchar" + }, + { + "ordinal": 14, + "name": "gas_limit", + "type_info": "Numeric" + }, + { + "ordinal": 15, + "name": "gas_per_storage_limit", + "type_info": "Numeric" + }, + { + "ordinal": 16, + "name": "gas_per_pubdata_limit", + "type_info": "Numeric" + }, + { + "ordinal": 17, + "name": "tx_format", + "type_info": "Int4" + }, + { + "ordinal": 18, + "name": "created_at", + "type_info": "Timestamp" + }, + { + "ordinal": 19, + "name": "updated_at", + "type_info": "Timestamp" + }, + { + "ordinal": 20, + "name": "execution_info", + "type_info": "Jsonb" + }, + { + "ordinal": 21, + "name": "contract_address", + "type_info": "Bytea" + }, + { + "ordinal": 22, + "name": "in_mempool", + "type_info": "Bool" + }, + { + "ordinal": 23, + "name": "l1_block_number", + "type_info": "Int4" + }, + { + "ordinal": 24, + "name": "value", + "type_info": "Numeric" + }, + { + "ordinal": 25, + "name": "paymaster", + "type_info": "Bytea" + }, + { + "ordinal": 26, + "name": "paymaster_input", + "type_info": "Bytea" + }, + { + "ordinal": 27, + "name": "max_fee_per_gas", + "type_info": "Numeric" + }, + { + "ordinal": 28, + "name": "max_priority_fee_per_gas", + "type_info": "Numeric" + }, + { + "ordinal": 29, + "name": "effective_gas_price", + "type_info": "Numeric" + }, + { + "ordinal": 30, + "name": "miniblock_number", + "type_info": "Int8" + }, + { + "ordinal": 31, + "name": "l1_batch_tx_index", + "type_info": "Int4" + }, + { + "ordinal": 32, + "name": "refunded_gas", + "type_info": "Int8" + }, + { + "ordinal": 33, + "name": "l1_tx_mint", + "type_info": "Numeric" + }, + { + "ordinal": 34, + "name": "l1_tx_refund_recipient", + "type_info": "Bytea" + }, + { + "ordinal": 35, + "name": "upgrade_id", + "type_info": "Int4" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false, + false, + true, + true, + false, + true, + true, + true, + false, + false, + true, + true, + true, + true, + true, + true, + true, + true, + false, + false, + false, + true, + false, + true, + false, + false, + false, + true, + true, + true, + true, + true, + false, + true, + true, + true + ] + }, + "hash": "1689c212d411ebd99a22210519ea2d505a1aabf52ff4136d2ed1b39c70dd1632" +} diff --git a/core/lib/dal/.sqlx/query-16e62660fd14f6d3731e69fa696a36408510bb05c15285dfa7708bc0b044d0c5.json b/core/lib/dal/.sqlx/query-16e62660fd14f6d3731e69fa696a36408510bb05c15285dfa7708bc0b044d0c5.json new file mode 100644 index 00000000000..3ba2e9b5448 --- /dev/null +++ b/core/lib/dal/.sqlx/query-16e62660fd14f6d3731e69fa696a36408510bb05c15285dfa7708bc0b044d0c5.json @@ -0,0 +1,259 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n number,\n l1_batches.timestamp,\n is_finished,\n l1_tx_count,\n l2_tx_count,\n fee_account_address,\n bloom,\n priority_ops_onchain_data,\n hash,\n parent_hash,\n 
commitment,\n compressed_write_logs,\n compressed_contracts,\n eth_prove_tx_id,\n eth_commit_tx_id,\n eth_execute_tx_id,\n merkle_root_hash,\n l2_to_l1_logs,\n l2_to_l1_messages,\n used_contract_hashes,\n compressed_initial_writes,\n compressed_repeated_writes,\n l2_l1_compressed_messages,\n l2_l1_merkle_root,\n l1_gas_price,\n l2_fair_gas_price,\n rollup_last_leaf_index,\n zkporter_is_available,\n l1_batches.bootloader_code_hash,\n l1_batches.default_aa_code_hash,\n base_fee_per_gas,\n aux_data_hash,\n pass_through_data_hash,\n meta_parameters_hash,\n protocol_version,\n compressed_state_diffs,\n system_logs,\n events_queue_commitment,\n bootloader_initial_content_commitment,\n pubdata_input\n FROM\n l1_batches\n LEFT JOIN commitments ON commitments.l1_batch_number = l1_batches.number\n JOIN protocol_versions ON protocol_versions.id = l1_batches.protocol_version\n WHERE\n eth_commit_tx_id IS NULL\n AND number != 0\n AND protocol_versions.bootloader_code_hash = $1\n AND protocol_versions.default_account_code_hash = $2\n AND commitment IS NOT NULL\n AND (\n protocol_versions.id = $3\n OR protocol_versions.upgrade_tx_hash IS NULL\n )\n AND events_queue_commitment IS NOT NULL\n AND bootloader_initial_content_commitment IS NOT NULL\n ORDER BY\n number\n LIMIT\n $4\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "number", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "timestamp", + "type_info": "Int8" + }, + { + "ordinal": 2, + "name": "is_finished", + "type_info": "Bool" + }, + { + "ordinal": 3, + "name": "l1_tx_count", + "type_info": "Int4" + }, + { + "ordinal": 4, + "name": "l2_tx_count", + "type_info": "Int4" + }, + { + "ordinal": 5, + "name": "fee_account_address", + "type_info": "Bytea" + }, + { + "ordinal": 6, + "name": "bloom", + "type_info": "Bytea" + }, + { + "ordinal": 7, + "name": "priority_ops_onchain_data", + "type_info": "ByteaArray" + }, + { + "ordinal": 8, + "name": "hash", + "type_info": "Bytea" + }, + { + "ordinal": 9, + "name": "parent_hash", + "type_info": "Bytea" + }, + { + "ordinal": 10, + "name": "commitment", + "type_info": "Bytea" + }, + { + "ordinal": 11, + "name": "compressed_write_logs", + "type_info": "Bytea" + }, + { + "ordinal": 12, + "name": "compressed_contracts", + "type_info": "Bytea" + }, + { + "ordinal": 13, + "name": "eth_prove_tx_id", + "type_info": "Int4" + }, + { + "ordinal": 14, + "name": "eth_commit_tx_id", + "type_info": "Int4" + }, + { + "ordinal": 15, + "name": "eth_execute_tx_id", + "type_info": "Int4" + }, + { + "ordinal": 16, + "name": "merkle_root_hash", + "type_info": "Bytea" + }, + { + "ordinal": 17, + "name": "l2_to_l1_logs", + "type_info": "ByteaArray" + }, + { + "ordinal": 18, + "name": "l2_to_l1_messages", + "type_info": "ByteaArray" + }, + { + "ordinal": 19, + "name": "used_contract_hashes", + "type_info": "Jsonb" + }, + { + "ordinal": 20, + "name": "compressed_initial_writes", + "type_info": "Bytea" + }, + { + "ordinal": 21, + "name": "compressed_repeated_writes", + "type_info": "Bytea" + }, + { + "ordinal": 22, + "name": "l2_l1_compressed_messages", + "type_info": "Bytea" + }, + { + "ordinal": 23, + "name": "l2_l1_merkle_root", + "type_info": "Bytea" + }, + { + "ordinal": 24, + "name": "l1_gas_price", + "type_info": "Int8" + }, + { + "ordinal": 25, + "name": "l2_fair_gas_price", + "type_info": "Int8" + }, + { + "ordinal": 26, + "name": "rollup_last_leaf_index", + "type_info": "Int8" + }, + { + "ordinal": 27, + "name": "zkporter_is_available", + "type_info": "Bool" + }, + { + "ordinal": 28, + "name": 
"bootloader_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 29, + "name": "default_aa_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 30, + "name": "base_fee_per_gas", + "type_info": "Numeric" + }, + { + "ordinal": 31, + "name": "aux_data_hash", + "type_info": "Bytea" + }, + { + "ordinal": 32, + "name": "pass_through_data_hash", + "type_info": "Bytea" + }, + { + "ordinal": 33, + "name": "meta_parameters_hash", + "type_info": "Bytea" + }, + { + "ordinal": 34, + "name": "protocol_version", + "type_info": "Int4" + }, + { + "ordinal": 35, + "name": "compressed_state_diffs", + "type_info": "Bytea" + }, + { + "ordinal": 36, + "name": "system_logs", + "type_info": "ByteaArray" + }, + { + "ordinal": 37, + "name": "events_queue_commitment", + "type_info": "Bytea" + }, + { + "ordinal": 38, + "name": "bootloader_initial_content_commitment", + "type_info": "Bytea" + }, + { + "ordinal": 39, + "name": "pubdata_input", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Bytea", + "Bytea", + "Int4", + "Int8" + ] + }, + "nullable": [ + false, + false, + false, + false, + false, + false, + false, + false, + true, + true, + true, + true, + true, + true, + true, + true, + true, + false, + false, + false, + true, + true, + true, + true, + false, + false, + true, + true, + true, + true, + false, + true, + true, + true, + true, + true, + false, + true, + true, + true + ] + }, + "hash": "16e62660fd14f6d3731e69fa696a36408510bb05c15285dfa7708bc0b044d0c5" +} diff --git a/core/lib/dal/.sqlx/query-1766c0a21ba5918dd08f4babd8dbfdf10fb1cb43781219586c169fb976204331.json b/core/lib/dal/.sqlx/query-1766c0a21ba5918dd08f4babd8dbfdf10fb1cb43781219586c169fb976204331.json new file mode 100644 index 00000000000..74c11e4c9a0 --- /dev/null +++ b/core/lib/dal/.sqlx/query-1766c0a21ba5918dd08f4babd8dbfdf10fb1cb43781219586c169fb976204331.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n l1_batch_number\n FROM\n initial_writes\n WHERE\n hashed_key = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "l1_batch_number", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [ + "Bytea" + ] + }, + "nullable": [ + false + ] + }, + "hash": "1766c0a21ba5918dd08f4babd8dbfdf10fb1cb43781219586c169fb976204331" +} diff --git a/core/lib/dal/.sqlx/query-1862d3a78e4e9068df1b8ce3bbe9f3f0b5d629fdb5c36ea1bfb93ed246be968e.json b/core/lib/dal/.sqlx/query-1862d3a78e4e9068df1b8ce3bbe9f3f0b5d629fdb5c36ea1bfb93ed246be968e.json new file mode 100644 index 00000000000..1bb2d641bef --- /dev/null +++ b/core/lib/dal/.sqlx/query-1862d3a78e4e9068df1b8ce3bbe9f3f0b5d629fdb5c36ea1bfb93ed246be968e.json @@ -0,0 +1,88 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n transactions.is_priority,\n transactions.initiator_address,\n transactions.gas_limit,\n transactions.gas_per_pubdata_limit,\n transactions.received_at,\n transactions.miniblock_number,\n transactions.error,\n transactions.effective_gas_price,\n transactions.refunded_gas,\n commit_tx.tx_hash AS \"eth_commit_tx_hash?\",\n prove_tx.tx_hash AS \"eth_prove_tx_hash?\",\n execute_tx.tx_hash AS \"eth_execute_tx_hash?\"\n FROM\n transactions\n LEFT JOIN miniblocks ON miniblocks.number = transactions.miniblock_number\n LEFT JOIN l1_batches ON l1_batches.number = miniblocks.l1_batch_number\n LEFT JOIN eth_txs_history AS commit_tx ON (\n l1_batches.eth_commit_tx_id = commit_tx.eth_tx_id\n AND commit_tx.confirmed_at IS NOT NULL\n )\n LEFT JOIN eth_txs_history AS prove_tx ON (\n l1_batches.eth_prove_tx_id = prove_tx.eth_tx_id\n AND 
prove_tx.confirmed_at IS NOT NULL\n )\n LEFT JOIN eth_txs_history AS execute_tx ON (\n l1_batches.eth_execute_tx_id = execute_tx.eth_tx_id\n AND execute_tx.confirmed_at IS NOT NULL\n )\n WHERE\n transactions.hash = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "is_priority", + "type_info": "Bool" + }, + { + "ordinal": 1, + "name": "initiator_address", + "type_info": "Bytea" + }, + { + "ordinal": 2, + "name": "gas_limit", + "type_info": "Numeric" + }, + { + "ordinal": 3, + "name": "gas_per_pubdata_limit", + "type_info": "Numeric" + }, + { + "ordinal": 4, + "name": "received_at", + "type_info": "Timestamp" + }, + { + "ordinal": 5, + "name": "miniblock_number", + "type_info": "Int8" + }, + { + "ordinal": 6, + "name": "error", + "type_info": "Varchar" + }, + { + "ordinal": 7, + "name": "effective_gas_price", + "type_info": "Numeric" + }, + { + "ordinal": 8, + "name": "refunded_gas", + "type_info": "Int8" + }, + { + "ordinal": 9, + "name": "eth_commit_tx_hash?", + "type_info": "Text" + }, + { + "ordinal": 10, + "name": "eth_prove_tx_hash?", + "type_info": "Text" + }, + { + "ordinal": 11, + "name": "eth_execute_tx_hash?", + "type_info": "Text" + } + ], + "parameters": { + "Left": [ + "Bytea" + ] + }, + "nullable": [ + false, + false, + true, + true, + false, + true, + true, + true, + false, + false, + false, + false + ] + }, + "hash": "1862d3a78e4e9068df1b8ce3bbe9f3f0b5d629fdb5c36ea1bfb93ed246be968e" +} diff --git a/core/lib/dal/.sqlx/query-18820f4ab0c3d2cc9187c5660f9f50e423eb6134659fe52bcc2b27ad16740c96.json b/core/lib/dal/.sqlx/query-18820f4ab0c3d2cc9187c5660f9f50e423eb6134659fe52bcc2b27ad16740c96.json new file mode 100644 index 00000000000..73887716338 --- /dev/null +++ b/core/lib/dal/.sqlx/query-18820f4ab0c3d2cc9187c5660f9f50e423eb6134659fe52bcc2b27ad16740c96.json @@ -0,0 +1,14 @@ +{ + "db_name": "PostgreSQL", + "query": "\n DELETE FROM transactions\n WHERE\n in_mempool = TRUE\n AND initiator_address = ANY ($1)\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "ByteaArray" + ] + }, + "nullable": [] + }, + "hash": "18820f4ab0c3d2cc9187c5660f9f50e423eb6134659fe52bcc2b27ad16740c96" +} diff --git a/core/lib/dal/.sqlx/query-19314d74e94b610e2da6d728ca37ea964610e131d45f720f7a7b2a130fe9ed89.json b/core/lib/dal/.sqlx/query-19314d74e94b610e2da6d728ca37ea964610e131d45f720f7a7b2a130fe9ed89.json new file mode 100644 index 00000000000..88093dcee18 --- /dev/null +++ b/core/lib/dal/.sqlx/query-19314d74e94b610e2da6d728ca37ea964610e131d45f720f7a7b2a130fe9ed89.json @@ -0,0 +1,17 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE contract_verification_requests\n SET\n status = 'failed',\n updated_at = NOW(),\n error = $2,\n compilation_errors = $3,\n panic_message = $4\n WHERE\n id = $1\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8", + "Text", + "Jsonb", + "Text" + ] + }, + "nullable": [] + }, + "hash": "19314d74e94b610e2da6d728ca37ea964610e131d45f720f7a7b2a130fe9ed89" +} diff --git a/core/lib/dal/.sqlx/query-19545806b8f772075096e69f8665d98a3d9f7df162ae22a98c3c7620fcd13bd2.json b/core/lib/dal/.sqlx/query-19545806b8f772075096e69f8665d98a3d9f7df162ae22a98c3c7620fcd13bd2.json new file mode 100644 index 00000000000..3273d9654aa --- /dev/null +++ b/core/lib/dal/.sqlx/query-19545806b8f772075096e69f8665d98a3d9f7df162ae22a98c3c7620fcd13bd2.json @@ -0,0 +1,80 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n *\n FROM\n protocol_versions\n ORDER BY\n id DESC\n LIMIT\n 1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": 
"id", + "type_info": "Int4" + }, + { + "ordinal": 1, + "name": "timestamp", + "type_info": "Int8" + }, + { + "ordinal": 2, + "name": "recursion_scheduler_level_vk_hash", + "type_info": "Bytea" + }, + { + "ordinal": 3, + "name": "recursion_node_level_vk_hash", + "type_info": "Bytea" + }, + { + "ordinal": 4, + "name": "recursion_leaf_level_vk_hash", + "type_info": "Bytea" + }, + { + "ordinal": 5, + "name": "recursion_circuits_set_vks_hash", + "type_info": "Bytea" + }, + { + "ordinal": 6, + "name": "bootloader_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 7, + "name": "default_account_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 8, + "name": "verifier_address", + "type_info": "Bytea" + }, + { + "ordinal": 9, + "name": "upgrade_tx_hash", + "type_info": "Bytea" + }, + { + "ordinal": 10, + "name": "created_at", + "type_info": "Timestamp" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false, + false, + false, + false, + false, + false, + false, + false, + false, + true, + false + ] + }, + "hash": "19545806b8f772075096e69f8665d98a3d9f7df162ae22a98c3c7620fcd13bd2" +} diff --git a/core/lib/dal/.sqlx/query-19b89495be8aa735db039ccc8a262786c58e54f132588c48f07d9537cf21d3ed.json b/core/lib/dal/.sqlx/query-19b89495be8aa735db039ccc8a262786c58e54f132588c48f07d9537cf21d3ed.json new file mode 100644 index 00000000000..b1156c907d4 --- /dev/null +++ b/core/lib/dal/.sqlx/query-19b89495be8aa735db039ccc8a262786c58e54f132588c48f07d9537cf21d3ed.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "SELECT sent_at_block FROM eth_txs_history WHERE eth_tx_id = $1 AND sent_at_block IS NOT NULL ORDER BY created_at ASC LIMIT 1", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "sent_at_block", + "type_info": "Int4" + } + ], + "parameters": { + "Left": [ + "Int4" + ] + }, + "nullable": [ + true + ] + }, + "hash": "19b89495be8aa735db039ccc8a262786c58e54f132588c48f07d9537cf21d3ed" +} diff --git a/core/lib/dal/.sqlx/query-1ad3bbd791f3ff0d31683bf59187b84c5fd52f0352f0f0e311d054cb9e45b07e.json b/core/lib/dal/.sqlx/query-1ad3bbd791f3ff0d31683bf59187b84c5fd52f0352f0f0e311d054cb9e45b07e.json new file mode 100644 index 00000000000..460f81615bf --- /dev/null +++ b/core/lib/dal/.sqlx/query-1ad3bbd791f3ff0d31683bf59187b84c5fd52f0352f0f0e311d054cb9e45b07e.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT DISTINCT\n ON (hashed_key) hashed_key\n FROM\n (\n SELECT\n *\n FROM\n storage_logs\n WHERE\n miniblock_number > $1\n ) inn\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "hashed_key", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false + ] + }, + "hash": "1ad3bbd791f3ff0d31683bf59187b84c5fd52f0352f0f0e311d054cb9e45b07e" +} diff --git a/core/lib/dal/.sqlx/query-1b4ebbfc96b4fd66ecbe64a6be80a01a6c7cbe9297cbb55d42533fddc18719b6.json b/core/lib/dal/.sqlx/query-1b4ebbfc96b4fd66ecbe64a6be80a01a6c7cbe9297cbb55d42533fddc18719b6.json new file mode 100644 index 00000000000..8b9995b3b0f --- /dev/null +++ b/core/lib/dal/.sqlx/query-1b4ebbfc96b4fd66ecbe64a6be80a01a6c7cbe9297cbb55d42533fddc18719b6.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n MAX(priority_op_id) AS \"op_id\"\n FROM\n transactions\n WHERE\n is_priority = TRUE\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "op_id", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + null + ] + }, + "hash": 
"1b4ebbfc96b4fd66ecbe64a6be80a01a6c7cbe9297cbb55d42533fddc18719b6" +} diff --git a/core/lib/dal/.sqlx/query-1bc6597117db032b87df33040d61610ffa7f169d560e79e89b99eedf681c6773.json b/core/lib/dal/.sqlx/query-1bc6597117db032b87df33040d61610ffa7f169d560e79e89b99eedf681c6773.json new file mode 100644 index 00000000000..0351691c395 --- /dev/null +++ b/core/lib/dal/.sqlx/query-1bc6597117db032b87df33040d61610ffa7f169d560e79e89b99eedf681c6773.json @@ -0,0 +1,16 @@ +{ + "db_name": "PostgreSQL", + "query": "\n INSERT INTO\n scheduler_witness_jobs_fri (\n l1_batch_number,\n scheduler_partial_input_blob_url,\n protocol_version,\n status,\n created_at,\n updated_at\n )\n VALUES\n ($1, $2, $3, 'waiting_for_proofs', NOW(), NOW())\n ON CONFLICT (l1_batch_number) DO\n UPDATE\n SET\n updated_at = NOW()\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8", + "Text", + "Int4" + ] + }, + "nullable": [] + }, + "hash": "1bc6597117db032b87df33040d61610ffa7f169d560e79e89b99eedf681c6773" +} diff --git a/core/lib/dal/.sqlx/query-1c60010ded4e79886890a745a050fa6d65c05d8144bdfd143480834ead4bd8d5.json b/core/lib/dal/.sqlx/query-1c60010ded4e79886890a745a050fa6d65c05d8144bdfd143480834ead4bd8d5.json new file mode 100644 index 00000000000..a9d5b42d214 --- /dev/null +++ b/core/lib/dal/.sqlx/query-1c60010ded4e79886890a745a050fa6d65c05d8144bdfd143480834ead4bd8d5.json @@ -0,0 +1,76 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE contract_verification_requests\n SET\n status = 'in_progress',\n attempts = attempts + 1,\n updated_at = NOW(),\n processing_started_at = NOW()\n WHERE\n id = (\n SELECT\n id\n FROM\n contract_verification_requests\n WHERE\n status = 'queued'\n OR (\n status = 'in_progress'\n AND processing_started_at < NOW() - $1::INTERVAL\n )\n ORDER BY\n created_at\n LIMIT\n 1\n FOR UPDATE\n SKIP LOCKED\n )\n RETURNING\n id,\n contract_address,\n source_code,\n contract_name,\n zk_compiler_version,\n compiler_version,\n optimization_used,\n optimizer_mode,\n constructor_arguments,\n is_system\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "id", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "contract_address", + "type_info": "Bytea" + }, + { + "ordinal": 2, + "name": "source_code", + "type_info": "Text" + }, + { + "ordinal": 3, + "name": "contract_name", + "type_info": "Text" + }, + { + "ordinal": 4, + "name": "zk_compiler_version", + "type_info": "Text" + }, + { + "ordinal": 5, + "name": "compiler_version", + "type_info": "Text" + }, + { + "ordinal": 6, + "name": "optimization_used", + "type_info": "Bool" + }, + { + "ordinal": 7, + "name": "optimizer_mode", + "type_info": "Text" + }, + { + "ordinal": 8, + "name": "constructor_arguments", + "type_info": "Bytea" + }, + { + "ordinal": 9, + "name": "is_system", + "type_info": "Bool" + } + ], + "parameters": { + "Left": [ + "Interval" + ] + }, + "nullable": [ + false, + false, + false, + false, + false, + false, + false, + true, + false, + false + ] + }, + "hash": "1c60010ded4e79886890a745a050fa6d65c05d8144bdfd143480834ead4bd8d5" +} diff --git a/core/lib/dal/.sqlx/query-1c994d418ada78586de829fc2d34d26e48e968c79834858c98b7a7f9dfc81910.json b/core/lib/dal/.sqlx/query-1c994d418ada78586de829fc2d34d26e48e968c79834858c98b7a7f9dfc81910.json new file mode 100644 index 00000000000..747105fb444 --- /dev/null +++ b/core/lib/dal/.sqlx/query-1c994d418ada78586de829fc2d34d26e48e968c79834858c98b7a7f9dfc81910.json @@ -0,0 +1,14 @@ +{ + "db_name": "PostgreSQL", + "query": "\n DELETE FROM l2_to_l1_logs\n WHERE\n miniblock_number > 
$1\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [] + }, + "hash": "1c994d418ada78586de829fc2d34d26e48e968c79834858c98b7a7f9dfc81910" +} diff --git a/core/lib/dal/.sqlx/query-1d2cc4b485536af350089cf7950be3b85419fde77038dd3de6c55aa9c55d375c.json b/core/lib/dal/.sqlx/query-1d2cc4b485536af350089cf7950be3b85419fde77038dd3de6c55aa9c55d375c.json new file mode 100644 index 00000000000..b8929febf76 --- /dev/null +++ b/core/lib/dal/.sqlx/query-1d2cc4b485536af350089cf7950be3b85419fde77038dd3de6c55aa9c55d375c.json @@ -0,0 +1,61 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n storage.value AS \"value!\",\n tokens.l1_address AS \"l1_address!\",\n tokens.l2_address AS \"l2_address!\",\n tokens.symbol AS \"symbol!\",\n tokens.name AS \"name!\",\n tokens.decimals AS \"decimals!\",\n tokens.usd_price AS \"usd_price?\"\n FROM\n storage\n INNER JOIN tokens ON storage.address = tokens.l2_address\n OR (\n storage.address = $2\n AND tokens.l2_address = $3\n )\n WHERE\n storage.hashed_key = ANY ($1)\n AND storage.value != $4\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "value!", + "type_info": "Bytea" + }, + { + "ordinal": 1, + "name": "l1_address!", + "type_info": "Bytea" + }, + { + "ordinal": 2, + "name": "l2_address!", + "type_info": "Bytea" + }, + { + "ordinal": 3, + "name": "symbol!", + "type_info": "Varchar" + }, + { + "ordinal": 4, + "name": "name!", + "type_info": "Varchar" + }, + { + "ordinal": 5, + "name": "decimals!", + "type_info": "Int4" + }, + { + "ordinal": 6, + "name": "usd_price?", + "type_info": "Numeric" + } + ], + "parameters": { + "Left": [ + "ByteaArray", + "Bytea", + "Bytea", + "Bytea" + ] + }, + "nullable": [ + false, + false, + false, + false, + false, + false, + true + ] + }, + "hash": "1d2cc4b485536af350089cf7950be3b85419fde77038dd3de6c55aa9c55d375c" +} diff --git a/core/lib/dal/.sqlx/query-1d6b698b241cb6c5efd070a98165f6760cfeac185330d1d9c5cdb5b383ed8ed4.json b/core/lib/dal/.sqlx/query-1d6b698b241cb6c5efd070a98165f6760cfeac185330d1d9c5cdb5b383ed8ed4.json new file mode 100644 index 00000000000..7531f7ed4c0 --- /dev/null +++ b/core/lib/dal/.sqlx/query-1d6b698b241cb6c5efd070a98165f6760cfeac185330d1d9c5cdb5b383ed8ed4.json @@ -0,0 +1,30 @@ +{ + "db_name": "PostgreSQL", + "query": "\n INSERT INTO\n contract_verification_requests (\n contract_address,\n source_code,\n contract_name,\n zk_compiler_version,\n compiler_version,\n optimization_used,\n optimizer_mode,\n constructor_arguments,\n is_system,\n status,\n created_at,\n updated_at\n )\n VALUES\n ($1, $2, $3, $4, $5, $6, $7, $8, $9, 'queued', NOW(), NOW())\n RETURNING\n id\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "id", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [ + "Bytea", + "Text", + "Text", + "Text", + "Text", + "Bool", + "Text", + "Bytea", + "Bool" + ] + }, + "nullable": [ + false + ] + }, + "hash": "1d6b698b241cb6c5efd070a98165f6760cfeac185330d1d9c5cdb5b383ed8ed4" +} diff --git a/core/lib/dal/.sqlx/query-1dcb3afb0c1947f92981f61d95c099c4591ce3f8d51f3df99db0165e086f96af.json b/core/lib/dal/.sqlx/query-1dcb3afb0c1947f92981f61d95c099c4591ce3f8d51f3df99db0165e086f96af.json new file mode 100644 index 00000000000..dd142601d00 --- /dev/null +++ b/core/lib/dal/.sqlx/query-1dcb3afb0c1947f92981f61d95c099c4591ce3f8d51f3df99db0165e086f96af.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n bytecode\n FROM\n factory_deps\n WHERE\n bytecode_hash = $1\n ", + "describe": { + "columns": [ + { + 
"ordinal": 0, + "name": "bytecode", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Bytea" + ] + }, + "nullable": [ + false + ] + }, + "hash": "1dcb3afb0c1947f92981f61d95c099c4591ce3f8d51f3df99db0165e086f96af" +} diff --git a/core/lib/dal/.sqlx/query-1ea37ef1c3df72e5e9c50cfa1675fc7f60618209d0132e7937a1347b7e94b212.json b/core/lib/dal/.sqlx/query-1ea37ef1c3df72e5e9c50cfa1675fc7f60618209d0132e7937a1347b7e94b212.json new file mode 100644 index 00000000000..1ae6117e4a1 --- /dev/null +++ b/core/lib/dal/.sqlx/query-1ea37ef1c3df72e5e9c50cfa1675fc7f60618209d0132e7937a1347b7e94b212.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n number\n FROM\n l1_batches\n WHERE\n eth_prove_tx_id IS NOT NULL\n AND eth_execute_tx_id IS NULL\n ORDER BY\n number\n LIMIT\n 1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "number", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false + ] + }, + "hash": "1ea37ef1c3df72e5e9c50cfa1675fc7f60618209d0132e7937a1347b7e94b212" +} diff --git a/core/lib/dal/.sqlx/query-1ed2d7e5e98b15420a21650809d710ce910d0c9138d85cb55e16459c757dea03.json b/core/lib/dal/.sqlx/query-1ed2d7e5e98b15420a21650809d710ce910d0c9138d85cb55e16459c757dea03.json new file mode 100644 index 00000000000..9cf4cc1e68e --- /dev/null +++ b/core/lib/dal/.sqlx/query-1ed2d7e5e98b15420a21650809d710ce910d0c9138d85cb55e16459c757dea03.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n protocol_version\n FROM\n l1_batches\n ORDER BY\n number DESC\n LIMIT\n 1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "protocol_version", + "type_info": "Int4" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + true + ] + }, + "hash": "1ed2d7e5e98b15420a21650809d710ce910d0c9138d85cb55e16459c757dea03" +} diff --git a/core/lib/dal/.sqlx/query-1f25016c41169aa4ab14db2faf7b2d0413d0f89c309de4b31254c309116ea60c.json b/core/lib/dal/.sqlx/query-1f25016c41169aa4ab14db2faf7b2d0413d0f89c309de4b31254c309116ea60c.json new file mode 100644 index 00000000000..b535ae5a863 --- /dev/null +++ b/core/lib/dal/.sqlx/query-1f25016c41169aa4ab14db2faf7b2d0413d0f89c309de4b31254c309116ea60c.json @@ -0,0 +1,17 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE tokens\n SET\n token_list_name = $2,\n token_list_symbol = $3,\n token_list_decimals = $4,\n well_known = TRUE,\n updated_at = NOW()\n WHERE\n l1_address = $1\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Bytea", + "Varchar", + "Varchar", + "Int4" + ] + }, + "nullable": [] + }, + "hash": "1f25016c41169aa4ab14db2faf7b2d0413d0f89c309de4b31254c309116ea60c" +} diff --git a/core/lib/dal/.sqlx/query-1f46524410ce0f193dc6547499bde995ddddc621ee2149f08f905af2d8aadd03.json b/core/lib/dal/.sqlx/query-1f46524410ce0f193dc6547499bde995ddddc621ee2149f08f905af2d8aadd03.json new file mode 100644 index 00000000000..077554611f4 --- /dev/null +++ b/core/lib/dal/.sqlx/query-1f46524410ce0f193dc6547499bde995ddddc621ee2149f08f905af2d8aadd03.json @@ -0,0 +1,34 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE transactions\n SET\n hash = data_table.hash,\n signature = data_table.signature,\n gas_limit = data_table.gas_limit,\n max_fee_per_gas = data_table.max_fee_per_gas,\n max_priority_fee_per_gas = data_table.max_priority_fee_per_gas,\n gas_per_pubdata_limit = data_table.gas_per_pubdata_limit,\n input = data_table.input,\n data = data_table.data,\n tx_format = data_table.tx_format,\n miniblock_number = $21,\n index_in_block = 
data_table.index_in_block,\n error = NULLIF(data_table.error, ''),\n effective_gas_price = data_table.effective_gas_price,\n execution_info = data_table.new_execution_info,\n refunded_gas = data_table.refunded_gas,\n value = data_table.value,\n contract_address = data_table.contract_address,\n paymaster = data_table.paymaster,\n paymaster_input = data_table.paymaster_input,\n in_mempool = FALSE,\n updated_at = NOW()\n FROM\n (\n SELECT\n data_table_temp.*\n FROM\n (\n SELECT\n UNNEST($1::bytea[]) AS initiator_address,\n UNNEST($2::INT[]) AS nonce,\n UNNEST($3::bytea[]) AS hash,\n UNNEST($4::bytea[]) AS signature,\n UNNEST($5::NUMERIC[]) AS gas_limit,\n UNNEST($6::NUMERIC[]) AS max_fee_per_gas,\n UNNEST($7::NUMERIC[]) AS max_priority_fee_per_gas,\n UNNEST($8::NUMERIC[]) AS gas_per_pubdata_limit,\n UNNEST($9::INT[]) AS tx_format,\n UNNEST($10::INTEGER[]) AS index_in_block,\n UNNEST($11::VARCHAR[]) AS error,\n UNNEST($12::NUMERIC[]) AS effective_gas_price,\n UNNEST($13::jsonb[]) AS new_execution_info,\n UNNEST($14::bytea[]) AS input,\n UNNEST($15::jsonb[]) AS data,\n UNNEST($16::BIGINT[]) AS refunded_gas,\n UNNEST($17::NUMERIC[]) AS value,\n UNNEST($18::bytea[]) AS contract_address,\n UNNEST($19::bytea[]) AS paymaster,\n UNNEST($20::bytea[]) AS paymaster_input\n ) AS data_table_temp\n JOIN transactions ON transactions.initiator_address = data_table_temp.initiator_address\n AND transactions.nonce = data_table_temp.nonce\n ORDER BY\n transactions.hash\n ) AS data_table\n WHERE\n transactions.initiator_address = data_table.initiator_address\n AND transactions.nonce = data_table.nonce\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "ByteaArray", + "Int4Array", + "ByteaArray", + "ByteaArray", + "NumericArray", + "NumericArray", + "NumericArray", + "NumericArray", + "Int4Array", + "Int4Array", + "VarcharArray", + "NumericArray", + "JsonbArray", + "ByteaArray", + "JsonbArray", + "Int8Array", + "NumericArray", + "ByteaArray", + "ByteaArray", + "ByteaArray", + "Int8" + ] + }, + "nullable": [] + }, + "hash": "1f46524410ce0f193dc6547499bde995ddddc621ee2149f08f905af2d8aadd03" +} diff --git a/core/lib/dal/.sqlx/query-1f75f2d88c1d2496e48b02f374e492cf2545944291dd0d42b937c0d0c7eefd47.json b/core/lib/dal/.sqlx/query-1f75f2d88c1d2496e48b02f374e492cf2545944291dd0d42b937c0d0c7eefd47.json new file mode 100644 index 00000000000..362c775ea5a --- /dev/null +++ b/core/lib/dal/.sqlx/query-1f75f2d88c1d2496e48b02f374e492cf2545944291dd0d42b937c0d0c7eefd47.json @@ -0,0 +1,106 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n l1_batches.number,\n l1_batches.timestamp,\n l1_batches.l1_tx_count,\n l1_batches.l2_tx_count,\n l1_batches.hash AS \"root_hash?\",\n commit_tx.tx_hash AS \"commit_tx_hash?\",\n commit_tx.confirmed_at AS \"committed_at?\",\n prove_tx.tx_hash AS \"prove_tx_hash?\",\n prove_tx.confirmed_at AS \"proven_at?\",\n execute_tx.tx_hash AS \"execute_tx_hash?\",\n execute_tx.confirmed_at AS \"executed_at?\",\n l1_batches.l1_gas_price,\n l1_batches.l2_fair_gas_price,\n l1_batches.bootloader_code_hash,\n l1_batches.default_aa_code_hash\n FROM\n l1_batches\n LEFT JOIN eth_txs_history AS commit_tx ON (\n l1_batches.eth_commit_tx_id = commit_tx.eth_tx_id\n AND commit_tx.confirmed_at IS NOT NULL\n )\n LEFT JOIN eth_txs_history AS prove_tx ON (\n l1_batches.eth_prove_tx_id = prove_tx.eth_tx_id\n AND prove_tx.confirmed_at IS NOT NULL\n )\n LEFT JOIN eth_txs_history AS execute_tx ON (\n l1_batches.eth_execute_tx_id = execute_tx.eth_tx_id\n AND execute_tx.confirmed_at IS NOT NULL\n )\n WHERE\n 
l1_batches.number = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "number", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "timestamp", + "type_info": "Int8" + }, + { + "ordinal": 2, + "name": "l1_tx_count", + "type_info": "Int4" + }, + { + "ordinal": 3, + "name": "l2_tx_count", + "type_info": "Int4" + }, + { + "ordinal": 4, + "name": "root_hash?", + "type_info": "Bytea" + }, + { + "ordinal": 5, + "name": "commit_tx_hash?", + "type_info": "Text" + }, + { + "ordinal": 6, + "name": "committed_at?", + "type_info": "Timestamp" + }, + { + "ordinal": 7, + "name": "prove_tx_hash?", + "type_info": "Text" + }, + { + "ordinal": 8, + "name": "proven_at?", + "type_info": "Timestamp" + }, + { + "ordinal": 9, + "name": "execute_tx_hash?", + "type_info": "Text" + }, + { + "ordinal": 10, + "name": "executed_at?", + "type_info": "Timestamp" + }, + { + "ordinal": 11, + "name": "l1_gas_price", + "type_info": "Int8" + }, + { + "ordinal": 12, + "name": "l2_fair_gas_price", + "type_info": "Int8" + }, + { + "ordinal": 13, + "name": "bootloader_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 14, + "name": "default_aa_code_hash", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false, + false, + false, + false, + true, + false, + true, + false, + true, + false, + true, + false, + false, + true, + true + ] + }, + "hash": "1f75f2d88c1d2496e48b02f374e492cf2545944291dd0d42b937c0d0c7eefd47" +} diff --git a/core/lib/dal/.sqlx/query-2003dcf7bc807c7d345368538accd9b0128f82306e27e4c7258116082a54ab95.json b/core/lib/dal/.sqlx/query-2003dcf7bc807c7d345368538accd9b0128f82306e27e4c7258116082a54ab95.json new file mode 100644 index 00000000000..77177e405f1 --- /dev/null +++ b/core/lib/dal/.sqlx/query-2003dcf7bc807c7d345368538accd9b0128f82306e27e4c7258116082a54ab95.json @@ -0,0 +1,29 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n transactions.hash,\n transactions.received_at\n FROM\n transactions\n LEFT JOIN miniblocks ON miniblocks.number = miniblock_number\n WHERE\n received_at > $1\n ORDER BY\n received_at ASC\n LIMIT\n $2\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "hash", + "type_info": "Bytea" + }, + { + "ordinal": 1, + "name": "received_at", + "type_info": "Timestamp" + } + ], + "parameters": { + "Left": [ + "Timestamp", + "Int8" + ] + }, + "nullable": [ + false, + false + ] + }, + "hash": "2003dcf7bc807c7d345368538accd9b0128f82306e27e4c7258116082a54ab95" +} diff --git a/core/lib/dal/.sqlx/query-2028ba507f3ccd474f0261e571eb19a3a7feec950cb3e503588cf55d954a493a.json b/core/lib/dal/.sqlx/query-2028ba507f3ccd474f0261e571eb19a3a7feec950cb3e503588cf55d954a493a.json new file mode 100644 index 00000000000..8aaefe3c6ba --- /dev/null +++ b/core/lib/dal/.sqlx/query-2028ba507f3ccd474f0261e571eb19a3a7feec950cb3e503588cf55d954a493a.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n bytecode\n FROM\n factory_deps\n WHERE\n miniblock_number <= $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "bytecode", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false + ] + }, + "hash": "2028ba507f3ccd474f0261e571eb19a3a7feec950cb3e503588cf55d954a493a" +} diff --git a/core/lib/dal/.sqlx/query-20f84f9ec21459d8c7ad53241758eeab159533211d2ddbef41e6ff0ba937d04a.json b/core/lib/dal/.sqlx/query-20f84f9ec21459d8c7ad53241758eeab159533211d2ddbef41e6ff0ba937d04a.json new file mode 100644 index 00000000000..5f7048a8a20 --- /dev/null +++ 
b/core/lib/dal/.sqlx/query-20f84f9ec21459d8c7ad53241758eeab159533211d2ddbef41e6ff0ba937d04a.json @@ -0,0 +1,14 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE l1_batches\n SET\n skip_proof = TRUE\n WHERE\n number = $1\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [] + }, + "hash": "20f84f9ec21459d8c7ad53241758eeab159533211d2ddbef41e6ff0ba937d04a" +} diff --git a/core/lib/dal/.sqlx/query-23be43bf705d679ca751c89353716065fcad42c6b621efb3a135a16b477dcfd9.json b/core/lib/dal/.sqlx/query-23be43bf705d679ca751c89353716065fcad42c6b621efb3a135a16b477dcfd9.json new file mode 100644 index 00000000000..8c63a924c0a --- /dev/null +++ b/core/lib/dal/.sqlx/query-23be43bf705d679ca751c89353716065fcad42c6b621efb3a135a16b477dcfd9.json @@ -0,0 +1,86 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n *\n FROM\n eth_txs\n WHERE\n confirmed_eth_tx_history_id IS NULL\n AND id <= (\n SELECT\n COALESCE(MAX(eth_tx_id), 0)\n FROM\n eth_txs_history\n WHERE\n sent_at_block IS NOT NULL\n )\n ORDER BY\n id\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "id", + "type_info": "Int4" + }, + { + "ordinal": 1, + "name": "nonce", + "type_info": "Int8" + }, + { + "ordinal": 2, + "name": "raw_tx", + "type_info": "Bytea" + }, + { + "ordinal": 3, + "name": "contract_address", + "type_info": "Text" + }, + { + "ordinal": 4, + "name": "tx_type", + "type_info": "Text" + }, + { + "ordinal": 5, + "name": "gas_used", + "type_info": "Int8" + }, + { + "ordinal": 6, + "name": "created_at", + "type_info": "Timestamp" + }, + { + "ordinal": 7, + "name": "updated_at", + "type_info": "Timestamp" + }, + { + "ordinal": 8, + "name": "has_failed", + "type_info": "Bool" + }, + { + "ordinal": 9, + "name": "sent_at_block", + "type_info": "Int4" + }, + { + "ordinal": 10, + "name": "confirmed_eth_tx_history_id", + "type_info": "Int4" + }, + { + "ordinal": 11, + "name": "predicted_gas_cost", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false, + false, + false, + false, + false, + true, + false, + false, + false, + true, + true, + false + ] + }, + "hash": "23be43bf705d679ca751c89353716065fcad42c6b621efb3a135a16b477dcfd9" +} diff --git a/core/lib/dal/.sqlx/query-245dc5bb82cc82df38e4440a7746ca08324bc86a72e4ea85c9c7962a6c8c9e30.json b/core/lib/dal/.sqlx/query-245dc5bb82cc82df38e4440a7746ca08324bc86a72e4ea85c9c7962a6c8c9e30.json new file mode 100644 index 00000000000..0b9c4aa59b7 --- /dev/null +++ b/core/lib/dal/.sqlx/query-245dc5bb82cc82df38e4440a7746ca08324bc86a72e4ea85c9c7962a6c8c9e30.json @@ -0,0 +1,16 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE l1_batches\n SET\n eth_prove_tx_id = $1,\n updated_at = NOW()\n WHERE\n number BETWEEN $2 AND $3\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int4", + "Int8", + "Int8" + ] + }, + "nullable": [] + }, + "hash": "245dc5bb82cc82df38e4440a7746ca08324bc86a72e4ea85c9c7962a6c8c9e30" +} diff --git a/core/lib/dal/.sqlx/query-24722ee4ced7f03e60b1b5ecaaa5234d536b064951a67d826ac49b7a3a095a1a.json b/core/lib/dal/.sqlx/query-24722ee4ced7f03e60b1b5ecaaa5234d536b064951a67d826ac49b7a3a095a1a.json new file mode 100644 index 00000000000..194f4faedb1 --- /dev/null +++ b/core/lib/dal/.sqlx/query-24722ee4ced7f03e60b1b5ecaaa5234d536b064951a67d826ac49b7a3a095a1a.json @@ -0,0 +1,28 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n hashed_key,\n INDEX\n FROM\n initial_writes\n WHERE\n l1_batch_number = $1\n ORDER BY\n INDEX\n ", + "describe": { + "columns": [ + { + "ordinal": 
0, + "name": "hashed_key", + "type_info": "Bytea" + }, + { + "ordinal": 1, + "name": "index", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false, + false + ] + }, + "hash": "24722ee4ced7f03e60b1b5ecaaa5234d536b064951a67d826ac49b7a3a095a1a" +} diff --git a/core/lib/dal/.sqlx/query-249cb862d44196cb6dc3945e907717b0dd3cec64b0b29f59b273f1c6952e01da.json b/core/lib/dal/.sqlx/query-249cb862d44196cb6dc3945e907717b0dd3cec64b0b29f59b273f1c6952e01da.json new file mode 100644 index 00000000000..38419af111f --- /dev/null +++ b/core/lib/dal/.sqlx/query-249cb862d44196cb6dc3945e907717b0dd3cec64b0b29f59b273f1c6952e01da.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n bytecode_hash\n FROM\n factory_deps\n WHERE\n miniblock_number > $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "bytecode_hash", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false + ] + }, + "hash": "249cb862d44196cb6dc3945e907717b0dd3cec64b0b29f59b273f1c6952e01da" +} diff --git a/core/lib/dal/.sqlx/query-25aad4298d2459ef5aea7c4ea82eda1da000848ed4abf309b68989da33e1ce5a.json b/core/lib/dal/.sqlx/query-25aad4298d2459ef5aea7c4ea82eda1da000848ed4abf309b68989da33e1ce5a.json new file mode 100644 index 00000000000..d966ff14c99 --- /dev/null +++ b/core/lib/dal/.sqlx/query-25aad4298d2459ef5aea7c4ea82eda1da000848ed4abf309b68989da33e1ce5a.json @@ -0,0 +1,124 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n miniblocks.number,\n COALESCE(\n miniblocks.l1_batch_number,\n (\n SELECT\n (MAX(number) + 1)\n FROM\n l1_batches\n )\n ) AS \"l1_batch_number!\",\n miniblocks.timestamp,\n miniblocks.l1_tx_count,\n miniblocks.l2_tx_count,\n miniblocks.hash AS \"root_hash?\",\n commit_tx.tx_hash AS \"commit_tx_hash?\",\n commit_tx.confirmed_at AS \"committed_at?\",\n prove_tx.tx_hash AS \"prove_tx_hash?\",\n prove_tx.confirmed_at AS \"proven_at?\",\n execute_tx.tx_hash AS \"execute_tx_hash?\",\n execute_tx.confirmed_at AS \"executed_at?\",\n miniblocks.l1_gas_price,\n miniblocks.l2_fair_gas_price,\n miniblocks.bootloader_code_hash,\n miniblocks.default_aa_code_hash,\n miniblocks.protocol_version,\n l1_batches.fee_account_address AS \"fee_account_address?\"\n FROM\n miniblocks\n LEFT JOIN l1_batches ON miniblocks.l1_batch_number = l1_batches.number\n LEFT JOIN eth_txs_history AS commit_tx ON (\n l1_batches.eth_commit_tx_id = commit_tx.eth_tx_id\n AND commit_tx.confirmed_at IS NOT NULL\n )\n LEFT JOIN eth_txs_history AS prove_tx ON (\n l1_batches.eth_prove_tx_id = prove_tx.eth_tx_id\n AND prove_tx.confirmed_at IS NOT NULL\n )\n LEFT JOIN eth_txs_history AS execute_tx ON (\n l1_batches.eth_execute_tx_id = execute_tx.eth_tx_id\n AND execute_tx.confirmed_at IS NOT NULL\n )\n WHERE\n miniblocks.number = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "number", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "l1_batch_number!", + "type_info": "Int8" + }, + { + "ordinal": 2, + "name": "timestamp", + "type_info": "Int8" + }, + { + "ordinal": 3, + "name": "l1_tx_count", + "type_info": "Int4" + }, + { + "ordinal": 4, + "name": "l2_tx_count", + "type_info": "Int4" + }, + { + "ordinal": 5, + "name": "root_hash?", + "type_info": "Bytea" + }, + { + "ordinal": 6, + "name": "commit_tx_hash?", + "type_info": "Text" + }, + { + "ordinal": 7, + "name": "committed_at?", + "type_info": "Timestamp" + }, + { + "ordinal": 8, + "name": "prove_tx_hash?", + "type_info": "Text" + }, + { + "ordinal": 9, + 
"name": "proven_at?", + "type_info": "Timestamp" + }, + { + "ordinal": 10, + "name": "execute_tx_hash?", + "type_info": "Text" + }, + { + "ordinal": 11, + "name": "executed_at?", + "type_info": "Timestamp" + }, + { + "ordinal": 12, + "name": "l1_gas_price", + "type_info": "Int8" + }, + { + "ordinal": 13, + "name": "l2_fair_gas_price", + "type_info": "Int8" + }, + { + "ordinal": 14, + "name": "bootloader_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 15, + "name": "default_aa_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 16, + "name": "protocol_version", + "type_info": "Int4" + }, + { + "ordinal": 17, + "name": "fee_account_address?", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false, + null, + false, + false, + false, + false, + false, + true, + false, + true, + false, + true, + false, + false, + true, + true, + true, + false + ] + }, + "hash": "25aad4298d2459ef5aea7c4ea82eda1da000848ed4abf309b68989da33e1ce5a" +} diff --git a/core/lib/dal/.sqlx/query-26cb272c2a46a267c47681e0f1f07997b7e24682da56f84d812da2b9aeb14ca2.json b/core/lib/dal/.sqlx/query-26cb272c2a46a267c47681e0f1f07997b7e24682da56f84d812da2b9aeb14ca2.json new file mode 100644 index 00000000000..58ba7c33f2b --- /dev/null +++ b/core/lib/dal/.sqlx/query-26cb272c2a46a267c47681e0f1f07997b7e24682da56f84d812da2b9aeb14ca2.json @@ -0,0 +1,40 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n miniblock_number AS \"miniblock_number!\",\n hash,\n index_in_block AS \"index_in_block!\",\n l1_batch_tx_index AS \"l1_batch_tx_index!\"\n FROM\n transactions\n WHERE\n l1_batch_number = $1\n ORDER BY\n miniblock_number,\n index_in_block\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "miniblock_number!", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "hash", + "type_info": "Bytea" + }, + { + "ordinal": 2, + "name": "index_in_block!", + "type_info": "Int4" + }, + { + "ordinal": 3, + "name": "l1_batch_tx_index!", + "type_info": "Int4" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + true, + false, + true, + true + ] + }, + "hash": "26cb272c2a46a267c47681e0f1f07997b7e24682da56f84d812da2b9aeb14ca2" +} diff --git a/core/lib/dal/.sqlx/query-26e0b7eb1871d94ddc98254fece6381a9c4165e2727542eaeef3bbedd13a4f20.json b/core/lib/dal/.sqlx/query-26e0b7eb1871d94ddc98254fece6381a9c4165e2727542eaeef3bbedd13a4f20.json new file mode 100644 index 00000000000..30738bc2094 --- /dev/null +++ b/core/lib/dal/.sqlx/query-26e0b7eb1871d94ddc98254fece6381a9c4165e2727542eaeef3bbedd13a4f20.json @@ -0,0 +1,15 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE proof_generation_details\n SET\n status = $1,\n updated_at = NOW()\n WHERE\n l1_batch_number = $2\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Text", + "Int8" + ] + }, + "nullable": [] + }, + "hash": "26e0b7eb1871d94ddc98254fece6381a9c4165e2727542eaeef3bbedd13a4f20" +} diff --git a/core/lib/dal/.sqlx/query-2737fea02599cdc163854b1395c42d4ef93ca238fd2fbc9155e6d012d0d1e113.json b/core/lib/dal/.sqlx/query-2737fea02599cdc163854b1395c42d4ef93ca238fd2fbc9155e6d012d0d1e113.json new file mode 100644 index 00000000000..67b9c056682 --- /dev/null +++ b/core/lib/dal/.sqlx/query-2737fea02599cdc163854b1395c42d4ef93ca238fd2fbc9155e6d012d0d1e113.json @@ -0,0 +1,15 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE transactions\n SET\n error = $1,\n updated_at = NOW()\n WHERE\n hash = $2\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Varchar", + "Bytea" + ] 
+ }, + "nullable": [] + }, + "hash": "2737fea02599cdc163854b1395c42d4ef93ca238fd2fbc9155e6d012d0d1e113" +} diff --git a/core/lib/dal/.sqlx/query-2757b30c4641a346eb0226c706223efc18e51e6d4092188e081f4fafe92fe0ef.json b/core/lib/dal/.sqlx/query-2757b30c4641a346eb0226c706223efc18e51e6d4092188e081f4fafe92fe0ef.json new file mode 100644 index 00000000000..bb47b8254ac --- /dev/null +++ b/core/lib/dal/.sqlx/query-2757b30c4641a346eb0226c706223efc18e51e6d4092188e081f4fafe92fe0ef.json @@ -0,0 +1,34 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n bootloader_code_hash,\n default_account_code_hash,\n id\n FROM\n protocol_versions\n WHERE\n timestamp <= $1\n ORDER BY\n id DESC\n LIMIT\n 1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "bootloader_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 1, + "name": "default_account_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 2, + "name": "id", + "type_info": "Int4" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false, + false, + false + ] + }, + "hash": "2757b30c4641a346eb0226c706223efc18e51e6d4092188e081f4fafe92fe0ef" +} diff --git a/core/lib/dal/.sqlx/query-280cf015e40353e2833c0a70b77095596297be0d728a0aa2d9b180fb72de222b.json b/core/lib/dal/.sqlx/query-280cf015e40353e2833c0a70b77095596297be0d728a0aa2d9b180fb72de222b.json new file mode 100644 index 00000000000..5b49941ed18 --- /dev/null +++ b/core/lib/dal/.sqlx/query-280cf015e40353e2833c0a70b77095596297be0d728a0aa2d9b180fb72de222b.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n attempts\n FROM\n basic_witness_input_producer_jobs\n WHERE\n l1_batch_number = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "attempts", + "type_info": "Int2" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false + ] + }, + "hash": "280cf015e40353e2833c0a70b77095596297be0d728a0aa2d9b180fb72de222b" +} diff --git a/core/lib/dal/.sqlx/query-293258ecb299be5f5e81696d14883f115cd97586bd795ee31f58fc14e56d58cb.json b/core/lib/dal/.sqlx/query-293258ecb299be5f5e81696d14883f115cd97586bd795ee31f58fc14e56d58cb.json new file mode 100644 index 00000000000..2b07c3b02e0 --- /dev/null +++ b/core/lib/dal/.sqlx/query-293258ecb299be5f5e81696d14883f115cd97586bd795ee31f58fc14e56d58cb.json @@ -0,0 +1,14 @@ +{ + "db_name": "PostgreSQL", + "query": "\n DELETE FROM events\n WHERE\n miniblock_number > $1\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [] + }, + "hash": "293258ecb299be5f5e81696d14883f115cd97586bd795ee31f58fc14e56d58cb" +} diff --git a/core/lib/dal/.sqlx/query-2955e976281f9cbd98b7378c5ab52964b268b93c32fd280c49bf9f932884300d.json b/core/lib/dal/.sqlx/query-2955e976281f9cbd98b7378c5ab52964b268b93c32fd280c49bf9f932884300d.json new file mode 100644 index 00000000000..7c3a261d1f6 --- /dev/null +++ b/core/lib/dal/.sqlx/query-2955e976281f9cbd98b7378c5ab52964b268b93c32fd280c49bf9f932884300d.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n timestamp\n FROM\n l1_batches\n WHERE\n eth_prove_tx_id IS NULL\n AND number > 0\n ORDER BY\n number\n LIMIT\n 1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "timestamp", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false + ] + }, + "hash": "2955e976281f9cbd98b7378c5ab52964b268b93c32fd280c49bf9f932884300d" +} diff --git a/core/lib/dal/.sqlx/query-2b626262c8003817ee02978f77452554ccfb5b83f00efdc12bed0f60ef439785.json 
b/core/lib/dal/.sqlx/query-2b626262c8003817ee02978f77452554ccfb5b83f00efdc12bed0f60ef439785.json new file mode 100644 index 00000000000..db810604cd8 --- /dev/null +++ b/core/lib/dal/.sqlx/query-2b626262c8003817ee02978f77452554ccfb5b83f00efdc12bed0f60ef439785.json @@ -0,0 +1,25 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n id\n FROM\n prover_jobs_fri\n WHERE\n l1_batch_number = $1\n AND circuit_id = $2\n AND aggregation_round = $3\n AND depth = $4\n AND status = 'successful'\n ORDER BY\n sequence_number ASC;\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "id", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [ + "Int8", + "Int2", + "Int2", + "Int4" + ] + }, + "nullable": [ + false + ] + }, + "hash": "2b626262c8003817ee02978f77452554ccfb5b83f00efdc12bed0f60ef439785" +} diff --git a/core/lib/dal/.sqlx/query-2c827c1c3cfa3552b90d4746c5df45d57f1f8b2558fdb374bf02e84d3c825a23.json b/core/lib/dal/.sqlx/query-2c827c1c3cfa3552b90d4746c5df45d57f1f8b2558fdb374bf02e84d3c825a23.json new file mode 100644 index 00000000000..9ec94fee0d3 --- /dev/null +++ b/core/lib/dal/.sqlx/query-2c827c1c3cfa3552b90d4746c5df45d57f1f8b2558fdb374bf02e84d3c825a23.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n MAX(number) AS \"number\"\n FROM\n miniblocks\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "number", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + null + ] + }, + "hash": "2c827c1c3cfa3552b90d4746c5df45d57f1f8b2558fdb374bf02e84d3c825a23" +} diff --git a/core/lib/dal/.sqlx/query-2d0c2e9ec4187641baef8a33229bffc78d92adb3c1e3ca60b12163e38c67047e.json b/core/lib/dal/.sqlx/query-2d0c2e9ec4187641baef8a33229bffc78d92adb3c1e3ca60b12163e38c67047e.json new file mode 100644 index 00000000000..f61f39e3b0b --- /dev/null +++ b/core/lib/dal/.sqlx/query-2d0c2e9ec4187641baef8a33229bffc78d92adb3c1e3ca60b12163e38c67047e.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n COUNT(*) AS \"count!\"\n FROM\n contracts_verification_info\n WHERE\n address = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "count!", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [ + "Bytea" + ] + }, + "nullable": [ + null + ] + }, + "hash": "2d0c2e9ec4187641baef8a33229bffc78d92adb3c1e3ca60b12163e38c67047e" +} diff --git a/core/lib/dal/.sqlx/query-2d1e0f2e043c193052c9cc20f9efeb5f094160627bc09db4bda2dda9a8c11c44.json b/core/lib/dal/.sqlx/query-2d1e0f2e043c193052c9cc20f9efeb5f094160627bc09db4bda2dda9a8c11c44.json new file mode 100644 index 00000000000..1d9c276b078 --- /dev/null +++ b/core/lib/dal/.sqlx/query-2d1e0f2e043c193052c9cc20f9efeb5f094160627bc09db4bda2dda9a8c11c44.json @@ -0,0 +1,15 @@ +{ + "db_name": "PostgreSQL", + "query": "\n INSERT INTO\n contracts_verification_info (address, verification_info)\n VALUES\n ($1, $2)\n ON CONFLICT (address) DO\n UPDATE\n SET\n verification_info = $2\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Bytea", + "Jsonb" + ] + }, + "nullable": [] + }, + "hash": "2d1e0f2e043c193052c9cc20f9efeb5f094160627bc09db4bda2dda9a8c11c44" +} diff --git a/core/lib/dal/.sqlx/query-2d31fcce581975a82d6156b52e35fb7a093b73727f75e0cb7db9cea480c95f5c.json b/core/lib/dal/.sqlx/query-2d31fcce581975a82d6156b52e35fb7a093b73727f75e0cb7db9cea480c95f5c.json new file mode 100644 index 00000000000..c4bcd6ea491 --- /dev/null +++ b/core/lib/dal/.sqlx/query-2d31fcce581975a82d6156b52e35fb7a093b73727f75e0cb7db9cea480c95f5c.json @@ -0,0 +1,35 @@ +{ + "db_name": 
"PostgreSQL", + "query": "\n UPDATE prover_jobs_fri\n SET\n status = 'queued',\n updated_at = NOW(),\n processing_started_at = NOW()\n WHERE\n id IN (\n SELECT\n id\n FROM\n prover_jobs_fri\n WHERE\n (\n status = 'in_progress'\n AND processing_started_at <= NOW() - $1::INTERVAL\n AND attempts < $2\n )\n OR (\n status = 'in_gpu_proof'\n AND processing_started_at <= NOW() - $1::INTERVAL\n AND attempts < $2\n )\n OR (\n status = 'failed'\n AND attempts < $2\n )\n FOR UPDATE\n SKIP LOCKED\n )\n RETURNING\n id,\n status,\n attempts\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "id", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "status", + "type_info": "Text" + }, + { + "ordinal": 2, + "name": "attempts", + "type_info": "Int2" + } + ], + "parameters": { + "Left": [ + "Interval", + "Int2" + ] + }, + "nullable": [ + false, + false, + false + ] + }, + "hash": "2d31fcce581975a82d6156b52e35fb7a093b73727f75e0cb7db9cea480c95f5c" +} diff --git a/core/lib/dal/.sqlx/query-2d862097cfae49a1fb28ec0a05176085385c3a79d72f49669b4215a9454323c2.json b/core/lib/dal/.sqlx/query-2d862097cfae49a1fb28ec0a05176085385c3a79d72f49669b4215a9454323c2.json new file mode 100644 index 00000000000..e08f0c85517 --- /dev/null +++ b/core/lib/dal/.sqlx/query-2d862097cfae49a1fb28ec0a05176085385c3a79d72f49669b4215a9454323c2.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n INDEX\n FROM\n initial_writes\n WHERE\n l1_batch_number <= $1\n ORDER BY\n l1_batch_number DESC,\n INDEX DESC\n LIMIT\n 1;\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "index", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false + ] + }, + "hash": "2d862097cfae49a1fb28ec0a05176085385c3a79d72f49669b4215a9454323c2" +} diff --git a/core/lib/dal/.sqlx/query-2d87b294817859e42258136b1cb78f42a877039094c3d6354928a03dad29451a.json b/core/lib/dal/.sqlx/query-2d87b294817859e42258136b1cb78f42a877039094c3d6354928a03dad29451a.json new file mode 100644 index 00000000000..dbd2e21c1a3 --- /dev/null +++ b/core/lib/dal/.sqlx/query-2d87b294817859e42258136b1cb78f42a877039094c3d6354928a03dad29451a.json @@ -0,0 +1,15 @@ +{ + "db_name": "PostgreSQL", + "query": "\n DELETE FROM storage_logs\n WHERE\n miniblock_number = $1\n AND operation_number != ALL ($2)\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8", + "Int4Array" + ] + }, + "nullable": [] + }, + "hash": "2d87b294817859e42258136b1cb78f42a877039094c3d6354928a03dad29451a" +} diff --git a/core/lib/dal/.sqlx/query-2dd7dbaeb2572404451e78a96f540e73a2778633bbf9d8e591ec912634639af9.json b/core/lib/dal/.sqlx/query-2dd7dbaeb2572404451e78a96f540e73a2778633bbf9d8e591ec912634639af9.json new file mode 100644 index 00000000000..bb81e7c3194 --- /dev/null +++ b/core/lib/dal/.sqlx/query-2dd7dbaeb2572404451e78a96f540e73a2778633bbf9d8e591ec912634639af9.json @@ -0,0 +1,232 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n *\n FROM\n transactions\n WHERE\n miniblock_number = $1\n ORDER BY\n index_in_block\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "hash", + "type_info": "Bytea" + }, + { + "ordinal": 1, + "name": "is_priority", + "type_info": "Bool" + }, + { + "ordinal": 2, + "name": "full_fee", + "type_info": "Numeric" + }, + { + "ordinal": 3, + "name": "layer_2_tip_fee", + "type_info": "Numeric" + }, + { + "ordinal": 4, + "name": "initiator_address", + "type_info": "Bytea" + }, + { + "ordinal": 5, + "name": "nonce", + "type_info": "Int8" + }, + { + "ordinal": 6, + "name": 
"signature", + "type_info": "Bytea" + }, + { + "ordinal": 7, + "name": "input", + "type_info": "Bytea" + }, + { + "ordinal": 8, + "name": "data", + "type_info": "Jsonb" + }, + { + "ordinal": 9, + "name": "received_at", + "type_info": "Timestamp" + }, + { + "ordinal": 10, + "name": "priority_op_id", + "type_info": "Int8" + }, + { + "ordinal": 11, + "name": "l1_batch_number", + "type_info": "Int8" + }, + { + "ordinal": 12, + "name": "index_in_block", + "type_info": "Int4" + }, + { + "ordinal": 13, + "name": "error", + "type_info": "Varchar" + }, + { + "ordinal": 14, + "name": "gas_limit", + "type_info": "Numeric" + }, + { + "ordinal": 15, + "name": "gas_per_storage_limit", + "type_info": "Numeric" + }, + { + "ordinal": 16, + "name": "gas_per_pubdata_limit", + "type_info": "Numeric" + }, + { + "ordinal": 17, + "name": "tx_format", + "type_info": "Int4" + }, + { + "ordinal": 18, + "name": "created_at", + "type_info": "Timestamp" + }, + { + "ordinal": 19, + "name": "updated_at", + "type_info": "Timestamp" + }, + { + "ordinal": 20, + "name": "execution_info", + "type_info": "Jsonb" + }, + { + "ordinal": 21, + "name": "contract_address", + "type_info": "Bytea" + }, + { + "ordinal": 22, + "name": "in_mempool", + "type_info": "Bool" + }, + { + "ordinal": 23, + "name": "l1_block_number", + "type_info": "Int4" + }, + { + "ordinal": 24, + "name": "value", + "type_info": "Numeric" + }, + { + "ordinal": 25, + "name": "paymaster", + "type_info": "Bytea" + }, + { + "ordinal": 26, + "name": "paymaster_input", + "type_info": "Bytea" + }, + { + "ordinal": 27, + "name": "max_fee_per_gas", + "type_info": "Numeric" + }, + { + "ordinal": 28, + "name": "max_priority_fee_per_gas", + "type_info": "Numeric" + }, + { + "ordinal": 29, + "name": "effective_gas_price", + "type_info": "Numeric" + }, + { + "ordinal": 30, + "name": "miniblock_number", + "type_info": "Int8" + }, + { + "ordinal": 31, + "name": "l1_batch_tx_index", + "type_info": "Int4" + }, + { + "ordinal": 32, + "name": "refunded_gas", + "type_info": "Int8" + }, + { + "ordinal": 33, + "name": "l1_tx_mint", + "type_info": "Numeric" + }, + { + "ordinal": 34, + "name": "l1_tx_refund_recipient", + "type_info": "Bytea" + }, + { + "ordinal": 35, + "name": "upgrade_id", + "type_info": "Int4" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false, + false, + true, + true, + false, + true, + true, + true, + false, + false, + true, + true, + true, + true, + true, + true, + true, + true, + false, + false, + false, + true, + false, + true, + false, + false, + false, + true, + true, + true, + true, + true, + false, + true, + true, + true + ] + }, + "hash": "2dd7dbaeb2572404451e78a96f540e73a2778633bbf9d8e591ec912634639af9" +} diff --git a/core/lib/dal/.sqlx/query-2ddba807ac8ec5260bf92c77073eb89c728357c0744f209090824695a5d35fa3.json b/core/lib/dal/.sqlx/query-2ddba807ac8ec5260bf92c77073eb89c728357c0744f209090824695a5d35fa3.json new file mode 100644 index 00000000000..68b48f32509 --- /dev/null +++ b/core/lib/dal/.sqlx/query-2ddba807ac8ec5260bf92c77073eb89c728357c0744f209090824695a5d35fa3.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE transactions\n SET\n l1_batch_number = NULL,\n miniblock_number = NULL,\n error = NULL,\n index_in_block = NULL,\n execution_info = '{}'\n WHERE\n miniblock_number > $1\n RETURNING\n hash\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "hash", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false + ] + }, + "hash": 
"2ddba807ac8ec5260bf92c77073eb89c728357c0744f209090824695a5d35fa3" +} diff --git a/core/lib/dal/.sqlx/query-2e0ea9434195270cc65cdca1f674d6b3b1d15b818974e4e403f4ac418ed40c2c.json b/core/lib/dal/.sqlx/query-2e0ea9434195270cc65cdca1f674d6b3b1d15b818974e4e403f4ac418ed40c2c.json new file mode 100644 index 00000000000..c2cba82b7a6 --- /dev/null +++ b/core/lib/dal/.sqlx/query-2e0ea9434195270cc65cdca1f674d6b3b1d15b818974e4e403f4ac418ed40c2c.json @@ -0,0 +1,26 @@ +{ + "db_name": "PostgreSQL", + "query": "\n INSERT INTO\n eth_txs_history (\n eth_tx_id,\n base_fee_per_gas,\n priority_fee_per_gas,\n tx_hash,\n signed_raw_tx,\n created_at,\n updated_at\n )\n VALUES\n ($1, $2, $3, $4, $5, NOW(), NOW())\n ON CONFLICT (tx_hash) DO NOTHING\n RETURNING\n id\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "id", + "type_info": "Int4" + } + ], + "parameters": { + "Left": [ + "Int4", + "Int8", + "Int8", + "Text", + "Bytea" + ] + }, + "nullable": [ + false + ] + }, + "hash": "2e0ea9434195270cc65cdca1f674d6b3b1d15b818974e4e403f4ac418ed40c2c" +} diff --git a/core/lib/dal/.sqlx/query-2e5b9ae1b81b0abfe7a962c93b3119a0a60dc9804175b2baf8b45939c74bd583.json b/core/lib/dal/.sqlx/query-2e5b9ae1b81b0abfe7a962c93b3119a0a60dc9804175b2baf8b45939c74bd583.json new file mode 100644 index 00000000000..20548776830 --- /dev/null +++ b/core/lib/dal/.sqlx/query-2e5b9ae1b81b0abfe7a962c93b3119a0a60dc9804175b2baf8b45939c74bd583.json @@ -0,0 +1,15 @@ +{ + "db_name": "PostgreSQL", + "query": "\n INSERT INTO\n compiler_versions (VERSION, compiler, created_at, updated_at)\n SELECT\n u.version,\n $2,\n NOW(),\n NOW()\n FROM\n UNNEST($1::TEXT[]) AS u (VERSION)\n ON CONFLICT (VERSION, compiler) DO NOTHING\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "TextArray", + "Text" + ] + }, + "nullable": [] + }, + "hash": "2e5b9ae1b81b0abfe7a962c93b3119a0a60dc9804175b2baf8b45939c74bd583" +} diff --git a/core/lib/dal/.sqlx/query-2eb25bfcfc1114de825dc4eeb0605d7d1c9e649663f6e9444c4425821d0a5b71.json b/core/lib/dal/.sqlx/query-2eb25bfcfc1114de825dc4eeb0605d7d1c9e649663f6e9444c4425821d0a5b71.json new file mode 100644 index 00000000000..6b0ddef258f --- /dev/null +++ b/core/lib/dal/.sqlx/query-2eb25bfcfc1114de825dc4eeb0605d7d1c9e649663f6e9444c4425821d0a5b71.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n eth_commit_tx_id\n FROM\n l1_batches\n WHERE\n number = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "eth_commit_tx_id", + "type_info": "Int4" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + true + ] + }, + "hash": "2eb25bfcfc1114de825dc4eeb0605d7d1c9e649663f6e9444c4425821d0a5b71" +} diff --git a/core/lib/dal/.sqlx/query-2eb617f3e34ac5b21f925053a45da2b4afc314a3b3e78b041b44c8a020a0ee12.json b/core/lib/dal/.sqlx/query-2eb617f3e34ac5b21f925053a45da2b4afc314a3b3e78b041b44c8a020a0ee12.json new file mode 100644 index 00000000000..3baa8596a3b --- /dev/null +++ b/core/lib/dal/.sqlx/query-2eb617f3e34ac5b21f925053a45da2b4afc314a3b3e78b041b44c8a020a0ee12.json @@ -0,0 +1,14 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE transactions\n SET\n in_mempool = FALSE\n FROM\n UNNEST($1::bytea[]) AS s (address)\n WHERE\n transactions.in_mempool = TRUE\n AND transactions.initiator_address = s.address\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "ByteaArray" + ] + }, + "nullable": [] + }, + "hash": "2eb617f3e34ac5b21f925053a45da2b4afc314a3b3e78b041b44c8a020a0ee12" +} diff --git 
a/core/lib/dal/.sqlx/query-31334f2878b1ac7d828d5bc22d65ef6676b2eac623c0f78634cae9072fe0498a.json b/core/lib/dal/.sqlx/query-31334f2878b1ac7d828d5bc22d65ef6676b2eac623c0f78634cae9072fe0498a.json new file mode 100644 index 00000000000..31b129a8928 --- /dev/null +++ b/core/lib/dal/.sqlx/query-31334f2878b1ac7d828d5bc22d65ef6676b2eac623c0f78634cae9072fe0498a.json @@ -0,0 +1,16 @@ +{ + "db_name": "PostgreSQL", + "query": "\n INSERT INTO\n snapshots (\n l1_batch_number,\n storage_logs_filepaths,\n factory_deps_filepath,\n created_at,\n updated_at\n )\n VALUES\n ($1, ARRAY_FILL(''::TEXT, ARRAY[$2::INTEGER]), $3, NOW(), NOW())\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8", + "Int4", + "Text" + ] + }, + "nullable": [] + }, + "hash": "31334f2878b1ac7d828d5bc22d65ef6676b2eac623c0f78634cae9072fe0498a" +} diff --git a/core/lib/dal/.sqlx/query-3191f5ba16af041123ffa941ad63fe77e649e9d110043d2ac22005dd61cfcfb9.json b/core/lib/dal/.sqlx/query-3191f5ba16af041123ffa941ad63fe77e649e9d110043d2ac22005dd61cfcfb9.json new file mode 100644 index 00000000000..4290ba1f1b3 --- /dev/null +++ b/core/lib/dal/.sqlx/query-3191f5ba16af041123ffa941ad63fe77e649e9d110043d2ac22005dd61cfcfb9.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n timestamp\n FROM\n miniblocks\n WHERE\n (\n $1::BIGINT IS NULL\n AND l1_batch_number IS NULL\n )\n OR (l1_batch_number = $1::BIGINT)\n ORDER BY\n number\n LIMIT\n 1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "timestamp", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false + ] + }, + "hash": "3191f5ba16af041123ffa941ad63fe77e649e9d110043d2ac22005dd61cfcfb9" +} diff --git a/core/lib/dal/.sqlx/query-31f12a8c44124bb2ce31889ac5295f3823926f69cb1d54874878e6d6c301bfd8.json b/core/lib/dal/.sqlx/query-31f12a8c44124bb2ce31889ac5295f3823926f69cb1d54874878e6d6c301bfd8.json new file mode 100644 index 00000000000..c63ea98db44 --- /dev/null +++ b/core/lib/dal/.sqlx/query-31f12a8c44124bb2ce31889ac5295f3823926f69cb1d54874878e6d6c301bfd8.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n COUNT(*) AS \"count!\"\n FROM\n l1_batches\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "count!", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + null + ] + }, + "hash": "31f12a8c44124bb2ce31889ac5295f3823926f69cb1d54874878e6d6c301bfd8" +} diff --git a/core/lib/dal/.sqlx/query-322d919ff1ef4675623a58af2b0e9ebdda648667d48d6b27ddf155f2fe01d77a.json b/core/lib/dal/.sqlx/query-322d919ff1ef4675623a58af2b0e9ebdda648667d48d6b27ddf155f2fe01d77a.json new file mode 100644 index 00000000000..804940f674d --- /dev/null +++ b/core/lib/dal/.sqlx/query-322d919ff1ef4675623a58af2b0e9ebdda648667d48d6b27ddf155f2fe01d77a.json @@ -0,0 +1,16 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE l1_batches\n SET\n commitment = $2,\n aux_data_hash = $3,\n updated_at = NOW()\n WHERE\n number = $1\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8", + "Bytea", + "Bytea" + ] + }, + "nullable": [] + }, + "hash": "322d919ff1ef4675623a58af2b0e9ebdda648667d48d6b27ddf155f2fe01d77a" +} diff --git a/core/lib/dal/.sqlx/query-32792c6aee69cb8c8b928a209a3b04ba5868d1897553df85aac15b169ebb0732.json b/core/lib/dal/.sqlx/query-32792c6aee69cb8c8b928a209a3b04ba5868d1897553df85aac15b169ebb0732.json new file mode 100644 index 00000000000..32b36d7fed9 --- /dev/null +++ 
b/core/lib/dal/.sqlx/query-32792c6aee69cb8c8b928a209a3b04ba5868d1897553df85aac15b169ebb0732.json @@ -0,0 +1,28 @@ +{ + "db_name": "PostgreSQL", + "query": "\n INSERT INTO\n basic_witness_input_producer_jobs (l1_batch_number, status, created_at, updated_at)\n VALUES\n ($1, $2, NOW(), NOW())\n ON CONFLICT (l1_batch_number) DO NOTHING\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8", + { + "Custom": { + "name": "basic_witness_input_producer_job_status", + "kind": { + "Enum": [ + "Queued", + "ManuallySkipped", + "InProgress", + "Successful", + "Failed" + ] + } + } + } + ] + }, + "nullable": [] + }, + "hash": "32792c6aee69cb8c8b928a209a3b04ba5868d1897553df85aac15b169ebb0732" +} diff --git a/core/lib/dal/.sqlx/query-33d6be45b246523ad76f9ae512322ff6372f63ecadb504a329499b02e7d3550e.json b/core/lib/dal/.sqlx/query-33d6be45b246523ad76f9ae512322ff6372f63ecadb504a329499b02e7d3550e.json new file mode 100644 index 00000000000..76483cd73d3 --- /dev/null +++ b/core/lib/dal/.sqlx/query-33d6be45b246523ad76f9ae512322ff6372f63ecadb504a329499b02e7d3550e.json @@ -0,0 +1,26 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE leaf_aggregation_witness_jobs_fri\n SET\n status = 'queued'\n WHERE\n (l1_batch_number, circuit_id) IN (\n SELECT\n prover_jobs_fri.l1_batch_number,\n prover_jobs_fri.circuit_id\n FROM\n prover_jobs_fri\n JOIN leaf_aggregation_witness_jobs_fri lawj ON prover_jobs_fri.l1_batch_number = lawj.l1_batch_number\n AND prover_jobs_fri.circuit_id = lawj.circuit_id\n WHERE\n lawj.status = 'waiting_for_proofs'\n AND prover_jobs_fri.status = 'successful'\n AND prover_jobs_fri.aggregation_round = 0\n GROUP BY\n prover_jobs_fri.l1_batch_number,\n prover_jobs_fri.circuit_id,\n lawj.number_of_basic_circuits\n HAVING\n COUNT(*) = lawj.number_of_basic_circuits\n )\n RETURNING\n l1_batch_number,\n circuit_id;\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "l1_batch_number", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "circuit_id", + "type_info": "Int2" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false, + false + ] + }, + "hash": "33d6be45b246523ad76f9ae512322ff6372f63ecadb504a329499b02e7d3550e" +} diff --git a/core/lib/dal/.sqlx/query-3490fe0b778a03c73111bf8cbf426b0b3185a231bbf0b8b132a1a95bc157e827.json b/core/lib/dal/.sqlx/query-3490fe0b778a03c73111bf8cbf426b0b3185a231bbf0b8b132a1a95bc157e827.json new file mode 100644 index 00000000000..3275e94936a --- /dev/null +++ b/core/lib/dal/.sqlx/query-3490fe0b778a03c73111bf8cbf426b0b3185a231bbf0b8b132a1a95bc157e827.json @@ -0,0 +1,34 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n hashed_key,\n l1_batch_number,\n INDEX\n FROM\n initial_writes\n WHERE\n hashed_key = ANY ($1::bytea[])\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "hashed_key", + "type_info": "Bytea" + }, + { + "ordinal": 1, + "name": "l1_batch_number", + "type_info": "Int8" + }, + { + "ordinal": 2, + "name": "index", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [ + "ByteaArray" + ] + }, + "nullable": [ + false, + false, + false + ] + }, + "hash": "3490fe0b778a03c73111bf8cbf426b0b3185a231bbf0b8b132a1a95bc157e827" +} diff --git a/core/lib/dal/.sqlx/query-35b87a3b7db0af87c6a95e9fe7ef9044ae85b579c7051301b40bd5f94df1f530.json b/core/lib/dal/.sqlx/query-35b87a3b7db0af87c6a95e9fe7ef9044ae85b579c7051301b40bd5f94df1f530.json new file mode 100644 index 00000000000..a11e154326e --- /dev/null +++ b/core/lib/dal/.sqlx/query-35b87a3b7db0af87c6a95e9fe7ef9044ae85b579c7051301b40bd5f94df1f530.json 
@@ -0,0 +1,15 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE prover_jobs_fri\n SET\n status = 'failed',\n error = $1,\n updated_at = NOW()\n WHERE\n id = $2\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Text", + "Int8" + ] + }, + "nullable": [] + }, + "hash": "35b87a3b7db0af87c6a95e9fe7ef9044ae85b579c7051301b40bd5f94df1f530" +} diff --git a/core/lib/dal/.sqlx/query-3671f23665664b8d6acf97e4f697e5afa28d855d87ea2f8c93e79c436749068a.json b/core/lib/dal/.sqlx/query-3671f23665664b8d6acf97e4f697e5afa28d855d87ea2f8c93e79c436749068a.json new file mode 100644 index 00000000000..1e8d851ab07 --- /dev/null +++ b/core/lib/dal/.sqlx/query-3671f23665664b8d6acf97e4f697e5afa28d855d87ea2f8c93e79c436749068a.json @@ -0,0 +1,258 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n number,\n timestamp,\n is_finished,\n l1_tx_count,\n l2_tx_count,\n fee_account_address,\n bloom,\n priority_ops_onchain_data,\n hash,\n parent_hash,\n commitment,\n compressed_write_logs,\n compressed_contracts,\n eth_prove_tx_id,\n eth_commit_tx_id,\n eth_execute_tx_id,\n merkle_root_hash,\n l2_to_l1_logs,\n l2_to_l1_messages,\n used_contract_hashes,\n compressed_initial_writes,\n compressed_repeated_writes,\n l2_l1_compressed_messages,\n l2_l1_merkle_root,\n l1_gas_price,\n l2_fair_gas_price,\n rollup_last_leaf_index,\n zkporter_is_available,\n bootloader_code_hash,\n default_aa_code_hash,\n base_fee_per_gas,\n aux_data_hash,\n pass_through_data_hash,\n meta_parameters_hash,\n protocol_version,\n compressed_state_diffs,\n system_logs,\n events_queue_commitment,\n bootloader_initial_content_commitment,\n pubdata_input\n FROM\n l1_batches\n LEFT JOIN commitments ON commitments.l1_batch_number = l1_batches.number\n WHERE\n number BETWEEN $1 AND $2\n ORDER BY\n number\n LIMIT\n $3\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "number", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "timestamp", + "type_info": "Int8" + }, + { + "ordinal": 2, + "name": "is_finished", + "type_info": "Bool" + }, + { + "ordinal": 3, + "name": "l1_tx_count", + "type_info": "Int4" + }, + { + "ordinal": 4, + "name": "l2_tx_count", + "type_info": "Int4" + }, + { + "ordinal": 5, + "name": "fee_account_address", + "type_info": "Bytea" + }, + { + "ordinal": 6, + "name": "bloom", + "type_info": "Bytea" + }, + { + "ordinal": 7, + "name": "priority_ops_onchain_data", + "type_info": "ByteaArray" + }, + { + "ordinal": 8, + "name": "hash", + "type_info": "Bytea" + }, + { + "ordinal": 9, + "name": "parent_hash", + "type_info": "Bytea" + }, + { + "ordinal": 10, + "name": "commitment", + "type_info": "Bytea" + }, + { + "ordinal": 11, + "name": "compressed_write_logs", + "type_info": "Bytea" + }, + { + "ordinal": 12, + "name": "compressed_contracts", + "type_info": "Bytea" + }, + { + "ordinal": 13, + "name": "eth_prove_tx_id", + "type_info": "Int4" + }, + { + "ordinal": 14, + "name": "eth_commit_tx_id", + "type_info": "Int4" + }, + { + "ordinal": 15, + "name": "eth_execute_tx_id", + "type_info": "Int4" + }, + { + "ordinal": 16, + "name": "merkle_root_hash", + "type_info": "Bytea" + }, + { + "ordinal": 17, + "name": "l2_to_l1_logs", + "type_info": "ByteaArray" + }, + { + "ordinal": 18, + "name": "l2_to_l1_messages", + "type_info": "ByteaArray" + }, + { + "ordinal": 19, + "name": "used_contract_hashes", + "type_info": "Jsonb" + }, + { + "ordinal": 20, + "name": "compressed_initial_writes", + "type_info": "Bytea" + }, + { + "ordinal": 21, + "name": "compressed_repeated_writes", + "type_info": "Bytea" + }, + { + 
"ordinal": 22, + "name": "l2_l1_compressed_messages", + "type_info": "Bytea" + }, + { + "ordinal": 23, + "name": "l2_l1_merkle_root", + "type_info": "Bytea" + }, + { + "ordinal": 24, + "name": "l1_gas_price", + "type_info": "Int8" + }, + { + "ordinal": 25, + "name": "l2_fair_gas_price", + "type_info": "Int8" + }, + { + "ordinal": 26, + "name": "rollup_last_leaf_index", + "type_info": "Int8" + }, + { + "ordinal": 27, + "name": "zkporter_is_available", + "type_info": "Bool" + }, + { + "ordinal": 28, + "name": "bootloader_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 29, + "name": "default_aa_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 30, + "name": "base_fee_per_gas", + "type_info": "Numeric" + }, + { + "ordinal": 31, + "name": "aux_data_hash", + "type_info": "Bytea" + }, + { + "ordinal": 32, + "name": "pass_through_data_hash", + "type_info": "Bytea" + }, + { + "ordinal": 33, + "name": "meta_parameters_hash", + "type_info": "Bytea" + }, + { + "ordinal": 34, + "name": "protocol_version", + "type_info": "Int4" + }, + { + "ordinal": 35, + "name": "compressed_state_diffs", + "type_info": "Bytea" + }, + { + "ordinal": 36, + "name": "system_logs", + "type_info": "ByteaArray" + }, + { + "ordinal": 37, + "name": "events_queue_commitment", + "type_info": "Bytea" + }, + { + "ordinal": 38, + "name": "bootloader_initial_content_commitment", + "type_info": "Bytea" + }, + { + "ordinal": 39, + "name": "pubdata_input", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Int8", + "Int8", + "Int8" + ] + }, + "nullable": [ + false, + false, + false, + false, + false, + false, + false, + false, + true, + true, + true, + true, + true, + true, + true, + true, + true, + false, + false, + false, + true, + true, + true, + true, + false, + false, + true, + true, + true, + true, + false, + true, + true, + true, + true, + true, + false, + true, + true, + true + ] + }, + "hash": "3671f23665664b8d6acf97e4f697e5afa28d855d87ea2f8c93e79c436749068a" +} diff --git a/core/lib/dal/.sqlx/query-3b013b93ea4a6766162c9f0c60517a7ffc993cf436ad3aeeae82ed3e330b07bd.json b/core/lib/dal/.sqlx/query-3b013b93ea4a6766162c9f0c60517a7ffc993cf436ad3aeeae82ed3e330b07bd.json new file mode 100644 index 00000000000..6e7bffec485 --- /dev/null +++ b/core/lib/dal/.sqlx/query-3b013b93ea4a6766162c9f0c60517a7ffc993cf436ad3aeeae82ed3e330b07bd.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n certificate\n FROM\n miniblocks_consensus\n ORDER BY\n number ASC\n LIMIT\n 1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "certificate", + "type_info": "Jsonb" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false + ] + }, + "hash": "3b013b93ea4a6766162c9f0c60517a7ffc993cf436ad3aeeae82ed3e330b07bd" +} diff --git a/core/lib/dal/.sqlx/query-3b0af308b0ce95a13a4eed40834279601234a489f73d843f2f314252ed4cb8b0.json b/core/lib/dal/.sqlx/query-3b0af308b0ce95a13a4eed40834279601234a489f73d843f2f314252ed4cb8b0.json new file mode 100644 index 00000000000..39781954d48 --- /dev/null +++ b/core/lib/dal/.sqlx/query-3b0af308b0ce95a13a4eed40834279601234a489f73d843f2f314252ed4cb8b0.json @@ -0,0 +1,28 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n hashed_key,\n value AS \"value!\"\n FROM\n storage\n WHERE\n hashed_key = ANY ($1)\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "hashed_key", + "type_info": "Bytea" + }, + { + "ordinal": 1, + "name": "value!", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "ByteaArray" + ] + }, + "nullable": [ + 
false, + false + ] + }, + "hash": "3b0af308b0ce95a13a4eed40834279601234a489f73d843f2f314252ed4cb8b0" +} diff --git a/core/lib/dal/.sqlx/query-3b3fbcffd2702047045c2f358e8ac77b63879ab97a32eed8392b48cc46116a28.json b/core/lib/dal/.sqlx/query-3b3fbcffd2702047045c2f358e8ac77b63879ab97a32eed8392b48cc46116a28.json new file mode 100644 index 00000000000..326915adb2f --- /dev/null +++ b/core/lib/dal/.sqlx/query-3b3fbcffd2702047045c2f358e8ac77b63879ab97a32eed8392b48cc46116a28.json @@ -0,0 +1,14 @@ +{ + "db_name": "PostgreSQL", + "query": "\n DELETE FROM call_traces\n WHERE\n tx_hash = ANY ($1)\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "ByteaArray" + ] + }, + "nullable": [] + }, + "hash": "3b3fbcffd2702047045c2f358e8ac77b63879ab97a32eed8392b48cc46116a28" +} diff --git a/core/lib/dal/.sqlx/query-3b4d5009ec22f54cc7d305aa11d96ec397767a063dc21aa3add974cb9b070361.json b/core/lib/dal/.sqlx/query-3b4d5009ec22f54cc7d305aa11d96ec397767a063dc21aa3add974cb9b070361.json new file mode 100644 index 00000000000..38890ae58f2 --- /dev/null +++ b/core/lib/dal/.sqlx/query-3b4d5009ec22f54cc7d305aa11d96ec397767a063dc21aa3add974cb9b070361.json @@ -0,0 +1,16 @@ +{ + "db_name": "PostgreSQL", + "query": "\n INSERT INTO\n factory_deps (bytecode_hash, bytecode, miniblock_number, created_at, updated_at)\n SELECT\n u.bytecode_hash,\n u.bytecode,\n $3,\n NOW(),\n NOW()\n FROM\n UNNEST($1::bytea[], $2::bytea[]) AS u (bytecode_hash, bytecode)\n ON CONFLICT (bytecode_hash) DO NOTHING\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "ByteaArray", + "ByteaArray", + "Int8" + ] + }, + "nullable": [] + }, + "hash": "3b4d5009ec22f54cc7d305aa11d96ec397767a063dc21aa3add974cb9b070361" +} diff --git a/core/lib/dal/.sqlx/query-3c1d5f985be7e378211aa339c2c6387f2f3eda07a630503324bd6576dbdf8231.json b/core/lib/dal/.sqlx/query-3c1d5f985be7e378211aa339c2c6387f2f3eda07a630503324bd6576dbdf8231.json new file mode 100644 index 00000000000..ad5c726ea13 --- /dev/null +++ b/core/lib/dal/.sqlx/query-3c1d5f985be7e378211aa339c2c6387f2f3eda07a630503324bd6576dbdf8231.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n trace\n FROM\n transaction_traces\n WHERE\n tx_hash = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "trace", + "type_info": "Jsonb" + } + ], + "parameters": { + "Left": [ + "Bytea" + ] + }, + "nullable": [ + false + ] + }, + "hash": "3c1d5f985be7e378211aa339c2c6387f2f3eda07a630503324bd6576dbdf8231" +} diff --git a/core/lib/dal/.sqlx/query-3c3abbf689fa64c6da7de69fd916769dbb04d3a61cf232892236c974660ffe64.json b/core/lib/dal/.sqlx/query-3c3abbf689fa64c6da7de69fd916769dbb04d3a61cf232892236c974660ffe64.json new file mode 100644 index 00000000000..56d8b1fa995 --- /dev/null +++ b/core/lib/dal/.sqlx/query-3c3abbf689fa64c6da7de69fd916769dbb04d3a61cf232892236c974660ffe64.json @@ -0,0 +1,35 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE scheduler_witness_jobs_fri\n SET\n status = 'queued',\n updated_at = NOW(),\n processing_started_at = NOW()\n WHERE\n (\n status = 'in_progress'\n AND processing_started_at <= NOW() - $1::INTERVAL\n AND attempts < $2\n )\n OR (\n status = 'failed'\n AND attempts < $2\n )\n RETURNING\n l1_batch_number,\n status,\n attempts\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "l1_batch_number", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "status", + "type_info": "Text" + }, + { + "ordinal": 2, + "name": "attempts", + "type_info": "Int2" + } + ], + "parameters": { + "Left": [ + "Interval", + "Int2" + 
] + }, + "nullable": [ + false, + false, + false + ] + }, + "hash": "3c3abbf689fa64c6da7de69fd916769dbb04d3a61cf232892236c974660ffe64" +} diff --git a/core/lib/dal/.sqlx/query-3c60ca71b8a3b544f5fe9d7f2fbb249026665c9fb17b6f53a2154473547cbbfd.json b/core/lib/dal/.sqlx/query-3c60ca71b8a3b544f5fe9d7f2fbb249026665c9fb17b6f53a2154473547cbbfd.json new file mode 100644 index 00000000000..8797c84ce88 --- /dev/null +++ b/core/lib/dal/.sqlx/query-3c60ca71b8a3b544f5fe9d7f2fbb249026665c9fb17b6f53a2154473547cbbfd.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n certificate\n FROM\n miniblocks_consensus\n WHERE\n number = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "certificate", + "type_info": "Jsonb" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false + ] + }, + "hash": "3c60ca71b8a3b544f5fe9d7f2fbb249026665c9fb17b6f53a2154473547cbbfd" +} diff --git a/core/lib/dal/.sqlx/query-3e170eea3a5ea5c7389c15f76c6489745438eae73a07b577aa25bd08adf95354.json b/core/lib/dal/.sqlx/query-3e170eea3a5ea5c7389c15f76c6489745438eae73a07b577aa25bd08adf95354.json new file mode 100644 index 00000000000..2290d558cea --- /dev/null +++ b/core/lib/dal/.sqlx/query-3e170eea3a5ea5c7389c15f76c6489745438eae73a07b577aa25bd08adf95354.json @@ -0,0 +1,16 @@ +{ + "db_name": "PostgreSQL", + "query": "\n DELETE FROM tokens\n WHERE\n l2_address IN (\n SELECT\n SUBSTRING(key, 12, 20)\n FROM\n storage_logs\n WHERE\n storage_logs.address = $1\n AND miniblock_number > $2\n AND NOT EXISTS (\n SELECT\n 1\n FROM\n storage_logs AS s\n WHERE\n s.hashed_key = storage_logs.hashed_key\n AND (s.miniblock_number, s.operation_number) >= (storage_logs.miniblock_number, storage_logs.operation_number)\n AND s.value = $3\n )\n )\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Bytea", + "Int8", + "Bytea" + ] + }, + "nullable": [] + }, + "hash": "3e170eea3a5ea5c7389c15f76c6489745438eae73a07b577aa25bd08adf95354" +} diff --git a/core/lib/dal/.sqlx/query-3ec365c5c81f4678a905ae5bbd48b87ead36f593488437c6f67da629ca81e4fa.json b/core/lib/dal/.sqlx/query-3ec365c5c81f4678a905ae5bbd48b87ead36f593488437c6f67da629ca81e4fa.json new file mode 100644 index 00000000000..5815e65636c --- /dev/null +++ b/core/lib/dal/.sqlx/query-3ec365c5c81f4678a905ae5bbd48b87ead36f593488437c6f67da629ca81e4fa.json @@ -0,0 +1,14 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE scheduler_witness_jobs_fri\n SET\n status = 'queued'\n WHERE\n l1_batch_number = $1\n AND status != 'successful'\n AND status != 'in_progress'\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [] + }, + "hash": "3ec365c5c81f4678a905ae5bbd48b87ead36f593488437c6f67da629ca81e4fa" +} diff --git a/core/lib/dal/.sqlx/query-41c9f45d6eb727aafad0d8c18024cee5c602d275bb812022cc8fdabf0a60e151.json b/core/lib/dal/.sqlx/query-41c9f45d6eb727aafad0d8c18024cee5c602d275bb812022cc8fdabf0a60e151.json new file mode 100644 index 00000000000..8c51c26131b --- /dev/null +++ b/core/lib/dal/.sqlx/query-41c9f45d6eb727aafad0d8c18024cee5c602d275bb812022cc8fdabf0a60e151.json @@ -0,0 +1,56 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n eth_txs_history.id,\n eth_txs_history.eth_tx_id,\n eth_txs_history.tx_hash,\n eth_txs_history.base_fee_per_gas,\n eth_txs_history.priority_fee_per_gas,\n eth_txs_history.signed_raw_tx,\n eth_txs.nonce\n FROM\n eth_txs_history\n JOIN eth_txs ON eth_txs.id = eth_txs_history.eth_tx_id\n WHERE\n eth_txs_history.sent_at_block IS NULL\n AND 
eth_txs.confirmed_eth_tx_history_id IS NULL\n ORDER BY\n eth_txs_history.id DESC\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "id", + "type_info": "Int4" + }, + { + "ordinal": 1, + "name": "eth_tx_id", + "type_info": "Int4" + }, + { + "ordinal": 2, + "name": "tx_hash", + "type_info": "Text" + }, + { + "ordinal": 3, + "name": "base_fee_per_gas", + "type_info": "Int8" + }, + { + "ordinal": 4, + "name": "priority_fee_per_gas", + "type_info": "Int8" + }, + { + "ordinal": 5, + "name": "signed_raw_tx", + "type_info": "Bytea" + }, + { + "ordinal": 6, + "name": "nonce", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false, + false, + false, + false, + false, + true, + false + ] + }, + "hash": "41c9f45d6eb727aafad0d8c18024cee5c602d275bb812022cc8fdabf0a60e151" +} diff --git a/core/lib/dal/.sqlx/query-43c7e352d09f69de1a182196aea4de79b67833f17d252b5b0e8e00cd6e75b5c1.json b/core/lib/dal/.sqlx/query-43c7e352d09f69de1a182196aea4de79b67833f17d252b5b0e8e00cd6e75b5c1.json new file mode 100644 index 00000000000..56fcdb38943 --- /dev/null +++ b/core/lib/dal/.sqlx/query-43c7e352d09f69de1a182196aea4de79b67833f17d252b5b0e8e00cd6e75b5c1.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n MIN(number) AS \"number\"\n FROM\n l1_batches\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "number", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + null + ] + }, + "hash": "43c7e352d09f69de1a182196aea4de79b67833f17d252b5b0e8e00cd6e75b5c1" +} diff --git a/core/lib/dal/.sqlx/query-46c4696fff5a4b8cc5cb46b05645da82065836fe17687ffad04126a6a8b2b27c.json b/core/lib/dal/.sqlx/query-46c4696fff5a4b8cc5cb46b05645da82065836fe17687ffad04126a6a8b2b27c.json new file mode 100644 index 00000000000..5ebb1951966 --- /dev/null +++ b/core/lib/dal/.sqlx/query-46c4696fff5a4b8cc5cb46b05645da82065836fe17687ffad04126a6a8b2b27c.json @@ -0,0 +1,15 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE leaf_aggregation_witness_jobs_fri\n SET\n status = 'successful',\n updated_at = NOW(),\n time_taken = $1\n WHERE\n id = $2\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Time", + "Int8" + ] + }, + "nullable": [] + }, + "hash": "46c4696fff5a4b8cc5cb46b05645da82065836fe17687ffad04126a6a8b2b27c" +} diff --git a/core/lib/dal/.sqlx/query-47c2f23d9209d155f3f32fd21ef7931a02fe5ffaf2c4dc2f1e7a48c0e932c060.json b/core/lib/dal/.sqlx/query-47c2f23d9209d155f3f32fd21ef7931a02fe5ffaf2c4dc2f1e7a48c0e932c060.json new file mode 100644 index 00000000000..fe8a346d1e2 --- /dev/null +++ b/core/lib/dal/.sqlx/query-47c2f23d9209d155f3f32fd21ef7931a02fe5ffaf2c4dc2f1e7a48c0e932c060.json @@ -0,0 +1,50 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n l1_batch_number,\n l1_batch_root_hash,\n miniblock_number,\n miniblock_root_hash,\n last_finished_chunk_id,\n total_chunk_count\n FROM\n snapshot_recovery\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "l1_batch_number", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "l1_batch_root_hash", + "type_info": "Bytea" + }, + { + "ordinal": 2, + "name": "miniblock_number", + "type_info": "Int8" + }, + { + "ordinal": 3, + "name": "miniblock_root_hash", + "type_info": "Bytea" + }, + { + "ordinal": 4, + "name": "last_finished_chunk_id", + "type_info": "Int4" + }, + { + "ordinal": 5, + "name": "total_chunk_count", + "type_info": "Int4" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false, + false, + false, + false, + true, + false + ] + }, 
+ "hash": "47c2f23d9209d155f3f32fd21ef7931a02fe5ffaf2c4dc2f1e7a48c0e932c060" +} diff --git a/core/lib/dal/.sqlx/query-481d3cdb6c9a90843b240dba84377cb8f1340b483faedbbc2b71055aa5451cae.json b/core/lib/dal/.sqlx/query-481d3cdb6c9a90843b240dba84377cb8f1340b483faedbbc2b71055aa5451cae.json new file mode 100644 index 00000000000..3a9c7616c9c --- /dev/null +++ b/core/lib/dal/.sqlx/query-481d3cdb6c9a90843b240dba84377cb8f1340b483faedbbc2b71055aa5451cae.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n MAX(number) AS \"number\"\n FROM\n l1_batches\n WHERE\n is_finished = TRUE\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "number", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + null + ] + }, + "hash": "481d3cdb6c9a90843b240dba84377cb8f1340b483faedbbc2b71055aa5451cae" +} diff --git a/core/lib/dal/.sqlx/query-4d263992ed6d5abbd7d3ca43af9d772d8801b0ae673b7173ae08a1fa6cbf67b2.json b/core/lib/dal/.sqlx/query-4d263992ed6d5abbd7d3ca43af9d772d8801b0ae673b7173ae08a1fa6cbf67b2.json new file mode 100644 index 00000000000..b0fb8d4be23 --- /dev/null +++ b/core/lib/dal/.sqlx/query-4d263992ed6d5abbd7d3ca43af9d772d8801b0ae673b7173ae08a1fa6cbf67b2.json @@ -0,0 +1,59 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE prover_jobs_fri\n SET\n status = 'in_progress',\n attempts = attempts + 1,\n updated_at = NOW(),\n processing_started_at = NOW(),\n picked_by = $2\n WHERE\n id = (\n SELECT\n id\n FROM\n prover_jobs_fri\n WHERE\n status = 'queued'\n AND protocol_version = ANY ($1)\n ORDER BY\n aggregation_round DESC,\n l1_batch_number ASC,\n id ASC\n LIMIT\n 1\n FOR UPDATE\n SKIP LOCKED\n )\n RETURNING\n prover_jobs_fri.id,\n prover_jobs_fri.l1_batch_number,\n prover_jobs_fri.circuit_id,\n prover_jobs_fri.aggregation_round,\n prover_jobs_fri.sequence_number,\n prover_jobs_fri.depth,\n prover_jobs_fri.is_node_final_proof\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "id", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "l1_batch_number", + "type_info": "Int8" + }, + { + "ordinal": 2, + "name": "circuit_id", + "type_info": "Int2" + }, + { + "ordinal": 3, + "name": "aggregation_round", + "type_info": "Int2" + }, + { + "ordinal": 4, + "name": "sequence_number", + "type_info": "Int4" + }, + { + "ordinal": 5, + "name": "depth", + "type_info": "Int4" + }, + { + "ordinal": 6, + "name": "is_node_final_proof", + "type_info": "Bool" + } + ], + "parameters": { + "Left": [ + "Int4Array", + "Text" + ] + }, + "nullable": [ + false, + false, + false, + false, + false, + false, + false + ] + }, + "hash": "4d263992ed6d5abbd7d3ca43af9d772d8801b0ae673b7173ae08a1fa6cbf67b2" +} diff --git a/core/lib/dal/.sqlx/query-4d50dabc25d392e6b9d0dbe0e386ea7ef2c1178b1b0394a17442185b79f2d77d.json b/core/lib/dal/.sqlx/query-4d50dabc25d392e6b9d0dbe0e386ea7ef2c1178b1b0394a17442185b79f2d77d.json new file mode 100644 index 00000000000..e9a9425da3c --- /dev/null +++ b/core/lib/dal/.sqlx/query-4d50dabc25d392e6b9d0dbe0e386ea7ef2c1178b1b0394a17442185b79f2d77d.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "SELECT eth_txs.id FROM eth_txs_history JOIN eth_txs ON eth_txs.confirmed_eth_tx_history_id = eth_txs_history.id WHERE eth_txs_history.tx_hash = $1", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "id", + "type_info": "Int4" + } + ], + "parameters": { + "Left": [ + "Text" + ] + }, + "nullable": [ + false + ] + }, + "hash": "4d50dabc25d392e6b9d0dbe0e386ea7ef2c1178b1b0394a17442185b79f2d77d" +} diff --git 
a/core/lib/dal/.sqlx/query-4d84bb4e180b7267bee5e3c1f83c6d47e8e1b4b5124c82c1f35d405204fcf783.json b/core/lib/dal/.sqlx/query-4d84bb4e180b7267bee5e3c1f83c6d47e8e1b4b5124c82c1f35d405204fcf783.json new file mode 100644 index 00000000000..44d1506ac93 --- /dev/null +++ b/core/lib/dal/.sqlx/query-4d84bb4e180b7267bee5e3c1f83c6d47e8e1b4b5124c82c1f35d405204fcf783.json @@ -0,0 +1,82 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n *\n FROM\n eth_txs_history\n WHERE\n eth_tx_id = $1\n ORDER BY\n created_at DESC\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "id", + "type_info": "Int4" + }, + { + "ordinal": 1, + "name": "eth_tx_id", + "type_info": "Int4" + }, + { + "ordinal": 2, + "name": "tx_hash", + "type_info": "Text" + }, + { + "ordinal": 3, + "name": "created_at", + "type_info": "Timestamp" + }, + { + "ordinal": 4, + "name": "updated_at", + "type_info": "Timestamp" + }, + { + "ordinal": 5, + "name": "base_fee_per_gas", + "type_info": "Int8" + }, + { + "ordinal": 6, + "name": "priority_fee_per_gas", + "type_info": "Int8" + }, + { + "ordinal": 7, + "name": "confirmed_at", + "type_info": "Timestamp" + }, + { + "ordinal": 8, + "name": "signed_raw_tx", + "type_info": "Bytea" + }, + { + "ordinal": 9, + "name": "sent_at_block", + "type_info": "Int4" + }, + { + "ordinal": 10, + "name": "sent_at", + "type_info": "Timestamp" + } + ], + "parameters": { + "Left": [ + "Int4" + ] + }, + "nullable": [ + false, + false, + false, + false, + false, + false, + false, + true, + true, + true, + true + ] + }, + "hash": "4d84bb4e180b7267bee5e3c1f83c6d47e8e1b4b5124c82c1f35d405204fcf783" +} diff --git a/core/lib/dal/.sqlx/query-4d92a133a36afd682a84fbfd75aafca34d61347e0e2e29fb07ca3d1b8b1f309c.json b/core/lib/dal/.sqlx/query-4d92a133a36afd682a84fbfd75aafca34d61347e0e2e29fb07ca3d1b8b1f309c.json new file mode 100644 index 00000000000..f7ae37f4b7b --- /dev/null +++ b/core/lib/dal/.sqlx/query-4d92a133a36afd682a84fbfd75aafca34d61347e0e2e29fb07ca3d1b8b1f309c.json @@ -0,0 +1,18 @@ +{ + "db_name": "PostgreSQL", + "query": "\n INSERT INTO\n prover_fri_protocol_versions (\n id,\n recursion_scheduler_level_vk_hash,\n recursion_node_level_vk_hash,\n recursion_leaf_level_vk_hash,\n recursion_circuits_set_vks_hash,\n created_at\n )\n VALUES\n ($1, $2, $3, $4, $5, NOW())\n ON CONFLICT (id) DO NOTHING\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int4", + "Bytea", + "Bytea", + "Bytea", + "Bytea" + ] + }, + "nullable": [] + }, + "hash": "4d92a133a36afd682a84fbfd75aafca34d61347e0e2e29fb07ca3d1b8b1f309c" +} diff --git a/core/lib/dal/.sqlx/query-525123d4ec2b427f1c171f30d0937d8d542b4f14cf560972c005ab3cc13d1f63.json b/core/lib/dal/.sqlx/query-525123d4ec2b427f1c171f30d0937d8d542b4f14cf560972c005ab3cc13d1f63.json new file mode 100644 index 00000000000..7764425aa21 --- /dev/null +++ b/core/lib/dal/.sqlx/query-525123d4ec2b427f1c171f30d0937d8d542b4f14cf560972c005ab3cc13d1f63.json @@ -0,0 +1,23 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n hash\n FROM\n miniblocks\n WHERE\n number BETWEEN $1 AND $2\n ORDER BY\n number\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "hash", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Int8", + "Int8" + ] + }, + "nullable": [ + false + ] + }, + "hash": "525123d4ec2b427f1c171f30d0937d8d542b4f14cf560972c005ab3cc13d1f63" +} diff --git a/core/lib/dal/.sqlx/query-532a80b0873871896dd318beba5ec427a099492905a1feee512dc43f39d10047.json 
b/core/lib/dal/.sqlx/query-532a80b0873871896dd318beba5ec427a099492905a1feee512dc43f39d10047.json new file mode 100644 index 00000000000..629dca2ea7f --- /dev/null +++ b/core/lib/dal/.sqlx/query-532a80b0873871896dd318beba5ec427a099492905a1feee512dc43f39d10047.json @@ -0,0 +1,15 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE eth_txs_history\n SET\n sent_at_block = $2,\n sent_at = NOW()\n WHERE\n id = $1\n AND sent_at_block IS NULL\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int4", + "Int4" + ] + }, + "nullable": [] + }, + "hash": "532a80b0873871896dd318beba5ec427a099492905a1feee512dc43f39d10047" +} diff --git a/core/lib/dal/.sqlx/query-534822a226068cde83ad8c30b569a8f447824a5ab466bb6eea1710e8aeaa2c56.json b/core/lib/dal/.sqlx/query-534822a226068cde83ad8c30b569a8f447824a5ab466bb6eea1710e8aeaa2c56.json new file mode 100644 index 00000000000..a85b4895b45 --- /dev/null +++ b/core/lib/dal/.sqlx/query-534822a226068cde83ad8c30b569a8f447824a5ab466bb6eea1710e8aeaa2c56.json @@ -0,0 +1,15 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE proof_compression_jobs_fri\n SET\n status = $1,\n updated_at = NOW()\n WHERE\n l1_batch_number = $2\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Text", + "Int8" + ] + }, + "nullable": [] + }, + "hash": "534822a226068cde83ad8c30b569a8f447824a5ab466bb6eea1710e8aeaa2c56" +} diff --git a/core/lib/dal/.sqlx/query-53c04fd528752c0e0ef7ffa1f68a7ea81d8d10c76bbae540013667e13230e2ea.json b/core/lib/dal/.sqlx/query-53c04fd528752c0e0ef7ffa1f68a7ea81d8d10c76bbae540013667e13230e2ea.json new file mode 100644 index 00000000000..e07b9192b5f --- /dev/null +++ b/core/lib/dal/.sqlx/query-53c04fd528752c0e0ef7ffa1f68a7ea81d8d10c76bbae540013667e13230e2ea.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n fee_account_address\n FROM\n l1_batches\n WHERE\n number = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "fee_account_address", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false + ] + }, + "hash": "53c04fd528752c0e0ef7ffa1f68a7ea81d8d10c76bbae540013667e13230e2ea" +} diff --git a/core/lib/dal/.sqlx/query-53f78fdee39b113d2f55f6f951bd94f28b7b2b60d551d552a9b0bab1f1791e39.json b/core/lib/dal/.sqlx/query-53f78fdee39b113d2f55f6f951bd94f28b7b2b60d551d552a9b0bab1f1791e39.json new file mode 100644 index 00000000000..15a10f7ce3c --- /dev/null +++ b/core/lib/dal/.sqlx/query-53f78fdee39b113d2f55f6f951bd94f28b7b2b60d551d552a9b0bab1f1791e39.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n attempts\n FROM\n leaf_aggregation_witness_jobs_fri\n WHERE\n id = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "attempts", + "type_info": "Int2" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false + ] + }, + "hash": "53f78fdee39b113d2f55f6f951bd94f28b7b2b60d551d552a9b0bab1f1791e39" +} diff --git a/core/lib/dal/.sqlx/query-5503575d9377785894de6cf6139a8d4768c6a803a1a90889e5a1b8254c315231.json b/core/lib/dal/.sqlx/query-5503575d9377785894de6cf6139a8d4768c6a803a1a90889e5a1b8254c315231.json new file mode 100644 index 00000000000..5f27c7549b4 --- /dev/null +++ b/core/lib/dal/.sqlx/query-5503575d9377785894de6cf6139a8d4768c6a803a1a90889e5a1b8254c315231.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "INSERT INTO eth_txs (raw_tx, nonce, tx_type, contract_address, predicted_gas_cost, created_at, updated_at) VALUES ('\\x00', 0, $1, '', 0, now(), now()) RETURNING id", + 
"describe": { + "columns": [ + { + "ordinal": 0, + "name": "id", + "type_info": "Int4" + } + ], + "parameters": { + "Left": [ + "Text" + ] + }, + "nullable": [ + false + ] + }, + "hash": "5503575d9377785894de6cf6139a8d4768c6a803a1a90889e5a1b8254c315231" +} diff --git a/core/lib/dal/.sqlx/query-556f9b9e82d3a9399660dfa4bbf252f26335699a4e7f0347d7e894320245271d.json b/core/lib/dal/.sqlx/query-556f9b9e82d3a9399660dfa4bbf252f26335699a4e7f0347d7e894320245271d.json new file mode 100644 index 00000000000..1dcfa982c51 --- /dev/null +++ b/core/lib/dal/.sqlx/query-556f9b9e82d3a9399660dfa4bbf252f26335699a4e7f0347d7e894320245271d.json @@ -0,0 +1,15 @@ +{ + "db_name": "PostgreSQL", + "query": "\n INSERT INTO\n events_queue (l1_batch_number, serialized_events_queue)\n VALUES\n ($1, $2)\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8", + "Jsonb" + ] + }, + "nullable": [] + }, + "hash": "556f9b9e82d3a9399660dfa4bbf252f26335699a4e7f0347d7e894320245271d" +} diff --git a/core/lib/dal/.sqlx/query-55b0b4c569c0aaf9741afc85400ecd50a04799ffd36be0e17c56f47fcdbc8f60.json b/core/lib/dal/.sqlx/query-55b0b4c569c0aaf9741afc85400ecd50a04799ffd36be0e17c56f47fcdbc8f60.json new file mode 100644 index 00000000000..6478bb53538 --- /dev/null +++ b/core/lib/dal/.sqlx/query-55b0b4c569c0aaf9741afc85400ecd50a04799ffd36be0e17c56f47fcdbc8f60.json @@ -0,0 +1,14 @@ +{ + "db_name": "PostgreSQL", + "query": "\n DELETE FROM l1_batches\n WHERE\n number > $1\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [] + }, + "hash": "55b0b4c569c0aaf9741afc85400ecd50a04799ffd36be0e17c56f47fcdbc8f60" +} diff --git a/core/lib/dal/.sqlx/query-5659480e5d79dab3399e35539b240e7eb9f598999c28015a504605f88bf84b33.json b/core/lib/dal/.sqlx/query-5659480e5d79dab3399e35539b240e7eb9f598999c28015a504605f88bf84b33.json new file mode 100644 index 00000000000..399b0d02845 --- /dev/null +++ b/core/lib/dal/.sqlx/query-5659480e5d79dab3399e35539b240e7eb9f598999c28015a504605f88bf84b33.json @@ -0,0 +1,88 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n *\n FROM\n eth_txs\n WHERE\n id > (\n SELECT\n COALESCE(MAX(eth_tx_id), 0)\n FROM\n eth_txs_history\n )\n ORDER BY\n id\n LIMIT\n $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "id", + "type_info": "Int4" + }, + { + "ordinal": 1, + "name": "nonce", + "type_info": "Int8" + }, + { + "ordinal": 2, + "name": "raw_tx", + "type_info": "Bytea" + }, + { + "ordinal": 3, + "name": "contract_address", + "type_info": "Text" + }, + { + "ordinal": 4, + "name": "tx_type", + "type_info": "Text" + }, + { + "ordinal": 5, + "name": "gas_used", + "type_info": "Int8" + }, + { + "ordinal": 6, + "name": "created_at", + "type_info": "Timestamp" + }, + { + "ordinal": 7, + "name": "updated_at", + "type_info": "Timestamp" + }, + { + "ordinal": 8, + "name": "has_failed", + "type_info": "Bool" + }, + { + "ordinal": 9, + "name": "sent_at_block", + "type_info": "Int4" + }, + { + "ordinal": 10, + "name": "confirmed_eth_tx_history_id", + "type_info": "Int4" + }, + { + "ordinal": 11, + "name": "predicted_gas_cost", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false, + false, + false, + false, + false, + true, + false, + false, + false, + true, + true, + false + ] + }, + "hash": "5659480e5d79dab3399e35539b240e7eb9f598999c28015a504605f88bf84b33" +} diff --git a/core/lib/dal/.sqlx/query-5821f1446983260168cec366af26009503182c300877e74a8539f231050e6f85.json 
b/core/lib/dal/.sqlx/query-5821f1446983260168cec366af26009503182c300877e74a8539f231050e6f85.json new file mode 100644 index 00000000000..86877a48dd4 --- /dev/null +++ b/core/lib/dal/.sqlx/query-5821f1446983260168cec366af26009503182c300877e74a8539f231050e6f85.json @@ -0,0 +1,15 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE witness_inputs_fri\n SET\n status = $1,\n updated_at = NOW()\n WHERE\n l1_batch_number = $2\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Text", + "Int8" + ] + }, + "nullable": [] + }, + "hash": "5821f1446983260168cec366af26009503182c300877e74a8539f231050e6f85" +} diff --git a/core/lib/dal/.sqlx/query-5880a85667ccc26d392ff6272e317afe4e38bcfe5ce93bf229d68622066ab8a1.json b/core/lib/dal/.sqlx/query-5880a85667ccc26d392ff6272e317afe4e38bcfe5ce93bf229d68622066ab8a1.json new file mode 100644 index 00000000000..31ce6f31993 --- /dev/null +++ b/core/lib/dal/.sqlx/query-5880a85667ccc26d392ff6272e317afe4e38bcfe5ce93bf229d68622066ab8a1.json @@ -0,0 +1,94 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n miniblocks.number,\n COALESCE(\n miniblocks.l1_batch_number,\n (\n SELECT\n (MAX(number) + 1)\n FROM\n l1_batches\n )\n ) AS \"l1_batch_number!\",\n (\n SELECT\n MAX(m2.number)\n FROM\n miniblocks m2\n WHERE\n miniblocks.l1_batch_number = m2.l1_batch_number\n ) AS \"last_batch_miniblock?\",\n miniblocks.timestamp,\n miniblocks.l1_gas_price,\n miniblocks.l2_fair_gas_price,\n miniblocks.fair_pubdata_price,\n miniblocks.bootloader_code_hash,\n miniblocks.default_aa_code_hash,\n miniblocks.virtual_blocks,\n miniblocks.hash,\n miniblocks.protocol_version AS \"protocol_version!\",\n l1_batches.fee_account_address AS \"fee_account_address?\"\n FROM\n miniblocks\n LEFT JOIN l1_batches ON miniblocks.l1_batch_number = l1_batches.number\n WHERE\n miniblocks.number = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "number", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "l1_batch_number!", + "type_info": "Int8" + }, + { + "ordinal": 2, + "name": "last_batch_miniblock?", + "type_info": "Int8" + }, + { + "ordinal": 3, + "name": "timestamp", + "type_info": "Int8" + }, + { + "ordinal": 4, + "name": "l1_gas_price", + "type_info": "Int8" + }, + { + "ordinal": 5, + "name": "l2_fair_gas_price", + "type_info": "Int8" + }, + { + "ordinal": 6, + "name": "fair_pubdata_price", + "type_info": "Int8" + }, + { + "ordinal": 7, + "name": "bootloader_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 8, + "name": "default_aa_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 9, + "name": "virtual_blocks", + "type_info": "Int8" + }, + { + "ordinal": 10, + "name": "hash", + "type_info": "Bytea" + }, + { + "ordinal": 11, + "name": "protocol_version!", + "type_info": "Int4" + }, + { + "ordinal": 12, + "name": "fee_account_address?", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false, + null, + null, + false, + false, + false, + true, + true, + true, + false, + false, + true, + false + ] + }, + "hash": "5880a85667ccc26d392ff6272e317afe4e38bcfe5ce93bf229d68622066ab8a1" +} diff --git a/core/lib/dal/.sqlx/query-58aed39245c72d231b268ce83105bb2036d21f60d4c6934f9145730ac35c04de.json b/core/lib/dal/.sqlx/query-58aed39245c72d231b268ce83105bb2036d21f60d4c6934f9145730ac35c04de.json new file mode 100644 index 00000000000..502d14e05ea --- /dev/null +++ b/core/lib/dal/.sqlx/query-58aed39245c72d231b268ce83105bb2036d21f60d4c6934f9145730ac35c04de.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + 
"query": "\n SELECT\n l1_batch_number\n FROM\n proof_generation_details\n WHERE\n status = 'ready_to_be_proven'\n ORDER BY\n l1_batch_number ASC\n LIMIT\n 1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "l1_batch_number", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false + ] + }, + "hash": "58aed39245c72d231b268ce83105bb2036d21f60d4c6934f9145730ac35c04de" +} diff --git a/core/lib/dal/.sqlx/query-59cb0dd78fadc121e2b1ebbc8a063f089c91aead2bc9abb284697e65840f1e8f.json b/core/lib/dal/.sqlx/query-59cb0dd78fadc121e2b1ebbc8a063f089c91aead2bc9abb284697e65840f1e8f.json new file mode 100644 index 00000000000..8a0cb19b390 --- /dev/null +++ b/core/lib/dal/.sqlx/query-59cb0dd78fadc121e2b1ebbc8a063f089c91aead2bc9abb284697e65840f1e8f.json @@ -0,0 +1,16 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE tokens\n SET\n usd_price = $2,\n usd_price_updated_at = $3,\n updated_at = NOW()\n WHERE\n l1_address = $1\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Bytea", + "Numeric", + "Timestamp" + ] + }, + "nullable": [] + }, + "hash": "59cb0dd78fadc121e2b1ebbc8a063f089c91aead2bc9abb284697e65840f1e8f" +} diff --git a/core/lib/dal/.sqlx/query-5aaed2a975042cc9b7b9d88e5fd5db07667280abef27cc73159d2fd9c95b209b.json b/core/lib/dal/.sqlx/query-5aaed2a975042cc9b7b9d88e5fd5db07667280abef27cc73159d2fd9c95b209b.json new file mode 100644 index 00000000000..069cd195639 --- /dev/null +++ b/core/lib/dal/.sqlx/query-5aaed2a975042cc9b7b9d88e5fd5db07667280abef27cc73159d2fd9c95b209b.json @@ -0,0 +1,256 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n number,\n timestamp,\n is_finished,\n l1_tx_count,\n l2_tx_count,\n fee_account_address,\n bloom,\n priority_ops_onchain_data,\n hash,\n parent_hash,\n commitment,\n compressed_write_logs,\n compressed_contracts,\n eth_prove_tx_id,\n eth_commit_tx_id,\n eth_execute_tx_id,\n merkle_root_hash,\n l2_to_l1_logs,\n l2_to_l1_messages,\n used_contract_hashes,\n compressed_initial_writes,\n compressed_repeated_writes,\n l2_l1_compressed_messages,\n l2_l1_merkle_root,\n l1_gas_price,\n l2_fair_gas_price,\n rollup_last_leaf_index,\n zkporter_is_available,\n bootloader_code_hash,\n default_aa_code_hash,\n base_fee_per_gas,\n aux_data_hash,\n pass_through_data_hash,\n meta_parameters_hash,\n protocol_version,\n compressed_state_diffs,\n system_logs,\n events_queue_commitment,\n bootloader_initial_content_commitment,\n pubdata_input\n FROM\n l1_batches\n LEFT JOIN commitments ON commitments.l1_batch_number = l1_batches.number\n WHERE\n eth_prove_tx_id IS NOT NULL\n AND eth_execute_tx_id IS NULL\n ORDER BY\n number\n LIMIT\n $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "number", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "timestamp", + "type_info": "Int8" + }, + { + "ordinal": 2, + "name": "is_finished", + "type_info": "Bool" + }, + { + "ordinal": 3, + "name": "l1_tx_count", + "type_info": "Int4" + }, + { + "ordinal": 4, + "name": "l2_tx_count", + "type_info": "Int4" + }, + { + "ordinal": 5, + "name": "fee_account_address", + "type_info": "Bytea" + }, + { + "ordinal": 6, + "name": "bloom", + "type_info": "Bytea" + }, + { + "ordinal": 7, + "name": "priority_ops_onchain_data", + "type_info": "ByteaArray" + }, + { + "ordinal": 8, + "name": "hash", + "type_info": "Bytea" + }, + { + "ordinal": 9, + "name": "parent_hash", + "type_info": "Bytea" + }, + { + "ordinal": 10, + "name": "commitment", + "type_info": "Bytea" + }, + { + "ordinal": 11, + "name": 
"compressed_write_logs", + "type_info": "Bytea" + }, + { + "ordinal": 12, + "name": "compressed_contracts", + "type_info": "Bytea" + }, + { + "ordinal": 13, + "name": "eth_prove_tx_id", + "type_info": "Int4" + }, + { + "ordinal": 14, + "name": "eth_commit_tx_id", + "type_info": "Int4" + }, + { + "ordinal": 15, + "name": "eth_execute_tx_id", + "type_info": "Int4" + }, + { + "ordinal": 16, + "name": "merkle_root_hash", + "type_info": "Bytea" + }, + { + "ordinal": 17, + "name": "l2_to_l1_logs", + "type_info": "ByteaArray" + }, + { + "ordinal": 18, + "name": "l2_to_l1_messages", + "type_info": "ByteaArray" + }, + { + "ordinal": 19, + "name": "used_contract_hashes", + "type_info": "Jsonb" + }, + { + "ordinal": 20, + "name": "compressed_initial_writes", + "type_info": "Bytea" + }, + { + "ordinal": 21, + "name": "compressed_repeated_writes", + "type_info": "Bytea" + }, + { + "ordinal": 22, + "name": "l2_l1_compressed_messages", + "type_info": "Bytea" + }, + { + "ordinal": 23, + "name": "l2_l1_merkle_root", + "type_info": "Bytea" + }, + { + "ordinal": 24, + "name": "l1_gas_price", + "type_info": "Int8" + }, + { + "ordinal": 25, + "name": "l2_fair_gas_price", + "type_info": "Int8" + }, + { + "ordinal": 26, + "name": "rollup_last_leaf_index", + "type_info": "Int8" + }, + { + "ordinal": 27, + "name": "zkporter_is_available", + "type_info": "Bool" + }, + { + "ordinal": 28, + "name": "bootloader_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 29, + "name": "default_aa_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 30, + "name": "base_fee_per_gas", + "type_info": "Numeric" + }, + { + "ordinal": 31, + "name": "aux_data_hash", + "type_info": "Bytea" + }, + { + "ordinal": 32, + "name": "pass_through_data_hash", + "type_info": "Bytea" + }, + { + "ordinal": 33, + "name": "meta_parameters_hash", + "type_info": "Bytea" + }, + { + "ordinal": 34, + "name": "protocol_version", + "type_info": "Int4" + }, + { + "ordinal": 35, + "name": "compressed_state_diffs", + "type_info": "Bytea" + }, + { + "ordinal": 36, + "name": "system_logs", + "type_info": "ByteaArray" + }, + { + "ordinal": 37, + "name": "events_queue_commitment", + "type_info": "Bytea" + }, + { + "ordinal": 38, + "name": "bootloader_initial_content_commitment", + "type_info": "Bytea" + }, + { + "ordinal": 39, + "name": "pubdata_input", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false, + false, + false, + false, + false, + false, + false, + false, + true, + true, + true, + true, + true, + true, + true, + true, + true, + false, + false, + false, + true, + true, + true, + true, + false, + false, + true, + true, + true, + true, + false, + true, + true, + true, + true, + true, + false, + true, + true, + true + ] + }, + "hash": "5aaed2a975042cc9b7b9d88e5fd5db07667280abef27cc73159d2fd9c95b209b" +} diff --git a/core/lib/dal/.sqlx/query-5c7b6b58261faa0a164181987eec4055c22895316ce68d9d41619db7fcfb7563.json b/core/lib/dal/.sqlx/query-5c7b6b58261faa0a164181987eec4055c22895316ce68d9d41619db7fcfb7563.json new file mode 100644 index 00000000000..cd76205be81 --- /dev/null +++ b/core/lib/dal/.sqlx/query-5c7b6b58261faa0a164181987eec4055c22895316ce68d9d41619db7fcfb7563.json @@ -0,0 +1,100 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n number,\n timestamp,\n hash,\n l1_tx_count,\n l2_tx_count,\n base_fee_per_gas,\n l1_gas_price,\n l2_fair_gas_price,\n gas_per_pubdata_limit,\n bootloader_code_hash,\n default_aa_code_hash,\n protocol_version,\n virtual_blocks,\n fair_pubdata_price\n FROM\n 
miniblocks\n WHERE\n number = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "number", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "timestamp", + "type_info": "Int8" + }, + { + "ordinal": 2, + "name": "hash", + "type_info": "Bytea" + }, + { + "ordinal": 3, + "name": "l1_tx_count", + "type_info": "Int4" + }, + { + "ordinal": 4, + "name": "l2_tx_count", + "type_info": "Int4" + }, + { + "ordinal": 5, + "name": "base_fee_per_gas", + "type_info": "Numeric" + }, + { + "ordinal": 6, + "name": "l1_gas_price", + "type_info": "Int8" + }, + { + "ordinal": 7, + "name": "l2_fair_gas_price", + "type_info": "Int8" + }, + { + "ordinal": 8, + "name": "gas_per_pubdata_limit", + "type_info": "Int8" + }, + { + "ordinal": 9, + "name": "bootloader_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 10, + "name": "default_aa_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 11, + "name": "protocol_version", + "type_info": "Int4" + }, + { + "ordinal": 12, + "name": "virtual_blocks", + "type_info": "Int8" + }, + { + "ordinal": 13, + "name": "fair_pubdata_price", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false, + false, + false, + false, + false, + false, + false, + false, + false, + true, + true, + true, + false, + true + ] + }, + "hash": "5c7b6b58261faa0a164181987eec4055c22895316ce68d9d41619db7fcfb7563" +} diff --git a/core/lib/dal/.sqlx/query-5d493cbce749cc5b56d4069423597b16599abaf51df0f19effe1a536376cf6a6.json b/core/lib/dal/.sqlx/query-5d493cbce749cc5b56d4069423597b16599abaf51df0f19effe1a536376cf6a6.json new file mode 100644 index 00000000000..eba36994fb3 --- /dev/null +++ b/core/lib/dal/.sqlx/query-5d493cbce749cc5b56d4069423597b16599abaf51df0f19effe1a536376cf6a6.json @@ -0,0 +1,28 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n bootloader_code_hash,\n default_account_code_hash\n FROM\n protocol_versions\n WHERE\n id = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "bootloader_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 1, + "name": "default_account_code_hash", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Int4" + ] + }, + "nullable": [ + false, + false + ] + }, + "hash": "5d493cbce749cc5b56d4069423597b16599abaf51df0f19effe1a536376cf6a6" +} diff --git a/core/lib/dal/.sqlx/query-5e781f84ec41edd0941fa84de837effac442434c6e734d977e6682a7484abe7f.json b/core/lib/dal/.sqlx/query-5e781f84ec41edd0941fa84de837effac442434c6e734d977e6682a7484abe7f.json new file mode 100644 index 00000000000..4958f38f535 --- /dev/null +++ b/core/lib/dal/.sqlx/query-5e781f84ec41edd0941fa84de837effac442434c6e734d977e6682a7484abe7f.json @@ -0,0 +1,35 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE proof_compression_jobs_fri\n SET\n status = 'queued',\n updated_at = NOW(),\n processing_started_at = NOW()\n WHERE\n (\n status = 'in_progress'\n AND processing_started_at <= NOW() - $1::INTERVAL\n AND attempts < $2\n )\n OR (\n status = 'failed'\n AND attempts < $2\n )\n RETURNING\n l1_batch_number,\n status,\n attempts\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "l1_batch_number", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "status", + "type_info": "Text" + }, + { + "ordinal": 2, + "name": "attempts", + "type_info": "Int2" + } + ], + "parameters": { + "Left": [ + "Interval", + "Int2" + ] + }, + "nullable": [ + false, + false, + false + ] + }, + "hash": "5e781f84ec41edd0941fa84de837effac442434c6e734d977e6682a7484abe7f" +} diff --git 
a/core/lib/dal/.sqlx/query-5f6885b5457aaa78e10917ae5b8cd0bc0e8923a6bae64f22f09242766835ee0c.json b/core/lib/dal/.sqlx/query-5f6885b5457aaa78e10917ae5b8cd0bc0e8923a6bae64f22f09242766835ee0c.json new file mode 100644 index 00000000000..b57400c28f5 --- /dev/null +++ b/core/lib/dal/.sqlx/query-5f6885b5457aaa78e10917ae5b8cd0bc0e8923a6bae64f22f09242766835ee0c.json @@ -0,0 +1,74 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n id,\n contract_address,\n source_code,\n contract_name,\n zk_compiler_version,\n compiler_version,\n optimization_used,\n optimizer_mode,\n constructor_arguments,\n is_system\n FROM\n contract_verification_requests\n WHERE\n status = 'successful'\n ORDER BY\n id\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "id", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "contract_address", + "type_info": "Bytea" + }, + { + "ordinal": 2, + "name": "source_code", + "type_info": "Text" + }, + { + "ordinal": 3, + "name": "contract_name", + "type_info": "Text" + }, + { + "ordinal": 4, + "name": "zk_compiler_version", + "type_info": "Text" + }, + { + "ordinal": 5, + "name": "compiler_version", + "type_info": "Text" + }, + { + "ordinal": 6, + "name": "optimization_used", + "type_info": "Bool" + }, + { + "ordinal": 7, + "name": "optimizer_mode", + "type_info": "Text" + }, + { + "ordinal": 8, + "name": "constructor_arguments", + "type_info": "Bytea" + }, + { + "ordinal": 9, + "name": "is_system", + "type_info": "Bool" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false, + false, + false, + false, + false, + false, + false, + true, + false, + false + ] + }, + "hash": "5f6885b5457aaa78e10917ae5b8cd0bc0e8923a6bae64f22f09242766835ee0c" +} diff --git a/core/lib/dal/.sqlx/query-5f8fc05ae782846898295d210dd3d55ff2b1510868dfe80d14fffa3f5ff07b83.json b/core/lib/dal/.sqlx/query-5f8fc05ae782846898295d210dd3d55ff2b1510868dfe80d14fffa3f5ff07b83.json new file mode 100644 index 00000000000..4879c6095a1 --- /dev/null +++ b/core/lib/dal/.sqlx/query-5f8fc05ae782846898295d210dd3d55ff2b1510868dfe80d14fffa3f5ff07b83.json @@ -0,0 +1,15 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE l1_batches\n SET\n predicted_commit_gas_cost = $2,\n updated_at = NOW()\n WHERE\n number = $1\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8", + "Int8" + ] + }, + "nullable": [] + }, + "hash": "5f8fc05ae782846898295d210dd3d55ff2b1510868dfe80d14fffa3f5ff07b83" +} diff --git a/core/lib/dal/.sqlx/query-61b2b858d4636809c21838635aa52aeb5f06c26f68d131dd242f6ed68816c513.json b/core/lib/dal/.sqlx/query-61b2b858d4636809c21838635aa52aeb5f06c26f68d131dd242f6ed68816c513.json new file mode 100644 index 00000000000..c713af9a210 --- /dev/null +++ b/core/lib/dal/.sqlx/query-61b2b858d4636809c21838635aa52aeb5f06c26f68d131dd242f6ed68816c513.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n l1_batch_number\n FROM\n prover_jobs_fri\n WHERE\n status <> 'skipped'\n AND status <> 'successful'\n AND aggregation_round = $1\n ORDER BY\n l1_batch_number ASC\n LIMIT\n 1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "l1_batch_number", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [ + "Int2" + ] + }, + "nullable": [ + false + ] + }, + "hash": "61b2b858d4636809c21838635aa52aeb5f06c26f68d131dd242f6ed68816c513" +} diff --git a/core/lib/dal/.sqlx/query-61bc330d6d1b5fddec78342c1b0f00e82b0b3ad9ae36bf4fe44d7e85b74c6f49.json b/core/lib/dal/.sqlx/query-61bc330d6d1b5fddec78342c1b0f00e82b0b3ad9ae36bf4fe44d7e85b74c6f49.json new file 
mode 100644 index 00000000000..2c0454b0dd8 --- /dev/null +++ b/core/lib/dal/.sqlx/query-61bc330d6d1b5fddec78342c1b0f00e82b0b3ad9ae36bf4fe44d7e85b74c6f49.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n MAX(priority_op_id) AS \"op_id\"\n FROM\n transactions\n WHERE\n is_priority = TRUE\n AND miniblock_number IS NOT NULL\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "op_id", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + null + ] + }, + "hash": "61bc330d6d1b5fddec78342c1b0f00e82b0b3ad9ae36bf4fe44d7e85b74c6f49" +} diff --git a/core/lib/dal/.sqlx/query-65cc4517c3693c8bdb66b332151d4cb46ca093129707ee14f2fa42dc1800cc9e.json b/core/lib/dal/.sqlx/query-65cc4517c3693c8bdb66b332151d4cb46ca093129707ee14f2fa42dc1800cc9e.json new file mode 100644 index 00000000000..5f967c6d265 --- /dev/null +++ b/core/lib/dal/.sqlx/query-65cc4517c3693c8bdb66b332151d4cb46ca093129707ee14f2fa42dc1800cc9e.json @@ -0,0 +1,27 @@ +{ + "db_name": "PostgreSQL", + "query": "\n INSERT INTO\n miniblocks (\n number,\n timestamp,\n hash,\n l1_tx_count,\n l2_tx_count,\n base_fee_per_gas,\n l1_gas_price,\n l2_fair_gas_price,\n gas_per_pubdata_limit,\n bootloader_code_hash,\n default_aa_code_hash,\n protocol_version,\n virtual_blocks,\n fair_pubdata_price,\n created_at,\n updated_at\n )\n VALUES\n ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, NOW(), NOW())\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8", + "Int8", + "Bytea", + "Int4", + "Int4", + "Numeric", + "Int8", + "Int8", + "Int8", + "Bytea", + "Bytea", + "Int4", + "Int8", + "Int8" + ] + }, + "nullable": [] + }, + "hash": "65cc4517c3693c8bdb66b332151d4cb46ca093129707ee14f2fa42dc1800cc9e" +} diff --git a/core/lib/dal/.sqlx/query-66554ab87e5fe4776786217d1f71a525c87d390df21250ab4dce08e09be72591.json b/core/lib/dal/.sqlx/query-66554ab87e5fe4776786217d1f71a525c87d390df21250ab4dce08e09be72591.json new file mode 100644 index 00000000000..eb2ee1d31bc --- /dev/null +++ b/core/lib/dal/.sqlx/query-66554ab87e5fe4776786217d1f71a525c87d390df21250ab4dce08e09be72591.json @@ -0,0 +1,98 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n number,\n timestamp,\n hash,\n l1_tx_count,\n l2_tx_count,\n base_fee_per_gas,\n l1_gas_price,\n l2_fair_gas_price,\n gas_per_pubdata_limit,\n bootloader_code_hash,\n default_aa_code_hash,\n protocol_version,\n virtual_blocks,\n fair_pubdata_price\n FROM\n miniblocks\n ORDER BY\n number DESC\n LIMIT\n 1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "number", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "timestamp", + "type_info": "Int8" + }, + { + "ordinal": 2, + "name": "hash", + "type_info": "Bytea" + }, + { + "ordinal": 3, + "name": "l1_tx_count", + "type_info": "Int4" + }, + { + "ordinal": 4, + "name": "l2_tx_count", + "type_info": "Int4" + }, + { + "ordinal": 5, + "name": "base_fee_per_gas", + "type_info": "Numeric" + }, + { + "ordinal": 6, + "name": "l1_gas_price", + "type_info": "Int8" + }, + { + "ordinal": 7, + "name": "l2_fair_gas_price", + "type_info": "Int8" + }, + { + "ordinal": 8, + "name": "gas_per_pubdata_limit", + "type_info": "Int8" + }, + { + "ordinal": 9, + "name": "bootloader_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 10, + "name": "default_aa_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 11, + "name": "protocol_version", + "type_info": "Int4" + }, + { + "ordinal": 12, + "name": "virtual_blocks", + "type_info": "Int8" + }, + { + "ordinal": 13, + "name": 
"fair_pubdata_price", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false, + false, + false, + false, + false, + false, + false, + false, + false, + true, + true, + true, + false, + true + ] + }, + "hash": "66554ab87e5fe4776786217d1f71a525c87d390df21250ab4dce08e09be72591" +} diff --git a/core/lib/dal/.sqlx/query-6692ff6c0fbb2fc94f5cd2837a43ce80f9b2b27758651ccfc09df61a4ae8a363.json b/core/lib/dal/.sqlx/query-6692ff6c0fbb2fc94f5cd2837a43ce80f9b2b27758651ccfc09df61a4ae8a363.json new file mode 100644 index 00000000000..586cace7617 --- /dev/null +++ b/core/lib/dal/.sqlx/query-6692ff6c0fbb2fc94f5cd2837a43ce80f9b2b27758651ccfc09df61a4ae8a363.json @@ -0,0 +1,88 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n *\n FROM\n eth_txs\n WHERE\n id = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "id", + "type_info": "Int4" + }, + { + "ordinal": 1, + "name": "nonce", + "type_info": "Int8" + }, + { + "ordinal": 2, + "name": "raw_tx", + "type_info": "Bytea" + }, + { + "ordinal": 3, + "name": "contract_address", + "type_info": "Text" + }, + { + "ordinal": 4, + "name": "tx_type", + "type_info": "Text" + }, + { + "ordinal": 5, + "name": "gas_used", + "type_info": "Int8" + }, + { + "ordinal": 6, + "name": "created_at", + "type_info": "Timestamp" + }, + { + "ordinal": 7, + "name": "updated_at", + "type_info": "Timestamp" + }, + { + "ordinal": 8, + "name": "has_failed", + "type_info": "Bool" + }, + { + "ordinal": 9, + "name": "sent_at_block", + "type_info": "Int4" + }, + { + "ordinal": 10, + "name": "confirmed_eth_tx_history_id", + "type_info": "Int4" + }, + { + "ordinal": 11, + "name": "predicted_gas_cost", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [ + "Int4" + ] + }, + "nullable": [ + false, + false, + false, + false, + false, + true, + false, + false, + false, + true, + true, + false + ] + }, + "hash": "6692ff6c0fbb2fc94f5cd2837a43ce80f9b2b27758651ccfc09df61a4ae8a363" +} diff --git a/core/lib/dal/.sqlx/query-66e012ce974c38d9fe84cfc7eb28927f9e976319a305e0928ff366d535a97104.json b/core/lib/dal/.sqlx/query-66e012ce974c38d9fe84cfc7eb28927f9e976319a305e0928ff366d535a97104.json new file mode 100644 index 00000000000..e07fbfbd70b --- /dev/null +++ b/core/lib/dal/.sqlx/query-66e012ce974c38d9fe84cfc7eb28927f9e976319a305e0928ff366d535a97104.json @@ -0,0 +1,92 @@ +{ + "db_name": "PostgreSQL", + "query": "\n INSERT INTO\n eth_txs (\n raw_tx,\n nonce,\n tx_type,\n contract_address,\n predicted_gas_cost,\n created_at,\n updated_at\n )\n VALUES\n ($1, $2, $3, $4, $5, NOW(), NOW())\n RETURNING\n *\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "id", + "type_info": "Int4" + }, + { + "ordinal": 1, + "name": "nonce", + "type_info": "Int8" + }, + { + "ordinal": 2, + "name": "raw_tx", + "type_info": "Bytea" + }, + { + "ordinal": 3, + "name": "contract_address", + "type_info": "Text" + }, + { + "ordinal": 4, + "name": "tx_type", + "type_info": "Text" + }, + { + "ordinal": 5, + "name": "gas_used", + "type_info": "Int8" + }, + { + "ordinal": 6, + "name": "created_at", + "type_info": "Timestamp" + }, + { + "ordinal": 7, + "name": "updated_at", + "type_info": "Timestamp" + }, + { + "ordinal": 8, + "name": "has_failed", + "type_info": "Bool" + }, + { + "ordinal": 9, + "name": "sent_at_block", + "type_info": "Int4" + }, + { + "ordinal": 10, + "name": "confirmed_eth_tx_history_id", + "type_info": "Int4" + }, + { + "ordinal": 11, + "name": "predicted_gas_cost", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [ + "Bytea", + 
"Int8", + "Text", + "Text", + "Int8" + ] + }, + "nullable": [ + false, + false, + false, + false, + false, + true, + false, + false, + false, + true, + true, + false + ] + }, + "hash": "66e012ce974c38d9fe84cfc7eb28927f9e976319a305e0928ff366d535a97104" +} diff --git a/core/lib/dal/.sqlx/query-68936a53e5b80576f3f341523e6843eb48b5e26ee92cd8476f50251e8c32610d.json b/core/lib/dal/.sqlx/query-68936a53e5b80576f3f341523e6843eb48b5e26ee92cd8476f50251e8c32610d.json new file mode 100644 index 00000000000..69b24831d73 --- /dev/null +++ b/core/lib/dal/.sqlx/query-68936a53e5b80576f3f341523e6843eb48b5e26ee92cd8476f50251e8c32610d.json @@ -0,0 +1,26 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n COUNT(*) AS \"count!\"\n FROM\n l1_batches\n WHERE\n number = $1\n AND hash = $2\n AND merkle_root_hash = $3\n AND parent_hash = $4\n AND l2_l1_merkle_root = $5\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "count!", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [ + "Int8", + "Bytea", + "Bytea", + "Bytea", + "Bytea" + ] + }, + "nullable": [ + null + ] + }, + "hash": "68936a53e5b80576f3f341523e6843eb48b5e26ee92cd8476f50251e8c32610d" +} diff --git a/core/lib/dal/.sqlx/query-68c891ee9d71cffe709731f2804b734d5d255e36e48668b3bfc25a0f86ea52e7.json b/core/lib/dal/.sqlx/query-68c891ee9d71cffe709731f2804b734d5d255e36e48668b3bfc25a0f86ea52e7.json new file mode 100644 index 00000000000..1d5336030a4 --- /dev/null +++ b/core/lib/dal/.sqlx/query-68c891ee9d71cffe709731f2804b734d5d255e36e48668b3bfc25a0f86ea52e7.json @@ -0,0 +1,40 @@ +{ + "db_name": "PostgreSQL", + "query": "\n INSERT INTO\n transactions (\n hash,\n is_priority,\n initiator_address,\n nonce,\n signature,\n gas_limit,\n max_fee_per_gas,\n max_priority_fee_per_gas,\n gas_per_pubdata_limit,\n input,\n data,\n tx_format,\n contract_address,\n value,\n paymaster,\n paymaster_input,\n execution_info,\n received_at,\n created_at,\n updated_at\n )\n VALUES\n (\n $1,\n FALSE,\n $2,\n $3,\n $4,\n $5,\n $6,\n $7,\n $8,\n $9,\n $10,\n $11,\n $12,\n $13,\n $14,\n $15,\n JSONB_BUILD_OBJECT('gas_used', $16::BIGINT, 'storage_writes', $17::INT, 'contracts_used', $18::INT),\n $19,\n NOW(),\n NOW()\n )\n ON CONFLICT (initiator_address, nonce) DO\n UPDATE\n SET\n hash = $1,\n signature = $4,\n gas_limit = $5,\n max_fee_per_gas = $6,\n max_priority_fee_per_gas = $7,\n gas_per_pubdata_limit = $8,\n input = $9,\n data = $10,\n tx_format = $11,\n contract_address = $12,\n value = $13,\n paymaster = $14,\n paymaster_input = $15,\n execution_info = JSONB_BUILD_OBJECT('gas_used', $16::BIGINT, 'storage_writes', $17::INT, 'contracts_used', $18::INT),\n in_mempool = FALSE,\n received_at = $19,\n created_at = NOW(),\n updated_at = NOW(),\n error = NULL\n WHERE\n transactions.is_priority = FALSE\n AND transactions.miniblock_number IS NULL\n RETURNING\n (\n SELECT\n hash\n FROM\n transactions\n WHERE\n transactions.initiator_address = $2\n AND transactions.nonce = $3\n ) IS NOT NULL AS \"is_replaced!\"\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "is_replaced!", + "type_info": "Bool" + } + ], + "parameters": { + "Left": [ + "Bytea", + "Bytea", + "Int8", + "Bytea", + "Numeric", + "Numeric", + "Numeric", + "Numeric", + "Bytea", + "Jsonb", + "Int4", + "Bytea", + "Numeric", + "Bytea", + "Bytea", + "Int8", + "Int4", + "Int4", + "Timestamp" + ] + }, + "nullable": [ + null + ] + }, + "hash": "68c891ee9d71cffe709731f2804b734d5d255e36e48668b3bfc25a0f86ea52e7" +} diff --git 
a/core/lib/dal/.sqlx/query-6ae2ed34230beae0e86c584e293e7ee767e4c98706246eb113498c0f817f5f38.json b/core/lib/dal/.sqlx/query-6ae2ed34230beae0e86c584e293e7ee767e4c98706246eb113498c0f817f5f38.json new file mode 100644 index 00000000000..08dff439a7c --- /dev/null +++ b/core/lib/dal/.sqlx/query-6ae2ed34230beae0e86c584e293e7ee767e4c98706246eb113498c0f817f5f38.json @@ -0,0 +1,17 @@ +{ + "db_name": "PostgreSQL", + "query": "\n INSERT INTO\n gpu_prover_queue_fri (\n instance_host,\n instance_port,\n instance_status,\n specialized_prover_group_id,\n zone,\n created_at,\n updated_at\n )\n VALUES\n (CAST($1::TEXT AS inet), $2, 'available', $3, $4, NOW(), NOW())\n ON CONFLICT (instance_host, instance_port, zone) DO\n UPDATE\n SET\n instance_status = 'available',\n specialized_prover_group_id = $3,\n zone = $4,\n updated_at = NOW()\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Text", + "Int4", + "Int2", + "Text" + ] + }, + "nullable": [] + }, + "hash": "6ae2ed34230beae0e86c584e293e7ee767e4c98706246eb113498c0f817f5f38" +} diff --git a/core/lib/dal/.sqlx/query-6b327df84d2b3b31d02db35fd5d91a8d67abcdb743a619ed0d1b9c16206a3c20.json b/core/lib/dal/.sqlx/query-6b327df84d2b3b31d02db35fd5d91a8d67abcdb743a619ed0d1b9c16206a3c20.json new file mode 100644 index 00000000000..d00622a1f5f --- /dev/null +++ b/core/lib/dal/.sqlx/query-6b327df84d2b3b31d02db35fd5d91a8d67abcdb743a619ed0d1b9c16206a3c20.json @@ -0,0 +1,12 @@ +{ + "db_name": "PostgreSQL", + "query": "\n DELETE FROM eth_txs\n WHERE\n id >= (\n SELECT\n MIN(id)\n FROM\n eth_txs\n WHERE\n has_failed = TRUE\n )\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [] + }, + "nullable": [] + }, + "hash": "6b327df84d2b3b31d02db35fd5d91a8d67abcdb743a619ed0d1b9c16206a3c20" +} diff --git a/core/lib/dal/.sqlx/query-6bd3094be764e6378fe52b5bb533260b49ce42daaf9dbe8075daf0a8e0ad9914.json b/core/lib/dal/.sqlx/query-6bd3094be764e6378fe52b5bb533260b49ce42daaf9dbe8075daf0a8e0ad9914.json new file mode 100644 index 00000000000..c90296e322c --- /dev/null +++ b/core/lib/dal/.sqlx/query-6bd3094be764e6378fe52b5bb533260b49ce42daaf9dbe8075daf0a8e0ad9914.json @@ -0,0 +1,12 @@ +{ + "db_name": "PostgreSQL", + "query": "\n DELETE FROM basic_witness_input_producer_jobs\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [] + }, + "nullable": [] + }, + "hash": "6bd3094be764e6378fe52b5bb533260b49ce42daaf9dbe8075daf0a8e0ad9914" +} diff --git a/core/lib/dal/.sqlx/query-6c0d03b1fbe6f47546bc34c6b2eab01cb2c55bf86d2c8c99abb1b7ca21cf75c0.json b/core/lib/dal/.sqlx/query-6c0d03b1fbe6f47546bc34c6b2eab01cb2c55bf86d2c8c99abb1b7ca21cf75c0.json new file mode 100644 index 00000000000..0ad799dd49d --- /dev/null +++ b/core/lib/dal/.sqlx/query-6c0d03b1fbe6f47546bc34c6b2eab01cb2c55bf86d2c8c99abb1b7ca21cf75c0.json @@ -0,0 +1,14 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE miniblocks\n SET\n protocol_version = $1\n WHERE\n l1_batch_number IS NULL\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int4" + ] + }, + "nullable": [] + }, + "hash": "6c0d03b1fbe6f47546bc34c6b2eab01cb2c55bf86d2c8c99abb1b7ca21cf75c0" +} diff --git a/core/lib/dal/.sqlx/query-708b2b3e40887e6d8d2d7aa20448a58479487686d774e6b2b1391347bdafe06d.json b/core/lib/dal/.sqlx/query-708b2b3e40887e6d8d2d7aa20448a58479487686d774e6b2b1391347bdafe06d.json new file mode 100644 index 00000000000..a63bd3ebeeb --- /dev/null +++ b/core/lib/dal/.sqlx/query-708b2b3e40887e6d8d2d7aa20448a58479487686d774e6b2b1391347bdafe06d.json @@ -0,0 +1,29 @@ +{ + "db_name": "PostgreSQL", + "query": 
"\n SELECT\n number,\n hash\n FROM\n miniblocks\n WHERE\n number >= $1\n ORDER BY\n number ASC\n LIMIT\n $2\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "number", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "hash", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Int8", + "Int8" + ] + }, + "nullable": [ + false, + false + ] + }, + "hash": "708b2b3e40887e6d8d2d7aa20448a58479487686d774e6b2b1391347bdafe06d" +} diff --git a/core/lib/dal/.sqlx/query-70979db81f473950b2fae7816dbad7fe3464f2619cee2d583accaa829aa12b94.json b/core/lib/dal/.sqlx/query-70979db81f473950b2fae7816dbad7fe3464f2619cee2d583accaa829aa12b94.json new file mode 100644 index 00000000000..45338f8e64c --- /dev/null +++ b/core/lib/dal/.sqlx/query-70979db81f473950b2fae7816dbad7fe3464f2619cee2d583accaa829aa12b94.json @@ -0,0 +1,38 @@ +{ + "db_name": "PostgreSQL", + "query": "\n INSERT INTO\n l1_batches (\n number,\n l1_tx_count,\n l2_tx_count,\n timestamp,\n is_finished,\n fee_account_address,\n l2_to_l1_logs,\n l2_to_l1_messages,\n bloom,\n priority_ops_onchain_data,\n predicted_commit_gas_cost,\n predicted_prove_gas_cost,\n predicted_execute_gas_cost,\n initial_bootloader_heap_content,\n used_contract_hashes,\n base_fee_per_gas,\n l1_gas_price,\n l2_fair_gas_price,\n bootloader_code_hash,\n default_aa_code_hash,\n protocol_version,\n system_logs,\n storage_refunds,\n pubdata_input,\n predicted_circuits,\n created_at,\n updated_at\n )\n VALUES\n (\n $1,\n $2,\n $3,\n $4,\n $5,\n $6,\n $7,\n $8,\n $9,\n $10,\n $11,\n $12,\n $13,\n $14,\n $15,\n $16,\n $17,\n $18,\n $19,\n $20,\n $21,\n $22,\n $23,\n $24,\n $25,\n NOW(),\n NOW()\n )\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8", + "Int4", + "Int4", + "Int8", + "Bool", + "Bytea", + "ByteaArray", + "ByteaArray", + "Bytea", + "ByteaArray", + "Int8", + "Int8", + "Int8", + "Jsonb", + "Jsonb", + "Numeric", + "Int8", + "Int8", + "Bytea", + "Bytea", + "Int4", + "ByteaArray", + "Int8Array", + "Bytea", + "Int4" + ] + }, + "nullable": [] + }, + "hash": "70979db81f473950b2fae7816dbad7fe3464f2619cee2d583accaa829aa12b94" +} diff --git a/core/lib/dal/.sqlx/query-72a4f50355324cce85ebaef9fa32826095e9290f0c1157094bd0c44e06012e42.json b/core/lib/dal/.sqlx/query-72a4f50355324cce85ebaef9fa32826095e9290f0c1157094bd0c44e06012e42.json new file mode 100644 index 00000000000..707b7ce9e75 --- /dev/null +++ b/core/lib/dal/.sqlx/query-72a4f50355324cce85ebaef9fa32826095e9290f0c1157094bd0c44e06012e42.json @@ -0,0 +1,232 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n *\n FROM\n transactions\n WHERE\n hash = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "hash", + "type_info": "Bytea" + }, + { + "ordinal": 1, + "name": "is_priority", + "type_info": "Bool" + }, + { + "ordinal": 2, + "name": "full_fee", + "type_info": "Numeric" + }, + { + "ordinal": 3, + "name": "layer_2_tip_fee", + "type_info": "Numeric" + }, + { + "ordinal": 4, + "name": "initiator_address", + "type_info": "Bytea" + }, + { + "ordinal": 5, + "name": "nonce", + "type_info": "Int8" + }, + { + "ordinal": 6, + "name": "signature", + "type_info": "Bytea" + }, + { + "ordinal": 7, + "name": "input", + "type_info": "Bytea" + }, + { + "ordinal": 8, + "name": "data", + "type_info": "Jsonb" + }, + { + "ordinal": 9, + "name": "received_at", + "type_info": "Timestamp" + }, + { + "ordinal": 10, + "name": "priority_op_id", + "type_info": "Int8" + }, + { + "ordinal": 11, + "name": "l1_batch_number", + "type_info": "Int8" + }, + { + "ordinal": 12, + "name": 
"index_in_block", + "type_info": "Int4" + }, + { + "ordinal": 13, + "name": "error", + "type_info": "Varchar" + }, + { + "ordinal": 14, + "name": "gas_limit", + "type_info": "Numeric" + }, + { + "ordinal": 15, + "name": "gas_per_storage_limit", + "type_info": "Numeric" + }, + { + "ordinal": 16, + "name": "gas_per_pubdata_limit", + "type_info": "Numeric" + }, + { + "ordinal": 17, + "name": "tx_format", + "type_info": "Int4" + }, + { + "ordinal": 18, + "name": "created_at", + "type_info": "Timestamp" + }, + { + "ordinal": 19, + "name": "updated_at", + "type_info": "Timestamp" + }, + { + "ordinal": 20, + "name": "execution_info", + "type_info": "Jsonb" + }, + { + "ordinal": 21, + "name": "contract_address", + "type_info": "Bytea" + }, + { + "ordinal": 22, + "name": "in_mempool", + "type_info": "Bool" + }, + { + "ordinal": 23, + "name": "l1_block_number", + "type_info": "Int4" + }, + { + "ordinal": 24, + "name": "value", + "type_info": "Numeric" + }, + { + "ordinal": 25, + "name": "paymaster", + "type_info": "Bytea" + }, + { + "ordinal": 26, + "name": "paymaster_input", + "type_info": "Bytea" + }, + { + "ordinal": 27, + "name": "max_fee_per_gas", + "type_info": "Numeric" + }, + { + "ordinal": 28, + "name": "max_priority_fee_per_gas", + "type_info": "Numeric" + }, + { + "ordinal": 29, + "name": "effective_gas_price", + "type_info": "Numeric" + }, + { + "ordinal": 30, + "name": "miniblock_number", + "type_info": "Int8" + }, + { + "ordinal": 31, + "name": "l1_batch_tx_index", + "type_info": "Int4" + }, + { + "ordinal": 32, + "name": "refunded_gas", + "type_info": "Int8" + }, + { + "ordinal": 33, + "name": "l1_tx_mint", + "type_info": "Numeric" + }, + { + "ordinal": 34, + "name": "l1_tx_refund_recipient", + "type_info": "Bytea" + }, + { + "ordinal": 35, + "name": "upgrade_id", + "type_info": "Int4" + } + ], + "parameters": { + "Left": [ + "Bytea" + ] + }, + "nullable": [ + false, + false, + true, + true, + false, + true, + true, + true, + false, + false, + true, + true, + true, + true, + true, + true, + true, + true, + false, + false, + false, + true, + false, + true, + false, + false, + false, + true, + true, + true, + true, + true, + false, + true, + true, + true + ] + }, + "hash": "72a4f50355324cce85ebaef9fa32826095e9290f0c1157094bd0c44e06012e42" +} diff --git a/core/lib/dal/.sqlx/query-72ff9df79e78129cb96d14ece0198129b44534062f524823666ed432d2fcd345.json b/core/lib/dal/.sqlx/query-72ff9df79e78129cb96d14ece0198129b44534062f524823666ed432d2fcd345.json new file mode 100644 index 00000000000..75f288ee14f --- /dev/null +++ b/core/lib/dal/.sqlx/query-72ff9df79e78129cb96d14ece0198129b44534062f524823666ed432d2fcd345.json @@ -0,0 +1,12 @@ +{ + "db_name": "PostgreSQL", + "query": "\n VACUUM storage_logs\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [] + }, + "nullable": [] + }, + "hash": "72ff9df79e78129cb96d14ece0198129b44534062f524823666ed432d2fcd345" +} diff --git a/core/lib/dal/.sqlx/query-73c4bf1e35d49faaab9f7828e80f396f9d193615d70184d4327378a7fc8a5665.json b/core/lib/dal/.sqlx/query-73c4bf1e35d49faaab9f7828e80f396f9d193615d70184d4327378a7fc8a5665.json new file mode 100644 index 00000000000..aa38e1c4035 --- /dev/null +++ b/core/lib/dal/.sqlx/query-73c4bf1e35d49faaab9f7828e80f396f9d193615d70184d4327378a7fc8a5665.json @@ -0,0 +1,30 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE basic_witness_input_producer_jobs\n SET\n status = $1,\n updated_at = NOW(),\n time_taken = $3,\n input_blob_url = $4\n WHERE\n l1_batch_number = $2\n ", + "describe": { + "columns": [], + 
"parameters": { + "Left": [ + { + "Custom": { + "name": "basic_witness_input_producer_job_status", + "kind": { + "Enum": [ + "Queued", + "ManuallySkipped", + "InProgress", + "Successful", + "Failed" + ] + } + } + }, + "Int8", + "Time", + "Text" + ] + }, + "nullable": [] + }, + "hash": "73c4bf1e35d49faaab9f7828e80f396f9d193615d70184d4327378a7fc8a5665" +} diff --git a/core/lib/dal/.sqlx/query-7560ba61643a8ec8eeefbe6034226313c255ce356a9a4e25c098484d3129c914.json b/core/lib/dal/.sqlx/query-7560ba61643a8ec8eeefbe6034226313c255ce356a9a4e25c098484d3129c914.json new file mode 100644 index 00000000000..9ff3ab86250 --- /dev/null +++ b/core/lib/dal/.sqlx/query-7560ba61643a8ec8eeefbe6034226313c255ce356a9a4e25c098484d3129c914.json @@ -0,0 +1,14 @@ +{ + "db_name": "PostgreSQL", + "query": "\n DELETE FROM eth_txs_history\n WHERE\n id = $1\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int4" + ] + }, + "nullable": [] + }, + "hash": "7560ba61643a8ec8eeefbe6034226313c255ce356a9a4e25c098484d3129c914" +} diff --git a/core/lib/dal/.sqlx/query-759b80414b5bcbfe03a0e1e15b37f92c4cfad9313b1461e12242d9becb59e0b0.json b/core/lib/dal/.sqlx/query-759b80414b5bcbfe03a0e1e15b37f92c4cfad9313b1461e12242d9becb59e0b0.json new file mode 100644 index 00000000000..d488293cf81 --- /dev/null +++ b/core/lib/dal/.sqlx/query-759b80414b5bcbfe03a0e1e15b37f92c4cfad9313b1461e12242d9becb59e0b0.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n MAX(operation_number) AS \"max?\"\n FROM\n storage_logs\n WHERE\n miniblock_number = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "max?", + "type_info": "Int4" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + null + ] + }, + "hash": "759b80414b5bcbfe03a0e1e15b37f92c4cfad9313b1461e12242d9becb59e0b0" +} diff --git a/core/lib/dal/.sqlx/query-75a3cf6f502ebb1a0e92b672dc6ce56b53cc4ca0a8c6ee7cac1b9a5863000be3.json b/core/lib/dal/.sqlx/query-75a3cf6f502ebb1a0e92b672dc6ce56b53cc4ca0a8c6ee7cac1b9a5863000be3.json new file mode 100644 index 00000000000..13f45b32225 --- /dev/null +++ b/core/lib/dal/.sqlx/query-75a3cf6f502ebb1a0e92b672dc6ce56b53cc4ca0a8c6ee7cac1b9a5863000be3.json @@ -0,0 +1,256 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n number,\n timestamp,\n is_finished,\n l1_tx_count,\n l2_tx_count,\n fee_account_address,\n bloom,\n priority_ops_onchain_data,\n hash,\n parent_hash,\n commitment,\n compressed_write_logs,\n compressed_contracts,\n eth_prove_tx_id,\n eth_commit_tx_id,\n eth_execute_tx_id,\n merkle_root_hash,\n l2_to_l1_logs,\n l2_to_l1_messages,\n used_contract_hashes,\n compressed_initial_writes,\n compressed_repeated_writes,\n l2_l1_compressed_messages,\n l2_l1_merkle_root,\n l1_gas_price,\n l2_fair_gas_price,\n rollup_last_leaf_index,\n zkporter_is_available,\n bootloader_code_hash,\n default_aa_code_hash,\n base_fee_per_gas,\n aux_data_hash,\n pass_through_data_hash,\n meta_parameters_hash,\n protocol_version,\n compressed_state_diffs,\n system_logs,\n events_queue_commitment,\n bootloader_initial_content_commitment,\n pubdata_input\n FROM\n l1_batches\n LEFT JOIN commitments ON commitments.l1_batch_number = l1_batches.number\n WHERE\n eth_commit_tx_id IS NOT NULL\n AND eth_prove_tx_id IS NULL\n ORDER BY\n number\n LIMIT\n $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "number", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "timestamp", + "type_info": "Int8" + }, + { + "ordinal": 2, + "name": "is_finished", + "type_info": "Bool" + }, + { + "ordinal": 
3, + "name": "l1_tx_count", + "type_info": "Int4" + }, + { + "ordinal": 4, + "name": "l2_tx_count", + "type_info": "Int4" + }, + { + "ordinal": 5, + "name": "fee_account_address", + "type_info": "Bytea" + }, + { + "ordinal": 6, + "name": "bloom", + "type_info": "Bytea" + }, + { + "ordinal": 7, + "name": "priority_ops_onchain_data", + "type_info": "ByteaArray" + }, + { + "ordinal": 8, + "name": "hash", + "type_info": "Bytea" + }, + { + "ordinal": 9, + "name": "parent_hash", + "type_info": "Bytea" + }, + { + "ordinal": 10, + "name": "commitment", + "type_info": "Bytea" + }, + { + "ordinal": 11, + "name": "compressed_write_logs", + "type_info": "Bytea" + }, + { + "ordinal": 12, + "name": "compressed_contracts", + "type_info": "Bytea" + }, + { + "ordinal": 13, + "name": "eth_prove_tx_id", + "type_info": "Int4" + }, + { + "ordinal": 14, + "name": "eth_commit_tx_id", + "type_info": "Int4" + }, + { + "ordinal": 15, + "name": "eth_execute_tx_id", + "type_info": "Int4" + }, + { + "ordinal": 16, + "name": "merkle_root_hash", + "type_info": "Bytea" + }, + { + "ordinal": 17, + "name": "l2_to_l1_logs", + "type_info": "ByteaArray" + }, + { + "ordinal": 18, + "name": "l2_to_l1_messages", + "type_info": "ByteaArray" + }, + { + "ordinal": 19, + "name": "used_contract_hashes", + "type_info": "Jsonb" + }, + { + "ordinal": 20, + "name": "compressed_initial_writes", + "type_info": "Bytea" + }, + { + "ordinal": 21, + "name": "compressed_repeated_writes", + "type_info": "Bytea" + }, + { + "ordinal": 22, + "name": "l2_l1_compressed_messages", + "type_info": "Bytea" + }, + { + "ordinal": 23, + "name": "l2_l1_merkle_root", + "type_info": "Bytea" + }, + { + "ordinal": 24, + "name": "l1_gas_price", + "type_info": "Int8" + }, + { + "ordinal": 25, + "name": "l2_fair_gas_price", + "type_info": "Int8" + }, + { + "ordinal": 26, + "name": "rollup_last_leaf_index", + "type_info": "Int8" + }, + { + "ordinal": 27, + "name": "zkporter_is_available", + "type_info": "Bool" + }, + { + "ordinal": 28, + "name": "bootloader_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 29, + "name": "default_aa_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 30, + "name": "base_fee_per_gas", + "type_info": "Numeric" + }, + { + "ordinal": 31, + "name": "aux_data_hash", + "type_info": "Bytea" + }, + { + "ordinal": 32, + "name": "pass_through_data_hash", + "type_info": "Bytea" + }, + { + "ordinal": 33, + "name": "meta_parameters_hash", + "type_info": "Bytea" + }, + { + "ordinal": 34, + "name": "protocol_version", + "type_info": "Int4" + }, + { + "ordinal": 35, + "name": "compressed_state_diffs", + "type_info": "Bytea" + }, + { + "ordinal": 36, + "name": "system_logs", + "type_info": "ByteaArray" + }, + { + "ordinal": 37, + "name": "events_queue_commitment", + "type_info": "Bytea" + }, + { + "ordinal": 38, + "name": "bootloader_initial_content_commitment", + "type_info": "Bytea" + }, + { + "ordinal": 39, + "name": "pubdata_input", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false, + false, + false, + false, + false, + false, + false, + false, + true, + true, + true, + true, + true, + true, + true, + true, + true, + false, + false, + false, + true, + true, + true, + true, + false, + false, + true, + true, + true, + true, + false, + true, + true, + true, + true, + true, + false, + true, + true, + true + ] + }, + "hash": "75a3cf6f502ebb1a0e92b672dc6ce56b53cc4ca0a8c6ee7cac1b9a5863000be3" +} diff --git 
a/core/lib/dal/.sqlx/query-75f6eaa518e7840374c4e44b0788bf92c7f2c55386c8208e3a82b30456abd5b4.json b/core/lib/dal/.sqlx/query-75f6eaa518e7840374c4e44b0788bf92c7f2c55386c8208e3a82b30456abd5b4.json new file mode 100644 index 00000000000..71edd403d0b --- /dev/null +++ b/core/lib/dal/.sqlx/query-75f6eaa518e7840374c4e44b0788bf92c7f2c55386c8208e3a82b30456abd5b4.json @@ -0,0 +1,90 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE witness_inputs_fri\n SET\n status = 'in_progress',\n attempts = attempts + 1,\n updated_at = NOW(),\n processing_started_at = NOW(),\n picked_by = $3\n WHERE\n l1_batch_number = (\n SELECT\n l1_batch_number\n FROM\n witness_inputs_fri\n WHERE\n l1_batch_number <= $1\n AND status = 'queued'\n AND protocol_version = ANY ($2)\n ORDER BY\n l1_batch_number ASC\n LIMIT\n 1\n FOR UPDATE\n SKIP LOCKED\n )\n RETURNING\n witness_inputs_fri.*\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "l1_batch_number", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "merkle_tree_paths_blob_url", + "type_info": "Text" + }, + { + "ordinal": 2, + "name": "attempts", + "type_info": "Int2" + }, + { + "ordinal": 3, + "name": "status", + "type_info": "Text" + }, + { + "ordinal": 4, + "name": "error", + "type_info": "Text" + }, + { + "ordinal": 5, + "name": "created_at", + "type_info": "Timestamp" + }, + { + "ordinal": 6, + "name": "updated_at", + "type_info": "Timestamp" + }, + { + "ordinal": 7, + "name": "processing_started_at", + "type_info": "Timestamp" + }, + { + "ordinal": 8, + "name": "time_taken", + "type_info": "Time" + }, + { + "ordinal": 9, + "name": "is_blob_cleaned", + "type_info": "Bool" + }, + { + "ordinal": 10, + "name": "protocol_version", + "type_info": "Int4" + }, + { + "ordinal": 11, + "name": "picked_by", + "type_info": "Text" + } + ], + "parameters": { + "Left": [ + "Int8", + "Int4Array", + "Text" + ] + }, + "nullable": [ + false, + true, + false, + false, + true, + false, + false, + true, + true, + true, + true, + true + ] + }, + "hash": "75f6eaa518e7840374c4e44b0788bf92c7f2c55386c8208e3a82b30456abd5b4" +} diff --git a/core/lib/dal/.sqlx/query-75fa24c29dc312cbfa89bf1f4a04a42b4ead6964edd17bfcacb4a828492bba60.json b/core/lib/dal/.sqlx/query-75fa24c29dc312cbfa89bf1f4a04a42b4ead6964edd17bfcacb4a828492bba60.json new file mode 100644 index 00000000000..ff743f1028c --- /dev/null +++ b/core/lib/dal/.sqlx/query-75fa24c29dc312cbfa89bf1f4a04a42b4ead6964edd17bfcacb4a828492bba60.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n state AS \"state!\"\n FROM\n consensus_replica_state\n WHERE\n fake_key\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "state!", + "type_info": "Jsonb" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false + ] + }, + "hash": "75fa24c29dc312cbfa89bf1f4a04a42b4ead6964edd17bfcacb4a828492bba60" +} diff --git a/core/lib/dal/.sqlx/query-76cb9ad97b70d584b19af194576dcf2324f380932698386aa8f9751b1fa24a7b.json b/core/lib/dal/.sqlx/query-76cb9ad97b70d584b19af194576dcf2324f380932698386aa8f9751b1fa24a7b.json new file mode 100644 index 00000000000..e5b4f3476c9 --- /dev/null +++ b/core/lib/dal/.sqlx/query-76cb9ad97b70d584b19af194576dcf2324f380932698386aa8f9751b1fa24a7b.json @@ -0,0 +1,15 @@ +{ + "db_name": "PostgreSQL", + "query": "\n INSERT INTO\n call_traces (tx_hash, call_trace)\n SELECT\n u.tx_hash,\n u.call_trace\n FROM\n UNNEST($1::bytea[], $2::bytea[]) AS u (tx_hash, call_trace)\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "ByteaArray", + "ByteaArray" + ] + }, + 
"nullable": [] + }, + "hash": "76cb9ad97b70d584b19af194576dcf2324f380932698386aa8f9751b1fa24a7b" +} diff --git a/core/lib/dal/.sqlx/query-77a43830ca31eac85a3c03d87696bf94a013e49bf50ce23f4de4968781df0796.json b/core/lib/dal/.sqlx/query-77a43830ca31eac85a3c03d87696bf94a013e49bf50ce23f4de4968781df0796.json new file mode 100644 index 00000000000..acff9eeebee --- /dev/null +++ b/core/lib/dal/.sqlx/query-77a43830ca31eac85a3c03d87696bf94a013e49bf50ce23f4de4968781df0796.json @@ -0,0 +1,15 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE l1_batches\n SET\n hash = $1\n WHERE\n number = $2\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Bytea", + "Int8" + ] + }, + "nullable": [] + }, + "hash": "77a43830ca31eac85a3c03d87696bf94a013e49bf50ce23f4de4968781df0796" +} diff --git a/core/lib/dal/.sqlx/query-77b35855fbb989f6314469b419726dc7bb98e0f7feaf14656307e20bd2bb0b6c.json b/core/lib/dal/.sqlx/query-77b35855fbb989f6314469b419726dc7bb98e0f7feaf14656307e20bd2bb0b6c.json new file mode 100644 index 00000000000..30149eb79c9 --- /dev/null +++ b/core/lib/dal/.sqlx/query-77b35855fbb989f6314469b419726dc7bb98e0f7feaf14656307e20bd2bb0b6c.json @@ -0,0 +1,14 @@ +{ + "db_name": "PostgreSQL", + "query": "\n INSERT INTO\n consensus_replica_state (fake_key, state)\n VALUES\n (TRUE, $1)\n ON CONFLICT (fake_key) DO\n UPDATE\n SET\n state = excluded.state\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Jsonb" + ] + }, + "nullable": [] + }, + "hash": "77b35855fbb989f6314469b419726dc7bb98e0f7feaf14656307e20bd2bb0b6c" +} diff --git a/core/lib/dal/.sqlx/query-78978c19282961c5b3dc06352b41caa4cca66d6ad74b2cd1a34ea5f7bc1e6909.json b/core/lib/dal/.sqlx/query-78978c19282961c5b3dc06352b41caa4cca66d6ad74b2cd1a34ea5f7bc1e6909.json new file mode 100644 index 00000000000..f746bd5703b --- /dev/null +++ b/core/lib/dal/.sqlx/query-78978c19282961c5b3dc06352b41caa4cca66d6ad74b2cd1a34ea5f7bc1e6909.json @@ -0,0 +1,28 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n *\n FROM\n call_traces\n WHERE\n tx_hash = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "tx_hash", + "type_info": "Bytea" + }, + { + "ordinal": 1, + "name": "call_trace", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Bytea" + ] + }, + "nullable": [ + false, + false + ] + }, + "hash": "78978c19282961c5b3dc06352b41caa4cca66d6ad74b2cd1a34ea5f7bc1e6909" +} diff --git a/core/lib/dal/.sqlx/query-7a2145e2234a7896031bbc1ce82715e903f3b399886c2c73e838bd924fed6776.json b/core/lib/dal/.sqlx/query-7a2145e2234a7896031bbc1ce82715e903f3b399886c2c73e838bd924fed6776.json new file mode 100644 index 00000000000..73a8c33695b --- /dev/null +++ b/core/lib/dal/.sqlx/query-7a2145e2234a7896031bbc1ce82715e903f3b399886c2c73e838bd924fed6776.json @@ -0,0 +1,18 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE node_aggregation_witness_jobs_fri\n SET\n aggregations_url = $1,\n number_of_dependent_jobs = $5,\n updated_at = NOW()\n WHERE\n l1_batch_number = $2\n AND circuit_id = $3\n AND depth = $4\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Text", + "Int8", + "Int2", + "Int4", + "Int4" + ] + }, + "nullable": [] + }, + "hash": "7a2145e2234a7896031bbc1ce82715e903f3b399886c2c73e838bd924fed6776" +} diff --git a/core/lib/dal/.sqlx/query-7a8fffe8d4e3085e00c98f770d250d625f057acf1440b6550375ce5509a816a6.json b/core/lib/dal/.sqlx/query-7a8fffe8d4e3085e00c98f770d250d625f057acf1440b6550375ce5509a816a6.json new file mode 100644 index 00000000000..da78974f61a --- /dev/null +++ 
b/core/lib/dal/.sqlx/query-7a8fffe8d4e3085e00c98f770d250d625f057acf1440b6550375ce5509a816a6.json @@ -0,0 +1,107 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE leaf_aggregation_witness_jobs_fri\n SET\n status = 'in_progress',\n attempts = attempts + 1,\n updated_at = NOW(),\n processing_started_at = NOW(),\n picked_by = $2\n WHERE\n id = (\n SELECT\n id\n FROM\n leaf_aggregation_witness_jobs_fri\n WHERE\n status = 'queued'\n AND protocol_version = ANY ($1)\n ORDER BY\n l1_batch_number ASC,\n id ASC\n LIMIT\n 1\n FOR UPDATE\n SKIP LOCKED\n )\n RETURNING\n leaf_aggregation_witness_jobs_fri.*\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "id", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "l1_batch_number", + "type_info": "Int8" + }, + { + "ordinal": 2, + "name": "circuit_id", + "type_info": "Int2" + }, + { + "ordinal": 3, + "name": "closed_form_inputs_blob_url", + "type_info": "Text" + }, + { + "ordinal": 4, + "name": "attempts", + "type_info": "Int2" + }, + { + "ordinal": 5, + "name": "status", + "type_info": "Text" + }, + { + "ordinal": 6, + "name": "error", + "type_info": "Text" + }, + { + "ordinal": 7, + "name": "created_at", + "type_info": "Timestamp" + }, + { + "ordinal": 8, + "name": "updated_at", + "type_info": "Timestamp" + }, + { + "ordinal": 9, + "name": "processing_started_at", + "type_info": "Timestamp" + }, + { + "ordinal": 10, + "name": "time_taken", + "type_info": "Time" + }, + { + "ordinal": 11, + "name": "is_blob_cleaned", + "type_info": "Bool" + }, + { + "ordinal": 12, + "name": "number_of_basic_circuits", + "type_info": "Int4" + }, + { + "ordinal": 13, + "name": "protocol_version", + "type_info": "Int4" + }, + { + "ordinal": 14, + "name": "picked_by", + "type_info": "Text" + } + ], + "parameters": { + "Left": [ + "Int4Array", + "Text" + ] + }, + "nullable": [ + false, + false, + false, + true, + false, + false, + true, + false, + false, + true, + true, + true, + true, + true, + true + ] + }, + "hash": "7a8fffe8d4e3085e00c98f770d250d625f057acf1440b6550375ce5509a816a6" +} diff --git a/core/lib/dal/.sqlx/query-7fccc28bd829bce334f37197ee6b139e943f3ad2a41387b610606a42b7f03283.json b/core/lib/dal/.sqlx/query-7fccc28bd829bce334f37197ee6b139e943f3ad2a41387b610606a42b7f03283.json new file mode 100644 index 00000000000..76a34db9699 --- /dev/null +++ b/core/lib/dal/.sqlx/query-7fccc28bd829bce334f37197ee6b139e943f3ad2a41387b610606a42b7f03283.json @@ -0,0 +1,29 @@ +{ + "db_name": "PostgreSQL", + "query": "\n INSERT INTO\n transactions (\n hash,\n is_priority,\n initiator_address,\n gas_limit,\n max_fee_per_gas,\n gas_per_pubdata_limit,\n data,\n upgrade_id,\n contract_address,\n l1_block_number,\n value,\n paymaster,\n paymaster_input,\n tx_format,\n l1_tx_mint,\n l1_tx_refund_recipient,\n received_at,\n created_at,\n updated_at\n )\n VALUES\n (\n $1,\n TRUE,\n $2,\n $3,\n $4,\n $5,\n $6,\n $7,\n $8,\n $9,\n $10,\n $11,\n $12,\n $13,\n $14,\n $15,\n $16,\n NOW(),\n NOW()\n )\n ON CONFLICT (hash) DO NOTHING\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Bytea", + "Bytea", + "Numeric", + "Numeric", + "Numeric", + "Jsonb", + "Int4", + "Bytea", + "Int4", + "Numeric", + "Bytea", + "Bytea", + "Int4", + "Numeric", + "Bytea", + "Timestamp" + ] + }, + "nullable": [] + }, + "hash": "7fccc28bd829bce334f37197ee6b139e943f3ad2a41387b610606a42b7f03283" +} diff --git a/core/lib/dal/.sqlx/query-806b82a9effd885ba537a2a1c7d7227120a8279db1875d26ccae5ee0785f46a9.json 
b/core/lib/dal/.sqlx/query-806b82a9effd885ba537a2a1c7d7227120a8279db1875d26ccae5ee0785f46a9.json new file mode 100644 index 00000000000..c8e8a7aa603 --- /dev/null +++ b/core/lib/dal/.sqlx/query-806b82a9effd885ba537a2a1c7d7227120a8279db1875d26ccae5ee0785f46a9.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n attempts\n FROM\n node_aggregation_witness_jobs_fri\n WHERE\n id = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "attempts", + "type_info": "Int2" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false + ] + }, + "hash": "806b82a9effd885ba537a2a1c7d7227120a8279db1875d26ccae5ee0785f46a9" +} diff --git a/core/lib/dal/.sqlx/query-8182690d0326b820d23fba49d391578db18c29cdca85b8b6aad86fe2a9bf6bbe.json b/core/lib/dal/.sqlx/query-8182690d0326b820d23fba49d391578db18c29cdca85b8b6aad86fe2a9bf6bbe.json new file mode 100644 index 00000000000..fac64c1ea3f --- /dev/null +++ b/core/lib/dal/.sqlx/query-8182690d0326b820d23fba49d391578db18c29cdca85b8b6aad86fe2a9bf6bbe.json @@ -0,0 +1,32 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE node_aggregation_witness_jobs_fri\n SET\n status = 'queued'\n WHERE\n (l1_batch_number, circuit_id, depth) IN (\n SELECT\n prover_jobs_fri.l1_batch_number,\n prover_jobs_fri.circuit_id,\n prover_jobs_fri.depth\n FROM\n prover_jobs_fri\n JOIN node_aggregation_witness_jobs_fri nawj ON prover_jobs_fri.l1_batch_number = nawj.l1_batch_number\n AND prover_jobs_fri.circuit_id = nawj.circuit_id\n AND prover_jobs_fri.depth = nawj.depth\n WHERE\n nawj.status = 'waiting_for_proofs'\n AND prover_jobs_fri.status = 'successful'\n AND prover_jobs_fri.aggregation_round = 2\n GROUP BY\n prover_jobs_fri.l1_batch_number,\n prover_jobs_fri.circuit_id,\n prover_jobs_fri.depth,\n nawj.number_of_dependent_jobs\n HAVING\n COUNT(*) = nawj.number_of_dependent_jobs\n )\n RETURNING\n l1_batch_number,\n circuit_id,\n depth;\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "l1_batch_number", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "circuit_id", + "type_info": "Int2" + }, + { + "ordinal": 2, + "name": "depth", + "type_info": "Int4" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false, + false, + false + ] + }, + "hash": "8182690d0326b820d23fba49d391578db18c29cdca85b8b6aad86fe2a9bf6bbe" +} diff --git a/core/lib/dal/.sqlx/query-81869cb392e9fcbb71ceaa857af77b39429d56072f63b3530c576fb31d7a56f9.json b/core/lib/dal/.sqlx/query-81869cb392e9fcbb71ceaa857af77b39429d56072f63b3530c576fb31d7a56f9.json new file mode 100644 index 00000000000..b8d80a904e4 --- /dev/null +++ b/core/lib/dal/.sqlx/query-81869cb392e9fcbb71ceaa857af77b39429d56072f63b3530c576fb31d7a56f9.json @@ -0,0 +1,18 @@ +{ + "db_name": "PostgreSQL", + "query": "\n INSERT INTO\n storage (hashed_key, address, key, value, tx_hash, created_at, updated_at)\n SELECT\n u.hashed_key,\n u.address,\n u.key,\n u.value,\n u.tx_hash,\n NOW(),\n NOW()\n FROM\n UNNEST($1::bytea[], $2::bytea[], $3::bytea[], $4::bytea[], $5::bytea[]) AS u (hashed_key, address, key, value, tx_hash)\n ON CONFLICT (hashed_key) DO\n UPDATE\n SET\n tx_hash = excluded.tx_hash,\n value = excluded.value,\n updated_at = NOW()\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "ByteaArray", + "ByteaArray", + "ByteaArray", + "ByteaArray", + "ByteaArray" + ] + }, + "nullable": [] + }, + "hash": "81869cb392e9fcbb71ceaa857af77b39429d56072f63b3530c576fb31d7a56f9" +} diff --git 
a/core/lib/dal/.sqlx/query-83a931ceddf34e1c760649d613f534014b9ab9ca7725e14fb17aa050d9f35eb8.json b/core/lib/dal/.sqlx/query-83a931ceddf34e1c760649d613f534014b9ab9ca7725e14fb17aa050d9f35eb8.json new file mode 100644 index 00000000000..8d9458dce0a --- /dev/null +++ b/core/lib/dal/.sqlx/query-83a931ceddf34e1c760649d613f534014b9ab9ca7725e14fb17aa050d9f35eb8.json @@ -0,0 +1,23 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n base_fee_per_gas\n FROM\n miniblocks\n WHERE\n number <= $1\n ORDER BY\n number DESC\n LIMIT\n $2\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "base_fee_per_gas", + "type_info": "Numeric" + } + ], + "parameters": { + "Left": [ + "Int8", + "Int8" + ] + }, + "nullable": [ + false + ] + }, + "hash": "83a931ceddf34e1c760649d613f534014b9ab9ca7725e14fb17aa050d9f35eb8" +} diff --git a/core/lib/dal/.sqlx/query-84c804db9d60a4c1ebbce5e3dcdf03c0aad3ac30d85176e0a4e35f72bbb21b12.json b/core/lib/dal/.sqlx/query-84c804db9d60a4c1ebbce5e3dcdf03c0aad3ac30d85176e0a4e35f72bbb21b12.json new file mode 100644 index 00000000000..a0a3cb3d63b --- /dev/null +++ b/core/lib/dal/.sqlx/query-84c804db9d60a4c1ebbce5e3dcdf03c0aad3ac30d85176e0a4e35f72bbb21b12.json @@ -0,0 +1,256 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n number,\n timestamp,\n is_finished,\n l1_tx_count,\n l2_tx_count,\n fee_account_address,\n bloom,\n priority_ops_onchain_data,\n hash,\n parent_hash,\n commitment,\n compressed_write_logs,\n compressed_contracts,\n eth_prove_tx_id,\n eth_commit_tx_id,\n eth_execute_tx_id,\n merkle_root_hash,\n l2_to_l1_logs,\n l2_to_l1_messages,\n used_contract_hashes,\n compressed_initial_writes,\n compressed_repeated_writes,\n l2_l1_compressed_messages,\n l2_l1_merkle_root,\n l1_gas_price,\n l2_fair_gas_price,\n rollup_last_leaf_index,\n zkporter_is_available,\n bootloader_code_hash,\n default_aa_code_hash,\n base_fee_per_gas,\n aux_data_hash,\n pass_through_data_hash,\n meta_parameters_hash,\n protocol_version,\n system_logs,\n compressed_state_diffs,\n events_queue_commitment,\n bootloader_initial_content_commitment,\n pubdata_input\n FROM\n l1_batches\n LEFT JOIN commitments ON commitments.l1_batch_number = l1_batches.number\n WHERE\n number = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "number", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "timestamp", + "type_info": "Int8" + }, + { + "ordinal": 2, + "name": "is_finished", + "type_info": "Bool" + }, + { + "ordinal": 3, + "name": "l1_tx_count", + "type_info": "Int4" + }, + { + "ordinal": 4, + "name": "l2_tx_count", + "type_info": "Int4" + }, + { + "ordinal": 5, + "name": "fee_account_address", + "type_info": "Bytea" + }, + { + "ordinal": 6, + "name": "bloom", + "type_info": "Bytea" + }, + { + "ordinal": 7, + "name": "priority_ops_onchain_data", + "type_info": "ByteaArray" + }, + { + "ordinal": 8, + "name": "hash", + "type_info": "Bytea" + }, + { + "ordinal": 9, + "name": "parent_hash", + "type_info": "Bytea" + }, + { + "ordinal": 10, + "name": "commitment", + "type_info": "Bytea" + }, + { + "ordinal": 11, + "name": "compressed_write_logs", + "type_info": "Bytea" + }, + { + "ordinal": 12, + "name": "compressed_contracts", + "type_info": "Bytea" + }, + { + "ordinal": 13, + "name": "eth_prove_tx_id", + "type_info": "Int4" + }, + { + "ordinal": 14, + "name": "eth_commit_tx_id", + "type_info": "Int4" + }, + { + "ordinal": 15, + "name": "eth_execute_tx_id", + "type_info": "Int4" + }, + { + "ordinal": 16, + "name": "merkle_root_hash", + "type_info": "Bytea" + }, + { + "ordinal": 
17, + "name": "l2_to_l1_logs", + "type_info": "ByteaArray" + }, + { + "ordinal": 18, + "name": "l2_to_l1_messages", + "type_info": "ByteaArray" + }, + { + "ordinal": 19, + "name": "used_contract_hashes", + "type_info": "Jsonb" + }, + { + "ordinal": 20, + "name": "compressed_initial_writes", + "type_info": "Bytea" + }, + { + "ordinal": 21, + "name": "compressed_repeated_writes", + "type_info": "Bytea" + }, + { + "ordinal": 22, + "name": "l2_l1_compressed_messages", + "type_info": "Bytea" + }, + { + "ordinal": 23, + "name": "l2_l1_merkle_root", + "type_info": "Bytea" + }, + { + "ordinal": 24, + "name": "l1_gas_price", + "type_info": "Int8" + }, + { + "ordinal": 25, + "name": "l2_fair_gas_price", + "type_info": "Int8" + }, + { + "ordinal": 26, + "name": "rollup_last_leaf_index", + "type_info": "Int8" + }, + { + "ordinal": 27, + "name": "zkporter_is_available", + "type_info": "Bool" + }, + { + "ordinal": 28, + "name": "bootloader_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 29, + "name": "default_aa_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 30, + "name": "base_fee_per_gas", + "type_info": "Numeric" + }, + { + "ordinal": 31, + "name": "aux_data_hash", + "type_info": "Bytea" + }, + { + "ordinal": 32, + "name": "pass_through_data_hash", + "type_info": "Bytea" + }, + { + "ordinal": 33, + "name": "meta_parameters_hash", + "type_info": "Bytea" + }, + { + "ordinal": 34, + "name": "protocol_version", + "type_info": "Int4" + }, + { + "ordinal": 35, + "name": "system_logs", + "type_info": "ByteaArray" + }, + { + "ordinal": 36, + "name": "compressed_state_diffs", + "type_info": "Bytea" + }, + { + "ordinal": 37, + "name": "events_queue_commitment", + "type_info": "Bytea" + }, + { + "ordinal": 38, + "name": "bootloader_initial_content_commitment", + "type_info": "Bytea" + }, + { + "ordinal": 39, + "name": "pubdata_input", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false, + false, + false, + false, + false, + false, + false, + false, + true, + true, + true, + true, + true, + true, + true, + true, + true, + false, + false, + false, + true, + true, + true, + true, + false, + false, + true, + true, + true, + true, + false, + true, + true, + true, + true, + false, + true, + true, + true, + true + ] + }, + "hash": "84c804db9d60a4c1ebbce5e3dcdf03c0aad3ac30d85176e0a4e35f72bbb21b12" +} diff --git a/core/lib/dal/.sqlx/query-852aa5fe1c3b2dfe875cd4adf0d19a00c170cf7725d95dd6eb8b753fa5facec8.json b/core/lib/dal/.sqlx/query-852aa5fe1c3b2dfe875cd4adf0d19a00c170cf7725d95dd6eb8b753fa5facec8.json new file mode 100644 index 00000000000..6e582aac653 --- /dev/null +++ b/core/lib/dal/.sqlx/query-852aa5fe1c3b2dfe875cd4adf0d19a00c170cf7725d95dd6eb8b753fa5facec8.json @@ -0,0 +1,235 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE transactions\n SET\n in_mempool = TRUE\n FROM\n (\n SELECT\n hash\n FROM\n (\n SELECT\n hash\n FROM\n transactions\n WHERE\n miniblock_number IS NULL\n AND in_mempool = FALSE\n AND error IS NULL\n AND (\n is_priority = TRUE\n OR (\n max_fee_per_gas >= $2\n AND gas_per_pubdata_limit >= $3\n )\n )\n AND tx_format != $4\n ORDER BY\n is_priority DESC,\n priority_op_id,\n received_at\n LIMIT\n $1\n ) AS subquery1\n ORDER BY\n hash\n ) AS subquery2\n WHERE\n transactions.hash = subquery2.hash\n RETURNING\n transactions.*\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "hash", + "type_info": "Bytea" + }, + { + "ordinal": 1, + "name": "is_priority", + "type_info": "Bool" + }, + { + "ordinal": 2, + "name": 
"full_fee", + "type_info": "Numeric" + }, + { + "ordinal": 3, + "name": "layer_2_tip_fee", + "type_info": "Numeric" + }, + { + "ordinal": 4, + "name": "initiator_address", + "type_info": "Bytea" + }, + { + "ordinal": 5, + "name": "nonce", + "type_info": "Int8" + }, + { + "ordinal": 6, + "name": "signature", + "type_info": "Bytea" + }, + { + "ordinal": 7, + "name": "input", + "type_info": "Bytea" + }, + { + "ordinal": 8, + "name": "data", + "type_info": "Jsonb" + }, + { + "ordinal": 9, + "name": "received_at", + "type_info": "Timestamp" + }, + { + "ordinal": 10, + "name": "priority_op_id", + "type_info": "Int8" + }, + { + "ordinal": 11, + "name": "l1_batch_number", + "type_info": "Int8" + }, + { + "ordinal": 12, + "name": "index_in_block", + "type_info": "Int4" + }, + { + "ordinal": 13, + "name": "error", + "type_info": "Varchar" + }, + { + "ordinal": 14, + "name": "gas_limit", + "type_info": "Numeric" + }, + { + "ordinal": 15, + "name": "gas_per_storage_limit", + "type_info": "Numeric" + }, + { + "ordinal": 16, + "name": "gas_per_pubdata_limit", + "type_info": "Numeric" + }, + { + "ordinal": 17, + "name": "tx_format", + "type_info": "Int4" + }, + { + "ordinal": 18, + "name": "created_at", + "type_info": "Timestamp" + }, + { + "ordinal": 19, + "name": "updated_at", + "type_info": "Timestamp" + }, + { + "ordinal": 20, + "name": "execution_info", + "type_info": "Jsonb" + }, + { + "ordinal": 21, + "name": "contract_address", + "type_info": "Bytea" + }, + { + "ordinal": 22, + "name": "in_mempool", + "type_info": "Bool" + }, + { + "ordinal": 23, + "name": "l1_block_number", + "type_info": "Int4" + }, + { + "ordinal": 24, + "name": "value", + "type_info": "Numeric" + }, + { + "ordinal": 25, + "name": "paymaster", + "type_info": "Bytea" + }, + { + "ordinal": 26, + "name": "paymaster_input", + "type_info": "Bytea" + }, + { + "ordinal": 27, + "name": "max_fee_per_gas", + "type_info": "Numeric" + }, + { + "ordinal": 28, + "name": "max_priority_fee_per_gas", + "type_info": "Numeric" + }, + { + "ordinal": 29, + "name": "effective_gas_price", + "type_info": "Numeric" + }, + { + "ordinal": 30, + "name": "miniblock_number", + "type_info": "Int8" + }, + { + "ordinal": 31, + "name": "l1_batch_tx_index", + "type_info": "Int4" + }, + { + "ordinal": 32, + "name": "refunded_gas", + "type_info": "Int8" + }, + { + "ordinal": 33, + "name": "l1_tx_mint", + "type_info": "Numeric" + }, + { + "ordinal": 34, + "name": "l1_tx_refund_recipient", + "type_info": "Bytea" + }, + { + "ordinal": 35, + "name": "upgrade_id", + "type_info": "Int4" + } + ], + "parameters": { + "Left": [ + "Int8", + "Numeric", + "Numeric", + "Int4" + ] + }, + "nullable": [ + false, + false, + true, + true, + false, + true, + true, + true, + false, + false, + true, + true, + true, + true, + true, + true, + true, + true, + false, + false, + false, + true, + false, + true, + false, + false, + false, + true, + true, + true, + true, + true, + false, + true, + true, + true + ] + }, + "hash": "852aa5fe1c3b2dfe875cd4adf0d19a00c170cf7725d95dd6eb8b753fa5facec8" +} diff --git a/core/lib/dal/.sqlx/query-8625ca45ce76b8c8633d390e35e0c5f885240d99ea69140a4636b00469d08497.json b/core/lib/dal/.sqlx/query-8625ca45ce76b8c8633d390e35e0c5f885240d99ea69140a4636b00469d08497.json new file mode 100644 index 00000000000..f7906122f10 --- /dev/null +++ b/core/lib/dal/.sqlx/query-8625ca45ce76b8c8633d390e35e0c5f885240d99ea69140a4636b00469d08497.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n tx_hash\n FROM\n eth_txs_history\n WHERE\n eth_tx_id = $1\n 
AND confirmed_at IS NOT NULL\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "tx_hash", + "type_info": "Text" + } + ], + "parameters": { + "Left": [ + "Int4" + ] + }, + "nullable": [ + false + ] + }, + "hash": "8625ca45ce76b8c8633d390e35e0c5f885240d99ea69140a4636b00469d08497" +} diff --git a/core/lib/dal/.sqlx/query-877d20634068170326ab5801b69c70aff49e60b7def3d93b9206e650c259168b.json b/core/lib/dal/.sqlx/query-877d20634068170326ab5801b69c70aff49e60b7def3d93b9206e650c259168b.json new file mode 100644 index 00000000000..3052b3a04d1 --- /dev/null +++ b/core/lib/dal/.sqlx/query-877d20634068170326ab5801b69c70aff49e60b7def3d93b9206e650c259168b.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n timestamp\n FROM\n l1_batches\n WHERE\n eth_execute_tx_id IS NULL\n AND number > 0\n ORDER BY\n number\n LIMIT\n 1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "timestamp", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false + ] + }, + "hash": "877d20634068170326ab5801b69c70aff49e60b7def3d93b9206e650c259168b" +} diff --git a/core/lib/dal/.sqlx/query-878c9cdfd69ad8988d049041edd63595237a0c54f67b8c669dfbb4fca32757e4.json b/core/lib/dal/.sqlx/query-878c9cdfd69ad8988d049041edd63595237a0c54f67b8c669dfbb4fca32757e4.json new file mode 100644 index 00000000000..9dde4d74ed1 --- /dev/null +++ b/core/lib/dal/.sqlx/query-878c9cdfd69ad8988d049041edd63595237a0c54f67b8c669dfbb4fca32757e4.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n l2_address\n FROM\n tokens\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "l2_address", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false + ] + }, + "hash": "878c9cdfd69ad8988d049041edd63595237a0c54f67b8c669dfbb4fca32757e4" +} diff --git a/core/lib/dal/.sqlx/query-88c629334e30bb9f5c81c858aa51af63b86e8da6d908d48998012231e1d66a60.json b/core/lib/dal/.sqlx/query-88c629334e30bb9f5c81c858aa51af63b86e8da6d908d48998012231e1d66a60.json new file mode 100644 index 00000000000..16fbffd5b66 --- /dev/null +++ b/core/lib/dal/.sqlx/query-88c629334e30bb9f5c81c858aa51af63b86e8da6d908d48998012231e1d66a60.json @@ -0,0 +1,29 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n timestamp,\n virtual_blocks\n FROM\n miniblocks\n WHERE\n number BETWEEN $1 AND $2\n ORDER BY\n number\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "timestamp", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "virtual_blocks", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [ + "Int8", + "Int8" + ] + }, + "nullable": [ + false, + false + ] + }, + "hash": "88c629334e30bb9f5c81c858aa51af63b86e8da6d908d48998012231e1d66a60" +} diff --git a/core/lib/dal/.sqlx/query-8903ba5db3f87851c12da133573b4207b69cc48b4ba648e797211631be612b69.json b/core/lib/dal/.sqlx/query-8903ba5db3f87851c12da133573b4207b69cc48b4ba648e797211631be612b69.json new file mode 100644 index 00000000000..3d47a756f3e --- /dev/null +++ b/core/lib/dal/.sqlx/query-8903ba5db3f87851c12da133573b4207b69cc48b4ba648e797211631be612b69.json @@ -0,0 +1,28 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n bytecode_hash,\n bytecode\n FROM\n factory_deps\n INNER JOIN miniblocks ON miniblocks.number = factory_deps.miniblock_number\n WHERE\n miniblocks.l1_batch_number = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "bytecode_hash", + "type_info": "Bytea" + }, + { + "ordinal": 1, + "name": "bytecode", + "type_info": "Bytea" + } + ], + 
"parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false, + false + ] + }, + "hash": "8903ba5db3f87851c12da133573b4207b69cc48b4ba648e797211631be612b69" +} diff --git a/core/lib/dal/.sqlx/query-894665c2c467bd1aaeb331b112c567e2667c63a033baa6b427bd8a0898c08bf2.json b/core/lib/dal/.sqlx/query-894665c2c467bd1aaeb331b112c567e2667c63a033baa6b427bd8a0898c08bf2.json new file mode 100644 index 00000000000..06d3461c3fa --- /dev/null +++ b/core/lib/dal/.sqlx/query-894665c2c467bd1aaeb331b112c567e2667c63a033baa6b427bd8a0898c08bf2.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n protocol_version\n FROM\n miniblocks\n WHERE\n number = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "protocol_version", + "type_info": "Int4" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + true + ] + }, + "hash": "894665c2c467bd1aaeb331b112c567e2667c63a033baa6b427bd8a0898c08bf2" +} diff --git a/core/lib/dal/.sqlx/query-8a7a57ca3d4d65da3e0877c003902c690c33686c889d318b1d64bdd7fa6374db.json b/core/lib/dal/.sqlx/query-8a7a57ca3d4d65da3e0877c003902c690c33686c889d318b1d64bdd7fa6374db.json new file mode 100644 index 00000000000..ea6562d1a67 --- /dev/null +++ b/core/lib/dal/.sqlx/query-8a7a57ca3d4d65da3e0877c003902c690c33686c889d318b1d64bdd7fa6374db.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n l1_block_number\n FROM\n transactions\n WHERE\n priority_op_id IS NOT NULL\n ORDER BY\n priority_op_id DESC\n LIMIT\n 1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "l1_block_number", + "type_info": "Int4" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + true + ] + }, + "hash": "8a7a57ca3d4d65da3e0877c003902c690c33686c889d318b1d64bdd7fa6374db" +} diff --git a/core/lib/dal/.sqlx/query-8b9e5d525c026de97c0a732b1adc8dc4bd57e32dfefe1017acba9a15fc14b895.json b/core/lib/dal/.sqlx/query-8b9e5d525c026de97c0a732b1adc8dc4bd57e32dfefe1017acba9a15fc14b895.json new file mode 100644 index 00000000000..de369bccec5 --- /dev/null +++ b/core/lib/dal/.sqlx/query-8b9e5d525c026de97c0a732b1adc8dc4bd57e32dfefe1017acba9a15fc14b895.json @@ -0,0 +1,36 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n storage_logs.hashed_key,\n storage_logs.value,\n initial_writes.index\n FROM\n storage_logs\n INNER JOIN initial_writes ON storage_logs.hashed_key = initial_writes.hashed_key\n WHERE\n storage_logs.miniblock_number = $1\n AND storage_logs.hashed_key >= $2::bytea\n AND storage_logs.hashed_key <= $3::bytea\n ORDER BY\n storage_logs.hashed_key\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "hashed_key", + "type_info": "Bytea" + }, + { + "ordinal": 1, + "name": "value", + "type_info": "Bytea" + }, + { + "ordinal": 2, + "name": "index", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [ + "Int8", + "Bytea", + "Bytea" + ] + }, + "nullable": [ + false, + false, + false + ] + }, + "hash": "8b9e5d525c026de97c0a732b1adc8dc4bd57e32dfefe1017acba9a15fc14b895" +} diff --git a/core/lib/dal/.sqlx/query-8f5e89ccadd4ea1da7bfe9793a1cbb724af0f0216433a70f19d784e3f2afbc9f.json b/core/lib/dal/.sqlx/query-8f5e89ccadd4ea1da7bfe9793a1cbb724af0f0216433a70f19d784e3f2afbc9f.json new file mode 100644 index 00000000000..cf7822e8ec8 --- /dev/null +++ b/core/lib/dal/.sqlx/query-8f5e89ccadd4ea1da7bfe9793a1cbb724af0f0216433a70f19d784e3f2afbc9f.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n protocol_version\n FROM\n witness_inputs_fri\n WHERE\n l1_batch_number = $1\n ", + "describe": { + 
"columns": [ + { + "ordinal": 0, + "name": "protocol_version", + "type_info": "Int4" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + true + ] + }, + "hash": "8f5e89ccadd4ea1da7bfe9793a1cbb724af0f0216433a70f19d784e3f2afbc9f" +} diff --git a/core/lib/dal/.sqlx/query-90f7657bae05c4bad6902c6bfb1b8ba0b771cb45573aca81db254f6bcfc17c77.json b/core/lib/dal/.sqlx/query-90f7657bae05c4bad6902c6bfb1b8ba0b771cb45573aca81db254f6bcfc17c77.json new file mode 100644 index 00000000000..dfd7cd9c555 --- /dev/null +++ b/core/lib/dal/.sqlx/query-90f7657bae05c4bad6902c6bfb1b8ba0b771cb45573aca81db254f6bcfc17c77.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n nonce\n FROM\n eth_txs\n ORDER BY\n id DESC\n LIMIT\n 1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "nonce", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false + ] + }, + "hash": "90f7657bae05c4bad6902c6bfb1b8ba0b771cb45573aca81db254f6bcfc17c77" +} diff --git a/core/lib/dal/.sqlx/query-9334df89c9562d4b35611b8e5ffb17305343df99ebc55f240278b5c4e63f89f5.json b/core/lib/dal/.sqlx/query-9334df89c9562d4b35611b8e5ffb17305343df99ebc55f240278b5c4e63f89f5.json new file mode 100644 index 00000000000..92e74026bf5 --- /dev/null +++ b/core/lib/dal/.sqlx/query-9334df89c9562d4b35611b8e5ffb17305343df99ebc55f240278b5c4e63f89f5.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n value\n FROM\n storage\n WHERE\n hashed_key = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "value", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Bytea" + ] + }, + "nullable": [ + false + ] + }, + "hash": "9334df89c9562d4b35611b8e5ffb17305343df99ebc55f240278b5c4e63f89f5" +} diff --git a/core/lib/dal/.sqlx/query-95ea0522a3eff6c0d2d0b1c58fd2767e112b95f4d103c27acd6f7ede108bd300.json b/core/lib/dal/.sqlx/query-95ea0522a3eff6c0d2d0b1c58fd2767e112b95f4d103c27acd6f7ede108bd300.json new file mode 100644 index 00000000000..3c822fe50d1 --- /dev/null +++ b/core/lib/dal/.sqlx/query-95ea0522a3eff6c0d2d0b1c58fd2767e112b95f4d103c27acd6f7ede108bd300.json @@ -0,0 +1,16 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE eth_txs\n SET\n gas_used = $1,\n confirmed_eth_tx_history_id = $2\n WHERE\n id = $3\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8", + "Int4", + "Int4" + ] + }, + "nullable": [] + }, + "hash": "95ea0522a3eff6c0d2d0b1c58fd2767e112b95f4d103c27acd6f7ede108bd300" +} diff --git a/core/lib/dal/.sqlx/query-966dddc881bfe6fd94b56f587424125a2633ddb6abaa129f2b12389140d83c3f.json b/core/lib/dal/.sqlx/query-966dddc881bfe6fd94b56f587424125a2633ddb6abaa129f2b12389140d83c3f.json new file mode 100644 index 00000000000..bf4eb3f9462 --- /dev/null +++ b/core/lib/dal/.sqlx/query-966dddc881bfe6fd94b56f587424125a2633ddb6abaa129f2b12389140d83c3f.json @@ -0,0 +1,40 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n recursion_scheduler_level_vk_hash,\n recursion_node_level_vk_hash,\n recursion_leaf_level_vk_hash,\n recursion_circuits_set_vks_hash\n FROM\n protocol_versions\n WHERE\n id = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "recursion_scheduler_level_vk_hash", + "type_info": "Bytea" + }, + { + "ordinal": 1, + "name": "recursion_node_level_vk_hash", + "type_info": "Bytea" + }, + { + "ordinal": 2, + "name": "recursion_leaf_level_vk_hash", + "type_info": "Bytea" + }, + { + "ordinal": 3, + "name": "recursion_circuits_set_vks_hash", + "type_info": "Bytea" + } + ], + "parameters": { + 
"Left": [ + "Int4" + ] + }, + "nullable": [ + false, + false, + false, + false + ] + }, + "hash": "966dddc881bfe6fd94b56f587424125a2633ddb6abaa129f2b12389140d83c3f" +} diff --git a/core/lib/dal/.sqlx/query-9955b9215096f781442153518c4f0a9676e26f422506545ccc90b7e8a36c8d47.json b/core/lib/dal/.sqlx/query-9955b9215096f781442153518c4f0a9676e26f422506545ccc90b7e8a36c8d47.json new file mode 100644 index 00000000000..c05539164ce --- /dev/null +++ b/core/lib/dal/.sqlx/query-9955b9215096f781442153518c4f0a9676e26f422506545ccc90b7e8a36c8d47.json @@ -0,0 +1,35 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n factory_deps.bytecode,\n transactions.data AS \"data?\",\n transactions.contract_address AS \"contract_address?\"\n FROM\n (\n SELECT\n *\n FROM\n storage_logs\n WHERE\n storage_logs.hashed_key = $1\n ORDER BY\n miniblock_number DESC,\n operation_number DESC\n LIMIT\n 1\n ) storage_logs\n JOIN factory_deps ON factory_deps.bytecode_hash = storage_logs.value\n LEFT JOIN transactions ON transactions.hash = storage_logs.tx_hash\n WHERE\n storage_logs.value != $2\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "bytecode", + "type_info": "Bytea" + }, + { + "ordinal": 1, + "name": "data?", + "type_info": "Jsonb" + }, + { + "ordinal": 2, + "name": "contract_address?", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Bytea", + "Bytea" + ] + }, + "nullable": [ + false, + false, + true + ] + }, + "hash": "9955b9215096f781442153518c4f0a9676e26f422506545ccc90b7e8a36c8d47" +} diff --git a/core/lib/dal/.sqlx/query-995cecd37a5235d1acc2e6fc418d9b6a1a6fe629f9a02c8e33330a0efda64068.json b/core/lib/dal/.sqlx/query-995cecd37a5235d1acc2e6fc418d9b6a1a6fe629f9a02c8e33330a0efda64068.json new file mode 100644 index 00000000000..49e0a4a8f07 --- /dev/null +++ b/core/lib/dal/.sqlx/query-995cecd37a5235d1acc2e6fc418d9b6a1a6fe629f9a02c8e33330a0efda64068.json @@ -0,0 +1,32 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n l1_batch_number,\n factory_deps_filepath,\n storage_logs_filepaths\n FROM\n snapshots\n ORDER BY\n l1_batch_number DESC\n LIMIT\n 1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "l1_batch_number", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "factory_deps_filepath", + "type_info": "Text" + }, + { + "ordinal": 2, + "name": "storage_logs_filepaths", + "type_info": "TextArray" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false, + false, + false + ] + }, + "hash": "995cecd37a5235d1acc2e6fc418d9b6a1a6fe629f9a02c8e33330a0efda64068" +} diff --git a/core/lib/dal/.sqlx/query-99acb091650478fe0feb367b1d64561347b81f8931cc2addefa907c9aa9355e6.json b/core/lib/dal/.sqlx/query-99acb091650478fe0feb367b1d64561347b81f8931cc2addefa907c9aa9355e6.json new file mode 100644 index 00000000000..2aa6a538125 --- /dev/null +++ b/core/lib/dal/.sqlx/query-99acb091650478fe0feb367b1d64561347b81f8931cc2addefa907c9aa9355e6.json @@ -0,0 +1,82 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n *\n FROM\n protocol_versions\n WHERE\n id < $1\n ORDER BY\n id DESC\n LIMIT\n 1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "id", + "type_info": "Int4" + }, + { + "ordinal": 1, + "name": "timestamp", + "type_info": "Int8" + }, + { + "ordinal": 2, + "name": "recursion_scheduler_level_vk_hash", + "type_info": "Bytea" + }, + { + "ordinal": 3, + "name": "recursion_node_level_vk_hash", + "type_info": "Bytea" + }, + { + "ordinal": 4, + "name": "recursion_leaf_level_vk_hash", + "type_info": "Bytea" + }, + { + "ordinal": 5, + "name": 
"recursion_circuits_set_vks_hash", + "type_info": "Bytea" + }, + { + "ordinal": 6, + "name": "bootloader_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 7, + "name": "default_account_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 8, + "name": "verifier_address", + "type_info": "Bytea" + }, + { + "ordinal": 9, + "name": "upgrade_tx_hash", + "type_info": "Bytea" + }, + { + "ordinal": 10, + "name": "created_at", + "type_info": "Timestamp" + } + ], + "parameters": { + "Left": [ + "Int4" + ] + }, + "nullable": [ + false, + false, + false, + false, + false, + false, + false, + false, + false, + true, + false + ] + }, + "hash": "99acb091650478fe0feb367b1d64561347b81f8931cc2addefa907c9aa9355e6" +} diff --git a/core/lib/dal/.sqlx/query-99d9ee2a0d0450acefa0d9b6c031e30606fddf6631c859ab03819ec476bcf005.json b/core/lib/dal/.sqlx/query-99d9ee2a0d0450acefa0d9b6c031e30606fddf6631c859ab03819ec476bcf005.json new file mode 100644 index 00000000000..ab00c7b26ce --- /dev/null +++ b/core/lib/dal/.sqlx/query-99d9ee2a0d0450acefa0d9b6c031e30606fddf6631c859ab03819ec476bcf005.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n hashed_key\n FROM\n initial_writes\n WHERE\n hashed_key = ANY ($1)\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "hashed_key", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "ByteaArray" + ] + }, + "nullable": [ + false + ] + }, + "hash": "99d9ee2a0d0450acefa0d9b6c031e30606fddf6631c859ab03819ec476bcf005" +} diff --git a/core/lib/dal/.sqlx/query-99dd6f04e82585d81ac23bc4871578179e6269c6ff36877cedee264067ccdafc.json b/core/lib/dal/.sqlx/query-99dd6f04e82585d81ac23bc4871578179e6269c6ff36877cedee264067ccdafc.json new file mode 100644 index 00000000000..b8c14c53462 --- /dev/null +++ b/core/lib/dal/.sqlx/query-99dd6f04e82585d81ac23bc4871578179e6269c6ff36877cedee264067ccdafc.json @@ -0,0 +1,65 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE basic_witness_input_producer_jobs\n SET\n status = $1,\n attempts = attempts + 1,\n updated_at = NOW(),\n processing_started_at = NOW()\n WHERE\n l1_batch_number = (\n SELECT\n l1_batch_number\n FROM\n basic_witness_input_producer_jobs\n WHERE\n status = $2\n OR (\n status = $1\n AND processing_started_at < NOW() - $4::INTERVAL\n )\n OR (\n status = $3\n AND attempts < $5\n )\n ORDER BY\n l1_batch_number ASC\n LIMIT\n 1\n FOR UPDATE\n SKIP LOCKED\n )\n RETURNING\n basic_witness_input_producer_jobs.l1_batch_number\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "l1_batch_number", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [ + { + "Custom": { + "name": "basic_witness_input_producer_job_status", + "kind": { + "Enum": [ + "Queued", + "ManuallySkipped", + "InProgress", + "Successful", + "Failed" + ] + } + } + }, + { + "Custom": { + "name": "basic_witness_input_producer_job_status", + "kind": { + "Enum": [ + "Queued", + "ManuallySkipped", + "InProgress", + "Successful", + "Failed" + ] + } + } + }, + { + "Custom": { + "name": "basic_witness_input_producer_job_status", + "kind": { + "Enum": [ + "Queued", + "ManuallySkipped", + "InProgress", + "Successful", + "Failed" + ] + } + } + }, + "Interval", + "Int2" + ] + }, + "nullable": [ + false + ] + }, + "hash": "99dd6f04e82585d81ac23bc4871578179e6269c6ff36877cedee264067ccdafc" +} diff --git a/core/lib/dal/.sqlx/query-9b90f7a7ffee3cd8439f90a6f79693831e2ab6d6d3c1805df5aa51d76994ec19.json b/core/lib/dal/.sqlx/query-9b90f7a7ffee3cd8439f90a6f79693831e2ab6d6d3c1805df5aa51d76994ec19.json new file mode 100644 index 
00000000000..a890a6ca07e --- /dev/null +++ b/core/lib/dal/.sqlx/query-9b90f7a7ffee3cd8439f90a6f79693831e2ab6d6d3c1805df5aa51d76994ec19.json @@ -0,0 +1,16 @@ +{ + "db_name": "PostgreSQL", + "query": "\n INSERT INTO\n witness_inputs_fri (\n l1_batch_number,\n merkle_tree_paths_blob_url,\n protocol_version,\n status,\n created_at,\n updated_at\n )\n VALUES\n ($1, $2, $3, 'queued', NOW(), NOW())\n ON CONFLICT (l1_batch_number) DO NOTHING\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8", + "Text", + "Int4" + ] + }, + "nullable": [] + }, + "hash": "9b90f7a7ffee3cd8439f90a6f79693831e2ab6d6d3c1805df5aa51d76994ec19" +} diff --git a/core/lib/dal/.sqlx/query-9c2a5f32c627d3a5c6f1e87b31ce3b0fd67aa1f5f7ea0de673a2fbe1f742db86.json b/core/lib/dal/.sqlx/query-9c2a5f32c627d3a5c6f1e87b31ce3b0fd67aa1f5f7ea0de673a2fbe1f742db86.json new file mode 100644 index 00000000000..f9a53d70763 --- /dev/null +++ b/core/lib/dal/.sqlx/query-9c2a5f32c627d3a5c6f1e87b31ce3b0fd67aa1f5f7ea0de673a2fbe1f742db86.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n timestamp\n FROM\n miniblocks\n WHERE\n number = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "timestamp", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false + ] + }, + "hash": "9c2a5f32c627d3a5c6f1e87b31ce3b0fd67aa1f5f7ea0de673a2fbe1f742db86" +} diff --git a/core/lib/dal/.sqlx/query-9cfcde703a48b110791d2ae1103c9317c01d6e35db3b07d0a31f436e7e3c7c40.json b/core/lib/dal/.sqlx/query-9cfcde703a48b110791d2ae1103c9317c01d6e35db3b07d0a31f436e7e3c7c40.json new file mode 100644 index 00000000000..c4beef96173 --- /dev/null +++ b/core/lib/dal/.sqlx/query-9cfcde703a48b110791d2ae1103c9317c01d6e35db3b07d0a31f436e7e3c7c40.json @@ -0,0 +1,14 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE contract_verification_requests\n SET\n status = 'successful',\n updated_at = NOW()\n WHERE\n id = $1\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [] + }, + "hash": "9cfcde703a48b110791d2ae1103c9317c01d6e35db3b07d0a31f436e7e3c7c40" +} diff --git a/core/lib/dal/.sqlx/query-9de5acb3de1b96ff8eb62a6324e8e221a8ef9014458cc7f1dbc60c056a0768a0.json b/core/lib/dal/.sqlx/query-9de5acb3de1b96ff8eb62a6324e8e221a8ef9014458cc7f1dbc60c056a0768a0.json new file mode 100644 index 00000000000..674377635ce --- /dev/null +++ b/core/lib/dal/.sqlx/query-9de5acb3de1b96ff8eb62a6324e8e221a8ef9014458cc7f1dbc60c056a0768a0.json @@ -0,0 +1,16 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE snapshots\n SET\n storage_logs_filepaths[$2] = $3,\n updated_at = NOW()\n WHERE\n l1_batch_number = $1\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8", + "Int4", + "Text" + ] + }, + "nullable": [] + }, + "hash": "9de5acb3de1b96ff8eb62a6324e8e221a8ef9014458cc7f1dbc60c056a0768a0" +} diff --git a/core/lib/dal/.sqlx/query-9ef2f43e6201cc00a0e1425a666a36532fee1450733849852dfd20e18ded1f03.json b/core/lib/dal/.sqlx/query-9ef2f43e6201cc00a0e1425a666a36532fee1450733849852dfd20e18ded1f03.json new file mode 100644 index 00000000000..fd770071cf8 --- /dev/null +++ b/core/lib/dal/.sqlx/query-9ef2f43e6201cc00a0e1425a666a36532fee1450733849852dfd20e18ded1f03.json @@ -0,0 +1,15 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE scheduler_witness_jobs_fri\n SET\n status = 'failed',\n error = $1,\n updated_at = NOW()\n WHERE\n l1_batch_number = $2\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Text", + "Int8" + ] + }, + 
"nullable": [] + }, + "hash": "9ef2f43e6201cc00a0e1425a666a36532fee1450733849852dfd20e18ded1f03" +} diff --git a/core/lib/dal/.sqlx/query-a0e2b2c034cc5f668f0b3d43b94d2e2326d7ace079b095def52723a45b65d3f3.json b/core/lib/dal/.sqlx/query-a0e2b2c034cc5f668f0b3d43b94d2e2326d7ace079b095def52723a45b65d3f3.json new file mode 100644 index 00000000000..7dc19564f7f --- /dev/null +++ b/core/lib/dal/.sqlx/query-a0e2b2c034cc5f668f0b3d43b94d2e2326d7ace079b095def52723a45b65d3f3.json @@ -0,0 +1,15 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE witness_inputs_fri\n SET\n status = 'failed',\n error = $1,\n updated_at = NOW()\n WHERE\n l1_batch_number = $2\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Text", + "Int8" + ] + }, + "nullable": [] + }, + "hash": "a0e2b2c034cc5f668f0b3d43b94d2e2326d7ace079b095def52723a45b65d3f3" +} diff --git a/core/lib/dal/.sqlx/query-a2d02b71e3dcc29a2c0c20b44392cfbaf09164aecfa5eed8d7142518ad96abea.json b/core/lib/dal/.sqlx/query-a2d02b71e3dcc29a2c0c20b44392cfbaf09164aecfa5eed8d7142518ad96abea.json new file mode 100644 index 00000000000..fc36e47b54c --- /dev/null +++ b/core/lib/dal/.sqlx/query-a2d02b71e3dcc29a2c0c20b44392cfbaf09164aecfa5eed8d7142518ad96abea.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n initial_bootloader_heap_content\n FROM\n l1_batches\n WHERE\n number = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "initial_bootloader_heap_content", + "type_info": "Jsonb" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false + ] + }, + "hash": "a2d02b71e3dcc29a2c0c20b44392cfbaf09164aecfa5eed8d7142518ad96abea" +} diff --git a/core/lib/dal/.sqlx/query-a4861c931e84d897c27f666de1c5ca679a0459a012899a373c67393d30d12601.json b/core/lib/dal/.sqlx/query-a4861c931e84d897c27f666de1c5ca679a0459a012899a373c67393d30d12601.json new file mode 100644 index 00000000000..104a7fb2556 --- /dev/null +++ b/core/lib/dal/.sqlx/query-a4861c931e84d897c27f666de1c5ca679a0459a012899a373c67393d30d12601.json @@ -0,0 +1,14 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE scheduler_dependency_tracker_fri\n SET\n status = 'queued'\n WHERE\n l1_batch_number = ANY ($1)\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8Array" + ] + }, + "nullable": [] + }, + "hash": "a4861c931e84d897c27f666de1c5ca679a0459a012899a373c67393d30d12601" +} diff --git a/core/lib/dal/.sqlx/query-a48c92f557e5e3a2674ce0dee9cd92f5a547150590b8c221c4065eab11175c7a.json b/core/lib/dal/.sqlx/query-a48c92f557e5e3a2674ce0dee9cd92f5a547150590b8c221c4065eab11175c7a.json new file mode 100644 index 00000000000..49e547e5564 --- /dev/null +++ b/core/lib/dal/.sqlx/query-a48c92f557e5e3a2674ce0dee9cd92f5a547150590b8c221c4065eab11175c7a.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n MAX(INDEX) AS \"max?\"\n FROM\n initial_writes\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "max?", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + null + ] + }, + "hash": "a48c92f557e5e3a2674ce0dee9cd92f5a547150590b8c221c4065eab11175c7a" +} diff --git a/core/lib/dal/.sqlx/query-a4a4b0bfbe05eac100c42a717e8d7cbb0bc526ebe61a07f735d4ab587058b22c.json b/core/lib/dal/.sqlx/query-a4a4b0bfbe05eac100c42a717e8d7cbb0bc526ebe61a07f735d4ab587058b22c.json new file mode 100644 index 00000000000..f19add71350 --- /dev/null +++ b/core/lib/dal/.sqlx/query-a4a4b0bfbe05eac100c42a717e8d7cbb0bc526ebe61a07f735d4ab587058b22c.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", 
+ "query": "\n SELECT\n hash\n FROM\n miniblocks\n WHERE\n number = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "hash", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false + ] + }, + "hash": "a4a4b0bfbe05eac100c42a717e8d7cbb0bc526ebe61a07f735d4ab587058b22c" +} diff --git a/core/lib/dal/.sqlx/query-a4fcd075b68467bb119e49e6b20a69138206dfeb41f3daff4a3eef1de0bed4e4.json b/core/lib/dal/.sqlx/query-a4fcd075b68467bb119e49e6b20a69138206dfeb41f3daff4a3eef1de0bed4e4.json new file mode 100644 index 00000000000..39b0c391ef5 --- /dev/null +++ b/core/lib/dal/.sqlx/query-a4fcd075b68467bb119e49e6b20a69138206dfeb41f3daff4a3eef1de0bed4e4.json @@ -0,0 +1,16 @@ +{ + "db_name": "PostgreSQL", + "query": "\n INSERT INTO\n initial_writes (hashed_key, INDEX, l1_batch_number, created_at, updated_at)\n SELECT\n u.hashed_key,\n u.index,\n $3,\n NOW(),\n NOW()\n FROM\n UNNEST($1::bytea[], $2::BIGINT[]) AS u (hashed_key, INDEX)\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "ByteaArray", + "Int8Array", + "Int8" + ] + }, + "nullable": [] + }, + "hash": "a4fcd075b68467bb119e49e6b20a69138206dfeb41f3daff4a3eef1de0bed4e4" +} diff --git a/core/lib/dal/.sqlx/query-a74d029f58801ec05d8d14a3b065d93e391600ab9da2e5fd4e8b139ab3d77583.json b/core/lib/dal/.sqlx/query-a74d029f58801ec05d8d14a3b065d93e391600ab9da2e5fd4e8b139ab3d77583.json new file mode 100644 index 00000000000..c4f1f4bbcd0 --- /dev/null +++ b/core/lib/dal/.sqlx/query-a74d029f58801ec05d8d14a3b065d93e391600ab9da2e5fd4e8b139ab3d77583.json @@ -0,0 +1,15 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE proof_generation_details\n SET\n status = 'generated',\n proof_blob_url = $1,\n updated_at = NOW()\n WHERE\n l1_batch_number = $2\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Text", + "Int8" + ] + }, + "nullable": [] + }, + "hash": "a74d029f58801ec05d8d14a3b065d93e391600ab9da2e5fd4e8b139ab3d77583" +} diff --git a/core/lib/dal/.sqlx/query-a83f853b1d63365e88975a926816c6e7b4595f3e7c3dca1d1590de5437187733.json b/core/lib/dal/.sqlx/query-a83f853b1d63365e88975a926816c6e7b4595f3e7c3dca1d1590de5437187733.json new file mode 100644 index 00000000000..0dab103fa24 --- /dev/null +++ b/core/lib/dal/.sqlx/query-a83f853b1d63365e88975a926816c6e7b4595f3e7c3dca1d1590de5437187733.json @@ -0,0 +1,29 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE l1_batches\n SET\n hash = $1,\n merkle_root_hash = $2,\n commitment = $3,\n default_aa_code_hash = $4,\n compressed_repeated_writes = $5,\n compressed_initial_writes = $6,\n l2_l1_compressed_messages = $7,\n l2_l1_merkle_root = $8,\n zkporter_is_available = $9,\n bootloader_code_hash = $10,\n rollup_last_leaf_index = $11,\n aux_data_hash = $12,\n pass_through_data_hash = $13,\n meta_parameters_hash = $14,\n compressed_state_diffs = $15,\n updated_at = NOW()\n WHERE\n number = $16\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Bytea", + "Bytea", + "Bytea", + "Bytea", + "Bytea", + "Bytea", + "Bytea", + "Bytea", + "Bool", + "Bytea", + "Int8", + "Bytea", + "Bytea", + "Bytea", + "Bytea", + "Int8" + ] + }, + "nullable": [] + }, + "hash": "a83f853b1d63365e88975a926816c6e7b4595f3e7c3dca1d1590de5437187733" +} diff --git a/core/lib/dal/.sqlx/query-a84ee70bec8c03bd51e1c6bad44c9a64904026506914abae2946e5d353d6a604.json b/core/lib/dal/.sqlx/query-a84ee70bec8c03bd51e1c6bad44c9a64904026506914abae2946e5d353d6a604.json new file mode 100644 index 00000000000..3275df2a3d5 --- /dev/null +++ 
b/core/lib/dal/.sqlx/query-a84ee70bec8c03bd51e1c6bad44c9a64904026506914abae2946e5d353d6a604.json @@ -0,0 +1,23 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n id\n FROM\n prover_jobs_fri\n WHERE\n l1_batch_number = $1\n AND status = 'successful'\n AND aggregation_round = $2\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "id", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [ + "Int8", + "Int2" + ] + }, + "nullable": [ + false + ] + }, + "hash": "a84ee70bec8c03bd51e1c6bad44c9a64904026506914abae2946e5d353d6a604" +} diff --git a/core/lib/dal/.sqlx/query-a91c23c4d33771122cec2589c6fe2757dbc13be6b30f5840744e5e0569adc66e.json b/core/lib/dal/.sqlx/query-a91c23c4d33771122cec2589c6fe2757dbc13be6b30f5840744e5e0569adc66e.json new file mode 100644 index 00000000000..7c757648b38 --- /dev/null +++ b/core/lib/dal/.sqlx/query-a91c23c4d33771122cec2589c6fe2757dbc13be6b30f5840744e5e0569adc66e.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n upgrade_tx_hash\n FROM\n protocol_versions\n WHERE\n id = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "upgrade_tx_hash", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Int4" + ] + }, + "nullable": [ + true + ] + }, + "hash": "a91c23c4d33771122cec2589c6fe2757dbc13be6b30f5840744e5e0569adc66e" +} diff --git a/core/lib/dal/.sqlx/query-aa91697157517322b0dbb53dca99f41220c51f58a03c61d6b7789eab0504e320.json b/core/lib/dal/.sqlx/query-aa91697157517322b0dbb53dca99f41220c51f58a03c61d6b7789eab0504e320.json new file mode 100644 index 00000000000..27d48231728 --- /dev/null +++ b/core/lib/dal/.sqlx/query-aa91697157517322b0dbb53dca99f41220c51f58a03c61d6b7789eab0504e320.json @@ -0,0 +1,32 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE node_aggregation_witness_jobs_fri\n SET\n status = 'queued'\n WHERE\n (l1_batch_number, circuit_id, depth) IN (\n SELECT\n prover_jobs_fri.l1_batch_number,\n prover_jobs_fri.circuit_id,\n prover_jobs_fri.depth\n FROM\n prover_jobs_fri\n JOIN node_aggregation_witness_jobs_fri nawj ON prover_jobs_fri.l1_batch_number = nawj.l1_batch_number\n AND prover_jobs_fri.circuit_id = nawj.circuit_id\n AND prover_jobs_fri.depth = nawj.depth\n WHERE\n nawj.status = 'waiting_for_proofs'\n AND prover_jobs_fri.status = 'successful'\n AND prover_jobs_fri.aggregation_round = 1\n AND prover_jobs_fri.depth = 0\n GROUP BY\n prover_jobs_fri.l1_batch_number,\n prover_jobs_fri.circuit_id,\n prover_jobs_fri.depth,\n nawj.number_of_dependent_jobs\n HAVING\n COUNT(*) = nawj.number_of_dependent_jobs\n )\n RETURNING\n l1_batch_number,\n circuit_id,\n depth;\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "l1_batch_number", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "circuit_id", + "type_info": "Int2" + }, + { + "ordinal": 2, + "name": "depth", + "type_info": "Int4" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false, + false, + false + ] + }, + "hash": "aa91697157517322b0dbb53dca99f41220c51f58a03c61d6b7789eab0504e320" +} diff --git a/core/lib/dal/.sqlx/query-aaf4fb97c95a5290fb1620cd868477dcf21955e0921ba648ba2e751dbfc3cb45.json b/core/lib/dal/.sqlx/query-aaf4fb97c95a5290fb1620cd868477dcf21955e0921ba648ba2e751dbfc3cb45.json new file mode 100644 index 00000000000..614b853c625 --- /dev/null +++ b/core/lib/dal/.sqlx/query-aaf4fb97c95a5290fb1620cd868477dcf21955e0921ba648ba2e751dbfc3cb45.json @@ -0,0 +1,38 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n COUNT(*) AS \"count!\",\n circuit_id AS \"circuit_id!\",\n 
aggregation_round AS \"aggregation_round!\",\n status AS \"status!\"\n FROM\n prover_jobs_fri\n WHERE\n status <> 'skipped'\n AND status <> 'successful'\n GROUP BY\n circuit_id,\n aggregation_round,\n status\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "count!", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "circuit_id!", + "type_info": "Int2" + }, + { + "ordinal": 2, + "name": "aggregation_round!", + "type_info": "Int2" + }, + { + "ordinal": 3, + "name": "status!", + "type_info": "Text" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + null, + false, + false, + false + ] + }, + "hash": "aaf4fb97c95a5290fb1620cd868477dcf21955e0921ba648ba2e751dbfc3cb45" +} diff --git a/core/lib/dal/.sqlx/query-ac505ae6cfc744b07b52997db789bdc9efc6b89fc0444caf8271edd7dfe4a3bc.json b/core/lib/dal/.sqlx/query-ac505ae6cfc744b07b52997db789bdc9efc6b89fc0444caf8271edd7dfe4a3bc.json new file mode 100644 index 00000000000..2dad4563cc7 --- /dev/null +++ b/core/lib/dal/.sqlx/query-ac505ae6cfc744b07b52997db789bdc9efc6b89fc0444caf8271edd7dfe4a3bc.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n id\n FROM\n protocol_versions\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "id", + "type_info": "Int4" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false + ] + }, + "hash": "ac505ae6cfc744b07b52997db789bdc9efc6b89fc0444caf8271edd7dfe4a3bc" +} diff --git a/core/lib/dal/.sqlx/query-ada54322a28012b1b761f3631c4cd6ca26aa2fa565fcf208b6985f461c1868f2.json b/core/lib/dal/.sqlx/query-ada54322a28012b1b761f3631c4cd6ca26aa2fa565fcf208b6985f461c1868f2.json new file mode 100644 index 00000000000..04fde45469f --- /dev/null +++ b/core/lib/dal/.sqlx/query-ada54322a28012b1b761f3631c4cd6ca26aa2fa565fcf208b6985f461c1868f2.json @@ -0,0 +1,28 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE eth_txs_history\n SET\n updated_at = NOW(),\n confirmed_at = NOW()\n WHERE\n tx_hash = $1\n RETURNING\n id,\n eth_tx_id\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "id", + "type_info": "Int4" + }, + { + "ordinal": 1, + "name": "eth_tx_id", + "type_info": "Int4" + } + ], + "parameters": { + "Left": [ + "Text" + ] + }, + "nullable": [ + false, + false + ] + }, + "hash": "ada54322a28012b1b761f3631c4cd6ca26aa2fa565fcf208b6985f461c1868f2" +} diff --git a/core/lib/dal/.sqlx/query-aeda34b1beadca72e3e600ea9ae63f436a4f16dbeb784d0d28be392ad96b1c49.json b/core/lib/dal/.sqlx/query-aeda34b1beadca72e3e600ea9ae63f436a4f16dbeb784d0d28be392ad96b1c49.json new file mode 100644 index 00000000000..b411d3ce830 --- /dev/null +++ b/core/lib/dal/.sqlx/query-aeda34b1beadca72e3e600ea9ae63f436a4f16dbeb784d0d28be392ad96b1c49.json @@ -0,0 +1,14 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE eth_txs\n SET\n has_failed = TRUE\n WHERE\n id = $1\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int4" + ] + }, + "nullable": [] + }, + "hash": "aeda34b1beadca72e3e600ea9ae63f436a4f16dbeb784d0d28be392ad96b1c49" +} diff --git a/core/lib/dal/.sqlx/query-aefea1f3e87f28791cc547f193a895006e23ec73018f4b4e0a364a741f5c9781.json b/core/lib/dal/.sqlx/query-aefea1f3e87f28791cc547f193a895006e23ec73018f4b4e0a364a741f5c9781.json new file mode 100644 index 00000000000..c82bed1169c --- /dev/null +++ b/core/lib/dal/.sqlx/query-aefea1f3e87f28791cc547f193a895006e23ec73018f4b4e0a364a741f5c9781.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n l1_batch_number\n FROM\n miniblocks\n WHERE\n number = $1\n ", + "describe": { + "columns": 
[ + { + "ordinal": 0, + "name": "l1_batch_number", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + true + ] + }, + "hash": "aefea1f3e87f28791cc547f193a895006e23ec73018f4b4e0a364a741f5c9781" +} diff --git a/core/lib/dal/.sqlx/query-af72fabd90eb43fb315f46d7fe9f724216807ffd481cd6f7f19968e42e52b284.json b/core/lib/dal/.sqlx/query-af72fabd90eb43fb315f46d7fe9f724216807ffd481cd6f7f19968e42e52b284.json new file mode 100644 index 00000000000..6674fab59ea --- /dev/null +++ b/core/lib/dal/.sqlx/query-af72fabd90eb43fb315f46d7fe9f724216807ffd481cd6f7f19968e42e52b284.json @@ -0,0 +1,14 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE prover_jobs_fri\n SET\n status = 'sent_to_server',\n updated_at = NOW()\n WHERE\n l1_batch_number = $1\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [] + }, + "hash": "af72fabd90eb43fb315f46d7fe9f724216807ffd481cd6f7f19968e42e52b284" +} diff --git a/core/lib/dal/.sqlx/query-afc24bd1407dba82cd3dc9e7ee71ac4ab2d73bda6022700aeb0a630a2563a4b4.json b/core/lib/dal/.sqlx/query-afc24bd1407dba82cd3dc9e7ee71ac4ab2d73bda6022700aeb0a630a2563a4b4.json new file mode 100644 index 00000000000..ede2995ff55 --- /dev/null +++ b/core/lib/dal/.sqlx/query-afc24bd1407dba82cd3dc9e7ee71ac4ab2d73bda6022700aeb0a630a2563a4b4.json @@ -0,0 +1,15 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE leaf_aggregation_witness_jobs_fri\n SET\n status = 'failed',\n error = $1,\n updated_at = NOW()\n WHERE\n id = $2\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Text", + "Int8" + ] + }, + "nullable": [] + }, + "hash": "afc24bd1407dba82cd3dc9e7ee71ac4ab2d73bda6022700aeb0a630a2563a4b4" +} diff --git a/core/lib/dal/.sqlx/query-b17c71983da060f08616e001b42f8dcbcb014b4f808c6232abd9a83354c995ac.json b/core/lib/dal/.sqlx/query-b17c71983da060f08616e001b42f8dcbcb014b4f808c6232abd9a83354c995ac.json new file mode 100644 index 00000000000..82209e00b65 --- /dev/null +++ b/core/lib/dal/.sqlx/query-b17c71983da060f08616e001b42f8dcbcb014b4f808c6232abd9a83354c995ac.json @@ -0,0 +1,35 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE node_aggregation_witness_jobs_fri\n SET\n status = 'queued',\n updated_at = NOW(),\n processing_started_at = NOW()\n WHERE\n (\n status = 'in_progress'\n AND processing_started_at <= NOW() - $1::INTERVAL\n AND attempts < $2\n )\n OR (\n status = 'failed'\n AND attempts < $2\n )\n RETURNING\n id,\n status,\n attempts\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "id", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "status", + "type_info": "Text" + }, + { + "ordinal": 2, + "name": "attempts", + "type_info": "Int2" + } + ], + "parameters": { + "Left": [ + "Interval", + "Int2" + ] + }, + "nullable": [ + false, + false, + false + ] + }, + "hash": "b17c71983da060f08616e001b42f8dcbcb014b4f808c6232abd9a83354c995ac" +} diff --git a/core/lib/dal/.sqlx/query-b23ddb16513d69331056b94d466663a9c5ea62ea7c99a77941eb8f05d4454125.json b/core/lib/dal/.sqlx/query-b23ddb16513d69331056b94d466663a9c5ea62ea7c99a77941eb8f05d4454125.json new file mode 100644 index 00000000000..fd8600d59aa --- /dev/null +++ b/core/lib/dal/.sqlx/query-b23ddb16513d69331056b94d466663a9c5ea62ea7c99a77941eb8f05d4454125.json @@ -0,0 +1,18 @@ +{ + "db_name": "PostgreSQL", + "query": "\n INSERT INTO\n leaf_aggregation_witness_jobs_fri (\n l1_batch_number,\n circuit_id,\n closed_form_inputs_blob_url,\n number_of_basic_circuits,\n protocol_version,\n status,\n created_at,\n updated_at\n 
)\n VALUES\n ($1, $2, $3, $4, $5, 'waiting_for_proofs', NOW(), NOW())\n ON CONFLICT (l1_batch_number, circuit_id) DO\n UPDATE\n SET\n updated_at = NOW()\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8", + "Int2", + "Text", + "Int4", + "Int4" + ] + }, + "nullable": [] + }, + "hash": "b23ddb16513d69331056b94d466663a9c5ea62ea7c99a77941eb8f05d4454125" +} diff --git a/core/lib/dal/.sqlx/query-b321c5ba22358cbb1fd9c627f1e7b56187686173327498ac75424593547c19c5.json b/core/lib/dal/.sqlx/query-b321c5ba22358cbb1fd9c627f1e7b56187686173327498ac75424593547c19c5.json new file mode 100644 index 00000000000..bdd22927d38 --- /dev/null +++ b/core/lib/dal/.sqlx/query-b321c5ba22358cbb1fd9c627f1e7b56187686173327498ac75424593547c19c5.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n attempts\n FROM\n scheduler_witness_jobs_fri\n WHERE\n l1_batch_number = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "attempts", + "type_info": "Int2" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false + ] + }, + "hash": "b321c5ba22358cbb1fd9c627f1e7b56187686173327498ac75424593547c19c5" +} diff --git a/core/lib/dal/.sqlx/query-b33e8da69281efe7750043e409d9871731c41cef01da3d6aaf2c53f7b17c47b2.json b/core/lib/dal/.sqlx/query-b33e8da69281efe7750043e409d9871731c41cef01da3d6aaf2c53f7b17c47b2.json new file mode 100644 index 00000000000..1ece8207371 --- /dev/null +++ b/core/lib/dal/.sqlx/query-b33e8da69281efe7750043e409d9871731c41cef01da3d6aaf2c53f7b17c47b2.json @@ -0,0 +1,23 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n value\n FROM\n storage_logs\n WHERE\n storage_logs.hashed_key = $1\n AND storage_logs.miniblock_number <= $2\n ORDER BY\n storage_logs.miniblock_number DESC,\n storage_logs.operation_number DESC\n LIMIT\n 1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "value", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Bytea", + "Int8" + ] + }, + "nullable": [ + false + ] + }, + "hash": "b33e8da69281efe7750043e409d9871731c41cef01da3d6aaf2c53f7b17c47b2" +} diff --git a/core/lib/dal/.sqlx/query-b367ecb1ebee86ec598c4079591f8c12deeca6b8843fe3869cc2b02b30da5de6.json b/core/lib/dal/.sqlx/query-b367ecb1ebee86ec598c4079591f8c12deeca6b8843fe3869cc2b02b30da5de6.json new file mode 100644 index 00000000000..724c01ea6c5 --- /dev/null +++ b/core/lib/dal/.sqlx/query-b367ecb1ebee86ec598c4079591f8c12deeca6b8843fe3869cc2b02b30da5de6.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n attempts\n FROM\n proof_compression_jobs_fri\n WHERE\n l1_batch_number = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "attempts", + "type_info": "Int2" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false + ] + }, + "hash": "b367ecb1ebee86ec598c4079591f8c12deeca6b8843fe3869cc2b02b30da5de6" +} diff --git a/core/lib/dal/.sqlx/query-b3d71dbe14bcd94131b29b64dcb49b6370c211a7fc24ad03a5f0e327f9d18040.json b/core/lib/dal/.sqlx/query-b3d71dbe14bcd94131b29b64dcb49b6370c211a7fc24ad03a5f0e327f9d18040.json new file mode 100644 index 00000000000..0ca284a3f57 --- /dev/null +++ b/core/lib/dal/.sqlx/query-b3d71dbe14bcd94131b29b64dcb49b6370c211a7fc24ad03a5f0e327f9d18040.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n attempts\n FROM\n witness_inputs_fri\n WHERE\n l1_batch_number = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "attempts", + "type_info": "Int2" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + 
}, + "nullable": [ + false + ] + }, + "hash": "b3d71dbe14bcd94131b29b64dcb49b6370c211a7fc24ad03a5f0e327f9d18040" +} diff --git a/core/lib/dal/.sqlx/query-b4304b9afb9f838eee1fe95af5fd964d4bb39b9dcd18fb03bc11ce2fb32b7fb3.json b/core/lib/dal/.sqlx/query-b4304b9afb9f838eee1fe95af5fd964d4bb39b9dcd18fb03bc11ce2fb32b7fb3.json new file mode 100644 index 00000000000..fa6f91edfb3 --- /dev/null +++ b/core/lib/dal/.sqlx/query-b4304b9afb9f838eee1fe95af5fd964d4bb39b9dcd18fb03bc11ce2fb32b7fb3.json @@ -0,0 +1,83 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE scheduler_witness_jobs_fri\n SET\n status = 'in_progress',\n attempts = attempts + 1,\n updated_at = NOW(),\n processing_started_at = NOW(),\n picked_by = $2\n WHERE\n l1_batch_number = (\n SELECT\n l1_batch_number\n FROM\n scheduler_witness_jobs_fri\n WHERE\n status = 'queued'\n AND protocol_version = ANY ($1)\n ORDER BY\n l1_batch_number ASC\n LIMIT\n 1\n FOR UPDATE\n SKIP LOCKED\n )\n RETURNING\n scheduler_witness_jobs_fri.*\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "l1_batch_number", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "scheduler_partial_input_blob_url", + "type_info": "Text" + }, + { + "ordinal": 2, + "name": "status", + "type_info": "Text" + }, + { + "ordinal": 3, + "name": "processing_started_at", + "type_info": "Timestamp" + }, + { + "ordinal": 4, + "name": "time_taken", + "type_info": "Time" + }, + { + "ordinal": 5, + "name": "error", + "type_info": "Text" + }, + { + "ordinal": 6, + "name": "created_at", + "type_info": "Timestamp" + }, + { + "ordinal": 7, + "name": "updated_at", + "type_info": "Timestamp" + }, + { + "ordinal": 8, + "name": "attempts", + "type_info": "Int2" + }, + { + "ordinal": 9, + "name": "protocol_version", + "type_info": "Int4" + }, + { + "ordinal": 10, + "name": "picked_by", + "type_info": "Text" + } + ], + "parameters": { + "Left": [ + "Int4Array", + "Text" + ] + }, + "nullable": [ + false, + false, + false, + true, + true, + true, + false, + false, + false, + true, + true + ] + }, + "hash": "b4304b9afb9f838eee1fe95af5fd964d4bb39b9dcd18fb03bc11ce2fb32b7fb3" +} diff --git a/core/lib/dal/.sqlx/query-b452354c888bfc19b5f4012582061b86b1abd915739533f9982fea9d8e21b9e9.json b/core/lib/dal/.sqlx/query-b452354c888bfc19b5f4012582061b86b1abd915739533f9982fea9d8e21b9e9.json new file mode 100644 index 00000000000..e87b9a2cddd --- /dev/null +++ b/core/lib/dal/.sqlx/query-b452354c888bfc19b5f4012582061b86b1abd915739533f9982fea9d8e21b9e9.json @@ -0,0 +1,14 @@ +{ + "db_name": "PostgreSQL", + "query": "\n DELETE FROM factory_deps\n WHERE\n miniblock_number > $1\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [] + }, + "hash": "b452354c888bfc19b5f4012582061b86b1abd915739533f9982fea9d8e21b9e9" +} diff --git a/core/lib/dal/.sqlx/query-b4794e6a0c2366d5d95ab373c310103263af3ff5cb6c9dc5df59d3cd2a5e56b4.json b/core/lib/dal/.sqlx/query-b4794e6a0c2366d5d95ab373c310103263af3ff5cb6c9dc5df59d3cd2a5e56b4.json new file mode 100644 index 00000000000..14b4115b30e --- /dev/null +++ b/core/lib/dal/.sqlx/query-b4794e6a0c2366d5d95ab373c310103263af3ff5cb6c9dc5df59d3cd2a5e56b4.json @@ -0,0 +1,17 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE gpu_prover_queue_fri\n SET\n instance_status = $1,\n updated_at = NOW()\n WHERE\n instance_host = $2::TEXT::inet\n AND instance_port = $3\n AND zone = $4\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Text", + "Text", + "Int4", + "Text" + ] + }, + "nullable": [] + }, + "hash": 
"b4794e6a0c2366d5d95ab373c310103263af3ff5cb6c9dc5df59d3cd2a5e56b4" +} diff --git a/core/lib/dal/.sqlx/query-b49478150dbc8731c531ef3eddc0c2cfff08e6fef3c3824d20dfdf2d0f73e671.json b/core/lib/dal/.sqlx/query-b49478150dbc8731c531ef3eddc0c2cfff08e6fef3c3824d20dfdf2d0f73e671.json new file mode 100644 index 00000000000..59a4d95f1f2 --- /dev/null +++ b/core/lib/dal/.sqlx/query-b49478150dbc8731c531ef3eddc0c2cfff08e6fef3c3824d20dfdf2d0f73e671.json @@ -0,0 +1,34 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n hash,\n number,\n timestamp\n FROM\n miniblocks\n WHERE\n number > $1\n ORDER BY\n number ASC\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "hash", + "type_info": "Bytea" + }, + { + "ordinal": 1, + "name": "number", + "type_info": "Int8" + }, + { + "ordinal": 2, + "name": "timestamp", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false, + false, + false + ] + }, + "hash": "b49478150dbc8731c531ef3eddc0c2cfff08e6fef3c3824d20dfdf2d0f73e671" +} diff --git a/core/lib/dal/.sqlx/query-b4a0444897b60c7061363a48b2b5386a2fd53492f3df05545edbfb0ec0f059d2.json b/core/lib/dal/.sqlx/query-b4a0444897b60c7061363a48b2b5386a2fd53492f3df05545edbfb0ec0f059d2.json new file mode 100644 index 00000000000..804f6c5ac33 --- /dev/null +++ b/core/lib/dal/.sqlx/query-b4a0444897b60c7061363a48b2b5386a2fd53492f3df05545edbfb0ec0f059d2.json @@ -0,0 +1,15 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE eth_txs\n SET\n confirmed_eth_tx_history_id = $1\n WHERE\n id = $2\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int4", + "Int4" + ] + }, + "nullable": [] + }, + "hash": "b4a0444897b60c7061363a48b2b5386a2fd53492f3df05545edbfb0ec0f059d2" +} diff --git a/core/lib/dal/.sqlx/query-b5fd77f515fe168908cc90e44d0697e36b3c2a997038c30553f7727cdfa17361.json b/core/lib/dal/.sqlx/query-b5fd77f515fe168908cc90e44d0697e36b3c2a997038c30553f7727cdfa17361.json new file mode 100644 index 00000000000..b8ba0465614 --- /dev/null +++ b/core/lib/dal/.sqlx/query-b5fd77f515fe168908cc90e44d0697e36b3c2a997038c30553f7727cdfa17361.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE transactions\n SET\n miniblock_number = $1,\n index_in_block = data_table.index_in_block,\n error = NULLIF(data_table.error, ''),\n in_mempool = FALSE,\n execution_info = execution_info || data_table.new_execution_info,\n refunded_gas = data_table.refunded_gas,\n effective_gas_price = data_table.effective_gas_price,\n updated_at = NOW()\n FROM\n (\n SELECT\n UNNEST($2::bytea[]) AS hash,\n UNNEST($3::INTEGER[]) AS index_in_block,\n UNNEST($4::VARCHAR[]) AS error,\n UNNEST($5::jsonb[]) AS new_execution_info,\n UNNEST($6::BIGINT[]) AS refunded_gas,\n UNNEST($7::NUMERIC[]) AS effective_gas_price\n ) AS data_table\n WHERE\n transactions.hash = data_table.hash\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8", + "ByteaArray", + "Int4Array", + "VarcharArray", + "JsonbArray", + "Int8Array", + "NumericArray" + ] + }, + "nullable": [] + }, + "hash": "b5fd77f515fe168908cc90e44d0697e36b3c2a997038c30553f7727cdfa17361" +} diff --git a/core/lib/dal/.sqlx/query-b678edd9f6ea97b8f086566811f651aa072f030c70a5e6de38843a1d9afdf329.json b/core/lib/dal/.sqlx/query-b678edd9f6ea97b8f086566811f651aa072f030c70a5e6de38843a1d9afdf329.json new file mode 100644 index 00000000000..004d970d81e --- /dev/null +++ b/core/lib/dal/.sqlx/query-b678edd9f6ea97b8f086566811f651aa072f030c70a5e6de38843a1d9afdf329.json @@ -0,0 +1,16 @@ +{ + "db_name": "PostgreSQL", + "query": 
"\n INSERT INTO\n commitments (l1_batch_number, events_queue_commitment, bootloader_initial_content_commitment)\n VALUES\n ($1, $2, $3)\n ON CONFLICT (l1_batch_number) DO NOTHING\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8", + "Bytea", + "Bytea" + ] + }, + "nullable": [] + }, + "hash": "b678edd9f6ea97b8f086566811f651aa072f030c70a5e6de38843a1d9afdf329" +} diff --git a/core/lib/dal/.sqlx/query-b75e3d2fecbf5d85e93848b7a35180abbd76956e073432af8d8500327b74e488.json b/core/lib/dal/.sqlx/query-b75e3d2fecbf5d85e93848b7a35180abbd76956e073432af8d8500327b74e488.json new file mode 100644 index 00000000000..91d7b677959 --- /dev/null +++ b/core/lib/dal/.sqlx/query-b75e3d2fecbf5d85e93848b7a35180abbd76956e073432af8d8500327b74e488.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n VERSION\n FROM\n compiler_versions\n WHERE\n compiler = $1\n ORDER BY\n VERSION\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "version", + "type_info": "Text" + } + ], + "parameters": { + "Left": [ + "Text" + ] + }, + "nullable": [ + false + ] + }, + "hash": "b75e3d2fecbf5d85e93848b7a35180abbd76956e073432af8d8500327b74e488" +} diff --git a/core/lib/dal/.sqlx/query-b7bf6999002dd89dc1224468ca79c9a85e3c24fca1bf87905f7fc68fe2ce3276.json b/core/lib/dal/.sqlx/query-b7bf6999002dd89dc1224468ca79c9a85e3c24fca1bf87905f7fc68fe2ce3276.json new file mode 100644 index 00000000000..cb8de87ca64 --- /dev/null +++ b/core/lib/dal/.sqlx/query-b7bf6999002dd89dc1224468ca79c9a85e3c24fca1bf87905f7fc68fe2ce3276.json @@ -0,0 +1,16 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE transactions\n SET\n l1_batch_number = $3,\n l1_batch_tx_index = data_table.l1_batch_tx_index,\n updated_at = NOW()\n FROM\n (\n SELECT\n UNNEST($1::INT[]) AS l1_batch_tx_index,\n UNNEST($2::bytea[]) AS hash\n ) AS data_table\n WHERE\n transactions.hash = data_table.hash\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int4Array", + "ByteaArray", + "Int8" + ] + }, + "nullable": [] + }, + "hash": "b7bf6999002dd89dc1224468ca79c9a85e3c24fca1bf87905f7fc68fe2ce3276" +} diff --git a/core/lib/dal/.sqlx/query-bb1904a01a3860b5440ae23763d6d5ee4341edadb8a86b459a07427b7e265e98.json b/core/lib/dal/.sqlx/query-bb1904a01a3860b5440ae23763d6d5ee4341edadb8a86b459a07427b7e265e98.json new file mode 100644 index 00000000000..ddc5d583900 --- /dev/null +++ b/core/lib/dal/.sqlx/query-bb1904a01a3860b5440ae23763d6d5ee4341edadb8a86b459a07427b7e265e98.json @@ -0,0 +1,136 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n number,\n l1_tx_count,\n l2_tx_count,\n timestamp,\n is_finished,\n fee_account_address,\n l2_to_l1_logs,\n l2_to_l1_messages,\n bloom,\n priority_ops_onchain_data,\n used_contract_hashes,\n base_fee_per_gas,\n l1_gas_price,\n l2_fair_gas_price,\n bootloader_code_hash,\n default_aa_code_hash,\n protocol_version,\n compressed_state_diffs,\n system_logs,\n pubdata_input\n FROM\n l1_batches\n WHERE\n number = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "number", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "l1_tx_count", + "type_info": "Int4" + }, + { + "ordinal": 2, + "name": "l2_tx_count", + "type_info": "Int4" + }, + { + "ordinal": 3, + "name": "timestamp", + "type_info": "Int8" + }, + { + "ordinal": 4, + "name": "is_finished", + "type_info": "Bool" + }, + { + "ordinal": 5, + "name": "fee_account_address", + "type_info": "Bytea" + }, + { + "ordinal": 6, + "name": "l2_to_l1_logs", + "type_info": "ByteaArray" + }, + { + "ordinal": 7, + "name": 
"l2_to_l1_messages", + "type_info": "ByteaArray" + }, + { + "ordinal": 8, + "name": "bloom", + "type_info": "Bytea" + }, + { + "ordinal": 9, + "name": "priority_ops_onchain_data", + "type_info": "ByteaArray" + }, + { + "ordinal": 10, + "name": "used_contract_hashes", + "type_info": "Jsonb" + }, + { + "ordinal": 11, + "name": "base_fee_per_gas", + "type_info": "Numeric" + }, + { + "ordinal": 12, + "name": "l1_gas_price", + "type_info": "Int8" + }, + { + "ordinal": 13, + "name": "l2_fair_gas_price", + "type_info": "Int8" + }, + { + "ordinal": 14, + "name": "bootloader_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 15, + "name": "default_aa_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 16, + "name": "protocol_version", + "type_info": "Int4" + }, + { + "ordinal": 17, + "name": "compressed_state_diffs", + "type_info": "Bytea" + }, + { + "ordinal": 18, + "name": "system_logs", + "type_info": "ByteaArray" + }, + { + "ordinal": 19, + "name": "pubdata_input", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false, + false, + false, + false, + false, + false, + false, + false, + false, + false, + false, + false, + false, + false, + true, + true, + true, + true, + false, + true + ] + }, + "hash": "bb1904a01a3860b5440ae23763d6d5ee4341edadb8a86b459a07427b7e265e98" +} diff --git a/core/lib/dal/.sqlx/query-bd51c9d93b103292f5acbdb266ba4b4e2af48907fa9321064ddb24ac02ab17cd.json b/core/lib/dal/.sqlx/query-bd51c9d93b103292f5acbdb266ba4b4e2af48907fa9321064ddb24ac02ab17cd.json new file mode 100644 index 00000000000..7f1fc9b176c --- /dev/null +++ b/core/lib/dal/.sqlx/query-bd51c9d93b103292f5acbdb266ba4b4e2af48907fa9321064ddb24ac02ab17cd.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n number\n FROM\n l1_batches\n LEFT JOIN eth_txs_history AS commit_tx ON (l1_batches.eth_commit_tx_id = commit_tx.eth_tx_id)\n WHERE\n commit_tx.confirmed_at IS NOT NULL\n ORDER BY\n number DESC\n LIMIT\n 1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "number", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false + ] + }, + "hash": "bd51c9d93b103292f5acbdb266ba4b4e2af48907fa9321064ddb24ac02ab17cd" +} diff --git a/core/lib/dal/.sqlx/query-bd74435dc6dba3f4173858682ee5661d1df4ec053797d75cfd32272be4f485e7.json b/core/lib/dal/.sqlx/query-bd74435dc6dba3f4173858682ee5661d1df4ec053797d75cfd32272be4f485e7.json new file mode 100644 index 00000000000..e2386003538 --- /dev/null +++ b/core/lib/dal/.sqlx/query-bd74435dc6dba3f4173858682ee5661d1df4ec053797d75cfd32272be4f485e7.json @@ -0,0 +1,54 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n storage_logs.key AS \"key!\",\n storage_logs.value AS \"value!\",\n storage_logs.address AS \"address!\",\n storage_logs.miniblock_number AS \"miniblock_number!\",\n initial_writes.l1_batch_number AS \"l1_batch_number!\",\n initial_writes.index\n FROM\n (\n SELECT\n hashed_key,\n MAX(ARRAY[miniblock_number, operation_number]::INT[]) AS op\n FROM\n storage_logs\n WHERE\n miniblock_number <= $1\n AND hashed_key >= $2\n AND hashed_key < $3\n GROUP BY\n hashed_key\n ORDER BY\n hashed_key\n ) AS keys\n INNER JOIN storage_logs ON keys.hashed_key = storage_logs.hashed_key\n AND storage_logs.miniblock_number = keys.op[1]\n AND storage_logs.operation_number = keys.op[2]\n INNER JOIN initial_writes ON keys.hashed_key = initial_writes.hashed_key;\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "key!", + "type_info": "Bytea" + }, + { + 
"ordinal": 1, + "name": "value!", + "type_info": "Bytea" + }, + { + "ordinal": 2, + "name": "address!", + "type_info": "Bytea" + }, + { + "ordinal": 3, + "name": "miniblock_number!", + "type_info": "Int8" + }, + { + "ordinal": 4, + "name": "l1_batch_number!", + "type_info": "Int8" + }, + { + "ordinal": 5, + "name": "index", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [ + "Int8", + "Bytea", + "Bytea" + ] + }, + "nullable": [ + false, + false, + false, + false, + false, + false + ] + }, + "hash": "bd74435dc6dba3f4173858682ee5661d1df4ec053797d75cfd32272be4f485e7" +} diff --git a/core/lib/dal/.sqlx/query-be16d820c124dba9f4a272f54f0b742349e78e6e4ce3e7c9a0dcf6447eedc6d8.json b/core/lib/dal/.sqlx/query-be16d820c124dba9f4a272f54f0b742349e78e6e4ce3e7c9a0dcf6447eedc6d8.json new file mode 100644 index 00000000000..695be9f2b8c --- /dev/null +++ b/core/lib/dal/.sqlx/query-be16d820c124dba9f4a272f54f0b742349e78e6e4ce3e7c9a0dcf6447eedc6d8.json @@ -0,0 +1,94 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n miniblock_number,\n log_index_in_miniblock,\n log_index_in_tx,\n tx_hash,\n NULL::bytea AS \"block_hash\",\n NULL::BIGINT AS \"l1_batch_number?\",\n shard_id,\n is_service,\n tx_index_in_miniblock,\n tx_index_in_l1_batch,\n sender,\n key,\n value\n FROM\n l2_to_l1_logs\n WHERE\n tx_hash = $1\n ORDER BY\n log_index_in_tx ASC\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "miniblock_number", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "log_index_in_miniblock", + "type_info": "Int4" + }, + { + "ordinal": 2, + "name": "log_index_in_tx", + "type_info": "Int4" + }, + { + "ordinal": 3, + "name": "tx_hash", + "type_info": "Bytea" + }, + { + "ordinal": 4, + "name": "block_hash", + "type_info": "Bytea" + }, + { + "ordinal": 5, + "name": "l1_batch_number?", + "type_info": "Int8" + }, + { + "ordinal": 6, + "name": "shard_id", + "type_info": "Int4" + }, + { + "ordinal": 7, + "name": "is_service", + "type_info": "Bool" + }, + { + "ordinal": 8, + "name": "tx_index_in_miniblock", + "type_info": "Int4" + }, + { + "ordinal": 9, + "name": "tx_index_in_l1_batch", + "type_info": "Int4" + }, + { + "ordinal": 10, + "name": "sender", + "type_info": "Bytea" + }, + { + "ordinal": 11, + "name": "key", + "type_info": "Bytea" + }, + { + "ordinal": 12, + "name": "value", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Bytea" + ] + }, + "nullable": [ + false, + false, + false, + false, + null, + null, + false, + false, + false, + false, + false, + false, + false + ] + }, + "hash": "be16d820c124dba9f4a272f54f0b742349e78e6e4ce3e7c9a0dcf6447eedc6d8" +} diff --git a/core/lib/dal/.sqlx/query-bfb80956a18eabf266f5b5a9d62912d57f8eb2a38bdb7884fc812a2897a3a660.json b/core/lib/dal/.sqlx/query-bfb80956a18eabf266f5b5a9d62912d57f8eb2a38bdb7884fc812a2897a3a660.json new file mode 100644 index 00000000000..550cb5ec743 --- /dev/null +++ b/core/lib/dal/.sqlx/query-bfb80956a18eabf266f5b5a9d62912d57f8eb2a38bdb7884fc812a2897a3a660.json @@ -0,0 +1,35 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE witness_inputs_fri\n SET\n status = 'queued',\n updated_at = NOW(),\n processing_started_at = NOW()\n WHERE\n (\n status = 'in_progress'\n AND processing_started_at <= NOW() - $1::INTERVAL\n AND attempts < $2\n )\n OR (\n status = 'in_gpu_proof'\n AND processing_started_at <= NOW() - $1::INTERVAL\n AND attempts < $2\n )\n OR (\n status = 'failed'\n AND attempts < $2\n )\n RETURNING\n l1_batch_number,\n status,\n attempts\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": 
"l1_batch_number", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "status", + "type_info": "Text" + }, + { + "ordinal": 2, + "name": "attempts", + "type_info": "Int2" + } + ], + "parameters": { + "Left": [ + "Interval", + "Int2" + ] + }, + "nullable": [ + false, + false, + false + ] + }, + "hash": "bfb80956a18eabf266f5b5a9d62912d57f8eb2a38bdb7884fc812a2897a3a660" +} diff --git a/core/lib/dal/.sqlx/query-bfc84bcf0985446b337467dd1da709dbee508ad6d1cae43e477cf1bef8cb4aa9.json b/core/lib/dal/.sqlx/query-bfc84bcf0985446b337467dd1da709dbee508ad6d1cae43e477cf1bef8cb4aa9.json new file mode 100644 index 00000000000..8079d52a703 --- /dev/null +++ b/core/lib/dal/.sqlx/query-bfc84bcf0985446b337467dd1da709dbee508ad6d1cae43e477cf1bef8cb4aa9.json @@ -0,0 +1,23 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT DISTINCT\n hashed_key\n FROM\n storage_logs\n WHERE\n miniblock_number BETWEEN $1 AND $2\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "hashed_key", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Int8", + "Int8" + ] + }, + "nullable": [ + false + ] + }, + "hash": "bfc84bcf0985446b337467dd1da709dbee508ad6d1cae43e477cf1bef8cb4aa9" +} diff --git a/core/lib/dal/.sqlx/query-c038cecd8184e5e8d9f498116bff995b654adfe328cb825a44ad36b4bf9ec8f2.json b/core/lib/dal/.sqlx/query-c038cecd8184e5e8d9f498116bff995b654adfe328cb825a44ad36b4bf9ec8f2.json new file mode 100644 index 00000000000..8161e8c1fc8 --- /dev/null +++ b/core/lib/dal/.sqlx/query-c038cecd8184e5e8d9f498116bff995b654adfe328cb825a44ad36b4bf9ec8f2.json @@ -0,0 +1,94 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n address,\n topic1,\n topic2,\n topic3,\n topic4,\n value,\n NULL::bytea AS \"block_hash\",\n NULL::BIGINT AS \"l1_batch_number?\",\n miniblock_number,\n tx_hash,\n tx_index_in_block,\n event_index_in_block,\n event_index_in_tx\n FROM\n events\n WHERE\n tx_hash = $1\n ORDER BY\n miniblock_number ASC,\n event_index_in_block ASC\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "address", + "type_info": "Bytea" + }, + { + "ordinal": 1, + "name": "topic1", + "type_info": "Bytea" + }, + { + "ordinal": 2, + "name": "topic2", + "type_info": "Bytea" + }, + { + "ordinal": 3, + "name": "topic3", + "type_info": "Bytea" + }, + { + "ordinal": 4, + "name": "topic4", + "type_info": "Bytea" + }, + { + "ordinal": 5, + "name": "value", + "type_info": "Bytea" + }, + { + "ordinal": 6, + "name": "block_hash", + "type_info": "Bytea" + }, + { + "ordinal": 7, + "name": "l1_batch_number?", + "type_info": "Int8" + }, + { + "ordinal": 8, + "name": "miniblock_number", + "type_info": "Int8" + }, + { + "ordinal": 9, + "name": "tx_hash", + "type_info": "Bytea" + }, + { + "ordinal": 10, + "name": "tx_index_in_block", + "type_info": "Int4" + }, + { + "ordinal": 11, + "name": "event_index_in_block", + "type_info": "Int4" + }, + { + "ordinal": 12, + "name": "event_index_in_tx", + "type_info": "Int4" + } + ], + "parameters": { + "Left": [ + "Bytea" + ] + }, + "nullable": [ + false, + false, + false, + false, + false, + false, + null, + null, + false, + false, + false, + false, + false + ] + }, + "hash": "c038cecd8184e5e8d9f498116bff995b654adfe328cb825a44ad36b4bf9ec8f2" +} diff --git a/core/lib/dal/.sqlx/query-c03df29f4661fa47c1412bd82ba379f3b2e9ff1bc6e8e38f473fb4950c8e4b77.json b/core/lib/dal/.sqlx/query-c03df29f4661fa47c1412bd82ba379f3b2e9ff1bc6e8e38f473fb4950c8e4b77.json new file mode 100644 index 00000000000..380a98bfabc --- /dev/null +++ 
b/core/lib/dal/.sqlx/query-c03df29f4661fa47c1412bd82ba379f3b2e9ff1bc6e8e38f473fb4950c8e4b77.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n COUNT(*) AS \"count!\"\n FROM\n contract_verification_requests\n WHERE\n status = 'queued'\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "count!", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + null + ] + }, + "hash": "c03df29f4661fa47c1412bd82ba379f3b2e9ff1bc6e8e38f473fb4950c8e4b77" +} diff --git a/core/lib/dal/.sqlx/query-c10cf20825de4d24300c7ec50d4a653852f7e43670076eb2ebcd49542a870539.json b/core/lib/dal/.sqlx/query-c10cf20825de4d24300c7ec50d4a653852f7e43670076eb2ebcd49542a870539.json new file mode 100644 index 00000000000..b341120ad7f --- /dev/null +++ b/core/lib/dal/.sqlx/query-c10cf20825de4d24300c7ec50d4a653852f7e43670076eb2ebcd49542a870539.json @@ -0,0 +1,14 @@ +{ + "db_name": "PostgreSQL", + "query": "\n INSERT INTO\n scheduler_dependency_tracker_fri (l1_batch_number, status, created_at, updated_at)\n VALUES\n ($1, 'waiting_for_proofs', NOW(), NOW())\n ON CONFLICT (l1_batch_number) DO\n UPDATE\n SET\n updated_at = NOW()\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [] + }, + "hash": "c10cf20825de4d24300c7ec50d4a653852f7e43670076eb2ebcd49542a870539" +} diff --git a/core/lib/dal/.sqlx/query-c139df45a977290d1c2c7987fb9c1d66aeaeb6e2d36fddcf96775f01716a8a74.json b/core/lib/dal/.sqlx/query-c139df45a977290d1c2c7987fb9c1d66aeaeb6e2d36fddcf96775f01716a8a74.json new file mode 100644 index 00000000000..b04fb829dd6 --- /dev/null +++ b/core/lib/dal/.sqlx/query-c139df45a977290d1c2c7987fb9c1d66aeaeb6e2d36fddcf96775f01716a8a74.json @@ -0,0 +1,14 @@ +{ + "db_name": "PostgreSQL", + "query": "\n DELETE FROM storage_logs\n WHERE\n miniblock_number > $1\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [] + }, + "hash": "c139df45a977290d1c2c7987fb9c1d66aeaeb6e2d36fddcf96775f01716a8a74" +} diff --git a/core/lib/dal/.sqlx/query-c14837e92dbb02f2fde7109f524432d865852afe0c60e11a2c1800d30599aa61.json b/core/lib/dal/.sqlx/query-c14837e92dbb02f2fde7109f524432d865852afe0c60e11a2c1800d30599aa61.json new file mode 100644 index 00000000000..8cac9f31ac0 --- /dev/null +++ b/core/lib/dal/.sqlx/query-c14837e92dbb02f2fde7109f524432d865852afe0c60e11a2c1800d30599aa61.json @@ -0,0 +1,14 @@ +{ + "db_name": "PostgreSQL", + "query": "\n DELETE FROM compiler_versions\n WHERE\n compiler = $1\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Text" + ] + }, + "nullable": [] + }, + "hash": "c14837e92dbb02f2fde7109f524432d865852afe0c60e11a2c1800d30599aa61" +} diff --git a/core/lib/dal/.sqlx/query-c192377c08abab9306c5b0844368aa0f8525832cb4075e831c0d4b23c5675b99.json b/core/lib/dal/.sqlx/query-c192377c08abab9306c5b0844368aa0f8525832cb4075e831c0d4b23c5675b99.json new file mode 100644 index 00000000000..58c336bb832 --- /dev/null +++ b/core/lib/dal/.sqlx/query-c192377c08abab9306c5b0844368aa0f8525832cb4075e831c0d4b23c5675b99.json @@ -0,0 +1,24 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n bytecode\n FROM\n (\n SELECT\n *\n FROM\n storage_logs\n WHERE\n storage_logs.hashed_key = $1\n AND storage_logs.miniblock_number <= $2\n ORDER BY\n storage_logs.miniblock_number DESC,\n storage_logs.operation_number DESC\n LIMIT\n 1\n ) t\n JOIN factory_deps ON value = factory_deps.bytecode_hash\n WHERE\n value != $3\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "bytecode", + 
"type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Bytea", + "Int8", + "Bytea" + ] + }, + "nullable": [ + false + ] + }, + "hash": "c192377c08abab9306c5b0844368aa0f8525832cb4075e831c0d4b23c5675b99" +} diff --git a/core/lib/dal/.sqlx/query-c23d5ff919ade5898c6a912780ae899e360650afccb34f5cc301b5cbac4a3d36.json b/core/lib/dal/.sqlx/query-c23d5ff919ade5898c6a912780ae899e360650afccb34f5cc301b5cbac4a3d36.json new file mode 100644 index 00000000000..8922816c7e1 --- /dev/null +++ b/core/lib/dal/.sqlx/query-c23d5ff919ade5898c6a912780ae899e360650afccb34f5cc301b5cbac4a3d36.json @@ -0,0 +1,15 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE prover_jobs_fri\n SET\n status = $1,\n updated_at = NOW()\n WHERE\n id = $2\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Text", + "Int8" + ] + }, + "nullable": [] + }, + "hash": "c23d5ff919ade5898c6a912780ae899e360650afccb34f5cc301b5cbac4a3d36" +} diff --git a/core/lib/dal/.sqlx/query-c2fe6a5476e69c9588eec73baba9d0e2d571533d4d5f683919987b6f8cbb00e0.json b/core/lib/dal/.sqlx/query-c2fe6a5476e69c9588eec73baba9d0e2d571533d4d5f683919987b6f8cbb00e0.json new file mode 100644 index 00000000000..bdabc52d137 --- /dev/null +++ b/core/lib/dal/.sqlx/query-c2fe6a5476e69c9588eec73baba9d0e2d571533d4d5f683919987b6f8cbb00e0.json @@ -0,0 +1,15 @@ +{ + "db_name": "PostgreSQL", + "query": "\n INSERT INTO\n miniblocks_consensus (number, certificate)\n VALUES\n ($1, $2)\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8", + "Jsonb" + ] + }, + "nullable": [] + }, + "hash": "c2fe6a5476e69c9588eec73baba9d0e2d571533d4d5f683919987b6f8cbb00e0" +} diff --git a/core/lib/dal/.sqlx/query-c36abacc705a2244d423599779e38d60d6e93bcb34fd20422e227714fccbf6b7.json b/core/lib/dal/.sqlx/query-c36abacc705a2244d423599779e38d60d6e93bcb34fd20422e227714fccbf6b7.json new file mode 100644 index 00000000000..ea4b266d825 --- /dev/null +++ b/core/lib/dal/.sqlx/query-c36abacc705a2244d423599779e38d60d6e93bcb34fd20422e227714fccbf6b7.json @@ -0,0 +1,34 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n address,\n key,\n value\n FROM\n storage_logs\n WHERE\n miniblock_number BETWEEN (\n SELECT\n MIN(number)\n FROM\n miniblocks\n WHERE\n l1_batch_number = $1\n ) AND (\n SELECT\n MAX(number)\n FROM\n miniblocks\n WHERE\n l1_batch_number = $1\n )\n ORDER BY\n miniblock_number,\n operation_number\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "address", + "type_info": "Bytea" + }, + { + "ordinal": 1, + "name": "key", + "type_info": "Bytea" + }, + { + "ordinal": 2, + "name": "value", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false, + false, + false + ] + }, + "hash": "c36abacc705a2244d423599779e38d60d6e93bcb34fd20422e227714fccbf6b7" +} diff --git a/core/lib/dal/.sqlx/query-c41312e01aa66897552e8be9acc8d43c31ec7441a7f6c5040e120810ebbb72f7.json b/core/lib/dal/.sqlx/query-c41312e01aa66897552e8be9acc8d43c31ec7441a7f6c5040e120810ebbb72f7.json new file mode 100644 index 00000000000..4c24afad4f4 --- /dev/null +++ b/core/lib/dal/.sqlx/query-c41312e01aa66897552e8be9acc8d43c31ec7441a7f6c5040e120810ebbb72f7.json @@ -0,0 +1,21 @@ +{ + "db_name": "PostgreSQL", + "query": "\n INSERT INTO\n prover_jobs_fri (\n l1_batch_number,\n circuit_id,\n circuit_blob_url,\n aggregation_round,\n sequence_number,\n depth,\n is_node_final_proof,\n protocol_version,\n status,\n created_at,\n updated_at\n )\n VALUES\n ($1, $2, $3, $4, $5, $6, $7, $8, 'queued', NOW(), NOW())\n ON CONFLICT (l1_batch_number, 
aggregation_round, circuit_id, depth, sequence_number) DO\n UPDATE\n SET\n updated_at = NOW()\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8", + "Int2", + "Text", + "Int2", + "Int4", + "Int4", + "Bool", + "Int4" + ] + }, + "nullable": [] + }, + "hash": "c41312e01aa66897552e8be9acc8d43c31ec7441a7f6c5040e120810ebbb72f7" +} diff --git a/core/lib/dal/.sqlx/query-c4ea7812861a283448095acbb1164420a25eef488de2b67e91ed39657667bd4a.json b/core/lib/dal/.sqlx/query-c4ea7812861a283448095acbb1164420a25eef488de2b67e91ed39657667bd4a.json new file mode 100644 index 00000000000..6a74606e484 --- /dev/null +++ b/core/lib/dal/.sqlx/query-c4ea7812861a283448095acbb1164420a25eef488de2b67e91ed39657667bd4a.json @@ -0,0 +1,26 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n l1_address,\n l2_address\n FROM\n tokens\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "l1_address", + "type_info": "Bytea" + }, + { + "ordinal": 1, + "name": "l2_address", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false, + false + ] + }, + "hash": "c4ea7812861a283448095acbb1164420a25eef488de2b67e91ed39657667bd4a" +} diff --git a/core/lib/dal/.sqlx/query-c5656667e5610ffb33e7b977ac92b7c4d79cbd404e0267794ec203df0cbb169d.json b/core/lib/dal/.sqlx/query-c5656667e5610ffb33e7b977ac92b7c4d79cbd404e0267794ec203df0cbb169d.json new file mode 100644 index 00000000000..34bff903194 --- /dev/null +++ b/core/lib/dal/.sqlx/query-c5656667e5610ffb33e7b977ac92b7c4d79cbd404e0267794ec203df0cbb169d.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n COALESCE(MAX(number), 0) AS \"number!\"\n FROM\n l1_batches\n WHERE\n eth_prove_tx_id IS NOT NULL\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "number!", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + null + ] + }, + "hash": "c5656667e5610ffb33e7b977ac92b7c4d79cbd404e0267794ec203df0cbb169d" +} diff --git a/core/lib/dal/.sqlx/query-c5d6e1d5d834409bd793c8ce1fb2c212918b31dabebf08a84efdfe1feee85765.json b/core/lib/dal/.sqlx/query-c5d6e1d5d834409bd793c8ce1fb2c212918b31dabebf08a84efdfe1feee85765.json new file mode 100644 index 00000000000..6c16372b82d --- /dev/null +++ b/core/lib/dal/.sqlx/query-c5d6e1d5d834409bd793c8ce1fb2c212918b31dabebf08a84efdfe1feee85765.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE scheduler_dependency_tracker_fri\n SET\n status = 'queuing'\n WHERE\n l1_batch_number IN (\n SELECT\n l1_batch_number\n FROM\n scheduler_dependency_tracker_fri\n WHERE\n status != 'queued'\n AND circuit_1_final_prover_job_id IS NOT NULL\n AND circuit_2_final_prover_job_id IS NOT NULL\n AND circuit_3_final_prover_job_id IS NOT NULL\n AND circuit_4_final_prover_job_id IS NOT NULL\n AND circuit_5_final_prover_job_id IS NOT NULL\n AND circuit_6_final_prover_job_id IS NOT NULL\n AND circuit_7_final_prover_job_id IS NOT NULL\n AND circuit_8_final_prover_job_id IS NOT NULL\n AND circuit_9_final_prover_job_id IS NOT NULL\n AND circuit_10_final_prover_job_id IS NOT NULL\n AND circuit_11_final_prover_job_id IS NOT NULL\n AND circuit_12_final_prover_job_id IS NOT NULL\n AND circuit_13_final_prover_job_id IS NOT NULL\n )\n RETURNING\n l1_batch_number;\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "l1_batch_number", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false + ] + }, + "hash": "c5d6e1d5d834409bd793c8ce1fb2c212918b31dabebf08a84efdfe1feee85765" +} diff --git 
a/core/lib/dal/.sqlx/query-c6d523c6ae857022318350a2f210d7eaeeb4549ed59b58f8d984be2a22a80355.json b/core/lib/dal/.sqlx/query-c6d523c6ae857022318350a2f210d7eaeeb4549ed59b58f8d984be2a22a80355.json new file mode 100644 index 00000000000..ebae1f42fbb --- /dev/null +++ b/core/lib/dal/.sqlx/query-c6d523c6ae857022318350a2f210d7eaeeb4549ed59b58f8d984be2a22a80355.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n MAX(l1_batches.number)\n FROM\n l1_batches\n JOIN eth_txs ON (l1_batches.eth_commit_tx_id = eth_txs.id)\n JOIN eth_txs_history AS commit_tx ON (eth_txs.confirmed_eth_tx_history_id = commit_tx.id)\n WHERE\n commit_tx.confirmed_at IS NOT NULL\n AND eth_prove_tx_id IS NOT NULL\n AND eth_execute_tx_id IS NULL\n AND EXTRACT(\n epoch\n FROM\n commit_tx.confirmed_at\n ) < $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "max", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [ + "Numeric" + ] + }, + "nullable": [ + null + ] + }, + "hash": "c6d523c6ae857022318350a2f210d7eaeeb4549ed59b58f8d984be2a22a80355" +} diff --git a/core/lib/dal/.sqlx/query-c706a49ff54f6b424e24d061fe7ac429aac3c030f7e226a1264243d8cdae038d.json b/core/lib/dal/.sqlx/query-c706a49ff54f6b424e24d061fe7ac429aac3c030f7e226a1264243d8cdae038d.json new file mode 100644 index 00000000000..95ae04bed50 --- /dev/null +++ b/core/lib/dal/.sqlx/query-c706a49ff54f6b424e24d061fe7ac429aac3c030f7e226a1264243d8cdae038d.json @@ -0,0 +1,17 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE proof_compression_jobs_fri\n SET\n status = $1,\n updated_at = NOW(),\n time_taken = $2,\n l1_proof_blob_url = $3\n WHERE\n l1_batch_number = $4\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Text", + "Time", + "Text", + "Int8" + ] + }, + "nullable": [] + }, + "hash": "c706a49ff54f6b424e24d061fe7ac429aac3c030f7e226a1264243d8cdae038d" +} diff --git a/core/lib/dal/.sqlx/query-c809f42a221b18a767e9dd0286503d8bd356f2f9cc249cd8b90caa5a8b5918e3.json b/core/lib/dal/.sqlx/query-c809f42a221b18a767e9dd0286503d8bd356f2f9cc249cd8b90caa5a8b5918e3.json new file mode 100644 index 00000000000..b85f4c542bf --- /dev/null +++ b/core/lib/dal/.sqlx/query-c809f42a221b18a767e9dd0286503d8bd356f2f9cc249cd8b90caa5a8b5918e3.json @@ -0,0 +1,23 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n COUNT(*) AS \"count!\"\n FROM\n (\n SELECT\n *\n FROM\n storage_logs\n WHERE\n storage_logs.hashed_key = $1\n ORDER BY\n storage_logs.miniblock_number DESC,\n storage_logs.operation_number DESC\n LIMIT\n 1\n ) sl\n WHERE\n sl.value != $2\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "count!", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [ + "Bytea", + "Bytea" + ] + }, + "nullable": [ + null + ] + }, + "hash": "c809f42a221b18a767e9dd0286503d8bd356f2f9cc249cd8b90caa5a8b5918e3" +} diff --git a/core/lib/dal/.sqlx/query-c9e05ebc7b61c1f409c330bc110bed26c831730944237b74bed98869c83b3ca5.json b/core/lib/dal/.sqlx/query-c9e05ebc7b61c1f409c330bc110bed26c831730944237b74bed98869c83b3ca5.json new file mode 100644 index 00000000000..433564c6ae0 --- /dev/null +++ b/core/lib/dal/.sqlx/query-c9e05ebc7b61c1f409c330bc110bed26c831730944237b74bed98869c83b3ca5.json @@ -0,0 +1,28 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n (\n SELECT\n l1_batch_number\n FROM\n miniblocks\n WHERE\n number = $1\n ) AS \"block_batch?\",\n COALESCE(\n (\n SELECT\n MAX(number) + 1\n FROM\n l1_batches\n ),\n (\n SELECT\n MAX(l1_batch_number) + 1\n FROM\n snapshot_recovery\n ),\n 0\n ) AS \"pending_batch!\"\n ", + 
"describe": { + "columns": [ + { + "ordinal": 0, + "name": "block_batch?", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "pending_batch!", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + null, + null + ] + }, + "hash": "c9e05ebc7b61c1f409c330bc110bed26c831730944237b74bed98869c83b3ca5" +} diff --git a/core/lib/dal/.sqlx/query-ca9d06141265b8524ee28c55569cb21a635037d89ce24dd3ad58ffaadb59594a.json b/core/lib/dal/.sqlx/query-ca9d06141265b8524ee28c55569cb21a635037d89ce24dd3ad58ffaadb59594a.json new file mode 100644 index 00000000000..ff49f615ab5 --- /dev/null +++ b/core/lib/dal/.sqlx/query-ca9d06141265b8524ee28c55569cb21a635037d89ce24dd3ad58ffaadb59594a.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n l1_batch_number\n FROM\n proof_compression_jobs_fri\n WHERE\n status <> 'successful'\n ORDER BY\n l1_batch_number ASC\n LIMIT\n 1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "l1_batch_number", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false + ] + }, + "hash": "ca9d06141265b8524ee28c55569cb21a635037d89ce24dd3ad58ffaadb59594a" +} diff --git a/core/lib/dal/.sqlx/query-cb98d84fc34af1e4a4c2f427c5bb4afd384063ae394a847b26304dd18d490ab4.json b/core/lib/dal/.sqlx/query-cb98d84fc34af1e4a4c2f427c5bb4afd384063ae394a847b26304dd18d490ab4.json new file mode 100644 index 00000000000..732992595c7 --- /dev/null +++ b/core/lib/dal/.sqlx/query-cb98d84fc34af1e4a4c2f427c5bb4afd384063ae394a847b26304dd18d490ab4.json @@ -0,0 +1,28 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n timestamp,\n hash\n FROM\n l1_batches\n WHERE\n number = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "timestamp", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "hash", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false, + true + ] + }, + "hash": "cb98d84fc34af1e4a4c2f427c5bb4afd384063ae394a847b26304dd18d490ab4" +} diff --git a/core/lib/dal/.sqlx/query-cddf48514aa2aa249d0530d44c741368993009bb4bd90c2ad177ce56317aa04c.json b/core/lib/dal/.sqlx/query-cddf48514aa2aa249d0530d44c741368993009bb4bd90c2ad177ce56317aa04c.json new file mode 100644 index 00000000000..d2087e0a32b --- /dev/null +++ b/core/lib/dal/.sqlx/query-cddf48514aa2aa249d0530d44c741368993009bb4bd90c2ad177ce56317aa04c.json @@ -0,0 +1,257 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n number,\n timestamp,\n is_finished,\n l1_tx_count,\n l2_tx_count,\n fee_account_address,\n bloom,\n priority_ops_onchain_data,\n hash,\n parent_hash,\n commitment,\n compressed_write_logs,\n compressed_contracts,\n eth_prove_tx_id,\n eth_commit_tx_id,\n eth_execute_tx_id,\n merkle_root_hash,\n l2_to_l1_logs,\n l2_to_l1_messages,\n used_contract_hashes,\n compressed_initial_writes,\n compressed_repeated_writes,\n l2_l1_compressed_messages,\n l2_l1_merkle_root,\n l1_gas_price,\n l2_fair_gas_price,\n rollup_last_leaf_index,\n zkporter_is_available,\n bootloader_code_hash,\n default_aa_code_hash,\n base_fee_per_gas,\n aux_data_hash,\n pass_through_data_hash,\n meta_parameters_hash,\n system_logs,\n compressed_state_diffs,\n protocol_version,\n events_queue_commitment,\n bootloader_initial_content_commitment,\n pubdata_input\n FROM\n (\n SELECT\n l1_batches.*,\n ROW_NUMBER() OVER (\n ORDER BY\n number ASC\n ) AS ROW_NUMBER\n FROM\n l1_batches\n WHERE\n eth_commit_tx_id IS NOT NULL\n AND l1_batches.skip_proof = TRUE\n AND l1_batches.number > $1\n ORDER BY\n 
number\n LIMIT\n $2\n ) inn\n LEFT JOIN commitments ON commitments.l1_batch_number = inn.number\n WHERE\n number - ROW_NUMBER = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "number", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "timestamp", + "type_info": "Int8" + }, + { + "ordinal": 2, + "name": "is_finished", + "type_info": "Bool" + }, + { + "ordinal": 3, + "name": "l1_tx_count", + "type_info": "Int4" + }, + { + "ordinal": 4, + "name": "l2_tx_count", + "type_info": "Int4" + }, + { + "ordinal": 5, + "name": "fee_account_address", + "type_info": "Bytea" + }, + { + "ordinal": 6, + "name": "bloom", + "type_info": "Bytea" + }, + { + "ordinal": 7, + "name": "priority_ops_onchain_data", + "type_info": "ByteaArray" + }, + { + "ordinal": 8, + "name": "hash", + "type_info": "Bytea" + }, + { + "ordinal": 9, + "name": "parent_hash", + "type_info": "Bytea" + }, + { + "ordinal": 10, + "name": "commitment", + "type_info": "Bytea" + }, + { + "ordinal": 11, + "name": "compressed_write_logs", + "type_info": "Bytea" + }, + { + "ordinal": 12, + "name": "compressed_contracts", + "type_info": "Bytea" + }, + { + "ordinal": 13, + "name": "eth_prove_tx_id", + "type_info": "Int4" + }, + { + "ordinal": 14, + "name": "eth_commit_tx_id", + "type_info": "Int4" + }, + { + "ordinal": 15, + "name": "eth_execute_tx_id", + "type_info": "Int4" + }, + { + "ordinal": 16, + "name": "merkle_root_hash", + "type_info": "Bytea" + }, + { + "ordinal": 17, + "name": "l2_to_l1_logs", + "type_info": "ByteaArray" + }, + { + "ordinal": 18, + "name": "l2_to_l1_messages", + "type_info": "ByteaArray" + }, + { + "ordinal": 19, + "name": "used_contract_hashes", + "type_info": "Jsonb" + }, + { + "ordinal": 20, + "name": "compressed_initial_writes", + "type_info": "Bytea" + }, + { + "ordinal": 21, + "name": "compressed_repeated_writes", + "type_info": "Bytea" + }, + { + "ordinal": 22, + "name": "l2_l1_compressed_messages", + "type_info": "Bytea" + }, + { + "ordinal": 23, + "name": "l2_l1_merkle_root", + "type_info": "Bytea" + }, + { + "ordinal": 24, + "name": "l1_gas_price", + "type_info": "Int8" + }, + { + "ordinal": 25, + "name": "l2_fair_gas_price", + "type_info": "Int8" + }, + { + "ordinal": 26, + "name": "rollup_last_leaf_index", + "type_info": "Int8" + }, + { + "ordinal": 27, + "name": "zkporter_is_available", + "type_info": "Bool" + }, + { + "ordinal": 28, + "name": "bootloader_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 29, + "name": "default_aa_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 30, + "name": "base_fee_per_gas", + "type_info": "Numeric" + }, + { + "ordinal": 31, + "name": "aux_data_hash", + "type_info": "Bytea" + }, + { + "ordinal": 32, + "name": "pass_through_data_hash", + "type_info": "Bytea" + }, + { + "ordinal": 33, + "name": "meta_parameters_hash", + "type_info": "Bytea" + }, + { + "ordinal": 34, + "name": "system_logs", + "type_info": "ByteaArray" + }, + { + "ordinal": 35, + "name": "compressed_state_diffs", + "type_info": "Bytea" + }, + { + "ordinal": 36, + "name": "protocol_version", + "type_info": "Int4" + }, + { + "ordinal": 37, + "name": "events_queue_commitment", + "type_info": "Bytea" + }, + { + "ordinal": 38, + "name": "bootloader_initial_content_commitment", + "type_info": "Bytea" + }, + { + "ordinal": 39, + "name": "pubdata_input", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Int8", + "Int8" + ] + }, + "nullable": [ + false, + false, + false, + false, + false, + false, + false, + false, + true, + true, + true, + true, + true, + 
true, + true, + true, + true, + false, + false, + false, + true, + true, + true, + true, + false, + false, + true, + true, + true, + true, + false, + true, + true, + true, + false, + true, + true, + true, + true, + true + ] + }, + "hash": "cddf48514aa2aa249d0530d44c741368993009bb4bd90c2ad177ce56317aa04c" +} diff --git a/core/lib/dal/.sqlx/query-ce5779092feb8a3d3e2c5e395783e67f08f2ead5f55bfb6594e50346bf9cf2ef.json b/core/lib/dal/.sqlx/query-ce5779092feb8a3d3e2c5e395783e67f08f2ead5f55bfb6594e50346bf9cf2ef.json new file mode 100644 index 00000000000..6f83fd55064 --- /dev/null +++ b/core/lib/dal/.sqlx/query-ce5779092feb8a3d3e2c5e395783e67f08f2ead5f55bfb6594e50346bf9cf2ef.json @@ -0,0 +1,32 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n MIN(l1_batch_number) AS \"l1_batch_number!\",\n circuit_id,\n aggregation_round\n FROM\n prover_jobs_fri\n WHERE\n status IN ('queued', 'in_gpu_proof', 'in_progress', 'failed')\n GROUP BY\n circuit_id,\n aggregation_round\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "l1_batch_number!", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "circuit_id", + "type_info": "Int2" + }, + { + "ordinal": 2, + "name": "aggregation_round", + "type_info": "Int2" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + null, + false, + false + ] + }, + "hash": "ce5779092feb8a3d3e2c5e395783e67f08f2ead5f55bfb6594e50346bf9cf2ef" +} diff --git a/core/lib/dal/.sqlx/query-cea9fe027a6a0ada827f23b48ac32432295b2f7ee40bf13522a6edbd236f1970.json b/core/lib/dal/.sqlx/query-cea9fe027a6a0ada827f23b48ac32432295b2f7ee40bf13522a6edbd236f1970.json new file mode 100644 index 00000000000..b1eae968a89 --- /dev/null +++ b/core/lib/dal/.sqlx/query-cea9fe027a6a0ada827f23b48ac32432295b2f7ee40bf13522a6edbd236f1970.json @@ -0,0 +1,29 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n u.hashed_key AS \"hashed_key!\",\n (\n SELECT\n value\n FROM\n storage_logs\n WHERE\n hashed_key = u.hashed_key\n AND miniblock_number <= $2\n ORDER BY\n miniblock_number DESC,\n operation_number DESC\n LIMIT\n 1\n ) AS \"value?\"\n FROM\n UNNEST($1::bytea[]) AS u (hashed_key)\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "hashed_key!", + "type_info": "Bytea" + }, + { + "ordinal": 1, + "name": "value?", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "ByteaArray", + "Int8" + ] + }, + "nullable": [ + null, + null + ] + }, + "hash": "cea9fe027a6a0ada827f23b48ac32432295b2f7ee40bf13522a6edbd236f1970" +} diff --git a/core/lib/dal/.sqlx/query-d14b52df2cd9f9e484c60ba00383b438f14b68535111cf2cedd363fc646aac99.json b/core/lib/dal/.sqlx/query-d14b52df2cd9f9e484c60ba00383b438f14b68535111cf2cedd363fc646aac99.json new file mode 100644 index 00000000000..0370a63d65e --- /dev/null +++ b/core/lib/dal/.sqlx/query-d14b52df2cd9f9e484c60ba00383b438f14b68535111cf2cedd363fc646aac99.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n timestamp\n FROM\n l1_batches\n WHERE\n eth_commit_tx_id IS NULL\n AND number > 0\n ORDER BY\n number\n LIMIT\n 1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "timestamp", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false + ] + }, + "hash": "d14b52df2cd9f9e484c60ba00383b438f14b68535111cf2cedd363fc646aac99" +} diff --git a/core/lib/dal/.sqlx/query-d1b261f4057e4113b96eb87c9e20015eeb3ef2643ceda3024504a471b24d1283.json b/core/lib/dal/.sqlx/query-d1b261f4057e4113b96eb87c9e20015eeb3ef2643ceda3024504a471b24d1283.json new file mode 100644 index 
00000000000..fd6ed893c23 --- /dev/null +++ b/core/lib/dal/.sqlx/query-d1b261f4057e4113b96eb87c9e20015eeb3ef2643ceda3024504a471b24d1283.json @@ -0,0 +1,254 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n number,\n timestamp,\n is_finished,\n l1_tx_count,\n l2_tx_count,\n fee_account_address,\n bloom,\n priority_ops_onchain_data,\n hash,\n parent_hash,\n commitment,\n compressed_write_logs,\n compressed_contracts,\n eth_prove_tx_id,\n eth_commit_tx_id,\n eth_execute_tx_id,\n merkle_root_hash,\n l2_to_l1_logs,\n l2_to_l1_messages,\n used_contract_hashes,\n compressed_initial_writes,\n compressed_repeated_writes,\n l2_l1_compressed_messages,\n l2_l1_merkle_root,\n l1_gas_price,\n l2_fair_gas_price,\n rollup_last_leaf_index,\n zkporter_is_available,\n bootloader_code_hash,\n default_aa_code_hash,\n base_fee_per_gas,\n aux_data_hash,\n pass_through_data_hash,\n meta_parameters_hash,\n protocol_version,\n compressed_state_diffs,\n system_logs,\n events_queue_commitment,\n bootloader_initial_content_commitment,\n pubdata_input\n FROM\n l1_batches\n LEFT JOIN commitments ON commitments.l1_batch_number = l1_batches.number\n WHERE\n number = 0\n OR eth_commit_tx_id IS NOT NULL\n AND commitment IS NOT NULL\n ORDER BY\n number DESC\n LIMIT\n 1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "number", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "timestamp", + "type_info": "Int8" + }, + { + "ordinal": 2, + "name": "is_finished", + "type_info": "Bool" + }, + { + "ordinal": 3, + "name": "l1_tx_count", + "type_info": "Int4" + }, + { + "ordinal": 4, + "name": "l2_tx_count", + "type_info": "Int4" + }, + { + "ordinal": 5, + "name": "fee_account_address", + "type_info": "Bytea" + }, + { + "ordinal": 6, + "name": "bloom", + "type_info": "Bytea" + }, + { + "ordinal": 7, + "name": "priority_ops_onchain_data", + "type_info": "ByteaArray" + }, + { + "ordinal": 8, + "name": "hash", + "type_info": "Bytea" + }, + { + "ordinal": 9, + "name": "parent_hash", + "type_info": "Bytea" + }, + { + "ordinal": 10, + "name": "commitment", + "type_info": "Bytea" + }, + { + "ordinal": 11, + "name": "compressed_write_logs", + "type_info": "Bytea" + }, + { + "ordinal": 12, + "name": "compressed_contracts", + "type_info": "Bytea" + }, + { + "ordinal": 13, + "name": "eth_prove_tx_id", + "type_info": "Int4" + }, + { + "ordinal": 14, + "name": "eth_commit_tx_id", + "type_info": "Int4" + }, + { + "ordinal": 15, + "name": "eth_execute_tx_id", + "type_info": "Int4" + }, + { + "ordinal": 16, + "name": "merkle_root_hash", + "type_info": "Bytea" + }, + { + "ordinal": 17, + "name": "l2_to_l1_logs", + "type_info": "ByteaArray" + }, + { + "ordinal": 18, + "name": "l2_to_l1_messages", + "type_info": "ByteaArray" + }, + { + "ordinal": 19, + "name": "used_contract_hashes", + "type_info": "Jsonb" + }, + { + "ordinal": 20, + "name": "compressed_initial_writes", + "type_info": "Bytea" + }, + { + "ordinal": 21, + "name": "compressed_repeated_writes", + "type_info": "Bytea" + }, + { + "ordinal": 22, + "name": "l2_l1_compressed_messages", + "type_info": "Bytea" + }, + { + "ordinal": 23, + "name": "l2_l1_merkle_root", + "type_info": "Bytea" + }, + { + "ordinal": 24, + "name": "l1_gas_price", + "type_info": "Int8" + }, + { + "ordinal": 25, + "name": "l2_fair_gas_price", + "type_info": "Int8" + }, + { + "ordinal": 26, + "name": "rollup_last_leaf_index", + "type_info": "Int8" + }, + { + "ordinal": 27, + "name": "zkporter_is_available", + "type_info": "Bool" + }, + { + "ordinal": 28, + "name": "bootloader_code_hash", + 
"type_info": "Bytea" + }, + { + "ordinal": 29, + "name": "default_aa_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 30, + "name": "base_fee_per_gas", + "type_info": "Numeric" + }, + { + "ordinal": 31, + "name": "aux_data_hash", + "type_info": "Bytea" + }, + { + "ordinal": 32, + "name": "pass_through_data_hash", + "type_info": "Bytea" + }, + { + "ordinal": 33, + "name": "meta_parameters_hash", + "type_info": "Bytea" + }, + { + "ordinal": 34, + "name": "protocol_version", + "type_info": "Int4" + }, + { + "ordinal": 35, + "name": "compressed_state_diffs", + "type_info": "Bytea" + }, + { + "ordinal": 36, + "name": "system_logs", + "type_info": "ByteaArray" + }, + { + "ordinal": 37, + "name": "events_queue_commitment", + "type_info": "Bytea" + }, + { + "ordinal": 38, + "name": "bootloader_initial_content_commitment", + "type_info": "Bytea" + }, + { + "ordinal": 39, + "name": "pubdata_input", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false, + false, + false, + false, + false, + false, + false, + false, + true, + true, + true, + true, + true, + true, + true, + true, + true, + false, + false, + false, + true, + true, + true, + true, + false, + false, + true, + true, + true, + true, + false, + true, + true, + true, + true, + true, + false, + true, + true, + true + ] + }, + "hash": "d1b261f4057e4113b96eb87c9e20015eeb3ef2643ceda3024504a471b24d1283" +} diff --git a/core/lib/dal/.sqlx/query-d3b09cbcddf6238b358d32d57678242aad3e9a47400f6d6837a35f4c54a216b9.json b/core/lib/dal/.sqlx/query-d3b09cbcddf6238b358d32d57678242aad3e9a47400f6d6837a35f4c54a216b9.json new file mode 100644 index 00000000000..8770a9b596e --- /dev/null +++ b/core/lib/dal/.sqlx/query-d3b09cbcddf6238b358d32d57678242aad3e9a47400f6d6837a35f4c54a216b9.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n number\n FROM\n l1_batches\n LEFT JOIN eth_txs_history AS execute_tx ON (l1_batches.eth_execute_tx_id = execute_tx.eth_tx_id)\n WHERE\n execute_tx.confirmed_at IS NOT NULL\n ORDER BY\n number DESC\n LIMIT\n 1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "number", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false + ] + }, + "hash": "d3b09cbcddf6238b358d32d57678242aad3e9a47400f6d6837a35f4c54a216b9" +} diff --git a/core/lib/dal/.sqlx/query-d70cfc158e31dd2d5c942d24f81fd17f833fb15b58b0110c7cc566946db98e76.json b/core/lib/dal/.sqlx/query-d70cfc158e31dd2d5c942d24f81fd17f833fb15b58b0110c7cc566946db98e76.json new file mode 100644 index 00000000000..bff9c151373 --- /dev/null +++ b/core/lib/dal/.sqlx/query-d70cfc158e31dd2d5c942d24f81fd17f833fb15b58b0110c7cc566946db98e76.json @@ -0,0 +1,94 @@ +{ + "db_name": "PostgreSQL", + "query": "\n WITH\n events_select AS (\n SELECT\n address,\n topic1,\n topic2,\n topic3,\n topic4,\n value,\n miniblock_number,\n tx_hash,\n tx_index_in_block,\n event_index_in_block,\n event_index_in_tx\n FROM\n events\n WHERE\n miniblock_number > $1\n ORDER BY\n miniblock_number ASC,\n event_index_in_block ASC\n )\n SELECT\n miniblocks.hash AS \"block_hash?\",\n address AS \"address!\",\n topic1 AS \"topic1!\",\n topic2 AS \"topic2!\",\n topic3 AS \"topic3!\",\n topic4 AS \"topic4!\",\n value AS \"value!\",\n miniblock_number AS \"miniblock_number!\",\n miniblocks.l1_batch_number AS \"l1_batch_number?\",\n tx_hash AS \"tx_hash!\",\n tx_index_in_block AS \"tx_index_in_block!\",\n event_index_in_block AS \"event_index_in_block!\",\n event_index_in_tx AS \"event_index_in_tx!\"\n FROM\n 
events_select\n INNER JOIN miniblocks ON events_select.miniblock_number = miniblocks.number\n ORDER BY\n miniblock_number ASC,\n event_index_in_block ASC\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "block_hash?", + "type_info": "Bytea" + }, + { + "ordinal": 1, + "name": "address!", + "type_info": "Bytea" + }, + { + "ordinal": 2, + "name": "topic1!", + "type_info": "Bytea" + }, + { + "ordinal": 3, + "name": "topic2!", + "type_info": "Bytea" + }, + { + "ordinal": 4, + "name": "topic3!", + "type_info": "Bytea" + }, + { + "ordinal": 5, + "name": "topic4!", + "type_info": "Bytea" + }, + { + "ordinal": 6, + "name": "value!", + "type_info": "Bytea" + }, + { + "ordinal": 7, + "name": "miniblock_number!", + "type_info": "Int8" + }, + { + "ordinal": 8, + "name": "l1_batch_number?", + "type_info": "Int8" + }, + { + "ordinal": 9, + "name": "tx_hash!", + "type_info": "Bytea" + }, + { + "ordinal": 10, + "name": "tx_index_in_block!", + "type_info": "Int4" + }, + { + "ordinal": 11, + "name": "event_index_in_block!", + "type_info": "Int4" + }, + { + "ordinal": 12, + "name": "event_index_in_tx!", + "type_info": "Int4" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false, + false, + false, + false, + false, + false, + false, + false, + true, + false, + false, + false, + false + ] + }, + "hash": "d70cfc158e31dd2d5c942d24f81fd17f833fb15b58b0110c7cc566946db98e76" +} diff --git a/core/lib/dal/.sqlx/query-d712707e47e143c52330ea6e0513d2839f0f928c06b8020eecec38e895f99b42.json b/core/lib/dal/.sqlx/query-d712707e47e143c52330ea6e0513d2839f0f928c06b8020eecec38e895f99b42.json new file mode 100644 index 00000000000..362a4a9b83d --- /dev/null +++ b/core/lib/dal/.sqlx/query-d712707e47e143c52330ea6e0513d2839f0f928c06b8020eecec38e895f99b42.json @@ -0,0 +1,28 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n address,\n key\n FROM\n protective_reads\n WHERE\n l1_batch_number = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "address", + "type_info": "Bytea" + }, + { + "ordinal": 1, + "name": "key", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false, + false + ] + }, + "hash": "d712707e47e143c52330ea6e0513d2839f0f928c06b8020eecec38e895f99b42" +} diff --git a/core/lib/dal/.sqlx/query-d7e8eabd7b43ff62838fbc847e4813d2b2d411bd5faf8306cd48db500532b711.json b/core/lib/dal/.sqlx/query-d7e8eabd7b43ff62838fbc847e4813d2b2d411bd5faf8306cd48db500532b711.json new file mode 100644 index 00000000000..a049d76c24b --- /dev/null +++ b/core/lib/dal/.sqlx/query-d7e8eabd7b43ff62838fbc847e4813d2b2d411bd5faf8306cd48db500532b711.json @@ -0,0 +1,29 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n l1_batch_number,\n status\n FROM\n proof_compression_jobs_fri\n WHERE\n l1_batch_number = (\n SELECT\n MIN(l1_batch_number)\n FROM\n proof_compression_jobs_fri\n WHERE\n status = $1\n OR status = $2\n )\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "l1_batch_number", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "status", + "type_info": "Text" + } + ], + "parameters": { + "Left": [ + "Text", + "Text" + ] + }, + "nullable": [ + false, + false + ] + }, + "hash": "d7e8eabd7b43ff62838fbc847e4813d2b2d411bd5faf8306cd48db500532b711" +} diff --git a/core/lib/dal/.sqlx/query-d7ed82f0d012f72374edb2ebcec33c83477d65a6f8cb2673f67b3148cd95b436.json b/core/lib/dal/.sqlx/query-d7ed82f0d012f72374edb2ebcec33c83477d65a6f8cb2673f67b3148cd95b436.json new file mode 100644 index 00000000000..c415e3d33ce --- 
/dev/null +++ b/core/lib/dal/.sqlx/query-d7ed82f0d012f72374edb2ebcec33c83477d65a6f8cb2673f67b3148cd95b436.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n COUNT(*)\n FROM\n eth_txs\n WHERE\n has_failed = TRUE\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "count", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + null + ] + }, + "hash": "d7ed82f0d012f72374edb2ebcec33c83477d65a6f8cb2673f67b3148cd95b436" +} diff --git a/core/lib/dal/.sqlx/query-d8e0f98a67ffb53a1caa6820f8475da2787332deca5708d1d08730cdbfc73541.json b/core/lib/dal/.sqlx/query-d8e0f98a67ffb53a1caa6820f8475da2787332deca5708d1d08730cdbfc73541.json new file mode 100644 index 00000000000..f0ea745821f --- /dev/null +++ b/core/lib/dal/.sqlx/query-d8e0f98a67ffb53a1caa6820f8475da2787332deca5708d1d08730cdbfc73541.json @@ -0,0 +1,136 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n number,\n l1_tx_count,\n l2_tx_count,\n timestamp,\n is_finished,\n fee_account_address,\n l2_to_l1_logs,\n l2_to_l1_messages,\n bloom,\n priority_ops_onchain_data,\n used_contract_hashes,\n base_fee_per_gas,\n l1_gas_price,\n l2_fair_gas_price,\n bootloader_code_hash,\n default_aa_code_hash,\n protocol_version,\n system_logs,\n compressed_state_diffs,\n pubdata_input\n FROM\n l1_batches\n WHERE\n eth_commit_tx_id = $1\n OR eth_prove_tx_id = $1\n OR eth_execute_tx_id = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "number", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "l1_tx_count", + "type_info": "Int4" + }, + { + "ordinal": 2, + "name": "l2_tx_count", + "type_info": "Int4" + }, + { + "ordinal": 3, + "name": "timestamp", + "type_info": "Int8" + }, + { + "ordinal": 4, + "name": "is_finished", + "type_info": "Bool" + }, + { + "ordinal": 5, + "name": "fee_account_address", + "type_info": "Bytea" + }, + { + "ordinal": 6, + "name": "l2_to_l1_logs", + "type_info": "ByteaArray" + }, + { + "ordinal": 7, + "name": "l2_to_l1_messages", + "type_info": "ByteaArray" + }, + { + "ordinal": 8, + "name": "bloom", + "type_info": "Bytea" + }, + { + "ordinal": 9, + "name": "priority_ops_onchain_data", + "type_info": "ByteaArray" + }, + { + "ordinal": 10, + "name": "used_contract_hashes", + "type_info": "Jsonb" + }, + { + "ordinal": 11, + "name": "base_fee_per_gas", + "type_info": "Numeric" + }, + { + "ordinal": 12, + "name": "l1_gas_price", + "type_info": "Int8" + }, + { + "ordinal": 13, + "name": "l2_fair_gas_price", + "type_info": "Int8" + }, + { + "ordinal": 14, + "name": "bootloader_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 15, + "name": "default_aa_code_hash", + "type_info": "Bytea" + }, + { + "ordinal": 16, + "name": "protocol_version", + "type_info": "Int4" + }, + { + "ordinal": 17, + "name": "system_logs", + "type_info": "ByteaArray" + }, + { + "ordinal": 18, + "name": "compressed_state_diffs", + "type_info": "Bytea" + }, + { + "ordinal": 19, + "name": "pubdata_input", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Int4" + ] + }, + "nullable": [ + false, + false, + false, + false, + false, + false, + false, + false, + false, + false, + false, + false, + false, + false, + true, + true, + true, + false, + true, + true + ] + }, + "hash": "d8e0f98a67ffb53a1caa6820f8475da2787332deca5708d1d08730cdbfc73541" +} diff --git a/core/lib/dal/.sqlx/query-d8e3ee346375e4b6a8b2c73a3827e88abd0f8164c2413dc83c91c29665ca645e.json b/core/lib/dal/.sqlx/query-d8e3ee346375e4b6a8b2c73a3827e88abd0f8164c2413dc83c91c29665ca645e.json new file mode 
100644 index 00000000000..ae7bcea1882 --- /dev/null +++ b/core/lib/dal/.sqlx/query-d8e3ee346375e4b6a8b2c73a3827e88abd0f8164c2413dc83c91c29665ca645e.json @@ -0,0 +1,35 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE leaf_aggregation_witness_jobs_fri\n SET\n status = 'queued',\n updated_at = NOW(),\n processing_started_at = NOW()\n WHERE\n (\n status = 'in_progress'\n AND processing_started_at <= NOW() - $1::INTERVAL\n AND attempts < $2\n )\n OR (\n status = 'failed'\n AND attempts < $2\n )\n RETURNING\n id,\n status,\n attempts\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "id", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "status", + "type_info": "Text" + }, + { + "ordinal": 2, + "name": "attempts", + "type_info": "Int2" + } + ], + "parameters": { + "Left": [ + "Interval", + "Int2" + ] + }, + "nullable": [ + false, + false, + false + ] + }, + "hash": "d8e3ee346375e4b6a8b2c73a3827e88abd0f8164c2413dc83c91c29665ca645e" +} diff --git a/core/lib/dal/.sqlx/query-da51a5220c2b964303292592c34e8ee5e54b170de9da863bbdbc79e3f206640b.json b/core/lib/dal/.sqlx/query-da51a5220c2b964303292592c34e8ee5e54b170de9da863bbdbc79e3f206640b.json new file mode 100644 index 00000000000..9c24f870645 --- /dev/null +++ b/core/lib/dal/.sqlx/query-da51a5220c2b964303292592c34e8ee5e54b170de9da863bbdbc79e3f206640b.json @@ -0,0 +1,14 @@ +{ + "db_name": "PostgreSQL", + "query": "\n DELETE FROM storage\n WHERE\n hashed_key = ANY ($1)\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "ByteaArray" + ] + }, + "nullable": [] + }, + "hash": "da51a5220c2b964303292592c34e8ee5e54b170de9da863bbdbc79e3f206640b" +} diff --git a/core/lib/dal/.sqlx/query-db3e74f0e83ffbf84a6d61e560f2060fbea775dc185f639139fbfd23e4d5f3c6.json b/core/lib/dal/.sqlx/query-db3e74f0e83ffbf84a6d61e560f2060fbea775dc185f639139fbfd23e4d5f3c6.json new file mode 100644 index 00000000000..d9f7527dfa0 --- /dev/null +++ b/core/lib/dal/.sqlx/query-db3e74f0e83ffbf84a6d61e560f2060fbea775dc185f639139fbfd23e4d5f3c6.json @@ -0,0 +1,15 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE node_aggregation_witness_jobs_fri\n SET\n status = 'successful',\n updated_at = NOW(),\n time_taken = $1\n WHERE\n id = $2\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Time", + "Int8" + ] + }, + "nullable": [] + }, + "hash": "db3e74f0e83ffbf84a6d61e560f2060fbea775dc185f639139fbfd23e4d5f3c6" +} diff --git a/core/lib/dal/.sqlx/query-dc16d0fac093a52480b66dfcb5976fb01e6629e8c982c265f2af1d5000090572.json b/core/lib/dal/.sqlx/query-dc16d0fac093a52480b66dfcb5976fb01e6629e8c982c265f2af1d5000090572.json new file mode 100644 index 00000000000..9669622f5cf --- /dev/null +++ b/core/lib/dal/.sqlx/query-dc16d0fac093a52480b66dfcb5976fb01e6629e8c982c265f2af1d5000090572.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "SELECT COUNT(miniblocks.number) FROM miniblocks WHERE l1_batch_number IS NULL", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "count", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + null + ] + }, + "hash": "dc16d0fac093a52480b66dfcb5976fb01e6629e8c982c265f2af1d5000090572" +} diff --git a/core/lib/dal/.sqlx/query-dc481f59aae632ff6f5fa23f5c5c82627a936f7ea9f6c354eca4bea76fac6b10.json b/core/lib/dal/.sqlx/query-dc481f59aae632ff6f5fa23f5c5c82627a936f7ea9f6c354eca4bea76fac6b10.json new file mode 100644 index 00000000000..77263c2a4dd --- /dev/null +++ b/core/lib/dal/.sqlx/query-dc481f59aae632ff6f5fa23f5c5c82627a936f7ea9f6c354eca4bea76fac6b10.json @@ -0,0 
+1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n MAX(number) AS \"number\"\n FROM\n l1_batches\n WHERE\n hash IS NOT NULL\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "number", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + null + ] + }, + "hash": "dc481f59aae632ff6f5fa23f5c5c82627a936f7ea9f6c354eca4bea76fac6b10" +} diff --git a/core/lib/dal/.sqlx/query-dc764e1636c4e958753c1fd54562e2ca92fdfdf01cfd0b11f5ce24f0458a5e48.json b/core/lib/dal/.sqlx/query-dc764e1636c4e958753c1fd54562e2ca92fdfdf01cfd0b11f5ce24f0458a5e48.json new file mode 100644 index 00000000000..b7320d0c3bb --- /dev/null +++ b/core/lib/dal/.sqlx/query-dc764e1636c4e958753c1fd54562e2ca92fdfdf01cfd0b11f5ce24f0458a5e48.json @@ -0,0 +1,26 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE l1_batches\n SET\n hash = $1,\n merkle_root_hash = $2,\n compressed_repeated_writes = $3,\n compressed_initial_writes = $4,\n l2_l1_compressed_messages = $5,\n l2_l1_merkle_root = $6,\n zkporter_is_available = $7,\n parent_hash = $8,\n rollup_last_leaf_index = $9,\n pass_through_data_hash = $10,\n meta_parameters_hash = $11,\n compressed_state_diffs = $12,\n updated_at = NOW()\n WHERE\n number = $13\n AND hash IS NULL\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Bytea", + "Bytea", + "Bytea", + "Bytea", + "Bytea", + "Bytea", + "Bool", + "Bytea", + "Int8", + "Bytea", + "Bytea", + "Bytea", + "Int8" + ] + }, + "nullable": [] + }, + "hash": "dc764e1636c4e958753c1fd54562e2ca92fdfdf01cfd0b11f5ce24f0458a5e48" +} diff --git a/core/lib/dal/.sqlx/query-dd55e46dfa5ba3692d9620088a3550b8db817630d1a9341db4a1f453f12e64fb.json b/core/lib/dal/.sqlx/query-dd55e46dfa5ba3692d9620088a3550b8db817630d1a9341db4a1f453f12e64fb.json new file mode 100644 index 00000000000..70449a85ea4 --- /dev/null +++ b/core/lib/dal/.sqlx/query-dd55e46dfa5ba3692d9620088a3550b8db817630d1a9341db4a1f453f12e64fb.json @@ -0,0 +1,34 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n status,\n error,\n compilation_errors\n FROM\n contract_verification_requests\n WHERE\n id = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "status", + "type_info": "Text" + }, + { + "ordinal": 1, + "name": "error", + "type_info": "Text" + }, + { + "ordinal": 2, + "name": "compilation_errors", + "type_info": "Jsonb" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false, + true, + true + ] + }, + "hash": "dd55e46dfa5ba3692d9620088a3550b8db817630d1a9341db4a1f453f12e64fb" +} diff --git a/core/lib/dal/.sqlx/query-dea22358feed1418430505767d03aa4239d3a8be71b47178b4b8fb11fe898b31.json b/core/lib/dal/.sqlx/query-dea22358feed1418430505767d03aa4239d3a8be71b47178b4b8fb11fe898b31.json new file mode 100644 index 00000000000..ef070554c2f --- /dev/null +++ b/core/lib/dal/.sqlx/query-dea22358feed1418430505767d03aa4239d3a8be71b47178b4b8fb11fe898b31.json @@ -0,0 +1,16 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE l1_batches\n SET\n eth_execute_tx_id = $1,\n updated_at = NOW()\n WHERE\n number BETWEEN $2 AND $3\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int4", + "Int8", + "Int8" + ] + }, + "nullable": [] + }, + "hash": "dea22358feed1418430505767d03aa4239d3a8be71b47178b4b8fb11fe898b31" +} diff --git a/core/lib/dal/.sqlx/query-df00e33809768120e395d8f740770a4e629b2a1cde641e74e4e55bb100df809f.json b/core/lib/dal/.sqlx/query-df00e33809768120e395d8f740770a4e629b2a1cde641e74e4e55bb100df809f.json new file mode 100644 index 00000000000..9ad3099d776 --- 
/dev/null +++ b/core/lib/dal/.sqlx/query-df00e33809768120e395d8f740770a4e629b2a1cde641e74e4e55bb100df809f.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n attempts\n FROM\n prover_jobs_fri\n WHERE\n id = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "attempts", + "type_info": "Int2" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false + ] + }, + "hash": "df00e33809768120e395d8f740770a4e629b2a1cde641e74e4e55bb100df809f" +} diff --git a/core/lib/dal/.sqlx/query-df3b08549a11729fb475341b8f38f8af02aa297d85a2695c5f448ed14b2d7386.json b/core/lib/dal/.sqlx/query-df3b08549a11729fb475341b8f38f8af02aa297d85a2695c5f448ed14b2d7386.json new file mode 100644 index 00000000000..a04523bc07b --- /dev/null +++ b/core/lib/dal/.sqlx/query-df3b08549a11729fb475341b8f38f8af02aa297d85a2695c5f448ed14b2d7386.json @@ -0,0 +1,19 @@ +{ + "db_name": "PostgreSQL", + "query": "\n INSERT INTO\n snapshot_recovery (\n l1_batch_number,\n l1_batch_root_hash,\n miniblock_number,\n miniblock_root_hash,\n last_finished_chunk_id,\n total_chunk_count,\n updated_at,\n created_at\n )\n VALUES\n ($1, $2, $3, $4, $5, $6, NOW(), NOW())\n ON CONFLICT (l1_batch_number) DO\n UPDATE\n SET\n l1_batch_number = excluded.l1_batch_number,\n l1_batch_root_hash = excluded.l1_batch_root_hash,\n miniblock_number = excluded.miniblock_number,\n miniblock_root_hash = excluded.miniblock_root_hash,\n last_finished_chunk_id = excluded.last_finished_chunk_id,\n total_chunk_count = excluded.total_chunk_count,\n updated_at = excluded.updated_at\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8", + "Bytea", + "Int8", + "Bytea", + "Int4", + "Int4" + ] + }, + "nullable": [] + }, + "hash": "df3b08549a11729fb475341b8f38f8af02aa297d85a2695c5f448ed14b2d7386" +} diff --git a/core/lib/dal/.sqlx/query-e073cfdc7a00559994ce04eca15f35d55901fb1e6805f23413ea43e3637540a0.json b/core/lib/dal/.sqlx/query-e073cfdc7a00559994ce04eca15f35d55901fb1e6805f23413ea43e3637540a0.json new file mode 100644 index 00000000000..929e4de8c1b --- /dev/null +++ b/core/lib/dal/.sqlx/query-e073cfdc7a00559994ce04eca15f35d55901fb1e6805f23413ea43e3637540a0.json @@ -0,0 +1,28 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n bytecode,\n bytecode_hash\n FROM\n factory_deps\n WHERE\n bytecode_hash = ANY ($1)\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "bytecode", + "type_info": "Bytea" + }, + { + "ordinal": 1, + "name": "bytecode_hash", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "ByteaArray" + ] + }, + "nullable": [ + false, + false + ] + }, + "hash": "e073cfdc7a00559994ce04eca15f35d55901fb1e6805f23413ea43e3637540a0" +} diff --git a/core/lib/dal/.sqlx/query-e3479d12d9dc97001cf03dc42d9b957e92cd375ec33fe16f855f319ffc0b208e.json b/core/lib/dal/.sqlx/query-e3479d12d9dc97001cf03dc42d9b957e92cd375ec33fe16f855f319ffc0b208e.json new file mode 100644 index 00000000000..32cc15c206d --- /dev/null +++ b/core/lib/dal/.sqlx/query-e3479d12d9dc97001cf03dc42d9b957e92cd375ec33fe16f855f319ffc0b208e.json @@ -0,0 +1,118 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n *\n FROM\n scheduler_dependency_tracker_fri\n WHERE\n l1_batch_number = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "l1_batch_number", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "status", + "type_info": "Text" + }, + { + "ordinal": 2, + "name": "circuit_1_final_prover_job_id", + "type_info": "Int8" + }, + { + "ordinal": 3, + "name": 
"circuit_2_final_prover_job_id", + "type_info": "Int8" + }, + { + "ordinal": 4, + "name": "circuit_3_final_prover_job_id", + "type_info": "Int8" + }, + { + "ordinal": 5, + "name": "circuit_4_final_prover_job_id", + "type_info": "Int8" + }, + { + "ordinal": 6, + "name": "circuit_5_final_prover_job_id", + "type_info": "Int8" + }, + { + "ordinal": 7, + "name": "circuit_6_final_prover_job_id", + "type_info": "Int8" + }, + { + "ordinal": 8, + "name": "circuit_7_final_prover_job_id", + "type_info": "Int8" + }, + { + "ordinal": 9, + "name": "circuit_8_final_prover_job_id", + "type_info": "Int8" + }, + { + "ordinal": 10, + "name": "circuit_9_final_prover_job_id", + "type_info": "Int8" + }, + { + "ordinal": 11, + "name": "circuit_10_final_prover_job_id", + "type_info": "Int8" + }, + { + "ordinal": 12, + "name": "circuit_11_final_prover_job_id", + "type_info": "Int8" + }, + { + "ordinal": 13, + "name": "circuit_12_final_prover_job_id", + "type_info": "Int8" + }, + { + "ordinal": 14, + "name": "circuit_13_final_prover_job_id", + "type_info": "Int8" + }, + { + "ordinal": 15, + "name": "created_at", + "type_info": "Timestamp" + }, + { + "ordinal": 16, + "name": "updated_at", + "type_info": "Timestamp" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false, + false, + true, + true, + true, + true, + true, + true, + true, + true, + true, + true, + true, + true, + true, + false, + false + ] + }, + "hash": "e3479d12d9dc97001cf03dc42d9b957e92cd375ec33fe16f855f319ffc0b208e" +} diff --git a/core/lib/dal/.sqlx/query-e5a90d17b2c25744df4585b53678c7ffd9a04eae27afbdf37a6ba8ff7ac85f3b.json b/core/lib/dal/.sqlx/query-e5a90d17b2c25744df4585b53678c7ffd9a04eae27afbdf37a6ba8ff7ac85f3b.json new file mode 100644 index 00000000000..5606beb7123 --- /dev/null +++ b/core/lib/dal/.sqlx/query-e5a90d17b2c25744df4585b53678c7ffd9a04eae27afbdf37a6ba8ff7ac85f3b.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n serialized_events_queue\n FROM\n events_queue\n WHERE\n l1_batch_number = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "serialized_events_queue", + "type_info": "Jsonb" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false + ] + }, + "hash": "e5a90d17b2c25744df4585b53678c7ffd9a04eae27afbdf37a6ba8ff7ac85f3b" +} diff --git a/core/lib/dal/.sqlx/query-e63cc86a8d527dae2905b2af6a66bc6419ba51514519652e055c769b096015f6.json b/core/lib/dal/.sqlx/query-e63cc86a8d527dae2905b2af6a66bc6419ba51514519652e055c769b096015f6.json new file mode 100644 index 00000000000..3176fb6ac3e --- /dev/null +++ b/core/lib/dal/.sqlx/query-e63cc86a8d527dae2905b2af6a66bc6419ba51514519652e055c769b096015f6.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n DELETE FROM transactions\n WHERE\n miniblock_number IS NULL\n AND received_at < NOW() - $1::INTERVAL\n AND is_priority = FALSE\n AND error IS NULL\n RETURNING\n hash\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "hash", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Interval" + ] + }, + "nullable": [ + false + ] + }, + "hash": "e63cc86a8d527dae2905b2af6a66bc6419ba51514519652e055c769b096015f6" +} diff --git a/core/lib/dal/.sqlx/query-e71c39b93ceba5416ff3d988290cb35d4d07d47f33fe1a5b9e9fe1f0ae09b705.json b/core/lib/dal/.sqlx/query-e71c39b93ceba5416ff3d988290cb35d4d07d47f33fe1a5b9e9fe1f0ae09b705.json new file mode 100644 index 00000000000..b61fbc645a0 --- /dev/null +++ 
b/core/lib/dal/.sqlx/query-e71c39b93ceba5416ff3d988290cb35d4d07d47f33fe1a5b9e9fe1f0ae09b705.json @@ -0,0 +1,28 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n usd_price,\n usd_price_updated_at\n FROM\n tokens\n WHERE\n l2_address = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "usd_price", + "type_info": "Numeric" + }, + { + "ordinal": 1, + "name": "usd_price_updated_at", + "type_info": "Timestamp" + } + ], + "parameters": { + "Left": [ + "Bytea" + ] + }, + "nullable": [ + true, + true + ] + }, + "hash": "e71c39b93ceba5416ff3d988290cb35d4d07d47f33fe1a5b9e9fe1f0ae09b705" +} diff --git a/core/lib/dal/.sqlx/query-e74a34a59e6afda689b0ec9e19071ababa66e4a443fbefbfffca72b7540b075b.json b/core/lib/dal/.sqlx/query-e74a34a59e6afda689b0ec9e19071ababa66e4a443fbefbfffca72b7540b075b.json new file mode 100644 index 00000000000..54ea6b6eb03 --- /dev/null +++ b/core/lib/dal/.sqlx/query-e74a34a59e6afda689b0ec9e19071ababa66e4a443fbefbfffca72b7540b075b.json @@ -0,0 +1,15 @@ +{ + "db_name": "PostgreSQL", + "query": "\n INSERT INTO\n proof_compression_jobs_fri (l1_batch_number, status, created_at, updated_at)\n VALUES\n ($1, $2, NOW(), NOW())\n ON CONFLICT (l1_batch_number) DO NOTHING\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8", + "Text" + ] + }, + "nullable": [] + }, + "hash": "e74a34a59e6afda689b0ec9e19071ababa66e4a443fbefbfffca72b7540b075b" +} diff --git a/core/lib/dal/.sqlx/query-e76217231b4d896118e9630de9485b19e1294b3aa6e084d2051bb532408672be.json b/core/lib/dal/.sqlx/query-e76217231b4d896118e9630de9485b19e1294b3aa6e084d2051bb532408672be.json new file mode 100644 index 00000000000..831a67cbee9 --- /dev/null +++ b/core/lib/dal/.sqlx/query-e76217231b4d896118e9630de9485b19e1294b3aa6e084d2051bb532408672be.json @@ -0,0 +1,12 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE transactions\n SET\n in_mempool = FALSE\n WHERE\n in_mempool = TRUE\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [] + }, + "nullable": [] + }, + "hash": "e76217231b4d896118e9630de9485b19e1294b3aa6e084d2051bb532408672be" +} diff --git a/core/lib/dal/.sqlx/query-e9adf5b5a1ab84c20a514a7775f91a9984685eaaaa0a8b223410d560a15a3034.json b/core/lib/dal/.sqlx/query-e9adf5b5a1ab84c20a514a7775f91a9984685eaaaa0a8b223410d560a15a3034.json new file mode 100644 index 00000000000..975c061632a --- /dev/null +++ b/core/lib/dal/.sqlx/query-e9adf5b5a1ab84c20a514a7775f91a9984685eaaaa0a8b223410d560a15a3034.json @@ -0,0 +1,61 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE prover_jobs_fri\n SET\n status = 'in_progress',\n attempts = attempts + 1,\n processing_started_at = NOW(),\n updated_at = NOW(),\n picked_by = $4\n WHERE\n id = (\n SELECT\n pj.id\n FROM\n (\n SELECT\n *\n FROM\n UNNEST($1::SMALLINT[], $2::SMALLINT[])\n ) AS tuple (circuit_id, ROUND)\n JOIN LATERAL (\n SELECT\n *\n FROM\n prover_jobs_fri AS pj\n WHERE\n pj.status = 'queued'\n AND pj.protocol_version = ANY ($3)\n AND pj.circuit_id = tuple.circuit_id\n AND pj.aggregation_round = tuple.round\n ORDER BY\n pj.l1_batch_number ASC,\n pj.id ASC\n LIMIT\n 1\n ) AS pj ON TRUE\n ORDER BY\n pj.l1_batch_number ASC,\n pj.aggregation_round DESC,\n pj.id ASC\n LIMIT\n 1\n FOR UPDATE\n SKIP LOCKED\n )\n RETURNING\n prover_jobs_fri.id,\n prover_jobs_fri.l1_batch_number,\n prover_jobs_fri.circuit_id,\n prover_jobs_fri.aggregation_round,\n prover_jobs_fri.sequence_number,\n prover_jobs_fri.depth,\n prover_jobs_fri.is_node_final_proof\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "id", + "type_info": 
"Int8" + }, + { + "ordinal": 1, + "name": "l1_batch_number", + "type_info": "Int8" + }, + { + "ordinal": 2, + "name": "circuit_id", + "type_info": "Int2" + }, + { + "ordinal": 3, + "name": "aggregation_round", + "type_info": "Int2" + }, + { + "ordinal": 4, + "name": "sequence_number", + "type_info": "Int4" + }, + { + "ordinal": 5, + "name": "depth", + "type_info": "Int4" + }, + { + "ordinal": 6, + "name": "is_node_final_proof", + "type_info": "Bool" + } + ], + "parameters": { + "Left": [ + "Int2Array", + "Int2Array", + "Int4Array", + "Text" + ] + }, + "nullable": [ + false, + false, + false, + false, + false, + false, + false + ] + }, + "hash": "e9adf5b5a1ab84c20a514a7775f91a9984685eaaaa0a8b223410d560a15a3034" +} diff --git a/core/lib/dal/.sqlx/query-e9ca863d6e77edd39a9fc55700a6686e655206601854799139c22c017a214744.json b/core/lib/dal/.sqlx/query-e9ca863d6e77edd39a9fc55700a6686e655206601854799139c22c017a214744.json new file mode 100644 index 00000000000..0bdcbb99add --- /dev/null +++ b/core/lib/dal/.sqlx/query-e9ca863d6e77edd39a9fc55700a6686e655206601854799139c22c017a214744.json @@ -0,0 +1,19 @@ +{ + "db_name": "PostgreSQL", + "query": "\n INSERT INTO\n node_aggregation_witness_jobs_fri (\n l1_batch_number,\n circuit_id,\n depth,\n aggregations_url,\n number_of_dependent_jobs,\n protocol_version,\n status,\n created_at,\n updated_at\n )\n VALUES\n ($1, $2, $3, $4, $5, $6, 'waiting_for_proofs', NOW(), NOW())\n ON CONFLICT (l1_batch_number, circuit_id, depth) DO\n UPDATE\n SET\n updated_at = NOW()\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8", + "Int2", + "Int4", + "Text", + "Int4", + "Int4" + ] + }, + "nullable": [] + }, + "hash": "e9ca863d6e77edd39a9fc55700a6686e655206601854799139c22c017a214744" +} diff --git a/core/lib/dal/.sqlx/query-ea904aa930d602d33b6fbc1bf1178a8a0ec739f4ddec8ffeb3a87253aeb18d30.json b/core/lib/dal/.sqlx/query-ea904aa930d602d33b6fbc1bf1178a8a0ec739f4ddec8ffeb3a87253aeb18d30.json new file mode 100644 index 00000000000..718b2a8f687 --- /dev/null +++ b/core/lib/dal/.sqlx/query-ea904aa930d602d33b6fbc1bf1178a8a0ec739f4ddec8ffeb3a87253aeb18d30.json @@ -0,0 +1,14 @@ +{ + "db_name": "PostgreSQL", + "query": "\n DELETE FROM miniblocks\n WHERE\n number > $1\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [] + }, + "hash": "ea904aa930d602d33b6fbc1bf1178a8a0ec739f4ddec8ffeb3a87253aeb18d30" +} diff --git a/core/lib/dal/.sqlx/query-ec04b89218111a5dc8d5ade506ac3465e2211ef3013386feb12d4cc04e0eade9.json b/core/lib/dal/.sqlx/query-ec04b89218111a5dc8d5ade506ac3465e2211ef3013386feb12d4cc04e0eade9.json new file mode 100644 index 00000000000..7c0264b5646 --- /dev/null +++ b/core/lib/dal/.sqlx/query-ec04b89218111a5dc8d5ade506ac3465e2211ef3013386feb12d4cc04e0eade9.json @@ -0,0 +1,60 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE prover_jobs_fri\n SET\n status = 'successful',\n updated_at = NOW(),\n time_taken = $1,\n proof_blob_url = $2\n WHERE\n id = $3\n RETURNING\n prover_jobs_fri.id,\n prover_jobs_fri.l1_batch_number,\n prover_jobs_fri.circuit_id,\n prover_jobs_fri.aggregation_round,\n prover_jobs_fri.sequence_number,\n prover_jobs_fri.depth,\n prover_jobs_fri.is_node_final_proof\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "id", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "l1_batch_number", + "type_info": "Int8" + }, + { + "ordinal": 2, + "name": "circuit_id", + "type_info": "Int2" + }, + { + "ordinal": 3, + "name": "aggregation_round", + "type_info": "Int2" + }, + 
{ + "ordinal": 4, + "name": "sequence_number", + "type_info": "Int4" + }, + { + "ordinal": 5, + "name": "depth", + "type_info": "Int4" + }, + { + "ordinal": 6, + "name": "is_node_final_proof", + "type_info": "Bool" + } + ], + "parameters": { + "Left": [ + "Time", + "Text", + "Int8" + ] + }, + "nullable": [ + false, + false, + false, + false, + false, + false, + false + ] + }, + "hash": "ec04b89218111a5dc8d5ade506ac3465e2211ef3013386feb12d4cc04e0eade9" +} diff --git a/core/lib/dal/.sqlx/query-edc61e1285bf6d3837acc67af4f15aaade450980719933089824eb8c494d64a4.json b/core/lib/dal/.sqlx/query-edc61e1285bf6d3837acc67af4f15aaade450980719933089824eb8c494d64a4.json new file mode 100644 index 00000000000..2c7d7f1da5f --- /dev/null +++ b/core/lib/dal/.sqlx/query-edc61e1285bf6d3837acc67af4f15aaade450980719933089824eb8c494d64a4.json @@ -0,0 +1,15 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE witness_inputs_fri\n SET\n status = 'successful',\n updated_at = NOW(),\n time_taken = $1\n WHERE\n l1_batch_number = $2\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Time", + "Int8" + ] + }, + "nullable": [] + }, + "hash": "edc61e1285bf6d3837acc67af4f15aaade450980719933089824eb8c494d64a4" +} diff --git a/core/lib/dal/.sqlx/query-ee17d2b3edfe705d14811e3938d4312b2b780563a9fde48bae5e51650475670f.json b/core/lib/dal/.sqlx/query-ee17d2b3edfe705d14811e3938d4312b2b780563a9fde48bae5e51650475670f.json new file mode 100644 index 00000000000..5732126a7ff --- /dev/null +++ b/core/lib/dal/.sqlx/query-ee17d2b3edfe705d14811e3938d4312b2b780563a9fde48bae5e51650475670f.json @@ -0,0 +1,82 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n *\n FROM\n eth_txs_history\n WHERE\n eth_tx_id = $1\n ORDER BY\n created_at DESC\n LIMIT\n 1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "id", + "type_info": "Int4" + }, + { + "ordinal": 1, + "name": "eth_tx_id", + "type_info": "Int4" + }, + { + "ordinal": 2, + "name": "tx_hash", + "type_info": "Text" + }, + { + "ordinal": 3, + "name": "created_at", + "type_info": "Timestamp" + }, + { + "ordinal": 4, + "name": "updated_at", + "type_info": "Timestamp" + }, + { + "ordinal": 5, + "name": "base_fee_per_gas", + "type_info": "Int8" + }, + { + "ordinal": 6, + "name": "priority_fee_per_gas", + "type_info": "Int8" + }, + { + "ordinal": 7, + "name": "confirmed_at", + "type_info": "Timestamp" + }, + { + "ordinal": 8, + "name": "signed_raw_tx", + "type_info": "Bytea" + }, + { + "ordinal": 9, + "name": "sent_at_block", + "type_info": "Int4" + }, + { + "ordinal": 10, + "name": "sent_at", + "type_info": "Timestamp" + } + ], + "parameters": { + "Left": [ + "Int4" + ] + }, + "nullable": [ + false, + false, + false, + false, + false, + false, + false, + true, + true, + true, + true + ] + }, + "hash": "ee17d2b3edfe705d14811e3938d4312b2b780563a9fde48bae5e51650475670f" +} diff --git a/core/lib/dal/.sqlx/query-ef331469f78c6ff68a254a15b55d056cc9bae25bc070c5de8424f88fab20e5ea.json b/core/lib/dal/.sqlx/query-ef331469f78c6ff68a254a15b55d056cc9bae25bc070c5de8424f88fab20e5ea.json new file mode 100644 index 00000000000..60d4a49d628 --- /dev/null +++ b/core/lib/dal/.sqlx/query-ef331469f78c6ff68a254a15b55d056cc9bae25bc070c5de8424f88fab20e5ea.json @@ -0,0 +1,28 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n l1_batch_number,\n l1_batch_tx_index\n FROM\n transactions\n WHERE\n hash = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "l1_batch_number", + "type_info": "Int8" + }, + { + "ordinal": 1, + "name": "l1_batch_tx_index", + "type_info": 
"Int4" + } + ], + "parameters": { + "Left": [ + "Bytea" + ] + }, + "nullable": [ + true, + true + ] + }, + "hash": "ef331469f78c6ff68a254a15b55d056cc9bae25bc070c5de8424f88fab20e5ea" +} diff --git a/core/lib/dal/.sqlx/query-ef687be83e496d6647e4dfef9eabae63443c51deb818dd0affd1a0949b161737.json b/core/lib/dal/.sqlx/query-ef687be83e496d6647e4dfef9eabae63443c51deb818dd0affd1a0949b161737.json new file mode 100644 index 00000000000..79b20fabb28 --- /dev/null +++ b/core/lib/dal/.sqlx/query-ef687be83e496d6647e4dfef9eabae63443c51deb818dd0affd1a0949b161737.json @@ -0,0 +1,16 @@ +{ + "db_name": "PostgreSQL", + "query": "\n INSERT INTO\n proof_compression_jobs_fri (l1_batch_number, fri_proof_blob_url, status, created_at, updated_at)\n VALUES\n ($1, $2, $3, NOW(), NOW())\n ON CONFLICT (l1_batch_number) DO NOTHING\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8", + "Text", + "Text" + ] + }, + "nullable": [] + }, + "hash": "ef687be83e496d6647e4dfef9eabae63443c51deb818dd0affd1a0949b161737" +} diff --git a/core/lib/dal/.sqlx/query-f012d0922265269746396dac8f25ff66f2c3b2b83d45360818a8782e56aa3d66.json b/core/lib/dal/.sqlx/query-f012d0922265269746396dac8f25ff66f2c3b2b83d45360818a8782e56aa3d66.json new file mode 100644 index 00000000000..9815b5d3895 --- /dev/null +++ b/core/lib/dal/.sqlx/query-f012d0922265269746396dac8f25ff66f2c3b2b83d45360818a8782e56aa3d66.json @@ -0,0 +1,36 @@ +{ + "db_name": "PostgreSQL", + "query": "\n WITH\n sl AS (\n SELECT\n (\n SELECT\n ARRAY[hashed_key, value] AS kv\n FROM\n storage_logs\n WHERE\n storage_logs.miniblock_number = $1\n AND storage_logs.hashed_key >= u.start_key\n AND storage_logs.hashed_key <= u.end_key\n ORDER BY\n storage_logs.hashed_key\n LIMIT\n 1\n )\n FROM\n UNNEST($2::bytea[], $3::bytea[]) AS u (start_key, end_key)\n )\n SELECT\n sl.kv[1] AS \"hashed_key?\",\n sl.kv[2] AS \"value?\",\n initial_writes.index\n FROM\n sl\n LEFT OUTER JOIN initial_writes ON initial_writes.hashed_key = sl.kv[1]\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "hashed_key?", + "type_info": "Bytea" + }, + { + "ordinal": 1, + "name": "value?", + "type_info": "Bytea" + }, + { + "ordinal": 2, + "name": "index", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [ + "Int8", + "ByteaArray", + "ByteaArray" + ] + }, + "nullable": [ + null, + null, + true + ] + }, + "hash": "f012d0922265269746396dac8f25ff66f2c3b2b83d45360818a8782e56aa3d66" +} diff --git a/core/lib/dal/.sqlx/query-f1a90090c192d68367e799188356efe8d41759bbdcdd6d39db93208f2664f03a.json b/core/lib/dal/.sqlx/query-f1a90090c192d68367e799188356efe8d41759bbdcdd6d39db93208f2664f03a.json new file mode 100644 index 00000000000..616173355a9 --- /dev/null +++ b/core/lib/dal/.sqlx/query-f1a90090c192d68367e799188356efe8d41759bbdcdd6d39db93208f2664f03a.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n INDEX\n FROM\n initial_writes\n WHERE\n hashed_key = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "index", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [ + "Bytea" + ] + }, + "nullable": [ + false + ] + }, + "hash": "f1a90090c192d68367e799188356efe8d41759bbdcdd6d39db93208f2664f03a" +} diff --git a/core/lib/dal/.sqlx/query-f22c5d136fe68bbfcee60beb304cfdc050b85e6d773b13f9699f15c335d42593.json b/core/lib/dal/.sqlx/query-f22c5d136fe68bbfcee60beb304cfdc050b85e6d773b13f9699f15c335d42593.json new file mode 100644 index 00000000000..7ffda2c8a32 --- /dev/null +++ 
b/core/lib/dal/.sqlx/query-f22c5d136fe68bbfcee60beb304cfdc050b85e6d773b13f9699f15c335d42593.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n l1_address\n FROM\n tokens\n WHERE\n market_volume > $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "l1_address", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Numeric" + ] + }, + "nullable": [ + false + ] + }, + "hash": "f22c5d136fe68bbfcee60beb304cfdc050b85e6d773b13f9699f15c335d42593" +} diff --git a/core/lib/dal/.sqlx/query-f39372e37160df4897f62a800694867ed765dcb9dc60754df9df8700d4244bfb.json b/core/lib/dal/.sqlx/query-f39372e37160df4897f62a800694867ed765dcb9dc60754df9df8700d4244bfb.json new file mode 100644 index 00000000000..9495f8f7c82 --- /dev/null +++ b/core/lib/dal/.sqlx/query-f39372e37160df4897f62a800694867ed765dcb9dc60754df9df8700d4244bfb.json @@ -0,0 +1,44 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n l1_address,\n l2_address,\n NAME,\n symbol,\n decimals\n FROM\n tokens\n WHERE\n well_known = TRUE\n ORDER BY\n symbol\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "l1_address", + "type_info": "Bytea" + }, + { + "ordinal": 1, + "name": "l2_address", + "type_info": "Bytea" + }, + { + "ordinal": 2, + "name": "name", + "type_info": "Varchar" + }, + { + "ordinal": 3, + "name": "symbol", + "type_info": "Varchar" + }, + { + "ordinal": 4, + "name": "decimals", + "type_info": "Int4" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false, + false, + false, + false, + false + ] + }, + "hash": "f39372e37160df4897f62a800694867ed765dcb9dc60754df9df8700d4244bfb" +} diff --git a/core/lib/dal/.sqlx/query-f4362a61ab05af3d71a3232d2f017db60405a887f9f7fa0ca60aa7fc879ce630.json b/core/lib/dal/.sqlx/query-f4362a61ab05af3d71a3232d2f017db60405a887f9f7fa0ca60aa7fc879ce630.json new file mode 100644 index 00000000000..59c28852a03 --- /dev/null +++ b/core/lib/dal/.sqlx/query-f4362a61ab05af3d71a3232d2f017db60405a887f9f7fa0ca60aa7fc879ce630.json @@ -0,0 +1,16 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE proof_compression_jobs_fri\n SET\n status = $1,\n error = $2,\n updated_at = NOW()\n WHERE\n l1_batch_number = $3\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Text", + "Text", + "Int8" + ] + }, + "nullable": [] + }, + "hash": "f4362a61ab05af3d71a3232d2f017db60405a887f9f7fa0ca60aa7fc879ce630" +} diff --git a/core/lib/dal/.sqlx/query-f63586d59264eab7388ad1de823227ecaa45d76d1ba260074898fe57c059a15a.json b/core/lib/dal/.sqlx/query-f63586d59264eab7388ad1de823227ecaa45d76d1ba260074898fe57c059a15a.json new file mode 100644 index 00000000000..d62e213ef57 --- /dev/null +++ b/core/lib/dal/.sqlx/query-f63586d59264eab7388ad1de823227ecaa45d76d1ba260074898fe57c059a15a.json @@ -0,0 +1,232 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n *\n FROM\n transactions\n WHERE\n l1_batch_number = $1\n ORDER BY\n miniblock_number,\n index_in_block\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "hash", + "type_info": "Bytea" + }, + { + "ordinal": 1, + "name": "is_priority", + "type_info": "Bool" + }, + { + "ordinal": 2, + "name": "full_fee", + "type_info": "Numeric" + }, + { + "ordinal": 3, + "name": "layer_2_tip_fee", + "type_info": "Numeric" + }, + { + "ordinal": 4, + "name": "initiator_address", + "type_info": "Bytea" + }, + { + "ordinal": 5, + "name": "nonce", + "type_info": "Int8" + }, + { + "ordinal": 6, + "name": "signature", + "type_info": "Bytea" + }, + { + "ordinal": 7, + "name": "input", + "type_info": "Bytea" + 
}, + { + "ordinal": 8, + "name": "data", + "type_info": "Jsonb" + }, + { + "ordinal": 9, + "name": "received_at", + "type_info": "Timestamp" + }, + { + "ordinal": 10, + "name": "priority_op_id", + "type_info": "Int8" + }, + { + "ordinal": 11, + "name": "l1_batch_number", + "type_info": "Int8" + }, + { + "ordinal": 12, + "name": "index_in_block", + "type_info": "Int4" + }, + { + "ordinal": 13, + "name": "error", + "type_info": "Varchar" + }, + { + "ordinal": 14, + "name": "gas_limit", + "type_info": "Numeric" + }, + { + "ordinal": 15, + "name": "gas_per_storage_limit", + "type_info": "Numeric" + }, + { + "ordinal": 16, + "name": "gas_per_pubdata_limit", + "type_info": "Numeric" + }, + { + "ordinal": 17, + "name": "tx_format", + "type_info": "Int4" + }, + { + "ordinal": 18, + "name": "created_at", + "type_info": "Timestamp" + }, + { + "ordinal": 19, + "name": "updated_at", + "type_info": "Timestamp" + }, + { + "ordinal": 20, + "name": "execution_info", + "type_info": "Jsonb" + }, + { + "ordinal": 21, + "name": "contract_address", + "type_info": "Bytea" + }, + { + "ordinal": 22, + "name": "in_mempool", + "type_info": "Bool" + }, + { + "ordinal": 23, + "name": "l1_block_number", + "type_info": "Int4" + }, + { + "ordinal": 24, + "name": "value", + "type_info": "Numeric" + }, + { + "ordinal": 25, + "name": "paymaster", + "type_info": "Bytea" + }, + { + "ordinal": 26, + "name": "paymaster_input", + "type_info": "Bytea" + }, + { + "ordinal": 27, + "name": "max_fee_per_gas", + "type_info": "Numeric" + }, + { + "ordinal": 28, + "name": "max_priority_fee_per_gas", + "type_info": "Numeric" + }, + { + "ordinal": 29, + "name": "effective_gas_price", + "type_info": "Numeric" + }, + { + "ordinal": 30, + "name": "miniblock_number", + "type_info": "Int8" + }, + { + "ordinal": 31, + "name": "l1_batch_tx_index", + "type_info": "Int4" + }, + { + "ordinal": 32, + "name": "refunded_gas", + "type_info": "Int8" + }, + { + "ordinal": 33, + "name": "l1_tx_mint", + "type_info": "Numeric" + }, + { + "ordinal": 34, + "name": "l1_tx_refund_recipient", + "type_info": "Bytea" + }, + { + "ordinal": 35, + "name": "upgrade_id", + "type_info": "Int4" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false, + false, + true, + true, + false, + true, + true, + true, + false, + false, + true, + true, + true, + true, + true, + true, + true, + true, + false, + false, + false, + true, + false, + true, + false, + false, + false, + true, + true, + true, + true, + true, + false, + true, + true, + true + ] + }, + "hash": "f63586d59264eab7388ad1de823227ecaa45d76d1ba260074898fe57c059a15a" +} diff --git a/core/lib/dal/.sqlx/query-f717ca5d0890759496739a678955e6f8b7f88a0894a7f9e27fc26f93997d37c7.json b/core/lib/dal/.sqlx/query-f717ca5d0890759496739a678955e6f8b7f88a0894a7f9e27fc26f93997d37c7.json new file mode 100644 index 00000000000..e6e12748d0d --- /dev/null +++ b/core/lib/dal/.sqlx/query-f717ca5d0890759496739a678955e6f8b7f88a0894a7f9e27fc26f93997d37c7.json @@ -0,0 +1,24 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE proof_compression_jobs_fri\n SET\n status = $1,\n attempts = attempts + 1,\n updated_at = NOW(),\n processing_started_at = NOW(),\n picked_by = $3\n WHERE\n l1_batch_number = (\n SELECT\n l1_batch_number\n FROM\n proof_compression_jobs_fri\n WHERE\n status = $2\n ORDER BY\n l1_batch_number ASC\n LIMIT\n 1\n FOR UPDATE\n SKIP LOCKED\n )\n RETURNING\n proof_compression_jobs_fri.l1_batch_number\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "l1_batch_number", + "type_info": 
"Int8" + } + ], + "parameters": { + "Left": [ + "Text", + "Text", + "Text" + ] + }, + "nullable": [ + false + ] + }, + "hash": "f717ca5d0890759496739a678955e6f8b7f88a0894a7f9e27fc26f93997d37c7" +} diff --git a/core/lib/dal/.sqlx/query-f91790ae5cc4b087bf942ba52dd63a1e89945f8d5e0f4da42ecf6313c4f5967e.json b/core/lib/dal/.sqlx/query-f91790ae5cc4b087bf942ba52dd63a1e89945f8d5e0f4da42ecf6313c4f5967e.json new file mode 100644 index 00000000000..cdf4b166270 --- /dev/null +++ b/core/lib/dal/.sqlx/query-f91790ae5cc4b087bf942ba52dd63a1e89945f8d5e0f4da42ecf6313c4f5967e.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n MIN(number) AS \"number\"\n FROM\n l1_batches\n WHERE\n hash IS NOT NULL\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "number", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + null + ] + }, + "hash": "f91790ae5cc4b087bf942ba52dd63a1e89945f8d5e0f4da42ecf6313c4f5967e" +} diff --git a/core/lib/dal/.sqlx/query-f922c0718c9dda2f285f09cbabad425bac8ed3d2780c60c9b63afbcea131f9a0.json b/core/lib/dal/.sqlx/query-f922c0718c9dda2f285f09cbabad425bac8ed3d2780c60c9b63afbcea131f9a0.json new file mode 100644 index 00000000000..c10268b7332 --- /dev/null +++ b/core/lib/dal/.sqlx/query-f922c0718c9dda2f285f09cbabad425bac8ed3d2780c60c9b63afbcea131f9a0.json @@ -0,0 +1,15 @@ +{ + "db_name": "PostgreSQL", + "query": "\n INSERT INTO\n transaction_traces (tx_hash, trace, created_at, updated_at)\n VALUES\n ($1, $2, NOW(), NOW())\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Bytea", + "Jsonb" + ] + }, + "nullable": [] + }, + "hash": "f922c0718c9dda2f285f09cbabad425bac8ed3d2780c60c9b63afbcea131f9a0" +} diff --git a/core/lib/dal/.sqlx/query-fcc108fd59203644ff86ded0505c7dfb7aad7261e5fc402d845aedc3b91a4e99.json b/core/lib/dal/.sqlx/query-fcc108fd59203644ff86ded0505c7dfb7aad7261e5fc402d845aedc3b91a4e99.json new file mode 100644 index 00000000000..3dd33855e0e --- /dev/null +++ b/core/lib/dal/.sqlx/query-fcc108fd59203644ff86ded0505c7dfb7aad7261e5fc402d845aedc3b91a4e99.json @@ -0,0 +1,23 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n nonce AS \"nonce!\"\n FROM\n transactions\n WHERE\n initiator_address = $1\n AND nonce >= $2\n AND is_priority = FALSE\n AND (\n miniblock_number IS NOT NULL\n OR error IS NULL\n )\n ORDER BY\n nonce\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "nonce!", + "type_info": "Int8" + } + ], + "parameters": { + "Left": [ + "Bytea", + "Int8" + ] + }, + "nullable": [ + true + ] + }, + "hash": "fcc108fd59203644ff86ded0505c7dfb7aad7261e5fc402d845aedc3b91a4e99" +} diff --git a/core/lib/dal/.sqlx/query-fcddeb96dcd1611dedb2091c1be304e8a35fd65bf37e976b7106f57c57e70b9b.json b/core/lib/dal/.sqlx/query-fcddeb96dcd1611dedb2091c1be304e8a35fd65bf37e976b7106f57c57e70b9b.json new file mode 100644 index 00000000000..effc22d6a43 --- /dev/null +++ b/core/lib/dal/.sqlx/query-fcddeb96dcd1611dedb2091c1be304e8a35fd65bf37e976b7106f57c57e70b9b.json @@ -0,0 +1,16 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE gpu_prover_queue_fri\n SET\n instance_status = 'available',\n updated_at = NOW()\n WHERE\n instance_host = $1::TEXT::inet\n AND instance_port = $2\n AND instance_status = 'full'\n AND zone = $3\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Text", + "Int4", + "Text" + ] + }, + "nullable": [] + }, + "hash": "fcddeb96dcd1611dedb2091c1be304e8a35fd65bf37e976b7106f57c57e70b9b" +} diff --git 
a/core/lib/dal/.sqlx/query-fde16cd2d3de03f4b61625fa453a58f82acd817932415f04bcbd05442ad80c2b.json b/core/lib/dal/.sqlx/query-fde16cd2d3de03f4b61625fa453a58f82acd817932415f04bcbd05442ad80c2b.json new file mode 100644 index 00000000000..f8ad468d70d --- /dev/null +++ b/core/lib/dal/.sqlx/query-fde16cd2d3de03f4b61625fa453a58f82acd817932415f04bcbd05442ad80c2b.json @@ -0,0 +1,23 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n bytecode\n FROM\n factory_deps\n WHERE\n bytecode_hash = $1\n AND miniblock_number <= $2\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "bytecode", + "type_info": "Bytea" + } + ], + "parameters": { + "Left": [ + "Bytea", + "Int8" + ] + }, + "nullable": [ + false + ] + }, + "hash": "fde16cd2d3de03f4b61625fa453a58f82acd817932415f04bcbd05442ad80c2b" +} diff --git a/core/lib/dal/.sqlx/query-fdffa5841554286a924b217b5885d9ec9b3f628c3a4cf5e10580ea6e5e3a2429.json b/core/lib/dal/.sqlx/query-fdffa5841554286a924b217b5885d9ec9b3f628c3a4cf5e10580ea6e5e3a2429.json new file mode 100644 index 00000000000..bdcf7e5f037 --- /dev/null +++ b/core/lib/dal/.sqlx/query-fdffa5841554286a924b217b5885d9ec9b3f628c3a4cf5e10580ea6e5e3a2429.json @@ -0,0 +1,14 @@ +{ + "db_name": "PostgreSQL", + "query": "\n UPDATE miniblocks\n SET\n l1_batch_number = $1\n WHERE\n l1_batch_number IS NULL\n ", + "describe": { + "columns": [], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [] + }, + "hash": "fdffa5841554286a924b217b5885d9ec9b3f628c3a4cf5e10580ea6e5e3a2429" +} diff --git a/core/lib/dal/.sqlx/query-fe501f86f4bf6c5b8ccc2e039a4eb09b538a67d1c39fda052c4f4ddb23ce0084.json b/core/lib/dal/.sqlx/query-fe501f86f4bf6c5b8ccc2e039a4eb09b538a67d1c39fda052c4f4ddb23ce0084.json new file mode 100644 index 00000000000..5573cdd9953 --- /dev/null +++ b/core/lib/dal/.sqlx/query-fe501f86f4bf6c5b8ccc2e039a4eb09b538a67d1c39fda052c4f4ddb23ce0084.json @@ -0,0 +1,22 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n l2_to_l1_logs\n FROM\n l1_batches\n WHERE\n number = $1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "l2_to_l1_logs", + "type_info": "ByteaArray" + } + ], + "parameters": { + "Left": [ + "Int8" + ] + }, + "nullable": [ + false + ] + }, + "hash": "fe501f86f4bf6c5b8ccc2e039a4eb09b538a67d1c39fda052c4f4ddb23ce0084" +} diff --git a/core/lib/dal/.sqlx/query-fec7b791e371a4c58350b6537065223f4599d4128db588d8645f3d106de5f50b.json b/core/lib/dal/.sqlx/query-fec7b791e371a4c58350b6537065223f4599d4128db588d8645f3d106de5f50b.json new file mode 100644 index 00000000000..c34d38ac2d0 --- /dev/null +++ b/core/lib/dal/.sqlx/query-fec7b791e371a4c58350b6537065223f4599d4128db588d8645f3d106de5f50b.json @@ -0,0 +1,20 @@ +{ + "db_name": "PostgreSQL", + "query": "\n SELECT\n certificate\n FROM\n miniblocks_consensus\n ORDER BY\n number DESC\n LIMIT\n 1\n ", + "describe": { + "columns": [ + { + "ordinal": 0, + "name": "certificate", + "type_info": "Jsonb" + } + ], + "parameters": { + "Left": [] + }, + "nullable": [ + false + ] + }, + "hash": "fec7b791e371a4c58350b6537065223f4599d4128db588d8645f3d106de5f50b" +} diff --git a/core/lib/dal/Cargo.toml b/core/lib/dal/Cargo.toml index 616e8a32a9e..6af26113360 100644 --- a/core/lib/dal/Cargo.toml +++ b/core/lib/dal/Cargo.toml @@ -1,7 +1,7 @@ [package] name = "zksync_dal" version = "0.1.0" -edition = "2018" +edition = "2021" authors = ["The Matter Labs Team "] homepage = "https://zksync.io/" repository = "https://github.com/matter-labs/zksync-era" @@ -9,36 +9,43 @@ license = "MIT OR Apache-2.0" keywords = ["blockchain", "zksync"] categories = 
["cryptography"] +links = "zksync_dal_proto" + [dependencies] -vise = { git = "https://github.com/matter-labs/vise.git", version = "0.1.0", rev = "dd05139b76ab0843443ab3ff730174942c825dae" } +vise = { git = "https://github.com/matter-labs/vise.git", version = "0.1.0", rev = "1c9cc500e92cf9ea052b230e114a6f9cce4fb2c1" } zksync_utils = { path = "../utils" } zksync_system_constants = { path = "../constants" } zksync_contracts = { path = "../contracts" } zksync_types = { path = "../types" } zksync_health_check = { path = "../health_check" } +zksync_consensus_roles = { version = "0.1.0", git = "https://github.com/matter-labs/era-consensus.git", rev = "5727a3e0b22470bb90092388f9125bcb366df613" } +zksync_consensus_storage = { version = "0.1.0", git = "https://github.com/matter-labs/era-consensus.git", rev = "5727a3e0b22470bb90092388f9125bcb366df613" } +zksync_protobuf = { version = "0.1.0", git = "https://github.com/matter-labs/era-consensus.git", rev = "5727a3e0b22470bb90092388f9125bcb366df613" } itertools = "0.10.1" thiserror = "1.0" anyhow = "1.0" url = "2" +prost = "0.12.1" rand = "0.8" tokio = { version = "1", features = ["full"] } -sqlx = { version = "0.5.13", default-features = false, features = [ - "runtime-tokio-native-tls", +sqlx = { version = "0.7.3", default-features = false, features = [ + "runtime-tokio", + "tls-native-tls", "macros", "postgres", "bigdecimal", + "rust_decimal", "chrono", "json", - "offline", "migrate", "ipnetwork", ] } serde = { version = "1.0", features = ["derive"] } serde_json = "1.0" -bigdecimal = "0.2.2" +bigdecimal = "0.3.0" bincode = "1" -num = "0.3.1" +num = "0.4.0" hex = "0.4" once_cell = "1.7" strum = { version = "0.24", features = ["derive"] } @@ -46,3 +53,6 @@ tracing = "0.1" [dev-dependencies] assert_matches = "1.5.0" + +[build-dependencies] +zksync_protobuf_build = { version = "0.1.0", git = "https://github.com/matter-labs/era-consensus.git", rev = "5727a3e0b22470bb90092388f9125bcb366df613" } diff --git a/core/lib/zksync_core/build.rs b/core/lib/dal/build.rs similarity index 64% rename from core/lib/zksync_core/build.rs rename to core/lib/dal/build.rs index 7e8cc45bb8c..f9986b59603 100644 --- a/core/lib/zksync_core/build.rs +++ b/core/lib/dal/build.rs @@ -1,11 +1,11 @@ //! Generates rust code from protobufs. 
fn main() { zksync_protobuf_build::Config { - input_root: "src/consensus/proto".into(), - proto_root: "zksync/core/consensus".into(), + input_root: "src/models/proto".into(), + proto_root: "zksync/dal".into(), dependencies: vec![], protobuf_crate: "::zksync_protobuf".parse().unwrap(), - is_public: false, + is_public: true, } .generate() .expect("generate()"); diff --git a/core/lib/dal/migrations/20231013163109_create_snapshots_table.down.sql b/core/lib/dal/migrations/20231013163109_create_snapshots_table.down.sql new file mode 100644 index 00000000000..708ff00f00e --- /dev/null +++ b/core/lib/dal/migrations/20231013163109_create_snapshots_table.down.sql @@ -0,0 +1 @@ +DROP TABLE IF EXISTS snapshots; diff --git a/core/lib/dal/migrations/20231013163109_create_snapshots_table.up.sql b/core/lib/dal/migrations/20231013163109_create_snapshots_table.up.sql new file mode 100644 index 00000000000..ae35521ee5e --- /dev/null +++ b/core/lib/dal/migrations/20231013163109_create_snapshots_table.up.sql @@ -0,0 +1,9 @@ +CREATE TABLE snapshots +( + l1_batch_number BIGINT NOT NULL PRIMARY KEY, + storage_logs_filepaths TEXT[] NOT NULL, + factory_deps_filepath TEXT NOT NULL, + + created_at TIMESTAMP NOT NULL, + updated_at TIMESTAMP NOT NULL +); diff --git a/core/lib/dal/migrations/20231128123456_create_consensus_replica_state_table.down.sql b/core/lib/dal/migrations/20231128123456_create_consensus_replica_state_table.down.sql new file mode 100644 index 00000000000..f81d2ea929d --- /dev/null +++ b/core/lib/dal/migrations/20231128123456_create_consensus_replica_state_table.down.sql @@ -0,0 +1 @@ +DROP TABLE consensus_replica_state; diff --git a/core/lib/dal/migrations/20231128123456_create_consensus_replica_state_table.up.sql b/core/lib/dal/migrations/20231128123456_create_consensus_replica_state_table.up.sql new file mode 100644 index 00000000000..d0cdc951d23 --- /dev/null +++ b/core/lib/dal/migrations/20231128123456_create_consensus_replica_state_table.up.sql @@ -0,0 +1,6 @@ +CREATE TABLE IF NOT EXISTS consensus_replica_state ( + state JSONB NOT NULL, + -- artificial primary key ensuring that the table contains at most 1 row. 
+ fake_key BOOLEAN PRIMARY KEY, + CHECK (fake_key) +); diff --git a/core/lib/dal/migrations/20231208134254_drop_old_witness_generation_related_tables.down.sql b/core/lib/dal/migrations/20231208134254_drop_old_witness_generation_related_tables.down.sql new file mode 100644 index 00000000000..0f4351be7b7 --- /dev/null +++ b/core/lib/dal/migrations/20231208134254_drop_old_witness_generation_related_tables.down.sql @@ -0,0 +1,83 @@ +CREATE TABLE IF NOT EXISTS witness_inputs +( + l1_batch_number BIGINT NOT NULL PRIMARY KEY, + merkle_tree_paths BYTEA, + created_at TIMESTAMP NOT NULL, + updated_at TIMESTAMP NOT NULL, + status TEXT NOT NULL, + time_taken TIME DEFAULT '00:00:00'::TIME WITHOUT TIME ZONE NOT NULL, + processing_started_at TIMESTAMP, + error VARCHAR, + attempts INTEGER DEFAULT 0 NOT NULL, + merkel_tree_paths_blob_url TEXT, + is_blob_cleaned boolean DEFAULT false NOT NULL, + protocol_version INTEGER + CONSTRAINT witness_inputs_prover_protocol_version_fkey REFERENCES prover_protocol_versions +); +CREATE INDEX IF NOT EXISTS witness_inputs_blob_cleanup_status_index ON witness_inputs (status, is_blob_cleaned); + + +CREATE TABLE leaf_aggregation_witness_jobs +( + l1_batch_number BIGINT NOT NULL PRIMARY KEY, + basic_circuits BYTEA NOT NULL, + basic_circuits_inputs BYTEA NOT NULL, + number_of_basic_circuits INTEGER NOT NULL, + status TEXT NOT NULL, + processing_started_at TIMESTAMP, + time_taken TIME, + error TEXT, + created_at TIMESTAMP NOT NULL, + updated_at TIMESTAMP NOT NULL, + attempts INTEGER DEFAULT 0 NOT NULL, + basic_circuits_blob_url TEXT, + basic_circuits_inputs_blob_url TEXT, + is_blob_cleaned BOOLEAN DEFAULT FALSE NOT NULL, + protocol_version INTEGER + CONSTRAINT leaf_aggregation_witness_jobs_prover_protocol_version_fkey REFERENCES prover_protocol_versions +); +CREATE INDEX IF NOT EXISTS leaf_aggregation_witness_jobs_blob_cleanup_status_index ON leaf_aggregation_witness_jobs (status, is_blob_cleaned); + + +CREATE TABLE node_aggregation_witness_jobs +( + l1_batch_number BIGINT NOT NULL PRIMARY KEY, + leaf_layer_subqueues BYTEA, + aggregation_outputs BYTEA, + number_of_leaf_circuits INTEGER, + status TEXT NOT NULL, + processing_started_at TIMESTAMP, + time_taken TIME, + error TEXT, + created_at TIMESTAMP NOT NULL, + updated_at TIMESTAMP NOT NULL, + attempts INTEGER DEFAULT 0 NOT NULL, + leaf_layer_subqueues_blob_url TEXT, + aggregation_outputs_blob_url TEXT, + is_blob_cleaned BOOLEAN DEFAULT FALSE NOT NULL, + protocol_version INTEGER + CONSTRAINT node_aggregation_witness_jobs_prover_protocol_version_fkey REFERENCES prover_protocol_versions +); +CREATE INDEX IF NOT EXISTS node_aggregation_witness_jobs_blob_cleanup_status_index ON node_aggregation_witness_jobs (status, is_blob_cleaned); + + +CREATE TABLE scheduler_witness_jobs +( + l1_batch_number BIGINT NOT NULL PRIMARY KEY, + scheduler_witness BYTEA NOT NULL, + final_node_aggregations BYTEA, + status TEXT NOT NULL, + processing_started_at TIMESTAMP, + time_taken TIME, + error TEXT, + created_at TIMESTAMP NOT NULL, + updated_at TIMESTAMP NOT NULL, + attempts INTEGER DEFAULT 0 NOT NULL, + aggregation_result_coords BYTEA, + scheduler_witness_blob_url TEXT, + final_node_aggregations_blob_url TEXT, + is_blob_cleaned BOOLEAN DEFAULT FALSE NOT NULL, + protocol_version INTEGER + CONSTRAINT scheduler_witness_jobs_prover_protocol_version_fkey REFERENCES prover_protocol_versions +); +CREATE INDEX IF NOT EXISTS scheduler_witness_jobs_blob_cleanup_status_index ON scheduler_witness_jobs (status, is_blob_cleaned); diff --git 
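The `consensus_replica_state` migration above uses a neat trick for singleton state: `fake_key BOOLEAN PRIMARY KEY` plus `CHECK (fake_key)` means the only admissible key is `TRUE`, so the table can never hold more than one row. Writing the state then naturally becomes an upsert against that constant key; a hedged sketch (the SQL and function name are assumptions, not the DAL's actual statement):

```rust
use sqlx::PgPool;

// Persist the single replica-state row; illustrative only.
async fn put_replica_state(pool: &PgPool, state: serde_json::Value) -> sqlx::Result<()> {
    sqlx::query!(
        r#"
        INSERT INTO consensus_replica_state (fake_key, state)
        VALUES (TRUE, $1)
        ON CONFLICT (fake_key) DO UPDATE SET state = excluded.state
        "#,
        state
    )
    .execute(pool)
    .await?;
    Ok(())
}
```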
a/core/lib/dal/migrations/20231208134254_drop_old_witness_generation_related_tables.up.sql b/core/lib/dal/migrations/20231208134254_drop_old_witness_generation_related_tables.up.sql new file mode 100644 index 00000000000..54d4ba26e59 --- /dev/null +++ b/core/lib/dal/migrations/20231208134254_drop_old_witness_generation_related_tables.up.sql @@ -0,0 +1,4 @@ +DROP TABLE IF EXISTS witness_inputs; +DROP TABLE IF EXISTS leaf_aggregation_witness_jobs; +DROP TABLE IF EXISTS node_aggregation_witness_jobs; +DROP TABLE IF EXISTS scheduler_witness_jobs; diff --git a/core/lib/dal/migrations/20231213192041_snapshot-recovery.down.sql b/core/lib/dal/migrations/20231213192041_snapshot-recovery.down.sql new file mode 100644 index 00000000000..bd0ccd9c824 --- /dev/null +++ b/core/lib/dal/migrations/20231213192041_snapshot-recovery.down.sql @@ -0,0 +1 @@ +DROP TABLE IF EXISTS snapshot_recovery; diff --git a/core/lib/dal/migrations/20231213192041_snapshot-recovery.up.sql b/core/lib/dal/migrations/20231213192041_snapshot-recovery.up.sql new file mode 100644 index 00000000000..f9a9375da15 --- /dev/null +++ b/core/lib/dal/migrations/20231213192041_snapshot-recovery.up.sql @@ -0,0 +1,13 @@ +CREATE TABLE snapshot_recovery +( + l1_batch_number BIGINT NOT NULL PRIMARY KEY, + l1_batch_root_hash BYTEA NOT NULL, + miniblock_number BIGINT NOT NULL, + miniblock_root_hash BYTEA NOT NULL, + + last_finished_chunk_id INT, + total_chunk_count INT NOT NULL, + + created_at TIMESTAMP NOT NULL, + updated_at TIMESTAMP NOT NULL +) diff --git a/core/lib/dal/migrations/20231225083442_add-pub-data-input.down.sql b/core/lib/dal/migrations/20231225083442_add-pub-data-input.down.sql new file mode 100644 index 00000000000..103f3ce566f --- /dev/null +++ b/core/lib/dal/migrations/20231225083442_add-pub-data-input.down.sql @@ -0,0 +1 @@ +ALTER table l1_batches DROP COLUMN pubdata_input; \ No newline at end of file diff --git a/core/lib/dal/migrations/20231225083442_add-pub-data-input.up.sql b/core/lib/dal/migrations/20231225083442_add-pub-data-input.up.sql new file mode 100644 index 00000000000..bd88f25e209 --- /dev/null +++ b/core/lib/dal/migrations/20231225083442_add-pub-data-input.up.sql @@ -0,0 +1 @@ +ALTER TABLE l1_batches ADD COLUMN pubdata_input BYTEA; \ No newline at end of file diff --git a/core/lib/dal/migrations/20231229181653_fair_pubdata_price.down.sql b/core/lib/dal/migrations/20231229181653_fair_pubdata_price.down.sql new file mode 100644 index 00000000000..9002b3924e0 --- /dev/null +++ b/core/lib/dal/migrations/20231229181653_fair_pubdata_price.down.sql @@ -0,0 +1 @@ +ALTER TABLE miniblocks DROP COLUMN fair_pubdata_price; diff --git a/core/lib/dal/migrations/20231229181653_fair_pubdata_price.up.sql b/core/lib/dal/migrations/20231229181653_fair_pubdata_price.up.sql new file mode 100644 index 00000000000..f79547b1123 --- /dev/null +++ b/core/lib/dal/migrations/20231229181653_fair_pubdata_price.up.sql @@ -0,0 +1,2 @@ +ALTER TABLE miniblocks + ADD COLUMN IF NOT EXISTS fair_pubdata_price BIGINT; diff --git a/core/lib/dal/migrations/20240103123456_move_consensus_fields_to_new_table.down.sql b/core/lib/dal/migrations/20240103123456_move_consensus_fields_to_new_table.down.sql new file mode 100644 index 00000000000..ce7a2de360b --- /dev/null +++ b/core/lib/dal/migrations/20240103123456_move_consensus_fields_to_new_table.down.sql @@ -0,0 +1,2 @@ +DROP TABLE miniblocks_consensus; +ALTER TABLE miniblocks ADD COLUMN consensus JSONB NULL; diff --git a/core/lib/dal/migrations/20240103123456_move_consensus_fields_to_new_table.up.sql 
b/core/lib/dal/migrations/20240103123456_move_consensus_fields_to_new_table.up.sql new file mode 100644 index 00000000000..8f5538ce631 --- /dev/null +++ b/core/lib/dal/migrations/20240103123456_move_consensus_fields_to_new_table.up.sql @@ -0,0 +1,11 @@ +ALTER TABLE miniblocks DROP COLUMN consensus; + +CREATE TABLE miniblocks_consensus ( + number BIGINT NOT NULL, + certificate JSONB NOT NULL, + PRIMARY KEY(number), + CHECK((certificate->'message'->'proposal'->'number')::jsonb::numeric = number), + CONSTRAINT miniblocks_fk FOREIGN KEY(number) + REFERENCES miniblocks(number) + ON DELETE CASCADE +); diff --git a/core/lib/dal/migrations/20240103125908_remove_old_prover_subsystems.down.sql b/core/lib/dal/migrations/20240103125908_remove_old_prover_subsystems.down.sql new file mode 100644 index 00000000000..5b5531589af --- /dev/null +++ b/core/lib/dal/migrations/20240103125908_remove_old_prover_subsystems.down.sql @@ -0,0 +1,52 @@ +-- Note that era can't revert to this point in time. +-- These tables are added only if engineers want to revert from a future codebase to a previous codebase. +-- This migration will enable backwards development (i.e. bisecting some error). + +CREATE TABLE IF NOT EXISTS gpu_prover_queue ( + id bigint NOT NULL PRIMARY KEY, + instance_host inet NOT NULL, + instance_port integer NOT NULL, + instance_status text NOT NULL, + created_at timestamp without time zone NOT NULL, + updated_at timestamp without time zone NOT NULL, + processing_started_at timestamp without time zone, + queue_free_slots integer, + queue_capacity integer, + specialized_prover_group_id smallint, + region text NOT NULL, + zone text NOT NULL, + num_gpu smallint, + CONSTRAINT valid_port CHECK (((instance_port >= 0) AND (instance_port <= 65535))) +); + +CREATE TABLE IF NOT EXISTS prover_jobs ( + id bigint NOT NULL PRIMARY KEY, + l1_batch_number bigint NOT NULL, + circuit_type text NOT NULL, + prover_input bytea NOT NULL, + status text NOT NULL, + error text, + processing_started_at timestamp without time zone, + created_at timestamp without time zone NOT NULL, + updated_at timestamp without time zone NOT NULL, + time_taken time without time zone DEFAULT '00:00:00'::time without time zone NOT NULL, + aggregation_round integer DEFAULT 0 NOT NULL, + result bytea, + sequence_number integer DEFAULT 0 NOT NULL, + attempts integer DEFAULT 0 NOT NULL, + circuit_input_blob_url text, + proccesed_by text, + is_blob_cleaned boolean DEFAULT false NOT NULL, + protocol_version integer +); + +CREATE TABLE IF NOT EXISTS prover_protocol_versions ( + id integer NOT NULL, + "timestamp" bigint NOT NULL, + recursion_scheduler_level_vk_hash bytea NOT NULL, + recursion_node_level_vk_hash bytea NOT NULL, + recursion_leaf_level_vk_hash bytea NOT NULL, + recursion_circuits_set_vks_hash bytea NOT NULL, + verifier_address bytea NOT NULL, + created_at timestamp without time zone NOT NULL +); diff --git a/core/lib/dal/migrations/20240103125908_remove_old_prover_subsystems.up.sql b/core/lib/dal/migrations/20240103125908_remove_old_prover_subsystems.up.sql new file mode 100644 index 00000000000..473706875fb --- /dev/null +++ b/core/lib/dal/migrations/20240103125908_remove_old_prover_subsystems.up.sql @@ -0,0 +1,5 @@ +DROP TABLE IF EXISTS gpu_prover_queue; + +DROP TABLE IF EXISTS prover_jobs; + +DROP TABLE IF EXISTS prover_protocol_versions; diff --git a/core/lib/dal/migrations/20240104121833_l1-batch-predicted-circuits.down.sql b/core/lib/dal/migrations/20240104121833_l1-batch-predicted-circuits.down.sql new file mode 100644 index 
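The new `miniblocks_consensus` table above is a good example of pushing invariants into the schema: its `CHECK` constraint extracts `message.proposal.number` from the JSONB certificate and requires it to equal the row's `number`, so a certificate can never be stored against the wrong miniblock, while `ON DELETE CASCADE` keeps certificates from outliving their miniblocks on rollback. (The neighboring `remove_old_prover_subsystems` down migration recreates the legacy tables for the reason its comment states: so bisecting across this point stays possible.) A sketch of the corresponding writer, with assumed names:

```rust
use sqlx::PgPool;

// Insert a consensus certificate for a miniblock. PostgreSQL enforces the
// number/payload consistency via the CHECK constraint above, so a mismatched
// certificate fails loudly instead of being stored. Illustrative only.
async fn insert_miniblock_certificate(
    pool: &PgPool,
    number: i64,
    certificate: serde_json::Value,
) -> sqlx::Result<()> {
    sqlx::query!(
        r#"
        INSERT INTO miniblocks_consensus (number, certificate)
        VALUES ($1, $2)
        "#,
        number,
        certificate
    )
    .execute(pool)
    .await?;
    Ok(())
}
```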
00000000000..925299304eb --- /dev/null +++ b/core/lib/dal/migrations/20240104121833_l1-batch-predicted-circuits.down.sql @@ -0,0 +1,2 @@ +ALTER TABLE l1_batches + DROP COLUMN IF EXISTS predicted_circuits; diff --git a/core/lib/dal/migrations/20240104121833_l1-batch-predicted-circuits.up.sql b/core/lib/dal/migrations/20240104121833_l1-batch-predicted-circuits.up.sql new file mode 100644 index 00000000000..a957fce4d98 --- /dev/null +++ b/core/lib/dal/migrations/20240104121833_l1-batch-predicted-circuits.up.sql @@ -0,0 +1,2 @@ +ALTER TABLE l1_batches + ADD COLUMN IF NOT EXISTS predicted_circuits INT; diff --git a/core/lib/dal/sqlx-data.json b/core/lib/dal/sqlx-data.json index 3776b4f84b3..95c8c858baa 100644 --- a/core/lib/dal/sqlx-data.json +++ b/core/lib/dal/sqlx-data.json @@ -1,12414 +1,3 @@ { - "db": "PostgreSQL", - "0002e8b596794ae9396de8ac621b30dcf0befdff28c5bc23d713185f7a410df4": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Text", - "Int8" - ] - } - }, - "query": "UPDATE proof_generation_details SET status=$1, updated_at = now() WHERE l1_batch_number = $2" - }, - "00bd80fd83aff559d8d9232c2e98a12a1dd2c8f31792cd915e2cf11f28e583b7": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "l1_batch_number", - "ordinal": 1, - "type_info": "Int8" - }, - { - "name": "circuit_id", - "ordinal": 2, - "type_info": "Int2" - }, - { - "name": "depth", - "ordinal": 3, - "type_info": "Int4" - }, - { - "name": "status", - "ordinal": 4, - "type_info": "Text" - }, - { - "name": "attempts", - "ordinal": 5, - "type_info": "Int2" - }, - { - "name": "aggregations_url", - "ordinal": 6, - "type_info": "Text" - }, - { - "name": "processing_started_at", - "ordinal": 7, - "type_info": "Timestamp" - }, - { - "name": "time_taken", - "ordinal": 8, - "type_info": "Time" - }, - { - "name": "error", - "ordinal": 9, - "type_info": "Text" - }, - { - "name": "created_at", - "ordinal": 10, - "type_info": "Timestamp" - }, - { - "name": "updated_at", - "ordinal": 11, - "type_info": "Timestamp" - }, - { - "name": "number_of_dependent_jobs", - "ordinal": 12, - "type_info": "Int4" - }, - { - "name": "protocol_version", - "ordinal": 13, - "type_info": "Int4" - }, - { - "name": "picked_by", - "ordinal": 14, - "type_info": "Text" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - false, - true, - true, - true, - true, - false, - false, - true, - true, - true - ], - "parameters": { - "Left": [ - "Int4Array", - "Text" - ] - } - }, - "query": "\n UPDATE node_aggregation_witness_jobs_fri\n SET status = 'in_progress', attempts = attempts + 1,\n updated_at = now(), processing_started_at = now(),\n picked_by = $2\n WHERE id = (\n SELECT id\n FROM node_aggregation_witness_jobs_fri\n WHERE status = 'queued'\n AND protocol_version = ANY($1)\n ORDER BY l1_batch_number ASC, depth ASC, id ASC\n LIMIT 1\n FOR UPDATE\n SKIP LOCKED\n )\n RETURNING node_aggregation_witness_jobs_fri.*\n " - }, - "0141169c8375ae975598aca5351ea162948f72b2c325619f57c756db028bed74": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int4" - }, - { - "name": "eth_tx_id", - "ordinal": 1, - "type_info": "Int4" - }, - { - "name": "tx_hash", - "ordinal": 2, - "type_info": "Text" - }, - { - "name": "base_fee_per_gas", - "ordinal": 3, - "type_info": "Int8" - }, - { - "name": "priority_fee_per_gas", - "ordinal": 4, - "type_info": "Int8" - }, - { - "name": "signed_raw_tx", - "ordinal": 5, - "type_info": "Bytea" - }, - { - "name": 
"nonce", - "ordinal": 6, - "type_info": "Int8" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - true, - false - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT eth_txs_history.id, eth_txs_history.eth_tx_id, eth_txs_history.tx_hash, eth_txs_history.base_fee_per_gas, eth_txs_history.priority_fee_per_gas, eth_txs_history.signed_raw_tx, eth_txs.nonce FROM eth_txs_history JOIN eth_txs ON eth_txs.id = eth_txs_history.eth_tx_id WHERE eth_txs_history.sent_at_block IS NULL AND eth_txs.confirmed_eth_tx_history_id IS NULL ORDER BY eth_txs_history.id DESC" - }, - "01a21fe42c5c0ec0f848739235b8175b62b0ffe503b823c128dd620fec047784": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Text", - "Int4", - "Text" - ] - } - }, - "query": "UPDATE gpu_prover_queue_fri SET instance_status = 'available', updated_at = now() WHERE instance_host = $1::text::inet AND instance_port = $2 AND instance_status = 'full' AND zone = $3\n " - }, - "01ebdc5b524e85033fb06d9166475f365643f744492e59ff12f10b419dd6d485": { - "describe": { - "columns": [ - { - "name": "bytecode_hash", - "ordinal": 0, - "type_info": "Bytea" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT bytecode_hash FROM factory_deps WHERE miniblock_number > $1" - }, - "03a34f0fd82bed22f14c5b36554bb958d407e9724fa5ea5123edc3c6607e545c": { - "describe": { - "columns": [ - { - "name": "block_hash?", - "ordinal": 0, - "type_info": "Bytea" - }, - { - "name": "address!", - "ordinal": 1, - "type_info": "Bytea" - }, - { - "name": "topic1!", - "ordinal": 2, - "type_info": "Bytea" - }, - { - "name": "topic2!", - "ordinal": 3, - "type_info": "Bytea" - }, - { - "name": "topic3!", - "ordinal": 4, - "type_info": "Bytea" - }, - { - "name": "topic4!", - "ordinal": 5, - "type_info": "Bytea" - }, - { - "name": "value!", - "ordinal": 6, - "type_info": "Bytea" - }, - { - "name": "miniblock_number!", - "ordinal": 7, - "type_info": "Int8" - }, - { - "name": "l1_batch_number?", - "ordinal": 8, - "type_info": "Int8" - }, - { - "name": "tx_hash!", - "ordinal": 9, - "type_info": "Bytea" - }, - { - "name": "tx_index_in_block!", - "ordinal": 10, - "type_info": "Int4" - }, - { - "name": "event_index_in_block!", - "ordinal": 11, - "type_info": "Int4" - }, - { - "name": "event_index_in_tx!", - "ordinal": 12, - "type_info": "Int4" - } - ], - "nullable": [ - true, - true, - true, - true, - true, - true, - true, - true, - true, - true, - true, - true, - true - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "\n WITH events_select AS (\n SELECT\n address, topic1, topic2, topic3, topic4, value,\n miniblock_number, tx_hash, tx_index_in_block,\n event_index_in_block, event_index_in_tx\n FROM events\n WHERE miniblock_number > $1\n ORDER BY miniblock_number ASC, event_index_in_block ASC\n )\n SELECT miniblocks.hash as \"block_hash?\",\n address as \"address!\", topic1 as \"topic1!\", topic2 as \"topic2!\", topic3 as \"topic3!\", topic4 as \"topic4!\", value as \"value!\",\n miniblock_number as \"miniblock_number!\", miniblocks.l1_batch_number as \"l1_batch_number?\", tx_hash as \"tx_hash!\",\n tx_index_in_block as \"tx_index_in_block!\", event_index_in_block as \"event_index_in_block!\", event_index_in_tx as \"event_index_in_tx!\"\n FROM events_select\n INNER JOIN miniblocks ON events_select.miniblock_number = miniblocks.number\n ORDER BY miniblock_number ASC, event_index_in_block ASC\n " - }, - 
"06d90ea65c1e06bd871f090a0fb0e8772ea5e923f1da5310bedd8dc90e0827f4": { - "describe": { - "columns": [ - { - "name": "eth_commit_tx_id", - "ordinal": 0, - "type_info": "Int4" - } - ], - "nullable": [ - true - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT eth_commit_tx_id FROM l1_batches WHERE number = $1" - }, - "07310d96fc7e258154ad510684e33d196907ebd599e926d305e5ef9f26afa2fa": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int4" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Int4", - "Text", - "Timestamp" - ] - } - }, - "query": "INSERT INTO eth_txs_history (eth_tx_id, base_fee_per_gas, priority_fee_per_gas, tx_hash, signed_raw_tx, created_at, updated_at, confirmed_at) VALUES ($1, 0, 0, $2, '\\x00', now(), now(), $3) RETURNING id" - }, - "09768b376996b96add16a02d1a59231cb9b525cd5bd19d22a76149962d4c91c2": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Bytea", - "Bytea", - "Bytea", - "Bytea", - "Bytea", - "Bytea", - "Bool", - "Bytea", - "Int8", - "Bytea", - "Bytea", - "Bytea", - "Int8" - ] - } - }, - "query": "UPDATE l1_batches SET hash = $1, merkle_root_hash = $2, compressed_repeated_writes = $3, compressed_initial_writes = $4, l2_l1_compressed_messages = $5, l2_l1_merkle_root = $6, zkporter_is_available = $7, parent_hash = $8, rollup_last_leaf_index = $9, pass_through_data_hash = $10, meta_parameters_hash = $11, compressed_state_diffs = $12, updated_at = now() WHERE number = $13 AND hash IS NULL" - }, - "0c212f47b9a0e719f947a419be8284837b1b01aa23994ba6401b420790b802b8": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8", - "Int4" - ] - } - }, - "query": "\n INSERT INTO node_aggregation_witness_jobs\n (l1_batch_number, protocol_version, status, created_at, updated_at)\n VALUES ($1, $2, 'waiting_for_artifacts', now(), now())\n " - }, - "0cbbcd30fde109c4c44162f94b6ed9bab4e9db9948d03e584c2cab543449d298": { - "describe": { - "columns": [ - { - "name": "status", - "ordinal": 0, - "type_info": "Text" - }, - { - "name": "error", - "ordinal": 1, - "type_info": "Text" - }, - { - "name": "compilation_errors", - "ordinal": 2, - "type_info": "Jsonb" - } - ], - "nullable": [ - false, - true, - true - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT status, error, compilation_errors FROM contract_verification_requests WHERE id = $1" - }, - "0d1bed183c38304ff1a6c8c78dca03964e2e188a6d01f98eaf0c6b24f19b8b6f": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "ByteaArray" - ] - } - }, - "query": "UPDATE transactions SET in_mempool = FALSE FROM UNNEST ($1::bytea[]) AS s(address) WHERE transactions.in_mempool = TRUE AND transactions.initiator_address = s.address" - }, - "0d99b4015b29905862991e4f1a44a1021d48f50e99cb1701e7496ce6c3e15dc6": { - "describe": { - "columns": [ - { - "name": "number", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - null - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT MAX(number) as \"number\" FROM l1_batches WHERE is_finished = TRUE" - }, - "0e001ef507253b4fd3a87e379c8f2e63fa41250b1a396d81697de2b7ea71215e": { - "describe": { - "columns": [ - { - "name": "count!", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - null - ], - "parameters": { - "Left": [ - "Int8", - "Bytea", - "Bytea", - "Bytea", - "Bytea" - ] - } - }, - "query": "SELECT COUNT(*) as \"count!\" FROM l1_batches WHERE number = $1 AND hash = $2 AND 
merkle_root_hash = $3 AND parent_hash = $4 AND l2_l1_merkle_root = $5" - }, - "0ee31e6e2ec60f427d8dec719ec0ba03ef75bc610e878ae32b0bf61c4c2c1366": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "instance_host", - "ordinal": 1, - "type_info": "Inet" - }, - { - "name": "instance_port", - "ordinal": 2, - "type_info": "Int4" - }, - { - "name": "instance_status", - "ordinal": 3, - "type_info": "Text" - }, - { - "name": "specialized_prover_group_id", - "ordinal": 4, - "type_info": "Int2" - }, - { - "name": "zone", - "ordinal": 5, - "type_info": "Text" - }, - { - "name": "created_at", - "ordinal": 6, - "type_info": "Timestamp" - }, - { - "name": "updated_at", - "ordinal": 7, - "type_info": "Timestamp" - }, - { - "name": "processing_started_at", - "ordinal": 8, - "type_info": "Timestamp" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - true, - false, - false, - true - ], - "parameters": { - "Left": [ - "Interval", - "Int2", - "Text" - ] - } - }, - "query": "UPDATE gpu_prover_queue_fri SET instance_status = 'reserved', updated_at = now(), processing_started_at = now() WHERE id in ( SELECT id FROM gpu_prover_queue_fri WHERE specialized_prover_group_id=$2 AND zone=$3 AND ( instance_status = 'available' OR (instance_status = 'reserved' AND processing_started_at < now() - $1::interval) ) ORDER BY updated_at ASC LIMIT 1 FOR UPDATE SKIP LOCKED ) RETURNING gpu_prover_queue_fri.*\n " - }, - "0f5897b5e0109535caa3d49f899c65e5080511d49305558b59b185c34227aa18": { - "describe": { - "columns": [ - { - "name": "nonce!", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - true - ], - "parameters": { - "Left": [ - "Bytea", - "Int8" - ] - } - }, - "query": "SELECT nonce as \"nonce!\" FROM transactions WHERE initiator_address = $1 AND nonce >= $2 AND is_priority = FALSE AND (miniblock_number IS NOT NULL OR error IS NULL) ORDER BY nonce" - }, - "0f8a603899280c015b033c4160bc064865103e9d6d63a369f07a8e5d859a7b14": { - "describe": { - "columns": [ - { - "name": "timestamp", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT timestamp FROM miniblocks WHERE number = $1" - }, - "0fd885074c624bea478ec0a24a499cf1278773cdba92550439da5d3b70cbf38c": { - "describe": { - "columns": [ - { - "name": "count!", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "status!", - "ordinal": 1, - "type_info": "Text" - } - ], - "nullable": [ - null, - false - ], - "parameters": { - "Left": [] - } - }, - "query": "\n SELECT COUNT(*) as \"count!\", status as \"status!\"\n FROM prover_jobs\n GROUP BY status\n " - }, - "100ede607d40d8d07000fcdc40705c806e8229323e0e6dfb7507691838963ccf": { - "describe": { - "columns": [ - { - "name": "l1_batch_number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "basic_circuits", - "ordinal": 1, - "type_info": "Bytea" - }, - { - "name": "basic_circuits_inputs", - "ordinal": 2, - "type_info": "Bytea" - }, - { - "name": "number_of_basic_circuits", - "ordinal": 3, - "type_info": "Int4" - }, - { - "name": "status", - "ordinal": 4, - "type_info": "Text" - }, - { - "name": "processing_started_at", - "ordinal": 5, - "type_info": "Timestamp" - }, - { - "name": "time_taken", - "ordinal": 6, - "type_info": "Time" - }, - { - "name": "error", - "ordinal": 7, - "type_info": "Text" - }, - { - "name": "created_at", - "ordinal": 8, - "type_info": "Timestamp" - }, - { - "name": "updated_at", - "ordinal": 9, - "type_info": 
"Timestamp" - }, - { - "name": "attempts", - "ordinal": 10, - "type_info": "Int4" - }, - { - "name": "basic_circuits_blob_url", - "ordinal": 11, - "type_info": "Text" - }, - { - "name": "basic_circuits_inputs_blob_url", - "ordinal": 12, - "type_info": "Text" - }, - { - "name": "is_blob_cleaned", - "ordinal": 13, - "type_info": "Bool" - }, - { - "name": "protocol_version", - "ordinal": 14, - "type_info": "Int4" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - true, - true, - true, - false, - false, - false, - true, - true, - false, - true - ], - "parameters": { - "Left": [ - "Interval", - "Int4", - "Int8", - "Int4Array" - ] - } - }, - "query": "\n UPDATE leaf_aggregation_witness_jobs\n SET status = 'in_progress', attempts = attempts + 1,\n updated_at = now(), processing_started_at = now()\n WHERE l1_batch_number = (\n SELECT l1_batch_number\n FROM leaf_aggregation_witness_jobs\n WHERE l1_batch_number <= $3\n AND\n ( status = 'queued'\n OR (status = 'in_progress' AND processing_started_at < now() - $1::interval)\n OR (status = 'failed' AND attempts < $2)\n )\n AND protocol_version = ANY($4)\n ORDER BY l1_batch_number ASC\n LIMIT 1\n FOR UPDATE\n SKIP LOCKED\n )\n RETURNING leaf_aggregation_witness_jobs.*\n " - }, - "13e5f6a2a73eaa979229611ffdbed86d6e5e1bad0c645d39b56fdc47f5c17971": { - "describe": { - "columns": [ - { - "name": "hashed_key", - "ordinal": 0, - "type_info": "Bytea" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Int8", - "Int8" - ] - } - }, - "query": "SELECT DISTINCT hashed_key FROM storage_logs WHERE miniblock_number BETWEEN $1 and $2" - }, - "14815f61d37d274f9aea1125ca4d368fd8c45098b0017710c0ee18d23d994c15": { - "describe": { - "columns": [ - { - "name": "number", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT number FROM l1_batches LEFT JOIN eth_txs_history AS prove_tx ON (l1_batches.eth_prove_tx_id = prove_tx.eth_tx_id) WHERE prove_tx.confirmed_at IS NOT NULL ORDER BY number DESC LIMIT 1" - }, - "157fc4ef4f5fd831399219850bc59ec0bd32d938ec8685dacaf913efdccfe7fe": { - "describe": { - "columns": [ - { - "name": "l1_address", - "ordinal": 0, - "type_info": "Bytea" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Numeric" - ] - } - }, - "query": "SELECT l1_address FROM tokens WHERE market_volume > $1" - }, - "16bca6f4258ff3db90a26a8550c5fc35e666fb698960486528fceba3e452fd62": { - "describe": { - "columns": [ - { - "name": "number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "timestamp", - "ordinal": 1, - "type_info": "Int8" - }, - { - "name": "is_finished", - "ordinal": 2, - "type_info": "Bool" - }, - { - "name": "l1_tx_count", - "ordinal": 3, - "type_info": "Int4" - }, - { - "name": "l2_tx_count", - "ordinal": 4, - "type_info": "Int4" - }, - { - "name": "fee_account_address", - "ordinal": 5, - "type_info": "Bytea" - }, - { - "name": "bloom", - "ordinal": 6, - "type_info": "Bytea" - }, - { - "name": "priority_ops_onchain_data", - "ordinal": 7, - "type_info": "ByteaArray" - }, - { - "name": "hash", - "ordinal": 8, - "type_info": "Bytea" - }, - { - "name": "parent_hash", - "ordinal": 9, - "type_info": "Bytea" - }, - { - "name": "commitment", - "ordinal": 10, - "type_info": "Bytea" - }, - { - "name": "compressed_write_logs", - "ordinal": 11, - "type_info": "Bytea" - }, - { - "name": "compressed_contracts", - "ordinal": 12, - "type_info": "Bytea" - }, - { - "name": "eth_prove_tx_id", - "ordinal": 13, - "type_info": 
"Int4" - }, - { - "name": "eth_commit_tx_id", - "ordinal": 14, - "type_info": "Int4" - }, - { - "name": "eth_execute_tx_id", - "ordinal": 15, - "type_info": "Int4" - }, - { - "name": "merkle_root_hash", - "ordinal": 16, - "type_info": "Bytea" - }, - { - "name": "l2_to_l1_logs", - "ordinal": 17, - "type_info": "ByteaArray" - }, - { - "name": "l2_to_l1_messages", - "ordinal": 18, - "type_info": "ByteaArray" - }, - { - "name": "used_contract_hashes", - "ordinal": 19, - "type_info": "Jsonb" - }, - { - "name": "compressed_initial_writes", - "ordinal": 20, - "type_info": "Bytea" - }, - { - "name": "compressed_repeated_writes", - "ordinal": 21, - "type_info": "Bytea" - }, - { - "name": "l2_l1_compressed_messages", - "ordinal": 22, - "type_info": "Bytea" - }, - { - "name": "l2_l1_merkle_root", - "ordinal": 23, - "type_info": "Bytea" - }, - { - "name": "l1_gas_price", - "ordinal": 24, - "type_info": "Int8" - }, - { - "name": "l2_fair_gas_price", - "ordinal": 25, - "type_info": "Int8" - }, - { - "name": "rollup_last_leaf_index", - "ordinal": 26, - "type_info": "Int8" - }, - { - "name": "zkporter_is_available", - "ordinal": 27, - "type_info": "Bool" - }, - { - "name": "bootloader_code_hash", - "ordinal": 28, - "type_info": "Bytea" - }, - { - "name": "default_aa_code_hash", - "ordinal": 29, - "type_info": "Bytea" - }, - { - "name": "base_fee_per_gas", - "ordinal": 30, - "type_info": "Numeric" - }, - { - "name": "aux_data_hash", - "ordinal": 31, - "type_info": "Bytea" - }, - { - "name": "pass_through_data_hash", - "ordinal": 32, - "type_info": "Bytea" - }, - { - "name": "meta_parameters_hash", - "ordinal": 33, - "type_info": "Bytea" - }, - { - "name": "protocol_version", - "ordinal": 34, - "type_info": "Int4" - }, - { - "name": "compressed_state_diffs", - "ordinal": 35, - "type_info": "Bytea" - }, - { - "name": "system_logs", - "ordinal": 36, - "type_info": "ByteaArray" - }, - { - "name": "events_queue_commitment", - "ordinal": 37, - "type_info": "Bytea" - }, - { - "name": "bootloader_initial_content_commitment", - "ordinal": 38, - "type_info": "Bytea" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - false, - false, - false, - true, - true, - true, - true, - true, - true, - true, - true, - true, - false, - false, - false, - true, - true, - true, - true, - false, - false, - true, - true, - true, - true, - false, - true, - true, - true, - true, - true, - false, - true, - true - ], - "parameters": { - "Left": [ - "Bytea", - "Bytea", - "Int4", - "Int8" - ] - } - }, - "query": "SELECT number, l1_batches.timestamp, is_finished, l1_tx_count, l2_tx_count, fee_account_address, bloom, priority_ops_onchain_data, hash, parent_hash, commitment, compressed_write_logs, compressed_contracts, eth_prove_tx_id, eth_commit_tx_id, eth_execute_tx_id, merkle_root_hash, l2_to_l1_logs, l2_to_l1_messages, used_contract_hashes, compressed_initial_writes, compressed_repeated_writes, l2_l1_compressed_messages, l2_l1_merkle_root, l1_gas_price, l2_fair_gas_price, rollup_last_leaf_index, zkporter_is_available, l1_batches.bootloader_code_hash, l1_batches.default_aa_code_hash, base_fee_per_gas, aux_data_hash, pass_through_data_hash, meta_parameters_hash, protocol_version, compressed_state_diffs, system_logs, events_queue_commitment, bootloader_initial_content_commitment FROM l1_batches LEFT JOIN commitments ON commitments.l1_batch_number = l1_batches.number JOIN protocol_versions ON protocol_versions.id = l1_batches.protocol_version WHERE eth_commit_tx_id IS NULL AND number != 0 AND 
protocol_versions.bootloader_code_hash = $1 AND protocol_versions.default_account_code_hash = $2 AND commitment IS NOT NULL AND (protocol_versions.id = $3 OR protocol_versions.upgrade_tx_hash IS NULL) AND events_queue_commitment IS NOT NULL AND bootloader_initial_content_commitment IS NOT NULL ORDER BY number LIMIT $4" - }, - "173da19a30bde9f034de97c1427f3166d9615d46cdaa30f1645a36f42926fa63": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int4" - }, - { - "name": "nonce", - "ordinal": 1, - "type_info": "Int8" - }, - { - "name": "raw_tx", - "ordinal": 2, - "type_info": "Bytea" - }, - { - "name": "contract_address", - "ordinal": 3, - "type_info": "Text" - }, - { - "name": "tx_type", - "ordinal": 4, - "type_info": "Text" - }, - { - "name": "gas_used", - "ordinal": 5, - "type_info": "Int8" - }, - { - "name": "created_at", - "ordinal": 6, - "type_info": "Timestamp" - }, - { - "name": "updated_at", - "ordinal": 7, - "type_info": "Timestamp" - }, - { - "name": "has_failed", - "ordinal": 8, - "type_info": "Bool" - }, - { - "name": "sent_at_block", - "ordinal": 9, - "type_info": "Int4" - }, - { - "name": "confirmed_eth_tx_history_id", - "ordinal": 10, - "type_info": "Int4" - }, - { - "name": "predicted_gas_cost", - "ordinal": 11, - "type_info": "Int8" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - true, - false, - false, - false, - true, - true, - false - ], - "parameters": { - "Left": [ - "Bytea", - "Int8", - "Text", - "Text", - "Int8" - ] - } - }, - "query": "INSERT INTO eth_txs (raw_tx, nonce, tx_type, contract_address, predicted_gas_cost, created_at, updated_at) VALUES ($1, $2, $3, $4, $5, now(), now()) RETURNING *" - }, - "17a42a97e87a675bd465103ebedc63d6d091e5bb093c7905de70aed3dc71d823": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "DELETE FROM storage_logs WHERE miniblock_number > $1" - }, - "191fb8c0549267b515aaa7acc199675be1ea113e9137195468bb8ce64a099ae8": { - "describe": { - "columns": [ - { - "name": "serialized_events_queue", - "ordinal": 0, - "type_info": "Jsonb" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT serialized_events_queue FROM events_queue WHERE l1_batch_number = $1" - }, - "1948ab14bafbb3ba0098563f22d958c9383877788980fe51bd217987898b1c92": { - "describe": { - "columns": [ - { - "name": "hashed_key!", - "ordinal": 0, - "type_info": "Bytea" - }, - { - "name": "value?", - "ordinal": 1, - "type_info": "Bytea" - } - ], - "nullable": [ - null, - null - ], - "parameters": { - "Left": [ - "ByteaArray", - "Int8" - ] - } - }, - "query": "SELECT u.hashed_key as \"hashed_key!\", (SELECT value FROM storage_logs WHERE hashed_key = u.hashed_key AND miniblock_number <= $2 ORDER BY miniblock_number DESC, operation_number DESC LIMIT 1) as \"value?\" FROM UNNEST($1::bytea[]) AS u(hashed_key)" - }, - "19b89495be8aa735db039ccc8a262786c58e54f132588c48f07d9537cf21d3ed": { - "describe": { - "columns": [ - { - "name": "sent_at_block", - "ordinal": 0, - "type_info": "Int4" - } - ], - "nullable": [ - true - ], - "parameters": { - "Left": [ - "Int4" - ] - } - }, - "query": "SELECT sent_at_block FROM eth_txs_history WHERE eth_tx_id = $1 AND sent_at_block IS NOT NULL ORDER BY created_at ASC LIMIT 1" - }, - "19c8d9e449034ce7fd501541e5e71e2d5957bf2329e52166f4981955a847e175": { - "describe": { - "columns": [ - { - "name": "value!", - "ordinal": 0, - "type_info": "Bytea" - }, - { - "name": "l1_address!", - 
"ordinal": 1, - "type_info": "Bytea" - }, - { - "name": "l2_address!", - "ordinal": 2, - "type_info": "Bytea" - }, - { - "name": "symbol!", - "ordinal": 3, - "type_info": "Varchar" - }, - { - "name": "name!", - "ordinal": 4, - "type_info": "Varchar" - }, - { - "name": "decimals!", - "ordinal": 5, - "type_info": "Int4" - }, - { - "name": "usd_price?", - "ordinal": 6, - "type_info": "Numeric" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - false, - true - ], - "parameters": { - "Left": [ - "ByteaArray", - "Bytea", - "Bytea", - "Bytea" - ] - } - }, - "query": "\n SELECT storage.value as \"value!\",\n tokens.l1_address as \"l1_address!\", tokens.l2_address as \"l2_address!\",\n tokens.symbol as \"symbol!\", tokens.name as \"name!\", tokens.decimals as \"decimals!\", tokens.usd_price as \"usd_price?\"\n FROM storage\n INNER JOIN tokens ON\n storage.address = tokens.l2_address OR (storage.address = $2 AND tokens.l2_address = $3)\n WHERE storage.hashed_key = ANY($1) AND storage.value != $4\n " - }, - "1a91acea72e56513a2a9e667bd5a2c171baa5fec01c51dcb7c7cf33f736c854d": { - "describe": { - "columns": [ - { - "name": "tx_hash", - "ordinal": 0, - "type_info": "Bytea" - }, - { - "name": "index_in_block", - "ordinal": 1, - "type_info": "Int4" - }, - { - "name": "l1_batch_tx_index", - "ordinal": 2, - "type_info": "Int4" - }, - { - "name": "block_number", - "ordinal": 3, - "type_info": "Int8" - }, - { - "name": "error", - "ordinal": 4, - "type_info": "Varchar" - }, - { - "name": "effective_gas_price", - "ordinal": 5, - "type_info": "Numeric" - }, - { - "name": "initiator_address", - "ordinal": 6, - "type_info": "Bytea" - }, - { - "name": "transfer_to?", - "ordinal": 7, - "type_info": "Jsonb" - }, - { - "name": "execute_contract_address?", - "ordinal": 8, - "type_info": "Jsonb" - }, - { - "name": "tx_format?", - "ordinal": 9, - "type_info": "Int4" - }, - { - "name": "refunded_gas", - "ordinal": 10, - "type_info": "Int8" - }, - { - "name": "gas_limit", - "ordinal": 11, - "type_info": "Numeric" - }, - { - "name": "block_hash?", - "ordinal": 12, - "type_info": "Bytea" - }, - { - "name": "l1_batch_number?", - "ordinal": 13, - "type_info": "Int8" - }, - { - "name": "contract_address?", - "ordinal": 14, - "type_info": "Bytea" - } - ], - "nullable": [ - false, - true, - true, - true, - true, - true, - false, - null, - null, - true, - false, - true, - false, - true, - false - ], - "parameters": { - "Left": [ - "Bytea", - "Bytea", - "Bytea" - ] - } - }, - "query": "\n WITH sl AS (\n SELECT * FROM storage_logs\n WHERE storage_logs.address = $1 AND storage_logs.tx_hash = $2\n ORDER BY storage_logs.miniblock_number DESC, storage_logs.operation_number DESC\n LIMIT 1\n )\n SELECT\n transactions.hash as tx_hash,\n transactions.index_in_block as index_in_block,\n transactions.l1_batch_tx_index as l1_batch_tx_index,\n transactions.miniblock_number as block_number,\n transactions.error as error,\n transactions.effective_gas_price as effective_gas_price,\n transactions.initiator_address as initiator_address,\n transactions.data->'to' as \"transfer_to?\",\n transactions.data->'contractAddress' as \"execute_contract_address?\",\n transactions.tx_format as \"tx_format?\",\n transactions.refunded_gas as refunded_gas,\n transactions.gas_limit as gas_limit,\n miniblocks.hash as \"block_hash?\",\n miniblocks.l1_batch_number as \"l1_batch_number?\",\n sl.key as \"contract_address?\"\n FROM transactions\n LEFT JOIN miniblocks\n ON miniblocks.number = transactions.miniblock_number\n LEFT JOIN sl\n ON sl.value 
!= $3\n WHERE transactions.hash = $2\n " - }, - "1becc0cdf3dbc9160853bb20c9130417cc6e17f576e9d239f889a1932eda9f4f": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int4" - }, - { - "name": "eth_tx_id", - "ordinal": 1, - "type_info": "Int4" - } - ], - "nullable": [ - false, - false - ], - "parameters": { - "Left": [ - "Text" - ] - } - }, - "query": "UPDATE eth_txs_history SET updated_at = now(), confirmed_at = now() WHERE tx_hash = $1 RETURNING id, eth_tx_id" - }, - "1c1a4cdf476de4f4cc83a31151fc4c407b93b53e2cd995f8bb5222d0a3c38c47": { - "describe": { - "columns": [ - { - "name": "number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "timestamp", - "ordinal": 1, - "type_info": "Int8" - }, - { - "name": "l1_tx_count", - "ordinal": 2, - "type_info": "Int4" - }, - { - "name": "l2_tx_count", - "ordinal": 3, - "type_info": "Int4" - }, - { - "name": "root_hash?", - "ordinal": 4, - "type_info": "Bytea" - }, - { - "name": "commit_tx_hash?", - "ordinal": 5, - "type_info": "Text" - }, - { - "name": "committed_at?", - "ordinal": 6, - "type_info": "Timestamp" - }, - { - "name": "prove_tx_hash?", - "ordinal": 7, - "type_info": "Text" - }, - { - "name": "proven_at?", - "ordinal": 8, - "type_info": "Timestamp" - }, - { - "name": "execute_tx_hash?", - "ordinal": 9, - "type_info": "Text" - }, - { - "name": "executed_at?", - "ordinal": 10, - "type_info": "Timestamp" - }, - { - "name": "l1_gas_price", - "ordinal": 11, - "type_info": "Int8" - }, - { - "name": "l2_fair_gas_price", - "ordinal": 12, - "type_info": "Int8" - }, - { - "name": "bootloader_code_hash", - "ordinal": 13, - "type_info": "Bytea" - }, - { - "name": "default_aa_code_hash", - "ordinal": 14, - "type_info": "Bytea" - } - ], - "nullable": [ - false, - false, - false, - false, - true, - false, - true, - false, - true, - false, - true, - false, - false, - true, - true - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "\n SELECT l1_batches.number,\n l1_batches.timestamp,\n l1_batches.l1_tx_count,\n l1_batches.l2_tx_count,\n l1_batches.hash as \"root_hash?\",\n commit_tx.tx_hash as \"commit_tx_hash?\",\n commit_tx.confirmed_at as \"committed_at?\",\n prove_tx.tx_hash as \"prove_tx_hash?\",\n prove_tx.confirmed_at as \"proven_at?\",\n execute_tx.tx_hash as \"execute_tx_hash?\",\n execute_tx.confirmed_at as \"executed_at?\",\n l1_batches.l1_gas_price,\n l1_batches.l2_fair_gas_price,\n l1_batches.bootloader_code_hash,\n l1_batches.default_aa_code_hash\n FROM l1_batches\n LEFT JOIN eth_txs_history as commit_tx ON (l1_batches.eth_commit_tx_id = commit_tx.eth_tx_id AND commit_tx.confirmed_at IS NOT NULL)\n LEFT JOIN eth_txs_history as prove_tx ON (l1_batches.eth_prove_tx_id = prove_tx.eth_tx_id AND prove_tx.confirmed_at IS NOT NULL)\n LEFT JOIN eth_txs_history as execute_tx ON (l1_batches.eth_execute_tx_id = execute_tx.eth_tx_id AND execute_tx.confirmed_at IS NOT NULL)\n WHERE l1_batches.number = $1\n " - }, - "1c583696808f93ff009ddf5df0ea36fe2621827fbd425c39ed4c9670ebc6431b": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Text", - "Int8" - ] - } - }, - "query": "\n UPDATE witness_inputs_fri SET status =$1, updated_at = now()\n WHERE l1_batch_number = $2\n " - }, - "1d1f5198cbb0b9cd70019a9b386212de294075c00ebac4dbd39fda5397dbb07c": { - "describe": { - "columns": [ - { - "name": "number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "timestamp", - "ordinal": 1, - "type_info": "Int8" - }, - { - "name": "is_finished", - "ordinal": 2, - 
"type_info": "Bool" - }, - { - "name": "l1_tx_count", - "ordinal": 3, - "type_info": "Int4" - }, - { - "name": "l2_tx_count", - "ordinal": 4, - "type_info": "Int4" - }, - { - "name": "fee_account_address", - "ordinal": 5, - "type_info": "Bytea" - }, - { - "name": "bloom", - "ordinal": 6, - "type_info": "Bytea" - }, - { - "name": "priority_ops_onchain_data", - "ordinal": 7, - "type_info": "ByteaArray" - }, - { - "name": "hash", - "ordinal": 8, - "type_info": "Bytea" - }, - { - "name": "parent_hash", - "ordinal": 9, - "type_info": "Bytea" - }, - { - "name": "commitment", - "ordinal": 10, - "type_info": "Bytea" - }, - { - "name": "compressed_write_logs", - "ordinal": 11, - "type_info": "Bytea" - }, - { - "name": "compressed_contracts", - "ordinal": 12, - "type_info": "Bytea" - }, - { - "name": "eth_prove_tx_id", - "ordinal": 13, - "type_info": "Int4" - }, - { - "name": "eth_commit_tx_id", - "ordinal": 14, - "type_info": "Int4" - }, - { - "name": "eth_execute_tx_id", - "ordinal": 15, - "type_info": "Int4" - }, - { - "name": "merkle_root_hash", - "ordinal": 16, - "type_info": "Bytea" - }, - { - "name": "l2_to_l1_logs", - "ordinal": 17, - "type_info": "ByteaArray" - }, - { - "name": "l2_to_l1_messages", - "ordinal": 18, - "type_info": "ByteaArray" - }, - { - "name": "used_contract_hashes", - "ordinal": 19, - "type_info": "Jsonb" - }, - { - "name": "compressed_initial_writes", - "ordinal": 20, - "type_info": "Bytea" - }, - { - "name": "compressed_repeated_writes", - "ordinal": 21, - "type_info": "Bytea" - }, - { - "name": "l2_l1_compressed_messages", - "ordinal": 22, - "type_info": "Bytea" - }, - { - "name": "l2_l1_merkle_root", - "ordinal": 23, - "type_info": "Bytea" - }, - { - "name": "l1_gas_price", - "ordinal": 24, - "type_info": "Int8" - }, - { - "name": "l2_fair_gas_price", - "ordinal": 25, - "type_info": "Int8" - }, - { - "name": "rollup_last_leaf_index", - "ordinal": 26, - "type_info": "Int8" - }, - { - "name": "zkporter_is_available", - "ordinal": 27, - "type_info": "Bool" - }, - { - "name": "bootloader_code_hash", - "ordinal": 28, - "type_info": "Bytea" - }, - { - "name": "default_aa_code_hash", - "ordinal": 29, - "type_info": "Bytea" - }, - { - "name": "base_fee_per_gas", - "ordinal": 30, - "type_info": "Numeric" - }, - { - "name": "aux_data_hash", - "ordinal": 31, - "type_info": "Bytea" - }, - { - "name": "pass_through_data_hash", - "ordinal": 32, - "type_info": "Bytea" - }, - { - "name": "meta_parameters_hash", - "ordinal": 33, - "type_info": "Bytea" - }, - { - "name": "protocol_version", - "ordinal": 34, - "type_info": "Int4" - }, - { - "name": "compressed_state_diffs", - "ordinal": 35, - "type_info": "Bytea" - }, - { - "name": "system_logs", - "ordinal": 36, - "type_info": "ByteaArray" - }, - { - "name": "events_queue_commitment", - "ordinal": 37, - "type_info": "Bytea" - }, - { - "name": "bootloader_initial_content_commitment", - "ordinal": 38, - "type_info": "Bytea" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - false, - false, - false, - true, - true, - true, - true, - true, - true, - true, - true, - true, - false, - false, - false, - true, - true, - true, - true, - false, - false, - true, - true, - true, - true, - false, - true, - true, - true, - true, - true, - false, - true, - true - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT number, timestamp, is_finished, l1_tx_count, l2_tx_count, fee_account_address, bloom, priority_ops_onchain_data, hash, parent_hash, commitment, compressed_write_logs, compressed_contracts, 
eth_prove_tx_id, eth_commit_tx_id, eth_execute_tx_id, merkle_root_hash, l2_to_l1_logs, l2_to_l1_messages, used_contract_hashes, compressed_initial_writes, compressed_repeated_writes, l2_l1_compressed_messages, l2_l1_merkle_root, l1_gas_price, l2_fair_gas_price, rollup_last_leaf_index, zkporter_is_available, bootloader_code_hash, default_aa_code_hash, base_fee_per_gas, aux_data_hash, pass_through_data_hash, meta_parameters_hash, protocol_version, compressed_state_diffs, system_logs, events_queue_commitment, bootloader_initial_content_commitment FROM l1_batches LEFT JOIN commitments ON commitments.l1_batch_number = l1_batches.number WHERE eth_commit_tx_id IS NOT NULL AND eth_prove_tx_id IS NULL ORDER BY number LIMIT $1" - }, - "1d3e9cd259fb70a2bc81e8344576c3fb27b47ad6cdb6751d2a9b8c8d342b7a75": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Text", - "Int8" - ] - } - }, - "query": "\n UPDATE prover_jobs\n SET status = $1, updated_at = now()\n WHERE id = $2\n " - }, - "1dbe99ed32b361936c2a829a99a92ac792a02c8a304d23b140804844a7b0f857": { - "describe": { - "columns": [ - { - "name": "l1_batch_number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "circuit_id", - "ordinal": 1, - "type_info": "Int2" - }, - { - "name": "depth", - "ordinal": 2, - "type_info": "Int4" - } - ], - "nullable": [ - false, - false, - false - ], - "parameters": { - "Left": [] - } - }, - "query": "\n UPDATE node_aggregation_witness_jobs_fri\n SET status='queued'\n WHERE (l1_batch_number, circuit_id, depth) IN\n (SELECT prover_jobs_fri.l1_batch_number, prover_jobs_fri.circuit_id, prover_jobs_fri.depth\n FROM prover_jobs_fri\n JOIN node_aggregation_witness_jobs_fri nawj ON\n prover_jobs_fri.l1_batch_number = nawj.l1_batch_number\n AND prover_jobs_fri.circuit_id = nawj.circuit_id\n AND prover_jobs_fri.depth = nawj.depth\n WHERE nawj.status = 'waiting_for_proofs'\n AND prover_jobs_fri.status = 'successful'\n AND prover_jobs_fri.aggregation_round = 1\n AND prover_jobs_fri.depth = 0\n GROUP BY prover_jobs_fri.l1_batch_number, prover_jobs_fri.circuit_id, prover_jobs_fri.depth, nawj.number_of_dependent_jobs\n HAVING COUNT(*) = nawj.number_of_dependent_jobs)\n RETURNING l1_batch_number, circuit_id, depth;\n " - }, - "1ed353a16e8d0abaf426e5c235b20a79c727c08bc23fb1708a833a6930131691": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8", - "Text" - ] - } - }, - "query": "INSERT INTO proof_compression_jobs_fri(l1_batch_number, status, created_at, updated_at) VALUES ($1, $2, now(), now()) ON CONFLICT (l1_batch_number) DO NOTHING" - }, - "1eede5c2169aee5a767b3b6b829f53721c0c353956ccec31a75226a65325ae46": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [] - } - }, - "query": "UPDATE transactions SET in_mempool = FALSE WHERE in_mempool = TRUE" - }, - "1faf6552c221c75b7232b55210c0c37be76a57ec9dc94584b6ccb562e8b182f2": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "l1_batch_number", - "ordinal": 1, - "type_info": "Int8" - }, - { - "name": "circuit_type", - "ordinal": 2, - "type_info": "Text" - }, - { - "name": "prover_input", - "ordinal": 3, - "type_info": "Bytea" - }, - { - "name": "status", - "ordinal": 4, - "type_info": "Text" - }, - { - "name": "error", - "ordinal": 5, - "type_info": "Text" - }, - { - "name": "processing_started_at", - "ordinal": 6, - "type_info": "Timestamp" - }, - { - "name": "created_at", - "ordinal": 7, - "type_info": "Timestamp" - }, - 
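Note on the `node_aggregation_witness_jobs_fri` requeue above: a parent aggregation job flips from 'waiting_for_proofs' to 'queued' only once the number of successful child proofs for its (batch, circuit, depth) key equals the dependency total recorded on the job itself. The same statement, reflowed for readability:

    UPDATE node_aggregation_witness_jobs_fri
    SET status = 'queued'
    WHERE (l1_batch_number, circuit_id, depth) IN (
        SELECT p.l1_batch_number, p.circuit_id, p.depth
        FROM prover_jobs_fri p
        JOIN node_aggregation_witness_jobs_fri nawj
          ON p.l1_batch_number = nawj.l1_batch_number
         AND p.circuit_id = nawj.circuit_id
         AND p.depth = nawj.depth
        WHERE nawj.status = 'waiting_for_proofs'
          AND p.status = 'successful'
          AND p.aggregation_round = 1
          AND p.depth = 0
        -- Readiness check: count the successful children and compare against
        -- the dependency total stored on the waiting parent.
        GROUP BY p.l1_batch_number, p.circuit_id, p.depth,
                 nawj.number_of_dependent_jobs
        HAVING COUNT(*) = nawj.number_of_dependent_jobs
    )
    RETURNING l1_batch_number, circuit_id, depth;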
{ - "name": "updated_at", - "ordinal": 8, - "type_info": "Timestamp" - }, - { - "name": "time_taken", - "ordinal": 9, - "type_info": "Time" - }, - { - "name": "aggregation_round", - "ordinal": 10, - "type_info": "Int4" - }, - { - "name": "result", - "ordinal": 11, - "type_info": "Bytea" - }, - { - "name": "sequence_number", - "ordinal": 12, - "type_info": "Int4" - }, - { - "name": "attempts", - "ordinal": 13, - "type_info": "Int4" - }, - { - "name": "circuit_input_blob_url", - "ordinal": 14, - "type_info": "Text" - }, - { - "name": "proccesed_by", - "ordinal": 15, - "type_info": "Text" - }, - { - "name": "is_blob_cleaned", - "ordinal": 16, - "type_info": "Bool" - }, - { - "name": "protocol_version", - "ordinal": 17, - "type_info": "Int4" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - true, - true, - false, - false, - false, - false, - true, - false, - false, - true, - true, - false, - true - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT * from prover_jobs where id=$1" - }, - "20b22fd457417e9a72f5941887448f9a11b97b449db4759da0b9d368ce93996b": { - "describe": { - "columns": [ - { - "name": "recursion_scheduler_level_vk_hash", - "ordinal": 0, - "type_info": "Bytea" - }, - { - "name": "recursion_node_level_vk_hash", - "ordinal": 1, - "type_info": "Bytea" - }, - { - "name": "recursion_leaf_level_vk_hash", - "ordinal": 2, - "type_info": "Bytea" - }, - { - "name": "recursion_circuits_set_vks_hash", - "ordinal": 3, - "type_info": "Bytea" - } - ], - "nullable": [ - false, - false, - false, - false - ], - "parameters": { - "Left": [ - "Int4" - ] - } - }, - "query": "SELECT recursion_scheduler_level_vk_hash, recursion_node_level_vk_hash, recursion_leaf_level_vk_hash, recursion_circuits_set_vks_hash\n FROM protocol_versions\n WHERE id = $1\n " - }, - "21c29846f4253081057b86cc1b7ce4ef3ae618c5561c876502dc7f4e773ee91e": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8", - "Bytea", - "Bytea" - ] - } - }, - "query": "INSERT INTO commitments (l1_batch_number, events_queue_commitment, bootloader_initial_content_commitment) VALUES ($1, $2, $3) ON CONFLICT (l1_batch_number) DO NOTHING" - }, - "22b57675a726d9cfeb82a60ba50c36cab1548d197ea56a7658d3f005df07c60b": { - "describe": { - "columns": [ - { - "name": "op_id", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - null - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT MAX(priority_op_id) as \"op_id\" from transactions where is_priority = true AND miniblock_number IS NOT NULL" - }, - "22e50b6def0365ddf979b64c3c943e2a3f8e5a1abcf72e61a00a82780d2d364e": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8", - "Text", - "Text" - ] - } - }, - "query": "INSERT INTO proof_compression_jobs_fri(l1_batch_number, fri_proof_blob_url, status, created_at, updated_at) VALUES ($1, $2, $3, now(), now()) ON CONFLICT (l1_batch_number) DO NOTHING" - }, - "2397c1a050d358b596c9881c379bf823e267c03172f72c42da84cc0c04cc9d93": { - "describe": { - "columns": [ - { - "name": "miniblock_number!", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "hash", - "ordinal": 1, - "type_info": "Bytea" - }, - { - "name": "index_in_block!", - "ordinal": 2, - "type_info": "Int4" - }, - { - "name": "l1_batch_tx_index!", - "ordinal": 3, - "type_info": "Int4" - } - ], - "nullable": [ - true, - false, - true, - true - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "\n SELECT miniblock_number as \"miniblock_number!\",\n 
hash, index_in_block as \"index_in_block!\", l1_batch_tx_index as \"l1_batch_tx_index!\"\n FROM transactions\n WHERE l1_batch_number = $1\n ORDER BY miniblock_number, index_in_block\n " - }, - "23c154c243f27912320ea0d68bc7bb372517010fb8c5737621cadd7b408afe8d": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Text", - "Text", - "Int8" - ] - } - }, - "query": "UPDATE proof_compression_jobs_fri SET status =$1, error= $2, updated_at = now() WHERE l1_batch_number = $3" - }, - "2424f0ab2b156e953841107cfc0ccd76519d13c62fdcd5fd6b39e3503d6ec82c": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Text", - "Int8" - ] - } - }, - "query": "\n UPDATE scheduler_witness_jobs_fri\n SET status ='failed', error= $1, updated_at = now()\n WHERE l1_batch_number = $2\n " - }, - "269f3ac58705d65f775a6c84a62b9c0726beef51eb633937fa2a75b80c6d7fbc": { - "describe": { - "columns": [ - { - "name": "hash", - "ordinal": 0, - "type_info": "Bytea" - }, - { - "name": "number", - "ordinal": 1, - "type_info": "Int8" - }, - { - "name": "timestamp", - "ordinal": 2, - "type_info": "Int8" - } - ], - "nullable": [ - false, - false, - false - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT hash, number, timestamp FROM miniblocks WHERE number > $1 ORDER BY number ASC" - }, - "26ac14152ade97892cd78d37884523187a5619093887b5e6564c3a80741b9d94": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int4" - }, - { - "name": "timestamp", - "ordinal": 1, - "type_info": "Int8" - }, - { - "name": "recursion_scheduler_level_vk_hash", - "ordinal": 2, - "type_info": "Bytea" - }, - { - "name": "recursion_node_level_vk_hash", - "ordinal": 3, - "type_info": "Bytea" - }, - { - "name": "recursion_leaf_level_vk_hash", - "ordinal": 4, - "type_info": "Bytea" - }, - { - "name": "recursion_circuits_set_vks_hash", - "ordinal": 5, - "type_info": "Bytea" - }, - { - "name": "bootloader_code_hash", - "ordinal": 6, - "type_info": "Bytea" - }, - { - "name": "default_account_code_hash", - "ordinal": 7, - "type_info": "Bytea" - }, - { - "name": "verifier_address", - "ordinal": 8, - "type_info": "Bytea" - }, - { - "name": "upgrade_tx_hash", - "ordinal": 9, - "type_info": "Bytea" - }, - { - "name": "created_at", - "ordinal": 10, - "type_info": "Timestamp" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - false, - false, - false, - false, - true, - false - ], - "parameters": { - "Left": [ - "Int4" - ] - } - }, - "query": "SELECT * FROM protocol_versions WHERE id = $1" - }, - "297d6517ec5f050e8d8fe4878e4ff330b4b10af4d60de86e8a25e2cd70e0363b": { - "describe": { - "columns": [ - { - "name": "verification_info", - "ordinal": 0, - "type_info": "Jsonb" - } - ], - "nullable": [ - true - ], - "parameters": { - "Left": [ - "Bytea" - ] - } - }, - "query": "SELECT verification_info FROM contracts_verification_info WHERE address = $1" - }, - "2985ea2bf34a94573103654c00a49d2a946afe5d552ac1c2a2d055eb9d6f2cf1": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Time", - "Int8" - ] - } - }, - "query": "\n UPDATE node_aggregation_witness_jobs_fri\n SET status = 'successful', updated_at = now(), time_taken = $1\n WHERE id = $2\n " - }, - "29c04c63e5df40ef439d467373a848bce74de906548331856222cdb7551ca907": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "UPDATE contract_verification_requests SET status = 'successful', updated_at = 
now() WHERE id = $1" - }, - "29f7f469cd58b256237536463f1e9d58438314fd1fe733a6bb53e6523f78bb49": { - "describe": { - "columns": [ - { - "name": "attempts", - "ordinal": 0, - "type_info": "Int2" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT attempts FROM prover_jobs_fri WHERE id = $1" - }, - "2a38561e789af470d6ef1a905143f2d8d102b4ff23cebe97586681da9e4084a9": { - "describe": { - "columns": [ - { - "name": "number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "timestamp", - "ordinal": 1, - "type_info": "Int8" - }, - { - "name": "hash", - "ordinal": 2, - "type_info": "Bytea" - }, - { - "name": "l1_tx_count", - "ordinal": 3, - "type_info": "Int4" - }, - { - "name": "l2_tx_count", - "ordinal": 4, - "type_info": "Int4" - }, - { - "name": "base_fee_per_gas", - "ordinal": 5, - "type_info": "Numeric" - }, - { - "name": "l1_gas_price", - "ordinal": 6, - "type_info": "Int8" - }, - { - "name": "l2_fair_gas_price", - "ordinal": 7, - "type_info": "Int8" - }, - { - "name": "bootloader_code_hash", - "ordinal": 8, - "type_info": "Bytea" - }, - { - "name": "default_aa_code_hash", - "ordinal": 9, - "type_info": "Bytea" - }, - { - "name": "protocol_version", - "ordinal": 10, - "type_info": "Int4" - }, - { - "name": "virtual_blocks", - "ordinal": 11, - "type_info": "Int8" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - false, - false, - false, - true, - true, - true, - false - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT number, timestamp, hash, l1_tx_count, l2_tx_count, base_fee_per_gas, l1_gas_price, l2_fair_gas_price, bootloader_code_hash, default_aa_code_hash, protocol_version, virtual_blocks\n FROM miniblocks WHERE number = $1" - }, - "2a98f1b149045f25d2830c0b4ffaaa400b4c572eb3842add22e8540f44943711": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Int8", - "Int2" - ] - } - }, - "query": "SELECT id from prover_jobs_fri WHERE l1_batch_number = $1 AND status = 'successful' AND aggregation_round = $2" - }, - "2adfdba6fa2b6b967ba03ae6f930e7f3ea851f678d30df699ced27b2dbb01c2a": { - "describe": { - "columns": [ - { - "name": "number", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT number FROM l1_batches LEFT JOIN eth_txs_history as execute_tx ON (l1_batches.eth_execute_tx_id = execute_tx.eth_tx_id) WHERE execute_tx.confirmed_at IS NOT NULL ORDER BY number DESC LIMIT 1" - }, - "2af0eddab563f0800a4762031e8703dbcac11450daacf3439289641b9b179b1c": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "status", - "ordinal": 1, - "type_info": "Text" - }, - { - "name": "attempts", - "ordinal": 2, - "type_info": "Int2" - } - ], - "nullable": [ - false, - false, - false - ], - "parameters": { - "Left": [ - "Interval", - "Int2" - ] - } - }, - "query": "\n UPDATE node_aggregation_witness_jobs_fri\n SET status = 'queued', updated_at = now(), processing_started_at = now()\n WHERE (status = 'in_progress' AND processing_started_at <= now() - $1::interval AND attempts < $2)\n OR (status = 'failed' AND attempts < $2)\n RETURNING id, status, attempts\n " - }, - "2b22e7d15adf069c8e68954059b83f71a71350f3325b4280840c4be7e54a319f": { - "describe": { - "columns": [ - { - "name": "l1_address", - "ordinal": 0, - "type_info": "Bytea" - }, - { - "name": 
"l2_address", - "ordinal": 1, - "type_info": "Bytea" - }, - { - "name": "name", - "ordinal": 2, - "type_info": "Varchar" - }, - { - "name": "symbol", - "ordinal": 3, - "type_info": "Varchar" - }, - { - "name": "decimals", - "ordinal": 4, - "type_info": "Int4" - } - ], - "nullable": [ - false, - false, - false, - false, - false - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT l1_address, l2_address, name, symbol, decimals FROM tokens\n WHERE well_known = true\n ORDER BY symbol" - }, - "2b76ca7059810f691a2d7d053e7e62e06de13e7ddb7747e39335bb10c45534e9": { - "describe": { - "columns": [ - { - "name": "l1_batch_number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "circuit_id", - "ordinal": 1, - "type_info": "Int2" - } - ], - "nullable": [ - false, - false - ], - "parameters": { - "Left": [] - } - }, - "query": "\n UPDATE leaf_aggregation_witness_jobs_fri\n SET status='queued'\n WHERE (l1_batch_number, circuit_id) IN\n (SELECT prover_jobs_fri.l1_batch_number, prover_jobs_fri.circuit_id\n FROM prover_jobs_fri\n JOIN leaf_aggregation_witness_jobs_fri lawj ON\n prover_jobs_fri.l1_batch_number = lawj.l1_batch_number\n AND prover_jobs_fri.circuit_id = lawj.circuit_id\n WHERE lawj.status = 'waiting_for_proofs'\n AND prover_jobs_fri.status = 'successful'\n AND prover_jobs_fri.aggregation_round = 0\n GROUP BY prover_jobs_fri.l1_batch_number, prover_jobs_fri.circuit_id, lawj.number_of_basic_circuits\n HAVING COUNT(*) = lawj.number_of_basic_circuits)\n RETURNING l1_batch_number, circuit_id;\n " - }, - "2bd9137542076526c245366057f0f3f57c08368f6e0dc86d49293a91875272b8": { - "describe": { - "columns": [ - { - "name": "attempts", - "ordinal": 0, - "type_info": "Int2" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT attempts FROM witness_inputs_fri WHERE l1_batch_number = $1" - }, - "2c136284610f728ddba3e255d7dc573b10e4baf9151de194b7d8e0dc40c40602": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Bytea", - "Jsonb" - ] - } - }, - "query": "INSERT INTO transaction_traces (tx_hash, trace, created_at, updated_at) VALUES ($1, $2, now(), now())" - }, - "2c4178a125ddc46a36f7548c840e481e85738502c56566d1eef84feef2161b2e": { - "describe": { - "columns": [ - { - "name": "hash", - "ordinal": 0, - "type_info": "Bytea" - }, - { - "name": "is_priority", - "ordinal": 1, - "type_info": "Bool" - }, - { - "name": "full_fee", - "ordinal": 2, - "type_info": "Numeric" - }, - { - "name": "layer_2_tip_fee", - "ordinal": 3, - "type_info": "Numeric" - }, - { - "name": "initiator_address", - "ordinal": 4, - "type_info": "Bytea" - }, - { - "name": "nonce", - "ordinal": 5, - "type_info": "Int8" - }, - { - "name": "signature", - "ordinal": 6, - "type_info": "Bytea" - }, - { - "name": "input", - "ordinal": 7, - "type_info": "Bytea" - }, - { - "name": "data", - "ordinal": 8, - "type_info": "Jsonb" - }, - { - "name": "received_at", - "ordinal": 9, - "type_info": "Timestamp" - }, - { - "name": "priority_op_id", - "ordinal": 10, - "type_info": "Int8" - }, - { - "name": "l1_batch_number", - "ordinal": 11, - "type_info": "Int8" - }, - { - "name": "index_in_block", - "ordinal": 12, - "type_info": "Int4" - }, - { - "name": "error", - "ordinal": 13, - "type_info": "Varchar" - }, - { - "name": "gas_limit", - "ordinal": 14, - "type_info": "Numeric" - }, - { - "name": "gas_per_storage_limit", - "ordinal": 15, - "type_info": "Numeric" - }, - { - "name": "gas_per_pubdata_limit", - "ordinal": 16, - "type_info": "Numeric" - }, - { - 
"name": "tx_format", - "ordinal": 17, - "type_info": "Int4" - }, - { - "name": "created_at", - "ordinal": 18, - "type_info": "Timestamp" - }, - { - "name": "updated_at", - "ordinal": 19, - "type_info": "Timestamp" - }, - { - "name": "execution_info", - "ordinal": 20, - "type_info": "Jsonb" - }, - { - "name": "contract_address", - "ordinal": 21, - "type_info": "Bytea" - }, - { - "name": "in_mempool", - "ordinal": 22, - "type_info": "Bool" - }, - { - "name": "l1_block_number", - "ordinal": 23, - "type_info": "Int4" - }, - { - "name": "value", - "ordinal": 24, - "type_info": "Numeric" - }, - { - "name": "paymaster", - "ordinal": 25, - "type_info": "Bytea" - }, - { - "name": "paymaster_input", - "ordinal": 26, - "type_info": "Bytea" - }, - { - "name": "max_fee_per_gas", - "ordinal": 27, - "type_info": "Numeric" - }, - { - "name": "max_priority_fee_per_gas", - "ordinal": 28, - "type_info": "Numeric" - }, - { - "name": "effective_gas_price", - "ordinal": 29, - "type_info": "Numeric" - }, - { - "name": "miniblock_number", - "ordinal": 30, - "type_info": "Int8" - }, - { - "name": "l1_batch_tx_index", - "ordinal": 31, - "type_info": "Int4" - }, - { - "name": "refunded_gas", - "ordinal": 32, - "type_info": "Int8" - }, - { - "name": "l1_tx_mint", - "ordinal": 33, - "type_info": "Numeric" - }, - { - "name": "l1_tx_refund_recipient", - "ordinal": 34, - "type_info": "Bytea" - }, - { - "name": "upgrade_id", - "ordinal": 35, - "type_info": "Int4" - } - ], - "nullable": [ - false, - false, - true, - true, - false, - true, - true, - true, - false, - false, - true, - true, - true, - true, - true, - true, - true, - true, - false, - false, - false, - true, - false, - true, - false, - false, - false, - true, - true, - true, - true, - true, - false, - true, - true, - true - ], - "parameters": { - "Left": [ - "Int8", - "Numeric", - "Numeric", - "Int4" - ] - } - }, - "query": "UPDATE transactions\n SET in_mempool = TRUE\n FROM (\n SELECT hash FROM (\n SELECT hash\n FROM transactions\n WHERE miniblock_number IS NULL AND in_mempool = FALSE AND error IS NULL\n AND (is_priority = TRUE OR (max_fee_per_gas >= $2 and gas_per_pubdata_limit >= $3))\n AND tx_format != $4\n ORDER BY is_priority DESC, priority_op_id, received_at\n LIMIT $1\n ) as subquery1\n ORDER BY hash\n ) as subquery2\n WHERE transactions.hash = subquery2.hash\n RETURNING transactions.*" - }, - "2e3f116ca05ae70b7c83ac550302194c91f57b69902ff8e42140fde732ae5e6a": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8", - "Int4Array" - ] - } - }, - "query": "DELETE FROM storage_logs WHERE miniblock_number = $1 AND operation_number != ALL($2)" - }, - "2e543dc0013150040bb86e278bbe86765ce1ebad72a32bb931fe02a9c516a11c": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Bytea", - "Int8" - ] - } - }, - "query": "UPDATE l1_batches SET hash = $1 WHERE number = $2" - }, - "2ff4a13a75537cc30b2c3d52d3ef6237850150e4a4569adeaa4da4a9ac5bc689": { - "describe": { - "columns": [ - { - "name": "bytecode", - "ordinal": 0, - "type_info": "Bytea" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Bytea", - "Int8" - ] - } - }, - "query": "SELECT bytecode FROM factory_deps WHERE bytecode_hash = $1 AND miniblock_number <= $2" - }, - "300e5d4fa6d2481a10cb6d857f66a81b6c3760906c6c2ab02f126d52efc0d4d1": { - "describe": { - "columns": [ - { - "name": "hash", - "ordinal": 0, - "type_info": "Bytea" - }, - { - "name": "is_priority", - "ordinal": 1, - "type_info": "Bool" - }, - { - "name": 
"full_fee", - "ordinal": 2, - "type_info": "Numeric" - }, - { - "name": "layer_2_tip_fee", - "ordinal": 3, - "type_info": "Numeric" - }, - { - "name": "initiator_address", - "ordinal": 4, - "type_info": "Bytea" - }, - { - "name": "nonce", - "ordinal": 5, - "type_info": "Int8" - }, - { - "name": "signature", - "ordinal": 6, - "type_info": "Bytea" - }, - { - "name": "input", - "ordinal": 7, - "type_info": "Bytea" - }, - { - "name": "data", - "ordinal": 8, - "type_info": "Jsonb" - }, - { - "name": "received_at", - "ordinal": 9, - "type_info": "Timestamp" - }, - { - "name": "priority_op_id", - "ordinal": 10, - "type_info": "Int8" - }, - { - "name": "l1_batch_number", - "ordinal": 11, - "type_info": "Int8" - }, - { - "name": "index_in_block", - "ordinal": 12, - "type_info": "Int4" - }, - { - "name": "error", - "ordinal": 13, - "type_info": "Varchar" - }, - { - "name": "gas_limit", - "ordinal": 14, - "type_info": "Numeric" - }, - { - "name": "gas_per_storage_limit", - "ordinal": 15, - "type_info": "Numeric" - }, - { - "name": "gas_per_pubdata_limit", - "ordinal": 16, - "type_info": "Numeric" - }, - { - "name": "tx_format", - "ordinal": 17, - "type_info": "Int4" - }, - { - "name": "created_at", - "ordinal": 18, - "type_info": "Timestamp" - }, - { - "name": "updated_at", - "ordinal": 19, - "type_info": "Timestamp" - }, - { - "name": "execution_info", - "ordinal": 20, - "type_info": "Jsonb" - }, - { - "name": "contract_address", - "ordinal": 21, - "type_info": "Bytea" - }, - { - "name": "in_mempool", - "ordinal": 22, - "type_info": "Bool" - }, - { - "name": "l1_block_number", - "ordinal": 23, - "type_info": "Int4" - }, - { - "name": "value", - "ordinal": 24, - "type_info": "Numeric" - }, - { - "name": "paymaster", - "ordinal": 25, - "type_info": "Bytea" - }, - { - "name": "paymaster_input", - "ordinal": 26, - "type_info": "Bytea" - }, - { - "name": "max_fee_per_gas", - "ordinal": 27, - "type_info": "Numeric" - }, - { - "name": "max_priority_fee_per_gas", - "ordinal": 28, - "type_info": "Numeric" - }, - { - "name": "effective_gas_price", - "ordinal": 29, - "type_info": "Numeric" - }, - { - "name": "miniblock_number", - "ordinal": 30, - "type_info": "Int8" - }, - { - "name": "l1_batch_tx_index", - "ordinal": 31, - "type_info": "Int4" - }, - { - "name": "refunded_gas", - "ordinal": 32, - "type_info": "Int8" - }, - { - "name": "l1_tx_mint", - "ordinal": 33, - "type_info": "Numeric" - }, - { - "name": "l1_tx_refund_recipient", - "ordinal": 34, - "type_info": "Bytea" - }, - { - "name": "upgrade_id", - "ordinal": 35, - "type_info": "Int4" - } - ], - "nullable": [ - false, - false, - true, - true, - false, - true, - true, - true, - false, - false, - true, - true, - true, - true, - true, - true, - true, - true, - false, - false, - false, - true, - false, - true, - false, - false, - false, - true, - true, - true, - true, - true, - false, - true, - true, - true - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT * FROM transactions WHERE miniblock_number IS NOT NULL AND l1_batch_number IS NULL ORDER BY miniblock_number, index_in_block" - }, - "3055b9f38a04f26dac9adbba978679e6877f44c758fd03461e940a8f9a4e5af1": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8", - "Int2", - "Int4", - "Text", - "Int4", - "Int4" - ] - } - }, - "query": "INSERT INTO node_aggregation_witness_jobs_fri (l1_batch_number, circuit_id, depth, aggregations_url, number_of_dependent_jobs, protocol_version, status, created_at, updated_at)\n VALUES ($1, $2, $3, $4, $5, $6, 
'waiting_for_proofs', now(), now())\n ON CONFLICT(l1_batch_number, circuit_id, depth)\n DO UPDATE SET updated_at=now()" - }, - "3167c62f6da5171081f6c003e64a3096829d4da94c3af48867d12d2c135f1a29": { - "describe": { - "columns": [ - { - "name": "number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "timestamp", - "ordinal": 1, - "type_info": "Int8" - }, - { - "name": "is_finished", - "ordinal": 2, - "type_info": "Bool" - }, - { - "name": "l1_tx_count", - "ordinal": 3, - "type_info": "Int4" - }, - { - "name": "l2_tx_count", - "ordinal": 4, - "type_info": "Int4" - }, - { - "name": "fee_account_address", - "ordinal": 5, - "type_info": "Bytea" - }, - { - "name": "bloom", - "ordinal": 6, - "type_info": "Bytea" - }, - { - "name": "priority_ops_onchain_data", - "ordinal": 7, - "type_info": "ByteaArray" - }, - { - "name": "hash", - "ordinal": 8, - "type_info": "Bytea" - }, - { - "name": "parent_hash", - "ordinal": 9, - "type_info": "Bytea" - }, - { - "name": "commitment", - "ordinal": 10, - "type_info": "Bytea" - }, - { - "name": "compressed_write_logs", - "ordinal": 11, - "type_info": "Bytea" - }, - { - "name": "compressed_contracts", - "ordinal": 12, - "type_info": "Bytea" - }, - { - "name": "eth_prove_tx_id", - "ordinal": 13, - "type_info": "Int4" - }, - { - "name": "eth_commit_tx_id", - "ordinal": 14, - "type_info": "Int4" - }, - { - "name": "eth_execute_tx_id", - "ordinal": 15, - "type_info": "Int4" - }, - { - "name": "merkle_root_hash", - "ordinal": 16, - "type_info": "Bytea" - }, - { - "name": "l2_to_l1_logs", - "ordinal": 17, - "type_info": "ByteaArray" - }, - { - "name": "l2_to_l1_messages", - "ordinal": 18, - "type_info": "ByteaArray" - }, - { - "name": "used_contract_hashes", - "ordinal": 19, - "type_info": "Jsonb" - }, - { - "name": "compressed_initial_writes", - "ordinal": 20, - "type_info": "Bytea" - }, - { - "name": "compressed_repeated_writes", - "ordinal": 21, - "type_info": "Bytea" - }, - { - "name": "l2_l1_compressed_messages", - "ordinal": 22, - "type_info": "Bytea" - }, - { - "name": "l2_l1_merkle_root", - "ordinal": 23, - "type_info": "Bytea" - }, - { - "name": "l1_gas_price", - "ordinal": 24, - "type_info": "Int8" - }, - { - "name": "l2_fair_gas_price", - "ordinal": 25, - "type_info": "Int8" - }, - { - "name": "rollup_last_leaf_index", - "ordinal": 26, - "type_info": "Int8" - }, - { - "name": "zkporter_is_available", - "ordinal": 27, - "type_info": "Bool" - }, - { - "name": "bootloader_code_hash", - "ordinal": 28, - "type_info": "Bytea" - }, - { - "name": "default_aa_code_hash", - "ordinal": 29, - "type_info": "Bytea" - }, - { - "name": "base_fee_per_gas", - "ordinal": 30, - "type_info": "Numeric" - }, - { - "name": "aux_data_hash", - "ordinal": 31, - "type_info": "Bytea" - }, - { - "name": "pass_through_data_hash", - "ordinal": 32, - "type_info": "Bytea" - }, - { - "name": "meta_parameters_hash", - "ordinal": 33, - "type_info": "Bytea" - }, - { - "name": "protocol_version", - "ordinal": 34, - "type_info": "Int4" - }, - { - "name": "compressed_state_diffs", - "ordinal": 35, - "type_info": "Bytea" - }, - { - "name": "system_logs", - "ordinal": 36, - "type_info": "ByteaArray" - }, - { - "name": "events_queue_commitment", - "ordinal": 37, - "type_info": "Bytea" - }, - { - "name": "bootloader_initial_content_commitment", - "ordinal": 38, - "type_info": "Bytea" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - false, - false, - false, - true, - true, - true, - true, - true, - true, - true, - true, - true, - false, - false, - false, - true, - true, - 
true, - true, - false, - false, - true, - true, - true, - true, - false, - true, - true, - true, - true, - true, - false, - true, - true - ], - "parameters": { - "Left": [ - "Bytea", - "Bytea", - "Int4", - "Int8" - ] - } - }, - "query": "SELECT number, l1_batches.timestamp, is_finished, l1_tx_count, l2_tx_count, fee_account_address, bloom, priority_ops_onchain_data, hash, parent_hash, commitment, compressed_write_logs, compressed_contracts, eth_prove_tx_id, eth_commit_tx_id, eth_execute_tx_id, merkle_root_hash, l2_to_l1_logs, l2_to_l1_messages, used_contract_hashes, compressed_initial_writes, compressed_repeated_writes, l2_l1_compressed_messages, l2_l1_merkle_root, l1_gas_price, l2_fair_gas_price, rollup_last_leaf_index, zkporter_is_available, l1_batches.bootloader_code_hash, l1_batches.default_aa_code_hash, base_fee_per_gas, aux_data_hash, pass_through_data_hash, meta_parameters_hash, protocol_version, compressed_state_diffs, system_logs, events_queue_commitment, bootloader_initial_content_commitment FROM l1_batches LEFT JOIN commitments ON commitments.l1_batch_number = l1_batches.number JOIN protocol_versions ON protocol_versions.id = l1_batches.protocol_version WHERE eth_commit_tx_id IS NULL AND number != 0 AND protocol_versions.bootloader_code_hash = $1 AND protocol_versions.default_account_code_hash = $2 AND commitment IS NOT NULL AND (protocol_versions.id = $3 OR protocol_versions.upgrade_tx_hash IS NULL) ORDER BY number LIMIT $4" - }, - "334197fef9eeca55790d366ae67bbe95d77181bdfd2ad3208a32bd50585aef2d": { - "describe": { - "columns": [ - { - "name": "hashed_key", - "ordinal": 0, - "type_info": "Bytea" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "ByteaArray" - ] - } - }, - "query": "SELECT hashed_key FROM initial_writes WHERE hashed_key = ANY($1)" - }, - "335826f54feadf6aa30a4e7668ad3f17a2afc6bd67d4f863e3ad61fefd1bd8d2": { - "describe": { - "columns": [ - { - "name": "number", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - null - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT MAX(number) as \"number\" FROM miniblocks" - }, - "34087096293cd8fc1c5bfcb412291c228afa1ce5dc8889a8535a2b2ecf569e03": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "contract_address", - "ordinal": 1, - "type_info": "Bytea" - }, - { - "name": "source_code", - "ordinal": 2, - "type_info": "Text" - }, - { - "name": "contract_name", - "ordinal": 3, - "type_info": "Text" - }, - { - "name": "zk_compiler_version", - "ordinal": 4, - "type_info": "Text" - }, - { - "name": "compiler_version", - "ordinal": 5, - "type_info": "Text" - }, - { - "name": "optimization_used", - "ordinal": 6, - "type_info": "Bool" - }, - { - "name": "optimizer_mode", - "ordinal": 7, - "type_info": "Text" - }, - { - "name": "constructor_arguments", - "ordinal": 8, - "type_info": "Bytea" - }, - { - "name": "is_system", - "ordinal": 9, - "type_info": "Bool" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - false, - false, - true, - false, - false - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT id, contract_address, source_code, contract_name, zk_compiler_version, compiler_version, optimization_used, optimizer_mode, constructor_arguments, is_system FROM contract_verification_requests WHERE status = 'successful' ORDER BY id" - }, - "357347157ed8ff19d223c54533c3a85bd7e64a37514d657f8d49bd6eb5be1806": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int4" - }, 
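Note on the commit-candidate query above: a batch qualifies for an L1 commit transaction only when its commitment has already been computed and its protocol version is compatible with the operator, i.e. the version either matches `$3` or carries no upgrade transaction. The gating WHERE clause, condensed to a number-only select:

    SELECT number
    FROM l1_batches
    JOIN protocol_versions ON protocol_versions.id = l1_batches.protocol_version
    WHERE eth_commit_tx_id IS NULL            -- not committed yet
      AND number != 0                         -- skip the genesis batch
      AND protocol_versions.bootloader_code_hash = $1
      AND protocol_versions.default_account_code_hash = $2
      AND commitment IS NOT NULL              -- commitment already computed
      AND (protocol_versions.id = $3 OR protocol_versions.upgrade_tx_hash IS NULL)
    ORDER BY number
    LIMIT $4;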
- { - "name": "timestamp", - "ordinal": 1, - "type_info": "Int8" - }, - { - "name": "recursion_scheduler_level_vk_hash", - "ordinal": 2, - "type_info": "Bytea" - }, - { - "name": "recursion_node_level_vk_hash", - "ordinal": 3, - "type_info": "Bytea" - }, - { - "name": "recursion_leaf_level_vk_hash", - "ordinal": 4, - "type_info": "Bytea" - }, - { - "name": "recursion_circuits_set_vks_hash", - "ordinal": 5, - "type_info": "Bytea" - }, - { - "name": "bootloader_code_hash", - "ordinal": 6, - "type_info": "Bytea" - }, - { - "name": "default_account_code_hash", - "ordinal": 7, - "type_info": "Bytea" - }, - { - "name": "verifier_address", - "ordinal": 8, - "type_info": "Bytea" - }, - { - "name": "upgrade_tx_hash", - "ordinal": 9, - "type_info": "Bytea" - }, - { - "name": "created_at", - "ordinal": 10, - "type_info": "Timestamp" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - false, - false, - false, - false, - true, - false - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT * FROM protocol_versions ORDER BY id DESC LIMIT 1" - }, - "37e4a0eea7b72bd3b75c26e003f3fa62039d9b614f0f2fa3d61e8c5e95f002fd": { - "describe": { - "columns": [ - { - "name": "max?", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - null - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT MAX(index) as \"max?\" FROM initial_writes" - }, - "394bbd64939d47fda4e1545e2752b208901e872b7234a5c3af456bdf429a6074": { - "describe": { - "columns": [ - { - "name": "tx_hash", - "ordinal": 0, - "type_info": "Bytea" - }, - { - "name": "call_trace", - "ordinal": 1, - "type_info": "Bytea" - } - ], - "nullable": [ - false, - false - ], - "parameters": { - "Left": [ - "Bytea" - ] - } - }, - "query": "\n SELECT * FROM call_traces\n WHERE tx_hash = $1\n " - }, - "3a18d0d1e236d8f57e8b3b1218a24414639a7c8235ba6a514c3d03b8a1790f17": { - "describe": { - "columns": [ - { - "name": "l1_batch_number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "merkle_tree_paths_blob_url", - "ordinal": 1, - "type_info": "Text" - }, - { - "name": "attempts", - "ordinal": 2, - "type_info": "Int2" - }, - { - "name": "status", - "ordinal": 3, - "type_info": "Text" - }, - { - "name": "error", - "ordinal": 4, - "type_info": "Text" - }, - { - "name": "created_at", - "ordinal": 5, - "type_info": "Timestamp" - }, - { - "name": "updated_at", - "ordinal": 6, - "type_info": "Timestamp" - }, - { - "name": "processing_started_at", - "ordinal": 7, - "type_info": "Timestamp" - }, - { - "name": "time_taken", - "ordinal": 8, - "type_info": "Time" - }, - { - "name": "is_blob_cleaned", - "ordinal": 9, - "type_info": "Bool" - }, - { - "name": "protocol_version", - "ordinal": 10, - "type_info": "Int4" - }, - { - "name": "picked_by", - "ordinal": 11, - "type_info": "Text" - } - ], - "nullable": [ - false, - true, - false, - false, - true, - false, - false, - true, - true, - true, - true, - true - ], - "parameters": { - "Left": [ - "Int8", - "Int4Array", - "Text" - ] - } - }, - "query": "\n UPDATE witness_inputs_fri\n SET status = 'in_progress', attempts = attempts + 1,\n updated_at = now(), processing_started_at = now(),\n picked_by = $3\n WHERE l1_batch_number = (\n SELECT l1_batch_number\n FROM witness_inputs_fri\n WHERE l1_batch_number <= $1\n AND status = 'queued'\n AND protocol_version = ANY($2)\n ORDER BY l1_batch_number ASC\n LIMIT 1\n FOR UPDATE\n SKIP LOCKED\n )\n RETURNING witness_inputs_fri.*\n " - }, - "3a6bb31237b29755a0031dbb4a47e51e474fe8d4d12bb1ead6f991905cfbe6a4": { - "describe": { - "columns": [ - { - 
"name": "id", - "ordinal": 0, - "type_info": "Int4" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Int4", - "Int8", - "Int8", - "Text", - "Bytea" - ] - } - }, - "query": "INSERT INTO eth_txs_history (eth_tx_id, base_fee_per_gas, priority_fee_per_gas, tx_hash, signed_raw_tx, created_at, updated_at) VALUES ($1, $2, $3, $4, $5, now(), now()) ON CONFLICT (tx_hash) DO NOTHING RETURNING id" - }, - "3ac1fe562e9664bbf8c02ba3090cf97a37663e228eff48fec326f74b2313daa9": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "ByteaArray" - ] - } - }, - "query": "DELETE FROM call_traces\n WHERE tx_hash = ANY($1)" - }, - "3af5a385c6636afb16e0fa5eda5373d64a76cef695dfa0b3b156e236224d32c8": { - "describe": { - "columns": [ - { - "name": "l1_batch_number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "scheduler_witness", - "ordinal": 1, - "type_info": "Bytea" - }, - { - "name": "final_node_aggregations", - "ordinal": 2, - "type_info": "Bytea" - }, - { - "name": "status", - "ordinal": 3, - "type_info": "Text" - }, - { - "name": "processing_started_at", - "ordinal": 4, - "type_info": "Timestamp" - }, - { - "name": "time_taken", - "ordinal": 5, - "type_info": "Time" - }, - { - "name": "error", - "ordinal": 6, - "type_info": "Text" - }, - { - "name": "created_at", - "ordinal": 7, - "type_info": "Timestamp" - }, - { - "name": "updated_at", - "ordinal": 8, - "type_info": "Timestamp" - }, - { - "name": "attempts", - "ordinal": 9, - "type_info": "Int4" - }, - { - "name": "aggregation_result_coords", - "ordinal": 10, - "type_info": "Bytea" - }, - { - "name": "scheduler_witness_blob_url", - "ordinal": 11, - "type_info": "Text" - }, - { - "name": "final_node_aggregations_blob_url", - "ordinal": 12, - "type_info": "Text" - }, - { - "name": "is_blob_cleaned", - "ordinal": 13, - "type_info": "Bool" - }, - { - "name": "protocol_version", - "ordinal": 14, - "type_info": "Int4" - } - ], - "nullable": [ - false, - false, - true, - false, - true, - true, - true, - false, - false, - false, - true, - true, - true, - false, - true - ], - "parameters": { - "Left": [ - "Interval", - "Int4", - "Int8", - "Int4Array" - ] - } - }, - "query": "\n UPDATE scheduler_witness_jobs\n SET status = 'in_progress', attempts = attempts + 1,\n updated_at = now(), processing_started_at = now()\n WHERE l1_batch_number = (\n SELECT l1_batch_number\n FROM scheduler_witness_jobs\n WHERE l1_batch_number <= $3\n AND\n ( status = 'queued'\n OR (status = 'in_progress' AND processing_started_at < now() - $1::interval)\n OR (status = 'failed' AND attempts < $2)\n )\n AND protocol_version = ANY($4)\n ORDER BY l1_batch_number ASC\n LIMIT 1\n FOR UPDATE\n SKIP LOCKED\n )\n RETURNING scheduler_witness_jobs.*\n " - }, - "3be0d3fd7a1ff997edb1eaff3fac59324a5b33663e7862cfddd4a5db8015f13c": { - "describe": { - "columns": [ - { - "name": "attempts", - "ordinal": 0, - "type_info": "Int2" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT attempts FROM leaf_aggregation_witness_jobs_fri WHERE id = $1" - }, - "3c582aeed32235ef175707de412a9f9129fad6ea5e87ebb85f68e20664b0da46": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int4Array", - "ByteaArray", - "Int8" - ] - } - }, - "query": "\n UPDATE transactions\n SET \n l1_batch_number = $3,\n l1_batch_tx_index = data_table.l1_batch_tx_index,\n updated_at = now()\n FROM\n (SELECT\n UNNEST($1::int[]) AS l1_batch_tx_index,\n UNNEST($2::bytea[]) AS hash\n ) AS 
data_table\n WHERE transactions.hash=data_table.hash \n " - }, - "3d41f05e1d5c5a74e0605e66fe08e09f14b8bf0269e5dcde518aa08db92a3ea0": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "DELETE FROM events WHERE miniblock_number > $1" - }, - "3e982e4863eef38069e755e3f20602ef9eaae859d23d86c3f230ddea8805aea7": { - "describe": { - "columns": [ - { - "name": "index", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Bytea" - ] - } - }, - "query": "SELECT index FROM initial_writes WHERE hashed_key = $1" - }, - "3f6332706376ef4cadda96498872429b6ed28eca5402b03b1aa3b77b8262bccd": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Text" - ] - } - }, - "query": "DELETE FROM compiler_versions WHERE compiler = $1" - }, - "3f671298a05f3f69a8ffb2e36d5ae79c544145fc1c289dd9e0c060dca3ec6e21": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "ByteaArray", - "ByteaArray" - ] - } - }, - "query": "UPDATE storage SET value = u.value FROM UNNEST($1::bytea[], $2::bytea[]) AS u(key, value) WHERE u.key = hashed_key" - }, - "4029dd84cde963ed8541426a659b10ccdbacbf4392664e34bfc29737aa630b28": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8", - "Int4", - "Int4", - "Int8", - "Bool", - "Bytea", - "ByteaArray", - "ByteaArray", - "Bytea", - "ByteaArray", - "Int8", - "Int8", - "Int8", - "Jsonb", - "Jsonb", - "Numeric", - "Int8", - "Int8", - "Bytea", - "Bytea", - "Int4", - "ByteaArray", - "Int8Array" - ] - } - }, - "query": "INSERT INTO l1_batches (number, l1_tx_count, l2_tx_count, timestamp, is_finished, fee_account_address, l2_to_l1_logs, l2_to_l1_messages, bloom, priority_ops_onchain_data, predicted_commit_gas_cost, predicted_prove_gas_cost, predicted_execute_gas_cost, initial_bootloader_heap_content, used_contract_hashes, base_fee_per_gas, l1_gas_price, l2_fair_gas_price, bootloader_code_hash, default_aa_code_hash, protocol_version, system_logs, storage_refunds, created_at, updated_at ) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, $15, $16, $17, $18, $19, $20, $21, $22, $23, now(), now())" - }, - "40a86f39a74ab22bdcd8b40446ea063c68bfb3e930e3150212474a657e82b38f": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8", - "Text" - ] - } - }, - "query": "\n UPDATE scheduler_witness_jobs\n SET final_node_aggregations_blob_url = $2,\n status = 'waiting_for_proofs',\n updated_at = now()\n WHERE l1_batch_number = $1 AND status != 'queued'\n " - }, - "42762c079948860eb59ba807eb9ae5a53b94c93e6b5635471d0018dde1d4c9d9": { - "describe": { - "columns": [ - { - "name": "l1_batch_number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "merkel_tree_paths_blob_url", - "ordinal": 1, - "type_info": "Text" - } - ], - "nullable": [ - false, - true - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT l1_batch_number, merkel_tree_paths_blob_url FROM witness_inputs WHERE status = 'successful' AND merkel_tree_paths_blob_url is NOT NULL AND updated_at < NOW() - INTERVAL '30 days' LIMIT $1" - }, - "433d5da4d72150cf2c1e1007ee3ff51edfa51924f4b662b8cf382f06e60fd228": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int4", - "Int8", - "Text", - "Text" - ] - } - }, - "query": "\n UPDATE node_aggregation_witness_jobs\n SET number_of_leaf_circuits = $1,\n leaf_layer_subqueues_blob_url = $3,\n 
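Note on the two array-based updates above (`transactions` and `storage`): both use the UNNEST bulk-write pattern, zipping parallel arrays into a virtual table and joining it against the target so that N per-row round trips collapse into one statement. The `storage` variant in isolation:

    -- $1 and $2 are equally long arrays of keys and values; UNNEST zips them
    -- into rows of the virtual table u(key, value).
    UPDATE storage
    SET value = u.value
    FROM UNNEST($1::bytea[], $2::bytea[]) AS u(key, value)
    WHERE u.key = hashed_key;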
aggregation_outputs_blob_url = $4,\n status = 'waiting_for_proofs',\n updated_at = now()\n WHERE l1_batch_number = $2 AND status != 'queued'\n " - }, - "43b5082ff7673ee3a8e8f3fafa64667fac4f7f5c8bd26a21ead6b4ba0f8fd17b": { - "describe": { - "columns": [ - { - "name": "hash", - "ordinal": 0, - "type_info": "Bytea" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT hash FROM miniblocks WHERE number = $1" - }, - "448d283cab6ae334de9676f69416974656d11563b58e0188d53ca9e0995dd287": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8Array" - ] - } - }, - "query": "\n UPDATE scheduler_dependency_tracker_fri\n SET status='queued'\n WHERE l1_batch_number = ANY($1)\n " - }, - "4588d998b3454d8210190c6b16116b5885f6f3e74606aec8250e6c1e8f55d242": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [] - } - }, - "query": "VACUUM storage_logs" - }, - "4ab8a25620b5400d836e1b847320d4e176629a27e1a6cb0666ab02bb55371769": { - "describe": { - "columns": [ - { - "name": "hash", - "ordinal": 0, - "type_info": "Bytea" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Interval" - ] - } - }, - "query": "DELETE FROM transactions WHERE miniblock_number IS NULL AND received_at < now() - $1::interval AND is_priority=false AND error IS NULL RETURNING hash" - }, - "4ac212a08324b9d4c3febc585109f19105b4d20aa3e290352e3c63d7ec58c5b2": { - "describe": { - "columns": [ - { - "name": "l2_address", - "ordinal": 0, - "type_info": "Bytea" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT l2_address FROM tokens" - }, - "4ac92a8436108097a32e94e53f7fe99261c7c3a40dbc433c20ccea3a7d06650c": { - "describe": { - "columns": [ - { - "name": "hashed_key", - "ordinal": 0, - "type_info": "Bytea" - }, - { - "name": "value!", - "ordinal": 1, - "type_info": "Bytea" - } - ], - "nullable": [ - false, - false - ], - "parameters": { - "Left": [ - "ByteaArray" - ] - } - }, - "query": "SELECT hashed_key, value as \"value!\" FROM storage WHERE hashed_key = ANY($1)" - }, - "4aef34fb19a07dbfe2be09024d6c7fc2033a8e1570cc7f002a5c78317ff8ff3f": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8", - "Int2", - "Text", - "Int4", - "Int4" - ] - } - }, - "query": "\n INSERT INTO leaf_aggregation_witness_jobs_fri\n (l1_batch_number, circuit_id, closed_form_inputs_blob_url, number_of_basic_circuits, protocol_version, status, created_at, updated_at)\n VALUES ($1, $2, $3, $4, $5, 'waiting_for_proofs', now(), now())\n ON CONFLICT(l1_batch_number, circuit_id)\n DO UPDATE SET updated_at=now()\n " - }, - "4b8597a47c0724155ad9592dc32134523bcbca11c9d82763d1bebbe17479c7b4": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int4" - }, - { - "name": "timestamp", - "ordinal": 1, - "type_info": "Int8" - }, - { - "name": "recursion_scheduler_level_vk_hash", - "ordinal": 2, - "type_info": "Bytea" - }, - { - "name": "recursion_node_level_vk_hash", - "ordinal": 3, - "type_info": "Bytea" - }, - { - "name": "recursion_leaf_level_vk_hash", - "ordinal": 4, - "type_info": "Bytea" - }, - { - "name": "recursion_circuits_set_vks_hash", - "ordinal": 5, - "type_info": "Bytea" - }, - { - "name": "bootloader_code_hash", - "ordinal": 6, - "type_info": "Bytea" - }, - { - "name": "default_account_code_hash", - "ordinal": 7, - "type_info": "Bytea" - }, - { - "name": "verifier_address", - "ordinal": 8, - "type_info": "Bytea" - }, - { - 
"name": "upgrade_tx_hash", - "ordinal": 9, - "type_info": "Bytea" - }, - { - "name": "created_at", - "ordinal": 10, - "type_info": "Timestamp" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - false, - false, - false, - false, - true, - false - ], - "parameters": { - "Left": [ - "Int4" - ] - } - }, - "query": "SELECT * FROM protocol_versions\n WHERE id = $1\n " - }, - "4bab972cbbd8b53237a840ba9307079705bd4b5270428d2b41f05ee3d2aa42af": { - "describe": { - "columns": [ - { - "name": "l1_batch_number!", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "circuit_type", - "ordinal": 1, - "type_info": "Text" - } - ], - "nullable": [ - null, - false - ], - "parameters": { - "Left": [] - } - }, - "query": "\n SELECT MIN(l1_batch_number) as \"l1_batch_number!\", circuit_type\n FROM prover_jobs\n WHERE aggregation_round = 0 AND (status = 'queued' OR status = 'in_progress'\n OR status = 'in_gpu_proof'\n OR status = 'failed')\n GROUP BY circuit_type\n " - }, - "4c0d2aa6e08f3b4748b88cad5cf7b3a9eb9c051e8e8e747a3c38c1b37ce3a6b7": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "DELETE FROM l2_to_l1_logs WHERE miniblock_number > $1" - }, - "4c83881635e957872a435737392bfed829de58780887c9a0fa7921ea648296fb": { - "describe": { - "columns": [ - { - "name": "number", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT number FROM l1_batches WHERE eth_prove_tx_id IS NOT NULL AND eth_execute_tx_id IS NULL ORDER BY number LIMIT 1" - }, - "4d2e106c809a48ace74952df2b883a5e747aaa1bc6bee28e986dccee7fa130b6": { - "describe": { - "columns": [ - { - "name": "nonce", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT nonce FROM eth_txs ORDER BY id DESC LIMIT 1" - }, - "4d36aff2bdeb0b659b8c4cd031f7c3fc204d92bb500a4efe8b6beb9255a232f6": { - "describe": { - "columns": [ - { - "name": "timestamp", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT timestamp FROM l1_batches WHERE eth_execute_tx_id IS NULL AND number > 0 ORDER BY number LIMIT 1" - }, - "4d50dabc25d392e6b9d0dbe0e386ea7ef2c1178b1b0394a17442185b79f2d77d": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int4" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Text" - ] - } - }, - "query": "SELECT eth_txs.id FROM eth_txs_history JOIN eth_txs ON eth_txs.confirmed_eth_tx_history_id = eth_txs_history.id WHERE eth_txs_history.tx_hash = $1" - }, - "4e2b733fea9ca7cef542602fcd80acf1a9d2e0f1e22566f1076c4837e3ac7e61": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "instance_host", - "ordinal": 1, - "type_info": "Inet" - }, - { - "name": "instance_port", - "ordinal": 2, - "type_info": "Int4" - }, - { - "name": "instance_status", - "ordinal": 3, - "type_info": "Text" - }, - { - "name": "created_at", - "ordinal": 4, - "type_info": "Timestamp" - }, - { - "name": "updated_at", - "ordinal": 5, - "type_info": "Timestamp" - }, - { - "name": "processing_started_at", - "ordinal": 6, - "type_info": "Timestamp" - }, - { - "name": "queue_free_slots", - "ordinal": 7, - "type_info": "Int4" - }, - { - "name": "queue_capacity", - "ordinal": 8, - "type_info": "Int4" - }, - { - "name": "specialized_prover_group_id", - "ordinal": 9, - 
"type_info": "Int2" - }, - { - "name": "region", - "ordinal": 10, - "type_info": "Text" - }, - { - "name": "zone", - "ordinal": 11, - "type_info": "Text" - }, - { - "name": "num_gpu", - "ordinal": 12, - "type_info": "Int2" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - false, - true, - true, - true, - true, - false, - false, - true - ], - "parameters": { - "Left": [ - "Interval", - "Int2", - "Text", - "Text" - ] - } - }, - "query": "\n UPDATE gpu_prover_queue\n SET instance_status = 'reserved',\n updated_at = now(),\n processing_started_at = now()\n WHERE id in (\n SELECT id\n FROM gpu_prover_queue\n WHERE specialized_prover_group_id=$2\n AND region=$3\n AND zone=$4\n AND (\n instance_status = 'available'\n OR (instance_status = 'reserved' AND processing_started_at < now() - $1::interval)\n )\n ORDER BY updated_at ASC\n LIMIT 1\n FOR UPDATE\n SKIP LOCKED\n )\n RETURNING gpu_prover_queue.*\n " - }, - "5089dfb745ff04a9b071b5785e68194a6f6a7a72754d23a65adc7d6838f7f640": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int4" - ] - } - }, - "query": "UPDATE eth_txs SET has_failed = TRUE WHERE id = $1" - }, - "50cdc4e59990eb75ab12f002b0f41d196196c17194ee68ef5b0f7edb9f0f7f69": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8", - "Text", - "Jsonb", - "Text" - ] - } - }, - "query": "UPDATE contract_verification_requests SET status = 'failed', updated_at = now(), error = $2, compilation_errors = $3, panic_message = $4 WHERE id = $1" - }, - "51cb712685991ffd600dce59f5ed8b5a1bfce8feed46ebd02471c43802e6e65a": { - "describe": { - "columns": [ - { - "name": "bootloader_code_hash", - "ordinal": 0, - "type_info": "Bytea" - }, - { - "name": "default_account_code_hash", - "ordinal": 1, - "type_info": "Bytea" - } - ], - "nullable": [ - false, - false - ], - "parameters": { - "Left": [ - "Int4" - ] - } - }, - "query": "SELECT bootloader_code_hash, default_account_code_hash FROM protocol_versions\n WHERE id = $1\n " - }, - "51d02f2e314ebf78c27949cc10997bd2171755400cc3a13c63994c85e15cb3df": { - "describe": { - "columns": [ - { - "name": "count!", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "circuit_id!", - "ordinal": 1, - "type_info": "Int2" - }, - { - "name": "aggregation_round!", - "ordinal": 2, - "type_info": "Int2" - }, - { - "name": "status!", - "ordinal": 3, - "type_info": "Text" - } - ], - "nullable": [ - null, - false, - false, - false - ], - "parameters": { - "Left": [] - } - }, - "query": "\n SELECT COUNT(*) as \"count!\", circuit_id as \"circuit_id!\", aggregation_round as \"aggregation_round!\", status as \"status!\"\n FROM prover_jobs_fri\n WHERE status <> 'skipped' and status <> 'successful'\n GROUP BY circuit_id, aggregation_round, status\n " - }, - "52eeb8c529efb796fdefb30a381fcf6c931512f30e55e24c155f6c649e662909": { - "describe": { - "columns": [ - { - "name": "l1_batch_number", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [] - } - }, - "query": "\n UPDATE scheduler_dependency_tracker_fri\n SET status='queuing'\n WHERE l1_batch_number IN\n (SELECT l1_batch_number FROM scheduler_dependency_tracker_fri\n WHERE status != 'queued'\n AND circuit_1_final_prover_job_id IS NOT NULL\n AND circuit_2_final_prover_job_id IS NOT NULL\n AND circuit_3_final_prover_job_id IS NOT NULL\n AND circuit_4_final_prover_job_id IS NOT NULL\n AND circuit_5_final_prover_job_id IS NOT NULL\n AND circuit_6_final_prover_job_id IS NOT NULL\n AND 
circuit_7_final_prover_job_id IS NOT NULL\n AND circuit_8_final_prover_job_id IS NOT NULL\n AND circuit_9_final_prover_job_id IS NOT NULL\n AND circuit_10_final_prover_job_id IS NOT NULL\n AND circuit_11_final_prover_job_id IS NOT NULL\n AND circuit_12_final_prover_job_id IS NOT NULL\n AND circuit_13_final_prover_job_id IS NOT NULL\n )\n RETURNING l1_batch_number;\n " - }, - "5490012051be6faaaa11fad0f196eb53160a9c5c045fe9d66afcef7f33403fe2": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int4" - }, - { - "name": "timestamp", - "ordinal": 1, - "type_info": "Int8" - }, - { - "name": "recursion_scheduler_level_vk_hash", - "ordinal": 2, - "type_info": "Bytea" - }, - { - "name": "recursion_node_level_vk_hash", - "ordinal": 3, - "type_info": "Bytea" - }, - { - "name": "recursion_leaf_level_vk_hash", - "ordinal": 4, - "type_info": "Bytea" - }, - { - "name": "recursion_circuits_set_vks_hash", - "ordinal": 5, - "type_info": "Bytea" - }, - { - "name": "bootloader_code_hash", - "ordinal": 6, - "type_info": "Bytea" - }, - { - "name": "default_account_code_hash", - "ordinal": 7, - "type_info": "Bytea" - }, - { - "name": "verifier_address", - "ordinal": 8, - "type_info": "Bytea" - }, - { - "name": "upgrade_tx_hash", - "ordinal": 9, - "type_info": "Bytea" - }, - { - "name": "created_at", - "ordinal": 10, - "type_info": "Timestamp" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - false, - false, - false, - false, - true, - false - ], - "parameters": { - "Left": [ - "Int4" - ] - } - }, - "query": "SELECT * FROM protocol_versions\n WHERE id < $1\n ORDER BY id DESC\n LIMIT 1\n " - }, - "5503575d9377785894de6cf6139a8d4768c6a803a1a90889e5a1b8254c315231": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int4" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Text" - ] - } - }, - "query": "INSERT INTO eth_txs (raw_tx, nonce, tx_type, contract_address, predicted_gas_cost, created_at, updated_at) VALUES ('\\x00', 0, $1, '', 0, now(), now()) RETURNING id" - }, - "5563da0d52ca7310ae7bc957caa5d8b3dcbd9386bb2a0be68dcd21ebb044cdbd": { - "describe": { - "columns": [ - { - "name": "bytecode_hash", - "ordinal": 0, - "type_info": "Bytea" - }, - { - "name": "bytecode", - "ordinal": 1, - "type_info": "Bytea" - } - ], - "nullable": [ - false, - false - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT bytecode_hash, bytecode FROM factory_deps INNER JOIN miniblocks ON miniblocks.number = factory_deps.miniblock_number WHERE miniblocks.l1_batch_number = $1" - }, - "55debba852ef32f3b5ba6ffcb745f7b59d6888a21cb8792f8f9027e3b164a245": { - "describe": { - "columns": [ - { - "name": "region", - "ordinal": 0, - "type_info": "Text" - }, - { - "name": "zone", - "ordinal": 1, - "type_info": "Text" - }, - { - "name": "total_gpus", - "ordinal": 2, - "type_info": "Int8" - } - ], - "nullable": [ - false, - false, - null - ], - "parameters": { - "Left": [] - } - }, - "query": "\n SELECT region, zone, SUM(num_gpu) AS total_gpus\n FROM gpu_prover_queue\n GROUP BY region, zone\n " - }, - "565a302151a5a55aa717048e3e21b5d7379ab47c2b80229024f0cb2699136b11": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int4" - ] - } - }, - "query": "UPDATE miniblocks SET protocol_version = $1 WHERE l1_batch_number IS NULL" - }, - "57742ed088179b89b50920a2ab1a103b745598ee0ba05d1793fc54e63b477319": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int4", 
- "Int8", - "Int8" - ] - } - }, - "query": "UPDATE l1_batches SET eth_commit_tx_id = $1, updated_at = now() WHERE number BETWEEN $2 AND $3" - }, - "58489a4e8730646ce20efee849742444740c72f59fad2495647742417ed0ab5a": { - "describe": { - "columns": [ - { - "name": "base_fee_per_gas", - "ordinal": 0, - "type_info": "Numeric" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Int8", - "Int8" - ] - } - }, - "query": "SELECT base_fee_per_gas FROM miniblocks WHERE number <= $1 ORDER BY number DESC LIMIT $2" - }, - "58ae859333cf7fadbb83d9cde66dee2abe18b4883f883e69130024d11a4a5cc6": { - "describe": { - "columns": [ - { - "name": "number", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Int8", - "Numeric" - ] - } - }, - "query": "SELECT number FROM ( SELECT number, sum(virtual_blocks) OVER(ORDER BY number) AS virtual_block_sum FROM miniblocks WHERE l1_batch_number >= $1 ) AS vts WHERE virtual_block_sum <= $2 ORDER BY number DESC LIMIT 1" - }, - "5922fdf40632a6ffecfe824a3ba29bcf7b379aff5253db2739cc7be6145524e8": { - "describe": { - "columns": [ - { - "name": "bootloader_code_hash", - "ordinal": 0, - "type_info": "Bytea" - }, - { - "name": "default_account_code_hash", - "ordinal": 1, - "type_info": "Bytea" - }, - { - "name": "id", - "ordinal": 2, - "type_info": "Int4" - } - ], - "nullable": [ - false, - false, - false - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT bootloader_code_hash, default_account_code_hash, id FROM protocol_versions\n WHERE timestamp <= $1\n ORDER BY id DESC\n LIMIT 1\n " - }, - "596ede80b21f08fc4dcf3e1fcc40810fe4c8f5123bcc19faebd15bfac86029d7": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Bytea", - "Jsonb" - ] - } - }, - "query": "INSERT INTO contracts_verification_info (address, verification_info) VALUES ($1, $2) ON CONFLICT (address) DO UPDATE SET verification_info = $2" - }, - "59a318fc330369353f2570bfef09909d11e22a1c76ba5277839a6866d8e796b6": { - "describe": { - "columns": [ - { - "name": "hashed_key", - "ordinal": 0, - "type_info": "Bytea" - }, - { - "name": "index", - "ordinal": 1, - "type_info": "Int8" - } - ], - "nullable": [ - false, - false - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT hashed_key, index FROM initial_writes WHERE l1_batch_number = $1 ORDER BY index" - }, - "5a27a65fa105897b60a99c1e0015e4b8c93c45e0c448e77b03565db5c36695ed": { - "describe": { - "columns": [ - { - "name": "max", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - null - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT MAX(l1_batch_number) FROM witness_inputs WHERE merkel_tree_paths_blob_url IS NOT NULL" - }, - "5a2f35f3b0135ab88451ea141e97b1160ea1b4cf495b6700b5d178a43499e0d8": { - "describe": { - "columns": [ - { - "name": "fee_account_address", - "ordinal": 0, - "type_info": "Bytea" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT fee_account_address FROM l1_batches WHERE number = $1" - }, - "5a31eab41a980cc82ad3609610d377a185ce38bd654ee93766c119aa6cae1040": { - "describe": { - "columns": [ - { - "name": "number", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Int8", - "Numeric" - ] - } - }, - "query": "SELECT number FROM ( SELECT number, sum(virtual_blocks) OVER(ORDER BY number) AS virtual_block_sum FROM miniblocks WHERE l1_batch_number >= $1 ) AS vts WHERE 
virtual_block_sum >= $2 ORDER BY number LIMIT 1" - }, - "5a5844af61cc685a414fcd3cad70900bdce8f48e905c105f8dd50dc52e0c6f14": { - "describe": { - "columns": [ - { - "name": "l1_batch_number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "attempts", - "ordinal": 1, - "type_info": "Int4" - } - ], - "nullable": [ - false, - false - ], - "parameters": { - "Left": [ - "Text", - "Int8" - ] - } - }, - "query": "\n UPDATE prover_jobs\n SET status = 'failed', error = $1, updated_at = now()\n WHERE id = $2\n RETURNING l1_batch_number, attempts\n " - }, - "5ac872e2c5a00b376cc053324b3776ef6a0bb7f6850e5a24a133dfee052c49e1": { - "describe": { - "columns": [ - { - "name": "value", - "ordinal": 0, - "type_info": "Bytea" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Bytea" - ] - } - }, - "query": "SELECT value FROM storage WHERE hashed_key = $1" - }, - "5b2935b5b7e8c2907f5e221a6b1e6f4b8737b9fc618c5d021a3e1d58a3aed116": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Text", - "Int8" - ] - } - }, - "query": "\n UPDATE prover_jobs_fri\n SET status = 'failed', error = $1, updated_at = now()\n WHERE id = $2\n " - }, - "5bc8a41ae0f255b966df2102f1bd9059d55833e0afaf6e62c7ddcc9c06de8deb": { - "describe": { - "columns": [ - { - "name": "l1_batch_number!", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "aggregation_round", - "ordinal": 1, - "type_info": "Int4" - } - ], - "nullable": [ - null, - false - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT MAX(l1_batch_number) as \"l1_batch_number!\", aggregation_round FROM prover_jobs \n WHERE status='successful'\n GROUP BY aggregation_round \n " - }, - "5bc8cdc7ed710bb2f9b0035654fd7e9dcc01731ca581c6aa75d55184817bc100": { - "describe": { - "columns": [ - { - "name": "number", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - null - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT MAX(number) as \"number\" FROM l1_batches WHERE hash IS NOT NULL" - }, - "5cc93efebc14dc0b78ed32bf7f167a44bd083f32ab308662c57ce1f726c0f1f9": { - "describe": { - "columns": [ - { - "name": "attempts", - "ordinal": 0, - "type_info": "Int2" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT attempts FROM node_aggregation_witness_jobs_fri WHERE id = $1" - }, - "5df806b33f84893d4ddfacf3b289b0e173e85ad9204cbb7ad314e68a94cdc41e": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Text", - "Int8", - "Int2", - "Int4", - "Int4" - ] - } - }, - "query": "\n UPDATE node_aggregation_witness_jobs_fri\n SET aggregations_url = $1, number_of_dependent_jobs = $5, updated_at = now()\n WHERE l1_batch_number = $2\n AND circuit_id = $3\n AND depth = $4\n " - }, - "5e09f2359dd69380c1f183f613d82696029a56896e2b985738a2fa25d6cb8a71": { - "describe": { - "columns": [ - { - "name": "op_id", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - null - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT MAX(priority_op_id) as \"op_id\" from transactions where is_priority = true" - }, - "5eb9f25dacfb02e70a9fcf0a41937d4c63bd786efb2fd0d1180f449a3ae0bbc0": { - "describe": { - "columns": [ - { - "name": "l1_batch_number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "leaf_layer_subqueues", - "ordinal": 1, - "type_info": "Bytea" - }, - { - "name": "aggregation_outputs", - "ordinal": 2, - "type_info": "Bytea" - }, - { - "name": "number_of_leaf_circuits", - "ordinal": 3, - "type_info": 
"Int4" - }, - { - "name": "status", - "ordinal": 4, - "type_info": "Text" - }, - { - "name": "processing_started_at", - "ordinal": 5, - "type_info": "Timestamp" - }, - { - "name": "time_taken", - "ordinal": 6, - "type_info": "Time" - }, - { - "name": "error", - "ordinal": 7, - "type_info": "Text" - }, - { - "name": "created_at", - "ordinal": 8, - "type_info": "Timestamp" - }, - { - "name": "updated_at", - "ordinal": 9, - "type_info": "Timestamp" - }, - { - "name": "attempts", - "ordinal": 10, - "type_info": "Int4" - }, - { - "name": "leaf_layer_subqueues_blob_url", - "ordinal": 11, - "type_info": "Text" - }, - { - "name": "aggregation_outputs_blob_url", - "ordinal": 12, - "type_info": "Text" - }, - { - "name": "is_blob_cleaned", - "ordinal": 13, - "type_info": "Bool" - }, - { - "name": "protocol_version", - "ordinal": 14, - "type_info": "Int4" - } - ], - "nullable": [ - false, - true, - true, - true, - false, - true, - true, - true, - false, - false, - false, - true, - true, - false, - true - ], - "parameters": { - "Left": [ - "Interval", - "Int4", - "Int8", - "Int4Array" - ] - } - }, - "query": "\n UPDATE node_aggregation_witness_jobs\n SET status = 'in_progress', attempts = attempts + 1,\n updated_at = now(), processing_started_at = now()\n WHERE l1_batch_number = (\n SELECT l1_batch_number\n FROM node_aggregation_witness_jobs\n WHERE l1_batch_number <= $3\n AND\n ( status = 'queued'\n OR (status = 'in_progress' AND processing_started_at < now() - $1::interval)\n OR (status = 'failed' AND attempts < $2)\n )\n AND protocol_version = ANY($4)\n ORDER BY l1_batch_number ASC\n LIMIT 1\n FOR UPDATE\n SKIP LOCKED\n )\n RETURNING node_aggregation_witness_jobs.*\n " - }, - "5f037f6ae8489d5224772d4f9e3e6cfc2075560957fa491d97a95c0e79ff4830": { - "describe": { - "columns": [ - { - "name": "block_batch?", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "max_batch?", - "ordinal": 1, - "type_info": "Int8" - } - ], - "nullable": [ - null, - null - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT (SELECT l1_batch_number FROM miniblocks WHERE number = $1) as \"block_batch?\", (SELECT MAX(number) + 1 FROM l1_batches) as \"max_batch?\"" - }, - "5f40849646bb7436e29cda8fb87fece2a4dcb580644f45ecb82388dece04f222": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "status", - "ordinal": 1, - "type_info": "Text" - }, - { - "name": "attempts", - "ordinal": 2, - "type_info": "Int2" - } - ], - "nullable": [ - false, - false, - false - ], - "parameters": { - "Left": [ - "Interval", - "Int2" - ] - } - }, - "query": "\n UPDATE prover_jobs_fri\n SET status = 'queued', updated_at = now(), processing_started_at = now()\n WHERE id in (\n SELECT id\n FROM prover_jobs_fri\n WHERE (status = 'in_progress' AND processing_started_at <= now() - $1::interval AND attempts < $2)\n OR (status = 'in_gpu_proof' AND processing_started_at <= now() - $1::interval AND attempts < $2)\n OR (status = 'failed' AND attempts < $2)\n FOR UPDATE SKIP LOCKED\n )\n RETURNING id, status, attempts\n " - }, - "5f4b1091b74424ffd20c0aede98287418afa2bb37dbc941200c1d6190c96bec5": { - "describe": { - "columns": [ - { - "name": "timestamp", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT timestamp FROM l1_batches WHERE eth_commit_tx_id IS NULL AND number > 0 ORDER BY number LIMIT 1" - }, - "601487490349c5eee83d6de19137b1a1079235e46c4a3f07e1eaa9db7760f586": { - "describe": { - 
"columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8", - "Jsonb" - ] - } - }, - "query": "INSERT INTO events_queue (l1_batch_number, serialized_events_queue) VALUES ($1, $2)" - }, - "6317155050a5dae24ea202cfd54d1e58cc7aeb0bfd4d95aa351f85cff04d3bff": { - "describe": { - "columns": [ - { - "name": "version", - "ordinal": 0, - "type_info": "Text" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Text" - ] - } - }, - "query": "SELECT version FROM compiler_versions WHERE compiler = $1 ORDER by version" - }, - "65a31949cd7f8890e9448d26a0efee852ddf59bfbbc858b51fba10048d47d27b": { - "describe": { - "columns": [ - { - "name": "number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "timestamp", - "ordinal": 1, - "type_info": "Int8" - }, - { - "name": "is_finished", - "ordinal": 2, - "type_info": "Bool" - }, - { - "name": "l1_tx_count", - "ordinal": 3, - "type_info": "Int4" - }, - { - "name": "l2_tx_count", - "ordinal": 4, - "type_info": "Int4" - }, - { - "name": "fee_account_address", - "ordinal": 5, - "type_info": "Bytea" - }, - { - "name": "bloom", - "ordinal": 6, - "type_info": "Bytea" - }, - { - "name": "priority_ops_onchain_data", - "ordinal": 7, - "type_info": "ByteaArray" - }, - { - "name": "hash", - "ordinal": 8, - "type_info": "Bytea" - }, - { - "name": "parent_hash", - "ordinal": 9, - "type_info": "Bytea" - }, - { - "name": "commitment", - "ordinal": 10, - "type_info": "Bytea" - }, - { - "name": "compressed_write_logs", - "ordinal": 11, - "type_info": "Bytea" - }, - { - "name": "compressed_contracts", - "ordinal": 12, - "type_info": "Bytea" - }, - { - "name": "eth_prove_tx_id", - "ordinal": 13, - "type_info": "Int4" - }, - { - "name": "eth_commit_tx_id", - "ordinal": 14, - "type_info": "Int4" - }, - { - "name": "eth_execute_tx_id", - "ordinal": 15, - "type_info": "Int4" - }, - { - "name": "merkle_root_hash", - "ordinal": 16, - "type_info": "Bytea" - }, - { - "name": "l2_to_l1_logs", - "ordinal": 17, - "type_info": "ByteaArray" - }, - { - "name": "l2_to_l1_messages", - "ordinal": 18, - "type_info": "ByteaArray" - }, - { - "name": "used_contract_hashes", - "ordinal": 19, - "type_info": "Jsonb" - }, - { - "name": "compressed_initial_writes", - "ordinal": 20, - "type_info": "Bytea" - }, - { - "name": "compressed_repeated_writes", - "ordinal": 21, - "type_info": "Bytea" - }, - { - "name": "l2_l1_compressed_messages", - "ordinal": 22, - "type_info": "Bytea" - }, - { - "name": "l2_l1_merkle_root", - "ordinal": 23, - "type_info": "Bytea" - }, - { - "name": "l1_gas_price", - "ordinal": 24, - "type_info": "Int8" - }, - { - "name": "l2_fair_gas_price", - "ordinal": 25, - "type_info": "Int8" - }, - { - "name": "rollup_last_leaf_index", - "ordinal": 26, - "type_info": "Int8" - }, - { - "name": "zkporter_is_available", - "ordinal": 27, - "type_info": "Bool" - }, - { - "name": "bootloader_code_hash", - "ordinal": 28, - "type_info": "Bytea" - }, - { - "name": "default_aa_code_hash", - "ordinal": 29, - "type_info": "Bytea" - }, - { - "name": "base_fee_per_gas", - "ordinal": 30, - "type_info": "Numeric" - }, - { - "name": "aux_data_hash", - "ordinal": 31, - "type_info": "Bytea" - }, - { - "name": "pass_through_data_hash", - "ordinal": 32, - "type_info": "Bytea" - }, - { - "name": "meta_parameters_hash", - "ordinal": 33, - "type_info": "Bytea" - }, - { - "name": "system_logs", - "ordinal": 34, - "type_info": "ByteaArray" - }, - { - "name": "compressed_state_diffs", - "ordinal": 35, - "type_info": "Bytea" - }, - { - "name": "protocol_version", - "ordinal": 36, 
- "type_info": "Int4" - }, - { - "name": "events_queue_commitment", - "ordinal": 37, - "type_info": "Bytea" - }, - { - "name": "bootloader_initial_content_commitment", - "ordinal": 38, - "type_info": "Bytea" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - false, - false, - false, - true, - true, - true, - true, - true, - true, - true, - true, - true, - false, - false, - false, - true, - true, - true, - true, - false, - false, - true, - true, - true, - true, - false, - true, - true, - true, - false, - true, - true, - true, - true - ], - "parameters": { - "Left": [ - "Int8", - "Int8" - ] - } - }, - "query": "SELECT number, timestamp, is_finished, l1_tx_count, l2_tx_count, fee_account_address, bloom, priority_ops_onchain_data, hash, parent_hash, commitment, compressed_write_logs, compressed_contracts, eth_prove_tx_id, eth_commit_tx_id, eth_execute_tx_id, merkle_root_hash, l2_to_l1_logs, l2_to_l1_messages, used_contract_hashes, compressed_initial_writes, compressed_repeated_writes, l2_l1_compressed_messages, l2_l1_merkle_root, l1_gas_price, l2_fair_gas_price, rollup_last_leaf_index, zkporter_is_available, bootloader_code_hash, default_aa_code_hash, base_fee_per_gas, aux_data_hash, pass_through_data_hash, meta_parameters_hash, system_logs, compressed_state_diffs, protocol_version, events_queue_commitment, bootloader_initial_content_commitment FROM (SELECT l1_batches.*, row_number() OVER (ORDER BY number ASC) AS row_number FROM l1_batches WHERE eth_commit_tx_id IS NOT NULL AND l1_batches.skip_proof = TRUE AND l1_batches.number > $1 ORDER BY number LIMIT $2) inn LEFT JOIN commitments ON commitments.l1_batch_number = inn.number WHERE number - row_number = $1" - }, - "65e2cdb70ccef97d886fb53d1bb298875e13b0ffe7b744ac5dd86433f0929eb0": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - { - "Custom": { - "kind": { - "Enum": [ - "Queued", - "ManuallySkipped", - "InProgress", - "Successful", - "Failed" - ] - }, - "name": "basic_witness_input_producer_job_status" - } - }, - "Int8", - "Time", - "Text" - ] - } - }, - "query": "UPDATE basic_witness_input_producer_jobs SET status = $1, updated_at = now(), time_taken = $3, input_blob_url = $4 WHERE l1_batch_number = $2" - }, - "665112c83ed7f126f94d1c47408de3495ee6431970e334d94ae75f853496eb48": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Text", - "Int8" - ] - } - }, - "query": "\n UPDATE node_aggregation_witness_jobs_fri\n SET status ='failed', error= $1, updated_at = now()\n WHERE id = $2\n " - }, - "67a47f1e7d5f8dafcef94bea3f268b4baec1888c6ef11c92ab66480ecdcb9aef": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Time", - "Bytea", - "Text", - "Int8" - ] - } - }, - "query": "\n UPDATE prover_jobs\n SET status = 'successful', updated_at = now(), time_taken = $1, result = $2, proccesed_by = $3\n WHERE id = $4\n " - }, - "67ecdc69e39e689f1f23f867d31e6b8c47e9c041e18cbd84a2ad6482a9be4e74": { - "describe": { - "columns": [ - { - "name": "l2_to_l1_logs", - "ordinal": 0, - "type_info": "ByteaArray" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT l2_to_l1_logs FROM l1_batches WHERE number = $1" - }, - "67efc7ea5bd3821d8325759ed8357190f6122dd2ae503a57faf15d8b749a4361": { - "describe": { - "columns": [ - { - "name": "l1_batch_number", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [] - } - }, - "query": "\n UPDATE 
leaf_aggregation_witness_jobs\n SET status='queued'\n WHERE l1_batch_number IN\n (SELECT prover_jobs.l1_batch_number\n FROM prover_jobs\n JOIN leaf_aggregation_witness_jobs lawj ON prover_jobs.l1_batch_number = lawj.l1_batch_number\n WHERE lawj.status = 'waiting_for_proofs'\n AND prover_jobs.status = 'successful'\n AND prover_jobs.aggregation_round = 0\n GROUP BY prover_jobs.l1_batch_number, lawj.number_of_basic_circuits\n HAVING COUNT(*) = lawj.number_of_basic_circuits)\n RETURNING l1_batch_number;\n " - }, - "6939e766e122458b2ac618d19b2759c4a7298ef72b81e8c3957e0a5cf35c9552": { - "describe": { - "columns": [ - { - "name": "l1_batch_number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "status", - "ordinal": 1, - "type_info": "Text" - }, - { - "name": "attempts", - "ordinal": 2, - "type_info": "Int2" - } - ], - "nullable": [ - false, - false, - false - ], - "parameters": { - "Left": [ - "Interval", - "Int2" - ] - } - }, - "query": "\n UPDATE witness_inputs_fri\n SET status = 'queued', updated_at = now(), processing_started_at = now()\n WHERE (status = 'in_progress' AND processing_started_at <= now() - $1::interval AND attempts < $2)\n OR (status = 'in_gpu_proof' AND processing_started_at <= now() - $1::interval AND attempts < $2)\n OR (status = 'failed' AND attempts < $2)\n RETURNING l1_batch_number, status, attempts\n " - }, - "694f1d154f3f38b123d8f845fef6e876d35dc3743f1c5b69dce6be694e5e726c": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "UPDATE witness_inputs SET status='queued' WHERE l1_batch_number=$1 AND status='waiting_for_artifacts'" - }, - "697835cdd5be1b99a0f332c4c8f3245e317b0282b46e55f15e728a7642382b25": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "l1_batch_number", - "ordinal": 1, - "type_info": "Int8" - }, - { - "name": "circuit_id", - "ordinal": 2, - "type_info": "Int2" - }, - { - "name": "aggregation_round", - "ordinal": 3, - "type_info": "Int2" - }, - { - "name": "sequence_number", - "ordinal": 4, - "type_info": "Int4" - }, - { - "name": "depth", - "ordinal": 5, - "type_info": "Int4" - }, - { - "name": "is_node_final_proof", - "ordinal": 6, - "type_info": "Bool" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - false, - false - ], - "parameters": { - "Left": [ - "Time", - "Text", - "Int8" - ] - } - }, - "query": "\n UPDATE prover_jobs_fri\n SET status = 'successful', updated_at = now(), time_taken = $1, proof_blob_url=$2\n WHERE id = $3\n RETURNING prover_jobs_fri.id, prover_jobs_fri.l1_batch_number, prover_jobs_fri.circuit_id,\n prover_jobs_fri.aggregation_round, prover_jobs_fri.sequence_number, prover_jobs_fri.depth,\n prover_jobs_fri.is_node_final_proof\n " - }, - "6a282084b02cddd8646e984a729b689bdb758e07096fc8cf60f68c6ec5bd6a9c": { - "describe": { - "columns": [ - { - "name": "max?", - "ordinal": 0, - "type_info": "Int4" - } - ], - "nullable": [ - null - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT MAX(id) as \"max?\" FROM protocol_versions" - }, - "6a3af113a71bffa445d4a729e24fbc2be90bfffbdd072c74f9ca58669b7e5f80": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int4" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Bytea", - "Bytea", - "Bytea", - "Bytea" - ] - } - }, - "query": "SELECT id FROM prover_fri_protocol_versions WHERE recursion_circuits_set_vks_hash = $1 AND recursion_leaf_level_vk_hash = $2 AND 
recursion_node_level_vk_hash = $3 AND recursion_scheduler_level_vk_hash = $4 " - }, - "6b53e5cb619c9649d28ae33df6a43e6984e2d9320f894f3d04156a2d1235bb60": { - "describe": { - "columns": [ - { - "name": "hash", - "ordinal": 0, - "type_info": "Bytea" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Int8", - "Int8" - ] - } - }, - "query": "SELECT hash FROM miniblocks WHERE number BETWEEN $1 AND $2 ORDER BY number" - }, - "6c0915ed87e6d0fdf83cb24a51cc277e366bea0ba8821c048092d2a0aadb2771": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8", - "Int8", - "Bytea", - "Int4", - "Int4", - "Numeric", - "Int8", - "Int8", - "Int8", - "Bytea", - "Bytea", - "Int4", - "Int8" - ] - } - }, - "query": "INSERT INTO miniblocks ( number, timestamp, hash, l1_tx_count, l2_tx_count, base_fee_per_gas, l1_gas_price, l2_fair_gas_price, gas_per_pubdata_limit, bootloader_code_hash, default_aa_code_hash, protocol_version, virtual_blocks, created_at, updated_at ) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, now(), now())" - }, - "6d142503d0d8682992a0353bae4a6b25ec82e7cadf0b2bbadcfd23c27f646bae": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "TextArray", - "Text" - ] - } - }, - "query": "INSERT INTO compiler_versions (version, compiler, created_at, updated_at) SELECT u.version, $2, now(), now() FROM UNNEST($1::text[]) AS u(version) ON CONFLICT (version, compiler) DO NOTHING" - }, - "6ffd22b0590341c38ce3957dccdb5a4edf47fb558bc64e4df08897a0c72dbf23": { - "describe": { - "columns": [ - { - "name": "protocol_version", - "ordinal": 0, - "type_info": "Int4" - } - ], - "nullable": [ - true - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "\n SELECT protocol_version\n FROM witness_inputs\n WHERE l1_batch_number = $1\n " - }, - "715aba794d60ce2faf937eacd9498b203dbb8e620d6d8850b9071cd72902ffbf": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "ByteaArray", - "ByteaArray", - "Int8" - ] - } - }, - "query": "INSERT INTO factory_deps (bytecode_hash, bytecode, miniblock_number, created_at, updated_at) SELECT u.bytecode_hash, u.bytecode, $3, now(), now() FROM UNNEST($1::bytea[], $2::bytea[]) AS u(bytecode_hash, bytecode) ON CONFLICT (bytecode_hash) DO NOTHING" - }, - "741b13b0a4769a30186c650a4a1b24855806a27ccd8d5a50594741842dde44ec": { - "describe": { - "columns": [ - { - "name": "min?", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "max?", - "ordinal": 1, - "type_info": "Int8" - } - ], - "nullable": [ - null, - null - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT MIN(miniblocks.number) as \"min?\", MAX(miniblocks.number) as \"max?\" FROM miniblocks WHERE l1_batch_number = $1" - }, - "751c8e5ed1fc211dbb4c7419a316c5f4e49a7f0b4f3a5c74c2abd8daebc457dd": { - "describe": { - "columns": [ - { - "name": "l1_batch_number", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - true - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT l1_batch_number FROM miniblocks WHERE number = $1" - }, - "769c021b51b9aaafdf27b4019834729047702b17b0684f7271eecd6ffdf96e7c": { - "describe": { - "columns": [ - { - "name": "l1_batch_number", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [] - } - }, - "query": "\n UPDATE scheduler_witness_jobs\n SET status='queued'\n WHERE l1_batch_number IN\n (SELECT prover_jobs.l1_batch_number\n FROM prover_jobs\n JOIN 
scheduler_witness_jobs swj ON prover_jobs.l1_batch_number = swj.l1_batch_number\n WHERE swj.status = 'waiting_for_proofs'\n AND prover_jobs.status = 'successful'\n AND prover_jobs.aggregation_round = 2\n GROUP BY prover_jobs.l1_batch_number\n HAVING COUNT(*) = 1)\n RETURNING l1_batch_number;\n " - }, - "7717652bb4933f87cbeb7baa2e70e8e0b439663c6b15493bd2e406bed2486b42": { - "describe": { - "columns": [ - { - "name": "max", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - null - ], - "parameters": { - "Left": [ - "Numeric" - ] - } - }, - "query": "SELECT max(l1_batches.number) FROM l1_batches JOIN eth_txs ON (l1_batches.eth_commit_tx_id = eth_txs.id) JOIN eth_txs_history AS commit_tx ON (eth_txs.confirmed_eth_tx_history_id = commit_tx.id) WHERE commit_tx.confirmed_at IS NOT NULL AND eth_prove_tx_id IS NOT NULL AND eth_execute_tx_id IS NULL AND EXTRACT(epoch FROM commit_tx.confirmed_at) < $1" - }, - "77d78689b5c0b631da047f21c89a607213bec507cd9cf2b5cb4ea86e1a084796": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Bytea", - "Bytea", - "Bytea", - "Bytea", - "Bytea", - "Bytea", - "Bytea", - "Bytea", - "Bool", - "Bytea", - "Int8", - "Bytea", - "Bytea", - "Bytea", - "Bytea", - "Int8" - ] - } - }, - "query": "UPDATE l1_batches SET hash = $1, merkle_root_hash = $2, commitment = $3, default_aa_code_hash = $4, compressed_repeated_writes = $5, compressed_initial_writes = $6, l2_l1_compressed_messages = $7, l2_l1_merkle_root = $8, zkporter_is_available = $9, bootloader_code_hash = $10, rollup_last_leaf_index = $11, aux_data_hash = $12, pass_through_data_hash = $13, meta_parameters_hash = $14, compressed_state_diffs = $15, updated_at = now() WHERE number = $16" - }, - "780b30e56a3ecfb3daa5310168ac6cd9e94bd5f1d871e1eaf36fbfd463a5e7e0": { - "describe": { - "columns": [ - { - "name": "address_and_key?", - "ordinal": 0, - "type_info": "ByteaArray" - } - ], - "nullable": [ - null - ], - "parameters": { - "Left": [ - "ByteaArray" - ] - } - }, - "query": "SELECT (SELECT ARRAY[address,key] FROM storage_logs WHERE hashed_key = u.hashed_key ORDER BY miniblock_number, operation_number LIMIT 1) as \"address_and_key?\" FROM UNNEST($1::bytea[]) AS u(hashed_key)" - }, - "78ba607e97bdf8b7c0b5e3cf87e10dc3b352a8552c2e94532b0f392af7dbe9cd": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "contract_address", - "ordinal": 1, - "type_info": "Bytea" - }, - { - "name": "source_code", - "ordinal": 2, - "type_info": "Text" - }, - { - "name": "contract_name", - "ordinal": 3, - "type_info": "Text" - }, - { - "name": "zk_compiler_version", - "ordinal": 4, - "type_info": "Text" - }, - { - "name": "compiler_version", - "ordinal": 5, - "type_info": "Text" - }, - { - "name": "optimization_used", - "ordinal": 6, - "type_info": "Bool" - }, - { - "name": "optimizer_mode", - "ordinal": 7, - "type_info": "Text" - }, - { - "name": "constructor_arguments", - "ordinal": 8, - "type_info": "Bytea" - }, - { - "name": "is_system", - "ordinal": 9, - "type_info": "Bool" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - false, - false, - true, - false, - false - ], - "parameters": { - "Left": [ - "Interval" - ] - } - }, - "query": "UPDATE contract_verification_requests SET status = 'in_progress', attempts = attempts + 1, updated_at = now(), processing_started_at = now() WHERE id = ( SELECT id FROM contract_verification_requests WHERE status = 'queued' OR (status = 'in_progress' AND processing_started_at < now() - 
$1::interval) ORDER BY created_at LIMIT 1 FOR UPDATE SKIP LOCKED ) RETURNING id, contract_address, source_code, contract_name, zk_compiler_version, compiler_version, optimization_used, optimizer_mode, constructor_arguments, is_system" - }, - "79420f7676acb3f17aeb538271cdb4067a342fd554adcf7bd0550b6682b4c82b": { - "describe": { - "columns": [ - { - "name": "tx_hash", - "ordinal": 0, - "type_info": "Bytea" - }, - { - "name": "call_trace", - "ordinal": 1, - "type_info": "Bytea" - } - ], - "nullable": [ - false, - false - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT * FROM call_traces WHERE tx_hash IN (SELECT hash FROM transactions WHERE miniblock_number = $1)" - }, - "7947dd8e7d6c138146f7ebe6b1e89fcd494b2679ac4e9fcff6aa2b2944aeed50": { - "describe": { - "columns": [ - { - "name": "number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "l1_batch_number!", - "ordinal": 1, - "type_info": "Int8" - }, - { - "name": "last_batch_miniblock?", - "ordinal": 2, - "type_info": "Int8" - }, - { - "name": "timestamp", - "ordinal": 3, - "type_info": "Int8" - }, - { - "name": "root_hash?", - "ordinal": 4, - "type_info": "Bytea" - }, - { - "name": "l1_gas_price", - "ordinal": 5, - "type_info": "Int8" - }, - { - "name": "l2_fair_gas_price", - "ordinal": 6, - "type_info": "Int8" - }, - { - "name": "bootloader_code_hash", - "ordinal": 7, - "type_info": "Bytea" - }, - { - "name": "default_aa_code_hash", - "ordinal": 8, - "type_info": "Bytea" - }, - { - "name": "virtual_blocks", - "ordinal": 9, - "type_info": "Int8" - }, - { - "name": "hash", - "ordinal": 10, - "type_info": "Bytea" - }, - { - "name": "consensus", - "ordinal": 11, - "type_info": "Jsonb" - }, - { - "name": "protocol_version!", - "ordinal": 12, - "type_info": "Int4" - }, - { - "name": "fee_account_address?", - "ordinal": 13, - "type_info": "Bytea" - } - ], - "nullable": [ - false, - null, - null, - false, - false, - false, - false, - true, - true, - false, - false, - true, - true, - false - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT miniblocks.number, COALESCE(miniblocks.l1_batch_number, (SELECT (max(number) + 1) FROM l1_batches)) as \"l1_batch_number!\", (SELECT max(m2.number) FROM miniblocks m2 WHERE miniblocks.l1_batch_number = m2.l1_batch_number) as \"last_batch_miniblock?\", miniblocks.timestamp, miniblocks.hash as \"root_hash?\", miniblocks.l1_gas_price, miniblocks.l2_fair_gas_price, miniblocks.bootloader_code_hash, miniblocks.default_aa_code_hash, miniblocks.virtual_blocks, miniblocks.hash, miniblocks.consensus, miniblocks.protocol_version as \"protocol_version!\", l1_batches.fee_account_address as \"fee_account_address?\" FROM miniblocks LEFT JOIN l1_batches ON miniblocks.l1_batch_number = l1_batches.number WHERE miniblocks.number = $1" - }, - "79cdb4cdd3c47b3654e6240178985fb4b4420e0634f9482a6ef8169e90200b84": { - "describe": { - "columns": [ - { - "name": "attempts", - "ordinal": 0, - "type_info": "Int2" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT attempts FROM scheduler_witness_jobs_fri WHERE l1_batch_number = $1" - }, - "7a5aba2130fec60318266c8059d3757cd78eb6099d50486b4996fb4090c99622": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8", - "Bytea", - "Bytea", - "Text", - "Text", - "Int4", - "Int4" - ] - } - }, - "query": "\n INSERT INTO leaf_aggregation_witness_jobs\n (l1_batch_number, basic_circuits, basic_circuits_inputs, basic_circuits_blob_url, 
basic_circuits_inputs_blob_url, number_of_basic_circuits, protocol_version, status, created_at, updated_at)\n VALUES ($1, $2, $3, $4, $5, $6, $7, 'waiting_for_proofs', now(), now())\n " - }, - "7b8043a59029a19a3ba2433a438e8a4fe560aba7eda57b7a63b580de2e19aacb": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8", - "Text", - "Int4" - ] - } - }, - "query": "INSERT INTO witness_inputs_fri(l1_batch_number, merkle_tree_paths_blob_url, protocol_version, status, created_at, updated_at) VALUES ($1, $2, $3, 'queued', now(), now()) ON CONFLICT (l1_batch_number) DO NOTHING" - }, - "7c3e55a10c8cf90e60001bca401113fd5335ec6c4b1ffdb6d6ff063d244d23e2": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "l1_batch_number", - "ordinal": 1, - "type_info": "Int8" - }, - { - "name": "circuit_type", - "ordinal": 2, - "type_info": "Text" - }, - { - "name": "prover_input", - "ordinal": 3, - "type_info": "Bytea" - }, - { - "name": "status", - "ordinal": 4, - "type_info": "Text" - }, - { - "name": "error", - "ordinal": 5, - "type_info": "Text" - }, - { - "name": "processing_started_at", - "ordinal": 6, - "type_info": "Timestamp" - }, - { - "name": "created_at", - "ordinal": 7, - "type_info": "Timestamp" - }, - { - "name": "updated_at", - "ordinal": 8, - "type_info": "Timestamp" - }, - { - "name": "time_taken", - "ordinal": 9, - "type_info": "Time" - }, - { - "name": "aggregation_round", - "ordinal": 10, - "type_info": "Int4" - }, - { - "name": "result", - "ordinal": 11, - "type_info": "Bytea" - }, - { - "name": "sequence_number", - "ordinal": 12, - "type_info": "Int4" - }, - { - "name": "attempts", - "ordinal": 13, - "type_info": "Int4" - }, - { - "name": "circuit_input_blob_url", - "ordinal": 14, - "type_info": "Text" - }, - { - "name": "proccesed_by", - "ordinal": 15, - "type_info": "Text" - }, - { - "name": "is_blob_cleaned", - "ordinal": 16, - "type_info": "Bool" - }, - { - "name": "protocol_version", - "ordinal": 17, - "type_info": "Int4" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - true, - true, - false, - false, - false, - false, - true, - false, - false, - true, - true, - false, - true - ], - "parameters": { - "Left": [ - "TextArray", - "Int4Array" - ] - } - }, - "query": "\n UPDATE prover_jobs\n SET status = 'in_progress', attempts = attempts + 1,\n updated_at = now(), processing_started_at = now()\n WHERE id = (\n SELECT id\n FROM prover_jobs\n WHERE circuit_type = ANY($1)\n AND status = 'queued'\n AND protocol_version = ANY($2)\n ORDER BY aggregation_round DESC, l1_batch_number ASC, id ASC\n LIMIT 1\n FOR UPDATE\n SKIP LOCKED\n )\n RETURNING prover_jobs.*\n " - }, - "7ca78be8b18638857111cdbc6117ed2c204e3eb22682d5e4553ac4f47efab6e2": { - "describe": { - "columns": [ - { - "name": "hash", - "ordinal": 0, - "type_info": "Bytea" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "UPDATE transactions\n SET l1_batch_number = NULL, miniblock_number = NULL, error = NULL, index_in_block = NULL, execution_info = '{}'\n WHERE miniblock_number > $1\n RETURNING hash\n " - }, - "7cf855c4869db43b765b92762402596f6b97b3717735b6d87a16a5776f2eca71": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Bytea", - "Numeric", - "Timestamp" - ] - } - }, - "query": "UPDATE tokens SET usd_price = $2, usd_price_updated_at = $3, updated_at = now() WHERE l1_address = $1" - }, - 
"7d4210089c5abb84befec962fc769b396ff7ad7da212d079bd4460f9ea4d60dc": { - "describe": { - "columns": [ - { - "name": "l1_batch_number?", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - null - ], - "parameters": { - "Left": [] - } - }, - "query": "\n SELECT MIN(l1_batch_number) as \"l1_batch_number?\" FROM (\n SELECT MIN(l1_batch_number) as \"l1_batch_number\"\n FROM prover_jobs\n WHERE status = 'successful' OR aggregation_round < 3\n GROUP BY l1_batch_number\n HAVING MAX(aggregation_round) < 3\n ) as inn\n " - }, - "7df997e5a203e8df350b1346863fddf26d32123159213c02e8794c39240e48dc": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "UPDATE miniblocks SET l1_batch_number = $1 WHERE l1_batch_number IS NULL" - }, - "8045a697a6a1070857b6fdc656f60ee6bab4b3a875ab98099beee227c199f818": { - "describe": { - "columns": [ - { - "name": "miniblock_number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "log_index_in_miniblock", - "ordinal": 1, - "type_info": "Int4" - }, - { - "name": "log_index_in_tx", - "ordinal": 2, - "type_info": "Int4" - }, - { - "name": "tx_hash", - "ordinal": 3, - "type_info": "Bytea" - }, - { - "name": "block_hash", - "ordinal": 4, - "type_info": "Bytea" - }, - { - "name": "l1_batch_number?", - "ordinal": 5, - "type_info": "Int8" - }, - { - "name": "shard_id", - "ordinal": 6, - "type_info": "Int4" - }, - { - "name": "is_service", - "ordinal": 7, - "type_info": "Bool" - }, - { - "name": "tx_index_in_miniblock", - "ordinal": 8, - "type_info": "Int4" - }, - { - "name": "tx_index_in_l1_batch", - "ordinal": 9, - "type_info": "Int4" - }, - { - "name": "sender", - "ordinal": 10, - "type_info": "Bytea" - }, - { - "name": "key", - "ordinal": 11, - "type_info": "Bytea" - }, - { - "name": "value", - "ordinal": 12, - "type_info": "Bytea" - } - ], - "nullable": [ - false, - false, - false, - false, - null, - null, - false, - false, - false, - false, - false, - false, - false - ], - "parameters": { - "Left": [ - "Bytea" - ] - } - }, - "query": "SELECT miniblock_number, log_index_in_miniblock, log_index_in_tx, tx_hash, Null::bytea as \"block_hash\", Null::bigint as \"l1_batch_number?\", shard_id, is_service, tx_index_in_miniblock, tx_index_in_l1_batch, sender, key, value FROM l2_to_l1_logs WHERE tx_hash = $1 ORDER BY log_index_in_tx ASC" - }, - "832105952074e4ff35252d8e7973faa1b24455abc89820307db5e49a834c0718": { - "describe": { - "columns": [ - { - "name": "number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "l1_tx_count", - "ordinal": 1, - "type_info": "Int4" - }, - { - "name": "l2_tx_count", - "ordinal": 2, - "type_info": "Int4" - }, - { - "name": "timestamp", - "ordinal": 3, - "type_info": "Int8" - }, - { - "name": "is_finished", - "ordinal": 4, - "type_info": "Bool" - }, - { - "name": "fee_account_address", - "ordinal": 5, - "type_info": "Bytea" - }, - { - "name": "l2_to_l1_logs", - "ordinal": 6, - "type_info": "ByteaArray" - }, - { - "name": "l2_to_l1_messages", - "ordinal": 7, - "type_info": "ByteaArray" - }, - { - "name": "bloom", - "ordinal": 8, - "type_info": "Bytea" - }, - { - "name": "priority_ops_onchain_data", - "ordinal": 9, - "type_info": "ByteaArray" - }, - { - "name": "used_contract_hashes", - "ordinal": 10, - "type_info": "Jsonb" - }, - { - "name": "base_fee_per_gas", - "ordinal": 11, - "type_info": "Numeric" - }, - { - "name": "l1_gas_price", - "ordinal": 12, - "type_info": "Int8" - }, - { - "name": "l2_fair_gas_price", - "ordinal": 13, - "type_info": "Int8" - }, - { - 
"name": "bootloader_code_hash", - "ordinal": 14, - "type_info": "Bytea" - }, - { - "name": "default_aa_code_hash", - "ordinal": 15, - "type_info": "Bytea" - }, - { - "name": "protocol_version", - "ordinal": 16, - "type_info": "Int4" - }, - { - "name": "system_logs", - "ordinal": 17, - "type_info": "ByteaArray" - }, - { - "name": "compressed_state_diffs", - "ordinal": 18, - "type_info": "Bytea" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - false, - false, - false, - false, - false, - false, - false, - false, - false, - true, - true, - true, - false, - true - ], - "parameters": { - "Left": [ - "Int4" - ] - } - }, - "query": "SELECT number, l1_tx_count, l2_tx_count, timestamp, is_finished, fee_account_address, l2_to_l1_logs, l2_to_l1_messages, bloom, priority_ops_onchain_data, used_contract_hashes, base_fee_per_gas, l1_gas_price, l2_fair_gas_price, bootloader_code_hash, default_aa_code_hash, protocol_version, system_logs, compressed_state_diffs FROM l1_batches WHERE eth_commit_tx_id = $1 OR eth_prove_tx_id = $1 OR eth_execute_tx_id = $1" - }, - "84703029e09ab1362aa4b4177b38be594d2daf17e69508cae869647028055efb": { - "describe": { - "columns": [ - { - "name": "l1_batch_number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "status", - "ordinal": 1, - "type_info": "Text" - } - ], - "nullable": [ - false, - false - ], - "parameters": { - "Left": [ - "Text", - "Text" - ] - } - }, - "query": "SELECT l1_batch_number, status FROM proof_compression_jobs_fri\n WHERE l1_batch_number = ( SELECT MIN(l1_batch_number) FROM proof_compression_jobs_fri WHERE status = $1 OR status = $2\n )" - }, - "848d82292a4960154449b425e0b10e250a4ced4c27fb324657589859a512d3a4": { - "describe": { - "columns": [ - { - "name": "tx_hash", - "ordinal": 0, - "type_info": "Text" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Int4" - ] - } - }, - "query": "SELECT tx_hash FROM eth_txs_history WHERE eth_tx_id = $1 AND confirmed_at IS NOT NULL" - }, - "85ac7fb2c4175d662c8f466e722d28b0eadcd2f252a788e366dbd05eac547b93": { - "describe": { - "columns": [ - { - "name": "number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "timestamp", - "ordinal": 1, - "type_info": "Int8" - }, - { - "name": "is_finished", - "ordinal": 2, - "type_info": "Bool" - }, - { - "name": "l1_tx_count", - "ordinal": 3, - "type_info": "Int4" - }, - { - "name": "l2_tx_count", - "ordinal": 4, - "type_info": "Int4" - }, - { - "name": "fee_account_address", - "ordinal": 5, - "type_info": "Bytea" - }, - { - "name": "bloom", - "ordinal": 6, - "type_info": "Bytea" - }, - { - "name": "priority_ops_onchain_data", - "ordinal": 7, - "type_info": "ByteaArray" - }, - { - "name": "hash", - "ordinal": 8, - "type_info": "Bytea" - }, - { - "name": "parent_hash", - "ordinal": 9, - "type_info": "Bytea" - }, - { - "name": "commitment", - "ordinal": 10, - "type_info": "Bytea" - }, - { - "name": "compressed_write_logs", - "ordinal": 11, - "type_info": "Bytea" - }, - { - "name": "compressed_contracts", - "ordinal": 12, - "type_info": "Bytea" - }, - { - "name": "eth_prove_tx_id", - "ordinal": 13, - "type_info": "Int4" - }, - { - "name": "eth_commit_tx_id", - "ordinal": 14, - "type_info": "Int4" - }, - { - "name": "eth_execute_tx_id", - "ordinal": 15, - "type_info": "Int4" - }, - { - "name": "merkle_root_hash", - "ordinal": 16, - "type_info": "Bytea" - }, - { - "name": "l2_to_l1_logs", - "ordinal": 17, - "type_info": "ByteaArray" - }, - { - "name": "l2_to_l1_messages", - "ordinal": 18, - "type_info": "ByteaArray" - }, - 
{ - "name": "used_contract_hashes", - "ordinal": 19, - "type_info": "Jsonb" - }, - { - "name": "compressed_initial_writes", - "ordinal": 20, - "type_info": "Bytea" - }, - { - "name": "compressed_repeated_writes", - "ordinal": 21, - "type_info": "Bytea" - }, - { - "name": "l2_l1_compressed_messages", - "ordinal": 22, - "type_info": "Bytea" - }, - { - "name": "l2_l1_merkle_root", - "ordinal": 23, - "type_info": "Bytea" - }, - { - "name": "l1_gas_price", - "ordinal": 24, - "type_info": "Int8" - }, - { - "name": "l2_fair_gas_price", - "ordinal": 25, - "type_info": "Int8" - }, - { - "name": "rollup_last_leaf_index", - "ordinal": 26, - "type_info": "Int8" - }, - { - "name": "zkporter_is_available", - "ordinal": 27, - "type_info": "Bool" - }, - { - "name": "bootloader_code_hash", - "ordinal": 28, - "type_info": "Bytea" - }, - { - "name": "default_aa_code_hash", - "ordinal": 29, - "type_info": "Bytea" - }, - { - "name": "base_fee_per_gas", - "ordinal": 30, - "type_info": "Numeric" - }, - { - "name": "aux_data_hash", - "ordinal": 31, - "type_info": "Bytea" - }, - { - "name": "pass_through_data_hash", - "ordinal": 32, - "type_info": "Bytea" - }, - { - "name": "meta_parameters_hash", - "ordinal": 33, - "type_info": "Bytea" - }, - { - "name": "protocol_version", - "ordinal": 34, - "type_info": "Int4" - }, - { - "name": "compressed_state_diffs", - "ordinal": 35, - "type_info": "Bytea" - }, - { - "name": "system_logs", - "ordinal": 36, - "type_info": "ByteaArray" - }, - { - "name": "events_queue_commitment", - "ordinal": 37, - "type_info": "Bytea" - }, - { - "name": "bootloader_initial_content_commitment", - "ordinal": 38, - "type_info": "Bytea" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - false, - false, - false, - true, - true, - true, - true, - true, - true, - true, - true, - true, - false, - false, - false, - true, - true, - true, - true, - false, - false, - true, - true, - true, - true, - false, - true, - true, - true, - true, - true, - false, - true, - true - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT number, timestamp, is_finished, l1_tx_count, l2_tx_count, fee_account_address, bloom, priority_ops_onchain_data, hash, parent_hash, commitment, compressed_write_logs, compressed_contracts, eth_prove_tx_id, eth_commit_tx_id, eth_execute_tx_id, merkle_root_hash, l2_to_l1_logs, l2_to_l1_messages, used_contract_hashes, compressed_initial_writes, compressed_repeated_writes, l2_l1_compressed_messages, l2_l1_merkle_root, l1_gas_price, l2_fair_gas_price, rollup_last_leaf_index, zkporter_is_available, bootloader_code_hash, default_aa_code_hash, base_fee_per_gas, aux_data_hash, pass_through_data_hash, meta_parameters_hash, protocol_version, compressed_state_diffs, system_logs, events_queue_commitment, bootloader_initial_content_commitment\n FROM l1_batches LEFT JOIN commitments ON commitments.l1_batch_number = l1_batches.number WHERE number = 0 OR eth_commit_tx_id IS NOT NULL AND commitment IS NOT NULL ORDER BY number DESC LIMIT 1" - }, - "85c52cb09c73499507144e3a684c3230c2c71eb4f8ddef43e67fbd33de2747c8": { - "describe": { - "columns": [ - { - "name": "timestamp", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "hash", - "ordinal": 1, - "type_info": "Bytea" - } - ], - "nullable": [ - false, - true - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT timestamp, hash FROM l1_batches WHERE number = $1" - }, - "87e1ae393bf250f834704c940482884c9ed729a24f41d1ec07319fa0cbcc21a7": { - "describe": { - "columns": [], - "nullable": [], - 
"parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "DELETE FROM l1_batches WHERE number > $1" - }, - "8996a1794585dfe0f9c16a11e113831a63d5d944bc8061d7caa25ea33f12b19d": { - "describe": { - "columns": [ - { - "name": "is_priority", - "ordinal": 0, - "type_info": "Bool" - }, - { - "name": "initiator_address", - "ordinal": 1, - "type_info": "Bytea" - }, - { - "name": "gas_limit", - "ordinal": 2, - "type_info": "Numeric" - }, - { - "name": "gas_per_pubdata_limit", - "ordinal": 3, - "type_info": "Numeric" - }, - { - "name": "received_at", - "ordinal": 4, - "type_info": "Timestamp" - }, - { - "name": "miniblock_number", - "ordinal": 5, - "type_info": "Int8" - }, - { - "name": "error", - "ordinal": 6, - "type_info": "Varchar" - }, - { - "name": "effective_gas_price", - "ordinal": 7, - "type_info": "Numeric" - }, - { - "name": "refunded_gas", - "ordinal": 8, - "type_info": "Int8" - }, - { - "name": "eth_commit_tx_hash?", - "ordinal": 9, - "type_info": "Text" - }, - { - "name": "eth_prove_tx_hash?", - "ordinal": 10, - "type_info": "Text" - }, - { - "name": "eth_execute_tx_hash?", - "ordinal": 11, - "type_info": "Text" - } - ], - "nullable": [ - false, - false, - true, - true, - false, - true, - true, - true, - false, - false, - false, - false - ], - "parameters": { - "Left": [ - "Bytea" - ] - } - }, - "query": "\n SELECT transactions.is_priority,\n transactions.initiator_address,\n transactions.gas_limit,\n transactions.gas_per_pubdata_limit,\n transactions.received_at,\n transactions.miniblock_number,\n transactions.error,\n transactions.effective_gas_price,\n transactions.refunded_gas,\n commit_tx.tx_hash as \"eth_commit_tx_hash?\",\n prove_tx.tx_hash as \"eth_prove_tx_hash?\",\n execute_tx.tx_hash as \"eth_execute_tx_hash?\"\n FROM transactions\n LEFT JOIN miniblocks ON miniblocks.number = transactions.miniblock_number\n LEFT JOIN l1_batches ON l1_batches.number = miniblocks.l1_batch_number\n LEFT JOIN eth_txs_history as commit_tx ON (l1_batches.eth_commit_tx_id = commit_tx.eth_tx_id AND commit_tx.confirmed_at IS NOT NULL)\n LEFT JOIN eth_txs_history as prove_tx ON (l1_batches.eth_prove_tx_id = prove_tx.eth_tx_id AND prove_tx.confirmed_at IS NOT NULL)\n LEFT JOIN eth_txs_history as execute_tx ON (l1_batches.eth_execute_tx_id = execute_tx.eth_tx_id AND execute_tx.confirmed_at IS NOT NULL)\n WHERE transactions.hash = $1\n " - }, - "89b124c78f4f6e86790af8ec391a2c486ce01b33cfb4492a443187b1731cae1e": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int4", - "Int8", - "Int8" - ] - } - }, - "query": "UPDATE l1_batches SET eth_prove_tx_id = $1, updated_at = now() WHERE number BETWEEN $2 AND $3" - }, - "8a05b6c052ace9b5a383b301f3f441536d90a96bbb791f4711304b22e02193df": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Time", - "Int8" - ] - } - }, - "query": "\n UPDATE leaf_aggregation_witness_jobs_fri\n SET status = 'successful', updated_at = now(), time_taken = $1\n WHERE id = $2\n " - }, - "8cd540b6063f4a0c1bf4ccb3d111a0ecc341ca8b46b83544c515aa4d809ab9f1": { - "describe": { - "columns": [ - { - "name": "number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "l1_batch_number!", - "ordinal": 1, - "type_info": "Int8" - }, - { - "name": "timestamp", - "ordinal": 2, - "type_info": "Int8" - }, - { - "name": "l1_tx_count", - "ordinal": 3, - "type_info": "Int4" - }, - { - "name": "l2_tx_count", - "ordinal": 4, - "type_info": "Int4" - }, - { - "name": "root_hash?", - "ordinal": 5, - "type_info": "Bytea" - }, - { - 
"name": "commit_tx_hash?", - "ordinal": 6, - "type_info": "Text" - }, - { - "name": "committed_at?", - "ordinal": 7, - "type_info": "Timestamp" - }, - { - "name": "prove_tx_hash?", - "ordinal": 8, - "type_info": "Text" - }, - { - "name": "proven_at?", - "ordinal": 9, - "type_info": "Timestamp" - }, - { - "name": "execute_tx_hash?", - "ordinal": 10, - "type_info": "Text" - }, - { - "name": "executed_at?", - "ordinal": 11, - "type_info": "Timestamp" - }, - { - "name": "l1_gas_price", - "ordinal": 12, - "type_info": "Int8" - }, - { - "name": "l2_fair_gas_price", - "ordinal": 13, - "type_info": "Int8" - }, - { - "name": "bootloader_code_hash", - "ordinal": 14, - "type_info": "Bytea" - }, - { - "name": "default_aa_code_hash", - "ordinal": 15, - "type_info": "Bytea" - }, - { - "name": "protocol_version", - "ordinal": 16, - "type_info": "Int4" - }, - { - "name": "fee_account_address?", - "ordinal": 17, - "type_info": "Bytea" - } - ], - "nullable": [ - false, - null, - false, - false, - false, - false, - false, - true, - false, - true, - false, - true, - false, - false, - true, - true, - true, - false - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "\n SELECT miniblocks.number,\n COALESCE(miniblocks.l1_batch_number, (SELECT (max(number) + 1) FROM l1_batches)) as \"l1_batch_number!\",\n miniblocks.timestamp,\n miniblocks.l1_tx_count,\n miniblocks.l2_tx_count,\n miniblocks.hash as \"root_hash?\",\n commit_tx.tx_hash as \"commit_tx_hash?\",\n commit_tx.confirmed_at as \"committed_at?\",\n prove_tx.tx_hash as \"prove_tx_hash?\",\n prove_tx.confirmed_at as \"proven_at?\",\n execute_tx.tx_hash as \"execute_tx_hash?\",\n execute_tx.confirmed_at as \"executed_at?\",\n miniblocks.l1_gas_price,\n miniblocks.l2_fair_gas_price,\n miniblocks.bootloader_code_hash,\n miniblocks.default_aa_code_hash,\n miniblocks.protocol_version,\n l1_batches.fee_account_address as \"fee_account_address?\"\n FROM miniblocks\n LEFT JOIN l1_batches ON miniblocks.l1_batch_number = l1_batches.number\n LEFT JOIN eth_txs_history as commit_tx ON (l1_batches.eth_commit_tx_id = commit_tx.eth_tx_id AND commit_tx.confirmed_at IS NOT NULL)\n LEFT JOIN eth_txs_history as prove_tx ON (l1_batches.eth_prove_tx_id = prove_tx.eth_tx_id AND prove_tx.confirmed_at IS NOT NULL)\n LEFT JOIN eth_txs_history as execute_tx ON (l1_batches.eth_execute_tx_id = execute_tx.eth_tx_id AND execute_tx.confirmed_at IS NOT NULL)\n WHERE miniblocks.number = $1\n " - }, - "8d3c9575e3cea3956ba84edc982fcf6e0f7667350e6c2cd6801db8400eabaf9b": { - "describe": { - "columns": [ - { - "name": "hashed_key", - "ordinal": 0, - "type_info": "Bytea" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT DISTINCT ON (hashed_key) hashed_key FROM (SELECT * FROM storage_logs WHERE miniblock_number > $1) inn" - }, - "8dcbaaa6186da52ca8b440b6428826288dc668af5a6fc99ef3078c8bcb38c419": { - "describe": { - "columns": [ - { - "name": "l1_batch_number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "circuit_id", - "ordinal": 1, - "type_info": "Int2" - }, - { - "name": "depth", - "ordinal": 2, - "type_info": "Int4" - } - ], - "nullable": [ - false, - false, - false - ], - "parameters": { - "Left": [] - } - }, - "query": "\n UPDATE node_aggregation_witness_jobs_fri\n SET status='queued'\n WHERE (l1_batch_number, circuit_id, depth) IN\n (SELECT prover_jobs_fri.l1_batch_number, prover_jobs_fri.circuit_id, prover_jobs_fri.depth\n FROM prover_jobs_fri\n JOIN node_aggregation_witness_jobs_fri nawj ON\n 
prover_jobs_fri.l1_batch_number = nawj.l1_batch_number\n AND prover_jobs_fri.circuit_id = nawj.circuit_id\n AND prover_jobs_fri.depth = nawj.depth\n WHERE nawj.status = 'waiting_for_proofs'\n AND prover_jobs_fri.status = 'successful'\n AND prover_jobs_fri.aggregation_round = 2\n GROUP BY prover_jobs_fri.l1_batch_number, prover_jobs_fri.circuit_id, prover_jobs_fri.depth, nawj.number_of_dependent_jobs\n HAVING COUNT(*) = nawj.number_of_dependent_jobs)\n RETURNING l1_batch_number, circuit_id, depth;\n " - }, - "8f75c5aa615080fc02b60baccae9c49a81e282a54864ea3eb874ebe10a23eafe": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "UPDATE prover_jobs_fri SET status = 'sent_to_server', updated_at = now() WHERE l1_batch_number = $1" - }, - "8fa1a390d7b11b60b3352fafc0a8a7fa15bc761b1bb902f5105fd66b2e3087f2": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "\n INSERT INTO scheduler_dependency_tracker_fri\n (l1_batch_number, status, created_at, updated_at)\n VALUES ($1, 'waiting_for_proofs', now(), now())\n ON CONFLICT(l1_batch_number)\n DO UPDATE SET updated_at=now()\n " - }, - "8fda20e48c41a9c1e58c8c607222a65e1409f63eba91ac99b2736ca5ebbb5ec6": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Bytea", - "Bytea", - "Numeric", - "Numeric", - "Numeric", - "Jsonb", - "Int4", - "Bytea", - "Int4", - "Numeric", - "Bytea", - "Bytea", - "Int4", - "Numeric", - "Bytea", - "Timestamp" - ] - } - }, - "query": "\n INSERT INTO transactions\n (\n hash,\n is_priority,\n initiator_address,\n\n gas_limit,\n max_fee_per_gas,\n gas_per_pubdata_limit,\n\n data,\n upgrade_id,\n contract_address,\n l1_block_number,\n value,\n\n paymaster,\n paymaster_input,\n tx_format,\n\n l1_tx_mint,\n l1_tx_refund_recipient,\n\n received_at,\n created_at,\n updated_at\n )\n VALUES\n (\n $1, TRUE, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12,\n $13, $14, $15, $16, now(), now()\n )\n ON CONFLICT (hash) DO NOTHING\n " - }, - "8fe01036cac5181aabfdc06095da291c4de6b1e0f82f846c37509bb550ef544e": { - "describe": { - "columns": [ - { - "name": "l1_address", - "ordinal": 0, - "type_info": "Bytea" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT l1_address FROM tokens WHERE well_known = false" - }, - "8ff84e800faad1a10eedf537195d37a74a68d8020f286444824d6ccac6727003": { - "describe": { - "columns": [ - { - "name": "number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "timestamp", - "ordinal": 1, - "type_info": "Int8" - }, - { - "name": "is_finished", - "ordinal": 2, - "type_info": "Bool" - }, - { - "name": "l1_tx_count", - "ordinal": 3, - "type_info": "Int4" - }, - { - "name": "l2_tx_count", - "ordinal": 4, - "type_info": "Int4" - }, - { - "name": "fee_account_address", - "ordinal": 5, - "type_info": "Bytea" - }, - { - "name": "bloom", - "ordinal": 6, - "type_info": "Bytea" - }, - { - "name": "priority_ops_onchain_data", - "ordinal": 7, - "type_info": "ByteaArray" - }, - { - "name": "hash", - "ordinal": 8, - "type_info": "Bytea" - }, - { - "name": "parent_hash", - "ordinal": 9, - "type_info": "Bytea" - }, - { - "name": "commitment", - "ordinal": 10, - "type_info": "Bytea" - }, - { - "name": "compressed_write_logs", - "ordinal": 11, - "type_info": "Bytea" - }, - { - "name": "compressed_contracts", - "ordinal": 12, - "type_info": "Bytea" - }, - { - "name": "eth_prove_tx_id", - "ordinal": 13, - "type_info": "Int4" - }, - { - 
"name": "eth_commit_tx_id", - "ordinal": 14, - "type_info": "Int4" - }, - { - "name": "eth_execute_tx_id", - "ordinal": 15, - "type_info": "Int4" - }, - { - "name": "merkle_root_hash", - "ordinal": 16, - "type_info": "Bytea" - }, - { - "name": "l2_to_l1_logs", - "ordinal": 17, - "type_info": "ByteaArray" - }, - { - "name": "l2_to_l1_messages", - "ordinal": 18, - "type_info": "ByteaArray" - }, - { - "name": "used_contract_hashes", - "ordinal": 19, - "type_info": "Jsonb" - }, - { - "name": "compressed_initial_writes", - "ordinal": 20, - "type_info": "Bytea" - }, - { - "name": "compressed_repeated_writes", - "ordinal": 21, - "type_info": "Bytea" - }, - { - "name": "l2_l1_compressed_messages", - "ordinal": 22, - "type_info": "Bytea" - }, - { - "name": "l2_l1_merkle_root", - "ordinal": 23, - "type_info": "Bytea" - }, - { - "name": "l1_gas_price", - "ordinal": 24, - "type_info": "Int8" - }, - { - "name": "l2_fair_gas_price", - "ordinal": 25, - "type_info": "Int8" - }, - { - "name": "rollup_last_leaf_index", - "ordinal": 26, - "type_info": "Int8" - }, - { - "name": "zkporter_is_available", - "ordinal": 27, - "type_info": "Bool" - }, - { - "name": "bootloader_code_hash", - "ordinal": 28, - "type_info": "Bytea" - }, - { - "name": "default_aa_code_hash", - "ordinal": 29, - "type_info": "Bytea" - }, - { - "name": "base_fee_per_gas", - "ordinal": 30, - "type_info": "Numeric" - }, - { - "name": "aux_data_hash", - "ordinal": 31, - "type_info": "Bytea" - }, - { - "name": "pass_through_data_hash", - "ordinal": 32, - "type_info": "Bytea" - }, - { - "name": "meta_parameters_hash", - "ordinal": 33, - "type_info": "Bytea" - }, - { - "name": "protocol_version", - "ordinal": 34, - "type_info": "Int4" - }, - { - "name": "compressed_state_diffs", - "ordinal": 35, - "type_info": "Bytea" - }, - { - "name": "system_logs", - "ordinal": 36, - "type_info": "ByteaArray" - }, - { - "name": "events_queue_commitment", - "ordinal": 37, - "type_info": "Bytea" - }, - { - "name": "bootloader_initial_content_commitment", - "ordinal": 38, - "type_info": "Bytea" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - false, - false, - false, - true, - true, - true, - true, - true, - true, - true, - true, - true, - false, - false, - false, - true, - true, - true, - true, - false, - false, - true, - true, - true, - true, - false, - true, - true, - true, - true, - true, - false, - true, - true - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT number, timestamp, is_finished, l1_tx_count, l2_tx_count, fee_account_address, bloom, priority_ops_onchain_data, hash, parent_hash, commitment, compressed_write_logs, compressed_contracts, eth_prove_tx_id, eth_commit_tx_id, eth_execute_tx_id, merkle_root_hash, l2_to_l1_logs, l2_to_l1_messages, used_contract_hashes, compressed_initial_writes, compressed_repeated_writes, l2_l1_compressed_messages, l2_l1_merkle_root, l1_gas_price, l2_fair_gas_price, rollup_last_leaf_index, zkporter_is_available, bootloader_code_hash, default_aa_code_hash, base_fee_per_gas, aux_data_hash, pass_through_data_hash, meta_parameters_hash, protocol_version, compressed_state_diffs, system_logs, events_queue_commitment, bootloader_initial_content_commitment FROM l1_batches LEFT JOIN commitments ON commitments.l1_batch_number = l1_batches.number WHERE eth_prove_tx_id IS NOT NULL AND eth_execute_tx_id IS NULL ORDER BY number LIMIT $1" - }, - "8ff9d76b4791af1177231661847b6c8879ad625fd11c15de51a16c81d8712129": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - 
"Left": [ - "Int8", - "Bytea", - "Text", - "Int4" - ] - } - }, - "query": "INSERT INTO witness_inputs(l1_batch_number, merkle_tree_paths, merkel_tree_paths_blob_url, status, protocol_version, created_at, updated_at) VALUES ($1, $2, $3, 'waiting_for_artifacts', $4, now(), now()) ON CONFLICT (l1_batch_number) DO NOTHING" - }, - "9051cc1a715e152afdd0c19739c76666b1a9b134e17601ef9fdf3dec5d2fc561": { - "describe": { - "columns": [ - { - "name": "number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "timestamp", - "ordinal": 1, - "type_info": "Int8" - }, - { - "name": "is_finished", - "ordinal": 2, - "type_info": "Bool" - }, - { - "name": "l1_tx_count", - "ordinal": 3, - "type_info": "Int4" - }, - { - "name": "l2_tx_count", - "ordinal": 4, - "type_info": "Int4" - }, - { - "name": "fee_account_address", - "ordinal": 5, - "type_info": "Bytea" - }, - { - "name": "bloom", - "ordinal": 6, - "type_info": "Bytea" - }, - { - "name": "priority_ops_onchain_data", - "ordinal": 7, - "type_info": "ByteaArray" - }, - { - "name": "hash", - "ordinal": 8, - "type_info": "Bytea" - }, - { - "name": "parent_hash", - "ordinal": 9, - "type_info": "Bytea" - }, - { - "name": "commitment", - "ordinal": 10, - "type_info": "Bytea" - }, - { - "name": "compressed_write_logs", - "ordinal": 11, - "type_info": "Bytea" - }, - { - "name": "compressed_contracts", - "ordinal": 12, - "type_info": "Bytea" - }, - { - "name": "eth_prove_tx_id", - "ordinal": 13, - "type_info": "Int4" - }, - { - "name": "eth_commit_tx_id", - "ordinal": 14, - "type_info": "Int4" - }, - { - "name": "eth_execute_tx_id", - "ordinal": 15, - "type_info": "Int4" - }, - { - "name": "merkle_root_hash", - "ordinal": 16, - "type_info": "Bytea" - }, - { - "name": "l2_to_l1_logs", - "ordinal": 17, - "type_info": "ByteaArray" - }, - { - "name": "l2_to_l1_messages", - "ordinal": 18, - "type_info": "ByteaArray" - }, - { - "name": "used_contract_hashes", - "ordinal": 19, - "type_info": "Jsonb" - }, - { - "name": "compressed_initial_writes", - "ordinal": 20, - "type_info": "Bytea" - }, - { - "name": "compressed_repeated_writes", - "ordinal": 21, - "type_info": "Bytea" - }, - { - "name": "l2_l1_compressed_messages", - "ordinal": 22, - "type_info": "Bytea" - }, - { - "name": "l2_l1_merkle_root", - "ordinal": 23, - "type_info": "Bytea" - }, - { - "name": "l1_gas_price", - "ordinal": 24, - "type_info": "Int8" - }, - { - "name": "l2_fair_gas_price", - "ordinal": 25, - "type_info": "Int8" - }, - { - "name": "rollup_last_leaf_index", - "ordinal": 26, - "type_info": "Int8" - }, - { - "name": "zkporter_is_available", - "ordinal": 27, - "type_info": "Bool" - }, - { - "name": "bootloader_code_hash", - "ordinal": 28, - "type_info": "Bytea" - }, - { - "name": "default_aa_code_hash", - "ordinal": 29, - "type_info": "Bytea" - }, - { - "name": "base_fee_per_gas", - "ordinal": 30, - "type_info": "Numeric" - }, - { - "name": "aux_data_hash", - "ordinal": 31, - "type_info": "Bytea" - }, - { - "name": "pass_through_data_hash", - "ordinal": 32, - "type_info": "Bytea" - }, - { - "name": "meta_parameters_hash", - "ordinal": 33, - "type_info": "Bytea" - }, - { - "name": "protocol_version", - "ordinal": 34, - "type_info": "Int4" - }, - { - "name": "compressed_state_diffs", - "ordinal": 35, - "type_info": "Bytea" - }, - { - "name": "system_logs", - "ordinal": 36, - "type_info": "ByteaArray" - }, - { - "name": "events_queue_commitment", - "ordinal": 37, - "type_info": "Bytea" - }, - { - "name": "bootloader_initial_content_commitment", - "ordinal": 38, - "type_info": "Bytea" - } - ], - 
"nullable": [ - false, - false, - false, - false, - false, - false, - false, - false, - true, - true, - true, - true, - true, - true, - true, - true, - true, - false, - false, - false, - true, - true, - true, - true, - false, - false, - true, - true, - true, - true, - false, - true, - true, - true, - true, - true, - false, - true, - true - ], - "parameters": { - "Left": [ - "Int8", - "Int8", - "Int8" - ] - } - }, - "query": "SELECT number, timestamp, is_finished, l1_tx_count, l2_tx_count, fee_account_address, bloom, priority_ops_onchain_data, hash, parent_hash, commitment, compressed_write_logs, compressed_contracts, eth_prove_tx_id, eth_commit_tx_id, eth_execute_tx_id, merkle_root_hash, l2_to_l1_logs, l2_to_l1_messages, used_contract_hashes, compressed_initial_writes, compressed_repeated_writes, l2_l1_compressed_messages, l2_l1_merkle_root, l1_gas_price, l2_fair_gas_price, rollup_last_leaf_index, zkporter_is_available, bootloader_code_hash, default_aa_code_hash, base_fee_per_gas, aux_data_hash, pass_through_data_hash, meta_parameters_hash, protocol_version, compressed_state_diffs, system_logs, events_queue_commitment, bootloader_initial_content_commitment FROM l1_batches LEFT JOIN commitments ON commitments.l1_batch_number = l1_batches.number WHERE number BETWEEN $1 AND $2 ORDER BY number LIMIT $3" - }, - "91db60cc4f98ebcaef1435342607da0a86fe16e20a696cb81a569772d5d5ae88": { - "describe": { - "columns": [ - { - "name": "value", - "ordinal": 0, - "type_info": "Bytea" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Bytea", - "Int8" - ] - } - }, - "query": "\n SELECT value\n FROM storage_logs\n WHERE storage_logs.hashed_key = $1 AND storage_logs.miniblock_number <= $2\n ORDER BY storage_logs.miniblock_number DESC, storage_logs.operation_number DESC\n LIMIT 1\n " - }, - "944c38995043e7b11e6633beb68b5479059ff27b26fd2df171a3d9650f070547": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "status", - "ordinal": 1, - "type_info": "Text" - }, - { - "name": "attempts", - "ordinal": 2, - "type_info": "Int2" - } - ], - "nullable": [ - false, - false, - false - ], - "parameters": { - "Left": [ - "Interval", - "Int2" - ] - } - }, - "query": "\n UPDATE leaf_aggregation_witness_jobs_fri\n SET status = 'queued', updated_at = now(), processing_started_at = now()\n WHERE (status = 'in_progress' AND processing_started_at <= now() - $1::interval AND attempts < $2)\n OR (status = 'failed' AND attempts < $2)\n RETURNING id, status, attempts\n " - }, - "9554593134830bc197e95f3a7e69844839bfe31bf567934ddbab760017710e39": { - "describe": { - "columns": [ - { - "name": "bytecode", - "ordinal": 0, - "type_info": "Bytea" - }, - { - "name": "data?", - "ordinal": 1, - "type_info": "Jsonb" - }, - { - "name": "contract_address?", - "ordinal": 2, - "type_info": "Bytea" - } - ], - "nullable": [ - false, - false, - true - ], - "parameters": { - "Left": [ - "Bytea", - "Bytea" - ] - } - }, - "query": "SELECT factory_deps.bytecode, transactions.data as \"data?\", transactions.contract_address as \"contract_address?\" FROM ( SELECT * FROM storage_logs WHERE storage_logs.hashed_key = $1 ORDER BY miniblock_number DESC, operation_number DESC LIMIT 1 ) storage_logs JOIN factory_deps ON factory_deps.bytecode_hash = storage_logs.value LEFT JOIN transactions ON transactions.hash = storage_logs.tx_hash WHERE storage_logs.value != $2" - }, - "957ceda740ffb36740acf1e3fbacf76a2ea7422dd9d76a38d745113359e4b7a6": { - "describe": { - "columns": [ - { - "name": 
"protocol_version", - "ordinal": 0, - "type_info": "Int4" - } - ], - "nullable": [ - true - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT protocol_version FROM l1_batches WHERE number = $1" - }, - "96623dfb2cb9efa255a54d87d61f748aebaf4e75ee09c05d04535d8c97a95d88": { - "describe": { - "columns": [ - { - "name": "count!", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - null - ], - "parameters": { - "Left": [ - "Bytea" - ] - } - }, - "query": "SELECT COUNT(*) as \"count!\" FROM contracts_verification_info WHERE address = $1" - }, - "96b1cd2bb6861064b633d597a4a09d279dbc7bcd7a810a7270da3d7941af0fff": { - "describe": { - "columns": [ - { - "name": "count!", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - null - ], - "parameters": { - "Left": [ - "Bytea", - "Bytea" - ] - } - }, - "query": "SELECT COUNT(*) as \"count!\" FROM (SELECT * FROM storage_logs WHERE storage_logs.hashed_key = $1 ORDER BY storage_logs.miniblock_number DESC, storage_logs.operation_number DESC LIMIT 1) sl WHERE sl.value != $2" - }, - "96f6d06a49646f93ba1918080ef1efba868d506c6b51ede981e610f1b57bf88b": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "ByteaArray" - ] - } - }, - "query": "DELETE FROM storage WHERE hashed_key = ANY($1)" - }, - "97d81c27885fda4390ebc9789c6169cb94a449f583f7819ec74286fb0d9f81d5": { - "describe": { - "columns": [ - { - "name": "number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "timestamp", - "ordinal": 1, - "type_info": "Int8" - }, - { - "name": "is_finished", - "ordinal": 2, - "type_info": "Bool" - }, - { - "name": "l1_tx_count", - "ordinal": 3, - "type_info": "Int4" - }, - { - "name": "l2_tx_count", - "ordinal": 4, - "type_info": "Int4" - }, - { - "name": "fee_account_address", - "ordinal": 5, - "type_info": "Bytea" - }, - { - "name": "bloom", - "ordinal": 6, - "type_info": "Bytea" - }, - { - "name": "priority_ops_onchain_data", - "ordinal": 7, - "type_info": "ByteaArray" - }, - { - "name": "hash", - "ordinal": 8, - "type_info": "Bytea" - }, - { - "name": "parent_hash", - "ordinal": 9, - "type_info": "Bytea" - }, - { - "name": "commitment", - "ordinal": 10, - "type_info": "Bytea" - }, - { - "name": "compressed_write_logs", - "ordinal": 11, - "type_info": "Bytea" - }, - { - "name": "compressed_contracts", - "ordinal": 12, - "type_info": "Bytea" - }, - { - "name": "eth_prove_tx_id", - "ordinal": 13, - "type_info": "Int4" - }, - { - "name": "eth_commit_tx_id", - "ordinal": 14, - "type_info": "Int4" - }, - { - "name": "eth_execute_tx_id", - "ordinal": 15, - "type_info": "Int4" - }, - { - "name": "merkle_root_hash", - "ordinal": 16, - "type_info": "Bytea" - }, - { - "name": "l2_to_l1_logs", - "ordinal": 17, - "type_info": "ByteaArray" - }, - { - "name": "l2_to_l1_messages", - "ordinal": 18, - "type_info": "ByteaArray" - }, - { - "name": "used_contract_hashes", - "ordinal": 19, - "type_info": "Jsonb" - }, - { - "name": "compressed_initial_writes", - "ordinal": 20, - "type_info": "Bytea" - }, - { - "name": "compressed_repeated_writes", - "ordinal": 21, - "type_info": "Bytea" - }, - { - "name": "l2_l1_compressed_messages", - "ordinal": 22, - "type_info": "Bytea" - }, - { - "name": "l2_l1_merkle_root", - "ordinal": 23, - "type_info": "Bytea" - }, - { - "name": "l1_gas_price", - "ordinal": 24, - "type_info": "Int8" - }, - { - "name": "l2_fair_gas_price", - "ordinal": 25, - "type_info": "Int8" - }, - { - "name": "rollup_last_leaf_index", - "ordinal": 26, - "type_info": "Int8" - }, - { - "name": 
"zkporter_is_available", - "ordinal": 27, - "type_info": "Bool" - }, - { - "name": "bootloader_code_hash", - "ordinal": 28, - "type_info": "Bytea" - }, - { - "name": "default_aa_code_hash", - "ordinal": 29, - "type_info": "Bytea" - }, - { - "name": "base_fee_per_gas", - "ordinal": 30, - "type_info": "Numeric" - }, - { - "name": "aux_data_hash", - "ordinal": 31, - "type_info": "Bytea" - }, - { - "name": "pass_through_data_hash", - "ordinal": 32, - "type_info": "Bytea" - }, - { - "name": "meta_parameters_hash", - "ordinal": 33, - "type_info": "Bytea" - }, - { - "name": "protocol_version", - "ordinal": 34, - "type_info": "Int4" - }, - { - "name": "system_logs", - "ordinal": 35, - "type_info": "ByteaArray" - }, - { - "name": "compressed_state_diffs", - "ordinal": 36, - "type_info": "Bytea" - }, - { - "name": "events_queue_commitment", - "ordinal": 37, - "type_info": "Bytea" - }, - { - "name": "bootloader_initial_content_commitment", - "ordinal": 38, - "type_info": "Bytea" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - false, - false, - false, - true, - true, - true, - true, - true, - true, - true, - true, - true, - false, - false, - false, - true, - true, - true, - true, - false, - false, - true, - true, - true, - true, - false, - true, - true, - true, - true, - false, - true, - true, - true - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT number, timestamp, is_finished, l1_tx_count, l2_tx_count, fee_account_address, bloom, priority_ops_onchain_data, hash, parent_hash, commitment, compressed_write_logs, compressed_contracts, eth_prove_tx_id, eth_commit_tx_id, eth_execute_tx_id, merkle_root_hash, l2_to_l1_logs, l2_to_l1_messages, used_contract_hashes, compressed_initial_writes, compressed_repeated_writes, l2_l1_compressed_messages, l2_l1_merkle_root, l1_gas_price, l2_fair_gas_price, rollup_last_leaf_index, zkporter_is_available, bootloader_code_hash, default_aa_code_hash, base_fee_per_gas, aux_data_hash, pass_through_data_hash, meta_parameters_hash, protocol_version, system_logs, compressed_state_diffs, events_queue_commitment, bootloader_initial_content_commitment FROM l1_batches LEFT JOIN commitments ON commitments.l1_batch_number = l1_batches.number WHERE number = $1" - }, - "987fcbbd716648c7c368462643f13d8001d5c6d197add90613ae21d21fdef79b": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Text", - "Int8" - ] - } - }, - "query": "UPDATE prover_jobs_fri SET status = $1, updated_at = now() WHERE id = $2" - }, - "98c81ee6f73859c6cd6ba54ab438c900dda646b70a700f936e5218d9ba3bd0ec": { - "describe": { - "columns": [ - { - "name": "l1_batch_number!", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "circuit_id", - "ordinal": 1, - "type_info": "Int2" - }, - { - "name": "aggregation_round", - "ordinal": 2, - "type_info": "Int2" - } - ], - "nullable": [ - null, - false, - false - ], - "parameters": { - "Left": [] - } - }, - "query": "\n SELECT MIN(l1_batch_number) as \"l1_batch_number!\", circuit_id, aggregation_round\n FROM prover_jobs_fri\n WHERE status IN('queued', 'in_gpu_proof', 'in_progress', 'failed')\n GROUP BY circuit_id, aggregation_round\n " - }, - "9970bb69f5ca9ab9f103e1547eb40c1d4f5dd3a540ff6f1b9724821350c9501a": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "l1_batch_number", - "ordinal": 1, - "type_info": "Int8" - }, - { - "name": "circuit_type", - "ordinal": 2, - "type_info": "Text" - }, - { - "name": "prover_input", - "ordinal": 
3, - "type_info": "Bytea" - }, - { - "name": "status", - "ordinal": 4, - "type_info": "Text" - }, - { - "name": "error", - "ordinal": 5, - "type_info": "Text" - }, - { - "name": "processing_started_at", - "ordinal": 6, - "type_info": "Timestamp" - }, - { - "name": "created_at", - "ordinal": 7, - "type_info": "Timestamp" - }, - { - "name": "updated_at", - "ordinal": 8, - "type_info": "Timestamp" - }, - { - "name": "time_taken", - "ordinal": 9, - "type_info": "Time" - }, - { - "name": "aggregation_round", - "ordinal": 10, - "type_info": "Int4" - }, - { - "name": "result", - "ordinal": 11, - "type_info": "Bytea" - }, - { - "name": "sequence_number", - "ordinal": 12, - "type_info": "Int4" - }, - { - "name": "attempts", - "ordinal": 13, - "type_info": "Int4" - }, - { - "name": "circuit_input_blob_url", - "ordinal": 14, - "type_info": "Text" - }, - { - "name": "proccesed_by", - "ordinal": 15, - "type_info": "Text" - }, - { - "name": "is_blob_cleaned", - "ordinal": 16, - "type_info": "Bool" - }, - { - "name": "protocol_version", - "ordinal": 17, - "type_info": "Int4" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - true, - true, - false, - false, - false, - false, - true, - false, - false, - true, - true, - false, - true - ], - "parameters": { - "Left": [ - "Int4Array" - ] - } - }, - "query": "\n UPDATE prover_jobs\n SET status = 'in_progress', attempts = attempts + 1,\n updated_at = now(), processing_started_at = now()\n WHERE id = (\n SELECT id\n FROM prover_jobs\n WHERE status = 'queued'\n AND protocol_version = ANY($1)\n ORDER BY aggregation_round DESC, l1_batch_number ASC, id ASC\n LIMIT 1\n FOR UPDATE\n SKIP LOCKED\n )\n RETURNING prover_jobs.*\n " - }, - "99d331d233d357302ab0cc7e3269ef9e414f0c3111785212660f471e3b4f6a04": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "ByteaArray", - "Int4Array", - "ByteaArray", - "ByteaArray", - "NumericArray", - "NumericArray", - "NumericArray", - "NumericArray", - "Int4Array", - "Int4Array", - "VarcharArray", - "NumericArray", - "JsonbArray", - "ByteaArray", - "JsonbArray", - "Int8Array", - "NumericArray", - "ByteaArray", - "ByteaArray", - "ByteaArray", - "Int8" - ] - } - }, - "query": "\n UPDATE transactions\n SET \n hash = data_table.hash,\n signature = data_table.signature,\n gas_limit = data_table.gas_limit,\n max_fee_per_gas = data_table.max_fee_per_gas,\n max_priority_fee_per_gas = data_table.max_priority_fee_per_gas,\n gas_per_pubdata_limit = data_table.gas_per_pubdata_limit,\n input = data_table.input,\n data = data_table.data,\n tx_format = data_table.tx_format,\n miniblock_number = $21,\n index_in_block = data_table.index_in_block,\n error = NULLIF(data_table.error, ''),\n effective_gas_price = data_table.effective_gas_price,\n execution_info = data_table.new_execution_info,\n refunded_gas = data_table.refunded_gas,\n value = data_table.value,\n contract_address = data_table.contract_address,\n paymaster = data_table.paymaster,\n paymaster_input = data_table.paymaster_input,\n in_mempool = FALSE,\n updated_at = now()\n FROM\n (\n SELECT data_table_temp.* FROM (\n SELECT\n UNNEST($1::bytea[]) AS initiator_address,\n UNNEST($2::int[]) AS nonce,\n UNNEST($3::bytea[]) AS hash,\n UNNEST($4::bytea[]) AS signature,\n UNNEST($5::numeric[]) AS gas_limit,\n UNNEST($6::numeric[]) AS max_fee_per_gas,\n UNNEST($7::numeric[]) AS max_priority_fee_per_gas,\n UNNEST($8::numeric[]) AS gas_per_pubdata_limit,\n UNNEST($9::int[]) AS tx_format,\n UNNEST($10::integer[]) AS index_in_block,\n 
UNNEST($11::varchar[]) AS error,\n UNNEST($12::numeric[]) AS effective_gas_price,\n UNNEST($13::jsonb[]) AS new_execution_info,\n UNNEST($14::bytea[]) AS input,\n UNNEST($15::jsonb[]) AS data,\n UNNEST($16::bigint[]) as refunded_gas,\n UNNEST($17::numeric[]) as value,\n UNNEST($18::bytea[]) as contract_address,\n UNNEST($19::bytea[]) as paymaster,\n UNNEST($20::bytea[]) as paymaster_input\n ) AS data_table_temp\n JOIN transactions ON transactions.initiator_address = data_table_temp.initiator_address\n AND transactions.nonce = data_table_temp.nonce\n ORDER BY transactions.hash\n ) AS data_table\n WHERE transactions.initiator_address=data_table.initiator_address\n AND transactions.nonce=data_table.nonce\n " - }, - "9a326e8fb44f8ebfdd26d945b73a054fd6802551594b23687d057a3954e24f33": { - "describe": { - "columns": [ - { - "name": "attempts", - "ordinal": 0, - "type_info": "Int2" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT attempts FROM basic_witness_input_producer_jobs WHERE l1_batch_number = $1" - }, - "9aaf98668f384f634860c4acf793ff47be08975e5d09061cc26fd53dea249c55": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8", - "Bytea", - "Text", - "Int4" - ] - } - }, - "query": "\n INSERT INTO scheduler_witness_jobs\n (l1_batch_number, scheduler_witness, scheduler_witness_blob_url, protocol_version, status, created_at, updated_at)\n VALUES ($1, $2, $3, $4, 'waiting_for_artifacts', now(), now())\n " - }, - "9b70e9039cdc1a8c8baf9220a9d42a9b1b209ce73f74cccb9e313bcacdc3daf3": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8", - "Text", - "Int4", - "Bytea", - "Int4", - "Text", - "Int4" - ] - } - }, - "query": "\n INSERT INTO prover_jobs (l1_batch_number, circuit_type, sequence_number, prover_input, aggregation_round, circuit_input_blob_url, protocol_version, status, created_at, updated_at)\n VALUES ($1, $2, $3, $4, $5, $6, $7, 'queued', now(), now())\n ON CONFLICT(l1_batch_number, aggregation_round, sequence_number) DO NOTHING\n " - }, - "9bf32ea710825c1f0560a7eaa89f8f097ad196755ba82d98a729a2b0d34e1aca": { - "describe": { - "columns": [ - { - "name": "successful_limit!", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "queued_limit!", - "ordinal": 1, - "type_info": "Int8" - }, - { - "name": "max_block!", - "ordinal": 2, - "type_info": "Int8" - } - ], - "nullable": [ - null, - null, - null - ], - "parameters": { - "Left": [] - } - }, - "query": "\n SELECT\n (SELECT l1_batch_number\n FROM prover_jobs\n WHERE status NOT IN ('successful', 'skipped')\n ORDER BY l1_batch_number\n LIMIT 1) as \"successful_limit!\",\n \n (SELECT l1_batch_number\n FROM prover_jobs\n WHERE status <> 'queued'\n ORDER BY l1_batch_number DESC\n LIMIT 1) as \"queued_limit!\",\n\n (SELECT MAX(l1_batch_number) as \"max!\" FROM prover_jobs) as \"max_block!\"\n " - }, - "9d28c1be3bda0c4fb37567d4a56730e801f48fbb2abad42ea894ebd8ee40412d": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8", - "Int2", - "Text", - "Int2", - "Int4", - "Int4", - "Bool", - "Int4" - ] - } - }, - "query": "\n INSERT INTO prover_jobs_fri (l1_batch_number, circuit_id, circuit_blob_url, aggregation_round, sequence_number, depth, is_node_final_proof, protocol_version, status, created_at, updated_at)\n VALUES ($1, $2, $3, $4, $5, $6, $7, $8, 'queued', now(), now())\n ON CONFLICT(l1_batch_number, aggregation_round, circuit_id, depth, sequence_number)\n DO UPDATE SET 
updated_at=now()\n " - }, - "a074cd2c23434a8e801c2c0b42e63f1657765aceabd6d8a50ef2d2299bba99ab": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "l1_batch_number", - "ordinal": 1, - "type_info": "Int8" - }, - { - "name": "circuit_id", - "ordinal": 2, - "type_info": "Int2" - }, - { - "name": "closed_form_inputs_blob_url", - "ordinal": 3, - "type_info": "Text" - }, - { - "name": "attempts", - "ordinal": 4, - "type_info": "Int2" - }, - { - "name": "status", - "ordinal": 5, - "type_info": "Text" - }, - { - "name": "error", - "ordinal": 6, - "type_info": "Text" - }, - { - "name": "created_at", - "ordinal": 7, - "type_info": "Timestamp" - }, - { - "name": "updated_at", - "ordinal": 8, - "type_info": "Timestamp" - }, - { - "name": "processing_started_at", - "ordinal": 9, - "type_info": "Timestamp" - }, - { - "name": "time_taken", - "ordinal": 10, - "type_info": "Time" - }, - { - "name": "is_blob_cleaned", - "ordinal": 11, - "type_info": "Bool" - }, - { - "name": "number_of_basic_circuits", - "ordinal": 12, - "type_info": "Int4" - }, - { - "name": "protocol_version", - "ordinal": 13, - "type_info": "Int4" - }, - { - "name": "picked_by", - "ordinal": 14, - "type_info": "Text" - } - ], - "nullable": [ - false, - false, - false, - true, - false, - false, - true, - false, - false, - true, - true, - true, - true, - true, - true - ], - "parameters": { - "Left": [ - "Int4Array", - "Text" - ] - } - }, - "query": "\n UPDATE leaf_aggregation_witness_jobs_fri\n SET status = 'in_progress', attempts = attempts + 1,\n updated_at = now(), processing_started_at = now(),\n picked_by = $2\n WHERE id = (\n SELECT id\n FROM leaf_aggregation_witness_jobs_fri\n WHERE status = 'queued'\n AND protocol_version = ANY($1)\n ORDER BY l1_batch_number ASC, id ASC\n LIMIT 1\n FOR UPDATE\n SKIP LOCKED\n )\n RETURNING leaf_aggregation_witness_jobs_fri.*\n " - }, - "a0aa877e052e63b1c3df6fc4432eeb44f7f3930f624e66b034baa1c5d0f8bb30": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8", - { - "Custom": { - "kind": { - "Enum": [ - "Queued", - "ManuallySkipped", - "InProgress", - "Successful", - "Failed" - ] - }, - "name": "basic_witness_input_producer_job_status" - } - } - ] - } - }, - "query": "INSERT INTO basic_witness_input_producer_jobs (l1_batch_number, status, created_at, updated_at) VALUES ($1, $2, now(), now()) ON CONFLICT (l1_batch_number) DO NOTHING" - }, - "a0b720f4e9a558cb073725ecb62765c27d1635f3099d1850e7269bce8bf0ab36": { - "describe": { - "columns": [ - { - "name": "l1_batch_number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "leaf_layer_subqueues_blob_url", - "ordinal": 1, - "type_info": "Text" - }, - { - "name": "aggregation_outputs_blob_url", - "ordinal": 2, - "type_info": "Text" - } - ], - "nullable": [ - false, - true, - true - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "\n SELECT l1_batch_number, leaf_layer_subqueues_blob_url, aggregation_outputs_blob_url FROM node_aggregation_witness_jobs\n WHERE status='successful'\n AND leaf_layer_subqueues_blob_url is NOT NULL\n AND aggregation_outputs_blob_url is NOT NULL\n AND updated_at < NOW() - INTERVAL '30 days'\n LIMIT $1;\n " - }, - "a19b7137403c5cdf1be5f5122ce4d297ed661fa8bdb3bc91f8a81fe9da47469e": { - "describe": { - "columns": [ - { - "name": "upgrade_tx_hash", - "ordinal": 0, - "type_info": "Bytea" - } - ], - "nullable": [ - true - ], - "parameters": { - "Left": [ - "Int4" - ] - } - }, - "query": "\n SELECT upgrade_tx_hash FROM 
protocol_versions\n WHERE id = $1\n " - }, - "a1a6b52403c1db35c8d83d0a512ac453ecd54b34ec516027d540ee1890b40291": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int4", - "Bytea", - "Bytea", - "Bytea", - "Bytea" - ] - } - }, - "query": "INSERT INTO prover_fri_protocol_versions (id, recursion_scheduler_level_vk_hash, recursion_node_level_vk_hash, recursion_leaf_level_vk_hash, recursion_circuits_set_vks_hash, created_at) VALUES ($1, $2, $3, $4, $5, now()) ON CONFLICT(id) DO NOTHING" - }, - "a39f760d2cd879a78112e57d8611d7099802b03b7cc4933cafb4c47e133ad543": { - "describe": { - "columns": [ - { - "name": "address", - "ordinal": 0, - "type_info": "Bytea" - }, - { - "name": "topic1", - "ordinal": 1, - "type_info": "Bytea" - }, - { - "name": "topic2", - "ordinal": 2, - "type_info": "Bytea" - }, - { - "name": "topic3", - "ordinal": 3, - "type_info": "Bytea" - }, - { - "name": "topic4", - "ordinal": 4, - "type_info": "Bytea" - }, - { - "name": "value", - "ordinal": 5, - "type_info": "Bytea" - }, - { - "name": "block_hash", - "ordinal": 6, - "type_info": "Bytea" - }, - { - "name": "l1_batch_number?", - "ordinal": 7, - "type_info": "Int8" - }, - { - "name": "miniblock_number", - "ordinal": 8, - "type_info": "Int8" - }, - { - "name": "tx_hash", - "ordinal": 9, - "type_info": "Bytea" - }, - { - "name": "tx_index_in_block", - "ordinal": 10, - "type_info": "Int4" - }, - { - "name": "event_index_in_block", - "ordinal": 11, - "type_info": "Int4" - }, - { - "name": "event_index_in_tx", - "ordinal": 12, - "type_info": "Int4" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - false, - null, - null, - false, - false, - false, - false, - false - ], - "parameters": { - "Left": [ - "Bytea" - ] - } - }, - "query": "\n SELECT\n address, topic1, topic2, topic3, topic4, value,\n Null::bytea as \"block_hash\", Null::bigint as \"l1_batch_number?\",\n miniblock_number, tx_hash, tx_index_in_block,\n event_index_in_block, event_index_in_tx\n FROM events\n WHERE tx_hash = $1\n ORDER BY miniblock_number ASC, event_index_in_block ASC\n " - }, - "a3d526a5a341618e9784fc81626143a3174709483a527879254ff8e28f210ac3": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int4", - "Int8", - "Int8" - ] - } - }, - "query": "UPDATE l1_batches SET eth_execute_tx_id = $1, updated_at = now() WHERE number BETWEEN $2 AND $3" - }, - "a42626c162a0600b9c7d22dd0d7997fa70cc95296ecc185ff9ae2e03593b07bf": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "\n UPDATE scheduler_witness_jobs_fri\n SET status='queued'\n WHERE l1_batch_number = $1\n AND status != 'successful'\n AND status != 'in_progress'\n " - }, - "a4a14eb42b9acca3f93c67e5760ba700c333b5e9a38c132a3060a94c988e7f13": { - "describe": { - "columns": [ - { - "name": "hash", - "ordinal": 0, - "type_info": "Bytea" - }, - { - "name": "received_at", - "ordinal": 1, - "type_info": "Timestamp" - } - ], - "nullable": [ - false, - false - ], - "parameters": { - "Left": [ - "Timestamp", - "Int8" - ] - } - }, - "query": "SELECT transactions.hash, transactions.received_at FROM transactions LEFT JOIN miniblocks ON miniblocks.number = miniblock_number WHERE received_at > $1 ORDER BY received_at ASC LIMIT $2" - }, - "a4f240188c1447f5b6dcef33dfcc9d00b105f62a6b4c3949a825bea979954160": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [] - } - }, - "query": "DELETE FROM basic_witness_input_producer_jobs" - }, - 
"a5115658f3a53462a9570fd6676f1931604d1c17a9a2b5f1475519006aaf03ba": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8", - "Text" - ] - } - }, - "query": "INSERT INTO proof_generation_details (l1_batch_number, status, proof_gen_data_blob_url, created_at, updated_at) VALUES ($1, 'ready_to_be_proven', $2, now(), now()) ON CONFLICT (l1_batch_number) DO NOTHING" - }, - "a7abde5a53248d6e63aa998acac521194231bbe08140c9c4efa548c4f3ae17fa": { - "describe": { - "columns": [ - { - "name": "max?", - "ordinal": 0, - "type_info": "Int4" - } - ], - "nullable": [ - null - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT MAX(operation_number) as \"max?\" FROM storage_logs WHERE miniblock_number = $1" - }, - "a8b32073a67ad77caab11e73a5cac5aa5b5382648ff95d6787a309eb3f64d434": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int4" - ] - } - }, - "query": "DELETE FROM eth_txs_history WHERE id = $1" - }, - "a9b1a31def214f8b1441dc3ab720bd270f3991c9f1c7528256276e176d532163": { - "describe": { - "columns": [ - { - "name": "l1_batch_number", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Bytea" - ] - } - }, - "query": "SELECT l1_batch_number FROM initial_writes WHERE hashed_key = $1" - }, - "a9d96d6774af2637173d471f02995652cd4c131c05fdcb3d0e1644bcd1aa1809": { - "describe": { - "columns": [ - { - "name": "proof", - "ordinal": 0, - "type_info": "Bytea" - }, - { - "name": "aggregation_result_coords", - "ordinal": 1, - "type_info": "Bytea" - } - ], - "nullable": [ - true, - true - ], - "parameters": { - "Left": [ - "Int8", - "Int8" - ] - } - }, - "query": "SELECT prover_jobs.result as proof, scheduler_witness_jobs.aggregation_result_coords\n FROM prover_jobs\n INNER JOIN scheduler_witness_jobs\n ON prover_jobs.l1_batch_number = scheduler_witness_jobs.l1_batch_number\n WHERE prover_jobs.l1_batch_number >= $1 AND prover_jobs.l1_batch_number <= $2\n AND prover_jobs.aggregation_round = 3\n AND prover_jobs.status = 'successful'\n " - }, - "aa279ce3351b30788711be6c65cb99cb14304ac38f8fed6d332237ffafc7c86b": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Text", - "Time", - "Text", - "Int8" - ] - } - }, - "query": "UPDATE proof_compression_jobs_fri SET status = $1, updated_at = now(), time_taken = $2, l1_proof_blob_url = $3WHERE l1_batch_number = $4" - }, - "aa7ae476aed5979227887891e9be995924588aa10ccba7424d6ce58f811eaa02": { - "describe": { - "columns": [ - { - "name": "number!", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - null - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT COALESCE(MAX(number), 0) AS \"number!\" FROM l1_batches WHERE eth_prove_tx_id IS NOT NULL" - }, - "aacaeff95b9a2988167dde78200d7139ba99edfa30dbcd8a7a57f72efc676477": { - "describe": { - "columns": [ - { - "name": "number", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT number FROM l1_batches LEFT JOIN eth_txs_history AS commit_tx ON (l1_batches.eth_commit_tx_id = commit_tx.eth_tx_id) WHERE commit_tx.confirmed_at IS NOT NULL ORDER BY number DESC LIMIT 1" - }, - "ac179b3a4eca421f3151f4f1eb844f2cee16fa1d2a47c910feb8e07d8f8ace6c": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "l1_batch_number", - "ordinal": 1, - "type_info": "Int8" - }, - { - "name": "circuit_id", - 
"ordinal": 2, - "type_info": "Int2" - }, - { - "name": "aggregation_round", - "ordinal": 3, - "type_info": "Int2" - }, - { - "name": "sequence_number", - "ordinal": 4, - "type_info": "Int4" - }, - { - "name": "depth", - "ordinal": 5, - "type_info": "Int4" - }, - { - "name": "is_node_final_proof", - "ordinal": 6, - "type_info": "Bool" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - false, - false - ], - "parameters": { - "Left": [ - "Int2Array", - "Int2Array", - "Int4Array", - "Text" - ] - } - }, - "query": "\n UPDATE prover_jobs_fri\n SET status = 'in_progress', attempts = attempts + 1,\n processing_started_at = now(), updated_at = now(), \n picked_by = $4\n WHERE id = (\n SELECT pj.id\n FROM ( SELECT * FROM unnest($1::smallint[], $2::smallint[]) ) AS tuple (circuit_id, round)\n JOIN LATERAL\n (\n SELECT * FROM prover_jobs_fri AS pj\n WHERE pj.status = 'queued'\n AND pj.protocol_version = ANY($3)\n AND pj.circuit_id = tuple.circuit_id AND pj.aggregation_round = tuple.round\n ORDER BY pj.l1_batch_number ASC, pj.id ASC\n LIMIT 1\n ) AS pj ON true\n ORDER BY pj.l1_batch_number ASC, pj.aggregation_round DESC, pj.id ASC\n LIMIT 1\n FOR UPDATE\n SKIP LOCKED\n )\n RETURNING prover_jobs_fri.id, prover_jobs_fri.l1_batch_number, prover_jobs_fri.circuit_id,\n prover_jobs_fri.aggregation_round, prover_jobs_fri.sequence_number, prover_jobs_fri.depth,\n prover_jobs_fri.is_node_final_proof\n " - }, - "ac35fb205c83d82d78983f4c9b47f56d3c91fbb2c95046555c7d60a9a2ebb446": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "ByteaArray", - "Int8Array", - "Int8" - ] - } - }, - "query": "INSERT INTO initial_writes (hashed_key, index, l1_batch_number, created_at, updated_at) SELECT u.hashed_key, u.index, $3, now(), now() FROM UNNEST($1::bytea[], $2::bigint[]) AS u(hashed_key, index)" - }, - "ad11ec3e628ae6c64ac160d8dd689b2f64033f620e17a31469788b3ce4968ad3": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int4" - }, - { - "name": "eth_tx_id", - "ordinal": 1, - "type_info": "Int4" - }, - { - "name": "tx_hash", - "ordinal": 2, - "type_info": "Text" - }, - { - "name": "created_at", - "ordinal": 3, - "type_info": "Timestamp" - }, - { - "name": "updated_at", - "ordinal": 4, - "type_info": "Timestamp" - }, - { - "name": "base_fee_per_gas", - "ordinal": 5, - "type_info": "Int8" - }, - { - "name": "priority_fee_per_gas", - "ordinal": 6, - "type_info": "Int8" - }, - { - "name": "confirmed_at", - "ordinal": 7, - "type_info": "Timestamp" - }, - { - "name": "signed_raw_tx", - "ordinal": 8, - "type_info": "Bytea" - }, - { - "name": "sent_at_block", - "ordinal": 9, - "type_info": "Int4" - }, - { - "name": "sent_at", - "ordinal": 10, - "type_info": "Timestamp" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - false, - false, - true, - true, - true, - true - ], - "parameters": { - "Left": [ - "Int4" - ] - } - }, - "query": "SELECT * FROM eth_txs_history WHERE eth_tx_id = $1 ORDER BY created_at DESC LIMIT 1" - }, - "ad495160a947cf1bd7343819e723d18c9332bc95cfc2014ed8d04907eff3896e": { - "describe": { - "columns": [ - { - "name": "l1_batch_number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "status", - "ordinal": 1, - "type_info": "Text" - }, - { - "name": "attempts", - "ordinal": 2, - "type_info": "Int2" - } - ], - "nullable": [ - false, - false, - false - ], - "parameters": { - "Left": [ - "Interval", - "Int2" - ] - } - }, - "query": "\n UPDATE scheduler_witness_jobs_fri\n SET status = 'queued', 
updated_at = now(), processing_started_at = now()\n WHERE (status = 'in_progress' AND processing_started_at <= now() - $1::interval AND attempts < $2)\n OR (status = 'failed' AND attempts < $2)\n RETURNING l1_batch_number, status, attempts\n " - }, - "ad4f74aa6f131df0243f4fa500ade1b98aa335bd71ed417b02361e2c697e60f8": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Bytea", - "Int8" - ] - } - }, - "query": "\n UPDATE scheduler_witness_jobs\n SET aggregation_result_coords = $1,\n updated_at = now()\n WHERE l1_batch_number = $2\n " - }, - "ae072f51b65d0b5212264be9a34027922e5aedef7e4741517ad8104bf5aa79e9": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "DELETE FROM factory_deps WHERE miniblock_number > $1" - }, - "aea4e8d1b018836973d252df943a2c1988dd5f3ffc629064b87d25af8cdb8638": { - "describe": { - "columns": [ - { - "name": "l1_batch_number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "l1_batch_tx_index", - "ordinal": 1, - "type_info": "Int4" - } - ], - "nullable": [ - true, - true - ], - "parameters": { - "Left": [ - "Bytea" - ] - } - }, - "query": "SELECT l1_batch_number, l1_batch_tx_index FROM transactions WHERE hash = $1" - }, - "af22ad34bde12b8d25eb85da9939d12b7bed6407d732b868eeaf2916568c8646": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Time", - "Int8" - ] - } - }, - "query": "\n UPDATE scheduler_witness_jobs_fri\n SET status = 'successful', updated_at = now(), time_taken = $1\n WHERE l1_batch_number = $2\n " - }, - "af22b7cc067ac5e9c201514cdf783d61a0802cf788b4d44f8802554afee35bd9": { - "describe": { - "columns": [ - { - "name": "count!", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - null - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT COUNT(*) as \"count!\" FROM contract_verification_requests WHERE status = 'queued'" - }, - "af75db6b7e42b73ce62b28a7281e1bfa181ee0c80a85d7d8078831db5dcdb699": { - "describe": { - "columns": [ - { - "name": "l1_block_number", - "ordinal": 0, - "type_info": "Int4" - } - ], - "nullable": [ - true - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT l1_block_number FROM transactions\n WHERE priority_op_id IS NOT NULL\n ORDER BY priority_op_id DESC\n LIMIT 1" - }, - "b11978a1a31a57fe754d08f7bf547c14e5474786700b5ed7445596568d18543a": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int4", - "Int4" - ] - } - }, - "query": "UPDATE eth_txs SET confirmed_eth_tx_history_id = $1 WHERE id = $2" - }, - "b1478907214ad20dddd4f3846fba4b0ddf1fff63ddb3b95c8999635e77c8b863": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int4" - }, - { - "name": "eth_tx_id", - "ordinal": 1, - "type_info": "Int4" - }, - { - "name": "tx_hash", - "ordinal": 2, - "type_info": "Text" - }, - { - "name": "created_at", - "ordinal": 3, - "type_info": "Timestamp" - }, - { - "name": "updated_at", - "ordinal": 4, - "type_info": "Timestamp" - }, - { - "name": "base_fee_per_gas", - "ordinal": 5, - "type_info": "Int8" - }, - { - "name": "priority_fee_per_gas", - "ordinal": 6, - "type_info": "Int8" - }, - { - "name": "confirmed_at", - "ordinal": 7, - "type_info": "Timestamp" - }, - { - "name": "signed_raw_tx", - "ordinal": 8, - "type_info": "Bytea" - }, - { - "name": "sent_at_block", - "ordinal": 9, - "type_info": "Int4" - }, - { - "name": "sent_at", - "ordinal": 10, - "type_info": "Timestamp" - } - ], - "nullable": [ - false, - 
false, - false, - false, - false, - false, - false, - true, - true, - true, - true - ], - "parameters": { - "Left": [ - "Int4" - ] - } - }, - "query": "SELECT * FROM eth_txs_history WHERE eth_tx_id = $1 ORDER BY created_at DESC" - }, - "b14997f84d11d7eea89168383195c5579eed1c57bb2b416a749e2863ae6594a5": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Text", - "Int8" - ] - } - }, - "query": "\n UPDATE leaf_aggregation_witness_jobs_fri\n SET status ='failed', error= $1, updated_at = now()\n WHERE id = $2\n " - }, - "b250f4cb646081c8c0296a286d3fd921a1aefb310951a1ea25ec0fc533ed32ab": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int4" - }, - { - "name": "nonce", - "ordinal": 1, - "type_info": "Int8" - }, - { - "name": "raw_tx", - "ordinal": 2, - "type_info": "Bytea" - }, - { - "name": "contract_address", - "ordinal": 3, - "type_info": "Text" - }, - { - "name": "tx_type", - "ordinal": 4, - "type_info": "Text" - }, - { - "name": "gas_used", - "ordinal": 5, - "type_info": "Int8" - }, - { - "name": "created_at", - "ordinal": 6, - "type_info": "Timestamp" - }, - { - "name": "updated_at", - "ordinal": 7, - "type_info": "Timestamp" - }, - { - "name": "has_failed", - "ordinal": 8, - "type_info": "Bool" - }, - { - "name": "sent_at_block", - "ordinal": 9, - "type_info": "Int4" - }, - { - "name": "confirmed_eth_tx_history_id", - "ordinal": 10, - "type_info": "Int4" - }, - { - "name": "predicted_gas_cost", - "ordinal": 11, - "type_info": "Int8" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - true, - false, - false, - false, - true, - true, - false - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT * FROM eth_txs WHERE confirmed_eth_tx_history_id IS NULL AND id <= (SELECT COALESCE(MAX(eth_tx_id), 0) FROM eth_txs_history WHERE sent_at_block IS NOT NULL) ORDER BY id" - }, - "b36acfd014ab3e79b700399cd2663b4e92e14c55278dfd0ba45ee50e7dfffe73": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [] - } - }, - "query": "DELETE FROM eth_txs WHERE id >= (SELECT MIN(id) FROM eth_txs WHERE has_failed = TRUE)" - }, - "b3c0070f22ab78bf148aade48f860934c53130e4c88cdb4670d5f57defedd919": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "circuit_input_blob_url", - "ordinal": 1, - "type_info": "Text" - } - ], - "nullable": [ - false, - true - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "\n SELECT id, circuit_input_blob_url FROM prover_jobs\n WHERE status='successful'\n AND circuit_input_blob_url is NOT NULL\n AND updated_at < NOW() - INTERVAL '30 days'\n LIMIT $1;\n " - }, - "b479b7d3334f8d4566c294a44e2adb282fbc66a87be5c248c65211c2a8a07db0": { - "describe": { - "columns": [ - { - "name": "number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "hash", - "ordinal": 1, - "type_info": "Bytea" - } - ], - "nullable": [ - false, - false - ], - "parameters": { - "Left": [ - "Int8", - "Int8" - ] - } - }, - "query": "SELECT number, hash FROM miniblocks WHERE number > $1 ORDER BY number ASC LIMIT $2" - }, - "b4a3c902646725188f7c79ebac992cdce5896fc6fcc9f485c0cba9d90c4c982c": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int4" - }, - { - "name": "nonce", - "ordinal": 1, - "type_info": "Int8" - }, - { - "name": "raw_tx", - "ordinal": 2, - "type_info": "Bytea" - }, - { - "name": "contract_address", - "ordinal": 3, - "type_info": "Text" - }, - { - "name": "tx_type", - 
"ordinal": 4, - "type_info": "Text" - }, - { - "name": "gas_used", - "ordinal": 5, - "type_info": "Int8" - }, - { - "name": "created_at", - "ordinal": 6, - "type_info": "Timestamp" - }, - { - "name": "updated_at", - "ordinal": 7, - "type_info": "Timestamp" - }, - { - "name": "has_failed", - "ordinal": 8, - "type_info": "Bool" - }, - { - "name": "sent_at_block", - "ordinal": 9, - "type_info": "Int4" - }, - { - "name": "confirmed_eth_tx_history_id", - "ordinal": 10, - "type_info": "Int4" - }, - { - "name": "predicted_gas_cost", - "ordinal": 11, - "type_info": "Int8" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - true, - false, - false, - false, - true, - true, - false - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT * FROM eth_txs WHERE id > (SELECT COALESCE(MAX(eth_tx_id), 0) FROM eth_txs_history) ORDER BY id LIMIT $1" - }, - "b4c576db7c762103dc6700ded458e996d2e9ef670d7b58b181dbfab02fa426ce": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Bytea", - "Bytea", - "Numeric", - "Numeric", - "Numeric", - "Jsonb", - "Int8", - "Numeric", - "Numeric", - "Bytea", - "Int4", - "Numeric", - "Bytea", - "Bytea", - "Int4", - "Numeric", - "Bytea", - "Timestamp" - ] - } - }, - "query": "\n INSERT INTO transactions\n (\n hash,\n is_priority,\n initiator_address,\n\n gas_limit,\n max_fee_per_gas,\n gas_per_pubdata_limit,\n\n data,\n priority_op_id,\n full_fee,\n layer_2_tip_fee,\n contract_address,\n l1_block_number,\n value,\n\n paymaster,\n paymaster_input,\n tx_format,\n\n l1_tx_mint,\n l1_tx_refund_recipient,\n\n received_at,\n created_at,\n updated_at\n )\n VALUES\n (\n $1, TRUE, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12,\n $13, $14, $15, $16, $17, $18, now(), now()\n )\n ON CONFLICT (hash) DO NOTHING\n " - }, - "b4da918ee3b36b56d95c8834edebe65eb48ebb8270fa1e6ccf73ad354fd71134": { - "describe": { - "columns": [ - { - "name": "l1_address", - "ordinal": 0, - "type_info": "Bytea" - }, - { - "name": "l2_address", - "ordinal": 1, - "type_info": "Bytea" - } - ], - "nullable": [ - false, - false - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT l1_address, l2_address FROM tokens WHERE well_known = true" - }, - "b6f9874059c57e5e59f3021936437e9ff71a68065dfc19c295d806d7a9aafc93": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int4", - "Int8", - "Bytea", - "Bytea", - "Bytea", - "Bytea", - "Bytea" - ] - } - }, - "query": "INSERT INTO prover_protocol_versions\n (id, timestamp, recursion_scheduler_level_vk_hash, recursion_node_level_vk_hash,\n recursion_leaf_level_vk_hash, recursion_circuits_set_vks_hash, verifier_address, created_at)\n VALUES ($1, $2, $3, $4, $5, $6, $7, now())\n " - }, - "b944df7af612ec911170a43be846eb2f6e27163b0d3983672de2b8d5d60af640": { - "describe": { - "columns": [ - { - "name": "l1_batch_number", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Interval" - ] - } - }, - "query": "UPDATE proof_generation_details SET status = 'picked_by_prover', updated_at = now(), prover_taken_at = now() WHERE l1_batch_number = ( SELECT l1_batch_number FROM proof_generation_details WHERE status = 'ready_to_be_proven' OR (status = 'picked_by_prover' AND prover_taken_at < now() - $1::interval) ORDER BY l1_batch_number ASC LIMIT 1 FOR UPDATE SKIP LOCKED ) RETURNING proof_generation_details.l1_batch_number" - }, - "bc4433cdfa499830fe6a6a95759c9fbe343ac25b371c7fa980bfd1b0afc86629": { - "describe": { - "columns": [ 
- { - "name": "l1_batch_number", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Text", - "Text", - "Text" - ] - } - }, - "query": "UPDATE proof_compression_jobs_fri SET status = $1, attempts = attempts + 1, updated_at = now(), processing_started_at = now(), picked_by = $3 WHERE l1_batch_number = ( SELECT l1_batch_number FROM proof_compression_jobs_fri WHERE status = $2 ORDER BY l1_batch_number ASC LIMIT 1 FOR UPDATE SKIP LOCKED ) RETURNING proof_compression_jobs_fri.l1_batch_number" - }, - "be824de76050461afe29dfd229e524bdf113eab3ca24208782c200531db1c940": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Int8", - "Int2", - "Int2", - "Int4" - ] - } - }, - "query": "\n SELECT id from prover_jobs_fri\n WHERE l1_batch_number = $1\n AND circuit_id = $2\n AND aggregation_round = $3\n AND depth = $4\n AND status = 'successful'\n ORDER BY sequence_number ASC;\n " - }, - "bef58e581dd0b658350dcdc15ebf7cf350cf088b60c916a15889e31ee7534907": { - "describe": { - "columns": [ - { - "name": "bytecode", - "ordinal": 0, - "type_info": "Bytea" - }, - { - "name": "bytecode_hash", - "ordinal": 1, - "type_info": "Bytea" - } - ], - "nullable": [ - false, - false - ], - "parameters": { - "Left": [ - "ByteaArray" - ] - } - }, - "query": "SELECT bytecode, bytecode_hash FROM factory_deps WHERE bytecode_hash = ANY($1)" - }, - "c0904ee4179531cfb9d458a17f753085dc2ed957b30a89119d7534112add3876": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8", - "Bytea", - "Bytea" - ] - } - }, - "query": "UPDATE l1_batches SET commitment = $2, aux_data_hash = $3, updated_at = now() WHERE number = $1" - }, - "c178e1574d2a16cb90bcc5d5333a4f8dd2a69e0c12b4e7e108a8dcc6000669a5": { - "describe": { - "columns": [ - { - "name": "protocol_version", - "ordinal": 0, - "type_info": "Int4" - } - ], - "nullable": [ - true - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT protocol_version FROM miniblocks WHERE number = $1" - }, - "c1e5f85be88ef0b6ab81daf8dec2011797086a7ec5aeaffe5665ebf9584bf84a": { - "describe": { - "columns": [ - { - "name": "l1_batch_number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "scheduler_partial_input_blob_url", - "ordinal": 1, - "type_info": "Text" - }, - { - "name": "status", - "ordinal": 2, - "type_info": "Text" - }, - { - "name": "processing_started_at", - "ordinal": 3, - "type_info": "Timestamp" - }, - { - "name": "time_taken", - "ordinal": 4, - "type_info": "Time" - }, - { - "name": "error", - "ordinal": 5, - "type_info": "Text" - }, - { - "name": "created_at", - "ordinal": 6, - "type_info": "Timestamp" - }, - { - "name": "updated_at", - "ordinal": 7, - "type_info": "Timestamp" - }, - { - "name": "attempts", - "ordinal": 8, - "type_info": "Int2" - }, - { - "name": "protocol_version", - "ordinal": 9, - "type_info": "Int4" - }, - { - "name": "picked_by", - "ordinal": 10, - "type_info": "Text" - } - ], - "nullable": [ - false, - false, - false, - true, - true, - true, - false, - false, - false, - true, - true - ], - "parameters": { - "Left": [ - "Int4Array", - "Text" - ] - } - }, - "query": "\n UPDATE scheduler_witness_jobs_fri\n SET status = 'in_progress', attempts = attempts + 1,\n updated_at = now(), processing_started_at = now(),\n picked_by = $2\n WHERE l1_batch_number = (\n SELECT l1_batch_number\n FROM scheduler_witness_jobs_fri\n WHERE status = 'queued'\n AND 
protocol_version = ANY($1)\n ORDER BY l1_batch_number ASC\n LIMIT 1\n FOR UPDATE\n SKIP LOCKED\n )\n RETURNING scheduler_witness_jobs_fri.*\n " - }, - "c2cf96a9eb6893c5ba7d9e5418d9f24084ccd87980cb6ee05de1b3bde5c654bd": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "ByteaArray", - "ByteaArray" - ] - } - }, - "query": "\n INSERT INTO call_traces (tx_hash, call_trace)\n SELECT u.tx_hash, u.call_trace\n FROM UNNEST($1::bytea[], $2::bytea[])\n AS u(tx_hash, call_trace)\n " - }, - "c3724d96ed4e1c31dd575b911b254ed5a4af4d5b6ad1243c812b37ebde0f6090": { - "describe": { - "columns": [ - { - "name": "storage_refunds", - "ordinal": 0, - "type_info": "Int8Array" - } - ], - "nullable": [ - true - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT storage_refunds FROM l1_batches WHERE number = $1" - }, - "c59d052f89ddfc3d2c07be84d6d9837adfbe2cefb10d01e09d31aa5e3364e281": { - "describe": { - "columns": [ - { - "name": "number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "l1_tx_count", - "ordinal": 1, - "type_info": "Int4" - }, - { - "name": "l2_tx_count", - "ordinal": 2, - "type_info": "Int4" - }, - { - "name": "timestamp", - "ordinal": 3, - "type_info": "Int8" - }, - { - "name": "is_finished", - "ordinal": 4, - "type_info": "Bool" - }, - { - "name": "fee_account_address", - "ordinal": 5, - "type_info": "Bytea" - }, - { - "name": "l2_to_l1_logs", - "ordinal": 6, - "type_info": "ByteaArray" - }, - { - "name": "l2_to_l1_messages", - "ordinal": 7, - "type_info": "ByteaArray" - }, - { - "name": "bloom", - "ordinal": 8, - "type_info": "Bytea" - }, - { - "name": "priority_ops_onchain_data", - "ordinal": 9, - "type_info": "ByteaArray" - }, - { - "name": "used_contract_hashes", - "ordinal": 10, - "type_info": "Jsonb" - }, - { - "name": "base_fee_per_gas", - "ordinal": 11, - "type_info": "Numeric" - }, - { - "name": "l1_gas_price", - "ordinal": 12, - "type_info": "Int8" - }, - { - "name": "l2_fair_gas_price", - "ordinal": 13, - "type_info": "Int8" - }, - { - "name": "bootloader_code_hash", - "ordinal": 14, - "type_info": "Bytea" - }, - { - "name": "default_aa_code_hash", - "ordinal": 15, - "type_info": "Bytea" - }, - { - "name": "protocol_version", - "ordinal": 16, - "type_info": "Int4" - }, - { - "name": "compressed_state_diffs", - "ordinal": 17, - "type_info": "Bytea" - }, - { - "name": "system_logs", - "ordinal": 18, - "type_info": "ByteaArray" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - false, - false, - false, - false, - false, - false, - false, - false, - false, - true, - true, - true, - true, - false - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT number, l1_tx_count, l2_tx_count, timestamp, is_finished, fee_account_address, l2_to_l1_logs, l2_to_l1_messages, bloom, priority_ops_onchain_data, used_contract_hashes, base_fee_per_gas, l1_gas_price, l2_fair_gas_price, bootloader_code_hash, default_aa_code_hash, protocol_version, compressed_state_diffs, system_logs FROM l1_batches WHERE number = $1" - }, - "c604ee1dd86ac154d67ddb339da5f65ca849887d6a1068623e874f9df00cfdd1": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8", - "ByteaArray", - "Int4Array", - "VarcharArray", - "JsonbArray", - "Int8Array", - "NumericArray" - ] - } - }, - "query": "\n UPDATE transactions\n SET\n miniblock_number = $1,\n index_in_block = data_table.index_in_block,\n error = NULLIF(data_table.error, ''),\n in_mempool=FALSE,\n execution_info = execution_info || 
data_table.new_execution_info,\n refunded_gas = data_table.refunded_gas,\n effective_gas_price = data_table.effective_gas_price,\n updated_at = now()\n FROM\n (\n SELECT\n UNNEST($2::bytea[]) AS hash,\n UNNEST($3::integer[]) AS index_in_block,\n UNNEST($4::varchar[]) AS error,\n UNNEST($5::jsonb[]) AS new_execution_info,\n UNNEST($6::bigint[]) as refunded_gas,\n UNNEST($7::numeric[]) as effective_gas_price\n ) AS data_table\n WHERE transactions.hash = data_table.hash\n " - }, - "c6aadc4ec78e30f5775f7a9f866ad02984b78de3e3d1f34c144a4057ff44ea6a": { - "describe": { - "columns": [ - { - "name": "count", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - null - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT COUNT(*) FROM eth_txs WHERE has_failed = TRUE" - }, - "c6cdc9ef18fe20ef530b653c0c24c674dd74aef3701bfb5c6db23d649115f1d4": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Time", - "Int8" - ] - } - }, - "query": "\n UPDATE witness_inputs_fri\n SET status = 'successful', updated_at = now(), time_taken = $1\n WHERE l1_batch_number = $2\n " - }, - "c8125b30eb64eebfa4500dc623972bf8771a83b218bd18a51e633d4cf4bf8eb3": { - "describe": { - "columns": [ - { - "name": "bytecode", - "ordinal": 0, - "type_info": "Bytea" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Bytea", - "Int8", - "Bytea" - ] - } - }, - "query": "\n SELECT bytecode FROM (\n SELECT * FROM storage_logs\n WHERE\n storage_logs.hashed_key = $1 AND\n storage_logs.miniblock_number <= $2\n ORDER BY\n storage_logs.miniblock_number DESC, storage_logs.operation_number DESC\n LIMIT 1\n ) t\n JOIN factory_deps ON value = factory_deps.bytecode_hash\n WHERE value != $3\n " - }, - "c881cd7018a9f714cdc3388936e363d49bd6ae52467d382d2f2250ab4f11acf9": { - "describe": { - "columns": [ - { - "name": "address", - "ordinal": 0, - "type_info": "Bytea" - }, - { - "name": "key", - "ordinal": 1, - "type_info": "Bytea" - } - ], - "nullable": [ - false, - false - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT address, key FROM protective_reads WHERE l1_batch_number = $1" - }, - "c891770305cb3aba4021738e60567d977eac54435c871b5178de7c3c96d2f721": { - "describe": { - "columns": [ - { - "name": "usd_price", - "ordinal": 0, - "type_info": "Numeric" - }, - { - "name": "usd_price_updated_at", - "ordinal": 1, - "type_info": "Timestamp" - } - ], - "nullable": [ - true, - true - ], - "parameters": { - "Left": [ - "Bytea" - ] - } - }, - "query": "SELECT usd_price, usd_price_updated_at FROM tokens WHERE l2_address = $1" - }, - "ca0697232d98066834184318985e6960e180c4f5b98b46ca67ab191b66d343bf": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int4", - "Int4" - ] - } - }, - "query": "UPDATE eth_txs_history SET sent_at_block = $2, sent_at = now() WHERE id = $1 AND sent_at_block IS NULL" - }, - "ca8fa3521dab5ee985a837572e8625bd5b26bf79f58950698218b28110c29d1f": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Text", - "Int4", - "Int4", - "Int2", - "Text", - "Text", - "Int2" - ] - } - }, - "query": "\n INSERT INTO gpu_prover_queue (instance_host, instance_port, queue_capacity, queue_free_slots, instance_status, specialized_prover_group_id, region, zone, num_gpu, created_at, updated_at)\n VALUES (cast($1::text as inet), $2, $3, $3, 'available', $4, $5, $6, $7, now(), now())\n ON CONFLICT(instance_host, instance_port, region, zone)\n DO UPDATE SET instance_status='available', queue_capacity=$3, 
queue_free_slots=$3, specialized_prover_group_id=$4, region=$5, zone=$6, num_gpu=$7, updated_at=now()" - }, - "cc20350af9e837ae6b6160be65f88e6b675f62e207252f91f2ce7dcaaddb12b1": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int4", - "Int8", - "Bytea", - "Bytea", - "Bytea", - "Bytea", - "Bytea", - "Bytea", - "Bytea", - "Bytea" - ] - } - }, - "query": "INSERT INTO protocol_versions (id, timestamp, recursion_scheduler_level_vk_hash, recursion_node_level_vk_hash, recursion_leaf_level_vk_hash, recursion_circuits_set_vks_hash, bootloader_code_hash, default_account_code_hash, verifier_address, upgrade_tx_hash, created_at) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, now())" - }, - "cd2f668e3febead6b8c5c5dacaf95f0840b9c40f6c8585df93b0541f9b5b1548": { - "describe": { - "columns": [ - { - "name": "attempts", - "ordinal": 0, - "type_info": "Int2" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT attempts FROM proof_compression_jobs_fri WHERE l1_batch_number = $1" - }, - "ce3666b149f7fc62a68139a8efb83ed149c7deace17b8968817941763e45a147": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Bytea", - "Int8", - "Bytea" - ] - } - }, - "query": "\n DELETE FROM tokens \n WHERE l2_address IN\n (\n SELECT substring(key, 12, 20) FROM storage_logs \n WHERE storage_logs.address = $1 AND miniblock_number > $2 AND NOT EXISTS (\n SELECT 1 FROM storage_logs as s\n WHERE\n s.hashed_key = storage_logs.hashed_key AND\n (s.miniblock_number, s.operation_number) >= (storage_logs.miniblock_number, storage_logs.operation_number) AND\n s.value = $3\n )\n )\n " - }, - "cea77fbe02853a7a9b1f7b5ddf2957cb23212ae5ef0f889834d796c35b583542": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "DELETE FROM miniblocks WHERE number > $1" - }, - "cfd2ce8eb6997b7609090b4400e1bc42db577fdd3758248be69d3b5d9d132bf1": { - "describe": { - "columns": [ - { - "name": "count!", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "circuit_type!", - "ordinal": 1, - "type_info": "Text" - }, - { - "name": "status!", - "ordinal": 2, - "type_info": "Text" - } - ], - "nullable": [ - null, - false, - false - ], - "parameters": { - "Left": [] - } - }, - "query": "\n SELECT COUNT(*) as \"count!\", circuit_type as \"circuit_type!\", status as \"status!\"\n FROM prover_jobs\n WHERE status <> 'skipped' and status <> 'successful' \n GROUP BY circuit_type, status\n " - }, - "d0ff67e7c59684a0e4409726544cf850dbdbb36d038ebbc6a1c5bf0e76b0358c": { - "describe": { - "columns": [ - { - "name": "count!", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - null - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT COUNT(*) as \"count!\" FROM l1_batches" - }, - "d11ff84327058721c3c36bc3371c3139f41e2a2255f64bbc5108c1876848d8bb": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Text", - "Text", - "Int4", - "Int4", - "Text", - "Text" - ] - } - }, - "query": "\n UPDATE gpu_prover_queue\n SET instance_status = $1, updated_at = now(), queue_free_slots = $4\n WHERE instance_host = $2::text::inet\n AND instance_port = $3\n AND region = $5\n AND zone = $6\n " - }, - "d12724ae2bda6214b68e19dc290281907383926abf5ad471eef89529908b2673": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "l1_batch_number", - "ordinal": 1, - "type_info": "Int8" - }, - { - "name": 
"circuit_id", - "ordinal": 2, - "type_info": "Int2" - }, - { - "name": "aggregation_round", - "ordinal": 3, - "type_info": "Int2" - }, - { - "name": "sequence_number", - "ordinal": 4, - "type_info": "Int4" - }, - { - "name": "depth", - "ordinal": 5, - "type_info": "Int4" - }, - { - "name": "is_node_final_proof", - "ordinal": 6, - "type_info": "Bool" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - false, - false - ], - "parameters": { - "Left": [ - "Int4Array", - "Text" - ] - } - }, - "query": "\n UPDATE prover_jobs_fri\n SET status = 'in_progress', attempts = attempts + 1,\n updated_at = now(), processing_started_at = now(),\n picked_by = $2\n WHERE id = (\n SELECT id\n FROM prover_jobs_fri\n WHERE status = 'queued'\n AND protocol_version = ANY($1)\n ORDER BY aggregation_round DESC, l1_batch_number ASC, id ASC\n LIMIT 1\n FOR UPDATE\n SKIP LOCKED\n )\n RETURNING prover_jobs_fri.id, prover_jobs_fri.l1_batch_number, prover_jobs_fri.circuit_id,\n prover_jobs_fri.aggregation_round, prover_jobs_fri.sequence_number, prover_jobs_fri.depth,\n prover_jobs_fri.is_node_final_proof\n " - }, - "d1c82bd0b3c010569937ad7600760fa0c3aca7c9585bbf9598a5c0515b431b26": { - "describe": { - "columns": [ - { - "name": "hashed_key", - "ordinal": 0, - "type_info": "Bytea" - }, - { - "name": "l1_batch_number", - "ordinal": 1, - "type_info": "Int8" - }, - { - "name": "index", - "ordinal": 2, - "type_info": "Int8" - } - ], - "nullable": [ - false, - false, - false - ], - "parameters": { - "Left": [ - "ByteaArray" - ] - } - }, - "query": "SELECT hashed_key, l1_batch_number, index FROM initial_writes WHERE hashed_key = ANY($1::bytea[])" - }, - "d6709f3ce8f08f988e10a0e0fb5c06db9488834a85066babaf3d56cf212b4ea0": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Bytea", - "Varchar", - "Varchar", - "Int4" - ] - } - }, - "query": "UPDATE tokens SET token_list_name = $2, token_list_symbol = $3,\n token_list_decimals = $4, well_known = true, updated_at = now()\n WHERE l1_address = $1\n " - }, - "d7060880fe56fd99af7b7ed3f4c7fb9d0858cee30f44c5197821aae83c6c9666": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int4" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Bytea", - "Bytea", - "Bytea", - "Bytea" - ] - } - }, - "query": "\n SELECT id\n FROM prover_protocol_versions\n WHERE recursion_circuits_set_vks_hash = $1\n AND recursion_leaf_level_vk_hash = $2\n AND recursion_node_level_vk_hash = $3\n AND recursion_scheduler_level_vk_hash = $4\n " - }, - "d8515595d34dca53e50bbd4ed396f6208e33f596195a5ed02fba9e8364ceb33c": { - "describe": { - "columns": [ - { - "name": "bytecode", - "ordinal": 0, - "type_info": "Bytea" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Bytea" - ] - } - }, - "query": "SELECT bytecode FROM factory_deps WHERE bytecode_hash = $1" - }, - "d8e0bb1a349523077356be101808340eab078979390af7d26c71489b5f303d1b": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "UPDATE l1_batches SET skip_proof = TRUE WHERE number = $1" - }, - "d91a80fdfe140ac71760755a0bb6c29cf4f613dc3fd88df6facd63d7338b8470": { - "describe": { - "columns": [ - { - "name": "l1_batch_number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "scheduler_witness_blob_url", - "ordinal": 1, - "type_info": "Text" - }, - { - "name": "final_node_aggregations_blob_url", - "ordinal": 2, - "type_info": "Text" - } - ], - "nullable": [ - false, - true, - 
true - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "\n SELECT l1_batch_number, scheduler_witness_blob_url, final_node_aggregations_blob_url FROM scheduler_witness_jobs\n WHERE status='successful'\n AND updated_at < NOW() - INTERVAL '30 days'\n AND scheduler_witness_blob_url is NOT NULL\n AND final_node_aggregations_blob_url is NOT NULL\n LIMIT $1;\n " - }, - "dba127c0f3023586217bfb214c5d3749e8e7ec3edc0c99cfd970332e31f81cb7": { - "describe": { - "columns": [ - { - "name": "virtual_blocks", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT virtual_blocks FROM miniblocks WHERE number = $1" - }, - "dc16d0fac093a52480b66dfcb5976fb01e6629e8c982c265f2af1d5000090572": { - "describe": { - "columns": [ - { - "name": "count", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - null - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT COUNT(miniblocks.number) FROM miniblocks WHERE l1_batch_number IS NULL" - }, - "dd330bc075a163974c59ec55ecfddd769d05801963b3e0e840e7f11e7bc6d3e9": { - "describe": { - "columns": [ - { - "name": "l1_batch_number", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT l1_batch_number FROM witness_inputs WHERE length(merkle_tree_paths) <> 0 ORDER BY l1_batch_number DESC LIMIT $1" - }, - "dd8aa1c9d4dcea22c9a13cca5ae45e951cf963b0608046b88be40309d7379ec2": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Varchar", - "Bytea" - ] - } - }, - "query": "UPDATE transactions\n SET error = $1, updated_at = now()\n WHERE hash = $2" - }, - "dd8f0bbabcd646457a9174a590c79a45d4f744624a74f79017eacbab6b4f9b0a": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int4" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT id FROM protocol_versions" - }, - "ddb3b38be2b6038b63288961f46ba7d3bb7250caff1146e13c5ee77b6a994ffc": { - "describe": { - "columns": [ - { - "name": "circuit_type", - "ordinal": 0, - "type_info": "Text" - }, - { - "name": "result", - "ordinal": 1, - "type_info": "Bytea" - } - ], - "nullable": [ - false, - true - ], - "parameters": { - "Left": [ - "Int8", - "Int4" - ] - } - }, - "query": "\n SELECT circuit_type, result from prover_jobs\n WHERE l1_batch_number = $1 AND status = 'successful' AND aggregation_round = $2\n ORDER BY sequence_number ASC;\n " - }, - "ddd8b105f5e5cf9db40b14ea47e4ba2b3875f89280019464be34f51605833f1b": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Text", - "Text", - "Int4", - "Text" - ] - } - }, - "query": "UPDATE gpu_prover_queue_fri SET instance_status = $1, updated_at = now() WHERE instance_host = $2::text::inet AND instance_port = $3 AND zone = $4\n " - }, - "de960625b0fa0b766aacab74473fcd0332a3f7dc356648452a6a63189a8b7cc3": { - "describe": { - "columns": [ - { - "name": "protocol_version", - "ordinal": 0, - "type_info": "Int4" - } - ], - "nullable": [ - true - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT protocol_version FROM witness_inputs_fri WHERE l1_batch_number = $1" - }, - "deaf3789ac968e299fe0e5a7f1c72494af8ecd664da9c901ec9c0c5e7c29bb65": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "ByteaArray", - "ByteaArray", - "ByteaArray", - "ByteaArray", - "ByteaArray" - ] - } - }, - "query": "INSERT 
INTO storage (hashed_key, address, key, value, tx_hash, created_at, updated_at) SELECT u.hashed_key, u.address, u.key, u.value, u.tx_hash, now(), now() FROM UNNEST ($1::bytea[], $2::bytea[], $3::bytea[], $4::bytea[], $5::bytea[]) AS u(hashed_key, address, key, value, tx_hash) ON CONFLICT (hashed_key) DO UPDATE SET tx_hash = excluded.tx_hash, value = excluded.value, updated_at = now()" - }, - "df857ee85c600bd90687b2ed91517d91a5dc4de3cd6c15c34119ca52a3321828": { - "describe": { - "columns": [ - { - "name": "l1_batch_number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "merkle_tree_paths", - "ordinal": 1, - "type_info": "Bytea" - }, - { - "name": "created_at", - "ordinal": 2, - "type_info": "Timestamp" - }, - { - "name": "updated_at", - "ordinal": 3, - "type_info": "Timestamp" - }, - { - "name": "status", - "ordinal": 4, - "type_info": "Text" - }, - { - "name": "time_taken", - "ordinal": 5, - "type_info": "Time" - }, - { - "name": "processing_started_at", - "ordinal": 6, - "type_info": "Timestamp" - }, - { - "name": "error", - "ordinal": 7, - "type_info": "Varchar" - }, - { - "name": "attempts", - "ordinal": 8, - "type_info": "Int4" - }, - { - "name": "merkel_tree_paths_blob_url", - "ordinal": 9, - "type_info": "Text" - }, - { - "name": "is_blob_cleaned", - "ordinal": 10, - "type_info": "Bool" - }, - { - "name": "protocol_version", - "ordinal": 11, - "type_info": "Int4" - } - ], - "nullable": [ - false, - true, - false, - false, - false, - false, - true, - true, - false, - true, - false, - true - ], - "parameters": { - "Left": [ - "Interval", - "Int4", - "Int8", - "Int4Array" - ] - } - }, - "query": "\n UPDATE witness_inputs\n SET status = 'in_progress', attempts = attempts + 1,\n updated_at = now(), processing_started_at = now()\n WHERE l1_batch_number = (\n SELECT l1_batch_number\n FROM witness_inputs\n WHERE l1_batch_number <= $3\n AND\n ( status = 'queued'\n OR (status = 'in_progress' AND processing_started_at < now() - $1::interval)\n OR (status = 'failed' AND attempts < $2)\n )\n AND protocol_version = ANY($4)\n ORDER BY l1_batch_number ASC\n LIMIT 1\n FOR UPDATE\n SKIP LOCKED\n )\n RETURNING witness_inputs.*\n " - }, - "e05a8c74653afc78c892ddfd08e60ab040d2b2f7c4b5ee110988eac2dd0dd90d": { - "describe": { - "columns": [ - { - "name": "timestamp", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "virtual_blocks", - "ordinal": 1, - "type_info": "Int8" - } - ], - "nullable": [ - false, - false - ], - "parameters": { - "Left": [ - "Int8", - "Int8" - ] - } - }, - "query": "SELECT timestamp, virtual_blocks FROM miniblocks WHERE number BETWEEN $1 AND $2 ORDER BY number" - }, - "e3ed9f56d316ac95123df3831ce6e6a1552be8e280ac1f3caf5aa1539275905e": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Bytea", - "Text", - "Text", - "Text", - "Text", - "Bool", - "Text", - "Bytea", - "Bool" - ] - } - }, - "query": "INSERT INTO contract_verification_requests ( contract_address, source_code, contract_name, zk_compiler_version, compiler_version, optimization_used, optimizer_mode, constructor_arguments, is_system, status, created_at, updated_at )\n VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, 'queued', now(), now()) RETURNING id" - }, - "e429061bd0f67910ad8676a34f2b89a051a6df3097c8afde81a491c342a10e3a": { - "describe": { - "columns": [ - { - "name": "l1_batch_number", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - { - 
"Custom": { - "kind": { - "Enum": [ - "Queued", - "ManuallySkipped", - "InProgress", - "Successful", - "Failed" - ] - }, - "name": "basic_witness_input_producer_job_status" - } - }, - { - "Custom": { - "kind": { - "Enum": [ - "Queued", - "ManuallySkipped", - "InProgress", - "Successful", - "Failed" - ] - }, - "name": "basic_witness_input_producer_job_status" - } - }, - { - "Custom": { - "kind": { - "Enum": [ - "Queued", - "ManuallySkipped", - "InProgress", - "Successful", - "Failed" - ] - }, - "name": "basic_witness_input_producer_job_status" - } - }, - "Interval", - "Int2" - ] - } - }, - "query": "UPDATE basic_witness_input_producer_jobs SET status = $1, attempts = attempts + 1, updated_at = now(), processing_started_at = now() WHERE l1_batch_number = ( SELECT l1_batch_number FROM basic_witness_input_producer_jobs WHERE status = $2 OR (status = $1 AND processing_started_at < now() - $4::interval) OR (status = $3 AND attempts < $5) ORDER BY l1_batch_number ASC LIMIT 1 FOR UPDATE SKIP LOCKED ) RETURNING basic_witness_input_producer_jobs.l1_batch_number" - }, - "e626aa2efb6ba875a12f2b4e37b0ba8052810e73fa5e2d3280f747f7b89b956f": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Text", - "Int8" - ] - } - }, - "query": "UPDATE proof_generation_details SET status='generated', proof_blob_url = $1, updated_at = now() WHERE l1_batch_number = $2" - }, - "e793a57147bbf31334e9471fa2fd82cc138124c2c34df6d10997556f41ae6bc0": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8", - "Int4", - "Int4" - ] - } - }, - "query": "UPDATE eth_txs SET gas_used = $1, confirmed_eth_tx_history_id = $2 WHERE id = $3" - }, - "e8988deed66ad9d10be89e89966082aeb920c5dc91eb5fad16bd0d3118708c2e": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "status", - "ordinal": 1, - "type_info": "Text" - }, - { - "name": "attempts", - "ordinal": 2, - "type_info": "Int4" - } - ], - "nullable": [ - false, - false, - false - ], - "parameters": { - "Left": [ - "Interval", - "Int4" - ] - } - }, - "query": "\n UPDATE prover_jobs\n SET status = 'queued', updated_at = now(), processing_started_at = now()\n WHERE (status = 'in_progress' AND processing_started_at <= now() - $1::interval AND attempts < $2)\n OR (status = 'in_gpu_proof' AND processing_started_at <= now() - $1::interval AND attempts < $2)\n OR (status = 'failed' AND attempts < $2)\n RETURNING id, status, attempts\n " - }, - "e900682a160af90d532da47a1222fc1d7c9962ee8996dbd9b9bb63f13820cf2b": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "ByteaArray" - ] - } - }, - "query": "DELETE FROM transactions WHERE in_mempool = TRUE AND initiator_address = ANY($1)" - }, - "e9b03a0d79eb40a67eab9bdaac8447fc17922bea89bcc6a89eb8eadf147835fe": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8", - "Text", - "Int4" - ] - } - }, - "query": "\n INSERT INTO scheduler_witness_jobs_fri\n (l1_batch_number, scheduler_partial_input_blob_url, protocol_version, status, created_at, updated_at)\n VALUES ($1, $2, $3, 'waiting_for_proofs', now(), now())\n ON CONFLICT(l1_batch_number)\n DO UPDATE SET updated_at=now()\n " - }, - "ea17481cab38d370e06e7cf8598daa39faf4414152456aab89695e3133477d3e": { - "describe": { - "columns": [ - { - "name": "hash", - "ordinal": 0, - "type_info": "Bytea" - }, - { - "name": "is_priority", - "ordinal": 1, - "type_info": "Bool" - }, - { - "name": "full_fee", - "ordinal": 2, 
- "type_info": "Numeric" - }, - { - "name": "layer_2_tip_fee", - "ordinal": 3, - "type_info": "Numeric" - }, - { - "name": "initiator_address", - "ordinal": 4, - "type_info": "Bytea" - }, - { - "name": "nonce", - "ordinal": 5, - "type_info": "Int8" - }, - { - "name": "signature", - "ordinal": 6, - "type_info": "Bytea" - }, - { - "name": "input", - "ordinal": 7, - "type_info": "Bytea" - }, - { - "name": "data", - "ordinal": 8, - "type_info": "Jsonb" - }, - { - "name": "received_at", - "ordinal": 9, - "type_info": "Timestamp" - }, - { - "name": "priority_op_id", - "ordinal": 10, - "type_info": "Int8" - }, - { - "name": "l1_batch_number", - "ordinal": 11, - "type_info": "Int8" - }, - { - "name": "index_in_block", - "ordinal": 12, - "type_info": "Int4" - }, - { - "name": "error", - "ordinal": 13, - "type_info": "Varchar" - }, - { - "name": "gas_limit", - "ordinal": 14, - "type_info": "Numeric" - }, - { - "name": "gas_per_storage_limit", - "ordinal": 15, - "type_info": "Numeric" - }, - { - "name": "gas_per_pubdata_limit", - "ordinal": 16, - "type_info": "Numeric" - }, - { - "name": "tx_format", - "ordinal": 17, - "type_info": "Int4" - }, - { - "name": "created_at", - "ordinal": 18, - "type_info": "Timestamp" - }, - { - "name": "updated_at", - "ordinal": 19, - "type_info": "Timestamp" - }, - { - "name": "execution_info", - "ordinal": 20, - "type_info": "Jsonb" - }, - { - "name": "contract_address", - "ordinal": 21, - "type_info": "Bytea" - }, - { - "name": "in_mempool", - "ordinal": 22, - "type_info": "Bool" - }, - { - "name": "l1_block_number", - "ordinal": 23, - "type_info": "Int4" - }, - { - "name": "value", - "ordinal": 24, - "type_info": "Numeric" - }, - { - "name": "paymaster", - "ordinal": 25, - "type_info": "Bytea" - }, - { - "name": "paymaster_input", - "ordinal": 26, - "type_info": "Bytea" - }, - { - "name": "max_fee_per_gas", - "ordinal": 27, - "type_info": "Numeric" - }, - { - "name": "max_priority_fee_per_gas", - "ordinal": 28, - "type_info": "Numeric" - }, - { - "name": "effective_gas_price", - "ordinal": 29, - "type_info": "Numeric" - }, - { - "name": "miniblock_number", - "ordinal": 30, - "type_info": "Int8" - }, - { - "name": "l1_batch_tx_index", - "ordinal": 31, - "type_info": "Int4" - }, - { - "name": "refunded_gas", - "ordinal": 32, - "type_info": "Int8" - }, - { - "name": "l1_tx_mint", - "ordinal": 33, - "type_info": "Numeric" - }, - { - "name": "l1_tx_refund_recipient", - "ordinal": 34, - "type_info": "Bytea" - }, - { - "name": "upgrade_id", - "ordinal": 35, - "type_info": "Int4" - } - ], - "nullable": [ - false, - false, - true, - true, - false, - true, - true, - true, - false, - false, - true, - true, - true, - true, - true, - true, - true, - true, - false, - false, - false, - true, - false, - true, - false, - false, - false, - true, - true, - true, - true, - true, - false, - true, - true, - true - ], - "parameters": { - "Left": [ - "Bytea" - ] - } - }, - "query": "\n SELECT * FROM transactions\n WHERE hash = $1\n " - }, - "eb95c3daeffd23d35d4e047e3bb8dc44e93492a6d41cf0fd1624d3ea4a2267c9": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8", - "Int8" - ] - } - }, - "query": "UPDATE l1_batches SET predicted_commit_gas_cost = $2, updated_at = now() WHERE number = $1" - }, - "ed50c609371b4588964e29f8757c41973706710090a80eb025ec263ce3d019b4": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Text", - "Int4", - "Int2", - "Text" - ] - } - }, - "query": "INSERT INTO gpu_prover_queue_fri (instance_host, 
instance_port, instance_status, specialized_prover_group_id, zone, created_at, updated_at) VALUES (cast($1::text as inet), $2, 'available', $3, $4, now(), now()) ON CONFLICT(instance_host, instance_port, zone) DO UPDATE SET instance_status='available', specialized_prover_group_id=$3, zone=$4, updated_at=now()" - }, - "eda61fd8012aadc27a2952e96d4238bccb21ec47a17e326a7ae9182d5358d733": { - "describe": { - "columns": [ - { - "name": "timestamp", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT timestamp FROM l1_batches WHERE eth_prove_tx_id IS NULL AND number > 0 ORDER BY number LIMIT 1" - }, - "ee74b42d1a6a52784124751dae6c7eca3fd36f5a3bb26de56efc2b810da7033a": { - "describe": { - "columns": [ - { - "name": "initial_bootloader_heap_content", - "ordinal": 0, - "type_info": "Jsonb" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT initial_bootloader_heap_content FROM l1_batches WHERE number = $1" - }, - "ee7bd820bf35c5c714092494c386eccff25457cff6dc00eb81d9809eaeb95670": { - "describe": { - "columns": [ - { - "name": "is_replaced!", - "ordinal": 0, - "type_info": "Bool" - } - ], - "nullable": [ - null - ], - "parameters": { - "Left": [ - "Bytea", - "Bytea", - "Int8", - "Bytea", - "Numeric", - "Numeric", - "Numeric", - "Numeric", - "Bytea", - "Jsonb", - "Int4", - "Bytea", - "Numeric", - "Bytea", - "Bytea", - "Int8", - "Int4", - "Int4", - "Timestamp" - ] - } - }, - "query": "\n INSERT INTO transactions\n (\n hash,\n is_priority,\n initiator_address,\n nonce,\n signature,\n gas_limit,\n max_fee_per_gas,\n max_priority_fee_per_gas,\n gas_per_pubdata_limit,\n input,\n data,\n tx_format,\n contract_address,\n value,\n paymaster,\n paymaster_input,\n execution_info,\n received_at,\n created_at,\n updated_at\n )\n VALUES\n (\n $1, FALSE, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, $15,\n jsonb_build_object('gas_used', $16::bigint, 'storage_writes', $17::int, 'contracts_used', $18::int),\n $19, now(), now()\n )\n ON CONFLICT\n (initiator_address, nonce)\n DO UPDATE\n SET hash=$1,\n signature=$4,\n gas_limit=$5,\n max_fee_per_gas=$6,\n max_priority_fee_per_gas=$7,\n gas_per_pubdata_limit=$8,\n input=$9,\n data=$10,\n tx_format=$11,\n contract_address=$12,\n value=$13,\n paymaster=$14,\n paymaster_input=$15,\n execution_info=jsonb_build_object('gas_used', $16::bigint, 'storage_writes', $17::int, 'contracts_used', $18::int),\n in_mempool=FALSE,\n received_at=$19,\n created_at=now(),\n updated_at=now(),\n error = NULL\n WHERE transactions.is_priority = FALSE AND transactions.miniblock_number IS NULL\n RETURNING (SELECT hash FROM transactions WHERE transactions.initiator_address = $2 AND transactions.nonce = $3) IS NOT NULL as \"is_replaced!\"\n " - }, - "ee87b42383cd6b4f1445e2aa152369fee31a7fea436db8b3b9925a60ac60cd1a": { - "describe": { - "columns": [ - { - "name": "hash", - "ordinal": 0, - "type_info": "Bytea" - }, - { - "name": "is_priority", - "ordinal": 1, - "type_info": "Bool" - }, - { - "name": "full_fee", - "ordinal": 2, - "type_info": "Numeric" - }, - { - "name": "layer_2_tip_fee", - "ordinal": 3, - "type_info": "Numeric" - }, - { - "name": "initiator_address", - "ordinal": 4, - "type_info": "Bytea" - }, - { - "name": "nonce", - "ordinal": 5, - "type_info": "Int8" - }, - { - "name": "signature", - "ordinal": 6, - "type_info": "Bytea" - }, - { - "name": "input", - "ordinal": 7, - "type_info": "Bytea" - }, - { - "name": "data", - "ordinal": 8, - 
"type_info": "Jsonb" - }, - { - "name": "received_at", - "ordinal": 9, - "type_info": "Timestamp" - }, - { - "name": "priority_op_id", - "ordinal": 10, - "type_info": "Int8" - }, - { - "name": "l1_batch_number", - "ordinal": 11, - "type_info": "Int8" - }, - { - "name": "index_in_block", - "ordinal": 12, - "type_info": "Int4" - }, - { - "name": "error", - "ordinal": 13, - "type_info": "Varchar" - }, - { - "name": "gas_limit", - "ordinal": 14, - "type_info": "Numeric" - }, - { - "name": "gas_per_storage_limit", - "ordinal": 15, - "type_info": "Numeric" - }, - { - "name": "gas_per_pubdata_limit", - "ordinal": 16, - "type_info": "Numeric" - }, - { - "name": "tx_format", - "ordinal": 17, - "type_info": "Int4" - }, - { - "name": "created_at", - "ordinal": 18, - "type_info": "Timestamp" - }, - { - "name": "updated_at", - "ordinal": 19, - "type_info": "Timestamp" - }, - { - "name": "execution_info", - "ordinal": 20, - "type_info": "Jsonb" - }, - { - "name": "contract_address", - "ordinal": 21, - "type_info": "Bytea" - }, - { - "name": "in_mempool", - "ordinal": 22, - "type_info": "Bool" - }, - { - "name": "l1_block_number", - "ordinal": 23, - "type_info": "Int4" - }, - { - "name": "value", - "ordinal": 24, - "type_info": "Numeric" - }, - { - "name": "paymaster", - "ordinal": 25, - "type_info": "Bytea" - }, - { - "name": "paymaster_input", - "ordinal": 26, - "type_info": "Bytea" - }, - { - "name": "max_fee_per_gas", - "ordinal": 27, - "type_info": "Numeric" - }, - { - "name": "max_priority_fee_per_gas", - "ordinal": 28, - "type_info": "Numeric" - }, - { - "name": "effective_gas_price", - "ordinal": 29, - "type_info": "Numeric" - }, - { - "name": "miniblock_number", - "ordinal": 30, - "type_info": "Int8" - }, - { - "name": "l1_batch_tx_index", - "ordinal": 31, - "type_info": "Int4" - }, - { - "name": "refunded_gas", - "ordinal": 32, - "type_info": "Int8" - }, - { - "name": "l1_tx_mint", - "ordinal": 33, - "type_info": "Numeric" - }, - { - "name": "l1_tx_refund_recipient", - "ordinal": 34, - "type_info": "Bytea" - }, - { - "name": "upgrade_id", - "ordinal": 35, - "type_info": "Int4" - } - ], - "nullable": [ - false, - false, - true, - true, - false, - true, - true, - true, - false, - false, - true, - true, - true, - true, - true, - true, - true, - true, - false, - false, - false, - true, - false, - true, - false, - false, - false, - true, - true, - true, - true, - true, - false, - true, - true, - true - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT * FROM transactions WHERE miniblock_number = $1 ORDER BY index_in_block" - }, - "efc83e42f5d0238b8996a5b311746527289a5a002ff659531a076680127e8eb4": { - "describe": { - "columns": [ - { - "name": "hash", - "ordinal": 0, - "type_info": "Bytea" - } - ], - "nullable": [ - true - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT hash FROM l1_batches WHERE number = $1" - }, - "f0c83c517fdf9696a0acf288f061bd00a993e0b2379b667738b6876e2f588043": { - "describe": { - "columns": [ - { - "name": "l1_batch_number", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [] - } - }, - "query": "\n UPDATE node_aggregation_witness_jobs\n SET status='queued'\n WHERE l1_batch_number IN\n (SELECT prover_jobs.l1_batch_number\n FROM prover_jobs\n JOIN node_aggregation_witness_jobs nawj ON prover_jobs.l1_batch_number = nawj.l1_batch_number\n WHERE nawj.status = 'waiting_for_proofs'\n AND prover_jobs.status = 'successful'\n AND prover_jobs.aggregation_round = 1\n GROUP BY 
prover_jobs.l1_batch_number, nawj.number_of_leaf_circuits\n HAVING COUNT(*) = nawj.number_of_leaf_circuits)\n RETURNING l1_batch_number;\n " - }, - "f1defa140e20b9c250d3212602dc259c0a35598c2e69d1c42746a8fab6dd8d3e": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Text", - "Int4", - "Int4", - "Text", - "Text" - ] - } - }, - "query": "\n UPDATE gpu_prover_queue\n SET instance_status = 'available', updated_at = now(), queue_free_slots = $3\n WHERE instance_host = $1::text::inet\n AND instance_port = $2\n AND instance_status = 'full'\n AND region = $4\n AND zone = $5\n " - }, - "f365ada84c576a9049551a28f800ca8cb1d0096f3ba1c9edec725e11892a5a6c": { - "describe": { - "columns": [ - { - "name": "hash", - "ordinal": 0, - "type_info": "Bytea" - }, - { - "name": "is_priority", - "ordinal": 1, - "type_info": "Bool" - }, - { - "name": "full_fee", - "ordinal": 2, - "type_info": "Numeric" - }, - { - "name": "layer_2_tip_fee", - "ordinal": 3, - "type_info": "Numeric" - }, - { - "name": "initiator_address", - "ordinal": 4, - "type_info": "Bytea" - }, - { - "name": "nonce", - "ordinal": 5, - "type_info": "Int8" - }, - { - "name": "signature", - "ordinal": 6, - "type_info": "Bytea" - }, - { - "name": "input", - "ordinal": 7, - "type_info": "Bytea" - }, - { - "name": "data", - "ordinal": 8, - "type_info": "Jsonb" - }, - { - "name": "received_at", - "ordinal": 9, - "type_info": "Timestamp" - }, - { - "name": "priority_op_id", - "ordinal": 10, - "type_info": "Int8" - }, - { - "name": "l1_batch_number", - "ordinal": 11, - "type_info": "Int8" - }, - { - "name": "index_in_block", - "ordinal": 12, - "type_info": "Int4" - }, - { - "name": "error", - "ordinal": 13, - "type_info": "Varchar" - }, - { - "name": "gas_limit", - "ordinal": 14, - "type_info": "Numeric" - }, - { - "name": "gas_per_storage_limit", - "ordinal": 15, - "type_info": "Numeric" - }, - { - "name": "gas_per_pubdata_limit", - "ordinal": 16, - "type_info": "Numeric" - }, - { - "name": "tx_format", - "ordinal": 17, - "type_info": "Int4" - }, - { - "name": "created_at", - "ordinal": 18, - "type_info": "Timestamp" - }, - { - "name": "updated_at", - "ordinal": 19, - "type_info": "Timestamp" - }, - { - "name": "execution_info", - "ordinal": 20, - "type_info": "Jsonb" - }, - { - "name": "contract_address", - "ordinal": 21, - "type_info": "Bytea" - }, - { - "name": "in_mempool", - "ordinal": 22, - "type_info": "Bool" - }, - { - "name": "l1_block_number", - "ordinal": 23, - "type_info": "Int4" - }, - { - "name": "value", - "ordinal": 24, - "type_info": "Numeric" - }, - { - "name": "paymaster", - "ordinal": 25, - "type_info": "Bytea" - }, - { - "name": "paymaster_input", - "ordinal": 26, - "type_info": "Bytea" - }, - { - "name": "max_fee_per_gas", - "ordinal": 27, - "type_info": "Numeric" - }, - { - "name": "max_priority_fee_per_gas", - "ordinal": 28, - "type_info": "Numeric" - }, - { - "name": "effective_gas_price", - "ordinal": 29, - "type_info": "Numeric" - }, - { - "name": "miniblock_number", - "ordinal": 30, - "type_info": "Int8" - }, - { - "name": "l1_batch_tx_index", - "ordinal": 31, - "type_info": "Int4" - }, - { - "name": "refunded_gas", - "ordinal": 32, - "type_info": "Int8" - }, - { - "name": "l1_tx_mint", - "ordinal": 33, - "type_info": "Numeric" - }, - { - "name": "l1_tx_refund_recipient", - "ordinal": 34, - "type_info": "Bytea" - }, - { - "name": "upgrade_id", - "ordinal": 35, - "type_info": "Int4" - } - ], - "nullable": [ - false, - false, - true, - true, - false, - true, - true, - true, - false, - false, - 
true, - true, - true, - true, - true, - true, - true, - true, - false, - false, - false, - true, - false, - true, - false, - false, - false, - true, - true, - true, - true, - true, - false, - true, - true, - true - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT * FROM transactions WHERE l1_batch_number = $1 ORDER BY miniblock_number, index_in_block" - }, - "f39893caa0ad524eda13ab89539fd61804c9190b3d62f4416de83159c2c189e4": { - "describe": { - "columns": [ - { - "name": "l1_batch_number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "status", - "ordinal": 1, - "type_info": "Text" - }, - { - "name": "attempts", - "ordinal": 2, - "type_info": "Int2" - } - ], - "nullable": [ - false, - false, - false - ], - "parameters": { - "Left": [ - "Interval", - "Int2" - ] - } - }, - "query": "UPDATE proof_compression_jobs_fri SET status = 'queued', updated_at = now(), processing_started_at = now() WHERE (status = 'in_progress' AND processing_started_at <= now() - $1::interval AND attempts < $2) OR (status = 'failed' AND attempts < $2) RETURNING l1_batch_number, status, attempts" - }, - "f5e3c4b23fa0d0686b400b64c42cf78b2219f0cbcf1c9240b77e4132513e36ef": { - "describe": { - "columns": [ - { - "name": "address", - "ordinal": 0, - "type_info": "Bytea" - }, - { - "name": "key", - "ordinal": 1, - "type_info": "Bytea" - }, - { - "name": "value", - "ordinal": 2, - "type_info": "Bytea" - } - ], - "nullable": [ - false, - false, - false - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "SELECT address, key, value FROM storage_logs WHERE miniblock_number BETWEEN (SELECT MIN(number) FROM miniblocks WHERE l1_batch_number = $1) AND (SELECT MAX(number) FROM miniblocks WHERE l1_batch_number = $1) ORDER BY miniblock_number, operation_number" - }, - "f69542ca7e27a74d3703f359d9be33cf11c1f066c42754b92fced2af410c4558": { - "describe": { - "columns": [ - { - "name": "attempts", - "ordinal": 0, - "type_info": "Int2" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - { - "Custom": { - "kind": { - "Enum": [ - "Queued", - "ManuallySkipped", - "InProgress", - "Successful", - "Failed" - ] - }, - "name": "basic_witness_input_producer_job_status" - } - }, - "Int8", - "Time", - "Text", - { - "Custom": { - "kind": { - "Enum": [ - "Queued", - "ManuallySkipped", - "InProgress", - "Successful", - "Failed" - ] - }, - "name": "basic_witness_input_producer_job_status" - } - } - ] - } - }, - "query": "UPDATE basic_witness_input_producer_jobs SET status = $1, updated_at = now(), time_taken = $3, error = $4 WHERE l1_batch_number = $2 AND status != $5 RETURNING basic_witness_input_producer_jobs.attempts" - }, - "f78960549e6201527454d060d5b483db032f4df80b4269a624f0309ed9a6a38e": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Text", - "Int8" - ] - } - }, - "query": "\n UPDATE witness_inputs_fri SET status ='failed', error= $1, updated_at = now()\n WHERE l1_batch_number = $2\n " - }, - "fa006dda8f56abb70afc5ba8b6da631747d17ebd03a37ddb72914c4ed2aeb2f5": { - "describe": { - "columns": [ - { - "name": "trace", - "ordinal": 0, - "type_info": "Jsonb" - } - ], - "nullable": [ - false - ], - "parameters": { - "Left": [ - "Bytea" - ] - } - }, - "query": "SELECT trace FROM transaction_traces WHERE tx_hash = $1" - }, - "fa177254ba516ad1588f4f6960be96706d1f43c23ff1d57ba2bc7bc7148bdcac": { - "describe": { - "columns": [ - { - "name": "number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "timestamp", - "ordinal": 1, - "type_info": 
"Int8" - }, - { - "name": "hash", - "ordinal": 2, - "type_info": "Bytea" - }, - { - "name": "l1_tx_count", - "ordinal": 3, - "type_info": "Int4" - }, - { - "name": "l2_tx_count", - "ordinal": 4, - "type_info": "Int4" - }, - { - "name": "base_fee_per_gas", - "ordinal": 5, - "type_info": "Numeric" - }, - { - "name": "l1_gas_price", - "ordinal": 6, - "type_info": "Int8" - }, - { - "name": "l2_fair_gas_price", - "ordinal": 7, - "type_info": "Int8" - }, - { - "name": "bootloader_code_hash", - "ordinal": 8, - "type_info": "Bytea" - }, - { - "name": "default_aa_code_hash", - "ordinal": 9, - "type_info": "Bytea" - }, - { - "name": "protocol_version", - "ordinal": 10, - "type_info": "Int4" - }, - { - "name": "virtual_blocks", - "ordinal": 11, - "type_info": "Int8" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - false, - false, - false, - true, - true, - true, - false - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT number, timestamp, hash, l1_tx_count, l2_tx_count, base_fee_per_gas, l1_gas_price, l2_fair_gas_price, bootloader_code_hash, default_aa_code_hash, protocol_version, virtual_blocks\n FROM miniblocks ORDER BY number DESC LIMIT 1" - }, - "fa2b4316aaef09e96d93b70f96b129ed123951732e01d63f30b4b292d441ea39": { - "describe": { - "columns": [ - { - "name": "l1_batch_number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "status", - "ordinal": 1, - "type_info": "Text" - }, - { - "name": "circuit_1_final_prover_job_id", - "ordinal": 2, - "type_info": "Int8" - }, - { - "name": "circuit_2_final_prover_job_id", - "ordinal": 3, - "type_info": "Int8" - }, - { - "name": "circuit_3_final_prover_job_id", - "ordinal": 4, - "type_info": "Int8" - }, - { - "name": "circuit_4_final_prover_job_id", - "ordinal": 5, - "type_info": "Int8" - }, - { - "name": "circuit_5_final_prover_job_id", - "ordinal": 6, - "type_info": "Int8" - }, - { - "name": "circuit_6_final_prover_job_id", - "ordinal": 7, - "type_info": "Int8" - }, - { - "name": "circuit_7_final_prover_job_id", - "ordinal": 8, - "type_info": "Int8" - }, - { - "name": "circuit_8_final_prover_job_id", - "ordinal": 9, - "type_info": "Int8" - }, - { - "name": "circuit_9_final_prover_job_id", - "ordinal": 10, - "type_info": "Int8" - }, - { - "name": "circuit_10_final_prover_job_id", - "ordinal": 11, - "type_info": "Int8" - }, - { - "name": "circuit_11_final_prover_job_id", - "ordinal": 12, - "type_info": "Int8" - }, - { - "name": "circuit_12_final_prover_job_id", - "ordinal": 13, - "type_info": "Int8" - }, - { - "name": "circuit_13_final_prover_job_id", - "ordinal": 14, - "type_info": "Int8" - }, - { - "name": "created_at", - "ordinal": 15, - "type_info": "Timestamp" - }, - { - "name": "updated_at", - "ordinal": 16, - "type_info": "Timestamp" - } - ], - "nullable": [ - false, - false, - true, - true, - true, - true, - true, - true, - true, - true, - true, - true, - true, - true, - true, - false, - false - ], - "parameters": { - "Left": [ - "Int8" - ] - } - }, - "query": "\n SELECT * FROM scheduler_dependency_tracker_fri\n WHERE l1_batch_number = $1\n " - }, - "fa33d51f8627376832b11bb174354e65e645ee2fb81564a97725518f47ae6f57": { - "describe": { - "columns": [ - { - "name": "number", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - null - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT MAX(number) as \"number\" FROM l1_batches" - }, - "fa6ef06edd04d20ddbdf22a63092222e89bb84d6093b07bda16407811d9c33c0": { - "describe": { - "columns": [ - { - "name": "id", - "ordinal": 0, - "type_info": 
"Int4" - }, - { - "name": "nonce", - "ordinal": 1, - "type_info": "Int8" - }, - { - "name": "raw_tx", - "ordinal": 2, - "type_info": "Bytea" - }, - { - "name": "contract_address", - "ordinal": 3, - "type_info": "Text" - }, - { - "name": "tx_type", - "ordinal": 4, - "type_info": "Text" - }, - { - "name": "gas_used", - "ordinal": 5, - "type_info": "Int8" - }, - { - "name": "created_at", - "ordinal": 6, - "type_info": "Timestamp" - }, - { - "name": "updated_at", - "ordinal": 7, - "type_info": "Timestamp" - }, - { - "name": "has_failed", - "ordinal": 8, - "type_info": "Bool" - }, - { - "name": "sent_at_block", - "ordinal": 9, - "type_info": "Int4" - }, - { - "name": "confirmed_eth_tx_history_id", - "ordinal": 10, - "type_info": "Int4" - }, - { - "name": "predicted_gas_cost", - "ordinal": 11, - "type_info": "Int8" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - true, - false, - false, - false, - true, - true, - false - ], - "parameters": { - "Left": [ - "Int4" - ] - } - }, - "query": "SELECT * FROM eth_txs WHERE id = $1" - }, - "fcca1961f34082f7186de607b922fd608166c5af98031e4dcc8a056b89696dbe": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Int8", - "Jsonb" - ] - } - }, - "query": "UPDATE miniblocks SET consensus = $2 WHERE number = $1" - }, - "ff7ff36b86b0e8d1cd7280aa447baef172cb054ffe7e1d742c59bf09b4f414cb": { - "describe": { - "columns": [ - { - "name": "count!", - "ordinal": 0, - "type_info": "Int8" - } - ], - "nullable": [ - null - ], - "parameters": { - "Left": [ - "Int4" - ] - } - }, - "query": "SELECT COUNT(*) as \"count!\" FROM prover_protocol_versions WHERE id = $1" - }, - "ff9c6a53717f0455089e27018e069809891249555e7ee38393927b2b25555fea": { - "describe": { - "columns": [ - { - "name": "number", - "ordinal": 0, - "type_info": "Int8" - }, - { - "name": "l1_tx_count", - "ordinal": 1, - "type_info": "Int4" - }, - { - "name": "l2_tx_count", - "ordinal": 2, - "type_info": "Int4" - }, - { - "name": "timestamp", - "ordinal": 3, - "type_info": "Int8" - }, - { - "name": "is_finished", - "ordinal": 4, - "type_info": "Bool" - }, - { - "name": "fee_account_address", - "ordinal": 5, - "type_info": "Bytea" - }, - { - "name": "l2_to_l1_logs", - "ordinal": 6, - "type_info": "ByteaArray" - }, - { - "name": "l2_to_l1_messages", - "ordinal": 7, - "type_info": "ByteaArray" - }, - { - "name": "bloom", - "ordinal": 8, - "type_info": "Bytea" - }, - { - "name": "priority_ops_onchain_data", - "ordinal": 9, - "type_info": "ByteaArray" - }, - { - "name": "used_contract_hashes", - "ordinal": 10, - "type_info": "Jsonb" - }, - { - "name": "base_fee_per_gas", - "ordinal": 11, - "type_info": "Numeric" - }, - { - "name": "l1_gas_price", - "ordinal": 12, - "type_info": "Int8" - }, - { - "name": "l2_fair_gas_price", - "ordinal": 13, - "type_info": "Int8" - }, - { - "name": "bootloader_code_hash", - "ordinal": 14, - "type_info": "Bytea" - }, - { - "name": "default_aa_code_hash", - "ordinal": 15, - "type_info": "Bytea" - }, - { - "name": "protocol_version", - "ordinal": 16, - "type_info": "Int4" - }, - { - "name": "compressed_state_diffs", - "ordinal": 17, - "type_info": "Bytea" - }, - { - "name": "system_logs", - "ordinal": 18, - "type_info": "ByteaArray" - } - ], - "nullable": [ - false, - false, - false, - false, - false, - false, - false, - false, - false, - false, - false, - false, - false, - false, - true, - true, - true, - true, - false - ], - "parameters": { - "Left": [] - } - }, - "query": "SELECT number, l1_tx_count, l2_tx_count, timestamp, 
is_finished, fee_account_address, l2_to_l1_logs, l2_to_l1_messages, bloom, priority_ops_onchain_data, used_contract_hashes, base_fee_per_gas, l1_gas_price, l2_fair_gas_price, bootloader_code_hash, default_aa_code_hash, protocol_version, compressed_state_diffs, system_logs FROM l1_batches ORDER BY number DESC LIMIT 1" - }, - "ffc30c35b713dbde170c0369d5b9f741523778a3f396bd6fa9bfd1705fb4c8ac": { - "describe": { - "columns": [], - "nullable": [], - "parameters": { - "Left": [ - "Text", - "Int8" - ] - } - }, - "query": "UPDATE proof_compression_jobs_fri SET status = $1, updated_at = now() WHERE l1_batch_number = $2" - } + "db": "PostgreSQL" } \ No newline at end of file diff --git a/core/lib/dal/src/accounts_dal.rs b/core/lib/dal/src/accounts_dal.rs index bbd43c80ac0..a1323bf9517 100644 --- a/core/lib/dal/src/accounts_dal.rs +++ b/core/lib/dal/src/accounts_dal.rs @@ -43,13 +43,24 @@ impl AccountsDal<'_, '_> { .collect(); let rows = sqlx::query!( r#" - SELECT storage.value as "value!", - tokens.l1_address as "l1_address!", tokens.l2_address as "l2_address!", - tokens.symbol as "symbol!", tokens.name as "name!", tokens.decimals as "decimals!", tokens.usd_price as "usd_price?" - FROM storage - INNER JOIN tokens ON - storage.address = tokens.l2_address OR (storage.address = $2 AND tokens.l2_address = $3) - WHERE storage.hashed_key = ANY($1) AND storage.value != $4 + SELECT + storage.value AS "value!", + tokens.l1_address AS "l1_address!", + tokens.l2_address AS "l2_address!", + tokens.symbol AS "symbol!", + tokens.name AS "name!", + tokens.decimals AS "decimals!", + tokens.usd_price AS "usd_price?" + FROM + storage + INNER JOIN tokens ON storage.address = tokens.l2_address + OR ( + storage.address = $2 + AND tokens.l2_address = $3 + ) + WHERE + storage.hashed_key = ANY ($1) + AND storage.value != $4 "#, &hashed_keys, L2_ETH_TOKEN_ADDRESS.as_bytes(), diff --git a/core/lib/dal/src/basic_witness_input_producer_dal.rs b/core/lib/dal/src/basic_witness_input_producer_dal.rs index ac0627a96a0..c0d38516b16 100644 --- a/core/lib/dal/src/basic_witness_input_producer_dal.rs +++ b/core/lib/dal/src/basic_witness_input_producer_dal.rs @@ -1,10 +1,14 @@ -use crate::instrument::InstrumentExt; -use crate::time_utils::{duration_to_naive_time, pg_interval_from_duration}; -use crate::StorageProcessor; -use sqlx::postgres::types::PgInterval; use std::time::{Duration, Instant}; + +use sqlx::postgres::types::PgInterval; use zksync_types::L1BatchNumber; +use crate::{ + instrument::InstrumentExt, + time_utils::{duration_to_naive_time, pg_interval_from_duration}, + StorageProcessor, +}; + #[derive(Debug)] pub struct BasicWitnessInputProducerDal<'a, 'c> { pub(crate) storage: &'a mut StorageProcessor<'c>, @@ -47,10 +51,13 @@ impl BasicWitnessInputProducerDal<'_, '_> { l1_batch_number: L1BatchNumber, ) -> sqlx::Result<()> { sqlx::query!( - "INSERT INTO basic_witness_input_producer_jobs \ - (l1_batch_number, status, created_at, updated_at) \ - VALUES ($1, $2, now(), now()) \ - ON CONFLICT (l1_batch_number) DO NOTHING", + r#" + INSERT INTO + basic_witness_input_producer_jobs (l1_batch_number, status, created_at, updated_at) + VALUES + ($1, $2, NOW(), NOW()) + ON CONFLICT (l1_batch_number) DO NOTHING + "#, l1_batch_number.0 as i64, BasicWitnessInputProducerJobStatus::Queued as BasicWitnessInputProducerJobStatus, ) @@ -66,23 +73,39 @@ impl BasicWitnessInputProducerDal<'_, '_> { &mut self, ) -> sqlx::Result> { let l1_batch_number = sqlx::query!( - "UPDATE basic_witness_input_producer_jobs \ - SET status = $1, \ - attempts = 
attempts + 1, \ - updated_at = now(), \ - processing_started_at = now() \ - WHERE l1_batch_number = ( \ - SELECT l1_batch_number \ - FROM basic_witness_input_producer_jobs \ - WHERE status = $2 OR \ - (status = $1 AND processing_started_at < now() - $4::interval) OR \ - (status = $3 AND attempts < $5) \ - ORDER BY l1_batch_number ASC \ - LIMIT 1 \ - FOR UPDATE \ - SKIP LOCKED \ - ) \ - RETURNING basic_witness_input_producer_jobs.l1_batch_number", + r#" + UPDATE basic_witness_input_producer_jobs + SET + status = $1, + attempts = attempts + 1, + updated_at = NOW(), + processing_started_at = NOW() + WHERE + l1_batch_number = ( + SELECT + l1_batch_number + FROM + basic_witness_input_producer_jobs + WHERE + status = $2 + OR ( + status = $1 + AND processing_started_at < NOW() - $4::INTERVAL + ) + OR ( + status = $3 + AND attempts < $5 + ) + ORDER BY + l1_batch_number ASC + LIMIT + 1 + FOR UPDATE + SKIP LOCKED + ) + RETURNING + basic_witness_input_producer_jobs.l1_batch_number + "#, BasicWitnessInputProducerJobStatus::InProgress as BasicWitnessInputProducerJobStatus, BasicWitnessInputProducerJobStatus::Queued as BasicWitnessInputProducerJobStatus, BasicWitnessInputProducerJobStatus::Failed as BasicWitnessInputProducerJobStatus, @@ -103,8 +126,14 @@ impl BasicWitnessInputProducerDal<'_, '_> { l1_batch_number: L1BatchNumber, ) -> sqlx::Result> { let attempts = sqlx::query!( - "SELECT attempts FROM basic_witness_input_producer_jobs \ - WHERE l1_batch_number = $1", + r#" + SELECT + attempts + FROM + basic_witness_input_producer_jobs + WHERE + l1_batch_number = $1 + "#, l1_batch_number.0 as i64, ) .fetch_optional(self.storage.conn()) @@ -121,12 +150,16 @@ impl BasicWitnessInputProducerDal<'_, '_> { object_path: &str, ) -> sqlx::Result<()> { sqlx::query!( - "UPDATE basic_witness_input_producer_jobs \ - SET status = $1, \ - updated_at = now(), \ - time_taken = $3, \ - input_blob_url = $4 \ - WHERE l1_batch_number = $2", + r#" + UPDATE basic_witness_input_producer_jobs + SET + status = $1, + updated_at = NOW(), + time_taken = $3, + input_blob_url = $4 + WHERE + l1_batch_number = $2 + "#, BasicWitnessInputProducerJobStatus::Successful as BasicWitnessInputProducerJobStatus, l1_batch_number.0 as i64, duration_to_naive_time(started_at.elapsed()), @@ -147,13 +180,19 @@ impl BasicWitnessInputProducerDal<'_, '_> { error: String, ) -> sqlx::Result> { let attempts = sqlx::query!( - "UPDATE basic_witness_input_producer_jobs \ - SET status = $1, \ - updated_at = now(), \ - time_taken = $3, \ - error = $4 \ - WHERE l1_batch_number = $2 AND status != $5 \ - RETURNING basic_witness_input_producer_jobs.attempts", + r#" + UPDATE basic_witness_input_producer_jobs + SET + status = $1, + updated_at = NOW(), + time_taken = $3, + error = $4 + WHERE + l1_batch_number = $2 + AND status != $5 + RETURNING + basic_witness_input_producer_jobs.attempts + "#, BasicWitnessInputProducerJobStatus::Failed as BasicWitnessInputProducerJobStatus, l1_batch_number.0 as i64, duration_to_naive_time(started_at.elapsed()), @@ -173,9 +212,13 @@ impl BasicWitnessInputProducerDal<'_, '_> { /// These functions should only be used for tests. 
impl BasicWitnessInputProducerDal<'_, '_> { pub async fn delete_all_jobs(&mut self) -> sqlx::Result<()> { - sqlx::query!("DELETE FROM basic_witness_input_producer_jobs") - .execute(self.storage.conn()) - .await?; + sqlx::query!( + r#" + DELETE FROM basic_witness_input_producer_jobs + "# + ) + .execute(self.storage.conn()) + .await?; Ok(()) } } diff --git a/core/lib/dal/src/blocks_dal.rs b/core/lib/dal/src/blocks_dal.rs index ca5018ae51e..87195d965ad 100644 --- a/core/lib/dal/src/blocks_dal.rs +++ b/core/lib/dal/src/blocks_dal.rs @@ -7,13 +7,11 @@ use std::{ use anyhow::Context as _; use bigdecimal::{BigDecimal, FromPrimitive, ToPrimitive}; use sqlx::Row; - use zksync_types::{ aggregated_operations::AggregatedActionType, - block::{BlockGasCount, ConsensusBlockFields, L1BatchHeader, MiniblockHeader}, + block::{BlockGasCount, L1BatchHeader, MiniblockHeader}, commitment::{L1BatchMetadata, L1BatchWithMetadata}, - Address, L1BatchNumber, LogQuery, MiniblockNumber, ProtocolVersionId, H256, - MAX_GAS_PER_PUBDATA_BYTE, U256, + Address, L1BatchNumber, LogQuery, MiniblockNumber, ProtocolVersionId, H256, U256, }; use crate::{ @@ -29,50 +27,116 @@ pub struct BlocksDal<'a, 'c> { impl BlocksDal<'_, '_> { pub async fn is_genesis_needed(&mut self) -> sqlx::Result { - let count = sqlx::query!("SELECT COUNT(*) as \"count!\" FROM l1_batches") - .fetch_one(self.storage.conn()) - .await? - .count; + let count = sqlx::query!( + r#" + SELECT + COUNT(*) AS "count!" + FROM + l1_batches + "# + ) + .fetch_one(self.storage.conn()) + .await? + .count; Ok(count == 0) } - pub async fn get_sealed_l1_batch_number(&mut self) -> anyhow::Result { - let number = sqlx::query!( - "SELECT MAX(number) as \"number\" FROM l1_batches WHERE is_finished = TRUE" + pub async fn get_sealed_l1_batch_number(&mut self) -> sqlx::Result> { + let row = sqlx::query!( + r#" + SELECT + MAX(number) AS "number" + FROM + l1_batches + WHERE + is_finished = TRUE + "# ) .instrument("get_sealed_block_number") .report_latency() .fetch_one(self.storage.conn()) - .await? - .number - .context("DAL invocation before genesis")?; + .await?; - Ok(L1BatchNumber(number as u32)) + Ok(row.number.map(|num| L1BatchNumber(num as u32))) } - pub async fn get_sealed_miniblock_number(&mut self) -> sqlx::Result { - let number: i64 = sqlx::query!("SELECT MAX(number) as \"number\" FROM miniblocks") - .instrument("get_sealed_miniblock_number") - .report_latency() - .fetch_one(self.storage.conn()) - .await? - .number - .unwrap_or(0); - Ok(MiniblockNumber(number as u32)) + pub async fn get_sealed_miniblock_number(&mut self) -> sqlx::Result> { + let row = sqlx::query!( + r#" + SELECT + MAX(number) AS "number" + FROM + miniblocks + "# + ) + .instrument("get_sealed_miniblock_number") + .report_latency() + .fetch_one(self.storage.conn()) + .await?; + + Ok(row.number.map(|number| MiniblockNumber(number as u32))) + } + + /// Returns the number of the earliest L1 batch present in the DB, or `None` if there are no L1 batches. 
+ pub async fn get_earliest_l1_batch_number(&mut self) -> sqlx::Result> { + let row = sqlx::query!( + r#" + SELECT + MIN(number) AS "number" + FROM + l1_batches + "# + ) + .instrument("get_earliest_l1_batch_number") + .report_latency() + .fetch_one(self.storage.conn()) + .await?; + + Ok(row.number.map(|num| L1BatchNumber(num as u32))) } pub async fn get_last_l1_batch_number_with_metadata( &mut self, - ) -> anyhow::Result { - let number: i64 = - sqlx::query!("SELECT MAX(number) as \"number\" FROM l1_batches WHERE hash IS NOT NULL") - .instrument("get_last_block_number_with_metadata") - .report_latency() - .fetch_one(self.storage.conn()) - .await? - .number - .context("DAL invocation before genesis")?; - Ok(L1BatchNumber(number as u32)) + ) -> sqlx::Result> { + let row = sqlx::query!( + r#" + SELECT + MAX(number) AS "number" + FROM + l1_batches + WHERE + hash IS NOT NULL + "# + ) + .instrument("get_last_block_number_with_metadata") + .report_latency() + .fetch_one(self.storage.conn()) + .await?; + + Ok(row.number.map(|num| L1BatchNumber(num as u32))) + } + + /// Returns the number of the earliest L1 batch with metadata (= state hash) present in the DB, + /// or `None` if there are no such L1 batches. + pub async fn get_earliest_l1_batch_number_with_metadata( + &mut self, + ) -> sqlx::Result> { + let row = sqlx::query!( + r#" + SELECT + MIN(number) AS "number" + FROM + l1_batches + WHERE + hash IS NOT NULL + "# + ) + .instrument("get_earliest_l1_batch_number_with_metadata") + .report_latency() + .fetch_one(self.storage.conn()) + .await?; + + Ok(row.number.map(|num| L1BatchNumber(num as u32))) } pub async fn get_l1_batches_for_eth_tx_id( @@ -81,16 +145,35 @@ impl BlocksDal<'_, '_> { ) -> sqlx::Result> { let l1_batches = sqlx::query_as!( StorageL1BatchHeader, - "SELECT number, l1_tx_count, l2_tx_count, \ - timestamp, is_finished, fee_account_address, l2_to_l1_logs, l2_to_l1_messages, \ - bloom, priority_ops_onchain_data, \ - used_contract_hashes, base_fee_per_gas, l1_gas_price, \ - l2_fair_gas_price, bootloader_code_hash, default_aa_code_hash, protocol_version, \ - system_logs, compressed_state_diffs \ - FROM l1_batches \ - WHERE eth_commit_tx_id = $1 \ - OR eth_prove_tx_id = $1 \ - OR eth_execute_tx_id = $1", + r#" + SELECT + number, + l1_tx_count, + l2_tx_count, + timestamp, + is_finished, + fee_account_address, + l2_to_l1_logs, + l2_to_l1_messages, + bloom, + priority_ops_onchain_data, + used_contract_hashes, + base_fee_per_gas, + l1_gas_price, + l2_fair_gas_price, + bootloader_code_hash, + default_aa_code_hash, + protocol_version, + system_logs, + compressed_state_diffs, + pubdata_input + FROM + l1_batches + WHERE + eth_commit_tx_id = $1 + OR eth_prove_tx_id = $1 + OR eth_execute_tx_id = $1 + "#, eth_tx_id as i32 ) .instrument("get_l1_batches_for_eth_tx_id") @@ -107,19 +190,54 @@ impl BlocksDal<'_, '_> { ) -> sqlx::Result> { sqlx::query_as!( StorageL1Batch, - "SELECT number, timestamp, is_finished, l1_tx_count, l2_tx_count, fee_account_address, \ - bloom, priority_ops_onchain_data, hash, parent_hash, commitment, compressed_write_logs, \ - compressed_contracts, eth_prove_tx_id, eth_commit_tx_id, eth_execute_tx_id, \ - merkle_root_hash, l2_to_l1_logs, l2_to_l1_messages, \ - used_contract_hashes, compressed_initial_writes, compressed_repeated_writes, \ - l2_l1_compressed_messages, l2_l1_merkle_root, l1_gas_price, l2_fair_gas_price, \ - rollup_last_leaf_index, zkporter_is_available, bootloader_code_hash, \ - default_aa_code_hash, base_fee_per_gas, aux_data_hash, pass_through_data_hash, \ - 
meta_parameters_hash, protocol_version, system_logs, compressed_state_diffs, \ - events_queue_commitment, bootloader_initial_content_commitment \ - FROM l1_batches \ - LEFT JOIN commitments ON commitments.l1_batch_number = l1_batches.number \ - WHERE number = $1", + r#" + SELECT + number, + timestamp, + is_finished, + l1_tx_count, + l2_tx_count, + fee_account_address, + bloom, + priority_ops_onchain_data, + hash, + parent_hash, + commitment, + compressed_write_logs, + compressed_contracts, + eth_prove_tx_id, + eth_commit_tx_id, + eth_execute_tx_id, + merkle_root_hash, + l2_to_l1_logs, + l2_to_l1_messages, + used_contract_hashes, + compressed_initial_writes, + compressed_repeated_writes, + l2_l1_compressed_messages, + l2_l1_merkle_root, + l1_gas_price, + l2_fair_gas_price, + rollup_last_leaf_index, + zkporter_is_available, + bootloader_code_hash, + default_aa_code_hash, + base_fee_per_gas, + aux_data_hash, + pass_through_data_hash, + meta_parameters_hash, + protocol_version, + system_logs, + compressed_state_diffs, + events_queue_commitment, + bootloader_initial_content_commitment, + pubdata_input + FROM + l1_batches + LEFT JOIN commitments ON commitments.l1_batch_number = l1_batches.number + WHERE + number = $1 + "#, number.0 as i64 ) .instrument("get_storage_l1_batch") @@ -134,14 +252,33 @@ impl BlocksDal<'_, '_> { ) -> sqlx::Result> { Ok(sqlx::query_as!( StorageL1BatchHeader, - "SELECT number, l1_tx_count, l2_tx_count, \ - timestamp, is_finished, fee_account_address, l2_to_l1_logs, l2_to_l1_messages, \ - bloom, priority_ops_onchain_data, \ - used_contract_hashes, base_fee_per_gas, l1_gas_price, \ - l2_fair_gas_price, bootloader_code_hash, default_aa_code_hash, protocol_version, \ - compressed_state_diffs, system_logs \ - FROM l1_batches \ - WHERE number = $1", + r#" + SELECT + number, + l1_tx_count, + l2_tx_count, + timestamp, + is_finished, + fee_account_address, + l2_to_l1_logs, + l2_to_l1_messages, + bloom, + priority_ops_onchain_data, + used_contract_hashes, + base_fee_per_gas, + l1_gas_price, + l2_fair_gas_price, + bootloader_code_hash, + default_aa_code_hash, + protocol_version, + compressed_state_diffs, + system_logs, + pubdata_input + FROM + l1_batches + WHERE + number = $1 + "#, number.0 as i64 ) .instrument("get_l1_batch_header") @@ -157,7 +294,14 @@ impl BlocksDal<'_, '_> { number: L1BatchNumber, ) -> anyhow::Result>> { let Some(row) = sqlx::query!( - "SELECT initial_bootloader_heap_content FROM l1_batches WHERE number = $1", + r#" + SELECT + initial_bootloader_heap_content + FROM + l1_batches + WHERE + number = $1 + "#, number.0 as i64 ) .instrument("get_initial_bootloader_heap") @@ -179,7 +323,14 @@ impl BlocksDal<'_, '_> { number: L1BatchNumber, ) -> anyhow::Result>> { let Some(row) = sqlx::query!( - "SELECT storage_refunds FROM l1_batches WHERE number = $1", + r#" + SELECT + storage_refunds + FROM + l1_batches + WHERE + number = $1 + "#, number.0 as i64 ) .instrument("get_storage_refunds") @@ -203,7 +354,14 @@ impl BlocksDal<'_, '_> { number: L1BatchNumber, ) -> anyhow::Result>> { let Some(row) = sqlx::query!( - "SELECT serialized_events_queue FROM events_queue WHERE l1_batch_number = $1", + r#" + SELECT + serialized_events_queue + FROM + events_queue + WHERE + l1_batch_number = $1 + "#, number.0 as i64 ) .instrument("get_events_queue") @@ -229,9 +387,14 @@ impl BlocksDal<'_, '_> { match aggregation_type { AggregatedActionType::Commit => { sqlx::query!( - "UPDATE l1_batches \ - SET eth_commit_tx_id = $1, updated_at = now() \ - WHERE number BETWEEN $2 AND $3", + r#" + 
UPDATE l1_batches + SET + eth_commit_tx_id = $1, + updated_at = NOW() + WHERE + number BETWEEN $2 AND $3 + "#, eth_tx_id as i32, number_range.start().0 as i64, number_range.end().0 as i64 @@ -241,9 +404,14 @@ impl BlocksDal<'_, '_> { } AggregatedActionType::PublishProofOnchain => { sqlx::query!( - "UPDATE l1_batches \ - SET eth_prove_tx_id = $1, updated_at = now() \ - WHERE number BETWEEN $2 AND $3", + r#" + UPDATE l1_batches + SET + eth_prove_tx_id = $1, + updated_at = NOW() + WHERE + number BETWEEN $2 AND $3 + "#, eth_tx_id as i32, number_range.start().0 as i64, number_range.end().0 as i64 @@ -253,9 +421,14 @@ impl BlocksDal<'_, '_> { } AggregatedActionType::Execute => { sqlx::query!( - "UPDATE l1_batches \ - SET eth_execute_tx_id = $1, updated_at = now() \ - WHERE number BETWEEN $2 AND $3", + r#" + UPDATE l1_batches + SET + eth_execute_tx_id = $1, + updated_at = NOW() + WHERE + number BETWEEN $2 AND $3 + "#, eth_tx_id as i32, number_range.start().0 as i64, number_range.end().0 as i64 @@ -274,6 +447,7 @@ impl BlocksDal<'_, '_> { predicted_block_gas: BlockGasCount, events_queue: &[LogQuery], storage_refunds: &[u32], + predicted_circuits: u32, ) -> anyhow::Result<()> { let priority_onchain_data: Vec> = header .priority_ops_onchain_data @@ -290,6 +464,7 @@ impl BlocksDal<'_, '_> { .iter() .map(|log| log.0.to_bytes().to_vec()) .collect::>>(); + let pubdata_input = header.pubdata_input.clone(); // Serialization should always succeed. let initial_bootloader_contents = serde_json::to_value(initial_bootloader_contents) @@ -305,15 +480,68 @@ impl BlocksDal<'_, '_> { let mut transaction = self.storage.start_transaction().await?; sqlx::query!( - "INSERT INTO l1_batches (\ - number, l1_tx_count, l2_tx_count, \ - timestamp, is_finished, fee_account_address, l2_to_l1_logs, l2_to_l1_messages, \ - bloom, priority_ops_onchain_data, \ - predicted_commit_gas_cost, predicted_prove_gas_cost, predicted_execute_gas_cost, \ - initial_bootloader_heap_content, used_contract_hashes, base_fee_per_gas, \ - l1_gas_price, l2_fair_gas_price, bootloader_code_hash, default_aa_code_hash, protocol_version, system_logs, \ - storage_refunds, created_at, updated_at \ - ) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, $15, $16, $17, $18, $19, $20, $21, $22, $23, now(), now())", + r#" + INSERT INTO + l1_batches ( + number, + l1_tx_count, + l2_tx_count, + timestamp, + is_finished, + fee_account_address, + l2_to_l1_logs, + l2_to_l1_messages, + bloom, + priority_ops_onchain_data, + predicted_commit_gas_cost, + predicted_prove_gas_cost, + predicted_execute_gas_cost, + initial_bootloader_heap_content, + used_contract_hashes, + base_fee_per_gas, + l1_gas_price, + l2_fair_gas_price, + bootloader_code_hash, + default_aa_code_hash, + protocol_version, + system_logs, + storage_refunds, + pubdata_input, + predicted_circuits, + created_at, + updated_at + ) + VALUES + ( + $1, + $2, + $3, + $4, + $5, + $6, + $7, + $8, + $9, + $10, + $11, + $12, + $13, + $14, + $15, + $16, + $17, + $18, + $19, + $20, + $21, + $22, + $23, + $24, + $25, + NOW(), + NOW() + ) + "#, header.number.0 as i64, header.l1_tx_count as i32, header.l2_tx_count as i32, @@ -332,23 +560,24 @@ impl BlocksDal<'_, '_> { base_fee_per_gas, header.l1_gas_price as i64, header.l2_fair_gas_price as i64, - header - .base_system_contracts_hashes - .bootloader - .as_bytes(), - header - .base_system_contracts_hashes - .default_aa - .as_bytes(), + header.base_system_contracts_hashes.bootloader.as_bytes(), + header.base_system_contracts_hashes.default_aa.as_bytes(), 
header.protocol_version.map(|v| v as i32), &system_logs, &storage_refunds, + pubdata_input, + predicted_circuits as i32, ) .execute(transaction.conn()) .await?; sqlx::query!( - "INSERT INTO events_queue (l1_batch_number, serialized_events_queue) VALUES ($1, $2)", + r#" + INSERT INTO + events_queue (l1_batch_number, serialized_events_queue) + VALUES + ($1, $2) + "#, header.number.0 as i64, events_queue ) @@ -365,22 +594,40 @@ impl BlocksDal<'_, '_> { ) -> anyhow::Result<()> { let base_fee_per_gas = BigDecimal::from_u64(miniblock_header.base_fee_per_gas) .context("base_fee_per_gas should fit in u64")?; + sqlx::query!( - "INSERT INTO miniblocks ( \ - number, timestamp, hash, l1_tx_count, l2_tx_count, \ - base_fee_per_gas, l1_gas_price, l2_fair_gas_price, gas_per_pubdata_limit, \ - bootloader_code_hash, default_aa_code_hash, protocol_version, \ - virtual_blocks, created_at, updated_at \ - ) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, now(), now())", + r#" + INSERT INTO + miniblocks ( + number, + timestamp, + hash, + l1_tx_count, + l2_tx_count, + base_fee_per_gas, + l1_gas_price, + l2_fair_gas_price, + gas_per_pubdata_limit, + bootloader_code_hash, + default_aa_code_hash, + protocol_version, + virtual_blocks, + fair_pubdata_price, + created_at, + updated_at + ) + VALUES + ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, NOW(), NOW()) + "#, miniblock_header.number.0 as i64, miniblock_header.timestamp as i64, miniblock_header.hash.as_bytes(), miniblock_header.l1_tx_count as i32, miniblock_header.l2_tx_count as i32, base_fee_per_gas, - miniblock_header.l1_gas_price as i64, - miniblock_header.l2_fair_gas_price as i64, - MAX_GAS_PER_PUBDATA_BYTE as i64, + miniblock_header.batch_fee_input.l1_gas_price() as i64, + miniblock_header.batch_fee_input.fair_l2_gas_price() as i64, + miniblock_header.gas_per_pubdata_limit as i64, miniblock_header .base_system_contracts_hashes .bootloader @@ -391,44 +638,41 @@ impl BlocksDal<'_, '_> { .as_bytes(), miniblock_header.protocol_version.map(|v| v as i32), miniblock_header.virtual_blocks as i64, + miniblock_header.batch_fee_input.fair_pubdata_price() as i64, ) .execute(self.storage.conn()) .await?; Ok(()) } - /// Sets consensus-related fields for the specified miniblock. - pub async fn set_miniblock_consensus_fields( - &mut self, - miniblock_number: MiniblockNumber, - consensus: &ConsensusBlockFields, - ) -> anyhow::Result<()> { - let result = sqlx::query!( - "UPDATE miniblocks SET consensus = $2 WHERE number = $1", - miniblock_number.0 as i64, - serde_json::to_value(consensus).unwrap(), - ) - .execute(self.storage.conn()) - .await?; - - anyhow::ensure!( - result.rows_affected() == 1, - "Miniblock #{miniblock_number} is not present in Postgres" - ); - Ok(()) - } pub async fn get_last_sealed_miniblock_header( &mut self, ) -> sqlx::Result> { Ok(sqlx::query_as!( StorageMiniblockHeader, - "SELECT number, timestamp, hash, l1_tx_count, l2_tx_count, \ - base_fee_per_gas, l1_gas_price, l2_fair_gas_price, \ - bootloader_code_hash, default_aa_code_hash, protocol_version, \ - virtual_blocks - FROM miniblocks \ - ORDER BY number DESC \ - LIMIT 1", + r#" + SELECT + number, + timestamp, + hash, + l1_tx_count, + l2_tx_count, + base_fee_per_gas, + l1_gas_price, + l2_fair_gas_price, + gas_per_pubdata_limit, + bootloader_code_hash, + default_aa_code_hash, + protocol_version, + virtual_blocks, + fair_pubdata_price + FROM + miniblocks + ORDER BY + number DESC + LIMIT + 1 + "#, ) .fetch_optional(self.storage.conn()) .await? 
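The `insert_miniblock` hunk above replaces the standalone `l1_gas_price`/`l2_fair_gas_price` header fields with accessors on `miniblock_header.batch_fee_input`, and binds the new `fair_pubdata_price` column from the same source. A minimal sketch of the resulting convention (the `fee_bind_params` helper and any `MiniblockHeader` details beyond the accessors visible in the diff are assumptions, not part of the patch):

    // Hedged sketch: all per-miniblock fee columns now derive from a single
    // `batch_fee_input` value instead of three separately tracked fields.
    fn fee_bind_params(header: &MiniblockHeader) -> (i64, i64, i64) {
        (
            header.batch_fee_input.l1_gas_price() as i64,       // bound to `l1_gas_price`
            header.batch_fee_input.fair_l2_gas_price() as i64,  // bound to `l2_fair_gas_price`
            header.batch_fee_input.fair_pubdata_price() as i64, // bound to `fair_pubdata_price`
        )
    }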
@@ -441,12 +685,27 @@ impl BlocksDal<'_, '_> { ) -> sqlx::Result> { Ok(sqlx::query_as!( StorageMiniblockHeader, - "SELECT number, timestamp, hash, l1_tx_count, l2_tx_count, \ - base_fee_per_gas, l1_gas_price, l2_fair_gas_price, \ - bootloader_code_hash, default_aa_code_hash, protocol_version, \ - virtual_blocks - FROM miniblocks \ - WHERE number = $1", + r#" + SELECT + number, + timestamp, + hash, + l1_tx_count, + l2_tx_count, + base_fee_per_gas, + l1_gas_price, + l2_fair_gas_price, + gas_per_pubdata_limit, + bootloader_code_hash, + default_aa_code_hash, + protocol_version, + virtual_blocks, + fair_pubdata_price + FROM + miniblocks + WHERE + number = $1 + "#, miniblock_number.0 as i64, ) .fetch_optional(self.storage.conn()) @@ -459,9 +718,13 @@ impl BlocksDal<'_, '_> { l1_batch_number: L1BatchNumber, ) -> sqlx::Result<()> { sqlx::query!( - "UPDATE miniblocks \ - SET l1_batch_number = $1 \ - WHERE l1_batch_number IS NULL", + r#" + UPDATE miniblocks + SET + l1_batch_number = $1 + WHERE + l1_batch_number IS NULL + "#, l1_batch_number.0 as i32, ) .execute(self.storage.conn()) @@ -474,14 +737,28 @@ impl BlocksDal<'_, '_> { metadata: &L1BatchMetadata, ) -> sqlx::Result<()> { sqlx::query!( - "UPDATE l1_batches \ - SET hash = $1, merkle_root_hash = $2, commitment = $3, default_aa_code_hash = $4, \ - compressed_repeated_writes = $5, compressed_initial_writes = $6, \ - l2_l1_compressed_messages = $7, l2_l1_merkle_root = $8, \ - zkporter_is_available = $9, bootloader_code_hash = $10, rollup_last_leaf_index = $11, \ - aux_data_hash = $12, pass_through_data_hash = $13, meta_parameters_hash = $14, \ - compressed_state_diffs = $15, updated_at = now() \ - WHERE number = $16", + r#" + UPDATE l1_batches + SET + hash = $1, + merkle_root_hash = $2, + commitment = $3, + default_aa_code_hash = $4, + compressed_repeated_writes = $5, + compressed_initial_writes = $6, + l2_l1_compressed_messages = $7, + l2_l1_merkle_root = $8, + zkporter_is_available = $9, + bootloader_code_hash = $10, + rollup_last_leaf_index = $11, + aux_data_hash = $12, + pass_through_data_hash = $13, + meta_parameters_hash = $14, + compressed_state_diffs = $15, + updated_at = NOW() + WHERE + number = $16 + "#, metadata.root_hash.as_bytes(), metadata.merkle_root_hash.as_bytes(), metadata.commitment.as_bytes(), @@ -514,14 +791,26 @@ impl BlocksDal<'_, '_> { let mut transaction = self.storage.start_transaction().await?; let update_result = sqlx::query!( - "UPDATE l1_batches \ - SET hash = $1, merkle_root_hash = $2, \ - compressed_repeated_writes = $3, compressed_initial_writes = $4, \ - l2_l1_compressed_messages = $5, l2_l1_merkle_root = $6, \ - zkporter_is_available = $7, parent_hash = $8, rollup_last_leaf_index = $9, \ - pass_through_data_hash = $10, meta_parameters_hash = $11, \ - compressed_state_diffs = $12, updated_at = now() \ - WHERE number = $13 AND hash IS NULL", + r#" + UPDATE l1_batches + SET + hash = $1, + merkle_root_hash = $2, + compressed_repeated_writes = $3, + compressed_initial_writes = $4, + l2_l1_compressed_messages = $5, + l2_l1_merkle_root = $6, + zkporter_is_available = $7, + parent_hash = $8, + rollup_last_leaf_index = $9, + pass_through_data_hash = $10, + meta_parameters_hash = $11, + compressed_state_diffs = $12, + updated_at = NOW() + WHERE + number = $13 + AND hash IS NULL + "#, metadata.root_hash.as_bytes(), metadata.merkle_root_hash.as_bytes(), metadata.repeated_writes_compressed, @@ -545,9 +834,13 @@ impl BlocksDal<'_, '_> { if metadata.events_queue_commitment.is_some() || is_pre_boojum { // Save `commitment`, 
`aux_data_hash`, `events_queue_commitment`, `bootloader_initial_content_commitment`. sqlx::query!( - "INSERT INTO commitments (l1_batch_number, events_queue_commitment, bootloader_initial_content_commitment) \ - VALUES ($1, $2, $3) \ - ON CONFLICT (l1_batch_number) DO NOTHING", + r#" + INSERT INTO + commitments (l1_batch_number, events_queue_commitment, bootloader_initial_content_commitment) + VALUES + ($1, $2, $3) + ON CONFLICT (l1_batch_number) DO NOTHING + "#, number.0 as i64, metadata.events_queue_commitment.map(|h| h.0.to_vec()), metadata @@ -561,9 +854,15 @@ impl BlocksDal<'_, '_> { .await?; sqlx::query!( - "UPDATE l1_batches \ - SET commitment = $2, aux_data_hash = $3, updated_at = now() \ - WHERE number = $1", + r#" + UPDATE l1_batches + SET + commitment = $2, + aux_data_hash = $3, + updated_at = NOW() + WHERE + number = $1 + "#, number.0 as i64, metadata.commitment.as_bytes(), metadata.aux_data_hash.as_bytes(), @@ -589,10 +888,18 @@ impl BlocksDal<'_, '_> { // block was already processed. Verify that existing hashes match let matched: i64 = sqlx::query!( - "SELECT COUNT(*) as \"count!\" \ - FROM l1_batches \ - WHERE number = $1 AND hash = $2 AND merkle_root_hash = $3 \ - AND parent_hash = $4 AND l2_l1_merkle_root = $5", + r#" + SELECT + COUNT(*) AS "count!" + FROM + l1_batches + WHERE + number = $1 + AND hash = $2 + AND merkle_root_hash = $3 + AND parent_hash = $4 + AND l2_l1_merkle_root = $5 + "#, number.0 as i64, metadata.root_hash.as_bytes(), metadata.merkle_root_hash.as_bytes(), @@ -624,21 +931,60 @@ impl BlocksDal<'_, '_> { // We can get 0 block for the first transaction let block = sqlx::query_as!( StorageL1Batch, - "SELECT number, timestamp, is_finished, l1_tx_count, l2_tx_count, fee_account_address, \ - bloom, priority_ops_onchain_data, hash, parent_hash, commitment, compressed_write_logs, \ - compressed_contracts, eth_prove_tx_id, eth_commit_tx_id, eth_execute_tx_id, \ - merkle_root_hash, l2_to_l1_logs, l2_to_l1_messages, \ - used_contract_hashes, compressed_initial_writes, compressed_repeated_writes, \ - l2_l1_compressed_messages, l2_l1_merkle_root, l1_gas_price, l2_fair_gas_price, \ - rollup_last_leaf_index, zkporter_is_available, bootloader_code_hash, \ - default_aa_code_hash, base_fee_per_gas, aux_data_hash, pass_through_data_hash, \ - meta_parameters_hash, protocol_version, compressed_state_diffs, \ - system_logs, events_queue_commitment, bootloader_initial_content_commitment - FROM l1_batches \ - LEFT JOIN commitments ON commitments.l1_batch_number = l1_batches.number \ - WHERE number = 0 OR eth_commit_tx_id IS NOT NULL AND commitment IS NOT NULL \ - ORDER BY number DESC \ - LIMIT 1", + r#" + SELECT + number, + timestamp, + is_finished, + l1_tx_count, + l2_tx_count, + fee_account_address, + bloom, + priority_ops_onchain_data, + hash, + parent_hash, + commitment, + compressed_write_logs, + compressed_contracts, + eth_prove_tx_id, + eth_commit_tx_id, + eth_execute_tx_id, + merkle_root_hash, + l2_to_l1_logs, + l2_to_l1_messages, + used_contract_hashes, + compressed_initial_writes, + compressed_repeated_writes, + l2_l1_compressed_messages, + l2_l1_merkle_root, + l1_gas_price, + l2_fair_gas_price, + rollup_last_leaf_index, + zkporter_is_available, + bootloader_code_hash, + default_aa_code_hash, + base_fee_per_gas, + aux_data_hash, + pass_through_data_hash, + meta_parameters_hash, + protocol_version, + compressed_state_diffs, + system_logs, + events_queue_commitment, + bootloader_initial_content_commitment, + pubdata_input + FROM + l1_batches + LEFT JOIN commitments ON 
commitments.l1_batch_number = l1_batches.number + WHERE + number = 0 + OR eth_commit_tx_id IS NOT NULL + AND commitment IS NOT NULL + ORDER BY + number DESC + LIMIT + 1 + "#, ) .instrument("get_last_committed_to_eth_l1_batch") .fetch_one(self.storage.conn()) @@ -658,11 +1004,19 @@ impl BlocksDal<'_, '_> { &mut self, ) -> Result, sqlx::Error> { Ok(sqlx::query!( - "SELECT number FROM l1_batches \ - LEFT JOIN eth_txs_history AS commit_tx \ - ON (l1_batches.eth_commit_tx_id = commit_tx.eth_tx_id) \ - WHERE commit_tx.confirmed_at IS NOT NULL \ - ORDER BY number DESC LIMIT 1" + r#" + SELECT + number + FROM + l1_batches + LEFT JOIN eth_txs_history AS commit_tx ON (l1_batches.eth_commit_tx_id = commit_tx.eth_tx_id) + WHERE + commit_tx.confirmed_at IS NOT NULL + ORDER BY + number DESC + LIMIT + 1 + "# ) .fetch_optional(self.storage.conn()) .await? @@ -672,9 +1026,14 @@ impl BlocksDal<'_, '_> { /// Returns the number of the last L1 batch for which an Ethereum prove tx exists in the database. pub async fn get_last_l1_batch_with_prove_tx(&mut self) -> sqlx::Result { let row = sqlx::query!( - "SELECT COALESCE(MAX(number), 0) AS \"number!\" \ - FROM l1_batches \ - WHERE eth_prove_tx_id IS NOT NULL" + r#" + SELECT + COALESCE(MAX(number), 0) AS "number!" + FROM + l1_batches + WHERE + eth_prove_tx_id IS NOT NULL + "# ) .fetch_one(self.storage.conn()) .await?; @@ -687,8 +1046,14 @@ impl BlocksDal<'_, '_> { l1_batch_number: L1BatchNumber, ) -> sqlx::Result> { let row = sqlx::query!( - "SELECT eth_commit_tx_id FROM l1_batches \ - WHERE number = $1", + r#" + SELECT + eth_commit_tx_id + FROM + l1_batches + WHERE + number = $1 + "#, l1_batch_number.0 as i64 ) .fetch_optional(self.storage.conn()) @@ -702,11 +1067,19 @@ impl BlocksDal<'_, '_> { &mut self, ) -> sqlx::Result> { Ok(sqlx::query!( - "SELECT number FROM l1_batches \ - LEFT JOIN eth_txs_history AS prove_tx \ - ON (l1_batches.eth_prove_tx_id = prove_tx.eth_tx_id) \ - WHERE prove_tx.confirmed_at IS NOT NULL \ - ORDER BY number DESC LIMIT 1" + r#" + SELECT + number + FROM + l1_batches + LEFT JOIN eth_txs_history AS prove_tx ON (l1_batches.eth_prove_tx_id = prove_tx.eth_tx_id) + WHERE + prove_tx.confirmed_at IS NOT NULL + ORDER BY + number DESC + LIMIT + 1 + "# ) .fetch_optional(self.storage.conn()) .await? @@ -718,11 +1091,19 @@ impl BlocksDal<'_, '_> { &mut self, ) -> sqlx::Result> { Ok(sqlx::query!( - "SELECT number FROM l1_batches \ - LEFT JOIN eth_txs_history as execute_tx \ - ON (l1_batches.eth_execute_tx_id = execute_tx.eth_tx_id) \ - WHERE execute_tx.confirmed_at IS NOT NULL \ - ORDER BY number DESC LIMIT 1" + r#" + SELECT + number + FROM + l1_batches + LEFT JOIN eth_txs_history AS execute_tx ON (l1_batches.eth_execute_tx_id = execute_tx.eth_tx_id) + WHERE + execute_tx.confirmed_at IS NOT NULL + ORDER BY + number DESC + LIMIT + 1 + "# ) .fetch_optional(self.storage.conn()) .await? 
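The three confirmed-number lookups above (commit, prove, execute) each join `eth_txs_history` on the corresponding `eth_*_tx_id` and keep only confirmed rows, so their results are naturally ordered: commit >= prove >= execute. A hedged sketch of a consumer computing commit-to-execute lag; the accessor names are assumptions inferred from the query bodies, and the helper itself is illustrative, not part of the patch:

    // Illustrative only: how many batches are committed on L1 but not yet executed.
    async fn commit_execute_lag(dal: &mut BlocksDal<'_, '_>) -> sqlx::Result<u32> {
        let committed = dal.get_number_of_last_l1_batch_committed_on_eth().await?;
        let executed = dal.get_number_of_last_l1_batch_executed_on_eth().await?;
        Ok(match (committed, executed) {
            (Some(c), Some(e)) => c.0.saturating_sub(e.0),
            (Some(c), None) => c.0,
            _ => 0,
        })
    }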
@@ -736,20 +1117,59 @@ impl BlocksDal<'_, '_> { ) -> anyhow::Result> { let raw_batches = sqlx::query_as!( StorageL1Batch, - "SELECT number, timestamp, is_finished, l1_tx_count, l2_tx_count, fee_account_address, \ - bloom, priority_ops_onchain_data, hash, parent_hash, commitment, compressed_write_logs, \ - compressed_contracts, eth_prove_tx_id, eth_commit_tx_id, eth_execute_tx_id, \ - merkle_root_hash, l2_to_l1_logs, l2_to_l1_messages, \ - used_contract_hashes, compressed_initial_writes, compressed_repeated_writes, \ - l2_l1_compressed_messages, l2_l1_merkle_root, l1_gas_price, l2_fair_gas_price, \ - rollup_last_leaf_index, zkporter_is_available, bootloader_code_hash, \ - default_aa_code_hash, base_fee_per_gas, aux_data_hash, pass_through_data_hash, \ - meta_parameters_hash, protocol_version, compressed_state_diffs, \ - system_logs, events_queue_commitment, bootloader_initial_content_commitment \ - FROM l1_batches \ - LEFT JOIN commitments ON commitments.l1_batch_number = l1_batches.number \ - WHERE eth_commit_tx_id IS NOT NULL AND eth_prove_tx_id IS NULL \ - ORDER BY number LIMIT $1", + r#" + SELECT + number, + timestamp, + is_finished, + l1_tx_count, + l2_tx_count, + fee_account_address, + bloom, + priority_ops_onchain_data, + hash, + parent_hash, + commitment, + compressed_write_logs, + compressed_contracts, + eth_prove_tx_id, + eth_commit_tx_id, + eth_execute_tx_id, + merkle_root_hash, + l2_to_l1_logs, + l2_to_l1_messages, + used_contract_hashes, + compressed_initial_writes, + compressed_repeated_writes, + l2_l1_compressed_messages, + l2_l1_merkle_root, + l1_gas_price, + l2_fair_gas_price, + rollup_last_leaf_index, + zkporter_is_available, + bootloader_code_hash, + default_aa_code_hash, + base_fee_per_gas, + aux_data_hash, + pass_through_data_hash, + meta_parameters_hash, + protocol_version, + compressed_state_diffs, + system_logs, + events_queue_commitment, + bootloader_initial_content_commitment, + pubdata_input + FROM + l1_batches + LEFT JOIN commitments ON commitments.l1_batch_number = l1_batches.number + WHERE + eth_commit_tx_id IS NOT NULL + AND eth_prove_tx_id IS NULL + ORDER BY + number + LIMIT + $1 + "#, limit as i32 ) .instrument("get_ready_for_dummy_proof_l1_batches") @@ -783,7 +1203,13 @@ impl BlocksDal<'_, '_> { l1_batch_number: L1BatchNumber, ) -> sqlx::Result<()> { sqlx::query!( - "UPDATE l1_batches SET skip_proof = TRUE WHERE number = $1", + r#" + UPDATE l1_batches + SET + skip_proof = TRUE + WHERE + number = $1 + "#, l1_batch_number.0 as i64 ) .execute(self.storage.conn()) @@ -804,26 +1230,71 @@ impl BlocksDal<'_, '_> { // is used to avoid having gaps in the list of blocks to send dummy proofs for. 
let raw_batches = sqlx::query_as!( StorageL1Batch, - "SELECT number, timestamp, is_finished, l1_tx_count, l2_tx_count, fee_account_address, \ - bloom, priority_ops_onchain_data, hash, parent_hash, commitment, compressed_write_logs, \ - compressed_contracts, eth_prove_tx_id, eth_commit_tx_id, eth_execute_tx_id, \ - merkle_root_hash, l2_to_l1_logs, l2_to_l1_messages, \ - used_contract_hashes, compressed_initial_writes, compressed_repeated_writes, \ - l2_l1_compressed_messages, l2_l1_merkle_root, l1_gas_price, l2_fair_gas_price, \ - rollup_last_leaf_index, zkporter_is_available, bootloader_code_hash, \ - default_aa_code_hash, base_fee_per_gas, aux_data_hash, pass_through_data_hash, \ - meta_parameters_hash, system_logs, compressed_state_diffs, protocol_version, \ - events_queue_commitment, bootloader_initial_content_commitment \ - FROM \ - (SELECT l1_batches.*, row_number() OVER (ORDER BY number ASC) AS row_number \ - FROM l1_batches \ - WHERE eth_commit_tx_id IS NOT NULL \ - AND l1_batches.skip_proof = TRUE \ - AND l1_batches.number > $1 \ - ORDER BY number LIMIT $2\ - ) inn \ - LEFT JOIN commitments ON commitments.l1_batch_number = inn.number \ - WHERE number - row_number = $1", + r#" + SELECT + number, + timestamp, + is_finished, + l1_tx_count, + l2_tx_count, + fee_account_address, + bloom, + priority_ops_onchain_data, + hash, + parent_hash, + commitment, + compressed_write_logs, + compressed_contracts, + eth_prove_tx_id, + eth_commit_tx_id, + eth_execute_tx_id, + merkle_root_hash, + l2_to_l1_logs, + l2_to_l1_messages, + used_contract_hashes, + compressed_initial_writes, + compressed_repeated_writes, + l2_l1_compressed_messages, + l2_l1_merkle_root, + l1_gas_price, + l2_fair_gas_price, + rollup_last_leaf_index, + zkporter_is_available, + bootloader_code_hash, + default_aa_code_hash, + base_fee_per_gas, + aux_data_hash, + pass_through_data_hash, + meta_parameters_hash, + system_logs, + compressed_state_diffs, + protocol_version, + events_queue_commitment, + bootloader_initial_content_commitment, + pubdata_input + FROM + ( + SELECT + l1_batches.*, + ROW_NUMBER() OVER ( + ORDER BY + number ASC + ) AS ROW_NUMBER + FROM + l1_batches + WHERE + eth_commit_tx_id IS NOT NULL + AND l1_batches.skip_proof = TRUE + AND l1_batches.number > $1 + ORDER BY + number + LIMIT + $2 + ) inn + LEFT JOIN commitments ON commitments.l1_batch_number = inn.number + WHERE + number - ROW_NUMBER = $1 + "#, last_proved_block_number.0 as i32, limit as i32 ) @@ -843,28 +1314,69 @@ impl BlocksDal<'_, '_> { max_l1_batch_timestamp_millis: Option, ) -> anyhow::Result> { let raw_batches = match max_l1_batch_timestamp_millis { - None => sqlx::query_as!( - StorageL1Batch, - "SELECT number, timestamp, is_finished, l1_tx_count, l2_tx_count, fee_account_address, \ - bloom, priority_ops_onchain_data, hash, parent_hash, commitment, compressed_write_logs, \ - compressed_contracts, eth_prove_tx_id, eth_commit_tx_id, eth_execute_tx_id, \ - merkle_root_hash, l2_to_l1_logs, l2_to_l1_messages, \ - used_contract_hashes, compressed_initial_writes, compressed_repeated_writes, \ - l2_l1_compressed_messages, l2_l1_merkle_root, l1_gas_price, l2_fair_gas_price, \ - rollup_last_leaf_index, zkporter_is_available, bootloader_code_hash, \ - default_aa_code_hash, base_fee_per_gas, aux_data_hash, pass_through_data_hash, \ - meta_parameters_hash, protocol_version, compressed_state_diffs, \ - system_logs, events_queue_commitment, bootloader_initial_content_commitment \ - FROM l1_batches \ - LEFT JOIN commitments ON commitments.l1_batch_number = 
l1_batches.number \ - WHERE eth_prove_tx_id IS NOT NULL AND eth_execute_tx_id IS NULL \ - ORDER BY number LIMIT $1", - limit as i32, - ) - .instrument("get_ready_for_execute_l1_batches/no_max_timestamp") - .with_arg("limit", &limit) - .fetch_all(self.storage.conn()) - .await?, + None => { + sqlx::query_as!( + StorageL1Batch, + r#" + SELECT + number, + timestamp, + is_finished, + l1_tx_count, + l2_tx_count, + fee_account_address, + bloom, + priority_ops_onchain_data, + hash, + parent_hash, + commitment, + compressed_write_logs, + compressed_contracts, + eth_prove_tx_id, + eth_commit_tx_id, + eth_execute_tx_id, + merkle_root_hash, + l2_to_l1_logs, + l2_to_l1_messages, + used_contract_hashes, + compressed_initial_writes, + compressed_repeated_writes, + l2_l1_compressed_messages, + l2_l1_merkle_root, + l1_gas_price, + l2_fair_gas_price, + rollup_last_leaf_index, + zkporter_is_available, + bootloader_code_hash, + default_aa_code_hash, + base_fee_per_gas, + aux_data_hash, + pass_through_data_hash, + meta_parameters_hash, + protocol_version, + compressed_state_diffs, + system_logs, + events_queue_commitment, + bootloader_initial_content_commitment, + pubdata_input + FROM + l1_batches + LEFT JOIN commitments ON commitments.l1_batch_number = l1_batches.number + WHERE + eth_prove_tx_id IS NOT NULL + AND eth_execute_tx_id IS NULL + ORDER BY + number + LIMIT + $1 + "#, + limit as i32, + ) + .instrument("get_ready_for_execute_l1_batches/no_max_timestamp") + .with_arg("limit", &limit) + .fetch_all(self.storage.conn()) + .await? + } Some(max_l1_batch_timestamp_millis) => { // Do not lose the precision here, otherwise we can skip some L1 batches. @@ -889,9 +1401,19 @@ impl BlocksDal<'_, '_> { // We need to find the first L1 batch that is supposed to be executed. // Here we ignore the time delay, so we just take the first L1 batch that is ready for execution. let row = sqlx::query!( - "SELECT number FROM l1_batches \ - WHERE eth_prove_tx_id IS NOT NULL AND eth_execute_tx_id IS NULL \ - ORDER BY number LIMIT 1" + r#" + SELECT + number + FROM + l1_batches + WHERE + eth_prove_tx_id IS NOT NULL + AND eth_execute_tx_id IS NULL + ORDER BY + number + LIMIT + 1 + "# ) .fetch_optional(self.storage.conn()) .await?; @@ -906,13 +1428,23 @@ impl BlocksDal<'_, '_> { // Find the last L1 batch that is ready for execution. 
let row = sqlx::query!( - "SELECT max(l1_batches.number) FROM l1_batches \ - JOIN eth_txs ON (l1_batches.eth_commit_tx_id = eth_txs.id) \ - JOIN eth_txs_history AS commit_tx ON (eth_txs.confirmed_eth_tx_history_id = commit_tx.id) \ - WHERE commit_tx.confirmed_at IS NOT NULL \ - AND eth_prove_tx_id IS NOT NULL \ - AND eth_execute_tx_id IS NULL \ - AND EXTRACT(epoch FROM commit_tx.confirmed_at) < $1", + r#" + SELECT + MAX(l1_batches.number) + FROM + l1_batches + JOIN eth_txs ON (l1_batches.eth_commit_tx_id = eth_txs.id) + JOIN eth_txs_history AS commit_tx ON (eth_txs.confirmed_eth_tx_history_id = commit_tx.id) + WHERE + commit_tx.confirmed_at IS NOT NULL + AND eth_prove_tx_id IS NOT NULL + AND eth_execute_tx_id IS NULL + AND EXTRACT( + epoch + FROM + commit_tx.confirmed_at + ) < $1 + "#, max_l1_batch_timestamp_seconds_bd, ) .fetch_one(self.storage.conn()) @@ -924,26 +1456,67 @@ impl BlocksDal<'_, '_> { assert!(max_ready_to_send_block >= expected_started_point); sqlx::query_as!( StorageL1Batch, - "SELECT number, timestamp, is_finished, l1_tx_count, l2_tx_count, fee_account_address, \ - bloom, priority_ops_onchain_data, hash, parent_hash, commitment, compressed_write_logs, \ - compressed_contracts, eth_prove_tx_id, eth_commit_tx_id, eth_execute_tx_id, \ - merkle_root_hash, l2_to_l1_logs, l2_to_l1_messages, \ - used_contract_hashes, compressed_initial_writes, compressed_repeated_writes, \ - l2_l1_compressed_messages, l2_l1_merkle_root, l1_gas_price, l2_fair_gas_price, \ - rollup_last_leaf_index, zkporter_is_available, bootloader_code_hash, \ - default_aa_code_hash, base_fee_per_gas, aux_data_hash, pass_through_data_hash, \ - meta_parameters_hash, protocol_version, compressed_state_diffs, \ - system_logs, events_queue_commitment, bootloader_initial_content_commitment \ - FROM l1_batches \ - LEFT JOIN commitments ON commitments.l1_batch_number = l1_batches.number \ - WHERE number BETWEEN $1 AND $2 \ - ORDER BY number LIMIT $3", + r#" + SELECT + number, + timestamp, + is_finished, + l1_tx_count, + l2_tx_count, + fee_account_address, + bloom, + priority_ops_onchain_data, + hash, + parent_hash, + commitment, + compressed_write_logs, + compressed_contracts, + eth_prove_tx_id, + eth_commit_tx_id, + eth_execute_tx_id, + merkle_root_hash, + l2_to_l1_logs, + l2_to_l1_messages, + used_contract_hashes, + compressed_initial_writes, + compressed_repeated_writes, + l2_l1_compressed_messages, + l2_l1_merkle_root, + l1_gas_price, + l2_fair_gas_price, + rollup_last_leaf_index, + zkporter_is_available, + bootloader_code_hash, + default_aa_code_hash, + base_fee_per_gas, + aux_data_hash, + pass_through_data_hash, + meta_parameters_hash, + protocol_version, + compressed_state_diffs, + system_logs, + events_queue_commitment, + bootloader_initial_content_commitment, + pubdata_input + FROM + l1_batches + LEFT JOIN commitments ON commitments.l1_batch_number = l1_batches.number + WHERE + number BETWEEN $1 AND $2 + ORDER BY + number + LIMIT + $3 + "#, expected_started_point as i32, max_ready_to_send_block, limit as i32, ) .instrument("get_ready_for_execute_l1_batches") - .with_arg("numbers", &(expected_started_point..=max_ready_to_send_block)) + .with_arg( + "numbers", + &(expected_started_point..=max_ready_to_send_block), + ) .with_arg("limit", &limit) .fetch_all(self.storage.conn()) .await? 
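The `$1` bound above is `max_l1_batch_timestamp_seconds_bd`, a `BigDecimal` compared against `EXTRACT(epoch FROM commit_tx.confirmed_at)`, which yields fractional seconds. The earlier "do not lose the precision" comment refers to exactly this conversion; a sketch of what it has to look like (the statements are assumed, not quoted from the patch):

    // Keep the fractional part: truncating milliseconds to whole seconds could
    // make a batch confirmed mid-second look too new and be skipped.
    let max_l1_batch_timestamp_seconds = max_l1_batch_timestamp_millis as f64 / 1_000.0;
    let max_l1_batch_timestamp_seconds_bd = BigDecimal::from_f64(max_l1_batch_timestamp_seconds)
        .context("failed to convert max_l1_batch_timestamp_seconds to BigDecimal")?;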
@@ -961,37 +1534,79 @@ impl BlocksDal<'_, '_> { ) -> anyhow::Result> { let raw_batches = sqlx::query_as!( StorageL1Batch, - "SELECT number, l1_batches.timestamp, is_finished, l1_tx_count, l2_tx_count, fee_account_address, \ - bloom, priority_ops_onchain_data, hash, parent_hash, commitment, compressed_write_logs, \ - compressed_contracts, eth_prove_tx_id, eth_commit_tx_id, eth_execute_tx_id, \ - merkle_root_hash, l2_to_l1_logs, l2_to_l1_messages, \ - used_contract_hashes, compressed_initial_writes, compressed_repeated_writes, \ - l2_l1_compressed_messages, l2_l1_merkle_root, l1_gas_price, l2_fair_gas_price, \ - rollup_last_leaf_index, zkporter_is_available, l1_batches.bootloader_code_hash, \ - l1_batches.default_aa_code_hash, base_fee_per_gas, aux_data_hash, pass_through_data_hash, \ - meta_parameters_hash, protocol_version, compressed_state_diffs, \ - system_logs, events_queue_commitment, bootloader_initial_content_commitment \ - FROM l1_batches \ - LEFT JOIN commitments ON commitments.l1_batch_number = l1_batches.number \ - JOIN protocol_versions ON protocol_versions.id = l1_batches.protocol_version \ - WHERE eth_commit_tx_id IS NULL \ - AND number != 0 \ - AND protocol_versions.bootloader_code_hash = $1 AND protocol_versions.default_account_code_hash = $2 \ - AND commitment IS NOT NULL \ - AND (protocol_versions.id = $3 OR protocol_versions.upgrade_tx_hash IS NULL) \ - ORDER BY number LIMIT $4", + r#" + SELECT + number, + l1_batches.timestamp, + is_finished, + l1_tx_count, + l2_tx_count, + fee_account_address, + bloom, + priority_ops_onchain_data, + hash, + parent_hash, + commitment, + compressed_write_logs, + compressed_contracts, + eth_prove_tx_id, + eth_commit_tx_id, + eth_execute_tx_id, + merkle_root_hash, + l2_to_l1_logs, + l2_to_l1_messages, + used_contract_hashes, + compressed_initial_writes, + compressed_repeated_writes, + l2_l1_compressed_messages, + l2_l1_merkle_root, + l1_gas_price, + l2_fair_gas_price, + rollup_last_leaf_index, + zkporter_is_available, + l1_batches.bootloader_code_hash, + l1_batches.default_aa_code_hash, + base_fee_per_gas, + aux_data_hash, + pass_through_data_hash, + meta_parameters_hash, + protocol_version, + compressed_state_diffs, + system_logs, + events_queue_commitment, + bootloader_initial_content_commitment, + pubdata_input + FROM + l1_batches + LEFT JOIN commitments ON commitments.l1_batch_number = l1_batches.number + JOIN protocol_versions ON protocol_versions.id = l1_batches.protocol_version + WHERE + eth_commit_tx_id IS NULL + AND number != 0 + AND protocol_versions.bootloader_code_hash = $1 + AND protocol_versions.default_account_code_hash = $2 + AND commitment IS NOT NULL + AND ( + protocol_versions.id = $3 + OR protocol_versions.upgrade_tx_hash IS NULL + ) + ORDER BY + number + LIMIT + $4 + "#, bootloader_hash.as_bytes(), default_aa_hash.as_bytes(), protocol_version_id as i32, limit as i64, ) - .instrument("get_ready_for_commit_l1_batches") - .with_arg("limit", &limit) - .with_arg("bootloader_hash", &bootloader_hash) - .with_arg("default_aa_hash", &default_aa_hash) - .with_arg("protocol_version_id", &protocol_version_id) - .fetch_all(self.storage.conn()) - .await?; + .instrument("get_ready_for_commit_l1_batches") + .with_arg("limit", &limit) + .with_arg("bootloader_hash", &bootloader_hash) + .with_arg("default_aa_hash", &default_aa_hash) + .with_arg("protocol_version_id", &protocol_version_id) + .fetch_all(self.storage.conn()) + .await?; self.map_l1_batches(raw_batches) .await @@ -1007,26 +1622,69 @@ impl BlocksDal<'_, '_> { ) -> 
anyhow::Result> { let raw_batches = sqlx::query_as!( StorageL1Batch, - "SELECT number, l1_batches.timestamp, is_finished, l1_tx_count, l2_tx_count, fee_account_address, \ - bloom, priority_ops_onchain_data, hash, parent_hash, commitment, compressed_write_logs, \ - compressed_contracts, eth_prove_tx_id, eth_commit_tx_id, eth_execute_tx_id, \ - merkle_root_hash, l2_to_l1_logs, l2_to_l1_messages, \ - used_contract_hashes, compressed_initial_writes, compressed_repeated_writes, \ - l2_l1_compressed_messages, l2_l1_merkle_root, l1_gas_price, l2_fair_gas_price, \ - rollup_last_leaf_index, zkporter_is_available, l1_batches.bootloader_code_hash, \ - l1_batches.default_aa_code_hash, base_fee_per_gas, aux_data_hash, pass_through_data_hash, \ - meta_parameters_hash, protocol_version, compressed_state_diffs, \ - system_logs, events_queue_commitment, bootloader_initial_content_commitment \ - FROM l1_batches \ - LEFT JOIN commitments ON commitments.l1_batch_number = l1_batches.number \ - JOIN protocol_versions ON protocol_versions.id = l1_batches.protocol_version \ - WHERE eth_commit_tx_id IS NULL \ - AND number != 0 \ - AND protocol_versions.bootloader_code_hash = $1 AND protocol_versions.default_account_code_hash = $2 \ - AND commitment IS NOT NULL \ - AND (protocol_versions.id = $3 OR protocol_versions.upgrade_tx_hash IS NULL) \ - AND events_queue_commitment IS NOT NULL AND bootloader_initial_content_commitment IS NOT NULL \ - ORDER BY number LIMIT $4", + r#" + SELECT + number, + l1_batches.timestamp, + is_finished, + l1_tx_count, + l2_tx_count, + fee_account_address, + bloom, + priority_ops_onchain_data, + hash, + parent_hash, + commitment, + compressed_write_logs, + compressed_contracts, + eth_prove_tx_id, + eth_commit_tx_id, + eth_execute_tx_id, + merkle_root_hash, + l2_to_l1_logs, + l2_to_l1_messages, + used_contract_hashes, + compressed_initial_writes, + compressed_repeated_writes, + l2_l1_compressed_messages, + l2_l1_merkle_root, + l1_gas_price, + l2_fair_gas_price, + rollup_last_leaf_index, + zkporter_is_available, + l1_batches.bootloader_code_hash, + l1_batches.default_aa_code_hash, + base_fee_per_gas, + aux_data_hash, + pass_through_data_hash, + meta_parameters_hash, + protocol_version, + compressed_state_diffs, + system_logs, + events_queue_commitment, + bootloader_initial_content_commitment, + pubdata_input + FROM + l1_batches + LEFT JOIN commitments ON commitments.l1_batch_number = l1_batches.number + JOIN protocol_versions ON protocol_versions.id = l1_batches.protocol_version + WHERE + eth_commit_tx_id IS NULL + AND number != 0 + AND protocol_versions.bootloader_code_hash = $1 + AND protocol_versions.default_account_code_hash = $2 + AND commitment IS NOT NULL + AND ( + protocol_versions.id = $3 + OR protocol_versions.upgrade_tx_hash IS NULL + ) + AND events_queue_commitment IS NOT NULL + AND bootloader_initial_content_commitment IS NOT NULL + ORDER BY + number + LIMIT + $4 + "#, bootloader_hash.as_bytes(), default_aa_hash.as_bytes(), protocol_version_id as i32, @@ -1050,7 +1708,14 @@ impl BlocksDal<'_, '_> { number: L1BatchNumber, ) -> sqlx::Result> { Ok(sqlx::query!( - "SELECT hash FROM l1_batches WHERE number = $1", + r#" + SELECT + hash + FROM + l1_batches + WHERE + number = $1 + "#, number.0 as i64 ) .fetch_optional(self.storage.conn()) @@ -1064,7 +1729,15 @@ impl BlocksDal<'_, '_> { number: L1BatchNumber, ) -> Result, sqlx::Error> { let Some(row) = sqlx::query!( - "SELECT timestamp, hash FROM l1_batches WHERE number = $1", + r#" + SELECT + timestamp, + hash + FROM + l1_batches + 
WHERE + number = $1 + "#, number.0 as i64 ) .fetch_optional(self.storage.conn()) @@ -1078,26 +1751,6 @@ impl BlocksDal<'_, '_> { Ok(Some((H256::from_slice(&hash), row.timestamp as u64))) } - pub async fn get_newest_l1_batch_header(&mut self) -> sqlx::Result { - let last_l1_batch = sqlx::query_as!( - StorageL1BatchHeader, - "SELECT number, l1_tx_count, l2_tx_count, \ - timestamp, is_finished, fee_account_address, l2_to_l1_logs, l2_to_l1_messages, \ - bloom, priority_ops_onchain_data, \ - used_contract_hashes, base_fee_per_gas, l1_gas_price, \ - l2_fair_gas_price, bootloader_code_hash, default_aa_code_hash, protocol_version, \ - compressed_state_diffs, system_logs \ - FROM l1_batches \ - ORDER BY number DESC \ - LIMIT 1" - ) - .instrument("get_newest_l1_batch_header") - .fetch_one(self.storage.conn()) - .await?; - - Ok(last_l1_batch.into()) - } - pub async fn get_l1_batch_metadata( &mut self, number: L1BatchNumber, @@ -1139,9 +1792,16 @@ impl BlocksDal<'_, '_> { l1_batch_number: L1BatchNumber, ) -> sqlx::Result>> { Ok(sqlx::query!( - "SELECT bytecode_hash, bytecode FROM factory_deps \ - INNER JOIN miniblocks ON miniblocks.number = factory_deps.miniblock_number \ - WHERE miniblocks.l1_batch_number = $1", + r#" + SELECT + bytecode_hash, + bytecode + FROM + factory_deps + INNER JOIN miniblocks ON miniblocks.number = factory_deps.miniblock_number + WHERE + miniblocks.l1_batch_number = $1 + "#, l1_batch_number.0 as i64 ) .fetch_all(self.storage.conn()) @@ -1164,9 +1824,16 @@ impl BlocksDal<'_, '_> { last_batch_to_keep: Option, ) -> sqlx::Result<()> { let block_number = last_batch_to_keep.map_or(-1, |number| number.0 as i64); - sqlx::query!("DELETE FROM l1_batches WHERE number > $1", block_number) - .execute(self.storage.conn()) - .await?; + sqlx::query!( + r#" + DELETE FROM l1_batches + WHERE + number > $1 + "#, + block_number + ) + .execute(self.storage.conn()) + .await?; Ok(()) } @@ -1184,9 +1851,16 @@ impl BlocksDal<'_, '_> { last_miniblock_to_keep: Option, ) -> sqlx::Result<()> { let block_number = last_miniblock_to_keep.map_or(-1, |number| number.0 as i64); - sqlx::query!("DELETE FROM miniblocks WHERE number > $1", block_number) - .execute(self.storage.conn()) - .await?; + sqlx::query!( + r#" + DELETE FROM miniblocks + WHERE + number > $1 + "#, + block_number + ) + .execute(self.storage.conn()) + .await?; Ok(()) } @@ -1222,9 +1896,14 @@ impl BlocksDal<'_, '_> { predicted_gas_cost: u32, ) -> sqlx::Result<()> { sqlx::query!( - "UPDATE l1_batches \ - SET predicted_commit_gas_cost = $2, updated_at = now() \ - WHERE number = $1", + r#" + UPDATE l1_batches + SET + predicted_commit_gas_cost = $2, + updated_at = NOW() + WHERE + number = $1 + "#, number.0 as i64, predicted_gas_cost as i64 ) @@ -1238,9 +1917,15 @@ impl BlocksDal<'_, '_> { l1_batch_number: L1BatchNumber, ) -> sqlx::Result> { let row = sqlx::query!( - "SELECT MIN(miniblocks.number) as \"min?\", MAX(miniblocks.number) as \"max?\" \ - FROM miniblocks \ - WHERE l1_batch_number = $1", + r#" + SELECT + MIN(miniblocks.number) AS "min?", + MAX(miniblocks.number) AS "max?" 
+ FROM
+ miniblocks
+ WHERE
+ l1_batch_number = $1
+ "#,
 l1_batch_number.0 as i64
 )
 .fetch_one(self.storage.conn())
 .await?;
@@ -1266,71 +1951,23 @@ impl BlocksDal<'_, '_> {
 Ok(count != 0)
 }

- pub async fn get_last_l1_batch_number_with_witness_inputs(
- &mut self,
- ) -> sqlx::Result<L1BatchNumber> {
- let row = sqlx::query!(
- "SELECT MAX(l1_batch_number) FROM witness_inputs \
- WHERE merkel_tree_paths_blob_url IS NOT NULL",
- )
- .fetch_one(self.storage.conn())
- .await?;
-
- Ok(row
- .max
- .map(|l1_batch_number| L1BatchNumber(l1_batch_number as u32))
- .unwrap_or_default())
- }
-
- pub async fn get_l1_batches_with_blobs_in_db(
- &mut self,
- limit: u8,
- ) -> sqlx::Result<Vec<L1BatchNumber>> {
- let rows = sqlx::query!(
- "SELECT l1_batch_number FROM witness_inputs \
- WHERE length(merkle_tree_paths) <> 0 \
- ORDER BY l1_batch_number DESC \
- LIMIT $1",
- limit as i32
- )
- .fetch_all(self.storage.conn())
- .await?;
-
- Ok(rows
- .into_iter()
- .map(|row| L1BatchNumber(row.l1_batch_number as u32))
- .collect())
- }
-
- pub async fn get_merkle_tree_paths_blob_urls_to_be_cleaned(
- &mut self,
- limit: u8,
- ) -> Result<Vec<(i64, String)>, sqlx::Error> {
- let rows = sqlx::query!(
- "SELECT l1_batch_number, merkel_tree_paths_blob_url \
- FROM witness_inputs \
- WHERE status = 'successful' \
- AND merkel_tree_paths_blob_url is NOT NULL \
- AND updated_at < NOW() - INTERVAL '30 days' \
- LIMIT $1",
- limit as i32
- )
- .fetch_all(self.storage.conn())
- .await?;
-
- Ok(rows
- .into_iter()
- .map(|row| (row.l1_batch_number, row.merkel_tree_paths_blob_url.unwrap()))
- .collect())
- }
-
 // methods used for measuring Eth tx stage transition latencies
 // and emitting metrics based on these measured data
 pub async fn oldest_uncommitted_batch_timestamp(&mut self) -> sqlx::Result<Option<u64>> {
 Ok(sqlx::query!(
- "SELECT timestamp FROM l1_batches \
- WHERE eth_commit_tx_id IS NULL AND number > 0 \
- ORDER BY number LIMIT 1",
+ r#"
+ SELECT
+ timestamp
+ FROM
+ l1_batches
+ WHERE
+ eth_commit_tx_id IS NULL
+ AND number > 0
+ ORDER BY
+ number
+ LIMIT
+ 1
+ "#,
 )
 .fetch_optional(self.storage.conn())
 .await?
@@ -1339,9 +1976,19 @@
 pub async fn oldest_unproved_batch_timestamp(&mut self) -> sqlx::Result<Option<u64>> {
 Ok(sqlx::query!(
- "SELECT timestamp FROM l1_batches \
- WHERE eth_prove_tx_id IS NULL AND number > 0 \
- ORDER BY number LIMIT 1",
+ r#"
+ SELECT
+ timestamp
+ FROM
+ l1_batches
+ WHERE
+ eth_prove_tx_id IS NULL
+ AND number > 0
+ ORDER BY
+ number
+ LIMIT
+ 1
+ "#,
 )
 .fetch_optional(self.storage.conn())
 .await?
@@ -1350,9 +1997,19 @@
 pub async fn oldest_unexecuted_batch_timestamp(&mut self) -> Result<Option<u64>, sqlx::Error> {
 Ok(sqlx::query!(
- "SELECT timestamp FROM l1_batches \
- WHERE eth_execute_tx_id IS NULL AND number > 0 \
- ORDER BY number LIMIT 1",
+ r#"
+ SELECT
+ timestamp
+ FROM
+ l1_batches
+ WHERE
+ eth_execute_tx_id IS NULL
+ AND number > 0
+ ORDER BY
+ number
+ LIMIT
+ 1
+ "#,
 )
 .fetch_optional(self.storage.conn())
 .await?
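The `oldest_*_batch_timestamp` accessors above return the timestamp of the oldest L1 batch still waiting on a given Ethereum stage; subtracting it from the current time gives the stage-transition latency the comment refers to. A hedged usage sketch (everything except the accessor call is an assumption; the metrics sink is left abstract):

    // Age in seconds of the oldest batch that still has no commit tx.
    if let Some(batch_ts) = dal.oldest_uncommitted_batch_timestamp().await? {
        let now = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)
            .expect("incorrect system time")
            .as_secs();
        let commit_lag_seconds = now.saturating_sub(batch_ts);
        // report `commit_lag_seconds` to the metrics facade in use
    }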
@@ -1364,7 +2021,14 @@ impl BlocksDal<'_, '_> {
 l1_batch_number: L1BatchNumber,
 ) -> anyhow::Result<Option<ProtocolVersionId>> {
 let Some(row) = sqlx::query!(
- "SELECT protocol_version FROM l1_batches WHERE number = $1",
+ r#"
+ SELECT
+ protocol_version
+ FROM
+ l1_batches
+ WHERE
+ number = $1
+ "#,
 l1_batch_number.0 as i64
 )
 .fetch_optional(self.storage.conn())
@@ -1383,7 +2047,14 @@
 miniblock_number: MiniblockNumber,
 ) -> anyhow::Result<Option<ProtocolVersionId>> {
 let Some(row) = sqlx::query!(
- "SELECT protocol_version FROM miniblocks WHERE number = $1",
+ r#"
+ SELECT
+ protocol_version
+ FROM
+ miniblocks
+ WHERE
+ number = $1
+ "#,
 miniblock_number.0 as i64
 )
 .fetch_optional(self.storage.conn())
@@ -1402,7 +2073,14 @@
 miniblock_number: MiniblockNumber,
 ) -> sqlx::Result<Option<u64>> {
 Ok(sqlx::query!(
- "SELECT timestamp FROM miniblocks WHERE number = $1",
+ r#"
+ SELECT
+ timestamp
+ FROM
+ miniblocks
+ WHERE
+ number = $1
+ "#,
 miniblock_number.0 as i64,
 )
 .fetch_optional(self.storage.conn())
@@ -1415,8 +2093,13 @@
 id: ProtocolVersionId,
 ) -> sqlx::Result<()> {
 sqlx::query!(
- "UPDATE miniblocks SET protocol_version = $1 \
- WHERE l1_batch_number IS NULL",
+ r#"
+ UPDATE miniblocks
+ SET
+ protocol_version = $1
+ WHERE
+ l1_batch_number IS NULL
+ "#,
 id as i32,
 )
 .execute(self.storage.conn())
@@ -1429,8 +2112,15 @@
 l1_batch_number: L1BatchNumber,
 ) -> sqlx::Result<Option<Address>> {
 Ok(sqlx::query!(
- "SELECT fee_account_address FROM l1_batches WHERE number = $1",
- l1_batch_number.0 as u32
+ r#"
+ SELECT
+ fee_account_address
+ FROM
+ l1_batches
+ WHERE
+ number = $1
+ "#,
+ l1_batch_number.0 as i32
 )
 .fetch_optional(self.storage.conn())
 .await?
@@ -1442,8 +2132,15 @@
 miniblock_number: MiniblockNumber,
 ) -> sqlx::Result<Option<u32>> {
 Ok(sqlx::query!(
- "SELECT virtual_blocks FROM miniblocks WHERE number = $1",
- miniblock_number.0 as u32
+ r#"
+ SELECT
+ virtual_blocks
+ FROM
+ miniblocks
+ WHERE
+ number = $1
+ "#,
+ miniblock_number.0 as i32
 )
 .fetch_optional(self.storage.conn())
 .await?
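The two `as u32` -> `as i32` bind-parameter fixes above deserve a note: Postgres has no unsigned integer types, so `sqlx::query!` type-checks a placeholder for a signed integer column against the matching signed Rust type, and an unsigned value must be cast first. A minimal illustration (the column width is inferred from the cast in the diff, not stated by the patch):

    // `l1_batch_number.0` is a u32; an INT4 placeholder binds as i32, hence the cast.
    let l1_batch_number: u32 = 42;
    let bound: i32 = l1_batch_number as i32;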
@@ -1460,7 +2157,13 @@ impl BlocksDal<'_, '_> { hash: H256, ) -> sqlx::Result<()> { sqlx::query!( - "UPDATE l1_batches SET hash = $1 WHERE number = $2", + r#" + UPDATE l1_batches + SET + hash = $1 + WHERE + number = $2 + "#, hash.as_bytes(), batch_num.0 as i64 ) @@ -1528,7 +2231,7 @@ mod tests { header.l2_to_l1_messages.push(vec![33; 33]); conn.blocks_dal() - .insert_l1_batch(&header, &[], BlockGasCount::default(), &[], &[]) + .insert_l1_batch(&header, &[], BlockGasCount::default(), &[], &[], 0) .await .unwrap(); @@ -1577,7 +2280,7 @@ mod tests { execute: 10, }; conn.blocks_dal() - .insert_l1_batch(&header, &[], predicted_gas, &[], &[]) + .insert_l1_batch(&header, &[], predicted_gas, &[], &[], 0) .await .unwrap(); @@ -1585,7 +2288,7 @@ mod tests { header.timestamp += 100; predicted_gas += predicted_gas; conn.blocks_dal() - .insert_l1_batch(&header, &[], predicted_gas, &[], &[]) + .insert_l1_batch(&header, &[], predicted_gas, &[], &[], 0) .await .unwrap(); diff --git a/core/lib/dal/src/blocks_web3_dal.rs b/core/lib/dal/src/blocks_web3_dal.rs index e42a645966f..9f9a9d964af 100644 --- a/core/lib/dal/src/blocks_web3_dal.rs +++ b/core/lib/dal/src/blocks_web3_dal.rs @@ -1,6 +1,5 @@ use bigdecimal::BigDecimal; use sqlx::Row; - use zksync_system_constants::EMPTY_UNCLES_HASH; use zksync_types::{ api, @@ -13,14 +12,17 @@ use zksync_types::{ }; use zksync_utils::bigdecimal_to_u256; -use crate::models::{ - storage_block::{ - bind_block_where_sql_params, web3_block_number_to_sql, web3_block_where_sql, - StorageBlockDetails, StorageL1BatchDetails, +use crate::{ + instrument::InstrumentExt, + models::{ + storage_block::{ + bind_block_where_sql_params, web3_block_number_to_sql, web3_block_where_sql, + ResolvedL1BatchForMiniblock, StorageBlockDetails, StorageL1BatchDetails, + }, + storage_transaction::{extract_web3_transaction, web3_transaction_select_sql, CallTrace}, }, - storage_transaction::{extract_web3_transaction, web3_transaction_select_sql, CallTrace}, + StorageProcessor, }; -use crate::{instrument::InstrumentExt, StorageProcessor}; const BLOCK_GAS_LIMIT: u32 = system_params::VM_INITIAL_FRAME_ERGS; @@ -30,28 +32,6 @@ pub struct BlocksWeb3Dal<'a, 'c> { } impl BlocksWeb3Dal<'_, '_> { - pub async fn get_sealed_miniblock_number(&mut self) -> sqlx::Result { - let number = sqlx::query!("SELECT MAX(number) as \"number\" FROM miniblocks") - .instrument("get_sealed_block_number") - .report_latency() - .fetch_one(self.storage.conn()) - .await? - .number - .expect("DAL invocation before genesis"); - Ok(MiniblockNumber(number as u32)) - } - - pub async fn get_sealed_l1_batch_number(&mut self) -> sqlx::Result { - let number = sqlx::query!("SELECT MAX(number) as \"number\" FROM l1_batches") - .instrument("get_sealed_block_number") - .report_latency() - .fetch_one(self.storage.conn()) - .await? - .number - .expect("DAL invocation before genesis"); - Ok(L1BatchNumber(number as u32)) - } - pub async fn get_block_by_web3_block_id( &mut self, block_id: api::BlockId, @@ -161,17 +141,26 @@ impl BlocksWeb3Dal<'_, '_> { })) } - /// Returns hashes of blocks with numbers greater than `from_block` and the number of the last block. - pub async fn get_block_hashes_after( + /// Returns hashes of blocks with numbers starting from `from_block` and the number of the last block. 
+ pub async fn get_block_hashes_since(
 &mut self,
 from_block: MiniblockNumber,
 limit: usize,
 ) -> sqlx::Result<(Vec<H256>, Option<MiniblockNumber>)> {
 let rows = sqlx::query!(
- "SELECT number, hash FROM miniblocks \
- WHERE number > $1 \
- ORDER BY number ASC \
- LIMIT $2",
+ r#"
+ SELECT
+ number,
+ hash
+ FROM
+ miniblocks
+ WHERE
+ number >= $1
+ ORDER BY
+ number ASC
+ LIMIT
+ $2
+ "#,
 from_block.0 as i64,
 limit as i32
 )
@@ -189,10 +178,18 @@
 from_block: MiniblockNumber,
 ) -> sqlx::Result<Vec<api::BlockHeader>> {
 let rows = sqlx::query!(
- "SELECT hash, number, timestamp \
- FROM miniblocks \
- WHERE number > $1 \
- ORDER BY number ASC",
+ r#"
+ SELECT
+ hash,
+ number,
+ timestamp
+ FROM
+ miniblocks
+ WHERE
+ number > $1
+ ORDER BY
+ number ASC
+ "#,
 from_block.0 as i64,
 )
 .fetch_all(self.storage.conn())
 .await?;
@@ -225,21 +222,26 @@
 &mut self,
 block_id: api::BlockId,
 ) -> sqlx::Result<Option<MiniblockNumber>> {
- let query_string = match block_id {
- api::BlockId::Hash(_) => "SELECT number FROM miniblocks WHERE hash = $1".to_owned(),
+ let query_string;
+ let query_str = match block_id {
+ api::BlockId::Hash(_) => "SELECT number FROM miniblocks WHERE hash = $1",
 api::BlockId::Number(api::BlockNumber::Number(_)) => {
 // The reason why we use a query instead of returning the `block_number` directly is
- // to handle numbers of blocks that are not created yet.
- // the `SELECT number FROM miniblocks WHERE number=block_number` for
- // non-existing block number will returns zero.
- "SELECT number FROM miniblocks WHERE number = $1".to_owned()
+ // to handle numbers of blocks that are not created yet or were pruned.
+ // The query below will return NULL for non-existing block numbers.
+ "SELECT number FROM miniblocks WHERE number = $1"
 }
 api::BlockId::Number(api::BlockNumber::Earliest) => {
- return Ok(Some(MiniblockNumber(0)));
+ // Similarly to `BlockNumber::Number`, we may be missing the earliest block
+ // if the storage was recovered from a snapshot.
+ "SELECT number FROM miniblocks WHERE number = 0"
+ }
+ api::BlockId::Number(block_number) => {
+ query_string = web3_block_number_to_sql(block_number);
+ &query_string
 }
- api::BlockId::Number(block_number) => web3_block_number_to_sql(block_number),
 };
- let row = bind_block_where_sql_params(&block_id, sqlx::query(&query_string))
+ let row = bind_block_where_sql_params(&block_id, sqlx::query(query_str))
 .fetch_optional(self.storage.conn())
 .await?;
@@ -250,25 +252,33 @@
 }

 /// Returns L1 batch timestamp for either sealed or pending L1 batch.
+ ///
+ /// The correctness of the current implementation depends on the timestamp of an L1 batch always
+ /// being equal to the timestamp of the first miniblock in the batch.
 pub async fn get_expected_l1_batch_timestamp(
 &mut self,
- l1_batch_number: L1BatchNumber,
+ l1_batch_number: &ResolvedL1BatchForMiniblock,
 ) -> sqlx::Result<Option<u64>> {
- let first_miniblock_of_batch = if l1_batch_number.0 == 0 {
- MiniblockNumber(0)
- } else {
- match self
- .get_miniblock_range_of_l1_batch(l1_batch_number - 1)
- .await?
- { - Some((_, miniblock_number)) => miniblock_number + 1, - None => return Ok(None), - } - }; let timestamp = sqlx::query!( - "SELECT timestamp FROM miniblocks \ - WHERE number = $1", - first_miniblock_of_batch.0 as i64 + r#" + SELECT + timestamp + FROM + miniblocks + WHERE + ( + $1::BIGINT IS NULL + AND l1_batch_number IS NULL + ) + OR (l1_batch_number = $1::BIGINT) + ORDER BY + number + LIMIT + 1 + "#, + l1_batch_number + .miniblock_l1_batch + .map(|number| i64::from(number.0)) ) .fetch_optional(self.storage.conn()) .await? @@ -281,7 +291,14 @@ impl BlocksWeb3Dal<'_, '_> { block_number: MiniblockNumber, ) -> sqlx::Result> { let hash = sqlx::query!( - "SELECT hash FROM miniblocks WHERE number = $1", + r#" + SELECT + hash + FROM + miniblocks + WHERE + number = $1 + "#, block_number.0 as i64 ) .fetch_optional(self.storage.conn()) @@ -295,7 +312,14 @@ impl BlocksWeb3Dal<'_, '_> { block_number: L1BatchNumber, ) -> sqlx::Result> { let raw_logs = sqlx::query!( - "SELECT l2_to_l1_logs FROM l1_batches WHERE number = $1", + r#" + SELECT + l2_to_l1_logs + FROM + l1_batches + WHERE + number = $1 + "#, block_number.0 as i64 ) .fetch_optional(self.storage.conn()) @@ -314,7 +338,14 @@ impl BlocksWeb3Dal<'_, '_> { miniblock_number: MiniblockNumber, ) -> sqlx::Result> { let number: Option = sqlx::query!( - "SELECT l1_batch_number FROM miniblocks WHERE number = $1", + r#" + SELECT + l1_batch_number + FROM + miniblocks + WHERE + number = $1 + "#, miniblock_number.0 as i64 ) .fetch_optional(self.storage.conn()) @@ -329,9 +360,15 @@ impl BlocksWeb3Dal<'_, '_> { l1_batch_number: L1BatchNumber, ) -> sqlx::Result> { let row = sqlx::query!( - "SELECT MIN(miniblocks.number) as \"min?\", MAX(miniblocks.number) as \"max?\" \ - FROM miniblocks \ - WHERE l1_batch_number = $1", + r#" + SELECT + MIN(miniblocks.number) AS "min?", + MAX(miniblocks.number) AS "max?" 
+ FROM + miniblocks + WHERE + l1_batch_number = $1 + "#, l1_batch_number.0 as i64 ) .fetch_one(self.storage.conn()) @@ -351,9 +388,15 @@ impl BlocksWeb3Dal<'_, '_> { tx_hash: H256, ) -> sqlx::Result> { let row = sqlx::query!( - "SELECT l1_batch_number, l1_batch_tx_index \ - FROM transactions \ - WHERE hash = $1", + r#" + SELECT + l1_batch_number, + l1_batch_tx_index + FROM + transactions + WHERE + hash = $1 + "#, tx_hash.as_bytes() ) .fetch_optional(self.storage.conn()) @@ -375,8 +418,21 @@ impl BlocksWeb3Dal<'_, '_> { ) -> sqlx::Result> { Ok(sqlx::query_as!( CallTrace, - "SELECT * FROM call_traces WHERE tx_hash IN \ - (SELECT hash FROM transactions WHERE miniblock_number = $1)", + r#" + SELECT + * + FROM + call_traces + WHERE + tx_hash IN ( + SELECT + hash + FROM + transactions + WHERE + miniblock_number = $1 + ) + "#, block_number.0 as i64 ) .fetch_all(self.storage.conn()) @@ -394,9 +450,18 @@ impl BlocksWeb3Dal<'_, '_> { block_count: u64, ) -> sqlx::Result> { let result: Vec<_> = sqlx::query!( - "SELECT base_fee_per_gas FROM miniblocks \ - WHERE number <= $1 \ - ORDER BY number DESC LIMIT $2", + r#" + SELECT + base_fee_per_gas + FROM + miniblocks + WHERE + number <= $1 + ORDER BY + number DESC + LIMIT + $2 + "#, newest_block.0 as i64, block_count as i64 ) @@ -418,30 +483,50 @@ impl BlocksWeb3Dal<'_, '_> { let storage_block_details = sqlx::query_as!( StorageBlockDetails, r#" - SELECT miniblocks.number, - COALESCE(miniblocks.l1_batch_number, (SELECT (max(number) + 1) FROM l1_batches)) as "l1_batch_number!", - miniblocks.timestamp, - miniblocks.l1_tx_count, - miniblocks.l2_tx_count, - miniblocks.hash as "root_hash?", - commit_tx.tx_hash as "commit_tx_hash?", - commit_tx.confirmed_at as "committed_at?", - prove_tx.tx_hash as "prove_tx_hash?", - prove_tx.confirmed_at as "proven_at?", - execute_tx.tx_hash as "execute_tx_hash?", - execute_tx.confirmed_at as "executed_at?", - miniblocks.l1_gas_price, - miniblocks.l2_fair_gas_price, - miniblocks.bootloader_code_hash, - miniblocks.default_aa_code_hash, - miniblocks.protocol_version, - l1_batches.fee_account_address as "fee_account_address?" - FROM miniblocks + SELECT + miniblocks.number, + COALESCE( + miniblocks.l1_batch_number, + ( + SELECT + (MAX(number) + 1) + FROM + l1_batches + ) + ) AS "l1_batch_number!", + miniblocks.timestamp, + miniblocks.l1_tx_count, + miniblocks.l2_tx_count, + miniblocks.hash AS "root_hash?", + commit_tx.tx_hash AS "commit_tx_hash?", + commit_tx.confirmed_at AS "committed_at?", + prove_tx.tx_hash AS "prove_tx_hash?", + prove_tx.confirmed_at AS "proven_at?", + execute_tx.tx_hash AS "execute_tx_hash?", + execute_tx.confirmed_at AS "executed_at?", + miniblocks.l1_gas_price, + miniblocks.l2_fair_gas_price, + miniblocks.bootloader_code_hash, + miniblocks.default_aa_code_hash, + miniblocks.protocol_version, + l1_batches.fee_account_address AS "fee_account_address?" 
+ FROM + miniblocks LEFT JOIN l1_batches ON miniblocks.l1_batch_number = l1_batches.number - LEFT JOIN eth_txs_history as commit_tx ON (l1_batches.eth_commit_tx_id = commit_tx.eth_tx_id AND commit_tx.confirmed_at IS NOT NULL) - LEFT JOIN eth_txs_history as prove_tx ON (l1_batches.eth_prove_tx_id = prove_tx.eth_tx_id AND prove_tx.confirmed_at IS NOT NULL) - LEFT JOIN eth_txs_history as execute_tx ON (l1_batches.eth_execute_tx_id = execute_tx.eth_tx_id AND execute_tx.confirmed_at IS NOT NULL) - WHERE miniblocks.number = $1 + LEFT JOIN eth_txs_history AS commit_tx ON ( + l1_batches.eth_commit_tx_id = commit_tx.eth_tx_id + AND commit_tx.confirmed_at IS NOT NULL + ) + LEFT JOIN eth_txs_history AS prove_tx ON ( + l1_batches.eth_prove_tx_id = prove_tx.eth_tx_id + AND prove_tx.confirmed_at IS NOT NULL + ) + LEFT JOIN eth_txs_history AS execute_tx ON ( + l1_batches.eth_execute_tx_id = execute_tx.eth_tx_id + AND execute_tx.confirmed_at IS NOT NULL + ) + WHERE + miniblocks.number = $1 "#, block_number.0 as i64 ) @@ -465,26 +550,38 @@ impl BlocksWeb3Dal<'_, '_> { let l1_batch_details: Option = sqlx::query_as!( StorageL1BatchDetails, r#" - SELECT l1_batches.number, - l1_batches.timestamp, - l1_batches.l1_tx_count, - l1_batches.l2_tx_count, - l1_batches.hash as "root_hash?", - commit_tx.tx_hash as "commit_tx_hash?", - commit_tx.confirmed_at as "committed_at?", - prove_tx.tx_hash as "prove_tx_hash?", - prove_tx.confirmed_at as "proven_at?", - execute_tx.tx_hash as "execute_tx_hash?", - execute_tx.confirmed_at as "executed_at?", - l1_batches.l1_gas_price, - l1_batches.l2_fair_gas_price, - l1_batches.bootloader_code_hash, - l1_batches.default_aa_code_hash - FROM l1_batches - LEFT JOIN eth_txs_history as commit_tx ON (l1_batches.eth_commit_tx_id = commit_tx.eth_tx_id AND commit_tx.confirmed_at IS NOT NULL) - LEFT JOIN eth_txs_history as prove_tx ON (l1_batches.eth_prove_tx_id = prove_tx.eth_tx_id AND prove_tx.confirmed_at IS NOT NULL) - LEFT JOIN eth_txs_history as execute_tx ON (l1_batches.eth_execute_tx_id = execute_tx.eth_tx_id AND execute_tx.confirmed_at IS NOT NULL) - WHERE l1_batches.number = $1 + SELECT + l1_batches.number, + l1_batches.timestamp, + l1_batches.l1_tx_count, + l1_batches.l2_tx_count, + l1_batches.hash AS "root_hash?", + commit_tx.tx_hash AS "commit_tx_hash?", + commit_tx.confirmed_at AS "committed_at?", + prove_tx.tx_hash AS "prove_tx_hash?", + prove_tx.confirmed_at AS "proven_at?", + execute_tx.tx_hash AS "execute_tx_hash?", + execute_tx.confirmed_at AS "executed_at?", + l1_batches.l1_gas_price, + l1_batches.l2_fair_gas_price, + l1_batches.bootloader_code_hash, + l1_batches.default_aa_code_hash + FROM + l1_batches + LEFT JOIN eth_txs_history AS commit_tx ON ( + l1_batches.eth_commit_tx_id = commit_tx.eth_tx_id + AND commit_tx.confirmed_at IS NOT NULL + ) + LEFT JOIN eth_txs_history AS prove_tx ON ( + l1_batches.eth_prove_tx_id = prove_tx.eth_tx_id + AND prove_tx.confirmed_at IS NOT NULL + ) + LEFT JOIN eth_txs_history AS execute_tx ON ( + l1_batches.eth_execute_tx_id = execute_tx.eth_tx_id + AND execute_tx.confirmed_at IS NOT NULL + ) + WHERE + l1_batches.number = $1 "#, l1_batch_number.0 as i64 ) @@ -497,96 +594,13 @@ impl BlocksWeb3Dal<'_, '_> { Ok(l1_batch_details.map(api::L1BatchDetails::from)) } } - - pub async fn get_miniblock_for_virtual_block_from( - &mut self, - migration_start_l1_batch_number: u64, - from_virtual_block_number: u64, - ) -> sqlx::Result> { - // Since virtual blocks are numerated from `migration_start_l1_batch_number` number and not from 0 - // we have to 
subtract (migration_start_l1_batch_number - 1) from the `from` virtual block - // to find miniblock using query below - let virtual_block_offset = from_virtual_block_number - migration_start_l1_batch_number + 1; - - // In the query below `virtual_block_sum` is actually latest virtual block number, created within this miniblock - // and that can be calculated as sum of all virtual blocks counts, created in previous miniblocks. - // It is considered that all logs are created in the last virtual block of this miniblock, - // that's why we are interested in funding it. - // The goal of this query is to find the first miniblock, which contains given virtual block. - let record = sqlx::query!( - "SELECT number \ - FROM ( \ - SELECT number, sum(virtual_blocks) OVER(ORDER BY number) AS virtual_block_sum \ - FROM miniblocks \ - WHERE l1_batch_number >= $1 \ - ) AS vts \ - WHERE virtual_block_sum >= $2 \ - ORDER BY number LIMIT 1", - migration_start_l1_batch_number as i64, - virtual_block_offset as i64 - ) - .instrument("get_miniblock_for_virtual_block_from") - .with_arg( - "migration_start_l1_batch_number", - &migration_start_l1_batch_number, - ) - .report_latency() - .fetch_optional(self.storage.conn()) - .await?; - - let result = record.map(|row| row.number as u32); - - Ok(result) - } - - pub async fn get_miniblock_for_virtual_block_to( - &mut self, - migration_start_l1_batch_number: u64, - to_virtual_block_number: u64, - ) -> sqlx::Result> { - // Since virtual blocks are numerated from `migration_start_l1_batch_number` number and not from 0 - // we have to subtract (migration_start_l1_batch_number - 1) from the `to` virtual block - // to find miniblock using query below - let virtual_block_offset = to_virtual_block_number - migration_start_l1_batch_number + 1; - - // In the query below `virtual_block_sum` is actually latest virtual block number, created within this miniblock - // and that can be calculated as sum of all virtual blocks counts, created in previous miniblocks. - // It is considered that all logs are created in the last virtual block of this miniblock, - // that's why we are interested in funding it. - // The goal of this query is to find the last miniblock, that contains logs all logs(in the last virtual block), - // created before or in a given virtual block. 
- let record = sqlx::query!( - "SELECT number \ - FROM ( \ - SELECT number, sum(virtual_blocks) OVER(ORDER BY number) AS virtual_block_sum \ - FROM miniblocks \ - WHERE l1_batch_number >= $1 \ - ) AS vts \ - WHERE virtual_block_sum <= $2 \ - ORDER BY number DESC LIMIT 1", - migration_start_l1_batch_number as i64, - virtual_block_offset as i64 - ) - .instrument("get_miniblock_for_virtual_block_to") - .with_arg( - "migration_start_l1_batch_number", - &migration_start_l1_batch_number, - ) - .report_latency() - .fetch_optional(self.storage.conn()) - .await?; - - let result = record.map(|row| row.number as u32); - - Ok(result) - } } #[cfg(test)] mod tests { - use zksync_contracts::BaseSystemContractsHashes; use zksync_types::{ - block::{miniblock_hash, MiniblockHeader}, + block::{MiniblockHasher, MiniblockHeader}, + snapshots::SnapshotRecoveryStatus, MiniblockNumber, ProtocolVersion, ProtocolVersionId, }; @@ -611,16 +625,13 @@ mod tests { }; conn.blocks_dal().insert_miniblock(&header).await.unwrap(); + let block_hash = MiniblockHasher::new(MiniblockNumber(0), 0, H256::zero()) + .finalize(ProtocolVersionId::latest()); let block_ids = [ api::BlockId::Number(api::BlockNumber::Earliest), api::BlockId::Number(api::BlockNumber::Latest), api::BlockId::Number(api::BlockNumber::Number(0.into())), - api::BlockId::Hash(miniblock_hash( - MiniblockNumber(0), - 0, - H256::zero(), - H256::zero(), - )), + api::BlockId::Hash(block_hash), ]; for block_id in block_ids { let block = conn @@ -630,24 +641,18 @@ mod tests { let block = block.unwrap().unwrap(); assert!(block.transactions.is_empty()); assert_eq!(block.number, U64::zero()); - assert_eq!( - block.hash, - miniblock_hash(MiniblockNumber(0), 0, H256::zero(), H256::zero()) - ); + assert_eq!(block.hash, block_hash); let tx_count = conn.blocks_web3_dal().get_block_tx_count(block_id).await; assert_eq!(tx_count.unwrap(), Some((MiniblockNumber(0), 8.into()))); } + let non_existing_block_hash = MiniblockHasher::new(MiniblockNumber(1), 1, H256::zero()) + .finalize(ProtocolVersionId::latest()); let non_existing_block_ids = [ api::BlockId::Number(api::BlockNumber::Pending), api::BlockId::Number(api::BlockNumber::Number(1.into())), - api::BlockId::Hash(miniblock_hash( - MiniblockNumber(1), - 1, - H256::zero(), - H256::zero(), - )), + api::BlockId::Hash(non_existing_block_hash), ]; for block_id in non_existing_block_ids { let block = conn @@ -665,8 +670,18 @@ mod tests { async fn resolving_earliest_block_id() { let connection_pool = ConnectionPool::test_pool().await; let mut conn = connection_pool.access_storage().await.unwrap(); + + let miniblock_number = conn + .blocks_web3_dal() + .resolve_block_id(api::BlockId::Number(api::BlockNumber::Earliest)) + .await; + assert_eq!(miniblock_number.unwrap(), None); + + conn.protocol_versions_dal() + .save_protocol_version_with_tx(ProtocolVersion::default()) + .await; conn.blocks_dal() - .delete_miniblocks(MiniblockNumber(0)) + .insert_miniblock(&create_miniblock_header(0)) .await .unwrap(); @@ -681,13 +696,23 @@ mod tests { async fn resolving_latest_block_id() { let connection_pool = ConnectionPool::test_pool().await; let mut conn = connection_pool.access_storage().await.unwrap(); - conn.blocks_dal() - .delete_miniblocks(MiniblockNumber(0)) - .await - .unwrap(); conn.protocol_versions_dal() .save_protocol_version_with_tx(ProtocolVersion::default()) .await; + + let miniblock_number = conn + .blocks_web3_dal() + .resolve_block_id(api::BlockId::Number(api::BlockNumber::Latest)) + .await + .unwrap(); + assert_eq!(miniblock_number, 
None); + let miniblock_number = conn + .blocks_web3_dal() + .resolve_block_id(api::BlockId::Number(api::BlockNumber::Pending)) + .await + .unwrap(); + assert_eq!(miniblock_number, Some(MiniblockNumber(0))); + conn.blocks_dal() .insert_miniblock(&create_miniblock_header(0)) .await @@ -733,6 +758,31 @@ mod tests { assert_eq!(miniblock_number.unwrap(), Some(MiniblockNumber(1))); } + #[tokio::test] + async fn resolving_pending_block_id_for_snapshot_recovery() { + let connection_pool = ConnectionPool::test_pool().await; + let mut conn = connection_pool.access_storage().await.unwrap(); + let snapshot_recovery = SnapshotRecoveryStatus { + l1_batch_number: L1BatchNumber(23), + l1_batch_root_hash: H256::zero(), + miniblock_number: MiniblockNumber(42), + miniblock_root_hash: H256::zero(), + last_finished_chunk_id: None, + total_chunk_count: 100, + }; + conn.snapshot_recovery_dal() + .set_applied_snapshot_status(&snapshot_recovery) + .await + .unwrap(); + + let miniblock_number = conn + .blocks_web3_dal() + .resolve_block_id(api::BlockId::Number(api::BlockNumber::Pending)) + .await + .unwrap(); + assert_eq!(miniblock_number, Some(MiniblockNumber(43))); + } + #[tokio::test] async fn resolving_block_by_hash() { let connection_pool = ConnectionPool::test_pool().await; @@ -749,107 +799,20 @@ mod tests { .await .unwrap(); - let hash = miniblock_hash(MiniblockNumber(0), 0, H256::zero(), H256::zero()); + let hash = MiniblockHasher::new(MiniblockNumber(0), 0, H256::zero()) + .finalize(ProtocolVersionId::latest()); let miniblock_number = conn .blocks_web3_dal() .resolve_block_id(api::BlockId::Hash(hash)) .await; assert_eq!(miniblock_number.unwrap(), Some(MiniblockNumber(0))); - let hash = miniblock_hash(MiniblockNumber(1), 1, H256::zero(), H256::zero()); + let hash = MiniblockHasher::new(MiniblockNumber(1), 1, H256::zero()) + .finalize(ProtocolVersionId::latest()); let miniblock_number = conn .blocks_web3_dal() .resolve_block_id(api::BlockId::Hash(hash)) .await; assert_eq!(miniblock_number.unwrap(), None); } - - #[tokio::test] - async fn getting_miniblocks_for_virtual_block() { - let connection_pool = ConnectionPool::test_pool().await; - let mut conn = connection_pool.access_storage().await.unwrap(); - - conn.protocol_versions_dal() - .save_protocol_version_with_tx(ProtocolVersion::default()) - .await; - - let mut header = MiniblockHeader { - number: MiniblockNumber(0), - timestamp: 0, - hash: miniblock_hash(MiniblockNumber(0), 0, H256::zero(), H256::zero()), - l1_tx_count: 0, - l2_tx_count: 0, - base_fee_per_gas: 100, - l1_gas_price: 100, - l2_fair_gas_price: 100, - base_system_contracts_hashes: BaseSystemContractsHashes::default(), - protocol_version: Some(ProtocolVersionId::default()), - virtual_blocks: 0, - }; - conn.blocks_dal().insert_miniblock(&header).await.unwrap(); - conn.blocks_dal() - .mark_miniblocks_as_executed_in_l1_batch(L1BatchNumber(0)) - .await - .unwrap(); - - header.number = MiniblockNumber(1); - conn.blocks_dal().insert_miniblock(&header).await.unwrap(); - conn.blocks_dal() - .mark_miniblocks_as_executed_in_l1_batch(L1BatchNumber(1)) - .await - .unwrap(); - - for i in 2..=100 { - header.number = MiniblockNumber(i); - header.virtual_blocks = 5; - - conn.blocks_dal().insert_miniblock(&header).await.unwrap(); - conn.blocks_dal() - .mark_miniblocks_as_executed_in_l1_batch(L1BatchNumber(i)) - .await - .unwrap(); - } - - let virtual_block_ranges = [ - (2, 4), - (20, 24), - (11, 15), - (1, 10), - (88, 99), - (1, 100), - (1000000, 10000000), - ]; - let expected_miniblock_ranges = [ - (Some(2), 
Some(1)), - (Some(5), Some(5)), - (Some(4), Some(4)), - (Some(2), Some(3)), - (Some(19), Some(20)), - (Some(2), Some(21)), - (None, Some(100)), - ]; - - let inputs_with_expected_values = - IntoIterator::into_iter(virtual_block_ranges).zip(expected_miniblock_ranges); - for ( - (virtual_block_start, virtual_block_end), - (expected_miniblock_from, expected_miniblock_to), - ) in inputs_with_expected_values - { - // migration_start_l1_batch_number = 1 - let miniblock_from = conn - .blocks_web3_dal() - .get_miniblock_for_virtual_block_from(1, virtual_block_start) - .await - .unwrap(); - assert_eq!(miniblock_from, expected_miniblock_from); - - let miniblock_to = conn - .blocks_web3_dal() - .get_miniblock_for_virtual_block_to(1, virtual_block_end) - .await - .unwrap(); - assert_eq!(miniblock_to, expected_miniblock_to); - } - } } diff --git a/core/lib/dal/src/connection/holder.rs b/core/lib/dal/src/connection/holder.rs index 265b892c089..1174f834ae8 100644 --- a/core/lib/dal/src/connection/holder.rs +++ b/core/lib/dal/src/connection/holder.rs @@ -1,8 +1,7 @@ -// Built-in deps -use sqlx::pool::PoolConnection; -use sqlx::{postgres::Postgres, Transaction}; use std::fmt; +use sqlx::{pool::PoolConnection, postgres::Postgres, Transaction}; + /// Connection holder unifies the type of underlying connection, which /// can be either pooled or direct. pub(crate) enum ConnectionHolder<'a> { diff --git a/core/lib/dal/src/connection/mod.rs b/core/lib/dal/src/connection/mod.rs index ad761a7edf0..dba7098174e 100644 --- a/core/lib/dal/src/connection/mod.rs +++ b/core/lib/dal/src/connection/mod.rs @@ -1,43 +1,34 @@ +use std::{env, fmt, time::Duration}; + +use anyhow::Context as _; use sqlx::{ pool::PoolConnection, postgres::{PgConnectOptions, PgPool, PgPoolOptions, Postgres}, }; -use anyhow::Context as _; -use std::env; -use std::fmt; -use std::time::Duration; - -pub mod holder; - use crate::{metrics::CONNECTION_METRICS, StorageProcessor}; -/// Obtains the test database URL from the environment variable. -fn get_test_database_url() -> anyhow::Result { - env::var("TEST_DATABASE_URL").context( - "TEST_DATABASE_URL must be set. Normally, this is done by the 'zk' tool. \ - Make sure that you are running the tests with 'zk test rust' command or equivalent.", - ) -} +pub mod holder; /// Builder for [`ConnectionPool`]s. -pub struct ConnectionPoolBuilder<'a> { - database_url: &'a str, +pub struct ConnectionPoolBuilder { + database_url: String, max_size: u32, statement_timeout: Option, } -impl<'a> fmt::Debug for ConnectionPoolBuilder<'a> { - fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { +impl fmt::Debug for ConnectionPoolBuilder { + fn fmt(&self, formatter: &mut fmt::Formatter<'_>) -> fmt::Result { // Database URL is potentially sensitive, thus we omit it. - f.debug_struct("ConnectionPoolBuilder") + formatter + .debug_struct("ConnectionPoolBuilder") .field("max_size", &self.max_size) .field("statement_timeout", &self.statement_timeout) .finish() } } -impl<'a> ConnectionPoolBuilder<'a> { +impl ConnectionPoolBuilder { /// Sets the statement timeout for the pool. See [Postgres docs] for semantics. /// If not specified, the statement timeout will not be set. 
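    ///
    /// A hypothetical usage sketch (editorial; the builder and setter come from this
    /// file, while the URL and timeout values are made up for illustration):
    ///
    /// ```ignore
    /// let pool = ConnectionPool::builder("postgres://localhost/zksync_local", 10)
    ///     .set_statement_timeout(Some(Duration::from_secs(10)))
    ///     .build()
    ///     .await?;
    /// ```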
/// @@ -68,81 +59,124 @@ impl<'a> ConnectionPoolBuilder<'a> { max_connections = self.max_size, statement_timeout = self.statement_timeout ); - Ok(ConnectionPool(pool)) + Ok(ConnectionPool { + database_url: self.database_url.clone(), + inner: pool, + max_size: self.max_size, + }) } } -/// Constructucts a new temporary database (with a randomized name) -/// by cloning the database template pointed by TEST_DATABASE_URL env var. -/// The template is expected to have all migrations from dal/migrations applied. -/// For efficiency, the postgres container of TEST_DATABASE_URL should be -/// configured with option "fsync=off" - it disables waiting for disk synchronization -/// whenever you write to the DBs, therefore making it as fast as an inmem postgres instance. -/// The database is not cleaned up automatically, but rather the whole postgres -/// container is recreated whenever you call "zk test rust". -pub(super) async fn create_test_db() -> anyhow::Result { - use rand::Rng as _; - use sqlx::{Connection as _, Executor as _}; - const PREFIX: &str = "test-"; - let db_url = get_test_database_url().unwrap(); - let mut db_url = url::Url::parse(&db_url) - .with_context(|| format!("{} is not a valid database address", db_url))?; - let db_name = db_url - .path() - .strip_prefix('/') - .with_context(|| format!("{} is not a valid database address", db_url.as_ref()))? - .to_string(); - let db_copy_name = format!("{PREFIX}{}", rand::thread_rng().gen::()); - db_url.set_path(""); - let mut attempts = 10; - let mut conn = loop { - match sqlx::PgConnection::connect(db_url.as_ref()).await { - Ok(conn) => break conn, - Err(err) => { - attempts -= 1; - if attempts == 0 { - return Err(err).context("sqlx::PgConnection::connect()"); +#[derive(Clone)] +pub struct ConnectionPool { + pub(crate) inner: PgPool, + database_url: String, + max_size: u32, +} + +impl fmt::Debug for ConnectionPool { + fn fmt(&self, formatter: &mut fmt::Formatter<'_>) -> fmt::Result { + // We don't print the `database_url`, as it may contain + // sensitive information (e.g. database password). + formatter + .debug_struct("ConnectionPool") + .field("max_size", &self.max_size) + .finish_non_exhaustive() + } +} + +pub struct TestTemplate(url::Url); + +impl TestTemplate { + fn db_name(&self) -> &str { + self.0.path().strip_prefix('/').unwrap() + } + + fn url(&self, db_name: &str) -> url::Url { + let mut url = self.0.clone(); + url.set_path(db_name); + url + } + + async fn connect_to(db_url: &url::Url) -> sqlx::Result { + use sqlx::Connection as _; + let mut attempts = 10; + loop { + match sqlx::PgConnection::connect(db_url.as_ref()).await { + Ok(conn) => return Ok(conn), + Err(err) => { + attempts -= 1; + if attempts == 0 { + return Err(err); + } } } + tokio::time::sleep(std::time::Duration::from_millis(100)).await; } - tokio::time::sleep(std::time::Duration::from_millis(100)).await; - }; - conn.execute( - format!("CREATE DATABASE \"{db_copy_name}\" WITH TEMPLATE \"{db_name}\"").as_str(), - ) - .await - .context("failed to create a temporary database")?; - db_url.set_path(&db_copy_name); - Ok(db_url) -} + } -#[derive(Clone)] -pub struct ConnectionPool(pub(crate) PgPool); + /// Obtains the test database URL from the environment variable. + pub fn empty() -> anyhow::Result { + let db_url = env::var("TEST_DATABASE_URL").context( + "TEST_DATABASE_URL must be set. Normally, this is done by the 'zk' tool. 
\ + Make sure that you are running the tests with 'zk test rust' command or equivalent.", + )?; + Ok(Self(db_url.parse()?)) + } -impl fmt::Debug for ConnectionPool { - fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { - f.debug_tuple("ConnectionPool").finish() + /// Closes the connection pool, disallows connecting to the underlying db, + /// so that the db can be used as a template. + pub async fn freeze(pool: ConnectionPool) -> anyhow::Result { + use sqlx::Executor as _; + let mut conn = pool.acquire_connection_retried().await?; + conn.execute( + "UPDATE pg_database SET datallowconn = false WHERE datname = current_database()", + ) + .await + .context("SET datallowconn = false")?; + drop(conn); + pool.inner.close().await; + Ok(Self(pool.database_url.parse()?)) } -} -impl ConnectionPool { - pub async fn test_pool() -> ConnectionPool { - let db_url = create_test_db() + /// Constructs a new temporary database (with a randomized name) + /// by cloning the database template pointed by TEST_DATABASE_URL env var. + /// The template is expected to have all migrations from dal/migrations applied. + /// For efficiency, the Postgres container of TEST_DATABASE_URL should be + /// configured with option "fsync=off" - it disables waiting for disk synchronization + /// whenever you write to the DBs, therefore making it as fast as an in-memory Postgres instance. + /// The database is not cleaned up automatically, but rather the whole Postgres + /// container is recreated whenever you call "zk test rust". + pub async fn create_db(&self) -> anyhow::Result { + use rand::Rng as _; + use sqlx::Executor as _; + + let mut conn = Self::connect_to(&self.url("")) .await - .expect("Unable to prepare test database") - .to_string(); + .context("connect_to()")?; + let db_old = self.db_name(); + let db_new = format!("test-{}", rand::thread_rng().gen::()); + conn.execute(format!("CREATE DATABASE \"{db_new}\" WITH TEMPLATE \"{db_old}\"").as_str()) + .await + .context("CREATE DATABASE")?; const TEST_MAX_CONNECTIONS: u32 = 50; // Expected to be enough for any unit test. - Self::builder(&db_url, TEST_MAX_CONNECTIONS) + ConnectionPool::builder(self.url(&db_new).as_ref(), TEST_MAX_CONNECTIONS) .build() .await - .unwrap() + .context("ConnectionPool::builder()") + } +} + +impl ConnectionPool { + pub async fn test_pool() -> ConnectionPool { + TestTemplate::empty().unwrap().create_db().await.unwrap() } /// Initializes a builder for connection pools. - pub fn builder(database_url: &str, max_pool_size: u32) -> ConnectionPoolBuilder<'_> { + pub fn builder(database_url: &str, max_pool_size: u32) -> ConnectionPoolBuilder { ConnectionPoolBuilder { - database_url, + database_url: database_url.to_string(), max_size: max_pool_size, statement_timeout: None, } @@ -150,10 +184,17 @@ impl ConnectionPool { /// Initializes a builder for connection pools with a single connection. This is equivalent /// to calling `Self::builder(db_url, 1)`. - pub fn singleton(database_url: &str) -> ConnectionPoolBuilder<'_> { + pub fn singleton(database_url: &str) -> ConnectionPoolBuilder { Self::builder(database_url, 1) } + /// Returns the maximum number of connections in this pool specified during its creation. + /// This number may be distinct from the current number of connections in the pool (including + /// idle ones). + pub fn max_size(&self) -> u32 { + self.max_size + } + /// Creates a `StorageProcessor` entity over a recoverable connection. 
/// Upon a database outage connection will block the thread until /// it will be able to recover the connection (or, if connection cannot @@ -200,10 +241,12 @@ impl ConnectionPool { let mut retry_count = 0; while retry_count < DB_CONNECTION_RETRIES { - CONNECTION_METRICS.pool_size.observe(self.0.size() as usize); - CONNECTION_METRICS.pool_idle.observe(self.0.num_idle()); + CONNECTION_METRICS + .pool_size + .observe(self.inner.size() as usize); + CONNECTION_METRICS.pool_idle.observe(self.inner.num_idle()); - let connection = self.0.acquire().await; + let connection = self.inner.acquire().await; let connection_err = match connection { Ok(connection) => return Ok(connection), Err(err) => { @@ -220,11 +263,11 @@ impl ConnectionPool { } // Attempting to get the pooled connection for the last time - match self.0.acquire().await { + match self.inner.acquire().await { Ok(conn) => Ok(conn), Err(err) => { Self::report_connection_error(&err); - anyhow::bail!("Run out of retries getting a DB connetion, last error: {err}"); + anyhow::bail!("Run out of retries getting a DB connection, last error: {err}"); } } } @@ -242,10 +285,12 @@ mod tests { #[tokio::test] async fn setting_statement_timeout() { - let db_url = create_test_db() + let db_url = TestTemplate::empty() + .unwrap() + .create_db() .await - .expect("Unable to prepare test database") - .to_string(); + .unwrap() + .database_url; let pool = ConnectionPool::singleton(&db_url) .set_statement_timeout(Some(Duration::from_secs(1))) diff --git a/core/lib/dal/src/consensus_dal.rs b/core/lib/dal/src/consensus_dal.rs new file mode 100644 index 00000000000..c53541166d3 --- /dev/null +++ b/core/lib/dal/src/consensus_dal.rs @@ -0,0 +1,238 @@ +use anyhow::Context as _; +use zksync_consensus_roles::validator; +use zksync_consensus_storage::ReplicaState; +use zksync_types::{Address, MiniblockNumber}; + +pub use crate::models::storage_sync::Payload; +use crate::StorageProcessor; + +/// Storage access methods for `zksync_core::consensus` module. +#[derive(Debug)] +pub struct ConsensusDal<'a, 'c> { + pub storage: &'a mut StorageProcessor<'c>, +} + +impl ConsensusDal<'_, '_> { + /// Fetches the current BFT replica state. + pub async fn replica_state(&mut self) -> anyhow::Result> { + let Some(row) = sqlx::query!( + r#" + SELECT + state AS "state!" + FROM + consensus_replica_state + WHERE + fake_key + "# + ) + .fetch_optional(self.storage.conn()) + .await? + else { + return Ok(None); + }; + Ok(Some(zksync_protobuf::serde::deserialize(row.state)?)) + } + + /// Sets the current BFT replica state. + pub async fn set_replica_state(&mut self, state: &ReplicaState) -> sqlx::Result<()> { + let state = + zksync_protobuf::serde::serialize(state, serde_json::value::Serializer).unwrap(); + sqlx::query!( + r#" + INSERT INTO + consensus_replica_state (fake_key, state) + VALUES + (TRUE, $1) + ON CONFLICT (fake_key) DO + UPDATE + SET + state = excluded.state + "#, + state + ) + .execute(self.storage.conn()) + .await?; + Ok(()) + } + + /// Fetches the first consensus certificate. + /// Note that we didn't backfill the certificates for the past miniblocks + /// when enabling consensus certificate generation, so it might NOT be the certificate + /// for the genesis miniblock. + pub async fn first_certificate(&mut self) -> anyhow::Result> { + let Some(row) = sqlx::query!( + r#" + SELECT + certificate + FROM + miniblocks_consensus + ORDER BY + number ASC + LIMIT + 1 + "# + ) + .fetch_optional(self.storage.conn()) + .await? 
+ else { + return Ok(None); + }; + Ok(Some(zksync_protobuf::serde::deserialize(row.certificate)?)) + } + + /// Fetches the last consensus certificate. + /// Currently certificates are NOT generated synchronously with miniblocks, + /// so it might NOT be the certificate for the last miniblock. + pub async fn last_certificate(&mut self) -> anyhow::Result> { + let Some(row) = sqlx::query!( + r#" + SELECT + certificate + FROM + miniblocks_consensus + ORDER BY + number DESC + LIMIT + 1 + "# + ) + .fetch_optional(self.storage.conn()) + .await? + else { + return Ok(None); + }; + Ok(Some(zksync_protobuf::serde::deserialize(row.certificate)?)) + } + + /// Fetches the consensus certificate for the miniblock with the given `block_number`. + pub async fn certificate( + &mut self, + block_number: validator::BlockNumber, + ) -> anyhow::Result> { + let Some(row) = sqlx::query!( + r#" + SELECT + certificate + FROM + miniblocks_consensus + WHERE + number = $1 + "#, + i64::try_from(block_number.0)? + ) + .fetch_optional(self.storage.conn()) + .await? + else { + return Ok(None); + }; + Ok(Some(zksync_protobuf::serde::deserialize(row.certificate)?)) + } + + /// Converts the miniblock `block_number` into consensus payload. `Payload` is an + /// opaque format for the miniblock that consensus understands and generates a + /// certificate for it. + pub async fn block_payload( + &mut self, + block_number: validator::BlockNumber, + operator_address: Address, + ) -> anyhow::Result> { + let block_number = MiniblockNumber(block_number.0.try_into()?); + let Some(block) = self + .storage + .sync_dal() + .sync_block_inner(block_number) + .await? + else { + return Ok(None); + }; + let transactions = self + .storage + .transactions_web3_dal() + .get_raw_miniblock_transactions(block_number) + .await?; + Ok(Some(block.into_payload(operator_address, transactions))) + } + + /// Inserts a certificate for the miniblock `cert.header().number`. + /// It verifies that + /// * the certified payload matches the miniblock in storage + /// * the `cert.header().parent` matches the parent miniblock. + /// * the parent block already has a certificate. + /// NOTE: This is an extra secure way of storing a certificate, + /// which will help us to detect bugs in the consensus implementation + /// while it is "fresh". If it turns out to take too long, + /// we can remove the verification checks later. + pub async fn insert_certificate( + &mut self, + cert: &validator::CommitQC, + operator_address: Address, + ) -> anyhow::Result<()> { + let header = &cert.message.proposal; + let mut txn = self.storage.start_transaction().await?; + if let Some(last) = txn.consensus_dal().last_certificate().await? { + let last = &last.message.proposal; + anyhow::ensure!( + last.number.next() == header.number, + "expected certificate for a block after the current head block" + ); + anyhow::ensure!(last.hash() == header.parent, "parent block mismatch"); + } else { + anyhow::ensure!( + header.parent == validator::BlockHeaderHash::genesis_parent(), + "inserting first block with non-zero parent hash" + ); + } + let want_payload = txn + .consensus_dal() + .block_payload(cert.message.proposal.number, operator_address) + .await? 
+ .context("corresponding miniblock is missing")?; + anyhow::ensure!( + header.payload == want_payload.encode().hash(), + "consensus block payload doesn't match the miniblock" + ); + sqlx::query!( + r#" + INSERT INTO + miniblocks_consensus (number, certificate) + VALUES + ($1, $2) + "#, + header.number.0 as i64, + zksync_protobuf::serde::serialize(cert, serde_json::value::Serializer).unwrap(), + ) + .execute(txn.conn()) + .await?; + txn.commit().await?; + Ok(()) + } +} + +#[cfg(test)] +mod tests { + use rand::Rng as _; + use zksync_consensus_storage::ReplicaState; + + use crate::ConnectionPool; + + #[tokio::test] + async fn replica_state_read_write() { + let pool = ConnectionPool::test_pool().await; + let mut conn = pool.access_storage().await.unwrap(); + assert!(conn + .consensus_dal() + .replica_state() + .await + .unwrap() + .is_none()); + let rng = &mut rand::thread_rng(); + for _ in 0..10 { + let want: ReplicaState = rng.gen(); + conn.consensus_dal().set_replica_state(&want).await.unwrap(); + assert_eq!( + Some(want), + conn.consensus_dal().replica_state().await.unwrap() + ); + } + } +} diff --git a/core/lib/dal/src/contract_verification_dal.rs b/core/lib/dal/src/contract_verification_dal.rs index 59e0c6996f9..af4f3188a2d 100644 --- a/core/lib/dal/src/contract_verification_dal.rs +++ b/core/lib/dal/src/contract_verification_dal.rs @@ -1,7 +1,10 @@ -use anyhow::Context as _; -use std::fmt::{Display, Formatter}; -use std::time::Duration; +use std::{ + fmt::{Display, Formatter}, + time::Duration, +}; +use anyhow::Context as _; +use sqlx::postgres::types::PgInterval; use zksync_types::{ contract_verification_api::{ DeployContractCalldata, VerificationIncomingRequest, VerificationInfo, VerificationRequest, @@ -10,10 +13,7 @@ use zksync_types::{ get_code_key, Address, CONTRACT_DEPLOYER_ADDRESS, FAILED_CONTRACT_DEPLOYMENT_BYTECODE_HASH, }; -use sqlx::postgres::types::PgInterval; - -use crate::models::storage_verification_request::StorageVerificationRequest; -use crate::StorageProcessor; +use crate::{models::storage_verification_request::StorageVerificationRequest, StorageProcessor}; #[derive(Debug)] pub struct ContractVerificationDal<'a, 'c> { @@ -42,9 +42,14 @@ impl Display for Compiler { impl ContractVerificationDal<'_, '_> { pub async fn get_count_of_queued_verification_requests(&mut self) -> sqlx::Result { sqlx::query!( - "SELECT COUNT(*) as \"count!\" \ - FROM contract_verification_requests \ - WHERE status = 'queued'" + r#" + SELECT + COUNT(*) AS "count!" 
+ FROM + contract_verification_requests + WHERE + status = 'queued' + "# ) .fetch_one(self.storage.conn()) .await @@ -56,22 +61,27 @@ impl ContractVerificationDal<'_, '_> { query: VerificationIncomingRequest, ) -> sqlx::Result { sqlx::query!( - "INSERT INTO contract_verification_requests ( \ - contract_address, \ - source_code, \ - contract_name, \ - zk_compiler_version, \ - compiler_version, \ - optimization_used, \ - optimizer_mode, \ - constructor_arguments, \ - is_system, \ - status, \ - created_at, \ - updated_at \ - ) - VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, 'queued', now(), now()) \ - RETURNING id", + r#" + INSERT INTO + contract_verification_requests ( + contract_address, + source_code, + contract_name, + zk_compiler_version, + compiler_version, + optimization_used, + optimizer_mode, + constructor_arguments, + is_system, + status, + created_at, + updated_at + ) + VALUES + ($1, $2, $3, $4, $5, $6, $7, $8, $9, 'queued', NOW(), NOW()) + RETURNING + id + "#, query.contract_address.as_bytes(), // Serialization should always succeed. serde_json::to_string(&query.source_code_data).unwrap(), @@ -91,7 +101,7 @@ impl ContractVerificationDal<'_, '_> { /// Returns the next verification request for processing. /// Considering the situation where processing of some request /// can be interrupted (panic, pod restart, etc..), - /// `processing_timeout` parameter is added to avoid stucking of requests. + /// `processing_timeout` parameter is added to avoid stuck requests. pub async fn get_next_queued_verification_request( &mut self, processing_timeout: Duration, @@ -103,19 +113,44 @@ impl ContractVerificationDal<'_, '_> { }; let result = sqlx::query_as!( StorageVerificationRequest, - "UPDATE contract_verification_requests \ - SET status = 'in_progress', attempts = attempts + 1, \ - updated_at = now(), processing_started_at = now() \ - WHERE id = ( \ - SELECT id FROM contract_verification_requests \ - WHERE status = 'queued' OR (status = 'in_progress' AND processing_started_at < now() - $1::interval) \ - ORDER BY created_at \ - LIMIT 1 \ - FOR UPDATE \ - SKIP LOCKED \ - ) \ - RETURNING id, contract_address, source_code, contract_name, zk_compiler_version, compiler_version, optimization_used, \ - optimizer_mode, constructor_arguments, is_system", + r#" + UPDATE contract_verification_requests + SET + status = 'in_progress', + attempts = attempts + 1, + updated_at = NOW(), + processing_started_at = NOW() + WHERE + id = ( + SELECT + id + FROM + contract_verification_requests + WHERE + status = 'queued' + OR ( + status = 'in_progress' + AND processing_started_at < NOW() - $1::INTERVAL + ) + ORDER BY + created_at + LIMIT + 1 + FOR UPDATE + SKIP LOCKED + ) + RETURNING + id, + contract_address, + source_code, + contract_name, + zk_compiler_version, + compiler_version, + optimization_used, + optimizer_mode, + constructor_arguments, + is_system + "#, &processing_timeout ) .fetch_optional(self.storage.conn()) @@ -136,9 +171,14 @@ impl ContractVerificationDal<'_, '_> { .context("start_transaction()")?; sqlx::query!( - "UPDATE contract_verification_requests \ - SET status = 'successful', updated_at = now() \ - WHERE id = $1", + r#" + UPDATE contract_verification_requests + SET + status = 'successful', + updated_at = NOW() + WHERE + id = $1 + "#, verification_info.request.id as i64, ) .execute(transaction.conn()) @@ -149,11 +189,16 @@ impl ContractVerificationDal<'_, '_> { let verification_info_json = serde_json::to_value(verification_info) .expect("Failed to serialize verification info into serde_json"); 
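            // Editorial note, inferred from the ON CONFLICT clause below rather than
            // stated in the original: this INSERT is a Postgres upsert, so re-verifying
            // an already verified address overwrites the stored JSON instead of failing
            // on the unique `address` key. Schematically:
            //
            //   INSERT INTO contracts_verification_info (address, verification_info)
            //   VALUES ($1, $2)
            //   ON CONFLICT (address) DO UPDATE SET verification_info = $2;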
sqlx::query!( - "INSERT INTO contracts_verification_info \ - (address, verification_info) \ - VALUES ($1, $2) \ - ON CONFLICT (address) \ - DO UPDATE SET verification_info = $2", + r#" + INSERT INTO + contracts_verification_info (address, verification_info) + VALUES + ($1, $2) + ON CONFLICT (address) DO + UPDATE + SET + verification_info = $2 + "#, address.as_bytes(), &verification_info_json ) @@ -172,9 +217,17 @@ impl ContractVerificationDal<'_, '_> { panic_message: Option, ) -> sqlx::Result<()> { sqlx::query!( - "UPDATE contract_verification_requests \ - SET status = 'failed', updated_at = now(), error = $2, compilation_errors = $3, panic_message = $4 \ - WHERE id = $1", + r#" + UPDATE contract_verification_requests + SET + status = 'failed', + updated_at = NOW(), + error = $2, + compilation_errors = $3, + panic_message = $4 + WHERE + id = $1 + "#, id as i64, error.as_str(), &compilation_errors, @@ -190,8 +243,16 @@ impl ContractVerificationDal<'_, '_> { id: usize, ) -> anyhow::Result> { let Some(row) = sqlx::query!( - "SELECT status, error, compilation_errors FROM contract_verification_requests \ - WHERE id = $1", + r#" + SELECT + status, + error, + compilation_errors + FROM + contract_verification_requests + WHERE + id = $1 + "#, id as i64, ) .fetch_optional(self.storage.conn()) @@ -224,21 +285,38 @@ impl ContractVerificationDal<'_, '_> { ) -> anyhow::Result, DeployContractCalldata)>> { let hashed_key = get_code_key(&address).hashed_key(); let Some(row) = sqlx::query!( - "SELECT factory_deps.bytecode, transactions.data as \"data?\", transactions.contract_address as \"contract_address?\" \ - FROM ( \ - SELECT * FROM storage_logs \ - WHERE storage_logs.hashed_key = $1 \ - ORDER BY miniblock_number DESC, operation_number DESC \ - LIMIT 1 \ - ) storage_logs \ - JOIN factory_deps ON factory_deps.bytecode_hash = storage_logs.value \ - LEFT JOIN transactions ON transactions.hash = storage_logs.tx_hash \ - WHERE storage_logs.value != $2", + r#" + SELECT + factory_deps.bytecode, + transactions.data AS "data?", + transactions.contract_address AS "contract_address?" + FROM + ( + SELECT + * + FROM + storage_logs + WHERE + storage_logs.hashed_key = $1 + ORDER BY + miniblock_number DESC, + operation_number DESC + LIMIT + 1 + ) storage_logs + JOIN factory_deps ON factory_deps.bytecode_hash = storage_logs.value + LEFT JOIN transactions ON transactions.hash = storage_logs.tx_hash + WHERE + storage_logs.value != $2 + "#, hashed_key.as_bytes(), FAILED_CONTRACT_DEPLOYMENT_BYTECODE_HASH.as_bytes() ) .fetch_optional(self.storage.conn()) - .await? else { return Ok(None) }; + .await? + else { + return Ok(None); + }; let calldata = match row.contract_address { Some(contract_address) if contract_address == CONTRACT_DEPLOYER_ADDRESS.0.to_vec() => { // `row.contract_address` and `row.data` are either both `None` or both `Some(_)`. @@ -259,9 +337,14 @@ impl ContractVerificationDal<'_, '_> { /// Returns true if the contract has a stored contracts_verification_info. pub async fn is_contract_verified(&mut self, address: Address) -> sqlx::Result { let count = sqlx::query!( - "SELECT COUNT(*) as \"count!\" \ - FROM contracts_verification_info \ - WHERE address = $1", + r#" + SELECT + COUNT(*) AS "count!" 
+ FROM + contracts_verification_info + WHERE + address = $1 + "#, address.as_bytes() ) .fetch_one(self.storage.conn()) @@ -273,7 +356,16 @@ impl ContractVerificationDal<'_, '_> { async fn get_compiler_versions(&mut self, compiler: Compiler) -> sqlx::Result> { let compiler = format!("{compiler}"); let versions: Vec<_> = sqlx::query!( - "SELECT version FROM compiler_versions WHERE compiler = $1 ORDER by version", + r#" + SELECT + VERSION + FROM + compiler_versions + WHERE + compiler = $1 + ORDER BY + VERSION + "#, &compiler ) .fetch_all(self.storage.conn()) @@ -313,18 +405,29 @@ impl ContractVerificationDal<'_, '_> { let compiler = format!("{compiler}"); sqlx::query!( - "DELETE FROM compiler_versions WHERE compiler = $1", + r#" + DELETE FROM compiler_versions + WHERE + compiler = $1 + "#, &compiler ) .execute(transaction.conn()) .await?; sqlx::query!( - "INSERT INTO compiler_versions (version, compiler, created_at, updated_at) \ - SELECT u.version, $2, now(), now() \ - FROM UNNEST($1::text[]) \ - AS u(version) \ - ON CONFLICT (version, compiler) DO NOTHING", + r#" + INSERT INTO + compiler_versions (VERSION, compiler, created_at, updated_at) + SELECT + u.version, + $2, + NOW(), + NOW() + FROM + UNNEST($1::TEXT[]) AS u (VERSION) + ON CONFLICT (VERSION, compiler) DO NOTHING + "#, &versions, &compiler, ) @@ -355,11 +458,25 @@ impl ContractVerificationDal<'_, '_> { pub async fn get_all_successful_requests(&mut self) -> sqlx::Result> { let result = sqlx::query_as!( StorageVerificationRequest, - "SELECT id, contract_address, source_code, contract_name, zk_compiler_version, compiler_version, optimization_used, \ - optimizer_mode, constructor_arguments, is_system \ - FROM contract_verification_requests \ - WHERE status = 'successful' \ - ORDER BY id", + r#" + SELECT + id, + contract_address, + source_code, + contract_name, + zk_compiler_version, + compiler_version, + optimization_used, + optimizer_mode, + constructor_arguments, + is_system + FROM + contract_verification_requests + WHERE + status = 'successful' + ORDER BY + id + "#, ) .fetch_all(self.storage.conn()) .await? 
@@ -374,7 +491,14 @@ impl ContractVerificationDal<'_, '_> { address: Address, ) -> anyhow::Result> { let Some(row) = sqlx::query!( - "SELECT verification_info FROM contracts_verification_info WHERE address = $1", + r#" + SELECT + verification_info + FROM + contracts_verification_info + WHERE + address = $1 + "#, address.as_bytes(), ) .fetch_optional(self.storage.conn()) diff --git a/core/lib/dal/src/eth_sender_dal.rs b/core/lib/dal/src/eth_sender_dal.rs index 0d9d1da0dab..5e490e6590a 100644 --- a/core/lib/dal/src/eth_sender_dal.rs +++ b/core/lib/dal/src/eth_sender_dal.rs @@ -1,17 +1,22 @@ -use crate::models::storage_eth_tx::{ - L1BatchEthSenderStats, StorageEthTx, StorageTxHistory, StorageTxHistoryToSend, -}; -use crate::StorageProcessor; +use std::{convert::TryFrom, str::FromStr}; + use anyhow::Context as _; use sqlx::{ types::chrono::{DateTime, Utc}, Row, }; -use std::convert::TryFrom; -use std::str::FromStr; -use zksync_types::aggregated_operations::AggregatedActionType; -use zksync_types::eth_sender::{EthTx, TxHistory, TxHistoryToSend}; -use zksync_types::{Address, L1BatchNumber, H256, U256}; +use zksync_types::{ + aggregated_operations::AggregatedActionType, + eth_sender::{EthTx, TxHistory, TxHistoryToSend}, + Address, L1BatchNumber, H256, U256, +}; + +use crate::{ + models::storage_eth_tx::{ + L1BatchEthSenderStats, StorageEthTx, StorageTxHistory, StorageTxHistoryToSend, + }, + StorageProcessor, +}; #[derive(Debug)] pub struct EthSenderDal<'a, 'c> { @@ -22,9 +27,24 @@ impl EthSenderDal<'_, '_> { pub async fn get_inflight_txs(&mut self) -> sqlx::Result> { let txs = sqlx::query_as!( StorageEthTx, - "SELECT * FROM eth_txs WHERE confirmed_eth_tx_history_id IS NULL \ - AND id <= (SELECT COALESCE(MAX(eth_tx_id), 0) FROM eth_txs_history WHERE sent_at_block IS NOT NULL) \ - ORDER BY id" + r#" + SELECT + * + FROM + eth_txs + WHERE + confirmed_eth_tx_history_id IS NULL + AND id <= ( + SELECT + COALESCE(MAX(eth_tx_id), 0) + FROM + eth_txs_history + WHERE + sent_at_block IS NOT NULL + ) + ORDER BY + id + "# ) .fetch_all(self.storage.conn()) .await?; @@ -78,7 +98,14 @@ impl EthSenderDal<'_, '_> { pub async fn get_eth_tx(&mut self, eth_tx_id: u32) -> sqlx::Result> { Ok(sqlx::query_as!( StorageEthTx, - "SELECT * FROM eth_txs WHERE id = $1", + r#" + SELECT + * + FROM + eth_txs + WHERE + id = $1 + "#, eth_tx_id as i32 ) .fetch_optional(self.storage.conn()) @@ -89,10 +116,23 @@ impl EthSenderDal<'_, '_> { pub async fn get_new_eth_txs(&mut self, limit: u64) -> sqlx::Result> { let txs = sqlx::query_as!( StorageEthTx, - "SELECT * FROM eth_txs \ - WHERE id > (SELECT COALESCE(MAX(eth_tx_id), 0) FROM eth_txs_history) \ - ORDER BY id \ - LIMIT $1", + r#" + SELECT + * + FROM + eth_txs + WHERE + id > ( + SELECT + COALESCE(MAX(eth_tx_id), 0) + FROM + eth_txs_history + ) + ORDER BY + id + LIMIT + $1 + "#, limit as i64 ) .fetch_all(self.storage.conn()) @@ -103,18 +143,24 @@ impl EthSenderDal<'_, '_> { pub async fn get_unsent_txs(&mut self) -> sqlx::Result> { let txs = sqlx::query_as!( StorageTxHistoryToSend, - "SELECT \ - eth_txs_history.id, \ - eth_txs_history.eth_tx_id, \ - eth_txs_history.tx_hash, \ - eth_txs_history.base_fee_per_gas, \ - eth_txs_history.priority_fee_per_gas, \ - eth_txs_history.signed_raw_tx, \ - eth_txs.nonce \ - FROM eth_txs_history \ - JOIN eth_txs ON eth_txs.id = eth_txs_history.eth_tx_id \ - WHERE eth_txs_history.sent_at_block IS NULL AND eth_txs.confirmed_eth_tx_history_id IS NULL \ - ORDER BY eth_txs_history.id DESC", + r#" + SELECT + eth_txs_history.id, + eth_txs_history.eth_tx_id, + 
eth_txs_history.tx_hash, + eth_txs_history.base_fee_per_gas, + eth_txs_history.priority_fee_per_gas, + eth_txs_history.signed_raw_tx, + eth_txs.nonce + FROM + eth_txs_history + JOIN eth_txs ON eth_txs.id = eth_txs_history.eth_tx_id + WHERE + eth_txs_history.sent_at_block IS NULL + AND eth_txs.confirmed_eth_tx_history_id IS NULL + ORDER BY + eth_txs_history.id DESC + "#, ) .fetch_all(self.storage.conn()) .await?; @@ -132,9 +178,22 @@ impl EthSenderDal<'_, '_> { let address = format!("{:#x}", contract_address); let eth_tx = sqlx::query_as!( StorageEthTx, - "INSERT INTO eth_txs (raw_tx, nonce, tx_type, contract_address, predicted_gas_cost, created_at, updated_at) \ - VALUES ($1, $2, $3, $4, $5, now(), now()) \ - RETURNING *", + r#" + INSERT INTO + eth_txs ( + raw_tx, + nonce, + tx_type, + contract_address, + predicted_gas_cost, + created_at, + updated_at + ) + VALUES + ($1, $2, $3, $4, $5, NOW(), NOW()) + RETURNING + * + "#, raw_tx, nonce as i64, tx_type.to_string(), @@ -152,7 +211,7 @@ impl EthSenderDal<'_, '_> { base_fee_per_gas: u64, priority_fee_per_gas: u64, tx_hash: H256, - raw_signed_tx: Vec, + raw_signed_tx: &[u8], ) -> anyhow::Result> { let priority_fee_per_gas = i64::try_from(priority_fee_per_gas).context("Can't convert u64 to i64")?; @@ -161,12 +220,24 @@ impl EthSenderDal<'_, '_> { let tx_hash = format!("{:#x}", tx_hash); Ok(sqlx::query!( - "INSERT INTO eth_txs_history \ - (eth_tx_id, base_fee_per_gas, priority_fee_per_gas, tx_hash, signed_raw_tx, created_at, updated_at) \ - VALUES ($1, $2, $3, $4, $5, now(), now()) \ - ON CONFLICT (tx_hash) DO NOTHING \ - RETURNING id", - eth_tx_id as u32, + r#" + INSERT INTO + eth_txs_history ( + eth_tx_id, + base_fee_per_gas, + priority_fee_per_gas, + tx_hash, + signed_raw_tx, + created_at, + updated_at + ) + VALUES + ($1, $2, $3, $4, $5, NOW(), NOW()) + ON CONFLICT (tx_hash) DO NOTHING + RETURNING + id + "#, + eth_tx_id as i32, base_fee_per_gas, priority_fee_per_gas, tx_hash, @@ -183,8 +254,15 @@ impl EthSenderDal<'_, '_> { sent_at_block: u32, ) -> sqlx::Result<()> { sqlx::query!( - "UPDATE eth_txs_history SET sent_at_block = $2, sent_at = now() \ - WHERE id = $1 AND sent_at_block IS NULL", + r#" + UPDATE eth_txs_history + SET + sent_at_block = $2, + sent_at = NOW() + WHERE + id = $1 + AND sent_at_block IS NULL + "#, eth_txs_history_id as i32, sent_at_block as i32 ) @@ -195,8 +273,11 @@ impl EthSenderDal<'_, '_> { pub async fn remove_tx_history(&mut self, eth_txs_history_id: u32) -> sqlx::Result<()> { sqlx::query!( - "DELETE FROM eth_txs_history \ - WHERE id = $1", + r#" + DELETE FROM eth_txs_history + WHERE + id = $1 + "#, eth_txs_history_id as i64 ) .execute(self.storage.conn()) @@ -214,19 +295,31 @@ impl EthSenderDal<'_, '_> { .map_err(|err| anyhow::anyhow!("Can't convert U256 to i64: {err}"))?; let tx_hash = format!("{:#x}", tx_hash); let ids = sqlx::query!( - "UPDATE eth_txs_history \ - SET updated_at = now(), confirmed_at = now() \ - WHERE tx_hash = $1 \ - RETURNING id, eth_tx_id", + r#" + UPDATE eth_txs_history + SET + updated_at = NOW(), + confirmed_at = NOW() + WHERE + tx_hash = $1 + RETURNING + id, + eth_tx_id + "#, tx_hash, ) .fetch_one(transaction.conn()) .await?; sqlx::query!( - "UPDATE eth_txs \ - SET gas_used = $1, confirmed_eth_tx_history_id = $2 \ - WHERE id = $3", + r#" + UPDATE eth_txs + SET + gas_used = $1, + confirmed_eth_tx_history_id = $2 + WHERE + id = $3 + "#, gas_used, ids.id, ids.eth_tx_id @@ -243,8 +336,15 @@ impl EthSenderDal<'_, '_> { eth_tx_id: u32, ) -> anyhow::Result> { let tx_hash = sqlx::query!( - "SELECT 
tx_hash FROM eth_txs_history \ - WHERE eth_tx_id = $1 AND confirmed_at IS NOT NULL", + r#" + SELECT + tx_hash + FROM + eth_txs_history + WHERE + eth_tx_id = $1 + AND confirmed_at IS NOT NULL + "#, eth_tx_id as i64 ) .fetch_optional(self.storage.conn()) @@ -322,9 +422,13 @@ impl EthSenderDal<'_, '_> { // Mark general entry as confirmed. sqlx::query!( - "UPDATE eth_txs \ - SET confirmed_eth_tx_history_id = $1 \ - WHERE id = $2", + r#" + UPDATE eth_txs + SET + confirmed_eth_tx_history_id = $1 + WHERE + id = $2 + "#, eth_history_id, eth_tx_id ) @@ -351,7 +455,16 @@ impl EthSenderDal<'_, '_> { ) -> sqlx::Result> { let tx_history = sqlx::query_as!( StorageTxHistory, - "SELECT * FROM eth_txs_history WHERE eth_tx_id = $1 ORDER BY created_at DESC", + r#" + SELECT + * + FROM + eth_txs_history + WHERE + eth_tx_id = $1 + ORDER BY + created_at DESC + "#, eth_tx_id as i32 ) .fetch_all(self.storage.conn()) @@ -378,7 +491,18 @@ impl EthSenderDal<'_, '_> { ) -> sqlx::Result> { let history_item = sqlx::query_as!( StorageTxHistory, - "SELECT * FROM eth_txs_history WHERE eth_tx_id = $1 ORDER BY created_at DESC LIMIT 1", + r#" + SELECT + * + FROM + eth_txs_history + WHERE + eth_tx_id = $1 + ORDER BY + created_at DESC + LIMIT + 1 + "#, eth_tx_id as i32 ) .fetch_optional(self.storage.conn()) @@ -387,15 +511,32 @@ impl EthSenderDal<'_, '_> { } pub async fn get_next_nonce(&mut self) -> sqlx::Result> { - let row = sqlx::query!("SELECT nonce FROM eth_txs ORDER BY id DESC LIMIT 1") - .fetch_optional(self.storage.conn()) - .await?; + let row = sqlx::query!( + r#" + SELECT + nonce + FROM + eth_txs + ORDER BY + id DESC + LIMIT + 1 + "# + ) + .fetch_optional(self.storage.conn()) + .await?; Ok(row.map(|row| row.nonce as u64 + 1)) } pub async fn mark_failed_transaction(&mut self, eth_tx_id: u32) -> sqlx::Result<()> { sqlx::query!( - "UPDATE eth_txs SET has_failed = TRUE WHERE id = $1", + r#" + UPDATE eth_txs + SET + has_failed = TRUE + WHERE + id = $1 + "#, eth_tx_id as i32 ) .execute(self.storage.conn()) @@ -404,17 +545,36 @@ impl EthSenderDal<'_, '_> { } pub async fn get_number_of_failed_transactions(&mut self) -> anyhow::Result { - sqlx::query!("SELECT COUNT(*) FROM eth_txs WHERE has_failed = TRUE") - .fetch_one(self.storage.conn()) - .await? - .count - .context("count field is missing") + sqlx::query!( + r#" + SELECT + COUNT(*) + FROM + eth_txs + WHERE + has_failed = TRUE + "# + ) + .fetch_one(self.storage.conn()) + .await? + .count + .context("count field is missing") } pub async fn clear_failed_transactions(&mut self) -> sqlx::Result<()> { sqlx::query!( - "DELETE FROM eth_txs WHERE id >= \ - (SELECT MIN(id) FROM eth_txs WHERE has_failed = TRUE)" + r#" + DELETE FROM eth_txs + WHERE + id >= ( + SELECT + MIN(id) + FROM + eth_txs + WHERE + has_failed = TRUE + ) + "# ) .execute(self.storage.conn()) .await?; diff --git a/core/lib/dal/src/events_dal.rs b/core/lib/dal/src/events_dal.rs index 6355deaf29a..b7087985f52 100644 --- a/core/lib/dal/src/events_dal.rs +++ b/core/lib/dal/src/events_dal.rs @@ -1,14 +1,14 @@ -use sqlx::types::chrono::Utc; - use std::fmt; -use crate::{models::storage_event::StorageL2ToL1Log, SqlxError, StorageProcessor}; +use sqlx::types::chrono::Utc; use zksync_types::{ l2_to_l1_log::{L2ToL1Log, UserL2ToL1Log}, tx::IncludedTxLocation, MiniblockNumber, VmEvent, H256, }; +use crate::{models::storage_event::StorageL2ToL1Log, SqlxError, StorageProcessor}; + /// Wrapper around an optional event topic allowing to hex-format it for `COPY` instructions. 
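/// (Editorial sketch of the intent, assuming Postgres `COPY` text-format
/// conventions rather than anything shown here: a `Some(&topic)` is presumably
/// rendered as `\x`-prefixed hex, e.g. `EventTopic(Some(&H256::zero()))` as
/// `\x00...00`, while `None` must map to the format's NULL marker.)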
#[derive(Debug)] struct EventTopic<'a>(Option<&'a H256>); @@ -93,7 +93,11 @@ impl EventsDal<'_, '_> { /// Removes events with a block number strictly greater than the specified `block_number`. pub async fn rollback_events(&mut self, block_number: MiniblockNumber) { sqlx::query!( - "DELETE FROM events WHERE miniblock_number > $1", + r#" + DELETE FROM events + WHERE + miniblock_number > $1 + "#, block_number.0 as i64 ) .execute(self.storage.conn()) @@ -166,7 +170,11 @@ impl EventsDal<'_, '_> { /// Removes all L2-to-L1 logs with a miniblock number strictly greater than the specified `block_number`. pub async fn rollback_l2_to_l1_logs(&mut self, block_number: MiniblockNumber) { sqlx::query!( - "DELETE FROM l2_to_l1_logs WHERE miniblock_number > $1", + r#" + DELETE FROM l2_to_l1_logs + WHERE + miniblock_number > $1 + "#, block_number.0 as i64 ) .execute(self.storage.conn()) @@ -180,13 +188,28 @@ impl EventsDal<'_, '_> { ) -> Result, SqlxError> { sqlx::query_as!( StorageL2ToL1Log, - "SELECT \ - miniblock_number, log_index_in_miniblock, log_index_in_tx, tx_hash, \ - Null::bytea as \"block_hash\", Null::bigint as \"l1_batch_number?\", \ - shard_id, is_service, tx_index_in_miniblock, tx_index_in_l1_batch, sender, key, value \ - FROM l2_to_l1_logs \ - WHERE tx_hash = $1 \ - ORDER BY log_index_in_tx ASC", + r#" + SELECT + miniblock_number, + log_index_in_miniblock, + log_index_in_tx, + tx_hash, + NULL::bytea AS "block_hash", + NULL::BIGINT AS "l1_batch_number?", + shard_id, + is_service, + tx_index_in_miniblock, + tx_index_in_l1_batch, + sender, + key, + value + FROM + l2_to_l1_logs + WHERE + tx_hash = $1 + ORDER BY + log_index_in_tx ASC + "#, tx_hash.as_bytes() ) .fetch_all(self.storage.conn()) @@ -196,9 +219,10 @@ impl EventsDal<'_, '_> { #[cfg(test)] mod tests { + use zksync_types::{Address, L1BatchNumber, ProtocolVersion}; + use super::*; use crate::{tests::create_miniblock_header, ConnectionPool}; - use zksync_types::{Address, L1BatchNumber, ProtocolVersion}; fn create_vm_event(index: u8, topic_count: u8) -> VmEvent { assert!(topic_count <= 4); diff --git a/core/lib/dal/src/events_web3_dal.rs b/core/lib/dal/src/events_web3_dal.rs index 82a65c18444..06a049cd003 100644 --- a/core/lib/dal/src/events_web3_dal.rs +++ b/core/lib/dal/src/events_web3_dal.rs @@ -1,14 +1,11 @@ use sqlx::Row; - use zksync_types::{ api::{GetLogsFilter, Log}, Address, MiniblockNumber, H256, }; use crate::{ - instrument::InstrumentExt, - models::{storage_block::web3_block_number_to_sql, storage_event::StorageWeb3Log}, - SqlxError, StorageProcessor, + instrument::InstrumentExt, models::storage_event::StorageWeb3Log, SqlxError, StorageProcessor, }; #[derive(Debug)] @@ -119,10 +116,8 @@ impl EventsWeb3Dal<'_, '_> { let mut where_sql = format!("(miniblock_number >= {})", filter.from_block.0 as i64); - if let Some(to_block) = filter.to_block { - let block_sql = web3_block_number_to_sql(to_block); - where_sql += &format!(" AND (miniblock_number <= {})", block_sql); - } + where_sql += &format!(" AND (miniblock_number <= {})", filter.to_block.0 as i64); + if !filter.addresses.is_empty() { where_sql += &format!(" AND (address = ANY(${}))", arg_index); arg_index += 1; @@ -143,27 +138,53 @@ impl EventsWeb3Dal<'_, '_> { let db_logs: Vec = sqlx::query_as!( StorageWeb3Log, r#" - WITH events_select AS ( - SELECT - address, topic1, topic2, topic3, topic4, value, - miniblock_number, tx_hash, tx_index_in_block, - event_index_in_block, event_index_in_tx - FROM events - WHERE miniblock_number > $1 - ORDER BY miniblock_number ASC, 
event_index_in_block ASC - ) - SELECT miniblocks.hash as "block_hash?", - address as "address!", topic1 as "topic1!", topic2 as "topic2!", topic3 as "topic3!", topic4 as "topic4!", value as "value!", - miniblock_number as "miniblock_number!", miniblocks.l1_batch_number as "l1_batch_number?", tx_hash as "tx_hash!", - tx_index_in_block as "tx_index_in_block!", event_index_in_block as "event_index_in_block!", event_index_in_tx as "event_index_in_tx!" - FROM events_select - INNER JOIN miniblocks ON events_select.miniblock_number = miniblocks.number - ORDER BY miniblock_number ASC, event_index_in_block ASC + WITH + events_select AS ( + SELECT + address, + topic1, + topic2, + topic3, + topic4, + value, + miniblock_number, + tx_hash, + tx_index_in_block, + event_index_in_block, + event_index_in_tx + FROM + events + WHERE + miniblock_number > $1 + ORDER BY + miniblock_number ASC, + event_index_in_block ASC + ) + SELECT + miniblocks.hash AS "block_hash?", + address AS "address!", + topic1 AS "topic1!", + topic2 AS "topic2!", + topic3 AS "topic3!", + topic4 AS "topic4!", + value AS "value!", + miniblock_number AS "miniblock_number!", + miniblocks.l1_batch_number AS "l1_batch_number?", + tx_hash AS "tx_hash!", + tx_index_in_block AS "tx_index_in_block!", + event_index_in_block AS "event_index_in_block!", + event_index_in_tx AS "event_index_in_tx!" + FROM + events_select + INNER JOIN miniblocks ON events_select.miniblock_number = miniblocks.number + ORDER BY + miniblock_number ASC, + event_index_in_block ASC "#, from_block.0 as i64 ) - .fetch_all(self.storage.conn()) - .await?; + .fetch_all(self.storage.conn()) + .await?; let logs = db_logs.into_iter().map(Into::into).collect(); Ok(logs) } @@ -172,7 +193,6 @@ impl EventsWeb3Dal<'_, '_> { #[cfg(test)] mod tests { - use zksync_types::api::BlockNumber; use zksync_types::{Address, H256}; use super::*; @@ -185,7 +205,7 @@ mod tests { let events_web3_dal = EventsWeb3Dal { storage }; let filter = GetLogsFilter { from_block: MiniblockNumber(100), - to_block: Some(BlockNumber::Number(200.into())), + to_block: MiniblockNumber(200), addresses: vec![Address::from_low_u64_be(123)], topics: vec![(0, vec![H256::from_low_u64_be(456)])], }; diff --git a/core/lib/dal/src/fri_gpu_prover_queue_dal.rs b/core/lib/dal/src/fri_gpu_prover_queue_dal.rs index 46c46a15b73..a2929894444 100644 --- a/core/lib/dal/src/fri_gpu_prover_queue_dal.rs +++ b/core/lib/dal/src/fri_gpu_prover_queue_dal.rs @@ -1,8 +1,8 @@ use std::time::Duration; + use zksync_types::proofs::{GpuProverInstanceStatus, SocketAddress}; -use crate::time_utils::pg_interval_from_duration; -use crate::StorageProcessor; +use crate::{time_utils::pg_interval_from_duration, StorageProcessor}; #[derive(Debug)] pub struct FriGpuProverQueueDal<'a, 'c> { @@ -18,37 +18,49 @@ impl FriGpuProverQueueDal<'_, '_> { ) -> Option { let processing_timeout = pg_interval_from_duration(processing_timeout); let result: Option = sqlx::query!( - "UPDATE gpu_prover_queue_fri \ - SET instance_status = 'reserved', \ - updated_at = now(), \ - processing_started_at = now() \ - WHERE id in ( \ - SELECT id \ - FROM gpu_prover_queue_fri \ - WHERE specialized_prover_group_id=$2 \ - AND zone=$3 \ - AND ( \ - instance_status = 'available' \ - OR (instance_status = 'reserved' AND processing_started_at < now() - $1::interval) \ - ) \ - ORDER BY updated_at ASC \ - LIMIT 1 \ - FOR UPDATE \ - SKIP LOCKED \ - ) \ - RETURNING gpu_prover_queue_fri.* - ", - &processing_timeout, - specialized_prover_group_id as i16, - zone - ) - 
.fetch_optional(self.storage.conn()) - .await - .unwrap() - .map(|row| SocketAddress { - host: row.instance_host.network(), - port: row.instance_port as u16, - }); + r#" + UPDATE gpu_prover_queue_fri + SET + instance_status = 'reserved', + updated_at = NOW(), + processing_started_at = NOW() + WHERE + id IN ( + SELECT + id + FROM + gpu_prover_queue_fri + WHERE + specialized_prover_group_id = $2 + AND zone = $3 + AND ( + instance_status = 'available' + OR ( + instance_status = 'reserved' + AND processing_started_at < NOW() - $1::INTERVAL + ) + ) + ORDER BY + updated_at ASC + LIMIT + 1 + FOR UPDATE + SKIP LOCKED + ) + RETURNING + gpu_prover_queue_fri.* + "#, + &processing_timeout, + specialized_prover_group_id as i16, + zone + ) + .fetch_optional(self.storage.conn()) + .await + .unwrap() + .map(|row| SocketAddress { + host: row.instance_host.network(), + port: row.instance_port as u16, + }); result } @@ -60,18 +72,35 @@ impl FriGpuProverQueueDal<'_, '_> { zone: String, ) { sqlx::query!( - "INSERT INTO gpu_prover_queue_fri (instance_host, instance_port, instance_status, specialized_prover_group_id, zone, created_at, updated_at) \ - VALUES (cast($1::text as inet), $2, 'available', $3, $4, now(), now()) \ - ON CONFLICT(instance_host, instance_port, zone) \ - DO UPDATE SET instance_status='available', specialized_prover_group_id=$3, zone=$4, updated_at=now()", - format!("{}",address.host), - address.port as i32, - specialized_prover_group_id as i16, - zone + r#" + INSERT INTO + gpu_prover_queue_fri ( + instance_host, + instance_port, + instance_status, + specialized_prover_group_id, + zone, + created_at, + updated_at + ) + VALUES + (CAST($1::TEXT AS inet), $2, 'available', $3, $4, NOW(), NOW()) + ON CONFLICT (instance_host, instance_port, zone) DO + UPDATE + SET + instance_status = 'available', + specialized_prover_group_id = $3, + zone = $4, + updated_at = NOW() + "#, + format!("{}", address.host), + address.port as i32, + specialized_prover_group_id as i16, + zone ) - .execute(self.storage.conn()) - .await - .unwrap(); + .execute(self.storage.conn()) + .await + .unwrap(); } pub async fn update_prover_instance_status( @@ -81,12 +110,16 @@ impl FriGpuProverQueueDal<'_, '_> { zone: String, ) { sqlx::query!( - "UPDATE gpu_prover_queue_fri \ - SET instance_status = $1, updated_at = now() \ - WHERE instance_host = $2::text::inet \ - AND instance_port = $3 \ + r#" + UPDATE gpu_prover_queue_fri + SET + instance_status = $1, + updated_at = NOW() + WHERE + instance_host = $2::TEXT::inet + AND instance_port = $3 AND zone = $4 - ", + "#, format!("{:?}", status).to_lowercase(), format!("{}", address.host), address.port as i32, @@ -103,13 +136,17 @@ impl FriGpuProverQueueDal<'_, '_> { zone: String, ) { sqlx::query!( - "UPDATE gpu_prover_queue_fri \ - SET instance_status = 'available', updated_at = now() \ - WHERE instance_host = $1::text::inet \ - AND instance_port = $2 \ - AND instance_status = 'full' \ + r#" + UPDATE gpu_prover_queue_fri + SET + instance_status = 'available', + updated_at = NOW() + WHERE + instance_host = $1::TEXT::inet + AND instance_port = $2 + AND instance_status = 'full' AND zone = $3 - ", + "#, format!("{}", address.host), address.port as i32, zone diff --git a/core/lib/dal/src/fri_proof_compressor_dal.rs b/core/lib/dal/src/fri_proof_compressor_dal.rs index 97caf76ebce..ee331204ec4 100644 --- a/core/lib/dal/src/fri_proof_compressor_dal.rs +++ b/core/lib/dal/src/fri_proof_compressor_dal.rs @@ -1,14 +1,16 @@ +use std::{collections::HashMap, str::FromStr, time::Duration}; + use 
sqlx::Row; -use std::collections::HashMap; -use std::str::FromStr; -use std::time::Duration; use strum::{Display, EnumString}; +use zksync_types::{ + proofs::{JobCountStatistics, StuckJobs}, + L1BatchNumber, +}; -use zksync_types::proofs::{JobCountStatistics, StuckJobs}; -use zksync_types::L1BatchNumber; - -use crate::time_utils::{duration_to_naive_time, pg_interval_from_duration}; -use crate::StorageProcessor; +use crate::{ + time_utils::{duration_to_naive_time, pg_interval_from_duration}, + StorageProcessor, +}; #[derive(Debug)] pub struct FriProofCompressorDal<'a, 'c> { @@ -38,9 +40,13 @@ impl FriProofCompressorDal<'_, '_> { fri_proof_blob_url: &str, ) { sqlx::query!( - "INSERT INTO proof_compression_jobs_fri(l1_batch_number, fri_proof_blob_url, status, created_at, updated_at) \ - VALUES ($1, $2, $3, now(), now()) \ - ON CONFLICT (l1_batch_number) DO NOTHING", + r#" + INSERT INTO + proof_compression_jobs_fri (l1_batch_number, fri_proof_blob_url, status, created_at, updated_at) + VALUES + ($1, $2, $3, NOW(), NOW()) + ON CONFLICT (l1_batch_number) DO NOTHING + "#, block_number.0 as i64, fri_proof_blob_url, ProofCompressionJobStatus::Queued.to_string(), @@ -52,15 +58,19 @@ impl FriProofCompressorDal<'_, '_> { pub async fn skip_proof_compression_job(&mut self, block_number: L1BatchNumber) { sqlx::query!( - "INSERT INTO proof_compression_jobs_fri(l1_batch_number, status, created_at, updated_at) \ - VALUES ($1, $2, now(), now()) \ - ON CONFLICT (l1_batch_number) DO NOTHING", - block_number.0 as i64, + r#" + INSERT INTO + proof_compression_jobs_fri (l1_batch_number, status, created_at, updated_at) + VALUES + ($1, $2, NOW(), NOW()) + ON CONFLICT (l1_batch_number) DO NOTHING + "#, + block_number.0 as i64, ProofCompressionJobStatus::Skipped.to_string(), - ) - .fetch_optional(self.storage.conn()) - .await - .unwrap(); + ) + .fetch_optional(self.storage.conn()) + .await + .unwrap(); } pub async fn get_next_proof_compression_job( @@ -68,20 +78,32 @@ impl FriProofCompressorDal<'_, '_> { picked_by: &str, ) -> Option { sqlx::query!( - "UPDATE proof_compression_jobs_fri \ - SET status = $1, attempts = attempts + 1, \ - updated_at = now(), processing_started_at = now(), \ - picked_by = $3 \ - WHERE l1_batch_number = ( \ - SELECT l1_batch_number \ - FROM proof_compression_jobs_fri \ - WHERE status = $2 \ - ORDER BY l1_batch_number ASC \ - LIMIT 1 \ - FOR UPDATE \ - SKIP LOCKED \ - ) \ - RETURNING proof_compression_jobs_fri.l1_batch_number", + r#" + UPDATE proof_compression_jobs_fri + SET + status = $1, + attempts = attempts + 1, + updated_at = NOW(), + processing_started_at = NOW(), + picked_by = $3 + WHERE + l1_batch_number = ( + SELECT + l1_batch_number + FROM + proof_compression_jobs_fri + WHERE + status = $2 + ORDER BY + l1_batch_number ASC + LIMIT + 1 + FOR UPDATE + SKIP LOCKED + ) + RETURNING + proof_compression_jobs_fri.l1_batch_number + "#, ProofCompressionJobStatus::InProgress.to_string(), ProofCompressionJobStatus::Queued.to_string(), picked_by, @@ -97,8 +119,14 @@ impl FriProofCompressorDal<'_, '_> { l1_batch_number: L1BatchNumber, ) -> sqlx::Result> { let attempts = sqlx::query!( - "SELECT attempts FROM proof_compression_jobs_fri \ - WHERE l1_batch_number = $1", + r#" + SELECT + attempts + FROM + proof_compression_jobs_fri + WHERE + l1_batch_number = $1 + "#, l1_batch_number.0 as i64, ) .fetch_optional(self.storage.conn()) @@ -115,9 +143,16 @@ impl FriProofCompressorDal<'_, '_> { l1_proof_blob_url: &str, ) { sqlx::query!( - "UPDATE proof_compression_jobs_fri \ - SET status = $1, updated_at = 
now(), time_taken = $2, l1_proof_blob_url = $3\ - WHERE l1_batch_number = $4", + r#" + UPDATE proof_compression_jobs_fri + SET + status = $1, + updated_at = NOW(), + time_taken = $2, + l1_proof_blob_url = $3 + WHERE + l1_batch_number = $4 + "#, ProofCompressionJobStatus::Successful.to_string(), duration_to_naive_time(time_taken), l1_proof_blob_url, @@ -134,9 +169,15 @@ impl FriProofCompressorDal<'_, '_> { block_number: L1BatchNumber, ) { sqlx::query!( - "UPDATE proof_compression_jobs_fri \ - SET status =$1, error= $2, updated_at = now() \ - WHERE l1_batch_number = $3", + r#" + UPDATE proof_compression_jobs_fri + SET + status = $1, + error = $2, + updated_at = NOW() + WHERE + l1_batch_number = $3 + "#, ProofCompressionJobStatus::Failed.to_string(), error, block_number.0 as i64 @@ -150,13 +191,23 @@ impl FriProofCompressorDal<'_, '_> { &mut self, ) -> Option<(L1BatchNumber, ProofCompressionJobStatus)> { let row = sqlx::query!( - "SELECT l1_batch_number, status \ - FROM proof_compression_jobs_fri - WHERE l1_batch_number = ( \ - SELECT MIN(l1_batch_number) \ - FROM proof_compression_jobs_fri \ - WHERE status = $1 OR status = $2 - )", + r#" + SELECT + l1_batch_number, + status + FROM + proof_compression_jobs_fri + WHERE + l1_batch_number = ( + SELECT + MIN(l1_batch_number) + FROM + proof_compression_jobs_fri + WHERE + status = $1 + OR status = $2 + ) + "#, ProofCompressionJobStatus::Successful.to_string(), ProofCompressionJobStatus::Skipped.to_string() ) @@ -174,9 +225,14 @@ impl FriProofCompressorDal<'_, '_> { pub async fn mark_proof_sent_to_server(&mut self, block_number: L1BatchNumber) { sqlx::query!( - "UPDATE proof_compression_jobs_fri \ - SET status = $1, updated_at = now() \ - WHERE l1_batch_number = $2", + r#" + UPDATE proof_compression_jobs_fri + SET + status = $1, + updated_at = NOW() + WHERE + l1_batch_number = $2 + "#, ProofCompressionJobStatus::SentToServer.to_string(), block_number.0 as i64 ) @@ -206,6 +262,29 @@ impl FriProofCompressorDal<'_, '_> { } } + pub async fn get_oldest_not_compressed_batch(&mut self) -> Option { + let result: Option = sqlx::query!( + r#" + SELECT + l1_batch_number + FROM + proof_compression_jobs_fri + WHERE + status <> 'successful' + ORDER BY + l1_batch_number ASC + LIMIT + 1 + "#, + ) + .fetch_optional(self.storage.conn()) + .await + .unwrap() + .map(|row| L1BatchNumber(row.l1_batch_number as u32)); + + result + } + pub async fn requeue_stuck_jobs( &mut self, processing_timeout: Duration, @@ -214,20 +293,40 @@ impl FriProofCompressorDal<'_, '_> { let processing_timeout = pg_interval_from_duration(processing_timeout); { sqlx::query!( - "UPDATE proof_compression_jobs_fri \ - SET status = 'queued', updated_at = now(), processing_started_at = now() \ - WHERE (status = 'in_progress' AND processing_started_at <= now() - $1::interval AND attempts < $2) \ - OR (status = 'failed' AND attempts < $2) \ - RETURNING l1_batch_number, status, attempts", + r#" + UPDATE proof_compression_jobs_fri + SET + status = 'queued', + updated_at = NOW(), + processing_started_at = NOW() + WHERE + ( + status = 'in_progress' + AND processing_started_at <= NOW() - $1::INTERVAL + AND attempts < $2 + ) + OR ( + status = 'failed' + AND attempts < $2 + ) + RETURNING + l1_batch_number, + status, + attempts + "#, &processing_timeout, max_attempts as i32, ) - .fetch_all(self.storage.conn()) - .await - .unwrap() - .into_iter() - .map(|row| StuckJobs { id: row.l1_batch_number as u64, status: row.status, attempts: row.attempts as u64 }) - .collect() + .fetch_all(self.storage.conn()) + .await + 
.unwrap() + .into_iter() + .map(|row| StuckJobs { + id: row.l1_batch_number as u64, + status: row.status, + attempts: row.attempts as u64, + }) + .collect() } } } diff --git a/core/lib/dal/src/fri_protocol_versions_dal.rs b/core/lib/dal/src/fri_protocol_versions_dal.rs index 8fbcf922d8b..d982d85771e 100644 --- a/core/lib/dal/src/fri_protocol_versions_dal.rs +++ b/core/lib/dal/src/fri_protocol_versions_dal.rs @@ -1,7 +1,6 @@ use std::convert::TryFrom; -use zksync_types::protocol_version::FriProtocolVersionId; -use zksync_types::protocol_version::L1VerifierConfig; +use zksync_types::protocol_version::{FriProtocolVersionId, L1VerifierConfig}; use crate::StorageProcessor; @@ -17,11 +16,20 @@ impl FriProtocolVersionsDal<'_, '_> { l1_verifier_config: L1VerifierConfig, ) { sqlx::query!( - "INSERT INTO prover_fri_protocol_versions \ - (id, recursion_scheduler_level_vk_hash, recursion_node_level_vk_hash, \ - recursion_leaf_level_vk_hash, recursion_circuits_set_vks_hash, created_at) \ - VALUES ($1, $2, $3, $4, $5, now()) \ - ON CONFLICT(id) DO NOTHING", + r#" + INSERT INTO + prover_fri_protocol_versions ( + id, + recursion_scheduler_level_vk_hash, + recursion_node_level_vk_hash, + recursion_leaf_level_vk_hash, + recursion_circuits_set_vks_hash, + created_at + ) + VALUES + ($1, $2, $3, $4, $5, NOW()) + ON CONFLICT (id) DO NOTHING + "#, id as i32, l1_verifier_config .recursion_scheduler_level_vk_hash @@ -49,13 +57,17 @@ impl FriProtocolVersionsDal<'_, '_> { vk_commitments: &L1VerifierConfig, ) -> Vec { sqlx::query!( - "SELECT id \ - FROM prover_fri_protocol_versions \ - WHERE recursion_circuits_set_vks_hash = $1 \ - AND recursion_leaf_level_vk_hash = $2 \ - AND recursion_node_level_vk_hash = $3 \ - AND recursion_scheduler_level_vk_hash = $4 \ - ", + r#" + SELECT + id + FROM + prover_fri_protocol_versions + WHERE + recursion_circuits_set_vks_hash = $1 + AND recursion_leaf_level_vk_hash = $2 + AND recursion_node_level_vk_hash = $3 + AND recursion_scheduler_level_vk_hash = $4 + "#, vk_commitments .params .recursion_circuits_set_vks_hash diff --git a/core/lib/dal/src/fri_prover_dal.rs b/core/lib/dal/src/fri_prover_dal.rs index 026cb783dd3..d9446182b7f 100644 --- a/core/lib/dal/src/fri_prover_dal.rs +++ b/core/lib/dal/src/fri_prover_dal.rs @@ -54,40 +54,56 @@ impl FriProverDal<'_, '_> { ) -> Option { let protocol_versions: Vec = protocol_versions.iter().map(|&id| id as i32).collect(); sqlx::query!( - " - UPDATE prover_jobs_fri - SET status = 'in_progress', attempts = attempts + 1, - updated_at = now(), processing_started_at = now(), - picked_by = $2 - WHERE id = ( - SELECT id - FROM prover_jobs_fri - WHERE status = 'queued' - AND protocol_version = ANY($1) - ORDER BY aggregation_round DESC, l1_batch_number ASC, id ASC - LIMIT 1 + r#" + UPDATE prover_jobs_fri + SET + status = 'in_progress', + attempts = attempts + 1, + updated_at = NOW(), + processing_started_at = NOW(), + picked_by = $2 + WHERE + id = ( + SELECT + id + FROM + prover_jobs_fri + WHERE + status = 'queued' + AND protocol_version = ANY ($1) + ORDER BY + aggregation_round DESC, + l1_batch_number ASC, + id ASC + LIMIT + 1 FOR UPDATE - SKIP LOCKED + SKIP LOCKED ) - RETURNING prover_jobs_fri.id, prover_jobs_fri.l1_batch_number, prover_jobs_fri.circuit_id, - prover_jobs_fri.aggregation_round, prover_jobs_fri.sequence_number, prover_jobs_fri.depth, + RETURNING + prover_jobs_fri.id, + prover_jobs_fri.l1_batch_number, + prover_jobs_fri.circuit_id, + prover_jobs_fri.aggregation_round, + prover_jobs_fri.sequence_number, + prover_jobs_fri.depth, 
prover_jobs_fri.is_node_final_proof - ", + "#, &protocol_versions[..], picked_by, ) - .fetch_optional(self.storage.conn()) - .await - .unwrap() - .map(|row| FriProverJobMetadata { - id: row.id as u32, - block_number: L1BatchNumber(row.l1_batch_number as u32), - circuit_id: row.circuit_id as u8, - aggregation_round: AggregationRound::try_from(row.aggregation_round as i32).unwrap(), - sequence_number: row.sequence_number as usize, - depth: row.depth as u16, - is_node_final_proof: row.is_node_final_proof, - }) + .fetch_optional(self.storage.conn()) + .await + .unwrap() + .map(|row| FriProverJobMetadata { + id: row.id as u32, + block_number: L1BatchNumber(row.l1_batch_number as u32), + circuit_id: row.circuit_id as u8, + aggregation_round: AggregationRound::try_from(row.aggregation_round as i32).unwrap(), + sequence_number: row.sequence_number as usize, + depth: row.depth as u16, + is_node_final_proof: row.is_node_final_proof, + }) } pub async fn get_next_job_for_circuit_id_round( @@ -106,59 +122,90 @@ impl FriProverDal<'_, '_> { .map(|tuple| tuple.aggregation_round as i16) .collect(); sqlx::query!( - " - UPDATE prover_jobs_fri - SET status = 'in_progress', attempts = attempts + 1, - processing_started_at = now(), updated_at = now(), - picked_by = $4 - WHERE id = ( - SELECT pj.id - FROM ( SELECT * FROM unnest($1::smallint[], $2::smallint[]) ) AS tuple (circuit_id, round) - JOIN LATERAL - ( - SELECT * FROM prover_jobs_fri AS pj - WHERE pj.status = 'queued' - AND pj.protocol_version = ANY($3) - AND pj.circuit_id = tuple.circuit_id AND pj.aggregation_round = tuple.round - ORDER BY pj.l1_batch_number ASC, pj.id ASC - LIMIT 1 - ) AS pj ON true - ORDER BY pj.l1_batch_number ASC, pj.aggregation_round DESC, pj.id ASC - LIMIT 1 + r#" + UPDATE prover_jobs_fri + SET + status = 'in_progress', + attempts = attempts + 1, + processing_started_at = NOW(), + updated_at = NOW(), + picked_by = $4 + WHERE + id = ( + SELECT + pj.id + FROM + ( + SELECT + * + FROM + UNNEST($1::SMALLINT[], $2::SMALLINT[]) + ) AS tuple (circuit_id, ROUND) + JOIN LATERAL ( + SELECT + * + FROM + prover_jobs_fri AS pj + WHERE + pj.status = 'queued' + AND pj.protocol_version = ANY ($3) + AND pj.circuit_id = tuple.circuit_id + AND pj.aggregation_round = tuple.round + ORDER BY + pj.l1_batch_number ASC, + pj.id ASC + LIMIT + 1 + ) AS pj ON TRUE + ORDER BY + pj.l1_batch_number ASC, + pj.aggregation_round DESC, + pj.id ASC + LIMIT + 1 FOR UPDATE - SKIP LOCKED + SKIP LOCKED ) - RETURNING prover_jobs_fri.id, prover_jobs_fri.l1_batch_number, prover_jobs_fri.circuit_id, - prover_jobs_fri.aggregation_round, prover_jobs_fri.sequence_number, prover_jobs_fri.depth, + RETURNING + prover_jobs_fri.id, + prover_jobs_fri.l1_batch_number, + prover_jobs_fri.circuit_id, + prover_jobs_fri.aggregation_round, + prover_jobs_fri.sequence_number, + prover_jobs_fri.depth, prover_jobs_fri.is_node_final_proof - ", + "#, &circuit_ids[..], &aggregation_rounds[..], &protocol_versions[..], picked_by, ) - .fetch_optional(self.storage.conn()) - .await - .unwrap() - .map(|row| FriProverJobMetadata { - id: row.id as u32, - block_number: L1BatchNumber(row.l1_batch_number as u32), - circuit_id: row.circuit_id as u8, - aggregation_round: AggregationRound::try_from(row.aggregation_round as i32).unwrap(), - sequence_number: row.sequence_number as usize, - depth: row.depth as u16, - is_node_final_proof: row.is_node_final_proof, - }) + .fetch_optional(self.storage.conn()) + .await + .unwrap() + .map(|row| FriProverJobMetadata { + id: row.id as u32, + block_number: 
L1BatchNumber(row.l1_batch_number as u32), + circuit_id: row.circuit_id as u8, + aggregation_round: AggregationRound::try_from(row.aggregation_round as i32).unwrap(), + sequence_number: row.sequence_number as usize, + depth: row.depth as u16, + is_node_final_proof: row.is_node_final_proof, + }) } pub async fn save_proof_error(&mut self, id: u32, error: String) { { sqlx::query!( - " + r#" UPDATE prover_jobs_fri - SET status = 'failed', error = $1, updated_at = now() - WHERE id = $2 - ", + SET + status = 'failed', + error = $1, + updated_at = NOW() + WHERE + id = $2 + "#, error, id as i64, ) @@ -170,7 +217,14 @@ impl FriProverDal<'_, '_> { pub async fn get_prover_job_attempts(&mut self, id: u32) -> sqlx::Result> { let attempts = sqlx::query!( - "SELECT attempts FROM prover_jobs_fri WHERE id = $1", + r#" + SELECT + attempts + FROM + prover_jobs_fri + WHERE + id = $1 + "#, id as i64, ) .fetch_optional(self.storage.conn()) @@ -187,34 +241,44 @@ impl FriProverDal<'_, '_> { blob_url: &str, ) -> FriProverJobMetadata { sqlx::query!( - " - UPDATE prover_jobs_fri - SET status = 'successful', updated_at = now(), time_taken = $1, proof_blob_url=$2 - WHERE id = $3 - RETURNING prover_jobs_fri.id, prover_jobs_fri.l1_batch_number, prover_jobs_fri.circuit_id, - prover_jobs_fri.aggregation_round, prover_jobs_fri.sequence_number, prover_jobs_fri.depth, + r#" + UPDATE prover_jobs_fri + SET + status = 'successful', + updated_at = NOW(), + time_taken = $1, + proof_blob_url = $2 + WHERE + id = $3 + RETURNING + prover_jobs_fri.id, + prover_jobs_fri.l1_batch_number, + prover_jobs_fri.circuit_id, + prover_jobs_fri.aggregation_round, + prover_jobs_fri.sequence_number, + prover_jobs_fri.depth, prover_jobs_fri.is_node_final_proof - ", + "#, duration_to_naive_time(time_taken), blob_url, id as i64, ) - .instrument("save_fri_proof") - .report_latency() - .with_arg("id", &id) - .fetch_optional(self.storage.conn()) - .await - .unwrap() - .map(|row| FriProverJobMetadata { - id: row.id as u32, - block_number: L1BatchNumber(row.l1_batch_number as u32), - circuit_id: row.circuit_id as u8, - aggregation_round: AggregationRound::try_from(row.aggregation_round as i32).unwrap(), - sequence_number: row.sequence_number as usize, - depth: row.depth as u16, - is_node_final_proof: row.is_node_final_proof, - }) - .unwrap() + .instrument("save_fri_proof") + .report_latency() + .with_arg("id", &id) + .fetch_optional(self.storage.conn()) + .await + .unwrap() + .map(|row| FriProverJobMetadata { + id: row.id as u32, + block_number: L1BatchNumber(row.l1_batch_number as u32), + circuit_id: row.circuit_id as u8, + aggregation_round: AggregationRound::try_from(row.aggregation_round as i32).unwrap(), + sequence_number: row.sequence_number as usize, + depth: row.depth as u16, + is_node_final_proof: row.is_node_final_proof, + }) + .unwrap() } pub async fn requeue_stuck_jobs( @@ -225,28 +289,54 @@ impl FriProverDal<'_, '_> { let processing_timeout = pg_interval_from_duration(processing_timeout); { sqlx::query!( - " + r#" UPDATE prover_jobs_fri - SET status = 'queued', updated_at = now(), processing_started_at = now() - WHERE id in ( - SELECT id - FROM prover_jobs_fri - WHERE (status = 'in_progress' AND processing_started_at <= now() - $1::interval AND attempts < $2) - OR (status = 'in_gpu_proof' AND processing_started_at <= now() - $1::interval AND attempts < $2) - OR (status = 'failed' AND attempts < $2) - FOR UPDATE SKIP LOCKED - ) - RETURNING id, status, attempts - ", + SET + status = 'queued', + updated_at = NOW(), + processing_started_at = NOW() 
+ WHERE + id IN ( + SELECT + id + FROM + prover_jobs_fri + WHERE + ( + status = 'in_progress' + AND processing_started_at <= NOW() - $1::INTERVAL + AND attempts < $2 + ) + OR ( + status = 'in_gpu_proof' + AND processing_started_at <= NOW() - $1::INTERVAL + AND attempts < $2 + ) + OR ( + status = 'failed' + AND attempts < $2 + ) + FOR UPDATE + SKIP LOCKED + ) + RETURNING + id, + status, + attempts + "#, &processing_timeout, max_attempts as i32, ) - .fetch_all(self.storage.conn()) - .await - .unwrap() - .into_iter() - .map(|row| StuckJobs { id: row.id as u64, status: row.status, attempts: row.attempts as u64 }) - .collect() + .fetch_all(self.storage.conn()) + .await + .unwrap() + .into_iter() + .map(|row| StuckJobs { + id: row.id as u64, + status: row.status, + attempts: row.attempts as u64, + }) + .collect() } } @@ -263,12 +353,28 @@ impl FriProverDal<'_, '_> { protocol_version_id: FriProtocolVersionId, ) { sqlx::query!( - " - INSERT INTO prover_jobs_fri (l1_batch_number, circuit_id, circuit_blob_url, aggregation_round, sequence_number, depth, is_node_final_proof, protocol_version, status, created_at, updated_at) - VALUES ($1, $2, $3, $4, $5, $6, $7, $8, 'queued', now(), now()) - ON CONFLICT(l1_batch_number, aggregation_round, circuit_id, depth, sequence_number) - DO UPDATE SET updated_at=now() - ", + r#" + INSERT INTO + prover_jobs_fri ( + l1_batch_number, + circuit_id, + circuit_blob_url, + aggregation_round, + sequence_number, + depth, + is_node_final_proof, + protocol_version, + status, + created_at, + updated_at + ) + VALUES + ($1, $2, $3, $4, $5, $6, $7, $8, 'queued', NOW(), NOW()) + ON CONFLICT (l1_batch_number, aggregation_round, circuit_id, depth, sequence_number) DO + UPDATE + SET + updated_at = NOW() + "#, l1_batch_number.0 as i64, circuit_id as i16, circuit_blob_url, @@ -287,24 +393,45 @@ impl FriProverDal<'_, '_> { { sqlx::query!( r#" - SELECT COUNT(*) as "count!", circuit_id as "circuit_id!", aggregation_round as "aggregation_round!", status as "status!" - FROM prover_jobs_fri - WHERE status <> 'skipped' and status <> 'successful' - GROUP BY circuit_id, aggregation_round, status + SELECT + COUNT(*) AS "count!", + circuit_id AS "circuit_id!", + aggregation_round AS "aggregation_round!", + status AS "status!" 
+ FROM + prover_jobs_fri + WHERE + status <> 'skipped' + AND status <> 'successful' + GROUP BY + circuit_id, + aggregation_round, + status "# ) - .fetch_all(self.storage.conn()) - .await - .unwrap() - .into_iter() - .map(|row| (row.circuit_id, row.aggregation_round, row.status, row.count as usize)) - .fold(HashMap::new(), |mut acc, (circuit_id, aggregation_round, status, value)| { - let stats = acc.entry((circuit_id as u8, aggregation_round as u8)).or_insert(JobCountStatistics { - queued: 0, - in_progress: 0, - failed: 0, - successful: 0, - }); + .fetch_all(self.storage.conn()) + .await + .unwrap() + .into_iter() + .map(|row| { + ( + row.circuit_id, + row.aggregation_round, + row.status, + row.count as usize, + ) + }) + .fold( + HashMap::new(), + |mut acc, (circuit_id, aggregation_round, status, value)| { + let stats = acc + .entry((circuit_id as u8, aggregation_round as u8)) + .or_insert(JobCountStatistics { + queued: 0, + in_progress: 0, + failed: 0, + successful: 0, + }); match status.as_ref() { "queued" => stats.queued = value, "in_progress" => stats.in_progress = value, @@ -313,7 +440,8 @@ impl FriProverDal<'_, '_> { _ => (), } acc - }) + }, + ) } } @@ -321,10 +449,17 @@ impl FriProverDal<'_, '_> { { sqlx::query!( r#" - SELECT MIN(l1_batch_number) as "l1_batch_number!", circuit_id, aggregation_round - FROM prover_jobs_fri - WHERE status IN('queued', 'in_gpu_proof', 'in_progress', 'failed') - GROUP BY circuit_id, aggregation_round + SELECT + MIN(l1_batch_number) AS "l1_batch_number!", + circuit_id, + aggregation_round + FROM + prover_jobs_fri + WHERE + status IN ('queued', 'in_gpu_proof', 'in_progress', 'failed') + GROUP BY + circuit_id, + aggregation_round "# ) .fetch_all(self.storage.conn()) @@ -341,11 +476,43 @@ impl FriProverDal<'_, '_> { } } + pub async fn min_unproved_l1_batch_number_for_aggregation_round( + &mut self, + aggregation_round: AggregationRound, + ) -> Option { + sqlx::query!( + r#" + SELECT + l1_batch_number + FROM + prover_jobs_fri + WHERE + status <> 'skipped' + AND status <> 'successful' + AND aggregation_round = $1 + ORDER BY + l1_batch_number ASC + LIMIT + 1 + "#, + aggregation_round as i16 + ) + .fetch_optional(self.storage.conn()) + .await + .unwrap() + .map(|row| L1BatchNumber(row.l1_batch_number as u32)) + } + pub async fn update_status(&mut self, id: u32, status: &str) { sqlx::query!( - "UPDATE prover_jobs_fri \ - SET status = $1, updated_at = now() \ - WHERE id = $2", + r#" + UPDATE prover_jobs_fri + SET + status = $1, + updated_at = NOW() + WHERE + id = $2 + "#, status, id as i64, ) @@ -356,9 +523,14 @@ impl FriProverDal<'_, '_> { pub async fn save_successful_sent_proof(&mut self, l1_batch_number: L1BatchNumber) { sqlx::query!( - "UPDATE prover_jobs_fri \ - SET status = 'sent_to_server', updated_at = now() \ - WHERE l1_batch_number = $1", + r#" + UPDATE prover_jobs_fri + SET + status = 'sent_to_server', + updated_at = NOW() + WHERE + l1_batch_number = $1 + "#, l1_batch_number.0 as i64, ) .execute(self.storage.conn()) @@ -371,10 +543,16 @@ impl FriProverDal<'_, '_> { l1_batch_number: L1BatchNumber, ) -> Option { sqlx::query!( - "SELECT id from prover_jobs_fri \ - WHERE l1_batch_number = $1 \ - AND status = 'successful' \ - AND aggregation_round = $2", + r#" + SELECT + id + FROM + prover_jobs_fri + WHERE + l1_batch_number = $1 + AND status = 'successful' + AND aggregation_round = $2 + "#, l1_batch_number.0 as i64, AggregationRound::Scheduler as i16, ) diff --git a/core/lib/dal/src/fri_scheduler_dependency_tracker_dal.rs 
b/core/lib/dal/src/fri_scheduler_dependency_tracker_dal.rs index 3844f5777ce..e123ab7064b 100644 --- a/core/lib/dal/src/fri_scheduler_dependency_tracker_dal.rs +++ b/core/lib/dal/src/fri_scheduler_dependency_tracker_dal.rs @@ -1,6 +1,7 @@ -use crate::StorageProcessor; use zksync_types::L1BatchNumber; +use crate::StorageProcessor; + #[derive(Debug)] pub struct FriSchedulerDependencyTrackerDal<'a, 'c> { pub storage: &'a mut StorageProcessor<'c>, @@ -10,26 +11,33 @@ impl FriSchedulerDependencyTrackerDal<'_, '_> { pub async fn get_l1_batches_ready_for_queuing(&mut self) -> Vec { sqlx::query!( r#" - UPDATE scheduler_dependency_tracker_fri - SET status='queuing' - WHERE l1_batch_number IN - (SELECT l1_batch_number FROM scheduler_dependency_tracker_fri - WHERE status != 'queued' - AND circuit_1_final_prover_job_id IS NOT NULL - AND circuit_2_final_prover_job_id IS NOT NULL - AND circuit_3_final_prover_job_id IS NOT NULL - AND circuit_4_final_prover_job_id IS NOT NULL - AND circuit_5_final_prover_job_id IS NOT NULL - AND circuit_6_final_prover_job_id IS NOT NULL - AND circuit_7_final_prover_job_id IS NOT NULL - AND circuit_8_final_prover_job_id IS NOT NULL - AND circuit_9_final_prover_job_id IS NOT NULL - AND circuit_10_final_prover_job_id IS NOT NULL - AND circuit_11_final_prover_job_id IS NOT NULL - AND circuit_12_final_prover_job_id IS NOT NULL - AND circuit_13_final_prover_job_id IS NOT NULL - ) - RETURNING l1_batch_number; + UPDATE scheduler_dependency_tracker_fri + SET + status = 'queuing' + WHERE + l1_batch_number IN ( + SELECT + l1_batch_number + FROM + scheduler_dependency_tracker_fri + WHERE + status != 'queued' + AND circuit_1_final_prover_job_id IS NOT NULL + AND circuit_2_final_prover_job_id IS NOT NULL + AND circuit_3_final_prover_job_id IS NOT NULL + AND circuit_4_final_prover_job_id IS NOT NULL + AND circuit_5_final_prover_job_id IS NOT NULL + AND circuit_6_final_prover_job_id IS NOT NULL + AND circuit_7_final_prover_job_id IS NOT NULL + AND circuit_8_final_prover_job_id IS NOT NULL + AND circuit_9_final_prover_job_id IS NOT NULL + AND circuit_10_final_prover_job_id IS NOT NULL + AND circuit_11_final_prover_job_id IS NOT NULL + AND circuit_12_final_prover_job_id IS NOT NULL + AND circuit_13_final_prover_job_id IS NOT NULL + ) + RETURNING + l1_batch_number; "#, ) .fetch_all(self.storage.conn()) @@ -43,10 +51,12 @@ impl FriSchedulerDependencyTrackerDal<'_, '_> { pub async fn mark_l1_batches_queued(&mut self, l1_batches: Vec) { sqlx::query!( r#" - UPDATE scheduler_dependency_tracker_fri - SET status='queued' - WHERE l1_batch_number = ANY($1) - "#, + UPDATE scheduler_dependency_tracker_fri + SET + status = 'queued' + WHERE + l1_batch_number = ANY ($1) + "#, &l1_batches[..] 
) .execute(self.storage.conn()) @@ -82,9 +92,13 @@ impl FriSchedulerDependencyTrackerDal<'_, '_> { ) -> [u32; 13] { sqlx::query!( r#" - SELECT * FROM scheduler_dependency_tracker_fri - WHERE l1_batch_number = $1 - "#, + SELECT + * + FROM + scheduler_dependency_tracker_fri + WHERE + l1_batch_number = $1 + "#, l1_batch_number.0 as i64, ) .fetch_all(self.storage.conn()) diff --git a/core/lib/dal/src/fri_witness_generator_dal.rs b/core/lib/dal/src/fri_witness_generator_dal.rs index c05dd3b3d1a..874ad8d0368 100644 --- a/core/lib/dal/src/fri_witness_generator_dal.rs +++ b/core/lib/dal/src/fri_witness_generator_dal.rs @@ -1,14 +1,12 @@ -use sqlx::Row; - -use std::convert::TryFrom; -use std::{collections::HashMap, time::Duration}; +use std::{collections::HashMap, convert::TryFrom, time::Duration}; -use zksync_types::protocol_version::FriProtocolVersionId; +use sqlx::Row; use zksync_types::{ proofs::{ AggregationRound, JobCountStatistics, LeafAggregationJobMetadata, NodeAggregationJobMetadata, StuckJobs, }, + protocol_version::FriProtocolVersionId, L1BatchNumber, }; @@ -45,16 +43,27 @@ impl FriWitnessGeneratorDal<'_, '_> { protocol_version_id: FriProtocolVersionId, ) { sqlx::query!( - "INSERT INTO witness_inputs_fri(l1_batch_number, merkle_tree_paths_blob_url, protocol_version, status, created_at, updated_at) \ - VALUES ($1, $2, $3, 'queued', now(), now()) \ - ON CONFLICT (l1_batch_number) DO NOTHING", - block_number.0 as i64, - object_key, - protocol_version_id as i32, - ) - .fetch_optional(self.storage.conn()) - .await - .unwrap(); + r#" + INSERT INTO + witness_inputs_fri ( + l1_batch_number, + merkle_tree_paths_blob_url, + protocol_version, + status, + created_at, + updated_at + ) + VALUES + ($1, $2, $3, 'queued', NOW(), NOW()) + ON CONFLICT (l1_batch_number) DO NOTHING + "#, + block_number.0 as i64, + object_key, + protocol_version_id as i32, + ) + .fetch_optional(self.storage.conn()) + .await + .unwrap(); } pub async fn get_next_basic_circuit_witness_job( @@ -65,24 +74,34 @@ impl FriWitnessGeneratorDal<'_, '_> { ) -> Option { let protocol_versions: Vec = protocol_versions.iter().map(|&id| id as i32).collect(); sqlx::query!( - " - UPDATE witness_inputs_fri - SET status = 'in_progress', attempts = attempts + 1, - updated_at = now(), processing_started_at = now(), - picked_by = $3 - WHERE l1_batch_number = ( - SELECT l1_batch_number - FROM witness_inputs_fri - WHERE l1_batch_number <= $1 - AND status = 'queued' - AND protocol_version = ANY($2) - ORDER BY l1_batch_number ASC - LIMIT 1 + r#" + UPDATE witness_inputs_fri + SET + status = 'in_progress', + attempts = attempts + 1, + updated_at = NOW(), + processing_started_at = NOW(), + picked_by = $3 + WHERE + l1_batch_number = ( + SELECT + l1_batch_number + FROM + witness_inputs_fri + WHERE + l1_batch_number <= $1 + AND status = 'queued' + AND protocol_version = ANY ($2) + ORDER BY + l1_batch_number ASC + LIMIT + 1 FOR UPDATE - SKIP LOCKED + SKIP LOCKED ) - RETURNING witness_inputs_fri.* - ", + RETURNING + witness_inputs_fri.* + "#, last_l1_batch_to_process as i64, &protocol_versions[..], picked_by, @@ -98,8 +117,14 @@ impl FriWitnessGeneratorDal<'_, '_> { l1_batch_number: L1BatchNumber, ) -> sqlx::Result> { let attempts = sqlx::query!( - "SELECT attempts FROM witness_inputs_fri \ - WHERE l1_batch_number = $1", + r#" + SELECT + attempts + FROM + witness_inputs_fri + WHERE + l1_batch_number = $1 + "#, l1_batch_number.0 as i64, ) .fetch_optional(self.storage.conn()) @@ -115,10 +140,14 @@ impl FriWitnessGeneratorDal<'_, '_> { block_number: 
L1BatchNumber, ) { sqlx::query!( - " - UPDATE witness_inputs_fri SET status =$1, updated_at = now() - WHERE l1_batch_number = $2 - ", + r#" + UPDATE witness_inputs_fri + SET + status = $1, + updated_at = NOW() + WHERE + l1_batch_number = $2 + "#, format!("{}", status), block_number.0 as i64 ) @@ -133,11 +162,15 @@ impl FriWitnessGeneratorDal<'_, '_> { time_taken: Duration, ) { sqlx::query!( - " - UPDATE witness_inputs_fri - SET status = 'successful', updated_at = now(), time_taken = $1 - WHERE l1_batch_number = $2 - ", + r#" + UPDATE witness_inputs_fri + SET + status = 'successful', + updated_at = NOW(), + time_taken = $1 + WHERE + l1_batch_number = $2 + "#, duration_to_naive_time(time_taken), block_number.0 as i64 ) @@ -148,10 +181,15 @@ impl FriWitnessGeneratorDal<'_, '_> { pub async fn mark_witness_job_failed(&mut self, error: &str, block_number: L1BatchNumber) { sqlx::query!( - " - UPDATE witness_inputs_fri SET status ='failed', error= $1, updated_at = now() - WHERE l1_batch_number = $2 - ", + r#" + UPDATE witness_inputs_fri + SET + status = 'failed', + error = $1, + updated_at = NOW() + WHERE + l1_batch_number = $2 + "#, error, block_number.0 as i64 ) @@ -162,11 +200,15 @@ impl FriWitnessGeneratorDal<'_, '_> { pub async fn mark_leaf_aggregation_job_failed(&mut self, error: &str, id: u32) { sqlx::query!( - " - UPDATE leaf_aggregation_witness_jobs_fri - SET status ='failed', error= $1, updated_at = now() - WHERE id = $2 - ", + r#" + UPDATE leaf_aggregation_witness_jobs_fri + SET + status = 'failed', + error = $1, + updated_at = NOW() + WHERE + id = $2 + "#, error, id as i64 ) @@ -177,11 +219,15 @@ impl FriWitnessGeneratorDal<'_, '_> { pub async fn mark_leaf_aggregation_as_successful(&mut self, id: u32, time_taken: Duration) { sqlx::query!( - " - UPDATE leaf_aggregation_witness_jobs_fri - SET status = 'successful', updated_at = now(), time_taken = $1 - WHERE id = $2 - ", + r#" + UPDATE leaf_aggregation_witness_jobs_fri + SET + status = 'successful', + updated_at = NOW(), + time_taken = $1 + WHERE + id = $2 + "#, duration_to_naive_time(time_taken), id as i64 ) @@ -197,23 +243,45 @@ impl FriWitnessGeneratorDal<'_, '_> { ) -> Vec { let processing_timeout = pg_interval_from_duration(processing_timeout); sqlx::query!( - " - UPDATE witness_inputs_fri - SET status = 'queued', updated_at = now(), processing_started_at = now() - WHERE (status = 'in_progress' AND processing_started_at <= now() - $1::interval AND attempts < $2) - OR (status = 'in_gpu_proof' AND processing_started_at <= now() - $1::interval AND attempts < $2) - OR (status = 'failed' AND attempts < $2) - RETURNING l1_batch_number, status, attempts - ", - &processing_timeout, - max_attempts as i32, - ) - .fetch_all(self.storage.conn()) - .await - .unwrap() - .into_iter() - .map(|row| StuckJobs { id: row.l1_batch_number as u64, status: row.status, attempts: row.attempts as u64 }) - .collect() + r#" + UPDATE witness_inputs_fri + SET + status = 'queued', + updated_at = NOW(), + processing_started_at = NOW() + WHERE + ( + status = 'in_progress' + AND processing_started_at <= NOW() - $1::INTERVAL + AND attempts < $2 + ) + OR ( + status = 'in_gpu_proof' + AND processing_started_at <= NOW() - $1::INTERVAL + AND attempts < $2 + ) + OR ( + status = 'failed' + AND attempts < $2 + ) + RETURNING + l1_batch_number, + status, + attempts + "#, + &processing_timeout, + max_attempts as i32, + ) + .fetch_all(self.storage.conn()) + .await + .unwrap() + .into_iter() + .map(|row| StuckJobs { + id: row.l1_batch_number as u64, + status: row.status, + 
attempts: row.attempts as u64, + }) + .collect() } pub async fn create_aggregation_jobs( @@ -230,13 +298,25 @@ impl FriWitnessGeneratorDal<'_, '_> { closed_form_inputs_and_urls { sqlx::query!( - " - INSERT INTO leaf_aggregation_witness_jobs_fri - (l1_batch_number, circuit_id, closed_form_inputs_blob_url, number_of_basic_circuits, protocol_version, status, created_at, updated_at) - VALUES ($1, $2, $3, $4, $5, 'waiting_for_proofs', now(), now()) - ON CONFLICT(l1_batch_number, circuit_id) - DO UPDATE SET updated_at=now() - ", + r#" + INSERT INTO + leaf_aggregation_witness_jobs_fri ( + l1_batch_number, + circuit_id, + closed_form_inputs_blob_url, + number_of_basic_circuits, + protocol_version, + status, + created_at, + updated_at + ) + VALUES + ($1, $2, $3, $4, $5, 'waiting_for_proofs', NOW(), NOW()) + ON CONFLICT (l1_batch_number, circuit_id) DO + UPDATE + SET + updated_at = NOW() + "#, block_number.0 as i64, *circuit_id as i16, closed_form_inputs_url, @@ -259,13 +339,23 @@ impl FriWitnessGeneratorDal<'_, '_> { } sqlx::query!( - " - INSERT INTO scheduler_witness_jobs_fri - (l1_batch_number, scheduler_partial_input_blob_url, protocol_version, status, created_at, updated_at) - VALUES ($1, $2, $3, 'waiting_for_proofs', now(), now()) - ON CONFLICT(l1_batch_number) - DO UPDATE SET updated_at=now() - ", + r#" + INSERT INTO + scheduler_witness_jobs_fri ( + l1_batch_number, + scheduler_partial_input_blob_url, + protocol_version, + status, + created_at, + updated_at + ) + VALUES + ($1, $2, $3, 'waiting_for_proofs', NOW(), NOW()) + ON CONFLICT (l1_batch_number) DO + UPDATE + SET + updated_at = NOW() + "#, block_number.0 as i64, scheduler_partial_input_blob_url, protocol_version_id as i32, @@ -275,13 +365,16 @@ impl FriWitnessGeneratorDal<'_, '_> { .unwrap(); sqlx::query!( - " - INSERT INTO scheduler_dependency_tracker_fri - (l1_batch_number, status, created_at, updated_at) - VALUES ($1, 'waiting_for_proofs', now(), now()) - ON CONFLICT(l1_batch_number) - DO UPDATE SET updated_at=now() - ", + r#" + INSERT INTO + scheduler_dependency_tracker_fri (l1_batch_number, status, created_at, updated_at) + VALUES + ($1, 'waiting_for_proofs', NOW(), NOW()) + ON CONFLICT (l1_batch_number) DO + UPDATE + SET + updated_at = NOW() + "#, block_number.0 as i64, ) .execute(self.storage.conn()) @@ -299,23 +392,34 @@ impl FriWitnessGeneratorDal<'_, '_> { ) -> Option { let protocol_versions: Vec = protocol_versions.iter().map(|&id| id as i32).collect(); let row = sqlx::query!( - " - UPDATE leaf_aggregation_witness_jobs_fri - SET status = 'in_progress', attempts = attempts + 1, - updated_at = now(), processing_started_at = now(), - picked_by = $2 - WHERE id = ( - SELECT id - FROM leaf_aggregation_witness_jobs_fri - WHERE status = 'queued' - AND protocol_version = ANY($1) - ORDER BY l1_batch_number ASC, id ASC - LIMIT 1 + r#" + UPDATE leaf_aggregation_witness_jobs_fri + SET + status = 'in_progress', + attempts = attempts + 1, + updated_at = NOW(), + processing_started_at = NOW(), + picked_by = $2 + WHERE + id = ( + SELECT + id + FROM + leaf_aggregation_witness_jobs_fri + WHERE + status = 'queued' + AND protocol_version = ANY ($1) + ORDER BY + l1_batch_number ASC, + id ASC + LIMIT + 1 FOR UPDATE - SKIP LOCKED + SKIP LOCKED ) - RETURNING leaf_aggregation_witness_jobs_fri.* - ", + RETURNING + leaf_aggregation_witness_jobs_fri.* + "#, &protocol_versions[..], picked_by, ) @@ -345,8 +449,14 @@ impl FriWitnessGeneratorDal<'_, '_> { id: u32, ) -> sqlx::Result> { let attempts = sqlx::query!( - "SELECT attempts FROM 
leaf_aggregation_witness_jobs_fri \ - WHERE id = $1", + r#" + SELECT + attempts + FROM + leaf_aggregation_witness_jobs_fri + WHERE + id = $1 + "#, id as i64, ) .fetch_optional(self.storage.conn()) @@ -365,15 +475,20 @@ impl FriWitnessGeneratorDal<'_, '_> { depth: u16, ) -> Vec { sqlx::query!( - " - SELECT id from prover_jobs_fri - WHERE l1_batch_number = $1 - AND circuit_id = $2 - AND aggregation_round = $3 - AND depth = $4 - AND status = 'successful' - ORDER BY sequence_number ASC; - ", + r#" + SELECT + id + FROM + prover_jobs_fri + WHERE + l1_batch_number = $1 + AND circuit_id = $2 + AND aggregation_round = $3 + AND depth = $4 + AND status = 'successful' + ORDER BY + sequence_number ASC; + "#, block_number.0 as i64, circuit_id as i16, round as i16, @@ -391,20 +506,32 @@ impl FriWitnessGeneratorDal<'_, '_> { sqlx::query!( r#" UPDATE leaf_aggregation_witness_jobs_fri - SET status='queued' - WHERE (l1_batch_number, circuit_id) IN - (SELECT prover_jobs_fri.l1_batch_number, prover_jobs_fri.circuit_id - FROM prover_jobs_fri - JOIN leaf_aggregation_witness_jobs_fri lawj ON - prover_jobs_fri.l1_batch_number = lawj.l1_batch_number - AND prover_jobs_fri.circuit_id = lawj.circuit_id - WHERE lawj.status = 'waiting_for_proofs' - AND prover_jobs_fri.status = 'successful' - AND prover_jobs_fri.aggregation_round = 0 - GROUP BY prover_jobs_fri.l1_batch_number, prover_jobs_fri.circuit_id, lawj.number_of_basic_circuits - HAVING COUNT(*) = lawj.number_of_basic_circuits) - RETURNING l1_batch_number, circuit_id; - "#, + SET + status = 'queued' + WHERE + (l1_batch_number, circuit_id) IN ( + SELECT + prover_jobs_fri.l1_batch_number, + prover_jobs_fri.circuit_id + FROM + prover_jobs_fri + JOIN leaf_aggregation_witness_jobs_fri lawj ON prover_jobs_fri.l1_batch_number = lawj.l1_batch_number + AND prover_jobs_fri.circuit_id = lawj.circuit_id + WHERE + lawj.status = 'waiting_for_proofs' + AND prover_jobs_fri.status = 'successful' + AND prover_jobs_fri.aggregation_round = 0 + GROUP BY + prover_jobs_fri.l1_batch_number, + prover_jobs_fri.circuit_id, + lawj.number_of_basic_circuits + HAVING + COUNT(*) = lawj.number_of_basic_circuits + ) + RETURNING + l1_batch_number, + circuit_id; + "#, ) .fetch_all(self.storage.conn()) .await @@ -423,13 +550,17 @@ impl FriWitnessGeneratorDal<'_, '_> { url: String, ) { sqlx::query!( - " - UPDATE node_aggregation_witness_jobs_fri - SET aggregations_url = $1, number_of_dependent_jobs = $5, updated_at = now() - WHERE l1_batch_number = $2 + r#" + UPDATE node_aggregation_witness_jobs_fri + SET + aggregations_url = $1, + number_of_dependent_jobs = $5, + updated_at = NOW() + WHERE + l1_batch_number = $2 AND circuit_id = $3 AND depth = $4 - ", + "#, url, block_number.0 as i64, circuit_id as i16, @@ -448,23 +579,35 @@ impl FriWitnessGeneratorDal<'_, '_> { ) -> Option { let protocol_versions: Vec = protocol_versions.iter().map(|&id| id as i32).collect(); let row = sqlx::query!( - " - UPDATE node_aggregation_witness_jobs_fri - SET status = 'in_progress', attempts = attempts + 1, - updated_at = now(), processing_started_at = now(), - picked_by = $2 - WHERE id = ( - SELECT id - FROM node_aggregation_witness_jobs_fri - WHERE status = 'queued' - AND protocol_version = ANY($1) - ORDER BY l1_batch_number ASC, depth ASC, id ASC - LIMIT 1 + r#" + UPDATE node_aggregation_witness_jobs_fri + SET + status = 'in_progress', + attempts = attempts + 1, + updated_at = NOW(), + processing_started_at = NOW(), + picked_by = $2 + WHERE + id = ( + SELECT + id + FROM + node_aggregation_witness_jobs_fri + WHERE + status 
= 'queued' + AND protocol_version = ANY ($1) + ORDER BY + l1_batch_number ASC, + depth ASC, + id ASC + LIMIT + 1 FOR UPDATE - SKIP LOCKED + SKIP LOCKED ) - RETURNING node_aggregation_witness_jobs_fri.* - ", + RETURNING + node_aggregation_witness_jobs_fri.* + "#, &protocol_versions[..], picked_by, ) @@ -498,8 +641,14 @@ impl FriWitnessGeneratorDal<'_, '_> { id: u32, ) -> sqlx::Result> { let attempts = sqlx::query!( - "SELECT attempts FROM node_aggregation_witness_jobs_fri \ - WHERE id = $1", + r#" + SELECT + attempts + FROM + node_aggregation_witness_jobs_fri + WHERE + id = $1 + "#, id as i64, ) .fetch_optional(self.storage.conn()) @@ -512,11 +661,15 @@ impl FriWitnessGeneratorDal<'_, '_> { pub async fn mark_node_aggregation_job_failed(&mut self, error: &str, id: u32) { sqlx::query!( - " - UPDATE node_aggregation_witness_jobs_fri - SET status ='failed', error= $1, updated_at = now() - WHERE id = $2 - ", + r#" + UPDATE node_aggregation_witness_jobs_fri + SET + status = 'failed', + error = $1, + updated_at = NOW() + WHERE + id = $2 + "#, error, id as i64 ) @@ -527,11 +680,15 @@ impl FriWitnessGeneratorDal<'_, '_> { pub async fn mark_node_aggregation_as_successful(&mut self, id: u32, time_taken: Duration) { sqlx::query!( - " - UPDATE node_aggregation_witness_jobs_fri - SET status = 'successful', updated_at = now(), time_taken = $1 - WHERE id = $2 - ", + r#" + UPDATE node_aggregation_witness_jobs_fri + SET + status = 'successful', + updated_at = NOW(), + time_taken = $1 + WHERE + id = $2 + "#, duration_to_naive_time(time_taken), id as i64 ) @@ -550,42 +707,73 @@ impl FriWitnessGeneratorDal<'_, '_> { protocol_version_id: FriProtocolVersionId, ) { sqlx::query!( - "INSERT INTO node_aggregation_witness_jobs_fri (l1_batch_number, circuit_id, depth, aggregations_url, number_of_dependent_jobs, protocol_version, status, created_at, updated_at) - VALUES ($1, $2, $3, $4, $5, $6, 'waiting_for_proofs', now(), now()) - ON CONFLICT(l1_batch_number, circuit_id, depth) - DO UPDATE SET updated_at=now()", - block_number.0 as i64, - circuit_id as i16, - depth as i32, - aggregations_url, - number_of_dependent_jobs, - protocol_version_id as i32, - ) - .fetch_optional(self.storage.conn()) - .await - .unwrap(); + r#" + INSERT INTO + node_aggregation_witness_jobs_fri ( + l1_batch_number, + circuit_id, + depth, + aggregations_url, + number_of_dependent_jobs, + protocol_version, + status, + created_at, + updated_at + ) + VALUES + ($1, $2, $3, $4, $5, $6, 'waiting_for_proofs', NOW(), NOW()) + ON CONFLICT (l1_batch_number, circuit_id, depth) DO + UPDATE + SET + updated_at = NOW() + "#, + block_number.0 as i64, + circuit_id as i16, + depth as i32, + aggregations_url, + number_of_dependent_jobs, + protocol_version_id as i32, + ) + .fetch_optional(self.storage.conn()) + .await + .unwrap(); } pub async fn move_depth_zero_node_aggregation_jobs(&mut self) -> Vec<(i64, u8, u16)> { sqlx::query!( r#" UPDATE node_aggregation_witness_jobs_fri - SET status='queued' - WHERE (l1_batch_number, circuit_id, depth) IN - (SELECT prover_jobs_fri.l1_batch_number, prover_jobs_fri.circuit_id, prover_jobs_fri.depth - FROM prover_jobs_fri - JOIN node_aggregation_witness_jobs_fri nawj ON - prover_jobs_fri.l1_batch_number = nawj.l1_batch_number - AND prover_jobs_fri.circuit_id = nawj.circuit_id - AND prover_jobs_fri.depth = nawj.depth - WHERE nawj.status = 'waiting_for_proofs' - AND prover_jobs_fri.status = 'successful' - AND prover_jobs_fri.aggregation_round = 1 - AND prover_jobs_fri.depth = 0 - GROUP BY prover_jobs_fri.l1_batch_number, 
prover_jobs_fri.circuit_id, prover_jobs_fri.depth, nawj.number_of_dependent_jobs - HAVING COUNT(*) = nawj.number_of_dependent_jobs) - RETURNING l1_batch_number, circuit_id, depth; - "#, + SET + status = 'queued' + WHERE + (l1_batch_number, circuit_id, depth) IN ( + SELECT + prover_jobs_fri.l1_batch_number, + prover_jobs_fri.circuit_id, + prover_jobs_fri.depth + FROM + prover_jobs_fri + JOIN node_aggregation_witness_jobs_fri nawj ON prover_jobs_fri.l1_batch_number = nawj.l1_batch_number + AND prover_jobs_fri.circuit_id = nawj.circuit_id + AND prover_jobs_fri.depth = nawj.depth + WHERE + nawj.status = 'waiting_for_proofs' + AND prover_jobs_fri.status = 'successful' + AND prover_jobs_fri.aggregation_round = 1 + AND prover_jobs_fri.depth = 0 + GROUP BY + prover_jobs_fri.l1_batch_number, + prover_jobs_fri.circuit_id, + prover_jobs_fri.depth, + nawj.number_of_dependent_jobs + HAVING + COUNT(*) = nawj.number_of_dependent_jobs + ) + RETURNING + l1_batch_number, + circuit_id, + depth; + "#, ) .fetch_all(self.storage.conn()) .await @@ -599,21 +787,36 @@ impl FriWitnessGeneratorDal<'_, '_> { sqlx::query!( r#" UPDATE node_aggregation_witness_jobs_fri - SET status='queued' - WHERE (l1_batch_number, circuit_id, depth) IN - (SELECT prover_jobs_fri.l1_batch_number, prover_jobs_fri.circuit_id, prover_jobs_fri.depth - FROM prover_jobs_fri - JOIN node_aggregation_witness_jobs_fri nawj ON - prover_jobs_fri.l1_batch_number = nawj.l1_batch_number - AND prover_jobs_fri.circuit_id = nawj.circuit_id - AND prover_jobs_fri.depth = nawj.depth - WHERE nawj.status = 'waiting_for_proofs' - AND prover_jobs_fri.status = 'successful' - AND prover_jobs_fri.aggregation_round = 2 - GROUP BY prover_jobs_fri.l1_batch_number, prover_jobs_fri.circuit_id, prover_jobs_fri.depth, nawj.number_of_dependent_jobs - HAVING COUNT(*) = nawj.number_of_dependent_jobs) - RETURNING l1_batch_number, circuit_id, depth; - "#, + SET + status = 'queued' + WHERE + (l1_batch_number, circuit_id, depth) IN ( + SELECT + prover_jobs_fri.l1_batch_number, + prover_jobs_fri.circuit_id, + prover_jobs_fri.depth + FROM + prover_jobs_fri + JOIN node_aggregation_witness_jobs_fri nawj ON prover_jobs_fri.l1_batch_number = nawj.l1_batch_number + AND prover_jobs_fri.circuit_id = nawj.circuit_id + AND prover_jobs_fri.depth = nawj.depth + WHERE + nawj.status = 'waiting_for_proofs' + AND prover_jobs_fri.status = 'successful' + AND prover_jobs_fri.aggregation_round = 2 + GROUP BY + prover_jobs_fri.l1_batch_number, + prover_jobs_fri.circuit_id, + prover_jobs_fri.depth, + nawj.number_of_dependent_jobs + HAVING + COUNT(*) = nawj.number_of_dependent_jobs + ) + RETURNING + l1_batch_number, + circuit_id, + depth; + "#, ) .fetch_all(self.storage.conn()) .await @@ -630,21 +833,39 @@ impl FriWitnessGeneratorDal<'_, '_> { ) -> Vec { let processing_timeout = pg_interval_from_duration(processing_timeout); sqlx::query!( - " - UPDATE leaf_aggregation_witness_jobs_fri - SET status = 'queued', updated_at = now(), processing_started_at = now() - WHERE (status = 'in_progress' AND processing_started_at <= now() - $1::interval AND attempts < $2) - OR (status = 'failed' AND attempts < $2) - RETURNING id, status, attempts - ", - &processing_timeout, - max_attempts as i32, + r#" + UPDATE leaf_aggregation_witness_jobs_fri + SET + status = 'queued', + updated_at = NOW(), + processing_started_at = NOW() + WHERE + ( + status = 'in_progress' + AND processing_started_at <= NOW() - $1::INTERVAL + AND attempts < $2 + ) + OR ( + status = 'failed' + AND attempts < $2 + ) + RETURNING + id, + status, + 
attempts + "#, + &processing_timeout, + max_attempts as i32, ) .fetch_all(self.storage.conn()) .await .unwrap() .into_iter() - .map(|row| StuckJobs { id: row.id as u64, status: row.status, attempts: row.attempts as u64 }) + .map(|row| StuckJobs { + id: row.id as u64, + status: row.status, + attempts: row.attempts as u64, + }) .collect() } @@ -655,30 +876,50 @@ impl FriWitnessGeneratorDal<'_, '_> { ) -> Vec { let processing_timeout = pg_interval_from_duration(processing_timeout); sqlx::query!( - " - UPDATE node_aggregation_witness_jobs_fri - SET status = 'queued', updated_at = now(), processing_started_at = now() - WHERE (status = 'in_progress' AND processing_started_at <= now() - $1::interval AND attempts < $2) - OR (status = 'failed' AND attempts < $2) - RETURNING id, status, attempts - ", - &processing_timeout, - max_attempts as i32, + r#" + UPDATE node_aggregation_witness_jobs_fri + SET + status = 'queued', + updated_at = NOW(), + processing_started_at = NOW() + WHERE + ( + status = 'in_progress' + AND processing_started_at <= NOW() - $1::INTERVAL + AND attempts < $2 + ) + OR ( + status = 'failed' + AND attempts < $2 + ) + RETURNING + id, + status, + attempts + "#, + &processing_timeout, + max_attempts as i32, ) .fetch_all(self.storage.conn()) .await .unwrap() .into_iter() - .map(|row| StuckJobs { id: row.id as u64, status: row.status, attempts: row.attempts as u64 }) + .map(|row| StuckJobs { + id: row.id as u64, + status: row.status, + attempts: row.attempts as u64, + }) .collect() } pub async fn mark_scheduler_jobs_as_queued(&mut self, l1_batch_number: i64) { sqlx::query!( r#" - UPDATE scheduler_witness_jobs_fri - SET status='queued' - WHERE l1_batch_number = $1 + UPDATE scheduler_witness_jobs_fri + SET + status = 'queued' + WHERE + l1_batch_number = $1 AND status != 'successful' AND status != 'in_progress' "#, @@ -696,21 +937,39 @@ impl FriWitnessGeneratorDal<'_, '_> { ) -> Vec { let processing_timeout = pg_interval_from_duration(processing_timeout); sqlx::query!( - " - UPDATE scheduler_witness_jobs_fri - SET status = 'queued', updated_at = now(), processing_started_at = now() - WHERE (status = 'in_progress' AND processing_started_at <= now() - $1::interval AND attempts < $2) - OR (status = 'failed' AND attempts < $2) - RETURNING l1_batch_number, status, attempts - ", - &processing_timeout, - max_attempts as i32, + r#" + UPDATE scheduler_witness_jobs_fri + SET + status = 'queued', + updated_at = NOW(), + processing_started_at = NOW() + WHERE + ( + status = 'in_progress' + AND processing_started_at <= NOW() - $1::INTERVAL + AND attempts < $2 + ) + OR ( + status = 'failed' + AND attempts < $2 + ) + RETURNING + l1_batch_number, + status, + attempts + "#, + &processing_timeout, + max_attempts as i32, ) .fetch_all(self.storage.conn()) .await .unwrap() .into_iter() - .map(|row| StuckJobs { id: row.l1_batch_number as u64, status: row.status, attempts: row.attempts as u64 }) + .map(|row| StuckJobs { + id: row.l1_batch_number as u64, + status: row.status, + attempts: row.attempts as u64, + }) .collect() } @@ -721,23 +980,33 @@ impl FriWitnessGeneratorDal<'_, '_> { ) -> Option { let protocol_versions: Vec = protocol_versions.iter().map(|&id| id as i32).collect(); sqlx::query!( - " - UPDATE scheduler_witness_jobs_fri - SET status = 'in_progress', attempts = attempts + 1, - updated_at = now(), processing_started_at = now(), - picked_by = $2 - WHERE l1_batch_number = ( - SELECT l1_batch_number - FROM scheduler_witness_jobs_fri - WHERE status = 'queued' - AND protocol_version = ANY($1) - ORDER BY 
l1_batch_number ASC - LIMIT 1 + r#" + UPDATE scheduler_witness_jobs_fri + SET + status = 'in_progress', + attempts = attempts + 1, + updated_at = NOW(), + processing_started_at = NOW(), + picked_by = $2 + WHERE + l1_batch_number = ( + SELECT + l1_batch_number + FROM + scheduler_witness_jobs_fri + WHERE + status = 'queued' + AND protocol_version = ANY ($1) + ORDER BY + l1_batch_number ASC + LIMIT + 1 FOR UPDATE - SKIP LOCKED + SKIP LOCKED ) - RETURNING scheduler_witness_jobs_fri.* - ", + RETURNING + scheduler_witness_jobs_fri.* + "#, &protocol_versions[..], picked_by, ) @@ -752,8 +1021,14 @@ impl FriWitnessGeneratorDal<'_, '_> { l1_batch_number: L1BatchNumber, ) -> sqlx::Result> { let attempts = sqlx::query!( - "SELECT attempts FROM scheduler_witness_jobs_fri \ - WHERE l1_batch_number = $1", + r#" + SELECT + attempts + FROM + scheduler_witness_jobs_fri + WHERE + l1_batch_number = $1 + "#, l1_batch_number.0 as i64, ) .fetch_optional(self.storage.conn()) @@ -769,11 +1044,15 @@ impl FriWitnessGeneratorDal<'_, '_> { time_taken: Duration, ) { sqlx::query!( - " - UPDATE scheduler_witness_jobs_fri - SET status = 'successful', updated_at = now(), time_taken = $1 - WHERE l1_batch_number = $2 - ", + r#" + UPDATE scheduler_witness_jobs_fri + SET + status = 'successful', + updated_at = NOW(), + time_taken = $1 + WHERE + l1_batch_number = $2 + "#, duration_to_naive_time(time_taken), block_number.0 as i64 ) @@ -784,11 +1063,15 @@ impl FriWitnessGeneratorDal<'_, '_> { pub async fn mark_scheduler_job_failed(&mut self, error: &str, block_number: L1BatchNumber) { sqlx::query!( - " - UPDATE scheduler_witness_jobs_fri - SET status ='failed', error= $1, updated_at = now() - WHERE l1_batch_number = $2 - ", + r#" + UPDATE scheduler_witness_jobs_fri + SET + status = 'failed', + error = $1, + updated_at = NOW() + WHERE + l1_batch_number = $2 + "#, error, block_number.0 as i64 ) @@ -840,9 +1123,14 @@ impl FriWitnessGeneratorDal<'_, '_> { l1_batch_number: L1BatchNumber, ) -> FriProtocolVersionId { sqlx::query!( - "SELECT protocol_version \ - FROM witness_inputs_fri \ - WHERE l1_batch_number = $1", + r#" + SELECT + protocol_version + FROM + witness_inputs_fri + WHERE + l1_batch_number = $1 + "#, l1_batch_number.0 as i64, ) .fetch_one(self.storage.conn()) diff --git a/core/lib/dal/src/gpu_prover_queue_dal.rs b/core/lib/dal/src/gpu_prover_queue_dal.rs deleted file mode 100644 index cc769ff3008..00000000000 --- a/core/lib/dal/src/gpu_prover_queue_dal.rs +++ /dev/null @@ -1,170 +0,0 @@ -use std::time::Duration; - -use crate::time_utils::pg_interval_from_duration; -use crate::StorageProcessor; -use std::collections::HashMap; -use zksync_types::proofs::{GpuProverInstanceStatus, SocketAddress}; - -#[derive(Debug)] -pub struct GpuProverQueueDal<'a, 'c> { - pub(crate) storage: &'a mut StorageProcessor<'c>, -} - -impl GpuProverQueueDal<'_, '_> { - pub async fn lock_available_prover( - &mut self, - processing_timeout: Duration, - specialized_prover_group_id: u8, - region: String, - zone: String, - ) -> Option { - { - let processing_timeout = pg_interval_from_duration(processing_timeout); - let result: Option = sqlx::query!( - " - UPDATE gpu_prover_queue - SET instance_status = 'reserved', - updated_at = now(), - processing_started_at = now() - WHERE id in ( - SELECT id - FROM gpu_prover_queue - WHERE specialized_prover_group_id=$2 - AND region=$3 - AND zone=$4 - AND ( - instance_status = 'available' - OR (instance_status = 'reserved' AND processing_started_at < now() - $1::interval) - ) - ORDER BY updated_at ASC - LIMIT 1 - FOR 
UPDATE - SKIP LOCKED - ) - RETURNING gpu_prover_queue.* - ", - &processing_timeout, - specialized_prover_group_id as i16, - region, - zone - ) - .fetch_optional(self.storage.conn()) - .await - .unwrap() - .map(|row| SocketAddress { - host: row.instance_host.network(), - port: row.instance_port as u16, - }); - - result - } - } - - pub async fn insert_prover_instance( - &mut self, - address: SocketAddress, - queue_capacity: usize, - specialized_prover_group_id: u8, - region: String, - zone: String, - num_gpu: u8, - ) { - { - sqlx::query!( - " - INSERT INTO gpu_prover_queue (instance_host, instance_port, queue_capacity, queue_free_slots, instance_status, specialized_prover_group_id, region, zone, num_gpu, created_at, updated_at) - VALUES (cast($1::text as inet), $2, $3, $3, 'available', $4, $5, $6, $7, now(), now()) - ON CONFLICT(instance_host, instance_port, region, zone) - DO UPDATE SET instance_status='available', queue_capacity=$3, queue_free_slots=$3, specialized_prover_group_id=$4, region=$5, zone=$6, num_gpu=$7, updated_at=now()", - format!("{}",address.host), - address.port as i32, - queue_capacity as i32, - specialized_prover_group_id as i16, - region, - zone, - num_gpu as i16) - .execute(self.storage.conn()) - .await - .unwrap(); - } - } - - pub async fn update_prover_instance_status( - &mut self, - address: SocketAddress, - status: GpuProverInstanceStatus, - queue_free_slots: usize, - region: String, - zone: String, - ) { - { - sqlx::query!( - " - UPDATE gpu_prover_queue - SET instance_status = $1, updated_at = now(), queue_free_slots = $4 - WHERE instance_host = $2::text::inet - AND instance_port = $3 - AND region = $5 - AND zone = $6 - ", - format!("{:?}", status).to_lowercase(), - format!("{}", address.host), - address.port as i32, - queue_free_slots as i32, - region, - zone - ) - .execute(self.storage.conn()) - .await - .unwrap(); - } - } - - pub async fn update_prover_instance_from_full_to_available( - &mut self, - address: SocketAddress, - queue_free_slots: usize, - region: String, - zone: String, - ) { - { - sqlx::query!( - " - UPDATE gpu_prover_queue - SET instance_status = 'available', updated_at = now(), queue_free_slots = $3 - WHERE instance_host = $1::text::inet - AND instance_port = $2 - AND instance_status = 'full' - AND region = $4 - AND zone = $5 - ", - format!("{}", address.host), - address.port as i32, - queue_free_slots as i32, - region, - zone - ) - .execute(self.storage.conn()) - .await - .unwrap(); - } - } - - pub async fn get_prover_gpu_count_per_region_zone(&mut self) -> HashMap<(String, String), u64> { - { - sqlx::query!( - r#" - SELECT region, zone, SUM(num_gpu) AS total_gpus - FROM gpu_prover_queue - GROUP BY region, zone - "#, - ) - .fetch_all(self.storage.conn()) - .await - .unwrap() - .into_iter() - .map(|row| ((row.region, row.zone), row.total_gpus.unwrap() as u64)) - .collect() - } - } -} diff --git a/core/lib/dal/src/healthcheck.rs b/core/lib/dal/src/healthcheck.rs index 902a235ce54..4a615ee2053 100644 --- a/core/lib/dal/src/healthcheck.rs +++ b/core/lib/dal/src/healthcheck.rs @@ -1,6 +1,4 @@ use serde::Serialize; -use sqlx::PgPool; - use zksync_health_check::{async_trait, CheckHealth, Health, HealthStatus}; use crate::ConnectionPool; @@ -8,12 +6,14 @@ use crate::ConnectionPool; #[derive(Debug, Serialize)] struct ConnectionPoolHealthDetails { pool_size: u32, + max_size: u32, } impl ConnectionPoolHealthDetails { - async fn new(pool: &PgPool) -> Self { + fn new(pool: &ConnectionPool) -> Self { Self { - pool_size: pool.size(), + pool_size: 
pool.inner.size(), + max_size: pool.max_size(), } } } @@ -42,7 +42,7 @@ impl CheckHealth for ConnectionPoolHealthCheck { // This check is rather feeble, plan to make reliable here: // https://linear.app/matterlabs/issue/PLA-255/revamp-db-connection-health-check self.connection_pool.access_storage().await.unwrap(); - let details = ConnectionPoolHealthDetails::new(&self.connection_pool.0).await; + let details = ConnectionPoolHealthDetails::new(&self.connection_pool); Health::from(HealthStatus::Ready).with_details(details) } } diff --git a/core/lib/dal/src/instrument.rs b/core/lib/dal/src/instrument.rs index cd761fb3500..5d99b0729de 100644 --- a/core/lib/dal/src/instrument.rs +++ b/core/lib/dal/src/instrument.rs @@ -1,5 +1,7 @@ //! DAL query instrumentation. +use std::{fmt, future::Future, panic::Location}; + use sqlx::{ postgres::{PgConnection, PgQueryResult, PgRow}, query::{Map, Query, QueryAs}, @@ -7,8 +9,6 @@ use sqlx::{ }; use tokio::time::{Duration, Instant}; -use std::{fmt, future::Future, panic::Location}; - use crate::metrics::REQUEST_METRICS; type ThreadSafeDebug<'a> = dyn fmt::Debug + Send + Sync + 'a; diff --git a/core/lib/dal/src/lib.rs b/core/lib/dal/src/lib.rs index 788ce2d98bd..3a5691a1c93 100644 --- a/core/lib/dal/src/lib.rs +++ b/core/lib/dal/src/lib.rs @@ -1,45 +1,28 @@ #![allow(clippy::derive_partial_eq_without_eq, clippy::format_push_string)] -// Built-in deps -pub use sqlx::Error as SqlxError; -use sqlx::{postgres::Postgres, Connection, PgConnection, Transaction}; -// External imports -use sqlx::pool::PoolConnection; -pub use sqlx::types::BigDecimal; - -// Local imports -use crate::accounts_dal::AccountsDal; -use crate::basic_witness_input_producer_dal::BasicWitnessInputProducerDal; -use crate::blocks_dal::BlocksDal; -use crate::blocks_web3_dal::BlocksWeb3Dal; -use crate::connection::holder::ConnectionHolder; +use sqlx::{pool::PoolConnection, postgres::Postgres, Connection, PgConnection, Transaction}; +pub use sqlx::{types::BigDecimal, Error as SqlxError}; + pub use crate::connection::ConnectionPool; -use crate::contract_verification_dal::ContractVerificationDal; -use crate::eth_sender_dal::EthSenderDal; -use crate::events_dal::EventsDal; -use crate::events_web3_dal::EventsWeb3Dal; -use crate::fri_gpu_prover_queue_dal::FriGpuProverQueueDal; -use crate::fri_proof_compressor_dal::FriProofCompressorDal; -use crate::fri_protocol_versions_dal::FriProtocolVersionsDal; -use crate::fri_prover_dal::FriProverDal; -use crate::fri_scheduler_dependency_tracker_dal::FriSchedulerDependencyTrackerDal; -use crate::fri_witness_generator_dal::FriWitnessGeneratorDal; -use crate::gpu_prover_queue_dal::GpuProverQueueDal; -use crate::proof_generation_dal::ProofGenerationDal; -use crate::protocol_versions_dal::ProtocolVersionsDal; -use crate::protocol_versions_web3_dal::ProtocolVersionsWeb3Dal; -use crate::prover_dal::ProverDal; -use crate::storage_dal::StorageDal; -use crate::storage_logs_dal::StorageLogsDal; -use crate::storage_logs_dedup_dal::StorageLogsDedupDal; -use crate::storage_web3_dal::StorageWeb3Dal; -use crate::sync_dal::SyncDal; -use crate::system_dal::SystemDal; -use crate::tokens_dal::TokensDal; -use crate::tokens_web3_dal::TokensWeb3Dal; -use crate::transactions_dal::TransactionsDal; -use crate::transactions_web3_dal::TransactionsWeb3Dal; -use crate::witness_generator_dal::WitnessGeneratorDal; +use crate::{ + accounts_dal::AccountsDal, basic_witness_input_producer_dal::BasicWitnessInputProducerDal, + blocks_dal::BlocksDal, blocks_web3_dal::BlocksWeb3Dal, 
connection::holder::ConnectionHolder,
+    consensus_dal::ConsensusDal, contract_verification_dal::ContractVerificationDal,
+    eth_sender_dal::EthSenderDal, events_dal::EventsDal, events_web3_dal::EventsWeb3Dal,
+    fri_gpu_prover_queue_dal::FriGpuProverQueueDal,
+    fri_proof_compressor_dal::FriProofCompressorDal,
+    fri_protocol_versions_dal::FriProtocolVersionsDal, fri_prover_dal::FriProverDal,
+    fri_scheduler_dependency_tracker_dal::FriSchedulerDependencyTrackerDal,
+    fri_witness_generator_dal::FriWitnessGeneratorDal, proof_generation_dal::ProofGenerationDal,
+    protocol_versions_dal::ProtocolVersionsDal,
+    protocol_versions_web3_dal::ProtocolVersionsWeb3Dal,
+    snapshot_recovery_dal::SnapshotRecoveryDal, snapshots_creator_dal::SnapshotsCreatorDal,
+    snapshots_dal::SnapshotsDal, storage_dal::StorageDal, storage_logs_dal::StorageLogsDal,
+    storage_logs_dedup_dal::StorageLogsDedupDal, storage_web3_dal::StorageWeb3Dal,
+    sync_dal::SyncDal, system_dal::SystemDal, tokens_dal::TokensDal,
+    tokens_web3_dal::TokensWeb3Dal, transactions_dal::TransactionsDal,
+    transactions_web3_dal::TransactionsWeb3Dal,
+};

 #[macro_use]
 mod macro_utils;
@@ -48,6 +31,7 @@ pub mod basic_witness_input_producer_dal;
 pub mod blocks_dal;
 pub mod blocks_web3_dal;
 pub mod connection;
+pub mod consensus_dal;
 pub mod contract_verification_dal;
 pub mod eth_sender_dal;
 pub mod events_dal;
@@ -58,7 +42,6 @@ pub mod fri_protocol_versions_dal;
 pub mod fri_prover_dal;
 pub mod fri_scheduler_dependency_tracker_dal;
 pub mod fri_witness_generator_dal;
-pub mod gpu_prover_queue_dal;
 pub mod healthcheck;
 mod instrument;
 mod metrics;
@@ -66,7 +49,9 @@ mod models;
 pub mod proof_generation_dal;
 pub mod protocol_versions_dal;
 pub mod protocol_versions_web3_dal;
-pub mod prover_dal;
+pub mod snapshot_recovery_dal;
+pub mod snapshots_creator_dal;
+pub mod snapshots_dal;
 pub mod storage_dal;
 pub mod storage_logs_dal;
 pub mod storage_logs_dedup_dal;
@@ -78,14 +63,13 @@ pub mod tokens_dal;
 pub mod tokens_web3_dal;
 pub mod transactions_dal;
 pub mod transactions_web3_dal;
-pub mod witness_generator_dal;

 #[cfg(test)]
 mod tests;

 /// Storage processor is the main storage interaction point.
 /// It holds down the connection (either direct or pooled) to the database
-/// and provide methods to obtain different storage schemas.
+/// and provides methods to obtain different storage schemas.
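+///
+/// A usage sketch (illustrative only; assumes a `ConnectionPool` named `pool` is in scope):
+///
+/// ```ignore
+/// let mut storage = pool.access_storage().await.unwrap();
+/// let latest_version = storage.protocol_versions_dal().last_version_id().await;
+/// ```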
#[derive(Debug)] pub struct StorageProcessor<'a> { conn: ConnectionHolder<'a>, @@ -161,6 +145,10 @@ impl<'a> StorageProcessor<'a> { BlocksWeb3Dal { storage: self } } + pub fn consensus_dal(&mut self) -> ConsensusDal<'_, 'a> { + ConsensusDal { storage: self } + } + pub fn eth_sender_dal(&mut self) -> EthSenderDal<'_, 'a> { EthSenderDal { storage: self } } @@ -197,22 +185,10 @@ impl<'a> StorageProcessor<'a> { TokensWeb3Dal { storage: self } } - pub fn prover_dal(&mut self) -> ProverDal<'_, 'a> { - ProverDal { storage: self } - } - - pub fn witness_generator_dal(&mut self) -> WitnessGeneratorDal<'_, 'a> { - WitnessGeneratorDal { storage: self } - } - pub fn contract_verification_dal(&mut self) -> ContractVerificationDal<'_, 'a> { ContractVerificationDal { storage: self } } - pub fn gpu_prover_queue_dal(&mut self) -> GpuProverQueueDal<'_, 'a> { - GpuProverQueueDal { storage: self } - } - pub fn protocol_versions_dal(&mut self) -> ProtocolVersionsDal<'_, 'a> { ProtocolVersionsDal { storage: self } } @@ -258,4 +234,16 @@ impl<'a> StorageProcessor<'a> { pub fn system_dal(&mut self) -> SystemDal<'_, 'a> { SystemDal { storage: self } } + + pub fn snapshots_dal(&mut self) -> SnapshotsDal<'_, 'a> { + SnapshotsDal { storage: self } + } + + pub fn snapshots_creator_dal(&mut self) -> SnapshotsCreatorDal<'_, 'a> { + SnapshotsCreatorDal { storage: self } + } + + pub fn snapshot_recovery_dal(&mut self) -> SnapshotRecoveryDal<'_, 'a> { + SnapshotRecoveryDal { storage: self } + } } diff --git a/core/lib/dal/src/metrics.rs b/core/lib/dal/src/metrics.rs index 58e733acc90..4840d073f57 100644 --- a/core/lib/dal/src/metrics.rs +++ b/core/lib/dal/src/metrics.rs @@ -1,12 +1,12 @@ //! Metrics for the data access layer. +use std::{thread, time::Duration}; + use vise::{ Buckets, Counter, EncodeLabelSet, EncodeLabelValue, Family, Histogram, LabeledFamily, LatencyObserver, Metrics, }; -use std::{thread, time::Duration}; - /// Request-related DB metrics. #[derive(Debug, Metrics)] #[metrics(prefix = "sql")] diff --git a/core/lib/dal/src/models/mod.rs b/core/lib/dal/src/models/mod.rs index f6ebb6fc781..4e3e0853991 100644 --- a/core/lib/dal/src/models/mod.rs +++ b/core/lib/dal/src/models/mod.rs @@ -1,3 +1,4 @@ +mod proto; pub mod storage_block; pub mod storage_eth_tx; pub mod storage_event; diff --git a/core/lib/zksync_core/src/consensus/proto/mod.proto b/core/lib/dal/src/models/proto/mod.proto similarity index 69% rename from core/lib/zksync_core/src/consensus/proto/mod.proto rename to core/lib/dal/src/models/proto/mod.proto index 6199585899d..33e8bbb0324 100644 --- a/core/lib/zksync_core/src/consensus/proto/mod.proto +++ b/core/lib/dal/src/models/proto/mod.proto @@ -1,21 +1,25 @@ syntax = "proto3"; -package zksync.core.consensus; +package zksync.dal; message Transaction { // Default derive(serde::Serialize) encoding of the zksync_types::Transaction. - // TODO(gprusak): it is neither efficient, unique, nor suitable for version control. + // TODO(BFT-407): it is neither efficient, unique, nor suitable for version control. // replace with a more robust encoding. 
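  // Note (added for clarity): serde's JSON output is not guaranteed to be canonical
  // (e.g. map key order and number formatting may vary across serializer versions),
  // so re-encoding the same transaction is not guaranteed to be byte-identical;
  // this is what makes the encoding unsuitable for hashing or version control.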
optional string json = 1; // required } message Payload { + // zksync-era ProtocolVersionId + optional uint32 protocol_version = 9; // required; u16 optional bytes hash = 1; // required; H256 optional uint32 l1_batch_number = 2; // required optional uint64 timestamp = 3; // required; seconds since UNIX epoch optional uint64 l1_gas_price = 4; // required; gwei optional uint64 l2_fair_gas_price = 5; // required; gwei + optional uint64 fair_pubdata_price = 11; // required since 1.4.1; gwei optional uint32 virtual_blocks = 6; // required optional bytes operator_address = 7; // required; H160 repeated Transaction transactions = 8; + optional bool last_in_batch = 10; // required } diff --git a/core/lib/dal/src/models/proto/mod.rs b/core/lib/dal/src/models/proto/mod.rs new file mode 100644 index 00000000000..29f7c04d5d6 --- /dev/null +++ b/core/lib/dal/src/models/proto/mod.rs @@ -0,0 +1,2 @@ +#![allow(warnings)] +include!(concat!(env!("OUT_DIR"), "/src/models/proto/gen.rs")); diff --git a/core/lib/dal/src/models/storage_block.rs b/core/lib/dal/src/models/storage_block.rs index 390bd3b2fd8..2aa5e4d30ef 100644 --- a/core/lib/dal/src/models/storage_block.rs +++ b/core/lib/dal/src/models/storage_block.rs @@ -7,14 +7,14 @@ use sqlx::{ types::chrono::{DateTime, NaiveDateTime, Utc}, }; use thiserror::Error; - use zksync_contracts::BaseSystemContractsHashes; use zksync_types::{ api, block::{L1BatchHeader, MiniblockHeader}, commitment::{L1BatchMetaParameters, L1BatchMetadata}, + fee_model::{BatchFeeInput, L1PeggedBatchFeeModelInput, PubdataIndependentBatchFeeModelInput}, l2_to_l1_log::{L2ToL1Log, SystemL2ToL1Log, UserL2ToL1Log}, - Address, L1BatchNumber, MiniblockNumber, H2048, H256, + Address, L1BatchNumber, MiniblockNumber, ProtocolVersionId, H2048, H256, }; #[derive(Debug, Error)] @@ -48,9 +48,10 @@ pub struct StorageL1BatchHeader { // absent in all batches generated prior to boojum. // System logs are logs generated by the VM execution, rather than directly from user transactions, // that facilitate sending information required for committing a batch to l1. In a given batch there - // will be exactly 7 (or 8 in the event of a protocol updgrade) system logs. + // will be exactly 7 (or 8 in the event of a protocol upgrade) system logs. 
 pub system_logs: Vec<Vec<u8>>,
 pub compressed_state_diffs: Option<Vec<u8>>,
+    pub pubdata_input: Option<Vec<u8>>,
 }

 impl From<StorageL1BatchHeader> for L1BatchHeader {
@@ -92,6 +93,7 @@ impl From<StorageL1BatchHeader> for L1BatchHeader {
             protocol_version: l1_batch
                 .protocol_version
                 .map(|v| (v as u16).try_into().unwrap()),
+            pubdata_input: l1_batch.pubdata_input,
         }
     }
 }
@@ -170,6 +172,7 @@ pub struct StorageL1Batch {
     pub events_queue_commitment: Option<Vec<u8>>,
     pub bootloader_initial_content_commitment: Option<Vec<u8>>,
+    pub pubdata_input: Option<Vec<u8>>,
 }

 impl From<StorageL1Batch> for L1BatchHeader {
@@ -211,6 +214,7 @@ impl From<StorageL1Batch> for L1BatchHeader {
             protocol_version: l1_batch
                 .protocol_version
                 .map(|v| (v as u16).try_into().unwrap()),
+            pubdata_input: l1_batch.pubdata_input,
         }
     }
 }
@@ -293,27 +297,32 @@ pub fn web3_block_number_to_sql(block_number: api::BlockNumber) -> String {
     match block_number {
         api::BlockNumber::Number(number) => number.to_string(),
         api::BlockNumber::Earliest => 0.to_string(),
-        api::BlockNumber::Pending => {
-            "(SELECT (MAX(number) + 1) as number FROM miniblocks)".to_string()
-        }
+        api::BlockNumber::Pending => "
+            (SELECT COALESCE(
+                (SELECT (MAX(number) + 1) AS number FROM miniblocks),
+                (SELECT (MAX(miniblock_number) + 1) AS number FROM snapshot_recovery),
+                0
+            ) AS number)
+        "
+        .to_string(),
         api::BlockNumber::Latest | api::BlockNumber::Committed => {
-            "(SELECT MAX(number) as number FROM miniblocks)".to_string()
+            "(SELECT MAX(number) AS number FROM miniblocks)".to_string()
         }
         api::BlockNumber::Finalized => "
-            (SELECT COALESCE(
-                (
-                    SELECT MAX(number) FROM miniblocks
-                    WHERE l1_batch_number = (
-                        SELECT MAX(number) FROM l1_batches
-                        JOIN eth_txs ON
-                            l1_batches.eth_execute_tx_id = eth_txs.id
-                        WHERE
-                            eth_txs.confirmed_eth_tx_history_id IS NOT NULL
-                    )
-                ),
-                0
-            ) as number)
-            "
+                (SELECT COALESCE(
+                    (
+                        SELECT MAX(number) FROM miniblocks
+                        WHERE l1_batch_number = (
+                            SELECT MAX(number) FROM l1_batches
+                            JOIN eth_txs ON
+                                l1_batches.eth_execute_tx_id = eth_txs.id
+                            WHERE
+                                eth_txs.confirmed_eth_tx_history_id IS NOT NULL
+                        )
+                    ),
+                    0
+                ) AS number)
+            "
        .to_string(),
     }
 }
@@ -336,7 +345,7 @@ pub fn bind_block_where_sql_params<'q>(
     query: Query<'q, Postgres, PgArguments>,
 ) -> Query<'q, Postgres, PgArguments> {
     match block_id {
-        // these block_id types result in `$1` in the query string, which we have to `bind`
+        // these `block_id` types result in `$1` in the query string, which we have to `bind`
         api::BlockId::Hash(block_hash) => query.bind(block_hash.as_bytes()),
         api::BlockId::Number(api::BlockNumber::Number(number)) => {
             query.bind(number.as_u64() as i64)
@@ -510,15 +519,40 @@ pub struct StorageMiniblockHeader {
     pub default_aa_code_hash: Option<Vec<u8>>,
     pub protocol_version: Option<i32>,

+    pub fair_pubdata_price: Option<i64>,
+
+    pub gas_per_pubdata_limit: i64,
+
     // The maximal number of virtual blocks that can be created with this miniblock.
     // If this value is greater than zero, then at least 1 will be created, but no more than
-    // min(virtual_blocks, miniblock_number - virtual_block_number), i.e. making sure that virtual blocks
+    // `min(virtual_blocks, miniblock_number - virtual_block_number)`, i.e. making sure that virtual blocks
     // never go beyond the miniblock they are based on.
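+    // Illustrative example (not part of the source): with `virtual_blocks = 5`,
+    // `miniblock_number = 100` and `virtual_block_number = 97`, at most
+    // `min(5, 100 - 97) = 3` virtual blocks can be created on top of this miniblock.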
     pub virtual_blocks: i64,
 }

 impl From<StorageMiniblockHeader> for MiniblockHeader {
     fn from(row: StorageMiniblockHeader) -> Self {
+        let protocol_version = row.protocol_version.map(|v| (v as u16).try_into().unwrap());
+
+        let fee_input = protocol_version
+            .filter(|version: &ProtocolVersionId| version.is_post_1_4_1())
+            .map(|_| {
+                BatchFeeInput::PubdataIndependent(PubdataIndependentBatchFeeModelInput {
+                    fair_pubdata_price: row
+                        .fair_pubdata_price
+                        .expect("No fair pubdata price for 1.4.1 miniblock")
+                        as u64,
+                    fair_l2_gas_price: row.l2_fair_gas_price as u64,
+                    l1_gas_price: row.l1_gas_price as u64,
+                })
+            })
+            .unwrap_or_else(|| {
+                BatchFeeInput::L1Pegged(L1PeggedBatchFeeModelInput {
+                    fair_l2_gas_price: row.l2_fair_gas_price as u64,
+                    l1_gas_price: row.l1_gas_price as u64,
+                })
+            });
+
         MiniblockHeader {
             number: MiniblockNumber(row.number as u32),
             timestamp: row.timestamp as u64,
@@ -526,13 +560,13 @@ impl From<StorageMiniblockHeader> for MiniblockHeader {
             l1_tx_count: row.l1_tx_count as u16,
             l2_tx_count: row.l2_tx_count as u16,
             base_fee_per_gas: row.base_fee_per_gas.to_u64().unwrap(),
-            l1_gas_price: row.l1_gas_price as u64,
-            l2_fair_gas_price: row.l2_fair_gas_price as u64,
+            batch_fee_input: fee_input,
             base_system_contracts_hashes: convert_base_system_contracts_hashes(
                 row.bootloader_code_hash,
                 row.default_aa_code_hash,
             ),
-            protocol_version: row.protocol_version.map(|v| (v as u16).try_into().unwrap()),
+            gas_per_pubdata_limit: row.gas_per_pubdata_limit as u64,
+            protocol_version,
             virtual_blocks: row.virtual_blocks as u32,
         }
     }
 }
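+// Sketch of the two decoding paths above (hypothetical rows, not from the source):
+// a post-1.4.1 row with `fair_pubdata_price = Some(250)` is decoded into
+// `BatchFeeInput::PubdataIndependent(..)` carrying that price, while a pre-1.4.1 row
+// (where `fair_pubdata_price` is NULL) falls back to
+// `BatchFeeInput::L1Pegged(L1PeggedBatchFeeModelInput { .. })` built from the L1 and
+// fair L2 gas prices alone.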
@@ -555,65 +589,3 @@ impl ResolvedL1BatchForMiniblock {
         self.miniblock_l1_batch.unwrap_or(self.pending_l1_batch)
     }
 }
-
-#[cfg(test)]
-mod tests {
-    use super::*;
-
-    #[test]
-    fn test_web3_block_number_to_sql_earliest() {
-        let sql = web3_block_number_to_sql(api::BlockNumber::Earliest);
-        assert_eq!(sql, 0.to_string());
-    }
-
-    #[test]
-    fn test_web3_block_number_to_sql_pending() {
-        let sql = web3_block_number_to_sql(api::BlockNumber::Pending);
-        assert_eq!(
-            sql,
-            "(SELECT (MAX(number) + 1) as number FROM miniblocks)".to_string()
-        );
-    }
-
-    #[test]
-    fn test_web3_block_number_to_sql_latest() {
-        let sql = web3_block_number_to_sql(api::BlockNumber::Latest);
-        assert_eq!(
-            sql,
-            "(SELECT MAX(number) as number FROM miniblocks)".to_string()
-        );
-    }
-
-    #[test]
-    fn test_web3_block_number_to_sql_committed() {
-        let sql = web3_block_number_to_sql(api::BlockNumber::Committed);
-        assert_eq!(
-            sql,
-            "(SELECT MAX(number) as number FROM miniblocks)".to_string()
-        );
-    }
-
-    #[test]
-    fn test_web3_block_number_to_sql_finalized() {
-        let sql = web3_block_number_to_sql(api::BlockNumber::Finalized);
-        assert_eq!(
-            sql,
-            "
-            (SELECT COALESCE(
-                (
-                    SELECT MAX(number) FROM miniblocks
-                    WHERE l1_batch_number = (
-                        SELECT MAX(number) FROM l1_batches
-                        JOIN eth_txs ON
-                            l1_batches.eth_execute_tx_id = eth_txs.id
-                        WHERE
-                            eth_txs.confirmed_eth_tx_history_id IS NOT NULL
-                    )
-                ),
-                0
-            ) as number)
-            "
-            .to_string()
-        );
-    }
-}
diff --git a/core/lib/dal/src/models/storage_eth_tx.rs b/core/lib/dal/src/models/storage_eth_tx.rs
index ed5a732ff79..9026be8326d 100644
--- a/core/lib/dal/src/models/storage_eth_tx.rs
+++ b/core/lib/dal/src/models/storage_eth_tx.rs
@@ -1,8 +1,11 @@
-use sqlx::types::chrono::NaiveDateTime;
 use std::str::FromStr;
-use zksync_types::aggregated_operations::AggregatedActionType;
-use zksync_types::eth_sender::{EthTx, TxHistory, TxHistoryToSend};
-use zksync_types::{Address, L1BatchNumber, Nonce, H256};
+
+use sqlx::types::chrono::NaiveDateTime;
+use zksync_types::{
+    aggregated_operations::AggregatedActionType,
+    eth_sender::{EthTx, TxHistory, TxHistoryToSend},
+    Address, L1BatchNumber, Nonce, H256,
+};

 #[derive(Debug, Clone)]
 pub struct StorageEthTx {
diff --git a/core/lib/dal/src/models/storage_event.rs b/core/lib/dal/src/models/storage_event.rs
index 001e4a2547a..7de9dae73c0 100644
--- a/core/lib/dal/src/models/storage_event.rs
+++ b/core/lib/dal/src/models/storage_event.rs
@@ -43,7 +43,7 @@ impl From<StorageWeb3Log> for Log {
             transaction_hash: Some(H256::from_slice(&log.tx_hash)),
             transaction_index: Some(Index::from(log.tx_index_in_block as u32)),
             log_index: Some(U256::from(log.event_index_in_block as u32)),
-            transaction_log_index: Some(U256::from(log.event_index_in_block as u32)),
+            transaction_log_index: Some(U256::from(log.event_index_in_tx as u32)),
             log_type: None,
             removed: Some(false),
         }
diff --git a/core/lib/dal/src/models/storage_log.rs b/core/lib/dal/src/models/storage_log.rs
index bc4028b4d8b..adca6742d09 100644
--- a/core/lib/dal/src/models/storage_log.rs
+++ b/core/lib/dal/src/models/storage_log.rs
@@ -1,7 +1,7 @@
 use sqlx::types::chrono::NaiveDateTime;
-use zksync_types::{AccountTreeId, Address, StorageKey, StorageLog, StorageLogKind, H256};
+use zksync_types::{AccountTreeId, Address, StorageKey, StorageLog, StorageLogKind, H256, U256};

-#[derive(sqlx::FromRow, Debug, Clone)]
+#[derive(Debug, Clone, sqlx::FromRow)]
 pub struct DBStorageLog {
     pub id: i64,
     pub hashed_key: Vec<u8>,
@@ -27,3 +27,11 @@ impl From<DBStorageLog> for StorageLog {
         }
     }
 }
+
+// We don't want to rely on the Merkle tree crate to import a single type, so we duplicate `TreeEntry` here.
+#[derive(Debug, Clone, Copy)]
+pub struct StorageTreeEntry {
+    pub key: U256,
+    pub value: H256,
+    pub leaf_index: u64,
+}
diff --git a/core/lib/dal/src/models/storage_protocol_version.rs b/core/lib/dal/src/models/storage_protocol_version.rs
index 93010f1b814..6eb6e94b003 100644
--- a/core/lib/dal/src/models/storage_protocol_version.rs
+++ b/core/lib/dal/src/models/storage_protocol_version.rs
@@ -1,4 +1,6 @@
 use std::convert::TryInto;
+
+use sqlx::types::chrono::NaiveDateTime;
 use zksync_contracts::BaseSystemContractsHashes;
 use zksync_types::{
     api,
@@ -6,8 +8,6 @@ use zksync_types::{
     Address, H256,
 };

-use sqlx::types::chrono::NaiveDateTime;
-
 #[derive(sqlx::FromRow)]
 pub struct StorageProtocolVersion {
     pub id: i32,
diff --git a/core/lib/dal/src/models/storage_prover_job_info.rs b/core/lib/dal/src/models/storage_prover_job_info.rs
index facec83e0c2..3242953b39d 100644
--- a/core/lib/dal/src/models/storage_prover_job_info.rs
+++ b/core/lib/dal/src/models/storage_prover_job_info.rs
@@ -1,14 +1,11 @@
-use core::panic;
-use sqlx::types::chrono::{DateTime, NaiveDateTime, NaiveTime, Utc};
-use std::convert::TryFrom;
-use std::str::FromStr;
+use std::{convert::TryFrom, panic, str::FromStr};

-use zksync_types::proofs::{
-    JobPosition, ProverJobStatus, ProverJobStatusFailed, ProverJobStatusInProgress,
-    ProverJobStatusSuccessful,
-};
+use sqlx::types::chrono::{DateTime, NaiveDateTime, NaiveTime, Utc};
 use zksync_types::{
-    proofs::{AggregationRound, ProverJobInfo},
+    proofs::{
+        AggregationRound, JobPosition, ProverJobInfo, ProverJobStatus, ProverJobStatusFailed,
+        ProverJobStatusInProgress, ProverJobStatusSuccessful,
+    },
     L1BatchNumber,
 };

diff --git a/core/lib/dal/src/models/storage_sync.rs b/core/lib/dal/src/models/storage_sync.rs
index b49cfd98acc..2836d2820d8 100644
--- a/core/lib/dal/src/models/storage_sync.rs
+++ b/core/lib/dal/src/models/storage_sync.rs
@@ -1,68 +1,233 @@
-use std::convert::TryInto;
-
+use anyhow::Context as _;
+use zksync_consensus_roles::validator;
 use zksync_contracts::BaseSystemContractsHashes;
-use zksync_types::api::en::SyncBlock;
-use zksync_types::{Address, L1BatchNumber, MiniblockNumber, Transaction, H256};
+use zksync_protobuf::{required, ProtoFmt};
+use zksync_types::{
+    api::en, Address, L1BatchNumber, MiniblockNumber, ProtocolVersionId, Transaction, H160, H256,
+};

 #[derive(Debug, Clone, sqlx::FromRow)]
-pub struct StorageSyncBlock {
+pub(crate) struct StorageSyncBlock {
     pub number: i64,
     pub l1_batch_number: i64,
     pub last_batch_miniblock: Option<i64>,
     pub timestamp: i64,
-    pub root_hash: Option<Vec<u8>>,
     // L1 gas price assumed in the corresponding batch
     pub l1_gas_price: i64,
     // L2 gas price assumed in the corresponding batch
     pub l2_fair_gas_price: i64,
+    pub fair_pubdata_price: Option<i64>,
     pub bootloader_code_hash: Option<Vec<u8>>,
     pub default_aa_code_hash: Option<Vec<u8>>,
     pub fee_account_address: Option<Vec<u8>>, // May be None if the block is not yet sealed
     pub protocol_version: i32,
     pub virtual_blocks: i64,
     pub hash: Vec<u8>,
-    pub consensus: Option<serde_json::Value>,
 }

-impl StorageSyncBlock {
-    pub(crate) fn into_sync_block(
-        self,
-        current_operator_address: Address,
-        transactions: Option<Vec<Transaction>>,
-    ) -> SyncBlock {
-        let number = self.number;
+fn parse_h256(bytes: &[u8]) -> anyhow::Result<H256> {
+    Ok(<[u8; 32]>::try_from(bytes).context("invalid size")?.into())
+}

-        SyncBlock {
-            number: MiniblockNumber(self.number as u32),
-            l1_batch_number: L1BatchNumber(self.l1_batch_number as u32),
-            last_in_batch: self
-                .last_batch_miniblock
-                .map(|n| n == number)
-                .unwrap_or(false),
-            timestamp: self.timestamp as u64,
-            root_hash: self.root_hash.as_deref().map(H256::from_slice),
-            l1_gas_price: self.l1_gas_price as u64,
-            l2_fair_gas_price: self.l2_fair_gas_price as u64,
-            // TODO (SMA-1635): Make these filed non optional in database
+fn parse_h160(bytes: &[u8]) -> anyhow::Result<H160> {
+    Ok(<[u8; 20]>::try_from(bytes).context("invalid size")?.into())
+}
+
+pub(crate) struct SyncBlock {
+    pub number: MiniblockNumber,
+    pub l1_batch_number: L1BatchNumber,
+    pub last_in_batch: bool,
+    pub timestamp: u64,
+    pub l1_gas_price: u64,
+    pub l2_fair_gas_price: u64,
+    pub fair_pubdata_price: Option<u64>,
+    pub base_system_contracts_hashes: BaseSystemContractsHashes,
+    pub fee_account_address: Option<Address>,
+    pub virtual_blocks: u32,
+    pub hash: H256,
+    pub protocol_version: ProtocolVersionId,
+}
+
+impl TryFrom<StorageSyncBlock> for SyncBlock {
+    type Error = anyhow::Error;
+    fn try_from(block: StorageSyncBlock) -> anyhow::Result<Self> {
+        Ok(Self {
+            number: MiniblockNumber(block.number.try_into().context("number")?),
+            l1_batch_number: L1BatchNumber(
+                block
+                    .l1_batch_number
+                    .try_into()
+                    .context("l1_batch_number")?,
+            ),
+            last_in_batch: block.last_batch_miniblock == Some(block.number),
+            timestamp: block.timestamp.try_into().context("timestamp")?,
+            l1_gas_price: block.l1_gas_price.try_into().context("l1_gas_price")?,
+            l2_fair_gas_price: block
+                .l2_fair_gas_price
+                .try_into()
+                .context("l2_fair_gas_price")?,
+            fair_pubdata_price: block
+                .fair_pubdata_price
+                .map(|v| v.try_into().context("fair_pubdata_price"))
+                .transpose()?,
+            // TODO (SMA-1635): Make these fields non optional in database
             base_system_contracts_hashes: BaseSystemContractsHashes {
-                bootloader: self
-                    .bootloader_code_hash
-                    .map(|bootloader_code_hash| H256::from_slice(&bootloader_code_hash))
-                    .expect("Should not be none"),
-                default_aa: self
-                    .default_aa_code_hash
-                    .map(|default_aa_code_hash| H256::from_slice(&default_aa_code_hash))
-                    .expect("Should not be none"),
+                bootloader: parse_h256(
+                    &block
+                        .bootloader_code_hash
+                        .context("bootloader_code_hash should not be none")?,
+                )
+                .context("bootloader_code_hash")?,
+                default_aa: parse_h256(
+                    &block
+                        .default_aa_code_hash
+                        .context("default_aa_code_hash should not be none")?,
+                )
+                .context("default_aa_code_hash")?,
             },
-            operator_address: self
+            fee_account_address: block
                 .fee_account_address
-                .map(|fee_account_address| Address::from_slice(&fee_account_address))
-                .unwrap_or(current_operator_address),
+                .map(|a| parse_h160(&a))
+                .transpose()
+                .context("fee_account_address")?,
+            virtual_blocks: block.virtual_blocks.try_into().context("virtual_blocks")?,
+            hash: parse_h256(&block.hash).context("hash")?,
+            protocol_version: u16::try_from(block.protocol_version)
+                .context("protocol_version")?
+                .try_into()
+                .context("protocol_version")?,
+        })
+    }
+}
+
+impl SyncBlock {
+    pub(crate) fn into_api(
+        self,
+        current_operator_address: Address,
+        transactions: Option<Vec<Transaction>>,
+    ) -> en::SyncBlock {
+        en::SyncBlock {
+            number: self.number,
+            l1_batch_number: self.l1_batch_number,
+            last_in_batch: self.last_in_batch,
+            timestamp: self.timestamp,
+            l1_gas_price: self.l1_gas_price,
+            l2_fair_gas_price: self.l2_fair_gas_price,
+            fair_pubdata_price: self.fair_pubdata_price,
+            base_system_contracts_hashes: self.base_system_contracts_hashes,
+            operator_address: self.fee_account_address.unwrap_or(current_operator_address),
+            transactions,
+            virtual_blocks: Some(self.virtual_blocks),
+            hash: Some(self.hash),
+            protocol_version: self.protocol_version,
+        }
+    }
+
+    pub(crate) fn into_payload(
+        self,
+        current_operator_address: Address,
+        transactions: Vec<Transaction>,
+    ) -> Payload {
+        Payload {
+            protocol_version: self.protocol_version,
+            hash: self.hash,
+            l1_batch_number: self.l1_batch_number,
+            timestamp: self.timestamp,
+            l1_gas_price: self.l1_gas_price,
+            l2_fair_gas_price: self.l2_fair_gas_price,
+            fair_pubdata_price: self.fair_pubdata_price,
+            virtual_blocks: self.virtual_blocks,
+            operator_address: self.fee_account_address.unwrap_or(current_operator_address),
             transactions,
-            virtual_blocks: Some(self.virtual_blocks as u32),
-            hash: Some(H256::from_slice(&self.hash)),
-            protocol_version: (self.protocol_version as u16).try_into().unwrap(),
-            consensus: self.consensus.map(|v| serde_json::from_value(v).unwrap()),
+            last_in_batch: self.last_in_batch,
+        }
+    }
+}
+
+/// L2 block (= miniblock) payload.
+#[derive(Debug, PartialEq)]
+pub struct Payload {
+    pub protocol_version: ProtocolVersionId,
+    pub hash: H256,
+    pub l1_batch_number: L1BatchNumber,
+    pub timestamp: u64,
+    pub l1_gas_price: u64,
+    pub l2_fair_gas_price: u64,
+    pub fair_pubdata_price: Option<u64>,
+    pub virtual_blocks: u32,
+    pub operator_address: Address,
+    pub transactions: Vec<Transaction>,
+    pub last_in_batch: bool,
+}
+
+impl ProtoFmt for Payload {
+    type Proto = super::proto::Payload;
+
+    fn read(message: &Self::Proto) -> anyhow::Result<Self> {
+        let mut transactions = Vec::with_capacity(message.transactions.len());
+        for (i, tx) in message.transactions.iter().enumerate() {
+            transactions.push(
+                required(&tx.json)
+                    .and_then(|json_str| Ok(serde_json::from_str(json_str)?))
+                    .with_context(|| format!("transaction[{i}]"))?,
+            );
+        }
+
+        Ok(Self {
+            protocol_version: required(&message.protocol_version)
+                .and_then(|x| Ok(ProtocolVersionId::try_from(u16::try_from(*x)?)?))
+                .context("protocol_version")?,
+            hash: required(&message.hash)
+                .and_then(|h| parse_h256(h))
+                .context("hash")?,
+            l1_batch_number: L1BatchNumber(
+                *required(&message.l1_batch_number).context("l1_batch_number")?,
+            ),
+            timestamp: *required(&message.timestamp).context("timestamp")?,
+            l1_gas_price: *required(&message.l1_gas_price).context("l1_gas_price")?,
+            l2_fair_gas_price: *required(&message.l2_fair_gas_price)
+                .context("l2_fair_gas_price")?,
+            fair_pubdata_price: message.fair_pubdata_price,
+            virtual_blocks: *required(&message.virtual_blocks).context("virtual_blocks")?,
+            operator_address: required(&message.operator_address)
+                .and_then(|a| parse_h160(a))
+                .context("operator_address")?,
+            transactions,
+            last_in_batch: *required(&message.last_in_batch).context("last_in_batch")?,
+        })
+    }
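+    // Round-trip sketch (illustrative only; `payload` stands for a `Payload` value
+    // constructed elsewhere, and error handling is elided):
+    //
+    //     let raw: validator::Payload = payload.encode();
+    //     let decoded = Payload::decode(&raw)?;
+    //     assert_eq!(decoded, payload); // `Payload` derives `PartialEq`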
+
+    fn build(&self) -> Self::Proto {
+        Self::Proto {
+            protocol_version: Some((self.protocol_version as u16).into()),
+            hash: Some(self.hash.as_bytes().into()),
+            l1_batch_number: Some(self.l1_batch_number.0),
+            timestamp: Some(self.timestamp),
+            l1_gas_price: Some(self.l1_gas_price),
+            l2_fair_gas_price: Some(self.l2_fair_gas_price),
+            fair_pubdata_price: self.fair_pubdata_price,
+            virtual_blocks: Some(self.virtual_blocks),
+            operator_address: Some(self.operator_address.as_bytes().into()),
+            // Transactions are stored in execution order, therefore order is deterministic.
+            transactions: self
+                .transactions
+                .iter()
+                .map(|t| super::proto::Transaction {
+                    // TODO: There is no guarantee that json encoding here will be deterministic.
+                    json: Some(serde_json::to_string(t).unwrap()),
+                })
+                .collect(),
+            last_in_batch: Some(self.last_in_batch),
+        }
+    }
+}
+
+impl Payload {
+    pub fn decode(payload: &validator::Payload) -> anyhow::Result<Self> {
+        zksync_protobuf::decode(&payload.0)
+    }
+
+    pub fn encode(&self) -> validator::Payload {
+        validator::Payload(zksync_protobuf::encode(self))
+    }
 }
diff --git a/core/lib/dal/src/models/storage_token.rs b/core/lib/dal/src/models/storage_token.rs
index 1cc42405fe2..3acd7e03bc9 100644
--- a/core/lib/dal/src/models/storage_token.rs
+++ b/core/lib/dal/src/models/storage_token.rs
@@ -2,7 +2,6 @@ use sqlx::types::{
     chrono::{DateTime, NaiveDateTime, Utc},
     BigDecimal,
 };
-
 use zksync_types::tokens::TokenPrice;
 use zksync_utils::big_decimal_to_ratio;

diff --git a/core/lib/dal/src/models/storage_transaction.rs b/core/lib/dal/src/models/storage_transaction.rs
index 40fd5aa692c..1e252a4b8e4 100644
--- a/core/lib/dal/src/models/storage_transaction.rs
+++ b/core/lib/dal/src/models/storage_transaction.rs
@@ -1,29 +1,30 @@
 use std::{convert::TryInto, str::FromStr};

-use crate::BigDecimal;
 use bigdecimal::Zero;
-
 use serde::{Deserialize, Serialize};
-use sqlx::postgres::PgRow;
-use sqlx::types::chrono::{DateTime, NaiveDateTime, Utc};
-use sqlx::{Error, FromRow, Row};
-
-use zksync_types::l2::TransactionType;
-use zksync_types::protocol_version::ProtocolUpgradeTxCommonData;
-use zksync_types::transaction_request::PaymasterParams;
-use zksync_types::vm_trace::Call;
-use zksync_types::web3::types::U64;
-use zksync_types::{api, Bytes, ExecuteTransactionCommon};
+use sqlx::{
+    postgres::PgRow,
+    types::chrono::{DateTime, NaiveDateTime, Utc},
+    Error, FromRow, Row,
+};
 use zksync_types::{
+    api,
     api::{TransactionDetails, TransactionStatus},
     fee::Fee,
     l1::{OpProcessingType, PriorityQueueType},
-    Address, Execute, L1TxCommonData, L2ChainId, L2TxCommonData, Nonce, PackedEthSignature,
-    PriorityOpId, Transaction, EIP_1559_TX_TYPE, EIP_2930_TX_TYPE, EIP_712_TX_TYPE, H160, H256,
-    PRIORITY_OPERATION_L2_TX_TYPE, PROTOCOL_UPGRADE_TX_TYPE, U256,
+    l2::TransactionType,
+    protocol_version::ProtocolUpgradeTxCommonData,
+    transaction_request::PaymasterParams,
+    vm_trace::Call,
+    web3::types::U64,
+    Address, Bytes, Execute, ExecuteTransactionCommon, L1TxCommonData, L2ChainId, L2TxCommonData,
+    Nonce, PackedEthSignature, PriorityOpId, Transaction, EIP_1559_TX_TYPE, EIP_2930_TX_TYPE,
+    EIP_712_TX_TYPE, H160, H256, PRIORITY_OPERATION_L2_TX_TYPE, PROTOCOL_UPGRADE_TX_TYPE, U256,
 };
 use zksync_utils::bigdecimal_to_u256;

+use crate::BigDecimal;
+
 #[derive(Debug, Clone, sqlx::FromRow)]
 pub struct StorageTransaction {
     pub priority_op_id: Option<i64>,
@@ -118,7 +119,7 @@ impl From<StorageTransaction> for L1TxCommonData {
         // `tx.hash` represents the transaction hash obtained from the execution results,
         // and it should be exactly the same as the canonical tx hash calculated from the
-        // transaction data, so we don't store it as a separate "canonical_tx_hash" field.
+        // transaction data, so we don't store it as a separate `canonical_tx_hash` field.
         let canonical_tx_hash = H256::from_slice(&tx.hash);

         L1TxCommonData {
diff --git a/core/lib/dal/src/models/storage_verification_request.rs b/core/lib/dal/src/models/storage_verification_request.rs
index 47e9abd11db..e6c68ca16fd 100644
--- a/core/lib/dal/src/models/storage_verification_request.rs
+++ b/core/lib/dal/src/models/storage_verification_request.rs
@@ -1,8 +1,10 @@
-use zksync_types::contract_verification_api::{
-    CompilerType, CompilerVersions, SourceCodeData, VerificationIncomingRequest,
-    VerificationRequest,
+use zksync_types::{
+    contract_verification_api::{
+        CompilerType, CompilerVersions, SourceCodeData, VerificationIncomingRequest,
+        VerificationRequest,
+    },
+    Address,
 };
-use zksync_types::Address;

 #[derive(Debug, Clone, sqlx::FromRow)]
 pub struct StorageVerificationRequest {
diff --git a/core/lib/dal/src/models/storage_witness_job_info.rs b/core/lib/dal/src/models/storage_witness_job_info.rs
index 1aa41032cfa..486b9f89681 100644
--- a/core/lib/dal/src/models/storage_witness_job_info.rs
+++ b/core/lib/dal/src/models/storage_witness_job_info.rs
@@ -1,11 +1,13 @@
+use std::{convert::TryFrom, str::FromStr};
+
 use sqlx::types::chrono::{DateTime, NaiveDateTime, NaiveTime, Utc};
-use std::convert::TryFrom;
-use std::str::FromStr;
-use zksync_types::proofs::{
-    AggregationRound, JobPosition, WitnessJobInfo, WitnessJobStatus, WitnessJobStatusFailed,
-    WitnessJobStatusSuccessful,
+use zksync_types::{
+    proofs::{
+        AggregationRound, JobPosition, WitnessJobInfo, WitnessJobStatus, WitnessJobStatusFailed,
+        WitnessJobStatusSuccessful,
+    },
+    L1BatchNumber,
 };
-use zksync_types::L1BatchNumber;

 #[derive(sqlx::FromRow)]
 pub struct StorageWitnessJobInfo {
diff --git a/core/lib/dal/src/proof_generation_dal.rs b/core/lib/dal/src/proof_generation_dal.rs
index d5fd3079dc1..cdcfd70880b 100644
--- a/core/lib/dal/src/proof_generation_dal.rs
+++ b/core/lib/dal/src/proof_generation_dal.rs
@@ -1,10 +1,9 @@
 use std::time::Duration;

+use strum::{Display, EnumString};
 use zksync_types::L1BatchNumber;

-use crate::time_utils::pg_interval_from_duration;
-use crate::{SqlxError, StorageProcessor};
-use strum::{Display, EnumString};
+use crate::{time_utils::pg_interval_from_duration, SqlxError, StorageProcessor};

 #[derive(Debug)]
 pub struct ProofGenerationDal<'a, 'c> {
@@ -30,19 +29,34 @@ impl ProofGenerationDal<'_, '_> {
     ) -> Option<L1BatchNumber> {
         let processing_timeout = pg_interval_from_duration(processing_timeout);
         let result: Option<L1BatchNumber> = sqlx::query!(
-            "UPDATE proof_generation_details \
-             SET status = 'picked_by_prover', updated_at = now(), prover_taken_at = now() \
-             WHERE l1_batch_number = ( \
-                 SELECT l1_batch_number \
-                 FROM proof_generation_details \
-                 WHERE status = 'ready_to_be_proven' \
-                 OR (status = 'picked_by_prover' AND prover_taken_at < now() - $1::interval) \
-                 ORDER BY l1_batch_number ASC \
-                 LIMIT 1 \
-                 FOR UPDATE \
-                 SKIP LOCKED \
-             ) \
-             RETURNING proof_generation_details.l1_batch_number",
+            r#"
+            UPDATE proof_generation_details
+            SET
+                status = 'picked_by_prover',
+                updated_at = NOW(),
+                prover_taken_at = NOW()
+            WHERE
+                l1_batch_number = (
+                    SELECT
+                        l1_batch_number
+                    FROM
+                        proof_generation_details
+                    WHERE
+                        status = 'ready_to_be_proven'
+                        OR (
+                            status = 'picked_by_prover'
+                            AND prover_taken_at < NOW() - $1::INTERVAL
+                        )
+                    ORDER BY
+                        l1_batch_number ASC
+                    LIMIT
+                        1
+                    FOR UPDATE
+                        SKIP LOCKED
+                )
+            RETURNING
+                proof_generation_details.l1_batch_number
+            "#,
             &processing_timeout,
         )
        .fetch_optional(self.storage.conn())
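+    // The `FOR UPDATE SKIP LOCKED` clause above turns this query into a
+    // competing-consumers job queue: concurrent workers each lock a different
+    // `ready_to_be_proven` row instead of blocking on the same one, and a batch
+    // whose `prover_taken_at` is older than `processing_timeout` becomes eligible
+    // for re-assignment. Illustrative example (not from the source): with batches
+    // 10 and 11 ready, two provers polling simultaneously are handed 10 and 11
+    // respectively, rather than both waiting on batch 10.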
@@ -59,9 +73,15 @@ impl ProofGenerationDal<'_, '_> {
         proof_blob_url: &str,
     ) -> Result<(), SqlxError> {
         sqlx::query!(
-            "UPDATE proof_generation_details \
-             SET status='generated', proof_blob_url = $1, updated_at = now() \
-             WHERE l1_batch_number = $2",
+            r#"
+            UPDATE proof_generation_details
+            SET
+                status = 'generated',
+                proof_blob_url = $1,
+                updated_at = NOW()
+            WHERE
+                l1_batch_number = $2
+            "#,
             proof_blob_url,
             block_number.0 as i64,
         )
@@ -79,10 +99,13 @@ impl ProofGenerationDal<'_, '_> {
         proof_gen_data_blob_url: &str,
     ) {
         sqlx::query!(
-            "INSERT INTO proof_generation_details \
-             (l1_batch_number, status, proof_gen_data_blob_url, created_at, updated_at) \
-             VALUES ($1, 'ready_to_be_proven', $2, now(), now()) \
-             ON CONFLICT (l1_batch_number) DO NOTHING",
+            r#"
+            INSERT INTO
+                proof_generation_details (l1_batch_number, status, proof_gen_data_blob_url, created_at, updated_at)
+            VALUES
+                ($1, 'ready_to_be_proven', $2, NOW(), NOW())
+            ON CONFLICT (l1_batch_number) DO NOTHING
+            "#,
             block_number.0 as i64,
             proof_gen_data_blob_url,
         )
@@ -96,9 +119,14 @@ impl ProofGenerationDal<'_, '_> {
         block_number: L1BatchNumber,
     ) -> Result<(), SqlxError> {
         sqlx::query!(
-            "UPDATE proof_generation_details \
-             SET status=$1, updated_at = now() \
-             WHERE l1_batch_number = $2",
+            r#"
+            UPDATE proof_generation_details
+            SET
+                status = $1,
+                updated_at = NOW()
+            WHERE
+                l1_batch_number = $2
+            "#,
             ProofGenerationJobStatus::Skipped.to_string(),
             block_number.0 as i64,
         )
@@ -109,4 +137,50 @@ impl ProofGenerationDal<'_, '_> {
         .then_some(())
         .ok_or(sqlx::Error::RowNotFound)
     }
+
+    pub async fn get_oldest_unpicked_batch(&mut self) -> Option<L1BatchNumber> {
+        let result: Option<L1BatchNumber> = sqlx::query!(
+            r#"
+            SELECT
+                l1_batch_number
+            FROM
+                proof_generation_details
+            WHERE
+                status = 'ready_to_be_proven'
+            ORDER BY
+                l1_batch_number ASC
+            LIMIT
+                1
+            "#,
+        )
+        .fetch_optional(self.storage.conn())
+        .await
+        .unwrap()
+        .map(|row| L1BatchNumber(row.l1_batch_number as u32));
+
+        result
+    }
+
+    pub async fn get_oldest_not_generated_batch(&mut self) -> Option<L1BatchNumber> {
+        let result: Option<L1BatchNumber> = sqlx::query!(
+            r#"
+            SELECT
+                l1_batch_number
+            FROM
+                proof_generation_details
+            WHERE
+                status NOT IN ('generated', 'skipped')
+            ORDER BY
+                l1_batch_number ASC
+            LIMIT
+                1
+            "#,
+        )
+        .fetch_optional(self.storage.conn())
+        .await
+        .unwrap()
+        .map(|row| L1BatchNumber(row.l1_batch_number as u32));
+
+        result
+    }
 }
diff --git a/core/lib/dal/src/protocol_versions_dal.rs b/core/lib/dal/src/protocol_versions_dal.rs
index dde7574d390..8aad040221c 100644
--- a/core/lib/dal/src/protocol_versions_dal.rs
+++ b/core/lib/dal/src/protocol_versions_dal.rs
@@ -1,14 +1,15 @@
-use std::convert::{TryFrom, TryInto};
+use std::convert::TryInto;
+
 use zksync_contracts::{BaseSystemContracts, BaseSystemContractsHashes};
 use zksync_types::{
     protocol_version::{L1VerifierConfig, ProtocolUpgradeTx, ProtocolVersion, VerifierParams},
     Address, ProtocolVersionId, H256,
 };

-use crate::models::storage_protocol_version::{
-    protocol_version_from_storage, StorageProtocolVersion,
+use crate::{
+    models::storage_protocol_version::{protocol_version_from_storage, StorageProtocolVersion},
+    StorageProcessor,
 };
-use crate::StorageProcessor;

 #[derive(Debug)]
 pub struct ProtocolVersionsDal<'a, 'c> {
@@ -26,25 +27,49 @@ impl ProtocolVersionsDal<'_, '_> {
         tx_hash: Option<H256>,
     ) {
         sqlx::query!(
-            "INSERT INTO protocol_versions \
-            (id, timestamp, recursion_scheduler_level_vk_hash, recursion_node_level_vk_hash, \
-            recursion_leaf_level_vk_hash, recursion_circuits_set_vks_hash, bootloader_code_hash, \
-            default_account_code_hash, verifier_address, upgrade_tx_hash, created_at) \
-            VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, now())",
-            id as i32,
-            timestamp as i64,
-            l1_verifier_config.recursion_scheduler_level_vk_hash.as_bytes(),
-            l1_verifier_config.params.recursion_node_level_vk_hash.as_bytes(),
-            l1_verifier_config.params.recursion_leaf_level_vk_hash.as_bytes(),
-            l1_verifier_config.params.recursion_circuits_set_vks_hash.as_bytes(),
-            base_system_contracts_hashes.bootloader.as_bytes(),
-            base_system_contracts_hashes.default_aa.as_bytes(),
-            verifier_address.as_bytes(),
-            tx_hash.map(|tx_hash| tx_hash.0.to_vec()),
-        )
-        .execute(self.storage.conn())
-        .await
-        .unwrap();
+            r#"
+            INSERT INTO
+                protocol_versions (
+                    id,
+                    timestamp,
+                    recursion_scheduler_level_vk_hash,
+                    recursion_node_level_vk_hash,
+                    recursion_leaf_level_vk_hash,
+                    recursion_circuits_set_vks_hash,
+                    bootloader_code_hash,
+                    default_account_code_hash,
+                    verifier_address,
+                    upgrade_tx_hash,
+                    created_at
+                )
+            VALUES
+                ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, NOW())
+            "#,
+            id as i32,
+            timestamp as i64,
+            l1_verifier_config
+                .recursion_scheduler_level_vk_hash
+                .as_bytes(),
+            l1_verifier_config
+                .params
+                .recursion_node_level_vk_hash
+                .as_bytes(),
+            l1_verifier_config
+                .params
+                .recursion_leaf_level_vk_hash
+                .as_bytes(),
+            l1_verifier_config
+                .params
+                .recursion_circuits_set_vks_hash
+                .as_bytes(),
+            base_system_contracts_hashes.bootloader.as_bytes(),
+            base_system_contracts_hashes.default_aa.as_bytes(),
+            verifier_address.as_bytes(),
+            tx_hash.map(|tx_hash| tx_hash.0.to_vec()),
+        )
+        .execute(self.storage.conn())
+        .await
+        .unwrap();
     }

     pub async fn save_protocol_version_with_tx(&mut self, version: ProtocolVersion) {
@@ -73,36 +98,25 @@ impl ProtocolVersionsDal<'_, '_> {
         db_transaction.commit().await.unwrap();
     }

-    pub async fn save_prover_protocol_version(&mut self, version: ProtocolVersion) {
-        sqlx::query!(
-            "INSERT INTO prover_protocol_versions
-                (id, timestamp, recursion_scheduler_level_vk_hash, recursion_node_level_vk_hash,
-                recursion_leaf_level_vk_hash, recursion_circuits_set_vks_hash, verifier_address, created_at)
-            VALUES ($1, $2, $3, $4, $5, $6, $7, now())
-            ",
-            version.id as i32,
-            version.timestamp as i64,
-            version.l1_verifier_config.recursion_scheduler_level_vk_hash.as_bytes(),
-            version.l1_verifier_config.params.recursion_node_level_vk_hash.as_bytes(),
-            version.l1_verifier_config.params.recursion_leaf_level_vk_hash.as_bytes(),
-            version.l1_verifier_config.params.recursion_circuits_set_vks_hash.as_bytes(),
-            version.verifier_address.as_bytes(),
-        )
-        .execute(self.storage.conn())
-        .await
-        .unwrap();
-    }
-
     pub async fn base_system_contracts_by_timestamp(
         &mut self,
         current_timestamp: u64,
     ) -> (BaseSystemContracts, ProtocolVersionId) {
         let row = sqlx::query!(
-            "SELECT bootloader_code_hash, default_account_code_hash, id FROM protocol_versions
-                WHERE timestamp <= $1
-                ORDER BY id DESC
-                LIMIT 1
-            ",
+            r#"
+            SELECT
+                bootloader_code_hash,
+                default_account_code_hash,
+                id
+            FROM
+                protocol_versions
+            WHERE
+                timestamp <= $1
+            ORDER BY
+                id DESC
+            LIMIT
+                1
+            "#,
             current_timestamp as i64
         )
         .fetch_one(self.storage.conn())
         .await
@@ -124,9 +138,15 @@ impl ProtocolVersionsDal<'_, '_> {
         version_id: u16,
     ) -> Option<BaseSystemContracts> {
         let row = sqlx::query!(
-            "SELECT bootloader_code_hash, default_account_code_hash FROM protocol_versions
-                WHERE id = $1
-            ",
+            r#"
+            SELECT
+                bootloader_code_hash,
+                default_account_code_hash
+            FROM
+                protocol_versions
+            WHERE
+                id = $1
+            "#,
             version_id as i32
         )
         .fetch_optional(self.storage.conn())
@@ -153,11 +173,18 @@ impl ProtocolVersionsDal<'_, '_> {
     ) -> Option<ProtocolVersion> {
         let storage_protocol_version: StorageProtocolVersion = sqlx::query_as!(
             StorageProtocolVersion,
-            "SELECT * FROM protocol_versions
-                WHERE id < $1
-                ORDER BY id DESC
-                LIMIT 1
-            ",
+            r#"
+            SELECT
+                *
+            FROM
+                protocol_versions
+            WHERE
+                id < $1
+            ORDER BY
+                id DESC
+            LIMIT
+                1
+            "#,
             version_id as i32
         )
         .fetch_optional(self.storage.conn())
@@ -176,7 +203,14 @@ impl ProtocolVersionsDal<'_, '_> {
     ) -> Option<ProtocolVersion> {
         let storage_protocol_version: StorageProtocolVersion = sqlx::query_as!(
             StorageProtocolVersion,
-            "SELECT * FROM protocol_versions WHERE id = $1",
+            r#"
+            SELECT
+                *
+            FROM
+                protocol_versions
+            WHERE
+                id = $1
+            "#,
             version_id as i32
         )
         .fetch_optional(self.storage.conn())
@@ -192,15 +226,22 @@ impl ProtocolVersionsDal<'_, '_> {
         version_id: ProtocolVersionId,
     ) -> Option<L1VerifierConfig> {
         let row = sqlx::query!(
-            "SELECT recursion_scheduler_level_vk_hash, recursion_node_level_vk_hash, recursion_leaf_level_vk_hash, recursion_circuits_set_vks_hash
-                FROM protocol_versions
-                WHERE id = $1
-            ",
+            r#"
+            SELECT
+                recursion_scheduler_level_vk_hash,
+                recursion_node_level_vk_hash,
+                recursion_leaf_level_vk_hash,
+                recursion_circuits_set_vks_hash
+            FROM
+                protocol_versions
+            WHERE
+                id = $1
+            "#,
             version_id as i32
         )
-        .fetch_optional(self.storage.conn())
-        .await
-        .unwrap()?;
+        .fetch_optional(self.storage.conn())
+        .await
+        .unwrap()?;
         Some(L1VerifierConfig {
             params: VerifierParams {
                 recursion_node_level_vk_hash: H256::from_slice(&row.recursion_node_level_vk_hash),
@@ -216,19 +257,54 @@ impl ProtocolVersionsDal<'_, '_> {
     }

     pub async fn last_version_id(&mut self) -> Option<ProtocolVersionId> {
-        let id = sqlx::query!(r#"SELECT MAX(id) as "max?" FROM protocol_versions"#)
-            .fetch_optional(self.storage.conn())
-            .await
-            .unwrap()?
-            .max?;
+        let id = sqlx::query!(
+            r#"
+            SELECT
+                MAX(id) AS "max?"
+            FROM
+                protocol_versions
+            "#
+        )
+        .fetch_optional(self.storage.conn())
+        .await
+        .unwrap()?
+        .max?;
+        Some((id as u16).try_into().unwrap())
+    }
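+    // Note: unlike `last_version_id`, which reports the newest version *registered*
+    // in `protocol_versions`, the method below reports the protocol version of the
+    // most recent L1 batch, i.e. the version actually in use by the chain.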
+
+    pub async fn last_used_version_id(&mut self) -> Option<ProtocolVersionId> {
+        let id = sqlx::query!(
+            r#"
+            SELECT
+                protocol_version
+            FROM
+                l1_batches
+            ORDER BY
+                number DESC
+            LIMIT
+                1
+            "#
+        )
+        .fetch_optional(self.storage.conn())
+        .await
+        .unwrap()?
+        .protocol_version?;
+
         Some((id as u16).try_into().unwrap())
     }

     pub async fn all_version_ids(&mut self) -> Vec<ProtocolVersionId> {
-        let rows = sqlx::query!("SELECT id FROM protocol_versions")
-            .fetch_all(self.storage.conn())
-            .await
-            .unwrap();
+        let rows = sqlx::query!(
+            r#"
+            SELECT
+                id
+            FROM
+                protocol_versions
+            "#
+        )
+        .fetch_all(self.storage.conn())
+        .await
+        .unwrap();
         rows.into_iter()
             .map(|row| (row.id as u16).try_into().unwrap())
             .collect()
     }
@@ -239,10 +315,14 @@ impl ProtocolVersionsDal<'_, '_> {
         protocol_version_id: ProtocolVersionId,
     ) -> Option<ProtocolUpgradeTx> {
         let row = sqlx::query!(
-            "
-            SELECT upgrade_tx_hash FROM protocol_versions
-            WHERE id = $1
-            ",
+            r#"
+            SELECT
+                upgrade_tx_hash
+            FROM
+                protocol_versions
+            WHERE
+                id = $1
+            "#,
             protocol_version_id as i32
         )
         .fetch_optional(self.storage.conn())
@@ -267,52 +347,4 @@ impl ProtocolVersionsDal<'_, '_> {
             None
         }
     }
-
-    pub async fn protocol_version_for(
-        &mut self,
-        vk_commitments: &L1VerifierConfig,
-    ) -> Vec<ProtocolVersionId> {
-        sqlx::query!(
-            r#"
-            SELECT id
-            FROM prover_protocol_versions
-            WHERE recursion_circuits_set_vks_hash = $1
-                AND recursion_leaf_level_vk_hash = $2
-                AND recursion_node_level_vk_hash = $3
-                AND recursion_scheduler_level_vk_hash = $4
-            "#,
-            vk_commitments
-                .params
-                .recursion_circuits_set_vks_hash
-                .as_bytes(),
-            vk_commitments
-                .params
-                .recursion_leaf_level_vk_hash
-                .as_bytes(),
-            vk_commitments
-                .params
-                .recursion_node_level_vk_hash
-                .as_bytes(),
-            vk_commitments.recursion_scheduler_level_vk_hash.as_bytes(),
-        )
-        .fetch_all(self.storage.conn())
-        .await
-        .unwrap()
-        .into_iter()
-        .map(|row| ProtocolVersionId::try_from(row.id as u16).unwrap())
-        .collect()
-    }
-
-    pub async fn prover_protocol_version_exists(&mut self, id: ProtocolVersionId) -> bool {
-        sqlx::query!(
-            "SELECT COUNT(*) as \"count!\" FROM prover_protocol_versions \
-             WHERE id = $1",
-            id as i32
-        )
-        .fetch_one(self.storage.conn())
-        .await
-        .unwrap()
-        .count
-            > 0
-    }
 }
diff --git a/core/lib/dal/src/protocol_versions_web3_dal.rs b/core/lib/dal/src/protocol_versions_web3_dal.rs
index dc43dadbd22..7c4b4d256a2 100644
--- a/core/lib/dal/src/protocol_versions_web3_dal.rs
+++ b/core/lib/dal/src/protocol_versions_web3_dal.rs
@@ -1,7 +1,6 @@
 use zksync_types::api::ProtocolVersion;

-use crate::models::storage_protocol_version::StorageProtocolVersion;
-use crate::StorageProcessor;
+use crate::{models::storage_protocol_version::StorageProtocolVersion, StorageProcessor};

 #[derive(Debug)]
 pub struct ProtocolVersionsWeb3Dal<'a, 'c> {
@@ -12,9 +11,14 @@ impl ProtocolVersionsWeb3Dal<'_, '_> {
     pub async fn get_protocol_version_by_id(&mut self, version_id: u16) -> Option<ProtocolVersion> {
         let storage_protocol_version: Option<StorageProtocolVersion> = sqlx::query_as!(
             StorageProtocolVersion,
-            "SELECT * FROM protocol_versions
-            WHERE id = $1
-            ",
+            r#"
+            SELECT
+                *
+            FROM
+                protocol_versions
+            WHERE
+                id = $1
+            "#,
             version_id as i32
         )
         .fetch_optional(self.storage.conn())
@@ -27,7 +31,16 @@ impl ProtocolVersionsWeb3Dal<'_, '_> {
     pub async fn get_latest_protocol_version(&mut self) -> ProtocolVersion {
         let storage_protocol_version: StorageProtocolVersion = sqlx::query_as!(
             StorageProtocolVersion,
-            "SELECT * FROM protocol_versions ORDER BY id DESC LIMIT 1",
+            r#"
+            SELECT
+                *
+            FROM
+                protocol_versions
+            ORDER BY
+                id DESC
+            LIMIT
+                1
+            "#,
         )
         .fetch_one(self.storage.conn())
         .await
diff --git a/core/lib/dal/src/prover_dal.rs b/core/lib/dal/src/prover_dal.rs
deleted file mode 100644
index d84d0628372..00000000000
--- a/core/lib/dal/src/prover_dal.rs
+++ /dev/null
@@ -1,627 +0,0 @@
-use sqlx::Error;
-
-use std::{
-
collections::HashMap, - convert::{TryFrom, TryInto}, - ops::Range, - time::Duration, -}; - -use zksync_types::{ - aggregated_operations::L1BatchProofForL1, - proofs::{ - AggregationRound, JobCountStatistics, JobExtendedStatistics, ProverJobInfo, - ProverJobMetadata, - }, - zkevm_test_harness::{ - abstract_zksync_circuit::concrete_circuits::ZkSyncProof, bellman::bn256::Bn256, - }, - L1BatchNumber, ProtocolVersionId, -}; - -use crate::{ - instrument::InstrumentExt, - models::storage_prover_job_info::StorageProverJobInfo, - time_utils::{duration_to_naive_time, pg_interval_from_duration}, - StorageProcessor, -}; - -#[derive(Debug)] -pub struct ProverDal<'a, 'c> { - pub(crate) storage: &'a mut StorageProcessor<'c>, -} - -impl ProverDal<'_, '_> { - pub async fn get_next_prover_job( - &mut self, - protocol_versions: &[ProtocolVersionId], - ) -> Option { - let protocol_versions: Vec = protocol_versions.iter().map(|&id| id as i32).collect(); - let result: Option = sqlx::query!( - " - UPDATE prover_jobs - SET status = 'in_progress', attempts = attempts + 1, - updated_at = now(), processing_started_at = now() - WHERE id = ( - SELECT id - FROM prover_jobs - WHERE status = 'queued' - AND protocol_version = ANY($1) - ORDER BY aggregation_round DESC, l1_batch_number ASC, id ASC - LIMIT 1 - FOR UPDATE - SKIP LOCKED - ) - RETURNING prover_jobs.* - ", - &protocol_versions[..] - ) - .fetch_optional(self.storage.conn()) - .await - .unwrap() - .map(|row| ProverJobMetadata { - id: row.id as u32, - block_number: L1BatchNumber(row.l1_batch_number as u32), - circuit_type: row.circuit_type, - aggregation_round: AggregationRound::try_from(row.aggregation_round).unwrap(), - sequence_number: row.sequence_number as usize, - }); - result - } - - pub async fn get_proven_l1_batches(&mut self) -> Vec<(L1BatchNumber, AggregationRound)> { - { - sqlx::query!( - r#"SELECT MAX(l1_batch_number) as "l1_batch_number!", aggregation_round FROM prover_jobs - WHERE status='successful' - GROUP BY aggregation_round - "# - ) - .fetch_all(self.storage.conn()) - .await - .unwrap() - .into_iter() - .map(|record| { - ( - L1BatchNumber(record.l1_batch_number as u32), - record.aggregation_round.try_into().unwrap(), - ) - }) - .collect() - } - } - - pub async fn get_next_prover_job_by_circuit_types( - &mut self, - circuit_types: Vec, - protocol_versions: &[ProtocolVersionId], - ) -> Option { - { - let protocol_versions: Vec = - protocol_versions.iter().map(|&id| id as i32).collect(); - let result: Option = sqlx::query!( - " - UPDATE prover_jobs - SET status = 'in_progress', attempts = attempts + 1, - updated_at = now(), processing_started_at = now() - WHERE id = ( - SELECT id - FROM prover_jobs - WHERE circuit_type = ANY($1) - AND status = 'queued' - AND protocol_version = ANY($2) - ORDER BY aggregation_round DESC, l1_batch_number ASC, id ASC - LIMIT 1 - FOR UPDATE - SKIP LOCKED - ) - RETURNING prover_jobs.* - ", - &circuit_types[..], - &protocol_versions[..] - ) - .fetch_optional(self.storage.conn()) - .await - .unwrap() - .map(|row| ProverJobMetadata { - id: row.id as u32, - block_number: L1BatchNumber(row.l1_batch_number as u32), - circuit_type: row.circuit_type, - aggregation_round: AggregationRound::try_from(row.aggregation_round).unwrap(), - sequence_number: row.sequence_number as usize, - }); - - result - } - } - - // If making changes to this method, consider moving the serialization logic to the DAL layer. 
- pub async fn insert_prover_jobs( - &mut self, - l1_batch_number: L1BatchNumber, - circuit_types_and_urls: Vec<(&'static str, String)>, - aggregation_round: AggregationRound, - protocol_version: i32, - ) { - { - let it = circuit_types_and_urls.into_iter().enumerate(); - for (sequence_number, (circuit, circuit_input_blob_url)) in it { - sqlx::query!( - " - INSERT INTO prover_jobs (l1_batch_number, circuit_type, sequence_number, prover_input, aggregation_round, circuit_input_blob_url, protocol_version, status, created_at, updated_at) - VALUES ($1, $2, $3, $4, $5, $6, $7, 'queued', now(), now()) - ON CONFLICT(l1_batch_number, aggregation_round, sequence_number) DO NOTHING - ", - l1_batch_number.0 as i64, - circuit, - sequence_number as i64, - &[] as &[u8], - aggregation_round as i64, - circuit_input_blob_url, - protocol_version - ) - .instrument("save_witness") - .report_latency() - .with_arg("l1_batch_number", &l1_batch_number) - .with_arg("circuit", &circuit) - .with_arg("circuit_input_blob_url", &circuit_input_blob_url) - .execute(self.storage.conn()) - .await - .unwrap(); - } - } - } - - pub async fn save_proof( - &mut self, - id: u32, - time_taken: Duration, - proof: Vec, - proccesed_by: &str, - ) -> Result<(), Error> { - { - sqlx::query!( - " - UPDATE prover_jobs - SET status = 'successful', updated_at = now(), time_taken = $1, result = $2, proccesed_by = $3 - WHERE id = $4 - ", - duration_to_naive_time(time_taken), - &proof, - proccesed_by, - id as i64, - ) - .instrument("save_proof") - .report_latency() - .with_arg("id", &id) - .with_arg("proof.len", &proof.len()) - .execute(self.storage.conn()) - .await?; - } - Ok(()) - } - - pub async fn save_proof_error( - &mut self, - id: u32, - error: String, - max_attempts: u32, - ) -> Result<(), Error> { - { - let mut transaction = self.storage.start_transaction().await.unwrap(); - - let row = sqlx::query!( - " - UPDATE prover_jobs - SET status = 'failed', error = $1, updated_at = now() - WHERE id = $2 - RETURNING l1_batch_number, attempts - ", - error, - id as i64, - ) - .fetch_one(transaction.conn()) - .await?; - - if row.attempts as u32 >= max_attempts { - transaction - .blocks_dal() - .set_skip_proof_for_l1_batch(L1BatchNumber(row.l1_batch_number as u32)) - .await - .unwrap(); - } - - transaction.commit().await.unwrap(); - Ok(()) - } - } - - pub async fn requeue_stuck_jobs( - &mut self, - processing_timeout: Duration, - max_attempts: u32, - ) -> Vec { - let processing_timeout = pg_interval_from_duration(processing_timeout); - { - sqlx::query!( - " - UPDATE prover_jobs - SET status = 'queued', updated_at = now(), processing_started_at = now() - WHERE (status = 'in_progress' AND processing_started_at <= now() - $1::interval AND attempts < $2) - OR (status = 'in_gpu_proof' AND processing_started_at <= now() - $1::interval AND attempts < $2) - OR (status = 'failed' AND attempts < $2) - RETURNING id, status, attempts - ", - &processing_timeout, - max_attempts as i32, - ) - .fetch_all(self.storage.conn()) - .await - .unwrap() - .into_iter() - .map(|row| StuckProverJobs{id: row.id as u64, status: row.status, attempts: row.attempts as u64}) - .collect() - } - } - - // For each block in the provided range it returns a tuple: - // (aggregation_coords; scheduler_proof) - pub async fn get_final_proofs_for_blocks( - &mut self, - from_block: L1BatchNumber, - to_block: L1BatchNumber, - ) -> Vec { - { - sqlx::query!( - "SELECT prover_jobs.result as proof, scheduler_witness_jobs.aggregation_result_coords - FROM prover_jobs - INNER JOIN 
scheduler_witness_jobs - ON prover_jobs.l1_batch_number = scheduler_witness_jobs.l1_batch_number - WHERE prover_jobs.l1_batch_number >= $1 AND prover_jobs.l1_batch_number <= $2 - AND prover_jobs.aggregation_round = 3 - AND prover_jobs.status = 'successful' - ", - from_block.0 as i32, - to_block.0 as i32 - ) - .fetch_all(self.storage.conn()) - .await - .unwrap() - .into_iter() - .map(|row| { - let deserialized_proof = bincode::deserialize::>( - &row.proof - .expect("prove_job with `successful` status has no result"), - ).expect("cannot deserialize proof"); - let deserialized_aggregation_result_coords = bincode::deserialize::<[[u8; 32]; 4]>( - &row.aggregation_result_coords - .expect("scheduler_witness_job with `successful` status has no aggregation_result_coords"), - ).expect("cannot deserialize proof"); - L1BatchProofForL1 { - aggregation_result_coords: deserialized_aggregation_result_coords, - scheduler_proof: ZkSyncProof::into_proof(deserialized_proof), - } - }) - .collect() - } - } - - pub async fn get_prover_jobs_stats_per_circuit( - &mut self, - ) -> HashMap { - { - sqlx::query!( - r#" - SELECT COUNT(*) as "count!", circuit_type as "circuit_type!", status as "status!" - FROM prover_jobs - WHERE status <> 'skipped' and status <> 'successful' - GROUP BY circuit_type, status - "# - ) - .fetch_all(self.storage.conn()) - .await - .unwrap() - .into_iter() - .map(|row| (row.circuit_type, row.status, row.count as usize)) - .fold(HashMap::new(), |mut acc, (circuit_type, status, value)| { - let stats = acc.entry(circuit_type).or_insert(JobCountStatistics { - queued: 0, - in_progress: 0, - failed: 0, - successful: 0, - }); - match status.as_ref() { - "queued" => stats.queued = value, - "in_progress" => stats.in_progress = value, - "failed" => stats.failed = value, - "successful" => stats.successful = value, - _ => (), - } - acc - }) - } - } - - pub async fn get_prover_jobs_stats(&mut self) -> JobCountStatistics { - { - let mut results: HashMap = sqlx::query!( - r#" - SELECT COUNT(*) as "count!", status as "status!" - FROM prover_jobs - GROUP BY status - "# - ) - .fetch_all(self.storage.conn()) - .await - .unwrap() - .into_iter() - .map(|row| (row.status, row.count as usize)) - .collect::>(); - JobCountStatistics { - queued: results.remove("queued").unwrap_or(0usize), - in_progress: results.remove("in_progress").unwrap_or(0usize), - failed: results.remove("failed").unwrap_or(0usize), - successful: results.remove("successful").unwrap_or(0usize), - } - } - } - - pub async fn min_unproved_l1_batch_number(&mut self) -> Option { - { - sqlx::query!( - r#" - SELECT MIN(l1_batch_number) as "l1_batch_number?" 
FROM ( - SELECT MIN(l1_batch_number) as "l1_batch_number" - FROM prover_jobs - WHERE status = 'successful' OR aggregation_round < 3 - GROUP BY l1_batch_number - HAVING MAX(aggregation_round) < 3 - ) as inn - "# - ) - .fetch_one(self.storage.conn()) - .await - .unwrap() - .l1_batch_number - .map(|n| L1BatchNumber(n as u32)) - } - } - - pub async fn min_unproved_l1_batch_number_by_basic_circuit_type( - &mut self, - ) -> Vec<(String, L1BatchNumber)> { - { - sqlx::query!( - r#" - SELECT MIN(l1_batch_number) as "l1_batch_number!", circuit_type - FROM prover_jobs - WHERE aggregation_round = 0 AND (status = 'queued' OR status = 'in_progress' - OR status = 'in_gpu_proof' - OR status = 'failed') - GROUP BY circuit_type - "# - ) - .fetch_all(self.storage.conn()) - .await - .unwrap() - .into_iter() - .map(|row| (row.circuit_type, L1BatchNumber(row.l1_batch_number as u32))) - .collect() - } - } - - pub async fn get_extended_stats(&mut self) -> anyhow::Result { - { - let limits = sqlx::query!( - r#" - SELECT - (SELECT l1_batch_number - FROM prover_jobs - WHERE status NOT IN ('successful', 'skipped') - ORDER BY l1_batch_number - LIMIT 1) as "successful_limit!", - - (SELECT l1_batch_number - FROM prover_jobs - WHERE status <> 'queued' - ORDER BY l1_batch_number DESC - LIMIT 1) as "queued_limit!", - - (SELECT MAX(l1_batch_number) as "max!" FROM prover_jobs) as "max_block!" - "# - ) - .fetch_one(self.storage.conn()) - .await?; - - let active_area = self - .get_jobs(GetProverJobsParams::blocks( - L1BatchNumber(limits.successful_limit as u32) - ..L1BatchNumber(limits.queued_limit as u32), - )) - .await?; - - Ok(JobExtendedStatistics { - successful_padding: L1BatchNumber(limits.successful_limit as u32 - 1), - queued_padding: L1BatchNumber(limits.queued_limit as u32 + 1), - queued_padding_len: (limits.max_block - limits.queued_limit) as u32, - active_area, - }) - } - } - - pub async fn get_jobs( - &mut self, - opts: GetProverJobsParams, - ) -> Result, sqlx::Error> { - let statuses = opts - .statuses - .map(|ss| { - { - // Until statuses are enums - let whitelist = ["queued", "in_progress", "successful", "failed"]; - if !ss.iter().all(|x| whitelist.contains(&x.as_str())) { - panic!("Forbidden value in statuses list.") - } - } - - format!( - "AND status IN ({})", - ss.iter() - .map(|x| format!("'{}'", x)) - .collect::>() - .join(",") - ) - }) - .unwrap_or_default(); - - let block_range = opts - .blocks - .as_ref() - .map(|range| { - format!( - "AND l1_batch_number >= {} - AND l1_batch_number <= {}", - range.start.0, range.end.0 - ) - }) - .unwrap_or_default(); - - let round = opts - .round - .map(|round| format!("AND aggregation_round = {}", round as u32)) - .unwrap_or_default(); - - let order = match opts.desc { - true => "DESC", - false => "ASC", - }; - - let limit = opts - .limit - .map(|limit| format!("LIMIT {}", limit)) - .unwrap_or_default(); - - let sql = format!( - r#" - SELECT - id, - circuit_type, - l1_batch_number, - status, - aggregation_round, - sequence_number, - length(prover_input) as input_length, - attempts, - created_at, - updated_at, - processing_started_at, - time_taken, - error - FROM prover_jobs - WHERE 1 = 1 -- Where clause can't be empty - {statuses} - {block_range} - {round} - ORDER BY "id" {order} - {limit} - "# - ); - - let query = sqlx::query_as(&sql); - - Ok(query - .fetch_all(self.storage.conn()) - .await? 
- .into_iter() - .map(|x: StorageProverJobInfo| x.into()) - .collect::>()) - } - - pub async fn get_prover_job_by_id( - &mut self, - job_id: u32, - ) -> Result, Error> { - { - let row = sqlx::query!("SELECT * from prover_jobs where id=$1", job_id as i64) - .fetch_optional(self.storage.conn()) - .await?; - - Ok(row.map(|row| ProverJobMetadata { - id: row.id as u32, - block_number: L1BatchNumber(row.l1_batch_number as u32), - circuit_type: row.circuit_type, - aggregation_round: AggregationRound::try_from(row.aggregation_round).unwrap(), - sequence_number: row.sequence_number as usize, - })) - } - } - - pub async fn get_circuit_input_blob_urls_to_be_cleaned( - &mut self, - limit: u8, - ) -> Vec<(i64, String)> { - { - let job_ids = sqlx::query!( - r#" - SELECT id, circuit_input_blob_url FROM prover_jobs - WHERE status='successful' - AND circuit_input_blob_url is NOT NULL - AND updated_at < NOW() - INTERVAL '30 days' - LIMIT $1; - "#, - limit as i32 - ) - .fetch_all(self.storage.conn()) - .await - .unwrap(); - job_ids - .into_iter() - .map(|row| (row.id, row.circuit_input_blob_url.unwrap())) - .collect() - } - } - - pub async fn update_status(&mut self, id: u32, status: &str) { - { - sqlx::query!( - r#" - UPDATE prover_jobs - SET status = $1, updated_at = now() - WHERE id = $2 - "#, - status, - id as i64, - ) - .execute(self.storage.conn()) - .await - .unwrap(); - } - } -} - -pub struct GetProverJobsParams { - pub statuses: Option>, - pub blocks: Option>, - pub limit: Option, - pub desc: bool, - pub round: Option, -} - -impl GetProverJobsParams { - pub fn blocks(range: Range) -> GetProverJobsParams { - GetProverJobsParams { - blocks: Some(range), - statuses: None, - limit: None, - desc: false, - round: None, - } - } -} - -#[derive(Debug)] -pub struct StuckProverJobs { - pub id: u64, - pub status: String, - pub attempts: u64, -} diff --git a/core/lib/dal/src/snapshot_recovery_dal.rs b/core/lib/dal/src/snapshot_recovery_dal.rs new file mode 100644 index 00000000000..abf6ceb4406 --- /dev/null +++ b/core/lib/dal/src/snapshot_recovery_dal.rs @@ -0,0 +1,135 @@ +use zksync_types::{snapshots::SnapshotRecoveryStatus, L1BatchNumber, MiniblockNumber, H256}; + +use crate::StorageProcessor; + +#[derive(Debug)] +pub struct SnapshotRecoveryDal<'a, 'c> { + pub(crate) storage: &'a mut StorageProcessor<'c>, +} + +impl SnapshotRecoveryDal<'_, '_> { + pub async fn set_applied_snapshot_status( + &mut self, + status: &SnapshotRecoveryStatus, + ) -> sqlx::Result<()> { + sqlx::query!( + r#" + INSERT INTO + snapshot_recovery ( + l1_batch_number, + l1_batch_root_hash, + miniblock_number, + miniblock_root_hash, + last_finished_chunk_id, + total_chunk_count, + updated_at, + created_at + ) + VALUES + ($1, $2, $3, $4, $5, $6, NOW(), NOW()) + ON CONFLICT (l1_batch_number) DO + UPDATE + SET + l1_batch_number = excluded.l1_batch_number, + l1_batch_root_hash = excluded.l1_batch_root_hash, + miniblock_number = excluded.miniblock_number, + miniblock_root_hash = excluded.miniblock_root_hash, + last_finished_chunk_id = excluded.last_finished_chunk_id, + total_chunk_count = excluded.total_chunk_count, + updated_at = excluded.updated_at + "#, + status.l1_batch_number.0 as i64, + status.l1_batch_root_hash.0.as_slice(), + status.miniblock_number.0 as i64, + status.miniblock_root_hash.0.as_slice(), + status.last_finished_chunk_id.map(|v| v as i32), + status.total_chunk_count as i64, + ) + .execute(self.storage.conn()) + .await?; + Ok(()) + } + + pub async fn get_applied_snapshot_status( + &mut self, + ) -> sqlx::Result> { + let record = 
sqlx::query!( + r#" + SELECT + l1_batch_number, + l1_batch_root_hash, + miniblock_number, + miniblock_root_hash, + last_finished_chunk_id, + total_chunk_count + FROM + snapshot_recovery + "#, + ) + .fetch_optional(self.storage.conn()) + .await?; + + Ok(record.map(|r| SnapshotRecoveryStatus { + l1_batch_number: L1BatchNumber(r.l1_batch_number as u32), + l1_batch_root_hash: H256::from_slice(&r.l1_batch_root_hash), + miniblock_number: MiniblockNumber(r.miniblock_number as u32), + miniblock_root_hash: H256::from_slice(&r.miniblock_root_hash), + last_finished_chunk_id: r.last_finished_chunk_id.map(|v| v as u64), + total_chunk_count: r.total_chunk_count as u64, + })) + } +} + +#[cfg(test)] +mod tests { + use zksync_types::{snapshots::SnapshotRecoveryStatus, L1BatchNumber, MiniblockNumber, H256}; + + use crate::ConnectionPool; + + #[tokio::test] + async fn manipulating_snapshot_recovery_table() { + let connection_pool = ConnectionPool::test_pool().await; + let mut conn = connection_pool.access_storage().await.unwrap(); + let mut applied_status_dal = conn.snapshot_recovery_dal(); + let empty_status = applied_status_dal + .get_applied_snapshot_status() + .await + .unwrap(); + assert_eq!(None, empty_status); + let status = SnapshotRecoveryStatus { + l1_batch_number: L1BatchNumber(123), + l1_batch_root_hash: H256::random(), + miniblock_number: MiniblockNumber(234), + miniblock_root_hash: H256::random(), + last_finished_chunk_id: None, + total_chunk_count: 345, + }; + applied_status_dal + .set_applied_snapshot_status(&status) + .await + .unwrap(); + let status_from_db = applied_status_dal + .get_applied_snapshot_status() + .await + .unwrap(); + assert_eq!(Some(status), status_from_db); + + let updated_status = SnapshotRecoveryStatus { + l1_batch_number: L1BatchNumber(123), + l1_batch_root_hash: H256::random(), + miniblock_number: MiniblockNumber(234), + miniblock_root_hash: H256::random(), + last_finished_chunk_id: Some(2345), + total_chunk_count: 345, + }; + applied_status_dal + .set_applied_snapshot_status(&updated_status) + .await + .unwrap(); + let updated_status_from_db = applied_status_dal + .get_applied_snapshot_status() + .await + .unwrap(); + assert_eq!(Some(updated_status), updated_status_from_db); + } +} diff --git a/core/lib/dal/src/snapshots_creator_dal.rs b/core/lib/dal/src/snapshots_creator_dal.rs new file mode 100644 index 00000000000..9267470878e --- /dev/null +++ b/core/lib/dal/src/snapshots_creator_dal.rs @@ -0,0 +1,129 @@ +use zksync_types::{ + snapshots::{SnapshotFactoryDependency, SnapshotStorageLog}, + AccountTreeId, Address, L1BatchNumber, MiniblockNumber, StorageKey, H256, +}; + +use crate::{instrument::InstrumentExt, StorageProcessor}; + +#[derive(Debug)] +pub struct SnapshotsCreatorDal<'a, 'c> { + pub(crate) storage: &'a mut StorageProcessor<'c>, +} + +impl SnapshotsCreatorDal<'_, '_> { + pub async fn get_distinct_storage_logs_keys_count( + &mut self, + l1_batch_number: L1BatchNumber, + ) -> sqlx::Result { + let count = sqlx::query!( + r#" + SELECT + INDEX + FROM + initial_writes + WHERE + l1_batch_number <= $1 + ORDER BY + l1_batch_number DESC, + INDEX DESC + LIMIT + 1; + "#, + l1_batch_number.0 as i32 + ) + .instrument("get_storage_logs_count") + .report_latency() + .fetch_one(self.storage.conn()) + .await? 
+ .index; + Ok(count as u64) + } + + pub async fn get_storage_logs_chunk( + &mut self, + miniblock_number: MiniblockNumber, + hashed_keys_range: std::ops::RangeInclusive, + ) -> sqlx::Result> { + let storage_logs = sqlx::query!( + r#" + SELECT + storage_logs.key AS "key!", + storage_logs.value AS "value!", + storage_logs.address AS "address!", + storage_logs.miniblock_number AS "miniblock_number!", + initial_writes.l1_batch_number AS "l1_batch_number!", + initial_writes.index + FROM + ( + SELECT + hashed_key, + MAX(ARRAY[miniblock_number, operation_number]::INT[]) AS op + FROM + storage_logs + WHERE + miniblock_number <= $1 + AND hashed_key >= $2 + AND hashed_key < $3 + GROUP BY + hashed_key + ORDER BY + hashed_key + ) AS keys + INNER JOIN storage_logs ON keys.hashed_key = storage_logs.hashed_key + AND storage_logs.miniblock_number = keys.op[1] + AND storage_logs.operation_number = keys.op[2] + INNER JOIN initial_writes ON keys.hashed_key = initial_writes.hashed_key; + "#, + miniblock_number.0 as i64, + hashed_keys_range.start().0.as_slice(), + hashed_keys_range.end().0.as_slice(), + ) + .instrument("get_storage_logs_chunk") + .with_arg("miniblock_number", &miniblock_number) + .with_arg("min_hashed_key", &hashed_keys_range.start()) + .with_arg("max_hashed_key", &hashed_keys_range.end()) + .report_latency() + .fetch_all(self.storage.conn()) + .await? + .iter() + .map(|row| SnapshotStorageLog { + key: StorageKey::new( + AccountTreeId::new(Address::from_slice(&row.address)), + H256::from_slice(&row.key), + ), + value: H256::from_slice(&row.value), + l1_batch_number_of_initial_write: L1BatchNumber(row.l1_batch_number as u32), + enumeration_index: row.index as u64, + }) + .collect(); + Ok(storage_logs) + } + + pub async fn get_all_factory_deps( + &mut self, + miniblock_number: MiniblockNumber, + ) -> sqlx::Result> { + let rows = sqlx::query!( + r#" + SELECT + bytecode + FROM + factory_deps + WHERE + miniblock_number <= $1 + "#, + miniblock_number.0 as i64, + ) + .instrument("get_all_factory_deps") + .report_latency() + .fetch_all(self.storage.conn()) + .await?; + + Ok(rows + .into_iter() + .map(|row| SnapshotFactoryDependency { + bytecode: row.bytecode.into(), + }) + .collect()) + } +} diff --git a/core/lib/dal/src/snapshots_dal.rs b/core/lib/dal/src/snapshots_dal.rs new file mode 100644 index 00000000000..3b2e62085bb --- /dev/null +++ b/core/lib/dal/src/snapshots_dal.rs @@ -0,0 +1,259 @@ +use zksync_types::{ + snapshots::{AllSnapshots, SnapshotMetadata}, + L1BatchNumber, +}; + +use crate::{instrument::InstrumentExt, StorageProcessor}; + +#[derive(Debug, sqlx::FromRow)] +struct StorageSnapshotMetadata { + l1_batch_number: i64, + storage_logs_filepaths: Vec, + factory_deps_filepath: String, +} + +impl From for SnapshotMetadata { + fn from(row: StorageSnapshotMetadata) -> Self { + Self { + l1_batch_number: L1BatchNumber(row.l1_batch_number as u32), + storage_logs_filepaths: row + .storage_logs_filepaths + .into_iter() + .map(|path| (!path.is_empty()).then_some(path)) + .collect(), + factory_deps_filepath: row.factory_deps_filepath, + } + } +} + +#[derive(Debug)] +pub struct SnapshotsDal<'a, 'c> { + pub(crate) storage: &'a mut StorageProcessor<'c>, +} + +impl SnapshotsDal<'_, '_> { + pub async fn add_snapshot( + &mut self, + l1_batch_number: L1BatchNumber, + storage_logs_chunk_count: u64, + factory_deps_filepaths: &str, + ) -> sqlx::Result<()> { + sqlx::query!( + r#" + INSERT INTO + snapshots ( + l1_batch_number, + storage_logs_filepaths, + factory_deps_filepath, + created_at, + updated_at + ) + 
VALUES + ($1, ARRAY_FILL(''::TEXT, ARRAY[$2::INTEGER]), $3, NOW(), NOW()) + "#, + l1_batch_number.0 as i32, + storage_logs_chunk_count as i32, + factory_deps_filepaths, + ) + .instrument("add_snapshot") + .report_latency() + .execute(self.storage.conn()) + .await?; + Ok(()) + } + + pub async fn add_storage_logs_filepath_for_snapshot( + &mut self, + l1_batch_number: L1BatchNumber, + chunk_id: u64, + storage_logs_filepath: &str, + ) -> sqlx::Result<()> { + sqlx::query!( + r#" + UPDATE snapshots + SET + storage_logs_filepaths[$2] = $3, + updated_at = NOW() + WHERE + l1_batch_number = $1 + "#, + l1_batch_number.0 as i32, + chunk_id as i32 + 1, + storage_logs_filepath, + ) + .execute(self.storage.conn()) + .await?; + + Ok(()) + } + + pub async fn get_all_complete_snapshots(&mut self) -> sqlx::Result { + let rows = sqlx::query!( + r#" + SELECT + l1_batch_number + FROM + snapshots + WHERE + NOT (''::TEXT = ANY (storage_logs_filepaths)) + ORDER BY + l1_batch_number DESC + "# + ) + .instrument("get_all_complete_snapshots") + .report_latency() + .fetch_all(self.storage.conn()) + .await?; + + let snapshots_l1_batch_numbers = rows + .into_iter() + .map(|row| L1BatchNumber(row.l1_batch_number as u32)) + .collect(); + + Ok(AllSnapshots { + snapshots_l1_batch_numbers, + }) + } + + pub async fn get_newest_snapshot_metadata(&mut self) -> sqlx::Result> { + let row = sqlx::query_as!( + StorageSnapshotMetadata, + r#" + SELECT + l1_batch_number, + factory_deps_filepath, + storage_logs_filepaths + FROM + snapshots + ORDER BY + l1_batch_number DESC + LIMIT + 1 + "# + ) + .instrument("get_newest_snapshot_metadata") + .report_latency() + .fetch_optional(self.storage.conn()) + .await?; + + Ok(row.map(Into::into)) + } + + pub async fn get_snapshot_metadata( + &mut self, + l1_batch_number: L1BatchNumber, + ) -> sqlx::Result> { + let row = sqlx::query_as!( + StorageSnapshotMetadata, + r#" + SELECT + l1_batch_number, + factory_deps_filepath, + storage_logs_filepaths + FROM + snapshots + WHERE + l1_batch_number = $1 + "#, + l1_batch_number.0 as i32 + ) + .instrument("get_snapshot_metadata") + .report_latency() + .fetch_optional(self.storage.conn()) + .await?; + + Ok(row.map(Into::into)) + } +} + +#[cfg(test)] +mod tests { + use zksync_types::L1BatchNumber; + + use crate::ConnectionPool; + + #[tokio::test] + async fn adding_snapshot() { + let pool = ConnectionPool::test_pool().await; + let mut conn = pool.access_storage().await.unwrap(); + let mut dal = conn.snapshots_dal(); + let l1_batch_number = L1BatchNumber(100); + dal.add_snapshot(l1_batch_number, 2, "gs:///bucket/factory_deps.bin") + .await + .expect("Failed to add snapshot"); + + let snapshots = dal + .get_all_complete_snapshots() + .await + .expect("Failed to retrieve snapshots"); + assert_eq!(snapshots.snapshots_l1_batch_numbers, []); + + for i in 0..2 { + dal.add_storage_logs_filepath_for_snapshot( + l1_batch_number, + i, + "gs:///bucket/chunk.bin", + ) + .await + .unwrap(); + } + + let snapshots = dal + .get_all_complete_snapshots() + .await + .expect("Failed to retrieve snapshots"); + assert_eq!(snapshots.snapshots_l1_batch_numbers, [l1_batch_number]); + + let snapshot_metadata = dal + .get_snapshot_metadata(l1_batch_number) + .await + .expect("Failed to retrieve snapshot") + .unwrap(); + assert_eq!(snapshot_metadata.l1_batch_number, l1_batch_number); + } + + #[tokio::test] + async fn adding_files() { + let pool = ConnectionPool::test_pool().await; + let mut conn = pool.access_storage().await.unwrap(); + let mut dal = conn.snapshots_dal(); + let 
l1_batch_number = L1BatchNumber(100); + dal.add_snapshot(l1_batch_number, 2, "gs:///bucket/factory_deps.bin") + .await + .expect("Failed to add snapshot"); + + let storage_log_filepaths = ["gs:///bucket/test_file1.bin", "gs:///bucket/test_file2.bin"]; + dal.add_storage_logs_filepath_for_snapshot(l1_batch_number, 1, storage_log_filepaths[1]) + .await + .unwrap(); + + let files = dal + .get_snapshot_metadata(l1_batch_number) + .await + .expect("Failed to retrieve snapshot") + .unwrap() + .storage_logs_filepaths; + assert_eq!( + files, + [None, Some("gs:///bucket/test_file2.bin".to_string())] + ); + + dal.add_storage_logs_filepath_for_snapshot(l1_batch_number, 0, storage_log_filepaths[0]) + .await + .unwrap(); + + let files = dal + .get_snapshot_metadata(l1_batch_number) + .await + .expect("Failed to retrieve snapshot") + .unwrap() + .storage_logs_filepaths; + assert_eq!( + files, + [ + Some("gs:///bucket/test_file1.bin".to_string()), + Some("gs:///bucket/test_file2.bin".to_string()) + ] + ); + } +} diff --git a/core/lib/dal/src/storage_dal.rs b/core/lib/dal/src/storage_dal.rs index fdaaea38617..eaefdaef032 100644 --- a/core/lib/dal/src/storage_dal.rs +++ b/core/lib/dal/src/storage_dal.rs @@ -1,7 +1,6 @@ -use itertools::Itertools; - use std::collections::{HashMap, HashSet}; +use itertools::Itertools; use zksync_contracts::{BaseSystemContracts, SystemContractCode}; use zksync_types::{MiniblockNumber, StorageKey, StorageLog, StorageValue, H256, U256}; use zksync_utils::{bytes_to_be_words, bytes_to_chunks}; @@ -26,14 +25,21 @@ impl StorageDal<'_, '_> { .map(|dep| (dep.0.as_bytes(), dep.1.as_slice())) .unzip(); - // Copy from stdin can't be used here because of 'ON CONFLICT'. + // Copy from stdin can't be used here because of `ON CONFLICT`. sqlx::query!( - "INSERT INTO factory_deps \ - (bytecode_hash, bytecode, miniblock_number, created_at, updated_at) \ - SELECT u.bytecode_hash, u.bytecode, $3, now(), now() \ - FROM UNNEST($1::bytea[], $2::bytea[]) \ - AS u(bytecode_hash, bytecode) \ - ON CONFLICT (bytecode_hash) DO NOTHING", + r#" + INSERT INTO + factory_deps (bytecode_hash, bytecode, miniblock_number, created_at, updated_at) + SELECT + u.bytecode_hash, + u.bytecode, + $3, + NOW(), + NOW() + FROM + UNNEST($1::bytea[], $2::bytea[]) AS u (bytecode_hash, bytecode) + ON CONFLICT (bytecode_hash) DO NOTHING + "#, &bytecode_hashes as &[&[u8]], &bytecodes as &[&[u8]], block_number.0 as i64, @@ -43,10 +49,17 @@ impl StorageDal<'_, '_> { .unwrap(); } - /// Returns bytecode for a factory dep with the specified bytecode `hash`. + /// Returns bytecode for a factory dependency with the specified bytecode `hash`. 
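The `storage_logs_filepaths` column is pre-filled with empty strings via `ARRAY_FILL`, a chunk's path is later written at index `chunk_id + 1` because Postgres arrays are 1-based, and the `From<StorageSnapshotMetadata>` conversion turns empty slots into `None`. A self-contained sketch of that mapping (the bucket path is illustrative):

```rust
// Mirrors the `(!path.is_empty()).then_some(path)` conversion above: an empty
// string marks a chunk that has not been uploaded yet.
fn chunk_paths(raw: Vec<String>) -> Vec<Option<String>> {
    raw.into_iter()
        .map(|path| (!path.is_empty()).then_some(path))
        .collect()
}

fn main() {
    // Chunk 0 is still pending; chunk 1 was written at array index 2 ($2 = 1 + 1).
    let raw = vec![String::new(), "gs:///bucket/chunk_1.bin".to_owned()];
    assert_eq!(
        chunk_paths(raw),
        [None, Some("gs:///bucket/chunk_1.bin".to_owned())]
    );
}
```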
pub async fn get_factory_dep(&mut self, hash: H256) -> Option> { sqlx::query!( - "SELECT bytecode FROM factory_deps WHERE bytecode_hash = $1", + r#" + SELECT + bytecode + FROM + factory_deps + WHERE + bytecode_hash = $1 + "#, hash.as_bytes(), ) .fetch_optional(self.storage.conn()) @@ -92,7 +105,15 @@ impl StorageDal<'_, '_> { let hashes_as_bytes: Vec<_> = hashes.iter().map(H256::as_bytes).collect(); sqlx::query!( - "SELECT bytecode, bytecode_hash FROM factory_deps WHERE bytecode_hash = ANY($1)", + r#" + SELECT + bytecode, + bytecode_hash + FROM + factory_deps + WHERE + bytecode_hash = ANY ($1) + "#, &hashes_as_bytes as &[&[u8]], ) .fetch_all(self.storage.conn()) @@ -115,7 +136,14 @@ impl StorageDal<'_, '_> { block_number: MiniblockNumber, ) -> Vec { sqlx::query!( - "SELECT bytecode_hash FROM factory_deps WHERE miniblock_number > $1", + r#" + SELECT + bytecode_hash + FROM + factory_deps + WHERE + miniblock_number > $1 + "#, block_number.0 as i64 ) .fetch_all(self.storage.conn()) @@ -158,14 +186,28 @@ impl StorageDal<'_, '_> { Vec<_>, ) = query_parts.multiunzip(); - // Copy from stdin can't be used here because of 'ON CONFLICT'. + // Copy from stdin can't be used here because of `ON CONFLICT`. sqlx::query!( - "INSERT INTO storage (hashed_key, address, key, value, tx_hash, created_at, updated_at) \ - SELECT u.hashed_key, u.address, u.key, u.value, u.tx_hash, now(), now() \ - FROM UNNEST ($1::bytea[], $2::bytea[], $3::bytea[], $4::bytea[], $5::bytea[]) \ - AS u(hashed_key, address, key, value, tx_hash) \ - ON CONFLICT (hashed_key) \ - DO UPDATE SET tx_hash = excluded.tx_hash, value = excluded.value, updated_at = now()", + r#" + INSERT INTO + storage (hashed_key, address, key, value, tx_hash, created_at, updated_at) + SELECT + u.hashed_key, + u.address, + u.key, + u.value, + u.tx_hash, + NOW(), + NOW() + FROM + UNNEST($1::bytea[], $2::bytea[], $3::bytea[], $4::bytea[], $5::bytea[]) AS u (hashed_key, address, key, value, tx_hash) + ON CONFLICT (hashed_key) DO + UPDATE + SET + tx_hash = excluded.tx_hash, + value = excluded.value, + updated_at = NOW() + "#, &hashed_keys, &addresses as &[&[u8]], &keys as &[&[u8]], @@ -184,7 +226,14 @@ impl StorageDal<'_, '_> { let hashed_key = key.hashed_key(); sqlx::query!( - "SELECT value FROM storage WHERE hashed_key = $1", + r#" + SELECT + value + FROM + storage + WHERE + hashed_key = $1 + "#, hashed_key.as_bytes() ) .instrument("get_by_key") @@ -199,7 +248,11 @@ impl StorageDal<'_, '_> { /// Removes all factory deps with a miniblock number strictly greater than the specified `block_number`. 
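`get_by_key` resolves a structured `(address, key)` slot to its current value by filtering on the derived `hashed_key`, so callers never handle the hashed form directly. A minimal sketch, assuming the usual `storage_dal()` accessor and that the method takes a `StorageKey` reference and returns `Option<H256>` as the hunk above suggests (the address and slot are illustrative):

```rust
use zksync_types::{AccountTreeId, Address, StorageKey, H256};

// Sketch: read the current value of one storage slot.
async fn read_slot(conn: &mut zksync_dal::StorageProcessor<'_>) -> Option<H256> {
    let account = AccountTreeId::new(Address::repeat_byte(0x42));
    let key = StorageKey::new(account, H256::repeat_byte(1));
    // The query filters on `key.hashed_key()` internally.
    conn.storage_dal().get_by_key(&key).await
}
```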
pub async fn rollback_factory_deps(&mut self, block_number: MiniblockNumber) { sqlx::query!( - "DELETE FROM factory_deps WHERE miniblock_number > $1", + r#" + DELETE FROM factory_deps + WHERE + miniblock_number > $1 + "#, block_number.0 as i64 ) .execute(self.storage.conn()) @@ -210,9 +263,10 @@ impl StorageDal<'_, '_> { #[cfg(test)] mod tests { + use zksync_types::{AccountTreeId, Address}; + use super::*; use crate::ConnectionPool; - use zksync_types::{AccountTreeId, Address}; #[tokio::test] async fn applying_storage_logs() { diff --git a/core/lib/dal/src/storage_logs_dal.rs b/core/lib/dal/src/storage_logs_dal.rs index c368e5adc8d..21572f3d3c0 100644 --- a/core/lib/dal/src/storage_logs_dal.rs +++ b/core/lib/dal/src/storage_logs_dal.rs @@ -1,14 +1,13 @@ -use sqlx::types::chrono::Utc; -use sqlx::Row; +use std::{collections::HashMap, ops, time::Instant}; -use std::{collections::HashMap, time::Instant}; - -use crate::{instrument::InstrumentExt, StorageProcessor}; +use sqlx::{types::chrono::Utc, Row}; use zksync_types::{ get_code_key, AccountTreeId, Address, L1BatchNumber, MiniblockNumber, StorageKey, StorageLog, - FAILED_CONTRACT_DEPLOYMENT_BYTECODE_HASH, H256, + FAILED_CONTRACT_DEPLOYMENT_BYTECODE_HASH, H256, U256, }; +use crate::{instrument::InstrumentExt, models::storage_log::StorageTreeEntry, StorageProcessor}; + #[derive(Debug)] pub struct StorageLogsDal<'a, 'c> { pub(crate) storage: &'a mut StorageProcessor<'c>, @@ -74,7 +73,14 @@ impl StorageLogsDal<'_, '_> { logs: &[(H256, Vec)], ) { let operation_number = sqlx::query!( - "SELECT MAX(operation_number) as \"max?\" FROM storage_logs WHERE miniblock_number = $1", + r#" + SELECT + MAX(operation_number) AS "max?" + FROM + storage_logs + WHERE + miniblock_number = $1 + "#, block_number.0 as i64 ) .fetch_one(self.storage.conn()) @@ -130,7 +136,11 @@ impl StorageLogsDal<'_, '_> { let stage_start = Instant::now(); sqlx::query!( - "DELETE FROM storage WHERE hashed_key = ANY($1)", + r#" + DELETE FROM storage + WHERE + hashed_key = ANY ($1) + "#, &keys_to_delete as &[&[u8]], ) .execute(self.storage.conn()) @@ -144,9 +154,15 @@ impl StorageLogsDal<'_, '_> { let stage_start = Instant::now(); sqlx::query!( - "UPDATE storage SET value = u.value \ - FROM UNNEST($1::bytea[], $2::bytea[]) AS u(key, value) \ - WHERE u.key = hashed_key", + r#" + UPDATE storage + SET + value = u.value + FROM + UNNEST($1::bytea[], $2::bytea[]) AS u (key, value) + WHERE + u.key = hashed_key + "#, &keys_to_update as &[&[u8]], &values_to_update as &[&[u8]], ) @@ -166,8 +182,19 @@ impl StorageLogsDal<'_, '_> { miniblock_number: MiniblockNumber, ) -> Vec { sqlx::query!( - "SELECT DISTINCT ON (hashed_key) hashed_key FROM \ - (SELECT * FROM storage_logs WHERE miniblock_number > $1) inn", + r#" + SELECT DISTINCT + ON (hashed_key) hashed_key + FROM + ( + SELECT + * + FROM + storage_logs + WHERE + miniblock_number > $1 + ) inn + "#, miniblock_number.0 as i64 ) .fetch_all(self.storage.conn()) @@ -181,7 +208,11 @@ impl StorageLogsDal<'_, '_> { /// Removes all storage logs with a miniblock number strictly greater than the specified `block_number`. 
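Both rollback helpers are plain `DELETE ... WHERE miniblock_number > $1` statements: state for the target block itself is preserved and everything strictly after it is dropped. A sketch combining the two (the wrapper function and the `storage_dal()` accessor are assumptions; the DAL methods are the ones shown in this diff):

```rust
use zksync_types::MiniblockNumber;

// Sketch: revert node state to `last_good_block`. The two deletes are
// independent and share the strictly-greater-than predicate.
async fn revert_past(
    conn: &mut zksync_dal::StorageProcessor<'_>,
    last_good_block: MiniblockNumber,
) {
    conn.storage_dal().rollback_factory_deps(last_good_block).await;
    conn.storage_logs_dal()
        .rollback_storage_logs(last_good_block)
        .await;
}
```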
pub async fn rollback_storage_logs(&mut self, block_number: MiniblockNumber) { sqlx::query!( - "DELETE FROM storage_logs WHERE miniblock_number > $1", + r#" + DELETE FROM storage_logs + WHERE + miniblock_number > $1 + "#, block_number.0 as i64 ) .execute(self.storage.conn()) @@ -192,14 +223,26 @@ impl StorageLogsDal<'_, '_> { pub async fn is_contract_deployed_at_address(&mut self, address: Address) -> bool { let hashed_key = get_code_key(&address).hashed_key(); let row = sqlx::query!( - "SELECT COUNT(*) as \"count!\" \ - FROM (\ - SELECT * FROM storage_logs \ - WHERE storage_logs.hashed_key = $1 \ - ORDER BY storage_logs.miniblock_number DESC, storage_logs.operation_number DESC \ - LIMIT 1\ - ) sl \ - WHERE sl.value != $2", + r#" + SELECT + COUNT(*) AS "count!" + FROM + ( + SELECT + * + FROM + storage_logs + WHERE + storage_logs.hashed_key = $1 + ORDER BY + storage_logs.miniblock_number DESC, + storage_logs.operation_number DESC + LIMIT + 1 + ) sl + WHERE + sl.value != $2 + "#, hashed_key.as_bytes(), FAILED_CONTRACT_DEPLOYMENT_BYTECODE_HASH.as_bytes(), ) @@ -217,12 +260,33 @@ impl StorageLogsDal<'_, '_> { l1_batch_number: L1BatchNumber, ) -> HashMap { let rows = sqlx::query!( - "SELECT address, key, value \ - FROM storage_logs \ - WHERE miniblock_number BETWEEN \ - (SELECT MIN(number) FROM miniblocks WHERE l1_batch_number = $1) \ - AND (SELECT MAX(number) FROM miniblocks WHERE l1_batch_number = $1) \ - ORDER BY miniblock_number, operation_number", + r#" + SELECT + address, + key, + value + FROM + storage_logs + WHERE + miniblock_number BETWEEN ( + SELECT + MIN(number) + FROM + miniblocks + WHERE + l1_batch_number = $1 + ) AND ( + SELECT + MAX(number) + FROM + miniblocks + WHERE + l1_batch_number = $1 + ) + ORDER BY + miniblock_number, + operation_number + "#, l1_batch_number.0 as i64 ) .fetch_all(self.storage.conn()) @@ -334,8 +398,16 @@ impl StorageLogsDal<'_, '_> { let hashed_keys: Vec<_> = hashed_keys.iter().map(H256::as_bytes).collect(); let rows = sqlx::query!( - "SELECT hashed_key, l1_batch_number, index FROM initial_writes \ - WHERE hashed_key = ANY($1::bytea[])", + r#" + SELECT + hashed_key, + l1_batch_number, + INDEX + FROM + initial_writes + WHERE + hashed_key = ANY ($1::bytea[]) + "#, &hashed_keys as &[&[u8]], ) .instrument("get_l1_batches_and_indices_for_initial_writes") @@ -394,11 +466,26 @@ impl StorageLogsDal<'_, '_> { let hashed_keys: Vec<_> = hashed_keys.iter().map(H256::as_bytes).collect(); let rows = sqlx::query!( - "SELECT u.hashed_key as \"hashed_key!\", \ - (SELECT value FROM storage_logs \ - WHERE hashed_key = u.hashed_key AND miniblock_number <= $2 \ - ORDER BY miniblock_number DESC, operation_number DESC LIMIT 1) as \"value?\" \ - FROM UNNEST($1::bytea[]) AS u(hashed_key)", + r#" + SELECT + u.hashed_key AS "hashed_key!", + ( + SELECT + value + FROM + storage_logs + WHERE + hashed_key = u.hashed_key + AND miniblock_number <= $2 + ORDER BY + miniblock_number DESC, + operation_number DESC + LIMIT + 1 + ) AS "value?" + FROM + UNNEST($1::bytea[]) AS u (hashed_key) + "#, &hashed_keys as &[&[u8]], miniblock_number.0 as i64 ) @@ -415,33 +502,6 @@ impl StorageLogsDal<'_, '_> { .collect() } - /// Resolves hashed keys into storage keys ((address, key) tuples). - /// Panics if there is an unknown hashed key in the input. 
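The correlated subquery in `get_storage_values` returns, for each input key, the newest value at or before the given miniblock (ordered by `miniblock_number DESC, operation_number DESC`), with `NULL` for keys that were never written. A usage sketch, assuming the method returns a map from hashed key to `Option<H256>` as the row mapping suggests (the keys are illustrative):

```rust
use zksync_types::{MiniblockNumber, H256};

// Sketch: batch-read the latest values of two slots as of miniblock 100.
async fn values_at(conn: &mut zksync_dal::StorageProcessor<'_>) {
    let hashed_keys = [H256::repeat_byte(1), H256::repeat_byte(2)];
    let values = conn
        .storage_logs_dal()
        .get_storage_values(&hashed_keys, MiniblockNumber(100))
        .await;
    for key in hashed_keys {
        // `Some(None)` means the key is in the result set but had no write
        // at or before miniblock 100.
        println!("{key:?} => {:?}", values.get(&key));
    }
}
```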
- pub async fn resolve_hashed_keys(&mut self, hashed_keys: &[H256]) -> Vec { - let hashed_keys: Vec<_> = hashed_keys.iter().map(H256::as_bytes).collect(); - sqlx::query!( - "SELECT \ - (SELECT ARRAY[address,key] FROM storage_logs \ - WHERE hashed_key = u.hashed_key \ - ORDER BY miniblock_number, operation_number \ - LIMIT 1) as \"address_and_key?\" \ - FROM UNNEST($1::bytea[]) AS u(hashed_key)", - &hashed_keys as &[&[u8]], - ) - .fetch_all(self.storage.conn()) - .await - .unwrap() - .into_iter() - .map(|row| { - let address_and_key = row.address_and_key.unwrap(); - StorageKey::new( - AccountTreeId::new(Address::from_slice(&address_and_key[0])), - H256::from_slice(&address_and_key[1]), - ) - }) - .collect() - } - pub async fn get_miniblock_storage_logs( &mut self, miniblock_number: MiniblockNumber, @@ -450,14 +510,129 @@ impl StorageLogsDal<'_, '_> { .await } + /// Counts the total number of storage logs in the specified miniblock, + // TODO(PLA-596): add storage log count to snapshot metadata instead? + pub async fn count_miniblock_storage_logs( + &mut self, + miniblock_number: MiniblockNumber, + ) -> sqlx::Result { + let count = sqlx::query_scalar!( + "SELECT COUNT(*) FROM storage_logs WHERE miniblock_number = $1", + miniblock_number.0 as i32 + ) + .fetch_one(self.storage.conn()) + .await?; + Ok(count.unwrap_or(0) as u64) + } + + /// Gets a starting tree entry for each of the supplied `key_ranges` for the specified + /// `miniblock_number`. This method is used during Merkle tree recovery. + pub async fn get_chunk_starts_for_miniblock( + &mut self, + miniblock_number: MiniblockNumber, + key_ranges: &[ops::RangeInclusive], + ) -> sqlx::Result>> { + let (start_keys, end_keys): (Vec<_>, Vec<_>) = key_ranges + .iter() + .map(|range| (range.start().as_bytes(), range.end().as_bytes())) + .unzip(); + let rows = sqlx::query!( + r#" + WITH + sl AS ( + SELECT + ( + SELECT + ARRAY[hashed_key, value] AS kv + FROM + storage_logs + WHERE + storage_logs.miniblock_number = $1 + AND storage_logs.hashed_key >= u.start_key + AND storage_logs.hashed_key <= u.end_key + ORDER BY + storage_logs.hashed_key + LIMIT + 1 + ) + FROM + UNNEST($2::bytea[], $3::bytea[]) AS u (start_key, end_key) + ) + SELECT + sl.kv[1] AS "hashed_key?", + sl.kv[2] AS "value?", + initial_writes.index + FROM + sl + LEFT OUTER JOIN initial_writes ON initial_writes.hashed_key = sl.kv[1] + "#, + miniblock_number.0 as i64, + &start_keys as &[&[u8]], + &end_keys as &[&[u8]], + ) + .fetch_all(self.storage.conn()) + .await?; + + let rows = rows.into_iter().map(|row| { + Some(StorageTreeEntry { + key: U256::from_little_endian(row.hashed_key.as_ref()?), + value: H256::from_slice(row.value.as_ref()?), + leaf_index: row.index? as u64, + }) + }); + Ok(rows.collect()) + } + + /// Fetches tree entries for the specified `miniblock_number` and `key_range`. This is used during + /// Merkle tree recovery. 
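Together with `get_tree_entries_for_miniblock` below, `get_chunk_starts_for_miniblock` enables chunked Merkle tree recovery: the `H256` key space is split into inclusive ranges, the starts of all chunks are probed in a single query, and only non-empty ranges are fetched in full. A sketch of that loop (the two-range split is illustrative; real recovery would presumably derive the ranges from a configured chunk count):

```rust
use zksync_types::{MiniblockNumber, H256};

// Sketch: chunked tree recovery over the key space of miniblock 42.
async fn recover_chunks(conn: &mut zksync_dal::StorageProcessor<'_>) -> sqlx::Result<()> {
    let ranges = [
        H256::zero()..=H256::repeat_byte(0x7f),
        H256::repeat_byte(0x80)..=H256::repeat_byte(0xff),
    ];
    let starts = conn
        .storage_logs_dal()
        .get_chunk_starts_for_miniblock(MiniblockNumber(42), &ranges)
        .await?;
    for (range, start) in ranges.into_iter().zip(starts) {
        // `None` means no storage log falls into this key range, so the
        // second query can be skipped entirely.
        if start.is_some() {
            let _entries = conn
                .storage_logs_dal()
                .get_tree_entries_for_miniblock(MiniblockNumber(42), range)
                .await?;
            // ...feed the (key, value, leaf_index) entries into the tree here.
        }
    }
    Ok(())
}
```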
+ pub async fn get_tree_entries_for_miniblock( + &mut self, + miniblock_number: MiniblockNumber, + key_range: ops::RangeInclusive, + ) -> sqlx::Result> { + let rows = sqlx::query!( + r#" + SELECT + storage_logs.hashed_key, + storage_logs.value, + initial_writes.index + FROM + storage_logs + INNER JOIN initial_writes ON storage_logs.hashed_key = initial_writes.hashed_key + WHERE + storage_logs.miniblock_number = $1 + AND storage_logs.hashed_key >= $2::bytea + AND storage_logs.hashed_key <= $3::bytea + ORDER BY + storage_logs.hashed_key + "#, + miniblock_number.0 as i64, + key_range.start().as_bytes(), + key_range.end().as_bytes() + ) + .fetch_all(self.storage.conn()) + .await?; + + let rows = rows.into_iter().map(|row| StorageTreeEntry { + key: U256::from_little_endian(&row.hashed_key), + value: H256::from_slice(&row.value), + leaf_index: row.index as u64, + }); + Ok(rows.collect()) + } + pub async fn retain_storage_logs( &mut self, miniblock_number: MiniblockNumber, operation_numbers: &[i32], ) { sqlx::query!( - "DELETE FROM storage_logs \ - WHERE miniblock_number = $1 AND operation_number != ALL($2)", + r#" + DELETE FROM storage_logs + WHERE + miniblock_number = $1 + AND operation_number != ALL ($2) + "#, miniblock_number.0 as i64, &operation_numbers ) @@ -520,23 +695,34 @@ impl StorageLogsDal<'_, '_> { /// Vacuums `storage_logs` table. /// Shouldn't be used in production. pub async fn vacuum_storage_logs(&mut self) { - sqlx::query!("VACUUM storage_logs") - .execute(self.storage.conn()) - .await - .unwrap(); + sqlx::query!( + r#" + VACUUM storage_logs + "# + ) + .execute(self.storage.conn()) + .await + .unwrap(); } } #[cfg(test)] mod tests { - use super::*; - use crate::{tests::create_miniblock_header, ConnectionPool}; use zksync_contracts::BaseSystemContractsHashes; use zksync_types::{ block::{BlockGasCount, L1BatchHeader}, ProtocolVersion, ProtocolVersionId, }; + use super::*; + use crate::{tests::create_miniblock_header, ConnectionPool}; + + fn u256_to_h256_reversed(value: U256) -> H256 { + let mut bytes = [0_u8; 32]; + value.to_little_endian(&mut bytes); + H256(bytes) + } + async fn insert_miniblock(conn: &mut StorageProcessor<'_>, number: u32, logs: Vec) { let mut header = L1BatchHeader::new( L1BatchNumber(number), @@ -547,7 +733,7 @@ mod tests { ); header.is_finished = true; conn.blocks_dal() - .insert_l1_batch(&header, &[], BlockGasCount::default(), &[], &[]) + .insert_l1_batch(&header, &[], BlockGasCount::default(), &[], &[], 0) .await .unwrap(); conn.blocks_dal() @@ -570,15 +756,6 @@ mod tests { async fn inserting_storage_logs() { let pool = ConnectionPool::test_pool().await; let mut conn = pool.access_storage().await.unwrap(); - - conn.blocks_dal() - .delete_miniblocks(MiniblockNumber(0)) - .await - .unwrap(); - conn.blocks_dal() - .delete_l1_batches(L1BatchNumber(0)) - .await - .unwrap(); conn.protocol_versions_dal() .save_protocol_version_with_tx(ProtocolVersion::default()) .await; @@ -663,15 +840,6 @@ mod tests { async fn getting_storage_logs_for_revert() { let pool = ConnectionPool::test_pool().await; let mut conn = pool.access_storage().await.unwrap(); - - conn.blocks_dal() - .delete_miniblocks(MiniblockNumber(0)) - .await - .unwrap(); - conn.blocks_dal() - .delete_l1_batches(L1BatchNumber(0)) - .await - .unwrap(); conn.protocol_versions_dal() .save_protocol_version_with_tx(ProtocolVersion::default()) .await; @@ -719,15 +887,6 @@ mod tests { async fn reverting_keys_without_initial_write() { let pool = ConnectionPool::test_pool().await; let mut conn = 
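Tree entries decode `hashed_key` with `U256::from_little_endian`, so mapping a key back to its `H256` form requires the reversed serialization; the tests below define `u256_to_h256_reversed` for exactly this. A self-contained round-trip sketch:

```rust
use zksync_types::{H256, U256};

fn u256_to_h256_reversed(value: U256) -> H256 {
    let mut bytes = [0_u8; 32];
    value.to_little_endian(&mut bytes);
    H256(bytes)
}

fn main() {
    let hashed_key = H256::repeat_byte(0xab);
    // Mirrors how `get_tree_entries_for_miniblock` decodes `hashed_key`.
    let as_u256 = U256::from_little_endian(hashed_key.as_bytes());
    // Reversing the serialization recovers the original bytes.
    assert_eq!(u256_to_h256_reversed(as_u256), hashed_key);
}
```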
pool.access_storage().await.unwrap(); - - conn.blocks_dal() - .delete_miniblocks(MiniblockNumber(0)) - .await - .unwrap(); - conn.blocks_dal() - .delete_l1_batches(L1BatchNumber(0)) - .await - .unwrap(); conn.protocol_versions_dal() .save_protocol_version_with_tx(ProtocolVersion::default()) .await; @@ -788,4 +947,100 @@ mod tests { } } } + + #[tokio::test] + async fn getting_starting_entries_in_chunks() { + let pool = ConnectionPool::test_pool().await; + let mut conn = pool.access_storage().await.unwrap(); + let sorted_hashed_keys = prepare_tree_entries(&mut conn, 100).await; + + let key_ranges = [ + H256::zero()..=H256::repeat_byte(0xff), + H256::repeat_byte(0x40)..=H256::repeat_byte(0x80), + H256::repeat_byte(0x50)..=H256::repeat_byte(0x60), + H256::repeat_byte(0x50)..=H256::repeat_byte(0x51), + H256::repeat_byte(0xb0)..=H256::repeat_byte(0xfe), + H256::repeat_byte(0x11)..=H256::repeat_byte(0x11), + ]; + + let chunk_starts = conn + .storage_logs_dal() + .get_chunk_starts_for_miniblock(MiniblockNumber(1), &key_ranges) + .await + .unwrap(); + + for (chunk_start, key_range) in chunk_starts.into_iter().zip(key_ranges) { + let expected_start_key = sorted_hashed_keys + .iter() + .find(|&key| key_range.contains(key)); + if let Some(chunk_start) = chunk_start { + assert_eq!( + u256_to_h256_reversed(chunk_start.key), + *expected_start_key.unwrap() + ); + assert_ne!(chunk_start.value, H256::zero()); + assert_ne!(chunk_start.leaf_index, 0); + } else { + assert_eq!(expected_start_key, None); + } + } + } + + async fn prepare_tree_entries(conn: &mut StorageProcessor<'_>, count: u8) -> Vec { + conn.protocol_versions_dal() + .save_protocol_version_with_tx(ProtocolVersion::default()) + .await; + + let account = AccountTreeId::new(Address::repeat_byte(1)); + let logs: Vec<_> = (0..count) + .map(|i| { + let key = StorageKey::new(account, H256::repeat_byte(i)); + StorageLog::new_write_log(key, H256::repeat_byte(i)) + }) + .collect(); + insert_miniblock(conn, 1, logs.clone()).await; + + let mut initial_keys: Vec<_> = logs.iter().map(|log| log.key).collect(); + initial_keys.sort_unstable(); + conn.storage_logs_dedup_dal() + .insert_initial_writes(L1BatchNumber(1), &initial_keys) + .await; + + let mut sorted_hashed_keys: Vec<_> = logs.iter().map(|log| log.key.hashed_key()).collect(); + sorted_hashed_keys.sort_unstable(); + sorted_hashed_keys + } + + #[tokio::test] + async fn getting_tree_entries() { + let pool = ConnectionPool::test_pool().await; + let mut conn = pool.access_storage().await.unwrap(); + let sorted_hashed_keys = prepare_tree_entries(&mut conn, 10).await; + + let key_range = H256::zero()..=H256::repeat_byte(0xff); + let tree_entries = conn + .storage_logs_dal() + .get_tree_entries_for_miniblock(MiniblockNumber(1), key_range) + .await + .unwrap(); + assert_eq!(tree_entries.len(), 10); + assert_eq!( + tree_entries + .iter() + .map(|entry| u256_to_h256_reversed(entry.key)) + .collect::>(), + sorted_hashed_keys + ); + + let key_range = H256::repeat_byte(0x80)..=H256::repeat_byte(0xbf); + let tree_entries = conn + .storage_logs_dal() + .get_tree_entries_for_miniblock(MiniblockNumber(1), key_range.clone()) + .await + .unwrap(); + assert!(!tree_entries.is_empty() && tree_entries.len() < 10); + for entry in &tree_entries { + assert!(key_range.contains(&u256_to_h256_reversed(entry.key))); + } + } } diff --git a/core/lib/dal/src/storage_logs_dedup_dal.rs b/core/lib/dal/src/storage_logs_dedup_dal.rs index 8a70ceb50fe..a7bef5aa794 100644 --- a/core/lib/dal/src/storage_logs_dedup_dal.rs +++ 
b/core/lib/dal/src/storage_logs_dedup_dal.rs @@ -1,9 +1,11 @@ -use crate::StorageProcessor; -use sqlx::types::chrono::Utc; use std::collections::HashSet; + +use sqlx::types::chrono::Utc; use zksync_types::{AccountTreeId, Address, L1BatchNumber, LogQuery, StorageKey, H256}; use zksync_utils::u256_to_h256; +use crate::StorageProcessor; + #[derive(Debug)] pub struct StorageLogsDedupDal<'a, 'c> { pub(crate) storage: &'a mut StorageProcessor<'c>, @@ -58,9 +60,18 @@ impl StorageLogsDedupDal<'_, '_> { .collect(); sqlx::query!( - "INSERT INTO initial_writes (hashed_key, index, l1_batch_number, created_at, updated_at) \ - SELECT u.hashed_key, u.index, $3, now(), now() \ - FROM UNNEST($1::bytea[], $2::bigint[]) AS u(hashed_key, index)", + r#" + INSERT INTO + initial_writes (hashed_key, INDEX, l1_batch_number, created_at, updated_at) + SELECT + u.hashed_key, + u.index, + $3, + NOW(), + NOW() + FROM + UNNEST($1::bytea[], $2::BIGINT[]) AS u (hashed_key, INDEX) + "#, &hashed_keys, &indices, l1_batch_number.0 as i64, @@ -75,7 +86,15 @@ impl StorageLogsDedupDal<'_, '_> { l1_batch_number: L1BatchNumber, ) -> HashSet { sqlx::query!( - "SELECT address, key FROM protective_reads WHERE l1_batch_number = $1", + r#" + SELECT + address, + key + FROM + protective_reads + WHERE + l1_batch_number = $1 + "#, l1_batch_number.0 as i64 ) .fetch_all(self.storage.conn()) @@ -92,12 +111,19 @@ impl StorageLogsDedupDal<'_, '_> { } pub async fn max_enumeration_index(&mut self) -> Option { - sqlx::query!("SELECT MAX(index) as \"max?\" FROM initial_writes",) - .fetch_one(self.storage.conn()) - .await - .unwrap() - .max - .map(|max| max as u64) + sqlx::query!( + r#" + SELECT + MAX(INDEX) AS "max?" + FROM + initial_writes + "#, + ) + .fetch_one(self.storage.conn()) + .await + .unwrap() + .max + .map(|max| max as u64) } pub async fn initial_writes_for_batch( @@ -105,9 +131,17 @@ impl StorageLogsDedupDal<'_, '_> { l1_batch_number: L1BatchNumber, ) -> Vec<(H256, u64)> { sqlx::query!( - "SELECT hashed_key, index FROM initial_writes \ - WHERE l1_batch_number = $1 \ - ORDER BY index", + r#" + SELECT + hashed_key, + INDEX + FROM + initial_writes + WHERE + l1_batch_number = $1 + ORDER BY + INDEX + "#, l1_batch_number.0 as i64 ) .fetch_all(self.storage.conn()) @@ -120,9 +154,14 @@ impl StorageLogsDedupDal<'_, '_> { pub async fn get_enumeration_index_for_key(&mut self, key: StorageKey) -> Option { sqlx::query!( - "SELECT index \ - FROM initial_writes \ - WHERE hashed_key = $1", + r#" + SELECT + INDEX + FROM + initial_writes + WHERE + hashed_key = $1 + "#, key.hashed_key().0.to_vec() ) .fetch_optional(self.storage.conn()) @@ -135,8 +174,14 @@ impl StorageLogsDedupDal<'_, '_> { pub async fn filter_written_slots(&mut self, hashed_keys: &[H256]) -> HashSet { let hashed_keys: Vec<_> = hashed_keys.iter().map(H256::as_bytes).collect(); sqlx::query!( - "SELECT hashed_key FROM initial_writes \ - WHERE hashed_key = ANY($1)", + r#" + SELECT + hashed_key + FROM + initial_writes + WHERE + hashed_key = ANY ($1) + "#, &hashed_keys as &[&[u8]], ) .fetch_all(self.storage.conn()) diff --git a/core/lib/dal/src/storage_web3_dal.rs b/core/lib/dal/src/storage_web3_dal.rs index 5ba7510b7e0..e5cc4fc8b97 100644 --- a/core/lib/dal/src/storage_web3_dal.rs +++ b/core/lib/dal/src/storage_web3_dal.rs @@ -60,11 +60,18 @@ impl StorageWeb3Dal<'_, '_> { sqlx::query!( r#" - SELECT value - FROM storage_logs - WHERE storage_logs.hashed_key = $1 AND storage_logs.miniblock_number <= $2 - ORDER BY storage_logs.miniblock_number DESC, storage_logs.operation_number DESC - LIMIT 1 + 
SELECT + value + FROM + storage_logs + WHERE + storage_logs.hashed_key = $1 + AND storage_logs.miniblock_number <= $2 + ORDER BY + storage_logs.miniblock_number DESC, + storage_logs.operation_number DESC + LIMIT + 1 "#, hashed_key.as_bytes(), block_number.0 as i64 @@ -90,9 +97,32 @@ impl StorageWeb3Dal<'_, '_> { miniblock_number: MiniblockNumber, ) -> Result { let row = sqlx::query!( - "SELECT \ - (SELECT l1_batch_number FROM miniblocks WHERE number = $1) as \"block_batch?\", \ - (SELECT MAX(number) + 1 FROM l1_batches) as \"max_batch?\"", + r#" + SELECT + ( + SELECT + l1_batch_number + FROM + miniblocks + WHERE + number = $1 + ) AS "block_batch?", + COALESCE( + ( + SELECT + MAX(number) + 1 + FROM + l1_batches + ), + ( + SELECT + MAX(l1_batch_number) + 1 + FROM + snapshot_recovery + ), + 0 + ) AS "pending_batch!" + "#, miniblock_number.0 as i64 ) .fetch_one(self.storage.conn()) @@ -100,7 +130,7 @@ impl StorageWeb3Dal<'_, '_> { Ok(ResolvedL1BatchForMiniblock { miniblock_l1_batch: row.block_batch.map(|n| L1BatchNumber(n as u32)), - pending_l1_batch: L1BatchNumber(row.max_batch.unwrap_or(0) as u32), + pending_l1_batch: L1BatchNumber(row.pending_batch as u32), }) } @@ -110,7 +140,14 @@ impl StorageWeb3Dal<'_, '_> { ) -> Result, SqlxError> { let hashed_key = key.hashed_key(); let row = sqlx::query!( - "SELECT l1_batch_number FROM initial_writes WHERE hashed_key = $1", + r#" + SELECT + l1_batch_number + FROM + initial_writes + WHERE + hashed_key = $1 + "#, hashed_key.as_bytes(), ) .instrument("get_l1_batch_number_for_initial_write") @@ -129,7 +166,14 @@ impl StorageWeb3Dal<'_, '_> { miniblock_numbers: ops::RangeInclusive, ) -> Vec { sqlx::query!( - "SELECT DISTINCT hashed_key FROM storage_logs WHERE miniblock_number BETWEEN $1 and $2", + r#" + SELECT DISTINCT + hashed_key + FROM + storage_logs + WHERE + miniblock_number BETWEEN $1 AND $2 + "#, miniblock_numbers.start().0 as i64, miniblock_numbers.end().0 as i64 ) @@ -151,19 +195,28 @@ impl StorageWeb3Dal<'_, '_> { let hashed_key = get_code_key(&address).hashed_key(); { sqlx::query!( - " - SELECT bytecode FROM ( - SELECT * FROM storage_logs + r#" + SELECT + bytecode + FROM + ( + SELECT + * + FROM + storage_logs WHERE - storage_logs.hashed_key = $1 AND - storage_logs.miniblock_number <= $2 + storage_logs.hashed_key = $1 + AND storage_logs.miniblock_number <= $2 ORDER BY - storage_logs.miniblock_number DESC, storage_logs.operation_number DESC - LIMIT 1 + storage_logs.miniblock_number DESC, + storage_logs.operation_number DESC + LIMIT + 1 ) t JOIN factory_deps ON value = factory_deps.bytecode_hash - WHERE value != $3 - ", + WHERE + value != $3 + "#, hashed_key.as_bytes(), block_number.0 as i64, FAILED_CONTRACT_DEPLOYMENT_BYTECODE_HASH.as_bytes(), @@ -183,7 +236,15 @@ impl StorageWeb3Dal<'_, '_> { ) -> Result>, SqlxError> { { sqlx::query!( - "SELECT bytecode FROM factory_deps WHERE bytecode_hash = $1 AND miniblock_number <= $2", + r#" + SELECT + bytecode + FROM + factory_deps + WHERE + bytecode_hash = $1 + AND miniblock_number <= $2 + "#, hash.as_bytes(), block_number.0 as i64 ) @@ -193,3 +254,164 @@ impl StorageWeb3Dal<'_, '_> { } } } + +#[cfg(test)] +mod tests { + use zksync_types::{ + block::{BlockGasCount, L1BatchHeader}, + snapshots::SnapshotRecoveryStatus, + ProtocolVersion, ProtocolVersionId, + }; + + use super::*; + use crate::{tests::create_miniblock_header, ConnectionPool}; + + #[tokio::test] + async fn resolving_l1_batch_number_of_miniblock() { + let pool = ConnectionPool::test_pool().await; + let mut conn = 
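With snapshot recovery, `l1_batches` can be empty on a freshly recovered node, so the pending batch number is now resolved via `COALESCE`: the batch after `MAX(number)` in `l1_batches`, else the batch after `MAX(l1_batch_number)` in `snapshot_recovery`, else 0. A usage sketch (the miniblock number is illustrative; the tests below also exercise the `expected_l1_batch()` convenience that folds both cases):

```rust
use zksync_types::MiniblockNumber;

// Sketch: find which L1 batch a miniblock belongs (or will belong) to.
async fn batch_of(conn: &mut zksync_dal::StorageProcessor<'_>) -> sqlx::Result<()> {
    let resolved = conn
        .storage_web3_dal()
        .resolve_l1_batch_number_of_miniblock(MiniblockNumber(42))
        .await?;
    match resolved.miniblock_l1_batch {
        // The miniblock is already sealed into a batch.
        Some(batch) => println!("sealed in {batch:?}"),
        // Not sealed yet: it can only end up in the pending batch.
        None => println!("expected in {:?}", resolved.pending_l1_batch),
    }
    Ok(())
}
```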
pool.access_storage().await.unwrap(); + conn.protocol_versions_dal() + .save_protocol_version_with_tx(ProtocolVersion::default()) + .await; + conn.blocks_dal() + .insert_miniblock(&create_miniblock_header(0)) + .await + .unwrap(); + let l1_batch_header = L1BatchHeader::new( + L1BatchNumber(0), + 0, + Address::repeat_byte(0x42), + Default::default(), + ProtocolVersionId::latest(), + ); + conn.blocks_dal() + .insert_l1_batch(&l1_batch_header, &[], BlockGasCount::default(), &[], &[], 0) + .await + .unwrap(); + conn.blocks_dal() + .mark_miniblocks_as_executed_in_l1_batch(L1BatchNumber(0)) + .await + .unwrap(); + + let first_miniblock = create_miniblock_header(1); + conn.blocks_dal() + .insert_miniblock(&first_miniblock) + .await + .unwrap(); + + let resolved = conn + .storage_web3_dal() + .resolve_l1_batch_number_of_miniblock(MiniblockNumber(0)) + .await + .unwrap(); + assert_eq!(resolved.miniblock_l1_batch, Some(L1BatchNumber(0))); + assert_eq!(resolved.pending_l1_batch, L1BatchNumber(1)); + assert_eq!(resolved.expected_l1_batch(), L1BatchNumber(0)); + + let timestamp = conn + .blocks_web3_dal() + .get_expected_l1_batch_timestamp(&resolved) + .await + .unwrap(); + assert_eq!(timestamp, Some(0)); + + for pending_miniblock_number in [1, 2] { + let resolved = conn + .storage_web3_dal() + .resolve_l1_batch_number_of_miniblock(MiniblockNumber(pending_miniblock_number)) + .await + .unwrap(); + assert_eq!(resolved.miniblock_l1_batch, None); + assert_eq!(resolved.pending_l1_batch, L1BatchNumber(1)); + assert_eq!(resolved.expected_l1_batch(), L1BatchNumber(1)); + + let timestamp = conn + .blocks_web3_dal() + .get_expected_l1_batch_timestamp(&resolved) + .await + .unwrap(); + assert_eq!(timestamp, Some(first_miniblock.timestamp)); + } + } + + #[tokio::test] + async fn resolving_l1_batch_number_of_miniblock_with_snapshot_recovery() { + let pool = ConnectionPool::test_pool().await; + let mut conn = pool.access_storage().await.unwrap(); + conn.protocol_versions_dal() + .save_protocol_version_with_tx(ProtocolVersion::default()) + .await; + let snapshot_recovery = SnapshotRecoveryStatus { + l1_batch_number: L1BatchNumber(23), + l1_batch_root_hash: H256::zero(), + miniblock_number: MiniblockNumber(42), + miniblock_root_hash: H256::zero(), + last_finished_chunk_id: None, + total_chunk_count: 100, + }; + conn.snapshot_recovery_dal() + .set_applied_snapshot_status(&snapshot_recovery) + .await + .unwrap(); + + let first_miniblock = create_miniblock_header(snapshot_recovery.miniblock_number.0 + 1); + conn.blocks_dal() + .insert_miniblock(&first_miniblock) + .await + .unwrap(); + + let resolved = conn + .storage_web3_dal() + .resolve_l1_batch_number_of_miniblock(snapshot_recovery.miniblock_number + 1) + .await + .unwrap(); + assert_eq!(resolved.miniblock_l1_batch, None); + assert_eq!( + resolved.pending_l1_batch, + snapshot_recovery.l1_batch_number + 1 + ); + assert_eq!( + resolved.expected_l1_batch(), + snapshot_recovery.l1_batch_number + 1 + ); + + let timestamp = conn + .blocks_web3_dal() + .get_expected_l1_batch_timestamp(&resolved) + .await + .unwrap(); + assert_eq!(timestamp, Some(first_miniblock.timestamp)); + + let l1_batch_header = L1BatchHeader::new( + snapshot_recovery.l1_batch_number + 1, + 100, + Address::repeat_byte(0x42), + Default::default(), + ProtocolVersionId::latest(), + ); + conn.blocks_dal() + .insert_l1_batch(&l1_batch_header, &[], BlockGasCount::default(), &[], &[], 0) + .await + .unwrap(); + conn.blocks_dal() + .mark_miniblocks_as_executed_in_l1_batch(l1_batch_header.number) + .await + 
.unwrap(); + + let resolved = conn + .storage_web3_dal() + .resolve_l1_batch_number_of_miniblock(snapshot_recovery.miniblock_number + 1) + .await + .unwrap(); + assert_eq!(resolved.miniblock_l1_batch, Some(l1_batch_header.number)); + assert_eq!(resolved.pending_l1_batch, l1_batch_header.number + 1); + assert_eq!(resolved.expected_l1_batch(), l1_batch_header.number); + + let timestamp = conn + .blocks_web3_dal() + .get_expected_l1_batch_timestamp(&resolved) + .await + .unwrap(); + assert_eq!(timestamp, Some(first_miniblock.timestamp)); + } +} diff --git a/core/lib/dal/src/sync_dal.rs b/core/lib/dal/src/sync_dal.rs index 7b7c1359414..53310603d2c 100644 --- a/core/lib/dal/src/sync_dal.rs +++ b/core/lib/dal/src/sync_dal.rs @@ -1,10 +1,10 @@ -use zksync_types::{api::en::SyncBlock, Address, MiniblockNumber, Transaction}; +use zksync_types::{api::en, Address, MiniblockNumber}; use crate::{ instrument::InstrumentExt, metrics::MethodLatency, - models::{storage_sync::StorageSyncBlock, storage_transaction::StorageTransaction}, - SqlxError, StorageProcessor, + models::storage_sync::{StorageSyncBlock, SyncBlock}, + StorageProcessor, }; /// DAL subset dedicated to the EN synchronization. @@ -14,63 +14,210 @@ pub struct SyncDal<'a, 'c> { } impl SyncDal<'_, '_> { - pub async fn sync_block( + pub(super) async fn sync_block_inner( &mut self, block_number: MiniblockNumber, - current_operator_address: Address, - include_transactions: bool, - ) -> Result, SqlxError> { - let latency = MethodLatency::new("sync_dal_sync_block"); - let storage_block_details = sqlx::query_as!( + ) -> anyhow::Result> { + let Some(block) = sqlx::query_as!( StorageSyncBlock, - "SELECT miniblocks.number, \ - COALESCE(miniblocks.l1_batch_number, (SELECT (max(number) + 1) FROM l1_batches)) as \"l1_batch_number!\", \ - (SELECT max(m2.number) FROM miniblocks m2 WHERE miniblocks.l1_batch_number = m2.l1_batch_number) as \"last_batch_miniblock?\", \ - miniblocks.timestamp, \ - miniblocks.hash as \"root_hash?\", \ - miniblocks.l1_gas_price, \ - miniblocks.l2_fair_gas_price, \ - miniblocks.bootloader_code_hash, \ - miniblocks.default_aa_code_hash, \ - miniblocks.virtual_blocks, \ - miniblocks.hash, \ - miniblocks.consensus, \ - miniblocks.protocol_version as \"protocol_version!\", \ - l1_batches.fee_account_address as \"fee_account_address?\" \ - FROM miniblocks \ - LEFT JOIN l1_batches ON miniblocks.l1_batch_number = l1_batches.number \ - WHERE miniblocks.number = $1", + r#" + SELECT + miniblocks.number, + COALESCE( + miniblocks.l1_batch_number, + ( + SELECT + (MAX(number) + 1) + FROM + l1_batches + ) + ) AS "l1_batch_number!", + ( + SELECT + MAX(m2.number) + FROM + miniblocks m2 + WHERE + miniblocks.l1_batch_number = m2.l1_batch_number + ) AS "last_batch_miniblock?", + miniblocks.timestamp, + miniblocks.l1_gas_price, + miniblocks.l2_fair_gas_price, + miniblocks.fair_pubdata_price, + miniblocks.bootloader_code_hash, + miniblocks.default_aa_code_hash, + miniblocks.virtual_blocks, + miniblocks.hash, + miniblocks.protocol_version AS "protocol_version!", + l1_batches.fee_account_address AS "fee_account_address?" + FROM + miniblocks + LEFT JOIN l1_batches ON miniblocks.l1_batch_number = l1_batches.number + WHERE + miniblocks.number = $1 + "#, block_number.0 as i64 ) .instrument("sync_dal_sync_block.block") .with_arg("block_number", &block_number) .fetch_optional(self.storage.conn()) - .await?; + .await? 
+ else { + return Ok(None); + }; + Ok(Some(block.try_into()?)) + } - let res = if let Some(storage_block_details) = storage_block_details { - let transactions = if include_transactions { - let block_transactions = sqlx::query_as!( - StorageTransaction, - r#"SELECT * FROM transactions WHERE miniblock_number = $1 ORDER BY index_in_block"#, - block_number.0 as i64 - ) - .instrument("sync_dal_sync_block.transactions") - .with_arg("block_number", &block_number) - .fetch_all(self.storage.conn()) - .await? - .into_iter() - .map(Transaction::from) - .collect(); - Some(block_transactions) - } else { - None - }; - Some(storage_block_details.into_sync_block(current_operator_address, transactions)) + pub async fn sync_block( + &mut self, + block_number: MiniblockNumber, + current_operator_address: Address, + include_transactions: bool, + ) -> anyhow::Result> { + let _latency = MethodLatency::new("sync_dal_sync_block"); + let Some(block) = self.sync_block_inner(block_number).await? else { + return Ok(None); + }; + let transactions = if include_transactions { + Some( + self.storage + .transactions_web3_dal() + .get_raw_miniblock_transactions(block_number) + .await?, + ) } else { None }; + Ok(Some(block.into_api(current_operator_address, transactions))) + } +} + +#[cfg(test)] +mod tests { + use zksync_types::{ + block::{BlockGasCount, L1BatchHeader}, + fee::TransactionExecutionMetrics, + L1BatchNumber, ProtocolVersion, ProtocolVersionId, Transaction, + }; + + use super::*; + use crate::{ + tests::{create_miniblock_header, mock_execution_result, mock_l2_transaction}, + ConnectionPool, + }; + + #[tokio::test] + async fn sync_block_basics() { + let pool = ConnectionPool::test_pool().await; + let mut conn = pool.access_storage().await.unwrap(); + + // Simulate genesis. + conn.protocol_versions_dal() + .save_protocol_version_with_tx(ProtocolVersion::default()) + .await; + conn.blocks_dal() + .insert_miniblock(&create_miniblock_header(0)) + .await + .unwrap(); + let mut l1_batch_header = L1BatchHeader::new( + L1BatchNumber(0), + 0, + Address::repeat_byte(0x42), + Default::default(), + ProtocolVersionId::latest(), + ); + conn.blocks_dal() + .insert_l1_batch(&l1_batch_header, &[], BlockGasCount::default(), &[], &[], 0) + .await + .unwrap(); + conn.blocks_dal() + .mark_miniblocks_as_executed_in_l1_batch(L1BatchNumber(0)) + .await + .unwrap(); + + let operator_address = Address::repeat_byte(1); + assert!(conn + .sync_dal() + .sync_block(MiniblockNumber(1), operator_address, false) + .await + .unwrap() + .is_none()); + + // Insert another block in the store. 
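`sync_block` is now composed from two smaller queries: `sync_block_inner` fetches the header-level data, and `get_raw_miniblock_transactions` runs only when `include_transactions` is set, replacing the previous monolithic query. A usage sketch mirroring the `sync_block_basics` test below (the operator address is illustrative):

```rust
use zksync_types::{Address, MiniblockNumber};

// Sketch: fetch a sync block together with its transactions for the EN API.
async fn fetch_block(conn: &mut zksync_dal::StorageProcessor<'_>) -> anyhow::Result<()> {
    let block = conn
        .sync_dal()
        .sync_block(MiniblockNumber(1), Address::repeat_byte(1), true)
        .await?;
    if let Some(block) = block {
        // `transactions` is `Some(_)` because `include_transactions` was true.
        let txs = block.transactions.unwrap();
        println!("batch {:?}, {} transactions", block.l1_batch_number, txs.len());
    }
    Ok(())
}
```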
+ let miniblock_header = create_miniblock_header(1); + let tx = mock_l2_transaction(); + conn.transactions_dal() + .insert_transaction_l2(tx.clone(), TransactionExecutionMetrics::default()) + .await; + conn.blocks_dal() + .insert_miniblock(&miniblock_header) + .await + .unwrap(); + conn.transactions_dal() + .mark_txs_as_executed_in_miniblock( + MiniblockNumber(1), + &[mock_execution_result(tx.clone())], + 1.into(), + ) + .await; + + let block = conn + .sync_dal() + .sync_block(MiniblockNumber(1), operator_address, false) + .await + .unwrap() + .expect("no sync block"); + assert_eq!(block.number, MiniblockNumber(1)); + assert_eq!(block.l1_batch_number, L1BatchNumber(1)); + assert!(!block.last_in_batch); + assert_eq!(block.timestamp, miniblock_header.timestamp); + assert_eq!( + block.protocol_version, + miniblock_header.protocol_version.unwrap() + ); + assert_eq!( + block.virtual_blocks.unwrap(), + miniblock_header.virtual_blocks + ); + assert_eq!( + block.l1_gas_price, + miniblock_header.batch_fee_input.l1_gas_price() + ); + assert_eq!( + block.l2_fair_gas_price, + miniblock_header.batch_fee_input.fair_l2_gas_price() + ); + assert_eq!(block.operator_address, operator_address); + assert!(block.transactions.is_none()); + + let block = conn + .sync_dal() + .sync_block(MiniblockNumber(1), operator_address, true) + .await + .unwrap() + .expect("no sync block"); + let transactions = block.transactions.unwrap(); + assert_eq!(transactions, [Transaction::from(tx)]); + + l1_batch_header.number = L1BatchNumber(1); + l1_batch_header.timestamp = 1; + conn.blocks_dal() + .insert_l1_batch(&l1_batch_header, &[], BlockGasCount::default(), &[], &[], 0) + .await + .unwrap(); + conn.blocks_dal() + .mark_miniblocks_as_executed_in_l1_batch(L1BatchNumber(1)) + .await + .unwrap(); - drop(latency); - Ok(res) + let block = conn + .sync_dal() + .sync_block(MiniblockNumber(1), operator_address, true) + .await + .unwrap() + .expect("no sync block"); + assert_eq!(block.l1_batch_number, L1BatchNumber(1)); + assert!(block.last_in_batch); + assert_eq!(block.operator_address, l1_batch_header.fee_account_address); } } diff --git a/core/lib/dal/src/system_dal.rs b/core/lib/dal/src/system_dal.rs index e9b02094305..e5cf2cf29d4 100644 --- a/core/lib/dal/src/system_dal.rs +++ b/core/lib/dal/src/system_dal.rs @@ -9,7 +9,7 @@ pub struct SystemDal<'a, 'c> { impl SystemDal<'_, '_> { pub async fn get_replication_lag_sec(&mut self) -> u32 { // NOTE: lag (seconds) has a special meaning here - // (it is not the same that replay_lag/write_lag/flush_lag from pg_stat_replication view) + // (it is not the same that `replay_lag/write_lag/flush_lag` from `pg_stat_replication` view) // and it is only useful when synced column is false, // because lag means how many seconds elapsed since the last action was committed. 
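As the comment stresses, this lag is "seconds since the last committed action" rather than the `replay_lag`/`write_lag`/`flush_lag` columns, and it is only meaningful while the replica reports `synced = false`. A monitoring-style sketch, assuming a `system_dal()` accessor for `SystemDal` (the 60-second threshold is purely illustrative):

```rust
// Sketch: poll replication health and log when a replica stalls.
async fn check_replication(conn: &mut zksync_dal::StorageProcessor<'_>) {
    let lag_sec = conn.system_dal().get_replication_lag_sec().await;
    if lag_sec > 60 {
        // Only meaningful while the replica is not yet synced.
        println!("replication lag is {lag_sec}s");
    }
}
```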
let pg_row = sqlx::query( diff --git a/core/lib/dal/src/tests/mod.rs b/core/lib/dal/src/tests/mod.rs index c383ea7f944..5f6d2fddb27 100644 --- a/core/lib/dal/src/tests/mod.rs +++ b/core/lib/dal/src/tests/mod.rs @@ -1,27 +1,25 @@ -use std::fs; use std::time::Duration; use zksync_contracts::BaseSystemContractsHashes; use zksync_types::{ - block::{miniblock_hash, L1BatchHeader, MiniblockHeader}, + block::{MiniblockHasher, MiniblockHeader}, fee::{Fee, TransactionExecutionMetrics}, + fee_model::BatchFeeInput, helpers::unix_timestamp_ms, l1::{L1Tx, OpProcessingType, PriorityQueueType}, l2::L2Tx, - proofs::AggregationRound, tx::{tx_execution_info::TxExecutionStatus, ExecutionMetrics, TransactionExecutionResult}, - Address, Execute, L1BatchNumber, L1BlockNumber, L1TxCommonData, L2ChainId, MiniblockNumber, - PriorityOpId, ProtocolVersion, ProtocolVersionId, H160, H256, MAX_GAS_PER_PUBDATA_BYTE, U256, + Address, Execute, L1BlockNumber, L1TxCommonData, L2ChainId, MiniblockNumber, PriorityOpId, + ProtocolVersionId, H160, H256, U256, }; -use crate::blocks_dal::BlocksDal; -use crate::connection::ConnectionPool; -use crate::protocol_versions_dal::ProtocolVersionsDal; -use crate::prover_dal::{GetProverJobsParams, ProverDal}; -use crate::transactions_dal::L2TxSubmissionResult; -use crate::transactions_dal::TransactionsDal; -use crate::transactions_web3_dal::TransactionsWeb3Dal; -use crate::witness_generator_dal::WitnessGeneratorDal; +use crate::{ + blocks_dal::BlocksDal, + connection::ConnectionPool, + protocol_versions_dal::ProtocolVersionsDal, + transactions_dal::{L2TxSubmissionResult, TransactionsDal}, + transactions_web3_dal::TransactionsWeb3Dal, +}; const DEFAULT_GAS_PER_PUBDATA: u32 = 100; @@ -30,17 +28,19 @@ fn mock_tx_execution_metrics() -> TransactionExecutionMetrics { } pub(crate) fn create_miniblock_header(number: u32) -> MiniblockHeader { + let number = MiniblockNumber(number); + let protocol_version = ProtocolVersionId::default(); MiniblockHeader { - number: MiniblockNumber(number), - timestamp: 0, - hash: miniblock_hash(MiniblockNumber(number), 0, H256::zero(), H256::zero()), + number, + timestamp: number.0.into(), + hash: MiniblockHasher::new(number, 0, H256::zero()).finalize(protocol_version), l1_tx_count: 0, l2_tx_count: 0, + gas_per_pubdata_limit: 100, base_fee_per_gas: 100, - l1_gas_price: 100, - l2_fair_gas_price: 100, + batch_fee_input: BatchFeeInput::l1_pegged(100, 100), base_system_contracts_hashes: BaseSystemContractsHashes::default(), - protocol_version: Some(ProtocolVersionId::default()), + protocol_version: Some(protocol_version), virtual_blocks: 1, } } @@ -80,7 +80,7 @@ fn mock_l1_execute() -> L1Tx { full_fee: U256::zero(), gas_limit: U256::from(100_100), max_fee_per_gas: U256::from(1u32), - gas_per_pubdata_limit: MAX_GAS_PER_PUBDATA_BYTE.into(), + gas_per_pubdata_limit: 100.into(), op_processing_type: OpProcessingType::Common, priority_queue_type: PriorityQueueType::Deque, eth_hash: H256::random(), @@ -255,418 +255,3 @@ async fn remove_stuck_txs() { .unwrap() .unwrap(); } - -fn create_circuits() -> Vec<(&'static str, String)> { - vec![ - ("Main VM", "1_0_Main VM_BasicCircuits.bin".to_owned()), - ("SHA256", "1_1_SHA256_BasicCircuits.bin".to_owned()), - ( - "Code decommitter", - "1_2_Code decommitter_BasicCircuits.bin".to_owned(), - ), - ( - "Log demuxer", - "1_3_Log demuxer_BasicCircuits.bin".to_owned(), - ), - ] -} - -#[tokio::test] -async fn test_duplicate_insert_prover_jobs() { - let connection_pool = ConnectionPool::test_pool().await; - let storage = &mut 
connection_pool.access_storage().await.unwrap(); - storage - .protocol_versions_dal() - .save_protocol_version_with_tx(Default::default()) - .await; - storage - .protocol_versions_dal() - .save_prover_protocol_version(Default::default()) - .await; - let block_number = 1; - let header = L1BatchHeader::new( - L1BatchNumber(block_number), - 0, - Default::default(), - Default::default(), - Default::default(), - ); - storage - .blocks_dal() - .insert_l1_batch(&header, &[], Default::default(), &[], &[]) - .await - .unwrap(); - - let mut prover_dal = ProverDal { storage }; - let circuits = create_circuits(); - let l1_batch_number = L1BatchNumber(block_number); - prover_dal - .insert_prover_jobs( - l1_batch_number, - circuits.clone(), - AggregationRound::BasicCircuits, - ProtocolVersionId::latest() as i32, - ) - .await; - - // try inserting the same jobs again to ensure it does not panic - prover_dal - .insert_prover_jobs( - l1_batch_number, - circuits.clone(), - AggregationRound::BasicCircuits, - ProtocolVersionId::latest() as i32, - ) - .await; - - let prover_jobs_params = GetProverJobsParams { - statuses: None, - blocks: Some(std::ops::Range { - start: l1_batch_number, - end: l1_batch_number + 1, - }), - limit: None, - desc: false, - round: None, - }; - let jobs = prover_dal.get_jobs(prover_jobs_params).await.unwrap(); - assert_eq!(circuits.len(), jobs.len()); -} - -#[tokio::test] -async fn test_requeue_prover_jobs() { - let connection_pool = ConnectionPool::test_pool().await; - let storage = &mut connection_pool.access_storage().await.unwrap(); - let protocol_version = ProtocolVersion::default(); - storage - .protocol_versions_dal() - .save_protocol_version_with_tx(protocol_version) - .await; - storage - .protocol_versions_dal() - .save_prover_protocol_version(Default::default()) - .await; - let block_number = 1; - let header = L1BatchHeader::new( - L1BatchNumber(block_number), - 0, - Default::default(), - Default::default(), - ProtocolVersionId::latest(), - ); - storage - .blocks_dal() - .insert_l1_batch(&header, &[], Default::default(), &[], &[]) - .await - .unwrap(); - - let mut prover_dal = ProverDal { storage }; - let circuits = create_circuits(); - let l1_batch_number = L1BatchNumber(block_number); - prover_dal - .insert_prover_jobs( - l1_batch_number, - circuits, - AggregationRound::BasicCircuits, - ProtocolVersionId::latest() as i32, - ) - .await; - - // take all jobs from prover_job table - for _ in 1..=4 { - let job = prover_dal - .get_next_prover_job(&[ProtocolVersionId::latest()]) - .await; - assert!(job.is_some()); - } - let job = prover_dal - .get_next_prover_job(&[ProtocolVersionId::latest()]) - .await; - assert!(job.is_none()); - // re-queue jobs - let stuck_jobs = prover_dal - .requeue_stuck_jobs(Duration::from_secs(0), 10) - .await; - assert_eq!(4, stuck_jobs.len()); - // re-check that all jobs can be taken again - for _ in 1..=4 { - let job = prover_dal - .get_next_prover_job(&[ProtocolVersionId::latest()]) - .await; - assert!(job.is_some()); - } -} - -#[tokio::test] -async fn test_move_leaf_aggregation_jobs_from_waiting_to_queued() { - let connection_pool = ConnectionPool::test_pool().await; - let storage = &mut connection_pool.access_storage().await.unwrap(); - let protocol_version = ProtocolVersion::default(); - storage - .protocol_versions_dal() - .save_protocol_version_with_tx(protocol_version) - .await; - storage - .protocol_versions_dal() - .save_prover_protocol_version(Default::default()) - .await; - let block_number = 1; - let header = L1BatchHeader::new( - 
L1BatchNumber(block_number), - 0, - Default::default(), - Default::default(), - ProtocolVersionId::latest(), - ); - storage - .blocks_dal() - .insert_l1_batch(&header, &[], Default::default(), &[], &[]) - .await - .unwrap(); - - let mut prover_dal = ProverDal { storage }; - let circuits = create_circuits(); - let l1_batch_number = L1BatchNumber(block_number); - prover_dal - .insert_prover_jobs( - l1_batch_number, - circuits.clone(), - AggregationRound::BasicCircuits, - ProtocolVersionId::latest() as i32, - ) - .await; - let prover_jobs_params = get_default_prover_jobs_params(l1_batch_number); - let jobs = prover_dal.get_jobs(prover_jobs_params).await; - let job_ids: Vec = jobs.unwrap().into_iter().map(|job| job.id).collect(); - - let proof = get_sample_proof(); - - // mark all basic circuit proofs as successful. - for id in job_ids.iter() { - prover_dal - .save_proof(*id, Duration::from_secs(0), proof.clone(), "unit-test") - .await - .unwrap(); - } - let mut witness_generator_dal = WitnessGeneratorDal { storage }; - - witness_generator_dal - .create_aggregation_jobs( - l1_batch_number, - "basic_circuits_1.bin", - "basic_circuits_inputs_1.bin", - circuits.len(), - "scheduler_witness_1.bin", - ProtocolVersionId::latest() as i32, - ) - .await; - - // move the leaf aggregation job to be queued - witness_generator_dal - .move_leaf_aggregation_jobs_from_waiting_to_queued() - .await; - - // Ensure get-next job gives the leaf aggregation witness job - let job = witness_generator_dal - .get_next_leaf_aggregation_witness_job( - Duration::from_secs(0), - 10, - u32::MAX, - &[ProtocolVersionId::latest()], - ) - .await; - assert_eq!(l1_batch_number, job.unwrap().block_number); -} - -#[tokio::test] -async fn test_move_node_aggregation_jobs_from_waiting_to_queued() { - let connection_pool = ConnectionPool::test_pool().await; - let storage = &mut connection_pool.access_storage().await.unwrap(); - let protocol_version = ProtocolVersion::default(); - storage - .protocol_versions_dal() - .save_protocol_version_with_tx(protocol_version) - .await; - storage - .protocol_versions_dal() - .save_prover_protocol_version(Default::default()) - .await; - let block_number = 1; - let header = L1BatchHeader::new( - L1BatchNumber(block_number), - 0, - Default::default(), - Default::default(), - ProtocolVersionId::latest(), - ); - storage - .blocks_dal() - .insert_l1_batch(&header, &[], Default::default(), &[], &[]) - .await - .unwrap(); - - let mut prover_dal = ProverDal { storage }; - let circuits = create_circuits(); - let l1_batch_number = L1BatchNumber(block_number); - prover_dal - .insert_prover_jobs( - l1_batch_number, - circuits.clone(), - AggregationRound::LeafAggregation, - ProtocolVersionId::latest() as i32, - ) - .await; - let prover_jobs_params = get_default_prover_jobs_params(l1_batch_number); - let jobs = prover_dal.get_jobs(prover_jobs_params).await; - let job_ids: Vec = jobs.unwrap().into_iter().map(|job| job.id).collect(); - - let proof = get_sample_proof(); - // mark all leaf aggregation circuit proofs as successful. 
- for id in job_ids { - prover_dal - .save_proof(id, Duration::from_secs(0), proof.clone(), "unit-test") - .await - .unwrap(); - } - let mut witness_generator_dal = WitnessGeneratorDal { storage }; - - witness_generator_dal - .create_aggregation_jobs( - l1_batch_number, - "basic_circuits_1.bin", - "basic_circuits_inputs_1.bin", - circuits.len(), - "scheduler_witness_1.bin", - ProtocolVersionId::latest() as i32, - ) - .await; - witness_generator_dal - .save_leaf_aggregation_artifacts( - l1_batch_number, - circuits.len(), - "leaf_layer_subqueues_1.bin", - "aggregation_outputs_1.bin", - ) - .await; - - // move the leaf aggregation job to be queued - witness_generator_dal - .move_node_aggregation_jobs_from_waiting_to_queued() - .await; - - // Ensure get-next job gives the node aggregation witness job - let job = witness_generator_dal - .get_next_node_aggregation_witness_job( - Duration::from_secs(0), - 10, - u32::MAX, - &[ProtocolVersionId::latest()], - ) - .await; - assert_eq!(l1_batch_number, job.unwrap().block_number); -} - -#[tokio::test] -async fn test_move_scheduler_jobs_from_waiting_to_queued() { - let connection_pool = ConnectionPool::test_pool().await; - let storage = &mut connection_pool.access_storage().await.unwrap(); - let protocol_version = ProtocolVersion::default(); - storage - .protocol_versions_dal() - .save_protocol_version_with_tx(protocol_version) - .await; - storage - .protocol_versions_dal() - .save_prover_protocol_version(Default::default()) - .await; - let block_number = 1; - let header = L1BatchHeader::new( - L1BatchNumber(block_number), - 0, - Default::default(), - Default::default(), - ProtocolVersionId::latest(), - ); - storage - .blocks_dal() - .insert_l1_batch(&header, &[], Default::default(), &[], &[]) - .await - .unwrap(); - - let mut prover_dal = ProverDal { storage }; - let circuits = vec![( - "Node aggregation", - "1_0_Node aggregation_NodeAggregation.bin".to_owned(), - )]; - let l1_batch_number = L1BatchNumber(block_number); - prover_dal - .insert_prover_jobs( - l1_batch_number, - circuits.clone(), - AggregationRound::NodeAggregation, - ProtocolVersionId::latest() as i32, - ) - .await; - let prover_jobs_params = get_default_prover_jobs_params(l1_batch_number); - let jobs = prover_dal.get_jobs(prover_jobs_params).await; - let job_ids: Vec = jobs.unwrap().into_iter().map(|job| job.id).collect(); - - let proof = get_sample_proof(); - // mark node aggregation circuit proofs as successful. 
- for id in &job_ids { - prover_dal - .save_proof(*id, Duration::from_secs(0), proof.clone(), "unit-test") - .await - .unwrap(); - } - let mut witness_generator_dal = WitnessGeneratorDal { storage }; - - witness_generator_dal - .create_aggregation_jobs( - l1_batch_number, - "basic_circuits_1.bin", - "basic_circuits_inputs_1.bin", - circuits.len(), - "scheduler_witness_1.bin", - ProtocolVersionId::latest() as i32, - ) - .await; - witness_generator_dal - .save_node_aggregation_artifacts(l1_batch_number, "final_node_aggregations_1.bin") - .await; - - // move the leaf aggregation job to be queued - witness_generator_dal - .move_scheduler_jobs_from_waiting_to_queued() - .await; - - // Ensure get-next job gives the scheduler witness job - let job = witness_generator_dal - .get_next_scheduler_witness_job( - Duration::from_secs(0), - 10, - u32::MAX, - &[ProtocolVersionId::latest()], - ) - .await; - assert_eq!(l1_batch_number, job.unwrap().block_number); -} - -fn get_default_prover_jobs_params(l1_batch_number: L1BatchNumber) -> GetProverJobsParams { - GetProverJobsParams { - statuses: None, - blocks: Some(std::ops::Range { - start: l1_batch_number, - end: l1_batch_number + 1, - }), - limit: None, - desc: false, - round: None, - } -} - -fn get_sample_proof() -> Vec { - let zksync_home = std::env::var("ZKSYNC_HOME").unwrap_or_else(|_| ".".into()); - fs::read(format!("{}/etc/prover-test-data/proof.bin", zksync_home)) - .expect("Failed reading test proof file") -} diff --git a/core/lib/dal/src/time_utils.rs b/core/lib/dal/src/time_utils.rs index 45ff661a319..0ede5e6fc57 100644 --- a/core/lib/dal/src/time_utils.rs +++ b/core/lib/dal/src/time_utils.rs @@ -1,7 +1,7 @@ -use sqlx::postgres::types::PgInterval; -use sqlx::types::chrono::NaiveTime; use std::time::Duration; +use sqlx::{postgres::types::PgInterval, types::chrono::NaiveTime}; + pub fn duration_to_naive_time(duration: Duration) -> NaiveTime { let total_seconds = duration.as_secs() as u32; NaiveTime::from_hms_opt( diff --git a/core/lib/dal/src/tokens_dal.rs b/core/lib/dal/src/tokens_dal.rs index f7b64aed69e..96072bc2ec4 100644 --- a/core/lib/dal/src/tokens_dal.rs +++ b/core/lib/dal/src/tokens_dal.rs @@ -1,4 +1,3 @@ -use crate::StorageProcessor; use num::{rational::Ratio, BigUint}; use sqlx::types::chrono::Utc; use zksync_types::{ @@ -8,6 +7,8 @@ use zksync_types::{ }; use zksync_utils::ratio_to_big_decimal; +use crate::StorageProcessor; + // Precision of the USD price per token pub(crate) const STORED_USD_PRICE_PRECISION: usize = 6; @@ -62,10 +63,17 @@ impl TokensDal<'_, '_> { ) { { sqlx::query!( - "UPDATE tokens SET token_list_name = $2, token_list_symbol = $3, - token_list_decimals = $4, well_known = true, updated_at = now() - WHERE l1_address = $1 - ", + r#" + UPDATE tokens + SET + token_list_name = $2, + token_list_symbol = $3, + token_list_decimals = $4, + well_known = TRUE, + updated_at = NOW() + WHERE + l1_address = $1 + "#, l1_address.as_bytes(), metadata.name, metadata.symbol, @@ -77,13 +85,20 @@ impl TokensDal<'_, '_> { } } - pub async fn get_well_known_token_addresses(&mut self) -> Vec<(Address, Address)> { + pub(crate) async fn get_well_known_token_addresses(&mut self) -> Vec<(Address, Address)> { { - let records = - sqlx::query!("SELECT l1_address, l2_address FROM tokens WHERE well_known = true") - .fetch_all(self.storage.conn()) - .await - .unwrap(); + let records = sqlx::query!( + r#" + SELECT + l1_address, + l2_address + FROM + tokens + "# + ) + .fetch_all(self.storage.conn()) + .await + .unwrap(); let addresses: Vec<(Address, 
Address)> = records
 .into_iter()
 .map(|record| {
@@ -99,10 +114,17 @@ impl TokensDal<'_, '_> {
 pub async fn get_all_l2_token_addresses(&mut self) -> Vec<Address> {
 {
- let records = sqlx::query!("SELECT l2_address FROM tokens")
- .fetch_all(self.storage.conn())
- .await
- .unwrap();
+ let records = sqlx::query!(
+ r#"
+ SELECT
+ l2_address
+ FROM
+ tokens
+ "#
+ )
+ .fetch_all(self.storage.conn())
+ .await
+ .unwrap();
 let addresses: Vec<Address> = records
 .into_iter()
 .map(|record| Address::from_slice(&record.l2_address))
@@ -113,10 +135,19 @@ impl TokensDal<'_, '_> {
 pub async fn get_unknown_l1_token_addresses(&mut self) -> Vec<Address> {
 {
- let records = sqlx::query!("SELECT l1_address FROM tokens WHERE well_known = false")
- .fetch_all(self.storage.conn())
- .await
- .unwrap();
+ let records = sqlx::query!(
+ r#"
+ SELECT
+ l1_address
+ FROM
+ tokens
+ WHERE
+ well_known = FALSE
+ "#
+ )
+ .fetch_all(self.storage.conn())
+ .await
+ .unwrap();
 let addresses: Vec<Address>
= records .into_iter() .map(|record| Address::from_slice(&record.l1_address)) @@ -129,7 +160,14 @@ impl TokensDal<'_, '_> { { let min_volume = ratio_to_big_decimal(min_volume, STORED_USD_PRICE_PRECISION); let records = sqlx::query!( - "SELECT l1_address FROM tokens WHERE market_volume > $1", + r#" + SELECT + l1_address + FROM + tokens + WHERE + market_volume > $1 + "#, min_volume ) .fetch_all(self.storage.conn()) @@ -146,11 +184,19 @@ impl TokensDal<'_, '_> { pub async fn set_l1_token_price(&mut self, l1_address: &Address, price: TokenPrice) { { sqlx::query!( - "UPDATE tokens SET usd_price = $2, usd_price_updated_at = $3, updated_at = now() WHERE l1_address = $1", - l1_address.as_bytes(), - ratio_to_big_decimal(&price.usd_price, STORED_USD_PRICE_PRECISION), - price.last_updated.naive_utc(), - ) + r#" + UPDATE tokens + SET + usd_price = $2, + usd_price_updated_at = $3, + updated_at = NOW() + WHERE + l1_address = $1 + "#, + l1_address.as_bytes(), + ratio_to_big_decimal(&price.usd_price, STORED_USD_PRICE_PRECISION), + price.last_updated.naive_utc(), + ) .execute(self.storage.conn()) .await .unwrap(); @@ -160,20 +206,29 @@ impl TokensDal<'_, '_> { pub async fn rollback_tokens(&mut self, block_number: MiniblockNumber) { { sqlx::query!( - " - DELETE FROM tokens - WHERE l2_address IN - ( - SELECT substring(key, 12, 20) FROM storage_logs - WHERE storage_logs.address = $1 AND miniblock_number > $2 AND NOT EXISTS ( - SELECT 1 FROM storage_logs as s - WHERE - s.hashed_key = storage_logs.hashed_key AND - (s.miniblock_number, s.operation_number) >= (storage_logs.miniblock_number, storage_logs.operation_number) AND - s.value = $3 - ) + r#" + DELETE FROM tokens + WHERE + l2_address IN ( + SELECT + SUBSTRING(key, 12, 20) + FROM + storage_logs + WHERE + storage_logs.address = $1 + AND miniblock_number > $2 + AND NOT EXISTS ( + SELECT + 1 + FROM + storage_logs AS s + WHERE + s.hashed_key = storage_logs.hashed_key + AND (s.miniblock_number, s.operation_number) >= (storage_logs.miniblock_number, storage_logs.operation_number) + AND s.value = $3 + ) ) - ", + "#, ACCOUNT_CODE_STORAGE_ADDRESS.as_bytes(), block_number.0 as i64, FAILED_CONTRACT_DEPLOYMENT_BYTECODE_HASH.as_bytes() diff --git a/core/lib/dal/src/tokens_web3_dal.rs b/core/lib/dal/src/tokens_web3_dal.rs index aa3674b6c3d..c3421c64679 100644 --- a/core/lib/dal/src/tokens_web3_dal.rs +++ b/core/lib/dal/src/tokens_web3_dal.rs @@ -1,11 +1,10 @@ -use crate::models::storage_token::StorageTokenPrice; -use crate::SqlxError; -use crate::StorageProcessor; use zksync_types::{ tokens::{TokenInfo, TokenMetadata, TokenPrice}, Address, }; +use crate::{models::storage_token::StorageTokenPrice, SqlxError, StorageProcessor}; + #[derive(Debug)] pub struct TokensWeb3Dal<'a, 'c> { pub(crate) storage: &'a mut StorageProcessor<'c>, @@ -15,9 +14,20 @@ impl TokensWeb3Dal<'_, '_> { pub async fn get_well_known_tokens(&mut self) -> Result, SqlxError> { { let records = sqlx::query!( - "SELECT l1_address, l2_address, name, symbol, decimals FROM tokens - WHERE well_known = true - ORDER BY symbol" + r#" + SELECT + l1_address, + l2_address, + NAME, + symbol, + decimals + FROM + tokens + WHERE + well_known = TRUE + ORDER BY + symbol + "# ) .fetch_all(self.storage.conn()) .await?; @@ -44,7 +54,15 @@ impl TokensWeb3Dal<'_, '_> { { let storage_price = sqlx::query_as!( StorageTokenPrice, - "SELECT usd_price, usd_price_updated_at FROM tokens WHERE l2_address = $1", + r#" + SELECT + usd_price, + usd_price_updated_at + FROM + tokens + WHERE + l2_address = $1 + "#, l2_address.as_bytes(), ) 
.fetch_optional(self.storage.conn()) diff --git a/core/lib/dal/src/transactions_dal.rs b/core/lib/dal/src/transactions_dal.rs index 5b235689961..19c19646bbf 100644 --- a/core/lib/dal/src/transactions_dal.rs +++ b/core/lib/dal/src/transactions_dal.rs @@ -1,10 +1,9 @@ +use std::{collections::HashMap, fmt, time::Duration}; + +use anyhow::Context; use bigdecimal::BigDecimal; use itertools::Itertools; use sqlx::{error, types::chrono::NaiveDateTime}; - -use anyhow::Context; -use std::{collections::HashMap, fmt, time::Duration}; - use zksync_types::{ block::MiniblockExecutionData, fee::TransactionExecutionMetrics, @@ -81,43 +80,57 @@ impl TransactionsDal<'_, '_> { let received_at = NaiveDateTime::from_timestamp_opt(secs, nanosecs).unwrap(); sqlx::query!( - " - INSERT INTO transactions - ( - hash, - is_priority, - initiator_address, - - gas_limit, - max_fee_per_gas, - gas_per_pubdata_limit, - - data, - priority_op_id, - full_fee, - layer_2_tip_fee, - contract_address, - l1_block_number, - value, - - paymaster, - paymaster_input, - tx_format, - - l1_tx_mint, - l1_tx_refund_recipient, - - received_at, - created_at, - updated_at - ) + r#" + INSERT INTO + transactions ( + hash, + is_priority, + initiator_address, + gas_limit, + max_fee_per_gas, + gas_per_pubdata_limit, + data, + priority_op_id, + full_fee, + layer_2_tip_fee, + contract_address, + l1_block_number, + value, + paymaster, + paymaster_input, + tx_format, + l1_tx_mint, + l1_tx_refund_recipient, + received_at, + created_at, + updated_at + ) VALUES ( - $1, TRUE, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, - $13, $14, $15, $16, $17, $18, now(), now() + $1, + TRUE, + $2, + $3, + $4, + $5, + $6, + $7, + $8, + $9, + $10, + $11, + $12, + $13, + $14, + $15, + $16, + $17, + $18, + NOW(), + NOW() ) ON CONFLICT (hash) DO NOTHING - ", + "#, tx_hash_bytes, sender, gas_limit, @@ -167,41 +180,53 @@ impl TransactionsDal<'_, '_> { let received_at = NaiveDateTime::from_timestamp_opt(secs, nanosecs).unwrap(); sqlx::query!( - " - INSERT INTO transactions - ( - hash, - is_priority, - initiator_address, - - gas_limit, - max_fee_per_gas, - gas_per_pubdata_limit, - - data, - upgrade_id, - contract_address, - l1_block_number, - value, - - paymaster, - paymaster_input, - tx_format, - - l1_tx_mint, - l1_tx_refund_recipient, - - received_at, - created_at, - updated_at - ) + r#" + INSERT INTO + transactions ( + hash, + is_priority, + initiator_address, + gas_limit, + max_fee_per_gas, + gas_per_pubdata_limit, + data, + upgrade_id, + contract_address, + l1_block_number, + value, + paymaster, + paymaster_input, + tx_format, + l1_tx_mint, + l1_tx_refund_recipient, + received_at, + created_at, + updated_at + ) VALUES ( - $1, TRUE, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, - $13, $14, $15, $16, now(), now() + $1, + TRUE, + $2, + $3, + $4, + $5, + $6, + $7, + $8, + $9, + $10, + $11, + $12, + $13, + $14, + $15, + $16, + NOW(), + NOW() ) ON CONFLICT (hash) DO NOTHING - ", + "#, tx_hash, sender, gas_limit, @@ -259,64 +284,92 @@ impl TransactionsDal<'_, '_> { // 3) WHERE clause conditions for DO UPDATE block were not met, so the transaction can't be replaced // the subquery in RETURNING clause looks into pre-UPDATE state of the table. So if the subquery will return NULL // transaction is fresh and was added to db(the second condition of RETURNING clause checks it). 
- // Otherwise, if the subquery won't return NULL it means that there is already tx with such nonce and initiator_address in DB + // Otherwise, if the subquery won't return NULL it means that there is already tx with such nonce and `initiator_address` in DB // and we can replace it WHERE clause conditions are met. // It is worth mentioning that if WHERE clause conditions are not met, None will be returned. let query_result = sqlx::query!( r#" - INSERT INTO transactions - ( - hash, - is_priority, - initiator_address, - nonce, - signature, - gas_limit, - max_fee_per_gas, - max_priority_fee_per_gas, - gas_per_pubdata_limit, - input, - data, - tx_format, - contract_address, - value, - paymaster, - paymaster_input, - execution_info, - received_at, - created_at, - updated_at - ) + INSERT INTO + transactions ( + hash, + is_priority, + initiator_address, + nonce, + signature, + gas_limit, + max_fee_per_gas, + max_priority_fee_per_gas, + gas_per_pubdata_limit, + input, + data, + tx_format, + contract_address, + value, + paymaster, + paymaster_input, + execution_info, + received_at, + created_at, + updated_at + ) VALUES ( - $1, FALSE, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, $15, - jsonb_build_object('gas_used', $16::bigint, 'storage_writes', $17::int, 'contracts_used', $18::int), - $19, now(), now() + $1, + FALSE, + $2, + $3, + $4, + $5, + $6, + $7, + $8, + $9, + $10, + $11, + $12, + $13, + $14, + $15, + JSONB_BUILD_OBJECT('gas_used', $16::BIGINT, 'storage_writes', $17::INT, 'contracts_used', $18::INT), + $19, + NOW(), + NOW() ) - ON CONFLICT - (initiator_address, nonce) - DO UPDATE - SET hash=$1, - signature=$4, - gas_limit=$5, - max_fee_per_gas=$6, - max_priority_fee_per_gas=$7, - gas_per_pubdata_limit=$8, - input=$9, - data=$10, - tx_format=$11, - contract_address=$12, - value=$13, - paymaster=$14, - paymaster_input=$15, - execution_info=jsonb_build_object('gas_used', $16::bigint, 'storage_writes', $17::int, 'contracts_used', $18::int), - in_mempool=FALSE, - received_at=$19, - created_at=now(), - updated_at=now(), - error = NULL - WHERE transactions.is_priority = FALSE AND transactions.miniblock_number IS NULL - RETURNING (SELECT hash FROM transactions WHERE transactions.initiator_address = $2 AND transactions.nonce = $3) IS NOT NULL as "is_replaced!" + ON CONFLICT (initiator_address, nonce) DO + UPDATE + SET + hash = $1, + signature = $4, + gas_limit = $5, + max_fee_per_gas = $6, + max_priority_fee_per_gas = $7, + gas_per_pubdata_limit = $8, + input = $9, + data = $10, + tx_format = $11, + contract_address = $12, + value = $13, + paymaster = $14, + paymaster_input = $15, + execution_info = JSONB_BUILD_OBJECT('gas_used', $16::BIGINT, 'storage_writes', $17::INT, 'contracts_used', $18::INT), + in_mempool = FALSE, + received_at = $19, + created_at = NOW(), + updated_at = NOW(), + error = NULL + WHERE + transactions.is_priority = FALSE + AND transactions.miniblock_number IS NULL + RETURNING + ( + SELECT + hash + FROM + transactions + WHERE + transactions.initiator_address = $2 + AND transactions.nonce = $3 + ) IS NOT NULL AS "is_replaced!" 
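+ -- The subquery above evaluates against the pre-UPDATE state of the table:
+ -- NULL means no tx with this (initiator_address, nonce) pair existed before,
+ -- non-NULL means an existing tx was replaced (when the WHERE conditions held).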
"#, tx_hash.as_bytes(), initiator_address.as_bytes(), @@ -355,7 +408,7 @@ impl TransactionsDal<'_, '_> { // another tx with the same tx hash is supposed to have the same data // In this case we identify it as Duplicate // Note, this error can happen because of the race condition (tx can be taken by several - // api servers, that simultaneously start execute it and try to inserted to DB) + // API servers, that simultaneously start execute it and try to inserted to DB) if let error::Error::Database(ref error) = err { if let Some(constraint) = error.constraint() { if constraint == "transactions_pkey" { @@ -388,19 +441,21 @@ impl TransactionsDal<'_, '_> { let hashes: Vec<_> = transactions.iter().map(|tx| tx.hash.as_bytes()).collect(); let l1_batch_tx_indexes: Vec<_> = (0..transactions.len() as i32).collect(); sqlx::query!( - " - UPDATE transactions - SET - l1_batch_number = $3, - l1_batch_tx_index = data_table.l1_batch_tx_index, - updated_at = now() - FROM - (SELECT - UNNEST($1::int[]) AS l1_batch_tx_index, - UNNEST($2::bytea[]) AS hash - ) AS data_table - WHERE transactions.hash=data_table.hash - ", + r#" + UPDATE transactions + SET + l1_batch_number = $3, + l1_batch_tx_index = data_table.l1_batch_tx_index, + updated_at = NOW() + FROM + ( + SELECT + UNNEST($1::INT[]) AS l1_batch_tx_index, + UNNEST($2::bytea[]) AS hash + ) AS data_table + WHERE + transactions.hash = data_table.hash + "#, &l1_batch_tx_indexes, &hashes as &[&[u8]], block_number.0 as i64 @@ -542,7 +597,7 @@ impl TransactionsDal<'_, '_> { }); if !l2_hashes.is_empty() { - // Update l2 txs + // Update L2 txs // Due to the current tx replacement model, it's possible that tx has been replaced, // but the original was executed in memory, @@ -550,60 +605,65 @@ impl TransactionsDal<'_, '_> { // Note, that transactions are updated in order of their hashes to avoid deadlocks with other UPDATE queries. 
sqlx::query!( r#" - UPDATE transactions - SET - hash = data_table.hash, - signature = data_table.signature, - gas_limit = data_table.gas_limit, - max_fee_per_gas = data_table.max_fee_per_gas, - max_priority_fee_per_gas = data_table.max_priority_fee_per_gas, - gas_per_pubdata_limit = data_table.gas_per_pubdata_limit, - input = data_table.input, - data = data_table.data, - tx_format = data_table.tx_format, - miniblock_number = $21, - index_in_block = data_table.index_in_block, - error = NULLIF(data_table.error, ''), - effective_gas_price = data_table.effective_gas_price, - execution_info = data_table.new_execution_info, - refunded_gas = data_table.refunded_gas, - value = data_table.value, - contract_address = data_table.contract_address, - paymaster = data_table.paymaster, - paymaster_input = data_table.paymaster_input, - in_mempool = FALSE, - updated_at = now() - FROM - ( - SELECT data_table_temp.* FROM ( + UPDATE transactions + SET + hash = data_table.hash, + signature = data_table.signature, + gas_limit = data_table.gas_limit, + max_fee_per_gas = data_table.max_fee_per_gas, + max_priority_fee_per_gas = data_table.max_priority_fee_per_gas, + gas_per_pubdata_limit = data_table.gas_per_pubdata_limit, + input = data_table.input, + data = data_table.data, + tx_format = data_table.tx_format, + miniblock_number = $21, + index_in_block = data_table.index_in_block, + error = NULLIF(data_table.error, ''), + effective_gas_price = data_table.effective_gas_price, + execution_info = data_table.new_execution_info, + refunded_gas = data_table.refunded_gas, + value = data_table.value, + contract_address = data_table.contract_address, + paymaster = data_table.paymaster, + paymaster_input = data_table.paymaster_input, + in_mempool = FALSE, + updated_at = NOW() + FROM + ( + SELECT + data_table_temp.* + FROM + ( SELECT UNNEST($1::bytea[]) AS initiator_address, - UNNEST($2::int[]) AS nonce, + UNNEST($2::INT[]) AS nonce, UNNEST($3::bytea[]) AS hash, UNNEST($4::bytea[]) AS signature, - UNNEST($5::numeric[]) AS gas_limit, - UNNEST($6::numeric[]) AS max_fee_per_gas, - UNNEST($7::numeric[]) AS max_priority_fee_per_gas, - UNNEST($8::numeric[]) AS gas_per_pubdata_limit, - UNNEST($9::int[]) AS tx_format, - UNNEST($10::integer[]) AS index_in_block, - UNNEST($11::varchar[]) AS error, - UNNEST($12::numeric[]) AS effective_gas_price, + UNNEST($5::NUMERIC[]) AS gas_limit, + UNNEST($6::NUMERIC[]) AS max_fee_per_gas, + UNNEST($7::NUMERIC[]) AS max_priority_fee_per_gas, + UNNEST($8::NUMERIC[]) AS gas_per_pubdata_limit, + UNNEST($9::INT[]) AS tx_format, + UNNEST($10::INTEGER[]) AS index_in_block, + UNNEST($11::VARCHAR[]) AS error, + UNNEST($12::NUMERIC[]) AS effective_gas_price, UNNEST($13::jsonb[]) AS new_execution_info, UNNEST($14::bytea[]) AS input, UNNEST($15::jsonb[]) AS data, - UNNEST($16::bigint[]) as refunded_gas, - UNNEST($17::numeric[]) as value, - UNNEST($18::bytea[]) as contract_address, - UNNEST($19::bytea[]) as paymaster, - UNNEST($20::bytea[]) as paymaster_input + UNNEST($16::BIGINT[]) AS refunded_gas, + UNNEST($17::NUMERIC[]) AS value, + UNNEST($18::bytea[]) AS contract_address, + UNNEST($19::bytea[]) AS paymaster, + UNNEST($20::bytea[]) AS paymaster_input ) AS data_table_temp JOIN transactions ON transactions.initiator_address = data_table_temp.initiator_address - AND transactions.nonce = data_table_temp.nonce - ORDER BY transactions.hash - ) AS data_table - WHERE transactions.initiator_address=data_table.initiator_address - AND transactions.nonce=data_table.nonce + AND transactions.nonce = 
data_table_temp.nonce + ORDER BY + transactions.hash + ) AS data_table + WHERE + transactions.initiator_address = data_table.initiator_address + AND transactions.nonce = data_table.nonce "#, &l2_initiators, &l2_nonces, @@ -636,27 +696,28 @@ impl TransactionsDal<'_, '_> { if !l1_hashes.is_empty() { sqlx::query!( r#" - UPDATE transactions - SET - miniblock_number = $1, - index_in_block = data_table.index_in_block, - error = NULLIF(data_table.error, ''), - in_mempool=FALSE, - execution_info = execution_info || data_table.new_execution_info, - refunded_gas = data_table.refunded_gas, - effective_gas_price = data_table.effective_gas_price, - updated_at = now() - FROM - ( - SELECT - UNNEST($2::bytea[]) AS hash, - UNNEST($3::integer[]) AS index_in_block, - UNNEST($4::varchar[]) AS error, - UNNEST($5::jsonb[]) AS new_execution_info, - UNNEST($6::bigint[]) as refunded_gas, - UNNEST($7::numeric[]) as effective_gas_price - ) AS data_table - WHERE transactions.hash = data_table.hash + UPDATE transactions + SET + miniblock_number = $1, + index_in_block = data_table.index_in_block, + error = NULLIF(data_table.error, ''), + in_mempool = FALSE, + execution_info = execution_info || data_table.new_execution_info, + refunded_gas = data_table.refunded_gas, + effective_gas_price = data_table.effective_gas_price, + updated_at = NOW() + FROM + ( + SELECT + UNNEST($2::bytea[]) AS hash, + UNNEST($3::INTEGER[]) AS index_in_block, + UNNEST($4::VARCHAR[]) AS error, + UNNEST($5::jsonb[]) AS new_execution_info, + UNNEST($6::BIGINT[]) AS refunded_gas, + UNNEST($7::NUMERIC[]) AS effective_gas_price + ) AS data_table + WHERE + transactions.hash = data_table.hash "#, miniblock_number.0 as i32, &l1_hashes, @@ -674,27 +735,28 @@ impl TransactionsDal<'_, '_> { if !upgrade_hashes.is_empty() { sqlx::query!( r#" - UPDATE transactions - SET - miniblock_number = $1, - index_in_block = data_table.index_in_block, - error = NULLIF(data_table.error, ''), - in_mempool=FALSE, - execution_info = execution_info || data_table.new_execution_info, - refunded_gas = data_table.refunded_gas, - effective_gas_price = data_table.effective_gas_price, - updated_at = now() - FROM - ( - SELECT - UNNEST($2::bytea[]) AS hash, - UNNEST($3::integer[]) AS index_in_block, - UNNEST($4::varchar[]) AS error, - UNNEST($5::jsonb[]) AS new_execution_info, - UNNEST($6::bigint[]) as refunded_gas, - UNNEST($7::numeric[]) as effective_gas_price - ) AS data_table - WHERE transactions.hash = data_table.hash + UPDATE transactions + SET + miniblock_number = $1, + index_in_block = data_table.index_in_block, + error = NULLIF(data_table.error, ''), + in_mempool = FALSE, + execution_info = execution_info || data_table.new_execution_info, + refunded_gas = data_table.refunded_gas, + effective_gas_price = data_table.effective_gas_price, + updated_at = NOW() + FROM + ( + SELECT + UNNEST($2::bytea[]) AS hash, + UNNEST($3::INTEGER[]) AS index_in_block, + UNNEST($4::VARCHAR[]) AS error, + UNNEST($5::jsonb[]) AS new_execution_info, + UNNEST($6::BIGINT[]) AS refunded_gas, + UNNEST($7::NUMERIC[]) AS effective_gas_price + ) AS data_table + WHERE + transactions.hash = data_table.hash "#, miniblock_number.0 as i32, &upgrade_hashes, @@ -712,11 +774,14 @@ impl TransactionsDal<'_, '_> { if !bytea_call_traces.is_empty() { sqlx::query!( r#" - INSERT INTO call_traces (tx_hash, call_trace) - SELECT u.tx_hash, u.call_trace - FROM UNNEST($1::bytea[], $2::bytea[]) - AS u(tx_hash, call_trace) - "#, + INSERT INTO + call_traces (tx_hash, call_trace) + SELECT + u.tx_hash, + u.call_trace + FROM + 
UNNEST($1::bytea[], $2::bytea[]) AS u (tx_hash, call_trace) + "#, &call_traces_tx_hashes, &bytea_call_traces ) @@ -736,9 +801,14 @@ impl TransactionsDal<'_, '_> { // and we will update nothing. // These txs don't affect the state, so we can just easily skip this update. sqlx::query!( - "UPDATE transactions - SET error = $1, updated_at = now() - WHERE hash = $2", + r#" + UPDATE transactions + SET + error = $1, + updated_at = NOW() + WHERE + hash = $2 + "#, error, transaction_hash.0.to_vec() ) @@ -751,19 +821,30 @@ impl TransactionsDal<'_, '_> { pub async fn reset_transactions_state(&mut self, miniblock_number: MiniblockNumber) { { let tx_hashes = sqlx::query!( - "UPDATE transactions - SET l1_batch_number = NULL, miniblock_number = NULL, error = NULL, index_in_block = NULL, execution_info = '{}' - WHERE miniblock_number > $1 - RETURNING hash - ", + r#" + UPDATE transactions + SET + l1_batch_number = NULL, + miniblock_number = NULL, + error = NULL, + index_in_block = NULL, + execution_info = '{}' + WHERE + miniblock_number > $1 + RETURNING + hash + "#, miniblock_number.0 as i64 ) .fetch_all(self.storage.conn()) .await .unwrap(); sqlx::query!( - "DELETE FROM call_traces - WHERE tx_hash = ANY($1)", + r#" + DELETE FROM call_traces + WHERE + tx_hash = ANY ($1) + "#, &tx_hashes .iter() .map(|tx| tx.hash.clone()) @@ -779,10 +860,16 @@ impl TransactionsDal<'_, '_> { { let stuck_tx_timeout = pg_interval_from_duration(stuck_tx_timeout); sqlx::query!( - "DELETE FROM transactions \ - WHERE miniblock_number IS NULL AND received_at < now() - $1::interval \ - AND is_priority=false AND error IS NULL \ - RETURNING hash", + r#" + DELETE FROM transactions + WHERE + miniblock_number IS NULL + AND received_at < NOW() - $1::INTERVAL + AND is_priority = FALSE + AND error IS NULL + RETURNING + hash + "#, stuck_tx_timeout ) .fetch_all(self.storage.conn()) @@ -807,9 +894,16 @@ impl TransactionsDal<'_, '_> { let stashed_addresses: Vec<_> = stashed_accounts.into_iter().map(|a| a.0.to_vec()).collect(); sqlx::query!( - "UPDATE transactions SET in_mempool = FALSE \ - FROM UNNEST ($1::bytea[]) AS s(address) \ - WHERE transactions.in_mempool = TRUE AND transactions.initiator_address = s.address", + r#" + UPDATE transactions + SET + in_mempool = FALSE + FROM + UNNEST($1::bytea[]) AS s (address) + WHERE + transactions.in_mempool = TRUE + AND transactions.initiator_address = s.address + "#, &stashed_addresses, ) .execute(self.storage.conn()) @@ -819,8 +913,12 @@ impl TransactionsDal<'_, '_> { let purged_addresses: Vec<_> = purged_accounts.into_iter().map(|a| a.0.to_vec()).collect(); sqlx::query!( - "DELETE FROM transactions \ - WHERE in_mempool = TRUE AND initiator_address = ANY($1)", + r#" + DELETE FROM transactions + WHERE + in_mempool = TRUE + AND initiator_address = ANY ($1) + "#, &purged_addresses[..] ) .execute(self.storage.conn()) @@ -830,22 +928,47 @@ impl TransactionsDal<'_, '_> { // Note, that transactions are updated in order of their hashes to avoid deadlocks with other UPDATE queries. 
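 // The inner subquery below selects up to `limit` candidates (priority ops first,
 // then by fee eligibility and arrival time); the outer subquery re-sorts the chosen
 // hashes so that row locks are acquired in that canonical hash order.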
let transactions = sqlx::query_as!( StorageTransaction, - "UPDATE transactions - SET in_mempool = TRUE - FROM ( - SELECT hash FROM ( - SELECT hash - FROM transactions - WHERE miniblock_number IS NULL AND in_mempool = FALSE AND error IS NULL - AND (is_priority = TRUE OR (max_fee_per_gas >= $2 and gas_per_pubdata_limit >= $3)) - AND tx_format != $4 - ORDER BY is_priority DESC, priority_op_id, received_at - LIMIT $1 - ) as subquery1 - ORDER BY hash - ) as subquery2 - WHERE transactions.hash = subquery2.hash - RETURNING transactions.*", + r#" + UPDATE transactions + SET + in_mempool = TRUE + FROM + ( + SELECT + hash + FROM + ( + SELECT + hash + FROM + transactions + WHERE + miniblock_number IS NULL + AND in_mempool = FALSE + AND error IS NULL + AND ( + is_priority = TRUE + OR ( + max_fee_per_gas >= $2 + AND gas_per_pubdata_limit >= $3 + ) + ) + AND tx_format != $4 + ORDER BY + is_priority DESC, + priority_op_id, + received_at + LIMIT + $1 + ) AS subquery1 + ORDER BY + hash + ) AS subquery2 + WHERE + transactions.hash = subquery2.hash + RETURNING + transactions.* + "#, limit as i32, BigDecimal::from(fee_per_gas), BigDecimal::from(gas_per_pubdata), @@ -866,7 +989,15 @@ impl TransactionsDal<'_, '_> { let storage_keys: Vec<_> = nonce_keys.keys().map(|key| key.0.to_vec()).collect(); let nonces: HashMap<_, _> = sqlx::query!( - r#"SELECT hashed_key, value as "value!" FROM storage WHERE hashed_key = ANY($1)"#, + r#" + SELECT + hashed_key, + value AS "value!" + FROM + storage + WHERE + hashed_key = ANY ($1) + "#, &storage_keys, ) .fetch_all(self.storage.conn()) @@ -890,20 +1021,36 @@ impl TransactionsDal<'_, '_> { pub async fn reset_mempool(&mut self) { { - sqlx::query!("UPDATE transactions SET in_mempool = FALSE WHERE in_mempool = TRUE") - .execute(self.storage.conn()) - .await - .unwrap(); + sqlx::query!( + r#" + UPDATE transactions + SET + in_mempool = FALSE + WHERE + in_mempool = TRUE + "# + ) + .execute(self.storage.conn()) + .await + .unwrap(); } } pub async fn get_last_processed_l1_block(&mut self) -> Option { { sqlx::query!( - "SELECT l1_block_number FROM transactions - WHERE priority_op_id IS NOT NULL - ORDER BY priority_op_id DESC - LIMIT 1" + r#" + SELECT + l1_block_number + FROM + transactions + WHERE + priority_op_id IS NOT NULL + ORDER BY + priority_op_id DESC + LIMIT + 1 + "# ) .fetch_optional(self.storage.conn()) .await @@ -915,7 +1062,14 @@ impl TransactionsDal<'_, '_> { pub async fn last_priority_id(&mut self) -> Option { { let op_id = sqlx::query!( - r#"SELECT MAX(priority_op_id) as "op_id" from transactions where is_priority = true"# + r#" + SELECT + MAX(priority_op_id) AS "op_id" + FROM + transactions + WHERE + is_priority = TRUE + "# ) .fetch_optional(self.storage.conn()) .await @@ -928,21 +1082,34 @@ impl TransactionsDal<'_, '_> { pub async fn next_priority_id(&mut self) -> PriorityOpId { { sqlx::query!( - r#"SELECT MAX(priority_op_id) as "op_id" from transactions where is_priority = true AND miniblock_number IS NOT NULL"# + r#" + SELECT + MAX(priority_op_id) AS "op_id" + FROM + transactions + WHERE + is_priority = TRUE + AND miniblock_number IS NOT NULL + "# ) - .fetch_optional(self.storage.conn()) - .await - .unwrap() - .and_then(|row| row.op_id) - .map(|value| PriorityOpId((value + 1) as u64)) - .unwrap_or_default() + .fetch_optional(self.storage.conn()) + .await + .unwrap() + .and_then(|row| row.op_id) + .map(|value| PriorityOpId((value + 1) as u64)) + .unwrap_or_default() } } pub async fn insert_trace(&mut self, hash: H256, trace: VmExecutionTrace) { { sqlx::query!( - 
"INSERT INTO transaction_traces (tx_hash, trace, created_at, updated_at) VALUES ($1, $2, now(), now())", + r#" + INSERT INTO + transaction_traces (tx_hash, trace, created_at, updated_at) + VALUES + ($1, $2, NOW(), NOW()) + "#, hash.as_bytes(), serde_json::to_value(trace).unwrap() ) @@ -955,7 +1122,14 @@ impl TransactionsDal<'_, '_> { pub async fn get_trace(&mut self, hash: H256) -> Option { { let trace = sqlx::query!( - "SELECT trace FROM transaction_traces WHERE tx_hash = $1", + r#" + SELECT + trace + FROM + transaction_traces + WHERE + tx_hash = $1 + "#, hash.as_bytes() ) .fetch_optional(self.storage.conn()) @@ -969,7 +1143,7 @@ impl TransactionsDal<'_, '_> { } } - /// Returns miniblocks with their transactions that state_keeper needs to reexecute on restart. + /// Returns miniblocks with their transactions that state_keeper needs to re-execute on restart. /// These are the transactions that are included to some miniblock, /// but not included to L1 batch. The order of the transactions is the same as it was /// during the previous execution. @@ -978,9 +1152,18 @@ impl TransactionsDal<'_, '_> { ) -> anyhow::Result> { let transactions = sqlx::query_as!( StorageTransaction, - "SELECT * FROM transactions \ - WHERE miniblock_number IS NOT NULL AND l1_batch_number IS NULL \ - ORDER BY miniblock_number, index_in_block", + r#" + SELECT + * + FROM + transactions + WHERE + miniblock_number IS NOT NULL + AND l1_batch_number IS NULL + ORDER BY + miniblock_number, + index_in_block + "#, ) .fetch_all(self.storage.conn()) .await?; @@ -997,9 +1180,17 @@ impl TransactionsDal<'_, '_> { ) -> anyhow::Result> { let transactions = sqlx::query_as!( StorageTransaction, - "SELECT * FROM transactions \ - WHERE l1_batch_number = $1 \ - ORDER BY miniblock_number, index_in_block", + r#" + SELECT + * + FROM + transactions + WHERE + l1_batch_number = $1 + ORDER BY + miniblock_number, + index_in_block + "#, l1_batch_number.0 as i64, ) .fetch_all(self.storage.conn()) @@ -1035,7 +1226,17 @@ impl TransactionsDal<'_, '_> { .context("No last transaction found for miniblock")? .0; let miniblock_data = sqlx::query!( - "SELECT timestamp, virtual_blocks FROM miniblocks WHERE number BETWEEN $1 AND $2 ORDER BY number", + r#" + SELECT + timestamp, + virtual_blocks + FROM + miniblocks + WHERE + number BETWEEN $1 AND $2 + ORDER BY + number + "#, from_miniblock.0 as i64, to_miniblock.0 as i64, ) @@ -1043,9 +1244,16 @@ impl TransactionsDal<'_, '_> { .await?; let prev_hashes = sqlx::query!( - "SELECT hash FROM miniblocks \ - WHERE number BETWEEN $1 AND $2 \ - ORDER BY number", + r#" + SELECT + hash + FROM + miniblocks + WHERE + number BETWEEN $1 AND $2 + ORDER BY + number + "#, from_miniblock.0 as i64 - 1, to_miniblock.0 as i64 - 1, ) @@ -1083,28 +1291,41 @@ impl TransactionsDal<'_, '_> { { sqlx::query!( r#" - SELECT miniblock_number as "miniblock_number!", - hash, index_in_block as "index_in_block!", l1_batch_tx_index as "l1_batch_tx_index!" - FROM transactions - WHERE l1_batch_number = $1 - ORDER BY miniblock_number, index_in_block + SELECT + miniblock_number AS "miniblock_number!", + hash, + index_in_block AS "index_in_block!", + l1_batch_tx_index AS "l1_batch_tx_index!" 
+ FROM + transactions + WHERE + l1_batch_number = $1 + ORDER BY + miniblock_number, + index_in_block "#, l1_batch_number.0 as i64 ) - .fetch_all(self.storage.conn()) - .await - .unwrap() - .into_iter() - .group_by(|tx| tx.miniblock_number) - .into_iter() - .map(|(miniblock_number, rows)| { - ( - MiniblockNumber(miniblock_number as u32), - rows.map(|row| (H256::from_slice(&row.hash), row.index_in_block as u32, row.l1_batch_tx_index as u16)) - .collect::>(), - ) - }) - .collect() + .fetch_all(self.storage.conn()) + .await + .unwrap() + .into_iter() + .group_by(|tx| tx.miniblock_number) + .into_iter() + .map(|(miniblock_number, rows)| { + ( + MiniblockNumber(miniblock_number as u32), + rows.map(|row| { + ( + H256::from_slice(&row.hash), + row.index_in_block as u32, + row.l1_batch_tx_index as u16, + ) + }) + .collect::>(), + ) + }) + .collect() } } @@ -1113,8 +1334,12 @@ impl TransactionsDal<'_, '_> { sqlx::query_as!( CallTrace, r#" - SELECT * FROM call_traces - WHERE tx_hash = $1 + SELECT + * + FROM + call_traces + WHERE + tx_hash = $1 "#, tx_hash.as_bytes() ) @@ -1129,8 +1354,12 @@ impl TransactionsDal<'_, '_> { sqlx::query_as!( StorageTransaction, r#" - SELECT * FROM transactions - WHERE hash = $1 + SELECT + * + FROM + transactions + WHERE + hash = $1 "#, hash.as_bytes() ) diff --git a/core/lib/dal/src/transactions_web3_dal.rs b/core/lib/dal/src/transactions_web3_dal.rs index 5e2342d05b7..abc243dd1a1 100644 --- a/core/lib/dal/src/transactions_web3_dal.rs +++ b/core/lib/dal/src/transactions_web3_dal.rs @@ -1,20 +1,22 @@ use sqlx::types::chrono::NaiveDateTime; - use zksync_types::{ api, Address, L2ChainId, MiniblockNumber, Transaction, ACCOUNT_CODE_STORAGE_ADDRESS, FAILED_CONTRACT_DEPLOYMENT_BYTECODE_HASH, H160, H256, U256, U64, }; use zksync_utils::{bigdecimal_to_u256, h256_to_account_address}; -use crate::models::{ - storage_block::{bind_block_where_sql_params, web3_block_where_sql}, - storage_event::StorageWeb3Log, - storage_transaction::{ - extract_web3_transaction, web3_transaction_select_sql, StorageTransaction, - StorageTransactionDetails, +use crate::{ + instrument::InstrumentExt, + models::{ + storage_block::{bind_block_where_sql_params, web3_block_where_sql}, + storage_event::StorageWeb3Log, + storage_transaction::{ + extract_web3_transaction, web3_transaction_select_sql, StorageTransaction, + StorageTransactionDetails, + }, }, + SqlxError, StorageProcessor, }; -use crate::{instrument::InstrumentExt, SqlxError, StorageProcessor}; #[derive(Debug)] pub struct TransactionsWeb3Dal<'a, 'c> { @@ -29,34 +31,43 @@ impl TransactionsWeb3Dal<'_, '_> { { let receipt = sqlx::query!( r#" - WITH sl AS ( - SELECT * FROM storage_logs - WHERE storage_logs.address = $1 AND storage_logs.tx_hash = $2 - ORDER BY storage_logs.miniblock_number DESC, storage_logs.operation_number DESC - LIMIT 1 - ) + WITH + sl AS ( + SELECT + * + FROM + storage_logs + WHERE + storage_logs.address = $1 + AND storage_logs.tx_hash = $2 + ORDER BY + storage_logs.miniblock_number DESC, + storage_logs.operation_number DESC + LIMIT + 1 + ) SELECT - transactions.hash as tx_hash, - transactions.index_in_block as index_in_block, - transactions.l1_batch_tx_index as l1_batch_tx_index, - transactions.miniblock_number as block_number, - transactions.error as error, - transactions.effective_gas_price as effective_gas_price, - transactions.initiator_address as initiator_address, - transactions.data->'to' as "transfer_to?", - transactions.data->'contractAddress' as "execute_contract_address?", - transactions.tx_format as "tx_format?", - 
transactions.refunded_gas as refunded_gas, - transactions.gas_limit as gas_limit, - miniblocks.hash as "block_hash?", - miniblocks.l1_batch_number as "l1_batch_number?", - sl.key as "contract_address?" - FROM transactions - LEFT JOIN miniblocks - ON miniblocks.number = transactions.miniblock_number - LEFT JOIN sl - ON sl.value != $3 - WHERE transactions.hash = $2 + transactions.hash AS tx_hash, + transactions.index_in_block AS index_in_block, + transactions.l1_batch_tx_index AS l1_batch_tx_index, + transactions.miniblock_number AS "block_number!", + transactions.error AS error, + transactions.effective_gas_price AS effective_gas_price, + transactions.initiator_address AS initiator_address, + transactions.data -> 'to' AS "transfer_to?", + transactions.data -> 'contractAddress' AS "execute_contract_address?", + transactions.tx_format AS "tx_format?", + transactions.refunded_gas AS refunded_gas, + transactions.gas_limit AS gas_limit, + miniblocks.hash AS "block_hash", + miniblocks.l1_batch_number AS "l1_batch_number?", + sl.key AS "contract_address?" + FROM + transactions + JOIN miniblocks ON miniblocks.number = transactions.miniblock_number + LEFT JOIN sl ON sl.value != $3 + WHERE + transactions.hash = $2 "#, ACCOUNT_CODE_STORAGE_ADDRESS.as_bytes(), hash.as_bytes(), @@ -67,21 +78,17 @@ impl TransactionsWeb3Dal<'_, '_> { .fetch_optional(self.storage.conn()) .await? .map(|db_row| { - let status = match (db_row.block_number, db_row.error) { - (_, Some(_)) => Some(U64::from(0)), - (Some(_), None) => Some(U64::from(1)), - // tx not executed yet - _ => None, - }; + let status = db_row.error.map(|_| U64::zero()).unwrap_or_else(U64::one); + let tx_type = db_row.tx_format.map(U64::from).unwrap_or_default(); let transaction_index = db_row.index_in_block.map(U64::from).unwrap_or_default(); - let block_hash = db_row.block_hash.map(|bytes| H256::from_slice(&bytes)); + let block_hash = H256::from_slice(&db_row.block_hash); api::TransactionReceipt { transaction_hash: H256::from_slice(&db_row.tx_hash), transaction_index, block_hash, - block_number: db_row.block_number.map(U64::from), + block_number: db_row.block_number.into(), l1_batch_tx_index: db_row.l1_batch_tx_index.map(U64::from), l1_batch_number: db_row.l1_batch_number.map(U64::from), from: H160::from_slice(&db_row.initiator_address), @@ -117,7 +124,7 @@ impl TransactionsWeb3Dal<'_, '_> { root: block_hash, logs_bloom: Default::default(), // Even though the Rust SDK recommends us to supply "None" for legacy transactions - // we always supply some number anyway to have the same behaviour as most popular RPCs + // we always supply some number anyway to have the same behavior as most popular RPCs transaction_type: Some(tx_type), } }); @@ -127,13 +134,26 @@ impl TransactionsWeb3Dal<'_, '_> { StorageWeb3Log, r#" SELECT - address, topic1, topic2, topic3, topic4, value, - Null::bytea as "block_hash", Null::bigint as "l1_batch_number?", - miniblock_number, tx_hash, tx_index_in_block, - event_index_in_block, event_index_in_tx - FROM events - WHERE tx_hash = $1 - ORDER BY miniblock_number ASC, event_index_in_block ASC + address, + topic1, + topic2, + topic3, + topic4, + value, + NULL::bytea AS "block_hash", + NULL::BIGINT AS "l1_batch_number?", + miniblock_number, + tx_hash, + tx_index_in_block, + event_index_in_block, + event_index_in_tx + FROM + events + WHERE + tx_hash = $1 + ORDER BY + miniblock_number ASC, + event_index_in_block ASC "#, hash.as_bytes() ) @@ -144,7 +164,7 @@ impl TransactionsWeb3Dal<'_, '_> { .into_iter() .map(|storage_log| { let mut log = 
api::Log::from(storage_log); - log.block_hash = receipt.block_hash; + log.block_hash = Some(receipt.block_hash); log.l1_batch_number = receipt.l1_batch_number; log }) @@ -157,7 +177,7 @@ impl TransactionsWeb3Dal<'_, '_> { .into_iter() .map(|storage_l2_to_l1_log| { let mut l2_to_l1_log = api::L2ToL1Log::from(storage_l2_to_l1_log); - l2_to_l1_log.block_hash = receipt.block_hash; + l2_to_l1_log.block_hash = Some(receipt.block_hash); l2_to_l1_log.l1_batch_number = receipt.l1_batch_number; l2_to_l1_log }) @@ -221,25 +241,37 @@ impl TransactionsWeb3Dal<'_, '_> { let storage_tx_details: Option = sqlx::query_as!( StorageTransactionDetails, r#" - SELECT transactions.is_priority, - transactions.initiator_address, - transactions.gas_limit, - transactions.gas_per_pubdata_limit, - transactions.received_at, - transactions.miniblock_number, - transactions.error, - transactions.effective_gas_price, - transactions.refunded_gas, - commit_tx.tx_hash as "eth_commit_tx_hash?", - prove_tx.tx_hash as "eth_prove_tx_hash?", - execute_tx.tx_hash as "eth_execute_tx_hash?" - FROM transactions + SELECT + transactions.is_priority, + transactions.initiator_address, + transactions.gas_limit, + transactions.gas_per_pubdata_limit, + transactions.received_at, + transactions.miniblock_number, + transactions.error, + transactions.effective_gas_price, + transactions.refunded_gas, + commit_tx.tx_hash AS "eth_commit_tx_hash?", + prove_tx.tx_hash AS "eth_prove_tx_hash?", + execute_tx.tx_hash AS "eth_execute_tx_hash?" + FROM + transactions LEFT JOIN miniblocks ON miniblocks.number = transactions.miniblock_number LEFT JOIN l1_batches ON l1_batches.number = miniblocks.l1_batch_number - LEFT JOIN eth_txs_history as commit_tx ON (l1_batches.eth_commit_tx_id = commit_tx.eth_tx_id AND commit_tx.confirmed_at IS NOT NULL) - LEFT JOIN eth_txs_history as prove_tx ON (l1_batches.eth_prove_tx_id = prove_tx.eth_tx_id AND prove_tx.confirmed_at IS NOT NULL) - LEFT JOIN eth_txs_history as execute_tx ON (l1_batches.eth_execute_tx_id = execute_tx.eth_tx_id AND execute_tx.confirmed_at IS NOT NULL) - WHERE transactions.hash = $1 + LEFT JOIN eth_txs_history AS commit_tx ON ( + l1_batches.eth_commit_tx_id = commit_tx.eth_tx_id + AND commit_tx.confirmed_at IS NOT NULL + ) + LEFT JOIN eth_txs_history AS prove_tx ON ( + l1_batches.eth_prove_tx_id = prove_tx.eth_tx_id + AND prove_tx.confirmed_at IS NOT NULL + ) + LEFT JOIN eth_txs_history AS execute_tx ON ( + l1_batches.eth_execute_tx_id = execute_tx.eth_tx_id + AND execute_tx.confirmed_at IS NOT NULL + ) + WHERE + transactions.hash = $1 "#, hash.as_bytes() ) @@ -261,12 +293,20 @@ impl TransactionsWeb3Dal<'_, '_> { limit: Option, ) -> Result<(Vec, Option), SqlxError> { let records = sqlx::query!( - "SELECT transactions.hash, transactions.received_at \ - FROM transactions \ - LEFT JOIN miniblocks ON miniblocks.number = miniblock_number \ - WHERE received_at > $1 \ - ORDER BY received_at ASC \ - LIMIT $2", + r#" + SELECT + transactions.hash, + transactions.received_at + FROM + transactions + LEFT JOIN miniblocks ON miniblocks.number = miniblock_number + WHERE + received_at > $1 + ORDER BY + received_at ASC + LIMIT + $2 + "#, from_timestamp, limit.map(|limit| limit as i64) ) @@ -281,36 +321,36 @@ impl TransactionsWeb3Dal<'_, '_> { Ok((hashes, last_loc)) } + /// `committed_next_nonce` should equal the nonce for `initiator_address` in the storage. 
pub async fn next_nonce_by_initiator_account( &mut self, initiator_address: Address, + committed_next_nonce: u64, ) -> Result { - let latest_block_number = self - .storage - .blocks_web3_dal() - .resolve_block_id(api::BlockId::Number(api::BlockNumber::Latest)) - .await? - .expect("Failed to get `latest` nonce"); - let latest_nonce = self - .storage - .storage_web3_dal() - .get_address_historical_nonce(initiator_address, latest_block_number) - .await? - .as_u64(); - // Get nonces of non-rejected transactions, starting from the 'latest' nonce. // `latest` nonce is used, because it is guaranteed that there are no gaps before it. // `(miniblock_number IS NOT NULL OR error IS NULL)` is the condition that filters non-rejected transactions. // Query is fast because we have an index on (`initiator_address`, `nonce`) // and it cannot return more than `max_nonce_ahead` nonces. let non_rejected_nonces: Vec = sqlx::query!( - "SELECT nonce as \"nonce!\" FROM transactions \ - WHERE initiator_address = $1 AND nonce >= $2 \ - AND is_priority = FALSE \ - AND (miniblock_number IS NOT NULL OR error IS NULL) \ - ORDER BY nonce", + r#" + SELECT + nonce AS "nonce!" + FROM + transactions + WHERE + initiator_address = $1 + AND nonce >= $2 + AND is_priority = FALSE + AND ( + miniblock_number IS NOT NULL + OR error IS NULL + ) + ORDER BY + nonce + "#, initiator_address.as_bytes(), - latest_nonce as i64 + committed_next_nonce as i64 ) .fetch_all(self.storage.conn()) .await? @@ -319,7 +359,7 @@ impl TransactionsWeb3Dal<'_, '_> { .collect(); // Find pending nonce as the first "gap" in nonces. - let mut pending_nonce = latest_nonce; + let mut pending_nonce = committed_next_nonce; for nonce in non_rejected_nonces { if pending_nonce == nonce { pending_nonce += 1; @@ -336,12 +376,19 @@ impl TransactionsWeb3Dal<'_, '_> { pub async fn get_raw_miniblock_transactions( &mut self, miniblock: MiniblockNumber, - ) -> Result, SqlxError> { + ) -> sqlx::Result> { let rows = sqlx::query_as!( StorageTransaction, - "SELECT * FROM transactions \ - WHERE miniblock_number = $1 \ - ORDER BY index_in_block", + r#" + SELECT + * + FROM + transactions + WHERE + miniblock_number = $1 + ORDER BY + index_in_block + "#, miniblock.0 as i64 ) .fetch_all(self.storage.conn()) @@ -353,8 +400,11 @@ impl TransactionsWeb3Dal<'_, '_> { #[cfg(test)] mod tests { + use std::collections::HashMap; + use zksync_types::{ - block::miniblock_hash, fee::TransactionExecutionMetrics, l2::L2Tx, ProtocolVersion, + block::MiniblockHasher, fee::TransactionExecutionMetrics, l2::L2Tx, Nonce, ProtocolVersion, + ProtocolVersionId, }; use super::*; @@ -399,15 +449,12 @@ mod tests { let tx_hash = tx.hash(); prepare_transaction(&mut conn, tx).await; + let block_hash = MiniblockHasher::new(MiniblockNumber(1), 0, H256::zero()) + .finalize(ProtocolVersionId::latest()); let block_ids = [ api::BlockId::Number(api::BlockNumber::Latest), api::BlockId::Number(api::BlockNumber::Number(1.into())), - api::BlockId::Hash(miniblock_hash( - MiniblockNumber(1), - 0, - H256::zero(), - H256::zero(), - )), + api::BlockId::Hash(block_hash), ]; let transaction_ids = block_ids .iter() @@ -480,4 +527,102 @@ mod tests { assert_eq!(raw_txs.len(), 1); assert_eq!(raw_txs[0].hash(), tx_hash); } + + #[tokio::test] + async fn getting_next_nonce_by_initiator_account() { + let connection_pool = ConnectionPool::test_pool().await; + let mut conn = connection_pool.access_storage().await.unwrap(); + conn.protocol_versions_dal() + .save_protocol_version_with_tx(ProtocolVersion::default()) + .await; + + let 
initiator = Address::repeat_byte(1); + let next_nonce = conn + .transactions_web3_dal() + .next_nonce_by_initiator_account(initiator, 0) + .await + .unwrap(); + assert_eq!(next_nonce, 0.into()); + + let mut tx_by_nonce = HashMap::new(); + for nonce in [0, 1, 4] { + let mut tx = mock_l2_transaction(); + // Changing transaction fields invalidates its signature, but it's OK for test purposes + tx.common_data.nonce = Nonce(nonce); + tx.common_data.initiator_address = initiator; + tx_by_nonce.insert(nonce, tx.clone()); + conn.transactions_dal() + .insert_transaction_l2(tx, TransactionExecutionMetrics::default()) + .await; + } + + let next_nonce = conn + .transactions_web3_dal() + .next_nonce_by_initiator_account(initiator, 0) + .await + .unwrap(); + assert_eq!(next_nonce, 2.into()); + + // Reject the transaction with nonce 1, so that it'd be not taken into account. + conn.transactions_dal() + .mark_tx_as_rejected(tx_by_nonce[&1].hash(), "oops") + .await; + let next_nonce = conn + .transactions_web3_dal() + .next_nonce_by_initiator_account(initiator, 0) + .await + .unwrap(); + assert_eq!(next_nonce, 1.into()); + + // Include transactions in a miniblock (including the rejected one), so that they are taken into account again. + let mut miniblock = create_miniblock_header(1); + miniblock.l2_tx_count = 2; + conn.blocks_dal() + .insert_miniblock(&miniblock) + .await + .unwrap(); + let executed_txs = [ + mock_execution_result(tx_by_nonce[&0].clone()), + mock_execution_result(tx_by_nonce[&1].clone()), + ]; + conn.transactions_dal() + .mark_txs_as_executed_in_miniblock(miniblock.number, &executed_txs, 1.into()) + .await; + + let next_nonce = conn + .transactions_web3_dal() + .next_nonce_by_initiator_account(initiator, 0) + .await + .unwrap(); + assert_eq!(next_nonce, 2.into()); + } + + #[tokio::test] + async fn getting_next_nonce_by_initiator_account_after_snapshot_recovery() { + // Emulate snapshot recovery: no transactions with past nonces are present in the storage + let connection_pool = ConnectionPool::test_pool().await; + let mut conn = connection_pool.access_storage().await.unwrap(); + let initiator = Address::repeat_byte(1); + let next_nonce = conn + .transactions_web3_dal() + .next_nonce_by_initiator_account(initiator, 1) + .await + .unwrap(); + assert_eq!(next_nonce, 1.into()); + + let mut tx = mock_l2_transaction(); + // Changing transaction fields invalidates its signature, but it's OK for test purposes + tx.common_data.nonce = Nonce(1); + tx.common_data.initiator_address = initiator; + conn.transactions_dal() + .insert_transaction_l2(tx, TransactionExecutionMetrics::default()) + .await; + + let next_nonce = conn + .transactions_web3_dal() + .next_nonce_by_initiator_account(initiator, 1) + .await + .unwrap(); + assert_eq!(next_nonce, 2.into()); + } } diff --git a/core/lib/dal/src/witness_generator_dal.rs b/core/lib/dal/src/witness_generator_dal.rs deleted file mode 100644 index ab73c525c76..00000000000 --- a/core/lib/dal/src/witness_generator_dal.rs +++ /dev/null @@ -1,930 +0,0 @@ -use itertools::Itertools; -use sqlx::Row; - -use std::{collections::HashMap, ops::Range, time::Duration}; - -use zksync_types::proofs::{ - AggregationRound, JobCountStatistics, WitnessGeneratorJobMetadata, WitnessJobInfo, -}; -use zksync_types::zkevm_test_harness::abstract_zksync_circuit::concrete_circuits::ZkSyncCircuit; -use zksync_types::zkevm_test_harness::abstract_zksync_circuit::concrete_circuits::ZkSyncProof; -use zksync_types::zkevm_test_harness::bellman::bn256::Bn256; -use 
zksync_types::zkevm_test_harness::bellman::plonk::better_better_cs::proof::Proof; -use zksync_types::zkevm_test_harness::witness::oracle::VmWitnessOracle; -use zksync_types::{L1BatchNumber, ProtocolVersionId}; - -use crate::{ - instrument::InstrumentExt, - metrics::MethodLatency, - models::storage_witness_job_info::StorageWitnessJobInfo, - time_utils::{duration_to_naive_time, pg_interval_from_duration}, - StorageProcessor, -}; - -#[derive(Debug)] -pub struct WitnessGeneratorDal<'a, 'c> { - pub(crate) storage: &'a mut StorageProcessor<'c>, -} - -impl WitnessGeneratorDal<'_, '_> { - pub async fn get_next_basic_circuit_witness_job( - &mut self, - processing_timeout: Duration, - max_attempts: u32, - last_l1_batch_to_process: u32, - protocol_versions: &[ProtocolVersionId], - ) -> Option { - let protocol_versions: Vec = protocol_versions.iter().map(|&id| id as i32).collect(); - let processing_timeout = pg_interval_from_duration(processing_timeout); - let result: Option = sqlx::query!( - " - UPDATE witness_inputs - SET status = 'in_progress', attempts = attempts + 1, - updated_at = now(), processing_started_at = now() - WHERE l1_batch_number = ( - SELECT l1_batch_number - FROM witness_inputs - WHERE l1_batch_number <= $3 - AND - ( status = 'queued' - OR (status = 'in_progress' AND processing_started_at < now() - $1::interval) - OR (status = 'failed' AND attempts < $2) - ) - AND protocol_version = ANY($4) - ORDER BY l1_batch_number ASC - LIMIT 1 - FOR UPDATE - SKIP LOCKED - ) - RETURNING witness_inputs.* - ", - &processing_timeout, - max_attempts as i32, - last_l1_batch_to_process as i64, - &protocol_versions[..], - ) - .fetch_optional(self.storage.conn()) - .await - .unwrap() - .map(|row| WitnessGeneratorJobMetadata { - block_number: L1BatchNumber(row.l1_batch_number as u32), - proofs: vec![], - }); - result - } - - pub async fn get_witness_generated_l1_batches( - &mut self, - ) -> Vec<(L1BatchNumber, AggregationRound)> { - let mut generated_batches = Vec::with_capacity(4); - for round in [ - "node_aggregation_witness_jobs", - "leaf_aggregation_witness_jobs", - "scheduler_witness_jobs", - "witness_inputs", - ] { - let record = sqlx::query(&format!( - "SELECT MAX(l1_batch_number) as l1_batch FROM {} WHERE status='successful'", - round - )) - .fetch_one(self.storage.conn()) - .await - .unwrap(); - let generated_batch = ( - L1BatchNumber( - record - .get::, &str>("l1_batch") - .unwrap_or_default() as u32, - ), - match round { - "node_aggregation_witness_jobs" => AggregationRound::NodeAggregation, - "leaf_aggregation_witness_jobs" => AggregationRound::LeafAggregation, - "scheduler_witness_jobs" => AggregationRound::Scheduler, - "witness_inputs" => AggregationRound::BasicCircuits, - _ => unreachable!(), - }, - ); - generated_batches.push(generated_batch); - } - generated_batches - } - - pub async fn get_next_leaf_aggregation_witness_job( - &mut self, - processing_timeout: Duration, - max_attempts: u32, - last_l1_batch_to_process: u32, - protocol_versions: &[ProtocolVersionId], - ) -> Option { - let protocol_versions: Vec = protocol_versions.iter().map(|&id| id as i32).collect(); - let processing_timeout = pg_interval_from_duration(processing_timeout); - let record = sqlx::query!( - " - UPDATE leaf_aggregation_witness_jobs - SET status = 'in_progress', attempts = attempts + 1, - updated_at = now(), processing_started_at = now() - WHERE l1_batch_number = ( - SELECT l1_batch_number - FROM leaf_aggregation_witness_jobs - WHERE l1_batch_number <= $3 - AND - ( status = 'queued' - OR (status = 'in_progress' 
AND processing_started_at < now() - $1::interval) - OR (status = 'failed' AND attempts < $2) - ) - AND protocol_version = ANY($4) - ORDER BY l1_batch_number ASC - LIMIT 1 - FOR UPDATE - SKIP LOCKED - ) - RETURNING leaf_aggregation_witness_jobs.* - ", - &processing_timeout, - max_attempts as i32, - last_l1_batch_to_process as i64, - &protocol_versions[..], - ) - .fetch_optional(self.storage.conn()) - .await - .unwrap(); - if let Some(row) = record { - let l1_batch_number = L1BatchNumber(row.l1_batch_number as u32); - let number_of_basic_circuits = row.number_of_basic_circuits; - - // Now that we have a job in `queued` status, we need to enrich it with the computed proofs. - // We select `aggregation_round = 0` to only get basic circuits. - // Note that at this point there cannot be any other circuits anyway, - // but we keep the check for explicitness - let basic_circuits_proofs: Vec< - Proof>>, - > = self - .load_proofs_for_block(l1_batch_number, AggregationRound::BasicCircuits) - .await; - - assert_eq!( - basic_circuits_proofs.len(), - number_of_basic_circuits as usize, - "leaf_aggregation_witness_job for l1 batch {} is in status `queued`, but there are only {} computed basic proofs, which is different from expected {}", - l1_batch_number, - basic_circuits_proofs.len(), - number_of_basic_circuits - ); - Some(WitnessGeneratorJobMetadata { - block_number: l1_batch_number, - proofs: basic_circuits_proofs, - }) - } else { - None - } - } - - pub async fn get_next_node_aggregation_witness_job( - &mut self, - processing_timeout: Duration, - max_attempts: u32, - last_l1_batch_to_process: u32, - protocol_versions: &[ProtocolVersionId], - ) -> Option { - let protocol_versions: Vec = protocol_versions.iter().map(|&id| id as i32).collect(); - { - let processing_timeout = pg_interval_from_duration(processing_timeout); - let record = sqlx::query!( - " - UPDATE node_aggregation_witness_jobs - SET status = 'in_progress', attempts = attempts + 1, - updated_at = now(), processing_started_at = now() - WHERE l1_batch_number = ( - SELECT l1_batch_number - FROM node_aggregation_witness_jobs - WHERE l1_batch_number <= $3 - AND - ( status = 'queued' - OR (status = 'in_progress' AND processing_started_at < now() - $1::interval) - OR (status = 'failed' AND attempts < $2) - ) - AND protocol_version = ANY($4) - ORDER BY l1_batch_number ASC - LIMIT 1 - FOR UPDATE - SKIP LOCKED - ) - RETURNING node_aggregation_witness_jobs.* - ", - &processing_timeout, - max_attempts as i32, - last_l1_batch_to_process as i64, - &protocol_versions[..], - ) - .fetch_optional(self.storage.conn()) - .await - .unwrap(); - if let Some(row) = record { - let l1_batch_number = L1BatchNumber(row.l1_batch_number as u32); - let number_of_leaf_circuits = row.number_of_leaf_circuits.expect("number_of_leaf_circuits is not found in a `queued` `node_aggregation_witness_jobs` job"); - - // Now that we have a job in `queued` status, we need to enrich it with the computed proofs. 
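All of the deleted `get_next_*_witness_job` methods above and below share one job-claiming idiom: an `UPDATE` whose target row is chosen by a subquery ending in `FOR UPDATE SKIP LOCKED`, so concurrent workers skip rows already locked by another transaction instead of blocking on them, and no two provers ever claim the same batch. A minimal, self-contained sketch of that idiom (the table and columns here are illustrative, not the real schema):

```rust
// Sketch of the claim-next-job pattern; table and column names are made up.
use sqlx::PgPool;

/// Atomically claims the oldest `queued` job and marks it `in_progress`.
/// `FOR UPDATE SKIP LOCKED` makes concurrent workers skip rows that are
/// already locked by another transaction rather than wait on them.
async fn claim_next_job(pool: &PgPool) -> sqlx::Result<Option<i64>> {
    let claimed: Option<(i64,)> = sqlx::query_as(
        "UPDATE jobs \
         SET status = 'in_progress', processing_started_at = now() \
         WHERE id = ( \
             SELECT id FROM jobs \
             WHERE status = 'queued' \
             ORDER BY id \
             LIMIT 1 \
             FOR UPDATE SKIP LOCKED \
         ) \
         RETURNING id",
    )
    .fetch_optional(pool)
    .await?;
    Ok(claimed.map(|(id,)| id))
}
```

The real queries additionally re-queue stuck work by also matching rows whose `processing_started_at` exceeded a timeout or whose `attempts` stayed under the retry limit, as the SQL in these hunks shows.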
- // We select `aggregation_round = 1` to only get leaf aggregation circuits - let leaf_circuits_proofs: Vec< - Proof<Bn256, ZkSyncCircuit<Bn256, VmWitnessOracle<Bn256>>>, - > = self - .load_proofs_for_block(l1_batch_number, AggregationRound::LeafAggregation) - .await; - - assert_eq!( - leaf_circuits_proofs.len(), - number_of_leaf_circuits as usize, - "node_aggregation_witness_job for l1 batch {} is in status `queued`, but there are only {} computed leaf proofs, which is different from expected {}", - l1_batch_number, - leaf_circuits_proofs.len(), - number_of_leaf_circuits - ); - Some(WitnessGeneratorJobMetadata { - block_number: l1_batch_number, - proofs: leaf_circuits_proofs, - }) - } else { - None - } - } - } - - pub async fn get_next_scheduler_witness_job( - &mut self, - processing_timeout: Duration, - max_attempts: u32, - last_l1_batch_to_process: u32, - protocol_versions: &[ProtocolVersionId], - ) -> Option<WitnessGeneratorJobMetadata> { - let protocol_versions: Vec<i32> = protocol_versions.iter().map(|&id| id as i32).collect(); - { - let processing_timeout = pg_interval_from_duration(processing_timeout); - let record = sqlx::query!( - " - UPDATE scheduler_witness_jobs - SET status = 'in_progress', attempts = attempts + 1, - updated_at = now(), processing_started_at = now() - WHERE l1_batch_number = ( - SELECT l1_batch_number - FROM scheduler_witness_jobs - WHERE l1_batch_number <= $3 - AND - ( status = 'queued' - OR (status = 'in_progress' AND processing_started_at < now() - $1::interval) - OR (status = 'failed' AND attempts < $2) - ) - AND protocol_version = ANY($4) - ORDER BY l1_batch_number ASC - LIMIT 1 - FOR UPDATE - SKIP LOCKED - ) - RETURNING scheduler_witness_jobs.* - ", - &processing_timeout, - max_attempts as i32, - last_l1_batch_to_process as i64, - &protocol_versions[..], - ) - .fetch_optional(self.storage.conn()) - .await - .unwrap(); - if let Some(row) = record { - let l1_batch_number = L1BatchNumber(row.l1_batch_number as u32); - // Now that we have a job in `queued` status, we need to enrich it with the computed proof. - // We select `aggregation_round = 2` to only get node aggregation circuits - let leaf_circuits_proofs: Vec< - Proof<Bn256, ZkSyncCircuit<Bn256, VmWitnessOracle<Bn256>>>, - > = self - .load_proofs_for_block(l1_batch_number, AggregationRound::NodeAggregation) - .await; - - assert_eq!( - leaf_circuits_proofs.len(), - 1usize, - "scheduler_job for l1 batch {} is in status `queued`, but there are {} computed node proofs. 
We expect exactly one node proof.", - l1_batch_number.0, - leaf_circuits_proofs.len() - ); - Some(WitnessGeneratorJobMetadata { - block_number: l1_batch_number, - proofs: leaf_circuits_proofs, - }) - } else { - None - } - } - } - - async fn load_proofs_for_block( - &mut self, - block_number: L1BatchNumber, - aggregation_round: AggregationRound, - ) -> Vec>>> { - { - sqlx::query!( - " - SELECT circuit_type, result from prover_jobs - WHERE l1_batch_number = $1 AND status = 'successful' AND aggregation_round = $2 - ORDER BY sequence_number ASC; - ", - block_number.0 as i64, - aggregation_round as i64 - ) - .fetch_all(self.storage.conn()) - .await - .unwrap() - .into_iter() - .map(|row| { - ZkSyncProof::into_proof(bincode::deserialize::>( - &row.result - .expect("prove_job with `successful` status has no result"), - ) - .expect("cannot deserialize proof")) - }) - .collect::>>>>() - } - } - - pub async fn mark_witness_job_as_successful( - &mut self, - block_number: L1BatchNumber, - aggregation_round: AggregationRound, - time_taken: Duration, - ) { - ({ - let table_name = Self::input_table_name_for(aggregation_round); - let sql = format!( - "UPDATE {} - SET status = 'successful', updated_at = now(), time_taken = $1 - WHERE l1_batch_number = $2", - table_name - ); - let mut query = sqlx::query(&sql); - query = query.bind(duration_to_naive_time(time_taken)); - query = query.bind(block_number.0 as i64); - - query.execute(self.storage.conn()).await.unwrap(); - }); - } - - /// Is invoked by the prover when all the required proofs are computed - pub async fn mark_witness_job_as_queued( - &mut self, - block_number: L1BatchNumber, - aggregation_round: AggregationRound, - ) { - ({ - let table_name = Self::input_table_name_for(aggregation_round); - let sql = format!( - "UPDATE {} - SET status = 'queued', updated_at = now() - WHERE l1_batch_number = $1", - table_name - ); - let mut query = sqlx::query(&sql); - query = query.bind(block_number.0 as i64); - - query.execute(self.storage.conn()).await.unwrap(); - }); - } - - pub async fn mark_witness_job_as_skipped( - &mut self, - block_number: L1BatchNumber, - aggregation_round: AggregationRound, - ) { - ({ - let table_name = Self::input_table_name_for(aggregation_round); - let sql = format!( - "UPDATE {} - SET status = 'skipped', updated_at = now() - WHERE l1_batch_number = $1", - table_name - ); - let mut query = sqlx::query(&sql); - query = query.bind(block_number.0 as i64); - - query.execute(self.storage.conn()).await.unwrap(); - }); - } - - /// Is invoked by the Witness Generator when the previous aggregation round is complete - pub async fn mark_witness_job_as_waiting_for_proofs( - &mut self, - block_number: L1BatchNumber, - aggregation_round: AggregationRound, - ) { - let table_name = Self::input_table_name_for(aggregation_round); - let sql = format!( - "UPDATE {} - SET status = 'waiting_for_proofs', updated_at = now() - WHERE l1_batch_number = $1", - table_name - ); - let mut query = sqlx::query(&sql); - query = query.bind(block_number.0 as i64); - - query.execute(self.storage.conn()).await.unwrap(); - } - - pub async fn mark_witness_job_as_failed( - &mut self, - aggregation_round: AggregationRound, - l1_batch_number: L1BatchNumber, - time_taken: Duration, - error: String, - ) -> u32 { - let table_name = Self::input_table_name_for(aggregation_round); - let sql = format!( - "UPDATE {} - SET status = 'failed', updated_at = now(), time_taken = $1, error = $2 - WHERE l1_batch_number = $3 - RETURNING attempts - ", - table_name - ); - let mut query = 
sqlx::query(&sql); - query = query.bind(duration_to_naive_time(time_taken)); - query = query.bind(error); - query = query.bind(l1_batch_number.0 as i64); - // returns the number of attempts of the job - query - .fetch_one(self.storage.conn()) - .await - .unwrap() - .get::<i64, &str>("attempts") as u32 - } - - /// Creates a leaf_aggregation_job in `waiting_for_proofs` status, - /// and also a node_aggregation_job and scheduler_job in `waiting_for_artifacts` status. - /// The jobs will be advanced to `waiting_for_proofs` by the `Witness Generator` when the corresponding artifacts are computed, - /// and to `queued` by the `Prover` when all the dependency proofs are computed - pub async fn create_aggregation_jobs( - &mut self, - block_number: L1BatchNumber, - basic_circuits_blob_url: &str, - basic_circuits_inputs_blob_url: &str, - number_of_basic_circuits: usize, - scheduler_witness_blob_url: &str, - protocol_version: i32, - ) { - { - let latency = MethodLatency::new("create_aggregation_jobs"); - - sqlx::query!( - " - INSERT INTO leaf_aggregation_witness_jobs - (l1_batch_number, basic_circuits, basic_circuits_inputs, basic_circuits_blob_url, basic_circuits_inputs_blob_url, number_of_basic_circuits, protocol_version, status, created_at, updated_at) - VALUES ($1, $2, $3, $4, $5, $6, $7, 'waiting_for_proofs', now(), now()) - ", - block_number.0 as i64, - // TODO(SMA-1476): remove the below columns once blob is migrated to GCS. - vec![], - vec![], - basic_circuits_blob_url, - basic_circuits_inputs_blob_url, - number_of_basic_circuits as i64, - protocol_version, - ) - .execute(self.storage.conn()) - .await - .unwrap(); - - sqlx::query!( - " - INSERT INTO node_aggregation_witness_jobs - (l1_batch_number, protocol_version, status, created_at, updated_at) - VALUES ($1, $2, 'waiting_for_artifacts', now(), now()) - ", - block_number.0 as i64, - protocol_version, - ) - .execute(self.storage.conn()) - .await - .unwrap(); - - sqlx::query!( - " - INSERT INTO scheduler_witness_jobs - (l1_batch_number, scheduler_witness, scheduler_witness_blob_url, protocol_version, status, created_at, updated_at) - VALUES ($1, $2, $3, $4, 'waiting_for_artifacts', now(), now()) - ", - block_number.0 as i64, - // TODO(SMA-1476): remove the below column once blob is migrated to GCS. - vec![], - scheduler_witness_blob_url, - protocol_version, - ) - .execute(self.storage.conn()) - .await - .unwrap(); - - drop(latency); - } - } -
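The doc comments on `create_aggregation_jobs` above and on the `save_*_artifacts` and `mark_witness_job_as_*` methods below describe a small state machine that these deleted tables store as plain status strings. Purely as a reading aid (hypothetical Rust, not part of the codebase), the states and the transitions named here look roughly like this:

```rust
/// Rough model of the witness-job statuses used by the deleted DAL;
/// the real code keeps these as strings in Postgres columns.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum WitnessJobStatus {
    WaitingForArtifacts, // job created; input artifacts not yet computed
    WaitingForProofs,    // artifacts saved; dependency proofs still pending
    Queued,              // all dependency proofs computed; claimable
    InProgress,          // claimed by a worker via `FOR UPDATE SKIP LOCKED`
    Successful,
    Failed,
    Skipped,
}
```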
- /// Saves artifacts in node_aggregation_job - /// and advances it to `waiting_for_proofs` status; - /// it will be advanced to `queued` by the prover when all the dependency proofs are computed. - /// If the node aggregation job was already `queued`, in the case of a concurrent run of the same leaf aggregation job, - /// we keep the status as is to prevent a data race. - pub async fn save_leaf_aggregation_artifacts( - &mut self, - l1_batch_number: L1BatchNumber, - number_of_leaf_circuits: usize, - leaf_layer_subqueues_blob_url: &str, - aggregation_outputs_blob_url: &str, - ) { - { - sqlx::query!( - " - UPDATE node_aggregation_witness_jobs - SET number_of_leaf_circuits = $1, - leaf_layer_subqueues_blob_url = $3, - aggregation_outputs_blob_url = $4, - status = 'waiting_for_proofs', - updated_at = now() - WHERE l1_batch_number = $2 AND status != 'queued' - ", - number_of_leaf_circuits as i64, - l1_batch_number.0 as i64, - leaf_layer_subqueues_blob_url, - aggregation_outputs_blob_url, - ) - .instrument("save_leaf_aggregation_artifacts") - .report_latency() - .with_arg("l1_batch_number", &l1_batch_number) - .execute(self.storage.conn()) - .await - .unwrap(); - } - } - - /// Saves artifacts in `scheduler_witness_jobs` and advances it to `waiting_for_proofs` status. - /// It will be advanced to `queued` by the prover when all the dependency proofs are computed. - /// If the scheduler witness job was already queued, in the case of a concurrent run - /// of the same node aggregation job, we keep the status as is to prevent a data race. - pub async fn save_node_aggregation_artifacts( - &mut self, - block_number: L1BatchNumber, - node_aggregations_blob_url: &str, - ) { - { - sqlx::query!( - " - UPDATE scheduler_witness_jobs - SET final_node_aggregations_blob_url = $2, - status = 'waiting_for_proofs', - updated_at = now() - WHERE l1_batch_number = $1 AND status != 'queued' - ", - block_number.0 as i64, - node_aggregations_blob_url, - ) - .instrument("save_node_aggregation_artifacts") - .report_latency() - .execute(self.storage.conn()) - .await - .unwrap(); - } - } - - pub async fn save_final_aggregation_result( - &mut self, - block_number: L1BatchNumber, - aggregation_result_coords: [[u8; 32]; 4], - ) { - { - let aggregation_result_coords_serialized = - bincode::serialize(&aggregation_result_coords) - .expect("cannot serialize aggregation_result_coords"); - sqlx::query!( - " - UPDATE scheduler_witness_jobs - SET aggregation_result_coords = $1, - updated_at = now() - WHERE l1_batch_number = $2 - ", - aggregation_result_coords_serialized, - block_number.0 as i64, - ) - .execute(self.storage.conn()) - .await - .unwrap(); - } - } - - pub async fn get_witness_jobs_stats( - &mut self, - aggregation_round: AggregationRound, - ) -> JobCountStatistics { - { - let table_name = Self::input_table_name_for(aggregation_round); - let sql = format!( - r#" - SELECT COUNT(*) as "count", status as "status" - FROM {} - GROUP BY status - "#, - table_name - ); - let mut results: HashMap<String, i64> = sqlx::query(&sql) - .fetch_all(self.storage.conn()) - .await - .unwrap() - .into_iter() - .map(|row| (row.get("status"), row.get::<i64, &str>("count"))) - .collect::<HashMap<String, i64>>(); - - JobCountStatistics { - queued: results.remove("queued").unwrap_or(0i64) as usize, - in_progress: results.remove("in_progress").unwrap_or(0i64) as usize, - failed: results.remove("failed").unwrap_or(0i64) as usize, - successful: results.remove("successful").unwrap_or(0i64) as usize, - } - } - } - - fn input_table_name_for(aggregation_round: AggregationRound) -> &'static str { - match aggregation_round { - AggregationRound::BasicCircuits => "witness_inputs", - AggregationRound::LeafAggregation => "leaf_aggregation_witness_jobs", - AggregationRound::NodeAggregation => "node_aggregation_witness_jobs", - AggregationRound::Scheduler => "scheduler_witness_jobs", - } - } - - pub async fn get_jobs( - &mut self, - opts: GetWitnessJobsParams, - ) -> Result<Vec<WitnessJobInfo>, 
sqlx::Error> { - struct SqlSlice { - columns: String, - table_name: String, - } - - impl SqlSlice { - fn new(ar: u32, table_name: String) -> SqlSlice { - SqlSlice { - columns: format!( - "{} as aggregation_round, - l1_batch_number, - created_at, - updated_at, - status, - time_taken, - processing_started_at, - error, - attempts", - ar - ), - table_name, - } - } - - fn sql(&self, opts: &GetWitnessJobsParams) -> String { - let where_blocks = opts - .blocks - .as_ref() - .map(|b| format!("AND l1_batch_number BETWEEN {} AND {}", b.start, b.end)) - .unwrap_or_default(); - - format!( - "SELECT {} - FROM {} - WHERE 1 = 1 -- Where clause can't be empty - {where_blocks}", - self.columns, self.table_name - ) - } - } - - let slices = vec![ - SqlSlice::new(0, "witness_inputs".to_string()), - SqlSlice::new(1, "leaf_aggregation_witness_jobs".to_string()), - SqlSlice::new(2, "node_aggregation_witness_jobs".to_string()), - SqlSlice::new(3, "scheduler_witness_jobs".to_string()), - ]; - - let sql = slices.iter().map(move |x| x.sql(&opts)).join(" UNION "); - - let query = sqlx::query_as(&sql); - - Ok(query - .fetch_all(self.storage.conn()) - .await? - .into_iter() - .map(|x: StorageWitnessJobInfo| x.into()) - .collect()) - } - - pub async fn save_witness_inputs( - &mut self, - block_number: L1BatchNumber, - object_key: &str, - protocol_version: Option, - ) { - { - sqlx::query!( - "INSERT INTO witness_inputs(l1_batch_number, merkle_tree_paths, merkel_tree_paths_blob_url, status, protocol_version, created_at, updated_at) \ - VALUES ($1, $2, $3, 'waiting_for_artifacts', $4, now(), now()) \ - ON CONFLICT (l1_batch_number) DO NOTHING", - block_number.0 as i64, - // TODO(SMA-1476): remove the below column once blob is migrated to GCS. - vec![], - object_key, - protocol_version.map(|v| v as i32), - ) - .fetch_optional(self.storage.conn()) - .await - .unwrap(); - } - } - - pub async fn mark_witness_inputs_job_as_queued(&mut self, block_number: L1BatchNumber) { - sqlx::query!( - "UPDATE witness_inputs \ - SET status='queued' \ - WHERE l1_batch_number=$1 \ - AND status='waiting_for_artifacts'", - block_number.0 as i64, - ) - .execute(self.storage.conn()) - .await - .unwrap(); - } - - pub async fn get_leaf_layer_subqueues_and_aggregation_outputs_blob_urls_to_be_cleaned( - &mut self, - limit: u8, - ) -> Vec<(i64, (String, String))> { - { - let job_ids = sqlx::query!( - r#" - SELECT l1_batch_number, leaf_layer_subqueues_blob_url, aggregation_outputs_blob_url FROM node_aggregation_witness_jobs - WHERE status='successful' - AND leaf_layer_subqueues_blob_url is NOT NULL - AND aggregation_outputs_blob_url is NOT NULL - AND updated_at < NOW() - INTERVAL '30 days' - LIMIT $1; - "#, - limit as i32 - ) - .fetch_all(self.storage.conn()) - .await - .unwrap(); - job_ids - .into_iter() - .map(|row| { - ( - row.l1_batch_number, - ( - row.leaf_layer_subqueues_blob_url.unwrap(), - row.aggregation_outputs_blob_url.unwrap(), - ), - ) - }) - .collect() - } - } - - pub async fn get_scheduler_witness_and_node_aggregations_blob_urls_to_be_cleaned( - &mut self, - limit: u8, - ) -> Vec<(i64, (String, String))> { - { - let job_ids = sqlx::query!( - r#" - SELECT l1_batch_number, scheduler_witness_blob_url, final_node_aggregations_blob_url FROM scheduler_witness_jobs - WHERE status='successful' - AND updated_at < NOW() - INTERVAL '30 days' - AND scheduler_witness_blob_url is NOT NULL - AND final_node_aggregations_blob_url is NOT NULL - LIMIT $1; - "#, - limit as i32 - ) - .fetch_all(self.storage.conn()) - .await - .unwrap(); - job_ids - 
.into_iter() - .map(|row| { - ( - row.l1_batch_number, - ( - row.scheduler_witness_blob_url.unwrap(), - row.final_node_aggregations_blob_url.unwrap(), - ), - ) - }) - .collect() - } - } - - pub async fn move_leaf_aggregation_jobs_from_waiting_to_queued(&mut self) -> Vec { - { - sqlx::query!( - r#" - UPDATE leaf_aggregation_witness_jobs - SET status='queued' - WHERE l1_batch_number IN - (SELECT prover_jobs.l1_batch_number - FROM prover_jobs - JOIN leaf_aggregation_witness_jobs lawj ON prover_jobs.l1_batch_number = lawj.l1_batch_number - WHERE lawj.status = 'waiting_for_proofs' - AND prover_jobs.status = 'successful' - AND prover_jobs.aggregation_round = 0 - GROUP BY prover_jobs.l1_batch_number, lawj.number_of_basic_circuits - HAVING COUNT(*) = lawj.number_of_basic_circuits) - RETURNING l1_batch_number; - "#, - ) - .fetch_all(self.storage.conn()) - .await - .unwrap() - .into_iter() - .map(|row| row.l1_batch_number) - .collect() - } - } - - pub async fn move_node_aggregation_jobs_from_waiting_to_queued(&mut self) -> Vec { - { - sqlx::query!( - r#" - UPDATE node_aggregation_witness_jobs - SET status='queued' - WHERE l1_batch_number IN - (SELECT prover_jobs.l1_batch_number - FROM prover_jobs - JOIN node_aggregation_witness_jobs nawj ON prover_jobs.l1_batch_number = nawj.l1_batch_number - WHERE nawj.status = 'waiting_for_proofs' - AND prover_jobs.status = 'successful' - AND prover_jobs.aggregation_round = 1 - GROUP BY prover_jobs.l1_batch_number, nawj.number_of_leaf_circuits - HAVING COUNT(*) = nawj.number_of_leaf_circuits) - RETURNING l1_batch_number; - "#, - ) - .fetch_all(self.storage.conn()) - .await - .unwrap() - .into_iter() - .map(|row| row.l1_batch_number) - .collect() - } - } - - pub async fn move_scheduler_jobs_from_waiting_to_queued(&mut self) -> Vec { - { - // There is always just one final node circuit - // hence we do AND p.number_of_jobs = 1 - sqlx::query!( - r#" - UPDATE scheduler_witness_jobs - SET status='queued' - WHERE l1_batch_number IN - (SELECT prover_jobs.l1_batch_number - FROM prover_jobs - JOIN scheduler_witness_jobs swj ON prover_jobs.l1_batch_number = swj.l1_batch_number - WHERE swj.status = 'waiting_for_proofs' - AND prover_jobs.status = 'successful' - AND prover_jobs.aggregation_round = 2 - GROUP BY prover_jobs.l1_batch_number - HAVING COUNT(*) = 1) - RETURNING l1_batch_number; - "#, - ) - .fetch_all(self.storage.conn()) - .await - .unwrap() - .into_iter() - .map(|row| row.l1_batch_number) - .collect() - } - } - - pub async fn protocol_version_for_l1_batch( - &mut self, - l1_batch_number: L1BatchNumber, - ) -> Option { - sqlx::query!( - r#" - SELECT protocol_version - FROM witness_inputs - WHERE l1_batch_number = $1 - "#, - l1_batch_number.0 as i64, - ) - .fetch_one(self.storage.conn()) - .await - .unwrap() - .protocol_version - } -} - -pub struct GetWitnessJobsParams { - pub blocks: Option>, -} diff --git a/core/lib/env_config/src/alerts.rs b/core/lib/env_config/src/alerts.rs index c72b23bbd9f..63cbde48bdf 100644 --- a/core/lib/env_config/src/alerts.rs +++ b/core/lib/env_config/src/alerts.rs @@ -1,6 +1,7 @@ -use crate::{envy_load, FromEnv}; use zksync_config::configs::AlertsConfig; +use crate::{envy_load, FromEnv}; + impl FromEnv for AlertsConfig { fn from_env() -> anyhow::Result { envy_load("sporadic_crypto_errors_substrs", "ALERTS_") diff --git a/core/lib/env_config/src/api.rs b/core/lib/env_config/src/api.rs index 20ecfe41e21..5368122437f 100644 --- a/core/lib/env_config/src/api.rs +++ b/core/lib/env_config/src/api.rs @@ -1,6 +1,4 @@ use anyhow::Context as _; - 
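The `FromEnv` implementations touched in these env_config hunks all delegate to the crate's `envy_load` helper, which pairs a serde-deserializable config struct with an env-var prefix. Assuming only the public API of the `envy` crate (the struct name and prefix below are made up for illustration), the underlying mechanism looks roughly like this:

```rust
// Illustrative only: deserializing a config struct from prefixed env vars,
// the same idea the `envy_load` helper wraps for the configs in these hunks.
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct DemoAlertsConfig {
    // Populated from DEMO_ALERTS_SPORADIC_CRYPTO_ERRORS_SUBSTRS;
    // `envy` parses comma-separated values into a `Vec`.
    sporadic_crypto_errors_substrs: Vec<String>,
}

fn demo_from_env() -> Result<DemoAlertsConfig, envy::Error> {
    // `envy::prefixed` strips the prefix and matches the remaining,
    // lowercased variable names against the struct's field names.
    envy::prefixed("DEMO_ALERTS_").from_env()
}
```

This is also why the tests in these files only need to set strings like `API_WEB3_JSON_RPC_MAX_NONCE_AHEAD=5` in a guarded environment and then assert on the parsed struct.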
-use crate::{envy_load, FromEnv}; use zksync_config::configs::{ api::{ ContractVerificationApiConfig, HealthCheckConfig, MerkleTreeApiConfig, Web3JsonRpcConfig, @@ -8,6 +6,8 @@ use zksync_config::configs::{ ApiConfig, PrometheusConfig, }; +use crate::{envy_load, FromEnv}; + impl FromEnv for ApiConfig { fn from_env() -> anyhow::Result { Ok(Self { @@ -48,6 +48,8 @@ impl FromEnv for MerkleTreeApiConfig { #[cfg(test)] mod tests { + use std::num::NonZeroU32; + use super::*; use crate::test_utils::{hash, EnvMutex}; @@ -64,9 +66,7 @@ mod tests { filters_limit: Some(10000), subscriptions_limit: Some(10000), pubsub_polling_interval: Some(200), - threads_per_server: 128, max_nonce_ahead: 5, - transactions_per_sec_limit: Some(1000), request_timeout: Some(10), account_pks: Some(vec![ hash("0x0000000000000000000000000000000000000000000000000000000000000001"), @@ -75,24 +75,22 @@ mod tests { estimate_gas_scale_factor: 1.0f64, gas_price_scale_factor: 1.2, estimate_gas_acceptable_overestimation: 1000, + l1_to_l2_transactions_compatibility_mode: true, max_tx_size: 1000000, vm_execution_cache_misses_limit: None, vm_concurrency_limit: Some(512), factory_deps_cache_size_mb: Some(128), initial_writes_cache_size_mb: Some(32), latest_values_cache_size_mb: Some(256), - http_threads: Some(128), - ws_threads: Some(256), fee_history_limit: Some(100), max_batch_request_size: Some(200), max_response_body_size_mb: Some(10), - websocket_requests_per_minute_limit: Some(10), + websocket_requests_per_minute_limit: Some(NonZeroU32::new(10).unwrap()), tree_api_url: None, }, contract_verification: ContractVerificationApiConfig { port: 3070, url: "http://127.0.0.1:3070".into(), - threads_per_server: 128, }, prometheus: PrometheusConfig { listener_port: 3312, @@ -116,27 +114,23 @@ mod tests { API_WEB3_JSON_RPC_FILTERS_LIMIT=10000 API_WEB3_JSON_RPC_SUBSCRIPTIONS_LIMIT=10000 API_WEB3_JSON_RPC_PUBSUB_POLLING_INTERVAL=200 - API_WEB3_JSON_RPC_THREADS_PER_SERVER=128 API_WEB3_JSON_RPC_MAX_NONCE_AHEAD=5 API_WEB3_JSON_RPC_GAS_PRICE_SCALE_FACTOR=1.2 - API_WEB3_JSON_RPC_TRANSACTIONS_PER_SEC_LIMIT=1000 API_WEB3_JSON_RPC_REQUEST_TIMEOUT=10 API_WEB3_JSON_RPC_ACCOUNT_PKS="0x0000000000000000000000000000000000000000000000000000000000000001,0x0000000000000000000000000000000000000000000000000000000000000002" API_WEB3_JSON_RPC_ESTIMATE_GAS_SCALE_FACTOR=1.0 API_WEB3_JSON_RPC_ESTIMATE_GAS_ACCEPTABLE_OVERESTIMATION=1000 + API_WEB3_JSON_RPC_L1_TO_L2_TRANSACTIONS_COMPATIBILITY_MODE=true API_WEB3_JSON_RPC_MAX_TX_SIZE=1000000 API_WEB3_JSON_RPC_VM_CONCURRENCY_LIMIT=512 API_WEB3_JSON_RPC_FACTORY_DEPS_CACHE_SIZE_MB=128 API_WEB3_JSON_RPC_INITIAL_WRITES_CACHE_SIZE_MB=32 API_WEB3_JSON_RPC_LATEST_VALUES_CACHE_SIZE_MB=256 - API_WEB3_JSON_RPC_HTTP_THREADS=128 - API_WEB3_JSON_RPC_WS_THREADS=256 API_WEB3_JSON_RPC_FEE_HISTORY_LIMIT=100 API_WEB3_JSON_RPC_MAX_BATCH_REQUEST_SIZE=200 API_WEB3_JSON_RPC_WEBSOCKET_REQUESTS_PER_MINUTE_LIMIT=10 API_CONTRACT_VERIFICATION_PORT="3070" API_CONTRACT_VERIFICATION_URL="http://127.0.0.1:3070" - API_CONTRACT_VERIFICATION_THREADS_PER_SERVER=128 API_WEB3_JSON_RPC_MAX_RESPONSE_BODY_SIZE_MB=10 API_PROMETHEUS_LISTENER_PORT="3312" API_PROMETHEUS_PUSHGATEWAY_URL="http://127.0.0.1:9091" diff --git a/core/lib/env_config/src/chain.rs b/core/lib/env_config/src/chain.rs index e64ba3c36b8..c258c5092e5 100644 --- a/core/lib/env_config/src/chain.rs +++ b/core/lib/env_config/src/chain.rs @@ -1,22 +1,8 @@ -use crate::{envy_load, FromEnv}; -use anyhow::Context as _; use zksync_config::configs::chain::{ - ChainConfig, CircuitBreakerConfig, 
MempoolConfig, NetworkConfig, OperationsManagerConfig, - StateKeeperConfig, + CircuitBreakerConfig, MempoolConfig, NetworkConfig, OperationsManagerConfig, StateKeeperConfig, }; -impl FromEnv for ChainConfig { - fn from_env() -> anyhow::Result { - Ok(Self { - network: NetworkConfig::from_env().context("NetworkConfig")?, - state_keeper: StateKeeperConfig::from_env().context("StateKeeperConfig")?, - operations_manager: OperationsManagerConfig::from_env() - .context("OperationsManagerConfig")?, - mempool: MempoolConfig::from_env().context("MempoolConfig")?, - circuit_breaker: CircuitBreakerConfig::from_env().context("CircuitBreakerConfig")?, - }) - } -} +use crate::{envy_load, FromEnv}; impl FromEnv for NetworkConfig { fn from_env() -> anyhow::Result { @@ -51,68 +37,70 @@ impl FromEnv for MempoolConfig { #[cfg(test)] mod tests { use zksync_basic_types::L2ChainId; + use zksync_config::configs::chain::FeeModelVersion; use super::*; use crate::test_utils::{addr, EnvMutex}; static MUTEX: EnvMutex = EnvMutex::new(); - fn expected_config() -> ChainConfig { - ChainConfig { - network: NetworkConfig { - network: "localhost".parse().unwrap(), - zksync_network: "localhost".to_string(), - zksync_network_id: L2ChainId::from(270), - }, - state_keeper: StateKeeperConfig { - transaction_slots: 50, - block_commit_deadline_ms: 2500, - miniblock_commit_deadline_ms: 1000, - miniblock_seal_queue_capacity: 10, - max_single_tx_gas: 1_000_000, - max_allowed_l2_tx_gas_limit: 2_000_000_000, - close_block_at_eth_params_percentage: 0.2, - close_block_at_gas_percentage: 0.8, - close_block_at_geometry_percentage: 0.5, - reject_tx_at_eth_params_percentage: 0.8, - reject_tx_at_geometry_percentage: 0.3, - fee_account_addr: addr("de03a0B5963f75f1C8485B355fF6D30f3093BDE7"), - reject_tx_at_gas_percentage: 0.5, - fair_l2_gas_price: 250000000, - validation_computational_gas_limit: 10_000_000, - save_call_traces: false, - virtual_blocks_interval: 1, - virtual_blocks_per_miniblock: 1, - upload_witness_inputs_to_gcs: false, - enum_index_migration_chunk_size: Some(2_000), - }, - operations_manager: OperationsManagerConfig { - delay_interval: 100, - }, - mempool: MempoolConfig { - sync_interval_ms: 10, - sync_batch_size: 1000, - capacity: 1_000_000, - stuck_tx_timeout: 10, - remove_stuck_txs: true, - delay_interval: 100, - }, - circuit_breaker: CircuitBreakerConfig { - sync_interval_ms: 1000, - http_req_max_retry_number: 5, - http_req_retry_interval_sec: 2, - replication_lag_limit_sec: Some(10), - }, + fn expected_network_config() -> NetworkConfig { + NetworkConfig { + network: "localhost".parse().unwrap(), + zksync_network: "localhost".to_string(), + zksync_network_id: L2ChainId::from(270), } } #[test] - fn from_env() { + fn network_from_env() { let mut lock = MUTEX.lock(); let config = r#" CHAIN_ETH_NETWORK="localhost" CHAIN_ETH_ZKSYNC_NETWORK="localhost" CHAIN_ETH_ZKSYNC_NETWORK_ID=270 + "#; + lock.set_env(config); + + let actual = NetworkConfig::from_env().unwrap(); + assert_eq!(actual, expected_network_config()); + } + + fn expected_state_keeper_config() -> StateKeeperConfig { + StateKeeperConfig { + transaction_slots: 50, + block_commit_deadline_ms: 2500, + miniblock_commit_deadline_ms: 1000, + miniblock_seal_queue_capacity: 10, + max_single_tx_gas: 1_000_000, + max_allowed_l2_tx_gas_limit: 2_000_000_000, + close_block_at_eth_params_percentage: 0.2, + close_block_at_gas_percentage: 0.8, + close_block_at_geometry_percentage: 0.5, + reject_tx_at_eth_params_percentage: 0.8, + reject_tx_at_geometry_percentage: 0.3, + 
fee_account_addr: addr("de03a0B5963f75f1C8485B355fF6D30f3093BDE7"), + reject_tx_at_gas_percentage: 0.5, + minimal_l2_gas_price: 100000000, + compute_overhead_part: 0.0, + pubdata_overhead_part: 1.0, + batch_overhead_l1_gas: 800_000, + max_gas_per_batch: 200_000_000, + max_pubdata_per_batch: 100_000, + fee_model_version: FeeModelVersion::V2, + validation_computational_gas_limit: 10_000_000, + save_call_traces: false, + virtual_blocks_interval: 1, + virtual_blocks_per_miniblock: 1, + upload_witness_inputs_to_gcs: false, + enum_index_migration_chunk_size: Some(2_000), + } + } + + #[test] + fn state_keeper_from_env() { + let mut lock = MUTEX.lock(); + let config = r#" CHAIN_STATE_KEEPER_TRANSACTION_SLOTS="50" CHAIN_STATE_KEEPER_FEE_ACCOUNT_ADDR="0xde03a0B5963f75f1C8485B355fF6D30f3093BDE7" CHAIN_STATE_KEEPER_MAX_SINGLE_TX_GAS="1000000" @@ -126,18 +114,83 @@ mod tests { CHAIN_STATE_KEEPER_BLOCK_COMMIT_DEADLINE_MS="2500" CHAIN_STATE_KEEPER_MINIBLOCK_COMMIT_DEADLINE_MS="1000" CHAIN_STATE_KEEPER_MINIBLOCK_SEAL_QUEUE_CAPACITY="10" - CHAIN_STATE_KEEPER_FAIR_L2_GAS_PRICE="250000000" + CHAIN_STATE_KEEPER_MINIMAL_L2_GAS_PRICE="100000000" + CHAIN_STATE_KEEPER_COMPUTE_OVERHEAD_PART="0.0" + CHAIN_STATE_KEEPER_PUBDATA_OVERHEAD_PART="1.0" + CHAIN_STATE_KEEPER_BATCH_OVERHEAD_L1_GAS="800000" + CHAIN_STATE_KEEPER_MAX_GAS_PER_BATCH="200000000" + CHAIN_STATE_KEEPER_MAX_PUBDATA_PER_BATCH="100000" + CHAIN_STATE_KEEPER_FEE_MODEL_VERSION="V2" CHAIN_STATE_KEEPER_VALIDATION_COMPUTATIONAL_GAS_LIMIT="10000000" CHAIN_STATE_KEEPER_SAVE_CALL_TRACES="false" CHAIN_STATE_KEEPER_UPLOAD_WITNESS_INPUTS_TO_GCS="false" CHAIN_STATE_KEEPER_ENUM_INDEX_MIGRATION_CHUNK_SIZE="2000" + "#; + lock.set_env(config); + + let actual = StateKeeperConfig::from_env().unwrap(); + assert_eq!(actual, expected_state_keeper_config()); + } + + fn expected_operations_manager_config() -> OperationsManagerConfig { + OperationsManagerConfig { + delay_interval: 100, + } + } + + #[test] + fn operations_manager_from_env() { + let mut lock = MUTEX.lock(); + let config = r#" CHAIN_OPERATIONS_MANAGER_DELAY_INTERVAL="100" + "#; + lock.set_env(config); + + let actual = OperationsManagerConfig::from_env().unwrap(); + assert_eq!(actual, expected_operations_manager_config()); + } + + fn expected_mempool_config() -> MempoolConfig { + MempoolConfig { + sync_interval_ms: 10, + sync_batch_size: 1000, + capacity: 1_000_000, + stuck_tx_timeout: 10, + remove_stuck_txs: true, + delay_interval: 100, + } + } + + #[test] + fn mempool_from_env() { + let mut lock = MUTEX.lock(); + let config = r#" CHAIN_MEMPOOL_SYNC_INTERVAL_MS="10" CHAIN_MEMPOOL_SYNC_BATCH_SIZE="1000" CHAIN_MEMPOOL_STUCK_TX_TIMEOUT="10" CHAIN_MEMPOOL_REMOVE_STUCK_TXS="true" CHAIN_MEMPOOL_DELAY_INTERVAL="100" CHAIN_MEMPOOL_CAPACITY="1000000" + "#; + lock.set_env(config); + + let actual = MempoolConfig::from_env().unwrap(); + assert_eq!(actual, expected_mempool_config()); + } + + fn expected_circuit_breaker_config() -> CircuitBreakerConfig { + CircuitBreakerConfig { + sync_interval_ms: 1000, + http_req_max_retry_number: 5, + http_req_retry_interval_sec: 2, + replication_lag_limit_sec: Some(10), + } + } + + #[test] + fn circuit_breaker_from_env() { + let mut lock = MUTEX.lock(); + let config = r#" CHAIN_CIRCUIT_BREAKER_SYNC_INTERVAL_MS="1000" CHAIN_CIRCUIT_BREAKER_HTTP_REQ_MAX_RETRY_NUMBER="5" CHAIN_CIRCUIT_BREAKER_HTTP_REQ_RETRY_INTERVAL_SEC="2" @@ -145,7 +198,7 @@ mod tests { "#; lock.set_env(config); - let actual = ChainConfig::from_env().unwrap(); - assert_eq!(actual, expected_config()); + let actual = 
CircuitBreakerConfig::from_env().unwrap(); + assert_eq!(actual, expected_circuit_breaker_config()); } } diff --git a/core/lib/env_config/src/circuit_synthesizer.rs b/core/lib/env_config/src/circuit_synthesizer.rs deleted file mode 100644 index a59e1adcd65..00000000000 --- a/core/lib/env_config/src/circuit_synthesizer.rs +++ /dev/null @@ -1,51 +0,0 @@ -use zksync_config::configs::CircuitSynthesizerConfig; - -use crate::{envy_load, FromEnv}; - -impl FromEnv for CircuitSynthesizerConfig { - fn from_env() -> anyhow::Result { - envy_load("circuit_synthesizer", "CIRCUIT_SYNTHESIZER_") - } -} - -#[cfg(test)] -mod tests { - use super::*; - use crate::test_utils::EnvMutex; - - static MUTEX: EnvMutex = EnvMutex::new(); - - fn expected_config() -> CircuitSynthesizerConfig { - CircuitSynthesizerConfig { - generation_timeout_in_secs: 1000u16, - max_attempts: 2, - gpu_prover_queue_timeout_in_secs: 1000u16, - prover_instance_wait_timeout_in_secs: 1000u16, - prover_instance_poll_time_in_milli_secs: 250u16, - prometheus_listener_port: 3314, - prometheus_pushgateway_url: "http://127.0.0.1:9091".to_string(), - prometheus_push_interval_ms: Some(100), - prover_group_id: 0, - } - } - - #[test] - fn from_env() { - let mut lock = MUTEX.lock(); - let config = r#" - CIRCUIT_SYNTHESIZER_GENERATION_TIMEOUT_IN_SECS=1000 - CIRCUIT_SYNTHESIZER_MAX_ATTEMPTS=2 - CIRCUIT_SYNTHESIZER_GPU_PROVER_QUEUE_TIMEOUT_IN_SECS=1000 - CIRCUIT_SYNTHESIZER_PROVER_INSTANCE_WAIT_TIMEOUT_IN_SECS=1000 - CIRCUIT_SYNTHESIZER_PROVER_INSTANCE_POLL_TIME_IN_MILLI_SECS=250 - CIRCUIT_SYNTHESIZER_PROMETHEUS_LISTENER_PORT=3314 - CIRCUIT_SYNTHESIZER_PROMETHEUS_PUSHGATEWAY_URL="http://127.0.0.1:9091" - CIRCUIT_SYNTHESIZER_PROMETHEUS_PUSH_INTERVAL_MS=100 - CIRCUIT_SYNTHESIZER_PROVER_GROUP_ID=0 - "#; - lock.set_env(config); - - let actual = CircuitSynthesizerConfig::from_env().unwrap(); - assert_eq!(actual, expected_config()); - } -} diff --git a/core/lib/env_config/src/contracts.rs b/core/lib/env_config/src/contracts.rs index 8c58483db06..537b68414c6 100644 --- a/core/lib/env_config/src/contracts.rs +++ b/core/lib/env_config/src/contracts.rs @@ -10,9 +10,10 @@ impl FromEnv for ContractsConfig { #[cfg(test)] mod tests { + use zksync_config::configs::contracts::ProverAtGenesis; + use super::*; use crate::test_utils::{addr, hash, EnvMutex}; - use zksync_config::configs::contracts::ProverAtGenesis; static MUTEX: EnvMutex = EnvMutex::new(); diff --git a/core/lib/env_config/src/database.rs b/core/lib/env_config/src/database.rs index 939725d6773..74f665617ce 100644 --- a/core/lib/env_config/src/database.rs +++ b/core/lib/env_config/src/database.rs @@ -1,5 +1,6 @@ -use anyhow::Context as _; use std::env; + +use anyhow::Context as _; use zksync_config::{DBConfig, PostgresConfig}; use crate::{envy_load, FromEnv}; @@ -26,11 +27,11 @@ impl FromEnv for PostgresConfig { .ok() .map(|val| val.parse().context("failed to parse DATABASE_POOL_SIZE")) .transpose()?; - let statement_timeout_sec = env::var("DATABASE_STATEMENT_TIMEOUT") + let statement_timeout_sec = env::var("DATABASE_STATEMENT_TIMEOUT_SEC") .ok() .map(|val| { val.parse() - .context("failed to parse DATABASE_STATEMENT_TIMEOUT") + .context("failed to parse DATABASE_STATEMENT_TIMEOUT_SEC") }) .transpose()?; @@ -46,6 +47,8 @@ impl FromEnv for PostgresConfig { #[cfg(test)] mod tests { + use std::time::Duration; + use zksync_config::configs::database::MerkleTreeMode; use super::*; @@ -58,29 +61,23 @@ mod tests { let mut lock = MUTEX.lock(); let config = r#" DATABASE_STATE_KEEPER_DB_PATH="/db/state_keeper" - 
DATABASE_MERKLE_TREE_BACKUP_PATH="/db/backups" DATABASE_MERKLE_TREE_PATH="/db/tree" DATABASE_MERKLE_TREE_MODE=lightweight DATABASE_MERKLE_TREE_MULTI_GET_CHUNK_SIZE=250 DATABASE_MERKLE_TREE_MEMTABLE_CAPACITY_MB=512 DATABASE_MERKLE_TREE_STALLED_WRITES_TIMEOUT_SEC=60 DATABASE_MERKLE_TREE_MAX_L1_BATCHES_PER_ITER=50 - DATABASE_BACKUP_COUNT=5 - DATABASE_BACKUP_INTERVAL_MS=60000 "#; lock.set_env(config); let db_config = DBConfig::from_env().unwrap(); assert_eq!(db_config.state_keeper_db_path, "/db/state_keeper"); assert_eq!(db_config.merkle_tree.path, "/db/tree"); - assert_eq!(db_config.merkle_tree.backup_path, "/db/backups"); assert_eq!(db_config.merkle_tree.mode, MerkleTreeMode::Lightweight); assert_eq!(db_config.merkle_tree.multi_get_chunk_size, 250); assert_eq!(db_config.merkle_tree.max_l1_batches_per_iter, 50); assert_eq!(db_config.merkle_tree.memtable_capacity_mb, 512); assert_eq!(db_config.merkle_tree.stalled_writes_timeout_sec, 60); - assert_eq!(db_config.backup_count, 5); - assert_eq!(db_config.backup_interval().as_secs(), 60); } #[test] @@ -96,22 +93,17 @@ mod tests { "DATABASE_MERKLE_TREE_MEMTABLE_CAPACITY_MB", "DATABASE_MERKLE_TREE_STALLED_WRITES_TIMEOUT_SEC", "DATABASE_MERKLE_TREE_MAX_L1_BATCHES_PER_ITER", - "DATABASE_BACKUP_COUNT", - "DATABASE_BACKUP_INTERVAL_MS", ]); let db_config = DBConfig::from_env().unwrap(); assert_eq!(db_config.state_keeper_db_path, "./db/state_keeper"); assert_eq!(db_config.merkle_tree.path, "./db/lightweight-new"); - assert_eq!(db_config.merkle_tree.backup_path, "./db/backups"); assert_eq!(db_config.merkle_tree.mode, MerkleTreeMode::Full); assert_eq!(db_config.merkle_tree.multi_get_chunk_size, 500); assert_eq!(db_config.merkle_tree.max_l1_batches_per_iter, 20); assert_eq!(db_config.merkle_tree.block_cache_size_mb, 128); assert_eq!(db_config.merkle_tree.memtable_capacity_mb, 256); assert_eq!(db_config.merkle_tree.stalled_writes_timeout_sec, 30); - assert_eq!(db_config.backup_count, 5); - assert_eq!(db_config.backup_interval().as_secs(), 60); // Check that new env variable for Merkle tree path is supported lock.set_env("DATABASE_MERKLE_TREE_PATH=/db/tree/main"); @@ -130,4 +122,26 @@ mod tests { let db_config = DBConfig::from_env().unwrap(); assert_eq!(db_config.merkle_tree.max_l1_batches_per_iter, 50); } + + #[test] + fn postgres_from_env() { + let mut lock = MUTEX.lock(); + let config = r#" + DATABASE_URL=postgres://postgres@localhost/zksync_local + DATABASE_POOL_SIZE=50 + DATABASE_STATEMENT_TIMEOUT_SEC=300 + "#; + lock.set_env(config); + + let postgres_config = PostgresConfig::from_env().unwrap(); + assert_eq!( + postgres_config.master_url().unwrap(), + "postgres://postgres@localhost/zksync_local" + ); + assert_eq!(postgres_config.max_connections().unwrap(), 50); + assert_eq!( + postgres_config.statement_timeout(), + Some(Duration::from_secs(300)) + ); + } } diff --git a/core/lib/env_config/src/fetcher.rs b/core/lib/env_config/src/fetcher.rs deleted file mode 100644 index 4f86488b2c4..00000000000 --- a/core/lib/env_config/src/fetcher.rs +++ /dev/null @@ -1,68 +0,0 @@ -use zksync_config::FetcherConfig; - -use crate::{envy_load, FromEnv}; - -impl FromEnv for FetcherConfig { - fn from_env() -> anyhow::Result { - Ok(Self { - token_list: envy_load("token_list", "FETCHER_TOKEN_LIST_")?, - token_price: envy_load("token_price", "FETCHER_TOKEN_PRICE_")?, - token_trading_volume: envy_load( - "token_trading_volume", - "FETCHER_TOKEN_TRADING_VOLUME_", - )?, - }) - } -} - -#[cfg(test)] -mod tests { - use zksync_config::configs::fetcher::{ - SingleFetcherConfig, 
TokenListSource, TokenPriceSource, TokenTradingVolumeSource, - }; - - use super::*; - use crate::test_utils::EnvMutex; - - static MUTEX: EnvMutex = EnvMutex::new(); - - fn expected_config() -> FetcherConfig { - FetcherConfig { - token_list: SingleFetcherConfig { - source: TokenListSource::OneInch, - url: "http://127.0.0.1:1020".into(), - fetching_interval: 10, - }, - token_price: SingleFetcherConfig { - source: TokenPriceSource::CoinGecko, - url: "http://127.0.0.1:9876".into(), - fetching_interval: 7, - }, - token_trading_volume: SingleFetcherConfig { - source: TokenTradingVolumeSource::Uniswap, - url: "http://127.0.0.1:9975/graphql".to_string(), - fetching_interval: 5, - }, - } - } - - #[test] - fn from_env() { - let mut lock = MUTEX.lock(); - let config = r#" - FETCHER_TOKEN_LIST_SOURCE="OneInch" - FETCHER_TOKEN_LIST_URL="http://127.0.0.1:1020" - FETCHER_TOKEN_LIST_FETCHING_INTERVAL="10" - FETCHER_TOKEN_PRICE_SOURCE="CoinGecko" - FETCHER_TOKEN_PRICE_URL="http://127.0.0.1:9876" - FETCHER_TOKEN_PRICE_FETCHING_INTERVAL="7" - FETCHER_TOKEN_TRADING_VOLUME_SOURCE="Uniswap" - FETCHER_TOKEN_TRADING_VOLUME_URL="http://127.0.0.1:9975/graphql" - FETCHER_TOKEN_TRADING_VOLUME_FETCHING_INTERVAL="5" - "#; - lock.set_env(config); - - let actual = FetcherConfig::from_env().unwrap(); - assert_eq!(actual, expected_config()); - } -} diff --git a/core/lib/env_config/src/fri_proof_compressor.rs b/core/lib/env_config/src/fri_proof_compressor.rs index 2594433025e..777bdb03c58 100644 --- a/core/lib/env_config/src/fri_proof_compressor.rs +++ b/core/lib/env_config/src/fri_proof_compressor.rs @@ -10,9 +10,8 @@ impl FromEnv for FriProofCompressorConfig { #[cfg(test)] mod tests { - use crate::test_utils::EnvMutex; - use super::*; + use crate::test_utils::EnvMutex; static MUTEX: EnvMutex = EnvMutex::new(); diff --git a/core/lib/env_config/src/fri_prover.rs b/core/lib/env_config/src/fri_prover.rs index 200f22d89b7..7b97df50374 100644 --- a/core/lib/env_config/src/fri_prover.rs +++ b/core/lib/env_config/src/fri_prover.rs @@ -30,6 +30,8 @@ mod tests { witness_vector_generator_thread_count: Some(5), queue_capacity: 10, witness_vector_receiver_port: 3316, + zone_read_url: "http://metadata.google.internal/computeMetadata/v1/instance/zone" + .to_string(), shall_save_to_public_bucket: true, } } @@ -49,6 +51,7 @@ mod tests { FRI_PROVER_WITNESS_VECTOR_GENERATOR_THREAD_COUNT="5" FRI_PROVER_QUEUE_CAPACITY="10" FRI_PROVER_WITNESS_VECTOR_RECEIVER_PORT="3316" + FRI_PROVER_ZONE_READ_URL="http://metadata.google.internal/computeMetadata/v1/instance/zone" FRI_PROVER_SHALL_SAVE_TO_PUBLIC_BUCKET=true "#; lock.set_env(config); diff --git a/core/lib/env_config/src/lib.rs b/core/lib/env_config/src/lib.rs index a4a4af3f1ec..fa2bb237191 100644 --- a/core/lib/env_config/src/lib.rs +++ b/core/lib/env_config/src/lib.rs @@ -4,14 +4,12 @@ use serde::de::DeserializeOwned; mod alerts; mod api; mod chain; -mod circuit_synthesizer; mod contract_verifier; mod contracts; mod database; mod eth_client; mod eth_sender; mod eth_watch; -mod fetcher; mod fri_proof_compressor; mod fri_prover; mod fri_prover_gateway; @@ -21,8 +19,7 @@ mod fri_witness_vector_generator; mod house_keeper; pub mod object_store; mod proof_data_handler; -mod prover; -mod prover_group; +mod snapshots_creator; mod utils; mod witness_generator; diff --git a/core/lib/env_config/src/object_store.rs b/core/lib/env_config/src/object_store.rs index 3b4afe86b52..23b1abaf516 100644 --- a/core/lib/env_config/src/object_store.rs +++ b/core/lib/env_config/src/object_store.rs @@ -30,6 +30,16 @@ 
impl FromEnv for ProverObjectStoreConfig { } } +#[derive(Debug)] +pub struct SnapshotsObjectStoreConfig(pub ObjectStoreConfig); + +impl FromEnv for SnapshotsObjectStoreConfig { + fn from_env() -> anyhow::Result { + let config = envy_load("snapshots_object_store", "SNAPSHOTS_OBJECT_STORE_")?; + Ok(Self(config)) + } +} + #[cfg(test)] mod tests { use zksync_config::{configs::object_store::ObjectStoreMode, ObjectStoreConfig}; @@ -93,4 +103,19 @@ mod tests { let actual = ProverObjectStoreConfig::from_env().unwrap().0; assert_eq!(actual, expected_config("/prover_base_url")); } + + #[test] + fn snapshots_bucket_config_from_env() { + let mut lock = MUTEX.lock(); + let config = r#" + SNAPSHOTS_OBJECT_STORE_BUCKET_BASE_URL="/snapshots_base_url" + SNAPSHOTS_OBJECT_STORE_MODE="FileBacked" + SNAPSHOTS_OBJECT_STORE_FILE_BACKED_BASE_PATH="artifacts" + SNAPSHOTS_OBJECT_STORE_GCS_CREDENTIAL_FILE_PATH="/path/to/credentials.json" + SNAPSHOTS_OBJECT_STORE_MAX_RETRIES="5" + "#; + lock.set_env(config); + let actual = SnapshotsObjectStoreConfig::from_env().unwrap().0; + assert_eq!(actual, expected_config("/snapshots_base_url")); + } } diff --git a/core/lib/env_config/src/prover.rs b/core/lib/env_config/src/prover.rs deleted file mode 100644 index 700f0fffb96..00000000000 --- a/core/lib/env_config/src/prover.rs +++ /dev/null @@ -1,197 +0,0 @@ -use zksync_config::ProverConfigs; - -use crate::{envy_load, FromEnv}; - -impl FromEnv for ProverConfigs { - fn from_env() -> anyhow::Result { - Ok(Self { - non_gpu: envy_load("non_gpu", "PROVER_NON_GPU_")?, - two_gpu_forty_gb_mem: envy_load( - "two_gpu_forty_gb_mem", - "PROVER_TWO_GPU_FORTY_GB_MEM_", - )?, - one_gpu_eighty_gb_mem: envy_load( - "one_gpu_eighty_gb_mem", - "PROVER_ONE_GPU_EIGHTY_GB_MEM_", - )?, - two_gpu_eighty_gb_mem: envy_load( - "two_gpu_eighty_gb_mem", - "PROVER_TWO_GPU_EIGHTY_GB_MEM_", - )?, - four_gpu_eighty_gb_mem: envy_load( - "four_gpu_eighty_gb_mem", - "PROVER_FOUR_GPU_EIGHTY_GB_MEM_", - )?, - }) - } -} - -#[cfg(test)] -mod tests { - use zksync_config::ProverConfig; - - use super::*; - use crate::test_utils::EnvMutex; - - static MUTEX: EnvMutex = EnvMutex::new(); - - fn expected_config() -> ProverConfigs { - ProverConfigs { - non_gpu: ProverConfig { - prometheus_port: 3313, - initial_setup_key_path: "key".to_owned(), - key_download_url: "value".to_owned(), - generation_timeout_in_secs: 2700u16, - number_of_threads: 2, - max_attempts: 4, - polling_duration_in_millis: 5, - setup_keys_path: "/usr/src/setup-keys".to_string(), - specialized_prover_group_id: 0, - number_of_setup_slots: 2, - assembly_receiver_port: 17791, - assembly_receiver_poll_time_in_millis: 250, - assembly_queue_capacity: 5, - }, - two_gpu_forty_gb_mem: ProverConfig { - prometheus_port: 3313, - initial_setup_key_path: "key".to_owned(), - key_download_url: "value".to_owned(), - generation_timeout_in_secs: 2700u16, - number_of_threads: 2, - max_attempts: 4, - polling_duration_in_millis: 5, - setup_keys_path: "/usr/src/setup-keys".to_string(), - specialized_prover_group_id: 1, - number_of_setup_slots: 5, - assembly_receiver_port: 17791, - assembly_receiver_poll_time_in_millis: 250, - assembly_queue_capacity: 5, - }, - one_gpu_eighty_gb_mem: ProverConfig { - prometheus_port: 3313, - initial_setup_key_path: "key".to_owned(), - key_download_url: "value".to_owned(), - generation_timeout_in_secs: 2700u16, - number_of_threads: 4, - max_attempts: 4, - polling_duration_in_millis: 5, - setup_keys_path: "/usr/src/setup-keys".to_string(), - specialized_prover_group_id: 2, - number_of_setup_slots: 5, 
- assembly_receiver_port: 17791, - assembly_receiver_poll_time_in_millis: 250, - assembly_queue_capacity: 5, - }, - two_gpu_eighty_gb_mem: ProverConfig { - prometheus_port: 3313, - initial_setup_key_path: "key".to_owned(), - key_download_url: "value".to_owned(), - generation_timeout_in_secs: 2700u16, - number_of_threads: 9, - max_attempts: 4, - polling_duration_in_millis: 5, - setup_keys_path: "/usr/src/setup-keys".to_string(), - specialized_prover_group_id: 3, - number_of_setup_slots: 9, - assembly_receiver_port: 17791, - assembly_receiver_poll_time_in_millis: 250, - assembly_queue_capacity: 5, - }, - four_gpu_eighty_gb_mem: ProverConfig { - prometheus_port: 3313, - initial_setup_key_path: "key".to_owned(), - key_download_url: "value".to_owned(), - generation_timeout_in_secs: 2700u16, - number_of_threads: 18, - max_attempts: 4, - polling_duration_in_millis: 5, - setup_keys_path: "/usr/src/setup-keys".to_string(), - specialized_prover_group_id: 4, - number_of_setup_slots: 18, - assembly_receiver_port: 17791, - assembly_receiver_poll_time_in_millis: 250, - assembly_queue_capacity: 5, - }, - } - } - - const CONFIG: &str = r#" - PROVER_NON_GPU_PROMETHEUS_PORT="3313" - PROVER_NON_GPU_INITIAL_SETUP_KEY_PATH="key" - PROVER_NON_GPU_KEY_DOWNLOAD_URL="value" - PROVER_NON_GPU_GENERATION_TIMEOUT_IN_SECS=2700 - PROVER_NON_GPU_NUMBER_OF_THREADS="2" - PROVER_NON_GPU_MAX_ATTEMPTS="4" - PROVER_NON_GPU_POLLING_DURATION_IN_MILLIS=5 - PROVER_NON_GPU_SETUP_KEYS_PATH="/usr/src/setup-keys" - PROVER_NON_GPU_NUMBER_OF_SETUP_SLOTS=2 - PROVER_NON_GPU_ASSEMBLY_RECEIVER_PORT=17791 - PROVER_NON_GPU_ASSEMBLY_RECEIVER_POLL_TIME_IN_MILLIS=250 - PROVER_NON_GPU_ASSEMBLY_QUEUE_CAPACITY=5 - PROVER_NON_GPU_SPECIALIZED_PROVER_GROUP_ID=0 - - PROVER_TWO_GPU_FORTY_GB_MEM_PROMETHEUS_PORT="3313" - PROVER_TWO_GPU_FORTY_GB_MEM_INITIAL_SETUP_KEY_PATH="key" - PROVER_TWO_GPU_FORTY_GB_MEM_KEY_DOWNLOAD_URL="value" - PROVER_TWO_GPU_FORTY_GB_MEM_GENERATION_TIMEOUT_IN_SECS=2700 - PROVER_TWO_GPU_FORTY_GB_MEM_NUMBER_OF_THREADS="2" - PROVER_TWO_GPU_FORTY_GB_MEM_MAX_ATTEMPTS="4" - PROVER_TWO_GPU_FORTY_GB_MEM_POLLING_DURATION_IN_MILLIS=5 - PROVER_TWO_GPU_FORTY_GB_MEM_SETUP_KEYS_PATH="/usr/src/setup-keys" - PROVER_TWO_GPU_FORTY_GB_MEM_NUMBER_OF_SETUP_SLOTS=5 - PROVER_TWO_GPU_FORTY_GB_MEM_ASSEMBLY_RECEIVER_PORT=17791 - PROVER_TWO_GPU_FORTY_GB_MEM_ASSEMBLY_RECEIVER_POLL_TIME_IN_MILLIS=250 - PROVER_TWO_GPU_FORTY_GB_MEM_ASSEMBLY_QUEUE_CAPACITY=5 - PROVER_TWO_GPU_FORTY_GB_MEM_SPECIALIZED_PROVER_GROUP_ID=1 - - PROVER_ONE_GPU_EIGHTY_GB_MEM_PROMETHEUS_PORT="3313" - PROVER_ONE_GPU_EIGHTY_GB_MEM_INITIAL_SETUP_KEY_PATH="key" - PROVER_ONE_GPU_EIGHTY_GB_MEM_KEY_DOWNLOAD_URL="value" - PROVER_ONE_GPU_EIGHTY_GB_MEM_GENERATION_TIMEOUT_IN_SECS=2700 - PROVER_ONE_GPU_EIGHTY_GB_MEM_NUMBER_OF_THREADS="4" - PROVER_ONE_GPU_EIGHTY_GB_MEM_MAX_ATTEMPTS="4" - PROVER_ONE_GPU_EIGHTY_GB_MEM_POLLING_DURATION_IN_MILLIS=5 - PROVER_ONE_GPU_EIGHTY_GB_MEM_SETUP_KEYS_PATH="/usr/src/setup-keys" - PROVER_ONE_GPU_EIGHTY_GB_MEM_NUMBER_OF_SETUP_SLOTS=5 - PROVER_ONE_GPU_EIGHTY_GB_MEM_ASSEMBLY_RECEIVER_PORT=17791 - PROVER_ONE_GPU_EIGHTY_GB_MEM_ASSEMBLY_RECEIVER_POLL_TIME_IN_MILLIS=250 - PROVER_ONE_GPU_EIGHTY_GB_MEM_ASSEMBLY_QUEUE_CAPACITY=5 - PROVER_ONE_GPU_EIGHTY_GB_MEM_SPECIALIZED_PROVER_GROUP_ID=2 - - PROVER_TWO_GPU_EIGHTY_GB_MEM_PROMETHEUS_PORT="3313" - PROVER_TWO_GPU_EIGHTY_GB_MEM_INITIAL_SETUP_KEY_PATH="key" - PROVER_TWO_GPU_EIGHTY_GB_MEM_KEY_DOWNLOAD_URL="value" - PROVER_TWO_GPU_EIGHTY_GB_MEM_GENERATION_TIMEOUT_IN_SECS=2700 - PROVER_TWO_GPU_EIGHTY_GB_MEM_NUMBER_OF_THREADS="9" - 
PROVER_TWO_GPU_EIGHTY_GB_MEM_MAX_ATTEMPTS="4" - PROVER_TWO_GPU_EIGHTY_GB_MEM_POLLING_DURATION_IN_MILLIS=5 - PROVER_TWO_GPU_EIGHTY_GB_MEM_SETUP_KEYS_PATH="/usr/src/setup-keys" - PROVER_TWO_GPU_EIGHTY_GB_MEM_NUMBER_OF_SETUP_SLOTS=9 - PROVER_TWO_GPU_EIGHTY_GB_MEM_ASSEMBLY_RECEIVER_PORT=17791 - PROVER_TWO_GPU_EIGHTY_GB_MEM_ASSEMBLY_RECEIVER_POLL_TIME_IN_MILLIS=250 - PROVER_TWO_GPU_EIGHTY_GB_MEM_ASSEMBLY_QUEUE_CAPACITY=5 - PROVER_TWO_GPU_EIGHTY_GB_MEM_SPECIALIZED_PROVER_GROUP_ID=3 - - PROVER_FOUR_GPU_EIGHTY_GB_MEM_PROMETHEUS_PORT="3313" - PROVER_FOUR_GPU_EIGHTY_GB_MEM_INITIAL_SETUP_KEY_PATH="key" - PROVER_FOUR_GPU_EIGHTY_GB_MEM_KEY_DOWNLOAD_URL="value" - PROVER_FOUR_GPU_EIGHTY_GB_MEM_GENERATION_TIMEOUT_IN_SECS=2700 - PROVER_FOUR_GPU_EIGHTY_GB_MEM_NUMBER_OF_THREADS="18" - PROVER_FOUR_GPU_EIGHTY_GB_MEM_MAX_ATTEMPTS="4" - PROVER_FOUR_GPU_EIGHTY_GB_MEM_POLLING_DURATION_IN_MILLIS=5 - PROVER_FOUR_GPU_EIGHTY_GB_MEM_SETUP_KEYS_PATH="/usr/src/setup-keys" - PROVER_FOUR_GPU_EIGHTY_GB_MEM_NUMBER_OF_SETUP_SLOTS=18 - PROVER_FOUR_GPU_EIGHTY_GB_MEM_ASSEMBLY_RECEIVER_PORT=17791 - PROVER_FOUR_GPU_EIGHTY_GB_MEM_ASSEMBLY_RECEIVER_POLL_TIME_IN_MILLIS=250 - PROVER_FOUR_GPU_EIGHTY_GB_MEM_ASSEMBLY_QUEUE_CAPACITY=5 - PROVER_FOUR_GPU_EIGHTY_GB_MEM_SPECIALIZED_PROVER_GROUP_ID=4 - "#; - - #[test] - fn from_env() { - let mut lock = MUTEX.lock(); - lock.set_env(CONFIG); - let actual = ProverConfigs::from_env().unwrap(); - assert_eq!(actual, expected_config()); - } -} diff --git a/core/lib/env_config/src/prover_group.rs b/core/lib/env_config/src/prover_group.rs deleted file mode 100644 index bdac82cbb9c..00000000000 --- a/core/lib/env_config/src/prover_group.rs +++ /dev/null @@ -1,149 +0,0 @@ -use zksync_config::configs::ProverGroupConfig; - -use crate::{envy_load, FromEnv}; - -impl FromEnv for ProverGroupConfig { - fn from_env() -> anyhow::Result { - envy_load("prover_group", "PROVER_GROUP_") - } -} - -#[cfg(test)] -mod tests { - use super::*; - use crate::test_utils::EnvMutex; - - static MUTEX: EnvMutex = EnvMutex::new(); - - fn expected_config() -> ProverGroupConfig { - ProverGroupConfig { - group_0_circuit_ids: vec![0, 18], - group_1_circuit_ids: vec![1, 4], - group_2_circuit_ids: vec![2, 5], - group_3_circuit_ids: vec![6, 7], - group_4_circuit_ids: vec![8, 9], - group_5_circuit_ids: vec![10, 11], - group_6_circuit_ids: vec![12, 13], - group_7_circuit_ids: vec![14, 15], - group_8_circuit_ids: vec![16, 17], - group_9_circuit_ids: vec![3], - region_read_url: "http://metadata.google.internal/computeMetadata/v1/instance/attributes/cluster-location".to_string(), - region_override: Some("us-central-1".to_string()), - zone_read_url: "http://metadata.google.internal/computeMetadata/v1/instance/zone".to_string(), - zone_override: Some("us-central-1-b".to_string()), - synthesizer_per_gpu: 10, - } - } - - const CONFIG: &str = r#" - PROVER_GROUP_GROUP_0_CIRCUIT_IDS="0,18" - PROVER_GROUP_GROUP_1_CIRCUIT_IDS="1,4" - PROVER_GROUP_GROUP_2_CIRCUIT_IDS="2,5" - PROVER_GROUP_GROUP_3_CIRCUIT_IDS="6,7" - PROVER_GROUP_GROUP_4_CIRCUIT_IDS="8,9" - PROVER_GROUP_GROUP_5_CIRCUIT_IDS="10,11" - PROVER_GROUP_GROUP_6_CIRCUIT_IDS="12,13" - PROVER_GROUP_GROUP_7_CIRCUIT_IDS="14,15" - PROVER_GROUP_GROUP_8_CIRCUIT_IDS="16,17" - PROVER_GROUP_GROUP_9_CIRCUIT_IDS="3" - PROVER_GROUP_REGION_READ_URL="http://metadata.google.internal/computeMetadata/v1/instance/attributes/cluster-location" - PROVER_GROUP_REGION_OVERRIDE="us-central-1" - PROVER_GROUP_ZONE_READ_URL="http://metadata.google.internal/computeMetadata/v1/instance/zone" - 
PROVER_GROUP_ZONE_OVERRIDE="us-central-1-b" - PROVER_GROUP_SYNTHESIZER_PER_GPU="10" - "#; - - #[test] - fn from_env() { - let mut lock = MUTEX.lock(); - lock.set_env(CONFIG); - let actual = ProverGroupConfig::from_env().unwrap(); - assert_eq!(actual, expected_config()); - } - - #[test] - fn get_group_id_for_circuit_id() { - let prover_group_config = expected_config(); - - assert_eq!(Some(0), prover_group_config.get_group_id_for_circuit_id(0)); - assert_eq!(Some(0), prover_group_config.get_group_id_for_circuit_id(18)); - - assert_eq!(Some(1), prover_group_config.get_group_id_for_circuit_id(1)); - assert_eq!(Some(1), prover_group_config.get_group_id_for_circuit_id(4)); - - assert_eq!(Some(2), prover_group_config.get_group_id_for_circuit_id(2)); - assert_eq!(Some(2), prover_group_config.get_group_id_for_circuit_id(5)); - - assert_eq!(Some(3), prover_group_config.get_group_id_for_circuit_id(6)); - assert_eq!(Some(3), prover_group_config.get_group_id_for_circuit_id(7)); - - assert_eq!(Some(4), prover_group_config.get_group_id_for_circuit_id(8)); - assert_eq!(Some(4), prover_group_config.get_group_id_for_circuit_id(9)); - - assert_eq!(Some(5), prover_group_config.get_group_id_for_circuit_id(10)); - assert_eq!(Some(5), prover_group_config.get_group_id_for_circuit_id(11)); - - assert_eq!(Some(6), prover_group_config.get_group_id_for_circuit_id(12)); - assert_eq!(Some(6), prover_group_config.get_group_id_for_circuit_id(13)); - - assert_eq!(Some(7), prover_group_config.get_group_id_for_circuit_id(14)); - assert_eq!(Some(7), prover_group_config.get_group_id_for_circuit_id(15)); - - assert_eq!(Some(8), prover_group_config.get_group_id_for_circuit_id(16)); - assert_eq!(Some(8), prover_group_config.get_group_id_for_circuit_id(17)); - - assert_eq!(Some(9), prover_group_config.get_group_id_for_circuit_id(3)); - assert!(prover_group_config - .get_group_id_for_circuit_id(19) - .is_none()); - } - - #[test] - fn get_circuit_ids_for_group_id() { - let prover_group_config = expected_config(); - - assert_eq!( - Some(vec![0, 18]), - prover_group_config.get_circuit_ids_for_group_id(0) - ); - assert_eq!( - Some(vec![1, 4]), - prover_group_config.get_circuit_ids_for_group_id(1) - ); - assert_eq!( - Some(vec![2, 5]), - prover_group_config.get_circuit_ids_for_group_id(2) - ); - assert_eq!( - Some(vec![6, 7]), - prover_group_config.get_circuit_ids_for_group_id(3) - ); - assert_eq!( - Some(vec![8, 9]), - prover_group_config.get_circuit_ids_for_group_id(4) - ); - assert_eq!( - Some(vec![10, 11]), - prover_group_config.get_circuit_ids_for_group_id(5) - ); - assert_eq!( - Some(vec![12, 13]), - prover_group_config.get_circuit_ids_for_group_id(6) - ); - assert_eq!( - Some(vec![14, 15]), - prover_group_config.get_circuit_ids_for_group_id(7) - ); - assert_eq!( - Some(vec![16, 17]), - prover_group_config.get_circuit_ids_for_group_id(8) - ); - assert_eq!( - Some(vec![3]), - prover_group_config.get_circuit_ids_for_group_id(9) - ); - assert!(prover_group_config - .get_circuit_ids_for_group_id(10) - .is_none()); - } -} diff --git a/core/lib/env_config/src/snapshots_creator.rs b/core/lib/env_config/src/snapshots_creator.rs new file mode 100644 index 00000000000..6ed80e3780c --- /dev/null +++ b/core/lib/env_config/src/snapshots_creator.rs @@ -0,0 +1,9 @@ +use zksync_config::SnapshotsCreatorConfig; + +use crate::{envy_load, FromEnv}; + +impl FromEnv for SnapshotsCreatorConfig { + fn from_env() -> anyhow::Result { + envy_load("snapshots_creator", "SNAPSHOTS_CREATOR_") + } +} diff --git a/core/lib/env_config/src/test_utils.rs 
b/core/lib/env_config/src/test_utils.rs index 013d12493ae..2909071df39 100644 --- a/core/lib/env_config/src/test_utils.rs +++ b/core/lib/env_config/src/test_utils.rs @@ -1,4 +1,3 @@ -// Built-in uses. use std::{ collections::HashMap, env, @@ -6,7 +5,7 @@ use std::{ mem, sync::{Mutex, MutexGuard, PoisonError}, }; -// Workspace uses + use zksync_basic_types::{Address, H256}; /// Mutex that allows to modify certain env variables and roll them back to initial values when diff --git a/core/lib/env_config/src/utils.rs b/core/lib/env_config/src/utils.rs index 655d3b2e6d5..211e73ae2b1 100644 --- a/core/lib/env_config/src/utils.rs +++ b/core/lib/env_config/src/utils.rs @@ -1,6 +1,7 @@ -use crate::{envy_load, FromEnv}; use zksync_config::configs::PrometheusConfig; +use crate::{envy_load, FromEnv}; + impl FromEnv for PrometheusConfig { fn from_env() -> anyhow::Result { envy_load("prometheus", "API_PROMETHEUS_") diff --git a/core/lib/eth_client/Cargo.toml b/core/lib/eth_client/Cargo.toml index e78f74c5803..ff3e56ef731 100644 --- a/core/lib/eth_client/Cargo.toml +++ b/core/lib/eth_client/Cargo.toml @@ -1,7 +1,7 @@ [package] name = "zksync_eth_client" version = "0.1.0" -edition = "2018" +edition = "2021" authors = ["The Matter Labs Team "] homepage = "https://zksync.io/" repository = "https://github.com/matter-labs/zksync-era" @@ -10,7 +10,7 @@ keywords = ["blockchain", "zksync"] categories = ["cryptography"] [dependencies] -vise = { git = "https://github.com/matter-labs/vise.git", version = "0.1.0", rev = "dd05139b76ab0843443ab3ff730174942c825dae" } +vise = { git = "https://github.com/matter-labs/vise.git", version = "0.1.0", rev = "1c9cc500e92cf9ea052b230e114a6f9cce4fb2c1" } zksync_types = { path = "../types" } zksync_eth_signer = { path = "../eth_signer" } zksync_config = { path = "../config" } @@ -18,9 +18,10 @@ zksync_contracts = { path = "../contracts" } jsonrpc-core = "18" serde = "1.0.90" -hex = "0.4" -anyhow = "1.0" thiserror = "1" -tokio = { version = "1", features = ["full"] } async-trait = "0.1" tracing = "0.1" + +[dev-dependencies] +static_assertions = "1.1.0" +tokio = { version = "1", features = ["full"] } diff --git a/core/lib/eth_client/src/clients/generic.rs b/core/lib/eth_client/src/clients/generic.rs new file mode 100644 index 00000000000..c54a814d449 --- /dev/null +++ b/core/lib/eth_client/src/clients/generic.rs @@ -0,0 +1,170 @@ +use std::sync::Arc; + +use async_trait::async_trait; +use zksync_types::{ + web3::{ + contract::Options, + ethabi, + types::{ + Address, Block, BlockId, BlockNumber, Filter, Log, Transaction, TransactionReceipt, + H160, H256, U256, U64, + }, + }, + L1ChainId, +}; + +use crate::{ + BoundEthInterface, ContractCall, Error, EthInterface, ExecutedTxStatus, FailureInfo, + RawTransactionBytes, SignedCallResult, +}; + +#[async_trait] +impl EthInterface for Arc { + async fn nonce_at_for_account( + &self, + account: Address, + block: BlockNumber, + component: &'static str, + ) -> Result { + self.as_ref() + .nonce_at_for_account(account, block, component) + .await + } + + async fn base_fee_history( + &self, + from_block: usize, + block_count: usize, + component: &'static str, + ) -> Result, Error> { + self.as_ref() + .base_fee_history(from_block, block_count, component) + .await + } + + async fn get_pending_block_base_fee_per_gas( + &self, + component: &'static str, + ) -> Result { + self.as_ref() + .get_pending_block_base_fee_per_gas(component) + .await + } + + async fn get_gas_price(&self, component: &'static str) -> Result { + 
self.as_ref().get_gas_price(component).await + } + + async fn block_number(&self, component: &'static str) -> Result { + self.as_ref().block_number(component).await + } + + async fn send_raw_tx(&self, tx: RawTransactionBytes) -> Result { + self.as_ref().send_raw_tx(tx).await + } + + async fn get_tx_status( + &self, + hash: H256, + component: &'static str, + ) -> Result, Error> { + self.as_ref().get_tx_status(hash, component).await + } + + async fn failure_reason(&self, tx_hash: H256) -> Result, Error> { + self.as_ref().failure_reason(tx_hash).await + } + + async fn get_tx( + &self, + hash: H256, + component: &'static str, + ) -> Result, Error> { + self.as_ref().get_tx(hash, component).await + } + + async fn tx_receipt( + &self, + tx_hash: H256, + component: &'static str, + ) -> Result, Error> { + self.as_ref().tx_receipt(tx_hash, component).await + } + + async fn eth_balance(&self, address: Address, component: &'static str) -> Result { + self.as_ref().eth_balance(address, component).await + } + + async fn call_contract_function( + &self, + call: ContractCall, + ) -> Result, Error> { + self.as_ref().call_contract_function(call).await + } + + async fn logs(&self, filter: Filter, component: &'static str) -> Result, Error> { + self.as_ref().logs(filter, component).await + } + + async fn block( + &self, + block_id: BlockId, + component: &'static str, + ) -> Result>, Error> { + self.as_ref().block(block_id, component).await + } +} + +#[async_trait::async_trait] +impl BoundEthInterface for Arc { + fn contract(&self) -> ðabi::Contract { + self.as_ref().contract() + } + + fn contract_addr(&self) -> H160 { + self.as_ref().contract_addr() + } + + fn chain_id(&self) -> L1ChainId { + self.as_ref().chain_id() + } + + fn sender_account(&self) -> Address { + self.as_ref().sender_account() + } + + async fn allowance_on_account( + &self, + token_address: Address, + contract_address: Address, + erc20_abi: ethabi::Contract, + ) -> Result { + self.as_ref() + .allowance_on_account(token_address, contract_address, erc20_abi) + .await + } + + async fn sign_prepared_tx_for_addr( + &self, + data: Vec, + contract_addr: H160, + options: Options, + component: &'static str, + ) -> Result { + self.as_ref() + .sign_prepared_tx_for_addr(data, contract_addr, options, component) + .await + } + + async fn nonce_at(&self, block: BlockNumber, component: &'static str) -> Result { + self.as_ref().nonce_at(block, component).await + } + + async fn current_nonce(&self, component: &'static str) -> Result { + self.as_ref().current_nonce(component).await + } + + async fn pending_nonce(&self, component: &'static str) -> Result { + self.as_ref().pending_nonce(component).await + } +} diff --git a/core/lib/eth_client/src/clients/http/mod.rs b/core/lib/eth_client/src/clients/http/mod.rs index 5d94a383171..e3295ee4b76 100644 --- a/core/lib/eth_client/src/clients/http/mod.rs +++ b/core/lib/eth_client/src/clients/http/mod.rs @@ -1,17 +1,17 @@ +use std::time::Duration; + use vise::{ Buckets, Counter, EncodeLabelSet, EncodeLabelValue, Family, Histogram, LabeledFamily, Metrics, }; -use std::time::Duration; - -mod query; -mod signing; - pub use self::{ query::QueryClient, signing::{PKSigningClient, SigningClient}, }; +mod query; +mod signing; + #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, EncodeLabelValue, EncodeLabelSet)] #[metrics(label = "method", rename_all = "snake_case")] enum Method { diff --git a/core/lib/eth_client/src/clients/http/query.rs b/core/lib/eth_client/src/clients/http/query.rs index 198f5fc45af..d1abb74c46d 100644 --- 
a/core/lib/eth_client/src/clients/http/query.rs +++ b/core/lib/eth_client/src/clients/http/query.rs @@ -1,26 +1,22 @@ -use async_trait::async_trait; - use std::sync::Arc; -use crate::{ - clients::http::{Method, COUNTERS, LATENCIES}, - types::{Error, ExecutedTxStatus, FailureInfo}, - EthInterface, -}; +use async_trait::async_trait; use zksync_types::web3::{ self, - contract::{ - tokens::{Detokenize, Tokenize}, - Contract, Options, - }, + contract::Contract, ethabi, - helpers::CallFuture, transports::Http, types::{ Address, Block, BlockId, BlockNumber, Bytes, Filter, Log, Transaction, TransactionId, TransactionReceipt, H256, U256, U64, }, - Transport, Web3, + Web3, +}; + +use crate::{ + clients::http::{Method, COUNTERS, LATENCIES}, + types::{Error, ExecutedTxStatus, FailureInfo, RawTokens}, + ContractCall, EthInterface, RawTransactionBytes, }; /// An "anonymous" Ethereum client that can invoke read-only methods that aren't @@ -41,7 +37,7 @@ impl From for QueryClient { impl QueryClient { /// Creates a new HTTP client. pub fn new(node_url: &str) -> Result { - let transport = web3::transports::Http::new(node_url)?; + let transport = Http::new(node_url)?; Ok(transport.into()) } } @@ -81,9 +77,9 @@ impl EthInterface for QueryClient { Ok(network_gas_price) } - async fn send_raw_tx(&self, tx: Vec) -> Result { + async fn send_raw_tx(&self, tx: RawTransactionBytes) -> Result { let latency = LATENCIES.direct[&Method::SendRawTx].start(); - let tx = self.web3.eth().send_raw_transaction(Bytes(tx)).await?; + let tx = self.web3.eth().send_raw_transaction(Bytes(tx.0)).await?; latency.observe(); Ok(tx) } @@ -101,8 +97,8 @@ impl EthInterface for QueryClient { let mut history = Vec::with_capacity(block_count); let from_block = upto_block.saturating_sub(block_count); - // Here we are requesting fee_history from blocks - // (from_block; upto_block] in chunks of size MAX_REQUEST_CHUNK + // Here we are requesting `fee_history` from blocks + // `(from_block; upto_block)` in chunks of size `MAX_REQUEST_CHUNK` // starting from the oldest block. for chunk_start in (from_block..=upto_block).step_by(MAX_REQUEST_CHUNK) { let chunk_end = (chunk_start + MAX_REQUEST_CHUNK).min(upto_block); @@ -234,26 +230,21 @@ impl EthInterface for QueryClient { Ok(tx) } - #[allow(clippy::too_many_arguments)] - async fn call_contract_function( + async fn call_contract_function( &self, - func: &str, - params: P, - from: A, - options: Options, - block: B, - contract_address: Address, - contract_abi: ethabi::Contract, - ) -> Result - where - R: Detokenize + Unpin, - A: Into> + Send, - B: Into> + Send, - P: Tokenize + Send, - { + call: ContractCall, + ) -> Result, Error> { let latency = LATENCIES.direct[&Method::CallContractFunction].start(); - let contract = Contract::new(self.web3.eth(), contract_address, contract_abi); - let res = contract.query(func, params, from, options, block).await?; + let contract = Contract::new(self.web3.eth(), call.contract_address, call.contract_abi); + let RawTokens(res) = contract + .query( + &call.inner.name, + call.inner.params, + call.inner.from, + call.inner.options, + call.inner.block, + ) + .await?; latency.observe(); Ok(res) } @@ -286,23 +277,14 @@ impl EthInterface for QueryClient { Ok(logs) } - // TODO (PLA-333): at the moment the latest version of `web3` crate doesn't have `Finalized` variant in `BlockNumber`. - // However, it's already added in github repo and probably will be included in the next released version. 
- // Scope of PLA-333 includes forking/using crate directly from github, after that we will be able to change - // type of `block_id` from `String` to `BlockId` and use `self.web3.eth().block(block_id)`. async fn block( &self, - block_id: String, + block_id: BlockId, component: &'static str, ) -> Result>, Error> { COUNTERS.call[&(Method::Block, component)].inc(); let latency = LATENCIES.direct[&Method::Block].start(); - let block = CallFuture::new( - self.web3 - .transport() - .execute("eth_getBlockByNumber", vec![block_id.into(), false.into()]), - ) - .await?; + let block = self.web3.eth().block(block_id).await?; latency.observe(); Ok(block) } diff --git a/core/lib/eth_client/src/clients/http/signing.rs b/core/lib/eth_client/src/clients/http/signing.rs index fcc38efb4cc..6e3dd3d223d 100644 --- a/core/lib/eth_client/src/clients/http/signing.rs +++ b/core/lib/eth_client/src/clients/http/signing.rs @@ -1,29 +1,27 @@ -use async_trait::async_trait; - use std::{fmt, sync::Arc}; +use async_trait::async_trait; use zksync_config::{ContractsConfig, ETHClientConfig, ETHSenderConfig}; use zksync_contracts::zksync_contract; use zksync_eth_signer::{raw_ethereum_tx::TransactionParameters, EthereumSigner, PrivateKeySigner}; -use zksync_types::web3::{ - self, - contract::{ - tokens::{Detokenize, Tokenize}, - Options, - }, - ethabi, - transports::Http, - types::{ - Address, Block, BlockId, BlockNumber, Filter, Log, Transaction, TransactionReceipt, H160, - H256, U256, U64, +use zksync_types::{ + web3::{ + self, + contract::{tokens::Detokenize, Options}, + ethabi, + transports::Http, + types::{ + Address, Block, BlockId, BlockNumber, Filter, Log, Transaction, TransactionReceipt, + H160, H256, U256, U64, + }, }, + L1ChainId, PackedEthSignature, EIP_1559_TX_TYPE, }; -use zksync_types::{L1ChainId, PackedEthSignature, EIP_1559_TX_TYPE}; use super::{query::QueryClient, Method, LATENCIES}; use crate::{ types::{Error, ExecutedTxStatus, FailureInfo, SignedCallResult}, - BoundEthInterface, EthInterface, + BoundEthInterface, CallFunctionArgs, ContractCall, EthInterface, RawTransactionBytes, }; /// HTTP-based Ethereum client, backed by a private key to sign transactions. 
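Context for the `block()` rewrite above: with a typed `BlockId`, the `web3` API can be called directly instead of hand-rolling an `eth_getBlockByNumber` transport request. A minimal sketch of the typed call, assuming a locally reachable JSON-RPC endpoint (the URL is a placeholder and `anyhow` is assumed for error plumbing):

```rust
use zksync_types::web3::{
    transports::Http,
    types::{BlockId, BlockNumber},
    Web3,
};

async fn print_latest_block() -> anyhow::Result<()> {
    // Any `BlockId`/`BlockNumber` tag can be passed straight to `web3`;
    // no manual JSON-RPC plumbing is needed anymore.
    let web3 = Web3::new(Http::new("http://localhost:8545")?);
    let block = web3.eth().block(BlockId::Number(BlockNumber::Latest)).await?;
    println!("latest block number: {:?}", block.and_then(|b| b.number));
    Ok(())
}
```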
@@ -46,8 +44,7 @@ impl PKSigningClient { let default_priority_fee_per_gas = eth_sender.gas_adjuster.default_priority_fee_per_gas; let l1_chain_id = eth_client.chain_id; - let transport = - web3::transports::Http::new(main_node_url).expect("Failed to create transport"); + let transport = Http::new(main_node_url).expect("Failed to create transport"); let operator_address = PackedEthSignature::address_from_private_key(&operator_private_key) .expect("Failed to get address from private key"); @@ -121,7 +118,7 @@ impl EthInterface for SigningClient { self.query_client.get_gas_price(component).await } - async fn send_raw_tx(&self, tx: Vec) -> Result { + async fn send_raw_tx(&self, tx: RawTransactionBytes) -> Result { self.query_client.send_raw_tx(tx).await } @@ -165,34 +162,11 @@ impl EthInterface for SigningClient { self.query_client.get_tx(hash, component).await } - #[allow(clippy::too_many_arguments)] - async fn call_contract_function( + async fn call_contract_function( &self, - func: &str, - params: P, - from: A, - options: Options, - block: B, - contract_address: Address, - contract_abi: ethabi::Contract, - ) -> Result - where - R: Detokenize + Unpin, - A: Into> + Send, - B: Into> + Send, - P: Tokenize + Send, - { - self.query_client - .call_contract_function( - func, - params, - from, - options, - block, - contract_address, - contract_abi, - ) - .await + call: ContractCall, + ) -> Result, Error> { + self.query_client.call_contract_function(call).await } async fn tx_receipt( @@ -213,7 +187,7 @@ impl EthInterface for SigningClient { async fn block( &self, - block_id: String, + block_id: BlockId, component: &'static str, ) -> Result>, Error> { self.query_client.block(block_id, component).await @@ -252,7 +226,7 @@ impl BoundEthInterface for SigningClient { None => self.inner.default_priority_fee_per_gas, }; - // Fetch current base fee and add max_priority_fee_per_gas + // Fetch current base fee and add `max_priority_fee_per_gas` let max_fee_per_gas = match options.max_fee_per_gas { Some(max_fee_per_gas) => max_fee_per_gas, None => { @@ -302,7 +276,7 @@ impl BoundEthInterface for SigningClient { let hash = web3::signing::keccak256(&signed_tx).into(); latency.observe(); Ok(SignedCallResult { - raw_tx: signed_tx, + raw_tx: RawTransactionBytes(signed_tx), max_priority_fee_per_gas, max_fee_per_gas, nonce, @@ -317,19 +291,11 @@ impl BoundEthInterface for SigningClient { erc20_abi: ethabi::Contract, ) -> Result { let latency = LATENCIES.direct[&Method::Allowance].start(); - let res = self - .call_contract_function( - "allowance", - (self.inner.sender_account, address), - None, - Options::default(), - None, - token_address, - erc20_abi, - ) - .await?; + let args = CallFunctionArgs::new("allowance", (self.inner.sender_account, address)) + .for_contract(token_address, erc20_abi); + let res = self.call_contract_function(args).await?; latency.observe(); - Ok(res) + Ok(U256::from_tokens(res)?) 
} } diff --git a/core/lib/eth_client/src/clients/mock.rs b/core/lib/eth_client/src/clients/mock.rs index 07297a3645f..5541dd1d198 100644 --- a/core/lib/eth_client/src/clients/mock.rs +++ b/core/lib/eth_client/src/clients/mock.rs @@ -1,16 +1,14 @@ -use std::sync::atomic::{AtomicU64, Ordering}; +use std::{ + collections::{BTreeMap, HashMap}, + sync::RwLock, +}; use async_trait::async_trait; use jsonrpc_core::types::error::Error as RpcError; -use std::collections::{BTreeMap, HashMap}; -use std::sync::RwLock; use zksync_types::{ web3::{ - contract::{ - tokens::{Detokenize, Tokenize}, - Options, - }, - ethabi::{self, Token}, + contract::{tokens::Tokenize, Options}, + ethabi, types::{Block, BlockId, BlockNumber, Filter, Log, Transaction, TransactionReceipt, U64}, Error as Web3Error, }, @@ -19,55 +17,112 @@ use zksync_types::{ use crate::{ types::{Error, ExecutedTxStatus, FailureInfo, SignedCallResult}, - BoundEthInterface, EthInterface, + BoundEthInterface, ContractCall, EthInterface, RawTransactionBytes, }; -#[derive(Debug, Clone, Default, Copy)] -pub struct MockTx { - pub hash: H256, - pub nonce: u64, - pub base_fee: U256, +#[derive(Debug, Clone)] +struct MockTx { + input: Vec, + hash: H256, + nonce: u64, + max_fee_per_gas: U256, + max_priority_fee_per_gas: U256, } impl From> for MockTx { fn from(tx: Vec) -> Self { - use std::convert::TryFrom; - let len = tx.len(); - let total_gas_price = U256::try_from(&tx[len - 96..len - 64]).unwrap(); - let priority_fee = U256::try_from(&tx[len - 64..len - 32]).unwrap(); - let base_fee = total_gas_price - priority_fee; + let max_fee_per_gas = U256::try_from(&tx[len - 96..len - 64]).unwrap(); + let max_priority_fee_per_gas = U256::try_from(&tx[len - 64..len - 32]).unwrap(); let nonce = U256::try_from(&tx[len - 32..]).unwrap().as_u64(); let hash = { - let mut buffer: [u8; 32] = Default::default(); + let mut buffer = [0_u8; 32]; buffer.copy_from_slice(&tx[..32]); buffer.into() }; Self { + input: tx[32..len - 96].to_vec(), nonce, hash, - base_fee, + max_fee_per_gas, + max_priority_fee_per_gas, + } + } +} + +impl From for Transaction { + fn from(tx: MockTx) -> Self { + Self { + input: tx.input.into(), + hash: tx.hash, + nonce: tx.nonce.into(), + max_fee_per_gas: Some(tx.max_fee_per_gas), + max_priority_fee_per_gas: Some(tx.max_priority_fee_per_gas), + ..Self::default() } } } +/// Mutable part of [`MockEthereum`] that needs to be synchronized via an `RwLock`. 
+#[derive(Debug, Default)] +struct MockEthereumInner { + block_number: u64, + tx_statuses: HashMap, + sent_txs: HashMap, + current_nonce: u64, + pending_nonce: u64, + nonces: BTreeMap, +} + +impl MockEthereumInner { + fn execute_tx( + &mut self, + tx_hash: H256, + success: bool, + confirmations: u64, + non_ordering_confirmations: bool, + ) { + let block_number = self.block_number; + self.block_number += confirmations; + let nonce = self.current_nonce; + self.current_nonce += 1; + let tx_nonce = self.sent_txs[&tx_hash].nonce; + + if non_ordering_confirmations { + if tx_nonce >= nonce { + self.current_nonce = tx_nonce; + } + } else { + assert_eq!(tx_nonce, nonce, "nonce mismatch"); + } + self.nonces.insert(block_number, nonce + 1); + + let status = ExecutedTxStatus { + tx_hash, + success, + receipt: TransactionReceipt { + gas_used: Some(21000u32.into()), + block_number: Some(block_number.into()), + transaction_hash: tx_hash, + ..TransactionReceipt::default() + }, + }; + self.tx_statuses.insert(tx_hash, status); + } +} + /// Mock Ethereum client is capable of recording all the incoming requests for the further analysis. #[derive(Debug)] pub struct MockEthereum { - pub block_number: AtomicU64, - pub max_fee_per_gas: U256, - pub base_fee_history: RwLock>, - pub max_priority_fee_per_gas: U256, - pub tx_statuses: RwLock>, - pub sent_txs: RwLock>, - pub current_nonce: AtomicU64, - pub pending_nonce: AtomicU64, - pub nonces: RwLock>, + max_fee_per_gas: U256, + max_priority_fee_per_gas: U256, + base_fee_history: Vec, /// If true, the mock will not check the ordering nonces of the transactions. /// This is useful for testing the cases when the transactions are executed out of order. - pub non_ordering_confirmations: bool, - pub multicall_address: Address, + non_ordering_confirmations: bool, + multicall_address: Address, + inner: RwLock, } impl Default for MockEthereum { @@ -75,15 +130,10 @@ impl Default for MockEthereum { Self { max_fee_per_gas: 100.into(), max_priority_fee_per_gas: 10.into(), - block_number: Default::default(), - base_fee_history: Default::default(), - tx_statuses: Default::default(), - sent_txs: Default::default(), - current_nonce: Default::default(), - pending_nonce: Default::default(), - nonces: RwLock::new([(0, 0)].into()), + base_fee_history: vec![], non_ordering_confirmations: false, multicall_address: Address::default(), + inner: RwLock::default(), } } } @@ -91,54 +141,29 @@ impl Default for MockEthereum { impl MockEthereum { /// A fake `sha256` hasher, which calculates an `std::hash` instead. /// This is done for simplicity and it's also much faster. - pub fn fake_sha256(data: &[u8]) -> H256 { - use std::collections::hash_map::DefaultHasher; - use std::hash::Hasher; + fn fake_sha256(data: &[u8]) -> H256 { + use std::{collections::hash_map::DefaultHasher, hash::Hasher}; let mut hasher = DefaultHasher::new(); hasher.write(data); - let result = hasher.finish(); - H256::from_low_u64_ne(result) } + /// Returns the number of transactions sent via this client. + pub fn sent_tx_count(&self) -> usize { + self.inner.read().unwrap().sent_txs.len() + } + /// Increments the blocks by a provided `confirmations` and marks the sent transaction /// as a success. 
- pub fn execute_tx( - &self, - tx_hash: H256, - success: bool, - confirmations: u64, - ) -> anyhow::Result<()> { - let block_number = self.block_number.fetch_add(confirmations, Ordering::SeqCst); - let nonce = self.current_nonce.fetch_add(1, Ordering::SeqCst); - let tx_nonce = self.sent_txs.read().unwrap()[&tx_hash].nonce; - - if self.non_ordering_confirmations { - if tx_nonce >= nonce { - self.current_nonce.store(tx_nonce, Ordering::SeqCst); - } - } else { - anyhow::ensure!(tx_nonce == nonce, "nonce mismatch"); - } - - self.nonces.write().unwrap().insert(block_number, nonce + 1); - - let status = ExecutedTxStatus { + pub fn execute_tx(&self, tx_hash: H256, success: bool, confirmations: u64) { + self.inner.write().unwrap().execute_tx( tx_hash, success, - receipt: TransactionReceipt { - gas_used: Some(21000u32.into()), - block_number: Some(block_number.into()), - transaction_hash: tx_hash, - ..Default::default() - }, - }; - - self.tx_statuses.write().unwrap().insert(tx_hash, status); - - Ok(()) + confirmations, + self.non_ordering_confirmations, + ); } pub fn sign_prepared_tx( @@ -152,18 +177,18 @@ impl MockEthereum { .unwrap_or(self.max_priority_fee_per_gas); let nonce = options.nonce.expect("Nonce must be set for every tx"); - // Nonce and gas_price are appended to distinguish the same transactions + // Nonce and `gas_price` are appended to distinguish the same transactions // with different gas by their hash in tests. raw_tx.append(&mut ethabi::encode(&max_fee_per_gas.into_tokens())); raw_tx.append(&mut ethabi::encode(&max_priority_fee_per_gas.into_tokens())); raw_tx.append(&mut ethabi::encode(&nonce.into_tokens())); let hash = Self::fake_sha256(&raw_tx); // Okay for test purposes. - // Concatenate raw_tx plus hash for test purposes + // Concatenate `raw_tx` plus hash for test purposes let mut new_raw_tx = hash.as_bytes().to_vec(); new_raw_tx.extend(raw_tx); Ok(SignedCallResult { - raw_tx: new_raw_tx, + raw_tx: RawTransactionBytes(new_raw_tx), max_priority_fee_per_gas, max_fee_per_gas, nonce, @@ -172,12 +197,14 @@ impl MockEthereum { } pub fn advance_block_number(&self, val: u64) -> u64 { - self.block_number.fetch_add(val, Ordering::SeqCst) + val + let mut inner = self.inner.write().unwrap(); + inner.block_number += val; + inner.block_number } pub fn with_fee_history(self, history: Vec) -> Self { Self { - base_fee_history: RwLock::new(history), + base_fee_history: history, ..self } } @@ -204,17 +231,19 @@ impl EthInterface for MockEthereum { hash: H256, _: &'static str, ) -> Result, Error> { - Ok(self.tx_statuses.read().unwrap().get(&hash).cloned()) + Ok(self.inner.read().unwrap().tx_statuses.get(&hash).cloned()) } async fn block_number(&self, _: &'static str) -> Result { - Ok(self.block_number.load(Ordering::SeqCst).into()) + Ok(self.inner.read().unwrap().block_number.into()) } - async fn send_raw_tx(&self, tx: Vec) -> Result { - let mock_tx = MockTx::from(tx); + async fn send_raw_tx(&self, tx: RawTransactionBytes) -> Result { + let mock_tx = MockTx::from(tx.0); + let mock_tx_hash = mock_tx.hash; + let mut inner = self.inner.write().unwrap(); - if mock_tx.nonce < self.current_nonce.load(Ordering::SeqCst) { + if mock_tx.nonce < inner.current_nonce { return Err(Error::EthereumGateway(Web3Error::Rpc(RpcError { message: "transaction with the same nonce already processed".to_string(), code: 101.into(), @@ -222,13 +251,11 @@ impl EthInterface for MockEthereum { }))); } - if mock_tx.nonce == self.pending_nonce.load(Ordering::SeqCst) { - self.pending_nonce.fetch_add(1, Ordering::SeqCst); + if 
mock_tx.nonce == inner.pending_nonce { + inner.pending_nonce += 1; } - - self.sent_txs.write().unwrap().insert(mock_tx.hash, mock_tx); - - Ok(mock_tx.hash) + inner.sent_txs.insert(mock_tx_hash, mock_tx); + Ok(mock_tx_hash) } async fn nonce_at_for_account( @@ -250,18 +277,15 @@ impl EthInterface for MockEthereum { block_count: usize, _component: &'static str, ) -> Result, Error> { - Ok(self.base_fee_history.read().unwrap() - [from_block.saturating_sub(block_count - 1)..=from_block] - .to_vec()) + let start_block = from_block.saturating_sub(block_count - 1); + Ok(self.base_fee_history[start_block..=from_block].to_vec()) } async fn get_pending_block_base_fee_per_gas( &self, _component: &'static str, ) -> Result { - Ok(U256::from( - *self.base_fee_history.read().unwrap().last().unwrap(), - )) + Ok(U256::from(*self.base_fee_history.last().unwrap())) } async fn failure_reason(&self, tx_hash: H256) -> Result, Error> { @@ -275,24 +299,13 @@ impl EthInterface for MockEthereum { })) } - #[allow(clippy::too_many_arguments)] - async fn call_contract_function( + async fn call_contract_function( &self, - _func: &str, - _params: P, - _from: A, - _options: Options, - _block: B, - contract_address: Address, - _contract_abi: ethabi::Contract, - ) -> Result - where - R: Detokenize + Unpin, - A: Into> + Send, - B: Into> + Send, - P: Tokenize + Send, - { - if contract_address == self.multicall_address { + call: ContractCall, + ) -> Result, Error> { + use ethabi::Token; + + if call.contract_address == self.multicall_address { let token = Token::Array(vec![ Token::Tuple(vec![Token::Bool(true), Token::Bytes(vec![1u8; 32])]), Token::Tuple(vec![Token::Bool(true), Token::Bytes(vec![2u8; 32])]), @@ -307,17 +320,21 @@ impl EthInterface for MockEthereum { ), ]), ]); - return Ok(R::from_tokens(vec![token]).unwrap()); + return Ok(vec![token]); } - Ok(R::from_tokens(vec![]).unwrap()) + Ok(vec![]) } async fn get_tx( &self, - _hash: H256, + hash: H256, _component: &'static str, ) -> Result, Error> { - unimplemented!("Not needed right now") + let txs = &self.inner.read().unwrap().sent_txs; + let Some(tx) = txs.get(&hash) else { + return Ok(None); + }; + Ok(Some(tx.clone().into())) } async fn tx_receipt( @@ -342,7 +359,7 @@ impl EthInterface for MockEthereum { async fn block( &self, - _block_id: String, + _block_id: BlockId, _component: &'static str, ) -> Result>, Error> { unimplemented!("Not needed right now") @@ -388,199 +405,79 @@ impl BoundEthInterface for MockEthereum { async fn nonce_at(&self, block: BlockNumber, _component: &'static str) -> Result { if let BlockNumber::Number(block_number) = block { - Ok((*self - .nonces - .read() - .unwrap() - .range(..=block_number.as_u64()) - .next_back() - .unwrap() - .1) - .into()) + let inner = self.inner.read().unwrap(); + let mut nonce_range = inner.nonces.range(..=block_number.as_u64()); + let (_, &nonce) = nonce_range.next_back().unwrap_or((&0, &0)); + Ok(nonce.into()) } else { panic!("MockEthereum::nonce_at called with non-number block tag"); } } async fn pending_nonce(&self, _: &'static str) -> Result { - Ok(self.pending_nonce.load(Ordering::SeqCst).into()) + Ok(self.inner.read().unwrap().pending_nonce.into()) } async fn current_nonce(&self, _: &'static str) -> Result { - Ok(self.current_nonce.load(Ordering::SeqCst).into()) + Ok(self.inner.read().unwrap().current_nonce.into()) } } -#[async_trait] -impl + Send + Sync> EthInterface for T { - async fn nonce_at_for_account( - &self, - account: Address, - block: BlockNumber, - component: &'static str, - ) -> Result { - 
self.as_ref() - .nonce_at_for_account(account, block, component) - .await - } - - async fn base_fee_history( - &self, - from_block: usize, - block_count: usize, - component: &'static str, - ) -> Result, Error> { - self.as_ref() - .base_fee_history(from_block, block_count, component) - .await - } - - async fn get_pending_block_base_fee_per_gas( - &self, - component: &'static str, - ) -> Result { - self.as_ref() - .get_pending_block_base_fee_per_gas(component) - .await - } - - async fn get_gas_price(&self, component: &'static str) -> Result { - self.as_ref().get_gas_price(component).await - } - - async fn block_number(&self, component: &'static str) -> Result { - self.as_ref().block_number(component).await - } - - async fn send_raw_tx(&self, tx: Vec) -> Result { - self.as_ref().send_raw_tx(tx).await - } - - async fn failure_reason(&self, tx_hash: H256) -> Result, Error> { - self.as_ref().failure_reason(tx_hash).await - } - - async fn get_tx_status( - &self, - hash: H256, - component: &'static str, - ) -> Result, Error> { - self.as_ref().get_tx_status(hash, component).await - } - - async fn get_tx( - &self, - hash: H256, - component: &'static str, - ) -> Result, Error> { - self.as_ref().get_tx(hash, component).await - } - - #[allow(clippy::too_many_arguments)] - async fn call_contract_function( - &self, - func: &str, - params: P, - from: A, - options: Options, - block: B, - contract_address: Address, - contract_abi: ethabi::Contract, - ) -> Result - where - R: Detokenize + Unpin, - A: Into> + Send, - B: Into> + Send, - P: Tokenize + Send, - { - self.as_ref() - .call_contract_function( - func, - params, - from, - options, - block, - contract_address, - contract_abi, +#[cfg(test)] +mod tests { + use super::*; + + #[tokio::test] + async fn managing_block_number() { + let client = MockEthereum::default(); + let block_number = client.block_number("test").await.unwrap(); + assert_eq!(block_number, 0.into()); + + client.advance_block_number(5); + let block_number = client.block_number("test").await.unwrap(); + assert_eq!(block_number, 5.into()); + } + + #[tokio::test] + async fn managing_transactions() { + let client = MockEthereum::default().with_non_ordering_confirmation(true); + client.advance_block_number(2); + + let signed_tx = client + .sign_prepared_tx( + b"test".to_vec(), + Options { + nonce: Some(1.into()), + ..Options::default() + }, ) - .await - } - - async fn tx_receipt( - &self, - tx_hash: H256, - component: &'static str, - ) -> Result, Error> { - self.as_ref().tx_receipt(tx_hash, component).await - } - - async fn eth_balance(&self, address: Address, component: &'static str) -> Result { - self.as_ref().eth_balance(address, component).await - } + .unwrap(); + assert_eq!(signed_tx.nonce, 1.into()); + assert!(signed_tx.max_priority_fee_per_gas > 0.into()); + assert!(signed_tx.max_fee_per_gas > 0.into()); - async fn logs(&self, filter: Filter, component: &'static str) -> Result, Error> { - self.as_ref().logs(filter, component).await - } - - async fn block( - &self, - block_id: String, - component: &'static str, - ) -> Result>, Error> { - self.as_ref().block(block_id, component).await - } -} - -#[async_trait::async_trait] -impl + Send + Sync> BoundEthInterface for T { - fn contract(&self) -> ðabi::Contract { - self.as_ref().contract() - } - - fn contract_addr(&self) -> H160 { - self.as_ref().contract_addr() - } - - fn chain_id(&self) -> L1ChainId { - self.as_ref().chain_id() - } - - fn sender_account(&self) -> Address { - self.as_ref().sender_account() - } + let tx_hash = 
client.send_raw_tx(signed_tx.raw_tx).await.unwrap(); + assert_eq!(tx_hash, signed_tx.hash); - async fn sign_prepared_tx_for_addr( - &self, - data: Vec, - contract_addr: H160, - options: Options, - component: &'static str, - ) -> Result { - self.as_ref() - .sign_prepared_tx_for_addr(data, contract_addr, options, component) + client.execute_tx(tx_hash, true, 3); + let returned_tx = client + .get_tx(tx_hash, "test") .await - } - - async fn allowance_on_account( - &self, - token_address: Address, - contract_address: Address, - erc20_abi: ethabi::Contract, - ) -> Result { - self.as_ref() - .allowance_on_account(token_address, contract_address, erc20_abi) + .unwrap() + .expect("no transaction"); + assert_eq!(returned_tx.hash, tx_hash); + assert_eq!(returned_tx.input.0, b"test"); + assert_eq!(returned_tx.nonce, 1.into()); + assert!(returned_tx.max_priority_fee_per_gas.is_some()); + assert!(returned_tx.max_fee_per_gas.is_some()); + + let tx_status = client + .get_tx_status(tx_hash, "test") .await - } - - async fn nonce_at(&self, block: BlockNumber, component: &'static str) -> Result { - self.as_ref().nonce_at(block, component).await - } - - async fn pending_nonce(&self, _: &'static str) -> Result { - self.as_ref().pending_nonce("").await - } - - async fn current_nonce(&self, _: &'static str) -> Result { - self.as_ref().current_nonce("").await + .unwrap() + .expect("no transaction status"); + assert!(tx_status.success); + assert_eq!(tx_status.tx_hash, tx_hash); + assert_eq!(tx_status.receipt.block_number, Some(2.into())); } } diff --git a/core/lib/eth_client/src/clients/mod.rs b/core/lib/eth_client/src/clients/mod.rs index e992fac2eaf..aa77974c494 100644 --- a/core/lib/eth_client/src/clients/mod.rs +++ b/core/lib/eth_client/src/clients/mod.rs @@ -1,2 +1,10 @@ -pub mod http; -pub mod mock; +//! Various Ethereum client implementations. + +mod generic; +mod http; +mod mock; + +pub use self::{ + http::{PKSigningClient, QueryClient, SigningClient}, + mock::MockEthereum, +}; diff --git a/core/lib/eth_client/src/lib.rs b/core/lib/eth_client/src/lib.rs index 2291f721470..eeabcba47e0 100644 --- a/core/lib/eth_client/src/lib.rs +++ b/core/lib/eth_client/src/lib.rs @@ -1,16 +1,9 @@ -#![allow(clippy::upper_case_acronyms, clippy::derive_partial_eq_without_eq)] +use std::fmt; -pub mod clients; -pub mod types; - -use crate::types::{Error, ExecutedTxStatus, FailureInfo, SignedCallResult}; use async_trait::async_trait; use zksync_types::{ web3::{ - contract::{ - tokens::{Detokenize, Tokenize}, - Options, - }, + contract::Options, ethabi, types::{ Address, Block, BlockId, BlockNumber, Filter, Log, Transaction, TransactionReceipt, @@ -20,8 +13,16 @@ use zksync_types::{ L1ChainId, }; +pub use crate::types::{ + CallFunctionArgs, ContractCall, Error, ExecutedTxStatus, FailureInfo, RawTransactionBytes, + SignedCallResult, +}; + +pub mod clients; +mod types; + /// Common Web3 interface, as seen by the core applications. -/// Encapsulates the raw Web3 interction, providing a high-level interface. +/// Encapsulates the raw Web3 interaction, providing a high-level interface. /// /// ## Trait contents /// @@ -34,10 +35,10 @@ use zksync_types::{ /// /// Most of the trait methods support the `component` parameter. This parameter is used to /// describe the caller of the method. It may be useful to find the component that makes an -/// unnecessary high amount of Web3 calls. Implementations are advices to count invocations +/// unnecessary high amount of Web3 calls. 
Implementations are advised to count invocations /// per component and expose them to Prometheus. #[async_trait] -pub trait EthInterface: Sync + Send { +pub trait EthInterface: 'static + Sync + Send + fmt::Debug { /// Returns the nonce of the provided account at the specified block. async fn nonce_at_for_account( &self, @@ -70,7 +71,7 @@ pub trait EthInterface: Sync + Send { async fn block_number(&self, component: &'static str) -> Result<U64, Error>; /// Sends a transaction to the Ethereum network. - async fn send_raw_tx(&self, tx: Vec<u8>) -> Result<H256, Error>; + async fn send_raw_tx(&self, tx: RawTransactionBytes) -> Result<H256, Error>; /// Fetches the transaction status for a specified transaction hash. /// @@ -108,22 +109,8 @@ pub trait EthInterface: Sync + Send { async fn eth_balance(&self, address: Address, component: &'static str) -> Result<U256, Error>; /// Invokes a function on a contract specified by `contract_address` / `contract_abi` using `eth_call`. - #[allow(clippy::too_many_arguments)] - async fn call_contract_function<R, A, B, P>( - &self, - func: &str, - params: P, - from: A, - options: Options, - block: B, - contract_address: Address, - contract_abi: ethabi::Contract, - ) -> Result<R, Error> - where - R: Detokenize + Unpin, - A: Into<Option<Address>> + Send, - B: Into<Option<BlockId>> + Send, - P: Tokenize + Send; + async fn call_contract_function(&self, call: ContractCall) + -> Result<Vec<ethabi::Token>, Error>; /// Returns the logs for the specified filter. async fn logs(&self, filter: Filter, component: &'static str) -> Result<Vec<Log>, Error>; @@ -131,15 +118,18 @@ pub trait EthInterface: Sync + Send { /// Returns the block header for the specified block number or hash. async fn block( &self, - block_id: String, + block_id: BlockId, component: &'static str, ) -> Result<Option<Block<H256>>, Error>; } +#[cfg(test)] +static_assertions::assert_obj_safe!(EthInterface); + /// An extension of `EthInterface` trait, which is used to perform queries that are bound to /// a certain contract and account. /// -/// THe example use cases for this trait would be: +/// The example use cases for this trait would be: /// - An operator that sends transactions and interacts with zkSync contract. /// - A wallet implementation in the SDK that is tied to a user's account. /// @@ -149,10 +139,10 @@ pub trait EthInterface: Sync + Send { /// implementation that invokes `contract` / `contract_addr` / `sender_account` methods. #[async_trait] pub trait BoundEthInterface: EthInterface { - /// ABI of the contract that is used by the implementor. + /// ABI of the contract that is used by the implementer. fn contract(&self) -> &ethabi::Contract; - /// Address of the contract that is used by the implementor. + /// Address of the contract that is used by the implementer. fn contract_addr(&self) -> H160; /// Chain ID of the L1 network the client is *configured* to connect to. @@ -225,40 +215,27 @@ pub trait BoundEthInterface: EthInterface { } /// Invokes a function on a contract specified by `Self::contract()` / `Self::contract_addr()`. 
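The hunk below rewires `call_main_contract_function` around the `CallFunctionArgs` builder that this diff adds to `types.rs`. A hedged usage sketch of the builder-based API; the token, owner, and spender values are placeholders, and `anyhow` is assumed for error plumbing:

```rust
use zksync_eth_client::{CallFunctionArgs, EthInterface};
use zksync_types::{ethabi, web3::contract::tokens::Detokenize, Address, U256};

async fn erc20_allowance(
    client: &dyn EthInterface,
    token: Address,
    owner: Address,
    spender: Address,
    erc20_abi: ethabi::Contract,
) -> anyhow::Result<U256> {
    // Describe the call first, then bind it to a concrete contract...
    let call = CallFunctionArgs::new("allowance", (owner, spender)).for_contract(token, erc20_abi);
    // ...and decode the raw `ethabi` tokens that the reworked trait now returns.
    let tokens = client.call_contract_function(call).await?;
    Ok(U256::from_tokens(tokens)?)
}
```

Separating the call description (`CallFunctionArgs`) from contract binding (`for_contract`) is what removes the generic parameters from the trait method and lets `EthInterface` stay object-safe, as the `assert_obj_safe!` check above verifies.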
- async fn call_main_contract_function<R, A, B, P>( + async fn call_main_contract_function( &self, - func: &str, - params: P, - from: A, - options: Options, - block: B, - ) -> Result<R, Error> - where - R: Detokenize + Unpin, - A: Into<Option<Address>> + Send, - P: Tokenize + Send, - B: Into<Option<BlockId>> + Send, - { - self.call_contract_function( - func, - params, - from, - options, - block, - self.contract_addr(), - self.contract().clone(), - ) - .await + args: CallFunctionArgs, + ) -> Result<Vec<ethabi::Token>, Error> { + let args = args.for_contract(self.contract_addr(), self.contract().clone()); + self.call_contract_function(args).await } /// Encodes a function using the `Self::contract()` ABI. - fn encode_tx_data<P: Tokenize>(&self, func: &str, params: P) -> Vec<u8> { + /// + /// `params` are tokenized parameters of the function. Most of the time, you can use + /// [`Tokenize`][tokenize] trait to convert the parameters into tokens. + /// + /// [tokenize]: https://docs.rs/web3/latest/web3/contract/tokens/trait.Tokenize.html + fn encode_tx_data(&self, func: &str, params: Vec<ethabi::Token>) -> Vec<u8> { let f = self .contract() .function(func) .expect("failed to get function parameters"); - f.encode_input(&params.into_tokens()) + f.encode_input(&params) .expect("failed to encode parameters") } } diff --git a/core/lib/eth_client/src/types.rs b/core/lib/eth_client/src/types.rs index bef3b63b69a..71d9473b661 100644 --- a/core/lib/eth_client/src/types.rs +++ b/core/lib/eth_client/src/types.rs @@ -1,9 +1,81 @@ -// External uses use zksync_types::web3::{ + contract::{ + tokens::{Detokenize, Tokenize}, + Error as ContractError, Options, + }, ethabi, - types::{TransactionReceipt, H256, U256}, + types::{Address, BlockId, TransactionReceipt, H256, U256}, }; +/// Wrapper for `Vec<ethabi::Token>` that doesn't wrap them in an additional array in `Tokenize` implementation. +#[derive(Debug)] +pub(crate) struct RawTokens(pub Vec<ethabi::Token>); + +impl Tokenize for RawTokens { + fn into_tokens(self) -> Vec<ethabi::Token> { + self.0 + } +} + +impl Detokenize for RawTokens { + fn from_tokens(tokens: Vec<ethabi::Token>) -> Result<Self, ContractError> { + Ok(Self(tokens)) + } +} + +/// Arguments for calling a function in an unspecified Ethereum smart contract. +#[derive(Debug)] +pub struct CallFunctionArgs { + pub(crate) name: String, + pub(crate) from: Option<Address>
, + pub(crate) options: Options, + pub(crate) block: Option<BlockId>, + pub(crate) params: RawTokens, +} + +impl CallFunctionArgs { + pub fn new(name: &str, params: impl Tokenize) -> Self { + Self { + name: name.to_owned(), + from: None, + options: Options::default(), + block: None, + params: RawTokens(params.into_tokens()), + } + } + + pub fn with_sender(mut self, from: Address) -> Self { + self.from = Some(from); + self + } + + pub fn with_block(mut self, block: BlockId) -> Self { + self.block = Some(block); + self + } + + pub fn for_contract( + self, + contract_address: Address, + contract_abi: ethabi::Contract, + ) -> ContractCall { + ContractCall { + contract_address, + contract_abi, + inner: self, + } + } +} + +/// Information sufficient for calling a function in a specific Ethereum smart contract. Instantiated +/// using [`CallFunctionArgs::for_contract()`]. +#[derive(Debug)] +pub struct ContractCall { + pub(crate) contract_address: Address, + pub(crate) contract_abi: ethabi::Contract, + pub(crate) inner: CallFunctionArgs, +} + /// Common error type exposed by the crate. #[derive(Debug, thiserror::Error)] pub enum Error { @@ -24,11 +96,29 @@ pub enum Error { WrongFeeProvided(U256, U256), } +/// Raw transaction bytes. +#[derive(Debug, Clone, PartialEq)] +pub struct RawTransactionBytes(pub(crate) Vec<u8>); + +impl RawTransactionBytes { + /// Converts raw transaction bytes. It is the caller's responsibility to ensure that these bytes + /// were actually obtained by signing a transaction. + pub fn new_unchecked(bytes: Vec<u8>) -> Self { + Self(bytes) + } +} + +impl AsRef<[u8]> for RawTransactionBytes { + fn as_ref(&self) -> &[u8] { + &self.0 + } +} + /// Representation of a signed transaction. #[derive(Debug, Clone, PartialEq)] pub struct SignedCallResult { /// Raw transaction bytes. - pub raw_tx: Vec<u8>, + pub raw_tx: RawTransactionBytes, /// `max_priority_fee_per_gas` field of transaction (EIP1559). pub max_priority_fee_per_gas: U256, /// `max_fee_per_gas` field of transaction (EIP1559). @@ -69,3 +159,21 @@ pub struct FailureInfo { /// Gas limit of the transaction. 
pub gas_limit: U256, } + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn raw_tokens_are_compatible_with_actual_call() { + let vk_contract = zksync_contracts::verifier_contract(); + let args = CallFunctionArgs::new("verificationKeyHash", ()); + let func = vk_contract.function(&args.name).unwrap(); + func.encode_input(&args.params.into_tokens()).unwrap(); + + let output_tokens = vec![ethabi::Token::FixedBytes(vec![1; 32])]; + let RawTokens(output_tokens) = RawTokens::from_tokens(output_tokens).unwrap(); + let hash = H256::from_tokens(output_tokens).unwrap(); + assert_eq!(hash, H256::repeat_byte(1)); + } +} diff --git a/core/lib/eth_signer/src/json_rpc_signer.rs b/core/lib/eth_signer/src/json_rpc_signer.rs index da81ff51dba..38e9a04e19a 100644 --- a/core/lib/eth_signer/src/json_rpc_signer.rs +++ b/core/lib/eth_signer/src/json_rpc_signer.rs @@ -1,13 +1,15 @@ -use crate::error::{RpcSignerError, SignerError}; -use crate::json_rpc_signer::messages::JsonRpcRequest; -use crate::raw_ethereum_tx::TransactionParameters; -use crate::EthereumSigner; - use jsonrpc_core::types::response::Output; -use zksync_types::tx::primitives::PackedEthSignature; -use zksync_types::{Address, EIP712TypedStructure, Eip712Domain, H256}; - use serde_json::Value; +use zksync_types::{ + tx::primitives::PackedEthSignature, Address, EIP712TypedStructure, Eip712Domain, H256, +}; + +use crate::{ + error::{RpcSignerError, SignerError}, + json_rpc_signer::messages::JsonRpcRequest, + raw_ethereum_tx::TransactionParameters, + EthereumSigner, +}; pub fn is_signature_from_address( signature: &PackedEthSignature, @@ -85,7 +87,7 @@ impl EthereumSigner for JsonRpcSigner { } } - /// Signs typed struct using ethereum private key by EIP-712 signature standard. + /// Signs typed struct using Ethereum private key by EIP-712 signature standard. /// Result of this function is the equivalent of RPC calling `eth_signTypedData`. async fn sign_typed_data( &self, @@ -169,7 +171,7 @@ impl JsonRpcSigner { None => AddressOrIndex::Index(0), }; - // EthereumSigner can support many different addresses, + // `EthereumSigner` can support many different addresses, // we define only the one we need by the index // of receiving from the server or by the address itself. signer.detect_address(address_or_index).await?; @@ -192,7 +194,7 @@ impl JsonRpcSigner { self.address.ok_or(SignerError::DefineAddress) } - /// Specifies the Ethreum address which sets the address for which all other requests will be processed. + /// Specifies the Ethereum address which sets the address for which all other requests will be processed. /// If the address has already been set, then it will all the same change to a new one. pub async fn detect_address( &mut self, @@ -325,13 +327,14 @@ impl JsonRpcSigner { } mod messages { - use crate::raw_ethereum_tx::TransactionParameters; use hex::encode; use serde::{Deserialize, Serialize}; use zksync_types::{ eip712_signature::utils::get_eip712_json, Address, EIP712TypedStructure, Eip712Domain, }; + use crate::raw_ethereum_tx::TransactionParameters; + #[derive(Debug, Serialize, Deserialize)] pub struct JsonRpcRequest { pub id: String, @@ -376,7 +379,7 @@ mod messages { Self::create("eth_sign", params) } - /// Signs typed struct using ethereum private key by EIP-712 signature standard. + /// Signs typed struct using Ethereum private key by EIP-712 signature standard. /// The address to sign with must be unlocked. 
pub fn sign_typed_data( address: Address, @@ -429,7 +432,6 @@ mod messages { #[cfg(test)] mod tests { - use crate::raw_ethereum_tx::TransactionParameters; use actix_web::{ post, web::{self, Data}, @@ -439,11 +441,10 @@ mod tests { use jsonrpc_core::{Failure, Id, Output, Success, Version}; use parity_crypto::publickey::{Generator, KeyPair, Random}; use serde_json::json; - use zksync_types::{tx::primitives::PackedEthSignature, Address}; use super::{is_signature_from_address, messages::JsonRpcRequest}; - use crate::{EthereumSigner, JsonRpcSigner}; + use crate::{raw_ethereum_tx::TransactionParameters, EthereumSigner, JsonRpcSigner}; #[post("/")] async fn index(req: web::Json, state: web::Data) -> impl Responder { diff --git a/core/lib/eth_signer/src/lib.rs b/core/lib/eth_signer/src/lib.rs index ce4540c151b..4c053c1ba7c 100644 --- a/core/lib/eth_signer/src/lib.rs +++ b/core/lib/eth_signer/src/lib.rs @@ -1,11 +1,12 @@ use async_trait::async_trait; use error::SignerError; -use zksync_types::tx::primitives::PackedEthSignature; -use zksync_types::{Address, EIP712TypedStructure, Eip712Domain}; - -pub use crate::raw_ethereum_tx::TransactionParameters; pub use json_rpc_signer::JsonRpcSigner; pub use pk_signer::PrivateKeySigner; +use zksync_types::{ + tx::primitives::PackedEthSignature, Address, EIP712TypedStructure, Eip712Domain, +}; + +pub use crate::raw_ethereum_tx::TransactionParameters; pub mod error; pub mod json_rpc_signer; @@ -13,7 +14,7 @@ pub mod pk_signer; pub mod raw_ethereum_tx; #[async_trait] -pub trait EthereumSigner: Send + Sync + Clone { +pub trait EthereumSigner: 'static + Send + Sync + Clone { async fn sign_message(&self, message: &[u8]) -> Result; async fn sign_typed_data( &self, diff --git a/core/lib/eth_signer/src/pk_signer.rs b/core/lib/eth_signer/src/pk_signer.rs index 4a5bfb838de..0ea68e2a6df 100644 --- a/core/lib/eth_signer/src/pk_signer.rs +++ b/core/lib/eth_signer/src/pk_signer.rs @@ -1,11 +1,11 @@ use secp256k1::SecretKey; - -use zksync_types::tx::primitives::PackedEthSignature; -use zksync_types::{Address, EIP712TypedStructure, Eip712Domain, H256}; +use zksync_types::{ + tx::primitives::PackedEthSignature, Address, EIP712TypedStructure, Eip712Domain, H256, +}; use crate::{ raw_ethereum_tx::{Transaction, TransactionParameters}, - {EthereumSigner, SignerError}, + EthereumSigner, SignerError, }; #[derive(Clone)] @@ -41,7 +41,7 @@ impl EthereumSigner for PrivateKeySigner { Ok(signature) } - /// Signs typed struct using ethereum private key by EIP-712 signature standard. + /// Signs typed struct using Ethereum private key by EIP-712 signature standard. /// Result of this function is the equivalent of RPC calling `eth_signTypedData`. 
async fn sign_typed_data( &self, @@ -62,7 +62,7 @@ impl EthereumSigner for PrivateKeySigner { let key = SecretKey::from_slice(self.private_key.as_bytes()).unwrap(); // According to the code in web3 - // We should use max_fee_per_gas as gas_price if we use EIP1559 + // We should use `max_fee_per_gas` as `gas_price` if we use EIP1559 let gas_price = raw_tx.max_fee_per_gas; let max_priority_fee_per_gas = raw_tx.max_priority_fee_per_gas; @@ -86,11 +86,11 @@ impl EthereumSigner for PrivateKeySigner { #[cfg(test)] mod test { - use super::PrivateKeySigner; - use crate::raw_ethereum_tx::TransactionParameters; - use crate::EthereumSigner; use zksync_types::{H160, H256, U256, U64}; + use super::PrivateKeySigner; + use crate::{raw_ethereum_tx::TransactionParameters, EthereumSigner}; + #[tokio::test] async fn test_generating_signed_raw_transaction() { let private_key = H256::from([5; 32]); @@ -113,7 +113,7 @@ mod test { .await .unwrap(); assert_ne!(raw_tx.len(), 1); - // precalculated signature with right algorithm implementation + // pre-calculated signature with right algorithm implementation let precalculated_raw_tx: Vec = vec![ 1, 248, 100, 130, 1, 14, 1, 2, 128, 148, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 128, 131, 1, 2, 3, 192, 1, 160, 98, 201, 238, 158, 215, 98, 23, 231, diff --git a/core/lib/eth_signer/src/raw_ethereum_tx.rs b/core/lib/eth_signer/src/raw_ethereum_tx.rs index fcee1349445..03d085d9133 100644 --- a/core/lib/eth_signer/src/raw_ethereum_tx.rs +++ b/core/lib/eth_signer/src/raw_ethereum_tx.rs @@ -8,12 +8,16 @@ //! We can refactor this code and adapt it for our needs better, but I prefer to reuse as much code as we can. //! In the case where it will be possible to use only the web3 library without copy-paste, the changes will be small and simple //! Link to @Deniallugo's PR to web3: https://github.com/tomusdrw/rust-web3/pull/630 + use rlp::RlpStream; -use zksync_types::web3::{ - signing::{self, Signature}, - types::{AccessList, SignedTransaction}, +use zksync_types::{ + ethabi::Address, + web3::{ + signing::{self, Signature}, + types::{AccessList, SignedTransaction}, + }, + U256, U64, }; -use zksync_types::{ethabi::Address, U256, U64}; const LEGACY_TX_ID: u64 = 0; const ACCESSLISTS_TX_ID: u64 = 1; @@ -100,7 +104,7 @@ impl Transaction { let list_size = if signature.is_some() { 11 } else { 8 }; stream.begin_list(list_size); - // append chain_id. from EIP-2930: chainId is defined to be an integer of arbitrary size. + // append `chain_id`. from EIP-2930: `chainId` is defined to be an integer of arbitrary size. stream.append(&chain_id); self.rlp_append_legacy(&mut stream); @@ -119,7 +123,7 @@ impl Transaction { let list_size = if signature.is_some() { 12 } else { 9 }; stream.begin_list(list_size); - // append chain_id. from EIP-2930: chainId is defined to be an integer of arbitrary size. + // append `chain_id`. from EIP-2930: `chainId` is defined to be an integer of arbitrary size. stream.append(&chain_id); stream.append(&self.nonce); diff --git a/core/lib/health_check/src/lib.rs b/core/lib/health_check/src/lib.rs index fac8ec46dbb..12bb292bc85 100644 --- a/core/lib/health_check/src/lib.rs +++ b/core/lib/health_check/src/lib.rs @@ -1,11 +1,10 @@ -use futures::{future, FutureExt}; -use serde::Serialize; -use tokio::sync::watch; - use std::{collections::HashMap, thread}; -/// Public re-export for other crates to be able to implement the interface. +// Public re-export for other crates to be able to implement the interface. 
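A sketch of signing an EIP-1559 transaction with `PrivateKeySigner` from the `eth_signer` changes above. The `TransactionParameters` field set is assumed to mirror the `web3` struct it was copied from (per the module docs in `raw_ethereum_tx.rs`), the `sign_transaction` method name is assumed from the trait, and all values are placeholders:

```rust
use zksync_eth_signer::{EthereumSigner, PrivateKeySigner, TransactionParameters};
use zksync_types::{H160, H256, U256, U64};

async fn sign_eip1559_tx() -> Vec<u8> {
    // Same throwaway private key as in the pk_signer tests.
    let signer = PrivateKeySigner::new(H256::from([5; 32]));
    let params = TransactionParameters {
        nonce: U256::zero(),
        to: Some(H160::zero()),
        gas: 21_000u64.into(),
        gas_price: None,
        value: U256::zero(),
        data: vec![],
        chain_id: 270,
        transaction_type: Some(U64::from(2)), // EIP-1559 transaction
        access_list: None,
        max_fee_per_gas: 1_000_000_000u64.into(),
        max_priority_fee_per_gas: 1_000_000_000u64.into(),
    };
    // RLP-encodes and signs the transaction, returning bytes ready for `send_raw_tx`.
    signer.sign_transaction(params).await.unwrap()
}
```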
pub use async_trait::async_trait; +use futures::{future, FutureExt}; +use serde::Serialize; +use tokio::sync::watch; /// Health status returned as a part of `Health`. #[derive(Debug, Clone, Copy, PartialEq, Serialize)] diff --git a/core/lib/mempool/src/mempool_store.rs b/core/lib/mempool/src/mempool_store.rs index f900523517c..51a8d708a74 100644 --- a/core/lib/mempool/src/mempool_store.rs +++ b/core/lib/mempool/src/mempool_store.rs @@ -1,10 +1,11 @@ use std::collections::{hash_map, BTreeSet, HashMap, HashSet}; -use crate::types::{AccountTransactions, L2TxFilter, MempoolScore}; use zksync_types::{ l1::L1Tx, l2::L2Tx, Address, ExecuteTransactionCommon, Nonce, PriorityOpId, Transaction, }; +use crate::types::{AccountTransactions, L2TxFilter, MempoolScore}; + #[derive(Debug)] pub struct MempoolInfo { pub stashed_accounts: Vec
<Address>, @@ -29,7 +30,7 @@ pub struct MempoolStore { /// Next priority operation next_priority_id: PriorityOpId, stashed_accounts: Vec<Address>
, - /// Number of l2 transactions in the mempool. + /// Number of L2 transactions in the mempool. size: u64, capacity: u64, } diff --git a/core/lib/mempool/src/tests.rs b/core/lib/mempool/src/tests.rs index cb149752e2d..656d90c63d1 100644 --- a/core/lib/mempool/src/tests.rs +++ b/core/lib/mempool/src/tests.rs @@ -1,12 +1,18 @@ +use std::{ + collections::{HashMap, HashSet}, + iter::FromIterator, +}; + +use zksync_types::{ + fee::Fee, + helpers::unix_timestamp_ms, + l1::{OpProcessingType, PriorityQueueType}, + l2::L2Tx, + Address, Execute, ExecuteTransactionCommon, L1TxCommonData, Nonce, PriorityOpId, Transaction, + H256, U256, +}; + use crate::{mempool_store::MempoolStore, types::L2TxFilter}; -use std::collections::{HashMap, HashSet}; -use std::iter::FromIterator; -use zksync_types::fee::Fee; -use zksync_types::helpers::unix_timestamp_ms; -use zksync_types::l1::{OpProcessingType, PriorityQueueType}; -use zksync_types::l2::L2Tx; -use zksync_types::{Address, ExecuteTransactionCommon, L1TxCommonData, PriorityOpId, H256, U256}; -use zksync_types::{Execute, Nonce, Transaction}; #[test] fn basic_flow() { @@ -39,7 +45,7 @@ fn basic_flow() { (account0, 3) ); assert_eq!(mempool.next_transaction(&L2TxFilter::default()), None); - // unclog second account and insert more txns + // unclog second account and insert more transactions mempool.insert( vec![gen_l2_tx(account1, Nonce(0)), gen_l2_tx(account0, Nonce(3))], HashMap::new(), @@ -238,13 +244,13 @@ fn mempool_size() { fn filtering() { // Filter to find transactions with non-zero `gas_per_pubdata` values. let filter_non_zero = L2TxFilter { - l1_gas_price: 0u64, + fee_input: Default::default(), fee_per_gas: 0u64, gas_per_pubdata: 1u32, }; // No-op filter that fetches any transaction. let filter_zero = L2TxFilter { - l1_gas_price: 0u64, + fee_input: Default::default(), fee_per_gas: 0u64, gas_per_pubdata: 0u32, }; @@ -282,13 +288,13 @@ fn filtering() { #[test] fn stashed_accounts() { let filter_non_zero = L2TxFilter { - l1_gas_price: 0u64, + fee_input: Default::default(), fee_per_gas: 0u64, gas_per_pubdata: 1u32, }; // No-op filter that fetches any transaction. let filter_zero = L2TxFilter { - l1_gas_price: 0u64, + fee_input: Default::default(), fee_per_gas: 0u64, gas_per_pubdata: 0u32, }; diff --git a/core/lib/mempool/src/types.rs b/core/lib/mempool/src/types.rs index 130f8ad0016..99a63ffd08e 100644 --- a/core/lib/mempool/src/types.rs +++ b/core/lib/mempool/src/types.rs @@ -1,8 +1,8 @@ -use std::cmp::Ordering; -use std::collections::HashMap; -use zksync_types::fee::Fee; -use zksync_types::l2::L2Tx; -use zksync_types::{Address, Nonce, Transaction, U256}; +use std::{cmp::Ordering, collections::HashMap}; + +use zksync_types::{ + fee::Fee, fee_model::BatchFeeInput, l2::L2Tx, Address, Nonce, Transaction, U256, +}; /// Pending mempool transactions of account #[derive(Debug)] @@ -130,8 +130,8 @@ pub(crate) struct InsertionMetadata { /// criteria for transaction it wants to fetch. #[derive(Debug, Default, PartialEq, Eq)] pub struct L2TxFilter { - /// L1 gas price. - pub l1_gas_price: u64, + /// Batch fee model input. It typically includes things like L1 gas price, L2 fair fee, etc. + pub fee_input: BatchFeeInput, /// Effective fee price for the transaction. The price of 1 gas in wei. pub fee_per_gas: u64, /// Effective pubdata price in gas for transaction. The number of gas per 1 pubdata byte. @@ -145,9 +145,9 @@ mod tests { /// Checks the filter logic. 
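For the filter tests below: a minimal construction sketch of the reworked `L2TxFilter`, which now carries a full `BatchFeeInput` instead of a bare `l1_gas_price`. The numeric values are placeholders, and `L2TxFilter` is assumed to be re-exported from the mempool crate:

```rust
use zksync_mempool::L2TxFilter;
use zksync_types::fee_model::BatchFeeInput;

fn example_filter() -> L2TxFilter {
    L2TxFilter {
        // A full fee model input replaces the old bare `l1_gas_price` field.
        fee_input: BatchFeeInput::sensible_l1_pegged_default(),
        fee_per_gas: 100,     // wei per unit of gas
        gas_per_pubdata: 800, // gas per byte of pubdata
    }
}
```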
#[test] fn filter() { - fn filter(l1_gas_price: u64, fee_per_gas: u64, gas_per_pubdata: u32) -> L2TxFilter { + fn filter(fee_per_gas: u64, gas_per_pubdata: u32) -> L2TxFilter { L2TxFilter { - l1_gas_price, + fee_input: BatchFeeInput::sensible_l1_pegged_default(), fee_per_gas, gas_per_pubdata, } @@ -168,31 +168,31 @@ mod tests { }, }; - let noop_filter = filter(0, 0, 0); + let noop_filter = filter(0, 0); assert!( score.matches_filter(&noop_filter), "Noop filter should always match" ); - let max_gas_filter = filter(0, MAX_FEE_PER_GAS, 0); + let max_gas_filter = filter(MAX_FEE_PER_GAS, 0); assert!( score.matches_filter(&max_gas_filter), "Correct max gas should be accepted" ); - let pubdata_filter = filter(0, 0, GAS_PER_PUBDATA_LIMIT); + let pubdata_filter = filter(0, GAS_PER_PUBDATA_LIMIT); assert!( score.matches_filter(&pubdata_filter), "Correct pubdata price should be accepted" ); - let decline_gas_filter = filter(0, MAX_FEE_PER_GAS + 1, 0); + let decline_gas_filter = filter(MAX_FEE_PER_GAS + 1, 0); assert!( !score.matches_filter(&decline_gas_filter), "Incorrect max gas should be rejected" ); - let decline_pubdata_filter = filter(0, 0, GAS_PER_PUBDATA_LIMIT + 1); + let decline_pubdata_filter = filter(0, GAS_PER_PUBDATA_LIMIT + 1); assert!( !score.matches_filter(&decline_pubdata_filter), "Incorrect pubdata price should be rejected" diff --git a/core/lib/merkle_tree/Cargo.toml b/core/lib/merkle_tree/Cargo.toml index 9a6c4a6b65d..ed1669519b9 100644 --- a/core/lib/merkle_tree/Cargo.toml +++ b/core/lib/merkle_tree/Cargo.toml @@ -10,7 +10,7 @@ keywords = ["blockchain", "zksync"] categories = ["cryptography"] [dependencies] -vise = { git = "https://github.com/matter-labs/vise.git", version = "0.1.0", rev = "dd05139b76ab0843443ab3ff730174942c825dae" } +vise = { git = "https://github.com/matter-labs/vise.git", version = "0.1.0", rev = "1c9cc500e92cf9ea052b230e114a6f9cce4fb2c1" } zksync_types = { path = "../types" } zksync_crypto = { path = "../crypto" } zksync_storage = { path = "../storage", default-features = false } diff --git a/core/lib/merkle_tree/README.md b/core/lib/merkle_tree/README.md index edf2ec20c41..b3c8a31c998 100644 --- a/core/lib/merkle_tree/README.md +++ b/core/lib/merkle_tree/README.md @@ -111,7 +111,7 @@ following order of RocksDB storage consumption at the end of the test: [gauge] rocksdb.total_mem_table_size{db=merkle_tree, cf=stale_keys} = 19924992 bytes ``` -I.e., pruning reduces RocksDB size ~8.7 times in this case. +I.e., pruning reduces RocksDB size approximately 8.7 times in this case. [jellyfish merkle tree]: https://developers.diem.com/papers/jellyfish-merkle-tree/2021-01-14.pdf [`insta`]: https://docs.rs/insta/ diff --git a/core/lib/merkle_tree/examples/loadtest/main.rs b/core/lib/merkle_tree/examples/loadtest/main.rs index b598a579f6b..185ae0543f9 100644 --- a/core/lib/merkle_tree/examples/loadtest/main.rs +++ b/core/lib/merkle_tree/examples/loadtest/main.rs @@ -3,27 +3,27 @@ //! Should be compiled with the release profile, otherwise hashing and other ops would be //! prohibitively slow. 
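Since `L2TxFilter` now carries a full `BatchFeeInput` rather than a bare L1 gas price, the filters in the tests above are built from the fee model input. A minimal sketch of the new construction; the `zksync_mempool` import path is an assumption (inside the crate the type lives at `crate::types::L2TxFilter`):

```rust
use zksync_mempool::L2TxFilter; // assumed re-export; adjust to the actual crate path
use zksync_types::fee_model::BatchFeeInput;

fn main() {
    // Admit only transactions that pay a non-zero gas-per-pubdata price.
    let filter_non_zero = L2TxFilter {
        fee_input: BatchFeeInput::sensible_l1_pegged_default(),
        fee_per_gas: 0,
        gas_per_pubdata: 1,
    };
    // No-op filter: `BatchFeeInput` implements `Default`, so the whole filter does too.
    let filter_zero = L2TxFilter::default();
    assert_ne!(filter_non_zero, filter_zero);
}
```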
-use clap::Parser; -use rand::{rngs::StdRng, seq::IteratorRandom, SeedableRng}; -use tempfile::TempDir; -use tracing_subscriber::EnvFilter; - use std::{ thread, time::{Duration, Instant}, }; +use clap::Parser; +use rand::{rngs::StdRng, seq::IteratorRandom, SeedableRng}; +use tempfile::TempDir; +use tracing_subscriber::EnvFilter; use zksync_crypto::hasher::blake2::Blake2Hasher; use zksync_merkle_tree::{ - Database, HashTree, MerkleTree, MerkleTreePruner, PatchSet, RocksDBWrapper, TreeInstruction, + Database, HashTree, MerkleTree, MerkleTreePruner, PatchSet, RocksDBWrapper, TreeEntry, + TreeInstruction, }; use zksync_storage::{RocksDB, RocksDBOptions}; use zksync_types::{AccountTreeId, Address, StorageKey, H256, U256}; -mod batch; - use crate::batch::WithBatching; +mod batch; + /// CLI for load-testing for the Merkle tree implementation. #[derive(Debug, Parser)] #[command(author, version, about, long_about = None)] @@ -135,19 +135,22 @@ impl Cli { next_key_idx += new_keys.len() as u64; next_value_idx += (new_keys.len() + updated_indices.len()) as u64; - let values = (next_value_idx..).map(H256::from_low_u64_be); let updated_keys = Self::generate_keys(updated_indices.into_iter()); - let kvs = new_keys.into_iter().chain(updated_keys).zip(values); + let kvs = new_keys + .into_iter() + .chain(updated_keys) + .zip(next_value_idx..); + let kvs = kvs.map(|(key, idx)| { + // The assigned leaf indices here are not always correct, but it's OK for load test purposes. + TreeEntry::new(key, idx, H256::from_low_u64_be(idx)) + }); tracing::info!("Processing block #{version}"); let start = Instant::now(); let root_hash = if self.proofs { - let reads = Self::generate_keys(read_indices.into_iter()) - .map(|key| (key, TreeInstruction::Read)); - let instructions = kvs - .map(|(key, hash)| (key, TreeInstruction::Write(hash))) - .chain(reads) - .collect(); + let reads = + Self::generate_keys(read_indices.into_iter()).map(TreeInstruction::Read); + let instructions = kvs.map(TreeInstruction::Write).chain(reads).collect(); let output = tree.extend_with_proofs(instructions); output.root_hash().unwrap() } else { @@ -160,7 +163,7 @@ impl Cli { tracing::info!("Verifying tree consistency..."); let start = Instant::now(); - tree.verify_consistency(self.commit_count - 1) + tree.verify_consistency(self.commit_count - 1, false) .expect("tree consistency check failed"); let elapsed = start.elapsed(); tracing::info!("Verified tree consistency in {elapsed:?}"); diff --git a/core/lib/merkle_tree/examples/recovery.rs b/core/lib/merkle_tree/examples/recovery.rs index af16ed05baf..1b4e634e567 100644 --- a/core/lib/merkle_tree/examples/recovery.rs +++ b/core/lib/merkle_tree/examples/recovery.rs @@ -1,16 +1,15 @@ //! Tree recovery load test. 
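The load test now builds `TreeEntry` values with explicit leaf indices and wraps them into `TreeInstruction`s directly. A condensed, self-contained version of that flow against an in-memory `PatchSet`; index assignment here is the simple sequential scheme, and (as the comment in the load test notes) correctness of the indices is not essential for load-testing purposes:

```rust
use zksync_merkle_tree::{Key, MerkleTree, PatchSet, TreeEntry, TreeInstruction, ValueHash};

fn main() {
    let mut tree = MerkleTree::new(PatchSet::default());
    // Caller-assigned, 1-based leaf indices in insertion order.
    let writes = (0_u64..10).map(|i| {
        TreeInstruction::Write(TreeEntry::new(Key::from(i), i + 1, ValueHash::from_low_u64_be(i)))
    });
    // Reads are now `TreeInstruction::Read(key)` instead of `(key, TreeInstruction::Read)`.
    let instructions: Vec<_> = writes.chain([TreeInstruction::Read(Key::from(0))]).collect();
    let output = tree.extend_with_proofs(instructions);
    let _root_hash = output.root_hash().expect("batch contains writes");
}
```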
+use std::time::Instant; + use clap::Parser; use rand::{rngs::StdRng, Rng, SeedableRng}; use tempfile::TempDir; use tracing_subscriber::EnvFilter; - -use std::time::Instant; - use zksync_crypto::hasher::blake2::Blake2Hasher; use zksync_merkle_tree::{ - recovery::{MerkleTreeRecovery, RecoveryEntry}, - HashTree, Key, PatchSet, PruneDatabase, RocksDBWrapper, ValueHash, + recovery::MerkleTreeRecovery, HashTree, Key, MerkleTree, PatchSet, PruneDatabase, + RocksDBWrapper, TreeEntry, ValueHash, }; use zksync_storage::{RocksDB, RocksDBOptions}; @@ -94,7 +93,7 @@ impl Cli { .map(|_| { last_leaf_index += 1; if self.random { - RecoveryEntry { + TreeEntry { key: Key::from(rng.gen::<[u8; 32]>()), value: ValueHash::zero(), leaf_index: last_leaf_index, @@ -102,7 +101,7 @@ impl Cli { } else { last_key += key_step - Key::from(rng.gen::()); // ^ Increases the key by a random increment close to `key` step with some randomness. - RecoveryEntry { + TreeEntry { key: last_key, value: ValueHash::zero(), leaf_index: last_leaf_index, @@ -121,13 +120,13 @@ impl Cli { ); } - let tree = recovery.finalize(); + let tree = MerkleTree::new(recovery.finalize()); tracing::info!( "Recovery finished in {:?}; verifying consistency...", recovery_started_at.elapsed() ); let started_at = Instant::now(); - tree.verify_consistency(recovered_version).unwrap(); + tree.verify_consistency(recovered_version, true).unwrap(); tracing::info!("Verified consistency in {:?}", started_at.elapsed()); } } diff --git a/core/lib/merkle_tree/src/consistency.rs b/core/lib/merkle_tree/src/consistency.rs index 85896bad1ae..7b30e8b44e0 100644 --- a/core/lib/merkle_tree/src/consistency.rs +++ b/core/lib/merkle_tree/src/consistency.rs @@ -1,9 +1,9 @@ //! Consistency verification for the Merkle tree. -use rayon::prelude::*; - use std::sync::atomic::{AtomicU64, Ordering}; +use rayon::prelude::*; + use crate::{ errors::DeserializeError, hasher::{HashTree, HasherWithStats}, @@ -69,10 +69,17 @@ pub enum ConsistencyError { impl MerkleTree { /// Verifies the internal tree consistency as stored in the database. /// + /// If `validate_indices` flag is set, it will be checked that indices for all tree leaves are unique + /// and are sequentially assigned starting from 1. + /// /// # Errors /// /// Returns an error (the first encountered one if there are multiple). - pub fn verify_consistency(&self, version: u64) -> Result<(), ConsistencyError> { + pub fn verify_consistency( + &self, + version: u64, + validate_indices: bool, + ) -> Result<(), ConsistencyError> { let manifest = self.db.try_manifest()?; let manifest = manifest.ok_or(ConsistencyError::MissingVersion(version))?; if version >= manifest.version_count { @@ -91,16 +98,19 @@ impl MerkleTree { // We want to perform a depth-first walk of the tree in order to not keep // much in memory. 
let root_key = Nibbles::EMPTY.with_version(version); - let leaf_data = LeafConsistencyData::new(leaf_count); - self.validate_node(&root_node, root_key, &leaf_data)?; - leaf_data.validate_count() + let leaf_data = validate_indices.then(|| LeafConsistencyData::new(leaf_count)); + self.validate_node(&root_node, root_key, leaf_data.as_ref())?; + if let Some(leaf_data) = leaf_data { + leaf_data.validate_count()?; + } + Ok(()) } fn validate_node( &self, node: &Node, key: NodeKey, - leaf_data: &LeafConsistencyData, + leaf_data: Option<&LeafConsistencyData>, ) -> Result { match node { Node::Leaf(leaf) => { @@ -111,7 +121,9 @@ impl MerkleTree { full_key: leaf.full_key, }); } - leaf_data.insert_leaf(leaf)?; + if let Some(leaf_data) = leaf_data { + leaf_data.insert_leaf(leaf)?; + } } Node::Internal(node) => { @@ -149,8 +161,8 @@ impl MerkleTree { is_leaf: child_ref.is_leaf, })?; - // Recursion here is OK; the tree isn't that deep (~8 nibbles for a tree with - // ~1B entries). + // Recursion here is OK; the tree isn't that deep (approximately 8 nibbles for a tree with + // approximately 1B entries). let child_hash = self.validate_node(&child, child_key, leaf_data)?; if child_hash == child_ref.hash { Ok(()) @@ -255,14 +267,17 @@ impl AtomicBitSet { #[cfg(test)] mod tests { + use std::num::NonZeroU64; + use assert_matches::assert_matches; use rayon::ThreadPoolBuilder; - - use std::num::NonZeroU64; + use zksync_types::{H256, U256}; use super::*; - use crate::{types::InternalNode, PatchSet}; - use zksync_types::{H256, U256}; + use crate::{ + types::{InternalNode, TreeEntry}, + PatchSet, + }; const FIRST_KEY: Key = U256([0, 0, 0, 0x_dead_beef_0000_0000]); const SECOND_KEY: Key = U256([0, 0, 0, 0x_dead_beef_0100_0000]); @@ -270,8 +285,8 @@ mod tests { fn prepare_database() -> PatchSet { let mut tree = MerkleTree::new(PatchSet::default()); tree.extend(vec![ - (FIRST_KEY, H256([1; 32])), - (SECOND_KEY, H256([2; 32])), + TreeEntry::new(FIRST_KEY, 1, H256([1; 32])), + TreeEntry::new(SECOND_KEY, 2, H256([2; 32])), ]); tree.db } @@ -300,7 +315,7 @@ mod tests { .num_threads(1) .build() .expect("failed initializing `rayon` thread pool"); - thread_pool.install(|| MerkleTree::new(db).verify_consistency(0)) + thread_pool.install(|| MerkleTree::new(db).verify_consistency(0, true)) } #[test] diff --git a/core/lib/merkle_tree/src/domain.rs b/core/lib/merkle_tree/src/domain.rs index bb82233aec2..2fe4b59f821 100644 --- a/core/lib/merkle_tree/src/domain.rs +++ b/core/lib/merkle_tree/src/domain.rs @@ -1,19 +1,21 @@ //! Tying the Merkle tree implementation to the problem domain. use rayon::{ThreadPool, ThreadPoolBuilder}; -use zksync_utils::h256_to_u256; - -use crate::{ - storage::{MerkleTreeColumnFamily, PatchSet, Patched, RocksDBWrapper}, - types::{Key, Root, TreeEntryWithProof, TreeInstruction, TreeLogEntry, ValueHash, TREE_DEPTH}, - BlockOutput, HashTree, MerkleTree, NoVersionError, -}; use zksync_crypto::hasher::blake2::Blake2Hasher; -use zksync_storage::RocksDB; use zksync_types::{ proofs::{PrepareBasicCircuitsJob, StorageLogMetadata}, writes::{InitialStorageWrite, RepeatedStorageWrite, StateDiffRecord}, - L1BatchNumber, StorageKey, StorageLog, StorageLogKind, U256, + L1BatchNumber, StorageKey, U256, +}; +use zksync_utils::h256_to_u256; + +use crate::{ + storage::{PatchSet, Patched, RocksDBWrapper}, + types::{ + Key, Root, TreeEntry, TreeEntryWithProof, TreeInstruction, TreeLogEntry, ValueHash, + TREE_DEPTH, + }, + BlockOutput, HashTree, MerkleTree, NoVersionError, }; /// Metadata for the current tree state. 
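`verify_consistency` now takes the `validate_indices` flag described above. A quick sketch of both modes on an in-memory tree, following the crate's own tests:

```rust
use zksync_merkle_tree::{Key, MerkleTree, PatchSet, TreeEntry, ValueHash};

fn main() {
    let mut tree = MerkleTree::new(PatchSet::default());
    tree.extend(vec![
        TreeEntry::new(Key::from(1), 1, ValueHash::repeat_byte(1)),
        TreeEntry::new(Key::from(2), 2, ValueHash::repeat_byte(2)),
    ]);
    // Full check: tree structure plus leaf indices (unique, sequential starting from 1).
    tree.verify_consistency(0, true).unwrap();
    // Structure-only check that skips index validation, as in the load test.
    tree.verify_consistency(0, false).unwrap();
}
```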
@@ -65,17 +67,17 @@ impl ZkSyncTree { /// Returns metadata based on `storage_logs` generated by the genesis L1 batch. This does not /// create a persistent tree. - pub fn process_genesis_batch(storage_logs: &[StorageLog]) -> BlockOutput { - let kvs = Self::filter_write_logs(storage_logs); + pub fn process_genesis_batch(storage_logs: &[TreeInstruction]) -> BlockOutput { + let kvs = Self::filter_write_instructions(storage_logs); tracing::info!( "Creating Merkle tree for genesis batch with {instr_count} writes", instr_count = kvs.len() ); - let kvs = kvs + let kvs: Vec<_> = kvs .iter() - .map(|(k, v)| (k.hashed_key_u256(), *v)) - .collect::>(); + .map(|instr| instr.map_key(StorageKey::hashed_key_u256)) + .collect(); let mut in_memory_tree = MerkleTree::new(PatchSet::default()); let output = in_memory_tree.extend(kvs); @@ -89,19 +91,18 @@ impl ZkSyncTree { } /// Creates a tree with the full processing mode. - pub fn new(db: RocksDB) -> Self { + pub fn new(db: RocksDBWrapper) -> Self { Self::new_with_mode(db, TreeMode::Full) } /// Creates a tree with the lightweight processing mode. - pub fn new_lightweight(db: RocksDB) -> Self { + pub fn new_lightweight(db: RocksDBWrapper) -> Self { Self::new_with_mode(db, TreeMode::Lightweight) } - fn new_with_mode(db: RocksDB, mode: TreeMode) -> Self { - let wrapper = RocksDBWrapper::from(db); + fn new_with_mode(db: RocksDBWrapper, mode: TreeMode) -> Self { Self { - tree: MerkleTree::new(Patched::new(wrapper)), + tree: MerkleTree::new(Patched::new(db)), thread_pool: None, mode, } @@ -170,29 +171,36 @@ impl ZkSyncTree { /// Panics if an inconsistency is detected. pub fn verify_consistency(&self, l1_batch_number: L1BatchNumber) { let version = u64::from(l1_batch_number.0); - self.tree.verify_consistency(version).unwrap_or_else(|err| { - panic!("Tree at version {version} is inconsistent: {err}"); - }); + self.tree + .verify_consistency(version, true) + .unwrap_or_else(|err| { + panic!("Tree at version {version} is inconsistent: {err}"); + }); } /// Processes an iterator of storage logs comprising a single L1 batch. 
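`process_genesis_batch` likewise consumes instructions now. A sketch of invoking it with a single genesis write; the storage key construction follows the hasher tests elsewhere in this diff, and the leaf index (1) is the caller's responsibility:

```rust
use zksync_merkle_tree::{domain::ZkSyncTree, TreeEntry, TreeInstruction};
use zksync_types::{AccountTreeId, Address, StorageKey, H256};

fn main() {
    let address: Address = "4b3af74f66ab1f0da3f2e4ec7a3cb99baf1af7b2".parse().unwrap();
    let key = StorageKey::new(AccountTreeId::new(address), H256::zero());
    // Genesis writes carry explicit 1-based enumeration indices.
    let instructions = vec![TreeInstruction::Write(TreeEntry::new(key, 1, H256::repeat_byte(1)))];
    let output = ZkSyncTree::process_genesis_batch(&instructions);
    let _genesis_root = output.root_hash;
}
```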
- pub fn process_l1_batch(&mut self, storage_logs: &[StorageLog]) -> TreeMetadata { + pub fn process_l1_batch( + &mut self, + storage_logs: &[TreeInstruction], + ) -> TreeMetadata { match self.mode { TreeMode::Full => self.process_l1_batch_full(storage_logs), TreeMode::Lightweight => self.process_l1_batch_lightweight(storage_logs), } } - fn process_l1_batch_full(&mut self, storage_logs: &[StorageLog]) -> TreeMetadata { + fn process_l1_batch_full( + &mut self, + instructions: &[TreeInstruction], + ) -> TreeMetadata { let l1_batch_number = self.next_l1_batch_number(); - let instructions = Self::transform_logs(storage_logs); let starting_leaf_count = self.tree.latest_root().leaf_count(); let starting_root_hash = self.tree.latest_root_hash(); - let instructions_with_hashed_keys = instructions + let instructions_with_hashed_keys: Vec<_> = instructions .iter() - .map(|(k, instr)| (k.hashed_key_u256(), *instr)) - .collect::>(); + .map(|instr| instr.map_key(StorageKey::hashed_key_u256)) + .collect(); tracing::info!( "Extending Merkle tree with batch #{l1_batch_number} with {instr_count} ops in full mode", @@ -207,7 +215,7 @@ impl ZkSyncTree { let mut witness = PrepareBasicCircuitsJob::new(starting_leaf_count + 1); witness.reserve(output.logs.len()); - for (log, (key, instruction)) in output.logs.iter().zip(&instructions) { + for (log, instruction) in output.logs.iter().zip(instructions) { let empty_levels_end = TREE_DEPTH - log.merkle_path.len(); let empty_subtree_hashes = (0..empty_levels_end).map(|i| Blake2Hasher.empty_subtree_hash(i)); @@ -218,20 +226,22 @@ impl ZkSyncTree { .collect(); let value_written = match instruction { - TreeInstruction::Write(value) => value.0, - TreeInstruction::Read => [0_u8; 32], + TreeInstruction::Write(entry) => entry.value.0, + TreeInstruction::Read(_) => [0_u8; 32], }; let log = StorageLogMetadata { root_hash: log.root_hash.0, is_write: !log.base.is_read(), - first_write: matches!(log.base, TreeLogEntry::Inserted { .. }), + first_write: matches!(log.base, TreeLogEntry::Inserted), merkle_paths, - leaf_hashed_key: key.hashed_key_u256(), - leaf_enumeration_index: match log.base { - TreeLogEntry::Updated { leaf_index, .. } - | TreeLogEntry::Inserted { leaf_index } - | TreeLogEntry::Read { leaf_index, .. } => leaf_index, - TreeLogEntry::ReadMissingKey => 0, + leaf_hashed_key: instruction.key().hashed_key_u256(), + leaf_enumeration_index: match instruction { + TreeInstruction::Write(entry) => entry.leaf_index, + TreeInstruction::Read(_) => match log.base { + TreeLogEntry::Read { leaf_index, .. } => leaf_index, + TreeLogEntry::ReadMissingKey => 0, + _ => unreachable!("Read instructions always transform to Read / ReadMissingKey log entries"), + } }, value_written, value_read: match log.base { @@ -243,7 +253,7 @@ impl ZkSyncTree { previous_value.0 } TreeLogEntry::Read { value, .. } => value.0, - TreeLogEntry::Inserted { .. 
} | TreeLogEntry::ReadMissingKey => [0_u8; 32], + TreeLogEntry::Inserted | TreeLogEntry::ReadMissingKey => [0_u8; 32], }, }; witness.push_merkle_path(log); @@ -254,12 +264,12 @@ impl ZkSyncTree { .logs .into_iter() .filter_map(|log| (!log.base.is_read()).then_some(log.base)); - let kvs = instructions.into_iter().filter_map(|(key, instruction)| { - let TreeInstruction::Write(value) = instruction else { - return None; - }; - Some((key, value)) - }); + let kvs = instructions + .iter() + .filter_map(|instruction| match instruction { + TreeInstruction::Write(entry) => Some(*entry), + TreeInstruction::Read(_) => None, + }); let (initial_writes, repeated_writes, state_diffs) = Self::extract_writes(logs, kvs); tracing::info!( @@ -281,21 +291,9 @@ impl ZkSyncTree { } } - fn transform_logs(storage_logs: &[StorageLog]) -> Vec<(StorageKey, TreeInstruction)> { - let instructions = storage_logs.iter().map(|log| { - let key = log.key; - let instruction = match log.kind { - StorageLogKind::Write => TreeInstruction::Write(log.value), - StorageLogKind::Read => TreeInstruction::Read, - }; - (key, instruction) - }); - instructions.collect() - } - fn extract_writes( logs: impl Iterator, - kvs: impl Iterator, + entries: impl Iterator>, ) -> ( Vec, Vec, @@ -304,13 +302,14 @@ impl ZkSyncTree { let mut initial_writes = vec![]; let mut repeated_writes = vec![]; let mut state_diffs = vec![]; - for (log_entry, (key, value)) in logs.zip(kvs) { + for (log_entry, input_entry) in logs.zip(entries) { + let key = &input_entry.key; match log_entry { - TreeLogEntry::Inserted { leaf_index } => { + TreeLogEntry::Inserted => { initial_writes.push(InitialStorageWrite { - index: leaf_index, + index: input_entry.leaf_index, key: key.hashed_key_u256(), - value, + value: input_entry.value, }); state_diffs.push(StateDiffRecord { address: *key.address(), @@ -318,25 +317,25 @@ impl ZkSyncTree { derived_key: StorageKey::raw_hashed_key(key.address(), key.key()), enumeration_index: 0u64, initial_value: U256::default(), - final_value: h256_to_u256(value), + final_value: h256_to_u256(input_entry.value), }); } TreeLogEntry::Updated { + previous_value: prev_value_hash, leaf_index, - previous_value, } => { - if previous_value != value { + if prev_value_hash != input_entry.value { repeated_writes.push(RepeatedStorageWrite { - index: leaf_index, - value, + index: input_entry.leaf_index, + value: input_entry.value, }); state_diffs.push(StateDiffRecord { address: *key.address(), key: h256_to_u256(*key.key()), derived_key: StorageKey::raw_hashed_key(key.address(), key.key()), enumeration_index: leaf_index, - initial_value: h256_to_u256(previous_value), - final_value: h256_to_u256(value), + initial_value: h256_to_u256(prev_value_hash), + final_value: h256_to_u256(input_entry.value), }); } // Else we have a no-op update that must be omitted from `repeated_writes`. 
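After this refactoring, `ZkSyncTree::process_l1_batch` takes `TreeInstruction<StorageKey>` values whose leaf indices were assigned upstream (in Postgres) instead of raw `StorageLog`s. A hedged sketch of the conversion a caller could perform; the tuple input shape and the `to_instructions` helper name are illustrative, not part of the diff, and it assumes `TreeEntry::new` is generic over the key type, as its use with both `Key` and `StorageKey` here suggests:

```rust
use zksync_merkle_tree::{TreeEntry, TreeInstruction};
use zksync_types::{StorageKey, H256};

/// Hypothetical replacement for the removed `transform_logs`: writes arrive with
/// their enumeration indices already assigned, reads carry just the key.
fn to_instructions(
    logs: impl IntoIterator<Item = (StorageKey, Option<(u64, H256)>)>,
) -> Vec<TreeInstruction<StorageKey>> {
    logs.into_iter()
        .map(|(key, write)| match write {
            Some((leaf_index, value)) => {
                TreeInstruction::Write(TreeEntry::new(key, leaf_index, value))
            }
            None => TreeInstruction::Read(key),
        })
        .collect()
}
```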
@@ -348,8 +347,11 @@ impl ZkSyncTree { (initial_writes, repeated_writes, state_diffs) } - fn process_l1_batch_lightweight(&mut self, storage_logs: &[StorageLog]) -> TreeMetadata { - let kvs = Self::filter_write_logs(storage_logs); + fn process_l1_batch_lightweight( + &mut self, + instructions: &[TreeInstruction], + ) -> TreeMetadata { + let kvs = Self::filter_write_instructions(instructions); let l1_batch_number = self.next_l1_batch_number(); tracing::info!( "Extending Merkle tree with batch #{l1_batch_number} with {kv_count} writes \ @@ -357,10 +359,10 @@ impl ZkSyncTree { kv_count = kvs.len() ); - let kvs_with_derived_key = kvs + let kvs_with_derived_key: Vec<_> = kvs .iter() - .map(|(k, v)| (k.hashed_key_u256(), *v)) - .collect::>(); + .map(|entry| entry.map_key(StorageKey::hashed_key_u256)) + .collect(); let output = if let Some(thread_pool) = &self.thread_pool { thread_pool.install(|| self.tree.extend(kvs_with_derived_key.clone())) @@ -390,14 +392,15 @@ impl ZkSyncTree { } } - fn filter_write_logs(storage_logs: &[StorageLog]) -> Vec<(StorageKey, ValueHash)> { - let kvs = storage_logs.iter().filter_map(|log| match log.kind { - StorageLogKind::Write => { - let key = log.key; - Some((key, log.value)) - } - StorageLogKind::Read => None, - }); + fn filter_write_instructions( + instructions: &[TreeInstruction], + ) -> Vec> { + let kvs = instructions + .iter() + .filter_map(|instruction| match instruction { + TreeInstruction::Write(entry) => Some(*entry), + TreeInstruction::Read(_) => None, + }); kvs.collect() } diff --git a/core/lib/merkle_tree/src/errors.rs b/core/lib/merkle_tree/src/errors.rs index a30b0b98f5b..4afe8a2367c 100644 --- a/core/lib/merkle_tree/src/errors.rs +++ b/core/lib/merkle_tree/src/errors.rs @@ -166,9 +166,10 @@ impl error::Error for NoVersionError {} #[cfg(test)] mod tests { + use zksync_types::U256; + use super::*; use crate::{types::Nibbles, Key}; - use zksync_types::U256; const TEST_KEY: Key = U256([0, 0, 0, 0x_dead_beef_0000_0000]); diff --git a/core/lib/merkle_tree/src/getters.rs b/core/lib/merkle_tree/src/getters.rs index 67ce2aa9877..c53365c6047 100644 --- a/core/lib/merkle_tree/src/getters.rs +++ b/core/lib/merkle_tree/src/getters.rs @@ -2,14 +2,16 @@ use crate::{ hasher::HasherWithStats, + recovery::MerkleTreeRecovery, storage::{LoadAncestorsResult, SortedKeys, WorkingPatchSet}, types::{Nibbles, Node, TreeEntry, TreeEntryWithProof}, - Database, HashTree, Key, MerkleTree, NoVersionError, ValueHash, + Database, HashTree, Key, MerkleTree, NoVersionError, PruneDatabase, ValueHash, }; impl MerkleTree { /// Reads entries with the specified keys from the tree. The entries are returned in the same order - /// as requested. + /// as requested. If a certain key is not present in the tree, the corresponding returned entry + /// will be [empty](TreeEntry::is_empty()). 
/// /// # Errors /// @@ -19,43 +21,7 @@ impl MerkleTree { version: u64, leaf_keys: &[Key], ) -> Result, NoVersionError> { - self.load_and_transform_entries( - version, - leaf_keys, - |patch_set, leaf_key, longest_prefix| { - let node = patch_set.get(longest_prefix); - match node { - Some(Node::Leaf(leaf)) if &leaf.full_key == leaf_key => (*leaf).into(), - _ => TreeEntry::empty(), - } - }, - ) - } - - fn load_and_transform_entries( - &self, - version: u64, - leaf_keys: &[Key], - mut transform: impl FnMut(&mut WorkingPatchSet, &Key, &Nibbles) -> T, - ) -> Result, NoVersionError> { - let root = self.db.root(version).ok_or_else(|| { - let manifest = self.db.manifest().unwrap_or_default(); - NoVersionError { - missing_version: version, - version_count: manifest.version_count, - } - })?; - let sorted_keys = SortedKeys::new(leaf_keys.iter().copied()); - let mut patch_set = WorkingPatchSet::new(version, root); - let LoadAncestorsResult { - longest_prefixes, .. - } = patch_set.load_ancestors(&sorted_keys, &self.db); - - Ok(leaf_keys - .iter() - .zip(&longest_prefixes) - .map(|(leaf_key, longest_prefix)| transform(&mut patch_set, leaf_key, longest_prefix)) - .collect()) + load_and_transform_entries(&self.db, version, leaf_keys, extract_entry) } /// Reads entries together with Merkle proofs with the specified keys from the tree. The entries are returned @@ -70,17 +36,19 @@ impl MerkleTree { leaf_keys: &[Key], ) -> Result, NoVersionError> { let mut hasher = HasherWithStats::new(&self.hasher); - self.load_and_transform_entries( + load_and_transform_entries( + &self.db, version, leaf_keys, |patch_set, &leaf_key, longest_prefix| { let (leaf, merkle_path) = patch_set.create_proof(&mut hasher, leaf_key, longest_prefix, 0); - let value_hash = leaf + let value = leaf .as_ref() .map_or_else(ValueHash::zero, |leaf| leaf.value_hash); TreeEntry { - value_hash, + key: leaf_key, + value, leaf_index: leaf.map_or(0, |leaf| leaf.leaf_index), } .with_merkle_path(merkle_path.into_inner()) @@ -89,6 +57,58 @@ impl MerkleTree { } } +fn load_and_transform_entries( + db: &impl Database, + version: u64, + leaf_keys: &[Key], + mut transform: impl FnMut(&mut WorkingPatchSet, &Key, &Nibbles) -> T, +) -> Result, NoVersionError> { + let root = db.root(version).ok_or_else(|| { + let manifest = db.manifest().unwrap_or_default(); + NoVersionError { + missing_version: version, + version_count: manifest.version_count, + } + })?; + let sorted_keys = SortedKeys::new(leaf_keys.iter().copied()); + let mut patch_set = WorkingPatchSet::new(version, root); + let LoadAncestorsResult { + longest_prefixes, .. + } = patch_set.load_ancestors(&sorted_keys, db); + + Ok(leaf_keys + .iter() + .zip(&longest_prefixes) + .map(|(leaf_key, longest_prefix)| transform(&mut patch_set, leaf_key, longest_prefix)) + .collect()) +} + +fn extract_entry( + patch_set: &mut WorkingPatchSet, + leaf_key: &Key, + longest_prefix: &Nibbles, +) -> TreeEntry { + let node = patch_set.get(longest_prefix); + match node { + Some(Node::Leaf(leaf)) if &leaf.full_key == leaf_key => (*leaf).into(), + _ => TreeEntry::empty(*leaf_key), + } +} + +impl MerkleTreeRecovery { + /// Reads entries with the specified keys from the tree. The entries are returned in the same order + /// as requested. If a certain key is not present in the tree, the corresponding returned entry + /// will be [empty](TreeEntry::is_empty()). 
+ #[allow(clippy::missing_panics_doc)] + pub fn entries(&self, leaf_keys: &[Key]) -> Vec { + load_and_transform_entries(&self.db, self.recovered_version(), leaf_keys, extract_entry) + .unwrap_or_else(|_| { + // If there's no recovered version, the recovered tree is empty yet. + leaf_keys.iter().map(|key| TreeEntry::empty(*key)).collect() + }) + } +} + #[cfg(test)] mod tests { use super::*; @@ -107,26 +127,26 @@ mod tests { let entries = tree.entries_with_proofs(0, &[missing_key]).unwrap(); assert_eq!(entries.len(), 1); assert!(entries[0].base.is_empty()); - entries[0].verify(&tree.hasher, missing_key, tree.hasher.empty_tree_hash()); + entries[0].verify(&tree.hasher, tree.hasher.empty_tree_hash()); } #[test] fn entries_in_single_node_tree() { let mut tree = MerkleTree::new(PatchSet::default()); let key = Key::from(987_654); - let output = tree.extend(vec![(key, ValueHash::repeat_byte(1))]); + let output = tree.extend(vec![TreeEntry::new(key, 1, ValueHash::repeat_byte(1))]); let missing_key = Key::from(123); let entries = tree.entries(0, &[key, missing_key]).unwrap(); assert_eq!(entries.len(), 2); - assert_eq!(entries[0].value_hash, ValueHash::repeat_byte(1)); + assert_eq!(entries[0].value, ValueHash::repeat_byte(1)); assert_eq!(entries[0].leaf_index, 1); let entries = tree.entries_with_proofs(0, &[key, missing_key]).unwrap(); assert_eq!(entries.len(), 2); assert!(!entries[0].base.is_empty()); - entries[0].verify(&tree.hasher, key, output.root_hash); + entries[0].verify(&tree.hasher, output.root_hash); assert!(entries[1].base.is_empty()); - entries[1].verify(&tree.hasher, missing_key, output.root_hash); + entries[1].verify(&tree.hasher, output.root_hash); } } diff --git a/core/lib/merkle_tree/src/hasher/mod.rs b/core/lib/merkle_tree/src/hasher/mod.rs index 8b2478c43d3..fa700a68244 100644 --- a/core/lib/merkle_tree/src/hasher/mod.rs +++ b/core/lib/merkle_tree/src/hasher/mod.rs @@ -1,19 +1,19 @@ //! Hashing operations on the Merkle tree. -use once_cell::sync::Lazy; - use std::{fmt, iter}; -mod nodes; -mod proofs; +use once_cell::sync::Lazy; +use zksync_crypto::hasher::{blake2::Blake2Hasher, Hasher}; pub(crate) use self::nodes::{InternalNodeCache, MerklePath}; pub use self::proofs::TreeRangeDigest; use crate::{ metrics::HashingStats, - types::{Key, ValueHash, TREE_DEPTH}, + types::{TreeEntry, ValueHash, TREE_DEPTH}, }; -use zksync_crypto::hasher::{blake2::Blake2Hasher, Hasher}; + +mod nodes; +mod proofs; /// Tree hashing functionality. 
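The getter changes above reshape lookups: missing keys produce empty entries rather than errors, and `TreeEntryWithProof::verify` drops its `key` argument because the entry now carries the key itself. A minimal end-to-end sketch based on the updated tests:

```rust
use zksync_crypto::hasher::blake2::Blake2Hasher;
use zksync_merkle_tree::{Key, MerkleTree, PatchSet, TreeEntry, ValueHash};

fn main() {
    let mut tree = MerkleTree::new(PatchSet::default());
    let key = Key::from(987_654);
    let output = tree.extend(vec![TreeEntry::new(key, 1, ValueHash::repeat_byte(1))]);

    // Missing keys yield empty entries instead of an error.
    let entries = tree.entries(0, &[key, Key::from(123)]).unwrap();
    assert_eq!(entries[0].value, ValueHash::repeat_byte(1));
    assert!(entries[1].is_empty());

    // Proofs verify against the trusted root hash; no separate `key` argument anymore.
    let entries = tree.entries_with_proofs(0, &[key]).unwrap();
    entries[0].verify(&Blake2Hasher, output.root_hash);
}
```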
pub trait HashTree: Send + Sync { @@ -65,17 +65,11 @@ impl dyn HashTree + '_ { empty_hashes.chain(path.iter().copied()) } - fn fold_merkle_path( - &self, - path: &[ValueHash], - key: Key, - value_hash: ValueHash, - leaf_index: u64, - ) -> ValueHash { - let mut hash = self.hash_leaf(&value_hash, leaf_index); + fn fold_merkle_path(&self, path: &[ValueHash], entry: TreeEntry) -> ValueHash { + let mut hash = self.hash_leaf(&entry.value, entry.leaf_index); let full_path = self.extend_merkle_path(path); for (depth, adjacent_hash) in full_path.enumerate() { - hash = if key.bit(depth) { + hash = if entry.key.bit(depth) { self.hash_branch(&adjacent_hash, &hash) } else { self.hash_branch(&hash, &adjacent_hash) @@ -228,9 +222,10 @@ impl HasherWithStats<'_> { #[cfg(test)] mod tests { + use zksync_types::{AccountTreeId, Address, StorageKey, H256}; + use super::*; use crate::types::LeafNode; - use zksync_types::{AccountTreeId, Address, StorageKey, H256}; #[test] fn empty_tree_hash_is_as_expected() { @@ -254,7 +249,7 @@ mod tests { let address: Address = "4b3af74f66ab1f0da3f2e4ec7a3cb99baf1af7b2".parse().unwrap(); let key = StorageKey::new(AccountTreeId::new(address), H256::zero()); let key = key.hashed_key_u256(); - let leaf = LeafNode::new(key, H256([1; 32]), 1); + let leaf = LeafNode::new(TreeEntry::new(key, 1, H256([1; 32]))); let stats = HashingStats::default(); let mut hasher = (&Blake2Hasher as &dyn HashTree).with_stats(&stats); @@ -265,7 +260,7 @@ mod tests { assert!(stats.hashed_bytes.into_inner() > 100); let hasher: &dyn HashTree = &Blake2Hasher; - let folded_hash = hasher.fold_merkle_path(&[], key, H256([1; 32]), 1); + let folded_hash = hasher.fold_merkle_path(&[], leaf.into()); assert_eq!(folded_hash, EXPECTED_HASH); } @@ -274,7 +269,7 @@ mod tests { let address: Address = "4b3af74f66ab1f0da3f2e4ec7a3cb99baf1af7b2".parse().unwrap(); let key = StorageKey::new(AccountTreeId::new(address), H256::zero()); let key = key.hashed_key_u256(); - let leaf = LeafNode::new(key, H256([1; 32]), 1); + let leaf = LeafNode::new(TreeEntry::new(key, 1, H256([1; 32]))); let mut hasher = HasherWithStats::new(&Blake2Hasher); let leaf_hash = leaf.hash(&mut hasher, 2); @@ -283,9 +278,7 @@ mod tests { let expected_hash = hasher.hash_branch(&merkle_path[0], &leaf_hash); let expected_hash = hasher.hash_branch(&expected_hash, &merkle_path[1]); - let folded_hash = hasher - .inner - .fold_merkle_path(&merkle_path, key, H256([1; 32]), 1); + let folded_hash = hasher.inner.fold_merkle_path(&merkle_path, leaf.into()); assert_eq!(folded_hash, expected_hash); } } diff --git a/core/lib/merkle_tree/src/hasher/nodes.rs b/core/lib/merkle_tree/src/hasher/nodes.rs index d36c58c0ae1..6e1c007bc42 100644 --- a/core/lib/merkle_tree/src/hasher/nodes.rs +++ b/core/lib/merkle_tree/src/hasher/nodes.rs @@ -258,10 +258,11 @@ impl Node { #[cfg(test)] mod tests { - use super::*; use zksync_crypto::hasher::{blake2::Blake2Hasher, Hasher}; use zksync_types::H256; + use super::*; + fn test_internal_node_hashing(child_indexes: &[u8]) { println!("Testing indices: {child_indexes:?}"); diff --git a/core/lib/merkle_tree/src/hasher/proofs.rs b/core/lib/merkle_tree/src/hasher/proofs.rs index d97df0ad97d..49d4bfe9295 100644 --- a/core/lib/merkle_tree/src/hasher/proofs.rs +++ b/core/lib/merkle_tree/src/hasher/proofs.rs @@ -22,36 +22,37 @@ impl BlockOutputWithProofs { &self, hasher: &dyn HashTree, old_root_hash: ValueHash, - instructions: &[(Key, TreeInstruction)], + instructions: &[TreeInstruction], ) { assert_eq!(instructions.len(), self.logs.len()); let mut 
root_hash = old_root_hash; - for (op, &(key, instruction)) in self.logs.iter().zip(instructions) { + for (op, &instruction) in self.logs.iter().zip(instructions) { assert!(op.merkle_path.len() <= TREE_DEPTH); - if matches!(instruction, TreeInstruction::Read) { + if matches!(instruction, TreeInstruction::Read(_)) { assert_eq!(op.root_hash, root_hash); assert!(op.base.is_read()); } else { assert!(!op.base.is_read()); } - let (prev_leaf_index, leaf_index, prev_value) = match op.base { - TreeLogEntry::Inserted { leaf_index } => (0, leaf_index, ValueHash::zero()), + let prev_entry = match op.base { + TreeLogEntry::Inserted | TreeLogEntry::ReadMissingKey => { + TreeEntry::empty(instruction.key()) + } TreeLogEntry::Updated { leaf_index, - previous_value, - } => (leaf_index, leaf_index, previous_value), - - TreeLogEntry::Read { leaf_index, value } => (leaf_index, leaf_index, value), - TreeLogEntry::ReadMissingKey => (0, 0, ValueHash::zero()), + previous_value: value, + } + | TreeLogEntry::Read { leaf_index, value } => { + TreeEntry::new(instruction.key(), leaf_index, value) + } }; - let prev_hash = - hasher.fold_merkle_path(&op.merkle_path, key, prev_value, prev_leaf_index); + let prev_hash = hasher.fold_merkle_path(&op.merkle_path, prev_entry); assert_eq!(prev_hash, root_hash); - if let TreeInstruction::Write(value) = instruction { - let next_hash = hasher.fold_merkle_path(&op.merkle_path, key, value, leaf_index); + if let TreeInstruction::Write(new_entry) = instruction { + let next_hash = hasher.fold_merkle_path(&op.merkle_path, new_entry); assert_eq!(next_hash, op.root_hash); } root_hash = op.root_hash; @@ -65,19 +66,14 @@ impl TreeEntryWithProof { /// # Panics /// /// Panics if the proof doesn't verify. - pub fn verify(&self, hasher: &dyn HashTree, key: Key, trusted_root_hash: ValueHash) { + pub fn verify(&self, hasher: &dyn HashTree, trusted_root_hash: ValueHash) { if self.base.leaf_index == 0 { assert!( - self.base.value_hash.is_zero(), + self.base.value.is_zero(), "Invalid missing value specification: leaf index is zero, but value is non-default" ); } - let root_hash = hasher.fold_merkle_path( - &self.merkle_path, - key, - self.base.value_hash, - self.base.leaf_index, - ); + let root_hash = hasher.fold_merkle_path(&self.merkle_path, self.base); assert_eq!(root_hash, trusted_root_hash, "Root hash mismatch"); } } @@ -146,11 +142,7 @@ impl<'a> TreeRangeDigest<'a> { let left_contour: Vec<_> = left_contour.collect(); Self { hasher: HasherWithStats::new(hasher), - current_leaf: LeafNode::new( - start_key, - start_entry.base.value_hash, - start_entry.base.leaf_index, - ), + current_leaf: LeafNode::new(start_entry.base), left_contour: left_contour.try_into().unwrap(), // ^ `unwrap()` is safe by construction; `left_contour` will always have necessary length } @@ -161,13 +153,13 @@ impl<'a> TreeRangeDigest<'a> { /// # Panics /// /// Panics if the provided `key` is not greater than the previous key provided to this digest. - pub fn update(&mut self, key: Key, entry: TreeEntry) { + pub fn update(&mut self, entry: TreeEntry) { assert!( - key > self.current_leaf.full_key, + entry.key > self.current_leaf.full_key, "Keys provided to a digest must be monotonically increasing" ); - let diverging_level = utils::find_diverging_bit(self.current_leaf.full_key, key) + 1; + let diverging_level = utils::find_diverging_bit(self.current_leaf.full_key, entry.key) + 1; // Hash the current leaf up to the `diverging_level`, taking current `left_contour` into account. 
let mut hash = self @@ -188,7 +180,7 @@ impl<'a> TreeRangeDigest<'a> { } // Record the computed hash. self.left_contour[TREE_DEPTH - diverging_level] = hash; - self.current_leaf = LeafNode::new(key, entry.value_hash, entry.leaf_index); + self.current_leaf = LeafNode::new(entry); } /// Finalizes this digest and returns the root hash of the tree. @@ -196,8 +188,8 @@ impl<'a> TreeRangeDigest<'a> { /// # Panics /// /// Panics if the provided `final_key` is not greater than the previous key provided to this digest. - pub fn finalize(mut self, final_key: Key, final_entry: &TreeEntryWithProof) -> ValueHash { - self.update(final_key, final_entry.base); + pub fn finalize(mut self, final_entry: &TreeEntryWithProof) -> ValueHash { + self.update(final_entry.base); let full_path = self .hasher @@ -206,9 +198,9 @@ impl<'a> TreeRangeDigest<'a> { let zipped_paths = self.left_contour.into_iter().zip(full_path); let mut hash = self .hasher - .hash_leaf(&final_entry.base.value_hash, final_entry.base.leaf_index); + .hash_leaf(&final_entry.base.value, final_entry.base.leaf_index); for (depth, (left, right)) in zipped_paths.enumerate() { - hash = if final_key.bit(depth) { + hash = if final_entry.base.key.bit(depth) { self.hasher.hash_branch(&left, &hash) } else { self.hasher.hash_branch(&hash, &right) diff --git a/core/lib/merkle_tree/src/lib.rs b/core/lib/merkle_tree/src/lib.rs index 166400cbb64..687e957f8ef 100644 --- a/core/lib/merkle_tree/src/lib.rs +++ b/core/lib/merkle_tree/src/lib.rs @@ -26,10 +26,15 @@ //! - Hash of a vacant leaf is `hash([0_u8; 40])`, where `hash` is the hash function used //! (Blake2s-256). //! - Hash of an occupied leaf is `hash(u64::to_be_bytes(leaf_index) ++ value_hash)`, -//! where `leaf_index` is the 1-based index of the leaf key in the order of insertion, +//! where `leaf_index` is a 1-based index of the leaf key provided when the leaf is inserted / updated, //! `++` is byte concatenation. //! - Hash of an internal node is `hash(left_child_hash ++ right_child_hash)`. //! +//! Currently in zksync, leaf indices enumerate leaves in the order of their insertion into the tree. +//! Indices are computed externally and are provided to the tree as inputs; the tree doesn't verify +//! index assignment and doesn't rely on particular index assignment assumptions (other than when +//! [verifying tree consistency](MerkleTree::verify_consistency())). +//! //! [Jellyfish Merkle tree]: https://developers.diem.com/papers/jellyfish-merkle-tree/2021-01-14.pdf // Linter settings. 
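The hashing scheme spelled out in the module docs above can be reproduced directly. A worked sketch of the occupied-leaf hash, assuming `Hasher::hash_bytes(&[u8]) -> H256` from `zksync_crypto`; the `leaf_hash` helper itself is illustrative:

```rust
use zksync_crypto::hasher::{blake2::Blake2Hasher, Hasher};
use zksync_types::H256;

// `hash(u64::to_be_bytes(leaf_index) ++ value_hash)` from the scheme above.
fn leaf_hash(leaf_index: u64, value_hash: H256) -> H256 {
    let mut bytes = [0_u8; 40];
    bytes[..8].copy_from_slice(&leaf_index.to_be_bytes());
    bytes[8..].copy_from_slice(value_hash.as_bytes());
    Blake2Hasher.hash_bytes(&bytes)
}

fn main() {
    // A vacant leaf is `hash([0_u8; 40])`, i.e. the same as an "occupied" leaf
    // with index 0 and a zero value hash.
    assert_eq!(leaf_hash(0, H256::zero()), Blake2Hasher.hash_bytes(&[0_u8; 40]));
}
```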
@@ -41,6 +46,23 @@ clippy::doc_markdown // frequent false positive: RocksDB )] +use zksync_crypto::hasher::blake2::Blake2Hasher; + +pub use crate::{ + errors::NoVersionError, + hasher::{HashTree, TreeRangeDigest}, + pruning::{MerkleTreePruner, MerkleTreePrunerHandle}, + storage::{ + Database, MerkleTreeColumnFamily, PatchSet, Patched, PruneDatabase, PrunePatchSet, + RocksDBWrapper, + }, + types::{ + BlockOutput, BlockOutputWithProofs, Key, TreeEntry, TreeEntryWithProof, TreeInstruction, + TreeLogEntry, TreeLogEntryWithProof, ValueHash, + }, +}; +use crate::{hasher::HasherWithStats, storage::Storage, types::Root}; + mod consistency; pub mod domain; mod errors; @@ -64,23 +86,6 @@ pub mod unstable { }; } -pub use crate::{ - errors::NoVersionError, - hasher::{HashTree, TreeRangeDigest}, - pruning::{MerkleTreePruner, MerkleTreePrunerHandle}, - storage::{ - Database, MerkleTreeColumnFamily, PatchSet, Patched, PruneDatabase, PrunePatchSet, - RocksDBWrapper, - }, - types::{ - BlockOutput, BlockOutputWithProofs, Key, TreeEntry, TreeEntryWithProof, TreeInstruction, - TreeLogEntry, TreeLogEntryWithProof, ValueHash, - }, -}; - -use crate::{hasher::HasherWithStats, storage::Storage, types::Root}; -use zksync_crypto::hasher::blake2::Blake2Hasher; - /// Binary Merkle tree implemented using AR16MT from Diem [Jellyfish Merkle tree] white paper. /// /// A tree is persistent and is backed by a key-value store (the `DB` type param). It is versioned, @@ -209,10 +214,10 @@ impl MerkleTree { /// # Return value /// /// Returns information about the update such as the final tree hash. - pub fn extend(&mut self, key_value_pairs: Vec<(Key, ValueHash)>) -> BlockOutput { + pub fn extend(&mut self, entries: Vec) -> BlockOutput { let next_version = self.db.manifest().unwrap_or_default().version_count; let storage = Storage::new(&self.db, &self.hasher, next_version, true); - let (output, patch) = storage.extend(key_value_pairs); + let (output, patch) = storage.extend(entries); self.db.apply_patch(patch); output } @@ -226,7 +231,7 @@ impl MerkleTree { /// instruction. pub fn extend_with_proofs( &mut self, - instructions: Vec<(Key, TreeInstruction)>, + instructions: Vec, ) -> BlockOutputWithProofs { let next_version = self.db.manifest().unwrap_or_default().version_count; let storage = Storage::new(&self.db, &self.hasher, next_version, true); diff --git a/core/lib/merkle_tree/src/metrics.rs b/core/lib/merkle_tree/src/metrics.rs index 29bd58e599e..ef1e94f9b05 100644 --- a/core/lib/merkle_tree/src/metrics.rs +++ b/core/lib/merkle_tree/src/metrics.rs @@ -6,11 +6,12 @@ use std::{ time::Duration, }; -use crate::types::Nibbles; use vise::{ Buckets, EncodeLabelSet, EncodeLabelValue, Family, Gauge, Global, Histogram, Metrics, Unit, }; +use crate::types::Nibbles; + #[derive(Debug, Metrics)] #[metrics(prefix = "merkle_tree")] pub(crate) struct GeneralMetrics { diff --git a/core/lib/merkle_tree/src/pruning.rs b/core/lib/merkle_tree/src/pruning.rs index bf60b8cf956..5b1911ca600 100644 --- a/core/lib/merkle_tree/src/pruning.rs +++ b/core/lib/merkle_tree/src/pruning.rs @@ -89,7 +89,7 @@ impl MerkleTreePruner { /// Sets the sleep duration when the pruner cannot progress. This time should be enough /// for the tree to produce enough stale keys. /// - /// The default value is 60s. + /// The default value is 60 seconds. 
pub fn set_poll_interval(&mut self, poll_interval: Duration) { self.poll_interval = poll_interval; } @@ -187,7 +187,7 @@ mod tests { use super::*; use crate::{ types::{Node, NodeKey}, - Database, Key, MerkleTree, PatchSet, ValueHash, + Database, Key, MerkleTree, PatchSet, TreeEntry, ValueHash, }; fn create_db() -> PatchSet { @@ -195,7 +195,7 @@ mod tests { for i in 0..5 { let key = Key::from(i); let value = ValueHash::from_low_u64_be(i); - MerkleTree::new(&mut db).extend(vec![(key, value)]); + MerkleTree::new(&mut db).extend(vec![TreeEntry::new(key, i + 1, value)]); } db } @@ -245,9 +245,9 @@ mod tests { assert!(start.elapsed() < Duration::from_secs(10)); } - fn generate_key_value_pairs(indexes: impl Iterator) -> Vec<(Key, ValueHash)> { + fn generate_key_value_pairs(indexes: impl Iterator) -> Vec { indexes - .map(|i| (Key::from(i), ValueHash::from_low_u64_be(i))) + .map(|i| TreeEntry::new(Key::from(i), i + 1, ValueHash::from_low_u64_be(i))) .collect() } @@ -273,7 +273,7 @@ mod tests { let mut tree = MerkleTree::new(&mut db); for version in first_retained_version..=latest_version { - tree.verify_consistency(version).unwrap(); + tree.verify_consistency(version, true).unwrap(); } let kvs = generate_key_value_pairs(100..200); @@ -290,7 +290,7 @@ mod tests { let tree = MerkleTree::new(&mut db); for version in first_retained_version..=latest_version { - tree.verify_consistency(version).unwrap(); + tree.verify_consistency(version, true).unwrap(); } assert_no_stale_keys(&db, first_retained_version); } @@ -318,8 +318,8 @@ mod tests { const ITERATIVE_BATCH_COUNT: usize = 10; let mut db = PatchSet::default(); - let kvs: Vec<_> = (0_u32..100) - .map(|i| (Key::from(i), ValueHash::zero())) + let kvs: Vec<_> = (0_u64..100) + .map(|i| TreeEntry::new(Key::from(i), i + 1, ValueHash::zero())) .collect(); let batch_count = if initialize_iteratively { @@ -335,8 +335,8 @@ mod tests { // Completely overwrite all keys. let new_value_hash = ValueHash::from_low_u64_be(1_000); - let new_kvs = (0_u32..100) - .map(|i| (Key::from(i), new_value_hash)) + let new_kvs = (0_u64..100) + .map(|i| TreeEntry::new(Key::from(i), i + 1, new_value_hash)) .collect(); MerkleTree::new(&mut db).extend(new_kvs); @@ -364,16 +364,16 @@ mod tests { prune_iteratively: bool, ) { let mut db = PatchSet::default(); - let kvs: Vec<_> = (0_u32..100) - .map(|i| (Key::from(i), ValueHash::zero())) + let kvs: Vec<_> = (0_u64..100) + .map(|i| TreeEntry::new(Key::from(i), i + 1, ValueHash::zero())) .collect(); MerkleTree::new(&mut db).extend(kvs); let leaf_keys_in_db = leaf_keys(&mut db); // Completely overwrite all keys in several batches. let new_value_hash = ValueHash::from_low_u64_be(1_000); - let new_kvs: Vec<_> = (0_u32..100) - .map(|i| (Key::from(i), new_value_hash)) + let new_kvs: Vec<_> = (0_u64..100) + .map(|i| TreeEntry::new(Key::from(i), i + 1, new_value_hash)) .collect(); for chunk in new_kvs.chunks(20) { MerkleTree::new(&mut db).extend(chunk.to_vec()); diff --git a/core/lib/merkle_tree/src/recovery.rs b/core/lib/merkle_tree/src/recovery.rs index 6f57b64ee81..aecda593a25 100644 --- a/core/lib/merkle_tree/src/recovery.rs +++ b/core/lib/merkle_tree/src/recovery.rs @@ -8,7 +8,7 @@ //! afterwards will have the same outcome as if they were applied to the original tree. //! //! Importantly, a recovered tree is only *observably* identical to the original tree; it differs -//! in (currently unobservable) node versions. In a recovered tree, all nodes will initially have +//! in (currently un-observable) node versions. 
In a recovered tree, all nodes will initially have //! the same version (the snapshot version), while in the original tree, node versions are distributed //! from 0 to the snapshot version (both inclusive). //! @@ -37,30 +37,18 @@ use std::time::Instant; +use zksync_crypto::hasher::blake2::Blake2Hasher; + use crate::{ hasher::{HashTree, HasherWithStats}, storage::{PatchSet, PruneDatabase, PrunePatchSet, Storage}, - types::{Key, Manifest, Root, TreeTags, ValueHash}, - MerkleTree, + types::{Key, Manifest, Root, TreeEntry, TreeTags, ValueHash}, }; -use zksync_crypto::hasher::blake2::Blake2Hasher; - -/// Entry in a Merkle tree used during recovery. -#[derive(Debug, Clone, Copy, PartialEq, Eq)] -pub struct RecoveryEntry { - /// Entry key. - pub key: Key, - /// Entry value. - pub value: ValueHash, - /// Leaf index associated with the entry. It is **not** checked whether leaf indices are well-formed - /// during recovery (e.g., that they are unique). - pub leaf_index: u64, -} /// Handle to a Merkle tree during its recovery. #[derive(Debug)] pub struct MerkleTreeRecovery { - db: DB, + pub(crate) db: DB, hasher: H, recovered_version: u64, } @@ -122,6 +110,11 @@ impl MerkleTreeRecovery { } } + /// Returns the version of the tree being recovered. + pub fn recovered_version(&self) -> u64 { + self.recovered_version + } + /// Returns the root hash of the recovered tree at this point. pub fn root_hash(&self) -> ValueHash { let root = self.db.root(self.recovered_version); @@ -154,7 +147,7 @@ impl MerkleTreeRecovery { %entries.key_range = entries_key_range(&entries), ), )] - pub fn extend_linear(&mut self, entries: Vec) { + pub fn extend_linear(&mut self, entries: Vec) { tracing::debug!("Started extending tree"); let started_at = Instant::now(); @@ -177,7 +170,7 @@ impl MerkleTreeRecovery { entries.len = entries.len(), ), )] - pub fn extend_random(&mut self, entries: Vec) { + pub fn extend_random(&mut self, entries: Vec) { tracing::debug!("Started extending tree"); let started_at = Instant::now(); @@ -197,7 +190,7 @@ impl MerkleTreeRecovery { fields(recovered_version = self.recovered_version), )] #[allow(clippy::missing_panics_doc, clippy::range_plus_one)] - pub fn finalize(mut self) -> MerkleTree { + pub fn finalize(mut self) -> DB { let mut manifest = self.db.manifest().unwrap(); // ^ `unwrap()` is safe: manifest is inserted into the DB on creation @@ -234,15 +227,11 @@ impl MerkleTreeRecovery { self.db.apply_patch(PatchSet::from_manifest(manifest)); tracing::debug!("Updated tree manifest to mark recovery as complete"); - // We don't need additional integrity checks since they were performed in the constructor - MerkleTree { - db: self.db, - hasher: self.hasher, - } + self.db } } -fn entries_key_range(entries: &[RecoveryEntry]) -> String { +fn entries_key_range(entries: &[TreeEntry]) -> String { let (Some(first), Some(last)) = (entries.first(), entries.last()) else { return "(empty)".to_owned(); }; @@ -252,7 +241,7 @@ fn entries_key_range(entries: &[RecoveryEntry]) -> String { #[cfg(test)] mod tests { use super::*; - use crate::{hasher::HasherWithStats, types::LeafNode}; + use crate::{hasher::HasherWithStats, types::LeafNode, MerkleTree}; #[test] #[should_panic(expected = "Tree is expected to be in the process of recovery")] @@ -272,7 +261,8 @@ mod tests { #[test] fn recovering_empty_tree() { - let tree = MerkleTreeRecovery::new(PatchSet::default(), 42).finalize(); + let db = MerkleTreeRecovery::new(PatchSet::default(), 42).finalize(); + let tree = MerkleTree::new(db); assert_eq!(tree.latest_version(), 
Some(42)); assert_eq!(tree.root(42), Some(Root::Empty)); } @@ -280,25 +270,16 @@ mod tests { #[test] fn recovering_tree_with_single_node() { let mut recovery = MerkleTreeRecovery::new(PatchSet::default(), 42); - let recovery_entry = RecoveryEntry { - key: Key::from(123), - value: ValueHash::repeat_byte(1), - leaf_index: 1, - }; + let recovery_entry = TreeEntry::new(Key::from(123), 1, ValueHash::repeat_byte(1)); recovery.extend_linear(vec![recovery_entry]); - let tree = recovery.finalize(); + let tree = MerkleTree::new(recovery.finalize()); assert_eq!(tree.latest_version(), Some(42)); let mut hasher = HasherWithStats::new(&Blake2Hasher); assert_eq!( tree.latest_root_hash(), - LeafNode::new( - recovery_entry.key, - recovery_entry.value, - recovery_entry.leaf_index - ) - .hash(&mut hasher, 0) + LeafNode::new(recovery_entry).hash(&mut hasher, 0) ); - tree.verify_consistency(42).unwrap(); + tree.verify_consistency(42, true).unwrap(); } } diff --git a/core/lib/merkle_tree/src/storage/mod.rs b/core/lib/merkle_tree/src/storage/mod.rs index c5a56abfca9..d2b89da48cd 100644 --- a/core/lib/merkle_tree/src/storage/mod.rs +++ b/core/lib/merkle_tree/src/storage/mod.rs @@ -1,31 +1,28 @@ //! Storage-related logic. -mod database; -mod patch; -mod proofs; -mod rocksdb; -mod serialization; -#[cfg(test)] -mod tests; - pub(crate) use self::patch::{LoadAncestorsResult, WorkingPatchSet}; pub use self::{ database::{Database, NodeKeys, Patched, PruneDatabase, PrunePatchSet}, patch::PatchSet, rocksdb::{MerkleTreeColumnFamily, RocksDBWrapper}, }; - use crate::{ hasher::HashTree, metrics::{TreeUpdaterStats, BLOCK_TIMINGS, GENERAL_METRICS}, - recovery::RecoveryEntry, types::{ BlockOutput, ChildRef, InternalNode, Key, LeafNode, Manifest, Nibbles, Node, Root, - TreeLogEntry, TreeTags, ValueHash, + TreeEntry, TreeLogEntry, TreeTags, ValueHash, }, - utils::increment_counter, }; +mod database; +mod patch; +mod proofs; +mod rocksdb; +mod serialization; +#[cfg(test)] +mod tests; + /// Tree operation: either inserting a new version or updating an existing one (the latter is only /// used during tree recovery). #[derive(Debug, Clone, Copy)] @@ -132,17 +129,17 @@ impl TreeUpdater { /// hashes for all updated nodes in [`Self::finalize()`]. 
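As the updated tests show, `finalize` now returns the database rather than a `MerkleTree`, so re-opening the recovered tree is an explicit step. A compact sketch of the full recovery workflow mirroring the test above:

```rust
use zksync_merkle_tree::{
    recovery::MerkleTreeRecovery, Key, MerkleTree, PatchSet, TreeEntry, ValueHash,
};

fn main() {
    // Recover a snapshot at version 42, then re-open it as a regular tree.
    let mut recovery = MerkleTreeRecovery::new(PatchSet::default(), 42);
    recovery.extend_linear(vec![TreeEntry::new(Key::from(123), 1, ValueHash::repeat_byte(1))]);
    let db = recovery.finalize(); // returns the DB, not a `MerkleTree`
    let tree = MerkleTree::new(db);
    assert_eq!(tree.latest_version(), Some(42));
    tree.verify_consistency(42, true).unwrap();
}
```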
fn insert( &mut self, - key: Key, - value_hash: ValueHash, + entry: TreeEntry, parent_nibbles: &Nibbles, - leaf_index_fn: impl FnOnce() -> u64, ) -> (TreeLogEntry, NewLeafData) { let version = self.patch_set.root_version(); + let key = entry.key; + let traverse_outcome = self.patch_set.traverse(key, parent_nibbles); let (log, leaf_data) = match traverse_outcome { TraverseOutcome::LeafMatch(nibbles, mut leaf) => { - let log = TreeLogEntry::update(leaf.value_hash, leaf.leaf_index); - leaf.value_hash = value_hash; + let log = TreeLogEntry::update(leaf.leaf_index, leaf.value_hash); + leaf.update_from(entry); self.patch_set.insert(nibbles, leaf.into()); self.metrics.updated_leaves += 1; (log, NewLeafData::new(nibbles, leaf)) @@ -173,23 +170,20 @@ impl TreeUpdater { nibble_idx += 1; } - let leaf_index = leaf_index_fn(); - let new_leaf = LeafNode::new(key, value_hash, leaf_index); + let new_leaf = LeafNode::new(entry); let new_leaf_nibbles = Nibbles::new(&key, nibble_idx + 1); let leaf_data = NewLeafData::new(new_leaf_nibbles, new_leaf); let moved_leaf_nibbles = Nibbles::new(&leaf.full_key, nibble_idx + 1); let leaf_data = leaf_data.with_adjacent_leaf(moved_leaf_nibbles, leaf); - (TreeLogEntry::insert(leaf_index), leaf_data) + (TreeLogEntry::Inserted, leaf_data) } TraverseOutcome::MissingChild(nibbles) if nibbles.nibble_count() == 0 => { // The root is currently empty; we replace it with a leaf. - let leaf_index = leaf_index_fn(); - debug_assert_eq!(leaf_index, 1); - let root_leaf = LeafNode::new(key, value_hash, leaf_index); + let root_leaf = LeafNode::new(entry); self.set_root_node(root_leaf.into()); let leaf_data = NewLeafData::new(Nibbles::EMPTY, root_leaf); - (TreeLogEntry::insert(1), leaf_data) + (TreeLogEntry::Inserted, leaf_data) } TraverseOutcome::MissingChild(nibbles) => { @@ -198,10 +192,9 @@ impl TreeUpdater { unreachable!("Node parent must be an internal node"); }; parent.insert_child_ref(last_nibble, ChildRef::leaf(version)); - let leaf_index = leaf_index_fn(); - let new_leaf = LeafNode::new(key, value_hash, leaf_index); + let new_leaf = LeafNode::new(entry); let leaf_data = NewLeafData::new(nibbles, new_leaf); - (TreeLogEntry::insert(leaf_index), leaf_data) + (TreeLogEntry::Inserted, leaf_data) } }; @@ -289,19 +282,20 @@ impl<'a, DB: Database + ?Sized> Storage<'a, DB> { /// Extends the Merkle tree in the lightweight operation mode, without intermediate hash /// computations. 
- pub fn extend(mut self, key_value_pairs: Vec<(Key, ValueHash)>) -> (BlockOutput, PatchSet) { + pub fn extend(mut self, entries: Vec) -> (BlockOutput, PatchSet) { let load_nodes_latency = BLOCK_TIMINGS.load_nodes.start(); - let sorted_keys = SortedKeys::new(key_value_pairs.iter().map(|(key, _)| *key)); + let sorted_keys = SortedKeys::new(entries.iter().map(|entry| entry.key)); let parent_nibbles = self.updater.load_ancestors(&sorted_keys, self.db); let load_nodes_latency = load_nodes_latency.observe(); tracing::debug!("Load stage took {load_nodes_latency:?}"); let extend_patch_latency = BLOCK_TIMINGS.extend_patch.start(); - let mut logs = Vec::with_capacity(key_value_pairs.len()); - for ((key, value_hash), parent_nibbles) in key_value_pairs.into_iter().zip(parent_nibbles) { - let (log, _) = self.updater.insert(key, value_hash, &parent_nibbles, || { - increment_counter(&mut self.leaf_count) - }); + let mut logs = Vec::with_capacity(entries.len()); + for (entry, parent_nibbles) in entries.into_iter().zip(parent_nibbles) { + let (log, _) = self.updater.insert(entry, &parent_nibbles); + if matches!(log, TreeLogEntry::Inserted) { + self.leaf_count += 1; + } logs.push(log); } let extend_patch_latency = extend_patch_latency.observe(); @@ -321,10 +315,7 @@ impl<'a, DB: Database + ?Sized> Storage<'a, DB> { Some(self.updater.load_greatest_key(self.db)?.0.full_key) } - pub fn extend_during_linear_recovery( - mut self, - recovery_entries: Vec, - ) -> PatchSet { + pub fn extend_during_linear_recovery(mut self, recovery_entries: Vec) -> PatchSet { let (mut prev_key, mut prev_nibbles) = match self.updater.load_greatest_key(self.db) { Some((leaf, nibbles)) => (Some(leaf.full_key), nibbles), None => (None, Nibbles::EMPTY), @@ -343,9 +334,7 @@ impl<'a, DB: Database + ?Sized> Storage<'a, DB> { let key_nibbles = Nibbles::new(&entry.key, prev_nibbles.nibble_count()); let parent_nibbles = prev_nibbles.common_prefix(&key_nibbles); - let (_, new_leaf) = - self.updater - .insert(entry.key, entry.value, &parent_nibbles, || entry.leaf_index); + let (_, new_leaf) = self.updater.insert(entry, &parent_nibbles); prev_nibbles = new_leaf.nibbles; self.leaf_count += 1; } @@ -356,10 +345,7 @@ impl<'a, DB: Database + ?Sized> Storage<'a, DB> { patch } - pub fn extend_during_random_recovery( - mut self, - recovery_entries: Vec, - ) -> PatchSet { + pub fn extend_during_random_recovery(mut self, recovery_entries: Vec) -> PatchSet { let load_nodes_latency = BLOCK_TIMINGS.load_nodes.start(); let sorted_keys = SortedKeys::new(recovery_entries.iter().map(|entry| entry.key)); let parent_nibbles = self.updater.load_ancestors(&sorted_keys, self.db); @@ -368,8 +354,7 @@ impl<'a, DB: Database + ?Sized> Storage<'a, DB> { let extend_patch_latency = BLOCK_TIMINGS.extend_patch.start(); for (entry, parent_nibbles) in recovery_entries.into_iter().zip(parent_nibbles) { - self.updater - .insert(entry.key, entry.value, &parent_nibbles, || entry.leaf_index); + self.updater.insert(entry, &parent_nibbles); self.leaf_count += 1; } let extend_patch_latency = extend_patch_latency.observe(); diff --git a/core/lib/merkle_tree/src/storage/patch.rs b/core/lib/merkle_tree/src/storage/patch.rs index 9e251bf0178..f0b06c83bf2 100644 --- a/core/lib/merkle_tree/src/storage/patch.rs +++ b/core/lib/merkle_tree/src/storage/patch.rs @@ -1,13 +1,13 @@ //! Types related to DB patches: `PatchSet` and `WorkingPatchSet`. 
-use rayon::prelude::*; - use std::{ collections::{hash_map::Entry, HashMap}, iter, time::Instant, }; +use rayon::prelude::*; + use crate::{ hasher::{HashTree, HasherWithStats, MerklePath}, metrics::HashingStats, @@ -344,7 +344,7 @@ impl WorkingPatchSet { } } - /// Computes hashes and serializes this changeset. + /// Computes hashes and serializes this change set. pub(super) fn finalize( self, manifest: Manifest, @@ -597,7 +597,7 @@ impl WorkingPatchSet { Some(Node::Internal(node)) => { let (next_nibble, child_ref) = node.last_child_ref(); nibbles = nibbles.push(next_nibble).unwrap(); - // ^ `unwrap()` is safe; there can be no internal nodes on the bottommost tree level + // ^ `unwrap()` is safe; there can be no internal nodes on the bottom-most tree level let child_key = nibbles.with_version(child_ref.version); let child_node = db.tree_node(&child_key, child_ref.is_leaf).unwrap(); // ^ `unwrap()` is safe by construction @@ -680,7 +680,7 @@ mod tests { use super::*; use crate::{ storage::Storage, - types::{Key, LeafNode}, + types::{Key, LeafNode, TreeEntry}, }; fn patch_len(patch: &WorkingPatchSet) -> usize { @@ -697,7 +697,7 @@ mod tests { let key = Key::from_little_endian(&[i; 32]); let nibbles = Nibbles::new(&key, 2 + usize::from(i) % 4); // ^ We need nibble count at least 2 for all `nibbles` to be distinct. - let leaf = LeafNode::new(key, ValueHash::zero(), i.into()); + let leaf = LeafNode::new(TreeEntry::new(key, i.into(), ValueHash::zero())); patch.insert(nibbles, leaf.into()); nibbles }); @@ -742,7 +742,8 @@ mod tests { // Test DB with a single entry. let mut db = PatchSet::default(); let key = Key::from(1234_u64); - let (_, patch) = Storage::new(&db, &(), 0, true).extend(vec![(key, ValueHash::zero())]); + let (_, patch) = + Storage::new(&db, &(), 0, true).extend(vec![TreeEntry::new(key, 1, ValueHash::zero())]); db.apply_patch(patch); let mut patch = WorkingPatchSet::new(1, db.root(0).unwrap()); @@ -754,8 +755,11 @@ mod tests { // Test DB with multiple entries. let other_key = Key::from_little_endian(&[0xa0; 32]); - let (_, patch) = - Storage::new(&db, &(), 1, true).extend(vec![(other_key, ValueHash::zero())]); + let (_, patch) = Storage::new(&db, &(), 1, true).extend(vec![TreeEntry::new( + other_key, + 2, + ValueHash::zero(), + )]); db.apply_patch(patch); let mut patch = WorkingPatchSet::new(2, db.root(1).unwrap()); @@ -766,8 +770,11 @@ mod tests { assert_eq!(load_result.db_reads, 1); let greater_key = Key::from_little_endian(&[0xaf; 32]); - let (_, patch) = - Storage::new(&db, &(), 2, true).extend(vec![(greater_key, ValueHash::zero())]); + let (_, patch) = Storage::new(&db, &(), 2, true).extend(vec![TreeEntry::new( + greater_key, + 3, + ValueHash::zero(), + )]); db.apply_patch(patch); let mut patch = WorkingPatchSet::new(3, db.root(2).unwrap()); diff --git a/core/lib/merkle_tree/src/storage/proofs.rs b/core/lib/merkle_tree/src/storage/proofs.rs index 9e2d172bd6b..81f140088d3 100644 --- a/core/lib/merkle_tree/src/storage/proofs.rs +++ b/core/lib/merkle_tree/src/storage/proofs.rs @@ -15,26 +15,6 @@ //! with root at level 4 (= 1 nibble). Thus, the patch sets and Merkle proofs //! produced by each group are mostly disjoint; they intersect only at the root node level. //! -//! ## Computing leaf indices -//! -//! We need to determine leaf indices for all write instructions. Indices potentially depend -//! on the entire list of `instructions`, so we should determine leaf indices before -//! parallelization. Otherwise, we'd need to sync between parallelized tasks, which defeats -//! 
the purpose of parallelization. -//! -//! We precompute indices as a separate step using the following observations: -//! -//! - If a leaf is present in the tree *before* `instructions` are applied, its index -//! can be obtained from the node ancestors loaded on the first step of the process. -//! - Otherwise, a leaf may have been added by a previous instruction for the same key. -//! Since we already need [`SortedKeys`] to efficiently load ancestors, it's easy -//! to determine such pairs of instructions. -//! - Otherwise, we have a first write, and the leaf index is defined as the current leaf -//! count. -//! -//! In summary, we can determine leaf indices for all write `instructions` in linear time -//! and without synchronization required during the parallel steps of the process. -//! //! ## Merging Merkle proofs //! //! The proofs produced by different groups only intersect at levels 0..4. This can be dealt with @@ -68,7 +48,7 @@ use crate::{ BlockOutputWithProofs, InternalNode, Key, Nibbles, Node, TreeInstruction, TreeLogEntry, TreeLogEntryWithProof, ValueHash, }, - utils::{increment_counter, merge_by_index}, + utils::merge_by_index, }; /// Number of subtrees used for parallel computations. @@ -93,16 +73,13 @@ impl TreeUpdater { for instruction in instructions { let InstructionWithPrecomputes { index, - key, instruction, parent_nibbles, - leaf_index, } = instruction; let log = match instruction { - TreeInstruction::Write(value_hash) => { - let (log, leaf_data) = - self.insert(key, value_hash, &parent_nibbles, || leaf_index); + TreeInstruction::Write(entry) => { + let (log, leaf_data) = self.insert(entry, &parent_nibbles); let (new_root_hash, merkle_path) = self.update_node_hashes(hasher, &leaf_data); root_hash = new_root_hash; TreeLogEntryWithProof { @@ -111,7 +88,7 @@ impl TreeUpdater { root_hash, } } - TreeInstruction::Read => { + TreeInstruction::Read(key) => { let (log, merkle_path) = self.prove(hasher, key, &parent_nibbles); TreeLogEntryWithProof { base: log, @@ -183,7 +160,7 @@ impl TreeUpdater { self.patch_set .create_proof(hasher, key, parent_nibbles, SUBTREE_ROOT_LEVEL / 4); let operation = leaf.map_or(TreeLogEntry::ReadMissingKey, |leaf| { - TreeLogEntry::read(leaf.value_hash, leaf.leaf_index) + TreeLogEntry::read(leaf.leaf_index, leaf.value_hash) }); if matches!(operation, TreeLogEntry::ReadMissingKey) { @@ -259,16 +236,14 @@ impl TreeUpdater { impl<'a, DB: Database + ?Sized> Storage<'a, DB> { pub fn extend_with_proofs( mut self, - instructions: Vec<(Key, TreeInstruction)>, + instructions: Vec, ) -> (BlockOutputWithProofs, PatchSet) { let load_nodes_latency = BLOCK_TIMINGS.load_nodes.start(); - let sorted_keys = SortedKeys::new(instructions.iter().map(|(key, _)| *key)); + let sorted_keys = SortedKeys::new(instructions.iter().map(TreeInstruction::key)); let parent_nibbles = self.updater.load_ancestors(&sorted_keys, self.db); load_nodes_latency.observe(); - let leaf_indices = self.compute_leaf_indices(&instructions, sorted_keys, &parent_nibbles); - let instruction_parts = - InstructionWithPrecomputes::split(instructions, parent_nibbles, leaf_indices); + let instruction_parts = InstructionWithPrecomputes::split(instructions, parent_nibbles); let initial_root = self.updater.patch_set.ensure_internal_root_node(); let initial_metrics = self.updater.metrics; let storage_parts = self.updater.split(); @@ -310,44 +285,13 @@ impl<'a, DB: Database + ?Sized> Storage<'a, DB> { output_with_proofs } - /// Computes leaf indices for all writes in `instructions`. 
Leaf indices are not used for reads; - /// thus, the corresponding entries are always 0. - fn compute_leaf_indices( - &mut self, - instructions: &[(Key, TreeInstruction)], - mut sorted_keys: SortedKeys, - parent_nibbles: &[Nibbles], - ) -> Vec { - sorted_keys.remove_read_instructions(instructions); - let key_mentions = sorted_keys.key_mentions(instructions.len()); - let patch_set = &self.updater.patch_set; - - let mut leaf_indices = Vec::with_capacity(instructions.len()); - let it = instructions.iter().zip(parent_nibbles).enumerate(); - for (idx, ((key, instruction), nibbles)) in it { - let leaf_index = match (instruction, key_mentions[idx]) { - (TreeInstruction::Read, _) => 0, - // ^ Leaf indices are not used for read instructions. - (TreeInstruction::Write(_), KeyMention::First) => { - let leaf_index = match patch_set.get(nibbles) { - Some(Node::Leaf(leaf)) if leaf.full_key == *key => Some(leaf.leaf_index), - _ => None, - }; - leaf_index.unwrap_or_else(|| increment_counter(&mut self.leaf_count)) - } - (TreeInstruction::Write(_), KeyMention::SameAs(prev_idx)) => leaf_indices[prev_idx], - }; - leaf_indices.push(leaf_index); - } - leaf_indices - } - fn finalize_with_proofs( mut self, hasher: &mut HasherWithStats<'_>, root: InternalNode, logs: Vec<(usize, TreeLogEntryWithProof)>, ) -> (BlockOutputWithProofs, PatchSet) { + self.leaf_count += self.updater.metrics.new_leaves; tracing::debug!( "Finished updating tree; total leaf count: {}, stats: {:?}", self.leaf_count, @@ -370,95 +314,35 @@ impl<'a, DB: Database + ?Sized> Storage<'a, DB> { } } -/// Mention of a key in a block: either the first mention, or the same mention as the specified -/// 0-based index in the block. -#[derive(Debug, Clone, Copy)] -enum KeyMention { - First, - SameAs(usize), -} - -impl SortedKeys { - fn remove_read_instructions(&mut self, instructions: &[(Key, TreeInstruction)]) { - debug_assert_eq!(instructions.len(), self.0.len()); - - self.0.retain(|(idx, key)| { - let (key_for_instruction, instruction) = &instructions[*idx]; - debug_assert_eq!(key_for_instruction, key); - matches!(instruction, TreeInstruction::Write(_)) - }); - } - - /// Determines for the original sequence of `Key`s whether a particular key mention - /// is the first one, or it follows after another mention. - fn key_mentions(&self, original_len: usize) -> Vec { - debug_assert!(original_len >= self.0.len()); - - let mut flags = vec![KeyMention::First; original_len]; - let [(mut first_key_mention, mut prev_key), tail @ ..] = self.0.as_slice() else { - return flags; - }; - - // Note that `SameAs(_)` doesn't necessarily reference the first mention of a key, - // just one with a lesser index. This is OK for our purposes. - for &(idx, key) in tail { - if prev_key == key { - if idx > first_key_mention { - flags[idx] = KeyMention::SameAs(first_key_mention); - } else { - debug_assert!(idx < first_key_mention); // all indices should be unique - flags[first_key_mention] = KeyMention::SameAs(idx); - first_key_mention = idx; - } - } else { - prev_key = key; - first_key_mention = idx; - } - } - flags - } -} - /// [`TreeInstruction`] together with precomputed data necessary to efficiently parallelize /// Merkle tree traversal. #[derive(Debug)] struct InstructionWithPrecomputes { /// 0-based index of the instruction. index: usize, - /// Key read / written by the instruction. - key: Key, instruction: TreeInstruction, /// Nibbles for the parent node computed by [`Storage::load_ancestors()`]. 
parent_nibbles: Nibbles, - /// Leaf index for the operation computed by [`Storage::compute_leaf_indices()`]. - /// Always 0 for reads. - leaf_index: u64, } impl InstructionWithPrecomputes { /// Creates groups of instructions to be used during parallelized tree traversal. fn split( - instructions: Vec<(Key, TreeInstruction)>, + instructions: Vec, parent_nibbles: Vec, - leaf_indices: Vec, ) -> [Vec; SUBTREE_COUNT] { const EMPTY_VEC: Vec = Vec::new(); // ^ Need to extract this to a constant to be usable as an array initializer. let mut parts = [EMPTY_VEC; SUBTREE_COUNT]; - let it = instructions - .into_iter() - .zip(parent_nibbles) - .zip(leaf_indices); - for (index, (((key, instruction), parent_nibbles), leaf_index)) in it.enumerate() { - let first_nibble = Nibbles::nibble(&key, 0); + let it = instructions.into_iter().zip(parent_nibbles); + for (index, (instruction, parent_nibbles)) in it.enumerate() { + let first_nibble = Nibbles::nibble(&instruction.key(), 0); let part = &mut parts[first_nibble as usize]; part.push(Self { index, - key, instruction, parent_nibbles, - leaf_index, }); } parts @@ -472,8 +356,6 @@ mod tests { use super::*; use crate::types::Root; - const HASH: ValueHash = ValueHash::zero(); - fn byte_key(byte: u8) -> Key { Key::from_little_endian(&[byte; 32]) } @@ -485,88 +367,14 @@ mod tests { assert_eq!(sorted_keys.0, [1, 3, 4, 0, 2].map(|i| (i, keys[i]))); } - #[test] - fn computing_key_mentions() { - let keys = [4, 1, 3, 4, 3, 3].map(byte_key); - let sorted_keys = SortedKeys::new(keys.into_iter()); - let mentions = sorted_keys.key_mentions(6); - - assert_matches!( - mentions.as_slice(), - [ - KeyMention::First, KeyMention::First, KeyMention::First, - KeyMention::SameAs(0), KeyMention::SameAs(2), KeyMention::SameAs(i) - ] if *i == 2 || *i == 4 - ); - } - - #[test] - fn computing_leaf_indices() { - let db = prepare_db(); - let (instructions, expected_indices) = get_instructions_and_leaf_indices(); - let mut storage = Storage::new(&db, &(), 1, true); - let sorted_keys = SortedKeys::new(instructions.iter().map(|(key, _)| *key)); - let parent_nibbles = storage.updater.load_ancestors(&sorted_keys, &db); - - let leaf_indices = - storage.compute_leaf_indices(&instructions, sorted_keys, &parent_nibbles); - assert_eq!(leaf_indices, expected_indices); - } - - fn prepare_db() -> PatchSet { - let mut db = PatchSet::default(); - let (_, patch) = - Storage::new(&db, &(), 0, true).extend(vec![(byte_key(2), HASH), (byte_key(1), HASH)]); - db.apply_patch(patch); - db - } - - fn get_instructions_and_leaf_indices() -> (Vec<(Key, TreeInstruction)>, Vec) { - let instructions_and_indices = vec![ - (byte_key(3), TreeInstruction::Read, 0), - (byte_key(1), TreeInstruction::Write(HASH), 2), - (byte_key(2), TreeInstruction::Read, 0), - (byte_key(3), TreeInstruction::Write(HASH), 3), - (byte_key(1), TreeInstruction::Read, 0), - (byte_key(3), TreeInstruction::Write(HASH), 3), - (byte_key(2), TreeInstruction::Write(HASH), 1), - (byte_key(0xc0), TreeInstruction::Write(HASH), 4), - (byte_key(2), TreeInstruction::Write(HASH), 1), - ]; - instructions_and_indices - .into_iter() - .map(|(key, instr, idx)| ((key, instr), idx)) - .unzip() - } - - #[test] - fn extending_storage_with_proofs() { - let db = prepare_db(); - let (instructions, expected_indices) = get_instructions_and_leaf_indices(); - let storage = Storage::new(&db, &(), 1, true); - let (block_output, _) = storage.extend_with_proofs(instructions); - assert_eq!(block_output.leaf_count, 4); - - assert_eq!(block_output.logs.len(), 
expected_indices.len()); - for (expected_idx, log) in expected_indices.into_iter().zip(&block_output.logs) { - match log.base { - TreeLogEntry::Inserted { leaf_index } - | TreeLogEntry::Updated { leaf_index, .. } => { - assert_eq!(leaf_index, expected_idx); - } - _ => {} - } - } - } - #[test] fn proofs_for_empty_storage() { let db = PatchSet::default(); let storage = Storage::new(&db, &(), 0, true); let instructions = vec![ - (byte_key(1), TreeInstruction::Read), - (byte_key(2), TreeInstruction::Read), - (byte_key(0xff), TreeInstruction::Read), + TreeInstruction::Read(byte_key(1)), + TreeInstruction::Read(byte_key(2)), + TreeInstruction::Read(byte_key(0xff)), ]; let (block_output, patch) = storage.extend_with_proofs(instructions); assert_eq!(block_output.leaf_count, 0); diff --git a/core/lib/merkle_tree/src/storage/rocksdb.rs b/core/lib/merkle_tree/src/storage/rocksdb.rs index 6c6a3a18105..7dd4d6083d7 100644 --- a/core/lib/merkle_tree/src/storage/rocksdb.rs +++ b/core/lib/merkle_tree/src/storage/rocksdb.rs @@ -1,9 +1,10 @@ //! RocksDB implementation of [`Database`]. -use rayon::prelude::*; - use std::path::Path; +use rayon::prelude::*; +use zksync_storage::{db::NamedColumnFamily, rocksdb::DBPinnableSlice, RocksDB}; + use crate::{ errors::{DeserializeError, ErrorContext}, metrics::ApplyPatchStats, @@ -13,7 +14,6 @@ use crate::{ }, types::{InternalNode, LeafNode, Manifest, Nibbles, Node, NodeKey, Root, StaleNodeKey}, }; -use zksync_storage::{db::NamedColumnFamily, rocksdb::DBPinnableSlice, RocksDB}; /// RocksDB column families used by the tree. #[derive(Debug, Clone, Copy)] @@ -285,10 +285,10 @@ impl PruneDatabase for RocksDBWrapper { #[cfg(test)] mod tests { - use tempfile::TempDir; - use std::collections::{HashMap, HashSet}; + use tempfile::TempDir; + use super::*; use crate::storage::tests::{create_patch, generate_nodes}; diff --git a/core/lib/merkle_tree/src/storage/serialization.rs b/core/lib/merkle_tree/src/storage/serialization.rs index 15d67604cc0..09a06a3630c 100644 --- a/core/lib/merkle_tree/src/storage/serialization.rs +++ b/core/lib/merkle_tree/src/storage/serialization.rs @@ -26,7 +26,11 @@ impl LeafNode { let leaf_index = leb128::read::unsigned(&mut bytes).map_err(|err| { DeserializeErrorKind::Leb128(err).with_context(ErrorContext::LeafIndex) })?; - Ok(Self::new(full_key, value_hash, leaf_index)) + Ok(Self { + full_key, + value_hash, + leaf_index, + }) } pub(super) fn serialize(&self, buffer: &mut Vec) { @@ -296,9 +300,11 @@ impl Manifest { #[cfg(test)] mod tests { - use super::*; use zksync_types::H256; + use super::*; + use crate::types::TreeEntry; + #[test] fn serializing_manifest() { let manifest = Manifest::new(42, &()); @@ -369,7 +375,7 @@ mod tests { #[test] fn serializing_leaf_node() { - let leaf = LeafNode::new(513.into(), H256([4; 32]), 42); + let leaf = LeafNode::new(TreeEntry::new(513.into(), 42, H256([4; 32]))); let mut buffer = vec![]; leaf.serialize(&mut buffer); assert_eq!(buffer[..30], [0; 30]); // padding for the key @@ -426,7 +432,7 @@ mod tests { #[test] fn serializing_root_with_leaf() { - let leaf = LeafNode::new(513.into(), H256([4; 32]), 42); + let leaf = LeafNode::new(TreeEntry::new(513.into(), 42, H256([4; 32]))); let root = Root::new(1, leaf.into()); let mut buffer = vec![]; root.serialize(&mut buffer); diff --git a/core/lib/merkle_tree/src/storage/tests.rs b/core/lib/merkle_tree/src/storage/tests.rs index 958c906289e..8bcaab71081 100644 --- a/core/lib/merkle_tree/src/storage/tests.rs +++ b/core/lib/merkle_tree/src/storage/tests.rs @@ -1,3 +1,5 @@ +use 
std::collections::{HashMap, HashSet}; + use assert_matches::assert_matches; use rand::{ rngs::StdRng, @@ -5,16 +7,14 @@ use rand::{ Rng, SeedableRng, }; use test_casing::test_casing; - -use std::collections::{HashMap, HashSet}; +use zksync_crypto::hasher::blake2::Blake2Hasher; +use zksync_types::{H256, U256}; use super::*; use crate::{ hasher::{HasherWithStats, MerklePath}, types::{NodeKey, TreeInstruction, KEY_SIZE}, }; -use zksync_crypto::hasher::blake2::Blake2Hasher; -use zksync_types::{H256, U256}; pub(super) const FIRST_KEY: Key = U256([0, 0, 0, 0x_dead_beef_0000_0000]); const SECOND_KEY: Key = U256([0, 0, 0, 0x_dead_beef_0100_0000]); @@ -25,7 +25,7 @@ pub(super) fn generate_nodes(version: u64, nibble_counts: &[usize]) -> HashMap) -> V fn reading_keys_does_not_change_child_version() { let mut db = PatchSet::default(); let storage = Storage::new(&db, &(), 0, true); - let kvs = vec![(FIRST_KEY, H256([0; 32])), (SECOND_KEY, H256([1; 32]))]; + let kvs = vec![ + TreeEntry::new(FIRST_KEY, 1, H256([0; 32])), + TreeEntry::new(SECOND_KEY, 2, H256([1; 32])), + ]; let (_, patch) = storage.extend(kvs); db.apply_patch(patch); let storage = Storage::new(&db, &(), 1, true); let instructions = vec![ - (FIRST_KEY, TreeInstruction::Read), - (E_KEY, TreeInstruction::Write(H256([2; 32]))), + TreeInstruction::Read(FIRST_KEY), + TreeInstruction::Write(TreeEntry::new(E_KEY, 3, H256([2; 32]))), ]; let (_, patch) = storage.extend_with_proofs(instructions); @@ -327,12 +339,15 @@ fn reading_keys_does_not_change_child_version() { fn read_ops_are_not_reflected_in_patch() { let mut db = PatchSet::default(); let storage = Storage::new(&db, &(), 0, true); - let kvs = vec![(FIRST_KEY, H256([0; 32])), (SECOND_KEY, H256([1; 32]))]; + let kvs = vec![ + TreeEntry::new(FIRST_KEY, 1, H256([0; 32])), + TreeEntry::new(SECOND_KEY, 2, H256([1; 32])), + ]; let (_, patch) = storage.extend(kvs); db.apply_patch(patch); let storage = Storage::new(&db, &(), 1, true); - let instructions = vec![(FIRST_KEY, TreeInstruction::Read)]; + let instructions = vec![TreeInstruction::Read(FIRST_KEY)]; let (_, patch) = storage.extend_with_proofs(instructions); assert!(patch.patches_by_version[&1].nodes.is_empty()); } @@ -351,7 +366,7 @@ fn read_instructions_do_not_lead_to_copied_nodes(writes_per_block: u64) { let mut database = PatchSet::default(); let storage = Storage::new(&database, &(), 0, true); let kvs = (0..key_count) - .map(|i| (big_endian_key(i), H256::zero())) + .map(|i| TreeEntry::new(big_endian_key(i), i + 1, H256::zero())) .collect(); let (_, patch) = storage.extend(kvs); database.apply_patch(patch); @@ -361,10 +376,11 @@ fn read_instructions_do_not_lead_to_copied_nodes(writes_per_block: u64) { // Select some existing keys to read. Keys may be repeated, this is fine for our purpose. 
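    // (Reads are now spelled `TreeInstruction::Read(key)` and do not mutate the
    // tree, so duplicated keys cannot inflate the copied-node count this test
    // measures.)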
let reads = (0..writes_per_block).map(|_| { let key = big_endian_key(rng.gen_range(0..key_count)); - (key, TreeInstruction::Read) + TreeInstruction::Read(key) + }); + let writes = (key_count..key_count + writes_per_block).map(|i| { + TreeInstruction::Write(TreeEntry::new(big_endian_key(i), i + 1, H256::zero())) }); - let writes = (key_count..key_count + writes_per_block) - .map(|i| (big_endian_key(i), TreeInstruction::Write(H256::zero()))); let mut instructions: Vec<_> = reads.chain(writes).collect(); instructions.shuffle(&mut rng); @@ -400,7 +416,7 @@ fn replaced_keys_are_correctly_tracked(writes_per_block: usize, with_proofs: boo let mut database = PatchSet::default(); let storage = Storage::new(&database, &(), 0, true); let kvs = (0..100) - .map(|i| (big_endian_key(i), H256::zero())) + .map(|i| TreeEntry::new(big_endian_key(i), i + 1, H256::zero())) .collect(); let (_, patch) = storage.extend(kvs); @@ -412,11 +428,11 @@ fn replaced_keys_are_correctly_tracked(writes_per_block: usize, with_proofs: boo let updates = (0..100) .choose_multiple(&mut rng, writes_per_block) .into_iter() - .map(|i| (big_endian_key(i), H256::zero())); + .map(|i| TreeEntry::new(big_endian_key(i), i + 1, H256::zero())); let storage = Storage::new(&database, &(), new_version, true); let patch = if with_proofs { - let instructions = updates.map(|(key, value)| (key, TreeInstruction::Write(value))); + let instructions = updates.map(TreeInstruction::Write); storage.extend_with_proofs(instructions.collect()).1 } else { storage.extend(updates.collect()).1 @@ -454,14 +470,18 @@ fn assert_replaced_keys(db: &PatchSet, patch: &PatchSet) { #[test] fn tree_handles_keys_at_terminal_level() { let mut db = PatchSet::default(); - let kvs = (0_u32..100) - .map(|i| (Key::from(i), ValueHash::zero())) + let kvs = (0_u64..100) + .map(|i| TreeEntry::new(Key::from(i), i + 1, ValueHash::zero())) .collect(); let (_, patch) = Storage::new(&db, &(), 0, true).extend(kvs); db.apply_patch(patch); // Overwrite a key and check that we don't panic. 
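    // (The keys 0..100 differ only in their lowest bits, so their leaves sit at
    // the deepest, terminal nibble level; overwriting exercises the path where
    // the tree cannot split nodes any further.)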
- let new_kvs = vec![(Key::from(0), ValueHash::from_low_u64_be(1))]; + let new_kvs = vec![TreeEntry::new( + Key::from(0), + 1, + ValueHash::from_low_u64_be(1), + )]; let (_, patch) = Storage::new(&db, &(), 1, true).extend(new_kvs); assert_eq!( @@ -483,7 +503,7 @@ fn tree_handles_keys_at_terminal_level() { #[test] fn recovery_flattens_node_versions() { let recovery_version = 100; - let recovery_entries = (0_u64..10).map(|i| RecoveryEntry { + let recovery_entries = (0_u64..10).map(|i| TreeEntry { key: Key::from(i) << 252, // the first key nibbles are distinct value: ValueHash::zero(), leaf_index: i + 1, @@ -516,7 +536,7 @@ fn recovery_flattens_node_versions() { #[test_casing(7, [256, 4, 5, 20, 69, 127, 128])] fn recovery_with_node_hierarchy(chunk_size: usize) { let recovery_version = 100; - let recovery_entries = (0_u64..256).map(|i| RecoveryEntry { + let recovery_entries = (0_u64..256).map(|i| TreeEntry { key: Key::from(i) << 248, // the first two key nibbles are distinct value: ValueHash::zero(), leaf_index: i + 1, @@ -567,7 +587,7 @@ fn recovery_with_node_hierarchy(chunk_size: usize) { #[test_casing(7, [256, 5, 7, 20, 59, 127, 128])] fn recovery_with_deep_node_hierarchy(chunk_size: usize) { let recovery_version = 1_000; - let recovery_entries = (0_u64..256).map(|i| RecoveryEntry { + let recovery_entries = (0_u64..256).map(|i| TreeEntry { key: Key::from(i), // the last two key nibbles are distinct value: ValueHash::zero(), leaf_index: i + 1, @@ -630,7 +650,7 @@ fn recovery_with_deep_node_hierarchy(chunk_size: usize) { fn recovery_workflow_with_multiple_stages() { let mut db = PatchSet::default(); let recovery_version = 100; - let recovery_entries = (0_u64..100).map(|i| RecoveryEntry { + let recovery_entries = (0_u64..100).map(|i| TreeEntry { key: Key::from(i), value: ValueHash::zero(), leaf_index: i, @@ -640,7 +660,7 @@ fn recovery_workflow_with_multiple_stages() { assert_eq!(patch.root(recovery_version).unwrap().leaf_count(), 100); db.apply_patch(patch); - let more_recovery_entries = (100_u64..200).map(|i| RecoveryEntry { + let more_recovery_entries = (100_u64..200).map(|i| TreeEntry { key: Key::from(i), value: ValueHash::zero(), leaf_index: i, @@ -653,7 +673,7 @@ fn recovery_workflow_with_multiple_stages() { // Check that all entries can be accessed let storage = Storage::new(&db, &(), recovery_version + 1, true); - let instructions = (0_u32..200).map(|i| (Key::from(i), TreeInstruction::Read)); + let instructions = (0_u32..200).map(|i| TreeInstruction::Read(Key::from(i))); let (output, _) = storage.extend_with_proofs(instructions.collect()); assert_eq!(output.leaf_count, 200); assert_eq!(output.logs.len(), 200); @@ -687,17 +707,15 @@ fn test_recovery_pruning_equivalence( ); let mut rng = StdRng::seed_from_u64(RNG_SEED); - let kvs = (0..100).map(|i| { - ( - U256([rng.gen(), rng.gen(), rng.gen(), rng.gen()]), - ValueHash::repeat_byte(i), - ) + let entries = (0..100).map(|i| { + let key = U256([rng.gen(), rng.gen(), rng.gen(), rng.gen()]); + TreeEntry::new(key, u64::from(i) + 1, ValueHash::repeat_byte(i)) }); - let kvs: Vec<_> = kvs.collect(); + let entries: Vec<_> = entries.collect(); // Add `kvs` into the tree in several commits. 
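    // (One chunk per version: the chunk index below doubles as the tree version.)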
let mut db = PatchSet::default(); - for (version, chunk) in kvs.chunks(chunk_size).enumerate() { + for (version, chunk) in entries.chunks(chunk_size).enumerate() { let (_, patch) = Storage::new(&db, hasher, version as u64, true).extend(chunk.to_vec()); db.apply_patch(patch); } @@ -716,11 +734,7 @@ fn test_recovery_pruning_equivalence( // Generate recovery entries. let recovery_entries = all_nodes.values().filter_map(|node| { if let Node::Leaf(leaf) = node { - return Some(RecoveryEntry { - key: leaf.full_key, - value: leaf.value_hash, - leaf_index: leaf.leaf_index, - }); + return Some(TreeEntry::from(*leaf)); } None }); diff --git a/core/lib/merkle_tree/src/types/internal.rs b/core/lib/merkle_tree/src/types/internal.rs index 5e875f6e28a..06923bac33f 100644 --- a/core/lib/merkle_tree/src/types/internal.rs +++ b/core/lib/merkle_tree/src/types/internal.rs @@ -4,10 +4,9 @@ use std::{fmt, num::NonZeroU64}; -use zksync_types::{H256, U256}; - use crate::{ hasher::{HashTree, InternalNodeCache}, + types::{Key, TreeEntry, ValueHash}, utils::SmallMap, }; @@ -86,6 +85,15 @@ pub struct Manifest { } impl Manifest { + /// Returns the version of the tree that is currently being recovered. + pub fn recovered_version(&self) -> Option { + if self.tags.as_ref()?.is_recovering { + Some(self.version_count.checked_sub(1)?) + } else { + None + } + } + #[cfg(test)] pub(crate) fn new(version_count: u64, hasher: &dyn HashTree) -> Self { Self { @@ -306,12 +314,12 @@ impl NodeKey { #[allow(clippy::cast_possible_truncation)] pub(crate) fn to_db_key(self) -> Vec { let nibbles_byte_len = (self.nibbles.nibble_count + 1) / 2; - // ^ equivalent to ceil(self.nibble_count / 2) + // ^ equivalent to `ceil(self.nibble_count / 2)` let mut bytes = Vec::with_capacity(9 + nibbles_byte_len); // ^ 8 bytes for `version` + 1 byte for nibble count bytes.extend_from_slice(&self.version.to_be_bytes()); bytes.push(self.nibbles.nibble_count as u8); - // ^ conversion is safe: nibble_count <= 64 + // ^ conversion is safe: `nibble_count <= 64` bytes.extend_from_slice(&self.nibbles.bytes[..nibbles_byte_len]); bytes } @@ -323,11 +331,6 @@ impl fmt::Display for NodeKey { } } -/// Key stored in the tree. -pub type Key = U256; -/// Hashed value stored in the tree. -pub type ValueHash = H256; - /// Leaf node of the tree. #[derive(Debug, Clone, Copy)] #[cfg_attr(test, derive(PartialEq, Eq))] @@ -338,13 +341,18 @@ pub struct LeafNode { } impl LeafNode { - pub(crate) fn new(full_key: Key, value_hash: ValueHash, leaf_index: u64) -> Self { + pub(crate) fn new(entry: TreeEntry) -> Self { Self { - full_key, - value_hash, - leaf_index, + full_key: entry.key, + value_hash: entry.value, + leaf_index: entry.leaf_index, } } + + pub(crate) fn update_from(&mut self, entry: TreeEntry) { + self.value_hash = entry.value; + self.leaf_index = entry.leaf_index; + } } /// Reference to a child in an [`InternalNode`]. @@ -555,10 +563,12 @@ impl StaleNodeKey { #[cfg(test)] mod tests { + use zksync_types::U256; + use super::*; // `U256` uses little-endian `u64` ordering; i.e., this is - // 0x_dead_beef_0000_0000_.._0000. + // `0x_dead_beef_0000_0000_.._0000.` const TEST_KEY: Key = U256([0, 0, 0, 0x_dead_beef_0000_0000]); #[test] diff --git a/core/lib/merkle_tree/src/types/mod.rs b/core/lib/merkle_tree/src/types/mod.rs index 6988735ec02..43a3922da86 100644 --- a/core/lib/merkle_tree/src/types/mod.rs +++ b/core/lib/merkle_tree/src/types/mod.rs @@ -1,26 +1,57 @@ //! Basic storage types. 
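The new `Manifest::recovered_version()` helper a few hunks above is plain `Option` chaining; here is its logic lifted into a standalone function for illustration (the real method reads `self.tags` and `self.version_count` rather than taking arguments):

```rust
fn recovered_version(is_recovering: Option<bool>, version_count: u64) -> Option<u64> {
    // Mirrors `Manifest::recovered_version()`: `None` unless tags exist and
    // mark the tree as recovering; otherwise the version being recovered is
    // the last one (`version_count - 1`), with underflow mapped to `None`.
    if is_recovering? {
        version_count.checked_sub(1)
    } else {
        None
    }
}

fn main() {
    assert_eq!(recovered_version(Some(true), 101), Some(100));
    assert_eq!(recovered_version(Some(false), 101), None);
    assert_eq!(recovered_version(None, 101), None);
    assert_eq!(recovered_version(Some(true), 0), None); // no versions yet
}
```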
-mod internal; +use zksync_types::{H256, U256}; pub(crate) use self::internal::{ ChildRef, Nibbles, NibblesBytes, StaleNodeKey, TreeTags, HASH_SIZE, KEY_SIZE, TREE_DEPTH, }; -pub use self::internal::{InternalNode, Key, LeafNode, Manifest, Node, NodeKey, Root, ValueHash}; +pub use self::internal::{InternalNode, LeafNode, Manifest, Node, NodeKey, Root}; + +mod internal; + +/// Key stored in the tree. +pub type Key = U256; +/// Hash type of values and intermediate nodes in the tree. +pub type ValueHash = H256; /// Instruction to read or write a tree value at a certain key. #[derive(Debug, Clone, Copy, PartialEq, Eq)] -pub enum TreeInstruction { - /// Read the current tree value. - Read, - /// Write the specified value. - Write(ValueHash), +pub enum TreeInstruction { + /// Read the current tree value at the specified key. + Read(K), + /// Write the specified entry. + Write(TreeEntry), +} + +impl TreeInstruction { + /// Creates a write instruction. + pub fn write(key: K, leaf_index: u64, value: ValueHash) -> Self { + Self::Write(TreeEntry::new(key, leaf_index, value)) + } + + /// Returns the tree key this instruction is related to. + pub fn key(&self) -> K { + match self { + Self::Read(key) => *key, + Self::Write(entry) => entry.key, + } + } + + pub(crate) fn map_key(&self, map_fn: impl FnOnce(&K) -> U) -> TreeInstruction { + match self { + Self::Read(key) => TreeInstruction::Read(map_fn(key)), + Self::Write(entry) => TreeInstruction::Write(entry.map_key(map_fn)), + } + } } /// Entry in a Merkle tree associated with a key. -#[derive(Debug, Clone, Copy)] -pub struct TreeEntry { +#[derive(Debug, Clone, Copy, PartialEq, Eq)] +pub struct TreeEntry { + /// Tree key. + pub key: K, /// Value associated with the key. - pub value_hash: ValueHash, + pub value: ValueHash, /// Enumeration index of the key. pub leaf_index: u64, } @@ -28,23 +59,40 @@ pub struct TreeEntry { impl From for TreeEntry { fn from(leaf: LeafNode) -> Self { Self { - value_hash: leaf.value_hash, + key: leaf.full_key, + value: leaf.value_hash, leaf_index: leaf.leaf_index, } } } +impl TreeEntry { + /// Creates a new entry with the specified fields. + pub fn new(key: K, leaf_index: u64, value: ValueHash) -> Self { + Self { + key, + value, + leaf_index, + } + } + + pub(crate) fn map_key(&self, map_fn: impl FnOnce(&K) -> U) -> TreeEntry { + TreeEntry::new(map_fn(&self.key), self.leaf_index, self.value) + } +} + impl TreeEntry { - pub(crate) fn empty() -> Self { + pub(crate) fn empty(key: Key) -> Self { Self { - value_hash: ValueHash::zero(), + key, + value: ValueHash::zero(), leaf_index: 0, } } - /// Returns `true` iff this entry encodes lack of a value. + /// Returns `true` if and only if this entry encodes lack of a value. pub fn is_empty(&self) -> bool { - self.leaf_index == 0 && self.value_hash.is_zero() + self.leaf_index == 0 && self.value.is_zero() } pub(crate) fn with_merkle_path(self, merkle_path: Vec) -> TreeEntryWithProof { @@ -53,6 +101,12 @@ impl TreeEntry { merkle_path, } } + + /// Replaces the value in this entry and returns the modified entry. + #[must_use] + pub fn with_value(self, value: H256) -> Self { + Self { value, ..self } + } } /// Entry in a Merkle tree together with a proof of authenticity. @@ -63,7 +117,7 @@ pub struct TreeEntryWithProof { /// Proof of the value authenticity. 
/// /// If specified, a proof is the Merkle path consisting of up to 256 hashes - /// ordered starting the bottommost level of the tree (one with leaves) and ending before + /// ordered starting the bottom-most level of the tree (one with leaves) and ending before /// the root level. /// /// If the path is not full (contains <256 hashes), it means that the hashes at the beginning @@ -86,10 +140,7 @@ pub struct BlockOutput { #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub enum TreeLogEntry { /// A node was inserted into the tree. - Inserted { - /// Index of the inserted node. - leaf_index: u64, - }, + Inserted, /// A node with the specified index was updated. Updated { /// Index of the updated node. @@ -109,18 +160,14 @@ pub enum TreeLogEntry { } impl TreeLogEntry { - pub(crate) fn insert(leaf_index: u64) -> Self { - Self::Inserted { leaf_index } - } - - pub(crate) fn update(previous_value: ValueHash, leaf_index: u64) -> Self { + pub(crate) fn update(leaf_index: u64, previous_value: ValueHash) -> Self { Self::Updated { leaf_index, previous_value, } } - pub(crate) fn read(value: ValueHash, leaf_index: u64) -> Self { + pub(crate) fn read(leaf_index: u64, value: ValueHash) -> Self { Self::Read { leaf_index, value } } @@ -152,7 +199,7 @@ pub struct TreeLogEntryWithProof
<P = MerklePath
> { /// Log entry about an atomic operation on the tree. pub base: TreeLogEntry, /// Merkle path to prove log authenticity. The path consists of up to 256 hashes - /// ordered starting the bottommost level of the tree (one with leaves) and ending before + /// ordered starting the bottom-most level of the tree (one with leaves) and ending before /// the root level. /// /// If the path is not full (contains <256 hashes), it means that the hashes at the beginning diff --git a/core/lib/merkle_tree/src/utils.rs b/core/lib/merkle_tree/src/utils.rs index 9542b24bbd3..4771a940f2c 100644 --- a/core/lib/merkle_tree/src/utils.rs +++ b/core/lib/merkle_tree/src/utils.rs @@ -114,11 +114,6 @@ impl SmallMap { } } -pub(crate) fn increment_counter(counter: &mut u64) -> u64 { - *counter += 1; - *counter -} - pub(crate) fn find_diverging_bit(lhs: Key, rhs: Key) -> usize { let diff = lhs ^ rhs; diff.leading_zeros() as usize diff --git a/core/lib/merkle_tree/tests/integration/common.rs b/core/lib/merkle_tree/tests/integration/common.rs index fd9e00855c2..28c3827827a 100644 --- a/core/lib/merkle_tree/tests/integration/common.rs +++ b/core/lib/merkle_tree/tests/integration/common.rs @@ -1,27 +1,25 @@ //! Shared functionality. -use once_cell::sync::Lazy; - use std::collections::HashMap; +use once_cell::sync::Lazy; use zksync_crypto::hasher::{blake2::Blake2Hasher, Hasher}; -use zksync_merkle_tree::{HashTree, TreeInstruction}; +use zksync_merkle_tree::{HashTree, TreeEntry, TreeInstruction}; use zksync_types::{AccountTreeId, Address, StorageKey, H256, U256}; -pub fn generate_key_value_pairs(indexes: impl Iterator) -> Vec<(U256, H256)> { +pub fn generate_key_value_pairs(indexes: impl Iterator) -> Vec { let address: Address = "4b3af74f66ab1f0da3f2e4ec7a3cb99baf1af7b2".parse().unwrap(); let kvs = indexes.map(|idx| { let key = H256::from_low_u64_be(idx); let key = StorageKey::new(AccountTreeId::new(address), key); - (key.hashed_key_u256(), H256::from_low_u64_be(idx + 1)) + let value = H256::from_low_u64_be(idx + 1); + TreeEntry::new(key.hashed_key_u256(), idx + 1, value) }); kvs.collect() } -pub fn compute_tree_hash(kvs: impl Iterator) -> H256 { - let kvs_with_indices = kvs - .enumerate() - .map(|(i, (key, value))| (key, value, i as u64 + 1)); +pub fn compute_tree_hash(kvs: impl Iterator) -> H256 { + let kvs_with_indices = kvs.map(|entry| (entry.key, entry.value, entry.leaf_index)); compute_tree_hash_with_indices(kvs_with_indices) } @@ -70,17 +68,18 @@ fn compute_tree_hash_with_indices(kvs: impl Iterator) } // Computing the expected hash takes some time in the debug mode, so we memoize it. -pub static KVS_AND_HASH: Lazy<(Vec<(U256, H256)>, H256)> = Lazy::new(|| { - let kvs = generate_key_value_pairs(0..100); - let expected_hash = compute_tree_hash(kvs.iter().copied()); - (kvs, expected_hash) +pub static ENTRIES_AND_HASH: Lazy<(Vec, H256)> = Lazy::new(|| { + let entries = generate_key_value_pairs(0..100); + let expected_hash = compute_tree_hash(entries.iter().copied()); + (entries, expected_hash) }); -pub fn convert_to_writes(kvs: &[(U256, H256)]) -> Vec<(U256, TreeInstruction)> { - let kvs = kvs +pub fn convert_to_writes(entries: &[TreeEntry]) -> Vec { + entries .iter() - .map(|&(key, hash)| (key, TreeInstruction::Write(hash))); - kvs.collect() + .copied() + .map(TreeInstruction::Write) + .collect() } /// Emulates leaf index assignment in a real Merkle tree. 
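For orientation, the reshaped entry/instruction API in one place. A sketch assuming the generic key parameter defaults to the tree's `Key` type (`U256`), as the definitions earlier in this patch suggest:

```rust
use zksync_merkle_tree::{TreeEntry, TreeInstruction};
use zksync_types::{H256, U256};

fn main() {
    let entry = TreeEntry::new(U256::from(42_u64), 7, H256::repeat_byte(0xab));
    let write = TreeInstruction::Write(entry);
    let read: TreeInstruction<U256> = TreeInstruction::Read(U256::from(42_u64));

    // `key()` extracts the key uniformly from both variants.
    assert_eq!(write.key(), read.key());

    // The `write()` shorthand builds the same instruction.
    assert_eq!(
        TreeInstruction::write(U256::from(42_u64), 7, H256::repeat_byte(0xab)),
        write
    );

    // `with_value` returns a modified copy; key and leaf index are preserved.
    let updated = entry.with_value(H256::zero());
    assert_eq!(updated.leaf_index, 7);
    assert!(!updated.is_empty()); // "empty" = zero value AND zero leaf index
}
```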
@@ -88,22 +87,22 @@ pub fn convert_to_writes(kvs: &[(U256, H256)]) -> Vec<(U256, TreeInstruction)> { pub struct TreeMap(HashMap); impl TreeMap { - pub fn new(initial_entries: &[(U256, H256)]) -> Self { + pub fn new(initial_entries: &[TreeEntry]) -> Self { let map = initial_entries .iter() - .enumerate() - .map(|(i, (key, value))| (*key, (*value, i as u64 + 1))) + .map(|entry| (entry.key, (entry.value, entry.leaf_index))) .collect(); Self(map) } - pub fn extend(&mut self, kvs: &[(U256, H256)]) { - for &(key, new_value) in kvs { - if let Some((value, _)) = self.0.get_mut(&key) { - *value = new_value; + pub fn extend(&mut self, kvs: &[TreeEntry]) { + for &new_entry in kvs { + if let Some((value, leaf_index)) = self.0.get_mut(&new_entry.key) { + assert_eq!(*leaf_index, new_entry.leaf_index); // sanity check + *value = new_entry.value; } else { - let leaf_index = self.0.len() as u64 + 1; - self.0.insert(key, (new_value, leaf_index)); + self.0 + .insert(new_entry.key, (new_entry.value, new_entry.leaf_index)); } } } @@ -112,7 +111,7 @@ impl TreeMap { let entries = self .0 .iter() - .map(|(key, (value, idx))| (*key, *value, *idx)); + .map(|(key, (value, leaf_index))| (*key, *value, *leaf_index)); compute_tree_hash_with_indices(entries) } } diff --git a/core/lib/merkle_tree/tests/integration/consistency.rs b/core/lib/merkle_tree/tests/integration/consistency.rs index 7c1d69657bf..b6b424e431a 100644 --- a/core/lib/merkle_tree/tests/integration/consistency.rs +++ b/core/lib/merkle_tree/tests/integration/consistency.rs @@ -3,9 +3,9 @@ use rand::{rngs::StdRng, seq::SliceRandom, Rng, SeedableRng}; use tempfile::TempDir; +use zksync_merkle_tree::{MerkleTree, MerkleTreeColumnFamily, RocksDBWrapper}; use crate::common::generate_key_value_pairs; -use zksync_merkle_tree::{MerkleTree, MerkleTreeColumnFamily, RocksDBWrapper}; // Something (maybe RocksDB) makes the test below work very slowly in the debug mode; // thus, the number of test cases is conditionally reduced. @@ -26,7 +26,7 @@ fn five_thousand_angry_monkeys_vs_merkle_tree() { let kvs = generate_key_value_pairs(0..100); tree.extend(kvs); - tree.verify_consistency(0).unwrap(); + tree.verify_consistency(0, true).unwrap(); let mut raw_db = db.into_inner(); let cf = MerkleTreeColumnFamily::Tree; @@ -53,7 +53,9 @@ fn five_thousand_angry_monkeys_vs_merkle_tree() { raw_db.write(batch).unwrap(); let mut db = RocksDBWrapper::from(raw_db); - let err = MerkleTree::new(&mut db).verify_consistency(0).unwrap_err(); + let err = MerkleTree::new(&mut db) + .verify_consistency(0, true) + .unwrap_err(); println!("{err}"); // Restore the value back so that it doesn't influence the following cases. diff --git a/core/lib/merkle_tree/tests/integration/domain.rs b/core/lib/merkle_tree/tests/integration/domain.rs index d3b666c8849..e96b68fdade 100644 --- a/core/lib/merkle_tree/tests/integration/domain.rs +++ b/core/lib/merkle_tree/tests/integration/domain.rs @@ -1,20 +1,19 @@ //! Domain-specific tests. Taken almost verbatim from the previous tree implementation. 
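One behavioral change that is easy to miss in the test churn: `verify_consistency` gained a second boolean argument. Every call site in this patch passes `true`; the parameter name is not visible here, but given the PR's theme it presumably toggles validation of the caller-supplied leaf indices. Usage sketch:

```rust
use zksync_merkle_tree::{MerkleTree, PatchSet, TreeEntry};
use zksync_types::{H256, U256};

fn main() {
    let mut tree = MerkleTree::new(PatchSet::default());
    tree.extend(vec![TreeEntry::new(U256::from(1_u64), 1, H256::zero())]);
    // New second argument; `true` everywhere in this patch.
    tree.verify_consistency(0, true).unwrap();
}
```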
+use std::slice; + use serde::{Deserialize, Serialize}; use serde_with::{hex::Hex, serde_as}; use tempfile::TempDir; - -use std::slice; - use zksync_crypto::hasher::blake2::Blake2Hasher; -use zksync_merkle_tree::{domain::ZkSyncTree, HashTree}; +use zksync_merkle_tree::{domain::ZkSyncTree, HashTree, TreeEntry, TreeInstruction}; use zksync_storage::RocksDB; use zksync_system_constants::ACCOUNT_CODE_STORAGE_ADDRESS; use zksync_types::{ - proofs::StorageLogMetadata, AccountTreeId, Address, L1BatchNumber, StorageKey, StorageLog, H256, + proofs::StorageLogMetadata, AccountTreeId, Address, L1BatchNumber, StorageKey, H256, }; -fn gen_storage_logs() -> Vec { +fn gen_storage_logs() -> Vec> { let addrs = vec![ "4b3af74f66ab1f0da3f2e4ec7a3cb99baf1af7b2", "ef4bb7b21c5fe7432a7d63876cc59ecc23b46636", @@ -32,7 +31,11 @@ fn gen_storage_logs() -> Vec { proof_keys .zip(proof_values) - .map(|(proof_key, proof_value)| StorageLog::new_write_log(proof_key, proof_value)) + .enumerate() + .map(|(i, (proof_key, proof_value))| { + let entry = TreeEntry::new(proof_key, i as u64 + 1, proof_value); + TreeInstruction::Write(entry) + }) .collect() } @@ -43,7 +46,7 @@ fn basic_workflow() { let (metadata, expected_root_hash) = { let db = RocksDB::new(temp_dir.as_ref()); - let mut tree = ZkSyncTree::new_lightweight(db); + let mut tree = ZkSyncTree::new_lightweight(db.into()); let metadata = tree.process_l1_batch(&logs); tree.save(); tree.verify_consistency(L1BatchNumber(0)); @@ -54,7 +57,11 @@ fn basic_workflow() { assert_eq!(metadata.rollup_last_leaf_index, 101); assert_eq!(metadata.initial_writes.len(), logs.len()); for (write, log) in metadata.initial_writes.iter().zip(&logs) { - assert_eq!(write.value, log.value); + let expected_value = match log { + TreeInstruction::Write(entry) => entry.value, + TreeInstruction::Read(_) => unreachable!(), + }; + assert_eq!(write.value, expected_value); } assert!(metadata.repeated_writes.is_empty()); @@ -67,7 +74,7 @@ fn basic_workflow() { ); let db = RocksDB::new(temp_dir.as_ref()); - let tree = ZkSyncTree::new_lightweight(db); + let tree = ZkSyncTree::new_lightweight(db.into()); tree.verify_consistency(L1BatchNumber(0)); assert_eq!(tree.root_hash(), expected_root_hash); assert_eq!(tree.next_l1_batch_number(), L1BatchNumber(1)); @@ -81,7 +88,7 @@ fn basic_workflow_multiblock() { let expected_root_hash = { let db = RocksDB::new(temp_dir.as_ref()); - let mut tree = ZkSyncTree::new_lightweight(db); + let mut tree = ZkSyncTree::new_lightweight(db.into()); tree.use_dedicated_thread_pool(2); for block in blocks { tree.process_l1_batch(block); @@ -99,7 +106,7 @@ fn basic_workflow_multiblock() { ); let db = RocksDB::new(temp_dir.as_ref()); - let tree = ZkSyncTree::new_lightweight(db); + let tree = ZkSyncTree::new_lightweight(db.into()); assert_eq!(tree.root_hash(), expected_root_hash); assert_eq!(tree.next_l1_batch_number(), L1BatchNumber(12)); } @@ -108,7 +115,7 @@ fn basic_workflow_multiblock() { fn filtering_out_no_op_writes() { let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); let db = RocksDB::new(temp_dir.as_ref()); - let mut tree = ZkSyncTree::new(db); + let mut tree = ZkSyncTree::new(db.into()); let mut logs = gen_storage_logs(); let root_hash = tree.process_l1_batch(&logs).root_hash; tree.save(); @@ -124,7 +131,10 @@ fn filtering_out_no_op_writes() { // Add some actual repeated writes. 
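    // (Every third instruction gets its value overwritten; each such write must
    // then surface in the metadata as a repeated write rather than an initial one.)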
let mut expected_writes_count = 0; for log in logs.iter_mut().step_by(3) { - log.value = H256::repeat_byte(0xff); + let TreeInstruction::Write(entry) = log else { + unreachable!("Unexpected instruction: {log:?}"); + }; + entry.value = H256::repeat_byte(0xff); expected_writes_count += 1; } let new_metadata = tree.process_l1_batch(&logs); @@ -155,20 +165,22 @@ fn revert_blocks() { // Add couple of blocks of distinct keys/values let mut logs: Vec<_> = proof_keys .zip(proof_values) - .map(|(proof_key, proof_value)| StorageLog::new_write_log(proof_key, proof_value)) + .map(|(proof_key, proof_value)| { + let entry = TreeEntry::new(proof_key, proof_value.to_low_u64_be() + 1, proof_value); + TreeInstruction::Write(entry) + }) .collect(); // Add a block with repeated keys let extra_logs = (0..block_size).map(move |i| { - StorageLog::new_write_log( - StorageKey::new(AccountTreeId::new(address), H256::from_low_u64_be(i as u64)), - H256::from_low_u64_be((i + 1) as u64), - ) + let key = StorageKey::new(AccountTreeId::new(address), H256::from_low_u64_be(i as u64)); + let entry = TreeEntry::new(key, i as u64 + 1, H256::from_low_u64_be(i as u64 + 1)); + TreeInstruction::Write(entry) }); logs.extend(extra_logs); let mirror_logs = logs.clone(); let tree_metadata: Vec<_> = { - let mut tree = ZkSyncTree::new_lightweight(storage); + let mut tree = ZkSyncTree::new_lightweight(storage.into()); let metadata = logs.chunks(block_size).map(|chunk| { let metadata = tree.process_l1_batch(chunk); tree.save(); @@ -200,7 +212,7 @@ fn revert_blocks() { // Revert the last block. let storage = RocksDB::new(temp_dir.as_ref()); { - let mut tree = ZkSyncTree::new_lightweight(storage); + let mut tree = ZkSyncTree::new_lightweight(storage.into()); assert_eq!(tree.root_hash(), tree_metadata.last().unwrap().root_hash); tree.revert_logs(L1BatchNumber(3)); assert_eq!(tree.root_hash(), tree_metadata[3].root_hash); @@ -210,7 +222,7 @@ fn revert_blocks() { // Revert two more blocks. 
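    // (Back from batch 3 to batch 1; the root hash must match the metadata
    // recorded for batch 1, and repeating the same revert below must be a no-op.)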
let storage = RocksDB::new(temp_dir.as_ref()); { - let mut tree = ZkSyncTree::new_lightweight(storage); + let mut tree = ZkSyncTree::new_lightweight(storage.into()); tree.revert_logs(L1BatchNumber(1)); assert_eq!(tree.root_hash(), tree_metadata[1].root_hash); tree.save(); @@ -219,7 +231,7 @@ fn revert_blocks() { // Revert two more blocks second time; the result should be the same let storage = RocksDB::new(temp_dir.as_ref()); { - let mut tree = ZkSyncTree::new_lightweight(storage); + let mut tree = ZkSyncTree::new_lightweight(storage.into()); tree.revert_logs(L1BatchNumber(1)); assert_eq!(tree.root_hash(), tree_metadata[1].root_hash); tree.save(); @@ -229,14 +241,14 @@ fn revert_blocks() { let storage = RocksDB::new(temp_dir.as_ref()); { let storage_log = mirror_logs.get(3 * block_size).unwrap(); - let mut tree = ZkSyncTree::new_lightweight(storage); + let mut tree = ZkSyncTree::new_lightweight(storage.into()); tree.process_l1_batch(slice::from_ref(storage_log)); tree.save(); } // check saved block number let storage = RocksDB::new(temp_dir.as_ref()); - let tree = ZkSyncTree::new_lightweight(storage); + let tree = ZkSyncTree::new_lightweight(storage.into()); assert_eq!(tree.next_l1_batch_number(), L1BatchNumber(3)); } @@ -245,7 +257,7 @@ fn reset_tree() { let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); let storage = RocksDB::new(temp_dir.as_ref()); let logs = gen_storage_logs(); - let mut tree = ZkSyncTree::new_lightweight(storage); + let mut tree = ZkSyncTree::new_lightweight(storage.into()); let empty_root_hash = tree.root_hash(); logs.chunks(5).fold(empty_root_hash, |hash, chunk| { @@ -267,17 +279,17 @@ fn read_logs() { let write_metadata = { let db = RocksDB::new(temp_dir.as_ref()); - let mut tree = ZkSyncTree::new_lightweight(db); + let mut tree = ZkSyncTree::new_lightweight(db.into()); let metadata = tree.process_l1_batch(&logs); tree.save(); metadata }; let db = RocksDB::new(temp_dir.as_ref()); - let mut tree = ZkSyncTree::new_lightweight(db); + let mut tree = ZkSyncTree::new_lightweight(db.into()); let read_logs: Vec<_> = logs .into_iter() - .map(|log| StorageLog::new_read_log(log.key, log.value)) + .map(|instr| TreeInstruction::Read(instr.key())) .collect(); let read_metadata = tree.process_l1_batch(&read_logs); @@ -285,14 +297,13 @@ fn read_logs() { } fn create_write_log( + leaf_index: u64, address: Address, address_storage_key: [u8; 32], value: [u8; 32], -) -> StorageLog { - StorageLog::new_write_log( - StorageKey::new(AccountTreeId::new(address), H256(address_storage_key)), - H256(value), - ) +) -> TreeInstruction { + let key = StorageKey::new(AccountTreeId::new(address), H256(address_storage_key)); + TreeInstruction::Write(TreeEntry::new(key, leaf_index, H256(value))) } fn subtract_from_max_value(diff: u8) -> [u8; 32] { @@ -305,7 +316,7 @@ fn subtract_from_max_value(diff: u8) -> [u8; 32] { fn root_hash_compatibility() { let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); let db = RocksDB::new(temp_dir.as_ref()); - let mut tree = ZkSyncTree::new_lightweight(db); + let mut tree = ZkSyncTree::new_lightweight(db.into()); assert_eq!( tree.root_hash(), H256([ @@ -315,28 +326,33 @@ fn root_hash_compatibility() { ); let storage_logs = vec![ - create_write_log(ACCOUNT_CODE_STORAGE_ADDRESS, [0; 32], [1; 32]), + create_write_log(1, ACCOUNT_CODE_STORAGE_ADDRESS, [0; 32], [1; 32]), create_write_log( + 2, Address::from_low_u64_be(9223372036854775808), [254; 32], subtract_from_max_value(1), ), create_write_log( + 3, 
Address::from_low_u64_be(9223372036854775809), [253; 32], subtract_from_max_value(2), ), create_write_log( + 4, Address::from_low_u64_be(9223372036854775810), [252; 32], subtract_from_max_value(3), ), create_write_log( + 5, Address::from_low_u64_be(9223372036854775811), [251; 32], subtract_from_max_value(4), ), create_write_log( + 6, Address::from_low_u64_be(9223372036854775812), [250; 32], subtract_from_max_value(5), @@ -357,7 +373,7 @@ fn root_hash_compatibility() { fn process_block_idempotency_check() { let temp_dir = TempDir::new().expect("failed to get temporary directory for RocksDB"); let rocks_db = RocksDB::new(temp_dir.as_ref()); - let mut tree = ZkSyncTree::new_lightweight(rocks_db); + let mut tree = ZkSyncTree::new_lightweight(rocks_db.into()); let logs = gen_storage_logs(); let tree_metadata = tree.process_l1_batch(&logs); @@ -420,7 +436,7 @@ fn witness_workflow() { let (first_chunk, _) = logs.split_at(logs.len() / 2); let db = RocksDB::new(temp_dir.as_ref()); - let mut tree = ZkSyncTree::new(db); + let mut tree = ZkSyncTree::new(db.into()); let metadata = tree.process_l1_batch(first_chunk); let job = metadata.witness.unwrap(); assert_eq!(job.next_enumeration_index(), 1); @@ -450,7 +466,7 @@ fn witnesses_with_multiple_blocks() { let logs = gen_storage_logs(); let db = RocksDB::new(temp_dir.as_ref()); - let mut tree = ZkSyncTree::new(db); + let mut tree = ZkSyncTree::new(db.into()); let empty_tree_hashes: Vec<_> = (0..256) .map(|i| Blake2Hasher.empty_subtree_hash(i)) .collect(); diff --git a/core/lib/merkle_tree/tests/integration/merkle_tree.rs b/core/lib/merkle_tree/tests/integration/merkle_tree.rs index 9f3eb970cd3..117ea0db4d9 100644 --- a/core/lib/merkle_tree/tests/integration/merkle_tree.rs +++ b/core/lib/merkle_tree/tests/integration/merkle_tree.rs @@ -1,18 +1,19 @@ //! Tests not tied to the zksync domain. 
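Proof verification also got simpler: a proven entry now carries its own key, so `verify` drops the separate key argument and callers can compare `base` against the original `TreeEntry` wholesale. A compact sketch using the hasher and API exercised throughout these tests:

```rust
use zksync_crypto::hasher::blake2::Blake2Hasher;
use zksync_merkle_tree::{MerkleTree, PatchSet, TreeEntry};
use zksync_types::{H256, U256};

fn main() {
    let mut tree = MerkleTree::new(PatchSet::default());
    let entry = TreeEntry::new(U256::from(1_u64), 1, H256::repeat_byte(1));
    let root_hash = tree.extend(vec![entry]).root_hash;

    let proofs = tree.entries_with_proofs(0, &[entry.key]).unwrap();
    for proof in proofs {
        // `base` is the full entry (key included), not a bare value hash.
        assert_eq!(proof.base, entry);
        proof.verify(&Blake2Hasher, root_hash);
    }
}
```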
-use rand::{rngs::StdRng, seq::SliceRandom, Rng, SeedableRng}; -use test_casing::test_casing; - use std::{cmp, mem}; +use rand::{rngs::StdRng, seq::SliceRandom, Rng, SeedableRng}; +use test_casing::test_casing; use zksync_crypto::hasher::blake2::Blake2Hasher; use zksync_merkle_tree::{ - Database, HashTree, MerkleTree, PatchSet, Patched, TreeInstruction, TreeLogEntry, + Database, HashTree, MerkleTree, PatchSet, Patched, TreeEntry, TreeInstruction, TreeLogEntry, TreeRangeDigest, }; use zksync_types::{AccountTreeId, Address, StorageKey, H256, U256}; -use crate::common::{compute_tree_hash, convert_to_writes, generate_key_value_pairs, KVS_AND_HASH}; +use crate::common::{ + compute_tree_hash, convert_to_writes, generate_key_value_pairs, ENTRIES_AND_HASH, +}; #[test] fn compute_tree_hash_works_correctly() { @@ -25,7 +26,7 @@ fn compute_tree_hash_works_correctly() { let address: Address = "4b3af74f66ab1f0da3f2e4ec7a3cb99baf1af7b2".parse().unwrap(); let key = StorageKey::new(AccountTreeId::new(address), H256::zero()); let key = key.hashed_key_u256(); - let hash = compute_tree_hash([(key, H256([1; 32]))].into_iter()); + let hash = compute_tree_hash([TreeEntry::new(key, 1, H256([1; 32]))].into_iter()); assert_eq!(hash, EXPECTED_HASH); } @@ -59,7 +60,7 @@ fn output_proofs_are_computed_correctly_on_empty_tree(kv_count: u64) { let reads = instructions .iter() - .map(|(key, _)| (*key, TreeInstruction::Read)); + .map(|instr| TreeInstruction::Read(instr.key())); let mut reads: Vec<_> = reads.collect(); reads.shuffle(&mut rng); let output = tree.extend_with_proofs(reads.clone()); @@ -77,25 +78,26 @@ fn entry_proofs_are_computed_correctly_on_empty_tree(kv_count: u64) { let expected_hash = compute_tree_hash(kvs.iter().copied()); tree.extend(kvs.clone()); - let existing_keys: Vec<_> = kvs.iter().map(|(key, _)| *key).collect(); + let existing_keys: Vec<_> = kvs.iter().map(|entry| entry.key).collect(); let entries = tree.entries_with_proofs(0, &existing_keys).unwrap(); assert_eq!(entries.len(), existing_keys.len()); - for ((key, value), entry) in kvs.iter().zip(entries) { - entry.verify(&Blake2Hasher, *key, expected_hash); - assert_eq!(entry.base.value_hash, *value); + for (input_entry, entry) in kvs.iter().zip(entries) { + entry.verify(&Blake2Hasher, expected_hash); + assert_eq!(entry.base, *input_entry); } // Test some keys adjacent to existing ones. 
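    // ("Adjacent" = an existing key with exactly one bit flipped at a random
    // position; such keys are absent from the tree but may share long prefixes
    // with real leaves, which stresses proof generation for missing keys.)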
- let adjacent_keys = kvs.iter().flat_map(|(key, _)| { + let adjacent_keys = kvs.iter().flat_map(|entry| { + let key = entry.key; [ - *key ^ (U256::one() << rng.gen_range(0..256)), - *key ^ (U256::one() << rng.gen_range(0..256)), - *key ^ (U256::one() << rng.gen_range(0..256)), + key ^ (U256::one() << rng.gen_range(0..256)), + key ^ (U256::one() << rng.gen_range(0..256)), + key ^ (U256::one() << rng.gen_range(0..256)), ] }); let random_keys = generate_key_value_pairs(kv_count..(kv_count * 2)) .into_iter() - .map(|(key, _)| key); + .map(|entry| entry.key); let mut missing_keys: Vec<_> = adjacent_keys.chain(random_keys).collect(); missing_keys.shuffle(&mut rng); @@ -103,7 +105,8 @@ fn entry_proofs_are_computed_correctly_on_empty_tree(kv_count: u64) { assert_eq!(entries.len(), missing_keys.len()); for (key, entry) in missing_keys.iter().zip(entries) { assert!(entry.base.is_empty()); - entry.verify(&Blake2Hasher, *key, expected_hash); + assert_eq!(entry.base.key, *key); + entry.verify(&Blake2Hasher, expected_hash); } } @@ -117,10 +120,13 @@ fn proofs_are_computed_correctly_for_mixed_instructions() { let output = tree.extend(kvs.clone()); let old_root_hash = output.root_hash; - let reads = kvs.iter().map(|(key, _)| (*key, TreeInstruction::Read)); + let reads = kvs.iter().map(|entry| TreeInstruction::Read(entry.key)); let mut instructions: Vec<_> = reads.collect(); // Overwrite all keys in the tree. - let writes: Vec<_> = kvs.iter().map(|(key, _)| (*key, H256::zero())).collect(); + let writes: Vec<_> = kvs + .iter() + .map(|entry| entry.with_value(H256::zero())) + .collect(); let expected_hash = compute_tree_hash(writes.iter().copied()); instructions.extend(convert_to_writes(&writes)); instructions.shuffle(&mut rng); @@ -145,7 +151,7 @@ fn proofs_are_computed_correctly_for_missing_keys() { let mut instructions = convert_to_writes(&kvs); let missing_reads = generate_key_value_pairs(20..50) .into_iter() - .map(|(key, _)| (key, TreeInstruction::Read)); + .map(|entry| TreeInstruction::Read(entry.key)); instructions.extend(missing_reads); instructions.shuffle(&mut rng); @@ -161,7 +167,7 @@ fn proofs_are_computed_correctly_for_missing_keys() { } fn test_intermediate_commits(db: &mut impl Database, chunk_size: usize) { - let (kvs, expected_hash) = &*KVS_AND_HASH; + let (kvs, expected_hash) = &*ENTRIES_AND_HASH; let mut final_hash = H256::zero(); let mut tree = MerkleTree::new(db); for chunk in kvs.chunks(chunk_size) { @@ -172,7 +178,7 @@ fn test_intermediate_commits(db: &mut impl Database, chunk_size: usize) { let latest_version = tree.latest_version().unwrap(); for version in 0..=latest_version { - tree.verify_consistency(version).unwrap(); + tree.verify_consistency(version, true).unwrap(); } } @@ -183,7 +189,7 @@ fn root_hash_is_computed_correctly_with_intermediate_commits(chunk_size: usize) #[test_casing(6, [3, 5, 10, 17, 28, 42])] fn output_proofs_are_computed_correctly_with_intermediate_commits(chunk_size: usize) { - let (kvs, expected_hash) = &*KVS_AND_HASH; + let (kvs, expected_hash) = &*ENTRIES_AND_HASH; let mut tree = MerkleTree::new(PatchSet::default()); let mut root_hash = Blake2Hasher.empty_subtree_hash(256); @@ -198,8 +204,8 @@ fn output_proofs_are_computed_correctly_with_intermediate_commits(chunk_size: us #[test_casing(4, [10, 17, 28, 42])] fn entry_proofs_are_computed_correctly_with_intermediate_commits(chunk_size: usize) { - let (kvs, _) = &*KVS_AND_HASH; - let all_keys: Vec<_> = kvs.iter().map(|(key, _)| *key).collect(); + let (kvs, _) = &*ENTRIES_AND_HASH; + let all_keys: Vec<_> = 
kvs.iter().map(|entry| entry.key).collect(); let mut tree = MerkleTree::new(PatchSet::default()); let mut root_hashes = vec![]; for chunk in kvs.chunks(chunk_size) { @@ -210,8 +216,9 @@ fn entry_proofs_are_computed_correctly_with_intermediate_commits(chunk_size: usi let entries = tree.entries_with_proofs(version as u64, &all_keys).unwrap(); assert_eq!(entries.len(), all_keys.len()); for (i, (key, entry)) in all_keys.iter().zip(entries).enumerate() { + assert_eq!(entry.base.key, *key); assert_eq!(entry.base.is_empty(), i >= (version + 1) * chunk_size); - entry.verify(&Blake2Hasher, *key, output.root_hash); + entry.verify(&Blake2Hasher, output.root_hash); } } @@ -220,14 +227,15 @@ fn entry_proofs_are_computed_correctly_with_intermediate_commits(chunk_size: usi let entries = tree.entries_with_proofs(version as u64, &all_keys).unwrap(); assert_eq!(entries.len(), all_keys.len()); for (i, (key, entry)) in all_keys.iter().zip(entries).enumerate() { + assert_eq!(entry.base.key, *key); assert_eq!(entry.base.is_empty(), i >= (version + 1) * chunk_size); - entry.verify(&Blake2Hasher, *key, root_hash); + entry.verify(&Blake2Hasher, root_hash); } } } fn test_accumulated_commits(db: DB, chunk_size: usize) -> DB { - let (kvs, expected_hash) = &*KVS_AND_HASH; + let (kvs, expected_hash) = &*ENTRIES_AND_HASH; let mut db = Patched::new(db); let mut final_hash = H256::zero(); for chunk in kvs.chunks(chunk_size) { @@ -242,7 +250,7 @@ fn test_accumulated_commits(db: DB, chunk_size: usize) -> DB { let tree = MerkleTree::new(&mut db); let latest_version = tree.latest_version().unwrap(); for version in 0..=latest_version { - tree.verify_consistency(version).unwrap(); + tree.verify_consistency(version, true).unwrap(); } db } @@ -253,9 +261,12 @@ fn accumulating_commits(chunk_size: usize) { } fn test_root_hash_computing_with_reverts(db: &mut impl Database) { - let (kvs, expected_hash) = &*KVS_AND_HASH; + let (kvs, expected_hash) = &*ENTRIES_AND_HASH; let (initial_update, final_update) = kvs.split_at(75); - let key_updates: Vec<_> = kvs.iter().map(|(key, _)| (*key, H256([255; 32]))).collect(); + let key_updates: Vec<_> = kvs + .iter() + .map(|entry| entry.with_value(H256([255; 32]))) + .collect(); let key_inserts = generate_key_value_pairs(100..200); let mut tree = MerkleTree::new(db); @@ -300,7 +311,7 @@ fn test_root_hash_computing_with_key_updates(db: impl Database) { // Overwrite some `kvs` entries and add some new ones. let changed_kvs = kvs.iter_mut().enumerate().filter_map(|(i, kv)| { if i % 3 == 1 { - kv.1 = H256::from_low_u64_be((i + 100) as u64); + *kv = kv.with_value(H256::from_low_u64_be((i + 100) as u64)); return Some(*kv); } None @@ -361,12 +372,12 @@ fn root_hash_is_computed_correctly_with_key_updates() { fn proofs_are_computed_correctly_with_key_updates(updated_keys: usize) { const RNG_SEED: u64 = 1_234; - let (kvs, expected_hash) = &*KVS_AND_HASH; + let (kvs, expected_hash) = &*ENTRIES_AND_HASH; let mut rng = StdRng::seed_from_u64(RNG_SEED); let old_instructions: Vec<_> = kvs[..updated_keys] .iter() - .map(|(key, _)| (*key, TreeInstruction::Write(H256([255; 32])))) + .map(|entry| TreeInstruction::Write(entry.with_value(H256([255; 32])))) .collect(); // Move the updated keys to the random places in the `kvs` vector. 
let mut writes = convert_to_writes(kvs); @@ -386,11 +397,11 @@ fn proofs_are_computed_correctly_with_key_updates(updated_keys: usize) { assert_eq!(output.root_hash(), Some(*expected_hash)); output.verify_proofs(&Blake2Hasher, root_hash, &instructions); - let keys: Vec<_> = kvs.iter().map(|(key, _)| *key).collect(); + let keys: Vec<_> = kvs.iter().map(|entry| entry.key).collect(); let proofs = tree.entries_with_proofs(1, &keys).unwrap(); - for ((key, value), proof) in kvs.iter().zip(proofs) { - assert_eq!(proof.base.value_hash, *value); - proof.verify(&Blake2Hasher, *key, *expected_hash); + for (entry, proof) in kvs.iter().zip(proofs) { + assert_eq!(proof.base, *entry); + proof.verify(&Blake2Hasher, *expected_hash); } } @@ -417,7 +428,11 @@ fn test_root_hash_equals_to_previous_implementation(db: &mut impl Database) { }) }); let values = (0..100).map(H256::from_low_u64_be); - let kvs: Vec<_> = keys.zip(values).collect(); + let kvs: Vec<_> = keys + .zip(values) + .enumerate() + .map(|(idx, (key, value))| TreeEntry::new(key, idx as u64 + 1, value)) + .collect(); let expected_hash = compute_tree_hash(kvs.iter().copied()); assert_eq!(expected_hash, PREV_IMPL_HASH); @@ -437,13 +452,13 @@ fn root_hash_equals_to_previous_implementation() { #[test_casing(7, [2, 3, 5, 10, 17, 28, 42])] fn range_proofs_with_multiple_existing_items(range_size: usize) { - let (kvs, expected_hash) = &*KVS_AND_HASH; + let (kvs, expected_hash) = &*ENTRIES_AND_HASH; assert!(range_size >= 2 && range_size <= kvs.len()); let mut tree = MerkleTree::new(PatchSet::default()); tree.extend(kvs.clone()); - let mut sorted_keys: Vec<_> = kvs.iter().map(|(key, _)| *key).collect(); + let mut sorted_keys: Vec<_> = kvs.iter().map(|entry| entry.key).collect(); sorted_keys.sort_unstable(); for start_idx in 0..(sorted_keys.len() - range_size) { @@ -460,10 +475,10 @@ fn range_proofs_with_multiple_existing_items(range_size: usize) { let other_entries = tree.entries(0, other_keys).unwrap(); let mut range = TreeRangeDigest::new(&Blake2Hasher, *first_key, &first_entry); - for (key, entry) in other_keys.iter().zip(other_entries) { - range.update(*key, entry); + for entry in other_entries { + range.update(entry); } - let range_hash = range.finalize(*last_key, &last_entry); + let range_hash = range.finalize(&last_entry); assert_eq!(range_hash, *expected_hash); } } @@ -479,7 +494,7 @@ fn range_proofs_with_random_ranges() { const RNG_SEED: u64 = 321; let mut rng = StdRng::seed_from_u64(RNG_SEED); - let (kvs, expected_hash) = &*KVS_AND_HASH; + let (kvs, expected_hash) = &*ENTRIES_AND_HASH; let mut tree = MerkleTree::new(PatchSet::default()); tree.extend(kvs.clone()); @@ -493,9 +508,9 @@ fn range_proofs_with_random_ranges() { } // Find out keys falling into the range. 
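    // (Only keys strictly between `start_key` and `end_key` feed `range.update()`;
    // the two endpoints are handled separately via `TreeRangeDigest::new()` and
    // `finalize()` with their proven entries.)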
- let keys_in_range = kvs - .iter() - .filter_map(|&(key, _)| (key > start_key && key < end_key).then_some(key)); + let keys_in_range = kvs.iter().filter_map(|entry| { + (entry.key > start_key && entry.key < end_key).then_some(entry.key) + }); let mut keys_in_range: Vec<_> = keys_in_range.collect(); keys_in_range.sort_unstable(); println!("Proving range with {} keys", keys_in_range.len()); @@ -506,26 +521,26 @@ fn range_proofs_with_random_ranges() { let other_entries = tree.entries(0, &keys_in_range).unwrap(); let mut range = TreeRangeDigest::new(&Blake2Hasher, start_key, &first_entry); - for (key, entry) in keys_in_range.iter().zip(other_entries) { - range.update(*key, entry); + for entry in other_entries { + range.update(entry); } - let range_hash = range.finalize(end_key, &last_entry); + let range_hash = range.finalize(&last_entry); assert_eq!(range_hash, *expected_hash); } } /// RocksDB-specific tests. mod rocksdb { + use std::collections::BTreeMap; + use serde::{Deserialize, Serialize}; use serde_with::{hex::Hex, serde_as}; use tempfile::TempDir; - - use std::collections::BTreeMap; - - use super::*; use zksync_merkle_tree::{MerkleTreeColumnFamily, MerkleTreePruner, RocksDBWrapper}; use zksync_storage::RocksDB; + use super::*; + #[derive(Debug)] struct Harness { db: RocksDBWrapper, @@ -633,7 +648,7 @@ mod rocksdb { fn tree_tags_mismatch() { let Harness { mut db, dir: _dir } = Harness::new(); let mut tree = MerkleTree::new(&mut db); - tree.extend(vec![(U256::zero(), H256::zero())]); + tree.extend(vec![TreeEntry::new(U256::zero(), 1, H256::zero())]); MerkleTree::with_hasher(&mut db, ()); } @@ -643,7 +658,7 @@ mod rocksdb { fn tree_tags_mismatch_with_cold_restart() { let Harness { db, dir } = Harness::new(); let mut tree = MerkleTree::new(db); - tree.extend(vec![(U256::zero(), H256::zero())]); + tree.extend(vec![TreeEntry::new(U256::zero(), 1, H256::zero())]); drop(tree); let db = RocksDBWrapper::new(dir.path()); diff --git a/core/lib/merkle_tree/tests/integration/recovery.rs b/core/lib/merkle_tree/tests/integration/recovery.rs index fda57f78851..2bac00f02c3 100644 --- a/core/lib/merkle_tree/tests/integration/recovery.rs +++ b/core/lib/merkle_tree/tests/integration/recovery.rs @@ -2,14 +2,12 @@ use rand::{rngs::StdRng, seq::SliceRandom, SeedableRng}; use test_casing::test_casing; - use zksync_crypto::hasher::blake2::Blake2Hasher; use zksync_merkle_tree::{ - recovery::{MerkleTreeRecovery, RecoveryEntry}, - Database, MerkleTree, PatchSet, PruneDatabase, ValueHash, + recovery::MerkleTreeRecovery, Database, MerkleTree, PatchSet, PruneDatabase, ValueHash, }; -use crate::common::{convert_to_writes, generate_key_value_pairs, TreeMap, KVS_AND_HASH}; +use crate::common::{convert_to_writes, generate_key_value_pairs, TreeMap, ENTRIES_AND_HASH}; #[derive(Debug, Clone, Copy)] enum RecoveryKind { @@ -23,16 +21,8 @@ impl RecoveryKind { #[test] fn recovery_basics() { - let (kvs, expected_hash) = &*KVS_AND_HASH; - let recovery_entries = kvs - .iter() - .enumerate() - .map(|(i, &(key, value))| RecoveryEntry { - key, - value, - leaf_index: i as u64 + 1, - }); - let mut recovery_entries: Vec<_> = recovery_entries.collect(); + let (kvs, expected_hash) = &*ENTRIES_AND_HASH; + let mut recovery_entries: Vec<_> = kvs.clone(); recovery_entries.sort_unstable_by_key(|entry| entry.key); let greatest_key = recovery_entries[99].key; @@ -43,21 +33,13 @@ fn recovery_basics() { assert_eq!(recovery.last_processed_key(), Some(greatest_key)); assert_eq!(recovery.root_hash(), *expected_hash); - let tree = recovery.finalize(); 
- tree.verify_consistency(recovered_version).unwrap(); + let tree = MerkleTree::new(recovery.finalize()); + tree.verify_consistency(recovered_version, true).unwrap(); } fn test_recovery_in_chunks(mut db: impl PruneDatabase, kind: RecoveryKind, chunk_size: usize) { - let (kvs, expected_hash) = &*KVS_AND_HASH; - let recovery_entries = kvs - .iter() - .enumerate() - .map(|(i, &(key, value))| RecoveryEntry { - key, - value, - leaf_index: i as u64 + 1, - }); - let mut recovery_entries: Vec<_> = recovery_entries.collect(); + let (kvs, expected_hash) = &*ENTRIES_AND_HASH; + let mut recovery_entries = kvs.clone(); if matches!(kind, RecoveryKind::Linear) { recovery_entries.sort_unstable_by_key(|entry| entry.key); } @@ -83,8 +65,8 @@ fn test_recovery_in_chunks(mut db: impl PruneDatabase, kind: RecoveryKind, chunk assert_eq!(recovery.last_processed_key(), Some(greatest_key)); assert_eq!(recovery.root_hash(), *expected_hash); - let mut tree = recovery.finalize(); - tree.verify_consistency(recovered_version).unwrap(); + let mut tree = MerkleTree::new(recovery.finalize()); + tree.verify_consistency(recovered_version, true).unwrap(); // Check that new tree versions can be built and function as expected. test_tree_after_recovery(&mut tree, recovered_version, *expected_hash); } @@ -107,13 +89,13 @@ fn test_tree_after_recovery( let mut rng = StdRng::seed_from_u64(RNG_SEED); let mut kvs = generate_key_value_pairs(100..=150); let mut modified_kvs = generate_key_value_pairs(50..=100); - for (_, value) in &mut modified_kvs { - *value = ValueHash::repeat_byte(1); + for entry in &mut modified_kvs { + entry.value = ValueHash::repeat_byte(1); } + modified_kvs.shuffle(&mut rng); kvs.extend(modified_kvs); - kvs.shuffle(&mut rng); - let mut tree_map = TreeMap::new(&KVS_AND_HASH.0); + let mut tree_map = TreeMap::new(&ENTRIES_AND_HASH.0); let mut prev_root_hash = root_hash; for (i, chunk) in kvs.chunks(CHUNK_SIZE).enumerate() { tree_map.extend(chunk); @@ -129,7 +111,7 @@ fn test_tree_after_recovery( }; assert_eq!(new_root_hash, tree_map.root_hash()); - tree.verify_consistency(recovered_version + i as u64) + tree.verify_consistency(recovered_version + i as u64, true) .unwrap(); prev_root_hash = new_root_hash; } @@ -142,9 +124,9 @@ fn recovery_in_chunks(kind: RecoveryKind, chunk_size: usize) { mod rocksdb { use tempfile::TempDir; + use zksync_merkle_tree::RocksDBWrapper; use super::*; - use zksync_merkle_tree::RocksDBWrapper; #[test_casing(8, test_casing::Product((RecoveryKind::ALL, [6, 10, 17, 42])))] fn recovery_in_chunks(kind: RecoveryKind, chunk_size: usize) { diff --git a/core/lib/mini_merkle_tree/README.md b/core/lib/mini_merkle_tree/README.md index a0608d78f71..afac2fc9ebd 100644 --- a/core/lib/mini_merkle_tree/README.md +++ b/core/lib/mini_merkle_tree/README.md @@ -12,5 +12,5 @@ cargo bench -p zksync_mini_merkle_tree --bench tree ``` The order of timings should be 2M elements/s for all tree sizes (measured on MacBook Pro with 12-core Apple M2 Max CPU), -both for calculating the root and the root + Merkle path. This translates to ~130µs for a tree with 512 leaves (the tree -size used for `L2ToL1Log`s). +both for calculating the root and the root + Merkle path. This translates to approximately 130µs for a tree with 512 +leaves (the tree size used for `L2ToL1Log`s). 
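The Merkle tree test changes above all stem from one API shift: a `TreeEntry` now carries its own key and leaf index, so proof verification no longer takes the key as a separate argument. Below is a minimal sketch of the new flow, pieced together from the calls visible in the diffs; it assumes `TreeEntry` is exported from `zksync_merkle_tree` alongside `MerkleTree` and `PatchSet`, that `H256`/`U256` come from `zksync_types`, and the key, leaf index, and value are arbitrary:

```rust
use zksync_crypto::hasher::blake2::Blake2Hasher;
use zksync_merkle_tree::{MerkleTree, PatchSet, TreeEntry};
use zksync_types::{H256, U256};

fn sketch() {
    // The key and leaf index live inside the entry now, instead of being
    // threaded through as separate tuple elements.
    let entry = TreeEntry::new(U256::from(42), 1, H256::repeat_byte(0xfe));

    let mut tree = MerkleTree::new(PatchSet::default());
    let output = tree.extend(vec![entry]);

    // A proven entry embeds the key and index via `proof.base`, so `verify`
    // only needs the hasher and the root hash to check against.
    let proofs = tree.entries_with_proofs(0, &[entry.key]).unwrap();
    for proof in proofs {
        assert_eq!(proof.base, entry);
        proof.verify(&Blake2Hasher, output.root_hash);
    }
}
```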
diff --git a/core/lib/mini_merkle_tree/benches/tree.rs b/core/lib/mini_merkle_tree/benches/tree.rs index a964456bfb4..8ea4128ac34 100644 --- a/core/lib/mini_merkle_tree/benches/tree.rs +++ b/core/lib/mini_merkle_tree/benches/tree.rs @@ -3,7 +3,6 @@ use criterion::{ criterion_group, criterion_main, BatchSize, Bencher, BenchmarkId, Criterion, Throughput, }; - use zksync_mini_merkle_tree::MiniMerkleTree; const TREE_SIZES: &[usize] = &[32, 64, 128, 256, 512, 1_024]; diff --git a/core/lib/mini_merkle_tree/src/lib.rs b/core/lib/mini_merkle_tree/src/lib.rs index a6cbf37213c..168e5d8dd09 100644 --- a/core/lib/mini_merkle_tree/src/lib.rs +++ b/core/lib/mini_merkle_tree/src/lib.rs @@ -5,10 +5,10 @@ #![warn(clippy::all, clippy::pedantic)] #![allow(clippy::must_use_candidate, clippy::similar_names)] -use once_cell::sync::Lazy; - use std::{iter, str::FromStr}; +use once_cell::sync::Lazy; + #[cfg(test)] mod tests; diff --git a/core/lib/multivm/Cargo.toml b/core/lib/multivm/Cargo.toml index 8b69af498a0..fc7218e16e2 100644 --- a/core/lib/multivm/Cargo.toml +++ b/core/lib/multivm/Cargo.toml @@ -10,10 +10,15 @@ keywords = ["blockchain", "zksync"] categories = ["cryptography"] [dependencies] +zk_evm_1_4_1 = { package = "zk_evm", git = "https://github.com/matter-labs/era-zk_evm.git", branch = "v1.4.1" } zk_evm_1_4_0 = { package = "zk_evm", git = "https://github.com/matter-labs/era-zk_evm.git", branch = "v1.4.0" } zk_evm_1_3_3 = { package = "zk_evm", git = "https://github.com/matter-labs/era-zk_evm.git", tag= "v1.3.3-rc2" } zk_evm_1_3_1 = { package = "zk_evm", git = "https://github.com/matter-labs/era-zk_evm.git", tag= "v1.3.1-rc2" } +zkevm_test_harness_1_4_0 = { git = "https://github.com/matter-labs/era-zkevm_test_harness.git", branch = "v1.4.0", package = "zkevm_test_harness" } +zkevm_test_harness_1_4_1 = { git = "https://github.com/matter-labs/era-zkevm_test_harness.git", branch = "v1.4.1", package = "zkevm_test_harness" } + + zksync_types = { path = "../types" } zksync_state = { path = "../state" } zksync_contracts = { path = "../contracts" } @@ -27,8 +32,7 @@ itertools = "0.10" once_cell = "1.7" thiserror = "1.0" tracing = "0.1" -vise = { git = "https://github.com/matter-labs/vise.git", version = "0.1.0", rev = "dd05139b76ab0843443ab3ff730174942c825dae" } - +vise = { git = "https://github.com/matter-labs/vise.git", version = "0.1.0", rev = "1c9cc500e92cf9ea052b230e114a6f9cce4fb2c1" } [dev-dependencies] tokio = { version = "1", features = ["time"] } diff --git a/core/lib/multivm/src/glue/history_mode.rs b/core/lib/multivm/src/glue/history_mode.rs index ca56836d8e8..f820477c10a 100644 --- a/core/lib/multivm/src/glue/history_mode.rs +++ b/core/lib/multivm/src/glue/history_mode.rs @@ -7,12 +7,14 @@ pub trait HistoryMode: + GlueInto + GlueInto + GlueInto + + GlueInto { type VmM6Mode: crate::vm_m6::HistoryMode; type Vm1_3_2Mode: crate::vm_1_3_2::HistoryMode; type VmVirtualBlocksMode: crate::vm_virtual_blocks::HistoryMode; type VmVirtualBlocksRefundsEnhancement: crate::vm_refunds_enhancement::HistoryMode; - type VmBoojumIntegration: crate::vm_latest::HistoryMode; + type VmBoojumIntegration: crate::vm_boojum_integration::HistoryMode; + type Vm1_4_1: crate::vm_latest::HistoryMode; } impl GlueFrom for crate::vm_m6::HistoryEnabled { @@ -39,6 +41,12 @@ impl GlueFrom for crate::vm_refunds_enhancemen } } +impl GlueFrom for crate::vm_boojum_integration::HistoryEnabled { + fn glue_from(_: crate::vm_latest::HistoryEnabled) -> Self { + Self + } +} + impl GlueFrom for crate::vm_m6::HistoryDisabled { fn glue_from(_: 
crate::vm_latest::HistoryDisabled) -> Self {
         Self
@@ -65,12 +73,19 @@ impl GlueFrom
     }
 }
 
+impl GlueFrom for crate::vm_boojum_integration::HistoryDisabled {
+    fn glue_from(_: crate::vm_latest::HistoryDisabled) -> Self {
+        Self
+    }
+}
+
 impl HistoryMode for crate::vm_latest::HistoryEnabled {
     type VmM6Mode = crate::vm_m6::HistoryEnabled;
     type Vm1_3_2Mode = crate::vm_1_3_2::HistoryEnabled;
     type VmVirtualBlocksMode = crate::vm_virtual_blocks::HistoryEnabled;
     type VmVirtualBlocksRefundsEnhancement = crate::vm_refunds_enhancement::HistoryEnabled;
-    type VmBoojumIntegration = crate::vm_latest::HistoryEnabled;
+    type VmBoojumIntegration = crate::vm_boojum_integration::HistoryEnabled;
+    type Vm1_4_1 = crate::vm_latest::HistoryEnabled;
 }
 
 impl HistoryMode for crate::vm_latest::HistoryDisabled {
@@ -78,5 +93,6 @@ impl HistoryMode for crate::vm_latest::HistoryDisabled {
     type Vm1_3_2Mode = crate::vm_1_3_2::HistoryDisabled;
     type VmVirtualBlocksMode = crate::vm_virtual_blocks::HistoryDisabled;
     type VmVirtualBlocksRefundsEnhancement = crate::vm_refunds_enhancement::HistoryDisabled;
-    type VmBoojumIntegration = crate::vm_latest::HistoryDisabled;
+    type VmBoojumIntegration = crate::vm_boojum_integration::HistoryDisabled;
+    type Vm1_4_1 = crate::vm_latest::HistoryDisabled;
 }
diff --git a/core/lib/multivm/src/glue/mod.rs b/core/lib/multivm/src/glue/mod.rs
index 0904661a73c..5347b79d3c2 100644
--- a/core/lib/multivm/src/glue/mod.rs
+++ b/core/lib/multivm/src/glue/mod.rs
@@ -11,7 +11,7 @@ pub(crate) mod history_mode;
 pub mod tracers;
 mod types;
 
-/// This trait is a workaround on the Rust'c [orphan rule](orphan_rule).
+/// This trait is a workaround for Rust's [orphan rule](orphan_rule).
 /// We need to convert a lot of types that come from two different versions of some crate,
 /// and `From`/`Into` traits are natural way of doing so. Unfortunately, we can't implement an
 /// external trait on a pair of external types, so we're unable to use these traits.
@@ -29,7 +29,7 @@ pub trait GlueInto: Sized {
     fn glue_into(self) -> T;
 }
 
-// Blaknet `GlueInto` impl for any type that implements `GlueFrom`.
+// Blanket `GlueInto` impl for any type that implements `GlueFrom`.
 impl GlueInto for T
 where
     U: GlueFrom,
diff --git a/core/lib/multivm/src/glue/tracers/mod.rs b/core/lib/multivm/src/glue/tracers/mod.rs
index b9c0e083b84..10d9cbe8ed8 100644
--- a/core/lib/multivm/src/glue/tracers/mod.rs
+++ b/core/lib/multivm/src/glue/tracers/mod.rs
@@ -1,4 +1,4 @@
-//! # Multivm Tracing
+//! # MultiVM Tracing
 //!
 //! The MultiVM tracing module enables support for Tracers in different versions of virtual machines.
 //!
@@ -7,7 +7,7 @@
 //! Different VM versions may have distinct requirements and types for Tracers. To accommodate these differences,
 //! this module defines one primary trait:
 //!
-//! - `MultivmTracer`: This trait represents a tracer that can be converted into a tracer for
+//! - `MultiVMTracer`: This trait represents a tracer that can be converted into a tracer for
 //! a specific VM version.
 //!
 //! Specific traits for each VM version, which support Custom Tracers:
@@ -19,24 +19,28 @@
 //! into a form compatible with the vm_virtual_blocks version.
 //! It defines a method `vm_virtual_blocks` for obtaining a boxed tracer.
 //!
-//! For `MultivmTracer` to be implemented, the Tracer must implement all N currently
+//! For `MultiVMTracer` to be implemented, the Tracer must implement all N currently
 //! existing sub-traits.
 //!
 //! ## Adding a new VM version
 //!
-//! 
To add support for one more VM version to MultivmTracer, one needs to: +//! To add support for one more VM version to MultiVMTracer, one needs to: //! - Create a new trait performing conversion to the specified VM tracer, e.g., `IntoTracer`. -//! - Add this trait as a trait bound to the `MultivmTracer`. -//! - Add this trait as a trait bound for `T` in `MultivmTracer` implementation. -//! — Implement the trait for `T` with a bound to `VmTracer` for a specific version. -//! -use crate::HistoryMode; +//! - Add this trait as a trait bound to the `MultiVMTracer`. +//! - Add this trait as a trait bound for `T` in `MultiVMTracer` implementation. +//! - Implement the trait for `T` with a bound to `VmTracer` for a specific version. + use zksync_state::WriteStorage; -pub type MultiVmTracerPointer = Box>; +use crate::HistoryMode; + +pub type MultiVmTracerPointer = Box>; -pub trait MultivmTracer: - IntoLatestTracer + IntoVmVirtualBlocksTracer + IntoVmRefundsEnhancementTracer +pub trait MultiVMTracer: + IntoLatestTracer + + IntoVmVirtualBlocksTracer + + IntoVmRefundsEnhancementTracer + + IntoVmBoojumIntegrationTracer { fn into_tracer_pointer(self) -> MultiVmTracerPointer where @@ -47,7 +51,7 @@ pub trait MultivmTracer: } pub trait IntoLatestTracer { - fn latest(&self) -> crate::vm_latest::TracerPointer; + fn latest(&self) -> crate::vm_latest::TracerPointer; } pub trait IntoVmVirtualBlocksTracer { @@ -62,13 +66,19 @@ pub trait IntoVmRefundsEnhancementTracer { ) -> Box>; } +pub trait IntoVmBoojumIntegrationTracer { + fn vm_boojum_integration( + &self, + ) -> Box>; +} + impl IntoLatestTracer for T where S: WriteStorage, H: HistoryMode, - T: crate::vm_latest::VmTracer + Clone + 'static, + T: crate::vm_latest::VmTracer + Clone + 'static, { - fn latest(&self) -> crate::vm_latest::TracerPointer { + fn latest(&self) -> crate::vm_latest::TracerPointer { Box::new(self.clone()) } } @@ -102,12 +112,26 @@ where } } -impl MultivmTracer for T +impl IntoVmBoojumIntegrationTracer for T +where + S: WriteStorage, + H: HistoryMode, + T: crate::vm_boojum_integration::VmTracer + Clone + 'static, +{ + fn vm_boojum_integration( + &self, + ) -> Box> { + Box::new(self.clone()) + } +} + +impl MultiVMTracer for T where S: WriteStorage, H: HistoryMode, T: IntoLatestTracer + IntoVmVirtualBlocksTracer - + IntoVmRefundsEnhancementTracer, + + IntoVmRefundsEnhancementTracer + + IntoVmBoojumIntegrationTracer, { } diff --git a/core/lib/multivm/src/glue/types/mod.rs b/core/lib/multivm/src/glue/types/mod.rs index c72aff34757..03d003212f4 100644 --- a/core/lib/multivm/src/glue/types/mod.rs +++ b/core/lib/multivm/src/glue/types/mod.rs @@ -7,3 +7,4 @@ mod vm; mod zk_evm_1_3_1; +mod zk_evm_1_4_1; diff --git a/core/lib/multivm/src/glue/types/vm/block_context_mode.rs b/core/lib/multivm/src/glue/types/vm/block_context_mode.rs index eba3c503e06..094339705e1 100644 --- a/core/lib/multivm/src/glue/types/vm/block_context_mode.rs +++ b/core/lib/multivm/src/glue/types/vm/block_context_mode.rs @@ -1,17 +1,19 @@ -use crate::glue::GlueFrom; use zksync_utils::h256_to_u256; +use crate::glue::GlueFrom; + impl GlueFrom for crate::vm_m5::vm_with_bootloader::BlockContextMode { fn glue_from(value: crate::interface::L1BatchEnv) -> Self { + let fee_input = value.fee_input.into_l1_pegged(); let derived = crate::vm_m5::vm_with_bootloader::DerivedBlockContext { context: crate::vm_m5::vm_with_bootloader::BlockContext { block_number: value.number.0, block_timestamp: value.timestamp, operator_address: value.fee_account, - l1_gas_price: value.l1_gas_price, - 
fair_l2_gas_price: value.fair_l2_gas_price,
+                l1_gas_price: fee_input.l1_gas_price,
+                fair_l2_gas_price: fee_input.fair_l2_gas_price,
             },
-            base_fee: value.base_fee(),
+            base_fee: crate::vm_m5::vm_with_bootloader::get_batch_base_fee(&value),
         };
         match value.previous_batch_hash {
             Some(hash) => Self::NewBlock(derived, h256_to_u256(hash)),
@@ -22,15 +24,16 @@ impl GlueFrom for crate::vm_m5::vm_with_bootloader
 
 impl GlueFrom for crate::vm_m6::vm_with_bootloader::BlockContextMode {
     fn glue_from(value: crate::interface::L1BatchEnv) -> Self {
+        let fee_input = value.fee_input.into_l1_pegged();
         let derived = crate::vm_m6::vm_with_bootloader::DerivedBlockContext {
             context: crate::vm_m6::vm_with_bootloader::BlockContext {
                 block_number: value.number.0,
                 block_timestamp: value.timestamp,
                 operator_address: value.fee_account,
-                l1_gas_price: value.l1_gas_price,
-                fair_l2_gas_price: value.fair_l2_gas_price,
+                l1_gas_price: fee_input.l1_gas_price,
+                fair_l2_gas_price: fee_input.fair_l2_gas_price,
             },
-            base_fee: value.base_fee(),
+            base_fee: crate::vm_m6::vm_with_bootloader::get_batch_base_fee(&value),
         };
         match value.previous_batch_hash {
             Some(hash) => Self::NewBlock(derived, h256_to_u256(hash)),
@@ -43,15 +46,16 @@ impl GlueFrom
     for crate::vm_1_3_2::vm_with_bootloader::BlockContextMode
 {
     fn glue_from(value: crate::interface::L1BatchEnv) -> Self {
+        let fee_input = value.fee_input.into_l1_pegged();
         let derived = crate::vm_1_3_2::vm_with_bootloader::DerivedBlockContext {
             context: crate::vm_1_3_2::vm_with_bootloader::BlockContext {
                 block_number: value.number.0,
                 block_timestamp: value.timestamp,
                 operator_address: value.fee_account,
-                l1_gas_price: value.l1_gas_price,
-                fair_l2_gas_price: value.fair_l2_gas_price,
+                l1_gas_price: fee_input.l1_gas_price,
+                fair_l2_gas_price: fee_input.fair_l2_gas_price,
             },
-            base_fee: value.base_fee(),
+            base_fee: crate::vm_1_3_2::vm_with_bootloader::get_batch_base_fee(&value),
         };
         match value.previous_batch_hash {
             Some(hash) => Self::NewBlock(derived, h256_to_u256(hash)),
diff --git a/core/lib/multivm/src/glue/types/vm/tx_execution_mode.rs b/core/lib/multivm/src/glue/types/vm/tx_execution_mode.rs
index 1dd90e104a5..0709b13de78 100644
--- a/core/lib/multivm/src/glue/types/vm/tx_execution_mode.rs
+++ b/core/lib/multivm/src/glue/types/vm/tx_execution_mode.rs
@@ -19,12 +19,12 @@ impl GlueFrom
         match value {
             crate::interface::TxExecutionMode::VerifyExecute => Self::VerifyExecute,
             crate::interface::TxExecutionMode::EstimateFee => Self::EstimateFee {
-                // We used it only for api services we don't have limit for storage invocation inside statekeeper
+                // We used it only for API services; we don't have a limit for storage invocations inside the state keeper
                 // It's impossible to recover this value for the vm integration after virtual blocks
                 missed_storage_invocation_limit: usize::MAX,
             },
             crate::interface::TxExecutionMode::EthCall => Self::EthCall {
-                // We used it only for api services we don't have limit for storage invocation inside statekeeper
+                // We used it only for API services; we don't have a limit for storage invocations inside the state keeper
                 // It's impossible to recover this value for the vm integration after virtual blocks
                 missed_storage_invocation_limit: usize::MAX,
             },
@@ -39,12 +39,12 @@ impl GlueFrom
         match value {
             crate::interface::TxExecutionMode::VerifyExecute => Self::VerifyExecute,
             crate::interface::TxExecutionMode::EstimateFee => Self::EstimateFee {
-                // We used it only for api services we don't have limit for storage invocation inside statekeeper
+                // We used it only for API services; we don't have a limit for storage invocations inside the state keeper
                 // It's impossible to recover this value for the vm integration after virtual blocks
                 missed_storage_invocation_limit: usize::MAX,
             },
             crate::interface::TxExecutionMode::EthCall => Self::EthCall {
-                // We used it only for api services we don't have limit for storage invocation inside statekeeper
+                // We used it only for API services; we don't have a limit for storage invocations inside the state keeper
                 // It's impossible to recover this value for the vm integration after virtual blocks
                 missed_storage_invocation_limit: usize::MAX,
             },
diff --git a/core/lib/multivm/src/glue/types/vm/vm_block_result.rs b/core/lib/multivm/src/glue/types/vm/vm_block_result.rs
index 827ac7fe82a..cc76ce22ca0 100644
--- a/core/lib/multivm/src/glue/types/vm/vm_block_result.rs
+++ b/core/lib/multivm/src/glue/types/vm/vm_block_result.rs
@@ -1,12 +1,14 @@
 use zksync_types::l2_to_l1_log::UserL2ToL1Log;
 
-use crate::glue::{GlueFrom, GlueInto};
-use crate::interface::{
-    types::outputs::VmExecutionLogs, CurrentExecutionState, ExecutionResult, Refunds,
-    VmExecutionResultAndLogs, VmExecutionStatistics,
+use crate::{
+    glue::{GlueFrom, GlueInto},
+    interface::{
+        types::outputs::VmExecutionLogs, CurrentExecutionState, ExecutionResult, Refunds,
+        VmExecutionResultAndLogs, VmExecutionStatistics,
+    },
 };
 
-// Note: In version after vm VmVirtualBlocks the bootloader memory knowledge is encapsulated into the VM.
+// Note: In versions after `VmVirtualBlocks`, the bootloader memory knowledge is encapsulated into the VM.
 // But before it was a part of a public API.
 // Bootloader memory required only for producing witnesses,
 // and server doesn't need to generate witnesses for old blocks
@@ -24,6 +26,7 @@ impl GlueFrom for crate::interface::Fi
                     computational_gas_used: value.full_result.gas_used,
                     gas_used: value.full_result.gas_used,
                     pubdata_published: 0,
+                    estimated_circuits_used: 0.0,
                 },
                 refunds: Refunds::default(),
             },
@@ -44,6 +47,7 @@ impl GlueFrom for crate::interface::Fi
                 storage_refunds: Vec::new(),
             },
             final_bootloader_memory: None,
+            pubdata_input: None,
         }
     }
 }
@@ -61,6 +65,7 @@ impl GlueFrom for crate::interface::Fi
                     computational_gas_used: value.full_result.computational_gas_used,
                     gas_used: value.full_result.gas_used,
                     pubdata_published: 0,
+                    estimated_circuits_used: 0.0,
                 },
                 refunds: Refunds::default(),
             },
@@ -81,6 +86,7 @@ impl GlueFrom for crate::interface::Fi
                 storage_refunds: Vec::new(),
             },
             final_bootloader_memory: None,
+            pubdata_input: None,
         }
     }
 }
@@ -104,6 +110,7 @@ impl GlueFrom for crate::interface:
                     computational_gas_used: value.full_result.computational_gas_used,
                     gas_used: value.full_result.gas_used,
                     pubdata_published: 0,
+                    estimated_circuits_used: 0.0,
                 },
                 refunds: Refunds::default(),
             },
@@ -124,6 +131,7 @@ impl GlueFrom for crate::interface:
                 storage_refunds: Vec::new(),
             },
             final_bootloader_memory: None,
+            pubdata_input: None,
         }
     }
 }
@@ -163,6 +171,7 @@ impl GlueFrom
                 computational_gas_used: value.full_result.computational_gas_used,
                 gas_used: value.full_result.gas_used,
                 pubdata_published: 0,
+                estimated_circuits_used: 0.0,
             },
             refunds: Refunds::default(),
         }
@@ -193,6 +202,7 @@ impl GlueFrom
                 computational_gas_used: 0,
                 gas_used: value.full_result.gas_used,
                 pubdata_published: 0,
+                estimated_circuits_used: 0.0,
             },
             refunds: Refunds::default(),
         }
@@ -234,6 +244,7 @@ impl GlueFrom
                 computational_gas_used: value.full_result.computational_gas_used,
                 gas_used: value.full_result.gas_used,
                 pubdata_published: 0,
+                estimated_circuits_used: 0.0,
             },
             refunds: Refunds::default(),
         }
diff --git
a/core/lib/multivm/src/glue/types/vm/vm_partial_execution_result.rs b/core/lib/multivm/src/glue/types/vm/vm_partial_execution_result.rs index 4de727a04c1..7b25c1ff3e0 100644 --- a/core/lib/multivm/src/glue/types/vm/vm_partial_execution_result.rs +++ b/core/lib/multivm/src/glue/types/vm/vm_partial_execution_result.rs @@ -11,11 +11,12 @@ impl GlueFrom contracts_used: value.contracts_used, cycles_used: value.cycles_used, total_log_queries: value.logs.total_log_queries_count, - // There are no such fields in m5 + // There are no such fields in `m5` gas_used: 0, - // There are no such fields in m5 + // There are no such fields in `m5` computational_gas_used: 0, pubdata_published: 0, + estimated_circuits_used: 0.0, }, refunds: crate::interface::Refunds { gas_refunded: 0, @@ -39,6 +40,7 @@ impl GlueFrom computational_gas_used: value.computational_gas_used, total_log_queries: value.logs.total_log_queries_count, pubdata_published: 0, + estimated_circuits_used: 0.0, }, refunds: crate::interface::Refunds { gas_refunded: 0, @@ -62,6 +64,7 @@ impl GlueFrom computational_gas_used: value.computational_gas_used, total_log_queries: value.logs.total_log_queries_count, pubdata_published: 0, + estimated_circuits_used: 0.0, }, refunds: crate::interface::Refunds { gas_refunded: 0, diff --git a/core/lib/multivm/src/glue/types/vm/vm_tx_execution_result.rs b/core/lib/multivm/src/glue/types/vm/vm_tx_execution_result.rs index 10e422edbca..0c888cdda23 100644 --- a/core/lib/multivm/src/glue/types/vm/vm_tx_execution_result.rs +++ b/core/lib/multivm/src/glue/types/vm/vm_tx_execution_result.rs @@ -1,7 +1,10 @@ -use crate::glue::{GlueFrom, GlueInto}; -use crate::interface::{ExecutionResult, Refunds, TxRevertReason, VmExecutionResultAndLogs}; use zksync_types::tx::tx_execution_info::TxExecutionStatus; +use crate::{ + glue::{GlueFrom, GlueInto}, + interface::{ExecutionResult, Refunds, TxRevertReason, VmExecutionResultAndLogs}, +}; + impl GlueFrom for VmExecutionResultAndLogs { fn glue_from(value: crate::vm_m5::vm_instance::VmTxExecutionResult) -> Self { let mut result: VmExecutionResultAndLogs = value.result.glue_into(); diff --git a/core/lib/multivm/src/glue/types/zk_evm_1_4_1.rs b/core/lib/multivm/src/glue/types/zk_evm_1_4_1.rs new file mode 100644 index 00000000000..c4c4c06c7f8 --- /dev/null +++ b/core/lib/multivm/src/glue/types/zk_evm_1_4_1.rs @@ -0,0 +1,65 @@ +use zk_evm_1_4_1::{ + aux_structures::{LogQuery as LogQuery_1_4_1, Timestamp as Timestamp_1_4_1}, + zkevm_opcode_defs::FarCallOpcode as FarCallOpcode_1_4_1, +}; +use zksync_types::{FarCallOpcode, LogQuery, Timestamp}; + +use crate::glue::{GlueFrom, GlueInto}; + +impl GlueFrom for FarCallOpcode { + fn glue_from(value: FarCallOpcode_1_4_1) -> Self { + match value { + FarCallOpcode_1_4_1::Normal => FarCallOpcode::Normal, + FarCallOpcode_1_4_1::Delegate => FarCallOpcode::Delegate, + FarCallOpcode_1_4_1::Mimic => FarCallOpcode::Mimic, + } + } +} + +impl GlueFrom for Timestamp { + fn glue_from(value: Timestamp_1_4_1) -> Timestamp { + Timestamp(value.0) + } +} + +impl GlueFrom for Timestamp_1_4_1 { + fn glue_from(value: Timestamp) -> Timestamp_1_4_1 { + Timestamp_1_4_1(value.0) + } +} + +impl GlueFrom for LogQuery { + fn glue_from(value: LogQuery_1_4_1) -> LogQuery { + LogQuery { + timestamp: value.timestamp.glue_into(), + tx_number_in_block: value.tx_number_in_block, + aux_byte: value.aux_byte, + shard_id: value.shard_id, + address: value.address, + key: value.key, + read_value: value.read_value, + written_value: value.written_value, + rw_flag: value.rw_flag, + 
rollback: value.rollback, + is_service: value.is_service, + } + } +} + +impl GlueFrom for LogQuery_1_4_1 { + fn glue_from(value: LogQuery) -> LogQuery_1_4_1 { + LogQuery_1_4_1 { + timestamp: value.timestamp.glue_into(), + tx_number_in_block: value.tx_number_in_block, + aux_byte: value.aux_byte, + shard_id: value.shard_id, + address: value.address, + key: value.key, + read_value: value.read_value, + written_value: value.written_value, + rw_flag: value.rw_flag, + rollback: value.rollback, + is_service: value.is_service, + } + } +} diff --git a/core/lib/multivm/src/interface/traits/tracers/dyn_tracers/mod.rs b/core/lib/multivm/src/interface/traits/tracers/dyn_tracers/mod.rs index bc34775e613..8a0fbbe93cd 100644 --- a/core/lib/multivm/src/interface/traits/tracers/dyn_tracers/mod.rs +++ b/core/lib/multivm/src/interface/traits/tracers/dyn_tracers/mod.rs @@ -1,2 +1,3 @@ pub mod vm_1_3_3; pub mod vm_1_4_0; +pub mod vm_1_4_1; diff --git a/core/lib/multivm/src/interface/traits/tracers/dyn_tracers/vm_1_3_3.rs b/core/lib/multivm/src/interface/traits/tracers/dyn_tracers/vm_1_3_3.rs index 2138dd086c0..c088889aa03 100644 --- a/core/lib/multivm/src/interface/traits/tracers/dyn_tracers/vm_1_3_3.rs +++ b/core/lib/multivm/src/interface/traits/tracers/dyn_tracers/vm_1_3_3.rs @@ -1,6 +1,6 @@ -use zk_evm_1_3_3::abstractions::Memory; -use zk_evm_1_3_3::tracing::{ - AfterDecodingData, AfterExecutionData, BeforeExecutionData, VmLocalStateData, +use zk_evm_1_3_3::{ + abstractions::Memory, + tracing::{AfterDecodingData, AfterExecutionData, BeforeExecutionData, VmLocalStateData}, }; use zksync_state::StoragePtr; diff --git a/core/lib/multivm/src/interface/traits/tracers/dyn_tracers/vm_1_4_0.rs b/core/lib/multivm/src/interface/traits/tracers/dyn_tracers/vm_1_4_0.rs index 61d7831393d..7237e24cb68 100644 --- a/core/lib/multivm/src/interface/traits/tracers/dyn_tracers/vm_1_4_0.rs +++ b/core/lib/multivm/src/interface/traits/tracers/dyn_tracers/vm_1_4_0.rs @@ -1,10 +1,10 @@ -use zk_evm_1_4_0::abstractions::Memory; -use zk_evm_1_4_0::tracing::{ - AfterDecodingData, AfterExecutionData, BeforeExecutionData, VmLocalStateData, +use zk_evm_1_4_0::{ + abstractions::Memory, + tracing::{AfterDecodingData, AfterExecutionData, BeforeExecutionData, VmLocalStateData}, }; use zksync_state::StoragePtr; -/// Version of zk_evm_1_3_3::Tracer suitable for dynamic dispatch. +/// Version of `zk_evm_1_4_0::Tracer` suitable for dynamic dispatch. pub trait DynTracer { fn before_decoding(&mut self, _state: VmLocalStateData<'_>, _memory: &M) {} fn after_decoding( diff --git a/core/lib/multivm/src/interface/traits/tracers/dyn_tracers/vm_1_4_1.rs b/core/lib/multivm/src/interface/traits/tracers/dyn_tracers/vm_1_4_1.rs new file mode 100644 index 00000000000..4772d14cd20 --- /dev/null +++ b/core/lib/multivm/src/interface/traits/tracers/dyn_tracers/vm_1_4_1.rs @@ -0,0 +1,33 @@ +use zk_evm_1_4_1::{ + abstractions::Memory, + tracing::{AfterDecodingData, AfterExecutionData, BeforeExecutionData, VmLocalStateData}, +}; +use zksync_state::StoragePtr; + +/// Version of `zk_evm_1_4_1::Tracer` suitable for dynamic dispatch. 
+pub trait DynTracer { + fn before_decoding(&mut self, _state: VmLocalStateData<'_>, _memory: &M) {} + fn after_decoding( + &mut self, + _state: VmLocalStateData<'_>, + _data: AfterDecodingData, + _memory: &M, + ) { + } + fn before_execution( + &mut self, + _state: VmLocalStateData<'_>, + _data: BeforeExecutionData, + _memory: &M, + _storage: StoragePtr, + ) { + } + fn after_execution( + &mut self, + _state: VmLocalStateData<'_>, + _data: AfterExecutionData, + _memory: &M, + _storage: StoragePtr, + ) { + } +} diff --git a/core/lib/multivm/src/interface/traits/vm.rs b/core/lib/multivm/src/interface/traits/vm.rs index b4a9320bbc6..1158588f849 100644 --- a/core/lib/multivm/src/interface/traits/vm.rs +++ b/core/lib/multivm/src/interface/traits/vm.rs @@ -47,20 +47,24 @@ //! let result = vm.execute(multivm::interface::VmExecutionMode::Batch); //! ``` -use crate::interface::types::errors::BytecodeCompressionError; -use crate::interface::types::inputs::{L1BatchEnv, L2BlockEnv, SystemEnv, VmExecutionMode}; -use crate::interface::types::outputs::{ - BootloaderMemory, CurrentExecutionState, VmExecutionResultAndLogs, -}; - -use crate::interface::{FinishedL1Batch, VmMemoryMetrics}; -use crate::tracers::TracerDispatcher; -use crate::vm_latest::HistoryEnabled; -use crate::HistoryMode; use zksync_state::StoragePtr; use zksync_types::Transaction; use zksync_utils::bytecode::CompressedBytecodeInfo; +use crate::{ + interface::{ + types::{ + errors::BytecodeCompressionError, + inputs::{L1BatchEnv, L2BlockEnv, SystemEnv, VmExecutionMode}, + outputs::{BootloaderMemory, CurrentExecutionState, VmExecutionResultAndLogs}, + }, + FinishedL1Batch, VmMemoryMetrics, + }, + tracers::TracerDispatcher, + vm_latest::HistoryEnabled, + HistoryMode, +}; + pub trait VmInterface { type TracerDispatcher: Default + From>; @@ -100,7 +104,10 @@ pub trait VmInterface { &mut self, tx: Transaction, with_compression: bool, - ) -> Result { + ) -> ( + Result<(), BytecodeCompressionError>, + VmExecutionResultAndLogs, + ) { self.inspect_transaction_with_bytecode_compression( Self::TracerDispatcher::default(), tx, @@ -114,7 +121,10 @@ pub trait VmInterface { tracer: Self::TracerDispatcher, tx: Transaction, with_compression: bool, - ) -> Result; + ) -> ( + Result<(), BytecodeCompressionError>, + VmExecutionResultAndLogs, + ); /// Record VM memory metrics. fn record_vm_memory_metrics(&self) -> VmMemoryMetrics; @@ -129,6 +139,7 @@ pub trait VmInterface { block_tip_execution_result: result, final_execution_state: execution_state, final_bootloader_memory: Some(bootloader_memory), + pubdata_input: None, } } } diff --git a/core/lib/multivm/src/interface/types/errors/halt.rs b/core/lib/multivm/src/interface/types/errors/halt.rs index 23bab7ee55e..70de7548f14 100644 --- a/core/lib/multivm/src/interface/types/errors/halt.rs +++ b/core/lib/multivm/src/interface/types/errors/halt.rs @@ -1,12 +1,13 @@ -use super::VmRevertReason; use std::fmt::{Display, Formatter}; +use super::VmRevertReason; + /// Structure for non-contract errors from the Virtual Machine (EVM). /// Differentiates VM-specific issues from contract-related errors. 
#[derive(Debug, Clone, PartialEq)] pub enum Halt { - // Can only be returned in VerifyAndExecute + // Can only be returned in `VerifyAndExecute` ValidationFailed(VmRevertReason), PaymasterValidationFailed(VmRevertReason), PrePaymasterPreparationFailed(VmRevertReason), @@ -15,7 +16,7 @@ pub enum Halt { FailedToChargeFee(VmRevertReason), // Emitted when trying to call a transaction from an account that has not // been deployed as an account (i.e. the `from` is just a contract). - // Can only be returned in VerifyAndExecute + // Can only be returned in `VerifyAndExecute` FromIsNotAnAccount, // Currently cannot be returned. Should be removed when refactoring errors. InnerTxError, @@ -40,6 +41,7 @@ pub enum Halt { FailedToAppendTransactionToL2Block(String), VMPanic, TracerCustom(String), + FailedToPublishCompressedBytecodes, } impl Display for Halt { @@ -111,6 +113,9 @@ impl Display for Halt { Halt::ValidationOutOfGas => { write!(f, "Validation run out of gas") } + Halt::FailedToPublishCompressedBytecodes => { + write!(f, "Failed to publish compressed bytecodes") + } } } } diff --git a/core/lib/multivm/src/interface/types/errors/tx_revert_reason.rs b/core/lib/multivm/src/interface/types/errors/tx_revert_reason.rs index f92a913fb8b..d863e387e01 100644 --- a/core/lib/multivm/src/interface/types/errors/tx_revert_reason.rs +++ b/core/lib/multivm/src/interface/types/errors/tx_revert_reason.rs @@ -1,8 +1,6 @@ -use super::halt::Halt; - use std::fmt::Display; -use super::{BootloaderErrorCode, VmRevertReason}; +use super::{halt::Halt, BootloaderErrorCode, VmRevertReason}; #[derive(Debug, Clone, PartialEq)] pub enum TxRevertReason { @@ -57,7 +55,7 @@ impl TxRevertReason { BootloaderErrorCode::UnacceptablePubdataPrice => { Self::Halt(Halt::UnexpectedVMBehavior("UnacceptablePubdataPrice".to_owned())) } - // This is different from AccountTxValidationFailed error in a way that it means that + // This is different from `AccountTxValidationFailed` error in a way that it means that // the error was not produced by the account itself, but for some other unknown reason (most likely not enough gas) BootloaderErrorCode::TxValidationError => Self::Halt(Halt::ValidationFailed(revert_reason)), // Note, that `InnerTxError` is derived only after the actual tx execution, so diff --git a/core/lib/multivm/src/interface/types/errors/vm_revert_reason.rs b/core/lib/multivm/src/interface/types/errors/vm_revert_reason.rs index 531d8b5507f..25b394ce258 100644 --- a/core/lib/multivm/src/interface/types/errors/vm_revert_reason.rs +++ b/core/lib/multivm/src/interface/types/errors/vm_revert_reason.rs @@ -12,7 +12,7 @@ pub enum VmRevertReasonParsingError { IncorrectStringLength(Vec), } -/// Rich Revert Reasons https://github.com/0xProject/ZEIPs/issues/32 +/// Rich Revert Reasons `https://github.com/0xProject/ZEIPs/issues/32` #[derive(Debug, Clone, PartialEq)] pub enum VmRevertReason { General { @@ -68,7 +68,7 @@ impl VmRevertReason { pub fn to_user_friendly_string(&self) -> String { match self { - // In case of `Unknown` reason we suppress it to prevent verbose Error function_selector = 0x{} + // In case of `Unknown` reason we suppress it to prevent verbose `Error function_selector = 0x{}` // message shown to user. VmRevertReason::Unknown { .. 
} => "".to_owned(), _ => self.to_string(), diff --git a/core/lib/multivm/src/interface/types/inputs/l1_batch_env.rs b/core/lib/multivm/src/interface/types/inputs/l1_batch_env.rs index ff239ec4266..3af7bcd3e05 100644 --- a/core/lib/multivm/src/interface/types/inputs/l1_batch_env.rs +++ b/core/lib/multivm/src/interface/types/inputs/l1_batch_env.rs @@ -1,32 +1,18 @@ -use super::L2BlockEnv; -use zksync_types::{Address, L1BatchNumber, H256}; +use zksync_types::{fee_model::BatchFeeInput, Address, L1BatchNumber, H256}; -use crate::vm_latest::utils::fee::derive_base_fee_and_gas_per_pubdata; +use super::L2BlockEnv; -/// Unique params for each block +/// Unique params for each batch #[derive(Debug, Clone)] pub struct L1BatchEnv { // If previous batch hash is None, then this is the first batch pub previous_batch_hash: Option, pub number: L1BatchNumber, pub timestamp: u64, - pub l1_gas_price: u64, - pub fair_l2_gas_price: u64, + + /// The fee input into the batch. It contains information such as L1 gas price, L2 fair gas price, etc. + pub fee_input: BatchFeeInput, pub fee_account: Address, pub enforced_base_fee: Option, pub first_l2_block: L2BlockEnv, } - -impl L1BatchEnv { - pub fn base_fee(&self) -> u64 { - if let Some(base_fee) = self.enforced_base_fee { - return base_fee; - } - let (base_fee, _) = - derive_base_fee_and_gas_per_pubdata(self.l1_gas_price, self.fair_l2_gas_price); - base_fee - } - pub(crate) fn block_gas_price_per_pubdata(&self) -> u64 { - derive_base_fee_and_gas_per_pubdata(self.l1_gas_price, self.fair_l2_gas_price).1 - } -} diff --git a/core/lib/multivm/src/interface/types/outputs/execution_result.rs b/core/lib/multivm/src/interface/types/outputs/execution_result.rs index 3181a94a9da..6471ca1fe19 100644 --- a/core/lib/multivm/src/interface/types/outputs/execution_result.rs +++ b/core/lib/multivm/src/interface/types/outputs/execution_result.rs @@ -1,11 +1,14 @@ -use crate::interface::{Halt, VmExecutionStatistics, VmRevertReason}; use zksync_system_constants::PUBLISH_BYTECODE_OVERHEAD; -use zksync_types::event::{extract_long_l2_to_l1_messages, extract_published_bytecodes}; -use zksync_types::l2_to_l1_log::{SystemL2ToL1Log, UserL2ToL1Log}; -use zksync_types::tx::ExecutionMetrics; -use zksync_types::{StorageLogQuery, Transaction, VmEvent}; +use zksync_types::{ + event::{extract_long_l2_to_l1_messages, extract_published_bytecodes}, + l2_to_l1_log::{SystemL2ToL1Log, UserL2ToL1Log}, + tx::ExecutionMetrics, + StorageLogQuery, Transaction, VmEvent, +}; use zksync_utils::bytecode::bytecode_len_in_bytes; +use crate::interface::{Halt, VmExecutionStatistics, VmRevertReason}; + /// Refunds produced for the user. 
#[derive(Debug, Clone, Default)] pub struct Refunds { @@ -98,6 +101,7 @@ impl VmExecutionResultAndLogs { cycles_used: self.statistics.cycles_used, computational_gas_used: self.statistics.computational_gas_used, pubdata_published: self.statistics.pubdata_published, + estimated_circuits_used: self.statistics.estimated_circuits_used, } } } diff --git a/core/lib/multivm/src/interface/types/outputs/execution_state.rs b/core/lib/multivm/src/interface/types/outputs/execution_state.rs index 066de92ffbe..24034a96221 100644 --- a/core/lib/multivm/src/interface/types/outputs/execution_state.rs +++ b/core/lib/multivm/src/interface/types/outputs/execution_state.rs @@ -1,5 +1,7 @@ -use zksync_types::l2_to_l1_log::{SystemL2ToL1Log, UserL2ToL1Log}; -use zksync_types::{LogQuery, StorageLogQuery, VmEvent, U256}; +use zksync_types::{ + l2_to_l1_log::{SystemL2ToL1Log, UserL2ToL1Log}, + LogQuery, StorageLogQuery, VmEvent, U256, +}; /// State of the VM since the start of the batch execution. #[derive(Debug, Clone, PartialEq)] diff --git a/core/lib/multivm/src/interface/types/outputs/finished_l1batch.rs b/core/lib/multivm/src/interface/types/outputs/finished_l1batch.rs index c2d98049ec6..1418ce6adc3 100644 --- a/core/lib/multivm/src/interface/types/outputs/finished_l1batch.rs +++ b/core/lib/multivm/src/interface/types/outputs/finished_l1batch.rs @@ -9,4 +9,5 @@ pub struct FinishedL1Batch { pub final_execution_state: CurrentExecutionState, /// Memory of the bootloader with all executed transactions. Could be optional for old versions of the VM. pub final_bootloader_memory: Option, + pub pubdata_input: Option>, } diff --git a/core/lib/multivm/src/interface/types/outputs/mod.rs b/core/lib/multivm/src/interface/types/outputs/mod.rs index 39fed3ad9cb..eec19826e0b 100644 --- a/core/lib/multivm/src/interface/types/outputs/mod.rs +++ b/core/lib/multivm/src/interface/types/outputs/mod.rs @@ -1,12 +1,13 @@ +pub use self::{ + execution_result::{ExecutionResult, Refunds, VmExecutionLogs, VmExecutionResultAndLogs}, + execution_state::{BootloaderMemory, CurrentExecutionState}, + finished_l1batch::FinishedL1Batch, + l2_block::L2Block, + statistic::{VmExecutionStatistics, VmMemoryMetrics}, +}; + mod execution_result; mod execution_state; mod finished_l1batch; mod l2_block; mod statistic; - -pub use execution_result::VmExecutionLogs; -pub use execution_result::{ExecutionResult, Refunds, VmExecutionResultAndLogs}; -pub use execution_state::{BootloaderMemory, CurrentExecutionState}; -pub use finished_l1batch::FinishedL1Batch; -pub use l2_block::L2Block; -pub use statistic::{VmExecutionStatistics, VmMemoryMetrics}; diff --git a/core/lib/multivm/src/interface/types/outputs/statistic.rs b/core/lib/multivm/src/interface/types/outputs/statistic.rs index c1312fc95da..1f5b233423c 100644 --- a/core/lib/multivm/src/interface/types/outputs/statistic.rs +++ b/core/lib/multivm/src/interface/types/outputs/statistic.rs @@ -12,6 +12,7 @@ pub struct VmExecutionStatistics { /// Number of log queries produced by the VM during the tx execution. pub total_log_queries: usize, pub pubdata_published: u32, + pub estimated_circuits_used: f32, } /// Oracle metrics of the VM. 
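All of the `GlueFrom` implementations above follow the pattern that `glue/mod.rs` documents: crate-local stand-ins for `From`/`Into` that sidestep the orphan rule, plus a blanket impl so only `GlueFrom` ever needs to be written by hand. A self-contained illustration of that pattern (the trait shapes mirror the ones in the diff; the `TimestampOld`/`TimestampNew` types are made up for the example):

```rust
// Crate-local conversion traits, mirroring `glue/mod.rs`.
trait GlueFrom<T>: Sized {
    fn glue_from(value: T) -> Self;
}

trait GlueInto<T>: Sized {
    fn glue_into(self) -> T;
}

// Blanket impl: anything convertible via `GlueFrom` is also `GlueInto`,
// so only one direction ever has to be implemented.
impl<T, U> GlueInto<U> for T
where
    U: GlueFrom<T>,
{
    fn glue_into(self) -> U {
        U::glue_from(self)
    }
}

// Stand-ins for the "same" type coming from two versions of a dependency.
// `From` can't be implemented for a pair of foreign types, but a local
// trait can.
struct TimestampOld(u32);
struct TimestampNew(u32);

impl GlueFrom<TimestampOld> for TimestampNew {
    fn glue_from(value: TimestampOld) -> Self {
        TimestampNew(value.0)
    }
}

fn main() {
    let new: TimestampNew = TimestampOld(7).glue_into();
    assert_eq!(new.0, 7);
}
```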
diff --git a/core/lib/multivm/src/lib.rs b/core/lib/multivm/src/lib.rs index 1e45443c0f2..f5a84a1dc60 100644 --- a/core/lib/multivm/src/lib.rs +++ b/core/lib/multivm/src/lib.rs @@ -1,28 +1,27 @@ -// #![deny(unreachable_pub)] #![deny(unused_crate_dependencies)] #![warn(unused_extern_crates)] #![warn(unused_imports)] +pub use zk_evm_1_3_1; +pub use zk_evm_1_3_3; +pub use zk_evm_1_4_1; +pub use zksync_types::vm_version::VmVersion; + +pub use self::versions::{ + vm_1_3_2, vm_boojum_integration, vm_latest, vm_m5, vm_m6, vm_refunds_enhancement, + vm_virtual_blocks, +}; pub use crate::{ glue::{ history_mode::HistoryMode, - tracers::{MultiVmTracerPointer, MultivmTracer}, + tracers::{MultiVMTracer, MultiVmTracerPointer}, }, vm_instance::VmInstance, }; -pub use zksync_types::vm_version::VmVersion; mod glue; - -mod vm_instance; - pub mod interface; pub mod tracers; +pub mod utils; pub mod versions; - -pub use versions::vm_1_3_2; -pub use versions::vm_latest; -pub use versions::vm_m5; -pub use versions::vm_m6; -pub use versions::vm_refunds_enhancement; -pub use versions::vm_virtual_blocks; +mod vm_instance; diff --git a/core/lib/multivm/src/tracers/call_tracer/metrics.rs b/core/lib/multivm/src/tracers/call_tracer/metrics.rs new file mode 100644 index 00000000000..b3d94464f50 --- /dev/null +++ b/core/lib/multivm/src/tracers/call_tracer/metrics.rs @@ -0,0 +1,15 @@ +use vise::{Buckets, Histogram, Metrics}; + +#[derive(Debug, Metrics)] +#[metrics(prefix = "vm_call_tracer")] +pub struct CallMetrics { + /// Maximum call stack depth during the execution of the transaction. + #[metrics(buckets = Buckets::exponential(1.0..=64.0, 2.0))] + pub call_stack_depth: Histogram, + /// Maximum number of near calls during the execution of the transaction. + #[metrics(buckets = Buckets::exponential(1.0..=64.0, 2.0))] + pub max_near_calls: Histogram, +} + +#[vise::register] +pub static CALL_METRICS: vise::Global = vise::Global::new(); diff --git a/core/lib/multivm/src/tracers/call_tracer/mod.rs b/core/lib/multivm/src/tracers/call_tracer/mod.rs index 90343a53bf6..6d0285fb97d 100644 --- a/core/lib/multivm/src/tracers/call_tracer/mod.rs +++ b/core/lib/multivm/src/tracers/call_tracer/mod.rs @@ -1,7 +1,12 @@ -use once_cell::sync::OnceCell; use std::sync::Arc; + +use once_cell::sync::OnceCell; use zksync_types::vm_trace::Call; +use crate::tracers::call_tracer::metrics::CALL_METRICS; + +mod metrics; +pub mod vm_boojum_integration; pub mod vm_latest; pub mod vm_refunds_enhancement; pub mod vm_virtual_blocks; @@ -10,12 +15,23 @@ pub mod vm_virtual_blocks; pub struct CallTracer { stack: Vec, result: Arc>>, + + max_stack_depth: usize, + max_near_calls: usize, } #[derive(Debug, Clone)] struct FarcallAndNearCallCount { farcall: Call, near_calls_after: usize, + stack_depth_on_prefix: usize, +} + +impl Drop for CallTracer { + fn drop(&mut self) { + CALL_METRICS.call_stack_depth.observe(self.max_stack_depth); + CALL_METRICS.max_near_calls.observe(self.max_near_calls); + } } impl CallTracer { @@ -23,6 +39,8 @@ impl CallTracer { Self { stack: vec![], result, + max_stack_depth: 0, + max_near_calls: 0, } } @@ -38,4 +56,35 @@ impl CallTracer { let cell = self.result.as_ref(); cell.set(result).unwrap(); } + + fn push_call_and_update_stats(&mut self, farcall: Call, near_calls_after: usize) { + let stack_depth = self + .stack + .last() + .map(|x| x.stack_depth_on_prefix) + .unwrap_or(0); + + let depth_on_prefix = stack_depth + 1 + near_calls_after; + + let call = FarcallAndNearCallCount { + farcall, + near_calls_after, + stack_depth_on_prefix: 
depth_on_prefix,
+        };
+
+        self.stack.push(call);
+
+        self.max_stack_depth = self.max_stack_depth.max(depth_on_prefix);
+        self.max_near_calls = self.max_near_calls.max(near_calls_after);
+    }
+
+    fn increase_near_call_count(&mut self) {
+        if let Some(last) = self.stack.last_mut() {
+            last.near_calls_after += 1;
+            last.stack_depth_on_prefix += 1;
+
+            self.max_near_calls = self.max_near_calls.max(last.near_calls_after);
+            self.max_stack_depth = self.max_stack_depth.max(last.stack_depth_on_prefix);
+        }
+    }
 }
diff --git a/core/lib/multivm/src/tracers/call_tracer/vm_boojum_integration/mod.rs b/core/lib/multivm/src/tracers/call_tracer/vm_boojum_integration/mod.rs
new file mode 100644
index 00000000000..e2e884e26a1
--- /dev/null
+++ b/core/lib/multivm/src/tracers/call_tracer/vm_boojum_integration/mod.rs
@@ -0,0 +1,216 @@
+use zk_evm_1_4_0::{
+    tracing::{AfterExecutionData, VmLocalStateData},
+    zkevm_opcode_defs::{
+        FarCallABI, FatPointer, Opcode, RetOpcode, CALL_IMPLICIT_CALLDATA_FAT_PTR_REGISTER,
+        RET_IMPLICIT_RETURNDATA_PARAMS_REGISTER,
+    },
+};
+use zksync_state::{StoragePtr, WriteStorage};
+use zksync_system_constants::CONTRACT_DEPLOYER_ADDRESS;
+use zksync_types::{
+    vm_trace::{Call, CallType},
+    FarCallOpcode, U256,
+};
+
+use crate::{
+    interface::{
+        tracer::VmExecutionStopReason, traits::tracers::dyn_tracers::vm_1_4_0::DynTracer,
+        VmRevertReason,
+    },
+    tracers::call_tracer::CallTracer,
+    vm_boojum_integration::{BootloaderState, HistoryMode, SimpleMemory, VmTracer, ZkSyncVmState},
+};
+
+impl DynTracer> for CallTracer {
+    fn after_execution(
+        &mut self,
+        state: VmLocalStateData<'_>,
+        data: AfterExecutionData,
+        memory: &SimpleMemory,
+        _storage: StoragePtr,
+    ) {
+        match data.opcode.variant.opcode {
+            Opcode::NearCall(_) => {
+                self.increase_near_call_count();
+            }
+            Opcode::FarCall(far_call) => {
+                // We use parent gas for properly calculating gas used in the trace.
+                let current_ergs = state.vm_local_state.callstack.current.ergs_remaining;
+                let parent_gas = state
+                    .vm_local_state
+                    .callstack
+                    .inner
+                    .last()
+                    .map(|call| call.ergs_remaining + current_ergs)
+                    .unwrap_or(current_ergs);
+
+                let mut current_call = Call {
+                    r#type: CallType::Call(far_call),
+                    gas: 0,
+                    parent_gas,
+                    ..Default::default()
+                };
+
+                self.handle_far_call_op_code_vm_boojum_integration(
+                    state,
+                    memory,
+                    &mut current_call,
+                );
+                self.push_call_and_update_stats(current_call, 0);
+            }
+            Opcode::Ret(ret_code) => {
+                self.handle_ret_op_code_vm_boojum_integration(state, memory, ret_code);
+            }
+            _ => {}
+        };
+    }
+}
+
+impl VmTracer for CallTracer {
+    fn after_vm_execution(
+        &mut self,
+        _state: &mut ZkSyncVmState,
+        _bootloader_state: &BootloaderState,
+        _stop_reason: VmExecutionStopReason,
+    ) {
+        self.store_result()
+    }
+}
+
+impl CallTracer {
+    fn handle_far_call_op_code_vm_boojum_integration(
+        &mut self,
+        state: VmLocalStateData<'_>,
+        memory: &SimpleMemory,
+        current_call: &mut Call,
+    ) {
+        let current = state.vm_local_state.callstack.current;
+        // All calls from actual users are mimic calls,
+        // so we need to check that the previous call was to the deployer.
+        // Actually, it's a call to the constructor:
+        // at this stage the caller is the user and the callee is the deployed contract.
+        let call_type = if let CallType::Call(far_call) = current_call.r#type {
+            if matches!(far_call, FarCallOpcode::Mimic) {
+                let previous_caller = state
+                    .vm_local_state
+                    .callstack
+                    .inner
+                    .last()
+                    .map(|call| call.this_address)
+                    // Actually it's safe to just unwrap here, because we have at least one call in the stack,
+                    // but I want to be sure that we will not have any problems in the future
+                    .unwrap_or(current.this_address);
+                if previous_caller == CONTRACT_DEPLOYER_ADDRESS {
+                    CallType::Create
+                } else {
+                    CallType::Call(far_call)
+                }
+            } else {
+                CallType::Call(far_call)
+            }
+        } else {
+            unreachable!()
+        };
+        let calldata = if current.code_page.0 == 0 || current.ergs_remaining == 0 {
+            vec![]
+        } else {
+            let packed_abi =
+                state.vm_local_state.registers[CALL_IMPLICIT_CALLDATA_FAT_PTR_REGISTER as usize];
+            assert!(packed_abi.is_pointer);
+            let far_call_abi = FarCallABI::from_u256(packed_abi.value);
+            memory.read_unaligned_bytes(
+                far_call_abi.memory_quasi_fat_pointer.memory_page as usize,
+                far_call_abi.memory_quasi_fat_pointer.start as usize,
+                far_call_abi.memory_quasi_fat_pointer.length as usize,
+            )
+        };
+
+        current_call.input = calldata;
+        current_call.r#type = call_type;
+        current_call.from = current.msg_sender;
+        current_call.to = current.this_address;
+        current_call.value = U256::from(current.context_u128_value);
+        current_call.gas = current.ergs_remaining;
+    }
+
+    fn save_output_vm_boojum_integration(
+        &mut self,
+        state: VmLocalStateData<'_>,
+        memory: &SimpleMemory,
+        ret_opcode: RetOpcode,
+        current_call: &mut Call,
+    ) {
+        let fat_data_pointer =
+            state.vm_local_state.registers[RET_IMPLICIT_RETURNDATA_PARAMS_REGISTER as usize];
+
+        // if `fat_data_pointer` is not a pointer then there is no output
+        let output = if fat_data_pointer.is_pointer {
+            let fat_data_pointer = FatPointer::from_u256(fat_data_pointer.value);
+            if !fat_data_pointer.is_trivial() {
+                Some(memory.read_unaligned_bytes(
+                    fat_data_pointer.memory_page as usize,
+                    fat_data_pointer.start as usize,
+                    fat_data_pointer.length as usize,
+                ))
+            } else {
+                None
+            }
+        } else {
+            None
+        };
+
+        match ret_opcode {
+            RetOpcode::Ok => {
+                current_call.output = output.unwrap_or_default();
+            }
+            RetOpcode::Revert => {
+                if let Some(output) = output {
+                    current_call.revert_reason =
+                        Some(VmRevertReason::from(output.as_slice()).to_string());
+                } else {
+                    current_call.revert_reason = Some("Unknown revert reason".to_string());
+                }
+            }
+            RetOpcode::Panic => {
+                current_call.error = Some("Panic".to_string());
+            }
+        }
+    }
+
+    fn handle_ret_op_code_vm_boojum_integration(
+        &mut self,
+        state: VmLocalStateData<'_>,
+        memory: &SimpleMemory,
+        ret_opcode: RetOpcode,
+    ) {
+        let Some(mut current_call) = self.stack.pop() else {
+            return;
+        };
+
+        if current_call.near_calls_after > 0 {
+            current_call.near_calls_after -= 1;
+            self.push_call_and_update_stats(current_call.farcall, current_call.near_calls_after);
+            return;
+        }
+
+        current_call.farcall.gas_used = current_call
+            .farcall
+            .parent_gas
+            .saturating_sub(state.vm_local_state.callstack.current.ergs_remaining);
+
+        self.save_output_vm_boojum_integration(
+            state,
+            memory,
+            ret_opcode,
+            &mut current_call.farcall,
+        );
+
+        // If there is a parent call, push the current call to it
+        // Otherwise, push the current call to the stack, because it's the top level call
+        if let Some(parent_call) = self.stack.last_mut() {
+            parent_call.farcall.calls.push(current_call.farcall);
+        } else {
+            self.push_call_and_update_stats(current_call.farcall, current_call.near_calls_after);
} + } +} diff --git a/core/lib/multivm/src/tracers/call_tracer/vm_latest/mod.rs b/core/lib/multivm/src/tracers/call_tracer/vm_latest/mod.rs index 2b6fc144bd4..d7889c910e2 100644 --- a/core/lib/multivm/src/tracers/call_tracer/vm_latest/mod.rs +++ b/core/lib/multivm/src/tracers/call_tracer/vm_latest/mod.rs @@ -1,4 +1,4 @@ -use zk_evm_1_4_0::{ +use zk_evm_1_4_1::{ tracing::{AfterExecutionData, VmLocalStateData}, zkevm_opcode_defs::{ FarCallABI, FatPointer, Opcode, RetOpcode, CALL_IMPLICIT_CALLDATA_FAT_PTR_REGISTER, @@ -7,16 +7,20 @@ use zk_evm_1_4_0::{ }; use zksync_state::{StoragePtr, WriteStorage}; use zksync_system_constants::CONTRACT_DEPLOYER_ADDRESS; -use zksync_types::vm_trace::{Call, CallType}; -use zksync_types::FarCallOpcode; -use zksync_types::U256; +use zksync_types::{ + vm_trace::{Call, CallType}, + FarCallOpcode, U256, +}; -use crate::interface::{ - tracer::VmExecutionStopReason, traits::tracers::dyn_tracers::vm_1_4_0::DynTracer, - VmRevertReason, +use crate::{ + glue::GlueInto, + interface::{ + tracer::VmExecutionStopReason, traits::tracers::dyn_tracers::vm_1_4_1::DynTracer, + VmRevertReason, + }, + tracers::call_tracer::CallTracer, + vm_latest::{BootloaderState, HistoryMode, SimpleMemory, VmTracer, ZkSyncVmState}, }; -use crate::tracers::call_tracer::{CallTracer, FarcallAndNearCallCount}; -use crate::vm_latest::{BootloaderState, HistoryMode, SimpleMemory, VmTracer, ZkSyncVmState}; impl DynTracer> for CallTracer { fn after_execution( @@ -28,9 +32,7 @@ impl DynTracer> for CallTracer { ) { match data.opcode.variant.opcode { Opcode::NearCall(_) => { - if let Some(last) = self.stack.last_mut() { - last.near_calls_after += 1; - } + self.increase_near_call_count(); } Opcode::FarCall(far_call) => { // We use parent gas for properly calculating gas used in the trace. 
@@ -44,17 +46,14 @@ impl DynTracer> for CallTracer { .unwrap_or(current_ergs); let mut current_call = Call { - r#type: CallType::Call(far_call), + r#type: CallType::Call(far_call.glue_into()), gas: 0, parent_gas, ..Default::default() }; self.handle_far_call_op_code_latest(state, memory, &mut current_call); - self.stack.push(FarcallAndNearCallCount { - farcall: current_call, - near_calls_after: 0, - }); + self.push_call_and_update_stats(current_call, 0); } Opcode::Ret(ret_code) => { self.handle_ret_op_code_latest(state, memory, ret_code); @@ -141,7 +140,7 @@ impl CallTracer { let fat_data_pointer = state.vm_local_state.registers[RET_IMPLICIT_RETURNDATA_PARAMS_REGISTER as usize]; - // if fat_data_pointer is not a pointer then there is no output + // if `fat_data_pointer` is not a pointer then there is no output let output = if fat_data_pointer.is_pointer { let fat_data_pointer = FatPointer::from_u256(fat_data_pointer.value); if !fat_data_pointer.is_trivial() { @@ -187,7 +186,7 @@ impl CallTracer { if current_call.near_calls_after > 0 { current_call.near_calls_after -= 1; - self.stack.push(current_call); + self.push_call_and_update_stats(current_call.farcall, current_call.near_calls_after); return; } @@ -203,7 +202,7 @@ impl CallTracer { if let Some(parent_call) = self.stack.last_mut() { parent_call.farcall.calls.push(current_call.farcall); } else { - self.stack.push(current_call); + self.push_call_and_update_stats(current_call.farcall, current_call.near_calls_after); } } } diff --git a/core/lib/multivm/src/tracers/call_tracer/vm_refunds_enhancement/mod.rs b/core/lib/multivm/src/tracers/call_tracer/vm_refunds_enhancement/mod.rs index 43dd363dcea..6a97d791e8e 100644 --- a/core/lib/multivm/src/tracers/call_tracer/vm_refunds_enhancement/mod.rs +++ b/core/lib/multivm/src/tracers/call_tracer/vm_refunds_enhancement/mod.rs @@ -7,17 +7,18 @@ use zk_evm_1_3_3::{ }; use zksync_state::{StoragePtr, WriteStorage}; use zksync_system_constants::CONTRACT_DEPLOYER_ADDRESS; -use zksync_types::vm_trace::{Call, CallType}; -use zksync_types::FarCallOpcode; -use zksync_types::U256; - -use crate::interface::{ - tracer::VmExecutionStopReason, traits::tracers::dyn_tracers::vm_1_3_3::DynTracer, - VmRevertReason, +use zksync_types::{ + vm_trace::{Call, CallType}, + FarCallOpcode, U256, }; -use crate::tracers::call_tracer::{CallTracer, FarcallAndNearCallCount}; -use crate::vm_refunds_enhancement::{ - BootloaderState, HistoryMode, SimpleMemory, VmTracer, ZkSyncVmState, + +use crate::{ + interface::{ + tracer::VmExecutionStopReason, traits::tracers::dyn_tracers::vm_1_3_3::DynTracer, + VmRevertReason, + }, + tracers::call_tracer::CallTracer, + vm_refunds_enhancement::{BootloaderState, HistoryMode, SimpleMemory, VmTracer, ZkSyncVmState}, }; impl DynTracer> for CallTracer { @@ -30,9 +31,7 @@ impl DynTracer> for CallTracer { ) { match data.opcode.variant.opcode { Opcode::NearCall(_) => { - if let Some(last) = self.stack.last_mut() { - last.near_calls_after += 1; - } + self.increase_near_call_count(); } Opcode::FarCall(far_call) => { // We use parent gas for properly calculating gas used in the trace. 
@@ -53,10 +52,8 @@ impl DynTracer> for CallTracer { }; self.handle_far_call_op_code_refunds_enhancement(state, memory, &mut current_call); - self.stack.push(FarcallAndNearCallCount { - farcall: current_call, - near_calls_after: 0, - }); + + self.push_call_and_update_stats(current_call, 0); } Opcode::Ret(ret_code) => { self.handle_ret_op_code_refunds_enhancement(state, memory, ret_code); @@ -143,7 +140,7 @@ impl CallTracer { let fat_data_pointer = state.vm_local_state.registers[RET_IMPLICIT_RETURNDATA_PARAMS_REGISTER as usize]; - // if fat_data_pointer is not a pointer then there is no output + // if `fat_data_pointer` is not a pointer then there is no output let output = if fat_data_pointer.is_pointer { let fat_data_pointer = FatPointer::from_u256(fat_data_pointer.value); if !fat_data_pointer.is_trivial() { @@ -189,7 +186,7 @@ impl CallTracer { if current_call.near_calls_after > 0 { current_call.near_calls_after -= 1; - self.stack.push(current_call); + self.push_call_and_update_stats(current_call.farcall, current_call.near_calls_after); return; } @@ -205,7 +202,7 @@ impl CallTracer { if let Some(parent_call) = self.stack.last_mut() { parent_call.farcall.calls.push(current_call.farcall); } else { - self.stack.push(current_call); + self.push_call_and_update_stats(current_call.farcall, current_call.near_calls_after); } } } diff --git a/core/lib/multivm/src/tracers/call_tracer/vm_virtual_blocks/mod.rs b/core/lib/multivm/src/tracers/call_tracer/vm_virtual_blocks/mod.rs index c78593b40e7..f1713fc5e9f 100644 --- a/core/lib/multivm/src/tracers/call_tracer/vm_virtual_blocks/mod.rs +++ b/core/lib/multivm/src/tracers/call_tracer/vm_virtual_blocks/mod.rs @@ -1,20 +1,23 @@ -use zk_evm_1_3_3::tracing::{AfterExecutionData, VmLocalStateData}; -use zk_evm_1_3_3::zkevm_opcode_defs::{ - FarCallABI, FatPointer, Opcode, RetOpcode, CALL_IMPLICIT_CALLDATA_FAT_PTR_REGISTER, - RET_IMPLICIT_RETURNDATA_PARAMS_REGISTER, +use zk_evm_1_3_3::{ + tracing::{AfterExecutionData, VmLocalStateData}, + zkevm_opcode_defs::{ + FarCallABI, FatPointer, Opcode, RetOpcode, CALL_IMPLICIT_CALLDATA_FAT_PTR_REGISTER, + RET_IMPLICIT_RETURNDATA_PARAMS_REGISTER, + }, }; use zksync_state::{StoragePtr, WriteStorage}; use zksync_system_constants::CONTRACT_DEPLOYER_ADDRESS; -use zksync_types::vm_trace::{Call, CallType}; -use zksync_types::FarCallOpcode; -use zksync_types::U256; - -use crate::interface::{ - dyn_tracers::vm_1_3_3::DynTracer, VmExecutionResultAndLogs, VmRevertReason, +use zksync_types::{ + vm_trace::{Call, CallType}, + FarCallOpcode, U256, }; -use crate::tracers::call_tracer::{CallTracer, FarcallAndNearCallCount}; -use crate::vm_virtual_blocks::{ - ExecutionEndTracer, ExecutionProcessing, HistoryMode, SimpleMemory, VmTracer, + +use crate::{ + interface::{dyn_tracers::vm_1_3_3::DynTracer, VmExecutionResultAndLogs, VmRevertReason}, + tracers::call_tracer::CallTracer, + vm_virtual_blocks::{ + ExecutionEndTracer, ExecutionProcessing, HistoryMode, SimpleMemory, VmTracer, + }, }; impl DynTracer> for CallTracer { @@ -27,9 +30,7 @@ impl DynTracer> for CallTracer { ) { match data.opcode.variant.opcode { Opcode::NearCall(_) => { - if let Some(last) = self.stack.last_mut() { - last.near_calls_after += 1; - } + self.increase_near_call_count(); } Opcode::FarCall(far_call) => { // We use parent gas for properly calculating gas used in the trace. 
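Both call-tracer versions patched so far route `Opcode::Ret` through the same unwinding rule, visible in the `near_calls_after` hunks above: a far-call frame is closed only once every near call opened inside it has returned. A condensed restatement of that logic (not a verbatim excerpt):

```rust
// Condensed from the ret-handling hunks above.
fn unwind_on_ret(tracer: &mut CallTracer, mut current_call: FarcallAndNearCallCount) {
    if current_call.near_calls_after > 0 {
        // A near call is returning: keep the far-call frame open with one
        // fewer pending near call.
        current_call.near_calls_after -= 1;
        tracer.push_call_and_update_stats(current_call.farcall, current_call.near_calls_after);
        return;
    }
    // The far call itself is returning: attach it to its parent frame, or
    // re-push it as the root of the trace when there is no parent.
    if let Some(parent) = tracer.stack.last_mut() {
        parent.farcall.calls.push(current_call.farcall);
    } else {
        tracer.push_call_and_update_stats(current_call.farcall, current_call.near_calls_after);
    }
}
```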
@@ -50,10 +51,7 @@ impl DynTracer> for CallTracer { }; self.handle_far_call_op_code_virtual_blocks(state, data, memory, &mut current_call); - self.stack.push(FarcallAndNearCallCount { - farcall: current_call, - near_calls_after: 0, - }); + self.push_call_and_update_stats(current_call, 0); } Opcode::Ret(ret_code) => { self.handle_ret_op_code_virtual_blocks(state, data, memory, ret_code); @@ -140,7 +138,7 @@ impl CallTracer { let fat_data_pointer = state.vm_local_state.registers[RET_IMPLICIT_RETURNDATA_PARAMS_REGISTER as usize]; - // if fat_data_pointer is not a pointer then there is no output + // if `fat_data_pointer` is not a pointer then there is no output let output = if fat_data_pointer.is_pointer { let fat_data_pointer = FatPointer::from_u256(fat_data_pointer.value); if !fat_data_pointer.is_trivial() { @@ -187,7 +185,7 @@ impl CallTracer { if current_call.near_calls_after > 0 { current_call.near_calls_after -= 1; - self.stack.push(current_call); + self.push_call_and_update_stats(current_call.farcall, current_call.near_calls_after); return; } @@ -203,7 +201,7 @@ impl CallTracer { if let Some(parent_call) = self.stack.last_mut() { parent_call.farcall.calls.push(current_call.farcall); } else { - self.stack.push(current_call); + self.push_call_and_update_stats(current_call.farcall, current_call.near_calls_after); } } } diff --git a/core/lib/multivm/src/tracers/multivm_dispatcher.rs b/core/lib/multivm/src/tracers/multivm_dispatcher.rs index d4c7337ce65..aee09fe0f49 100644 --- a/core/lib/multivm/src/tracers/multivm_dispatcher.rs +++ b/core/lib/multivm/src/tracers/multivm_dispatcher.rs @@ -1,6 +1,7 @@ -use crate::{HistoryMode, MultiVmTracerPointer}; use zksync_state::WriteStorage; +use crate::{HistoryMode, MultiVmTracerPointer}; + /// Tracer dispatcher is a tracer that can dispatch calls to multiple tracers. 
pub struct TracerDispatcher { tracers: Vec>, @@ -29,13 +30,27 @@ impl Default for TracerDispatcher { } impl From> - for crate::vm_latest::TracerDispatcher + for crate::vm_latest::TracerDispatcher { fn from(value: TracerDispatcher) -> Self { Self::new(value.tracers.into_iter().map(|x| x.latest()).collect()) } } +impl From> + for crate::vm_boojum_integration::TracerDispatcher +{ + fn from(value: TracerDispatcher) -> Self { + Self::new( + value + .tracers + .into_iter() + .map(|x| x.vm_boojum_integration()) + .collect(), + ) + } +} + impl From> for crate::vm_refunds_enhancement::TracerDispatcher { diff --git a/core/lib/multivm/src/tracers/storage_invocation/mod.rs b/core/lib/multivm/src/tracers/storage_invocation/mod.rs index 3816d2a07a1..f48534709ad 100644 --- a/core/lib/multivm/src/tracers/storage_invocation/mod.rs +++ b/core/lib/multivm/src/tracers/storage_invocation/mod.rs @@ -1,3 +1,4 @@ +pub mod vm_boojum_integration; pub mod vm_latest; pub mod vm_refunds_enhancement; pub mod vm_virtual_blocks; diff --git a/core/lib/multivm/src/tracers/storage_invocation/vm_boojum_integration/mod.rs b/core/lib/multivm/src/tracers/storage_invocation/vm_boojum_integration/mod.rs new file mode 100644 index 00000000000..05651485bd7 --- /dev/null +++ b/core/lib/multivm/src/tracers/storage_invocation/vm_boojum_integration/mod.rs @@ -0,0 +1,35 @@ +use zksync_state::WriteStorage; + +use crate::{ + interface::{ + tracer::{TracerExecutionStatus, TracerExecutionStopReason}, + traits::tracers::dyn_tracers::vm_1_4_0::DynTracer, + Halt, + }, + tracers::storage_invocation::StorageInvocations, + vm_boojum_integration::{BootloaderState, HistoryMode, SimpleMemory, VmTracer, ZkSyncVmState}, +}; + +impl DynTracer> for StorageInvocations {} + +impl VmTracer for StorageInvocations { + fn finish_cycle( + &mut self, + state: &mut ZkSyncVmState, + _bootloader_state: &mut BootloaderState, + ) -> TracerExecutionStatus { + let current = state + .storage + .storage + .get_ptr() + .borrow() + .missed_storage_invocations(); + + if current >= self.limit { + return TracerExecutionStatus::Stop(TracerExecutionStopReason::Abort( + Halt::TracerCustom("Storage invocations limit reached".to_string()), + )); + } + TracerExecutionStatus::Continue + } +} diff --git a/core/lib/multivm/src/tracers/storage_invocation/vm_latest/mod.rs b/core/lib/multivm/src/tracers/storage_invocation/vm_latest/mod.rs index 0490ec34107..3fa04401782 100644 --- a/core/lib/multivm/src/tracers/storage_invocation/vm_latest/mod.rs +++ b/core/lib/multivm/src/tracers/storage_invocation/vm_latest/mod.rs @@ -1,12 +1,15 @@ -use crate::interface::{ - tracer::{TracerExecutionStatus, TracerExecutionStopReason}, - traits::tracers::dyn_tracers::vm_1_4_0::DynTracer, - Halt, -}; -use crate::tracers::storage_invocation::StorageInvocations; -use crate::vm_latest::{BootloaderState, HistoryMode, SimpleMemory, VmTracer, ZkSyncVmState}; use zksync_state::WriteStorage; +use crate::{ + interface::{ + tracer::{TracerExecutionStatus, TracerExecutionStopReason}, + traits::tracers::dyn_tracers::vm_1_4_1::DynTracer, + Halt, + }, + tracers::storage_invocation::StorageInvocations, + vm_latest::{BootloaderState, HistoryMode, SimpleMemory, VmTracer, ZkSyncVmState}, +}; + impl DynTracer> for StorageInvocations {} impl VmTracer for StorageInvocations { diff --git a/core/lib/multivm/src/tracers/storage_invocation/vm_refunds_enhancement/mod.rs b/core/lib/multivm/src/tracers/storage_invocation/vm_refunds_enhancement/mod.rs index fe4fc33418d..1e562374afd 100644 --- 
a/core/lib/multivm/src/tracers/storage_invocation/vm_refunds_enhancement/mod.rs +++ b/core/lib/multivm/src/tracers/storage_invocation/vm_refunds_enhancement/mod.rs @@ -1,10 +1,15 @@ -use crate::interface::tracer::{TracerExecutionStatus, TracerExecutionStopReason}; -use crate::interface::{traits::tracers::dyn_tracers::vm_1_3_3::DynTracer, Halt}; -use crate::tracers::storage_invocation::StorageInvocations; -use crate::vm_refunds_enhancement::VmTracer; -use crate::vm_refunds_enhancement::{BootloaderState, HistoryMode, SimpleMemory, ZkSyncVmState}; use zksync_state::WriteStorage; +use crate::{ + interface::{ + tracer::{TracerExecutionStatus, TracerExecutionStopReason}, + traits::tracers::dyn_tracers::vm_1_3_3::DynTracer, + Halt, + }, + tracers::storage_invocation::StorageInvocations, + vm_refunds_enhancement::{BootloaderState, HistoryMode, SimpleMemory, VmTracer, ZkSyncVmState}, +}; + impl DynTracer> for StorageInvocations {} impl VmTracer for StorageInvocations { diff --git a/core/lib/multivm/src/tracers/storage_invocation/vm_virtual_blocks/mod.rs b/core/lib/multivm/src/tracers/storage_invocation/vm_virtual_blocks/mod.rs index 023b6f376cd..cd0ab9f4bb5 100644 --- a/core/lib/multivm/src/tracers/storage_invocation/vm_virtual_blocks/mod.rs +++ b/core/lib/multivm/src/tracers/storage_invocation/vm_virtual_blocks/mod.rs @@ -1,11 +1,14 @@ -use crate::interface::dyn_tracers::vm_1_3_3::DynTracer; -use crate::tracers::storage_invocation::StorageInvocations; -use crate::vm_virtual_blocks::{ - BootloaderState, ExecutionEndTracer, ExecutionProcessing, HistoryMode, SimpleMemory, VmTracer, - ZkSyncVmState, -}; use zksync_state::WriteStorage; +use crate::{ + interface::dyn_tracers::vm_1_3_3::DynTracer, + tracers::storage_invocation::StorageInvocations, + vm_virtual_blocks::{ + BootloaderState, ExecutionEndTracer, ExecutionProcessing, HistoryMode, SimpleMemory, + VmTracer, ZkSyncVmState, + }, +}; + impl ExecutionEndTracer for StorageInvocations { fn should_stop_execution(&self) -> bool { self.current >= self.limit diff --git a/core/lib/multivm/src/tracers/validator/mod.rs b/core/lib/multivm/src/tracers/validator/mod.rs index 26d3b0ad926..aef11924af8 100644 --- a/core/lib/multivm/src/tracers/validator/mod.rs +++ b/core/lib/multivm/src/tracers/validator/mod.rs @@ -1,19 +1,11 @@ -mod types; -mod vm_latest; -mod vm_refunds_enhancement; -mod vm_virtual_blocks; - -use std::sync::Arc; -use std::{collections::HashSet, marker::PhantomData}; +use std::{collections::HashSet, marker::PhantomData, sync::Arc}; use once_cell::sync::OnceCell; - use zksync_state::{StoragePtr, WriteStorage}; use zksync_system_constants::{ ACCOUNT_CODE_STORAGE_ADDRESS, BOOTLOADER_ADDRESS, CONTRACT_DEPLOYER_ADDRESS, L2_ETH_TOKEN_ADDRESS, MSG_VALUE_SIMULATOR_ADDRESS, SYSTEM_CONTEXT_ADDRESS, }; - use zksync_types::{ vm_trace::ViolatedValidationRule, web3::signing::keccak256, AccountTreeId, Address, StorageKey, H256, U256, @@ -23,6 +15,12 @@ use zksync_utils::{be_bytes_to_safe_address, u256_to_account_address, u256_to_h2 use crate::tracers::validator::types::{NewTrustedValidationItems, ValidationTracerMode}; pub use crate::tracers::validator::types::{ValidationError, ValidationTracerParams}; +mod types; +mod vm_boojum_integration; +mod vm_latest; +mod vm_refunds_enhancement; +mod vm_virtual_blocks; + /// Tracer that is used to ensure that the validation adheres to all the rules /// to prevent DDoS attacks on the server. 
#[derive(Debug, Clone)] @@ -104,7 +102,7 @@ impl ValidationTracer { return true; } - // The pair of MSG_VALUE_SIMULATOR_ADDRESS & L2_ETH_TOKEN_ADDRESS simulates the behavior of transfering ETH + // The pair of `MSG_VALUE_SIMULATOR_ADDRESS` & `L2_ETH_TOKEN_ADDRESS` simulates the behavior of transferring ETH // that is safe for the DDoS protection rules. if valid_eth_token_call(address, msg_sender) { return true; @@ -148,11 +146,11 @@ impl ValidationTracer { let (potential_address_bytes, potential_position_bytes) = calldata.split_at(32); let potential_address = be_bytes_to_safe_address(potential_address_bytes); - // If the validation_address is equal to the potential_address, - // then it is a request that could be used for mapping of kind mapping(address => ...). + // If the `validation_address` is equal to the `potential_address`, + // then it is a request that could be used for mapping of kind `mapping(address => ...).` // - // If the potential_position_bytes were already allowed before, then this keccak might be used - // for ERC-20 allowance or any other of mapping(address => mapping(...)) + // If the `potential_position_bytes` were already allowed before, then this keccak might be used + // for ERC-20 allowance or any other of `mapping(address => mapping(...))` if potential_address == Some(validated_address) || self .auxilary_allowed_slots @@ -190,7 +188,7 @@ fn touches_allowed_context(address: Address, key: U256) -> bool { return false; } - // Only chain_id is allowed to be touched. + // Only `chain_id` is allowed to be touched. key == U256::from(0u32) } diff --git a/core/lib/multivm/src/tracers/validator/types.rs b/core/lib/multivm/src/tracers/validator/types.rs index eb80e6f1650..de6217c2988 100644 --- a/core/lib/multivm/src/tracers/validator/types.rs +++ b/core/lib/multivm/src/tracers/validator/types.rs @@ -1,8 +1,8 @@ +use std::{collections::HashSet, fmt::Display}; + +use zksync_types::{vm_trace::ViolatedValidationRule, Address, H256, U256}; + use crate::interface::Halt; -use std::collections::HashSet; -use std::fmt::Display; -use zksync_types::vm_trace::ViolatedValidationRule; -use zksync_types::{Address, H256, U256}; #[derive(Debug, Clone, Eq, PartialEq, Copy)] #[allow(clippy::enum_variant_names)] diff --git a/core/lib/multivm/src/tracers/validator/vm_boojum_integration/mod.rs b/core/lib/multivm/src/tracers/validator/vm_boojum_integration/mod.rs new file mode 100644 index 00000000000..2c9a708abca --- /dev/null +++ b/core/lib/multivm/src/tracers/validator/vm_boojum_integration/mod.rs @@ -0,0 +1,201 @@ +use zk_evm_1_4_0::{ + tracing::{BeforeExecutionData, VmLocalStateData}, + zkevm_opcode_defs::{ContextOpcode, FarCallABI, LogOpcode, Opcode}, +}; +use zksync_state::{StoragePtr, WriteStorage}; +use zksync_system_constants::KECCAK256_PRECOMPILE_ADDRESS; +use zksync_types::{ + get_code_key, vm_trace::ViolatedValidationRule, AccountTreeId, StorageKey, H256, +}; +use zksync_utils::{h256_to_account_address, u256_to_account_address, u256_to_h256}; + +use crate::{ + interface::{ + traits::tracers::dyn_tracers::vm_1_4_0::DynTracer, + types::tracer::{TracerExecutionStatus, TracerExecutionStopReason}, + Halt, + }, + tracers::validator::{ + types::{NewTrustedValidationItems, ValidationTracerMode}, + ValidationRoundResult, ValidationTracer, + }, + vm_boojum_integration::{ + tracers::utils::{ + computational_gas_price, get_calldata_page_via_abi, print_debug_if_needed, VmHook, + }, + BootloaderState, SimpleMemory, VmTracer, ZkSyncVmState, + }, + HistoryMode, +}; + +impl ValidationTracer { + fn 
check_user_restrictions_vm_boojum_integration( + &mut self, + state: VmLocalStateData<'_>, + data: BeforeExecutionData, + memory: &SimpleMemory, + storage: StoragePtr, + ) -> ValidationRoundResult { + if self.computational_gas_used > self.computational_gas_limit { + return Err(ViolatedValidationRule::TookTooManyComputationalGas( + self.computational_gas_limit, + )); + } + + let opcode_variant = data.opcode.variant; + match opcode_variant.opcode { + Opcode::FarCall(_) => { + let packed_abi = data.src0_value.value; + let call_destination_value = data.src1_value.value; + + let called_address = u256_to_account_address(&call_destination_value); + let far_call_abi = FarCallABI::from_u256(packed_abi); + + if called_address == KECCAK256_PRECOMPILE_ADDRESS + && far_call_abi.memory_quasi_fat_pointer.length == 64 + { + let calldata_page = get_calldata_page_via_abi( + &far_call_abi, + state.vm_local_state.callstack.current.base_memory_page, + ); + let calldata = memory.read_unaligned_bytes( + calldata_page as usize, + far_call_abi.memory_quasi_fat_pointer.start as usize, + 64, + ); + + let slot_to_add = + self.slot_to_add_from_keccak_call(&calldata, self.user_address); + + if let Some(slot) = slot_to_add { + return Ok(NewTrustedValidationItems { + new_allowed_slots: vec![slot], + ..Default::default() + }); + } + } else if called_address != self.user_address { + let code_key = get_code_key(&called_address); + let code = storage.borrow_mut().read_value(&code_key); + + if code == H256::zero() { + // The users are not allowed to call contracts with no code + return Err(ViolatedValidationRule::CalledContractWithNoCode( + called_address, + )); + } + } + } + Opcode::Context(context) => { + match context { + ContextOpcode::Meta => { + return Err(ViolatedValidationRule::TouchedUnallowedContext); + } + ContextOpcode::ErgsLeft => { + // TODO (SMA-1168): implement the correct restrictions for the gas left opcode. + } + _ => {} + } + } + Opcode::Log(LogOpcode::StorageRead) => { + let key = data.src0_value.value; + let this_address = state.vm_local_state.callstack.current.this_address; + let msg_sender = state.vm_local_state.callstack.current.msg_sender; + + if !self.is_allowed_storage_read(storage.clone(), this_address, key, msg_sender) { + return Err(ViolatedValidationRule::TouchedUnallowedStorageSlots( + this_address, + key, + )); + } + + if self.trusted_address_slots.contains(&(this_address, key)) { + let storage_key = + StorageKey::new(AccountTreeId::new(this_address), u256_to_h256(key)); + + let value = storage.borrow_mut().read_value(&storage_key); + + return Ok(NewTrustedValidationItems { + new_trusted_addresses: vec![h256_to_account_address(&value)], + ..Default::default() + }); + } + } + _ => {} + } + + Ok(Default::default()) + } +} + +impl DynTracer> + for ValidationTracer +{ + fn before_execution( + &mut self, + state: VmLocalStateData<'_>, + data: BeforeExecutionData, + memory: &SimpleMemory, + storage: StoragePtr, + ) { + // For now, we support only validations for users. 
+ if let ValidationTracerMode::UserTxValidation = self.validation_mode { + self.computational_gas_used = self + .computational_gas_used + .saturating_add(computational_gas_price(state, &data)); + + let validation_round_result = + self.check_user_restrictions_vm_boojum_integration(state, data, memory, storage); + self.process_validation_round_result(validation_round_result); + } + + let hook = VmHook::from_opcode_memory(&state, &data); + print_debug_if_needed(&hook, &state, memory); + + let current_mode = self.validation_mode; + match (current_mode, hook) { + (ValidationTracerMode::NoValidation, VmHook::AccountValidationEntered) => { + // Account validation can be entered when there is no prior validation (i.e. "nested" validations are not allowed) + self.validation_mode = ValidationTracerMode::UserTxValidation; + } + (ValidationTracerMode::NoValidation, VmHook::PaymasterValidationEntered) => { + // Paymaster validation can be entered when there is no prior validation (i.e. "nested" validations are not allowed) + self.validation_mode = ValidationTracerMode::PaymasterTxValidation; + } + (_, VmHook::AccountValidationEntered | VmHook::PaymasterValidationEntered) => { + panic!( + "Unallowed transition inside the validation tracer. Mode: {:#?}, hook: {:#?}", + self.validation_mode, hook + ); + } + (_, VmHook::NoValidationEntered) => { + // Validation can be always turned off + self.validation_mode = ValidationTracerMode::NoValidation; + } + (_, VmHook::ValidationStepEndeded) => { + // The validation step has ended. + self.should_stop_execution = true; + } + (_, _) => { + // The hook is not relevant to the validation tracer. Ignore. + } + } + } +} + +impl VmTracer for ValidationTracer { + fn finish_cycle( + &mut self, + _state: &mut ZkSyncVmState, + _bootloader_state: &mut BootloaderState, + ) -> TracerExecutionStatus { + if self.should_stop_execution { + return TracerExecutionStatus::Stop(TracerExecutionStopReason::Finish); + } + if let Some(result) = self.result.get() { + return TracerExecutionStatus::Stop(TracerExecutionStopReason::Abort( + Halt::TracerCustom(format!("Validation error: {:#?}", result)), + )); + } + TracerExecutionStatus::Continue + } +} diff --git a/core/lib/multivm/src/tracers/validator/vm_latest/mod.rs b/core/lib/multivm/src/tracers/validator/vm_latest/mod.rs index 4d5ff43ec47..095f1a20b38 100644 --- a/core/lib/multivm/src/tracers/validator/vm_latest/mod.rs +++ b/core/lib/multivm/src/tracers/validator/vm_latest/mod.rs @@ -1,39 +1,39 @@ -use zk_evm_1_4_0::{ +use zk_evm_1_4_1::{ tracing::{BeforeExecutionData, VmLocalStateData}, zkevm_opcode_defs::{ContextOpcode, FarCallABI, LogOpcode, Opcode}, }; - use zksync_state::{StoragePtr, WriteStorage}; use zksync_system_constants::KECCAK256_PRECOMPILE_ADDRESS; - -use crate::HistoryMode; use zksync_types::{ get_code_key, vm_trace::ViolatedValidationRule, AccountTreeId, StorageKey, H256, }; use zksync_utils::{h256_to_account_address, u256_to_account_address, u256_to_h256}; -use crate::vm_latest::tracers::utils::{ - computational_gas_price, get_calldata_page_via_abi, print_debug_if_needed, VmHook, -}; - -use crate::interface::{ - traits::tracers::dyn_tracers::vm_1_4_0::DynTracer, - types::tracer::{TracerExecutionStatus, TracerExecutionStopReason}, - Halt, -}; -use crate::tracers::validator::{ - types::{NewTrustedValidationItems, ValidationTracerMode}, - {ValidationRoundResult, ValidationTracer}, +use crate::{ + interface::{ + traits::tracers::dyn_tracers::vm_1_4_1::DynTracer, + types::tracer::{TracerExecutionStatus, 
TracerExecutionStopReason}, + Halt, + }, + tracers::validator::{ + types::{NewTrustedValidationItems, ValidationTracerMode}, + ValidationRoundResult, ValidationTracer, + }, + vm_latest::{ + tracers::utils::{ + computational_gas_price, get_calldata_page_via_abi, print_debug_if_needed, VmHook, + }, + BootloaderState, SimpleMemory, VmTracer, ZkSyncVmState, + }, + HistoryMode, }; -use crate::vm_latest::{BootloaderState, SimpleMemory, VmTracer, ZkSyncVmState}; - impl ValidationTracer { fn check_user_restrictions_vm_latest( &mut self, state: VmLocalStateData<'_>, data: BeforeExecutionData, - memory: &SimpleMemory, + memory: &SimpleMemory, storage: StoragePtr, ) -> ValidationRoundResult { if self.computational_gas_used > self.computational_gas_limit { @@ -127,14 +127,14 @@ impl ValidationTracer { } } -impl DynTracer> +impl DynTracer> for ValidationTracer { fn before_execution( &mut self, state: VmLocalStateData<'_>, data: BeforeExecutionData, - memory: &SimpleMemory, + memory: &SimpleMemory, storage: StoragePtr, ) { // For now, we support only validations for users. @@ -182,10 +182,10 @@ impl DynTracer VmTracer for ValidationTracer { +impl VmTracer for ValidationTracer { fn finish_cycle( &mut self, - _state: &mut ZkSyncVmState, + _state: &mut ZkSyncVmState, _bootloader_state: &mut BootloaderState, ) -> TracerExecutionStatus { if self.should_stop_execution { diff --git a/core/lib/multivm/src/tracers/validator/vm_refunds_enhancement/mod.rs b/core/lib/multivm/src/tracers/validator/vm_refunds_enhancement/mod.rs index ec4e95e5630..ab3a16c4b90 100644 --- a/core/lib/multivm/src/tracers/validator/vm_refunds_enhancement/mod.rs +++ b/core/lib/multivm/src/tracers/validator/vm_refunds_enhancement/mod.rs @@ -2,31 +2,30 @@ use zk_evm_1_3_3::{ tracing::{BeforeExecutionData, VmLocalStateData}, zkevm_opcode_defs::{ContextOpcode, FarCallABI, LogOpcode, Opcode}, }; - use zksync_state::{StoragePtr, WriteStorage}; use zksync_system_constants::KECCAK256_PRECOMPILE_ADDRESS; - -use crate::HistoryMode; use zksync_types::{ get_code_key, vm_trace::ViolatedValidationRule, AccountTreeId, StorageKey, H256, }; use zksync_utils::{h256_to_account_address, u256_to_account_address, u256_to_h256}; -use crate::interface::{ - traits::tracers::dyn_tracers::vm_1_3_3::DynTracer, - types::tracer::{TracerExecutionStatus, TracerExecutionStopReason}, - Halt, -}; -use crate::tracers::validator::{ - types::{NewTrustedValidationItems, ValidationTracerMode}, - {ValidationRoundResult, ValidationTracer}, -}; - -use crate::vm_refunds_enhancement::{ - tracers::utils::{ - computational_gas_price, get_calldata_page_via_abi, print_debug_if_needed, VmHook, +use crate::{ + interface::{ + traits::tracers::dyn_tracers::vm_1_3_3::DynTracer, + types::tracer::{TracerExecutionStatus, TracerExecutionStopReason}, + Halt, + }, + tracers::validator::{ + types::{NewTrustedValidationItems, ValidationTracerMode}, + ValidationRoundResult, ValidationTracer, + }, + vm_refunds_enhancement::{ + tracers::utils::{ + computational_gas_price, get_calldata_page_via_abi, print_debug_if_needed, VmHook, + }, + BootloaderState, SimpleMemory, VmTracer, ZkSyncVmState, }, - BootloaderState, SimpleMemory, VmTracer, ZkSyncVmState, + HistoryMode, }; impl ValidationTracer { diff --git a/core/lib/multivm/src/tracers/validator/vm_virtual_blocks/mod.rs b/core/lib/multivm/src/tracers/validator/vm_virtual_blocks/mod.rs index d2155f4ecf8..6fd2955f60b 100644 --- a/core/lib/multivm/src/tracers/validator/vm_virtual_blocks/mod.rs +++ 
b/core/lib/multivm/src/tracers/validator/vm_virtual_blocks/mod.rs @@ -2,25 +2,27 @@ use zk_evm_1_3_3::{ tracing::{BeforeExecutionData, VmLocalStateData}, zkevm_opcode_defs::{ContextOpcode, FarCallABI, LogOpcode, Opcode}, }; - use zksync_state::{StoragePtr, WriteStorage}; use zksync_system_constants::KECCAK256_PRECOMPILE_ADDRESS; - -use crate::HistoryMode; -use zksync_types::vm_trace::ViolatedValidationRule; -use zksync_types::{get_code_key, AccountTreeId, StorageKey, H256}; +use zksync_types::{ + get_code_key, vm_trace::ViolatedValidationRule, AccountTreeId, StorageKey, H256, +}; use zksync_utils::{h256_to_account_address, u256_to_account_address, u256_to_h256}; -use crate::vm_virtual_blocks::tracers::utils::{ - computational_gas_price, get_calldata_page_via_abi, print_debug_if_needed, VmHook, +use crate::{ + interface::{dyn_tracers::vm_1_3_3::DynTracer, VmExecutionResultAndLogs}, + tracers::validator::{ + types::{NewTrustedValidationItems, ValidationTracerMode}, + ValidationRoundResult, ValidationTracer, + }, + vm_virtual_blocks::{ + tracers::utils::{ + computational_gas_price, get_calldata_page_via_abi, print_debug_if_needed, VmHook, + }, + ExecutionEndTracer, ExecutionProcessing, SimpleMemory, VmTracer, + }, + HistoryMode, }; -use crate::vm_virtual_blocks::SimpleMemory; -use crate::vm_virtual_blocks::{ExecutionEndTracer, ExecutionProcessing, VmTracer}; - -use crate::interface::dyn_tracers::vm_1_3_3::DynTracer; -use crate::interface::VmExecutionResultAndLogs; -use crate::tracers::validator::types::{NewTrustedValidationItems, ValidationTracerMode}; -use crate::tracers::validator::{ValidationRoundResult, ValidationTracer}; impl ValidationTracer { fn check_user_restrictions_vm_virtual_blocks( diff --git a/core/lib/multivm/src/utils.rs b/core/lib/multivm/src/utils.rs new file mode 100644 index 00000000000..459739c45ba --- /dev/null +++ b/core/lib/multivm/src/utils.rs @@ -0,0 +1,249 @@ +use zksync_types::{fee_model::BatchFeeInput, VmVersion, U256}; + +use crate::vm_latest::L1BatchEnv; + +/// Calculates the base fee and gas per pubdata for the given L1 gas price. 
+pub fn derive_base_fee_and_gas_per_pubdata( + batch_fee_input: BatchFeeInput, + vm_version: VmVersion, +) -> (u64, u64) { + match vm_version { + VmVersion::M5WithRefunds | VmVersion::M5WithoutRefunds => { + crate::vm_m5::vm_with_bootloader::derive_base_fee_and_gas_per_pubdata( + batch_fee_input.into_l1_pegged(), + ) + } + VmVersion::M6Initial | VmVersion::M6BugWithCompressionFixed => { + crate::vm_m6::vm_with_bootloader::derive_base_fee_and_gas_per_pubdata( + batch_fee_input.into_l1_pegged(), + ) + } + VmVersion::Vm1_3_2 => { + crate::vm_1_3_2::vm_with_bootloader::derive_base_fee_and_gas_per_pubdata( + batch_fee_input.into_l1_pegged(), + ) + } + VmVersion::VmVirtualBlocks => { + crate::vm_virtual_blocks::utils::fee::derive_base_fee_and_gas_per_pubdata( + batch_fee_input.into_l1_pegged(), + ) + } + VmVersion::VmVirtualBlocksRefundsEnhancement => { + crate::vm_refunds_enhancement::utils::fee::derive_base_fee_and_gas_per_pubdata( + batch_fee_input.into_l1_pegged(), + ) + } + VmVersion::VmBoojumIntegration => { + crate::vm_boojum_integration::utils::fee::derive_base_fee_and_gas_per_pubdata( + batch_fee_input.into_l1_pegged(), + ) + } + VmVersion::Vm1_4_1 => crate::vm_latest::utils::fee::derive_base_fee_and_gas_per_pubdata( + batch_fee_input.into_pubdata_independent(), + ), + } +} + +pub fn get_batch_base_fee(l1_batch_env: &L1BatchEnv, vm_version: VmVersion) -> u64 { + match vm_version { + VmVersion::M5WithRefunds | VmVersion::M5WithoutRefunds => { + crate::vm_m5::vm_with_bootloader::get_batch_base_fee(l1_batch_env) + } + VmVersion::M6Initial | VmVersion::M6BugWithCompressionFixed => { + crate::vm_m6::vm_with_bootloader::get_batch_base_fee(l1_batch_env) + } + VmVersion::Vm1_3_2 => crate::vm_1_3_2::vm_with_bootloader::get_batch_base_fee(l1_batch_env), + VmVersion::VmVirtualBlocks => { + crate::vm_virtual_blocks::utils::fee::get_batch_base_fee(l1_batch_env) + } + VmVersion::VmVirtualBlocksRefundsEnhancement => { + crate::vm_refunds_enhancement::utils::fee::get_batch_base_fee(l1_batch_env) + } + VmVersion::VmBoojumIntegration => { + crate::vm_boojum_integration::utils::fee::get_batch_base_fee(l1_batch_env) + } + VmVersion::Vm1_4_1 => crate::vm_latest::utils::fee::get_batch_base_fee(l1_batch_env), + } +} + +/// Changes the batch fee input so that the expected gas per pubdata is smaller than or equal to the `tx_gas_per_pubdata_limit`. +pub fn adjust_pubdata_price_for_tx( + batch_fee_input: BatchFeeInput, + tx_gas_per_pubdata_limit: U256, + vm_version: VmVersion, +) -> BatchFeeInput { + if U256::from(derive_base_fee_and_gas_per_pubdata(batch_fee_input, vm_version).1) + <= tx_gas_per_pubdata_limit + { + return batch_fee_input; + } + + // The latest VM supports adjusting the pubdata price for all the types of the fee models.
+ crate::vm_latest::utils::fee::adjust_pubdata_price_for_tx( + batch_fee_input, + tx_gas_per_pubdata_limit, + ) +} + +pub fn derive_overhead( + gas_limit: u32, + gas_price_per_pubdata: u32, + encoded_len: usize, + tx_type: u8, + vm_version: VmVersion, +) -> u32 { + match vm_version { + VmVersion::M5WithRefunds | VmVersion::M5WithoutRefunds => { + crate::vm_m5::transaction_data::derive_overhead( + gas_limit, + gas_price_per_pubdata, + encoded_len, + ) + } + VmVersion::M6Initial | VmVersion::M6BugWithCompressionFixed => { + crate::vm_m6::transaction_data::derive_overhead( + gas_limit, + gas_price_per_pubdata, + encoded_len, + crate::vm_m6::transaction_data::OverheadCoefficients::from_tx_type(tx_type), + ) + } + VmVersion::Vm1_3_2 => crate::vm_1_3_2::transaction_data::derive_overhead( + gas_limit, + gas_price_per_pubdata, + encoded_len, + crate::vm_1_3_2::transaction_data::OverheadCoefficients::from_tx_type(tx_type), + ), + VmVersion::VmVirtualBlocks => crate::vm_virtual_blocks::utils::overhead::derive_overhead( + gas_limit, + gas_price_per_pubdata, + encoded_len, + crate::vm_virtual_blocks::utils::overhead::OverheadCoefficients::from_tx_type(tx_type), + ), + VmVersion::VmVirtualBlocksRefundsEnhancement => { + crate::vm_refunds_enhancement::utils::overhead::derive_overhead( + gas_limit, + gas_price_per_pubdata, + encoded_len, + crate::vm_refunds_enhancement::utils::overhead::OverheadCoefficients::from_tx_type( + tx_type, + ), + ) + } + VmVersion::VmBoojumIntegration => { + crate::vm_boojum_integration::utils::overhead::derive_overhead( + gas_limit, + gas_price_per_pubdata, + encoded_len, + crate::vm_boojum_integration::utils::overhead::OverheadCoefficients::from_tx_type( + tx_type, + ), + ) + } + VmVersion::Vm1_4_1 => crate::vm_latest::utils::overhead::derive_overhead(encoded_len), + } +} + +pub fn get_bootloader_encoding_space(version: VmVersion) -> u32 { + match version { + VmVersion::M5WithRefunds | VmVersion::M5WithoutRefunds => { + crate::vm_m5::vm_with_bootloader::BOOTLOADER_TX_ENCODING_SPACE + } + VmVersion::M6Initial | VmVersion::M6BugWithCompressionFixed => { + crate::vm_m6::vm_with_bootloader::BOOTLOADER_TX_ENCODING_SPACE + } + VmVersion::Vm1_3_2 => crate::vm_1_3_2::vm_with_bootloader::BOOTLOADER_TX_ENCODING_SPACE, + VmVersion::VmVirtualBlocks => { + crate::vm_virtual_blocks::constants::BOOTLOADER_TX_ENCODING_SPACE + } + VmVersion::VmVirtualBlocksRefundsEnhancement => { + crate::vm_refunds_enhancement::constants::BOOTLOADER_TX_ENCODING_SPACE + } + VmVersion::VmBoojumIntegration => { + crate::vm_boojum_integration::constants::BOOTLOADER_TX_ENCODING_SPACE + } + VmVersion::Vm1_4_1 => crate::vm_latest::constants::BOOTLOADER_TX_ENCODING_SPACE, + } +} + +pub fn get_bootloader_max_txs_in_batch(version: VmVersion) -> usize { + match version { + VmVersion::M5WithRefunds | VmVersion::M5WithoutRefunds => { + crate::vm_m5::vm_with_bootloader::MAX_TXS_IN_BLOCK + } + VmVersion::M6Initial | VmVersion::M6BugWithCompressionFixed => { + crate::vm_m6::vm_with_bootloader::MAX_TXS_IN_BLOCK + } + VmVersion::Vm1_3_2 => crate::vm_1_3_2::vm_with_bootloader::MAX_TXS_IN_BLOCK, + VmVersion::VmVirtualBlocks => crate::vm_virtual_blocks::constants::MAX_TXS_IN_BLOCK, + VmVersion::VmVirtualBlocksRefundsEnhancement => { + crate::vm_refunds_enhancement::constants::MAX_TXS_IN_BLOCK + } + VmVersion::VmBoojumIntegration => crate::vm_boojum_integration::constants::MAX_TXS_IN_BLOCK, + VmVersion::Vm1_4_1 => crate::vm_latest::constants::MAX_TXS_IN_BATCH, + } +} + +pub fn get_max_gas_per_pubdata_byte(version: VmVersion) -> 
u64 { + match version { + VmVersion::M5WithRefunds | VmVersion::M5WithoutRefunds => { + crate::vm_m5::vm_with_bootloader::MAX_GAS_PER_PUBDATA_BYTE + } + VmVersion::M6Initial | VmVersion::M6BugWithCompressionFixed => { + crate::vm_m6::vm_with_bootloader::MAX_GAS_PER_PUBDATA_BYTE + } + VmVersion::Vm1_3_2 => crate::vm_1_3_2::vm_with_bootloader::MAX_GAS_PER_PUBDATA_BYTE, + VmVersion::VmVirtualBlocks => crate::vm_virtual_blocks::constants::MAX_GAS_PER_PUBDATA_BYTE, + VmVersion::VmVirtualBlocksRefundsEnhancement => { + crate::vm_refunds_enhancement::constants::MAX_GAS_PER_PUBDATA_BYTE + } + VmVersion::VmBoojumIntegration => { + crate::vm_boojum_integration::constants::MAX_GAS_PER_PUBDATA_BYTE + } + VmVersion::Vm1_4_1 => crate::vm_latest::constants::MAX_GAS_PER_PUBDATA_BYTE, + } +} + +pub fn get_used_bootloader_memory_bytes(version: VmVersion) -> usize { + match version { + VmVersion::M5WithRefunds | VmVersion::M5WithoutRefunds => { + crate::vm_m5::vm_with_bootloader::USED_BOOTLOADER_MEMORY_BYTES + } + VmVersion::M6Initial | VmVersion::M6BugWithCompressionFixed => { + crate::vm_m6::vm_with_bootloader::USED_BOOTLOADER_MEMORY_BYTES + } + VmVersion::Vm1_3_2 => crate::vm_1_3_2::vm_with_bootloader::USED_BOOTLOADER_MEMORY_BYTES, + VmVersion::VmVirtualBlocks => { + crate::vm_virtual_blocks::constants::USED_BOOTLOADER_MEMORY_BYTES + } + VmVersion::VmVirtualBlocksRefundsEnhancement => { + crate::vm_refunds_enhancement::constants::USED_BOOTLOADER_MEMORY_BYTES + } + VmVersion::VmBoojumIntegration => { + crate::vm_boojum_integration::constants::USED_BOOTLOADER_MEMORY_BYTES + } + VmVersion::Vm1_4_1 => crate::vm_latest::constants::USED_BOOTLOADER_MEMORY_BYTES, + } +} + +pub fn get_used_bootloader_memory_words(version: VmVersion) -> usize { + match version { + VmVersion::M5WithRefunds | VmVersion::M5WithoutRefunds => { + crate::vm_m5::vm_with_bootloader::USED_BOOTLOADER_MEMORY_WORDS + } + VmVersion::M6Initial | VmVersion::M6BugWithCompressionFixed => { + crate::vm_m6::vm_with_bootloader::USED_BOOTLOADER_MEMORY_WORDS + } + VmVersion::Vm1_3_2 => crate::vm_1_3_2::vm_with_bootloader::USED_BOOTLOADER_MEMORY_WORDS, + VmVersion::VmVirtualBlocks => { + crate::vm_virtual_blocks::constants::USED_BOOTLOADER_MEMORY_WORDS + } + VmVersion::VmVirtualBlocksRefundsEnhancement => { + crate::vm_refunds_enhancement::constants::USED_BOOTLOADER_MEMORY_WORDS + } + VmVersion::VmBoojumIntegration => { + crate::vm_boojum_integration::constants::USED_BOOTLOADER_MEMORY_WORDS + } + VmVersion::Vm1_4_1 => crate::vm_latest::constants::USED_BOOTLOADER_MEMORY_WORDS, + } +} diff --git a/core/lib/multivm/src/versions/README.md b/core/lib/multivm/src/versions/README.md index a7a4a18c516..01c57509197 100644 --- a/core/lib/multivm/src/versions/README.md +++ b/core/lib/multivm/src/versions/README.md @@ -4,3 +4,14 @@ This folder contains the old versions of the VM we have used in the past. The `m switch the version we use to be able to sync from the genesis. This is a temporary measure until a "native" solution is implemented (i.e., the `vm` crate would itself know the changes between versions, and thus we will have only the functional diff between versions, not several fully-fledged VMs). 
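The new `core/lib/multivm/src/utils.rs` added earlier in this patch is the dispatch layer that hides these coexisting versions from callers. A usage sketch; the crate path and the way a `BatchFeeInput` is obtained are assumptions, while the function names and signatures come from the diff above:

```rust
use multivm::utils::{derive_base_fee_and_gas_per_pubdata, get_bootloader_max_txs_in_batch};
use zksync_types::{fee_model::BatchFeeInput, VmVersion};

// One call site, many VM versions: the match inside `utils.rs` picks the
// per-version formula, so callers never import `vm_m5`, `vm_1_3_2`, etc.
fn fee_params_for_latest_vm(fee_input: BatchFeeInput) -> (u64, u64, usize) {
    let (base_fee, gas_per_pubdata) =
        derive_base_fee_and_gas_per_pubdata(fee_input, VmVersion::Vm1_4_1);
    // Constants are versioned the same way (note the MAX_TXS_IN_BLOCK ->
    // MAX_TXS_IN_BATCH rename for the latest VM).
    let max_txs = get_bootloader_max_txs_in_batch(VmVersion::Vm1_4_1);
    (base_fee, gas_per_pubdata, max_txs)
}
```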
+ +## Versions + +| Name | Protocol versions | Description | +| ---------------------- | ----------------- | --------------------------------------------------------------------- | +| vm_m5 | 0 - 3 | Release for the testnet launch | +| vm_m6 | 4 - 6 | Release for the mainnet launch | +| vm_1_3_2 | 7 - 12 | Release 1.3.2 of the crypto circuits | +| vm_virtual_blocks | 13 - 15 | Adding virtual blocks to help with block number / timestamp migration | +| vm_refunds_enhancement | 16 - 17 | Fixing issue related to refunds in VM | +| vm_boojum_integration | 18 - | New Proving system (boojum), vm version 1.4.0 | diff --git a/core/lib/multivm/src/versions/mod.rs b/core/lib/multivm/src/versions/mod.rs index 71379f6df5c..0fc9111aa9a 100644 --- a/core/lib/multivm/src/versions/mod.rs +++ b/core/lib/multivm/src/versions/mod.rs @@ -1,4 +1,5 @@ pub mod vm_1_3_2; +pub mod vm_boojum_integration; pub mod vm_latest; pub mod vm_m5; pub mod vm_m6; diff --git a/core/lib/multivm/src/versions/vm_1_3_2/bootloader_state.rs b/core/lib/multivm/src/versions/vm_1_3_2/bootloader_state.rs index a5584662323..f0324137bdc 100644 --- a/core/lib/multivm/src/versions/vm_1_3_2/bootloader_state.rs +++ b/core/lib/multivm/src/versions/vm_1_3_2/bootloader_state.rs @@ -5,7 +5,7 @@ use crate::vm_1_3_2::vm_with_bootloader::TX_DESCRIPTION_OFFSET; /// Required to process transactions one by one (since we intercept the VM execution to execute /// transactions and add new ones to the memory on the fly). /// Think about it like a two-pointer scheme: one pointer (`free_tx_index`) tracks the end of the -/// initialized memory; while another (`tx_to_execute`) tracks our progess in this initialized memory. +/// initialized memory; while another (`tx_to_execute`) tracks our progress in this initialized memory. /// This is required since it's possible to push several transactions to the bootloader memory and then /// execute it one by one. /// diff --git a/core/lib/multivm/src/versions/vm_1_3_2/errors/tx_revert_reason.rs b/core/lib/multivm/src/versions/vm_1_3_2/errors/tx_revert_reason.rs index 4775d8339f7..3ddaa068461 100644 --- a/core/lib/multivm/src/versions/vm_1_3_2/errors/tx_revert_reason.rs +++ b/core/lib/multivm/src/versions/vm_1_3_2/errors/tx_revert_reason.rs @@ -7,11 +7,11 @@ use super::{BootloaderErrorCode, VmRevertReason}; // Reasons why the transaction executed inside the bootloader could fail. #[derive(Debug, Clone, PartialEq)] pub enum TxRevertReason { - // Can only be returned in EthCall execution mode (=ExecuteOnly) + // Can only be returned in EthCall execution mode `(=ExecuteOnly)` EthCall(VmRevertReason), // Returned when the execution of an L2 transaction has failed TxReverted(VmRevertReason), - // Can only be returned in VerifyAndExecute + // Can only be returned in `VerifyAndExecute` ValidationFailed(VmRevertReason), PaymasterValidationFailed(VmRevertReason), PrePaymasterPreparationFailed(VmRevertReason), @@ -20,7 +20,7 @@ pub enum TxRevertReason { FailedToChargeFee(VmRevertReason), // Emitted when trying to call a transaction from an account that has not // been deployed as an account (i.e. the `from` is just a contract). - // Can only be returned in VerifyAndExecute + // Can only be returned in `VerifyAndExecute` FromIsNotAnAccount, // Currently cannot be returned. Should be removed when refactoring errors. 
InnerTxError, @@ -101,7 +101,7 @@ impl TxRevertReason { BootloaderErrorCode::UnacceptablePubdataPrice => { Self::UnexpectedVMBehavior("UnacceptablePubdataPrice".to_owned()) } - // This is different from AccountTxValidationFailed error in a way that it means that + // This is different from `AccountTxValidationFailed` error in a way that it means that // the error was not produced by the account itself, but for some other unknown reason (most likely not enough gas) BootloaderErrorCode::TxValidationError => Self::ValidationFailed(revert_reason), // Note, that `InnerTxError` is derived only after the actual tx execution, so diff --git a/core/lib/multivm/src/versions/vm_1_3_2/errors/vm_revert_reason.rs b/core/lib/multivm/src/versions/vm_1_3_2/errors/vm_revert_reason.rs index e1d50f72448..ed17ffc4c39 100644 --- a/core/lib/multivm/src/versions/vm_1_3_2/errors/vm_revert_reason.rs +++ b/core/lib/multivm/src/versions/vm_1_3_2/errors/vm_revert_reason.rs @@ -1,5 +1,7 @@ -use std::convert::TryFrom; -use std::fmt::{Debug, Display}; +use std::{ + convert::TryFrom, + fmt::{Debug, Display}, +}; use zksync_types::U256; @@ -15,7 +17,7 @@ pub enum VmRevertReasonParsingError { IncorrectStringLength(Vec), } -/// Rich Revert Reasons https://github.com/0xProject/ZEIPs/issues/32 +/// Rich Revert Reasons `https://github.com/0xProject/ZEIPs/issues/32` #[derive(Debug, Clone, PartialEq)] pub enum VmRevertReason { General { @@ -71,7 +73,7 @@ impl VmRevertReason { pub fn to_user_friendly_string(&self) -> String { match self { - // In case of `Unknown` reason we suppress it to prevent verbose Error function_selector = 0x{} + // In case of `Unknown` reason we suppress it to prevent verbose `Error function_selector = 0x{}` // message shown to user. VmRevertReason::Unknown { .. } => "".to_owned(), _ => self.to_string(), diff --git a/core/lib/multivm/src/versions/vm_1_3_2/event_sink.rs b/core/lib/multivm/src/versions/vm_1_3_2/event_sink.rs index db6c5d11aee..b9aea7e09af 100644 --- a/core/lib/multivm/src/versions/vm_1_3_2/event_sink.rs +++ b/core/lib/multivm/src/versions/vm_1_3_2/event_sink.rs @@ -1,8 +1,5 @@ -use crate::vm_1_3_2::{ - history_recorder::{AppDataFrameManagerWithHistory, HistoryEnabled, HistoryMode}, - oracles::OracleWithHistory, -}; use std::collections::HashMap; + use zk_evm_1_3_3::{ abstractions::EventSink, aux_structures::{LogQuery, Timestamp}, @@ -12,6 +9,11 @@ use zk_evm_1_3_3::{ }, }; +use crate::vm_1_3_2::{ + history_recorder::{AppDataFrameManagerWithHistory, HistoryEnabled, HistoryMode}, + oracles::OracleWithHistory, +}; + #[derive(Debug, Clone, PartialEq, Default)] pub struct InMemoryEventSink { frames_stack: AppDataFrameManagerWithHistory, H>, @@ -48,7 +50,7 @@ impl InMemoryEventSink { pub fn log_queries_after_timestamp(&self, from_timestamp: Timestamp) -> &[Box] { let events = self.frames_stack.forward().current_frame(); - // Select all of the last elements where e.timestamp >= from_timestamp. + // Select all of the last elements where `e.timestamp >= from_timestamp`. // Note, that using binary search here is dangerous, because the logs are not sorted by timestamp. 
events .rsplit(|e| e.timestamp < from_timestamp) diff --git a/core/lib/multivm/src/versions/vm_1_3_2/history_recorder.rs b/core/lib/multivm/src/versions/vm_1_3_2/history_recorder.rs index 3c83b68e0a3..bb3c12580c4 100644 --- a/core/lib/multivm/src/versions/vm_1_3_2/history_recorder.rs +++ b/core/lib/multivm/src/versions/vm_1_3_2/history_recorder.rs @@ -5,7 +5,6 @@ use zk_evm_1_3_3::{ vm_state::PrimitiveValue, zkevm_opcode_defs::{self}, }; - use zksync_state::{StoragePtr, WriteStorage}; use zksync_types::{StorageKey, U256}; use zksync_utils::{h256_to_u256, u256_to_h256}; @@ -13,14 +12,14 @@ use zksync_utils::{h256_to_u256, u256_to_h256}; pub type MemoryWithHistory = HistoryRecorder; pub type IntFrameManagerWithHistory = HistoryRecorder, H>; -// Within the same cycle, timestamps in range timestamp..timestamp+TIME_DELTA_PER_CYCLE-1 +// Within the same cycle, timestamps in range `timestamp..timestamp+TIME_DELTA_PER_CYCLE-1` // can be used. This can sometimes violate monotonicity of the timestamp within the // same cycle, so it should be normalized. #[inline] fn normalize_timestamp(timestamp: Timestamp) -> Timestamp { let timestamp = timestamp.0; - // Making sure it is divisible by TIME_DELTA_PER_CYCLE + // Making sure it is divisible by `TIME_DELTA_PER_CYCLE` Timestamp(timestamp - timestamp % zkevm_opcode_defs::TIME_DELTA_PER_CYCLE) } @@ -765,12 +764,13 @@ impl HistoryRecorder, H> { #[cfg(test)] mod tests { + use zk_evm_1_3_3::{aux_structures::Timestamp, vm_state::PrimitiveValue}; + use zksync_types::U256; + use crate::vm_1_3_2::{ history_recorder::{HistoryRecorder, MemoryWrapper}, HistoryDisabled, }; - use zk_evm_1_3_3::{aux_structures::Timestamp, vm_state::PrimitiveValue}; - use zksync_types::U256; #[test] fn memory_equality() { diff --git a/core/lib/multivm/src/versions/vm_1_3_2/memory.rs b/core/lib/multivm/src/versions/vm_1_3_2/memory.rs index b269ba89b3c..c9f97c0c225 100644 --- a/core/lib/multivm/src/versions/vm_1_3_2/memory.rs +++ b/core/lib/multivm/src/versions/vm_1_3_2/memory.rs @@ -1,15 +1,19 @@ -use zk_evm_1_3_3::abstractions::{Memory, MemoryType}; -use zk_evm_1_3_3::aux_structures::{MemoryPage, MemoryQuery, Timestamp}; -use zk_evm_1_3_3::vm_state::PrimitiveValue; -use zk_evm_1_3_3::zkevm_opcode_defs::FatPointer; +use zk_evm_1_3_3::{ + abstractions::{Memory, MemoryType}, + aux_structures::{MemoryPage, MemoryQuery, Timestamp}, + vm_state::PrimitiveValue, + zkevm_opcode_defs::FatPointer, +}; use zksync_types::U256; -use crate::vm_1_3_2::history_recorder::{ - FramedStack, HistoryEnabled, HistoryMode, IntFrameManagerWithHistory, MemoryWithHistory, - MemoryWrapper, WithHistory, +use crate::vm_1_3_2::{ + history_recorder::{ + FramedStack, HistoryEnabled, HistoryMode, IntFrameManagerWithHistory, MemoryWithHistory, + MemoryWrapper, WithHistory, + }, + oracles::OracleWithHistory, + utils::{aux_heap_page_from_base, heap_page_from_base, stack_page_from_base}, }; -use crate::vm_1_3_2::oracles::OracleWithHistory; -use crate::vm_1_3_2::utils::{aux_heap_page_from_base, heap_page_from_base, stack_page_from_base}; #[derive(Debug, Clone, PartialEq)] pub struct SimpleMemory { @@ -278,7 +282,7 @@ impl Memory for SimpleMemory { let returndata_page = returndata_fat_pointer.memory_page; for &page in current_observable_pages { - // If the page's number is greater than or equal to the base_page, + // If the page's number is greater than or equal to the `base_page`, // it means that it was created by the internal calls of this contract. 
// We need to add this check as the calldata pointer is also part of the // observable pages. @@ -295,7 +299,7 @@ impl Memory for SimpleMemory { } } -// It is expected that there is some intersection between [word_number*32..word_number*32+31] and [start, end] +// It is expected that there is some intersection between `[word_number*32..word_number*32+31]` and `[start, end]` fn extract_needed_bytes_from_word( word_value: Vec, word_number: usize, @@ -303,7 +307,7 @@ fn extract_needed_bytes_from_word( end: usize, ) -> Vec { let word_start = word_number * 32; - let word_end = word_start + 31; // Note, that at word_start + 32 a new word already starts + let word_end = word_start + 31; // Note, that at `word_start + 32` a new word already starts let intersection_left = std::cmp::max(word_start, start); let intersection_right = std::cmp::min(word_end, end); diff --git a/core/lib/multivm/src/versions/vm_1_3_2/mod.rs b/core/lib/multivm/src/versions/vm_1_3_2/mod.rs index 24e433d9123..37c5f34ffd0 100644 --- a/core/lib/multivm/src/versions/vm_1_3_2/mod.rs +++ b/core/lib/multivm/src/versions/vm_1_3_2/mod.rs @@ -1,5 +1,18 @@ #![allow(clippy::derive_partial_eq_without_eq)] +pub use zk_evm_1_3_3::{self, block_properties::BlockProperties}; +pub use zksync_types::vm_trace::VmExecutionTrace; + +pub(crate) use self::vm_instance::VmInstance; +pub use self::{ + errors::TxRevertReason, + history_recorder::{HistoryDisabled, HistoryEnabled, HistoryMode}, + oracle_tools::OracleTools, + oracles::storage::StorageOracle, + vm::Vm, + vm_instance::{VmBlockResult, VmExecutionResult}, +}; + mod bootloader_state; pub mod errors; pub mod event_sink; @@ -11,25 +24,13 @@ pub mod oracles; mod pubdata_utils; mod refunds; pub mod test_utils; -pub mod transaction_data; -pub mod utils; -pub mod vm_with_bootloader; - #[cfg(test)] mod tests; +pub mod transaction_data; +pub mod utils; mod vm; pub mod vm_instance; - -pub use errors::TxRevertReason; -pub use history_recorder::{HistoryDisabled, HistoryEnabled, HistoryMode}; -pub use oracle_tools::OracleTools; -pub use oracles::storage::StorageOracle; -pub use vm::Vm; -pub(crate) use vm_instance::VmInstance; -pub use vm_instance::{VmBlockResult, VmExecutionResult}; -pub use zk_evm_1_3_3; -pub use zk_evm_1_3_3::block_properties::BlockProperties; -pub use zksync_types::vm_trace::VmExecutionTrace; +pub mod vm_with_bootloader; pub type Word = zksync_types::U256; diff --git a/core/lib/multivm/src/versions/vm_1_3_2/oracle_tools.rs b/core/lib/multivm/src/versions/vm_1_3_2/oracle_tools.rs index 0f1feef4f94..f271d86474c 100644 --- a/core/lib/multivm/src/versions/vm_1_3_2/oracle_tools.rs +++ b/core/lib/multivm/src/versions/vm_1_3_2/oracle_tools.rs @@ -1,18 +1,21 @@ use std::fmt::Debug; -use crate::vm_1_3_2::event_sink::InMemoryEventSink; -use crate::vm_1_3_2::history_recorder::HistoryMode; -use crate::vm_1_3_2::memory::SimpleMemory; -use crate::vm_1_3_2::oracles::{ - decommitter::DecommitterOracle, precompile::PrecompilesProcessorWithHistory, - storage::StorageOracle, -}; use zk_evm_1_3_3::witness_trace::DummyTracer; use zksync_state::{StoragePtr, WriteStorage}; +use crate::vm_1_3_2::{ + event_sink::InMemoryEventSink, + history_recorder::HistoryMode, + memory::SimpleMemory, + oracles::{ + decommitter::DecommitterOracle, precompile::PrecompilesProcessorWithHistory, + storage::StorageOracle, + }, +}; + /// zkEVM requires a bunch of objects implementing given traits to work. 
/// For example: Storage, Memory, PrecompilesProcessor etc -/// (you can find all these traites in zk_evm crate -> src/abstractions/mod.rs) +/// (you can find all these traits in zk_evm crate -> src/abstractions/mod.rs) /// For each of these traits, we have a local implementation (for example StorageOracle) /// that also supports additional features (like rollbacks & history). /// The OracleTools struct holds all these things together in one place. diff --git a/core/lib/multivm/src/versions/vm_1_3_2/oracles/decommitter.rs b/core/lib/multivm/src/versions/vm_1_3_2/oracles/decommitter.rs index e5c30c305bc..8bf0e70026b 100644 --- a/core/lib/multivm/src/versions/vm_1_3_2/oracles/decommitter.rs +++ b/core/lib/multivm/src/versions/vm_1_3_2/oracles/decommitter.rs @@ -1,23 +1,21 @@ use std::collections::HashMap; -use crate::vm_1_3_2::history_recorder::{ - HistoryEnabled, HistoryMode, HistoryRecorder, WithHistory, -}; - -use zk_evm_1_3_3::abstractions::MemoryType; -use zk_evm_1_3_3::aux_structures::Timestamp; use zk_evm_1_3_3::{ - abstractions::{DecommittmentProcessor, Memory}, - aux_structures::{DecommittmentQuery, MemoryIndex, MemoryLocation, MemoryPage, MemoryQuery}, + abstractions::{DecommittmentProcessor, Memory, MemoryType}, + aux_structures::{ + DecommittmentQuery, MemoryIndex, MemoryLocation, MemoryPage, MemoryQuery, Timestamp, + }, }; use zksync_state::{StoragePtr, WriteStorage}; use zksync_types::U256; -use zksync_utils::bytecode::bytecode_len_in_words; -use zksync_utils::{bytes_to_be_words, u256_to_h256}; +use zksync_utils::{bytecode::bytecode_len_in_words, bytes_to_be_words, u256_to_h256}; use super::OracleWithHistory; +use crate::vm_1_3_2::history_recorder::{ + HistoryEnabled, HistoryMode, HistoryRecorder, WithHistory, +}; -/// The main job of the DecommiterOracle is to implement the DecommittmentProcessor trait - that is +/// The main job of the DecommitterOracle is to implement the DecommitmentProcessor trait - that is /// used by the VM to 'load' bytecodes into memory. #[derive(Debug)] pub struct DecommitterOracle { @@ -68,7 +66,7 @@ impl DecommitterOracle } } - /// Adds additional bytecodes. They will take precendent over the bytecodes from storage. + /// Adds additional bytecodes. They will take precedence over the bytecodes from storage. pub fn populate(&mut self, bytecodes: Vec<(U256, Vec)>, timestamp: Timestamp) { for (hash, bytecode) in bytecodes { self.known_bytecodes.insert(hash, bytecode, timestamp); @@ -178,7 +176,7 @@ impl DecommittmentProcessor > { self.decommitment_requests.push((), partial_query.timestamp); // First - check if we didn't fetch this bytecode in the past. - // If we did - we can just return the page that we used before (as the memory is read only). + // If we did - we can just return the page that we used before (as the memory is read-only). if let Some(memory_page) = self .decommitted_code_hashes .inner() diff --git a/core/lib/multivm/src/versions/vm_1_3_2/oracles/mod.rs b/core/lib/multivm/src/versions/vm_1_3_2/oracles/mod.rs index 342fadb554a..59b0601e148 100644 --- a/core/lib/multivm/src/versions/vm_1_3_2/oracles/mod.rs +++ b/core/lib/multivm/src/versions/vm_1_3_2/oracles/mod.rs @@ -1,11 +1,10 @@ use zk_evm_1_3_3::aux_structures::Timestamp; -// We will discard RAM as soon as the execution of a tx ends, so -// it is ok for now to use SimpleMemory -pub use zk_evm_1_3_3::reference_impls::memory::SimpleMemory as RamOracle; // All the changes to the events in the DB will be applied after the tx is executed, -// so fow now it is fine.
pub use zk_evm_1_3_3::reference_impls::event_sink::InMemoryEventSink as EventSinkOracle; - +// We will discard RAM as soon as the execution of a tx ends, so +// it is ok for now to use `SimpleMemory` +pub use zk_evm_1_3_3::reference_impls::memory::SimpleMemory as RamOracle; pub use zk_evm_1_3_3::testing::simple_tracer::NoopTracer; pub mod decommitter; diff --git a/core/lib/multivm/src/versions/vm_1_3_2/oracles/precompile.rs b/core/lib/multivm/src/versions/vm_1_3_2/oracles/precompile.rs index 0693fac6d60..8089527183f 100644 --- a/core/lib/multivm/src/versions/vm_1_3_2/oracles/precompile.rs +++ b/core/lib/multivm/src/versions/vm_1_3_2/oracles/precompile.rs @@ -1,14 +1,11 @@ use zk_evm_1_3_3::{ - abstractions::Memory, - abstractions::PrecompileCyclesWitness, - abstractions::PrecompilesProcessor, + abstractions::{Memory, PrecompileCyclesWitness, PrecompilesProcessor}, aux_structures::{LogQuery, MemoryQuery, Timestamp}, precompiles::DefaultPrecompilesProcessor, }; -use crate::vm_1_3_2::history_recorder::{HistoryEnabled, HistoryMode, HistoryRecorder}; - use super::OracleWithHistory; +use crate::vm_1_3_2::history_recorder::{HistoryEnabled, HistoryMode, HistoryRecorder}; /// Wrap of DefaultPrecompilesProcessor that store queue /// of timestamp when precompiles are called to be executed. diff --git a/core/lib/multivm/src/versions/vm_1_3_2/oracles/storage.rs b/core/lib/multivm/src/versions/vm_1_3_2/oracles/storage.rs index 9a4873fe59a..745dcad5050 100644 --- a/core/lib/multivm/src/versions/vm_1_3_2/oracles/storage.rs +++ b/core/lib/multivm/src/versions/vm_1_3_2/oracles/storage.rs @@ -1,25 +1,22 @@ use std::collections::HashMap; -use crate::vm_1_3_2::history_recorder::{ - AppDataFrameManagerWithHistory, HashMapHistoryEvent, HistoryEnabled, HistoryMode, - HistoryRecorder, StorageWrapper, WithHistory, -}; - -use zk_evm_1_3_3::abstractions::RefundedAmounts; -use zk_evm_1_3_3::zkevm_opcode_defs::system_params::INITIAL_STORAGE_WRITE_PUBDATA_BYTES; use zk_evm_1_3_3::{ - abstractions::{RefundType, Storage as VmStorageOracle}, + abstractions::{RefundType, RefundedAmounts, Storage as VmStorageOracle}, aux_structures::{LogQuery, Timestamp}, + zkevm_opcode_defs::system_params::INITIAL_STORAGE_WRITE_PUBDATA_BYTES, }; use zksync_state::{StoragePtr, WriteStorage}; -use zksync_types::utils::storage_key_for_eth_balance; use zksync_types::{ - AccountTreeId, Address, StorageKey, StorageLogQuery, StorageLogQueryType, BOOTLOADER_ADDRESS, - U256, + utils::storage_key_for_eth_balance, AccountTreeId, Address, StorageKey, StorageLogQuery, + StorageLogQueryType, BOOTLOADER_ADDRESS, U256, }; use zksync_utils::u256_to_h256; use super::OracleWithHistory; +use crate::vm_1_3_2::history_recorder::{ + AppDataFrameManagerWithHistory, HashMapHistoryEvent, HistoryEnabled, HistoryMode, + HistoryRecorder, StorageWrapper, WithHistory, +}; // While the storage does not support different shards, it was decided to write the // code of the StorageOracle with the shard parameters in mind. @@ -170,7 +167,7 @@ impl StorageOracle { ) -> &[Box] { let logs = self.frames_stack.forward().current_frame(); - // Select all of the last elements where l.log_query.timestamp >= from_timestamp. + // Select all of the last elements where `l.log_query.timestamp >= from_timestamp`. // Note, that using binary search here is dangerous, because the logs are not sorted by timestamp. 
logs.rsplit(|l| l.log_query.timestamp < from_timestamp) .next() @@ -211,13 +208,14 @@ impl StorageOracle { } impl VmStorageOracle for StorageOracle { - // Perform a storage read/write access by taking an partially filled query + // Perform a storage read / write access by taking a partially filled query // and returning a filled query and a cold/warm marker for pricing purposes fn execute_partial_query( &mut self, _monotonic_cycle_counter: u32, query: LogQuery, ) -> LogQuery { + // ``` // tracing::trace!( // "execute partial query cyc {:?} addr {:?} key {:?}, rw {:?}, wr {:?}, tx {:?}", // _monotonic_cycle_counter, @@ -227,6 +225,7 @@ impl VmStorageOracle for StorageOracle { // query.written_value, // query.tx_number_in_block // ); + // ``` assert!(!query.rollback); if query.rw_flag { // The number of bytes that have been compensated by the user to perform this write @@ -306,7 +305,7 @@ impl VmStorageOracle for StorageOracle { ); // Additional validation that the current value was correct - // Unwrap is safe because the return value from write_inner is the previous value in this leaf. + // Unwrap is safe because the return value from `write_inner` is the previous value in this leaf. // It is impossible to set leaf value to `None` assert_eq!(current_value, written_value); } @@ -320,8 +319,8 @@ impl VmStorageOracle for StorageOracle { /// Returns the number of bytes needed to publish a slot. // Since we need to publish the state diffs onchain, for each of the updated storage slot -// we basically need to publish the following pair: (key, new_value). -// While new_value is always 32 bytes long, for key we use the following optimization: +// we basically need to publish the following pair: `(key, new_value)`. +// While `new_value` is always 32 bytes long, for key we use the following optimization: // - The first time we publish it, we use 32 bytes. // Then, we remember an 8-byte id for this slot and assign it to it. We call this initial write. // - The second time we publish it, we will use this 8-byte id instead of the 32 bytes of the entire key.
diff --git a/core/lib/multivm/src/versions/vm_1_3_2/oracles/tracer/bootloader.rs b/core/lib/multivm/src/versions/vm_1_3_2/oracles/tracer/bootloader.rs
index 20d8621e829..fac4a74a1eb 100644
--- a/core/lib/multivm/src/versions/vm_1_3_2/oracles/tracer/bootloader.rs
+++ b/core/lib/multivm/src/versions/vm_1_3_2/oracles/tracer/bootloader.rs
@@ -1,12 +1,5 @@ use std::marker::PhantomData; -use crate::vm_1_3_2::history_recorder::HistoryMode; -use crate::vm_1_3_2::memory::SimpleMemory; -use crate::vm_1_3_2::oracles::tracer::{ - utils::gas_spent_on_bytecodes_and_long_messages_this_opcode, ExecutionEndTracer, - PendingRefundTracer, PubdataSpentTracer, StorageInvocationTracer, -}; - use zk_evm_1_3_3::{ tracing::{ AfterDecodingData, AfterExecutionData, BeforeExecutionData, Tracer, VmLocalStateData, @@ -16,7 +9,16 @@ use zk_evm_1_3_3::{ zkevm_opcode_defs::{Opcode, RetOpcode}, }; -/// Tells the VM to end the execution before `ret` from the booloader if there is no panic or revert. +use crate::vm_1_3_2::{ + history_recorder::HistoryMode, + memory::SimpleMemory, + oracles::tracer::{ + utils::gas_spent_on_bytecodes_and_long_messages_this_opcode, ExecutionEndTracer, + PendingRefundTracer, PubdataSpentTracer, StorageInvocationTracer, + }, +}; + +/// Tells the VM to end the execution before `ret` from the bootloader if there is no panic or revert. /// Also, saves the information if this `ret` was caused by "out of gas" panic.
#[derive(Debug, Clone, Default)] pub struct BootloaderTracer { @@ -98,7 +100,7 @@ impl PubdataSpentTracer for BootloaderTracer { impl BootloaderTracer { fn current_frame_is_bootloader(local_state: &VmLocalState) -> bool { - // The current frame is bootloader if the callstack depth is 1. + // The current frame is the bootloader if the call stack depth is 1. // Some of the near calls inside the bootloader can be out of gas, which is totally normal behavior // and it shouldn't result in `is_bootloader_out_of_gas` becoming true. local_state.callstack.inner.len() == 1
diff --git a/core/lib/multivm/src/versions/vm_1_3_2/oracles/tracer/call.rs b/core/lib/multivm/src/versions/vm_1_3_2/oracles/tracer/call.rs
index b50ee5f925c..3f31d7b7123 100644
--- a/core/lib/multivm/src/versions/vm_1_3_2/oracles/tracer/call.rs
+++ b/core/lib/multivm/src/versions/vm_1_3_2/oracles/tracer/call.rs
@@ -1,20 +1,23 @@ -use crate::vm_1_3_2::errors::VmRevertReason; -use crate::vm_1_3_2::history_recorder::HistoryMode; -use crate::vm_1_3_2::memory::SimpleMemory; -use std::convert::TryFrom; -use std::marker::PhantomData; -use std::mem; -use zk_evm_1_3_3::tracing::{ - AfterDecodingData, AfterExecutionData, BeforeExecutionData, Tracer, VmLocalStateData, -}; -use zk_evm_1_3_3::zkevm_opcode_defs::FatPointer; -use zk_evm_1_3_3::zkevm_opcode_defs::{ - FarCallABI, FarCallOpcode, Opcode, RetOpcode, CALL_IMPLICIT_CALLDATA_FAT_PTR_REGISTER, - RET_IMPLICIT_RETURNDATA_PARAMS_REGISTER, +use std::{convert::TryFrom, marker::PhantomData, mem}; + +use zk_evm_1_3_3::{ + tracing::{ + AfterDecodingData, AfterExecutionData, BeforeExecutionData, Tracer, VmLocalStateData, + }, + zkevm_opcode_defs::{ + FarCallABI, FarCallOpcode, FatPointer, Opcode, RetOpcode, + CALL_IMPLICIT_CALLDATA_FAT_PTR_REGISTER, RET_IMPLICIT_RETURNDATA_PARAMS_REGISTER, + }, }; use zksync_system_constants::CONTRACT_DEPLOYER_ADDRESS; -use zksync_types::vm_trace::{Call, CallType}; -use zksync_types::U256; +use zksync_types::{ + vm_trace::{Call, CallType}, + U256, +}; + +use crate::vm_1_3_2::{ + errors::VmRevertReason, history_recorder::HistoryMode, memory::SimpleMemory, +}; /// NOTE: Auto-implementing clone for this tracer can cause stack overflow. /// This is because of the stack field, which is a Vec with nested vecs inside. @@ -94,7 +97,7 @@ impl Tracer for CallTracer { } impl CallTracer { - /// We use parent gas for propery calculation of gas used in the trace. + /// We use parent gas for proper calculation of gas used in the trace. /// This method updates parent gas for the current call.
fn update_parent_gas(&mut self, state: &VmLocalStateData<'_>, current_call: &mut Call) { let current = state.vm_local_state.callstack.current; @@ -185,7 +188,7 @@ impl CallTracer { let fat_data_pointer = state.vm_local_state.registers[RET_IMPLICIT_RETURNDATA_PARAMS_REGISTER as usize]; - // if fat_data_pointer is not a pointer then there is no output + // if `fat_data_pointer` is not a pointer then there is no output let output = if fat_data_pointer.is_pointer { let fat_data_pointer = FatPointer::from_u256(fat_data_pointer.value); if !fat_data_pointer.is_trivial() { @@ -252,8 +255,8 @@ impl CallTracer { // Filter all near calls from the call stack // Important that the very first call is near call - // And this NearCall includes several Normal or Mimic calls - // So we return all childrens of this NearCall + // And this `NearCall` includes several Normal or Mimic calls + // So we return all children of this `NearCall` pub fn extract_calls(&mut self) -> Vec { if let Some(current_call) = self.stack.pop() { filter_near_call(current_call) @@ -264,7 +267,7 @@ impl CallTracer { } // Filter all near calls from the call stack -// Normally wr are not interested in NearCall, because it's just a wrapper for internal calls +// Normally we are not interested in `NearCall`, because it's just a wrapper for internal calls fn filter_near_call(mut call: Call) -> Vec { let mut calls = vec![]; let original_calls = std::mem::take(&mut call.calls); @@ -282,9 +285,10 @@ fn filter_near_call(mut call: Call) -> Vec { #[cfg(test)] mod tests { - use crate::vm_1_3_2::oracles::tracer::call::{filter_near_call, Call, CallType}; use zk_evm_1_3_3::zkevm_opcode_defs::FarCallOpcode; + use crate::vm_1_3_2::oracles::tracer::call::{filter_near_call, Call, CallType}; + #[test] fn test_filter_near_calls() { let mut call = Call::default(); diff --git a/core/lib/multivm/src/versions/vm_1_3_2/oracles/tracer/mod.rs b/core/lib/multivm/src/versions/vm_1_3_2/oracles/tracer/mod.rs index 29121f35c5f..5395a0a9d7b 100644 --- a/core/lib/multivm/src/versions/vm_1_3_2/oracles/tracer/mod.rs +++ b/core/lib/multivm/src/versions/vm_1_3_2/oracles/tracer/mod.rs @@ -1,5 +1,15 @@ -use zk_evm_1_3_3::tracing::Tracer; -use zk_evm_1_3_3::vm_state::VmLocalState; +use zk_evm_1_3_3::{tracing::Tracer, vm_state::VmLocalState}; + +pub(crate) use self::transaction_result::TransactionResultTracer; +pub use self::{ + bootloader::BootloaderTracer, + call::CallTracer, + one_tx::OneTxTracer, + validation::{ + ValidationError, ValidationTracer, ValidationTracerParams, ViolatedValidationRule, + }, +}; +use crate::vm_1_3_2::{history_recorder::HistoryMode, memory::SimpleMemory}; mod bootloader; mod call; @@ -8,18 +18,6 @@ mod transaction_result; mod utils; mod validation; -pub use bootloader::BootloaderTracer; -pub use call::CallTracer; -pub use one_tx::OneTxTracer; -pub use validation::{ - ValidationError, ValidationTracer, ValidationTracerParams, ViolatedValidationRule, -}; - -pub(crate) use transaction_result::TransactionResultTracer; - -use crate::vm_1_3_2::history_recorder::HistoryMode; -use crate::vm_1_3_2::memory::SimpleMemory; - pub trait ExecutionEndTracer: Tracer> { // Returns whether the vm execution should stop. 
fn should_stop_execution(&self) -> bool; diff --git a/core/lib/multivm/src/versions/vm_1_3_2/oracles/tracer/one_tx.rs b/core/lib/multivm/src/versions/vm_1_3_2/oracles/tracer/one_tx.rs index a9349ea2035..9bf5a9b7d22 100644 --- a/core/lib/multivm/src/versions/vm_1_3_2/oracles/tracer/one_tx.rs +++ b/core/lib/multivm/src/versions/vm_1_3_2/oracles/tracer/one_tx.rs @@ -1,25 +1,25 @@ +use zk_evm_1_3_3::{ + tracing::{ + AfterDecodingData, AfterExecutionData, BeforeExecutionData, Tracer, VmLocalStateData, + }, + vm_state::VmLocalState, +}; +use zksync_types::vm_trace::Call; + use super::utils::{computational_gas_price, print_debug_if_needed}; use crate::vm_1_3_2::{ history_recorder::HistoryMode, memory::SimpleMemory, oracles::tracer::{ utils::{gas_spent_on_bytecodes_and_long_messages_this_opcode, VmHook}, - BootloaderTracer, ExecutionEndTracer, PendingRefundTracer, PubdataSpentTracer, + BootloaderTracer, CallTracer, ExecutionEndTracer, PendingRefundTracer, PubdataSpentTracer, + StorageInvocationTracer, }, vm_instance::get_vm_hook_params, }; -use crate::vm_1_3_2::oracles::tracer::{CallTracer, StorageInvocationTracer}; -use zk_evm_1_3_3::{ - tracing::{ - AfterDecodingData, AfterExecutionData, BeforeExecutionData, Tracer, VmLocalStateData, - }, - vm_state::VmLocalState, -}; -use zksync_types::vm_trace::Call; - /// Allows any opcodes, but tells the VM to end the execution once the tx is over. -// Internally depeds on Bootloader's VMHooks to get the notification once the transaction is finished. +// Internally depends on Bootloader's `VMHooks` to get the notification once the transaction is finished. #[derive(Debug)] pub struct OneTxTracer { tx_has_been_processed: bool, diff --git a/core/lib/multivm/src/versions/vm_1_3_2/oracles/tracer/transaction_result.rs b/core/lib/multivm/src/versions/vm_1_3_2/oracles/tracer/transaction_result.rs index 215c66bfa74..c74e9bb862d 100644 --- a/core/lib/multivm/src/versions/vm_1_3_2/oracles/tracer/transaction_result.rs +++ b/core/lib/multivm/src/versions/vm_1_3_2/oracles/tracer/transaction_result.rs @@ -7,18 +7,18 @@ use zk_evm_1_3_3::{ }; use zksync_types::{vm_trace, U256}; -use crate::vm_1_3_2::memory::SimpleMemory; -use crate::vm_1_3_2::oracles::tracer::{ - CallTracer, ExecutionEndTracer, PendingRefundTracer, PubdataSpentTracer, - StorageInvocationTracer, -}; -use crate::vm_1_3_2::vm_instance::get_vm_hook_params; use crate::vm_1_3_2::{ history_recorder::HistoryMode, - oracles::tracer::utils::{ - gas_spent_on_bytecodes_and_long_messages_this_opcode, print_debug_if_needed, read_pointer, - VmHook, + memory::SimpleMemory, + oracles::tracer::{ + utils::{ + gas_spent_on_bytecodes_and_long_messages_this_opcode, print_debug_if_needed, + read_pointer, VmHook, + }, + CallTracer, ExecutionEndTracer, PendingRefundTracer, PubdataSpentTracer, + StorageInvocationTracer, }, + vm_instance::get_vm_hook_params, }; #[derive(Debug)] diff --git a/core/lib/multivm/src/versions/vm_1_3_2/oracles/tracer/utils.rs b/core/lib/multivm/src/versions/vm_1_3_2/oracles/tracer/utils.rs index 2914faf5120..5ee8d8554b6 100644 --- a/core/lib/multivm/src/versions/vm_1_3_2/oracles/tracer/utils.rs +++ b/core/lib/multivm/src/versions/vm_1_3_2/oracles/tracer/utils.rs @@ -1,14 +1,9 @@ -use crate::vm_1_3_2::history_recorder::HistoryMode; -use crate::vm_1_3_2::memory::SimpleMemory; -use crate::vm_1_3_2::utils::{aux_heap_page_from_base, heap_page_from_base}; -use crate::vm_1_3_2::vm_instance::{get_vm_hook_params, VM_HOOK_POSITION}; -use crate::vm_1_3_2::vm_with_bootloader::BOOTLOADER_HEAP_PAGE; - -use 
zk_evm_1_3_3::aux_structures::MemoryPage; -use zk_evm_1_3_3::zkevm_opcode_defs::{FarCallABI, FarCallForwardPageType}; use zk_evm_1_3_3::{ + aux_structures::MemoryPage, tracing::{BeforeExecutionData, VmLocalStateData}, - zkevm_opcode_defs::{FatPointer, LogOpcode, Opcode, UMAOpcode}, + zkevm_opcode_defs::{ + FarCallABI, FarCallForwardPageType, FatPointer, LogOpcode, Opcode, UMAOpcode, + }, }; use zksync_system_constants::{ ECRECOVER_PRECOMPILE_ADDRESS, KECCAK256_PRECOMPILE_ADDRESS, KNOWN_CODES_STORAGE_ADDRESS, @@ -17,6 +12,14 @@ use zksync_system_constants::{ use zksync_types::U256; use zksync_utils::u256_to_h256; +use crate::vm_1_3_2::{ + history_recorder::HistoryMode, + memory::SimpleMemory, + utils::{aux_heap_page_from_base, heap_page_from_base}, + vm_instance::{get_vm_hook_params, VM_HOOK_POSITION}, + vm_with_bootloader::BOOTLOADER_HEAP_PAGE, +}; + #[derive(Clone, Debug, Copy)] pub(crate) enum VmHook { AccountValidationEntered, @@ -45,7 +48,7 @@ impl VmHook { let value = data.src1_value.value; - // Only UMA opcodes in the bootloader serve for vm hooks + // Only `UMA` opcodes in the bootloader serve for vm hooks if !matches!(opcode_variant.opcode, Opcode::UMA(UMAOpcode::HeapWrite)) || heap_page != BOOTLOADER_HEAP_PAGE || fat_ptr.offset != VM_HOOK_POSITION * 32 @@ -84,7 +87,7 @@ pub(crate) fn get_debug_log( let msg = String::from_utf8(msg).expect("Invalid debug message"); let data = U256::from_big_endian(&data); - // For long data, it is better to use hex-encoding for greater readibility + // For long data, it is better to use hex-encoding for greater readability let data_str = if data > U256::from(u64::max_value()) { let mut bytes = [0u8; 32]; data.to_big_endian(&mut bytes); @@ -99,7 +102,7 @@ pub(crate) fn get_debug_log( } /// Reads the memory slice represented by the fat pointer. -/// Note, that the fat pointer must point to the accesible memory (i.e. not cleared up yet). +/// Note, that the fat pointer must point to the accessible memory (i.e. not cleared up yet). 
pub(crate) fn read_pointer( memory: &SimpleMemory, pointer: FatPointer, diff --git a/core/lib/multivm/src/versions/vm_1_3_2/oracles/tracer/validation.rs b/core/lib/multivm/src/versions/vm_1_3_2/oracles/tracer/validation.rs index ee1587df3b0..caea2688bb9 100644 --- a/core/lib/multivm/src/versions/vm_1_3_2/oracles/tracer/validation.rs +++ b/core/lib/multivm/src/versions/vm_1_3_2/oracles/tracer/validation.rs @@ -1,15 +1,5 @@ use std::{collections::HashSet, fmt::Display, marker::PhantomData}; -use crate::vm_1_3_2::{ - errors::VmRevertReasonParsingResult, - history_recorder::HistoryMode, - memory::SimpleMemory, - oracles::tracer::{ - utils::{computational_gas_price, print_debug_if_needed, VmHook}, - ExecutionEndTracer, PendingRefundTracer, PubdataSpentTracer, - }, -}; - use zk_evm_1_3_3::{ tracing::{ AfterDecodingData, AfterExecutionData, BeforeExecutionData, Tracer, VmLocalStateData, @@ -17,8 +7,6 @@ use zk_evm_1_3_3::{ zkevm_opcode_defs::{ContextOpcode, FarCallABI, LogOpcode, Opcode}, }; use zksync_state::{StoragePtr, WriteStorage}; - -use crate::vm_1_3_2::oracles::tracer::{utils::get_calldata_page_via_abi, StorageInvocationTracer}; use zksync_system_constants::{ ACCOUNT_CODE_STORAGE_ADDRESS, BOOTLOADER_ADDRESS, CONTRACT_DEPLOYER_ADDRESS, KECCAK256_PRECOMPILE_ADDRESS, L2_ETH_TOKEN_ADDRESS, MSG_VALUE_SIMULATOR_ADDRESS, @@ -31,6 +19,18 @@ use zksync_utils::{ be_bytes_to_safe_address, h256_to_account_address, u256_to_account_address, u256_to_h256, }; +use crate::vm_1_3_2::{ + errors::VmRevertReasonParsingResult, + history_recorder::HistoryMode, + memory::SimpleMemory, + oracles::tracer::{ + utils::{ + computational_gas_price, get_calldata_page_via_abi, print_debug_if_needed, VmHook, + }, + ExecutionEndTracer, PendingRefundTracer, PubdataSpentTracer, StorageInvocationTracer, + }, +}; + #[derive(Debug, Clone, Eq, PartialEq, Copy)] #[allow(clippy::enum_variant_names)] pub enum ValidationTracerMode { @@ -100,7 +100,7 @@ fn touches_allowed_context(address: Address, key: U256) -> bool { return false; } - // Only chain_id is allowed to be touched. + // Only `chain_id` is allowed to be touched. key == U256::from(0u32) } @@ -223,7 +223,7 @@ impl ValidationTracer { return true; } - // The pair of MSG_VALUE_SIMULATOR_ADDRESS & L2_ETH_TOKEN_ADDRESS simulates the behavior of transfering ETH + // The pair of `MSG_VALUE_SIMULATOR_ADDRESS` & `L2_ETH_TOKEN_ADDRESS` simulates the behavior of transferring ETH // that is safe for the DDoS protection rules. if valid_eth_token_call(address, msg_sender) { return true; @@ -267,20 +267,20 @@ impl ValidationTracer { let (potential_address_bytes, potential_position_bytes) = calldata.split_at(32); let potential_address = be_bytes_to_safe_address(potential_address_bytes); - // If the validation_address is equal to the potential_address, - // then it is a request that could be used for mapping of kind mapping(address => ...). + // If the `validation_address` is equal to the `potential_address`, + // then it is a request that could be used for mapping of kind `mapping(address => ...)`. 
// - // If the potential_position_bytes were already allowed before, then this keccak might be used - // for ERC-20 allowance or any other of mapping(address => mapping(...)) + // If the `potential_position_bytes` were already allowed before, then this keccak might be used + // for ERC-20 allowance or any other of `mapping(address => mapping(...))` if potential_address == Some(validated_address) || self .auxilary_allowed_slots .contains(&H256::from_slice(potential_position_bytes)) { - // This is request that could be used for mapping of kind mapping(address => ...) + // This is request that could be used for mapping of kind `mapping(address => ...)` // We could theoretically wait for the slot number to be returned by the - // keccak256 precompile itself, but this would complicate the code even further + // `keccak256` precompile itself, but this would complicate the code even further // so let's calculate it here. let slot = keccak256(calldata); diff --git a/core/lib/multivm/src/versions/vm_1_3_2/pubdata_utils.rs b/core/lib/multivm/src/versions/vm_1_3_2/pubdata_utils.rs index 936b85bfc09..23d42fc2b5a 100644 --- a/core/lib/multivm/src/versions/vm_1_3_2/pubdata_utils.rs +++ b/core/lib/multivm/src/versions/vm_1_3_2/pubdata_utils.rs @@ -1,14 +1,18 @@ -use crate::vm_1_3_2::history_recorder::HistoryMode; -use crate::vm_1_3_2::oracles::storage::storage_key_of_log; -use crate::vm_1_3_2::VmInstance; use std::collections::HashMap; + use zk_evm_1_3_3::aux_structures::Timestamp; use zksync_state::WriteStorage; -use zksync_types::event::{extract_long_l2_to_l1_messages, extract_published_bytecodes}; -use zksync_types::zkevm_test_harness::witness::sort_storage_access::sort_storage_access_queries; -use zksync_types::{StorageKey, PUBLISH_BYTECODE_OVERHEAD, SYSTEM_CONTEXT_ADDRESS}; +use zksync_types::{ + event::{extract_long_l2_to_l1_messages, extract_published_bytecodes}, + zkevm_test_harness::witness::sort_storage_access::sort_storage_access_queries, + StorageKey, PUBLISH_BYTECODE_OVERHEAD, SYSTEM_CONTEXT_ADDRESS, +}; use zksync_utils::bytecode::bytecode_len_in_bytes; +use crate::vm_1_3_2::{ + history_recorder::HistoryMode, oracles::storage::storage_key_of_log, VmInstance, +}; + impl VmInstance { pub fn pubdata_published(&self, from_timestamp: Timestamp) -> u32 { let storage_writes_pubdata_published = self.pubdata_published_for_writes(from_timestamp); diff --git a/core/lib/multivm/src/versions/vm_1_3_2/refunds.rs b/core/lib/multivm/src/versions/vm_1_3_2/refunds.rs index 0277379143b..555dd0f643e 100644 --- a/core/lib/multivm/src/versions/vm_1_3_2/refunds.rs +++ b/core/lib/multivm/src/versions/vm_1_3_2/refunds.rs @@ -1,13 +1,14 @@ -use crate::vm_1_3_2::history_recorder::HistoryMode; -use crate::vm_1_3_2::vm_with_bootloader::{ - eth_price_per_pubdata_byte, BOOTLOADER_HEAP_PAGE, TX_GAS_LIMIT_OFFSET, -}; -use crate::vm_1_3_2::VmInstance; use zk_evm_1_3_3::aux_structures::Timestamp; use zksync_state::WriteStorage; use zksync_types::U256; use zksync_utils::ceil_div_u256; +use crate::vm_1_3_2::{ + history_recorder::HistoryMode, + vm_with_bootloader::{eth_price_per_pubdata_byte, BOOTLOADER_HEAP_PAGE, TX_GAS_LIMIT_OFFSET}, + VmInstance, +}; + impl VmInstance { pub(crate) fn tx_body_refund( &self, @@ -75,7 +76,7 @@ impl VmInstance { ) -> u32 { // TODO (SMA-1715): Make users pay for the block overhead 0 - + // ``` // let pubdata_published = self.pubdata_published(from_timestamp); // // let total_gas_spent = gas_remaining_before - self.gas_remaining(); @@ -120,6 +121,7 @@ impl VmInstance { // ); // 0 // } + // ``` } 
// TODO (SMA-1715): Make users pay for the block overhead @@ -133,39 +135,39 @@ impl VmInstance { _l2_l1_logs: usize, ) -> u32 { 0 - + // ``` // let overhead_for_block_gas = U256::from(crate::transaction_data::block_overhead_gas( // gas_per_pubdata_byte_limit, // )); - + // // let encoded_len = U256::from(encoded_len); // let pubdata_published = U256::from(pubdata_published); // let gas_spent_on_computation = U256::from(gas_spent_on_computation); // let number_of_decommitment_requests = U256::from(number_of_decommitment_requests); // let l2_l1_logs = U256::from(l2_l1_logs); - + // // let tx_slot_overhead = ceil_div_u256(overhead_for_block_gas, MAX_TXS_IN_BLOCK.into()); - + // // let overhead_for_length = ceil_div_u256( // encoded_len * overhead_for_block_gas, // BOOTLOADER_TX_ENCODING_SPACE.into(), // ); - + // // let actual_overhead_for_pubdata = ceil_div_u256( // pubdata_published * overhead_for_block_gas, // MAX_PUBDATA_PER_BLOCK.into(), // ); - + // // let actual_gas_limit_overhead = ceil_div_u256( // gas_spent_on_computation * overhead_for_block_gas, // MAX_BLOCK_MULTIINSTANCE_GAS_LIMIT.into(), // ); - + // // let code_decommitter_sorter_circuit_overhead = ceil_div_u256( // number_of_decommitment_requests * overhead_for_block_gas, // GEOMETRY_CONFIG.limit_for_code_decommitter_sorter.into(), // ); - + // // let l1_l2_logs_overhead = ceil_div_u256( // l2_l1_logs * overhead_for_block_gas, // std::cmp::min( @@ -174,7 +176,7 @@ impl VmInstance { // ) // .into(), // ); - + // // let overhead = vec![ // tx_slot_overhead, // overhead_for_length, @@ -186,8 +188,9 @@ impl VmInstance { // .into_iter() // .max() // .unwrap(); - + // // overhead.as_u32() + // ``` } /// Returns the given transactions' gas limit - by reading it directly from the VM memory. diff --git a/core/lib/multivm/src/versions/vm_1_3_2/test_utils.rs b/core/lib/multivm/src/versions/vm_1_3_2/test_utils.rs index 6738b070482..c3aa161543a 100644 --- a/core/lib/multivm/src/versions/vm_1_3_2/test_utils.rs +++ b/core/lib/multivm/src/versions/vm_1_3_2/test_utils.rs @@ -10,8 +10,10 @@ use std::collections::HashMap; use itertools::Itertools; use zk_evm_1_3_3::{aux_structures::Timestamp, vm_state::VmLocalState}; -use zksync_contracts::test_contracts::LoadnextContractExecutionParams; -use zksync_contracts::{deployer_contract, get_loadnext_contract, load_contract}; +use zksync_contracts::{ + deployer_contract, get_loadnext_contract, load_contract, + test_contracts::LoadnextContractExecutionParams, +}; use zksync_state::WriteStorage; use zksync_types::{ ethabi::{Address, Token}, @@ -59,7 +61,7 @@ impl PartialEq for ModifiedKeysMap { #[derive(Clone, PartialEq, Debug)] pub struct DecommitterTestInnerState { - /// There is no way to "trully" compare the storage pointer, + /// There is no way to "truly" compare the storage pointer, /// so we just compare the modified keys. This is reasonable enough. pub modified_storage_keys: ModifiedKeysMap, pub known_bytecodes: HistoryRecorder>, H>, @@ -68,7 +70,7 @@ pub struct DecommitterTestInnerState { #[derive(Clone, PartialEq, Debug)] pub struct StorageOracleInnerState { - /// There is no way to "trully" compare the storage pointer, + /// There is no way to "truly" compare the storage pointer, /// so we just compare the modified keys. This is reasonable enough. 
pub modified_storage_keys: ModifiedKeysMap, diff --git a/core/lib/multivm/src/versions/vm_1_3_2/tests/bootloader.rs b/core/lib/multivm/src/versions/vm_1_3_2/tests/bootloader.rs index da9087afedd..2e5b55c945d 100644 --- a/core/lib/multivm/src/versions/vm_1_3_2/tests/bootloader.rs +++ b/core/lib/multivm/src/versions/vm_1_3_2/tests/bootloader.rs @@ -1,3 +1,4 @@ +// ``` // //! // //! Tests for the bootloader // //! The description for each of the tests can be found in the corresponding `.yul` file. @@ -8,7 +9,7 @@ // convert::{TryFrom, TryInto}, // }; // use zksync_eth_signer::{raw_ethereum_tx::TransactionParameters, EthereumSigner, PrivateKeySigner}; - +// // use crate::{ // errors::VmRevertReason, // history_recorder::HistoryMode, @@ -37,7 +38,7 @@ // }, // HistoryEnabled, OracleTools, TxRevertReason, VmBlockResult, VmExecutionResult, VmInstance, // }; - +// // use zk_evm_1_3_3::{ // aux_structures::Timestamp, block_properties::BlockProperties, zkevm_opcode_defs::FarCallOpcode, // }; @@ -69,11 +70,11 @@ // test_utils::LoadnextContractExecutionParams, // {bytecode::hash_bytecode, bytes_to_be_words, h256_to_u256, u256_to_h256}, // }; - +// // use zksync_contracts::{ // get_loadnext_contract, load_contract, SystemContractCode, PLAYGROUND_BLOCK_BOOTLOADER_CODE, // }; - +// // use super::utils::{read_many_owners_custom_account_contract, read_nonce_holder_tester}; // /// Helper struct for tests, that takes care of setting the database and provides some functions to get and set balances. // /// Example use: @@ -92,7 +93,7 @@ // pub block_properties: BlockProperties, // pub storage_ptr: Box>, // } - +// // impl VmTestEnv { // /// Creates a new test helper with a bunch of already deployed contracts. // pub fn new_with_contracts(contracts: &[(H160, Vec)]) -> Self { @@ -100,39 +101,39 @@ // let (block_context, block_properties) = create_test_block_params(); // (block_context.into(), block_properties) // }; - +// // let mut raw_storage = InMemoryStorage::with_system_contracts(hash_bytecode); // for (address, bytecode) in contracts { // let account = DeployedContract { // account_id: AccountTreeId::new(*address), // bytecode: bytecode.clone(), // }; - +// // insert_contracts(&mut raw_storage, vec![(account, true)]); // } - +// // let storage_ptr = Box::new(StorageView::new(raw_storage)); - +// // VmTestEnv { // block_context, // block_properties, // storage_ptr, // } // } - +// // /// Gets the current ETH balance for a given account. // pub fn get_eth_balance(&mut self, address: &H160) -> U256 { // get_eth_balance(address, self.storage_ptr.as_mut()) // } - +// // /// Sets a large balance for a given account. // pub fn set_rich_account(&mut self, address: &H160) { // let key = storage_key_for_eth_balance(address); - +// // self.storage_ptr // .set_value(key, u256_to_h256(U256::from(10u64.pow(19)))); // } - +// // /// Runs a given transaction in a VM. // // Note: that storage changes will be preserved, but not changed to events etc. // // Strongly suggest to use this function only if this is the only transaction executed within the test. @@ -146,7 +147,7 @@ // ); // (result, tx_has_failed) // } - +// // /// Runs a given transaction in a VM and asserts if it fails. 
// pub fn run_vm_or_die(&mut self, transaction_data: TransactionData) { // let (result, tx_has_failed) = self.run_vm(transaction_data); @@ -157,13 +158,13 @@ // ); // } // } - +// // impl Default for VmTestEnv { // fn default() -> Self { // VmTestEnv::new_with_contracts(&[]) // } // } - +// // /// Helper struct to create a default VM for a given environment. // #[derive(Debug)] // pub struct VmTestHelper<'a> { @@ -172,12 +173,12 @@ // pub block_properties: BlockProperties, // vm_created: bool, // } - +// // impl<'a> VmTestHelper<'a> { // pub fn new(test_env: &'a mut VmTestEnv) -> Self { // let block_context = test_env.block_context; // let block_properties = test_env.block_properties; - +// // let oracle_tools = OracleTools::new(test_env.storage_ptr.as_mut(), HistoryEnabled); // VmTestHelper { // oracle_tools, @@ -186,7 +187,7 @@ // vm_created: false, // } // } - +// // /// Creates the VM that can be used in tests. // pub fn vm(&'a mut self) -> Box> { // assert!(!self.vm_created, "Vm can be created only once"); @@ -202,7 +203,7 @@ // vm // } // } - +// // fn run_vm_with_custom_factory_deps<'a, H: HistoryMode>( // oracle_tools: &'a mut OracleTools<'a, false, H>, // block_context: BlockContext, @@ -221,7 +222,7 @@ // &base_system_contracts, // TxExecutionMode::VerifyExecute, // ); - +// // vm.bootloader_state.add_tx_data(encoded_tx.len()); // vm.state.memory.populate_page( // BOOTLOADER_HEAP_PAGE as usize, @@ -238,23 +239,23 @@ // ), // Timestamp(0), // ); - +// // let result = vm.execute_next_tx(u32::MAX, false).err(); - +// // assert_eq!(expected_error, result); // } - +// // fn get_balance(token_id: AccountTreeId, account: &Address, main_storage: StoragePtr) -> U256 { // let key = storage_key_for_standard_token_balance(token_id, account); // h256_to_u256(main_storage.borrow_mut().read_value(&key)) // } - +// // fn get_eth_balance(account: &Address, main_storage: &mut StorageView) -> U256 { // let key = // storage_key_for_standard_token_balance(AccountTreeId::new(L2_ETH_TOKEN_ADDRESS), account); // h256_to_u256(main_storage.read_value(&key)) // } - +// // #[test] // fn test_dummy_bootloader() { // let mut vm_test_env = VmTestEnv::default(); @@ -262,12 +263,12 @@ // let mut base_system_contracts = BASE_SYSTEM_CONTRACTS.clone(); // let bootloader_code = read_bootloader_test_code("dummy"); // let bootloader_hash = hash_bytecode(&bootloader_code); - +// // base_system_contracts.bootloader = SystemContractCode { // code: bytes_to_be_words(bootloader_code), // hash: bootloader_hash, // }; - +// // let mut vm = init_vm_inner( // &mut oracle_tools, // BlockContextMode::NewBlock(vm_test_env.block_context, Default::default()), @@ -276,37 +277,37 @@ // &base_system_contracts, // TxExecutionMode::VerifyExecute, // ); - +// // let VmBlockResult { // full_result: res, .. 
// } = vm.execute_till_block_end(BootloaderJobType::BlockPostprocessing); - +// // // Dummy bootloader should not panic // assert!(res.revert_reason.is_none()); - +// // let correct_first_cell = U256::from_str_radix("123123123", 16).unwrap(); - +// // verify_required_memory( // &vm.state, // vec![(correct_first_cell, BOOTLOADER_HEAP_PAGE, 0)], // ); // } - +// // #[test] // fn test_bootloader_out_of_gas() { // let mut vm_test_env = VmTestEnv::default(); // let mut oracle_tools = OracleTools::new(vm_test_env.storage_ptr.as_mut(), HistoryEnabled); - +// // let mut base_system_contracts = BASE_SYSTEM_CONTRACTS.clone(); - +// // let bootloader_code = read_bootloader_test_code("dummy"); // let bootloader_hash = hash_bytecode(&bootloader_code); - +// // base_system_contracts.bootloader = SystemContractCode { // code: bytes_to_be_words(bootloader_code), // hash: bootloader_hash, // }; - +// // // init vm with only 10 ergs // let mut vm = init_vm_inner( // &mut oracle_tools, @@ -316,12 +317,12 @@ // &base_system_contracts, // TxExecutionMode::VerifyExecute, // ); - +// // let res = vm.execute_block_tip(); - +// // assert_eq!(res.revert_reason, Some(TxRevertReason::BootloaderOutOfGas)); // } - +// // fn verify_required_memory( // state: &ZkSyncVmState<'_, H>, // required_values: Vec<(U256, u32, u32)>, @@ -334,14 +335,14 @@ // assert_eq!(current_value, required_value); // } // } - +// // #[test] // fn test_default_aa_interaction() { // // In this test, we aim to test whether a simple account interaction (without any fee logic) // // will work. The account will try to deploy a simple contract from integration tests. - +// // let mut vm_test_env = VmTestEnv::default(); - +// // let operator_address = vm_test_env.block_context.context.operator_address; // let base_fee = vm_test_env.block_context.base_fee; // // We deploy here counter contract, because its logic is trivial @@ -362,27 +363,27 @@ // ) // .into(); // let tx_data: TransactionData = tx.clone().into(); - +// // let maximal_fee = tx_data.gas_limit * tx_data.max_fee_per_gas; // let sender_address = tx_data.from(); - +// // vm_test_env.set_rich_account(&sender_address); - +// // let mut vm_helper = VmTestHelper::new(&mut vm_test_env); // let mut vm = vm_helper.vm(); - +// // push_transaction_to_bootloader_memory(&mut vm, &tx, TxExecutionMode::VerifyExecute, None); - +// // let tx_execution_result = vm // .execute_next_tx(u32::MAX, false) // .expect("Bootloader failed while processing transaction"); - +// // assert_eq!( // tx_execution_result.status, // TxExecutionStatus::Success, // "Transaction wasn't successful" // ); - +// // let VmBlockResult { // full_result: res, .. // } = vm.execute_till_block_end(BootloaderJobType::TransactionExecution); @@ -392,28 +393,28 @@ // "Bootloader was not expected to revert: {:?}", // res.revert_reason // ); - +// // // Both deployment and ordinary nonce should be incremented by one. // let account_nonce_key = get_nonce_key(&sender_address); // let expected_nonce = TX_NONCE_INCREMENT + DEPLOYMENT_NONCE_INCREMENT; - +// // // The code hash of the deployed contract should be marked as republished. // let known_codes_key = get_known_code_key(&contract_code_hash); - +// // // The contract should be deployed successfully. 
// let deployed_address = deployed_address_create(sender_address, U256::zero()); // let account_code_key = get_code_key(&deployed_address); - +// // let expected_slots = vec![ // (u256_to_h256(expected_nonce), account_nonce_key), // (u256_to_h256(U256::from(1u32)), known_codes_key), // (contract_code_hash, account_code_key), // ]; - +// // verify_required_storage(&vm.state, expected_slots); - +// // assert!(!tx_has_failed(&vm.state, 0)); - +// // let expected_fee = // maximal_fee - U256::from(tx_execution_result.gas_refunded) * U256::from(base_fee); // let operator_balance = get_balance( @@ -421,13 +422,13 @@ // &operator_address, // vm.state.storage.storage.get_ptr(), // ); - +// // assert_eq!( // operator_balance, expected_fee, // "Operator did not receive his fee" // ); // } - +// // fn execute_vm_with_predetermined_refund( // txs: Vec, // refunds: Vec, @@ -435,15 +436,15 @@ // ) -> VmBlockResult { // let mut vm_test_env = VmTestEnv::default(); // let block_context = vm_test_env.block_context; - +// // for tx in txs.iter() { // let sender_address = tx.initiator_account(); // vm_test_env.set_rich_account(&sender_address); // } - +// // let mut vm_helper = VmTestHelper::new(&mut vm_test_env); // let mut vm = vm_helper.vm(); - +// // let codes_for_decommiter = txs // .iter() // .flat_map(|tx| { @@ -456,12 +457,12 @@ // .collect::)>>() // }) // .collect(); - +// // vm.state.decommittment_processor.populate( // codes_for_decommiter, // Timestamp(vm.state.local_state.timestamp), // ); - +// // let memory_with_suggested_refund = get_bootloader_memory( // txs.into_iter().map(Into::into).collect(), // refunds, @@ -469,24 +470,24 @@ // TxExecutionMode::VerifyExecute, // BlockContextMode::NewBlock(block_context, Default::default()), // ); - +// // vm.state.memory.populate_page( // BOOTLOADER_HEAP_PAGE as usize, // memory_with_suggested_refund, // Timestamp(0), // ); - +// // vm.execute_till_block_end(BootloaderJobType::TransactionExecution) // } - +// // #[test] // fn test_predetermined_refunded_gas() { // // In this test, we compare the execution of the bootloader with the predefined // // refunded gas and without them - +// // let mut vm_test_env = VmTestEnv::default(); // let base_fee = vm_test_env.block_context.base_fee; - +// // // We deploy here counter contract, because its logic is trivial // let contract_code = read_test_contract(); // let published_bytecode = CompressedBytecodeInfo::from_original(contract_code.clone()).unwrap(); @@ -504,27 +505,27 @@ // }, // ) // .into(); - +// // let sender_address = tx.initiator_account(); - +// // // set balance // vm_test_env.set_rich_account(&sender_address); - +// // let mut vm_helper = VmTestHelper::new(&mut vm_test_env); // let mut vm = vm_helper.vm(); - +// // push_transaction_to_bootloader_memory(&mut vm, &tx, TxExecutionMode::VerifyExecute, None); - +// // let tx_execution_result = vm // .execute_next_tx(u32::MAX, false) // .expect("Bootloader failed while processing transaction"); - +// // assert_eq!( // tx_execution_result.status, // TxExecutionStatus::Success, // "Transaction wasn't successful" // ); - +// // // If the refund provided by the operator or the final refund are the 0 // // there is no impact of the operator's refund at all and so this test does not // // make much sense. 
@@ -536,14 +537,14 @@ // tx_execution_result.gas_refunded > 0, // "The final refund is 0" // ); - +// // let mut result = vm.execute_till_block_end(BootloaderJobType::TransactionExecution); // assert!( // result.full_result.revert_reason.is_none(), // "Bootloader was not expected to revert: {:?}", // result.full_result.revert_reason // ); - +// // let mut result_with_predetermined_refund = execute_vm_with_predetermined_refund( // vec![tx], // vec![tx_execution_result.operator_suggested_refund], @@ -555,7 +556,7 @@ // .full_result // .used_contract_hashes // .sort(); - +// // assert_eq!( // result.full_result.events, // result_with_predetermined_refund.full_result.events @@ -577,18 +578,18 @@ // .used_contract_hashes // ); // } - +// // #[derive(Debug, Clone)] // enum TransactionRollbackTestInfo { // Rejected(Transaction, TxRevertReason), // Processed(Transaction, bool, TxExecutionStatus), // } - +// // impl TransactionRollbackTestInfo { // fn new_rejected(transaction: Transaction, revert_reason: TxRevertReason) -> Self { // Self::Rejected(transaction, revert_reason) // } - +// // fn new_processed( // transaction: Transaction, // should_be_rollbacked: bool, @@ -596,28 +597,28 @@ // ) -> Self { // Self::Processed(transaction, should_be_rollbacked, expected_status) // } - +// // fn get_transaction(&self) -> &Transaction { // match self { // TransactionRollbackTestInfo::Rejected(tx, _) => tx, // TransactionRollbackTestInfo::Processed(tx, _, _) => tx, // } // } - +// // fn rejection_reason(&self) -> Option { // match self { // TransactionRollbackTestInfo::Rejected(_, revert_reason) => Some(revert_reason.clone()), // TransactionRollbackTestInfo::Processed(_, _, _) => None, // } // } - +// // fn should_rollback(&self) -> bool { // match self { // TransactionRollbackTestInfo::Rejected(_, _) => true, // TransactionRollbackTestInfo::Processed(_, x, _) => *x, // } // } - +// // fn expected_status(&self) -> TxExecutionStatus { // match self { // TransactionRollbackTestInfo::Rejected(_, _) => { @@ -627,7 +628,7 @@ // } // } // } - +// // // Accepts the address of the sender as well as the list of pairs of its transactions // // and whether these transactions should succeed. // fn execute_vm_with_possible_rollbacks( @@ -641,13 +642,13 @@ // block_properties, // ..Default::default() // }; - +// // // Setting infinite balance for the sender. // vm_test_env.set_rich_account(&sender_address); - +// // let mut vm_helper = VmTestHelper::new(&mut vm_test_env); // let mut vm = vm_helper.vm(); - +// // for test_info in transactions { // vm.save_current_vm_as_snapshot(); // let vm_state_before_tx = vm.dump_inner_state(); @@ -657,7 +658,7 @@ // TxExecutionMode::VerifyExecute, // None, // ); - +// // match vm.execute_next_tx(u32::MAX, false) { // Err(reason) => { // assert_eq!(test_info.rejection_reason(), Some(reason)); @@ -671,11 +672,11 @@ // ); // } // }; - +// // if test_info.should_rollback() { // // Some error has occurred, we should reject the transaction // vm.rollback_to_latest_snapshot(); - +// // // vm_state_before_tx. // let state_after_rollback = vm.dump_inner_state(); // assert_eq!( @@ -684,7 +685,7 @@ // ); // } // } - +// // let VmBlockResult { // full_result: mut result, // .. @@ -692,10 +693,10 @@ // // Used contract hashes are retrieved in unordered manner. 
// // However it must be sorted for the comparisons in tests to work // result.used_contract_hashes.sort(); - +// // result // } - +// // // Sets the signature for an L2 transaction and returns the same transaction // // but this different signature. // fn change_signature(mut tx: Transaction, signature: Vec) -> Transaction { @@ -706,22 +707,22 @@ // } // _ => unreachable!(), // }; - +// // tx // } - +// // #[test] // fn test_vm_rollbacks() { // let (block_context, block_properties): (DerivedBlockContext, BlockProperties) = { // let (block_context, block_properties) = create_test_block_params(); // (block_context.into(), block_properties) // }; - +// // let base_fee = U256::from(block_context.base_fee); - +// // let sender_private_key = H256::random(); // let contract_code = read_test_contract(); - +// // let tx_nonce_0: Transaction = get_deploy_tx( // sender_private_key, // Nonce(0), @@ -764,13 +765,13 @@ // }, // ) // .into(); - +// // let wrong_signature_length_tx = change_signature(tx_nonce_0.clone(), vec![1u8; 32]); // let wrong_v_tx = change_signature(tx_nonce_0.clone(), vec![1u8; 65]); // let wrong_signature_tx = change_signature(tx_nonce_0.clone(), vec![27u8; 65]); - +// // let sender_address = tx_nonce_0.initiator_account(); - +// // let result_without_rollbacks = execute_vm_with_possible_rollbacks( // sender_address, // vec![ @@ -794,7 +795,7 @@ // block_context, // block_properties, // ); - +// // let incorrect_nonce = TxRevertReason::ValidationFailed(VmRevertReason::General { // msg: "Incorrect nonce".to_string(), // data: vec![ @@ -837,7 +838,7 @@ // msg: "Account validation returned invalid magic value. Most often this means that the signature is incorrect".to_string(), // data: vec![], // }); - +// // let result_with_rollbacks = execute_vm_with_possible_rollbacks( // sender_address, // vec![ @@ -882,11 +883,11 @@ // block_context, // block_properties, // ); - +// // assert_eq!(result_without_rollbacks, result_with_rollbacks); - +// // let loadnext_contract = get_loadnext_contract(); - +// // let loadnext_constructor_data = encode(&[Token::Uint(U256::from(100))]); // let loadnext_deploy_tx: Transaction = get_deploy_tx( // sender_private_key, @@ -909,7 +910,7 @@ // false, // TxExecutionStatus::Success, // ); - +// // let get_load_next_tx = |params: LoadnextContractExecutionParams, nonce: Nonce| { // // Here we test loadnext with various kinds of operations // let tx: Transaction = mock_loadnext_test_call( @@ -925,10 +926,10 @@ // params, // ) // .into(); - +// // tx // }; - +// // let loadnext_tx_0 = get_load_next_tx( // LoadnextContractExecutionParams { // reads: 100, @@ -951,7 +952,7 @@ // }, // Nonce(2), // ); - +// // let result_without_rollbacks = execute_vm_with_possible_rollbacks( // sender_address, // vec![ @@ -970,7 +971,7 @@ // block_context, // block_properties, // ); - +// // let result_with_rollbacks = execute_vm_with_possible_rollbacks( // sender_address, // vec![ @@ -1011,10 +1012,10 @@ // block_context, // block_properties, // ); - +// // assert_eq!(result_without_rollbacks, result_with_rollbacks); // } - +// // // Inserts the contracts into the test environment, bypassing the // // deployer system contract. 
Besides the reference to storage // // it accepts a `contracts` tuple of information about the contract @@ -1023,16 +1024,16 @@ // for (contract, is_account) in contracts { // let deployer_code_key = get_code_key(contract.account_id.address()); // raw_storage.set_value(deployer_code_key, hash_bytecode(&contract.bytecode)); - +// // if is_account { // let is_account_key = get_is_account_key(contract.account_id.address()); // raw_storage.set_value(is_account_key, u256_to_h256(1_u32.into())); // } - +// // raw_storage.store_factory_dep(hash_bytecode(&contract.bytecode), contract.bytecode); // } // } - +// // enum NonceHolderTestMode { // SetValueUnderNonce, // IncreaseMinNonceBy5, @@ -1041,7 +1042,7 @@ // IncreaseMinNonceBy1, // SwitchToArbitraryOrdering, // } - +// // impl From for u8 { // fn from(mode: NonceHolderTestMode) -> u8 { // match mode { @@ -1054,7 +1055,7 @@ // } // } // } - +// // fn get_nonce_holder_test_tx( // nonce: U256, // account_address: Address, @@ -1076,11 +1077,11 @@ // reserved: [U256::zero(); 4], // data: vec![12], // signature: vec![test_mode.into()], - +// // ..Default::default() // } // } - +// // fn run_vm_with_raw_tx<'a, H: HistoryMode>( // oracle_tools: &'a mut OracleTools<'a, false, H>, // block_context: DerivedBlockContext, @@ -1097,9 +1098,9 @@ // &base_system_contracts, // TxExecutionMode::VerifyExecute, // ); - +// // let block_gas_price_per_pubdata = block_context.context.block_gas_price_per_pubdata(); - +// // let overhead = tx.overhead_gas(block_gas_price_per_pubdata as u32); // push_raw_transaction_to_bootloader_memory( // &mut vm, @@ -1112,18 +1113,18 @@ // full_result: result, // .. // } = vm.execute_till_block_end(BootloaderJobType::TransactionExecution); - +// // (result, tx_has_failed(&vm.state, 0)) // } - +// // #[test] // fn test_nonce_holder() { // let account_address = H160::random(); // let mut vm_test_env = // VmTestEnv::new_with_contracts(&[(account_address, read_nonce_holder_tester())]); - +// // vm_test_env.set_rich_account(&account_address); - +// // let mut run_nonce_test = |nonce: U256, // test_mode: NonceHolderTestMode, // error_message: Option, @@ -1134,7 +1135,7 @@ // test_mode, // &vm_test_env.block_context, // ); - +// // let (result, tx_has_failed) = vm_test_env.run_vm(tx); // if let Some(msg) = error_message { // let expected_error = @@ -1153,7 +1154,7 @@ // assert!(!tx_has_failed, "{}", comment); // } // }; - +// // // Test 1: trying to set value under non sequential nonce value. 
// run_nonce_test( // 1u32.into(), @@ -1161,7 +1162,7 @@ // Some("Previous nonce has not been used".to_string()), // "Allowed to set value under non sequential value", // ); - +// // // Test 2: increase min nonce by 1 with sequential nonce ordering: // run_nonce_test( // 0u32.into(), @@ -1169,7 +1170,7 @@ // None, // "Failed to increment nonce by 1 for sequential account", // ); - +// // // Test 3: correctly set value under nonce with sequential nonce ordering: // run_nonce_test( // 1u32.into(), @@ -1177,7 +1178,7 @@ // None, // "Failed to set value under nonce sequential value", // ); - +// // // Test 5: migrate to the arbitrary nonce ordering: // run_nonce_test( // 2u32.into(), @@ -1185,7 +1186,7 @@ // None, // "Failed to switch to arbitrary ordering", // ); - +// // // Test 6: increase min nonce by 5 // run_nonce_test( // 6u32.into(), @@ -1193,7 +1194,7 @@ // None, // "Failed to increase min nonce by 5", // ); - +// // // Test 7: since the nonces in range [6,10] are no longer allowed, the // // tx with nonce 10 should not be allowed // run_nonce_test( @@ -1202,7 +1203,7 @@ // Some("Reusing the same nonce twice".to_string()), // "Allowed to reuse nonce below the minimal one", // ); - +// // // Test 8: we should be able to use nonce 13 // run_nonce_test( // 13u32.into(), @@ -1210,7 +1211,7 @@ // None, // "Did not allow to use unused nonce 10", // ); - +// // // Test 9: we should not be able to reuse nonce 13 // run_nonce_test( // 13u32.into(), @@ -1218,7 +1219,7 @@ // Some("Reusing the same nonce twice".to_string()), // "Allowed to reuse the same nonce twice", // ); - +// // // Test 10: we should be able to simply use nonce 14, while bumping the minimal nonce by 5 // run_nonce_test( // 14u32.into(), @@ -1226,7 +1227,7 @@ // None, // "Did not allow to use a bumped nonce", // ); - +// // // Test 6: Do not allow bumping nonce by too much // run_nonce_test( // 16u32.into(), @@ -1234,7 +1235,7 @@ // Some("The value for incrementing the nonce is too high".to_string()), // "Allowed for incrementing min nonce too much", // ); - +// // // Test 7: Do not allow not setting a nonce as used // run_nonce_test( // 16u32.into(), @@ -1243,7 +1244,7 @@ // "Allowed to leave nonce as unused", // ); // } - +// // #[test] // fn test_l1_tx_execution() { // // In this test, we try to execute a contract deployment from L1 @@ -1255,7 +1256,7 @@ // let contract_code_hash = hash_bytecode(&contract_code); // let l1_deploy_tx = get_l1_deploy_tx(&contract_code, &[]); // let l1_deploy_tx_data: TransactionData = l1_deploy_tx.clone().into(); - +// // let required_l2_to_l1_logs = vec![ // L2ToL1Log { // shard_id: 0, @@ -1274,9 +1275,9 @@ // value: u256_to_h256(U256::from(1u32)), // }, // ]; - +// // let sender_address = l1_deploy_tx_data.from(); - +// // vm_helper.oracle_tools.decommittment_processor.populate( // vec![( // h256_to_u256(contract_code_hash), @@ -1284,38 +1285,38 @@ // )], // Timestamp(0), // ); - +// // let mut vm = vm_helper.vm(); - +// // push_transaction_to_bootloader_memory( // &mut vm, // &l1_deploy_tx, // TxExecutionMode::VerifyExecute, // None, // ); - +// // let res = vm.execute_next_tx(u32::MAX, false).unwrap(); - +// // // The code hash of the deployed contract should be marked as republished. // let known_codes_key = get_known_code_key(&contract_code_hash); - +// // // The contract should be deployed successfully. 
// let deployed_address = deployed_address_create(sender_address, U256::zero()); // let account_code_key = get_code_key(&deployed_address); - +// // let expected_slots = vec![ // (u256_to_h256(U256::from(1u32)), known_codes_key), // (contract_code_hash, account_code_key), // ]; // assert!(!tx_has_failed(&vm.state, 0)); - +// // verify_required_storage(&vm.state, expected_slots); - +// // assert_eq!(res.result.logs.l2_to_l1_logs, required_l2_to_l1_logs); - +// // let tx = get_l1_execute_test_contract_tx(deployed_address, true); // push_transaction_to_bootloader_memory(&mut vm, &tx, TxExecutionMode::VerifyExecute, None); - +// // let res = StorageWritesDeduplicator::apply_on_empty_state( // &vm.execute_next_tx(u32::MAX, false) // .unwrap() @@ -1324,7 +1325,7 @@ // .storage_logs, // ); // assert_eq!(res.initial_storage_writes, 0); - +// // let tx = get_l1_execute_test_contract_tx(deployed_address, false); // push_transaction_to_bootloader_memory(&mut vm, &tx, TxExecutionMode::VerifyExecute, None); // let res = StorageWritesDeduplicator::apply_on_empty_state( @@ -1335,9 +1336,9 @@ // .storage_logs, // ); // assert_eq!(res.initial_storage_writes, 2); - +// // let repeated_writes = res.repeated_storage_writes; - +// // push_transaction_to_bootloader_memory(&mut vm, &tx, TxExecutionMode::VerifyExecute, None); // let res = StorageWritesDeduplicator::apply_on_empty_state( // &vm.execute_next_tx(u32::MAX, false) @@ -1349,7 +1350,7 @@ // assert_eq!(res.initial_storage_writes, 1); // // We do the same storage write, so it will be deduplicated // assert_eq!(res.repeated_storage_writes, repeated_writes); - +// // let mut tx = get_l1_execute_test_contract_tx(deployed_address, false); // tx.execute.value = U256::from(1); // match &mut tx.common_data { @@ -1366,35 +1367,35 @@ // TxExecutionStatus::Failure, // "The transaction should fail" // ); - +// // let res = // StorageWritesDeduplicator::apply_on_empty_state(&execution_result.result.logs.storage_logs); - +// // // There are 2 initial writes here: // // - totalSupply of ETH token // // - balance of the refund recipient // assert_eq!(res.initial_storage_writes, 2); // } - +// // #[test] // fn test_invalid_bytecode() { // let mut vm_test_env = VmTestEnv::default(); - +// // let block_gas_per_pubdata = vm_test_env // .block_context // .context // .block_gas_price_per_pubdata(); - +// // let mut test_vm_with_custom_bytecode_hash = // |bytecode_hash: H256, expected_revert_reason: Option| { // let mut oracle_tools = // OracleTools::new(vm_test_env.storage_ptr.as_mut(), HistoryEnabled); - +// // let (encoded_tx, predefined_overhead) = get_l1_tx_with_custom_bytecode_hash( // h256_to_u256(bytecode_hash), // block_gas_per_pubdata as u32, // ); - +// // run_vm_with_custom_factory_deps( // &mut oracle_tools, // vm_test_env.block_context.context, @@ -1404,14 +1405,14 @@ // expected_revert_reason, // ); // }; - +// // let failed_to_mark_factory_deps = |msg: &str, data: Vec| { // TxRevertReason::FailedToMarkFactoryDependencies(VmRevertReason::General { // msg: msg.to_string(), // data, // }) // }; - +// // // Here we provide the correctly-formatted bytecode hash of // // odd length, so it should work. // test_vm_with_custom_bytecode_hash( @@ -1421,7 +1422,7 @@ // ]), // None, // ); - +// // // Here we provide correctly formatted bytecode of even length, so // // it should fail. // test_vm_with_custom_bytecode_hash( @@ -1440,7 +1441,7 @@ // ], // )), // ); - +// // // Here we provide incorrectly formatted bytecode of odd length, so // // it should fail. 
// test_vm_with_custom_bytecode_hash( @@ -1460,7 +1461,7 @@ // ], // )), // ); - +// // // Here we provide incorrectly formatted bytecode of odd length, so // // it should fail. // test_vm_with_custom_bytecode_hash( @@ -1481,17 +1482,17 @@ // )), // ); // } - +// // #[test] // fn test_tracing_of_execution_errors() { // // In this test, we are checking that the execution errors are transmitted correctly from the bootloader. // let contract_address = Address::random(); - +// // let mut vm_test_env = // VmTestEnv::new_with_contracts(&[(contract_address, read_error_contract())]); - +// // let private_key = H256::random(); - +// // let tx = get_error_tx( // private_key, // Nonce(0), @@ -1503,25 +1504,25 @@ // gas_per_pubdata_limit: U256::from(MAX_GAS_PER_PUBDATA_BYTE), // }, // ); - +// // vm_test_env.set_rich_account(&tx.common_data.initiator_address); // let mut vm_helper = VmTestHelper::new(&mut vm_test_env); // let mut vm = vm_helper.vm(); - +// // push_transaction_to_bootloader_memory( // &mut vm, // &tx.into(), // TxExecutionMode::VerifyExecute, // None, // ); - +// // let mut tracer = TransactionResultTracer::new(usize::MAX, false); // assert_eq!( // vm.execute_with_custom_tracer(&mut tracer), // VmExecutionStopReason::VmFinished, // "Tracer should never request stop" // ); - +// // match tracer.revert_reason { // Some(revert_reason) => { // let revert_reason = VmRevertReason::try_from(&revert_reason as &[u8]).unwrap(); @@ -1544,7 +1545,7 @@ // tracer.revert_reason // ), // } - +// // let mut vm_helper = VmTestHelper::new(&mut vm_test_env); // let mut vm = vm_helper.vm(); // let tx = get_error_tx( @@ -1564,7 +1565,7 @@ // TxExecutionMode::VerifyExecute, // None, // ); - +// // let mut tracer = TransactionResultTracer::new(10, false); // assert_eq!( // vm.execute_with_custom_tracer(&mut tracer), @@ -1572,13 +1573,13 @@ // ); // assert!(tracer.is_limit_reached()); // } - +// // /// Checks that `TX_GAS_LIMIT_OFFSET` constant is correct. // #[test] // fn test_tx_gas_limit_offset() { // let gas_limit = U256::from(999999); // let mut vm_test_env = VmTestEnv::default(); - +// // let contract_code = read_test_contract(); // let tx: Transaction = get_deploy_tx( // H256::random(), @@ -1592,11 +1593,11 @@ // }, // ) // .into(); - +// // let mut vm_helper = VmTestHelper::new(&mut vm_test_env); // let mut vm = vm_helper.vm(); // push_transaction_to_bootloader_memory(&mut vm, &tx, TxExecutionMode::VerifyExecute, None); - +// // let gas_limit_from_memory = vm // .state // .memory @@ -1607,12 +1608,12 @@ // .value; // assert_eq!(gas_limit_from_memory, gas_limit); // } - +// // #[test] // fn test_is_write_initial_behaviour() { // // In this test, we check result of `is_write_initial` at different stages. // let mut vm_test_env = VmTestEnv::default(); - +// // let base_fee = vm_test_env.block_context.base_fee; // let account_pk = H256::random(); // let contract_code = read_test_contract(); @@ -1630,27 +1631,27 @@ // }, // ) // .into(); - +// // let sender_address = tx.initiator_account(); // let nonce_key = get_nonce_key(&sender_address); - +// // // Check that the next write to the nonce key will be initial. // assert!(vm_test_env.storage_ptr.is_write_initial(&nonce_key)); - +// // // Set balance to be able to pay fee for txs. 
// vm_test_env.set_rich_account(&sender_address); - +// // let mut vm_helper = VmTestHelper::new(&mut vm_test_env); // let mut vm = vm_helper.vm(); - +// // push_transaction_to_bootloader_memory(&mut vm, &tx, TxExecutionMode::VerifyExecute, None); - +// // vm.execute_next_tx(u32::MAX, false) // .expect("Bootloader failed while processing the first transaction"); // // Check that `is_write_initial` still returns true for the nonce key. // assert!(vm_test_env.storage_ptr.is_write_initial(&nonce_key)); // } - +// // pub fn get_l1_tx_with_custom_bytecode_hash( // bytecode_hash: U256, // block_gas_per_pubdata: u32, @@ -1659,10 +1660,10 @@ // let predefined_overhead = // tx.overhead_gas_with_custom_factory_deps(vec![bytecode_hash], block_gas_per_pubdata); // let tx_bytes = tx.abi_encode_with_custom_factory_deps(vec![bytecode_hash]); - +// // (bytes_to_be_words(tx_bytes), predefined_overhead) // } - +// // pub fn get_l1_execute_test_contract_tx(deployed_address: Address, with_panic: bool) -> Transaction { // let sender = H160::random(); // get_l1_execute_test_contract_tx_with_sender( @@ -1673,18 +1674,18 @@ // false, // ) // } - +// // pub fn get_l1_tx_with_large_output(sender: Address, deployed_address: Address) -> Transaction { // let test_contract = load_contract( // "etc/contracts-test-data/artifacts-zk/contracts/long-return-data/long-return-data.sol/LongReturnData.json", // ); - +// // let function = test_contract.function("longReturnData").unwrap(); - +// // let calldata = function // .encode_input(&[]) // .expect("failed to encode parameters"); - +// // Transaction { // common_data: ExecuteTransactionCommon::L1(L1TxCommonData { // sender, @@ -1701,23 +1702,23 @@ // received_timestamp_ms: 0, // } // } - +// // #[test] // fn test_call_tracer() { // let mut vm_test_env = VmTestEnv::default(); - +// // let sender = H160::random(); - +// // let contract_code = read_test_contract(); // let contract_code_hash = hash_bytecode(&contract_code); // let l1_deploy_tx = get_l1_deploy_tx(&contract_code, &[]); // let l1_deploy_tx_data: TransactionData = l1_deploy_tx.clone().into(); - +// // let sender_address_counter = l1_deploy_tx_data.from(); - +// // vm_test_env.set_rich_account(&sender_address_counter); // let mut vm_helper = VmTestHelper::new(&mut vm_test_env); - +// // vm_helper.oracle_tools.decommittment_processor.populate( // vec![( // h256_to_u256(contract_code_hash), @@ -1725,7 +1726,7 @@ // )], // Timestamp(0), // ); - +// // let contract_code = read_long_return_data_contract(); // let contract_code_hash = hash_bytecode(&contract_code); // let l1_deploy_long_return_data_tx = get_l1_deploy_tx(&contract_code, &[]); @@ -1736,21 +1737,21 @@ // )], // Timestamp(0), // ); - +// // let tx_data: TransactionData = l1_deploy_long_return_data_tx.clone().into(); // let sender_long_return_address = tx_data.from(); // // The contract should be deployed successfully. // let deployed_address_long_return_data = // deployed_address_create(sender_long_return_address, U256::zero()); // let mut vm = vm_helper.vm(); - +// // push_transaction_to_bootloader_memory( // &mut vm, // &l1_deploy_tx, // TxExecutionMode::VerifyExecute, // None, // ); - +// // // The contract should be deployed successfully. 
// let deployed_address = deployed_address_create(sender_address_counter, U256::zero()); // let res = vm.execute_next_tx(u32::MAX, true).unwrap(); @@ -1791,16 +1792,16 @@ // calls: vec![], // }; // assert_eq!(create_call.unwrap(), expected); - +// // push_transaction_to_bootloader_memory( // &mut vm, // &l1_deploy_long_return_data_tx, // TxExecutionMode::VerifyExecute, // None, // ); - +// // vm.execute_next_tx(u32::MAX, false).unwrap(); - +// // let tx = get_l1_execute_test_contract_tx_with_sender( // sender, // deployed_address, @@ -1808,13 +1809,13 @@ // U256::from(1u8), // true, // ); - +// // let tx_data: TransactionData = tx.clone().into(); // push_transaction_to_bootloader_memory(&mut vm, &tx, TxExecutionMode::VerifyExecute, None); - +// // let res = vm.execute_next_tx(u32::MAX, true).unwrap(); // let calls = res.call_traces; - +// // // We don't want to compare gas used, because it's not fully deterministic. // let expected = Call { // r#type: CallType::Call(FarCallOpcode::Mimic), @@ -1833,7 +1834,7 @@ // revert_reason: None, // calls: vec![], // }; - +// // // First loop filter out the bootloaders calls and // // the second loop filters out the calls msg value simulator calls // for call in calls { @@ -1845,7 +1846,7 @@ // } // } // } - +// // let tx = get_l1_execute_test_contract_tx_with_sender( // sender, // deployed_address, @@ -1853,13 +1854,13 @@ // U256::from(1u8), // true, // ); - +// // let tx_data: TransactionData = tx.clone().into(); // push_transaction_to_bootloader_memory(&mut vm, &tx, TxExecutionMode::VerifyExecute, None); - +// // let res = vm.execute_next_tx(u32::MAX, true).unwrap(); // let calls = res.call_traces; - +// // let expected = Call { // r#type: CallType::Call(FarCallOpcode::Mimic), // to: deployed_address, @@ -1874,7 +1875,7 @@ // revert_reason: Some("This method always reverts".to_string()), // calls: vec![], // }; - +// // for call in calls { // if let CallType::Call(FarCallOpcode::Mimic) = call.r#type { // for call in call.calls { @@ -1884,12 +1885,12 @@ // } // } // } - +// // let tx = get_l1_tx_with_large_output(sender, deployed_address_long_return_data); - +// // let tx_data: TransactionData = tx.clone().into(); // push_transaction_to_bootloader_memory(&mut vm, &tx, TxExecutionMode::VerifyExecute, None); - +// // assert_ne!(deployed_address_long_return_data, deployed_address); // let res = vm.execute_next_tx(u32::MAX, true).unwrap(); // let calls = res.call_traces; @@ -1907,30 +1908,30 @@ // } // } // } - +// // #[test] // fn test_get_used_contracts() { // let mut vm_test_env = VmTestEnv::default(); - +// // let mut vm_helper = VmTestHelper::new(&mut vm_test_env); // let mut vm = vm_helper.vm(); - +// // assert!(known_bytecodes_without_aa_code(&vm).is_empty()); - +// // // create and push and execute some not-empty factory deps transaction with success status // // to check that get_used_contracts() updates // let contract_code = read_test_contract(); // let contract_code_hash = hash_bytecode(&contract_code); // let tx1 = get_l1_deploy_tx(&contract_code, &[]); - +// // push_transaction_to_bootloader_memory(&mut vm, &tx1, TxExecutionMode::VerifyExecute, None); - +// // let res1 = vm.execute_next_tx(u32::MAX, true).unwrap(); // assert_eq!(res1.status, TxExecutionStatus::Success); // assert!(vm // .get_used_contracts() // .contains(&h256_to_u256(contract_code_hash))); - +// // assert_eq!( // vm.get_used_contracts() // .into_iter() @@ -1940,13 +1941,13 @@ // .cloned() // .collect::>() // ); - +// // // create push and execute some non-empty factory 
deps transaction that fails // // (known_bytecodes will be updated but we expect get_used_contracts() to not be updated) - +// // let mut tx2 = tx1; // tx2.execute.contract_address = L1_MESSENGER_ADDRESS; - +// // let calldata = vec![1, 2, 3]; // let big_calldata: Vec = calldata // .iter() @@ -1954,16 +1955,16 @@ // .take(calldata.len() * 1024) // .cloned() // .collect(); - +// // tx2.execute.calldata = big_calldata; // tx2.execute.factory_deps = Some(vec![vec![1; 32]]); - +// // push_transaction_to_bootloader_memory(&mut vm, &tx2, TxExecutionMode::VerifyExecute, None); - +// // let res2 = vm.execute_next_tx(u32::MAX, false).unwrap(); - +// // assert_eq!(res2.status, TxExecutionStatus::Failure); - +// // for factory_dep in tx2.execute.factory_deps.unwrap() { // let hash = hash_bytecode(&factory_dep); // let hash_to_u256 = h256_to_u256(hash); @@ -1973,7 +1974,7 @@ // assert!(!vm.get_used_contracts().contains(&hash_to_u256)); // } // } - +// // fn known_bytecodes_without_aa_code(vm: &VmInstance) -> HashMap> { // let mut known_bytecodes_without_aa_code = vm // .state @@ -1981,14 +1982,14 @@ // .known_bytecodes // .inner() // .clone(); - +// // known_bytecodes_without_aa_code // .remove(&h256_to_u256(BASE_SYSTEM_CONTRACTS.default_aa.hash)) // .unwrap(); - +// // known_bytecodes_without_aa_code // } - +// // #[tokio::test] // /// This test deploys 'buggy' account abstraction code, and then tries accessing it both with legacy // /// and EIP712 transactions. @@ -1999,31 +2000,31 @@ // // - account_address - AA account, where the contract is deployed // // - beneficiary - an EOA account, where we'll try to transfer the tokens. // let account_address = H160::random(); - +// // let (bytecode, contract) = read_many_owners_custom_account_contract(); - +// // let mut vm_test_env = VmTestEnv::new_with_contracts(&[(account_address, bytecode)]); - +// // let beneficiary = H160::random(); - +// // assert_eq!(vm_test_env.get_eth_balance(&beneficiary), U256::from(0)); - +// // let private_key = H256::random(); // let private_address = PackedEthSignature::address_from_private_key(&private_key).unwrap(); // let pk_signer = PrivateKeySigner::new(private_key); - +// // vm_test_env.set_rich_account(&account_address); // vm_test_env.set_rich_account(&private_address); - +// // let chain_id: u16 = 270; - +// // // First, let's set the owners of the AA account to the private_address. // // (so that messages signed by private_address, are authorized to act on behalf of the AA account). // { // let set_owners_function = contract.function("setOwners").unwrap(); // let encoded_input = set_owners_function // .encode_input(&[Token::Array(vec![Token::Address(private_address)])]); - +// // // Create a legacy transaction to set the owners. 
// let raw_tx = TransactionParameters { // nonce: U256::from(0), @@ -2039,19 +2040,19 @@ // max_priority_fee_per_gas: U256::from(1000000000), // }; // let txn = pk_signer.sign_transaction(raw_tx).await.unwrap(); - +// // let (txn_request, hash) = TransactionRequest::from_bytes(&txn, chain_id).unwrap(); - +// // let mut l2_tx: L2Tx = L2Tx::from_request(txn_request, 100000).unwrap(); // l2_tx.set_input(txn, hash); // let transaction: Transaction = l2_tx.try_into().unwrap(); // let transaction_data: TransactionData = transaction.try_into().unwrap(); - +// // vm_test_env.run_vm_or_die(transaction_data); // } - +// // let private_account_balance = vm_test_env.get_eth_balance(&private_address); - +// // // And now let's do the transfer from the 'account abstraction' to 'beneficiary' (using 'legacy' transaction). // // Normally this would not work - unless the operator is malicious. // { @@ -2068,32 +2069,32 @@ // max_fee_per_gas: U256::from(1000000000), // max_priority_fee_per_gas: U256::from(1000000000), // }; - +// // let aa_txn = pk_signer.sign_transaction(aa_raw_tx).await.unwrap(); - +// // let (aa_txn_request, aa_hash) = TransactionRequest::from_bytes(&aa_txn, 270).unwrap(); - +// // let mut l2_tx: L2Tx = L2Tx::from_request(aa_txn_request, 100000).unwrap(); // l2_tx.set_input(aa_txn, aa_hash); // // Pretend that operator is malicious and sets the initiator to the AA account. // l2_tx.common_data.initiator_address = account_address; - +// // let transaction: Transaction = l2_tx.try_into().unwrap(); - +// // let transaction_data: TransactionData = transaction.try_into().unwrap(); - +// // vm_test_env.run_vm_or_die(transaction_data); // assert_eq!( // vm_test_env.get_eth_balance(&beneficiary), // U256::from(888000088) // ); -// // Make sure that the tokens were transfered from the AA account. +// // Make sure that the tokens were transferred from the AA account. 
// assert_eq!( // private_account_balance, // vm_test_env.get_eth_balance(&private_address) // ) // } - +// // // Now send the 'classic' EIP712 transaction // { // let tx_712 = L2Tx::new( @@ -2111,27 +2112,27 @@ // None, // Default::default(), // ); - +// // let transaction_request: TransactionRequest = tx_712.into(); - +// // let domain = Eip712Domain::new(L2ChainId(chain_id)); // let signature = pk_signer // .sign_typed_data(&domain, &transaction_request) // .await // .unwrap(); // let encoded_tx = transaction_request.get_signed_bytes(&signature, L2ChainId(chain_id)); - +// // let (aa_txn_request, aa_hash) = // TransactionRequest::from_bytes(&encoded_tx, chain_id).unwrap(); - +// // let mut l2_tx: L2Tx = L2Tx::from_request(aa_txn_request, 100000).unwrap(); // l2_tx.set_input(encoded_tx, aa_hash); - +// // let transaction: Transaction = l2_tx.try_into().unwrap(); // let transaction_data: TransactionData = transaction.try_into().unwrap(); - +// // vm_test_env.run_vm_or_die(transaction_data); - +// // assert_eq!( // vm_test_env.get_eth_balance(&beneficiary), // U256::from(916375026) @@ -2142,3 +2143,4 @@ // ); // } // } +// ``` \ No newline at end of file diff --git a/core/lib/multivm/src/versions/vm_1_3_2/tests/mod.rs b/core/lib/multivm/src/versions/vm_1_3_2/tests/mod.rs index af4748e3864..04448987b1c 100644 --- a/core/lib/multivm/src/versions/vm_1_3_2/tests/mod.rs +++ b/core/lib/multivm/src/versions/vm_1_3_2/tests/mod.rs @@ -1,4 +1,6 @@ +// ``` // mod bootloader; // mod upgrades; // mod utils; +// ``` diff --git a/core/lib/multivm/src/versions/vm_1_3_2/tests/upgrades.rs b/core/lib/multivm/src/versions/vm_1_3_2/tests/upgrades.rs index 47e9ad5d4eb..cd3857d46da 100644 --- a/core/lib/multivm/src/versions/vm_1_3_2/tests/upgrades.rs +++ b/core/lib/multivm/src/versions/vm_1_3_2/tests/upgrades.rs @@ -1,3 +1,4 @@ +// ``` // use crate::{ // test_utils::verify_required_storage, // tests::utils::get_l1_deploy_tx, @@ -7,9 +8,9 @@ // vm_with_bootloader::{BlockContextMode, TxExecutionMode}, // HistoryEnabled, OracleTools, TxRevertReason, // }; - +// // use zk_evm_1_3_3::aux_structures::Timestamp; - +// // use zksync_types::{ // ethabi::Contract, // tx::tx_execution_info::TxExecutionStatus, @@ -18,17 +19,17 @@ // {ethabi::Token, Address, ExecuteTransactionCommon, Transaction, H256, U256}, // {get_code_key, get_known_code_key, H160}, // }; - +// // use zksync_utils::{bytecode::hash_bytecode, bytes_to_be_words, h256_to_u256, u256_to_h256}; - +// // use zksync_contracts::{deployer_contract, load_contract, load_sys_contract, read_bytecode}; // use zksync_state::WriteStorage; - +// // use crate::tests::utils::create_storage_view; // use zksync_types::protocol_version::ProtocolUpgradeTxCommonData; - +// // use super::utils::read_test_contract; - +// // /// In this test we ensure that the requirements for protocol upgrade transactions are enforced by the bootloader: // /// - This transaction must be the only one in block // /// - If present, this transaction must be the first one in block @@ -37,9 +38,9 @@ // let mut storage_view = create_storage_view(); // let mut oracle_tools = OracleTools::new(&mut storage_view, HistoryEnabled); // let (block_context, block_properties) = create_test_block_params(); - +// // let bytecode_hash = hash_bytecode(&read_test_contract()); - +// // // Here we just use some random transaction of protocol upgrade type: // let protocol_upgrade_transaction = get_forced_deploy_tx(&[ForceDeployment { // // The bytecode hash to put on an address @@ -53,9 +54,9 @@ // // The constructor 
calldata // input: vec![], // }]); - +// // let normal_l1_transaction = get_l1_deploy_tx(&read_test_contract(), &[]); - +// // let mut vm = init_vm_inner( // &mut oracle_tools, // BlockContextMode::NewBlock(block_context.into(), Default::default()), @@ -64,14 +65,14 @@ // &BASE_SYSTEM_CONTRACTS, // TxExecutionMode::VerifyExecute, // ); - +// // let expected_error = TxRevertReason::UnexpectedVMBehavior( // "Assertion error: Protocol upgrade tx not first".to_string(), // ); - +// // // Test 1: there must be only one system transaction in block // vm.save_current_vm_as_snapshot(); - +// // push_transaction_to_bootloader_memory( // &mut vm, // &protocol_upgrade_transaction, @@ -90,15 +91,15 @@ // TxExecutionMode::VerifyExecute, // None, // ); - +// // vm.execute_next_tx(u32::MAX, false).unwrap(); // vm.execute_next_tx(u32::MAX, false).unwrap(); // let res = vm.execute_next_tx(u32::MAX, false); // assert_eq!(res, Err(expected_error.clone())); - +// // // Test 2: the protocol upgrade tx must be the first one in block // vm.rollback_to_latest_snapshot(); - +// // push_transaction_to_bootloader_memory( // &mut vm, // &normal_l1_transaction, @@ -111,26 +112,26 @@ // TxExecutionMode::VerifyExecute, // None, // ); - +// // vm.execute_next_tx(u32::MAX, false).unwrap(); // let res = vm.execute_next_tx(u32::MAX, false); // assert_eq!(res, Err(expected_error)); // } - +// // /// In this test we try to test how force deployments could be done via protocol upgrade transactions. // #[test] // fn test_force_deploy_upgrade() { // let mut storage_view = create_storage_view(); - +// // let bytecode_hash = hash_bytecode(&read_test_contract()); - +// // let known_code_key = get_known_code_key(&bytecode_hash); // // It is generally expected that all the keys will be set as known prior to the protocol upgrade. 
// storage_view.set_value(known_code_key, u256_to_h256(1.into())); - +// // let mut oracle_tools = OracleTools::new(&mut storage_view, HistoryEnabled); // let (block_context, block_properties) = create_test_block_params(); - +// // let address_to_deploy = H160::random(); // // Here we just use some random transaction of protocol upgrade type: // let transaction = get_forced_deploy_tx(&[ForceDeployment { @@ -145,7 +146,7 @@ // // The constructor calldata // input: vec![], // }]); - +// // let mut vm = init_vm_inner( // &mut oracle_tools, // BlockContextMode::NewBlock(block_context.into(), Default::default()), @@ -167,33 +168,33 @@ // "The force upgrade was not successful" // ); // assert!(!tx_has_failed(&vm.state, 0)); - +// // let expected_slots = vec![(bytecode_hash, get_code_key(&address_to_deploy))]; - +// // // Verify that the bytecode has been set correctly // verify_required_storage(&vm.state, expected_slots); // } - +// // /// Here we show how the work with the complex upgrader could be done // #[test] // fn test_complex_upgrader() { // let mut storage_view = create_storage_view(); - +// // let bytecode_hash = hash_bytecode(&read_complex_upgrade()); // let msg_sender_test_hash = hash_bytecode(&read_msg_sender_test()); - +// // // Let's assume that the bytecode for the implementation of the complex upgrade // // is already deployed in some address in userspace // let upgrade_impl = H160::random(); // let account_code_key = get_code_key(&upgrade_impl); - +// // storage_view.set_value(get_known_code_key(&bytecode_hash), u256_to_h256(1.into())); // storage_view.set_value( // get_known_code_key(&msg_sender_test_hash), // u256_to_h256(1.into()), // ); // storage_view.set_value(account_code_key, bytecode_hash); - +// // let mut oracle_tools: OracleTools = // OracleTools::new(&mut storage_view, HistoryEnabled); // oracle_tools.decommittment_processor.populate( @@ -209,19 +210,19 @@ // ], // Timestamp(0), // ); - +// // let (block_context, block_properties) = create_test_block_params(); - +// // let address_to_deploy1 = H160::random(); // let address_to_deploy2 = H160::random(); - +// // let transaction = get_complex_upgrade_tx( // upgrade_impl, // address_to_deploy1, // address_to_deploy2, // bytecode_hash, // ); - +// // let mut vm = init_vm_inner( // &mut oracle_tools, // BlockContextMode::NewBlock(block_context.into(), Default::default()), @@ -243,16 +244,16 @@ // "The force upgrade was not successful" // ); // assert!(!tx_has_failed(&vm.state, 0)); - +// // let expected_slots = vec![ // (bytecode_hash, get_code_key(&address_to_deploy1)), // (bytecode_hash, get_code_key(&address_to_deploy2)), // ]; - +// // // Verify that the bytecode has been set correctly // verify_required_storage(&vm.state, expected_slots); // } - +// // #[derive(Debug, Clone)] // struct ForceDeployment { // // The bytecode hash to put on an address @@ -266,11 +267,11 @@ // // The constructor calldata // input: Vec, // } - +// // fn get_forced_deploy_tx(deployment: &[ForceDeployment]) -> Transaction { // let deployer = deployer_contract(); // let contract_function = deployer.function("forceDeployOnAddresses").unwrap(); - +// // let encoded_deployments: Vec<_> = deployment // .iter() // .map(|deployment| { @@ -283,20 +284,20 @@ // ]) // }) // .collect(); - +// // let params = [Token::Array(encoded_deployments)]; - +// // let calldata = contract_function // .encode_input(¶ms) // .expect("failed to encode parameters"); - +// // let execute = Execute { // contract_address: CONTRACT_DEPLOYER_ADDRESS, // calldata, // 
factory_deps: None, // value: U256::zero(), // }; - +// // Transaction { // common_data: ExecuteTransactionCommon::ProtocolUpgrade(ProtocolUpgradeTxCommonData { // sender: CONTRACT_FORCE_DEPLOYER_ADDRESS, @@ -308,7 +309,7 @@ // received_timestamp_ms: 0, // } // } - +// // // Returns the transaction that performs a complex protocol upgrade. // // The first param is the address of the implementation of the complex upgrade // // in user-space, while the next 3 params are params of the implementation itself @@ -329,7 +330,7 @@ // Token::FixedBytes(bytecode_hash.as_bytes().to_vec()), // ]) // .unwrap(); - +// // let complex_upgrader = get_complex_upgrader_abi(); // let upgrade_function = complex_upgrader.function("upgrade").unwrap(); // let complex_upgrader_calldata = upgrade_function @@ -338,14 +339,14 @@ // Token::Bytes(impl_calldata), // ]) // .unwrap(); - +// // let execute = Execute { // contract_address: COMPLEX_UPGRADER_ADDRESS, // calldata: complex_upgrader_calldata, // factory_deps: None, // value: U256::zero(), // }; - +// // Transaction { // common_data: ExecuteTransactionCommon::ProtocolUpgrade(ProtocolUpgradeTxCommonData { // sender: CONTRACT_FORCE_DEPLOYER_ADDRESS, @@ -357,21 +358,22 @@ // received_timestamp_ms: 0, // } // } - +// // fn read_complex_upgrade() -> Vec<u8> { // read_bytecode("etc/contracts-test-data/artifacts-zk/contracts/complex-upgrade/complex-upgrade.sol/ComplexUpgrade.json") // } - +// // fn read_msg_sender_test() -> Vec<u8> { // read_bytecode("etc/contracts-test-data/artifacts-zk/contracts/complex-upgrade/msg-sender.sol/MsgSenderTest.json") // } - +// // fn get_complex_upgrade_abi() -> Contract { // load_contract( // "etc/contracts-test-data/artifacts-zk/contracts/complex-upgrade/complex-upgrade.sol/ComplexUpgrade.json" // ) // } - +// // fn get_complex_upgrader_abi() -> Contract { // load_sys_contract("ComplexUpgrader") // } +// ``` \ No newline at end of file diff --git a/core/lib/multivm/src/versions/vm_1_3_2/tests/utils.rs b/core/lib/multivm/src/versions/vm_1_3_2/tests/utils.rs index 865f8503701..b2231f05cf1 100644 --- a/core/lib/multivm/src/versions/vm_1_3_2/tests/utils.rs +++ b/core/lib/multivm/src/versions/vm_1_3_2/tests/utils.rs @@ -1,3 +1,4 @@ +// ``` // //! // //! Tests for the bootloader // //! The description for each of the tests can be found in the corresponding `.yul` file.
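The commented-out upgrade helpers above build two layers of calldata: an inner call into the upgrade implementation, and an outer `ComplexUpgrader.upgrade(delegateTo, calldata)` wrapper that delegatecalls it. The following is a minimal standalone sketch of that wrapping with the `ethabi` crate; the inner function name and both parameter lists are assumptions for illustration, not the actual system-contract ABI.

```rust
use ethabi::{encode, short_signature, ParamType, Token};

fn main() {
    // Inner layer: calldata for the (hypothetical) upgrade implementation.
    let impl_selector = short_signature("someUpgradeLogic", &[ParamType::Uint(256)]);
    let impl_args = encode(&[Token::Uint(1u8.into())]);
    let impl_calldata = [impl_selector.to_vec(), impl_args].concat();

    // Outer layer: upgrade(address delegateTo, bytes calldata), which
    // delegatecalls the implementation with the inner calldata.
    let upgrade_selector = short_signature("upgrade", &[ParamType::Address, ParamType::Bytes]);
    let upgrade_args = encode(&[
        Token::Address([0x11; 20].into()),
        Token::Bytes(impl_calldata),
    ]);
    let upgrader_calldata = [upgrade_selector.to_vec(), upgrade_args].concat();

    // The first 4 bytes select `upgrade`; the rest ABI-encodes its two arguments.
    assert_eq!(&upgrader_calldata[..4], &upgrade_selector[..]);
}
```

The tests above then wrap such calldata into an `ExecuteTransactionCommon::ProtocolUpgrade` transaction sent by `CONTRACT_FORCE_DEPLOYER_ADDRESS`.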
@@ -7,36 +8,36 @@ // Execute, L1TxCommonData, H160, REQUIRED_L1_TO_L2_GAS_PER_PUBDATA_BYTE, // {ethabi::Token, Address, ExecuteTransactionCommon, Transaction, U256}, // }; - +// // use zksync_contracts::{load_contract, read_bytecode}; // use zksync_state::{InMemoryStorage, StorageView}; // use zksync_utils::bytecode::hash_bytecode; - +// // use crate::test_utils::get_create_execute; - +// // pub fn read_test_contract() -> Vec { // read_bytecode("etc/contracts-test-data/artifacts-zk/contracts/counter/counter.sol/Counter.json") // } - +// // pub fn read_long_return_data_contract() -> Vec { // read_bytecode("etc/contracts-test-data/artifacts-zk/contracts/long-return-data/long-return-data.sol/LongReturnData.json") // } - +// // pub fn read_nonce_holder_tester() -> Vec { // read_bytecode("etc/contracts-test-data/artifacts-zk/contracts/custom-account/nonce-holder-test.sol/NonceHolderTest.json") // } - +// // pub fn read_error_contract() -> Vec { // read_bytecode( // "etc/contracts-test-data/artifacts-zk/contracts/error/error.sol/SimpleRequire.json", // ) // } - +// // pub fn read_many_owners_custom_account_contract() -> (Vec, Contract) { // let path = "etc/contracts-test-data/artifacts-zk/contracts/custom-account/many-owners-custom-account.sol/ManyOwnersCustomAccount.json"; // (read_bytecode(path), load_contract(path)) // } - +// // pub fn get_l1_execute_test_contract_tx_with_sender( // sender: Address, // deployed_address: Address, @@ -45,7 +46,7 @@ // payable: bool, // ) -> Transaction { // let execute = execute_test_contract(deployed_address, with_panic, value, payable); - +// // Transaction { // common_data: ExecuteTransactionCommon::L1(L1TxCommonData { // sender, @@ -58,7 +59,7 @@ // received_timestamp_ms: 0, // } // } - +// // fn execute_test_contract( // address: Address, // with_panic: bool, @@ -68,7 +69,7 @@ // let test_contract = load_contract( // "etc/contracts-test-data/artifacts-zk/contracts/counter/counter.sol/Counter.json", // ); - +// // let function = if payable { // test_contract // .function("incrementWithRevertPayable") @@ -76,11 +77,11 @@ // } else { // test_contract.function("incrementWithRevert").unwrap() // }; - +// // let calldata = function // .encode_input(&[Token::Uint(U256::from(1u8)), Token::Bool(with_panic)]) // .expect("failed to encode parameters"); - +// // Execute { // contract_address: address, // calldata, @@ -88,10 +89,10 @@ // factory_deps: None, // } // } - +// // pub fn get_l1_deploy_tx(code: &[u8], calldata: &[u8]) -> Transaction { // let execute = get_create_execute(code, calldata); - +// // Transaction { // common_data: ExecuteTransactionCommon::L1(L1TxCommonData { // sender: H160::random(), @@ -103,8 +104,9 @@ // received_timestamp_ms: 0, // } // } - +// // pub fn create_storage_view() -> StorageView { // let raw_storage = InMemoryStorage::with_system_contracts(hash_bytecode); // StorageView::new(raw_storage) // } +// ``` \ No newline at end of file diff --git a/core/lib/multivm/src/versions/vm_1_3_2/transaction_data.rs b/core/lib/multivm/src/versions/vm_1_3_2/transaction_data.rs index fc931f2ad9a..58d8d29b604 100644 --- a/core/lib/multivm/src/versions/vm_1_3_2/transaction_data.rs +++ b/core/lib/multivm/src/versions/vm_1_3_2/transaction_data.rs @@ -1,15 +1,19 @@ use zk_evm_1_3_3::zkevm_opcode_defs::system_params::MAX_TX_ERGS_LIMIT; -use zksync_types::ethabi::{encode, Address, Token}; -use zksync_types::fee::encoding_len; use zksync_types::{ - l1::is_l1_tx_type, l2::TransactionType, ExecuteTransactionCommon, Transaction, U256, + ethabi::{encode, 
Address, Token}, + fee::encoding_len, + l1::is_l1_tx_type, + l2::TransactionType, + ExecuteTransactionCommon, Transaction, MAX_L2_TX_GAS_LIMIT, U256, +}; +use zksync_utils::{ + address_to_h256, bytecode::hash_bytecode, bytes_to_be_words, ceil_div_u256, h256_to_u256, }; -use zksync_types::{MAX_L2_TX_GAS_LIMIT, MAX_TXS_IN_BLOCK}; -use zksync_utils::{address_to_h256, ceil_div_u256}; -use zksync_utils::{bytecode::hash_bytecode, bytes_to_be_words, h256_to_u256}; +use super::vm_with_bootloader::MAX_TXS_IN_BLOCK; use crate::vm_1_3_2::vm_with_bootloader::{ BLOCK_OVERHEAD_GAS, BLOCK_OVERHEAD_PUBDATA, BOOTLOADER_TX_ENCODING_SPACE, + MAX_GAS_PER_PUBDATA_BYTE, }; // This structure represents the data that is used by @@ -56,12 +60,22 @@ impl From<Transaction> for TransactionData { U256::zero() }; + // Ethereum transactions do not sign the gas per pubdata limit, and so for them we need to use + // some default value. We use the maximum possible value that is allowed by the bootloader + // (i.e. we cannot use u64::MAX, because the bootloader requires gas per pubdata for such + // transactions to be higher than `MAX_GAS_PER_PUBDATA_BYTE`). + let gas_per_pubdata_limit = if common_data.transaction_type.is_ethereum_type() { + MAX_GAS_PER_PUBDATA_BYTE.into() + } else { + common_data.fee.gas_per_pubdata_limit + }; + TransactionData { tx_type: (common_data.transaction_type as u32) as u8, from: common_data.initiator_address, to: execute_tx.execute.contract_address, gas_limit: common_data.fee.gas_limit, - pubdata_price_limit: common_data.fee.gas_per_pubdata_limit, + pubdata_price_limit: gas_per_pubdata_limit, max_fee_per_gas: common_data.fee.max_fee_per_gas, max_priority_fee_per_gas: common_data.fee.max_priority_fee_per_gas, paymaster: common_data.paymaster_params.paymaster, @@ -212,12 +226,12 @@ impl TransactionData { self.reserved_dynamic.len() as u64, ); - let coeficients = OverheadCoeficients::from_tx_type(self.tx_type); + let coefficients = OverheadCoefficients::from_tx_type(self.tx_type); get_amortized_overhead( total_gas_limit, gas_price_per_pubdata, encoded_len, - coeficients, + coefficients, ) } @@ -227,13 +241,13 @@ impl TransactionData { } } -pub fn derive_overhead( +pub(crate) fn derive_overhead( gas_limit: u32, gas_price_per_pubdata: u32, encoded_len: usize, - coeficients: OverheadCoeficients, + coefficients: OverheadCoefficients, ) -> u32 { - // Even if the gas limit is greater than the MAX_TX_ERGS_LIMIT, we assume that everything beyond MAX_TX_ERGS_LIMIT + // Even if the gas limit is greater than the `MAX_TX_ERGS_LIMIT`, we assume that everything beyond `MAX_TX_ERGS_LIMIT` // will be spent entirely on publishing bytecodes and so we derive the overhead solely based on the capped value let gas_limit = std::cmp::min(MAX_TX_ERGS_LIMIT, gas_limit); @@ -242,8 +256,8 @@ pub fn derive_overhead( let gas_limit = U256::from(gas_limit); let encoded_len = U256::from(encoded_len); - // The MAX_TX_ERGS_LIMIT is formed in a way that may fullfills a single-instance circuits - // if used in full. That is, within MAX_TX_ERGS_LIMIT it is possible to fully saturate all the single-instance + // The `MAX_TX_ERGS_LIMIT` is formed in a way that may fill the single-instance circuits + // if used in full. That is, within `MAX_TX_ERGS_LIMIT` it is possible to fully saturate all the single-instance // circuits.
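// As a worked miniature of the `max` taken below (a sketch only: `BLOCK_OVERHEAD_GAS = 1_200_000`
// and `MAX_TXS_IN_BLOCK = 1024` are the values defined elsewhere in this patch, while the ergs
// limit and encoding space here are round stand-ins, not the real constants):
// ```
// let circuits = ceil(40_000_000 * 1_200_000 / 80_000_000); // = 600_000 (half the ergs limit -> half the block overhead)
// let length = ceil(5_000 * 1_200_000 / 500_000);           // = 12_000 (proportional to the encoded length)
// let slot = ceil(1_200_000 / 1_024);                       // = 1_172 (a flat 1/1024 of the block overhead)
// assert_eq!(circuits.max(length).max(slot), 600_000);      // the circuits term dominates here
// ```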
let overhead_for_single_instance_circuits = ceil_div_u256(gas_limit * max_block_overhead, MAX_TX_ERGS_LIMIT.into()); @@ -257,42 +271,44 @@ pub fn derive_overhead( // The overhead for occupying a single tx slot let tx_slot_overhead = ceil_div_u256(max_block_overhead, MAX_TXS_IN_BLOCK.into()); - // We use "ceil" here for formal reasons to allow easier approach for calculating the overhead in O(1) - // let max_pubdata_in_tx = ceil_div_u256(gas_limit, gas_price_per_pubdata); + // We use `ceil` here for formal reasons to allow an easier approach for calculating the overhead in O(1) + // `let max_pubdata_in_tx = ceil_div_u256(gas_limit, gas_price_per_pubdata);` // The maximal potential overhead from pubdata // TODO (EVM-67): possibly use overhead for pubdata + // ``` // let pubdata_overhead = ceil_div_u256( // max_pubdata_in_tx * max_block_overhead, // MAX_PUBDATA_PER_BLOCK.into(), // ); + // ``` vec![ - (coeficients.ergs_limit_overhead_coeficient + (coefficients.ergs_limit_overhead_coeficient * overhead_for_single_instance_circuits.as_u32() as f64) .floor() as u32, - (coeficients.bootloader_memory_overhead_coeficient * overhead_for_length.as_u32() as f64) + (coefficients.bootloader_memory_overhead_coeficient * overhead_for_length.as_u32() as f64) .floor() as u32, - (coeficients.slot_overhead_coeficient * tx_slot_overhead.as_u32() as f64) as u32, + (coefficients.slot_overhead_coeficient * tx_slot_overhead.as_u32() as f64) as u32, ] .into_iter() .max() .unwrap() } -/// Contains the coeficients with which the overhead for transactions will be calculated. -/// All of the coeficients should be <= 1. There are here to provide a certain "discount" for normal transactions +/// Contains the coefficients with which the overhead for transactions will be calculated. +/// All of the coefficients should be <= 1. They are here to provide a certain "discount" for normal transactions /// at the risk of malicious transactions that may close the block prematurely. -/// IMPORTANT: to perform correct computations, `MAX_TX_ERGS_LIMIT / coeficients.ergs_limit_overhead_coeficient` MUST +/// IMPORTANT: to perform correct computations, `MAX_TX_ERGS_LIMIT / coefficients.ergs_limit_overhead_coefficient` MUST /// result in an integer number #[derive(Debug, Clone, Copy)] -pub struct OverheadCoeficients { +pub struct OverheadCoefficients { slot_overhead_coeficient: f64, bootloader_memory_overhead_coeficient: f64, ergs_limit_overhead_coeficient: f64, } -impl OverheadCoeficients { +impl OverheadCoefficients { // This method ensures that the parameters keep the required invariants fn new_checked( slot_overhead_coeficient: f64, @@ -314,15 +330,15 @@ impl OverheadCoeficients { // L1->L2 do not receive any discounts fn new_l1() -> Self { - OverheadCoeficients::new_checked(1.0, 1.0, 1.0) + OverheadCoefficients::new_checked(1.0, 1.0, 1.0) } fn new_l2() -> Self { - OverheadCoeficients::new_checked( + OverheadCoefficients::new_checked( 1.0, 1.0, // For L2 transactions we allow a certain default discount with regard to the number of ergs. - // Multiinstance circuits can in theory be spawned infinite times, while projected future limitations - // on gas per pubdata allow for roughly 800kk gas per L1 batch, so the rough trust "discount" on the proof's part + // Multi-instance circuits can in theory be spawned an infinite number of times, while projected future limitations + // on gas per pubdata allow for roughly 800k gas per L1 batch, so the rough trust "discount" on the proof's part + // to be paid by the users is 0.1.
0.1, ) @@ -342,7 +358,7 @@ pub fn get_amortized_overhead( total_gas_limit: u32, gas_per_pubdata_byte_limit: u32, encoded_len: usize, - coeficients: OverheadCoeficients, + coefficients: OverheadCoefficients, ) -> u32 { // Using large U256 type to prevent overflows. let overhead_for_block_gas = U256::from(block_overhead_gas(gas_per_pubdata_byte_limit)); @@ -350,28 +366,28 @@ pub fn get_amortized_overhead( let encoded_len = U256::from(encoded_len); // Derivation of overhead consists of 4 parts: - // 1. The overhead for taking up a transaction's slot. (O1): O1 = 1 / MAX_TXS_IN_BLOCK - // 2. The overhead for taking up the bootloader's memory (O2): O2 = encoded_len / BOOTLOADER_TX_ENCODING_SPACE - // 3. The overhead for possible usage of pubdata. (O3): O3 = max_pubdata_in_tx / MAX_PUBDATA_PER_BLOCK - // 4. The overhead for possible usage of all the single-instance circuits. (O4): O4 = gas_limit / MAX_TX_ERGS_LIMIT + // 1. The overhead for taking up a transaction's slot. `(O1): O1 = 1 / MAX_TXS_IN_BLOCK` + // 2. The overhead for taking up the bootloader's memory `(O2): O2 = encoded_len / BOOTLOADER_TX_ENCODING_SPACE` + // 3. The overhead for possible usage of pubdata. `(O3): O3 = max_pubdata_in_tx / MAX_PUBDATA_PER_BLOCK` + // 4. The overhead for possible usage of all the single-instance circuits. `(O4): O4 = gas_limit / MAX_TX_ERGS_LIMIT` // // The maximum of these is taken to derive the part of the block's overhead to be paid by the users: // - // max_overhead = max(O1, O2, O3, O4) - // overhead_gas = ceil(max_overhead * overhead_for_block_gas). Thus, overhead_gas is a function of - // tx_gas_limit, gas_per_pubdata_byte_limit and encoded_len. + // `max_overhead = max(O1, O2, O3, O4)` + // `overhead_gas = ceil(max_overhead * overhead_for_block_gas)`. Thus, `overhead_gas` is a function of + // `tx_gas_limit`, `gas_per_pubdata_byte_limit` and `encoded_len`. // - // While it is possible to derive the overhead with binary search in O(log n), it is too expensive to be done + // While it is possible to derive the overhead with binary search in `O(log n)`, it is too expensive to be done // on L1, so here is a reference implementation of finding the overhead for transaction in O(1): // - // Given total_gas_limit = tx_gas_limit + overhead_gas, we need to find overhead_gas and tx_gas_limit, such that: - // 1. overhead_gas is maximal possible (the operator is paid fairly) - // 2. overhead_gas(tx_gas_limit, gas_per_pubdata_byte_limit, encoded_len) >= overhead_gas (the user does not overpay) + // Given `total_gas_limit = tx_gas_limit + overhead_gas`, we need to find `overhead_gas` and `tx_gas_limit`, such that: + // 1. `overhead_gas` is maximal possible (the operator is paid fairly) + // 2. 
`overhead_gas(tx_gas_limit, gas_per_pubdata_byte_limit, encoded_len) >= overhead_gas` (the user does not overpay) // The third part boils to the following 4 inequalities (at least one of these must hold): - // ceil(O1 * overhead_for_block_gas) >= overhead_gas - // ceil(O2 * overhead_for_block_gas) >= overhead_gas - // ceil(O3 * overhead_for_block_gas) >= overhead_gas - // ceil(O4 * overhead_for_block_gas) >= overhead_gas + // `ceil(O1 * overhead_for_block_gas) >= overhead_gas` + // `ceil(O2 * overhead_for_block_gas) >= overhead_gas` + // `ceil(O3 * overhead_for_block_gas) >= overhead_gas` + // `ceil(O4 * overhead_for_block_gas) >= overhead_gas` // // Now, we need to solve each of these separately: @@ -379,10 +395,10 @@ pub fn get_amortized_overhead( let tx_slot_overhead = { let tx_slot_overhead = ceil_div_u256(overhead_for_block_gas, MAX_TXS_IN_BLOCK.into()).as_u32(); - (coeficients.slot_overhead_coeficient * tx_slot_overhead as f64).floor() as u32 + (coefficients.slot_overhead_coeficient * tx_slot_overhead as f64).floor() as u32 }; - // 2. The overhead for occupying the bootloader memory can be derived from encoded_len + // 2. The overhead for occupying the bootloader memory can be derived from `encoded_len` let overhead_for_length = { let overhead_for_length = ceil_div_u256( encoded_len * overhead_for_block_gas, @@ -390,18 +406,22 @@ pub fn get_amortized_overhead( ) .as_u32(); - (coeficients.bootloader_memory_overhead_coeficient * overhead_for_length as f64).floor() + (coefficients.bootloader_memory_overhead_coeficient * overhead_for_length as f64).floor() as u32 }; // TODO (EVM-67): possibly include the overhead for pubdata. The formula below has not been properly maintained, - // since the pubdat is not published. If decided to use the pubdata overhead, it needs to be updated. + // since the pubdata is not published. If decided to use the pubdata overhead, it needs to be updated. + // ``` // 3. ceil(O3 * overhead_for_block_gas) >= overhead_gas // O3 = max_pubdata_in_tx / MAX_PUBDATA_PER_BLOCK = ceil(gas_limit / gas_per_pubdata_byte_limit) / MAX_PUBDATA_PER_BLOCK - // >= (gas_limit / (gas_per_pubdata_byte_limit * MAX_PUBDATA_PER_BLOCK). Throwing off the `ceil`, while may provide marginally lower + // >= (gas_limit / (gas_per_pubdata_byte_limit * MAX_PUBDATA_PER_BLOCK). + // ``` + // Throwing off the `ceil`, while may provide marginally lower // overhead to the operator, provides substantially easier formula to work with. // - // For better clarity, let's denote gas_limit = GL, MAX_PUBDATA_PER_BLOCK = MP, gas_per_pubdata_byte_limit = EP, overhead_for_block_gas = OB, total_gas_limit = TL, overhead_gas = OE + // For better clarity, let's denote `gas_limit = GL, MAX_PUBDATA_PER_BLOCK = MP, gas_per_pubdata_byte_limit = EP, overhead_for_block_gas = OB, total_gas_limit = TL, overhead_gas = OE` + // ``` // ceil(OB * (TL - OE) / (EP * MP)) >= OE // // OB * (TL - OE) / (MP * EP) > OE - 1 @@ -414,7 +434,7 @@ pub fn get_amortized_overhead( // + gas_per_pubdata_byte_limit * U256::from(MAX_PUBDATA_PER_BLOCK); // let denominator = // gas_per_pubdata_byte_limit * U256::from(MAX_PUBDATA_PER_BLOCK) + overhead_for_block_gas; - + // // // Corner case: if `total_gas_limit` = `gas_per_pubdata_byte_limit` = 0 // // then the numerator will be 0 and subtracting 1 will cause a panic, so we just return a zero. // if numerator.is_zero() { @@ -423,7 +443,7 @@ pub fn get_amortized_overhead( // (numerator - 1) / denominator // } // }; - + // // 4. 
K * ceil(O4 * overhead_for_block_gas) >= overhead_gas, where K is the discount // O4 = gas_limit / MAX_TX_ERGS_LIMIT. Using the notation from the previous equation: // ceil(OB * GL / MAX_TX_ERGS_LIMIT) >= (OE / K) @@ -432,10 +452,11 @@ pub fn get_amortized_overhead( // OB * (TL - OE) > (OE/K) * MAX_TX_ERGS_LIMIT - MAX_TX_ERGS_LIMIT // OB * TL + MAX_TX_ERGS_LIMIT > OE * ( MAX_TX_ERGS_LIMIT/K + OB) // OE = floor(OB * TL + MAX_TX_ERGS_LIMIT / (MAX_TX_ERGS_LIMIT/K + OB)), with possible -1 if the division is without remainder + // ``` let overhead_for_gas = { let numerator = overhead_for_block_gas * total_gas_limit + U256::from(MAX_TX_ERGS_LIMIT); let denominator: U256 = U256::from( - (MAX_TX_ERGS_LIMIT as f64 / coeficients.ergs_limit_overhead_coeficient) as u64, + (MAX_TX_ERGS_LIMIT as f64 / coefficients.ergs_limit_overhead_coeficient) as u64, ) + overhead_for_block_gas; let overhead_for_gas = (numerator - 1) / denominator; @@ -446,21 +467,21 @@ pub fn get_amortized_overhead( let overhead = vec![tx_slot_overhead, overhead_for_length, overhead_for_gas] .into_iter() .max() - // For the sake of consistency making sure that total_gas_limit >= max_overhead + // For the sake of consistency making sure that `total_gas_limit >= max_overhead` .map(|max_overhead| std::cmp::min(max_overhead, total_gas_limit.as_u32())) .unwrap(); let limit_after_deducting_overhead = total_gas_limit - overhead; // During double checking of the overhead, the bootloader will assume that the - // body of the transaction does not have any more than MAX_L2_TX_GAS_LIMIT ergs available to it. + // body of the transaction does not have any more than `MAX_L2_TX_GAS_LIMIT` ergs available to it. if limit_after_deducting_overhead.as_u64() > MAX_L2_TX_GAS_LIMIT { - // We derive the same overhead that would exist for the MAX_L2_TX_GAS_LIMIT ergs + // We derive the same overhead that would exist for the `MAX_L2_TX_GAS_LIMIT` ergs derive_overhead( MAX_L2_TX_GAS_LIMIT as u32, gas_per_pubdata_byte_limit, encoded_len.as_usize(), - coeficients, + coefficients, ) } else { overhead @@ -483,7 +504,7 @@ mod tests { total_gas_limit: u32, gas_per_pubdata_byte_limit: u32, encoded_len: usize, - coeficients: OverheadCoeficients, + coefficients: OverheadCoefficients, ) -> u32 { let mut left_bound = if MAX_TX_ERGS_LIMIT < total_gas_limit { total_gas_limit - MAX_TX_ERGS_LIMIT @@ -501,7 +522,7 @@ mod tests { total_gas_limit - suggested_overhead, gas_per_pubdata_byte_limit, encoded_len, - coeficients, + coefficients, ); derived_overhead >= suggested_overhead @@ -530,41 +551,41 @@ mod tests { let test_params = |total_gas_limit: u32, gas_per_pubdata: u32, encoded_len: usize, - coeficients: OverheadCoeficients| { + coefficients: OverheadCoefficients| { let result_by_efficient_search = - get_amortized_overhead(total_gas_limit, gas_per_pubdata, encoded_len, coeficients); + get_amortized_overhead(total_gas_limit, gas_per_pubdata, encoded_len, coefficients); let result_by_binary_search = get_maximal_allowed_overhead_bin_search( total_gas_limit, gas_per_pubdata, encoded_len, - coeficients, + coefficients, ); assert_eq!(result_by_efficient_search, result_by_binary_search); }; // Some arbitrary test - test_params(60_000_000, 800, 2900, OverheadCoeficients::new_l2()); + test_params(60_000_000, 800, 2900, OverheadCoefficients::new_l2()); // Very small parameters - test_params(0, 1, 12, OverheadCoeficients::new_l2()); + test_params(0, 1, 12, OverheadCoefficients::new_l2()); // Relatively big parameters let max_tx_overhead = derive_overhead( MAX_TX_ERGS_LIMIT, 5000, 
10000, - OverheadCoeficients::new_l2(), + OverheadCoefficients::new_l2(), ); test_params( MAX_TX_ERGS_LIMIT + max_tx_overhead, 5000, 10000, - OverheadCoeficients::new_l2(), + OverheadCoefficients::new_l2(), ); - test_params(115432560, 800, 2900, OverheadCoeficients::new_l1()); + test_params(115432560, 800, 2900, OverheadCoefficients::new_l1()); } #[test] diff --git a/core/lib/multivm/src/versions/vm_1_3_2/utils.rs b/core/lib/multivm/src/versions/vm_1_3_2/utils.rs index 44be1b9c8b9..a7956d473ab 100644 --- a/core/lib/multivm/src/versions/vm_1_3_2/utils.rs +++ b/core/lib/multivm/src/versions/vm_1_3_2/utils.rs @@ -1,13 +1,7 @@ -use crate::vm_1_3_2::history_recorder::HistoryMode; -use crate::vm_1_3_2::{ - memory::SimpleMemory, oracles::tracer::PubdataSpentTracer, vm_with_bootloader::BlockContext, - VmInstance, -}; use once_cell::sync::Lazy; - -use zk_evm_1_3_3::block_properties::BlockProperties; use zk_evm_1_3_3::{ aux_structures::{MemoryPage, Timestamp}, + block_properties::BlockProperties, vm_state::PrimitiveValue, zkevm_opcode_defs::FatPointer, }; @@ -17,6 +11,11 @@ use zksync_system_constants::ZKPORTER_IS_AVAILABLE; use zksync_types::{Address, H160, MAX_L2_TX_GAS_LIMIT, U256}; use zksync_utils::h256_to_u256; +use crate::vm_1_3_2::{ + history_recorder::HistoryMode, memory::SimpleMemory, oracles::tracer::PubdataSpentTracer, + vm_with_bootloader::BlockContext, VmInstance, +}; + pub const INITIAL_TIMESTAMP: u32 = 1024; pub const INITIAL_MEMORY_COUNTER: u32 = 2048; pub const INITIAL_CALLDATA_PAGE: u32 = 7; @@ -191,7 +190,7 @@ impl IntoFixedLengthByteIterator<32> for U256 { /// Receives sorted slice of timestamps. /// Returns count of timestamps that are greater than or equal to `from_timestamp`. -/// Works in O(log(sorted_timestamps.len())). +/// Works in `O(log(sorted_timestamps.len()))`. 
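+/// A sketch of the same contract over plain integers (an illustration only, not this
+/// crate's implementation); the standard library's `partition_point` binary search
+/// gives the `O(log n)` bound:
+/// ```
+/// let ts = [10u64, 20, 30, 40];
+/// let count = ts.len() - ts.partition_point(|&t| t < 25);
+/// assert_eq!(count, 2); // 30 and 40 are >= 25
+/// ```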
pub fn precompile_calls_count_after_timestamp( sorted_timestamps: &[Timestamp], from_timestamp: Timestamp, @@ -222,8 +221,8 @@ pub fn create_test_block_params() -> (BlockContext, BlockProperties) { pub fn read_bootloader_test_code(test: &str) -> Vec { read_zbin_bytecode(format!( - "etc/system-contracts/bootloader/tests/artifacts/{}.yul/{}.yul.zbin", - test, test + "contracts/system-contracts/bootloader/tests/artifacts/{}.yul.zbin", + test )) } diff --git a/core/lib/multivm/src/versions/vm_1_3_2/vm.rs b/core/lib/multivm/src/versions/vm_1_3_2/vm.rs index 84b84d3e31a..be99537187b 100644 --- a/core/lib/multivm/src/versions/vm_1_3_2/vm.rs +++ b/core/lib/multivm/src/versions/vm_1_3_2/vm.rs @@ -1,21 +1,24 @@ -use crate::interface::{ - BootloaderMemory, BytecodeCompressionError, CurrentExecutionState, FinishedL1Batch, L1BatchEnv, - L2BlockEnv, SystemEnv, TxExecutionMode, VmExecutionMode, VmExecutionResultAndLogs, VmInterface, - VmInterfaceHistoryEnabled, VmMemoryMetrics, -}; - use std::collections::HashSet; use zksync_state::{StoragePtr, WriteStorage}; -use zksync_types::l2_to_l1_log::{L2ToL1Log, UserL2ToL1Log}; -use zksync_types::Transaction; -use zksync_utils::bytecode::{hash_bytecode, CompressedBytecodeInfo}; -use zksync_utils::{h256_to_u256, u256_to_h256}; +use zksync_types::{ + l2_to_l1_log::{L2ToL1Log, UserL2ToL1Log}, + Transaction, +}; +use zksync_utils::{ + bytecode::{hash_bytecode, CompressedBytecodeInfo}, + h256_to_u256, u256_to_h256, +}; -use crate::glue::history_mode::HistoryMode; -use crate::glue::GlueInto; -use crate::vm_1_3_2::events::merge_events; -use crate::vm_1_3_2::VmInstance; +use crate::{ + glue::{history_mode::HistoryMode, GlueInto}, + interface::{ + BootloaderMemory, BytecodeCompressionError, CurrentExecutionState, FinishedL1Batch, + L1BatchEnv, L2BlockEnv, SystemEnv, TxExecutionMode, VmExecutionMode, + VmExecutionResultAndLogs, VmInterface, VmInterfaceHistoryEnabled, VmMemoryMetrics, + }, + vm_1_3_2::{events::merge_events, VmInstance}, +}; #[derive(Debug)] pub struct Vm { @@ -159,7 +162,10 @@ impl VmInterface for Vm { _tracer: Self::TracerDispatcher, tx: Transaction, with_compression: bool, - ) -> Result { + ) -> ( + Result<(), BytecodeCompressionError>, + VmExecutionResultAndLogs, + ) { self.last_tx_compressed_bytecodes = vec![]; let bytecodes = if with_compression { let deps = tx.execute.factory_deps.as_deref().unwrap_or_default(); @@ -198,17 +204,31 @@ impl VmInterface for Vm { }; // Even that call tracer is supported here, we don't use it. 
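+        // A hypothetical caller of the method changed above (a sketch only; `vm`, `tracer`,
+        // `tx`, and the method name are placeholders, since the name sits outside this hunk).
+        // Compression failure no longer swallows the execution outcome:
+        // ```
+        // let (compression, result) = vm.execute_tx_with_compression(tracer, tx, true);
+        // if compression.is_err() {
+        //     // Bytecode compression was rejected, but `result` still carries the logs.
+        // }
+        // ```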
- let result = self.vm.execute_next_tx( - self.system_env.default_validation_computational_gas_limit, - false, - ); + let result = match self.system_env.execution_mode { + TxExecutionMode::VerifyExecute => self + .vm + .execute_next_tx( + self.system_env.default_validation_computational_gas_limit, + false, + ) + .glue_into(), + TxExecutionMode::EstimateFee | TxExecutionMode::EthCall => self + .vm + .execute_till_block_end( + crate::vm_1_3_2::vm_with_bootloader::BootloaderJobType::TransactionExecution, + ) + .glue_into(), + }; if bytecodes .iter() .any(|info| !self.vm.is_bytecode_known(info)) { - Err(crate::interface::BytecodeCompressionError::BytecodeCompressionFailed) + ( + Err(BytecodeCompressionError::BytecodeCompressionFailed), + result, + ) } else { - Ok(result.glue_into()) + (Ok(()), result) } } diff --git a/core/lib/multivm/src/versions/vm_1_3_2/vm_instance.rs b/core/lib/multivm/src/versions/vm_1_3_2/vm_instance.rs index d84cd1cc7b6..3fe3f1929fd 100644 --- a/core/lib/multivm/src/versions/vm_1_3_2/vm_instance.rs +++ b/core/lib/multivm/src/versions/vm_1_3_2/vm_instance.rs @@ -1,43 +1,51 @@ -use std::convert::TryFrom; -use std::fmt::Debug; - -use zk_evm_1_3_3::aux_structures::Timestamp; -use zk_evm_1_3_3::vm_state::{PrimitiveValue, VmLocalState, VmState}; -use zk_evm_1_3_3::witness_trace::DummyTracer; -use zk_evm_1_3_3::zkevm_opcode_defs::decoding::{ - AllowedPcOrImm, EncodingModeProduction, VmEncodingMode, +use std::{convert::TryFrom, fmt::Debug}; + +use zk_evm_1_3_3::{ + aux_structures::Timestamp, + vm_state::{PrimitiveValue, VmLocalState, VmState}, + witness_trace::DummyTracer, + zkevm_opcode_defs::{ + decoding::{AllowedPcOrImm, EncodingModeProduction, VmEncodingMode}, + definitions::RET_IMPLICIT_RETURNDATA_PARAMS_REGISTER, + }, }; -use zk_evm_1_3_3::zkevm_opcode_defs::definitions::RET_IMPLICIT_RETURNDATA_PARAMS_REGISTER; use zksync_state::WriteStorage; -use zksync_system_constants::MAX_TXS_IN_BLOCK; -use zksync_types::l2_to_l1_log::{L2ToL1Log, UserL2ToL1Log}; -use zksync_types::tx::tx_execution_info::TxExecutionStatus; -use zksync_types::vm_trace::{Call, VmExecutionTrace, VmTrace}; -use zksync_types::{L1BatchNumber, StorageLogQuery, VmEvent, H256, U256}; - -use crate::interface::types::outputs::VmExecutionLogs; -use crate::vm_1_3_2::bootloader_state::BootloaderState; -use crate::vm_1_3_2::errors::{TxRevertReason, VmRevertReason, VmRevertReasonParsingResult}; -use crate::vm_1_3_2::event_sink::InMemoryEventSink; -use crate::vm_1_3_2::events::merge_events; -use crate::vm_1_3_2::history_recorder::{HistoryEnabled, HistoryMode}; -use crate::vm_1_3_2::memory::SimpleMemory; -use crate::vm_1_3_2::oracles::decommitter::DecommitterOracle; -use crate::vm_1_3_2::oracles::precompile::PrecompilesProcessorWithHistory; -use crate::vm_1_3_2::oracles::storage::StorageOracle; -use crate::vm_1_3_2::oracles::tracer::{ - BootloaderTracer, ExecutionEndTracer, OneTxTracer, PendingRefundTracer, PubdataSpentTracer, - StorageInvocationTracer, TransactionResultTracer, ValidationError, ValidationTracer, - ValidationTracerParams, +use zksync_types::{ + l2_to_l1_log::{L2ToL1Log, UserL2ToL1Log}, + tx::tx_execution_info::TxExecutionStatus, + vm_trace::{Call, VmExecutionTrace, VmTrace}, + L1BatchNumber, StorageLogQuery, VmEvent, H256, U256, }; -use crate::vm_1_3_2::oracles::OracleWithHistory; -use crate::vm_1_3_2::utils::{ - calculate_computational_gas_used, dump_memory_page_using_primitive_value, - precompile_calls_count_after_timestamp, -}; -use crate::vm_1_3_2::vm_with_bootloader::{ - BootloaderJobType, 
DerivedBlockContext, TxExecutionMode, BOOTLOADER_HEAP_PAGE, - OPERATOR_REFUNDS_OFFSET, + +use crate::{ + interface::types::outputs::VmExecutionLogs, + vm_1_3_2::{ + bootloader_state::BootloaderState, + errors::{TxRevertReason, VmRevertReason, VmRevertReasonParsingResult}, + event_sink::InMemoryEventSink, + events::merge_events, + history_recorder::{HistoryEnabled, HistoryMode}, + memory::SimpleMemory, + oracles::{ + decommitter::DecommitterOracle, + precompile::PrecompilesProcessorWithHistory, + storage::StorageOracle, + tracer::{ + BootloaderTracer, ExecutionEndTracer, OneTxTracer, PendingRefundTracer, + PubdataSpentTracer, StorageInvocationTracer, TransactionResultTracer, + ValidationError, ValidationTracer, ValidationTracerParams, + }, + OracleWithHistory, + }, + utils::{ + calculate_computational_gas_used, dump_memory_page_using_primitive_value, + precompile_calls_count_after_timestamp, + }, + vm_with_bootloader::{ + BootloaderJobType, DerivedBlockContext, TxExecutionMode, BOOTLOADER_HEAP_PAGE, + OPERATOR_REFUNDS_OFFSET, + }, + }, }; pub type ZkSyncVmState = VmState< @@ -85,7 +93,7 @@ pub struct VmExecutionResult { pub l2_to_l1_logs: Vec, pub return_data: Vec, - /// Value denoting the amount of gas spent withing VM invocation. + /// Value denoting the amount of gas spent within VM invocation. /// Note that return value represents the difference between the amount of gas /// available to VM before and after execution. /// @@ -153,6 +161,7 @@ pub enum VmExecutionStopReason { TracerRequestedStop, } +use super::vm_with_bootloader::MAX_TXS_IN_BLOCK; use crate::vm_1_3_2::utils::VmExecutionResult as NewVmExecutionResult; fn vm_may_have_ended_inner( @@ -175,7 +184,7 @@ fn vm_may_have_ended_inner( } (false, _) => None, (true, l) if l == outer_eh_location => { - // check r1,r2,r3 + // check `r1,r2,r3` if vm.local_state.flags.overflow_or_less_than_flag { Some(NewVmExecutionResult::Panic) } else { @@ -208,7 +217,7 @@ fn vm_may_have_ended( NewVmExecutionResult::Ok(data) => { Some(VmExecutionResult { // The correct `events` value for this field should be set separately - // later on based on the information inside the event_sink oracle. + // later on based on the information inside the `event_sink` oracle. events: vec![], storage_log_queries: vm.state.storage.get_final_log_queries(), used_contract_hashes: vm.get_used_contracts(), @@ -364,7 +373,7 @@ impl VmInstance { } } - /// Removes the latest snapshot without rollbacking to it. + /// Removes the latest snapshot without rolling it back. /// This function expects that there is at least one snapshot present. pub fn pop_snapshot_no_rollback(&mut self) { self.snapshots.pop().unwrap(); @@ -481,8 +490,8 @@ impl VmInstance { ); } - // This means that the bootloader has informed the system (usually via VMHooks) - that some gas - // should be refunded back (see askOperatorForRefund in bootloader.yul for details). + // This means that the bootloader has informed the system (usually via `VMHooks`) - that some gas + // should be refunded back (see `askOperatorForRefund` in `bootloader.yul` for details). if let Some(bootloader_refund) = tracer.requested_refund() { assert!( operator_refund.is_none(), @@ -578,8 +587,8 @@ impl VmInstance { /// Panics if there are no new transactions in bootloader. /// Internally uses the OneTxTracer to stop the VM when the last opcode from the transaction is reached. // Err when transaction is rejected. 
- // Ok(status: TxExecutionStatus::Success) when the transaction succeeded - // Ok(status: TxExecutionStatus::Failure) when the transaction failed. + // `Ok(status: TxExecutionStatus::Success)` when the transaction succeeded + // `Ok(status: TxExecutionStatus::Failure)` when the transaction failed. // Note that failed transactions are considered properly processed and are included in blocks pub fn execute_next_tx( &mut self, @@ -639,7 +648,7 @@ impl VmInstance { revert_reason: None, // getting contracts used during this transaction // at least for now the number returned here is always <= to the number - // of the code hashes actually used by the transaction, since it might've + // of the code hashes actually used by the transaction, since it might have // reused bytecode hashes from some of the previous ones. contracts_used: self .state @@ -904,8 +913,8 @@ impl VmInstance { pub fn save_current_vm_as_snapshot(&mut self) { self.snapshots.push(VmSnapshot { // Vm local state contains O(1) various parameters (registers/etc). - // The only "expensive" copying here is copying of the callstack. - // It will take O(callstack_depth) to copy it. + // The only "expensive" copying here is copying of the call stack. + // It will take `O(callstack_depth)` to copy it. // So it is generally recommended to get snapshots of the bootloader frame, // where the depth is 1. local_state: self.state.local_state.clone(), diff --git a/core/lib/multivm/src/versions/vm_1_3_2/vm_with_bootloader.rs b/core/lib/multivm/src/versions/vm_1_3_2/vm_with_bootloader.rs index d5d3cba4a23..c8d8d828c45 100644 --- a/core/lib/multivm/src/versions/vm_1_3_2/vm_with_bootloader.rs +++ b/core/lib/multivm/src/versions/vm_1_3_2/vm_with_bootloader.rs @@ -1,5 +1,6 @@ use std::collections::HashMap; +use itertools::Itertools; use zk_evm_1_3_3::{ aux_structures::{MemoryPage, Timestamp}, block_properties::BlockProperties, @@ -11,12 +12,12 @@ use zk_evm_1_3_3::{ }, }; use zksync_contracts::BaseSystemContracts; -use zksync_system_constants::MAX_TXS_IN_BLOCK; - +use zksync_state::WriteStorage; +use zksync_system_constants::MAX_L2_TX_GAS_LIMIT; use zksync_types::{ - l1::is_l1_tx_type, zkevm_test_harness::INITIAL_MONOTONIC_CYCLE_COUNTER, Address, Transaction, - BOOTLOADER_ADDRESS, L1_GAS_PER_PUBDATA_BYTE, MAX_GAS_PER_PUBDATA_BYTE, MAX_NEW_FACTORY_DEPS, - U256, USED_BOOTLOADER_MEMORY_WORDS, + fee_model::L1PeggedBatchFeeModelInput, l1::is_l1_tx_type, + zkevm_test_harness::INITIAL_MONOTONIC_CYCLE_COUNTER, Address, Transaction, BOOTLOADER_ADDRESS, + L1_GAS_PER_PUBDATA_BYTE, MAX_NEW_FACTORY_DEPS, U256, }; use zksync_utils::{ address_to_u256, @@ -25,22 +26,22 @@ use zksync_utils::{ misc::ceil_div, }; -use itertools::Itertools; -use zksync_state::WriteStorage; - -use crate::vm_1_3_2::{ - bootloader_state::BootloaderState, - history_recorder::HistoryMode, - transaction_data::TransactionData, - utils::{ - code_page_candidate_from_base, heap_page_from_base, BLOCK_GAS_LIMIT, INITIAL_BASE_PAGE, +use crate::{ + vm_1_3_2::{ + bootloader_state::BootloaderState, + history_recorder::HistoryMode, + transaction_data::TransactionData, + utils::{ + code_page_candidate_from_base, heap_page_from_base, BLOCK_GAS_LIMIT, INITIAL_BASE_PAGE, + }, + vm_instance::ZkSyncVmState, + OracleTools, VmInstance, }, - vm_instance::ZkSyncVmState, - OracleTools, VmInstance, + vm_latest::L1BatchEnv, }; -// TODO (SMA-1703): move these to config and make them programmatically generatable. 
-// fill these values in the similar fasion +// TODO (SMA-1703): move these to config and make them programmatically generable. +// fill these values in the similar fashion as other overhead-related constants pub const BLOCK_OVERHEAD_GAS: u32 = 1200000; pub const BLOCK_OVERHEAD_L1_GAS: u32 = 1000000; pub const BLOCK_OVERHEAD_PUBDATA: u32 = BLOCK_OVERHEAD_L1_GAS / L1_GAS_PER_PUBDATA_BYTE; @@ -62,7 +63,11 @@ pub struct BlockContext { impl BlockContext { pub fn block_gas_price_per_pubdata(&self) -> u64 { - derive_base_fee_and_gas_per_pubdata(self.l1_gas_price, self.fair_l2_gas_price).1 + derive_base_fee_and_gas_per_pubdata(L1PeggedBatchFeeModelInput { + l1_gas_price: self.l1_gas_price, + fair_l2_gas_price: self.fair_l2_gas_price, + }) + .1 } } @@ -86,31 +91,70 @@ pub fn base_fee_to_gas_per_pubdata(l1_gas_price: u64, base_fee: u64) -> u64 { ceil_div(eth_price_per_pubdata_byte, base_fee) } -pub fn derive_base_fee_and_gas_per_pubdata(l1_gas_price: u64, fair_gas_price: u64) -> (u64, u64) { +pub(crate) fn derive_base_fee_and_gas_per_pubdata( + fee_input: L1PeggedBatchFeeModelInput, +) -> (u64, u64) { + let L1PeggedBatchFeeModelInput { + l1_gas_price, + fair_l2_gas_price, + } = fee_input; + let eth_price_per_pubdata_byte = eth_price_per_pubdata_byte(l1_gas_price); - // The baseFee is set in such a way that it is always possible for a transaction to + // The `baseFee` is set in such a way that it is always possible for a transaction to // publish enough public data while compensating us for it. let base_fee = std::cmp::max( - fair_gas_price, + fair_l2_gas_price, ceil_div(eth_price_per_pubdata_byte, MAX_GAS_PER_PUBDATA_BYTE), ); ( base_fee, - base_fee_to_gas_per_pubdata(l1_gas_price, base_fee), + base_fee_to_gas_per_pubdata(fee_input.l1_gas_price, base_fee), ) } +pub(crate) fn get_batch_base_fee(l1_batch_env: &L1BatchEnv) -> u64 { + if let Some(base_fee) = l1_batch_env.enforced_base_fee { + return base_fee; + } + let (base_fee, _) = + derive_base_fee_and_gas_per_pubdata(l1_batch_env.fee_input.into_l1_pegged()); + base_fee +} + impl From<BlockContext> for DerivedBlockContext { fn from(context: BlockContext) -> Self { - let base_fee = - derive_base_fee_and_gas_per_pubdata(context.l1_gas_price, context.fair_l2_gas_price).0; + let base_fee = derive_base_fee_and_gas_per_pubdata(L1PeggedBatchFeeModelInput { + l1_gas_price: context.l1_gas_price, + fair_l2_gas_price: context.fair_l2_gas_price, + }) + .0; DerivedBlockContext { context, base_fee } } } +/// The size of the bootloader memory in bytes which is used by the protocol. +/// While the maximal possible size is a lot higher, we restrict ourselves to a certain limit to reduce +/// the requirements on RAM. +pub(crate) const USED_BOOTLOADER_MEMORY_BYTES: usize = 1 << 24; +pub(crate) const USED_BOOTLOADER_MEMORY_WORDS: usize = USED_BOOTLOADER_MEMORY_BYTES / 32; + +// This is the amount of pubdata such that it should always be possible to publish +// it from a single transaction. Note that these pubdata bytes include only bytes that are +// to be published inside the body of transaction (i.e. excluding factory deps). +const GUARANTEED_PUBDATA_PER_L1_BATCH: u64 = 4000; + +// The users should always be able to provide `MAX_GAS_PER_PUBDATA_BYTE` gas per pubdata in their +// transactions so that they are able to send at least `GUARANTEED_PUBDATA_PER_L1_BATCH` bytes per +// transaction.
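+// As a worked example: if `MAX_L2_TX_GAS_LIMIT` were 80_000_000 (its value is defined
+// elsewhere and is only an assumption here), this constant would come out to
+// 80_000_000 / 4_000 = 20_000 gas per pubdata byte, i.e. a transaction spending its
+// entire gas limit on pubdata at this price can still publish the guaranteed 4_000 bytes.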
+pub(crate) const MAX_GAS_PER_PUBDATA_BYTE: u64 = + MAX_L2_TX_GAS_LIMIT / GUARANTEED_PUBDATA_PER_L1_BATCH; + +// The maximal number of transactions in a single batch +pub(crate) const MAX_TXS_IN_BLOCK: usize = 1024; + // The first 32 slots are reserved for debugging purposes pub const DEBUG_SLOTS_OFFSET: usize = 8; pub const DEBUG_FIRST_SLOTS: usize = 32; @@ -251,12 +295,12 @@ pub fn init_vm_with_gas_limit( } #[derive(Debug, Clone, Copy)] -// The block.number/block.timestamp data are stored in the CONTEXT_SYSTEM_CONTRACT. +// The `block.number` / `block.timestamp` data are stored in the `CONTEXT_SYSTEM_CONTRACT`. // The bootloader can support execution in two modes: -// - "NewBlock" when the new block is created. It is enforced that the block.number is incremented by 1 +// - `NewBlock` when the new block is created. It is enforced that the block.number is incremented by 1 // and the timestamp is non-decreasing. Also, the L2->L1 message used to verify the correctness of the previous root hash is sent. // This is the mode that should be used in the state keeper. -// - "OverrideCurrent" when we need to provide custom block.number and block.timestamp. ONLY to be used in testing/ethCalls. +// - `OverrideCurrent` when we need to provide custom block.number and block.timestamp. ONLY to be used in testing / `ethCalls`. pub enum BlockContextMode { NewBlock(DerivedBlockContext, U256), OverrideCurrent(DerivedBlockContext), @@ -589,7 +633,7 @@ pub(crate) fn get_bootloader_memory_for_encoded_tx( let encoding_length = encoded_tx.len(); memory.extend((tx_description_offset..tx_description_offset + encoding_length).zip(encoded_tx)); - // Note, +1 is moving for poitner + // Note, +1 is moving for pointer let compressed_bytecodes_offset = COMPRESSED_BYTECODES_OFFSET + 1 + previous_compressed_bytecode_size; diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/README.md b/core/lib/multivm/src/versions/vm_boojum_integration/README.md new file mode 100644 index 00000000000..d515df0dfc6 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/README.md @@ -0,0 +1,44 @@ +# VM Crate + +This crate contains code that interacts with the VM (Virtual Machine). The VM itself is in a separate repository +[era-zk_evm][zk_evm_repo_ext]. + +## VM Dependencies + +The VM relies on several subcomponents or traits, such as Memory and Storage. These traits are defined in the `zk_evm` +repository, while their implementations can be found in this crate, such as the storage implementation in +`oracles/storage.rs` and the Memory implementation in `memory.rs`. + +Many of these implementations also support easy rollbacks and history, which is useful when creating a block with +multiple transactions and needing to return the VM to a previous state if a transaction doesn't fit. + +## Running the VM + +To interact with the VM, you must initialize it with `L1BatchEnv`, which represents the initial parameters of the batch, +`SystemEnv`, that represents the system parameters, and a reference to the Storage. To execute a transaction, you have +to push the transaction into the bootloader memory and call the `execute_next_transaction` method. + +### Tracers + +The VM implementation allows for the addition of `Tracers`, which are activated before and after each instruction. This +provides a more in-depth look into the VM, collecting detailed debugging information and logs. More details can be found +in the `tracer/` directory. + +This VM also supports custom tracers. 
You can call the `inspect_next_transaction` method with a custom tracer and +receive the result of the execution. + +### Bootloader + +In the context of zkEVM, we usually think about transactions. However, from the VM's perspective, it runs a single +program called the bootloader, which internally processes multiple transactions. + +### Rollbacks + +The `VMInstance` in `vm.rs` allows for easy rollbacks. You can save the current state at any moment by calling +`make_snapshot()` and return to that state using `rollback_to_the_latest_snapshot()`. + +This rollback affects all subcomponents, such as memory, storage, and events, and is mainly used if a transaction +doesn't fit in a block. + +[zk_evm_repo]: https://github.com/matter-labs/zk_evm 'internal zk EVM repo' +[zk_evm_repo_ext]: https://github.com/matter-labs/era-zk_evm 'external zk EVM repo' diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/bootloader_state/l2_block.rs b/core/lib/multivm/src/versions/vm_boojum_integration/bootloader_state/l2_block.rs new file mode 100644 index 00000000000..f032c301c94 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/bootloader_state/l2_block.rs @@ -0,0 +1,87 @@ +use std::cmp::Ordering; + +use zksync_types::{MiniblockNumber, H256}; +use zksync_utils::concat_and_hash; + +use crate::{ + interface::{L2Block, L2BlockEnv}, + vm_boojum_integration::{ + bootloader_state::{snapshot::L2BlockSnapshot, tx::BootloaderTx}, + utils::l2_blocks::l2_block_hash, + }, +}; + +const EMPTY_TXS_ROLLING_HASH: H256 = H256::zero(); + +#[derive(Debug, Clone)] +pub(crate) struct BootloaderL2Block { + pub(crate) number: u32, + pub(crate) timestamp: u64, + pub(crate) txs_rolling_hash: H256, // The rolling hash of all the transactions in the miniblock + pub(crate) prev_block_hash: H256, + // Number of the first L2 block tx in L1 batch + pub(crate) first_tx_index: usize, + pub(crate) max_virtual_blocks_to_create: u32, + pub(super) txs: Vec, +} + +impl BootloaderL2Block { + pub(crate) fn new(l2_block: L2BlockEnv, first_tx_place: usize) -> Self { + Self { + number: l2_block.number, + timestamp: l2_block.timestamp, + txs_rolling_hash: EMPTY_TXS_ROLLING_HASH, + prev_block_hash: l2_block.prev_block_hash, + first_tx_index: first_tx_place, + max_virtual_blocks_to_create: l2_block.max_virtual_blocks_to_create, + txs: vec![], + } + } + + pub(super) fn push_tx(&mut self, tx: BootloaderTx) { + self.update_rolling_hash(tx.hash); + self.txs.push(tx) + } + + pub(crate) fn get_hash(&self) -> H256 { + l2_block_hash( + MiniblockNumber(self.number), + self.timestamp, + self.prev_block_hash, + self.txs_rolling_hash, + ) + } + + fn update_rolling_hash(&mut self, tx_hash: H256) { + self.txs_rolling_hash = concat_and_hash(self.txs_rolling_hash, tx_hash) + } + + pub(crate) fn interim_version(&self) -> BootloaderL2Block { + let mut interim = self.clone(); + interim.max_virtual_blocks_to_create = 0; + interim + } + + pub(crate) fn make_snapshot(&self) -> L2BlockSnapshot { + L2BlockSnapshot { + txs_rolling_hash: self.txs_rolling_hash, + txs_len: self.txs.len(), + } + } + + pub(crate) fn apply_snapshot(&mut self, snapshot: L2BlockSnapshot) { + self.txs_rolling_hash = snapshot.txs_rolling_hash; + match self.txs.len().cmp(&snapshot.txs_len) { + Ordering::Greater => self.txs.truncate(snapshot.txs_len), + Ordering::Less => panic!("Applying snapshot from future is not supported"), + Ordering::Equal => {} + } + } + pub(crate) fn l2_block(&self) -> L2Block { + L2Block { + number: self.number, + timestamp: self.timestamp, + hash: 
self.get_hash(), + } + } +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/bootloader_state/mod.rs b/core/lib/multivm/src/versions/vm_boojum_integration/bootloader_state/mod.rs new file mode 100644 index 00000000000..73830de2759 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/bootloader_state/mod.rs @@ -0,0 +1,8 @@ +mod l2_block; +mod snapshot; +mod state; +mod tx; + +pub(crate) mod utils; +pub(crate) use snapshot::BootloaderStateSnapshot; +pub use state::BootloaderState; diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/bootloader_state/snapshot.rs b/core/lib/multivm/src/versions/vm_boojum_integration/bootloader_state/snapshot.rs new file mode 100644 index 00000000000..8f1cec3cb7f --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/bootloader_state/snapshot.rs @@ -0,0 +1,25 @@ +use zksync_types::H256; + +#[derive(Debug, Clone)] +pub(crate) struct BootloaderStateSnapshot { + /// ID of the next transaction to be executed. + pub(crate) tx_to_execute: usize, + /// Stored L2 blocks in bootloader memory + pub(crate) l2_blocks_len: usize, + /// Snapshot of the last L2 block. Only this block could be changed during the rollback + pub(crate) last_l2_block: L2BlockSnapshot, + /// The number of 32-byte words spent on the already included compressed bytecodes. + pub(crate) compressed_bytecodes_encoding: usize, + /// Current offset of the free space in the bootloader memory. + pub(crate) free_tx_offset: usize, + /// Whether the pubdata information has been provided already + pub(crate) is_pubdata_information_provided: bool, +} + +#[derive(Debug, Clone)] +pub(crate) struct L2BlockSnapshot { + /// The rolling hash of all the transactions in the miniblock + pub(crate) txs_rolling_hash: H256, + /// The number of transactions in the last L2 block + pub(crate) txs_len: usize, +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/bootloader_state/state.rs b/core/lib/multivm/src/versions/vm_boojum_integration/bootloader_state/state.rs new file mode 100644 index 00000000000..db13d2aace5 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/bootloader_state/state.rs @@ -0,0 +1,295 @@ +use std::cmp::Ordering; + +use once_cell::sync::OnceCell; +use zksync_types::{L2ChainId, U256}; +use zksync_utils::bytecode::CompressedBytecodeInfo; + +use super::{tx::BootloaderTx, utils::apply_pubdata_to_memory}; +use crate::{ + interface::{BootloaderMemory, L2BlockEnv, TxExecutionMode}, + vm_boojum_integration::{ + bootloader_state::{ + l2_block::BootloaderL2Block, + snapshot::BootloaderStateSnapshot, + utils::{apply_l2_block, apply_tx_to_memory}, + }, + constants::TX_DESCRIPTION_OFFSET, + types::internals::{PubdataInput, TransactionData}, + utils::l2_blocks::assert_next_block, + }, +}; + +/// Intermediate bootloader-related VM state. +/// +/// Required to process transactions one by one (since we intercept the VM execution to execute +/// transactions and add new ones to the memory on the fly). +/// Keeps tracking everything related to the bootloader memory and can restore the whole memory. +/// +/// +/// Serves two purposes: +/// - Tracks where next tx should be pushed to in the bootloader memory. +/// - Tracks which transaction should be executed next. +#[derive(Debug, Clone)] +pub struct BootloaderState { + /// ID of the next transaction to be executed. + /// See the structure doc-comment for a better explanation of purpose. 
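+ /// (`current_tx()` below returns `tx_to_execute - 1`, i.e. the transaction that was
+ /// most recently moved to execution.)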
+ tx_to_execute: usize,
+ /// Stored L2 blocks in bootloader memory
+ l2_blocks: Vec<BootloaderL2Block>,
+ /// The number of 32-byte words spent on the already included compressed bytecodes.
+ compressed_bytecodes_encoding: usize,
+ /// Initial memory of bootloader
+ initial_memory: BootloaderMemory,
+ /// Mode of txs for execution; it can be changed once per VM launch
+ execution_mode: TxExecutionMode,
+ /// Current offset of the free space in the bootloader memory.
+ free_tx_offset: usize,
+ /// Information about the pubdata that will need to be supplied to the L1Messenger
+ pubdata_information: OnceCell<PubdataInput>,
+}
+
+impl BootloaderState {
+ pub(crate) fn new(
+ execution_mode: TxExecutionMode,
+ initial_memory: BootloaderMemory,
+ first_l2_block: L2BlockEnv,
+ ) -> Self {
+ let l2_block = BootloaderL2Block::new(first_l2_block, 0);
+ Self {
+ tx_to_execute: 0,
+ compressed_bytecodes_encoding: 0,
+ l2_blocks: vec![l2_block],
+ initial_memory,
+ execution_mode,
+ free_tx_offset: 0,
+ pubdata_information: Default::default(),
+ }
+ }
+
+ pub(crate) fn set_refund_for_current_tx(&mut self, refund: u32) {
+ let current_tx = self.current_tx();
+ // We can't use the latest `l2_block` to find the tx for which to set the refund,
+ // because we may fill the whole batch first and only then execute txs one by one
+ let tx = self.find_tx_mut(current_tx);
+ tx.refund = refund;
+ }
+
+ pub(crate) fn set_pubdata_input(&mut self, info: PubdataInput) {
+ self.pubdata_information
+ .set(info)
+ .expect("Pubdata information is already set");
+ }
+
+ pub(crate) fn start_new_l2_block(&mut self, l2_block: L2BlockEnv) {
+ let last_block = self.last_l2_block();
+ assert!(
+ !last_block.txs.is_empty(),
+ "Cannot create new miniblocks on top of empty ones"
+ );
+ assert_next_block(&last_block.l2_block(), &l2_block);
+ self.push_l2_block(l2_block);
+ }
+
+ /// This method bypasses sanity checks and should be used carefully.
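+ /// Unlike `start_new_l2_block`, it does not check that the pushed block correctly
+ /// follows the previous one (there is no `assert_next_block` call here).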
+ pub(crate) fn push_l2_block(&mut self, l2_block: L2BlockEnv) { + self.l2_blocks + .push(BootloaderL2Block::new(l2_block, self.free_tx_index())) + } + + pub(crate) fn push_tx( + &mut self, + tx: TransactionData, + predefined_overhead: u32, + predefined_refund: u32, + compressed_bytecodes: Vec, + trusted_ergs_limit: U256, + chain_id: L2ChainId, + ) -> BootloaderMemory { + let tx_offset = self.free_tx_offset(); + let bootloader_tx = BootloaderTx::new( + tx, + predefined_refund, + predefined_overhead, + trusted_ergs_limit, + compressed_bytecodes, + tx_offset, + chain_id, + ); + + let mut memory = vec![]; + let compressed_bytecode_size = apply_tx_to_memory( + &mut memory, + &bootloader_tx, + self.last_l2_block(), + self.free_tx_index(), + self.free_tx_offset(), + self.compressed_bytecodes_encoding, + self.execution_mode, + self.last_l2_block().txs.is_empty(), + ); + self.compressed_bytecodes_encoding += compressed_bytecode_size; + self.free_tx_offset = tx_offset + bootloader_tx.encoded_len(); + self.last_mut_l2_block().push_tx(bootloader_tx); + memory + } + + pub(crate) fn last_l2_block(&self) -> &BootloaderL2Block { + self.l2_blocks.last().unwrap() + } + pub(crate) fn get_pubdata_information(&self) -> &PubdataInput { + self.pubdata_information + .get() + .expect("Pubdata information is not set") + } + + fn last_mut_l2_block(&mut self) -> &mut BootloaderL2Block { + self.l2_blocks.last_mut().unwrap() + } + + /// Apply all bootloader transaction to the initial memory + pub(crate) fn bootloader_memory(&self) -> BootloaderMemory { + let mut initial_memory = self.initial_memory.clone(); + let mut offset = 0; + let mut compressed_bytecodes_offset = 0; + let mut tx_index = 0; + for l2_block in &self.l2_blocks { + for (num, tx) in l2_block.txs.iter().enumerate() { + let compressed_bytecodes_size = apply_tx_to_memory( + &mut initial_memory, + tx, + l2_block, + tx_index, + offset, + compressed_bytecodes_offset, + self.execution_mode, + num == 0, + ); + offset += tx.encoded_len(); + compressed_bytecodes_offset += compressed_bytecodes_size; + tx_index += 1; + } + if l2_block.txs.is_empty() { + apply_l2_block(&mut initial_memory, l2_block, tx_index) + } + } + + let pubdata_information = self + .pubdata_information + .clone() + .into_inner() + .expect("Empty pubdata information"); + + apply_pubdata_to_memory(&mut initial_memory, pubdata_information); + initial_memory + } + + fn free_tx_offset(&self) -> usize { + self.free_tx_offset + } + + pub(crate) fn free_tx_index(&self) -> usize { + let l2_block = self.last_l2_block(); + l2_block.first_tx_index + l2_block.txs.len() + } + + pub(crate) fn get_last_tx_compressed_bytecodes(&self) -> Vec { + if let Some(tx) = self.last_l2_block().txs.last() { + tx.compressed_bytecodes.clone() + } else { + vec![] + } + } + + /// Returns the id of current tx + pub(crate) fn current_tx(&self) -> usize { + self.tx_to_execute + .checked_sub(1) + .expect("There are no current tx to execute") + } + + /// Returns the ID of the next transaction to be executed and increments the local transaction counter. + pub(crate) fn move_tx_to_execute_pointer(&mut self) -> usize { + assert!( + self.tx_to_execute < self.free_tx_index(), + "Attempt to execute tx that was not pushed to memory. 
Tx ID: {}, txs in bootloader: {}", + self.tx_to_execute, + self.free_tx_index() + ); + + let old = self.tx_to_execute; + self.tx_to_execute += 1; + old + } + + /// Get offset of tx description + pub(crate) fn get_tx_description_offset(&self, tx_index: usize) -> usize { + TX_DESCRIPTION_OFFSET + self.find_tx(tx_index).offset + } + + pub(crate) fn insert_fictive_l2_block(&mut self) -> &BootloaderL2Block { + let block = self.last_l2_block(); + if !block.txs.is_empty() { + self.start_new_l2_block(L2BlockEnv { + timestamp: block.timestamp + 1, + number: block.number + 1, + prev_block_hash: block.get_hash(), + max_virtual_blocks_to_create: 1, + }); + } + self.last_l2_block() + } + + fn find_tx(&self, tx_index: usize) -> &BootloaderTx { + for block in self.l2_blocks.iter().rev() { + if tx_index >= block.first_tx_index { + return &block.txs[tx_index - block.first_tx_index]; + } + } + panic!("The tx with index {} must exist", tx_index) + } + + fn find_tx_mut(&mut self, tx_index: usize) -> &mut BootloaderTx { + for block in self.l2_blocks.iter_mut().rev() { + if tx_index >= block.first_tx_index { + return &mut block.txs[tx_index - block.first_tx_index]; + } + } + panic!("The tx with index {} must exist", tx_index) + } + + pub(crate) fn get_snapshot(&self) -> BootloaderStateSnapshot { + BootloaderStateSnapshot { + tx_to_execute: self.tx_to_execute, + l2_blocks_len: self.l2_blocks.len(), + last_l2_block: self.last_l2_block().make_snapshot(), + compressed_bytecodes_encoding: self.compressed_bytecodes_encoding, + free_tx_offset: self.free_tx_offset, + is_pubdata_information_provided: self.pubdata_information.get().is_some(), + } + } + + pub(crate) fn apply_snapshot(&mut self, snapshot: BootloaderStateSnapshot) { + self.tx_to_execute = snapshot.tx_to_execute; + self.compressed_bytecodes_encoding = snapshot.compressed_bytecodes_encoding; + self.free_tx_offset = snapshot.free_tx_offset; + match self.l2_blocks.len().cmp(&snapshot.l2_blocks_len) { + Ordering::Greater => self.l2_blocks.truncate(snapshot.l2_blocks_len), + Ordering::Less => panic!("Applying snapshot from future is not supported"), + Ordering::Equal => {} + } + self.last_mut_l2_block() + .apply_snapshot(snapshot.last_l2_block); + + if !snapshot.is_pubdata_information_provided { + self.pubdata_information = Default::default(); + } else { + // Under the correct usage of the snapshots of the bootloader state, + // this assertion should never fail, i.e. since the pubdata information + // can be set only once. However, we have this assertion just in case. + assert!( + self.pubdata_information.get().is_some(), + "Snapshot with no pubdata can not rollback to snapshot with one" + ); + } + } +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/bootloader_state/tx.rs b/core/lib/multivm/src/versions/vm_boojum_integration/bootloader_state/tx.rs new file mode 100644 index 00000000000..3030427281b --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/bootloader_state/tx.rs @@ -0,0 +1,49 @@ +use zksync_types::{L2ChainId, H256, U256}; +use zksync_utils::bytecode::CompressedBytecodeInfo; + +use crate::vm_boojum_integration::types::internals::TransactionData; + +/// Information about tx necessary for execution in bootloader. 
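+/// Built from `TransactionData` by `BootloaderTx::new` below and laid out in the
+/// bootloader memory by `apply_tx_to_memory` (see `bootloader_state/utils.rs`).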
+#[derive(Debug, Clone)]
+pub(super) struct BootloaderTx {
+ pub(super) hash: H256,
+ /// Encoded transaction
+ pub(super) encoded: Vec<U256>,
+ /// Compressed bytecodes, which have been published during this transaction
+ pub(super) compressed_bytecodes: Vec<CompressedBytecodeInfo>,
+ /// Refund for this transaction
+ pub(super) refund: u32,
+ /// Gas overhead
+ pub(super) gas_overhead: u32,
+ /// Gas limit for this transaction. It can be different from the gas limit inside the transaction
+ pub(super) trusted_gas_limit: U256,
+ /// Offset of the tx in bootloader memory
+ pub(super) offset: usize,
+}
+
+impl BootloaderTx {
+ pub(super) fn new(
+ tx: TransactionData,
+ predefined_refund: u32,
+ predefined_overhead: u32,
+ trusted_gas_limit: U256,
+ compressed_bytecodes: Vec<CompressedBytecodeInfo>,
+ offset: usize,
+ chain_id: L2ChainId,
+ ) -> Self {
+ let hash = tx.tx_hash(chain_id);
+ Self {
+ hash,
+ encoded: tx.into_tokens(),
+ compressed_bytecodes,
+ refund: predefined_refund,
+ gas_overhead: predefined_overhead,
+ trusted_gas_limit,
+ offset,
+ }
+ }
+
+ pub(super) fn encoded_len(&self) -> usize {
+ self.encoded.len()
+ }
+}
diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/bootloader_state/utils.rs b/core/lib/multivm/src/versions/vm_boojum_integration/bootloader_state/utils.rs
new file mode 100644
index 00000000000..77a8ed2ce9b
--- /dev/null
+++ b/core/lib/multivm/src/versions/vm_boojum_integration/bootloader_state/utils.rs
@@ -0,0 +1,177 @@
+use zksync_types::{ethabi, U256};
+use zksync_utils::{bytecode::CompressedBytecodeInfo, bytes_to_be_words, h256_to_u256};
+
+use super::tx::BootloaderTx;
+use crate::{
+ interface::{BootloaderMemory, TxExecutionMode},
+ vm_boojum_integration::{
+ bootloader_state::l2_block::BootloaderL2Block,
+ constants::{
+ BOOTLOADER_TX_DESCRIPTION_OFFSET, BOOTLOADER_TX_DESCRIPTION_SIZE,
+ COMPRESSED_BYTECODES_OFFSET, OPERATOR_PROVIDED_L1_MESSENGER_PUBDATA_OFFSET,
+ OPERATOR_PROVIDED_L1_MESSENGER_PUBDATA_SLOTS, OPERATOR_REFUNDS_OFFSET,
+ TX_DESCRIPTION_OFFSET, TX_OPERATOR_L2_BLOCK_INFO_OFFSET,
+ TX_OPERATOR_SLOTS_PER_L2_BLOCK_INFO, TX_OVERHEAD_OFFSET, TX_TRUSTED_GAS_LIMIT_OFFSET,
+ },
+ types::internals::PubdataInput,
+ },
+};
+
+pub(super) fn get_memory_for_compressed_bytecodes(
+ compressed_bytecodes: &[CompressedBytecodeInfo],
+) -> Vec<U256> {
+ let memory_addition: Vec<_> = compressed_bytecodes
+ .iter()
+ .flat_map(|x| x.encode_call())
+ .collect();
+
+ bytes_to_be_words(memory_addition)
+}
+
+#[allow(clippy::too_many_arguments)]
+pub(super) fn apply_tx_to_memory(
+ memory: &mut BootloaderMemory,
+ bootloader_tx: &BootloaderTx,
+ bootloader_l2_block: &BootloaderL2Block,
+ tx_index: usize,
+ tx_offset: usize,
+ compressed_bytecodes_size: usize,
+ execution_mode: TxExecutionMode,
+ start_new_l2_block: bool,
+) -> usize {
+ let bootloader_description_offset =
+ BOOTLOADER_TX_DESCRIPTION_OFFSET + BOOTLOADER_TX_DESCRIPTION_SIZE * tx_index;
+ let tx_description_offset = TX_DESCRIPTION_OFFSET + tx_offset;
+
+ memory.push((
+ bootloader_description_offset,
+ assemble_tx_meta(execution_mode, true),
+ ));
+
+ memory.push((
+ bootloader_description_offset + 1,
+ U256::from_big_endian(&(32 * tx_description_offset).to_be_bytes()),
+ ));
+
+ let refund_offset = OPERATOR_REFUNDS_OFFSET + tx_index;
+ memory.push((refund_offset, bootloader_tx.refund.into()));
+
+ let overhead_offset = TX_OVERHEAD_OFFSET + tx_index;
+ memory.push((overhead_offset, bootloader_tx.gas_overhead.into()));
+
+ let trusted_gas_limit_offset = TX_TRUSTED_GAS_LIMIT_OFFSET + tx_index;
+ memory.push((trusted_gas_limit_offset,
bootloader_tx.trusted_gas_limit)); + + memory.extend( + (tx_description_offset..tx_description_offset + bootloader_tx.encoded_len()) + .zip(bootloader_tx.encoded.clone()), + ); + + let bootloader_l2_block = if start_new_l2_block { + bootloader_l2_block.clone() + } else { + bootloader_l2_block.interim_version() + }; + apply_l2_block(memory, &bootloader_l2_block, tx_index); + + // Note, `+1` is moving for pointer + let compressed_bytecodes_offset = COMPRESSED_BYTECODES_OFFSET + 1 + compressed_bytecodes_size; + + let encoded_compressed_bytecodes = + get_memory_for_compressed_bytecodes(&bootloader_tx.compressed_bytecodes); + let compressed_bytecodes_encoding = encoded_compressed_bytecodes.len(); + + memory.extend( + (compressed_bytecodes_offset + ..compressed_bytecodes_offset + encoded_compressed_bytecodes.len()) + .zip(encoded_compressed_bytecodes), + ); + compressed_bytecodes_encoding +} + +pub(crate) fn apply_l2_block( + memory: &mut BootloaderMemory, + bootloader_l2_block: &BootloaderL2Block, + txs_index: usize, +) { + // Since L2 block information starts from the `TX_OPERATOR_L2_BLOCK_INFO_OFFSET` and each + // L2 block info takes `TX_OPERATOR_SLOTS_PER_L2_BLOCK_INFO slots`, the position where the L2 block info + // for this transaction needs to be written is: + + let block_position = + TX_OPERATOR_L2_BLOCK_INFO_OFFSET + txs_index * TX_OPERATOR_SLOTS_PER_L2_BLOCK_INFO; + + memory.extend(vec![ + (block_position, bootloader_l2_block.number.into()), + (block_position + 1, bootloader_l2_block.timestamp.into()), + ( + block_position + 2, + h256_to_u256(bootloader_l2_block.prev_block_hash), + ), + ( + block_position + 3, + bootloader_l2_block.max_virtual_blocks_to_create.into(), + ), + ]) +} + +pub(crate) fn apply_pubdata_to_memory( + memory: &mut BootloaderMemory, + pubdata_information: PubdataInput, +) { + // Skipping two slots as they will be filled by the bootloader itself: + // - One slot is for the selector of the call to the `L1Messenger`. + // - The other slot is for the 0x20 offset for the calldata. + let l1_messenger_pubdata_start_slot = OPERATOR_PROVIDED_L1_MESSENGER_PUBDATA_OFFSET + 2; + + // Need to skip first word as it represents array offset + // while bootloader expects only `[len || data]` + let pubdata = ethabi::encode(&[ethabi::Token::Bytes( + pubdata_information.build_pubdata(true), + )])[32..] + .to_vec(); + + assert!( + pubdata.len() / 32 <= OPERATOR_PROVIDED_L1_MESSENGER_PUBDATA_SLOTS - 2, + "The encoded pubdata is too big" + ); + + pubdata + .chunks(32) + .enumerate() + .for_each(|(slot_offset, value)| { + memory.push(( + l1_messenger_pubdata_start_slot + slot_offset, + U256::from(value), + )) + }); +} + +/// Forms a word that contains meta information for the transaction execution. +/// +/// # Current layout +/// +/// - 0 byte (MSB): server-side tx execution mode +/// In the server, we may want to execute different parts of the transaction in the different context +/// For example, when checking validity, we don't want to actually execute transaction and have side effects. +/// +/// Possible values: +/// - `0x00`: validate & execute (normal mode) +/// - `0x02`: execute but DO NOT validate +/// +/// - 31 byte (LSB): whether to execute transaction or not (at all). +pub(super) fn assemble_tx_meta(execution_mode: TxExecutionMode, execute_tx: bool) -> U256 { + let mut output = [0u8; 32]; + + // Set 0 byte (execution mode) + output[0] = match execution_mode { + TxExecutionMode::VerifyExecute => 0x00, + TxExecutionMode::EstimateFee { .. 
} => 0x00,
+ TxExecutionMode::EthCall { .. } => 0x02,
+ };
+
+ // Set 31 byte (marker for tx execution)
+ output[31] = u8::from(execute_tx);
+
+ U256::from_big_endian(&output)
+}
diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/constants.rs b/core/lib/multivm/src/versions/vm_boojum_integration/constants.rs
new file mode 100644
index 00000000000..29a67aa20a6
--- /dev/null
+++ b/core/lib/multivm/src/versions/vm_boojum_integration/constants.rs
@@ -0,0 +1,144 @@
+use zk_evm_1_4_0::aux_structures::MemoryPage;
+pub use zk_evm_1_4_0::zkevm_opcode_defs::system_params::{
+ ERGS_PER_CIRCUIT, INITIAL_STORAGE_WRITE_PUBDATA_BYTES, MAX_PUBDATA_PER_BLOCK,
+};
+use zksync_system_constants::{L1_GAS_PER_PUBDATA_BYTE, MAX_L2_TX_GAS_LIMIT, MAX_NEW_FACTORY_DEPS};
+
+use crate::vm_boojum_integration::old_vm::utils::heap_page_from_base;
+
+/// The size of the bootloader memory in bytes which is used by the protocol.
+/// While the maximal possible size is a lot higher, we restrict ourselves to a certain limit to reduce
+/// the requirements on RAM.
+pub(crate) const USED_BOOTLOADER_MEMORY_BYTES: usize = 1 << 24;
+pub(crate) const USED_BOOTLOADER_MEMORY_WORDS: usize = USED_BOOTLOADER_MEMORY_BYTES / 32;
+
+// This is the number of pubdata bytes that it should always be possible to publish
+// from a single transaction. Note that these pubdata bytes include only the bytes that are
+// to be published inside the body of a transaction (i.e. excluding factory deps).
+pub(crate) const GUARANTEED_PUBDATA_PER_L1_BATCH: u64 = 4000;
+
+// The users should always be able to provide `MAX_GAS_PER_PUBDATA_BYTE` gas per pubdata in their
+// transactions so that they are able to send at least `GUARANTEED_PUBDATA_PER_L1_BATCH` bytes per
+// transaction.
+pub(crate) const MAX_GAS_PER_PUBDATA_BYTE: u64 =
+ MAX_L2_TX_GAS_LIMIT / GUARANTEED_PUBDATA_PER_L1_BATCH;
+
+// The maximal number of transactions in a single batch
+pub(crate) const MAX_TXS_IN_BLOCK: usize = 1024;
+
+/// Max cycles for a single transaction.
+pub const MAX_CYCLES_FOR_TX: u32 = u32::MAX;
+
+/// The first 32 slots are reserved for debugging purposes
+pub(crate) const DEBUG_SLOTS_OFFSET: usize = 8;
+pub(crate) const DEBUG_FIRST_SLOTS: usize = 32;
+/// The next 33 slots are reserved for dealing with the paymaster context (1 slot for storing length + 32 slots for storing the actual context).
+pub(crate) const PAYMASTER_CONTEXT_SLOTS: usize = 32 + 1;
+/// The next `PAYMASTER_CONTEXT_SLOTS + 7` free slots are needed before each tx, so that the
+/// postOp operation could be encoded correctly.
+pub(crate) const MAX_POSTOP_SLOTS: usize = PAYMASTER_CONTEXT_SLOTS + 7;
+
+/// Slots used to store the current L2 transaction's hash and the hash recommended
+/// to be used for signing the transaction's content.
+const CURRENT_L2_TX_HASHES_SLOTS: usize = 2;
+
+/// Slots used to store the calldata for the KnownCodesStorage to mark new factory
+/// dependencies as known ones. Besides the slots for the new factory dependencies themselves,
+/// another 4 slots are needed: for the selector, a marker of whether the user should pay for the pubdata,
+/// the offset for the encoding of the array, and the length of the array.
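+/// With `MAX_NEW_FACTORY_DEPS` = 32 (its value at the time of writing), this reserves
+/// 36 slots below.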
+const NEW_FACTORY_DEPS_RESERVED_SLOTS: usize = MAX_NEW_FACTORY_DEPS + 4;
+
+/// The operator can provide for each transaction the proposed minimal refund
+pub(crate) const OPERATOR_REFUNDS_SLOTS: usize = MAX_TXS_IN_BLOCK;
+
+pub(crate) const OPERATOR_REFUNDS_OFFSET: usize = DEBUG_SLOTS_OFFSET
+ + DEBUG_FIRST_SLOTS
+ + PAYMASTER_CONTEXT_SLOTS
+ + CURRENT_L2_TX_HASHES_SLOTS
+ + NEW_FACTORY_DEPS_RESERVED_SLOTS;
+
+pub(crate) const TX_OVERHEAD_OFFSET: usize = OPERATOR_REFUNDS_OFFSET + OPERATOR_REFUNDS_SLOTS;
+pub(crate) const TX_OVERHEAD_SLOTS: usize = MAX_TXS_IN_BLOCK;
+
+pub(crate) const TX_TRUSTED_GAS_LIMIT_OFFSET: usize = TX_OVERHEAD_OFFSET + TX_OVERHEAD_SLOTS;
+pub(crate) const TX_TRUSTED_GAS_LIMIT_SLOTS: usize = MAX_TXS_IN_BLOCK;
+
+pub(crate) const COMPRESSED_BYTECODES_SLOTS: usize = 32768;
+
+pub(crate) const PRIORITY_TXS_L1_DATA_OFFSET: usize =
+ COMPRESSED_BYTECODES_OFFSET + COMPRESSED_BYTECODES_SLOTS;
+pub(crate) const PRIORITY_TXS_L1_DATA_SLOTS: usize = 2;
+
+pub const OPERATOR_PROVIDED_L1_MESSENGER_PUBDATA_OFFSET: usize =
+ PRIORITY_TXS_L1_DATA_OFFSET + PRIORITY_TXS_L1_DATA_SLOTS;
+
+/// One of the "worst case" scenarios for the number of state diffs in a batch is when 120kb of pubdata is spent
+/// on repeated writes that are all zeroed out. In this case, the number of diffs is 120k / 5 = 24k. This means that we have to
+/// accommodate 6528000 bytes of calldata for the uncompressed state diffs. Adding 120k on top leaves us with
+/// roughly 6650000 bytes needed for calldata. 207813 slots are needed to accommodate this amount of data.
+/// We round up to 208000 slots just in case.
+///
+/// In theory, though, much more calldata could be used (if, for instance, 1 byte is used for the enum index). It is the responsibility of the
+/// operator to ensure that it can form the correct calldata for the L1Messenger.
+pub(crate) const OPERATOR_PROVIDED_L1_MESSENGER_PUBDATA_SLOTS: usize = 208000;
+
+pub(crate) const BOOTLOADER_TX_DESCRIPTION_OFFSET: usize =
+ OPERATOR_PROVIDED_L1_MESSENGER_PUBDATA_OFFSET + OPERATOR_PROVIDED_L1_MESSENGER_PUBDATA_SLOTS;
+
+/// The size of the bootloader memory dedicated to the encodings of transactions
+pub(crate) const BOOTLOADER_TX_ENCODING_SPACE: u32 =
+ (USED_BOOTLOADER_MEMORY_WORDS - TX_DESCRIPTION_OFFSET - MAX_TXS_IN_BLOCK) as u32;
+
+// Size of the bootloader tx description in words
+pub(crate) const BOOTLOADER_TX_DESCRIPTION_SIZE: usize = 2;
+
+/// The actual descriptions of transactions should start after the minor descriptions and `MAX_POSTOP_SLOTS`
+/// free slots to allow postOp encoding.
+pub(crate) const TX_DESCRIPTION_OFFSET: usize = BOOTLOADER_TX_DESCRIPTION_OFFSET
+ + BOOTLOADER_TX_DESCRIPTION_SIZE * MAX_TXS_IN_BLOCK
+ + MAX_POSTOP_SLOTS;
+
+pub(crate) const TX_GAS_LIMIT_OFFSET: usize = 4;
+
+const INITIAL_BASE_PAGE: u32 = 8;
+pub const BOOTLOADER_HEAP_PAGE: u32 = heap_page_from_base(MemoryPage(INITIAL_BASE_PAGE)).0;
+pub const BLOCK_OVERHEAD_GAS: u32 = 1200000;
+pub const BLOCK_OVERHEAD_L1_GAS: u32 = 1000000;
+pub const BLOCK_OVERHEAD_PUBDATA: u32 = BLOCK_OVERHEAD_L1_GAS / L1_GAS_PER_PUBDATA_BYTE;
+
+/// VM Hooks are used for communication between bootloader and tracers.
+/// The 'type' / 'opcode' is put into the `VM_HOOK_POSITION` slot,
+/// and `VM_HOOK_PARAMS_COUNT` parameters (each 32 bytes) are put in the slots before.
+/// So the layout looks like this:
+/// `[param 0][param 1][vmhook opcode]`
+pub const VM_HOOK_POSITION: u32 = RESULT_SUCCESS_FIRST_SLOT - 1;
+pub const VM_HOOK_PARAMS_COUNT: u32 = 2;
+pub const VM_HOOK_PARAMS_START_POSITION: u32 = VM_HOOK_POSITION - VM_HOOK_PARAMS_COUNT;
+
+pub(crate) const MAX_MEM_SIZE_BYTES: u32 = 16777216; // 2^24
+
+/// Arbitrary space in memory closer to the end of the page
+pub const RESULT_SUCCESS_FIRST_SLOT: u32 =
+ (MAX_MEM_SIZE_BYTES - (MAX_TXS_IN_BLOCK as u32) * 32) / 32;
+
+/// How much gas the bootloader is allowed to spend within one block.
+/// Note that this value doesn't correspond to the gas limit of any particular transaction
+/// (except for the fact that, of course, the gas limit for each transaction should be <= `BLOCK_GAS_LIMIT`).
+pub const BLOCK_GAS_LIMIT: u32 =
+ zk_evm_1_4_0::zkevm_opcode_defs::system_params::VM_INITIAL_FRAME_ERGS;
+
+/// How much gas is allowed to be spent on a single transaction in the `eth_call` method
+pub const ETH_CALL_GAS_LIMIT: u32 = MAX_L2_TX_GAS_LIMIT as u32;
+
+/// ID of the transaction from L1
+pub const L1_TX_TYPE: u8 = 255;
+
+pub(crate) const TX_OPERATOR_L2_BLOCK_INFO_OFFSET: usize =
+ TX_TRUSTED_GAS_LIMIT_OFFSET + TX_TRUSTED_GAS_LIMIT_SLOTS;
+
+pub(crate) const TX_OPERATOR_SLOTS_PER_L2_BLOCK_INFO: usize = 4;
+pub(crate) const TX_OPERATOR_L2_BLOCK_INFO_SLOTS: usize =
+ (MAX_TXS_IN_BLOCK + 1) * TX_OPERATOR_SLOTS_PER_L2_BLOCK_INFO;
+
+pub(crate) const COMPRESSED_BYTECODES_OFFSET: usize =
+ TX_OPERATOR_L2_BLOCK_INFO_OFFSET + TX_OPERATOR_L2_BLOCK_INFO_SLOTS;
diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/implementation/bytecode.rs b/core/lib/multivm/src/versions/vm_boojum_integration/implementation/bytecode.rs
new file mode 100644
index 00000000000..2e3770a9c52
--- /dev/null
+++ b/core/lib/multivm/src/versions/vm_boojum_integration/implementation/bytecode.rs
@@ -0,0 +1,58 @@
+use itertools::Itertools;
+use zksync_state::{StoragePtr, WriteStorage};
+use zksync_types::U256;
+use zksync_utils::{
+ bytecode::{compress_bytecode, hash_bytecode, CompressedBytecodeInfo},
+ bytes_to_be_words,
+};
+
+use crate::{interface::VmInterface, vm_boojum_integration::Vm, HistoryMode};
+
+impl<S: WriteStorage, H: HistoryMode> Vm<S, H> {
+ /// Checks whether the last transaction has successfully published its compressed bytecodes and returns `true` if at least one of them is still unknown.
+ pub(crate) fn has_unpublished_bytecodes(&mut self) -> bool {
+ self.get_last_tx_compressed_bytecodes().iter().any(|info| {
+ !self
+ .state
+ .storage
+ .storage
+ .get_ptr()
+ .borrow_mut()
+ .is_bytecode_known(&hash_bytecode(&info.original))
+ })
+ }
+}
+
+/// Converts bytecode to tokens and hashes it.
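+/// The hash is the zkSync bytecode hash (`hash_bytecode`) reinterpreted as a big-endian
+/// `U256`; the "tokens" are the bytecode split into 32-byte big-endian words.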
+pub(crate) fn bytecode_to_factory_dep(bytecode: Vec<u8>) -> (U256, Vec<U256>) {
+ let bytecode_hash = hash_bytecode(&bytecode);
+ let bytecode_hash = U256::from_big_endian(bytecode_hash.as_bytes());
+
+ let bytecode_words = bytes_to_be_words(bytecode);
+
+ (bytecode_hash, bytecode_words)
+}
+
+pub(crate) fn compress_bytecodes<S: WriteStorage>(
+ bytecodes: &[Vec<u8>],
+ storage: StoragePtr<S>,
+) -> Vec<CompressedBytecodeInfo> {
+ bytecodes
+ .iter()
+ .enumerate()
+ .sorted_by_key(|(_idx, dep)| *dep)
+ .dedup_by(|x, y| x.1 == y.1)
+ .filter(|(_idx, dep)| !storage.borrow_mut().is_bytecode_known(&hash_bytecode(dep)))
+ .sorted_by_key(|(idx, _dep)| *idx)
+ .filter_map(|(_idx, dep)| {
+ let compressed_bytecode = compress_bytecode(dep);
+
+ compressed_bytecode
+ .ok()
+ .map(|compressed| CompressedBytecodeInfo {
+ original: dep.clone(),
+ compressed,
+ })
+ })
+ .collect()
+}
diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/implementation/execution.rs b/core/lib/multivm/src/versions/vm_boojum_integration/implementation/execution.rs
new file mode 100644
index 00000000000..1d1d19f92b7
--- /dev/null
+++ b/core/lib/multivm/src/versions/vm_boojum_integration/implementation/execution.rs
@@ -0,0 +1,137 @@
+use zk_evm_1_4_0::aux_structures::Timestamp;
+use zksync_state::WriteStorage;
+
+use crate::{
+ interface::{
+ types::tracer::{TracerExecutionStatus, VmExecutionStopReason},
+ VmExecutionMode, VmExecutionResultAndLogs,
+ },
+ vm_boojum_integration::{
+ old_vm::utils::{vm_may_have_ended_inner, VmExecutionResult},
+ tracers::{
+ dispatcher::TracerDispatcher, DefaultExecutionTracer, PubdataTracer, RefundsTracer,
+ },
+ vm::Vm,
+ },
+ HistoryMode,
+};
+
+impl<S: WriteStorage, H: HistoryMode> Vm<S, H> {
+ pub(crate) fn inspect_inner(
+ &mut self,
+ dispatcher: TracerDispatcher<S, H::VmBoojumIntegration>,
+ execution_mode: VmExecutionMode,
+ ) -> VmExecutionResultAndLogs {
+ let mut enable_refund_tracer = false;
+ if let VmExecutionMode::OneTx = execution_mode {
+ // Move the pointer to the next transaction
+ self.bootloader_state.move_tx_to_execute_pointer();
+ enable_refund_tracer = true;
+ }
+
+ let (_, result) =
+ self.inspect_and_collect_results(dispatcher, execution_mode, enable_refund_tracer);
+ result
+ }
+
+ /// Execute the VM with the given tracers until the stop reason is reached.
+ /// Collect the result from the default tracers.
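+ /// The refund tracer is attached only when a single transaction is executed
+ /// (`VmExecutionMode::OneTx`); see `inspect_inner` above.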
+ fn inspect_and_collect_results( + &mut self, + dispatcher: TracerDispatcher, + execution_mode: VmExecutionMode, + with_refund_tracer: bool, + ) -> (VmExecutionStopReason, VmExecutionResultAndLogs) { + let refund_tracers = + with_refund_tracer.then_some(RefundsTracer::new(self.batch_env.clone())); + let mut tx_tracer: DefaultExecutionTracer = + DefaultExecutionTracer::new( + self.system_env.default_validation_computational_gas_limit, + execution_mode, + dispatcher, + self.storage.clone(), + refund_tracers, + Some(PubdataTracer::new(self.batch_env.clone(), execution_mode)), + ); + + let timestamp_initial = Timestamp(self.state.local_state.timestamp); + let cycles_initial = self.state.local_state.monotonic_cycle_counter; + let gas_remaining_before = self.gas_remaining(); + let spent_pubdata_counter_before = self.state.local_state.spent_pubdata_counter; + + let stop_reason = self.execute_with_default_tracer(&mut tx_tracer); + + let gas_remaining_after = self.gas_remaining(); + + let logs = self.collect_execution_logs_after_timestamp(timestamp_initial); + + let (refunds, pubdata_published) = tx_tracer + .refund_tracer + .as_ref() + .map(|x| (x.get_refunds(), x.pubdata_published())) + .unwrap_or_default(); + + let statistics = self.get_statistics( + timestamp_initial, + cycles_initial, + &tx_tracer, + gas_remaining_before, + gas_remaining_after, + spent_pubdata_counter_before, + pubdata_published, + logs.total_log_queries_count, + tx_tracer.circuits_tracer.estimated_circuits_used, + ); + let result = tx_tracer.result_tracer.into_result(); + + let result = VmExecutionResultAndLogs { + result, + logs, + statistics, + refunds, + }; + + (stop_reason, result) + } + + /// Execute vm with given tracers until the stop reason is reached. + fn execute_with_default_tracer( + &mut self, + tracer: &mut DefaultExecutionTracer, + ) -> VmExecutionStopReason { + tracer.initialize_tracer(&mut self.state); + let result = loop { + // Sanity check: we should never reach the maximum value, because then we won't be able to process the next cycle. + assert_ne!( + self.state.local_state.monotonic_cycle_counter, + u32::MAX, + "VM reached maximum possible amount of cycles. Vm state: {:?}", + self.state + ); + + self.state + .cycle(tracer) + .expect("Failed execution VM cycle."); + + if let TracerExecutionStatus::Stop(reason) = + tracer.finish_cycle(&mut self.state, &mut self.bootloader_state) + { + break VmExecutionStopReason::TracerRequestedStop(reason); + } + if self.has_ended() { + break VmExecutionStopReason::VmFinished; + } + }; + tracer.after_vm_execution(&mut self.state, &self.bootloader_state, result.clone()); + result + } + + fn has_ended(&self) -> bool { + match vm_may_have_ended_inner(&self.state) { + None | Some(VmExecutionResult::MostLikelyDidNotFinish(_, _)) => false, + Some( + VmExecutionResult::Ok(_) | VmExecutionResult::Revert(_) | VmExecutionResult::Panic, + ) => true, + } + } +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/implementation/gas.rs b/core/lib/multivm/src/versions/vm_boojum_integration/implementation/gas.rs new file mode 100644 index 00000000000..56f13de05e5 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/implementation/gas.rs @@ -0,0 +1,43 @@ +use zksync_state::WriteStorage; + +use crate::{ + vm_boojum_integration::{tracers::DefaultExecutionTracer, vm::Vm}, + HistoryMode, +}; + +impl Vm { + /// Returns the amount of gas remaining to the VM. + /// Note that this *does not* correspond to the gas limit of a transaction. 
+ /// To calculate the amount of gas spent by transaction, you should call this method before and after + /// the execution, and subtract these values. + /// + /// Note: this method should only be called when either transaction is fully completed or VM completed + /// its execution. Remaining gas value is read from the current stack frame, so if you'll attempt to + /// read it during the transaction execution, you may receive invalid value. + pub(crate) fn gas_remaining(&self) -> u32 { + self.state.local_state.callstack.current.ergs_remaining + } + + pub(crate) fn calculate_computational_gas_used( + &self, + tracer: &DefaultExecutionTracer, + gas_remaining_before: u32, + spent_pubdata_counter_before: u32, + ) -> u32 { + let total_gas_used = gas_remaining_before + .checked_sub(self.gas_remaining()) + .expect("underflow"); + let gas_used_on_pubdata = + tracer.gas_spent_on_pubdata(&self.state.local_state) - spent_pubdata_counter_before; + total_gas_used + .checked_sub(gas_used_on_pubdata) + .unwrap_or_else(|| { + tracing::error!( + "Gas used on pubdata is greater than total gas used. On pubdata: {}, total: {}", + gas_used_on_pubdata, + total_gas_used + ); + 0 + }) + } +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/implementation/logs.rs b/core/lib/multivm/src/versions/vm_boojum_integration/implementation/logs.rs new file mode 100644 index 00000000000..af307af55e2 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/implementation/logs.rs @@ -0,0 +1,74 @@ +use zk_evm_1_4_0::aux_structures::Timestamp; +use zksync_state::WriteStorage; +use zksync_types::{ + event::extract_l2tol1logs_from_l1_messenger, + l2_to_l1_log::{L2ToL1Log, SystemL2ToL1Log, UserL2ToL1Log}, + VmEvent, +}; + +use crate::{ + interface::types::outputs::VmExecutionLogs, + vm_boojum_integration::{ + old_vm::utils::precompile_calls_count_after_timestamp, utils::logs, vm::Vm, + }, + HistoryMode, +}; + +impl Vm { + pub(crate) fn collect_execution_logs_after_timestamp( + &self, + from_timestamp: Timestamp, + ) -> VmExecutionLogs { + let storage_logs: Vec<_> = self + .state + .storage + .storage_log_queries_after_timestamp(from_timestamp) + .iter() + .map(|log| **log) + .collect(); + let storage_logs_count = storage_logs.len(); + + let (events, system_l2_to_l1_logs) = + self.collect_events_and_l1_system_logs_after_timestamp(from_timestamp); + + let log_queries = self + .state + .event_sink + .log_queries_after_timestamp(from_timestamp); + + let precompile_calls_count = precompile_calls_count_after_timestamp( + self.state.precompiles_processor.timestamp_history.inner(), + from_timestamp, + ); + + let user_logs = extract_l2tol1logs_from_l1_messenger(&events); + + let total_log_queries_count = + storage_logs_count + log_queries.len() + precompile_calls_count; + + VmExecutionLogs { + storage_logs, + events, + user_l2_to_l1_logs: user_logs + .into_iter() + .map(|log| UserL2ToL1Log(log.into())) + .collect(), + system_l2_to_l1_logs: system_l2_to_l1_logs + .into_iter() + .map(SystemL2ToL1Log) + .collect(), + total_log_queries_count, + } + } + + pub(crate) fn collect_events_and_l1_system_logs_after_timestamp( + &self, + from_timestamp: Timestamp, + ) -> (Vec, Vec) { + logs::collect_events_and_l1_system_logs_after_timestamp( + &self.state, + &self.batch_env, + from_timestamp, + ) + } +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/implementation/mod.rs b/core/lib/multivm/src/versions/vm_boojum_integration/implementation/mod.rs new file mode 100644 index 00000000000..161732cf034 --- 
/dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/implementation/mod.rs @@ -0,0 +1,7 @@ +mod bytecode; +mod execution; +mod gas; +mod logs; +mod snapshots; +mod statistics; +mod tx; diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/implementation/snapshots.rs b/core/lib/multivm/src/versions/vm_boojum_integration/implementation/snapshots.rs new file mode 100644 index 00000000000..b5b09c0fd6d --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/implementation/snapshots.rs @@ -0,0 +1,89 @@ +use std::time::Duration; + +use vise::{Buckets, EncodeLabelSet, EncodeLabelValue, Family, Histogram, Metrics}; +use zk_evm_1_4_0::aux_structures::Timestamp; +use zksync_state::WriteStorage; + +use crate::vm_boojum_integration::{ + old_vm::oracles::OracleWithHistory, types::internals::VmSnapshot, vm::Vm, +}; + +#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, EncodeLabelSet, EncodeLabelValue)] +#[metrics(label = "stage", rename_all = "snake_case")] +enum RollbackStage { + DecommitmentProcessorRollback, + EventSinkRollback, + StorageRollback, + MemoryRollback, + PrecompilesProcessorRollback, + ApplyBootloaderSnapshot, +} + +#[derive(Debug, Metrics)] +#[metrics(prefix = "server_vm_boojum_integration")] +struct VmMetrics { + #[metrics(buckets = Buckets::LATENCIES)] + rollback_time: Family>, +} + +#[vise::register] +static METRICS: vise::Global = vise::Global::new(); + +/// Implementation of VM related to rollbacks inside virtual machine +impl Vm { + pub(crate) fn make_snapshot_inner(&mut self) { + self.snapshots.push(VmSnapshot { + // Vm local state contains O(1) various parameters (registers/etc). + // The only "expensive" copying here is copying of the call stack. + // It will take `O(callstack_depth)` to copy it. + // So it is generally recommended to get snapshots of the bootloader frame, + // where the depth is 1. 
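+ // The bootloader state snapshot is cheap as well: `get_snapshot()` only records
+ // counters and the last L2 block's metadata, not the bootloader memory itself.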
+ local_state: self.state.local_state.clone(),
+ bootloader_state: self.bootloader_state.get_snapshot(),
+ });
+ }
+
+ pub(crate) fn rollback_to_snapshot(&mut self, snapshot: VmSnapshot) {
+ let VmSnapshot {
+ local_state,
+ bootloader_state,
+ } = snapshot;
+
+ let stage_latency =
+ METRICS.rollback_time[&RollbackStage::DecommitmentProcessorRollback].start();
+ let timestamp = Timestamp(local_state.timestamp);
+ tracing::trace!("Rolling back decommitter");
+ self.state
+ .decommittment_processor
+ .rollback_to_timestamp(timestamp);
+ stage_latency.observe();
+
+ let stage_latency = METRICS.rollback_time[&RollbackStage::EventSinkRollback].start();
+ tracing::trace!("Rolling back event_sink");
+ self.state.event_sink.rollback_to_timestamp(timestamp);
+ stage_latency.observe();
+
+ let stage_latency = METRICS.rollback_time[&RollbackStage::StorageRollback].start();
+ tracing::trace!("Rolling back storage");
+ self.state.storage.rollback_to_timestamp(timestamp);
+ stage_latency.observe();
+
+ let stage_latency = METRICS.rollback_time[&RollbackStage::MemoryRollback].start();
+ tracing::trace!("Rolling back memory");
+ self.state.memory.rollback_to_timestamp(timestamp);
+ stage_latency.observe();
+
+ let stage_latency =
+ METRICS.rollback_time[&RollbackStage::PrecompilesProcessorRollback].start();
+ tracing::trace!("Rolling back precompiles_processor");
+ self.state
+ .precompiles_processor
+ .rollback_to_timestamp(timestamp);
+ stage_latency.observe();
+
+ self.state.local_state = local_state;
+ let stage_latency = METRICS.rollback_time[&RollbackStage::ApplyBootloaderSnapshot].start();
+ self.bootloader_state.apply_snapshot(bootloader_state);
+ stage_latency.observe();
+ }
+}
diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/implementation/statistics.rs b/core/lib/multivm/src/versions/vm_boojum_integration/implementation/statistics.rs
new file mode 100644
index 00000000000..36780c8b845
--- /dev/null
+++ b/core/lib/multivm/src/versions/vm_boojum_integration/implementation/statistics.rs
@@ -0,0 +1,72 @@
+use zk_evm_1_4_0::aux_structures::Timestamp;
+use zksync_state::WriteStorage;
+use zksync_types::U256;
+
+use crate::{
+ interface::{VmExecutionStatistics, VmMemoryMetrics},
+ vm_boojum_integration::{tracers::DefaultExecutionTracer, vm::Vm},
+ HistoryMode,
+};
+
+/// Module responsible for observing the VM behavior, i.e. calculating the statistics of the VM runs
+/// or reporting the VM memory usage.
+
+impl<S: WriteStorage, H: HistoryMode> Vm<S, H> {
+ /// Get statistics about TX execution.
+ #[allow(clippy::too_many_arguments)]
+ pub(crate) fn get_statistics(
+ &self,
+ timestamp_initial: Timestamp,
+ cycles_initial: u32,
+ tracer: &DefaultExecutionTracer<S, H::VmBoojumIntegration>,
+ gas_remaining_before: u32,
+ gas_remaining_after: u32,
+ spent_pubdata_counter_before: u32,
+ pubdata_published: u32,
+ total_log_queries_count: usize,
+ estimated_circuits_used: f32,
+ ) -> VmExecutionStatistics {
+ let computational_gas_used = self.calculate_computational_gas_used(
+ tracer,
+ gas_remaining_before,
+ spent_pubdata_counter_before,
+ );
+ VmExecutionStatistics {
+ contracts_used: self
+ .state
+ .decommittment_processor
+ .get_decommitted_bytecodes_after_timestamp(timestamp_initial),
+ cycles_used: self.state.local_state.monotonic_cycle_counter - cycles_initial,
+ gas_used: gas_remaining_before - gas_remaining_after,
+ computational_gas_used,
+ total_log_queries: total_log_queries_count,
+ pubdata_published,
+ estimated_circuits_used,
+ }
+ }
+
+ /// Returns the hashes of the bytecodes that have been decommitted by the decommitment processor.
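+ /// Note: this covers the whole VM run so far, not just the last transaction.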
+ pub(crate) fn get_used_contracts(&self) -> Vec { + self.state + .decommittment_processor + .decommitted_code_hashes + .inner() + .keys() + .cloned() + .collect() + } + + /// Returns the info about all oracles' sizes. + pub(crate) fn record_vm_memory_metrics_inner(&self) -> VmMemoryMetrics { + VmMemoryMetrics { + event_sink_inner: self.state.event_sink.get_size(), + event_sink_history: self.state.event_sink.get_history_size(), + memory_inner: self.state.memory.get_size(), + memory_history: self.state.memory.get_history_size(), + decommittment_processor_inner: self.state.decommittment_processor.get_size(), + decommittment_processor_history: self.state.decommittment_processor.get_history_size(), + storage_inner: self.state.storage.get_size(), + storage_history: self.state.storage.get_history_size(), + } + } +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/implementation/tx.rs b/core/lib/multivm/src/versions/vm_boojum_integration/implementation/tx.rs new file mode 100644 index 00000000000..9eac3e74983 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/implementation/tx.rs @@ -0,0 +1,68 @@ +use zk_evm_1_4_0::aux_structures::Timestamp; +use zksync_state::WriteStorage; +use zksync_types::{l1::is_l1_tx_type, Transaction}; + +use crate::{ + vm_boojum_integration::{ + constants::BOOTLOADER_HEAP_PAGE, + implementation::bytecode::{bytecode_to_factory_dep, compress_bytecodes}, + types::internals::TransactionData, + utils::fee::get_batch_gas_per_pubdata, + vm::Vm, + }, + HistoryMode, +}; + +impl Vm { + pub(crate) fn push_raw_transaction( + &mut self, + tx: TransactionData, + predefined_overhead: u32, + predefined_refund: u32, + with_compression: bool, + ) { + let timestamp = Timestamp(self.state.local_state.timestamp); + let codes_for_decommiter = tx + .factory_deps + .iter() + .map(|dep| bytecode_to_factory_dep(dep.clone())) + .collect(); + + let compressed_bytecodes = if is_l1_tx_type(tx.tx_type) || !with_compression { + // L1 transactions do not need compression + vec![] + } else { + compress_bytecodes(&tx.factory_deps, self.state.storage.storage.get_ptr()) + }; + + self.state + .decommittment_processor + .populate(codes_for_decommiter, timestamp); + + let trusted_ergs_limit = tx.trusted_ergs_limit(get_batch_gas_per_pubdata(&self.batch_env)); + + let memory = self.bootloader_state.push_tx( + tx, + predefined_overhead, + predefined_refund, + compressed_bytecodes, + trusted_ergs_limit, + self.system_env.chain_id, + ); + + self.state + .memory + .populate_page(BOOTLOADER_HEAP_PAGE as usize, memory, timestamp); + } + + pub(crate) fn push_transaction_with_compression( + &mut self, + tx: Transaction, + with_compression: bool, + ) { + let tx: TransactionData = tx.into(); + let block_gas_per_pubdata_byte = get_batch_gas_per_pubdata(&self.batch_env); + let overhead = tx.overhead_gas(block_gas_per_pubdata_byte as u32); + self.push_raw_transaction(tx, overhead, 0, with_compression); + } +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/mod.rs b/core/lib/multivm/src/versions/vm_boojum_integration/mod.rs new file mode 100644 index 00000000000..83693e4b24e --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/mod.rs @@ -0,0 +1,34 @@ +pub use self::{ + bootloader_state::BootloaderState, + old_vm::{ + history_recorder::{ + AppDataFrameManagerWithHistory, HistoryDisabled, HistoryEnabled, HistoryMode, + }, + memory::SimpleMemory, + }, + oracles::storage::StorageOracle, + tracers::{ + dispatcher::TracerDispatcher, + traits::{ToTracerPointer, 
TracerPointer, VmTracer}, + }, + types::internals::ZkSyncVmState, + utils::transaction_encoding::TransactionVmExt, + vm::Vm, +}; +pub use crate::interface::types::{ + inputs::{L1BatchEnv, L2BlockEnv, SystemEnv, TxExecutionMode, VmExecutionMode}, + outputs::{ + BootloaderMemory, CurrentExecutionState, ExecutionResult, FinishedL1Batch, L2Block, + Refunds, VmExecutionLogs, VmExecutionResultAndLogs, VmExecutionStatistics, VmMemoryMetrics, + }, +}; + +mod bootloader_state; +pub mod constants; +mod implementation; +mod old_vm; +mod oracles; +pub(crate) mod tracers; +mod types; +pub mod utils; +mod vm; diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/old_vm/event_sink.rs b/core/lib/multivm/src/versions/vm_boojum_integration/old_vm/event_sink.rs new file mode 100644 index 00000000000..6638057643d --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/old_vm/event_sink.rs @@ -0,0 +1,263 @@ +use std::collections::HashMap; + +use itertools::Itertools; +use zk_evm_1_4_0::{ + abstractions::EventSink, + aux_structures::{LogQuery, Timestamp}, + reference_impls::event_sink::EventMessage, + zkevm_opcode_defs::system_params::{ + BOOTLOADER_FORMAL_ADDRESS, EVENT_AUX_BYTE, L1_MESSAGE_AUX_BYTE, + }, +}; +use zksync_types::U256; + +use crate::vm_boojum_integration::old_vm::{ + history_recorder::{AppDataFrameManagerWithHistory, HistoryEnabled, HistoryMode}, + oracles::OracleWithHistory, +}; + +#[derive(Debug, Clone, PartialEq, Default)] +pub struct InMemoryEventSink { + frames_stack: AppDataFrameManagerWithHistory, H>, +} + +impl OracleWithHistory for InMemoryEventSink { + fn rollback_to_timestamp(&mut self, timestamp: Timestamp) { + self.frames_stack.rollback_to_timestamp(timestamp); + } +} + +// as usual, if we rollback the current frame then we apply changes to storage immediately, +// otherwise we carry rollbacks to the parent's frames + +impl InMemoryEventSink { + pub fn flatten(&self) -> (Vec, Vec, Vec) { + assert_eq!( + self.frames_stack.len(), + 1, + "there must exist an initial keeper frame" + ); + // we forget rollbacks as we have finished the execution and can just apply them + let history = self.frames_stack.forward().current_frame(); + + let (events, l1_messages) = Self::events_and_l1_messages_from_history(history); + let events_logs = Self::events_logs_from_history(history); + + (events_logs, events, l1_messages) + } + + pub fn get_log_queries(&self) -> usize { + self.frames_stack.forward().current_frame().len() + } + + /// Returns the log queries in the current frame where `log_query.timestamp >= from_timestamp`. + pub fn log_queries_after_timestamp(&self, from_timestamp: Timestamp) -> &[Box] { + let events = self.frames_stack.forward().current_frame(); + + // Select all of the last elements where `e.timestamp >= from_timestamp`. + // Note, that using binary search here is dangerous, because the logs are not sorted by timestamp. 
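+ // E.g. for timestamps [1, 4, 2, 5, 6] and `from_timestamp` = 3, `rsplit` below yields
+ // the trailing run [5, 6]: everything after the *last* element with a smaller timestamp.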
+ events + .rsplit(|e| e.timestamp < from_timestamp) + .next() + .unwrap_or(&[]) + } + + pub fn get_events_and_l2_l1_logs_after_timestamp( + &self, + from_timestamp: Timestamp, + ) -> (Vec, Vec) { + Self::events_and_l1_messages_from_history(self.log_queries_after_timestamp(from_timestamp)) + } + + fn events_logs_from_history(history: &[Box]) -> Vec { + // Filter out all the L2->L1 logs and leave only events + let mut events = history + .iter() + .filter_map(|log_query| (log_query.aux_byte == EVENT_AUX_BYTE).then_some(**log_query)) + .collect_vec(); + + // Sort the events by timestamp and rollback flag, basically ensuring that + // if an event has been rolled back, the original event and its rollback will be put together + events.sort_by_key(|log| (log.timestamp, log.rollback)); + + let mut stack = Vec::::new(); + let mut net_history = vec![]; + for el in events.iter() { + assert_eq!(el.shard_id, 0, "only rollup shard is supported"); + if stack.is_empty() { + assert!(!el.rollback); + stack.push(*el); + } else { + // we can always pop as it's either one to add to queue, or discard + let previous = stack.pop().unwrap(); + if previous.timestamp == el.timestamp { + // Only rollback can have the same timestamp, so here we do nothing and simply + // double check the invariants + assert!(!previous.rollback); + assert!(el.rollback); + assert!(previous.rw_flag); + assert!(el.rw_flag); + assert_eq!(previous.tx_number_in_block, el.tx_number_in_block); + assert_eq!(previous.shard_id, el.shard_id); + assert_eq!(previous.address, el.address); + assert_eq!(previous.key, el.key); + assert_eq!(previous.written_value, el.written_value); + assert_eq!(previous.is_service, el.is_service); + continue; + } else { + // The event on the stack has not been rolled back. It must be a different event, + // with a different timestamp. + assert!(!el.rollback); + stack.push(*el); + + // cleanup some fields + // flags are conventions + let sorted_log_query = LogQuery { + timestamp: Timestamp(0), + tx_number_in_block: previous.tx_number_in_block, + aux_byte: 0, + shard_id: previous.shard_id, + address: previous.address, + key: previous.key, + read_value: U256::zero(), + written_value: previous.written_value, + rw_flag: false, + rollback: false, + is_service: previous.is_service, + }; + + net_history.push(sorted_log_query); + } + } + } + + // In case the stack is non-empty, then the last element of it has not been rolled back. 
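+ // (A matching rollback would have shared its timestamp and eliminated the pair in
+ // the loop above, so this element belongs to the final history.)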
+        if let Some(previous) = stack.pop() {
+            // cleanup some fields
+            // flags are conventions
+            let sorted_log_query = LogQuery {
+                timestamp: Timestamp(0),
+                tx_number_in_block: previous.tx_number_in_block,
+                aux_byte: 0,
+                shard_id: previous.shard_id,
+                address: previous.address,
+                key: previous.key,
+                read_value: U256::zero(),
+                written_value: previous.written_value,
+                rw_flag: false,
+                rollback: false,
+                is_service: previous.is_service,
+            };
+
+            net_history.push(sorted_log_query);
+        }
+
+        net_history
+    }
+
+    fn events_and_l1_messages_from_history(
+        history: &[Box<LogQuery>],
+    ) -> (Vec<EventMessage>, Vec<EventMessage>) {
+        let mut tmp = HashMap::<u32, LogQuery>::with_capacity(history.len());
+
+        // Note that we only use the "forward" part and discard the rollbacks at the end:
+        // rollbacks of parent frames that were never applied are simply ignored here.
+        for el in history {
+            // we are time ordered here in terms of rollbacks
+            if tmp.get(&el.timestamp.0).is_some() {
+                assert!(el.rollback);
+                tmp.remove(&el.timestamp.0);
+            } else {
+                assert!(!el.rollback);
+                tmp.insert(el.timestamp.0, **el);
+            }
+        }
+
+        // naturally sorted by timestamp
+        let mut keys: Vec<_> = tmp.keys().cloned().collect();
+        keys.sort_unstable();
+
+        let mut events = vec![];
+        let mut l1_messages = vec![];
+
+        for k in keys.into_iter() {
+            let el = tmp.remove(&k).unwrap();
+            let LogQuery {
+                shard_id,
+                is_service,
+                tx_number_in_block,
+                address,
+                key,
+                written_value,
+                aux_byte,
+                ..
+            } = el;
+
+            let event = EventMessage {
+                shard_id,
+                is_first: is_service,
+                tx_number_in_block,
+                address,
+                key,
+                value: written_value,
+            };
+
+            if aux_byte == EVENT_AUX_BYTE {
+                events.push(event);
+            } else {
+                l1_messages.push(event);
+            }
+        }
+
+        (events, l1_messages)
+    }
+
+    pub(crate) fn get_size(&self) -> usize {
+        self.frames_stack.get_size()
+    }
+
+    pub fn get_history_size(&self) -> usize {
+        self.frames_stack.get_history_size()
+    }
+
+    pub fn delete_history(&mut self) {
+        self.frames_stack.delete_history();
+    }
+}
+
+impl<H: HistoryMode> EventSink for InMemoryEventSink<H> {
+    // when we enter a new frame we should remember all our current applications and rollbacks
+    // when we exit the current frame then if we did panic we should concatenate all current
+    // forward and rollback cases
+
+    fn add_partial_query(&mut self, _monotonic_cycle_counter: u32, mut query: LogQuery) {
+        assert!(query.rw_flag);
+        assert!(query.aux_byte == EVENT_AUX_BYTE || query.aux_byte == L1_MESSAGE_AUX_BYTE);
+        assert!(!query.rollback);
+
+        // just append to rollbacks and a full history
+
+        self.frames_stack
+            .push_forward(Box::new(query), query.timestamp);
+        // we do not need it explicitly here, but let's be consistent with the circuit counterpart
+        query.rollback = true;
+        self.frames_stack
+            .push_rollback(Box::new(query), query.timestamp);
+    }
+
+    fn start_frame(&mut self, timestamp: Timestamp) {
+        self.frames_stack.push_frame(timestamp)
+    }
+
+    fn finish_frame(&mut self, panicked: bool, timestamp: Timestamp) {
+        // if we panic then we append forward and rollbacks to the forward of parent,
+        // otherwise we place rollbacks of child before rollbacks of the parent
+        if panicked {
+            self.frames_stack.move_rollback_to_forward(
+                |q| q.address != *BOOTLOADER_FORMAL_ADDRESS || q.aux_byte != EVENT_AUX_BYTE,
+                timestamp,
+            );
+        }
+        self.frames_stack.merge_frame(timestamp);
+    }
+}
diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/old_vm/events.rs b/core/lib/multivm/src/versions/vm_boojum_integration/old_vm/events.rs
new file mode 100644
index 00000000000..eed8fee4ac8
--- /dev/null
+++ b/core/lib/multivm/src/versions/vm_boojum_integration/old_vm/events.rs
@@ -0,0 +1,146 @@
+use zk_evm_1_4_0::{ethereum_types::Address, reference_impls::event_sink::EventMessage};
+use zksync_types::{L1BatchNumber, VmEvent, EVENT_WRITER_ADDRESS, H256};
+use zksync_utils::{be_chunks_to_h256_words, h256_to_account_address};
+
+#[derive(Clone)]
+pub(crate) struct SolidityLikeEvent {
+    pub(crate) shard_id: u8,
+    pub(crate) tx_number_in_block: u16,
+    pub(crate) address: Address,
+    pub(crate) topics: Vec<[u8; 32]>,
+    pub(crate) data: Vec<u8>,
+}
+
+impl SolidityLikeEvent {
+    pub(crate) fn into_vm_event(self, block_number: L1BatchNumber) -> VmEvent {
+        VmEvent {
+            location: (block_number, self.tx_number_in_block as u32),
+            address: self.address,
+            indexed_topics: be_chunks_to_h256_words(self.topics),
+            value: self.data,
+        }
+    }
+}
+
+fn merge_events_inner(events: Vec<EventMessage>) -> Vec<SolidityLikeEvent> {
+    let mut result = vec![];
+    let mut current: Option<(usize, u32, SolidityLikeEvent)> = None;
+
+    for message in events.into_iter() {
+        if !message.is_first {
+            let EventMessage {
+                shard_id,
+                is_first: _,
+                tx_number_in_block,
+                address,
+                key,
+                value,
+            } = message;
+
+            if let Some((mut remaining_data_length, mut remaining_topics, mut event)) =
+                current.take()
+            {
+                if event.address != address
+                    || event.shard_id != shard_id
+                    || event.tx_number_in_block != tx_number_in_block
+                {
+                    continue;
+                }
+                let mut data_0 = [0u8; 32];
+                let mut data_1 = [0u8; 32];
+                key.to_big_endian(&mut data_0);
+                value.to_big_endian(&mut data_1);
+                for el in [data_0, data_1].iter() {
+                    if remaining_topics != 0 {
+                        event.topics.push(*el);
+                        remaining_topics -= 1;
+                    } else if remaining_data_length != 0 {
+                        if remaining_data_length >= 32 {
+                            event.data.extend_from_slice(el);
+                            remaining_data_length -= 32;
+                        } else {
+                            event.data.extend_from_slice(&el[..remaining_data_length]);
+                            remaining_data_length = 0;
+                        }
+                    }
+                }
+
+                if remaining_data_length != 0 || remaining_topics != 0 {
+                    current = Some((remaining_data_length, remaining_topics, event))
+                } else {
+                    result.push(event);
+                }
+            }
+        } else {
+            // Start a new event. First, flush the old one, but only if it's well formed.
+            if let Some((remaining_data_length, remaining_topics, event)) = current.take() {
+                if remaining_data_length == 0 && remaining_topics == 0 {
+                    result.push(event);
+                }
+            }
+
+            let EventMessage {
+                shard_id,
+                is_first: _,
+                tx_number_in_block,
+                address,
+                key,
+                value,
+            } = message;
+            // Split `key` as our internal marker; ignore the higher bits.
+            let mut num_topics = key.0[0] as u32;
+            let mut data_length = (key.0[0] >> 32) as usize;
+            let mut buffer = [0u8; 32];
+            value.to_big_endian(&mut buffer);
+
+            let (topics, data) = if num_topics == 0 && data_length == 0 {
+                (vec![], vec![])
+            } else if num_topics == 0 {
+                data_length -= 32;
+                (vec![], buffer.to_vec())
+            } else {
+                num_topics -= 1;
+                (vec![buffer], vec![])
+            };
+
+            let new_event = SolidityLikeEvent {
+                shard_id,
+                tx_number_in_block,
+                address,
+                topics,
+                data,
+            };
+
+            current = Some((data_length, num_topics, new_event))
+        }
+    }
+
+    // add the last one
+    if let Some((remaining_data_length, remaining_topics, event)) = current.take() {
+        if remaining_data_length == 0 && remaining_topics == 0 {
+            result.push(event);
+        }
+    }
+
+    result
+}
+
+pub(crate) fn merge_events(events: Vec<EventMessage>) -> Vec<SolidityLikeEvent> {
+    let raw_events = merge_events_inner(events);
+
+    raw_events
+        .into_iter()
+        .filter(|e| e.address == EVENT_WRITER_ADDRESS)
+        .map(|event| {
+            // For events emitted through the event writer, the first topic is the actual
+            // address of the emitting contract, and the remaining topics are the real topics
+            let address = h256_to_account_address(&H256(event.topics[0]));
+            let topics = event.topics.into_iter().skip(1).collect();
+
+            SolidityLikeEvent {
+                topics,
+                address,
+                ..event
+            }
+        })
+        .collect()
}
diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/old_vm/history_recorder.rs b/core/lib/multivm/src/versions/vm_boojum_integration/old_vm/history_recorder.rs
new file mode 100644
index 00000000000..90d0c868ea3
--- /dev/null
+++ b/core/lib/multivm/src/versions/vm_boojum_integration/old_vm/history_recorder.rs
@@ -0,0 +1,811 @@
+use std::{collections::HashMap, fmt::Debug, hash::Hash};
+
+use zk_evm_1_4_0::{
+    aux_structures::Timestamp,
+    vm_state::PrimitiveValue,
+    zkevm_opcode_defs::{self},
+};
+use zksync_state::{StoragePtr, WriteStorage};
+use zksync_types::{StorageKey, U256};
+use zksync_utils::{h256_to_u256, u256_to_h256};
+
+pub(crate) type MemoryWithHistory<H> = HistoryRecorder<MemoryWrapper, H>;
+pub(crate) type IntFrameManagerWithHistory<T, H> = HistoryRecorder<FramedStack<T>, H>;
+
+// Within the same cycle, timestamps in range `timestamp..timestamp+TIME_DELTA_PER_CYCLE-1`
+// can be used. This can sometimes violate monotonicity of the timestamp within the
+// same cycle, so it should be normalized.
+#[inline]
+fn normalize_timestamp(timestamp: Timestamp) -> Timestamp {
+    let timestamp = timestamp.0;
+
+    // Making sure it is divisible by `TIME_DELTA_PER_CYCLE`
+    Timestamp(timestamp - timestamp % zkevm_opcode_defs::TIME_DELTA_PER_CYCLE)
+}
+
+/// Accepts a history item as its parameter and applies it.
+pub trait WithHistory {
+    type HistoryRecord;
+    type ReturnValue;
+
+    // Applies an action and returns the action that would
+    // roll back its effect, as well as some returned value
+    fn apply_historic_record(
+        &mut self,
+        item: Self::HistoryRecord,
+    ) -> (Self::HistoryRecord, Self::ReturnValue);
+}
+
+type EventList<T> = Vec<(Timestamp, <T as WithHistory>::HistoryRecord)>;
+
+/// Controls if rolling back is possible or not.
+/// Either [HistoryEnabled] or [HistoryDisabled].
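+///
+/// The trait is sealed (see the `private` module below), so no other modes can be
+/// defined outside of this file.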
+pub trait HistoryMode: private::Sealed + Debug + Clone + Default {
+    type History<T: WithHistory>: Default;
+
+    fn clone_history<T: WithHistory>(history: &Self::History<T>) -> Self::History<T>
+    where
+        T::HistoryRecord: Clone;
+    fn mutate_history<T: WithHistory, F: FnOnce(&mut T, &mut Self::History<T>)>(
+        recorder: &mut HistoryRecorder<T, Self>,
+        f: F,
+    );
+    fn borrow_history<T: WithHistory, F: FnOnce(&Self::History<T>) -> R, R>(
+        recorder: &HistoryRecorder<T, Self>,
+        f: F,
+        default: R,
+    ) -> R;
+}
+
+mod private {
+    pub trait Sealed {}
+    impl Sealed for super::HistoryEnabled {}
+    impl Sealed for super::HistoryDisabled {}
+}
+
+// derives require that all type parameters implement the trait, which is why
+// HistoryEnabled/Disabled derive so many traits even though they mostly don't
+// exist at runtime.
+
+/// A data structure with this parameter can be rolled back.
+/// See also: [HistoryDisabled]
+#[derive(Debug, Clone, Default, PartialEq)]
+pub struct HistoryEnabled;
+
+/// A data structure with this parameter cannot be rolled back.
+/// It won't even have rollback methods.
+/// See also: [HistoryEnabled]
+#[derive(Debug, Clone, Default)]
+pub struct HistoryDisabled;
+
+impl HistoryMode for HistoryEnabled {
+    type History<T: WithHistory> = EventList<T>;
+
+    fn clone_history<T: WithHistory>(history: &Self::History<T>) -> Self::History<T>
+    where
+        T::HistoryRecord: Clone,
+    {
+        history.clone()
+    }
+    fn mutate_history<T: WithHistory, F: FnOnce(&mut T, &mut Self::History<T>)>(
+        recorder: &mut HistoryRecorder<T, Self>,
+        f: F,
+    ) {
+        f(&mut recorder.inner, &mut recorder.history)
+    }
+    fn borrow_history<T: WithHistory, F: FnOnce(&Self::History<T>) -> R, R>(
+        recorder: &HistoryRecorder<T, Self>,
+        f: F,
+        _: R,
+    ) -> R {
+        f(&recorder.history)
+    }
+}
+
+impl HistoryMode for HistoryDisabled {
+    type History<T: WithHistory> = ();
+
+    fn clone_history<T: WithHistory>(_: &Self::History<T>) -> Self::History<T> {}
+    fn mutate_history<T: WithHistory, F: FnOnce(&mut T, &mut Self::History<T>)>(
+        _: &mut HistoryRecorder<T, Self>,
+        _: F,
+    ) {
+    }
+    fn borrow_history<T: WithHistory, F: FnOnce(&Self::History<T>) -> R, R>(
+        _: &HistoryRecorder<T, Self>,
+        _: F,
+        default: R,
+    ) -> R {
+        default
+    }
+}
+
+/// A struct responsible for tracking history for
+/// a component that is passed as a generic parameter to it (`inner`).
+#[derive(Default)]
+pub struct HistoryRecorder<T: WithHistory, H: HistoryMode> {
+    inner: T,
+    history: H::History<T>,
+}
+
+impl<T: WithHistory + PartialEq, H: HistoryMode> PartialEq for HistoryRecorder<T, H>
+where
+    T::HistoryRecord: PartialEq,
+{
+    fn eq(&self, other: &Self) -> bool {
+        self.inner == other.inner
+            && self.borrow_history(|h1| other.borrow_history(|h2| h1 == h2, true), true)
+    }
+}
+
+impl<T: WithHistory + Debug, H: HistoryMode> Debug for HistoryRecorder<T, H>
+where
+    T::HistoryRecord: Debug,
+{
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        let mut debug_struct = f.debug_struct("HistoryRecorder");
+        debug_struct.field("inner", &self.inner);
+        self.borrow_history(
+            |h| {
+                debug_struct.field("history", h);
+            },
+            (),
+        );
+        debug_struct.finish()
+    }
+}
+
+impl<T: WithHistory + Clone, H: HistoryMode> Clone for HistoryRecorder<T, H>
+where
+    T::HistoryRecord: Clone,
+    H: HistoryMode,
+{
+    fn clone(&self) -> Self {
+        Self {
+            inner: self.inner.clone(),
+            history: H::clone_history(&self.history),
+        }
+    }
+}
+
+impl<T: WithHistory, H: HistoryMode> HistoryRecorder<T, H> {
+    pub fn from_inner(inner: T) -> Self {
+        Self {
+            inner,
+            history: Default::default(),
+        }
+    }
+
+    pub fn inner(&self) -> &T {
+        &self.inner
+    }
+
+    /// If history exists, modify it using `f`.
+    pub fn mutate_history<F: FnOnce(&mut T, &mut H::History<T>)>(&mut self, f: F) {
+        H::mutate_history(self, f);
+    }
+
+    /// If history exists, feed it into `f`. Otherwise return `default`.
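+    /// (With [HistoryDisabled], `f` is never called and `default` is returned.)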
+ pub fn borrow_history) -> R, R>(&self, f: F, default: R) -> R { + H::borrow_history(self, f, default) + } + + pub fn apply_historic_record( + &mut self, + item: T::HistoryRecord, + timestamp: Timestamp, + ) -> T::ReturnValue { + let (reversed_item, return_value) = self.inner.apply_historic_record(item); + + self.mutate_history(|_, history| { + let last_recorded_timestamp = history.last().map(|(t, _)| *t).unwrap_or(Timestamp(0)); + let timestamp = normalize_timestamp(timestamp); + assert!( + last_recorded_timestamp <= timestamp, + "Timestamps are not monotonic" + ); + history.push((timestamp, reversed_item)); + }); + + return_value + } + + /// Deletes all the history for its component, making + /// its current state irreversible + pub fn delete_history(&mut self) { + self.mutate_history(|_, h| h.clear()) + } +} + +impl HistoryRecorder { + pub fn history(&self) -> &Vec<(Timestamp, T::HistoryRecord)> { + &self.history + } + + pub(crate) fn rollback_to_timestamp(&mut self, timestamp: Timestamp) { + loop { + let should_undo = self + .history + .last() + .map(|(item_timestamp, _)| *item_timestamp >= timestamp) + .unwrap_or(false); + if !should_undo { + break; + } + + let (_, item_to_apply) = self.history.pop().unwrap(); + self.inner.apply_historic_record(item_to_apply); + } + } +} + +#[derive(Debug, Clone, PartialEq)] +pub enum VectorHistoryEvent { + Push(X), + Pop, +} + +impl WithHistory for Vec { + type HistoryRecord = VectorHistoryEvent; + type ReturnValue = Option; + fn apply_historic_record( + &mut self, + item: VectorHistoryEvent, + ) -> (Self::HistoryRecord, Self::ReturnValue) { + match item { + VectorHistoryEvent::Pop => { + // Note, that here we assume that the users + // will check themselves whether this vector is empty + // prior to popping from it. + let poped_item = self.pop().unwrap(); + + (VectorHistoryEvent::Push(poped_item), Some(poped_item)) + } + VectorHistoryEvent::Push(x) => { + self.push(x); + + (VectorHistoryEvent::Pop, None) + } + } + } +} + +impl HistoryRecorder, H> { + pub fn push(&mut self, elem: T, timestamp: Timestamp) { + self.apply_historic_record(VectorHistoryEvent::Push(elem), timestamp); + } + + pub fn pop(&mut self, timestamp: Timestamp) -> T { + self.apply_historic_record(VectorHistoryEvent::Pop, timestamp) + .unwrap() + } + + pub fn len(&self) -> usize { + self.inner.len() + } + + pub fn is_empty(&self) -> bool { + self.len() == 0 + } +} + +#[derive(Debug, Clone, PartialEq)] +pub struct HashMapHistoryEvent { + pub key: K, + pub value: Option, +} + +impl WithHistory for HashMap { + type HistoryRecord = HashMapHistoryEvent; + type ReturnValue = Option; + fn apply_historic_record( + &mut self, + item: Self::HistoryRecord, + ) -> (Self::HistoryRecord, Self::ReturnValue) { + let HashMapHistoryEvent { key, value } = item; + + let prev_value = match value { + Some(x) => self.insert(key, x), + None => self.remove(&key), + }; + + ( + HashMapHistoryEvent { + key, + value: prev_value.clone(), + }, + prev_value, + ) + } +} + +impl HistoryRecorder, H> { + pub fn insert(&mut self, key: K, value: V, timestamp: Timestamp) -> Option { + self.apply_historic_record( + HashMapHistoryEvent { + key, + value: Some(value), + }, + timestamp, + ) + } + + pub(crate) fn remove(&mut self, key: K, timestamp: Timestamp) -> Option { + self.apply_historic_record(HashMapHistoryEvent { key, value: None }, timestamp) + } +} + +/// A stack of stacks. The inner stacks are called frames. +/// +/// Does not support popping from the outer stack. 
Instead, the outer stack can +/// push its topmost frame's contents onto the previous frame. +#[derive(Debug, Clone, PartialEq)] +pub struct FramedStack { + data: Vec, + frame_start_indices: Vec, +} + +impl Default for FramedStack { + fn default() -> Self { + // We typically require at least the first frame to be there + // since the last user-provided frame might be reverted + Self { + data: vec![], + frame_start_indices: vec![0], + } + } +} + +#[derive(Debug, Clone, PartialEq)] +pub enum FramedStackEvent { + Push(T), + Pop, + PushFrame(usize), + MergeFrame, +} + +impl WithHistory for FramedStack { + type HistoryRecord = FramedStackEvent; + type ReturnValue = (); + + fn apply_historic_record( + &mut self, + item: Self::HistoryRecord, + ) -> (Self::HistoryRecord, Self::ReturnValue) { + use FramedStackEvent::*; + match item { + Push(x) => { + self.data.push(x); + (Pop, ()) + } + Pop => { + let x = self.data.pop().unwrap(); + (Push(x), ()) + } + PushFrame(i) => { + self.frame_start_indices.push(i); + (MergeFrame, ()) + } + MergeFrame => { + let pos = self.frame_start_indices.pop().unwrap(); + (PushFrame(pos), ()) + } + } + } +} + +impl FramedStack { + fn push_frame(&self) -> FramedStackEvent { + FramedStackEvent::PushFrame(self.data.len()) + } + + pub fn current_frame(&self) -> &[T] { + &self.data[*self.frame_start_indices.last().unwrap()..self.data.len()] + } + + fn len(&self) -> usize { + self.frame_start_indices.len() + } + + /// Returns the amount of memory taken up by the stored items + pub fn get_size(&self) -> usize { + self.data.len() * std::mem::size_of::() + } +} + +impl HistoryRecorder, H> { + pub fn push_to_frame(&mut self, x: T, timestamp: Timestamp) { + self.apply_historic_record(FramedStackEvent::Push(x), timestamp); + } + pub fn clear_frame(&mut self, timestamp: Timestamp) { + let start = *self.inner.frame_start_indices.last().unwrap(); + while self.inner.data.len() > start { + self.apply_historic_record(FramedStackEvent::Pop, timestamp); + } + } + pub fn extend_frame(&mut self, items: impl IntoIterator, timestamp: Timestamp) { + for x in items { + self.push_to_frame(x, timestamp); + } + } + pub fn push_frame(&mut self, timestamp: Timestamp) { + self.apply_historic_record(self.inner.push_frame(), timestamp); + } + pub fn merge_frame(&mut self, timestamp: Timestamp) { + self.apply_historic_record(FramedStackEvent::MergeFrame, timestamp); + } +} + +#[derive(Debug, Clone, PartialEq)] +pub struct AppDataFrameManagerWithHistory { + forward: HistoryRecorder, H>, + rollback: HistoryRecorder, H>, +} + +impl Default for AppDataFrameManagerWithHistory { + fn default() -> Self { + Self { + forward: Default::default(), + rollback: Default::default(), + } + } +} + +impl AppDataFrameManagerWithHistory { + pub(crate) fn delete_history(&mut self) { + self.forward.delete_history(); + self.rollback.delete_history(); + } + + pub(crate) fn push_forward(&mut self, item: T, timestamp: Timestamp) { + self.forward.push_to_frame(item, timestamp); + } + pub(crate) fn push_rollback(&mut self, item: T, timestamp: Timestamp) { + self.rollback.push_to_frame(item, timestamp); + } + pub(crate) fn push_frame(&mut self, timestamp: Timestamp) { + self.forward.push_frame(timestamp); + self.rollback.push_frame(timestamp); + } + pub(crate) fn merge_frame(&mut self, timestamp: Timestamp) { + self.forward.merge_frame(timestamp); + self.rollback.merge_frame(timestamp); + } + + pub(crate) fn len(&self) -> usize { + self.forward.inner.len() + } + pub(crate) fn forward(&self) -> &FramedStack { + &self.forward.inner + } 
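+    /// Read-only view of the rollback frames, mirroring `forward()`.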
+ pub(crate) fn rollback(&self) -> &FramedStack { + &self.rollback.inner + } + + /// Returns the amount of memory taken up by the stored items + pub(crate) fn get_size(&self) -> usize { + self.forward().get_size() + self.rollback().get_size() + } + + pub(crate) fn get_history_size(&self) -> usize { + (self.forward.borrow_history(|h| h.len(), 0) + self.rollback.borrow_history(|h| h.len(), 0)) + * std::mem::size_of::< as WithHistory>::HistoryRecord>() + } +} + +impl AppDataFrameManagerWithHistory { + pub(crate) fn move_rollback_to_forward bool>( + &mut self, + filter: F, + timestamp: Timestamp, + ) { + for x in self.rollback.inner.current_frame().iter().rev() { + if filter(x) { + self.forward.push_to_frame(x.clone(), timestamp); + } + } + self.rollback.clear_frame(timestamp); + } +} + +impl AppDataFrameManagerWithHistory { + pub(crate) fn rollback_to_timestamp(&mut self, timestamp: Timestamp) { + self.forward.rollback_to_timestamp(timestamp); + self.rollback.rollback_to_timestamp(timestamp); + } +} + +const PRIMITIVE_VALUE_EMPTY: PrimitiveValue = PrimitiveValue::empty(); +const PAGE_SUBDIVISION_LEN: usize = 64; + +#[derive(Debug, Default, Clone)] +struct MemoryPage { + root: Vec>>, +} + +impl MemoryPage { + fn get(&self, slot: usize) -> &PrimitiveValue { + self.root + .get(slot / PAGE_SUBDIVISION_LEN) + .and_then(|inner| inner.as_ref()) + .map(|leaf| &leaf[slot % PAGE_SUBDIVISION_LEN]) + .unwrap_or(&PRIMITIVE_VALUE_EMPTY) + } + fn set(&mut self, slot: usize, value: PrimitiveValue) -> PrimitiveValue { + let root_index = slot / PAGE_SUBDIVISION_LEN; + let leaf_index = slot % PAGE_SUBDIVISION_LEN; + + if self.root.len() <= root_index { + self.root.resize_with(root_index + 1, || None); + } + let node = &mut self.root[root_index]; + + if let Some(leaf) = node { + let old = leaf[leaf_index]; + leaf[leaf_index] = value; + old + } else { + let mut leaf = [PrimitiveValue::empty(); PAGE_SUBDIVISION_LEN]; + leaf[leaf_index] = value; + self.root[root_index] = Some(Box::new(leaf)); + PrimitiveValue::empty() + } + } + + fn get_size(&self) -> usize { + self.root.iter().filter_map(|x| x.as_ref()).count() + * PAGE_SUBDIVISION_LEN + * std::mem::size_of::() + } +} + +impl PartialEq for MemoryPage { + fn eq(&self, other: &Self) -> bool { + for slot in 0..self.root.len().max(other.root.len()) * PAGE_SUBDIVISION_LEN { + if self.get(slot) != other.get(slot) { + return false; + } + } + true + } +} + +#[derive(Debug, Default, Clone)] +pub struct MemoryWrapper { + memory: Vec, +} + +impl PartialEq for MemoryWrapper { + fn eq(&self, other: &Self) -> bool { + let empty_page = MemoryPage::default(); + let empty_pages = std::iter::repeat(&empty_page); + self.memory + .iter() + .chain(empty_pages.clone()) + .zip(other.memory.iter().chain(empty_pages)) + .take(self.memory.len().max(other.memory.len())) + .all(|(a, b)| a == b) + } +} + +#[derive(Debug, Clone, PartialEq)] +pub struct MemoryHistoryRecord { + pub page: usize, + pub slot: usize, + pub set_value: PrimitiveValue, +} + +impl MemoryWrapper { + pub fn ensure_page_exists(&mut self, page: usize) { + if self.memory.len() <= page { + // We don't need to record such events in history + // because all these vectors will be empty + self.memory.resize_with(page + 1, MemoryPage::default); + } + } + + pub fn dump_page_content_as_u256_words( + &self, + page_number: u32, + range: std::ops::Range, + ) -> Vec { + if let Some(page) = self.memory.get(page_number as usize) { + let mut result = vec![]; + for i in range { + result.push(*page.get(i as usize)); + } + result + } else { + 
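+            // The page was never written to, so every requested slot reads as empty.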
vec![PrimitiveValue::empty(); range.len()] + } + } + + pub fn read_slot(&self, page: usize, slot: usize) -> &PrimitiveValue { + self.memory + .get(page) + .map(|page| page.get(slot)) + .unwrap_or(&PRIMITIVE_VALUE_EMPTY) + } + + pub fn get_size(&self) -> usize { + self.memory.iter().map(|page| page.get_size()).sum() + } +} + +impl WithHistory for MemoryWrapper { + type HistoryRecord = MemoryHistoryRecord; + type ReturnValue = PrimitiveValue; + + fn apply_historic_record( + &mut self, + item: MemoryHistoryRecord, + ) -> (Self::HistoryRecord, Self::ReturnValue) { + let MemoryHistoryRecord { + page, + slot, + set_value, + } = item; + + self.ensure_page_exists(page); + let page_handle = self.memory.get_mut(page).unwrap(); + let prev_value = page_handle.set(slot, set_value); + + let undo = MemoryHistoryRecord { + page, + slot, + set_value: prev_value, + }; + + (undo, prev_value) + } +} + +impl HistoryRecorder { + pub fn write_to_memory( + &mut self, + page: usize, + slot: usize, + value: PrimitiveValue, + timestamp: Timestamp, + ) -> PrimitiveValue { + self.apply_historic_record( + MemoryHistoryRecord { + page, + slot, + set_value: value, + }, + timestamp, + ) + } + + pub fn clear_page(&mut self, page: usize, timestamp: Timestamp) { + self.mutate_history(|inner, history| { + if let Some(page_handle) = inner.memory.get(page) { + for (i, x) in page_handle.root.iter().enumerate() { + if let Some(slots) = x { + for (j, value) in slots.iter().enumerate() { + if *value != PrimitiveValue::empty() { + history.push(( + timestamp, + MemoryHistoryRecord { + page, + slot: PAGE_SUBDIVISION_LEN * i + j, + set_value: *value, + }, + )) + } + } + } + } + inner.memory[page] = MemoryPage::default(); + } + }); + } +} + +#[derive(Debug)] +pub struct StorageWrapper { + storage_ptr: StoragePtr, +} + +impl StorageWrapper { + pub fn new(storage_ptr: StoragePtr) -> Self { + Self { storage_ptr } + } + + pub fn get_ptr(&self) -> StoragePtr { + self.storage_ptr.clone() + } + + pub fn read_from_storage(&self, key: &StorageKey) -> U256 { + h256_to_u256(self.storage_ptr.borrow_mut().read_value(key)) + } +} + +#[derive(Debug, Clone)] +pub struct StorageHistoryRecord { + pub key: StorageKey, + pub value: U256, +} + +impl WithHistory for StorageWrapper { + type HistoryRecord = StorageHistoryRecord; + type ReturnValue = U256; + + fn apply_historic_record( + &mut self, + item: Self::HistoryRecord, + ) -> (Self::HistoryRecord, Self::ReturnValue) { + let prev_value = h256_to_u256( + self.storage_ptr + .borrow_mut() + .set_value(item.key, u256_to_h256(item.value)), + ); + + let reverse_item = StorageHistoryRecord { + key: item.key, + value: prev_value, + }; + + (reverse_item, prev_value) + } +} + +impl HistoryRecorder, H> { + pub fn read_from_storage(&self, key: &StorageKey) -> U256 { + self.inner.read_from_storage(key) + } + + pub fn write_to_storage(&mut self, key: StorageKey, value: U256, timestamp: Timestamp) -> U256 { + self.apply_historic_record(StorageHistoryRecord { key, value }, timestamp) + } + + /// Returns a pointer to the storage. + /// Note, that any changes done to the storage via this pointer + /// will NOT be recorded as its history. 
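+    /// In particular, such changes will not be reverted by a rollback.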
+ pub fn get_ptr(&self) -> StoragePtr { + self.inner.get_ptr() + } +} + +#[cfg(test)] +mod tests { + use zk_evm_1_4_0::{aux_structures::Timestamp, vm_state::PrimitiveValue}; + use zksync_types::U256; + + use crate::vm_boojum_integration::{ + old_vm::history_recorder::{HistoryRecorder, MemoryWrapper}, + HistoryDisabled, + }; + + #[test] + fn memory_equality() { + let mut a: HistoryRecorder = Default::default(); + let mut b = a.clone(); + let nonzero = U256::from_dec_str("123").unwrap(); + let different_value = U256::from_dec_str("1234").unwrap(); + + let write = |memory: &mut HistoryRecorder, value| { + memory.write_to_memory( + 17, + 34, + PrimitiveValue { + value, + is_pointer: false, + }, + Timestamp::empty(), + ); + }; + + assert_eq!(a, b); + + write(&mut b, nonzero); + assert_ne!(a, b); + + write(&mut a, different_value); + assert_ne!(a, b); + + write(&mut a, nonzero); + assert_eq!(a, b); + } +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/old_vm/memory.rs b/core/lib/multivm/src/versions/vm_boojum_integration/old_vm/memory.rs new file mode 100644 index 00000000000..8229727b6dd --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/old_vm/memory.rs @@ -0,0 +1,327 @@ +use zk_evm_1_4_0::{ + abstractions::{Memory, MemoryType}, + aux_structures::{MemoryPage, MemoryQuery, Timestamp}, + vm_state::PrimitiveValue, + zkevm_opcode_defs::FatPointer, +}; +use zksync_types::U256; + +use crate::vm_boojum_integration::old_vm::{ + history_recorder::{ + FramedStack, HistoryEnabled, HistoryMode, IntFrameManagerWithHistory, MemoryWithHistory, + MemoryWrapper, WithHistory, + }, + oracles::OracleWithHistory, + utils::{aux_heap_page_from_base, heap_page_from_base, stack_page_from_base}, +}; + +#[derive(Debug, Clone, PartialEq)] +pub struct SimpleMemory { + memory: MemoryWithHistory, + observable_pages: IntFrameManagerWithHistory, +} + +impl Default for SimpleMemory { + fn default() -> Self { + let mut memory: MemoryWithHistory = Default::default(); + memory.mutate_history(|_, h| h.reserve(607)); + Self { + memory, + observable_pages: Default::default(), + } + } +} + +impl OracleWithHistory for SimpleMemory { + fn rollback_to_timestamp(&mut self, timestamp: Timestamp) { + self.memory.rollback_to_timestamp(timestamp); + self.observable_pages.rollback_to_timestamp(timestamp); + } +} + +impl SimpleMemory { + pub fn populate(&mut self, elements: Vec<(u32, Vec)>, timestamp: Timestamp) { + for (page, values) in elements.into_iter() { + for (i, value) in values.into_iter().enumerate() { + let value = PrimitiveValue { + value, + is_pointer: false, + }; + self.memory + .write_to_memory(page as usize, i, value, timestamp); + } + } + } + + pub fn populate_page( + &mut self, + page: usize, + elements: Vec<(usize, U256)>, + timestamp: Timestamp, + ) { + elements.into_iter().for_each(|(offset, value)| { + let value = PrimitiveValue { + value, + is_pointer: false, + }; + + self.memory.write_to_memory(page, offset, value, timestamp); + }); + } + + pub fn dump_page_content_as_u256_words( + &self, + page: u32, + range: std::ops::Range, + ) -> Vec { + self.memory + .inner() + .dump_page_content_as_u256_words(page, range) + .into_iter() + .map(|v| v.value) + .collect() + } + + pub fn read_slot(&self, page: usize, slot: usize) -> &PrimitiveValue { + self.memory.inner().read_slot(page, slot) + } + + // This method should be used with relatively small lengths, since + // we don't heavily optimize here for cases with long lengths + pub fn read_unaligned_bytes(&self, page: usize, start: usize, 
length: usize) -> Vec { + if length == 0 { + return vec![]; + } + + let end = start + length - 1; + + let mut current_word = start / 32; + let mut result = vec![]; + while current_word * 32 <= end { + let word_value = self.read_slot(page, current_word).value; + let word_value = { + let mut bytes: Vec = vec![0u8; 32]; + word_value.to_big_endian(&mut bytes); + bytes + }; + + result.extend(extract_needed_bytes_from_word( + word_value, + current_word, + start, + end, + )); + + current_word += 1; + } + + assert_eq!(result.len(), length); + + result + } + + pub(crate) fn get_size(&self) -> usize { + // Hashmap memory overhead is neglected. + let memory_size = self.memory.inner().get_size(); + let observable_pages_size = self.observable_pages.inner().get_size(); + + memory_size + observable_pages_size + } + + pub fn get_history_size(&self) -> usize { + let memory_size = self.memory.borrow_history(|h| h.len(), 0) + * std::mem::size_of::<::HistoryRecord>(); + let observable_pages_size = self.observable_pages.borrow_history(|h| h.len(), 0) + * std::mem::size_of::< as WithHistory>::HistoryRecord>(); + + memory_size + observable_pages_size + } + + pub fn delete_history(&mut self) { + self.memory.delete_history(); + self.observable_pages.delete_history(); + } +} + +impl Memory for SimpleMemory { + fn execute_partial_query( + &mut self, + _monotonic_cycle_counter: u32, + mut query: MemoryQuery, + ) -> MemoryQuery { + match query.location.memory_type { + MemoryType::Stack => {} + MemoryType::Heap | MemoryType::AuxHeap => { + // The following assertion works fine even when doing a read + // from heap through pointer, since `value_is_pointer` can only be set to + // `true` during memory writes. + assert!( + !query.value_is_pointer, + "Pointers can only be stored on stack" + ); + } + MemoryType::FatPointer => { + assert!(!query.rw_flag); + assert!( + !query.value_is_pointer, + "Pointers can only be stored on stack" + ); + } + MemoryType::Code => { + unreachable!("code should be through specialized query"); + } + } + + let page = query.location.page.0 as usize; + let slot = query.location.index.0 as usize; + + if query.rw_flag { + self.memory.write_to_memory( + page, + slot, + PrimitiveValue { + value: query.value, + is_pointer: query.value_is_pointer, + }, + query.timestamp, + ); + } else { + let current_value = self.read_slot(page, slot); + query.value = current_value.value; + query.value_is_pointer = current_value.is_pointer; + } + + query + } + + fn specialized_code_query( + &mut self, + _monotonic_cycle_counter: u32, + mut query: MemoryQuery, + ) -> MemoryQuery { + assert_eq!(query.location.memory_type, MemoryType::Code); + assert!( + !query.value_is_pointer, + "Pointers are not used for decommmits" + ); + + let page = query.location.page.0 as usize; + let slot = query.location.index.0 as usize; + + if query.rw_flag { + self.memory.write_to_memory( + page, + slot, + PrimitiveValue { + value: query.value, + is_pointer: query.value_is_pointer, + }, + query.timestamp, + ); + } else { + let current_value = self.read_slot(page, slot); + query.value = current_value.value; + query.value_is_pointer = current_value.is_pointer; + } + + query + } + + fn read_code_query( + &self, + _monotonic_cycle_counter: u32, + mut query: MemoryQuery, + ) -> MemoryQuery { + assert_eq!(query.location.memory_type, MemoryType::Code); + assert!( + !query.value_is_pointer, + "Pointers are not used for decommmits" + ); + assert!(!query.rw_flag, "Only read queries can be processed"); + + let page = query.location.page.0 as usize; + let 
slot = query.location.index.0 as usize; + + let current_value = self.read_slot(page, slot); + query.value = current_value.value; + query.value_is_pointer = current_value.is_pointer; + + query + } + + fn start_global_frame( + &mut self, + _current_base_page: MemoryPage, + new_base_page: MemoryPage, + calldata_fat_pointer: FatPointer, + timestamp: Timestamp, + ) { + // Besides the calldata page, we also formally include the current stack + // page, heap page and aux heap page. + // The code page will be always left observable, so we don't include it here. + self.observable_pages.push_frame(timestamp); + self.observable_pages.extend_frame( + vec![ + calldata_fat_pointer.memory_page, + stack_page_from_base(new_base_page).0, + heap_page_from_base(new_base_page).0, + aux_heap_page_from_base(new_base_page).0, + ], + timestamp, + ); + } + + fn finish_global_frame( + &mut self, + base_page: MemoryPage, + returndata_fat_pointer: FatPointer, + timestamp: Timestamp, + ) { + // Safe to unwrap here, since `finish_global_frame` is never called with empty stack + let current_observable_pages = self.observable_pages.inner().current_frame(); + let returndata_page = returndata_fat_pointer.memory_page; + + for &page in current_observable_pages { + // If the page's number is greater than or equal to the `base_page`, + // it means that it was created by the internal calls of this contract. + // We need to add this check as the calldata pointer is also part of the + // observable pages. + if page >= base_page.0 && page != returndata_page { + self.memory.clear_page(page as usize, timestamp); + } + } + + self.observable_pages.clear_frame(timestamp); + self.observable_pages.merge_frame(timestamp); + + self.observable_pages + .push_to_frame(returndata_page, timestamp); + } +} + +// It is expected that there is some intersection between `[word_number*32..word_number*32+31]` and `[start, end]` +fn extract_needed_bytes_from_word( + word_value: Vec, + word_number: usize, + start: usize, + end: usize, +) -> Vec { + let word_start = word_number * 32; + let word_end = word_start + 31; // Note, that at `word_start + 32` a new word already starts + + let intersection_left = std::cmp::max(word_start, start); + let intersection_right = std::cmp::min(word_end, end); + + if intersection_right < intersection_left { + vec![] + } else { + let start_bytes = intersection_left - word_start; + let to_take = intersection_right - intersection_left + 1; + + word_value + .into_iter() + .skip(start_bytes) + .take(to_take) + .collect() + } +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/old_vm/mod.rs b/core/lib/multivm/src/versions/vm_boojum_integration/old_vm/mod.rs new file mode 100644 index 00000000000..afade198461 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/old_vm/mod.rs @@ -0,0 +1,8 @@ +/// This module contains the parts from old VM implementation, which were not changed during the vm implementation. +/// It should be refactored and removed in the future. 
+pub(crate) mod event_sink;
+pub(crate) mod events;
+pub(crate) mod history_recorder;
+pub(crate) mod memory;
+pub(crate) mod oracles;
+pub(crate) mod utils;
diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/old_vm/oracles/decommitter.rs b/core/lib/multivm/src/versions/vm_boojum_integration/old_vm/oracles/decommitter.rs
new file mode 100644
index 00000000000..6ff63e17ce0
--- /dev/null
+++ b/core/lib/multivm/src/versions/vm_boojum_integration/old_vm/oracles/decommitter.rs
@@ -0,0 +1,236 @@
+use std::{collections::HashMap, fmt::Debug};
+
+use zk_evm_1_4_0::{
+    abstractions::{DecommittmentProcessor, Memory, MemoryType},
+    aux_structures::{
+        DecommittmentQuery, MemoryIndex, MemoryLocation, MemoryPage, MemoryQuery, Timestamp,
+    },
+};
+use zksync_state::{ReadStorage, StoragePtr};
+use zksync_types::U256;
+use zksync_utils::{bytecode::bytecode_len_in_words, bytes_to_be_words, u256_to_h256};
+
+use super::OracleWithHistory;
+use crate::vm_boojum_integration::old_vm::history_recorder::{
+    HistoryEnabled, HistoryMode, HistoryRecorder, WithHistory,
+};
+
+/// The main job of the `DecommitterOracle` is to implement the `DecommittmentProcessor` trait,
+/// which is used by the VM to 'load' bytecodes into memory.
+#[derive(Debug)]
+pub struct DecommitterOracle<const B: bool, S, H: HistoryMode> {
+    /// Pointer that enables reading contract bytecodes from the database.
+    storage: StoragePtr<S>,
+    /// The cache of bytecodes that the bootloader "knows", but that are not necessarily in the database.
+    /// It is also used as a database cache.
+    pub known_bytecodes: HistoryRecorder<HashMap<U256, Vec<U256>>, H>,
+    /// Stores pages of memory where certain code hashes have already been decommitted.
+    /// It is expected that they all are present in the DB.
+    // `decommitted_code_hashes` history is necessary
+    pub decommitted_code_hashes: HistoryRecorder<HashMap<U256, u32>, HistoryEnabled>,
+    /// Stores history of decommitment requests.
+    decommitment_requests: HistoryRecorder<Vec<()>, H>,
+}
+
+impl<S: ReadStorage, const B: bool, H: HistoryMode> DecommitterOracle<B, S, H> {
+    pub fn new(storage: StoragePtr<S>) -> Self {
+        Self {
+            storage,
+            known_bytecodes: HistoryRecorder::default(),
+            decommitted_code_hashes: HistoryRecorder::default(),
+            decommitment_requests: HistoryRecorder::default(),
+        }
+    }
+
+    /// Gets the bytecode for a given hash (either from storage, or from `known_bytecodes`
+    /// that were populated by the `populate` method). Panics if the bytecode doesn't exist.
+    pub fn get_bytecode(&mut self, hash: U256, timestamp: Timestamp) -> Vec<U256> {
+        let entry = self.known_bytecodes.inner().get(&hash);
+
+        match entry {
+            Some(x) => x.clone(),
+            None => {
+                // It is ok to panic here, since the decommitter is never called directly by
+                // the users and is always called by the VM. The VM will never decommit a
+                // code hash which we didn't previously claim to know the preimage of.
+                let value = self
+                    .storage
+                    .borrow_mut()
+                    .load_factory_dep(u256_to_h256(hash))
+                    .expect("Trying to decode unexisting hash");
+
+                let value = bytes_to_be_words(value);
+                self.known_bytecodes.insert(hash, value.clone(), timestamp);
+                value
+            }
+        }
+    }
+
+    /// Adds additional bytecodes. They will take precedence over the bytecodes from storage.
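+    /// Re-inserting a hash that is already known overwrites the cached bytecode.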
+ pub fn populate(&mut self, bytecodes: Vec<(U256, Vec)>, timestamp: Timestamp) { + for (hash, bytecode) in bytecodes { + self.known_bytecodes.insert(hash, bytecode, timestamp); + } + } + + pub fn get_used_bytecode_hashes(&self) -> Vec { + self.decommitted_code_hashes + .inner() + .iter() + .map(|item| *item.0) + .collect() + } + + pub fn get_decommitted_bytecodes_after_timestamp(&self, timestamp: Timestamp) -> usize { + // Note, that here we rely on the fact that for each used bytecode + // there is one and only one corresponding event in the history of it. + self.decommitted_code_hashes + .history() + .iter() + .rev() + .take_while(|(t, _)| *t >= timestamp) + .count() + } + + pub fn get_decommitted_code_hashes_with_history( + &self, + ) -> &HistoryRecorder, HistoryEnabled> { + &self.decommitted_code_hashes + } + + /// Returns the storage handle. Used only in tests. + pub fn get_storage(&self) -> StoragePtr { + self.storage.clone() + } + + /// Measures the amount of memory used by this Oracle (used for metrics only). + pub(crate) fn get_size(&self) -> usize { + // Hashmap memory overhead is neglected. + let known_bytecodes_size = self + .known_bytecodes + .inner() + .iter() + .map(|(_, value)| value.len() * std::mem::size_of::()) + .sum::(); + let decommitted_code_hashes_size = + self.decommitted_code_hashes.inner().len() * std::mem::size_of::<(U256, u32)>(); + + known_bytecodes_size + decommitted_code_hashes_size + } + + pub(crate) fn get_history_size(&self) -> usize { + let known_bytecodes_stack_size = self.known_bytecodes.borrow_history(|h| h.len(), 0) + * std::mem::size_of::<> as WithHistory>::HistoryRecord>(); + let known_bytecodes_heap_size = self.known_bytecodes.borrow_history( + |h| { + h.iter() + .map(|(_, event)| { + if let Some(bytecode) = event.value.as_ref() { + bytecode.len() * std::mem::size_of::() + } else { + 0 + } + }) + .sum::() + }, + 0, + ); + let decommitted_code_hashes_size = + self.decommitted_code_hashes.borrow_history(|h| h.len(), 0) + * std::mem::size_of::< as WithHistory>::HistoryRecord>(); + + known_bytecodes_stack_size + known_bytecodes_heap_size + decommitted_code_hashes_size + } + + pub fn delete_history(&mut self) { + self.decommitted_code_hashes.delete_history(); + self.known_bytecodes.delete_history(); + self.decommitment_requests.delete_history(); + } +} + +impl OracleWithHistory for DecommitterOracle { + fn rollback_to_timestamp(&mut self, timestamp: Timestamp) { + self.decommitted_code_hashes + .rollback_to_timestamp(timestamp); + self.known_bytecodes.rollback_to_timestamp(timestamp); + self.decommitment_requests.rollback_to_timestamp(timestamp); + } +} + +impl DecommittmentProcessor + for DecommitterOracle +{ + /// Loads a given bytecode hash into memory (see trait description for more details). + fn decommit_into_memory( + &mut self, + monotonic_cycle_counter: u32, + mut partial_query: DecommittmentQuery, + memory: &mut M, + ) -> Result< + ( + zk_evm_1_4_0::aux_structures::DecommittmentQuery, + Option>, + ), + anyhow::Error, + > { + self.decommitment_requests.push((), partial_query.timestamp); + // First - check if we didn't fetch this bytecode in the past. + // If we did - we can just return the page that we used before (as the memory is readonly). 
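+        // `decommitted_code_hashes` maps a code hash to the memory page it was
+        // decommitted to, so a cache hit requires no memory writes at all.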
+ if let Some(memory_page) = self + .decommitted_code_hashes + .inner() + .get(&partial_query.hash) + .copied() + { + partial_query.is_fresh = false; + partial_query.memory_page = MemoryPage(memory_page); + partial_query.decommitted_length = + bytecode_len_in_words(&u256_to_h256(partial_query.hash)); + + Ok((partial_query, None)) + } else { + // We are fetching a fresh bytecode that we didn't read before. + let values = self.get_bytecode(partial_query.hash, partial_query.timestamp); + let page_to_use = partial_query.memory_page; + let timestamp = partial_query.timestamp; + partial_query.decommitted_length = values.len() as u16; + partial_query.is_fresh = true; + + // Create a template query, that we'll use for writing into memory. + // value & index are set to 0 - as they will be updated in the inner loop below. + let mut tmp_q = MemoryQuery { + timestamp, + location: MemoryLocation { + memory_type: MemoryType::Code, + page: page_to_use, + index: MemoryIndex(0), + }, + value: U256::zero(), + value_is_pointer: false, + rw_flag: true, + }; + self.decommitted_code_hashes + .insert(partial_query.hash, page_to_use.0, timestamp); + + // Copy the bytecode (that is stored in 'values' Vec) into the memory page. + if B { + for (i, value) in values.iter().enumerate() { + tmp_q.location.index = MemoryIndex(i as u32); + tmp_q.value = *value; + memory.specialized_code_query(monotonic_cycle_counter, tmp_q); + } + // If we're in the witness mode - we also have to return the values. + Ok((partial_query, Some(values))) + } else { + for (i, value) in values.into_iter().enumerate() { + tmp_q.location.index = MemoryIndex(i as u32); + tmp_q.value = value; + memory.specialized_code_query(monotonic_cycle_counter, tmp_q); + } + + Ok((partial_query, None)) + } + } + } +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/old_vm/oracles/mod.rs b/core/lib/multivm/src/versions/vm_boojum_integration/old_vm/oracles/mod.rs new file mode 100644 index 00000000000..3f8d2d0f138 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/old_vm/oracles/mod.rs @@ -0,0 +1,8 @@ +use zk_evm_1_4_0::aux_structures::Timestamp; + +pub(crate) mod decommitter; +pub(crate) mod precompile; + +pub(crate) trait OracleWithHistory { + fn rollback_to_timestamp(&mut self, timestamp: Timestamp); +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/old_vm/oracles/precompile.rs b/core/lib/multivm/src/versions/vm_boojum_integration/old_vm/oracles/precompile.rs new file mode 100644 index 00000000000..0fc1108ead8 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/old_vm/oracles/precompile.rs @@ -0,0 +1,114 @@ +use std::convert::TryFrom; + +use zk_evm_1_4_0::{ + abstractions::{Memory, PrecompileCyclesWitness, PrecompilesProcessor}, + aux_structures::{LogQuery, MemoryQuery, Timestamp}, + zk_evm_abstractions::precompiles::{ecrecover, keccak256, sha256, PrecompileAddress}, +}; + +use super::OracleWithHistory; +use crate::vm_boojum_integration::old_vm::history_recorder::{ + HistoryEnabled, HistoryMode, HistoryRecorder, +}; + +/// Wrap of DefaultPrecompilesProcessor that store queue +/// of timestamp when precompiles are called to be executed. +/// Number of precompiles per block is strictly limited, +/// saving timestamps allows us to check the exact number +/// of log queries, that were used during the tx execution. 
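+/// The actual computation is delegated to the pure `keccak256`/`sha256`/`ecrecover`
+/// round functions; this wrapper only adds the bookkeeping.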
+#[derive(Debug, Clone)]
+pub struct PrecompilesProcessorWithHistory<const B: bool, H: HistoryMode> {
+    pub timestamp_history: HistoryRecorder<Vec<Timestamp>, H>,
+    pub precompile_cycles_history: HistoryRecorder<Vec<(PrecompileAddress, usize)>, H>,
+}
+
+impl<const B: bool, H: HistoryMode> Default for PrecompilesProcessorWithHistory<B, H> {
+    fn default() -> Self {
+        Self {
+            timestamp_history: Default::default(),
+            precompile_cycles_history: Default::default(),
+        }
+    }
+}
+
+impl<const B: bool> OracleWithHistory for PrecompilesProcessorWithHistory<B, HistoryEnabled> {
+    fn rollback_to_timestamp(&mut self, timestamp: Timestamp) {
+        self.timestamp_history.rollback_to_timestamp(timestamp);
+        self.precompile_cycles_history
+            .rollback_to_timestamp(timestamp);
+    }
+}
+
+impl<const B: bool, H: HistoryMode> PrecompilesProcessorWithHistory<B, H> {
+    pub fn get_timestamp_history(&self) -> &Vec<Timestamp> {
+        self.timestamp_history.inner()
+    }
+
+    pub fn delete_history(&mut self) {
+        self.timestamp_history.delete_history();
+        self.precompile_cycles_history.delete_history();
+    }
+}
+
+impl<const B: bool, H: HistoryMode> PrecompilesProcessor for PrecompilesProcessorWithHistory<B, H> {
+    fn start_frame(&mut self) {
+        // there are no precompiles to roll back, do nothing
+    }
+
+    fn execute_precompile<M: Memory>(
+        &mut self,
+        monotonic_cycle_counter: u32,
+        query: LogQuery,
+        memory: &mut M,
+    ) -> Option<(Vec<MemoryQuery>, Vec<MemoryQuery>, PrecompileCyclesWitness)> {
+        // In the next line, we save `query.timestamp` as both
+        // an operation in the history of the precompiles processor and
+        // the time when this operation occurred.
+        // While slightly weird, it is done for consistency with other oracles
+        // where operations and timestamps have different types.
+        self.timestamp_history
+            .push(query.timestamp, query.timestamp);
+
+        let address_low = u16::from_le_bytes([query.address.0[19], query.address.0[18]]);
+        if let Ok(precompile_address) = PrecompileAddress::try_from(address_low) {
+            let rounds = match precompile_address {
+                PrecompileAddress::Keccak256 => {
+                    // pure function call, non-revertable
+                    keccak256::keccak256_rounds_function::<M, B>(
+                        monotonic_cycle_counter,
+                        query,
+                        memory,
+                    )
+                    .0
+                }
+                PrecompileAddress::SHA256 => {
+                    // pure function call, non-revertable
+                    sha256::sha256_rounds_function::<M, B>(
+                        monotonic_cycle_counter,
+                        query,
+                        memory,
+                    )
+                    .0
+                }
+                PrecompileAddress::Ecrecover => {
+                    // pure function call, non-revertable
+                    ecrecover::ecrecover_function::<M, B>(
+                        monotonic_cycle_counter,
+                        query,
+                        memory,
+                    )
+                    .0
+                }
+            };
+
+            self.precompile_cycles_history
+                .push((precompile_address, rounds), query.timestamp);
+        };
+
+        None
+    }
+
+    fn finish_frame(&mut self, _panicked: bool) {
+        // there are no revertible precompiles yet, so we are ok
+    }
+}
diff --git a/core/lib/multivm/src/versions/vm_latest/old_vm/oracles/storage.rs b/core/lib/multivm/src/versions/vm_boojum_integration/old_vm/oracles/storage.rs
similarity index 99%
rename from core/lib/multivm/src/versions/vm_latest/old_vm/oracles/storage.rs
rename to core/lib/multivm/src/versions/vm_boojum_integration/old_vm/oracles/storage.rs
index b2c471832e4..1c14706de87 100644
--- a/core/lib/multivm/src/versions/vm_latest/old_vm/oracles/storage.rs
+++ b/core/lib/multivm/src/versions/vm_boojum_integration/old_vm/oracles/storage.rs
@@ -1,6 +1,6 @@
 use std::collections::HashMap;
 
-use crate::vm_latest::old_vm::history_recorder::{
+use crate::vm_boojum_integration::old_vm::history_recorder::{
     AppDataFrameManagerWithHistory, HashMapHistoryEvent, HistoryEnabled, HistoryMode,
     HistoryRecorder, StorageWrapper, WithHistory,
 };
diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/old_vm/utils.rs b/core/lib/multivm/src/versions/vm_boojum_integration/old_vm/utils.rs
new file mode 100644
index 00000000000..342cc64ea2a
--- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/old_vm/utils.rs @@ -0,0 +1,221 @@ +use zk_evm_1_4_0::{ + aux_structures::{MemoryPage, Timestamp}, + vm_state::PrimitiveValue, + zkevm_opcode_defs::{ + decoding::{AllowedPcOrImm, EncodingModeProduction, VmEncodingMode}, + FatPointer, RET_IMPLICIT_RETURNDATA_PARAMS_REGISTER, + }, +}; +use zksync_state::WriteStorage; +use zksync_system_constants::L1_GAS_PER_PUBDATA_BYTE; +use zksync_types::{Address, U256}; + +use crate::vm_boojum_integration::{ + old_vm::memory::SimpleMemory, types::internals::ZkSyncVmState, HistoryMode, +}; + +#[derive(Debug, Clone)] +pub(crate) enum VmExecutionResult { + Ok(Vec), + Revert(Vec), + Panic, + MostLikelyDidNotFinish(Address, u16), +} + +pub(crate) const fn stack_page_from_base(base: MemoryPage) -> MemoryPage { + MemoryPage(base.0 + 1) +} + +pub(crate) const fn heap_page_from_base(base: MemoryPage) -> MemoryPage { + MemoryPage(base.0 + 2) +} + +pub(crate) const fn aux_heap_page_from_base(base: MemoryPage) -> MemoryPage { + MemoryPage(base.0 + 3) +} + +pub(crate) trait FixedLengthIterator<'a, I: 'a, const N: usize>: Iterator +where + Self: 'a, +{ + fn next(&mut self) -> Option<::Item> { + ::next(self) + } +} + +pub(crate) trait IntoFixedLengthByteIterator { + type IntoIter: FixedLengthIterator<'static, u8, N>; + fn into_le_iter(self) -> Self::IntoIter; + fn into_be_iter(self) -> Self::IntoIter; +} + +pub(crate) struct FixedBufferValueIterator { + iter: std::array::IntoIter, +} + +impl Iterator for FixedBufferValueIterator { + type Item = T; + fn next(&mut self) -> Option { + self.iter.next() + } +} + +impl FixedLengthIterator<'static, T, N> + for FixedBufferValueIterator +{ +} + +impl IntoFixedLengthByteIterator<32> for U256 { + type IntoIter = FixedBufferValueIterator; + fn into_le_iter(self) -> Self::IntoIter { + let mut buffer = [0u8; 32]; + self.to_little_endian(&mut buffer); + + FixedBufferValueIterator { + iter: IntoIterator::into_iter(buffer), + } + } + + fn into_be_iter(self) -> Self::IntoIter { + let mut buffer = [0u8; 32]; + self.to_big_endian(&mut buffer); + + FixedBufferValueIterator { + iter: IntoIterator::into_iter(buffer), + } + } +} + +/// Receives sorted slice of timestamps. +/// Returns count of timestamps that are greater than or equal to `from_timestamp`. +/// Works in O(log(sorted_timestamps.len())). 
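+/// For example, for sorted timestamps `[1, 3, 5]` and `from_timestamp = 3`,
+/// the result is 2.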
+pub(crate) fn precompile_calls_count_after_timestamp( + sorted_timestamps: &[Timestamp], + from_timestamp: Timestamp, +) -> usize { + sorted_timestamps.len() - sorted_timestamps.partition_point(|t| *t < from_timestamp) +} + +pub(crate) fn eth_price_per_pubdata_byte(l1_gas_price: u64) -> u64 { + // This value will typically be a lot less than u64 + // unless the gas price on L1 goes beyond tens of millions of gwei + l1_gas_price * (L1_GAS_PER_PUBDATA_BYTE as u64) +} + +pub(crate) fn vm_may_have_ended_inner( + vm: &ZkSyncVmState, +) -> Option { + let execution_has_ended = vm.execution_has_ended(); + + let r1 = vm.local_state.registers[RET_IMPLICIT_RETURNDATA_PARAMS_REGISTER as usize]; + let current_address = vm.local_state.callstack.get_current_stack().this_address; + + let outer_eh_location = >::PcOrImm::MAX.as_u64(); + match ( + execution_has_ended, + vm.local_state.callstack.get_current_stack().pc.as_u64(), + ) { + (true, 0) => { + let returndata = dump_memory_page_using_primitive_value(&vm.memory, r1); + + Some(VmExecutionResult::Ok(returndata)) + } + (false, _) => None, + (true, l) if l == outer_eh_location => { + // check `r1,r2,r3` + if vm.local_state.flags.overflow_or_less_than_flag { + Some(VmExecutionResult::Panic) + } else { + let returndata = dump_memory_page_using_primitive_value(&vm.memory, r1); + Some(VmExecutionResult::Revert(returndata)) + } + } + (_, a) => Some(VmExecutionResult::MostLikelyDidNotFinish( + current_address, + a as u16, + )), + } +} + +pub(crate) fn dump_memory_page_using_primitive_value( + memory: &SimpleMemory, + ptr: PrimitiveValue, +) -> Vec { + if !ptr.is_pointer { + return vec![]; + } + let fat_ptr = FatPointer::from_u256(ptr.value); + dump_memory_page_using_fat_pointer(memory, fat_ptr) +} + +pub(crate) fn dump_memory_page_using_fat_pointer( + memory: &SimpleMemory, + fat_ptr: FatPointer, +) -> Vec { + dump_memory_page_by_offset_and_length( + memory, + fat_ptr.memory_page, + (fat_ptr.start + fat_ptr.offset) as usize, + (fat_ptr.length - fat_ptr.offset) as usize, + ) +} + +pub(crate) fn dump_memory_page_by_offset_and_length( + memory: &SimpleMemory, + page: u32, + offset: usize, + length: usize, +) -> Vec { + assert!(offset < (1u32 << 24) as usize); + assert!(length < (1u32 << 24) as usize); + let mut dump = Vec::with_capacity(length); + if length == 0 { + return dump; + } + + let first_word = offset / 32; + let end_byte = offset + length; + let mut last_word = end_byte / 32; + if end_byte % 32 != 0 { + last_word += 1; + } + + let unalignment = offset % 32; + + let page_part = + memory.dump_page_content_as_u256_words(page, (first_word as u32)..(last_word as u32)); + + let mut is_first = true; + let mut remaining = length; + for word in page_part.into_iter() { + let it = word.into_be_iter(); + if is_first { + is_first = false; + let it = it.skip(unalignment); + for next in it { + if remaining > 0 { + dump.push(next); + remaining -= 1; + } + } + } else { + for next in it { + if remaining > 0 { + dump.push(next); + remaining -= 1; + } + } + } + } + + assert_eq!( + dump.len(), + length, + "tried to dump with offset {}, length {}, got a bytestring of length {}", + offset, + length, + dump.len() + ); + + dump +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/oracles/mod.rs b/core/lib/multivm/src/versions/vm_boojum_integration/oracles/mod.rs new file mode 100644 index 00000000000..b21c842572f --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/oracles/mod.rs @@ -0,0 +1 @@ +pub(crate) mod storage; diff --git 
a/core/lib/multivm/src/versions/vm_boojum_integration/oracles/storage.rs b/core/lib/multivm/src/versions/vm_boojum_integration/oracles/storage.rs new file mode 100644 index 00000000000..1367f83f4e5 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/oracles/storage.rs @@ -0,0 +1,509 @@ +use std::collections::HashMap; + +use zk_evm_1_4_0::{ + abstractions::{RefundType, RefundedAmounts, Storage as VmStorageOracle}, + aux_structures::{LogQuery, Timestamp}, + zkevm_opcode_defs::system_params::INITIAL_STORAGE_WRITE_PUBDATA_BYTES, +}; +use zksync_state::{StoragePtr, WriteStorage}; +use zksync_types::{ + utils::storage_key_for_eth_balance, + writes::{ + compression::compress_with_best_strategy, BYTES_PER_DERIVED_KEY, + BYTES_PER_ENUMERATION_INDEX, + }, + AccountTreeId, Address, StorageKey, StorageLogQuery, StorageLogQueryType, BOOTLOADER_ADDRESS, + U256, +}; +use zksync_utils::u256_to_h256; + +use crate::vm_boojum_integration::old_vm::{ + history_recorder::{ + AppDataFrameManagerWithHistory, HashMapHistoryEvent, HistoryEnabled, HistoryMode, + HistoryRecorder, StorageWrapper, VectorHistoryEvent, WithHistory, + }, + oracles::OracleWithHistory, +}; + +// While the storage does not support different shards, it was decided to write the +// code of the StorageOracle with the shard parameters in mind. +pub(crate) fn triplet_to_storage_key(_shard_id: u8, address: Address, key: U256) -> StorageKey { + StorageKey::new(AccountTreeId::new(address), u256_to_h256(key)) +} + +pub(crate) fn storage_key_of_log(query: &LogQuery) -> StorageKey { + triplet_to_storage_key(query.shard_id, query.address, query.key) +} + +#[derive(Debug)] +pub struct StorageOracle { + // Access to the persistent storage. Please note that it + // is used only for read access. All the actual writes happen + // after the execution ended. + pub(crate) storage: HistoryRecorder, H>, + + pub(crate) frames_stack: AppDataFrameManagerWithHistory, H>, + + // The changes that have been paid for in previous transactions. + // It is a mapping from storage key to the number of *bytes* that was paid by the user + // to cover this slot. + pub(crate) pre_paid_changes: HistoryRecorder, H>, + + // The changes that have been paid for in the current transaction + pub(crate) paid_changes: HistoryRecorder, H>, + + // The map that contains all the first values read from storage for each slot. + // While formally it does not have to be rolled back, we still do it to avoid memory bloat + // for unused slots. + pub(crate) initial_values: HistoryRecorder, H>, + + // Storage refunds that oracle has returned in `estimate_refunds_for_write`. + pub(crate) returned_refunds: HistoryRecorder, H>, + + // Keeps track of storage keys that were ever written to. + pub(crate) written_keys: HistoryRecorder, HistoryEnabled>, + // Keeps track of storage keys that were ever read. 
+    pub(crate) read_keys: HistoryRecorder<HashMap<StorageKey, ()>, HistoryEnabled>,
+}
+
+impl<S: WriteStorage, H: HistoryMode> OracleWithHistory for StorageOracle<S, H> {
+    fn rollback_to_timestamp(&mut self, timestamp: Timestamp) {
+        self.storage.rollback_to_timestamp(timestamp);
+        self.frames_stack.rollback_to_timestamp(timestamp);
+        self.pre_paid_changes.rollback_to_timestamp(timestamp);
+        self.paid_changes.rollback_to_timestamp(timestamp);
+        self.initial_values.rollback_to_timestamp(timestamp);
+        self.returned_refunds.rollback_to_timestamp(timestamp);
+        self.written_keys.rollback_to_timestamp(timestamp);
+        self.read_keys.rollback_to_timestamp(timestamp);
+    }
+}
+
+impl<S: WriteStorage, H: HistoryMode> StorageOracle<S, H> {
+    pub fn new(storage: StoragePtr<S>) -> Self {
+        Self {
+            storage: HistoryRecorder::from_inner(StorageWrapper::new(storage)),
+            frames_stack: Default::default(),
+            pre_paid_changes: Default::default(),
+            paid_changes: Default::default(),
+            initial_values: Default::default(),
+            returned_refunds: Default::default(),
+            written_keys: Default::default(),
+            read_keys: Default::default(),
+        }
+    }
+
+    pub fn delete_history(&mut self) {
+        self.storage.delete_history();
+        self.frames_stack.delete_history();
+        self.pre_paid_changes.delete_history();
+        self.paid_changes.delete_history();
+        self.initial_values.delete_history();
+        self.returned_refunds.delete_history();
+        self.written_keys.delete_history();
+        self.read_keys.delete_history();
+    }
+
+    fn is_storage_key_free(&self, key: &StorageKey) -> bool {
+        key.address() == &zksync_system_constants::SYSTEM_CONTEXT_ADDRESS
+            || *key == storage_key_for_eth_balance(&BOOTLOADER_ADDRESS)
+    }
+
+    fn get_initial_value(&self, storage_key: &StorageKey) -> Option<U256> {
+        self.initial_values.inner().get(storage_key).copied()
+    }
+
+    fn set_initial_value(&mut self, storage_key: &StorageKey, value: U256, timestamp: Timestamp) {
+        if !self.initial_values.inner().contains_key(storage_key) {
+            self.initial_values.insert(*storage_key, value, timestamp);
+        }
+    }
+
+    fn read_value(&mut self, mut query: LogQuery) -> LogQuery {
+        let key = triplet_to_storage_key(query.shard_id, query.address, query.key);
+
+        if !self.read_keys.inner().contains_key(&key) {
+            self.read_keys.insert(key, (), query.timestamp);
+        }
+        let current_value = self.storage.read_from_storage(&key);
+
+        query.read_value = current_value;
+
+        self.set_initial_value(&key, current_value, query.timestamp);
+
+        self.frames_stack.push_forward(
+            Box::new(StorageLogQuery {
+                log_query: query,
+                log_type: StorageLogQueryType::Read,
+            }),
+            query.timestamp,
+        );
+
+        query
+    }
+
+    fn write_value(&mut self, query: LogQuery) -> LogQuery {
+        let key = triplet_to_storage_key(query.shard_id, query.address, query.key);
+        if !self.written_keys.inner().contains_key(&key) {
+            self.written_keys.insert(key, (), query.timestamp);
+        }
+        let current_value =
+            self.storage
+                .write_to_storage(key, query.written_value, query.timestamp);
+
+        let is_initial_write = self.storage.get_ptr().borrow_mut().is_write_initial(&key);
+        let log_query_type = if is_initial_write {
+            StorageLogQueryType::InitialWrite
+        } else {
+            StorageLogQueryType::RepeatedWrite
+        };
+
+        self.set_initial_value(&key, current_value, query.timestamp);
+
+        let mut storage_log_query = StorageLogQuery {
+            log_query: query,
+            log_type: log_query_type,
+        };
+        self.frames_stack
+            .push_forward(Box::new(storage_log_query), query.timestamp);
+        storage_log_query.log_query.rollback = true;
+        self.frames_stack
+            .push_rollback(Box::new(storage_log_query), query.timestamp);
+        storage_log_query.log_query.rollback = false;
+
+        query
+    }
+
+    // Returns the amount of funds that has already been paid for writes into this storage slot.
+    fn prepaid_for_write(&self, storage_key: &StorageKey) -> u32 {
+        self.paid_changes
+            .inner()
+            .get(storage_key)
+            .copied()
+            .unwrap_or_else(|| {
+                self.pre_paid_changes
+                    .inner()
+                    .get(storage_key)
+                    .copied()
+                    .unwrap_or(0)
+            })
+    }
+
+    // Remembers the changes that have been paid for in the current transaction and returns
+    // the number of pubdata bytes published beyond what had already been paid for.
+    pub(crate) fn save_paid_changes(&mut self, timestamp: Timestamp) -> u32 {
+        let mut published = 0;
+
+        let modified_keys = self
+            .paid_changes
+            .inner()
+            .iter()
+            .map(|(k, v)| (*k, *v))
+            .collect::<Vec<_>>();
+
+        for (key, _) in modified_keys {
+            // It is expected that for each slot for which we have paid changes, some
+            // first slot value has already been read.
+            let first_slot_value = self.initial_values.inner().get(&key).copied().unwrap();
+
+            // This is the value that has been written to the storage slot at the end.
+            let current_slot_value = self.storage.read_from_storage(&key);
+
+            let required_pubdata =
+                self.base_price_for_write(&key, first_slot_value, current_slot_value);
+
+            // We assume that `prepaid_for_slot` represents both the number of pubdata bytes published and the number of bytes paid by previous transactions,
+            // as they should be identical.
+            let prepaid_for_slot = self
+                .pre_paid_changes
+                .inner()
+                .get(&key)
+                .copied()
+                .unwrap_or_default();
+
+            published += required_pubdata.saturating_sub(prepaid_for_slot);
+
+            // We remove the slot from the paid changes and move it to the pre-paid changes as
+            // the transaction ends.
+            self.paid_changes.remove(key, timestamp);
+            self.pre_paid_changes
+                .insert(key, prepaid_for_slot.max(required_pubdata), timestamp);
+        }
+
+        published
+    }
+
+    fn base_price_for_write_query(&self, query: &LogQuery) -> u32 {
+        let storage_key = storage_key_of_log(query);
+
+        let initial_value = self
+            .get_initial_value(&storage_key)
+            .unwrap_or(self.storage.read_from_storage(&storage_key));
+
+        self.base_price_for_write(&storage_key, initial_value, query.written_value)
+    }
+
+    pub(crate) fn base_price_for_write(
+        &self,
+        storage_key: &StorageKey,
+        prev_value: U256,
+        new_value: U256,
+    ) -> u32 {
+        if self.is_storage_key_free(storage_key) || prev_value == new_value {
+            return 0;
+        }
+
+        let is_initial_write = self
+            .storage
+            .get_ptr()
+            .borrow_mut()
+            .is_write_initial(storage_key);
+
+        get_pubdata_price_bytes(prev_value, new_value, is_initial_write)
+    }
+
+    // Returns the price of the update in terms of pubdata bytes.
+    // TODO (SMA-1701): update VM to accept gas instead of pubdata.
+    fn value_update_price(&mut self, query: &LogQuery) -> u32 {
+        let storage_key = storage_key_of_log(query);
+
+        let base_cost = self.base_price_for_write_query(query);
+
+        let initial_value = self
+            .get_initial_value(&storage_key)
+            .unwrap_or(self.storage.read_from_storage(&storage_key));
+
+        self.set_initial_value(&storage_key, initial_value, query.timestamp);
+
+        let already_paid = self.prepaid_for_write(&storage_key);
+
+        if base_cost <= already_paid {
+            // Some other transaction has already paid for this slot; no need to pay anything.
+            0u32
+        } else {
+            base_cost - already_paid
+        }
+    }
+
+    /// Returns storage log queries from the current frame where `log.log_query.timestamp >= from_timestamp`.
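+    /// The lookup scans the frame linearly from the end, since the queries are not guaranteed
+    /// to be sorted by timestamp.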
+    pub(crate) fn storage_log_queries_after_timestamp(
+        &self,
+        from_timestamp: Timestamp,
+    ) -> &[Box<StorageLogQuery>] {
+        let logs = self.frames_stack.forward().current_frame();
+
+        // Select all of the last elements where `l.log_query.timestamp >= from_timestamp`.
+        // Note that using binary search here is dangerous, because the logs are not sorted by timestamp.
+        logs.rsplit(|l| l.log_query.timestamp < from_timestamp)
+            .next()
+            .unwrap_or(&[])
+    }
+
+    pub(crate) fn get_final_log_queries(&self) -> Vec<StorageLogQuery> {
+        assert_eq!(
+            self.frames_stack.len(),
+            1,
+            "VM finished execution in unexpected state"
+        );
+
+        self.frames_stack
+            .forward()
+            .current_frame()
+            .iter()
+            .map(|x| **x)
+            .collect()
+    }
+
+    pub(crate) fn get_size(&self) -> usize {
+        let frames_stack_size = self.frames_stack.get_size();
+        let paid_changes_size =
+            self.paid_changes.inner().len() * std::mem::size_of::<(StorageKey, u32)>();
+
+        frames_stack_size + paid_changes_size
+    }
+
+    pub(crate) fn get_history_size(&self) -> usize {
+        let storage_size = self.storage.borrow_history(|h| h.len(), 0)
+            * std::mem::size_of::<<StorageWrapper<S> as WithHistory>::HistoryRecord>();
+        let frames_stack_size = self.frames_stack.get_history_size();
+        let paid_changes_size = self.paid_changes.borrow_history(|h| h.len(), 0)
+            * std::mem::size_of::<<HashMap<StorageKey, u32> as WithHistory>::HistoryRecord>();
+        storage_size + frames_stack_size + paid_changes_size
+    }
+}
+
+impl<S: WriteStorage, H: HistoryMode> VmStorageOracle for StorageOracle<S, H> {
+    // Perform a storage read/write access by taking a partially filled query
+    // and returning a filled query and a cold/warm marker for pricing purposes.
+    fn execute_partial_query(
+        &mut self,
+        _monotonic_cycle_counter: u32,
+        mut query: LogQuery,
+    ) -> LogQuery {
+        //```
+        // tracing::trace!(
+        //     "execute partial query cyc {:?} addr {:?} key {:?}, rw {:?}, wr {:?}, tx {:?}",
+        //     _monotonic_cycle_counter,
+        //     query.address,
+        //     query.key,
+        //     query.rw_flag,
+        //     query.written_value,
+        //     query.tx_number_in_block
+        // );
+        //```
+        assert!(!query.rollback);
+        if query.rw_flag {
+            // The number of bytes that have been compensated by the user to perform this write.
+            let storage_key = storage_key_of_log(&query);
+            let read_value = self.storage.read_from_storage(&storage_key);
+            query.read_value = read_value;
+
+            // It is considered that the user has paid the whole base price for the writes.
+            let to_pay_by_user = self.base_price_for_write_query(&query);
+            let prepaid = self.prepaid_for_write(&storage_key);
+
+            if to_pay_by_user > prepaid {
+                self.paid_changes.apply_historic_record(
+                    HashMapHistoryEvent {
+                        key: storage_key,
+                        value: Some(to_pay_by_user),
+                    },
+                    query.timestamp,
+                );
+            }
+            self.write_value(query)
+        } else {
+            self.read_value(query)
+        }
+    }
+
+    // We can return the size of the refund before each storage query.
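+    // For example, if a slot's base write cost has already been fully prepaid by an earlier
+    // transaction in the batch, `value_update_price` returns 0 and the user is refunded the
+    // entire default `INITIAL_STORAGE_WRITE_PUBDATA_BYTES` charge.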
+    // Note that while `RefundType` allows providing refunds both in
+    // `ergs` and `pubdata`, only refunds in pubdata will be compensated to the users.
+    fn estimate_refunds_for_write(
+        &mut self, // to avoid any hacks inside, like prefetch
+        _monotonic_cycle_counter: u32,
+        partial_query: &LogQuery,
+    ) -> RefundType {
+        let storage_key = storage_key_of_log(partial_query);
+        let mut partial_query = *partial_query;
+        let read_value = self.storage.read_from_storage(&storage_key);
+        partial_query.read_value = read_value;
+
+        let price_to_pay = self
+            .value_update_price(&partial_query)
+            .min(INITIAL_STORAGE_WRITE_PUBDATA_BYTES as u32);
+
+        let refund = RefundType::RepeatedWrite(RefundedAmounts {
+            ergs: 0,
+            // `INITIAL_STORAGE_WRITE_PUBDATA_BYTES` is the default amount of pubdata bytes the user pays for.
+            pubdata_bytes: (INITIAL_STORAGE_WRITE_PUBDATA_BYTES as u32) - price_to_pay,
+        });
+        self.returned_refunds.apply_historic_record(
+            VectorHistoryEvent::Push(refund.pubdata_refund()),
+            partial_query.timestamp,
+        );
+
+        refund
+    }
+
+    // Indicate the start of an execution frame for rollback purposes.
+    fn start_frame(&mut self, timestamp: Timestamp) {
+        self.frames_stack.push_frame(timestamp);
+    }
+
+    // Indicate that the execution frame went out of scope, so we can
+    // log the history and either roll back immediately or keep the records to roll back later.
+    fn finish_frame(&mut self, timestamp: Timestamp, panicked: bool) {
+        // If we panic, we append the frame's forward queries and rollbacks to the forward queue of the parent;
+        // otherwise we place the rollbacks of the child before the rollbacks of the parent.
+        if panicked {
+            // Perform the actual rollback.
+            for query in self.frames_stack.rollback().current_frame().iter().rev() {
+                let read_value = match query.log_type {
+                    StorageLogQueryType::Read => {
+                        // Having Read logs in rollback is not possible.
+                        tracing::warn!("Read log in rollback queue {:?}", query);
+                        continue;
+                    }
+                    StorageLogQueryType::InitialWrite | StorageLogQueryType::RepeatedWrite => {
+                        query.log_query.read_value
+                    }
+                };
+
+                let LogQuery { written_value, .. } = query.log_query;
+                let key = triplet_to_storage_key(
+                    query.log_query.shard_id,
+                    query.log_query.address,
+                    query.log_query.key,
+                );
+                let current_value = self.storage.write_to_storage(
+                    key,
+                    // NOTE: since this is a rollback query,
+                    // the `read_value` is being set.
+                    read_value, timestamp,
+                );
+
+                // Additional validation that the current value was correct.
+                // Unwrap is safe because the return value from `write_inner` is the previous value in this leaf.
+                // It is impossible to set a leaf value to `None`.
+                assert_eq!(current_value, written_value);
+            }
+
+            self.frames_stack
+                .move_rollback_to_forward(|_| true, timestamp);
+        }
+        self.frames_stack.merge_frame(timestamp);
+    }
+}
+
+/// Returns the number of bytes needed to publish a slot.
+// Since we need to publish the state diffs onchain, for each of the updated storage slots
+// we basically need to publish the following pair: `(<storage_key, compressed_new_value>)`.
+// For the key we use the following optimization:
+//   - The first time we publish it, we use 32 bytes.
+//     Then, we remember an 8-byte id for this slot and assign it to it. We call this an initial write.
+//   - The second time we publish it, we will use the 4/5-byte representation of this 8-byte id instead of the 32
+//     bytes of the entire key.
+// For value compression, we use a metadata byte which holds the length of the value and the operation from the
+// previous state to the new state, and the compressed value. The maximum for this is 33 bytes.
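+// The key part thus costs 32 bytes (`BYTES_PER_DERIVED_KEY`) for an initial write and a 5-byte
+// enumeration index (`BYTES_PER_ENUMERATION_INDEX`) for a repeated one.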
+// Total bytes for initial writes then becomes 65 bytes and repeated writes becomes 38 bytes. +fn get_pubdata_price_bytes(initial_value: U256, final_value: U256, is_initial: bool) -> u32 { + // TODO (SMA-1702): take into account the content of the log query, i.e. values that contain mostly zeroes + // should cost less. + + let compressed_value_size = + compress_with_best_strategy(initial_value, final_value).len() as u32; + + if is_initial { + (BYTES_PER_DERIVED_KEY as u32) + compressed_value_size + } else { + (BYTES_PER_ENUMERATION_INDEX as u32) + compressed_value_size + } +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn test_get_pubdata_price_bytes() { + let initial_value = U256::default(); + let final_value = U256::from(92122); + let is_initial = true; + + let compression_len = 4; + + let initial_bytes_price = get_pubdata_price_bytes(initial_value, final_value, is_initial); + let repeated_bytes_price = get_pubdata_price_bytes(initial_value, final_value, !is_initial); + + assert_eq!( + initial_bytes_price, + (compression_len + BYTES_PER_DERIVED_KEY as usize) as u32 + ); + assert_eq!( + repeated_bytes_price, + (compression_len + BYTES_PER_ENUMERATION_INDEX as usize) as u32 + ); + } +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tests/bootloader.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tests/bootloader.rs new file mode 100644 index 00000000000..0ee3b811b4c --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/tests/bootloader.rs @@ -0,0 +1,56 @@ +use zksync_types::U256; + +use crate::{ + interface::{ExecutionResult, Halt, TxExecutionMode, VmExecutionMode, VmInterface}, + vm_boojum_integration::{ + constants::BOOTLOADER_HEAP_PAGE, + tests::{ + tester::VmTesterBuilder, + utils::{get_bootloader, verify_required_memory, BASE_SYSTEM_CONTRACTS}, + }, + HistoryEnabled, + }, +}; + +#[test] +fn test_dummy_bootloader() { + let mut base_system_contracts = BASE_SYSTEM_CONTRACTS.clone(); + base_system_contracts.bootloader = get_bootloader("dummy"); + + let mut vm = VmTesterBuilder::new(HistoryEnabled) + .with_empty_in_memory_storage() + .with_base_system_smart_contracts(base_system_contracts) + .with_execution_mode(TxExecutionMode::VerifyExecute) + .build(); + + let result = vm.vm.execute(VmExecutionMode::Batch); + assert!(!result.result.is_failed()); + + let correct_first_cell = U256::from_str_radix("123123123", 16).unwrap(); + verify_required_memory( + &vm.vm.state, + vec![(correct_first_cell, BOOTLOADER_HEAP_PAGE, 0)], + ); +} + +#[test] +fn test_bootloader_out_of_gas() { + let mut base_system_contracts = BASE_SYSTEM_CONTRACTS.clone(); + base_system_contracts.bootloader = get_bootloader("dummy"); + + let mut vm = VmTesterBuilder::new(HistoryEnabled) + .with_empty_in_memory_storage() + .with_base_system_smart_contracts(base_system_contracts) + .with_gas_limit(10) + .with_execution_mode(TxExecutionMode::VerifyExecute) + .build(); + + let res = vm.vm.execute(VmExecutionMode::Batch); + + assert!(matches!( + res.result, + ExecutionResult::Halt { + reason: Halt::BootloaderOutOfGas + } + )); +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tests/bytecode_publishing.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tests/bytecode_publishing.rs new file mode 100644 index 00000000000..ad1b0f26036 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/tests/bytecode_publishing.rs @@ -0,0 +1,43 @@ +use zksync_types::event::extract_long_l2_to_l1_messages; +use 
zksync_utils::bytecode::compress_bytecode; + +use crate::{ + interface::{TxExecutionMode, VmExecutionMode, VmInterface}, + vm_boojum_integration::{ + tests::{ + tester::{DeployContractsTx, TxType, VmTesterBuilder}, + utils::read_test_contract, + }, + HistoryEnabled, + }, +}; + +#[test] +fn test_bytecode_publishing() { + // In this test, we aim to ensure that the contents of the compressed bytecodes + // are included as part of the L2->L1 long messages + let mut vm = VmTesterBuilder::new(HistoryEnabled) + .with_empty_in_memory_storage() + .with_execution_mode(TxExecutionMode::VerifyExecute) + .with_random_rich_accounts(1) + .build(); + + let counter = read_test_contract(); + let account = &mut vm.rich_accounts[0]; + + let compressed_bytecode = compress_bytecode(&counter).unwrap(); + + let DeployContractsTx { tx, .. } = account.get_deploy_tx(&counter, None, TxType::L2); + vm.vm.push_transaction(tx); + let result = vm.vm.execute(VmExecutionMode::OneTx); + assert!(!result.result.is_failed(), "Transaction wasn't successful"); + + vm.vm.execute(VmExecutionMode::Batch); + + let state = vm.vm.get_current_execution_state(); + let long_messages = extract_long_l2_to_l1_messages(&state.events); + assert!( + long_messages.contains(&compressed_bytecode), + "Bytecode not published" + ); +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tests/call_tracer.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tests/call_tracer.rs new file mode 100644 index 00000000000..e9df4fa80ff --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/tests/call_tracer.rs @@ -0,0 +1,92 @@ +use std::sync::Arc; + +use once_cell::sync::OnceCell; +use zksync_types::{Address, Execute}; + +use crate::{ + interface::{TxExecutionMode, VmExecutionMode, VmInterface}, + tracers::CallTracer, + vm_boojum_integration::{ + constants::BLOCK_GAS_LIMIT, + tests::{ + tester::VmTesterBuilder, + utils::{read_max_depth_contract, read_test_contract}, + }, + HistoryEnabled, ToTracerPointer, + }, +}; + +// This test is ultra slow, so it's ignored by default. 
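+// It can still be run on demand, e.g. with `cargo test -- --ignored`.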
+#[test]
+#[ignore]
+fn test_max_depth() {
+    let contract = read_max_depth_contract();
+    let address = Address::random();
+    let mut vm = VmTesterBuilder::new(HistoryEnabled)
+        .with_empty_in_memory_storage()
+        .with_random_rich_accounts(1)
+        .with_deployer()
+        .with_gas_limit(BLOCK_GAS_LIMIT)
+        .with_execution_mode(TxExecutionMode::VerifyExecute)
+        .with_custom_contracts(vec![(contract, address, true)])
+        .build();
+
+    let account = &mut vm.rich_accounts[0];
+    let tx = account.get_l2_tx_for_execute(
+        Execute {
+            contract_address: address,
+            calldata: vec![],
+            value: Default::default(),
+            factory_deps: None,
+        },
+        None,
+    );
+
+    let result = Arc::new(OnceCell::new());
+    let call_tracer = CallTracer::new(result.clone()).into_tracer_pointer();
+    vm.vm.push_transaction(tx);
+    let res = vm.vm.inspect(call_tracer.into(), VmExecutionMode::OneTx);
+    assert!(result.get().is_some());
+    assert!(res.result.is_failed());
+}
+
+#[test]
+fn test_basic_behavior() {
+    let contract = read_test_contract();
+    let address = Address::random();
+    let mut vm = VmTesterBuilder::new(HistoryEnabled)
+        .with_empty_in_memory_storage()
+        .with_random_rich_accounts(1)
+        .with_deployer()
+        .with_gas_limit(BLOCK_GAS_LIMIT)
+        .with_execution_mode(TxExecutionMode::VerifyExecute)
+        .with_custom_contracts(vec![(contract, address, true)])
+        .build();
+
+    let increment_by_6_calldata =
+        "7cf5dab00000000000000000000000000000000000000000000000000000000000000006";
+
+    let account = &mut vm.rich_accounts[0];
+    let tx = account.get_l2_tx_for_execute(
+        Execute {
+            contract_address: address,
+            calldata: hex::decode(increment_by_6_calldata).unwrap(),
+            value: Default::default(),
+            factory_deps: None,
+        },
+        None,
+    );
+
+    let result = Arc::new(OnceCell::new());
+    let call_tracer = CallTracer::new(result.clone()).into_tracer_pointer();
+    vm.vm.push_transaction(tx);
+    let res = vm.vm.inspect(call_tracer.into(), VmExecutionMode::OneTx);
+
+    let call_tracer_result = result.get().unwrap();
+
+    assert_eq!(call_tracer_result.len(), 1);
+    // Expect plenty of subcalls underneath.
+    let subcall = &call_tracer_result[0].calls;
+    assert!(subcall.len() > 10);
+    assert!(!res.result.is_failed());
+}
diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tests/circuits.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tests/circuits.rs
new file mode 100644
index 00000000000..7c5b8ee2a2d
--- /dev/null
+++ b/core/lib/multivm/src/versions/vm_boojum_integration/tests/circuits.rs
@@ -0,0 +1,44 @@
+use zksync_types::{Address, Execute, U256};
+
+use crate::{
+    interface::{TxExecutionMode, VmExecutionMode, VmInterface},
+    vm_boojum_integration::{constants::BLOCK_GAS_LIMIT, tests::tester::VmTesterBuilder, HistoryEnabled},
+};
+
+// Checks that the estimated number of circuits for a simple transfer doesn't differ much
+// from the hardcoded expected value.
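+// The 10% tolerance below absorbs small drift in the heuristic estimate.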
+#[test] +fn test_circuits() { + let mut vm = VmTesterBuilder::new(HistoryEnabled) + .with_empty_in_memory_storage() + .with_random_rich_accounts(1) + .with_deployer() + .with_gas_limit(BLOCK_GAS_LIMIT) + .with_execution_mode(TxExecutionMode::VerifyExecute) + .build(); + + let account = &mut vm.rich_accounts[0]; + let tx = account.get_l2_tx_for_execute( + Execute { + contract_address: Address::random(), + calldata: Vec::new(), + value: U256::from(1u8), + factory_deps: None, + }, + None, + ); + vm.vm.push_transaction(tx); + let res = vm.vm.inspect(Default::default(), VmExecutionMode::OneTx); + + const EXPECTED_CIRCUITS_USED: f32 = 4.8685; + let delta = + (res.statistics.estimated_circuits_used - EXPECTED_CIRCUITS_USED) / EXPECTED_CIRCUITS_USED; + + if delta.abs() > 0.1 { + panic!( + "Estimation differs from expected result by too much: {}%, expected value: {}", + delta * 100.0, + res.statistics.estimated_circuits_used + ); + } +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tests/default_aa.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tests/default_aa.rs new file mode 100644 index 00000000000..a8c20cfebc1 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/tests/default_aa.rs @@ -0,0 +1,76 @@ +use zksync_system_constants::L2_ETH_TOKEN_ADDRESS; +use zksync_types::{ + get_code_key, get_known_code_key, get_nonce_key, + system_contracts::{DEPLOYMENT_NONCE_INCREMENT, TX_NONCE_INCREMENT}, + AccountTreeId, U256, +}; +use zksync_utils::u256_to_h256; + +use crate::{ + interface::{TxExecutionMode, VmExecutionMode, VmInterface}, + vm_boojum_integration::{ + tests::{ + tester::{DeployContractsTx, TxType, VmTesterBuilder}, + utils::{get_balance, read_test_contract, verify_required_storage}, + }, + HistoryEnabled, + }, +}; + +#[test] +fn test_default_aa_interaction() { + // In this test, we aim to test whether a simple account interaction (without any fee logic) + // will work. The account will try to deploy a simple contract from integration tests. + let mut vm = VmTesterBuilder::new(HistoryEnabled) + .with_empty_in_memory_storage() + .with_execution_mode(TxExecutionMode::VerifyExecute) + .with_random_rich_accounts(1) + .build(); + + let counter = read_test_contract(); + let account = &mut vm.rich_accounts[0]; + let DeployContractsTx { + tx, + bytecode_hash, + address, + } = account.get_deploy_tx(&counter, None, TxType::L2); + let maximal_fee = tx.gas_limit() * vm.vm.batch_env.base_fee(); + + vm.vm.push_transaction(tx); + let result = vm.vm.execute(VmExecutionMode::OneTx); + assert!(!result.result.is_failed(), "Transaction wasn't successful"); + + vm.vm.execute(VmExecutionMode::Batch); + vm.vm.get_current_execution_state(); + + // Both deployment and ordinary nonce should be incremented by one. + let account_nonce_key = get_nonce_key(&account.address); + let expected_nonce = TX_NONCE_INCREMENT + DEPLOYMENT_NONCE_INCREMENT; + + // The code hash of the deployed contract should be marked as republished. + let known_codes_key = get_known_code_key(&bytecode_hash); + + // The contract should be deployed successfully. 
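+    // (i.e. its bytecode hash must end up under the account's code key).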
+ let account_code_key = get_code_key(&address); + + let expected_slots = vec![ + (u256_to_h256(expected_nonce), account_nonce_key), + (u256_to_h256(U256::from(1u32)), known_codes_key), + (bytecode_hash, account_code_key), + ]; + + verify_required_storage(&vm.vm.state, expected_slots); + + let expected_fee = maximal_fee + - U256::from(result.refunds.gas_refunded) * U256::from(vm.vm.batch_env.base_fee()); + let operator_balance = get_balance( + AccountTreeId::new(L2_ETH_TOKEN_ADDRESS), + &vm.fee_account, + vm.vm.state.storage.storage.get_ptr(), + ); + + assert_eq!( + operator_balance, expected_fee, + "Operator did not receive his fee" + ); +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tests/gas_limit.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tests/gas_limit.rs new file mode 100644 index 00000000000..30a65097111 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/tests/gas_limit.rs @@ -0,0 +1,47 @@ +use zksync_types::{fee::Fee, Execute}; + +use crate::{ + interface::{TxExecutionMode, VmInterface}, + vm_boojum_integration::{ + constants::{BOOTLOADER_HEAP_PAGE, TX_DESCRIPTION_OFFSET, TX_GAS_LIMIT_OFFSET}, + tests::tester::VmTesterBuilder, + HistoryDisabled, + }, +}; + +/// Checks that `TX_GAS_LIMIT_OFFSET` constant is correct. +#[test] +fn test_tx_gas_limit_offset() { + let mut vm = VmTesterBuilder::new(HistoryDisabled) + .with_empty_in_memory_storage() + .with_execution_mode(TxExecutionMode::VerifyExecute) + .with_random_rich_accounts(1) + .build(); + + let gas_limit = 9999.into(); + let tx = vm.rich_accounts[0].get_l2_tx_for_execute( + Execute { + contract_address: Default::default(), + calldata: vec![], + value: Default::default(), + factory_deps: None, + }, + Some(Fee { + gas_limit, + ..Default::default() + }), + ); + + vm.vm.push_transaction(tx); + + let gas_limit_from_memory = vm + .vm + .state + .memory + .read_slot( + BOOTLOADER_HEAP_PAGE as usize, + TX_DESCRIPTION_OFFSET + TX_GAS_LIMIT_OFFSET, + ) + .value; + assert_eq!(gas_limit_from_memory, gas_limit); +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tests/get_used_contracts.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tests/get_used_contracts.rs new file mode 100644 index 00000000000..25aab0871f1 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/tests/get_used_contracts.rs @@ -0,0 +1,109 @@ +use std::collections::{HashMap, HashSet}; + +use itertools::Itertools; +use zksync_state::WriteStorage; +use zksync_system_constants::CONTRACT_DEPLOYER_ADDRESS; +use zksync_test_account::Account; +use zksync_types::{Execute, U256}; +use zksync_utils::{bytecode::hash_bytecode, h256_to_u256}; + +use crate::{ + interface::{TxExecutionMode, VmExecutionMode, VmInterface}, + vm_boojum_integration::{ + tests::{ + tester::{TxType, VmTesterBuilder}, + utils::{read_test_contract, BASE_SYSTEM_CONTRACTS}, + }, + HistoryDisabled, Vm, + }, + HistoryMode, +}; + +#[test] +fn test_get_used_contracts() { + let mut vm = VmTesterBuilder::new(HistoryDisabled) + .with_empty_in_memory_storage() + .with_execution_mode(TxExecutionMode::VerifyExecute) + .build(); + + assert!(known_bytecodes_without_aa_code(&vm.vm).is_empty()); + + // create and push and execute some not-empty factory deps transaction with success status + // to check that get_used_contracts() updates + let contract_code = read_test_contract(); + let mut account = Account::random(); + let tx = account.get_deploy_tx(&contract_code, None, TxType::L1 { serial_id: 0 }); + 
vm.vm.push_transaction(tx.tx.clone());
+    let result = vm.vm.execute(VmExecutionMode::OneTx);
+    assert!(!result.result.is_failed());
+
+    assert!(vm
+        .vm
+        .get_used_contracts()
+        .contains(&h256_to_u256(tx.bytecode_hash)));
+
+    // Note: Default_AA will be in the list of used contracts if an L2 tx is used.
+    assert_eq!(
+        vm.vm
+            .get_used_contracts()
+            .into_iter()
+            .collect::<HashSet<U256>>(),
+        known_bytecodes_without_aa_code(&vm.vm)
+            .keys()
+            .cloned()
+            .collect::<HashSet<U256>>()
+    );
+
+    // Create, push, and execute a transaction with non-empty factory deps that fails
+    // (`known_bytecodes` will be updated, but we expect `get_used_contracts()` not to be).
+
+    let calldata = [1, 2, 3];
+    let big_calldata: Vec<u8> = calldata
+        .iter()
+        .cycle()
+        .take(calldata.len() * 1024)
+        .cloned()
+        .collect();
+    let account2 = Account::random();
+    let tx2 = account2.get_l1_tx(
+        Execute {
+            contract_address: CONTRACT_DEPLOYER_ADDRESS,
+            calldata: big_calldata,
+            value: Default::default(),
+            factory_deps: Some(vec![vec![1; 32]]),
+        },
+        1,
+    );
+
+    vm.vm.push_transaction(tx2.clone());
+
+    let res2 = vm.vm.execute(VmExecutionMode::OneTx);
+
+    assert!(res2.result.is_failed());
+
+    for factory_dep in tx2.execute.factory_deps.unwrap() {
+        let hash = hash_bytecode(&factory_dep);
+        let hash_to_u256 = h256_to_u256(hash);
+        assert!(known_bytecodes_without_aa_code(&vm.vm)
+            .keys()
+            .contains(&hash_to_u256));
+        assert!(!vm.vm.get_used_contracts().contains(&hash_to_u256));
+    }
+}
+
+fn known_bytecodes_without_aa_code<S: WriteStorage, H: HistoryMode>(
+    vm: &Vm<S, H>,
+) -> HashMap<U256, Vec<U256>> {
+    let mut known_bytecodes_without_aa_code = vm
+        .state
+        .decommittment_processor
+        .known_bytecodes
+        .inner()
+        .clone();
+
+    known_bytecodes_without_aa_code
+        .remove(&h256_to_u256(BASE_SYSTEM_CONTRACTS.default_aa.hash))
+        .unwrap();
+
+    known_bytecodes_without_aa_code
+}
diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tests/invalid_bytecode.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tests/invalid_bytecode.rs
new file mode 100644
index 00000000000..079e6d61b6c
--- /dev/null
+++ b/core/lib/multivm/src/versions/vm_boojum_integration/tests/invalid_bytecode.rs
@@ -0,0 +1,120 @@
+use zksync_types::H256;
+use zksync_utils::h256_to_u256;
+
+use crate::vm_boojum_integration::tests::tester::VmTesterBuilder;
+use crate::vm_boojum_integration::types::inputs::system_env::TxExecutionMode;
+use crate::vm_boojum_integration::{HistoryEnabled, TxRevertReason};
+
+// TODO this test requires a lot of hacks for bypassing the bytecode checks in the VM.
+// Port it later, it's not significant
for now + +#[test] +fn test_invalid_bytecode() { + let mut vm_builder = VmTesterBuilder::new(HistoryEnabled) + .with_in_memory_storage() + .with_execution_mode(TxExecutionMode::VerifyExecute) + .with_random_rich_accounts(1); + let mut storage = vm_builder.take_storage(); + let mut vm = vm_builder.build(&mut storage); + + let block_gas_per_pubdata = vm_test_env + .block_context + .context + .block_gas_price_per_pubdata(); + + let mut test_vm_with_custom_bytecode_hash = + |bytecode_hash: H256, expected_revert_reason: Option| { + let mut oracle_tools = + OracleTools::new(vm_test_env.storage_ptr.as_mut(), HistoryEnabled); + + let (encoded_tx, predefined_overhead) = get_l1_tx_with_custom_bytecode_hash( + h256_to_u256(bytecode_hash), + block_gas_per_pubdata as u32, + ); + + run_vm_with_custom_factory_deps( + &mut oracle_tools, + vm_test_env.block_context.context, + &vm_test_env.block_properties, + encoded_tx, + predefined_overhead, + expected_revert_reason, + ); + }; + + let failed_to_mark_factory_deps = |msg: &str, data: Vec| { + TxRevertReason::FailedToMarkFactoryDependencies(VmRevertReason::General { + msg: msg.to_string(), + data, + }) + }; + + // Here we provide the correctly-formatted bytecode hash of + // odd length, so it should work. + test_vm_with_custom_bytecode_hash( + H256([ + 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, + ]), + None, + ); + + // Here we provide correctly formatted bytecode of even length, so + // it should fail. + test_vm_with_custom_bytecode_hash( + H256([ + 1, 0, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, + ]), + Some(failed_to_mark_factory_deps( + "Code length in words must be odd", + vec![ + 8, 195, 121, 160, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 32, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32, 67, 111, 100, 101, 32, 108, 101, 110, + 103, 116, 104, 32, 105, 110, 32, 119, 111, 114, 100, 115, 32, 109, 117, 115, 116, + 32, 98, 101, 32, 111, 100, 100, + ], + )), + ); + + // Here we provide incorrectly formatted bytecode of odd length, so + // it should fail. + test_vm_with_custom_bytecode_hash( + H256([ + 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, + ]), + Some(failed_to_mark_factory_deps( + "Incorrectly formatted bytecodeHash", + vec![ + 8, 195, 121, 160, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 32, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 34, 73, 110, 99, 111, 114, 114, 101, 99, + 116, 108, 121, 32, 102, 111, 114, 109, 97, 116, 116, 101, 100, 32, 98, 121, 116, + 101, 99, 111, 100, 101, 72, 97, 115, 104, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + ], + )), + ); + + // Here we provide incorrectly formatted bytecode of odd length, so + // it should fail. 
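+    // (this time the leading version byte is wrong).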
+    test_vm_with_custom_bytecode_hash(
+        H256([
+            2, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+            0, 0, 0,
+        ]),
+        Some(failed_to_mark_factory_deps(
+            "Incorrectly formatted bytecodeHash",
+            vec![
+                8, 195, 121, 160, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+                0, 0, 0, 0, 0, 0, 0, 0, 0, 32, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+                0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 34, 73, 110, 99, 111, 114, 114, 101, 99,
+                116, 108, 121, 32, 102, 111, 114, 109, 97, 116, 116, 101, 100, 32, 98, 121, 116,
+                101, 99, 111, 100, 101, 72, 97, 115, 104, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+                0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+            ],
+        )),
+    );
+}
diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tests/is_write_initial.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tests/is_write_initial.rs
new file mode 100644
index 00000000000..bf56aa2b816
--- /dev/null
+++ b/core/lib/multivm/src/versions/vm_boojum_integration/tests/is_write_initial.rs
@@ -0,0 +1,48 @@
+use zksync_state::ReadStorage;
+use zksync_types::get_nonce_key;
+
+use crate::{
+    interface::{TxExecutionMode, VmExecutionMode, VmInterface},
+    vm_boojum_integration::{
+        tests::{
+            tester::{Account, TxType, VmTesterBuilder},
+            utils::read_test_contract,
+        },
+        HistoryDisabled,
+    },
+};
+
+#[test]
+fn test_is_write_initial_behaviour() {
+    // In this test, we check the result of `is_write_initial` at different stages.
+    // The main idea is to check that `is_write_initial` uses the correct cache for initial writes
+    // and doesn't mix it up with the repeated writes during a single batch execution.
+
+    let mut account = Account::random();
+    let mut vm = VmTesterBuilder::new(HistoryDisabled)
+        .with_empty_in_memory_storage()
+        .with_execution_mode(TxExecutionMode::VerifyExecute)
+        .with_rich_accounts(vec![account.clone()])
+        .build();
+
+    let nonce_key = get_nonce_key(&account.address);
+    // Check that the next write to the nonce key will be initial.
+    assert!(vm
+        .storage
+        .as_ref()
+        .borrow_mut()
+        .is_write_initial(&nonce_key));
+
+    let contract_code = read_test_contract();
+    let tx = account.get_deploy_tx(&contract_code, None, TxType::L2).tx;
+
+    vm.vm.push_transaction(tx);
+    vm.vm.execute(VmExecutionMode::OneTx);
+
+    // Check that `is_write_initial` still returns true for the nonce key.
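+    // The write above exists only in the VM's uncommitted state, so the underlying storage
+    // still reports the slot as never having been written.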
+ assert!(vm + .storage + .as_ref() + .borrow_mut() + .is_write_initial(&nonce_key)); +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tests/l1_tx_execution.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tests/l1_tx_execution.rs new file mode 100644 index 00000000000..b547f346d28 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/tests/l1_tx_execution.rs @@ -0,0 +1,139 @@ +use zksync_system_constants::BOOTLOADER_ADDRESS; +use zksync_types::{ + get_code_key, get_known_code_key, + l2_to_l1_log::{L2ToL1Log, UserL2ToL1Log}, + storage_writes_deduplicator::StorageWritesDeduplicator, + U256, +}; +use zksync_utils::u256_to_h256; + +use crate::{ + interface::{TxExecutionMode, VmExecutionMode, VmInterface}, + vm_boojum_integration::{ + tests::{ + tester::{TxType, VmTesterBuilder}, + utils::{read_test_contract, verify_required_storage, BASE_SYSTEM_CONTRACTS}, + }, + types::internals::TransactionData, + HistoryEnabled, + }, +}; + +#[test] +fn test_l1_tx_execution() { + // In this test, we try to execute a contract deployment from L1 + // Here instead of marking code hash via the bootloader means, we will be + // using L1->L2 communication, the same it would likely be done during the priority mode. + + // There are always at least 7 initial writes here, because we pay fees from l1: + // - totalSupply of ETH token + // - balance of the refund recipient + // - balance of the bootloader + // - tx_rolling hash + // - rolling hash of L2->L1 logs + // - transaction number in block counter + // - L2->L1 log counter in L1Messenger + + // TODO(PLA-537): right now we are using 4 slots instead of 7 due to 0 fee for transaction. + let basic_initial_writes = 4; + + let mut vm = VmTesterBuilder::new(HistoryEnabled) + .with_empty_in_memory_storage() + .with_base_system_smart_contracts(BASE_SYSTEM_CONTRACTS.clone()) + .with_execution_mode(TxExecutionMode::VerifyExecute) + .with_random_rich_accounts(1) + .build(); + + let contract_code = read_test_contract(); + let account = &mut vm.rich_accounts[0]; + let deploy_tx = account.get_deploy_tx(&contract_code, None, TxType::L1 { serial_id: 1 }); + let tx_data: TransactionData = deploy_tx.tx.clone().into(); + + let required_l2_to_l1_logs: Vec<_> = vec![L2ToL1Log { + shard_id: 0, + is_service: true, + tx_number_in_block: 0, + sender: BOOTLOADER_ADDRESS, + key: tx_data.tx_hash(0.into()), + value: u256_to_h256(U256::from(1u32)), + }] + .into_iter() + .map(UserL2ToL1Log) + .collect(); + + vm.vm.push_transaction(deploy_tx.tx.clone()); + + let res = vm.vm.execute(VmExecutionMode::OneTx); + + // The code hash of the deployed contract should be marked as republished. + let known_codes_key = get_known_code_key(&deploy_tx.bytecode_hash); + + // The contract should be deployed successfully. 
+ let account_code_key = get_code_key(&deploy_tx.address); + + let expected_slots = vec![ + (u256_to_h256(U256::from(1u32)), known_codes_key), + (deploy_tx.bytecode_hash, account_code_key), + ]; + assert!(!res.result.is_failed()); + + verify_required_storage(&vm.vm.state, expected_slots); + + assert_eq!(res.logs.user_l2_to_l1_logs, required_l2_to_l1_logs); + + let tx = account.get_test_contract_transaction( + deploy_tx.address, + true, + None, + false, + TxType::L1 { serial_id: 0 }, + ); + vm.vm.push_transaction(tx); + let res = vm.vm.execute(VmExecutionMode::OneTx); + let storage_logs = res.logs.storage_logs; + let res = StorageWritesDeduplicator::apply_on_empty_state(&storage_logs); + + // Tx panicked + assert_eq!(res.initial_storage_writes - basic_initial_writes, 0); + + let tx = account.get_test_contract_transaction( + deploy_tx.address, + false, + None, + false, + TxType::L1 { serial_id: 0 }, + ); + vm.vm.push_transaction(tx.clone()); + let res = vm.vm.execute(VmExecutionMode::OneTx); + let storage_logs = res.logs.storage_logs; + let res = StorageWritesDeduplicator::apply_on_empty_state(&storage_logs); + // We changed one slot inside contract + assert_eq!(res.initial_storage_writes - basic_initial_writes, 1); + + // No repeated writes + let repeated_writes = res.repeated_storage_writes; + assert_eq!(res.repeated_storage_writes, 0); + + vm.vm.push_transaction(tx); + let storage_logs = vm.vm.execute(VmExecutionMode::OneTx).logs.storage_logs; + let res = StorageWritesDeduplicator::apply_on_empty_state(&storage_logs); + // We do the same storage write, it will be deduplicated, so still 4 initial write and 0 repeated + assert_eq!(res.initial_storage_writes - basic_initial_writes, 1); + assert_eq!(res.repeated_storage_writes, repeated_writes); + + let tx = account.get_test_contract_transaction( + deploy_tx.address, + false, + Some(10.into()), + false, + TxType::L1 { serial_id: 1 }, + ); + vm.vm.push_transaction(tx); + let result = vm.vm.execute(VmExecutionMode::OneTx); + // Method is not payable tx should fail + assert!(result.result.is_failed(), "The transaction should fail"); + + let res = StorageWritesDeduplicator::apply_on_empty_state(&result.logs.storage_logs); + // There are only basic initial writes + assert_eq!(res.initial_storage_writes - basic_initial_writes, 2); +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tests/l2_blocks.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tests/l2_blocks.rs new file mode 100644 index 00000000000..b26cc09e057 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/tests/l2_blocks.rs @@ -0,0 +1,437 @@ +//! +//! Tests for the bootloader +//! The description for each of the tests can be found in the corresponding `.yul` file. +//! 
+ +use zk_evm_1_4_0::aux_structures::Timestamp; +use zksync_state::WriteStorage; +use zksync_system_constants::REQUIRED_L1_TO_L2_GAS_PER_PUBDATA_BYTE; +use zksync_types::{ + block::{pack_block_info, MiniblockHasher}, + AccountTreeId, Execute, ExecuteTransactionCommon, L1BatchNumber, L1TxCommonData, + MiniblockNumber, ProtocolVersionId, StorageKey, Transaction, H160, H256, + SYSTEM_CONTEXT_ADDRESS, SYSTEM_CONTEXT_BLOCK_INFO_POSITION, + SYSTEM_CONTEXT_CURRENT_L2_BLOCK_INFO_POSITION, SYSTEM_CONTEXT_CURRENT_TX_ROLLING_HASH_POSITION, + U256, +}; +use zksync_utils::{h256_to_u256, u256_to_h256}; + +use crate::{ + interface::{ExecutionResult, Halt, L2BlockEnv, TxExecutionMode, VmExecutionMode, VmInterface}, + vm_boojum_integration::{ + constants::{ + BOOTLOADER_HEAP_PAGE, TX_OPERATOR_L2_BLOCK_INFO_OFFSET, + TX_OPERATOR_SLOTS_PER_L2_BLOCK_INFO, + }, + tests::tester::{default_l1_batch, VmTesterBuilder}, + utils::l2_blocks::get_l2_block_hash_key, + HistoryEnabled, Vm, + }, + HistoryMode, +}; + +fn get_l1_noop() -> Transaction { + Transaction { + common_data: ExecuteTransactionCommon::L1(L1TxCommonData { + sender: H160::random(), + gas_limit: U256::from(2000000u32), + gas_per_pubdata_limit: REQUIRED_L1_TO_L2_GAS_PER_PUBDATA_BYTE.into(), + ..Default::default() + }), + execute: Execute { + contract_address: H160::zero(), + calldata: vec![], + value: U256::zero(), + factory_deps: None, + }, + received_timestamp_ms: 0, + raw_bytes: None, + } +} + +#[test] +fn test_l2_block_initialization_timestamp() { + // This test checks that the L2 block initialization works correctly. + // Here we check that that the first block must have timestamp that is greater or equal to the timestamp + // of the current batch. + + let mut vm = VmTesterBuilder::new(HistoryEnabled) + .with_empty_in_memory_storage() + .with_execution_mode(TxExecutionMode::VerifyExecute) + .with_random_rich_accounts(1) + .build(); + + // Override the timestamp of the current miniblock to be 0. + vm.vm.bootloader_state.push_l2_block(L2BlockEnv { + number: 1, + timestamp: 0, + prev_block_hash: MiniblockHasher::legacy_hash(MiniblockNumber(0)), + max_virtual_blocks_to_create: 1, + }); + let l1_tx = get_l1_noop(); + + vm.vm.push_transaction(l1_tx); + let res = vm.vm.execute(VmExecutionMode::OneTx); + + assert_eq!( + res.result, + ExecutionResult::Halt {reason: Halt::FailedToSetL2Block("The timestamp of the L2 block must be greater than or equal to the timestamp of the current batch".to_string())} + ); +} + +#[test] +fn test_l2_block_initialization_number_non_zero() { + // This test checks that the L2 block initialization works correctly. + // Here we check that the first miniblock number can not be zero. 
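+    // (Miniblock 0 is the genesis block, so the first block of a batch must be at least 1.)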
+ + let l1_batch = default_l1_batch(L1BatchNumber(1)); + let first_l2_block = L2BlockEnv { + number: 0, + timestamp: l1_batch.timestamp, + prev_block_hash: MiniblockHasher::legacy_hash(MiniblockNumber(0)), + max_virtual_blocks_to_create: 1, + }; + + let mut vm = VmTesterBuilder::new(HistoryEnabled) + .with_empty_in_memory_storage() + .with_execution_mode(TxExecutionMode::VerifyExecute) + .with_l1_batch_env(l1_batch) + .with_random_rich_accounts(1) + .build(); + + let l1_tx = get_l1_noop(); + + vm.vm.push_transaction(l1_tx); + + let timestamp = Timestamp(vm.vm.state.local_state.timestamp); + set_manual_l2_block_info(&mut vm.vm, 0, first_l2_block, timestamp); + + let res = vm.vm.execute(VmExecutionMode::OneTx); + + assert_eq!( + res.result, + ExecutionResult::Halt { + reason: Halt::FailedToSetL2Block( + "L2 block number is never expected to be zero".to_string() + ) + } + ); +} + +fn test_same_l2_block( + expected_error: Option, + override_timestamp: Option, + override_prev_block_hash: Option, +) { + let mut l1_batch = default_l1_batch(L1BatchNumber(1)); + l1_batch.timestamp = 1; + let mut vm = VmTesterBuilder::new(HistoryEnabled) + .with_empty_in_memory_storage() + .with_execution_mode(TxExecutionMode::VerifyExecute) + .with_l1_batch_env(l1_batch) + .with_random_rich_accounts(1) + .build(); + + let l1_tx = get_l1_noop(); + vm.vm.push_transaction(l1_tx.clone()); + let res = vm.vm.execute(VmExecutionMode::OneTx); + assert!(!res.result.is_failed()); + + let mut current_l2_block = vm.vm.batch_env.first_l2_block; + + if let Some(timestamp) = override_timestamp { + current_l2_block.timestamp = timestamp; + } + if let Some(prev_block_hash) = override_prev_block_hash { + current_l2_block.prev_block_hash = prev_block_hash; + } + + if (None, None) == (override_timestamp, override_prev_block_hash) { + current_l2_block.max_virtual_blocks_to_create = 0; + } + + vm.vm.push_transaction(l1_tx); + let timestamp = Timestamp(vm.vm.state.local_state.timestamp); + set_manual_l2_block_info(&mut vm.vm, 1, current_l2_block, timestamp); + + let result = vm.vm.execute(VmExecutionMode::OneTx); + + if let Some(err) = expected_error { + assert_eq!(result.result, ExecutionResult::Halt { reason: err }); + } else { + assert_eq!(result.result, ExecutionResult::Success { output: vec![] }); + } +} + +#[test] +fn test_l2_block_same_l2_block() { + // This test aims to test the case when there are multiple transactions inside the same L2 block. 
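+    // Each case below re-runs `test_same_l2_block` with at most one field overridden.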
+ + // Case 1: Incorrect timestamp + test_same_l2_block( + Some(Halt::FailedToSetL2Block( + "The timestamp of the same L2 block must be same".to_string(), + )), + Some(0), + None, + ); + + // Case 2: Incorrect previous block hash + test_same_l2_block( + Some(Halt::FailedToSetL2Block( + "The previous hash of the same L2 block must be same".to_string(), + )), + None, + Some(H256::zero()), + ); + + // Case 3: Correct continuation of the same L2 block + test_same_l2_block(None, None, None); +} + +fn test_new_l2_block( + first_l2_block: L2BlockEnv, + overriden_second_block_number: Option, + overriden_second_block_timestamp: Option, + overriden_second_block_prev_block_hash: Option, + expected_error: Option, +) { + let mut l1_batch = default_l1_batch(L1BatchNumber(1)); + l1_batch.timestamp = 1; + l1_batch.first_l2_block = first_l2_block; + + let mut vm = VmTesterBuilder::new(HistoryEnabled) + .with_empty_in_memory_storage() + .with_l1_batch_env(l1_batch) + .with_execution_mode(TxExecutionMode::VerifyExecute) + .with_random_rich_accounts(1) + .build(); + + let l1_tx = get_l1_noop(); + + // Firstly we execute the first transaction + vm.vm.push_transaction(l1_tx.clone()); + vm.vm.execute(VmExecutionMode::OneTx); + + let mut second_l2_block = vm.vm.batch_env.first_l2_block; + second_l2_block.number += 1; + second_l2_block.timestamp += 1; + second_l2_block.prev_block_hash = vm.vm.bootloader_state.last_l2_block().get_hash(); + + if let Some(block_number) = overriden_second_block_number { + second_l2_block.number = block_number; + } + if let Some(timestamp) = overriden_second_block_timestamp { + second_l2_block.timestamp = timestamp; + } + if let Some(prev_block_hash) = overriden_second_block_prev_block_hash { + second_l2_block.prev_block_hash = prev_block_hash; + } + + vm.vm.bootloader_state.push_l2_block(second_l2_block); + + vm.vm.push_transaction(l1_tx); + + let result = vm.vm.execute(VmExecutionMode::OneTx); + if let Some(err) = expected_error { + assert_eq!(result.result, ExecutionResult::Halt { reason: err }); + } else { + assert_eq!(result.result, ExecutionResult::Success { output: vec![] }); + } +} + +#[test] +fn test_l2_block_new_l2_block() { + // This test is aimed to cover potential issue + + let correct_first_block = L2BlockEnv { + number: 1, + timestamp: 1, + prev_block_hash: MiniblockHasher::legacy_hash(MiniblockNumber(0)), + max_virtual_blocks_to_create: 1, + }; + + // Case 1: Block number increasing by more than 1 + test_new_l2_block( + correct_first_block, + Some(3), + None, + None, + Some(Halt::FailedToSetL2Block( + "Invalid new L2 block number".to_string(), + )), + ); + + // Case 2: Timestamp not increasing + test_new_l2_block( + correct_first_block, + None, + Some(1), + None, + Some(Halt::FailedToSetL2Block("The timestamp of the new L2 block must be greater than the timestamp of the previous L2 block".to_string())), + ); + + // Case 3: Incorrect previous block hash + test_new_l2_block( + correct_first_block, + None, + None, + Some(H256::zero()), + Some(Halt::FailedToSetL2Block( + "The current L2 block hash is incorrect".to_string(), + )), + ); + + // Case 4: Correct new block + test_new_l2_block(correct_first_block, None, None, None, None); +} + +#[allow(clippy::too_many_arguments)] +fn test_first_in_batch( + miniblock_timestamp: u64, + miniblock_number: u32, + pending_txs_hash: H256, + batch_timestamp: u64, + new_batch_timestamp: u64, + batch_number: u32, + proposed_block: L2BlockEnv, + expected_error: Option, +) { + let mut l1_batch = default_l1_batch(L1BatchNumber(1)); + 
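    // Pretend this is not the first batch: bump its number and use the provided timestamp.
+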
l1_batch.number += 1; + l1_batch.timestamp = new_batch_timestamp; + + let mut vm = VmTesterBuilder::new(HistoryEnabled) + .with_empty_in_memory_storage() + .with_l1_batch_env(l1_batch) + .with_execution_mode(TxExecutionMode::VerifyExecute) + .with_random_rich_accounts(1) + .build(); + let l1_tx = get_l1_noop(); + + // Setting the values provided. + let storage_ptr = vm.vm.state.storage.storage.get_ptr(); + let miniblock_info_slot = StorageKey::new( + AccountTreeId::new(SYSTEM_CONTEXT_ADDRESS), + SYSTEM_CONTEXT_CURRENT_L2_BLOCK_INFO_POSITION, + ); + let pending_txs_hash_slot = StorageKey::new( + AccountTreeId::new(SYSTEM_CONTEXT_ADDRESS), + SYSTEM_CONTEXT_CURRENT_TX_ROLLING_HASH_POSITION, + ); + let batch_info_slot = StorageKey::new( + AccountTreeId::new(SYSTEM_CONTEXT_ADDRESS), + SYSTEM_CONTEXT_BLOCK_INFO_POSITION, + ); + let prev_block_hash_position = get_l2_block_hash_key(miniblock_number - 1); + + storage_ptr.borrow_mut().set_value( + miniblock_info_slot, + u256_to_h256(pack_block_info( + miniblock_number as u64, + miniblock_timestamp, + )), + ); + storage_ptr + .borrow_mut() + .set_value(pending_txs_hash_slot, pending_txs_hash); + storage_ptr.borrow_mut().set_value( + batch_info_slot, + u256_to_h256(pack_block_info(batch_number as u64, batch_timestamp)), + ); + storage_ptr.borrow_mut().set_value( + prev_block_hash_position, + MiniblockHasher::legacy_hash(MiniblockNumber(miniblock_number - 1)), + ); + + // In order to skip checks from the Rust side of the VM, we firstly use some definitely correct L2 block info. + // And then override it with the user-provided value + + let last_l2_block = vm.vm.bootloader_state.last_l2_block(); + let new_l2_block = L2BlockEnv { + number: last_l2_block.number + 1, + timestamp: last_l2_block.timestamp + 1, + prev_block_hash: last_l2_block.get_hash(), + max_virtual_blocks_to_create: last_l2_block.max_virtual_blocks_to_create, + }; + + vm.vm.bootloader_state.push_l2_block(new_l2_block); + vm.vm.push_transaction(l1_tx); + let timestamp = Timestamp(vm.vm.state.local_state.timestamp); + set_manual_l2_block_info(&mut vm.vm, 0, proposed_block, timestamp); + + let result = vm.vm.execute(VmExecutionMode::OneTx); + if let Some(err) = expected_error { + assert_eq!(result.result, ExecutionResult::Halt { reason: err }); + } else { + assert_eq!(result.result, ExecutionResult::Success { output: vec![] }); + } +} + +#[test] +fn test_l2_block_first_in_batch() { + let prev_block_hash = MiniblockHasher::legacy_hash(MiniblockNumber(0)); + let prev_block_hash = MiniblockHasher::new(MiniblockNumber(1), 1, prev_block_hash) + .finalize(ProtocolVersionId::latest()); + test_first_in_batch( + 1, + 1, + H256::zero(), + 1, + 2, + 1, + L2BlockEnv { + number: 2, + timestamp: 2, + prev_block_hash, + max_virtual_blocks_to_create: 1, + }, + None, + ); + + let prev_block_hash = MiniblockHasher::legacy_hash(MiniblockNumber(0)); + let prev_block_hash = MiniblockHasher::new(MiniblockNumber(1), 8, prev_block_hash) + .finalize(ProtocolVersionId::latest()); + test_first_in_batch( + 8, + 1, + H256::zero(), + 5, + 12, + 1, + L2BlockEnv { + number: 2, + timestamp: 9, + prev_block_hash, + max_virtual_blocks_to_create: 1, + }, + Some(Halt::FailedToSetL2Block("The timestamp of the L2 block must be greater than or equal to the timestamp of the current batch".to_string())), + ); +} + +fn set_manual_l2_block_info( + vm: &mut Vm, + tx_number: usize, + block_info: L2BlockEnv, + timestamp: Timestamp, +) { + let fictive_miniblock_position = + TX_OPERATOR_L2_BLOCK_INFO_OFFSET + 
TX_OPERATOR_SLOTS_PER_L2_BLOCK_INFO * tx_number; + + vm.state.memory.populate_page( + BOOTLOADER_HEAP_PAGE as usize, + vec![ + (fictive_miniblock_position, block_info.number.into()), + (fictive_miniblock_position + 1, block_info.timestamp.into()), + ( + fictive_miniblock_position + 2, + h256_to_u256(block_info.prev_block_hash), + ), + ( + fictive_miniblock_position + 3, + block_info.max_virtual_blocks_to_create.into(), + ), + ], + timestamp, + ) +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tests/mod.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tests/mod.rs new file mode 100644 index 00000000000..95377232b3e --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/tests/mod.rs @@ -0,0 +1,22 @@ +mod bootloader; +mod default_aa; +// TODO - fix this test +// mod invalid_bytecode; +mod bytecode_publishing; +mod call_tracer; +mod circuits; +mod gas_limit; +mod get_used_contracts; +mod is_write_initial; +mod l1_tx_execution; +mod l2_blocks; +mod nonce_holder; +mod precompiles; +mod refunds; +mod require_eip712; +mod rollbacks; +mod simple_execution; +mod tester; +mod tracing_execution_error; +mod upgrade; +mod utils; diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tests/nonce_holder.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tests/nonce_holder.rs new file mode 100644 index 00000000000..44ba3e4e323 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/tests/nonce_holder.rs @@ -0,0 +1,188 @@ +use zksync_types::{Execute, Nonce}; + +use crate::{ + interface::{ + ExecutionResult, Halt, TxExecutionMode, TxRevertReason, VmExecutionMode, VmInterface, + VmRevertReason, + }, + vm_boojum_integration::{ + tests::{ + tester::{Account, VmTesterBuilder}, + utils::read_nonce_holder_tester, + }, + types::internals::TransactionData, + HistoryEnabled, + }, +}; + +pub enum NonceHolderTestMode { + SetValueUnderNonce, + IncreaseMinNonceBy5, + IncreaseMinNonceTooMuch, + LeaveNonceUnused, + IncreaseMinNonceBy1, + SwitchToArbitraryOrdering, +} + +impl From for u8 { + fn from(mode: NonceHolderTestMode) -> u8 { + match mode { + NonceHolderTestMode::SetValueUnderNonce => 0, + NonceHolderTestMode::IncreaseMinNonceBy5 => 1, + NonceHolderTestMode::IncreaseMinNonceTooMuch => 2, + NonceHolderTestMode::LeaveNonceUnused => 3, + NonceHolderTestMode::IncreaseMinNonceBy1 => 4, + NonceHolderTestMode::SwitchToArbitraryOrdering => 5, + } + } +} + +#[test] +fn test_nonce_holder() { + let mut account = Account::random(); + + let mut vm = VmTesterBuilder::new(HistoryEnabled) + .with_empty_in_memory_storage() + .with_execution_mode(TxExecutionMode::VerifyExecute) + .with_deployer() + .with_custom_contracts(vec![( + read_nonce_holder_tester().to_vec(), + account.address, + true, + )]) + .with_rich_accounts(vec![account.clone()]) + .build(); + + let mut run_nonce_test = |nonce: u32, + test_mode: NonceHolderTestMode, + error_message: Option, + comment: &'static str| { + // In this test we have to reset VM state after each test case. Because once bootloader failed during the validation of the transaction, + // it will fail again and again. At the same time we have to keep the same storage, because we want to keep the nonce holder contract state. + // The easiest way in terms of lifetimes is to reuse vm_builder to achieve it. 
+ vm.reset_state(true); + let mut transaction_data: TransactionData = account + .get_l2_tx_for_execute_with_nonce( + Execute { + contract_address: account.address, + calldata: vec![12], + value: Default::default(), + factory_deps: None, + }, + None, + Nonce(nonce), + ) + .into(); + + transaction_data.signature = vec![test_mode.into()]; + vm.vm.push_raw_transaction(transaction_data, 0, 0, true); + let result = vm.vm.execute(VmExecutionMode::OneTx); + + if let Some(msg) = error_message { + let expected_error = + TxRevertReason::Halt(Halt::ValidationFailed(VmRevertReason::General { + msg, + data: vec![], + })); + let ExecutionResult::Halt { reason } = result.result else { + panic!("Expected revert, got {:?}", result.result); + }; + assert_eq!( + reason.to_string(), + expected_error.to_string(), + "{}", + comment + ); + } else { + assert!(!result.result.is_failed(), "{}", comment); + } + }; + // Test 1: trying to set value under non sequential nonce value. + run_nonce_test( + 1u32, + NonceHolderTestMode::SetValueUnderNonce, + Some("Previous nonce has not been used".to_string()), + "Allowed to set value under non sequential value", + ); + + // Test 2: increase min nonce by 1 with sequential nonce ordering: + run_nonce_test( + 0u32, + NonceHolderTestMode::IncreaseMinNonceBy1, + None, + "Failed to increment nonce by 1 for sequential account", + ); + + // Test 3: correctly set value under nonce with sequential nonce ordering: + run_nonce_test( + 1u32, + NonceHolderTestMode::SetValueUnderNonce, + None, + "Failed to set value under nonce sequential value", + ); + + // Test 5: migrate to the arbitrary nonce ordering: + run_nonce_test( + 2u32, + NonceHolderTestMode::SwitchToArbitraryOrdering, + None, + "Failed to switch to arbitrary ordering", + ); + + // Test 6: increase min nonce by 5 + run_nonce_test( + 6u32, + NonceHolderTestMode::IncreaseMinNonceBy5, + None, + "Failed to increase min nonce by 5", + ); + + // Test 7: since the nonces in range [6,10] are no longer allowed, the + // tx with nonce 10 should not be allowed + run_nonce_test( + 10u32, + NonceHolderTestMode::IncreaseMinNonceBy5, + Some("Reusing the same nonce twice".to_string()), + "Allowed to reuse nonce below the minimal one", + ); + + // Test 8: we should be able to use nonce 13 + run_nonce_test( + 13u32, + NonceHolderTestMode::SetValueUnderNonce, + None, + "Did not allow to use unused nonce 10", + ); + + // Test 9: we should not be able to reuse nonce 13 + run_nonce_test( + 13u32, + NonceHolderTestMode::IncreaseMinNonceBy5, + Some("Reusing the same nonce twice".to_string()), + "Allowed to reuse the same nonce twice", + ); + + // Test 10: we should be able to simply use nonce 14, while bumping the minimal nonce by 5 + run_nonce_test( + 14u32, + NonceHolderTestMode::IncreaseMinNonceBy5, + None, + "Did not allow to use a bumped nonce", + ); + + // Test 11: Do not allow bumping nonce by too much + run_nonce_test( + 16u32, + NonceHolderTestMode::IncreaseMinNonceTooMuch, + Some("The value for incrementing the nonce is too high".to_string()), + "Allowed for incrementing min nonce too much", + ); + + // Test 12: Do not allow not setting a nonce as used + run_nonce_test( + 16u32, + NonceHolderTestMode::LeaveNonceUnused, + Some("The nonce was not set as used".to_string()), + "Allowed to leave nonce as unused", + ); +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tests/precompiles.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tests/precompiles.rs new file mode 100644 index 00000000000..516331d574f --- /dev/null 
+++ b/core/lib/multivm/src/versions/vm_boojum_integration/tests/precompiles.rs @@ -0,0 +1,136 @@ +use zk_evm_1_4_0::zk_evm_abstractions::precompiles::PrecompileAddress; +use zksync_types::{Address, Execute}; + +use crate::{ + interface::{TxExecutionMode, VmExecutionMode, VmInterface}, + vm_boojum_integration::{ + constants::BLOCK_GAS_LIMIT, + tests::{tester::VmTesterBuilder, utils::read_precompiles_contract}, + HistoryEnabled, + }, +}; + +#[test] +fn test_keccak() { + // Execute special transaction and check that at least 1000 keccak calls were made. + let contract = read_precompiles_contract(); + let address = Address::random(); + let mut vm = VmTesterBuilder::new(HistoryEnabled) + .with_empty_in_memory_storage() + .with_random_rich_accounts(1) + .with_deployer() + .with_gas_limit(BLOCK_GAS_LIMIT) + .with_execution_mode(TxExecutionMode::VerifyExecute) + .with_custom_contracts(vec![(contract, address, true)]) + .build(); + + // calldata for `doKeccak(1000)`. + let keccak1000_calldata = + "370f20ac00000000000000000000000000000000000000000000000000000000000003e8"; + + let account = &mut vm.rich_accounts[0]; + let tx = account.get_l2_tx_for_execute( + Execute { + contract_address: address, + calldata: hex::decode(keccak1000_calldata).unwrap(), + value: Default::default(), + factory_deps: None, + }, + None, + ); + vm.vm.push_transaction(tx); + let _ = vm.vm.inspect(Default::default(), VmExecutionMode::OneTx); + + let keccak_count = vm + .vm + .state + .precompiles_processor + .precompile_cycles_history + .inner() + .iter() + .filter(|(precompile, _)| precompile == &PrecompileAddress::Keccak256) + .count(); + + assert!(keccak_count >= 1000); +} + +#[test] +fn test_sha256() { + // Execute special transaction and check that at least 1000 sha256 calls were made. + let contract = read_precompiles_contract(); + let address = Address::random(); + let mut vm = VmTesterBuilder::new(HistoryEnabled) + .with_empty_in_memory_storage() + .with_random_rich_accounts(1) + .with_deployer() + .with_gas_limit(BLOCK_GAS_LIMIT) + .with_execution_mode(TxExecutionMode::VerifyExecute) + .with_custom_contracts(vec![(contract, address, true)]) + .build(); + + // calldata for `doSha256(1000)`. + let sha1000_calldata = + "5d0b4fb500000000000000000000000000000000000000000000000000000000000003e8"; + + let account = &mut vm.rich_accounts[0]; + let tx = account.get_l2_tx_for_execute( + Execute { + contract_address: address, + calldata: hex::decode(sha1000_calldata).unwrap(), + value: Default::default(), + factory_deps: None, + }, + None, + ); + vm.vm.push_transaction(tx); + let _ = vm.vm.inspect(Default::default(), VmExecutionMode::OneTx); + + let sha_count = vm + .vm + .state + .precompiles_processor + .precompile_cycles_history + .inner() + .iter() + .filter(|(precompile, _)| precompile == &PrecompileAddress::SHA256) + .count(); + + assert!(sha_count >= 1000); +} + +#[test] +fn test_ecrecover() { + // Execute simple transfer and check that exactly 1 ecrecover call was made (it's done during tx validation). 
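+    // Editorial aside (not part of the original change): the hex calldata in the
+    // tests above is a 4-byte function selector followed by one 32-byte big-endian
+    // uint argument; its low bytes encode 0x3e8 = 1000, i.e. `doKeccak(1000)` and
+    // `doSha256(1000)`. A quick illustrative check, independent of the VM:
+    let keccak1000_calldata =
+        "370f20ac00000000000000000000000000000000000000000000000000000000000003e8";
+    let arg = u64::from_str_radix(&keccak1000_calldata[keccak1000_calldata.len() - 16..], 16)
+        .unwrap();
+    debug_assert_eq!(arg, 1000);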
+ let mut vm = VmTesterBuilder::new(HistoryEnabled) + .with_empty_in_memory_storage() + .with_random_rich_accounts(1) + .with_deployer() + .with_gas_limit(BLOCK_GAS_LIMIT) + .with_execution_mode(TxExecutionMode::VerifyExecute) + .build(); + + let account = &mut vm.rich_accounts[0]; + let tx = account.get_l2_tx_for_execute( + Execute { + contract_address: account.address, + calldata: Vec::new(), + value: Default::default(), + factory_deps: None, + }, + None, + ); + vm.vm.push_transaction(tx); + let _ = vm.vm.inspect(Default::default(), VmExecutionMode::OneTx); + + let ecrecover_count = vm + .vm + .state + .precompiles_processor + .precompile_cycles_history + .inner() + .iter() + .filter(|(precompile, _)| precompile == &PrecompileAddress::Ecrecover) + .count(); + + assert_eq!(ecrecover_count, 1); +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tests/refunds.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tests/refunds.rs new file mode 100644 index 00000000000..521bd81f2ef --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/tests/refunds.rs @@ -0,0 +1,167 @@ +use crate::{ + interface::{TxExecutionMode, VmExecutionMode, VmInterface}, + vm_boojum_integration::{ + tests::{ + tester::{DeployContractsTx, TxType, VmTesterBuilder}, + utils::read_test_contract, + }, + types::internals::TransactionData, + HistoryEnabled, + }, +}; + +#[test] +fn test_predetermined_refunded_gas() { + // In this test, we compare the execution of the bootloader with the predefined + // refunded gas and without them + + let mut vm = VmTesterBuilder::new(HistoryEnabled) + .with_empty_in_memory_storage() + .with_execution_mode(TxExecutionMode::VerifyExecute) + .with_random_rich_accounts(1) + .build(); + let l1_batch = vm.vm.batch_env.clone(); + + let counter = read_test_contract(); + let account = &mut vm.rich_accounts[0]; + + let DeployContractsTx { + tx, + bytecode_hash: _, + address: _, + } = account.get_deploy_tx(&counter, None, TxType::L2); + vm.vm.push_transaction(tx.clone()); + let result = vm.vm.execute(VmExecutionMode::OneTx); + + assert!(!result.result.is_failed()); + + // If the refund provided by the operator or the final refund are the 0 + // there is no impact of the operator's refund at all and so this test does not + // make much sense. + assert!( + result.refunds.operator_suggested_refund > 0, + "The operator's refund is 0" + ); + assert!(result.refunds.gas_refunded > 0, "The final refund is 0"); + + let result_without_predefined_refunds = vm.vm.execute(VmExecutionMode::Batch); + let mut current_state_without_predefined_refunds = vm.vm.get_current_execution_state(); + assert!(!result_without_predefined_refunds.result.is_failed(),); + + // Here we want to provide the same refund from the operator and check that it's the correct one. + // We execute the whole block without refund tracer, because refund tracer will eventually override the provided refund. 
+ // But the overall result should be the same + + let mut vm = VmTesterBuilder::new(HistoryEnabled) + .with_empty_in_memory_storage() + .with_l1_batch_env(l1_batch.clone()) + .with_execution_mode(TxExecutionMode::VerifyExecute) + .with_rich_accounts(vec![account.clone()]) + .build(); + + let tx: TransactionData = tx.into(); + let block_gas_per_pubdata_byte = vm.vm.batch_env.block_gas_price_per_pubdata(); + // Overhead + let overhead = tx.overhead_gas(block_gas_per_pubdata_byte as u32); + vm.vm + .push_raw_transaction(tx.clone(), overhead, result.refunds.gas_refunded, true); + + let result_with_predefined_refunds = vm.vm.execute(VmExecutionMode::Batch); + let mut current_state_with_predefined_refunds = vm.vm.get_current_execution_state(); + + assert!(!result_with_predefined_refunds.result.is_failed()); + + // We need to sort these lists as those are flattened from HashMaps + current_state_with_predefined_refunds + .used_contract_hashes + .sort(); + current_state_without_predefined_refunds + .used_contract_hashes + .sort(); + + assert_eq!( + current_state_with_predefined_refunds.events, + current_state_without_predefined_refunds.events + ); + + assert_eq!( + current_state_with_predefined_refunds.user_l2_to_l1_logs, + current_state_without_predefined_refunds.user_l2_to_l1_logs + ); + + assert_eq!( + current_state_with_predefined_refunds.system_logs, + current_state_without_predefined_refunds.system_logs + ); + + assert_eq!( + current_state_with_predefined_refunds.storage_log_queries, + current_state_without_predefined_refunds.storage_log_queries + ); + assert_eq!( + current_state_with_predefined_refunds.used_contract_hashes, + current_state_without_predefined_refunds.used_contract_hashes + ); + + // In this test we put the different refund from the operator. + // We still can't use the refund tracer, because it will override the refund. + // But we can check that the logs and events have changed. 
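+    // Editorial aside (not part of the original change): fields such as
+    // `used_contract_hashes` are flattened from hash maps, so their order is
+    // unspecified; that is why both sides are sorted before the comparisons
+    // below. A minimal illustration of the normalization:
+    let (mut lhs, mut rhs) = (vec![3_u64, 1, 2], vec![2_u64, 3, 1]);
+    lhs.sort();
+    rhs.sort();
+    debug_assert_eq!(lhs, rhs, "equal as sets once sorted");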
+ let mut vm = VmTesterBuilder::new(HistoryEnabled) + .with_empty_in_memory_storage() + .with_l1_batch_env(l1_batch) + .with_execution_mode(TxExecutionMode::VerifyExecute) + .with_rich_accounts(vec![account.clone()]) + .build(); + + let changed_operator_suggested_refund = result.refunds.gas_refunded + 1000; + vm.vm + .push_raw_transaction(tx, overhead, changed_operator_suggested_refund, true); + let result = vm.vm.execute(VmExecutionMode::Batch); + let mut current_state_with_changed_predefined_refunds = vm.vm.get_current_execution_state(); + + assert!(!result.result.is_failed()); + current_state_with_changed_predefined_refunds + .used_contract_hashes + .sort(); + current_state_without_predefined_refunds + .used_contract_hashes + .sort(); + + assert_eq!( + current_state_with_changed_predefined_refunds.events.len(), + current_state_without_predefined_refunds.events.len() + ); + + assert_ne!( + current_state_with_changed_predefined_refunds.events, + current_state_without_predefined_refunds.events + ); + + assert_eq!( + current_state_with_changed_predefined_refunds.user_l2_to_l1_logs, + current_state_without_predefined_refunds.user_l2_to_l1_logs + ); + + assert_ne!( + current_state_with_changed_predefined_refunds.system_logs, + current_state_without_predefined_refunds.system_logs + ); + + assert_eq!( + current_state_with_changed_predefined_refunds + .storage_log_queries + .len(), + current_state_without_predefined_refunds + .storage_log_queries + .len() + ); + + assert_ne!( + current_state_with_changed_predefined_refunds.storage_log_queries, + current_state_without_predefined_refunds.storage_log_queries + ); + assert_eq!( + current_state_with_changed_predefined_refunds.used_contract_hashes, + current_state_without_predefined_refunds.used_contract_hashes + ); +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tests/require_eip712.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tests/require_eip712.rs new file mode 100644 index 00000000000..90c3206b24b --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/tests/require_eip712.rs @@ -0,0 +1,165 @@ +use std::convert::TryInto; + +use ethabi::Token; +use zksync_eth_signer::{raw_ethereum_tx::TransactionParameters, EthereumSigner}; +use zksync_system_constants::L2_ETH_TOKEN_ADDRESS; +use zksync_types::{ + fee::Fee, l2::L2Tx, transaction_request::TransactionRequest, + utils::storage_key_for_standard_token_balance, AccountTreeId, Address, Eip712Domain, Execute, + L2ChainId, Nonce, Transaction, U256, +}; + +use crate::{ + interface::{TxExecutionMode, VmExecutionMode, VmInterface}, + vm_boojum_integration::{ + tests::{ + tester::{Account, VmTester, VmTesterBuilder}, + utils::read_many_owners_custom_account_contract, + }, + HistoryDisabled, + }, +}; + +impl VmTester { + pub(crate) fn get_eth_balance(&mut self, address: Address) -> U256 { + let key = storage_key_for_standard_token_balance( + AccountTreeId::new(L2_ETH_TOKEN_ADDRESS), + &address, + ); + self.vm.state.storage.storage.read_from_storage(&key) + } +} + +// TODO refactor this test it use too much internal details of the VM +#[tokio::test] +/// This test deploys 'buggy' account abstraction code, and then tries accessing it both with legacy +/// and EIP712 transactions. +/// Currently we support both, but in the future, we should allow only EIP712 transactions to access the AA accounts. 
+async fn test_require_eip712() { + // Use 3 accounts: + // - private_address - EOA account, where we have the key + // - account_address - AA account, where the contract is deployed + // - beneficiary - an EOA account, where we'll try to transfer the tokens. + let account_abstraction = Account::random(); + let mut private_account = Account::random(); + let beneficiary = Account::random(); + + let (bytecode, contract) = read_many_owners_custom_account_contract(); + let mut vm = VmTesterBuilder::new(HistoryDisabled) + .with_empty_in_memory_storage() + .with_custom_contracts(vec![(bytecode, account_abstraction.address, true)]) + .with_execution_mode(TxExecutionMode::VerifyExecute) + .with_rich_accounts(vec![account_abstraction.clone(), private_account.clone()]) + .build(); + + assert_eq!(vm.get_eth_balance(beneficiary.address), U256::from(0)); + + let chain_id: u32 = 270; + + // First, let's set the owners of the AA account to the private_address. + // (so that messages signed by private_address, are authorized to act on behalf of the AA account). + let set_owners_function = contract.function("setOwners").unwrap(); + let encoded_input = set_owners_function + .encode_input(&[Token::Array(vec![Token::Address(private_account.address)])]) + .unwrap(); + + let tx = private_account.get_l2_tx_for_execute( + Execute { + contract_address: account_abstraction.address, + calldata: encoded_input, + value: Default::default(), + factory_deps: None, + }, + None, + ); + + vm.vm.push_transaction(tx); + let result = vm.vm.execute(VmExecutionMode::OneTx); + assert!(!result.result.is_failed()); + + let private_account_balance = vm.get_eth_balance(private_account.address); + + // And now let's do the transfer from the 'account abstraction' to 'beneficiary' (using 'legacy' transaction). + // Normally this would not work - unless the operator is malicious. + let aa_raw_tx = TransactionParameters { + nonce: U256::from(0), + to: Some(beneficiary.address), + gas: U256::from(100000000), + gas_price: Some(U256::from(10000000)), + value: U256::from(888000088), + data: vec![], + chain_id: 270, + transaction_type: None, + access_list: None, + max_fee_per_gas: U256::from(1000000000), + max_priority_fee_per_gas: U256::from(1000000000), + }; + + let aa_tx = private_account.sign_legacy_tx(aa_raw_tx).await; + let (tx_request, hash) = TransactionRequest::from_bytes(&aa_tx, L2ChainId::from(270)).unwrap(); + + let mut l2_tx: L2Tx = L2Tx::from_request(tx_request, 10000).unwrap(); + l2_tx.set_input(aa_tx, hash); + // Pretend that operator is malicious and sets the initiator to the AA account. + l2_tx.common_data.initiator_address = account_abstraction.address; + let transaction: Transaction = l2_tx.try_into().unwrap(); + + vm.vm.push_transaction(transaction); + let result = vm.vm.execute(VmExecutionMode::OneTx); + assert!(!result.result.is_failed()); + assert_eq!( + vm.get_eth_balance(beneficiary.address), + U256::from(888000088) + ); + // Make sure that the tokens were transferred from the AA account. 
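+    // Editorial aside (not part of the original change): the 888000088 wei moved
+    // by the legacy transfer above plus the 28374938 wei sent via EIP-712 below
+    // add up to the 916375026 wei asserted at the end of this test:
+    debug_assert_eq!(888_000_088_u64 + 28_374_938, 916_375_026);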
+ assert_eq!( + private_account_balance, + vm.get_eth_balance(private_account.address) + ); + + // // Now send the 'classic' EIP712 transaction + let tx_712 = L2Tx::new( + beneficiary.address, + vec![], + Nonce(1), + Fee { + gas_limit: U256::from(1000000000), + max_fee_per_gas: U256::from(1000000000), + max_priority_fee_per_gas: U256::from(1000000000), + gas_per_pubdata_limit: U256::from(1000000000), + }, + account_abstraction.address, + U256::from(28374938), + None, + Default::default(), + ); + + let transaction_request: TransactionRequest = tx_712.into(); + + let domain = Eip712Domain::new(L2ChainId::from(chain_id)); + let signature = private_account + .get_pk_signer() + .sign_typed_data(&domain, &transaction_request) + .await + .unwrap(); + let encoded_tx = transaction_request.get_signed_bytes(&signature, L2ChainId::from(chain_id)); + + let (aa_txn_request, aa_hash) = + TransactionRequest::from_bytes(&encoded_tx, L2ChainId::from(chain_id)).unwrap(); + + let mut l2_tx = L2Tx::from_request(aa_txn_request, 100000).unwrap(); + l2_tx.set_input(encoded_tx, aa_hash); + + let transaction: Transaction = l2_tx.try_into().unwrap(); + vm.vm.push_transaction(transaction); + vm.vm.execute(VmExecutionMode::OneTx); + + assert_eq!( + vm.get_eth_balance(beneficiary.address), + U256::from(916375026) + ); + assert_eq!( + private_account_balance, + vm.get_eth_balance(private_account.address) + ); +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tests/rollbacks.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tests/rollbacks.rs new file mode 100644 index 00000000000..3d3127f8428 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/tests/rollbacks.rs @@ -0,0 +1,263 @@ +use ethabi::Token; +use zksync_contracts::{get_loadnext_contract, test_contracts::LoadnextContractExecutionParams}; +use zksync_state::WriteStorage; +use zksync_types::{get_nonce_key, Execute, U256}; + +use crate::{ + interface::{ + dyn_tracers::vm_1_4_0::DynTracer, + tracer::{TracerExecutionStatus, TracerExecutionStopReason}, + TxExecutionMode, VmExecutionMode, VmInterface, VmInterfaceHistoryEnabled, + }, + vm_boojum_integration::{ + tests::{ + tester::{DeployContractsTx, TransactionTestInfo, TxModifier, TxType, VmTesterBuilder}, + utils::read_test_contract, + }, + types::internals::ZkSyncVmState, + BootloaderState, HistoryEnabled, HistoryMode, SimpleMemory, ToTracerPointer, VmTracer, + }, +}; + +#[test] +fn test_vm_rollbacks() { + let mut vm = VmTesterBuilder::new(HistoryEnabled) + .with_empty_in_memory_storage() + .with_execution_mode(TxExecutionMode::VerifyExecute) + .with_random_rich_accounts(1) + .build(); + + let mut account = vm.rich_accounts[0].clone(); + let counter = read_test_contract(); + let tx_0 = account.get_deploy_tx(&counter, None, TxType::L2).tx; + let tx_1 = account.get_deploy_tx(&counter, None, TxType::L2).tx; + let tx_2 = account.get_deploy_tx(&counter, None, TxType::L2).tx; + + let result_without_rollbacks = vm.execute_and_verify_txs(&vec![ + TransactionTestInfo::new_processed(tx_0.clone(), false), + TransactionTestInfo::new_processed(tx_1.clone(), false), + TransactionTestInfo::new_processed(tx_2.clone(), false), + ]); + + // reset vm + vm.reset_with_empty_storage(); + + let result_with_rollbacks = vm.execute_and_verify_txs(&vec![ + TransactionTestInfo::new_rejected(tx_0.clone(), TxModifier::WrongSignatureLength.into()), + TransactionTestInfo::new_rejected(tx_0.clone(), TxModifier::WrongMagicValue.into()), + TransactionTestInfo::new_rejected(tx_0.clone(), 
TxModifier::WrongSignature.into()), + // The correct nonce is 0, this tx will fail + TransactionTestInfo::new_rejected(tx_2.clone(), TxModifier::WrongNonce.into()), + // This tx will succeed + TransactionTestInfo::new_processed(tx_0.clone(), false), + // The correct nonce is 1, this tx will fail + TransactionTestInfo::new_rejected(tx_0.clone(), TxModifier::NonceReused.into()), + // The correct nonce is 1, this tx will fail + TransactionTestInfo::new_rejected(tx_2.clone(), TxModifier::WrongNonce.into()), + // This tx will succeed + TransactionTestInfo::new_processed(tx_1, false), + // The correct nonce is 2, this tx will fail + TransactionTestInfo::new_rejected(tx_0.clone(), TxModifier::NonceReused.into()), + // This tx will succeed + TransactionTestInfo::new_processed(tx_2.clone(), false), + // This tx will fail + TransactionTestInfo::new_rejected(tx_2, TxModifier::NonceReused.into()), + TransactionTestInfo::new_rejected(tx_0, TxModifier::NonceReused.into()), + ]); + + assert_eq!(result_without_rollbacks, result_with_rollbacks); +} + +#[test] +fn test_vm_loadnext_rollbacks() { + let mut vm = VmTesterBuilder::new(HistoryEnabled) + .with_empty_in_memory_storage() + .with_execution_mode(TxExecutionMode::VerifyExecute) + .with_random_rich_accounts(1) + .build(); + let mut account = vm.rich_accounts[0].clone(); + + let loadnext_contract = get_loadnext_contract(); + let loadnext_constructor_data = &[Token::Uint(U256::from(100))]; + let DeployContractsTx { + tx: loadnext_deploy_tx, + address, + .. + } = account.get_deploy_tx_with_factory_deps( + &loadnext_contract.bytecode, + Some(loadnext_constructor_data), + loadnext_contract.factory_deps.clone(), + TxType::L2, + ); + + let loadnext_tx_1 = account.get_l2_tx_for_execute( + Execute { + contract_address: address, + calldata: LoadnextContractExecutionParams { + reads: 100, + writes: 100, + events: 100, + hashes: 500, + recursive_calls: 10, + deploys: 60, + } + .to_bytes(), + value: Default::default(), + factory_deps: None, + }, + None, + ); + + let loadnext_tx_2 = account.get_l2_tx_for_execute( + Execute { + contract_address: address, + calldata: LoadnextContractExecutionParams { + reads: 100, + writes: 100, + events: 100, + hashes: 500, + recursive_calls: 10, + deploys: 60, + } + .to_bytes(), + value: Default::default(), + factory_deps: None, + }, + None, + ); + + let result_without_rollbacks = vm.execute_and_verify_txs(&vec![ + TransactionTestInfo::new_processed(loadnext_deploy_tx.clone(), false), + TransactionTestInfo::new_processed(loadnext_tx_1.clone(), false), + TransactionTestInfo::new_processed(loadnext_tx_2.clone(), false), + ]); + + // reset vm + vm.reset_with_empty_storage(); + + let result_with_rollbacks = vm.execute_and_verify_txs(&vec![ + TransactionTestInfo::new_processed(loadnext_deploy_tx.clone(), false), + TransactionTestInfo::new_processed(loadnext_tx_1.clone(), true), + TransactionTestInfo::new_rejected( + loadnext_deploy_tx.clone(), + TxModifier::NonceReused.into(), + ), + TransactionTestInfo::new_processed(loadnext_tx_1, false), + TransactionTestInfo::new_processed(loadnext_tx_2.clone(), true), + TransactionTestInfo::new_processed(loadnext_tx_2.clone(), true), + TransactionTestInfo::new_rejected(loadnext_deploy_tx, TxModifier::NonceReused.into()), + TransactionTestInfo::new_processed(loadnext_tx_2, false), + ]); + + assert_eq!(result_without_rollbacks, result_with_rollbacks); +} + +// Testing tracer that does not allow the recursion to go deeper than a certain limit +struct MaxRecursionTracer { + max_recursion_depth: usize, +} 
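+// Editorial aside (not part of the original change): the tracer below stops the
+// VM once the callstack depth exceeds `max_recursion_depth` (strictly greater,
+// so a depth equal to the limit is still allowed). A self-contained sketch of
+// that threshold rule:
+#[test]
+fn max_recursion_threshold_is_strict() {
+    let limit = 15_usize;
+    let should_stop = |depth: usize| depth > limit;
+    assert!(!should_stop(15));
+    assert!(should_stop(16));
+}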
+
+/// Tracer that stops the VM execution once the call stack becomes deeper than
+/// the configured recursion limit.
+impl<S: WriteStorage, H: HistoryMode> DynTracer<S, SimpleMemory<H>> for MaxRecursionTracer {}
+
+impl<S: WriteStorage, H: HistoryMode> VmTracer<S, H> for MaxRecursionTracer {
+    fn finish_cycle(
+        &mut self,
+        state: &mut ZkSyncVmState<S, H>,
+        _bootloader_state: &mut BootloaderState,
+    ) -> TracerExecutionStatus {
+        let current_depth = state.local_state.callstack.depth();
+
+        if current_depth > self.max_recursion_depth {
+            TracerExecutionStatus::Stop(TracerExecutionStopReason::Finish)
+        } else {
+            TracerExecutionStatus::Continue
+        }
+    }
+}
+
+#[test]
+fn test_layered_rollback() {
+    // This test checks that layered rollbacks work correctly, i.e. that a
+    // rollback by the operator always reverts all the changes.
+
+    let mut vm = VmTesterBuilder::new(HistoryEnabled)
+        .with_empty_in_memory_storage()
+        .with_execution_mode(TxExecutionMode::VerifyExecute)
+        .with_random_rich_accounts(1)
+        .build();
+
+    let account = &mut vm.rich_accounts[0];
+    let loadnext_contract = get_loadnext_contract().bytecode;
+
+    let DeployContractsTx {
+        tx: deploy_tx,
+        address,
+        ..
+    } = account.get_deploy_tx(
+        &loadnext_contract,
+        Some(&[Token::Uint(0.into())]),
+        TxType::L2,
+    );
+    vm.vm.push_transaction(deploy_tx);
+    let deployment_res = vm.vm.execute(VmExecutionMode::OneTx);
+    assert!(!deployment_res.result.is_failed(), "transaction failed");
+
+    let loadnext_transaction = account.get_loadnext_transaction(
+        address,
+        LoadnextContractExecutionParams {
+            writes: 1,
+            recursive_calls: 20,
+            ..LoadnextContractExecutionParams::empty()
+        },
+        TxType::L2,
+    );
+
+    let nonce_val = vm
+        .vm
+        .state
+        .storage
+        .storage
+        .read_from_storage(&get_nonce_key(&account.address));
+
+    vm.vm.make_snapshot();
+
+    vm.vm.push_transaction(loadnext_transaction.clone());
+    vm.vm.inspect(
+        MaxRecursionTracer {
+            max_recursion_depth: 15,
+        }
+        .into_tracer_pointer()
+        .into(),
+        VmExecutionMode::OneTx,
+    );
+
+    let nonce_val2 = vm
+        .vm
+        .state
+        .storage
+        .storage
+        .read_from_storage(&get_nonce_key(&account.address));
+
+    // The tracer stopped execution after validation passed, so the nonce has
+    // already been increased.
+    assert_eq!(nonce_val + U256::one(), nonce_val2, "nonce did not change");
+
+    vm.vm.rollback_to_the_latest_snapshot();
+
+    let nonce_val_after_rollback = vm
+        .vm
+        .state
+        .storage
+        .storage
+        .read_from_storage(&get_nonce_key(&account.address));
+
+    assert_eq!(
+        nonce_val, nonce_val_after_rollback,
+        "nonce changed after rollback"
+    );
+
+    vm.vm.push_transaction(loadnext_transaction);
+    let result = vm.vm.execute(VmExecutionMode::OneTx);
+    assert!(!result.result.is_failed(), "transaction must not fail");
+}
diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tests/simple_execution.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tests/simple_execution.rs
new file mode 100644
index 00000000000..fc94e2c7152
--- /dev/null
+++ b/core/lib/multivm/src/versions/vm_boojum_integration/tests/simple_execution.rs
@@ -0,0 +1,81 @@
+use crate::{
+    interface::{ExecutionResult, VmExecutionMode, VmInterface},
+    vm_boojum_integration::{
+        tests::tester::{TxType, VmTesterBuilder},
+        HistoryDisabled,
+    },
+};
+
+#[test]
+fn estimate_fee() {
+    let mut vm_tester = VmTesterBuilder::new(HistoryDisabled)
+        .with_empty_in_memory_storage()
+        .with_deployer()
+        .with_random_rich_accounts(1)
+        .build();
+
+    vm_tester.deploy_test_contract();
+    let account = &mut vm_tester.rich_accounts[0];
+
+    let tx = account.get_test_contract_transaction(
+        vm_tester.test_contract.unwrap(),
+ false, + Default::default(), + false, + TxType::L2, + ); + + vm_tester.vm.push_transaction(tx); + + let result = vm_tester.vm.execute(VmExecutionMode::OneTx); + assert!(matches!(result.result, ExecutionResult::Success { .. })); +} + +#[test] +fn simple_execute() { + let mut vm_tester = VmTesterBuilder::new(HistoryDisabled) + .with_empty_in_memory_storage() + .with_deployer() + .with_random_rich_accounts(1) + .build(); + + vm_tester.deploy_test_contract(); + + let account = &mut vm_tester.rich_accounts[0]; + + let tx1 = account.get_test_contract_transaction( + vm_tester.test_contract.unwrap(), + false, + Default::default(), + false, + TxType::L1 { serial_id: 1 }, + ); + + let tx2 = account.get_test_contract_transaction( + vm_tester.test_contract.unwrap(), + true, + Default::default(), + false, + TxType::L1 { serial_id: 1 }, + ); + + let tx3 = account.get_test_contract_transaction( + vm_tester.test_contract.unwrap(), + false, + Default::default(), + false, + TxType::L1 { serial_id: 1 }, + ); + let vm = &mut vm_tester.vm; + vm.push_transaction(tx1); + vm.push_transaction(tx2); + vm.push_transaction(tx3); + let tx = vm.execute(VmExecutionMode::OneTx); + assert!(matches!(tx.result, ExecutionResult::Success { .. })); + let tx = vm.execute(VmExecutionMode::OneTx); + assert!(matches!(tx.result, ExecutionResult::Revert { .. })); + let tx = vm.execute(VmExecutionMode::OneTx); + assert!(matches!(tx.result, ExecutionResult::Success { .. })); + let block_tip = vm.execute(VmExecutionMode::Batch); + assert!(matches!(block_tip.result, ExecutionResult::Success { .. })); +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tests/tester/inner_state.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tests/tester/inner_state.rs new file mode 100644 index 00000000000..24f31c5a939 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/tests/tester/inner_state.rs @@ -0,0 +1,130 @@ +use std::collections::HashMap; + +use zk_evm_1_4_0::{aux_structures::Timestamp, vm_state::VmLocalState}; +use zksync_state::WriteStorage; +use zksync_types::{StorageKey, StorageLogQuery, StorageValue, U256}; + +use crate::{ + vm_boojum_integration::{ + old_vm::{ + event_sink::InMemoryEventSink, + history_recorder::{AppDataFrameManagerWithHistory, HistoryRecorder}, + }, + HistoryEnabled, HistoryMode, SimpleMemory, Vm, + }, + HistoryMode as CommonHistoryMode, +}; + +#[derive(Clone, Debug)] +pub(crate) struct ModifiedKeysMap(HashMap); + +// We consider hashmaps to be equal even if there is a key +// that is not present in one but has zero value in another. +impl PartialEq for ModifiedKeysMap { + fn eq(&self, other: &Self) -> bool { + for (key, value) in self.0.iter() { + if *value != other.0.get(key).cloned().unwrap_or_default() { + return false; + } + } + for (key, value) in other.0.iter() { + if *value != self.0.get(key).cloned().unwrap_or_default() { + return false; + } + } + true + } +} + +#[derive(Clone, PartialEq, Debug)] +pub(crate) struct DecommitterTestInnerState { + /// There is no way to "truly" compare the storage pointer, + /// so we just compare the modified keys. This is reasonable enough. + pub(crate) modified_storage_keys: ModifiedKeysMap, + pub(crate) known_bytecodes: HistoryRecorder>, H>, + pub(crate) decommitted_code_hashes: HistoryRecorder, HistoryEnabled>, +} + +#[derive(Clone, PartialEq, Debug)] +pub(crate) struct StorageOracleInnerState { + /// There is no way to "truly" compare the storage pointer, + /// so we just compare the modified keys. This is reasonable enough. 
+ pub(crate) modified_storage_keys: ModifiedKeysMap, + + pub(crate) frames_stack: AppDataFrameManagerWithHistory, H>, + + pub(crate) pre_paid_changes: HistoryRecorder, H>, + pub(crate) paid_changes: HistoryRecorder, H>, + pub(crate) initial_values: HistoryRecorder, H>, + pub(crate) returned_refunds: HistoryRecorder, H>, +} + +#[derive(Clone, PartialEq, Debug)] +pub(crate) struct PrecompileProcessorTestInnerState { + pub(crate) timestamp_history: HistoryRecorder, H>, +} + +/// A struct that encapsulates the state of the VM's oracles +/// The state is to be used in tests. +#[derive(Clone, PartialEq, Debug)] +pub(crate) struct VmInstanceInnerState { + event_sink: InMemoryEventSink, + precompile_processor_state: PrecompileProcessorTestInnerState, + memory: SimpleMemory, + decommitter_state: DecommitterTestInnerState, + storage_oracle_state: StorageOracleInnerState, + local_state: VmLocalState, +} + +impl Vm { + // Dump inner state of the VM. + pub(crate) fn dump_inner_state(&self) -> VmInstanceInnerState { + let event_sink = self.state.event_sink.clone(); + let precompile_processor_state = PrecompileProcessorTestInnerState { + timestamp_history: self.state.precompiles_processor.timestamp_history.clone(), + }; + let memory = self.state.memory.clone(); + let decommitter_state = DecommitterTestInnerState { + modified_storage_keys: ModifiedKeysMap( + self.state + .decommittment_processor + .get_storage() + .borrow() + .modified_storage_keys() + .clone(), + ), + known_bytecodes: self.state.decommittment_processor.known_bytecodes.clone(), + decommitted_code_hashes: self + .state + .decommittment_processor + .get_decommitted_code_hashes_with_history() + .clone(), + }; + let storage_oracle_state = StorageOracleInnerState { + modified_storage_keys: ModifiedKeysMap( + self.state + .storage + .storage + .get_ptr() + .borrow() + .modified_storage_keys() + .clone(), + ), + frames_stack: self.state.storage.frames_stack.clone(), + pre_paid_changes: self.state.storage.pre_paid_changes.clone(), + paid_changes: self.state.storage.paid_changes.clone(), + initial_values: self.state.storage.initial_values.clone(), + returned_refunds: self.state.storage.returned_refunds.clone(), + }; + let local_state = self.state.local_state.clone(); + + VmInstanceInnerState { + event_sink, + precompile_processor_state, + memory, + decommitter_state, + storage_oracle_state, + local_state, + } + } +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tests/tester/mod.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tests/tester/mod.rs new file mode 100644 index 00000000000..dfe8905a7e0 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/tests/tester/mod.rs @@ -0,0 +1,7 @@ +pub(crate) use transaction_test_info::{ExpectedError, TransactionTestInfo, TxModifier}; +pub(crate) use vm_tester::{default_l1_batch, InMemoryStorageView, VmTester, VmTesterBuilder}; +pub(crate) use zksync_test_account::{Account, DeployContractsTx, TxType}; + +mod inner_state; +mod transaction_test_info; +mod vm_tester; diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tests/tester/transaction_test_info.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tests/tester/transaction_test_info.rs new file mode 100644 index 00000000000..4d6572fe78a --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/tests/tester/transaction_test_info.rs @@ -0,0 +1,217 @@ +use zksync_types::{ExecuteTransactionCommon, Transaction}; + +use crate::{ + interface::{ + CurrentExecutionState, ExecutionResult, 
Halt, TxRevertReason, VmExecutionMode, + VmExecutionResultAndLogs, VmInterface, VmInterfaceHistoryEnabled, VmRevertReason, + }, + vm_boojum_integration::{tests::tester::vm_tester::VmTester, HistoryEnabled}, +}; + +#[derive(Debug, Clone)] +pub(crate) enum TxModifier { + WrongSignatureLength, + WrongSignature, + WrongMagicValue, + WrongNonce, + NonceReused, +} + +#[derive(Debug, Clone)] +pub(crate) enum TxExpectedResult { + Rejected { error: ExpectedError }, + Processed { rollback: bool }, +} + +#[derive(Debug, Clone)] +pub(crate) struct TransactionTestInfo { + tx: Transaction, + result: TxExpectedResult, +} + +#[derive(Debug, Clone)] +pub(crate) struct ExpectedError { + pub(crate) revert_reason: TxRevertReason, + pub(crate) modifier: Option, +} + +impl From for ExpectedError { + fn from(value: TxModifier) -> Self { + let revert_reason = match value { + TxModifier::WrongSignatureLength => { + Halt::ValidationFailed(VmRevertReason::General { + msg: "Signature length is incorrect".to_string(), + data: vec![ + 8, 195, 121, 160, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 32, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 29, 83, 105, 103, 110, 97, 116, 117, 114, 101, 32, + 108, 101, 110, 103, 116, 104, 32, 105, 115, 32, 105, 110, 99, 111, 114, 114, 101, 99, + 116, 0, 0, 0, + ], + }) + } + TxModifier::WrongSignature => { + Halt::ValidationFailed(VmRevertReason::General { + msg: "Account validation returned invalid magic value. Most often this means that the signature is incorrect".to_string(), + data: vec![], + }) + } + TxModifier::WrongMagicValue => { + Halt::ValidationFailed(VmRevertReason::General { + msg: "v is neither 27 nor 28".to_string(), + data: vec![ + 8, 195, 121, 160, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 32, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 22, 118, 32, 105, 115, 32, 110, 101, 105, 116, 104, + 101, 114, 32, 50, 55, 32, 110, 111, 114, 32, 50, 56, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + ], + }) + + } + TxModifier::WrongNonce => { + Halt::ValidationFailed(VmRevertReason::General { + msg: "Incorrect nonce".to_string(), + data: vec![ + 8, 195, 121, 160, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 32, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 15, 73, 110, 99, 111, 114, 114, 101, 99, 116, 32, 110, + 111, 110, 99, 101, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + ], + }) + } + TxModifier::NonceReused => { + Halt::ValidationFailed(VmRevertReason::General { + msg: "Reusing the same nonce twice".to_string(), + data: vec![ + 8, 195, 121, 160, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 32, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 28, 82, 101, 117, 115, 105, 110, 103, 32, 116, 104, + 101, 32, 115, 97, 109, 101, 32, 110, 111, 110, 99, 101, 32, 116, 119, 105, 99, 101, 0, + 0, 0, 0, + ], + }) + } + }; + + ExpectedError { + revert_reason: TxRevertReason::Halt(revert_reason), + modifier: Some(value), + } + } +} + +impl TransactionTestInfo { + pub(crate) fn new_rejected( + mut transaction: Transaction, + expected_error: ExpectedError, + ) -> Self { + transaction.common_data = match transaction.common_data { + ExecuteTransactionCommon::L2(mut data) => { + if let Some(modifier) = 
&expected_error.modifier { + match modifier { + TxModifier::WrongSignatureLength => { + data.signature = data.signature[..data.signature.len() - 20].to_vec() + } + TxModifier::WrongSignature => data.signature = vec![27u8; 65], + TxModifier::WrongMagicValue => data.signature = vec![1u8; 65], + TxModifier::WrongNonce => { + // Do not need to modify signature for nonce error + } + TxModifier::NonceReused => { + // Do not need to modify signature for nonce error + } + } + } + ExecuteTransactionCommon::L2(data) + } + _ => panic!("L1 transactions are not supported"), + }; + + Self { + tx: transaction, + result: TxExpectedResult::Rejected { + error: expected_error, + }, + } + } + + pub(crate) fn new_processed(transaction: Transaction, should_be_rollbacked: bool) -> Self { + Self { + tx: transaction, + result: TxExpectedResult::Processed { + rollback: should_be_rollbacked, + }, + } + } + + fn verify_result(&self, result: &VmExecutionResultAndLogs) { + match &self.result { + TxExpectedResult::Rejected { error } => match &result.result { + ExecutionResult::Success { .. } => { + panic!("Transaction should be reverted {:?}", self.tx.nonce()) + } + ExecutionResult::Revert { output } => match &error.revert_reason { + TxRevertReason::TxReverted(expected) => { + assert_eq!(output, expected) + } + _ => { + panic!("Error types mismatch"); + } + }, + ExecutionResult::Halt { reason } => match &error.revert_reason { + TxRevertReason::Halt(expected) => { + assert_eq!(reason, expected) + } + _ => { + panic!("Error types mismatch"); + } + }, + }, + TxExpectedResult::Processed { .. } => { + assert!(!result.result.is_failed()); + } + } + } + + fn should_rollback(&self) -> bool { + match &self.result { + TxExpectedResult::Rejected { .. } => true, + TxExpectedResult::Processed { rollback } => *rollback, + } + } +} + +impl VmTester { + pub(crate) fn execute_and_verify_txs( + &mut self, + txs: &[TransactionTestInfo], + ) -> CurrentExecutionState { + for tx_test_info in txs { + self.execute_tx_and_verify(tx_test_info.clone()); + } + self.vm.execute(VmExecutionMode::Batch); + let mut state = self.vm.get_current_execution_state(); + state.used_contract_hashes.sort(); + state + } + + pub(crate) fn execute_tx_and_verify( + &mut self, + tx_test_info: TransactionTestInfo, + ) -> VmExecutionResultAndLogs { + let inner_state_before = self.vm.dump_inner_state(); + self.vm.make_snapshot(); + self.vm.push_transaction(tx_test_info.tx.clone()); + let result = self.vm.execute(VmExecutionMode::OneTx); + tx_test_info.verify_result(&result); + if tx_test_info.should_rollback() { + self.vm.rollback_to_the_latest_snapshot(); + let inner_state_after = self.vm.dump_inner_state(); + assert_eq!( + inner_state_before, inner_state_after, + "Inner state before and after rollback should be equal" + ); + } + result + } +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tests/tester/vm_tester.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tests/tester/vm_tester.rs new file mode 100644 index 00000000000..30bf9535eb8 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/tests/tester/vm_tester.rs @@ -0,0 +1,295 @@ +use std::marker::PhantomData; + +use zksync_contracts::BaseSystemContracts; +use zksync_state::{InMemoryStorage, StoragePtr, StorageView, WriteStorage}; +use zksync_types::{ + block::MiniblockHasher, + get_code_key, get_is_account_key, + helpers::unix_timestamp_ms, + utils::{deployed_address_create, storage_key_for_eth_balance}, + Address, L1BatchNumber, L2ChainId, MiniblockNumber, Nonce, 
ProtocolVersionId, U256, +}; +use zksync_utils::{bytecode::hash_bytecode, u256_to_h256}; + +use crate::{ + interface::{ + L1BatchEnv, L2Block, L2BlockEnv, SystemEnv, TxExecutionMode, VmExecutionMode, VmInterface, + }, + vm_boojum_integration::{ + constants::BLOCK_GAS_LIMIT, + tests::{ + tester::{Account, TxType}, + utils::read_test_contract, + }, + utils::l2_blocks::load_last_l2_block, + Vm, + }, + HistoryMode, +}; + +pub(crate) type InMemoryStorageView = StorageView; + +pub(crate) struct VmTester { + pub(crate) vm: Vm, + pub(crate) storage: StoragePtr, + pub(crate) fee_account: Address, + pub(crate) deployer: Option, + pub(crate) test_contract: Option
<Address>
, + pub(crate) rich_accounts: Vec, + pub(crate) custom_contracts: Vec, + _phantom: std::marker::PhantomData, +} + +impl VmTester { + pub(crate) fn deploy_test_contract(&mut self) { + let contract = read_test_contract(); + let tx = self + .deployer + .as_mut() + .expect("You have to initialize builder with deployer") + .get_deploy_tx(&contract, None, TxType::L2) + .tx; + let nonce = tx.nonce().unwrap().0.into(); + self.vm.push_transaction(tx); + self.vm.execute(VmExecutionMode::OneTx); + let deployed_address = + deployed_address_create(self.deployer.as_ref().unwrap().address, nonce); + self.test_contract = Some(deployed_address); + } + + pub(crate) fn reset_with_empty_storage(&mut self) { + self.storage = StorageView::new(get_empty_storage()).to_rc_ptr(); + self.reset_state(false); + } + + /// Reset the state of the VM to the initial state. + /// If `use_latest_l2_block` is true, then the VM will use the latest L2 block from storage, + /// otherwise it will use the first L2 block of l1 batch env + pub(crate) fn reset_state(&mut self, use_latest_l2_block: bool) { + for account in self.rich_accounts.iter_mut() { + account.nonce = Nonce(0); + make_account_rich(self.storage.clone(), account); + } + if let Some(deployer) = &self.deployer { + make_account_rich(self.storage.clone(), deployer); + } + + if !self.custom_contracts.is_empty() { + println!("Inserting custom contracts is not yet supported") + // insert_contracts(&mut self.storage, &self.custom_contracts); + } + + let mut l1_batch = self.vm.batch_env.clone(); + if use_latest_l2_block { + let last_l2_block = load_last_l2_block(self.storage.clone()).unwrap_or(L2Block { + number: 0, + timestamp: 0, + hash: MiniblockHasher::legacy_hash(MiniblockNumber(0)), + }); + l1_batch.first_l2_block = L2BlockEnv { + number: last_l2_block.number + 1, + timestamp: std::cmp::max(last_l2_block.timestamp + 1, l1_batch.timestamp), + prev_block_hash: last_l2_block.hash, + max_virtual_blocks_to_create: 1, + }; + } + + let vm = Vm::new(l1_batch, self.vm.system_env.clone(), self.storage.clone()); + + if self.test_contract.is_some() { + self.deploy_test_contract(); + } + + self.vm = vm; + } +} + +pub(crate) type ContractsToDeploy = (Vec, Address, bool); + +pub(crate) struct VmTesterBuilder { + storage: Option, + l1_batch_env: Option, + system_env: SystemEnv, + deployer: Option, + rich_accounts: Vec, + custom_contracts: Vec, + _phantom: PhantomData, +} + +impl Clone for VmTesterBuilder { + fn clone(&self) -> Self { + Self { + storage: None, + l1_batch_env: self.l1_batch_env.clone(), + system_env: self.system_env.clone(), + deployer: self.deployer.clone(), + rich_accounts: self.rich_accounts.clone(), + custom_contracts: self.custom_contracts.clone(), + _phantom: PhantomData, + } + } +} + +#[allow(dead_code)] +impl VmTesterBuilder { + pub(crate) fn new(_: H) -> Self { + Self { + storage: None, + l1_batch_env: None, + system_env: SystemEnv { + zk_porter_available: false, + version: ProtocolVersionId::latest(), + base_system_smart_contracts: BaseSystemContracts::playground(), + gas_limit: BLOCK_GAS_LIMIT, + execution_mode: TxExecutionMode::VerifyExecute, + default_validation_computational_gas_limit: BLOCK_GAS_LIMIT, + chain_id: L2ChainId::from(270), + }, + deployer: None, + rich_accounts: vec![], + custom_contracts: vec![], + _phantom: PhantomData, + } + } + + pub(crate) fn with_l1_batch_env(mut self, l1_batch_env: L1BatchEnv) -> Self { + self.l1_batch_env = Some(l1_batch_env); + self + } + + pub(crate) fn with_system_env(mut self, system_env: SystemEnv) -> Self { + 
self.system_env = system_env; + self + } + + pub(crate) fn with_storage(mut self, storage: InMemoryStorage) -> Self { + self.storage = Some(storage); + self + } + + pub(crate) fn with_base_system_smart_contracts( + mut self, + base_system_smart_contracts: BaseSystemContracts, + ) -> Self { + self.system_env.base_system_smart_contracts = base_system_smart_contracts; + self + } + + pub(crate) fn with_gas_limit(mut self, gas_limit: u32) -> Self { + self.system_env.gas_limit = gas_limit; + self + } + + pub(crate) fn with_execution_mode(mut self, execution_mode: TxExecutionMode) -> Self { + self.system_env.execution_mode = execution_mode; + self + } + + pub(crate) fn with_empty_in_memory_storage(mut self) -> Self { + self.storage = Some(get_empty_storage()); + self + } + + pub(crate) fn with_random_rich_accounts(mut self, number: u32) -> Self { + for _ in 0..number { + let account = Account::random(); + self.rich_accounts.push(account); + } + self + } + + pub(crate) fn with_rich_accounts(mut self, accounts: Vec) -> Self { + self.rich_accounts.extend(accounts); + self + } + + pub(crate) fn with_deployer(mut self) -> Self { + let deployer = Account::random(); + self.deployer = Some(deployer); + self + } + + pub(crate) fn with_custom_contracts(mut self, contracts: Vec) -> Self { + self.custom_contracts = contracts; + self + } + + pub(crate) fn build(self) -> VmTester { + let l1_batch_env = self + .l1_batch_env + .unwrap_or_else(|| default_l1_batch(L1BatchNumber(1))); + + let mut raw_storage = self.storage.unwrap_or_else(get_empty_storage); + insert_contracts(&mut raw_storage, &self.custom_contracts); + let storage_ptr = StorageView::new(raw_storage).to_rc_ptr(); + for account in self.rich_accounts.iter() { + make_account_rich(storage_ptr.clone(), account); + } + if let Some(deployer) = &self.deployer { + make_account_rich(storage_ptr.clone(), deployer); + } + let fee_account = l1_batch_env.fee_account; + + let vm = Vm::new(l1_batch_env, self.system_env, storage_ptr.clone()); + + VmTester { + vm, + storage: storage_ptr, + fee_account, + deployer: self.deployer, + test_contract: None, + rich_accounts: self.rich_accounts.clone(), + custom_contracts: self.custom_contracts.clone(), + _phantom: PhantomData, + } + } +} + +pub(crate) fn default_l1_batch(number: L1BatchNumber) -> L1BatchEnv { + let timestamp = unix_timestamp_ms(); + L1BatchEnv { + previous_batch_hash: None, + number, + timestamp, + l1_gas_price: 50_000_000_000, // 50 gwei + fair_l2_gas_price: 250_000_000, // 0.25 gwei + fee_account: Address::random(), + enforced_base_fee: None, + first_l2_block: L2BlockEnv { + number: 1, + timestamp, + prev_block_hash: MiniblockHasher::legacy_hash(MiniblockNumber(0)), + max_virtual_blocks_to_create: 100, + }, + } +} + +pub(crate) fn make_account_rich(storage: StoragePtr, account: &Account) { + let key = storage_key_for_eth_balance(&account.address); + storage + .as_ref() + .borrow_mut() + .set_value(key, u256_to_h256(U256::from(10u64.pow(19)))); +} + +pub(crate) fn get_empty_storage() -> InMemoryStorage { + InMemoryStorage::with_system_contracts(hash_bytecode) +} + +// Inserts the contracts into the test environment, bypassing the +// deployer system contract. Besides the reference to storage +// it accepts a `contracts` tuple of information about the contract +// and whether or not it is an account. 
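+// (Editorial aside, not part of the original change: `ContractsToDeploy` is the
+// `(bytecode, address, is_account)` tuple declared earlier, so a call looks like
+//
+//     insert_contracts(&mut storage, &[(bytecode, Address::random(), false)]);
+//
+// with `is_account = true` additionally marking the address as an account.)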
+fn insert_contracts(raw_storage: &mut InMemoryStorage, contracts: &[ContractsToDeploy]) { + for (contract, address, is_account) in contracts { + let deployer_code_key = get_code_key(address); + raw_storage.set_value(deployer_code_key, hash_bytecode(contract)); + + if *is_account { + let is_account_key = get_is_account_key(address); + raw_storage.set_value(is_account_key, u256_to_h256(1_u32.into())); + } + + raw_storage.store_factory_dep(hash_bytecode(contract), contract.clone()); + } +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tests/tracing_execution_error.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tests/tracing_execution_error.rs new file mode 100644 index 00000000000..8c538dcf9bf --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/tests/tracing_execution_error.rs @@ -0,0 +1,54 @@ +use zksync_types::{Execute, H160}; + +use crate::{ + interface::{TxExecutionMode, TxRevertReason, VmRevertReason}, + vm_boojum_integration::{ + tests::{ + tester::{ExpectedError, TransactionTestInfo, VmTesterBuilder}, + utils::{get_execute_error_calldata, read_error_contract, BASE_SYSTEM_CONTRACTS}, + }, + HistoryEnabled, + }, +}; + +#[test] +fn test_tracing_of_execution_errors() { + let contract_address = H160::random(); + let mut vm = VmTesterBuilder::new(HistoryEnabled) + .with_empty_in_memory_storage() + .with_base_system_smart_contracts(BASE_SYSTEM_CONTRACTS.clone()) + .with_custom_contracts(vec![(read_error_contract(), contract_address, false)]) + .with_execution_mode(TxExecutionMode::VerifyExecute) + .with_deployer() + .with_random_rich_accounts(1) + .build(); + + let account = &mut vm.rich_accounts[0]; + + let tx = account.get_l2_tx_for_execute( + Execute { + contract_address, + calldata: get_execute_error_calldata(), + value: Default::default(), + factory_deps: Some(vec![]), + }, + None, + ); + + vm.execute_tx_and_verify(TransactionTestInfo::new_rejected( + tx, + ExpectedError { + revert_reason: TxRevertReason::TxReverted(VmRevertReason::General { + msg: "short".to_string(), + data: vec![ + 8, 195, 121, 160, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 115, 104, 111, 114, 116, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, + ], + }), + modifier: None, + }, + )); +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tests/upgrade.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tests/upgrade.rs new file mode 100644 index 00000000000..4442d7c4082 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/tests/upgrade.rs @@ -0,0 +1,362 @@ +use zk_evm_1_4_0::aux_structures::Timestamp; +use zksync_contracts::{deployer_contract, load_contract, load_sys_contract, read_bytecode}; +use zksync_state::WriteStorage; +use zksync_test_account::TxType; +use zksync_types::{ + ethabi::{Contract, Token}, + get_code_key, get_known_code_key, + protocol_version::ProtocolUpgradeTxCommonData, + Address, Execute, ExecuteTransactionCommon, Transaction, COMPLEX_UPGRADER_ADDRESS, + CONTRACT_DEPLOYER_ADDRESS, CONTRACT_FORCE_DEPLOYER_ADDRESS, H160, H256, + REQUIRED_L1_TO_L2_GAS_PER_PUBDATA_BYTE, U256, +}; +use zksync_utils::{bytecode::hash_bytecode, bytes_to_be_words, h256_to_u256, u256_to_h256}; + +use super::utils::read_test_contract; +use crate::{ + interface::{ + ExecutionResult, Halt, TxExecutionMode, VmExecutionMode, VmInterface, + 
VmInterfaceHistoryEnabled, + }, + vm_boojum_integration::{ + tests::{tester::VmTesterBuilder, utils::verify_required_storage}, + HistoryEnabled, + }, +}; + +/// In this test we ensure that the requirements for protocol upgrade transactions are enforced by the bootloader: +/// - This transaction must be the only one in block +/// - If present, this transaction must be the first one in block +#[test] +fn test_protocol_upgrade_is_first() { + let mut vm = VmTesterBuilder::new(HistoryEnabled) + .with_empty_in_memory_storage() + .with_execution_mode(TxExecutionMode::VerifyExecute) + .with_random_rich_accounts(1) + .build(); + + let bytecode_hash = hash_bytecode(&read_test_contract()); + vm.vm + .storage + .borrow_mut() + .set_value(get_known_code_key(&bytecode_hash), u256_to_h256(1.into())); + + // Here we just use some random transaction of protocol upgrade type: + let protocol_upgrade_transaction = get_forced_deploy_tx(&[ForceDeployment { + // The bytecode hash to put on an address + bytecode_hash, + // The address on which to deploy the bytecodehash to + address: H160::random(), + // Whether to run the constructor on the force deployment + call_constructor: false, + // The value with which to initialize a contract + value: U256::zero(), + // The constructor calldata + input: vec![], + }]); + + // Another random upgrade transaction + let another_protocol_upgrade_transaction = get_forced_deploy_tx(&[ForceDeployment { + // The bytecode hash to put on an address + bytecode_hash, + // The address on which to deploy the bytecodehash to + address: H160::random(), + // Whether to run the constructor on the force deployment + call_constructor: false, + // The value with which to initialize a contract + value: U256::zero(), + // The constructor calldata + input: vec![], + }]); + + let normal_l1_transaction = vm.rich_accounts[0] + .get_deploy_tx(&read_test_contract(), None, TxType::L1 { serial_id: 0 }) + .tx; + + let expected_error = + Halt::UnexpectedVMBehavior("Assertion error: Protocol upgrade tx not first".to_string()); + + vm.vm.make_snapshot(); + // Test 1: there must be only one system transaction in block + vm.vm.push_transaction(protocol_upgrade_transaction.clone()); + vm.vm.push_transaction(normal_l1_transaction.clone()); + vm.vm.push_transaction(another_protocol_upgrade_transaction); + + vm.vm.execute(VmExecutionMode::OneTx); + vm.vm.execute(VmExecutionMode::OneTx); + let result = vm.vm.execute(VmExecutionMode::OneTx); + assert_eq!( + result.result, + ExecutionResult::Halt { + reason: expected_error.clone() + } + ); + + // Test 2: the protocol upgrade tx must be the first one in block + vm.vm.rollback_to_the_latest_snapshot(); + vm.vm.make_snapshot(); + vm.vm.push_transaction(normal_l1_transaction.clone()); + vm.vm.push_transaction(protocol_upgrade_transaction.clone()); + + vm.vm.execute(VmExecutionMode::OneTx); + let result = vm.vm.execute(VmExecutionMode::OneTx); + assert_eq!( + result.result, + ExecutionResult::Halt { + reason: expected_error + } + ); + + vm.vm.rollback_to_the_latest_snapshot(); + vm.vm.make_snapshot(); + vm.vm.push_transaction(protocol_upgrade_transaction); + vm.vm.push_transaction(normal_l1_transaction); + + vm.vm.execute(VmExecutionMode::OneTx); + let result = vm.vm.execute(VmExecutionMode::OneTx); + assert!(!result.result.is_failed()); +} + +/// In this test we try to test how force deployments could be done via protocol upgrade transactions. 
+#[test] +fn test_force_deploy_upgrade() { + let mut vm = VmTesterBuilder::new(HistoryEnabled) + .with_empty_in_memory_storage() + .with_execution_mode(TxExecutionMode::VerifyExecute) + .with_random_rich_accounts(1) + .build(); + + let storage_view = vm.storage.clone(); + let bytecode_hash = hash_bytecode(&read_test_contract()); + + let known_code_key = get_known_code_key(&bytecode_hash); + // It is generally expected that all the keys will be set as known prior to the protocol upgrade. + storage_view + .borrow_mut() + .set_value(known_code_key, u256_to_h256(1.into())); + drop(storage_view); + + let address_to_deploy = H160::random(); + // Here we just use some random transaction of protocol upgrade type: + let transaction = get_forced_deploy_tx(&[ForceDeployment { + // The bytecode hash to put on an address + bytecode_hash, + // The address on which to deploy the bytecodehash to + address: address_to_deploy, + // Whether to run the constructor on the force deployment + call_constructor: false, + // The value with which to initialize a contract + value: U256::zero(), + // The constructor calldata + input: vec![], + }]); + + vm.vm.push_transaction(transaction); + + let result = vm.vm.execute(VmExecutionMode::OneTx); + assert!( + !result.result.is_failed(), + "The force upgrade was not successful" + ); + + let expected_slots = vec![(bytecode_hash, get_code_key(&address_to_deploy))]; + + // Verify that the bytecode has been set correctly + verify_required_storage(&vm.vm.state, expected_slots); +} + +/// Here we show how the work with the complex upgrader could be done +#[test] +fn test_complex_upgrader() { + let mut vm = VmTesterBuilder::new(HistoryEnabled) + .with_empty_in_memory_storage() + .with_execution_mode(TxExecutionMode::VerifyExecute) + .with_random_rich_accounts(1) + .build(); + + let storage_view = vm.storage.clone(); + + let bytecode_hash = hash_bytecode(&read_complex_upgrade()); + let msg_sender_test_hash = hash_bytecode(&read_msg_sender_test()); + + // Let's assume that the bytecode for the implementation of the complex upgrade + // is already deployed in some address in userspace + let upgrade_impl = H160::random(); + let account_code_key = get_code_key(&upgrade_impl); + + storage_view + .borrow_mut() + .set_value(get_known_code_key(&bytecode_hash), u256_to_h256(1.into())); + storage_view.borrow_mut().set_value( + get_known_code_key(&msg_sender_test_hash), + u256_to_h256(1.into()), + ); + storage_view + .borrow_mut() + .set_value(account_code_key, bytecode_hash); + drop(storage_view); + + vm.vm.state.decommittment_processor.populate( + vec![ + ( + h256_to_u256(bytecode_hash), + bytes_to_be_words(read_complex_upgrade()), + ), + ( + h256_to_u256(msg_sender_test_hash), + bytes_to_be_words(read_msg_sender_test()), + ), + ], + Timestamp(0), + ); + + let address_to_deploy1 = H160::random(); + let address_to_deploy2 = H160::random(); + + let transaction = get_complex_upgrade_tx( + upgrade_impl, + address_to_deploy1, + address_to_deploy2, + bytecode_hash, + ); + + vm.vm.push_transaction(transaction); + let result = vm.vm.execute(VmExecutionMode::OneTx); + assert!( + !result.result.is_failed(), + "The force upgrade was not successful" + ); + + let expected_slots = vec![ + (bytecode_hash, get_code_key(&address_to_deploy1)), + (bytecode_hash, get_code_key(&address_to_deploy2)), + ]; + + // Verify that the bytecode has been set correctly + verify_required_storage(&vm.vm.state, expected_slots); +} + +#[derive(Debug, Clone)] +struct ForceDeployment { + // The bytecode hash to put on an 
address
+    bytecode_hash: H256,
+    // The address to deploy the bytecode hash to
+    address: Address,
+    // Whether to run the constructor on the force deployment
+    call_constructor: bool,
+    // The value with which to initialize a contract
+    value: U256,
+    // The constructor calldata
+    input: Vec<u8>,
+}
+
+fn get_forced_deploy_tx(deployment: &[ForceDeployment]) -> Transaction {
+    let deployer = deployer_contract();
+    let contract_function = deployer.function("forceDeployOnAddresses").unwrap();
+
+    let encoded_deployments: Vec<_> = deployment
+        .iter()
+        .map(|deployment| {
+            Token::Tuple(vec![
+                Token::FixedBytes(deployment.bytecode_hash.as_bytes().to_vec()),
+                Token::Address(deployment.address),
+                Token::Bool(deployment.call_constructor),
+                Token::Uint(deployment.value),
+                Token::Bytes(deployment.input.clone()),
+            ])
+        })
+        .collect();
+
+    let params = [Token::Array(encoded_deployments)];
+
+    let calldata = contract_function
+        .encode_input(&params)
+        .expect("failed to encode parameters");
+
+    let execute = Execute {
+        contract_address: CONTRACT_DEPLOYER_ADDRESS,
+        calldata,
+        factory_deps: None,
+        value: U256::zero(),
+    };
+
+    Transaction {
+        common_data: ExecuteTransactionCommon::ProtocolUpgrade(ProtocolUpgradeTxCommonData {
+            sender: CONTRACT_FORCE_DEPLOYER_ADDRESS,
+            gas_limit: U256::from(200_000_000u32),
+            gas_per_pubdata_limit: REQUIRED_L1_TO_L2_GAS_PER_PUBDATA_BYTE.into(),
+            ..Default::default()
+        }),
+        execute,
+        received_timestamp_ms: 0,
+        raw_bytes: None,
+    }
+}
+
+// Returns the transaction that performs a complex protocol upgrade.
+// The first param is the address of the implementation of the complex upgrade
+// in user-space, while the next 3 params are the parameters of the implementation itself.
+// For an explanation of the parameters, please refer to:
+// etc/contracts-test-data/complex-upgrade/complex-upgrade.sol
+fn get_complex_upgrade_tx(
+    implementation_address: Address,
+    address1: Address,
+    address2: Address,
+    bytecode_hash: H256,
+) -> Transaction {
+    let impl_contract = get_complex_upgrade_abi();
+    let impl_function = impl_contract.function("someComplexUpgrade").unwrap();
+    let impl_calldata = impl_function
+        .encode_input(&[
+            Token::Address(address1),
+            Token::Address(address2),
+            Token::FixedBytes(bytecode_hash.as_bytes().to_vec()),
+        ])
+        .unwrap();
+
+    let complex_upgrader = get_complex_upgrader_abi();
+    let upgrade_function = complex_upgrader.function("upgrade").unwrap();
+    let complex_upgrader_calldata = upgrade_function
+        .encode_input(&[
+            Token::Address(implementation_address),
+            Token::Bytes(impl_calldata),
+        ])
+        .unwrap();
+
+    let execute = Execute {
+        contract_address: COMPLEX_UPGRADER_ADDRESS,
+        calldata: complex_upgrader_calldata,
+        factory_deps: None,
+        value: U256::zero(),
+    };
+
+    Transaction {
+        common_data: ExecuteTransactionCommon::ProtocolUpgrade(ProtocolUpgradeTxCommonData {
+            sender: CONTRACT_FORCE_DEPLOYER_ADDRESS,
+            gas_limit: U256::from(200_000_000u32),
+            gas_per_pubdata_limit: REQUIRED_L1_TO_L2_GAS_PER_PUBDATA_BYTE.into(),
+            ..Default::default()
+        }),
+        execute,
+        received_timestamp_ms: 0,
+        raw_bytes: None,
+    }
+}
+
+fn read_complex_upgrade() -> Vec<u8> {
+    read_bytecode("etc/contracts-test-data/artifacts-zk/contracts/complex-upgrade/complex-upgrade.sol/ComplexUpgrade.json")
+}
+
+fn read_msg_sender_test() -> Vec<u8> {
+    read_bytecode("etc/contracts-test-data/artifacts-zk/contracts/complex-upgrade/msg-sender.sol/MsgSenderTest.json")
+}
+
+fn get_complex_upgrade_abi() -> Contract {
+    load_contract(
"etc/contracts-test-data/artifacts-zk/contracts/complex-upgrade/complex-upgrade.sol/ComplexUpgrade.json" + ) +} + +fn get_complex_upgrader_abi() -> Contract { + load_sys_contract("ComplexUpgrader") +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tests/utils.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tests/utils.rs new file mode 100644 index 00000000000..2dd8e2350eb --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/tests/utils.rs @@ -0,0 +1,111 @@ +use ethabi::Contract; +use once_cell::sync::Lazy; +use zksync_contracts::{ + load_contract, read_bytecode, read_zbin_bytecode, BaseSystemContracts, SystemContractCode, +}; +use zksync_state::{StoragePtr, WriteStorage}; +use zksync_types::{ + utils::storage_key_for_standard_token_balance, AccountTreeId, Address, StorageKey, H256, U256, +}; +use zksync_utils::{bytecode::hash_bytecode, bytes_to_be_words, h256_to_u256, u256_to_h256}; + +use crate::vm_boojum_integration::{ + tests::tester::InMemoryStorageView, types::internals::ZkSyncVmState, HistoryMode, +}; + +pub(crate) static BASE_SYSTEM_CONTRACTS: Lazy = + Lazy::new(BaseSystemContracts::load_from_disk); + +// Probably make it a part of vm tester +pub(crate) fn verify_required_storage( + state: &ZkSyncVmState, + required_values: Vec<(H256, StorageKey)>, +) { + for (required_value, key) in required_values { + let current_value = state.storage.storage.read_from_storage(&key); + + assert_eq!( + u256_to_h256(current_value), + required_value, + "Invalid value at key {key:?}" + ); + } +} + +pub(crate) fn verify_required_memory( + state: &ZkSyncVmState, + required_values: Vec<(U256, u32, u32)>, +) { + for (required_value, memory_page, cell) in required_values { + let current_value = state + .memory + .read_slot(memory_page as usize, cell as usize) + .value; + assert_eq!(current_value, required_value); + } +} + +pub(crate) fn get_balance( + token_id: AccountTreeId, + account: &Address, + main_storage: StoragePtr, +) -> U256 { + let key = storage_key_for_standard_token_balance(token_id, account); + h256_to_u256(main_storage.borrow_mut().read_value(&key)) +} + +pub(crate) fn read_test_contract() -> Vec { + read_bytecode("etc/contracts-test-data/artifacts-zk/contracts/counter/counter.sol/Counter.json") +} + +pub(crate) fn get_bootloader(test: &str) -> SystemContractCode { + let bootloader_code = read_zbin_bytecode(format!( + "contracts/system-contracts/bootloader/tests/artifacts/{}.yul.zbin", + test + )); + + let bootloader_hash = hash_bytecode(&bootloader_code); + SystemContractCode { + code: bytes_to_be_words(bootloader_code), + hash: bootloader_hash, + } +} + +pub(crate) fn read_nonce_holder_tester() -> Vec { + read_bytecode("etc/contracts-test-data/artifacts-zk/contracts/custom-account/nonce-holder-test.sol/NonceHolderTest.json") +} + +pub(crate) fn read_error_contract() -> Vec { + read_bytecode( + "etc/contracts-test-data/artifacts-zk/contracts/error/error.sol/SimpleRequire.json", + ) +} + +pub(crate) fn get_execute_error_calldata() -> Vec { + let test_contract = load_contract( + "etc/contracts-test-data/artifacts-zk/contracts/error/error.sol/SimpleRequire.json", + ); + + let function = test_contract.function("require_short").unwrap(); + + function + .encode_input(&[]) + .expect("failed to encode parameters") +} + +pub(crate) fn read_many_owners_custom_account_contract() -> (Vec, Contract) { + let path = "etc/contracts-test-data/artifacts-zk/contracts/custom-account/many-owners-custom-account.sol/ManyOwnersCustomAccount.json"; + 
(read_bytecode(path), load_contract(path)) +} + +pub(crate) fn read_max_depth_contract() -> Vec { + read_zbin_bytecode( + "core/tests/ts-integration/contracts/zkasm/artifacts/deep_stak.zkasm/deep_stak.zkasm.zbin", + ) +} + +pub(crate) fn read_precompiles_contract() -> Vec { + read_bytecode( + "etc/contracts-test-data/artifacts-zk/contracts/precompiles/precompiles.sol/Precompiles.json", + ) +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tracers/circuits_capacity.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tracers/circuits_capacity.rs new file mode 100644 index 00000000000..33fa6677de2 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/tracers/circuits_capacity.rs @@ -0,0 +1,85 @@ +use zkevm_test_harness_1_4_0::{geometry_config::get_geometry_config, toolset::GeometryConfig}; + +const GEOMETRY_CONFIG: GeometryConfig = get_geometry_config(); +const OVERESTIMATE_PERCENT: f32 = 1.05; + +const MAIN_VM_CYCLE_FRACTION: f32 = + OVERESTIMATE_PERCENT / GEOMETRY_CONFIG.cycles_per_vm_snapshot as f32; + +const CODE_DECOMMITTER_SORTER_CYCLE_FRACTION: f32 = + OVERESTIMATE_PERCENT / GEOMETRY_CONFIG.cycles_code_decommitter_sorter as f32; + +const LOG_DEMUXER_CYCLE_FRACTION: f32 = + OVERESTIMATE_PERCENT / GEOMETRY_CONFIG.cycles_per_log_demuxer as f32; + +const STORAGE_SORTER_CYCLE_FRACTION: f32 = + OVERESTIMATE_PERCENT / GEOMETRY_CONFIG.cycles_per_storage_sorter as f32; + +const EVENTS_OR_L1_MESSAGES_SORTER_CYCLE_FRACTION: f32 = + OVERESTIMATE_PERCENT / GEOMETRY_CONFIG.cycles_per_events_or_l1_messages_sorter as f32; + +const RAM_PERMUTATION_CYCLE_FRACTION: f32 = + OVERESTIMATE_PERCENT / GEOMETRY_CONFIG.cycles_per_ram_permutation as f32; + +pub(crate) const CODE_DECOMMITTER_CYCLE_FRACTION: f32 = + OVERESTIMATE_PERCENT / GEOMETRY_CONFIG.cycles_per_code_decommitter as f32; + +pub(crate) const STORAGE_APPLICATION_CYCLE_FRACTION: f32 = + OVERESTIMATE_PERCENT / GEOMETRY_CONFIG.cycles_per_storage_application as f32; + +pub(crate) const KECCAK256_CYCLE_FRACTION: f32 = + OVERESTIMATE_PERCENT / GEOMETRY_CONFIG.cycles_per_keccak256_circuit as f32; + +pub(crate) const SHA256_CYCLE_FRACTION: f32 = + OVERESTIMATE_PERCENT / GEOMETRY_CONFIG.cycles_per_sha256_circuit as f32; + +pub(crate) const ECRECOVER_CYCLE_FRACTION: f32 = + OVERESTIMATE_PERCENT / GEOMETRY_CONFIG.cycles_per_ecrecover_circuit as f32; + +// "Rich addressing" opcodes are opcodes that can write their return value/read the input onto the stack +// and so take 1-2 RAM permutations more than an average opcode. +// In the worst case, a rich addressing may take 3 ram permutations +// (1 for reading the opcode, 1 for writing input value, 1 for writing output value). +pub(crate) const RICH_ADDRESSING_OPCODE_FRACTION: f32 = + MAIN_VM_CYCLE_FRACTION + 3.0 * RAM_PERMUTATION_CYCLE_FRACTION; + +pub(crate) const AVERAGE_OPCODE_FRACTION: f32 = + MAIN_VM_CYCLE_FRACTION + RAM_PERMUTATION_CYCLE_FRACTION; + +// Here "base" fraction is a fraction that will be used unconditionally. +// Usage of `StorageApplication` is being tracked separately as it depends on whether slot was read before or not. 
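+// As an illustrative reading of the constant below: a storage read occupies one cycle in each of
+// the MainVM, RAM permutation, log demuxer and storage sorter circuits, so its base cost is the
+// sum of `OVERESTIMATE_PERCENT / cycles_per_*` across those four circuits.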
+pub(crate) const STORAGE_READ_BASE_FRACTION: f32 = MAIN_VM_CYCLE_FRACTION
+    + RAM_PERMUTATION_CYCLE_FRACTION
+    + LOG_DEMUXER_CYCLE_FRACTION
+    + STORAGE_SORTER_CYCLE_FRACTION;
+
+pub(crate) const EVENT_OR_L1_MESSAGE_FRACTION: f32 = MAIN_VM_CYCLE_FRACTION
+    + RAM_PERMUTATION_CYCLE_FRACTION
+    + 2.0 * LOG_DEMUXER_CYCLE_FRACTION
+    + 2.0 * EVENTS_OR_L1_MESSAGES_SORTER_CYCLE_FRACTION;
+
+// Here "base" fraction is a fraction that will be used unconditionally.
+// Usage of `StorageApplication` is being tracked separately as it depends on whether slot was written before or not.
+pub(crate) const STORAGE_WRITE_BASE_FRACTION: f32 = MAIN_VM_CYCLE_FRACTION
+    + RAM_PERMUTATION_CYCLE_FRACTION
+    + 2.0 * LOG_DEMUXER_CYCLE_FRACTION
+    + 2.0 * STORAGE_SORTER_CYCLE_FRACTION;
+
+pub(crate) const FAR_CALL_FRACTION: f32 = MAIN_VM_CYCLE_FRACTION
+    + RAM_PERMUTATION_CYCLE_FRACTION
+    + STORAGE_SORTER_CYCLE_FRACTION
+    + CODE_DECOMMITTER_SORTER_CYCLE_FRACTION;
+
+// 5 RAM permutations, because: 1 to read opcode + 2 reads + 2 writes.
+// 2 reads and 2 writes are needed because unaligned access is implemented with
+// aligned queries.
+pub(crate) const UMA_WRITE_FRACTION: f32 =
+    MAIN_VM_CYCLE_FRACTION + 5.0 * RAM_PERMUTATION_CYCLE_FRACTION;
+
+// 3 RAM permutations, because: 1 to read opcode + 2 reads.
+// 2 reads are needed because unaligned access is implemented with aligned queries.
+pub(crate) const UMA_READ_FRACTION: f32 =
+    MAIN_VM_CYCLE_FRACTION + 3.0 * RAM_PERMUTATION_CYCLE_FRACTION;
+
+pub(crate) const PRECOMPILE_CALL_COMMON_FRACTION: f32 =
+    MAIN_VM_CYCLE_FRACTION + RAM_PERMUTATION_CYCLE_FRACTION + LOG_DEMUXER_CYCLE_FRACTION;
diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tracers/circuits_tracer.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tracers/circuits_tracer.rs
new file mode 100644
index 00000000000..e6b52221e02
--- /dev/null
+++ b/core/lib/multivm/src/versions/vm_boojum_integration/tracers/circuits_tracer.rs
@@ -0,0 +1,192 @@
+use std::marker::PhantomData;
+
+use zk_evm_1_4_0::{
+    tracing::{BeforeExecutionData, VmLocalStateData},
+    zk_evm_abstractions::precompiles::PrecompileAddress,
+    zkevm_opcode_defs::{LogOpcode, Opcode, UMAOpcode},
+};
+use zksync_state::{StoragePtr, WriteStorage};
+
+use super::circuits_capacity::*;
+use crate::{
+    interface::{dyn_tracers::vm_1_4_0::DynTracer, tracer::TracerExecutionStatus},
+    vm_boojum_integration::{
+        bootloader_state::BootloaderState,
+        old_vm::{history_recorder::HistoryMode, memory::SimpleMemory},
+        tracers::traits::VmTracer,
+        types::internals::ZkSyncVmState,
+    },
+};
+
+/// Tracer responsible for estimating the number of proof circuits used during VM execution.
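+/// Every executed opcode adds the (overestimated) fraction of each circuit it occupies to
+/// `estimated_circuits_used`; decommitments, storage accesses and precompile calls are
+/// accounted for separately in `finish_cycle`.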
+#[derive(Debug)] +pub(crate) struct CircuitsTracer { + pub(crate) estimated_circuits_used: f32, + last_decommitment_history_entry_checked: Option, + last_written_keys_history_entry_checked: Option, + last_read_keys_history_entry_checked: Option, + last_precompile_inner_entry_checked: Option, + _phantom_data: PhantomData, +} + +impl CircuitsTracer { + pub(crate) fn new() -> Self { + Self { + estimated_circuits_used: 0.0, + last_decommitment_history_entry_checked: None, + last_written_keys_history_entry_checked: None, + last_read_keys_history_entry_checked: None, + last_precompile_inner_entry_checked: None, + _phantom_data: Default::default(), + } + } +} + +impl DynTracer> for CircuitsTracer { + fn before_execution( + &mut self, + _state: VmLocalStateData<'_>, + data: BeforeExecutionData, + _memory: &SimpleMemory, + _storage: StoragePtr, + ) { + let used = match data.opcode.variant.opcode { + Opcode::Nop(_) + | Opcode::Add(_) + | Opcode::Sub(_) + | Opcode::Mul(_) + | Opcode::Div(_) + | Opcode::Jump(_) + | Opcode::Binop(_) + | Opcode::Shift(_) + | Opcode::Ptr(_) => RICH_ADDRESSING_OPCODE_FRACTION, + Opcode::Context(_) | Opcode::Ret(_) | Opcode::NearCall(_) => AVERAGE_OPCODE_FRACTION, + Opcode::Log(LogOpcode::StorageRead) => STORAGE_READ_BASE_FRACTION, + Opcode::Log(LogOpcode::StorageWrite) => STORAGE_WRITE_BASE_FRACTION, + Opcode::Log(LogOpcode::ToL1Message) | Opcode::Log(LogOpcode::Event) => { + EVENT_OR_L1_MESSAGE_FRACTION + } + Opcode::Log(LogOpcode::PrecompileCall) => PRECOMPILE_CALL_COMMON_FRACTION, + Opcode::FarCall(_) => FAR_CALL_FRACTION, + Opcode::UMA(UMAOpcode::AuxHeapWrite | UMAOpcode::HeapWrite) => UMA_WRITE_FRACTION, + Opcode::UMA( + UMAOpcode::AuxHeapRead | UMAOpcode::HeapRead | UMAOpcode::FatPointerRead, + ) => UMA_READ_FRACTION, + Opcode::Invalid(_) => unreachable!(), // invalid opcodes are never executed + }; + + self.estimated_circuits_used += used; + } +} + +impl VmTracer for CircuitsTracer { + fn initialize_tracer(&mut self, state: &mut ZkSyncVmState) { + self.last_decommitment_history_entry_checked = Some( + state + .decommittment_processor + .decommitted_code_hashes + .history() + .len(), + ); + + self.last_written_keys_history_entry_checked = + Some(state.storage.written_keys.history().len()); + + self.last_read_keys_history_entry_checked = Some(state.storage.read_keys.history().len()); + + self.last_precompile_inner_entry_checked = Some( + state + .precompiles_processor + .precompile_cycles_history + .inner() + .len(), + ); + } + + fn finish_cycle( + &mut self, + state: &mut ZkSyncVmState, + _bootloader_state: &mut BootloaderState, + ) -> TracerExecutionStatus { + // Trace decommitments. + let last_decommitment_history_entry_checked = self + .last_decommitment_history_entry_checked + .expect("Value must be set during init"); + let history = state + .decommittment_processor + .decommitted_code_hashes + .history(); + for (_, history_event) in &history[last_decommitment_history_entry_checked..] { + // We assume that only insertions may happen during a single VM inspection. + assert!(history_event.value.is_none()); + let bytecode_len = state + .decommittment_processor + .known_bytecodes + .inner() + .get(&history_event.key) + .expect("Bytecode must be known at this point") + .len(); + + // Each cycle of `CodeDecommitter` processes 2 words. + // If the number of words in bytecode is odd, then number of cycles must be rounded up. 
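+            // E.g. a bytecode of 5 words takes (5 + 1) / 2 = 3 decommitter cycles.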
+ let decommitter_cycles_used = (bytecode_len + 1) / 2; + self.estimated_circuits_used += + (decommitter_cycles_used as f32) * CODE_DECOMMITTER_CYCLE_FRACTION; + } + self.last_decommitment_history_entry_checked = Some(history.len()); + + // Process storage writes. + let last_writes_history_entry_checked = self + .last_written_keys_history_entry_checked + .expect("Value must be set during init"); + let history = state.storage.written_keys.history(); + for (_, history_event) in &history[last_writes_history_entry_checked..] { + // We assume that only insertions may happen during a single VM inspection. + assert!(history_event.value.is_none()); + + self.estimated_circuits_used += 2.0 * STORAGE_APPLICATION_CYCLE_FRACTION; + } + self.last_written_keys_history_entry_checked = Some(history.len()); + + // Process storage reads. + let last_reads_history_entry_checked = self + .last_read_keys_history_entry_checked + .expect("Value must be set during init"); + let history = state.storage.read_keys.history(); + for (_, history_event) in &history[last_reads_history_entry_checked..] { + // We assume that only insertions may happen during a single VM inspection. + assert!(history_event.value.is_none()); + + // If the slot is already written to, then we've already taken 2 cycles into account. + if !state + .storage + .written_keys + .inner() + .contains_key(&history_event.key) + { + self.estimated_circuits_used += STORAGE_APPLICATION_CYCLE_FRACTION; + } + } + self.last_read_keys_history_entry_checked = Some(history.len()); + + // Process precompiles. + let last_precompile_inner_entry_checked = self + .last_precompile_inner_entry_checked + .expect("Value must be set during init"); + let inner = state + .precompiles_processor + .precompile_cycles_history + .inner(); + for (precompile, cycles) in &inner[last_precompile_inner_entry_checked..] 
{ + let fraction = match precompile { + PrecompileAddress::Ecrecover => ECRECOVER_CYCLE_FRACTION, + PrecompileAddress::SHA256 => SHA256_CYCLE_FRACTION, + PrecompileAddress::Keccak256 => KECCAK256_CYCLE_FRACTION, + }; + self.estimated_circuits_used += (*cycles as f32) * fraction; + } + self.last_precompile_inner_entry_checked = Some(inner.len()); + + TracerExecutionStatus::Continue + } +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tracers/default_tracers.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tracers/default_tracers.rs new file mode 100644 index 00000000000..422463d2921 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/tracers/default_tracers.rs @@ -0,0 +1,311 @@ +use std::{ + fmt::{Debug, Formatter}, + marker::PhantomData, +}; + +use zk_evm_1_4_0::{ + tracing::{ + AfterDecodingData, AfterExecutionData, BeforeExecutionData, Tracer, VmLocalStateData, + }, + vm_state::VmLocalState, + witness_trace::DummyTracer, + zkevm_opcode_defs::{decoding::EncodingModeProduction, Opcode, RetOpcode}, +}; +use zksync_state::{StoragePtr, WriteStorage}; +use zksync_types::Timestamp; + +use super::PubdataTracer; +use crate::{ + interface::{ + tracer::{TracerExecutionStopReason, VmExecutionStopReason}, + traits::tracers::dyn_tracers::vm_1_4_0::DynTracer, + types::tracer::TracerExecutionStatus, + Halt, VmExecutionMode, + }, + vm_boojum_integration::{ + bootloader_state::{utils::apply_l2_block, BootloaderState}, + constants::BOOTLOADER_HEAP_PAGE, + old_vm::{history_recorder::HistoryMode, memory::SimpleMemory}, + tracers::{ + dispatcher::TracerDispatcher, + utils::{ + computational_gas_price, gas_spent_on_bytecodes_and_long_messages_this_opcode, + print_debug_if_needed, VmHook, + }, + CircuitsTracer, RefundsTracer, ResultTracer, + }, + types::internals::ZkSyncVmState, + VmTracer, + }, +}; + +/// Default tracer for the VM. It manages the other tracers execution and stop the vm when needed. +pub(crate) struct DefaultExecutionTracer { + tx_has_been_processed: bool, + execution_mode: VmExecutionMode, + + pub(crate) gas_spent_on_bytecodes_and_long_messages: u32, + // Amount of gas used during account validation. + pub(crate) computational_gas_used: u32, + // Maximum number of gas that we're allowed to use during account validation. + tx_validation_gas_limit: u32, + in_account_validation: bool, + final_batch_info_requested: bool, + pub(crate) result_tracer: ResultTracer, + // This tracer is designed specifically for calculating refunds. Its separation from the custom tracer + // ensures static dispatch, enhancing performance by avoiding dynamic dispatch overhead. + // Additionally, being an internal tracer, it saves the results directly to `VmResultAndLogs`. + pub(crate) refund_tracer: Option>, + // The pubdata tracer is responsible for inserting the pubdata packing information into the bootloader + // memory at the end of the batch. Its separation from the custom tracer + // ensures static dispatch, enhancing performance by avoiding dynamic dispatch overhead. + pub(crate) pubdata_tracer: Option>, + pub(crate) dispatcher: TracerDispatcher, + ret_from_the_bootloader: Option, + // This tracer tracks what opcodes were executed and calculates how much circuits will be generated. + // It only takes into account circuits that are generated for actual execution. It doesn't + // take into account e.g circuits produced by the initial bootloader memory commitment. 
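+    // The resulting estimate is presumably consumed by batch sealing criteria, so that a batch
+    // never exceeds the prover's circuit capacity (an assumption about the caller, not this file).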
+ pub(crate) circuits_tracer: CircuitsTracer, + + storage: StoragePtr, + _phantom: PhantomData, +} + +impl DefaultExecutionTracer { + pub(crate) fn new( + computational_gas_limit: u32, + execution_mode: VmExecutionMode, + dispatcher: TracerDispatcher, + storage: StoragePtr, + refund_tracer: Option>, + pubdata_tracer: Option>, + ) -> Self { + Self { + tx_has_been_processed: false, + execution_mode, + gas_spent_on_bytecodes_and_long_messages: 0, + computational_gas_used: 0, + tx_validation_gas_limit: computational_gas_limit, + in_account_validation: false, + final_batch_info_requested: false, + result_tracer: ResultTracer::new(execution_mode), + refund_tracer, + dispatcher, + pubdata_tracer, + ret_from_the_bootloader: None, + circuits_tracer: CircuitsTracer::new(), + storage, + _phantom: PhantomData, + } + } + + pub(crate) fn tx_has_been_processed(&self) -> bool { + self.tx_has_been_processed + } + + pub(crate) fn validation_run_out_of_gas(&self) -> bool { + self.computational_gas_used > self.tx_validation_gas_limit + } + + pub(crate) fn gas_spent_on_pubdata(&self, vm_local_state: &VmLocalState) -> u32 { + self.gas_spent_on_bytecodes_and_long_messages + vm_local_state.spent_pubdata_counter + } + + fn set_fictive_l2_block( + &mut self, + state: &mut ZkSyncVmState, + bootloader_state: &mut BootloaderState, + ) { + let current_timestamp = Timestamp(state.local_state.timestamp); + let txs_index = bootloader_state.free_tx_index(); + let l2_block = bootloader_state.insert_fictive_l2_block(); + let mut memory = vec![]; + apply_l2_block(&mut memory, l2_block, txs_index); + state + .memory + .populate_page(BOOTLOADER_HEAP_PAGE as usize, memory, current_timestamp); + self.final_batch_info_requested = false; + } + + fn should_stop_execution(&self) -> TracerExecutionStatus { + match self.execution_mode { + VmExecutionMode::OneTx if self.tx_has_been_processed() => { + return TracerExecutionStatus::Stop(TracerExecutionStopReason::Finish); + } + VmExecutionMode::Bootloader if self.ret_from_the_bootloader == Some(RetOpcode::Ok) => { + return TracerExecutionStatus::Stop(TracerExecutionStopReason::Finish); + } + _ => {} + }; + if self.validation_run_out_of_gas() { + return TracerExecutionStatus::Stop(TracerExecutionStopReason::Abort( + Halt::ValidationOutOfGas, + )); + } + TracerExecutionStatus::Continue + } +} + +impl Debug for DefaultExecutionTracer { + fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result { + f.debug_struct("DefaultExecutionTracer").finish() + } +} + +/// The default tracer for the VM manages all other tracers. For the sake of optimization, these tracers are statically dispatched. +/// At the same time, the boilerplate for calling these tracers for all tracer calls is quite extensive. +/// This macro is used to reduce the boilerplate. +/// +/// Usage: +/// ``` +/// dispatch_tracers!( +/// self.after_decoding(state, data, memory) +/// ); +/// ``` +/// Whenever a new tracer is added, it should be added to the macro call. +/// +/// The macro passes the function call to all tracers. +macro_rules! 
dispatch_tracers { + ($self:ident.$function:ident($( $params:expr ),*)) => { + $self.result_tracer.$function($( $params ),*); + $self.dispatcher.$function($( $params ),*); + if let Some(tracer) = &mut $self.refund_tracer { + tracer.$function($( $params ),*); + } + if let Some(tracer) = &mut $self.pubdata_tracer { + tracer.$function($( $params ),*); + } + $self.circuits_tracer.$function($( $params ),*); + }; +} + +impl Tracer for DefaultExecutionTracer { + const CALL_BEFORE_DECODING: bool = false; + const CALL_AFTER_DECODING: bool = true; + const CALL_BEFORE_EXECUTION: bool = true; + const CALL_AFTER_EXECUTION: bool = true; + type SupportedMemory = SimpleMemory; + + fn before_decoding( + &mut self, + _state: VmLocalStateData<'_, 8, EncodingModeProduction>, + _memory: &Self::SupportedMemory, + ) { + } + + fn after_decoding( + &mut self, + state: VmLocalStateData<'_>, + data: AfterDecodingData, + memory: &Self::SupportedMemory, + ) { + dispatch_tracers!(self.after_decoding(state, data, memory)); + } + + fn before_execution( + &mut self, + state: VmLocalStateData<'_>, + data: BeforeExecutionData, + memory: &Self::SupportedMemory, + ) { + if self.in_account_validation { + self.computational_gas_used = self + .computational_gas_used + .saturating_add(computational_gas_price(state, &data)); + } + + let hook = VmHook::from_opcode_memory(&state, &data); + print_debug_if_needed(&hook, &state, memory); + + match hook { + VmHook::TxHasEnded => self.tx_has_been_processed = true, + VmHook::NoValidationEntered => self.in_account_validation = false, + VmHook::AccountValidationEntered => self.in_account_validation = true, + VmHook::FinalBatchInfo => self.final_batch_info_requested = true, + _ => {} + } + + self.gas_spent_on_bytecodes_and_long_messages += + gas_spent_on_bytecodes_and_long_messages_this_opcode(&state, &data); + + dispatch_tracers!(self.before_execution(state, data, memory, self.storage.clone())); + } + + fn after_execution( + &mut self, + state: VmLocalStateData<'_>, + data: AfterExecutionData, + memory: &Self::SupportedMemory, + ) { + if let VmExecutionMode::Bootloader = self.execution_mode { + let (next_opcode, _, _) = zk_evm_1_4_0::vm_state::read_and_decode( + state.vm_local_state, + memory, + &mut DummyTracer, + self, + ); + if current_frame_is_bootloader(state.vm_local_state) { + if let Opcode::Ret(ret) = next_opcode.inner.variant.opcode { + self.ret_from_the_bootloader = Some(ret); + } + } + } + + dispatch_tracers!(self.after_execution(state, data, memory, self.storage.clone())); + } +} + +impl DefaultExecutionTracer { + pub(crate) fn initialize_tracer(&mut self, state: &mut ZkSyncVmState) { + dispatch_tracers!(self.initialize_tracer(state)); + } + + pub(crate) fn finish_cycle( + &mut self, + state: &mut ZkSyncVmState, + bootloader_state: &mut BootloaderState, + ) -> TracerExecutionStatus { + if self.final_batch_info_requested { + self.set_fictive_l2_block(state, bootloader_state) + } + + let mut result = self.result_tracer.finish_cycle(state, bootloader_state); + if let Some(refund_tracer) = &mut self.refund_tracer { + result = refund_tracer + .finish_cycle(state, bootloader_state) + .stricter(&result); + } + result = self + .dispatcher + .finish_cycle(state, bootloader_state) + .stricter(&result); + if let Some(pubdata_tracer) = &mut self.pubdata_tracer { + result = pubdata_tracer + .finish_cycle(state, bootloader_state) + .stricter(&result); + } + + result = self + .circuits_tracer + .finish_cycle(state, bootloader_state) + .stricter(&result); + + 
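+        // Roughly speaking, `stricter` keeps the more severe of the two statuses (an abort wins
+        // over a finish, which wins over `Continue`), so any single tracer can stop the VM.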
result.stricter(&self.should_stop_execution()) + } + + pub(crate) fn after_vm_execution( + &mut self, + state: &mut ZkSyncVmState, + bootloader_state: &BootloaderState, + stop_reason: VmExecutionStopReason, + ) { + dispatch_tracers!(self.after_vm_execution(state, bootloader_state, stop_reason.clone())); + } +} + +fn current_frame_is_bootloader(local_state: &VmLocalState) -> bool { + // The current frame is bootloader if the call stack depth is 1. + // Some of the near calls inside the bootloader can be out of gas, which is totally normal behavior + // and it shouldn't result in `is_bootloader_out_of_gas` becoming true. + local_state.callstack.inner.len() == 1 +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tracers/dispatcher.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tracers/dispatcher.rs new file mode 100644 index 00000000000..11262c4d766 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/tracers/dispatcher.rs @@ -0,0 +1,126 @@ +use zk_evm_1_4_0::tracing::{ + AfterDecodingData, AfterExecutionData, BeforeExecutionData, VmLocalStateData, +}; +use zksync_state::{StoragePtr, WriteStorage}; + +use crate::{ + interface::{ + dyn_tracers::vm_1_4_0::DynTracer, + tracer::{TracerExecutionStatus, VmExecutionStopReason}, + }, + vm_boojum_integration::{ + BootloaderState, HistoryMode, SimpleMemory, TracerPointer, VmTracer, ZkSyncVmState, + }, +}; + +/// Tracer dispatcher is a tracer that can dispatch calls to multiple tracers. +pub struct TracerDispatcher { + tracers: Vec>, +} + +impl TracerDispatcher { + pub fn new(tracers: Vec>) -> Self { + Self { tracers } + } +} + +impl From> for TracerDispatcher { + fn from(value: TracerPointer) -> Self { + Self { + tracers: vec![value], + } + } +} + +impl From>> for TracerDispatcher { + fn from(value: Vec>) -> Self { + Self { tracers: value } + } +} + +impl Default for TracerDispatcher { + fn default() -> Self { + Self { tracers: vec![] } + } +} + +impl DynTracer> for TracerDispatcher { + #[inline(always)] + fn before_decoding(&mut self, _state: VmLocalStateData<'_>, _memory: &SimpleMemory) { + for tracer in self.tracers.iter_mut() { + tracer.before_decoding(_state, _memory); + } + } + + #[inline(always)] + fn after_decoding( + &mut self, + _state: VmLocalStateData<'_>, + _data: AfterDecodingData, + _memory: &SimpleMemory, + ) { + for tracer in self.tracers.iter_mut() { + tracer.after_decoding(_state, _data, _memory); + } + } + + #[inline(always)] + fn before_execution( + &mut self, + _state: VmLocalStateData<'_>, + _data: BeforeExecutionData, + _memory: &SimpleMemory, + _storage: StoragePtr, + ) { + for tracer in self.tracers.iter_mut() { + tracer.before_execution(_state, _data, _memory, _storage.clone()); + } + } + + #[inline(always)] + fn after_execution( + &mut self, + _state: VmLocalStateData<'_>, + _data: AfterExecutionData, + _memory: &SimpleMemory, + _storage: StoragePtr, + ) { + for tracer in self.tracers.iter_mut() { + tracer.after_execution(_state, _data, _memory, _storage.clone()); + } + } +} + +impl VmTracer for TracerDispatcher { + fn initialize_tracer(&mut self, _state: &mut ZkSyncVmState) { + for tracer in self.tracers.iter_mut() { + tracer.initialize_tracer(_state); + } + } + + /// Run after each vm execution cycle + #[inline(always)] + fn finish_cycle( + &mut self, + _state: &mut ZkSyncVmState, + _bootloader_state: &mut BootloaderState, + ) -> TracerExecutionStatus { + let mut result = TracerExecutionStatus::Continue; + for tracer in self.tracers.iter_mut() { + result = 
result.stricter(&tracer.finish_cycle(_state, _bootloader_state));
+        }
+        result
+    }
+
+    /// Run after the vm execution
+    fn after_vm_execution(
+        &mut self,
+        _state: &mut ZkSyncVmState<S, H>,
+        _bootloader_state: &BootloaderState,
+        _stop_reason: VmExecutionStopReason,
+    ) {
+        for tracer in self.tracers.iter_mut() {
+            tracer.after_vm_execution(_state, _bootloader_state, _stop_reason.clone());
+        }
+    }
+}
diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tracers/mod.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tracers/mod.rs
new file mode 100644
index 00000000000..1bdb1b6ccdb
--- /dev/null
+++ b/core/lib/multivm/src/versions/vm_boojum_integration/tracers/mod.rs
@@ -0,0 +1,16 @@
+pub(crate) use circuits_tracer::CircuitsTracer;
+pub(crate) use default_tracers::DefaultExecutionTracer;
+pub(crate) use pubdata_tracer::PubdataTracer;
+pub(crate) use refunds::RefundsTracer;
+pub(crate) use result_tracer::ResultTracer;
+
+pub(crate) mod circuits_tracer;
+pub(crate) mod default_tracers;
+pub(crate) mod pubdata_tracer;
+pub(crate) mod refunds;
+pub(crate) mod result_tracer;
+
+mod circuits_capacity;
+pub mod dispatcher;
+pub(crate) mod traits;
+pub(crate) mod utils;
diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tracers/pubdata_tracer.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tracers/pubdata_tracer.rs
new file mode 100644
index 00000000000..3e3075cb45f
--- /dev/null
+++ b/core/lib/multivm/src/versions/vm_boojum_integration/tracers/pubdata_tracer.rs
@@ -0,0 +1,212 @@
+use std::marker::PhantomData;
+
+use zk_evm_1_4_0::{
+    aux_structures::Timestamp,
+    tracing::{BeforeExecutionData, VmLocalStateData},
+};
+use zksync_state::{StoragePtr, WriteStorage};
+use zksync_types::{
+    event::{
+        extract_bytecode_publication_requests_from_l1_messenger,
+        extract_l2tol1logs_from_l1_messenger, extract_long_l2_to_l1_messages, L1MessengerL2ToL1Log,
+    },
+    writes::StateDiffRecord,
+    zkevm_test_harness::witness::sort_storage_access::sort_storage_access_queries,
+    AccountTreeId, StorageKey, L1_MESSENGER_ADDRESS,
+};
+use zksync_utils::{h256_to_u256, u256_to_bytes_be, u256_to_h256};
+
+use crate::{
+    interface::{
+        dyn_tracers::vm_1_4_0::DynTracer,
+        tracer::{TracerExecutionStatus, TracerExecutionStopReason},
+        types::inputs::L1BatchEnv,
+        VmExecutionMode,
+    },
+    vm_boojum_integration::{
+        bootloader_state::{utils::apply_pubdata_to_memory, BootloaderState},
+        constants::BOOTLOADER_HEAP_PAGE,
+        old_vm::{history_recorder::HistoryMode, memory::SimpleMemory},
+        tracers::{traits::VmTracer, utils::VmHook},
+        types::internals::{PubdataInput, ZkSyncVmState},
+        utils::logs::collect_events_and_l1_system_logs_after_timestamp,
+        StorageOracle,
+    },
+};
+
+/// Tracer responsible for collecting the pubdata published within an L1 batch.
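+/// The pubdata consists of four parts (see `build_pubdata_input` below): user L2->L1 logs,
+/// long L2->L1 messages, bytecodes to be published on L1, and state diffs.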
+#[derive(Debug, Clone)] +pub(crate) struct PubdataTracer { + l1_batch_env: L1BatchEnv, + pubdata_info_requested: bool, + execution_mode: VmExecutionMode, + _phantom_data: PhantomData, +} + +impl PubdataTracer { + pub(crate) fn new(l1_batch_env: L1BatchEnv, execution_mode: VmExecutionMode) -> Self { + Self { + l1_batch_env, + pubdata_info_requested: false, + execution_mode, + _phantom_data: Default::default(), + } + } +} + +impl PubdataTracer { + // Packs part of L1 Messenger total pubdata that corresponds to + // `L2toL1Logs` sent in the block + fn get_total_user_logs( + &self, + state: &ZkSyncVmState, + ) -> Vec { + let (all_generated_events, _) = collect_events_and_l1_system_logs_after_timestamp( + state, + &self.l1_batch_env, + Timestamp(0), + ); + extract_l2tol1logs_from_l1_messenger(&all_generated_events) + } + + // Packs part of L1 Messenger total pubdata that corresponds to + // Messages sent in the block + fn get_total_l1_messenger_messages( + &self, + state: &ZkSyncVmState, + ) -> Vec> { + let (all_generated_events, _) = collect_events_and_l1_system_logs_after_timestamp( + state, + &self.l1_batch_env, + Timestamp(0), + ); + + extract_long_l2_to_l1_messages(&all_generated_events) + } + + // Packs part of L1 Messenger total pubdata that corresponds to + // Bytecodes needed to be published on L1 + fn get_total_published_bytecodes( + &self, + state: &ZkSyncVmState, + ) -> Vec> { + let (all_generated_events, _) = collect_events_and_l1_system_logs_after_timestamp( + state, + &self.l1_batch_env, + Timestamp(0), + ); + + let bytecode_publication_requests = + extract_bytecode_publication_requests_from_l1_messenger(&all_generated_events); + + bytecode_publication_requests + .iter() + .map(|bytecode_publication_request| { + state + .decommittment_processor + .known_bytecodes + .inner() + .get(&h256_to_u256(bytecode_publication_request.bytecode_hash)) + .unwrap() + .iter() + .flat_map(u256_to_bytes_be) + .collect() + }) + .collect() + } + + // Packs part of L1Messenger total pubdata that corresponds to + // State diffs needed to be published on L1 + fn get_state_diffs(storage: &StorageOracle) -> Vec { + sort_storage_access_queries( + storage + .storage_log_queries_after_timestamp(Timestamp(0)) + .iter() + .map(|log| &log.log_query), + ) + .1 + .into_iter() + .filter(|log| log.rw_flag) + .filter(|log| log.read_value != log.written_value) + .filter(|log| log.address != L1_MESSENGER_ADDRESS) + .map(|log| StateDiffRecord { + address: log.address, + key: log.key, + derived_key: log.derive_final_address(), + enumeration_index: storage + .storage + .get_ptr() + .borrow_mut() + .get_enumeration_index(&StorageKey::new( + AccountTreeId::new(log.address), + u256_to_h256(log.key), + )) + .unwrap_or_default(), + initial_value: log.read_value, + final_value: log.written_value, + }) + .collect() + } + + fn build_pubdata_input(&self, state: &ZkSyncVmState) -> PubdataInput { + PubdataInput { + user_logs: self.get_total_user_logs(state), + l2_to_l1_messages: self.get_total_l1_messenger_messages(state), + published_bytecodes: self.get_total_published_bytecodes(state), + state_diffs: Self::get_state_diffs(&state.storage), + } + } +} + +impl DynTracer> for PubdataTracer { + fn before_execution( + &mut self, + state: VmLocalStateData<'_>, + data: BeforeExecutionData, + _memory: &SimpleMemory, + _storage: StoragePtr, + ) { + let hook = VmHook::from_opcode_memory(&state, &data); + if let VmHook::PubdataRequested = hook { + self.pubdata_info_requested = true; + } + } +} + +impl VmTracer for PubdataTracer { + fn 
finish_cycle( + &mut self, + state: &mut ZkSyncVmState, + bootloader_state: &mut BootloaderState, + ) -> TracerExecutionStatus { + if !matches!(self.execution_mode, VmExecutionMode::Batch) { + // We do not provide the pubdata when executing the block tip or a single transaction + if self.pubdata_info_requested { + return TracerExecutionStatus::Stop(TracerExecutionStopReason::Finish); + } else { + return TracerExecutionStatus::Continue; + } + } + + if self.pubdata_info_requested { + let pubdata_input = self.build_pubdata_input(state); + + // Save the pubdata for the future initial bootloader memory building + bootloader_state.set_pubdata_input(pubdata_input.clone()); + + // Apply the pubdata to the current memory + let mut memory_to_apply = vec![]; + + apply_pubdata_to_memory(&mut memory_to_apply, pubdata_input); + state.memory.populate_page( + BOOTLOADER_HEAP_PAGE as usize, + memory_to_apply, + Timestamp(state.local_state.timestamp), + ); + + self.pubdata_info_requested = false; + } + + TracerExecutionStatus::Continue + } +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tracers/refunds.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tracers/refunds.rs new file mode 100644 index 00000000000..a4e7295eca8 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/tracers/refunds.rs @@ -0,0 +1,352 @@ +use std::marker::PhantomData; + +use vise::{Buckets, EncodeLabelSet, EncodeLabelValue, Family, Histogram, Metrics}; +use zk_evm_1_4_0::{ + aux_structures::Timestamp, + tracing::{BeforeExecutionData, VmLocalStateData}, + vm_state::VmLocalState, +}; +use zksync_state::{StoragePtr, WriteStorage}; +use zksync_system_constants::{PUBLISH_BYTECODE_OVERHEAD, SYSTEM_CONTEXT_ADDRESS}; +use zksync_types::{ + event::{extract_long_l2_to_l1_messages, extract_published_bytecodes}, + l2_to_l1_log::L2ToL1Log, + L1BatchNumber, U256, +}; +use zksync_utils::{bytecode::bytecode_len_in_bytes, ceil_div_u256, u256_to_h256}; + +use crate::{ + interface::{ + traits::tracers::dyn_tracers::vm_1_4_0::DynTracer, types::tracer::TracerExecutionStatus, + L1BatchEnv, Refunds, + }, + vm_boojum_integration::{ + bootloader_state::BootloaderState, + constants::{BOOTLOADER_HEAP_PAGE, OPERATOR_REFUNDS_OFFSET, TX_GAS_LIMIT_OFFSET}, + old_vm::{ + events::merge_events, history_recorder::HistoryMode, memory::SimpleMemory, + utils::eth_price_per_pubdata_byte, + }, + tracers::{ + traits::VmTracer, + utils::{ + gas_spent_on_bytecodes_and_long_messages_this_opcode, get_vm_hook_params, VmHook, + }, + }, + types::internals::ZkSyncVmState, + utils::fee::get_batch_base_fee, + }, +}; + +/// Tracer responsible for collecting information about refunds. +#[derive(Debug, Clone)] +pub(crate) struct RefundsTracer { + // Some(x) means that the bootloader has asked the operator + // to provide the refund the user, where `x` is the refund proposed + // by the bootloader itself. 
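+    // The tracer answers by computing its own refund (see `tx_body_refund` below) and writing
+    // the proposed value back into the bootloader memory in `finish_cycle`.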
+ pending_operator_refund: Option, + refund_gas: u32, + operator_refund: Option, + timestamp_initial: Timestamp, + timestamp_before_cycle: Timestamp, + gas_remaining_before: u32, + spent_pubdata_counter_before: u32, + gas_spent_on_bytecodes_and_long_messages: u32, + l1_batch: L1BatchEnv, + pubdata_published: u32, + _phantom: PhantomData, +} + +impl RefundsTracer { + pub(crate) fn new(l1_batch: L1BatchEnv) -> Self { + Self { + pending_operator_refund: None, + refund_gas: 0, + operator_refund: None, + timestamp_initial: Timestamp(0), + timestamp_before_cycle: Timestamp(0), + gas_remaining_before: 0, + spent_pubdata_counter_before: 0, + gas_spent_on_bytecodes_and_long_messages: 0, + l1_batch, + pubdata_published: 0, + _phantom: PhantomData, + } + } +} + +impl RefundsTracer { + fn requested_refund(&self) -> Option { + self.pending_operator_refund + } + + fn set_refund_as_done(&mut self) { + self.pending_operator_refund = None; + } + + fn block_overhead_refund(&mut self) -> u32 { + 0 + } + + pub(crate) fn get_refunds(&self) -> Refunds { + Refunds { + gas_refunded: self.refund_gas, + operator_suggested_refund: self.operator_refund.unwrap_or_default(), + } + } + + pub(crate) fn tx_body_refund( + &self, + bootloader_refund: u32, + gas_spent_on_pubdata: u32, + tx_gas_limit: u32, + current_ergs_per_pubdata_byte: u32, + pubdata_published: u32, + ) -> u32 { + let total_gas_spent = tx_gas_limit - bootloader_refund; + + let gas_spent_on_computation = total_gas_spent + .checked_sub(gas_spent_on_pubdata) + .unwrap_or_else(|| { + tracing::error!( + "Gas spent on pubdata is greater than total gas spent. On pubdata: {}, total: {}", + gas_spent_on_pubdata, + total_gas_spent + ); + 0 + }); + + // For now, bootloader charges only for base fee. + let effective_gas_price = get_batch_base_fee(&self.l1_batch); + + let bootloader_eth_price_per_pubdata_byte = + U256::from(effective_gas_price) * U256::from(current_ergs_per_pubdata_byte); + + let fair_eth_price_per_pubdata_byte = U256::from(eth_price_per_pubdata_byte( + self.l1_batch.fee_input.l1_gas_price(), + )); + + // For now, L1 originated transactions are allowed to pay less than fair fee per pubdata, + // so we should take it into account. + let eth_price_per_pubdata_byte_for_calculation = std::cmp::min( + bootloader_eth_price_per_pubdata_byte, + fair_eth_price_per_pubdata_byte, + ); + + let fair_fee_eth = U256::from(gas_spent_on_computation) + * U256::from(self.l1_batch.fee_input.fair_l2_gas_price()) + + U256::from(pubdata_published) * eth_price_per_pubdata_byte_for_calculation; + let pre_paid_eth = U256::from(tx_gas_limit) * U256::from(effective_gas_price); + let refund_eth = pre_paid_eth.checked_sub(fair_fee_eth).unwrap_or_else(|| { + tracing::error!( + "Fair fee is greater than pre paid. 
Fair fee: {} wei, pre paid: {} wei", + fair_fee_eth, + pre_paid_eth + ); + U256::zero() + }); + + ceil_div_u256(refund_eth, effective_gas_price.into()).as_u32() + } + + pub(crate) fn gas_spent_on_pubdata(&self, vm_local_state: &VmLocalState) -> u32 { + self.gas_spent_on_bytecodes_and_long_messages + vm_local_state.spent_pubdata_counter + } + + pub(crate) fn pubdata_published(&self) -> u32 { + self.pubdata_published + } +} + +impl DynTracer> for RefundsTracer { + fn before_execution( + &mut self, + state: VmLocalStateData<'_>, + data: BeforeExecutionData, + memory: &SimpleMemory, + _storage: StoragePtr, + ) { + self.timestamp_before_cycle = Timestamp(state.vm_local_state.timestamp); + let hook = VmHook::from_opcode_memory(&state, &data); + match hook { + VmHook::NotifyAboutRefund => self.refund_gas = get_vm_hook_params(memory)[0].as_u32(), + VmHook::AskOperatorForRefund => { + self.pending_operator_refund = Some(get_vm_hook_params(memory)[0].as_u32()) + } + _ => {} + } + + self.gas_spent_on_bytecodes_and_long_messages += + gas_spent_on_bytecodes_and_long_messages_this_opcode(&state, &data); + } +} + +impl VmTracer for RefundsTracer { + fn initialize_tracer(&mut self, state: &mut ZkSyncVmState) { + self.timestamp_initial = Timestamp(state.local_state.timestamp); + self.gas_remaining_before = state.local_state.callstack.current.ergs_remaining; + self.spent_pubdata_counter_before = state.local_state.spent_pubdata_counter; + } + + fn finish_cycle( + &mut self, + state: &mut ZkSyncVmState, + bootloader_state: &mut BootloaderState, + ) -> TracerExecutionStatus { + #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, EncodeLabelValue, EncodeLabelSet)] + #[metrics(label = "type", rename_all = "snake_case")] + enum RefundType { + Bootloader, + Operator, + } + + const PERCENT_BUCKETS: Buckets = Buckets::values(&[ + 5.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0, 120.0, + ]); + + #[derive(Debug, Metrics)] + #[metrics(prefix = "vm_boojum_integration")] + struct RefundMetrics { + #[metrics(buckets = PERCENT_BUCKETS)] + refund: Family>, + #[metrics(buckets = PERCENT_BUCKETS)] + refund_diff: Histogram, + } + + #[vise::register] + static METRICS: vise::Global = vise::Global::new(); + + // This means that the bootloader has informed the system (usually via `VMHooks`) - that some gas + // should be refunded back (see `askOperatorForRefund` in `bootloader.yul` for details). 
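+        // Hypothetical worked example (numbers are illustrative only): with `tx_gas_limit` =
+        // 1_000_000 and a bootloader refund of 200_000, `total_gas_spent` = 800_000; the
+        // operator-side computation then re-prices the pubdata actually published and may
+        // propose a refund that differs from the bootloader's.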
+        if let Some(bootloader_refund) = self.requested_refund() {
+            assert!(
+                self.operator_refund.is_none(),
+                "Operator was asked for refund two times"
+            );
+            let gas_spent_on_pubdata =
+                self.gas_spent_on_pubdata(&state.local_state) - self.spent_pubdata_counter_before;
+
+            let current_tx_index = bootloader_state.current_tx();
+            let tx_description_offset =
+                bootloader_state.get_tx_description_offset(current_tx_index);
+            let tx_gas_limit = state
+                .memory
+                .read_slot(
+                    BOOTLOADER_HEAP_PAGE as usize,
+                    tx_description_offset + TX_GAS_LIMIT_OFFSET,
+                )
+                .value
+                .as_u32();
+
+            let used_published_storage_slots = state
+                .storage
+                .save_paid_changes(Timestamp(state.local_state.timestamp));
+
+            let pubdata_published = pubdata_published(
+                state,
+                used_published_storage_slots,
+                self.timestamp_initial,
+                self.l1_batch.number,
+            );
+
+            self.pubdata_published = pubdata_published;
+            let current_ergs_per_pubdata_byte = state.local_state.current_ergs_per_pubdata_byte;
+            let tx_body_refund = self.tx_body_refund(
+                bootloader_refund,
+                gas_spent_on_pubdata,
+                tx_gas_limit,
+                current_ergs_per_pubdata_byte,
+                pubdata_published,
+            );
+
+            if tx_body_refund < bootloader_refund {
+                tracing::error!(
+                    "Suggested tx body refund is less than bootloader refund. Tx body refund: {tx_body_refund}, \
+                     bootloader refund: {bootloader_refund}"
+                );
+            }
+
+            let refund_to_propose = tx_body_refund + self.block_overhead_refund();
+
+            let refund_slot = OPERATOR_REFUNDS_OFFSET + current_tx_index;
+
+            // Writing the refund into memory
+            state.memory.populate_page(
+                BOOTLOADER_HEAP_PAGE as usize,
+                vec![(refund_slot, refund_to_propose.into())],
+                self.timestamp_before_cycle,
+            );
+
+            bootloader_state.set_refund_for_current_tx(refund_to_propose);
+            self.operator_refund = Some(refund_to_propose);
+            self.set_refund_as_done();
+
+            if tx_gas_limit < bootloader_refund {
+                tracing::error!(
+                    "Tx gas limit is less than bootloader refund. Tx gas limit: {tx_gas_limit}, \
+                     bootloader refund: {bootloader_refund}"
+                );
+            }
+            if tx_gas_limit < refund_to_propose {
+                tracing::error!(
+                    "Tx gas limit is less than operator refund. Tx gas limit: {tx_gas_limit}, \
+                     operator refund: {refund_to_propose}"
+                );
+            }
+
+            METRICS.refund[&RefundType::Bootloader]
+                .observe(bootloader_refund as f64 / tx_gas_limit as f64 * 100.0);
+            METRICS.refund[&RefundType::Operator]
+                .observe(refund_to_propose as f64 / tx_gas_limit as f64 * 100.0);
+            let refund_diff =
+                (refund_to_propose as f64 - bootloader_refund as f64) / tx_gas_limit as f64 * 100.0;
+            METRICS.refund_diff.observe(refund_diff);
+        }
+        TracerExecutionStatus::Continue
+    }
+}
+
+/// Calculates the amount of pubdata published after the given timestamp - by reading events
+/// and storage changes directly from the VM state.
+pub(crate) fn pubdata_published<S: WriteStorage, H: HistoryMode>(
+    state: &ZkSyncVmState<S, H>,
+    storage_writes_pubdata_published: u32,
+    from_timestamp: Timestamp,
+    batch_number: L1BatchNumber,
+) -> u32 {
+    let (raw_events, l1_messages) = state
+        .event_sink
+        .get_events_and_l2_l1_logs_after_timestamp(from_timestamp);
+    let events: Vec<_> = merge_events(raw_events)
+        .into_iter()
+        .map(|e| e.into_vm_event(batch_number))
+        .collect();
+    // For the first transaction in L1 batch there may be (it depends on the execution mode) an L2->L1 log
+    // that is sent by `SystemContext` in `setNewBlock`. It's a part of the L1 batch pubdata overhead and not the transaction itself.
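+    // Total pubdata = storage-write pubdata + user L2->L1 log bytes + long message bytes +
+    // published bytecode bytes (each published bytecode also pays `PUBLISH_BYTECODE_OVERHEAD`).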
+ let l2_l1_logs_bytes = (l1_messages + .into_iter() + .map(|log| L2ToL1Log { + shard_id: log.shard_id, + is_service: log.is_first, + tx_number_in_block: log.tx_number_in_block, + sender: log.address, + key: u256_to_h256(log.key), + value: u256_to_h256(log.value), + }) + .filter(|log| log.sender != SYSTEM_CONTEXT_ADDRESS) + .count() as u32) + * zk_evm_1_4_0::zkevm_opcode_defs::system_params::L1_MESSAGE_PUBDATA_BYTES; + let l2_l1_long_messages_bytes: u32 = extract_long_l2_to_l1_messages(&events) + .iter() + .map(|event| event.len() as u32) + .sum(); + + let published_bytecode_bytes: u32 = extract_published_bytecodes(&events) + .iter() + .map(|bytecodehash| bytecode_len_in_bytes(*bytecodehash) as u32 + PUBLISH_BYTECODE_OVERHEAD) + .sum(); + + storage_writes_pubdata_published + + l2_l1_logs_bytes + + l2_l1_long_messages_bytes + + published_bytecode_bytes +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tracers/result_tracer.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tracers/result_tracer.rs new file mode 100644 index 00000000000..2293273228b --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/tracers/result_tracer.rs @@ -0,0 +1,246 @@ +use std::marker::PhantomData; + +use zk_evm_1_4_0::{ + tracing::{AfterDecodingData, BeforeExecutionData, VmLocalStateData}, + vm_state::{ErrorFlags, VmLocalState}, + zkevm_opcode_defs::FatPointer, +}; +use zksync_state::{StoragePtr, WriteStorage}; +use zksync_types::U256; + +use crate::{ + interface::{ + tracer::VmExecutionStopReason, traits::tracers::dyn_tracers::vm_1_4_0::DynTracer, + types::tracer::TracerExecutionStopReason, ExecutionResult, Halt, TxRevertReason, + VmExecutionMode, VmRevertReason, + }, + vm_boojum_integration::{ + constants::{BOOTLOADER_HEAP_PAGE, RESULT_SUCCESS_FIRST_SLOT}, + old_vm::utils::{vm_may_have_ended_inner, VmExecutionResult}, + tracers::{ + traits::VmTracer, + utils::{get_vm_hook_params, read_pointer, VmHook}, + }, + types::internals::ZkSyncVmState, + BootloaderState, HistoryMode, SimpleMemory, + }, +}; + +#[derive(Debug, Clone)] +enum Result { + Error { error_reason: VmRevertReason }, + Success { return_data: Vec }, + Halt { reason: Halt }, +} + +/// Tracer responsible for handling the VM execution result. +#[derive(Debug, Clone)] +pub(crate) struct ResultTracer { + result: Option, + bootloader_out_of_gas: bool, + execution_mode: VmExecutionMode, + _phantom: PhantomData, +} + +impl ResultTracer { + pub(crate) fn new(execution_mode: VmExecutionMode) -> Self { + Self { + result: None, + bootloader_out_of_gas: false, + execution_mode, + _phantom: PhantomData, + } + } +} + +fn current_frame_is_bootloader(local_state: &VmLocalState) -> bool { + // The current frame is bootloader if the call stack depth is 1. + // Some of the near calls inside the bootloader can be out of gas, which is totally normal behavior + // and it shouldn't result in `is_bootloader_out_of_gas` becoming true. + local_state.callstack.inner.len() == 1 +} + +impl DynTracer> for ResultTracer { + fn after_decoding( + &mut self, + state: VmLocalStateData<'_>, + data: AfterDecodingData, + _memory: &SimpleMemory, + ) { + // We should check not only for the `NOT_ENOUGH_ERGS` flag but if the current frame is bootloader too. 
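+        // Otherwise an out-of-gas near call deep inside a user contract could be misattributed
+        // to the bootloader and flip `bootloader_out_of_gas` spuriously.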
+ if current_frame_is_bootloader(state.vm_local_state) + && data + .error_flags_accumulated + .contains(ErrorFlags::NOT_ENOUGH_ERGS) + { + self.bootloader_out_of_gas = true; + } + } + + fn before_execution( + &mut self, + state: VmLocalStateData<'_>, + data: BeforeExecutionData, + memory: &SimpleMemory, + _storage: StoragePtr, + ) { + let hook = VmHook::from_opcode_memory(&state, &data); + if let VmHook::ExecutionResult = hook { + let vm_hook_params = get_vm_hook_params(memory); + let success = vm_hook_params[0]; + let returndata_ptr = FatPointer::from_u256(vm_hook_params[1]); + let returndata = read_pointer(memory, returndata_ptr); + if success == U256::zero() { + self.result = Some(Result::Error { + // Tx has reverted, without bootloader error, we can simply parse the revert reason + error_reason: (VmRevertReason::from(returndata.as_slice())), + }); + } else { + self.result = Some(Result::Success { + return_data: returndata, + }); + } + } + } +} + +impl VmTracer for ResultTracer { + fn after_vm_execution( + &mut self, + state: &mut ZkSyncVmState, + bootloader_state: &BootloaderState, + stop_reason: VmExecutionStopReason, + ) { + match stop_reason { + // Vm has finished execution, we need to check the result of it + VmExecutionStopReason::VmFinished => { + self.vm_finished_execution(state); + } + // One of the tracers above has requested to stop the execution. + // If it was the correct stop we already have the result, + // otherwise it can be out of gas error + VmExecutionStopReason::TracerRequestedStop(reason) => { + match self.execution_mode { + VmExecutionMode::OneTx => { + self.vm_stopped_execution(state, bootloader_state, reason) + } + VmExecutionMode::Batch => self.vm_finished_execution(state), + VmExecutionMode::Bootloader => self.vm_finished_execution(state), + }; + } + } + } +} + +impl ResultTracer { + fn vm_finished_execution(&mut self, state: &ZkSyncVmState) { + let Some(result) = vm_may_have_ended_inner(state) else { + // The VM has finished execution, but the result is not yet available. 
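+            // In that case we fall back to an empty success payload rather than panicking.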
+ self.result = Some(Result::Success {
+ return_data: vec![],
+ });
+ return;
+ };
+
+ // Check that it's not inside a tx
+ match result {
+ VmExecutionResult::Ok(output) => {
+ self.result = Some(Result::Success {
+ return_data: output,
+ });
+ }
+ VmExecutionResult::Revert(output) => {
+ // Unlike `VmHook::ExecutionResult`, the VM has completely finished and returned not only the revert reason
+ // but also the bytecode, which represents the type of error from the bootloader's side
+ let revert_reason = TxRevertReason::parse_error(&output);
+
+ match revert_reason {
+ TxRevertReason::TxReverted(reason) => {
+ self.result = Some(Result::Error {
+ error_reason: reason,
+ });
+ }
+ TxRevertReason::Halt(halt) => {
+ self.result = Some(Result::Halt { reason: halt });
+ }
+ };
+ }
+ VmExecutionResult::Panic => {
+ if self.bootloader_out_of_gas {
+ self.result = Some(Result::Halt {
+ reason: Halt::BootloaderOutOfGas,
+ });
+ } else {
+ self.result = Some(Result::Halt {
+ reason: Halt::VMPanic,
+ });
+ }
+ }
+ VmExecutionResult::MostLikelyDidNotFinish(_, _) => {
+ unreachable!()
+ }
+ }
+ }
+
+ fn vm_stopped_execution<H: HistoryMode>(
+ &mut self,
+ state: &ZkSyncVmState<S, H>,
+ bootloader_state: &BootloaderState,
+ reason: TracerExecutionStopReason,
+ ) {
+ if let TracerExecutionStopReason::Abort(halt) = reason {
+ self.result = Some(Result::Halt { reason: halt });
+ return;
+ }
+
+ if self.bootloader_out_of_gas {
+ self.result = Some(Result::Halt {
+ reason: Halt::BootloaderOutOfGas,
+ });
+ } else {
+ if self.result.is_some() {
+ return;
+ }
+
+ let has_failed = tx_has_failed(state, bootloader_state.current_tx() as u32);
+ if has_failed {
+ self.result = Some(Result::Error {
+ error_reason: VmRevertReason::General {
+ msg: "Transaction reverted with empty reason. Possibly out of gas"
+ .to_string(),
+ data: vec![],
+ },
+ });
+ } else {
+ self.result = Some(self.result.clone().unwrap_or(Result::Success {
+ return_data: vec![],
+ }));
+ }
+ }
+ }
+
+ pub(crate) fn into_result(self) -> ExecutionResult {
+ match self.result.unwrap() {
+ Result::Error { error_reason } => ExecutionResult::Revert {
+ output: error_reason,
+ },
+ Result::Success { return_data } => ExecutionResult::Success {
+ output: return_data,
+ },
+ Result::Halt { reason } => ExecutionResult::Halt { reason },
+ }
+ }
+}
+
+pub(crate) fn tx_has_failed<S: WriteStorage, H: HistoryMode>(
+ state: &ZkSyncVmState<S, H>,
+ tx_id: u32,
+) -> bool {
+ let mem_slot = RESULT_SUCCESS_FIRST_SLOT + tx_id;
+ let mem_value = state
+ .memory
+ .read_slot(BOOTLOADER_HEAP_PAGE as usize, mem_slot as usize)
+ .value;
+
+ mem_value == U256::zero()
+}
diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tracers/traits.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tracers/traits.rs
new file mode 100644
index 00000000000..767f45c6050
--- /dev/null
+++ b/core/lib/multivm/src/versions/vm_boojum_integration/tracers/traits.rs
@@ -0,0 +1,47 @@
+use zksync_state::WriteStorage;
+
+use crate::{
+ interface::{
+ dyn_tracers::vm_1_4_0::DynTracer,
+ tracer::{TracerExecutionStatus, VmExecutionStopReason},
+ },
+ vm_boojum_integration::{
+ bootloader_state::BootloaderState,
+ old_vm::{history_recorder::HistoryMode, memory::SimpleMemory},
+ types::internals::ZkSyncVmState,
+ },
+};
+
+pub type TracerPointer<S, H> = Box<dyn VmTracer<S, H>>;
+
+/// Run tracer for collecting data during the vm execution cycles
+pub trait VmTracer<S: WriteStorage, H: HistoryMode>: DynTracer<S, SimpleMemory<H>> {
+ /// Initialize the tracer before the vm execution
+ fn initialize_tracer(&mut self, _state: &mut ZkSyncVmState<S, H>) {}
+ /// Run after each vm execution cycle
+ fn finish_cycle(
+ &mut self,
+ _state: &mut
ZkSyncVmState<S, H>,
+ _bootloader_state: &mut BootloaderState,
+ ) -> TracerExecutionStatus {
+ TracerExecutionStatus::Continue
+ }
+ /// Run after the vm execution
+ fn after_vm_execution(
+ &mut self,
+ _state: &mut ZkSyncVmState<S, H>,
+ _bootloader_state: &BootloaderState,
+ _stop_reason: VmExecutionStopReason,
+ ) {
+ }
+}
+
+pub trait ToTracerPointer<S, H> {
+ fn into_tracer_pointer(self) -> TracerPointer<S, H>;
+}
+
+impl<S: WriteStorage, H: HistoryMode, T: VmTracer<S, H> + 'static> ToTracerPointer<S, H> for T {
+ fn into_tracer_pointer(self) -> TracerPointer<S, H> {
+ Box::new(self)
+ }
+}
diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/tracers/utils.rs b/core/lib/multivm/src/versions/vm_boojum_integration/tracers/utils.rs
new file mode 100644
index 00000000000..58264d89c8e
--- /dev/null
+++ b/core/lib/multivm/src/versions/vm_boojum_integration/tracers/utils.rs
@@ -0,0 +1,225 @@
+use zk_evm_1_4_0::{
+ aux_structures::MemoryPage,
+ tracing::{BeforeExecutionData, VmLocalStateData},
+ zkevm_opcode_defs::{
+ FarCallABI, FarCallForwardPageType, FatPointer, LogOpcode, Opcode, UMAOpcode,
+ },
+};
+use zksync_system_constants::{
+ ECRECOVER_PRECOMPILE_ADDRESS, KECCAK256_PRECOMPILE_ADDRESS, KNOWN_CODES_STORAGE_ADDRESS,
+ L1_MESSENGER_ADDRESS, SHA256_PRECOMPILE_ADDRESS,
+};
+use zksync_types::U256;
+use zksync_utils::u256_to_h256;
+
+use crate::vm_boojum_integration::{
+ constants::{
+ BOOTLOADER_HEAP_PAGE, VM_HOOK_PARAMS_COUNT, VM_HOOK_PARAMS_START_POSITION, VM_HOOK_POSITION,
+ },
+ old_vm::{
+ history_recorder::HistoryMode,
+ memory::SimpleMemory,
+ utils::{aux_heap_page_from_base, heap_page_from_base},
+ },
+};
+
+#[derive(Clone, Debug, Copy)]
+pub(crate) enum VmHook {
+ AccountValidationEntered,
+ PaymasterValidationEntered,
+ NoValidationEntered,
+ ValidationStepEndeded,
+ TxHasEnded,
+ DebugLog,
+ DebugReturnData,
+ NoHook,
+ NearCallCatch,
+ AskOperatorForRefund,
+ NotifyAboutRefund,
+ ExecutionResult,
+ FinalBatchInfo,
+ // Hook used to signal that the final pubdata for a batch is requested.
+ PubdataRequested,
+}
+
+impl VmHook {
+ pub(crate) fn from_opcode_memory(
+ state: &VmLocalStateData<'_>,
+ data: &BeforeExecutionData,
+ ) -> Self {
+ let opcode_variant = data.opcode.variant;
+ let heap_page =
+ heap_page_from_base(state.vm_local_state.callstack.current.base_memory_page).0;
+
+ let src0_value = data.src0_value.value;
+
+ let fat_ptr = FatPointer::from_u256(src0_value);
+
+ let value = data.src1_value.value;
+
+ // Only `UMA` opcodes in the bootloader serve as vm hooks
+ if !matches!(opcode_variant.opcode, Opcode::UMA(UMAOpcode::HeapWrite))
+ || heap_page != BOOTLOADER_HEAP_PAGE
+ || fat_ptr.offset != VM_HOOK_POSITION * 32
+ {
+ return Self::NoHook;
+ }
+
+ match value.as_u32() {
+ 0 => Self::AccountValidationEntered,
+ 1 => Self::PaymasterValidationEntered,
+ 2 => Self::NoValidationEntered,
+ 3 => Self::ValidationStepEndeded,
+ 4 => Self::TxHasEnded,
+ 5 => Self::DebugLog,
+ 6 => Self::DebugReturnData,
+ 7 => Self::NearCallCatch,
+ 8 => Self::AskOperatorForRefund,
+ 9 => Self::NotifyAboutRefund,
+ 10 => Self::ExecutionResult,
+ 11 => Self::FinalBatchInfo,
+ 12 => Self::PubdataRequested,
+ _ => panic!("Unknown hook: {}", value.as_u32()),
+ }
+ }
+}
+
+pub(crate) fn get_debug_log<H: HistoryMode>(
+ state: &VmLocalStateData<'_>,
+ memory: &SimpleMemory<H>,
+) -> String {
+ let vm_hook_params: Vec<_> = get_vm_hook_params(memory)
+ .into_iter()
+ .map(u256_to_h256)
+ .collect();
+ let msg = vm_hook_params[0].as_bytes().to_vec();
+ let data = vm_hook_params[1].as_bytes().to_vec();
+
+ let msg = String::from_utf8(msg).expect("Invalid debug message");
+ let data = U256::from_big_endian(&data);
+
+ // For long data, it is better to use hex-encoding for greater readability
+ let data_str = if data > U256::from(u64::max_value()) {
+ let mut bytes = [0u8; 32];
+ data.to_big_endian(&mut bytes);
+ format!("0x{}", hex::encode(bytes))
+ } else {
+ data.to_string()
+ };
+
+ let tx_id = state.vm_local_state.tx_number_in_block;
+
+ format!("Bootloader transaction {}: {} {}", tx_id, msg, data_str)
+}
+
+/// Reads the memory slice represented by the fat pointer.
+/// Note that the fat pointer must point to accessible memory (i.e. memory that has not been cleared up yet).
+pub(crate) fn read_pointer<H: HistoryMode>(
+ memory: &SimpleMemory<H>,
+ pointer: FatPointer,
+) -> Vec<u8> {
+ let FatPointer {
+ offset,
+ length,
+ start,
+ memory_page,
+ } = pointer;
+
+ // The actual bounds of the returndata ptr are [start+offset..start+length]
+ let mem_region_start = start + offset;
+ let mem_region_length = length - offset;
+
+ memory.read_unaligned_bytes(
+ memory_page as usize,
+ mem_region_start as usize,
+ mem_region_length as usize,
+ )
+}
+
+/// Outputs the returndata for the latest call.
+/// This is usually used to output the revert reason.
+pub(crate) fn get_debug_returndata<H: HistoryMode>(memory: &SimpleMemory<H>) -> String {
+ let vm_hook_params: Vec<_> = get_vm_hook_params(memory);
+ let returndata_ptr = FatPointer::from_u256(vm_hook_params[0]);
+ let returndata = read_pointer(memory, returndata_ptr);
+
+ format!("0x{}", hex::encode(returndata))
+}
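+
+// A rough sketch of the hook convention decoded by `VmHook::from_opcode_memory`
+// above (illustrative pseudo-YUL; the helper name `setHook` is assumed, not taken
+// from this patch): the bootloader triggers a hook by writing the hook's numeric
+// id to a fixed slot of its heap, which the tracer observes as a `UMA::HeapWrite`
+// at byte offset `VM_HOOK_POSITION * 32`:
+//
+// function setHook(value) {
+// mstore(VM_HOOK_POSITION * 32, value)
+// }
+// // e.g. `setHook(12)` surfaces on the tracer side as `VmHook::PubdataRequested`.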
+
+/// Accepts a VM hook and, if it requires a debug log to be output, outputs it.
+pub(crate) fn print_debug_if_needed<H: HistoryMode>(
+ hook: &VmHook,
+ state: &VmLocalStateData<'_>,
+ memory: &SimpleMemory<H>,
+) {
+ let log = match hook {
+ VmHook::DebugLog => get_debug_log(state, memory),
+ VmHook::DebugReturnData => get_debug_returndata(memory),
+ _ => return,
+ };
+
+ tracing::trace!("{}", log);
+}
+
+pub(crate) fn computational_gas_price(
+ state: VmLocalStateData<'_>,
+ data: &BeforeExecutionData,
+) -> u32 {
+ // We calculate computational gas used as the raw price for the opcode plus the cost of precompiles.
+ // This calculation is incomplete, as it misses decommitment and memory growth costs.
+ // To calculate the decommitment cost we need access to the decommitter oracle, which is missing in the tracer for now.
+ // Memory growth calculation is complex and would require different logic for different opcodes (`FarCall`, `Ret`, `UMA`).
+ let base_price = data.opcode.inner.variant.ergs_price();
+ let precompile_price = match data.opcode.variant.opcode {
+ Opcode::Log(LogOpcode::PrecompileCall) => {
+ let address = state.vm_local_state.callstack.current.this_address;
+
+ if address == KECCAK256_PRECOMPILE_ADDRESS
+ || address == SHA256_PRECOMPILE_ADDRESS
+ || address == ECRECOVER_PRECOMPILE_ADDRESS
+ {
+ data.src1_value.value.low_u32()
+ } else {
+ 0
+ }
+ }
+ _ => 0,
+ };
+ base_price + precompile_price
+}
+
+pub(crate) fn gas_spent_on_bytecodes_and_long_messages_this_opcode(
+ state: &VmLocalStateData<'_>,
+ data: &BeforeExecutionData,
+) -> u32 {
+ if data.opcode.variant.opcode == Opcode::Log(LogOpcode::PrecompileCall) {
+ let current_stack = state.vm_local_state.callstack.get_current_stack();
+ // Trace for precompile calls from `KNOWN_CODES_STORAGE_ADDRESS` and `L1_MESSENGER_ADDRESS` that burn some gas.
+ // Note that if less gas is left than requested to be burned, it will be burned anyway.
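+ // Illustrative numbers (assumed): if the call requests 5_000 gas to be burned
+ // but only 3_000 ergs remain in the frame, `min(5_000, 3_000) = 3_000` is
+ // recorded here, matching what the VM actually burns.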
+ if current_stack.this_address == KNOWN_CODES_STORAGE_ADDRESS
+ || current_stack.this_address == L1_MESSENGER_ADDRESS
+ {
+ std::cmp::min(data.src1_value.value.as_u32(), current_stack.ergs_remaining)
+ } else {
+ 0
+ }
+ } else {
+ 0
+ }
+}
+
+pub(crate) fn get_calldata_page_via_abi(far_call_abi: &FarCallABI, base_page: MemoryPage) -> u32 {
+ match far_call_abi.forwarding_mode {
+ FarCallForwardPageType::ForwardFatPointer => {
+ far_call_abi.memory_quasi_fat_pointer.memory_page
+ }
+ FarCallForwardPageType::UseAuxHeap => aux_heap_page_from_base(base_page).0,
+ FarCallForwardPageType::UseHeap => heap_page_from_base(base_page).0,
+ }
+}
+pub(crate) fn get_vm_hook_params<H: HistoryMode>(memory: &SimpleMemory<H>) -> Vec<U256> {
+ memory.dump_page_content_as_u256_words(
+ BOOTLOADER_HEAP_PAGE,
+ VM_HOOK_PARAMS_START_POSITION..VM_HOOK_PARAMS_START_POSITION + VM_HOOK_PARAMS_COUNT,
+ )
+}
diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/types/internals/mod.rs b/core/lib/multivm/src/versions/vm_boojum_integration/types/internals/mod.rs
new file mode 100644
index 00000000000..7dc60ec5b0f
--- /dev/null
+++ b/core/lib/multivm/src/versions/vm_boojum_integration/types/internals/mod.rs
@@ -0,0 +1,9 @@
+pub(crate) use pubdata::PubdataInput;
+pub(crate) use snapshot::VmSnapshot;
+pub(crate) use transaction_data::TransactionData;
+pub(crate) use vm_state::new_vm_state;
+pub use vm_state::ZkSyncVmState;
+mod pubdata;
+mod snapshot;
+mod transaction_data;
+mod vm_state;
diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/types/internals/pubdata.rs b/core/lib/multivm/src/versions/vm_boojum_integration/types/internals/pubdata.rs
new file mode 100644
index 00000000000..5451201c5bc
--- /dev/null
+++ b/core/lib/multivm/src/versions/vm_boojum_integration/types/internals/pubdata.rs
@@ -0,0 +1,124 @@
+use zksync_types::{
+ event::L1MessengerL2ToL1Log,
+ writes::{compress_state_diffs, StateDiffRecord},
+};
+
+/// Struct based on which the pubdata blob is formed
+#[derive(Debug, Clone, Default)]
+pub(crate) struct PubdataInput {
+ pub(crate) user_logs: Vec<L1MessengerL2ToL1Log>,
+ pub(crate) l2_to_l1_messages: Vec<Vec<u8>>,
+ pub(crate) published_bytecodes: Vec<Vec<u8>>,
+ pub(crate) state_diffs: Vec<StateDiffRecord>,
+}
+
+impl PubdataInput {
+ pub(crate) fn build_pubdata(self, with_uncompressed_state_diffs: bool) -> Vec<u8> {
+ let mut l1_messenger_pubdata = vec![];
+
+ let PubdataInput {
+ user_logs,
+ l2_to_l1_messages,
+ published_bytecodes,
+ state_diffs,
+ } = self;
+
+ // Encoding user L2->L1 logs.
+ // Format: `[(numberOfL2ToL1Logs as u32) || l2tol1logs[1] || ... || l2tol1logs[n]]`
+ l1_messenger_pubdata.extend((user_logs.len() as u32).to_be_bytes());
+ for l2tol1log in user_logs {
+ l1_messenger_pubdata.extend(l2tol1log.packed_encoding());
+ }
+
+ // Encoding L2->L1 messages
+ // Format: `[(numberOfMessages as u32) || (messages[1].len() as u32) || messages[1] || ... || (messages[n].len() as u32) || messages[n]]`
+ l1_messenger_pubdata.extend((l2_to_l1_messages.len() as u32).to_be_bytes());
+ for message in l2_to_l1_messages {
+ l1_messenger_pubdata.extend((message.len() as u32).to_be_bytes());
+ l1_messenger_pubdata.extend(message);
+ }
+
+ // Encoding bytecodes
+ // Format: `[(numberOfBytecodes as u32) || (bytecodes[1].len() as u32) || bytecodes[1] || ...
|| (bytecodes[n].len() as u32) || bytecodes[n]]`
+ l1_messenger_pubdata.extend((published_bytecodes.len() as u32).to_be_bytes());
+ for bytecode in published_bytecodes {
+ l1_messenger_pubdata.extend((bytecode.len() as u32).to_be_bytes());
+ l1_messenger_pubdata.extend(bytecode);
+ }
+
+ // Encoding state diffs
+ // Format: `[size of compressed state diffs u32 || compressed state diffs || (# state diffs: initial + repeated) as u32 || sorted state diffs by <address, key>]`
+ let state_diffs_compressed = compress_state_diffs(state_diffs.clone());
+ l1_messenger_pubdata.extend(state_diffs_compressed);
+
+ if with_uncompressed_state_diffs {
+ l1_messenger_pubdata.extend((state_diffs.len() as u32).to_be_bytes());
+ for state_diff in state_diffs {
+ l1_messenger_pubdata.extend(state_diff.encode_padded());
+ }
+ }
+
+ l1_messenger_pubdata
+ }
+}
+
+#[cfg(test)]
+mod tests {
+ use zksync_system_constants::{ACCOUNT_CODE_STORAGE_ADDRESS, BOOTLOADER_ADDRESS};
+ use zksync_utils::u256_to_h256;
+
+ use super::*;
+
+ #[test]
+ fn test_basic_pubdata_building() {
+ // Just using some constant addresses for tests
+ let addr1 = BOOTLOADER_ADDRESS;
+ let addr2 = ACCOUNT_CODE_STORAGE_ADDRESS;
+
+ let user_logs = vec![L1MessengerL2ToL1Log {
+ l2_shard_id: 0,
+ is_service: false,
+ tx_number_in_block: 0,
+ sender: addr1,
+ key: 1.into(),
+ value: 128.into(),
+ }];
+
+ let l2_to_l1_messages = vec![hex::decode("deadbeef").unwrap()];
+
+ let published_bytecodes = vec![hex::decode("aaaabbbb").unwrap()];
+
+ // For covering more cases, we have two state diffs:
+ // one with the enumeration index present (i.e. a repeated write) and one without it.
+ let state_diffs = vec![
+ StateDiffRecord {
+ address: addr2,
+ key: 155.into(),
+ derived_key: u256_to_h256(125.into()).0,
+ enumeration_index: 12,
+ initial_value: 11.into(),
+ final_value: 12.into(),
+ },
+ StateDiffRecord {
+ address: addr2,
+ key: 156.into(),
+ derived_key: u256_to_h256(126.into()).0,
+ enumeration_index: 0,
+ initial_value: 0.into(),
+ final_value: 14.into(),
+ },
+ ];
+
+ let input = PubdataInput {
+ user_logs,
+ l2_to_l1_messages,
+ published_bytecodes,
+ state_diffs,
+ };
+
+ let pubdata =
+ ethabi::encode(&[ethabi::Token::Bytes(input.build_pubdata(true))])[32..].to_vec();
+
+ assert_eq!(hex::encode(pubdata),
"00000000000000000000000000000000000000000000000000000000000002c700000001000000000000000000000000000000000000000000008001000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000800000000100000004deadbeef0000000100000004aaaabbbb0100002a040001000000000000000000000000000000000000000000000000000000000000007e090e0000000c0901000000020000000000000000000000000000000000008002000000000000000000000000000000000000000000000000000000000000009b000000000000000000000000000000000000000000000000000000000000007d000000000000000c000000000000000000000000000000000000000000000000000000000000000b000000000000000000000000000000000000000000000000000000000000000c00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000008002000000000000000000000000000000000000000000000000000000000000009c000000000000000000000000000000000000000000000000000000000000007e00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000e000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"); + } +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/types/internals/snapshot.rs b/core/lib/multivm/src/versions/vm_boojum_integration/types/internals/snapshot.rs new file mode 100644 index 00000000000..f1aa1a33359 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/types/internals/snapshot.rs @@ -0,0 +1,11 @@ +use zk_evm_1_4_0::vm_state::VmLocalState; + +use crate::vm_boojum_integration::bootloader_state::BootloaderStateSnapshot; + +/// A snapshot of the VM that holds enough information to +/// rollback the VM to some historical state. +#[derive(Debug, Clone)] +pub(crate) struct VmSnapshot { + pub(crate) local_state: VmLocalState, + pub(crate) bootloader_state: BootloaderStateSnapshot, +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/types/internals/transaction_data.rs b/core/lib/multivm/src/versions/vm_boojum_integration/types/internals/transaction_data.rs new file mode 100644 index 00000000000..81d967029f6 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/types/internals/transaction_data.rs @@ -0,0 +1,358 @@ +use std::convert::TryInto; + +use zksync_types::{ + ethabi::{encode, Address, Token}, + fee::{encoding_len, Fee}, + l1::is_l1_tx_type, + l2::{L2Tx, TransactionType}, + transaction_request::{PaymasterParams, TransactionRequest}, + Bytes, Execute, ExecuteTransactionCommon, L2ChainId, L2TxCommonData, Nonce, Transaction, H256, + U256, +}; +use zksync_utils::{address_to_h256, bytecode::hash_bytecode, bytes_to_be_words, h256_to_u256}; + +use crate::vm_boojum_integration::{ + constants::MAX_GAS_PER_PUBDATA_BYTE, + utils::overhead::{get_amortized_overhead, OverheadCoefficients}, +}; + +/// This structure represents the data that is used by +/// the Bootloader to describe the transaction. 
+#[derive(Debug, Default, Clone)]
+pub(crate) struct TransactionData {
+ pub(crate) tx_type: u8,
+ pub(crate) from: Address,
+ pub(crate) to: Address,
+ pub(crate) gas_limit: U256,
+ pub(crate) pubdata_price_limit: U256,
+ pub(crate) max_fee_per_gas: U256,
+ pub(crate) max_priority_fee_per_gas: U256,
+ pub(crate) paymaster: Address,
+ pub(crate) nonce: U256,
+ pub(crate) value: U256,
+ // The reserved fields that are unique for different types of transactions.
+ // E.g. nonce is currently used in all transactions, but it should not be mandatory
+ // in the long run.
+ pub(crate) reserved: [U256; 4],
+ pub(crate) data: Vec<u8>,
+ pub(crate) signature: Vec<u8>,
+ // The factory deps provided with the transaction.
+ // Note that *only hashes* of these bytecodes are signed by the user
+ // and they are used in the ABI encoding of the struct.
+ // TODO: include this into the tx signature as part of SMA-1010
+ pub(crate) factory_deps: Vec<Vec<u8>>,
+ pub(crate) paymaster_input: Vec<u8>,
+ pub(crate) reserved_dynamic: Vec<u8>,
+ pub(crate) raw_bytes: Option<Vec<u8>>,
+}
+
+impl From<Transaction> for TransactionData {
+ fn from(execute_tx: Transaction) -> Self {
+ match execute_tx.common_data {
+ ExecuteTransactionCommon::L2(common_data) => {
+ let nonce = U256::from_big_endian(&common_data.nonce.to_be_bytes());
+
+ let should_check_chain_id = if matches!(
+ common_data.transaction_type,
+ TransactionType::LegacyTransaction
+ ) && common_data.extract_chain_id().is_some()
+ {
+ U256([1, 0, 0, 0])
+ } else {
+ U256::zero()
+ };
+
+ // Ethereum transactions do not sign the gas per pubdata limit, so for them we need to use
+ // some default value. We use the maximum possible value that is allowed by the bootloader
+ // (i.e. we can not use u64::MAX, because the bootloader requires the gas per pubdata for such
+ // transactions not to be higher than `MAX_GAS_PER_PUBDATA_BYTE`).
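+ // Illustrative outcome of the branch below (constant value assumed):
+ // legacy tx => pubdata_price_limit = MAX_GAS_PER_PUBDATA_BYTE (e.g. 20_000),
+ // EIP-712 tx => pubdata_price_limit = common_data.fee.gas_per_pubdata_limit.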
+ let gas_per_pubdata_limit = if common_data.transaction_type.is_ethereum_type() { + MAX_GAS_PER_PUBDATA_BYTE.into() + } else { + common_data.fee.gas_per_pubdata_limit + }; + + TransactionData { + tx_type: (common_data.transaction_type as u32) as u8, + from: common_data.initiator_address, + to: execute_tx.execute.contract_address, + gas_limit: common_data.fee.gas_limit, + pubdata_price_limit: gas_per_pubdata_limit, + max_fee_per_gas: common_data.fee.max_fee_per_gas, + max_priority_fee_per_gas: common_data.fee.max_priority_fee_per_gas, + paymaster: common_data.paymaster_params.paymaster, + nonce, + value: execute_tx.execute.value, + reserved: [ + should_check_chain_id, + U256::zero(), + U256::zero(), + U256::zero(), + ], + data: execute_tx.execute.calldata, + signature: common_data.signature, + factory_deps: execute_tx.execute.factory_deps.unwrap_or_default(), + paymaster_input: common_data.paymaster_params.paymaster_input, + reserved_dynamic: vec![], + raw_bytes: execute_tx.raw_bytes.map(|a| a.0), + } + } + ExecuteTransactionCommon::L1(common_data) => { + let refund_recipient = h256_to_u256(address_to_h256(&common_data.refund_recipient)); + TransactionData { + tx_type: common_data.tx_format() as u8, + from: common_data.sender, + to: execute_tx.execute.contract_address, + gas_limit: common_data.gas_limit, + pubdata_price_limit: common_data.gas_per_pubdata_limit, + // It doesn't matter what we put here, since + // the bootloader does not charge anything + max_fee_per_gas: common_data.max_fee_per_gas, + max_priority_fee_per_gas: U256::zero(), + paymaster: Address::default(), + nonce: U256::from(common_data.serial_id.0), // priority op ID + value: execute_tx.execute.value, + reserved: [ + common_data.to_mint, + refund_recipient, + U256::zero(), + U256::zero(), + ], + data: execute_tx.execute.calldata, + // The signature isn't checked for L1 transactions so we don't care + signature: vec![], + factory_deps: execute_tx.execute.factory_deps.unwrap_or_default(), + paymaster_input: vec![], + reserved_dynamic: vec![], + raw_bytes: None, + } + } + ExecuteTransactionCommon::ProtocolUpgrade(common_data) => { + let refund_recipient = h256_to_u256(address_to_h256(&common_data.refund_recipient)); + TransactionData { + tx_type: common_data.tx_format() as u8, + from: common_data.sender, + to: execute_tx.execute.contract_address, + gas_limit: common_data.gas_limit, + pubdata_price_limit: common_data.gas_per_pubdata_limit, + // It doesn't matter what we put here, since + // the bootloader does not charge anything + max_fee_per_gas: common_data.max_fee_per_gas, + max_priority_fee_per_gas: U256::zero(), + paymaster: Address::default(), + nonce: U256::from(common_data.upgrade_id as u16), + value: execute_tx.execute.value, + reserved: [ + common_data.to_mint, + refund_recipient, + U256::zero(), + U256::zero(), + ], + data: execute_tx.execute.calldata, + // The signature isn't checked for L1 transactions so we don't care + signature: vec![], + factory_deps: execute_tx.execute.factory_deps.unwrap_or_default(), + paymaster_input: vec![], + reserved_dynamic: vec![], + raw_bytes: None, + } + } + } + } +} + +impl TransactionData { + pub(crate) fn abi_encode_with_custom_factory_deps( + self, + factory_deps_hashes: Vec, + ) -> Vec { + encode(&[Token::Tuple(vec![ + Token::Uint(U256::from_big_endian(&self.tx_type.to_be_bytes())), + Token::Address(self.from), + Token::Address(self.to), + Token::Uint(self.gas_limit), + Token::Uint(self.pubdata_price_limit), + Token::Uint(self.max_fee_per_gas), + 
Token::Uint(self.max_priority_fee_per_gas), + Token::Address(self.paymaster), + Token::Uint(self.nonce), + Token::Uint(self.value), + Token::FixedArray(self.reserved.iter().copied().map(Token::Uint).collect()), + Token::Bytes(self.data), + Token::Bytes(self.signature), + Token::Array(factory_deps_hashes.into_iter().map(Token::Uint).collect()), + Token::Bytes(self.paymaster_input), + Token::Bytes(self.reserved_dynamic), + ])]) + } + + pub(crate) fn abi_encode(self) -> Vec { + let factory_deps_hashes = self + .factory_deps + .iter() + .map(|dep| h256_to_u256(hash_bytecode(dep))) + .collect(); + self.abi_encode_with_custom_factory_deps(factory_deps_hashes) + } + + pub(crate) fn into_tokens(self) -> Vec { + let bytes = self.abi_encode(); + assert!(bytes.len() % 32 == 0); + + bytes_to_be_words(bytes) + } + + pub(crate) fn effective_gas_price_per_pubdata(&self, block_gas_price_per_pubdata: u32) -> u32 { + // It is enforced by the protocol that the L1 transactions always pay the exact amount of gas per pubdata + // as was supplied in the transaction. + if is_l1_tx_type(self.tx_type) { + self.pubdata_price_limit.as_u32() + } else { + block_gas_price_per_pubdata + } + } + + pub(crate) fn overhead_gas(&self, block_gas_price_per_pubdata: u32) -> u32 { + let total_gas_limit = self.gas_limit.as_u32(); + let gas_price_per_pubdata = + self.effective_gas_price_per_pubdata(block_gas_price_per_pubdata); + + let encoded_len = encoding_len( + self.data.len() as u64, + self.signature.len() as u64, + self.factory_deps.len() as u64, + self.paymaster_input.len() as u64, + self.reserved_dynamic.len() as u64, + ); + + let coefficients = OverheadCoefficients::from_tx_type(self.tx_type); + get_amortized_overhead( + total_gas_limit, + gas_price_per_pubdata, + encoded_len, + coefficients, + ) + } + + pub(crate) fn trusted_ergs_limit(&self, _block_gas_price_per_pubdata: u64) -> U256 { + // TODO (EVM-66): correctly calculate the trusted gas limit for a transaction + self.gas_limit + } + + pub(crate) fn tx_hash(&self, chain_id: L2ChainId) -> H256 { + if is_l1_tx_type(self.tx_type) { + return self.canonical_l1_tx_hash().unwrap(); + } + + let l2_tx: L2Tx = self.clone().try_into().unwrap(); + let transaction_request: TransactionRequest = l2_tx.into(); + + // It is assumed that the `TransactionData` always has all the necessary components to recover the hash. 
+ transaction_request
+ .get_tx_hash(chain_id)
+ .expect("Could not recover L2 transaction hash")
+ }
+
+ fn canonical_l1_tx_hash(&self) -> Result<H256, TxHashCalculationError> {
+ use zksync_types::web3::signing::keccak256;
+
+ if !is_l1_tx_type(self.tx_type) {
+ return Err(TxHashCalculationError::CannotCalculateL1HashForL2Tx);
+ }
+
+ let encoded_bytes = self.clone().abi_encode();
+
+ Ok(H256(keccak256(&encoded_bytes)))
+ }
+}
+
+#[derive(Debug, Clone, Copy)]
+pub(crate) enum TxHashCalculationError {
+ CannotCalculateL1HashForL2Tx,
+ CannotCalculateL2HashForL1Tx,
+}
+
+impl TryInto<L2Tx> for TransactionData {
+ type Error = TxHashCalculationError;
+
+ fn try_into(self) -> Result<L2Tx, Self::Error> {
+ if is_l1_tx_type(self.tx_type) {
+ return Err(TxHashCalculationError::CannotCalculateL2HashForL1Tx);
+ }
+
+ let common_data = L2TxCommonData {
+ transaction_type: (self.tx_type as u32).try_into().unwrap(),
+ nonce: Nonce(self.nonce.as_u32()),
+ fee: Fee {
+ max_fee_per_gas: self.max_fee_per_gas,
+ max_priority_fee_per_gas: self.max_priority_fee_per_gas,
+ gas_limit: self.gas_limit,
+ gas_per_pubdata_limit: self.pubdata_price_limit,
+ },
+ signature: self.signature,
+ input: None,
+ initiator_address: self.from,
+ paymaster_params: PaymasterParams {
+ paymaster: self.paymaster,
+ paymaster_input: self.paymaster_input,
+ },
+ };
+ let factory_deps = (!self.factory_deps.is_empty()).then_some(self.factory_deps);
+ let execute = Execute {
+ contract_address: self.to,
+ value: self.value,
+ calldata: self.data,
+ factory_deps,
+ };
+
+ Ok(L2Tx {
+ execute,
+ common_data,
+ received_timestamp_ms: 0,
+ raw_bytes: self.raw_bytes.map(Bytes::from),
+ })
+ }
+}
+
+#[cfg(test)]
+mod tests {
+ use zksync_types::fee::encoding_len;
+
+ use super::*;
+
+ #[test]
+ fn test_consistency_with_encoding_length() {
+ let transaction = TransactionData {
+ tx_type: 113,
+ from: Address::random(),
+ to: Address::random(),
+ gas_limit: U256::from(1u32),
+ pubdata_price_limit: U256::from(1u32),
+ max_fee_per_gas: U256::from(1u32),
+ max_priority_fee_per_gas: U256::from(1u32),
+ paymaster: Address::random(),
+ nonce: U256::zero(),
+ value: U256::zero(),
+ // The reserved fields that are unique for different types of transactions.
+ // E.g. nonce is currently used in all transactions, but it should not be mandatory
+ // in the long run.
+ reserved: [U256::zero(); 4],
+ data: vec![0u8; 65],
+ signature: vec![0u8; 75],
+ // The factory deps provided with the transaction.
+ // Note that *only hashes* of these bytecodes are signed by the user
+ // and they are used in the ABI encoding of the struct.
+ // TODO: include this into the tx signature as part of SMA-1010
+ factory_deps: vec![vec![0u8; 32], vec![1u8; 32]],
+ paymaster_input: vec![0u8; 85],
+ reserved_dynamic: vec![0u8; 32],
+ raw_bytes: None,
+ };
+
+ let assumed_encoded_len = encoding_len(65, 75, 2, 85, 32);
+
+ let true_encoding_len = transaction.into_tokens().len();
+
+ assert_eq!(assumed_encoded_len, true_encoding_len);
+ }
+}
diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/types/internals/vm_state.rs b/core/lib/multivm/src/versions/vm_boojum_integration/types/internals/vm_state.rs
new file mode 100644
index 00000000000..bff8dbf0f56
--- /dev/null
+++ b/core/lib/multivm/src/versions/vm_boojum_integration/types/internals/vm_state.rs
@@ -0,0 +1,183 @@
+use zk_evm_1_4_0::{
+ aux_structures::{MemoryPage, Timestamp},
+ block_properties::BlockProperties,
+ vm_state::{CallStackEntry, PrimitiveValue, VmState},
+ witness_trace::DummyTracer,
+ zkevm_opcode_defs::{
+ system_params::{BOOTLOADER_MAX_MEMORY, INITIAL_FRAME_FORMAL_EH_LOCATION},
+ FatPointer, BOOTLOADER_BASE_PAGE, BOOTLOADER_CALLDATA_PAGE, BOOTLOADER_CODE_PAGE,
+ STARTING_BASE_PAGE, STARTING_TIMESTAMP,
+ },
+};
+use zksync_state::{StoragePtr, WriteStorage};
+use zksync_system_constants::BOOTLOADER_ADDRESS;
+use zksync_types::{
+ block::MiniblockHasher, zkevm_test_harness::INITIAL_MONOTONIC_CYCLE_COUNTER, Address,
+ MiniblockNumber,
+};
+use zksync_utils::h256_to_u256;
+
+use crate::{
+ interface::{L1BatchEnv, L2Block, SystemEnv},
+ vm_boojum_integration::{
+ bootloader_state::BootloaderState,
+ constants::BOOTLOADER_HEAP_PAGE,
+ old_vm::{
+ event_sink::InMemoryEventSink,
+ history_recorder::HistoryMode,
+ memory::SimpleMemory,
+ oracles::{
+ decommitter::DecommitterOracle, precompile::PrecompilesProcessorWithHistory,
+ },
+ },
+ oracles::storage::StorageOracle,
+ types::l1_batch::bootloader_initial_memory,
+ utils::l2_blocks::{assert_next_block, load_last_l2_block},
+ },
+};
+
+pub type ZkSyncVmState<S, H> = VmState<
+ StorageOracle<S, H>,
+ SimpleMemory<H>,
+ InMemoryEventSink<H>,
+ PrecompilesProcessorWithHistory<false, H>,
+ DecommitterOracle<false, S, H>,
+ DummyTracer,
+>;
+
+fn formal_calldata_abi() -> PrimitiveValue {
+ let fat_pointer = FatPointer {
+ offset: 0,
+ memory_page: BOOTLOADER_CALLDATA_PAGE,
+ start: 0,
+ length: 0,
+ };
+
+ PrimitiveValue {
+ value: fat_pointer.to_u256(),
+ is_pointer: true,
+ }
+}
+
+/// Initialize the vm state and all necessary oracles
+pub(crate) fn new_vm_state<S: WriteStorage, H: HistoryMode>(
+ storage: StoragePtr<S>,
+ system_env: &SystemEnv,
+ l1_batch_env: &L1BatchEnv,
+) -> (ZkSyncVmState<S, H>, BootloaderState) {
+ let last_l2_block = if let Some(last_l2_block) = load_last_l2_block(storage.clone()) {
+ last_l2_block
+ } else {
+ // This is the scenario of either the first L2 block ever or
+ // the first block after the upgrade for support of L2 blocks.
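+ // E.g. (illustrative): if the batch opens with miniblock #1 and the storage
+ // is empty, the synthesized previous block is #0 with timestamp 0 and the
+ // legacy hash of miniblock 0, so `assert_next_block` below still passes.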
+ L2Block {
+ number: l1_batch_env.first_l2_block.number.saturating_sub(1),
+ timestamp: 0,
+ hash: MiniblockHasher::legacy_hash(
+ MiniblockNumber(l1_batch_env.first_l2_block.number) - 1,
+ ),
+ }
+ };
+
+ assert_next_block(&last_l2_block, &l1_batch_env.first_l2_block);
+ let first_l2_block = l1_batch_env.first_l2_block;
+ let storage_oracle: StorageOracle<S, H> = StorageOracle::new(storage.clone());
+ let mut memory = SimpleMemory::default();
+ let event_sink = InMemoryEventSink::default();
+ let precompiles_processor = PrecompilesProcessorWithHistory::<false, H>::default();
+ let mut decommittment_processor: DecommitterOracle<false, S, H> =
+ DecommitterOracle::new(storage);
+
+ decommittment_processor.populate(
+ vec![(
+ h256_to_u256(system_env.base_system_smart_contracts.default_aa.hash),
+ system_env
+ .base_system_smart_contracts
+ .default_aa
+ .code
+ .clone(),
+ )],
+ Timestamp(0),
+ );
+
+ memory.populate(
+ vec![(
+ BOOTLOADER_CODE_PAGE,
+ system_env
+ .base_system_smart_contracts
+ .bootloader
+ .code
+ .clone(),
+ )],
+ Timestamp(0),
+ );
+
+ let bootloader_initial_memory = bootloader_initial_memory(l1_batch_env);
+ memory.populate_page(
+ BOOTLOADER_HEAP_PAGE as usize,
+ bootloader_initial_memory.clone(),
+ Timestamp(0),
+ );
+
+ let mut vm = VmState::empty_state(
+ storage_oracle,
+ memory,
+ event_sink,
+ precompiles_processor,
+ decommittment_processor,
+ DummyTracer,
+ BlockProperties {
+ default_aa_code_hash: h256_to_u256(
+ system_env.base_system_smart_contracts.default_aa.hash,
+ ),
+ zkporter_is_available: system_env.zk_porter_available,
+ },
+ );
+
+ vm.local_state.callstack.current.ergs_remaining = system_env.gas_limit;
+
+ let initial_context = CallStackEntry {
+ this_address: BOOTLOADER_ADDRESS,
+ msg_sender: Address::zero(),
+ code_address: BOOTLOADER_ADDRESS,
+ base_memory_page: MemoryPage(BOOTLOADER_BASE_PAGE),
+ code_page: MemoryPage(BOOTLOADER_CODE_PAGE),
+ sp: 0,
+ pc: 0,
+ // Note that since the results are written at the end of the memory,
+ // the entire heap needs to be available from the beginning
+ heap_bound: BOOTLOADER_MAX_MEMORY,
+ aux_heap_bound: BOOTLOADER_MAX_MEMORY,
+ exception_handler_location: INITIAL_FRAME_FORMAL_EH_LOCATION,
+ ergs_remaining: system_env.gas_limit,
+ this_shard_id: 0,
+ caller_shard_id: 0,
+ code_shard_id: 0,
+ is_static: false,
+ is_local_frame: false,
+ context_u128_value: 0,
+ };
+
+ // We consider the contract that is being run as a bootloader
+ vm.push_bootloader_context(INITIAL_MONOTONIC_CYCLE_COUNTER - 1, initial_context);
+ vm.local_state.timestamp = STARTING_TIMESTAMP;
+ vm.local_state.memory_page_counter = STARTING_BASE_PAGE;
+ vm.local_state.monotonic_cycle_counter = INITIAL_MONOTONIC_CYCLE_COUNTER;
+ vm.local_state.current_ergs_per_pubdata_byte = 0;
+ vm.local_state.registers[0] = formal_calldata_abi();
+
+ // Delete all the historical records produced by the initial
+ // initialization of the VM, so that the populated state becomes permanent.
+ vm.decommittment_processor.delete_history(); + vm.event_sink.delete_history(); + vm.storage.delete_history(); + vm.memory.delete_history(); + vm.precompiles_processor.delete_history(); + let bootloader_state = BootloaderState::new( + system_env.execution_mode, + bootloader_initial_memory, + first_l2_block, + ); + + (vm, bootloader_state) +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/types/l1_batch.rs b/core/lib/multivm/src/versions/vm_boojum_integration/types/l1_batch.rs new file mode 100644 index 00000000000..386dc040099 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/types/l1_batch.rs @@ -0,0 +1,43 @@ +use zksync_types::U256; +use zksync_utils::{address_to_u256, h256_to_u256}; + +use crate::{interface::L1BatchEnv, vm_boojum_integration::utils::fee::get_batch_base_fee}; + +const OPERATOR_ADDRESS_SLOT: usize = 0; +const PREV_BLOCK_HASH_SLOT: usize = 1; +const NEW_BLOCK_TIMESTAMP_SLOT: usize = 2; +const NEW_BLOCK_NUMBER_SLOT: usize = 3; +const L1_GAS_PRICE_SLOT: usize = 4; +const FAIR_L2_GAS_PRICE_SLOT: usize = 5; +const EXPECTED_BASE_FEE_SLOT: usize = 6; +const SHOULD_SET_NEW_BLOCK_SLOT: usize = 7; + +/// Returns the initial memory for the bootloader based on the current batch environment. +pub(crate) fn bootloader_initial_memory(l1_batch: &L1BatchEnv) -> Vec<(usize, U256)> { + let (prev_block_hash, should_set_new_block) = l1_batch + .previous_batch_hash + .map(|prev_block_hash| (h256_to_u256(prev_block_hash), U256::one())) + .unwrap_or_default(); + + let fee_input = l1_batch.fee_input.into_l1_pegged(); + + vec![ + ( + OPERATOR_ADDRESS_SLOT, + address_to_u256(&l1_batch.fee_account), + ), + (PREV_BLOCK_HASH_SLOT, prev_block_hash), + (NEW_BLOCK_TIMESTAMP_SLOT, U256::from(l1_batch.timestamp)), + (NEW_BLOCK_NUMBER_SLOT, U256::from(l1_batch.number.0)), + (L1_GAS_PRICE_SLOT, U256::from(fee_input.l1_gas_price)), + ( + FAIR_L2_GAS_PRICE_SLOT, + U256::from(fee_input.fair_l2_gas_price), + ), + ( + EXPECTED_BASE_FEE_SLOT, + U256::from(get_batch_base_fee(l1_batch)), + ), + (SHOULD_SET_NEW_BLOCK_SLOT, should_set_new_block), + ] +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/types/mod.rs b/core/lib/multivm/src/versions/vm_boojum_integration/types/mod.rs new file mode 100644 index 00000000000..a12005734ab --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/types/mod.rs @@ -0,0 +1,2 @@ +pub(crate) mod internals; +mod l1_batch; diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/utils/fee.rs b/core/lib/multivm/src/versions/vm_boojum_integration/utils/fee.rs new file mode 100644 index 00000000000..55c7d089459 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/utils/fee.rs @@ -0,0 +1,53 @@ +//! Utility functions for vm +use zksync_types::fee_model::L1PeggedBatchFeeModelInput; +use zksync_utils::ceil_div; + +use crate::{ + vm_boojum_integration::{ + constants::MAX_GAS_PER_PUBDATA_BYTE, old_vm::utils::eth_price_per_pubdata_byte, + }, + vm_latest::L1BatchEnv, +}; + +/// Calculates the amount of gas required to publish one byte of pubdata +pub fn base_fee_to_gas_per_pubdata(l1_gas_price: u64, base_fee: u64) -> u64 { + let eth_price_per_pubdata_byte = eth_price_per_pubdata_byte(l1_gas_price); + + ceil_div(eth_price_per_pubdata_byte, base_fee) +} + +/// Calculates the base fee and gas per pubdata for the given L1 gas price. 
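+// A rough worked example for the derivation below (all numbers assumed, using
+// the legacy constant of 17 L1 gas per pubdata byte behind `eth_price_per_pubdata_byte`
+// and a `MAX_GAS_PER_PUBDATA_BYTE` of 20_000):
+//
+// l1_gas_price = 1 gwei => eth_price_per_pubdata_byte = 17 gwei;
+// fair_l2_gas_price = 0.25 gwei;
+// ceil(17 gwei / 20_000) = 850_000 wei < 0.25 gwei, so base_fee = 0.25 gwei
+// and gas_per_pubdata = ceil(17 gwei / 0.25 gwei) = 68.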
+pub(crate) fn derive_base_fee_and_gas_per_pubdata( + fee_input: L1PeggedBatchFeeModelInput, +) -> (u64, u64) { + let L1PeggedBatchFeeModelInput { + l1_gas_price, + fair_l2_gas_price, + } = fee_input; + let eth_price_per_pubdata_byte = eth_price_per_pubdata_byte(l1_gas_price); + + // The `baseFee` is set in such a way that it is always possible for a transaction to + // publish enough public data while compensating us for it. + let base_fee = std::cmp::max( + fair_l2_gas_price, + ceil_div(eth_price_per_pubdata_byte, MAX_GAS_PER_PUBDATA_BYTE), + ); + + ( + base_fee, + base_fee_to_gas_per_pubdata(l1_gas_price, base_fee), + ) +} + +pub(crate) fn get_batch_base_fee(l1_batch_env: &L1BatchEnv) -> u64 { + if let Some(base_fee) = l1_batch_env.enforced_base_fee { + return base_fee; + } + let (base_fee, _) = + derive_base_fee_and_gas_per_pubdata(l1_batch_env.fee_input.into_l1_pegged()); + base_fee +} + +pub(crate) fn get_batch_gas_per_pubdata(l1_batch_env: &L1BatchEnv) -> u64 { + derive_base_fee_and_gas_per_pubdata(l1_batch_env.fee_input.into_l1_pegged()).1 +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/utils/l2_blocks.rs b/core/lib/multivm/src/versions/vm_boojum_integration/utils/l2_blocks.rs new file mode 100644 index 00000000000..e5832f7f587 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/utils/l2_blocks.rs @@ -0,0 +1,95 @@ +use zksync_state::{ReadStorage, StoragePtr}; +use zksync_system_constants::{ + SYSTEM_CONTEXT_ADDRESS, SYSTEM_CONTEXT_CURRENT_L2_BLOCK_HASHES_POSITION, + SYSTEM_CONTEXT_CURRENT_L2_BLOCK_INFO_POSITION, SYSTEM_CONTEXT_CURRENT_TX_ROLLING_HASH_POSITION, + SYSTEM_CONTEXT_STORED_L2_BLOCK_HASHES, +}; +use zksync_types::{ + block::unpack_block_info, web3::signing::keccak256, AccountTreeId, MiniblockNumber, StorageKey, + H256, U256, +}; +use zksync_utils::{h256_to_u256, u256_to_h256}; + +use crate::interface::{L2Block, L2BlockEnv}; + +pub(crate) fn get_l2_block_hash_key(block_number: u32) -> StorageKey { + let position = h256_to_u256(SYSTEM_CONTEXT_CURRENT_L2_BLOCK_HASHES_POSITION) + + U256::from(block_number % SYSTEM_CONTEXT_STORED_L2_BLOCK_HASHES); + StorageKey::new( + AccountTreeId::new(SYSTEM_CONTEXT_ADDRESS), + u256_to_h256(position), + ) +} + +pub(crate) fn assert_next_block(prev_block: &L2Block, next_block: &L2BlockEnv) { + if prev_block.number == 0 { + // Special case for the first block it can have the same timestamp as the previous block. + assert!(prev_block.timestamp <= next_block.timestamp); + } else { + assert_eq!(prev_block.number + 1, next_block.number); + assert!(prev_block.timestamp < next_block.timestamp); + } + assert_eq!(prev_block.hash, next_block.prev_block_hash); +} + +/// Returns the hash of the l2_block. +/// `txs_rolling_hash` of the l2_block is calculated the following way: +/// If the l2_block has 0 transactions, then `txs_rolling_hash` is equal to `H256::zero()`. +/// If the l2_block has i transactions, then `txs_rolling_hash` is equal to `H(H_{i-1}, H(tx_i))`, where +/// `H_{i-1}` is the `txs_rolling_hash` of the first i-1 transactions. 
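+// Illustrative recursion (assuming `H` is `keccak256` over the concatenation):
+// for a block with two transactions whose hashes are t1 and t2,
+// H_0 = H256::zero(), H_1 = H(H_0, t1), H_2 = H(H_1, t2) = `txs_rolling_hash`.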
+pub(crate) fn l2_block_hash(
+ l2_block_number: MiniblockNumber,
+ l2_block_timestamp: u64,
+ prev_l2_block_hash: H256,
+ txs_rolling_hash: H256,
+) -> H256 {
+ let mut digest: [u8; 128] = [0u8; 128];
+ U256::from(l2_block_number.0).to_big_endian(&mut digest[0..32]);
+ U256::from(l2_block_timestamp).to_big_endian(&mut digest[32..64]);
+ digest[64..96].copy_from_slice(prev_l2_block_hash.as_bytes());
+ digest[96..128].copy_from_slice(txs_rolling_hash.as_bytes());
+
+ H256(keccak256(&digest))
+}
+
+/// Get the last saved block from storage
+pub fn load_last_l2_block<S: ReadStorage>(storage: StoragePtr<S>) -> Option<L2Block> {
+ // Get the block number and timestamp
+ let current_l2_block_info_key = StorageKey::new(
+ AccountTreeId::new(SYSTEM_CONTEXT_ADDRESS),
+ SYSTEM_CONTEXT_CURRENT_L2_BLOCK_INFO_POSITION,
+ );
+ let mut storage_ptr = storage.borrow_mut();
+ let current_l2_block_info = storage_ptr.read_value(&current_l2_block_info_key);
+ let (block_number, block_timestamp) = unpack_block_info(h256_to_u256(current_l2_block_info));
+ let block_number = block_number as u32;
+ if block_number == 0 {
+ // The block does not exist yet
+ return None;
+ }
+
+ // Get the previous block hash
+ let position = get_l2_block_hash_key(block_number - 1);
+ let prev_block_hash = storage_ptr.read_value(&position);
+
+ // Get the current tx rolling hash
+ let position = StorageKey::new(
+ AccountTreeId::new(SYSTEM_CONTEXT_ADDRESS),
+ SYSTEM_CONTEXT_CURRENT_TX_ROLLING_HASH_POSITION,
+ );
+ let current_tx_rolling_hash = storage_ptr.read_value(&position);
+
+ // Calculate the current hash
+ let current_block_hash = l2_block_hash(
+ MiniblockNumber(block_number),
+ block_timestamp,
+ prev_block_hash,
+ current_tx_rolling_hash,
+ );
+
+ Some(L2Block {
+ number: block_number,
+ timestamp: block_timestamp,
+ hash: current_block_hash,
+ })
+}
diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/utils/logs.rs b/core/lib/multivm/src/versions/vm_boojum_integration/utils/logs.rs
new file mode 100644
index 00000000000..0461b4a8887
--- /dev/null
+++ b/core/lib/multivm/src/versions/vm_boojum_integration/utils/logs.rs
@@ -0,0 +1,25 @@
+use zksync_state::WriteStorage;
+use zksync_types::{l2_to_l1_log::L2ToL1Log, Timestamp, VmEvent};
+
+use crate::{
+ interface::L1BatchEnv,
+ vm_boojum_integration::{
+ old_vm::{events::merge_events, history_recorder::HistoryMode},
+ types::internals::ZkSyncVmState,
+ },
+};
+
+pub(crate) fn collect_events_and_l1_system_logs_after_timestamp<S: WriteStorage, H: HistoryMode>(
+ vm_state: &ZkSyncVmState<S, H>,
+ batch_env: &L1BatchEnv,
+ from_timestamp: Timestamp,
+) -> (Vec<VmEvent>, Vec<L2ToL1Log>) {
+ let (raw_events, l1_messages) = vm_state
+ .event_sink
+ .get_events_and_l2_l1_logs_after_timestamp(from_timestamp);
+ let events = merge_events(raw_events)
+ .into_iter()
+ .map(|e| e.into_vm_event(batch_env.number))
+ .collect();
+ (events, l1_messages.into_iter().map(Into::into).collect())
+}
diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/utils/mod.rs b/core/lib/multivm/src/versions/vm_boojum_integration/utils/mod.rs
new file mode 100644
index 00000000000..0fb803de5d4
--- /dev/null
+++ b/core/lib/multivm/src/versions/vm_boojum_integration/utils/mod.rs
@@ -0,0 +1,6 @@
+/// Utility functions for the VM.
+pub mod fee; +pub mod l2_blocks; +pub(crate) mod logs; +pub mod overhead; +pub mod transaction_encoding; diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/utils/overhead.rs b/core/lib/multivm/src/versions/vm_boojum_integration/utils/overhead.rs new file mode 100644 index 00000000000..750d18940f5 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/utils/overhead.rs @@ -0,0 +1,351 @@ +use zk_evm_1_4_0::zkevm_opcode_defs::system_params::MAX_TX_ERGS_LIMIT; +use zksync_system_constants::MAX_L2_TX_GAS_LIMIT; +use zksync_types::{l1::is_l1_tx_type, U256}; +use zksync_utils::ceil_div_u256; + +use crate::vm_boojum_integration::constants::{ + BLOCK_OVERHEAD_GAS, BLOCK_OVERHEAD_PUBDATA, BOOTLOADER_TX_ENCODING_SPACE, MAX_TXS_IN_BLOCK, +}; + +/// Derives the overhead for processing transactions in a block. +pub(crate) fn derive_overhead( + gas_limit: u32, + gas_price_per_pubdata: u32, + encoded_len: usize, + coefficients: OverheadCoefficients, +) -> u32 { + // Even if the gas limit is greater than the `MAX_TX_ERGS_LIMIT`, we assume that everything beyond `MAX_TX_ERGS_LIMIT` + // will be spent entirely on publishing bytecodes and so we derive the overhead solely based on the capped value + let gas_limit = std::cmp::min(MAX_TX_ERGS_LIMIT, gas_limit); + + // Using large `U256` type to avoid overflow + let max_block_overhead = U256::from(block_overhead_gas(gas_price_per_pubdata)); + let gas_limit = U256::from(gas_limit); + let encoded_len = U256::from(encoded_len); + + // The `MAX_TX_ERGS_LIMIT` is formed in a way that may fulfills a single-instance circuits + // if used in full. That is, within `MAX_TX_ERGS_LIMIT` it is possible to fully saturate all the single-instance + // circuits. + let overhead_for_single_instance_circuits = + ceil_div_u256(gas_limit * max_block_overhead, MAX_TX_ERGS_LIMIT.into()); + + // The overhead for occupying the bootloader memory + let overhead_for_length = ceil_div_u256( + encoded_len * max_block_overhead, + BOOTLOADER_TX_ENCODING_SPACE.into(), + ); + + // The overhead for occupying a single tx slot + let tx_slot_overhead = ceil_div_u256(max_block_overhead, MAX_TXS_IN_BLOCK.into()); + + // We use `ceil` here for formal reasons to allow easier approach for calculating the overhead in `O(1)` + // `let max_pubdata_in_tx = ceil_div_u256(gas_limit, gas_price_per_pubdata);` + + // The maximal potential overhead from pubdata + // TODO (EVM-67): possibly use overhead for pubdata + // ``` + // let pubdata_overhead = ceil_div_u256( + // max_pubdata_in_tx * max_block_overhead, + // MAX_PUBDATA_PER_BLOCK.into(), + // ); + //``` + + vec![ + (coefficients.ergs_limit_overhead_coeficient + * overhead_for_single_instance_circuits.as_u32() as f64) + .floor() as u32, + (coefficients.bootloader_memory_overhead_coeficient * overhead_for_length.as_u32() as f64) + .floor() as u32, + (coefficients.slot_overhead_coeficient * tx_slot_overhead.as_u32() as f64) as u32, + ] + .into_iter() + .max() + .unwrap() +} + +/// Contains the coefficients with which the overhead for transactions will be calculated. +/// All of the coefficients should be <= 1. There are here to provide a certain "discount" for normal transactions +/// at the risk of malicious transactions that may close the block prematurely. 
+/// IMPORTANT: to perform correct computations, `MAX_TX_ERGS_LIMIT / coefficients.ergs_limit_overhead_coeficient` MUST +/// result in an integer number +#[derive(Debug, Clone, Copy)] +pub struct OverheadCoefficients { + slot_overhead_coeficient: f64, + bootloader_memory_overhead_coeficient: f64, + ergs_limit_overhead_coeficient: f64, +} + +impl OverheadCoefficients { + // This method ensures that the parameters keep the required invariants + fn new_checked( + slot_overhead_coeficient: f64, + bootloader_memory_overhead_coeficient: f64, + ergs_limit_overhead_coeficient: f64, + ) -> Self { + assert!( + (MAX_TX_ERGS_LIMIT as f64 / ergs_limit_overhead_coeficient).round() + == MAX_TX_ERGS_LIMIT as f64 / ergs_limit_overhead_coeficient, + "MAX_TX_ERGS_LIMIT / ergs_limit_overhead_coeficient must be an integer" + ); + + Self { + slot_overhead_coeficient, + bootloader_memory_overhead_coeficient, + ergs_limit_overhead_coeficient, + } + } + + // L1->L2 do not receive any discounts + fn new_l1() -> Self { + OverheadCoefficients::new_checked(1.0, 1.0, 1.0) + } + + fn new_l2() -> Self { + OverheadCoefficients::new_checked( + 1.0, 1.0, + // For L2 transactions we allow a certain default discount with regard to the number of ergs. + // Multi-instance circuits can in theory be spawned infinite times, while projected future limitations + // on gas per pubdata allow for roughly 800k gas per L1 batch, so the rough trust "discount" on the proof's part + // to be paid by the users is 0.1. + 0.1, + ) + } + + /// Return the coefficients for the given transaction type + pub fn from_tx_type(tx_type: u8) -> Self { + if is_l1_tx_type(tx_type) { + Self::new_l1() + } else { + Self::new_l2() + } + } +} + +/// This method returns the overhead for processing the block +pub(crate) fn get_amortized_overhead( + total_gas_limit: u32, + gas_per_pubdata_byte_limit: u32, + encoded_len: usize, + coefficients: OverheadCoefficients, +) -> u32 { + // Using large U256 type to prevent overflows. + let overhead_for_block_gas = U256::from(block_overhead_gas(gas_per_pubdata_byte_limit)); + let total_gas_limit = U256::from(total_gas_limit); + let encoded_len = U256::from(encoded_len); + + // Derivation of overhead consists of 4 parts: + // 1. The overhead for taking up a transaction's slot. `(O1): O1 = 1 / MAX_TXS_IN_BLOCK` + // 2. The overhead for taking up the bootloader's memory `(O2): O2 = encoded_len / BOOTLOADER_TX_ENCODING_SPACE` + // 3. The overhead for possible usage of pubdata. `(O3): O3 = max_pubdata_in_tx / MAX_PUBDATA_PER_BLOCK` + // 4. The overhead for possible usage of all the single-instance circuits. `(O4): O4 = gas_limit / MAX_TX_ERGS_LIMIT` + // + // The maximum of these is taken to derive the part of the block's overhead to be paid by the users: + // + // `max_overhead = max(O1, O2, O3, O4)` + // `overhead_gas = ceil(max_overhead * overhead_for_block_gas)`. Thus, `overhead_gas` is a function of + // `tx_gas_limit`, `gas_per_pubdata_byte_limit` and `encoded_len`. + // + // While it is possible to derive the overhead with binary search in `O(log n)`, it is too expensive to be done + // on L1, so here is a reference implementation of finding the overhead for transaction in `O(1)`: + // + // Given `total_gas_limit = tx_gas_limit + overhead_gas`, we need to find `overhead_gas` and `tx_gas_limit`, such that: + // 1. `overhead_gas` is maximal possible (the operator is paid fairly) + // 2. 
`overhead_gas(tx_gas_limit, gas_per_pubdata_byte_limit, encoded_len) >= overhead_gas` (the user does not overpay) + // The third part boils to the following 4 inequalities (at least one of these must hold): + // `ceil(O1 * overhead_for_block_gas) >= overhead_gas` + // `ceil(O2 * overhead_for_block_gas) >= overhead_gas` + // `ceil(O3 * overhead_for_block_gas) >= overhead_gas` + // `ceil(O4 * overhead_for_block_gas) >= overhead_gas` + // + // Now, we need to solve each of these separately: + + // 1. The overhead for occupying a single tx slot is a constant: + let tx_slot_overhead = { + let tx_slot_overhead = + ceil_div_u256(overhead_for_block_gas, MAX_TXS_IN_BLOCK.into()).as_u32(); + (coefficients.slot_overhead_coeficient * tx_slot_overhead as f64).floor() as u32 + }; + + // 2. The overhead for occupying the bootloader memory can be derived from `encoded_len` + let overhead_for_length = { + let overhead_for_length = ceil_div_u256( + encoded_len * overhead_for_block_gas, + BOOTLOADER_TX_ENCODING_SPACE.into(), + ) + .as_u32(); + + (coefficients.bootloader_memory_overhead_coeficient * overhead_for_length as f64).floor() + as u32 + }; + //``` + // TODO (EVM-67): possibly include the overhead for pubdata. The formula below has not been properly maintained, + // since the pubdat is not published. If decided to use the pubdata overhead, it needs to be updated. + // 3. ceil(O3 * overhead_for_block_gas) >= overhead_gas + // O3 = max_pubdata_in_tx / MAX_PUBDATA_PER_BLOCK = ceil(gas_limit / gas_per_pubdata_byte_limit) / MAX_PUBDATA_PER_BLOCK + // >= (gas_limit / (gas_per_pubdata_byte_limit * MAX_PUBDATA_PER_BLOCK). Throwing off the `ceil`, while may provide marginally lower + // overhead to the operator, provides substantially easier formula to work with. + // + // For better clarity, let's denote gas_limit = GL, MAX_PUBDATA_PER_BLOCK = MP, gas_per_pubdata_byte_limit = EP, overhead_for_block_gas = OB, total_gas_limit = TL, overhead_gas = OE + // ceil(OB * (TL - OE) / (EP * MP)) >= OE + // + // OB * (TL - OE) / (MP * EP) > OE - 1 + // OB * (TL - OE) > (OE - 1) * EP * MP + // OB * TL + EP * MP > OE * EP * MP + OE * OB + // (OB * TL + EP * MP) / (EP * MP + OB) > OE + // OE = floor((OB * TL + EP * MP) / (EP * MP + OB)) with possible -1 if the division is without remainder + // let overhead_for_pubdata = { + // let numerator: U256 = overhead_for_block_gas * total_gas_limit + // + gas_per_pubdata_byte_limit * U256::from(MAX_PUBDATA_PER_BLOCK); + // let denominator = + // gas_per_pubdata_byte_limit * U256::from(MAX_PUBDATA_PER_BLOCK) + overhead_for_block_gas; + // + // // Corner case: if `total_gas_limit` = `gas_per_pubdata_byte_limit` = 0 + // // then the numerator will be 0 and subtracting 1 will cause a panic, so we just return a zero. + // if numerator.is_zero() { + // 0.into() + // } else { + // (numerator - 1) / denominator + // } + // }; + //``` + // 4. `K * ceil(O4 * overhead_for_block_gas) >= overhead_gas`, where K is the discount + // `O4 = gas_limit / MAX_TX_ERGS_LIMIT`. 
Using the notation from the previous equation: + // `ceil(OB * GL / MAX_TX_ERGS_LIMIT) >= (OE / K)` + // `ceil(OB * (TL - OE) / MAX_TX_ERGS_LIMIT) >= (OE/K)` + // `OB * (TL - OE) / MAX_TX_ERGS_LIMIT > (OE/K) - 1` + // `OB * (TL - OE) > (OE/K) * MAX_TX_ERGS_LIMIT - MAX_TX_ERGS_LIMIT` + // `OB * TL + MAX_TX_ERGS_LIMIT > OE * ( MAX_TX_ERGS_LIMIT/K + OB)` + // `OE = floor(OB * TL + MAX_TX_ERGS_LIMIT / (MAX_TX_ERGS_LIMIT/K + OB))`, with possible -1 if the division is without remainder + let overhead_for_gas = { + let numerator = overhead_for_block_gas * total_gas_limit + U256::from(MAX_TX_ERGS_LIMIT); + let denominator: U256 = U256::from( + (MAX_TX_ERGS_LIMIT as f64 / coefficients.ergs_limit_overhead_coeficient) as u64, + ) + overhead_for_block_gas; + + let overhead_for_gas = (numerator - 1) / denominator; + + overhead_for_gas.as_u32() + }; + + let overhead = vec![tx_slot_overhead, overhead_for_length, overhead_for_gas] + .into_iter() + .max() + // For the sake of consistency making sure that total_gas_limit >= max_overhead + .map(|max_overhead| std::cmp::min(max_overhead, total_gas_limit.as_u32())) + .unwrap(); + + let limit_after_deducting_overhead = total_gas_limit - overhead; + + // During double checking of the overhead, the bootloader will assume that the + // body of the transaction does not have any more than MAX_L2_TX_GAS_LIMIT ergs available to it. + if limit_after_deducting_overhead.as_u64() > MAX_L2_TX_GAS_LIMIT { + // We derive the same overhead that would exist for the MAX_L2_TX_GAS_LIMIT ergs + derive_overhead( + MAX_L2_TX_GAS_LIMIT as u32, + gas_per_pubdata_byte_limit, + encoded_len.as_usize(), + coefficients, + ) + } else { + overhead + } +} + +pub(crate) fn block_overhead_gas(gas_per_pubdata_byte: u32) -> u32 { + BLOCK_OVERHEAD_GAS + BLOCK_OVERHEAD_PUBDATA * gas_per_pubdata_byte +} + +#[cfg(test)] +mod tests { + + use super::*; + + // This method returns the maximum block overhead that can be charged from the user based on the binary search approach + pub(crate) fn get_maximal_allowed_overhead_bin_search( + total_gas_limit: u32, + gas_per_pubdata_byte_limit: u32, + encoded_len: usize, + coefficients: OverheadCoefficients, + ) -> u32 { + let mut left_bound = if MAX_TX_ERGS_LIMIT < total_gas_limit { + total_gas_limit - MAX_TX_ERGS_LIMIT + } else { + 0u32 + }; + // Safe cast: the gas_limit for a transaction can not be larger than 2^32 + let mut right_bound = total_gas_limit; + + // The closure returns whether a certain overhead would be accepted by the bootloader. + // It is accepted if the derived overhead (i.e. the actual overhead that the user has to pay) + // is >= than the overhead proposed by the operator. 
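+ // Intuition (assumed reasoning): increasing `suggested_overhead` shrinks the
+ // remaining tx gas limit and hence the derived overhead, so the predicate is
+ // monotone: accepted up to some threshold and rejected afterwards, which is
+ // exactly what makes the binary search below valid.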
+ let is_overhead_accepted = |suggested_overhead: u32| { + let derived_overhead = derive_overhead( + total_gas_limit - suggested_overhead, + gas_per_pubdata_byte_limit, + encoded_len, + coefficients, + ); + + derived_overhead >= suggested_overhead + }; + + // In order to find the maximal allowed overhead we are doing binary search + while left_bound + 1 < right_bound { + let mid = (left_bound + right_bound) / 2; + + if is_overhead_accepted(mid) { + left_bound = mid; + } else { + right_bound = mid; + } + } + + if is_overhead_accepted(right_bound) { + right_bound + } else { + left_bound + } + } + + #[test] + fn test_correctness_for_efficient_overhead() { + let test_params = |total_gas_limit: u32, + gas_per_pubdata: u32, + encoded_len: usize, + coefficients: OverheadCoefficients| { + let result_by_efficient_search = + get_amortized_overhead(total_gas_limit, gas_per_pubdata, encoded_len, coefficients); + + let result_by_binary_search = get_maximal_allowed_overhead_bin_search( + total_gas_limit, + gas_per_pubdata, + encoded_len, + coefficients, + ); + + assert_eq!(result_by_efficient_search, result_by_binary_search); + }; + + // Some arbitrary test + test_params(60_000_000, 800, 2900, OverheadCoefficients::new_l2()); + + // Very small parameters + test_params(0, 1, 12, OverheadCoefficients::new_l2()); + + // Relatively big parameters + let max_tx_overhead = derive_overhead( + MAX_TX_ERGS_LIMIT, + 5000, + 10000, + OverheadCoefficients::new_l2(), + ); + test_params( + MAX_TX_ERGS_LIMIT + max_tx_overhead, + 5000, + 10000, + OverheadCoefficients::new_l2(), + ); + + test_params(115432560, 800, 2900, OverheadCoefficients::new_l1()); + } +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/utils/transaction_encoding.rs b/core/lib/multivm/src/versions/vm_boojum_integration/utils/transaction_encoding.rs new file mode 100644 index 00000000000..0a447ac31db --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/utils/transaction_encoding.rs @@ -0,0 +1,16 @@ +use zksync_types::Transaction; + +use crate::vm_boojum_integration::types::internals::TransactionData; + +/// Extension for transactions, specific for VM. Required for bypassing the orphan rule +pub trait TransactionVmExt { + /// Get the size of the transaction in tokens. 
+ fn bootloader_encoding_size(&self) -> usize; +} + +impl TransactionVmExt for Transaction { + fn bootloader_encoding_size(&self) -> usize { + let transaction_data: TransactionData = self.clone().into(); + transaction_data.into_tokens().len() + } +} diff --git a/core/lib/multivm/src/versions/vm_boojum_integration/vm.rs b/core/lib/multivm/src/versions/vm_boojum_integration/vm.rs new file mode 100644 index 00000000000..425833bd910 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_boojum_integration/vm.rs @@ -0,0 +1,190 @@ +use zksync_state::{StoragePtr, WriteStorage}; +use zksync_types::{ + event::extract_l2tol1logs_from_l1_messenger, + l2_to_l1_log::{SystemL2ToL1Log, UserL2ToL1Log}, + Transaction, +}; +use zksync_utils::bytecode::CompressedBytecodeInfo; + +use crate::{ + interface::{ + BootloaderMemory, BytecodeCompressionError, CurrentExecutionState, FinishedL1Batch, + L1BatchEnv, L2BlockEnv, SystemEnv, VmExecutionMode, VmExecutionResultAndLogs, VmInterface, + VmInterfaceHistoryEnabled, VmMemoryMetrics, + }, + vm_boojum_integration::{ + bootloader_state::BootloaderState, + old_vm::events::merge_events, + tracers::dispatcher::TracerDispatcher, + types::internals::{new_vm_state, VmSnapshot, ZkSyncVmState}, + }, + HistoryMode, +}; + +/// Main entry point for Virtual Machine integration. +/// The instance should process only one l1 batch +#[derive(Debug)] +pub struct Vm { + pub(crate) bootloader_state: BootloaderState, + // Current state and oracles of virtual machine + pub(crate) state: ZkSyncVmState, + pub(crate) storage: StoragePtr, + pub(crate) system_env: SystemEnv, + pub(crate) batch_env: L1BatchEnv, + // Snapshots for the current run + pub(crate) snapshots: Vec, + _phantom: std::marker::PhantomData, +} + +impl VmInterface for Vm { + type TracerDispatcher = TracerDispatcher; + + fn new(batch_env: L1BatchEnv, system_env: SystemEnv, storage: StoragePtr) -> Self { + let (state, bootloader_state) = new_vm_state(storage.clone(), &system_env, &batch_env); + Self { + bootloader_state, + state, + storage, + system_env, + batch_env, + snapshots: vec![], + _phantom: Default::default(), + } + } + + /// Push tx into memory for the future execution + fn push_transaction(&mut self, tx: Transaction) { + self.push_transaction_with_compression(tx, true); + } + + /// Execute VM with custom tracers. + fn inspect( + &mut self, + tracer: Self::TracerDispatcher, + execution_mode: VmExecutionMode, + ) -> VmExecutionResultAndLogs { + self.inspect_inner(tracer, execution_mode) + } + + /// Get current state of bootloader memory. + fn get_bootloader_memory(&self) -> BootloaderMemory { + self.bootloader_state.bootloader_memory() + } + + /// Get compressed bytecodes of the last executed transaction + fn get_last_tx_compressed_bytecodes(&self) -> Vec { + self.bootloader_state.get_last_tx_compressed_bytecodes() + } + + fn start_new_l2_block(&mut self, l2_block_env: L2BlockEnv) { + self.bootloader_state.start_new_l2_block(l2_block_env); + } + + /// Get current state of virtual machine. + /// This method should be used only after the batch execution. + /// Otherwise it can panic. 
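+    ///
+    /// A minimal sketch of the intended call order (hypothetical driver code; `tx`,
+    /// `batch_env`, `system_env` and `storage` are assumed to be prepared by the caller):
+    ///
+    ///     let mut vm = Vm::new(batch_env, system_env, storage);
+    ///     vm.push_transaction(tx);
+    ///     vm.execute(VmExecutionMode::OneTx);
+    ///     let finished = vm.finish_batch(); // calls `get_current_execution_state` internally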
+ fn get_current_execution_state(&self) -> CurrentExecutionState { + let (deduplicated_events_logs, raw_events, l1_messages) = self.state.event_sink.flatten(); + let events: Vec<_> = merge_events(raw_events) + .into_iter() + .map(|e| e.into_vm_event(self.batch_env.number)) + .collect(); + + let user_l2_to_l1_logs = extract_l2tol1logs_from_l1_messenger(&events); + let system_logs = l1_messages + .into_iter() + .map(|log| SystemL2ToL1Log(log.into())) + .collect(); + let total_log_queries = self.state.event_sink.get_log_queries() + + self + .state + .precompiles_processor + .get_timestamp_history() + .len() + + self.state.storage.get_final_log_queries().len(); + + CurrentExecutionState { + events, + storage_log_queries: self.state.storage.get_final_log_queries(), + used_contract_hashes: self.get_used_contracts(), + user_l2_to_l1_logs: user_l2_to_l1_logs + .into_iter() + .map(|log| UserL2ToL1Log(log.into())) + .collect(), + system_logs, + total_log_queries, + cycles_used: self.state.local_state.monotonic_cycle_counter, + deduplicated_events_logs, + storage_refunds: self.state.storage.returned_refunds.inner().clone(), + } + } + + /// Execute transaction with optional bytecode compression. + + /// Inspect transaction with optional bytecode compression. + fn inspect_transaction_with_bytecode_compression( + &mut self, + tracer: Self::TracerDispatcher, + tx: Transaction, + with_compression: bool, + ) -> ( + Result<(), BytecodeCompressionError>, + VmExecutionResultAndLogs, + ) { + self.push_transaction_with_compression(tx, with_compression); + let result = self.inspect_inner(tracer, VmExecutionMode::OneTx); + if self.has_unpublished_bytecodes() { + ( + Err(BytecodeCompressionError::BytecodeCompressionFailed), + result, + ) + } else { + (Ok(()), result) + } + } + + fn record_vm_memory_metrics(&self) -> VmMemoryMetrics { + self.record_vm_memory_metrics_inner() + } + + fn finish_batch(&mut self) -> FinishedL1Batch { + let result = self.execute(VmExecutionMode::Batch); + let execution_state = self.get_current_execution_state(); + let bootloader_memory = self.get_bootloader_memory(); + FinishedL1Batch { + block_tip_execution_result: result, + final_execution_state: execution_state, + final_bootloader_memory: Some(bootloader_memory), + pubdata_input: Some( + self.bootloader_state + .get_pubdata_information() + .clone() + .build_pubdata(false), + ), + } + } +} + +/// Methods of vm, which required some history manipulations +impl VmInterfaceHistoryEnabled for Vm { + /// Create snapshot of current vm state and push it into the memory + fn make_snapshot(&mut self) { + self.make_snapshot_inner() + } + + /// Rollback vm state to the latest snapshot and destroy the snapshot + fn rollback_to_the_latest_snapshot(&mut self) { + let snapshot = self + .snapshots + .pop() + .expect("Snapshot should be created before rolling it back"); + self.rollback_to_snapshot(snapshot); + } + + /// Pop the latest snapshot from the memory and destroy it + fn pop_snapshot_no_rollback(&mut self) { + self.snapshots + .pop() + .expect("Snapshot should be created before rolling it back"); + } +} diff --git a/core/lib/multivm/src/versions/vm_latest/bootloader_state/l2_block.rs b/core/lib/multivm/src/versions/vm_latest/bootloader_state/l2_block.rs index 6da9b64673e..70b6a2b866e 100644 --- a/core/lib/multivm/src/versions/vm_latest/bootloader_state/l2_block.rs +++ b/core/lib/multivm/src/versions/vm_latest/bootloader_state/l2_block.rs @@ -1,11 +1,15 @@ use std::cmp::Ordering; + use zksync_types::{MiniblockNumber, H256}; use 
zksync_utils::concat_and_hash; -use crate::interface::{L2Block, L2BlockEnv}; -use crate::vm_latest::bootloader_state::snapshot::L2BlockSnapshot; -use crate::vm_latest::bootloader_state::tx::BootloaderTx; -use crate::vm_latest::utils::l2_blocks::l2_block_hash; +use crate::{ + interface::{L2Block, L2BlockEnv}, + vm_latest::{ + bootloader_state::{snapshot::L2BlockSnapshot, tx::BootloaderTx}, + utils::l2_blocks::l2_block_hash, + }, +}; const EMPTY_TXS_ROLLING_HASH: H256 = H256::zero(); @@ -15,10 +19,10 @@ pub(crate) struct BootloaderL2Block { pub(crate) timestamp: u64, pub(crate) txs_rolling_hash: H256, // The rolling hash of all the transactions in the miniblock pub(crate) prev_block_hash: H256, - // Number of the first l2 block tx in l1 batch + // Number of the first L2 block tx in L1 batch pub(crate) first_tx_index: usize, pub(crate) max_virtual_blocks_to_create: u32, - pub(super) txs: Vec, + pub(crate) txs: Vec, } impl BootloaderL2Block { diff --git a/core/lib/multivm/src/versions/vm_latest/bootloader_state/snapshot.rs b/core/lib/multivm/src/versions/vm_latest/bootloader_state/snapshot.rs index 683fc28a69e..8f1cec3cb7f 100644 --- a/core/lib/multivm/src/versions/vm_latest/bootloader_state/snapshot.rs +++ b/core/lib/multivm/src/versions/vm_latest/bootloader_state/snapshot.rs @@ -4,9 +4,9 @@ use zksync_types::H256; pub(crate) struct BootloaderStateSnapshot { /// ID of the next transaction to be executed. pub(crate) tx_to_execute: usize, - /// Stored l2 blocks in bootloader memory + /// Stored L2 blocks in bootloader memory pub(crate) l2_blocks_len: usize, - /// Snapshot of the last l2 block. Only this block could be changed during the rollback + /// Snapshot of the last L2 block. Only this block could be changed during the rollback pub(crate) last_l2_block: L2BlockSnapshot, /// The number of 32-byte words spent on the already included compressed bytecodes. 
pub(crate) compressed_bytecodes_encoding: usize, @@ -20,6 +20,6 @@ pub(crate) struct BootloaderStateSnapshot { pub(crate) struct L2BlockSnapshot { /// The rolling hash of all the transactions in the miniblock pub(crate) txs_rolling_hash: H256, - /// The number of transactions in the last l2 block + /// The number of transactions in the last L2 block pub(crate) txs_len: usize, } diff --git a/core/lib/multivm/src/versions/vm_latest/bootloader_state/state.rs b/core/lib/multivm/src/versions/vm_latest/bootloader_state/state.rs index b4641d9bc64..d914aacab17 100644 --- a/core/lib/multivm/src/versions/vm_latest/bootloader_state/state.rs +++ b/core/lib/multivm/src/versions/vm_latest/bootloader_state/state.rs @@ -1,20 +1,24 @@ -use crate::vm_latest::bootloader_state::l2_block::BootloaderL2Block; -use crate::vm_latest::bootloader_state::snapshot::BootloaderStateSnapshot; -use crate::vm_latest::bootloader_state::utils::{apply_l2_block, apply_tx_to_memory}; -use once_cell::sync::OnceCell; use std::cmp::Ordering; + +use once_cell::sync::OnceCell; use zksync_types::{L2ChainId, U256}; use zksync_utils::bytecode::CompressedBytecodeInfo; -use crate::interface::{BootloaderMemory, L2BlockEnv, TxExecutionMode}; -use crate::vm_latest::types::internals::pubdata::PubdataInput; -use crate::vm_latest::{ - constants::TX_DESCRIPTION_OFFSET, types::internals::TransactionData, - utils::l2_blocks::assert_next_block, +use super::{tx::BootloaderTx, utils::apply_pubdata_to_memory}; +use crate::{ + interface::{BootloaderMemory, L2BlockEnv, TxExecutionMode}, + vm_latest::{ + bootloader_state::{ + l2_block::BootloaderL2Block, + snapshot::BootloaderStateSnapshot, + utils::{apply_l2_block, apply_tx_to_memory}, + }, + constants::TX_DESCRIPTION_OFFSET, + types::internals::{PubdataInput, TransactionData}, + utils::l2_blocks::assert_next_block, + }, }; -use super::tx::BootloaderTx; -use super::utils::apply_pubdata_to_memory; /// Intermediate bootloader-related VM state. /// /// Required to process transactions one by one (since we intercept the VM execution to execute @@ -132,6 +136,11 @@ impl BootloaderState { pub(crate) fn last_l2_block(&self) -> &BootloaderL2Block { self.l2_blocks.last().unwrap() } + pub(crate) fn get_pubdata_information(&self) -> &PubdataInput { + self.pubdata_information + .get() + .expect("Pubdata information is not set") + } fn last_mut_l2_block(&mut self) -> &mut BootloaderL2Block { self.l2_blocks.last_mut().unwrap() diff --git a/core/lib/multivm/src/versions/vm_latest/bootloader_state/tx.rs b/core/lib/multivm/src/versions/vm_latest/bootloader_state/tx.rs index 6d322e5877d..9f44d848a4e 100644 --- a/core/lib/multivm/src/versions/vm_latest/bootloader_state/tx.rs +++ b/core/lib/multivm/src/versions/vm_latest/bootloader_state/tx.rs @@ -1,23 +1,24 @@ -use crate::vm_latest::types::internals::TransactionData; use zksync_types::{L2ChainId, H256, U256}; use zksync_utils::bytecode::CompressedBytecodeInfo; +use crate::vm_latest::types::internals::TransactionData; + /// Information about tx necessary for execution in bootloader. 
#[derive(Debug, Clone)] -pub(super) struct BootloaderTx { - pub(super) hash: H256, +pub(crate) struct BootloaderTx { + pub(crate) hash: H256, /// Encoded transaction - pub(super) encoded: Vec, + pub(crate) encoded: Vec, /// Compressed bytecodes, which has been published during this transaction - pub(super) compressed_bytecodes: Vec, + pub(crate) compressed_bytecodes: Vec, /// Refunds for this transaction - pub(super) refund: u32, + pub(crate) refund: u32, /// Gas overhead - pub(super) gas_overhead: u32, - /// Gas Limit for this transaction. It can be different from the gaslimit inside the transaction - pub(super) trusted_gas_limit: U256, + pub(crate) gas_overhead: u32, + /// Gas Limit for this transaction. It can be different from the gas limit inside the transaction + pub(crate) trusted_gas_limit: U256, /// Offset of the tx in bootloader memory - pub(super) offset: usize, + pub(crate) offset: usize, } impl BootloaderTx { diff --git a/core/lib/multivm/src/versions/vm_latest/bootloader_state/utils.rs b/core/lib/multivm/src/versions/vm_latest/bootloader_state/utils.rs index 78b98f0a404..346c1bde536 100644 --- a/core/lib/multivm/src/versions/vm_latest/bootloader_state/utils.rs +++ b/core/lib/multivm/src/versions/vm_latest/bootloader_state/utils.rs @@ -1,18 +1,21 @@ -use zksync_types::U256; -use zksync_utils::bytecode::CompressedBytecodeInfo; -use zksync_utils::{bytes_to_be_words, h256_to_u256}; - -use crate::interface::{BootloaderMemory, TxExecutionMode}; -use crate::vm_latest::bootloader_state::l2_block::BootloaderL2Block; -use crate::vm_latest::constants::{ - BOOTLOADER_TX_DESCRIPTION_OFFSET, BOOTLOADER_TX_DESCRIPTION_SIZE, COMPRESSED_BYTECODES_OFFSET, - OPERATOR_PROVIDED_L1_MESSENGER_PUBDATA_OFFSET, OPERATOR_PROVIDED_L1_MESSENGER_PUBDATA_SLOTS, - OPERATOR_REFUNDS_OFFSET, TX_DESCRIPTION_OFFSET, TX_OPERATOR_L2_BLOCK_INFO_OFFSET, - TX_OPERATOR_SLOTS_PER_L2_BLOCK_INFO, TX_OVERHEAD_OFFSET, TX_TRUSTED_GAS_LIMIT_OFFSET, -}; -use crate::vm_latest::types::internals::pubdata::PubdataInput; +use zksync_types::{ethabi, U256}; +use zksync_utils::{bytecode::CompressedBytecodeInfo, bytes_to_be_words, h256_to_u256}; use super::tx::BootloaderTx; +use crate::{ + interface::{BootloaderMemory, TxExecutionMode}, + vm_latest::{ + bootloader_state::l2_block::BootloaderL2Block, + constants::{ + BOOTLOADER_TX_DESCRIPTION_OFFSET, BOOTLOADER_TX_DESCRIPTION_SIZE, + COMPRESSED_BYTECODES_OFFSET, OPERATOR_PROVIDED_L1_MESSENGER_PUBDATA_OFFSET, + OPERATOR_PROVIDED_L1_MESSENGER_PUBDATA_SLOTS, OPERATOR_REFUNDS_OFFSET, + TX_DESCRIPTION_OFFSET, TX_OPERATOR_L2_BLOCK_INFO_OFFSET, + TX_OPERATOR_SLOTS_PER_L2_BLOCK_INFO, TX_OVERHEAD_OFFSET, TX_TRUSTED_GAS_LIMIT_OFFSET, + }, + types::internals::PubdataInput, + }, +}; pub(super) fn get_memory_for_compressed_bytecodes( compressed_bytecodes: &[CompressedBytecodeInfo], @@ -71,7 +74,7 @@ pub(super) fn apply_tx_to_memory( }; apply_l2_block(memory, &bootloader_l2_block, tx_index); - // Note, +1 is moving for poitner + // Note, +1 is moving for pointer let compressed_bytecodes_offset = COMPRESSED_BYTECODES_OFFSET + 1 + compressed_bytecodes_size; let encoded_compressed_bytecodes = @@ -91,8 +94,8 @@ pub(crate) fn apply_l2_block( bootloader_l2_block: &BootloaderL2Block, txs_index: usize, ) { - // Since L2 block infos start from the TX_OPERATOR_L2_BLOCK_INFO_OFFSET and each - // L2 block info takes TX_OPERATOR_SLOTS_PER_L2_BLOCK_INFO slots, the position where the L2 block info + // Since L2 block information start from the `TX_OPERATOR_L2_BLOCK_INFO_OFFSET` and each + // L2 block info takes 
`TX_OPERATOR_SLOTS_PER_L2_BLOCK_INFO` slots, the position where the L2 block info
     // for this transaction needs to be written is:
     let block_position =
@@ -121,7 +124,12 @@ pub(crate) fn apply_pubdata_to_memory(
     // - The other slot is for the 0x20 offset for the calldata.
     let l1_messenger_pubdata_start_slot = OPERATOR_PROVIDED_L1_MESSENGER_PUBDATA_OFFSET + 2;

-    let pubdata = pubdata_information.build_pubdata();
+    // Need to skip the first word, as it represents the array offset
+    // while the bootloader expects only [len || data]
+    let pubdata = ethabi::encode(&[ethabi::Token::Bytes(
+        pubdata_information.build_pubdata(true),
+    )])[32..]
+    .to_vec();

     assert!(
         pubdata.len() / 32 <= OPERATOR_PROVIDED_L1_MESSENGER_PUBDATA_SLOTS - 2,
diff --git a/core/lib/multivm/src/versions/vm_latest/constants.rs b/core/lib/multivm/src/versions/vm_latest/constants.rs
index c67156681a0..1652a2f9424 100644
--- a/core/lib/multivm/src/versions/vm_latest/constants.rs
+++ b/core/lib/multivm/src/versions/vm_latest/constants.rs
@@ -1,22 +1,40 @@
-use zk_evm_1_4_0::aux_structures::MemoryPage;
-
-use zksync_system_constants::{
-    L1_GAS_PER_PUBDATA_BYTE, MAX_L2_TX_GAS_LIMIT, MAX_NEW_FACTORY_DEPS, MAX_TXS_IN_BLOCK,
-    USED_BOOTLOADER_MEMORY_WORDS,
-};
-
-pub use zk_evm_1_4_0::zkevm_opcode_defs::system_params::{
+use zk_evm_1_4_1::aux_structures::MemoryPage;
+pub use zk_evm_1_4_1::zkevm_opcode_defs::system_params::{
     ERGS_PER_CIRCUIT, INITIAL_STORAGE_WRITE_PUBDATA_BYTES, MAX_PUBDATA_PER_BLOCK,
 };
+use zksync_system_constants::{MAX_L2_TX_GAS_LIMIT, MAX_NEW_FACTORY_DEPS};

 use crate::vm_latest::old_vm::utils::heap_page_from_base;

+/// The size of the bootloader memory in bytes which is used by the protocol.
+/// While the maximal possible size is a lot higher, we restrict ourselves to a certain limit to reduce
+/// the requirements on RAM.
+/// In this version of the VM the used bootloader memory size has increased from `2^24` to `24_000_000` bytes.
+pub(crate) const USED_BOOTLOADER_MEMORY_BYTES: usize = 24_000_000;
+pub(crate) const USED_BOOTLOADER_MEMORY_WORDS: usize = USED_BOOTLOADER_MEMORY_BYTES / 32;
+
+// This is the amount of pubdata that it should always be possible to publish
+// from a single transaction. Note that these pubdata bytes include only bytes that are
+// to be published inside the body of the transaction (i.e. excluding factory deps).
+pub(crate) const GUARANTEED_PUBDATA_PER_L1_BATCH: u64 = 2500;
+
+// The users should always be able to provide `MAX_GAS_PER_PUBDATA_BYTE` gas per pubdata in their
+// transactions so that they are able to send at least `GUARANTEED_PUBDATA_PER_L1_BATCH` bytes per
+// transaction.
+pub(crate) const MAX_GAS_PER_PUBDATA_BYTE: u64 =
+    MAX_L2_TX_GAS_LIMIT / GUARANTEED_PUBDATA_PER_L1_BATCH;
+
+// The maximal number of transactions in a single batch.
+// In this version of the VM the limit has been increased from `1024` to `10000`.
+pub(crate) const MAX_TXS_IN_BATCH: usize = 10000;
+
 /// Max cycles for a single transaction.
 pub const MAX_CYCLES_FOR_TX: u32 = u32::MAX;

 /// The first 32 slots are reserved for debugging purposes
 pub(crate) const DEBUG_SLOTS_OFFSET: usize = 8;
 pub(crate) const DEBUG_FIRST_SLOTS: usize = 32;
+
 /// The next 33 slots are reserved for dealing with the paymaster context (1 slot for storing length + 32 slots for storing the actual context).
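+// A quick numeric cross-check of the constants above (illustrative arithmetic only):
+// `USED_BOOTLOADER_MEMORY_WORDS = 24_000_000 / 32 = 750_000` words, and by construction
+// `MAX_L2_TX_GAS_LIMIT / MAX_GAS_PER_PUBDATA_BYTE = GUARANTEED_PUBDATA_PER_L1_BATCH = 2500`,
+// i.e. a transaction paying the maximal gas-per-pubdata price out of the full L2 tx gas
+// limit can afford exactly the guaranteed 2500 pubdata bytes.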
pub(crate) const PAYMASTER_CONTEXT_SLOTS: usize = 32 + 1; /// The next PAYMASTER_CONTEXT_SLOTS + 7 slots free slots are needed before each tx, so that the @@ -34,7 +52,7 @@ const CURRENT_L2_TX_HASHES_SLOTS: usize = 2; const NEW_FACTORY_DEPS_RESERVED_SLOTS: usize = MAX_NEW_FACTORY_DEPS + 4; /// The operator can provide for each transaction the proposed minimal refund -pub(crate) const OPERATOR_REFUNDS_SLOTS: usize = MAX_TXS_IN_BLOCK; +pub(crate) const OPERATOR_REFUNDS_SLOTS: usize = MAX_TXS_IN_BATCH; pub(crate) const OPERATOR_REFUNDS_OFFSET: usize = DEBUG_SLOTS_OFFSET + DEBUG_FIRST_SLOTS @@ -43,10 +61,10 @@ pub(crate) const OPERATOR_REFUNDS_OFFSET: usize = DEBUG_SLOTS_OFFSET + NEW_FACTORY_DEPS_RESERVED_SLOTS; pub(crate) const TX_OVERHEAD_OFFSET: usize = OPERATOR_REFUNDS_OFFSET + OPERATOR_REFUNDS_SLOTS; -pub(crate) const TX_OVERHEAD_SLOTS: usize = MAX_TXS_IN_BLOCK; +pub(crate) const TX_OVERHEAD_SLOTS: usize = MAX_TXS_IN_BATCH; pub(crate) const TX_TRUSTED_GAS_LIMIT_OFFSET: usize = TX_OVERHEAD_OFFSET + TX_OVERHEAD_SLOTS; -pub(crate) const TX_TRUSTED_GAS_LIMIT_SLOTS: usize = MAX_TXS_IN_BLOCK; +pub(crate) const TX_TRUSTED_GAS_LIMIT_SLOTS: usize = MAX_TXS_IN_BATCH; pub(crate) const COMPRESSED_BYTECODES_SLOTS: usize = 32768; @@ -60,7 +78,7 @@ pub const OPERATOR_PROVIDED_L1_MESSENGER_PUBDATA_OFFSET: usize = /// One of "worst case" scenarios for the number of state diffs in a batch is when 120kb of pubdata is spent /// on repeated writes, that are all zeroed out. In this case, the number of diffs is 120k / 5 = 24k. This means that they will have /// accommodate 6528000 bytes of calldata for the uncompressed state diffs. Adding 120k on top leaves us with -/// roughly 6650000 bytes needed for calldata. 207813 slots are needed to accomodate this amount of data. +/// roughly 6650000 bytes needed for calldata. 207813 slots are needed to accommodate this amount of data. /// We round up to 208000 slots just in case. /// /// In theory though much more calldata could be used (if for instance 1 byte is used for enum index). It is the responsibility of the @@ -71,8 +89,8 @@ pub(crate) const BOOTLOADER_TX_DESCRIPTION_OFFSET: usize = OPERATOR_PROVIDED_L1_MESSENGER_PUBDATA_OFFSET + OPERATOR_PROVIDED_L1_MESSENGER_PUBDATA_SLOTS; /// The size of the bootloader memory dedicated to the encodings of transactions -pub const BOOTLOADER_TX_ENCODING_SPACE: u32 = - (USED_BOOTLOADER_MEMORY_WORDS - TX_DESCRIPTION_OFFSET - MAX_TXS_IN_BLOCK) as u32; +pub(crate) const BOOTLOADER_TX_ENCODING_SPACE: u32 = + (USED_BOOTLOADER_MEMORY_WORDS - TX_DESCRIPTION_OFFSET - MAX_TXS_IN_BATCH) as u32; // Size of the bootloader tx description in words pub(crate) const BOOTLOADER_TX_DESCRIPTION_SIZE: usize = 2; @@ -80,37 +98,32 @@ pub(crate) const BOOTLOADER_TX_DESCRIPTION_SIZE: usize = 2; /// The actual descriptions of transactions should start after the minor descriptions and a MAX_POSTOP_SLOTS /// free slots to allow postOp encoding. 
pub(crate) const TX_DESCRIPTION_OFFSET: usize = BOOTLOADER_TX_DESCRIPTION_OFFSET
-    + BOOTLOADER_TX_DESCRIPTION_SIZE * MAX_TXS_IN_BLOCK
+    + BOOTLOADER_TX_DESCRIPTION_SIZE * MAX_TXS_IN_BATCH
     + MAX_POSTOP_SLOTS;

 pub(crate) const TX_GAS_LIMIT_OFFSET: usize = 4;

 const INITIAL_BASE_PAGE: u32 = 8;
 pub const BOOTLOADER_HEAP_PAGE: u32 = heap_page_from_base(MemoryPage(INITIAL_BASE_PAGE)).0;
-pub const BLOCK_OVERHEAD_GAS: u32 = 1200000;
-pub const BLOCK_OVERHEAD_L1_GAS: u32 = 1000000;
-pub const BLOCK_OVERHEAD_PUBDATA: u32 = BLOCK_OVERHEAD_L1_GAS / L1_GAS_PER_PUBDATA_BYTE;

 /// VM Hooks are used for communication between bootloader and tracers.
-/// The 'type'/'opcode' is put into VM_HOOK_POSITION slot,
+/// The 'type' / 'opcode' is put into VM_HOOK_POSITION slot,
 /// and VM_HOOKS_PARAMS_COUNT parameters (each 32 bytes) are put in the slots before.
 /// So the layout looks like this:
-/// [param 0][param 1][vmhook opcode]
+/// `[param 0][param 1][vmhook opcode]`
 pub const VM_HOOK_POSITION: u32 = RESULT_SUCCESS_FIRST_SLOT - 1;
 pub const VM_HOOK_PARAMS_COUNT: u32 = 2;
 pub const VM_HOOK_PARAMS_START_POSITION: u32 = VM_HOOK_POSITION - VM_HOOK_PARAMS_COUNT;

-pub(crate) const MAX_MEM_SIZE_BYTES: u32 = 16777216; // 2^24
-
 /// Arbitrary space in memory closer to the end of the page
 pub const RESULT_SUCCESS_FIRST_SLOT: u32 =
-    (MAX_MEM_SIZE_BYTES - (MAX_TXS_IN_BLOCK as u32) * 32) / 32;
+    ((USED_BOOTLOADER_MEMORY_BYTES as u32) - (MAX_TXS_IN_BATCH as u32) * 32) / 32;

 /// How much gas the bootloader is allowed to spend within one block.
 /// Note that this value doesn't correspond to the gas limit of any particular transaction
 /// (except for the fact that, of course, gas limit for each transaction should be <= `BLOCK_GAS_LIMIT`).
 pub const BLOCK_GAS_LIMIT: u32 =
-    zk_evm_1_4_0::zkevm_opcode_defs::system_params::VM_INITIAL_FRAME_ERGS;
+    zk_evm_1_4_1::zkevm_opcode_defs::system_params::VM_INITIAL_FRAME_ERGS;

 /// How much gas is allowed to be spent on a single transaction in the eth_call method
 pub const ETH_CALL_GAS_LIMIT: u32 = MAX_L2_TX_GAS_LIMIT as u32;
@@ -123,7 +136,35 @@ pub(crate) const TX_OPERATOR_L2_BLOCK_INFO_OFFSET: usize =
 pub(crate) const TX_OPERATOR_SLOTS_PER_L2_BLOCK_INFO: usize = 4;
 pub(crate) const TX_OPERATOR_L2_BLOCK_INFO_SLOTS: usize =
-    (MAX_TXS_IN_BLOCK + 1) * TX_OPERATOR_SLOTS_PER_L2_BLOCK_INFO;
+    (MAX_TXS_IN_BATCH + 1) * TX_OPERATOR_SLOTS_PER_L2_BLOCK_INFO;

 pub(crate) const COMPRESSED_BYTECODES_OFFSET: usize =
     TX_OPERATOR_L2_BLOCK_INFO_OFFSET + TX_OPERATOR_L2_BLOCK_INFO_SLOTS;
+
+/// The maximal gas limit that gets passed into an L1->L2 transaction.
+///
+/// It is equal to the `MAX_GAS_PER_TRANSACTION` in the bootloader.
+/// We need to limit the amount of gas that can be passed into the L1->L2 transaction due to the fact
+/// that unlike L2 transactions they cannot be rejected by the operator and must be executed. In turn,
+/// this means that if a transaction spends more than `MAX_GAS_PER_TRANSACTION`, it can use up all the limited resources of a batch.
+///
+/// The gas limit cap on the Mailbox for priority transactions should generally be low enough to never cross that boundary, since
+/// artificially limiting the gas price is bad UX. However, during the transition between the pre-1.4.1 fee model and the 1.4.1 one,
+/// we need to process such transactions somehow.
+pub(crate) const PRIORITY_TX_MAX_GAS_LIMIT: usize = 80_000_000;
+
+/// The amount of gas to be charged for occupying a single slot of a transaction.
+/// It is roughly equal to `80kk/MAX_TRANSACTIONS_PER_BATCH`, i.e. how much gas an L1->L2 transaction
+/// would need to pay to compensate for the batch being closed.
+/// While the derived formula is used for the worst case for an L1->L2 transaction, it suits L2 transactions as well
+/// and serves to compensate the operator for the fact that they need to process the transaction. In case batches start
+/// getting sealed often due to the slot limit being reached, the L2 fair gas price will be increased.
+pub(crate) const TX_SLOT_OVERHEAD_GAS: u32 = 10_000;
+
+/// The amount of gas to be charged for occupying a single byte of the bootloader's memory.
+/// It is roughly equal to `80kk/BOOTLOADER_MEMORY_FOR_TXS`, i.e. how much gas an L1->L2 transaction
+/// would need to pay to compensate for the batch being closed.
+/// While the derived formula is used for the worst case for an L1->L2 transaction, it suits L2 transactions as well
+/// and serves to compensate the operator for the fact that they need to fill up the bootloader memory. In case batches start
+/// getting sealed often due to the memory limit being reached, the L2 fair gas price will be increased.
+pub(crate) const TX_MEMORY_OVERHEAD_GAS: u32 = 10;
diff --git a/core/lib/multivm/src/versions/vm_latest/implementation/bytecode.rs b/core/lib/multivm/src/versions/vm_latest/implementation/bytecode.rs
index 83a7be74897..bda1803067f 100644
--- a/core/lib/multivm/src/versions/vm_latest/implementation/bytecode.rs
+++ b/core/lib/multivm/src/versions/vm_latest/implementation/bytecode.rs
@@ -1,13 +1,12 @@
 use itertools::Itertools;
-
-use crate::interface::VmInterface;
-use crate::HistoryMode;
 use zksync_state::{StoragePtr, WriteStorage};
 use zksync_types::U256;
-use zksync_utils::bytecode::{compress_bytecode, hash_bytecode, CompressedBytecodeInfo};
-use zksync_utils::bytes_to_be_words;
+use zksync_utils::{
+    bytecode::{compress_bytecode, hash_bytecode, CompressedBytecodeInfo},
+    bytes_to_be_words,
+};

-use crate::vm_latest::Vm;
+use crate::{interface::VmInterface, vm_latest::Vm, HistoryMode};

 impl Vm {
     /// Checks whether the last transaction has successfully published its compressed bytecodes, returning `true` if at least one is still unknown.
diff --git a/core/lib/multivm/src/versions/vm_latest/implementation/execution.rs b/core/lib/multivm/src/versions/vm_latest/implementation/execution.rs index 1b3197f57b9..9bda37e20dd 100644 --- a/core/lib/multivm/src/versions/vm_latest/implementation/execution.rs +++ b/core/lib/multivm/src/versions/vm_latest/implementation/execution.rs @@ -1,21 +1,25 @@ -use crate::HistoryMode; -use zk_evm_1_4_0::aux_structures::Timestamp; +use zk_evm_1_4_1::aux_structures::Timestamp; use zksync_state::WriteStorage; -use crate::interface::{ - types::tracer::{TracerExecutionStatus, VmExecutionStopReason}, - VmExecutionMode, VmExecutionResultAndLogs, -}; -use crate::vm_latest::{ - old_vm::utils::{vm_may_have_ended_inner, VmExecutionResult}, - tracers::{dispatcher::TracerDispatcher, DefaultExecutionTracer, PubdataTracer, RefundsTracer}, - vm::Vm, +use crate::{ + interface::{ + types::tracer::{TracerExecutionStatus, VmExecutionStopReason}, + VmExecutionMode, VmExecutionResultAndLogs, + }, + vm_latest::{ + old_vm::utils::{vm_may_have_ended_inner, VmExecutionResult}, + tracers::{ + dispatcher::TracerDispatcher, DefaultExecutionTracer, PubdataTracer, RefundsTracer, + }, + vm::Vm, + }, + HistoryMode, }; impl Vm { pub(crate) fn inspect_inner( &mut self, - dispatcher: TracerDispatcher, + dispatcher: TracerDispatcher, execution_mode: VmExecutionMode, ) -> VmExecutionResultAndLogs { let mut enable_refund_tracer = false; @@ -34,21 +38,20 @@ impl Vm { /// Collect the result from the default tracers. fn inspect_and_collect_results( &mut self, - dispatcher: TracerDispatcher, + dispatcher: TracerDispatcher, execution_mode: VmExecutionMode, with_refund_tracer: bool, ) -> (VmExecutionStopReason, VmExecutionResultAndLogs) { let refund_tracers = with_refund_tracer.then_some(RefundsTracer::new(self.batch_env.clone())); - let mut tx_tracer: DefaultExecutionTracer = - DefaultExecutionTracer::new( - self.system_env.default_validation_computational_gas_limit, - execution_mode, - dispatcher, - self.storage.clone(), - refund_tracers, - Some(PubdataTracer::new(self.batch_env.clone(), execution_mode)), - ); + let mut tx_tracer: DefaultExecutionTracer = DefaultExecutionTracer::new( + self.system_env.default_validation_computational_gas_limit, + execution_mode, + dispatcher, + self.storage.clone(), + refund_tracers, + Some(PubdataTracer::new(self.batch_env.clone(), execution_mode)), + ); let timestamp_initial = Timestamp(self.state.local_state.timestamp); let cycles_initial = self.state.local_state.monotonic_cycle_counter; @@ -76,6 +79,7 @@ impl Vm { spent_pubdata_counter_before, pubdata_published, logs.total_log_queries_count, + tx_tracer.circuits_tracer.estimated_circuits_used, ); let result = tx_tracer.result_tracer.into_result(); @@ -92,7 +96,7 @@ impl Vm { /// Execute vm with given tracers until the stop reason is reached. 
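    /// (Roughly: the VM is advanced cycle by cycle and the tracer is polled after each
    /// step; the loop ends once the tracer returns a stop verdict or the VM itself has
    /// finished, which is what `VmExecutionStopReason` captures.)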
fn execute_with_default_tracer( &mut self, - tracer: &mut DefaultExecutionTracer, + tracer: &mut DefaultExecutionTracer, ) -> VmExecutionStopReason { tracer.initialize_tracer(&mut self.state); let result = loop { diff --git a/core/lib/multivm/src/versions/vm_latest/implementation/gas.rs b/core/lib/multivm/src/versions/vm_latest/implementation/gas.rs index c970cd4e5d2..8b21e7aeca1 100644 --- a/core/lib/multivm/src/versions/vm_latest/implementation/gas.rs +++ b/core/lib/multivm/src/versions/vm_latest/implementation/gas.rs @@ -1,8 +1,9 @@ -use crate::HistoryMode; use zksync_state::WriteStorage; -use crate::vm_latest::tracers::DefaultExecutionTracer; -use crate::vm_latest::vm::Vm; +use crate::{ + vm_latest::{tracers::DefaultExecutionTracer, vm::Vm}, + HistoryMode, +}; impl Vm { /// Returns the amount of gas remaining to the VM. @@ -19,7 +20,7 @@ impl Vm { pub(crate) fn calculate_computational_gas_used( &self, - tracer: &DefaultExecutionTracer, + tracer: &DefaultExecutionTracer, gas_remaining_before: u32, spent_pubdata_counter_before: u32, ) -> u32 { diff --git a/core/lib/multivm/src/versions/vm_latest/implementation/logs.rs b/core/lib/multivm/src/versions/vm_latest/implementation/logs.rs index c468cf87817..a2682ef03af 100644 --- a/core/lib/multivm/src/versions/vm_latest/implementation/logs.rs +++ b/core/lib/multivm/src/versions/vm_latest/implementation/logs.rs @@ -1,16 +1,17 @@ -use zk_evm_1_4_0::aux_structures::Timestamp; +use zk_evm_1_4_1::aux_structures::Timestamp; use zksync_state::WriteStorage; -use zksync_types::event::extract_l2tol1logs_from_l1_messenger; +use zksync_types::{ + event::extract_l2tol1logs_from_l1_messenger, + l2_to_l1_log::{L2ToL1Log, SystemL2ToL1Log, UserL2ToL1Log}, + VmEvent, +}; -use crate::HistoryMode; -use zksync_types::l2_to_l1_log::{L2ToL1Log, SystemL2ToL1Log, UserL2ToL1Log}; -use zksync_types::VmEvent; - -use crate::interface::types::outputs::VmExecutionLogs; - -use crate::vm_latest::old_vm::utils::precompile_calls_count_after_timestamp; -use crate::vm_latest::utils::logs; -use crate::vm_latest::vm::Vm; +use crate::{ + glue::GlueInto, + interface::types::outputs::VmExecutionLogs, + vm_latest::{old_vm::utils::precompile_calls_count_after_timestamp, utils::logs, vm::Vm}, + HistoryMode, +}; impl Vm { pub(crate) fn collect_execution_logs_after_timestamp( @@ -66,7 +67,7 @@ impl Vm { logs::collect_events_and_l1_system_logs_after_timestamp( &self.state, &self.batch_env, - from_timestamp, + from_timestamp.glue_into(), ) } } diff --git a/core/lib/multivm/src/versions/vm_latest/implementation/snapshots.rs b/core/lib/multivm/src/versions/vm_latest/implementation/snapshots.rs index 99d41a2aec6..6f27c031d09 100644 --- a/core/lib/multivm/src/versions/vm_latest/implementation/snapshots.rs +++ b/core/lib/multivm/src/versions/vm_latest/implementation/snapshots.rs @@ -1,8 +1,7 @@ -use vise::{Buckets, EncodeLabelSet, EncodeLabelValue, Family, Histogram, Metrics}; - use std::time::Duration; -use zk_evm_1_4_0::aux_structures::Timestamp; +use vise::{Buckets, EncodeLabelSet, EncodeLabelValue, Family, Histogram, Metrics}; +use zk_evm_1_4_1::aux_structures::Timestamp; use zksync_state::WriteStorage; use crate::vm_latest::{ @@ -37,8 +36,8 @@ impl Vm { pub(crate) fn make_snapshot_inner(&mut self) { self.snapshots.push(VmSnapshot { // Vm local state contains O(1) various parameters (registers/etc). - // The only "expensive" copying here is copying of the callstack. - // It will take O(callstack_depth) to copy it. + // The only "expensive" copying here is copying of the call stack. 
+ // It will take `O(callstack_depth)` to copy it. // So it is generally recommended to get snapshots of the bootloader frame, // where the depth is 1. local_state: self.state.local_state.clone(), diff --git a/core/lib/multivm/src/versions/vm_latest/implementation/statistics.rs b/core/lib/multivm/src/versions/vm_latest/implementation/statistics.rs index ddbc7aec2f7..bdaf425ee27 100644 --- a/core/lib/multivm/src/versions/vm_latest/implementation/statistics.rs +++ b/core/lib/multivm/src/versions/vm_latest/implementation/statistics.rs @@ -1,12 +1,12 @@ -use zk_evm_1_4_0::aux_structures::Timestamp; +use zk_evm_1_4_1::aux_structures::Timestamp; use zksync_state::WriteStorage; - -use crate::HistoryMode; use zksync_types::U256; -use crate::interface::{VmExecutionStatistics, VmMemoryMetrics}; -use crate::vm_latest::tracers::DefaultExecutionTracer; -use crate::vm_latest::vm::Vm; +use crate::{ + interface::{VmExecutionStatistics, VmMemoryMetrics}, + vm_latest::{tracers::DefaultExecutionTracer, vm::Vm}, + HistoryMode, +}; /// Module responsible for observing the VM behavior, i.e. calculating the statistics of the VM runs /// or reporting the VM memory usage. @@ -18,12 +18,13 @@ impl Vm { &self, timestamp_initial: Timestamp, cycles_initial: u32, - tracer: &DefaultExecutionTracer, + tracer: &DefaultExecutionTracer, gas_remaining_before: u32, gas_remaining_after: u32, spent_pubdata_counter_before: u32, pubdata_published: u32, total_log_queries_count: usize, + estimated_circuits_used: f32, ) -> VmExecutionStatistics { let computational_gas_used = self.calculate_computational_gas_used( tracer, @@ -40,10 +41,11 @@ impl Vm { computational_gas_used, total_log_queries: total_log_queries_count, pubdata_published, + estimated_circuits_used, } } - /// Returns the hashes the bytecodes that have been decommitted by the decomittment processor. + /// Returns the hashes the bytecodes that have been decommitted by the decommitment processor. 
pub(crate) fn get_used_contracts(&self) -> Vec { self.state .decommittment_processor diff --git a/core/lib/multivm/src/versions/vm_latest/implementation/tx.rs b/core/lib/multivm/src/versions/vm_latest/implementation/tx.rs index 6def1da0f5d..d7d948c8b57 100644 --- a/core/lib/multivm/src/versions/vm_latest/implementation/tx.rs +++ b/core/lib/multivm/src/versions/vm_latest/implementation/tx.rs @@ -1,13 +1,16 @@ -use crate::vm_latest::constants::BOOTLOADER_HEAP_PAGE; -use crate::vm_latest::implementation::bytecode::{bytecode_to_factory_dep, compress_bytecodes}; -use crate::HistoryMode; -use zk_evm_1_4_0::aux_structures::Timestamp; +use zk_evm_1_4_1::aux_structures::Timestamp; use zksync_state::WriteStorage; -use zksync_types::l1::is_l1_tx_type; -use zksync_types::Transaction; +use zksync_types::{l1::is_l1_tx_type, Transaction}; -use crate::vm_latest::types::internals::TransactionData; -use crate::vm_latest::vm::Vm; +use crate::{ + vm_latest::{ + constants::BOOTLOADER_HEAP_PAGE, + implementation::bytecode::{bytecode_to_factory_dep, compress_bytecodes}, + types::internals::TransactionData, + vm::Vm, + }, + HistoryMode, +}; impl Vm { pub(crate) fn push_raw_transaction( @@ -35,8 +38,7 @@ impl Vm { .decommittment_processor .populate(codes_for_decommiter, timestamp); - let trusted_ergs_limit = - tx.trusted_ergs_limit(self.batch_env.block_gas_price_per_pubdata()); + let trusted_ergs_limit = tx.trusted_ergs_limit(); let memory = self.bootloader_state.push_tx( tx, @@ -58,8 +60,7 @@ impl Vm { with_compression: bool, ) { let tx: TransactionData = tx.into(); - let block_gas_per_pubdata_byte = self.batch_env.block_gas_price_per_pubdata(); - let overhead = tx.overhead_gas(block_gas_per_pubdata_byte as u32); + let overhead = tx.overhead_gas(); self.push_raw_transaction(tx, overhead, 0, with_compression); } } diff --git a/core/lib/multivm/src/versions/vm_latest/mod.rs b/core/lib/multivm/src/versions/vm_latest/mod.rs index 49cd7111f6f..c3df28f6c31 100644 --- a/core/lib/multivm/src/versions/vm_latest/mod.rs +++ b/core/lib/multivm/src/versions/vm_latest/mod.rs @@ -1,15 +1,20 @@ -pub use old_vm::{ - history_recorder::{HistoryDisabled, HistoryEnabled, HistoryMode}, - memory::SimpleMemory, -}; - -pub use oracles::storage::StorageOracle; - -pub use tracers::{ - dispatcher::TracerDispatcher, - traits::{ToTracerPointer, TracerPointer, VmTracer}, +pub use self::{ + bootloader_state::BootloaderState, + old_vm::{ + history_recorder::{ + AppDataFrameManagerWithHistory, HistoryDisabled, HistoryEnabled, HistoryMode, + }, + memory::SimpleMemory, + }, + oracles::storage::StorageOracle, + tracers::{ + dispatcher::TracerDispatcher, + traits::{ToTracerPointer, TracerPointer, VmTracer}, + }, + types::internals::ZkSyncVmState, + utils::transaction_encoding::TransactionVmExt, + vm::Vm, }; - pub use crate::interface::types::{ inputs::{L1BatchEnv, L2BlockEnv, SystemEnv, TxExecutionMode, VmExecutionMode}, outputs::{ @@ -17,23 +22,15 @@ pub use crate::interface::types::{ Refunds, VmExecutionLogs, VmExecutionResultAndLogs, VmExecutionStatistics, VmMemoryMetrics, }, }; -pub use types::internals::ZkSyncVmState; -pub use utils::transaction_encoding::TransactionVmExt; - -pub use bootloader_state::BootloaderState; - -pub use vm::Vm; mod bootloader_state; +pub mod constants; mod implementation; mod old_vm; mod oracles; +#[cfg(test)] +mod tests; pub(crate) mod tracers; mod types; -mod vm; - -pub mod constants; pub mod utils; - -#[cfg(test)] -mod tests; +mod vm; diff --git a/core/lib/multivm/src/versions/vm_latest/old_vm/event_sink.rs 
b/core/lib/multivm/src/versions/vm_latest/old_vm/event_sink.rs index 4174d9f4f17..7e3097aaeb4 100644 --- a/core/lib/multivm/src/versions/vm_latest/old_vm/event_sink.rs +++ b/core/lib/multivm/src/versions/vm_latest/old_vm/event_sink.rs @@ -1,10 +1,7 @@ -use crate::vm_latest::old_vm::{ - history_recorder::{AppDataFrameManagerWithHistory, HistoryEnabled, HistoryMode}, - oracles::OracleWithHistory, -}; -use itertools::Itertools; use std::collections::HashMap; -use zk_evm_1_4_0::{ + +use itertools::Itertools; +use zk_evm_1_4_1::{ abstractions::EventSink, aux_structures::{LogQuery, Timestamp}, reference_impls::event_sink::EventMessage, @@ -14,6 +11,11 @@ use zk_evm_1_4_0::{ }; use zksync_types::U256; +use crate::vm_latest::old_vm::{ + history_recorder::{AppDataFrameManagerWithHistory, HistoryEnabled, HistoryMode}, + oracles::OracleWithHistory, +}; + #[derive(Debug, Clone, PartialEq, Default)] pub struct InMemoryEventSink { frames_stack: AppDataFrameManagerWithHistory, H>, @@ -52,7 +54,7 @@ impl InMemoryEventSink { pub fn log_queries_after_timestamp(&self, from_timestamp: Timestamp) -> &[Box] { let events = self.frames_stack.forward().current_frame(); - // Select all of the last elements where e.timestamp >= from_timestamp. + // Select all of the last elements where `e.timestamp >= from_timestamp`. // Note, that using binary search here is dangerous, because the logs are not sorted by timestamp. events .rsplit(|e| e.timestamp < from_timestamp) diff --git a/core/lib/multivm/src/versions/vm_latest/old_vm/events.rs b/core/lib/multivm/src/versions/vm_latest/old_vm/events.rs index eed8fee4ac8..fc97b6f4a41 100644 --- a/core/lib/multivm/src/versions/vm_latest/old_vm/events.rs +++ b/core/lib/multivm/src/versions/vm_latest/old_vm/events.rs @@ -1,4 +1,4 @@ -use zk_evm_1_4_0::{ethereum_types::Address, reference_impls::event_sink::EventMessage}; +use zk_evm_1_4_1::{ethereum_types::Address, reference_impls::event_sink::EventMessage}; use zksync_types::{L1BatchNumber, VmEvent, EVENT_WRITER_ADDRESS, H256}; use zksync_utils::{be_chunks_to_h256_words, h256_to_account_address}; diff --git a/core/lib/multivm/src/versions/vm_latest/old_vm/history_recorder.rs b/core/lib/multivm/src/versions/vm_latest/old_vm/history_recorder.rs index 7c0490044d6..d4c5b6367a9 100644 --- a/core/lib/multivm/src/versions/vm_latest/old_vm/history_recorder.rs +++ b/core/lib/multivm/src/versions/vm_latest/old_vm/history_recorder.rs @@ -1,11 +1,10 @@ use std::{collections::HashMap, fmt::Debug, hash::Hash}; -use zk_evm_1_4_0::{ +use zk_evm_1_4_1::{ aux_structures::Timestamp, vm_state::PrimitiveValue, zkevm_opcode_defs::{self}, }; - use zksync_state::{StoragePtr, WriteStorage}; use zksync_types::{StorageKey, U256}; use zksync_utils::{h256_to_u256, u256_to_h256}; @@ -13,14 +12,14 @@ use zksync_utils::{h256_to_u256, u256_to_h256}; pub(crate) type MemoryWithHistory = HistoryRecorder; pub(crate) type IntFrameManagerWithHistory = HistoryRecorder, H>; -// Within the same cycle, timestamps in range timestamp..timestamp+TIME_DELTA_PER_CYCLE-1 +// Within the same cycle, timestamps in range `timestamp..timestamp+TIME_DELTA_PER_CYCLE-1` // can be used. This can sometimes violate monotonicity of the timestamp within the // same cycle, so it should be normalized. 
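// For example (with illustrative numbers): if `TIME_DELTA_PER_CYCLE` were 4, raw
// timestamps 20, 21, 22 and 23 would all be normalized down to 20, the start of that cycle.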
#[inline] fn normalize_timestamp(timestamp: Timestamp) -> Timestamp { let timestamp = timestamp.0; - // Making sure it is divisible by TIME_DELTA_PER_CYCLE + // Making sure it is divisible by `TIME_DELTA_PER_CYCLE` Timestamp(timestamp - timestamp % zkevm_opcode_defs::TIME_DELTA_PER_CYCLE) } @@ -438,7 +437,7 @@ impl HistoryRecorder, H> { } #[derive(Debug, Clone, PartialEq)] -pub(crate) struct AppDataFrameManagerWithHistory { +pub struct AppDataFrameManagerWithHistory { forward: HistoryRecorder, H>, rollback: HistoryRecorder, H>, } @@ -771,11 +770,14 @@ impl HistoryRecorder, H> { #[cfg(test)] mod tests { - use crate::vm_latest::old_vm::history_recorder::{HistoryRecorder, MemoryWrapper}; - use crate::vm_latest::HistoryDisabled; - use zk_evm_1_4_0::{aux_structures::Timestamp, vm_state::PrimitiveValue}; + use zk_evm_1_4_1::{aux_structures::Timestamp, vm_state::PrimitiveValue}; use zksync_types::U256; + use crate::vm_latest::{ + old_vm::history_recorder::{HistoryRecorder, MemoryWrapper}, + HistoryDisabled, + }; + #[test] fn memory_equality() { let mut a: HistoryRecorder = Default::default(); diff --git a/core/lib/multivm/src/versions/vm_latest/old_vm/memory.rs b/core/lib/multivm/src/versions/vm_latest/old_vm/memory.rs index 5694a725d93..120ff43acff 100644 --- a/core/lib/multivm/src/versions/vm_latest/old_vm/memory.rs +++ b/core/lib/multivm/src/versions/vm_latest/old_vm/memory.rs @@ -1,16 +1,18 @@ -use zk_evm_1_4_0::abstractions::{Memory, MemoryType}; -use zk_evm_1_4_0::aux_structures::{MemoryPage, MemoryQuery, Timestamp}; -use zk_evm_1_4_0::vm_state::PrimitiveValue; -use zk_evm_1_4_0::zkevm_opcode_defs::FatPointer; +use zk_evm_1_4_1::{ + abstractions::{Memory, MemoryType}, + aux_structures::{MemoryPage, MemoryQuery, Timestamp}, + vm_state::PrimitiveValue, + zkevm_opcode_defs::FatPointer, +}; use zksync_types::U256; -use crate::vm_latest::old_vm::history_recorder::{ - FramedStack, HistoryEnabled, HistoryMode, IntFrameManagerWithHistory, MemoryWithHistory, - MemoryWrapper, WithHistory, -}; -use crate::vm_latest::old_vm::oracles::OracleWithHistory; -use crate::vm_latest::old_vm::utils::{ - aux_heap_page_from_base, heap_page_from_base, stack_page_from_base, +use crate::vm_latest::old_vm::{ + history_recorder::{ + FramedStack, HistoryEnabled, HistoryMode, IntFrameManagerWithHistory, MemoryWithHistory, + MemoryWrapper, WithHistory, + }, + oracles::OracleWithHistory, + utils::{aux_heap_page_from_base, heap_page_from_base, stack_page_from_base}, }; #[derive(Debug, Clone, PartialEq)] @@ -280,7 +282,7 @@ impl Memory for SimpleMemory { let returndata_page = returndata_fat_pointer.memory_page; for &page in current_observable_pages { - // If the page's number is greater than or equal to the base_page, + // If the page's number is greater than or equal to the `base_page`, // it means that it was created by the internal calls of this contract. // We need to add this check as the calldata pointer is also part of the // observable pages. 
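A worked illustration of the intersection logic in the hunk below (hypothetical numbers): for `word_number = 2` the word covers bytes `64..=95`; with `start = 70` and `end = 100` the intersection is `70..=95`, so only those bytes are extracted from that word.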
@@ -297,7 +299,7 @@ impl Memory for SimpleMemory { } } -// It is expected that there is some intersection between [word_number*32..word_number*32+31] and [start, end] +// It is expected that there is some intersection between `[word_number*32..word_number*32+31]` and `[start, end]` fn extract_needed_bytes_from_word( word_value: Vec, word_number: usize, @@ -305,7 +307,7 @@ fn extract_needed_bytes_from_word( end: usize, ) -> Vec { let word_start = word_number * 32; - let word_end = word_start + 31; // Note, that at word_start + 32 a new word already starts + let word_end = word_start + 31; // Note, that at `word_start + 32` a new word already starts let intersection_left = std::cmp::max(word_start, start); let intersection_right = std::cmp::min(word_end, end); diff --git a/core/lib/multivm/src/versions/vm_latest/old_vm/oracles/decommitter.rs b/core/lib/multivm/src/versions/vm_latest/old_vm/oracles/decommitter.rs index fe5416cd120..86e8f6f6da1 100644 --- a/core/lib/multivm/src/versions/vm_latest/old_vm/oracles/decommitter.rs +++ b/core/lib/multivm/src/versions/vm_latest/old_vm/oracles/decommitter.rs @@ -1,24 +1,19 @@ -use std::collections::HashMap; -use std::fmt::Debug; +use std::{collections::HashMap, fmt::Debug}; -use crate::vm_latest::old_vm::history_recorder::{ - HistoryEnabled, HistoryMode, HistoryRecorder, WithHistory, +use zk_evm_1_4_1::{ + abstractions::{DecommittmentProcessor, Memory, MemoryType}, + aux_structures::{ + DecommittmentQuery, MemoryIndex, MemoryLocation, MemoryPage, MemoryQuery, Timestamp, + }, }; - -use zk_evm_1_4_0::abstractions::MemoryType; -use zk_evm_1_4_0::aux_structures::Timestamp; -use zk_evm_1_4_0::{ - abstractions::{DecommittmentProcessor, Memory}, - aux_structures::{DecommittmentQuery, MemoryIndex, MemoryLocation, MemoryPage, MemoryQuery}, -}; - use zksync_state::{ReadStorage, StoragePtr}; use zksync_types::U256; -use zksync_utils::bytecode::bytecode_len_in_words; -use zksync_utils::{bytes_to_be_words, u256_to_h256}; +use zksync_utils::{bytecode::bytecode_len_in_words, bytes_to_be_words, u256_to_h256}; use super::OracleWithHistory; - +use crate::vm_latest::old_vm::history_recorder::{ + HistoryEnabled, HistoryMode, HistoryRecorder, WithHistory, +}; /// The main job of the DecommiterOracle is to implement the DecommittmentProcessor trait - that is /// used by the VM to 'load' bytecodes into memory. #[derive(Debug)] @@ -70,7 +65,7 @@ impl DecommitterOracle { } } - /// Adds additional bytecodes. They will take precendent over the bytecodes from storage. + /// Adds additional bytecodes. They will take precedent over the bytecodes from storage. pub fn populate(&mut self, bytecodes: Vec<(U256, Vec)>, timestamp: Timestamp) { for (hash, bytecode) in bytecodes { self.known_bytecodes.insert(hash, bytecode, timestamp); @@ -173,14 +168,14 @@ impl DecommittmentProcess memory: &mut M, ) -> Result< ( - zk_evm_1_4_0::aux_structures::DecommittmentQuery, + zk_evm_1_4_1::aux_structures::DecommittmentQuery, Option>, ), anyhow::Error, > { self.decommitment_requests.push((), partial_query.timestamp); // First - check if we didn't fetch this bytecode in the past. - // If we did - we can just return the page that we used before (as the memory is read only). + // If we did - we can just return the page that we used before (as the memory is readonly). 
if let Some(memory_page) = self .decommitted_code_hashes .inner() diff --git a/core/lib/multivm/src/versions/vm_latest/old_vm/oracles/mod.rs b/core/lib/multivm/src/versions/vm_latest/old_vm/oracles/mod.rs index 3f8d2d0f138..7d463ad082b 100644 --- a/core/lib/multivm/src/versions/vm_latest/old_vm/oracles/mod.rs +++ b/core/lib/multivm/src/versions/vm_latest/old_vm/oracles/mod.rs @@ -1,4 +1,4 @@ -use zk_evm_1_4_0::aux_structures::Timestamp; +use zk_evm_1_4_1::aux_structures::Timestamp; pub(crate) mod decommitter; pub(crate) mod precompile; diff --git a/core/lib/multivm/src/versions/vm_latest/old_vm/oracles/precompile.rs b/core/lib/multivm/src/versions/vm_latest/old_vm/oracles/precompile.rs index ed3621fc497..8de51a9619d 100644 --- a/core/lib/multivm/src/versions/vm_latest/old_vm/oracles/precompile.rs +++ b/core/lib/multivm/src/versions/vm_latest/old_vm/oracles/precompile.rs @@ -1,14 +1,13 @@ -use zk_evm_1_4_0::{ - abstractions::Memory, - abstractions::PrecompileCyclesWitness, - abstractions::PrecompilesProcessor, +use std::convert::TryFrom; + +use zk_evm_1_4_1::{ + abstractions::{Memory, PrecompileCyclesWitness, PrecompilesProcessor}, aux_structures::{LogQuery, MemoryQuery, Timestamp}, - zk_evm_abstractions::precompiles::DefaultPrecompilesProcessor, + zk_evm_abstractions::precompiles::{ecrecover, keccak256, sha256, PrecompileAddress}, }; -use crate::vm_latest::old_vm::history_recorder::{HistoryEnabled, HistoryMode, HistoryRecorder}; - use super::OracleWithHistory; +use crate::vm_latest::old_vm::history_recorder::{HistoryEnabled, HistoryMode, HistoryRecorder}; /// Wrap of DefaultPrecompilesProcessor that store queue /// of timestamp when precompiles are called to be executed. @@ -16,40 +15,44 @@ use super::OracleWithHistory; /// saving timestamps allows us to check the exact number /// of log queries, that were used during the tx execution. #[derive(Debug, Clone)] -pub struct PrecompilesProcessorWithHistory { +pub struct PrecompilesProcessorWithHistory { pub timestamp_history: HistoryRecorder, H>, - pub default_precompiles_processor: DefaultPrecompilesProcessor, + pub precompile_cycles_history: HistoryRecorder, H>, } -impl Default for PrecompilesProcessorWithHistory { +impl Default for PrecompilesProcessorWithHistory { fn default() -> Self { Self { timestamp_history: Default::default(), - default_precompiles_processor: DefaultPrecompilesProcessor, + precompile_cycles_history: Default::default(), } } } -impl OracleWithHistory for PrecompilesProcessorWithHistory { +impl OracleWithHistory for PrecompilesProcessorWithHistory { fn rollback_to_timestamp(&mut self, timestamp: Timestamp) { self.timestamp_history.rollback_to_timestamp(timestamp); + self.precompile_cycles_history + .rollback_to_timestamp(timestamp); } } -impl PrecompilesProcessorWithHistory { +impl PrecompilesProcessorWithHistory { pub fn get_timestamp_history(&self) -> &Vec { self.timestamp_history.inner() } pub fn delete_history(&mut self) { self.timestamp_history.delete_history(); + self.precompile_cycles_history.delete_history(); } } -impl PrecompilesProcessor for PrecompilesProcessorWithHistory { +impl PrecompilesProcessor for PrecompilesProcessorWithHistory { fn start_frame(&mut self) { - self.default_precompiles_processor.start_frame(); + // there are no precompiles to rollback, do nothing } + fn execute_precompile( &mut self, monotonic_cycle_counter: u32, @@ -63,13 +66,47 @@ impl PrecompilesProcessor for PrecompilesProcesso // where operations and timestamp have different types. 
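        // (In the dispatch added below, `address_low` is the low 16 bits of the 20-byte
        // precompile address, assembled little-endian from its last two bytes; addresses
        // that do not map to a known precompile simply fall through and record nothing.)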
        self.timestamp_history
            .push(query.timestamp, query.timestamp);
-        self.default_precompiles_processor.execute_precompile(
-            monotonic_cycle_counter,
-            query,
-            memory,
-        )
+
+        let address_low = u16::from_le_bytes([query.address.0[19], query.address.0[18]]);
+        if let Ok(precompile_address) = PrecompileAddress::try_from(address_low) {
+            let rounds = match precompile_address {
+                PrecompileAddress::Keccak256 => {
+                    // pure function call, non-revertible
+                    keccak256::keccak256_rounds_function::(
+                        monotonic_cycle_counter,
+                        query,
+                        memory,
+                    )
+                    .0
+                }
+                PrecompileAddress::SHA256 => {
+                    // pure function call, non-revertible
+                    sha256::sha256_rounds_function::(
+                        monotonic_cycle_counter,
+                        query,
+                        memory,
+                    )
+                    .0
+                }
+                PrecompileAddress::Ecrecover => {
+                    // pure function call, non-revertible
+                    ecrecover::ecrecover_function::(
+                        monotonic_cycle_counter,
+                        query,
+                        memory,
+                    )
+                    .0
+                }
+            };
+
+            self.precompile_cycles_history
+                .push((precompile_address, rounds), query.timestamp);
+        };
+
+        None
     }
+
     fn finish_frame(&mut self, _panicked: bool) {
-        self.default_precompiles_processor.finish_frame(_panicked);
+        // there are no revertible precompiles yet, so we are ok
     }
 }
diff --git a/core/lib/multivm/src/versions/vm_latest/old_vm/utils.rs b/core/lib/multivm/src/versions/vm_latest/old_vm/utils.rs
index afaa19cac87..6320c6bcb1f 100644
--- a/core/lib/multivm/src/versions/vm_latest/old_vm/utils.rs
+++ b/core/lib/multivm/src/versions/vm_latest/old_vm/utils.rs
@@ -1,22 +1,18 @@
-use crate::vm_latest::old_vm::memory::SimpleMemory;
-
-use crate::vm_latest::types::internals::ZkSyncVmState;
-use crate::vm_latest::HistoryMode;
-
-use zk_evm_1_4_0::zkevm_opcode_defs::decoding::{
-    AllowedPcOrImm, EncodingModeProduction, VmEncodingMode,
-};
-use zk_evm_1_4_0::zkevm_opcode_defs::RET_IMPLICIT_RETURNDATA_PARAMS_REGISTER;
-use zk_evm_1_4_0::{
+use zk_evm_1_4_1::{
     aux_structures::{MemoryPage, Timestamp},
     vm_state::PrimitiveValue,
-    zkevm_opcode_defs::FatPointer,
+    zkevm_opcode_defs::{
+        decoding::{AllowedPcOrImm, EncodingModeProduction, VmEncodingMode},
+        FatPointer, RET_IMPLICIT_RETURNDATA_PARAMS_REGISTER,
+    },
 };
 use zksync_state::WriteStorage;
-use zksync_system_constants::L1_GAS_PER_PUBDATA_BYTE;
-
 use zksync_types::{Address, U256};

+use crate::vm_latest::{
+    old_vm::memory::SimpleMemory, types::internals::ZkSyncVmState, HistoryMode,
+};
+
 #[derive(Debug, Clone)]
 pub(crate) enum VmExecutionResult {
     Ok(Vec),
@@ -99,12 +95,6 @@ pub(crate) fn precompile_calls_count_after_timestamp(
     sorted_timestamps.len() - sorted_timestamps.partition_point(|t| *t < from_timestamp)
 }

-pub(crate) fn eth_price_per_pubdata_byte(l1_gas_price: u64) -> u64 {
-    // This value will typically be a lot less than u64
-    // unless the gas price on L1 goes beyond tens of millions of gwei
-    l1_gas_price * (L1_GAS_PER_PUBDATA_BYTE as u64)
-}
-
 pub(crate) fn vm_may_have_ended_inner(
     vm: &ZkSyncVmState,
 ) -> Option {
@@ -125,7 +115,7 @@ pub(crate) fn vm_may_have_ended_inner(
         }
         (false, _) => None,
         (true, l) if l == outer_eh_location => {
-            // check r1,r2,r3
+            // check `r1,r2,r3`
             if vm.local_state.flags.overflow_or_less_than_flag {
                 Some(VmExecutionResult::Panic)
             } else {
diff --git a/core/lib/multivm/src/versions/vm_latest/oracles/storage.rs b/core/lib/multivm/src/versions/vm_latest/oracles/storage.rs
index beec2fa086f..72d6e1f696b 100644
--- a/core/lib/multivm/src/versions/vm_latest/oracles/storage.rs
+++ b/core/lib/multivm/src/versions/vm_latest/oracles/storage.rs
@@ -1,28 +1,33 @@
 use std::collections::HashMap;
-use
crate::vm_latest::old_vm::history_recorder::{ - AppDataFrameManagerWithHistory, HashMapHistoryEvent, HistoryEnabled, HistoryMode, - HistoryRecorder, StorageWrapper, VectorHistoryEvent, WithHistory, -}; -use crate::vm_latest::old_vm::oracles::OracleWithHistory; - -use zk_evm_1_4_0::abstractions::RefundedAmounts; -use zk_evm_1_4_0::zkevm_opcode_defs::system_params::INITIAL_STORAGE_WRITE_PUBDATA_BYTES; -use zk_evm_1_4_0::{ - abstractions::{RefundType, Storage as VmStorageOracle}, +use zk_evm_1_4_1::{ + abstractions::{RefundType, RefundedAmounts, Storage as VmStorageOracle}, aux_structures::{LogQuery, Timestamp}, + zkevm_opcode_defs::system_params::INITIAL_STORAGE_WRITE_PUBDATA_BYTES, }; - use zksync_state::{StoragePtr, WriteStorage}; -use zksync_types::utils::storage_key_for_eth_balance; -use zksync_types::writes::compression::compress_with_best_strategy; -use zksync_types::writes::{BYTES_PER_DERIVED_KEY, BYTES_PER_ENUMERATION_INDEX}; use zksync_types::{ + utils::storage_key_for_eth_balance, + writes::{ + compression::compress_with_best_strategy, BYTES_PER_DERIVED_KEY, + BYTES_PER_ENUMERATION_INDEX, + }, AccountTreeId, Address, StorageKey, StorageLogQuery, StorageLogQueryType, BOOTLOADER_ADDRESS, U256, }; use zksync_utils::u256_to_h256; +use crate::{ + glue::GlueInto, + vm_latest::old_vm::{ + history_recorder::{ + AppDataFrameManagerWithHistory, HashMapHistoryEvent, HistoryEnabled, HistoryMode, + HistoryRecorder, StorageWrapper, VectorHistoryEvent, WithHistory, + }, + oracles::OracleWithHistory, + }, +}; + // While the storage does not support different shards, it was decided to write the // code of the StorageOracle with the shard parameters in mind. pub(crate) fn triplet_to_storage_key(_shard_id: u8, address: Address, key: U256) -> StorageKey { @@ -51,12 +56,17 @@ pub struct StorageOracle { pub(crate) paid_changes: HistoryRecorder, H>, // The map that contains all the first values read from storage for each slot. - // While formally it does not have to be rollbackable, we still do it to avoid memory bloat + // While formally it does not have to be capable of rolling back, we still do it to avoid memory bloat // for unused slots. pub(crate) initial_values: HistoryRecorder, H>, // Storage refunds that oracle has returned in `estimate_refunds_for_write`. pub(crate) returned_refunds: HistoryRecorder, H>, + + // Keeps track of storage keys that were ever written to. + pub(crate) written_keys: HistoryRecorder, HistoryEnabled>, + // Keeps track of storage keys that were ever read. 
+ pub(crate) read_keys: HistoryRecorder, HistoryEnabled>, } impl OracleWithHistory for StorageOracle { @@ -67,6 +77,8 @@ impl OracleWithHistory for StorageOracle { self.paid_changes.rollback_to_timestamp(timestamp); self.initial_values.rollback_to_timestamp(timestamp); self.returned_refunds.rollback_to_timestamp(timestamp); + self.written_keys.rollback_to_timestamp(timestamp); + self.read_keys.rollback_to_timestamp(timestamp); } } @@ -79,6 +91,8 @@ impl StorageOracle { paid_changes: Default::default(), initial_values: Default::default(), returned_refunds: Default::default(), + written_keys: Default::default(), + read_keys: Default::default(), } } @@ -89,6 +103,8 @@ impl StorageOracle { self.paid_changes.delete_history(); self.initial_values.delete_history(); self.returned_refunds.delete_history(); + self.written_keys.delete_history(); + self.read_keys.delete_history(); } fn is_storage_key_free(&self, key: &StorageKey) -> bool { @@ -106,8 +122,12 @@ impl StorageOracle { } } - pub fn read_value(&mut self, mut query: LogQuery) -> LogQuery { + fn read_value(&mut self, mut query: LogQuery) -> LogQuery { let key = triplet_to_storage_key(query.shard_id, query.address, query.key); + + if !self.read_keys.inner().contains_key(&key) { + self.read_keys.insert(key, (), query.timestamp); + } let current_value = self.storage.read_from_storage(&key); query.read_value = current_value; @@ -116,7 +136,7 @@ impl StorageOracle { self.frames_stack.push_forward( Box::new(StorageLogQuery { - log_query: query, + log_query: query.glue_into(), log_type: StorageLogQueryType::Read, }), query.timestamp, @@ -125,8 +145,11 @@ impl StorageOracle { query } - pub fn write_value(&mut self, query: LogQuery) -> LogQuery { + fn write_value(&mut self, query: LogQuery) -> LogQuery { let key = triplet_to_storage_key(query.shard_id, query.address, query.key); + if !self.written_keys.inner().contains_key(&key) { + self.written_keys.insert(key, (), query.timestamp); + } let current_value = self.storage .write_to_storage(key, query.written_value, query.timestamp); @@ -141,7 +164,7 @@ impl StorageOracle { self.set_initial_value(&key, current_value, query.timestamp); let mut storage_log_query = StorageLogQuery { - log_query: query, + log_query: query.glue_into(), log_type: log_query_type, }; self.frames_stack @@ -192,7 +215,7 @@ impl StorageOracle { let required_pubdata = self.base_price_for_write(&key, first_slot_value, current_slot_value); - // We assume that "prepaid_for_slot" represents both the number of pubdata published and the number of bytes paid by the previous transactions + // We assume that `prepaid_for_slot` represents both the number of pubdata published and the number of bytes paid by the previous transactions // as they should be identical. let prepaid_for_slot = self .pre_paid_changes @@ -272,9 +295,9 @@ impl StorageOracle { ) -> &[Box] { let logs = self.frames_stack.forward().current_frame(); - // Select all of the last elements where l.log_query.timestamp >= from_timestamp. + // Select all of the last elements where `l.log_query.timestamp >= from_timestamp`. // Note, that using binary search here is dangerous, because the logs are not sorted by timestamp. 
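// The same selection pattern in isolation (a simplified sketch with plain
// `u32` timestamps instead of the VM's types): `rsplit` walks segments from
// the back, so its first segment is the maximal trailing run containing no
// element older than `from`.
//
//     fn trailing_since(timestamps: &[u32], from: u32) -> &[u32] {
//         timestamps.rsplit(|&t| t < from).next().unwrap_or(&[])
//     }
//
//     assert_eq!(trailing_since(&[5, 2, 6, 7, 8], 5), &[6, 7, 8]);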
- logs.rsplit(|l| l.log_query.timestamp < from_timestamp) + logs.rsplit(|l| l.log_query.timestamp < from_timestamp.glue_into()) .next() .unwrap_or(&[]) } @@ -320,6 +343,7 @@ impl VmStorageOracle for StorageOracle { _monotonic_cycle_counter: u32, mut query: LogQuery, ) -> LogQuery { + // ``` // tracing::trace!( // "execute partial query cyc {:?} addr {:?} key {:?}, rw {:?}, wr {:?}, tx {:?}", // _monotonic_cycle_counter, @@ -329,6 +353,7 @@ impl VmStorageOracle for StorageOracle { // query.written_value, // query.tx_number_in_block // ); + // ``` assert!(!query.rollback); if query.rw_flag { // The number of bytes that have been compensated by the user to perform this write @@ -409,7 +434,7 @@ impl VmStorageOracle for StorageOracle { } }; - let LogQuery { written_value, .. } = query.log_query; + let LogQuery { written_value, .. } = query.log_query.glue_into(); let key = triplet_to_storage_key( query.log_query.shard_id, query.log_query.address, @@ -423,7 +448,7 @@ impl VmStorageOracle for StorageOracle { ); // Additional validation that the current value was correct - // Unwrap is safe because the return value from write_inner is the previous value in this leaf. + // Unwrap is safe because the return value from `write_inner` is the previous value in this leaf. // It is impossible to set leaf value to `None` assert_eq!(current_value, written_value); } @@ -437,14 +462,14 @@ impl VmStorageOracle for StorageOracle { /// Returns the number of bytes needed to publish a slot. // Since we need to publish the state diffs onchain, for each of the updated storage slot -// we basically need to publish the following pair: (). +// we basically need to publish the following pair: `()`. // For key we use the following optimization: // - The first time we publish it, we use 32 bytes. // Then, we remember a 8-byte id for this slot and assign it to it. We call this initial write. // - The second time we publish it, we will use the 4/5 byte representation of this 8-byte instead of the 32 // bytes of the entire key. // For value compression, we use a metadata byte which holds the length of the value and the operation from the -// previous state to the new state, and the compressed value. The maxiumum for this is 33 bytes. +// previous state to the new state, and the compressed value. The maximum for this is 33 bytes. // Total bytes for initial writes then becomes 65 bytes and repeated writes becomes 38 bytes. fn get_pubdata_price_bytes(initial_value: U256, final_value: U256, is_initial: bool) -> u32 { // TODO (SMA-1702): take into account the content of the log query, i.e. 
values that contain mostly zeroes diff --git a/core/lib/multivm/src/versions/vm_latest/tests/bootloader.rs b/core/lib/multivm/src/versions/vm_latest/tests/bootloader.rs index b2763f358be..78fb964f722 100644 --- a/core/lib/multivm/src/versions/vm_latest/tests/bootloader.rs +++ b/core/lib/multivm/src/versions/vm_latest/tests/bootloader.rs @@ -1,15 +1,17 @@ use zksync_types::U256; -use crate::interface::{Halt, TxExecutionMode, VmExecutionMode, VmInterface}; -use crate::vm_latest::constants::BOOTLOADER_HEAP_PAGE; -use crate::vm_latest::tests::tester::VmTesterBuilder; -use crate::vm_latest::tests::utils::{ - get_bootloader, verify_required_memory, BASE_SYSTEM_CONTRACTS, +use crate::{ + interface::{ExecutionResult, Halt, TxExecutionMode, VmExecutionMode, VmInterface}, + vm_latest::{ + constants::BOOTLOADER_HEAP_PAGE, + tests::{ + tester::VmTesterBuilder, + utils::{get_bootloader, verify_required_memory, BASE_SYSTEM_CONTRACTS}, + }, + HistoryEnabled, + }, }; -use crate::interface::ExecutionResult; -use crate::vm_latest::HistoryEnabled; - #[test] fn test_dummy_bootloader() { let mut base_system_contracts = BASE_SYSTEM_CONTRACTS.clone(); diff --git a/core/lib/multivm/src/versions/vm_latest/tests/bytecode_publishing.rs b/core/lib/multivm/src/versions/vm_latest/tests/bytecode_publishing.rs index e574a881d91..a0c10addff9 100644 --- a/core/lib/multivm/src/versions/vm_latest/tests/bytecode_publishing.rs +++ b/core/lib/multivm/src/versions/vm_latest/tests/bytecode_publishing.rs @@ -1,10 +1,16 @@ use zksync_types::event::extract_long_l2_to_l1_messages; use zksync_utils::bytecode::compress_bytecode; -use crate::interface::{TxExecutionMode, VmExecutionMode, VmInterface}; -use crate::vm_latest::tests::tester::{DeployContractsTx, TxType, VmTesterBuilder}; -use crate::vm_latest::tests::utils::read_test_contract; -use crate::vm_latest::HistoryEnabled; +use crate::{ + interface::{TxExecutionMode, VmExecutionMode, VmInterface}, + vm_latest::{ + tests::{ + tester::{DeployContractsTx, TxType, VmTesterBuilder}, + utils::read_test_contract, + }, + HistoryEnabled, + }, +}; #[test] fn test_bytecode_publishing() { diff --git a/core/lib/multivm/src/versions/vm_latest/tests/call_tracer.rs b/core/lib/multivm/src/versions/vm_latest/tests/call_tracer.rs index e5b1ce15fcd..2f8f37e081b 100644 --- a/core/lib/multivm/src/versions/vm_latest/tests/call_tracer.rs +++ b/core/lib/multivm/src/versions/vm_latest/tests/call_tracer.rs @@ -1,13 +1,21 @@ -use crate::interface::{TxExecutionMode, VmExecutionMode, VmInterface}; -use crate::tracers::CallTracer; -use crate::vm_latest::constants::BLOCK_GAS_LIMIT; -use crate::vm_latest::tests::tester::VmTesterBuilder; -use crate::vm_latest::tests::utils::{read_max_depth_contract, read_test_contract}; -use crate::vm_latest::{HistoryEnabled, ToTracerPointer}; -use once_cell::sync::OnceCell; use std::sync::Arc; + +use once_cell::sync::OnceCell; use zksync_types::{Address, Execute}; +use crate::{ + interface::{TxExecutionMode, VmExecutionMode, VmInterface}, + tracers::CallTracer, + vm_latest::{ + constants::BLOCK_GAS_LIMIT, + tests::{ + tester::VmTesterBuilder, + utils::{read_max_depth_contract, read_test_contract}, + }, + HistoryEnabled, ToTracerPointer, + }, +}; + // This test is ultra slow, so it's ignored by default. 
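// (Ignored tests like this one can still be run on demand, e.g. with
// `cargo test -- --ignored`.)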
#[test] #[ignore] diff --git a/core/lib/multivm/src/versions/vm_latest/tests/circuits.rs b/core/lib/multivm/src/versions/vm_latest/tests/circuits.rs new file mode 100644 index 00000000000..bc19fc8793a --- /dev/null +++ b/core/lib/multivm/src/versions/vm_latest/tests/circuits.rs @@ -0,0 +1,44 @@ +use zksync_types::{Address, Execute, U256}; + +use crate::{ + interface::{TxExecutionMode, VmExecutionMode, VmInterface}, + vm_latest::{constants::BLOCK_GAS_LIMIT, tests::tester::VmTesterBuilder, HistoryEnabled}, +}; + +// Checks that estimated number of circuits for simple transfer doesn't differ much +// from hardcoded expected value. +#[test] +fn test_circuits() { + let mut vm = VmTesterBuilder::new(HistoryEnabled) + .with_empty_in_memory_storage() + .with_random_rich_accounts(1) + .with_deployer() + .with_gas_limit(BLOCK_GAS_LIMIT) + .with_execution_mode(TxExecutionMode::VerifyExecute) + .build(); + + let account = &mut vm.rich_accounts[0]; + let tx = account.get_l2_tx_for_execute( + Execute { + contract_address: Address::random(), + calldata: Vec::new(), + value: U256::from(1u8), + factory_deps: None, + }, + None, + ); + vm.vm.push_transaction(tx); + let res = vm.vm.inspect(Default::default(), VmExecutionMode::OneTx); + + const EXPECTED_CIRCUITS_USED: f32 = 4.8685; + let delta = + (res.statistics.estimated_circuits_used - EXPECTED_CIRCUITS_USED) / EXPECTED_CIRCUITS_USED; + + if delta.abs() > 0.1 { + panic!( + "Estimation differs from expected result by too much: {}%, expected value: {}", + delta * 100.0, + res.statistics.estimated_circuits_used + ); + } +} diff --git a/core/lib/multivm/src/versions/vm_latest/tests/default_aa.rs b/core/lib/multivm/src/versions/vm_latest/tests/default_aa.rs index b31e32270d9..05e3e64f9c9 100644 --- a/core/lib/multivm/src/versions/vm_latest/tests/default_aa.rs +++ b/core/lib/multivm/src/versions/vm_latest/tests/default_aa.rs @@ -1,13 +1,22 @@ use zksync_system_constants::L2_ETH_TOKEN_ADDRESS; -use zksync_types::system_contracts::{DEPLOYMENT_NONCE_INCREMENT, TX_NONCE_INCREMENT}; - -use zksync_types::{get_code_key, get_known_code_key, get_nonce_key, AccountTreeId, U256}; +use zksync_types::{ + get_code_key, get_known_code_key, get_nonce_key, + system_contracts::{DEPLOYMENT_NONCE_INCREMENT, TX_NONCE_INCREMENT}, + AccountTreeId, U256, +}; use zksync_utils::u256_to_h256; -use crate::interface::{TxExecutionMode, VmExecutionMode, VmInterface}; -use crate::vm_latest::tests::tester::{DeployContractsTx, TxType, VmTesterBuilder}; -use crate::vm_latest::tests::utils::{get_balance, read_test_contract, verify_required_storage}; -use crate::vm_latest::HistoryEnabled; +use crate::{ + interface::{TxExecutionMode, VmExecutionMode, VmInterface}, + vm_latest::{ + tests::{ + tester::{DeployContractsTx, TxType, VmTesterBuilder}, + utils::{get_balance, read_test_contract, verify_required_storage}, + }, + utils::fee::get_batch_base_fee, + HistoryEnabled, + }, +}; #[test] fn test_default_aa_interaction() { @@ -26,7 +35,7 @@ fn test_default_aa_interaction() { bytecode_hash, address, } = account.get_deploy_tx(&counter, None, TxType::L2); - let maximal_fee = tx.gas_limit() * vm.vm.batch_env.base_fee(); + let maximal_fee = tx.gas_limit() * get_batch_base_fee(&vm.vm.batch_env); vm.vm.push_transaction(tx); let result = vm.vm.execute(VmExecutionMode::OneTx); @@ -54,7 +63,8 @@ fn test_default_aa_interaction() { verify_required_storage(&vm.vm.state, expected_slots); let expected_fee = maximal_fee - - U256::from(result.refunds.gas_refunded) * U256::from(vm.vm.batch_env.base_fee()); + - 
U256::from(result.refunds.gas_refunded) + * U256::from(get_batch_base_fee(&vm.vm.batch_env)); let operator_balance = get_balance( AccountTreeId::new(L2_ETH_TOKEN_ADDRESS), &vm.fee_account, diff --git a/core/lib/multivm/src/versions/vm_latest/tests/gas_limit.rs b/core/lib/multivm/src/versions/vm_latest/tests/gas_limit.rs index 6bebffeacee..533d9ec660e 100644 --- a/core/lib/multivm/src/versions/vm_latest/tests/gas_limit.rs +++ b/core/lib/multivm/src/versions/vm_latest/tests/gas_limit.rs @@ -1,13 +1,13 @@ -use zksync_types::fee::Fee; -use zksync_types::Execute; +use zksync_types::{fee::Fee, Execute}; -use crate::vm_latest::constants::{ - BOOTLOADER_HEAP_PAGE, TX_DESCRIPTION_OFFSET, TX_GAS_LIMIT_OFFSET, +use crate::{ + interface::{TxExecutionMode, VmInterface}, + vm_latest::{ + constants::{BOOTLOADER_HEAP_PAGE, TX_DESCRIPTION_OFFSET, TX_GAS_LIMIT_OFFSET}, + tests::tester::VmTesterBuilder, + HistoryDisabled, + }, }; -use crate::vm_latest::tests::tester::VmTesterBuilder; - -use crate::interface::{TxExecutionMode, VmInterface}; -use crate::vm_latest::HistoryDisabled; /// Checks that `TX_GAS_LIMIT_OFFSET` constant is correct. #[test] diff --git a/core/lib/multivm/src/versions/vm_latest/tests/get_used_contracts.rs b/core/lib/multivm/src/versions/vm_latest/tests/get_used_contracts.rs index 688711d5a9c..38a4d7cbb43 100644 --- a/core/lib/multivm/src/versions/vm_latest/tests/get_used_contracts.rs +++ b/core/lib/multivm/src/versions/vm_latest/tests/get_used_contracts.rs @@ -1,19 +1,23 @@ use std::collections::{HashMap, HashSet}; use itertools::Itertools; - -use crate::HistoryMode; use zksync_state::WriteStorage; use zksync_system_constants::CONTRACT_DEPLOYER_ADDRESS; use zksync_test_account::Account; use zksync_types::{Execute, U256}; -use zksync_utils::bytecode::hash_bytecode; -use zksync_utils::h256_to_u256; - -use crate::interface::{TxExecutionMode, VmExecutionMode, VmInterface}; -use crate::vm_latest::tests::tester::{TxType, VmTesterBuilder}; -use crate::vm_latest::tests::utils::{read_test_contract, BASE_SYSTEM_CONTRACTS}; -use crate::vm_latest::{HistoryDisabled, Vm}; +use zksync_utils::{bytecode::hash_bytecode, h256_to_u256}; + +use crate::{ + interface::{TxExecutionMode, VmExecutionMode, VmInterface}, + vm_latest::{ + tests::{ + tester::{TxType, VmTesterBuilder}, + utils::{read_test_contract, BASE_SYSTEM_CONTRACTS}, + }, + HistoryDisabled, Vm, + }, + HistoryMode, +}; #[test] fn test_get_used_contracts() { @@ -25,7 +29,7 @@ fn test_get_used_contracts() { assert!(known_bytecodes_without_aa_code(&vm.vm).is_empty()); // create and push and execute some not-empty factory deps transaction with success status - // to check that get_used_contracts() updates + // to check that `get_used_contracts()` updates let contract_code = read_test_contract(); let mut account = Account::random(); let tx = account.get_deploy_tx(&contract_code, None, TxType::L1 { serial_id: 0 }); @@ -38,7 +42,7 @@ fn test_get_used_contracts() { .get_used_contracts() .contains(&h256_to_u256(tx.bytecode_hash))); - // Note: Default_AA will be in the list of used contracts if l2 tx is used + // Note: `Default_AA` will be in the list of used contracts if L2 tx is used assert_eq!( vm.vm .get_used_contracts() @@ -51,7 +55,7 @@ fn test_get_used_contracts() { ); // create push and execute some non-empty factory deps transaction that fails - // (known_bytecodes will be updated but we expect get_used_contracts() to not be updated) + // (`known_bytecodes` will be updated but we expect `get_used_contracts()` to not be updated) let calldata = [1, 2, 
3]; let big_calldata: Vec = calldata diff --git a/core/lib/multivm/src/versions/vm_latest/tests/is_write_initial.rs b/core/lib/multivm/src/versions/vm_latest/tests/is_write_initial.rs index d40f9109dcb..d5a6679502b 100644 --- a/core/lib/multivm/src/versions/vm_latest/tests/is_write_initial.rs +++ b/core/lib/multivm/src/versions/vm_latest/tests/is_write_initial.rs @@ -1,10 +1,16 @@ use zksync_state::ReadStorage; use zksync_types::get_nonce_key; -use crate::interface::{TxExecutionMode, VmExecutionMode, VmInterface}; -use crate::vm_latest::tests::tester::{Account, TxType, VmTesterBuilder}; -use crate::vm_latest::tests::utils::read_test_contract; -use crate::vm_latest::HistoryDisabled; +use crate::{ + interface::{TxExecutionMode, VmExecutionMode, VmInterface}, + vm_latest::{ + tests::{ + tester::{Account, TxType, VmTesterBuilder}, + utils::read_test_contract, + }, + HistoryDisabled, + }, +}; #[test] fn test_is_write_initial_behaviour() { diff --git a/core/lib/multivm/src/versions/vm_latest/tests/l1_tx_execution.rs b/core/lib/multivm/src/versions/vm_latest/tests/l1_tx_execution.rs index 5c1bdbad58a..fe2987d76ac 100644 --- a/core/lib/multivm/src/versions/vm_latest/tests/l1_tx_execution.rs +++ b/core/lib/multivm/src/versions/vm_latest/tests/l1_tx_execution.rs @@ -1,16 +1,25 @@ -use zksync_system_constants::BOOTLOADER_ADDRESS; -use zksync_types::l2_to_l1_log::{L2ToL1Log, UserL2ToL1Log}; -use zksync_types::storage_writes_deduplicator::StorageWritesDeduplicator; -use zksync_types::{get_code_key, get_known_code_key, U256}; +use ethabi::Token; +use zksync_contracts::l1_messenger_contract; +use zksync_system_constants::{BOOTLOADER_ADDRESS, L1_MESSENGER_ADDRESS}; +use zksync_types::{ + get_code_key, get_known_code_key, + l2_to_l1_log::{L2ToL1Log, UserL2ToL1Log}, + storage_writes_deduplicator::StorageWritesDeduplicator, + Execute, ExecuteTransactionCommon, U256, +}; use zksync_utils::u256_to_h256; -use crate::interface::{TxExecutionMode, VmExecutionMode, VmInterface}; -use crate::vm_latest::tests::tester::{TxType, VmTesterBuilder}; -use crate::vm_latest::tests::utils::{ - read_test_contract, verify_required_storage, BASE_SYSTEM_CONTRACTS, +use crate::{ + interface::{TxExecutionMode, VmExecutionMode, VmInterface}, + vm_latest::{ + tests::{ + tester::{TxType, VmTesterBuilder}, + utils::{read_test_contract, verify_required_storage, BASE_SYSTEM_CONTRACTS}, + }, + types::internals::TransactionData, + HistoryEnabled, + }, }; -use crate::vm_latest::types::internals::TransactionData; -use crate::vm_latest::HistoryEnabled; #[test] fn test_l1_tx_execution() { @@ -19,13 +28,13 @@ fn test_l1_tx_execution() { // using L1->L2 communication, the same it would likely be done during the priority mode. // There are always at least 7 initial writes here, because we pay fees from l1: - // - totalSupply of ETH token + // - `totalSupply` of ETH token // - balance of the refund recipient // - balance of the bootloader - // - tx_rolling hash + // - `tx_rolling` hash // - rolling hash of L2->L1 logs // - transaction number in block counter - // - L2->L1 log counter in L1Messenger + // - L2->L1 log counter in `L1Messenger` // TODO(PLA-537): right now we are using 4 slots instead of 7 due to 0 fee for transaction. let basic_initial_writes = 4; @@ -130,3 +139,51 @@ fn test_l1_tx_execution() { // There are only basic initial writes assert_eq!(res.initial_storage_writes - basic_initial_writes, 2); } + +#[test] +fn test_l1_tx_execution_high_gas_limit() { + // In this test, we try to execute an L1->L2 transaction with a high gas limit. 
+    // Usually, priority transactions with a dangerously high gas limit shouldn't even pass the checks on the L1,
+    // however, they might pass during the transition period to the new fee model, so we check that we can safely process those.
+
+    let mut vm = VmTesterBuilder::new(HistoryEnabled)
+        .with_empty_in_memory_storage()
+        .with_base_system_smart_contracts(BASE_SYSTEM_CONTRACTS.clone())
+        .with_execution_mode(TxExecutionMode::VerifyExecute)
+        .with_random_rich_accounts(1)
+        .build();
+
+    let account = &mut vm.rich_accounts[0];
+
+    let l1_messenger = l1_messenger_contract();
+
+    let contract_function = l1_messenger.function("sendToL1").unwrap();
+    let params = [
+        // Even a message of size 100k should not be able to be sent by a priority transaction
+        Token::Bytes(vec![0u8; 100_000]),
+    ];
+    let calldata = contract_function.encode_input(&params).unwrap();
+
+    let mut tx = account.get_l1_tx(
+        Execute {
+            contract_address: L1_MESSENGER_ADDRESS,
+            value: 0.into(),
+            factory_deps: None,
+            calldata,
+        },
+        0,
+    );
+
+    if let ExecuteTransactionCommon::L1(data) = &mut tx.common_data {
+        // Using some large gas limit
+        data.gas_limit = 300_000_000.into();
+    } else {
+        unreachable!()
+    };
+
+    vm.vm.push_transaction(tx);
+
+    let res = vm.vm.execute(VmExecutionMode::OneTx);
+
+    assert!(res.result.is_failed(), "The transaction should've failed");
+}
diff --git a/core/lib/multivm/src/versions/vm_latest/tests/l2_blocks.rs b/core/lib/multivm/src/versions/vm_latest/tests/l2_blocks.rs
index 4fd4e0207d4..d103ebf7ebc 100644
--- a/core/lib/multivm/src/versions/vm_latest/tests/l2_blocks.rs
+++ b/core/lib/multivm/src/versions/vm_latest/tests/l2_blocks.rs
@@ -3,30 +3,33 @@
 //! The description for each of the tests can be found in the corresponding `.yul` file.
 //!

-use crate::interface::{
-    ExecutionResult, Halt, L2BlockEnv, TxExecutionMode, VmExecutionMode, VmInterface,
-};
-use crate::vm_latest::constants::{
-    BOOTLOADER_HEAP_PAGE, TX_OPERATOR_L2_BLOCK_INFO_OFFSET, TX_OPERATOR_SLOTS_PER_L2_BLOCK_INFO,
-};
-use crate::vm_latest::tests::tester::default_l1_batch;
-use crate::vm_latest::tests::tester::VmTesterBuilder;
-use crate::vm_latest::utils::l2_blocks::get_l2_block_hash_key;
-use crate::vm_latest::{HistoryEnabled, Vm};
-use crate::HistoryMode;
-use zk_evm_1_4_0::aux_structures::Timestamp;
+use zk_evm_1_4_1::aux_structures::Timestamp;
 use zksync_state::WriteStorage;
 use zksync_system_constants::REQUIRED_L1_TO_L2_GAS_PER_PUBDATA_BYTE;
-use zksync_types::block::pack_block_info;
 use zksync_types::{
-    block::{legacy_miniblock_hash, miniblock_hash},
+    block::{pack_block_info, MiniblockHasher},
     AccountTreeId, Execute, ExecuteTransactionCommon, L1BatchNumber, L1TxCommonData,
-    MiniblockNumber, StorageKey, Transaction, H160, H256, SYSTEM_CONTEXT_ADDRESS,
-    SYSTEM_CONTEXT_BLOCK_INFO_POSITION, SYSTEM_CONTEXT_CURRENT_L2_BLOCK_INFO_POSITION,
-    SYSTEM_CONTEXT_CURRENT_TX_ROLLING_HASH_POSITION, U256,
+    MiniblockNumber, ProtocolVersionId, StorageKey, Transaction, H160, H256,
+    SYSTEM_CONTEXT_ADDRESS, SYSTEM_CONTEXT_BLOCK_INFO_POSITION,
+    SYSTEM_CONTEXT_CURRENT_L2_BLOCK_INFO_POSITION, SYSTEM_CONTEXT_CURRENT_TX_ROLLING_HASH_POSITION,
+    U256,
 };
 use zksync_utils::{h256_to_u256, u256_to_h256};

+use crate::{
+    interface::{ExecutionResult, Halt, L2BlockEnv, TxExecutionMode, VmExecutionMode, VmInterface},
+    vm_latest::{
+        constants::{
+            BOOTLOADER_HEAP_PAGE, TX_OPERATOR_L2_BLOCK_INFO_OFFSET,
+            TX_OPERATOR_SLOTS_PER_L2_BLOCK_INFO,
+        },
+        tests::tester::{default_l1_batch, VmTesterBuilder},
+        utils::l2_blocks::get_l2_block_hash_key,
+
HistoryEnabled, Vm, + }, + HistoryMode, +}; + fn get_l1_noop() -> Transaction { Transaction { common_data: ExecuteTransactionCommon::L1(L1TxCommonData { @@ -62,7 +65,7 @@ fn test_l2_block_initialization_timestamp() { vm.vm.bootloader_state.push_l2_block(L2BlockEnv { number: 1, timestamp: 0, - prev_block_hash: legacy_miniblock_hash(MiniblockNumber(0)), + prev_block_hash: MiniblockHasher::legacy_hash(MiniblockNumber(0)), max_virtual_blocks_to_create: 1, }); let l1_tx = get_l1_noop(); @@ -85,7 +88,7 @@ fn test_l2_block_initialization_number_non_zero() { let first_l2_block = L2BlockEnv { number: 0, timestamp: l1_batch.timestamp, - prev_block_hash: legacy_miniblock_hash(MiniblockNumber(0)), + prev_block_hash: MiniblockHasher::legacy_hash(MiniblockNumber(0)), max_virtual_blocks_to_create: 1, }; @@ -244,7 +247,7 @@ fn test_l2_block_new_l2_block() { let correct_first_block = L2BlockEnv { number: 1, timestamp: 1, - prev_block_hash: legacy_miniblock_hash(MiniblockNumber(0)), + prev_block_hash: MiniblockHasher::legacy_hash(MiniblockNumber(0)), max_virtual_blocks_to_create: 1, }; @@ -338,7 +341,7 @@ fn test_first_in_batch( ); storage_ptr.borrow_mut().set_value( prev_block_hash_position, - legacy_miniblock_hash(MiniblockNumber(miniblock_number - 1)), + MiniblockHasher::legacy_hash(MiniblockNumber(miniblock_number - 1)), ); // In order to skip checks from the Rust side of the VM, we firstly use some definitely correct L2 block info. @@ -367,6 +370,9 @@ fn test_first_in_batch( #[test] fn test_l2_block_first_in_batch() { + let prev_block_hash = MiniblockHasher::legacy_hash(MiniblockNumber(0)); + let prev_block_hash = MiniblockHasher::new(MiniblockNumber(1), 1, prev_block_hash) + .finalize(ProtocolVersionId::latest()); test_first_in_batch( 1, 1, @@ -377,17 +383,15 @@ fn test_l2_block_first_in_batch() { L2BlockEnv { number: 2, timestamp: 2, - prev_block_hash: miniblock_hash( - MiniblockNumber(1), - 1, - legacy_miniblock_hash(MiniblockNumber(0)), - H256::zero(), - ), + prev_block_hash, max_virtual_blocks_to_create: 1, }, None, ); + let prev_block_hash = MiniblockHasher::legacy_hash(MiniblockNumber(0)); + let prev_block_hash = MiniblockHasher::new(MiniblockNumber(1), 8, prev_block_hash) + .finalize(ProtocolVersionId::latest()); test_first_in_batch( 8, 1, @@ -398,8 +402,8 @@ fn test_l2_block_first_in_batch() { L2BlockEnv { number: 2, timestamp: 9, - prev_block_hash: miniblock_hash(MiniblockNumber(1), 8, legacy_miniblock_hash(MiniblockNumber(0)), H256::zero()), - max_virtual_blocks_to_create: 1 + prev_block_hash, + max_virtual_blocks_to_create: 1, }, Some(Halt::FailedToSetL2Block("The timestamp of the L2 block must be greater than or equal to the timestamp of the current batch".to_string())), ); diff --git a/core/lib/multivm/src/versions/vm_latest/tests/mod.rs b/core/lib/multivm/src/versions/vm_latest/tests/mod.rs index ffb38dd3725..b6c2cb654a8 100644 --- a/core/lib/multivm/src/versions/vm_latest/tests/mod.rs +++ b/core/lib/multivm/src/versions/vm_latest/tests/mod.rs @@ -1,15 +1,17 @@ mod bootloader; mod default_aa; // TODO - fix this test -// mod invalid_bytecode; +// `mod invalid_bytecode;` mod bytecode_publishing; mod call_tracer; +mod circuits; mod gas_limit; mod get_used_contracts; mod is_write_initial; mod l1_tx_execution; mod l2_blocks; mod nonce_holder; +mod precompiles; mod refunds; mod require_eip712; mod rollbacks; diff --git a/core/lib/multivm/src/versions/vm_latest/tests/nonce_holder.rs b/core/lib/multivm/src/versions/vm_latest/tests/nonce_holder.rs index dedaae5c933..309e26120af 100644 --- 
a/core/lib/multivm/src/versions/vm_latest/tests/nonce_holder.rs +++ b/core/lib/multivm/src/versions/vm_latest/tests/nonce_holder.rs @@ -1,12 +1,19 @@ use zksync_types::{Execute, Nonce}; -use crate::interface::VmRevertReason; -use crate::interface::{ExecutionResult, Halt, TxRevertReason, VmExecutionMode}; -use crate::interface::{TxExecutionMode, VmInterface}; -use crate::vm_latest::tests::tester::{Account, VmTesterBuilder}; -use crate::vm_latest::tests::utils::read_nonce_holder_tester; -use crate::vm_latest::types::internals::TransactionData; -use crate::vm_latest::HistoryEnabled; +use crate::{ + interface::{ + ExecutionResult, Halt, TxExecutionMode, TxRevertReason, VmExecutionMode, VmInterface, + VmRevertReason, + }, + vm_latest::{ + tests::{ + tester::{Account, VmTesterBuilder}, + utils::read_nonce_holder_tester, + }, + types::internals::TransactionData, + HistoryEnabled, + }, +}; pub enum NonceHolderTestMode { SetValueUnderNonce, @@ -52,7 +59,7 @@ fn test_nonce_holder() { comment: &'static str| { // In this test we have to reset VM state after each test case. Because once bootloader failed during the validation of the transaction, // it will fail again and again. At the same time we have to keep the same storage, because we want to keep the nonce holder contract state. - // The easiest way in terms of lifetimes is to reuse vm_builder to achieve it. + // The easiest way in terms of lifetimes is to reuse `vm_builder` to achieve it. vm.reset_state(true); let mut transaction_data: TransactionData = account .get_l2_tx_for_execute_with_nonce( diff --git a/core/lib/multivm/src/versions/vm_latest/tests/precompiles.rs b/core/lib/multivm/src/versions/vm_latest/tests/precompiles.rs new file mode 100644 index 00000000000..8556b17fd5b --- /dev/null +++ b/core/lib/multivm/src/versions/vm_latest/tests/precompiles.rs @@ -0,0 +1,136 @@ +use zk_evm_1_4_1::zk_evm_abstractions::precompiles::PrecompileAddress; +use zksync_types::{Address, Execute}; + +use crate::{ + interface::{TxExecutionMode, VmExecutionMode, VmInterface}, + vm_latest::{ + constants::BLOCK_GAS_LIMIT, + tests::{tester::VmTesterBuilder, utils::read_precompiles_contract}, + HistoryEnabled, + }, +}; + +#[test] +fn test_keccak() { + // Execute special transaction and check that at least 1000 keccak calls were made. + let contract = read_precompiles_contract(); + let address = Address::random(); + let mut vm = VmTesterBuilder::new(HistoryEnabled) + .with_empty_in_memory_storage() + .with_random_rich_accounts(1) + .with_deployer() + .with_gas_limit(BLOCK_GAS_LIMIT) + .with_execution_mode(TxExecutionMode::VerifyExecute) + .with_custom_contracts(vec![(contract, address, true)]) + .build(); + + // calldata for `doKeccak(1000)`. + let keccak1000_calldata = + "370f20ac00000000000000000000000000000000000000000000000000000000000003e8"; + + let account = &mut vm.rich_accounts[0]; + let tx = account.get_l2_tx_for_execute( + Execute { + contract_address: address, + calldata: hex::decode(keccak1000_calldata).unwrap(), + value: Default::default(), + factory_deps: None, + }, + None, + ); + vm.vm.push_transaction(tx); + let _ = vm.vm.inspect(Default::default(), VmExecutionMode::OneTx); + + let keccak_count = vm + .vm + .state + .precompiles_processor + .precompile_cycles_history + .inner() + .iter() + .filter(|(precompile, _)| precompile == &PrecompileAddress::Keccak256) + .count(); + + assert!(keccak_count >= 1000); +} + +#[test] +fn test_sha256() { + // Execute special transaction and check that at least 1000 `sha256` calls were made. 
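// All three tests in this file share one counting idiom: filter the
// `(address, cycles)` entries of `precompile_cycles_history` by precompile
// address and count them. Reduced to plain data (`Addr` is a stand-in enum
// for `PrecompileAddress`), the idea is just:
//
//     #[derive(PartialEq)]
//     enum Addr { Keccak256, Sha256, Ecrecover }
//
//     let history = [(Addr::Keccak256, 24usize), (Addr::Sha256, 48), (Addr::Keccak256, 24)];
//     let keccak_calls = history.iter().filter(|(a, _)| *a == Addr::Keccak256).count();
//     assert_eq!(keccak_calls, 2);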
+ let contract = read_precompiles_contract(); + let address = Address::random(); + let mut vm = VmTesterBuilder::new(HistoryEnabled) + .with_empty_in_memory_storage() + .with_random_rich_accounts(1) + .with_deployer() + .with_gas_limit(BLOCK_GAS_LIMIT) + .with_execution_mode(TxExecutionMode::VerifyExecute) + .with_custom_contracts(vec![(contract, address, true)]) + .build(); + + // calldata for `doSha256(1000)`. + let sha1000_calldata = + "5d0b4fb500000000000000000000000000000000000000000000000000000000000003e8"; + + let account = &mut vm.rich_accounts[0]; + let tx = account.get_l2_tx_for_execute( + Execute { + contract_address: address, + calldata: hex::decode(sha1000_calldata).unwrap(), + value: Default::default(), + factory_deps: None, + }, + None, + ); + vm.vm.push_transaction(tx); + let _ = vm.vm.inspect(Default::default(), VmExecutionMode::OneTx); + + let sha_count = vm + .vm + .state + .precompiles_processor + .precompile_cycles_history + .inner() + .iter() + .filter(|(precompile, _)| precompile == &PrecompileAddress::SHA256) + .count(); + + assert!(sha_count >= 1000); +} + +#[test] +fn test_ecrecover() { + // Execute simple transfer and check that exactly 1 `ecrecover` call was made (it's done during tx validation). + let mut vm = VmTesterBuilder::new(HistoryEnabled) + .with_empty_in_memory_storage() + .with_random_rich_accounts(1) + .with_deployer() + .with_gas_limit(BLOCK_GAS_LIMIT) + .with_execution_mode(TxExecutionMode::VerifyExecute) + .build(); + + let account = &mut vm.rich_accounts[0]; + let tx = account.get_l2_tx_for_execute( + Execute { + contract_address: account.address, + calldata: Vec::new(), + value: Default::default(), + factory_deps: None, + }, + None, + ); + vm.vm.push_transaction(tx); + let _ = vm.vm.inspect(Default::default(), VmExecutionMode::OneTx); + + let ecrecover_count = vm + .vm + .state + .precompiles_processor + .precompile_cycles_history + .inner() + .iter() + .filter(|(precompile, _)| precompile == &PrecompileAddress::Ecrecover) + .count(); + + assert_eq!(ecrecover_count, 1); +} diff --git a/core/lib/multivm/src/versions/vm_latest/tests/refunds.rs b/core/lib/multivm/src/versions/vm_latest/tests/refunds.rs index 9d4afcdb317..5662ee1fd66 100644 --- a/core/lib/multivm/src/versions/vm_latest/tests/refunds.rs +++ b/core/lib/multivm/src/versions/vm_latest/tests/refunds.rs @@ -1,9 +1,14 @@ -use crate::interface::{TxExecutionMode, VmExecutionMode, VmInterface}; -use crate::vm_latest::tests::tester::{DeployContractsTx, TxType, VmTesterBuilder}; -use crate::vm_latest::tests::utils::read_test_contract; - -use crate::vm_latest::types::internals::TransactionData; -use crate::vm_latest::HistoryEnabled; +use crate::{ + interface::{TxExecutionMode, VmExecutionMode, VmInterface}, + vm_latest::{ + tests::{ + tester::{DeployContractsTx, TxType, VmTesterBuilder}, + utils::read_test_contract, + }, + types::internals::TransactionData, + HistoryEnabled, + }, +}; #[test] fn test_predetermined_refunded_gas() { @@ -55,9 +60,8 @@ fn test_predetermined_refunded_gas() { .build(); let tx: TransactionData = tx.into(); - let block_gas_per_pubdata_byte = vm.vm.batch_env.block_gas_price_per_pubdata(); // Overhead - let overhead = tx.overhead_gas(block_gas_per_pubdata_byte as u32); + let overhead = tx.overhead_gas(); vm.vm .push_raw_transaction(tx.clone(), overhead, result.refunds.gas_refunded, true); diff --git a/core/lib/multivm/src/versions/vm_latest/tests/require_eip712.rs b/core/lib/multivm/src/versions/vm_latest/tests/require_eip712.rs index ad1d405a075..de4f27436af 100644 --- 
a/core/lib/multivm/src/versions/vm_latest/tests/require_eip712.rs +++ b/core/lib/multivm/src/versions/vm_latest/tests/require_eip712.rs @@ -1,22 +1,24 @@ use std::convert::TryInto; use ethabi::Token; - -use zksync_eth_signer::raw_ethereum_tx::TransactionParameters; -use zksync_eth_signer::EthereumSigner; +use zksync_eth_signer::{raw_ethereum_tx::TransactionParameters, EthereumSigner}; use zksync_system_constants::L2_ETH_TOKEN_ADDRESS; -use zksync_types::fee::Fee; -use zksync_types::l2::L2Tx; -use zksync_types::transaction_request::TransactionRequest; -use zksync_types::utils::storage_key_for_standard_token_balance; use zksync_types::{ - AccountTreeId, Address, Eip712Domain, Execute, L2ChainId, Nonce, Transaction, U256, + fee::Fee, l2::L2Tx, transaction_request::TransactionRequest, + utils::storage_key_for_standard_token_balance, AccountTreeId, Address, Eip712Domain, Execute, + L2ChainId, Nonce, Transaction, U256, }; -use crate::interface::{TxExecutionMode, VmExecutionMode, VmInterface}; -use crate::vm_latest::tests::tester::{Account, VmTester, VmTesterBuilder}; -use crate::vm_latest::tests::utils::read_many_owners_custom_account_contract; -use crate::vm_latest::HistoryDisabled; +use crate::{ + interface::{TxExecutionMode, VmExecutionMode, VmInterface}, + vm_latest::{ + tests::{ + tester::{Account, VmTester, VmTesterBuilder}, + utils::read_many_owners_custom_account_contract, + }, + HistoryDisabled, + }, +}; impl VmTester { pub(crate) fn get_eth_balance(&mut self, address: Address) -> U256 { @@ -35,8 +37,8 @@ impl VmTester { /// Currently we support both, but in the future, we should allow only EIP712 transactions to access the AA accounts. async fn test_require_eip712() { // Use 3 accounts: - // - private_address - EOA account, where we have the key - // - account_address - AA account, where the contract is deployed + // - `private_address` - EOA account, where we have the key + // - `account_address` - AA account, where the contract is deployed // - beneficiary - an EOA account, where we'll try to transfer the tokens. let account_abstraction = Account::random(); let mut private_account = Account::random(); @@ -54,8 +56,8 @@ async fn test_require_eip712() { let chain_id: u32 = 270; - // First, let's set the owners of the AA account to the private_address. - // (so that messages signed by private_address, are authorized to act on behalf of the AA account). + // First, let's set the owners of the AA account to the `private_address`. + // (so that messages signed by `private_address`, are authorized to act on behalf of the AA account). let set_owners_function = contract.function("setOwners").unwrap(); let encoded_input = set_owners_function .encode_input(&[Token::Array(vec![Token::Address(private_account.address)])]) @@ -109,7 +111,7 @@ async fn test_require_eip712() { vm.get_eth_balance(beneficiary.address), U256::from(888000088) ); - // Make sure that the tokens were transfered from the AA account. + // Make sure that the tokens were transferred from the AA account. 
assert_eq!( private_account_balance, vm.get_eth_balance(private_account.address) diff --git a/core/lib/multivm/src/versions/vm_latest/tests/rollbacks.rs b/core/lib/multivm/src/versions/vm_latest/tests/rollbacks.rs index 343d30dcd95..188941d74d7 100644 --- a/core/lib/multivm/src/versions/vm_latest/tests/rollbacks.rs +++ b/core/lib/multivm/src/versions/vm_latest/tests/rollbacks.rs @@ -1,21 +1,22 @@ use ethabi::Token; - -use zksync_contracts::get_loadnext_contract; -use zksync_contracts::test_contracts::LoadnextContractExecutionParams; - +use zksync_contracts::{get_loadnext_contract, test_contracts::LoadnextContractExecutionParams}; use zksync_state::WriteStorage; use zksync_types::{get_nonce_key, Execute, U256}; -use crate::interface::dyn_tracers::vm_1_4_0::DynTracer; -use crate::interface::tracer::{TracerExecutionStatus, TracerExecutionStopReason}; -use crate::interface::{TxExecutionMode, VmExecutionMode, VmInterface, VmInterfaceHistoryEnabled}; -use crate::vm_latest::tests::tester::{ - DeployContractsTx, TransactionTestInfo, TxModifier, TxType, VmTesterBuilder, -}; -use crate::vm_latest::tests::utils::read_test_contract; -use crate::vm_latest::types::internals::ZkSyncVmState; -use crate::vm_latest::{ - BootloaderState, HistoryEnabled, HistoryMode, SimpleMemory, ToTracerPointer, VmTracer, +use crate::{ + interface::{ + dyn_tracers::vm_1_4_1::DynTracer, + tracer::{TracerExecutionStatus, TracerExecutionStopReason}, + TxExecutionMode, VmExecutionMode, VmInterface, VmInterfaceHistoryEnabled, + }, + vm_latest::{ + tests::{ + tester::{DeployContractsTx, TransactionTestInfo, TxModifier, TxType, VmTesterBuilder}, + utils::read_test_contract, + }, + types::internals::ZkSyncVmState, + BootloaderState, HistoryEnabled, HistoryMode, SimpleMemory, ToTracerPointer, VmTracer, + }, }; #[test] diff --git a/core/lib/multivm/src/versions/vm_latest/tests/simple_execution.rs b/core/lib/multivm/src/versions/vm_latest/tests/simple_execution.rs index 9f0c855b459..a864538524a 100644 --- a/core/lib/multivm/src/versions/vm_latest/tests/simple_execution.rs +++ b/core/lib/multivm/src/versions/vm_latest/tests/simple_execution.rs @@ -1,6 +1,10 @@ -use crate::interface::{ExecutionResult, VmExecutionMode, VmInterface}; -use crate::vm_latest::tests::tester::{TxType, VmTesterBuilder}; -use crate::vm_latest::HistoryDisabled; +use crate::{ + interface::{ExecutionResult, VmExecutionMode, VmInterface}, + vm_latest::{ + tests::tester::{TxType, VmTesterBuilder}, + HistoryDisabled, + }, +}; #[test] fn estimate_fee() { diff --git a/core/lib/multivm/src/versions/vm_latest/tests/tester/inner_state.rs b/core/lib/multivm/src/versions/vm_latest/tests/tester/inner_state.rs index ec9ffe785f9..9dbb72d56d8 100644 --- a/core/lib/multivm/src/versions/vm_latest/tests/tester/inner_state.rs +++ b/core/lib/multivm/src/versions/vm_latest/tests/tester/inner_state.rs @@ -1,15 +1,19 @@ use std::collections::HashMap; -use zk_evm_1_4_0::aux_structures::Timestamp; -use zk_evm_1_4_0::vm_state::VmLocalState; +use zk_evm_1_4_1::{aux_structures::Timestamp, vm_state::VmLocalState}; use zksync_state::WriteStorage; - use zksync_types::{StorageKey, StorageLogQuery, StorageValue, U256}; -use crate::vm_latest::old_vm::event_sink::InMemoryEventSink; -use crate::vm_latest::old_vm::history_recorder::{AppDataFrameManagerWithHistory, HistoryRecorder}; -use crate::vm_latest::{HistoryEnabled, HistoryMode, SimpleMemory, Vm}; -use crate::HistoryMode as CommonHistoryMode; +use crate::{ + vm_latest::{ + old_vm::{ + event_sink::InMemoryEventSink, + 
history_recorder::{AppDataFrameManagerWithHistory, HistoryRecorder}, + }, + HistoryEnabled, HistoryMode, SimpleMemory, Vm, + }, + HistoryMode as CommonHistoryMode, +}; #[derive(Clone, Debug)] pub(crate) struct ModifiedKeysMap(HashMap); @@ -34,7 +38,7 @@ impl PartialEq for ModifiedKeysMap { #[derive(Clone, PartialEq, Debug)] pub(crate) struct DecommitterTestInnerState { - /// There is no way to "trully" compare the storage pointer, + /// There is no way to "truly" compare the storage pointer, /// so we just compare the modified keys. This is reasonable enough. pub(crate) modified_storage_keys: ModifiedKeysMap, pub(crate) known_bytecodes: HistoryRecorder>, H>, @@ -43,7 +47,7 @@ pub(crate) struct DecommitterTestInnerState { #[derive(Clone, PartialEq, Debug)] pub(crate) struct StorageOracleInnerState { - /// There is no way to "trully" compare the storage pointer, + /// There is no way to "truly" compare the storage pointer, /// so we just compare the modified keys. This is reasonable enough. pub(crate) modified_storage_keys: ModifiedKeysMap, @@ -74,7 +78,7 @@ pub(crate) struct VmInstanceInnerState { impl Vm { // Dump inner state of the VM. - pub(crate) fn dump_inner_state(&self) -> VmInstanceInnerState { + pub(crate) fn dump_inner_state(&self) -> VmInstanceInnerState { let event_sink = self.state.event_sink.clone(); let precompile_processor_state = PrecompileProcessorTestInnerState { timestamp_history: self.state.precompiles_processor.timestamp_history.clone(), diff --git a/core/lib/multivm/src/versions/vm_latest/tests/tester/transaction_test_info.rs b/core/lib/multivm/src/versions/vm_latest/tests/tester/transaction_test_info.rs index 6fdfa7955e0..114f80d1a21 100644 --- a/core/lib/multivm/src/versions/vm_latest/tests/tester/transaction_test_info.rs +++ b/core/lib/multivm/src/versions/vm_latest/tests/tester/transaction_test_info.rs @@ -1,12 +1,12 @@ use zksync_types::{ExecuteTransactionCommon, Transaction}; -use crate::interface::{ - CurrentExecutionState, ExecutionResult, Halt, TxRevertReason, VmExecutionMode, - VmExecutionResultAndLogs, +use crate::{ + interface::{ + CurrentExecutionState, ExecutionResult, Halt, TxRevertReason, VmExecutionMode, + VmExecutionResultAndLogs, VmInterface, VmInterfaceHistoryEnabled, VmRevertReason, + }, + vm_latest::{tests::tester::vm_tester::VmTester, HistoryEnabled}, }; -use crate::interface::{VmInterface, VmInterfaceHistoryEnabled, VmRevertReason}; -use crate::vm_latest::tests::tester::vm_tester::VmTester; -use crate::vm_latest::HistoryEnabled; #[derive(Debug, Clone)] pub(crate) enum TxModifier { diff --git a/core/lib/multivm/src/versions/vm_latest/tests/tester/vm_tester.rs b/core/lib/multivm/src/versions/vm_latest/tests/tester/vm_tester.rs index cbf009d5b02..0220ba4fc4d 100644 --- a/core/lib/multivm/src/versions/vm_latest/tests/tester/vm_tester.rs +++ b/core/lib/multivm/src/versions/vm_latest/tests/tester/vm_tester.rs @@ -1,28 +1,32 @@ use std::marker::PhantomData; + use zksync_contracts::BaseSystemContracts; use zksync_state::{InMemoryStorage, StoragePtr, StorageView, WriteStorage}; - -use zksync_types::block::legacy_miniblock_hash; -use zksync_types::helpers::unix_timestamp_ms; -use zksync_types::utils::{deployed_address_create, storage_key_for_eth_balance}; use zksync_types::{ - get_code_key, get_is_account_key, Address, L1BatchNumber, L2ChainId, MiniblockNumber, Nonce, - ProtocolVersionId, U256, + block::MiniblockHasher, + fee_model::BatchFeeInput, + get_code_key, get_is_account_key, + helpers::unix_timestamp_ms, + utils::{deployed_address_create, 
storage_key_for_eth_balance}, + Address, L1BatchNumber, L2ChainId, MiniblockNumber, Nonce, ProtocolVersionId, U256, }; -use zksync_utils::bytecode::hash_bytecode; -use zksync_utils::u256_to_h256; - -use crate::vm_latest::constants::BLOCK_GAS_LIMIT; - -use crate::interface::{ - L1BatchEnv, L2Block, L2BlockEnv, SystemEnv, TxExecutionMode, VmExecutionMode, VmInterface, +use zksync_utils::{bytecode::hash_bytecode, u256_to_h256}; + +use crate::{ + interface::{ + L1BatchEnv, L2Block, L2BlockEnv, SystemEnv, TxExecutionMode, VmExecutionMode, VmInterface, + }, + vm_latest::{ + constants::BLOCK_GAS_LIMIT, + tests::{ + tester::{Account, TxType}, + utils::read_test_contract, + }, + utils::l2_blocks::load_last_l2_block, + Vm, + }, + HistoryMode, }; -use crate::vm_latest::tests::tester::Account; -use crate::vm_latest::tests::tester::TxType; -use crate::vm_latest::tests::utils::read_test_contract; -use crate::vm_latest::utils::l2_blocks::load_last_l2_block; -use crate::vm_latest::Vm; -use crate::HistoryMode; pub(crate) type InMemoryStorageView = StorageView; @@ -73,7 +77,7 @@ impl VmTester { if !self.custom_contracts.is_empty() { println!("Inserting custom contracts is not yet supported") - // insert_contracts(&mut self.storage, &self.custom_contracts); + // `insert_contracts(&mut self.storage, &self.custom_contracts);` } let mut l1_batch = self.vm.batch_env.clone(); @@ -81,7 +85,7 @@ impl VmTester { let last_l2_block = load_last_l2_block(self.storage.clone()).unwrap_or(L2Block { number: 0, timestamp: 0, - hash: legacy_miniblock_hash(MiniblockNumber(0)), + hash: MiniblockHasher::legacy_hash(MiniblockNumber(0)), }); l1_batch.first_l2_block = L2BlockEnv { number: last_l2_block.number + 1, @@ -248,14 +252,16 @@ pub(crate) fn default_l1_batch(number: L1BatchNumber) -> L1BatchEnv { previous_batch_hash: None, number, timestamp, - l1_gas_price: 50_000_000_000, // 50 gwei - fair_l2_gas_price: 250_000_000, // 0.25 gwei + fee_input: BatchFeeInput::l1_pegged( + 50_000_000_000, // 50 gwei + 250_000_000, // 0.25 gwei + ), fee_account: Address::random(), enforced_base_fee: None, first_l2_block: L2BlockEnv { number: 1, timestamp, - prev_block_hash: legacy_miniblock_hash(MiniblockNumber(0)), + prev_block_hash: MiniblockHasher::legacy_hash(MiniblockNumber(0)), max_virtual_blocks_to_create: 100, }, } diff --git a/core/lib/multivm/src/versions/vm_latest/tests/tracing_execution_error.rs b/core/lib/multivm/src/versions/vm_latest/tests/tracing_execution_error.rs index f55eadecde6..f02de899b03 100644 --- a/core/lib/multivm/src/versions/vm_latest/tests/tracing_execution_error.rs +++ b/core/lib/multivm/src/versions/vm_latest/tests/tracing_execution_error.rs @@ -1,12 +1,15 @@ use zksync_types::{Execute, H160}; -use crate::interface::TxExecutionMode; -use crate::interface::{TxRevertReason, VmRevertReason}; -use crate::vm_latest::tests::tester::{ExpectedError, TransactionTestInfo, VmTesterBuilder}; -use crate::vm_latest::tests::utils::{ - get_execute_error_calldata, read_error_contract, BASE_SYSTEM_CONTRACTS, +use crate::{ + interface::{TxExecutionMode, TxRevertReason, VmRevertReason}, + vm_latest::{ + tests::{ + tester::{ExpectedError, TransactionTestInfo, VmTesterBuilder}, + utils::{get_execute_error_calldata, read_error_contract, BASE_SYSTEM_CONTRACTS}, + }, + HistoryEnabled, + }, }; -use crate::vm_latest::HistoryEnabled; #[test] fn test_tracing_of_execution_errors() { diff --git a/core/lib/multivm/src/versions/vm_latest/tests/upgrade.rs b/core/lib/multivm/src/versions/vm_latest/tests/upgrade.rs index 65780114e9a..1e2bdcb4515 
100644 --- a/core/lib/multivm/src/versions/vm_latest/tests/upgrade.rs +++ b/core/lib/multivm/src/versions/vm_latest/tests/upgrade.rs @@ -1,28 +1,28 @@ -use zk_evm_1_4_0::aux_structures::Timestamp; - -use zksync_types::{ - ethabi::Contract, - Execute, COMPLEX_UPGRADER_ADDRESS, CONTRACT_DEPLOYER_ADDRESS, CONTRACT_FORCE_DEPLOYER_ADDRESS, - REQUIRED_L1_TO_L2_GAS_PER_PUBDATA_BYTE, - {ethabi::Token, Address, ExecuteTransactionCommon, Transaction, H256, U256}, - {get_code_key, get_known_code_key, H160}, -}; - -use zksync_utils::{bytecode::hash_bytecode, bytes_to_be_words, h256_to_u256, u256_to_h256}; - +use zk_evm_1_4_1::aux_structures::Timestamp; use zksync_contracts::{deployer_contract, load_contract, load_sys_contract, read_bytecode}; use zksync_state::WriteStorage; use zksync_test_account::TxType; - -use crate::interface::{ - ExecutionResult, Halt, TxExecutionMode, VmExecutionMode, VmInterface, VmInterfaceHistoryEnabled, +use zksync_types::{ + ethabi::{Contract, Token}, + get_code_key, get_known_code_key, + protocol_version::ProtocolUpgradeTxCommonData, + Address, Execute, ExecuteTransactionCommon, Transaction, COMPLEX_UPGRADER_ADDRESS, + CONTRACT_DEPLOYER_ADDRESS, CONTRACT_FORCE_DEPLOYER_ADDRESS, H160, H256, + REQUIRED_L1_TO_L2_GAS_PER_PUBDATA_BYTE, U256, }; -use crate::vm_latest::tests::tester::VmTesterBuilder; -use crate::vm_latest::tests::utils::verify_required_storage; -use crate::vm_latest::HistoryEnabled; -use zksync_types::protocol_version::ProtocolUpgradeTxCommonData; +use zksync_utils::{bytecode::hash_bytecode, bytes_to_be_words, h256_to_u256, u256_to_h256}; use super::utils::read_test_contract; +use crate::{ + interface::{ + ExecutionResult, Halt, TxExecutionMode, VmExecutionMode, VmInterface, + VmInterfaceHistoryEnabled, + }, + vm_latest::{ + tests::{tester::VmTesterBuilder, utils::verify_required_storage}, + HistoryEnabled, + }, +}; /// In this test we ensure that the requirements for protocol upgrade transactions are enforced by the bootloader: /// - This transaction must be the only one in block @@ -45,7 +45,7 @@ fn test_protocol_upgrade_is_first() { let protocol_upgrade_transaction = get_forced_deploy_tx(&[ForceDeployment { // The bytecode hash to put on an address bytecode_hash, - // The address on which to deploy the bytecodehash to + // The address on which to deploy the bytecode hash to address: H160::random(), // Whether to run the constructor on the force deployment call_constructor: false, @@ -59,7 +59,7 @@ fn test_protocol_upgrade_is_first() { let another_protocol_upgrade_transaction = get_forced_deploy_tx(&[ForceDeployment { // The bytecode hash to put on an address bytecode_hash, - // The address on which to deploy the bytecodehash to + // The address on which to deploy the bytecode hash to address: H160::random(), // Whether to run the constructor on the force deployment call_constructor: false, @@ -141,7 +141,7 @@ fn test_force_deploy_upgrade() { let transaction = get_forced_deploy_tx(&[ForceDeployment { // The bytecode hash to put on an address bytecode_hash, - // The address on which to deploy the bytecodehash to + // The address on which to deploy the bytecode hash to address: address_to_deploy, // Whether to run the constructor on the force deployment call_constructor: false, @@ -180,7 +180,7 @@ fn test_complex_upgrader() { let msg_sender_test_hash = hash_bytecode(&read_msg_sender_test()); // Let's assume that the bytecode for the implementation of the complex upgrade - // is already deployed in some address in userspace + // is already deployed in some address 
in user space let upgrade_impl = H160::random(); let account_code_key = get_code_key(&upgrade_impl); @@ -240,7 +240,7 @@ fn test_complex_upgrader() { struct ForceDeployment { // The bytecode hash to put on an address bytecode_hash: H256, - // The address on which to deploy the bytecodehash to + // The address on which to deploy the bytecode hash to address: Address, // Whether to run the constructor on the force deployment call_constructor: bool, @@ -295,8 +295,8 @@ fn get_forced_deploy_tx(deployment: &[ForceDeployment]) -> Transaction { // Returns the transaction that performs a complex protocol upgrade. // The first param is the address of the implementation of the complex upgrade -// in user-space, while the next 3 params are params of the implenentaiton itself -// For the explanatation for the parameters, please refer to: +// in user-space, while the next 3 params are params of the implementation itself +// For the explanation for the parameters, please refer to: // etc/contracts-test-data/complex-upgrade/complex-upgrade.sol fn get_complex_upgrade_tx( implementation_address: Address, diff --git a/core/lib/multivm/src/versions/vm_latest/tests/utils.rs b/core/lib/multivm/src/versions/vm_latest/tests/utils.rs index e30f0b9f39a..7c937033a21 100644 --- a/core/lib/multivm/src/versions/vm_latest/tests/utils.rs +++ b/core/lib/multivm/src/versions/vm_latest/tests/utils.rs @@ -1,18 +1,17 @@ use ethabi::Contract; use once_cell::sync::Lazy; - -use crate::vm_latest::tests::tester::InMemoryStorageView; use zksync_contracts::{ load_contract, read_bytecode, read_zbin_bytecode, BaseSystemContracts, SystemContractCode, }; use zksync_state::{StoragePtr, WriteStorage}; -use zksync_types::utils::storage_key_for_standard_token_balance; -use zksync_types::{AccountTreeId, Address, StorageKey, H256, U256}; -use zksync_utils::bytecode::hash_bytecode; -use zksync_utils::{bytes_to_be_words, h256_to_u256, u256_to_h256}; +use zksync_types::{ + utils::storage_key_for_standard_token_balance, AccountTreeId, Address, StorageKey, H256, U256, +}; +use zksync_utils::{bytecode::hash_bytecode, bytes_to_be_words, h256_to_u256, u256_to_h256}; -use crate::vm_latest::types::internals::ZkSyncVmState; -use crate::vm_latest::HistoryMode; +use crate::vm_latest::{ + tests::tester::InMemoryStorageView, types::internals::ZkSyncVmState, HistoryMode, +}; pub(crate) static BASE_SYSTEM_CONTRACTS: Lazy = Lazy::new(BaseSystemContracts::load_from_disk); @@ -61,8 +60,8 @@ pub(crate) fn read_test_contract() -> Vec { pub(crate) fn get_bootloader(test: &str) -> SystemContractCode { let bootloader_code = read_zbin_bytecode(format!( - "etc/system-contracts/bootloader/tests/artifacts/{}.yul/{}.yul.zbin", - test, test + "contracts/system-contracts/bootloader/tests/artifacts/{}.yul.zbin", + test )); let bootloader_hash = hash_bytecode(&bootloader_code); @@ -104,3 +103,9 @@ pub(crate) fn read_max_depth_contract() -> Vec { "core/tests/ts-integration/contracts/zkasm/artifacts/deep_stak.zkasm/deep_stak.zkasm.zbin", ) } + +pub(crate) fn read_precompiles_contract() -> Vec { + read_bytecode( + "etc/contracts-test-data/artifacts-zk/contracts/precompiles/precompiles.sol/Precompiles.json", + ) +} diff --git a/core/lib/multivm/src/versions/vm_latest/tracers/circuits_capacity.rs b/core/lib/multivm/src/versions/vm_latest/tracers/circuits_capacity.rs new file mode 100644 index 00000000000..6b63a38c12f --- /dev/null +++ b/core/lib/multivm/src/versions/vm_latest/tracers/circuits_capacity.rs @@ -0,0 +1,85 @@ +use 
zkevm_test_harness_1_4_1::{geometry_config::get_geometry_config, toolset::GeometryConfig}; + +const GEOMETRY_CONFIG: GeometryConfig = get_geometry_config(); +const OVERESTIMATE_PERCENT: f32 = 1.05; + +const MAIN_VM_CYCLE_FRACTION: f32 = + OVERESTIMATE_PERCENT / GEOMETRY_CONFIG.cycles_per_vm_snapshot as f32; + +const CODE_DECOMMITTER_SORTER_CYCLE_FRACTION: f32 = + OVERESTIMATE_PERCENT / GEOMETRY_CONFIG.cycles_code_decommitter_sorter as f32; + +const LOG_DEMUXER_CYCLE_FRACTION: f32 = + OVERESTIMATE_PERCENT / GEOMETRY_CONFIG.cycles_per_log_demuxer as f32; + +const STORAGE_SORTER_CYCLE_FRACTION: f32 = + OVERESTIMATE_PERCENT / GEOMETRY_CONFIG.cycles_per_storage_sorter as f32; + +const EVENTS_OR_L1_MESSAGES_SORTER_CYCLE_FRACTION: f32 = + OVERESTIMATE_PERCENT / GEOMETRY_CONFIG.cycles_per_events_or_l1_messages_sorter as f32; + +const RAM_PERMUTATION_CYCLE_FRACTION: f32 = + OVERESTIMATE_PERCENT / GEOMETRY_CONFIG.cycles_per_ram_permutation as f32; + +pub(crate) const CODE_DECOMMITTER_CYCLE_FRACTION: f32 = + OVERESTIMATE_PERCENT / GEOMETRY_CONFIG.cycles_per_code_decommitter as f32; + +pub(crate) const STORAGE_APPLICATION_CYCLE_FRACTION: f32 = + OVERESTIMATE_PERCENT / GEOMETRY_CONFIG.cycles_per_storage_application as f32; + +pub(crate) const KECCAK256_CYCLE_FRACTION: f32 = + OVERESTIMATE_PERCENT / GEOMETRY_CONFIG.cycles_per_keccak256_circuit as f32; + +pub(crate) const SHA256_CYCLE_FRACTION: f32 = + OVERESTIMATE_PERCENT / GEOMETRY_CONFIG.cycles_per_sha256_circuit as f32; + +pub(crate) const ECRECOVER_CYCLE_FRACTION: f32 = + OVERESTIMATE_PERCENT / GEOMETRY_CONFIG.cycles_per_ecrecover_circuit as f32; + +// "Rich addressing" opcodes are opcodes that can write their return value/read the input onto the stack +// and so take 1-2 RAM permutations more than an average opcode. +// In the worst case, a rich addressing may take 3 ram permutations +// (1 for reading the opcode, 1 for writing input value, 1 for writing output value). +pub(crate) const RICH_ADDRESSING_OPCODE_FRACTION: f32 = + MAIN_VM_CYCLE_FRACTION + 3.0 * RAM_PERMUTATION_CYCLE_FRACTION; + +pub(crate) const AVERAGE_OPCODE_FRACTION: f32 = + MAIN_VM_CYCLE_FRACTION + RAM_PERMUTATION_CYCLE_FRACTION; + +// Here "base" fraction is a fraction that will be used unconditionally. +// Usage of `StorageApplication` is being tracked separately as it depends on whether slot was read before or not. +pub(crate) const STORAGE_READ_BASE_FRACTION: f32 = MAIN_VM_CYCLE_FRACTION + + RAM_PERMUTATION_CYCLE_FRACTION + + LOG_DEMUXER_CYCLE_FRACTION + + STORAGE_SORTER_CYCLE_FRACTION; + +pub(crate) const EVENT_OR_L1_MESSAGE_FRACTION: f32 = MAIN_VM_CYCLE_FRACTION + + RAM_PERMUTATION_CYCLE_FRACTION + + 2.0 * LOG_DEMUXER_CYCLE_FRACTION + + 2.0 * EVENTS_OR_L1_MESSAGES_SORTER_CYCLE_FRACTION; + +// Here "base" fraction is a fraction that will be used unconditionally. +// Usage of `StorageApplication` is being tracked separately as it depends on whether slot was written before or not. +pub(crate) const STORAGE_WRITE_BASE_FRACTION: f32 = MAIN_VM_CYCLE_FRACTION + + RAM_PERMUTATION_CYCLE_FRACTION + + 2.0 * LOG_DEMUXER_CYCLE_FRACTION + + 2.0 * STORAGE_SORTER_CYCLE_FRACTION; + +pub(crate) const FAR_CALL_FRACTION: f32 = MAIN_VM_CYCLE_FRACTION + + RAM_PERMUTATION_CYCLE_FRACTION + + STORAGE_SORTER_CYCLE_FRACTION + + CODE_DECOMMITTER_SORTER_CYCLE_FRACTION; + +// 5 RAM permutations, because: 1 to read opcode + 2 reads + 2 writes. +// 2 reads and 2 writes are needed because unaligned access is implemented with +// aligned queries. 
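To see how these per-cycle fractions combine in practice, here is a minimal self-contained sketch; the cycle capacities below are invented stand-ins, not the real `GeometryConfig` values:

```rust
// Illustrative only: hypothetical cycle capacities, not the actual geometry config.
const OVERESTIMATE_PERCENT: f32 = 1.05;
const CYCLES_PER_VM_SNAPSHOT: u32 = 5_000; // assumed value
const CYCLES_PER_RAM_PERMUTATION: u32 = 130_000; // assumed value

fn main() {
    let main_vm_fraction = OVERESTIMATE_PERCENT / CYCLES_PER_VM_SNAPSHOT as f32;
    let ram_fraction = OVERESTIMATE_PERCENT / CYCLES_PER_RAM_PERMUTATION as f32;

    // A rich-addressing opcode costs one main VM cycle plus up to 3 RAM permutations,
    // so its fraction is the sum of the corresponding per-cycle fractions.
    let rich_addressing_fraction = main_vm_fraction + 3.0 * ram_fraction;

    // Executing 10_000 such opcodes adds this many (fractional) circuit instances
    // to the tracer's running estimate.
    println!("{}", 10_000.0 * rich_addressing_fraction);
}
```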
+pub(crate) const UMA_WRITE_FRACTION: f32 = + MAIN_VM_CYCLE_FRACTION + 5.0 * RAM_PERMUTATION_CYCLE_FRACTION; + +// 3 RAM permutations, because: 1 to read opcode + 2 reads. +// 2 reads are needed because unaligned access is implemented with aligned queries. +pub(crate) const UMA_READ_FRACTION: f32 = + MAIN_VM_CYCLE_FRACTION + 3.0 * RAM_PERMUTATION_CYCLE_FRACTION; + +pub(crate) const PRECOMPILE_CALL_COMMON_FRACTION: f32 = + MAIN_VM_CYCLE_FRACTION + RAM_PERMUTATION_CYCLE_FRACTION + LOG_DEMUXER_CYCLE_FRACTION; diff --git a/core/lib/multivm/src/versions/vm_latest/tracers/circuits_tracer.rs b/core/lib/multivm/src/versions/vm_latest/tracers/circuits_tracer.rs new file mode 100644 index 00000000000..acd2d191c65 --- /dev/null +++ b/core/lib/multivm/src/versions/vm_latest/tracers/circuits_tracer.rs @@ -0,0 +1,192 @@ +use std::marker::PhantomData; + +use zk_evm_1_4_1::{ + tracing::{BeforeExecutionData, VmLocalStateData}, + zk_evm_abstractions::precompiles::PrecompileAddress, + zkevm_opcode_defs::{LogOpcode, Opcode, UMAOpcode}, +}; +use zksync_state::{StoragePtr, WriteStorage}; + +use super::circuits_capacity::*; +use crate::{ + interface::{dyn_tracers::vm_1_4_1::DynTracer, tracer::TracerExecutionStatus}, + vm_latest::{ + bootloader_state::BootloaderState, + old_vm::{history_recorder::HistoryMode, memory::SimpleMemory}, + tracers::traits::VmTracer, + types::internals::ZkSyncVmState, + }, +}; + +/// Tracer responsible for estimating the number of circuits used by the executed code. +#[derive(Debug)] +pub(crate) struct CircuitsTracer { + pub(crate) estimated_circuits_used: f32, + last_decommitment_history_entry_checked: Option, + last_written_keys_history_entry_checked: Option, + last_read_keys_history_entry_checked: Option, + last_precompile_inner_entry_checked: Option, + _phantom_data: PhantomData, +} + +impl CircuitsTracer { + pub(crate) fn new() -> Self { + Self { + estimated_circuits_used: 0.0, + last_decommitment_history_entry_checked: None, + last_written_keys_history_entry_checked: None, + last_read_keys_history_entry_checked: None, + last_precompile_inner_entry_checked: None, + _phantom_data: Default::default(), + } + } +} + +impl DynTracer> for CircuitsTracer { + fn before_execution( + &mut self, + _state: VmLocalStateData<'_>, + data: BeforeExecutionData, + _memory: &SimpleMemory, + _storage: StoragePtr, + ) { + let used = match data.opcode.variant.opcode { + Opcode::Nop(_) + | Opcode::Add(_) + | Opcode::Sub(_) + | Opcode::Mul(_) + | Opcode::Div(_) + | Opcode::Jump(_) + | Opcode::Binop(_) + | Opcode::Shift(_) + | Opcode::Ptr(_) => RICH_ADDRESSING_OPCODE_FRACTION, + Opcode::Context(_) | Opcode::Ret(_) | Opcode::NearCall(_) => AVERAGE_OPCODE_FRACTION, + Opcode::Log(LogOpcode::StorageRead) => STORAGE_READ_BASE_FRACTION, + Opcode::Log(LogOpcode::StorageWrite) => STORAGE_WRITE_BASE_FRACTION, + Opcode::Log(LogOpcode::ToL1Message) | Opcode::Log(LogOpcode::Event) => { + EVENT_OR_L1_MESSAGE_FRACTION + } + Opcode::Log(LogOpcode::PrecompileCall) => PRECOMPILE_CALL_COMMON_FRACTION, + Opcode::FarCall(_) => FAR_CALL_FRACTION, + Opcode::UMA(UMAOpcode::AuxHeapWrite | UMAOpcode::HeapWrite) => UMA_WRITE_FRACTION, + Opcode::UMA( + UMAOpcode::AuxHeapRead | UMAOpcode::HeapRead | UMAOpcode::FatPointerRead, + ) => UMA_READ_FRACTION, + Opcode::Invalid(_) => unreachable!(), // invalid opcodes are never executed + }; + + self.estimated_circuits_used += used; + } +} + +impl VmTracer for CircuitsTracer { + fn initialize_tracer(&mut self, state: &mut ZkSyncVmState) { + self.last_decommitment_history_entry_checked = Some( + state +
.decommittment_processor + .decommitted_code_hashes + .history() + .len(), + ); + + self.last_written_keys_history_entry_checked = + Some(state.storage.written_keys.history().len()); + + self.last_read_keys_history_entry_checked = Some(state.storage.read_keys.history().len()); + + self.last_precompile_inner_entry_checked = Some( + state + .precompiles_processor + .precompile_cycles_history + .inner() + .len(), + ); + } + + fn finish_cycle( + &mut self, + state: &mut ZkSyncVmState, + _bootloader_state: &mut BootloaderState, + ) -> TracerExecutionStatus { + // Trace decommitments. + let last_decommitment_history_entry_checked = self + .last_decommitment_history_entry_checked + .expect("Value must be set during init"); + let history = state + .decommittment_processor + .decommitted_code_hashes + .history(); + for (_, history_event) in &history[last_decommitment_history_entry_checked..] { + // We assume that only insertions may happen during a single VM inspection. + assert!(history_event.value.is_none()); + let bytecode_len = state + .decommittment_processor + .known_bytecodes + .inner() + .get(&history_event.key) + .expect("Bytecode must be known at this point") + .len(); + + // Each cycle of `CodeDecommitter` processes 2 words. + // If the number of words in bytecode is odd, then number of cycles must be rounded up. + let decommitter_cycles_used = (bytecode_len + 1) / 2; + self.estimated_circuits_used += + (decommitter_cycles_used as f32) * CODE_DECOMMITTER_CYCLE_FRACTION; + } + self.last_decommitment_history_entry_checked = Some(history.len()); + + // Process storage writes. + let last_writes_history_entry_checked = self + .last_written_keys_history_entry_checked + .expect("Value must be set during init"); + let history = state.storage.written_keys.history(); + for (_, history_event) in &history[last_writes_history_entry_checked..] { + // We assume that only insertions may happen during a single VM inspection. + assert!(history_event.value.is_none()); + + self.estimated_circuits_used += 2.0 * STORAGE_APPLICATION_CYCLE_FRACTION; + } + self.last_written_keys_history_entry_checked = Some(history.len()); + + // Process storage reads. + let last_reads_history_entry_checked = self + .last_read_keys_history_entry_checked + .expect("Value must be set during init"); + let history = state.storage.read_keys.history(); + for (_, history_event) in &history[last_reads_history_entry_checked..] { + // We assume that only insertions may happen during a single VM inspection. + assert!(history_event.value.is_none()); + + // If the slot is already written to, then we've already taken 2 cycles into account. + if !state + .storage + .written_keys + .inner() + .contains_key(&history_event.key) + { + self.estimated_circuits_used += STORAGE_APPLICATION_CYCLE_FRACTION; + } + } + self.last_read_keys_history_entry_checked = Some(history.len()); + + // Process precompiles. + let last_precompile_inner_entry_checked = self + .last_precompile_inner_entry_checked + .expect("Value must be set during init"); + let inner = state + .precompiles_processor + .precompile_cycles_history + .inner(); + for (precompile, cycles) in &inner[last_precompile_inner_entry_checked..] 
{ + let fraction = match precompile { + PrecompileAddress::Ecrecover => ECRECOVER_CYCLE_FRACTION, + PrecompileAddress::SHA256 => SHA256_CYCLE_FRACTION, + PrecompileAddress::Keccak256 => KECCAK256_CYCLE_FRACTION, + }; + self.estimated_circuits_used += (*cycles as f32) * fraction; + } + self.last_precompile_inner_entry_checked = Some(inner.len()); + + TracerExecutionStatus::Continue + } +} diff --git a/core/lib/multivm/src/versions/vm_latest/tracers/default_tracers.rs b/core/lib/multivm/src/versions/vm_latest/tracers/default_tracers.rs index 9582e6e1053..1988cc9f027 100644 --- a/core/lib/multivm/src/versions/vm_latest/tracers/default_tracers.rs +++ b/core/lib/multivm/src/versions/vm_latest/tracers/default_tracers.rs @@ -1,9 +1,9 @@ -use std::fmt::{Debug, Formatter}; -use std::marker::PhantomData; +use std::{ + fmt::{Debug, Formatter}, + marker::PhantomData, +}; -use crate::interface::tracer::{TracerExecutionStopReason, VmExecutionStopReason}; -use crate::interface::{Halt, VmExecutionMode}; -use zk_evm_1_4_0::{ +use zk_evm_1_4_1::{ tracing::{ AfterDecodingData, AfterExecutionData, BeforeExecutionData, Tracer, VmLocalStateData, }, @@ -14,23 +14,31 @@ use zk_evm_1_4_0::{ use zksync_state::{StoragePtr, WriteStorage}; use zksync_types::Timestamp; -use crate::interface::traits::tracers::dyn_tracers::vm_1_4_0::DynTracer; -use crate::interface::types::tracer::TracerExecutionStatus; -use crate::vm_latest::bootloader_state::utils::apply_l2_block; -use crate::vm_latest::bootloader_state::BootloaderState; -use crate::vm_latest::constants::BOOTLOADER_HEAP_PAGE; -use crate::vm_latest::old_vm::history_recorder::HistoryMode; -use crate::vm_latest::old_vm::memory::SimpleMemory; -use crate::vm_latest::tracers::dispatcher::TracerDispatcher; -use crate::vm_latest::tracers::utils::{ - computational_gas_price, gas_spent_on_bytecodes_and_long_messages_this_opcode, - print_debug_if_needed, VmHook, -}; -use crate::vm_latest::tracers::{RefundsTracer, ResultTracer}; -use crate::vm_latest::types::internals::ZkSyncVmState; -use crate::vm_latest::VmTracer; - use super::PubdataTracer; +use crate::{ + glue::GlueInto, + interface::{ + tracer::{TracerExecutionStopReason, VmExecutionStopReason}, + traits::tracers::dyn_tracers::vm_1_4_1::DynTracer, + types::tracer::TracerExecutionStatus, + Halt, VmExecutionMode, + }, + vm_latest::{ + bootloader_state::{utils::apply_l2_block, BootloaderState}, + constants::BOOTLOADER_HEAP_PAGE, + old_vm::{history_recorder::HistoryMode, memory::SimpleMemory}, + tracers::{ + dispatcher::TracerDispatcher, + utils::{ + computational_gas_price, gas_spent_on_bytecodes_and_long_messages_this_opcode, + print_debug_if_needed, VmHook, + }, + CircuitsTracer, RefundsTracer, ResultTracer, + }, + types::internals::ZkSyncVmState, + VmTracer, + }, +}; /// Default tracer for the VM. It manages the other tracers execution and stop the vm when needed. pub(crate) struct DefaultExecutionTracer { @@ -47,7 +55,7 @@ pub(crate) struct DefaultExecutionTracer { pub(crate) result_tracer: ResultTracer, // This tracer is designed specifically for calculating refunds. Its separation from the custom tracer // ensures static dispatch, enhancing performance by avoiding dynamic dispatch overhead. - // Additionally, being an internal tracer, it saves the results directly to VmResultAndLogs. + // Additionally, being an internal tracer, it saves the results directly to `VmResultAndLogs`. 
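Backing up briefly to `CircuitsTracer::finish_cycle` above: each `CodeDecommitter` cycle processes two words, so the cycle count is the word count divided by two, rounded up. A tiny sketch of that rounding rule (the lengths are hypothetical):

```rust
// Each `CodeDecommitter` cycle handles 2 words; odd lengths round up.
fn decommitter_cycles(bytecode_len_in_words: usize) -> usize {
    (bytecode_len_in_words + 1) / 2
}

fn main() {
    assert_eq!(decommitter_cycles(24), 12); // even length: exact
    assert_eq!(decommitter_cycles(25), 13); // odd length: rounded up
}
```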
pub(crate) refund_tracer: Option>, // The pubdata tracer is responsible for inserting the pubdata packing information into the bootloader // memory at the end of the batch. Its separation from the custom tracer @@ -55,6 +63,11 @@ pub(crate) struct DefaultExecutionTracer { pub(crate) pubdata_tracer: Option>, pub(crate) dispatcher: TracerDispatcher, ret_from_the_bootloader: Option, + // This tracer tracks what opcodes were executed and calculates how many circuits will be generated. + // It only takes into account circuits that are generated for actual execution. It doesn't + // take into account e.g. circuits produced by the initial bootloader memory commitment. + pub(crate) circuits_tracer: CircuitsTracer, + storage: StoragePtr, _phantom: PhantomData, } @@ -81,6 +94,7 @@ impl DefaultExecutionTracer { dispatcher, pubdata_tracer, ret_from_the_bootloader: None, + circuits_tracer: CircuitsTracer::new(), storage, _phantom: PhantomData, } @@ -108,9 +122,11 @@ impl DefaultExecutionTracer { let l2_block = bootloader_state.insert_fictive_l2_block(); let mut memory = vec![]; apply_l2_block(&mut memory, l2_block, txs_index); - state - .memory - .populate_page(BOOTLOADER_HEAP_PAGE as usize, memory, current_timestamp); + state.memory.populate_page( + BOOTLOADER_HEAP_PAGE as usize, + memory, + current_timestamp.glue_into(), + ); self.final_batch_info_requested = false; } @@ -154,14 +170,15 @@ impl Debug for DefaultExecutionTracer { /// The macro passes the function call to all tracers. macro_rules! dispatch_tracers { ($self:ident.$function:ident($( $params:expr ),*)) => { - $self.result_tracer.$function($( $params ),*); - $self.dispatcher.$function($( $params ),*); + $self.result_tracer.$function($( $params ),*); + $self.dispatcher.$function($( $params ),*); if let Some(tracer) = &mut $self.refund_tracer { tracer.$function($( $params ),*); } if let Some(tracer) = &mut $self.pubdata_tracer { tracer.$function($( $params ),*); } + $self.circuits_tracer.$function($( $params ),*); }; } @@ -224,7 +241,7 @@ impl Tracer for DefaultExecutionTracer { memory: &Self::SupportedMemory, ) { if let VmExecutionMode::Bootloader = self.execution_mode { - let (next_opcode, _, _) = zk_evm_1_4_0::vm_state::read_and_decode( + let (next_opcode, _, _) = zk_evm_1_4_1::vm_state::read_and_decode( state.vm_local_state, memory, &mut DummyTracer, @@ -270,6 +287,12 @@ impl DefaultExecutionTracer { .finish_cycle(state, bootloader_state) .stricter(&result); } + + result = self + .circuits_tracer + .finish_cycle(state, bootloader_state) + .stricter(&result); + result.stricter(&self.should_stop_execution()) } @@ -284,7 +307,7 @@ impl DefaultExecutionTracer { } fn current_frame_is_bootloader(local_state: &VmLocalState) -> bool { - // The current frame is bootloader if the callstack depth is 1. + // The current frame is bootloader if the call stack depth is 1. // Some of the near calls inside the bootloader can be out of gas, which is totally normal behavior // and it shouldn't result in `is_bootloader_out_of_gas` becoming true.
local_state.callstack.inner.len() == 1 diff --git a/core/lib/multivm/src/versions/vm_latest/tracers/dispatcher.rs b/core/lib/multivm/src/versions/vm_latest/tracers/dispatcher.rs index b75277670dc..b6c779303a7 100644 --- a/core/lib/multivm/src/versions/vm_latest/tracers/dispatcher.rs +++ b/core/lib/multivm/src/versions/vm_latest/tracers/dispatcher.rs @@ -1,13 +1,18 @@ -use crate::interface::dyn_tracers::vm_1_4_0::DynTracer; -use crate::interface::tracer::{TracerExecutionStatus, VmExecutionStopReason}; -use crate::vm_latest::{ - BootloaderState, HistoryMode, SimpleMemory, TracerPointer, VmTracer, ZkSyncVmState, -}; -use zk_evm_1_4_0::tracing::{ +use zk_evm_1_4_1::tracing::{ AfterDecodingData, AfterExecutionData, BeforeExecutionData, VmLocalStateData, }; use zksync_state::{StoragePtr, WriteStorage}; +use crate::{ + interface::{ + dyn_tracers::vm_1_4_1::DynTracer, + tracer::{TracerExecutionStatus, VmExecutionStopReason}, + }, + vm_latest::{ + BootloaderState, HistoryMode, SimpleMemory, TracerPointer, VmTracer, ZkSyncVmState, + }, +}; + /// Tracer dispatcher is a tracer that can dispatch calls to multiple tracers. pub struct TracerDispatcher { tracers: Vec>, diff --git a/core/lib/multivm/src/versions/vm_latest/tracers/mod.rs b/core/lib/multivm/src/versions/vm_latest/tracers/mod.rs index 33d043de6eb..1bdb1b6ccdb 100644 --- a/core/lib/multivm/src/versions/vm_latest/tracers/mod.rs +++ b/core/lib/multivm/src/versions/vm_latest/tracers/mod.rs @@ -1,13 +1,16 @@ +pub(crate) use circuits_tracer::CircuitsTracer; pub(crate) use default_tracers::DefaultExecutionTracer; pub(crate) use pubdata_tracer::PubdataTracer; pub(crate) use refunds::RefundsTracer; pub(crate) use result_tracer::ResultTracer; +pub(crate) mod circuits_tracer; pub(crate) mod default_tracers; pub(crate) mod pubdata_tracer; pub(crate) mod refunds; pub(crate) mod result_tracer; +mod circuits_capacity; pub mod dispatcher; pub(crate) mod traits; pub(crate) mod utils; diff --git a/core/lib/multivm/src/versions/vm_latest/tracers/pubdata_tracer.rs b/core/lib/multivm/src/versions/vm_latest/tracers/pubdata_tracer.rs index 59a9d8eb452..b147bd597fa 100644 --- a/core/lib/multivm/src/versions/vm_latest/tracers/pubdata_tracer.rs +++ b/core/lib/multivm/src/versions/vm_latest/tracers/pubdata_tracer.rs @@ -1,9 +1,9 @@ use std::marker::PhantomData; -use zk_evm_1_4_0::{ + +use zk_evm_1_4_1::{ aux_structures::Timestamp, tracing::{BeforeExecutionData, VmLocalStateData}, }; - use zksync_state::{StoragePtr, WriteStorage}; use zksync_types::{ event::{ @@ -14,24 +14,24 @@ use zksync_types::{ zkevm_test_harness::witness::sort_storage_access::sort_storage_access_queries, AccountTreeId, StorageKey, L1_MESSENGER_ADDRESS, }; -use zksync_utils::u256_to_h256; -use zksync_utils::{h256_to_u256, u256_to_bytes_be}; +use zksync_utils::{h256_to_u256, u256_to_bytes_be, u256_to_h256}; -use crate::vm_latest::{ - old_vm::{history_recorder::HistoryMode, memory::SimpleMemory}, - types::internals::pubdata::PubdataInput, -}; -use crate::{vm_latest::constants::BOOTLOADER_HEAP_PAGE, vm_latest::StorageOracle}; - -use crate::interface::dyn_tracers::vm_1_4_0::DynTracer; -use crate::interface::tracer::{TracerExecutionStatus, TracerExecutionStopReason}; -use crate::interface::types::inputs::L1BatchEnv; -use crate::vm_latest::tracers::{traits::VmTracer, utils::VmHook}; -use crate::vm_latest::types::internals::ZkSyncVmState; -use crate::vm_latest::utils::logs::collect_events_and_l1_system_logs_after_timestamp; use crate::{ - interface::VmExecutionMode, - 
vm_latest::bootloader_state::{utils::apply_pubdata_to_memory, BootloaderState}, + interface::{ + dyn_tracers::vm_1_4_1::DynTracer, + tracer::{TracerExecutionStatus, TracerExecutionStopReason}, + types::inputs::L1BatchEnv, + VmExecutionMode, + }, + vm_latest::{ + bootloader_state::{utils::apply_pubdata_to_memory, BootloaderState}, + constants::BOOTLOADER_HEAP_PAGE, + old_vm::{history_recorder::HistoryMode, memory::SimpleMemory}, + tracers::{traits::VmTracer, utils::VmHook}, + types::internals::{PubdataInput, ZkSyncVmState}, + utils::logs::collect_events_and_l1_system_logs_after_timestamp, + StorageOracle, + }, }; /// Tracer responsible for collecting information about refunds. @@ -56,7 +56,7 @@ impl PubdataTracer { impl PubdataTracer { // Packs part of L1 Messenger total pubdata that corresponds to - // L2toL1Logs sent in the block + // `L2toL1Logs` sent in the block fn get_total_user_logs( &self, state: &ZkSyncVmState, diff --git a/core/lib/multivm/src/versions/vm_latest/tracers/refunds.rs b/core/lib/multivm/src/versions/vm_latest/tracers/refunds.rs index f3e6c336684..24003d6e81b 100644 --- a/core/lib/multivm/src/versions/vm_latest/tracers/refunds.rs +++ b/core/lib/multivm/src/versions/vm_latest/tracers/refunds.rs @@ -1,40 +1,41 @@ use std::marker::PhantomData; -use vise::{Buckets, EncodeLabelSet, EncodeLabelValue, Family, Histogram, Metrics}; -use crate::interface::traits::tracers::dyn_tracers::vm_1_4_0::DynTracer; -use crate::interface::types::tracer::TracerExecutionStatus; -use crate::interface::{L1BatchEnv, Refunds}; -use zk_evm_1_4_0::{ +use vise::{Buckets, EncodeLabelSet, EncodeLabelValue, Family, Histogram, Metrics}; +use zk_evm_1_4_1::{ aux_structures::Timestamp, tracing::{BeforeExecutionData, VmLocalStateData}, vm_state::VmLocalState, + zkevm_opcode_defs::system_params::L1_MESSAGE_PUBDATA_BYTES, }; use zksync_state::{StoragePtr, WriteStorage}; use zksync_system_constants::{PUBLISH_BYTECODE_OVERHEAD, SYSTEM_CONTEXT_ADDRESS}; use zksync_types::{ event::{extract_long_l2_to_l1_messages, extract_published_bytecodes}, l2_to_l1_log::L2ToL1Log, - L1BatchNumber, U256, + L1BatchNumber, H256, U256, }; -use zksync_utils::bytecode::bytecode_len_in_bytes; -use zksync_utils::{ceil_div_u256, u256_to_h256}; - -use crate::vm_latest::constants::{ - BOOTLOADER_HEAP_PAGE, OPERATOR_REFUNDS_OFFSET, TX_GAS_LIMIT_OFFSET, -}; -use crate::vm_latest::old_vm::{ - events::merge_events, history_recorder::HistoryMode, memory::SimpleMemory, - utils::eth_price_per_pubdata_byte, +use zksync_utils::{bytecode::bytecode_len_in_bytes, ceil_div_u256, u256_to_h256}; + +use crate::{ + interface::{ + traits::tracers::dyn_tracers::vm_1_4_1::DynTracer, types::tracer::TracerExecutionStatus, + L1BatchEnv, Refunds, + }, + vm_latest::{ + bootloader_state::BootloaderState, + constants::{BOOTLOADER_HEAP_PAGE, OPERATOR_REFUNDS_OFFSET, TX_GAS_LIMIT_OFFSET}, + old_vm::{events::merge_events, history_recorder::HistoryMode, memory::SimpleMemory}, + tracers::{ + traits::VmTracer, + utils::{ + gas_spent_on_bytecodes_and_long_messages_this_opcode, get_vm_hook_params, VmHook, + }, + }, + types::internals::ZkSyncVmState, + utils::fee::get_batch_base_fee, + }, }; -use crate::vm_latest::bootloader_state::BootloaderState; -use crate::vm_latest::tracers::utils::gas_spent_on_bytecodes_and_long_messages_this_opcode; -use crate::vm_latest::tracers::{ - traits::VmTracer, - utils::{get_vm_hook_params, VmHook}, -}; -use crate::vm_latest::types::internals::ZkSyncVmState; - /// Tracer responsible for collecting information about refunds. 
#[derive(Debug, Clone)] pub(crate) struct RefundsTracer { @@ -99,6 +100,7 @@ impl RefundsTracer { tx_gas_limit: u32, current_ergs_per_pubdata_byte: u32, pubdata_published: u32, + tx_hash: H256, ) -> u32 { let total_gas_spent = tx_gas_limit - bootloader_refund; @@ -114,13 +116,13 @@ impl RefundsTracer { }); // For now, bootloader charges only for base fee. - let effective_gas_price = self.l1_batch.base_fee(); + let effective_gas_price = get_batch_base_fee(&self.l1_batch); let bootloader_eth_price_per_pubdata_byte = U256::from(effective_gas_price) * U256::from(current_ergs_per_pubdata_byte); let fair_eth_price_per_pubdata_byte = - U256::from(eth_price_per_pubdata_byte(self.l1_batch.l1_gas_price)); + U256::from(self.l1_batch.fee_input.fair_pubdata_price()); // For now, L1 originated transactions are allowed to pay less than fair fee per pubdata, // so we should take it into account. @@ -130,7 +132,7 @@ impl RefundsTracer { ); let fair_fee_eth = U256::from(gas_spent_on_computation) - * U256::from(self.l1_batch.fair_l2_gas_price) + * U256::from(self.l1_batch.fee_input.fair_l2_gas_price()) + U256::from(pubdata_published) * eth_price_per_pubdata_byte_for_calculation; let pre_paid_eth = U256::from(tx_gas_limit) * U256::from(effective_gas_price); let refund_eth = pre_paid_eth.checked_sub(fair_fee_eth).unwrap_or_else(|| { @@ -142,6 +144,15 @@ impl RefundsTracer { U256::zero() }); + tracing::trace!( + "Fee benchmark for transaction with hash {}", + hex::encode(tx_hash.as_bytes()) + ); + tracing::trace!("Gas Limit: {}", tx_gas_limit); + tracing::trace!("Gas spent on computation: {}", gas_spent_on_computation); + tracing::trace!("Gas spent on pubdata: {}", gas_spent_on_pubdata); + tracing::trace!("Pubdata published: {}", pubdata_published); + ceil_div_u256(refund_eth, effective_gas_price.into()).as_u32() } @@ -212,8 +223,8 @@ impl VmTracer for RefundsTracer { #[vise::register] static METRICS: vise::Global = vise::Global::new(); - // This means that the bootloader has informed the system (usually via VMHooks) - that some gas - // should be refunded back (see askOperatorForRefund in bootloader.yul for details). + // This means that the bootloader has informed the system (usually via `VMHooks`) - that some gas + // should be refunded back (see `askOperatorForRefund` in `bootloader.yul` for details). 
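To make the refund arithmetic in `tx_body_refund` above concrete, here is a minimal sketch with invented numbers, using `u128` in place of `U256` and a plain ceiling division in place of `ceil_div_u256`:

```rust
fn ceil_div(a: u128, b: u128) -> u128 {
    (a + b - 1) / b
}

fn main() {
    // All figures below are invented for illustration.
    let tx_gas_limit: u128 = 1_000_000;
    let effective_gas_price: u128 = 250_000_000; // the batch base fee
    let fair_l2_gas_price: u128 = 250_000_000;
    let fair_pubdata_price: u128 = 100_000_000_000;
    let gas_spent_on_computation: u128 = 400_000;
    let pubdata_published: u128 = 500; // bytes

    // The fair fee: computation at the fair L2 gas price plus pubdata at its fair price.
    let fair_fee_eth = gas_spent_on_computation * fair_l2_gas_price
        + pubdata_published * fair_pubdata_price;
    // What the transaction pre-paid for its entire gas limit.
    let pre_paid_eth = tx_gas_limit * effective_gas_price;

    // The surplus is refunded, converted back into gas units.
    let refund_eth = pre_paid_eth.saturating_sub(fair_fee_eth);
    let refund_gas = ceil_div(refund_eth, effective_gas_price);
    assert_eq!(refund_gas, 400_000);
}
```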
if let Some(bootloader_refund) = self.requested_refund() { assert!( self.operator_refund.is_none(), @@ -247,12 +258,14 @@ impl VmTracer for RefundsTracer { self.pubdata_published = pubdata_published; let current_ergs_per_pubdata_byte = state.local_state.current_ergs_per_pubdata_byte; + let tx_body_refund = self.tx_body_refund( bootloader_refund, gas_spent_on_pubdata, tx_gas_limit, current_ergs_per_pubdata_byte, pubdata_published, + bootloader_state.last_l2_block().txs.last().unwrap().hash, ); if tx_body_refund < bootloader_refund { @@ -330,7 +343,7 @@ pub(crate) fn pubdata_published( }) .filter(|log| log.sender != SYSTEM_CONTEXT_ADDRESS) .count() as u32) - * zk_evm_1_4_0::zkevm_opcode_defs::system_params::L1_MESSAGE_PUBDATA_BYTES; + * L1_MESSAGE_PUBDATA_BYTES; let l2_l1_long_messages_bytes: u32 = extract_long_l2_to_l1_messages(&events) .iter() .map(|event| event.len() as u32) diff --git a/core/lib/multivm/src/versions/vm_latest/tracers/result_tracer.rs b/core/lib/multivm/src/versions/vm_latest/tracers/result_tracer.rs index 7e6e08a0a49..71a7dcb3738 100644 --- a/core/lib/multivm/src/versions/vm_latest/tracers/result_tracer.rs +++ b/core/lib/multivm/src/versions/vm_latest/tracers/result_tracer.rs @@ -1,27 +1,29 @@ use std::marker::PhantomData; -use zk_evm_1_4_0::{ + +use zk_evm_1_4_1::{ tracing::{AfterDecodingData, BeforeExecutionData, VmLocalStateData}, vm_state::{ErrorFlags, VmLocalState}, zkevm_opcode_defs::FatPointer, }; use zksync_state::{StoragePtr, WriteStorage}; - -use crate::interface::{ - tracer::VmExecutionStopReason, traits::tracers::dyn_tracers::vm_1_4_0::DynTracer, - types::tracer::TracerExecutionStopReason, ExecutionResult, Halt, TxRevertReason, - VmExecutionMode, VmRevertReason, -}; use zksync_types::U256; -use crate::vm_latest::{ - constants::{BOOTLOADER_HEAP_PAGE, RESULT_SUCCESS_FIRST_SLOT}, - old_vm::utils::{vm_may_have_ended_inner, VmExecutionResult}, - tracers::{ - traits::VmTracer, - utils::{get_vm_hook_params, read_pointer, VmHook}, +use crate::{ + interface::{ + tracer::VmExecutionStopReason, traits::tracers::dyn_tracers::vm_1_4_1::DynTracer, + types::tracer::TracerExecutionStopReason, ExecutionResult, Halt, TxRevertReason, + VmExecutionMode, VmRevertReason, + }, + vm_latest::{ + constants::{BOOTLOADER_HEAP_PAGE, RESULT_SUCCESS_FIRST_SLOT}, + old_vm::utils::{vm_may_have_ended_inner, VmExecutionResult}, + tracers::{ + traits::VmTracer, + utils::{get_vm_hook_params, read_pointer, VmHook}, + }, + types::internals::ZkSyncVmState, + BootloaderState, HistoryMode, SimpleMemory, }, - types::internals::ZkSyncVmState, - BootloaderState, HistoryMode, SimpleMemory, }; #[derive(Debug, Clone)] @@ -52,7 +54,7 @@ impl ResultTracer { } fn current_frame_is_bootloader(local_state: &VmLocalState) -> bool { - // The current frame is bootloader if the callstack depth is 1. + // The current frame is bootloader if the call stack depth is 1. // Some of the near calls inside the bootloader can be out of gas, which is totally normal behavior // and it shouldn't result in `is_bootloader_out_of_gas` becoming true. 
local_state.callstack.inner.len() == 1 @@ -148,7 +150,7 @@ impl ResultTracer { }); } VmExecutionResult::Revert(output) => { - // Unlike VmHook::ExecutionResult, vm has completely finished and returned not only the revert reason, + // Unlike `VmHook::ExecutionResult`, vm has completely finished and returned not only the revert reason, // but with bytecode, which represents the type of error from the bootloader side let revert_reason = TxRevertReason::parse_error(&output); diff --git a/core/lib/multivm/src/versions/vm_latest/tracers/traits.rs b/core/lib/multivm/src/versions/vm_latest/tracers/traits.rs index a3970541bac..49cdc0b2839 100644 --- a/core/lib/multivm/src/versions/vm_latest/tracers/traits.rs +++ b/core/lib/multivm/src/versions/vm_latest/tracers/traits.rs @@ -1,11 +1,16 @@ -use crate::interface::dyn_tracers::vm_1_4_0::DynTracer; -use crate::interface::tracer::{TracerExecutionStatus, VmExecutionStopReason}; use zksync_state::WriteStorage; -use crate::vm_latest::bootloader_state::BootloaderState; -use crate::vm_latest::old_vm::history_recorder::HistoryMode; -use crate::vm_latest::old_vm::memory::SimpleMemory; -use crate::vm_latest::types::internals::ZkSyncVmState; +use crate::{ + interface::{ + dyn_tracers::vm_1_4_1::DynTracer, + tracer::{TracerExecutionStatus, VmExecutionStopReason}, + }, + vm_latest::{ + bootloader_state::BootloaderState, + old_vm::{history_recorder::HistoryMode, memory::SimpleMemory}, + types::internals::ZkSyncVmState, + }, +}; pub type TracerPointer = Box>; diff --git a/core/lib/multivm/src/versions/vm_latest/tracers/utils.rs b/core/lib/multivm/src/versions/vm_latest/tracers/utils.rs index 5b34eee4742..78129790c44 100644 --- a/core/lib/multivm/src/versions/vm_latest/tracers/utils.rs +++ b/core/lib/multivm/src/versions/vm_latest/tracers/utils.rs @@ -1,10 +1,10 @@ -use zk_evm_1_4_0::aux_structures::MemoryPage; -use zk_evm_1_4_0::zkevm_opcode_defs::{FarCallABI, FarCallForwardPageType}; -use zk_evm_1_4_0::{ +use zk_evm_1_4_1::{ + aux_structures::MemoryPage, tracing::{BeforeExecutionData, VmLocalStateData}, - zkevm_opcode_defs::{FatPointer, LogOpcode, Opcode, UMAOpcode}, + zkevm_opcode_defs::{ + FarCallABI, FarCallForwardPageType, FatPointer, LogOpcode, Opcode, UMAOpcode, + }, }; - use zksync_system_constants::{ ECRECOVER_PRECOMPILE_ADDRESS, KECCAK256_PRECOMPILE_ADDRESS, KNOWN_CODES_STORAGE_ADDRESS, L1_MESSENGER_ADDRESS, SHA256_PRECOMPILE_ADDRESS, @@ -12,12 +12,16 @@ use zksync_system_constants::{ use zksync_types::U256; use zksync_utils::u256_to_h256; -use crate::vm_latest::constants::{ - BOOTLOADER_HEAP_PAGE, VM_HOOK_PARAMS_COUNT, VM_HOOK_PARAMS_START_POSITION, VM_HOOK_POSITION, +use crate::vm_latest::{ + constants::{ + BOOTLOADER_HEAP_PAGE, VM_HOOK_PARAMS_COUNT, VM_HOOK_PARAMS_START_POSITION, VM_HOOK_POSITION, + }, + old_vm::{ + history_recorder::HistoryMode, + memory::SimpleMemory, + utils::{aux_heap_page_from_base, heap_page_from_base}, + }, }; -use crate::vm_latest::old_vm::history_recorder::HistoryMode; -use crate::vm_latest::old_vm::memory::SimpleMemory; -use crate::vm_latest::old_vm::utils::{aux_heap_page_from_base, heap_page_from_base}; #[derive(Clone, Debug, Copy)] pub(crate) enum VmHook { @@ -53,7 +57,7 @@ impl VmHook { let value = data.src1_value.value; - // Only UMA opcodes in the bootloader serve for vm hooks + // Only `UMA` opcodes in the bootloader serve for vm hooks if !matches!(opcode_variant.opcode, Opcode::UMA(UMAOpcode::HeapWrite)) || heap_page != BOOTLOADER_HEAP_PAGE || fat_ptr.offset != VM_HOOK_POSITION * 32 @@ -94,7 +98,7 @@ pub(crate) fn 
get_debug_log( let msg = String::from_utf8(msg).expect("Invalid debug message"); let data = U256::from_big_endian(&data); - // For long data, it is better to use hex-encoding for greater readibility + // For long data, it is better to use hex-encoding for greater readability let data_str = if data > U256::from(u64::max_value()) { let mut bytes = [0u8; 32]; data.to_big_endian(&mut bytes); @@ -109,7 +113,7 @@ pub(crate) fn get_debug_log( } /// Reads the memory slice represented by the fat pointer. -/// Note, that the fat pointer must point to the accesible memory (i.e. not cleared up yet). +/// Note, that the fat pointer must point to the accessible memory (i.e. not cleared up yet). pub(crate) fn read_pointer( memory: &SimpleMemory, pointer: FatPointer, diff --git a/core/lib/multivm/src/versions/vm_latest/types/internals/mod.rs b/core/lib/multivm/src/versions/vm_latest/types/internals/mod.rs index c189de7266d..7dc60ec5b0f 100644 --- a/core/lib/multivm/src/versions/vm_latest/types/internals/mod.rs +++ b/core/lib/multivm/src/versions/vm_latest/types/internals/mod.rs @@ -1,8 +1,9 @@ +pub(crate) use pubdata::PubdataInput; pub(crate) use snapshot::VmSnapshot; pub(crate) use transaction_data::TransactionData; pub(crate) use vm_state::new_vm_state; -pub(crate) mod pubdata; pub use vm_state::ZkSyncVmState; +mod pubdata; mod snapshot; mod transaction_data; mod vm_state; diff --git a/core/lib/multivm/src/versions/vm_latest/types/internals/pubdata.rs b/core/lib/multivm/src/versions/vm_latest/types/internals/pubdata.rs index e246bceeac5..38489a6c8e9 100644 --- a/core/lib/multivm/src/versions/vm_latest/types/internals/pubdata.rs +++ b/core/lib/multivm/src/versions/vm_latest/types/internals/pubdata.rs @@ -1,4 +1,3 @@ -use zksync_types::ethabi; use zksync_types::{ event::L1MessengerL2ToL1Log, writes::{compress_state_diffs, StateDiffRecord}, @@ -14,7 +13,7 @@ pub(crate) struct PubdataInput { } impl PubdataInput { - pub(crate) fn build_pubdata(self) -> Vec { + pub(crate) fn build_pubdata(self, with_uncompressed_state_diffs: bool) -> Vec { let mut l1_messenger_pubdata = vec![]; let PubdataInput { @@ -25,14 +24,14 @@ impl PubdataInput { } = self; // Encoding user L2->L1 logs. - // Format: [(numberOfL2ToL1Logs as u32) || l2tol1logs[1] || ... || l2tol1logs[n]] + // Format: `[(numberOfL2ToL1Logs as u32) || l2tol1logs[1] || ... || l2tol1logs[n]]` l1_messenger_pubdata.extend((user_logs.len() as u32).to_be_bytes()); for l2tol1log in user_logs { l1_messenger_pubdata.extend(l2tol1log.packed_encoding()); } // Encoding L2->L1 messages - // Format: [(numberOfMessages as u32) || (messages[1].len() as u32) || messages[1] || ... || (messages[n].len() as u32) || messages[n]] + // Format: `[(numberOfMessages as u32) || (messages[1].len() as u32) || messages[1] || ... || (messages[n].len() as u32) || messages[n]]` l1_messenger_pubdata.extend((l2_to_l1_messages.len() as u32).to_be_bytes()); for message in l2_to_l1_messages { l1_messenger_pubdata.extend((message.len() as u32).to_be_bytes()); @@ -40,7 +39,7 @@ impl PubdataInput { } // Encoding bytecodes - // Format: [(numberOfBytecodes as u32) || (bytecodes[1].len() as u32) || bytecodes[1] || ... || (bytecodes[n].len() as u32) || bytecodes[n]] + // Format: `[(numberOfBytecodes as u32) || (bytecodes[1].len() as u32) || bytecodes[1] || ... 
|| (bytecodes[n].len() as u32) || bytecodes[n]]` l1_messenger_pubdata.extend((published_bytecodes.len() as u32).to_be_bytes()); for bytecode in published_bytecodes { l1_messenger_pubdata.extend((bytecode.len() as u32).to_be_bytes()); @@ -48,27 +47,18 @@ impl PubdataInput { } // Encoding state diffs - // Format: [size of compressed state diffs u32 || compressed state diffs || (# state diffs: intial + repeated) as u32 || sorted state diffs by ] + // Format: `[size of compressed state diffs u32 || compressed state diffs || (# state diffs: intial + repeated) as u32 || sorted state diffs by ]` let state_diffs_compressed = compress_state_diffs(state_diffs.clone()); l1_messenger_pubdata.extend(state_diffs_compressed); - l1_messenger_pubdata.extend((state_diffs.len() as u32).to_be_bytes()); - for state_diff in state_diffs { - l1_messenger_pubdata.extend(state_diff.encode_padded()); + if with_uncompressed_state_diffs { + l1_messenger_pubdata.extend((state_diffs.len() as u32).to_be_bytes()); + for state_diff in state_diffs { + l1_messenger_pubdata.extend(state_diff.encode_padded()); + } } - // ABI-encoding the final pubdata - let l1_messenger_abi_encoded_pubdata = - ethabi::encode(&[ethabi::Token::Bytes(l1_messenger_pubdata)]); - - assert!( - l1_messenger_abi_encoded_pubdata.len() % 32 == 0, - "abi encoded bytes array length should be divisible by 32" - ); - - // Need to skip first word as it represents array offset - // while bootloader expects only [len || data] - l1_messenger_abi_encoded_pubdata[32..].to_vec() + l1_messenger_pubdata } } @@ -126,7 +116,8 @@ mod tests { state_diffs, }; - let pubdata = input.build_pubdata(); + let pubdata = + ethabi::encode(&[ethabi::Token::Bytes(input.build_pubdata(true))])[32..].to_vec(); assert_eq!(hex::encode(pubdata), "00000000000000000000000000000000000000000000000000000000000002c700000001000000000000000000000000000000000000000000008001000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000800000000100000004deadbeef0000000100000004aaaabbbb0100002a040001000000000000000000000000000000000000000000000000000000000000007e090e0000000c0901000000020000000000000000000000000000000000008002000000000000000000000000000000000000000000000000000000000000009b000000000000000000000000000000000000000000000000000000000000007d000000000000000c000000000000000000000000000000000000000000000000000000000000000b000000000000000000000000000000000000000000000000000000000000000c00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000008002000000000000000000000000000000000000000000000000000000000000009c000000000000000000000000000000000000000000000000000000000000007e00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000e000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"); } diff --git a/core/lib/multivm/src/versions/vm_latest/types/internals/snapshot.rs b/core/lib/multivm/src/versions/vm_latest/types/internals/snapshot.rs index 2a9368c37a3..0633cf61cda 100644 --- 
a/core/lib/multivm/src/versions/vm_latest/types/internals/snapshot.rs +++ b/core/lib/multivm/src/versions/vm_latest/types/internals/snapshot.rs @@ -1,4 +1,4 @@ -use zk_evm_1_4_0::vm_state::VmLocalState; +use zk_evm_1_4_1::vm_state::VmLocalState; use crate::vm_latest::bootloader_state::BootloaderStateSnapshot; diff --git a/core/lib/multivm/src/versions/vm_latest/types/internals/transaction_data.rs b/core/lib/multivm/src/versions/vm_latest/types/internals/transaction_data.rs index f81741d2a43..2445e1bdb72 100644 --- a/core/lib/multivm/src/versions/vm_latest/types/internals/transaction_data.rs +++ b/core/lib/multivm/src/versions/vm_latest/types/internals/transaction_data.rs @@ -1,17 +1,20 @@ use std::convert::TryInto; -use zksync_types::ethabi::{encode, Address, Token}; -use zksync_types::fee::{encoding_len, Fee}; -use zksync_types::l1::is_l1_tx_type; -use zksync_types::l2::L2Tx; -use zksync_types::transaction_request::{PaymasterParams, TransactionRequest}; + use zksync_types::{ - l2::TransactionType, Bytes, Execute, ExecuteTransactionCommon, L2ChainId, L2TxCommonData, - Nonce, Transaction, H256, U256, + ethabi::{encode, Address, Token}, + fee::{encoding_len, Fee}, + l1::is_l1_tx_type, + l2::{L2Tx, TransactionType}, + transaction_request::{PaymasterParams, TransactionRequest}, + Bytes, Execute, ExecuteTransactionCommon, L2ChainId, L2TxCommonData, Nonce, Transaction, H256, + U256, }; -use zksync_utils::address_to_h256; -use zksync_utils::{bytecode::hash_bytecode, bytes_to_be_words, h256_to_u256}; +use zksync_utils::{address_to_h256, bytecode::hash_bytecode, bytes_to_be_words, h256_to_u256}; -use crate::vm_latest::utils::overhead::{get_amortized_overhead, OverheadCoeficients}; +use crate::vm_latest::{ + constants::{L1_TX_TYPE, MAX_GAS_PER_PUBDATA_BYTE, PRIORITY_TX_MAX_GAS_LIMIT}, + utils::overhead::derive_overhead, +}; /// This structure represents the data that is used by /// the Bootloader to describe the transaction. @@ -59,12 +62,22 @@ impl From for TransactionData { U256::zero() }; + // Ethereum transactions do not sign gas per pubdata limit, and so for them we need to use + // some default value. We use the maximum possible value that is allowed by the bootloader + // (i.e. we can not use u64::MAX, because the bootloader requires gas per pubdata for such + // transactions to be higher than `MAX_GAS_PER_PUBDATA_BYTE`). + let gas_per_pubdata_limit = if common_data.transaction_type.is_ethereum_type() { + MAX_GAS_PER_PUBDATA_BYTE.into() + } else { + common_data.fee.gas_per_pubdata_limit + }; + TransactionData { tx_type: (common_data.transaction_type as u32) as u8, from: common_data.initiator_address, to: execute_tx.execute.contract_address, gas_limit: common_data.fee.gas_limit, - pubdata_price_limit: common_data.fee.gas_per_pubdata_limit, + pubdata_price_limit: gas_per_pubdata_limit, max_fee_per_gas: common_data.fee.max_fee_per_gas, max_priority_fee_per_gas: common_data.fee.max_priority_fee_per_gas, paymaster: common_data.paymaster_params.paymaster, @@ -189,21 +202,7 @@ impl TransactionData { bytes_to_be_words(bytes) } - pub(crate) fn effective_gas_price_per_pubdata(&self, block_gas_price_per_pubdata: u32) -> u32 { - // It is enforced by the protocol that the L1 transactions always pay the exact amount of gas per pubdata - // as was supplied in the transaction. 
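A small self-contained sketch of the gas-per-pubdata limit selection described in the hunk above; the constant's value is an assumption, not the real one:

```rust
// Hypothetical constant, mirroring the rule described in the diff above.
const MAX_GAS_PER_PUBDATA_BYTE: u64 = 50_000; // assumed value

fn gas_per_pubdata_limit(is_ethereum_type: bool, signed_limit: u64) -> u64 {
    if is_ethereum_type {
        // Ethereum transactions do not sign a gas-per-pubdata limit,
        // so the maximum value accepted by the bootloader is substituted.
        MAX_GAS_PER_PUBDATA_BYTE
    } else {
        signed_limit
    }
}

fn main() {
    assert_eq!(gas_per_pubdata_limit(true, 0), 50_000);
    assert_eq!(gas_per_pubdata_limit(false, 800), 800);
}
```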
- if is_l1_tx_type(self.tx_type) { - self.pubdata_price_limit.as_u32() - } else { - block_gas_price_per_pubdata - } - } - - pub(crate) fn overhead_gas(&self, block_gas_price_per_pubdata: u32) -> u32 { - let total_gas_limit = self.gas_limit.as_u32(); - let gas_price_per_pubdata = - self.effective_gas_price_per_pubdata(block_gas_price_per_pubdata); - + pub(crate) fn overhead_gas(&self) -> u32 { let encoded_len = encoding_len( self.data.len() as u64, self.signature.len() as u64, @@ -212,16 +211,16 @@ impl TransactionData { self.reserved_dynamic.len() as u64, ); - let coeficients = OverheadCoeficients::from_tx_type(self.tx_type); - get_amortized_overhead( - total_gas_limit, - gas_price_per_pubdata, - encoded_len, - coeficients, - ) + derive_overhead(encoded_len) } - pub(crate) fn trusted_ergs_limit(&self, _block_gas_price_per_pubdata: u64) -> U256 { + pub(crate) fn trusted_ergs_limit(&self) -> U256 { + if self.tx_type == L1_TX_TYPE { + // In case we get a user's transaction with an unexpected gas limit, we do not let it have more than + // a certain limit + return U256::from(PRIORITY_TX_MAX_GAS_LIMIT).min(self.gas_limit); + } + // TODO (EVM-66): correctly calculate the trusted gas limit for a transaction self.gas_limit } @@ -234,7 +233,7 @@ impl TransactionData { let l2_tx: L2Tx = self.clone().try_into().unwrap(); let transaction_request: TransactionRequest = l2_tx.into(); - // It is assumed that the TransactionData always has all the necessary components to recover the hash. + // It is assumed that the `TransactionData` always has all the necessary components to recover the hash. transaction_request .get_tx_hash(chain_id) .expect("Could not recover L2 transaction hash") @@ -303,9 +302,10 @@ impl TryInto for TransactionData { #[cfg(test)] mod tests { - use super::*; use zksync_types::fee::encoding_len; + use super::*; + #[test] fn test_consistency_with_encoding_length() { let transaction = TransactionData { diff --git a/core/lib/multivm/src/versions/vm_latest/types/internals/vm_state.rs b/core/lib/multivm/src/versions/vm_latest/types/internals/vm_state.rs index 0c519a324c0..223c908ae7f 100644 --- a/core/lib/multivm/src/versions/vm_latest/types/internals/vm_state.rs +++ b/core/lib/multivm/src/versions/vm_latest/types/internals/vm_state.rs @@ -1,40 +1,46 @@ -use zk_evm_1_4_0::{ - aux_structures::MemoryPage, - aux_structures::Timestamp, +use zk_evm_1_4_1::{ + aux_structures::{MemoryPage, Timestamp}, block_properties::BlockProperties, vm_state::{CallStackEntry, PrimitiveValue, VmState}, witness_trace::DummyTracer, zkevm_opcode_defs::{ system_params::{BOOTLOADER_MAX_MEMORY, INITIAL_FRAME_FORMAL_EH_LOCATION}, - FatPointer, BOOTLOADER_CALLDATA_PAGE, + FatPointer, BOOTLOADER_BASE_PAGE, BOOTLOADER_CALLDATA_PAGE, BOOTLOADER_CODE_PAGE, + STARTING_BASE_PAGE, STARTING_TIMESTAMP, }, }; - -use crate::interface::{L1BatchEnv, L2Block, SystemEnv}; -use zk_evm_1_4_0::zkevm_opcode_defs::{ - BOOTLOADER_BASE_PAGE, BOOTLOADER_CODE_PAGE, STARTING_BASE_PAGE, STARTING_TIMESTAMP, -}; use zksync_state::{StoragePtr, WriteStorage}; use zksync_system_constants::BOOTLOADER_ADDRESS; -use zksync_types::block::legacy_miniblock_hash; -use zksync_types::{zkevm_test_harness::INITIAL_MONOTONIC_CYCLE_COUNTER, Address, MiniblockNumber}; +use zksync_types::{ + block::MiniblockHasher, zkevm_test_harness::INITIAL_MONOTONIC_CYCLE_COUNTER, Address, + MiniblockNumber, +}; use zksync_utils::h256_to_u256; -use crate::vm_latest::bootloader_state::BootloaderState; -use crate::vm_latest::constants::BOOTLOADER_HEAP_PAGE; -use
crate::vm_latest::old_vm::{ - event_sink::InMemoryEventSink, history_recorder::HistoryMode, memory::SimpleMemory, - oracles::decommitter::DecommitterOracle, oracles::precompile::PrecompilesProcessorWithHistory, +use crate::{ + interface::{L1BatchEnv, L2Block, SystemEnv}, + vm_latest::{ + bootloader_state::BootloaderState, + constants::BOOTLOADER_HEAP_PAGE, + old_vm::{ + event_sink::InMemoryEventSink, + history_recorder::HistoryMode, + memory::SimpleMemory, + oracles::{ + decommitter::DecommitterOracle, precompile::PrecompilesProcessorWithHistory, + }, + }, + oracles::storage::StorageOracle, + types::l1_batch::bootloader_initial_memory, + utils::l2_blocks::{assert_next_block, load_last_l2_block}, + }, }; -use crate::vm_latest::oracles::storage::StorageOracle; -use crate::vm_latest::types::l1_batch::bootloader_initial_memory; -use crate::vm_latest::utils::l2_blocks::{assert_next_block, load_last_l2_block}; pub type ZkSyncVmState = VmState< StorageOracle, SimpleMemory, InMemoryEventSink, - PrecompilesProcessorWithHistory, + PrecompilesProcessorWithHistory, DecommitterOracle, DummyTracer, >; @@ -67,7 +73,9 @@ pub(crate) fn new_vm_state( L2Block { number: l1_batch_env.first_l2_block.number.saturating_sub(1), timestamp: 0, - hash: legacy_miniblock_hash(MiniblockNumber(l1_batch_env.first_l2_block.number) - 1), + hash: MiniblockHasher::legacy_hash( + MiniblockNumber(l1_batch_env.first_l2_block.number) - 1, + ), } }; @@ -76,7 +84,7 @@ pub(crate) fn new_vm_state( let storage_oracle: StorageOracle = StorageOracle::new(storage.clone()); let mut memory = SimpleMemory::default(); let event_sink = InMemoryEventSink::default(); - let precompiles_processor = PrecompilesProcessorWithHistory::::default(); + let precompiles_processor = PrecompilesProcessorWithHistory::::default(); let mut decommittment_processor: DecommitterOracle = DecommitterOracle::new(storage); diff --git a/core/lib/multivm/src/versions/vm_latest/types/l1_batch.rs b/core/lib/multivm/src/versions/vm_latest/types/l1_batch.rs index 631f1436cc3..b3bf15cb1be 100644 --- a/core/lib/multivm/src/versions/vm_latest/types/l1_batch.rs +++ b/core/lib/multivm/src/versions/vm_latest/types/l1_batch.rs @@ -1,12 +1,13 @@ -use crate::interface::L1BatchEnv; use zksync_types::U256; use zksync_utils::{address_to_u256, h256_to_u256}; +use crate::{interface::L1BatchEnv, vm_latest::utils::fee::get_batch_base_fee}; + const OPERATOR_ADDRESS_SLOT: usize = 0; const PREV_BLOCK_HASH_SLOT: usize = 1; const NEW_BLOCK_TIMESTAMP_SLOT: usize = 2; const NEW_BLOCK_NUMBER_SLOT: usize = 3; -const L1_GAS_PRICE_SLOT: usize = 4; +const FAIR_PUBDATA_PRICE_SLOT: usize = 4; const FAIR_L2_GAS_PRICE_SLOT: usize = 5; const EXPECTED_BASE_FEE_SLOT: usize = 6; const SHOULD_SET_NEW_BLOCK_SLOT: usize = 7; @@ -26,12 +27,18 @@ pub(crate) fn bootloader_initial_memory(l1_batch: &L1BatchEnv) -> Vec<(usize, U2 (PREV_BLOCK_HASH_SLOT, prev_block_hash), (NEW_BLOCK_TIMESTAMP_SLOT, U256::from(l1_batch.timestamp)), (NEW_BLOCK_NUMBER_SLOT, U256::from(l1_batch.number.0)), - (L1_GAS_PRICE_SLOT, U256::from(l1_batch.l1_gas_price)), + ( + FAIR_PUBDATA_PRICE_SLOT, + U256::from(l1_batch.fee_input.fair_pubdata_price()), + ), ( FAIR_L2_GAS_PRICE_SLOT, - U256::from(l1_batch.fair_l2_gas_price), + U256::from(l1_batch.fee_input.fair_l2_gas_price()), + ), + ( + EXPECTED_BASE_FEE_SLOT, + U256::from(get_batch_base_fee(l1_batch)), ), - (EXPECTED_BASE_FEE_SLOT, U256::from(l1_batch.base_fee())), (SHOULD_SET_NEW_BLOCK_SLOT, should_set_new_block), ] } diff --git a/core/lib/multivm/src/versions/vm_latest/utils/fee.rs 
b/core/lib/multivm/src/versions/vm_latest/utils/fee.rs index bbf09a75f3f..ea4de720443 100644 --- a/core/lib/multivm/src/versions/vm_latest/utils/fee.rs +++ b/core/lib/multivm/src/versions/vm_latest/utils/fee.rs @@ -1,29 +1,70 @@ //! Utility functions for vm -use zksync_system_constants::MAX_GAS_PER_PUBDATA_BYTE; +use zksync_types::{ + fee_model::{BatchFeeInput, PubdataIndependentBatchFeeModelInput}, + U256, +}; use zksync_utils::ceil_div; -use crate::vm_latest::old_vm::utils::eth_price_per_pubdata_byte; - -/// Calcluates the amount of gas required to publish one byte of pubdata -pub fn base_fee_to_gas_per_pubdata(l1_gas_price: u64, base_fee: u64) -> u64 { - let eth_price_per_pubdata_byte = eth_price_per_pubdata_byte(l1_gas_price); - - ceil_div(eth_price_per_pubdata_byte, base_fee) -} +use crate::vm_latest::{constants::MAX_GAS_PER_PUBDATA_BYTE, L1BatchEnv}; /// Calculates the base fee and gas per pubdata for the given L1 gas price. -pub fn derive_base_fee_and_gas_per_pubdata(l1_gas_price: u64, fair_gas_price: u64) -> (u64, u64) { - let eth_price_per_pubdata_byte = eth_price_per_pubdata_byte(l1_gas_price); +pub(crate) fn derive_base_fee_and_gas_per_pubdata( + fee_input: PubdataIndependentBatchFeeModelInput, +) -> (u64, u64) { + let PubdataIndependentBatchFeeModelInput { + fair_l2_gas_price, + fair_pubdata_price, + .. + } = fee_input; - // The baseFee is set in such a way that it is always possible for a transaction to + // The `baseFee` is set in such a way that it is always possible for a transaction to // publish enough public data while compensating us for it. let base_fee = std::cmp::max( - fair_gas_price, - ceil_div(eth_price_per_pubdata_byte, MAX_GAS_PER_PUBDATA_BYTE), + fair_l2_gas_price, + ceil_div(fair_pubdata_price, MAX_GAS_PER_PUBDATA_BYTE), ); - ( - base_fee, - base_fee_to_gas_per_pubdata(l1_gas_price, base_fee), - ) + let gas_per_pubdata = ceil_div(fair_pubdata_price, base_fee); + + (base_fee, gas_per_pubdata) +} + +pub(crate) fn get_batch_base_fee(l1_batch_env: &L1BatchEnv) -> u64 { + if let Some(base_fee) = l1_batch_env.enforced_base_fee { + return base_fee; + } + let (base_fee, _) = + derive_base_fee_and_gas_per_pubdata(l1_batch_env.fee_input.into_pubdata_independent()); + base_fee +} + +/// Changes the fee model output so that the expected gas per pubdata is smaller than or equal to the `tx_gas_per_pubdata_limit`. +/// This function expects that the currently expected gas per pubdata is greater than the `tx_gas_per_pubdata_limit`.
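A worked numeric sketch of `derive_base_fee_and_gas_per_pubdata` above; all prices are invented, and the `MAX_GAS_PER_PUBDATA_BYTE` value is an assumed stand-in:

```rust
// Invented inputs for illustration.
const MAX_GAS_PER_PUBDATA_BYTE: u64 = 50_000; // assumed constant

fn ceil_div(a: u64, b: u64) -> u64 {
    (a + b - 1) / b
}

fn main() {
    let fair_l2_gas_price: u64 = 250_000_000;
    let fair_pubdata_price: u64 = 100_000_000_000_000;

    // The base fee guarantees a transaction can afford pubdata at the capped rate.
    let base_fee =
        fair_l2_gas_price.max(ceil_div(fair_pubdata_price, MAX_GAS_PER_PUBDATA_BYTE));
    let gas_per_pubdata = ceil_div(fair_pubdata_price, base_fee);

    // Here ceil(1e14 / 50_000) = 2e9 > 2.5e8, so the pubdata term wins:
    assert_eq!(base_fee, 2_000_000_000);
    assert_eq!(gas_per_pubdata, 50_000);
}
```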
+pub(crate) fn adjust_pubdata_price_for_tx( + mut batch_fee_input: BatchFeeInput, + tx_gas_per_pubdata_limit: U256, +) -> BatchFeeInput { + match &mut batch_fee_input { + BatchFeeInput::L1Pegged(fee_input) => { + // `gasPerPubdata = ceil(17 * l1gasprice / fair_l2_gas_price)` + // `gasPerPubdata <= 17 * l1gasprice / fair_l2_gas_price + 1` + // `fair_l2_gas_price(gasPerPubdata - 1) / 17 <= l1gasprice` + let new_l1_gas_price = U256::from(fee_input.fair_l2_gas_price) + * (tx_gas_per_pubdata_limit - U256::from(1u32)) + / U256::from(17); + + fee_input.l1_gas_price = new_l1_gas_price.as_u64(); + } + BatchFeeInput::PubdataIndependent(fee_input) => { + // `gasPerPubdata = ceil(fair_pubdata_price / fair_l2_gas_price)` + // `gasPerPubdata <= fair_pubdata_price / fair_l2_gas_price + 1` + // `fair_l2_gas_price(gasPerPubdata - 1) <= fair_pubdata_price` + let new_fair_pubdata_price = U256::from(fee_input.fair_l2_gas_price) + * (tx_gas_per_pubdata_limit - U256::from(1u32)); + + fee_input.fair_pubdata_price = new_fair_pubdata_price.as_u64(); + } + } + + batch_fee_input } diff --git a/core/lib/multivm/src/versions/vm_latest/utils/l2_blocks.rs b/core/lib/multivm/src/versions/vm_latest/utils/l2_blocks.rs index 3d5f58094e0..e5832f7f587 100644 --- a/core/lib/multivm/src/versions/vm_latest/utils/l2_blocks.rs +++ b/core/lib/multivm/src/versions/vm_latest/utils/l2_blocks.rs @@ -1,15 +1,17 @@ -use crate::interface::{L2Block, L2BlockEnv}; use zksync_state::{ReadStorage, StoragePtr}; use zksync_system_constants::{ SYSTEM_CONTEXT_ADDRESS, SYSTEM_CONTEXT_CURRENT_L2_BLOCK_HASHES_POSITION, SYSTEM_CONTEXT_CURRENT_L2_BLOCK_INFO_POSITION, SYSTEM_CONTEXT_CURRENT_TX_ROLLING_HASH_POSITION, SYSTEM_CONTEXT_STORED_L2_BLOCK_HASHES, }; -use zksync_types::block::unpack_block_info; -use zksync_types::web3::signing::keccak256; -use zksync_types::{AccountTreeId, MiniblockNumber, StorageKey, H256, U256}; +use zksync_types::{ + block::unpack_block_info, web3::signing::keccak256, AccountTreeId, MiniblockNumber, StorageKey, + H256, U256, +}; use zksync_utils::{h256_to_u256, u256_to_h256}; +use crate::interface::{L2Block, L2BlockEnv}; + pub(crate) fn get_l2_block_hash_key(block_number: u32) -> StorageKey { let position = h256_to_u256(SYSTEM_CONTEXT_CURRENT_L2_BLOCK_HASHES_POSITION) + U256::from(block_number % SYSTEM_CONTEXT_STORED_L2_BLOCK_HASHES); @@ -66,7 +68,7 @@ pub fn load_last_l2_block(storage: StoragePtr) -> Option( diff --git a/core/lib/multivm/src/versions/vm_latest/utils/overhead.rs b/core/lib/multivm/src/versions/vm_latest/utils/overhead.rs index 2541c7d7037..da200c8138c 100644 --- a/core/lib/multivm/src/versions/vm_latest/utils/overhead.rs +++ b/core/lib/multivm/src/versions/vm_latest/utils/overhead.rs @@ -1,349 +1,8 @@ -use crate::vm_latest::constants::{ - BLOCK_OVERHEAD_GAS, BLOCK_OVERHEAD_PUBDATA, BOOTLOADER_TX_ENCODING_SPACE, -}; -use zk_evm_1_4_0::zkevm_opcode_defs::system_params::MAX_TX_ERGS_LIMIT; -use zksync_system_constants::{MAX_L2_TX_GAS_LIMIT, MAX_TXS_IN_BLOCK}; -use zksync_types::l1::is_l1_tx_type; -use zksync_types::U256; -use zksync_utils::ceil_div_u256; +use crate::vm_latest::constants::{TX_MEMORY_OVERHEAD_GAS, TX_SLOT_OVERHEAD_GAS}; -/// Derives the overhead for processing transactions in a block. 
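To sanity-check the pubdata-independent branch of `adjust_pubdata_price_for_tx` above (numbers invented): with the adjusted price, the gas per pubdata implied by the comment's formula indeed stays within the transaction's limit.

```rust
fn ceil_div(a: u64, b: u64) -> u64 {
    (a + b - 1) / b
}

fn main() {
    // Invented inputs for illustration.
    let fair_l2_gas_price: u64 = 250_000_000;
    let tx_gas_per_pubdata_limit: u64 = 800;

    // The adjustment from the pubdata-independent branch:
    // fair_l2_gas_price * (gasPerPubdata - 1) <= fair_pubdata_price.
    let new_fair_pubdata_price = fair_l2_gas_price * (tx_gas_per_pubdata_limit - 1);

    // gasPerPubdata = ceil(fair_pubdata_price / fair_l2_gas_price), per the comment above.
    let gas_per_pubdata = ceil_div(new_fair_pubdata_price, fair_l2_gas_price);
    assert!(gas_per_pubdata <= tx_gas_per_pubdata_limit); // 799 <= 800 here
}
```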
-pub fn derive_overhead( - gas_limit: u32, - gas_price_per_pubdata: u32, - encoded_len: usize, - coeficients: OverheadCoeficients, -) -> u32 { - // Even if the gas limit is greater than the MAX_TX_ERGS_LIMIT, we assume that everything beyond MAX_TX_ERGS_LIMIT - // will be spent entirely on publishing bytecodes and so we derive the overhead solely based on the capped value - let gas_limit = std::cmp::min(MAX_TX_ERGS_LIMIT, gas_limit); - - // Using large U256 type to avoid overflow - let max_block_overhead = U256::from(block_overhead_gas(gas_price_per_pubdata)); - let gas_limit = U256::from(gas_limit); - let encoded_len = U256::from(encoded_len); - - // The MAX_TX_ERGS_LIMIT is formed in a way that may fullfills a single-instance circuits - // if used in full. That is, within MAX_TX_ERGS_LIMIT it is possible to fully saturate all the single-instance - // circuits. - let overhead_for_single_instance_circuits = - ceil_div_u256(gas_limit * max_block_overhead, MAX_TX_ERGS_LIMIT.into()); - - // The overhead for occupying the bootloader memory - let overhead_for_length = ceil_div_u256( - encoded_len * max_block_overhead, - BOOTLOADER_TX_ENCODING_SPACE.into(), - ); - - // The overhead for occupying a single tx slot - let tx_slot_overhead = ceil_div_u256(max_block_overhead, MAX_TXS_IN_BLOCK.into()); - - // We use "ceil" here for formal reasons to allow easier approach for calculating the overhead in O(1) - // let max_pubdata_in_tx = ceil_div_u256(gas_limit, gas_price_per_pubdata); - - // The maximal potential overhead from pubdata - // TODO (EVM-67): possibly use overhead for pubdata - // let pubdata_overhead = ceil_div_u256( - // max_pubdata_in_tx * max_block_overhead, - // MAX_PUBDATA_PER_BLOCK.into(), - // ); - - vec![ - (coeficients.ergs_limit_overhead_coeficient - * overhead_for_single_instance_circuits.as_u32() as f64) - .floor() as u32, - (coeficients.bootloader_memory_overhead_coeficient * overhead_for_length.as_u32() as f64) - .floor() as u32, - (coeficients.slot_overhead_coeficient * tx_slot_overhead.as_u32() as f64) as u32, - ] - .into_iter() - .max() - .unwrap() -} - -/// Contains the coeficients with which the overhead for transactions will be calculated. -/// All of the coeficients should be <= 1. There are here to provide a certain "discount" for normal transactions -/// at the risk of malicious transactions that may close the block prematurely. 
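For reference, a compact numeric sketch of the max-of-three rule in the legacy `derive_overhead` being removed here; every constant below is an invented stand-in, and the coefficients are omitted (treated as 1.0):

```rust
fn ceil_div(a: u64, b: u64) -> u64 {
    (a + b - 1) / b
}

fn main() {
    // Invented stand-ins for the real block constants.
    let max_block_overhead: u64 = 1_200_000;
    let max_tx_ergs_limit: u64 = 80_000_000;
    let bootloader_tx_encoding_space: u64 = 500_000;
    let max_txs_in_block: u64 = 1_024;

    let gas_limit: u64 = 40_000_000;
    let encoded_len: u64 = 2_000;

    // The three overhead components described in the deleted code above.
    let single_instance = ceil_div(gas_limit * max_block_overhead, max_tx_ergs_limit);
    let memory = ceil_div(encoded_len * max_block_overhead, bootloader_tx_encoding_space);
    let slot = ceil_div(max_block_overhead, max_txs_in_block);

    // The user pays the maximum of the three.
    let overhead = single_instance.max(memory).max(slot);
    assert_eq!(overhead, 600_000); // the single-instance term dominates here
}
```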
-/// IMPORTANT: to perform correct computations, `MAX_TX_ERGS_LIMIT / coeficients.ergs_limit_overhead_coeficient` MUST -/// result in an integer number -#[derive(Debug, Clone, Copy)] -pub struct OverheadCoeficients { - slot_overhead_coeficient: f64, - bootloader_memory_overhead_coeficient: f64, - ergs_limit_overhead_coeficient: f64, -} - -impl OverheadCoeficients { - // This method ensures that the parameters keep the required invariants - fn new_checked( - slot_overhead_coeficient: f64, - bootloader_memory_overhead_coeficient: f64, - ergs_limit_overhead_coeficient: f64, - ) -> Self { - assert!( - (MAX_TX_ERGS_LIMIT as f64 / ergs_limit_overhead_coeficient).round() - == MAX_TX_ERGS_LIMIT as f64 / ergs_limit_overhead_coeficient, - "MAX_TX_ERGS_LIMIT / ergs_limit_overhead_coeficient must be an integer" - ); - - Self { - slot_overhead_coeficient, - bootloader_memory_overhead_coeficient, - ergs_limit_overhead_coeficient, - } - } - - // L1->L2 do not receive any discounts - fn new_l1() -> Self { - OverheadCoeficients::new_checked(1.0, 1.0, 1.0) - } - - fn new_l2() -> Self { - OverheadCoeficients::new_checked( - 1.0, 1.0, - // For L2 transactions we allow a certain default discount with regard to the number of ergs. - // Multiinstance circuits can in theory be spawned infinite times, while projected future limitations - // on gas per pubdata allow for roughly 800kk gas per L1 batch, so the rough trust "discount" on the proof's part - // to be paid by the users is 0.1. - 0.1, - ) - } - - /// Return the coeficients for the given transaction type - pub fn from_tx_type(tx_type: u8) -> Self { - if is_l1_tx_type(tx_type) { - Self::new_l1() - } else { - Self::new_l2() - } - } -} - -/// This method returns the overhead for processing the block -pub(crate) fn get_amortized_overhead( - total_gas_limit: u32, - gas_per_pubdata_byte_limit: u32, - encoded_len: usize, - coeficients: OverheadCoeficients, -) -> u32 { - // Using large U256 type to prevent overflows. - let overhead_for_block_gas = U256::from(block_overhead_gas(gas_per_pubdata_byte_limit)); - let total_gas_limit = U256::from(total_gas_limit); - let encoded_len = U256::from(encoded_len); - - // Derivation of overhead consists of 4 parts: - // 1. The overhead for taking up a transaction's slot. (O1): O1 = 1 / MAX_TXS_IN_BLOCK - // 2. The overhead for taking up the bootloader's memory (O2): O2 = encoded_len / BOOTLOADER_TX_ENCODING_SPACE - // 3. The overhead for possible usage of pubdata. (O3): O3 = max_pubdata_in_tx / MAX_PUBDATA_PER_BLOCK - // 4. The overhead for possible usage of all the single-instance circuits. (O4): O4 = gas_limit / MAX_TX_ERGS_LIMIT - // - // The maximum of these is taken to derive the part of the block's overhead to be paid by the users: - // - // max_overhead = max(O1, O2, O3, O4) - // overhead_gas = ceil(max_overhead * overhead_for_block_gas). Thus, overhead_gas is a function of - // tx_gas_limit, gas_per_pubdata_byte_limit and encoded_len. - // - // While it is possible to derive the overhead with binary search in O(log n), it is too expensive to be done - // on L1, so here is a reference implementation of finding the overhead for transaction in O(1): - // - // Given total_gas_limit = tx_gas_limit + overhead_gas, we need to find overhead_gas and tx_gas_limit, such that: - // 1. overhead_gas is maximal possible (the operator is paid fairly) - // 2. 
overhead_gas(tx_gas_limit, gas_per_pubdata_byte_limit, encoded_len) >= overhead_gas (the user does not overpay) - // The third part boils to the following 4 inequalities (at least one of these must hold): - // ceil(O1 * overhead_for_block_gas) >= overhead_gas - // ceil(O2 * overhead_for_block_gas) >= overhead_gas - // ceil(O3 * overhead_for_block_gas) >= overhead_gas - // ceil(O4 * overhead_for_block_gas) >= overhead_gas - // - // Now, we need to solve each of these separately: - - // 1. The overhead for occupying a single tx slot is a constant: - let tx_slot_overhead = { - let tx_slot_overhead = - ceil_div_u256(overhead_for_block_gas, MAX_TXS_IN_BLOCK.into()).as_u32(); - (coeficients.slot_overhead_coeficient * tx_slot_overhead as f64).floor() as u32 - }; - - // 2. The overhead for occupying the bootloader memory can be derived from encoded_len - let overhead_for_length = { - let overhead_for_length = ceil_div_u256( - encoded_len * overhead_for_block_gas, - BOOTLOADER_TX_ENCODING_SPACE.into(), - ) - .as_u32(); - - (coeficients.bootloader_memory_overhead_coeficient * overhead_for_length as f64).floor() - as u32 - }; - - // TODO (EVM-67): possibly include the overhead for pubdata. The formula below has not been properly maintained, - // since the pubdat is not published. If decided to use the pubdata overhead, it needs to be updated. - // 3. ceil(O3 * overhead_for_block_gas) >= overhead_gas - // O3 = max_pubdata_in_tx / MAX_PUBDATA_PER_BLOCK = ceil(gas_limit / gas_per_pubdata_byte_limit) / MAX_PUBDATA_PER_BLOCK - // >= (gas_limit / (gas_per_pubdata_byte_limit * MAX_PUBDATA_PER_BLOCK). Throwing off the `ceil`, while may provide marginally lower - // overhead to the operator, provides substantially easier formula to work with. - // - // For better clarity, let's denote gas_limit = GL, MAX_PUBDATA_PER_BLOCK = MP, gas_per_pubdata_byte_limit = EP, overhead_for_block_gas = OB, total_gas_limit = TL, overhead_gas = OE - // ceil(OB * (TL - OE) / (EP * MP)) >= OE - // - // OB * (TL - OE) / (MP * EP) > OE - 1 - // OB * (TL - OE) > (OE - 1) * EP * MP - // OB * TL + EP * MP > OE * EP * MP + OE * OB - // (OB * TL + EP * MP) / (EP * MP + OB) > OE - // OE = floor((OB * TL + EP * MP) / (EP * MP + OB)) with possible -1 if the division is without remainder - // let overhead_for_pubdata = { - // let numerator: U256 = overhead_for_block_gas * total_gas_limit - // + gas_per_pubdata_byte_limit * U256::from(MAX_PUBDATA_PER_BLOCK); - // let denominator = - // gas_per_pubdata_byte_limit * U256::from(MAX_PUBDATA_PER_BLOCK) + overhead_for_block_gas; - - // // Corner case: if `total_gas_limit` = `gas_per_pubdata_byte_limit` = 0 - // // then the numerator will be 0 and subtracting 1 will cause a panic, so we just return a zero. - // if numerator.is_zero() { - // 0.into() - // } else { - // (numerator - 1) / denominator - // } - // }; - - // 4. K * ceil(O4 * overhead_for_block_gas) >= overhead_gas, where K is the discount - // O4 = gas_limit / MAX_TX_ERGS_LIMIT. 
Using the notation from the previous equation: - // ceil(OB * GL / MAX_TX_ERGS_LIMIT) >= (OE / K) - // ceil(OB * (TL - OE) / MAX_TX_ERGS_LIMIT) >= (OE/K) - // OB * (TL - OE) / MAX_TX_ERGS_LIMIT > (OE/K) - 1 - // OB * (TL - OE) > (OE/K) * MAX_TX_ERGS_LIMIT - MAX_TX_ERGS_LIMIT - // OB * TL + MAX_TX_ERGS_LIMIT > OE * ( MAX_TX_ERGS_LIMIT/K + OB) - // OE = floor(OB * TL + MAX_TX_ERGS_LIMIT / (MAX_TX_ERGS_LIMIT/K + OB)), with possible -1 if the division is without remainder - let overhead_for_gas = { - let numerator = overhead_for_block_gas * total_gas_limit + U256::from(MAX_TX_ERGS_LIMIT); - let denominator: U256 = U256::from( - (MAX_TX_ERGS_LIMIT as f64 / coeficients.ergs_limit_overhead_coeficient) as u64, - ) + overhead_for_block_gas; - - let overhead_for_gas = (numerator - 1) / denominator; - - overhead_for_gas.as_u32() - }; - - let overhead = vec![tx_slot_overhead, overhead_for_length, overhead_for_gas] - .into_iter() - .max() - // For the sake of consistency making sure that total_gas_limit >= max_overhead - .map(|max_overhead| std::cmp::min(max_overhead, total_gas_limit.as_u32())) - .unwrap(); - - let limit_after_deducting_overhead = total_gas_limit - overhead; - - // During double checking of the overhead, the bootloader will assume that the - // body of the transaction does not have any more than MAX_L2_TX_GAS_LIMIT ergs available to it. - if limit_after_deducting_overhead.as_u64() > MAX_L2_TX_GAS_LIMIT { - // We derive the same overhead that would exist for the MAX_L2_TX_GAS_LIMIT ergs - derive_overhead( - MAX_L2_TX_GAS_LIMIT as u32, - gas_per_pubdata_byte_limit, - encoded_len.as_usize(), - coeficients, - ) - } else { - overhead - } -} - -pub(crate) fn block_overhead_gas(gas_per_pubdata_byte: u32) -> u32 { - BLOCK_OVERHEAD_GAS + BLOCK_OVERHEAD_PUBDATA * gas_per_pubdata_byte -} - -#[cfg(test)] -mod tests { - - use super::*; - - // This method returns the maximum block overhead that can be charged from the user based on the binary search approach - pub(crate) fn get_maximal_allowed_overhead_bin_search( - total_gas_limit: u32, - gas_per_pubdata_byte_limit: u32, - encoded_len: usize, - coeficients: OverheadCoeficients, - ) -> u32 { - let mut left_bound = if MAX_TX_ERGS_LIMIT < total_gas_limit { - total_gas_limit - MAX_TX_ERGS_LIMIT - } else { - 0u32 - }; - // Safe cast: the gas_limit for a transaction can not be larger than 2^32 - let mut right_bound = total_gas_limit; - - // The closure returns whether a certain overhead would be accepted by the bootloader. - // It is accepted if the derived overhead (i.e. the actual overhead that the user has to pay) - // is >= than the overhead proposed by the operator. 
- let is_overhead_accepted = |suggested_overhead: u32| { - let derived_overhead = derive_overhead( - total_gas_limit - suggested_overhead, - gas_per_pubdata_byte_limit, - encoded_len, - coeficients, - ); - - derived_overhead >= suggested_overhead - }; - - // In order to find the maximal allowed overhead we are doing binary search - while left_bound + 1 < right_bound { - let mid = (left_bound + right_bound) / 2; - - if is_overhead_accepted(mid) { - left_bound = mid; - } else { - right_bound = mid; - } - } - - if is_overhead_accepted(right_bound) { - right_bound - } else { - left_bound - } - } - - #[test] - fn test_correctness_for_efficient_overhead() { - let test_params = |total_gas_limit: u32, - gas_per_pubdata: u32, - encoded_len: usize, - coeficients: OverheadCoeficients| { - let result_by_efficient_search = - get_amortized_overhead(total_gas_limit, gas_per_pubdata, encoded_len, coeficients); - - let result_by_binary_search = get_maximal_allowed_overhead_bin_search( - total_gas_limit, - gas_per_pubdata, - encoded_len, - coeficients, - ); - - assert_eq!(result_by_efficient_search, result_by_binary_search); - }; - - // Some arbitrary test - test_params(60_000_000, 800, 2900, OverheadCoeficients::new_l2()); - - // Very small parameters - test_params(0, 1, 12, OverheadCoeficients::new_l2()); - - // Relatively big parameters - let max_tx_overhead = derive_overhead( - MAX_TX_ERGS_LIMIT, - 5000, - 10000, - OverheadCoeficients::new_l2(), - ); - test_params( - MAX_TX_ERGS_LIMIT + max_tx_overhead, - 5000, - 10000, - OverheadCoeficients::new_l2(), - ); - - test_params(115432560, 800, 2900, OverheadCoeficients::new_l1()); - } +/// In the past, the overhead for transaction depended also on the effective gas per pubdata limit for the transaction. +/// Currently, the approach is more similar to EVM, where only the calldata length and the transaction overhead are taken +/// into account by a constant formula. +pub(crate) fn derive_overhead(encoded_len: usize) -> u32 { + TX_SLOT_OVERHEAD_GAS.max(TX_MEMORY_OVERHEAD_GAS * (encoded_len as u32)) } diff --git a/core/lib/multivm/src/versions/vm_latest/utils/transaction_encoding.rs b/core/lib/multivm/src/versions/vm_latest/utils/transaction_encoding.rs index 9aecef6367e..86c49a3eb15 100644 --- a/core/lib/multivm/src/versions/vm_latest/utils/transaction_encoding.rs +++ b/core/lib/multivm/src/versions/vm_latest/utils/transaction_encoding.rs @@ -1,6 +1,7 @@ -use crate::vm_latest::types::internals::TransactionData; use zksync_types::Transaction; +use crate::vm_latest::types::internals::TransactionData; + /// Extension for transactions, specific for VM. Required for bypassing the orphan rule pub trait TransactionVmExt { /// Get the size of the transaction in tokens. 
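To make the new fee logic above concrete, here is a minimal, self-contained sketch of the two helpers: the replacement `derive_overhead` (a constant formula over the encoded length) and the L1-pegged branch of `adjust_pubdata_price_for_tx` (which inverts the `gasPerPubdata` bound to recover an L1 gas price). The two overhead constants below are placeholders chosen only to make the example runnable; the real values live in `crate::vm_latest::constants` and are not shown in this diff.

```rust
// Placeholder constants; the real ones come from `crate::vm_latest::constants`.
const TX_SLOT_OVERHEAD_GAS: u32 = 10_000; // assumed value
const TX_MEMORY_OVERHEAD_GAS: u32 = 10; // assumed value, gas per encoded byte

// New-style overhead: the larger of the fixed per-slot cost and the cost of the
// bootloader memory the encoding occupies. Unlike the removed version, gas limit
// and gas per pubdata no longer influence the overhead.
fn derive_overhead(encoded_len: usize) -> u32 {
    TX_SLOT_OVERHEAD_GAS.max(TX_MEMORY_OVERHEAD_GAS * (encoded_len as u32))
}

// L1-pegged branch of `adjust_pubdata_price_for_tx`: from
// `gasPerPubdata <= 17 * l1_gas_price / fair_l2_gas_price + 1` it follows that
// `l1_gas_price >= fair_l2_gas_price * (gasPerPubdata - 1) / 17`, which is the
// price the function assigns. (Widths simplified to u64; the real code uses U256.)
fn l1_gas_price_for_pubdata_limit(fair_l2_gas_price: u64, tx_gas_per_pubdata_limit: u64) -> u64 {
    fair_l2_gas_price * tx_gas_per_pubdata_limit.saturating_sub(1) / 17
}

fn main() {
    // With the placeholder constants, the slot cost dominates short transactions
    // and the memory cost dominates long ones:
    assert_eq!(derive_overhead(100), 10_000);
    assert_eq!(derive_overhead(2_000), 20_000);

    // 250_000_000 * (800 - 1) / 17 = 11_750_000_000 exactly.
    assert_eq!(l1_gas_price_for_pubdata_limit(250_000_000, 800), 11_750_000_000);
}
```

With the placeholder values the crossover between the two overhead terms sits at `TX_SLOT_OVERHEAD_GAS / TX_MEMORY_OVERHEAD_GAS` encoded bytes; the max-of-two shape is the same whatever the real constants are.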
diff --git a/core/lib/multivm/src/versions/vm_latest/vm.rs b/core/lib/multivm/src/versions/vm_latest/vm.rs index e63b6438dc9..fd6dded0555 100644 --- a/core/lib/multivm/src/versions/vm_latest/vm.rs +++ b/core/lib/multivm/src/versions/vm_latest/vm.rs @@ -1,21 +1,26 @@ -use crate::HistoryMode; use zksync_state::{StoragePtr, WriteStorage}; -use zksync_types::l2_to_l1_log::{SystemL2ToL1Log, UserL2ToL1Log}; -use zksync_types::{event::extract_l2tol1logs_from_l1_messenger, Transaction}; +use zksync_types::{ + event::extract_l2tol1logs_from_l1_messenger, + l2_to_l1_log::{SystemL2ToL1Log, UserL2ToL1Log}, + Transaction, +}; use zksync_utils::bytecode::CompressedBytecodeInfo; -use crate::vm_latest::old_vm::events::merge_events; -use crate::vm_latest::old_vm::history_recorder::HistoryEnabled; - -use crate::interface::{ - BootloaderMemory, CurrentExecutionState, L1BatchEnv, L2BlockEnv, SystemEnv, VmExecutionMode, - VmExecutionResultAndLogs, VmInterfaceHistoryEnabled, VmMemoryMetrics, +use crate::{ + glue::GlueInto, + interface::{ + BootloaderMemory, BytecodeCompressionError, CurrentExecutionState, FinishedL1Batch, + L1BatchEnv, L2BlockEnv, SystemEnv, VmExecutionMode, VmExecutionResultAndLogs, VmInterface, + VmInterfaceHistoryEnabled, VmMemoryMetrics, + }, + vm_latest::{ + bootloader_state::BootloaderState, + old_vm::{events::merge_events, history_recorder::HistoryEnabled}, + tracers::dispatcher::TracerDispatcher, + types::internals::{new_vm_state, VmSnapshot, ZkSyncVmState}, + }, + HistoryMode, }; -use crate::interface::{BytecodeCompressionError, VmInterface}; -use crate::vm_latest::bootloader_state::BootloaderState; -use crate::vm_latest::tracers::dispatcher::TracerDispatcher; - -use crate::vm_latest::types::internals::{new_vm_state, VmSnapshot, ZkSyncVmState}; /// Main entry point for Virtual Machine integration. 
/// The instance should process only one l1 batch @@ -23,7 +28,7 @@ use crate::vm_latest::types::internals::{new_vm_state, VmSnapshot, ZkSyncVmState pub struct Vm { pub(crate) bootloader_state: BootloaderState, // Current state and oracles of virtual machine - pub(crate) state: ZkSyncVmState, + pub(crate) state: ZkSyncVmState, pub(crate) storage: StoragePtr, pub(crate) system_env: SystemEnv, pub(crate) batch_env: L1BatchEnv, @@ -33,7 +38,7 @@ pub struct Vm { } impl VmInterface for Vm { - type TracerDispatcher = TracerDispatcher; + type TracerDispatcher = TracerDispatcher; fn new(batch_env: L1BatchEnv, system_env: SystemEnv, storage: StoragePtr) -> Self { let (state, bootloader_state) = new_vm_state(storage.clone(), &system_env, &batch_env); @@ -110,7 +115,10 @@ impl VmInterface for Vm { system_logs, total_log_queries, cycles_used: self.state.local_state.monotonic_cycle_counter, - deduplicated_events_logs, + deduplicated_events_logs: deduplicated_events_logs + .into_iter() + .map(GlueInto::glue_into) + .collect(), storage_refunds: self.state.storage.returned_refunds.inner().clone(), } } @@ -123,22 +131,45 @@ impl VmInterface for Vm { tracer: Self::TracerDispatcher, tx: Transaction, with_compression: bool, - ) -> Result { + ) -> ( + Result<(), BytecodeCompressionError>, + VmExecutionResultAndLogs, + ) { self.push_transaction_with_compression(tx, with_compression); let result = self.inspect_inner(tracer, VmExecutionMode::OneTx); if self.has_unpublished_bytecodes() { - Err(BytecodeCompressionError::BytecodeCompressionFailed) + ( + Err(BytecodeCompressionError::BytecodeCompressionFailed), + result, + ) } else { - Ok(result) + (Ok(()), result) } } fn record_vm_memory_metrics(&self) -> VmMemoryMetrics { self.record_vm_memory_metrics_inner() } + + fn finish_batch(&mut self) -> FinishedL1Batch { + let result = self.execute(VmExecutionMode::Batch); + let execution_state = self.get_current_execution_state(); + let bootloader_memory = self.get_bootloader_memory(); + FinishedL1Batch { + block_tip_execution_result: result, + final_execution_state: execution_state, + final_bootloader_memory: Some(bootloader_memory), + pubdata_input: Some( + self.bootloader_state + .get_pubdata_information() + .clone() + .build_pubdata(false), + ), + } + } } -/// Methods of vm, which required some history manipullations +/// Methods of vm, which required some history manipulations impl VmInterfaceHistoryEnabled for Vm { /// Create snapshot of current vm state and push it into the memory fn make_snapshot(&mut self) { diff --git a/core/lib/multivm/src/versions/vm_m5/bootloader_state.rs b/core/lib/multivm/src/versions/vm_m5/bootloader_state.rs index 518d999b6ea..4bb51c7a839 100644 --- a/core/lib/multivm/src/versions/vm_m5/bootloader_state.rs +++ b/core/lib/multivm/src/versions/vm_m5/bootloader_state.rs @@ -5,7 +5,7 @@ use crate::vm_m5::vm_with_bootloader::TX_DESCRIPTION_OFFSET; /// Required to process transactions one by one (since we intercept the VM execution to execute /// transactions and add new ones to the memory on the fly). /// Think about it like a two-pointer scheme: one pointer (`free_tx_index`) tracks the end of the -/// initialized memory; while another (`tx_to_execute`) tracks our progess in this initialized memory. +/// initialized memory; while another (`tx_to_execute`) tracks our progress in this initialized memory. /// This is required since it's possible to push several transactions to the bootloader memory and then /// execute it one by one. 
/// diff --git a/core/lib/multivm/src/versions/vm_m5/errors/tx_revert_reason.rs b/core/lib/multivm/src/versions/vm_m5/errors/tx_revert_reason.rs index 9259dd87a37..439524108a9 100644 --- a/core/lib/multivm/src/versions/vm_m5/errors/tx_revert_reason.rs +++ b/core/lib/multivm/src/versions/vm_m5/errors/tx_revert_reason.rs @@ -7,11 +7,11 @@ use super::{BootloaderErrorCode, VmRevertReason}; // Reasons why the transaction executed inside the bootloader could fail. #[derive(Debug, Clone, PartialEq)] pub enum TxRevertReason { - // Can only be returned in EthCall execution mode (=ExecuteOnly) + // Can only be returned in EthCall execution mode `(=ExecuteOnly)` EthCall(VmRevertReason), // Returned when the execution of an L2 transaction has failed TxReverted(VmRevertReason), - // Can only be returned in VerifyAndExecute + // Can only be returned in `VerifyAndExecute` ValidationFailed(VmRevertReason), PaymasterValidationFailed(VmRevertReason), PrePaymasterPreparationFailed(VmRevertReason), @@ -20,7 +20,7 @@ pub enum TxRevertReason { FailedToChargeFee(VmRevertReason), // Emitted when trying to call a transaction from an account that has not // been deployed as an account (i.e. the `from` is just a contract). - // Can only be returned in VerifyAndExecute + // Can only be returned in `VerifyAndExecute` FromIsNotAnAccount, // Currently cannot be returned. Should be removed when refactoring errors. InnerTxError, @@ -98,7 +98,7 @@ impl TxRevertReason { BootloaderErrorCode::UnacceptablePubdataPrice => { Self::UnexpectedVMBehavior("UnacceptablePubdataPrice".to_owned()) } - // This is different from AccountTxValidationFailed error in a way that it means that + // This is different from `AccountTxValidationFailed` error in a way that it means that // the error was not produced by the account itself, but for some other unknown reason (most likely not enough gas) BootloaderErrorCode::TxValidationError => Self::ValidationFailed(revert_reason), // Note, that `InnerTxError` is derived only after the actual tx execution, so diff --git a/core/lib/multivm/src/versions/vm_m5/errors/vm_revert_reason.rs b/core/lib/multivm/src/versions/vm_m5/errors/vm_revert_reason.rs index 1997336c3a4..7cfa8708fc3 100644 --- a/core/lib/multivm/src/versions/vm_m5/errors/vm_revert_reason.rs +++ b/core/lib/multivm/src/versions/vm_m5/errors/vm_revert_reason.rs @@ -1,5 +1,7 @@ -use std::convert::TryFrom; -use std::fmt::{Debug, Display}; +use std::{ + convert::TryFrom, + fmt::{Debug, Display}, +}; use zksync_types::U256; @@ -15,7 +17,7 @@ pub enum VmRevertReasonParsingError { IncorrectStringLength(Vec), } -/// Rich Revert Reasons https://github.com/0xProject/ZEIPs/issues/32 +/// Rich Revert Reasons `https://github.com/0xProject/ZEIPs/issues/32` #[derive(Debug, Clone, PartialEq)] pub enum VmRevertReason { General { diff --git a/core/lib/multivm/src/versions/vm_m5/event_sink.rs b/core/lib/multivm/src/versions/vm_m5/event_sink.rs index 80ceb8baeaa..0bb1ee498f6 100644 --- a/core/lib/multivm/src/versions/vm_m5/event_sink.rs +++ b/core/lib/multivm/src/versions/vm_m5/event_sink.rs @@ -1,5 +1,5 @@ -use crate::vm_m5::{oracles::OracleWithHistory, utils::collect_log_queries_after_timestamp}; use std::collections::HashMap; + use zk_evm_1_3_1::{ abstractions::EventSink, aux_structures::{LogQuery, Timestamp}, @@ -9,7 +9,10 @@ use zk_evm_1_3_1::{ }, }; -use crate::vm_m5::history_recorder::AppDataFrameManagerWithHistory; +use crate::vm_m5::{ + history_recorder::AppDataFrameManagerWithHistory, oracles::OracleWithHistory, + 
utils::collect_log_queries_after_timestamp, +}; #[derive(Debug, Default, Clone, PartialEq)] pub struct InMemoryEventSink { diff --git a/core/lib/multivm/src/versions/vm_m5/history_recorder.rs b/core/lib/multivm/src/versions/vm_m5/history_recorder.rs index 896b2261e9c..f744be32d0b 100644 --- a/core/lib/multivm/src/versions/vm_m5/history_recorder.rs +++ b/core/lib/multivm/src/versions/vm_m5/history_recorder.rs @@ -3,30 +3,29 @@ use std::{ hash::{BuildHasherDefault, Hash, Hasher}, }; -use crate::vm_m5::storage::{Storage, StoragePtr}; - use zk_evm_1_3_1::{ aux_structures::Timestamp, reference_impls::event_sink::ApplicationData, vm_state::PrimitiveValue, zkevm_opcode_defs::{self}, }; - use zksync_types::{StorageKey, U256}; use zksync_utils::{h256_to_u256, u256_to_h256}; +use crate::vm_m5::storage::{Storage, StoragePtr}; + pub type AppDataFrameManagerWithHistory = FrameManagerWithHistory>; pub type MemoryWithHistory = HistoryRecorder; pub type FrameManagerWithHistory = HistoryRecorder>; pub type IntFrameManagerWithHistory = FrameManagerWithHistory>; -// Within the same cycle, timestamps in range timestamp..timestamp+TIME_DELTA_PER_CYCLE-1 -// can be used. This can sometimes vioalate monotonicity of the timestamp within the +// Within the same cycle, timestamps in range `timestamp..timestamp+TIME_DELTA_PER_CYCLE-1` +// can be used. This can sometimes violate monotonicity of the timestamp within the // same cycle, so it should be normalized. fn normalize_timestamp(timestamp: Timestamp) -> Timestamp { let timestamp = timestamp.0; - // Making sure it is divisible by TIME_DELTA_PER_CYCLE + // Making sure it is divisible by `TIME_DELTA_PER_CYCLE` Timestamp(timestamp - timestamp % zkevm_opcode_defs::TIME_DELTA_PER_CYCLE) } diff --git a/core/lib/multivm/src/versions/vm_m5/memory.rs b/core/lib/multivm/src/versions/vm_m5/memory.rs index 2c0b317a798..34c083b21f7 100644 --- a/core/lib/multivm/src/versions/vm_m5/memory.rs +++ b/core/lib/multivm/src/versions/vm_m5/memory.rs @@ -1,12 +1,16 @@ -use zk_evm_1_3_1::abstractions::{Memory, MemoryType, MEMORY_CELLS_OTHER_PAGES}; -use zk_evm_1_3_1::aux_structures::{MemoryPage, MemoryQuery, Timestamp}; -use zk_evm_1_3_1::vm_state::PrimitiveValue; -use zk_evm_1_3_1::zkevm_opcode_defs::FatPointer; +use zk_evm_1_3_1::{ + abstractions::{Memory, MemoryType, MEMORY_CELLS_OTHER_PAGES}, + aux_structures::{MemoryPage, MemoryQuery, Timestamp}, + vm_state::PrimitiveValue, + zkevm_opcode_defs::FatPointer, +}; use zksync_types::U256; -use crate::vm_m5::history_recorder::{IntFrameManagerWithHistory, MemoryWithHistory}; -use crate::vm_m5::oracles::OracleWithHistory; -use crate::vm_m5::utils::{aux_heap_page_from_base, heap_page_from_base, stack_page_from_base}; +use crate::vm_m5::{ + history_recorder::{IntFrameManagerWithHistory, MemoryWithHistory}, + oracles::OracleWithHistory, + utils::{aux_heap_page_from_base, heap_page_from_base, stack_page_from_base}, +}; #[derive(Debug, Default, Clone, PartialEq)] pub struct SimpleMemory { @@ -30,7 +34,7 @@ impl OracleWithHistory for SimpleMemory { impl SimpleMemory { pub fn populate(&mut self, elements: Vec<(u32, Vec)>, timestamp: Timestamp) { for (page, values) in elements.into_iter() { - // Resizing the pages array to fit the page. + // Re-sizing the pages array to fit the page. 
let len = values.len(); assert!(len <= MEMORY_CELLS_OTHER_PAGES); @@ -257,7 +261,7 @@ impl Memory for SimpleMemory { let returndata_page = returndata_fat_pointer.memory_page; for page in current_observable_pages { - // If the page's number is greater than or equal to the base_page, + // If the page's number is greater than or equal to the `base_page`, // it means that it was created by the internal calls of this contract. // We need to add this check as the calldata pointer is also part of the // observable pages. @@ -272,7 +276,7 @@ impl Memory for SimpleMemory { } } -// It is expected that there is some intersection between [word_number*32..word_number*32+31] and [start, end] +// It is expected that there is some intersection between `[word_number*32..word_number*32+31]` and `[start, end]` fn extract_needed_bytes_from_word( word_value: Vec, word_number: usize, @@ -280,7 +284,7 @@ fn extract_needed_bytes_from_word( end: usize, ) -> Vec { let word_start = word_number * 32; - let word_end = word_start + 31; // Note, that at word_start + 32 a new word already starts + let word_end = word_start + 31; // Note, that at `word_start + 32` a new word already starts let intersection_left = std::cmp::max(word_start, start); let intersection_right = std::cmp::min(word_end, end); diff --git a/core/lib/multivm/src/versions/vm_m5/mod.rs b/core/lib/multivm/src/versions/vm_m5/mod.rs index d8231ea502d..fc549761e03 100644 --- a/core/lib/multivm/src/versions/vm_m5/mod.rs +++ b/core/lib/multivm/src/versions/vm_m5/mod.rs @@ -1,5 +1,16 @@ #![allow(clippy::derive_partial_eq_without_eq)] +pub use zk_evm_1_3_1; +pub use zksync_types::vm_trace::VmExecutionTrace; + +pub use self::{ + errors::TxRevertReason, + oracle_tools::OracleTools, + oracles::storage::StorageOracle, + vm::Vm, + vm_instance::{VmBlockResult, VmExecutionResult}, +}; + mod bootloader_state; pub mod errors; pub mod event_sink; @@ -12,24 +23,14 @@ mod pubdata_utils; mod refunds; pub mod storage; pub mod test_utils; +#[cfg(test)] +mod tests; pub mod transaction_data; pub mod utils; +mod vm; pub mod vm_instance; pub mod vm_with_bootloader; -#[cfg(test)] -mod tests; -mod vm; - -pub use errors::TxRevertReason; -pub use oracle_tools::OracleTools; -pub use oracles::storage::StorageOracle; -pub use vm::Vm; -pub use vm_instance::VmBlockResult; -pub use vm_instance::VmExecutionResult; -pub use zk_evm_1_3_1; -pub use zksync_types::vm_trace::VmExecutionTrace; - pub type Word = zksync_types::U256; pub const MEMORY_SIZE: usize = 1 << 16; diff --git a/core/lib/multivm/src/versions/vm_m5/oracle_tools.rs b/core/lib/multivm/src/versions/vm_m5/oracle_tools.rs index 4858a23adb6..32930f31cd7 100644 --- a/core/lib/multivm/src/versions/vm_m5/oracle_tools.rs +++ b/core/lib/multivm/src/versions/vm_m5/oracle_tools.rs @@ -1,15 +1,18 @@ -use crate::vm_m5::memory::SimpleMemory; -use crate::vm_m5::vm_instance::MultiVMSubversion; - use std::fmt::Debug; -use crate::vm_m5::event_sink::InMemoryEventSink; -use crate::vm_m5::oracles::decommitter::DecommitterOracle; -use crate::vm_m5::oracles::precompile::PrecompilesProcessorWithHistory; -use crate::vm_m5::oracles::storage::StorageOracle; -use crate::vm_m5::storage::{Storage, StoragePtr}; use zk_evm_1_3_1::witness_trace::DummyTracer; +use crate::vm_m5::{ + event_sink::InMemoryEventSink, + memory::SimpleMemory, + oracles::{ + decommitter::DecommitterOracle, precompile::PrecompilesProcessorWithHistory, + storage::StorageOracle, + }, + storage::{Storage, StoragePtr}, + vm_instance::MultiVMSubversion, +}; + #[derive(Debug)] pub struct 
OracleTools { pub storage: StorageOracle, diff --git a/core/lib/multivm/src/versions/vm_m5/oracles/decommitter.rs b/core/lib/multivm/src/versions/vm_m5/oracles/decommitter.rs index 24a18f998df..bc43c72966e 100644 --- a/core/lib/multivm/src/versions/vm_m5/oracles/decommitter.rs +++ b/core/lib/multivm/src/versions/vm_m5/oracles/decommitter.rs @@ -1,20 +1,19 @@ use std::collections::HashMap; -use crate::vm_m5::history_recorder::HistoryRecorder; -use crate::vm_m5::storage::{Storage, StoragePtr}; - -use zk_evm_1_3_1::abstractions::MemoryType; -use zk_evm_1_3_1::aux_structures::Timestamp; use zk_evm_1_3_1::{ - abstractions::{DecommittmentProcessor, Memory}, - aux_structures::{DecommittmentQuery, MemoryIndex, MemoryLocation, MemoryPage, MemoryQuery}, + abstractions::{DecommittmentProcessor, Memory, MemoryType}, + aux_structures::{ + DecommittmentQuery, MemoryIndex, MemoryLocation, MemoryPage, MemoryQuery, Timestamp, + }, }; - use zksync_types::U256; -use zksync_utils::bytecode::bytecode_len_in_words; -use zksync_utils::{bytes_to_be_words, u256_to_h256}; +use zksync_utils::{bytecode::bytecode_len_in_words, bytes_to_be_words, u256_to_h256}; use super::OracleWithHistory; +use crate::vm_m5::{ + history_recorder::HistoryRecorder, + storage::{Storage, StoragePtr}, +}; #[derive(Debug)] pub struct DecommitterOracle { diff --git a/core/lib/multivm/src/versions/vm_m5/oracles/mod.rs b/core/lib/multivm/src/versions/vm_m5/oracles/mod.rs index 31686fa70f6..c43c9987de5 100644 --- a/core/lib/multivm/src/versions/vm_m5/oracles/mod.rs +++ b/core/lib/multivm/src/versions/vm_m5/oracles/mod.rs @@ -1,11 +1,10 @@ use zk_evm_1_3_1::aux_structures::Timestamp; -// We will discard RAM as soon as the execution of a tx ends, so -// it is ok for now to use SimpleMemory -pub use zk_evm_1_3_1::reference_impls::memory::SimpleMemory as RamOracle; // All the changes to the events in the DB will be applied after the tx is executed, -// so fow now it is fine. +// so for now it is fine. pub use zk_evm_1_3_1::reference_impls::event_sink::InMemoryEventSink as EventSinkOracle; - +// We will discard RAM as soon as the execution of a tx ends, so +// it is ok for now to use `SimpleMemory` +pub use zk_evm_1_3_1::reference_impls::memory::SimpleMemory as RamOracle; pub use zk_evm_1_3_1::testing::simple_tracer::NoopTracer; pub mod decommitter; diff --git a/core/lib/multivm/src/versions/vm_m5/oracles/precompile.rs b/core/lib/multivm/src/versions/vm_m5/oracles/precompile.rs index 137a1046d48..41a00b2e8a5 100644 --- a/core/lib/multivm/src/versions/vm_m5/oracles/precompile.rs +++ b/core/lib/multivm/src/versions/vm_m5/oracles/precompile.rs @@ -1,14 +1,11 @@ use zk_evm_1_3_1::{ - abstractions::Memory, - abstractions::PrecompileCyclesWitness, - abstractions::PrecompilesProcessor, + abstractions::{Memory, PrecompileCyclesWitness, PrecompilesProcessor}, aux_structures::{LogQuery, MemoryQuery, Timestamp}, precompiles::DefaultPrecompilesProcessor, }; -use crate::vm_m5::history_recorder::HistoryRecorder; - use super::OracleWithHistory; +use crate::vm_m5::history_recorder::HistoryRecorder; /// Wrap of DefaultPrecompilesProcessor that store queue /// of timestamp when precompiles are called to be executed. 
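Earlier in this patch, the `vm_latest/vm.rs` hunk changes `inspect_transaction_with_bytecode_compression` so that the execution result is returned alongside the compression status instead of being dropped when compression fails. Below is a self-contained sketch of the before/after shape; `VmExecutionResultAndLogs` and `BytecodeCompressionError` are simplified stand-ins here, not the real `multivm` definitions.

```rust
#[derive(Debug)]
struct VmExecutionResultAndLogs {
    success: bool, // stand-in field
}

#[derive(Debug)]
enum BytecodeCompressionError {
    BytecodeCompressionFailed,
}

// Old shape: a compression failure returned only the error, losing the logs.
fn old_shape(failed: bool) -> Result<VmExecutionResultAndLogs, BytecodeCompressionError> {
    if failed {
        Err(BytecodeCompressionError::BytecodeCompressionFailed)
    } else {
        Ok(VmExecutionResultAndLogs { success: true })
    }
}

// New shape: compression status and execution result travel together.
fn new_shape(
    failed: bool,
) -> (
    Result<(), BytecodeCompressionError>,
    VmExecutionResultAndLogs,
) {
    let result = VmExecutionResultAndLogs { success: true }; // pretend execution
    if failed {
        (
            Err(BytecodeCompressionError::BytecodeCompressionFailed),
            result,
        )
    } else {
        (Ok(()), result)
    }
}

fn main() {
    assert!(old_shape(true).is_err()); // the execution result is gone
    let (compression, result) = new_shape(true);
    assert!(compression.is_err());
    assert!(result.success); // ...but the result is still available, e.g. to
                             // re-run the transaction without compression.
}
```

The same hunk also adds a `finish_batch` implementation that executes `VmExecutionMode::Batch`, captures the final execution state and bootloader memory, and attaches the built pubdata as `pubdata_input`.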
diff --git a/core/lib/multivm/src/versions/vm_m5/oracles/storage.rs b/core/lib/multivm/src/versions/vm_m5/oracles/storage.rs index ca2c3ab7514..b38da4051f3 100644 --- a/core/lib/multivm/src/versions/vm_m5/oracles/storage.rs +++ b/core/lib/multivm/src/versions/vm_m5/oracles/storage.rs @@ -1,29 +1,28 @@ use std::collections::HashMap; -use crate::vm_m5::storage::{Storage, StoragePtr}; - -use crate::vm_m5::history_recorder::{ - AppDataFrameManagerWithHistory, HashMapHistoryEvent, HistoryRecorder, StorageWrapper, -}; -use crate::vm_m5::vm_instance::MultiVMSubversion; - -use zk_evm_1_3_1::abstractions::RefundedAmounts; -use zk_evm_1_3_1::zkevm_opcode_defs::system_params::INITIAL_STORAGE_WRITE_PUBDATA_BYTES; use zk_evm_1_3_1::{ - abstractions::{RefundType, Storage as VmStorageOracle}, + abstractions::{RefundType, RefundedAmounts, Storage as VmStorageOracle}, aux_structures::{LogQuery, Timestamp}, reference_impls::event_sink::ApplicationData, + zkevm_opcode_defs::system_params::INITIAL_STORAGE_WRITE_PUBDATA_BYTES, }; - -use crate::glue::GlueInto; -use zksync_types::utils::storage_key_for_eth_balance; use zksync_types::{ - AccountTreeId, Address, StorageKey, StorageLogQuery, StorageLogQueryType, BOOTLOADER_ADDRESS, - U256, + utils::storage_key_for_eth_balance, AccountTreeId, Address, StorageKey, StorageLogQuery, + StorageLogQueryType, BOOTLOADER_ADDRESS, U256, }; use zksync_utils::u256_to_h256; use super::OracleWithHistory; +use crate::{ + glue::GlueInto, + vm_m5::{ + history_recorder::{ + AppDataFrameManagerWithHistory, HashMapHistoryEvent, HistoryRecorder, StorageWrapper, + }, + storage::{Storage, StoragePtr}, + vm_instance::MultiVMSubversion, + }, +}; // While the storage does not support different shards, it was decided to write the // code of the StorageOracle with the shard parameters in mind. @@ -184,6 +183,7 @@ impl VmStorageOracle for StorageOracle { _monotonic_cycle_counter: u32, query: LogQuery, ) -> LogQuery { + // ``` // tracing::trace!( // "execute partial query cyc {:?} addr {:?} key {:?}, rw {:?}, wr {:?}, tx {:?}", // _monotonic_cycle_counter, @@ -193,6 +193,7 @@ impl VmStorageOracle for StorageOracle { // query.written_value, // query.tx_number_in_block // ); + // ``` assert!(!query.rollback); if query.rw_flag { // The number of bytes that have been compensated by the user to perform this write @@ -275,7 +276,7 @@ impl VmStorageOracle for StorageOracle { ); // Additional validation that the current value was correct - // Unwrap is safe because the return value from write_inner is the previous value in this leaf. + // Unwrap is safe because the return value from `write_inner` is the previous value in this leaf. 
// It is impossible to set leaf value to `None` assert_eq!(current_value, written_value); } diff --git a/core/lib/multivm/src/versions/vm_m5/oracles/tracer.rs b/core/lib/multivm/src/versions/vm_m5/oracles/tracer.rs index d8a70bdaf64..481e8a7e02e 100644 --- a/core/lib/multivm/src/versions/vm_m5/oracles/tracer.rs +++ b/core/lib/multivm/src/versions/vm_m5/oracles/tracer.rs @@ -1,19 +1,8 @@ -use std::fmt::Debug; use std::{ collections::HashSet, - fmt::{self, Display}, + fmt::{self, Debug, Display}, }; -use crate::vm_m5::{ - errors::VmRevertReasonParsingResult, - memory::SimpleMemory, - storage::StoragePtr, - utils::{aux_heap_page_from_base, heap_page_from_base}, - vm_instance::{get_vm_hook_params, VM_HOOK_POSITION}, - vm_with_bootloader::BOOTLOADER_HEAP_PAGE, -}; -// use zk_evm_1_3_1::testing::memory::SimpleMemory; -use crate::vm_m5::storage::Storage; use zk_evm_1_3_1::{ abstractions::{ AfterDecodingData, AfterExecutionData, BeforeExecutionData, Tracer, VmLocalStateData, @@ -26,7 +15,6 @@ use zk_evm_1_3_1::{ LogOpcode, Opcode, RetOpcode, UMAOpcode, }, }; - use zksync_types::{ get_code_key, web3::signing::keccak256, AccountTreeId, Address, StorageKey, ACCOUNT_CODE_STORAGE_ADDRESS, BOOTLOADER_ADDRESS, CONTRACT_DEPLOYER_ADDRESS, H256, @@ -37,6 +25,15 @@ use zksync_utils::{ be_bytes_to_safe_address, h256_to_account_address, u256_to_account_address, u256_to_h256, }; +use crate::vm_m5::{ + errors::VmRevertReasonParsingResult, + memory::SimpleMemory, + storage::{Storage, StoragePtr}, + utils::{aux_heap_page_from_base, heap_page_from_base}, + vm_instance::{get_vm_hook_params, VM_HOOK_POSITION}, + vm_with_bootloader::BOOTLOADER_HEAP_PAGE, +}; + pub trait ExecutionEndTracer: Tracer { // Returns whether the vm execution should stop. fn should_stop_execution(&self) -> bool; @@ -181,7 +178,7 @@ fn touches_allowed_context(address: Address, key: U256) -> bool { return false; } - // Only chain_id is allowed to be touched. + // Only `chain_id` is allowed to be touched. key == U256::from(0u32) } @@ -306,7 +303,7 @@ impl ValidationTracer { return true; } - // The pair of MSG_VALUE_SIMULATOR_ADDRESS & L2_ETH_TOKEN_ADDRESS simulates the behavior of transfering ETH + // The pair of `MSG_VALUE_SIMULATOR_ADDRESS` & `L2_ETH_TOKEN_ADDRESS` simulates the behavior of transferring ETH // that is safe for the DDoS protection rules. if valid_eth_token_call(address, msg_sender) { return true; @@ -350,20 +347,20 @@ impl ValidationTracer { let (potential_address_bytes, potential_position_bytes) = calldata.split_at(32); let potential_address = be_bytes_to_safe_address(potential_address_bytes); - // If the validation_address is equal to the potential_address, - // then it is a request that could be used for mapping of kind mapping(address => ...). + // If the `validation_address` is equal to the `potential_address`, + // then it is a request that could be used for mapping of kind `mapping(address => ...)`. // - // If the potential_position_bytes were already allowed before, then this keccak might be used - // for ERC-20 allowance or any other of mapping(address => mapping(...)) + // If the `potential_position_bytes` were already allowed before, then this keccak might be used + // for ERC-20 allowance or any other of `mapping(address => mapping(...))` if potential_address == Some(validated_address) || self .auxilary_allowed_slots .contains(&H256::from_slice(potential_position_bytes)) { - // This is request that could be used for mapping of kind mapping(address => ...) 
+ // This is request that could be used for mapping of kind `mapping(address => ...)` // We could theoretically wait for the slot number to be returned by the - // keccak256 precompile itself, but this would complicate the code even further + // `keccak256` precompile itself, but this would complicate the code even further // so let's calculate it here. let slot = keccak256(calldata); @@ -651,7 +648,7 @@ impl OneTxTracer { } } -/// Tells the VM to end the execution before `ret` from the booloader if there is no panic or revert. +/// Tells the VM to end the execution before `ret` from the bootloader if there is no panic or revert. /// Also, saves the information if this `ret` was caused by "out of gas" panic. #[derive(Debug, Clone, Default)] pub struct BootloaderTracer { @@ -722,7 +719,7 @@ impl PubdataSpentTracer for BootloaderTracer {} impl BootloaderTracer { fn current_frame_is_bootloader(local_state: &VmLocalState) -> bool { - // The current frame is bootloader if the callstack depth is 1. + // The current frame is bootloader if the call stack depth is 1. // Some of the near calls inside the bootloader can be out of gas, which is totally normal behavior // and it shouldn't result in `is_bootloader_out_of_gas` becoming true. local_state.callstack.inner.len() == 1 @@ -765,7 +762,7 @@ impl VmHook { let value = data.src1_value.value; - // Only UMA opcodes in the bootloader serve for vm hooks + // Only `UMA` opcodes in the bootloader serve for vm hooks if !matches!(opcode_variant.opcode, Opcode::UMA(UMAOpcode::HeapWrite)) || heap_page != BOOTLOADER_HEAP_PAGE || fat_ptr.offset != VM_HOOK_POSITION * 32 @@ -801,7 +798,7 @@ fn get_debug_log(state: &VmLocalStateData<'_>, memory: &SimpleMemory) -> String let msg = String::from_utf8(msg).expect("Invalid debug message"); let data = U256::from_big_endian(&data); - // For long data, it is better to use hex-encoding for greater readibility + // For long data, it is better to use hex-encoding for greater readability let data_str = if data > U256::from(u64::max_value()) { let mut bytes = [0u8; 32]; data.to_big_endian(&mut bytes); @@ -816,7 +813,7 @@ fn get_debug_log(state: &VmLocalStateData<'_>, memory: &SimpleMemory) -> String } /// Reads the memory slice represented by the fat pointer. -/// Note, that the fat pointer must point to the accesible memory (i.e. not cleared up yet). +/// Note, that the fat pointer must point to the accessible memory (i.e. not cleared up yet). 
pub(crate) fn read_pointer(memory: &SimpleMemory, pointer: FatPointer) -> Vec { let FatPointer { offset, diff --git a/core/lib/multivm/src/versions/vm_m5/pubdata_utils.rs b/core/lib/multivm/src/versions/vm_m5/pubdata_utils.rs index 80c1cd2c0e4..63e45edcbb8 100644 --- a/core/lib/multivm/src/versions/vm_m5/pubdata_utils.rs +++ b/core/lib/multivm/src/versions/vm_m5/pubdata_utils.rs @@ -1,16 +1,21 @@ -use crate::vm_m5::oracles::storage::storage_key_of_log; -use crate::vm_m5::storage::Storage; -use crate::vm_m5::utils::collect_storage_log_queries_after_timestamp; -use crate::vm_m5::vm_instance::VmInstance; use std::collections::HashMap; -use zk_evm_1_3_1::aux_structures::Timestamp; -use crate::glue::GlueInto; -use zksync_types::event::{extract_long_l2_to_l1_messages, extract_published_bytecodes}; -use zksync_types::zkevm_test_harness::witness::sort_storage_access::sort_storage_access_queries; -use zksync_types::{StorageKey, PUBLISH_BYTECODE_OVERHEAD, SYSTEM_CONTEXT_ADDRESS}; +use zk_evm_1_3_1::aux_structures::Timestamp; +use zksync_types::{ + event::{extract_long_l2_to_l1_messages, extract_published_bytecodes}, + zkevm_test_harness::witness::sort_storage_access::sort_storage_access_queries, + StorageKey, PUBLISH_BYTECODE_OVERHEAD, SYSTEM_CONTEXT_ADDRESS, +}; use zksync_utils::bytecode::bytecode_len_in_bytes; +use crate::{ + glue::GlueInto, + vm_m5::{ + oracles::storage::storage_key_of_log, storage::Storage, + utils::collect_storage_log_queries_after_timestamp, vm_instance::VmInstance, + }, +}; + impl VmInstance { pub fn pubdata_published(&self, from_timestamp: Timestamp) -> u32 { let storage_writes_pubdata_published = self.pubdata_published_for_writes(from_timestamp); diff --git a/core/lib/multivm/src/versions/vm_m5/refunds.rs b/core/lib/multivm/src/versions/vm_m5/refunds.rs index 8f1b2b44f4d..fd4e2788f03 100644 --- a/core/lib/multivm/src/versions/vm_m5/refunds.rs +++ b/core/lib/multivm/src/versions/vm_m5/refunds.rs @@ -1,13 +1,13 @@ -use crate::vm_m5::storage::Storage; -use crate::vm_m5::vm_instance::VmInstance; -use crate::vm_m5::vm_with_bootloader::{ - eth_price_per_pubdata_byte, BOOTLOADER_HEAP_PAGE, TX_GAS_LIMIT_OFFSET, -}; use zk_evm_1_3_1::aux_structures::Timestamp; - use zksync_types::U256; use zksync_utils::ceil_div_u256; +use crate::vm_m5::{ + storage::Storage, + vm_instance::VmInstance, + vm_with_bootloader::{eth_price_per_pubdata_byte, BOOTLOADER_HEAP_PAGE, TX_GAS_LIMIT_OFFSET}, +}; + impl VmInstance { pub(crate) fn tx_body_refund( &self, @@ -75,7 +75,7 @@ impl VmInstance { ) -> u32 { // TODO (SMA-1715): Make users pay for the block overhead 0 - + // ``` // let pubdata_published = self.pubdata_published(from_timestamp); // // let total_gas_spent = gas_remaining_before - self.gas_remaining(); @@ -120,6 +120,7 @@ impl VmInstance { // ); // 0 // } + // ``` } // TODO (SMA-1715): Make users pay for the block overhead @@ -133,39 +134,39 @@ impl VmInstance { _l2_l1_logs: usize, ) -> u32 { 0 - + // ``` // let overhead_for_block_gas = U256::from(crate::transaction_data::block_overhead_gas( // gas_per_pubdata_byte_limit, // )); - + // // let encoded_len = U256::from(encoded_len); // let pubdata_published = U256::from(pubdata_published); // let gas_spent_on_computation = U256::from(gas_spent_on_computation); // let number_of_decommitment_requests = U256::from(number_of_decommitment_requests); // let l2_l1_logs = U256::from(l2_l1_logs); - + // // let tx_slot_overhead = ceil_div_u256(overhead_for_block_gas, MAX_TXS_IN_BLOCK.into()); - + // // let overhead_for_length = ceil_div_u256( // 
encoded_len * overhead_for_block_gas, // BOOTLOADER_TX_ENCODING_SPACE.into(), // ); - + // // let actual_overhead_for_pubdata = ceil_div_u256( // pubdata_published * overhead_for_block_gas, // MAX_PUBDATA_PER_BLOCK.into(), // ); - + // // let actual_gas_limit_overhead = ceil_div_u256( // gas_spent_on_computation * overhead_for_block_gas, // MAX_BLOCK_MULTIINSTANCE_GAS_LIMIT.into(), // ); - + // // let code_decommitter_sorter_circuit_overhead = ceil_div_u256( // number_of_decommitment_requests * overhead_for_block_gas, // GEOMETRY_CONFIG.limit_for_code_decommitter_sorter.into(), // ); - + // // let l1_l2_logs_overhead = ceil_div_u256( // l2_l1_logs * overhead_for_block_gas, // std::cmp::min( @@ -174,7 +175,7 @@ impl VmInstance { // ) // .into(), // ); - + // // let overhead = vec![ // tx_slot_overhead, // overhead_for_length, @@ -186,8 +187,9 @@ impl VmInstance { // .into_iter() // .max() // .unwrap(); - + // // overhead.as_u32() + // ``` } pub(crate) fn get_tx_gas_limit(&self, tx_index: usize) -> u32 { diff --git a/core/lib/multivm/src/versions/vm_m5/storage.rs b/core/lib/multivm/src/versions/vm_m5/storage.rs index d5f448812ca..deb3501b416 100644 --- a/core/lib/multivm/src/versions/vm_m5/storage.rs +++ b/core/lib/multivm/src/versions/vm_m5/storage.rs @@ -1,7 +1,4 @@ -use std::cell::RefCell; -use std::collections::HashMap; -use std::fmt::Debug; -use std::rc::Rc; +use std::{cell::RefCell, collections::HashMap, fmt::Debug, rc::Rc}; use zksync_state::{ReadStorage, WriteStorage}; use zksync_types::{StorageKey, StorageValue, H256}; diff --git a/core/lib/multivm/src/versions/vm_m5/test_utils.rs b/core/lib/multivm/src/versions/vm_m5/test_utils.rs index 590579be6d8..6920e77b8a8 100644 --- a/core/lib/multivm/src/versions/vm_m5/test_utils.rs +++ b/core/lib/multivm/src/versions/vm_m5/test_utils.rs @@ -12,8 +12,10 @@ use itertools::Itertools; use zk_evm_1_3_1::{ aux_structures::Timestamp, reference_impls::event_sink::ApplicationData, vm_state::VmLocalState, }; -use zksync_contracts::test_contracts::LoadnextContractExecutionParams; -use zksync_contracts::{deployer_contract, get_loadnext_contract, load_contract}; +use zksync_contracts::{ + deployer_contract, get_loadnext_contract, load_contract, + test_contracts::LoadnextContractExecutionParams, +}; use zksync_types::{ ethabi::{Address, Token}, fee::Fee, @@ -26,13 +28,12 @@ use zksync_utils::{ address_to_h256, bytecode::hash_bytecode, h256_to_account_address, u256_to_h256, }; -use crate::vm_m5::storage::Storage; -use crate::vm_m5::vm_instance::VmInstance; -/// The tests here help us with the testing the VM use crate::vm_m5::{ event_sink::InMemoryEventSink, history_recorder::{FrameManager, HistoryRecorder}, memory::SimpleMemory, + storage::Storage, + vm_instance::VmInstance, }; #[derive(Clone, Debug)] @@ -58,7 +59,7 @@ impl PartialEq for ModifiedKeysMap { #[derive(Clone, PartialEq, Debug)] pub struct DecommitterTestInnerState { - /// There is no way to "trully" compare the storage pointer, + /// There is no way to "truly" compare the storage pointer, /// so we just compare the modified keys. This is reasonable enough. pub modified_storage_keys: ModifiedKeysMap, pub known_bytecodes: HistoryRecorder>>, @@ -67,7 +68,7 @@ pub struct DecommitterTestInnerState { #[derive(Clone, PartialEq, Debug)] pub struct StorageOracleInnerState { - /// There is no way to "trully" compare the storage pointer, + /// There is no way to "truly" compare the storage pointer, /// so we just compare the modified keys. This is reasonable enough. 
pub modified_storage_keys: ModifiedKeysMap, diff --git a/core/lib/multivm/src/versions/vm_m5/tests/bootloader.rs b/core/lib/multivm/src/versions/vm_m5/tests/bootloader.rs index 1034e859593..d9e07c5068d 100644 --- a/core/lib/multivm/src/versions/vm_m5/tests/bootloader.rs +++ b/core/lib/multivm/src/versions/vm_m5/tests/bootloader.rs @@ -1,9 +1,10 @@ +// ``` // //! // //! Tests for the bootloader // //! The description for each of the tests can be found in the corresponding `.yul` file. // //! // #![cfg_attr(test, allow(unused_imports))] - +// // use crate::errors::{VmRevertReason, VmRevertReasonParsingResult}; // use crate::memory::SimpleMemory; // use crate::oracles::tracer::{ @@ -58,7 +59,7 @@ // u256_to_h256, // }; // use zksync_utils::{h256_to_account_address, u256_to_account_address}; - +// // use crate::{transaction_data::TransactionData, OracleTools}; // use std::time; // use zksync_contracts::{ @@ -88,10 +89,10 @@ // MAX_TXS_IN_BLOCK, SYSTEM_CONTEXT_ADDRESS, SYSTEM_CONTEXT_GAS_PRICE_POSITION, // SYSTEM_CONTEXT_MINIMAL_BASE_FEE, SYSTEM_CONTEXT_TX_ORIGIN_POSITION, // }; - +// // use once_cell::sync::Lazy; // use zksync_system_constants::ZKPORTER_IS_AVAILABLE; - +// // fn run_vm_with_custom_factory_deps<'a>( // oracle_tools: &'a mut OracleTools<'a, false>, // block_context: BlockContext, @@ -110,7 +111,7 @@ // &base_system_contracts, // TxExecutionMode::VerifyExecute, // ); - +// // vm.bootloader_state.add_tx_data(encoded_tx.len()); // vm.state.memory.populate_page( // BOOTLOADER_HEAP_PAGE as usize, @@ -124,17 +125,17 @@ // ), // Timestamp(0), // ); - +// // let result = vm.execute_next_tx().err(); - +// // assert_eq!(expected_error, result); // } - +// // fn get_balance(token_id: AccountTreeId, account: &Address, main_storage: StoragePtr<'_>) -> U256 { // let key = storage_key_for_standard_token_balance(token_id, account); // h256_to_u256(main_storage.borrow_mut().get_value(&key)) // } - +// // #[test] // fn test_dummy_bootloader() { // let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); @@ -143,18 +144,18 @@ // insert_system_contracts(&mut raw_storage); // let mut storage_accessor = StorageView::new(&raw_storage); // let storage_ptr: &mut dyn Storage = &mut storage_accessor; - +// // let mut oracle_tools = OracleTools::new(storage_ptr); // let (block_context, block_properties) = create_test_block_params(); // let mut base_system_contracts = BASE_SYSTEM_CONTRACTS.clone(); // let bootloader_code = read_bootloader_test_code("dummy"); // let bootloader_hash = hash_bytecode(&bootloader_code); - +// // base_system_contracts.bootloader = SystemContractCode { // code: bytes_to_be_words(bootloader_code), // hash: bootloader_hash, // }; - +// // let mut vm = init_vm_inner( // &mut oracle_tools, // BlockContextMode::NewBlock(block_context.into(), Default::default()), @@ -163,22 +164,22 @@ // &base_system_contracts, // TxExecutionMode::VerifyExecute, // ); - +// // let VmBlockResult { // full_result: res, .. 
// } = vm.execute_till_block_end(BootloaderJobType::BlockPostprocessing); - +// // // Dummy bootloader should not panic // assert!(res.revert_reason.is_none()); - +// // let correct_first_cell = U256::from_str_radix("123123123", 16).unwrap(); - +// // verify_required_memory( // &vm.state, // vec![(correct_first_cell, BOOTLOADER_HEAP_PAGE, 0)], // ); // } - +// // #[test] // fn test_bootloader_out_of_gas() { // let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); @@ -187,20 +188,20 @@ // insert_system_contracts(&mut raw_storage); // let mut storage_accessor = StorageView::new(&raw_storage); // let storage_ptr: &mut dyn Storage = &mut storage_accessor; - +// // let mut oracle_tools = OracleTools::new(storage_ptr); // let (block_context, block_properties) = create_test_block_params(); - +// // let mut base_system_contracts = BASE_SYSTEM_CONTRACTS.clone(); - +// // let bootloader_code = read_bootloader_test_code("dummy"); // let bootloader_hash = hash_bytecode(&bootloader_code); - +// // base_system_contracts.bootloader = SystemContractCode { // code: bytes_to_be_words(bootloader_code), // hash: bootloader_hash, // }; - +// // // init vm with only 100 ergs // let mut vm = init_vm_inner( // &mut oracle_tools, @@ -210,16 +211,16 @@ // &base_system_contracts, // TxExecutionMode::VerifyExecute, // ); - +// // let res = vm.execute_block_tip(); - +// // assert_eq!(res.revert_reason, Some(TxRevertReason::BootloaderOutOfGas)); // } - +// // fn verify_required_storage(state: &ZkSyncVmState<'_>, required_values: Vec<(H256, StorageKey)>) { // for (required_value, key) in required_values { // let current_value = state.storage.storage.read_from_storage(&key); - +// // assert_eq!( // u256_to_h256(current_value), // required_value, @@ -227,7 +228,7 @@ // ); // } // } - +// // fn verify_required_memory(state: &ZkSyncVmState<'_>, required_values: Vec<(U256, u32, u32)>) { // for (required_value, memory_page, cell) in required_values { // let current_value = state @@ -236,21 +237,21 @@ // assert_eq!(current_value, required_value); // } // } - +// // #[test] // fn test_default_aa_interaction() { // // In this test, we aim to test whether a simple account interaction (without any fee logic) // // will work. The account will try to deploy a simple contract from integration tests. 
- +// // let (block_context, block_properties) = create_test_block_params(); // let block_context: DerivedBlockContext = block_context.into(); - +// // let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); // let db = RocksDB::new(Database::StateKeeper, temp_dir.as_ref(), false); // let mut raw_storage = SecondaryStateStorage::new(db); // insert_system_contracts(&mut raw_storage); // let storage_ptr: &mut dyn Storage = &mut StorageView::new(&raw_storage); - +// // let operator_address = block_context.context.operator_address; // let base_fee = block_context.base_fee; // // We deploy here counter contract, because its logic is trivial @@ -271,16 +272,16 @@ // ) // .into(); // let tx_data: TransactionData = tx.clone().into(); - +// // let maximal_fee = tx_data.gas_limit * tx_data.max_fee_per_gas; // let sender_address = tx_data.from(); // // set balance - +// // let key = storage_key_for_eth_balance(&sender_address); // storage_ptr.set_value(&key, u256_to_h256(U256([0, 0, 1, 0]))); - +// // let mut oracle_tools = OracleTools::new(storage_ptr); - +// // let mut vm = init_vm_inner( // &mut oracle_tools, // BlockContextMode::NewBlock(block_context, Default::default()), @@ -290,17 +291,17 @@ // TxExecutionMode::VerifyExecute, // ); // push_transaction_to_bootloader_memory(&mut vm, &tx, TxExecutionMode::VerifyExecute); - +// // let tx_execution_result = vm // .execute_next_tx() // .expect("Bootloader failed while processing transaction"); - +// // assert_eq!( // tx_execution_result.status, // TxExecutionStatus::Success, // "Transaction wasn't successful" // ); - +// // let VmBlockResult { // full_result: res, .. // } = vm.execute_till_block_end(BootloaderJobType::TransactionExecution); @@ -310,28 +311,28 @@ // "Bootloader was not expected to revert: {:?}", // res.revert_reason // ); - +// // // Both deployment and ordinary nonce should be incremented by one. // let account_nonce_key = get_nonce_key(&sender_address); // let expected_nonce = TX_NONCE_INCREMENT + DEPLOYMENT_NONCE_INCREMENT; - +// // // The code hash of the deployed contract should be marked as republished. // let known_codes_key = get_known_code_key(&contract_code_hash); - +// // // The contract should be deployed successfully. 
// let deployed_address = deployed_address_create(sender_address, U256::zero()); // let account_code_key = get_code_key(&deployed_address); - +// // let expected_slots = vec![ // (u256_to_h256(expected_nonce), account_nonce_key), // (u256_to_h256(U256::from(1u32)), known_codes_key), // (contract_code_hash, account_code_key), // ]; - +// // verify_required_storage(&vm.state, expected_slots); - +// // assert!(!tx_has_failed(&vm.state, 0)); - +// // let expected_fee = // maximal_fee - U256::from(tx_execution_result.gas_refunded) * U256::from(base_fee); // let operator_balance = get_balance( @@ -339,32 +340,32 @@ // &operator_address, // vm.state.storage.storage.get_ptr(), // ); - +// // assert!( // operator_balance == expected_fee, // "Operator did not receive his fee" // ); // } - +// // fn execute_vm_with_predetermined_refund(txs: Vec, refunds: Vec) -> VmBlockResult { // let (block_context, block_properties) = create_test_block_params(); // let block_context: DerivedBlockContext = block_context.into(); - +// // let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); // let db = RocksDB::new(Database::StateKeeper, temp_dir.as_ref(), false); // let mut raw_storage = SecondaryStateStorage::new(db); // insert_system_contracts(&mut raw_storage); // let storage_ptr: &mut dyn Storage = &mut StorageView::new(&raw_storage); - +// // // set balance // for tx in txs.iter() { // let sender_address = tx.initiator_account(); // let key = storage_key_for_eth_balance(&sender_address); // storage_ptr.set_value(&key, u256_to_h256(U256([0, 0, 1, 0]))); // } - +// // let mut oracle_tools = OracleTools::new(storage_ptr); - +// // let mut vm = init_vm_inner( // &mut oracle_tools, // BlockContextMode::NewBlock(block_context, Default::default()), @@ -373,7 +374,7 @@ // &BASE_SYSTEM_CONTRACTS, // TxExecutionMode::VerifyExecute, // ); - +// // let codes_for_decommiter = txs // .iter() // .flat_map(|tx| { @@ -386,42 +387,42 @@ // .collect::)>>() // }) // .collect(); - +// // vm.state.decommittment_processor.populate( // codes_for_decommiter, // Timestamp(vm.state.local_state.timestamp), // ); - +// // let memory_with_suggested_refund = get_bootloader_memory( // txs.into_iter().map(Into::into).collect(), // refunds, // TxExecutionMode::VerifyExecute, // BlockContextMode::NewBlock(block_context, Default::default()), // ); - +// // vm.state.memory.populate_page( // BOOTLOADER_HEAP_PAGE as usize, // memory_with_suggested_refund, // Timestamp(0), // ); - +// // vm.execute_till_block_end(BootloaderJobType::TransactionExecution) // } - +// // #[test] // fn test_predetermined_refunded_gas() { // // In this test, we compare the execution of the bootloader with the predefined // // refunded gas and without them - +// // let (block_context, block_properties) = create_test_block_params(); // let block_context: DerivedBlockContext = block_context.into(); - +// // let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); // let db = RocksDB::new(Database::StateKeeper, temp_dir.as_ref(), false); // let mut raw_storage = SecondaryStateStorage::new(db); // insert_system_contracts(&mut raw_storage); // let storage_ptr: &mut dyn Storage = &mut StorageView::new(&raw_storage); - +// // let base_fee = block_context.base_fee; // // We deploy here counter contract, because its logic is trivial // let contract_code = read_test_contract(); @@ -439,15 +440,15 @@ // }, // ) // .into(); - +// // let sender_address = tx.initiator_account(); - +// // // set balance // let key = 
storage_key_for_eth_balance(&sender_address); // storage_ptr.set_value(&key, u256_to_h256(U256([0, 0, 1, 0]))); - +// // let mut oracle_tools = OracleTools::new(storage_ptr); - +// // let mut vm = init_vm_inner( // &mut oracle_tools, // BlockContextMode::NewBlock(block_context, Default::default()), @@ -456,19 +457,19 @@ // &BASE_SYSTEM_CONTRACTS, // TxExecutionMode::VerifyExecute, // ); - +// // push_transaction_to_bootloader_memory(&mut vm, &tx, TxExecutionMode::VerifyExecute); - +// // let tx_execution_result = vm // .execute_next_tx() // .expect("Bootloader failed while processing transaction"); - +// // assert_eq!( // tx_execution_result.status, // TxExecutionStatus::Success, // "Transaction wasn't successful" // ); - +// // // If the refund provided by the operator or the final refund are the 0 // // there is no impact of the operator's refund at all and so this test does not // // make much sense. @@ -480,14 +481,14 @@ // tx_execution_result.gas_refunded > 0, // "The final refund is 0" // ); - +// // let mut result = vm.execute_till_block_end(BootloaderJobType::TransactionExecution); // assert!( // result.full_result.revert_reason.is_none(), // "Bootloader was not expected to revert: {:?}", // result.full_result.revert_reason // ); - +// // let mut result_with_predetermined_refund = execute_vm_with_predetermined_refund( // vec![tx], // vec![tx_execution_result.operator_suggested_refund], @@ -498,7 +499,7 @@ // .full_result // .used_contract_hashes // .sort(); - +// // assert_eq!( // result.full_result.events, // result_with_predetermined_refund.full_result.events @@ -520,18 +521,18 @@ // .used_contract_hashes // ); // } - +// // #[derive(Debug, Clone)] // enum TransactionRollbackTestInfo { // Rejected(Transaction, TxRevertReason), // Processed(Transaction, bool, TxExecutionStatus), // } - +// // impl TransactionRollbackTestInfo { // fn new_rejected(transaction: Transaction, revert_reason: TxRevertReason) -> Self { // Self::Rejected(transaction, revert_reason) // } - +// // fn new_processed( // transaction: Transaction, // should_be_rollbacked: bool, @@ -539,28 +540,28 @@ // ) -> Self { // Self::Processed(transaction, should_be_rollbacked, expected_status) // } - +// // fn get_transaction(&self) -> &Transaction { // match self { // TransactionRollbackTestInfo::Rejected(tx, _) => tx, // TransactionRollbackTestInfo::Processed(tx, _, _) => tx, // } // } - +// // fn rejection_reason(&self) -> Option { // match self { // TransactionRollbackTestInfo::Rejected(_, revert_reason) => Some(revert_reason.clone()), // TransactionRollbackTestInfo::Processed(_, _, _) => None, // } // } - +// // fn should_rollback(&self) -> bool { // match self { // TransactionRollbackTestInfo::Rejected(_, _) => true, // TransactionRollbackTestInfo::Processed(_, x, _) => *x, // } // } - +// // fn expected_status(&self) -> TxExecutionStatus { // match self { // TransactionRollbackTestInfo::Rejected(_, _) => { @@ -570,7 +571,7 @@ // } // } // } - +// // // Accepts the address of the sender as well as the list of pairs of its transactions // // and whether these transactions should succeed. // fn execute_vm_with_possible_rollbacks( @@ -584,13 +585,13 @@ // let mut raw_storage = SecondaryStateStorage::new(db); // insert_system_contracts(&mut raw_storage); // let storage_ptr: &mut dyn Storage = &mut StorageView::new(&raw_storage); - +// // // Setting infinite balance for the sender. 
// let key = storage_key_for_eth_balance(&sender_address); // storage_ptr.set_value(&key, u256_to_h256(U256([0, 0, 1, 0]))); - +// // let mut oracle_tools = OracleTools::new(storage_ptr); - +// // let mut vm = init_vm_inner( // &mut oracle_tools, // BlockContextMode::NewBlock(block_context, Default::default()), @@ -599,7 +600,7 @@ // &BASE_SYSTEM_CONTRACTS, // TxExecutionMode::VerifyExecute, // ); - +// // for test_info in transactions { // vm.save_current_vm_as_snapshot(); // let vm_state_before_tx = vm.dump_inner_state(); @@ -608,7 +609,7 @@ // test_info.get_transaction(), // TxExecutionMode::VerifyExecute, // ); - +// // match vm.execute_next_tx() { // Err(reason) => { // assert_eq!(test_info.rejection_reason(), Some(reason)); @@ -622,11 +623,11 @@ // ); // } // }; - +// // if test_info.should_rollback() { // // Some error has occurred, we should reject the transaction // vm.rollback_to_latest_snapshot(); - +// // // vm_state_before_tx. // let state_after_rollback = vm.dump_inner_state(); // assert_eq!( @@ -635,7 +636,7 @@ // ); // } // } - +// // let VmBlockResult { // full_result: mut result, // .. @@ -643,10 +644,10 @@ // // Used contract hashes are retrieved in unordered manner. // // However it must be sorted for the comparisons in tests to work // result.used_contract_hashes.sort(); - +// // result // } - +// // // Sets the signature for an L2 transaction and returns the same transaction // // but this different signature. // fn change_signature(mut tx: Transaction, signature: Vec) -> Transaction { @@ -657,22 +658,22 @@ // } // _ => unreachable!(), // }; - +// // tx // } - +// // #[test] // fn test_vm_rollbacks() { // let (block_context, block_properties): (DerivedBlockContext, BlockProperties) = { // let (block_context, block_properties) = create_test_block_params(); // (block_context.into(), block_properties) // }; - +// // let base_fee = U256::from(block_context.base_fee); - +// // let sender_private_key = H256::random(); // let contract_code = read_test_contract(); - +// // let tx_nonce_0: Transaction = get_deploy_tx( // sender_private_key, // Nonce(0), @@ -715,13 +716,13 @@ // }, // ) // .into(); - +// // let wrong_signature_length_tx = change_signature(tx_nonce_0.clone(), vec![1u8; 32]); // let wrong_v_tx = change_signature(tx_nonce_0.clone(), vec![1u8; 65]); // let wrong_signature_tx = change_signature(tx_nonce_0.clone(), vec![27u8; 65]); - +// // let sender_address = tx_nonce_0.initiator_account(); - +// // let result_without_rollbacks = execute_vm_with_possible_rollbacks( // sender_address, // vec![ @@ -745,7 +746,7 @@ // block_context, // block_properties, // ); - +// // let incorrect_nonce = TxRevertReason::ValidationFailed(VmRevertReason::General { // msg: "Incorrect nonce".to_string(), // }); @@ -761,7 +762,7 @@ // let signature_is_incorrect = TxRevertReason::ValidationFailed(VmRevertReason::General { // msg: "Account validation returned invalid magic value. 
Most often this means that the signature is incorrect".to_string(), // }); - +// // let result_with_rollbacks = execute_vm_with_possible_rollbacks( // sender_address, // vec![ @@ -806,11 +807,11 @@ // block_context, // block_properties, // ); - +// // assert_eq!(result_without_rollbacks, result_with_rollbacks); - +// // let loadnext_contract = get_loadnext_contract(); - +// // let loadnext_constructor_data = encode(&[Token::Uint(U256::from(100))]); // let loadnext_deploy_tx: Transaction = get_deploy_tx( // sender_private_key, @@ -833,7 +834,7 @@ // false, // TxExecutionStatus::Success, // ); - +// // let get_load_next_tx = |params: LoadnextContractExecutionParams, nonce: Nonce| { // // Here we test loadnext with various kinds of operations // let tx: Transaction = mock_loadnext_test_call( @@ -849,10 +850,10 @@ // params, // ) // .into(); - +// // tx // }; - +// // let loadnext_tx_0 = get_load_next_tx( // LoadnextContractExecutionParams { // reads: 100, @@ -875,7 +876,7 @@ // }, // Nonce(2), // ); - +// // let result_without_rollbacks = execute_vm_with_possible_rollbacks( // sender_address, // vec![ @@ -894,7 +895,7 @@ // block_context, // block_properties, // ); - +// // let result_with_rollbacks = execute_vm_with_possible_rollbacks( // sender_address, // vec![ @@ -935,10 +936,10 @@ // block_context, // block_properties, // ); - +// // assert_eq!(result_without_rollbacks, result_with_rollbacks); // } - +// // // Inserts the contracts into the test environment, bypassing the // // deployer system contract. Besides the reference to storage // // it accepts a `contracts` tuple of information about the contract @@ -951,13 +952,13 @@ // .iter() // .flat_map(|(contract, is_account)| { // let mut new_logs = vec![]; - +// // let deployer_code_key = get_code_key(contract.account_id.address()); // new_logs.push(StorageLog::new_write_log( // deployer_code_key, // hash_bytecode(&contract.bytecode), // )); - +// // if *is_account { // let is_account_key = get_is_account_key(contract.account_id.address()); // new_logs.push(StorageLog::new_write_log( @@ -965,19 +966,19 @@ // u256_to_h256(1u32.into()), // )); // } - +// // new_logs // }) // .collect(); // raw_storage.process_transaction_logs(&logs); - +// // for (contract, _) in contracts { // raw_storage.store_contract(*contract.account_id.address(), contract.bytecode.clone()); // raw_storage.store_factory_dep(hash_bytecode(&contract.bytecode), contract.bytecode); // } // raw_storage.save(L1BatchNumber(0)); // } - +// // enum NonceHolderTestMode { // SetValueUnderNonce, // IncreaseMinNonceBy5, @@ -986,7 +987,7 @@ // IncreaseMinNonceBy1, // SwitchToArbitraryOrdering, // } - +// // impl From for u8 { // fn from(mode: NonceHolderTestMode) -> u8 { // match mode { @@ -999,7 +1000,7 @@ // } // } // } - +// // fn get_nonce_holder_test_tx( // nonce: U256, // account_address: Address, @@ -1021,11 +1022,11 @@ // reserved: [U256::zero(); 4], // data: vec![12], // signature: vec![test_mode.into()], - +// // ..Default::default() // } // } - +// // fn run_vm_with_raw_tx<'a>( // oracle_tools: &'a mut OracleTools<'a, false>, // block_context: DerivedBlockContext, @@ -1042,7 +1043,7 @@ // &base_system_contracts, // TxExecutionMode::VerifyExecute, // ); - +// // let overhead = tx.overhead_gas(); // push_raw_transaction_to_bootloader_memory( // &mut vm, @@ -1054,43 +1055,43 @@ // full_result: result, // .. 
// } = vm.execute_till_block_end(BootloaderJobType::TransactionExecution); - +// // (result, tx_has_failed(&vm.state, 0)) // } - +// // #[test] // fn test_nonce_holder() { // let (block_context, block_properties): (DerivedBlockContext, BlockProperties) = { // let (block_context, block_properties) = create_test_block_params(); // (block_context.into(), block_properties) // }; - +// // let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); // let db = RocksDB::new(Database::StateKeeper, temp_dir.as_ref(), false); // let mut raw_storage = SecondaryStateStorage::new(db); // insert_system_contracts(&mut raw_storage); - +// // let account_address = H160::random(); // let account = DeployedContract { // account_id: AccountTreeId::new(account_address), // bytecode: read_nonce_holder_tester(), // }; - +// // insert_contracts(&mut raw_storage, vec![(account, true)]); - +// // let storage_ptr: &mut dyn Storage = &mut StorageView::new(&raw_storage); - +// // // We deploy here counter contract, because its logic is trivial - +// // let key = storage_key_for_eth_balance(&account_address); // storage_ptr.set_value(&key, u256_to_h256(U256([0, 0, 1, 0]))); - +// // let mut run_nonce_test = |nonce: U256, // test_mode: NonceHolderTestMode, // error_message: Option, // comment: &'static str| { // let tx = get_nonce_holder_test_tx(nonce, account_address, test_mode, &block_context); - +// // let mut oracle_tools = OracleTools::new(storage_ptr); // let (result, tx_has_failed) = // run_vm_with_raw_tx(&mut oracle_tools, block_context, &block_properties, tx); @@ -1109,7 +1110,7 @@ // assert!(!tx_has_failed, "{}", comment); // } // }; - +// // // Test 1: trying to set value under non sequential nonce value. // run_nonce_test( // 1u32.into(), @@ -1117,7 +1118,7 @@ // Some("Previous nonce has not been used".to_string()), // "Allowed to set value under non sequential value", // ); - +// // // Test 2: increase min nonce by 1 with sequential nonce ordering: // run_nonce_test( // 0u32.into(), @@ -1125,7 +1126,7 @@ // None, // "Failed to increment nonce by 1 for sequential account", // ); - +// // // Test 3: correctly set value under nonce with sequential nonce ordering: // run_nonce_test( // 1u32.into(), @@ -1133,7 +1134,7 @@ // None, // "Failed to set value under nonce sequential value", // ); - +// // // Test 5: migrate to the arbitrary nonce ordering: // run_nonce_test( // 2u32.into(), @@ -1141,7 +1142,7 @@ // None, // "Failed to switch to arbitrary ordering", // ); - +// // // Test 6: increase min nonce by 5 // run_nonce_test( // 6u32.into(), @@ -1149,7 +1150,7 @@ // None, // "Failed to increase min nonce by 5", // ); - +// // // Test 7: since the nonces in range [6,10] are no longer allowed, the // // tx with nonce 10 should not be allowed // run_nonce_test( @@ -1158,7 +1159,7 @@ // Some("Reusing the same nonce twice".to_string()), // "Allowed to reuse nonce below the minimal one", // ); - +// // // Test 8: we should be able to use nonce 13 // run_nonce_test( // 13u32.into(), @@ -1166,7 +1167,7 @@ // None, // "Did not allow to use unused nonce 10", // ); - +// // // Test 9: we should not be able to reuse nonce 13 // run_nonce_test( // 13u32.into(), @@ -1174,7 +1175,7 @@ // Some("Reusing the same nonce twice".to_string()), // "Allowed to reuse the same nonce twice", // ); - +// // // Test 10: we should be able to simply use nonce 14, while bumping the minimal nonce by 5 // run_nonce_test( // 14u32.into(), @@ -1182,7 +1183,7 @@ // None, // "Did not allow to use a bumped nonce", // ); - +// 
// // Test 6: Do not allow bumping nonce by too much // run_nonce_test( // 16u32.into(), @@ -1190,7 +1191,7 @@ // Some("The value for incrementing the nonce is too high".to_string()), // "Allowed for incrementing min nonce too much", // ); - +// // // Test 7: Do not allow not setting a nonce as used // run_nonce_test( // 16u32.into(), @@ -1199,7 +1200,7 @@ // "Allowed to leave nonce as unused", // ); // } - +// // #[test] // fn test_l1_tx_execution() { // // In this test, we try to execute a contract deployment from L1 @@ -1209,17 +1210,17 @@ // insert_system_contracts(&mut raw_storage); // let mut storage_accessor = StorageView::new(&raw_storage); // let storage_ptr: &mut dyn Storage = &mut storage_accessor; - +// // let mut oracle_tools = OracleTools::new(storage_ptr); // let (block_context, block_properties) = create_test_block_params(); - +// // // Here instead of marking code hash via the bootloader means, we will // // using L1->L2 communication, the same it would likely be done during the priority mode. // let contract_code = read_test_contract(); // let contract_code_hash = hash_bytecode(&contract_code); // let l1_deploy_tx = get_l1_deploy_tx(&contract_code, &[]); // let l1_deploy_tx_data: TransactionData = l1_deploy_tx.clone().into(); - +// // let required_l2_to_l1_logs = vec![ // L2ToL1Log { // shard_id: 0, @@ -1238,9 +1239,9 @@ // value: u256_to_h256(U256::from(1u32)), // }, // ]; - +// // let sender_address = l1_deploy_tx_data.from(); - +// // oracle_tools.decommittment_processor.populate( // vec![( // h256_to_u256(contract_code_hash), @@ -1248,7 +1249,7 @@ // )], // Timestamp(0), // ); - +// // let mut vm = init_vm_inner( // &mut oracle_tools, // BlockContextMode::NewBlock(block_context.into(), Default::default()), @@ -1258,44 +1259,44 @@ // TxExecutionMode::VerifyExecute, // ); // push_transaction_to_bootloader_memory(&mut vm, &l1_deploy_tx, TxExecutionMode::VerifyExecute); - +// // let res = vm.execute_next_tx().unwrap(); - +// // // The code hash of the deployed contract should be marked as republished. // let known_codes_key = get_known_code_key(&contract_code_hash); - +// // // The contract should be deployed successfully. 
// let deployed_address = deployed_address_create(sender_address, U256::zero()); // let account_code_key = get_code_key(&deployed_address); - +// // let expected_slots = vec![ // (u256_to_h256(U256::from(1u32)), known_codes_key), // (contract_code_hash, account_code_key), // ]; // assert!(!tx_has_failed(&vm.state, 0)); - +// // verify_required_storage(&vm.state, expected_slots); - +// // assert_eq!(res.result.logs.l2_to_l1_logs, required_l2_to_l1_logs); - +// // let tx = get_l1_execute_test_contract_tx(deployed_address, true); // push_transaction_to_bootloader_memory(&mut vm, &tx, TxExecutionMode::VerifyExecute); // let res = ExecutionMetrics::new(&vm.execute_next_tx().unwrap().result.logs, 0, 0, 0, 0); // assert_eq!(res.initial_storage_writes, 0); - +// // let tx = get_l1_execute_test_contract_tx(deployed_address, false); // push_transaction_to_bootloader_memory(&mut vm, &tx, TxExecutionMode::VerifyExecute); // let res = ExecutionMetrics::new(&vm.execute_next_tx().unwrap().result.logs, 0, 0, 0, 0); // assert_eq!(res.initial_storage_writes, 2); - +// // let repeated_writes = res.repeated_storage_writes; - +// // push_transaction_to_bootloader_memory(&mut vm, &tx, TxExecutionMode::VerifyExecute); // let res = ExecutionMetrics::new(&vm.execute_next_tx().unwrap().result.logs, 0, 0, 0, 0); // assert_eq!(res.initial_storage_writes, 1); // // We do the same storage write, so it will be deduplicated // assert_eq!(res.repeated_storage_writes, repeated_writes); - +// // let mut tx = get_l1_execute_test_contract_tx(deployed_address, false); // tx.execute.value = U256::from(1); // match &mut tx.common_data { @@ -1312,15 +1313,15 @@ // TxExecutionStatus::Failure, // "The transaction should fail" // ); - +// // let res = ExecutionMetrics::new(&execution_result.result.logs, 0, 0, 0, 0); - +// // // There are 2 initial writes here: // // - totalSupply of ETH token // // - balance of the refund recipient // assert_eq!(res.initial_storage_writes, 2); // } - +// // #[test] // fn test_invalid_bytecode() { // let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); @@ -1328,16 +1329,16 @@ // let mut raw_storage = SecondaryStateStorage::new(db); // insert_system_contracts(&mut raw_storage); // let (block_context, block_properties) = create_test_block_params(); - +// // let test_vm_with_custom_bytecode_hash = // |bytecode_hash: H256, expected_revert_reason: Option| { // let mut storage_accessor = StorageView::new(&raw_storage); // let storage_ptr: &mut dyn Storage = &mut storage_accessor; // let mut oracle_tools = OracleTools::new(storage_ptr); - +// // let (encoded_tx, predefined_overhead) = // get_l1_tx_with_custom_bytecode_hash(h256_to_u256(bytecode_hash)); - +// // run_vm_with_custom_factory_deps( // &mut oracle_tools, // block_context, @@ -1347,13 +1348,13 @@ // expected_revert_reason, // ); // }; - +// // let failed_to_mark_factory_deps = |msg: &str| { // TxRevertReason::FailedToMarkFactoryDependencies(VmRevertReason::General { // msg: msg.to_string(), // }) // }; - +// // // Here we provide the correctly-formatted bytecode hash of // // odd length, so it should work. // test_vm_with_custom_bytecode_hash( @@ -1363,7 +1364,7 @@ // ]), // None, // ); - +// // // Here we provide correctly formatted bytecode of even length, so // // it should fail. // test_vm_with_custom_bytecode_hash( @@ -1375,7 +1376,7 @@ // "Code length in words must be odd", // )), // ); - +// // // Here we provide incorrectly formatted bytecode of odd length, so // // it should fail. 
// test_vm_with_custom_bytecode_hash( @@ -1387,7 +1388,7 @@ // "Incorrectly formatted bytecodeHash", // )), // ); - +// // // Here we provide incorrectly formatted bytecode of odd length, so // // it should fail. // test_vm_with_custom_bytecode_hash( @@ -1400,24 +1401,24 @@ // )), // ); // } - +// // #[test] // fn test_tracing_of_execution_errors() { // // In this test, we are checking that the execution errors are transmitted correctly from the bootloader. // let (block_context, block_properties) = create_test_block_params(); // let block_context: DerivedBlockContext = block_context.into(); - +// // let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); // let db = RocksDB::new(Database::StateKeeper, temp_dir.as_ref(), false); // let mut raw_storage = SecondaryStateStorage::new(db); // insert_system_contracts(&mut raw_storage); - +// // let contract_address = Address::random(); // let error_contract = DeployedContract { // account_id: AccountTreeId::new(contract_address), // bytecode: read_error_contract(), // }; - +// // let tx = get_error_tx( // H256::random(), // Nonce(0), @@ -1429,16 +1430,16 @@ // gas_per_pubdata_limit: U256::from(50000u32), // }, // ); - +// // insert_contracts(&mut raw_storage, vec![(error_contract, false)]); - +// // let storage_ptr: &mut dyn Storage = &mut StorageView::new(&raw_storage); - +// // let key = storage_key_for_eth_balance(&tx.common_data.initiator_address); // storage_ptr.set_value(&key, u256_to_h256(U256([0, 0, 1, 0]))); - +// // let mut oracle_tools = OracleTools::new(storage_ptr); - +// // let mut vm = init_vm_inner( // &mut oracle_tools, // BlockContextMode::NewBlock(block_context, Default::default()), @@ -1448,14 +1449,14 @@ // TxExecutionMode::VerifyExecute, // ); // push_transaction_to_bootloader_memory(&mut vm, &tx.into(), TxExecutionMode::VerifyExecute); - +// // let mut tracer = TransactionResultTracer::default(); // assert_eq!( // vm.execute_with_custom_tracer(&mut tracer), // VmExecutionStopReason::VmFinished, // "Tracer should never request stop" // ); - +// // match tracer.revert_reason { // Some(revert_reason) => { // let revert_reason = VmRevertReason::try_from(&revert_reason as &[u8]).unwrap(); @@ -1472,20 +1473,20 @@ // ), // } // } - +// // /// Checks that `TX_GAS_LIMIT_OFFSET` constant is correct. 
// #[test] // fn test_tx_gas_limit_offset() { // let gas_limit = U256::from(999999); - +// // let (block_context, block_properties) = create_test_block_params(); // let block_context: DerivedBlockContext = block_context.into(); - +// // let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); // let db = RocksDB::new(Database::StateKeeper, temp_dir.as_ref(), false); // let raw_storage = SecondaryStateStorage::new(db); // let storage_ptr: &mut dyn Storage = &mut StorageView::new(&raw_storage); - +// // let contract_code = read_test_contract(); // let tx: Transaction = get_deploy_tx( // H256::random(), @@ -1499,9 +1500,9 @@ // }, // ) // .into(); - +// // let mut oracle_tools = OracleTools::new(storage_ptr); - +// // let mut vm = init_vm_inner( // &mut oracle_tools, // BlockContextMode::NewBlock(block_context, Default::default()), @@ -1511,7 +1512,7 @@ // TxExecutionMode::VerifyExecute, // ); // push_transaction_to_bootloader_memory(&mut vm, &tx, TxExecutionMode::VerifyExecute); - +// // let gas_limit_from_memory = vm // .state // .memory @@ -1522,20 +1523,20 @@ // .value; // assert_eq!(gas_limit_from_memory, gas_limit); // } - +// // #[test] // fn test_is_write_initial_behaviour() { // // In this test, we check result of `is_write_initial` at different stages. - +// // let (block_context, block_properties) = create_test_block_params(); // let block_context: DerivedBlockContext = block_context.into(); - +// // let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); // let db = RocksDB::new(Database::StateKeeper, temp_dir.as_ref(), false); // let mut raw_storage = SecondaryStateStorage::new(db); // insert_system_contracts(&mut raw_storage); // let storage_ptr: &mut dyn Storage = &mut StorageView::new(&raw_storage); - +// // let base_fee = block_context.base_fee; // let account_pk = H256::random(); // let contract_code = read_test_contract(); @@ -1553,19 +1554,19 @@ // }, // ) // .into(); - +// // let sender_address = tx.initiator_account(); // let nonce_key = get_nonce_key(&sender_address); - +// // // Check that the next write to the nonce key will be initial. // assert!(storage_ptr.is_write_initial(&nonce_key)); - +// // // Set balance to be able to pay fee for txs. // let balance_key = storage_key_for_eth_balance(&sender_address); // storage_ptr.set_value(&balance_key, u256_to_h256(U256([0, 0, 1, 0]))); - +// // let mut oracle_tools = OracleTools::new(storage_ptr); - +// // let mut vm = init_vm_inner( // &mut oracle_tools, // BlockContextMode::NewBlock(block_context, Default::default()), @@ -1574,25 +1575,25 @@ // &BASE_SYSTEM_CONTRACTS, // TxExecutionMode::VerifyExecute, // ); - +// // push_transaction_to_bootloader_memory(&mut vm, &tx, TxExecutionMode::VerifyExecute); - +// // vm.execute_next_tx() // .expect("Bootloader failed while processing the first transaction"); // // Check that `is_write_initial` still returns true for the nonce key. 
// assert!(storage_ptr.is_write_initial(&nonce_key)); // } - +// // pub fn get_l1_tx_with_custom_bytecode_hash(bytecode_hash: U256) -> (Vec, u32) { // let tx: TransactionData = get_l1_execute_test_contract_tx(Default::default(), false).into(); // let predefined_overhead = tx.overhead_gas_with_custom_factory_deps(vec![bytecode_hash]); // let tx_bytes = tx.abi_encode_with_custom_factory_deps(vec![bytecode_hash]); - +// // (bytes_to_be_words(tx_bytes), predefined_overhead) // } - +// // const L1_TEST_GAS_PER_PUBDATA_BYTE: u32 = 800; - +// // pub fn get_l1_execute_test_contract_tx(deployed_address: Address, with_panic: bool) -> Transaction { // let execute = execute_test_contract(deployed_address, with_panic); // Transaction { @@ -1606,10 +1607,10 @@ // received_timestamp_ms: 0, // } // } - +// // pub fn get_l1_deploy_tx(code: &[u8], calldata: &[u8]) -> Transaction { // let execute = get_create_execute(code, calldata); - +// // Transaction { // common_data: ExecuteTransactionCommon::L1(L1TxCommonData { // sender: H160::random(), @@ -1621,28 +1622,28 @@ // received_timestamp_ms: 0, // } // } - +// // fn read_test_contract() -> Vec { // read_bytecode("etc/contracts-test-data/artifacts-zk/contracts/counter/counter.sol/Counter.json") // } - +// // fn read_nonce_holder_tester() -> Vec { // read_bytecode("etc/contracts-test-data/artifacts-zk/contracts/custom-account/nonce-holder-test.sol/NonceHolderTest.json") // } - +// // fn read_error_contract() -> Vec { // read_bytecode( // "etc/contracts-test-data/artifacts-zk/contracts/error/error.sol/SimpleRequire.json", // ) // } - +// // fn execute_test_contract(address: Address, with_panic: bool) -> Execute { // let test_contract = load_contract( // "etc/contracts-test-data/artifacts-zk/contracts/counter/counter.sol/Counter.json", // ); - +// // let function = test_contract.function("incrementWithRevert").unwrap(); - +// // let calldata = function // .encode_input(&[Token::Uint(U256::from(1u8)), Token::Bool(with_panic)]) // .expect("failed to encode parameters"); @@ -1653,3 +1654,4 @@ // factory_deps: None, // } // } +// ``` diff --git a/core/lib/multivm/src/versions/vm_m5/transaction_data.rs b/core/lib/multivm/src/versions/vm_m5/transaction_data.rs index b749ff09275..c3fba68bc8d 100644 --- a/core/lib/multivm/src/versions/vm_m5/transaction_data.rs +++ b/core/lib/multivm/src/versions/vm_m5/transaction_data.rs @@ -1,13 +1,17 @@ use zk_evm_1_3_1::zkevm_opcode_defs::system_params::{MAX_PUBDATA_PER_BLOCK, MAX_TX_ERGS_LIMIT}; -use zksync_types::ethabi::{encode, Address, Token}; -use zksync_types::fee::encoding_len; -use zksync_types::MAX_TXS_IN_BLOCK; -use zksync_types::{l2::TransactionType, ExecuteTransactionCommon, Transaction, U256}; -use zksync_utils::{address_to_h256, ceil_div_u256}; -use zksync_utils::{bytecode::hash_bytecode, bytes_to_be_words, h256_to_u256}; +use zksync_types::{ + ethabi::{encode, Address, Token}, + fee::encoding_len, + l2::TransactionType, + ExecuteTransactionCommon, Transaction, U256, +}; +use zksync_utils::{ + address_to_h256, bytecode::hash_bytecode, bytes_to_be_words, ceil_div_u256, h256_to_u256, +}; +use super::vm_with_bootloader::MAX_GAS_PER_PUBDATA_BYTE; use crate::vm_m5::vm_with_bootloader::{ - BLOCK_OVERHEAD_GAS, BLOCK_OVERHEAD_PUBDATA, BOOTLOADER_TX_ENCODING_SPACE, + BLOCK_OVERHEAD_GAS, BLOCK_OVERHEAD_PUBDATA, BOOTLOADER_TX_ENCODING_SPACE, MAX_TXS_IN_BLOCK, }; const L1_TX_TYPE: u8 = 255; @@ -56,12 +60,22 @@ impl From for TransactionData { U256::zero() }; + // Ethereum transactions do not sign gas per pubdata limit, and so for 
them we need to use + // some default value. We use the maximum possible value that is allowed by the bootloader + // (i.e. we can not use u64::MAX, because the bootloader requires gas per pubdata for such + // transactions to be higher than `MAX_GAS_PER_PUBDATA_BYTE`). + let gas_per_pubdata_limit = if common_data.transaction_type.is_ethereum_type() { + MAX_GAS_PER_PUBDATA_BYTE.into() + } else { + common_data.fee.gas_per_pubdata_limit + }; + TransactionData { tx_type: (common_data.transaction_type as u32) as u8, from: execute_tx.initiator_account(), to: execute_tx.execute.contract_address, gas_limit: common_data.fee.gas_limit, - pubdata_price_limit: common_data.fee.gas_per_pubdata_limit, + pubdata_price_limit: gas_per_pubdata_limit, max_fee_per_gas: common_data.fee.max_fee_per_gas, max_priority_fee_per_gas: common_data.fee.max_priority_fee_per_gas, paymaster: common_data.paymaster_params.paymaster, @@ -181,7 +195,7 @@ impl TransactionData { get_maximal_allowed_overhead(total_gas_limit, gas_per_pubdata_byte_limit, encoded_len) } - + // ``` // #[cfg(test)] // pub(crate) fn overhead_gas_with_custom_factory_deps( // &self, @@ -198,22 +212,27 @@ impl TransactionData { // ); // get_maximal_allowed_overhead(total_gas_limit, gas_per_pubdata_byte_limit, encoded_len) // } - + // // #[cfg(test)] // pub(crate) fn canonical_l1_tx_hash(&self) -> zksync_types::H256 { // use zksync_types::web3::signing::keccak256; - + // // if self.tx_type != L1_TX_TYPE { // panic!("Trying to get L1 tx hash for non-L1 tx"); // } - + // // let encoded_bytes = self.clone().abi_encode(); - + // // zksync_types::H256(keccak256(&encoded_bytes)) // } + // ``` } -pub fn derive_overhead(gas_limit: u32, gas_price_per_pubdata: u32, encoded_len: usize) -> u32 { +pub(crate) fn derive_overhead( + gas_limit: u32, + gas_price_per_pubdata: u32, + encoded_len: usize, +) -> u32 { assert!( gas_limit <= MAX_TX_ERGS_LIMIT, "gas limit is larger than the maximal one" @@ -225,8 +244,8 @@ pub fn derive_overhead(gas_limit: u32, gas_price_per_pubdata: u32, encoded_len: let gas_price_per_pubdata = U256::from(gas_price_per_pubdata); let encoded_len = U256::from(encoded_len); - // The MAX_TX_ERGS_LIMIT is formed in a way that may fullfills a single-instance circuits - // if used in full. That is, within MAX_TX_ERGS_LIMIT it is possible to fully saturate all the single-instance + // The `MAX_TX_ERGS_LIMIT` is formed in a way that may fulfills a single-instance circuits + // if used in full. That is, within `MAX_TX_ERGS_LIMIT` it is possible to fully saturate all the single-instance // circuits. let overhead_for_single_instance_circuits = ceil_div_u256(gas_limit * max_block_overhead, MAX_TX_ERGS_LIMIT.into()); @@ -240,7 +259,7 @@ pub fn derive_overhead(gas_limit: u32, gas_price_per_pubdata: u32, encoded_len: // The overhead for occupying a single tx slot let tx_slot_overhead = ceil_div_u256(max_block_overhead, MAX_TXS_IN_BLOCK.into()); - // We use "ceil" here for formal reasons to allow easier approach for calculating the overhead in O(1) + // We use `ceil` here for formal reasons to allow easier approach for calculating the overhead in O(1) let max_pubdata_in_tx = ceil_div_u256(gas_limit, gas_price_per_pubdata); // The maximal potential overhead from pubdata @@ -274,53 +293,55 @@ pub fn get_maximal_allowed_overhead( let encoded_len = U256::from(encoded_len); // Derivation of overhead consists of 4 parts: - // 1. The overhead for taking up a transaction's slot. (O1): O1 = 1 / MAX_TXS_IN_BLOCK - // 2. 
The overhead for taking up the bootloader's memory (O2): O2 = encoded_len / BOOTLOADER_TX_ENCODING_SPACE - // 3. The overhead for possible usage of pubdata. (O3): O3 = max_pubdata_in_tx / MAX_PUBDATA_PER_BLOCK - // 4. The overhead for possible usage of all the single-instance circuits. (O4): O4 = gas_limit / MAX_TX_ERGS_LIMIT + // 1. The overhead for taking up a transaction's slot. `(O1): O1 = 1 / MAX_TXS_IN_BLOCK` + // 2. The overhead for taking up the bootloader's memory `(O2): O2 = encoded_len / BOOTLOADER_TX_ENCODING_SPACE` + // 3. The overhead for possible usage of pubdata. `(O3): O3 = max_pubdata_in_tx / MAX_PUBDATA_PER_BLOCK` + // 4. The overhead for possible usage of all the single-instance circuits. `(O4): O4 = gas_limit / MAX_TX_ERGS_LIMIT` // // The maximum of these is taken to derive the part of the block's overhead to be paid by the users: // - // max_overhead = max(O1, O2, O3, O4) - // overhead_gas = ceil(max_overhead * overhead_for_block_gas). Thus, overhead_gas is a function of - // tx_gas_limit, gas_per_pubdata_byte_limit and encoded_len. + // `max_overhead = max(O1, O2, O3, O4)` + // `overhead_gas = ceil(max_overhead * overhead_for_block_gas)`. Thus, `overhead_gas` is a function of + // `tx_gas_limit`, `gas_per_pubdata_byte_limit` and `encoded_len`. // - // While it is possible to derive the overhead with binary search in O(log n), it is too expensive to be done + // While it is possible to derive the overhead with binary search in `O(log n)`, it is too expensive to be done // on L1, so here is a reference implementation of finding the overhead for transaction in O(1): // - // Given total_gas_limit = tx_gas_limit + overhead_gas, we need to find overhead_gas and tx_gas_limit, such that: - // 1. overhead_gas is maximal possible (the operator is paid fairly) - // 2. overhead_gas(tx_gas_limit, gas_per_pubdata_byte_limit, encoded_len) >= overhead_gas (the user does not overpay) + // Given `total_gas_limit = tx_gas_limit + overhead_gas`, we need to find `overhead_gas` and `tx_gas_limit`, such that: + // 1. `overhead_gas` is maximal possible (the operator is paid fairly) + // 2. `overhead_gas(tx_gas_limit, gas_per_pubdata_byte_limit, encoded_len) >= overhead_gas` (the user does not overpay) // The third part boils to the following 4 inequalities (at least one of these must hold): - // ceil(O1 * overhead_for_block_gas) >= overhead_gas - // ceil(O2 * overhead_for_block_gas) >= overhead_gas - // ceil(O3 * overhead_for_block_gas) >= overhead_gas - // ceil(O4 * overhead_for_block_gas) >= overhead_gas + // `ceil(O1 * overhead_for_block_gas) >= overhead_gas` + // `ceil(O2 * overhead_for_block_gas) >= overhead_gas` + // `ceil(O3 * overhead_for_block_gas) >= overhead_gas` + // `ceil(O4 * overhead_for_block_gas) >= overhead_gas` // // Now, we need to solve each of these separately: // 1. The overhead for occupying a single tx slot is a constant: let tx_slot_overhead = ceil_div_u256(overhead_for_block_gas, MAX_TXS_IN_BLOCK.into()); - // 2. The overhead for occupying the bootloader memory can be derived from encoded_len + // 2. The overhead for occupying the bootloader memory can be derived from `encoded_len` let overhead_for_length = ceil_div_u256( encoded_len * overhead_for_block_gas, BOOTLOADER_TX_ENCODING_SPACE.into(), ); - + // ``` // 3. ceil(O3 * overhead_for_block_gas) >= overhead_gas // O3 = max_pubdata_in_tx / MAX_PUBDATA_PER_BLOCK = ceil(gas_limit / gas_per_pubdata_byte_limit) / MAX_PUBDATA_PER_BLOCK - // >= (gas_limit / (gas_per_pubdata_byte_limit * MAX_PUBDATA_PER_BLOCK). 
Throwing off the `ceil`, while may provide marginally lower - // overhead to the operator, provides substantially easier formula to work with. + // >= (gas_limit / (gas_per_pubdata_byte_limit * MAX_PUBDATA_PER_BLOCK). + // ``` + //Throwing off the `ceil`, while may provide marginally lower + //overhead to the operator, provides substantially easier formula to work with. // - // For better clarity, let's denote gas_limit = GL, MAX_PUBDATA_PER_BLOCK = MP, gas_per_pubdata_byte_limit = EP, overhead_for_block_gas = OB, total_gas_limit = TL, overhead_gas = OE - // ceil(OB * (TL - OE) / (EP * MP)) >= OE + // For better clarity, let's denote `gas_limit = GL, MAX_PUBDATA_PER_BLOCK = MP, gas_per_pubdata_byte_limit = EP, overhead_for_block_gas = OB, total_gas_limit = TL, overhead_gas = OE` + // `ceil(OB * (TL - OE) / (EP * MP)) >= OE` // - // OB * (TL - OE) / (MP * EP) > OE - 1 - // OB * (TL - OE) > (OE - 1) * EP * MP - // OB * TL + EP * MP > OE * EP * MP + OE * OB - // (OB * TL + EP * MP) / (EP * MP + OB) > OE - // OE = floor((OB * TL + EP * MP) / (EP * MP + OB)) with possible -1 if the division is without remainder + // `OB * (TL - OE) / (MP * EP) > OE - 1` + // `OB * (TL - OE) > (OE - 1) * EP * MP` + // `OB * TL + EP * MP > OE * EP * MP + OE * OB` + // `(OB * TL + EP * MP) / (EP * MP + OB) > OE` + // `OE = floor((OB * TL + EP * MP) / (EP * MP + OB))` with possible -1 if the division is without remainder let overhead_for_pubdata = { let numerator: U256 = overhead_for_block_gas * total_gas_limit + gas_per_pubdata_byte_limit * U256::from(MAX_PUBDATA_PER_BLOCK); @@ -335,7 +356,7 @@ pub fn get_maximal_allowed_overhead( (numerator - 1) / denominator } }; - + // ``` // 4. ceil(O4 * overhead_for_block_gas) >= overhead_gas // O4 = gas_limit / MAX_TX_ERGS_LIMIT. Using the notation from the previous equation: // ceil(OB * GL / MAX_TX_ERGS_LIMIT) >= OE @@ -344,6 +365,7 @@ pub fn get_maximal_allowed_overhead( // OB * (TL - OE) > OE * MAX_TX_ERGS_LIMIT - MAX_TX_ERGS_LIMIT // OB * TL + MAX_TX_ERGS_LIMIT > OE * ( MAX_TX_ERGS_LIMIT + OB) // OE = floor(OB * TL + MAX_TX_ERGS_LIMIT / (MAX_TX_ERGS_LIMIT + OB)), with possible -1 if the division is without remainder + // ``` let overhead_for_gas = { let numerator = overhead_for_block_gas * total_gas_limit + U256::from(MAX_TX_ERGS_LIMIT); let denominator: U256 = U256::from(MAX_TX_ERGS_LIMIT) + overhead_for_block_gas; @@ -388,7 +410,7 @@ mod tests { } else { 0u32 }; - // Safe cast: the gas_limit for a transaction can not be larger than 2^32 + // Safe cast: the gas_limit for a transaction can not be larger than `2^32` let mut right_bound = total_gas_limit; // The closure returns whether a certain overhead would be accepted by the bootloader. 
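The pubdata inequality solved above lends itself to an independent sanity check. Below is a minimal sketch, not the patch's code: it uses plain `u128` arithmetic instead of `U256` and invented illustrative inputs rather than the protocol constants, and it mirrors the `(numerator - 1) / denominator` trick, which yields `floor(numerator / denominator)`, minus one exactly when the division leaves no remainder; that is, the largest `OE` strictly below `(OB * TL + EP * MP) / (EP * MP + OB)`:

```rust
/// Ceiling division for positive integers: `ceil(a / b)`.
fn ceil_div(a: u128, b: u128) -> u128 {
    (a + b - 1) / b
}

/// Largest `oe` satisfying `ceil(ob * (tl - oe) / (ep * mp)) >= oe`,
/// computed in O(1) via the `(numerator - 1) / denominator` trick.
fn max_overhead_for_pubdata(ob: u128, tl: u128, ep: u128, mp: u128) -> u128 {
    let numerator = ob * tl + ep * mp;
    let denominator = ep * mp + ob;
    if numerator == 0 {
        0
    } else {
        (numerator - 1) / denominator
    }
}

fn main() {
    // Illustrative values only; not the protocol constants.
    let (ob, tl, ep, mp) = (1_200_000u128, 5_000_000, 800, 100_000);
    let oe = max_overhead_for_pubdata(ob, tl, ep, mp);

    // The derived overhead is still accepted by the original inequality...
    assert!(ceil_div(ob * (tl - oe), ep * mp) >= oe);
    // ...and it is maximal: `oe + 1` would no longer be accepted.
    assert!(ceil_div(ob * (tl - oe - 1), ep * mp) < oe + 1);
    println!("overhead accepted by the bootloader: {oe}");
}
```

The gas branch (inequality 4) follows the same pattern with `EP * MP` replaced by `MAX_TX_ERGS_LIMIT`, i.e. `OE = floor((OB * TL + MAX_TX_ERGS_LIMIT) / (MAX_TX_ERGS_LIMIT + OB))`, again with the possible minus one when the division is exact.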
diff --git a/core/lib/multivm/src/versions/vm_m5/utils.rs b/core/lib/multivm/src/versions/vm_m5/utils.rs index b8fef994428..addab5c78af 100644 --- a/core/lib/multivm/src/versions/vm_m5/utils.rs +++ b/core/lib/multivm/src/versions/vm_m5/utils.rs @@ -1,10 +1,7 @@ -use crate::vm_m5::{memory::SimpleMemory, vm_with_bootloader::BlockContext}; use once_cell::sync::Lazy; - -use crate::glue::GlueInto; -use zk_evm_1_3_1::block_properties::BlockProperties; use zk_evm_1_3_1::{ aux_structures::{LogQuery, MemoryPage, Timestamp}, + block_properties::BlockProperties, vm_state::PrimitiveValue, zkevm_opcode_defs::FatPointer, }; @@ -13,6 +10,11 @@ use zksync_system_constants::ZKPORTER_IS_AVAILABLE; use zksync_types::{Address, StorageLogQuery, H160, MAX_L2_TX_GAS_LIMIT, U256}; use zksync_utils::h256_to_u256; +use crate::{ + glue::GlueInto, + vm_m5::{memory::SimpleMemory, vm_with_bootloader::BlockContext}, +}; + pub const INITIAL_TIMESTAMP: u32 = 1024; pub const INITIAL_MEMORY_COUNTER: u32 = 2048; pub const INITIAL_CALLDATA_PAGE: u32 = 7; @@ -222,7 +224,7 @@ pub fn collect_log_queries_after_timestamp( /// Receives sorted slice of timestamps. /// Returns count of timestamps that are greater than or equal to `from_timestamp`. -/// Works in O(log(sorted_timestamps.len())). +/// Works in `O(log(sorted_timestamps.len()))`. pub fn precompile_calls_count_after_timestamp( sorted_timestamps: &[Timestamp], from_timestamp: Timestamp, @@ -253,7 +255,7 @@ pub fn create_test_block_params() -> (BlockContext, BlockProperties) { pub fn read_bootloader_test_code(test: &str) -> Vec { read_zbin_bytecode(format!( - "etc/system-contracts/bootloader/tests/artifacts/{}.yul/{}.yul.zbin", - test, test + "contracts/system-contracts/bootloader/tests/artifacts/{}.yul.zbin", + test )) } diff --git a/core/lib/multivm/src/versions/vm_m5/vm.rs b/core/lib/multivm/src/versions/vm_m5/vm.rs index 87186bb7f15..08fa783cbdb 100644 --- a/core/lib/multivm/src/versions/vm_m5/vm.rs +++ b/core/lib/multivm/src/versions/vm_m5/vm.rs @@ -1,21 +1,23 @@ -use crate::interface::{ - BootloaderMemory, BytecodeCompressionError, CurrentExecutionState, FinishedL1Batch, L1BatchEnv, - L2BlockEnv, SystemEnv, TxExecutionMode, VmExecutionMode, VmExecutionResultAndLogs, VmInterface, - VmInterfaceHistoryEnabled, VmMemoryMetrics, -}; - use zksync_state::StoragePtr; -use zksync_types::l2_to_l1_log::{L2ToL1Log, UserL2ToL1Log}; -use zksync_types::{Transaction, VmVersion}; -use zksync_utils::bytecode::CompressedBytecodeInfo; -use zksync_utils::{h256_to_u256, u256_to_h256}; - -use crate::glue::history_mode::HistoryMode; -use crate::glue::GlueInto; -use crate::vm_m5::events::merge_events; -use crate::vm_m5::storage::Storage; -use crate::vm_m5::vm_instance::MultiVMSubversion; -use crate::vm_m5::vm_instance::VmInstance; +use zksync_types::{ + l2_to_l1_log::{L2ToL1Log, UserL2ToL1Log}, + Transaction, VmVersion, +}; +use zksync_utils::{bytecode::CompressedBytecodeInfo, h256_to_u256, u256_to_h256}; + +use crate::{ + glue::{history_mode::HistoryMode, GlueInto}, + interface::{ + BootloaderMemory, BytecodeCompressionError, CurrentExecutionState, FinishedL1Batch, + L1BatchEnv, L2BlockEnv, SystemEnv, TxExecutionMode, VmExecutionMode, + VmExecutionResultAndLogs, VmInterface, VmInterfaceHistoryEnabled, VmMemoryMetrics, + }, + vm_m5::{ + events::merge_events, + storage::Storage, + vm_instance::{MultiVMSubversion, VmInstance}, + }, +}; #[derive(Debug)] pub struct Vm { @@ -159,7 +161,7 @@ impl VmInterface for Vm { user_l2_to_l1_logs: l2_to_l1_logs, total_log_queries, cycles_used: 
self.vm.state.local_state.monotonic_cycle_counter, - // It's not applicable for vm5 + // It's not applicable for `vm5` deduplicated_events_logs: vec![], storage_refunds: vec![], } @@ -170,13 +172,16 @@ impl VmInterface for Vm { _tracer: Self::TracerDispatcher, tx: Transaction, _with_compression: bool, - ) -> Result { + ) -> ( + Result<(), BytecodeCompressionError>, + VmExecutionResultAndLogs, + ) { crate::vm_m5::vm_with_bootloader::push_transaction_to_bootloader_memory( &mut self.vm, &tx, self.system_env.execution_mode.glue_into(), ); - Ok(self.execute(VmExecutionMode::OneTx)) + (Ok(()), self.execute(VmExecutionMode::OneTx)) } fn record_vm_memory_metrics(&self) -> VmMemoryMetrics { diff --git a/core/lib/multivm/src/versions/vm_m5/vm_instance.rs b/core/lib/multivm/src/versions/vm_m5/vm_instance.rs index 1df40c8a0b8..2a78a817999 100644 --- a/core/lib/multivm/src/versions/vm_m5/vm_instance.rs +++ b/core/lib/multivm/src/versions/vm_m5/vm_instance.rs @@ -1,43 +1,51 @@ -use std::convert::TryFrom; -use std::fmt::Debug; - -use zk_evm_1_3_1::aux_structures::Timestamp; -use zk_evm_1_3_1::vm_state::{PrimitiveValue, VmLocalState, VmState}; -use zk_evm_1_3_1::witness_trace::DummyTracer; -use zk_evm_1_3_1::zkevm_opcode_defs::decoding::{ - AllowedPcOrImm, EncodingModeProduction, VmEncodingMode, +use std::{convert::TryFrom, fmt::Debug}; + +use zk_evm_1_3_1::{ + aux_structures::Timestamp, + vm_state::{PrimitiveValue, VmLocalState, VmState}, + witness_trace::DummyTracer, + zkevm_opcode_defs::{ + decoding::{AllowedPcOrImm, EncodingModeProduction, VmEncodingMode}, + definitions::RET_IMPLICIT_RETURNDATA_PARAMS_REGISTER, + }, }; -use zk_evm_1_3_1::zkevm_opcode_defs::definitions::RET_IMPLICIT_RETURNDATA_PARAMS_REGISTER; -use zksync_system_constants::MAX_TXS_IN_BLOCK; - -use crate::glue::GlueInto; -use zksync_types::l2_to_l1_log::{L2ToL1Log, UserL2ToL1Log}; -use zksync_types::tx::tx_execution_info::TxExecutionStatus; -use zksync_types::vm_trace::VmExecutionTrace; -use zksync_types::{L1BatchNumber, StorageLogQuery, VmEvent, U256}; - -use crate::interface::types::outputs::VmExecutionLogs; -use crate::vm_m5::bootloader_state::BootloaderState; -use crate::vm_m5::errors::{TxRevertReason, VmRevertReason, VmRevertReasonParsingResult}; -use crate::vm_m5::event_sink::InMemoryEventSink; -use crate::vm_m5::events::merge_events; -use crate::vm_m5::memory::SimpleMemory; -use crate::vm_m5::oracles::decommitter::DecommitterOracle; -use crate::vm_m5::oracles::precompile::PrecompilesProcessorWithHistory; -use crate::vm_m5::oracles::storage::StorageOracle; -use crate::vm_m5::oracles::tracer::{ - BootloaderTracer, ExecutionEndTracer, OneTxTracer, PendingRefundTracer, PubdataSpentTracer, - TransactionResultTracer, ValidationError, ValidationTracer, ValidationTracerParams, +use zksync_types::{ + l2_to_l1_log::{L2ToL1Log, UserL2ToL1Log}, + tx::tx_execution_info::TxExecutionStatus, + vm_trace::VmExecutionTrace, + L1BatchNumber, StorageLogQuery, VmEvent, U256, }; -use crate::vm_m5::oracles::OracleWithHistory; -use crate::vm_m5::storage::Storage; -use crate::vm_m5::utils::{ - collect_log_queries_after_timestamp, collect_storage_log_queries_after_timestamp, - dump_memory_page_using_primitive_value, precompile_calls_count_after_timestamp, -}; -use crate::vm_m5::vm_with_bootloader::{ - BootloaderJobType, DerivedBlockContext, TxExecutionMode, BOOTLOADER_HEAP_PAGE, - OPERATOR_REFUNDS_OFFSET, + +use crate::{ + glue::GlueInto, + interface::types::outputs::VmExecutionLogs, + vm_m5::{ + bootloader_state::BootloaderState, + 
errors::{TxRevertReason, VmRevertReason, VmRevertReasonParsingResult}, + event_sink::InMemoryEventSink, + events::merge_events, + memory::SimpleMemory, + oracles::{ + decommitter::DecommitterOracle, + precompile::PrecompilesProcessorWithHistory, + storage::StorageOracle, + tracer::{ + BootloaderTracer, ExecutionEndTracer, OneTxTracer, PendingRefundTracer, + PubdataSpentTracer, TransactionResultTracer, ValidationError, ValidationTracer, + ValidationTracerParams, + }, + OracleWithHistory, + }, + storage::Storage, + utils::{ + collect_log_queries_after_timestamp, collect_storage_log_queries_after_timestamp, + dump_memory_page_using_primitive_value, precompile_calls_count_after_timestamp, + }, + vm_with_bootloader::{ + BootloaderJobType, DerivedBlockContext, TxExecutionMode, BOOTLOADER_HEAP_PAGE, + OPERATOR_REFUNDS_OFFSET, + }, + }, }; pub type ZkSyncVmState = VmState< @@ -103,12 +111,12 @@ pub struct VmExecutionResult { pub l2_to_l1_logs: Vec, pub return_data: Vec, - /// Value denoting the amount of gas spent withing VM invocation. + /// Value denoting the amount of gas spent within VM invocation. /// Note that return value represents the difference between the amount of gas /// available to VM before and after execution. /// /// It means, that depending on the context, `gas_used` may represent different things. - /// If VM is continously invoked and interrupted after each tx, this field may represent the + /// If VM is continuously invoked and interrupted after each tx, this field may represent the /// amount of gas spent by a single transaction. /// /// To understand, which value does `gas_used` represent, see the documentation for the method @@ -167,6 +175,7 @@ pub enum VmExecutionStopReason { TracerRequestedStop, } +use super::vm_with_bootloader::MAX_TXS_IN_BLOCK; use crate::vm_m5::utils::VmExecutionResult as NewVmExecutionResult; fn vm_may_have_ended_inner( @@ -196,7 +205,7 @@ fn vm_may_have_ended_inner( } (false, _) => None, (true, l) if l == outer_eh_location => { - // check r1,r2,r3 + // check `r1,r2,r3` if vm.local_state.flags.overflow_or_less_than_flag { Some(NewVmExecutionResult::Panic) } else { @@ -224,7 +233,7 @@ fn vm_may_have_ended(vm: &VmInstance, gas_before: u32) -> Option< NewVmExecutionResult::Ok(data) => { Some(VmExecutionResult { // The correct `events` value for this field should be set separately - // later on based on the information inside the event_sink oracle. + // later on based on the information inside the `event_sink` oracle. events: vec![], storage_log_queries: vm.get_final_log_queries(), used_contract_hashes: vm.get_used_contracts(), @@ -371,8 +380,8 @@ impl VmInstance { pub fn save_current_vm_as_snapshot(&mut self) { self.snapshots.push(VmSnapshot { // Vm local state contains O(1) various parameters (registers/etc). - // The only "expensive" copying here is copying of the callstack. - // It will take O(callstack_depth) to copy it. + // The only "expensive" copying here is copying of the call stack. + // It will take `O(callstack_depth)` to copy it. // So it is generally recommended to get snapshots of the bootloader frame, // where the depth is 1. local_state: self.state.local_state.clone(), @@ -617,8 +626,8 @@ impl VmInstance { } // Err when transaction is rejected. - // Ok(status: TxExecutionStatus::Success) when the transaction succeeded - // Ok(status: TxExecutionStatus::Failure) when the transaction failed. 
+ // `Ok(status: TxExecutionStatus::Success)` when the transaction succeeded + // `Ok(status: TxExecutionStatus::Failure)` when the transaction failed. // Note that failed transactions are considered properly processed and are included in blocks pub fn execute_next_tx(&mut self) -> Result { let tx_index = self.bootloader_state.next_unexecuted_tx() as u32; @@ -663,7 +672,7 @@ impl VmInstance { revert_reason: None, // getting contracts used during this transaction // at least for now the number returned here is always <= to the number - // of the code hashes actually used by the transaction, since it might've + // of the code hashes actually used by the transaction, since it might have // reused bytecode hashes from some of the previous ones. contracts_used: self .state diff --git a/core/lib/multivm/src/versions/vm_m5/vm_with_bootloader.rs b/core/lib/multivm/src/versions/vm_m5/vm_with_bootloader.rs index 0116660594e..7b8a361c5a5 100644 --- a/core/lib/multivm/src/versions/vm_m5/vm_with_bootloader.rs +++ b/core/lib/multivm/src/versions/vm_m5/vm_with_bootloader.rs @@ -11,31 +11,32 @@ use zk_evm_1_3_1::{ }, }; use zksync_contracts::BaseSystemContracts; -use zksync_system_constants::MAX_TXS_IN_BLOCK; - +use zksync_system_constants::MAX_L2_TX_GAS_LIMIT; use zksync_types::{ - zkevm_test_harness::INITIAL_MONOTONIC_CYCLE_COUNTER, Address, Transaction, BOOTLOADER_ADDRESS, - L1_GAS_PER_PUBDATA_BYTE, MAX_GAS_PER_PUBDATA_BYTE, MAX_NEW_FACTORY_DEPS, U256, + fee_model::L1PeggedBatchFeeModelInput, zkevm_test_harness::INITIAL_MONOTONIC_CYCLE_COUNTER, + Address, Transaction, BOOTLOADER_ADDRESS, L1_GAS_PER_PUBDATA_BYTE, MAX_NEW_FACTORY_DEPS, U256, }; use zksync_utils::{ address_to_u256, bytecode::hash_bytecode, bytes_to_be_words, h256_to_u256, misc::ceil_div, }; -use crate::vm_m5::storage::Storage; -use crate::vm_m5::{ - bootloader_state::BootloaderState, - oracles::OracleWithHistory, - transaction_data::TransactionData, - utils::{ - code_page_candidate_from_base, heap_page_from_base, BLOCK_GAS_LIMIT, INITIAL_BASE_PAGE, +use crate::{ + vm_latest::L1BatchEnv, + vm_m5::{ + bootloader_state::BootloaderState, + oracles::OracleWithHistory, + storage::Storage, + transaction_data::TransactionData, + utils::{ + code_page_candidate_from_base, heap_page_from_base, BLOCK_GAS_LIMIT, INITIAL_BASE_PAGE, + }, + vm_instance::{MultiVMSubversion, VmInstance, ZkSyncVmState}, + OracleTools, }, - vm_instance::VmInstance, - vm_instance::{MultiVMSubversion, ZkSyncVmState}, - OracleTools, }; -// TODO (SMA-1703): move these to config and make them programmatically generatable. -// fill these values in the similar fasion as other overhead-related constants +// TODO (SMA-1703): move these to config and make them programmatically generable. 
+// fill these values in a similar fashion as other overhead-related constants
 pub const BLOCK_OVERHEAD_GAS: u32 = 1200000;
 pub const BLOCK_OVERHEAD_L1_GAS: u32 = 1000000;
 pub const BLOCK_OVERHEAD_PUBDATA: u32 = BLOCK_OVERHEAD_L1_GAS / L1_GAS_PER_PUBDATA_BYTE;
@@ -69,37 +70,76 @@ pub(crate) fn eth_price_per_pubdata_byte(l1_gas_price: u64) -> u64 {
     l1_gas_price * (L1_GAS_PER_PUBDATA_BYTE as u64)
 }
 
-pub fn base_fee_to_gas_per_pubdata(l1_gas_price: u64, base_fee: u64) -> u64 {
+pub(crate) fn base_fee_to_gas_per_pubdata(l1_gas_price: u64, base_fee: u64) -> u64 {
     let eth_price_per_pubdata_byte = eth_price_per_pubdata_byte(l1_gas_price);
     ceil_div(eth_price_per_pubdata_byte, base_fee)
 }
 
-pub fn derive_base_fee_and_gas_per_pubdata(l1_gas_price: u64, fair_gas_price: u64) -> (u64, u64) {
+pub(crate) fn derive_base_fee_and_gas_per_pubdata(
+    fee_input: L1PeggedBatchFeeModelInput,
+) -> (u64, u64) {
+    let L1PeggedBatchFeeModelInput {
+        l1_gas_price,
+        fair_l2_gas_price,
+    } = fee_input;
+
     let eth_price_per_pubdata_byte = eth_price_per_pubdata_byte(l1_gas_price);
 
-    // The baseFee is set in such a way that it is always possible to a transaciton to
+    // The `baseFee` is set in such a way that it is always possible for a transaction to
     // publish enough public data while compensating us for it.
     let base_fee = std::cmp::max(
-        fair_gas_price,
+        fair_l2_gas_price,
         ceil_div(eth_price_per_pubdata_byte, MAX_GAS_PER_PUBDATA_BYTE),
     );
 
     (
         base_fee,
-        base_fee_to_gas_per_pubdata(l1_gas_price, base_fee),
+        base_fee_to_gas_per_pubdata(fee_input.l1_gas_price, base_fee),
     )
 }
 
+pub(crate) fn get_batch_base_fee(l1_batch_env: &L1BatchEnv) -> u64 {
+    if let Some(base_fee) = l1_batch_env.enforced_base_fee {
+        return base_fee;
+    }
+    let (base_fee, _) =
+        derive_base_fee_and_gas_per_pubdata(l1_batch_env.fee_input.into_l1_pegged());
+    base_fee
+}
+
 impl From<BlockContext> for DerivedBlockContext {
     fn from(context: BlockContext) -> Self {
-        let base_fee =
-            derive_base_fee_and_gas_per_pubdata(context.l1_gas_price, context.fair_l2_gas_price).0;
+        let base_fee = derive_base_fee_and_gas_per_pubdata(L1PeggedBatchFeeModelInput {
+            l1_gas_price: context.l1_gas_price,
+            fair_l2_gas_price: context.fair_l2_gas_price,
+        })
+        .0;
 
         DerivedBlockContext { context, base_fee }
     }
 }
 
+/// The size of the bootloader memory in bytes which is used by the protocol.
+/// While the maximal possible size is a lot higher, we restrict ourselves to a certain limit to reduce
+/// the requirements on RAM.
+pub(crate) const USED_BOOTLOADER_MEMORY_BYTES: usize = 1 << 24;
+pub(crate) const USED_BOOTLOADER_MEMORY_WORDS: usize = USED_BOOTLOADER_MEMORY_BYTES / 32;
+
+// This is the number of pubdata bytes that it should always be possible to publish
+// from a single transaction. Note that these pubdata bytes include only bytes that are
+// to be published inside the body of the transaction (i.e. excluding factory deps).
+pub(crate) const GUARANTEED_PUBDATA_PER_L1_BATCH: u64 = 4000;
+
+// The users should always be able to provide `MAX_GAS_PER_PUBDATA_BYTE` gas per pubdata in their
+// transactions so that they are able to send at least `GUARANTEED_PUBDATA_PER_L1_BATCH` bytes per
+// transaction.
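+// As a worked example: assuming `MAX_L2_TX_GAS_LIMIT` is 80,000,000 (its value in
+// `zksync_system_constants` at the time of this change; an assumption, since the
+// constant itself is not shown in this diff), the limit below works out to
+// 80_000_000 / 4_000 = 20_000 gas per pubdata byte.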
+pub(crate) const MAX_GAS_PER_PUBDATA_BYTE: u64 = + MAX_L2_TX_GAS_LIMIT / GUARANTEED_PUBDATA_PER_L1_BATCH; + +// The maximal number of transactions in a single batch +pub(crate) const MAX_TXS_IN_BLOCK: usize = 1024; + // The first 32 slots are reserved for debugging purposes pub const DEBUG_SLOTS_OFFSET: usize = 8; pub const DEBUG_FIRST_SLOTS: usize = 32; @@ -134,7 +174,7 @@ pub const TX_OVERHEAD_SLOTS: usize = MAX_TXS_IN_BLOCK; pub const BOOTLOADER_TX_DESCRIPTION_OFFSET: usize = TX_OVERHEAD_OFFSET + TX_OVERHEAD_SLOTS; // The size of the bootloader memory dedicated to the encodings of transactions -pub const BOOTLOADER_TX_ENCODING_SPACE: u32 = +pub(crate) const BOOTLOADER_TX_ENCODING_SPACE: u32 = (MAX_HEAP_PAGE_SIZE_IN_WORDS - TX_DESCRIPTION_OFFSET - MAX_TXS_IN_BLOCK) as u32; // Size of the bootloader tx description in words @@ -218,12 +258,12 @@ pub fn init_vm_with_gas_limit( } #[derive(Debug, Clone, Copy)] -// The block.number/block.timestamp data are stored in the CONTEXT_SYSTEM_CONTRACT. +// The `block.number` / `block.timestamp` data are stored in the `CONTEXT_SYSTEM_CONTRACT`. // The bootloader can support execution in two modes: -// - "NewBlock" when the new block is created. It is enforced that the block.number is incremented by 1 +// - `NewBlock` when the new block is created. It is enforced that the block.number is incremented by 1 // and the timestamp is non-decreasing. Also, the L2->L1 message used to verify the correctness of the previous root hash is sent. // This is the mode that should be used in the state keeper. -// - "OverrideCurrent" when we need to provide custom block.number and block.timestamp. ONLY to be used in testing/ethCalls. +// - `OverrideCurrent` when we need to provide custom `block.number` and `block.timestamp`. ONLY to be used in testing / `ethCalls`. pub enum BlockContextMode { NewBlock(DerivedBlockContext, U256), OverrideCurrent(DerivedBlockContext), diff --git a/core/lib/multivm/src/versions/vm_m6/bootloader_state.rs b/core/lib/multivm/src/versions/vm_m6/bootloader_state.rs index 5dce7e1c6a9..1328d0dd701 100644 --- a/core/lib/multivm/src/versions/vm_m6/bootloader_state.rs +++ b/core/lib/multivm/src/versions/vm_m6/bootloader_state.rs @@ -5,7 +5,7 @@ use crate::vm_m6::vm_with_bootloader::TX_DESCRIPTION_OFFSET; /// Required to process transactions one by one (since we intercept the VM execution to execute /// transactions and add new ones to the memory on the fly). /// Think about it like a two-pointer scheme: one pointer (`free_tx_index`) tracks the end of the -/// initialized memory; while another (`tx_to_execute`) tracks our progess in this initialized memory. +/// initialized memory; while another (`tx_to_execute`) tracks our progress in this initialized memory. /// This is required since it's possible to push several transactions to the bootloader memory and then /// execute it one by one. /// diff --git a/core/lib/multivm/src/versions/vm_m6/errors/tx_revert_reason.rs b/core/lib/multivm/src/versions/vm_m6/errors/tx_revert_reason.rs index 4775d8339f7..3ddaa068461 100644 --- a/core/lib/multivm/src/versions/vm_m6/errors/tx_revert_reason.rs +++ b/core/lib/multivm/src/versions/vm_m6/errors/tx_revert_reason.rs @@ -7,11 +7,11 @@ use super::{BootloaderErrorCode, VmRevertReason}; // Reasons why the transaction executed inside the bootloader could fail. 
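// For instance, an invalid nonce surfaces as `ValidationFailed` (the commented-out
// VM tests earlier in this patch expect `VmRevertReason::General { msg: "Incorrect nonce" }`),
// while a revert raised by the called contract itself surfaces as `TxReverted`.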
#[derive(Debug, Clone, PartialEq)] pub enum TxRevertReason { - // Can only be returned in EthCall execution mode (=ExecuteOnly) + // Can only be returned in EthCall execution mode `(=ExecuteOnly)` EthCall(VmRevertReason), // Returned when the execution of an L2 transaction has failed TxReverted(VmRevertReason), - // Can only be returned in VerifyAndExecute + // Can only be returned in `VerifyAndExecute` ValidationFailed(VmRevertReason), PaymasterValidationFailed(VmRevertReason), PrePaymasterPreparationFailed(VmRevertReason), @@ -20,7 +20,7 @@ pub enum TxRevertReason { FailedToChargeFee(VmRevertReason), // Emitted when trying to call a transaction from an account that has not // been deployed as an account (i.e. the `from` is just a contract). - // Can only be returned in VerifyAndExecute + // Can only be returned in `VerifyAndExecute` FromIsNotAnAccount, // Currently cannot be returned. Should be removed when refactoring errors. InnerTxError, @@ -101,7 +101,7 @@ impl TxRevertReason { BootloaderErrorCode::UnacceptablePubdataPrice => { Self::UnexpectedVMBehavior("UnacceptablePubdataPrice".to_owned()) } - // This is different from AccountTxValidationFailed error in a way that it means that + // This is different from `AccountTxValidationFailed` error in a way that it means that // the error was not produced by the account itself, but for some other unknown reason (most likely not enough gas) BootloaderErrorCode::TxValidationError => Self::ValidationFailed(revert_reason), // Note, that `InnerTxError` is derived only after the actual tx execution, so diff --git a/core/lib/multivm/src/versions/vm_m6/errors/vm_revert_reason.rs b/core/lib/multivm/src/versions/vm_m6/errors/vm_revert_reason.rs index d954f077953..0e5bf9fd834 100644 --- a/core/lib/multivm/src/versions/vm_m6/errors/vm_revert_reason.rs +++ b/core/lib/multivm/src/versions/vm_m6/errors/vm_revert_reason.rs @@ -1,5 +1,7 @@ -use std::convert::TryFrom; -use std::fmt::{Debug, Display}; +use std::{ + convert::TryFrom, + fmt::{Debug, Display}, +}; use zksync_types::U256; @@ -15,7 +17,7 @@ pub enum VmRevertReasonParsingError { IncorrectStringLength(Vec), } -/// Rich Revert Reasons https://github.com/0xProject/ZEIPs/issues/32 +/// Rich Revert Reasons `https://github.com/0xProject/ZEIPs/issues/32` #[derive(Debug, Clone, PartialEq)] pub enum VmRevertReason { General { @@ -71,7 +73,7 @@ impl VmRevertReason { pub fn to_user_friendly_string(&self) -> String { match self { - // In case of `Unknown` reason we suppress it to prevent verbose Error function_selector = 0x{} + // In case of `Unknown` reason we suppress it to prevent verbose `Error function_selector = 0x{}` // message shown to user. VmRevertReason::Unknown { .. 
} => "".to_owned(), _ => self.to_string(), diff --git a/core/lib/multivm/src/versions/vm_m6/event_sink.rs b/core/lib/multivm/src/versions/vm_m6/event_sink.rs index 41fd22e9eed..2fb5d934e96 100644 --- a/core/lib/multivm/src/versions/vm_m6/event_sink.rs +++ b/core/lib/multivm/src/versions/vm_m6/event_sink.rs @@ -1,9 +1,5 @@ -use crate::vm_m6::{ - history_recorder::{AppDataFrameManagerWithHistory, HistoryEnabled, HistoryMode}, - oracles::OracleWithHistory, - utils::collect_log_queries_after_timestamp, -}; use std::collections::HashMap; + use zk_evm_1_3_1::{ abstractions::EventSink, aux_structures::{LogQuery, Timestamp}, @@ -13,6 +9,12 @@ use zk_evm_1_3_1::{ }, }; +use crate::vm_m6::{ + history_recorder::{AppDataFrameManagerWithHistory, HistoryEnabled, HistoryMode}, + oracles::OracleWithHistory, + utils::collect_log_queries_after_timestamp, +}; + #[derive(Debug, Clone, PartialEq, Default)] pub struct InMemoryEventSink { pub frames_stack: AppDataFrameManagerWithHistory, diff --git a/core/lib/multivm/src/versions/vm_m6/history_recorder.rs b/core/lib/multivm/src/versions/vm_m6/history_recorder.rs index a85279e56c1..63dc9be4933 100644 --- a/core/lib/multivm/src/versions/vm_m6/history_recorder.rs +++ b/core/lib/multivm/src/versions/vm_m6/history_recorder.rs @@ -4,28 +4,27 @@ use std::{ hash::{BuildHasherDefault, Hash, Hasher}, }; -use crate::vm_m6::storage::{Storage, StoragePtr}; - use zk_evm_1_3_1::{ aux_structures::Timestamp, vm_state::PrimitiveValue, zkevm_opcode_defs::{self}, }; - use zksync_types::{StorageKey, U256}; use zksync_utils::{h256_to_u256, u256_to_h256}; +use crate::vm_m6::storage::{Storage, StoragePtr}; + pub type MemoryWithHistory = HistoryRecorder; pub type IntFrameManagerWithHistory = HistoryRecorder, H>; // Within the same cycle, timestamps in range timestamp..timestamp+TIME_DELTA_PER_CYCLE-1 -// can be used. This can sometimes vioalate monotonicity of the timestamp within the +// can be used. This can sometimes violate monotonicity of the timestamp within the // same cycle, so it should be normalized. 
#[inline] fn normalize_timestamp(timestamp: Timestamp) -> Timestamp { let timestamp = timestamp.0; - // Making sure it is divisible by TIME_DELTA_PER_CYCLE + // Making sure it is divisible by `TIME_DELTA_PER_CYCLE` Timestamp(timestamp - timestamp % zkevm_opcode_defs::TIME_DELTA_PER_CYCLE) } diff --git a/core/lib/multivm/src/versions/vm_m6/memory.rs b/core/lib/multivm/src/versions/vm_m6/memory.rs index 52a3d7f606f..6ad92c3a1e0 100644 --- a/core/lib/multivm/src/versions/vm_m6/memory.rs +++ b/core/lib/multivm/src/versions/vm_m6/memory.rs @@ -1,15 +1,19 @@ -use zk_evm_1_3_1::abstractions::{Memory, MemoryType, MEMORY_CELLS_OTHER_PAGES}; -use zk_evm_1_3_1::aux_structures::{MemoryPage, MemoryQuery, Timestamp}; -use zk_evm_1_3_1::vm_state::PrimitiveValue; -use zk_evm_1_3_1::zkevm_opcode_defs::FatPointer; +use zk_evm_1_3_1::{ + abstractions::{Memory, MemoryType, MEMORY_CELLS_OTHER_PAGES}, + aux_structures::{MemoryPage, MemoryQuery, Timestamp}, + vm_state::PrimitiveValue, + zkevm_opcode_defs::FatPointer, +}; use zksync_types::U256; -use crate::vm_m6::history_recorder::{ - FramedStack, HistoryEnabled, HistoryMode, IntFrameManagerWithHistory, MemoryWithHistory, - MemoryWrapper, WithHistory, +use crate::vm_m6::{ + history_recorder::{ + FramedStack, HistoryEnabled, HistoryMode, IntFrameManagerWithHistory, MemoryWithHistory, + MemoryWrapper, WithHistory, + }, + oracles::OracleWithHistory, + utils::{aux_heap_page_from_base, heap_page_from_base, stack_page_from_base}, }; -use crate::vm_m6::oracles::OracleWithHistory; -use crate::vm_m6::utils::{aux_heap_page_from_base, heap_page_from_base, stack_page_from_base}; #[derive(Debug, Clone, PartialEq, Default)] pub struct SimpleMemory { @@ -27,7 +31,7 @@ impl OracleWithHistory for SimpleMemory { impl SimpleMemory { pub fn populate(&mut self, elements: Vec<(u32, Vec)>, timestamp: Timestamp) { for (page, values) in elements.into_iter() { - // Resizing the pages array to fit the page. + // Re-sizing the pages array to fit the page. let len = values.len(); assert!(len <= MEMORY_CELLS_OTHER_PAGES); @@ -277,7 +281,7 @@ impl Memory for SimpleMemory { let returndata_page = returndata_fat_pointer.memory_page; for &page in current_observable_pages { - // If the page's number is greater than or equal to the base_page, + // If the page's number is greater than or equal to the `base_page`, // it means that it was created by the internal calls of this contract. // We need to add this check as the calldata pointer is also part of the // observable pages. 
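The `normalize_timestamp` helper above simply rounds a timestamp down to a multiple of `TIME_DELTA_PER_CYCLE`, so that all timestamps issued within one cycle collapse to the cycle's start. A tiny standalone illustration, using an assumed placeholder of 4 for the cycle delta (the real constant lives in `zkevm_opcode_defs`):

```rust
// Placeholder value; the real constant comes from zkevm_opcode_defs.
const TIME_DELTA_PER_CYCLE: u32 = 4;

/// Round a timestamp down to the start of its cycle window.
fn normalize_timestamp(timestamp: u32) -> u32 {
    timestamp - timestamp % TIME_DELTA_PER_CYCLE
}

fn main() {
    assert_eq!(normalize_timestamp(0), 0);
    assert_eq!(normalize_timestamp(5), 4); // 5 and 7 fall in the same cycle...
    assert_eq!(normalize_timestamp(7), 4); // ...and normalize to the same value
    assert_eq!(normalize_timestamp(8), 8); // the next cycle starts at 8
}
```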
@@ -294,7 +298,7 @@ impl Memory for SimpleMemory { } } -// It is expected that there is some intersection between [word_number*32..word_number*32+31] and [start, end] +// It is expected that there is some intersection between `[word_number*32..word_number*32+31]` and `[start, end]` fn extract_needed_bytes_from_word( word_value: Vec, word_number: usize, @@ -302,7 +306,7 @@ fn extract_needed_bytes_from_word( end: usize, ) -> Vec { let word_start = word_number * 32; - let word_end = word_start + 31; // Note, that at word_start + 32 a new word already starts + let word_end = word_start + 31; // Note, that at `word_start + 32` a new word already starts let intersection_left = std::cmp::max(word_start, start); let intersection_right = std::cmp::min(word_end, end); diff --git a/core/lib/multivm/src/versions/vm_m6/oracle_tools.rs b/core/lib/multivm/src/versions/vm_m6/oracle_tools.rs index 6650752da27..7ae5e874806 100644 --- a/core/lib/multivm/src/versions/vm_m6/oracle_tools.rs +++ b/core/lib/multivm/src/versions/vm_m6/oracle_tools.rs @@ -1,19 +1,21 @@ -use crate::vm_m6::memory::SimpleMemory; - use std::fmt::Debug; -use crate::vm_m6::event_sink::InMemoryEventSink; -use crate::vm_m6::history_recorder::HistoryMode; -use crate::vm_m6::oracles::{ - decommitter::DecommitterOracle, precompile::PrecompilesProcessorWithHistory, - storage::StorageOracle, -}; -use crate::vm_m6::storage::{Storage, StoragePtr}; use zk_evm_1_3_1::witness_trace::DummyTracer; +use crate::vm_m6::{ + event_sink::InMemoryEventSink, + history_recorder::HistoryMode, + memory::SimpleMemory, + oracles::{ + decommitter::DecommitterOracle, precompile::PrecompilesProcessorWithHistory, + storage::StorageOracle, + }, + storage::{Storage, StoragePtr}, +}; + /// zkEVM requires a bunch of objects implementing given traits to work. /// For example: Storage, Memory, PrecompilerProcessor etc -/// (you can find all these traites in zk_evm crate -> src/abstractions/mod.rs) +/// (you can find all these traits in zk_evm crate -> src/abstractions/mod.rs) /// For each of these traits, we have a local implementation (for example StorageOracle) /// that also support additional features (like rollbacks & history). /// The OracleTools struct, holds all these things together in one place. 
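The word/byte-range clipping used by `extract_needed_bytes_from_word` in the hunk above can be checked in isolation: word `n` covers bytes `[n*32, n*32 + 31]`, and the requested `[start, end]` range is intersected with that window. A small sketch with a hypothetical helper name:

```rust
/// Intersect the byte range of 32-byte word `word_number` with `[start, end]`
/// (both bounds inclusive). Returns `None` when there is no overlap.
fn needed_byte_range(word_number: usize, start: usize, end: usize) -> Option<(usize, usize)> {
    let word_start = word_number * 32;
    let word_end = word_start + 31; // a new word already starts at word_start + 32
    let left = start.max(word_start);
    let right = end.min(word_end);
    (left <= right).then_some((left, right))
}

fn main() {
    // Word 1 covers bytes 32..=63, so a request for 40..=100 clips to 40..=63.
    assert_eq!(needed_byte_range(1, 40, 100), Some((40, 63)));
    // Word 3 covers bytes 96..=127 and does not intersect 40..=60 at all.
    assert_eq!(needed_byte_range(3, 40, 60), None);
}
```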
diff --git a/core/lib/multivm/src/versions/vm_m6/oracles/decommitter.rs b/core/lib/multivm/src/versions/vm_m6/oracles/decommitter.rs
index 3917063422a..fe59580e2ce 100644
--- a/core/lib/multivm/src/versions/vm_m6/oracles/decommitter.rs
+++ b/core/lib/multivm/src/versions/vm_m6/oracles/decommitter.rs
@@ -1,21 +1,21 @@ use std::collections::HashMap;
-use crate::vm_m6::history_recorder::{HistoryEnabled, HistoryMode, HistoryRecorder, WithHistory};
-use crate::vm_m6::storage::{Storage, StoragePtr};
-
-use zk_evm_1_3_1::abstractions::MemoryType;
-use zk_evm_1_3_1::aux_structures::Timestamp;
 use zk_evm_1_3_1::{
- abstractions::{DecommittmentProcessor, Memory},
- aux_structures::{DecommittmentQuery, MemoryIndex, MemoryLocation, MemoryPage, MemoryQuery},
+ abstractions::{DecommittmentProcessor, Memory, MemoryType},
+ aux_structures::{ + DecommittmentQuery, MemoryIndex, MemoryLocation, MemoryPage, MemoryQuery, Timestamp, + },
 };
 use zksync_types::U256;
-use zksync_utils::bytecode::bytecode_len_in_words;
-use zksync_utils::{bytes_to_be_words, u256_to_h256};
+use zksync_utils::{bytecode::bytecode_len_in_words, bytes_to_be_words, u256_to_h256};
 use super::OracleWithHistory;
+use crate::vm_m6::{ + history_recorder::{HistoryEnabled, HistoryMode, HistoryRecorder, WithHistory}, + storage::{Storage, StoragePtr}, +};
-/// The main job of the DecommiterOracle is to implement the DecommitmentProcessor trait - that is
 /// used by the VM to 'load' bytecodes into memory. #[derive(Debug)] pub struct DecommitterOracle { @@ -66,7 +66,7 @@ impl DecommitterOracle { } }
- /// Adds additional bytecodes. They will take precendent over the bytecodes from storage.
+ /// Adds additional bytecodes. They will take precedence over the bytecodes from storage.
 pub fn populate(&mut self, bytecodes: Vec<(U256, Vec)>, timestamp: Timestamp) { for (hash, bytecode) in bytecodes { self.known_bytecodes.insert(hash, bytecode, timestamp); } } @@ -170,7 +170,7 @@ impl DecommittmentProcessor ) -> (DecommittmentQuery, Option>) { self.decommitment_requests.push((), partial_query.timestamp); // First - check if we didn't fetch this bytecode in the past.
- // If we did - we can just return the page that we used before (as the memory is read only).
+ // If we did - we can just return the page that we used before (as the memory is readonly).
 if let Some(memory_page) = self .decommitted_code_hashes .inner()
diff --git a/core/lib/multivm/src/versions/vm_m6/oracles/mod.rs b/core/lib/multivm/src/versions/vm_m6/oracles/mod.rs
index d6b00c8500d..450ed4cf1e0 100644
--- a/core/lib/multivm/src/versions/vm_m6/oracles/mod.rs
+++ b/core/lib/multivm/src/versions/vm_m6/oracles/mod.rs
@@ -1,11 +1,10 @@ use zk_evm_1_3_1::aux_structures::Timestamp;
-// We will discard RAM as soon as the execution of a tx ends, so
-// it is ok for now to use SimpleMemory
-pub use zk_evm_1_3_1::reference_impls::memory::SimpleMemory as RamOracle;
 // All the changes to the events in the DB will be applied after the tx is executed,
-// so fow now it is fine.
pub use zk_evm_1_3_1::reference_impls::event_sink::InMemoryEventSink as EventSinkOracle; - +// We will discard RAM as soon as the execution of a tx ends, so +// it is ok for now to use `SimpleMemory` +pub use zk_evm_1_3_1::reference_impls::memory::SimpleMemory as RamOracle; pub use zk_evm_1_3_1::testing::simple_tracer::NoopTracer; pub mod decommitter; diff --git a/core/lib/multivm/src/versions/vm_m6/oracles/precompile.rs b/core/lib/multivm/src/versions/vm_m6/oracles/precompile.rs index aff382614af..2e236b70267 100644 --- a/core/lib/multivm/src/versions/vm_m6/oracles/precompile.rs +++ b/core/lib/multivm/src/versions/vm_m6/oracles/precompile.rs @@ -1,14 +1,11 @@ use zk_evm_1_3_1::{ - abstractions::Memory, - abstractions::PrecompileCyclesWitness, - abstractions::PrecompilesProcessor, + abstractions::{Memory, PrecompileCyclesWitness, PrecompilesProcessor}, aux_structures::{LogQuery, MemoryQuery, Timestamp}, precompiles::DefaultPrecompilesProcessor, }; -use crate::vm_m6::history_recorder::{HistoryEnabled, HistoryMode, HistoryRecorder}; - use super::OracleWithHistory; +use crate::vm_m6::history_recorder::{HistoryEnabled, HistoryMode, HistoryRecorder}; /// Wrap of DefaultPrecompilesProcessor that store queue /// of timestamp when precompiles are called to be executed. diff --git a/core/lib/multivm/src/versions/vm_m6/oracles/storage.rs b/core/lib/multivm/src/versions/vm_m6/oracles/storage.rs index 45c3bdf50f8..4eafbacbebd 100644 --- a/core/lib/multivm/src/versions/vm_m6/oracles/storage.rs +++ b/core/lib/multivm/src/versions/vm_m6/oracles/storage.rs @@ -1,27 +1,27 @@ use std::collections::HashMap; -use crate::vm_m6::storage::{Storage, StoragePtr}; - -use crate::glue::GlueInto; -use crate::vm_m6::history_recorder::{ - AppDataFrameManagerWithHistory, HashMapHistoryEvent, HistoryEnabled, HistoryMode, - HistoryRecorder, StorageWrapper, WithHistory, -}; - -use zk_evm_1_3_1::abstractions::RefundedAmounts; -use zk_evm_1_3_1::zkevm_opcode_defs::system_params::INITIAL_STORAGE_WRITE_PUBDATA_BYTES; use zk_evm_1_3_1::{ - abstractions::{RefundType, Storage as VmStorageOracle}, + abstractions::{RefundType, RefundedAmounts, Storage as VmStorageOracle}, aux_structures::{LogQuery, Timestamp}, + zkevm_opcode_defs::system_params::INITIAL_STORAGE_WRITE_PUBDATA_BYTES, }; -use zksync_types::utils::storage_key_for_eth_balance; use zksync_types::{ - AccountTreeId, Address, StorageKey, StorageLogQuery, StorageLogQueryType, BOOTLOADER_ADDRESS, - U256, + utils::storage_key_for_eth_balance, AccountTreeId, Address, StorageKey, StorageLogQuery, + StorageLogQueryType, BOOTLOADER_ADDRESS, U256, }; use zksync_utils::u256_to_h256; use super::OracleWithHistory; +use crate::{ + glue::GlueInto, + vm_m6::{ + history_recorder::{ + AppDataFrameManagerWithHistory, HashMapHistoryEvent, HistoryEnabled, HistoryMode, + HistoryRecorder, StorageWrapper, WithHistory, + }, + storage::{Storage, StoragePtr}, + }, +}; // While the storage does not support different shards, it was decided to write the // code of the StorageOracle with the shard parameters in mind. 
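The storage-oracle hunk that follows also documents the pubdata cost of publishing one updated slot: the initial write publishes the full 32-byte key plus the 32-byte value, while every repeated write substitutes an 8-byte slot id for the key. A back-of-the-envelope sketch of that pricing (the byte counts are read off the comments, not taken from the `zk_evm` constants):

```rust
const KEY_BYTES: u32 = 32; // full key, published on the initial write
const SLOT_ID_BYTES: u32 = 8; // compact id assigned after the initial write
const VALUE_BYTES: u32 = 32; // the new value is always 32 bytes long

/// Pubdata bytes needed to publish one updated slot, per the scheme described
/// in the storage oracle's comments below.
fn pubdata_bytes_for_write(is_initial_write: bool) -> u32 {
    if is_initial_write {
        KEY_BYTES + VALUE_BYTES
    } else {
        SLOT_ID_BYTES + VALUE_BYTES
    }
}

fn main() {
    assert_eq!(pubdata_bytes_for_write(true), 64); // first write to a slot
    assert_eq!(pubdata_bytes_for_write(false), 40); // any repeated write
}
```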
@@ -190,6 +190,7 @@ impl VmStorageOracle for StorageOracle { _monotonic_cycle_counter: u32, query: LogQuery, ) -> LogQuery { + // ``` // tracing::trace!( // "execute partial query cyc {:?} addr {:?} key {:?}, rw {:?}, wr {:?}, tx {:?}", // _monotonic_cycle_counter, @@ -199,6 +200,7 @@ impl VmStorageOracle for StorageOracle { // query.written_value, // query.tx_number_in_block // ); + // ``` assert!(!query.rollback); if query.rw_flag { // The number of bytes that have been compensated by the user to perform this write @@ -278,7 +280,7 @@ impl VmStorageOracle for StorageOracle { ); // Additional validation that the current value was correct - // Unwrap is safe because the return value from write_inner is the previous value in this leaf. + // Unwrap is safe because the return value from `write_inner` is the previous value in this leaf. // It is impossible to set leaf value to `None` assert_eq!(current_value, written_value); } @@ -292,8 +294,8 @@ impl VmStorageOracle for StorageOracle { /// Returns the number of bytes needed to publish a slot. // Since we need to publish the state diffs onchain, for each of the updated storage slot -// we basically need to publish the following pair: (). -// While new_value is always 32 bytes long, for key we use the following optimization: +// we basically need to publish the following pair: `()`. +// While `new_value` is always 32 bytes long, for key we use the following optimization: // - The first time we publish it, we use 32 bytes. // Then, we remember a 8-byte id for this slot and assign it to it. We call this initial write. // - The second time we publish it, we will use this 8-byte instead of the 32 bytes of the entire key. diff --git a/core/lib/multivm/src/versions/vm_m6/oracles/tracer/bootloader.rs b/core/lib/multivm/src/versions/vm_m6/oracles/tracer/bootloader.rs index fc2a62374db..f2780c6ae80 100644 --- a/core/lib/multivm/src/versions/vm_m6/oracles/tracer/bootloader.rs +++ b/core/lib/multivm/src/versions/vm_m6/oracles/tracer/bootloader.rs @@ -1,12 +1,5 @@ use std::marker::PhantomData; -use crate::vm_m6::history_recorder::HistoryMode; -use crate::vm_m6::memory::SimpleMemory; -use crate::vm_m6::oracles::tracer::{ - utils::gas_spent_on_bytecodes_and_long_messages_this_opcode, ExecutionEndTracer, - PendingRefundTracer, PubdataSpentTracer, StorageInvocationTracer, -}; - use zk_evm_1_3_1::{ abstractions::{ AfterDecodingData, AfterExecutionData, BeforeExecutionData, Tracer, VmLocalStateData, @@ -16,7 +9,16 @@ use zk_evm_1_3_1::{ zkevm_opcode_defs::{Opcode, RetOpcode}, }; -/// Tells the VM to end the execution before `ret` from the booloader if there is no panic or revert. +use crate::vm_m6::{ + history_recorder::HistoryMode, + memory::SimpleMemory, + oracles::tracer::{ + utils::gas_spent_on_bytecodes_and_long_messages_this_opcode, ExecutionEndTracer, + PendingRefundTracer, PubdataSpentTracer, StorageInvocationTracer, + }, +}; + +/// Tells the VM to end the execution before `ret` from the bootloader if there is no panic or revert. /// Also, saves the information if this `ret` was caused by "out of gas" panic. #[derive(Debug, Clone, Default)] pub struct BootloaderTracer { @@ -98,7 +100,7 @@ impl PubdataSpentTracer for BootloaderTracer { impl BootloaderTracer { fn current_frame_is_bootloader(local_state: &VmLocalState) -> bool { - // The current frame is bootloader if the callstack depth is 1. + // The current frame is bootloader if the call stack depth is 1. 
// Some of the near calls inside the bootloader can be out of gas, which is totally normal behavior // and it shouldn't result in `is_bootloader_out_of_gas` becoming true. local_state.callstack.inner.len() == 1
diff --git a/core/lib/multivm/src/versions/vm_m6/oracles/tracer/call.rs b/core/lib/multivm/src/versions/vm_m6/oracles/tracer/call.rs
index 4b61c9fcc15..19cbf13b275 100644
--- a/core/lib/multivm/src/versions/vm_m6/oracles/tracer/call.rs
+++ b/core/lib/multivm/src/versions/vm_m6/oracles/tracer/call.rs
@@ -1,21 +1,24 @@
-use crate::glue::GlueInto;
-use crate::vm_m6::errors::VmRevertReason;
-use crate::vm_m6::history_recorder::HistoryMode;
-use crate::vm_m6::memory::SimpleMemory;
-use std::convert::TryFrom;
-use std::marker::PhantomData;
-use std::mem;
-use zk_evm_1_3_1::abstractions::{ AfterDecodingData, AfterExecutionData, BeforeExecutionData, Tracer, VmLocalStateData, };
-use zk_evm_1_3_1::zkevm_opcode_defs::FatPointer;
-use zk_evm_1_3_1::zkevm_opcode_defs::{ FarCallABI, FarCallOpcode, Opcode, RetOpcode, CALL_IMPLICIT_CALLDATA_FAT_PTR_REGISTER, RET_IMPLICIT_RETURNDATA_PARAMS_REGISTER,
+use std::{convert::TryFrom, marker::PhantomData, mem};
+
+use zk_evm_1_3_1::{ + abstractions::{ + AfterDecodingData, AfterExecutionData, BeforeExecutionData, Tracer, VmLocalStateData, + }, + zkevm_opcode_defs::{ + FarCallABI, FarCallOpcode, FatPointer, Opcode, RetOpcode, + CALL_IMPLICIT_CALLDATA_FAT_PTR_REGISTER, RET_IMPLICIT_RETURNDATA_PARAMS_REGISTER, + },
 };
 use zksync_system_constants::CONTRACT_DEPLOYER_ADDRESS;
-use zksync_types::vm_trace::{Call, CallType};
-use zksync_types::U256;
+use zksync_types::{ + vm_trace::{Call, CallType}, + U256, +};
+
+use crate::{ + glue::GlueInto, + vm_m6::{errors::VmRevertReason, history_recorder::HistoryMode, memory::SimpleMemory}, +};
 /// NOTE Auto implementing clone for this tracer can cause stack overflow. /// This is because of the stack field which is a Vec with nested vecs inside. @@ -95,7 +98,7 @@ impl Tracer for CallTracer { } impl CallTracer {
- /// We use parent gas for propery calculation of gas used in the trace.
+ /// We use parent gas for proper calculation of gas used in the trace.
 /// This method updates parent gas for the current call.
fn update_parent_gas(&mut self, state: &VmLocalStateData<'_>, current_call: &mut Call) { let current = state.vm_local_state.callstack.current; @@ -186,7 +189,7 @@ impl CallTracer { let fat_data_pointer = state.vm_local_state.registers[RET_IMPLICIT_RETURNDATA_PARAMS_REGISTER as usize]; - // if fat_data_pointer is not a pointer then there is no output + // if `fat_data_pointer` is not a pointer then there is no output let output = if fat_data_pointer.is_pointer { let fat_data_pointer = FatPointer::from_u256(fat_data_pointer.value); if !fat_data_pointer.is_trivial() { @@ -253,8 +256,8 @@ impl CallTracer { // Filter all near calls from the call stack // Important that the very first call is near call - // And this NearCall includes several Normal or Mimic calls - // So we return all childrens of this NearCall + // And this `NearCall` includes several Normal or Mimic calls + // So we return all children of this `NearCall` pub fn extract_calls(&mut self) -> Vec { if let Some(current_call) = self.stack.pop() { filter_near_call(current_call) @@ -265,7 +268,7 @@ impl CallTracer { } // Filter all near calls from the call stack -// Normally wr are not interested in NearCall, because it's just a wrapper for internal calls +// Normally we are not interested in `NearCall`, because it's just a wrapper for internal calls fn filter_near_call(mut call: Call) -> Vec { let mut calls = vec![]; let original_calls = std::mem::take(&mut call.calls); @@ -283,10 +286,13 @@ fn filter_near_call(mut call: Call) -> Vec { #[cfg(test)] mod tests { - use crate::glue::GlueInto; - use crate::vm_m6::oracles::tracer::call::{filter_near_call, Call, CallType}; use zk_evm_1_3_1::zkevm_opcode_defs::FarCallOpcode; + use crate::{ + glue::GlueInto, + vm_m6::oracles::tracer::call::{filter_near_call, Call, CallType}, + }; + #[test] fn test_filter_near_calls() { let mut call = Call::default(); diff --git a/core/lib/multivm/src/versions/vm_m6/oracles/tracer/mod.rs b/core/lib/multivm/src/versions/vm_m6/oracles/tracer/mod.rs index 93486f039fa..cdf83345d2f 100644 --- a/core/lib/multivm/src/versions/vm_m6/oracles/tracer/mod.rs +++ b/core/lib/multivm/src/versions/vm_m6/oracles/tracer/mod.rs @@ -1,5 +1,15 @@ -use zk_evm_1_3_1::abstractions::Tracer; -use zk_evm_1_3_1::vm_state::VmLocalState; +use zk_evm_1_3_1::{abstractions::Tracer, vm_state::VmLocalState}; + +pub(crate) use self::transaction_result::TransactionResultTracer; +pub use self::{ + bootloader::BootloaderTracer, + call::CallTracer, + one_tx::OneTxTracer, + validation::{ + ValidationError, ValidationTracer, ValidationTracerParams, ViolatedValidationRule, + }, +}; +use crate::vm_m6::{history_recorder::HistoryMode, memory::SimpleMemory}; mod bootloader; mod call; @@ -8,18 +18,6 @@ mod transaction_result; mod utils; mod validation; -pub use bootloader::BootloaderTracer; -pub use call::CallTracer; -pub use one_tx::OneTxTracer; -pub use validation::{ - ValidationError, ValidationTracer, ValidationTracerParams, ViolatedValidationRule, -}; - -pub(crate) use transaction_result::TransactionResultTracer; - -use crate::vm_m6::history_recorder::HistoryMode; -use crate::vm_m6::memory::SimpleMemory; - pub trait ExecutionEndTracer: Tracer> { // Returns whether the vm execution should stop. 
fn should_stop_execution(&self) -> bool; diff --git a/core/lib/multivm/src/versions/vm_m6/oracles/tracer/one_tx.rs b/core/lib/multivm/src/versions/vm_m6/oracles/tracer/one_tx.rs index d5fbb78c909..53e5e4ee2f6 100644 --- a/core/lib/multivm/src/versions/vm_m6/oracles/tracer/one_tx.rs +++ b/core/lib/multivm/src/versions/vm_m6/oracles/tracer/one_tx.rs @@ -1,25 +1,25 @@ +use zk_evm_1_3_1::{ + abstractions::{ + AfterDecodingData, AfterExecutionData, BeforeExecutionData, Tracer, VmLocalStateData, + }, + vm_state::VmLocalState, +}; +use zksync_types::vm_trace::Call; + use super::utils::{computational_gas_price, print_debug_if_needed}; use crate::vm_m6::{ history_recorder::HistoryMode, memory::SimpleMemory, oracles::tracer::{ utils::{gas_spent_on_bytecodes_and_long_messages_this_opcode, VmHook}, - BootloaderTracer, ExecutionEndTracer, PendingRefundTracer, PubdataSpentTracer, + BootloaderTracer, CallTracer, ExecutionEndTracer, PendingRefundTracer, PubdataSpentTracer, + StorageInvocationTracer, }, vm_instance::get_vm_hook_params, }; -use crate::vm_m6::oracles::tracer::{CallTracer, StorageInvocationTracer}; -use zk_evm_1_3_1::{ - abstractions::{ - AfterDecodingData, AfterExecutionData, BeforeExecutionData, Tracer, VmLocalStateData, - }, - vm_state::VmLocalState, -}; -use zksync_types::vm_trace::Call; - /// Allows any opcodes, but tells the VM to end the execution once the tx is over. -// Internally depeds on Bootloader's VMHooks to get the notification once the transaction is finished. +// Internally depends on Bootloader's `VMHooks` to get the notification once the transaction is finished. #[derive(Debug)] pub struct OneTxTracer { tx_has_been_processed: bool, diff --git a/core/lib/multivm/src/versions/vm_m6/oracles/tracer/transaction_result.rs b/core/lib/multivm/src/versions/vm_m6/oracles/tracer/transaction_result.rs index a3e4391af24..2ecf484b60a 100644 --- a/core/lib/multivm/src/versions/vm_m6/oracles/tracer/transaction_result.rs +++ b/core/lib/multivm/src/versions/vm_m6/oracles/tracer/transaction_result.rs @@ -7,18 +7,18 @@ use zk_evm_1_3_1::{ }; use zksync_types::{vm_trace, U256}; -use crate::vm_m6::memory::SimpleMemory; -use crate::vm_m6::oracles::tracer::{ - CallTracer, ExecutionEndTracer, PendingRefundTracer, PubdataSpentTracer, - StorageInvocationTracer, -}; -use crate::vm_m6::vm_instance::get_vm_hook_params; use crate::vm_m6::{ history_recorder::HistoryMode, - oracles::tracer::utils::{ - gas_spent_on_bytecodes_and_long_messages_this_opcode, print_debug_if_needed, read_pointer, - VmHook, + memory::SimpleMemory, + oracles::tracer::{ + utils::{ + gas_spent_on_bytecodes_and_long_messages_this_opcode, print_debug_if_needed, + read_pointer, VmHook, + }, + CallTracer, ExecutionEndTracer, PendingRefundTracer, PubdataSpentTracer, + StorageInvocationTracer, }, + vm_instance::get_vm_hook_params, }; #[derive(Debug)] diff --git a/core/lib/multivm/src/versions/vm_m6/oracles/tracer/utils.rs b/core/lib/multivm/src/versions/vm_m6/oracles/tracer/utils.rs index e2f1652e9b7..2df22aa2d3f 100644 --- a/core/lib/multivm/src/versions/vm_m6/oracles/tracer/utils.rs +++ b/core/lib/multivm/src/versions/vm_m6/oracles/tracer/utils.rs @@ -1,14 +1,9 @@ -use crate::vm_m6::history_recorder::HistoryMode; -use crate::vm_m6::memory::SimpleMemory; -use crate::vm_m6::utils::{aux_heap_page_from_base, heap_page_from_base}; -use crate::vm_m6::vm_instance::{get_vm_hook_params, VM_HOOK_POSITION}; -use crate::vm_m6::vm_with_bootloader::BOOTLOADER_HEAP_PAGE; - -use zk_evm_1_3_1::aux_structures::MemoryPage; -use 
zk_evm_1_3_1::zkevm_opcode_defs::{FarCallABI, FarCallForwardPageType}; use zk_evm_1_3_1::{ abstractions::{BeforeExecutionData, VmLocalStateData}, - zkevm_opcode_defs::{FatPointer, LogOpcode, Opcode, UMAOpcode}, + aux_structures::MemoryPage, + zkevm_opcode_defs::{ + FarCallABI, FarCallForwardPageType, FatPointer, LogOpcode, Opcode, UMAOpcode, + }, }; use zksync_system_constants::{ ECRECOVER_PRECOMPILE_ADDRESS, KECCAK256_PRECOMPILE_ADDRESS, KNOWN_CODES_STORAGE_ADDRESS, @@ -17,6 +12,14 @@ use zksync_system_constants::{ use zksync_types::U256; use zksync_utils::u256_to_h256; +use crate::vm_m6::{ + history_recorder::HistoryMode, + memory::SimpleMemory, + utils::{aux_heap_page_from_base, heap_page_from_base}, + vm_instance::{get_vm_hook_params, VM_HOOK_POSITION}, + vm_with_bootloader::BOOTLOADER_HEAP_PAGE, +}; + #[derive(Clone, Debug, Copy)] pub(crate) enum VmHook { AccountValidationEntered, @@ -45,7 +48,7 @@ impl VmHook { let value = data.src1_value.value; - // Only UMA opcodes in the bootloader serve for vm hooks + // Only `UMA` opcodes in the bootloader serve for vm hooks if !matches!(opcode_variant.opcode, Opcode::UMA(UMAOpcode::HeapWrite)) || heap_page != BOOTLOADER_HEAP_PAGE || fat_ptr.offset != VM_HOOK_POSITION * 32 @@ -84,7 +87,7 @@ pub(crate) fn get_debug_log( let msg = String::from_utf8(msg).expect("Invalid debug message"); let data = U256::from_big_endian(&data); - // For long data, it is better to use hex-encoding for greater readibility + // For long data, it is better to use hex-encoding for greater readability let data_str = if data > U256::from(u64::max_value()) { let mut bytes = [0u8; 32]; data.to_big_endian(&mut bytes); @@ -99,7 +102,7 @@ pub(crate) fn get_debug_log( } /// Reads the memory slice represented by the fat pointer. -/// Note, that the fat pointer must point to the accesible memory (i.e. not cleared up yet). +/// Note, that the fat pointer must point to the accessible memory (i.e. not cleared up yet). 
pub(crate) fn read_pointer( memory: &SimpleMemory, pointer: FatPointer, diff --git a/core/lib/multivm/src/versions/vm_m6/oracles/tracer/validation.rs b/core/lib/multivm/src/versions/vm_m6/oracles/tracer/validation.rs index 4e55ad4db00..704a967548b 100644 --- a/core/lib/multivm/src/versions/vm_m6/oracles/tracer/validation.rs +++ b/core/lib/multivm/src/versions/vm_m6/oracles/tracer/validation.rs @@ -1,16 +1,4 @@ -use std::fmt; -use std::fmt::Display; -use std::{collections::HashSet, marker::PhantomData}; - -use crate::vm_m6::{ - errors::VmRevertReasonParsingResult, - history_recorder::HistoryMode, - memory::SimpleMemory, - oracles::tracer::{ - utils::{computational_gas_price, print_debug_if_needed, VmHook}, - ExecutionEndTracer, PendingRefundTracer, PubdataSpentTracer, - }, -}; +use std::{collections::HashSet, fmt, fmt::Display, marker::PhantomData}; use zk_evm_1_3_1::{ abstractions::{ @@ -18,15 +6,11 @@ use zk_evm_1_3_1::{ }, zkevm_opcode_defs::{ContextOpcode, FarCallABI, LogOpcode, Opcode}, }; - -use crate::vm_m6::oracles::tracer::{utils::get_calldata_page_via_abi, StorageInvocationTracer}; -use crate::vm_m6::storage::{Storage, StoragePtr}; use zksync_system_constants::{ ACCOUNT_CODE_STORAGE_ADDRESS, BOOTLOADER_ADDRESS, CONTRACT_DEPLOYER_ADDRESS, KECCAK256_PRECOMPILE_ADDRESS, L2_ETH_TOKEN_ADDRESS, MSG_VALUE_SIMULATOR_ADDRESS, SYSTEM_CONTEXT_ADDRESS, }; - use zksync_types::{ get_code_key, web3::signing::keccak256, AccountTreeId, Address, StorageKey, H256, U256, }; @@ -34,6 +18,19 @@ use zksync_utils::{ be_bytes_to_safe_address, h256_to_account_address, u256_to_account_address, u256_to_h256, }; +use crate::vm_m6::{ + errors::VmRevertReasonParsingResult, + history_recorder::HistoryMode, + memory::SimpleMemory, + oracles::tracer::{ + utils::{ + computational_gas_price, get_calldata_page_via_abi, print_debug_if_needed, VmHook, + }, + ExecutionEndTracer, PendingRefundTracer, PubdataSpentTracer, StorageInvocationTracer, + }, + storage::{Storage, StoragePtr}, +}; + #[derive(Debug, Clone, Eq, PartialEq, Copy)] #[allow(clippy::enum_variant_names)] pub enum ValidationTracerMode { @@ -103,7 +100,7 @@ fn touches_allowed_context(address: Address, key: U256) -> bool { return false; } - // Only chain_id is allowed to be touched. + // Only `chain_id` is allowed to be touched. key == U256::from(0u32) } @@ -238,7 +235,7 @@ impl ValidationTracer { return true; } - // The pair of MSG_VALUE_SIMULATOR_ADDRESS & L2_ETH_TOKEN_ADDRESS simulates the behavior of transfering ETH + // The pair of `MSG_VALUE_SIMULATOR_ADDRESS` & `L2_ETH_TOKEN_ADDRESS` simulates the behavior of transferring ETH // that is safe for the DDoS protection rules. if valid_eth_token_call(address, msg_sender) { return true; @@ -282,20 +279,20 @@ impl ValidationTracer { let (potential_address_bytes, potential_position_bytes) = calldata.split_at(32); let potential_address = be_bytes_to_safe_address(potential_address_bytes); - // If the validation_address is equal to the potential_address, - // then it is a request that could be used for mapping of kind mapping(address => ...). + // If the `validation_address` is equal to the `potential_address`, + // then it is a request that could be used for mapping of kind `mapping(address => ...)`. 
// - // If the potential_position_bytes were already allowed before, then this keccak might be used - // for ERC-20 allowance or any other of mapping(address => mapping(...))
+ // If the `potential_position_bytes` were already allowed before, then this keccak might be used + // for ERC-20 allowance or any other of `mapping(address => mapping(...))`
 if potential_address == Some(validated_address) || self .auxilary_allowed_slots .contains(&H256::from_slice(potential_position_bytes)) {
- // This is request that could be used for mapping of kind mapping(address => ...) // We could theoretically wait for the slot number to be returned by the - // keccak256 precompile itself, but this would complicate the code even further
+ // This is a request that could be used for mapping of kind `mapping(address => ...)` // We could theoretically wait for the slot number to be returned by the + // `keccak256` precompile itself, but this would complicate the code even further
 // so let's calculate it here. let slot = keccak256(calldata);
diff --git a/core/lib/multivm/src/versions/vm_m6/pubdata_utils.rs b/core/lib/multivm/src/versions/vm_m6/pubdata_utils.rs
index a823e5f5ae6..33307507f7e 100644
--- a/core/lib/multivm/src/versions/vm_m6/pubdata_utils.rs
+++ b/core/lib/multivm/src/versions/vm_m6/pubdata_utils.rs
@@ -1,16 +1,21 @@
-use crate::glue::GlueInto;
-use crate::vm_m6::history_recorder::HistoryMode;
-use crate::vm_m6::oracles::storage::storage_key_of_log;
-use crate::vm_m6::storage::Storage;
-use crate::vm_m6::utils::collect_storage_log_queries_after_timestamp;
-use crate::vm_m6::VmInstance;
 use std::collections::HashMap;
+
 use zk_evm_1_3_1::aux_structures::Timestamp;
-use zksync_types::event::{extract_long_l2_to_l1_messages, extract_published_bytecodes};
-use zksync_types::zkevm_test_harness::witness::sort_storage_access::sort_storage_access_queries;
-use zksync_types::{StorageKey, PUBLISH_BYTECODE_OVERHEAD, SYSTEM_CONTEXT_ADDRESS};
+use zksync_types::{ + event::{extract_long_l2_to_l1_messages, extract_published_bytecodes}, + zkevm_test_harness::witness::sort_storage_access::sort_storage_access_queries, + StorageKey, PUBLISH_BYTECODE_OVERHEAD, SYSTEM_CONTEXT_ADDRESS, +};
 use zksync_utils::bytecode::bytecode_len_in_bytes;
+use crate::{ + glue::GlueInto, + vm_m6::{ + history_recorder::HistoryMode, oracles::storage::storage_key_of_log, storage::Storage, + utils::collect_storage_log_queries_after_timestamp, VmInstance, + }, +};
+
 impl VmInstance { pub fn pubdata_published(&self, from_timestamp: Timestamp) -> u32 { let storage_writes_pubdata_published = self.pubdata_published_for_writes(from_timestamp);
diff --git a/core/lib/multivm/src/versions/vm_m6/refunds.rs b/core/lib/multivm/src/versions/vm_m6/refunds.rs
index da16d621911..406bf380a0b 100644
--- a/core/lib/multivm/src/versions/vm_m6/refunds.rs
+++ b/core/lib/multivm/src/versions/vm_m6/refunds.rs
@@ -1,13 +1,14 @@
-use crate::vm_m6::history_recorder::HistoryMode;
-use crate::vm_m6::storage::Storage;
-use crate::vm_m6::vm_with_bootloader::{ - eth_price_per_pubdata_byte, BOOTLOADER_HEAP_PAGE, TX_GAS_LIMIT_OFFSET, -};
-use crate::vm_m6::VmInstance;
 use zk_evm_1_3_1::aux_structures::Timestamp;
 use zksync_types::U256;
 use zksync_utils::ceil_div_u256;
+use crate::vm_m6::{ + history_recorder::HistoryMode, + storage::Storage, + vm_with_bootloader::{eth_price_per_pubdata_byte, BOOTLOADER_HEAP_PAGE, TX_GAS_LIMIT_OFFSET}, + VmInstance, +};
+
 impl VmInstance { pub(crate) fn tx_body_refund( &self, @@ -75,7 +76,7 @@ impl VmInstance { ) -> u32 { // TODO (SMA-1715): Make users pay for the block overhead 0 - + // ```
// let pubdata_published =
self.pubdata_published(from_timestamp); // // let total_gas_spent = gas_remaining_before - self.gas_remaining(); @@ -120,6 +121,7 @@ impl VmInstance { // ); // 0 // } + // ``` } // TODO (SMA-1715): Make users pay for the block overhead @@ -133,39 +135,39 @@ impl VmInstance { _l2_l1_logs: usize, ) -> u32 { 0 - + // ``` // let overhead_for_block_gas = U256::from(crate::transaction_data::block_overhead_gas( // gas_per_pubdata_byte_limit, // )); - + // // let encoded_len = U256::from(encoded_len); // let pubdata_published = U256::from(pubdata_published); // let gas_spent_on_computation = U256::from(gas_spent_on_computation); // let number_of_decommitment_requests = U256::from(number_of_decommitment_requests); // let l2_l1_logs = U256::from(l2_l1_logs); - + // // let tx_slot_overhead = ceil_div_u256(overhead_for_block_gas, MAX_TXS_IN_BLOCK.into()); - + // // let overhead_for_length = ceil_div_u256( // encoded_len * overhead_for_block_gas, // BOOTLOADER_TX_ENCODING_SPACE.into(), // ); - + // // let actual_overhead_for_pubdata = ceil_div_u256( // pubdata_published * overhead_for_block_gas, // MAX_PUBDATA_PER_BLOCK.into(), // ); - + // // let actual_gas_limit_overhead = ceil_div_u256( // gas_spent_on_computation * overhead_for_block_gas, // MAX_BLOCK_MULTIINSTANCE_GAS_LIMIT.into(), // ); - + // // let code_decommitter_sorter_circuit_overhead = ceil_div_u256( // number_of_decommitment_requests * overhead_for_block_gas, // GEOMETRY_CONFIG.limit_for_code_decommitter_sorter.into(), // ); - + // // let l1_l2_logs_overhead = ceil_div_u256( // l2_l1_logs * overhead_for_block_gas, // std::cmp::min( @@ -174,7 +176,7 @@ impl VmInstance { // ) // .into(), // ); - + // // let overhead = vec![ // tx_slot_overhead, // overhead_for_length, @@ -186,8 +188,9 @@ impl VmInstance { // .into_iter() // .max() // .unwrap(); - + // // overhead.as_u32() + // ``` } /// Returns the given transactions' gas limit - by reading it directly from the VM memory. 
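Although the per-transaction block overhead is currently zeroed out (the TODO above), the commented-out formula is still instructive: each component spreads `overhead_for_block_gas` over one limited per-block resource (tx slots, encoding space, pubdata), and the transaction is charged the maximum of the components. Below is a compact sketch of a subset of that computation, with plain `u64` standing in for `U256` and illustrative values for the two capacity parameters:

```rust
/// Ceiling division on unsigned integers, mirroring `ceil_div_u256`.
fn ceil_div(a: u64, b: u64) -> u64 {
    if a == 0 {
        0
    } else {
        (a - 1) / b + 1
    }
}

/// A subset of the disabled overhead formula above: spread the block overhead
/// gas over each limited resource and charge the worst case.
fn block_overhead_gas(
    overhead_for_block_gas: u64,
    encoded_len: u64,           // bytes of the tx encoding in bootloader memory
    pubdata_published: u64,     // pubdata bytes attributed to the tx
    max_txs_in_block: u64,      // 1024 per the constants earlier in this patch
    tx_encoding_space: u64,     // illustrative capacity value
    max_pubdata_per_block: u64, // illustrative capacity value
) -> u64 {
    let tx_slot_overhead = ceil_div(overhead_for_block_gas, max_txs_in_block);
    let overhead_for_length = ceil_div(encoded_len * overhead_for_block_gas, tx_encoding_space);
    let overhead_for_pubdata =
        ceil_div(pubdata_published * overhead_for_block_gas, max_pubdata_per_block);
    tx_slot_overhead
        .max(overhead_for_length)
        .max(overhead_for_pubdata)
}

fn main() {
    // BLOCK_OVERHEAD_GAS = 1_200_000 per the constants earlier in this patch;
    // the remaining arguments are made-up inputs for demonstration.
    let overhead = block_overhead_gas(1_200_000, 2_000, 500, 1_024, 485_225, 110_000);
    println!("per-tx block overhead: {overhead} gas");
}
```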
diff --git a/core/lib/multivm/src/versions/vm_m6/storage.rs b/core/lib/multivm/src/versions/vm_m6/storage.rs index 5441fc8a296..80f7e016010 100644 --- a/core/lib/multivm/src/versions/vm_m6/storage.rs +++ b/core/lib/multivm/src/versions/vm_m6/storage.rs @@ -1,7 +1,4 @@ -use std::cell::RefCell; -use std::collections::HashMap; -use std::fmt::Debug; -use std::rc::Rc; +use std::{cell::RefCell, collections::HashMap, fmt::Debug, rc::Rc}; use zksync_state::{ReadStorage, WriteStorage}; use zksync_types::{get_known_code_key, StorageKey, StorageValue, H256}; diff --git a/core/lib/multivm/src/versions/vm_m6/test_utils.rs b/core/lib/multivm/src/versions/vm_m6/test_utils.rs index 8b022c008a7..55e5add1164 100644 --- a/core/lib/multivm/src/versions/vm_m6/test_utils.rs +++ b/core/lib/multivm/src/versions/vm_m6/test_utils.rs @@ -10,8 +10,10 @@ use std::collections::HashMap; use itertools::Itertools; use zk_evm_1_3_1::{aux_structures::Timestamp, vm_state::VmLocalState}; -use zksync_contracts::test_contracts::LoadnextContractExecutionParams; -use zksync_contracts::{deployer_contract, get_loadnext_contract, load_contract}; +use zksync_contracts::{ + deployer_contract, get_loadnext_contract, load_contract, + test_contracts::LoadnextContractExecutionParams, +}; use zksync_types::{ ethabi::{Address, Token}, fee::Fee, @@ -24,14 +26,13 @@ use zksync_utils::{ address_to_h256, bytecode::hash_bytecode, h256_to_account_address, u256_to_h256, }; -use crate::vm_m6::storage::Storage; -/// The tests here help us with the testing the VM use crate::vm_m6::{ event_sink::InMemoryEventSink, history_recorder::{ AppDataFrameManagerWithHistory, HistoryEnabled, HistoryMode, HistoryRecorder, }, memory::SimpleMemory, + storage::Storage, VmInstance, }; @@ -58,7 +59,7 @@ impl PartialEq for ModifiedKeysMap { #[derive(Clone, PartialEq, Debug)] pub struct DecommitterTestInnerState { - /// There is no way to "trully" compare the storage pointer, + /// There is no way to "truly" compare the storage pointer, /// so we just compare the modified keys. This is reasonable enough. pub modified_storage_keys: ModifiedKeysMap, pub known_bytecodes: HistoryRecorder>, H>, @@ -67,7 +68,7 @@ pub struct DecommitterTestInnerState { #[derive(Clone, PartialEq, Debug)] pub struct StorageOracleInnerState { - /// There is no way to "trully" compare the storage pointer, + /// There is no way to "truly" compare the storage pointer, /// so we just compare the modified keys. This is reasonable enough. pub modified_storage_keys: ModifiedKeysMap, diff --git a/core/lib/multivm/src/versions/vm_m6/tests/bootloader.rs b/core/lib/multivm/src/versions/vm_m6/tests/bootloader.rs index be840e16a14..16d2b7f47d2 100644 --- a/core/lib/multivm/src/versions/vm_m6/tests/bootloader.rs +++ b/core/lib/multivm/src/versions/vm_m6/tests/bootloader.rs @@ -1,3 +1,4 @@ +// ``` // //! // //! Tests for the bootloader // //! The description for each of the tests can be found in the corresponding `.yul` file. 
@@ -8,7 +9,7 @@ // convert::TryFrom, // }; // use tempfile::TempDir; - +// // use crate::{ // errors::VmRevertReason, // history_recorder::HistoryMode, @@ -35,11 +36,11 @@ // }, // HistoryEnabled, OracleTools, TxRevertReason, VmBlockResult, VmExecutionResult, VmInstance, // }; - +// // use zk_evm_1_3_1::{ // aux_structures::Timestamp, block_properties::BlockProperties, zkevm_opcode_defs::FarCallOpcode, // }; - +// // use zksync_types::{ // block::DeployedContract, // ethabi::encode, @@ -60,21 +61,21 @@ // L2_ETH_TOKEN_ADDRESS, MAX_GAS_PER_PUBDATA_BYTE, SYSTEM_CONTEXT_ADDRESS, // }, // }; - +// // use zksync_utils::{ // bytecode::CompressedBytecodeInfo, // test_utils::LoadnextContractExecutionParams, // {bytecode::hash_bytecode, bytes_to_be_words, h256_to_u256, u256_to_h256}, // }; - +// // use zksync_contracts::{ // get_loadnext_contract, load_contract, read_bytecode, SystemContractCode, // PLAYGROUND_BLOCK_BOOTLOADER_CODE, // }; - +// // use zksync_state::{secondary_storage::SecondaryStateStorage, storage_view::StorageView}; // use zksync_storage::{db::Database, RocksDB}; - +// // fn run_vm_with_custom_factory_deps<'a, H: HistoryMode>( // oracle_tools: &'a mut OracleTools<'a, false, H>, // block_context: BlockContext, @@ -93,7 +94,7 @@ // &base_system_contracts, // TxExecutionMode::VerifyExecute, // ); - +// // vm.bootloader_state.add_tx_data(encoded_tx.len()); // vm.state.memory.populate_page( // BOOTLOADER_HEAP_PAGE as usize, @@ -110,17 +111,17 @@ // ), // Timestamp(0), // ); - +// // let result = vm.execute_next_tx(u32::MAX, false).err(); - +// // assert_eq!(expected_error, result); // } - +// // fn get_balance(token_id: AccountTreeId, account: &Address, main_storage: StoragePtr<'_>) -> U256 { // let key = storage_key_for_standard_token_balance(token_id, account); // h256_to_u256(main_storage.borrow_mut().get_value(&key)) // } - +// // #[test] // fn test_dummy_bootloader() { // let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); @@ -129,18 +130,18 @@ // insert_system_contracts(&mut raw_storage); // let mut storage_accessor = StorageView::new(&raw_storage); // let storage_ptr: &mut dyn Storage = &mut storage_accessor; - +// // let mut oracle_tools = OracleTools::new(storage_ptr, HistoryEnabled); // let (block_context, block_properties) = create_test_block_params(); // let mut base_system_contracts = BASE_SYSTEM_CONTRACTS.clone(); // let bootloader_code = read_bootloader_test_code("dummy"); // let bootloader_hash = hash_bytecode(&bootloader_code); - +// // base_system_contracts.bootloader = SystemContractCode { // code: bytes_to_be_words(bootloader_code), // hash: bootloader_hash, // }; - +// // let mut vm = init_vm_inner( // &mut oracle_tools, // BlockContextMode::NewBlock(block_context.into(), Default::default()), @@ -149,22 +150,22 @@ // &base_system_contracts, // TxExecutionMode::VerifyExecute, // ); - +// // let VmBlockResult { // full_result: res, .. 
// } = vm.execute_till_block_end(BootloaderJobType::BlockPostprocessing); - +// // // Dummy bootloader should not panic // assert!(res.revert_reason.is_none()); - +// // let correct_first_cell = U256::from_str_radix("123123123", 16).unwrap(); - +// // verify_required_memory( // &vm.state, // vec![(correct_first_cell, BOOTLOADER_HEAP_PAGE, 0)], // ); // } - +// // #[test] // fn test_bootloader_out_of_gas() { // let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); @@ -173,20 +174,20 @@ // insert_system_contracts(&mut raw_storage); // let mut storage_accessor = StorageView::new(&raw_storage); // let storage_ptr: &mut dyn Storage = &mut storage_accessor; - +// // let mut oracle_tools = OracleTools::new(storage_ptr, HistoryEnabled); // let (block_context, block_properties) = create_test_block_params(); - +// // let mut base_system_contracts = BASE_SYSTEM_CONTRACTS.clone(); - +// // let bootloader_code = read_bootloader_test_code("dummy"); // let bootloader_hash = hash_bytecode(&bootloader_code); - +// // base_system_contracts.bootloader = SystemContractCode { // code: bytes_to_be_words(bootloader_code), // hash: bootloader_hash, // }; - +// // // init vm with only 10 ergs // let mut vm = init_vm_inner( // &mut oracle_tools, @@ -196,19 +197,19 @@ // &base_system_contracts, // TxExecutionMode::VerifyExecute, // ); - +// // let res = vm.execute_block_tip(); - +// // assert_eq!(res.revert_reason, Some(TxRevertReason::BootloaderOutOfGas)); // } - +// // fn verify_required_storage( // state: &ZkSyncVmState<'_, H>, // required_values: Vec<(H256, StorageKey)>, // ) { // for (required_value, key) in required_values { // let current_value = state.storage.storage.read_from_storage(&key); - +// // assert_eq!( // u256_to_h256(current_value), // required_value, @@ -216,7 +217,7 @@ // ); // } // } - +// // fn verify_required_memory( // state: &ZkSyncVmState<'_, H>, // required_values: Vec<(U256, u32, u32)>, @@ -229,21 +230,21 @@ // assert_eq!(current_value, required_value); // } // } - +// // #[test] // fn test_default_aa_interaction() { // // In this test, we aim to test whether a simple account interaction (without any fee logic) // // will work. The account will try to deploy a simple contract from integration tests. 
- +// // let (block_context, block_properties) = create_test_block_params(); // let block_context: DerivedBlockContext = block_context.into(); - +// // let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); // let db = RocksDB::new(Database::StateKeeper, temp_dir.as_ref(), false); // let mut raw_storage = SecondaryStateStorage::new(db); // insert_system_contracts(&mut raw_storage); // let storage_ptr: &mut dyn Storage = &mut StorageView::new(&raw_storage); - +// // let operator_address = block_context.context.operator_address; // let base_fee = block_context.base_fee; // // We deploy here counter contract, because its logic is trivial @@ -264,16 +265,16 @@ // ) // .into(); // let tx_data: TransactionData = tx.clone().into(); - +// // let maximal_fee = tx_data.gas_limit * tx_data.max_fee_per_gas; // let sender_address = tx_data.from(); // // set balance - +// // let key = storage_key_for_eth_balance(&sender_address); // storage_ptr.set_value(&key, u256_to_h256(U256([0, 0, 1, 0]))); - +// // let mut oracle_tools = OracleTools::new(storage_ptr, HistoryEnabled); - +// // let mut vm = init_vm_inner( // &mut oracle_tools, // BlockContextMode::NewBlock(block_context, Default::default()), @@ -283,17 +284,17 @@ // TxExecutionMode::VerifyExecute, // ); // push_transaction_to_bootloader_memory(&mut vm, &tx, TxExecutionMode::VerifyExecute, None); - +// // let tx_execution_result = vm // .execute_next_tx(u32::MAX, false) // .expect("Bootloader failed while processing transaction"); - +// // assert_eq!( // tx_execution_result.status, // TxExecutionStatus::Success, // "Transaction wasn't successful" // ); - +// // let VmBlockResult { // full_result: res, .. // } = vm.execute_till_block_end(BootloaderJobType::TransactionExecution); @@ -303,28 +304,28 @@ // "Bootloader was not expected to revert: {:?}", // res.revert_reason // ); - +// // // Both deployment and ordinary nonce should be incremented by one. // let account_nonce_key = get_nonce_key(&sender_address); // let expected_nonce = TX_NONCE_INCREMENT + DEPLOYMENT_NONCE_INCREMENT; - +// // // The code hash of the deployed contract should be marked as republished. // let known_codes_key = get_known_code_key(&contract_code_hash); - +// // // The contract should be deployed successfully. 
// let deployed_address = deployed_address_create(sender_address, U256::zero()); // let account_code_key = get_code_key(&deployed_address); - +// // let expected_slots = vec![ // (u256_to_h256(expected_nonce), account_nonce_key), // (u256_to_h256(U256::from(1u32)), known_codes_key), // (contract_code_hash, account_code_key), // ]; - +// // verify_required_storage(&vm.state, expected_slots); - +// // assert!(!tx_has_failed(&vm.state, 0)); - +// // let expected_fee = // maximal_fee - U256::from(tx_execution_result.gas_refunded) * U256::from(base_fee); // let operator_balance = get_balance( @@ -332,13 +333,13 @@ // &operator_address, // vm.state.storage.storage.get_ptr(), // ); - +// // assert!( // operator_balance == expected_fee, // "Operator did not receive his fee" // ); // } - +// // fn execute_vm_with_predetermined_refund( // txs: Vec, // refunds: Vec, @@ -346,22 +347,22 @@ // ) -> VmBlockResult { // let (block_context, block_properties) = create_test_block_params(); // let block_context: DerivedBlockContext = block_context.into(); - +// // let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); // let db = RocksDB::new(Database::StateKeeper, temp_dir.as_ref(), false); // let mut raw_storage = SecondaryStateStorage::new(db); // insert_system_contracts(&mut raw_storage); // let storage_ptr: &mut dyn Storage = &mut StorageView::new(&raw_storage); - +// // // set balance // for tx in txs.iter() { // let sender_address = tx.initiator_account(); // let key = storage_key_for_eth_balance(&sender_address); // storage_ptr.set_value(&key, u256_to_h256(U256([0, 0, 1, 0]))); // } - +// // let mut oracle_tools = OracleTools::new(storage_ptr, HistoryEnabled); - +// // let mut vm = init_vm_inner( // &mut oracle_tools, // BlockContextMode::NewBlock(block_context, Default::default()), @@ -370,7 +371,7 @@ // &BASE_SYSTEM_CONTRACTS, // TxExecutionMode::VerifyExecute, // ); - +// // let codes_for_decommiter = txs // .iter() // .flat_map(|tx| { @@ -383,12 +384,12 @@ // .collect::)>>() // }) // .collect(); - +// // vm.state.decommittment_processor.populate( // codes_for_decommiter, // Timestamp(vm.state.local_state.timestamp), // ); - +// // let memory_with_suggested_refund = get_bootloader_memory( // txs.into_iter().map(Into::into).collect(), // refunds, @@ -396,32 +397,32 @@ // TxExecutionMode::VerifyExecute, // BlockContextMode::NewBlock(block_context, Default::default()), // ); - +// // vm.state.memory.populate_page( // BOOTLOADER_HEAP_PAGE as usize, // memory_with_suggested_refund, // Timestamp(0), // ); - +// // vm.execute_till_block_end(BootloaderJobType::TransactionExecution) // } - +// // #[test] // fn test_predetermined_refunded_gas() { // // In this test, we compare the execution of the bootloader with the predefined // // refunded gas and without them - +// // let (block_context, block_properties) = create_test_block_params(); // let block_context: DerivedBlockContext = block_context.into(); - +// // let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); // let db = RocksDB::new(Database::StateKeeper, temp_dir.as_ref(), false); // let mut raw_storage = SecondaryStateStorage::new(db); // insert_system_contracts(&mut raw_storage); // let storage_ptr: &mut dyn Storage = &mut StorageView::new(&raw_storage); - +// // let base_fee = block_context.base_fee; - +// // // We deploy here counter contract, because its logic is trivial // let contract_code = read_test_contract(); // let published_bytecode = 
CompressedBytecodeInfo::from_original(contract_code.clone()).unwrap(); @@ -439,15 +440,15 @@ // }, // ) // .into(); - +// // let sender_address = tx.initiator_account(); - +// // // set balance // let key = storage_key_for_eth_balance(&sender_address); // storage_ptr.set_value(&key, u256_to_h256(U256([0, 0, 1, 0]))); - +// // let mut oracle_tools = OracleTools::new(storage_ptr, HistoryEnabled); - +// // let mut vm = init_vm_inner( // &mut oracle_tools, // BlockContextMode::NewBlock(block_context, Default::default()), @@ -456,19 +457,19 @@ // &BASE_SYSTEM_CONTRACTS, // TxExecutionMode::VerifyExecute, // ); - +// // push_transaction_to_bootloader_memory(&mut vm, &tx, TxExecutionMode::VerifyExecute, None); - +// // let tx_execution_result = vm // .execute_next_tx(u32::MAX, false) // .expect("Bootloader failed while processing transaction"); - +// // assert_eq!( // tx_execution_result.status, // TxExecutionStatus::Success, // "Transaction wasn't successful" // ); - +// // // If the refund provided by the operator or the final refund are the 0 // // there is no impact of the operator's refund at all and so this test does not // // make much sense. @@ -480,14 +481,14 @@ // tx_execution_result.gas_refunded > 0, // "The final refund is 0" // ); - +// // let mut result = vm.execute_till_block_end(BootloaderJobType::TransactionExecution); // assert!( // result.full_result.revert_reason.is_none(), // "Bootloader was not expected to revert: {:?}", // result.full_result.revert_reason // ); - +// // let mut result_with_predetermined_refund = execute_vm_with_predetermined_refund( // vec![tx], // vec![tx_execution_result.operator_suggested_refund], @@ -499,7 +500,7 @@ // .full_result // .used_contract_hashes // .sort(); - +// // assert_eq!( // result.full_result.events, // result_with_predetermined_refund.full_result.events @@ -521,18 +522,18 @@ // .used_contract_hashes // ); // } - +// // #[derive(Debug, Clone)] // enum TransactionRollbackTestInfo { // Rejected(Transaction, TxRevertReason), // Processed(Transaction, bool, TxExecutionStatus), // } - +// // impl TransactionRollbackTestInfo { // fn new_rejected(transaction: Transaction, revert_reason: TxRevertReason) -> Self { // Self::Rejected(transaction, revert_reason) // } - +// // fn new_processed( // transaction: Transaction, // should_be_rollbacked: bool, @@ -540,28 +541,28 @@ // ) -> Self { // Self::Processed(transaction, should_be_rollbacked, expected_status) // } - +// // fn get_transaction(&self) -> &Transaction { // match self { // TransactionRollbackTestInfo::Rejected(tx, _) => tx, // TransactionRollbackTestInfo::Processed(tx, _, _) => tx, // } // } - +// // fn rejection_reason(&self) -> Option { // match self { // TransactionRollbackTestInfo::Rejected(_, revert_reason) => Some(revert_reason.clone()), // TransactionRollbackTestInfo::Processed(_, _, _) => None, // } // } - +// // fn should_rollback(&self) -> bool { // match self { // TransactionRollbackTestInfo::Rejected(_, _) => true, // TransactionRollbackTestInfo::Processed(_, x, _) => *x, // } // } - +// // fn expected_status(&self) -> TxExecutionStatus { // match self { // TransactionRollbackTestInfo::Rejected(_, _) => { @@ -571,7 +572,7 @@ // } // } // } - +// // // Accepts the address of the sender as well as the list of pairs of its transactions // // and whether these transactions should succeed. 
// fn execute_vm_with_possible_rollbacks( @@ -585,13 +586,13 @@ // let mut raw_storage = SecondaryStateStorage::new(db); // insert_system_contracts(&mut raw_storage); // let storage_ptr: &mut dyn Storage = &mut StorageView::new(&raw_storage); - +// // // Setting infinite balance for the sender. // let key = storage_key_for_eth_balance(&sender_address); // storage_ptr.set_value(&key, u256_to_h256(U256([0, 0, 1, 0]))); - +// // let mut oracle_tools = OracleTools::new(storage_ptr, HistoryEnabled); - +// // let mut vm = init_vm_inner( // &mut oracle_tools, // BlockContextMode::NewBlock(block_context, Default::default()), @@ -600,7 +601,7 @@ // &BASE_SYSTEM_CONTRACTS, // TxExecutionMode::VerifyExecute, // ); - +// // for test_info in transactions { // vm.save_current_vm_as_snapshot(); // let vm_state_before_tx = vm.dump_inner_state(); @@ -610,7 +611,7 @@ // TxExecutionMode::VerifyExecute, // None, // ); - +// // match vm.execute_next_tx(u32::MAX, false) { // Err(reason) => { // assert_eq!(test_info.rejection_reason(), Some(reason)); @@ -624,11 +625,11 @@ // ); // } // }; - +// // if test_info.should_rollback() { // // Some error has occurred, we should reject the transaction // vm.rollback_to_latest_snapshot(); - +// // // vm_state_before_tx. // let state_after_rollback = vm.dump_inner_state(); // assert_eq!( @@ -637,7 +638,7 @@ // ); // } // } - +// // let VmBlockResult { // full_result: mut result, // .. @@ -645,10 +646,10 @@ // // Used contract hashes are retrieved in unordered manner. // // However it must be sorted for the comparisons in tests to work // result.used_contract_hashes.sort(); - +// // result // } - +// // // Sets the signature for an L2 transaction and returns the same transaction // // but this different signature. // fn change_signature(mut tx: Transaction, signature: Vec) -> Transaction { @@ -659,22 +660,22 @@ // } // _ => unreachable!(), // }; - +// // tx // } - +// // #[test] // fn test_vm_rollbacks() { // let (block_context, block_properties): (DerivedBlockContext, BlockProperties) = { // let (block_context, block_properties) = create_test_block_params(); // (block_context.into(), block_properties) // }; - +// // let base_fee = U256::from(block_context.base_fee); - +// // let sender_private_key = H256::random(); // let contract_code = read_test_contract(); - +// // let tx_nonce_0: Transaction = get_deploy_tx( // sender_private_key, // Nonce(0), @@ -717,13 +718,13 @@ // }, // ) // .into(); - +// // let wrong_signature_length_tx = change_signature(tx_nonce_0.clone(), vec![1u8; 32]); // let wrong_v_tx = change_signature(tx_nonce_0.clone(), vec![1u8; 65]); // let wrong_signature_tx = change_signature(tx_nonce_0.clone(), vec![27u8; 65]); - +// // let sender_address = tx_nonce_0.initiator_account(); - +// // let result_without_rollbacks = execute_vm_with_possible_rollbacks( // sender_address, // vec![ @@ -747,7 +748,7 @@ // block_context, // block_properties, // ); - +// // let incorrect_nonce = TxRevertReason::ValidationFailed(VmRevertReason::General { // msg: "Incorrect nonce".to_string(), // data: vec![ @@ -790,7 +791,7 @@ // msg: "Account validation returned invalid magic value. 
Most often this means that the signature is incorrect".to_string(), // data: vec![], // }); - +// // let result_with_rollbacks = execute_vm_with_possible_rollbacks( // sender_address, // vec![ @@ -835,11 +836,11 @@ // block_context, // block_properties, // ); - +// // assert_eq!(result_without_rollbacks, result_with_rollbacks); - +// // let loadnext_contract = get_loadnext_contract(); - +// // let loadnext_constructor_data = encode(&[Token::Uint(U256::from(100))]); // let loadnext_deploy_tx: Transaction = get_deploy_tx( // sender_private_key, @@ -862,7 +863,7 @@ // false, // TxExecutionStatus::Success, // ); - +// // let get_load_next_tx = |params: LoadnextContractExecutionParams, nonce: Nonce| { // // Here we test loadnext with various kinds of operations // let tx: Transaction = mock_loadnext_test_call( @@ -878,10 +879,10 @@ // params, // ) // .into(); - +// // tx // }; - +// // let loadnext_tx_0 = get_load_next_tx( // LoadnextContractExecutionParams { // reads: 100, @@ -904,7 +905,7 @@ // }, // Nonce(2), // ); - +// // let result_without_rollbacks = execute_vm_with_possible_rollbacks( // sender_address, // vec![ @@ -923,7 +924,7 @@ // block_context, // block_properties, // ); - +// // let result_with_rollbacks = execute_vm_with_possible_rollbacks( // sender_address, // vec![ @@ -964,10 +965,10 @@ // block_context, // block_properties, // ); - +// // assert_eq!(result_without_rollbacks, result_with_rollbacks); // } - +// // // Inserts the contracts into the test environment, bypassing the // // deployer system contract. Besides the reference to storage // // it accepts a `contracts` tuple of information about the contract @@ -980,13 +981,13 @@ // .iter() // .flat_map(|(contract, is_account)| { // let mut new_logs = vec![]; - +// // let deployer_code_key = get_code_key(contract.account_id.address()); // new_logs.push(StorageLog::new_write_log( // deployer_code_key, // hash_bytecode(&contract.bytecode), // )); - +// // if *is_account { // let is_account_key = get_is_account_key(contract.account_id.address()); // new_logs.push(StorageLog::new_write_log( @@ -994,18 +995,18 @@ // u256_to_h256(1u32.into()), // )); // } - +// // new_logs // }) // .collect(); // raw_storage.process_transaction_logs(&logs); - +// // for (contract, _) in contracts { // raw_storage.store_factory_dep(hash_bytecode(&contract.bytecode), contract.bytecode); // } // raw_storage.save(L1BatchNumber(0)); // } - +// // enum NonceHolderTestMode { // SetValueUnderNonce, // IncreaseMinNonceBy5, @@ -1014,7 +1015,7 @@ // IncreaseMinNonceBy1, // SwitchToArbitraryOrdering, // } - +// // impl From for u8 { // fn from(mode: NonceHolderTestMode) -> u8 { // match mode { @@ -1027,7 +1028,7 @@ // } // } // } - +// // fn get_nonce_holder_test_tx( // nonce: U256, // account_address: Address, @@ -1049,11 +1050,11 @@ // reserved: [U256::zero(); 4], // data: vec![12], // signature: vec![test_mode.into()], - +// // ..Default::default() // } // } - +// // fn run_vm_with_raw_tx<'a, H: HistoryMode>( // oracle_tools: &'a mut OracleTools<'a, false, H>, // block_context: DerivedBlockContext, @@ -1070,9 +1071,9 @@ // &base_system_contracts, // TxExecutionMode::VerifyExecute, // ); - +// // let block_gas_price_per_pubdata = block_context.context.block_gas_price_per_pubdata(); - +// // let overhead = tx.overhead_gas(block_gas_price_per_pubdata as u32); // push_raw_transaction_to_bootloader_memory( // &mut vm, @@ -1085,43 +1086,43 @@ // full_result: result, // .. 
// } = vm.execute_till_block_end(BootloaderJobType::TransactionExecution); - +// // (result, tx_has_failed(&vm.state, 0)) // } - +// // #[test] // fn test_nonce_holder() { // let (block_context, block_properties): (DerivedBlockContext, BlockProperties) = { // let (block_context, block_properties) = create_test_block_params(); // (block_context.into(), block_properties) // }; - +// // let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); // let db = RocksDB::new(Database::StateKeeper, temp_dir.as_ref(), false); // let mut raw_storage = SecondaryStateStorage::new(db); // insert_system_contracts(&mut raw_storage); - +// // let account_address = H160::random(); // let account = DeployedContract { // account_id: AccountTreeId::new(account_address), // bytecode: read_nonce_holder_tester(), // }; - +// // insert_contracts(&mut raw_storage, vec![(account, true)]); - +// // let storage_ptr: &mut dyn Storage = &mut StorageView::new(&raw_storage); - +// // // We deploy here counter contract, because its logic is trivial - +// // let key = storage_key_for_eth_balance(&account_address); // storage_ptr.set_value(&key, u256_to_h256(U256([0, 0, 1, 0]))); - +// // let mut run_nonce_test = |nonce: U256, // test_mode: NonceHolderTestMode, // error_message: Option, // comment: &'static str| { // let tx = get_nonce_holder_test_tx(nonce, account_address, test_mode, &block_context); - +// // let mut oracle_tools = OracleTools::new(storage_ptr, HistoryEnabled); // let (result, tx_has_failed) = // run_vm_with_raw_tx(&mut oracle_tools, block_context, &block_properties, tx); @@ -1142,7 +1143,7 @@ // assert!(!tx_has_failed, "{}", comment); // } // }; - +// // // Test 1: trying to set value under non sequential nonce value. // run_nonce_test( // 1u32.into(), @@ -1150,7 +1151,7 @@ // Some("Previous nonce has not been used".to_string()), // "Allowed to set value under non sequential value", // ); - +// // // Test 2: increase min nonce by 1 with sequential nonce ordering: // run_nonce_test( // 0u32.into(), @@ -1158,7 +1159,7 @@ // None, // "Failed to increment nonce by 1 for sequential account", // ); - +// // // Test 3: correctly set value under nonce with sequential nonce ordering: // run_nonce_test( // 1u32.into(), @@ -1166,7 +1167,7 @@ // None, // "Failed to set value under nonce sequential value", // ); - +// // // Test 5: migrate to the arbitrary nonce ordering: // run_nonce_test( // 2u32.into(), @@ -1174,7 +1175,7 @@ // None, // "Failed to switch to arbitrary ordering", // ); - +// // // Test 6: increase min nonce by 5 // run_nonce_test( // 6u32.into(), @@ -1182,7 +1183,7 @@ // None, // "Failed to increase min nonce by 5", // ); - +// // // Test 7: since the nonces in range [6,10] are no longer allowed, the // // tx with nonce 10 should not be allowed // run_nonce_test( @@ -1191,7 +1192,7 @@ // Some("Reusing the same nonce twice".to_string()), // "Allowed to reuse nonce below the minimal one", // ); - +// // // Test 8: we should be able to use nonce 13 // run_nonce_test( // 13u32.into(), @@ -1199,7 +1200,7 @@ // None, // "Did not allow to use unused nonce 10", // ); - +// // // Test 9: we should not be able to reuse nonce 13 // run_nonce_test( // 13u32.into(), @@ -1207,7 +1208,7 @@ // Some("Reusing the same nonce twice".to_string()), // "Allowed to reuse the same nonce twice", // ); - +// // // Test 10: we should be able to simply use nonce 14, while bumping the minimal nonce by 5 // run_nonce_test( // 14u32.into(), @@ -1215,7 +1216,7 @@ // None, // "Did not allow to use a bumped 
nonce", // ); - +// // // Test 6: Do not allow bumping nonce by too much // run_nonce_test( // 16u32.into(), @@ -1223,7 +1224,7 @@ // Some("The value for incrementing the nonce is too high".to_string()), // "Allowed for incrementing min nonce too much", // ); - +// // // Test 7: Do not allow not setting a nonce as used // run_nonce_test( // 16u32.into(), @@ -1232,7 +1233,7 @@ // "Allowed to leave nonce as unused", // ); // } - +// // #[test] // fn test_l1_tx_execution() { // // In this test, we try to execute a contract deployment from L1 @@ -1242,17 +1243,17 @@ // insert_system_contracts(&mut raw_storage); // let mut storage_accessor = StorageView::new(&raw_storage); // let storage_ptr: &mut dyn Storage = &mut storage_accessor; - +// // let mut oracle_tools = OracleTools::new(storage_ptr, HistoryEnabled); // let (block_context, block_properties) = create_test_block_params(); - +// // // Here instead of marking code hash via the bootloader means, we will // // using L1->L2 communication, the same it would likely be done during the priority mode. // let contract_code = read_test_contract(); // let contract_code_hash = hash_bytecode(&contract_code); // let l1_deploy_tx = get_l1_deploy_tx(&contract_code, &[]); // let l1_deploy_tx_data: TransactionData = l1_deploy_tx.clone().into(); - +// // let required_l2_to_l1_logs = vec![ // L2ToL1Log { // shard_id: 0, @@ -1271,9 +1272,9 @@ // value: u256_to_h256(U256::from(1u32)), // }, // ]; - +// // let sender_address = l1_deploy_tx_data.from(); - +// // oracle_tools.decommittment_processor.populate( // vec![( // h256_to_u256(contract_code_hash), @@ -1281,7 +1282,7 @@ // )], // Timestamp(0), // ); - +// // let mut vm = init_vm_inner( // &mut oracle_tools, // BlockContextMode::NewBlock(block_context.into(), Default::default()), @@ -1296,29 +1297,29 @@ // TxExecutionMode::VerifyExecute, // None, // ); - +// // let res = vm.execute_next_tx(u32::MAX, false).unwrap(); - +// // // The code hash of the deployed contract should be marked as republished. // let known_codes_key = get_known_code_key(&contract_code_hash); - +// // // The contract should be deployed successfully. 
// let deployed_address = deployed_address_create(sender_address, U256::zero()); // let account_code_key = get_code_key(&deployed_address); - +// // let expected_slots = vec![ // (u256_to_h256(U256::from(1u32)), known_codes_key), // (contract_code_hash, account_code_key), // ]; // assert!(!tx_has_failed(&vm.state, 0)); - +// // verify_required_storage(&vm.state, expected_slots); - +// // assert_eq!(res.result.logs.l2_to_l1_logs, required_l2_to_l1_logs); - +// // let tx = get_l1_execute_test_contract_tx(deployed_address, true); // push_transaction_to_bootloader_memory(&mut vm, &tx, TxExecutionMode::VerifyExecute, None); - +// // let res = StorageWritesDeduplicator::apply_on_empty_state( // &vm.execute_next_tx(u32::MAX, false) // .unwrap() @@ -1327,7 +1328,7 @@ // .storage_logs, // ); // assert_eq!(res.initial_storage_writes, 0); - +// // let tx = get_l1_execute_test_contract_tx(deployed_address, false); // push_transaction_to_bootloader_memory(&mut vm, &tx, TxExecutionMode::VerifyExecute, None); // let res = StorageWritesDeduplicator::apply_on_empty_state( @@ -1338,9 +1339,9 @@ // .storage_logs, // ); // assert_eq!(res.initial_storage_writes, 2); - +// // let repeated_writes = res.repeated_storage_writes; - +// // push_transaction_to_bootloader_memory(&mut vm, &tx, TxExecutionMode::VerifyExecute, None); // let res = StorageWritesDeduplicator::apply_on_empty_state( // &vm.execute_next_tx(u32::MAX, false) @@ -1352,7 +1353,7 @@ // assert_eq!(res.initial_storage_writes, 1); // // We do the same storage write, so it will be deduplicated // assert_eq!(res.repeated_storage_writes, repeated_writes); - +// // let mut tx = get_l1_execute_test_contract_tx(deployed_address, false); // tx.execute.value = U256::from(1); // match &mut tx.common_data { @@ -1369,16 +1370,16 @@ // TxExecutionStatus::Failure, // "The transaction should fail" // ); - +// // let res = // StorageWritesDeduplicator::apply_on_empty_state(&execution_result.result.logs.storage_logs); - +// // // There are 2 initial writes here: // // - totalSupply of ETH token // // - balance of the refund recipient // assert_eq!(res.initial_storage_writes, 2); // } - +// // #[test] // fn test_invalid_bytecode() { // let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); @@ -1387,18 +1388,18 @@ // insert_system_contracts(&mut raw_storage); // let (block_context, block_properties) = create_test_block_params(); // let block_gas_per_pubdata = block_context.block_gas_price_per_pubdata(); - +// // let test_vm_with_custom_bytecode_hash = // |bytecode_hash: H256, expected_revert_reason: Option| { // let mut storage_accessor = StorageView::new(&raw_storage); // let storage_ptr: &mut dyn Storage = &mut storage_accessor; // let mut oracle_tools = OracleTools::new(storage_ptr, HistoryEnabled); - +// // let (encoded_tx, predefined_overhead) = get_l1_tx_with_custom_bytecode_hash( // h256_to_u256(bytecode_hash), // block_gas_per_pubdata as u32, // ); - +// // run_vm_with_custom_factory_deps( // &mut oracle_tools, // block_context, @@ -1408,14 +1409,14 @@ // expected_revert_reason, // ); // }; - +// // let failed_to_mark_factory_deps = |msg: &str, data: Vec| { // TxRevertReason::FailedToMarkFactoryDependencies(VmRevertReason::General { // msg: msg.to_string(), // data, // }) // }; - +// // // Here we provide the correctly-formatted bytecode hash of // // odd length, so it should work. 
// test_vm_with_custom_bytecode_hash( @@ -1425,7 +1426,7 @@ // ]), // None, // ); - +// // // Here we provide correctly formatted bytecode of even length, so // // it should fail. // test_vm_with_custom_bytecode_hash( @@ -1444,7 +1445,7 @@ // ], // )), // ); - +// // // Here we provide incorrectly formatted bytecode of odd length, so // // it should fail. // test_vm_with_custom_bytecode_hash( @@ -1464,7 +1465,7 @@ // ], // )), // ); - +// // // Here we provide incorrectly formatted bytecode of odd length, so // // it should fail. // test_vm_with_custom_bytecode_hash( @@ -1485,25 +1486,25 @@ // )), // ); // } - +// // #[test] // fn test_tracing_of_execution_errors() { // // In this test, we are checking that the execution errors are transmitted correctly from the bootloader. // let (block_context, block_properties) = create_test_block_params(); // let block_context: DerivedBlockContext = block_context.into(); - +// // let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); // let db = RocksDB::new(Database::StateKeeper, temp_dir.as_ref(), false); // let mut raw_storage = SecondaryStateStorage::new(db); // insert_system_contracts(&mut raw_storage); // let private_key = H256::random(); - +// // let contract_address = Address::random(); // let error_contract = DeployedContract { // account_id: AccountTreeId::new(contract_address), // bytecode: read_error_contract(), // }; - +// // let tx = get_error_tx( // private_key, // Nonce(0), @@ -1515,16 +1516,16 @@ // gas_per_pubdata_limit: U256::from(MAX_GAS_PER_PUBDATA_BYTE), // }, // ); - +// // insert_contracts(&mut raw_storage, vec![(error_contract, false)]); - +// // let storage_ptr: &mut dyn Storage = &mut StorageView::new(&raw_storage); - +// // let key = storage_key_for_eth_balance(&tx.common_data.initiator_address); // storage_ptr.set_value(&key, u256_to_h256(U256([0, 0, 1, 0]))); - +// // let mut oracle_tools = OracleTools::new(storage_ptr, HistoryEnabled); - +// // let mut vm = init_vm_inner( // &mut oracle_tools, // BlockContextMode::NewBlock(block_context, Default::default()), @@ -1539,14 +1540,14 @@ // TxExecutionMode::VerifyExecute, // None, // ); - +// // let mut tracer = TransactionResultTracer::new(usize::MAX, false); // assert_eq!( // vm.execute_with_custom_tracer(&mut tracer), // VmExecutionStopReason::VmFinished, // "Tracer should never request stop" // ); - +// // match tracer.revert_reason { // Some(revert_reason) => { // let revert_reason = VmRevertReason::try_from(&revert_reason as &[u8]).unwrap(); @@ -1570,7 +1571,7 @@ // ), // } // let mut oracle_tools = OracleTools::new(storage_ptr, HistoryEnabled); - +// // let mut vm = init_vm_inner( // &mut oracle_tools, // BlockContextMode::NewBlock(block_context, Default::default()), @@ -1596,7 +1597,7 @@ // TxExecutionMode::VerifyExecute, // None, // ); - +// // let mut tracer = TransactionResultTracer::new(10, false); // assert_eq!( // vm.execute_with_custom_tracer(&mut tracer), @@ -1604,20 +1605,20 @@ // ); // assert!(tracer.is_limit_reached()); // } - +// // /// Checks that `TX_GAS_LIMIT_OFFSET` constant is correct. 
// #[test] // fn test_tx_gas_limit_offset() { // let gas_limit = U256::from(999999); - +// // let (block_context, block_properties) = create_test_block_params(); // let block_context: DerivedBlockContext = block_context.into(); - +// // let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); // let db = RocksDB::new(Database::StateKeeper, temp_dir.as_ref(), false); // let raw_storage = SecondaryStateStorage::new(db); // let storage_ptr: &mut dyn Storage = &mut StorageView::new(&raw_storage); - +// // let contract_code = read_test_contract(); // let tx: Transaction = get_deploy_tx( // H256::random(), @@ -1631,9 +1632,9 @@ // }, // ) // .into(); - +// // let mut oracle_tools = OracleTools::new(storage_ptr, HistoryEnabled); - +// // let mut vm = init_vm_inner( // &mut oracle_tools, // BlockContextMode::NewBlock(block_context, Default::default()), @@ -1643,7 +1644,7 @@ // TxExecutionMode::VerifyExecute, // ); // push_transaction_to_bootloader_memory(&mut vm, &tx, TxExecutionMode::VerifyExecute, None); - +// // let gas_limit_from_memory = vm // .state // .memory @@ -1654,20 +1655,20 @@ // .value; // assert_eq!(gas_limit_from_memory, gas_limit); // } - +// // #[test] // fn test_is_write_initial_behaviour() { // // In this test, we check result of `is_write_initial` at different stages. - +// // let (block_context, block_properties) = create_test_block_params(); // let block_context: DerivedBlockContext = block_context.into(); - +// // let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); // let db = RocksDB::new(Database::StateKeeper, temp_dir.as_ref(), false); // let mut raw_storage = SecondaryStateStorage::new(db); // insert_system_contracts(&mut raw_storage); // let storage_ptr: &mut dyn Storage = &mut StorageView::new(&raw_storage); - +// // let base_fee = block_context.base_fee; // let account_pk = H256::random(); // let contract_code = read_test_contract(); @@ -1685,19 +1686,19 @@ // }, // ) // .into(); - +// // let sender_address = tx.initiator_account(); // let nonce_key = get_nonce_key(&sender_address); - +// // // Check that the next write to the nonce key will be initial. // assert!(storage_ptr.is_write_initial(&nonce_key)); - +// // // Set balance to be able to pay fee for txs. // let balance_key = storage_key_for_eth_balance(&sender_address); // storage_ptr.set_value(&balance_key, u256_to_h256(U256([0, 0, 1, 0]))); - +// // let mut oracle_tools = OracleTools::new(storage_ptr, HistoryEnabled); - +// // let mut vm = init_vm_inner( // &mut oracle_tools, // BlockContextMode::NewBlock(block_context, Default::default()), @@ -1706,15 +1707,15 @@ // &BASE_SYSTEM_CONTRACTS, // TxExecutionMode::VerifyExecute, // ); - +// // push_transaction_to_bootloader_memory(&mut vm, &tx, TxExecutionMode::VerifyExecute, None); - +// // vm.execute_next_tx(u32::MAX, false) // .expect("Bootloader failed while processing the first transaction"); // // Check that `is_write_initial` still returns true for the nonce key. 
// assert!(storage_ptr.is_write_initial(&nonce_key)); // } - +// // pub fn get_l1_tx_with_custom_bytecode_hash( // bytecode_hash: U256, // block_gas_per_pubdata: u32, @@ -1723,12 +1724,12 @@ // let predefined_overhead = // tx.overhead_gas_with_custom_factory_deps(vec![bytecode_hash], block_gas_per_pubdata); // let tx_bytes = tx.abi_encode_with_custom_factory_deps(vec![bytecode_hash]); - +// // (bytes_to_be_words(tx_bytes), predefined_overhead) // } - +// // const L1_TEST_GAS_PER_PUBDATA_BYTE: u32 = 800; - +// // pub fn get_l1_execute_test_contract_tx(deployed_address: Address, with_panic: bool) -> Transaction { // let sender = H160::random(); // get_l1_execute_test_contract_tx_with_sender( @@ -1739,18 +1740,18 @@ // false, // ) // } - +// // pub fn get_l1_tx_with_large_output(sender: Address, deployed_address: Address) -> Transaction { // let test_contract = load_contract( // "etc/contracts-test-data/artifacts-zk/contracts/long-return-data/long-return-data.sol/LongReturnData.json", // ); - +// // let function = test_contract.function("longReturnData").unwrap(); - +// // let calldata = function // .encode_input(&[]) // .expect("failed to encode parameters"); - +// // Transaction { // common_data: ExecuteTransactionCommon::L1(L1TxCommonData { // sender, @@ -1767,7 +1768,7 @@ // received_timestamp_ms: 0, // } // } - +// // pub fn get_l1_execute_test_contract_tx_with_sender( // sender: Address, // deployed_address: Address, @@ -1776,7 +1777,7 @@ // payable: bool, // ) -> Transaction { // let execute = execute_test_contract(deployed_address, with_panic, value, payable); - +// // Transaction { // common_data: ExecuteTransactionCommon::L1(L1TxCommonData { // sender, @@ -1789,10 +1790,10 @@ // received_timestamp_ms: 0, // } // } - +// // pub fn get_l1_deploy_tx(code: &[u8], calldata: &[u8]) -> Transaction { // let execute = get_create_execute(code, calldata); - +// // Transaction { // common_data: ExecuteTransactionCommon::L1(L1TxCommonData { // sender: H160::random(), @@ -1804,25 +1805,25 @@ // received_timestamp_ms: 0, // } // } - +// // fn read_test_contract() -> Vec { // read_bytecode("etc/contracts-test-data/artifacts-zk/contracts/counter/counter.sol/Counter.json") // } - +// // fn read_long_return_data_contract() -> Vec { // read_bytecode("etc/contracts-test-data/artifacts-zk/contracts/long-return-data/long-return-data.sol/LongReturnData.json") // } - +// // fn read_nonce_holder_tester() -> Vec { // read_bytecode("etc/contracts-test-data/artifacts-zk/contracts/custom-account/nonce-holder-test.sol/NonceHolderTest.json") // } - +// // fn read_error_contract() -> Vec { // read_bytecode( // "etc/contracts-test-data/artifacts-zk/contracts/error/error.sol/SimpleRequire.json", // ) // } - +// // fn execute_test_contract( // address: Address, // with_panic: bool, @@ -1832,7 +1833,7 @@ // let test_contract = load_contract( // "etc/contracts-test-data/artifacts-zk/contracts/counter/counter.sol/Counter.json", // ); - +// // let function = if payable { // test_contract // .function("incrementWithRevertPayable") @@ -1840,11 +1841,11 @@ // } else { // test_contract.function("incrementWithRevert").unwrap() // }; - +// // let calldata = function // .encode_input(&[Token::Uint(U256::from(1u8)), Token::Bool(with_panic)]) // .expect("failed to encode parameters"); - +// // Execute { // contract_address: address, // calldata, @@ -1852,7 +1853,7 @@ // factory_deps: None, // } // } - +// // #[test] // fn test_call_tracer() { // let sender = H160::random(); @@ -1860,21 +1861,21 @@ // let db = 
RocksDB::new(Database::StateKeeper, temp_dir.as_ref(), false); // let mut raw_storage = SecondaryStateStorage::new(db); // insert_system_contracts(&mut raw_storage); - +// // let (block_context, block_properties) = create_test_block_params(); - +// // let contract_code = read_test_contract(); // let contract_code_hash = hash_bytecode(&contract_code); // let l1_deploy_tx = get_l1_deploy_tx(&contract_code, &[]); // let l1_deploy_tx_data: TransactionData = l1_deploy_tx.clone().into(); - +// // let sender_address_counter = l1_deploy_tx_data.from(); // let mut storage_accessor = StorageView::new(&raw_storage); // let storage_ptr: &mut dyn Storage = &mut storage_accessor; - +// // let key = storage_key_for_eth_balance(&sender_address_counter); // storage_ptr.set_value(&key, u256_to_h256(U256([0, 0, 1, 0]))); - +// // let mut oracle_tools = OracleTools::new(storage_ptr, HistoryEnabled); // oracle_tools.decommittment_processor.populate( // vec![( @@ -1883,7 +1884,7 @@ // )], // Timestamp(0), // ); - +// // let contract_code = read_long_return_data_contract(); // let contract_code_hash = hash_bytecode(&contract_code); // let l1_deploy_long_return_data_tx = get_l1_deploy_tx(&contract_code, &[]); @@ -1894,7 +1895,7 @@ // )], // Timestamp(0), // ); - +// // let tx_data: TransactionData = l1_deploy_long_return_data_tx.clone().into(); // let sender_long_return_address = tx_data.from(); // // The contract should be deployed successfully. @@ -1908,14 +1909,14 @@ // &BASE_SYSTEM_CONTRACTS, // TxExecutionMode::VerifyExecute, // ); - +// // push_transaction_to_bootloader_memory( // &mut vm, // &l1_deploy_tx, // TxExecutionMode::VerifyExecute, // None, // ); - +// // // The contract should be deployed successfully. // let deployed_address = deployed_address_create(sender_address_counter, U256::zero()); // let res = vm.execute_next_tx(u32::MAX, true).unwrap(); @@ -1956,16 +1957,16 @@ // calls: vec![], // }; // assert_eq!(create_call.unwrap(), expected); - +// // push_transaction_to_bootloader_memory( // &mut vm, // &l1_deploy_long_return_data_tx, // TxExecutionMode::VerifyExecute, // None, // ); - +// // vm.execute_next_tx(u32::MAX, false).unwrap(); - +// // let tx = get_l1_execute_test_contract_tx_with_sender( // sender, // deployed_address, @@ -1973,13 +1974,13 @@ // U256::from(1u8), // true, // ); - +// // let tx_data: TransactionData = tx.clone().into(); // push_transaction_to_bootloader_memory(&mut vm, &tx, TxExecutionMode::VerifyExecute, None); - +// // let res = vm.execute_next_tx(u32::MAX, true).unwrap(); // let calls = res.call_traces; - +// // // We don't want to compare gas used, because it's not fully deterministic. 
// let expected = Call { // r#type: CallType::Call(FarCallOpcode::Mimic), @@ -1998,7 +1999,7 @@ // revert_reason: None, // calls: vec![], // }; - +// // // First loop filter out the bootloaders calls and // // the second loop filters out the calls msg value simulator calls // for call in calls { @@ -2010,7 +2011,7 @@ // } // } // } - +// // let tx = get_l1_execute_test_contract_tx_with_sender( // sender, // deployed_address, @@ -2018,13 +2019,13 @@ // U256::from(1u8), // true, // ); - +// // let tx_data: TransactionData = tx.clone().into(); // push_transaction_to_bootloader_memory(&mut vm, &tx, TxExecutionMode::VerifyExecute, None); - +// // let res = vm.execute_next_tx(u32::MAX, true).unwrap(); // let calls = res.call_traces; - +// // let expected = Call { // r#type: CallType::Call(FarCallOpcode::Mimic), // to: deployed_address, @@ -2039,7 +2040,7 @@ // revert_reason: Some("This method always reverts".to_string()), // calls: vec![], // }; - +// // for call in calls { // if let CallType::Call(FarCallOpcode::Mimic) = call.r#type { // for call in call.calls { @@ -2049,12 +2050,12 @@ // } // } // } - +// // let tx = get_l1_tx_with_large_output(sender, deployed_address_long_return_data); - +// // let tx_data: TransactionData = tx.clone().into(); // push_transaction_to_bootloader_memory(&mut vm, &tx, TxExecutionMode::VerifyExecute, None); - +// // assert_ne!(deployed_address_long_return_data, deployed_address); // let res = vm.execute_next_tx(u32::MAX, true).unwrap(); // let calls = res.call_traces; @@ -2072,23 +2073,23 @@ // } // } // } - +// // #[test] // fn test_get_used_contracts() { // // get block context // let (block_context, block_properties) = create_test_block_params(); // let block_context: DerivedBlockContext = block_context.into(); - +// // // insert system contracts to avoid vm errors during initialization // let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); // let db = RocksDB::new(Database::StateKeeper, temp_dir.as_ref(), false); // let mut raw_storage = SecondaryStateStorage::new(db); // insert_system_contracts(&mut raw_storage); - +// // // get oracle tools // let storage_ptr: &mut dyn Storage = &mut StorageView::new(&raw_storage); // let mut oracle_tools = OracleTools::new(storage_ptr, HistoryEnabled); - +// // // init vm // let mut vm = init_vm_inner( // &mut oracle_tools, @@ -2098,23 +2099,23 @@ // &BASE_SYSTEM_CONTRACTS, // TxExecutionMode::VerifyExecute, // ); - +// // assert!(known_bytecodes_without_aa_code(&vm).is_empty()); - +// // // create and push and execute some not-empty factory deps transaction with success status // // to check that get_used_contracts() updates // let contract_code = read_test_contract(); // let contract_code_hash = hash_bytecode(&contract_code); // let tx1 = get_l1_deploy_tx(&contract_code, &[]); - +// // push_transaction_to_bootloader_memory(&mut vm, &tx1, TxExecutionMode::VerifyExecute, None); - +// // let res1 = vm.execute_next_tx(u32::MAX, true).unwrap(); // assert_eq!(res1.status, TxExecutionStatus::Success); // assert!(vm // .get_used_contracts() // .contains(&h256_to_u256(contract_code_hash))); - +// // assert_eq!( // vm.get_used_contracts() // .into_iter() @@ -2124,13 +2125,13 @@ // .cloned() // .collect::>() // ); - +// // // create push and execute some non-empty factory deps transaction that fails // // (known_bytecodes will be updated but we expect get_used_contracts() to not be updated) - +// // let mut tx2 = tx1; // tx2.execute.contract_address = L1_MESSENGER_ADDRESS; - +// // let calldata = 
vec![1, 2, 3]; // let big_calldata: Vec = calldata // .iter() @@ -2138,16 +2139,16 @@ // .take(calldata.len() * 1024) // .cloned() // .collect(); - +// // tx2.execute.calldata = big_calldata; // tx2.execute.factory_deps = Some(vec![vec![1; 32]]); - +// // push_transaction_to_bootloader_memory(&mut vm, &tx2, TxExecutionMode::VerifyExecute, None); - +// // let res2 = vm.execute_next_tx(u32::MAX, false).unwrap(); - +// // assert_eq!(res2.status, TxExecutionStatus::Failure); - +// // for factory_dep in tx2.execute.factory_deps.unwrap() { // let hash = hash_bytecode(&factory_dep); // let hash_to_u256 = h256_to_u256(hash); @@ -2157,7 +2158,7 @@ // assert!(!vm.get_used_contracts().contains(&hash_to_u256)); // } // } - +// // fn known_bytecodes_without_aa_code(vm: &VmInstance) -> HashMap> { // let mut known_bytecodes_without_aa_code = vm // .state @@ -2165,10 +2166,11 @@ // .known_bytecodes // .inner() // .clone(); - +// // known_bytecodes_without_aa_code // .remove(&h256_to_u256(BASE_SYSTEM_CONTRACTS.default_aa.hash)) // .unwrap(); - +// // known_bytecodes_without_aa_code // } +// ``` diff --git a/core/lib/multivm/src/versions/vm_m6/transaction_data.rs b/core/lib/multivm/src/versions/vm_m6/transaction_data.rs index f41afee3a40..5934199a96f 100644 --- a/core/lib/multivm/src/versions/vm_m6/transaction_data.rs +++ b/core/lib/multivm/src/versions/vm_m6/transaction_data.rs @@ -1,12 +1,16 @@ use zk_evm_1_3_1::zkevm_opcode_defs::system_params::MAX_TX_ERGS_LIMIT; -use zksync_types::ethabi::{encode, Address, Token}; -use zksync_types::fee::encoding_len; -use zksync_types::l1::is_l1_tx_type; -use zksync_types::{l2::TransactionType, ExecuteTransactionCommon, Transaction, U256}; -use zksync_types::{MAX_L2_TX_GAS_LIMIT, MAX_TXS_IN_BLOCK}; -use zksync_utils::{address_to_h256, ceil_div_u256}; -use zksync_utils::{bytecode::hash_bytecode, bytes_to_be_words, h256_to_u256}; +use zksync_types::{ + ethabi::{encode, Address, Token}, + fee::encoding_len, + l1::is_l1_tx_type, + l2::TransactionType, + ExecuteTransactionCommon, Transaction, MAX_L2_TX_GAS_LIMIT, U256, +}; +use zksync_utils::{ + address_to_h256, bytecode::hash_bytecode, bytes_to_be_words, ceil_div_u256, h256_to_u256, +}; +use super::vm_with_bootloader::{MAX_GAS_PER_PUBDATA_BYTE, MAX_TXS_IN_BLOCK}; use crate::vm_m6::vm_with_bootloader::{ BLOCK_OVERHEAD_GAS, BLOCK_OVERHEAD_PUBDATA, BOOTLOADER_TX_ENCODING_SPACE, }; @@ -57,12 +61,22 @@ impl From for TransactionData { U256::zero() }; + // Ethereum transactions do not sign gas per pubdata limit, and so for them we need to use + // some default value. We use the maximum possible value that is allowed by the bootloader + // (i.e. we can not use u64::MAX, because the bootloader requires gas per pubdata for such + // transactions to be higher than `MAX_GAS_PER_PUBDATA_BYTE`). 
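A minimal standalone sketch of this defaulting rule, under assumed simplified types (the `TxKind` enum and the `MAX_GAS_PER_PUBDATA_BYTE` value below are illustrative stand-ins, not the real zksync definitions); the hunk that follows implements the rule on the real `TransactionData`:

```rust
// Illustrative stand-ins; the real constant and transaction types live in `zksync_types`.
const MAX_GAS_PER_PUBDATA_BYTE: u64 = 50_000;

#[derive(Clone, Copy, Debug, PartialEq)]
enum TxKind {
    EthereumSigned, // legacy/EIP-1559 style: gas per pubdata is not part of the signed payload
    ZkSyncNative,   // EIP-712 style: gas per pubdata is signed explicitly
}

// Ethereum-style transactions carry no signed gas-per-pubdata limit, so the maximum
// value the bootloader accepts is substituted; native transactions keep their own limit.
fn effective_gas_per_pubdata(kind: TxKind, signed_limit: u64) -> u64 {
    match kind {
        TxKind::EthereumSigned => MAX_GAS_PER_PUBDATA_BYTE,
        TxKind::ZkSyncNative => signed_limit,
    }
}

fn main() {
    assert_eq!(effective_gas_per_pubdata(TxKind::EthereumSigned, 0), 50_000);
    assert_eq!(effective_gas_per_pubdata(TxKind::ZkSyncNative, 800), 800);
}
```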
+ let gas_per_pubdata_limit = if common_data.transaction_type.is_ethereum_type() { + MAX_GAS_PER_PUBDATA_BYTE.into() + } else { + common_data.fee.gas_per_pubdata_limit + }; + TransactionData { tx_type: (common_data.transaction_type as u32) as u8, from: execute_tx.initiator_account(), to: execute_tx.execute.contract_address, gas_limit: common_data.fee.gas_limit, - pubdata_price_limit: common_data.fee.gas_per_pubdata_limit, + pubdata_price_limit: gas_per_pubdata_limit, max_fee_per_gas: common_data.fee.max_fee_per_gas, max_priority_fee_per_gas: common_data.fee.max_priority_fee_per_gas, paymaster: common_data.paymaster_params.paymaster, @@ -213,12 +227,12 @@ impl TransactionData { self.reserved_dynamic.len() as u64, ); - let coeficients = OverheadCoeficients::from_tx_type(self.tx_type); + let coefficients = OverheadCoefficients::from_tx_type(self.tx_type); get_amortized_overhead( total_gas_limit, gas_price_per_pubdata, encoded_len, - coeficients, + coefficients, ) } @@ -228,13 +242,13 @@ impl TransactionData { } } -pub fn derive_overhead( +pub(crate) fn derive_overhead( gas_limit: u32, gas_price_per_pubdata: u32, encoded_len: usize, - coeficients: OverheadCoeficients, + coefficients: OverheadCoefficients, ) -> u32 { - // Even if the gas limit is greater than the MAX_TX_ERGS_LIMIT, we assume that everything beyond MAX_TX_ERGS_LIMIT + // Even if the gas limit is greater than the `MAX_TX_ERGS_LIMIT`, we assume that everything beyond `MAX_TX_ERGS_LIMIT` // will be spent entirely on publishing bytecodes and so we derive the overhead solely based on the capped value let gas_limit = std::cmp::min(MAX_TX_ERGS_LIMIT, gas_limit); @@ -243,8 +257,8 @@ pub fn derive_overhead( let gas_limit = U256::from(gas_limit); let encoded_len = U256::from(encoded_len); - // The MAX_TX_ERGS_LIMIT is formed in a way that may fullfills a single-instance circuits - // if used in full. That is, within MAX_TX_ERGS_LIMIT it is possible to fully saturate all the single-instance + // The `MAX_TX_ERGS_LIMIT` is formed in a way that may fulfill the single-instance circuits + // if used in full. That is, within `MAX_TX_ERGS_LIMIT` it is possible to fully saturate all the single-instance // circuits.
let overhead_for_single_instance_circuits = ceil_div_u256(gas_limit * max_block_overhead, MAX_TX_ERGS_LIMIT.into()); @@ -258,42 +272,43 @@ pub fn derive_overhead( // The overhead for occupying a single tx slot let tx_slot_overhead = ceil_div_u256(max_block_overhead, MAX_TXS_IN_BLOCK.into()); - // We use "ceil" here for formal reasons to allow easier approach for calculating the overhead in O(1) - // let max_pubdata_in_tx = ceil_div_u256(gas_limit, gas_price_per_pubdata); + // We use `ceil` here for formal reasons to allow easier approach for calculating the overhead in O(1) + // `let max_pubdata_in_tx = ceil_div_u256(gas_limit, gas_price_per_pubdata);` // The maximal potential overhead from pubdata // TODO (EVM-67): possibly use overhead for pubdata + // ``` // let pubdata_overhead = ceil_div_u256( // max_pubdata_in_tx * max_block_overhead, // MAX_PUBDATA_PER_BLOCK.into(), // ); - + // ``` vec![ - (coeficients.ergs_limit_overhead_coeficient + (coefficients.ergs_limit_overhead_coeficient * overhead_for_single_instance_circuits.as_u32() as f64) .floor() as u32, - (coeficients.bootloader_memory_overhead_coeficient * overhead_for_length.as_u32() as f64) + (coefficients.bootloader_memory_overhead_coeficient * overhead_for_length.as_u32() as f64) .floor() as u32, - (coeficients.slot_overhead_coeficient * tx_slot_overhead.as_u32() as f64) as u32, + (coefficients.slot_overhead_coeficient * tx_slot_overhead.as_u32() as f64) as u32, ] .into_iter() .max() .unwrap() } -/// Contains the coeficients with which the overhead for transactions will be calculated. -/// All of the coeficients should be <= 1. There are here to provide a certain "discount" for normal transactions +/// Contains the coefficients with which the overhead for transactions will be calculated. +/// All of the coefficients should be <= 1. They are here to provide a certain "discount" for normal transactions /// at the risk of malicious transactions that may close the block prematurely. -/// IMPORTANT: to perform correct computations, `MAX_TX_ERGS_LIMIT / coeficients.ergs_limit_overhead_coeficient` MUST +/// IMPORTANT: to perform correct computations, `MAX_TX_ERGS_LIMIT / coefficients.ergs_limit_overhead_coefficient` MUST /// result in an integer number #[derive(Debug, Clone, Copy)] -pub struct OverheadCoeficients { +pub struct OverheadCoefficients { slot_overhead_coeficient: f64, bootloader_memory_overhead_coeficient: f64, ergs_limit_overhead_coeficient: f64, } -impl OverheadCoeficients { +impl OverheadCoefficients { // This method ensures that the parameters keep the required invariants fn new_checked( slot_overhead_coeficient: f64, @@ -315,15 +330,15 @@ impl OverheadCoeficients { // L1->L2 do not receive any discounts fn new_l1() -> Self { - OverheadCoeficients::new_checked(1.0, 1.0, 1.0) + OverheadCoefficients::new_checked(1.0, 1.0, 1.0) } fn new_l2() -> Self { - OverheadCoeficients::new_checked( + OverheadCoefficients::new_checked( 1.0, 1.0, // For L2 transactions we allow a certain default discount with regard to the number of ergs. - // Multiinstance circuits can in theory be spawned infinite times, while projected future limitations - // on gas per pubdata allow for roughly 800kk gas per L1 batch, so the rough trust "discount" on the proof's part + // Multi-instance circuits can in theory be spawned infinite times, while projected future limitations + // on gas per pubdata allow for roughly 800k gas per L1 batch, so the rough trust "discount" on the proof's part // to be paid by the users is 0.1.
0.1, ) @@ -343,7 +358,7 @@ pub fn get_amortized_overhead( total_gas_limit: u32, gas_per_pubdata_byte_limit: u32, encoded_len: usize, - coeficients: OverheadCoeficients, + coefficients: OverheadCoefficients, ) -> u32 { // Using large U256 type to prevent overflows. let overhead_for_block_gas = U256::from(block_overhead_gas(gas_per_pubdata_byte_limit)); @@ -351,28 +366,28 @@ pub fn get_amortized_overhead( let encoded_len = U256::from(encoded_len); // Derivation of overhead consists of 4 parts: - // 1. The overhead for taking up a transaction's slot. (O1): O1 = 1 / MAX_TXS_IN_BLOCK - // 2. The overhead for taking up the bootloader's memory (O2): O2 = encoded_len / BOOTLOADER_TX_ENCODING_SPACE - // 3. The overhead for possible usage of pubdata. (O3): O3 = max_pubdata_in_tx / MAX_PUBDATA_PER_BLOCK - // 4. The overhead for possible usage of all the single-instance circuits. (O4): O4 = gas_limit / MAX_TX_ERGS_LIMIT + // 1. The overhead for taking up a transaction's slot. `(O1): O1 = 1 / MAX_TXS_IN_BLOCK` + // 2. The overhead for taking up the bootloader's memory `(O2): O2 = encoded_len / BOOTLOADER_TX_ENCODING_SPACE` + // 3. The overhead for possible usage of pubdata. `(O3): O3 = max_pubdata_in_tx / MAX_PUBDATA_PER_BLOCK` + // 4. The overhead for possible usage of all the single-instance circuits. `(O4): O4 = gas_limit / MAX_TX_ERGS_LIMIT` // // The maximum of these is taken to derive the part of the block's overhead to be paid by the users: // - // max_overhead = max(O1, O2, O3, O4) - // overhead_gas = ceil(max_overhead * overhead_for_block_gas). Thus, overhead_gas is a function of - // tx_gas_limit, gas_per_pubdata_byte_limit and encoded_len. + // `max_overhead = max(O1, O2, O3, O4)` + // `overhead_gas = ceil(max_overhead * overhead_for_block_gas)`. Thus, `overhead_gas` is a function of + // `tx_gas_limit`, `gas_per_pubdata_byte_limit` and `encoded_len`. // - // While it is possible to derive the overhead with binary search in O(log n), it is too expensive to be done + // While it is possible to derive the overhead with binary search in `O(log n)`, it is too expensive to be done // on L1, so here is a reference implementation of finding the overhead for transaction in O(1): // - // Given total_gas_limit = tx_gas_limit + overhead_gas, we need to find overhead_gas and tx_gas_limit, such that: - // 1. overhead_gas is maximal possible (the operator is paid fairly) - // 2. overhead_gas(tx_gas_limit, gas_per_pubdata_byte_limit, encoded_len) >= overhead_gas (the user does not overpay) + // Given `total_gas_limit = tx_gas_limit + overhead_gas`, we need to find `overhead_gas` and `tx_gas_limit`, such that: + // 1. `overhead_gas` is maximal possible (the operator is paid fairly) + // 2. 
`overhead_gas(tx_gas_limit, gas_per_pubdata_byte_limit, encoded_len) >= overhead_gas` (the user does not overpay) // The third part boils down to the following 4 inequalities (at least one of these must hold): - ceil(O1 * overhead_for_block_gas) >= overhead_gas - ceil(O2 * overhead_for_block_gas) >= overhead_gas - ceil(O3 * overhead_for_block_gas) >= overhead_gas - ceil(O4 * overhead_for_block_gas) >= overhead_gas + `ceil(O1 * overhead_for_block_gas) >= overhead_gas` + `ceil(O2 * overhead_for_block_gas) >= overhead_gas` + `ceil(O3 * overhead_for_block_gas) >= overhead_gas` + `ceil(O4 * overhead_for_block_gas) >= overhead_gas` // // Now, we need to solve each of these separately: @@ -380,10 +395,10 @@ pub fn get_amortized_overhead( let tx_slot_overhead = { let tx_slot_overhead = ceil_div_u256(overhead_for_block_gas, MAX_TXS_IN_BLOCK.into()).as_u32(); - (coeficients.slot_overhead_coeficient * tx_slot_overhead as f64).floor() as u32 + (coefficients.slot_overhead_coeficient * tx_slot_overhead as f64).floor() as u32 }; - // 2. The overhead for occupying the bootloader memory can be derived from encoded_len + // 2. The overhead for occupying the bootloader memory can be derived from `encoded_len` let overhead_for_length = { let overhead_for_length = ceil_div_u256( encoded_len * overhead_for_block_gas, @@ -391,20 +406,23 @@ ) .as_u32(); - (coeficients.bootloader_memory_overhead_coeficient * overhead_for_length as f64).floor() + (coefficients.bootloader_memory_overhead_coeficient * overhead_for_length as f64).floor() as u32 }; // TODO (EVM-67): possibly include the overhead for pubdata. The formula below has not been properly maintained, - // since the pubdat is not published. If decided to use the pubdata overhead, it needs to be updated. + // since the pubdata is not published. If decided to use the pubdata overhead, it needs to be updated. + // ``` // 3. ceil(O3 * overhead_for_block_gas) >= overhead_gas // O3 = max_pubdata_in_tx / MAX_PUBDATA_PER_BLOCK = ceil(gas_limit / gas_per_pubdata_byte_limit) / MAX_PUBDATA_PER_BLOCK - // >= (gas_limit / (gas_per_pubdata_byte_limit * MAX_PUBDATA_PER_BLOCK). Throwing off the `ceil`, while may provide marginally lower + // >= (gas_limit / (gas_per_pubdata_byte_limit * MAX_PUBDATA_PER_BLOCK). + // ``` + // Throwing off the `ceil`, while it may provide marginally lower // overhead to the operator, provides a substantially easier formula to work with. // - // For better clarity, let's denote gas_limit = GL, MAX_PUBDATA_PER_BLOCK = MP, gas_per_pubdata_byte_limit = EP, overhead_for_block_gas = OB, total_gas_limit = TL, overhead_gas = OE - // ceil(OB * (TL - OE) / (EP * MP)) >= OE - // + // For better clarity, let's denote `gas_limit = GL, MAX_PUBDATA_PER_BLOCK = MP, gas_per_pubdata_byte_limit = EP, overhead_for_block_gas = OB, total_gas_limit = TL, overhead_gas = OE` + // `ceil(OB * (TL - OE) / (EP * MP)) >= OE` + // ``` // OB * (TL - OE) / (MP * EP) > OE - 1 // OB * (TL - OE) > (OE - 1) * EP * MP // OB * TL + EP * MP > OE * EP * MP + OE * OB // @@ -415,7 +433,7 @@ // + gas_per_pubdata_byte_limit * U256::from(MAX_PUBDATA_PER_BLOCK); // let denominator = // gas_per_pubdata_byte_limit * U256::from(MAX_PUBDATA_PER_BLOCK) + overhead_for_block_gas; - + // // // Corner case: if `total_gas_limit` = `gas_per_pubdata_byte_limit` = 0 // // then the numerator will be 0 and subtracting 1 will cause a panic, so we just return a zero.
// if numerator.is_zero() { @@ -424,7 +442,7 @@ pub fn get_amortized_overhead( // (numerator - 1) / denominator // } // }; - + // // 4. K * ceil(O4 * overhead_for_block_gas) >= overhead_gas, where K is the discount // O4 = gas_limit / MAX_TX_ERGS_LIMIT. Using the notation from the previous equation: // ceil(OB * GL / MAX_TX_ERGS_LIMIT) >= (OE / K) @@ -433,10 +451,11 @@ pub fn get_amortized_overhead( // OB * (TL - OE) > (OE/K) * MAX_TX_ERGS_LIMIT - MAX_TX_ERGS_LIMIT // OB * TL + MAX_TX_ERGS_LIMIT > OE * ( MAX_TX_ERGS_LIMIT/K + OB) // OE = floor(OB * TL + MAX_TX_ERGS_LIMIT / (MAX_TX_ERGS_LIMIT/K + OB)), with possible -1 if the division is without remainder + // ``` let overhead_for_gas = { let numerator = overhead_for_block_gas * total_gas_limit + U256::from(MAX_TX_ERGS_LIMIT); let denominator: U256 = U256::from( - (MAX_TX_ERGS_LIMIT as f64 / coeficients.ergs_limit_overhead_coeficient) as u64, + (MAX_TX_ERGS_LIMIT as f64 / coefficients.ergs_limit_overhead_coeficient) as u64, ) + overhead_for_block_gas; let overhead_for_gas = (numerator - 1) / denominator; @@ -447,21 +466,21 @@ pub fn get_amortized_overhead( let overhead = vec![tx_slot_overhead, overhead_for_length, overhead_for_gas] .into_iter() .max() - // For the sake of consistency making sure that total_gas_limit >= max_overhead + // For the sake of consistency making sure that `total_gas_limit >= max_overhead` .map(|max_overhead| std::cmp::min(max_overhead, total_gas_limit.as_u32())) .unwrap(); let limit_after_deducting_overhead = total_gas_limit - overhead; // During double checking of the overhead, the bootloader will assume that the - // body of the transaction does not have any more than MAX_L2_TX_GAS_LIMIT ergs available to it. + // body of the transaction does not have any more than `MAX_L2_TX_GAS_LIMIT` ergs available to it. if limit_after_deducting_overhead.as_u64() > MAX_L2_TX_GAS_LIMIT { - // We derive the same overhead that would exist for the MAX_L2_TX_GAS_LIMIT ergs + // We derive the same overhead that would exist for the `MAX_L2_TX_GAS_LIMIT` ergs derive_overhead( MAX_L2_TX_GAS_LIMIT as u32, gas_per_pubdata_byte_limit, encoded_len.as_usize(), - coeficients, + coefficients, ) } else { overhead @@ -484,14 +503,14 @@ mod tests { total_gas_limit: u32, gas_per_pubdata_byte_limit: u32, encoded_len: usize, - coeficients: OverheadCoeficients, + coefficients: OverheadCoefficients, ) -> u32 { let mut left_bound = if MAX_TX_ERGS_LIMIT < total_gas_limit { total_gas_limit - MAX_TX_ERGS_LIMIT } else { 0u32 }; - // Safe cast: the gas_limit for a transaction can not be larger than 2^32 + // Safe cast: the gas_limit for a transaction can not be larger than `2^32` let mut right_bound = total_gas_limit; // The closure returns whether a certain overhead would be accepted by the bootloader. 
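To make the closed-form solve above concrete, here is a small self-contained sketch of the gas-driven component (inequality 4, with discount `K`), mirroring the `(numerator - 1) / denominator` shape used above; the `MAX_TX_ERGS_LIMIT` value and the toy inputs are illustrative:

```rust
const MAX_TX_ERGS_LIMIT: u128 = 80_000_000; // illustrative value for this sketch

// Closed-form solve for the gas-driven overhead component (inequality 4 above):
// OE = floor((OB * TL + MAX_TX_ERGS_LIMIT) / (MAX_TX_ERGS_LIMIT / K + OB)).
fn overhead_for_gas(ob: u128, tl: u128, k: f64) -> u128 {
    let numerator = ob * tl + MAX_TX_ERGS_LIMIT;
    let denominator = (MAX_TX_ERGS_LIMIT as f64 / k) as u128 + ob;
    // `(numerator - 1) / denominator` shaves one off exactly when the division is exact.
    (numerator - 1) / denominator
}

fn main() {
    // K = 1.0 (no discount, the L1 case) with toy numbers: OB = 1_200_000, TL = 60_000_000.
    let (ob, tl) = (1_200_000u128, 60_000_000u128);
    let oe = overhead_for_gas(ob, tl, 1.0);

    // Defining property: charging `oe` still satisfies
    // ceil(OB * (TL - OE) / MAX_TX_ERGS_LIMIT) >= OE, while `oe + 1` would overcharge.
    let derived = |gas: u128| (ob * gas + MAX_TX_ERGS_LIMIT - 1) / MAX_TX_ERGS_LIMIT;
    assert!(derived(tl - oe) >= oe);
    assert!(derived(tl - (oe + 1)) < oe + 1);
    println!("overhead_for_gas = {oe}"); // 886_700 for these inputs
}
```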
@@ -502,7 +521,7 @@ mod tests { total_gas_limit - suggested_overhead, gas_per_pubdata_byte_limit, encoded_len, - coeficients, + coefficients, ); derived_overhead >= suggested_overhead @@ -531,41 +550,41 @@ mod tests { let test_params = |total_gas_limit: u32, gas_per_pubdata: u32, encoded_len: usize, - coeficients: OverheadCoeficients| { + coefficients: OverheadCoefficients| { let result_by_efficient_search = - get_amortized_overhead(total_gas_limit, gas_per_pubdata, encoded_len, coeficients); + get_amortized_overhead(total_gas_limit, gas_per_pubdata, encoded_len, coefficients); let result_by_binary_search = get_maximal_allowed_overhead_bin_search( total_gas_limit, gas_per_pubdata, encoded_len, - coeficients, + coefficients, ); assert_eq!(result_by_efficient_search, result_by_binary_search); }; // Some arbitrary test - test_params(60_000_000, 800, 2900, OverheadCoeficients::new_l2()); + test_params(60_000_000, 800, 2900, OverheadCoefficients::new_l2()); // Very small parameters - test_params(0, 1, 12, OverheadCoeficients::new_l2()); + test_params(0, 1, 12, OverheadCoefficients::new_l2()); // Relatively big parameters let max_tx_overhead = derive_overhead( MAX_TX_ERGS_LIMIT, 5000, 10000, - OverheadCoeficients::new_l2(), + OverheadCoefficients::new_l2(), ); test_params( MAX_TX_ERGS_LIMIT + max_tx_overhead, 5000, 10000, - OverheadCoeficients::new_l2(), + OverheadCoefficients::new_l2(), ); - test_params(115432560, 800, 2900, OverheadCoeficients::new_l1()); + test_params(115432560, 800, 2900, OverheadCoefficients::new_l1()); } #[test] diff --git a/core/lib/multivm/src/versions/vm_m6/utils.rs b/core/lib/multivm/src/versions/vm_m6/utils.rs index a8ed8b02a52..4cabab82c9c 100644 --- a/core/lib/multivm/src/versions/vm_m6/utils.rs +++ b/core/lib/multivm/src/versions/vm_m6/utils.rs @@ -1,15 +1,7 @@ -use crate::vm_m6::history_recorder::HistoryMode; -use crate::vm_m6::{ - memory::SimpleMemory, oracles::tracer::PubdataSpentTracer, vm_with_bootloader::BlockContext, - VmInstance, -}; use once_cell::sync::Lazy; - -use crate::glue::GlueInto; -use crate::vm_m6::storage::Storage; -use zk_evm_1_3_1::block_properties::BlockProperties; use zk_evm_1_3_1::{ aux_structures::{LogQuery, MemoryPage, Timestamp}, + block_properties::BlockProperties, vm_state::PrimitiveValue, zkevm_opcode_defs::FatPointer, }; @@ -18,6 +10,14 @@ use zksync_system_constants::ZKPORTER_IS_AVAILABLE; use zksync_types::{Address, StorageLogQuery, H160, MAX_L2_TX_GAS_LIMIT, U256}; use zksync_utils::h256_to_u256; +use crate::{ + glue::GlueInto, + vm_m6::{ + history_recorder::HistoryMode, memory::SimpleMemory, oracles::tracer::PubdataSpentTracer, + storage::Storage, vm_with_bootloader::BlockContext, VmInstance, + }, +}; + pub const INITIAL_TIMESTAMP: u32 = 1024; pub const INITIAL_MEMORY_COUNTER: u32 = 2048; pub const INITIAL_CALLDATA_PAGE: u32 = 7; @@ -227,7 +227,7 @@ pub fn collect_log_queries_after_timestamp( /// Receives sorted slice of timestamps. /// Returns count of timestamps that are greater than or equal to `from_timestamp`. -/// Works in O(log(sorted_timestamps.len())). +/// Works in `O(log(sorted_timestamps.len()))`. 
pub fn precompile_calls_count_after_timestamp( sorted_timestamps: &[Timestamp], from_timestamp: Timestamp, @@ -258,8 +258,8 @@ pub fn create_test_block_params() -> (BlockContext, BlockProperties) { pub fn read_bootloader_test_code(test: &str) -> Vec { read_zbin_bytecode(format!( - "etc/system-contracts/bootloader/tests/artifacts/{}.yul/{}.yul.zbin", - test, test + "contracts/system-contracts/bootloader/tests/artifacts/{}.yul.zbin", + test )) } diff --git a/core/lib/multivm/src/versions/vm_m6/vm.rs b/core/lib/multivm/src/versions/vm_m6/vm.rs index 2937b621a9a..25a922ee510 100644 --- a/core/lib/multivm/src/versions/vm_m6/vm.rs +++ b/core/lib/multivm/src/versions/vm_m6/vm.rs @@ -1,23 +1,24 @@ -use crate::interface::{ - BootloaderMemory, BytecodeCompressionError, CurrentExecutionState, FinishedL1Batch, L1BatchEnv, - L2BlockEnv, SystemEnv, TxExecutionMode, VmExecutionMode, VmExecutionResultAndLogs, VmInterface, - VmInterfaceHistoryEnabled, VmMemoryMetrics, -}; - use std::collections::HashSet; use zksync_state::StoragePtr; -use zksync_types::l2_to_l1_log::{L2ToL1Log, UserL2ToL1Log}; -use zksync_types::{Transaction, VmVersion}; -use zksync_utils::bytecode::{hash_bytecode, CompressedBytecodeInfo}; -use zksync_utils::{h256_to_u256, u256_to_h256}; +use zksync_types::{ + l2_to_l1_log::{L2ToL1Log, UserL2ToL1Log}, + Transaction, VmVersion, +}; +use zksync_utils::{ + bytecode::{hash_bytecode, CompressedBytecodeInfo}, + h256_to_u256, u256_to_h256, +}; -use crate::glue::history_mode::HistoryMode; -use crate::glue::GlueInto; -use crate::vm_m6::events::merge_events; -use crate::vm_m6::storage::Storage; -use crate::vm_m6::vm_instance::MultiVMSubversion; -use crate::vm_m6::VmInstance; +use crate::{ + glue::{history_mode::HistoryMode, GlueInto}, + interface::{ + BootloaderMemory, BytecodeCompressionError, CurrentExecutionState, FinishedL1Batch, + L1BatchEnv, L2BlockEnv, SystemEnv, TxExecutionMode, VmExecutionMode, + VmExecutionResultAndLogs, VmInterface, VmInterfaceHistoryEnabled, VmMemoryMetrics, + }, + vm_m6::{events::merge_events, storage::Storage, vm_instance::MultiVMSubversion, VmInstance}, +}; #[derive(Debug)] pub struct Vm { @@ -166,7 +167,7 @@ impl VmInterface for Vm { system_logs: vec![], total_log_queries, cycles_used: self.vm.state.local_state.monotonic_cycle_counter, - // It's not applicable for vm6 + // It's not applicable for `vm6` deduplicated_events_logs: vec![], storage_refunds: vec![], user_l2_to_l1_logs: l2_to_l1_logs, @@ -178,7 +179,10 @@ impl VmInterface for Vm { _tracer: Self::TracerDispatcher, tx: Transaction, with_compression: bool, - ) -> Result { + ) -> ( + Result<(), BytecodeCompressionError>, + VmExecutionResultAndLogs, + ) { self.last_tx_compressed_bytecodes = vec![]; let bytecodes = if with_compression { let deps = tx.execute.factory_deps.as_deref().unwrap_or_default(); @@ -217,17 +221,31 @@ impl VmInterface for Vm { }; // Even that call tracer is supported here, we don't use it. 
- let result = self.vm.execute_next_tx( - self.system_env.default_validation_computational_gas_limit, - false, - ); + let result = match self.system_env.execution_mode { + TxExecutionMode::VerifyExecute => self + .vm + .execute_next_tx( + self.system_env.default_validation_computational_gas_limit, + false, + ) + .glue_into(), + TxExecutionMode::EstimateFee | TxExecutionMode::EthCall => self + .vm + .execute_till_block_end( + crate::vm_m6::vm_with_bootloader::BootloaderJobType::TransactionExecution, + ) + .glue_into(), + }; if bytecodes .iter() .any(|info| !self.vm.is_bytecode_exists(info)) { - Err(crate::interface::BytecodeCompressionError::BytecodeCompressionFailed) + ( + Err(BytecodeCompressionError::BytecodeCompressionFailed), + result, + ) } else { - Ok(result.glue_into()) + (Ok(()), result) } } diff --git a/core/lib/multivm/src/versions/vm_m6/vm_instance.rs b/core/lib/multivm/src/versions/vm_m6/vm_instance.rs index cfb5bac806d..1e792d308f1 100644 --- a/core/lib/multivm/src/versions/vm_m6/vm_instance.rs +++ b/core/lib/multivm/src/versions/vm_m6/vm_instance.rs @@ -1,45 +1,53 @@ -use std::convert::TryFrom; -use std::fmt::Debug; - -use crate::glue::GlueInto; -use zk_evm_1_3_1::aux_structures::Timestamp; -use zk_evm_1_3_1::vm_state::{PrimitiveValue, VmLocalState, VmState}; -use zk_evm_1_3_1::witness_trace::DummyTracer; -use zk_evm_1_3_1::zkevm_opcode_defs::decoding::{ - AllowedPcOrImm, EncodingModeProduction, VmEncodingMode, +use std::{convert::TryFrom, fmt::Debug}; + +use zk_evm_1_3_1::{ + aux_structures::Timestamp, + vm_state::{PrimitiveValue, VmLocalState, VmState}, + witness_trace::DummyTracer, + zkevm_opcode_defs::{ + decoding::{AllowedPcOrImm, EncodingModeProduction, VmEncodingMode}, + definitions::RET_IMPLICIT_RETURNDATA_PARAMS_REGISTER, + }, }; -use zk_evm_1_3_1::zkevm_opcode_defs::definitions::RET_IMPLICIT_RETURNDATA_PARAMS_REGISTER; -use zksync_system_constants::MAX_TXS_IN_BLOCK; -use zksync_types::l2_to_l1_log::{L2ToL1Log, UserL2ToL1Log}; -use zksync_types::tx::tx_execution_info::TxExecutionStatus; -use zksync_types::vm_trace::{Call, VmExecutionTrace, VmTrace}; -use zksync_types::{L1BatchNumber, StorageLogQuery, VmEvent, H256, U256}; - -use crate::interface::types::outputs::VmExecutionLogs; -use crate::vm_m6::bootloader_state::BootloaderState; -use crate::vm_m6::errors::{TxRevertReason, VmRevertReason, VmRevertReasonParsingResult}; -use crate::vm_m6::event_sink::InMemoryEventSink; -use crate::vm_m6::events::merge_events; -use crate::vm_m6::history_recorder::{HistoryEnabled, HistoryMode}; -use crate::vm_m6::memory::SimpleMemory; -use crate::vm_m6::oracles::decommitter::DecommitterOracle; -use crate::vm_m6::oracles::precompile::PrecompilesProcessorWithHistory; -use crate::vm_m6::oracles::storage::StorageOracle; -use crate::vm_m6::oracles::tracer::{ - BootloaderTracer, ExecutionEndTracer, OneTxTracer, PendingRefundTracer, PubdataSpentTracer, - StorageInvocationTracer, TransactionResultTracer, ValidationError, ValidationTracer, - ValidationTracerParams, +use zksync_types::{ + l2_to_l1_log::{L2ToL1Log, UserL2ToL1Log}, + tx::tx_execution_info::TxExecutionStatus, + vm_trace::{Call, VmExecutionTrace, VmTrace}, + L1BatchNumber, StorageLogQuery, VmEvent, H256, U256, }; -use crate::vm_m6::oracles::OracleWithHistory; -use crate::vm_m6::storage::Storage; -use crate::vm_m6::utils::{ - calculate_computational_gas_used, collect_log_queries_after_timestamp, - collect_storage_log_queries_after_timestamp, dump_memory_page_using_primitive_value, - precompile_calls_count_after_timestamp, -}; -use 
crate::vm_m6::vm_with_bootloader::{ - BootloaderJobType, DerivedBlockContext, TxExecutionMode, BOOTLOADER_HEAP_PAGE, - OPERATOR_REFUNDS_OFFSET, + +use crate::{ + glue::GlueInto, + interface::types::outputs::VmExecutionLogs, + vm_m6::{ + bootloader_state::BootloaderState, + errors::{TxRevertReason, VmRevertReason, VmRevertReasonParsingResult}, + event_sink::InMemoryEventSink, + events::merge_events, + history_recorder::{HistoryEnabled, HistoryMode}, + memory::SimpleMemory, + oracles::{ + decommitter::DecommitterOracle, + precompile::PrecompilesProcessorWithHistory, + storage::StorageOracle, + tracer::{ + BootloaderTracer, ExecutionEndTracer, OneTxTracer, PendingRefundTracer, + PubdataSpentTracer, StorageInvocationTracer, TransactionResultTracer, + ValidationError, ValidationTracer, ValidationTracerParams, + }, + OracleWithHistory, + }, + storage::Storage, + utils::{ + calculate_computational_gas_used, collect_log_queries_after_timestamp, + collect_storage_log_queries_after_timestamp, dump_memory_page_using_primitive_value, + precompile_calls_count_after_timestamp, + }, + vm_with_bootloader::{ + BootloaderJobType, DerivedBlockContext, TxExecutionMode, BOOTLOADER_HEAP_PAGE, + OPERATOR_REFUNDS_OFFSET, + }, + }, }; pub type ZkSyncVmState = VmState< @@ -103,7 +111,7 @@ pub struct VmExecutionResult { pub l2_to_l1_logs: Vec, pub return_data: Vec, - /// Value denoting the amount of gas spent withing VM invocation. + /// Value denoting the amount of gas spent within VM invocation. /// Note that return value represents the difference between the amount of gas /// available to VM before and after execution. /// @@ -171,6 +179,7 @@ pub enum VmExecutionStopReason { TracerRequestedStop, } +use super::vm_with_bootloader::MAX_TXS_IN_BLOCK; use crate::vm_m6::utils::VmExecutionResult as NewVmExecutionResult; fn vm_may_have_ended_inner( @@ -193,7 +202,7 @@ fn vm_may_have_ended_inner( } (false, _) => None, (true, l) if l == outer_eh_location => { - // check r1,r2,r3 + // check `r1,r2,r3` if vm.local_state.flags.overflow_or_less_than_flag { Some(NewVmExecutionResult::Panic) } else { @@ -226,7 +235,7 @@ fn vm_may_have_ended( NewVmExecutionResult::Ok(data) => { Some(VmExecutionResult { // The correct `events` value for this field should be set separately - // later on based on the information inside the event_sink oracle. + // later on based on the information inside the `event_sink` oracle. events: vec![], storage_log_queries: vm.get_final_log_queries(), used_contract_hashes: vm.get_used_contracts(), @@ -383,7 +392,7 @@ impl VmInstance { } } - /// Removes the latest snapshot without rollbacking to it. + /// Removes the latest snapshot without rolling back to it. /// This function expects that there is at least one snapshot present. pub fn pop_snapshot_no_rollback(&mut self) { self.snapshots.pop().unwrap(); @@ -499,8 +508,8 @@ impl VmInstance { ); } - // This means that the bootloader has informed the system (usually via VMHooks) - that some gas - // should be refunded back (see askOperatorForRefund in bootloader.yul for details). + // This means that the bootloader has informed the system (usually via `VMHooks`) - that some gas + // should be refunded back (see `askOperatorForRefund` in `bootloader.yul` for details). if let Some(bootloader_refund) = tracer.requested_refund() { assert!( operator_refund.is_none(), @@ -596,8 +605,8 @@ impl VmInstance { /// Panics if there are no new transactions in bootloader. /// Internally uses the OneTxTracer to stop the VM when the last opcode from the transaction is reached. 
// Err when transaction is rejected. - // Ok(status: TxExecutionStatus::Success) when the transaction succeeded - // Ok(status: TxExecutionStatus::Failure) when the transaction failed. + // `Ok(status: TxExecutionStatus::Success)` when the transaction succeeded + // `Ok(status: TxExecutionStatus::Failure)` when the transaction failed. // Note that failed transactions are considered properly processed and are included in blocks pub fn execute_next_tx( &mut self, @@ -657,7 +666,7 @@ impl VmInstance { revert_reason: None, // getting contracts used during this transaction // at least for now the number returned here is always <= to the number - // of the code hashes actually used by the transaction, since it might've + // of the code hashes actually used by the transaction, since it might have // reused bytecode hashes from some of the previous ones. contracts_used: self .state @@ -942,8 +951,8 @@ impl VmInstance { pub fn save_current_vm_as_snapshot(&mut self) { self.snapshots.push(VmSnapshot { // Vm local state contains O(1) various parameters (registers/etc). - // The only "expensive" copying here is copying of the callstack. - // It will take O(callstack_depth) to copy it. + // The only "expensive" copying here is copying of the call stack. + // It will take `O(callstack_depth)` to copy it. // So it is generally recommended to get snapshots of the bootloader frame, // where the depth is 1. local_state: self.state.local_state.clone(), diff --git a/core/lib/multivm/src/versions/vm_m6/vm_with_bootloader.rs b/core/lib/multivm/src/versions/vm_m6/vm_with_bootloader.rs index 306c0ffc6de..91a41bdbb0a 100644 --- a/core/lib/multivm/src/versions/vm_m6/vm_with_bootloader.rs +++ b/core/lib/multivm/src/versions/vm_m6/vm_with_bootloader.rs @@ -11,11 +11,10 @@ use zk_evm_1_3_1::{ }, }; use zksync_contracts::BaseSystemContracts; -use zksync_system_constants::MAX_TXS_IN_BLOCK; - +use zksync_system_constants::MAX_L2_TX_GAS_LIMIT; use zksync_types::{ - zkevm_test_harness::INITIAL_MONOTONIC_CYCLE_COUNTER, Address, Transaction, BOOTLOADER_ADDRESS, - L1_GAS_PER_PUBDATA_BYTE, MAX_GAS_PER_PUBDATA_BYTE, MAX_NEW_FACTORY_DEPS, U256, + fee_model::L1PeggedBatchFeeModelInput, zkevm_test_harness::INITIAL_MONOTONIC_CYCLE_COUNTER, + Address, Transaction, BOOTLOADER_ADDRESS, L1_GAS_PER_PUBDATA_BYTE, MAX_NEW_FACTORY_DEPS, U256, }; use zksync_utils::{ address_to_u256, @@ -24,20 +23,23 @@ use zksync_utils::{ misc::ceil_div, }; -use crate::vm_m6::storage::Storage; -use crate::vm_m6::{ - bootloader_state::BootloaderState, - history_recorder::HistoryMode, - transaction_data::{TransactionData, L1_TX_TYPE}, - utils::{ - code_page_candidate_from_base, heap_page_from_base, BLOCK_GAS_LIMIT, INITIAL_BASE_PAGE, +use crate::{ + vm_latest::L1BatchEnv, + vm_m6::{ + bootloader_state::BootloaderState, + history_recorder::HistoryMode, + storage::Storage, + transaction_data::{TransactionData, L1_TX_TYPE}, + utils::{ + code_page_candidate_from_base, heap_page_from_base, BLOCK_GAS_LIMIT, INITIAL_BASE_PAGE, + }, + vm_instance::{MultiVMSubversion, ZkSyncVmState}, + OracleTools, VmInstance, }, - vm_instance::{MultiVMSubversion, ZkSyncVmState}, - OracleTools, VmInstance, }; -// TODO (SMA-1703): move these to config and make them programmatically generatable. -// fill these values in the similar fasion as other overhead-related constants +// TODO (SMA-1703): move these to config and make them programmatically generable. 
+// fill these values in a similar fashion as other overhead-related constants pub const BLOCK_OVERHEAD_GAS: u32 = 1200000; pub const BLOCK_OVERHEAD_L1_GAS: u32 = 1000000; pub const BLOCK_OVERHEAD_PUBDATA: u32 = BLOCK_OVERHEAD_L1_GAS / L1_GAS_PER_PUBDATA_BYTE; @@ -59,7 +61,11 @@ pub struct BlockContext { impl BlockContext { pub fn block_gas_price_per_pubdata(&self) -> u64 { - derive_base_fee_and_gas_per_pubdata(self.l1_gas_price, self.fair_l2_gas_price).1 + derive_base_fee_and_gas_per_pubdata(L1PeggedBatchFeeModelInput { + l1_gas_price: self.l1_gas_price, + fair_l2_gas_price: self.fair_l2_gas_price, + }) + .1 } } @@ -77,37 +83,76 @@ pub(crate) fn eth_price_per_pubdata_byte(l1_gas_price: u64) -> u64 { l1_gas_price * (L1_GAS_PER_PUBDATA_BYTE as u64) } -pub fn base_fee_to_gas_per_pubdata(l1_gas_price: u64, base_fee: u64) -> u64 { +pub(crate) fn base_fee_to_gas_per_pubdata(l1_gas_price: u64, base_fee: u64) -> u64 { let eth_price_per_pubdata_byte = eth_price_per_pubdata_byte(l1_gas_price); ceil_div(eth_price_per_pubdata_byte, base_fee) } -pub fn derive_base_fee_and_gas_per_pubdata(l1_gas_price: u64, fair_gas_price: u64) -> (u64, u64) { +pub(crate) fn derive_base_fee_and_gas_per_pubdata( + fee_input: L1PeggedBatchFeeModelInput, +) -> (u64, u64) { + let L1PeggedBatchFeeModelInput { + l1_gas_price, + fair_l2_gas_price, + } = fee_input; + let eth_price_per_pubdata_byte = eth_price_per_pubdata_byte(l1_gas_price); - // The baseFee is set in such a way that it is always possible for a transaction to + // The `baseFee` is set in such a way that it is always possible for a transaction to // publish enough public data while compensating us for it. let base_fee = std::cmp::max( - fair_gas_price, + fair_l2_gas_price, ceil_div(eth_price_per_pubdata_byte, MAX_GAS_PER_PUBDATA_BYTE), ); ( base_fee, - base_fee_to_gas_per_pubdata(l1_gas_price, base_fee), + base_fee_to_gas_per_pubdata(fee_input.l1_gas_price, base_fee), ) } +pub(crate) fn get_batch_base_fee(l1_batch_env: &L1BatchEnv) -> u64 { + if let Some(base_fee) = l1_batch_env.enforced_base_fee { + return base_fee; + } + let (base_fee, _) = + derive_base_fee_and_gas_per_pubdata(l1_batch_env.fee_input.into_l1_pegged()); + base_fee +} + impl From for DerivedBlockContext { fn from(context: BlockContext) -> Self { - let base_fee = - derive_base_fee_and_gas_per_pubdata(context.l1_gas_price, context.fair_l2_gas_price).0; + let base_fee = derive_base_fee_and_gas_per_pubdata(L1PeggedBatchFeeModelInput { + l1_gas_price: context.l1_gas_price, + fair_l2_gas_price: context.fair_l2_gas_price, + }) + .0; DerivedBlockContext { context, base_fee } } } +/// The size of the bootloader memory in bytes which is used by the protocol. +/// While the maximal possible size is a lot higher, we restrict ourselves to a certain limit to reduce +/// the requirements on RAM. +pub(crate) const USED_BOOTLOADER_MEMORY_BYTES: usize = 1 << 24; +pub(crate) const USED_BOOTLOADER_MEMORY_WORDS: usize = USED_BOOTLOADER_MEMORY_BYTES / 32; + +// This is the number of pubdata such that it should always be possible to publish +// from a single transaction. Note, that these pubdata bytes include only bytes that are +// to be published inside the body of transaction (i.e. excluding factory deps). +pub(crate) const GUARANTEED_PUBDATA_PER_L1_BATCH: u64 = 4000; + +// The users should always be able to provide `MAX_GAS_PER_PUBDATA_BYTE` gas per pubdata in their +// transactions so that they are able to send at least `GUARANTEED_PUBDATA_PER_L1_BATCH` bytes per
// transaction.
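// As a quick illustration of the constant defined just below: assuming the
// historical value `MAX_L2_TX_GAS_LIMIT = 80_000_000` from `zksync_system_constants`
// (an assumption here, since that value is defined elsewhere),
//   MAX_GAS_PER_PUBDATA_BYTE = 80_000_000 / 4_000 = 20_000.
// This also closes the loop with `derive_base_fee_and_gas_per_pubdata` above:
// because `base_fee >= ceil_div(eth_price_per_pubdata_byte, MAX_GAS_PER_PUBDATA_BYTE)`,
// the derived `ceil_div(eth_price_per_pubdata_byte, base_fee)` can never exceed
// `MAX_GAS_PER_PUBDATA_BYTE`, so a transaction offering that much gas per pubdata
// byte is always able to pay for the data it publishes.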
+pub(crate) const MAX_GAS_PER_PUBDATA_BYTE: u64 = + MAX_L2_TX_GAS_LIMIT / GUARANTEED_PUBDATA_PER_L1_BATCH; + +// The maximal number of transactions in a single batch +pub(crate) const MAX_TXS_IN_BLOCK: usize = 1024; + // The first 32 slots are reserved for debugging purposes pub const DEBUG_SLOTS_OFFSET: usize = 8; pub const DEBUG_FIRST_SLOTS: usize = 32; @@ -150,7 +195,7 @@ pub const BOOTLOADER_TX_DESCRIPTION_OFFSET: usize = COMPRESSED_BYTECODES_OFFSET + COMPRESSED_BYTECODES_SLOTS; // The size of the bootloader memory dedicated to the encodings of transactions -pub const BOOTLOADER_TX_ENCODING_SPACE: u32 = +pub(crate) const BOOTLOADER_TX_ENCODING_SPACE: u32 = (MAX_HEAP_PAGE_SIZE_IN_WORDS - TX_DESCRIPTION_OFFSET - MAX_TXS_IN_BLOCK) as u32; // Size of the bootloader tx description in words @@ -252,12 +297,12 @@ pub fn init_vm_with_gas_limit( } #[derive(Debug, Clone, Copy)] -// The block.number/block.timestamp data are stored in the CONTEXT_SYSTEM_CONTRACT. +// The `block.number` / `block.timestamp` data are stored in the `CONTEXT_SYSTEM_CONTRACT`. // The bootloader can support execution in two modes: -// - "NewBlock" when the new block is created. It is enforced that the block.number is incremented by 1 +// - `NewBlock` when the new block is created. It is enforced that the block.number is incremented by 1 // and the timestamp is non-decreasing. Also, the L2->L1 message used to verify the correctness of the previous root hash is sent. // This is the mode that should be used in the state keeper. -// - "OverrideCurrent" when we need to provide custom block.number and block.timestamp. ONLY to be used in testing/ethCalls. +// - `OverrideCurrent` when we need to provide custom `block.number` and `block.timestamp`. ONLY to be used in testing / `ethCalls`. 
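// An illustrative sketch of how a caller might choose between the two modes;
// `running_in_state_keeper`, `derived_ctx`, and `prev_block_hash_commitment` are
// hypothetical names for this sketch, not identifiers from the codebase:
//
// let mode = if running_in_state_keeper {
//     // Real block production: number/timestamp monotonicity is enforced.
//     BlockContextMode::NewBlock(derived_ctx, prev_block_hash_commitment)
// } else {
//     // `eth_call`-style simulation against a custom block context.
//     BlockContextMode::OverrideCurrent(derived_ctx)
// };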
pub enum BlockContextMode { NewBlock(DerivedBlockContext, U256), OverrideCurrent(DerivedBlockContext), @@ -766,7 +811,7 @@ pub(crate) fn get_bootloader_memory_for_encoded_tx( let encoding_length = encoded_tx.len(); memory.extend((tx_description_offset..tx_description_offset + encoding_length).zip(encoded_tx)); - // Note, +1 is moving for poitner + // Note, +1 is moving for pointer let compressed_bytecodes_offset = COMPRESSED_BYTECODES_OFFSET + 1 + previous_compressed_bytecode_size; diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/bootloader_state/l2_block.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/bootloader_state/l2_block.rs index 56b5b1b6b39..03544d8b054 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/bootloader_state/l2_block.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/bootloader_state/l2_block.rs @@ -1,11 +1,15 @@ use std::cmp::Ordering; + use zksync_types::{MiniblockNumber, H256}; use zksync_utils::concat_and_hash; -use crate::interface::{L2Block, L2BlockEnv}; -use crate::vm_refunds_enhancement::bootloader_state::snapshot::L2BlockSnapshot; -use crate::vm_refunds_enhancement::bootloader_state::tx::BootloaderTx; -use crate::vm_refunds_enhancement::utils::l2_blocks::l2_block_hash; +use crate::{ + interface::{L2Block, L2BlockEnv}, + vm_refunds_enhancement::{ + bootloader_state::{snapshot::L2BlockSnapshot, tx::BootloaderTx}, + utils::l2_blocks::l2_block_hash, + }, +}; const EMPTY_TXS_ROLLING_HASH: H256 = H256::zero(); @@ -15,7 +19,7 @@ pub(crate) struct BootloaderL2Block { pub(crate) timestamp: u64, pub(crate) txs_rolling_hash: H256, // The rolling hash of all the transactions in the miniblock pub(crate) prev_block_hash: H256, - // Number of the first l2 block tx in l1 batch + // Number of the first L2 block tx in L1 batch pub(crate) first_tx_index: usize, pub(crate) max_virtual_blocks_to_create: u32, pub(super) txs: Vec, diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/bootloader_state/snapshot.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/bootloader_state/snapshot.rs index e417a3b9ee6..2c599092869 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/bootloader_state/snapshot.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/bootloader_state/snapshot.rs @@ -4,9 +4,9 @@ use zksync_types::H256; pub(crate) struct BootloaderStateSnapshot { /// ID of the next transaction to be executed. pub(crate) tx_to_execute: usize, - /// Stored l2 blocks in bootloader memory + /// Stored L2 blocks in bootloader memory pub(crate) l2_blocks_len: usize, - /// Snapshot of the last l2 block. Only this block could be changed during the rollback + /// Snapshot of the last L2 block. Only this block could be changed during the rollback pub(crate) last_l2_block: L2BlockSnapshot, /// The number of 32-byte words spent on the already included compressed bytecodes. 
pub(crate) compressed_bytecodes_encoding: usize, @@ -18,6 +18,6 @@ pub(crate) struct BootloaderStateSnapshot { pub(crate) struct L2BlockSnapshot { /// The rolling hash of all the transactions in the miniblock pub(crate) txs_rolling_hash: H256, - /// The number of transactions in the last l2 block + /// The number of transactions in the last L2 block pub(crate) txs_len: usize, } diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/bootloader_state/state.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/bootloader_state/state.rs index 4c8d48bc1a7..d436a2adb0a 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/bootloader_state/state.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/bootloader_state/state.rs @@ -1,17 +1,22 @@ -use crate::vm_refunds_enhancement::bootloader_state::l2_block::BootloaderL2Block; -use crate::vm_refunds_enhancement::bootloader_state::snapshot::BootloaderStateSnapshot; -use crate::vm_refunds_enhancement::bootloader_state::utils::{apply_l2_block, apply_tx_to_memory}; use std::cmp::Ordering; + use zksync_types::{L2ChainId, U256}; use zksync_utils::bytecode::CompressedBytecodeInfo; -use crate::interface::{BootloaderMemory, L2BlockEnv, TxExecutionMode}; -use crate::vm_refunds_enhancement::{ - constants::TX_DESCRIPTION_OFFSET, types::internals::TransactionData, - utils::l2_blocks::assert_next_block, -}; - use super::tx::BootloaderTx; +use crate::{ + interface::{BootloaderMemory, L2BlockEnv, TxExecutionMode}, + vm_refunds_enhancement::{ + bootloader_state::{ + l2_block::BootloaderL2Block, + snapshot::BootloaderStateSnapshot, + utils::{apply_l2_block, apply_tx_to_memory}, + }, + constants::TX_DESCRIPTION_OFFSET, + types::internals::TransactionData, + utils::l2_blocks::assert_next_block, + }, +}; /// Intermediate bootloader-related VM state. /// /// Required to process transactions one by one (since we intercept the VM execution to execute diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/bootloader_state/tx.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/bootloader_state/tx.rs index c1551dcf6cd..e7f833e5bad 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/bootloader_state/tx.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/bootloader_state/tx.rs @@ -1,7 +1,8 @@ -use crate::vm_refunds_enhancement::types::internals::TransactionData; use zksync_types::{L2ChainId, H256, U256}; use zksync_utils::bytecode::CompressedBytecodeInfo; +use crate::vm_refunds_enhancement::types::internals::TransactionData; + /// Information about tx necessary for execution in bootloader. #[derive(Debug, Clone)] pub(super) struct BootloaderTx { @@ -14,7 +15,7 @@ pub(super) struct BootloaderTx { pub(super) refund: u32, /// Gas overhead pub(super) gas_overhead: u32, - /// Gas Limit for this transaction. It can be different from the gaslimit inside the transaction + /// Gas Limit for this transaction. 
It can be different from the gas limit inside the transaction pub(super) trusted_gas_limit: U256, /// Offset of the tx in bootloader memory pub(super) offset: usize, diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/bootloader_state/utils.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/bootloader_state/utils.rs index dbb3fa0dff2..f47b95d6cbf 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/bootloader_state/utils.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/bootloader_state/utils.rs @@ -1,16 +1,19 @@ use zksync_types::U256; -use zksync_utils::bytecode::CompressedBytecodeInfo; -use zksync_utils::{bytes_to_be_words, h256_to_u256}; - -use crate::interface::{BootloaderMemory, TxExecutionMode}; -use crate::vm_refunds_enhancement::bootloader_state::l2_block::BootloaderL2Block; -use crate::vm_refunds_enhancement::constants::{ - BOOTLOADER_TX_DESCRIPTION_OFFSET, BOOTLOADER_TX_DESCRIPTION_SIZE, COMPRESSED_BYTECODES_OFFSET, - OPERATOR_REFUNDS_OFFSET, TX_DESCRIPTION_OFFSET, TX_OPERATOR_L2_BLOCK_INFO_OFFSET, - TX_OPERATOR_SLOTS_PER_L2_BLOCK_INFO, TX_OVERHEAD_OFFSET, TX_TRUSTED_GAS_LIMIT_OFFSET, -}; +use zksync_utils::{bytecode::CompressedBytecodeInfo, bytes_to_be_words, h256_to_u256}; use super::tx::BootloaderTx; +use crate::{ + interface::{BootloaderMemory, TxExecutionMode}, + vm_refunds_enhancement::{ + bootloader_state::l2_block::BootloaderL2Block, + constants::{ + BOOTLOADER_TX_DESCRIPTION_OFFSET, BOOTLOADER_TX_DESCRIPTION_SIZE, + COMPRESSED_BYTECODES_OFFSET, OPERATOR_REFUNDS_OFFSET, TX_DESCRIPTION_OFFSET, + TX_OPERATOR_L2_BLOCK_INFO_OFFSET, TX_OPERATOR_SLOTS_PER_L2_BLOCK_INFO, + TX_OVERHEAD_OFFSET, TX_TRUSTED_GAS_LIMIT_OFFSET, + }, + }, +}; pub(super) fn get_memory_for_compressed_bytecodes( compressed_bytecodes: &[CompressedBytecodeInfo], @@ -69,7 +72,7 @@ pub(super) fn apply_tx_to_memory( }; apply_l2_block(memory, &bootloader_l2_block, tx_index); - // Note, +1 is moving for poitner + // Note, +1 is moving for pointer let compressed_bytecodes_offset = COMPRESSED_BYTECODES_OFFSET + 1 + compressed_bytecodes_size; let encoded_compressed_bytecodes = @@ -89,8 +92,8 @@ pub(crate) fn apply_l2_block( bootloader_l2_block: &BootloaderL2Block, txs_index: usize, ) { - // Since L2 block infos start from the TX_OPERATOR_L2_BLOCK_INFO_OFFSET and each - // L2 block info takes TX_OPERATOR_SLOTS_PER_L2_BLOCK_INFO slots, the position where the L2 block info + // Since L2 block information start from the `TX_OPERATOR_L2_BLOCK_INFO_OFFSET` and each + // L2 block info takes `TX_OPERATOR_SLOTS_PER_L2_BLOCK_INFO` slots, the position where the L2 block info // for this transaction needs to be written is: let block_position = diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/constants.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/constants.rs index ef3b09299fd..3c5560e7ea8 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/constants.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/constants.rs @@ -1,16 +1,31 @@ use zk_evm_1_3_3::aux_structures::MemoryPage; - -use zksync_system_constants::{ - L1_GAS_PER_PUBDATA_BYTE, MAX_L2_TX_GAS_LIMIT, MAX_NEW_FACTORY_DEPS, MAX_TXS_IN_BLOCK, - USED_BOOTLOADER_MEMORY_WORDS, -}; - pub use zk_evm_1_3_3::zkevm_opcode_defs::system_params::{ ERGS_PER_CIRCUIT, INITIAL_STORAGE_WRITE_PUBDATA_BYTES, MAX_PUBDATA_PER_BLOCK, }; +use zksync_system_constants::{L1_GAS_PER_PUBDATA_BYTE, MAX_L2_TX_GAS_LIMIT, MAX_NEW_FACTORY_DEPS}; use 
crate::vm_refunds_enhancement::old_vm::utils::heap_page_from_base; +/// The size of the bootloader memory in bytes which is used by the protocol. +/// While the maximal possible size is a lot higher, we restrict ourselves to a certain limit to reduce +/// the requirements on RAM. +pub(crate) const USED_BOOTLOADER_MEMORY_BYTES: usize = 1 << 24; +pub(crate) const USED_BOOTLOADER_MEMORY_WORDS: usize = USED_BOOTLOADER_MEMORY_BYTES / 32; + +// This is the number of pubdata such that it should always be possible to publish +// from a single transaction. Note, that these pubdata bytes include only bytes that are +// to be published inside the body of transaction (i.e. excluding factory deps). +pub(crate) const GUARANTEED_PUBDATA_PER_L1_BATCH: u64 = 4000; + +// The users should always be able to provide `MAX_GAS_PER_PUBDATA_BYTE` gas per pubdata in their +// transactions so that they are able to send at least `GUARANTEED_PUBDATA_PER_L1_BATCH` bytes per +// transaction. +pub(crate) const MAX_GAS_PER_PUBDATA_BYTE: u64 = + MAX_L2_TX_GAS_LIMIT / GUARANTEED_PUBDATA_PER_L1_BATCH; + +// The maximal number of transactions in a single batch +pub(crate) const MAX_TXS_IN_BLOCK: usize = 1024; + /// Max cycles for a single transaction. pub const MAX_CYCLES_FOR_TX: u32 = u32::MAX; @@ -54,7 +69,7 @@ pub(crate) const BOOTLOADER_TX_DESCRIPTION_OFFSET: usize = COMPRESSED_BYTECODES_OFFSET + COMPRESSED_BYTECODES_SLOTS; /// The size of the bootloader memory dedicated to the encodings of transactions -pub const BOOTLOADER_TX_ENCODING_SPACE: u32 = +pub(crate) const BOOTLOADER_TX_ENCODING_SPACE: u32 = (USED_BOOTLOADER_MEMORY_WORDS - TX_DESCRIPTION_OFFSET - MAX_TXS_IN_BLOCK) as u32; // Size of the bootloader tx description in words @@ -75,10 +90,10 @@ pub const BLOCK_OVERHEAD_L1_GAS: u32 = 1000000; pub const BLOCK_OVERHEAD_PUBDATA: u32 = BLOCK_OVERHEAD_L1_GAS / L1_GAS_PER_PUBDATA_BYTE; /// VM Hooks are used for communication between bootloader and tracers. -/// The 'type'/'opcode' is put into VM_HOOK_POSITION slot, +/// The 'type' / 'opcode' is put into VM_HOOK_POSITION slot, /// and VM_HOOKS_PARAMS_COUNT parameters (each 32 bytes) are put in the slots before. /// So the layout looks like this: -/// [param 0][param 1][vmhook opcode] +/// `[param 0][param 1][vmhook opcode]` pub const VM_HOOK_POSITION: u32 = RESULT_SUCCESS_FIRST_SLOT - 1; pub const VM_HOOK_PARAMS_COUNT: u32 = 2; pub const VM_HOOK_PARAMS_START_POSITION: u32 = VM_HOOK_POSITION - VM_HOOK_PARAMS_COUNT; diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/implementation/bytecode.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/implementation/bytecode.rs index 4b7e529fc5b..69670f9682b 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/implementation/bytecode.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/implementation/bytecode.rs @@ -1,13 +1,12 @@ use itertools::Itertools; - -use crate::interface::VmInterface; -use crate::HistoryMode; use zksync_state::{StoragePtr, WriteStorage}; use zksync_types::U256; -use zksync_utils::bytecode::{compress_bytecode, hash_bytecode, CompressedBytecodeInfo}; -use zksync_utils::bytes_to_be_words; +use zksync_utils::{ + bytecode::{compress_bytecode, hash_bytecode, CompressedBytecodeInfo}, + bytes_to_be_words, +}; -use crate::vm_refunds_enhancement::Vm; +use crate::{interface::VmInterface, vm_refunds_enhancement::Vm, HistoryMode}; impl Vm { /// Checks the last transaction has successfully published compressed bytecodes and returns `true` if there is at least one is still unknown.
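The VM-hook slot layout added to `constants.rs` above can be sanity-checked with a small standalone sketch. `RESULT_SUCCESS_FIRST_SLOT` is a hypothetical stand-in below (the real value is defined elsewhere in that file), so the concrete numbers are assumptions rather than protocol values:

```rust
// Sketch of the `[param 0][param 1][vmhook opcode]` layout described above.
const RESULT_SUCCESS_FIRST_SLOT: u32 = 1_000; // hypothetical stand-in value
const VM_HOOK_POSITION: u32 = RESULT_SUCCESS_FIRST_SLOT - 1;
const VM_HOOK_PARAMS_COUNT: u32 = 2;
const VM_HOOK_PARAMS_START_POSITION: u32 = VM_HOOK_POSITION - VM_HOOK_PARAMS_COUNT;

fn main() {
    // Each slot is a 32-byte word on the bootloader heap, so the byte offset
    // of slot `n` is `n * 32`; the params sit directly below the hook slot.
    for i in 0..VM_HOOK_PARAMS_COUNT {
        let slot = VM_HOOK_PARAMS_START_POSITION + i;
        println!("param {i}: slot {slot}, byte offset {}", slot * 32);
    }
    println!(
        "hook opcode: slot {VM_HOOK_POSITION}, byte offset {}",
        VM_HOOK_POSITION * 32
    );
}
```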
diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/implementation/execution.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/implementation/execution.rs index 9e55180d66f..a1d81bdce5e 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/implementation/execution.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/implementation/execution.rs @@ -1,15 +1,20 @@ -use crate::HistoryMode; use zk_evm_1_3_3::aux_structures::Timestamp; use zksync_state::WriteStorage; -use crate::interface::tracer::{TracerExecutionStatus, VmExecutionStopReason}; -use crate::interface::{VmExecutionMode, VmExecutionResultAndLogs}; -use crate::vm_refunds_enhancement::old_vm::utils::{vm_may_have_ended_inner, VmExecutionResult}; -use crate::vm_refunds_enhancement::tracers::dispatcher::TracerDispatcher; -use crate::vm_refunds_enhancement::tracers::{ - traits::VmTracer, DefaultExecutionTracer, RefundsTracer, +use crate::{ + interface::{ + tracer::{TracerExecutionStatus, VmExecutionStopReason}, + VmExecutionMode, VmExecutionResultAndLogs, + }, + vm_refunds_enhancement::{ + old_vm::utils::{vm_may_have_ended_inner, VmExecutionResult}, + tracers::{ + dispatcher::TracerDispatcher, traits::VmTracer, DefaultExecutionTracer, RefundsTracer, + }, + vm::Vm, + }, + HistoryMode, }; -use crate::vm_refunds_enhancement::vm::Vm; impl Vm { pub(crate) fn inspect_inner( diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/implementation/gas.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/implementation/gas.rs index cce9bfad699..4083e27b0b3 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/implementation/gas.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/implementation/gas.rs @@ -1,8 +1,9 @@ -use crate::HistoryMode; use zksync_state::WriteStorage; -use crate::vm_refunds_enhancement::tracers::DefaultExecutionTracer; -use crate::vm_refunds_enhancement::vm::Vm; +use crate::{ + vm_refunds_enhancement::{tracers::DefaultExecutionTracer, vm::Vm}, + HistoryMode, +}; impl Vm { /// Returns the amount of gas remaining to the VM. 
diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/implementation/logs.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/implementation/logs.rs index b8e8652f301..bded1c19041 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/implementation/logs.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/implementation/logs.rs @@ -1,14 +1,18 @@ use zk_evm_1_3_3::aux_structures::Timestamp; use zksync_state::WriteStorage; +use zksync_types::{ + l2_to_l1_log::{L2ToL1Log, UserL2ToL1Log}, + VmEvent, +}; -use crate::HistoryMode; -use zksync_types::l2_to_l1_log::{L2ToL1Log, UserL2ToL1Log}; -use zksync_types::VmEvent; - -use crate::interface::types::outputs::VmExecutionLogs; -use crate::vm_refunds_enhancement::old_vm::events::merge_events; -use crate::vm_refunds_enhancement::old_vm::utils::precompile_calls_count_after_timestamp; -use crate::vm_refunds_enhancement::vm::Vm; +use crate::{ + interface::types::outputs::VmExecutionLogs, + vm_refunds_enhancement::{ + old_vm::{events::merge_events, utils::precompile_calls_count_after_timestamp}, + vm::Vm, + }, + HistoryMode, +}; impl Vm { pub(crate) fn collect_execution_logs_after_timestamp( diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/implementation/snapshots.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/implementation/snapshots.rs index 972d50e5d76..56c219fffa4 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/implementation/snapshots.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/implementation/snapshots.rs @@ -1,13 +1,14 @@ -use vise::{Buckets, EncodeLabelSet, EncodeLabelValue, Family, Histogram, Metrics}; - use std::time::Duration; -use crate::vm_latest::HistoryEnabled; +use vise::{Buckets, EncodeLabelSet, EncodeLabelValue, Family, Histogram, Metrics}; use zk_evm_1_3_3::aux_structures::Timestamp; use zksync_state::WriteStorage; -use crate::vm_refunds_enhancement::{ - old_vm::oracles::OracleWithHistory, types::internals::VmSnapshot, vm::Vm, +use crate::{ + vm_latest::HistoryEnabled, + vm_refunds_enhancement::{ + old_vm::oracles::OracleWithHistory, types::internals::VmSnapshot, vm::Vm, + }, }; #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, EncodeLabelSet, EncodeLabelValue)] @@ -36,8 +37,8 @@ impl Vm { pub(crate) fn make_snapshot_inner(&mut self) { self.snapshots.push(VmSnapshot { // Vm local state contains O(1) various parameters (registers/etc). - // The only "expensive" copying here is copying of the callstack. - // It will take O(callstack_depth) to copy it. + // The only "expensive" copying here is copying of the call stack. + // It will take `O(callstack_depth)` to copy it. // So it is generally recommended to get snapshots of the bootloader frame, // where the depth is 1. 
local_state: self.state.local_state.clone(), diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/implementation/statistics.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/implementation/statistics.rs index 48bbd64ecf2..d64e71c3ff2 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/implementation/statistics.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/implementation/statistics.rs @@ -1,12 +1,12 @@ use zk_evm_1_3_3::aux_structures::Timestamp; use zksync_state::WriteStorage; - -use crate::HistoryMode; use zksync_types::U256; -use crate::interface::{VmExecutionStatistics, VmMemoryMetrics}; -use crate::vm_refunds_enhancement::tracers::DefaultExecutionTracer; -use crate::vm_refunds_enhancement::vm::Vm; +use crate::{ + interface::{VmExecutionStatistics, VmMemoryMetrics}, + vm_refunds_enhancement::{tracers::DefaultExecutionTracer, vm::Vm}, + HistoryMode, +}; /// Module responsible for observing the VM behavior, i.e. calculating the statistics of the VM runs /// or reporting the VM memory usage. @@ -40,10 +40,11 @@ impl Vm { computational_gas_used, total_log_queries: total_log_queries_count, pubdata_published, + estimated_circuits_used: 0.0, } } - /// Returns the hashes the bytecodes that have been decommitted by the decomittment processor. + /// Returns the hashes of the bytecodes that have been decommitted by the decommitment processor. pub(crate) fn get_used_contracts(&self) -> Vec { self.state .decommittment_processor diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/implementation/tx.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/implementation/tx.rs index d6fd4858870..6dc4772d095 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/implementation/tx.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/implementation/tx.rs @@ -1,15 +1,17 @@ -use crate::vm_refunds_enhancement::constants::BOOTLOADER_HEAP_PAGE; -use crate::vm_refunds_enhancement::implementation::bytecode::{ - bytecode_to_factory_dep, compress_bytecodes, -}; use zk_evm_1_3_3::aux_structures::Timestamp; use zksync_state::WriteStorage; -use zksync_types::l1::is_l1_tx_type; -use zksync_types::Transaction; +use zksync_types::{l1::is_l1_tx_type, Transaction}; -use crate::vm_refunds_enhancement::types::internals::TransactionData; -use crate::vm_refunds_enhancement::vm::Vm; -use crate::HistoryMode; +use crate::{ + vm_refunds_enhancement::{ + constants::BOOTLOADER_HEAP_PAGE, + implementation::bytecode::{bytecode_to_factory_dep, compress_bytecodes}, + types::internals::TransactionData, + utils::fee::get_batch_gas_per_pubdata, + vm::Vm, + }, + HistoryMode, +}; impl Vm { pub(crate) fn push_raw_transaction( @@ -37,8 +39,7 @@ impl Vm { .decommittment_processor .populate(codes_for_decommiter, timestamp); - let trusted_ergs_limit = - tx.trusted_ergs_limit(self.batch_env.block_gas_price_per_pubdata()); + let trusted_ergs_limit = tx.trusted_ergs_limit(get_batch_gas_per_pubdata(&self.batch_env)); let memory = self.bootloader_state.push_tx( tx, @@ -60,7 +61,7 @@ impl Vm { with_compression: bool, ) { let tx: TransactionData = tx.into(); - let block_gas_per_pubdata_byte = self.batch_env.block_gas_price_per_pubdata(); + let block_gas_per_pubdata_byte = get_batch_gas_per_pubdata(&self.batch_env); let overhead = tx.overhead_gas(block_gas_per_pubdata_byte as u32); self.push_raw_transaction(tx, overhead, 0, with_compression); } diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/mod.rs 
b/core/lib/multivm/src/versions/vm_refunds_enhancement/mod.rs index 89e7b21b984..81267701b5c 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/mod.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/mod.rs @@ -1,30 +1,27 @@ -pub use old_vm::{ - history_recorder::{HistoryDisabled, HistoryEnabled, HistoryMode}, - memory::SimpleMemory, +pub use self::{ + bootloader_state::BootloaderState, + old_vm::{ + history_recorder::{ + AppDataFrameManagerWithHistory, HistoryDisabled, HistoryEnabled, HistoryMode, + }, + memory::SimpleMemory, + }, + oracles::storage::StorageOracle, + tracers::{ + dispatcher::TracerDispatcher, + traits::{ToTracerPointer, TracerPointer, VmTracer}, + }, + types::internals::ZkSyncVmState, + utils::transaction_encoding::TransactionVmExt, + vm::Vm, }; -pub use oracles::storage::StorageOracle; - -pub use tracers::dispatcher::TracerDispatcher; -pub use tracers::traits::{TracerPointer, VmTracer}; - -pub use utils::transaction_encoding::TransactionVmExt; - -pub use bootloader_state::BootloaderState; -pub use types::internals::ZkSyncVmState; - -pub use vm::Vm; - mod bootloader_state; +pub mod constants; mod implementation; mod old_vm; mod oracles; pub(crate) mod tracers; mod types; -mod vm; - -pub mod constants; pub mod utils; - -// #[cfg(test)] -// mod tests; +mod vm; diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/old_vm/event_sink.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/old_vm/event_sink.rs index adbee280a3d..74dca71d10f 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/old_vm/event_sink.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/old_vm/event_sink.rs @@ -1,8 +1,5 @@ -use crate::vm_refunds_enhancement::old_vm::{ - history_recorder::{AppDataFrameManagerWithHistory, HistoryEnabled, HistoryMode}, - oracles::OracleWithHistory, -}; use std::collections::HashMap; + use zk_evm_1_3_3::{ abstractions::EventSink, aux_structures::{LogQuery, Timestamp}, @@ -12,6 +9,11 @@ use zk_evm_1_3_3::{ }, }; +use crate::vm_refunds_enhancement::old_vm::{ + history_recorder::{AppDataFrameManagerWithHistory, HistoryEnabled, HistoryMode}, + oracles::OracleWithHistory, +}; + #[derive(Debug, Clone, PartialEq, Default)] pub struct InMemoryEventSink { frames_stack: AppDataFrameManagerWithHistory, H>, @@ -48,7 +50,7 @@ impl InMemoryEventSink { pub fn log_queries_after_timestamp(&self, from_timestamp: Timestamp) -> &[Box] { let events = self.frames_stack.forward().current_frame(); - // Select all of the last elements where e.timestamp >= from_timestamp. + // Select all of the last elements where `e.timestamp >= from_timestamp`. // Note, that using binary search here is dangerous, because the logs are not sorted by timestamp. 
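// To see why the `rsplit` below is safe while a binary search is not: with the
// predicate `e.timestamp < from_timestamp`, `rsplit` yields as its first segment
// the maximal suffix in which every element satisfies
// `e.timestamp >= from_timestamp`. For illustrative timestamps [10, 5, 12, 15]
// and `from_timestamp = 12`, the segments scanned from the right are
// [12, 15], [], [], so `.next()` returns [12, 15] even though the whole slice
// is unsorted.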
events .rsplit(|e| e.timestamp < from_timestamp) diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/old_vm/history_recorder.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/old_vm/history_recorder.rs index 44d510b0075..e862f57898a 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/old_vm/history_recorder.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/old_vm/history_recorder.rs @@ -5,7 +5,6 @@ use zk_evm_1_3_3::{ vm_state::PrimitiveValue, zkevm_opcode_defs::{self}, }; - use zksync_state::{StoragePtr, WriteStorage}; use zksync_types::{StorageKey, U256}; use zksync_utils::{h256_to_u256, u256_to_h256}; @@ -13,14 +12,14 @@ use zksync_utils::{h256_to_u256, u256_to_h256}; pub(crate) type MemoryWithHistory = HistoryRecorder; pub(crate) type IntFrameManagerWithHistory = HistoryRecorder, H>; -// Within the same cycle, timestamps in range timestamp..timestamp+TIME_DELTA_PER_CYCLE-1 +// Within the same cycle, timestamps in range `timestamp..timestamp+TIME_DELTA_PER_CYCLE-1` // can be used. This can sometimes violate monotonicity of the timestamp within the // same cycle, so it should be normalized. #[inline] fn normalize_timestamp(timestamp: Timestamp) -> Timestamp { let timestamp = timestamp.0; - // Making sure it is divisible by TIME_DELTA_PER_CYCLE + // Making sure it is divisible by `TIME_DELTA_PER_CYCLE` Timestamp(timestamp - timestamp % zkevm_opcode_defs::TIME_DELTA_PER_CYCLE) } @@ -438,7 +437,7 @@ impl HistoryRecorder, H> { } #[derive(Debug, Clone, PartialEq)] -pub(crate) struct AppDataFrameManagerWithHistory { +pub struct AppDataFrameManagerWithHistory { forward: HistoryRecorder, H>, rollback: HistoryRecorder, H>, } @@ -771,11 +770,14 @@ impl HistoryRecorder, H> { #[cfg(test)] mod tests { - use crate::vm_refunds_enhancement::old_vm::history_recorder::{HistoryRecorder, MemoryWrapper}; - use crate::vm_refunds_enhancement::HistoryDisabled; use zk_evm_1_3_3::{aux_structures::Timestamp, vm_state::PrimitiveValue}; use zksync_types::U256; + use crate::vm_refunds_enhancement::{ + old_vm::history_recorder::{HistoryRecorder, MemoryWrapper}, + HistoryDisabled, + }; + #[test] fn memory_equality() { let mut a: HistoryRecorder = Default::default(); diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/old_vm/memory.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/old_vm/memory.rs index 1ef04da58cb..9219126d76e 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/old_vm/memory.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/old_vm/memory.rs @@ -1,16 +1,18 @@ -use zk_evm_1_3_3::abstractions::{Memory, MemoryType}; -use zk_evm_1_3_3::aux_structures::{MemoryPage, MemoryQuery, Timestamp}; -use zk_evm_1_3_3::vm_state::PrimitiveValue; -use zk_evm_1_3_3::zkevm_opcode_defs::FatPointer; +use zk_evm_1_3_3::{ + abstractions::{Memory, MemoryType}, + aux_structures::{MemoryPage, MemoryQuery, Timestamp}, + vm_state::PrimitiveValue, + zkevm_opcode_defs::FatPointer, +}; use zksync_types::U256; -use crate::vm_refunds_enhancement::old_vm::history_recorder::{ - FramedStack, HistoryEnabled, HistoryMode, IntFrameManagerWithHistory, MemoryWithHistory, - MemoryWrapper, WithHistory, -}; -use crate::vm_refunds_enhancement::old_vm::oracles::OracleWithHistory; -use crate::vm_refunds_enhancement::old_vm::utils::{ - aux_heap_page_from_base, heap_page_from_base, stack_page_from_base, +use crate::vm_refunds_enhancement::old_vm::{ + history_recorder::{ + FramedStack, HistoryEnabled, HistoryMode, IntFrameManagerWithHistory, 
MemoryWithHistory, + MemoryWrapper, WithHistory, + }, + oracles::OracleWithHistory, + utils::{aux_heap_page_from_base, heap_page_from_base, stack_page_from_base}, }; #[derive(Debug, Clone, PartialEq)] @@ -280,7 +282,7 @@ impl Memory for SimpleMemory { let returndata_page = returndata_fat_pointer.memory_page; for &page in current_observable_pages { - // If the page's number is greater than or equal to the base_page, + // If the page's number is greater than or equal to the `base_page`, // it means that it was created by the internal calls of this contract. // We need to add this check as the calldata pointer is also part of the // observable pages. @@ -297,7 +299,7 @@ impl Memory for SimpleMemory { } } -// It is expected that there is some intersection between [word_number*32..word_number*32+31] and [start, end] +// It is expected that there is some intersection between `[word_number*32..word_number*32+31]` and `[start, end]` fn extract_needed_bytes_from_word( word_value: Vec, word_number: usize, @@ -305,7 +307,7 @@ fn extract_needed_bytes_from_word( end: usize, ) -> Vec { let word_start = word_number * 32; - let word_end = word_start + 31; // Note, that at word_start + 32 a new word already starts + let word_end = word_start + 31; // Note, that at `word_start + 32` a new word already starts let intersection_left = std::cmp::max(word_start, start); let intersection_right = std::cmp::min(word_end, end); diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/old_vm/oracles/decommitter.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/old_vm/oracles/decommitter.rs index 0f335cabf39..9a7addc97e1 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/old_vm/oracles/decommitter.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/old_vm/oracles/decommitter.rs @@ -1,25 +1,21 @@ -use std::collections::HashMap; -use std::fmt::Debug; +use std::{collections::HashMap, fmt::Debug}; -use crate::vm_refunds_enhancement::old_vm::history_recorder::{ - HistoryEnabled, HistoryMode, HistoryRecorder, WithHistory, -}; - -use zk_evm_1_3_3::abstractions::MemoryType; -use zk_evm_1_3_3::aux_structures::Timestamp; use zk_evm_1_3_3::{ - abstractions::{DecommittmentProcessor, Memory}, - aux_structures::{DecommittmentQuery, MemoryIndex, MemoryLocation, MemoryPage, MemoryQuery}, + abstractions::{DecommittmentProcessor, Memory, MemoryType}, + aux_structures::{ + DecommittmentQuery, MemoryIndex, MemoryLocation, MemoryPage, MemoryQuery, Timestamp, + }, }; - use zksync_state::{ReadStorage, StoragePtr}; use zksync_types::U256; -use zksync_utils::bytecode::bytecode_len_in_words; -use zksync_utils::{bytes_to_be_words, u256_to_h256}; +use zksync_utils::{bytecode::bytecode_len_in_words, bytes_to_be_words, u256_to_h256}; use super::OracleWithHistory; +use crate::vm_refunds_enhancement::old_vm::history_recorder::{ + HistoryEnabled, HistoryMode, HistoryRecorder, WithHistory, +}; -/// The main job of the DecommiterOracle is to implement the DecommittmentProcessor trait - that is +/// The main job of the DecommitterOracle is to implement the DecommitmentProcessor trait - that is /// used by the VM to 'load' bytecodes into memory. #[derive(Debug)] pub struct DecommitterOracle { @@ -70,7 +66,7 @@ impl DecommitterOracle { } } - /// Adds additional bytecodes. They will take precendent over the bytecodes from storage. + /// Adds additional bytecodes. They will take precedence over the bytecodes from storage.
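// An illustrative call site (hypothetical variables, not code from this patch),
// assuming `hash: U256` is the bytecode hash and `words: Vec<U256>` is the
// bytecode split into 32-byte words:
//
// oracle.populate(vec![(hash, words)], Timestamp(0));
//
// Afterwards, a decommit of `hash` is served from `known_bytecodes` rather than
// falling through to the storage-backed lookup.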
pub fn populate(&mut self, bytecodes: Vec<(U256, Vec)>, timestamp: Timestamp) { for (hash, bytecode) in bytecodes { self.known_bytecodes.insert(hash, bytecode, timestamp); @@ -180,7 +176,7 @@ impl DecommittmentProcess > { self.decommitment_requests.push((), partial_query.timestamp); // First - check if we didn't fetch this bytecode in the past. - // If we did - we can just return the page that we used before (as the memory is read only). + // If we did - we can just return the page that we used before (as the memory is readonly). if let Some(memory_page) = self .decommitted_code_hashes .inner() diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/old_vm/oracles/precompile.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/old_vm/oracles/precompile.rs index eb3f7b866b1..c59fb188e59 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/old_vm/oracles/precompile.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/old_vm/oracles/precompile.rs @@ -1,17 +1,14 @@ use zk_evm_1_3_3::{ - abstractions::Memory, - abstractions::PrecompileCyclesWitness, - abstractions::PrecompilesProcessor, + abstractions::{Memory, PrecompileCyclesWitness, PrecompilesProcessor}, aux_structures::{LogQuery, MemoryQuery, Timestamp}, precompiles::DefaultPrecompilesProcessor, }; +use super::OracleWithHistory; use crate::vm_refunds_enhancement::old_vm::history_recorder::{ HistoryEnabled, HistoryMode, HistoryRecorder, }; -use super::OracleWithHistory; - /// Wrap of DefaultPrecompilesProcessor that store queue /// of timestamp when precompiles are called to be executed. /// Number of precompiles per block is strictly limited, diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/old_vm/utils.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/old_vm/utils.rs index 9b4aae851d2..c2478edf7a8 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/old_vm/utils.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/old_vm/utils.rs @@ -1,22 +1,19 @@ -use crate::vm_refunds_enhancement::old_vm::memory::SimpleMemory; - -use crate::vm_refunds_enhancement::types::internals::ZkSyncVmState; -use crate::vm_refunds_enhancement::HistoryMode; - -use zk_evm_1_3_3::zkevm_opcode_defs::decoding::{ - AllowedPcOrImm, EncodingModeProduction, VmEncodingMode, -}; -use zk_evm_1_3_3::zkevm_opcode_defs::RET_IMPLICIT_RETURNDATA_PARAMS_REGISTER; use zk_evm_1_3_3::{ aux_structures::{MemoryPage, Timestamp}, vm_state::PrimitiveValue, - zkevm_opcode_defs::FatPointer, + zkevm_opcode_defs::{ + decoding::{AllowedPcOrImm, EncodingModeProduction, VmEncodingMode}, + FatPointer, RET_IMPLICIT_RETURNDATA_PARAMS_REGISTER, + }, }; use zksync_state::WriteStorage; use zksync_system_constants::L1_GAS_PER_PUBDATA_BYTE; - use zksync_types::{Address, U256}; +use crate::vm_refunds_enhancement::{ + old_vm::memory::SimpleMemory, types::internals::ZkSyncVmState, HistoryMode, +}; + #[derive(Debug, Clone)] pub(crate) enum VmExecutionResult { Ok(Vec), @@ -125,7 +122,7 @@ pub(crate) fn vm_may_have_ended_inner( } (false, _) => None, (true, l) if l == outer_eh_location => { - // check r1,r2,r3 + // check `r1,r2,r3` if vm.local_state.flags.overflow_or_less_than_flag { Some(VmExecutionResult::Panic) } else { diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/oracles/storage.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/oracles/storage.rs index e054cdbe2a6..6e58b8b3092 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/oracles/storage.rs +++ 
b/core/lib/multivm/src/versions/vm_refunds_enhancement/oracles/storage.rs @@ -1,26 +1,25 @@ use std::collections::HashMap; -use crate::vm_refunds_enhancement::old_vm::history_recorder::{ - AppDataFrameManagerWithHistory, HashMapHistoryEvent, HistoryEnabled, HistoryMode, - HistoryRecorder, StorageWrapper, VectorHistoryEvent, WithHistory, -}; -use crate::vm_refunds_enhancement::old_vm::oracles::OracleWithHistory; - -use zk_evm_1_3_3::abstractions::RefundedAmounts; -use zk_evm_1_3_3::zkevm_opcode_defs::system_params::INITIAL_STORAGE_WRITE_PUBDATA_BYTES; use zk_evm_1_3_3::{ - abstractions::{RefundType, Storage as VmStorageOracle}, + abstractions::{RefundType, RefundedAmounts, Storage as VmStorageOracle}, aux_structures::{LogQuery, Timestamp}, + zkevm_opcode_defs::system_params::INITIAL_STORAGE_WRITE_PUBDATA_BYTES, }; - use zksync_state::{StoragePtr, WriteStorage}; -use zksync_types::utils::storage_key_for_eth_balance; use zksync_types::{ - AccountTreeId, Address, StorageKey, StorageLogQuery, StorageLogQueryType, BOOTLOADER_ADDRESS, - U256, + utils::storage_key_for_eth_balance, AccountTreeId, Address, StorageKey, StorageLogQuery, + StorageLogQueryType, BOOTLOADER_ADDRESS, U256, }; use zksync_utils::u256_to_h256; +use crate::vm_refunds_enhancement::old_vm::{ + history_recorder::{ + AppDataFrameManagerWithHistory, HashMapHistoryEvent, HistoryEnabled, HistoryMode, + HistoryRecorder, StorageWrapper, VectorHistoryEvent, WithHistory, + }, + oracles::OracleWithHistory, +}; + // While the storage does not support different shards, it was decided to write the // code of the StorageOracle with the shard parameters in mind. pub(crate) fn triplet_to_storage_key(_shard_id: u8, address: Address, key: U256) -> StorageKey { @@ -49,7 +48,7 @@ pub struct StorageOracle { pub(crate) paid_changes: HistoryRecorder, H>, // The map that contains all the first values read from storage for each slot. - // While formally it does not have to be rollbackable, we still do it to avoid memory bloat + // While formally it does not have to be capable of rolling back, we still do it to avoid memory bloat // for unused slots. pub(crate) initial_values: HistoryRecorder, H>, @@ -183,7 +182,7 @@ impl StorageOracle { let required_pubdata = self.base_price_for_write(&key, first_slot_value, current_slot_value); - // We assume that "prepaid_for_slot" represents both the number of pubdata published and the number of bytes paid by the previous transactions + // We assume that `prepaid_for_slot` represents both the number of pubdata published and the number of bytes paid by the previous transactions // as they should be identical. let prepaid_for_slot = self .pre_paid_changes @@ -253,7 +252,7 @@ impl StorageOracle { ) -> &[Box] { let logs = self.frames_stack.forward().current_frame(); - // Select all of the last elements where l.log_query.timestamp >= from_timestamp. + // Select all of the last elements where `l.log_query.timestamp >= from_timestamp`. // Note, that using binary search here is dangerous, because the logs are not sorted by timestamp. 
logs.rsplit(|l| l.log_query.timestamp < from_timestamp) .next() @@ -301,6 +300,7 @@ impl VmStorageOracle for StorageOracle { _monotonic_cycle_counter: u32, query: LogQuery, ) -> LogQuery { + // ``` // tracing::trace!( // "execute partial query cyc {:?} addr {:?} key {:?}, rw {:?}, wr {:?}, tx {:?}", // _monotonic_cycle_counter, @@ -310,6 +310,7 @@ impl VmStorageOracle for StorageOracle { // query.written_value, // query.tx_number_in_block // ); + // ``` assert!(!query.rollback); if query.rw_flag { // The number of bytes that have been compensated by the user to perform this write @@ -395,7 +396,7 @@ impl VmStorageOracle for StorageOracle { ); // Additional validation that the current value was correct - // Unwrap is safe because the return value from write_inner is the previous value in this leaf. + // Unwrap is safe because the return value from `write_inner` is the previous value in this leaf. // It is impossible to set leaf value to `None` assert_eq!(current_value, written_value); } @@ -409,8 +410,8 @@ impl VmStorageOracle for StorageOracle { /// Returns the number of bytes needed to publish a slot. // Since we need to publish the state diffs onchain, for each of the updated storage slot -// we basically need to publish the following pair: (<key, new_value>). -// While new_value is always 32 bytes long, for key we use the following optimization: +// we basically need to publish the following pair: `(<key, new_value>)`. +// While `new_value` is always 32 bytes long, for key we use the following optimization: // - The first time we publish it, we use 32 bytes. // Then, we remember a 8-byte id for this slot and assign it to it. We call this initial write. // - The second time we publish it, we will use this 8-byte instead of the 32 bytes of the entire key. diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/tests/require_eip712.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/tests/require_eip712.rs index 253a3463c53..03a704841b0 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/tests/require_eip712.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/tests/require_eip712.rs @@ -109,7 +109,7 @@ async fn test_require_eip712() { vm.get_eth_balance(beneficiary.address), U256::from(888000088) ); - // Make sure that the tokens were transfered from the AA account. + // Make sure that the tokens were transferred from the AA account. assert_eq!( private_account_balance, vm.get_eth_balance(private_account.address) diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/tests/tester/inner_state.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/tests/tester/inner_state.rs index c4c6ec05bd7..5af50ee0d91 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/tests/tester/inner_state.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/tests/tester/inner_state.rs @@ -35,7 +35,7 @@ impl PartialEq for ModifiedKeysMap { #[derive(Clone, PartialEq, Debug)] pub(crate) struct DecommitterTestInnerState { - /// There is no way to "trully" compare the storage pointer, + /// There is no way to "truly" compare the storage pointer, /// so we just compare the modified keys. This is reasonable enough.
pub(crate) modified_storage_keys: ModifiedKeysMap, pub(crate) known_bytecodes: HistoryRecorder>, H>, @@ -44,7 +44,7 @@ pub(crate) struct DecommitterTestInnerState { #[derive(Clone, PartialEq, Debug)] pub(crate) struct StorageOracleInnerState { - /// There is no way to "trully" compare the storage pointer, + /// There is no way to "truly" compare the storage pointer, /// so we just compare the modified keys. This is reasonable enough. pub(crate) modified_storage_keys: ModifiedKeysMap, diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/tests/utils.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/tests/utils.rs index 7d3cc7d8e2d..3a936f95681 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/tests/utils.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/tests/utils.rs @@ -61,8 +61,8 @@ pub(crate) fn read_test_contract() -> Vec { pub(crate) fn get_bootloader(test: &str) -> SystemContractCode { let bootloader_code = read_zbin_bytecode(format!( - "etc/system-contracts/bootloader/tests/artifacts/{}.yul/{}.yul.zbin", - test, test + "contracts/system-contracts/bootloader/tests/artifacts/{}.yul.zbin", + test )); let bootloader_hash = hash_bytecode(&bootloader_code); diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/tracers/default_tracers.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/tracers/default_tracers.rs index 51fbf06d855..47fe3142aba 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/tracers/default_tracers.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/tracers/default_tracers.rs @@ -1,11 +1,5 @@ use std::fmt::{Debug, Formatter}; -use crate::interface::dyn_tracers::vm_1_3_3::DynTracer; -use crate::interface::tracer::{ - TracerExecutionStatus, TracerExecutionStopReason, VmExecutionStopReason, -}; -use crate::interface::{Halt, VmExecutionMode}; -use crate::vm_refunds_enhancement::VmTracer; use zk_evm_1_3_3::{ tracing::{ AfterDecodingData, AfterExecutionData, BeforeExecutionData, Tracer, VmLocalStateData, @@ -17,18 +11,28 @@ use zk_evm_1_3_3::{ use zksync_state::{StoragePtr, WriteStorage}; use zksync_types::Timestamp; -use crate::vm_refunds_enhancement::bootloader_state::utils::apply_l2_block; -use crate::vm_refunds_enhancement::bootloader_state::BootloaderState; -use crate::vm_refunds_enhancement::constants::BOOTLOADER_HEAP_PAGE; -use crate::vm_refunds_enhancement::old_vm::history_recorder::HistoryMode; -use crate::vm_refunds_enhancement::old_vm::memory::SimpleMemory; -use crate::vm_refunds_enhancement::tracers::dispatcher::TracerDispatcher; -use crate::vm_refunds_enhancement::tracers::utils::{ - computational_gas_price, gas_spent_on_bytecodes_and_long_messages_this_opcode, - print_debug_if_needed, VmHook, +use crate::{ + interface::{ + dyn_tracers::vm_1_3_3::DynTracer, + tracer::{TracerExecutionStatus, TracerExecutionStopReason, VmExecutionStopReason}, + Halt, VmExecutionMode, + }, + vm_refunds_enhancement::{ + bootloader_state::{utils::apply_l2_block, BootloaderState}, + constants::BOOTLOADER_HEAP_PAGE, + old_vm::{history_recorder::HistoryMode, memory::SimpleMemory}, + tracers::{ + dispatcher::TracerDispatcher, + utils::{ + computational_gas_price, gas_spent_on_bytecodes_and_long_messages_this_opcode, + print_debug_if_needed, VmHook, + }, + RefundsTracer, ResultTracer, + }, + types::internals::ZkSyncVmState, + VmTracer, + }, }; -use crate::vm_refunds_enhancement::tracers::{RefundsTracer, ResultTracer}; -use crate::vm_refunds_enhancement::types::internals::ZkSyncVmState; /// Default 
tracer for the VM. It manages the other tracers execution and stop the vm when needed. pub(crate) struct DefaultExecutionTracer { @@ -45,7 +49,7 @@ pub(crate) struct DefaultExecutionTracer { pub(crate) result_tracer: ResultTracer, // This tracer is designed specifically for calculating refunds. Its separation from the custom tracer // ensures static dispatch, enhancing performance by avoiding dynamic dispatch overhead. - // Additionally, being an internal tracer, it saves the results directly to VmResultAndLogs. + // Additionally, being an internal tracer, it saves the results directly to `VmResultAndLogs`. pub(crate) refund_tracer: Option, pub(crate) dispatcher: TracerDispatcher, ret_from_the_bootloader: Option, @@ -287,7 +291,7 @@ impl VmTracer for DefaultExecutionTracer< } fn current_frame_is_bootloader(local_state: &VmLocalState) -> bool { - // The current frame is bootloader if the callstack depth is 1. + // The current frame is bootloader if the call stack depth is 1. // Some of the near calls inside the bootloader can be out of gas, which is totally normal behavior // and it shouldn't result in `is_bootloader_out_of_gas` becoming true. local_state.callstack.inner.len() == 1 diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/tracers/dispatcher.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/tracers/dispatcher.rs index f2296d205a9..2392c3e51af 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/tracers/dispatcher.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/tracers/dispatcher.rs @@ -1,13 +1,18 @@ -use crate::interface::dyn_tracers::vm_1_3_3::DynTracer; -use crate::interface::tracer::{TracerExecutionStatus, VmExecutionStopReason}; -use crate::vm_refunds_enhancement::{ - BootloaderState, HistoryMode, SimpleMemory, TracerPointer, VmTracer, ZkSyncVmState, -}; use zk_evm_1_3_3::tracing::{ AfterDecodingData, AfterExecutionData, BeforeExecutionData, VmLocalStateData, }; use zksync_state::{StoragePtr, WriteStorage}; +use crate::{ + interface::{ + dyn_tracers::vm_1_3_3::DynTracer, + tracer::{TracerExecutionStatus, VmExecutionStopReason}, + }, + vm_refunds_enhancement::{ + BootloaderState, HistoryMode, SimpleMemory, TracerPointer, VmTracer, ZkSyncVmState, + }, +}; + /// Tracer dispatcher is a tracer that can dispatch calls to multiple tracers. 
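// An illustrative sketch of the intended usage, assuming the `ToTracerPointer`
// helper re-exported from this module's `mod.rs`; the `From<Vec<_>>` conversion
// used here is an assumption of this sketch, not a confirmed API:
//
// let tracers = vec![
//     my_tracer.into_tracer_pointer(),
//     other_tracer.into_tracer_pointer(),
// ];
// let dispatcher = TracerDispatcher::from(tracers);
//
// Each VM callback is then fanned out to every boxed tracer in order.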
pub struct TracerDispatcher { tracers: Vec>, diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/tracers/refunds.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/tracers/refunds.rs index 5256561b5eb..20e799a3883 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/tracers/refunds.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/tracers/refunds.rs @@ -1,5 +1,4 @@ use vise::{Buckets, EncodeLabelSet, EncodeLabelValue, Family, Histogram, Metrics}; - use zk_evm_1_3_3::{ aux_structures::Timestamp, tracing::{BeforeExecutionData, VmLocalStateData}, @@ -12,27 +11,28 @@ use zksync_types::{ l2_to_l1_log::L2ToL1Log, L1BatchNumber, U256, }; -use zksync_utils::bytecode::bytecode_len_in_bytes; -use zksync_utils::{ceil_div_u256, u256_to_h256}; - -use crate::interface::{ - dyn_tracers::vm_1_3_3::DynTracer, tracer::TracerExecutionStatus, L1BatchEnv, Refunds, -}; -use crate::vm_refunds_enhancement::constants::{ - BOOTLOADER_HEAP_PAGE, OPERATOR_REFUNDS_OFFSET, TX_GAS_LIMIT_OFFSET, -}; +use zksync_utils::{bytecode::bytecode_len_in_bytes, ceil_div_u256, u256_to_h256}; -use crate::vm_refunds_enhancement::{ - bootloader_state::BootloaderState, - old_vm::{ - events::merge_events, history_recorder::HistoryMode, memory::SimpleMemory, - utils::eth_price_per_pubdata_byte, +use crate::{ + interface::{ + dyn_tracers::vm_1_3_3::DynTracer, tracer::TracerExecutionStatus, L1BatchEnv, Refunds, }, - tracers::{ - traits::VmTracer, - utils::{gas_spent_on_bytecodes_and_long_messages_this_opcode, get_vm_hook_params, VmHook}, + vm_refunds_enhancement::{ + bootloader_state::BootloaderState, + constants::{BOOTLOADER_HEAP_PAGE, OPERATOR_REFUNDS_OFFSET, TX_GAS_LIMIT_OFFSET}, + old_vm::{ + events::merge_events, history_recorder::HistoryMode, memory::SimpleMemory, + utils::eth_price_per_pubdata_byte, + }, + tracers::{ + traits::VmTracer, + utils::{ + gas_spent_on_bytecodes_and_long_messages_this_opcode, get_vm_hook_params, VmHook, + }, + }, + types::internals::ZkSyncVmState, + utils::fee::get_batch_base_fee, }, - types::internals::ZkSyncVmState, }; /// Tracer responsible for collecting information about refunds. @@ -112,13 +112,14 @@ impl RefundsTracer { }); // For now, bootloader charges only for base fee. - let effective_gas_price = self.l1_batch.base_fee(); + let effective_gas_price = get_batch_base_fee(&self.l1_batch); let bootloader_eth_price_per_pubdata_byte = U256::from(effective_gas_price) * U256::from(current_ergs_per_pubdata_byte); - let fair_eth_price_per_pubdata_byte = - U256::from(eth_price_per_pubdata_byte(self.l1_batch.l1_gas_price)); + let fair_eth_price_per_pubdata_byte = U256::from(eth_price_per_pubdata_byte( + self.l1_batch.fee_input.l1_gas_price(), + )); // For now, L1 originated transactions are allowed to pay less than fair fee per pubdata, // so we should take it into account. 
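The refund arithmetic changed in the next hunk is easier to follow with plain integers. This is a hedged sketch of the same formula; the real code works with `U256` and the batch fee input, and every concrete number below is made up for illustration:

```rust
// Sketch of the refund formula used by `RefundsTracer` (illustrative values).
fn main() {
    let effective_gas_price: u128 = 250_000_000; // base fee charged, in wei
    let fair_l2_gas_price: u128 = 250_000_000; // fair price per unit of gas, in wei
    let eth_price_per_pubdata_byte: u128 = 170_000_000_000; // wei per pubdata byte
    let tx_gas_limit: u128 = 1_000_000;
    let gas_spent_on_computation: u128 = 600_000;
    let pubdata_published: u128 = 500;

    // The fair fee: computation at the fair gas price plus the published
    // pubdata at the ETH price per byte.
    let fair_fee_eth = gas_spent_on_computation * fair_l2_gas_price
        + pubdata_published * eth_price_per_pubdata_byte;
    // What was pre-charged upfront: the full gas limit at the effective price.
    let pre_paid_eth = tx_gas_limit * effective_gas_price;
    // The refund is the difference; the real code saturates at zero (with a log)
    // if the fair fee somehow exceeds the prepayment.
    let refund_eth = pre_paid_eth.saturating_sub(fair_fee_eth);
    println!("refund: {refund_eth} wei");
}
```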
@@ -128,7 +129,7 @@ impl RefundsTracer { ); let fair_fee_eth = U256::from(gas_spent_on_computation) - * U256::from(self.l1_batch.fair_l2_gas_price) + * U256::from(self.l1_batch.fee_input.fair_l2_gas_price()) + U256::from(pubdata_published) * eth_price_per_pubdata_byte_for_calculation; let pre_paid_eth = U256::from(tx_gas_limit) * U256::from(effective_gas_price); let refund_eth = pre_paid_eth.checked_sub(fair_fee_eth).unwrap_or_else(|| { @@ -210,8 +211,8 @@ impl VmTracer for RefundsTracer { #[vise::register] static METRICS: vise::Global = vise::Global::new(); - // This means that the bootloader has informed the system (usually via VMHooks) - that some gas - // should be refunded back (see askOperatorForRefund in bootloader.yul for details). + // This means that the bootloader has informed the system (usually via `VMHooks`) - that some gas + // should be refunded back (see `askOperatorForRefund` in `bootloader.yul` for details). if let Some(bootloader_refund) = self.requested_refund() { assert!( self.operator_refund.is_none(), diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/tracers/result_tracer.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/tracers/result_tracer.rs index c0a8e5d6cc0..22cf08c8ef9 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/tracers/result_tracer.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/tracers/result_tracer.rs @@ -4,23 +4,27 @@ use zk_evm_1_3_3::{ zkevm_opcode_defs::FatPointer, }; use zksync_state::{StoragePtr, WriteStorage}; - -use crate::interface::dyn_tracers::vm_1_3_3::DynTracer; -use crate::interface::tracer::{TracerExecutionStopReason, VmExecutionStopReason}; -use crate::interface::{ExecutionResult, Halt, TxRevertReason, VmExecutionMode, VmRevertReason}; use zksync_types::U256; -use crate::vm_refunds_enhancement::bootloader_state::BootloaderState; -use crate::vm_refunds_enhancement::old_vm::{ - history_recorder::HistoryMode, - memory::SimpleMemory, - utils::{vm_may_have_ended_inner, VmExecutionResult}, +use crate::{ + interface::{ + dyn_tracers::vm_1_3_3::DynTracer, + tracer::{TracerExecutionStopReason, VmExecutionStopReason}, + ExecutionResult, Halt, TxRevertReason, VmExecutionMode, VmRevertReason, + }, + vm_refunds_enhancement::{ + bootloader_state::BootloaderState, + constants::{BOOTLOADER_HEAP_PAGE, RESULT_SUCCESS_FIRST_SLOT}, + old_vm::{ + history_recorder::HistoryMode, + memory::SimpleMemory, + utils::{vm_may_have_ended_inner, VmExecutionResult}, + }, + tracers::utils::{get_vm_hook_params, read_pointer, VmHook}, + types::internals::ZkSyncVmState, + VmTracer, + }, }; -use crate::vm_refunds_enhancement::tracers::utils::{get_vm_hook_params, read_pointer, VmHook}; - -use crate::vm_refunds_enhancement::constants::{BOOTLOADER_HEAP_PAGE, RESULT_SUCCESS_FIRST_SLOT}; -use crate::vm_refunds_enhancement::types::internals::ZkSyncVmState; -use crate::vm_refunds_enhancement::VmTracer; #[derive(Debug, Clone)] enum Result { @@ -48,7 +52,7 @@ impl ResultTracer { } fn current_frame_is_bootloader(local_state: &VmLocalState) -> bool { - // The current frame is bootloader if the callstack depth is 1. + // The current frame is bootloader if the call stack depth is 1. // Some of the near calls inside the bootloader can be out of gas, which is totally normal behavior // and it shouldn't result in `is_bootloader_out_of_gas` becoming true. 
local_state.callstack.inner.len() == 1 @@ -147,7 +151,7 @@ impl ResultTracer { }); } VmExecutionResult::Revert(output) => { - // Unlike VmHook::ExecutionResult, vm has completely finished and returned not only the revert reason, + // Unlike `VmHook::ExecutionResult`, vm has completely finished and returned not only the revert reason, // but with bytecode, which represents the type of error from the bootloader side let revert_reason = TxRevertReason::parse_error(&output); diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/tracers/traits.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/tracers/traits.rs index 13b295b9fe9..b54819148fa 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/tracers/traits.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/tracers/traits.rs @@ -1,11 +1,16 @@ -use crate::interface::dyn_tracers::vm_1_3_3::DynTracer; -use crate::interface::tracer::{TracerExecutionStatus, VmExecutionStopReason}; use zksync_state::WriteStorage; -use crate::vm_refunds_enhancement::bootloader_state::BootloaderState; -use crate::vm_refunds_enhancement::old_vm::history_recorder::HistoryMode; -use crate::vm_refunds_enhancement::old_vm::memory::SimpleMemory; -use crate::vm_refunds_enhancement::types::internals::ZkSyncVmState; +use crate::{ + interface::{ + dyn_tracers::vm_1_3_3::DynTracer, + tracer::{TracerExecutionStatus, VmExecutionStopReason}, + }, + vm_refunds_enhancement::{ + bootloader_state::BootloaderState, + old_vm::{history_recorder::HistoryMode, memory::SimpleMemory}, + types::internals::ZkSyncVmState, + }, +}; pub type TracerPointer = Box>; diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/tracers/utils.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/tracers/utils.rs index 654c7300e4a..ccacea0cd7e 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/tracers/utils.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/tracers/utils.rs @@ -5,7 +5,6 @@ use zk_evm_1_3_3::{ FarCallABI, FarCallForwardPageType, FatPointer, LogOpcode, Opcode, UMAOpcode, }, }; - use zksync_system_constants::{ ECRECOVER_PRECOMPILE_ADDRESS, KECCAK256_PRECOMPILE_ADDRESS, KNOWN_CODES_STORAGE_ADDRESS, L1_MESSENGER_ADDRESS, SHA256_PRECOMPILE_ADDRESS, @@ -13,13 +12,15 @@ use zksync_system_constants::{ use zksync_types::U256; use zksync_utils::u256_to_h256; -use crate::vm_refunds_enhancement::constants::{ - BOOTLOADER_HEAP_PAGE, VM_HOOK_PARAMS_COUNT, VM_HOOK_PARAMS_START_POSITION, VM_HOOK_POSITION, -}; -use crate::vm_refunds_enhancement::old_vm::{ - history_recorder::HistoryMode, - memory::SimpleMemory, - utils::{aux_heap_page_from_base, heap_page_from_base}, +use crate::vm_refunds_enhancement::{ + constants::{ + BOOTLOADER_HEAP_PAGE, VM_HOOK_PARAMS_COUNT, VM_HOOK_PARAMS_START_POSITION, VM_HOOK_POSITION, + }, + old_vm::{ + history_recorder::HistoryMode, + memory::SimpleMemory, + utils::{aux_heap_page_from_base, heap_page_from_base}, + }, }; #[derive(Clone, Debug, Copy)] @@ -54,7 +55,7 @@ impl VmHook { let value = data.src1_value.value; - // Only UMA opcodes in the bootloader serve for vm hooks + // Only `UMA` opcodes in the bootloader serve for vm hooks if !matches!(opcode_variant.opcode, Opcode::UMA(UMAOpcode::HeapWrite)) || heap_page != BOOTLOADER_HEAP_PAGE || fat_ptr.offset != VM_HOOK_POSITION * 32 @@ -94,7 +95,7 @@ pub(crate) fn get_debug_log( let msg = String::from_utf8(msg).expect("Invalid debug message"); let data = U256::from_big_endian(&data); - // For long data, it is better to use hex-encoding for greater 
readibility + // For long data, it is better to use hex-encoding for greater readability let data_str = if data > U256::from(u64::max_value()) { let mut bytes = [0u8; 32]; data.to_big_endian(&mut bytes); @@ -109,7 +110,7 @@ pub(crate) fn get_debug_log( } /// Reads the memory slice represented by the fat pointer. -/// Note, that the fat pointer must point to the accesible memory (i.e. not cleared up yet). +/// Note, that the fat pointer must point to the accessible memory (i.e. not cleared up yet). pub(crate) fn read_pointer( memory: &SimpleMemory, pointer: FatPointer, diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/types/internals/transaction_data.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/types/internals/transaction_data.rs index 1b589146a29..1493cf7e59d 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/types/internals/transaction_data.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/types/internals/transaction_data.rs @@ -1,17 +1,20 @@ use std::convert::TryInto; -use zksync_types::ethabi::{encode, Address, Token}; -use zksync_types::fee::{encoding_len, Fee}; -use zksync_types::l1::is_l1_tx_type; -use zksync_types::l2::L2Tx; -use zksync_types::transaction_request::{PaymasterParams, TransactionRequest}; + use zksync_types::{ - l2::TransactionType, Bytes, Execute, ExecuteTransactionCommon, L2ChainId, L2TxCommonData, - Nonce, Transaction, H256, U256, + ethabi::{encode, Address, Token}, + fee::{encoding_len, Fee}, + l1::is_l1_tx_type, + l2::{L2Tx, TransactionType}, + transaction_request::{PaymasterParams, TransactionRequest}, + Bytes, Execute, ExecuteTransactionCommon, L2ChainId, L2TxCommonData, Nonce, Transaction, H256, + U256, }; -use zksync_utils::address_to_h256; -use zksync_utils::{bytecode::hash_bytecode, bytes_to_be_words, h256_to_u256}; +use zksync_utils::{address_to_h256, bytecode::hash_bytecode, bytes_to_be_words, h256_to_u256}; -use crate::vm_refunds_enhancement::utils::overhead::{get_amortized_overhead, OverheadCoeficients}; +use crate::vm_refunds_enhancement::{ + constants::MAX_GAS_PER_PUBDATA_BYTE, + utils::overhead::{get_amortized_overhead, OverheadCoefficients}, +}; /// This structure represents the data that is used by /// the Bootloader to describe the transaction. @@ -59,12 +62,22 @@ impl From for TransactionData { U256::zero() }; + // Ethereum transactions do not sign gas per pubdata limit, and so for them we need to use + // some default value. We use the maximum possible value that is allowed by the bootloader + // (i.e. we can not use u64::MAX, because the bootloader requires gas per pubdata for such + // transactions to be higher than `MAX_GAS_PER_PUBDATA_BYTE`). 
+ let gas_per_pubdata_limit = if common_data.transaction_type.is_ethereum_type() { + MAX_GAS_PER_PUBDATA_BYTE.into() + } else { + common_data.fee.gas_per_pubdata_limit + }; + TransactionData { tx_type: (common_data.transaction_type as u32) as u8, from: common_data.initiator_address, to: execute_tx.execute.contract_address, gas_limit: common_data.fee.gas_limit, - pubdata_price_limit: common_data.fee.gas_per_pubdata_limit, + pubdata_price_limit: gas_per_pubdata_limit, max_fee_per_gas: common_data.fee.max_fee_per_gas, max_priority_fee_per_gas: common_data.fee.max_priority_fee_per_gas, paymaster: common_data.paymaster_params.paymaster, @@ -212,12 +225,12 @@ impl TransactionData { self.reserved_dynamic.len() as u64, ); - let coeficients = OverheadCoeficients::from_tx_type(self.tx_type); + let coefficients = OverheadCoefficients::from_tx_type(self.tx_type); get_amortized_overhead( total_gas_limit, gas_price_per_pubdata, encoded_len, - coeficients, + coefficients, ) } @@ -234,7 +247,7 @@ impl TransactionData { let l2_tx: L2Tx = self.clone().try_into().unwrap(); let transaction_request: TransactionRequest = l2_tx.into(); - // It is assumed that the TransactionData always has all the necessary components to recover the hash. + // It is assumed that the `TransactionData` always has all the necessary components to recover the hash. transaction_request .get_tx_hash(chain_id) .expect("Could not recover L2 transaction hash") @@ -303,9 +316,10 @@ impl TryInto for TransactionData { #[cfg(test)] mod tests { - use super::*; use zksync_types::fee::encoding_len; + use super::*; + #[test] fn test_consistency_with_encoding_length() { let transaction = TransactionData { diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/types/internals/vm_state.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/types/internals/vm_state.rs index b656cd09f9b..48c1e1f082f 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/types/internals/vm_state.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/types/internals/vm_state.rs @@ -1,34 +1,40 @@ use zk_evm_1_3_3::{ - aux_structures::MemoryPage, - aux_structures::Timestamp, + aux_structures::{MemoryPage, Timestamp}, block_properties::BlockProperties, vm_state::{CallStackEntry, PrimitiveValue, VmState}, witness_trace::DummyTracer, zkevm_opcode_defs::{ system_params::{BOOTLOADER_MAX_MEMORY, INITIAL_FRAME_FORMAL_EH_LOCATION}, - FatPointer, BOOTLOADER_CALLDATA_PAGE, + FatPointer, BOOTLOADER_BASE_PAGE, BOOTLOADER_CALLDATA_PAGE, BOOTLOADER_CODE_PAGE, + STARTING_BASE_PAGE, STARTING_TIMESTAMP, }, }; - -use crate::interface::{L1BatchEnv, L2Block, SystemEnv}; -use zk_evm_1_3_3::zkevm_opcode_defs::{ - BOOTLOADER_BASE_PAGE, BOOTLOADER_CODE_PAGE, STARTING_BASE_PAGE, STARTING_TIMESTAMP, -}; use zksync_state::{StoragePtr, WriteStorage}; use zksync_system_constants::BOOTLOADER_ADDRESS; -use zksync_types::block::legacy_miniblock_hash; -use zksync_types::{zkevm_test_harness::INITIAL_MONOTONIC_CYCLE_COUNTER, Address, MiniblockNumber}; +use zksync_types::{ + block::MiniblockHasher, zkevm_test_harness::INITIAL_MONOTONIC_CYCLE_COUNTER, Address, + MiniblockNumber, +}; use zksync_utils::h256_to_u256; -use crate::vm_refunds_enhancement::bootloader_state::BootloaderState; -use crate::vm_refunds_enhancement::constants::BOOTLOADER_HEAP_PAGE; -use crate::vm_refunds_enhancement::old_vm::{ - event_sink::InMemoryEventSink, history_recorder::HistoryMode, memory::SimpleMemory, - oracles::decommitter::DecommitterOracle, oracles::precompile::PrecompilesProcessorWithHistory, 
+use crate::{ + interface::{L1BatchEnv, L2Block, SystemEnv}, + vm_refunds_enhancement::{ + bootloader_state::BootloaderState, + constants::BOOTLOADER_HEAP_PAGE, + old_vm::{ + event_sink::InMemoryEventSink, + history_recorder::HistoryMode, + memory::SimpleMemory, + oracles::{ + decommitter::DecommitterOracle, precompile::PrecompilesProcessorWithHistory, + }, + }, + oracles::storage::StorageOracle, + types::l1_batch::bootloader_initial_memory, + utils::l2_blocks::{assert_next_block, load_last_l2_block}, + }, }; -use crate::vm_refunds_enhancement::oracles::storage::StorageOracle; -use crate::vm_refunds_enhancement::types::l1_batch::bootloader_initial_memory; -use crate::vm_refunds_enhancement::utils::l2_blocks::{assert_next_block, load_last_l2_block}; pub type ZkSyncVmState = VmState< StorageOracle, @@ -67,7 +73,9 @@ pub(crate) fn new_vm_state( L2Block { number: l1_batch_env.first_l2_block.number.saturating_sub(1), timestamp: 0, - hash: legacy_miniblock_hash(MiniblockNumber(l1_batch_env.first_l2_block.number) - 1), + hash: MiniblockHasher::legacy_hash( + MiniblockNumber(l1_batch_env.first_l2_block.number) - 1, + ), } }; diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/types/l1_batch.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/types/l1_batch.rs index 631f1436cc3..b449165be34 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/types/l1_batch.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/types/l1_batch.rs @@ -1,7 +1,8 @@ -use crate::interface::L1BatchEnv; use zksync_types::U256; use zksync_utils::{address_to_u256, h256_to_u256}; +use crate::{interface::L1BatchEnv, vm_refunds_enhancement::utils::fee::get_batch_base_fee}; + const OPERATOR_ADDRESS_SLOT: usize = 0; const PREV_BLOCK_HASH_SLOT: usize = 1; const NEW_BLOCK_TIMESTAMP_SLOT: usize = 2; @@ -18,6 +19,8 @@ pub(crate) fn bootloader_initial_memory(l1_batch: &L1BatchEnv) -> Vec<(usize, U2 .map(|prev_block_hash| (h256_to_u256(prev_block_hash), U256::one())) .unwrap_or_default(); + let fee_input = l1_batch.fee_input.into_l1_pegged(); + vec![ ( OPERATOR_ADDRESS_SLOT, @@ -26,12 +29,15 @@ pub(crate) fn bootloader_initial_memory(l1_batch: &L1BatchEnv) -> Vec<(usize, U2 (PREV_BLOCK_HASH_SLOT, prev_block_hash), (NEW_BLOCK_TIMESTAMP_SLOT, U256::from(l1_batch.timestamp)), (NEW_BLOCK_NUMBER_SLOT, U256::from(l1_batch.number.0)), - (L1_GAS_PRICE_SLOT, U256::from(l1_batch.l1_gas_price)), + (L1_GAS_PRICE_SLOT, U256::from(fee_input.l1_gas_price)), ( FAIR_L2_GAS_PRICE_SLOT, - U256::from(l1_batch.fair_l2_gas_price), + U256::from(fee_input.fair_l2_gas_price), + ), + ( + EXPECTED_BASE_FEE_SLOT, + U256::from(get_batch_base_fee(l1_batch)), ), - (EXPECTED_BASE_FEE_SLOT, U256::from(l1_batch.base_fee())), (SHOULD_SET_NEW_BLOCK_SLOT, should_set_new_block), ] } diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/utils/fee.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/utils/fee.rs index 02ea1c4a561..a2fccb59630 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/utils/fee.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/utils/fee.rs @@ -1,29 +1,53 @@ //! 
Utility functions for vm -use zksync_system_constants::MAX_GAS_PER_PUBDATA_BYTE; +use zksync_types::fee_model::L1PeggedBatchFeeModelInput; use zksync_utils::ceil_div; -use crate::vm_refunds_enhancement::old_vm::utils::eth_price_per_pubdata_byte; +use crate::{ + vm_latest::L1BatchEnv, + vm_refunds_enhancement::{ + constants::MAX_GAS_PER_PUBDATA_BYTE, old_vm::utils::eth_price_per_pubdata_byte, + }, +}; -/// Calcluates the amount of gas required to publish one byte of pubdata -pub fn base_fee_to_gas_per_pubdata(l1_gas_price: u64, base_fee: u64) -> u64 { +/// Calculates the amount of gas required to publish one byte of pubdata +pub(crate) fn base_fee_to_gas_per_pubdata(l1_gas_price: u64, base_fee: u64) -> u64 { let eth_price_per_pubdata_byte = eth_price_per_pubdata_byte(l1_gas_price); ceil_div(eth_price_per_pubdata_byte, base_fee) } /// Calculates the base fee and gas per pubdata for the given L1 gas price. -pub fn derive_base_fee_and_gas_per_pubdata(l1_gas_price: u64, fair_gas_price: u64) -> (u64, u64) { +pub(crate) fn derive_base_fee_and_gas_per_pubdata( + fee_input: L1PeggedBatchFeeModelInput, +) -> (u64, u64) { + let L1PeggedBatchFeeModelInput { + l1_gas_price, + fair_l2_gas_price, + } = fee_input; let eth_price_per_pubdata_byte = eth_price_per_pubdata_byte(l1_gas_price); - // The baseFee is set in such a way that it is always possible for a transaction to + // The `baseFee` is set in such a way that it is always possible for a transaction to // publish enough public data while compensating us for it. let base_fee = std::cmp::max( - fair_gas_price, + fair_l2_gas_price, ceil_div(eth_price_per_pubdata_byte, MAX_GAS_PER_PUBDATA_BYTE), ); ( base_fee, - base_fee_to_gas_per_pubdata(l1_gas_price, base_fee), + base_fee_to_gas_per_pubdata(fee_input.l1_gas_price, base_fee), ) } + +pub(crate) fn get_batch_base_fee(l1_batch_env: &L1BatchEnv) -> u64 { + if let Some(base_fee) = l1_batch_env.enforced_base_fee { + return base_fee; + } + let (base_fee, _) = + derive_base_fee_and_gas_per_pubdata(l1_batch_env.fee_input.into_l1_pegged()); + base_fee +} + +pub(crate) fn get_batch_gas_per_pubdata(l1_batch_env: &L1BatchEnv) -> u64 { + derive_base_fee_and_gas_per_pubdata(l1_batch_env.fee_input.into_l1_pegged()).1 +} diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/utils/l2_blocks.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/utils/l2_blocks.rs index 3d5f58094e0..e5832f7f587 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/utils/l2_blocks.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/utils/l2_blocks.rs @@ -1,15 +1,17 @@ -use crate::interface::{L2Block, L2BlockEnv}; use zksync_state::{ReadStorage, StoragePtr}; use zksync_system_constants::{ SYSTEM_CONTEXT_ADDRESS, SYSTEM_CONTEXT_CURRENT_L2_BLOCK_HASHES_POSITION, SYSTEM_CONTEXT_CURRENT_L2_BLOCK_INFO_POSITION, SYSTEM_CONTEXT_CURRENT_TX_ROLLING_HASH_POSITION, SYSTEM_CONTEXT_STORED_L2_BLOCK_HASHES, }; -use zksync_types::block::unpack_block_info; -use zksync_types::web3::signing::keccak256; -use zksync_types::{AccountTreeId, MiniblockNumber, StorageKey, H256, U256}; +use zksync_types::{ + block::unpack_block_info, web3::signing::keccak256, AccountTreeId, MiniblockNumber, StorageKey, + H256, U256, +}; use zksync_utils::{h256_to_u256, u256_to_h256}; +use crate::interface::{L2Block, L2BlockEnv}; + pub(crate) fn get_l2_block_hash_key(block_number: u32) -> StorageKey { let position = h256_to_u256(SYSTEM_CONTEXT_CURRENT_L2_BLOCK_HASHES_POSITION) + U256::from(block_number % SYSTEM_CONTEXT_STORED_L2_BLOCK_HASHES); @@ 
-66,7 +68,7 @@ pub fn load_last_l2_block(storage: StoragePtr) -> Option u32 { - // Even if the gas limit is greater than the MAX_TX_ERGS_LIMIT, we assume that everything beyond MAX_TX_ERGS_LIMIT + // Even if the gas limit is greater than the `MAX_TX_ERGS_LIMIT`, we assume that everything beyond `MAX_TX_ERGS_LIMIT` // will be spent entirely on publishing bytecodes and so we derive the overhead solely based on the capped value let gas_limit = std::cmp::min(MAX_TX_ERGS_LIMIT, gas_limit); @@ -23,8 +23,8 @@ pub fn derive_overhead( let gas_limit = U256::from(gas_limit); let encoded_len = U256::from(encoded_len); - // The MAX_TX_ERGS_LIMIT is formed in a way that may fullfills a single-instance circuits - // if used in full. That is, within MAX_TX_ERGS_LIMIT it is possible to fully saturate all the single-instance + // The `MAX_TX_ERGS_LIMIT` is formed in a way that may fulfills a single-instance circuits + // if used in full. That is, within `MAX_TX_ERGS_LIMIT` it is possible to fully saturate all the single-instance // circuits. let overhead_for_single_instance_circuits = ceil_div_u256(gas_limit * max_block_overhead, MAX_TX_ERGS_LIMIT.into()); @@ -38,42 +38,44 @@ pub fn derive_overhead( // The overhead for occupying a single tx slot let tx_slot_overhead = ceil_div_u256(max_block_overhead, MAX_TXS_IN_BLOCK.into()); - // We use "ceil" here for formal reasons to allow easier approach for calculating the overhead in O(1) - // let max_pubdata_in_tx = ceil_div_u256(gas_limit, gas_price_per_pubdata); + // We use `ceil` here for formal reasons to allow easier approach for calculating the overhead in O(1) + // `let max_pubdata_in_tx = ceil_div_u256(gas_limit, gas_price_per_pubdata);` // The maximal potential overhead from pubdata // TODO (EVM-67): possibly use overhead for pubdata + // ``` // let pubdata_overhead = ceil_div_u256( // max_pubdata_in_tx * max_block_overhead, // MAX_PUBDATA_PER_BLOCK.into(), // ); + // ``` vec![ - (coeficients.ergs_limit_overhead_coeficient + (coefficients.ergs_limit_overhead_coeficient * overhead_for_single_instance_circuits.as_u32() as f64) .floor() as u32, - (coeficients.bootloader_memory_overhead_coeficient * overhead_for_length.as_u32() as f64) + (coefficients.bootloader_memory_overhead_coeficient * overhead_for_length.as_u32() as f64) .floor() as u32, - (coeficients.slot_overhead_coeficient * tx_slot_overhead.as_u32() as f64) as u32, + (coefficients.slot_overhead_coeficient * tx_slot_overhead.as_u32() as f64) as u32, ] .into_iter() .max() .unwrap() } -/// Contains the coeficients with which the overhead for transactions will be calculated. -/// All of the coeficients should be <= 1. There are here to provide a certain "discount" for normal transactions +/// Contains the coefficients with which the overhead for transactions will be calculated. +/// All of the coefficients should be <= 1. There are here to provide a certain "discount" for normal transactions /// at the risk of malicious transactions that may close the block prematurely. 
-/// IMPORTANT: to perform correct computations, `MAX_TX_ERGS_LIMIT / coeficients.ergs_limit_overhead_coeficient` MUST +/// IMPORTANT: to perform correct computations, `MAX_TX_ERGS_LIMIT / coefficients.ergs_limit_overhead_coeficient` MUST /// result in an integer number #[derive(Debug, Clone, Copy)] -pub struct OverheadCoeficients { +pub struct OverheadCoefficients { slot_overhead_coeficient: f64, bootloader_memory_overhead_coeficient: f64, ergs_limit_overhead_coeficient: f64, } -impl OverheadCoeficients { +impl OverheadCoefficients { // This method ensures that the parameters keep the required invariants fn new_checked( slot_overhead_coeficient: f64, @@ -95,21 +97,21 @@ impl OverheadCoeficients { // L1->L2 do not receive any discounts fn new_l1() -> Self { - OverheadCoeficients::new_checked(1.0, 1.0, 1.0) + OverheadCoefficients::new_checked(1.0, 1.0, 1.0) } fn new_l2() -> Self { - OverheadCoeficients::new_checked( + OverheadCoefficients::new_checked( 1.0, 1.0, // For L2 transactions we allow a certain default discount with regard to the number of ergs. - // Multiinstance circuits can in theory be spawned infinite times, while projected future limitations - // on gas per pubdata allow for roughly 800kk gas per L1 batch, so the rough trust "discount" on the proof's part + // Multi-instance circuits can in theory be spawned infinite times, while projected future limitations + // on gas per pubdata allow for roughly 800k gas per L1 batch, so the rough trust "discount" on the proof's part // to be paid by the users is 0.1. 0.1, ) } - /// Return the coeficients for the given transaction type + /// Return the coefficients for the given transaction type pub fn from_tx_type(tx_type: u8) -> Self { if is_l1_tx_type(tx_type) { Self::new_l1() @@ -124,7 +126,7 @@ pub(crate) fn get_amortized_overhead( total_gas_limit: u32, gas_per_pubdata_byte_limit: u32, encoded_len: usize, - coeficients: OverheadCoeficients, + coefficients: OverheadCoefficients, ) -> u32 { // Using large U256 type to prevent overflows. let overhead_for_block_gas = U256::from(block_overhead_gas(gas_per_pubdata_byte_limit)); @@ -132,28 +134,28 @@ pub(crate) fn get_amortized_overhead( let encoded_len = U256::from(encoded_len); // Derivation of overhead consists of 4 parts: - // 1. The overhead for taking up a transaction's slot. (O1): O1 = 1 / MAX_TXS_IN_BLOCK - // 2. The overhead for taking up the bootloader's memory (O2): O2 = encoded_len / BOOTLOADER_TX_ENCODING_SPACE - // 3. The overhead for possible usage of pubdata. (O3): O3 = max_pubdata_in_tx / MAX_PUBDATA_PER_BLOCK - // 4. The overhead for possible usage of all the single-instance circuits. (O4): O4 = gas_limit / MAX_TX_ERGS_LIMIT + // 1. The overhead for taking up a transaction's slot. `(O1): O1 = 1 / MAX_TXS_IN_BLOCK` + // 2. The overhead for taking up the bootloader's memory `(O2): O2 = encoded_len / BOOTLOADER_TX_ENCODING_SPACE` + // 3. The overhead for possible usage of pubdata. `(O3): O3 = max_pubdata_in_tx / MAX_PUBDATA_PER_BLOCK` + // 4. The overhead for possible usage of all the single-instance circuits. `(O4): O4 = gas_limit / MAX_TX_ERGS_LIMIT` // // The maximum of these is taken to derive the part of the block's overhead to be paid by the users: // - // max_overhead = max(O1, O2, O3, O4) - // overhead_gas = ceil(max_overhead * overhead_for_block_gas). Thus, overhead_gas is a function of - // tx_gas_limit, gas_per_pubdata_byte_limit and encoded_len. + // `max_overhead = max(O1, O2, O3, O4)` + // `overhead_gas = ceil(max_overhead * overhead_for_block_gas)`. 
Thus, `overhead_gas` is a function of + // `tx_gas_limit`, `gas_per_pubdata_byte_limit` and `encoded_len`. // - // While it is possible to derive the overhead with binary search in O(log n), it is too expensive to be done + // While it is possible to derive the overhead with binary search in `O(log n)`, it is too expensive to be done // on L1, so here is a reference implementation of finding the overhead for transaction in O(1): // - // Given total_gas_limit = tx_gas_limit + overhead_gas, we need to find overhead_gas and tx_gas_limit, such that: - // 1. overhead_gas is maximal possible (the operator is paid fairly) - // 2. overhead_gas(tx_gas_limit, gas_per_pubdata_byte_limit, encoded_len) >= overhead_gas (the user does not overpay) + // Given `total_gas_limit = tx_gas_limit + overhead_gas`, we need to find `overhead_gas` and `tx_gas_limit`, such that: + // 1. `overhead_gas` is maximal possible (the operator is paid fairly) + // 2. `overhead_gas(tx_gas_limit, gas_per_pubdata_byte_limit, encoded_len) >= overhead_gas` (the user does not overpay) // The third part boils to the following 4 inequalities (at least one of these must hold): - // ceil(O1 * overhead_for_block_gas) >= overhead_gas - // ceil(O2 * overhead_for_block_gas) >= overhead_gas - // ceil(O3 * overhead_for_block_gas) >= overhead_gas - // ceil(O4 * overhead_for_block_gas) >= overhead_gas + // `ceil(O1 * overhead_for_block_gas) >= overhead_gas` + // `ceil(O2 * overhead_for_block_gas) >= overhead_gas` + // `ceil(O3 * overhead_for_block_gas) >= overhead_gas` + // `ceil(O4 * overhead_for_block_gas) >= overhead_gas` // // Now, we need to solve each of these separately: @@ -161,10 +163,10 @@ pub(crate) fn get_amortized_overhead( let tx_slot_overhead = { let tx_slot_overhead = ceil_div_u256(overhead_for_block_gas, MAX_TXS_IN_BLOCK.into()).as_u32(); - (coeficients.slot_overhead_coeficient * tx_slot_overhead as f64).floor() as u32 + (coefficients.slot_overhead_coeficient * tx_slot_overhead as f64).floor() as u32 }; - // 2. The overhead for occupying the bootloader memory can be derived from encoded_len + // 2. The overhead for occupying the bootloader memory can be derived from `encoded_len` let overhead_for_length = { let overhead_for_length = ceil_div_u256( encoded_len * overhead_for_block_gas, @@ -172,20 +174,23 @@ pub(crate) fn get_amortized_overhead( ) .as_u32(); - (coeficients.bootloader_memory_overhead_coeficient * overhead_for_length as f64).floor() + (coefficients.bootloader_memory_overhead_coeficient * overhead_for_length as f64).floor() as u32 }; // TODO (EVM-67): possibly include the overhead for pubdata. The formula below has not been properly maintained, - // since the pubdat is not published. If decided to use the pubdata overhead, it needs to be updated. - // 3. ceil(O3 * overhead_for_block_gas) >= overhead_gas - // O3 = max_pubdata_in_tx / MAX_PUBDATA_PER_BLOCK = ceil(gas_limit / gas_per_pubdata_byte_limit) / MAX_PUBDATA_PER_BLOCK - // >= (gas_limit / (gas_per_pubdata_byte_limit * MAX_PUBDATA_PER_BLOCK). Throwing off the `ceil`, while may provide marginally lower + // since the pubdata is not published. If decided to use the pubdata overhead, it needs to be updated. + // ``` + // 3. ceil(O3 * overhead_for_block_gas) >= overhead_gas` + // O3 = max_pubdata_in_tx / MAX_PUBDATA_PER_BLOCK = ceil(gas_limit / gas_per_pubdata_byte_limit) / MAX_PUBDATA_PER_BLOCK` + // >= (gas_limit / (gas_per_pubdata_byte_limit * MAX_PUBDATA_PER_BLOCK). 
+ // ``` + // Throwing off the `ceil`, while may provide marginally lower // overhead to the operator, provides substantially easier formula to work with. // - // For better clarity, let's denote gas_limit = GL, MAX_PUBDATA_PER_BLOCK = MP, gas_per_pubdata_byte_limit = EP, overhead_for_block_gas = OB, total_gas_limit = TL, overhead_gas = OE - // ceil(OB * (TL - OE) / (EP * MP)) >= OE - // + // For better clarity, let's denote `gas_limit = GL, MAX_PUBDATA_PER_BLOCK = MP, gas_per_pubdata_byte_limit = EP, overhead_for_block_gas = OB, total_gas_limit = TL, overhead_gas = OE` + // `ceil(OB * (TL - OE) / (EP * MP)) >= OE` + // ``` // OB * (TL - OE) / (MP * EP) > OE - 1 // OB * (TL - OE) > (OE - 1) * EP * MP // OB * TL + EP * MP > OE * EP * MP + OE * OB @@ -196,7 +201,7 @@ pub(crate) fn get_amortized_overhead( // + gas_per_pubdata_byte_limit * U256::from(MAX_PUBDATA_PER_BLOCK); // let denominator = // gas_per_pubdata_byte_limit * U256::from(MAX_PUBDATA_PER_BLOCK) + overhead_for_block_gas; - + // // // Corner case: if `total_gas_limit` = `gas_per_pubdata_byte_limit` = 0 // // then the numerator will be 0 and subtracting 1 will cause a panic, so we just return a zero. // if numerator.is_zero() { @@ -205,7 +210,7 @@ pub(crate) fn get_amortized_overhead( // (numerator - 1) / denominator // } // }; - + // // 4. K * ceil(O4 * overhead_for_block_gas) >= overhead_gas, where K is the discount // O4 = gas_limit / MAX_TX_ERGS_LIMIT. Using the notation from the previous equation: // ceil(OB * GL / MAX_TX_ERGS_LIMIT) >= (OE / K) @@ -214,10 +219,11 @@ pub(crate) fn get_amortized_overhead( // OB * (TL - OE) > (OE/K) * MAX_TX_ERGS_LIMIT - MAX_TX_ERGS_LIMIT // OB * TL + MAX_TX_ERGS_LIMIT > OE * ( MAX_TX_ERGS_LIMIT/K + OB) // OE = floor(OB * TL + MAX_TX_ERGS_LIMIT / (MAX_TX_ERGS_LIMIT/K + OB)), with possible -1 if the division is without remainder + // ``` let overhead_for_gas = { let numerator = overhead_for_block_gas * total_gas_limit + U256::from(MAX_TX_ERGS_LIMIT); let denominator: U256 = U256::from( - (MAX_TX_ERGS_LIMIT as f64 / coeficients.ergs_limit_overhead_coeficient) as u64, + (MAX_TX_ERGS_LIMIT as f64 / coefficients.ergs_limit_overhead_coeficient) as u64, ) + overhead_for_block_gas; let overhead_for_gas = (numerator - 1) / denominator; @@ -228,21 +234,21 @@ pub(crate) fn get_amortized_overhead( let overhead = vec![tx_slot_overhead, overhead_for_length, overhead_for_gas] .into_iter() .max() - // For the sake of consistency making sure that total_gas_limit >= max_overhead + // For the sake of consistency making sure that `total_gas_limit >= max_overhead` .map(|max_overhead| std::cmp::min(max_overhead, total_gas_limit.as_u32())) .unwrap(); let limit_after_deducting_overhead = total_gas_limit - overhead; // During double checking of the overhead, the bootloader will assume that the - // body of the transaction does not have any more than MAX_L2_TX_GAS_LIMIT ergs available to it. + // body of the transaction does not have any more than `MAX_L2_TX_GAS_LIMIT` ergs available to it. 
if limit_after_deducting_overhead.as_u64() > MAX_L2_TX_GAS_LIMIT { - // We derive the same overhead that would exist for the MAX_L2_TX_GAS_LIMIT ergs + // We derive the same overhead that would exist for the `MAX_L2_TX_GAS_LIMIT` ergs derive_overhead( MAX_L2_TX_GAS_LIMIT as u32, gas_per_pubdata_byte_limit, encoded_len.as_usize(), - coeficients, + coefficients, ) } else { overhead @@ -263,14 +269,14 @@ mod tests { total_gas_limit: u32, gas_per_pubdata_byte_limit: u32, encoded_len: usize, - coeficients: OverheadCoeficients, + coefficients: OverheadCoefficients, ) -> u32 { let mut left_bound = if MAX_TX_ERGS_LIMIT < total_gas_limit { total_gas_limit - MAX_TX_ERGS_LIMIT } else { 0u32 }; - // Safe cast: the gas_limit for a transaction can not be larger than 2^32 + // Safe cast: the `gas_limit` for a transaction can not be larger than `2^32` let mut right_bound = total_gas_limit; // The closure returns whether a certain overhead would be accepted by the bootloader. @@ -281,7 +287,7 @@ mod tests { total_gas_limit - suggested_overhead, gas_per_pubdata_byte_limit, encoded_len, - coeficients, + coefficients, ); derived_overhead >= suggested_overhead @@ -310,40 +316,40 @@ mod tests { let test_params = |total_gas_limit: u32, gas_per_pubdata: u32, encoded_len: usize, - coeficients: OverheadCoeficients| { + coefficients: OverheadCoefficients| { let result_by_efficient_search = - get_amortized_overhead(total_gas_limit, gas_per_pubdata, encoded_len, coeficients); + get_amortized_overhead(total_gas_limit, gas_per_pubdata, encoded_len, coefficients); let result_by_binary_search = get_maximal_allowed_overhead_bin_search( total_gas_limit, gas_per_pubdata, encoded_len, - coeficients, + coefficients, ); assert_eq!(result_by_efficient_search, result_by_binary_search); }; // Some arbitrary test - test_params(60_000_000, 800, 2900, OverheadCoeficients::new_l2()); + test_params(60_000_000, 800, 2900, OverheadCoefficients::new_l2()); // Very small parameters - test_params(0, 1, 12, OverheadCoeficients::new_l2()); + test_params(0, 1, 12, OverheadCoefficients::new_l2()); // Relatively big parameters let max_tx_overhead = derive_overhead( MAX_TX_ERGS_LIMIT, 5000, 10000, - OverheadCoeficients::new_l2(), + OverheadCoefficients::new_l2(), ); test_params( MAX_TX_ERGS_LIMIT + max_tx_overhead, 5000, 10000, - OverheadCoeficients::new_l2(), + OverheadCoefficients::new_l2(), ); - test_params(115432560, 800, 2900, OverheadCoeficients::new_l1()); + test_params(115432560, 800, 2900, OverheadCoefficients::new_l1()); } } diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/utils/transaction_encoding.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/utils/transaction_encoding.rs index ab1352c2c75..56052eca813 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/utils/transaction_encoding.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/utils/transaction_encoding.rs @@ -1,6 +1,7 @@ -use crate::vm_refunds_enhancement::types::internals::TransactionData; use zksync_types::Transaction; +use crate::vm_refunds_enhancement::types::internals::TransactionData; + /// Extension for transactions, specific for VM. Required for bypassing the orphan rule pub trait TransactionVmExt { /// Get the size of the transaction in tokens. 
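To make the `O1`/`O2`/`O4` derivation in the `overhead.rs` hunks above concrete, here is a compact sketch of the non-amortized computation: the transaction pays its worst-case share of a tx slot, of bootloader encoding space, and of single-instance circuit capacity. The constants are placeholders rather than the real protocol values, and the per-term `OverheadCoefficients` discounts are omitted for brevity:

```rust
// Sketch of `derive_overhead`: take the max of the transaction's share of
// (a) a tx slot, (b) bootloader encoding space, (c) single-instance circuit
// capacity, each scaled by the block overhead. Placeholder constants.
const MAX_TXS_IN_BLOCK: u64 = 1024;
const BOOTLOADER_TX_ENCODING_SPACE: u64 = 500_000; // words, placeholder
const MAX_TX_ERGS_LIMIT: u64 = 80_000_000; // placeholder

fn ceil_div(a: u64, b: u64) -> u64 {
    (a + b - 1) / b
}

// O1 = 1 / MAX_TXS_IN_BLOCK, O2 = encoded_len / BOOTLOADER_TX_ENCODING_SPACE,
// O4 = gas_limit / MAX_TX_ERGS_LIMIT; the user pays
// ceil(max(O1, O2, O4) * max_block_overhead). The pubdata term O3 is disabled,
// as the TODO (EVM-67) in the hunk notes.
fn derive_overhead_sketch(gas_limit: u64, encoded_len: u64, max_block_overhead: u64) -> u64 {
    // Everything beyond MAX_TX_ERGS_LIMIT is assumed to go to publishing
    // bytecodes, so the circuit share is computed on the capped value.
    let gas_limit = gas_limit.min(MAX_TX_ERGS_LIMIT);
    let tx_slot = ceil_div(max_block_overhead, MAX_TXS_IN_BLOCK);
    let memory = ceil_div(encoded_len * max_block_overhead, BOOTLOADER_TX_ENCODING_SPACE);
    let circuits = ceil_div(gas_limit * max_block_overhead, MAX_TX_ERGS_LIMIT);
    tx_slot.max(memory).max(circuits)
}

fn main() {
    // With a 60M gas limit the circuit term dominates: 60M / 80M of the block
    // overhead, i.e. 900_000 out of 1_200_000 here.
    assert_eq!(derive_overhead_sketch(60_000_000, 2_900, 1_200_000), 900_000);
}
```

The amortized version in `get_amortized_overhead` then inverts this relation: given `total_gas_limit = tx_gas_limit + overhead_gas`, it solves each inequality for the largest `overhead_gas` the user can be charged without overpaying, which is what the binary-search test above cross-checks.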
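`TransactionVmExt` above is the standard extension-trait workaround for the orphan rule: this crate cannot add inherent methods to `zksync_types::Transaction`, but it can define a local trait and implement it for the foreign type. A generic sketch of the pattern (all names illustrative):

```rust
// Extension-trait pattern: add methods to a foreign type by defining a local
// trait. `other_crate` stands in for an external dependency.
mod other_crate {
    pub struct Transaction {
        pub calldata: Vec<u8>,
    }
}

use other_crate::Transaction;

/// Implementing this for `Transaction` is legal because the *trait* is local,
/// even though the type is foreign; callers just bring the trait into scope.
trait TransactionExt {
    fn encoding_size_in_words(&self) -> usize;
}

impl TransactionExt for Transaction {
    fn encoding_size_in_words(&self) -> usize {
        // Toy stand-in for the real bootloader encoding-length computation.
        (self.calldata.len() + 31) / 32
    }
}

fn main() {
    let tx = Transaction { calldata: vec![0; 100] };
    assert_eq!(tx.encoding_size_in_words(), 4);
}
```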
diff --git a/core/lib/multivm/src/versions/vm_refunds_enhancement/vm.rs b/core/lib/multivm/src/versions/vm_refunds_enhancement/vm.rs index 4056d709a9b..f1554ee1761 100644 --- a/core/lib/multivm/src/versions/vm_refunds_enhancement/vm.rs +++ b/core/lib/multivm/src/versions/vm_refunds_enhancement/vm.rs @@ -1,20 +1,22 @@ -use crate::HistoryMode; use zksync_state::{StoragePtr, WriteStorage}; -use zksync_types::l2_to_l1_log::UserL2ToL1Log; -use zksync_types::Transaction; +use zksync_types::{l2_to_l1_log::UserL2ToL1Log, Transaction}; use zksync_utils::bytecode::CompressedBytecodeInfo; -use crate::vm_refunds_enhancement::old_vm::events::merge_events; - -use crate::interface::{ - BootloaderMemory, CurrentExecutionState, L1BatchEnv, L2BlockEnv, SystemEnv, VmExecutionMode, - VmExecutionResultAndLogs, VmInterface, VmInterfaceHistoryEnabled, +use crate::{ + interface::{ + BootloaderMemory, BytecodeCompressionError, CurrentExecutionState, L1BatchEnv, L2BlockEnv, + SystemEnv, VmExecutionMode, VmExecutionResultAndLogs, VmInterface, + VmInterfaceHistoryEnabled, VmMemoryMetrics, + }, + vm_latest::HistoryEnabled, + vm_refunds_enhancement::{ + bootloader_state::BootloaderState, + old_vm::events::merge_events, + tracers::dispatcher::TracerDispatcher, + types::internals::{new_vm_state, VmSnapshot, ZkSyncVmState}, + }, + HistoryMode, }; -use crate::interface::{BytecodeCompressionError, VmMemoryMetrics}; -use crate::vm_latest::HistoryEnabled; -use crate::vm_refunds_enhancement::bootloader_state::BootloaderState; -use crate::vm_refunds_enhancement::tracers::dispatcher::TracerDispatcher; -use crate::vm_refunds_enhancement::types::internals::{new_vm_state, VmSnapshot, ZkSyncVmState}; /// Main entry point for Virtual Machine integration. /// The instance should process only one l1 batch @@ -116,13 +118,19 @@ impl VmInterface for Vm { dispatcher: Self::TracerDispatcher, tx: Transaction, with_compression: bool, - ) -> Result { + ) -> ( + Result<(), BytecodeCompressionError>, + VmExecutionResultAndLogs, + ) { self.push_transaction_with_compression(tx, with_compression); let result = self.inspect(dispatcher, VmExecutionMode::OneTx); if self.has_unpublished_bytecodes() { - Err(BytecodeCompressionError::BytecodeCompressionFailed) + ( + Err(BytecodeCompressionError::BytecodeCompressionFailed), + result, + ) } else { - Ok(result) + (Ok(()), result) } } @@ -131,7 +139,7 @@ impl VmInterface for Vm { } } -/// Methods of vm, which required some history manipullations +/// Methods of vm, which required some history manipulations impl VmInterfaceHistoryEnabled for Vm { /// Create snapshot of current vm state and push it into the memory fn make_snapshot(&mut self) { diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/bootloader_state/l2_block.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/bootloader_state/l2_block.rs index fac7cb33d21..48284bcc2ac 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/bootloader_state/l2_block.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/bootloader_state/l2_block.rs @@ -1,11 +1,15 @@ use std::cmp::Ordering; + use zksync_types::{MiniblockNumber, H256}; use zksync_utils::concat_and_hash; -use crate::interface::{L2Block, L2BlockEnv}; -use crate::vm_virtual_blocks::bootloader_state::snapshot::L2BlockSnapshot; -use crate::vm_virtual_blocks::bootloader_state::tx::BootloaderTx; -use crate::vm_virtual_blocks::utils::l2_blocks::l2_block_hash; +use crate::{ + interface::{L2Block, L2BlockEnv}, + vm_virtual_blocks::{ + bootloader_state::{snapshot::L2BlockSnapshot, 
tx::BootloaderTx}, + utils::l2_blocks::l2_block_hash, + }, +}; const EMPTY_TXS_ROLLING_HASH: H256 = H256::zero(); @@ -15,7 +19,7 @@ pub(crate) struct BootloaderL2Block { pub(crate) timestamp: u64, pub(crate) txs_rolling_hash: H256, // The rolling hash of all the transactions in the miniblock pub(crate) prev_block_hash: H256, - // Number of the first l2 block tx in l1 batch + // Number of the first L2 block tx in L1 batch pub(crate) first_tx_index: usize, pub(crate) max_virtual_blocks_to_create: u32, pub(super) txs: Vec, diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/bootloader_state/snapshot.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/bootloader_state/snapshot.rs index e417a3b9ee6..2c599092869 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/bootloader_state/snapshot.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/bootloader_state/snapshot.rs @@ -4,9 +4,9 @@ use zksync_types::H256; pub(crate) struct BootloaderStateSnapshot { /// ID of the next transaction to be executed. pub(crate) tx_to_execute: usize, - /// Stored l2 blocks in bootloader memory + /// Stored L2 blocks in bootloader memory pub(crate) l2_blocks_len: usize, - /// Snapshot of the last l2 block. Only this block could be changed during the rollback + /// Snapshot of the last L2 block. Only this block could be changed during the rollback pub(crate) last_l2_block: L2BlockSnapshot, /// The number of 32-byte words spent on the already included compressed bytecodes. pub(crate) compressed_bytecodes_encoding: usize, @@ -18,6 +18,6 @@ pub(crate) struct BootloaderStateSnapshot { pub(crate) struct L2BlockSnapshot { /// The rolling hash of all the transactions in the miniblock pub(crate) txs_rolling_hash: H256, - /// The number of transactions in the last l2 block + /// The number of transactions in the last L2 block pub(crate) txs_len: usize, } diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/bootloader_state/state.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/bootloader_state/state.rs index 2d67121e89b..685b1821fd5 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/bootloader_state/state.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/bootloader_state/state.rs @@ -1,19 +1,22 @@ -use crate::vm_virtual_blocks::bootloader_state::{ - l2_block::BootloaderL2Block, - snapshot::BootloaderStateSnapshot, - utils::{apply_l2_block, apply_tx_to_memory}, -}; use std::cmp::Ordering; + use zksync_types::{L2ChainId, U256}; use zksync_utils::bytecode::CompressedBytecodeInfo; -use crate::interface::{BootloaderMemory, L2BlockEnv, TxExecutionMode}; -use crate::vm_virtual_blocks::{ - constants::TX_DESCRIPTION_OFFSET, types::internals::TransactionData, - utils::l2_blocks::assert_next_block, -}; - use super::tx::BootloaderTx; +use crate::{ + interface::{BootloaderMemory, L2BlockEnv, TxExecutionMode}, + vm_virtual_blocks::{ + bootloader_state::{ + l2_block::BootloaderL2Block, + snapshot::BootloaderStateSnapshot, + utils::{apply_l2_block, apply_tx_to_memory}, + }, + constants::TX_DESCRIPTION_OFFSET, + types::internals::TransactionData, + utils::l2_blocks::assert_next_block, + }, +}; /// Intermediate bootloader-related VM state. 
/// /// Required to process transactions one by one (since we intercept the VM execution to execute diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/bootloader_state/tx.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/bootloader_state/tx.rs index 73825312b5e..067d62a9fdd 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/bootloader_state/tx.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/bootloader_state/tx.rs @@ -1,7 +1,8 @@ -use crate::vm_virtual_blocks::types::internals::TransactionData; use zksync_types::{L2ChainId, H256, U256}; use zksync_utils::bytecode::CompressedBytecodeInfo; +use crate::vm_virtual_blocks::types::internals::TransactionData; + /// Information about tx necessary for execution in bootloader. #[derive(Debug, Clone)] pub(super) struct BootloaderTx { @@ -14,7 +15,7 @@ pub(super) struct BootloaderTx { pub(super) refund: u32, /// Gas overhead pub(super) gas_overhead: u32, - /// Gas Limit for this transaction. It can be different from the gaslimit inside the transaction + /// Gas Limit for this transaction. It can be different from the gas limit inside the transaction pub(super) trusted_gas_limit: U256, /// Offset of the tx in bootloader memory pub(super) offset: usize, diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/bootloader_state/utils.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/bootloader_state/utils.rs index ffe0be2f03b..9a682da3a5a 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/bootloader_state/utils.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/bootloader_state/utils.rs @@ -1,16 +1,19 @@ use zksync_types::U256; -use zksync_utils::bytecode::CompressedBytecodeInfo; -use zksync_utils::{bytes_to_be_words, h256_to_u256}; - -use crate::interface::{BootloaderMemory, TxExecutionMode}; -use crate::vm_virtual_blocks::bootloader_state::l2_block::BootloaderL2Block; -use crate::vm_virtual_blocks::constants::{ - BOOTLOADER_TX_DESCRIPTION_OFFSET, BOOTLOADER_TX_DESCRIPTION_SIZE, COMPRESSED_BYTECODES_OFFSET, - OPERATOR_REFUNDS_OFFSET, TX_DESCRIPTION_OFFSET, TX_OPERATOR_L2_BLOCK_INFO_OFFSET, - TX_OPERATOR_SLOTS_PER_L2_BLOCK_INFO, TX_OVERHEAD_OFFSET, TX_TRUSTED_GAS_LIMIT_OFFSET, -}; +use zksync_utils::{bytecode::CompressedBytecodeInfo, bytes_to_be_words, h256_to_u256}; use super::tx::BootloaderTx; +use crate::{ + interface::{BootloaderMemory, TxExecutionMode}, + vm_virtual_blocks::{ + bootloader_state::l2_block::BootloaderL2Block, + constants::{ + BOOTLOADER_TX_DESCRIPTION_OFFSET, BOOTLOADER_TX_DESCRIPTION_SIZE, + COMPRESSED_BYTECODES_OFFSET, OPERATOR_REFUNDS_OFFSET, TX_DESCRIPTION_OFFSET, + TX_OPERATOR_L2_BLOCK_INFO_OFFSET, TX_OPERATOR_SLOTS_PER_L2_BLOCK_INFO, + TX_OVERHEAD_OFFSET, TX_TRUSTED_GAS_LIMIT_OFFSET, + }, + }, +}; pub(super) fn get_memory_for_compressed_bytecodes( compressed_bytecodes: &[CompressedBytecodeInfo], @@ -69,7 +72,7 @@ pub(super) fn apply_tx_to_memory( }; apply_l2_block(memory, &bootloader_l2_block, tx_index); - // Note, +1 is moving for poitner + // Note, +1 is moving for pointer let compressed_bytecodes_offset = COMPRESSED_BYTECODES_OFFSET + 1 + compressed_bytecodes_size; let encoded_compressed_bytecodes = @@ -89,8 +92,8 @@ pub(crate) fn apply_l2_block( bootloader_l2_block: &BootloaderL2Block, txs_index: usize, ) { - // Since L2 block infos start from the TX_OPERATOR_L2_BLOCK_INFO_OFFSET and each - // L2 block info takes TX_OPERATOR_SLOTS_PER_L2_BLOCK_INFO slots, the position where the L2 block info + // Since L2 block information start from the `TX_OPERATOR_L2_BLOCK_INFO_OFFSET` 
and each + // L2 block info takes `TX_OPERATOR_SLOTS_PER_L2_BLOCK_INFO` slots, the position where the L2 block info // for this transaction needs to be written is: let block_position = diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/constants.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/constants.rs index ed462581cb7..0e0eab85877 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/constants.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/constants.rs @@ -1,16 +1,31 @@ use zk_evm_1_3_3::aux_structures::MemoryPage; - -use zksync_system_constants::{ - L1_GAS_PER_PUBDATA_BYTE, MAX_L2_TX_GAS_LIMIT, MAX_NEW_FACTORY_DEPS, MAX_TXS_IN_BLOCK, - USED_BOOTLOADER_MEMORY_WORDS, -}; - pub use zk_evm_1_3_3::zkevm_opcode_defs::system_params::{ ERGS_PER_CIRCUIT, INITIAL_STORAGE_WRITE_PUBDATA_BYTES, MAX_PUBDATA_PER_BLOCK, }; +use zksync_system_constants::{L1_GAS_PER_PUBDATA_BYTE, MAX_L2_TX_GAS_LIMIT, MAX_NEW_FACTORY_DEPS}; use crate::vm_virtual_blocks::old_vm::utils::heap_page_from_base; +/// The size of the bootloader memory in bytes which is used by the protocol. +/// While the maximal possible size is a lot higher, we restrict ourselves to a certain limit to reduce +/// the requirements on RAM. +pub(crate) const USED_BOOTLOADER_MEMORY_BYTES: usize = 1 << 24; +pub(crate) const USED_BOOTLOADER_MEMORY_WORDS: usize = USED_BOOTLOADER_MEMORY_BYTES / 32; + +// This the number of pubdata such that it should be always possible to publish +// from a single transaction. Note, that these pubdata bytes include only bytes that are +// to be published inside the body of transaction (i.e. excluding of factory deps). +pub(crate) const GUARANTEED_PUBDATA_PER_L1_BATCH: u64 = 4000; + +// The users should always be able to provide `MAX_GAS_PER_PUBDATA_BYTE` gas per pubdata in their +// transactions so that they are able to send at least `GUARANTEED_PUBDATA_PER_L1_BATCH` bytes per +// transaction. +pub(crate) const MAX_GAS_PER_PUBDATA_BYTE: u64 = + MAX_L2_TX_GAS_LIMIT / GUARANTEED_PUBDATA_PER_L1_BATCH; + +// The maximal number of transactions in a single batch +pub(crate) const MAX_TXS_IN_BLOCK: usize = 1024; + /// Max cycles for a single transaction. pub const MAX_CYCLES_FOR_TX: u32 = u32::MAX; @@ -54,7 +69,7 @@ pub(crate) const BOOTLOADER_TX_DESCRIPTION_OFFSET: usize = COMPRESSED_BYTECODES_OFFSET + COMPRESSED_BYTECODES_SLOTS; /// The size of the bootloader memory dedicated to the encodings of transactions -pub const BOOTLOADER_TX_ENCODING_SPACE: u32 = +pub(crate) const BOOTLOADER_TX_ENCODING_SPACE: u32 = (USED_BOOTLOADER_MEMORY_WORDS - TX_DESCRIPTION_OFFSET - MAX_TXS_IN_BLOCK) as u32; // Size of the bootloader tx description in words @@ -75,10 +90,10 @@ pub(crate) const BLOCK_OVERHEAD_L1_GAS: u32 = 1000000; pub const BLOCK_OVERHEAD_PUBDATA: u32 = BLOCK_OVERHEAD_L1_GAS / L1_GAS_PER_PUBDATA_BYTE; /// VM Hooks are used for communication between bootloader and tracers. -/// The 'type'/'opcode' is put into VM_HOOK_POSITION slot, +/// The 'type' / 'opcode' is put into VM_HOOK_POSITION slot, /// and VM_HOOKS_PARAMS_COUNT parameters (each 32 bytes) are put in the slots before. 
/// So the layout looks like this: -/// [param 0][param 1][vmhook opcode] +/// `[param 0][param 1][vmhook opcode]` pub const VM_HOOK_POSITION: u32 = RESULT_SUCCESS_FIRST_SLOT - 1; pub const VM_HOOK_PARAMS_COUNT: u32 = 2; pub const VM_HOOK_PARAMS_START_POSITION: u32 = VM_HOOK_POSITION - VM_HOOK_PARAMS_COUNT; diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/implementation/bytecode.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/implementation/bytecode.rs index 2ae53a48ef3..570581740ef 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/implementation/bytecode.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/implementation/bytecode.rs @@ -1,13 +1,12 @@ use itertools::Itertools; - -use crate::interface::VmInterface; -use crate::HistoryMode; use zksync_state::{StoragePtr, WriteStorage}; use zksync_types::U256; -use zksync_utils::bytecode::{compress_bytecode, hash_bytecode, CompressedBytecodeInfo}; -use zksync_utils::bytes_to_be_words; +use zksync_utils::{ + bytecode::{compress_bytecode, hash_bytecode, CompressedBytecodeInfo}, + bytes_to_be_words, +}; -use crate::vm_virtual_blocks::Vm; +use crate::{interface::VmInterface, vm_virtual_blocks::Vm, HistoryMode}; impl Vm { /// Checks the last transaction has successfully published compressed bytecodes and returns `true` if there is at least one is still unknown. diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/implementation/execution.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/implementation/execution.rs index ac95312019d..2938280d266 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/implementation/execution.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/implementation/execution.rs @@ -1,16 +1,22 @@ -use crate::interface::tracer::{TracerExecutionStopReason, VmExecutionStopReason}; -use crate::interface::{VmExecutionMode, VmExecutionResultAndLogs}; -use crate::HistoryMode; use zk_evm_1_3_3::aux_structures::Timestamp; use zksync_state::WriteStorage; -use crate::vm_virtual_blocks::old_vm::utils::{vm_may_have_ended_inner, VmExecutionResult}; -use crate::vm_virtual_blocks::tracers::dispatcher::TracerDispatcher; -use crate::vm_virtual_blocks::tracers::{ - traits::{ExecutionEndTracer, VmTracer}, - DefaultExecutionTracer, RefundsTracer, +use crate::{ + interface::{ + tracer::{TracerExecutionStopReason, VmExecutionStopReason}, + VmExecutionMode, VmExecutionResultAndLogs, + }, + vm_virtual_blocks::{ + old_vm::utils::{vm_may_have_ended_inner, VmExecutionResult}, + tracers::{ + dispatcher::TracerDispatcher, + traits::{ExecutionEndTracer, VmTracer}, + DefaultExecutionTracer, RefundsTracer, + }, + vm::Vm, + }, + HistoryMode, }; -use crate::vm_virtual_blocks::vm::Vm; impl Vm { pub(crate) fn inspect_inner( diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/implementation/gas.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/implementation/gas.rs index 1f06ecb0827..0ca52d2b687 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/implementation/gas.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/implementation/gas.rs @@ -1,8 +1,9 @@ -use crate::HistoryMode; use zksync_state::WriteStorage; -use crate::vm_virtual_blocks::tracers::DefaultExecutionTracer; -use crate::vm_virtual_blocks::vm::Vm; +use crate::{ + vm_virtual_blocks::{tracers::DefaultExecutionTracer, vm::Vm}, + HistoryMode, +}; impl Vm { /// Returns the amount of gas remaining to the VM. 
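The `constants.rs` hunk above derives `MAX_GAS_PER_PUBDATA_BYTE` as `MAX_L2_TX_GAS_LIMIT / GUARANTEED_PUBDATA_PER_L1_BATCH`, and the earlier `fee.rs` hunk uses that cap to derive the base fee. A numeric sketch of the arithmetic; `MAX_L2_TX_GAS_LIMIT` is assumed to be 80M here (if the real constant differs, only the derived cap changes), and both gas prices are made up:

```rust
// Fee derivation sketch following the `fee.rs` and `constants.rs` hunks above.
const MAX_L2_TX_GAS_LIMIT: u64 = 80_000_000; // assumed value
const GUARANTEED_PUBDATA_PER_L1_BATCH: u64 = 4_000;
// With the assumption above this comes out to 20_000 gas per pubdata byte.
const MAX_GAS_PER_PUBDATA_BYTE: u64 = MAX_L2_TX_GAS_LIMIT / GUARANTEED_PUBDATA_PER_L1_BATCH;

fn ceil_div(a: u64, b: u64) -> u64 {
    (a + b - 1) / b
}

fn derive_base_fee_and_gas_per_pubdata_sketch(
    eth_price_per_pubdata_byte: u64,
    fair_l2_gas_price: u64,
) -> (u64, u64) {
    // Bump the base fee just high enough that a transaction paying it can
    // always publish pubdata within the MAX_GAS_PER_PUBDATA_BYTE cap.
    let base_fee =
        fair_l2_gas_price.max(ceil_div(eth_price_per_pubdata_byte, MAX_GAS_PER_PUBDATA_BYTE));
    let gas_per_pubdata = ceil_div(eth_price_per_pubdata_byte, base_fee);
    (base_fee, gas_per_pubdata)
}

fn main() {
    // E.g. pubdata costing 170 gwei per byte on L1, with a deliberately low
    // fair L2 gas price so the pubdata term dominates.
    let eth_price_per_pubdata_byte = 170_000_000_000;
    let fair_l2_gas_price = 1_000_000;
    let (base_fee, gas_per_pubdata) =
        derive_base_fee_and_gas_per_pubdata_sketch(eth_price_per_pubdata_byte, fair_l2_gas_price);
    // ceil(170e9 / 20_000) = 8_500_000 > 1_000_000, so the base fee is bumped,
    // and at that base fee one pubdata byte costs exactly the 20_000-gas cap.
    assert_eq!((base_fee, gas_per_pubdata), (8_500_000, 20_000));
}
```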
diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/implementation/logs.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/implementation/logs.rs index a32f3a16572..0d407efd041 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/implementation/logs.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/implementation/logs.rs @@ -1,14 +1,18 @@ use zk_evm_1_3_3::aux_structures::Timestamp; use zksync_state::WriteStorage; +use zksync_types::{ + l2_to_l1_log::{L2ToL1Log, UserL2ToL1Log}, + VmEvent, +}; -use crate::interface::types::outputs::VmExecutionLogs; -use crate::HistoryMode; -use zksync_types::l2_to_l1_log::{L2ToL1Log, UserL2ToL1Log}; -use zksync_types::VmEvent; - -use crate::vm_virtual_blocks::old_vm::events::merge_events; -use crate::vm_virtual_blocks::old_vm::utils::precompile_calls_count_after_timestamp; -use crate::vm_virtual_blocks::vm::Vm; +use crate::{ + interface::types::outputs::VmExecutionLogs, + vm_virtual_blocks::{ + old_vm::{events::merge_events, utils::precompile_calls_count_after_timestamp}, + vm::Vm, + }, + HistoryMode, +}; impl Vm { pub(crate) fn collect_execution_logs_after_timestamp( diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/implementation/snapshots.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/implementation/snapshots.rs index 1a8ad6fefd2..2b653333a5c 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/implementation/snapshots.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/implementation/snapshots.rs @@ -1,13 +1,12 @@ -use vise::{Buckets, EncodeLabelSet, EncodeLabelValue, Family, Histogram, Metrics}; - use std::time::Duration; -use crate::vm_latest::HistoryEnabled; +use vise::{Buckets, EncodeLabelSet, EncodeLabelValue, Family, Histogram, Metrics}; use zk_evm_1_3_3::aux_structures::Timestamp; use zksync_state::WriteStorage; -use crate::vm_virtual_blocks::{ - old_vm::oracles::OracleWithHistory, types::internals::VmSnapshot, vm::Vm, +use crate::{ + vm_latest::HistoryEnabled, + vm_virtual_blocks::{old_vm::oracles::OracleWithHistory, types::internals::VmSnapshot, vm::Vm}, }; #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, EncodeLabelSet, EncodeLabelValue)] @@ -36,8 +35,8 @@ impl Vm { pub(crate) fn make_snapshot_inner(&mut self) { self.snapshots.push(VmSnapshot { // Vm local state contains O(1) various parameters (registers/etc). - // The only "expensive" copying here is copying of the callstack. - // It will take O(callstack_depth) to copy it. + // The only "expensive" copying here is copying of the call stack. + // It will take `O(callstack_depth)` to copy it. // So it is generally recommended to get snapshots of the bootloader frame, // where the depth is 1. 
local_state: self.state.local_state.clone(), diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/implementation/statistics.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/implementation/statistics.rs index 14570f15453..1421a7b35f4 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/implementation/statistics.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/implementation/statistics.rs @@ -1,12 +1,12 @@ use zk_evm_1_3_3::aux_structures::Timestamp; use zksync_state::WriteStorage; - -use crate::interface::{VmExecutionStatistics, VmMemoryMetrics}; -use crate::HistoryMode; use zksync_types::U256; -use crate::vm_virtual_blocks::tracers::DefaultExecutionTracer; -use crate::vm_virtual_blocks::vm::Vm; +use crate::{ + interface::{VmExecutionStatistics, VmMemoryMetrics}, + vm_virtual_blocks::{tracers::DefaultExecutionTracer, vm::Vm}, + HistoryMode, +}; /// Module responsible for observing the VM behavior, i.e. calculating the statistics of the VM runs /// or reporting the VM memory usage. @@ -38,12 +38,13 @@ impl Vm { gas_used: gas_remaining_before - gas_remaining_after, computational_gas_used, total_log_queries: total_log_queries_count, - // This field will be populated by the RefundTracer + // This field will be populated by the `RefundTracer` pubdata_published: 0, + estimated_circuits_used: 0.0, } } - /// Returns the hashes the bytecodes that have been decommitted by the decomittment processor. + /// Returns the hashes the bytecodes that have been decommitted by the decommitment processor. pub(crate) fn get_used_contracts(&self) -> Vec { self.state .decommittment_processor diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/implementation/tx.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/implementation/tx.rs index bfeeb56e022..0f4705a633f 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/implementation/tx.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/implementation/tx.rs @@ -1,15 +1,17 @@ -use crate::vm_virtual_blocks::constants::BOOTLOADER_HEAP_PAGE; -use crate::vm_virtual_blocks::implementation::bytecode::{ - bytecode_to_factory_dep, compress_bytecodes, -}; -use crate::HistoryMode; use zk_evm_1_3_3::aux_structures::Timestamp; use zksync_state::WriteStorage; -use zksync_types::l1::is_l1_tx_type; -use zksync_types::Transaction; +use zksync_types::{l1::is_l1_tx_type, Transaction}; -use crate::vm_virtual_blocks::types::internals::TransactionData; -use crate::vm_virtual_blocks::vm::Vm; +use crate::{ + vm_virtual_blocks::{ + constants::BOOTLOADER_HEAP_PAGE, + implementation::bytecode::{bytecode_to_factory_dep, compress_bytecodes}, + types::internals::TransactionData, + utils::fee::get_batch_gas_per_pubdata, + vm::Vm, + }, + HistoryMode, +}; impl Vm { pub(crate) fn push_raw_transaction( @@ -37,8 +39,7 @@ impl Vm { .decommittment_processor .populate(codes_for_decommiter, timestamp); - let trusted_ergs_limit = - tx.trusted_ergs_limit(self.batch_env.block_gas_price_per_pubdata()); + let trusted_ergs_limit = tx.trusted_ergs_limit(get_batch_gas_per_pubdata(&self.batch_env)); let memory = self.bootloader_state.push_tx( tx, @@ -60,7 +61,7 @@ impl Vm { with_compression: bool, ) { let tx: TransactionData = tx.into(); - let block_gas_per_pubdata_byte = self.batch_env.block_gas_price_per_pubdata(); + let block_gas_per_pubdata_byte = get_batch_gas_per_pubdata(&self.batch_env); let overhead = tx.overhead_gas(block_gas_per_pubdata_byte as u32); self.push_raw_transaction(tx, overhead, 0, with_compression); } diff --git 
a/core/lib/multivm/src/versions/vm_virtual_blocks/mod.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/mod.rs index 3a7a96e729d..1500e7027b7 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/mod.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/mod.rs @@ -1,30 +1,24 @@ -pub use old_vm::{ - history_recorder::{HistoryDisabled, HistoryEnabled, HistoryMode}, - memory::SimpleMemory, - oracles::storage::StorageOracle, +pub use self::{ + bootloader_state::BootloaderState, + old_vm::{ + history_recorder::{HistoryDisabled, HistoryEnabled, HistoryMode}, + memory::SimpleMemory, + oracles::storage::StorageOracle, + }, + tracers::{ + dispatcher::TracerDispatcher, + traits::{ExecutionEndTracer, ExecutionProcessing, TracerPointer, VmTracer}, + }, + types::internals::ZkSyncVmState, + utils::transaction_encoding::TransactionVmExt, + vm::Vm, }; -pub use tracers::{ - dispatcher::TracerDispatcher, - traits::{ExecutionEndTracer, ExecutionProcessing, TracerPointer, VmTracer}, -}; - -pub use types::internals::ZkSyncVmState; -pub use utils::transaction_encoding::TransactionVmExt; - -pub use bootloader_state::BootloaderState; - -pub use vm::Vm; - mod bootloader_state; +pub mod constants; mod implementation; mod old_vm; pub(crate) mod tracers; mod types; -mod vm; - -pub mod constants; pub mod utils; - -// #[cfg(test)] -// mod tests; +mod vm; diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/old_vm/event_sink.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/old_vm/event_sink.rs index 49ec162fd5e..00a03ca0adb 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/old_vm/event_sink.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/old_vm/event_sink.rs @@ -1,8 +1,5 @@ -use crate::vm_virtual_blocks::old_vm::{ - history_recorder::{AppDataFrameManagerWithHistory, HistoryEnabled, HistoryMode}, - oracles::OracleWithHistory, -}; use std::collections::HashMap; + use zk_evm_1_3_3::{ abstractions::EventSink, aux_structures::{LogQuery, Timestamp}, @@ -12,6 +9,11 @@ use zk_evm_1_3_3::{ }, }; +use crate::vm_virtual_blocks::old_vm::{ + history_recorder::{AppDataFrameManagerWithHistory, HistoryEnabled, HistoryMode}, + oracles::OracleWithHistory, +}; + #[derive(Debug, Clone, PartialEq, Default)] pub struct InMemoryEventSink { frames_stack: AppDataFrameManagerWithHistory, H>, @@ -48,7 +50,7 @@ impl InMemoryEventSink { pub fn log_queries_after_timestamp(&self, from_timestamp: Timestamp) -> &[Box] { let events = self.frames_stack.forward().current_frame(); - // Select all of the last elements where e.timestamp >= from_timestamp. + // Select all of the last elements where `e.timestamp >= from_timestamp`. // Note, that using binary search here is dangerous, because the logs are not sorted by timestamp. 
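The `log_queries_after_timestamp` comment above (and its twin in the storage oracle further down) describes a suffix-selection trick: because logs are not sorted by timestamp, binary search is unsafe, but `rsplit` walking from the end yields, as its first subslice, exactly the trailing run of entries with `timestamp >= from_timestamp`. A self-contained illustration with toy event types:

```rust
// `rsplit` iterates subslices from the back, splitting on elements that match
// the predicate; the first subslice is therefore the maximal trailing run of
// events at least as new as `from_ts`.
struct Event {
    timestamp: u64,
}

fn events_after(events: &[Event], from_ts: u64) -> &[Event] {
    events.rsplit(|e| e.timestamp < from_ts).next().unwrap_or(&[])
}

fn main() {
    let events = [
        Event { timestamp: 1 },
        Event { timestamp: 5 },
        Event { timestamp: 3 }, // unsorted: binary search would misfire here
        Event { timestamp: 7 },
    ];
    // Only the trailing `[7]` qualifies; the earlier `5` is cut off by `3`.
    assert_eq!(events_after(&events, 5).len(), 1);
}
```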
events .rsplit(|e| e.timestamp < from_timestamp) diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/old_vm/history_recorder.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/old_vm/history_recorder.rs index a38ee177245..baed63c14b8 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/old_vm/history_recorder.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/old_vm/history_recorder.rs @@ -5,7 +5,6 @@ use zk_evm_1_3_3::{ vm_state::PrimitiveValue, zkevm_opcode_defs::{self}, }; - use zksync_state::{StoragePtr, WriteStorage}; use zksync_types::{StorageKey, U256}; use zksync_utils::{h256_to_u256, u256_to_h256}; @@ -13,14 +12,14 @@ use zksync_utils::{h256_to_u256, u256_to_h256}; pub(crate) type MemoryWithHistory = HistoryRecorder; pub(crate) type IntFrameManagerWithHistory = HistoryRecorder, H>; -// Within the same cycle, timestamps in range timestamp..timestamp+TIME_DELTA_PER_CYCLE-1 +// Within the same cycle, timestamps in range `timestamp..timestamp+TIME_DELTA_PER_CYCLE-1` // can be used. This can sometimes violate monotonicity of the timestamp within the // same cycle, so it should be normalized. #[inline] fn normalize_timestamp(timestamp: Timestamp) -> Timestamp { let timestamp = timestamp.0; - // Making sure it is divisible by TIME_DELTA_PER_CYCLE + // Making sure it is divisible by `TIME_DELTA_PER_CYCLE` Timestamp(timestamp - timestamp % zkevm_opcode_defs::TIME_DELTA_PER_CYCLE) } @@ -434,7 +433,7 @@ impl HistoryRecorder, H> { } #[derive(Debug, Clone, PartialEq)] -pub(crate) struct AppDataFrameManagerWithHistory { +pub struct AppDataFrameManagerWithHistory { forward: HistoryRecorder, H>, rollback: HistoryRecorder, H>, } @@ -767,11 +766,14 @@ impl HistoryRecorder, H> { #[cfg(test)] mod tests { - use crate::vm_virtual_blocks::old_vm::history_recorder::{HistoryRecorder, MemoryWrapper}; - use crate::vm_virtual_blocks::HistoryDisabled; use zk_evm_1_3_3::{aux_structures::Timestamp, vm_state::PrimitiveValue}; use zksync_types::U256; + use crate::vm_virtual_blocks::{ + old_vm::history_recorder::{HistoryRecorder, MemoryWrapper}, + HistoryDisabled, + }; + #[test] fn memory_equality() { let mut a: HistoryRecorder = Default::default(); diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/old_vm/memory.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/old_vm/memory.rs index f1a424c36ae..a48620db11c 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/old_vm/memory.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/old_vm/memory.rs @@ -1,16 +1,18 @@ -use zk_evm_1_3_3::abstractions::{Memory, MemoryType}; -use zk_evm_1_3_3::aux_structures::{MemoryPage, MemoryQuery, Timestamp}; -use zk_evm_1_3_3::vm_state::PrimitiveValue; -use zk_evm_1_3_3::zkevm_opcode_defs::FatPointer; +use zk_evm_1_3_3::{ + abstractions::{Memory, MemoryType}, + aux_structures::{MemoryPage, MemoryQuery, Timestamp}, + vm_state::PrimitiveValue, + zkevm_opcode_defs::FatPointer, +}; use zksync_types::U256; -use crate::vm_virtual_blocks::old_vm::history_recorder::{ - FramedStack, HistoryEnabled, HistoryMode, IntFrameManagerWithHistory, MemoryWithHistory, - MemoryWrapper, WithHistory, -}; -use crate::vm_virtual_blocks::old_vm::oracles::OracleWithHistory; -use crate::vm_virtual_blocks::old_vm::utils::{ - aux_heap_page_from_base, heap_page_from_base, stack_page_from_base, +use crate::vm_virtual_blocks::old_vm::{ + history_recorder::{ + FramedStack, HistoryEnabled, HistoryMode, IntFrameManagerWithHistory, MemoryWithHistory, + MemoryWrapper, WithHistory, + }, + oracles::OracleWithHistory, + 
utils::{aux_heap_page_from_base, heap_page_from_base, stack_page_from_base}, }; #[derive(Debug, Clone, PartialEq)] @@ -280,7 +282,7 @@ impl Memory for SimpleMemory { let returndata_page = returndata_fat_pointer.memory_page; for &page in current_observable_pages { - // If the page's number is greater than or equal to the base_page, + // If the page's number is greater than or equal to the `base_page`, // it means that it was created by the internal calls of this contract. // We need to add this check as the calldata pointer is also part of the // observable pages. @@ -297,7 +299,7 @@ impl Memory for SimpleMemory { } } -// It is expected that there is some intersection between [word_number*32..word_number*32+31] and [start, end] +// It is expected that there is some intersection between `[word_number*32..word_number*32+31]` and `[start, end]` fn extract_needed_bytes_from_word( word_value: Vec, word_number: usize, @@ -305,7 +307,7 @@ fn extract_needed_bytes_from_word( end: usize, ) -> Vec { let word_start = word_number * 32; - let word_end = word_start + 31; // Note, that at word_start + 32 a new word already starts + let word_end = word_start + 31; // Note, that at `word_start + 32` a new word already starts let intersection_left = std::cmp::max(word_start, start); let intersection_right = std::cmp::min(word_end, end); diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/old_vm/oracles/decommitter.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/old_vm/oracles/decommitter.rs index 050b244736f..f01394cebb5 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/old_vm/oracles/decommitter.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/old_vm/oracles/decommitter.rs @@ -1,25 +1,21 @@ -use std::collections::HashMap; -use std::fmt::Debug; +use std::{collections::HashMap, fmt::Debug}; -use crate::vm_virtual_blocks::old_vm::history_recorder::{ - HistoryEnabled, HistoryMode, HistoryRecorder, WithHistory, -}; - -use zk_evm_1_3_3::abstractions::MemoryType; -use zk_evm_1_3_3::aux_structures::Timestamp; use zk_evm_1_3_3::{ - abstractions::{DecommittmentProcessor, Memory}, - aux_structures::{DecommittmentQuery, MemoryIndex, MemoryLocation, MemoryPage, MemoryQuery}, + abstractions::{DecommittmentProcessor, Memory, MemoryType}, + aux_structures::{ + DecommittmentQuery, MemoryIndex, MemoryLocation, MemoryPage, MemoryQuery, Timestamp, + }, }; - use zksync_state::{ReadStorage, StoragePtr}; use zksync_types::U256; -use zksync_utils::bytecode::bytecode_len_in_words; -use zksync_utils::{bytes_to_be_words, u256_to_h256}; +use zksync_utils::{bytecode::bytecode_len_in_words, bytes_to_be_words, u256_to_h256}; use super::OracleWithHistory; +use crate::vm_virtual_blocks::old_vm::history_recorder::{ + HistoryEnabled, HistoryMode, HistoryRecorder, WithHistory, +}; -/// The main job of the DecommiterOracle is to implement the DecommittmentProcessor trait - that is +/// The main job of the DecommiterOracle is to implement the DecommitmentProcessor trait - that is /// used by the VM to 'load' bytecodes into memory. #[derive(Debug)] pub struct DecommitterOracle { @@ -70,7 +66,7 @@ impl DecommitterOracle { } } - /// Adds additional bytecodes. They will take precendent over the bytecodes from storage. + /// Adds additional bytecodes. They will take precedent over the bytecodes from storage. 
pub fn populate(&mut self, bytecodes: Vec<(U256, Vec)>, timestamp: Timestamp) { for (hash, bytecode) in bytecodes { self.known_bytecodes.insert(hash, bytecode, timestamp); @@ -180,7 +176,7 @@ impl DecommittmentProcess > { self.decommitment_requests.push((), partial_query.timestamp); // First - check if we didn't fetch this bytecode in the past. - // If we did - we can just return the page that we used before (as the memory is read only). + // If we did - we can just return the page that we used before (as the memory is readonly). if let Some(memory_page) = self .decommitted_code_hashes .inner() diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/old_vm/oracles/precompile.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/old_vm/oracles/precompile.rs index 11ddb26d03a..8fd77ef7f87 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/old_vm/oracles/precompile.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/old_vm/oracles/precompile.rs @@ -1,17 +1,14 @@ use zk_evm_1_3_3::{ - abstractions::Memory, - abstractions::PrecompileCyclesWitness, - abstractions::PrecompilesProcessor, + abstractions::{Memory, PrecompileCyclesWitness, PrecompilesProcessor}, aux_structures::{LogQuery, MemoryQuery, Timestamp}, precompiles::DefaultPrecompilesProcessor, }; +use super::OracleWithHistory; use crate::vm_virtual_blocks::old_vm::history_recorder::{ HistoryEnabled, HistoryMode, HistoryRecorder, }; -use super::OracleWithHistory; - /// Wrap of DefaultPrecompilesProcessor that store queue /// of timestamp when precompiles are called to be executed. /// Number of precompiles per block is strictly limited, diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/old_vm/oracles/storage.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/old_vm/oracles/storage.rs index 70186b78b32..2555f57fc7e 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/old_vm/oracles/storage.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/old_vm/oracles/storage.rs @@ -1,26 +1,22 @@ use std::collections::HashMap; -use crate::vm_virtual_blocks::old_vm::history_recorder::{ - AppDataFrameManagerWithHistory, HashMapHistoryEvent, HistoryEnabled, HistoryMode, - HistoryRecorder, StorageWrapper, WithHistory, -}; - -use zk_evm_1_3_3::abstractions::RefundedAmounts; -use zk_evm_1_3_3::zkevm_opcode_defs::system_params::INITIAL_STORAGE_WRITE_PUBDATA_BYTES; use zk_evm_1_3_3::{ - abstractions::{RefundType, Storage as VmStorageOracle}, + abstractions::{RefundType, RefundedAmounts, Storage as VmStorageOracle}, aux_structures::{LogQuery, Timestamp}, + zkevm_opcode_defs::system_params::INITIAL_STORAGE_WRITE_PUBDATA_BYTES, }; - use zksync_state::{StoragePtr, WriteStorage}; -use zksync_types::utils::storage_key_for_eth_balance; use zksync_types::{ - AccountTreeId, Address, StorageKey, StorageLogQuery, StorageLogQueryType, BOOTLOADER_ADDRESS, - U256, + utils::storage_key_for_eth_balance, AccountTreeId, Address, StorageKey, StorageLogQuery, + StorageLogQueryType, BOOTLOADER_ADDRESS, U256, }; use zksync_utils::u256_to_h256; use super::OracleWithHistory; +use crate::vm_virtual_blocks::old_vm::history_recorder::{ + AppDataFrameManagerWithHistory, HashMapHistoryEvent, HistoryEnabled, HistoryMode, + HistoryRecorder, StorageWrapper, WithHistory, +}; // While the storage does not support different shards, it was decided to write the // code of the StorageOracle with the shard parameters in mind. 
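The repeat-decommit branch in the decommitter hunk above ("if we did - we can just return the page that we used before") is plain memoization, justified by pages being effectively read-only once filled. An illustrative-only toy (real keys are `U256` hashes and real values are memory pages):

```rust
// Toy memoization in the spirit of `DecommitterOracle`: the first decommit of
// a bytecode hash allocates a page; later decommits of the same hash return
// the same page, since filled pages are never mutated.
use std::collections::HashMap;

#[derive(Default)]
struct DecommitCache {
    decommitted: HashMap<u64, u32>, // bytecode hash -> memory page (toy types)
    next_page: u32,
}

impl DecommitCache {
    fn decommit(&mut self, hash: u64) -> u32 {
        if let Some(&page) = self.decommitted.get(&hash) {
            return page; // cache hit: reuse the previously filled page
        }
        let page = self.next_page;
        self.next_page += 1;
        self.decommitted.insert(hash, page);
        page
    }
}

fn main() {
    let mut cache = DecommitCache::default();
    let first = cache.decommit(0xdead);
    assert_eq!(cache.decommit(0xdead), first); // the second fetch is free
}
```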
@@ -171,7 +167,7 @@ impl StorageOracle { ) -> &[Box] { let logs = self.frames_stack.forward().current_frame(); - // Select all of the last elements where l.log_query.timestamp >= from_timestamp. + // Select all of the last elements where `l.log_query.timestamp >= from_timestamp`. // Note, that using binary search here is dangerous, because the logs are not sorted by timestamp. logs.rsplit(|l| l.log_query.timestamp < from_timestamp) .next() @@ -212,13 +208,14 @@ impl StorageOracle { } impl VmStorageOracle for StorageOracle { - // Perform a storage read/write access by taking an partially filled query + // Perform a storage read / write access by taking an partially filled query // and returning filled query and cold/warm marker for pricing purposes fn execute_partial_query( &mut self, _monotonic_cycle_counter: u32, query: LogQuery, ) -> LogQuery { + // ``` // tracing::trace!( // "execute partial query cyc {:?} addr {:?} key {:?}, rw {:?}, wr {:?}, tx {:?}", // _monotonic_cycle_counter, @@ -228,6 +225,7 @@ impl VmStorageOracle for StorageOracle { // query.written_value, // query.tx_number_in_block // ); + // ``` assert!(!query.rollback); if query.rw_flag { // The number of bytes that have been compensated by the user to perform this write @@ -307,7 +305,7 @@ impl VmStorageOracle for StorageOracle { ); // Additional validation that the current value was correct - // Unwrap is safe because the return value from write_inner is the previous value in this leaf. + // Unwrap is safe because the return value from `write_inner` is the previous value in this leaf. // It is impossible to set leaf value to `None` assert_eq!(current_value, written_value); } @@ -321,8 +319,8 @@ impl VmStorageOracle for StorageOracle { /// Returns the number of bytes needed to publish a slot. // Since we need to publish the state diffs onchain, for each of the updated storage slot -// we basically need to publish the following pair: (). -// While new_value is always 32 bytes long, for key we use the following optimization: +// we basically need to publish the following pair: `()`. +// While `new_value` is always 32 bytes long, for key we use the following optimization: // - The first time we publish it, we use 32 bytes. // Then, we remember a 8-byte id for this slot and assign it to it. We call this initial write. // - The second time we publish it, we will use this 8-byte instead of the 32 bytes of the entire key. 
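The initial-vs-repeated write scheme spelled out above has concrete byte costs. A back-of-the-envelope sketch (the 32-byte key/value and 8-byte id figures come straight from the comment; the function and constant names are invented for illustration):

```rust
// Pubdata bytes needed to publish one updated storage slot, per the scheme
// described in the comment: an initial write publishes the full key, a
// repeated write only the 8-byte id previously assigned to the slot.
const KEY_BYTES: u64 = 32;
const SLOT_ID_BYTES: u64 = 8;
const VALUE_BYTES: u64 = 32;

fn pubdata_bytes_for_slot(is_initial_write: bool) -> u64 {
    if is_initial_write {
        KEY_BYTES + VALUE_BYTES // 32 + 32 = 64 bytes
    } else {
        SLOT_ID_BYTES + VALUE_BYTES // 8 + 32 = 40 bytes
    }
}

fn main() {
    assert_eq!(pubdata_bytes_for_slot(true), 64);
    assert_eq!(pubdata_bytes_for_slot(false), 40);
}
```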
diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/old_vm/utils.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/old_vm/utils.rs index 65497778495..5be62e38437 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/old_vm/utils.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/old_vm/utils.rs @@ -1,22 +1,19 @@ -use crate::vm_virtual_blocks::old_vm::memory::SimpleMemory; - -use crate::vm_virtual_blocks::types::internals::ZkSyncVmState; -use crate::vm_virtual_blocks::HistoryMode; - -use zk_evm_1_3_3::zkevm_opcode_defs::decoding::{ - AllowedPcOrImm, EncodingModeProduction, VmEncodingMode, -}; -use zk_evm_1_3_3::zkevm_opcode_defs::RET_IMPLICIT_RETURNDATA_PARAMS_REGISTER; use zk_evm_1_3_3::{ aux_structures::{MemoryPage, Timestamp}, vm_state::PrimitiveValue, - zkevm_opcode_defs::FatPointer, + zkevm_opcode_defs::{ + decoding::{AllowedPcOrImm, EncodingModeProduction, VmEncodingMode}, + FatPointer, RET_IMPLICIT_RETURNDATA_PARAMS_REGISTER, + }, }; use zksync_state::WriteStorage; use zksync_system_constants::L1_GAS_PER_PUBDATA_BYTE; - use zksync_types::{Address, U256}; +use crate::vm_virtual_blocks::{ + old_vm::memory::SimpleMemory, types::internals::ZkSyncVmState, HistoryMode, +}; + #[derive(Debug, Clone)] pub(crate) enum VmExecutionResult { Ok(Vec), @@ -125,7 +122,7 @@ pub(crate) fn vm_may_have_ended_inner( } (false, _) => None, (true, l) if l == outer_eh_location => { - // check r1,r2,r3 + // check `r1,r2,r3` if vm.local_state.flags.overflow_or_less_than_flag { Some(VmExecutionResult::Panic) } else { diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/tests/require_eip712.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/tests/require_eip712.rs index 82c1a052792..988841e90ce 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/tests/require_eip712.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/tests/require_eip712.rs @@ -107,7 +107,7 @@ async fn test_require_eip712() { vm.get_eth_balance(beneficiary.address), U256::from(888000088) ); - // Make sure that the tokens were transfered from the AA account. + // Make sure that the tokens were transferred from the AA account. assert_eq!( private_account_balance, vm.get_eth_balance(private_account.address) diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/tests/tester/inner_state.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/tests/tester/inner_state.rs index 8105ca244d3..83ad0b9044b 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/tests/tester/inner_state.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/tests/tester/inner_state.rs @@ -36,7 +36,7 @@ impl PartialEq for ModifiedKeysMap { #[derive(Clone, PartialEq, Debug)] pub(crate) struct DecommitterTestInnerState { - /// There is no way to "trully" compare the storage pointer, + /// There is no way to "truly" compare the storage pointer, /// so we just compare the modified keys. This is reasonable enough. pub(crate) modified_storage_keys: ModifiedKeysMap, pub(crate) known_bytecodes: HistoryRecorder>, H>, @@ -45,7 +45,7 @@ pub(crate) struct DecommitterTestInnerState { #[derive(Clone, PartialEq, Debug)] pub(crate) struct StorageOracleInnerState { - /// There is no way to "trully" compare the storage pointer, + /// There is no way to "truly" compare the storage pointer, /// so we just compare the modified keys. This is reasonable enough. 
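The doc comments in this test hunk justify a convention: since two storage pointers cannot be "truly" compared, test-state equality is defined over the maps of modified keys. A minimal rendition with toy key/value types:

```rust
// Two inner states are considered equal when they performed the same storage
// modifications, regardless of which pointer instance backed them.
use std::collections::HashMap;

#[derive(PartialEq, Debug)]
struct ModifiedKeysMap(HashMap<u32, u64>); // toy stand-ins for the real key/value types

fn main() {
    let a = ModifiedKeysMap(HashMap::from([(1, 10), (2, 20)]));
    let b = ModifiedKeysMap(HashMap::from([(2, 20), (1, 10)]));
    assert_eq!(a, b); // same modifications => equal, pointer identity aside
}
```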
pub(crate) modified_storage_keys: ModifiedKeysMap, diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/tests/utils.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/tests/utils.rs index d418a6f32e0..ca04d2fedf5 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/tests/utils.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/tests/utils.rs @@ -61,8 +61,8 @@ pub(crate) fn read_test_contract() -> Vec { pub(crate) fn get_bootloader(test: &str) -> SystemContractCode { let bootloader_code = read_zbin_bytecode(format!( - "etc/system-contracts/bootloader/tests/artifacts/{}.yul/{}.yul.zbin", - test, test + "contracts/system-contracts/bootloader/tests/artifacts/{}.yul.zbin", + test )); let bootloader_hash = hash_bytecode(&bootloader_code); diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/tracers/default_tracers.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/tracers/default_tracers.rs index f394ab5f752..f6007214494 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/tracers/default_tracers.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/tracers/default_tracers.rs @@ -1,33 +1,37 @@ -use std::fmt::{Debug, Formatter}; -use std::marker::PhantomData; +use std::{ + fmt::{Debug, Formatter}, + marker::PhantomData, +}; -use crate::interface::dyn_tracers::vm_1_3_3::DynTracer; -use crate::interface::tracer::VmExecutionStopReason; -use crate::interface::VmExecutionMode; -use zk_evm_1_3_3::witness_trace::DummyTracer; -use zk_evm_1_3_3::zkevm_opcode_defs::{Opcode, RetOpcode}; use zk_evm_1_3_3::{ tracing::{ AfterDecodingData, AfterExecutionData, BeforeExecutionData, Tracer, VmLocalStateData, }, vm_state::VmLocalState, + witness_trace::DummyTracer, + zkevm_opcode_defs::{Opcode, RetOpcode}, }; use zksync_state::{StoragePtr, WriteStorage}; use zksync_types::Timestamp; -use crate::vm_virtual_blocks::bootloader_state::utils::apply_l2_block; -use crate::vm_virtual_blocks::bootloader_state::BootloaderState; -use crate::vm_virtual_blocks::constants::BOOTLOADER_HEAP_PAGE; -use crate::vm_virtual_blocks::old_vm::history_recorder::HistoryMode; -use crate::vm_virtual_blocks::old_vm::memory::SimpleMemory; -use crate::vm_virtual_blocks::tracers::dispatcher::TracerDispatcher; -use crate::vm_virtual_blocks::tracers::traits::{ExecutionEndTracer, ExecutionProcessing}; -use crate::vm_virtual_blocks::tracers::utils::{ - computational_gas_price, gas_spent_on_bytecodes_and_long_messages_this_opcode, - print_debug_if_needed, VmHook, +use crate::{ + interface::{dyn_tracers::vm_1_3_3::DynTracer, tracer::VmExecutionStopReason, VmExecutionMode}, + vm_virtual_blocks::{ + bootloader_state::{utils::apply_l2_block, BootloaderState}, + constants::BOOTLOADER_HEAP_PAGE, + old_vm::{history_recorder::HistoryMode, memory::SimpleMemory}, + tracers::{ + dispatcher::TracerDispatcher, + traits::{ExecutionEndTracer, ExecutionProcessing}, + utils::{ + computational_gas_price, gas_spent_on_bytecodes_and_long_messages_this_opcode, + print_debug_if_needed, VmHook, + }, + RefundsTracer, ResultTracer, + }, + types::internals::ZkSyncVmState, + }, }; -use crate::vm_virtual_blocks::tracers::{RefundsTracer, ResultTracer}; -use crate::vm_virtual_blocks::types::internals::ZkSyncVmState; /// Default tracer for the VM. It manages the other tracers execution and stop the vm when needed. 
pub(crate) struct DefaultExecutionTracer { @@ -267,7 +271,7 @@ impl DefaultExecutionTracer { } fn current_frame_is_bootloader(local_state: &VmLocalState) -> bool { - // The current frame is bootloader if the callstack depth is 1. + // The current frame is bootloader if the call stack depth is 1. // Some of the near calls inside the bootloader can be out of gas, which is totally normal behavior // and it shouldn't result in `is_bootloader_out_of_gas` becoming true. local_state.callstack.inner.len() == 1 diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/tracers/dispatcher.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/tracers/dispatcher.rs index 7eb89461eab..b1b5ef418ee 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/tracers/dispatcher.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/tracers/dispatcher.rs @@ -1,17 +1,18 @@ -use crate::interface::dyn_tracers::vm_1_3_3::DynTracer; -use crate::interface::tracer::VmExecutionStopReason; -use crate::interface::VmExecutionResultAndLogs; -use crate::vm_virtual_blocks::TracerPointer; -use crate::vm_virtual_blocks::{ - BootloaderState, ExecutionEndTracer, ExecutionProcessing, HistoryMode, SimpleMemory, VmTracer, - ZkSyncVmState, -}; - use zk_evm_1_3_3::tracing::{ AfterDecodingData, AfterExecutionData, BeforeExecutionData, VmLocalStateData, }; use zksync_state::{StoragePtr, WriteStorage}; +use crate::{ + interface::{ + dyn_tracers::vm_1_3_3::DynTracer, tracer::VmExecutionStopReason, VmExecutionResultAndLogs, + }, + vm_virtual_blocks::{ + BootloaderState, ExecutionEndTracer, ExecutionProcessing, HistoryMode, SimpleMemory, + TracerPointer, VmTracer, ZkSyncVmState, + }, +}; + impl From> for TracerDispatcher { fn from(value: TracerPointer) -> Self { Self { diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/tracers/refunds.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/tracers/refunds.rs index 6496e13172a..106a431edba 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/tracers/refunds.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/tracers/refunds.rs @@ -1,9 +1,6 @@ -use vise::{Buckets, EncodeLabelSet, EncodeLabelValue, Family, Histogram, Metrics}; - use std::collections::HashMap; -use crate::interface::dyn_tracers::vm_1_3_3::DynTracer; -use crate::interface::{L1BatchEnv, Refunds, VmExecutionResultAndLogs}; +use vise::{Buckets, EncodeLabelSet, EncodeLabelValue, Family, Histogram, Metrics}; use zk_evm_1_3_3::{ aux_structures::Timestamp, tracing::{BeforeExecutionData, VmLocalStateData}, @@ -17,23 +14,27 @@ use zksync_types::{ zkevm_test_harness::witness::sort_storage_access::sort_storage_access_queries, L1BatchNumber, StorageKey, U256, }; -use zksync_utils::bytecode::bytecode_len_in_bytes; -use zksync_utils::{ceil_div_u256, u256_to_h256}; - -use crate::vm_virtual_blocks::bootloader_state::BootloaderState; -use crate::vm_virtual_blocks::constants::{ - BOOTLOADER_HEAP_PAGE, OPERATOR_REFUNDS_OFFSET, TX_GAS_LIMIT_OFFSET, -}; -use crate::vm_virtual_blocks::old_vm::{ - events::merge_events, history_recorder::HistoryMode, memory::SimpleMemory, - oracles::storage::storage_key_of_log, utils::eth_price_per_pubdata_byte, -}; -use crate::vm_virtual_blocks::tracers::utils::gas_spent_on_bytecodes_and_long_messages_this_opcode; -use crate::vm_virtual_blocks::tracers::{ - traits::{ExecutionEndTracer, ExecutionProcessing, VmTracer}, - utils::{get_vm_hook_params, VmHook}, +use zksync_utils::{bytecode::bytecode_len_in_bytes, ceil_div_u256, u256_to_h256}; + +use crate::{ + 
interface::{dyn_tracers::vm_1_3_3::DynTracer, L1BatchEnv, Refunds, VmExecutionResultAndLogs}, + vm_virtual_blocks::{ + bootloader_state::BootloaderState, + constants::{BOOTLOADER_HEAP_PAGE, OPERATOR_REFUNDS_OFFSET, TX_GAS_LIMIT_OFFSET}, + old_vm::{ + events::merge_events, history_recorder::HistoryMode, memory::SimpleMemory, + oracles::storage::storage_key_of_log, utils::eth_price_per_pubdata_byte, + }, + tracers::{ + traits::{ExecutionEndTracer, ExecutionProcessing, VmTracer}, + utils::{ + gas_spent_on_bytecodes_and_long_messages_this_opcode, get_vm_hook_params, VmHook, + }, + }, + types::internals::ZkSyncVmState, + utils::fee::get_batch_base_fee, + }, }; -use crate::vm_virtual_blocks::types::internals::ZkSyncVmState; /// Tracer responsible for collecting information about refunds. #[derive(Debug, Clone)] @@ -109,13 +110,14 @@ impl RefundsTracer { }); // For now, bootloader charges only for base fee. - let effective_gas_price = self.l1_batch.base_fee(); + let effective_gas_price = get_batch_base_fee(&self.l1_batch); let bootloader_eth_price_per_pubdata_byte = U256::from(effective_gas_price) * U256::from(current_ergs_per_pubdata_byte); - let fair_eth_price_per_pubdata_byte = - U256::from(eth_price_per_pubdata_byte(self.l1_batch.l1_gas_price)); + let fair_eth_price_per_pubdata_byte = U256::from(eth_price_per_pubdata_byte( + self.l1_batch.fee_input.l1_gas_price(), + )); // For now, L1 originated transactions are allowed to pay less than fair fee per pubdata, // so we should take it into account. @@ -125,7 +127,7 @@ impl RefundsTracer { ); let fair_fee_eth = U256::from(gas_spent_on_computation) - * U256::from(self.l1_batch.fair_l2_gas_price) + * U256::from(self.l1_batch.fee_input.fair_l2_gas_price()) + U256::from(pubdata_published) * eth_price_per_pubdata_byte_for_calculation; let pre_paid_eth = U256::from(tx_gas_limit) * U256::from(effective_gas_price); let refund_eth = pre_paid_eth.checked_sub(fair_fee_eth).unwrap_or_else(|| { @@ -208,8 +210,8 @@ impl ExecutionProcessing for RefundsTrace #[vise::register] static METRICS: vise::Global = vise::Global::new(); - // This means that the bootloader has informed the system (usually via VMHooks) - that some gas - // should be refunded back (see askOperatorForRefund in bootloader.yul for details). + // This means that the bootloader has informed the system (usually via `VMHooks`) - that some gas + // should be refunded back (see `askOperatorForRefund` in `bootloader.yul` for details). 
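Condensing the refund arithmetic visible in this `refunds.rs` hunk into a sketch: `u128` stands in for the real `U256` math, and the names mirror the hunk, but this is not the production code.

```rust
// refund = pre-paid fee minus the "fair" fee, floored at zero:
//   fair_fee = computation_gas * fair_l2_gas_price
//            + pubdata_published * eth_price_per_pubdata_byte
//   pre_paid = tx_gas_limit * effective_gas_price (the batch base fee)
fn refund_wei(
    tx_gas_limit: u128,
    effective_gas_price: u128,
    gas_spent_on_computation: u128,
    fair_l2_gas_price: u128,
    pubdata_published: u128,
    eth_price_per_pubdata_byte: u128,
) -> u128 {
    let fair_fee = gas_spent_on_computation * fair_l2_gas_price
        + pubdata_published * eth_price_per_pubdata_byte;
    let pre_paid = tx_gas_limit * effective_gas_price;
    pre_paid.saturating_sub(fair_fee) // never refund more than was paid
}

fn main() {
    // Toy numbers: 1M gas prepaid at 250 wei, 600k gas actually spent,
    // 40 pubdata bytes at 425_000 wei each.
    let refund = refund_wei(1_000_000, 250, 600_000, 250, 40, 425_000);
    assert_eq!(refund, 83_000_000); // 250M - (150M + 17M)
}
```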
if let Some(bootloader_refund) = self.requested_refund() { assert!( self.operator_refund.is_none(), diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/tracers/result_tracer.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/tracers/result_tracer.rs index 1f566fea567..3ba396fd0c4 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/tracers/result_tracer.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/tracers/result_tracer.rs @@ -4,28 +4,28 @@ use zk_evm_1_3_3::{ zkevm_opcode_defs::FatPointer, }; use zksync_state::{StoragePtr, WriteStorage}; - -use crate::interface::dyn_tracers::vm_1_3_3::DynTracer; -use crate::interface::tracer::VmExecutionStopReason; -use crate::interface::{ - ExecutionResult, Halt, TxRevertReason, VmExecutionMode, VmExecutionResultAndLogs, - VmRevertReason, -}; use zksync_types::U256; -use crate::vm_virtual_blocks::bootloader_state::BootloaderState; -use crate::vm_virtual_blocks::old_vm::{ - history_recorder::HistoryMode, - memory::SimpleMemory, - utils::{vm_may_have_ended_inner, VmExecutionResult}, +use crate::{ + interface::{ + dyn_tracers::vm_1_3_3::DynTracer, tracer::VmExecutionStopReason, ExecutionResult, Halt, + TxRevertReason, VmExecutionMode, VmExecutionResultAndLogs, VmRevertReason, + }, + vm_virtual_blocks::{ + bootloader_state::BootloaderState, + constants::{BOOTLOADER_HEAP_PAGE, RESULT_SUCCESS_FIRST_SLOT}, + old_vm::{ + history_recorder::HistoryMode, + memory::SimpleMemory, + utils::{vm_may_have_ended_inner, VmExecutionResult}, + }, + tracers::{ + traits::{ExecutionEndTracer, ExecutionProcessing, VmTracer}, + utils::{get_vm_hook_params, read_pointer, VmHook}, + }, + types::internals::ZkSyncVmState, + }, }; -use crate::vm_virtual_blocks::tracers::{ - traits::{ExecutionEndTracer, ExecutionProcessing, VmTracer}, - utils::{get_vm_hook_params, read_pointer, VmHook}, -}; -use crate::vm_virtual_blocks::types::internals::ZkSyncVmState; - -use crate::vm_virtual_blocks::constants::{BOOTLOADER_HEAP_PAGE, RESULT_SUCCESS_FIRST_SLOT}; #[derive(Debug, Clone)] enum Result { @@ -53,7 +53,7 @@ impl ResultTracer { } fn current_frame_is_bootloader(local_state: &VmLocalState) -> bool { - // The current frame is bootloader if the callstack depth is 1. + // The current frame is bootloader if the call stack depth is 1. // Some of the near calls inside the bootloader can be out of gas, which is totally normal behavior // and it shouldn't result in `is_bootloader_out_of_gas` becoming true. 
local_state.callstack.inner.len() == 1 @@ -152,7 +152,7 @@ impl ResultTracer { }); } VmExecutionResult::Revert(output) => { - // Unlike VmHook::ExecutionResult, vm has completely finished and returned not only the revert reason, + // Unlike `VmHook::ExecutionResult`, vm has completely finished and returned not only the revert reason, // but with bytecode, which represents the type of error from the bootloader side let revert_reason = TxRevertReason::parse_error(&output); diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/tracers/traits.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/tracers/traits.rs index 3045e6f8319..6d8fdab4e66 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/tracers/traits.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/tracers/traits.rs @@ -1,12 +1,15 @@ -use crate::interface::dyn_tracers::vm_1_3_3::DynTracer; -use crate::interface::tracer::VmExecutionStopReason; -use crate::interface::VmExecutionResultAndLogs; use zksync_state::WriteStorage; -use crate::vm_virtual_blocks::bootloader_state::BootloaderState; -use crate::vm_virtual_blocks::old_vm::history_recorder::HistoryMode; -use crate::vm_virtual_blocks::old_vm::memory::SimpleMemory; -use crate::vm_virtual_blocks::types::internals::ZkSyncVmState; +use crate::{ + interface::{ + dyn_tracers::vm_1_3_3::DynTracer, tracer::VmExecutionStopReason, VmExecutionResultAndLogs, + }, + vm_virtual_blocks::{ + bootloader_state::BootloaderState, + old_vm::{history_recorder::HistoryMode, memory::SimpleMemory}, + types::internals::ZkSyncVmState, + }, +}; pub type TracerPointer = Box>; /// Run tracer for collecting data during the vm execution cycles diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/tracers/utils.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/tracers/utils.rs index abf8714bbe9..1f3d27d9d20 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/tracers/utils.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/tracers/utils.rs @@ -1,10 +1,10 @@ -use zk_evm_1_3_3::aux_structures::MemoryPage; -use zk_evm_1_3_3::zkevm_opcode_defs::{FarCallABI, FarCallForwardPageType}; use zk_evm_1_3_3::{ + aux_structures::MemoryPage, tracing::{BeforeExecutionData, VmLocalStateData}, - zkevm_opcode_defs::{FatPointer, LogOpcode, Opcode, UMAOpcode}, + zkevm_opcode_defs::{ + FarCallABI, FarCallForwardPageType, FatPointer, LogOpcode, Opcode, UMAOpcode, + }, }; - use zksync_system_constants::{ ECRECOVER_PRECOMPILE_ADDRESS, KECCAK256_PRECOMPILE_ADDRESS, KNOWN_CODES_STORAGE_ADDRESS, L1_MESSENGER_ADDRESS, SHA256_PRECOMPILE_ADDRESS, @@ -12,12 +12,16 @@ use zksync_system_constants::{ use zksync_types::U256; use zksync_utils::u256_to_h256; -use crate::vm_virtual_blocks::constants::{ - BOOTLOADER_HEAP_PAGE, VM_HOOK_PARAMS_COUNT, VM_HOOK_PARAMS_START_POSITION, VM_HOOK_POSITION, +use crate::vm_virtual_blocks::{ + constants::{ + BOOTLOADER_HEAP_PAGE, VM_HOOK_PARAMS_COUNT, VM_HOOK_PARAMS_START_POSITION, VM_HOOK_POSITION, + }, + old_vm::{ + history_recorder::HistoryMode, + memory::SimpleMemory, + utils::{aux_heap_page_from_base, heap_page_from_base}, + }, }; -use crate::vm_virtual_blocks::old_vm::history_recorder::HistoryMode; -use crate::vm_virtual_blocks::old_vm::memory::SimpleMemory; -use crate::vm_virtual_blocks::old_vm::utils::{aux_heap_page_from_base, heap_page_from_base}; #[derive(Clone, Debug, Copy)] pub(crate) enum VmHook { @@ -51,7 +55,7 @@ impl VmHook { let value = data.src1_value.value; - // Only UMA opcodes in the bootloader serve for vm hooks + // Only `UMA` opcodes in the 
bootloader serve for vm hooks if !matches!(opcode_variant.opcode, Opcode::UMA(UMAOpcode::HeapWrite)) || heap_page != BOOTLOADER_HEAP_PAGE || fat_ptr.offset != VM_HOOK_POSITION * 32 @@ -91,7 +95,7 @@ pub(crate) fn get_debug_log( let msg = String::from_utf8(msg).expect("Invalid debug message"); let data = U256::from_big_endian(&data); - // For long data, it is better to use hex-encoding for greater readibility + // For long data, it is better to use hex-encoding for greater readability let data_str = if data > U256::from(u64::max_value()) { let mut bytes = [0u8; 32]; data.to_big_endian(&mut bytes); @@ -106,7 +110,7 @@ pub(crate) fn get_debug_log( } /// Reads the memory slice represented by the fat pointer. -/// Note, that the fat pointer must point to the accesible memory (i.e. not cleared up yet). +/// Note, that the fat pointer must point to the accessible memory (i.e. not cleared up yet). pub(crate) fn read_pointer( memory: &SimpleMemory, pointer: FatPointer, diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/types/internals/transaction_data.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/types/internals/transaction_data.rs index 55f942d9928..6fee52542fc 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/types/internals/transaction_data.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/types/internals/transaction_data.rs @@ -1,17 +1,20 @@ use std::convert::TryInto; -use zksync_types::ethabi::{encode, Address, Token}; -use zksync_types::fee::{encoding_len, Fee}; -use zksync_types::l1::is_l1_tx_type; -use zksync_types::l2::L2Tx; -use zksync_types::transaction_request::{PaymasterParams, TransactionRequest}; + use zksync_types::{ - l2::TransactionType, Bytes, Execute, ExecuteTransactionCommon, L2ChainId, L2TxCommonData, - Nonce, Transaction, H256, U256, + ethabi::{encode, Address, Token}, + fee::{encoding_len, Fee}, + l1::is_l1_tx_type, + l2::{L2Tx, TransactionType}, + transaction_request::{PaymasterParams, TransactionRequest}, + Bytes, Execute, ExecuteTransactionCommon, L2ChainId, L2TxCommonData, Nonce, Transaction, H256, + U256, }; -use zksync_utils::address_to_h256; -use zksync_utils::{bytecode::hash_bytecode, bytes_to_be_words, h256_to_u256}; +use zksync_utils::{address_to_h256, bytecode::hash_bytecode, bytes_to_be_words, h256_to_u256}; -use crate::vm_virtual_blocks::utils::overhead::{get_amortized_overhead, OverheadCoeficients}; +use crate::vm_virtual_blocks::{ + constants::MAX_GAS_PER_PUBDATA_BYTE, + utils::overhead::{get_amortized_overhead, OverheadCoefficients}, +}; /// This structure represents the data that is used by /// the Bootloader to describe the transaction. @@ -59,12 +62,22 @@ impl From for TransactionData { U256::zero() }; + // Ethereum transactions do not sign gas per pubdata limit, and so for them we need to use + // some default value. We use the maximum possible value that is allowed by the bootloader + // (i.e. we can not use u64::MAX, because the bootloader requires gas per pubdata for such + // transactions to be higher than `MAX_GAS_PER_PUBDATA_BYTE`). 
+ let gas_per_pubdata_limit = if common_data.transaction_type.is_ethereum_type() { + MAX_GAS_PER_PUBDATA_BYTE.into() + } else { + common_data.fee.gas_per_pubdata_limit + }; + TransactionData { tx_type: (common_data.transaction_type as u32) as u8, from: common_data.initiator_address, to: execute_tx.execute.contract_address, gas_limit: common_data.fee.gas_limit, - pubdata_price_limit: common_data.fee.gas_per_pubdata_limit, + pubdata_price_limit: gas_per_pubdata_limit, max_fee_per_gas: common_data.fee.max_fee_per_gas, max_priority_fee_per_gas: common_data.fee.max_priority_fee_per_gas, paymaster: common_data.paymaster_params.paymaster, @@ -212,12 +225,12 @@ impl TransactionData { self.reserved_dynamic.len() as u64, ); - let coeficients = OverheadCoeficients::from_tx_type(self.tx_type); + let coefficients = OverheadCoefficients::from_tx_type(self.tx_type); get_amortized_overhead( total_gas_limit, gas_price_per_pubdata, encoded_len, - coeficients, + coefficients, ) } @@ -234,7 +247,7 @@ impl TransactionData { let l2_tx: L2Tx = self.clone().try_into().unwrap(); let transaction_request: TransactionRequest = l2_tx.into(); - // It is assumed that the TransactionData always has all the necessary components to recover the hash. + // It is assumed that the `TransactionData` always has all the necessary components to recover the hash. transaction_request .get_tx_hash(chain_id) .expect("Could not recover L2 transaction hash") @@ -303,9 +316,10 @@ impl TryInto for TransactionData { #[cfg(test)] mod tests { - use super::*; use zksync_types::fee::encoding_len; + use super::*; + #[test] fn test_consistency_with_encoding_length() { let transaction = TransactionData { diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/types/internals/vm_state.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/types/internals/vm_state.rs index 8784c754fad..c2dc400439d 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/types/internals/vm_state.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/types/internals/vm_state.rs @@ -1,34 +1,40 @@ use zk_evm_1_3_3::{ - aux_structures::MemoryPage, - aux_structures::Timestamp, + aux_structures::{MemoryPage, Timestamp}, block_properties::BlockProperties, vm_state::{CallStackEntry, PrimitiveValue, VmState}, witness_trace::DummyTracer, zkevm_opcode_defs::{ system_params::{BOOTLOADER_MAX_MEMORY, INITIAL_FRAME_FORMAL_EH_LOCATION}, - FatPointer, BOOTLOADER_CALLDATA_PAGE, + FatPointer, BOOTLOADER_BASE_PAGE, BOOTLOADER_CALLDATA_PAGE, BOOTLOADER_CODE_PAGE, + STARTING_BASE_PAGE, STARTING_TIMESTAMP, }, }; - -use crate::interface::{L1BatchEnv, L2Block, SystemEnv}; -use zk_evm_1_3_3::zkevm_opcode_defs::{ - BOOTLOADER_BASE_PAGE, BOOTLOADER_CODE_PAGE, STARTING_BASE_PAGE, STARTING_TIMESTAMP, -}; use zksync_state::{StoragePtr, WriteStorage}; use zksync_system_constants::BOOTLOADER_ADDRESS; -use zksync_types::block::legacy_miniblock_hash; -use zksync_types::{zkevm_test_harness::INITIAL_MONOTONIC_CYCLE_COUNTER, Address, MiniblockNumber}; +use zksync_types::{ + block::MiniblockHasher, zkevm_test_harness::INITIAL_MONOTONIC_CYCLE_COUNTER, Address, + MiniblockNumber, +}; use zksync_utils::h256_to_u256; -use crate::vm_virtual_blocks::bootloader_state::BootloaderState; -use crate::vm_virtual_blocks::constants::BOOTLOADER_HEAP_PAGE; -use crate::vm_virtual_blocks::old_vm::{ - event_sink::InMemoryEventSink, history_recorder::HistoryMode, memory::SimpleMemory, - oracles::decommitter::DecommitterOracle, oracles::precompile::PrecompilesProcessorWithHistory, - oracles::storage::StorageOracle, 
+use crate::{ + interface::{L1BatchEnv, L2Block, SystemEnv}, + vm_virtual_blocks::{ + bootloader_state::BootloaderState, + constants::BOOTLOADER_HEAP_PAGE, + old_vm::{ + event_sink::InMemoryEventSink, + history_recorder::HistoryMode, + memory::SimpleMemory, + oracles::{ + decommitter::DecommitterOracle, precompile::PrecompilesProcessorWithHistory, + storage::StorageOracle, + }, + }, + types::l1_batch_env::bootloader_initial_memory, + utils::l2_blocks::{assert_next_block, load_last_l2_block}, + }, }; -use crate::vm_virtual_blocks::types::l1_batch_env::bootloader_initial_memory; -use crate::vm_virtual_blocks::utils::l2_blocks::{assert_next_block, load_last_l2_block}; pub type ZkSyncVmState = VmState< StorageOracle, @@ -67,7 +73,9 @@ pub(crate) fn new_vm_state( L2Block { number: l1_batch_env.first_l2_block.number.saturating_sub(1), timestamp: 0, - hash: legacy_miniblock_hash(MiniblockNumber(l1_batch_env.first_l2_block.number) - 1), + hash: MiniblockHasher::legacy_hash( + MiniblockNumber(l1_batch_env.first_l2_block.number) - 1, + ), } }; diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/types/l1_batch_env.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/types/l1_batch_env.rs index 8af706954ed..f86d8749c9e 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/types/l1_batch_env.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/types/l1_batch_env.rs @@ -1,7 +1,8 @@ -use crate::interface::L1BatchEnv; use zksync_types::U256; use zksync_utils::{address_to_u256, h256_to_u256}; +use crate::{interface::L1BatchEnv, vm_virtual_blocks::utils::fee::get_batch_base_fee}; + const OPERATOR_ADDRESS_SLOT: usize = 0; const PREV_BLOCK_HASH_SLOT: usize = 1; const NEW_BLOCK_TIMESTAMP_SLOT: usize = 2; @@ -26,12 +27,18 @@ pub(crate) fn bootloader_initial_memory(l1_batch_env: &L1BatchEnv) -> Vec<(usize (PREV_BLOCK_HASH_SLOT, prev_block_hash), (NEW_BLOCK_TIMESTAMP_SLOT, U256::from(l1_batch_env.timestamp)), (NEW_BLOCK_NUMBER_SLOT, U256::from(l1_batch_env.number.0)), - (L1_GAS_PRICE_SLOT, U256::from(l1_batch_env.l1_gas_price)), + ( + L1_GAS_PRICE_SLOT, + U256::from(l1_batch_env.fee_input.l1_gas_price()), + ), ( FAIR_L2_GAS_PRICE_SLOT, - U256::from(l1_batch_env.fair_l2_gas_price), + U256::from(l1_batch_env.fee_input.fair_l2_gas_price()), + ), + ( + EXPECTED_BASE_FEE_SLOT, + U256::from(get_batch_base_fee(l1_batch_env)), ), - (EXPECTED_BASE_FEE_SLOT, U256::from(l1_batch_env.base_fee())), (SHOULD_SET_NEW_BLOCK_SLOT, should_set_new_block), ] } diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/utils/fee.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/utils/fee.rs index d4808e91bf4..14133553b04 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/utils/fee.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/utils/fee.rs @@ -1,29 +1,54 @@ //! 
Utility functions for vm -use zksync_system_constants::MAX_GAS_PER_PUBDATA_BYTE; +use zksync_types::fee_model::L1PeggedBatchFeeModelInput; use zksync_utils::ceil_div; -use crate::vm_virtual_blocks::old_vm::utils::eth_price_per_pubdata_byte; +use crate::{ + vm_latest::L1BatchEnv, + vm_virtual_blocks::{ + constants::MAX_GAS_PER_PUBDATA_BYTE, old_vm::utils::eth_price_per_pubdata_byte, + }, +}; -/// Calcluates the amount of gas required to publish one byte of pubdata -pub fn base_fee_to_gas_per_pubdata(l1_gas_price: u64, base_fee: u64) -> u64 { +/// Calculates the amount of gas required to publish one byte of pubdata +pub(crate) fn base_fee_to_gas_per_pubdata(l1_gas_price: u64, base_fee: u64) -> u64 { let eth_price_per_pubdata_byte = eth_price_per_pubdata_byte(l1_gas_price); ceil_div(eth_price_per_pubdata_byte, base_fee) } /// Calculates the base fee and gas per pubdata for the given L1 gas price. -pub fn derive_base_fee_and_gas_per_pubdata(l1_gas_price: u64, fair_gas_price: u64) -> (u64, u64) { +pub(crate) fn derive_base_fee_and_gas_per_pubdata( + fee_input: L1PeggedBatchFeeModelInput, +) -> (u64, u64) { + let L1PeggedBatchFeeModelInput { + l1_gas_price, + fair_l2_gas_price, + } = fee_input; + let eth_price_per_pubdata_byte = eth_price_per_pubdata_byte(l1_gas_price); - // The baseFee is set in such a way that it is always possible for a transaction to + // The `baseFee` is set in such a way that it is always possible for a transaction to // publish enough public data while compensating us for it. let base_fee = std::cmp::max( - fair_gas_price, + fair_l2_gas_price, ceil_div(eth_price_per_pubdata_byte, MAX_GAS_PER_PUBDATA_BYTE), ); ( base_fee, - base_fee_to_gas_per_pubdata(l1_gas_price, base_fee), + base_fee_to_gas_per_pubdata(fee_input.l1_gas_price, base_fee), ) } + +pub(crate) fn get_batch_base_fee(l1_batch_env: &L1BatchEnv) -> u64 { + if let Some(base_fee) = l1_batch_env.enforced_base_fee { + return base_fee; + } + let (base_fee, _) = + derive_base_fee_and_gas_per_pubdata(l1_batch_env.fee_input.into_l1_pegged()); + base_fee +} + +pub(crate) fn get_batch_gas_per_pubdata(l1_batch_env: &L1BatchEnv) -> u64 { + derive_base_fee_and_gas_per_pubdata(l1_batch_env.fee_input.into_l1_pegged()).1 +} diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/utils/l2_blocks.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/utils/l2_blocks.rs index 3d5f58094e0..e5832f7f587 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/utils/l2_blocks.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/utils/l2_blocks.rs @@ -1,15 +1,17 @@ -use crate::interface::{L2Block, L2BlockEnv}; use zksync_state::{ReadStorage, StoragePtr}; use zksync_system_constants::{ SYSTEM_CONTEXT_ADDRESS, SYSTEM_CONTEXT_CURRENT_L2_BLOCK_HASHES_POSITION, SYSTEM_CONTEXT_CURRENT_L2_BLOCK_INFO_POSITION, SYSTEM_CONTEXT_CURRENT_TX_ROLLING_HASH_POSITION, SYSTEM_CONTEXT_STORED_L2_BLOCK_HASHES, }; -use zksync_types::block::unpack_block_info; -use zksync_types::web3::signing::keccak256; -use zksync_types::{AccountTreeId, MiniblockNumber, StorageKey, H256, U256}; +use zksync_types::{ + block::unpack_block_info, web3::signing::keccak256, AccountTreeId, MiniblockNumber, StorageKey, + H256, U256, +}; use zksync_utils::{h256_to_u256, u256_to_h256}; +use crate::interface::{L2Block, L2BlockEnv}; + pub(crate) fn get_l2_block_hash_key(block_number: u32) -> StorageKey { let position = h256_to_u256(SYSTEM_CONTEXT_CURRENT_L2_BLOCK_HASHES_POSITION) + U256::from(block_number % SYSTEM_CONTEXT_STORED_L2_BLOCK_HASHES); @@ -66,7 +68,7 @@ pub fn 
load_last_l2_block(storage: StoragePtr) -> Option u32 { - // Even if the gas limit is greater than the MAX_TX_ERGS_LIMIT, we assume that everything beyond MAX_TX_ERGS_LIMIT + // Even if the gas limit is greater than the `MAX_TX_ERGS_LIMIT`, we assume that everything beyond `MAX_TX_ERGS_LIMIT` // will be spent entirely on publishing bytecodes and so we derive the overhead solely based on the capped value let gas_limit = std::cmp::min(MAX_TX_ERGS_LIMIT, gas_limit); @@ -23,8 +23,8 @@ pub fn derive_overhead( let gas_limit = U256::from(gas_limit); let encoded_len = U256::from(encoded_len); - // The MAX_TX_ERGS_LIMIT is formed in a way that may fullfills a single-instance circuits - // if used in full. That is, within MAX_TX_ERGS_LIMIT it is possible to fully saturate all the single-instance + // The `MAX_TX_ERGS_LIMIT` is formed in a way that may fulfills a single-instance circuits + // if used in full. That is, within `MAX_TX_ERGS_LIMIT` it is possible to fully saturate all the single-instance // circuits. let overhead_for_single_instance_circuits = ceil_div_u256(gas_limit * max_block_overhead, MAX_TX_ERGS_LIMIT.into()); @@ -38,42 +38,44 @@ pub fn derive_overhead( // The overhead for occupying a single tx slot let tx_slot_overhead = ceil_div_u256(max_block_overhead, MAX_TXS_IN_BLOCK.into()); - // We use "ceil" here for formal reasons to allow easier approach for calculating the overhead in O(1) - // let max_pubdata_in_tx = ceil_div_u256(gas_limit, gas_price_per_pubdata); + // We use `ceil` here for formal reasons to allow easier approach for calculating the overhead in O(1) + // `let max_pubdata_in_tx = ceil_div_u256(gas_limit, gas_price_per_pubdata);` // The maximal potential overhead from pubdata // TODO (EVM-67): possibly use overhead for pubdata + // ``` // let pubdata_overhead = ceil_div_u256( // max_pubdata_in_tx * max_block_overhead, // MAX_PUBDATA_PER_BLOCK.into(), // ); + // ``` vec![ - (coeficients.ergs_limit_overhead_coeficient + (coefficients.ergs_limit_overhead_coeficient * overhead_for_single_instance_circuits.as_u32() as f64) .floor() as u32, - (coeficients.bootloader_memory_overhead_coeficient * overhead_for_length.as_u32() as f64) + (coefficients.bootloader_memory_overhead_coeficient * overhead_for_length.as_u32() as f64) .floor() as u32, - (coeficients.slot_overhead_coeficient * tx_slot_overhead.as_u32() as f64) as u32, + (coefficients.slot_overhead_coeficient * tx_slot_overhead.as_u32() as f64) as u32, ] .into_iter() .max() .unwrap() } -/// Contains the coeficients with which the overhead for transactions will be calculated. -/// All of the coeficients should be <= 1. There are here to provide a certain "discount" for normal transactions +/// Contains the coefficients with which the overhead for transactions will be calculated. +/// All of the coefficients should be <= 1. There are here to provide a certain "discount" for normal transactions /// at the risk of malicious transactions that may close the block prematurely. 
-/// IMPORTANT: to perform correct computations, `MAX_TX_ERGS_LIMIT / coeficients.ergs_limit_overhead_coeficient` MUST +/// IMPORTANT: to perform correct computations, `MAX_TX_ERGS_LIMIT / coefficients.ergs_limit_overhead_coefficient` MUST /// result in an integer number #[derive(Debug, Clone, Copy)] -pub struct OverheadCoeficients { +pub struct OverheadCoefficients { slot_overhead_coeficient: f64, bootloader_memory_overhead_coeficient: f64, ergs_limit_overhead_coeficient: f64, } -impl OverheadCoeficients { +impl OverheadCoefficients { // This method ensures that the parameters keep the required invariants fn new_checked( slot_overhead_coeficient: f64, @@ -95,21 +97,21 @@ impl OverheadCoeficients { // L1->L2 do not receive any discounts fn new_l1() -> Self { - OverheadCoeficients::new_checked(1.0, 1.0, 1.0) + OverheadCoefficients::new_checked(1.0, 1.0, 1.0) } fn new_l2() -> Self { - OverheadCoeficients::new_checked( + OverheadCoefficients::new_checked( 1.0, 1.0, // For L2 transactions we allow a certain default discount with regard to the number of ergs. - // Multiinstance circuits can in theory be spawned infinite times, while projected future limitations - // on gas per pubdata allow for roughly 800kk gas per L1 batch, so the rough trust "discount" on the proof's part + // Multi-instance circuits can in theory be spawned infinite times, while projected future limitations + // on gas per pubdata allow for roughly 800k gas per L1 batch, so the rough trust "discount" on the proof's part // to be paid by the users is 0.1. 0.1, ) } - /// Return the coeficients for the given transaction type + /// Return the coefficients for the given transaction type pub fn from_tx_type(tx_type: u8) -> Self { if is_l1_tx_type(tx_type) { Self::new_l1() @@ -124,7 +126,7 @@ pub(crate) fn get_amortized_overhead( total_gas_limit: u32, gas_per_pubdata_byte_limit: u32, encoded_len: usize, - coeficients: OverheadCoeficients, + coefficients: OverheadCoefficients, ) -> u32 { // Using large U256 type to prevent overflows. let overhead_for_block_gas = U256::from(block_overhead_gas(gas_per_pubdata_byte_limit)); @@ -132,28 +134,28 @@ pub(crate) fn get_amortized_overhead( let encoded_len = U256::from(encoded_len); // Derivation of overhead consists of 4 parts: - // 1. The overhead for taking up a transaction's slot. (O1): O1 = 1 / MAX_TXS_IN_BLOCK - // 2. The overhead for taking up the bootloader's memory (O2): O2 = encoded_len / BOOTLOADER_TX_ENCODING_SPACE - // 3. The overhead for possible usage of pubdata. (O3): O3 = max_pubdata_in_tx / MAX_PUBDATA_PER_BLOCK - // 4. The overhead for possible usage of all the single-instance circuits. (O4): O4 = gas_limit / MAX_TX_ERGS_LIMIT + // 1. The overhead for taking up a transaction's slot. `(O1): O1 = 1 / MAX_TXS_IN_BLOCK` + // 2. The overhead for taking up the bootloader's memory `(O2): O2 = encoded_len / BOOTLOADER_TX_ENCODING_SPACE` + // 3. The overhead for possible usage of pubdata. `(O3): O3 = max_pubdata_in_tx / MAX_PUBDATA_PER_BLOCK` + // 4. The overhead for possible usage of all the single-instance circuits. `(O4): O4 = gas_limit / MAX_TX_ERGS_LIMIT` // // The maximum of these is taken to derive the part of the block's overhead to be paid by the users: // - // max_overhead = max(O1, O2, O3, O4) - // overhead_gas = ceil(max_overhead * overhead_for_block_gas). Thus, overhead_gas is a function of - // tx_gas_limit, gas_per_pubdata_byte_limit and encoded_len. + // `max_overhead = max(O1, O2, O3, O4)` + // `overhead_gas = ceil(max_overhead * overhead_for_block_gas)`. 
Thus, `overhead_gas` is a function of + // `tx_gas_limit`, `gas_per_pubdata_byte_limit` and `encoded_len`. // // While it is possible to derive the overhead with binary search in O(log n), it is too expensive to be done // on L1, so here is a reference implementation of finding the overhead for transaction in O(1): // - // Given total_gas_limit = tx_gas_limit + overhead_gas, we need to find overhead_gas and tx_gas_limit, such that: - // 1. overhead_gas is maximal possible (the operator is paid fairly) - // 2. overhead_gas(tx_gas_limit, gas_per_pubdata_byte_limit, encoded_len) >= overhead_gas (the user does not overpay) + // Given `total_gas_limit = tx_gas_limit + overhead_gas`, we need to find `overhead_gas` and `tx_gas_limit`, such that: + // 1. `overhead_gas` is maximal possible (the operator is paid fairly) + // 2. `overhead_gas(tx_gas_limit, gas_per_pubdata_byte_limit, encoded_len) >= overhead_gas` (the user does not overpay) // The third part boils to the following 4 inequalities (at least one of these must hold): - // ceil(O1 * overhead_for_block_gas) >= overhead_gas - // ceil(O2 * overhead_for_block_gas) >= overhead_gas - // ceil(O3 * overhead_for_block_gas) >= overhead_gas - // ceil(O4 * overhead_for_block_gas) >= overhead_gas + // `ceil(O1 * overhead_for_block_gas) >= overhead_gas` + // `ceil(O2 * overhead_for_block_gas) >= overhead_gas` + // `ceil(O3 * overhead_for_block_gas) >= overhead_gas` + // `ceil(O4 * overhead_for_block_gas) >= overhead_gas` // // Now, we need to solve each of these separately: @@ -161,10 +163,10 @@ pub(crate) fn get_amortized_overhead( let tx_slot_overhead = { let tx_slot_overhead = ceil_div_u256(overhead_for_block_gas, MAX_TXS_IN_BLOCK.into()).as_u32(); - (coeficients.slot_overhead_coeficient * tx_slot_overhead as f64).floor() as u32 + (coefficients.slot_overhead_coeficient * tx_slot_overhead as f64).floor() as u32 }; - // 2. The overhead for occupying the bootloader memory can be derived from encoded_len + // 2. The overhead for occupying the bootloader memory can be derived from `encoded_len` let overhead_for_length = { let overhead_for_length = ceil_div_u256( encoded_len * overhead_for_block_gas, @@ -172,18 +174,22 @@ pub(crate) fn get_amortized_overhead( ) .as_u32(); - (coeficients.bootloader_memory_overhead_coeficient * overhead_for_length as f64).floor() + (coefficients.bootloader_memory_overhead_coeficient * overhead_for_length as f64).floor() as u32 }; // TODO (EVM-67): possibly include the overhead for pubdata. The formula below has not been properly maintained, - // since the pubdat is not published. If decided to use the pubdata overhead, it needs to be updated. + // since the pubdata is not published. If decided to use the pubdata overhead, it needs to be updated. + // ``` // 3. ceil(O3 * overhead_for_block_gas) >= overhead_gas // O3 = max_pubdata_in_tx / MAX_PUBDATA_PER_BLOCK = ceil(gas_limit / gas_per_pubdata_byte_limit) / MAX_PUBDATA_PER_BLOCK - // >= (gas_limit / (gas_per_pubdata_byte_limit * MAX_PUBDATA_PER_BLOCK). Throwing off the `ceil`, while may provide marginally lower + // >= (gas_limit / (gas_per_pubdata_byte_limit * MAX_PUBDATA_PER_BLOCK). + // ``` + // Throwing off the `ceil`, while may provide marginally lower // overhead to the operator, provides substantially easier formula to work with. 
// - // For better clarity, let's denote gas_limit = GL, MAX_PUBDATA_PER_BLOCK = MP, gas_per_pubdata_byte_limit = EP, overhead_for_block_gas = OB, total_gas_limit = TL, overhead_gas = OE + // For better clarity, let's denote `gas_limit = GL, MAX_PUBDATA_PER_BLOCK = MP, gas_per_pubdata_byte_limit = EP, overhead_for_block_gas = OB, total_gas_limit = TL, overhead_gas = OE` + // ``` // ceil(OB * (TL - OE) / (EP * MP)) >= OE // // OB * (TL - OE) / (MP * EP) > OE - 1 @@ -196,7 +202,7 @@ pub(crate) fn get_amortized_overhead( // + gas_per_pubdata_byte_limit * U256::from(MAX_PUBDATA_PER_BLOCK); // let denominator = // gas_per_pubdata_byte_limit * U256::from(MAX_PUBDATA_PER_BLOCK) + overhead_for_block_gas; - + // // // Corner case: if `total_gas_limit` = `gas_per_pubdata_byte_limit` = 0 // // then the numerator will be 0 and subtracting 1 will cause a panic, so we just return a zero. // if numerator.is_zero() { @@ -205,7 +211,7 @@ pub(crate) fn get_amortized_overhead( // (numerator - 1) / denominator // } // }; - + // // 4. K * ceil(O4 * overhead_for_block_gas) >= overhead_gas, where K is the discount // O4 = gas_limit / MAX_TX_ERGS_LIMIT. Using the notation from the previous equation: // ceil(OB * GL / MAX_TX_ERGS_LIMIT) >= (OE / K) @@ -217,7 +223,7 @@ pub(crate) fn get_amortized_overhead( let overhead_for_gas = { let numerator = overhead_for_block_gas * total_gas_limit + U256::from(MAX_TX_ERGS_LIMIT); let denominator: U256 = U256::from( - (MAX_TX_ERGS_LIMIT as f64 / coeficients.ergs_limit_overhead_coeficient) as u64, + (MAX_TX_ERGS_LIMIT as f64 / coefficients.ergs_limit_overhead_coeficient) as u64, ) + overhead_for_block_gas; let overhead_for_gas = (numerator - 1) / denominator; @@ -242,7 +248,7 @@ pub(crate) fn get_amortized_overhead( MAX_L2_TX_GAS_LIMIT as u32, gas_per_pubdata_byte_limit, encoded_len.as_usize(), - coeficients, + coefficients, ) } else { overhead @@ -263,7 +269,7 @@ mod tests { total_gas_limit: u32, gas_per_pubdata_byte_limit: u32, encoded_len: usize, - coeficients: OverheadCoeficients, + coefficients: OverheadCoefficients, ) -> u32 { let mut left_bound = if MAX_TX_ERGS_LIMIT < total_gas_limit { total_gas_limit - MAX_TX_ERGS_LIMIT @@ -281,7 +287,7 @@ mod tests { total_gas_limit - suggested_overhead, gas_per_pubdata_byte_limit, encoded_len, - coeficients, + coefficients, ); derived_overhead >= suggested_overhead @@ -310,40 +316,40 @@ mod tests { let test_params = |total_gas_limit: u32, gas_per_pubdata: u32, encoded_len: usize, - coeficients: OverheadCoeficients| { + coefficients: OverheadCoefficients| { let result_by_efficient_search = - get_amortized_overhead(total_gas_limit, gas_per_pubdata, encoded_len, coeficients); + get_amortized_overhead(total_gas_limit, gas_per_pubdata, encoded_len, coefficients); let result_by_binary_search = get_maximal_allowed_overhead_bin_search( total_gas_limit, gas_per_pubdata, encoded_len, - coeficients, + coefficients, ); assert_eq!(result_by_efficient_search, result_by_binary_search); }; // Some arbitrary test - test_params(60_000_000, 800, 2900, OverheadCoeficients::new_l2()); + test_params(60_000_000, 800, 2900, OverheadCoefficients::new_l2()); // Very small parameters - test_params(0, 1, 12, OverheadCoeficients::new_l2()); + test_params(0, 1, 12, OverheadCoefficients::new_l2()); // Relatively big parameters let max_tx_overhead = derive_overhead( MAX_TX_ERGS_LIMIT, 5000, 10000, - OverheadCoeficients::new_l2(), + OverheadCoefficients::new_l2(), ); test_params( MAX_TX_ERGS_LIMIT + max_tx_overhead, 5000, 10000, - OverheadCoeficients::new_l2(), + 
OverheadCoefficients::new_l2(), ); - test_params(115432560, 800, 2900, OverheadCoeficients::new_l1()); + test_params(115432560, 800, 2900, OverheadCoefficients::new_l1()); } } diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/utils/transaction_encoding.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/utils/transaction_encoding.rs index b45ec4d1411..5f9c37cbb73 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/utils/transaction_encoding.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/utils/transaction_encoding.rs @@ -1,6 +1,7 @@ -use crate::vm_virtual_blocks::types::internals::TransactionData; use zksync_types::Transaction; +use crate::vm_virtual_blocks::types::internals::TransactionData; + /// Extension for transactions, specific for VM. Required for bypassing the orphan rule pub trait TransactionVmExt { /// Get the size of the transaction in tokens. diff --git a/core/lib/multivm/src/versions/vm_virtual_blocks/vm.rs b/core/lib/multivm/src/versions/vm_virtual_blocks/vm.rs index e96c326b219..3bb43669f00 100644 --- a/core/lib/multivm/src/versions/vm_virtual_blocks/vm.rs +++ b/core/lib/multivm/src/versions/vm_virtual_blocks/vm.rs @@ -1,21 +1,22 @@ -use crate::interface::{ - BootloaderMemory, BytecodeCompressionError, CurrentExecutionState, L1BatchEnv, L2BlockEnv, - SystemEnv, VmExecutionMode, VmExecutionResultAndLogs, VmInterface, VmInterfaceHistoryEnabled, - VmMemoryMetrics, -}; -use crate::vm_latest::HistoryEnabled; -use crate::HistoryMode; use zksync_state::{StoragePtr, WriteStorage}; -use zksync_types::l2_to_l1_log::UserL2ToL1Log; -use zksync_types::Transaction; +use zksync_types::{l2_to_l1_log::UserL2ToL1Log, Transaction}; use zksync_utils::bytecode::CompressedBytecodeInfo; -use crate::vm_virtual_blocks::old_vm::events::merge_events; - -use crate::vm_virtual_blocks::bootloader_state::BootloaderState; -use crate::vm_virtual_blocks::tracers::dispatcher::TracerDispatcher; - -use crate::vm_virtual_blocks::types::internals::{new_vm_state, VmSnapshot, ZkSyncVmState}; +use crate::{ + interface::{ + BootloaderMemory, BytecodeCompressionError, CurrentExecutionState, L1BatchEnv, L2BlockEnv, + SystemEnv, VmExecutionMode, VmExecutionResultAndLogs, VmInterface, + VmInterfaceHistoryEnabled, VmMemoryMetrics, + }, + vm_latest::HistoryEnabled, + vm_virtual_blocks::{ + bootloader_state::BootloaderState, + old_vm::events::merge_events, + tracers::dispatcher::TracerDispatcher, + types::internals::{new_vm_state, VmSnapshot, ZkSyncVmState}, + }, + HistoryMode, +}; /// Main entry point for Virtual Machine integration. 
/// The instance should process only one l1 batch @@ -117,13 +118,19 @@ impl VmInterface for Vm { tracer: TracerDispatcher, tx: Transaction, with_compression: bool, - ) -> Result { + ) -> ( + Result<(), BytecodeCompressionError>, + VmExecutionResultAndLogs, + ) { self.push_transaction_with_compression(tx, with_compression); let result = self.inspect_inner(tracer, VmExecutionMode::OneTx); if self.has_unpublished_bytecodes() { - Err(BytecodeCompressionError::BytecodeCompressionFailed) + ( + Err(BytecodeCompressionError::BytecodeCompressionFailed), + result, + ) } else { - Ok(result) + (Ok(()), result) } } @@ -132,7 +139,7 @@ impl VmInterface for Vm { } } -/// Methods of vm, which required some history manipullations +/// Methods of vm, which required some history manipulations impl VmInterfaceHistoryEnabled for Vm { /// Create snapshot of current vm state and push it into the memory fn make_snapshot(&mut self) { diff --git a/core/lib/multivm/src/vm_instance.rs b/core/lib/multivm/src/vm_instance.rs index 6b90da4bd3b..4eaca6f44b0 100644 --- a/core/lib/multivm/src/vm_instance.rs +++ b/core/lib/multivm/src/vm_instance.rs @@ -1,15 +1,16 @@ -use crate::interface::{ - BootloaderMemory, CurrentExecutionState, FinishedL1Batch, L1BatchEnv, L2BlockEnv, SystemEnv, - VmExecutionMode, VmExecutionResultAndLogs, VmInterface, VmInterfaceHistoryEnabled, - VmMemoryMetrics, -}; - use zksync_state::{StoragePtr, WriteStorage}; use zksync_types::VmVersion; use zksync_utils::bytecode::CompressedBytecodeInfo; -use crate::glue::history_mode::HistoryMode; -use crate::tracers::TracerDispatcher; +use crate::{ + glue::history_mode::HistoryMode, + interface::{ + BootloaderMemory, BytecodeCompressionError, CurrentExecutionState, FinishedL1Batch, + L1BatchEnv, L2BlockEnv, SystemEnv, VmExecutionMode, VmExecutionResultAndLogs, VmInterface, + VmInterfaceHistoryEnabled, VmMemoryMetrics, + }, + tracers::TracerDispatcher, +}; #[derive(Debug)] pub enum VmInstance { @@ -18,7 +19,8 @@ pub enum VmInstance { Vm1_3_2(crate::vm_1_3_2::Vm), VmVirtualBlocks(crate::vm_virtual_blocks::Vm), VmVirtualBlocksRefundsEnhancement(crate::vm_refunds_enhancement::Vm), - VmBoojumIntegration(crate::vm_latest::Vm), + VmBoojumIntegration(crate::vm_boojum_integration::Vm), + Vm1_4_1(crate::vm_latest::Vm), } macro_rules! dispatch_vm { @@ -30,6 +32,7 @@ macro_rules! 
dispatch_vm { VmInstance::VmVirtualBlocks(vm) => vm.$function($($params)*), VmInstance::VmVirtualBlocksRefundsEnhancement(vm) => vm.$function($($params)*), VmInstance::VmBoojumIntegration(vm) => vm.$function($($params)*), + VmInstance::Vm1_4_1(vm) => vm.$function($($params)*), } }; } @@ -85,7 +88,10 @@ impl VmInterface for VmInstance { &mut self, tx: zksync_types::Transaction, with_compression: bool, - ) -> Result { + ) -> ( + Result<(), BytecodeCompressionError>, + VmExecutionResultAndLogs, + ) { dispatch_vm!(self.execute_transaction_with_bytecode_compression(tx, with_compression)) } @@ -95,7 +101,10 @@ impl VmInterface for VmInstance { dispatcher: Self::TracerDispatcher, tx: zksync_types::Transaction, with_compression: bool, - ) -> Result { + ) -> ( + Result<(), BytecodeCompressionError>, + VmExecutionResultAndLogs, + ) { dispatch_vm!(self.inspect_transaction_with_bytecode_compression( dispatcher.into(), tx, @@ -187,9 +196,14 @@ impl VmInstance { VmInstance::VmVirtualBlocksRefundsEnhancement(vm) } VmVersion::VmBoojumIntegration => { - let vm = crate::vm_latest::Vm::new(l1_batch_env, system_env, storage_view); + let vm = + crate::vm_boojum_integration::Vm::new(l1_batch_env, system_env, storage_view); VmInstance::VmBoojumIntegration(vm) } + VmVersion::Vm1_4_1 => { + let vm = crate::vm_latest::Vm::new(l1_batch_env, system_env, storage_view); + VmInstance::Vm1_4_1(vm) + } } } } diff --git a/core/lib/object_store/Cargo.toml b/core/lib/object_store/Cargo.toml index 941674d6e50..ec42f47c6bf 100644 --- a/core/lib/object_store/Cargo.toml +++ b/core/lib/object_store/Cargo.toml @@ -10,18 +10,22 @@ keywords = ["blockchain", "zksync"] categories = ["cryptography"] [dependencies] -vise = { git = "https://github.com/matter-labs/vise.git", version = "0.1.0", rev = "dd05139b76ab0843443ab3ff730174942c825dae" } +vise = { git = "https://github.com/matter-labs/vise.git", version = "0.1.0", rev = "1c9cc500e92cf9ea052b230e114a6f9cce4fb2c1" } zksync_config = { path = "../config" } zksync_types = { path = "../types" } +zksync_protobuf = { version = "0.1.0", git = "https://github.com/matter-labs/era-consensus.git", rev = "5727a3e0b22470bb90092388f9125bcb366df613" } anyhow = "1.0" async-trait = "0.1" bincode = "1" -google-cloud-storage = "0.12.0" -google-cloud-auth = "0.11.0" +google-cloud-storage = "0.15.0" +google-cloud-auth = "0.13.0" http = "0.2.9" +serde_json = "1.0" +flate2 = "1.0.28" tokio = { version = "1.21.2", features = ["full"] } tracing = "0.1" +prost = "0.12.1" [dev-dependencies] tempdir = "0.3.7" diff --git a/core/lib/object_store/src/file.rs b/core/lib/object_store/src/file.rs index c248fb76595..2d77366a952 100644 --- a/core/lib/object_store/src/file.rs +++ b/core/lib/object_store/src/file.rs @@ -1,8 +1,8 @@ +use std::fmt::Debug; + use async_trait::async_trait; use tokio::{fs, io}; -use std::fmt::Debug; - use crate::raw::{Bucket, ObjectStore, ObjectStoreError}; impl From for ObjectStoreError { @@ -32,6 +32,7 @@ impl FileBackedObjectStore { Bucket::NodeAggregationWitnessJobsFri, Bucket::SchedulerWitnessJobsFri, Bucket::ProofsFri, + Bucket::StorageSnapshot, ] { let bucket_path = format!("{base_dir}/{bucket}"); fs::create_dir_all(&bucket_path) @@ -69,6 +70,10 @@ impl ObjectStore for FileBackedObjectStore { let filename = self.filename(bucket, key); fs::remove_file(filename).await.map_err(From::from) } + + fn storage_prefix_raw(&self, bucket: Bucket) -> String { + format!("{}/{}", self.base_dir, bucket) + } } #[cfg(test)] diff --git a/core/lib/object_store/src/gcs.rs b/core/lib/object_store/src/gcs.rs 
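As a side note on the `*_transaction_with_bytecode_compression` hunks above: the return type changes from a `Result` that dropped the execution result on compression failure to a tuple that always carries it. A minimal sketch of the new control flow, with stand-in types rather than the real `multivm` ones:

```rust
#[derive(Debug)]
enum BytecodeCompressionError {
    BytecodeCompressionFailed,
}

#[derive(Debug)]
struct VmExecutionResultAndLogs; // stand-in for the real result type

// The VM now reports the compression outcome *alongside* the execution result
// instead of discarding the result when compression fails.
fn execute_tx_with_compression(
    has_unpublished_bytecodes: bool,
) -> (Result<(), BytecodeCompressionError>, VmExecutionResultAndLogs) {
    let result = VmExecutionResultAndLogs; // ... run the transaction here ...
    if has_unpublished_bytecodes {
        (
            Err(BytecodeCompressionError::BytecodeCompressionFailed),
            result,
        )
    } else {
        (Ok(()), result)
    }
}

fn main() {
    // The caller keeps the logs/result regardless of the compression outcome.
    let (compression, result) = execute_tx_with_compression(true);
    println!("compression: {compression:?}, result: {result:?}");
}
```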
index d01fb833b12..93ee39fdef2 100644 --- a/core/lib/object_store/src/gcs.rs +++ b/core/lib/object_store/src/gcs.rs @@ -1,21 +1,23 @@ //! GCS-based [`ObjectStore`] implementation. +use std::{fmt, future::Future, time::Duration}; + use async_trait::async_trait; use google_cloud_auth::{credentials::CredentialsFile, error::Error}; use google_cloud_storage::{ client::{Client, ClientConfig}, - http::objects::{ - delete::DeleteObjectRequest, - download::Range, - get::GetObjectRequest, - upload::{Media, UploadObjectRequest, UploadType}, + http::{ + objects::{ + delete::DeleteObjectRequest, + download::Range, + get::GetObjectRequest, + upload::{Media, UploadObjectRequest, UploadType}, + }, + Error as HttpError, }, - http::Error as HttpError, }; use http::StatusCode; -use std::{fmt, future::Future, time::Duration}; - use crate::{ metrics::GCS_METRICS, raw::{Bucket, ObjectStore, ObjectStoreError}, @@ -97,7 +99,7 @@ impl GoogleCloudStorage { format!("{bucket}/{filename}") } - // For some bizzare reason, `async fn` doesn't work here, failing with the following error: + // For some bizarre reason, `async fn` doesn't work here, failing with the following error: // // > hidden type for `impl std::future::Future>` // > captures lifetime that does not appear in bounds @@ -205,6 +207,14 @@ impl ObjectStore for GoogleCloudStorage { async fn remove_raw(&self, bucket: Bucket, key: &str) -> Result<(), ObjectStoreError> { self.remove_inner(bucket.as_str(), key).await } + + fn storage_prefix_raw(&self, bucket: Bucket) -> String { + format!( + "https://storage.googleapis.com/{}/{}", + self.bucket_prefix.clone(), + bucket.as_str() + ) + } } #[cfg(test)] diff --git a/core/lib/object_store/src/metrics.rs b/core/lib/object_store/src/metrics.rs index 9cd51ba3ed7..f372b5bac1c 100644 --- a/core/lib/object_store/src/metrics.rs +++ b/core/lib/object_store/src/metrics.rs @@ -1,9 +1,9 @@ //! Metrics for the object storage. -use vise::{Buckets, Histogram, LabeledFamily, LatencyObserver, Metrics}; - use std::time::Duration; +use vise::{Buckets, Histogram, LabeledFamily, LatencyObserver, Metrics}; + use crate::Bucket; #[derive(Debug, Metrics)] diff --git a/core/lib/object_store/src/mock.rs b/core/lib/object_store/src/mock.rs index 727ef1e8d53..f7ee7119c7a 100644 --- a/core/lib/object_store/src/mock.rs +++ b/core/lib/object_store/src/mock.rs @@ -1,10 +1,10 @@ //! Mock implementation of [`ObjectStore`]. +use std::collections::HashMap; + use async_trait::async_trait; use tokio::sync::Mutex; -use std::collections::HashMap; - use crate::raw::{Bucket, ObjectStore, ObjectStoreError}; type BucketMap = HashMap>; @@ -45,4 +45,8 @@ impl ObjectStore for MockStore { bucket_map.remove(key); Ok(()) } + + fn storage_prefix_raw(&self, bucket: Bucket) -> String { + bucket.to_string() + } } diff --git a/core/lib/object_store/src/objects.rs b/core/lib/object_store/src/objects.rs index e5ee186676e..dc9865a7c7c 100644 --- a/core/lib/object_store/src/objects.rs +++ b/core/lib/object_store/src/objects.rs @@ -1,15 +1,26 @@ //! Stored objects. 
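The hunks above add a `storage_prefix_raw` method to each `ObjectStore` backend. A minimal sketch of the pattern, with `Bucket` simplified to `&str` and the async-trait machinery omitted (both are simplifications, not the real API; the bucket and prefix strings are examples only):

```rust
trait ObjectStore {
    fn storage_prefix_raw(&self, bucket: &str) -> String;
}

struct FileBackedStore {
    base_dir: String,
}

struct GcsStore {
    bucket_prefix: String,
}

impl ObjectStore for FileBackedStore {
    // File backend: the prefix is a local path.
    fn storage_prefix_raw(&self, bucket: &str) -> String {
        format!("{}/{}", self.base_dir, bucket)
    }
}

impl ObjectStore for GcsStore {
    // GCS backend: the prefix is an HTTPS URL.
    fn storage_prefix_raw(&self, bucket: &str) -> String {
        format!(
            "https://storage.googleapis.com/{}/{}",
            self.bucket_prefix, bucket
        )
    }
}

fn main() {
    let file = FileBackedStore { base_dir: "/tmp/artifacts".into() };
    assert_eq!(
        file.storage_prefix_raw("storage_logs_snapshots"),
        "/tmp/artifacts/storage_logs_snapshots"
    );
    let gcs = GcsStore { bucket_prefix: "example-bucket".into() };
    println!("{}", gcs.storage_prefix_raw("proofs_fri"));
}
```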
-use zksync_types::aggregated_operations::L1BatchProofForL1; +use std::io::{Read, Write}; + +use anyhow::Context; +use flate2::{read::GzDecoder, write::GzEncoder, Compression}; +use prost::Message; +use zksync_protobuf::{decode, ProtoFmt}; use zksync_types::{ + aggregated_operations::L1BatchProofForL1, proofs::{AggregationRound, PrepareBasicCircuitsJob}, + snapshots::{ + SnapshotFactoryDependencies, SnapshotStorageLogsChunk, SnapshotStorageLogsStorageKey, + }, storage::witness_block_state::WitnessBlockState, zkevm_test_harness::{ abstract_zksync_circuit::concrete_circuits::ZkSyncCircuit, bellman::bn256::Bn256, encodings::{recursion_request::RecursionRequest, QueueSimulator}, - witness::full_block_artifact::{BlockBasicCircuits, BlockBasicCircuitsPublicInputs}, - witness::oracle::VmWitnessOracle, + witness::{ + full_block_artifact::{BlockBasicCircuits, BlockBasicCircuitsPublicInputs}, + oracle::VmWitnessOracle, + }, LeafAggregationOutputDataWitness, NodeAggregationOutputDataWitness, SchedulerCircuitInstanceWitness, }, @@ -63,6 +74,63 @@ macro_rules! serialize_using_bincode { }; } +impl StoredObject for SnapshotFactoryDependencies { + const BUCKET: Bucket = Bucket::StorageSnapshot; + type Key<'a> = L1BatchNumber; + + fn encode_key(key: Self::Key<'_>) -> String { + format!("snapshot_l1_batch_{key}_factory_deps.proto.gzip") + } + + fn serialize(&self) -> Result<Vec<u8>, BoxedError> { + let mut encoder = GzEncoder::new(Vec::new(), Compression::default()); + let encoded_bytes = self.build().encode_to_vec(); + encoder.write_all(&encoded_bytes)?; + encoder.finish().map_err(From::from) + } + + fn deserialize(bytes: Vec<u8>) -> Result<Self, BoxedError> { + let mut decoder = GzDecoder::new(&bytes[..]); + let mut decompressed_bytes = Vec::new(); + decoder + .read_to_end(&mut decompressed_bytes) + .map_err(BoxedError::from)?; + decode(&decompressed_bytes[..]) + .context("deserialization of Message to SnapshotFactoryDependencies") + .map_err(From::from) + } +} + +impl StoredObject for SnapshotStorageLogsChunk { + const BUCKET: Bucket = Bucket::StorageSnapshot; + type Key<'a> = SnapshotStorageLogsStorageKey; + + fn encode_key(key: Self::Key<'_>) -> String { + format!( + "snapshot_l1_batch_{}_storage_logs_part_{:0>4}.proto.gzip", + key.l1_batch_number, key.chunk_id + ) + } + + fn serialize(&self) -> Result<Vec<u8>, BoxedError> { + let mut encoder = GzEncoder::new(Vec::new(), Compression::default()); + let encoded_bytes = self.build().encode_to_vec(); + encoder.write_all(&encoded_bytes)?; + encoder.finish().map_err(From::from) + } + + fn deserialize(bytes: Vec<u8>) -> Result<Self, BoxedError> { + let mut decoder = GzDecoder::new(&bytes[..]); + let mut decompressed_bytes = Vec::new(); + decoder + .read_to_end(&mut decompressed_bytes) + .map_err(BoxedError::from)?; + decode(&decompressed_bytes[..]) + .context("deserialization of Message to SnapshotStorageLogsChunk") + .map_err(From::from) + } +} + impl StoredObject for WitnessBlockState { const BUCKET: Bucket = Bucket::WitnessInput; type Key<'a> = L1BatchNumber; @@ -242,4 +310,94 @@ impl dyn ObjectStore + '_ { self.put_raw(V::BUCKET, &key, bytes).await?; Ok(key) } + + pub fn get_storage_prefix<V: StoredObject>(&self) -> String { + self.storage_prefix_raw(V::BUCKET) + } +} + +#[cfg(test)] +mod tests { + use zksync_types::{ + snapshots::{SnapshotFactoryDependency, SnapshotStorageLog}, + AccountTreeId, Bytes, StorageKey, H160, H256, + }; + + use super::*; + use crate::ObjectStoreFactory; + + #[test] + fn test_storage_logs_filenames_generate_correctly() { + let filename1 = SnapshotStorageLogsChunk::encode_key(SnapshotStorageLogsStorageKey
{ + l1_batch_number: L1BatchNumber(42), + chunk_id: 97, + }); + let filename2 = SnapshotStorageLogsChunk::encode_key(SnapshotStorageLogsStorageKey { + l1_batch_number: L1BatchNumber(3), + chunk_id: 531, + }); + let filename3 = SnapshotStorageLogsChunk::encode_key(SnapshotStorageLogsStorageKey { + l1_batch_number: L1BatchNumber(567), + chunk_id: 5, + }); + assert_eq!( + "snapshot_l1_batch_42_storage_logs_part_0097.proto.gzip", + filename1 + ); + assert_eq!( + "snapshot_l1_batch_3_storage_logs_part_0531.proto.gzip", + filename2 + ); + assert_eq!( + "snapshot_l1_batch_567_storage_logs_part_0005.proto.gzip", + filename3 + ); + } + + #[tokio::test] + async fn test_storage_logs_can_be_serialized_and_deserialized() { + let store = ObjectStoreFactory::mock().create_store().await; + let key = SnapshotStorageLogsStorageKey { + l1_batch_number: L1BatchNumber(567), + chunk_id: 5, + }; + let storage_logs = SnapshotStorageLogsChunk { + storage_logs: vec![ + SnapshotStorageLog { + key: StorageKey::new(AccountTreeId::new(H160::random()), H256::random()), + value: H256::random(), + l1_batch_number_of_initial_write: L1BatchNumber(123), + enumeration_index: 234, + }, + SnapshotStorageLog { + key: StorageKey::new(AccountTreeId::new(H160::random()), H256::random()), + value: H256::random(), + l1_batch_number_of_initial_write: L1BatchNumber(345), + enumeration_index: 456, + }, + ], + }; + store.put(key, &storage_logs).await.unwrap(); + let reconstructed_storage_logs = store.get(key).await.unwrap(); + assert_eq!(storage_logs, reconstructed_storage_logs); + } + + #[tokio::test] + async fn test_factory_deps_can_be_serialized_and_deserialized() { + let store = ObjectStoreFactory::mock().create_store().await; + let key = L1BatchNumber(123); + let factory_deps = SnapshotFactoryDependencies { + factory_deps: vec![ + SnapshotFactoryDependency { + bytecode: Bytes(vec![1, 51, 101, 201, 255]), + }, + SnapshotFactoryDependency { + bytecode: Bytes(vec![2, 52, 102, 202, 255]), + }, + ], + }; + store.put(key, &factory_deps).await.unwrap(); + let reconstructed_factory_deps = store.get(key).await.unwrap(); + assert_eq!(factory_deps, reconstructed_factory_deps); + } } diff --git a/core/lib/object_store/src/raw.rs b/core/lib/object_store/src/raw.rs index bf318a61610..764809764da 100644 --- a/core/lib/object_store/src/raw.rs +++ b/core/lib/object_store/src/raw.rs @@ -1,10 +1,10 @@ -use async_trait::async_trait; - use std::{error, fmt, sync::Arc}; -use crate::{file::FileBackedObjectStore, gcs::GoogleCloudStorage, mock::MockStore}; +use async_trait::async_trait; use zksync_config::configs::object_store::{ObjectStoreConfig, ObjectStoreMode}; +use crate::{file::FileBackedObjectStore, gcs::GoogleCloudStorage, mock::MockStore}; + /// Bucket for [`ObjectStore`] in which objects can be placed. 
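The snapshot `StoredObject` impls above share one pattern: protobuf-encode then gzip in `serialize`, and the reverse in `deserialize`. A self-contained sketch of just the gzip round-trip using the same `flate2` calls; the protobuf step is replaced by raw bytes so the example stands alone:

```rust
use std::io::{Read, Write};

use flate2::{read::GzDecoder, write::GzEncoder, Compression};

fn compress(payload: &[u8]) -> std::io::Result<Vec<u8>> {
    // Mirrors the `serialize` half: write through a gzip encoder, then finish.
    let mut encoder = GzEncoder::new(Vec::new(), Compression::default());
    encoder.write_all(payload)?;
    encoder.finish()
}

fn decompress(bytes: &[u8]) -> std::io::Result<Vec<u8>> {
    // Mirrors the `deserialize` half: read everything back through a decoder.
    let mut decoder = GzDecoder::new(bytes);
    let mut out = Vec::new();
    decoder.read_to_end(&mut out)?;
    Ok(out)
}

fn main() -> std::io::Result<()> {
    let payload = b"snapshot chunk bytes".to_vec();
    let roundtripped = decompress(&compress(&payload)?)?;
    assert_eq!(payload, roundtripped);
    Ok(())
}
```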
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)] #[non_exhaustive] @@ -19,6 +19,7 @@ pub enum Bucket { NodeAggregationWitnessJobsFri, SchedulerWitnessJobsFri, ProofsFri, + StorageSnapshot, } impl Bucket { @@ -34,6 +35,7 @@ impl Bucket { Self::NodeAggregationWitnessJobsFri => "node_aggregation_witness_jobs_fri", Self::SchedulerWitnessJobsFri => "scheduler_witness_jobs_fri", Self::ProofsFri => "proofs_fri", + Self::StorageSnapshot => "storage_logs_snapshots", } } } @@ -86,7 +88,7 @@ impl error::Error for ObjectStoreError { /// /// [`StoredObject`]: crate::StoredObject #[async_trait] -pub trait ObjectStore: fmt::Debug + Send + Sync { +pub trait ObjectStore: 'static + fmt::Debug + Send + Sync { /// Fetches the value for the given key from the given bucket if it exists. /// /// # Errors @@ -113,6 +115,8 @@ pub trait ObjectStore: fmt::Debug + Send + Sync { /// /// Returns an error if removal fails. async fn remove_raw(&self, bucket: Bucket, key: &str) -> Result<(), ObjectStoreError>; + + fn storage_prefix_raw(&self, bucket: Bucket) -> String; } #[async_trait] @@ -133,6 +137,10 @@ impl ObjectStore for Arc { async fn remove_raw(&self, bucket: Bucket, key: &str) -> Result<(), ObjectStoreError> { (**self).remove_raw(bucket, key).await } + + fn storage_prefix_raw(&self, bucket: Bucket) -> String { + (**self).storage_prefix_raw(bucket) + } } #[derive(Debug)] @@ -170,14 +178,14 @@ impl ObjectStoreFactory { } /// Creates an [`ObjectStore`]. - pub async fn create_store(&self) -> Box { + pub async fn create_store(&self) -> Arc { match &self.origin { ObjectStoreOrigin::Config(config) => Self::create_from_config(config).await, - ObjectStoreOrigin::Mock(store) => Box::new(Arc::clone(store)), + ObjectStoreOrigin::Mock(store) => Arc::new(Arc::clone(store)), } } - async fn create_from_config(config: &ObjectStoreConfig) -> Box { + async fn create_from_config(config: &ObjectStoreConfig) -> Arc { let gcs_credential_file_path = match config.mode { ObjectStoreMode::GCSWithCredentialFile => Some(config.gcs_credential_file_path.clone()), _ => None, @@ -193,7 +201,7 @@ impl ObjectStoreFactory { config.max_retries, ) .await; - Box::new(store) + Arc::new(store) } ObjectStoreMode::GCSWithCredentialFile => { tracing::trace!("Initialized GoogleCloudStorage Object store with credential file"); @@ -203,12 +211,12 @@ impl ObjectStoreFactory { config.max_retries, ) .await; - Box::new(store) + Arc::new(store) } ObjectStoreMode::FileBacked => { tracing::trace!("Initialized FileBacked Object store"); let store = FileBackedObjectStore::new(config.file_backed_base_path.clone()).await; - Box::new(store) + Arc::new(store) } } } diff --git a/core/lib/object_store/tests/integration.rs b/core/lib/object_store/tests/integration.rs index dfa659dcf8b..9db2061f17f 100644 --- a/core/lib/object_store/tests/integration.rs +++ b/core/lib/object_store/tests/integration.rs @@ -1,7 +1,6 @@ //! Integration tests for object store. 
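Switching `create_store` from `Box<dyn ObjectStore>` to `Arc<dyn ObjectStore>` in the hunks above lets one store instance be shared cheaply across consumers instead of being rebuilt per caller. A simplified sketch of that ownership change (the trait is reduced to a synchronous stub):

```rust
use std::sync::Arc;

trait ObjectStore: Send + Sync {
    fn name(&self) -> &'static str;
}

struct MockStore;

impl ObjectStore for MockStore {
    fn name(&self) -> &'static str {
        "mock"
    }
}

// Returning Arc instead of Box makes the trait object cloneable.
fn create_store() -> Arc<dyn ObjectStore> {
    Arc::new(MockStore)
}

fn main() {
    let store = create_store();
    // Each consumer clones the handle; the store itself is built once.
    let for_snapshots = Arc::clone(&store);
    assert_eq!(store.name(), for_snapshots.name());
}
```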
use tokio::fs; - use zksync_object_store::{Bucket, ObjectStoreFactory}; use zksync_types::{ proofs::{PrepareBasicCircuitsJob, StorageLogMetadata}, diff --git a/core/lib/prometheus_exporter/Cargo.toml b/core/lib/prometheus_exporter/Cargo.toml index 3f85ebd87aa..a70037dd13f 100644 --- a/core/lib/prometheus_exporter/Cargo.toml +++ b/core/lib/prometheus_exporter/Cargo.toml @@ -14,10 +14,10 @@ anyhow = "1.0" metrics = "0.21" metrics-exporter-prometheus = "0.12" tokio = "1" -vise = { git = "https://github.com/matter-labs/vise.git", version = "0.1.0", rev = "dd05139b76ab0843443ab3ff730174942c825dae" } +vise = { git = "https://github.com/matter-labs/vise.git", version = "0.1.0", rev = "1c9cc500e92cf9ea052b230e114a6f9cce4fb2c1" } [dependencies.vise-exporter] git = "https://github.com/matter-labs/vise.git" version = "0.1.0" -rev = "dd05139b76ab0843443ab3ff730174942c825dae" +rev = "1c9cc500e92cf9ea052b230e114a6f9cce4fb2c1" features = ["legacy"] diff --git a/core/lib/prometheus_exporter/src/lib.rs b/core/lib/prometheus_exporter/src/lib.rs index 25f5915e205..4eda0bebe0e 100644 --- a/core/lib/prometheus_exporter/src/lib.rs +++ b/core/lib/prometheus_exporter/src/lib.rs @@ -1,11 +1,11 @@ +use std::{net::Ipv4Addr, time::Duration}; + use anyhow::Context as _; use metrics_exporter_prometheus::{Matcher, PrometheusBuilder}; use tokio::sync::watch; use vise::MetricsCollection; use vise_exporter::MetricsExporter; -use std::{net::Ipv4Addr, time::Duration}; - fn configure_legacy_exporter(builder: PrometheusBuilder) -> PrometheusBuilder { // in seconds let default_latency_buckets = [0.001, 0.005, 0.025, 0.1, 0.25, 1.0, 5.0, 30.0, 120.0]; diff --git a/core/lib/prover_utils/Cargo.toml b/core/lib/prover_utils/Cargo.toml deleted file mode 100644 index 3afa050ace0..00000000000 --- a/core/lib/prover_utils/Cargo.toml +++ /dev/null @@ -1,26 +0,0 @@ -[package] -name = "zksync_prover_utils" -version = "0.1.0" -edition = "2018" -authors = ["The Matter Labs Team "] -homepage = "https://zksync.io/" -repository = "https://github.com/matter-labs/zksync-era" -license = "MIT OR Apache-2.0" -keywords = ["blockchain", "zksync"] -categories = ["cryptography"] - -[dependencies] -zksync_config = { path = "../../lib/config" } -zksync_utils = { path = "../../lib/utils" } -zksync_types = { path = "../../lib/types" } -zksync_object_store = { path = "../../lib/object_store" } - -anyhow = "1.0" -reqwest = { version = "0.11", features = ["blocking"] } -regex = "1.7.2" -tokio = "1.27.0" -futures = { version = "0.3", features = ["compat"] } -ctrlc = { version = "3.1", features = ["termination"] } -toml_edit = "0.14.4" -async-trait = "0.1" -tracing = "0.1" diff --git a/core/lib/prover_utils/src/gcs_proof_fetcher.rs b/core/lib/prover_utils/src/gcs_proof_fetcher.rs deleted file mode 100644 index 8b59fe67a61..00000000000 --- a/core/lib/prover_utils/src/gcs_proof_fetcher.rs +++ /dev/null @@ -1,23 +0,0 @@ -use zksync_object_store::{ObjectStore, ObjectStoreError}; -use zksync_types::aggregated_operations::L1BatchProofForL1; -use zksync_types::L1BatchNumber; - -pub async fn load_wrapped_fri_proofs_for_range( - from: L1BatchNumber, - to: L1BatchNumber, - blob_store: &dyn ObjectStore, -) -> Vec { - let mut proofs = Vec::new(); - for l1_batch_number in from.0..=to.0 { - let l1_batch_number = L1BatchNumber(l1_batch_number); - match blob_store.get(l1_batch_number).await { - Ok(proof) => proofs.push(proof), - Err(ObjectStoreError::KeyNotFound(_)) => (), // do nothing, proof is not ready yet - Err(err) => panic!( - "Failed to load proof for batch {}: {}", - 
l1_batch_number.0, err - ), - } - } - proofs -} diff --git a/core/lib/prover_utils/src/lib.rs b/core/lib/prover_utils/src/lib.rs deleted file mode 100644 index 0ee42ffee06..00000000000 --- a/core/lib/prover_utils/src/lib.rs +++ /dev/null @@ -1,126 +0,0 @@ -#![allow(clippy::upper_case_acronyms, clippy::derive_partial_eq_without_eq)] - -extern crate core; - -use std::{fs::create_dir_all, io::Cursor, path::Path, time::Duration}; - -use futures::{channel::mpsc, executor::block_on, SinkExt}; - -pub mod gcs_proof_fetcher; -pub mod periodic_job; -pub mod region_fetcher; -pub mod vk_commitment_helper; - -fn download_bytes(key_download_url: &str) -> reqwest::Result> { - tracing::info!("Downloading initial setup from {:?}", key_download_url); - - const DOWNLOAD_TIMEOUT: Duration = Duration::from_secs(120); - let client = reqwest::blocking::Client::builder() - .timeout(DOWNLOAD_TIMEOUT) - .build() - .unwrap(); - - const DOWNLOAD_RETRIES: usize = 5; - let mut retry_count = 0; - - while retry_count < DOWNLOAD_RETRIES { - let bytes = client - .get(key_download_url) - .send() - .and_then(|response| response.bytes().map(|bytes| bytes.to_vec())); - match bytes { - Ok(bytes) => return Ok(bytes), - Err(_) => retry_count += 1, - } - - tracing::warn!("Failed to download keys. Backing off for 5 second"); - std::thread::sleep(Duration::from_secs(5)); - } - - client - .get(key_download_url) - .send() - .and_then(|response| response.bytes().map(|bytes| bytes.to_vec())) -} - -pub fn ensure_initial_setup_keys_present(initial_setup_key_path: &str, key_download_url: &str) { - if Path::new(initial_setup_key_path).exists() { - tracing::info!( - "Initial setup already present at {:?}", - initial_setup_key_path - ); - return; - } - - let bytes = download_bytes(key_download_url).expect("Failed downloading initial setup"); - let initial_setup_key_dir = Path::new(initial_setup_key_path).parent().unwrap(); - create_dir_all(initial_setup_key_dir).unwrap_or_else(|_| { - panic!( - "Failed creating dirs recursively: {:?}", - initial_setup_key_dir - ) - }); - let mut file = std::fs::File::create(initial_setup_key_path) - .expect("Cannot create file for the initial setup"); - let mut content = Cursor::new(bytes); - std::io::copy(&mut content, &mut file).expect("Cannot write the downloaded key to the file"); -} - -pub fn numeric_index_to_circuit_name(circuit_numeric_index: u8) -> Option<&'static str> { - match circuit_numeric_index { - 0 => Some("Scheduler"), - 1 => Some("Node aggregation"), - 2 => Some("Leaf aggregation"), - 3 => Some("Main VM"), - 4 => Some("Decommitts sorter"), - 5 => Some("Code decommitter"), - 6 => Some("Log demuxer"), - 7 => Some("Keccak"), - 8 => Some("SHA256"), - 9 => Some("ECRecover"), - 10 => Some("RAM permutation"), - 11 => Some("Storage sorter"), - 12 => Some("Storage application"), - 13 => Some("Initial writes pubdata rehasher"), - 14 => Some("Repeated writes pubdata rehasher"), - 15 => Some("Events sorter"), - 16 => Some("L1 messages sorter"), - 17 => Some("L1 messages rehasher"), - 18 => Some("L1 messages merklizer"), - _ => None, - } -} - -pub fn circuit_name_to_numeric_index(circuit_name: &str) -> Option { - match circuit_name { - "Scheduler" => Some(0), - "Node aggregation" => Some(1), - "Leaf aggregation" => Some(2), - "Main VM" => Some(3), - "Decommitts sorter" => Some(4), - "Code decommitter" => Some(5), - "Log demuxer" => Some(6), - "Keccak" => Some(7), - "SHA256" => Some(8), - "ECRecover" => Some(9), - "RAM permutation" => Some(10), - "Storage sorter" => Some(11), - "Storage application" => 
Some(12), - "Initial writes pubdata rehasher" => Some(13), - "Repeated writes pubdata rehasher" => Some(14), - "Events sorter" => Some(15), - "L1 messages sorter" => Some(16), - "L1 messages rehasher" => Some(17), - "L1 messages merklizer" => Some(18), - _ => None, - } -} - -pub fn get_stop_signal_receiver() -> mpsc::Receiver { - let (mut stop_signal_sender, stop_signal_receiver) = mpsc::channel(256); - ctrlc::set_handler(move || { - block_on(stop_signal_sender.send(true)).expect("Ctrl+C signal send"); - }) - .expect("Error setting Ctrl+C handler"); - stop_signal_receiver -} diff --git a/core/lib/prover_utils/src/region_fetcher.rs b/core/lib/prover_utils/src/region_fetcher.rs deleted file mode 100644 index 22a0cedce49..00000000000 --- a/core/lib/prover_utils/src/region_fetcher.rs +++ /dev/null @@ -1,110 +0,0 @@ -use anyhow::Context as _; -use regex::Regex; -use reqwest::header::{HeaderMap, HeaderValue}; -use reqwest::Method; - -use zksync_config::configs::ProverGroupConfig; -use zksync_utils::http_with_retries::send_request_with_retries; - -pub async fn get_region(prover_group_config: &ProverGroupConfig) -> anyhow::Result { - if let Some(region) = &prover_group_config.region_override { - return Ok(region.clone()); - } - let url = &prover_group_config.region_read_url; - fetch_from_url(url).await.context("fetch_from_url()") -} - -pub async fn get_zone(prover_group_config: &ProverGroupConfig) -> anyhow::Result { - if let Some(zone) = &prover_group_config.zone_override { - return Ok(zone.clone()); - } - let url = &prover_group_config.zone_read_url; - let data = fetch_from_url(url).await.context("fetch_from_url()")?; - parse_zone(&data).context("parse_zone") -} - -async fn fetch_from_url(url: &str) -> anyhow::Result { - let mut headers = HeaderMap::new(); - headers.insert("Metadata-Flavor", HeaderValue::from_static("Google")); - let response = send_request_with_retries(url, 5, Method::GET, Some(headers), None).await; - response - .map_err(|err| anyhow::anyhow!("Failed fetching response from url: {url}: {err:?}"))? - .text() - .await - .context("Failed to read response as text") -} - -fn parse_zone(data: &str) -> anyhow::Result { - // Statically provided Regex should always compile. 
- let re = Regex::new(r"^projects/\d+/zones/(\w+-\w+-\w+)$").unwrap(); - if let Some(caps) = re.captures(data) { - let zone = &caps[1]; - return Ok(zone.to_string()); - } - anyhow::bail!("failed to extract zone from: {data}"); -} - -#[cfg(test)] -mod tests { - use zksync_config::configs::ProverGroupConfig; - - use crate::region_fetcher::{get_region, get_zone, parse_zone}; - - #[test] - fn test_parse_zone() { - let data = "projects/295056426491/zones/us-central1-a"; - let zone = parse_zone(data).unwrap(); - assert_eq!(zone, "us-central1-a"); - } - - #[test] - fn test_parse_zone_panic() { - let data = "invalid data"; - assert!(parse_zone(data).is_err()); - } - - #[tokio::test] - async fn test_get_region_with_override() { - let config = ProverGroupConfig { - group_0_circuit_ids: vec![], - group_1_circuit_ids: vec![], - group_2_circuit_ids: vec![], - group_3_circuit_ids: vec![], - group_4_circuit_ids: vec![], - group_5_circuit_ids: vec![], - group_6_circuit_ids: vec![], - group_7_circuit_ids: vec![], - group_8_circuit_ids: vec![], - group_9_circuit_ids: vec![], - region_override: Some("us-central-1".to_string()), - region_read_url: "".to_string(), - zone_override: Some("us-central-1-b".to_string()), - zone_read_url: "".to_string(), - synthesizer_per_gpu: 0, - }; - - assert_eq!("us-central-1", get_region(&config).await.unwrap()); - } - - #[tokio::test] - async fn test_get_zone_with_override() { - let config = ProverGroupConfig { - group_0_circuit_ids: vec![], - group_1_circuit_ids: vec![], - group_2_circuit_ids: vec![], - group_3_circuit_ids: vec![], - group_4_circuit_ids: vec![], - group_5_circuit_ids: vec![], - group_6_circuit_ids: vec![], - group_7_circuit_ids: vec![], - group_8_circuit_ids: vec![], - group_9_circuit_ids: vec![], - region_override: Some("us-central-1".to_string()), - region_read_url: "".to_string(), - zone_override: Some("us-central-1-b".to_string()), - zone_read_url: "".to_string(), - synthesizer_per_gpu: 0, - }; - assert_eq!("us-central-1-b", get_zone(&config).await.unwrap()); - } -} diff --git a/core/lib/queued_job_processor/Cargo.toml b/core/lib/queued_job_processor/Cargo.toml index 72ff3daa629..76f4f72e1d3 100644 --- a/core/lib/queued_job_processor/Cargo.toml +++ b/core/lib/queued_job_processor/Cargo.toml @@ -17,4 +17,4 @@ tokio = { version = "1", features = ["time"] } tracing = "0.1" zksync_utils = { path = "../../lib/utils" } -vise = { git = "https://github.com/matter-labs/vise.git", version = "0.1.0", rev = "dd05139b76ab0843443ab3ff730174942c825dae" } +vise = { git = "https://github.com/matter-labs/vise.git", version = "0.1.0", rev = "1c9cc500e92cf9ea052b230e114a6f9cce4fb2c1" } diff --git a/core/lib/queued_job_processor/src/lib.rs b/core/lib/queued_job_processor/src/lib.rs index d5ed185b256..49ec8b348ee 100644 --- a/core/lib/queued_job_processor/src/lib.rs +++ b/core/lib/queued_job_processor/src/lib.rs @@ -1,15 +1,13 @@ -use std::fmt::Debug; -use std::time::{Duration, Instant}; +use std::{ + fmt::Debug, + time::{Duration, Instant}, +}; use anyhow::Context as _; pub use async_trait::async_trait; -use tokio::sync::watch; -use tokio::task::JoinHandle; -use tokio::time::sleep; - -use zksync_utils::panic_extractor::try_extract_panic_message; - +use tokio::{sync::watch, task::JoinHandle, time::sleep}; use vise::{Buckets, Counter, Histogram, LabeledFamily, Metrics}; +use zksync_utils::panic_extractor::try_extract_panic_message; const ATTEMPT_BUCKETS: Buckets = Buckets::exponential(1.0..=64.0, 2.0); @@ -111,6 +109,16 @@ pub trait JobProcessor: Sync + Send { task: 
JoinHandle>, ) -> anyhow::Result<()> { let attempts = self.get_job_attempts(&job_id).await?; + let max_attempts = self.max_attempts(); + if attempts == max_attempts { + METRICS.max_attempts_reached[&(Self::SERVICE_NAME, format!("{job_id:?}"))].inc(); + tracing::error!( + "Max attempts ({max_attempts}) reached for {} job {:?}", + Self::SERVICE_NAME, + job_id, + ); + } + let result = loop { tracing::trace!( "Polling {} task with id {:?}. Is finished: {}", @@ -146,15 +154,6 @@ pub trait JobProcessor: Sync + Send { error_message ); - let max_attempts = self.max_attempts(); - if attempts == max_attempts { - METRICS.max_attempts_reached[&(Self::SERVICE_NAME, format!("{job_id:?}"))].inc(); - tracing::error!( - "Max attempts ({max_attempts}) reached for {} job {:?}", - Self::SERVICE_NAME, - job_id, - ); - } self.save_failure(job_id, started_at, error_message).await; Ok(()) } diff --git a/core/lib/state/Cargo.toml b/core/lib/state/Cargo.toml index b613266a650..87e433a4160 100644 --- a/core/lib/state/Cargo.toml +++ b/core/lib/state/Cargo.toml @@ -10,7 +10,7 @@ keywords = ["blockchain", "zksync"] categories = ["cryptography"] [dependencies] -vise = { git = "https://github.com/matter-labs/vise.git", version = "0.1.0", rev = "dd05139b76ab0843443ab3ff730174942c825dae" } +vise = { git = "https://github.com/matter-labs/vise.git", version = "0.1.0", rev = "1c9cc500e92cf9ea052b230e114a6f9cce4fb2c1" } zksync_dal = { path = "../dal" } zksync_types = { path = "../types" } zksync_utils = { path = "../utils" } diff --git a/core/lib/state/README.md b/core/lib/state/README.md index 0e98203bc47..fd01452aed3 100644 --- a/core/lib/state/README.md +++ b/core/lib/state/README.md @@ -5,7 +5,7 @@ component responsible for handling transaction execution and creating miniblocks All state keeper data is currently stored in Postgres. (Beside it, we provide an in-memory implementation for benchmarking / testing purposes.) We also keep a secondary copy for part of it in RocksDB for performance reasons. -Currently, we only duplicate the data needed by the [`vm`] crate. +Currently, we only duplicate the data needed by the [`multivm`] crate. [`zksync_core`]: ../zksync_core -[`vm`]: ../vm +[`multivm`]: ../multivm diff --git a/core/lib/state/src/cache/metrics.rs b/core/lib/state/src/cache/metrics.rs index 7198d433947..0e8c8cd8685 100644 --- a/core/lib/state/src/cache/metrics.rs +++ b/core/lib/state/src/cache/metrics.rs @@ -1,9 +1,9 @@ //! General-purpose cache metrics. -use vise::{Buckets, Counter, EncodeLabelValue, Gauge, Histogram, LabeledFamily, Metrics}; - use std::time::Duration; +use vise::{Buckets, Counter, EncodeLabelValue, Gauge, Histogram, LabeledFamily, Metrics}; + #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, EncodeLabelValue)] #[metrics(rename_all = "snake_case")] pub(super) enum Method { @@ -18,7 +18,7 @@ pub(super) enum RequestOutcome { Miss, } -/// Buckets for small latencies: from 10ns to 1ms. +/// Buckets for small latencies: from 10 ns to 1 ms. 
const SMALL_LATENCIES: Buckets = Buckets::values(&[ 1e-8, 2.5e-8, 5e-8, 1e-7, 2.5e-7, 5e-7, 1e-6, 2.5e-6, 5e-6, 1e-5, 2.5e-5, 5e-5, 1e-4, 1e-3, ]); diff --git a/core/lib/state/src/in_memory.rs b/core/lib/state/src/in_memory.rs index 87a26b238f2..d6058649a45 100644 --- a/core/lib/state/src/in_memory.rs +++ b/core/lib/state/src/in_memory.rs @@ -1,6 +1,5 @@ use std::collections::{hash_map::Entry, BTreeMap, HashMap}; -use crate::ReadStorage; use zksync_types::{ block::DeployedContract, get_code_key, get_known_code_key, get_system_context_init_logs, system_contracts::get_system_smart_contracts, L2ChainId, StorageKey, StorageLog, @@ -8,11 +7,13 @@ use zksync_types::{ }; use zksync_utils::u256_to_h256; -/// Network ID we use by defailt for in memory storage. +use crate::ReadStorage; + +/// Network ID we use by default for in memory storage. pub const IN_MEMORY_STORAGE_DEFAULT_NETWORK_ID: u32 = 270; /// In-memory storage. -#[derive(Debug, Default)] +#[derive(Debug, Default, Clone)] pub struct InMemoryStorage { pub(crate) state: HashMap, pub(crate) factory_deps: HashMap>, @@ -100,6 +101,11 @@ impl InMemoryStorage { pub fn store_factory_dep(&mut self, hash: H256, bytecode: Vec) { self.factory_deps.insert(hash, bytecode); } + + /// Get internal state of the storage. + pub fn get_state(&self) -> &HashMap { + &self.state + } } impl ReadStorage for &InMemoryStorage { diff --git a/core/lib/state/src/lib.rs b/core/lib/state/src/lib.rs index c943e48dbc1..3d54967c9ad 100644 --- a/core/lib/state/src/lib.rs +++ b/core/lib/state/src/lib.rs @@ -43,7 +43,7 @@ pub trait ReadStorage: fmt::Debug { /// Checks whether a write to this storage at the specified `key` would be an initial write. /// Roughly speaking, this is the case when the storage doesn't contain `key`, although - /// in case of mutable storages, the caveats apply (a write to a key that is present + /// in case of mutable storage, the caveats apply (a write to a key that is present /// in the storage but was not committed is still an initial write). fn is_write_initial(&mut self, key: &StorageKey) -> bool; diff --git a/core/lib/state/src/postgres/metrics.rs b/core/lib/state/src/postgres/metrics.rs index 33e5664bb2b..18fb54cdfa3 100644 --- a/core/lib/state/src/postgres/metrics.rs +++ b/core/lib/state/src/postgres/metrics.rs @@ -1,9 +1,9 @@ //! Metrics for `PostgresStorage`. -use vise::{Buckets, Counter, EncodeLabelSet, EncodeLabelValue, Family, Gauge, Histogram, Metrics}; - use std::time::Duration; +use vise::{Buckets, Counter, EncodeLabelSet, EncodeLabelValue, Family, Gauge, Histogram, Metrics}; + #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, EncodeLabelSet, EncodeLabelValue)] #[metrics(label = "stage", rename_all = "snake_case")] pub(super) enum ValuesUpdateStage { diff --git a/core/lib/state/src/postgres/mod.rs b/core/lib/state/src/postgres/mod.rs index 8cc69f7bbbd..7208877abb3 100644 --- a/core/lib/state/src/postgres/mod.rs +++ b/core/lib/state/src/postgres/mod.rs @@ -1,23 +1,22 @@ -use tokio::{runtime::Handle, sync::mpsc}; - use std::{ mem, sync::{Arc, RwLock}, }; +use tokio::{runtime::Handle, sync::mpsc}; use zksync_dal::{ConnectionPool, StorageProcessor}; use zksync_types::{L1BatchNumber, MiniblockNumber, StorageKey, StorageValue, H256}; -mod metrics; -#[cfg(test)] -mod tests; - use self::metrics::{Method, ValuesUpdateStage, CACHE_METRICS, STORAGE_METRICS}; use crate::{ cache::{Cache, CacheValue}, ReadStorage, }; +mod metrics; +#[cfg(test)] +mod tests; + /// Type alias for smart contract source code cache. 
type FactoryDepsCache = Cache>; diff --git a/core/lib/state/src/postgres/tests.rs b/core/lib/state/src/postgres/tests.rs index 213360bb73d..6514da136d5 100644 --- a/core/lib/state/src/postgres/tests.rs +++ b/core/lib/state/src/postgres/tests.rs @@ -1,13 +1,12 @@ //! Tests for `PostgresStorage`. +use std::{collections::HashMap, mem}; + use rand::{ + rngs::StdRng, seq::{IteratorRandom, SliceRandom}, Rng, SeedableRng, }; - -use rand::rngs::StdRng; -use std::{collections::HashMap, mem}; - use zksync_dal::ConnectionPool; use zksync_types::StorageLog; diff --git a/core/lib/state/src/rocksdb/metrics.rs b/core/lib/state/src/rocksdb/metrics.rs index 81b035811d5..997f4b42ed3 100644 --- a/core/lib/state/src/rocksdb/metrics.rs +++ b/core/lib/state/src/rocksdb/metrics.rs @@ -1,9 +1,9 @@ //! Metrics for `RocksdbStorage`. -use vise::{Buckets, Gauge, Histogram, Metrics}; - use std::time::Duration; +use vise::{Buckets, Gauge, Histogram, Metrics}; + #[derive(Debug, Metrics)] #[metrics(prefix = "server_state_keeper_secondary_storage")] pub(super) struct RocksdbStorageMetrics { diff --git a/core/lib/state/src/rocksdb/mod.rs b/core/lib/state/src/rocksdb/mod.rs index e3748f3acfd..6e0bb7233ee 100644 --- a/core/lib/state/src/rocksdb/mod.rs +++ b/core/lib/state/src/rocksdb/mod.rs @@ -19,19 +19,19 @@ //! | Contracts | address (20 bytes) | `Vec` | Contract contents | //! | Factory deps | hash (32 bytes) | `Vec` | Bytecodes for new contracts that a certain contract may deploy. | -use itertools::{Either, Itertools}; use std::{collections::HashMap, convert::TryInto, mem, path::Path, time::Instant}; +use itertools::{Either, Itertools}; use zksync_dal::StorageProcessor; use zksync_storage::{db::NamedColumnFamily, RocksDB}; use zksync_types::{L1BatchNumber, StorageKey, StorageValue, H256, U256}; use zksync_utils::{h256_to_u256, u256_to_h256}; -mod metrics; - use self::metrics::METRICS; use crate::{InMemoryStorage, ReadStorage}; +mod metrics; + fn serialize_block_number(block_number: u32) -> [u8; 4] { block_number.to_le_bytes() } @@ -135,15 +135,16 @@ impl RocksdbStorage { /// in Postgres. pub async fn update_from_postgres(&mut self, conn: &mut StorageProcessor<'_>) { let latency = METRICS.update.start(); - let latest_l1_batch_number = conn + let Some(latest_l1_batch_number) = conn .blocks_dal() .get_sealed_l1_batch_number() .await - .unwrap(); - tracing::debug!( - "loading storage for l1 batch number {}", - latest_l1_batch_number.0 - ); + .unwrap() + else { + // No L1 batches are persisted in Postgres; update is not necessary. + return; + }; + tracing::debug!("Loading storage for l1 batch number {latest_l1_batch_number}"); let mut current_l1_batch_number = self.l1_batch_number().0; assert!( @@ -506,13 +507,13 @@ impl ReadStorage for RocksdbStorage { #[cfg(test)] mod tests { use tempfile::TempDir; + use zksync_dal::ConnectionPool; + use zksync_types::{MiniblockNumber, StorageLog}; use super::*; use crate::test_utils::{ create_l1_batch, create_miniblock, gen_storage_logs, prepare_postgres, }; - use zksync_dal::ConnectionPool; - use zksync_types::{MiniblockNumber, StorageLog}; #[tokio::test] async fn rocksdb_storage_basics() { @@ -675,7 +676,7 @@ mod tests { storage.update_from_postgres(&mut conn).await; assert_eq!(storage.l1_batch_number(), L1BatchNumber(2)); - // Check that enum indices are correct after syncing with postgres. + // Check that enum indices are correct after syncing with Postgres. 
for log in &storage_logs { let expected_index = enum_indices[&log.key.hashed_key()]; assert_eq!( diff --git a/core/lib/state/src/shadow_storage.rs b/core/lib/state/src/shadow_storage.rs index dea713ba40c..0a2bd0fa43e 100644 --- a/core/lib/state/src/shadow_storage.rs +++ b/core/lib/state/src/shadow_storage.rs @@ -1,7 +1,7 @@ use vise::{Counter, Metrics}; +use zksync_types::{L1BatchNumber, StorageKey, StorageValue, H256}; use crate::ReadStorage; -use zksync_types::{L1BatchNumber, StorageKey, StorageValue, H256}; #[derive(Debug, Metrics)] #[metrics(prefix = "shadow_storage")] diff --git a/core/lib/state/src/storage_view.rs b/core/lib/state/src/storage_view.rs index 8476be78aa9..543b41bc657 100644 --- a/core/lib/state/src/storage_view.rs +++ b/core/lib/state/src/storage_view.rs @@ -1,14 +1,15 @@ -use std::cell::RefCell; -use std::rc::Rc; use std::{ + cell::RefCell, collections::HashMap, fmt, mem, + rc::Rc, time::{Duration, Instant}, }; -use crate::{ReadStorage, WriteStorage}; use zksync_types::{witness_block_state::WitnessBlockState, StorageKey, StorageValue, H256}; +use crate::{ReadStorage, WriteStorage}; + /// Metrics for [`StorageView`]. #[derive(Debug, Default, Clone, Copy)] pub struct StorageViewMetrics { @@ -204,9 +205,10 @@ impl WriteStorage for StorageView { #[cfg(test)] mod test { + use zksync_types::{AccountTreeId, Address, H256}; + use super::*; use crate::InMemoryStorage; - use zksync_types::{AccountTreeId, Address, H256}; #[test] fn test_storage_access() { diff --git a/core/lib/state/src/test_utils.rs b/core/lib/state/src/test_utils.rs index b9a9d81fc54..340f2ea6223 100644 --- a/core/lib/state/src/test_utils.rs +++ b/core/lib/state/src/test_utils.rs @@ -1,5 +1,7 @@ //! Shared utils for unit tests. +use std::ops; + use zksync_dal::StorageProcessor; use zksync_types::{ block::{BlockGasCount, L1BatchHeader, MiniblockHeader}, @@ -7,8 +9,6 @@ use zksync_types::{ StorageLog, H256, }; -use std::ops; - pub(crate) async fn prepare_postgres(conn: &mut StorageProcessor<'_>) { if conn.blocks_dal().is_genesis_needed().await.unwrap() { conn.protocol_versions_dal() @@ -35,7 +35,6 @@ pub(crate) async fn prepare_postgres(conn: &mut StorageProcessor<'_>) { } pub(crate) fn gen_storage_logs(indices: ops::Range) -> Vec { - // Addresses and keys of storage logs must be sorted for the `multi_block_workflow` test. 
let mut accounts = [ "4b3af74f66ab1f0da3f2e4ec7a3cb99baf1af7b2", "ef4bb7b21c5fe7432a7d63876cc59ecc23b46636", @@ -74,8 +73,8 @@ pub(crate) async fn create_miniblock( l1_tx_count: 0, l2_tx_count: 0, base_fee_per_gas: 0, - l1_gas_price: 0, - l2_fair_gas_price: 0, + batch_fee_input: Default::default(), + gas_per_pubdata_limit: 0, base_system_contracts_hashes: Default::default(), protocol_version: Some(Default::default()), virtual_blocks: 0, @@ -106,7 +105,7 @@ pub(crate) async fn create_l1_batch( ); header.is_finished = true; conn.blocks_dal() - .insert_l1_batch(&header, &[], BlockGasCount::default(), &[], &[]) + .insert_l1_batch(&header, &[], BlockGasCount::default(), &[], &[], 0) .await .unwrap(); conn.blocks_dal() diff --git a/core/lib/state/src/witness.rs b/core/lib/state/src/witness.rs index 72aab8bbe6e..50e2d9b5407 100644 --- a/core/lib/state/src/witness.rs +++ b/core/lib/state/src/witness.rs @@ -1,9 +1,8 @@ use vise::{Counter, Metrics}; +use zksync_types::{witness_block_state::WitnessBlockState, StorageKey, StorageValue, H256}; use crate::ReadStorage; -use zksync_types::{witness_block_state::WitnessBlockState, StorageKey, StorageValue, H256}; - #[derive(Debug, Metrics)] #[metrics(prefix = "witness_storage")] struct WitnessStorageMetrics { diff --git a/core/lib/storage/Cargo.toml b/core/lib/storage/Cargo.toml index 015e41c6640..4a6c2e2cbe2 100644 --- a/core/lib/storage/Cargo.toml +++ b/core/lib/storage/Cargo.toml @@ -10,7 +10,7 @@ keywords = ["blockchain", "zksync"] categories = ["cryptography"] [dependencies] -vise = { git = "https://github.com/matter-labs/vise.git", version = "0.1.0", rev = "dd05139b76ab0843443ab3ff730174942c825dae" } +vise = { git = "https://github.com/matter-labs/vise.git", version = "0.1.0", rev = "1c9cc500e92cf9ea052b230e114a6f9cce4fb2c1" } num_cpus = "1.13" once_cell = "1.18.0" diff --git a/core/lib/storage/src/db.rs b/core/lib/storage/src/db.rs index 3280183abf9..24502493a60 100644 --- a/core/lib/storage/src/db.rs +++ b/core/lib/storage/src/db.rs @@ -1,8 +1,3 @@ -use rocksdb::{ - properties, BlockBasedOptions, Cache, ColumnFamily, ColumnFamilyDescriptor, DBPinnableSlice, - Direction, IteratorMode, Options, PrefixRange, ReadOptions, WriteOptions, DB, -}; - use std::{ collections::{HashMap, HashSet}, ffi::CStr, @@ -15,6 +10,11 @@ use std::{ time::{Duration, Instant}, }; +use rocksdb::{ + properties, BlockBasedOptions, Cache, ColumnFamily, ColumnFamilyDescriptor, DBPinnableSlice, + Direction, IteratorMode, Options, PrefixRange, ReadOptions, WriteOptions, DB, +}; + use crate::metrics::{RocksdbLabels, RocksdbSizeMetrics, METRICS}; /// Number of active RocksDB instances used to determine if it's safe to exit current process. @@ -529,7 +529,7 @@ impl RocksDB { .iterator_cf_opt(cf, options, IteratorMode::Start) .map(Result::unwrap) .fuse() - // ^ The rocksdb docs say that a raw iterator (which is used by the returned ordinary iterator) + // ^ The RocksDB docs say that a raw iterator (which is used by the returned ordinary iterator) // can become invalid "when it reaches the end of its defined range, or when it encounters an error." // We panic on RocksDB errors elsewhere and fuse it to prevent polling after the end of the range. // Thus, `unwrap()` should be safe. @@ -553,7 +553,7 @@ impl RocksDB { } impl RocksDB<()> { - /// Awaits termination of all running rocksdb instances. + /// Awaits termination of all running RocksDB instances. /// /// This method is blocking and should be wrapped in `spawn_blocking(_)` if run in the async context. 
pub fn await_rocksdb_termination() { @@ -570,7 +570,7 @@ impl RocksDB<()> { } } -/// Empty struct used to register rocksdb instance +/// Empty struct used to register RocksDB instance #[derive(Debug)] struct RegistryEntry; diff --git a/core/lib/storage/src/metrics.rs b/core/lib/storage/src/metrics.rs index 0c26bd749d5..47b0a52ee98 100644 --- a/core/lib/storage/src/metrics.rs +++ b/core/lib/storage/src/metrics.rs @@ -1,14 +1,14 @@ //! General-purpose RocksDB metrics. All metrics code in the crate should be in this module. -use once_cell::sync::Lazy; -use vise::{Buckets, Collector, Counter, EncodeLabelSet, Family, Gauge, Histogram, Metrics, Unit}; - use std::{ collections::HashMap, sync::{Mutex, Weak}, time::Duration, }; +use once_cell::sync::Lazy; +use vise::{Buckets, Collector, Counter, EncodeLabelSet, Family, Gauge, Histogram, Metrics, Unit}; + use crate::db::RocksDBInner; #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, EncodeLabelSet)] @@ -96,7 +96,7 @@ pub(crate) struct RocksdbSizeMetrics { pub live_data_size: Family>, /// Total size of all SST files in the column family of a RocksDB instance. pub total_sst_size: Family>, - /// Total size of all mem tables in the column family of a RocksDB instance. + /// Total size of all memory tables in the column family of a RocksDB instance. pub total_mem_table_size: Family>, /// Total size of block cache in the column family of a RocksDB instance. pub block_cache_size: Family>, diff --git a/core/lib/test_account/src/lib.rs b/core/lib/test_account/src/lib.rs index 00764df6dc4..ec3c1b7a7b0 100644 --- a/core/lib/test_account/src/lib.rs +++ b/core/lib/test_account/src/lib.rs @@ -1,21 +1,22 @@ use ethabi::Token; -use zksync_contracts::test_contracts::LoadnextContractExecutionParams; -use zksync_contracts::{deployer_contract, load_contract}; +use zksync_contracts::{ + deployer_contract, load_contract, test_contracts::LoadnextContractExecutionParams, +}; +use zksync_eth_signer::{raw_ethereum_tx::TransactionParameters, EthereumSigner, PrivateKeySigner}; use zksync_system_constants::{ - CONTRACT_DEPLOYER_ADDRESS, MAX_GAS_PER_PUBDATA_BYTE, REQUIRED_L1_TO_L2_GAS_PER_PUBDATA_BYTE, + CONTRACT_DEPLOYER_ADDRESS, DEFAULT_L2_TX_GAS_PER_PUBDATA_BYTE, + REQUIRED_L1_TO_L2_GAS_PER_PUBDATA_BYTE, }; -use zksync_types::fee::Fee; -use zksync_types::l2::L2Tx; -use zksync_types::utils::deployed_address_create; use zksync_types::{ + fee::Fee, + l1::{OpProcessingType, PriorityQueueType}, + l2::L2Tx, + utils::deployed_address_create, Address, Execute, ExecuteTransactionCommon, L1TxCommonData, L2ChainId, Nonce, PackedEthSignature, PriorityOpId, Transaction, H256, U256, }; - -use zksync_eth_signer::{raw_ethereum_tx::TransactionParameters, EthereumSigner, PrivateKeySigner}; -use zksync_types::l1::{OpProcessingType, PriorityQueueType}; - use zksync_utils::bytecode::hash_bytecode; + pub const L1_TEST_GAS_PER_PUBDATA_BYTE: u32 = 800; const BASE_FEE: u64 = 2_000_000_000; @@ -94,7 +95,7 @@ impl Account { gas_limit: U256::from(2000000000u32), max_fee_per_gas: U256::from(BASE_FEE), max_priority_fee_per_gas: U256::from(100), - gas_per_pubdata_limit: U256::from(MAX_GAS_PER_PUBDATA_BYTE), + gas_per_pubdata_limit: U256::from(DEFAULT_L2_TX_GAS_PER_PUBDATA_BYTE), } } diff --git a/core/lib/types/Cargo.toml b/core/lib/types/Cargo.toml index 6bf130bc70c..95d433e0791 100644 --- a/core/lib/types/Cargo.toml +++ b/core/lib/types/Cargo.toml @@ -10,27 +10,26 @@ keywords = ["blockchain", "zksync"] categories = ["cryptography"] readme = "README.md" -links = "zksync_types_proto" - [dependencies] 
zksync_system_constants = { path = "../constants" } zksync_utils = { path = "../utils" } zksync_basic_types = { path = "../basic_types" } zksync_contracts = { path = "../contracts" } zksync_mini_merkle_tree = { path = "../mini_merkle_tree" } +zksync_config = { path = "../config" } # We need this import because we want DAL to be responsible for (de)serialization codegen = { git = "https://github.com/matter-labs/solidity_plonk_verifier.git", branch = "dev" } zkevm_test_harness = { git = "https://github.com/matter-labs/era-zkevm_test_harness.git", branch = "v1.3.3" } -zk_evm_1_4_0 = { git = "https://github.com/matter-labs/era-zk_evm.git", branch = "v1.4.0", package = "zk_evm" } +zk_evm_1_4_1 = { package = "zk_evm", git = "https://github.com/matter-labs/era-zk_evm.git", branch = "v1.4.1" } +zk_evm_1_4_0 = { package = "zk_evm", git = "https://github.com/matter-labs/era-zk_evm.git", branch = "v1.4.0" } zk_evm = { git = "https://github.com/matter-labs/era-zk_evm.git", tag = "v1.3.3-rc2" } -zksync_consensus_roles = { version = "0.1.0", git = "https://github.com/matter-labs/era-consensus.git", rev = "ed71b2e817c980a2daffef6a01885219e1dc6fa0" } -zksync_protobuf = { version = "0.1.0", git = "https://github.com/matter-labs/era-consensus.git", rev = "ed71b2e817c980a2daffef6a01885219e1dc6fa0" } +zksync_consensus_roles = { version = "0.1.0", git = "https://github.com/matter-labs/era-consensus.git", rev = "5727a3e0b22470bb90092388f9125bcb366df613" } +zksync_protobuf = { version = "0.1.0", git = "https://github.com/matter-labs/era-consensus.git", rev = "5727a3e0b22470bb90092388f9125bcb366df613" } anyhow = "1.0.75" chrono = { version = "0.4", features = ["serde"] } -num = { version = "0.3.1", features = ["serde"] } +num = { version = "0.4.0", features = ["serde"] } once_cell = "1.7" -prost = "0.12.1" rlp = "0.5" serde = "1.0.90" serde_json = "1.0.0" @@ -39,6 +38,7 @@ strum = { version = "0.24", features = ["derive"] } thiserror = "1.0" num_enum = "0.6" hex = "0.4" +prost = "0.12.1" # Crypto stuff # TODO (PLA-440): remove parity-crypto @@ -55,4 +55,4 @@ tokio = { version = "1", features = ["rt", "macros"] } serde_with = { version = "1", features = ["hex"] } [build-dependencies] -zksync_protobuf_build = { version = "0.1.0", git = "https://github.com/matter-labs/era-consensus.git", rev = "ed71b2e817c980a2daffef6a01885219e1dc6fa0" } +zksync_protobuf_build = { version = "0.1.0", git = "https://github.com/matter-labs/era-consensus.git", rev = "5727a3e0b22470bb90092388f9125bcb366df613" } diff --git a/core/lib/types/build.rs b/core/lib/types/build.rs index 464a905e47a..62a98bd982c 100644 --- a/core/lib/types/build.rs +++ b/core/lib/types/build.rs @@ -3,9 +3,9 @@ fn main() { zksync_protobuf_build::Config { input_root: "src/proto".into(), proto_root: "zksync/types".into(), - dependencies: vec!["::zksync_consensus_roles::proto".parse().unwrap()], + dependencies: vec![], protobuf_crate: "::zksync_protobuf".parse().unwrap(), - is_public: true, + is_public: false, } .generate() .expect("generate()"); diff --git a/core/lib/types/src/aggregated_operations.rs b/core/lib/types/src/aggregated_operations.rs index 8819460f269..006eca562e7 100644 --- a/core/lib/types/src/aggregated_operations.rs +++ b/core/lib/types/src/aggregated_operations.rs @@ -1,12 +1,12 @@ -use codegen::serialize_proof; - use std::{fmt, ops, str::FromStr}; +use codegen::serialize_proof; use serde::{Deserialize, Serialize}; -use zkevm_test_harness::abstract_zksync_circuit::concrete_circuits::ZkSyncCircuit; -use zkevm_test_harness::bellman::bn256::Bn256;
-use zkevm_test_harness::bellman::plonk::better_better_cs::proof::Proof;
-use zkevm_test_harness::witness::oracle::VmWitnessOracle;
+use zkevm_test_harness::{
+    abstract_zksync_circuit::concrete_circuits::ZkSyncCircuit,
+    bellman::{bn256::Bn256, plonk::better_better_cs::proof::Proof},
+    witness::oracle::VmWitnessOracle,
+};
 use zksync_basic_types::{ethabi::Token, L1BatchNumber};

 use crate::{commitment::L1BatchWithMetadata, ProtocolVersionId, U256};
diff --git a/core/lib/types/src/api/en.rs b/core/lib/types/src/api/en.rs
index aa3d2955e2e..2e7afcdfb73 100644
--- a/core/lib/types/src/api/en.rs
+++ b/core/lib/types/src/api/en.rs
@@ -5,7 +5,7 @@ use zk_evm::ethereum_types::Address;
 use zksync_basic_types::{L1BatchNumber, MiniblockNumber, H256};
 use zksync_contracts::BaseSystemContractsHashes;

-use crate::{block::ConsensusBlockFields, ProtocolVersionId};
+use crate::ProtocolVersionId;

 /// Representation of the L2 block, as needed for the EN synchronization.
 /// This structure has several fields that describe *L1 batch* rather than
@@ -24,12 +24,12 @@ pub struct SyncBlock {
     pub last_in_batch: bool,
     /// L2 block timestamp.
     pub timestamp: u64,
-    /// Hash of the L2 block (not the Merkle root hash).
-    pub root_hash: Option<H256>,
     /// L1 gas price used as VM parameter for the L1 batch corresponding to this L2 block.
     pub l1_gas_price: u64,
     /// L2 gas price used as VM parameter for the L1 batch corresponding to this L2 block.
     pub l2_fair_gas_price: u64,
+    /// The pubdata price used as VM parameter for the L1 batch corresponding to this L2 block.
+    pub fair_pubdata_price: Option<u64>,
     /// Hashes of the base system contracts used for the L1 batch corresponding to this L2 block.
     pub base_system_contracts_hashes: BaseSystemContractsHashes,
     /// Address of the operator account that produced the L1 batch corresponding to this L2 block.
@@ -44,7 +44,4 @@ pub struct SyncBlock {
     pub hash: Option<H256>,
     /// Version of the protocol used for this block.
     pub protocol_version: ProtocolVersionId,
-    /// Consensus-related information about the block. Not present if consensus is not enabled
-    /// for the environment.
-    pub consensus: Option<ConsensusBlockFields>,
 }
diff --git a/core/lib/types/src/api/mod.rs b/core/lib/types/src/api/mod.rs
index f0cc7132831..9f00aee0cf7 100644
--- a/core/lib/types/src/api/mod.rs
+++ b/core/lib/types/src/api/mod.rs
@@ -1,20 +1,21 @@
 use chrono::{DateTime, Utc};
 use serde::{de, Deserialize, Deserializer, Serialize, Serializer};
 use strum::Display;
-
 use zksync_basic_types::{
     web3::types::{Bytes, H160, H256, H64, U256, U64},
     L1BatchNumber,
 };
 use zksync_contracts::BaseSystemContractsHashes;

-use crate::protocol_version::L1VerifierConfig;
 pub use crate::transaction_request::{
     Eip712Meta, SerializationTransactionError, TransactionRequest,
 };
-use crate::vm_trace::{Call, CallType};
-use crate::web3::types::{AccessList, Index, H2048};
-use crate::{Address, MiniblockNumber, ProtocolVersionId};
+use crate::{
+    protocol_version::L1VerifierConfig,
+    vm_trace::{Call, CallType},
+    web3::types::{AccessList, Index, H2048},
+    Address, MiniblockNumber, ProtocolVersionId,
+};

 pub mod en;

@@ -89,7 +90,7 @@ impl<'de> Deserialize<'de> for BlockNumber {
     }
 }

-/// Block unified identifier in terms of ZKSync
+/// Block unified identifier in terms of zkSync
 ///
 /// This is a utility structure that cannot be (de)serialized; it has to be created manually.
 /// The reason is that the Web3 API provides multiple methods for referring to a block, either by hash or by number,
@@ -208,10 +209,10 @@ pub struct TransactionReceipt {
     pub transaction_index: Index,
     /// Hash of the block this transaction was included within.
     #[serde(rename = "blockHash")]
-    pub block_hash: Option<H256>,
+    pub block_hash: H256,
     /// Number of the miniblock this transaction was included within.
     #[serde(rename = "blockNumber")]
-    pub block_number: Option<U64>,
+    pub block_number: U64,
     /// Index of transaction in l1 batch
     #[serde(rename = "l1BatchTxIndex")]
     pub l1_batch_tx_index: Option<U64>,
@@ -245,9 +246,9 @@ pub struct TransactionReceipt {
     #[serde(rename = "l2ToL1Logs")]
     pub l2_to_l1_logs: Vec<L2ToL1Log>,
     /// Status: either 1 (success) or 0 (failure).
-    pub status: Option<U64>,
+    pub status: U64,
     /// State root.
-    pub root: Option<H256>,
+    pub root: H256,
     /// Logs bloom
     #[serde(rename = "logsBloom")]
     pub logs_bloom: H2048,
@@ -271,7 +272,7 @@ pub struct Block {
     /// Hash of the uncles
     #[serde(rename = "sha3Uncles")]
     pub uncles_hash: H256,
-    /// Miner/author's address
+    /// Miner / author's address
     #[serde(rename = "miner", default, deserialize_with = "null_to_default")]
     pub author: H160,
     /// State root hash
@@ -463,7 +464,7 @@ pub struct Transaction {
     pub from: Option<Address>,
     /// Recipient (None when contract creation)
     pub to: Option<Address>,
-    /// Transfered value
+    /// Transferred value
     pub value: U256,
     /// Gas Price
     #[serde(rename = "gasPrice")]
@@ -548,7 +549,7 @@ pub struct TransactionDetails {
 #[derive(Debug, Clone)]
 pub struct GetLogsFilter {
     pub from_block: MiniblockNumber,
-    pub to_block: Option<MiniblockNumber>,
+    pub to_block: MiniblockNumber,
     pub addresses: Vec<Address>
, pub topics: Vec<(u32, Vec)>, } @@ -567,7 +568,7 @@ pub enum DebugCallType { Create, } -#[derive(Debug, Serialize, Deserialize, Clone)] +#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)] #[serde(rename_all = "camelCase")] pub struct DebugCall { pub r#type: DebugCallType, diff --git a/core/lib/types/src/block.rs b/core/lib/types/src/block.rs index 762733f8e21..48765e27e0f 100644 --- a/core/lib/types/src/block.rs +++ b/core/lib/types/src/block.rs @@ -1,15 +1,13 @@ -use anyhow::Context as _; -use serde::{Deserialize, Serialize}; -use zksync_system_constants::SYSTEM_BLOCK_INFO_BLOCK_NUMBER_MULTIPLIER; - use std::{fmt, ops}; +use serde::{Deserialize, Serialize}; use zksync_basic_types::{H2048, H256, U256}; -use zksync_consensus_roles::validator; use zksync_contracts::BaseSystemContractsHashes; -use zksync_protobuf::{read_required, ProtoFmt}; +use zksync_system_constants::SYSTEM_BLOCK_INFO_BLOCK_NUMBER_MULTIPLIER; +use zksync_utils::concat_and_hash; use crate::{ + fee_model::BatchFeeInput, l2_to_l1_log::{SystemL2ToL1Log, UserL2ToL1Log}, priority_op_onchain_data::PriorityOpOnchainData, web3::signing::keccak256, @@ -64,14 +62,15 @@ pub struct L1BatchHeader { /// The L2 gas price that the operator agrees on. pub l2_fair_gas_price: u64, pub base_system_contracts_hashes: BaseSystemContractsHashes, - /// System logs are those emitted as part of the Vm excecution. + /// System logs are those emitted as part of the Vm execution. pub system_logs: Vec, /// Version of protocol used for the L1 batch. pub protocol_version: Option, + pub pubdata_input: Option>, } /// Holder for the miniblock metadata that is not available from transactions themselves. -#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)] +#[derive(Debug, Clone, PartialEq)] pub struct MiniblockHeader { pub number: MiniblockNumber, pub timestamp: u64, @@ -80,51 +79,14 @@ pub struct MiniblockHeader { pub l2_tx_count: u16, pub base_fee_per_gas: u64, // Min wei per gas that txs in this miniblock need to have. - pub l1_gas_price: u64, // L1 gas price assumed in the corresponding batch - pub l2_fair_gas_price: u64, // L2 gas price assumed in the corresponding batch + pub batch_fee_input: BatchFeeInput, + pub gas_per_pubdata_limit: u64, pub base_system_contracts_hashes: BaseSystemContractsHashes, pub protocol_version: Option, /// The maximal number of virtual blocks to be created in the miniblock. pub virtual_blocks: u32, } -/// Consensus-related L2 block (= miniblock) fields. -#[derive(Debug, Clone)] -pub struct ConsensusBlockFields { - /// Hash of the previous consensus block. - pub parent: validator::BlockHeaderHash, - /// Quorum certificate for the block. - pub justification: validator::CommitQC, -} - -impl ProtoFmt for ConsensusBlockFields { - type Proto = crate::proto::ConsensusBlockFields; - fn read(r: &Self::Proto) -> anyhow::Result { - Ok(Self { - parent: read_required(&r.parent).context("parent")?, - justification: read_required(&r.justification).context("justification")?, - }) - } - fn build(&self) -> Self::Proto { - Self::Proto { - parent: Some(self.parent.build()), - justification: Some(self.justification.build()), - } - } -} - -impl Serialize for ConsensusBlockFields { - fn serialize(&self, s: S) -> Result { - zksync_protobuf::serde::serialize(self, s) - } -} - -impl<'de> Deserialize<'de> for ConsensusBlockFields { - fn deserialize>(d: D) -> Result { - zksync_protobuf::serde::deserialize(d) - } -} - /// Data needed to execute a miniblock in the VM. 
#[derive(Debug)] pub struct MiniblockExecutionData { @@ -161,6 +123,7 @@ impl L1BatchHeader { base_system_contracts_hashes, system_logs: vec![], protocol_version: Some(protocol_version), + pubdata_input: Some(vec![]), } } @@ -228,30 +191,65 @@ impl ops::AddAssign for BlockGasCount { } } -/// Returns the hash of the miniblock. -/// `txs_rolling_hash` of the miniblock is calculated the following way: -/// If the miniblock has 0 transactions, then `txs_rolling_hash` is equal to `H256::zero()`. -/// If the miniblock has i transactions, then `txs_rolling_hash` is equal to `H(H_{i-1}, H(tx_i))`, where -/// `H_{i-1}` is the `txs_rolling_hash` of the first i-1 transactions. -pub fn miniblock_hash( - miniblock_number: MiniblockNumber, - miniblock_timestamp: u64, +/// Hasher of miniblock contents used by the VM. +#[derive(Debug)] +pub struct MiniblockHasher { + number: MiniblockNumber, + timestamp: u64, prev_miniblock_hash: H256, txs_rolling_hash: H256, -) -> H256 { - let mut digest: [u8; 128] = [0u8; 128]; - U256::from(miniblock_number.0).to_big_endian(&mut digest[0..32]); - U256::from(miniblock_timestamp).to_big_endian(&mut digest[32..64]); - digest[64..96].copy_from_slice(prev_miniblock_hash.as_bytes()); - digest[96..128].copy_from_slice(txs_rolling_hash.as_bytes()); - - H256(keccak256(&digest)) } -/// At the beginning of the zkSync, the hashes of the blocks could be calculated as the hash of their number. -/// This method returns the hash of such miniblocks. -pub fn legacy_miniblock_hash(miniblock_number: MiniblockNumber) -> H256 { - H256(keccak256(&miniblock_number.0.to_be_bytes())) +impl MiniblockHasher { + /// At the beginning of the zkSync, the hashes of the blocks could be calculated as the hash of their number. + /// This method returns the hash of such miniblocks. + pub fn legacy_hash(miniblock_number: MiniblockNumber) -> H256 { + H256(keccak256(&miniblock_number.0.to_be_bytes())) + } + + /// Creates a new hasher with the specified params. This assumes a miniblock without transactions; + /// transaction hashes can be supplied using [`Self::push_tx_hash()`]. + pub fn new(number: MiniblockNumber, timestamp: u64, prev_miniblock_hash: H256) -> Self { + Self { + number, + timestamp, + prev_miniblock_hash, + txs_rolling_hash: H256::zero(), + } + } + + /// Updates this hasher with a transaction hash. This should be called for all transactions in the block + /// in the order of their execution. + pub fn push_tx_hash(&mut self, tx_hash: H256) { + self.txs_rolling_hash = concat_and_hash(self.txs_rolling_hash, tx_hash); + } + + /// Returns the hash of the miniblock. + /// + /// For newer protocol versions, the hash is computed as + /// + /// ```text + /// keccak256(u256_be(number) ++ u256_be(timestamp) ++ prev_miniblock_hash ++ txs_rolling_hash) + /// ``` + /// + /// Here, `u256_be` is the big-endian 256-bit serialization of a number, and `txs_rolling_hash` + /// is *the rolling hash* of miniblock transactions. `txs_rolling_hash` is calculated the following way: + /// + /// - If the miniblock has 0 transactions, then `txs_rolling_hash` is equal to `H256::zero()`. + /// - If the miniblock has i transactions, then `txs_rolling_hash` is equal to `H(H_{i-1}, H(tx_i))`, where + /// `H_{i-1}` is the `txs_rolling_hash` of the first i-1 transactions. 
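    // Annotation, not part of this patch: unrolled for two transactions with
    // hashes `h1` and `h2`, the recurrence above gives
    //     txs_rolling_hash = H(H(H256::zero(), h1), h2)
    // where `H` is the `concat_and_hash` helper imported above.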
+ pub fn finalize(self, protocol_version: ProtocolVersionId) -> H256 { + if protocol_version >= ProtocolVersionId::Version13 { + let mut digest = [0_u8; 128]; + U256::from(self.number.0).to_big_endian(&mut digest[0..32]); + U256::from(self.timestamp).to_big_endian(&mut digest[32..64]); + digest[64..96].copy_from_slice(self.prev_miniblock_hash.as_bytes()); + digest[96..128].copy_from_slice(self.txs_rolling_hash.as_bytes()); + H256(keccak256(&digest)) + } else { + Self::legacy_hash(self.number) + } + } } /// Returns block.number/timestamp based on the block's information @@ -267,19 +265,9 @@ pub fn pack_block_info(block_number: u64, block_timestamp: u64) -> U256 { + U256::from(block_timestamp) } -/// Returns virtual_block_start_batch and virtual_block_finish_l2_block based on the virtual block upgrade information -pub fn unpack_block_upgrade_info(info: U256) -> (u64, u64) { - // its safe to use SYSTEM_BLOCK_INFO_BLOCK_NUMBER_MULTIPLIER here, since VirtualBlockUpgradeInfo and BlockInfo are packed same way - let virtual_block_start_batch = (info / SYSTEM_BLOCK_INFO_BLOCK_NUMBER_MULTIPLIER).as_u64(); - let virtual_block_finish_l2_block = (info % SYSTEM_BLOCK_INFO_BLOCK_NUMBER_MULTIPLIER).as_u64(); - (virtual_block_start_batch, virtual_block_finish_l2_block) -} - #[cfg(test)] mod tests { - use zksync_basic_types::{MiniblockNumber, H256}; - - use crate::block::{legacy_miniblock_hash, miniblock_hash, pack_block_info, unpack_block_info}; + use super::*; #[test] fn test_legacy_miniblock_hashes() { @@ -288,7 +276,7 @@ mod tests { .parse() .unwrap(); assert_eq!( - legacy_miniblock_hash(MiniblockNumber(11470850)), + MiniblockHasher::legacy_hash(MiniblockNumber(11470850)), expected_hash ) } @@ -309,12 +297,13 @@ mod tests { .unwrap(); assert_eq!( expected_hash, - miniblock_hash( - MiniblockNumber(1), - 12, + MiniblockHasher { + number: MiniblockNumber(1), + timestamp: 12, prev_miniblock_hash, - txs_rolling_hash - ) + txs_rolling_hash, + } + .finalize(ProtocolVersionId::latest()) ) } diff --git a/core/lib/types/src/circuit.rs b/core/lib/types/src/circuit.rs index 940b4ecf273..05f269c451e 100644 --- a/core/lib/types/src/circuit.rs +++ b/core/lib/types/src/circuit.rs @@ -1,5 +1,4 @@ -use zkevm_test_harness::geometry_config::get_geometry_config; -use zkevm_test_harness::toolset::GeometryConfig; +use zkevm_test_harness::{geometry_config::get_geometry_config, toolset::GeometryConfig}; pub const LEAF_SPLITTING_FACTOR: usize = 50; pub const NODE_SPLITTING_FACTOR: usize = 48; diff --git a/core/lib/types/src/commitment.rs b/core/lib/types/src/commitment.rs index 29750a5c77b..8a59bd4758f 100644 --- a/core/lib/types/src/commitment.rs +++ b/core/lib/types/src/commitment.rs @@ -6,15 +6,14 @@ //! required for the rollup to execute L1 batches, it's needed for the proof generation and the Ethereum //! transactions, thus the calculations are done separately and asynchronously. 
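// A minimal usage sketch for the new `MiniblockHasher` API introduced in
// `block.rs` above (annotation, not part of this patch). The miniblock number,
// timestamp, and transaction hashes are made-up placeholder values.
use zksync_types::{block::MiniblockHasher, MiniblockNumber, ProtocolVersionId, H256};

fn sample_miniblock_hash(prev_miniblock_hash: H256, tx_hashes: &[H256]) -> H256 {
    let mut hasher = MiniblockHasher::new(MiniblockNumber(42), 1_700_000_000, prev_miniblock_hash);
    for &tx_hash in tx_hashes {
        // Transactions must be fed in execution order; the rolling hash is order-sensitive.
        hasher.push_tx_hash(tx_hash);
    }
    // `finalize` picks the hashing scheme by protocol version; versions before
    // `Version13` fall back to the legacy number-only hash.
    hasher.finalize(ProtocolVersionId::latest())
}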
-use serde::{Deserialize, Serialize}; -use zksync_utils::u256_to_h256; - use std::{collections::HashMap, convert::TryFrom}; +use serde::{Deserialize, Serialize}; use zksync_mini_merkle_tree::MiniMerkleTree; use zksync_system_constants::{ L2_TO_L1_LOGS_TREE_ROOT_KEY, STATE_DIFF_HASH_KEY, ZKPORTER_IS_AVAILABLE, }; +use zksync_utils::u256_to_h256; use crate::{ block::L1BatchHeader, @@ -25,7 +24,7 @@ use crate::{ compress_state_diffs, InitialStorageWrite, RepeatedStorageWrite, StateDiffRecord, PADDED_ENCODED_STORAGE_DIFF_LEN_BYTES, }, - H256, KNOWN_CODES_STORAGE_ADDRESS, U256, + ProtocolVersionId, H256, KNOWN_CODES_STORAGE_ADDRESS, U256, }; /// Type that can be serialized for commitment. @@ -132,24 +131,34 @@ impl L1BatchWithMetadata { }) } + /// Encodes L1Batch into `StorageBatchInfo` (see `IExecutor.sol`) pub fn l1_header_data(&self) -> Token { Token::Tuple(vec![ + // `batchNumber` Token::Uint(U256::from(self.header.number.0)), + // `batchHash` Token::FixedBytes(self.metadata.root_hash.as_bytes().to_vec()), + // `indexRepeatedStorageChanges` Token::Uint(U256::from(self.metadata.rollup_last_leaf_index)), + // `numberOfLayer1Txs` Token::Uint(U256::from(self.header.l1_tx_count)), + // `priorityOperationsHash` Token::FixedBytes( self.header .priority_ops_onchain_data_hash() .as_bytes() .to_vec(), ), + // `l2LogsTreeRoot` Token::FixedBytes(self.metadata.l2_l1_merkle_root.as_bytes().to_vec()), + // timestamp Token::Uint(U256::from(self.header.timestamp)), + // commitment Token::FixedBytes(self.metadata.commitment.as_bytes().to_vec()), ]) } + /// Encodes the L1Batch into CommitBatchInfo (see IExecutor.sol). pub fn l1_commit_data(&self) -> Token { if self.header.protocol_version.unwrap().is_pre_boojum() { Token::Tuple(vec![ @@ -184,17 +193,24 @@ impl L1BatchWithMetadata { ]) } else { Token::Tuple(vec![ + // `batchNumber` Token::Uint(U256::from(self.header.number.0)), + // `timestamp` Token::Uint(U256::from(self.header.timestamp)), + // `indexRepeatedStorageChanges` Token::Uint(U256::from(self.metadata.rollup_last_leaf_index)), + // `newStateRoot` Token::FixedBytes(self.metadata.merkle_root_hash.as_bytes().to_vec()), + // `numberOfLayer1Txs` Token::Uint(U256::from(self.header.l1_tx_count)), + // `priorityOperationsHash` Token::FixedBytes( self.header .priority_ops_onchain_data_hash() .as_bytes() .to_vec(), ), + // `bootloaderHeapInitialContentsHash` Token::FixedBytes( self.metadata .bootloader_initial_content_commitment @@ -202,6 +218,7 @@ impl L1BatchWithMetadata { .as_bytes() .to_vec(), ), + // `eventsQueueStateHash` Token::FixedBytes( self.metadata .events_queue_commitment @@ -209,8 +226,15 @@ impl L1BatchWithMetadata { .as_bytes() .to_vec(), ), + // `systemLogs` Token::Bytes(self.metadata.l2_l1_messages_compressed.clone()), - Token::Bytes(self.construct_pubdata()), + // `totalL2ToL1Pubdata` + Token::Bytes( + self.header + .pubdata_input + .clone() + .unwrap_or(self.construct_pubdata()), + ), ]) } } @@ -231,7 +255,7 @@ impl L1BatchWithMetadata { res.extend(l2_to_l1_log.0.to_bytes()); } - // Process and Pack Msgs + // Process and Pack Messages res.extend((self.header.l2_to_l1_messages.len() as u32).to_be_bytes()); for msg in &self.header.l2_to_l1_messages { res.extend((msg.len() as u32).to_be_bytes()); @@ -323,7 +347,7 @@ struct L1BatchAuxiliaryOutput { l2_l1_logs_merkle_root: H256, // Once cut over to boojum, these fields are no longer required as their values - // are covered by state_diffs_compressed and its hash. + // are covered by `state_diffs_compressed` and its hash. 
// Task to remove: PLA-640 initial_writes_compressed: Vec, initial_writes_hash: H256, @@ -332,16 +356,12 @@ struct L1BatchAuxiliaryOutput { // The fields below are necessary for boojum. system_logs_compressed: Vec, - #[allow(dead_code)] system_logs_linear_hash: H256, - #[allow(dead_code)] state_diffs_hash: H256, state_diffs_compressed: Vec, - #[allow(dead_code)] bootloader_heap_hash: H256, - #[allow(dead_code)] events_state_queue_hash: H256, - is_pre_boojum: bool, + protocol_version: ProtocolVersionId, } impl L1BatchAuxiliaryOutput { @@ -354,7 +374,7 @@ impl L1BatchAuxiliaryOutput { state_diffs: Vec, bootloader_heap_hash: H256, events_state_queue_hash: H256, - is_pre_boojum: bool, + protocol_version: ProtocolVersionId, ) -> Self { let state_diff_hash_from_logs = system_logs.iter().find_map(|log| { if log.0.key == u256_to_h256(STATE_DIFF_HASH_KEY.into()) { @@ -378,7 +398,7 @@ impl L1BatchAuxiliaryOutput { repeated_writes_compressed, system_logs_compressed, state_diffs_packed, - ) = if is_pre_boojum { + ) = if protocol_version.is_pre_boojum() { ( pre_boojum_serialize_commitments(&l2_l1_logs), pre_boojum_serialize_commitments(&initial_writes), @@ -404,7 +424,7 @@ impl L1BatchAuxiliaryOutput { let repeated_writes_hash = H256::from(keccak256(&repeated_writes_compressed)); let state_diffs_hash = H256::from(keccak256(&(state_diffs_packed))); - let serialized_logs = if is_pre_boojum { + let serialized_logs = if protocol_version.is_pre_boojum() { &l2_l1_logs_compressed[4..] } else { &l2_l1_logs_compressed @@ -414,7 +434,7 @@ impl L1BatchAuxiliaryOutput { .chunks(UserL2ToL1Log::SERIALIZED_SIZE) .map(|chunk| <[u8; UserL2ToL1Log::SERIALIZED_SIZE]>::try_from(chunk).unwrap()); // ^ Skip first 4 bytes of the serialized logs (i.e., the number of logs). - let min_tree_size = if is_pre_boojum { + let min_tree_size = if protocol_version.is_pre_boojum() { L2ToL1Log::PRE_BOOJUM_MIN_L2_L1_LOGS_TREE_SIZE } else { L2ToL1Log::MIN_L2_L1_LOGS_TREE_SIZE @@ -453,7 +473,7 @@ impl L1BatchAuxiliaryOutput { bootloader_heap_hash, events_state_queue_hash, - is_pre_boojum, + protocol_version, } } @@ -462,16 +482,27 @@ impl L1BatchAuxiliaryOutput { const SERIALIZED_SIZE: usize = 128; let mut result = Vec::with_capacity(SERIALIZED_SIZE); - if self.is_pre_boojum { + if self.protocol_version.is_pre_boojum() { result.extend(self.l2_l1_logs_merkle_root.as_bytes()); result.extend(self.l2_l1_logs_linear_hash.as_bytes()); result.extend(self.initial_writes_hash.as_bytes()); result.extend(self.repeated_writes_hash.as_bytes()); + } else if self.protocol_version.is_1_4_0() { + result.extend(self.system_logs_linear_hash.as_bytes()); + result.extend(self.state_diffs_hash.as_bytes()); + result.extend(self.bootloader_heap_hash.as_bytes()); + result.extend(self.events_state_queue_hash.as_bytes()); } else { result.extend(self.system_logs_linear_hash.as_bytes()); result.extend(self.state_diffs_hash.as_bytes()); result.extend(self.bootloader_heap_hash.as_bytes()); result.extend(self.events_state_queue_hash.as_bytes()); + + // For now, we are using zeroes as commitments to the KZG pubdata. 
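            // Annotation, not part of this patch: the four `extend` calls below
            // append 4 * 32 zero bytes after the four real hashes above, so the
            // post-1.4.1 auxiliary output is 256 bytes instead of 128.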
+ result.extend(H256::zero().as_bytes()); + result.extend(H256::zero().as_bytes()); + result.extend(H256::zero().as_bytes()); + result.extend(H256::zero().as_bytes()); } result } @@ -566,7 +597,7 @@ impl L1BatchCommitment { state_diffs: Vec, bootloader_heap_hash: H256, events_state_queue_hash: H256, - is_pre_boojum: bool, + protocol_version: ProtocolVersionId, ) -> Self { let meta_parameters = L1BatchMetaParameters { zkporter_is_available: ZKPORTER_IS_AVAILABLE, @@ -581,7 +612,7 @@ impl L1BatchCommitment { last_leaf_index: rollup_last_leaf_index, root_hash: rollup_root_hash, }, - // Despite the fact that zk_porter is not available we have to add params about it. + // Despite the fact that `zk_porter` is not available we have to add params about it. RootState { last_leaf_index: 0, root_hash: H256::zero(), @@ -596,7 +627,7 @@ impl L1BatchCommitment { state_diffs, bootloader_heap_hash, events_state_queue_hash, - is_pre_boojum, + protocol_version, ), meta_parameters, } @@ -666,12 +697,15 @@ mod tests { use serde::{Deserialize, Serialize}; use serde_with::serde_as; - use crate::commitment::{ - L1BatchAuxiliaryOutput, L1BatchCommitment, L1BatchMetaParameters, L1BatchPassThroughData, + use crate::{ + commitment::{ + L1BatchAuxiliaryOutput, L1BatchCommitment, L1BatchMetaParameters, + L1BatchPassThroughData, + }, + l2_to_l1_log::{L2ToL1Log, UserL2ToL1Log}, + writes::{InitialStorageWrite, RepeatedStorageWrite}, + ProtocolVersionId, H256, U256, }; - use crate::l2_to_l1_log::{L2ToL1Log, UserL2ToL1Log}; - use crate::writes::{InitialStorageWrite, RepeatedStorageWrite}; - use crate::{H256, U256}; #[serde_as] #[derive(Debug, Serialize, Deserialize)] @@ -719,8 +753,6 @@ mod tests { expected_outputs: ExpectedOutput, } - // TODO(PLA-568): restore this test - #[ignore] #[test] fn commitment_test() { let zksync_home = std::env::var("ZKSYNC_HOME").unwrap_or_else(|_| ".".into()); @@ -754,7 +786,7 @@ mod tests { vec![], H256::zero(), H256::zero(), - false, + ProtocolVersionId::latest(), ); let commitment = L1BatchCommitment { diff --git a/core/lib/types/src/contract_verification_api.rs b/core/lib/types/src/contract_verification_api.rs index a7feb5116f2..02a5bef727d 100644 --- a/core/lib/types/src/contract_verification_api.rs +++ b/core/lib/types/src/contract_verification_api.rs @@ -6,9 +6,8 @@ use serde::{ Deserialize, Serialize, }; -use crate::{Address, Bytes}; - pub use crate::Execute as ExecuteData; +use crate::{Address, Bytes}; #[derive(Debug, Clone, Serialize)] #[serde(tag = "codeFormat", content = "sourceCode")] diff --git a/core/lib/types/src/eth_sender.rs b/core/lib/types/src/eth_sender.rs index 847662eaeaa..7778d825208 100644 --- a/core/lib/types/src/eth_sender.rs +++ b/core/lib/types/src/eth_sender.rs @@ -1,5 +1,4 @@ -use crate::aggregated_operations::AggregatedActionType; -use crate::{Address, Nonce, H256}; +use crate::{aggregated_operations::AggregatedActionType, Address, Nonce, H256}; #[derive(Clone)] pub struct EthTx { @@ -14,7 +13,7 @@ pub struct EthTx { impl std::fmt::Debug for EthTx { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { - // Do not print raw_tx + // Do not print `raw_tx` f.debug_struct("EthTx") .field("id", &self.id) .field("nonce", &self.nonce) diff --git a/core/lib/types/src/event.rs b/core/lib/types/src/event.rs index 285567c8911..dc4bcdc6045 100644 --- a/core/lib/types/src/event.rs +++ b/core/lib/types/src/event.rs @@ -1,3 +1,10 @@ +use std::fmt::Debug; + +use once_cell::sync::Lazy; +use serde::{Deserialize, Serialize}; +use zksync_basic_types::ethabi::Token; 
+use zksync_utils::{h256_to_account_address, u256_to_bytes_be, u256_to_h256};
+
 use crate::{
     ethabi,
     l2_to_l1_log::L2ToL1Log,
@@ -5,11 +12,6 @@ use crate::{
     Address, L1BatchNumber, CONTRACT_DEPLOYER_ADDRESS, H256, KNOWN_CODES_STORAGE_ADDRESS,
     L1_MESSENGER_ADDRESS, U256,
 };
-use once_cell::sync::Lazy;
-use serde::{Deserialize, Serialize};
-use std::fmt::Debug;
-use zksync_basic_types::ethabi::Token;
-use zksync_utils::{h256_to_account_address, u256_to_bytes_be, u256_to_h256};

 #[derive(Default, Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Hash)]
 pub struct VmEvent {
@@ -202,7 +204,7 @@ fn extract_added_token_info_from_addresses(
         .collect()
 }

-// moved from RuntimeContext
+// moved from `RuntimeContext`
 // Extracts all the "long" L2->L1 messages that were submitted by the
 // L1Messenger contract
 pub fn extract_long_l2_to_l1_messages(all_generated_events: &[VmEvent]) -> Vec<Vec<u8>> {
@@ -224,8 +226,8 @@ pub fn extract_long_l2_to_l1_messages(all_generated_events: &[VmEvent]) -> Vec Vec {
@@ -348,13 +350,12 @@ mod tests {
     };
     use zksync_utils::u256_to_h256;

-    use crate::VmEvent;
-
     use super::{
         extract_bytecode_publication_requests_from_l1_messenger,
         extract_l2tol1logs_from_l1_messenger, L1MessengerBytecodePublicationRequest,
         L1MessengerL2ToL1Log,
     };
+    use crate::VmEvent;

     fn create_l2_to_l1_log_sent_value(
         tx_number: U256,
@@ -369,9 +370,9 @@ mod tests {
         value.to_big_endian(&mut val_arr);

         let tokens = vec![
-            /*l2ShardId*/ Token::Uint(U256::from(0)),
-            /*isService*/ Token::Bool(true),
-            /*txNumberInBlock*/ Token::Uint(tx_number),
+            /*`l2ShardId`*/ Token::Uint(U256::from(0)),
+            /*`isService`*/ Token::Bool(true),
+            /*`txNumberInBlock`*/ Token::Uint(tx_number),
             /*sender*/ Token::Address(sender),
             /*key*/ Token::FixedBytes(key_arr.to_vec()),
             /*value*/ Token::FixedBytes(val_arr.to_vec()),
diff --git a/core/lib/types/src/fee.rs b/core/lib/types/src/fee.rs
index 53e05fbb59a..fad4d09f528 100644
--- a/core/lib/types/src/fee.rs
+++ b/core/lib/types/src/fee.rs
@@ -24,6 +24,7 @@ pub struct TransactionExecutionMetrics {
     pub computational_gas_used: u32,
     pub total_updated_values_size: usize,
     pub pubdata_published: u32,
+    pub estimated_circuits_used: f32,
 }

 #[derive(Debug, Default, Clone, PartialEq, Eq, Hash, Serialize, Deserialize)]
diff --git a/core/lib/types/src/fee_model.rs b/core/lib/types/src/fee_model.rs
new file mode 100644
index 00000000000..8f8d43a4ab7
--- /dev/null
+++ b/core/lib/types/src/fee_model.rs
@@ -0,0 +1,227 @@
+use serde::{Deserialize, Serialize};
+use zksync_config::configs::chain::{FeeModelVersion, StateKeeperConfig};
+use zksync_system_constants::L1_GAS_PER_PUBDATA_BYTE;
+
+use crate::ProtocolVersionId;
+
+/// Fee input to be provided into the VM. It contains two options:
+/// - `L1Pegged`: the L1 gas price is provided to the VM, and the pubdata price is derived from it. Using this option is required for the
+/// versions of Era prior to the 1.4.1 integration.
+/// - `PubdataIndependent`: the L1 gas price and the pubdata price are not necessarily dependent on one another. This option is more suitable for the
+/// versions of Era after the 1.4.1 integration. It is expected that if a VM supports the `PubdataIndependent` variant, then it also supports the `L1Pegged` variant by converting it into `PubdataIndependentBatchFeeModelInput` in place.
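// Annotation, not part of this patch: a sketch of the widening conversion the
// doc comment above describes, using only constructors and accessors defined
// just below in this new file. The concrete prices are arbitrary placeholders.
use zksync_types::fee_model::BatchFeeInput;

fn widen_fee_input_example() {
    // `l1_pegged(l1_gas_price, fair_l2_gas_price)`, per the constructor below.
    let pegged = BatchFeeInput::l1_pegged(1_000_000_000, 100_000_000);
    let independent = pegged.into_pubdata_independent();
    // L1 and L2 gas prices carry over unchanged; for the pegged variant the
    // pubdata price is derived as `l1_gas_price * L1_GAS_PER_PUBDATA_BYTE`.
    assert_eq!(independent.l1_gas_price, 1_000_000_000);
    assert_eq!(independent.fair_l2_gas_price, 100_000_000);
}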
+#[derive(Debug, Clone, Copy, PartialEq, Eq)] +pub enum BatchFeeInput { + L1Pegged(L1PeggedBatchFeeModelInput), + PubdataIndependent(PubdataIndependentBatchFeeModelInput), +} + +impl BatchFeeInput { + // Sometimes for temporary usage or tests a "sensible" default, i.e. the one consisting of non-zero values is needed. + pub fn sensible_l1_pegged_default() -> Self { + Self::L1Pegged(L1PeggedBatchFeeModelInput { + l1_gas_price: 1_000_000_000, + fair_l2_gas_price: 100_000_000, + }) + } + + pub fn l1_pegged(l1_gas_price: u64, fair_l2_gas_price: u64) -> Self { + Self::L1Pegged(L1PeggedBatchFeeModelInput { + l1_gas_price, + fair_l2_gas_price, + }) + } +} + +impl Default for BatchFeeInput { + fn default() -> Self { + Self::L1Pegged(L1PeggedBatchFeeModelInput { + l1_gas_price: 0, + fair_l2_gas_price: 0, + }) + } +} + +impl BatchFeeInput { + pub fn into_l1_pegged(self) -> L1PeggedBatchFeeModelInput { + match self { + BatchFeeInput::L1Pegged(input) => input, + _ => panic!("Can not convert PubdataIndependentBatchFeeModelInput into L1PeggedBatchFeeModelInput"), + } + } + + pub fn fair_pubdata_price(&self) -> u64 { + match self { + BatchFeeInput::L1Pegged(input) => input.l1_gas_price * L1_GAS_PER_PUBDATA_BYTE as u64, + BatchFeeInput::PubdataIndependent(input) => input.fair_pubdata_price, + } + } + + pub fn fair_l2_gas_price(&self) -> u64 { + match self { + BatchFeeInput::L1Pegged(input) => input.fair_l2_gas_price, + BatchFeeInput::PubdataIndependent(input) => input.fair_l2_gas_price, + } + } + + pub fn l1_gas_price(&self) -> u64 { + match self { + BatchFeeInput::L1Pegged(input) => input.l1_gas_price, + BatchFeeInput::PubdataIndependent(input) => input.l1_gas_price, + } + } + + pub fn into_pubdata_independent(self) -> PubdataIndependentBatchFeeModelInput { + match self { + BatchFeeInput::PubdataIndependent(input) => input, + BatchFeeInput::L1Pegged(input) => PubdataIndependentBatchFeeModelInput { + fair_l2_gas_price: input.fair_l2_gas_price, + fair_pubdata_price: input.l1_gas_price * L1_GAS_PER_PUBDATA_BYTE as u64, + l1_gas_price: input.l1_gas_price, + }, + } + } + + pub fn for_protocol_version( + protocol_version: ProtocolVersionId, + fair_l2_gas_price: u64, + fair_pubdata_price: Option, + l1_gas_price: u64, + ) -> Self { + if protocol_version.is_post_1_4_1() { + Self::PubdataIndependent(PubdataIndependentBatchFeeModelInput { + fair_l2_gas_price, + fair_pubdata_price: fair_pubdata_price + .expect("Pubdata price must be provided for protocol version 1.4.1"), + l1_gas_price, + }) + } else { + Self::L1Pegged(L1PeggedBatchFeeModelInput { + fair_l2_gas_price, + l1_gas_price, + }) + } + } +} + +/// Pubdata is only published via calldata and so its price is pegged to the L1 gas price. +#[derive(Debug, Clone, Copy, PartialEq, Eq)] +pub struct L1PeggedBatchFeeModelInput { + /// Fair L2 gas price to provide + pub fair_l2_gas_price: u64, + /// The L1 gas price to provide to the VM. + pub l1_gas_price: u64, +} + +/// Pubdata price may be independent from L1 gas price. +#[derive(Debug, Clone, Copy, PartialEq, Eq)] +pub struct PubdataIndependentBatchFeeModelInput { + /// Fair L2 gas price to provide + pub fair_l2_gas_price: u64, + /// Fair pubdata price to provide. + pub fair_pubdata_price: u64, + /// The L1 gas price to provide to the VM. Even if some of the VM versions may not use this value, it is still maintained for backward compatibility. + pub l1_gas_price: u64, +} + +/// The enum which represents the version of the fee model. It is used to determine which fee model should be used for the batch. 
+/// - `V1`, the first model that was used in zkSync Era. In this fee model, the pubdata price must be pegged to the L1 gas price.
+/// Also, the fair L2 gas price is expected to only include the proving/computation price for the operator and not the costs that come from
+/// processing the batch on L1.
+/// - `V2`, the second model that was used in zkSync Era. There the pubdata price might be independent of the L1 gas price. Also,
+/// the fair L2 gas price is expected to include both the proving/computation price for the operator and the costs that come from
+/// processing the batch on L1.
+#[derive(Debug, Clone, Copy, Serialize, Deserialize)]
+pub enum FeeModelConfig {
+    V1(FeeModelConfigV1),
+    V2(FeeModelConfigV2),
+}
+
+/// Config params for the first version of the fee model. Here, the pubdata price is pegged to the L1 gas price, and
+/// neither the fair L2 gas price nor the pubdata price includes the overhead for closing the batch.
+#[derive(Debug, Clone, Copy, Serialize, Deserialize)]
+pub struct FeeModelConfigV1 {
+    /// The minimal acceptable L2 gas price, i.e. the price that should include the cost of computation/proving as well
+    /// as potentially a premium for congestion.
+    /// Unlike in `V2`, this price will be used directly as the `fair_l2_gas_price` in the bootloader.
+    pub minimal_l2_gas_price: u64,
+}
+
+#[derive(Debug, Clone, Copy, Serialize, Deserialize)]
+pub struct FeeModelConfigV2 {
+    /// The minimal acceptable L2 gas price, i.e. the price that should include the cost of computation/proving as well
+    /// as potentially a premium for congestion.
+    pub minimal_l2_gas_price: u64,
+    /// The constant that represents the possibility that a batch can be sealed because of overuse of computation resources.
+    /// It ranges from 0 to 1. If it is 0, the compute cost will not depend on the cost for closing the batch.
+    /// If it is 1, the gas limit per batch will have to cover the entire cost of closing the batch.
+    pub compute_overhead_part: f64,
+    /// The constant that represents the possibility that a batch can be sealed because of overuse of pubdata.
+    /// It ranges from 0 to 1. If it is 0, the pubdata cost will not depend on the cost for closing the batch.
+    /// If it is 1, the pubdata limit per batch will have to cover the entire cost of closing the batch.
+    pub pubdata_overhead_part: f64,
+    /// The constant amount of L1 gas that is used as the overhead for the batch. It includes the price for batch verification, etc.
+    pub batch_overhead_l1_gas: u64,
+    /// The maximum amount of gas that can be used by the batch. This value is derived from the circuit limitations per batch.
+    pub max_gas_per_batch: u64,
+    /// The maximum amount of pubdata that can be used by the batch. Note that if calldata is used as pubdata, this variable should not exceed 128 kB.
+    pub max_pubdata_per_batch: u64,
+}
+
+impl Default for FeeModelConfig {
+    /// Config with all zeroes is not a valid config (since, for instance, having 0 max gas per batch may incur division by zero),
+    /// so we implement a sensible default config here.
+ fn default() -> Self { + Self::V1(FeeModelConfigV1 { + minimal_l2_gas_price: 100_000_000, + }) + } +} + +impl FeeModelConfig { + pub fn from_state_keeper_config(state_keeper_config: &StateKeeperConfig) -> Self { + match state_keeper_config.fee_model_version { + FeeModelVersion::V1 => Self::V1(FeeModelConfigV1 { + minimal_l2_gas_price: state_keeper_config.minimal_l2_gas_price, + }), + FeeModelVersion::V2 => Self::V2(FeeModelConfigV2 { + minimal_l2_gas_price: state_keeper_config.minimal_l2_gas_price, + compute_overhead_part: state_keeper_config.compute_overhead_part, + pubdata_overhead_part: state_keeper_config.pubdata_overhead_part, + batch_overhead_l1_gas: state_keeper_config.batch_overhead_l1_gas, + max_gas_per_batch: state_keeper_config.max_gas_per_batch, + max_pubdata_per_batch: state_keeper_config.max_pubdata_per_batch, + }), + } + } +} + +#[derive(Debug, Clone, Copy, Serialize, Deserialize)] +pub struct FeeParamsV1 { + pub config: FeeModelConfigV1, + pub l1_gas_price: u64, +} + +#[derive(Debug, Clone, Copy, Serialize, Deserialize)] +pub struct FeeParamsV2 { + pub config: FeeModelConfigV2, + pub l1_gas_price: u64, + pub l1_pubdata_price: u64, +} + +#[derive(Debug, Clone, Copy, Serialize, Deserialize)] +pub enum FeeParams { + V1(FeeParamsV1), + V2(FeeParamsV2), +} + +impl FeeParams { + // Sometimes for temporary usage or tests a "sensible" default, i.e. the one consisting of non-zero values is needed. + pub fn sensible_v1_default() -> Self { + Self::V1(FeeParamsV1 { + config: FeeModelConfigV1 { + minimal_l2_gas_price: 100_000_000, + }, + l1_gas_price: 1_000_000_000, + }) + } +} diff --git a/core/lib/types/src/l1/mod.rs b/core/lib/types/src/l1/mod.rs index 75d7f71a883..a37f535cfd1 100644 --- a/core/lib/types/src/l1/mod.rs +++ b/core/lib/types/src/l1/mod.rs @@ -1,14 +1,15 @@ //! Definition of zkSync network priority operations: operations initiated from the L1. -use serde::{Deserialize, Serialize}; use std::convert::TryFrom; +use serde::{Deserialize, Serialize}; use zksync_basic_types::{ ethabi::{decode, ParamType, Token}, Address, L1BlockNumber, Log, PriorityOpId, H160, H256, U256, }; use zksync_utils::u256_to_account_address; +use super::Transaction; use crate::{ helpers::unix_timestamp_ms, l1::error::L1TxParseError, @@ -18,8 +19,6 @@ use crate::{ ExecuteTransactionCommon, PRIORITY_OPERATION_L2_TX_TYPE, PROTOCOL_UPGRADE_TX_TYPE, }; -use super::Transaction; - pub mod error; #[derive(Debug, PartialEq, Serialize, Deserialize, Clone, Copy)] @@ -200,11 +199,11 @@ impl TryFrom for L1Tx { fn try_from(event: Log) -> Result { // TODO: refactor according to tx type let transaction_param_type = ParamType::Tuple(vec![ - ParamType::Uint(8), // txType + ParamType::Uint(8), // `txType` ParamType::Address, // sender ParamType::Address, // to ParamType::Uint(256), // gasLimit - ParamType::Uint(256), // gasPerPubdataLimit + ParamType::Uint(256), // `gasPerPubdataLimit` ParamType::Uint(256), // maxFeePerGas ParamType::Uint(256), // maxPriorityFeePerGas ParamType::Address, // paymaster @@ -215,7 +214,7 @@ impl TryFrom for L1Tx { ParamType::Bytes, // signature ParamType::Array(Box::new(ParamType::Uint(256))), // factory deps ParamType::Bytes, // paymaster input - ParamType::Bytes, // reservedDynamic + ParamType::Bytes, // `reservedDynamic` ]); let mut dec_ev = decode( @@ -303,7 +302,7 @@ impl TryFrom for L1Tx { let signature = transaction.remove(0).into_bytes().unwrap(); assert_eq!(signature.len(), 0); - // TODO (SMA-1621): check that reservedDynamic are constructed correctly. 
+ // TODO (SMA-1621): check that `reservedDynamic` are constructed correctly. let _factory_deps_hashes = transaction.remove(0).into_array().unwrap(); let _paymaster_input = transaction.remove(0).into_bytes().unwrap(); let _reserved_dynamic = transaction.remove(0).into_bytes().unwrap(); diff --git a/core/lib/types/src/l2/mod.rs b/core/lib/types/src/l2/mod.rs index 4c0632c5553..08c32f900be 100644 --- a/core/lib/types/src/l2/mod.rs +++ b/core/lib/types/src/l2/mod.rs @@ -2,40 +2,50 @@ use std::convert::TryFrom; use num_enum::TryFromPrimitive; use rlp::{Rlp, RlpStream}; +use serde::{Deserialize, Serialize}; use self::error::SignError; -use crate::transaction_request::PaymasterParams; -use crate::LEGACY_TX_TYPE; - use crate::{ - api, tx::primitives::PackedEthSignature, tx::Execute, web3::types::U64, Address, Bytes, - EIP712TypedStructure, Eip712Domain, ExecuteTransactionCommon, InputData, L2ChainId, Nonce, - StructBuilder, Transaction, EIP_1559_TX_TYPE, EIP_2930_TX_TYPE, EIP_712_TX_TYPE, H256, - PRIORITY_OPERATION_L2_TX_TYPE, PROTOCOL_UPGRADE_TX_TYPE, U256, + api, + api::TransactionRequest, + fee::{encoding_len, Fee}, + helpers::unix_timestamp_ms, + transaction_request::PaymasterParams, + tx::{primitives::PackedEthSignature, Execute}, + web3::types::U64, + Address, Bytes, EIP712TypedStructure, Eip712Domain, ExecuteTransactionCommon, InputData, + L2ChainId, Nonce, StructBuilder, Transaction, EIP_1559_TX_TYPE, EIP_2930_TX_TYPE, + EIP_712_TX_TYPE, H256, LEGACY_TX_TYPE, PRIORITY_OPERATION_L2_TX_TYPE, PROTOCOL_UPGRADE_TX_TYPE, + U256, }; -use serde::{Deserialize, Serialize}; - pub mod error; -use crate::api::TransactionRequest; -use crate::fee::{encoding_len, Fee}; -use crate::helpers::unix_timestamp_ms; - #[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, Eq, TryFromPrimitive)] #[repr(u32)] pub enum TransactionType { // Native ECDSA Transaction LegacyTransaction = 0, - EIP2930Transaction = 1, EIP1559Transaction = 2, - // Eip 712 transaction with additional fields specified for zksync + // EIP 712 transaction with additional fields specified for zkSync EIP712Transaction = EIP_712_TX_TYPE as u32, PriorityOpTransaction = PRIORITY_OPERATION_L2_TX_TYPE as u32, ProtocolUpgradeTransaction = PROTOCOL_UPGRADE_TX_TYPE as u32, } +impl TransactionType { + /// Returns whether a transaction type is an Ethereum transaction type. + pub fn is_ethereum_type(&self) -> bool { + matches!( + self, + TransactionType::LegacyTransaction + | TransactionType::EIP2930Transaction + | TransactionType::EIP1559Transaction + ) + } +} + #[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)] #[serde(rename_all = "camelCase")] pub struct L2TxCommonData { @@ -290,7 +300,7 @@ impl L2Tx { fn signature_to_vrs(signature: &[u8], tx_type: u32) -> (Option, Option, Option) { let signature = if tx_type == LEGACY_TX_TYPE as u32 { // Note that we use `deserialize_packed_no_v_check` here, because we want to preserve the original `v` value. - // This is needed due to inconsistent behaviour on Ethereum where the `v` value is >= 27 for legacy transactions + // This is needed due to inconsistent behavior on Ethereum where the `v` value is >= 27 for legacy transactions // and is either 0 or 1 for other ones. 
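        // Annotation, not part of this patch: e.g. a pre-EIP-155 legacy tx signed
        // with recovery id 0 carries v = 27, whereas an EIP-1559 tx carries v = 0,
        // so the raw `v` is preserved here rather than normalized.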
PackedEthSignature::deserialize_packed_no_v_check(signature) } else { @@ -463,13 +473,12 @@ impl EIP712TypedStructure for L2Tx { mod tests { use zksync_basic_types::{Nonce, U256}; + use super::{L2Tx, TransactionType}; use crate::{ api::TransactionRequest, fee::Fee, transaction_request::PaymasterParams, Execute, L2TxCommonData, }; - use super::{L2Tx, TransactionType}; - #[test] fn test_correct_l2_tx_transaction_request_conversion() { // It is a random valid signature diff --git a/core/lib/types/src/l2_to_l1_log.rs b/core/lib/types/src/l2_to_l1_log.rs index 670a2b22e81..03ac163e559 100644 --- a/core/lib/types/src/l2_to_l1_log.rs +++ b/core/lib/types/src/l2_to_l1_log.rs @@ -1,10 +1,11 @@ -use crate::commitment::SerializeCommitment; -use crate::{Address, H256}; use serde::{Deserialize, Serialize}; use zk_evm::reference_impls::event_sink::EventMessage; use zk_evm_1_4_0::reference_impls::event_sink::EventMessage as EventMessage_1_4_0; +use zk_evm_1_4_1::reference_impls::event_sink::EventMessage as EventMessage_1_4_1; use zksync_utils::u256_to_h256; +use crate::{commitment::SerializeCommitment, Address, H256}; + #[derive(Debug, Clone, PartialEq, Serialize, Deserialize, Default, Eq)] pub struct L2ToL1Log { pub shard_id: u8, @@ -92,13 +93,27 @@ impl From for L2ToL1Log { } } +impl From for L2ToL1Log { + fn from(m: EventMessage_1_4_1) -> Self { + Self { + shard_id: m.shard_id, + is_service: m.is_first, + tx_number_in_block: m.tx_number_in_block, + sender: m.address, + key: u256_to_h256(m.key), + value: u256_to_h256(m.value), + } + } +} + #[cfg(test)] mod tests { - use super::L2ToL1Log; use zksync_basic_types::U256; use zksync_system_constants::L1_MESSENGER_ADDRESS; use zksync_utils::u256_to_h256; + use super::L2ToL1Log; + #[test] fn l2_to_l1_log_to_bytes() { let expected_log_bytes = [ diff --git a/core/lib/types/src/lib.rs b/core/lib/types/src/lib.rs index 22904eb71b8..6d9017e3310 100644 --- a/core/lib/types/src/lib.rs +++ b/core/lib/types/src/lib.rs @@ -5,33 +5,32 @@ #![allow(clippy::upper_case_acronyms, clippy::derive_partial_eq_without_eq)] +use std::{fmt, fmt::Debug}; + use fee::encoding_len; use serde::{Deserialize, Serialize}; -use std::{fmt, fmt::Debug}; pub use crate::{Nonce, H256, U256, U64}; pub type SerialId = u64; -use crate::l2::TransactionType; -use crate::protocol_version::ProtocolUpgradeTxCommonData; pub use event::{VmEvent, VmEventGroupKey}; pub use l1::L1TxCommonData; pub use l2::L2TxCommonData; pub use protocol_version::{ProtocolUpgrade, ProtocolVersion, ProtocolVersionId}; pub use storage::*; -pub use tx::primitives::*; -pub use tx::Execute; +pub use tx::{primitives::*, Execute}; pub use vm_version::VmVersion; pub use zk_evm::{ aux_structures::{LogQuery, Timestamp}, reference_impls::event_sink::EventMessage, zkevm_opcode_defs::FarCallOpcode, }; - pub use zkevm_test_harness; pub use zksync_basic_types::*; +use crate::{l2::TransactionType, protocol_version::ProtocolUpgradeTxCommonData}; + pub mod aggregated_operations; pub mod block; pub mod circuit; @@ -40,11 +39,13 @@ pub mod contract_verification_api; pub mod contracts; pub mod event; pub mod fee; +pub mod fee_model; pub mod l1; pub mod l2; pub mod l2_to_l1_log; pub mod priority_op_onchain_data; pub mod protocol_version; +pub mod snapshots; pub mod storage; pub mod storage_writes_deduplicator; pub mod system_contracts; @@ -56,14 +57,13 @@ pub mod api; pub mod eth_sender; pub mod helpers; pub mod proofs; +pub mod proto; pub mod prover_server_api; pub mod transaction_request; pub mod utils; pub mod vk_transform; pub mod vm_version; 
-mod proto; - /// Denotes the first byte of the special zkSync's EIP-712-signed transaction. pub const EIP_712_TX_TYPE: u8 = 0x71; diff --git a/core/lib/types/src/priority_op_onchain_data.rs b/core/lib/types/src/priority_op_onchain_data.rs index a729aa27bf4..559bb996388 100644 --- a/core/lib/types/src/priority_op_onchain_data.rs +++ b/core/lib/types/src/priority_op_onchain_data.rs @@ -1,7 +1,7 @@ -use serde::{Deserialize, Serialize}; - use std::cmp::Ordering; +use serde::{Deserialize, Serialize}; + use crate::{ l1::{OpProcessingType, PriorityQueueType}, H256, U256, diff --git a/core/lib/types/src/proofs.rs b/core/lib/types/src/proofs.rs index 28d25900231..392369f645d 100644 --- a/core/lib/types/src/proofs.rs +++ b/core/lib/types/src/proofs.rs @@ -1,25 +1,25 @@ -use std::convert::{TryFrom, TryInto}; -use std::fmt::Debug; -use std::net::IpAddr; -use std::ops::Add; -use std::str::FromStr; +use std::{ + convert::{TryFrom, TryInto}, + fmt::Debug, + net::IpAddr, + ops::Add, + str::FromStr, +}; use chrono::{DateTime, Utc}; use serde::{Deserialize, Serialize}; use serde_with::{serde_as, Bytes}; -use zkevm_test_harness::abstract_zksync_circuit::concrete_circuits::ZkSyncCircuit; -use zkevm_test_harness::bellman::bn256::Bn256; -use zkevm_test_harness::bellman::plonk::better_better_cs::proof::Proof; -use zkevm_test_harness::encodings::{recursion_request::RecursionRequest, QueueSimulator}; -use zkevm_test_harness::witness::full_block_artifact::{ - BlockBasicCircuits, BlockBasicCircuitsPublicInputs, -}; -use zkevm_test_harness::witness::oracle::VmWitnessOracle; use zkevm_test_harness::{ + abstract_zksync_circuit::concrete_circuits::ZkSyncCircuit, + bellman::{bn256::Bn256, plonk::better_better_cs::proof::Proof}, + encodings::{recursion_request::RecursionRequest, QueueSimulator}, + witness::{ + full_block_artifact::{BlockBasicCircuits, BlockBasicCircuitsPublicInputs}, + oracle::VmWitnessOracle, + }, LeafAggregationOutputDataWitness, NodeAggregationOutputDataWitness, SchedulerCircuitInstanceWitness, }; - use zksync_basic_types::{L1BatchNumber, H256, U256}; const HASH_LEN: usize = H256::len_bytes(); @@ -36,7 +36,7 @@ pub struct StorageLogMetadata { pub merkle_paths: Vec<[u8; HASH_LEN]>, pub leaf_hashed_key: U256, pub leaf_enumeration_index: u64, - // **NB.** For compatibility reasons, `#[serde_as(as = "Bytes")]` attrs are not added below. + // **NB.** For compatibility reasons, `#[serde_as(as = "Bytes")]` attributes are not added below. pub value_written: [u8; HASH_LEN], pub value_read: [u8; HASH_LEN], } @@ -98,6 +98,17 @@ impl AggregationRound { } } +impl std::fmt::Display for AggregationRound { + fn fmt(&self, formatter: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { + formatter.write_str(match self { + Self::BasicCircuits => "basic_circuits", + Self::LeafAggregation => "leaf_aggregation", + Self::NodeAggregation => "node_aggregation", + Self::Scheduler => "scheduler", + }) + } +} + impl FromStr for AggregationRound { type Err = String; @@ -435,7 +446,7 @@ pub struct SocketAddress { pub port: u16, } -#[derive(Debug)] +#[derive(Debug, Copy, Clone)] pub enum GpuProverInstanceStatus { // The instance is available for processing. 
Available, diff --git a/core/lib/types/src/proto/mod.proto b/core/lib/types/src/proto/mod.proto index 2fc03e285d3..163215bb123 100644 --- a/core/lib/types/src/proto/mod.proto +++ b/core/lib/types/src/proto/mod.proto @@ -2,9 +2,22 @@ syntax = "proto3"; package zksync.types; -import "zksync/roles/validator.proto"; +message SnapshotStorageLogsChunk { + repeated SnapshotStorageLog storage_logs = 1; +} + +message SnapshotStorageLog { + optional bytes account_address = 1; // required; H160 + optional bytes storage_key = 2; // required; H256 + optional bytes storage_value = 3; // required; H256 + optional uint32 l1_batch_number_of_initial_write = 4; // required + optional uint64 enumeration_index = 5; // required +} + +message SnapshotFactoryDependencies { + repeated SnapshotFactoryDependency factory_deps = 1; +} -message ConsensusBlockFields { - optional roles.validator.BlockHeaderHash parent = 1; - optional roles.validator.CommitQC justification = 2; +message SnapshotFactoryDependency { + optional bytes bytecode = 1; // required } diff --git a/core/lib/types/src/proto/mod.rs b/core/lib/types/src/proto/mod.rs index 660bf4c5b4c..9f44835b29c 100644 --- a/core/lib/types/src/proto/mod.rs +++ b/core/lib/types/src/proto/mod.rs @@ -1,2 +1,3 @@ #![allow(warnings)] + include!(concat!(env!("OUT_DIR"), "/src/proto/gen.rs")); diff --git a/core/lib/types/src/protocol_version.rs b/core/lib/types/src/protocol_version.rs index 09a722c72cd..38caa0f8a20 100644 --- a/core/lib/types/src/protocol_version.rs +++ b/core/lib/types/src/protocol_version.rs @@ -1,3 +1,10 @@ +use std::convert::{TryFrom, TryInto}; + +use num_enum::TryFromPrimitive; +use serde::{Deserialize, Serialize}; +use zksync_contracts::BaseSystemContractsHashes; +use zksync_utils::u256_to_account_address; + use crate::{ ethabi::{decode, encode, ParamType, Token}, helpers::unix_timestamp_ms, @@ -8,11 +15,6 @@ use crate::{ Address, Execute, ExecuteTransactionCommon, Log, Transaction, TransactionType, VmVersion, H256, PROTOCOL_UPGRADE_TX_TYPE, U256, }; -use num_enum::TryFromPrimitive; -use serde::{Deserialize, Serialize}; -use std::convert::{TryFrom, TryInto}; -use zksync_contracts::BaseSystemContractsHashes; -use zksync_utils::u256_to_account_address; #[repr(u16)] #[derive( @@ -39,15 +41,17 @@ pub enum ProtocolVersionId { Version17, Version18, Version19, + Version20, + Version21, } impl ProtocolVersionId { pub fn latest() -> Self { - Self::Version18 + Self::Version20 } pub fn next() -> Self { - Self::Version19 + Self::Version21 } /// Returns VM version to be used by API for this protocol version. @@ -74,11 +78,27 @@ impl ProtocolVersionId { ProtocolVersionId::Version17 => VmVersion::VmVirtualBlocksRefundsEnhancement, ProtocolVersionId::Version18 => VmVersion::VmBoojumIntegration, ProtocolVersionId::Version19 => VmVersion::VmBoojumIntegration, + ProtocolVersionId::Version20 => VmVersion::Vm1_4_1, + ProtocolVersionId::Version21 => VmVersion::Vm1_4_1, } } + // It is possible that some external nodes do not store protocol versions for versions below 9. + // That's why we assume that whenever a protocol version is not present, version 9 is to be used. 
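    // Annotation, not part of this patch: a typical use is as a fallback when
    // reading an optional version from storage, e.g.
    //     version.unwrap_or_else(ProtocolVersionId::last_potentially_undefined)
    // where `version: Option<ProtocolVersionId>` is a hypothetical DB read.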
+ pub fn last_potentially_undefined() -> Self { + Self::Version9 + } + pub fn is_pre_boojum(&self) -> bool { - self < &ProtocolVersionId::Version18 + self <= &Self::Version17 + } + + pub fn is_1_4_0(&self) -> bool { + self >= &ProtocolVersionId::Version18 && self < &ProtocolVersionId::Version20 + } + + pub fn is_post_1_4_1(&self) -> bool { + self >= &ProtocolVersionId::Version20 } } @@ -237,11 +257,11 @@ impl TryFrom for ProtocolUpgrade { }; let transaction_param_type = ParamType::Tuple(vec![ - ParamType::Uint(256), // txType + ParamType::Uint(256), // `txType` ParamType::Uint(256), // sender ParamType::Uint(256), // to ParamType::Uint(256), // gasLimit - ParamType::Uint(256), // gasPerPubdataLimit + ParamType::Uint(256), // `gasPerPubdataLimit` ParamType::Uint(256), // maxFeePerGas ParamType::Uint(256), // maxPriorityFeePerGas ParamType::Uint(256), // paymaster @@ -252,7 +272,7 @@ impl TryFrom for ProtocolUpgrade { ParamType::Bytes, // signature ParamType::Array(Box::new(ParamType::Uint(256))), // factory deps ParamType::Bytes, // paymaster input - ParamType::Bytes, // reservedDynamic + ParamType::Bytes, // `reservedDynamic` ]); let verifier_params_type = ParamType::Tuple(vec![ ParamType::FixedBytes(32), @@ -347,7 +367,7 @@ impl TryFrom for ProtocolUpgrade { let paymaster_input = transaction.remove(0).into_bytes().unwrap(); assert_eq!(paymaster_input.len(), 0); - // TODO (SMA-1621): check that reservedDynamic are constructed correctly. + // TODO (SMA-1621): check that `reservedDynamic` are constructed correctly. let reserved_dynamic = transaction.remove(0).into_bytes().unwrap(); assert_eq!(reserved_dynamic.len(), 0); @@ -693,6 +713,8 @@ impl From for VmVersion { ProtocolVersionId::Version17 => VmVersion::VmVirtualBlocksRefundsEnhancement, ProtocolVersionId::Version18 => VmVersion::VmBoojumIntegration, ProtocolVersionId::Version19 => VmVersion::VmBoojumIntegration, + ProtocolVersionId::Version20 => VmVersion::Vm1_4_1, + ProtocolVersionId::Version21 => VmVersion::Vm1_4_1, } } } diff --git a/core/lib/types/src/prover_server_api/mod.rs b/core/lib/types/src/prover_server_api/mod.rs index 84262b182c6..fdbbd57624f 100644 --- a/core/lib/types/src/prover_server_api/mod.rs +++ b/core/lib/types/src/prover_server_api/mod.rs @@ -1,10 +1,11 @@ use serde::{Deserialize, Serialize}; - use zksync_basic_types::L1BatchNumber; -use crate::aggregated_operations::L1BatchProofForL1; -use crate::proofs::PrepareBasicCircuitsJob; -use crate::protocol_version::{FriProtocolVersionId, L1VerifierConfig}; +use crate::{ + aggregated_operations::L1BatchProofForL1, + proofs::PrepareBasicCircuitsJob, + protocol_version::{FriProtocolVersionId, L1VerifierConfig}, +}; #[derive(Debug, Serialize, Deserialize)] pub struct ProofGenerationData { @@ -19,7 +20,7 @@ pub struct ProofGenerationDataRequest {} #[derive(Debug, Serialize, Deserialize)] pub enum ProofGenerationDataResponse { - Success(ProofGenerationData), + Success(Option), Error(String), } diff --git a/core/lib/types/src/snapshots.rs b/core/lib/types/src/snapshots.rs new file mode 100644 index 00000000000..19f818bb5d1 --- /dev/null +++ b/core/lib/types/src/snapshots.rs @@ -0,0 +1,198 @@ +use std::convert::TryFrom; + +use anyhow::Context; +use serde::{Deserialize, Serialize}; +use zksync_basic_types::{AccountTreeId, L1BatchNumber, MiniblockNumber, H256}; +use zksync_protobuf::{required, ProtoFmt}; + +use crate::{commitment::L1BatchWithMetadata, Bytes, StorageKey, StorageValue}; + +/// Information about all snapshots persisted by the node. 
+#[derive(Debug, Clone, Serialize, Deserialize)]
+#[serde(rename_all = "camelCase")]
+pub struct AllSnapshots {
+    /// L1 batch numbers for complete snapshots. Ordered by descending number (i.e., the 0th element
+    /// corresponds to the newest snapshot).
+    pub snapshots_l1_batch_numbers: Vec<L1BatchNumber>,
+}
+
+/// Storage snapshot metadata. Used in DAL to fetch certain snapshot data.
+#[derive(Debug, Clone)]
+pub struct SnapshotMetadata {
+    /// L1 batch for the snapshot. The data in the snapshot captures node storage at the end of this batch.
+    pub l1_batch_number: L1BatchNumber,
+    /// Path to the factory dependencies blob.
+    pub factory_deps_filepath: String,
+    /// Paths to the storage log blobs. Ordered by the chunk ID. If a certain chunk is not produced yet,
+    /// the corresponding path is `None`.
+    pub storage_logs_filepaths: Vec<Option<String>>,
+}
+
+impl SnapshotMetadata {
+    /// Checks whether a snapshot is complete (contains all information to restore from).
+    pub fn is_complete(&self) -> bool {
+        self.storage_logs_filepaths.iter().all(Option::is_some)
+    }
+}
+
+/// Snapshot data returned by the JSON-RPC API.
+/// Contains all the data not contained in the `factory_deps` / `storage_logs` files that is needed to perform the restore process.
+#[derive(Debug, Clone, Serialize, Deserialize)]
+#[serde(rename_all = "camelCase")]
+pub struct SnapshotHeader {
+    pub l1_batch_number: L1BatchNumber,
+    pub miniblock_number: MiniblockNumber,
+    /// Ordered by chunk IDs.
+    pub storage_logs_chunks: Vec<SnapshotStorageLogsChunkMetadata>,
+    pub factory_deps_filepath: String,
+    pub last_l1_batch_with_metadata: L1BatchWithMetadata,
+}
+
+#[derive(Debug, Clone, Serialize, Deserialize)]
+#[serde(rename_all = "camelCase")]
+pub struct SnapshotStorageLogsChunkMetadata {
+    pub chunk_id: u64,
+    // can either be a file available over HTTP(S) or a local filesystem path
+    pub filepath: String,
+}
+
+#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
+#[serde(rename_all = "camelCase")]
+pub struct SnapshotStorageLogsStorageKey {
+    pub l1_batch_number: L1BatchNumber,
+    pub chunk_id: u64,
+}
+
+#[derive(Debug, Clone, PartialEq)]
+pub struct SnapshotStorageLogsChunk {
+    pub storage_logs: Vec<SnapshotStorageLog>,
+}
+
+#[derive(Debug, Clone, PartialEq, Eq, Hash)]
+pub struct SnapshotStorageLog {
+    pub key: StorageKey,
+    pub value: StorageValue,
+    pub l1_batch_number_of_initial_write: L1BatchNumber,
+    pub enumeration_index: u64,
+}
+
+#[derive(Debug, PartialEq)]
+pub struct SnapshotFactoryDependencies {
+    pub factory_deps: Vec<SnapshotFactoryDependency>,
+}
+
+#[derive(Debug, Clone, PartialEq, Eq, Hash)]
+pub struct SnapshotFactoryDependency {
+    pub bytecode: Bytes,
+}
+
+impl ProtoFmt for SnapshotFactoryDependency {
+    type Proto = crate::proto::SnapshotFactoryDependency;
+
+    fn read(r: &Self::Proto) -> anyhow::Result<Self> {
+        Ok(Self {
+            bytecode: Bytes(required(&r.bytecode).context("bytecode")?.clone()),
+        })
+    }
+    fn build(&self) -> Self::Proto {
+        Self::Proto {
+            bytecode: Some(self.bytecode.0.as_slice().into()),
+        }
+    }
+}
+
+impl ProtoFmt for SnapshotFactoryDependencies {
+    type Proto = crate::proto::SnapshotFactoryDependencies;
+
+    fn read(r: &Self::Proto) -> anyhow::Result<Self> {
+        let mut factory_deps = Vec::with_capacity(r.factory_deps.len());
+        for (i, factory_dep) in r.factory_deps.iter().enumerate() {
+            factory_deps.push(
+                SnapshotFactoryDependency::read(factory_dep)
+                    .with_context(|| format!("factory_deps[{i}]"))?,
+            )
+        }
+        Ok(Self { factory_deps })
+    }
+    fn build(&self) -> Self::Proto {
+        Self::Proto {
+            factory_deps: self
+                .factory_deps
+                .iter()
+                .map(SnapshotFactoryDependency::build)
+                .collect(),
+        }
+    }
+}
+
+impl
+    type Proto = crate::proto::SnapshotStorageLog;
+
+    fn read(r: &Self::Proto) -> anyhow::Result<Self> {
+        Ok(Self {
+            key: StorageKey::new(
+                AccountTreeId::new(
+                    required(&r.account_address)
+                        .and_then(|bytes| Ok(<[u8; 20]>::try_from(bytes.as_slice())?.into()))
+                        .context("account_address")?,
+                ),
+                required(&r.storage_key)
+                    .and_then(|bytes| Ok(<[u8; 32]>::try_from(bytes.as_slice())?.into()))
+                    .context("storage_key")?,
+            ),
+            value: required(&r.storage_value)
+                .and_then(|bytes| Ok(<[u8; 32]>::try_from(bytes.as_slice())?.into()))
+                .context("storage_value")?,
+            l1_batch_number_of_initial_write: L1BatchNumber(
+                *required(&r.l1_batch_number_of_initial_write)
+                    .context("l1_batch_number_of_initial_write")?,
+            ),
+            enumeration_index: *required(&r.enumeration_index).context("enumeration_index")?,
+        })
+    }
+
+    fn build(&self) -> Self::Proto {
+        Self::Proto {
+            account_address: Some(self.key.address().as_bytes().into()),
+            storage_key: Some(self.key.key().as_bytes().into()),
+            storage_value: Some(self.value.as_bytes().into()),
+            l1_batch_number_of_initial_write: Some(self.l1_batch_number_of_initial_write.0),
+            enumeration_index: Some(self.enumeration_index),
+        }
+    }
+}
+
+impl ProtoFmt for SnapshotStorageLogsChunk {
+    type Proto = crate::proto::SnapshotStorageLogsChunk;
+
+    fn read(r: &Self::Proto) -> anyhow::Result<Self> {
+        let mut storage_logs = Vec::with_capacity(r.storage_logs.len());
+        for (i, storage_log) in r.storage_logs.iter().enumerate() {
+            storage_logs.push(
+                SnapshotStorageLog::read(storage_log)
+                    .with_context(|| format!("storage_log[{i}]"))?,
+            )
+        }
+        Ok(Self { storage_logs })
+    }
+
+    fn build(&self) -> Self::Proto {
+        Self::Proto {
+            storage_logs: self
+                .storage_logs
+                .iter()
+                .map(SnapshotStorageLog::build)
+                .collect(),
+        }
+    }
+}
+
+#[derive(Debug, PartialEq)]
+pub struct SnapshotRecoveryStatus {
+    pub l1_batch_number: L1BatchNumber,
+    pub l1_batch_root_hash: H256,
+    pub miniblock_number: MiniblockNumber,
+    pub miniblock_root_hash: H256,
+    pub last_finished_chunk_id: Option<u64>,
+    pub total_chunk_count: u64,
+}
diff --git a/core/lib/types/src/storage/log.rs b/core/lib/types/src/storage/log.rs
index aa295a2bade..a64bbb50220 100644
--- a/core/lib/types/src/storage/log.rs
+++ b/core/lib/types/src/storage/log.rs
@@ -1,14 +1,13 @@
-use serde::{Deserialize, Serialize};
-
 use std::mem;
 
+use serde::{Deserialize, Serialize};
 use zk_evm::aux_structures::{LogQuery, Timestamp};
 use zksync_basic_types::AccountTreeId;
 use zksync_utils::u256_to_h256;
 
 use crate::{StorageKey, StorageValue, U256};
 
-// TODO (SMA-1269): Refactor StorageLog/StorageLogQuery and StorageLogKind/StorageLongQueryType.
+// TODO (SMA-1269): Refactor `StorageLog/StorageLogQuery` and `StorageLogKind/StorageLogQueryType`.
#[derive(Debug, Clone, Copy, PartialEq, Serialize, Deserialize)] pub enum StorageLogKind { Read, diff --git a/core/lib/types/src/storage/mod.rs b/core/lib/types/src/storage/mod.rs index 46b98575f12..54694f63c50 100644 --- a/core/lib/types/src/storage/mod.rs +++ b/core/lib/types/src/storage/mod.rs @@ -67,7 +67,7 @@ fn get_address_mapping_key(address: &Address, position: H256) -> H256 { pub fn get_nonce_key(account: &Address) -> StorageKey { let nonce_manager = AccountTreeId::new(NONCE_HOLDER_ADDRESS); - // The `minNonce` (used as nonce for EOAs) is stored in a mapping inside the NONCE_HOLDER system contract + // The `minNonce` (used as nonce for EOAs) is stored in a mapping inside the `NONCE_HOLDER` system contract let key = get_address_mapping_key(account, H256::zero()); StorageKey::new(nonce_manager, key) diff --git a/core/lib/types/src/storage/witness_block_state.rs b/core/lib/types/src/storage/witness_block_state.rs index 2ba57a9aea0..63ee1ba1c56 100644 --- a/core/lib/types/src/storage/witness_block_state.rs +++ b/core/lib/types/src/storage/witness_block_state.rs @@ -1,7 +1,9 @@ -use crate::{StorageKey, StorageValue}; -use serde::{Deserialize, Serialize}; use std::collections::HashMap; +use serde::{Deserialize, Serialize}; + +use crate::{StorageKey, StorageValue}; + /// Storage data used during Witness Generation. #[derive(Debug, Default, Serialize, Deserialize)] pub struct WitnessBlockState { diff --git a/core/lib/types/src/storage/writes/compression.rs b/core/lib/types/src/storage/writes/compression.rs index a325801b8a8..cd0a174fa76 100644 --- a/core/lib/types/src/storage/writes/compression.rs +++ b/core/lib/types/src/storage/writes/compression.rs @@ -210,9 +210,10 @@ pub fn compress_with_best_strategy(prev_value: U256, new_value: U256) -> Vec #[cfg(test)] mod tests { - use super::*; use std::ops::{Add, BitAnd, Shr, Sub}; + use super::*; + #[test] fn test_compress_addition() { let initial_val = U256::from(255438218); diff --git a/core/lib/types/src/storage/writes/mod.rs b/core/lib/types/src/storage/writes/mod.rs index 6a17afb7d15..22400964bf4 100644 --- a/core/lib/types/src/storage/writes/mod.rs +++ b/core/lib/types/src/storage/writes/mod.rs @@ -1,10 +1,10 @@ use std::convert::TryInto; -use crate::H256; use serde::{Deserialize, Serialize}; use zksync_basic_types::{Address, U256}; pub(crate) use self::compression::{compress_with_best_strategy, COMPRESSION_VERSION_NUMBER}; +use crate::H256; pub mod compression; @@ -41,7 +41,7 @@ pub struct RepeatedStorageWrite { #[derive(Clone, Debug, Deserialize, Serialize, Default, Eq, PartialEq)] pub struct StateDiffRecord { - /// address state diff occured at + /// address state diff occurred at pub address: Address, /// storage slot key updated pub key: U256, @@ -115,7 +115,7 @@ impl StateDiffRecord { } } - /// compression follows the following algo: + /// compression follows the following algorithm: /// 1. if repeated write: /// entry <- enumeration_index || compressed value /// 2. 
if initial write: @@ -184,12 +184,13 @@ fn prepend_header(compressed_state_diffs: Vec) -> Vec { #[cfg(test)] mod tests { - use std::ops::{Add, Sub}; - use std::str::FromStr; + use std::{ + ops::{Add, Sub}, + str::FromStr, + }; use super::*; - use crate::commitment::serialize_commitments; - use crate::{H256, U256}; + use crate::{commitment::serialize_commitments, H256, U256}; #[test] fn calculate_hash_for_storage_writes() { diff --git a/core/lib/types/src/storage_writes_deduplicator.rs b/core/lib/types/src/storage_writes_deduplicator.rs index 42ce67e6375..14a5413ee6a 100644 --- a/core/lib/types/src/storage_writes_deduplicator.rs +++ b/core/lib/types/src/storage_writes_deduplicator.rs @@ -2,9 +2,11 @@ use std::collections::HashMap; use zksync_utils::u256_to_h256; -use crate::tx::tx_execution_info::DeduplicatedWritesMetrics; -use crate::writes::compression::compress_with_best_strategy; -use crate::{AccountTreeId, StorageKey, StorageLogQuery, StorageLogQueryType, U256}; +use crate::{ + tx::tx_execution_info::DeduplicatedWritesMetrics, + writes::compression::compress_with_best_strategy, AccountTreeId, StorageKey, StorageLogQuery, + StorageLogQueryType, U256, +}; #[derive(Debug, Clone, Copy, PartialEq, Default)] pub struct ModifiedSlot { @@ -219,9 +221,8 @@ impl StorageWritesDeduplicator { mod tests { use zk_evm::aux_structures::{LogQuery, Timestamp}; - use crate::H160; - use super::*; + use crate::H160; fn storage_log_query( key: U256, diff --git a/core/lib/types/src/system_contracts.rs b/core/lib/types/src/system_contracts.rs index 430d8d4701d..464a562b927 100644 --- a/core/lib/types/src/system_contracts.rs +++ b/core/lib/types/src/system_contracts.rs @@ -1,5 +1,6 @@ use std::path::PathBuf; +use once_cell::sync::Lazy; use zksync_basic_types::{AccountTreeId, Address, U256}; use zksync_contracts::{read_sys_contract_bytecode, ContractLanguage, SystemContractsRepo}; use zksync_system_constants::{ @@ -9,21 +10,21 @@ use zksync_system_constants::{ use crate::{ block::DeployedContract, ACCOUNT_CODE_STORAGE_ADDRESS, BOOTLOADER_ADDRESS, COMPLEX_UPGRADER_ADDRESS, CONTRACT_DEPLOYER_ADDRESS, ECRECOVER_PRECOMPILE_ADDRESS, - IMMUTABLE_SIMULATOR_STORAGE_ADDRESS, KECCAK256_PRECOMPILE_ADDRESS, KNOWN_CODES_STORAGE_ADDRESS, - L1_MESSENGER_ADDRESS, L2_ETH_TOKEN_ADDRESS, MSG_VALUE_SIMULATOR_ADDRESS, NONCE_HOLDER_ADDRESS, + EC_ADD_PRECOMPILE_ADDRESS, EC_MUL_PRECOMPILE_ADDRESS, IMMUTABLE_SIMULATOR_STORAGE_ADDRESS, + KECCAK256_PRECOMPILE_ADDRESS, KNOWN_CODES_STORAGE_ADDRESS, L1_MESSENGER_ADDRESS, + L2_ETH_TOKEN_ADDRESS, MSG_VALUE_SIMULATOR_ADDRESS, NONCE_HOLDER_ADDRESS, SHA256_PRECOMPILE_ADDRESS, SYSTEM_CONTEXT_ADDRESS, }; -use once_cell::sync::Lazy; -// Note, that in the NONCE_HOLDER_ADDRESS's storage the nonces of accounts +// Note, that in the `NONCE_HOLDER_ADDRESS` storage the nonces of accounts // are stored in the following form: -// 2^128 * deployment_nonce + tx_nonce, +// `2^128 * deployment_nonce + tx_nonce`, // where `tx_nonce` should be number of transactions, the account has processed // and the `deployment_nonce` should be the number of contracts. 
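// Illustration, not part of the patch: a minimal sketch of how the composite
// nonce layout described in the comment above can be packed and unpacked. The
// production helpers are `nonces_to_full_nonce` / `decompose_full_nonce` in
// `core/lib/types/src/utils.rs`; this standalone version only assumes the
// `U256` type from `zksync_basic_types`.

use zksync_basic_types::U256;

// full_nonce = 2^128 * deployment_nonce + tx_nonce
fn pack_full_nonce(tx_nonce: U256, deployment_nonce: U256) -> U256 {
    (deployment_nonce << 128) + tx_nonce
}

fn unpack_full_nonce(full_nonce: U256) -> (U256, U256) {
    let tx_nonce = full_nonce % (U256::one() << 128); // low 128 bits
    let deployment_nonce = full_nonce >> 128; // high 128 bits
    (tx_nonce, deployment_nonce)
}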
pub const TX_NONCE_INCREMENT: U256 = U256([1, 0, 0, 0]); // 1 pub const DEPLOYMENT_NONCE_INCREMENT: U256 = U256([0, 0, 1, 0]); // 2^128 -static SYSTEM_CONTRACT_LIST: [(&str, &str, Address, ContractLanguage); 18] = [ +static SYSTEM_CONTRACT_LIST: [(&str, &str, Address, ContractLanguage); 20] = [ ( "", "AccountCodeStorage", @@ -90,6 +91,18 @@ static SYSTEM_CONTRACT_LIST: [(&str, &str, Address, ContractLanguage); 18] = [ ECRECOVER_PRECOMPILE_ADDRESS, ContractLanguage::Yul, ), + ( + "precompiles/", + "EcAdd", + EC_ADD_PRECOMPILE_ADDRESS, + ContractLanguage::Yul, + ), + ( + "precompiles/", + "EcMul", + EC_MUL_PRECOMPILE_ADDRESS, + ContractLanguage::Yul, + ), ( "", "SystemContext", diff --git a/core/lib/types/src/transaction_request.rs b/core/lib/types/src/transaction_request.rs index 3c450e77c89..7fda18d70a4 100644 --- a/core/lib/types/src/transaction_request.rs +++ b/core/lib/types/src/transaction_request.rs @@ -1,17 +1,15 @@ -// Built-in uses use std::convert::{TryFrom, TryInto}; -// External uses use rlp::{DecoderError, Rlp, RlpStream}; use serde::{Deserialize, Serialize}; use thiserror::Error; use zksync_basic_types::H256; +use zksync_system_constants::{DEFAULT_L2_TX_GAS_PER_PUBDATA_BYTE, MAX_ENCODED_TX_SIZE}; +use zksync_utils::{ + bytecode::{hash_bytecode, validate_bytecode, InvalidBytecodeError}, + concat_and_hash, u256_to_h256, +}; -use zksync_system_constants::{MAX_GAS_PER_PUBDATA_BYTE, USED_BOOTLOADER_MEMORY_BYTES}; -use zksync_utils::bytecode::{hash_bytecode, validate_bytecode, InvalidBytecodeError}; -use zksync_utils::{concat_and_hash, u256_to_h256}; - -// Local uses use super::{EIP_1559_TX_TYPE, EIP_2930_TX_TYPE, EIP_712_TX_TYPE}; use crate::{ fee::Fee, @@ -60,7 +58,7 @@ pub struct CallRequest { /// Access list #[serde(default, skip_serializing_if = "Option::is_none")] pub access_list: Option, - /// Eip712 meta + /// EIP712 meta #[serde(default, skip_serializing_if = "Option::is_none")] pub eip712_meta: Option, } @@ -97,7 +95,7 @@ impl CallRequestBuilder { self } - /// Set transfered value (None for no transfer) + /// Set transferred, value (None for no transfer) pub fn gas_price(mut self, gas_price: U256) -> Self { self.call_request.gas_price = Some(gas_price); self @@ -113,7 +111,7 @@ impl CallRequestBuilder { self } - /// Set transfered value (None for no transfer) + /// Set transferred, value (None for no transfer) pub fn value(mut self, value: U256) -> Self { self.call_request.value = Some(value); self @@ -177,7 +175,7 @@ pub enum SerializationTransactionError { AccessListsNotSupported, #[error("nonce has max value")] TooBigNonce, - /// TooHighGas is a sanity error to avoid extremely big numbers specified + /// Sanity check error to avoid extremely big numbers specified /// to gas and pubdata price. #[error("{0}")] TooHighGas(String), @@ -444,7 +442,7 @@ impl TransactionRequest { match self.transaction_type { // EIP-2930 (0x01) Some(x) if x == EIP_2930_TX_TYPE.into() => { - // rlp_opt(rlp, &self.chain_id); + // `rlp_opt(rlp, &self.chain_id);` rlp.append(&chain_id); rlp.append(&self.nonce); rlp.append(&self.gas_price); @@ -456,7 +454,7 @@ impl TransactionRequest { } // EIP-1559 (0x02) Some(x) if x == EIP_1559_TX_TYPE.into() => { - // rlp_opt(rlp, &self.chain_id); + // `rlp_opt(rlp, &self.chain_id);` rlp.append(&chain_id); rlp.append(&self.nonce); rlp_opt(rlp, &self.max_priority_fee_per_gas); @@ -745,8 +743,8 @@ impl TransactionRequest { } meta.gas_per_pubdata } else { - // For transactions that don't support corresponding field, a default is chosen. 
- U256::from(MAX_GAS_PER_PUBDATA_BYTE) + // For transactions that don't support corresponding field, a maximal default value is chosen. + DEFAULT_L2_TX_GAS_PER_PUBDATA_BYTE.into() }; let max_priority_fee_per_gas = self.max_priority_fee_per_gas.unwrap_or(self.gas_price); @@ -823,7 +821,7 @@ impl L2Tx { /// Ensures that encoded transaction size is not greater than `max_tx_size`. fn check_encoded_size(&self, max_tx_size: usize) -> Result<(), SerializationTransactionError> { - // since abi_encoding_len returns 32-byte words multiplication on 32 is needed + // since `abi_encoding_len` returns 32-byte words multiplication on 32 is needed let tx_size = self.abi_encoding_len() * 32; if tx_size > max_tx_size { return Err(SerializationTransactionError::OversizedData( @@ -884,7 +882,7 @@ impl TryFrom for L1Tx { type Error = SerializationTransactionError; fn try_from(tx: CallRequest) -> Result { // L1 transactions have no limitations on the transaction size. - let tx: L2Tx = L2Tx::from_request(tx.into(), USED_BOOTLOADER_MEMORY_BYTES)?; + let tx: L2Tx = L2Tx::from_request(tx.into(), MAX_ENCODED_TX_SIZE)?; // Note, that while the user has theoretically provided the fee for ETH on L1, // the payment to the operator as well as refunds happen on L2 and so all the ETH @@ -892,7 +890,7 @@ impl TryFrom for L1Tx { let total_needed_eth = tx.execute.value + tx.common_data.fee.max_fee_per_gas * tx.common_data.fee.gas_limit; - // Note, that we do not set refund_recipient here, to keep it explicitly 0, + // Note, that we do not set `refund_recipient` here, to keep it explicitly 0, // so that during fee estimation it is taken into account that the refund recipient may be a different address let common_data = L1TxCommonData { sender: tx.common_data.initiator_address, @@ -947,13 +945,14 @@ pub fn validate_factory_deps( #[cfg(test)] mod tests { + use secp256k1::SecretKey; + use super::*; use crate::web3::{ api::Namespace, transports::test::TestTransport, types::{TransactionParameters, H256, U256}, }; - use secp256k1::SecretKey; #[tokio::test] async fn decode_real_tx() { @@ -1397,7 +1396,7 @@ mod tests { let random_tx_max_size = 1_000_000; // bytes let private_key = H256::random(); let address = PackedEthSignature::address_from_private_key(&private_key).unwrap(); - // choose some number that devides on 8 and is > 1_000_000 + // choose some number that divides on 8 and is `> 1_000_000` let factory_dep = vec![2u8; 1600000]; let factory_deps: Vec> = factory_dep.chunks(32).map(|s| s.into()).collect(); let mut tx = TransactionRequest { @@ -1489,21 +1488,15 @@ mod tests { access_list: None, eip712_meta: None, }; - let l2_tx = L2Tx::from_request( - call_request_with_nonce.clone().into(), - USED_BOOTLOADER_MEMORY_BYTES, - ) - .unwrap(); + let l2_tx = L2Tx::from_request(call_request_with_nonce.clone().into(), MAX_ENCODED_TX_SIZE) + .unwrap(); assert_eq!(l2_tx.nonce(), Nonce(123u32)); let mut call_request_without_nonce = call_request_with_nonce; call_request_without_nonce.nonce = None; - let l2_tx = L2Tx::from_request( - call_request_without_nonce.into(), - USED_BOOTLOADER_MEMORY_BYTES, - ) - .unwrap(); + let l2_tx = + L2Tx::from_request(call_request_without_nonce.into(), MAX_ENCODED_TX_SIZE).unwrap(); assert_eq!(l2_tx.nonce(), Nonce(0u32)); } } diff --git a/core/lib/types/src/tx/execute.rs b/core/lib/types/src/tx/execute.rs index e33dff694fe..21f0b401cce 100644 --- a/core/lib/types/src/tx/execute.rs +++ b/core/lib/types/src/tx/execute.rs @@ -1,8 +1,9 @@ -use crate::{web3::ethabi, Address, EIP712TypedStructure, StructBuilder, 
H256, U256}; use once_cell::sync::Lazy; use serde::{Deserialize, Serialize}; use zksync_utils::ZeroPrefixHexSerde; +use crate::{web3::ethabi, Address, EIP712TypedStructure, StructBuilder, H256, U256}; + /// `Execute` transaction executes a previously deployed smart contract in the L2 rollup. #[derive(Debug, Clone, Serialize, Deserialize, PartialEq)] #[serde(rename_all = "camelCase")] @@ -30,7 +31,7 @@ impl EIP712TypedStructure for Execute { builder.add_member("data", &self.calldata.as_slice()); // Factory deps are not included into the transaction signature, since they are parsed from the // transaction metadata. - // Note that for the deploy transactions all the dependencies are implicitly included into the "calldataHash" + // Note that for the deploy transactions all the dependencies are implicitly included into the `calldataHash` // field, because the deps are referenced in the bytecode of the "main" contract bytecode. } } diff --git a/core/lib/types/src/tx/mod.rs b/core/lib/types/src/tx/mod.rs index bd2e6e46694..1371fa74ee7 100644 --- a/core/lib/types/src/tx/mod.rs +++ b/core/lib/types/src/tx/mod.rs @@ -5,19 +5,18 @@ //! with metadata (such as fees and/or signatures) for L1 and L2 separately. use std::fmt::Debug; + use zksync_basic_types::{Address, H256}; use zksync_utils::bytecode::CompressedBytecodeInfo; +use self::tx_execution_info::TxExecutionStatus; +pub use self::{execute::Execute, tx_execution_info::ExecutionMetrics}; +use crate::{vm_trace::Call, Transaction}; + pub mod execute; pub mod primitives; pub mod tx_execution_info; -pub use self::execute::Execute; -use crate::vm_trace::Call; -use crate::Transaction; -pub use tx_execution_info::ExecutionMetrics; -use tx_execution_info::TxExecutionStatus; - #[derive(Debug, Clone, PartialEq)] pub struct TransactionExecutionResult { pub transaction: Transaction, @@ -49,7 +48,7 @@ impl TransactionExecutionResult { } } -#[derive(Debug, Clone)] +#[derive(Debug, Clone, Copy)] pub struct IncludedTxLocation { pub tx_hash: H256, pub tx_index_in_miniblock: u32, diff --git a/core/lib/types/src/tx/primitives/eip712_signature/member_types.rs b/core/lib/types/src/tx/primitives/eip712_signature/member_types.rs index cc4906ef7e8..aecece572dd 100644 --- a/core/lib/types/src/tx/primitives/eip712_signature/member_types.rs +++ b/core/lib/types/src/tx/primitives/eip712_signature/member_types.rs @@ -1,9 +1,10 @@ -use crate::tx::primitives::eip712_signature::typed_structure::{ - EncodedStructureMember, StructMember, -}; -use crate::web3::signing::keccak256; use zksync_basic_types::{Address, H256, U256}; +use crate::{ + tx::primitives::eip712_signature::typed_structure::{EncodedStructureMember, StructMember}, + web3::signing::keccak256, +}; + impl StructMember for String { const MEMBER_TYPE: &'static str = "string"; const IS_REFERENCE_TYPE: bool = false; diff --git a/core/lib/types/src/tx/primitives/eip712_signature/struct_builder.rs b/core/lib/types/src/tx/primitives/eip712_signature/struct_builder.rs index f6189f504df..1b3260993ea 100644 --- a/core/lib/types/src/tx/primitives/eip712_signature/struct_builder.rs +++ b/core/lib/types/src/tx/primitives/eip712_signature/struct_builder.rs @@ -1,5 +1,6 @@ -use serde_json::Value; use std::collections::{BTreeMap, VecDeque}; + +use serde_json::Value; use zksync_basic_types::H256; use crate::tx::primitives::eip712_signature::typed_structure::{ @@ -86,7 +87,7 @@ pub(crate) struct EncodeBuilder { impl EncodeBuilder { /// Returns the concatenation of the encoded member values in the order that they appear in the type. 
pub fn encode_data(&self) -> Vec { - // encodeData(s : 𝕊) = enc(value₁) ‖ enc(value₂) ‖ … ‖ enc(valueₙ). + // `encodeData(s : 𝕊) = enc(value₁) ‖ enc(value₂) ‖ … ‖ enc(valueₙ).` self.members.iter().map(|(_, data)| *data).collect() } diff --git a/core/lib/types/src/tx/primitives/eip712_signature/tests.rs b/core/lib/types/src/tx/primitives/eip712_signature/tests.rs index 70ae415531c..8bfd14b45c4 100644 --- a/core/lib/types/src/tx/primitives/eip712_signature/tests.rs +++ b/core/lib/types/src/tx/primitives/eip712_signature/tests.rs @@ -1,13 +1,20 @@ -use crate::tx::primitives::eip712_signature::{ - struct_builder::StructBuilder, - typed_structure::{EIP712TypedStructure, Eip712Domain}, -}; -use crate::tx::primitives::{eip712_signature::utils::get_eip712_json, PackedEthSignature}; -use crate::web3::signing::keccak256; -use serde::Serialize; use std::str::FromStr; + +use serde::Serialize; use zksync_basic_types::{Address, H256, U256}; +use crate::{ + tx::primitives::{ + eip712_signature::{ + struct_builder::StructBuilder, + typed_structure::{EIP712TypedStructure, Eip712Domain}, + utils::get_eip712_json, + }, + PackedEthSignature, + }, + web3::signing::keccak256, +}; + #[derive(Clone, Serialize)] struct Person { name: String, diff --git a/core/lib/types/src/tx/primitives/eip712_signature/typed_structure.rs b/core/lib/types/src/tx/primitives/eip712_signature/typed_structure.rs index 999afbbe604..421944e5d46 100644 --- a/core/lib/types/src/tx/primitives/eip712_signature/typed_structure.rs +++ b/core/lib/types/src/tx/primitives/eip712_signature/typed_structure.rs @@ -1,11 +1,11 @@ -use crate::web3::signing::keccak256; use serde::{Deserialize, Serialize}; use serde_json::Value; -use crate::tx::primitives::eip712_signature::struct_builder::{ - EncodeBuilder, StructBuilder, TypeBuilder, +use crate::{ + tx::primitives::eip712_signature::struct_builder::{EncodeBuilder, StructBuilder, TypeBuilder}, + web3::signing::keccak256, + L2ChainId, H256, U256, }; -use crate::{L2ChainId, H256, U256}; #[derive(Debug, Clone)] pub struct EncodedStructureMember { @@ -29,7 +29,7 @@ impl EncodedStructureMember { } } - /// Encodes the structure as `name ‖ "(" ‖ member₁ ‖ "," ‖ member₂ ‖ "," ‖ … ‖ memberₙ ")". + /// Encodes the structure as `name ‖ "(" ‖ member₁ ‖ "," ‖ member₂ ‖ "," ‖ … ‖ memberₙ ")"`. pub fn get_encoded_type(&self) -> String { let mut encoded_type = String::new(); encoded_type.push_str(&self.member_type); @@ -123,7 +123,7 @@ pub trait EIP712TypedStructure: Serialize { } fn hash_struct(&self) -> H256 { - // hashStruct(s : 𝕊) = keccak256(keccak256(encodeType(typeOf(s))) ‖ encodeData(s)). + // `hashStruct(s : 𝕊) = keccak256(keccak256(encodeType(typeOf(s))) ‖ encodeData(s)).` let type_hash = { let encode_type = self.encode_type(); keccak256(encode_type.as_bytes()) diff --git a/core/lib/types/src/tx/primitives/eip712_signature/utils.rs b/core/lib/types/src/tx/primitives/eip712_signature/utils.rs index 57db7894321..f338c017e2b 100644 --- a/core/lib/types/src/tx/primitives/eip712_signature/utils.rs +++ b/core/lib/types/src/tx/primitives/eip712_signature/utils.rs @@ -1,7 +1,8 @@ +use serde_json::{Map, Value}; + use crate::tx::primitives::eip712_signature::typed_structure::{ EIP712TypedStructure, Eip712Domain, }; -use serde_json::{Map, Value}; /// Formats the data that needs to be signed in json according to the standard eip-712. /// Compatible with `eth_signTypedData` RPC call. 
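// Illustration, not part of the patch: the EIP-712 hunks above quote the
// `encodeType` and `hashStruct` formulas. Below they are specialized by hand to
// a `Person { name, wallet }` struct like the one in the tests; the `wallet`
// member is an assumption for illustration, and `keccak256` is the
// `web3::signing::keccak256` helper referenced throughout this diff.

use web3::signing::keccak256;

// hashStruct(s) = keccak256(keccak256(encodeType(typeOf(s))) ‖ encodeData(s))
fn hash_person(name: &str, wallet: [u8; 20]) -> [u8; 32] {
    // encodeType(Person) = "Person(string name,address wallet)"
    let type_hash = keccak256(b"Person(string name,address wallet)");
    // encodeData: a dynamic `string` member is encoded as the hash of its
    // contents; an `address` is left-padded to a 32-byte word.
    let name_word = keccak256(name.as_bytes());
    let mut wallet_word = [0u8; 32];
    wallet_word[12..].copy_from_slice(&wallet);
    // enc(value1) ‖ enc(value2), prefixed with the type hash.
    let mut buffer = Vec::with_capacity(96);
    buffer.extend_from_slice(&type_hash);
    buffer.extend_from_slice(&name_word);
    buffer.extend_from_slice(&wallet_word);
    keccak256(&buffer)
}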
diff --git a/core/lib/types/src/tx/primitives/packed_eth_signature.rs b/core/lib/types/src/tx/primitives/packed_eth_signature.rs index b249d151ef5..c165f6a36b2 100644 --- a/core/lib/types/src/tx/primitives/packed_eth_signature.rs +++ b/core/lib/types/src/tx/primitives/packed_eth_signature.rs @@ -1,6 +1,3 @@ -use crate::tx::primitives::eip712_signature::typed_structure::{ - EIP712TypedStructure, Eip712Domain, -}; use ethereum_types_old::H256 as ParityCryptoH256; use parity_crypto::{ publickey::{ @@ -14,7 +11,11 @@ use thiserror::Error; use zksync_basic_types::{Address, H256}; use zksync_utils::ZeroPrefixHexSerde; -/// Struct used for working with ethereum signatures created using eth_sign (using geth, ethers.js, etc) +use crate::tx::primitives::eip712_signature::typed_structure::{ + EIP712TypedStructure, Eip712Domain, +}; + +/// Struct used for working with Ethereum signatures created using eth_sign (using geth, ethers.js, etc) /// message is serialized as 65 bytes long `0x` prefixed string. /// /// Some notes on implementation of methods of this structure: @@ -66,7 +67,7 @@ impl PackedEthSignature { Ok(PackedEthSignature(ETHSignature::from(signature))) } - /// Signs message using ethereum private key, results are identical to signature created + /// Signs message using Ethereum private key, results are identical to signature created /// using `geth`, `ethers.js`, etc. No hashing and prefixes required. pub fn sign(private_key: &H256, msg: &[u8]) -> Result { let signed_bytes = Self::message_to_signed_bytes(msg); @@ -85,7 +86,7 @@ impl PackedEthSignature { Ok(PackedEthSignature(signature)) } - /// Signs typed struct using ethereum private key by EIP-712 signature standard. + /// Signs typed struct using Ethereum private key by EIP-712 signature standard. /// Result of this function is the equivalent of RPC calling `eth_signTypedData`. pub fn sign_typed_data( private_key: &H256, @@ -115,7 +116,7 @@ impl PackedEthSignature { msg.keccak256().into() } - /// Checks signature and returns ethereum address of the signer. + /// Checks signature and returns Ethereum address of the signer. /// message should be the same message that was passed to `eth.sign`(or similar) method /// as argument. No hashing and prefixes required. 
pub fn signature_recover_signer( diff --git a/core/lib/types/src/tx/tx_execution_info.rs b/core/lib/types/src/tx/tx_execution_info.rs index 0f72172f529..968a56d6c55 100644 --- a/core/lib/types/src/tx/tx_execution_info.rs +++ b/core/lib/types/src/tx/tx_execution_info.rs @@ -1,14 +1,15 @@ -use crate::fee::TransactionExecutionMetrics; -use crate::l2_to_l1_log::L2ToL1Log; +use std::ops::{Add, AddAssign}; + use crate::{ commitment::SerializeCommitment, + fee::TransactionExecutionMetrics, + l2_to_l1_log::L2ToL1Log, writes::{ InitialStorageWrite, RepeatedStorageWrite, BYTES_PER_DERIVED_KEY, BYTES_PER_ENUMERATION_INDEX, }, ProtocolVersionId, }; -use std::ops::{Add, AddAssign}; #[derive(Debug, Clone, Copy, Eq, PartialEq)] pub enum TxExecutionStatus { @@ -68,6 +69,7 @@ pub struct ExecutionMetrics { pub cycles_used: u32, pub computational_gas_used: u32, pub pubdata_published: u32, + pub estimated_circuits_used: f32, } impl ExecutionMetrics { @@ -85,6 +87,7 @@ impl ExecutionMetrics { cycles_used: tx_metrics.cycles_used, computational_gas_used: tx_metrics.computational_gas_used, pubdata_published: tx_metrics.pubdata_published, + estimated_circuits_used: tx_metrics.estimated_circuits_used, } } @@ -118,6 +121,7 @@ impl Add for ExecutionMetrics { cycles_used: self.cycles_used + other.cycles_used, computational_gas_used: self.computational_gas_used + other.computational_gas_used, pubdata_published: self.pubdata_published + other.pubdata_published, + estimated_circuits_used: self.estimated_circuits_used + other.estimated_circuits_used, } } } diff --git a/core/lib/types/src/utils.rs b/core/lib/types/src/utils.rs index 617179c4936..b13887000cb 100644 --- a/core/lib/types/src/utils.rs +++ b/core/lib/types/src/utils.rs @@ -1,11 +1,11 @@ -use crate::system_contracts::DEPLOYMENT_NONCE_INCREMENT; -use crate::L2_ETH_TOKEN_ADDRESS; -use crate::{web3::signing::keccak256, AccountTreeId, StorageKey, U256}; - use zksync_basic_types::{Address, H256}; - use zksync_utils::{address_to_h256, u256_to_h256}; +use crate::{ + system_contracts::DEPLOYMENT_NONCE_INCREMENT, web3::signing::keccak256, AccountTreeId, + StorageKey, L2_ETH_TOKEN_ADDRESS, U256, +}; + /// Transforms the *full* account nonce into an *account* nonce. /// Full nonce is a composite one: it includes both account nonce (number of transactions /// initiated by the account) and deployer nonce (number of smart contracts deployed by the @@ -48,7 +48,7 @@ pub fn storage_key_for_standard_token_balance( token_contract: AccountTreeId, address: &Address, ) -> StorageKey { - // We have different implementation of the standard erc20 contract and native + // We have different implementation of the standard ERC20 contract and native // eth contract. The key for the balance is different for each. 
let key = if token_contract.address() == &L2_ETH_TOKEN_ADDRESS { key_for_eth_balance(address) @@ -79,10 +79,11 @@ pub fn deployed_address_create(sender: Address, deploy_nonce: U256) -> Address { #[cfg(test)] mod tests { + use std::str::FromStr; + use crate::{ utils::storage_key_for_standard_token_balance, AccountTreeId, Address, StorageKey, H256, }; - use std::str::FromStr; #[test] fn test_storage_key_for_eth_token() { diff --git a/core/lib/types/src/vk_transform.rs b/core/lib/types/src/vk_transform.rs index dfa022fb7c1..b19fdaef692 100644 --- a/core/lib/types/src/vk_transform.rs +++ b/core/lib/types/src/vk_transform.rs @@ -1,5 +1,5 @@ -use crate::{ethabi::Token, H256}; use std::str::FromStr; + use zkevm_test_harness::{ abstract_zksync_circuit::concrete_circuits::ZkSyncCircuit, bellman::{ @@ -14,6 +14,8 @@ use zkevm_test_harness::{ }, }; +use crate::{ethabi::Token, H256}; + /// Calculates commitment for vk from L1 verifier contract. pub fn l1_vk_commitment(token: Token) -> H256 { let vk = vk_from_token(token); diff --git a/core/lib/types/src/vm_trace.rs b/core/lib/types/src/vm_trace.rs index 6b37848dc5a..d3a94d51fa5 100644 --- a/core/lib/types/src/vm_trace.rs +++ b/core/lib/types/src/vm_trace.rs @@ -1,12 +1,16 @@ -use crate::{Address, U256}; +use std::{ + collections::{HashMap, HashSet}, + fmt, + fmt::Display, +}; + use serde::{Deserialize, Deserializer, Serialize, Serializer}; -use std::collections::{HashMap, HashSet}; -use std::fmt; -use std::fmt::Display; use zk_evm::zkevm_opcode_defs::FarCallOpcode; use zksync_system_constants::BOOTLOADER_ADDRESS; use zksync_utils::u256_to_h256; +use crate::{Address, U256}; + #[derive(Debug, Serialize, Deserialize, Clone, PartialEq)] pub enum VmTrace { ExecutionTrace(VmExecutionTrace), diff --git a/core/lib/types/src/vm_version.rs b/core/lib/types/src/vm_version.rs index 0f0fe4d337f..2a4e9dc3ef2 100644 --- a/core/lib/types/src/vm_version.rs +++ b/core/lib/types/src/vm_version.rs @@ -8,11 +8,12 @@ pub enum VmVersion { VmVirtualBlocks, VmVirtualBlocksRefundsEnhancement, VmBoojumIntegration, + Vm1_4_1, } impl VmVersion { /// Returns the latest supported VM version. 
pub const fn latest() -> VmVersion { - Self::VmBoojumIntegration + Self::Vm1_4_1 } } diff --git a/core/lib/utils/Cargo.toml b/core/lib/utils/Cargo.toml index 64b100d257e..5561f9b36da 100644 --- a/core/lib/utils/Cargo.toml +++ b/core/lib/utils/Cargo.toml @@ -14,8 +14,8 @@ zksync_basic_types = { path = "../../lib/basic_types" } zk_evm = { git = "https://github.com/matter-labs/era-zk_evm.git", tag = "v1.3.3-rc2" } vlog = { path = "../../lib/vlog" } -num = { version = "0.3.1", features = ["serde"] } -bigdecimal = { version = "0.2.2", features = ["serde"] } +bigdecimal = { version = "0.3.0", features = ["serde"] } +num = { version = "0.4.0", features = ["serde"] } serde = { version = "1.0", features = ["derive"] } tokio = { version = "1", features = ["time"] } tracing = "0.1" diff --git a/core/lib/utils/src/bytecode.rs b/core/lib/utils/src/bytecode.rs index 66101da4f5b..f9554c6f72b 100644 --- a/core/lib/utils/src/bytecode.rs +++ b/core/lib/utils/src/bytecode.rs @@ -1,8 +1,10 @@ +use std::{collections::HashMap, convert::TryInto}; + use itertools::Itertools; -use std::collections::HashMap; -use std::convert::TryInto; -use zksync_basic_types::ethabi::{encode, Token}; -use zksync_basic_types::H256; +use zksync_basic_types::{ + ethabi::{encode, Token}, + H256, +}; use crate::bytes_to_chunks; @@ -27,7 +29,7 @@ pub enum FailedToCompressBytecodeError { InvalidBytecode(#[from] InvalidBytecodeError), } -/// Implelements a simple compression algorithm for the bytecode. +/// Implements, a simple compression algorithm for the bytecode. pub fn compress_bytecode(code: &[u8]) -> Result, FailedToCompressBytecodeError> { validate_bytecode(code)?; @@ -56,7 +58,7 @@ pub fn compress_bytecode(code: &[u8]) -> Result, FailedToCompressBytecod return Err(FailedToCompressBytecodeError::DictionaryOverflow); } - // Fill the dictionary with the pmost popular chunks. + // Fill the dictionary with the most popular chunks. // The most popular chunks will be encoded with the smallest indexes, so that // the 255 most popular chunks will be encoded with one zero byte. // And the encoded data will be filled with more zeros, so @@ -212,9 +214,9 @@ mod test { let example_code = hex::decode("0000000000000000111111111111111111111111111111112222222222222222") .unwrap(); - // The size of the dictionary should be 0x0003 - // The dictionary itself should put the most common chunk first, i.e. 0x1111111111111111 - // Then, the ordering does not matter, but the algorithm will return the one with the highest position, i.e. 0x2222222222222222 + // The size of the dictionary should be `0x0003` + // The dictionary itself should put the most common chunk first, i.e. `0x1111111111111111` + // Then, the ordering does not matter, but the algorithm will return the one with the highest position, i.e. 
`0x2222222222222222` let expected_encoding = hex::decode("00031111111111111111222222222222222200000000000000000002000000000001") .unwrap(); diff --git a/core/lib/utils/src/convert.rs b/core/lib/utils/src/convert.rs index bcaa6c68f1f..cc4699448e6 100644 --- a/core/lib/utils/src/convert.rs +++ b/core/lib/utils/src/convert.rs @@ -1,3 +1,5 @@ +use std::convert::TryInto; + use bigdecimal::BigDecimal; use num::{ bigint::ToBigInt, @@ -5,7 +7,6 @@ use num::{ traits::{sign::Signed, Pow}, BigUint, }; -use std::convert::TryInto; use zksync_basic_types::{Address, H256, U256}; pub fn u256_to_big_decimal(value: U256) -> BigDecimal { @@ -154,7 +155,7 @@ pub fn h256_to_u32(value: H256) -> u32 { u32::from_be_bytes(be_u32_bytes) } -/// Converts u32 into the h256 as BE bytes +/// Converts u32 into the H256 as BE bytes pub fn u32_to_h256(value: u32) -> H256 { let mut result = [0u8; 32]; result[28..].copy_from_slice(&value.to_be_bytes()); @@ -170,10 +171,12 @@ pub fn u256_to_bytes_be(value: &U256) -> Vec { #[cfg(test)] mod test { - use super::*; - use num::BigInt; use std::str::FromStr; + use num::BigInt; + + use super::*; + #[test] fn test_ratio_to_big_decimal() { let ratio = Ratio::from_integer(BigUint::from(0u32)); diff --git a/core/lib/utils/src/http_with_retries.rs b/core/lib/utils/src/http_with_retries.rs index 61742769fd6..15973ee6b2a 100644 --- a/core/lib/utils/src/http_with_retries.rs +++ b/core/lib/utils/src/http_with_retries.rs @@ -1,5 +1,4 @@ -use reqwest::header::HeaderMap; -use reqwest::{Client, Error, Method, Response}; +use reqwest::{header::HeaderMap, Client, Error, Method, Response}; use tokio::time::{sleep, Duration}; #[derive(Debug)] diff --git a/core/lib/utils/src/misc.rs b/core/lib/utils/src/misc.rs index 468e953f83b..94f7a9adc09 100644 --- a/core/lib/utils/src/misc.rs +++ b/core/lib/utils/src/misc.rs @@ -1,5 +1,4 @@ -use zksync_basic_types::web3::signing::keccak256; -use zksync_basic_types::{H256, U256}; +use zksync_basic_types::{web3::signing::keccak256, H256, U256}; pub const fn ceil_div(a: u64, b: u64) -> u64 { if a == 0 { @@ -27,7 +26,7 @@ pub fn expand_memory_contents(packed: &[(usize, U256)], memory_size_bytes: usize value.to_big_endian(&mut result[(offset * 32)..(offset + 1) * 32]); } - result.to_vec() + result } #[cfg(test)] diff --git a/core/lib/vlog/src/lib.rs b/core/lib/vlog/src/lib.rs index 173770beece..1ea573148c4 100644 --- a/core/lib/vlog/src/lib.rs +++ b/core/lib/vlog/src/lib.rs @@ -1,18 +1,14 @@ //! This module contains the observability subsystem. //! It is responsible for providing a centralized interface for consistent observability configuration. -use std::backtrace::Backtrace; -use std::borrow::Cow; -use std::panic::PanicInfo; +use std::{backtrace::Backtrace, borrow::Cow, panic::PanicInfo}; +// Temporary re-export of `sentry::capture_message` aiming to simplify the transition from `vlog` to using +// crates directly. +pub use sentry::{capture_message, Level as AlertLevel}; use sentry::{types::Dsn, ClientInitGuard}; use tracing_subscriber::{fmt, layer::SubscriberExt, util::SubscriberInitExt}; -/// Temporary re-export of `sentry::capture_message` aiming to simplify the transition from `vlog` to using -/// crates directly. -pub use sentry::capture_message; -pub use sentry::Level as AlertLevel; - /// Specifies the format of the logs in stdout. 
#[derive(Debug, Clone, Copy, Default)] pub enum LogFormat { @@ -153,7 +149,7 @@ pub fn log_format_from_env() -> LogFormat { } /// Loads the Sentry URL from the environment variable according to the existing zkSync configuration scheme. -/// If the environemnt value is present but the value is `unset`, `None` will be returned for compatibility with the +/// If the environment value is present but the value is `unset`, `None` will be returned for compatibility with the /// existing configuration setup. /// /// This is a deprecated function existing for compatibility with the old configuration scheme. diff --git a/core/lib/web3_decl/Cargo.toml b/core/lib/web3_decl/Cargo.toml index 120cc525d9f..df82880d3c8 100644 --- a/core/lib/web3_decl/Cargo.toml +++ b/core/lib/web3_decl/Cargo.toml @@ -15,8 +15,8 @@ serde = "1.0" serde_json = "1.0" rlp = "0.5.0" thiserror = "1.0" -bigdecimal = { version = "0.2.2", features = ["serde"] } -jsonrpsee = { version = "0.19.0", default-features = false, features = [ +bigdecimal = { version = "0.3.0", features = ["serde"] } +jsonrpsee = { version = "0.21.0", default-features = false, features = [ "macros", ] } chrono = "0.4" diff --git a/core/lib/web3_decl/src/error.rs b/core/lib/web3_decl/src/error.rs index d36bd2531f3..f2c77c743c5 100644 --- a/core/lib/web3_decl/src/error.rs +++ b/core/lib/web3_decl/src/error.rs @@ -1,12 +1,16 @@ //! Definition of errors that can occur in the zkSync Web3 API. use thiserror::Error; -use zksync_types::api::SerializationTransactionError; +use zksync_types::{api::SerializationTransactionError, L1BatchNumber, MiniblockNumber}; #[derive(Debug, Error)] pub enum Web3Error { #[error("Block with such an ID doesn't exist yet")] NoBlock, + #[error("Block with such an ID is pruned; the first retained block is {0}")] + PrunedBlock(MiniblockNumber), + #[error("L1 batch with such an ID is pruned; the first retained L1 batch is {0}")] + PrunedL1Batch(L1BatchNumber), #[error("Request timeout")] RequestTimeout, #[error("Internal error")] @@ -35,8 +39,6 @@ pub enum Web3Error { LogsLimitExceeded(usize, u32, u32), #[error("invalid filter: if blockHash is supplied fromBlock and toBlock must not be")] InvalidFilterBlockHash, - #[error("Query returned more than {0} results. 
Try smaller range of blocks")] - TooManyLogs(usize), #[error("Tree API is not available")] TreeApiUnavailable, } diff --git a/core/lib/web3_decl/src/lib.rs b/core/lib/web3_decl/src/lib.rs index 974de1ac04a..f109ec9efec 100644 --- a/core/lib/web3_decl/src/lib.rs +++ b/core/lib/web3_decl/src/lib.rs @@ -15,6 +15,6 @@ pub mod namespaces; pub mod types; pub use jsonrpsee; -use jsonrpsee::core::Error; +use jsonrpsee::core::ClientError; -pub type RpcResult = Result; +pub type RpcResult = Result; diff --git a/core/lib/web3_decl/src/namespaces/debug.rs b/core/lib/web3_decl/src/namespaces/debug.rs index 7db44f27527..02e75e946b7 100644 --- a/core/lib/web3_decl/src/namespaces/debug.rs +++ b/core/lib/web3_decl/src/namespaces/debug.rs @@ -1,8 +1,10 @@ -use crate::types::H256; use jsonrpsee::{core::RpcResult, proc_macros::rpc}; +use zksync_types::{ + api::{BlockId, BlockNumber, DebugCall, ResultDebugCall, TracerConfig}, + transaction_request::CallRequest, +}; -use zksync_types::api::{BlockId, BlockNumber, DebugCall, ResultDebugCall, TracerConfig}; -use zksync_types::transaction_request::CallRequest; +use crate::types::H256; #[cfg_attr( all(feature = "client", feature = "server"), diff --git a/core/lib/web3_decl/src/namespaces/eth.rs b/core/lib/web3_decl/src/namespaces/eth.rs index f92f2a56239..5ed49355fdd 100644 --- a/core/lib/web3_decl/src/namespaces/eth.rs +++ b/core/lib/web3_decl/src/namespaces/eth.rs @@ -1,20 +1,17 @@ -// External uses -use jsonrpsee::{core::RpcResult, proc_macros::rpc}; - -// Workspace uses -use crate::types::{ - Block, Bytes, FeeHistory, Filter, FilterChanges, Index, Log, SyncState, TransactionReceipt, - U256, U64, +use jsonrpsee::{ + core::{RpcResult, SubscriptionResult}, + proc_macros::rpc, }; - use zksync_types::{ - api::Transaction, - api::{BlockIdVariant, BlockNumber, TransactionVariant}, + api::{BlockIdVariant, BlockNumber, Transaction, TransactionVariant}, transaction_request::CallRequest, Address, H256, }; -// Local uses +use crate::types::{ + Block, Bytes, FeeHistory, Filter, FilterChanges, Index, Log, PubSubFilter, SyncState, + TransactionReceipt, U256, U64, +}; #[cfg_attr( all(feature = "client", feature = "server"), @@ -172,3 +169,10 @@ pub trait EthNamespace { reward_percentiles: Vec, ) -> RpcResult; } + +#[rpc(server, namespace = "eth")] +pub trait EthPubSub { + #[subscription(name = "subscribe" => "subscription", unsubscribe = "unsubscribe", item = PubSubResult)] + async fn subscribe(&self, sub_type: String, filter: Option) + -> SubscriptionResult; +} diff --git a/core/lib/web3_decl/src/namespaces/mod.rs b/core/lib/web3_decl/src/namespaces/mod.rs index 996cb27267c..e3fcc6669a7 100644 --- a/core/lib/web3_decl/src/namespaces/mod.rs +++ b/core/lib/web3_decl/src/namespaces/mod.rs @@ -3,19 +3,19 @@ pub mod en; pub mod eth; pub mod eth_subscribe; pub mod net; +pub mod snapshots; pub mod web3; pub mod zks; -// Server trait re-exports. -#[cfg(feature = "server")] -pub use self::{ - debug::DebugNamespaceServer, en::EnNamespaceServer, eth::EthNamespaceServer, - net::NetNamespaceServer, web3::Web3NamespaceServer, zks::ZksNamespaceServer, -}; - -// Client trait re-exports. 
 #[cfg(feature = "client")]
 pub use self::{
     debug::DebugNamespaceClient, en::EnNamespaceClient, eth::EthNamespaceClient,
-    net::NetNamespaceClient, web3::Web3NamespaceClient, zks::ZksNamespaceClient,
+    net::NetNamespaceClient, snapshots::SnapshotsNamespaceClient, web3::Web3NamespaceClient,
+    zks::ZksNamespaceClient,
+};
+#[cfg(feature = "server")]
+pub use self::{
+    debug::DebugNamespaceServer, en::EnNamespaceServer, eth::EthNamespaceServer,
+    eth::EthPubSubServer, net::NetNamespaceServer, snapshots::SnapshotsNamespaceServer,
+    web3::Web3NamespaceServer, zks::ZksNamespaceServer,
 };
diff --git a/core/lib/web3_decl/src/namespaces/snapshots.rs b/core/lib/web3_decl/src/namespaces/snapshots.rs
new file mode 100644
index 00000000000..02f9aa6b36d
--- /dev/null
+++ b/core/lib/web3_decl/src/namespaces/snapshots.rs
@@ -0,0 +1,28 @@
+use jsonrpsee::{core::RpcResult, proc_macros::rpc};
+use zksync_types::{
+    snapshots::{AllSnapshots, SnapshotHeader},
+    L1BatchNumber,
+};
+
+#[cfg_attr(
+    all(feature = "client", feature = "server"),
+    rpc(server, client, namespace = "snapshots")
+)]
+#[cfg_attr(
+    all(feature = "client", not(feature = "server")),
+    rpc(client, namespace = "snapshots")
+)]
+#[cfg_attr(
+    all(not(feature = "client"), feature = "server"),
+    rpc(server, namespace = "snapshots")
+)]
+pub trait SnapshotsNamespace {
+    #[method(name = "getAllSnapshots")]
+    async fn get_all_snapshots(&self) -> RpcResult<AllSnapshots>;
+
+    #[method(name = "getSnapshot")]
+    async fn get_snapshot_by_l1_batch_number(
+        &self,
+        l1_batch_number: L1BatchNumber,
+    ) -> RpcResult<Option<SnapshotHeader>>;
+}
diff --git a/core/lib/web3_decl/src/namespaces/zks.rs b/core/lib/web3_decl/src/namespaces/zks.rs
index 7543fa59269..aee91dccb46 100644
--- a/core/lib/web3_decl/src/namespaces/zks.rs
+++ b/core/lib/web3_decl/src/namespaces/zks.rs
@@ -2,18 +2,18 @@ use std::collections::HashMap;
 
 use bigdecimal::BigDecimal;
 use jsonrpsee::{core::RpcResult, proc_macros::rpc};
-
 use zksync_types::{
     api::{
         BlockDetails, BridgeAddresses, L1BatchDetails, L2ToL1LogProof, Proof, ProtocolVersion,
         TransactionDetails,
     },
     fee::Fee,
+    fee_model::FeeParams,
     transaction_request::CallRequest,
     Address, L1BatchNumber, MiniblockNumber, H256, U256, U64,
 };
 
-use crate::types::{Filter, Log, Token};
+use crate::types::Token;
 
 #[cfg_attr(
     all(feature = "client", feature = "server"),
@@ -105,15 +105,15 @@ pub trait ZksNamespace {
     #[method(name = "getL1GasPrice")]
     async fn get_l1_gas_price(&self) -> RpcResult<U64>;
 
+    #[method(name = "getFeeParams")]
+    async fn get_fee_params(&self) -> RpcResult<FeeParams>;
+
     #[method(name = "getProtocolVersion")]
     async fn get_protocol_version(
         &self,
         version_id: Option<u16>,
     ) -> RpcResult<Option<ProtocolVersion>>;
 
-    #[method(name = "getLogsWithVirtualBlocks")]
-    async fn get_logs_with_virtual_blocks(&self, filter: Filter) -> RpcResult<Vec<Log>>;
-
     #[method(name = "getProof")]
     async fn get_proof(
         &self,
diff --git a/core/lib/web3_decl/src/types.rs b/core/lib/web3_decl/src/types.rs
index 46033bc4118..61a3e10397c 100644
--- a/core/lib/web3_decl/src/types.rs
+++ b/core/lib/web3_decl/src/types.rs
@@ -5,14 +5,15 @@
 //!
 //! These "extensions" are required to provide more zkSync-specific information while remaining Web3-compliant.
-use core::convert::{TryFrom, TryInto};
-use core::fmt;
-use core::marker::PhantomData;
+use core::{
+    convert::{TryFrom, TryInto},
+    fmt,
+    marker::PhantomData,
+};
 
 use itertools::unfold;
 use rlp::Rlp;
-use serde::{de, Deserialize, Serialize, Serializer};
-
+use serde::{de, Deserialize, Deserializer, Serialize, Serializer};
 pub use zksync_types::{
     api::{Block, BlockNumber, Log, TransactionReceipt, TransactionRequest},
     vm_trace::{ContractSourceDebugInfo, VmDebugTrace, VmExecutionStep},
@@ -105,13 +106,18 @@ pub enum FilterChanges {
 }
 
 /// Either value or array of values.
+///
+/// A value must serialize into a string.
 #[derive(Default, Debug, PartialEq, Clone)]
 pub struct ValueOrArray<T>(pub Vec<T>);
 
-impl<T> Serialize for ValueOrArray<T>
-where
-    T: Serialize,
-{
+impl<T> From<T> for ValueOrArray<T> {
+    fn from(value: T) -> Self {
+        Self(vec![value])
+    }
+}
+
+impl<T: Serialize> Serialize for ValueOrArray<T> {
     fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
     where
         S: Serializer,
@@ -124,18 +130,18 @@
     }
 }
 
-impl<'de, T: std::fmt::Debug + Deserialize<'de>> ::serde::Deserialize<'de> for ValueOrArray<T> {
+impl<'de, T: Deserialize<'de>> Deserialize<'de> for ValueOrArray<T> {
     fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
     where
-        D: ::serde::Deserializer<'de>,
+        D: Deserializer<'de>,
     {
         struct Visitor<T>(PhantomData<T>);
 
-        impl<'de, T: std::fmt::Debug + Deserialize<'de>> de::Visitor<'de> for Visitor<T> {
+        impl<'de, T: Deserialize<'de>> de::Visitor<'de> for Visitor<T> {
             type Value = ValueOrArray<T>;
 
-            fn expecting(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
-                formatter.write_str("Expected value or sequence")
+            fn expecting(&self, formatter: &mut fmt::Formatter<'_>) -> fmt::Result {
+                formatter.write_str("string value or sequence of values")
             }
 
             fn visit_str<E>(self, value: &str) -> Result<Self::Value, E>
@@ -343,9 +349,10 @@ pub enum PubSubResult {
 
 #[cfg(test)]
 mod tests {
-    use super::*;
     use zksync_types::api::{BlockId, BlockIdVariant};
 
+    use super::*;
+
     #[test]
     fn get_block_number_serde() {
         let test_vector = &[
@@ -408,4 +415,30 @@
             assert_eq!(&actual_block_id, expected_block_id);
         }
     }
+
+    #[test]
+    fn serializing_value_or_array() {
+        let value = ValueOrArray::from(Address::repeat_byte(0x1f));
+        let json = serde_json::to_value(value.clone()).unwrap();
+        assert_eq!(
+            json,
+            serde_json::json!("0x1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f")
+        );
+
+        let restored_value: ValueOrArray<Address>
= serde_json::from_value(json).unwrap();
+        assert_eq!(restored_value, value);
+
+        let value = ValueOrArray(vec![Address::repeat_byte(0x1f), Address::repeat_byte(0x23)]);
+        let json = serde_json::to_value(value.clone()).unwrap();
+        assert_eq!(
+            json,
+            serde_json::json!([
+                "0x1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f",
+                "0x2323232323232323232323232323232323232323",
+            ])
+        );
+
+        let restored_value: ValueOrArray<Address>
= serde_json::from_value(json).unwrap(); + assert_eq!(restored_value, value); + } } diff --git a/core/lib/zksync_core/Cargo.toml b/core/lib/zksync_core/Cargo.toml index 2bccff98ae9..af62e4afff5 100644 --- a/core/lib/zksync_core/Cargo.toml +++ b/core/lib/zksync_core/Cargo.toml @@ -10,7 +10,7 @@ keywords = ["blockchain", "zksync"] categories = ["cryptography"] [dependencies] -vise = { git = "https://github.com/matter-labs/vise.git", version = "0.1.0", rev = "dd05139b76ab0843443ab3ff730174942c825dae" } +vise = { git = "https://github.com/matter-labs/vise.git", version = "0.1.0", rev = "1c9cc500e92cf9ea052b230e114a6f9cce4fb2c1" } zksync_state = { path = "../state" } zksync_types = { path = "../types" } zksync_dal = { path = "../dal" } @@ -22,13 +22,11 @@ zksync_commitment_utils = { path = "../commitment_utils" } zksync_eth_client = { path = "../eth_client" } zksync_eth_signer = { path = "../eth_signer" } zksync_mempool = { path = "../mempool" } -zksync_prover_utils = { path = "../prover_utils" } zksync_queued_job_processor = { path = "../queued_job_processor" } zksync_circuit_breaker = { path = "../circuit_breaker" } zksync_storage = { path = "../storage" } zksync_merkle_tree = { path = "../merkle_tree" } zksync_mini_merkle_tree = { path = "../mini_merkle_tree" } -zksync_verification_key_generator_and_server = { path = "../../bin/verification_key_generator_and_server" } prometheus_exporter = { path = "../prometheus_exporter" } zksync_web3_decl = { path = "../web3_decl", default-features = false, features = [ "server", @@ -40,11 +38,14 @@ vlog = { path = "../vlog" } multivm = { path = "../multivm" } # Consensus dependenices -zksync_concurrency = { version = "0.1.0", git = "https://github.com/matter-labs/era-consensus.git", rev = "ed71b2e817c980a2daffef6a01885219e1dc6fa0" } -zksync_consensus_roles = { version = "0.1.0", git = "https://github.com/matter-labs/era-consensus.git", rev = "ed71b2e817c980a2daffef6a01885219e1dc6fa0" } -zksync_consensus_storage = { version = "0.1.0", git = "https://github.com/matter-labs/era-consensus.git", rev = "ed71b2e817c980a2daffef6a01885219e1dc6fa0" } -zksync_consensus_executor = { version = "0.1.0", git = "https://github.com/matter-labs/era-consensus.git", rev = "ed71b2e817c980a2daffef6a01885219e1dc6fa0" } -zksync_protobuf = { version = "0.1.0", git = "https://github.com/matter-labs/era-consensus.git", rev = "ed71b2e817c980a2daffef6a01885219e1dc6fa0" } +zksync_concurrency = { version = "0.1.0", git = "https://github.com/matter-labs/era-consensus.git", rev = "5727a3e0b22470bb90092388f9125bcb366df613" } +zksync_consensus_crypto = { version = "0.1.0", git = "https://github.com/matter-labs/era-consensus.git", rev = "5727a3e0b22470bb90092388f9125bcb366df613" } +zksync_consensus_roles = { version = "0.1.0", git = "https://github.com/matter-labs/era-consensus.git", rev = "5727a3e0b22470bb90092388f9125bcb366df613" } +zksync_consensus_storage = { version = "0.1.0", git = "https://github.com/matter-labs/era-consensus.git", rev = "5727a3e0b22470bb90092388f9125bcb366df613" } +zksync_consensus_executor = { version = "0.1.0", git = "https://github.com/matter-labs/era-consensus.git", rev = "5727a3e0b22470bb90092388f9125bcb366df613" } +zksync_consensus_bft = { version = "0.1.0", git = "https://github.com/matter-labs/era-consensus.git", rev = "5727a3e0b22470bb90092388f9125bcb366df613" } +zksync_consensus_utils = { version = "0.1.0", git = "https://github.com/matter-labs/era-consensus.git", rev = "5727a3e0b22470bb90092388f9125bcb366df613" } +zksync_protobuf = { version = "0.1.0", 
git = "https://github.com/matter-labs/era-consensus.git", rev = "5727a3e0b22470bb90092388f9125bcb366df613" } prost = "0.12.1" serde = { version = "1.0", features = ["derive"] } @@ -62,17 +63,11 @@ thiserror = "1.0" async-trait = "0.1" bitflags = "1.3.2" -# API dependencies -jsonrpc-core = { git = "https://github.com/matter-labs/jsonrpc.git", branch = "master" } -jsonrpc-core-client = { git = "https://github.com/matter-labs/jsonrpc.git", branch = "master" } # Required for the RPC trait -jsonrpc-http-server = { git = "https://github.com/matter-labs/jsonrpc.git", branch = "master" } -jsonrpc-ws-server = { git = "https://github.com/matter-labs/jsonrpc.git", branch = "master" } -jsonrpc-derive = { git = "https://github.com/matter-labs/jsonrpc.git", branch = "master" } -jsonrpc-pubsub = { git = "https://github.com/matter-labs/jsonrpc.git", branch = "master" } num = { version = "0.3.1", features = ["serde"] } -bigdecimal = { version = "0.2.2", features = ["serde"] } +bigdecimal = { version = "0.3.0", features = ["serde"] } reqwest = { version = "0.11", features = ["blocking", "json"] } hex = "0.4" +lru = { version = "0.12.1", default-features = false } governor = "0.4.2" tower-http = { version = "0.4.1", features = ["full"] } tower = { version = "0.4.13", features = ["full"] } @@ -94,8 +89,6 @@ tracing = "0.1.26" zksync_test_account = { path = "../test_account" } assert_matches = "1.5" +jsonrpsee = "0.21.0" tempfile = "3.0.2" test-casing = "0.1.2" - -[build-dependencies] -zksync_protobuf_build = { version = "0.1.0", git = "https://github.com/matter-labs/era-consensus.git", rev = "ed71b2e817c980a2daffef6a01885219e1dc6fa0" } diff --git a/core/lib/zksync_core/src/api_server/contract_verification/api_decl.rs b/core/lib/zksync_core/src/api_server/contract_verification/api_decl.rs index 1b7d07b4276..553f6f2ad45 100644 --- a/core/lib/zksync_core/src/api_server/contract_verification/api_decl.rs +++ b/core/lib/zksync_core/src/api_server/contract_verification/api_decl.rs @@ -1,5 +1,4 @@ use actix_web::web; - use zksync_dal::connection::ConnectionPool; #[derive(Debug, Clone)] @@ -19,7 +18,7 @@ impl RestApi { } } - /// Creates an actix-web `Scope`, which can be mounted to the Http server. + /// Creates an actix-web `Scope`, which can be mounted to the HTTP server. pub fn into_scope(self) -> actix_web::Scope { web::scope("") .app_data(web::Data::new(self)) diff --git a/core/lib/zksync_core/src/api_server/contract_verification/api_impl.rs b/core/lib/zksync_core/src/api_server/contract_verification/api_impl.rs index d107483db01..81c9f7e264c 100644 --- a/core/lib/zksync_core/src/api_server/contract_verification/api_impl.rs +++ b/core/lib/zksync_core/src/api_server/contract_verification/api_impl.rs @@ -3,7 +3,6 @@ use actix_web::{ HttpResponse, Result as ActixResult, }; use serde::Serialize; - use zksync_types::{contract_verification_api::VerificationIncomingRequest, Address}; use super::{api_decl::RestApi, metrics::METRICS}; diff --git a/core/lib/zksync_core/src/api_server/contract_verification/metrics.rs b/core/lib/zksync_core/src/api_server/contract_verification/metrics.rs index 1e114f68ff6..4947e48b094 100644 --- a/core/lib/zksync_core/src/api_server/contract_verification/metrics.rs +++ b/core/lib/zksync_core/src/api_server/contract_verification/metrics.rs @@ -1,9 +1,9 @@ //! Metrics for contract verification. 
-use vise::{Buckets, Histogram, LabeledFamily, Metrics}; - use std::time::Duration; +use vise::{Buckets, Histogram, LabeledFamily, Metrics}; + #[derive(Debug, Metrics)] #[metrics(prefix = "api_contract_verification")] pub(super) struct ContractVerificationMetrics { diff --git a/core/lib/zksync_core/src/api_server/contract_verification/mod.rs b/core/lib/zksync_core/src/api_server/contract_verification/mod.rs index 5b59fafa917..27d36429d03 100644 --- a/core/lib/zksync_core/src/api_server/contract_verification/mod.rs +++ b/core/lib/zksync_core/src/api_server/contract_verification/mod.rs @@ -1,23 +1,19 @@ use std::{net::SocketAddr, time::Duration}; use actix_cors::Cors; -use actix_web::{ - dev::Server, - {web, App, HttpResponse, HttpServer}, -}; +use actix_web::{dev::Server, web, App, HttpResponse, HttpServer}; use tokio::{sync::watch, task::JoinHandle}; - use zksync_config::configs::api::ContractVerificationApiConfig; use zksync_dal::connection::ConnectionPool; use zksync_utils::panic_notify::{spawn_panic_handler, ThreadPanicNotify}; +use self::api_decl::RestApi; + mod api_decl; mod api_impl; mod metrics; -use self::api_decl::RestApi; - -fn start_server(api: RestApi, bind_to: SocketAddr, threads: usize) -> Server { +fn start_server(api: RestApi, bind_to: SocketAddr) -> Server { HttpServer::new(move || { let api = api.clone(); App::new() @@ -30,13 +26,12 @@ fn start_server(api: RestApi, bind_to: SocketAddr, threads: usize) -> Server { .allow_any_method(), ) .service(api.into_scope()) - // Endpoint needed for js isReachable + // Endpoint needed for js `isReachable` .route( "/favicon.ico", web::get().to(|| async { HttpResponse::Ok().finish() }), ) }) - .workers(threads) .bind(bind_to) .unwrap() .shutdown_timeout(60) @@ -62,10 +57,9 @@ pub fn start_server_thread_detached( actix_rt::System::new().block_on(async move { let bind_address = api_config.bind_addr(); - let threads = api_config.threads_per_server as usize; let api = RestApi::new(master_connection_pool, replica_connection_pool); - let server = start_server(api, bind_address, threads); + let server = start_server(api, bind_address); let close_handle = server.handle(); actix_rt::spawn(async move { if stop_receiver.changed().await.is_ok() { diff --git a/core/lib/zksync_core/src/api_server/execution_sandbox/apply.rs b/core/lib/zksync_core/src/api_server/execution_sandbox/apply.rs index 36ede77abdb..45743397695 100644 --- a/core/lib/zksync_core/src/api_server/execution_sandbox/apply.rs +++ b/core/lib/zksync_core/src/api_server/execution_sandbox/apply.rs @@ -8,12 +8,14 @@ use std::time::{Duration, Instant}; -use multivm::vm_latest::{constants::BLOCK_GAS_LIMIT, HistoryDisabled}; - -use multivm::interface::VmInterface; -use multivm::interface::{L1BatchEnv, L2BlockEnv, SystemEnv}; -use multivm::VmInstance; -use zksync_dal::{ConnectionPool, SqlxError, StorageProcessor}; +use anyhow::Context as _; +use multivm::{ + interface::{L1BatchEnv, L2BlockEnv, SystemEnv, VmInterface}, + utils::adjust_pubdata_price_for_tx, + vm_latest::{constants::BLOCK_GAS_LIMIT, HistoryDisabled}, + VmInstance, +}; +use zksync_dal::{ConnectionPool, StorageProcessor}; use zksync_state::{PostgresStorage, ReadStorage, StorageView, WriteStorage}; use zksync_system_constants::{ SYSTEM_CONTEXT_ADDRESS, SYSTEM_CONTEXT_CURRENT_L2_BLOCK_INFO_POSITION, @@ -21,7 +23,7 @@ use zksync_system_constants::{ }; use zksync_types::{ api, - block::{legacy_miniblock_hash, pack_block_info, unpack_block_info}, + block::{pack_block_info, unpack_block_info, MiniblockHasher}, get_nonce_key, 
utils::{decompose_full_nonce, nonces_to_full_nonce, storage_key_for_eth_balance}, AccountTreeId, L1BatchNumber, MiniblockNumber, Nonce, ProtocolVersionId, StorageKey, @@ -33,11 +35,16 @@ use super::{ vm_metrics::{self, SandboxStage, SANDBOX_METRICS}, BlockArgs, TxExecutionArgs, TxSharedArgs, VmPermit, }; +use crate::utils::projected_first_l1_batch; #[allow(clippy::too_many_arguments)] pub(super) fn apply_vm_in_sandbox( vm_permit: VmPermit, shared_args: TxSharedArgs, + // If `true`, then the batch's L1/pubdata gas price will be adjusted so that the transaction's gas per pubdata limit is <= + // to the one in the block. This is often helpful in case we want the transaction validation to work regardless of the + // current L1 prices for gas or pubdata. + adjust_pubdata_price: bool, execution_args: &TxExecutionArgs, connection_pool: &ConnectionPool, tx: Transaction, @@ -102,12 +109,12 @@ pub(super) fn apply_vm_in_sandbox( } else if current_l2_block_info.l2_block_number == 0 { // Special case: // - For environments, where genesis block was created before virtual block upgrade it doesn't matter what we put here. - // - Otherwise, we need to put actual values here. We cannot create next l2 block with block_number=0 and max_virtual_blocks_to_create=0 + // - Otherwise, we need to put actual values here. We cannot create next L2 block with block_number=0 and `max_virtual_blocks_to_create=0` // because of SystemContext requirements. But, due to intrinsics of SystemContext, block.number still will be resolved to 0. L2BlockEnv { number: 1, timestamp: 0, - prev_block_hash: legacy_miniblock_hash(MiniblockNumber(0)), + prev_block_hash: MiniblockHasher::legacy_hash(MiniblockNumber(0)), max_virtual_blocks_to_create: 1, } } else { @@ -175,14 +182,23 @@ pub(super) fn apply_vm_in_sandbox( let TxSharedArgs { operator_account, - l1_gas_price, - fair_l2_gas_price, + fee_input, base_system_contracts, validation_computational_gas_limit, chain_id, .. } = shared_args; + let fee_input = if adjust_pubdata_price { + adjust_pubdata_price_for_tx( + fee_input, + tx.gas_per_pubdata_byte_limit(), + protocol_version.into(), + ) + } else { + fee_input + }; + let system_env = SystemEnv { zk_porter_available: ZKPORTER_IS_AVAILABLE, version: protocol_version, @@ -198,8 +214,7 @@ pub(super) fn apply_vm_in_sandbox( previous_batch_hash: None, number: vm_l1_batch_number, timestamp: l1_batch_timestamp, - l1_gas_price, - fair_l2_gas_price, + fee_input, fee_account: *operator_account.address(), enforced_base_fee: execution_args.enforced_base_fee, first_l2_block: next_l2_block_info, @@ -286,7 +301,7 @@ async fn read_l2_block_info( } #[derive(Debug)] -struct ResolvedBlockInfo { +pub(crate) struct ResolvedBlockInfo { pub state_l2_block_number: MiniblockNumber, pub vm_l1_batch_number: L1BatchNumber, pub l1_batch_timestamp: u64, @@ -301,59 +316,51 @@ impl BlockArgs { ) } - async fn resolve_block_info( + pub(crate) async fn resolve_block_info( &self, connection: &mut StorageProcessor<'_>, - ) -> Result { - let (state_l2_block_number, vm_l1_batch_number, l1_batch_timestamp) = - if self.is_pending_miniblock() { - let sealed_l1_batch_number = connection - .blocks_web3_dal() - .get_sealed_l1_batch_number() - .await?; - let sealed_miniblock_header = connection - .blocks_dal() - .get_last_sealed_miniblock_header() - .await - .unwrap() - .expect("At least one miniblock must exist"); - - // Timestamp of the next L1 batch must be greater than the timestamp of the last miniblock. 
- let l1_batch_timestamp = - seconds_since_epoch().max(sealed_miniblock_header.timestamp + 1); - ( - sealed_miniblock_header.number, - sealed_l1_batch_number + 1, - l1_batch_timestamp, - ) - } else { - let l1_batch_number = connection - .storage_web3_dal() - .resolve_l1_batch_number_of_miniblock(self.resolved_block_number) - .await? - .expected_l1_batch(); - let l1_batch_timestamp = self.l1_batch_timestamp_s.unwrap_or_else(|| { - panic!( + ) -> anyhow::Result<ResolvedBlockInfo> { + let (state_l2_block_number, vm_l1_batch_number, l1_batch_timestamp); + + if self.is_pending_miniblock() { + let sealed_l1_batch_number = + connection.blocks_dal().get_sealed_l1_batch_number().await?; + let sealed_miniblock_header = connection + .blocks_dal() + .get_last_sealed_miniblock_header() + .await? + .context("no miniblocks in storage")?; + + vm_l1_batch_number = match sealed_l1_batch_number { + Some(number) => number + 1, + None => projected_first_l1_batch(connection).await?, + }; + state_l2_block_number = sealed_miniblock_header.number; + // Timestamp of the next L1 batch must be greater than the timestamp of the last miniblock. + l1_batch_timestamp = seconds_since_epoch().max(sealed_miniblock_header.timestamp + 1); + } else { + vm_l1_batch_number = connection + .storage_web3_dal() + .resolve_l1_batch_number_of_miniblock(self.resolved_block_number) + .await? + .expected_l1_batch(); + l1_batch_timestamp = self.l1_batch_timestamp_s.unwrap_or_else(|| { + panic!( "L1 batch timestamp is `None`, `block_id`: {:?}, `resolved_block_number`: {}", self.block_id, self.resolved_block_number.0 ); - }); - - ( - self.resolved_block_number, - l1_batch_number, - l1_batch_timestamp, - ) - }; + }); + state_l2_block_number = self.resolved_block_number; + }; // Blocks without version specified are considered to be of `Version9`. // TODO: remove `unwrap_or` when protocol version ID will be assigned for each block. let protocol_version = connection .blocks_dal() .get_miniblock_protocol_version_id(state_l2_block_number) - .await - .unwrap() - .unwrap_or(ProtocolVersionId::Version9); + .await? + .unwrap_or(ProtocolVersionId::last_potentially_undefined()); + Ok(ResolvedBlockInfo { state_l2_block_number, vm_l1_batch_number, diff --git a/core/lib/zksync_core/src/api_server/execution_sandbox/error.rs b/core/lib/zksync_core/src/api_server/execution_sandbox/error.rs index 59e874ade90..9d6d635a344 100644 --- a/core/lib/zksync_core/src/api_server/execution_sandbox/error.rs +++ b/core/lib/zksync_core/src/api_server/execution_sandbox/error.rs @@ -1,6 +1,5 @@ -use thiserror::Error; - use multivm::interface::{Halt, TxRevertReason}; +use thiserror::Error; #[derive(Debug, Error)] pub(crate) enum SandboxExecutionError { @@ -65,6 +64,9 @@ impl From<Halt> for SandboxExecutionError { Halt::ValidationOutOfGas => Self::AccountValidationFailed( "The validation of the transaction ran out of gas".to_string(), ), + Halt::FailedToPublishCompressedBytecodes => { + Self::UnexpectedVMBehavior("Failed to publish compressed bytecodes".to_string()) + } } } } diff --git a/core/lib/zksync_core/src/api_server/execution_sandbox/execute.rs b/core/lib/zksync_core/src/api_server/execution_sandbox/execute.rs index d1ac41553df..80c5fcd979a 100644 --- a/core/lib/zksync_core/src/api_server/execution_sandbox/execute.rs +++ b/core/lib/zksync_core/src/api_server/execution_sandbox/execute.rs @@ -1,18 +1,20 @@ //! Implementation of "executing" methods, e.g. `eth_call`.
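As an illustrative aside (not part of the diff): the pending-block branch of `resolve_block_info` above boils down to a simple timestamp rule. The helper name `next_l1_batch_timestamp` below is hypothetical; `seconds_since_epoch` stands in for the helper the sandbox code uses.

use std::time::{SystemTime, UNIX_EPOCH};

// Stand-in for the `seconds_since_epoch` helper used by the sandbox code.
fn seconds_since_epoch() -> u64 {
    SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs()
}

// Hypothetical helper: the next L1 batch must be strictly newer than the last
// sealed miniblock, even if the wall clock lags behind its timestamp.
fn next_l1_batch_timestamp(last_miniblock_timestamp: u64) -> u64 {
    seconds_since_epoch().max(last_miniblock_timestamp + 1)
}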
+use multivm::{ + interface::{TxExecutionMode, VmExecutionResultAndLogs, VmInterface}, + tracers::StorageInvocations, + vm_latest::constants::ETH_CALL_GAS_LIMIT, + MultiVMTracer, +}; use tracing::{span, Level}; - -use multivm::interface::{TxExecutionMode, VmExecutionMode, VmExecutionResultAndLogs, VmInterface}; -use multivm::tracers::StorageInvocations; -use multivm::vm_latest::constants::ETH_CALL_GAS_LIMIT; -use multivm::MultivmTracer; use zksync_dal::ConnectionPool; - use zksync_types::{ fee::TransactionExecutionMetrics, l2::L2Tx, ExecuteTransactionCommon, Nonce, PackedEthSignature, Transaction, U256, }; +#[cfg(test)] +use super::testonly::MockTransactionExecutor; use super::{apply, vm_metrics, ApiTracer, BlockArgs, TxSharedArgs, VmPermit}; #[derive(Debug)] @@ -73,115 +75,120 @@ impl TxExecutionArgs { } } -pub(crate) async fn execute_tx_eth_call( - vm_permit: VmPermit, - shared_args: TxSharedArgs, - connection_pool: ConnectionPool, - mut tx: L2Tx, - block_args: BlockArgs, - vm_execution_cache_misses_limit: Option, - custom_tracers: Vec, -) -> VmExecutionResultAndLogs { - let enforced_base_fee = tx.common_data.fee.max_fee_per_gas.as_u64(); - let execution_args = - TxExecutionArgs::for_eth_call(enforced_base_fee, vm_execution_cache_misses_limit); - - if tx.common_data.signature.is_empty() { - tx.common_data.signature = PackedEthSignature::default().serialize_packed().into(); - } - - // Protection against infinite-loop eth_calls and alike: - // limiting the amount of gas the call can use. - // We can't use BLOCK_ERGS_LIMIT here since the VM itself has some overhead. - tx.common_data.fee.gas_limit = ETH_CALL_GAS_LIMIT.into(); - let (vm_result, _) = execute_tx_in_sandbox( - vm_permit, - shared_args, - execution_args, - connection_pool, - tx.into(), - block_args, - custom_tracers, - ) - .await; - - vm_result +/// Executor of transactions. +#[derive(Debug)] +pub(crate) enum TransactionExecutor { + Real, + #[cfg(test)] + Mock(MockTransactionExecutor), } -#[tracing::instrument(skip_all)] -pub(crate) async fn execute_tx_with_pending_state( - vm_permit: VmPermit, - mut shared_args: TxSharedArgs, - execution_args: TxExecutionArgs, - connection_pool: ConnectionPool, - tx: Transaction, -) -> (VmExecutionResultAndLogs, TransactionExecutionMetrics) { - let mut connection = connection_pool.access_storage_tagged("api").await.unwrap(); - let block_args = BlockArgs::pending(&mut connection).await; - drop(connection); - // In order for execution to pass smoothlessly, we need to ensure that block's required gasPerPubdata will be - // <= to the one in the transaction itself. - shared_args.adjust_l1_gas_price(tx.gas_per_pubdata_byte_limit()); - - execute_tx_in_sandbox( - vm_permit, - shared_args, - execution_args, - connection_pool, - tx, - block_args, - vec![], - ) - .await -} +impl TransactionExecutor { + /// This method assumes that (block with number `resolved_block_number` is present in DB) + /// or (`block_id` is `pending` and block with number `resolved_block_number - 1` is present in DB) + #[allow(clippy::too_many_arguments)] + #[tracing::instrument(skip_all)] + pub async fn execute_tx_in_sandbox( + &self, + vm_permit: VmPermit, + shared_args: TxSharedArgs, + // If `true`, then the batch's L1/pubdata gas price will be adjusted so that the transaction's gas per pubdata limit is <= + // to the one in the block. This is often helpful in case we want the transaction validation to work regardless of the + // current L1 prices for gas or pubdata. 
+ adjust_pubdata_price: bool, + execution_args: TxExecutionArgs, + connection_pool: ConnectionPool, + tx: Transaction, + block_args: BlockArgs, + custom_tracers: Vec<ApiTracer>, + ) -> (VmExecutionResultAndLogs, TransactionExecutionMetrics, bool) { + #[cfg(test)] + if let Self::Mock(mock_executor) = self { + return mock_executor.execute_tx(&tx); + } + + let total_factory_deps = tx + .execute + .factory_deps + .as_ref() + .map_or(0, |deps| deps.len() as u16); + + let (published_bytecodes, execution_result) = tokio::task::spawn_blocking(move || { + let span = span!(Level::DEBUG, "execute_in_sandbox").entered(); + let result = apply::apply_vm_in_sandbox( + vm_permit, + shared_args, + adjust_pubdata_price, + &execution_args, + &connection_pool, + tx, + block_args, + |vm, tx| { + let storage_invocation_tracer = + StorageInvocations::new(execution_args.missed_storage_invocation_limit); + let custom_tracers: Vec<_> = custom_tracers + .into_iter() + .map(|tracer| tracer.into_boxed()) + .chain(vec![storage_invocation_tracer.into_tracer_pointer()]) + .collect(); + vm.inspect_transaction_with_bytecode_compression( + custom_tracers.into(), + tx, + true, + ) + }, + ); + span.exit(); + result + }) + .await + .unwrap(); + + let tx_execution_metrics = + vm_metrics::collect_tx_execution_metrics(total_factory_deps, &execution_result); + ( + execution_result, + tx_execution_metrics, + published_bytecodes.is_ok(), + ) + } -/// This method assumes that (block with number `resolved_block_number` is present in DB) -/// or (`block_id` is `pending` and block with number `resolved_block_number - 1` is present in DB) -#[allow(clippy::too_many_arguments)] -#[tracing::instrument(skip_all)] -async fn execute_tx_in_sandbox( - vm_permit: VmPermit, - shared_args: TxSharedArgs, - execution_args: TxExecutionArgs, - connection_pool: ConnectionPool, - tx: Transaction, - block_args: BlockArgs, - custom_tracers: Vec<ApiTracer>, -) -> (VmExecutionResultAndLogs, TransactionExecutionMetrics) { - let total_factory_deps = tx - .execute - .factory_deps - .as_ref() - .map_or(0, |deps| deps.len() as u16); - - let execution_result = tokio::task::spawn_blocking(move || { - let span = span!(Level::DEBUG, "execute_in_sandbox").entered(); - let result = apply::apply_vm_in_sandbox( - vm_permit, - shared_args, - &execution_args, - &connection_pool, - tx, - block_args, - |vm, tx| { - vm.push_transaction(tx); - let storage_invocation_tracer = - StorageInvocations::new(execution_args.missed_storage_invocation_limit); - let custom_tracers: Vec<_> = custom_tracers - .into_iter() - .map(|tracer| tracer.into_boxed()) - .chain(vec![storage_invocation_tracer.into_tracer_pointer()]) - .collect(); - vm.inspect(custom_tracers.into(), VmExecutionMode::OneTx) - }, - ); - span.exit(); - result - }) - .await - .unwrap(); - - let tx_execution_metrics = - vm_metrics::collect_tx_execution_metrics(total_factory_deps, &execution_result); - (execution_result, tx_execution_metrics) + #[allow(clippy::too_many_arguments)] + pub async fn execute_tx_eth_call( + &self, + vm_permit: VmPermit, + shared_args: TxSharedArgs, + connection_pool: ConnectionPool, + mut tx: L2Tx, + block_args: BlockArgs, + vm_execution_cache_misses_limit: Option<usize>, + custom_tracers: Vec<ApiTracer>, + ) -> VmExecutionResultAndLogs { + let enforced_base_fee = tx.common_data.fee.max_fee_per_gas.as_u64(); + let execution_args = + TxExecutionArgs::for_eth_call(enforced_base_fee, vm_execution_cache_misses_limit); + + if tx.common_data.signature.is_empty() { + tx.common_data.signature =
PackedEthSignature::default().serialize_packed().into(); + } + + // Protection against infinite-loop eth_calls and alike: + // limiting the amount of gas the call can use. + // We can't use `BLOCK_ERGS_LIMIT` here since the VM itself has some overhead. + tx.common_data.fee.gas_limit = ETH_CALL_GAS_LIMIT.into(); + let (vm_result, ..) = self + .execute_tx_in_sandbox( + vm_permit, + shared_args, + false, + execution_args, + connection_pool, + tx.into(), + block_args, + custom_tracers, + ) + .await; + + vm_result + } } diff --git a/core/lib/zksync_core/src/api_server/execution_sandbox/mod.rs b/core/lib/zksync_core/src/api_server/execution_sandbox/mod.rs index 67feced9d5e..95c6793d066 100644 --- a/core/lib/zksync_core/src/api_server/execution_sandbox/mod.rs +++ b/core/lib/zksync_core/src/api_server/execution_sandbox/mod.rs @@ -1,30 +1,35 @@ -use std::sync::Arc; -use std::time::Duration; -use tokio::runtime::Handle; +use std::{sync::Arc, time::Duration}; +use anyhow::Context; +use tokio::runtime::Handle; use zksync_dal::{ConnectionPool, SqlxError, StorageProcessor}; use zksync_state::{PostgresStorage, PostgresStorageCaches, ReadStorage, StorageView}; use zksync_system_constants::PUBLISH_BYTECODE_OVERHEAD; -use zksync_types::{api, AccountTreeId, L2ChainId, MiniblockNumber, U256}; +use zksync_types::{ + api, fee_model::BatchFeeInput, AccountTreeId, L1BatchNumber, L2ChainId, MiniblockNumber, +}; use zksync_utils::bytecode::{compress_bytecode, hash_bytecode}; -// Note: keep the modules private, and instead re-export functions that make public interface. -mod apply; -mod error; -mod execute; -mod tracers; -mod validate; -mod vm_metrics; - use self::vm_metrics::SandboxStage; pub(super) use self::{ error::SandboxExecutionError, - execute::{execute_tx_eth_call, execute_tx_with_pending_state, TxExecutionArgs}, + execute::{TransactionExecutor, TxExecutionArgs}, tracers::ApiTracer, vm_metrics::{SubmitTxStage, SANDBOX_METRICS}, }; use super::tx_sender::MultiVMBaseSystemContracts; -use multivm::vm_latest::utils::fee::derive_base_fee_and_gas_per_pubdata; + +// Note: keep the modules private, and instead re-export functions that make public interface. +mod apply; +mod error; +mod execute; +#[cfg(test)] +pub(super) mod testonly; +#[cfg(test)] +mod tests; +mod tracers; +mod validate; +mod vm_metrics; /// Permit to invoke VM code. 
/// @@ -142,28 +147,6 @@ impl VmConcurrencyLimiter { } } -pub(super) fn adjust_l1_gas_price_for_tx( - l1_gas_price: u64, - fair_l2_gas_price: u64, - tx_gas_per_pubdata_limit: U256, -) -> u64 { - let (_, current_pubdata_price) = - derive_base_fee_and_gas_per_pubdata(l1_gas_price, fair_l2_gas_price); - if U256::from(current_pubdata_price) <= tx_gas_per_pubdata_limit { - // The current pubdata price is small enough - l1_gas_price - } else { - // gasPerPubdata = ceil(17 * l1gasprice / fair_l2_gas_price) - // gasPerPubdata <= 17 * l1gasprice / fair_l2_gas_price + 1 - // fair_l2_gas_price(gasPerPubdata - 1) / 17 <= l1gasprice - let l1_gas_price = U256::from(fair_l2_gas_price) - * (tx_gas_per_pubdata_limit - U256::from(1u32)) - / U256::from(17); - - l1_gas_price.as_u64() - } -} - async fn get_pending_state( connection: &mut StorageProcessor<'_>, ) -> (api::BlockId, MiniblockNumber) { @@ -225,14 +208,69 @@ pub(super) async fn get_pubdata_for_factory_deps( #[derive(Debug, Clone)] pub(crate) struct TxSharedArgs { pub operator_account: AccountTreeId, - pub l1_gas_price: u64, - pub fair_l2_gas_price: u64, + pub fee_input: BatchFeeInput, pub base_system_contracts: MultiVMBaseSystemContracts, pub caches: PostgresStorageCaches, pub validation_computational_gas_limit: u32, pub chain_id: L2ChainId, } +/// Information about first L1 batch / miniblock in the node storage. +#[derive(Debug, Clone, Copy)] +pub(crate) struct BlockStartInfo { + /// Projected number of the first locally available miniblock. This miniblock is **not** + /// guaranteed to be present in the storage! + pub first_miniblock: MiniblockNumber, + /// Projected number of the first locally available L1 batch. This L1 batch is **not** + /// guaranteed to be present in the storage! + pub first_l1_batch: L1BatchNumber, +} + +impl BlockStartInfo { + pub async fn new(storage: &mut StorageProcessor<'_>) -> anyhow::Result<Self> { + let snapshot_recovery = storage + .snapshot_recovery_dal() + .get_applied_snapshot_status() + .await + .context("failed getting snapshot recovery status")?; + let snapshot_recovery = snapshot_recovery.as_ref(); + Ok(Self { + first_miniblock: snapshot_recovery + .map_or(MiniblockNumber(0), |recovery| recovery.miniblock_number + 1), + first_l1_batch: snapshot_recovery + .map_or(L1BatchNumber(0), |recovery| recovery.l1_batch_number + 1), + }) + } + + /// Checks whether a block with the specified ID is pruned and returns an error if it is. + /// The `Err` variant wraps the first non-pruned miniblock. + pub fn ensure_not_pruned_block(&self, block: api::BlockId) -> Result<(), MiniblockNumber> { + match block { + api::BlockId::Number(api::BlockNumber::Number(number)) + if number < self.first_miniblock.0.into() => + { + Err(self.first_miniblock) + } + api::BlockId::Number(api::BlockNumber::Earliest) + if self.first_miniblock > MiniblockNumber(0) => + { + Err(self.first_miniblock) + } + _ => Ok(()), + } + } +} + +#[derive(Debug, thiserror::Error)] +pub(crate) enum BlockArgsError { + #[error("Block is pruned; first retained block is {0}")] + Pruned(MiniblockNumber), + #[error("Block is missing, but can appear in the future")] + Missing, + #[error("Database error")] + Database(#[from] SqlxError), +} + /// Information about a block provided to VM.
#[derive(Debug, Clone, Copy)] pub(crate) struct BlockArgs { @@ -242,7 +280,7 @@ pub(crate) struct BlockArgs { } impl BlockArgs { - async fn pending(connection: &mut StorageProcessor<'_>) -> Self { + pub(crate) async fn pending(connection: &mut StorageProcessor<'_>) -> Self { let (block_id, resolved_block_number) = get_pending_state(connection).await; Self { block_id, @@ -255,9 +293,17 @@ impl BlockArgs { pub async fn new( connection: &mut StorageProcessor<'_>, block_id: api::BlockId, - ) -> Result<Option<Self>, SqlxError> { + start_info: BlockStartInfo, + ) -> Result<Self, BlockArgsError> { + // We need to check that `block_id` is present in Postgres or can be present in the future + // (i.e., it does not refer to a pruned block). If called for a pruned block, the returned value + // (specifically, `l1_batch_timestamp_s`) will be nonsensical. + start_info + .ensure_not_pruned_block(block_id) + .map_err(BlockArgsError::Pruned)?; + if block_id == api::BlockId::Number(api::BlockNumber::Pending) { - return Ok(Some(BlockArgs::pending(connection).await)); + return Ok(BlockArgs::pending(connection).await); } let resolved_block_number = connection @@ -265,27 +311,27 @@ impl BlockArgs { .resolve_block_id(block_id) .await?; let Some(resolved_block_number) = resolved_block_number else { - return Ok(None); + return Err(BlockArgsError::Missing); }; let l1_batch_number = connection .storage_web3_dal() .resolve_l1_batch_number_of_miniblock(resolved_block_number) - .await? - .expected_l1_batch(); + .await?; let l1_batch_timestamp_s = connection .blocks_web3_dal() - .get_expected_l1_batch_timestamp(l1_batch_number) + .get_expected_l1_batch_timestamp(&l1_batch_number) .await?; - assert!( - l1_batch_timestamp_s.is_some(), - "Missing batch timestamp for non-pending block" - ); - Ok(Some(Self { + if l1_batch_timestamp_s.is_none() { + // Can happen after snapshot recovery if no miniblocks are persisted yet. In this case, + // we cannot proceed; the issue will be resolved shortly.
+ return Err(BlockArgsError::Missing); + } + Ok(Self { block_id, resolved_block_number, l1_batch_timestamp_s, - })) + }) } pub fn resolved_block_number(&self) -> MiniblockNumber { diff --git a/core/lib/zksync_core/src/api_server/execution_sandbox/testonly.rs b/core/lib/zksync_core/src/api_server/execution_sandbox/testonly.rs new file mode 100644 index 00000000000..c9780c42e04 --- /dev/null +++ b/core/lib/zksync_core/src/api_server/execution_sandbox/testonly.rs @@ -0,0 +1,74 @@ +use std::collections::HashMap; + +use multivm::{ + interface::{ExecutionResult, VmExecutionResultAndLogs}, + tracers::validator::ValidationError, +}; +use zksync_types::{ + fee::TransactionExecutionMetrics, l2::L2Tx, ExecuteTransactionCommon, Transaction, H256, +}; + +use super::TransactionExecutor; + +type MockExecutionOutput = (VmExecutionResultAndLogs, TransactionExecutionMetrics, bool); + +#[derive(Debug, Default)] +pub(crate) struct MockTransactionExecutor { + call_responses: HashMap<Vec<u8>, MockExecutionOutput>, + tx_responses: HashMap<H256, MockExecutionOutput>, +} + +impl MockTransactionExecutor { + pub fn insert_call_response(&mut self, calldata: Vec<u8>, result: ExecutionResult) { + let result = VmExecutionResultAndLogs { + result, + logs: Default::default(), + statistics: Default::default(), + refunds: Default::default(), + }; + let output = (result, TransactionExecutionMetrics::default(), true); + self.call_responses.insert(calldata, output); + } + + pub fn insert_tx_response(&mut self, tx_hash: H256, result: ExecutionResult) { + let result = VmExecutionResultAndLogs { + result, + logs: Default::default(), + statistics: Default::default(), + refunds: Default::default(), + }; + let output = (result, TransactionExecutionMetrics::default(), true); + self.tx_responses.insert(tx_hash, output); + } + + pub fn validate_tx(&self, tx: &L2Tx) -> Result<(), ValidationError> { + self.tx_responses + .get(&tx.hash()) + .unwrap_or_else(|| panic!("Validating unexpected transaction: {tx:?}")); + Ok(()) + } + + pub fn execute_tx(&self, tx: &Transaction) -> MockExecutionOutput { + if let ExecuteTransactionCommon::L2(data) = &tx.common_data { + if data.input.is_none() { + // `Transaction` was obtained from a `CallRequest` + return self + .call_responses + .get(tx.execute.calldata()) + .unwrap_or_else(|| panic!("Executing unexpected call: {tx:?}")) + .clone(); + } + } + + self.tx_responses + .get(&tx.hash()) + .unwrap_or_else(|| panic!("Executing unexpected transaction: {tx:?}")) + .clone() + } +} + +impl From<MockTransactionExecutor> for TransactionExecutor { + fn from(executor: MockTransactionExecutor) -> Self { + Self::Mock(executor) + } +} diff --git a/core/lib/zksync_core/src/api_server/execution_sandbox/tests.rs b/core/lib/zksync_core/src/api_server/execution_sandbox/tests.rs new file mode 100644 index 00000000000..d81b4b94045 --- /dev/null +++ b/core/lib/zksync_core/src/api_server/execution_sandbox/tests.rs @@ -0,0 +1,155 @@ +//! Tests for the VM execution sandbox.
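A usage sketch (not part of the diff) for the mock defined above; it assumes the `ExecutionResult::Success { output }` variant from `multivm` and a `tx: L2Tx` produced by the surrounding test fixtures:

// Hypothetical test setup: stub a successful result for a known transaction hash,
// then turn the mock into a `TransactionExecutor` via the `From` impl above.
let mut mock_executor = MockTransactionExecutor::default();
mock_executor.insert_tx_response(tx.hash(), ExecutionResult::Success { output: vec![] });
let executor = TransactionExecutor::from(mock_executor);
// `executor.execute_tx_in_sandbox(..)` now short-circuits to the stubbed response.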
+ +use assert_matches::assert_matches; + +use super::*; +use crate::{ + genesis::{ensure_genesis_state, GenesisParams}, + utils::testonly::{create_miniblock, prepare_empty_recovery_snapshot}, +}; + +#[tokio::test] +async fn creating_block_args() { + let pool = ConnectionPool::test_pool().await; + let mut storage = pool.access_storage().await.unwrap(); + ensure_genesis_state(&mut storage, L2ChainId::default(), &GenesisParams::mock()) + .await + .unwrap(); + let miniblock = create_miniblock(1); + storage + .blocks_dal() + .insert_miniblock(&miniblock) + .await + .unwrap(); + + let pending_block_args = BlockArgs::pending(&mut storage).await; + assert_eq!( + pending_block_args.block_id, + api::BlockId::Number(api::BlockNumber::Pending) + ); + assert_eq!(pending_block_args.resolved_block_number, MiniblockNumber(2)); + assert_eq!(pending_block_args.l1_batch_timestamp_s, None); + + let start_info = BlockStartInfo::new(&mut storage).await.unwrap(); + assert_eq!(start_info.first_miniblock, MiniblockNumber(0)); + assert_eq!(start_info.first_l1_batch, L1BatchNumber(0)); + + let latest_block = api::BlockId::Number(api::BlockNumber::Latest); + let latest_block_args = BlockArgs::new(&mut storage, latest_block, start_info) + .await + .unwrap(); + assert_eq!(latest_block_args.block_id, latest_block); + assert_eq!(latest_block_args.resolved_block_number, MiniblockNumber(1)); + assert_eq!( + latest_block_args.l1_batch_timestamp_s, + Some(miniblock.timestamp) + ); + + let earliest_block = api::BlockId::Number(api::BlockNumber::Earliest); + let earliest_block_args = BlockArgs::new(&mut storage, earliest_block, start_info) + .await + .unwrap(); + assert_eq!(earliest_block_args.block_id, earliest_block); + assert_eq!( + earliest_block_args.resolved_block_number, + MiniblockNumber(0) + ); + assert_eq!(earliest_block_args.l1_batch_timestamp_s, Some(0)); + + let missing_block = api::BlockId::Number(100.into()); + let err = BlockArgs::new(&mut storage, missing_block, start_info) + .await + .unwrap_err(); + assert_matches!(err, BlockArgsError::Missing); +} + +#[tokio::test] +async fn creating_block_args_after_snapshot_recovery() { + let pool = ConnectionPool::test_pool().await; + let mut storage = pool.access_storage().await.unwrap(); + let snapshot_recovery = prepare_empty_recovery_snapshot(&mut storage, 23).await; + + let pending_block_args = BlockArgs::pending(&mut storage).await; + assert_eq!( + pending_block_args.block_id, + api::BlockId::Number(api::BlockNumber::Pending) + ); + assert_eq!( + pending_block_args.resolved_block_number, + snapshot_recovery.miniblock_number + 1 + ); + assert_eq!(pending_block_args.l1_batch_timestamp_s, None); + + let start_info = BlockStartInfo::new(&mut storage).await.unwrap(); + assert_eq!( + start_info.first_miniblock, + snapshot_recovery.miniblock_number + 1 + ); + assert_eq!( + start_info.first_l1_batch, + snapshot_recovery.l1_batch_number + 1 + ); + + let latest_block = api::BlockId::Number(api::BlockNumber::Latest); + let err = BlockArgs::new(&mut storage, latest_block, start_info) + .await + .unwrap_err(); + assert_matches!(err, BlockArgsError::Missing); + + let pruned_blocks = [ + api::BlockNumber::Earliest, + 0.into(), + snapshot_recovery.miniblock_number.0.into(), + ]; + for pruned_block in pruned_blocks { + let pruned_block = api::BlockId::Number(pruned_block); + let err = BlockArgs::new(&mut storage, pruned_block, start_info) + .await + .unwrap_err(); + assert_matches!(err, BlockArgsError::Pruned(_)); + } + + let missing_blocks = [ + 
api::BlockNumber::from(snapshot_recovery.miniblock_number.0 + 2), + 100.into(), + ]; + for missing_block in missing_blocks { + let missing_block = api::BlockId::Number(missing_block); + let err = BlockArgs::new(&mut storage, missing_block, start_info) + .await + .unwrap_err(); + assert_matches!(err, BlockArgsError::Missing); + } + + let miniblock = create_miniblock(snapshot_recovery.miniblock_number.0 + 1); + storage + .blocks_dal() + .insert_miniblock(&miniblock) + .await + .unwrap(); + + let latest_block_args = BlockArgs::new(&mut storage, latest_block, start_info) + .await + .unwrap(); + assert_eq!(latest_block_args.block_id, latest_block); + assert_eq!(latest_block_args.resolved_block_number, miniblock.number); + assert_eq!( + latest_block_args.l1_batch_timestamp_s, + Some(miniblock.timestamp) + ); + + for pruned_block in pruned_blocks { + let pruned_block = api::BlockId::Number(pruned_block); + let err = BlockArgs::new(&mut storage, pruned_block, start_info) + .await + .unwrap_err(); + assert_matches!(err, BlockArgsError::Pruned(_)); + } + for missing_block in missing_blocks { + let missing_block = api::BlockId::Number(missing_block); + let err = BlockArgs::new(&mut storage, missing_block, start_info) + .await + .unwrap_err(); + assert_matches!(err, BlockArgsError::Missing); + } +} diff --git a/core/lib/zksync_core/src/api_server/execution_sandbox/tracers.rs b/core/lib/zksync_core/src/api_server/execution_sandbox/tracers.rs index ac675eee707..e6add9a9e3b 100644 --- a/core/lib/zksync_core/src/api_server/execution_sandbox/tracers.rs +++ b/core/lib/zksync_core/src/api_server/execution_sandbox/tracers.rs @@ -1,13 +1,11 @@ -use multivm::tracers::CallTracer; -use multivm::vm_latest::HistoryMode; -use multivm::{MultiVmTracerPointer, MultivmTracer}; -use once_cell::sync::OnceCell; - use std::sync::Arc; + +use multivm::{tracers::CallTracer, vm_latest::HistoryMode, MultiVMTracer, MultiVmTracerPointer}; +use once_cell::sync::OnceCell; use zksync_state::WriteStorage; use zksync_types::vm_trace::Call; -/// Custom tracers supported by our api +/// Custom tracers supported by our API #[derive(Debug)] pub(crate) enum ApiTracer { CallTracer(Arc>>), @@ -16,7 +14,7 @@ pub(crate) enum ApiTracer { impl ApiTracer { pub fn into_boxed< S: WriteStorage, - H: HistoryMode + multivm::HistoryMode + 'static, + H: HistoryMode + multivm::HistoryMode + 'static, >( self, ) -> MultiVmTracerPointer { diff --git a/core/lib/zksync_core/src/api_server/execution_sandbox/validate.rs b/core/lib/zksync_core/src/api_server/execution_sandbox/validate.rs index 119d6423ba2..419e9804f88 100644 --- a/core/lib/zksync_core/src/api_server/execution_sandbox/validate.rs +++ b/core/lib/zksync_core/src/api_server/execution_sandbox/validate.rs @@ -1,61 +1,39 @@ -use multivm::interface::{ExecutionResult, VmExecutionMode, VmInterface}; -use multivm::MultivmTracer; use std::collections::HashSet; -use multivm::tracers::{ - validator::{ValidationError, ValidationTracer, ValidationTracerParams}, - StorageInvocations, +use multivm::{ + interface::{ExecutionResult, VmExecutionMode, VmInterface}, + tracers::{ + validator::{ValidationError, ValidationTracer, ValidationTracerParams}, + StorageInvocations, + }, + vm_latest::HistoryDisabled, + MultiVMTracer, }; -use multivm::vm_latest::HistoryDisabled; use zksync_dal::{ConnectionPool, StorageProcessor}; -use zksync_types::{l2::L2Tx, Transaction, TRUSTED_ADDRESS_SLOTS, TRUSTED_TOKEN_SLOTS, U256}; +use zksync_types::{l2::L2Tx, Transaction, TRUSTED_ADDRESS_SLOTS, TRUSTED_TOKEN_SLOTS}; use super::{ - 
adjust_l1_gas_price_for_tx, apply, + apply, + execute::TransactionExecutor, vm_metrics::{SandboxStage, EXECUTION_METRICS, SANDBOX_METRICS}, BlockArgs, TxExecutionArgs, TxSharedArgs, VmPermit, }; -impl TxSharedArgs { - pub async fn validate_tx_with_pending_state( - mut self, - vm_permit: VmPermit, - connection_pool: ConnectionPool, - tx: L2Tx, - computational_gas_limit: u32, - ) -> Result<(), ValidationError> { - let mut connection = connection_pool.access_storage_tagged("api").await.unwrap(); - let block_args = BlockArgs::pending(&mut connection).await; - drop(connection); - self.adjust_l1_gas_price(tx.common_data.fee.gas_per_pubdata_limit); - self.validate_tx_in_sandbox( - connection_pool, - vm_permit, - tx, - block_args, - computational_gas_limit, - ) - .await - } - - // In order for validation to pass smoothlessly, we need to ensure that block's required gasPerPubdata will be - // <= to the one in the transaction itself. - pub fn adjust_l1_gas_price(&mut self, gas_per_pubdata_limit: U256) { - self.l1_gas_price = adjust_l1_gas_price_for_tx( - self.l1_gas_price, - self.fair_l2_gas_price, - gas_per_pubdata_limit, - ); - } - - async fn validate_tx_in_sandbox( - self, +impl TransactionExecutor { + pub(crate) async fn validate_tx_in_sandbox( + &self, connection_pool: ConnectionPool, vm_permit: VmPermit, tx: L2Tx, + shared_args: TxSharedArgs, block_args: BlockArgs, computational_gas_limit: u32, ) -> Result<(), ValidationError> { + #[cfg(test)] + if let Self::Mock(mock) = self { + return mock.validate_tx(&tx); + } + let stage_latency = SANDBOX_METRICS.sandbox[&SandboxStage::ValidateInSandbox].start(); let mut connection = connection_pool.access_storage_tagged("api").await.unwrap(); let validation_params = @@ -69,7 +47,8 @@ impl TxSharedArgs { let span = tracing::debug_span!("validate_in_sandbox").entered(); let result = apply::apply_vm_in_sandbox( vm_permit, - self, + shared_args, + true, &execution_args, &connection_pool, tx, diff --git a/core/lib/zksync_core/src/api_server/execution_sandbox/vm_metrics.rs b/core/lib/zksync_core/src/api_server/execution_sandbox/vm_metrics.rs index 138d06a3a7c..82e082d4dd8 100644 --- a/core/lib/zksync_core/src/api_server/execution_sandbox/vm_metrics.rs +++ b/core/lib/zksync_core/src/api_server/execution_sandbox/vm_metrics.rs @@ -1,12 +1,13 @@ -use vise::{Buckets, EncodeLabelSet, EncodeLabelValue, Family, Gauge, Histogram, Metrics}; - use std::time::Duration; use multivm::interface::{VmExecutionResultAndLogs, VmMemoryMetrics}; +use vise::{Buckets, EncodeLabelSet, EncodeLabelValue, Family, Gauge, Histogram, Metrics}; use zksync_state::StorageViewMetrics; -use zksync_types::event::{extract_long_l2_to_l1_messages, extract_published_bytecodes}; -use zksync_types::fee::TransactionExecutionMetrics; -use zksync_types::storage_writes_deduplicator::StorageWritesDeduplicator; +use zksync_types::{ + event::{extract_long_l2_to_l1_messages, extract_published_bytecodes}, + fee::TransactionExecutionMetrics, + storage_writes_deduplicator::StorageWritesDeduplicator, +}; use zksync_utils::bytecode::bytecode_len_in_bytes; use crate::metrics::InteractionType; @@ -239,5 +240,6 @@ pub(super) fn collect_tx_execution_metrics( computational_gas_used: result.statistics.computational_gas_used, total_updated_values_size: writes_metrics.total_updated_values_size, pubdata_published: result.statistics.pubdata_published, + estimated_circuits_used: result.statistics.estimated_circuits_used, } } diff --git a/core/lib/zksync_core/src/api_server/healthcheck.rs 
b/core/lib/zksync_core/src/api_server/healthcheck.rs index 74495f3439c..7010d29fb4b 100644 --- a/core/lib/zksync_core/src/api_server/healthcheck.rs +++ b/core/lib/zksync_core/src/api_server/healthcheck.rs @@ -1,8 +1,7 @@ -use axum::{extract::State, http::StatusCode, routing::get, Json, Router}; -use tokio::sync::watch; - use std::{collections::HashSet, net::SocketAddr, sync::Arc, time::Duration}; +use axum::{extract::State, http::StatusCode, routing::get, Json, Router}; +use tokio::sync::watch; use zksync_health_check::{AppHealth, CheckHealth}; type SharedHealthchecks = Arc<[Box<dyn CheckHealth>]>; @@ -75,7 +74,7 @@ impl HealthCheckHandle { pub async fn stop(self) { // Paradoxically, `hyper` server is quite slow to shut down if it isn't queried during shutdown: - // https://github.com/hyperium/hyper/issues/3188. It is thus recommended to set a timeout for shutdown. + // <https://github.com/hyperium/hyper/issues/3188>. It is thus recommended to set a timeout for shutdown. const GRACEFUL_SHUTDOWN_WAIT: Duration = Duration::from_secs(10); self.stop_sender.send(true).ok(); diff --git a/core/lib/zksync_core/src/api_server/tree/metrics.rs b/core/lib/zksync_core/src/api_server/tree/metrics.rs index e6b552468d8..d185861d07c 100644 --- a/core/lib/zksync_core/src/api_server/tree/metrics.rs +++ b/core/lib/zksync_core/src/api_server/tree/metrics.rs @@ -1,9 +1,9 @@ //! Metrics for the Merkle tree API. -use vise::{Buckets, EncodeLabelSet, EncodeLabelValue, Family, Histogram, Metrics, Unit}; - use std::time::Duration; +use vise::{Buckets, EncodeLabelSet, EncodeLabelValue, Family, Histogram, Metrics, Unit}; + #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, EncodeLabelValue, EncodeLabelSet)] #[metrics(label = "method", rename_all = "snake_case")] pub(super) enum MerkleTreeApiMethod { diff --git a/core/lib/zksync_core/src/api_server/tree/mod.rs b/core/lib/zksync_core/src/api_server/tree/mod.rs index 74dd3e5b70c..a6b6d51fbaa 100644 --- a/core/lib/zksync_core/src/api_server/tree/mod.rs +++ b/core/lib/zksync_core/src/api_server/tree/mod.rs @@ -1,5 +1,7 @@ //! Primitive Merkle tree API used internally to fetch proofs.
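For illustration (not part of the diff), the internal API added below can be queried with the HTTP client exercised in the tests; the address is arbitrary, the error handling is assumed, and `get_info` is the only client method shown in this patch:

// Hypothetical usage of the tree API client (see `tests.rs` below for the real wiring).
let api_client = TreeApiHttpClient::new("http://127.0.0.1:3071");
let tree_info = api_client.get_info().await?;
tracing::info!("Merkle tree info: {tree_info:?}");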
+use std::{fmt, future::Future, net::SocketAddr, pin::Pin}; + use anyhow::Context as _; use async_trait::async_trait; use axum::{ @@ -10,19 +12,16 @@ use axum::{ }; use serde::{Deserialize, Serialize}; use tokio::sync::watch; - -use std::{fmt, future::Future, net::SocketAddr, pin::Pin}; - use zksync_merkle_tree::NoVersionError; use zksync_types::{L1BatchNumber, H256, U256}; +use self::metrics::{MerkleTreeApiMethod, API_METRICS}; +use crate::metadata_calculator::{AsyncTreeReader, MerkleTreeInfo}; + mod metrics; #[cfg(test)] mod tests; -use self::metrics::{MerkleTreeApiMethod, API_METRICS}; -use crate::metadata_calculator::{AsyncTreeReader, MerkleTreeInfo}; - #[derive(Debug, Serialize, Deserialize)] struct TreeProofsRequest { l1_batch_number: L1BatchNumber, @@ -54,7 +53,7 @@ impl TreeEntryWithProof { let mut merkle_path = src.merkle_path; merkle_path.reverse(); // Use root-to-leaf enumeration direction as in Ethereum Self { - value: src.base.value_hash, + value: src.base.value, index: src.base.leaf_index, merkle_path, } @@ -74,7 +73,7 @@ impl IntoResponse for TreeApiError { } }; - // Loosely conforms to HTTP Problem Details RFC: https://datatracker.ietf.org/doc/html/rfc7807 + // Loosely conforms to HTTP Problem Details RFC: <https://datatracker.ietf.org/doc/html/rfc7807> let body = serde_json::json!({ "type": "/errors#l1-batch-not-found", "title": title, diff --git a/core/lib/zksync_core/src/api_server/tree/tests.rs b/core/lib/zksync_core/src/api_server/tree/tests.rs index 2f90b9fabdf..d934aaab476 100644 --- a/core/lib/zksync_core/src/api_server/tree/tests.rs +++ b/core/lib/zksync_core/src/api_server/tree/tests.rs @@ -1,9 +1,8 @@ //! Tests for the Merkle tree API. -use tempfile::TempDir; - use std::net::Ipv4Addr; +use tempfile::TempDir; use zksync_dal::ConnectionPool; use super::*; @@ -14,22 +13,25 @@ use crate::metadata_calculator::tests::{ #[tokio::test] async fn merkle_tree_api() { let pool = ConnectionPool::test_pool().await; - let prover_pool = ConnectionPool::test_pool().await; let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); let (calculator, _) = setup_calculator(temp_dir.path(), &pool).await; let api_addr = (Ipv4Addr::LOCALHOST, 0).into(); + + reset_db_state(&pool, 5).await; + let tree_reader = calculator.tree_reader(); + let calculator_task = tokio::spawn(run_calculator(calculator, pool)); + let (stop_sender, stop_receiver) = watch::channel(false); - let api_server = calculator - .tree_reader() + let api_server = tree_reader + .await .create_api_server(&api_addr, stop_receiver.clone()) .unwrap(); let local_addr = *api_server.local_addr(); let api_server_task = tokio::spawn(api_server.run()); let api_client = TreeApiHttpClient::new(&format!("http://{local_addr}")); - reset_db_state(&pool, 5).await; // Wait until the calculator processes initial L1 batches. - run_calculator(calculator, pool, prover_pool).await; + calculator_task.await.unwrap(); // Query the API. let tree_info = api_client.get_info().await.unwrap(); diff --git a/core/lib/zksync_core/src/api_server/tx_sender/mod.rs b/core/lib/zksync_core/src/api_server/tx_sender/mod.rs index 400888fa624..0c519bb1560 100644 --- a/core/lib/zksync_core/src/api_server/tx_sender/mod.rs +++ b/core/lib/zksync_core/src/api_server/tx_sender/mod.rs @@ -1,67 +1,50 @@ //! Helper module to submit transactions into the zkSync Network.
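Before the diff body, a sketch (illustrative, not authoritative) of the fee-derivation flow this module implements for gas estimation; the three calls appear verbatim in the changes below, while the surrounding variables are assumed to be in scope:

// 1. Scale the batch fee input by the configured factor (L1 gas and pubdata alike).
let fee_input = batch_fee_input_provider.get_batch_fee_input_scaled(scale_factor, scale_factor);
// 2. Clamp the pubdata price so the tx's own gas-per-pubdata limit is respected.
let fee_input = adjust_pubdata_price_for_tx(fee_input, tx.gas_per_pubdata_byte_limit(), protocol_version.into());
// 3. Derive the base fee and gas-per-pubdata for the target protocol version.
let (base_fee, gas_per_pubdata) = derive_base_fee_and_gas_per_pubdata(fee_input, protocol_version.into());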
-// External uses -use governor::{ - clock::MonotonicClock, - middleware::NoOpMiddleware, - state::{InMemoryState, NotKeyed}, - Quota, RateLimiter, -}; - -// Built-in uses -use std::{cmp, num::NonZeroU32, sync::Arc, time::Instant}; +use std::{cmp, sync::Arc, time::Instant}; -// Workspace uses - -use multivm::interface::VmExecutionResultAndLogs; -use multivm::vm_latest::{ - constants::{BLOCK_GAS_LIMIT, MAX_PUBDATA_PER_BLOCK}, - utils::{ - fee::derive_base_fee_and_gas_per_pubdata, - overhead::{derive_overhead, OverheadCoeficients}, - }, +use anyhow::Context as _; +use multivm::{ + interface::VmExecutionResultAndLogs, + utils::{adjust_pubdata_price_for_tx, derive_base_fee_and_gas_per_pubdata, derive_overhead}, + vm_latest::constants::{BLOCK_GAS_LIMIT, MAX_PUBDATA_PER_BLOCK}, }; - use zksync_config::configs::{api::Web3JsonRpcConfig, chain::StateKeeperConfig}; use zksync_contracts::BaseSystemContracts; use zksync_dal::{transactions_dal::L2TxSubmissionResult, ConnectionPool}; use zksync_state::PostgresStorageCaches; +use zksync_system_constants::DEFAULT_L2_TX_GAS_PER_PUBDATA_BYTE; use zksync_types::{ fee::{Fee, TransactionExecutionMetrics}, + fee_model::BatchFeeInput, get_code_key, get_intrinsic_constants, - l2::error::TxCheckError::TxDuplication, - l2::L2Tx, + l1::is_l1_tx_type, + l2::{error::TxCheckError::TxDuplication, L2Tx}, utils::storage_key_for_eth_balance, - AccountTreeId, Address, ExecuteTransactionCommon, L2ChainId, Nonce, PackedEthSignature, - ProtocolVersionId, Transaction, H160, H256, MAX_GAS_PER_PUBDATA_BYTE, MAX_L2_TX_GAS_LIMIT, + AccountTreeId, Address, ExecuteTransactionCommon, L2ChainId, MiniblockNumber, Nonce, + PackedEthSignature, ProtocolVersionId, Transaction, VmVersion, H160, H256, MAX_L2_TX_GAS_LIMIT, MAX_NEW_FACTORY_DEPS, U256, }; - use zksync_utils::h256_to_u256; -// Local uses -use crate::api_server::{ - execution_sandbox::{ - adjust_l1_gas_price_for_tx, execute_tx_eth_call, execute_tx_with_pending_state, - get_pubdata_for_factory_deps, BlockArgs, SubmitTxStage, TxExecutionArgs, TxSharedArgs, - VmConcurrencyLimiter, VmPermit, SANDBOX_METRICS, - }, - tx_sender::result::ApiCallResult, -}; +pub(super) use self::{proxy::TxProxy, result::SubmitTxError}; use crate::{ - l1_gas_price::L1GasPriceProvider, + api_server::{ + execution_sandbox::{ + get_pubdata_for_factory_deps, BlockArgs, BlockStartInfo, SubmitTxStage, + TransactionExecutor, TxExecutionArgs, TxSharedArgs, VmConcurrencyLimiter, VmPermit, + SANDBOX_METRICS, + }, + tx_sender::result::ApiCallResult, + }, + fee_model::BatchFeeModelInputProvider, metrics::{TxStage, APP_METRICS}, - state_keeper::seal_criteria::{ConditionalSealer, SealData}, + state_keeper::seal_criteria::{ConditionalSealer, NoopSealer, SealData}, }; mod proxy; mod result; - -pub(super) use self::{proxy::TxProxy, result::SubmitTxError}; - -/// Type alias for the rate limiter implementation. -type TxSenderRateLimiter = - RateLimiter>; +#[cfg(test)] +pub(crate) mod tests; #[derive(Debug, Clone)] pub struct MultiVMBaseSystemContracts { @@ -73,6 +56,10 @@ pub struct MultiVMBaseSystemContracts { pub(crate) post_virtual_blocks_finish_upgrade_fix: BaseSystemContracts, /// Contracts to be used for post-boojum protocol versions. 
pub(crate) post_boojum: BaseSystemContracts, + /// Contracts to be used after the allow-list removal upgrade + pub(crate) post_allowlist_removal: BaseSystemContracts, + /// Contracts to be used after the 1.4.1 upgrade + pub(crate) post_1_4_1: BaseSystemContracts, } impl MultiVMBaseSystemContracts { @@ -96,7 +83,9 @@ impl MultiVMBaseSystemContracts { | ProtocolVersionId::Version15 | ProtocolVersionId::Version16 | ProtocolVersionId::Version17 => self.post_virtual_blocks_finish_upgrade_fix, - ProtocolVersionId::Version18 | ProtocolVersionId::Version19 => self.post_boojum, + ProtocolVersionId::Version18 => self.post_boojum, + ProtocolVersionId::Version19 => self.post_allowlist_removal, + ProtocolVersionId::Version20 | ProtocolVersionId::Version21 => self.post_1_4_1, } } } @@ -111,7 +100,7 @@ pub struct ApiContracts { pub(crate) estimate_gas: MultiVMBaseSystemContracts, /// Contracts to be used when performing `eth_call` requests. /// These contracts (mainly, bootloader) normally should be tuned to provide better UX - /// exeprience (e.g. revert messages). + /// experience (e.g. revert messages). pub(crate) eth_call: MultiVMBaseSystemContracts, } @@ -127,6 +116,8 @@ impl ApiContracts { post_virtual_blocks_finish_upgrade_fix: BaseSystemContracts::estimate_gas_post_virtual_blocks_finish_upgrade_fix(), post_boojum: BaseSystemContracts::estimate_gas_post_boojum(), + post_allowlist_removal: BaseSystemContracts::estimate_gas_post_allowlist_removal(), + post_1_4_1: BaseSystemContracts::estimate_gas_post_1_4_1(), }, eth_call: MultiVMBaseSystemContracts { pre_virtual_blocks: BaseSystemContracts::playground_pre_virtual_blocks(), @@ -134,6 +125,8 @@ impl ApiContracts { post_virtual_blocks_finish_upgrade_fix: BaseSystemContracts::playground_post_virtual_blocks_finish_upgrade_fix(), post_boojum: BaseSystemContracts::playground_post_boojum(), + post_allowlist_removal: BaseSystemContracts::playground_post_allowlist_removal(), + post_1_4_1: BaseSystemContracts::playground_post_1_4_1(), }, } } @@ -148,13 +141,10 @@ pub struct TxSenderBuilder { replica_connection_pool: ConnectionPool, /// Connection pool for write requests. If not set, `proxy` must be set. master_connection_pool: Option, - /// Rate limiter for tx submissions. - rate_limiter: Option, /// Proxy to submit transactions to the network. If not set, `master_connection_pool` must be set. proxy: Option, - /// Actual state keeper configuration, required for tx verification. - /// If not set, transactions would not be checked against seal criteria. - state_keeper_config: Option, + /// Batch sealer used to check whether transaction can be executed by the sequencer. 
+ sealer: Option>, } impl TxSenderBuilder { @@ -163,21 +153,14 @@ impl TxSenderBuilder { config, replica_connection_pool, master_connection_pool: None, - rate_limiter: None, proxy: None, - state_keeper_config: None, + sealer: None, } } - pub fn with_rate_limiter(self, transactions_per_sec: u32) -> Self { - let rate_limiter = RateLimiter::direct_with_clock( - Quota::per_second(NonZeroU32::new(transactions_per_sec).unwrap()), - &MonotonicClock, - ); - Self { - rate_limiter: Some(rate_limiter), - ..self - } + pub fn with_sealer(mut self, sealer: Arc) -> Self { + self.sealer = Some(sealer); + self } pub fn with_tx_proxy(mut self, main_node_url: &str) -> Self { @@ -190,34 +173,32 @@ impl TxSenderBuilder { self } - pub fn with_state_keeper_config(mut self, state_keeper_config: StateKeeperConfig) -> Self { - self.state_keeper_config = Some(state_keeper_config); - self - } - - pub async fn build( + pub async fn build( self, - l1_gas_price_source: Arc, + batch_fee_input_provider: Arc, vm_concurrency_limiter: Arc, api_contracts: ApiContracts, storage_caches: PostgresStorageCaches, - ) -> TxSender { + ) -> TxSender { assert!( self.master_connection_pool.is_some() || self.proxy.is_some(), "Either master connection pool or proxy must be set" ); + // Use noop sealer if no sealer was explicitly provided. + let sealer = self.sealer.unwrap_or_else(|| Arc::new(NoopSealer)); + TxSender(Arc::new(TxSenderInner { sender_config: self.config, master_connection_pool: self.master_connection_pool, replica_connection_pool: self.replica_connection_pool, - l1_gas_price_source, + batch_fee_input_provider, api_contracts, - rate_limiter: self.rate_limiter, proxy: self.proxy, - state_keeper_config: self.state_keeper_config, vm_concurrency_limiter, storage_caches, + sealer, + executor: TransactionExecutor::Real, })) } } @@ -232,9 +213,9 @@ pub struct TxSenderConfig { pub gas_price_scale_factor: f64, pub max_nonce_ahead: u32, pub max_allowed_l2_tx_gas_limit: u32, - pub fair_l2_gas_price: u64, pub vm_execution_cache_misses_limit: Option, pub validation_computational_gas_limit: u32, + pub l1_to_l2_transactions_compatibility_mode: bool, pub chain_id: L2ChainId, } @@ -249,54 +230,44 @@ impl TxSenderConfig { gas_price_scale_factor: web3_json_config.gas_price_scale_factor, max_nonce_ahead: web3_json_config.max_nonce_ahead, max_allowed_l2_tx_gas_limit: state_keeper_config.max_allowed_l2_tx_gas_limit, - fair_l2_gas_price: state_keeper_config.fair_l2_gas_price, vm_execution_cache_misses_limit: web3_json_config.vm_execution_cache_misses_limit, validation_computational_gas_limit: state_keeper_config .validation_computational_gas_limit, + l1_to_l2_transactions_compatibility_mode: web3_json_config + .l1_to_l2_transactions_compatibility_mode, chain_id, } } } -pub struct TxSenderInner { +pub struct TxSenderInner { pub(super) sender_config: TxSenderConfig, pub master_connection_pool: Option, pub replica_connection_pool: ConnectionPool, // Used to keep track of gas prices for the fee ticker. - pub l1_gas_price_source: Arc, + pub batch_fee_input_provider: Arc, pub(super) api_contracts: ApiContracts, - /// Optional rate limiter that will limit the amount of transactions per second sent from a single entity. - rate_limiter: Option, /// Optional transaction proxy to be used for transaction submission. pub(super) proxy: Option, - /// An up-to-date version of the state keeper config. - /// This field may be omitted on the external node, since the configuration may change unexpectedly. 
- /// If this field is set to `None`, `TxSender` will assume that any transaction is executable. - state_keeper_config: Option<StateKeeperConfig>, /// Used to limit the amount of VMs that can be executed simultaneously. pub(super) vm_concurrency_limiter: Arc<VmConcurrencyLimiter>, // Caches used in VM execution. storage_caches: PostgresStorageCaches, + /// Batch sealer used to check whether transaction can be executed by the sequencer. + sealer: Arc<dyn ConditionalSealer>, + pub(super) executor: TransactionExecutor, } -pub struct TxSender<G>(pub(super) Arc<TxSenderInner<G>>); +#[derive(Clone)] +pub struct TxSender(pub(super) Arc<TxSenderInner>); -// Custom implementation is required due to generic param: -// Even though it's under `Arc`, compiler doesn't generate the `Clone` implementation unless -// an unnecessary bound is added. -impl<G> Clone for TxSender<G> { - fn clone(&self) -> Self { - Self(self.0.clone()) - } -} - -impl<G> std::fmt::Debug for TxSender<G> { +impl std::fmt::Debug for TxSender { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("TxSender").finish() } } -impl<G: L1GasPriceProvider> TxSender<G> { +impl TxSender { pub(crate) fn vm_concurrency_limiter(&self) -> Arc<VmConcurrencyLimiter> { Arc::clone(&self.0.vm_concurrency_limiter) } @@ -305,14 +276,9 @@ impl<G: L1GasPriceProvider> TxSender<G> { self.0.storage_caches.clone() } + // TODO (PLA-725): propagate DB errors instead of panicking #[tracing::instrument(skip(self, tx))] pub async fn submit_tx(&self, tx: L2Tx) -> Result<L2TxSubmissionResult, SubmitTxError> { - if let Some(rate_limiter) = &self.0.rate_limiter { - if rate_limiter.check().is_err() { - return Err(SubmitTxError::RateLimitExceeded); - } - } - let stage_latency = SANDBOX_METRICS.submit_tx[&SubmitTxStage::Validate].start(); self.validate_tx(&tx).await?; stage_latency.observe(); @@ -321,15 +287,29 @@ impl<G: L1GasPriceProvider> TxSender<G> { let shared_args = self.shared_args(); let vm_permit = self.0.vm_concurrency_limiter.acquire().await; let vm_permit = vm_permit.ok_or(SubmitTxError::ServerShuttingDown)?; + let mut connection = self + .0 + .replica_connection_pool + .access_storage_tagged("api") + .await + .unwrap(); + let block_args = BlockArgs::pending(&mut connection).await; + drop(connection); - let (_, tx_metrics) = execute_tx_with_pending_state( - vm_permit.clone(), - shared_args.clone(), - TxExecutionArgs::for_validation(&tx), - self.0.replica_connection_pool.clone(), - tx.clone().into(), - ) - .await; + let (_, tx_metrics, published_bytecodes) = self + .0 + .executor + .execute_tx_in_sandbox( + vm_permit.clone(), + shared_args.clone(), + true, + TxExecutionArgs::for_validation(&tx), + self.0.replica_connection_pool.clone(), + tx.clone().into(), + block_args, + vec![], + ) + .await; tracing::info!( "Submit tx {:?} with execution metrics {:?}", @@ -340,11 +320,15 @@ impl<G: L1GasPriceProvider> TxSender<G> { let stage_latency = SANDBOX_METRICS.submit_tx[&SubmitTxStage::VerifyExecute].start(); let computational_gas_limit = self.0.sender_config.validation_computational_gas_limit; - let validation_result = shared_args - .validate_tx_with_pending_state( - vm_permit, + let validation_result = self + .0 + .executor + .validate_tx_in_sandbox( self.0.replica_connection_pool.clone(), + vm_permit, tx.clone(), + shared_args, + block_args, computational_gas_limit, ) .await; @@ -354,6 +338,10 @@ impl<G: L1GasPriceProvider> TxSender<G> { return Err(err.into()); } + if !published_bytecodes { + return Err(SubmitTxError::FailedToPublishCompressedBytecodes); + } + let stage_started_at = Instant::now(); self.ensure_tx_executable(tx.clone().into(), &tx_metrics, true)?; @@ -379,7 +367,7 @@ impl<G: L1GasPriceProvider> TxSender<G> { let nonce = tx.common_data.nonce.0; let hash = tx.hash(); - let expected_nonce = self.get_expected_nonce(&tx).await; + let
initiator_account = tx.initiator_account(); let submission_res_handle = self .0 .master_connection_pool @@ -395,11 +383,15 @@ impl TxSender { APP_METRICS.processed_txs[&TxStage::Mempool(submission_res_handle)].inc(); match submission_res_handle { - L2TxSubmissionResult::AlreadyExecuted => Err(SubmitTxError::NonceIsTooLow( - expected_nonce.0, - expected_nonce.0 + self.0.sender_config.max_nonce_ahead, - nonce, - )), + L2TxSubmissionResult::AlreadyExecuted => { + let Nonce(expected_nonce) = + self.get_expected_nonce(initiator_account).await.unwrap(); + Err(SubmitTxError::NonceIsTooLow( + expected_nonce, + expected_nonce + self.0.sender_config.max_nonce_ahead, + nonce, + )) + } L2TxSubmissionResult::Duplicate => Err(SubmitTxError::IncorrectTx(TxDuplication(hash))), _ => { SANDBOX_METRICS.submit_tx[&SubmitTxStage::DbInsert] @@ -412,8 +404,7 @@ impl TxSender { fn shared_args(&self) -> TxSharedArgs { TxSharedArgs { operator_account: AccountTreeId::new(self.0.sender_config.fee_account_addr), - l1_gas_price: self.0.l1_gas_price_source.estimate_effective_gas_price(), - fair_l2_gas_price: self.0.sender_config.fair_l2_gas_price, + fee_input: self.0.batch_fee_input_provider.get_batch_fee_input(), base_system_contracts: self.0.api_contracts.eth_call.clone(), caches: self.storage_caches(), validation_computational_gas_limit: self @@ -432,6 +423,8 @@ impl TxSender { return Err(SubmitTxError::GasLimitIsTooBig); } + let fee_input = self.0.batch_fee_input_provider.get_batch_fee_input(); + // TODO (SMA-1715): do not subsidize the overhead for the transaction if tx.common_data.fee.gas_limit > self.0.sender_config.max_allowed_l2_tx_gas_limit.into() { @@ -442,7 +435,7 @@ impl TxSender { ); return Err(SubmitTxError::GasLimitIsTooBig); } - if tx.common_data.fee.max_fee_per_gas < self.0.sender_config.fair_l2_gas_price.into() { + if tx.common_data.fee.max_fee_per_gas < fee_input.fair_l2_gas_price().into() { tracing::info!( "Submitted Tx is Unexecutable {:?} because of MaxFeePerGasTooLow {}", tx.hash(), @@ -465,19 +458,12 @@ impl TxSender { )); } - let l1_gas_price = self.0.l1_gas_price_source.estimate_effective_gas_price(); - let (_, gas_per_pubdata_byte) = derive_base_fee_and_gas_per_pubdata( - l1_gas_price, - self.0.sender_config.fair_l2_gas_price, - ); - let effective_gas_per_pubdata = cmp::min( - tx.common_data.fee.gas_per_pubdata_limit, - gas_per_pubdata_byte.into(), - ); - let intrinsic_consts = get_intrinsic_constants(); - let min_gas_limit = U256::from(intrinsic_consts.l2_tx_intrinsic_gas) - + U256::from(intrinsic_consts.l2_tx_intrinsic_pubdata) * effective_gas_per_pubdata; + assert!( + intrinsic_consts.l2_tx_intrinsic_pubdata == 0, + "Currently we assume that the L2 transactions do not have any intrinsic pubdata" + ); + let min_gas_limit = U256::from(intrinsic_consts.l2_tx_intrinsic_gas); if tx.common_data.fee.gas_limit < min_gas_limit { return Err(SubmitTxError::IntrinsicGas); } @@ -492,19 +478,22 @@ impl TxSender { } async fn validate_account_nonce(&self, tx: &L2Tx) -> Result<(), SubmitTxError> { - let expected_nonce = self.get_expected_nonce(tx).await; + let Nonce(expected_nonce) = self + .get_expected_nonce(tx.initiator_account()) + .await + .unwrap(); - if tx.common_data.nonce.0 < expected_nonce.0 { + if tx.common_data.nonce.0 < expected_nonce { Err(SubmitTxError::NonceIsTooLow( - expected_nonce.0, - expected_nonce.0 + self.0.sender_config.max_nonce_ahead, + expected_nonce, + expected_nonce + self.0.sender_config.max_nonce_ahead, tx.nonce().0, )) } else { - let max_nonce = expected_nonce.0 + 
self.0.sender_config.max_nonce_ahead; - if !(expected_nonce.0..=max_nonce).contains(&tx.common_data.nonce.0) { + let max_nonce = expected_nonce + self.0.sender_config.max_nonce_ahead; + if !(expected_nonce..=max_nonce).contains(&tx.common_data.nonce.0) { Err(SubmitTxError::NonceIsTooHigh( - expected_nonce.0, + expected_nonce, max_nonce, tx.nonce().0, )) @@ -514,25 +503,37 @@ impl<G: L1GasPriceProvider> TxSender<G> { } } - async fn get_expected_nonce(&self, tx: &L2Tx) -> Nonce { - let mut connection = self + async fn get_expected_nonce(&self, initiator_account: Address) -> anyhow::Result<Nonce> { + let mut storage = self .0 .replica_connection_pool .access_storage_tagged("api") - .await - .unwrap(); + .await?; - let latest_block_number = connection - .blocks_web3_dal() + let latest_block_number = storage + .blocks_dal() .get_sealed_miniblock_number() .await - .unwrap(); - let nonce = connection + .context("failed getting sealed miniblock number")?; + let latest_block_number = match latest_block_number { + Some(number) => number, + None => { + // We don't have miniblocks in the storage yet. Use the snapshot miniblock number instead. + let start = BlockStartInfo::new(&mut storage).await?; + MiniblockNumber(start.first_miniblock.saturating_sub(1)) + } + }; + + let nonce = storage .storage_web3_dal() - .get_address_historical_nonce(tx.initiator_account(), latest_block_number) + .get_address_historical_nonce(initiator_account, latest_block_number) .await - .unwrap(); - Nonce(nonce.as_u32()) + .with_context(|| { + format!("failed getting nonce for address {initiator_account:?} at miniblock #{latest_block_number}") + })?; + let nonce = u32::try_from(nonce) + .map_err(|err| anyhow::anyhow!("failed converting nonce to u32: {err}"))?; + Ok(Nonce(nonce)) } async fn validate_enough_balance(&self, tx: &L2Tx) -> Result<(), SubmitTxError> { @@ -547,11 +548,7 @@ impl<G: L1GasPriceProvider> TxSender<G> { let balance = self.get_balance(&tx.common_data.initiator_address).await; // Estimate the minimum fee price user will agree to.
- let gas_price = cmp::min( - tx.common_data.fee.max_fee_per_gas, - U256::from(self.0.sender_config.fair_l2_gas_price) - + tx.common_data.fee.max_priority_fee_per_gas, - ); + let gas_price = tx.common_data.fee.max_fee_per_gas; let max_fee = tx.common_data.fee.gas_limit * gas_price; let max_fee_and_value = max_fee + tx.execute.value; @@ -590,17 +587,20 @@ impl TxSender { &self, vm_permit: VmPermit, mut tx: Transaction, - gas_per_pubdata_byte: u64, tx_gas_limit: u32, - l1_gas_price: u64, + gas_price_per_pubdata: u32, + fee_model_params: BatchFeeInput, + block_args: BlockArgs, base_fee: u64, + vm_version: VmVersion, ) -> (VmExecutionResultAndLogs, TransactionExecutionMetrics) { let gas_limit_with_overhead = tx_gas_limit + derive_overhead( tx_gas_limit, - gas_per_pubdata_byte as u32, + gas_price_per_pubdata, tx.encoding_len(), - OverheadCoeficients::from_tx_type(tx.tx_format() as u8), + tx.tx_format() as u8, + vm_version, ); match &mut tx.common_data { @@ -623,28 +623,34 @@ impl TxSender { } } - let shared_args = self.shared_args_for_gas_estimate(l1_gas_price); + let shared_args = self.shared_args_for_gas_estimate(fee_model_params); let vm_execution_cache_misses_limit = self.0.sender_config.vm_execution_cache_misses_limit; let execution_args = TxExecutionArgs::for_gas_estimate(vm_execution_cache_misses_limit, &tx, base_fee); - let (exec_result, tx_metrics) = execute_tx_with_pending_state( - vm_permit, - shared_args, - execution_args, - self.0.replica_connection_pool.clone(), - tx.clone(), - ) - .await; + let (exec_result, tx_metrics, _) = self + .0 + .executor + .execute_tx_in_sandbox( + vm_permit, + shared_args, + true, + execution_args, + self.0.replica_connection_pool.clone(), + tx.clone(), + block_args, + vec![], + ) + .await; (exec_result, tx_metrics) } - fn shared_args_for_gas_estimate(&self, l1_gas_price: u64) -> TxSharedArgs { + fn shared_args_for_gas_estimate(&self, fee_input: BatchFeeInput) -> TxSharedArgs { let config = &self.0.sender_config; + TxSharedArgs { operator_account: AccountTreeId::new(config.fee_account_addr), - l1_gas_price, - fair_l2_gas_price: config.fair_l2_gas_price, + fee_input, // We want to bypass the computation gas limit check for gas estimation validation_computational_gas_limit: BLOCK_GAS_LIMIT, base_system_contracts: self.0.api_contracts.estimate_gas.clone(), @@ -660,24 +666,37 @@ impl TxSender { acceptable_overestimation: u32, ) -> Result { let estimation_started_at = Instant::now(); - let l1_gas_price = { - let effective_gas_price = self.0.l1_gas_price_source.estimate_effective_gas_price(); - let current_l1_gas_price = - ((effective_gas_price as f64) * self.0.sender_config.gas_price_scale_factor) as u64; - - // In order for execution to pass smoothly, we need to ensure that block's required gasPerPubdata will be - // <= to the one in the transaction itself. 
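In the hunk that follows, gas estimation no longer scales a raw L1 gas price; it asks the `batch_fee_input_provider` for a fee input with both the L1 gas price and the pubdata price scaled by the same `gas_price_scale_factor`, then adjusts the pubdata price per transaction. A sketch of the scaling step; `FeeInputSketch` is an illustrative stand-in, not the real `BatchFeeInput` type:

```rust
/// Illustrative stand-in for the fee input; the actual `BatchFeeInput` is
/// defined elsewhere in the codebase and is richer than this.
#[derive(Debug, Clone, Copy)]
struct FeeInputSketch {
    l1_gas_price: u64,
    fair_pubdata_price: u64,
    fair_l2_gas_price: u64,
}

/// Applies independent safety factors to the L1 gas price and the pubdata
/// price, as `get_batch_fee_input_scaled(l1_scale, pubdata_scale)` does in the
/// hunk below (the caller currently passes the same scale factor for both).
fn scaled(input: FeeInputSketch, l1_scale: f64, pubdata_scale: f64) -> FeeInputSketch {
    FeeInputSketch {
        l1_gas_price: (input.l1_gas_price as f64 * l1_scale) as u64,
        fair_pubdata_price: (input.fair_pubdata_price as f64 * pubdata_scale) as u64,
        ..input
    }
}

fn main() {
    let input = FeeInputSketch {
        l1_gas_price: 10_000_000_000,
        fair_pubdata_price: 50_000_000_000,
        fair_l2_gas_price: 250_000_000,
    };
    println!("{:?}", scaled(input, 1.2, 1.2));
}
```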
- adjust_l1_gas_price_for_tx( - current_l1_gas_price, - self.0.sender_config.fair_l2_gas_price, + + let mut connection = self + .0 + .replica_connection_pool + .access_storage_tagged("api") + .await + .unwrap(); + let block_args = BlockArgs::pending(&mut connection).await; + let protocol_version = block_args + .resolve_block_info(&mut connection) + .await + .unwrap() + .protocol_version; + + drop(connection); + + let fee_input = { + // For now, both L1 gas price and pubdata price are scaled with the same coefficient + let fee_input = self.0.batch_fee_input_provider.get_batch_fee_input_scaled( + self.0.sender_config.gas_price_scale_factor, + self.0.sender_config.gas_price_scale_factor, + ); + adjust_pubdata_price_for_tx( + fee_input, tx.gas_per_pubdata_byte_limit(), + protocol_version.into(), ) }; - let (base_fee, gas_per_pubdata_byte) = derive_base_fee_and_gas_per_pubdata( - l1_gas_price, - self.0.sender_config.fair_l2_gas_price, - ); + let (base_fee, gas_per_pubdata_byte) = + derive_base_fee_and_gas_per_pubdata(fee_input, protocol_version.into()); match &mut tx.common_data { ExecuteTransactionCommon::L2(common_data) => { common_data.fee.max_fee_per_gas = base_fee.into(); @@ -725,7 +744,8 @@ impl TxSender { l2_common_data.signature = PackedEthSignature::default().serialize_packed().into(); } - l2_common_data.fee.gas_per_pubdata_limit = MAX_GAS_PER_PUBDATA_BYTE.into(); + l2_common_data.fee.gas_per_pubdata_limit = + U256::from(DEFAULT_L2_TX_GAS_PER_PUBDATA_BYTE); } // Acquire the vm token for the whole duration of the binary search. @@ -755,7 +775,7 @@ impl TxSender { }; // We are using binary search to find the minimal values of gas_limit under which - // the transaction succeedes + // the transaction succeeds let mut lower_bound = 0; let mut upper_bound = MAX_L2_TX_GAS_LIMIT as u32; let tx_id = format!( @@ -773,7 +793,7 @@ impl TxSender { while lower_bound + acceptable_overestimation < upper_bound { let mid = (lower_bound + upper_bound) / 2; // There is no way to distinct between errors due to out of gas - // or normal exeuction errors, so we just hope that increasing the + // or normal execution errors, so we just hope that increasing the // gas limit will make the transaction successful let iteration_started_at = Instant::now(); let try_gas_limit = gas_for_bytecodes_pubdata + mid; @@ -781,10 +801,12 @@ impl TxSender { .estimate_gas_step( vm_permit.clone(), tx.clone(), - gas_per_pubdata_byte, try_gas_limit, - l1_gas_price, + gas_per_pubdata_byte as u32, + fee_input, + block_args, base_fee, + protocol_version.into(), ) .await; @@ -818,22 +840,41 @@ impl TxSender { .estimate_gas_step( vm_permit, tx.clone(), - gas_per_pubdata_byte, suggested_gas_limit, - l1_gas_price, + gas_per_pubdata_byte as u32, + fee_input, + block_args, base_fee, + protocol_version.into(), ) .await; result.into_api_call_result()?; self.ensure_tx_executable(tx.clone(), &tx_metrics, false)?; - let overhead = derive_overhead( - suggested_gas_limit, - gas_per_pubdata_byte as u32, - tx.encoding_len(), - OverheadCoeficients::from_tx_type(tx.tx_format() as u8), - ); + // Now, we need to calculate the final overhead for the transaction. We need to take into account the fact + // that the migration of 1.4.1 may be still going on. 
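The estimation loop above bisects on the gas limit until the remaining search window is within `acceptable_overestimation`, probing execution at the midpoint each iteration. The failure-branch update is not visible in this hunk, so the conventional bisection rule is assumed in this sketch:

```rust
/// Minimal sketch of the gas-limit bisection: keep halving the window until it
/// is narrower than the acceptable overestimation, then return the upper bound,
/// which is the smallest gas limit known to succeed (up to that tolerance).
fn bisect_gas_limit(
    mut lower: u32,
    mut upper: u32,
    acceptable_overestimation: u32,
    mut succeeds: impl FnMut(u32) -> bool,
) -> u32 {
    while lower + acceptable_overestimation < upper {
        let mid = (lower + upper) / 2;
        if succeeds(mid) {
            upper = mid; // `mid` gas is enough; tighten from above
        } else {
            lower = mid + 1; // ran out of gas (or failed); raise the floor
        }
    }
    upper
}

fn main() {
    // Toy oracle: the transaction needs exactly 21_000 gas.
    let estimate = bisect_gas_limit(0, 1 << 25, 1_000, |gas| gas >= 21_000);
    assert!((21_000..=22_000).contains(&estimate));
}
```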
+ let overhead = if self + .0 + .sender_config + .l1_to_l2_transactions_compatibility_mode + { + derive_pessimistic_overhead( + suggested_gas_limit, + gas_per_pubdata_byte as u32, + tx.encoding_len(), + tx.tx_format() as u8, + protocol_version.into(), + ) + } else { + derive_overhead( + suggested_gas_limit, + gas_per_pubdata_byte as u32, + tx.encoding_len(), + tx.tx_format() as u8, + protocol_version.into(), + ) + }; let full_gas_limit = match tx_body_gas_limit.overflowing_add(gas_for_bytecodes_pubdata + overhead) { @@ -863,27 +904,45 @@ impl TxSender { let vm_permit = vm_permit.ok_or(SubmitTxError::ServerShuttingDown)?; let vm_execution_cache_misses_limit = self.0.sender_config.vm_execution_cache_misses_limit; - execute_tx_eth_call( - vm_permit, - self.shared_args(), - self.0.replica_connection_pool.clone(), - tx, - block_args, - vm_execution_cache_misses_limit, - vec![], - ) - .await - .into_api_call_result() + self.0 + .executor + .execute_tx_eth_call( + vm_permit, + self.shared_args(), + self.0.replica_connection_pool.clone(), + tx, + block_args, + vm_execution_cache_misses_limit, + vec![], + ) + .await + .into_api_call_result() } - pub fn gas_price(&self) -> u64 { - let gas_price = self.0.l1_gas_price_source.estimate_effective_gas_price(); - let l1_gas_price = (gas_price as f64 * self.0.sender_config.gas_price_scale_factor).round(); + pub async fn gas_price(&self) -> u64 { + let mut connection = self + .0 + .replica_connection_pool + .access_storage_tagged("api") + .await + .unwrap(); + let block_args = BlockArgs::pending(&mut connection).await; + let protocol_version = block_args + .resolve_block_info(&mut connection) + .await + .unwrap() + .protocol_version; + drop(connection); + let (base_fee, _) = derive_base_fee_and_gas_per_pubdata( - l1_gas_price as u64, - self.0.sender_config.fair_l2_gas_price, + // For now, both the L1 gas price and the L1 pubdata price are scaled with the same coefficient + self.0.batch_fee_input_provider.get_batch_fee_input_scaled( + self.0.sender_config.gas_price_scale_factor, + self.0.sender_config.gas_price_scale_factor, + ), + protocol_version.into(), ); - base_fee * self.0.l1_gas_price_source.get_erc20_conversion_rate() + base_fee * self.0.batch_fee_input_provider.get_erc20_conversion_rate() } fn ensure_tx_executable( @@ -892,13 +951,6 @@ impl TxSender { tx_metrics: &TransactionExecutionMetrics, log_message: bool, ) -> Result<(), SubmitTxError> { - let Some(sk_config) = &self.0.state_keeper_config else { - // No config provided, so we can't check if transaction satisfies the seal criteria. - // We assume that it's executable, and if it's not, it will be caught by the main server - // (where this check is always performed). - return Ok(()); - }; - // Hash is not computable for the provided `transaction` during gas estimation (it doesn't have // its input data set). Since we don't log a hash in this case anyway, we just use a dummy value. let tx_hash = if log_message { @@ -912,8 +964,10 @@ impl TxSender { // still reject them as it's not. 
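The branch above chooses between the regular overhead and a "pessimistic" one; the helper is defined at the bottom of this file (`derive_pessimistic_overhead`). For L1->L2 transactions its rule reduces to taking the maximum of the overheads across the two protocol versions, as in this sketch:

```rust
/// Core of the pessimistic rule: while the 1.4.1 migration is in flight, an
/// L1->L2 transaction is charged the larger of the overheads computed under
/// the new and the previous protocol versions, so an estimate accepted on L2
/// cannot be rejected by the not-yet-upgraded checks on L1.
fn pessimistic_overhead(new_overhead: u32, prev_overhead: u32, is_l1_tx: bool) -> u32 {
    if is_l1_tx {
        new_overhead.max(prev_overhead)
    } else {
        new_overhead
    }
}

fn main() {
    assert_eq!(pessimistic_overhead(100, 150, true), 150);
    assert_eq!(pessimistic_overhead(100, 150, false), 100);
}
```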
let protocol_version = ProtocolVersionId::latest(); let seal_data = SealData::for_transaction(transaction, tx_metrics, protocol_version); - if let Some(reason) = - ConditionalSealer::find_unexecutable_reason(sk_config, &seal_data, protocol_version) + if let Some(reason) = self + .0 + .sealer + .find_unexecutable_reason(&seal_data, protocol_version) { let message = format!( "Tx is Unexecutable because of {reason}; inputs for decision: {seal_data:?}" @@ -926,3 +980,40 @@ impl TxSender { Ok(()) } } + +/// During switch to the 1.4.1 protocol version, there will be a moment of discrepancy, when while +/// the L2 has already upgraded to 1.4.1 (and thus suggests smaller overhead), the L1 is still on the previous version. +/// +/// This might lead to situations when L1->L2 transactions estimated with the new versions would work on the state keeper side, +/// but they won't even make it there, but the protection mechanisms for L1->L2 transactions will reject them on L1. +/// TODO(X): remove this function after the upgrade is complete +fn derive_pessimistic_overhead( + gas_limit: u32, + gas_price_per_pubdata: u32, + encoded_len: usize, + tx_type: u8, + vm_version: VmVersion, +) -> u32 { + let current_overhead = derive_overhead( + gas_limit, + gas_price_per_pubdata, + encoded_len, + tx_type, + vm_version, + ); + + if is_l1_tx_type(tx_type) { + // We are in the L1->L2 transaction, so we need to account for the fact that the L1 is still on the previous version. + // We assume that the overhead will be the same as for the previous version. + let previous_overhead = derive_overhead( + gas_limit, + gas_price_per_pubdata, + encoded_len, + tx_type, + VmVersion::VmBoojumIntegration, + ); + current_overhead.max(previous_overhead) + } else { + current_overhead + } +} diff --git a/core/lib/zksync_core/src/api_server/tx_sender/proxy.rs b/core/lib/zksync_core/src/api_server/tx_sender/proxy.rs index 4f70b1d5e50..c9ddede1da0 100644 --- a/core/lib/zksync_core/src/api_server/tx_sender/proxy.rs +++ b/core/lib/zksync_core/src/api_server/tx_sender/proxy.rs @@ -1,8 +1,8 @@ use std::collections::HashMap; -use tokio::sync::RwLock; +use tokio::sync::RwLock; use zksync_types::{ - api::{BlockId, Transaction, TransactionDetails, TransactionId, TransactionReceipt}, + api::{BlockId, Transaction, TransactionDetails, TransactionId}, l2::L2Tx, H256, }; @@ -67,8 +67,4 @@ impl TxProxy { pub async fn request_tx_details(&self, hash: H256) -> RpcResult> { self.client.get_transaction_details(hash).await } - - pub async fn request_tx_receipt(&self, hash: H256) -> RpcResult> { - self.client.get_transaction_receipt(hash).await - } } diff --git a/core/lib/zksync_core/src/api_server/tx_sender/result.rs b/core/lib/zksync_core/src/api_server/tx_sender/result.rs index b02049f014e..a8183c5e8ac 100644 --- a/core/lib/zksync_core/src/api_server/tx_sender/result.rs +++ b/core/lib/zksync_core/src/api_server/tx_sender/result.rs @@ -1,10 +1,11 @@ -use crate::api_server::execution_sandbox::SandboxExecutionError; +use multivm::{ + interface::{ExecutionResult, VmExecutionResultAndLogs}, + tracers::validator::ValidationError, +}; use thiserror::Error; +use zksync_types::{l2::error::TxCheckError, U256}; -use multivm::interface::{ExecutionResult, VmExecutionResultAndLogs}; -use multivm::tracers::validator::ValidationError; -use zksync_types::l2::error::TxCheckError; -use zksync_types::U256; +use crate::api_server::execution_sandbox::SandboxExecutionError; #[derive(Debug, Error)] pub enum SubmitTxError { @@ -67,7 +68,9 @@ pub enum SubmitTxError { IntrinsicGas, 
/// Error returned from main node #[error("{0}")] - ProxyError(#[from] zksync_web3_decl::jsonrpsee::core::Error), + ProxyError(#[from] zksync_web3_decl::jsonrpsee::core::ClientError), + #[error("not enough gas to publish compressed bytecodes")] + FailedToPublishCompressedBytecodes, } impl SubmitTxError { @@ -98,6 +101,7 @@ impl SubmitTxError { Self::InsufficientFundsForTransfer => "insufficient-funds-for-transfer", Self::IntrinsicGas => "intrinsic-gas", Self::ProxyError(_) => "proxy-error", + Self::FailedToPublishCompressedBytecodes => "failed-to-publish-compressed-bytecodes", } } diff --git a/core/lib/zksync_core/src/api_server/tx_sender/tests.rs b/core/lib/zksync_core/src/api_server/tx_sender/tests.rs new file mode 100644 index 00000000000..55c6852cd4a --- /dev/null +++ b/core/lib/zksync_core/src/api_server/tx_sender/tests.rs @@ -0,0 +1,139 @@ +//! Tests for the transaction sender. + +use zksync_types::{get_nonce_key, StorageLog}; + +use super::*; +use crate::{ + api_server::execution_sandbox::{testonly::MockTransactionExecutor, VmConcurrencyBarrier}, + genesis::{ensure_genesis_state, GenesisParams}, + utils::testonly::{create_miniblock, prepare_recovery_snapshot, MockL1GasPriceProvider}, +}; + +pub(crate) async fn create_test_tx_sender( + pool: ConnectionPool, + l2_chain_id: L2ChainId, + tx_executor: TransactionExecutor, +) -> (TxSender, VmConcurrencyBarrier) { + let web3_config = Web3JsonRpcConfig::for_tests(); + let state_keeper_config = StateKeeperConfig::for_tests(); + let tx_sender_config = TxSenderConfig::new(&state_keeper_config, &web3_config, l2_chain_id); + + let mut storage_caches = PostgresStorageCaches::new(1, 1); + let cache_update_task = storage_caches.configure_storage_values_cache( + 1, + pool.clone(), + tokio::runtime::Handle::current(), + ); + tokio::task::spawn_blocking(cache_update_task); + + let gas_adjuster = Arc::new(MockL1GasPriceProvider(1)); + let (mut tx_sender, vm_barrier) = crate::build_tx_sender( + &tx_sender_config, + &web3_config, + &state_keeper_config, + pool.clone(), + pool, + gas_adjuster, + storage_caches, + ) + .await; + + Arc::get_mut(&mut tx_sender.0).unwrap().executor = tx_executor; + (tx_sender, vm_barrier) +} + +#[tokio::test] +async fn getting_nonce_for_account() { + let l2_chain_id = L2ChainId::default(); + let test_address = Address::repeat_byte(1); + let pool = ConnectionPool::test_pool().await; + let mut storage = pool.access_storage().await.unwrap(); + ensure_genesis_state(&mut storage, l2_chain_id, &GenesisParams::mock()) + .await + .unwrap(); + // Manually insert a nonce for the address. + let nonce_key = get_nonce_key(&test_address); + let nonce_log = StorageLog::new_write_log(nonce_key, H256::from_low_u64_be(123)); + storage + .storage_logs_dal() + .append_storage_logs(MiniblockNumber(0), &[(H256::default(), vec![nonce_log])]) + .await; + + let tx_executor = MockTransactionExecutor::default().into(); + let (tx_sender, _) = create_test_tx_sender(pool.clone(), l2_chain_id, tx_executor).await; + + let nonce = tx_sender.get_expected_nonce(test_address).await.unwrap(); + assert_eq!(nonce, Nonce(123)); + + // Insert another miniblock with a new nonce log. 
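The test above writes the nonce as a 32-byte storage value and expects `get_expected_nonce` to read it back as a `u32`. A sketch of that round-trip, assuming the `ethereum-types`-style `H256`/`U256` used throughout this codebase:

```rust
use ethereum_types::{H256, U256};

fn main() {
    // A nonce is stored as a big-endian 32-byte storage value and read back
    // through a checked `u32` conversion, mirroring the `u32::try_from` call
    // in `get_expected_nonce`.
    let stored = H256::from_low_u64_be(123);
    let as_u256 = U256::from_big_endian(stored.as_bytes());
    let nonce = u32::try_from(as_u256).expect("nonce fits into u32");
    assert_eq!(nonce, 123);
}
```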
+ storage + .blocks_dal() + .insert_miniblock(&create_miniblock(1)) + .await + .unwrap(); + let nonce_log = StorageLog { + value: H256::from_low_u64_be(321), + ..nonce_log + }; + storage + .storage_logs_dal() + .insert_storage_logs(MiniblockNumber(1), &[(H256::default(), vec![nonce_log])]) + .await; + + let nonce = tx_sender.get_expected_nonce(test_address).await.unwrap(); + assert_eq!(nonce, Nonce(321)); + let missing_address = Address::repeat_byte(0xff); + let nonce = tx_sender.get_expected_nonce(missing_address).await.unwrap(); + assert_eq!(nonce, Nonce(0)); +} + +#[tokio::test] +async fn getting_nonce_for_account_after_snapshot_recovery() { + const SNAPSHOT_MINIBLOCK_NUMBER: u32 = 42; + + let pool = ConnectionPool::test_pool().await; + let mut storage = pool.access_storage().await.unwrap(); + let test_address = Address::repeat_byte(1); + let other_address = Address::repeat_byte(2); + let nonce_logs = [ + StorageLog::new_write_log(get_nonce_key(&test_address), H256::from_low_u64_be(123)), + StorageLog::new_write_log(get_nonce_key(&other_address), H256::from_low_u64_be(25)), + ]; + prepare_recovery_snapshot(&mut storage, SNAPSHOT_MINIBLOCK_NUMBER, &nonce_logs).await; + + let l2_chain_id = L2ChainId::default(); + let tx_executor = MockTransactionExecutor::default().into(); + let (tx_sender, _) = create_test_tx_sender(pool.clone(), l2_chain_id, tx_executor).await; + + let nonce = tx_sender.get_expected_nonce(test_address).await.unwrap(); + assert_eq!(nonce, Nonce(123)); + let nonce = tx_sender.get_expected_nonce(other_address).await.unwrap(); + assert_eq!(nonce, Nonce(25)); + let missing_address = Address::repeat_byte(0xff); + let nonce = tx_sender.get_expected_nonce(missing_address).await.unwrap(); + assert_eq!(nonce, Nonce(0)); + + storage + .blocks_dal() + .insert_miniblock(&create_miniblock(SNAPSHOT_MINIBLOCK_NUMBER + 1)) + .await + .unwrap(); + let new_nonce_logs = vec![StorageLog::new_write_log( + get_nonce_key(&test_address), + H256::from_low_u64_be(321), + )]; + storage + .storage_logs_dal() + .insert_storage_logs( + MiniblockNumber(SNAPSHOT_MINIBLOCK_NUMBER + 1), + &[(H256::default(), new_nonce_logs)], + ) + .await; + + let nonce = tx_sender.get_expected_nonce(test_address).await.unwrap(); + assert_eq!(nonce, Nonce(321)); + let nonce = tx_sender.get_expected_nonce(other_address).await.unwrap(); + assert_eq!(nonce, Nonce(25)); + let nonce = tx_sender.get_expected_nonce(missing_address).await.unwrap(); + assert_eq!(nonce, Nonce(0)); +} diff --git a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/batch_limiter_middleware.rs b/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/batch_limiter_middleware.rs deleted file mode 100644 index f85325c03bc..00000000000 --- a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/batch_limiter_middleware.rs +++ /dev/null @@ -1,145 +0,0 @@ -use futures::{future, FutureExt}; -use governor::{ - clock::DefaultClock, - middleware::NoOpMiddleware, - state::{InMemoryState, NotKeyed}, - Quota, RateLimiter, -}; -use jsonrpc_core::{ - middleware::{self, Middleware}, - Error, FutureResponse, Request, Response, Version, -}; -use jsonrpc_pubsub::Session; -use vise::{Buckets, Counter, EncodeLabelSet, EncodeLabelValue, Family, Histogram, Metrics}; - -use std::{future::Future, num::NonZeroU32, sync::Arc}; - -/// Configures the rate limiting for the WebSocket API. -/// Rate limiting is applied per active connection, e.g. a single connected user may not send more than X requests -/// per minute. 
-#[derive(Debug, Clone)] -pub struct RateLimitMetadata { - meta: T, - rate_limiter: Option>>, -} - -impl RateLimitMetadata { - pub(crate) fn new(requests_per_minute: Option, meta: T) -> Self { - let rate_limiter = if let Some(requests_per_minute) = requests_per_minute { - assert!(requests_per_minute > 0, "requests_per_minute must be > 0"); - - Some(Arc::new(RateLimiter::direct(Quota::per_minute( - NonZeroU32::new(requests_per_minute).unwrap(), - )))) - } else { - None - }; - - Self { meta, rate_limiter } - } -} - -impl jsonrpc_core::Metadata for RateLimitMetadata {} - -impl jsonrpc_pubsub::PubSubMetadata for RateLimitMetadata { - fn session(&self) -> Option> { - self.meta.session() - } -} - -#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, EncodeLabelValue, EncodeLabelSet)] -#[metrics(label = "transport", rename_all = "snake_case")] -pub(crate) enum Transport { - Ws, -} - -#[derive(Debug, Metrics)] -#[metrics(prefix = "api_jsonrpc_backend_batch")] -struct LimitMiddlewareMetrics { - /// Number of rate-limited requests. - rate_limited: Family, - /// Size of batch requests. - #[metrics(buckets = Buckets::exponential(1.0..=512.0, 2.0))] - size: Family>, - /// Number of requests rejected by the limiter. - rejected: Family, -} - -#[vise::register] -static METRICS: vise::Global = vise::Global::new(); - -/// Middleware that implements limiting for WebSocket connections: -/// - Limits the number of requests per minute for a single connection. -/// - Limits the maximum size of the batch requests. -/// -/// Rate limiting data is stored in the metadata of the connection, while the maximum batch size is stored in the -/// middleware itself. -#[derive(Debug)] -pub(crate) struct LimitMiddleware { - transport: Transport, - max_batch_size: Option, -} - -impl LimitMiddleware { - pub fn new(transport: Transport, max_batch_size: Option) -> Self { - Self { - transport, - max_batch_size, - } - } -} - -impl Middleware> for LimitMiddleware { - type Future = FutureResponse; - - type CallFuture = middleware::NoopCallFuture; - - fn on_request( - &self, - request: Request, - meta: RateLimitMetadata, - next: F, - ) -> future::Either - where - F: Fn(Request, RateLimitMetadata) -> X + Send + Sync, - X: Future> + Send + 'static, - { - // Check whether rate limiting is enabled, and if so, whether we should discard the request. - // Note that RPC batch requests are stil counted as a single request. - if let Some(rate_limiter) = &meta.rate_limiter { - // Check number of actual RPC requests. - let num_requests: usize = match &request { - Request::Single(_) => 1, - Request::Batch(batch) => batch.len(), - }; - let num_requests = NonZeroU32::new(num_requests.max(1) as u32).unwrap(); - - // Note: if required, we can extract data on rate limiting from the error. - if rate_limiter.check_n(num_requests).is_err() { - METRICS.rate_limited[&self.transport].inc(); - let err = Error { - code: jsonrpc_core::error::ErrorCode::ServerError(429), - message: "Too many requests".to_string(), - data: None, - }; - - let response = Response::from(err, Some(Version::V2)); - return future::ready(Some(response)).boxed().left_future(); - } - } - - // Check whether the batch size is within the allowed limits. 
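The deleted middleware above uses the `governor` crate to debit one cell per RPC call in a batch, via `check_n`. A minimal standalone sketch of that pattern (the quota and batch size here are illustrative):

```rust
use std::num::NonZeroU32;

use governor::{Quota, RateLimiter};

fn main() {
    // A direct (unkeyed) limiter with a per-minute quota, one instance per
    // connection. A JSON-RPC batch of N requests consumes N cells at once.
    let quota = Quota::per_minute(NonZeroU32::new(100).unwrap());
    let limiter = RateLimiter::direct(quota);

    let batch_size = NonZeroU32::new(5).unwrap();
    if limiter.check_n(batch_size).is_err() {
        // The middleware responds with a "Too many requests" error here.
        eprintln!("rate limited");
    }
}
```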
- if let Request::Batch(batch) = &request { - METRICS.size[&self.transport].observe(batch.len()); - - if Some(batch.len()) > self.max_batch_size { - METRICS.rejected[&self.transport].inc(); - let response = Response::from(Error::invalid_request(), Some(Version::V2)); - return future::ready(Some(response)).boxed().left_future(); - } - } - - // Proceed with the request. - next(request, meta).right_future() - } -} diff --git a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/error.rs b/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/error.rs deleted file mode 100644 index 4a30961c453..00000000000 --- a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/error.rs +++ /dev/null @@ -1,44 +0,0 @@ -use jsonrpc_core::{Error, ErrorCode}; -use zksync_web3_decl::error::Web3Error; - -use std::fmt; - -use crate::api_server::web3::metrics::API_METRICS; - -pub fn into_jsrpc_error(err: Web3Error) -> Error { - Error { - code: match err { - Web3Error::InternalError | Web3Error::NotImplemented => ErrorCode::InternalError, - Web3Error::NoBlock - | Web3Error::NoSuchFunction - | Web3Error::RLPError(_) - | Web3Error::InvalidTransactionData(_) - | Web3Error::TooManyTopics - | Web3Error::FilterNotFound - | Web3Error::InvalidFeeParams(_) - | Web3Error::LogsLimitExceeded(_, _, _) - | Web3Error::TooManyLogs(_) - | Web3Error::InvalidFilterBlockHash => ErrorCode::InvalidParams, - Web3Error::SubmitTransactionError(_, _) | Web3Error::SerializationError(_) => 3.into(), - Web3Error::PubSubTimeout => 4.into(), - Web3Error::RequestTimeout => 5.into(), - Web3Error::TreeApiUnavailable => 6.into(), - }, - message: match err { - Web3Error::SubmitTransactionError(_, _) => err.to_string(), - _ => err.to_string(), - }, - data: match err { - Web3Error::SubmitTransactionError(_, data) => { - Some(format!("0x{}", hex::encode(data)).into()) - } - _ => None, - }, - } -} - -pub fn internal_error(method_name: &'static str, error: impl fmt::Display) -> Web3Error { - tracing::error!("Internal error in method {method_name}: {error}"); - API_METRICS.web3_internal_errors[&method_name].inc(); - Web3Error::InternalError -} diff --git a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/mod.rs b/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/mod.rs deleted file mode 100644 index d1d83a37d40..00000000000 --- a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/mod.rs +++ /dev/null @@ -1,4 +0,0 @@ -pub(crate) mod batch_limiter_middleware; -pub mod error; -pub mod namespaces; -pub mod pub_sub; diff --git a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/namespaces/debug.rs b/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/namespaces/debug.rs deleted file mode 100644 index 3775da78e41..00000000000 --- a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/namespaces/debug.rs +++ /dev/null @@ -1,97 +0,0 @@ -// External uses -use crate::api_server::web3::backend_jsonrpc::error::into_jsrpc_error; -use crate::api_server::web3::namespaces::DebugNamespace; -use jsonrpc_core::{BoxFuture, Result}; -use jsonrpc_derive::rpc; - -use zksync_types::{ - api::{BlockId, BlockNumber, DebugCall, ResultDebugCall, TracerConfig}, - transaction_request::CallRequest, - H256, -}; - -#[rpc] -pub trait DebugNamespaceT { - #[rpc(name = "debug_traceBlockByNumber")] - fn trace_block_by_number( - &self, - block: BlockNumber, - options: Option, - ) -> BoxFuture>>; - - #[rpc(name = "debug_traceBlockByHash")] - fn trace_block_by_hash( - &self, - hash: H256, - options: Option, - ) -> BoxFuture>>; - - #[rpc(name = 
"debug_traceCall")] - fn trace_call( - &self, - request: CallRequest, - block: Option, - options: Option, - ) -> BoxFuture>; - - #[rpc(name = "debug_traceTransaction")] - fn trace_transaction( - &self, - tx_hash: H256, - options: Option, - ) -> BoxFuture>>; -} - -impl DebugNamespaceT for DebugNamespace { - fn trace_block_by_number( - &self, - block: BlockNumber, - options: Option, - ) -> BoxFuture>> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .debug_trace_block_impl(BlockId::Number(block), options) - .await - .map_err(into_jsrpc_error) - }) - } - - fn trace_block_by_hash( - &self, - hash: H256, - options: Option, - ) -> BoxFuture>> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .debug_trace_block_impl(BlockId::Hash(hash), options) - .await - .map_err(into_jsrpc_error) - }) - } - - fn trace_call( - &self, - request: CallRequest, - block: Option, - options: Option, - ) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .debug_trace_call_impl(request, block, options) - .await - .map_err(into_jsrpc_error) - }) - } - - fn trace_transaction( - &self, - tx_hash: H256, - options: Option, - ) -> BoxFuture>> { - let self_ = self.clone(); - Box::pin(async move { Ok(self_.debug_trace_transaction_impl(tx_hash, options).await) }) - } -} diff --git a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/namespaces/en.rs b/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/namespaces/en.rs deleted file mode 100644 index e75d7caade2..00000000000 --- a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/namespaces/en.rs +++ /dev/null @@ -1,40 +0,0 @@ -// Built-in uses - -// External uses -use jsonrpc_core::{BoxFuture, Result}; -use jsonrpc_derive::rpc; - -// Workspace uses -use zksync_types::{api::en::SyncBlock, MiniblockNumber}; - -// Local uses -use crate::{ - api_server::web3::{backend_jsonrpc::error::into_jsrpc_error, EnNamespace}, - l1_gas_price::L1GasPriceProvider, -}; - -#[rpc] -pub trait EnNamespaceT { - #[rpc(name = "en_syncL2Block")] - fn sync_l2_block( - &self, - block_number: MiniblockNumber, - include_transactions: bool, - ) -> BoxFuture>>; -} - -impl EnNamespaceT for EnNamespace { - fn sync_l2_block( - &self, - block_number: MiniblockNumber, - include_transactions: bool, - ) -> BoxFuture>> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .sync_l2_block_impl(block_number, include_transactions) - .await - .map_err(into_jsrpc_error) - }) - } -} diff --git a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/namespaces/eth.rs b/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/namespaces/eth.rs deleted file mode 100644 index 00ba9379ae5..00000000000 --- a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/namespaces/eth.rs +++ /dev/null @@ -1,510 +0,0 @@ -// Built-in uses - -// External uses -use jsonrpc_core::{BoxFuture, Result}; -use jsonrpc_derive::rpc; - -// Workspace uses -use zksync_types::{ - api::{ - BlockId, BlockIdVariant, BlockNumber, Transaction, TransactionId, TransactionReceipt, - TransactionVariant, - }, - transaction_request::CallRequest, - web3::types::{FeeHistory, Index, SyncState}, - Address, Bytes, H256, U256, U64, -}; -use zksync_web3_decl::types::{Block, Filter, FilterChanges, Log}; - -// Local uses -use crate::web3::namespaces::EthNamespace; -use crate::{l1_gas_price::L1GasPriceProvider, web3::backend_jsonrpc::error::into_jsrpc_error}; - -#[rpc] -pub trait EthNamespaceT { - #[rpc(name = "eth_blockNumber")] - fn get_block_number(&self) -> BoxFuture>; - - #[rpc(name 
= "eth_chainId")] - fn chain_id(&self) -> BoxFuture>; - - #[rpc(name = "eth_call")] - fn call(&self, req: CallRequest, block: Option) -> BoxFuture>; - - #[rpc(name = "eth_estimateGas")] - fn estimate_gas( - &self, - req: CallRequest, - _block: Option, - ) -> BoxFuture>; - - #[rpc(name = "eth_gasPrice")] - fn gas_price(&self) -> BoxFuture>; - - #[rpc(name = "eth_newFilter")] - fn new_filter(&self, filter: Filter) -> BoxFuture>; - - #[rpc(name = "eth_newBlockFilter")] - fn new_block_filter(&self) -> BoxFuture>; - - #[rpc(name = "eth_uninstallFilter")] - fn uninstall_filter(&self, idx: U256) -> BoxFuture>; - - #[rpc(name = "eth_newPendingTransactionFilter")] - fn new_pending_transaction_filter(&self) -> BoxFuture>; - - #[rpc(name = "eth_getLogs")] - fn get_logs(&self, filter: Filter) -> BoxFuture>>; - - #[rpc(name = "eth_getFilterLogs")] - fn get_filter_logs(&self, filter_index: U256) -> BoxFuture>; - - #[rpc(name = "eth_getFilterChanges")] - fn get_filter_changes(&self, filter_index: U256) -> BoxFuture>; - - #[rpc(name = "eth_getBalance")] - fn get_balance( - &self, - address: Address, - block: Option, - ) -> BoxFuture>; - - #[rpc(name = "eth_getBlockByNumber")] - fn get_block_by_number( - &self, - block_number: BlockNumber, - full_transactions: bool, - ) -> BoxFuture>>>; - - #[rpc(name = "eth_getBlockByHash")] - fn get_block_by_hash( - &self, - hash: H256, - full_transactions: bool, - ) -> BoxFuture>>>; - - #[rpc(name = "eth_getBlockTransactionCountByNumber")] - fn get_block_transaction_count_by_number( - &self, - block_number: BlockNumber, - ) -> BoxFuture>>; - - #[rpc(name = "eth_getBlockTransactionCountByHash")] - fn get_block_transaction_count_by_hash( - &self, - block_hash: H256, - ) -> BoxFuture>>; - - #[rpc(name = "eth_getCode")] - fn get_code(&self, address: Address, block: Option) - -> BoxFuture>; - - #[rpc(name = "eth_getStorageAt")] - fn get_storage( - &self, - address: Address, - idx: U256, - block: Option, - ) -> BoxFuture>; - - #[rpc(name = "eth_getTransactionCount")] - fn get_transaction_count( - &self, - address: Address, - block: Option, - ) -> BoxFuture>; - - #[rpc(name = "eth_getTransactionByHash")] - fn get_transaction_by_hash(&self, hash: H256) -> BoxFuture>>; - - #[rpc(name = "eth_getTransactionByBlockHashAndIndex")] - fn get_transaction_by_block_hash_and_index( - &self, - block_hash: H256, - index: Index, - ) -> BoxFuture>>; - - #[rpc(name = "eth_getTransactionByBlockNumberAndIndex")] - fn get_transaction_by_block_number_and_index( - &self, - block_number: BlockNumber, - index: Index, - ) -> BoxFuture>>; - - #[rpc(name = "eth_getTransactionReceipt")] - fn get_transaction_receipt(&self, hash: H256) -> BoxFuture>>; - - #[rpc(name = "eth_protocolVersion")] - fn protocol_version(&self) -> BoxFuture>; - - #[rpc(name = "eth_sendRawTransaction")] - fn send_raw_transaction(&self, tx_bytes: Bytes) -> BoxFuture>; - - #[rpc(name = "eth_syncing")] - fn syncing(&self) -> BoxFuture>; - - #[rpc(name = "eth_accounts")] - fn accounts(&self) -> BoxFuture>>; - - #[rpc(name = "eth_coinbase")] - fn coinbase(&self) -> BoxFuture>; - - #[rpc(name = "eth_getCompilers")] - fn compilers(&self) -> BoxFuture>>; - - #[rpc(name = "eth_hashrate")] - fn hashrate(&self) -> BoxFuture>; - - #[rpc(name = "eth_getUncleCountByBlockHash")] - fn get_uncle_count_by_block_hash(&self, hash: H256) -> BoxFuture>>; - - #[rpc(name = "eth_getUncleCountByBlockNumber")] - fn get_uncle_count_by_block_number( - &self, - number: BlockNumber, - ) -> BoxFuture>>; - - #[rpc(name = "eth_mining")] - fn mining(&self) -> 
BoxFuture>; - - #[rpc(name = "eth_feeHistory")] - fn fee_history( - &self, - block_count: U64, - newest_block: BlockNumber, - reward_percentiles: Vec, - ) -> BoxFuture>; -} - -impl EthNamespaceT for EthNamespace { - fn get_block_number(&self) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .get_block_number_impl() - .await - .map_err(into_jsrpc_error) - }) - } - - fn chain_id(&self) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { Ok(self_.chain_id_impl()) }) - } - - fn call(&self, req: CallRequest, block: Option) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .call_impl(req, block.map(Into::into)) - .await - .map_err(into_jsrpc_error) - }) - } - - fn estimate_gas( - &self, - req: CallRequest, - block: Option, - ) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .estimate_gas_impl(req, block) - .await - .map_err(into_jsrpc_error) - }) - } - - fn gas_price(&self) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { self_.gas_price_impl().map_err(into_jsrpc_error) }) - } - - fn new_filter(&self, filter: Filter) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .new_filter_impl(filter) - .await - .map_err(into_jsrpc_error) - }) - } - - fn new_block_filter(&self) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .new_block_filter_impl() - .await - .map_err(into_jsrpc_error) - }) - } - - fn uninstall_filter(&self, idx: U256) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { Ok(self_.uninstall_filter_impl(idx).await) }) - } - - fn new_pending_transaction_filter(&self) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { Ok(self_.new_pending_transaction_filter_impl().await) }) - } - - fn get_logs(&self, filter: Filter) -> BoxFuture>> { - let self_ = self.clone(); - Box::pin(async move { self_.get_logs_impl(filter).await.map_err(into_jsrpc_error) }) - } - - fn get_filter_logs(&self, filter_index: U256) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .get_filter_logs_impl(filter_index) - .await - .map_err(into_jsrpc_error) - }) - } - - fn get_filter_changes(&self, filter_index: U256) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .get_filter_changes_impl(filter_index) - .await - .map_err(into_jsrpc_error) - }) - } - - fn get_balance( - &self, - address: Address, - block: Option, - ) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .get_balance_impl(address, block.map(Into::into)) - .await - .map_err(into_jsrpc_error) - }) - } - - fn get_block_by_number( - &self, - block_number: BlockNumber, - full_transactions: bool, - ) -> BoxFuture>>> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .get_block_impl(BlockId::Number(block_number), full_transactions) - .await - .map_err(into_jsrpc_error) - }) - } - - fn get_block_by_hash( - &self, - hash: H256, - full_transactions: bool, - ) -> BoxFuture>>> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .get_block_impl(BlockId::Hash(hash), full_transactions) - .await - .map_err(into_jsrpc_error) - }) - } - - fn get_block_transaction_count_by_number( - &self, - block_number: BlockNumber, - ) -> BoxFuture>> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .get_block_transaction_count_impl(BlockId::Number(block_number)) - .await - .map_err(into_jsrpc_error) - }) - } - - fn get_block_transaction_count_by_hash( - &self, - 
block_hash: H256, - ) -> BoxFuture>> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .get_block_transaction_count_impl(BlockId::Hash(block_hash)) - .await - .map_err(into_jsrpc_error) - }) - } - - fn get_code( - &self, - address: Address, - block: Option, - ) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .get_code_impl(address, block.map(Into::into)) - .await - .map_err(into_jsrpc_error) - }) - } - - fn get_storage( - &self, - address: Address, - idx: U256, - block: Option, - ) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .get_storage_at_impl(address, idx, block.map(Into::into)) - .await - .map_err(into_jsrpc_error) - }) - } - - fn get_transaction_count( - &self, - address: Address, - block: Option, - ) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .get_transaction_count_impl(address, block.map(Into::into)) - .await - .map_err(into_jsrpc_error) - }) - } - - fn get_transaction_by_hash(&self, hash: H256) -> BoxFuture>> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .get_transaction_impl(TransactionId::Hash(hash)) - .await - .map_err(into_jsrpc_error) - }) - } - - fn get_transaction_by_block_hash_and_index( - &self, - block_hash: H256, - index: Index, - ) -> BoxFuture>> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .get_transaction_impl(TransactionId::Block(BlockId::Hash(block_hash), index)) - .await - .map_err(into_jsrpc_error) - }) - } - - fn get_transaction_by_block_number_and_index( - &self, - block_number: BlockNumber, - index: Index, - ) -> BoxFuture>> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .get_transaction_impl(TransactionId::Block(BlockId::Number(block_number), index)) - .await - .map_err(into_jsrpc_error) - }) - } - - fn get_transaction_receipt(&self, hash: H256) -> BoxFuture>> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .get_transaction_receipt_impl(hash) - .await - .map_err(into_jsrpc_error) - }) - } - - fn protocol_version(&self) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { Ok(self_.protocol_version()) }) - } - - fn send_raw_transaction(&self, tx_bytes: Bytes) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .send_raw_transaction_impl(tx_bytes) - .await - .map_err(into_jsrpc_error) - }) - } - - fn syncing(&self) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { Ok(self_.syncing_impl()) }) - } - - fn accounts(&self) -> BoxFuture>> { - let self_ = self.clone(); - Box::pin(async move { Ok(self_.accounts_impl()) }) - } - - fn coinbase(&self) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { Ok(self_.coinbase_impl()) }) - } - - fn compilers(&self) -> BoxFuture>> { - let self_ = self.clone(); - Box::pin(async move { Ok(self_.compilers_impl()) }) - } - - fn hashrate(&self) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { Ok(self_.hashrate_impl()) }) - } - - fn get_uncle_count_by_block_hash(&self, hash: H256) -> BoxFuture>> { - let self_ = self.clone(); - Box::pin(async move { Ok(self_.uncle_count_impl(BlockId::Hash(hash))) }) - } - - fn get_uncle_count_by_block_number( - &self, - number: BlockNumber, - ) -> BoxFuture>> { - let self_ = self.clone(); - Box::pin(async move { Ok(self_.uncle_count_impl(BlockId::Number(number))) }) - } - - fn mining(&self) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { Ok(self_.mining_impl()) }) - } - - fn fee_history( - &self, - block_count: 
U64, - newest_block: BlockNumber, - reward_percentiles: Vec, - ) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .fee_history_impl(block_count, newest_block, reward_percentiles) - .await - .map_err(into_jsrpc_error) - }) - } -} diff --git a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/namespaces/mod.rs b/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/namespaces/mod.rs deleted file mode 100644 index 8fbd3919c26..00000000000 --- a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/namespaces/mod.rs +++ /dev/null @@ -1,6 +0,0 @@ -pub mod debug; -pub mod en; -pub mod eth; -pub mod net; -pub mod web3; -pub mod zks; diff --git a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/namespaces/net.rs b/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/namespaces/net.rs deleted file mode 100644 index 89abd3177c8..00000000000 --- a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/namespaces/net.rs +++ /dev/null @@ -1,37 +0,0 @@ -// Built-in uses - -// External uses -use jsonrpc_core::Result; -use jsonrpc_derive::rpc; - -// Workspace uses -use zksync_types::U256; - -// Local uses -use crate::web3::namespaces::NetNamespace; - -#[rpc] -pub trait NetNamespaceT { - #[rpc(name = "net_version", returns = "String")] - fn net_version(&self) -> Result; - - #[rpc(name = "net_peerCount", returns = "U256")] - fn net_peer_count(&self) -> Result; - - #[rpc(name = "net_listening", returns = "bool")] - fn net_listening(&self) -> Result; -} - -impl NetNamespaceT for NetNamespace { - fn net_version(&self) -> Result { - Ok(self.version_impl()) - } - - fn net_peer_count(&self) -> Result { - Ok(self.peer_count_impl()) - } - - fn net_listening(&self) -> Result { - Ok(self.is_listening_impl()) - } -} diff --git a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/namespaces/web3.rs b/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/namespaces/web3.rs deleted file mode 100644 index 1df21812e74..00000000000 --- a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/namespaces/web3.rs +++ /dev/null @@ -1,22 +0,0 @@ -// Built-in uses - -// External uses -use jsonrpc_core::Result; -use jsonrpc_derive::rpc; - -// Workspace uses - -// Local uses -use crate::web3::namespaces::Web3Namespace; - -#[rpc] -pub trait Web3NamespaceT { - #[rpc(name = "web3_clientVersion", returns = "String")] - fn client_version(&self) -> Result; -} - -impl Web3NamespaceT for Web3Namespace { - fn client_version(&self) -> Result { - Ok(self.client_version_impl()) - } -} diff --git a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/namespaces/zks.rs b/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/namespaces/zks.rs deleted file mode 100644 index efaeb892cdd..00000000000 --- a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/namespaces/zks.rs +++ /dev/null @@ -1,359 +0,0 @@ -// Built-in uses -use std::collections::HashMap; - -// External uses -use bigdecimal::BigDecimal; -use jsonrpc_core::{BoxFuture, Result}; -use jsonrpc_derive::rpc; - -// Workspace uses -use zksync_types::{ - api::{ - BlockDetails, BridgeAddresses, L1BatchDetails, L2ToL1LogProof, Proof, ProtocolVersion, - TransactionDetails, - }, - fee::Fee, - transaction_request::CallRequest, - Address, L1BatchNumber, MiniblockNumber, H256, U256, U64, -}; -use zksync_web3_decl::types::{Filter, Log, Token}; - -// Local uses -use crate::web3::namespaces::ZksNamespace; -use crate::{l1_gas_price::L1GasPriceProvider, web3::backend_jsonrpc::error::into_jsrpc_error}; - -#[rpc] -pub trait 
ZksNamespaceT { - #[rpc(name = "zks_estimateFee")] - fn estimate_fee(&self, req: CallRequest) -> BoxFuture>; - - #[rpc(name = "zks_estimateGasL1ToL2")] - fn estimate_gas_l1_to_l2(&self, req: CallRequest) -> BoxFuture>; - - #[rpc(name = "zks_getMainContract")] - fn get_main_contract(&self) -> BoxFuture>; - - #[rpc(name = "zks_getNativeTokenAddress")] - fn get_native_token_address(&self) -> BoxFuture>; - - #[rpc(name = "zks_getTestnetPaymaster")] - fn get_testnet_paymaster(&self) -> BoxFuture>>; - - #[rpc(name = "zks_getBridgeContracts")] - fn get_bridge_contracts(&self) -> BoxFuture>; - - #[rpc(name = "zks_L1ChainId")] - fn l1_chain_id(&self) -> BoxFuture>; - - #[rpc(name = "zks_getConfirmedTokens")] - fn get_confirmed_tokens(&self, from: u32, limit: u8) -> BoxFuture>>; - - #[rpc(name = "zks_getTokenPrice")] - fn get_token_price(&self, token_address: Address) -> BoxFuture>; - - #[rpc(name = "zks_getAllAccountBalances")] - fn get_all_account_balances( - &self, - address: Address, - ) -> BoxFuture>>; - - #[rpc(name = "zks_getL2ToL1MsgProof")] - fn get_l2_to_l1_msg_proof( - &self, - block: MiniblockNumber, - sender: Address, - msg: H256, - l2_log_position: Option, - ) -> BoxFuture>>; - - #[rpc(name = "zks_getL2ToL1LogProof")] - fn get_l2_to_l1_log_proof( - &self, - tx_hash: H256, - index: Option, - ) -> BoxFuture>>; - - #[rpc(name = "zks_L1BatchNumber")] - fn get_l1_batch_number(&self) -> BoxFuture>; - - #[rpc(name = "zks_getBlockDetails")] - fn get_block_details( - &self, - block_number: MiniblockNumber, - ) -> BoxFuture>>; - - #[rpc(name = "zks_getL1BatchBlockRange")] - fn get_miniblock_range(&self, batch: L1BatchNumber) -> BoxFuture>>; - - #[rpc(name = "zks_getTransactionDetails")] - fn get_transaction_details(&self, hash: H256) -> BoxFuture>>; - - #[rpc(name = "zks_getRawBlockTransactions")] - fn get_raw_block_transactions( - &self, - block_number: MiniblockNumber, - ) -> BoxFuture>>; - - #[rpc(name = "zks_getL1BatchDetails")] - fn get_l1_batch_details( - &self, - batch: L1BatchNumber, - ) -> BoxFuture>>; - - #[rpc(name = "zks_getBytecodeByHash")] - fn get_bytecode_by_hash(&self, hash: H256) -> BoxFuture>>>; - - #[rpc(name = "zks_getL1GasPrice")] - fn get_l1_gas_price(&self) -> BoxFuture>; - - #[rpc(name = "zks_getProtocolVersion")] - fn get_protocol_version( - &self, - version_id: Option, - ) -> BoxFuture>>; - - #[rpc(name = "zks_getLogsWithVirtualBlocks")] - fn get_logs_with_virtual_blocks(&self, filter: Filter) -> BoxFuture>>; - - #[rpc(name = "zks_getProof")] - fn get_proof( - &self, - address: Address, - keys: Vec, - l1_batch_number: L1BatchNumber, - ) -> BoxFuture>; - - #[rpc(name = "zks_getConversionRate")] - fn get_conversion_rate(&self) -> BoxFuture>; -} - -impl ZksNamespaceT for ZksNamespace { - fn estimate_fee(&self, req: CallRequest) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { self_.estimate_fee_impl(req).await.map_err(into_jsrpc_error) }) - } - - fn estimate_gas_l1_to_l2(&self, req: CallRequest) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .estimate_l1_to_l2_gas_impl(req) - .await - .map_err(into_jsrpc_error) - }) - } - - fn get_main_contract(&self) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { Ok(self_.get_main_contract_impl()) }) - } - - fn get_native_token_address(&self) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .get_native_token_address_impl() - .map_err(into_jsrpc_error) - }) - } - - fn get_miniblock_range(&self, batch: L1BatchNumber) -> BoxFuture>> { - let 
self_ = self.clone(); - Box::pin(async move { - self_ - .get_miniblock_range_impl(batch) - .await - .map_err(into_jsrpc_error) - }) - } - - fn get_testnet_paymaster(&self) -> BoxFuture>> { - let self_ = self.clone(); - Box::pin(async move { Ok(self_.get_testnet_paymaster_impl()) }) - } - - fn get_bridge_contracts(&self) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { Ok(self_.get_bridge_contracts_impl()) }) - } - - fn l1_chain_id(&self) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { Ok(self_.l1_chain_id_impl()) }) - } - - fn get_confirmed_tokens(&self, from: u32, limit: u8) -> BoxFuture>> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .get_confirmed_tokens_impl(from, limit) - .await - .map_err(into_jsrpc_error) - }) - } - - fn get_token_price(&self, token_address: Address) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .get_token_price_impl(token_address) - .await - .map_err(into_jsrpc_error) - }) - } - - fn get_all_account_balances( - &self, - address: Address, - ) -> BoxFuture>> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .get_all_account_balances_impl(address) - .await - .map_err(into_jsrpc_error) - }) - } - - fn get_l2_to_l1_msg_proof( - &self, - block: MiniblockNumber, - sender: Address, - msg: H256, - l2_log_position: Option, - ) -> BoxFuture>> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .get_l2_to_l1_msg_proof_impl(block, sender, msg, l2_log_position) - .await - .map_err(into_jsrpc_error) - }) - } - - fn get_l2_to_l1_log_proof( - &self, - tx_hash: H256, - index: Option, - ) -> BoxFuture>> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .get_l2_to_l1_log_proof_impl(tx_hash, index) - .await - .map_err(into_jsrpc_error) - }) - } - - fn get_l1_batch_number(&self) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .get_l1_batch_number_impl() - .await - .map_err(into_jsrpc_error) - }) - } - - fn get_block_details( - &self, - block_number: MiniblockNumber, - ) -> BoxFuture>> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .get_block_details_impl(block_number) - .await - .map_err(into_jsrpc_error) - }) - } - - fn get_transaction_details(&self, hash: H256) -> BoxFuture>> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .get_transaction_details_impl(hash) - .await - .map_err(into_jsrpc_error) - }) - } - - fn get_raw_block_transactions( - &self, - block_number: MiniblockNumber, - ) -> BoxFuture>> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .get_raw_block_transactions_impl(block_number) - .await - .map_err(into_jsrpc_error) - }) - } - - fn get_l1_batch_details( - &self, - batch: L1BatchNumber, - ) -> BoxFuture>> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .get_l1_batch_details_impl(batch) - .await - .map_err(into_jsrpc_error) - }) - } - - fn get_bytecode_by_hash(&self, hash: H256) -> BoxFuture>>> { - let self_ = self.clone(); - Box::pin(async move { Ok(self_.get_bytecode_by_hash_impl(hash).await) }) - } - - fn get_l1_gas_price(&self) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { Ok(self_.get_l1_gas_price_impl()) }) - } - - fn get_protocol_version( - &self, - version_id: Option, - ) -> BoxFuture>> { - let self_ = self.clone(); - Box::pin(async move { Ok(self_.get_protocol_version_impl(version_id).await) }) - } - - fn get_logs_with_virtual_blocks(&self, filter: Filter) -> BoxFuture>> { - let self_ = self.clone(); - Box::pin(async move { 
- self_ - .get_logs_with_virtual_blocks_impl(filter) - .await - .map_err(into_jsrpc_error) - }) - } - - fn get_proof( - &self, - address: Address, - keys: Vec, - l1_batch_number: L1BatchNumber, - ) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .get_proofs_impl(address, keys.clone(), l1_batch_number) - .await - .map_err(into_jsrpc_error) - }) - } - - fn get_conversion_rate(&self) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { - self_ - .get_conversion_rate_impl() - .await - .map_err(into_jsrpc_error) - }) - } -} diff --git a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/pub_sub.rs b/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/pub_sub.rs deleted file mode 100644 index 4a28a17b4e3..00000000000 --- a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpc/pub_sub.rs +++ /dev/null @@ -1,62 +0,0 @@ -use std::sync::Arc; - -use jsonrpc_core::{BoxFuture, Result}; -use jsonrpc_derive::rpc; -use jsonrpc_pubsub::typed; -use jsonrpc_pubsub::{Session, SubscriptionId}; - -use zksync_web3_decl::types::PubSubResult; - -use super::super::namespaces::EthSubscribe; -use super::batch_limiter_middleware::RateLimitMetadata; - -#[rpc] -pub trait Web3PubSub { - type Metadata; - - #[pubsub(subscription = "eth_subscription", subscribe, name = "eth_subscribe")] - fn subscribe( - &self, - meta: Self::Metadata, - subscriber: typed::Subscriber, - sub_type: String, - params: Option, - ); - - #[pubsub( - subscription = "eth_subscription", - unsubscribe, - name = "eth_unsubscribe" - )] - fn unsubscribe( - &self, - meta: Option, - subscription: SubscriptionId, - ) -> BoxFuture>; -} - -impl Web3PubSub for EthSubscribe { - type Metadata = RateLimitMetadata>; - - fn subscribe( - &self, - _meta: Self::Metadata, - subscriber: typed::Subscriber, - sub_type: String, - params: Option, - ) { - let self_ = self.clone(); - // Fire and forget is OK here. - self.runtime_handle - .spawn(async move { self_.sub(subscriber, sub_type, params).await }); - } - - fn unsubscribe( - &self, - _meta: Option, - id: SubscriptionId, - ) -> BoxFuture> { - let self_ = self.clone(); - Box::pin(async move { self_.unsub(id).await }) - } -} diff --git a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpsee/batch_limiter_middleware.rs b/core/lib/zksync_core/src/api_server/web3/backend_jsonrpsee/batch_limiter_middleware.rs new file mode 100644 index 00000000000..ae37f87541e --- /dev/null +++ b/core/lib/zksync_core/src/api_server/web3/backend_jsonrpsee/batch_limiter_middleware.rs @@ -0,0 +1,93 @@ +use std::num::NonZeroU32; + +use governor::{ + clock::DefaultClock, + middleware::NoOpMiddleware, + state::{InMemoryState, NotKeyed}, + Quota, RateLimiter, +}; +use vise::{ + Buckets, Counter, EncodeLabelSet, EncodeLabelValue, Family, GaugeGuard, Histogram, Metrics, +}; +use zksync_web3_decl::jsonrpsee::{ + server::middleware::rpc::{layer::ResponseFuture, RpcServiceT}, + types::{error::ErrorCode, ErrorObject, Request}, + MethodResponse, +}; + +use crate::api_server::web3::metrics::API_METRICS; + +#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, EncodeLabelValue, EncodeLabelSet)] +#[metrics(label = "transport", rename_all = "snake_case")] +pub(crate) enum Transport { + Ws, +} + +#[derive(Debug, Metrics)] +#[metrics(prefix = "api_jsonrpc_backend_batch")] +struct LimitMiddlewareMetrics { + /// Number of rate-limited requests. + rate_limited: Family, + /// Size of batch requests. 
+ #[metrics(buckets = Buckets::exponential(1.0..=512.0, 2.0))] + size: Family>, + /// Number of requests rejected by the limiter. + rejected: Family, +} + +#[vise::register] +static METRICS: vise::Global = vise::Global::new(); + +/// A rate-limiting middleware. +/// +/// `jsonrpsee` will allocate the instance of this struct once per session. +pub(crate) struct LimitMiddleware { + inner: S, + rate_limiter: Option>, + transport: Transport, + _guard: GaugeGuard, +} + +impl LimitMiddleware { + pub(crate) fn new(inner: S, requests_per_minute_limit: Option) -> Self { + Self { + inner, + rate_limiter: requests_per_minute_limit + .map(|limit| RateLimiter::direct(Quota::per_minute(limit))), + transport: Transport::Ws, + _guard: API_METRICS.ws_open_sessions.inc_guard(1), + } + } +} + +impl<'a, S> RpcServiceT<'a> for LimitMiddleware +where + S: Send + Clone + Sync + RpcServiceT<'a>, +{ + type Future = ResponseFuture; + + fn call(&self, request: Request<'a>) -> Self::Future { + if let Some(rate_limiter) = &self.rate_limiter { + let num_requests = NonZeroU32::MIN; // 1 request, no batches possible + + // Note: if required, we can extract data on rate limiting from the error. + if rate_limiter.check_n(num_requests).is_err() { + METRICS.rate_limited[&self.transport].inc(); + + let rp = MethodResponse::error( + request.id, + ErrorObject::borrowed( + ErrorCode::ServerError( + reqwest::StatusCode::TOO_MANY_REQUESTS.as_u16().into(), + ) + .code(), + "Too many requests", + None, + ), + ); + return ResponseFuture::ready(rp); + } + } + ResponseFuture::future(self.inner.call(request)) + } +} diff --git a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpsee/mod.rs b/core/lib/zksync_core/src/api_server/web3/backend_jsonrpsee/mod.rs index 04f6102066f..c8fbc726e2f 100644 --- a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpsee/mod.rs +++ b/core/lib/zksync_core/src/api_server/web3/backend_jsonrpsee/mod.rs @@ -2,21 +2,25 @@ //! Consists mostly of boilerplate code implementing the `jsonrpsee` server traits for the corresponding //! namespace structures defined in `zksync_core`. 
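The `into_jsrpc_error` hunk that follows keeps the same code assignments as the deleted `jsonrpc`-based backend. Distilled to a table (the negative values are the standard JSON-RPC 2.0 codes behind `ErrorCode::InternalError` and `ErrorCode::InvalidParams`; 3 through 6 are the application-specific codes this server reserves):

```rust
/// Simplified mirror of the `Web3Error` -> JSON-RPC code mapping below;
/// the variant names here are condensed stand-ins for the real error groups.
enum ErrorKind {
    Internal,
    InvalidParams,
    SubmitTransaction,
    PubSubTimeout,
    RequestTimeout,
    TreeApiUnavailable,
}

fn jsonrpc_code(kind: ErrorKind) -> i32 {
    match kind {
        ErrorKind::Internal => -32603,
        ErrorKind::InvalidParams => -32602,
        ErrorKind::SubmitTransaction => 3,
        ErrorKind::PubSubTimeout => 4,
        ErrorKind::RequestTimeout => 5,
        ErrorKind::TreeApiUnavailable => 6,
    }
}

fn main() {
    assert_eq!(jsonrpc_code(ErrorKind::PubSubTimeout), 4);
    assert_eq!(jsonrpc_code(ErrorKind::Internal), -32603);
}
```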
 
-use std::error::Error;
-use zksync_web3_decl::error::Web3Error;
-use zksync_web3_decl::jsonrpsee::types::{error::ErrorCode, ErrorObjectOwned};
+use std::fmt;
 
-pub mod namespaces;
+use zksync_web3_decl::{
+    error::Web3Error,
+    jsonrpsee::types::{error::ErrorCode, ErrorObjectOwned},
+};
 
-pub fn from_std_error(e: impl Error) -> ErrorObjectOwned {
-    ErrorObjectOwned::owned(ErrorCode::InternalError.code(), e.to_string(), Some(()))
-}
+use crate::api_server::web3::metrics::API_METRICS;
+
+pub mod batch_limiter_middleware;
+pub mod namespaces;
 
 pub fn into_jsrpc_error(err: Web3Error) -> ErrorObjectOwned {
     ErrorObjectOwned::owned(
         match err {
             Web3Error::InternalError | Web3Error::NotImplemented => ErrorCode::InternalError.code(),
             Web3Error::NoBlock
+            | Web3Error::PrunedBlock(_)
+            | Web3Error::PrunedL1Batch(_)
             | Web3Error::NoSuchFunction
             | Web3Error::RLPError(_)
             | Web3Error::InvalidTransactionData(_)
@@ -24,8 +28,7 @@ pub fn into_jsrpc_error(err: Web3Error) -> ErrorObjectOwned {
             | Web3Error::FilterNotFound
             | Web3Error::InvalidFeeParams(_)
             | Web3Error::InvalidFilterBlockHash
-            | Web3Error::LogsLimitExceeded(_, _, _)
-            | Web3Error::TooManyLogs(_) => ErrorCode::InvalidParams.code(),
+            | Web3Error::LogsLimitExceeded(_, _, _) => ErrorCode::InvalidParams.code(),
             Web3Error::SubmitTransactionError(_, _) | Web3Error::SerializationError(_) => 3,
             Web3Error::PubSubTimeout => 4,
             Web3Error::RequestTimeout => 5,
@@ -41,3 +44,9 @@ pub fn into_jsrpc_error(err: Web3Error) -> ErrorObjectOwned {
         },
     )
 }
+
+pub fn internal_error(method_name: &'static str, error: impl fmt::Display) -> Web3Error {
+    tracing::error!("Internal error in method {method_name}: {error}");
+    API_METRICS.web3_internal_errors[&method_name].inc();
+    Web3Error::InternalError
+}
diff --git a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpsee/namespaces/debug.rs b/core/lib/zksync_core/src/api_server/web3/backend_jsonrpsee/namespaces/debug.rs
index 0bd61bbbc3d..9f1e00a6c80 100644
--- a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpsee/namespaces/debug.rs
+++ b/core/lib/zksync_core/src/api_server/web3/backend_jsonrpsee/namespaces/debug.rs
@@ -21,6 +21,7 @@ impl DebugNamespaceServer for DebugNamespace {
             .await
             .map_err(into_jsrpc_error)
     }
+
     async fn trace_block_by_hash(
         &self,
         hash: H256,
@@ -30,6 +31,7 @@ impl DebugNamespaceServer for DebugNamespace {
             .await
             .map_err(into_jsrpc_error)
     }
+
     async fn trace_call(
         &self,
         request: CallRequest,
@@ -40,11 +42,14 @@ impl DebugNamespaceServer for DebugNamespace {
             .await
             .map_err(into_jsrpc_error)
     }
+
     async fn trace_transaction(
         &self,
         tx_hash: H256,
         options: Option<TracerConfig>,
     ) -> RpcResult<Option<DebugCall>> {
-        Ok(self.debug_trace_transaction_impl(tx_hash, options).await)
+        self.debug_trace_transaction_impl(tx_hash, options)
+            .await
+            .map_err(into_jsrpc_error)
     }
 }
diff --git a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpsee/namespaces/en.rs b/core/lib/zksync_core/src/api_server/web3/backend_jsonrpsee/namespaces/en.rs
index 69dce6f6dae..3480c6ec76f 100644
--- a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpsee/namespaces/en.rs
+++ b/core/lib/zksync_core/src/api_server/web3/backend_jsonrpsee/namespaces/en.rs
@@ -4,13 +4,10 @@ use zksync_web3_decl::{
     namespaces::en::EnNamespaceServer,
 };
 
-use crate::{
-    api_server::web3::{backend_jsonrpsee::into_jsrpc_error, namespaces::EnNamespace},
-    l1_gas_price::L1GasPriceProvider,
-};
+use crate::api_server::web3::{backend_jsonrpsee::into_jsrpc_error, namespaces::EnNamespace};
 
 #[async_trait]
-impl<G: L1GasPriceProvider + Send + Sync + 'static> EnNamespaceServer for EnNamespace<G> {
+impl EnNamespaceServer for EnNamespace {
     async fn sync_l2_block(
         &self,
         block_number: MiniblockNumber,
diff --git a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpsee/namespaces/eth.rs b/core/lib/zksync_core/src/api_server/web3/backend_jsonrpsee/namespaces/eth.rs
index 3751673ba8e..5f3dfcd3417 100644
--- a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpsee/namespaces/eth.rs
+++ b/core/lib/zksync_core/src/api_server/web3/backend_jsonrpsee/namespaces/eth.rs
@@ -13,13 +13,10 @@ use zksync_web3_decl::{
     types::{Filter, FilterChanges},
 };
 
-use crate::{
-    api_server::web3::{backend_jsonrpsee::into_jsrpc_error, EthNamespace},
-    l1_gas_price::L1GasPriceProvider,
-};
+use crate::api_server::web3::{backend_jsonrpsee::into_jsrpc_error, EthNamespace};
 
 #[async_trait]
-impl<G: L1GasPriceProvider + Send + Sync + 'static> EthNamespaceServer for EthNamespace<G> {
+impl EthNamespaceServer for EthNamespace {
     async fn get_block_number(&self) -> RpcResult<U64> {
         self.get_block_number_impl().await.map_err(into_jsrpc_error)
     }
@@ -41,9 +38,7 @@ impl<G: L1GasPriceProvider + Send + Sync + 'static> EthNamespaceServer for EthNa
     }
 
     async fn gas_price(&self) -> RpcResult<U256> {
-        let gas_price = self.gas_price_impl().map_err(into_jsrpc_error);
-        println!("The gas price: {:?}", gas_price);
-        return gas_price;
+        self.gas_price_impl().await.map_err(into_jsrpc_error)
     }
 
     async fn new_filter(&self, filter: Filter) -> RpcResult<U256> {
diff --git a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpsee/namespaces/mod.rs b/core/lib/zksync_core/src/api_server/web3/backend_jsonrpsee/namespaces/mod.rs
index 2551b90e824..3b76771a8cd 100644
--- a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpsee/namespaces/mod.rs
+++ b/core/lib/zksync_core/src/api_server/web3/backend_jsonrpsee/namespaces/mod.rs
@@ -3,5 +3,6 @@ pub mod en;
 pub mod eth;
 pub mod eth_subscribe;
 pub mod net;
+pub mod snapshots;
 pub mod web3;
 pub mod zks;
diff --git a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpsee/namespaces/snapshots.rs b/core/lib/zksync_core/src/api_server/web3/backend_jsonrpsee/namespaces/snapshots.rs
new file mode 100644
index 00000000000..6596e29e4f2
--- /dev/null
+++ b/core/lib/zksync_core/src/api_server/web3/backend_jsonrpsee/namespaces/snapshots.rs
@@ -0,0 +1,28 @@
+use async_trait::async_trait;
+use zksync_types::{
+    snapshots::{AllSnapshots, SnapshotHeader},
+    L1BatchNumber,
+};
+use zksync_web3_decl::{jsonrpsee::core::RpcResult, namespaces::SnapshotsNamespaceServer};
+
+use crate::api_server::web3::{
+    backend_jsonrpsee::into_jsrpc_error, namespaces::SnapshotsNamespace,
+};
+
+#[async_trait]
+impl SnapshotsNamespaceServer for SnapshotsNamespace {
+    async fn get_all_snapshots(&self) -> RpcResult<AllSnapshots> {
+        self.get_all_snapshots_impl()
+            .await
+            .map_err(into_jsrpc_error)
+    }
+
+    async fn get_snapshot_by_l1_batch_number(
+        &self,
+        l1_batch_number: L1BatchNumber,
+    ) -> RpcResult<Option<SnapshotHeader>> {
+        self.get_snapshot_by_l1_batch_number_impl(l1_batch_number)
+            .await
+            .map_err(into_jsrpc_error)
+    }
+}
diff --git a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpsee/namespaces/zks.rs b/core/lib/zksync_core/src/api_server/web3/backend_jsonrpsee/namespaces/zks.rs
index 54f8da8a756..8fe8fe1894c 100644
--- a/core/lib/zksync_core/src/api_server/web3/backend_jsonrpsee/namespaces/zks.rs
+++ b/core/lib/zksync_core/src/api_server/web3/backend_jsonrpsee/namespaces/zks.rs
@@ -1,29 +1,26 @@
-use bigdecimal::BigDecimal;
-
 use std::collections::HashMap;
 
+use bigdecimal::BigDecimal;
 use zksync_types::{
     api::{
         BlockDetails, BridgeAddresses, L1BatchDetails, L2ToL1LogProof, Proof, ProtocolVersion,
         TransactionDetails,
     },
     fee::Fee,
+    fee_model::FeeParams,
transaction_request::CallRequest, Address, L1BatchNumber, MiniblockNumber, H256, U256, U64, }; use zksync_web3_decl::{ jsonrpsee::core::{async_trait, RpcResult}, namespaces::zks::ZksNamespaceServer, - types::{Filter, Log, Token}, + types::Token, }; -use crate::{ - api_server::web3::{backend_jsonrpsee::into_jsrpc_error, ZksNamespace}, - l1_gas_price::L1GasPriceProvider, -}; +use crate::api_server::web3::{backend_jsonrpsee::into_jsrpc_error, ZksNamespace}; #[async_trait] -impl ZksNamespaceServer for ZksNamespace { +impl ZksNamespaceServer for ZksNamespace { async fn estimate_fee(&self, req: CallRequest) -> RpcResult { self.estimate_fee_impl(req).await.map_err(into_jsrpc_error) } @@ -144,22 +141,24 @@ impl ZksNamespaceServer for ZksNa } async fn get_bytecode_by_hash(&self, hash: H256) -> RpcResult>> { - Ok(self.get_bytecode_by_hash_impl(hash).await) + self.get_bytecode_by_hash_impl(hash) + .await + .map_err(into_jsrpc_error) } async fn get_l1_gas_price(&self) -> RpcResult { Ok(self.get_l1_gas_price_impl()) } + async fn get_fee_params(&self) -> RpcResult { + Ok(self.get_fee_params_impl()) + } + async fn get_protocol_version( &self, version_id: Option, ) -> RpcResult> { - Ok(self.get_protocol_version_impl(version_id).await) - } - - async fn get_logs_with_virtual_blocks(&self, filter: Filter) -> RpcResult> { - self.get_logs_with_virtual_blocks_impl(filter) + self.get_protocol_version_impl(version_id) .await .map_err(into_jsrpc_error) } diff --git a/core/lib/zksync_core/src/api_server/web3/metrics.rs b/core/lib/zksync_core/src/api_server/web3/metrics.rs index 2df24f9dd60..44edd032f69 100644 --- a/core/lib/zksync_core/src/api_server/web3/metrics.rs +++ b/core/lib/zksync_core/src/api_server/web3/metrics.rs @@ -1,18 +1,18 @@ //! Metrics for the JSON-RPC server. -use vise::{ - Buckets, Counter, EncodeLabelSet, EncodeLabelValue, Family, Gauge, Histogram, LabeledFamily, - LatencyObserver, Metrics, Unit, -}; - use std::{ fmt, time::{Duration, Instant}, }; -use super::{ApiTransport, TypedFilter}; +use vise::{ + Buckets, Counter, EncodeLabelSet, EncodeLabelValue, Family, Gauge, Histogram, LabeledFamily, + LatencyObserver, Metrics, Unit, +}; use zksync_types::api; +use super::{ApiTransport, TypedFilter}; + #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, EncodeLabelValue, EncodeLabelSet)] #[metrics(label = "scheme", rename_all = "UPPERCASE")] pub(super) enum ApiTransportLabel { @@ -184,12 +184,27 @@ pub(super) enum SubscriptionType { #[derive(Debug, Metrics)] #[metrics(prefix = "api_web3_pubsub")] pub(super) struct PubSubMetrics { + /// Latency to load new events from Postgres before broadcasting them to subscribers. #[metrics(buckets = Buckets::LATENCIES)] pub db_poll_latency: Family>, + /// Latency to send an atomic batch of events to a single subscriber. #[metrics(buckets = Buckets::LATENCIES)] pub notify_subscribers_latency: Family>, + /// Total number of events sent to all subscribers of a certain type. pub notify: Family, + /// Number of currently active subscribers split by the subscription type. pub active_subscribers: Family, + /// Lifetime of a subscriber of a certain type. + #[metrics(buckets = Buckets::LATENCIES)] + pub subscriber_lifetime: Family>, + /// Current length of the broadcast channel of a certain type. With healthy subscribers, this value + /// should be reasonably low. + pub broadcast_channel_len: Family>, + /// Number of skipped broadcast messages. 
+ #[metrics(buckets = Buckets::exponential(1.0..=128.0, 2.0))] + pub skipped_broadcast_messages: Family>, + /// Number of subscribers dropped because of a send timeout. + pub subscriber_send_timeouts: Family, } #[vise::register] @@ -217,7 +232,7 @@ impl From<&TypedFilter> for FilterType { #[metrics(prefix = "api_web3_filter")] pub(super) struct FilterMetrics { /// Number of currently active filters grouped by the filter type - pub metrics_count: Family, + pub filter_count: Family, /// Time in seconds between consecutive requests to the filter grouped by the filter type #[metrics(buckets = Buckets::LATENCIES, unit = Unit::Seconds)] pub request_frequency: Family>, @@ -226,7 +241,7 @@ pub(super) struct FilterMetrics { pub filter_lifetime: Family>, /// Number of requests to the filter grouped by the filter type #[metrics(buckets = Buckets::exponential(1.0..=1048576.0, 2.0))] - pub filter_count: Family>, + pub request_count: Family>, } #[vise::register] diff --git a/core/lib/zksync_core/src/api_server/web3/mod.rs b/core/lib/zksync_core/src/api_server/web3/mod.rs index 918bf5f67dc..0f88382a1d1 100644 --- a/core/lib/zksync_core/src/api_server/web3/mod.rs +++ b/core/lib/zksync_core/src/api_server/web3/mod.rs @@ -1,67 +1,56 @@ +use std::{net::SocketAddr, num::NonZeroU32, sync::Arc, time::Duration}; + use anyhow::Context as _; +use chrono::NaiveDateTime; use futures::future; -use jsonrpc_core::MetaIoHandler; -use jsonrpc_http_server::hyper; -use jsonrpc_pubsub::PubSubHandler; use serde::Deserialize; -use tokio::sync::{oneshot, watch, Mutex}; +use tokio::{ + sync::{mpsc, oneshot, watch, Mutex}, + task::JoinHandle, +}; use tower_http::{cors::CorsLayer, metrics::InFlightRequestsLayer}; - -use chrono::NaiveDateTime; -use std::{net::SocketAddr, sync::Arc, time::Duration}; -use tokio::task::JoinHandle; - -use zksync_dal::{ConnectionPool, StorageProcessor}; +use zksync_dal::ConnectionPool; use zksync_health_check::{HealthStatus, HealthUpdater, ReactiveHealthCheck}; -use zksync_types::{api, MiniblockNumber}; +use zksync_types::MiniblockNumber; use zksync_web3_decl::{ - error::Web3Error, jsonrpsee::{ - server::{BatchRequestConfig, ServerBuilder}, + server::{BatchRequestConfig, RpcServiceBuilder, ServerBuilder}, RpcModule, }, namespaces::{ - DebugNamespaceServer, EnNamespaceServer, EthNamespaceServer, NetNamespaceServer, - Web3NamespaceServer, ZksNamespaceServer, + DebugNamespaceServer, EnNamespaceServer, EthNamespaceServer, EthPubSubServer, + NetNamespaceServer, SnapshotsNamespaceServer, Web3NamespaceServer, ZksNamespaceServer, }, types::Filter, }; +use self::{ + metrics::API_METRICS, + namespaces::{ + DebugNamespace, EnNamespace, EthNamespace, NetNamespace, SnapshotsNamespace, Web3Namespace, + ZksNamespace, + }, + pubsub::{EthSubscribe, EthSubscriptionIdProvider, PubSubEvent}, + state::{Filters, InternalApiConfig, RpcState, SealedMiniblockNumber}, +}; use crate::{ api_server::{ - execution_sandbox::VmConcurrencyBarrier, tree::TreeApiHttpClient, tx_sender::TxSender, - web3::backend_jsonrpc::batch_limiter_middleware::RateLimitMetadata, + execution_sandbox::{BlockStartInfo, VmConcurrencyBarrier}, + tree::TreeApiHttpClient, + tx_sender::TxSender, + web3::backend_jsonrpsee::batch_limiter_middleware::LimitMiddleware, }, - l1_gas_price::L1GasPriceProvider, sync_layer::SyncState, }; -pub mod backend_jsonrpc; pub mod backend_jsonrpsee; mod metrics; pub mod namespaces; -mod pubsub_notifier; +mod pubsub; pub mod state; #[cfg(test)] pub(crate) mod tests; -use self::backend_jsonrpc::{ - 
batch_limiter_middleware::{LimitMiddleware, Transport}, - error::internal_error, - namespaces::{ - debug::DebugNamespaceT, en::EnNamespaceT, eth::EthNamespaceT, net::NetNamespaceT, - web3::Web3NamespaceT, zks::ZksNamespaceT, - }, - pub_sub::Web3PubSub, -}; -use self::metrics::API_METRICS; -use self::namespaces::{ - DebugNamespace, EnNamespace, EthNamespace, EthSubscribe, NetNamespace, Web3Namespace, - ZksNamespace, -}; -use self::pubsub_notifier::{notify_blocks, notify_logs, notify_txs}; -use self::state::{Filters, InternalApiConfig, RpcState, SealedMiniblockNumber}; - /// Timeout for graceful shutdown logic within API servers. const GRACEFUL_SHUTDOWN_TIMEOUT: Duration = Duration::from_secs(5); @@ -76,12 +65,6 @@ pub(crate) enum TypedFilter { PendingTransactions(NaiveDateTime), } -#[derive(Debug, Clone, Copy)] -enum ApiBackend { - Jsonrpsee, - Jsonrpc, -} - #[derive(Debug, Clone, Copy)] enum ApiTransport { WebSocket(SocketAddr), @@ -98,26 +81,17 @@ pub enum Namespace { Zks, En, Pubsub, + Snapshots, } impl Namespace { - pub const ALL: &'static [Namespace] = &[ - Namespace::Eth, - Namespace::Net, - Namespace::Web3, - Namespace::Debug, - Namespace::Zks, - Namespace::En, - Namespace::Pubsub, - ]; - - pub const NON_DEBUG: &'static [Namespace] = &[ - Namespace::Eth, - Namespace::Net, - Namespace::Web3, - Namespace::Zks, - Namespace::En, - Namespace::Pubsub, + pub const DEFAULT: &'static [Self] = &[ + Self::Eth, + Self::Net, + Self::Web3, + Self::Zks, + Self::En, + Self::Pubsub, ]; } @@ -129,58 +103,63 @@ pub struct ApiServerHandles { pub health_check: ReactiveHealthCheck, } +/// Optional part of the API server parameters. +#[derive(Debug, Default)] +struct OptionalApiParams { + sync_state: Option, + filters_limit: Option, + subscriptions_limit: Option, + batch_request_size_limit: Option, + response_body_size_limit: Option, + websocket_requests_per_minute_limit: Option, + tree_api_url: Option, + pub_sub_events_sender: Option>, +} + +/// Full API server parameters. +#[derive(Debug)] +struct FullApiParams { + pool: ConnectionPool, + last_miniblock_pool: ConnectionPool, + config: InternalApiConfig, + transport: ApiTransport, + tx_sender: TxSender, + vm_barrier: VmConcurrencyBarrier, + polling_interval: Duration, + namespaces: Vec, + optional: OptionalApiParams, +} + #[derive(Debug)] -pub struct ApiBuilder { - backend: ApiBackend, +pub struct ApiBuilder { pool: ConnectionPool, last_miniblock_pool: ConnectionPool, config: InternalApiConfig, + polling_interval: Duration, + // Mandatory params that must be set using builder methods. transport: Option, - tx_sender: Option>, + tx_sender: Option, vm_barrier: Option, - filters_limit: Option, - subscriptions_limit: Option, - batch_request_size_limit: Option, - response_body_size_limit: Option, - websocket_requests_per_minute_limit: Option, - sync_state: Option, - threads: Option, - vm_concurrency_limit: Option, - polling_interval: Option, + // Optional params that may or may not be set using builder methods. We treat `namespaces` + // specially because we want to output a warning if they are not set. 
namespaces: Option>, - logs_translator_enabled: bool, - tree_api_url: Option, + optional: OptionalApiParams, } -impl ApiBuilder { +impl ApiBuilder { + const DEFAULT_POLLING_INTERVAL: Duration = Duration::from_millis(200); + pub fn jsonrpsee_backend(config: InternalApiConfig, pool: ConnectionPool) -> Self { Self { - backend: ApiBackend::Jsonrpsee, - transport: None, last_miniblock_pool: pool.clone(), pool, - sync_state: None, + config, + polling_interval: Self::DEFAULT_POLLING_INTERVAL, + transport: None, tx_sender: None, vm_barrier: None, - filters_limit: None, - subscriptions_limit: None, - batch_request_size_limit: None, - response_body_size_limit: None, - websocket_requests_per_minute_limit: None, - threads: None, - vm_concurrency_limit: None, - polling_interval: None, namespaces: None, - config, - logs_translator_enabled: false, - tree_api_url: None, - } - } - - pub fn jsonrpc_backend(config: InternalApiConfig, pool: ConnectionPool) -> Self { - Self { - backend: ApiBackend::Jsonrpc, - ..Self::jsonrpsee_backend(config, pool) + optional: OptionalApiParams::default(), } } @@ -202,61 +181,48 @@ impl ApiBuilder { self } - pub fn with_tx_sender( - mut self, - tx_sender: TxSender, - vm_barrier: VmConcurrencyBarrier, - ) -> Self { + pub fn with_tx_sender(mut self, tx_sender: TxSender, vm_barrier: VmConcurrencyBarrier) -> Self { self.tx_sender = Some(tx_sender); self.vm_barrier = Some(vm_barrier); self } pub fn with_filter_limit(mut self, filters_limit: usize) -> Self { - self.filters_limit = Some(filters_limit); + self.optional.filters_limit = Some(filters_limit); self } pub fn with_subscriptions_limit(mut self, subscriptions_limit: usize) -> Self { - self.subscriptions_limit = Some(subscriptions_limit); + self.optional.subscriptions_limit = Some(subscriptions_limit); self } pub fn with_batch_request_size_limit(mut self, batch_request_size_limit: usize) -> Self { - self.batch_request_size_limit = Some(batch_request_size_limit); + self.optional.batch_request_size_limit = Some(batch_request_size_limit); self } pub fn with_response_body_size_limit(mut self, response_body_size_limit: usize) -> Self { - self.response_body_size_limit = Some(response_body_size_limit); + self.optional.response_body_size_limit = Some(response_body_size_limit); self } pub fn with_websocket_requests_per_minute_limit( mut self, - websocket_requests_per_minute_limit: u32, + websocket_requests_per_minute_limit: NonZeroU32, ) -> Self { - self.websocket_requests_per_minute_limit = Some(websocket_requests_per_minute_limit); + self.optional.websocket_requests_per_minute_limit = + Some(websocket_requests_per_minute_limit); self } pub fn with_sync_state(mut self, sync_state: SyncState) -> Self { - self.sync_state = Some(sync_state); - self - } - - pub fn with_threads(mut self, threads: usize) -> Self { - self.threads = Some(threads); + self.optional.sync_state = Some(sync_state); self } pub fn with_polling_interval(mut self, polling_interval: Duration) -> Self { - self.polling_interval = Some(polling_interval); - self - } - - pub fn with_vm_concurrency_limit(mut self, vm_concurrency_limit: usize) -> Self { - self.vm_concurrency_limit = Some(vm_concurrency_limit); + self.polling_interval = polling_interval; self } @@ -265,54 +231,89 @@ impl ApiBuilder { self } - pub fn enable_request_translator(mut self) -> Self { - tracing::info!("Logs request translator enabled"); - self.logs_translator_enabled = true; + pub fn with_tree_api(mut self, tree_api_url: Option) -> Self { + self.optional.tree_api_url = tree_api_url; self } - pub fn 
with_tree_api(mut self, tree_api_url: Option) -> Self { - self.tree_api_url = tree_api_url; + #[cfg(test)] + fn with_pub_sub_events(mut self, sender: mpsc::UnboundedSender) -> Self { + self.optional.pub_sub_events_sender = Some(sender); self } -} -impl ApiBuilder { - fn build_rpc_state(self) -> RpcState { - // Chosen to be significantly smaller than the interval between miniblocks, but larger than - // the latency of getting the latest sealed miniblock number from Postgres. If the API server - // processes enough requests, information about the latest sealed miniblock will be updated - // by reporting block difference metrics, so the actual update lag would be much smaller than this value. - const SEALED_MINIBLOCK_UPDATE_INTERVAL: Duration = Duration::from_millis(25); + fn into_full_params(self) -> anyhow::Result { + Ok(FullApiParams { + pool: self.pool, + last_miniblock_pool: self.last_miniblock_pool, + config: self.config, + transport: self.transport.context("API transport not set")?, + tx_sender: self.tx_sender.context("Transaction sender not set")?, + vm_barrier: self.vm_barrier.context("VM barrier not set")?, + polling_interval: self.polling_interval, + namespaces: self.namespaces.unwrap_or_else(|| { + tracing::warn!( + "debug_ and snapshots_ API namespace will be disabled by default in ApiBuilder" + ); + Namespace::DEFAULT.to_vec() + }), + optional: self.optional, + }) + } +} - let (last_sealed_miniblock, update_task) = - SealedMiniblockNumber::new(self.last_miniblock_pool, SEALED_MINIBLOCK_UPDATE_INTERVAL); - // The update tasks takes care of its termination, so we don't need to retain its handle. - tokio::spawn(update_task); +impl ApiBuilder { + pub async fn build( + self, + stop_receiver: watch::Receiver, + ) -> anyhow::Result { + self.into_full_params()?.spawn_server(stop_receiver).await + } +} - RpcState { - installed_filters: Arc::new(Mutex::new(Filters::new( - self.filters_limit.unwrap_or(usize::MAX), - ))), +impl FullApiParams { + async fn build_rpc_state( + self, + last_sealed_miniblock: SealedMiniblockNumber, + ) -> anyhow::Result { + let mut storage = self + .last_miniblock_pool + .access_storage_tagged("api") + .await?; + let start_info = BlockStartInfo::new(&mut storage).await?; + drop(storage); + + Ok(RpcState { + installed_filters: Arc::new(Mutex::new(Filters::new(self.optional.filters_limit))), connection_pool: self.pool, - tx_sender: self.tx_sender.expect("TxSender is not provided"), - sync_state: self.sync_state, + tx_sender: self.tx_sender, + sync_state: self.optional.sync_state, api_config: self.config, + start_info, last_sealed_miniblock, - logs_translator_enabled: self.logs_translator_enabled, tree_api: self + .optional .tree_api_url .map(|url| TreeApiHttpClient::new(url.as_str())), - } + }) } - async fn build_rpc_module(mut self) -> RpcModule<()> { - let namespaces = self.namespaces.take().unwrap(); + async fn build_rpc_module( + self, + pub_sub: Option, + last_sealed_miniblock: SealedMiniblockNumber, + ) -> anyhow::Result> { + let namespaces = self.namespaces.clone(); let zksync_network_id = self.config.l2_chain_id; - let rpc_state = self.build_rpc_state(); + let rpc_state = self.build_rpc_state(last_sealed_miniblock).await?; // Collect all the methods into a single RPC module. 
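(Background on the merging below: `RpcModule::merge` consumes another module's methods and errors out on duplicate method names, which is why each namespace is merged at most once. A minimal sketch of the pattern; the method name is hypothetical and `jsonrpsee` registration signatures vary across versions:)

```rust
use jsonrpsee::RpcModule;

fn build_module() -> RpcModule<()> {
    let mut rpc = RpcModule::new(());

    // A hypothetical mini-namespace with a single method.
    let mut example = RpcModule::new(());
    example
        .register_method("example_ping", |_params, _ctx| "pong")
        .expect("failed to register method");

    // `merge` would fail here if `rpc` already contained `example_ping`.
    rpc.merge(example).expect("Can't merge example namespace");
    rpc
}
```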
let mut rpc = RpcModule::new(()); + if let Some(pub_sub) = pub_sub { + rpc.merge(pub_sub.into_rpc()) + .expect("Can't merge eth pubsub namespace"); + } + if namespaces.contains(&Namespace::Eth) { rpc.merge(EthNamespace::new(rpc_state.clone()).into_rpc()) .expect("Can't merge eth namespace"); @@ -334,42 +335,37 @@ impl ApiBuilder { .expect("Can't merge en namespace"); } if namespaces.contains(&Namespace::Debug) { - rpc.merge(DebugNamespace::new(rpc_state).await.into_rpc()) + rpc.merge(DebugNamespace::new(rpc_state.clone()).await.into_rpc()) .expect("Can't merge debug namespace"); } - rpc + if namespaces.contains(&Namespace::Snapshots) { + rpc.merge(SnapshotsNamespace::new(rpc_state).into_rpc()) + .expect("Can't merge snapshots namespace"); + } + Ok(rpc) } - pub async fn build( - mut self, + async fn spawn_server( + self, stop_receiver: watch::Receiver, ) -> anyhow::Result { - if self.filters_limit.is_none() { + if self.optional.filters_limit.is_none() { tracing::warn!("Filters limit is not set - unlimited filters are allowed"); } - if self.namespaces.is_none() { - tracing::warn!("debug_ API namespace will be disabled by default in ApiBuilder"); - self.namespaces = Some(Namespace::NON_DEBUG.to_vec()); - } - - if self - .namespaces - .as_ref() - .unwrap() - .contains(&Namespace::Pubsub) - && matches!(&self.transport, Some(ApiTransport::Http(_))) + if self.namespaces.contains(&Namespace::Pubsub) + && matches!(&self.transport, ApiTransport::Http(_)) { tracing::debug!("pubsub API is not supported for HTTP transport, ignoring"); } - match (&self.transport, self.subscriptions_limit) { - (Some(ApiTransport::WebSocket(_)), None) => { + match (&self.transport, self.optional.subscriptions_limit) { + (ApiTransport::WebSocket(_), None) => { tracing::warn!( "`subscriptions_limit` is not set - unlimited subscriptions are allowed" ); } - (Some(ApiTransport::Http(_)), Some(_)) => { + (ApiTransport::Http(_), Some(_)) => { tracing::warn!( "`subscriptions_limit` is ignored for HTTP transport, use WebSocket instead" ); @@ -377,92 +373,7 @@ impl ApiBuilder { _ => {} } - match (self.backend, self.transport.take()) { - (ApiBackend::Jsonrpc, Some(ApiTransport::Http(addr))) => { - self.build_jsonrpc_http(addr, stop_receiver).await - } - (ApiBackend::Jsonrpc, Some(ApiTransport::WebSocket(addr))) => { - self.build_jsonrpc_ws(addr, stop_receiver).await - } - (ApiBackend::Jsonrpsee, Some(transport)) => { - self.build_jsonrpsee(transport, stop_receiver).await - } - (_, None) => anyhow::bail!("ApiTransport is not specified"), - } - } - - async fn build_jsonrpc_http( - mut self, - addr: SocketAddr, - mut stop_receiver: watch::Receiver, - ) -> anyhow::Result { - if self.batch_request_size_limit.is_some() { - tracing::info!("`batch_request_size_limit` is not supported for HTTP `jsonrpc` backend, this value is ignored"); - } - if self.response_body_size_limit.is_some() { - tracing::info!("`response_body_size_limit` is not supported for `jsonrpc` backend, this value is ignored"); - } - - let (health_check, health_updater) = ReactiveHealthCheck::new("http_api"); - let vm_barrier = self.vm_barrier.take().unwrap(); - // ^ `unwrap()` is safe by construction - - let runtime = tokio::runtime::Builder::new_multi_thread() - .enable_all() - .thread_name("jsonrpc-http-worker") - .worker_threads(self.threads.unwrap()) - .build() - .context("Failed creating Tokio runtime for `jsonrpc` API backend")?; - let mut io_handler: MetaIoHandler<()> = MetaIoHandler::default(); - self.extend_jsonrpc_methods(&mut io_handler).await; - - let 
(local_addr_sender, local_addr) = oneshot::channel(); - let server_task = tokio::task::spawn_blocking(move || { - let server = jsonrpc_http_server::ServerBuilder::new(io_handler) - .threads(1) - .event_loop_executor(runtime.handle().clone()) - .start_http(&addr) - .context("jsonrpc_http::Server::start_http")?; - local_addr_sender.send(*server.address()).ok(); - - let close_handle = server.close_handle(); - let closing_vm_barrier = vm_barrier.clone(); - runtime.handle().spawn(async move { - if stop_receiver.changed().await.is_err() { - tracing::warn!( - "Stop signal sender for HTTP JSON-RPC server was dropped without sending a signal" - ); - } - tracing::info!("Stop signal received, HTTP JSON-RPC server is shutting down"); - closing_vm_barrier.close(); - close_handle.close(); - }); - - health_updater.update(HealthStatus::Ready.into()); - server.wait(); - drop(health_updater); - tracing::info!("HTTP JSON-RPC server stopped"); - runtime.block_on(Self::wait_for_vm(vm_barrier, "HTTP")); - runtime.shutdown_timeout(GRACEFUL_SHUTDOWN_TIMEOUT); - Ok(()) - }); - - let local_addr = match local_addr.await { - Ok(addr) => addr, - Err(_) => { - // If the local address was not transmitted, `server_task` must have failed. - let err = server_task - .await - .context("HTTP JSON-RPC server panicked")? - .unwrap_err(); - return Err(err); - } - }; - Ok(ApiServerHandles { - local_addr, - health_check, - tasks: vec![server_task], - }) + self.build_jsonrpsee(stop_receiver).await } async fn wait_for_vm(vm_barrier: VmConcurrencyBarrier, transport: &str) { @@ -478,215 +389,57 @@ impl ApiBuilder { } } - async fn extend_jsonrpc_methods(mut self, io: &mut MetaIoHandler) - where - T: jsonrpc_core::Metadata, - S: jsonrpc_core::Middleware, - { - let zksync_network_id = self.config.l2_chain_id; - let namespaces = self.namespaces.take().unwrap(); - let rpc_state = self.build_rpc_state(); - if namespaces.contains(&Namespace::Eth) { - io.extend_with(EthNamespace::new(rpc_state.clone()).to_delegate()); - } - if namespaces.contains(&Namespace::Zks) { - io.extend_with(ZksNamespace::new(rpc_state.clone()).to_delegate()); - } - if namespaces.contains(&Namespace::En) { - io.extend_with(EnNamespace::new(rpc_state.clone()).to_delegate()); - } - if namespaces.contains(&Namespace::Web3) { - io.extend_with(Web3Namespace.to_delegate()); - } - if namespaces.contains(&Namespace::Net) { - io.extend_with(NetNamespace::new(zksync_network_id).to_delegate()); - } - if namespaces.contains(&Namespace::Debug) { - let debug_ns = DebugNamespace::new(rpc_state).await; - io.extend_with(debug_ns.to_delegate()); - } - } - - async fn build_jsonrpc_ws( - mut self, - addr: SocketAddr, - mut stop_receiver: watch::Receiver, + async fn build_jsonrpsee( + self, + stop_receiver: watch::Receiver, ) -> anyhow::Result { - if self.response_body_size_limit.is_some() { - tracing::info!("`response_body_size_limit` is not supported for `jsonrpc` backend, this value is ignored"); - } - - let (health_check, health_updater) = ReactiveHealthCheck::new("ws_api"); - let websocket_requests_per_second_limit = self.websocket_requests_per_minute_limit; - let batch_limiter_middleware = - LimitMiddleware::new(Transport::Ws, self.batch_request_size_limit); - - let runtime = tokio::runtime::Builder::new_multi_thread() - .enable_all() - .thread_name("jsonrpc-ws-worker") - .worker_threads(self.threads.unwrap()) - .build() - .context("Failed creating Tokio runtime for `jsonrpc-ws` API backend")?; - let max_connections = self.subscriptions_limit.unwrap_or(usize::MAX); - let vm_barrier = 
self.vm_barrier.take().unwrap(); - - let io_handler: MetaIoHandler>, _> = - MetaIoHandler::with_middleware(batch_limiter_middleware); - let mut io_handler = PubSubHandler::new(io_handler); - let mut tasks = Vec::new(); - - if self - .namespaces - .as_ref() - .unwrap() - .contains(&Namespace::Pubsub) - { - let pub_sub = EthSubscribe::new(runtime.handle().clone()); - let polling_interval = self - .polling_interval - .context("Polling interval is not set")?; - tasks.extend([ - tokio::spawn(notify_blocks( - pub_sub.active_block_subs.clone(), - self.pool.clone(), - polling_interval, - stop_receiver.clone(), - )), - tokio::spawn(notify_txs( - pub_sub.active_tx_subs.clone(), - self.pool.clone(), - polling_interval, - stop_receiver.clone(), - )), - tokio::spawn(notify_logs( - pub_sub.active_log_subs.clone(), - self.pool.clone(), - polling_interval, - stop_receiver.clone(), - )), - ]); - io_handler.extend_with(pub_sub.to_delegate()); - } - self.extend_jsonrpc_methods(&mut io_handler).await; - - let (local_addr_sender, local_addr) = oneshot::channel(); - let server_task = tokio::task::spawn_blocking(move || { - let server = jsonrpc_ws_server::ServerBuilder::with_meta_extractor( - io_handler, - move |context: &jsonrpc_ws_server::RequestContext| { - let session = Arc::new(jsonrpc_pubsub::Session::new(context.sender())); - RateLimitMetadata::new(websocket_requests_per_second_limit, session) - }, - ) - .event_loop_executor(runtime.handle().clone()) - .max_connections(max_connections) - .session_stats(TrackOpenWsConnections) - .start(&addr) - .context("jsonrpc_ws_server::Server::start()")?; - - local_addr_sender.send(*server.addr()).ok(); - - let close_handle = server.close_handle(); - let closing_vm_barrier = vm_barrier.clone(); - runtime.handle().spawn(async move { - if stop_receiver.changed().await.is_err() { - tracing::warn!( - "Stop signal sender for WS JSON-RPC server was dropped without sending a signal" - ); - } - tracing::info!("Stop signal received, WS JSON-RPC server is shutting down"); - closing_vm_barrier.close(); - close_handle.close(); - }); - - health_updater.update(HealthStatus::Ready.into()); - server - .wait() - .context("WS JSON-RPC server encountered fatal error")?; - drop(health_updater); - tracing::info!("WS JSON-RPC server stopped"); - runtime.block_on(Self::wait_for_vm(vm_barrier, "WS")); - runtime.shutdown_timeout(GRACEFUL_SHUTDOWN_TIMEOUT); - Ok(()) - }); + // Chosen to be significantly smaller than the interval between miniblocks, but larger than + // the latency of getting the latest sealed miniblock number from Postgres. If the API server + // processes enough requests, information about the latest sealed miniblock will be updated + // by reporting block difference metrics, so the actual update lag would be much smaller than this value. + const SEALED_MINIBLOCK_UPDATE_INTERVAL: Duration = Duration::from_millis(25); - let local_addr = match local_addr.await { - Ok(addr) => addr, - Err(_) => { - // If the local address was not transmitted, `server_task` must have failed. - let err = server_task - .await - .context("WS JSON-RPC server panicked")? 
- .unwrap_err(); - return Err(err); - } + let transport = self.transport; + let health_check_name = match transport { + ApiTransport::Http(_) => "http_api", + ApiTransport::WebSocket(_) => "ws_api", }; - tasks.push(server_task); + let (health_check, health_updater) = ReactiveHealthCheck::new(health_check_name); - Ok(ApiServerHandles { - local_addr, - tasks, - health_check, - }) - } + let (last_sealed_miniblock, update_task) = SealedMiniblockNumber::new( + self.last_miniblock_pool.clone(), + SEALED_MINIBLOCK_UPDATE_INTERVAL, + stop_receiver.clone(), + ); + let mut tasks = vec![tokio::spawn(update_task)]; - async fn build_jsonrpsee( - mut self, - transport: ApiTransport, - stop_receiver: watch::Receiver, - ) -> anyhow::Result { - if matches!(transport, ApiTransport::WebSocket(_)) { - // TODO (SMA-1588): Implement `eth_subscribe` method for `jsonrpsee`. - tracing::warn!( - "`eth_subscribe` is not implemented for jsonrpsee backend, use jsonrpc instead" - ); - if self.websocket_requests_per_minute_limit.is_some() { - tracing::info!("`websocket_requests_per_second_limit` is not supported for `jsonrpsee` backend, this value is ignored"); + let pub_sub = if matches!(transport, ApiTransport::WebSocket(_)) + && self.namespaces.contains(&Namespace::Pubsub) + { + let mut pub_sub = EthSubscribe::new(); + if let Some(sender) = &self.optional.pub_sub_events_sender { + pub_sub.set_events_sender(sender.clone()); } - } - let (runtime_thread_name, health_check_name) = match transport { - ApiTransport::Http(_) => ("jsonrpsee-http-worker", "http_api"), - ApiTransport::WebSocket(_) => ("jsonrpsee-ws-worker", "ws_api"), - }; - let (health_check, health_updater) = ReactiveHealthCheck::new(health_check_name); - let vm_barrier = self.vm_barrier.take().unwrap(); - let batch_request_config = if let Some(limit) = self.batch_request_size_limit { - BatchRequestConfig::Limit(limit as u32) + tasks.extend(pub_sub.spawn_notifiers( + self.pool.clone(), + self.polling_interval, + stop_receiver.clone(), + )); + Some(pub_sub) } else { - BatchRequestConfig::Unlimited + None }; - let response_body_size_limit = self - .response_body_size_limit - .map(|limit| limit as u32) - .unwrap_or(u32::MAX); - - let runtime = tokio::runtime::Builder::new_multi_thread() - .enable_all() - .thread_name(runtime_thread_name) - .worker_threads(self.threads.unwrap()) - .build() - .with_context(|| { - format!("Failed creating Tokio runtime for {health_check_name} jsonrpsee server") - })?; - let rpc = self.build_rpc_module().await; // Start the server in a separate tokio runtime from a dedicated thread. 
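(The `oneshot` channel created just below hands the bound socket address from the spawned server task back to the builder; a dropped sender doubles as a startup-failure signal, which the `local_addr.await` match relies on. A self-contained sketch of the pattern, with a made-up address:)

```rust
use std::net::SocketAddr;

use tokio::sync::oneshot;

#[tokio::main]
async fn main() {
    let (local_addr_sender, local_addr) = oneshot::channel::<SocketAddr>();

    tokio::spawn(async move {
        // Stand-in for actually binding the server socket.
        let bound: SocketAddr = "127.0.0.1:3050".parse().unwrap();
        // `.ok()`: if the receiver was dropped, nobody is waiting anymore.
        local_addr_sender.send(bound).ok();
    });

    match local_addr.await {
        Ok(addr) => println!("server reported local address {addr}"),
        // The sender was dropped, so the server task must have failed early.
        Err(_) => eprintln!("server task failed before binding"),
    }
}
```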
let (local_addr_sender, local_addr) = oneshot::channel(); - let server_task = tokio::task::spawn_blocking(move || { - let res = runtime.block_on(Self::run_jsonrpsee_server( - rpc, - transport, - stop_receiver, - local_addr_sender, - health_updater, - vm_barrier, - batch_request_config, - response_body_size_limit, - )); - runtime.shutdown_timeout(GRACEFUL_SHUTDOWN_TIMEOUT); - res - }); + let server_task = tokio::spawn(self.run_jsonrpsee_server( + stop_receiver, + pub_sub, + last_sealed_miniblock, + local_addr_sender, + health_updater, + )); let local_addr = match local_addr.await { Ok(addr) => addr, @@ -699,24 +452,41 @@ impl ApiBuilder { return Err(err); } }; + tasks.push(server_task); Ok(ApiServerHandles { local_addr, health_check, - tasks: vec![server_task], + tasks, }) } - #[allow(clippy::too_many_arguments)] async fn run_jsonrpsee_server( - rpc: RpcModule<()>, - transport: ApiTransport, + self, mut stop_receiver: watch::Receiver, + pub_sub: Option, + last_sealed_miniblock: SealedMiniblockNumber, local_addr_sender: oneshot::Sender, health_updater: HealthUpdater, - vm_barrier: VmConcurrencyBarrier, - batch_request_config: BatchRequestConfig, - response_body_size_limit: u32, ) -> anyhow::Result<()> { + let transport = self.transport; + let batch_request_config = self + .optional + .batch_request_size_limit + .map_or(BatchRequestConfig::Unlimited, |limit| { + BatchRequestConfig::Limit(limit as u32) + }); + let response_body_size_limit = self + .optional + .response_body_size_limit + .map_or(u32::MAX, |limit| limit as u32); + let websocket_requests_per_minute_limit = self.optional.websocket_requests_per_minute_limit; + let subscriptions_limit = self.optional.subscriptions_limit; + let vm_barrier = self.vm_barrier.clone(); + + let rpc = self + .build_rpc_module(pub_sub, last_sealed_miniblock) + .await?; + let (transport_str, is_http, addr) = match transport { ApiTransport::Http(addr) => ("HTTP", true, addr), ApiTransport::WebSocket(addr) => ("WS", false, addr), @@ -727,10 +497,10 @@ impl ApiBuilder { let cors = is_http.then(|| { CorsLayer::new() // Allow `POST` when accessing the resource - .allow_methods([hyper::Method::POST]) + .allow_methods([reqwest::Method::POST]) // Allow requests from any origin .allow_origin(tower_http::cors::Any) - .allow_headers([hyper::header::CONTENT_TYPE]) + .allow_headers([reqwest::header::CONTENT_TYPE]) }); // Setup metrics for the number of in-flight requests. let (in_flight_requests, counter) = InFlightRequestsLayer::pair(); @@ -745,24 +515,41 @@ impl ApiBuilder { .layer(in_flight_requests) .option_layer(cors); - let server_builder = if is_http { - ServerBuilder::default().http_only().max_connections(5_000) + // Settings shared by HTTP and WS servers. 
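(A note on the `InFlightRequestsLayer::pair()` call above: the layer counts requests currently being served, and the paired counter periodically emits that value, which the server feeds into a Prometheus gauge. A minimal sketch, assuming `tower-http` built with its `metrics` feature; here the count is just printed:)

```rust
use std::time::Duration;

use tower_http::metrics::InFlightRequestsLayer;

#[tokio::main]
async fn main() {
    let (in_flight_requests, counter) = InFlightRequestsLayer::pair();

    // Emit the current in-flight count every 10 seconds.
    tokio::spawn(counter.run_emitter(Duration::from_secs(10), |count| async move {
        println!("requests in flight: {count}");
    }));

    // The layer is then stacked into the HTTP middleware, as above.
    let _middleware = tower::ServiceBuilder::new().layer(in_flight_requests);
}
```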
+ let max_connections = !is_http + .then_some(subscriptions_limit) + .flatten() + .unwrap_or(5_000); + let server_builder = ServerBuilder::default() + .max_connections(max_connections as u32) + .set_http_middleware(middleware) + .max_response_body_size(response_body_size_limit) + .set_batch_request_config(batch_request_config); + + let (local_addr, server_handle) = if is_http { + // HTTP-specific settings + let server = server_builder + .http_only() + .build(addr) + .await + .context("Failed building HTTP JSON-RPC server")?; + (server.local_addr(), server.start(rpc)) } else { - ServerBuilder::default().ws_only() + // WS specific settings + let server = server_builder + .set_rpc_middleware(RpcServiceBuilder::new().layer_fn(move |a| { + LimitMiddleware::new(a, websocket_requests_per_minute_limit) + })) + .set_id_provider(EthSubscriptionIdProvider) + .build(addr) + .await + .context("Failed building WS JSON-RPC server")?; + (server.local_addr(), server.start(rpc)) }; - - let server = server_builder - .set_batch_request_config(batch_request_config) - .set_middleware(middleware) - .max_response_body_size(response_body_size_limit) - .build(addr) - .await - .with_context(|| format!("Failed building {transport_str} JSON-RPC server"))?; - let local_addr = server.local_addr().with_context(|| { + let local_addr = local_addr.with_context(|| { format!("Failed getting local address for {transport_str} JSON-RPC server") })?; local_addr_sender.send(local_addr).ok(); - let server_handle = server.start(rpc); let close_handle = server_handle.clone(); let closing_vm_barrier = vm_barrier.clone(); @@ -788,26 +575,3 @@ impl ApiBuilder { Ok(()) } } - -struct TrackOpenWsConnections; - -impl jsonrpc_ws_server::SessionStats for TrackOpenWsConnections { - fn open_session(&self, _id: jsonrpc_ws_server::SessionId) { - API_METRICS.ws_open_sessions.inc_by(1); - } - - fn close_session(&self, _id: jsonrpc_ws_server::SessionId) { - API_METRICS.ws_open_sessions.dec_by(1); - } -} - -async fn resolve_block( - connection: &mut StorageProcessor<'_>, - block: api::BlockId, - method_name: &'static str, -) -> Result { - let result = connection.blocks_web3_dal().resolve_block_id(block).await; - result - .map_err(|err| internal_error(method_name, err))? 
- .ok_or(Web3Error::NoBlock) -} diff --git a/core/lib/zksync_core/src/api_server/web3/namespaces/debug.rs b/core/lib/zksync_core/src/api_server/web3/namespaces/debug.rs index d59c25ddbe9..693ebb8f358 100644 --- a/core/lib/zksync_core/src/api_server/web3/namespaces/debug.rs +++ b/core/lib/zksync_core/src/api_server/web3/namespaces/debug.rs @@ -1,63 +1,53 @@ -use multivm::vm_latest::constants::BLOCK_GAS_LIMIT; -use once_cell::sync::OnceCell; use std::sync::Arc; -use multivm::interface::ExecutionResult; - -use zksync_dal::ConnectionPool; -use zksync_state::PostgresStorageCaches; +use multivm::{interface::ExecutionResult, vm_latest::constants::BLOCK_GAS_LIMIT}; +use once_cell::sync::OnceCell; +use zksync_system_constants::MAX_ENCODED_TX_SIZE; use zksync_types::{ api::{BlockId, BlockNumber, DebugCall, ResultDebugCall, TracerConfig}, + fee_model::BatchFeeInput, l2::L2Tx, transaction_request::CallRequest, vm_trace::Call, - AccountTreeId, L2ChainId, H256, USED_BOOTLOADER_MEMORY_BYTES, + AccountTreeId, H256, }; use zksync_web3_decl::error::Web3Error; use crate::api_server::{ - execution_sandbox::{ - execute_tx_eth_call, ApiTracer, BlockArgs, TxSharedArgs, VmConcurrencyLimiter, - }, - tx_sender::ApiContracts, - web3::{ - backend_jsonrpc::error::internal_error, - metrics::API_METRICS, - resolve_block, - state::{RpcState, SealedMiniblockNumber}, - }, + execution_sandbox::{ApiTracer, TxSharedArgs}, + tx_sender::{ApiContracts, TxSenderConfig}, + web3::{backend_jsonrpsee::internal_error, metrics::API_METRICS, state::RpcState}, }; -use crate::l1_gas_price::L1GasPriceProvider; #[derive(Debug, Clone)] pub struct DebugNamespace { - connection_pool: ConnectionPool, - fair_l2_gas_price: u64, + batch_fee_input: BatchFeeInput, + state: RpcState, api_contracts: ApiContracts, - vm_execution_cache_misses_limit: Option, - vm_concurrency_limiter: Arc, - storage_caches: PostgresStorageCaches, - last_sealed_miniblock: SealedMiniblockNumber, - chain_id: L2ChainId, } impl DebugNamespace { - pub async fn new(state: RpcState) -> Self { - let sender_config = &state.tx_sender.0.sender_config; - + pub async fn new(state: RpcState) -> Self { let api_contracts = ApiContracts::load_from_disk(); Self { - connection_pool: state.connection_pool, - fair_l2_gas_price: sender_config.fair_l2_gas_price, + // For now, the same scaling is used for both the L1 gas price and the pubdata price + batch_fee_input: state + .tx_sender + .0 + .batch_fee_input_provider + .get_batch_fee_input_scaled( + state.api_config.estimate_gas_scale_factor, + state.api_config.estimate_gas_scale_factor, + ), + state, api_contracts, - vm_execution_cache_misses_limit: sender_config.vm_execution_cache_misses_limit, - vm_concurrency_limiter: state.tx_sender.vm_concurrency_limiter(), - storage_caches: state.tx_sender.storage_caches(), - last_sealed_miniblock: state.last_sealed_miniblock, - chain_id: sender_config.chain_id, } } + fn sender_config(&self) -> &TxSenderConfig { + &self.state.tx_sender.0.sender_config + } + #[tracing::instrument(skip(self))] pub async fn debug_trace_block_impl( &self, @@ -71,17 +61,21 @@ impl DebugNamespace { .map(|options| options.tracer_config.only_top_call) .unwrap_or(false); let mut connection = self + .state .connection_pool .access_storage_tagged("api") .await - .unwrap(); - let block_number = resolve_block(&mut connection, block_id, METHOD_NAME).await?; - let call_trace = connection + .map_err(|err| internal_error(METHOD_NAME, err))?; + let block_number = self + .state + .resolve_block(&mut connection, block_id, METHOD_NAME) + 
.await?; + let call_traces = connection .blocks_web3_dal() - .get_trace_for_miniblock(block_number) + .get_trace_for_miniblock(block_number) // FIXME: is some ordering among transactions expected? .await - .unwrap(); - let call_trace = call_trace + .map_err(|err| internal_error(METHOD_NAME, err))?; + let call_trace = call_traces .into_iter() .map(|call_trace| { let mut result: DebugCall = call_trace.into(); @@ -92,7 +86,7 @@ impl DebugNamespace { }) .collect(); - let block_diff = self.last_sealed_miniblock.diff(block_number); + let block_diff = self.state.last_sealed_miniblock.diff(block_number); method_latency.observe(block_diff); Ok(call_trace) } @@ -102,25 +96,26 @@ impl DebugNamespace { &self, tx_hash: H256, options: Option, - ) -> Option { + ) -> Result, Web3Error> { + const METHOD_NAME: &str = "debug_trace_transaction"; + let only_top_call = options .map(|options| options.tracer_config.only_top_call) .unwrap_or(false); - let call_trace = self + let mut connection = self + .state .connection_pool .access_storage_tagged("api") .await - .unwrap() - .transactions_dal() - .get_call_trace(tx_hash) - .await; - call_trace.map(|call_trace| { + .map_err(|err| internal_error(METHOD_NAME, err))?; + let call_trace = connection.transactions_dal().get_call_trace(tx_hash).await; + Ok(call_trace.map(|call_trace| { let mut result: DebugCall = call_trace.into(); if only_top_call { result.calls = vec![]; } result - }) + })) } #[tracing::instrument(skip(self, request, block_id))] @@ -139,20 +134,26 @@ impl DebugNamespace { .unwrap_or(false); let mut connection = self + .state .connection_pool .access_storage_tagged("api") .await - .unwrap(); - let block_args = BlockArgs::new(&mut connection, block_id) - .await - .map_err(|err| internal_error("debug_trace_call", err))? - .ok_or(Web3Error::NoBlock)?; + .map_err(|err| internal_error(METHOD_NAME, err))?; + let block_args = self + .state + .resolve_block_args(&mut connection, block_id, METHOD_NAME) + .await?; drop(connection); - let tx = L2Tx::from_request(request.into(), USED_BOOTLOADER_MEMORY_BYTES)?; + let tx = L2Tx::from_request(request.into(), MAX_ENCODED_TX_SIZE)?; let shared_args = self.shared_args(); - let vm_permit = self.vm_concurrency_limiter.acquire().await; + let vm_permit = self + .state + .tx_sender + .vm_concurrency_limiter() + .acquire() + .await; let vm_permit = vm_permit.ok_or(Web3Error::InternalError)?; // We don't need properly trace if we only need top call @@ -163,16 +164,18 @@ impl DebugNamespace { vec![ApiTracer::CallTracer(call_tracer_result.clone())] }; - let result = execute_tx_eth_call( - vm_permit, - shared_args, - self.connection_pool.clone(), - tx.clone(), - block_args, - self.vm_execution_cache_misses_limit, - custom_tracers, - ) - .await; + let executor = &self.state.tx_sender.0.executor; + let result = executor + .execute_tx_eth_call( + vm_permit, + shared_args, + self.state.connection_pool.clone(), + tx.clone(), + block_args, + self.sender_config().vm_execution_cache_misses_limit, + custom_tracers, + ) + .await; let (output, revert_reason) = match result.result { ExecutionResult::Success { output, .. 
} => (output, None), @@ -200,20 +203,23 @@ impl DebugNamespace { trace, ); - let block_diff = self.last_sealed_miniblock.diff_with_block_args(&block_args); + let block_diff = self + .state + .last_sealed_miniblock + .diff_with_block_args(&block_args); method_latency.observe(block_diff); Ok(call.into()) } fn shared_args(&self) -> TxSharedArgs { + let sender_config = self.sender_config(); TxSharedArgs { operator_account: AccountTreeId::default(), - l1_gas_price: 100_000, - fair_l2_gas_price: self.fair_l2_gas_price, + fee_input: self.batch_fee_input, base_system_contracts: self.api_contracts.eth_call.clone(), - caches: self.storage_caches.clone(), + caches: self.state.tx_sender.storage_caches().clone(), validation_computational_gas_limit: BLOCK_GAS_LIMIT, - chain_id: self.chain_id, + chain_id: sender_config.chain_id, } } } diff --git a/core/lib/zksync_core/src/api_server/web3/namespaces/en.rs b/core/lib/zksync_core/src/api_server/web3/namespaces/en.rs index b43f5523938..92781ae8f68 100644 --- a/core/lib/zksync_core/src/api_server/web3/namespaces/en.rs +++ b/core/lib/zksync_core/src/api_server/web3/namespaces/en.rs @@ -1,28 +1,17 @@ use zksync_types::{api::en::SyncBlock, MiniblockNumber}; use zksync_web3_decl::error::Web3Error; -use crate::{ - api_server::{web3::backend_jsonrpc::error::internal_error, web3::state::RpcState}, - l1_gas_price::L1GasPriceProvider, -}; +use crate::api_server::web3::{backend_jsonrpsee::internal_error, state::RpcState}; /// Namespace for External Node unique methods. /// Main use case for it is the EN synchronization. #[derive(Debug)] -pub struct EnNamespace { - pub state: RpcState, +pub struct EnNamespace { + state: RpcState, } -impl Clone for EnNamespace { - fn clone(&self) -> Self { - Self { - state: self.state.clone(), - } - } -} - -impl EnNamespace { - pub fn new(state: RpcState) -> Self { +impl EnNamespace { + pub fn new(state: RpcState) -> Self { Self { state } } @@ -32,12 +21,14 @@ impl EnNamespace { block_number: MiniblockNumber, include_transactions: bool, ) -> Result, Web3Error> { + const METHOD_NAME: &str = "en_syncL2Block"; + let mut storage = self .state .connection_pool .access_storage_tagged("api") .await - .unwrap(); + .map_err(|err| internal_error(METHOD_NAME, err))?; storage .sync_dal() .sync_block( @@ -46,6 +37,6 @@ impl EnNamespace { include_transactions, ) .await - .map_err(|err| internal_error("en_syncL2Block", err)) + .map_err(|err| internal_error(METHOD_NAME, err)) } } diff --git a/core/lib/zksync_core/src/api_server/web3/namespaces/eth.rs b/core/lib/zksync_core/src/api_server/web3/namespaces/eth.rs index 4cabb8e15da..da6df61e1e8 100644 --- a/core/lib/zksync_core/src/api_server/web3/namespaces/eth.rs +++ b/core/lib/zksync_core/src/api_server/web3/namespaces/eth.rs @@ -1,3 +1,4 @@ +use zksync_system_constants::DEFAULT_L2_TX_GAS_PER_PUBDATA_BYTE; use zksync_types::{ api::{ BlockId, BlockNumber, GetLogsFilter, Transaction, TransactionId, TransactionReceipt, @@ -8,8 +9,7 @@ use zksync_types::{ utils::decompose_full_nonce, web3, web3::types::{FeeHistory, SyncInfo, SyncState}, - AccountTreeId, Bytes, MiniblockNumber, StorageKey, H256, L2_ETH_TOKEN_ADDRESS, - MAX_GAS_PER_PUBDATA_BYTE, U256, + AccountTreeId, Bytes, MiniblockNumber, StorageKey, H256, L2_ETH_TOKEN_ADDRESS, U256, }; use zksync_utils::u256_to_h256; use zksync_web3_decl::{ @@ -17,38 +17,23 @@ use zksync_web3_decl::{ types::{Address, Block, Filter, FilterChanges, Log, U64}, }; -use crate::{ - api_server::{ - execution_sandbox::BlockArgs, - web3::{ - backend_jsonrpc::error::internal_error, 
- metrics::{BlockCallObserver, API_METRICS}, - resolve_block, - state::RpcState, - TypedFilter, - }, - }, - l1_gas_price::L1GasPriceProvider, +use crate::api_server::web3::{ + backend_jsonrpsee::internal_error, + metrics::{BlockCallObserver, API_METRICS}, + state::RpcState, + TypedFilter, }; pub const EVENT_TOPIC_NUMBER_LIMIT: usize = 4; pub const PROTOCOL_VERSION: &str = "zks/1"; #[derive(Debug)] -pub struct EthNamespace { - state: RpcState, -} - -impl Clone for EthNamespace { - fn clone(&self) -> Self { - Self { - state: self.state.clone(), - } - } +pub struct EthNamespace { + state: RpcState, } -impl EthNamespace { - pub fn new(state: RpcState) -> Self { +impl EthNamespace { + pub fn new(state: RpcState) -> Self { Self { state } } @@ -57,20 +42,21 @@ impl EthNamespace { const METHOD_NAME: &str = "get_block_number"; let method_latency = API_METRICS.start_call(METHOD_NAME); - let block_number = self + let mut storage = self .state .connection_pool .access_storage_tagged("api") .await - .unwrap() - .blocks_web3_dal() + .map_err(|err| internal_error(METHOD_NAME, err))?; + let block_number = storage + .blocks_dal() .get_sealed_miniblock_number() .await - .map(|n| U64::from(n.0)) - .map_err(|err| internal_error(METHOD_NAME, err)); + .map_err(|err| internal_error(METHOD_NAME, err))? + .ok_or(Web3Error::NoBlock)?; method_latency.observe(); - block_number + Ok(block_number.0.into()) } #[tracing::instrument(skip(self, request, block_id))] @@ -88,11 +74,12 @@ impl EthNamespace { .connection_pool .access_storage_tagged("api") .await - .unwrap(); - let block_args = BlockArgs::new(&mut connection, block_id) - .await - .map_err(|err| internal_error("eth_call", err))? - .ok_or(Web3Error::NoBlock)?; + .map_err(|err| internal_error(METHOD_NAME, err))?; + let block_args = self + .state + .resolve_block_args(&mut connection, block_id, METHOD_NAME) + .await?; + drop(connection); let tx = L2Tx::from_request(request.into(), self.state.api_config.max_tx_size)?; @@ -125,7 +112,7 @@ impl EthNamespace { if let Some(ref mut eip712_meta) = request_with_gas_per_pubdata_overridden.eip712_meta { if eip712_meta.gas_per_pubdata == U256::zero() { - eip712_meta.gas_per_pubdata = MAX_GAS_PER_PUBDATA_BYTE.into(); + eip712_meta.gas_per_pubdata = DEFAULT_L2_TX_GAS_PER_PUBDATA_BYTE.into(); } } @@ -147,7 +134,7 @@ impl EthNamespace { // When we're estimating fee, we are trying to deduce values related to fee, so we should // not consider provided ones. 
- tx.common_data.fee.max_fee_per_gas = self.state.tx_sender.gas_price().into(); + tx.common_data.fee.max_fee_per_gas = self.state.tx_sender.gas_price().await.into(); tx.common_data.fee.max_priority_fee_per_gas = tx.common_data.fee.max_fee_per_gas; // Modify the l1 gas price with the scale factor @@ -167,11 +154,11 @@ impl EthNamespace { } #[tracing::instrument(skip(self))] - pub fn gas_price_impl(&self) -> Result { + pub async fn gas_price_impl(&self) -> Result { const METHOD_NAME: &str = "gas_price"; let method_latency = API_METRICS.start_call(METHOD_NAME); - let price = self.state.tx_sender.gas_price(); + let price = self.state.tx_sender.gas_price().await; method_latency.observe(); Ok(price.into()) } @@ -191,8 +178,11 @@ impl EthNamespace { .connection_pool .access_storage_tagged("api") .await - .unwrap(); - let block_number = resolve_block(&mut connection, block_id, METHOD_NAME).await?; + .map_err(|err| internal_error(METHOD_NAME, err))?; + let block_number = self + .state + .resolve_block(&mut connection, block_id, METHOD_NAME) + .await?; let balance = connection .storage_web3_dal() .standard_token_historical_balance( @@ -220,10 +210,6 @@ impl EthNamespace { pub async fn get_logs_impl(&self, mut filter: Filter) -> Result, Web3Error> { const METHOD_NAME: &str = "get_logs"; - if self.state.logs_translator_enabled { - return self.state.translate_get_logs(filter).await; - } - let method_latency = API_METRICS.start_call(METHOD_NAME); self.state.resolve_filter_block_hash(&mut filter).await?; let (from_block, to_block) = self.state.resolve_filter_block_range(&filter).await?; @@ -283,12 +269,13 @@ impl EthNamespace { }; let method_latency = API_METRICS.start_block_call(method_name, block_id); + self.state.start_info.ensure_not_pruned(block_id)?; let block = self .state .connection_pool .access_storage_tagged("api") .await - .unwrap() + .map_err(|err| internal_error(method_name, err))? .blocks_web3_dal() .get_block_by_web3_block_id( block_id, @@ -315,12 +302,13 @@ impl EthNamespace { const METHOD_NAME: &str = "get_block_transaction_count"; let method_latency = API_METRICS.start_block_call(METHOD_NAME, block_id); + self.state.start_info.ensure_not_pruned(block_id)?; let tx_count = self .state .connection_pool .access_storage_tagged("api") .await - .unwrap() + .map_err(|err| internal_error(METHOD_NAME, err))? 
.blocks_web3_dal() .get_block_tx_count(block_id) .await @@ -350,7 +338,10 @@ impl EthNamespace { .access_storage_tagged("api") .await .unwrap(); - let block_number = resolve_block(&mut connection, block_id, METHOD_NAME).await?; + let block_number = self + .state + .resolve_block(&mut connection, block_id, METHOD_NAME) + .await?; let contract_code = connection .storage_web3_dal() .get_contract_code_unchecked(address, block_number) @@ -384,7 +375,10 @@ impl EthNamespace { .access_storage_tagged("api") .await .unwrap(); - let block_number = resolve_block(&mut connection, block_id, METHOD_NAME).await?; + let block_number = self + .state + .resolve_block(&mut connection, block_id, METHOD_NAME) + .await?; let value = connection .storage_web3_dal() .get_historical_value_unchecked(&storage_key, block_number) @@ -416,35 +410,34 @@ impl EthNamespace { .await .unwrap(); - let (full_nonce, block_number) = match block_id { - BlockId::Number(BlockNumber::Pending) => { - let nonce = connection - .transactions_web3_dal() - .next_nonce_by_initiator_account(address) - .await - .map_err(|err| internal_error(method_name, err)); - (nonce, None) - } - _ => { - let block_number = resolve_block(&mut connection, block_id, method_name).await?; - let nonce = connection - .storage_web3_dal() - .get_address_historical_nonce(address, block_number) - .await - .map_err(|err| internal_error(method_name, err)); - (nonce, Some(block_number)) - } - }; + let block_number = self + .state + .resolve_block(&mut connection, block_id, method_name) + .await?; + let full_nonce = connection + .storage_web3_dal() + .get_address_historical_nonce(address, block_number) + .await + .map_err(|err| internal_error(method_name, err))?; // TODO (SMA-1612): currently account nonce is returning always, but later we will // return account nonce for account abstraction and deployment nonce for non account abstraction. // Strip off deployer nonce part. - let account_nonce = full_nonce.map(|nonce| decompose_full_nonce(nonce).0); + let (mut account_nonce, _) = decompose_full_nonce(full_nonce); + + if matches!(block_id, BlockId::Number(BlockNumber::Pending)) { + let account_nonce_u64 = u64::try_from(account_nonce) + .map_err(|err| internal_error(method_name, anyhow::anyhow!(err)))?; + account_nonce = connection + .transactions_web3_dal() + .next_nonce_by_initiator_account(address, account_nonce_u64) + .await + .map_err(|err| internal_error(method_name, err))?; + } - let block_diff = - block_number.map_or(0, |number| self.state.last_sealed_miniblock.diff(number)); + let block_diff = self.state.last_sealed_miniblock.diff(block_number); method_latency.observe(block_diff); - account_nonce + Ok(account_nonce) } #[tracing::instrument(skip(self))] @@ -501,7 +494,7 @@ impl EthNamespace { const METHOD_NAME: &str = "get_transaction_receipt"; let method_latency = API_METRICS.start_call(METHOD_NAME); - let mut receipt = self + let receipt = self .state .connection_pool .access_storage_tagged("api") @@ -512,31 +505,6 @@ impl EthNamespace { .await .map_err(|err| internal_error(METHOD_NAME, err)); - if let Some(proxy) = &self.state.tx_sender.0.proxy { - // We're running an external node - if matches!(receipt, Ok(None)) { - // If the transaction is not in the db, query main node. - // Because it might be the case that it got rejected in state keeper - // and won't be synced back to us, but we still want to return a receipt. 
- // We want to only forward these kinds of receipts because otherwise - // clients will assume that the transaction they got the receipt for - // was already processed on the EN (when it was not), - // and will think that the state has already been updated on the EN (when it was not). - if let Ok(Some(main_node_receipt)) = proxy - .request_tx_receipt(hash) - .await - .map_err(|err| internal_error(METHOD_NAME, err)) - { - if main_node_receipt.status == Some(0.into()) - && main_node_receipt.block_number.is_none() - { - // Transaction was rejected in state-keeper. - receipt = Ok(Some(main_node_receipt)); - } - } - } - } - method_latency.observe(); receipt } @@ -546,25 +514,30 @@ impl EthNamespace { const METHOD_NAME: &str = "new_block_filter"; let method_latency = API_METRICS.start_call(METHOD_NAME); - let mut conn = self + let mut storage = self .state .connection_pool .access_storage_tagged("api") .await .map_err(|err| internal_error(METHOD_NAME, err))?; - let last_block_number = conn - .blocks_web3_dal() + let last_block_number = storage + .blocks_dal() .get_sealed_miniblock_number() .await .map_err(|err| internal_error(METHOD_NAME, err))?; - drop(conn); + let next_block_number = match last_block_number { + Some(number) => number + 1, + // If we don't have miniblocks in the storage, use the first projected miniblock number as the cursor + None => self.state.start_info.first_miniblock, + }; + drop(storage); let idx = self .state .installed_filters .lock() .await - .add(TypedFilter::Blocks(last_block_number)); + .add(TypedFilter::Blocks(next_block_number)); method_latency.observe(); Ok(idx) } @@ -724,8 +697,10 @@ impl EthNamespace { .access_storage_tagged("api") .await .unwrap(); - let newest_miniblock = - resolve_block(&mut connection, BlockId::Number(newest_block), METHOD_NAME).await?; + let newest_miniblock = self + .state + .resolve_block(&mut connection, BlockId::Number(newest_block), METHOD_NAME) + .await?; let mut base_fee_per_gas = connection .blocks_web3_dal() @@ -773,14 +748,19 @@ impl EthNamespace { .map_err(|err| internal_error(METHOD_NAME, err))?; let (block_hashes, last_block_number) = conn .blocks_web3_dal() - .get_block_hashes_after(*from_block, self.state.api_config.req_entities_limit) + .get_block_hashes_since(*from_block, self.state.api_config.req_entities_limit) .await .map_err(|err| internal_error(METHOD_NAME, err))?; - *from_block = last_block_number.unwrap_or(*from_block); + + *from_block = match last_block_number { + Some(last_block_number) => last_block_number + 1, + None => *from_block, + }; + FilterChanges::Hashes(block_hashes) } - TypedFilter::PendingTransactions(from_timestamp) => { + TypedFilter::PendingTransactions(from_timestamp_excluded) => { let mut conn = self .state .connection_pool @@ -790,12 +770,14 @@ impl EthNamespace { let (tx_hashes, last_timestamp) = conn .transactions_web3_dal() .get_pending_txs_hashes_after( - *from_timestamp, + *from_timestamp_excluded, Some(self.state.api_config.req_entities_limit), ) .await .map_err(|err| internal_error(METHOD_NAME, err))?; - *from_timestamp = last_timestamp.unwrap_or(*from_timestamp); + + *from_timestamp_excluded = last_timestamp.unwrap_or(*from_timestamp_excluded); + FilterChanges::Hashes(tx_hashes) } @@ -816,16 +798,26 @@ impl EthNamespace { } else { vec![] }; + + let mut to_block = self + .state + .resolve_filter_block_number(filter.to_block) + .await?; + + if matches!(filter.to_block, Some(BlockNumber::Number(_))) { + to_block = to_block.min( + self.state + 
.resolve_filter_block_number(Some(BlockNumber::Latest)) + .await?, + ); + } + let get_logs_filter = GetLogsFilter { from_block: *from_block, - to_block: filter.to_block, + to_block, addresses, topics, }; - let to_block = self - .state - .resolve_filter_block_number(filter.to_block) - .await?; let mut storage = self .state @@ -859,11 +851,7 @@ impl EthNamespace { .get_logs(get_logs_filter, i32::MAX as usize) .await .map_err(|err| internal_error(METHOD_NAME, err))?; - *from_block = logs - .last() - .map(|log| MiniblockNumber(log.block_number.unwrap().as_u32())) - .unwrap_or(*from_block); - // FIXME: why is `from_block` not updated? + *from_block = to_block + 1; FilterChanges::Logs(logs) } }; @@ -876,7 +864,7 @@ impl EthNamespace { // They are moved into a separate `impl` block so they don't make the actual implementation noisy. // This `impl` block contains methods that we *have* to implement for compliance, but don't really // make sense in terms of L2. -impl EthNamespace { +impl EthNamespace { pub fn coinbase_impl(&self) -> Address { // There is no coinbase account. Address::default() diff --git a/core/lib/zksync_core/src/api_server/web3/namespaces/eth_subscribe.rs b/core/lib/zksync_core/src/api_server/web3/namespaces/eth_subscribe.rs deleted file mode 100644 index 208318ec3e1..00000000000 --- a/core/lib/zksync_core/src/api_server/web3/namespaces/eth_subscribe.rs +++ /dev/null @@ -1,147 +0,0 @@ -use jsonrpc_core::error::{Error, ErrorCode}; -use jsonrpc_pubsub::{typed, SubscriptionId}; -use tokio::sync::RwLock; - -use std::{collections::HashMap, sync::Arc}; - -use zksync_types::web3::types::H128; -use zksync_web3_decl::types::{PubSubFilter, PubSubResult}; - -use super::eth::EVENT_TOPIC_NUMBER_LIMIT; -use crate::api_server::web3::metrics::{SubscriptionType, PUB_SUB_METRICS}; - -pub type SubscriptionMap = Arc>>; - -#[derive(Debug, Clone)] -pub struct EthSubscribe { - // `jsonrpc` backend executes task subscription on a separate thread that has no tokio context. - pub runtime_handle: tokio::runtime::Handle, - pub active_block_subs: SubscriptionMap>, - pub active_tx_subs: SubscriptionMap>, - pub active_log_subs: SubscriptionMap<(typed::Sink, PubSubFilter)>, -} - -impl EthSubscribe { - pub fn new(runtime_handle: tokio::runtime::Handle) -> Self { - Self { - runtime_handle, - active_block_subs: SubscriptionMap::default(), - active_tx_subs: SubscriptionMap::default(), - active_log_subs: SubscriptionMap::default(), - } - } - - /// Assigns ID for the subscriber if the connection is open, returns error otherwise. 
- fn assign_id( - subscriber: typed::Subscriber, - ) -> Result<(typed::Sink, SubscriptionId), ()> { - let id = H128::random(); - let sub_id = SubscriptionId::String(format!("0x{}", hex::encode(id.0))); - let sink = subscriber.assign_id(sub_id.clone())?; - Ok((sink, sub_id)) - } - - fn reject(subscriber: typed::Subscriber) { - subscriber - .reject(Error { - code: ErrorCode::InvalidParams, - message: "Rejecting subscription - invalid parameters provided.".into(), - data: None, - }) - .unwrap(); - } - - #[tracing::instrument(skip(self, subscriber, params))] - pub async fn sub( - &self, - subscriber: typed::Subscriber, - sub_type: String, - params: Option, - ) { - let sub_type = match sub_type.as_str() { - "newHeads" => { - let mut block_subs = self.active_block_subs.write().await; - let Ok((sink, id)) = Self::assign_id(subscriber) else { - return; - }; - block_subs.insert(id, sink); - Some(SubscriptionType::Blocks) - } - "newPendingTransactions" => { - let mut tx_subs = self.active_tx_subs.write().await; - let Ok((sink, id)) = Self::assign_id(subscriber) else { - return; - }; - tx_subs.insert(id, sink); - Some(SubscriptionType::Txs) - } - "logs" => { - let filter = params.map(serde_json::from_value).transpose(); - match filter { - Ok(filter) => { - let filter: PubSubFilter = filter.unwrap_or_default(); - if filter - .topics - .as_ref() - .map(|topics| topics.len()) - .unwrap_or(0) - > EVENT_TOPIC_NUMBER_LIMIT - { - Self::reject(subscriber); - None - } else { - let mut log_subs = self.active_log_subs.write().await; - let Ok((sink, id)) = Self::assign_id(subscriber) else { - return; - }; - log_subs.insert(id, (sink, filter)); - Some(SubscriptionType::Logs) - } - } - Err(_) => { - Self::reject(subscriber); - None - } - } - } - "syncing" => { - let Ok((sink, _id)) = Self::assign_id(subscriber) else { - return; - }; - let _ = sink.notify(Ok(PubSubResult::Syncing(false))); - None - } - _ => { - Self::reject(subscriber); - None - } - }; - - if let Some(sub_type) = sub_type { - PUB_SUB_METRICS.active_subscribers[&sub_type].inc_by(1); - } - } - - #[tracing::instrument(skip(self))] - pub async fn unsub(&self, id: SubscriptionId) -> Result { - let removed = if self.active_block_subs.write().await.remove(&id).is_some() { - Some(SubscriptionType::Blocks) - } else if self.active_tx_subs.write().await.remove(&id).is_some() { - Some(SubscriptionType::Txs) - } else if self.active_log_subs.write().await.remove(&id).is_some() { - Some(SubscriptionType::Logs) - } else { - None - }; - if let Some(sub_type) = removed { - PUB_SUB_METRICS.active_subscribers[&sub_type].dec_by(1); - Ok(true) - } else { - Err(Error { - code: ErrorCode::InvalidParams, - message: "Invalid subscription.".into(), - data: None, - }) - } - } -} diff --git a/core/lib/zksync_core/src/api_server/web3/namespaces/mod.rs b/core/lib/zksync_core/src/api_server/web3/namespaces/mod.rs index 9792fed5edc..e1b77d381da 100644 --- a/core/lib/zksync_core/src/api_server/web3/namespaces/mod.rs +++ b/core/lib/zksync_core/src/api_server/web3/namespaces/mod.rs @@ -4,17 +4,12 @@ mod debug; mod en; pub(crate) mod eth; -mod eth_subscribe; mod net; +mod snapshots; mod web3; mod zks; pub use self::{ - debug::DebugNamespace, - en::EnNamespace, - eth::EthNamespace, - eth_subscribe::{EthSubscribe, SubscriptionMap}, - net::NetNamespace, - web3::Web3Namespace, - zks::ZksNamespace, + debug::DebugNamespace, en::EnNamespace, eth::EthNamespace, net::NetNamespace, + snapshots::SnapshotsNamespace, web3::Web3Namespace, zks::ZksNamespace, }; diff --git 
a/core/lib/zksync_core/src/api_server/web3/namespaces/snapshots.rs b/core/lib/zksync_core/src/api_server/web3/namespaces/snapshots.rs new file mode 100644 index 00000000000..b45fe9a472e --- /dev/null +++ b/core/lib/zksync_core/src/api_server/web3/namespaces/snapshots.rs @@ -0,0 +1,108 @@ +use zksync_types::{ + snapshots::{AllSnapshots, SnapshotHeader, SnapshotStorageLogsChunkMetadata}, + L1BatchNumber, +}; +use zksync_web3_decl::error::Web3Error; + +use crate::api_server::web3::{ + backend_jsonrpsee::internal_error, metrics::API_METRICS, state::RpcState, +}; + +#[derive(Debug, Clone)] +pub struct SnapshotsNamespace { + state: RpcState, +} + +impl SnapshotsNamespace { + pub fn new(state: RpcState) -> Self { + Self { state } + } + + pub async fn get_all_snapshots_impl(&self) -> Result { + let method_name = "get_all_snapshots"; + let method_latency = API_METRICS.start_call(method_name); + let mut storage_processor = self + .state + .connection_pool + .access_storage_tagged("api") + .await + .map_err(|err| internal_error(method_name, err))?; + let mut snapshots_dal = storage_processor.snapshots_dal(); + let response = snapshots_dal + .get_all_complete_snapshots() + .await + .map_err(|err| internal_error(method_name, err)); + method_latency.observe(); + response + } + + pub async fn get_snapshot_by_l1_batch_number_impl( + &self, + l1_batch_number: L1BatchNumber, + ) -> Result, Web3Error> { + let method_name = "get_snapshot_by_l1_batch_number"; + let method_latency = API_METRICS.start_call(method_name); + let mut storage_processor = self + .state + .connection_pool + .access_storage_tagged("api") + .await + .map_err(|err| internal_error(method_name, err))?; + let snapshot_metadata = storage_processor + .snapshots_dal() + .get_snapshot_metadata(l1_batch_number) + .await + .map_err(|err| internal_error(method_name, err))?; + + let Some(snapshot_metadata) = snapshot_metadata else { + method_latency.observe(); + return Ok(None); + }; + + let snapshot_files = snapshot_metadata.storage_logs_filepaths; + let is_complete = snapshot_files.iter().all(Option::is_some); + if !is_complete { + // We don't return incomplete snapshots via API. + method_latency.observe(); + return Ok(None); + } + + let chunks = snapshot_files + .into_iter() + .enumerate() + .filter_map(|(chunk_id, filepath)| { + Some(SnapshotStorageLogsChunkMetadata { + chunk_id: chunk_id as u64, + filepath: filepath?, + }) + }) + .collect(); + let l1_batch_with_metadata = storage_processor + .blocks_dal() + .get_l1_batch_metadata(l1_batch_number) + .await + .map_err(|err| internal_error(method_name, err))? + .ok_or_else(|| { + let err = format!("missing metadata for L1 batch #{l1_batch_number}"); + internal_error(method_name, err) + })?; + let (_, miniblock_number) = storage_processor + .blocks_dal() + .get_miniblock_range_of_l1_batch(l1_batch_number) + .await + .map_err(|err| internal_error(method_name, err))? 
+ .ok_or_else(|| { + let err = format!("missing miniblocks for L1 batch #{l1_batch_number}"); + internal_error(method_name, err) + })?; + + method_latency.observe(); + Ok(Some(SnapshotHeader { + l1_batch_number: snapshot_metadata.l1_batch_number, + miniblock_number, + last_l1_batch_with_metadata: l1_batch_with_metadata, + storage_logs_chunks: chunks, + factory_deps_filepath: snapshot_metadata.factory_deps_filepath, + })) + } +} diff --git a/core/lib/zksync_core/src/api_server/web3/namespaces/zks.rs b/core/lib/zksync_core/src/api_server/web3/namespaces/zks.rs index 7127c5753dc..2727dc19da0 100644 --- a/core/lib/zksync_core/src/api_server/web3/namespaces/zks.rs +++ b/core/lib/zksync_core/src/api_server/web3/namespaces/zks.rs @@ -2,51 +2,53 @@ use std::{collections::HashMap, convert::TryInto, fs::File, io::BufReader, str:: use bigdecimal::{BigDecimal, Zero}; use zksync_dal::StorageProcessor; - use zksync_mini_merkle_tree::MiniMerkleTree; +use zksync_system_constants::DEFAULT_L2_TX_GAS_PER_PUBDATA_BYTE; use zksync_types::{ api::{ BlockDetails, BridgeAddresses, GetLogsFilter, L1BatchDetails, L2ToL1LogProof, Proof, ProtocolVersion, StorageProof, TransactionDetails, }, fee::Fee, + fee_model::FeeParams, l1::L1Tx, l2::L2Tx, l2_to_l1_log::L2ToL1Log, tokens::ETHEREUM_ADDRESS, transaction_request::CallRequest, AccountTreeId, L1BatchNumber, MiniblockNumber, StorageKey, Transaction, H160, - L1_MESSENGER_ADDRESS, L2_ETH_TOKEN_ADDRESS, MAX_GAS_PER_PUBDATA_BYTE, - REQUIRED_L1_TO_L2_GAS_PER_PUBDATA_BYTE, U256, U64, + L1_MESSENGER_ADDRESS, L2_ETH_TOKEN_ADDRESS, REQUIRED_L1_TO_L2_GAS_PER_PUBDATA_BYTE, U256, U64, }; use zksync_utils::{address_to_h256, ratio_to_big_decimal_normalized}; use zksync_web3_decl::{ error::Web3Error, - types::{Address, Filter, Log, Token, H256}, + types::{Address, Token, H256}, }; use crate::api_server::{ tree::TreeApiClient, - web3::{backend_jsonrpc::error::internal_error, metrics::API_METRICS, RpcState}, + web3::{backend_jsonrpsee::internal_error, metrics::API_METRICS, RpcState}, }; -use crate::l1_gas_price::L1GasPriceProvider; #[derive(Debug)] -pub struct ZksNamespace { - pub state: RpcState, +pub struct ZksNamespace { + pub state: RpcState, } -impl Clone for ZksNamespace { - fn clone(&self) -> Self { - Self { - state: self.state.clone(), - } +impl ZksNamespace { + pub fn new(state: RpcState) -> Self { + Self { state } } -} -impl ZksNamespace { - pub fn new(state: RpcState) -> Self { - Self { state } + async fn access_storage( + &self, + method_name: &'static str, + ) -> Result, Web3Error> { + self.state + .connection_pool + .access_storage_tagged("api") + .await + .map_err(|err| internal_error(method_name, err)) } #[tracing::instrument(skip(self, request))] @@ -61,7 +63,7 @@ impl ZksNamespace { .await?; if let Some(ref mut eip712_meta) = request_with_gas_per_pubdata_overridden.eip712_meta { - eip712_meta.gas_per_pubdata = MAX_GAS_PER_PUBDATA_BYTE.into(); + eip712_meta.gas_per_pubdata = U256::from(DEFAULT_L2_TX_GAS_PER_PUBDATA_BYTE); } let mut tx = L2Tx::from_request( @@ -72,7 +74,7 @@ impl ZksNamespace { // When we're estimating fee, we are trying to deduce values related to fee, so we should // not consider provided ones. 
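Concretely, the statements that follow zero out the caller's priority fee and pin `gas_per_pubdata_limit` to the default, so the estimate cannot be skewed by whatever the caller sent. A minimal standalone sketch of that normalization pattern (simplified integer types; `DEFAULT_GAS_PER_PUBDATA` below is a placeholder, while the real constant is `DEFAULT_L2_TX_GAS_PER_PUBDATA_BYTE` from `zksync_system_constants`):

```rust
/// Simplified stand-in for the protocol's fee struct (not the real type).
#[derive(Debug)]
struct Fee {
    max_priority_fee_per_gas: u128,
    gas_per_pubdata_limit: u128,
}

/// Placeholder value; the real default lives in `zksync_system_constants`.
const DEFAULT_GAS_PER_PUBDATA: u128 = 50_000;

/// Overwrite caller-supplied fee fields so the estimate doesn't depend on them.
fn normalize_for_estimation(fee: &mut Fee) {
    fee.max_priority_fee_per_gas = 0;
    fee.gas_per_pubdata_limit = DEFAULT_GAS_PER_PUBDATA;
}

fn main() {
    let mut fee = Fee { max_priority_fee_per_gas: 7, gas_per_pubdata_limit: 1 };
    normalize_for_estimation(&mut fee);
    assert_eq!(fee.max_priority_fee_per_gas, 0);
    assert_eq!(fee.gas_per_pubdata_limit, DEFAULT_GAS_PER_PUBDATA);
    println!("normalized: {fee:?}");
}
```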
tx.common_data.fee.max_priority_fee_per_gas = 0u64.into(); - tx.common_data.fee.gas_per_pubdata_limit = MAX_GAS_PER_PUBDATA_BYTE.into(); + tx.common_data.fee.gas_per_pubdata_limit = U256::from(DEFAULT_L2_TX_GAS_PER_PUBDATA_BYTE); let fee = self.estimate_fee(tx.into()).await?; method_latency.observe(); @@ -177,16 +179,14 @@ impl ZksNamespace { const METHOD_NAME: &str = "get_confirmed_tokens"; let method_latency = API_METRICS.start_call(METHOD_NAME); - let tokens = self - .state - .connection_pool - .access_storage_tagged("api") - .await - .unwrap() + let mut storage = self.access_storage(METHOD_NAME).await?; + let tokens = storage .tokens_web3_dal() .get_well_known_tokens() .await - .map_err(|err| internal_error(METHOD_NAME, err))? + .map_err(|err| internal_error(METHOD_NAME, err))?; + + let tokens = tokens .into_iter() .skip(from as usize) .take(limit.into()) @@ -198,7 +198,6 @@ impl ZksNamespace { decimals: token_info.metadata.decimals, }) .collect(); - method_latency.observe(); Ok(tokens) } @@ -216,12 +215,7 @@ impl ZksNamespace { let method_latency = API_METRICS.start_call(METHOD_NAME); let token_price_result = { - let mut storage = self - .state - .connection_pool - .access_storage_tagged("api") - .await - .unwrap(); + let mut storage = self.access_storage(METHOD_NAME).await?; storage.tokens_web3_dal().get_token_price(&l2_token).await }; @@ -247,16 +241,14 @@ impl ZksNamespace { const METHOD_NAME: &str = "get_all_balances"; let method_latency = API_METRICS.start_call(METHOD_NAME); - let balances = self - .state - .connection_pool - .access_storage_tagged("api") - .await - .unwrap() + let mut storage = self.access_storage(METHOD_NAME).await?; + let balances = storage .accounts_dal() .get_balances_for_address(address) .await - .map_err(|err| internal_error(METHOD_NAME, err))? + .map_err(|err| internal_error(METHOD_NAME, err))?; + + let balances = balances .into_iter() .map(|(address, balance)| { if address == L2_ETH_TOKEN_ADDRESS { @@ -266,7 +258,6 @@ impl ZksNamespace { } }) .collect(); - method_latency.observe(); Ok(balances) } @@ -282,20 +273,15 @@ impl ZksNamespace { const METHOD_NAME: &str = "get_l2_to_l1_msg_proof"; let method_latency = API_METRICS.start_call(METHOD_NAME); - let mut storage = self - .state - .connection_pool - .access_storage_tagged("api") - .await - .unwrap(); - let l1_batch_number = match storage + self.state.start_info.ensure_not_pruned(block_number)?; + let mut storage = self.access_storage(METHOD_NAME).await?; + let Some(l1_batch_number) = storage .blocks_web3_dal() .get_l1_batch_number_of_miniblock(block_number) .await .map_err(|err| internal_error(METHOD_NAME, err))? 
- { - Some(number) => number, - None => return Ok(None), + else { + return Ok(None); }; let (first_miniblock_of_l1_batch, _) = storage .blocks_web3_dal() @@ -311,7 +297,7 @@ impl ZksNamespace { .get_logs( GetLogsFilter { from_block: first_miniblock_of_l1_batch, - to_block: Some(block_number.0.into()), + to_block: block_number, addresses: vec![L1_MESSENGER_ADDRESS], topics: vec![(2, vec![address_to_h256(&sender)]), (3, vec![msg])], }, @@ -411,12 +397,7 @@ impl ZksNamespace { const METHOD_NAME: &str = "get_l2_to_l1_msg_proof"; let method_latency = API_METRICS.start_call(METHOD_NAME); - let mut storage = self - .state - .connection_pool - .access_storage_tagged("api") - .await - .unwrap(); + let mut storage = self.access_storage(METHOD_NAME).await?; let Some((l1_batch_number, l1_batch_tx_index)) = storage .blocks_web3_dal() .get_l1_batch_info_for_tx(tx_hash) @@ -445,20 +426,16 @@ impl ZksNamespace { const METHOD_NAME: &str = "get_l1_batch_number"; let method_latency = API_METRICS.start_call(METHOD_NAME); - let l1_batch_number = self - .state - .connection_pool - .access_storage_tagged("api") - .await - .unwrap() - .blocks_web3_dal() + let mut storage = self.access_storage(METHOD_NAME).await?; + let l1_batch_number = storage + .blocks_dal() .get_sealed_l1_batch_number() .await - .map(|n| U64::from(n.0)) - .map_err(|err| internal_error(METHOD_NAME, err)); + .map_err(|err| internal_error(METHOD_NAME, err))? + .ok_or(Web3Error::NoBlock)?; method_latency.observe(); - l1_batch_number + Ok(l1_batch_number.0.into()) } #[tracing::instrument(skip(self))] @@ -469,12 +446,9 @@ impl ZksNamespace { const METHOD_NAME: &str = "get_miniblock_range"; let method_latency = API_METRICS.start_call(METHOD_NAME); - let minmax = self - .state - .connection_pool - .access_storage_tagged("api") - .await - .unwrap() + self.state.start_info.ensure_not_pruned(batch)?; + let mut storage = self.access_storage(METHOD_NAME).await?; + let minmax = storage .blocks_web3_dal() .get_miniblock_range_of_l1_batch(batch) .await @@ -493,12 +467,9 @@ impl ZksNamespace { const METHOD_NAME: &str = "get_block_details"; let method_latency = API_METRICS.start_call(METHOD_NAME); - let block_details = self - .state - .connection_pool - .access_storage_tagged("api") - .await - .unwrap() + self.state.start_info.ensure_not_pruned(block_number)?; + let mut storage = self.access_storage(METHOD_NAME).await?; + let block_details = storage .blocks_web3_dal() .get_block_details( block_number, @@ -519,12 +490,9 @@ impl ZksNamespace { const METHOD_NAME: &str = "get_raw_block_transactions"; let method_latency = API_METRICS.start_call(METHOD_NAME); - let transactions = self - .state - .connection_pool - .access_storage_tagged("api") - .await - .unwrap() + self.state.start_info.ensure_not_pruned(block_number)?; + let mut storage = self.access_storage(METHOD_NAME).await?; + let transactions = storage .transactions_web3_dal() .get_raw_miniblock_transactions(block_number) .await @@ -542,16 +510,13 @@ impl ZksNamespace { const METHOD_NAME: &str = "get_transaction_details"; let method_latency = API_METRICS.start_call(METHOD_NAME); - let mut tx_details = self - .state - .connection_pool - .access_storage_tagged("api") - .await - .unwrap() + let mut storage = self.access_storage(METHOD_NAME).await?; + let mut tx_details = storage .transactions_web3_dal() .get_transaction_details(hash) .await .map_err(|err| internal_error(METHOD_NAME, err)); + drop(storage); if let Some(proxy) = &self.state.tx_sender.0.proxy { // We're running an external node - we should query the 
main node directly @@ -577,12 +542,9 @@ impl ZksNamespace { const METHOD_NAME: &str = "get_l1_batch"; let method_latency = API_METRICS.start_call(METHOD_NAME); - let l1_batch = self - .state - .connection_pool - .access_storage_tagged("api") - .await - .unwrap() + self.state.start_info.ensure_not_pruned(batch_number)?; + let mut storage = self.access_storage(METHOD_NAME).await?; + let l1_batch = storage .blocks_web3_dal() .get_l1_batch_details(batch_number) .await @@ -593,22 +555,18 @@ impl ZksNamespace { } #[tracing::instrument(skip(self))] - pub async fn get_bytecode_by_hash_impl(&self, hash: H256) -> Option> { + pub async fn get_bytecode_by_hash_impl( + &self, + hash: H256, + ) -> Result>, Web3Error> { const METHOD_NAME: &str = "get_bytecode_by_hash"; let method_latency = API_METRICS.start_call(METHOD_NAME); - let bytecode = self - .state - .connection_pool - .access_storage_tagged("api") - .await - .unwrap() - .storage_dal() - .get_factory_dep(hash) - .await; + let mut storage = self.access_storage(METHOD_NAME).await?; + let bytecode = storage.storage_dal().get_factory_dep(hash).await; method_latency.observe(); - bytecode + Ok(bytecode) } #[tracing::instrument(skip(self))] @@ -620,38 +578,49 @@ impl ZksNamespace { .state .tx_sender .0 - .l1_gas_price_source - .estimate_effective_gas_price(); + .batch_fee_input_provider + .get_batch_fee_input() + .l1_gas_price(); method_latency.observe(); gas_price.into() } + #[tracing::instrument(skip(self))] + pub fn get_fee_params_impl(&self) -> FeeParams { + const METHOD_NAME: &str = "get_fee_params"; + + let method_latency = API_METRICS.start_call(METHOD_NAME); + let fee_model_params = self + .state + .tx_sender + .0 + .batch_fee_input_provider + .get_fee_model_params(); + + method_latency.observe(); + + fee_model_params + } + #[tracing::instrument(skip(self))] pub async fn get_protocol_version_impl( &self, version_id: Option, - ) -> Option { + ) -> Result, Web3Error> { const METHOD_NAME: &str = "get_protocol_version"; let method_latency = API_METRICS.start_call(METHOD_NAME); + let mut storage = self.access_storage(METHOD_NAME).await?; let protocol_version = match version_id { Some(id) => { - self.state - .connection_pool - .access_storage() - .await - .unwrap() + storage .protocol_versions_web3_dal() .get_protocol_version_by_id(id) .await } None => Some( - self.state - .connection_pool - .access_storage() - .await - .unwrap() + storage .protocol_versions_web3_dal() .get_latest_protocol_version() .await, @@ -659,15 +628,7 @@ impl ZksNamespace { }; method_latency.observe(); - protocol_version - } - - #[tracing::instrument(skip_all)] - pub async fn get_logs_with_virtual_blocks_impl( - &self, - filter: Filter, - ) -> Result, Web3Error> { - self.state.translate_get_logs(filter).await + Ok(protocol_version) } #[tracing::instrument(skip_all)] @@ -679,11 +640,11 @@ impl ZksNamespace { ) -> Result { const METHOD_NAME: &str = "get_proofs"; + self.state.start_info.ensure_not_pruned(l1_batch_number)?; let hashed_keys = keys .iter() .map(|key| StorageKey::new(AccountTreeId::new(address), *key).hashed_key_u256()) .collect(); - let storage_proof = self .state .tree_api @@ -714,7 +675,7 @@ impl ZksNamespace { self.state .tx_sender .0 - .l1_gas_price_source + .batch_fee_input_provider .get_erc20_conversion_rate(), )) } diff --git a/core/lib/zksync_core/src/api_server/web3/pubsub.rs b/core/lib/zksync_core/src/api_server/web3/pubsub.rs new file mode 100644 index 00000000000..1eded8e49ea --- /dev/null +++ b/core/lib/zksync_core/src/api_server/web3/pubsub.rs @@ -0,0 
+1,457 @@ +//! (Largely) backend-agnostic logic for dealing with Web3 subscriptions. + +use anyhow::Context as _; +use futures::FutureExt; +use tokio::{ + sync::{broadcast, mpsc, watch}, + task::JoinHandle, + time::{interval, Duration}, +}; +use zksync_dal::ConnectionPool; +use zksync_types::{MiniblockNumber, H128, H256}; +use zksync_web3_decl::{ + jsonrpsee::{ + core::{server::SubscriptionMessage, SubscriptionResult}, + server::IdProvider, + types::{error::ErrorCode, ErrorObject, SubscriptionId}, + PendingSubscriptionSink, SendTimeoutError, SubscriptionSink, + }, + namespaces::EthPubSubServer, + types::{BlockHeader, Log, PubSubFilter, PubSubResult}, +}; + +use super::{ + metrics::{SubscriptionType, PUB_SUB_METRICS}, + namespaces::eth::EVENT_TOPIC_NUMBER_LIMIT, +}; +use crate::api_server::execution_sandbox::BlockStartInfo; + +const BROADCAST_CHANNEL_CAPACITY: usize = 1024; +const SUBSCRIPTION_SINK_SEND_TIMEOUT: Duration = Duration::from_secs(1); + +#[derive(Debug, Clone, Copy)] +pub struct EthSubscriptionIdProvider; + +impl IdProvider for EthSubscriptionIdProvider { + fn next_id(&self) -> SubscriptionId<'static> { + let id = H128::random(); + format!("0x{}", hex::encode(id.0)).into() + } +} + +/// Events emitted by the subscription logic. Only used in WebSocket server tests so far. +#[derive(Debug)] +pub(super) enum PubSubEvent { + Subscribed(SubscriptionType), + NotifyIterationFinished(SubscriptionType), +} + +/// Manager of notifications for a certain type of subscriptions. +#[derive(Debug)] +struct PubSubNotifier { + sender: broadcast::Sender<Vec<PubSubResult>>, + connection_pool: ConnectionPool, + polling_interval: Duration, + events_sender: Option<mpsc::UnboundedSender<PubSubEvent>>, +} + +impl PubSubNotifier { + async fn get_starting_miniblock_number(&self) -> anyhow::Result<MiniblockNumber> { + let mut storage = self + .connection_pool + .access_storage_tagged("api") + .await + .context("access_storage_tagged")?; + let sealed_miniblock_number = storage + .blocks_dal() + .get_sealed_miniblock_number() + .await + .context("get_sealed_miniblock_number()")?; + Ok(match sealed_miniblock_number { + Some(number) => number, + None => { + // We don't have miniblocks in the storage yet. Use the snapshot miniblock number instead.
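The intent of the branch below: each notifier tracks the last miniblock it has already reported, so on an empty (snapshot-recovered) storage the cursor starts one below the first locally available miniblock, ensuring that block is still broadcast once it is persisted. A toy sketch of the cursor arithmetic (plain `u32` instead of `MiniblockNumber`; illustrative only):

```rust
/// Mirrors the cursor logic: use the last sealed block if any, otherwise one
/// below the first block that will ever exist locally, so that block is still
/// treated as "new" when it arrives.
fn starting_cursor(last_sealed: Option<u32>, first_local_block: u32) -> u32 {
    match last_sealed {
        Some(number) => number,
        None => first_local_block.saturating_sub(1),
    }
}

fn main() {
    assert_eq!(starting_cursor(Some(42), 10), 42); // normal operation
    assert_eq!(starting_cursor(None, 10), 9); // snapshot recovery
    assert_eq!(starting_cursor(None, 0), 0); // saturates at genesis
}
```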
+ let start_info = BlockStartInfo::new(&mut storage).await?; + MiniblockNumber(start_info.first_miniblock.saturating_sub(1)) + } + }) + } + + fn emit_event(&self, event: PubSubEvent) { + if let Some(sender) = &self.events_sender { + sender.send(event).ok(); + } + } +} + +impl PubSubNotifier { + async fn notify_blocks(self, stop_receiver: watch::Receiver) -> anyhow::Result<()> { + let mut last_block_number = self.get_starting_miniblock_number().await?; + let mut timer = interval(self.polling_interval); + loop { + if *stop_receiver.borrow() { + tracing::info!("Stop signal received, pubsub_block_notifier is shutting down"); + break; + } + timer.tick().await; + + let db_latency = PUB_SUB_METRICS.db_poll_latency[&SubscriptionType::Blocks].start(); + let new_blocks = self.new_blocks(last_block_number).await?; + db_latency.observe(); + + if let Some(last_block) = new_blocks.last() { + last_block_number = MiniblockNumber(last_block.number.unwrap().as_u32()); + let new_blocks = new_blocks.into_iter().map(PubSubResult::Header).collect(); + self.send_pub_sub_results(new_blocks, SubscriptionType::Blocks); + } + self.emit_event(PubSubEvent::NotifyIterationFinished( + SubscriptionType::Blocks, + )); + } + Ok(()) + } + + fn send_pub_sub_results(&self, results: Vec, sub_type: SubscriptionType) { + // Errors only on 0 receivers, but we want to go on if we have 0 subscribers so ignore the error. + self.sender.send(results).ok(); + PUB_SUB_METRICS.broadcast_channel_len[&sub_type].set(self.sender.len()); + } + + async fn new_blocks( + &self, + last_block_number: MiniblockNumber, + ) -> anyhow::Result> { + self.connection_pool + .access_storage_tagged("api") + .await + .context("access_storage_tagged")? + .blocks_web3_dal() + .get_block_headers_after(last_block_number) + .await + .with_context(|| format!("get_block_headers_after({last_block_number})")) + } + + async fn notify_txs(self, stop_receiver: watch::Receiver) -> anyhow::Result<()> { + let mut last_time = chrono::Utc::now().naive_utc(); + let mut timer = interval(self.polling_interval); + loop { + if *stop_receiver.borrow() { + tracing::info!("Stop signal received, pubsub_tx_notifier is shutting down"); + break; + } + timer.tick().await; + + let db_latency = PUB_SUB_METRICS.db_poll_latency[&SubscriptionType::Txs].start(); + let (new_txs, new_last_time) = self.new_txs(last_time).await?; + db_latency.observe(); + + if let Some(new_last_time) = new_last_time { + last_time = new_last_time; + let new_txs = new_txs.into_iter().map(PubSubResult::TxHash).collect(); + self.send_pub_sub_results(new_txs, SubscriptionType::Txs); + } + self.emit_event(PubSubEvent::NotifyIterationFinished(SubscriptionType::Txs)); + } + Ok(()) + } + + async fn new_txs( + &self, + last_time: chrono::NaiveDateTime, + ) -> anyhow::Result<(Vec, Option)> { + self.connection_pool + .access_storage_tagged("api") + .await + .context("access_storage_tagged")? 
+ .transactions_web3_dal() + .get_pending_txs_hashes_after(last_time, None) + .await + .context("get_pending_txs_hashes_after()") + } + + async fn notify_logs(self, stop_receiver: watch::Receiver) -> anyhow::Result<()> { + let mut last_block_number = self.get_starting_miniblock_number().await?; + + let mut timer = interval(self.polling_interval); + loop { + if *stop_receiver.borrow() { + tracing::info!("Stop signal received, pubsub_logs_notifier is shutting down"); + break; + } + timer.tick().await; + + let db_latency = PUB_SUB_METRICS.db_poll_latency[&SubscriptionType::Logs].start(); + let new_logs = self.new_logs(last_block_number).await?; + db_latency.observe(); + + if let Some(last_log) = new_logs.last() { + last_block_number = MiniblockNumber(last_log.block_number.unwrap().as_u32()); + let new_logs = new_logs.into_iter().map(PubSubResult::Log).collect(); + self.send_pub_sub_results(new_logs, SubscriptionType::Logs); + } + self.emit_event(PubSubEvent::NotifyIterationFinished(SubscriptionType::Logs)); + } + Ok(()) + } + + async fn new_logs(&self, last_block_number: MiniblockNumber) -> anyhow::Result> { + self.connection_pool + .access_storage_tagged("api") + .await + .context("access_storage_tagged")? + .events_web3_dal() + .get_all_logs(last_block_number) + .await + .context("events_web3_dal().get_all_logs()") + } +} + +/// Subscription support for Web3 APIs. +pub(super) struct EthSubscribe { + blocks: broadcast::Sender>, + transactions: broadcast::Sender>, + logs: broadcast::Sender>, + events_sender: Option>, +} + +impl EthSubscribe { + pub fn new() -> Self { + let (blocks, _) = broadcast::channel(BROADCAST_CHANNEL_CAPACITY); + let (transactions, _) = broadcast::channel(BROADCAST_CHANNEL_CAPACITY); + let (logs, _) = broadcast::channel(BROADCAST_CHANNEL_CAPACITY); + + Self { + blocks, + transactions, + logs, + events_sender: None, + } + } + + pub fn set_events_sender(&mut self, sender: mpsc::UnboundedSender) { + self.events_sender = Some(sender); + } + + async fn reject(sink: PendingSubscriptionSink) { + sink.reject(ErrorObject::borrowed( + ErrorCode::InvalidParams.code(), + "Rejecting subscription - invalid parameters provided.", + None, + )) + .await; + } + + async fn run_subscriber( + sink: SubscriptionSink, + subscription_type: SubscriptionType, + mut receiver: broadcast::Receiver>, + filter: Option, + ) { + let _guard = PUB_SUB_METRICS.active_subscribers[&subscription_type].inc_guard(1); + let lifetime_latency = PUB_SUB_METRICS.subscriber_lifetime[&subscription_type].start(); + let closed = sink.closed().fuse(); + tokio::pin!(closed); + + loop { + tokio::select! { + new_items_result = receiver.recv() => { + let new_items = match new_items_result { + Ok(items) => items, + Err(broadcast::error::RecvError::Closed) => { + // The broadcast channel has closed because the notifier task is shut down. + // This is fine; we should just stop this task. 
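For background, `tokio::sync::broadcast` receivers fail in exactly the two ways this `match` distinguishes: `Closed` once every sender is dropped, and `Lagged(n)` when the fixed-capacity ring buffer overwrote `n` messages a slow receiver never observed. A self-contained sketch of both cases (toy capacity of 2; unrelated to the server's actual channel wiring):

```rust
use tokio::sync::broadcast;

#[tokio::main]
async fn main() {
    // Capacity 2: sending 4 items overwrites the 2 oldest before they are read.
    let (tx, mut rx) = broadcast::channel(2);
    for i in 0..4 {
        tx.send(i).unwrap();
    }
    // The receiver missed items 0 and 1, so the first `recv` reports the lag.
    match rx.recv().await {
        Err(broadcast::error::RecvError::Lagged(missed)) => {
            println!("receiver lagged; missed {missed} items");
        }
        other => println!("unexpected: {other:?}"),
    }
    drop(tx);
    // The remaining buffered items (2 and 3) are still delivered, after which
    // `recv` returns `RecvError::Closed` because every sender is gone.
    while let Ok(item) = rx.recv().await {
        println!("got {item}");
    }
}
```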
+ break; + } + Err(broadcast::error::RecvError::Lagged(message_count)) => { + PUB_SUB_METRICS + .skipped_broadcast_messages[&subscription_type] + .observe(message_count); + break; + } + }; + + let handle_result = Self::handle_new_items( + &sink, + subscription_type, + new_items, + filter.as_ref() + ) + .await; + if handle_result.is_err() { + PUB_SUB_METRICS.subscriber_send_timeouts[&subscription_type].inc(); + break; + } + } + _ = &mut closed => { + break; + } + } + } + lifetime_latency.observe(); + } + + async fn handle_new_items( + sink: &SubscriptionSink, + subscription_type: SubscriptionType, + new_items: Vec, + filter: Option<&PubSubFilter>, + ) -> Result<(), SendTimeoutError> { + let notify_latency = PUB_SUB_METRICS.notify_subscribers_latency[&subscription_type].start(); + for item in new_items { + if let PubSubResult::Log(log) = &item { + if let Some(filter) = &filter { + if !filter.matches(log) { + continue; + } + } + } + + sink.send_timeout( + SubscriptionMessage::from_json(&item) + .expect("PubSubResult always serializable to json;qed"), + SUBSCRIPTION_SINK_SEND_TIMEOUT, + ) + .await?; + + PUB_SUB_METRICS.notify[&subscription_type].inc(); + } + + notify_latency.observe(); + Ok(()) + } + + #[tracing::instrument(skip(self, pending_sink))] + pub async fn sub( + &self, + pending_sink: PendingSubscriptionSink, + sub_type: String, + params: Option, + ) { + let sub_type = match sub_type.as_str() { + "newHeads" => { + let Ok(sink) = pending_sink.accept().await else { + return; + }; + let blocks_rx = self.blocks.subscribe(); + tokio::spawn(Self::run_subscriber( + sink, + SubscriptionType::Blocks, + blocks_rx, + None, + )); + + Some(SubscriptionType::Blocks) + } + "newPendingTransactions" => { + let Ok(sink) = pending_sink.accept().await else { + return; + }; + let transactions_rx = self.transactions.subscribe(); + tokio::spawn(Self::run_subscriber( + sink, + SubscriptionType::Txs, + transactions_rx, + None, + )); + Some(SubscriptionType::Txs) + } + "logs" => { + let filter = params.unwrap_or_default(); + let topic_count = filter.topics.as_ref().map_or(0, Vec::len); + + if topic_count > EVENT_TOPIC_NUMBER_LIMIT { + Self::reject(pending_sink).await; + None + } else { + let Ok(sink) = pending_sink.accept().await else { + return; + }; + let logs_rx = self.logs.subscribe(); + tokio::spawn(Self::run_subscriber( + sink, + SubscriptionType::Logs, + logs_rx, + Some(filter), + )); + Some(SubscriptionType::Logs) + } + } + "syncing" => { + let Ok(sink) = pending_sink.accept().await else { + return; + }; + + tokio::spawn(async move { + sink.send_timeout( + SubscriptionMessage::from_json(&PubSubResult::Syncing(false)).unwrap(), + SUBSCRIPTION_SINK_SEND_TIMEOUT, + ) + .await + }); + None + } + _ => { + Self::reject(pending_sink).await; + None + } + }; + + if let Some(sub_type) = sub_type { + if let Some(sender) = &self.events_sender { + sender.send(PubSubEvent::Subscribed(sub_type)).ok(); + } + } + } + + /// Spawns notifier tasks. This should be called once per instance. 
+ pub fn spawn_notifiers( + &self, + connection_pool: ConnectionPool, + polling_interval: Duration, + stop_receiver: watch::Receiver, + ) -> Vec>> { + let mut notifier_tasks = Vec::with_capacity(3); + + let notifier = PubSubNotifier { + sender: self.blocks.clone(), + connection_pool: connection_pool.clone(), + polling_interval, + events_sender: self.events_sender.clone(), + }; + let notifier_task = tokio::spawn(notifier.notify_blocks(stop_receiver.clone())); + notifier_tasks.push(notifier_task); + + let notifier = PubSubNotifier { + sender: self.transactions.clone(), + connection_pool: connection_pool.clone(), + polling_interval, + events_sender: self.events_sender.clone(), + }; + let notifier_task = tokio::spawn(notifier.notify_txs(stop_receiver.clone())); + notifier_tasks.push(notifier_task); + + let notifier = PubSubNotifier { + sender: self.logs.clone(), + connection_pool, + polling_interval, + events_sender: self.events_sender.clone(), + }; + let notifier_task = tokio::spawn(notifier.notify_logs(stop_receiver)); + + notifier_tasks.push(notifier_task); + notifier_tasks + } +} + +#[async_trait::async_trait] +impl EthPubSubServer for EthSubscribe { + async fn subscribe( + &self, + pending: PendingSubscriptionSink, + sub_type: String, + filter: Option, + ) -> SubscriptionResult { + self.sub(pending, sub_type, filter).await; + Ok(()) + } +} diff --git a/core/lib/zksync_core/src/api_server/web3/pubsub_notifier.rs b/core/lib/zksync_core/src/api_server/web3/pubsub_notifier.rs deleted file mode 100644 index 0d1008e77e0..00000000000 --- a/core/lib/zksync_core/src/api_server/web3/pubsub_notifier.rs +++ /dev/null @@ -1,191 +0,0 @@ -use anyhow::Context as _; -use jsonrpc_pubsub::typed; -use tokio::sync::watch; -use tokio::time::{interval, Duration}; - -use zksync_dal::ConnectionPool; -use zksync_types::MiniblockNumber; -use zksync_web3_decl::types::{PubSubFilter, PubSubResult}; - -use super::{ - metrics::{SubscriptionType, PUB_SUB_METRICS}, - namespaces::SubscriptionMap, -}; - -pub async fn notify_blocks( - subscribers: SubscriptionMap>, - connection_pool: ConnectionPool, - polling_interval: Duration, - stop_receiver: watch::Receiver, -) -> anyhow::Result<()> { - let mut last_block_number = connection_pool - .access_storage_tagged("api") - .await - .unwrap() - .blocks_web3_dal() - .get_sealed_miniblock_number() - .await - .context("get_sealed_miniblock_number()")?; - let mut timer = interval(polling_interval); - loop { - if *stop_receiver.borrow() { - tracing::info!("Stop signal received, pubsub_block_notifier is shutting down"); - break; - } - - timer.tick().await; - - let db_latency = PUB_SUB_METRICS.db_poll_latency[&SubscriptionType::Blocks].start(); - let new_blocks = connection_pool - .access_storage_tagged("api") - .await - .unwrap() - .blocks_web3_dal() - .get_block_headers_after(last_block_number) - .await - .with_context(|| format!("get_block_headers_after({last_block_number})"))?; - db_latency.observe(); - - if !new_blocks.is_empty() { - last_block_number = - MiniblockNumber(new_blocks.last().unwrap().number.unwrap().as_u32()); - - let notify_latency = - PUB_SUB_METRICS.notify_subscribers_latency[&SubscriptionType::Blocks].start(); - let subscribers = subscribers - .read() - .await - .values() - .cloned() - .collect::>(); - for sink in subscribers { - for block in new_blocks.iter().cloned() { - if sink.notify(Ok(PubSubResult::Header(block))).is_err() { - // Subscriber disconnected. 
- break; - } - PUB_SUB_METRICS.notify[&SubscriptionType::Blocks].inc(); - } - } - notify_latency.observe(); - } - } - Ok(()) -} - -pub async fn notify_txs( - subscribers: SubscriptionMap>, - connection_pool: ConnectionPool, - polling_interval: Duration, - stop_receiver: watch::Receiver, -) -> anyhow::Result<()> { - let mut last_time = chrono::Utc::now().naive_utc(); - let mut timer = interval(polling_interval); - loop { - if *stop_receiver.borrow() { - tracing::info!("Stop signal received, pubsub_tx_notifier is shutting down"); - break; - } - - timer.tick().await; - - let db_latency = PUB_SUB_METRICS.db_poll_latency[&SubscriptionType::Txs].start(); - let (new_txs, new_last_time) = connection_pool - .access_storage_tagged("api") - .await - .unwrap() - .transactions_web3_dal() - .get_pending_txs_hashes_after(last_time, None) - .await - .context("get_pending_txs_hashes_after()")?; - db_latency.observe(); - - if let Some(new_last_time) = new_last_time { - last_time = new_last_time; - let notify_latency = - PUB_SUB_METRICS.notify_subscribers_latency[&SubscriptionType::Txs].start(); - - let subscribers = subscribers - .read() - .await - .values() - .cloned() - .collect::>(); - for sink in subscribers { - for tx_hash in new_txs.iter().cloned() { - if sink.notify(Ok(PubSubResult::TxHash(tx_hash))).is_err() { - // Subscriber disconnected. - break; - } - PUB_SUB_METRICS.notify[&SubscriptionType::Txs].inc(); - } - } - notify_latency.observe(); - } - } - Ok(()) -} - -pub async fn notify_logs( - subscribers: SubscriptionMap<(typed::Sink, PubSubFilter)>, - connection_pool: ConnectionPool, - polling_interval: Duration, - stop_receiver: watch::Receiver, -) -> anyhow::Result<()> { - let mut last_block_number = connection_pool - .access_storage_tagged("api") - .await - .unwrap() - .blocks_web3_dal() - .get_sealed_miniblock_number() - .await - .context("get_sealed_miniblock_number()")?; - let mut timer = interval(polling_interval); - loop { - if *stop_receiver.borrow() { - tracing::info!("Stop signal received, pubsub_logs_notifier is shutting down"); - break; - } - - timer.tick().await; - - let db_latency = PUB_SUB_METRICS.db_poll_latency[&SubscriptionType::Logs].start(); - let new_logs = connection_pool - .access_storage_tagged("api") - .await - .unwrap() - .events_web3_dal() - .get_all_logs(last_block_number) - .await - .context("events_web3_dal().get_all_logs()")?; - db_latency.observe(); - - if !new_logs.is_empty() { - last_block_number = - MiniblockNumber(new_logs.last().unwrap().block_number.unwrap().as_u32()); - let notify_latency = - PUB_SUB_METRICS.notify_subscribers_latency[&SubscriptionType::Logs].start(); - - let subscribers = subscribers - .read() - .await - .values() - .cloned() - .collect::>(); - - for (sink, filter) in subscribers { - for log in new_logs.iter().cloned() { - if filter.matches(&log) { - if sink.notify(Ok(PubSubResult::Log(log))).is_err() { - // Subscriber disconnected. 
- break; - } - PUB_SUB_METRICS.notify[&SubscriptionType::Logs].inc(); - } - } - } - notify_latency.observe(); - } - } - Ok(()) -} diff --git a/core/lib/zksync_core/src/api_server/web3/state.rs b/core/lib/zksync_core/src/api_server/web3/state.rs index 0463d482320..180e2de7211 100644 --- a/core/lib/zksync_core/src/api_server/web3/state.rs +++ b/core/lib/zksync_core/src/api_server/web3/state.rs @@ -1,8 +1,4 @@ -use zksync_utils::h256_to_u256; - use std::{ - collections::HashMap, - convert::TryFrom, future::Future, sync::{ atomic::{AtomicU32, Ordering}, @@ -10,38 +6,69 @@ use std::{ }, time::{Duration, Instant}, }; -use tokio::sync::Mutex; -use vise::GaugeGuard; +use lru::LruCache; +use tokio::sync::{watch, Mutex}; +use vise::GaugeGuard; use zksync_config::configs::{api::Web3JsonRpcConfig, chain::NetworkConfig, ContractsConfig}; -use zksync_dal::ConnectionPool; +use zksync_dal::{ConnectionPool, StorageProcessor}; use zksync_types::{ - api::{self, BlockId, BlockNumber, GetLogsFilter}, - block::unpack_block_upgrade_info, - l2::L2Tx, - transaction_request::CallRequest, - AccountTreeId, Address, L1BatchNumber, L1ChainId, L2ChainId, MiniblockNumber, StorageKey, H256, - SYSTEM_CONTEXT_ADDRESS, U256, U64, VIRTUIAL_BLOCK_UPGRADE_INFO_POSITION, -}; -use zksync_web3_decl::{ - error::Web3Error, - types::{Filter, Log}, + api, l2::L2Tx, transaction_request::CallRequest, Address, L1BatchNumber, L1ChainId, L2ChainId, + MiniblockNumber, H256, U256, U64, }; +use zksync_web3_decl::{error::Web3Error, types::Filter}; -use super::metrics::{FilterType, API_METRICS, FILTER_METRICS}; +use super::metrics::{FilterType, FILTER_METRICS}; use crate::{ api_server::{ - execution_sandbox::BlockArgs, + execution_sandbox::{BlockArgs, BlockArgsError, BlockStartInfo}, tree::TreeApiHttpClient, tx_sender::TxSender, - web3::{ - backend_jsonrpc::error::internal_error, namespaces::eth::EVENT_TOPIC_NUMBER_LIMIT, - resolve_block, TypedFilter, - }, + web3::{backend_jsonrpsee::internal_error, TypedFilter}, }, sync_layer::SyncState, }; +#[derive(Debug)] +pub(super) enum PruneQuery { + BlockId(api::BlockId), + L1Batch(L1BatchNumber), +} + +impl From for PruneQuery { + fn from(id: api::BlockId) -> Self { + Self::BlockId(id) + } +} + +impl From for PruneQuery { + fn from(number: MiniblockNumber) -> Self { + Self::BlockId(api::BlockId::Number(number.0.into())) + } +} + +impl From for PruneQuery { + fn from(number: L1BatchNumber) -> Self { + Self::L1Batch(number) + } +} + +impl BlockStartInfo { + pub(super) fn ensure_not_pruned(&self, query: impl Into) -> Result<(), Web3Error> { + match query.into() { + PruneQuery::BlockId(id) => self + .ensure_not_pruned_block(id) + .map_err(Web3Error::PrunedBlock), + PruneQuery::L1Batch(number) => { + if number < self.first_l1_batch { + return Err(Web3Error::PrunedL1Batch(self.first_l1_batch)); + } + Ok(()) + } + } + } +} + /// Configuration values for the API. /// This structure is detached from `ZkSyncConfig`, since different node types (main, external, etc) /// may require different configuration layouts. @@ -101,32 +128,30 @@ impl SealedMiniblockNumber { pub fn new( connection_pool: ConnectionPool, update_interval: Duration, - ) -> (Self, impl Future + Send) { + stop_receiver: watch::Receiver, + ) -> (Self, impl Future>) { let this = Self(Arc::default()); let number_updater = this.clone(); + let update_task = async move { loop { - if Arc::strong_count(&number_updater.0) == 1 { - // The `sealed_miniblock_number` was dropped; there's no sense continuing updates. 
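The replacement below swaps that `Arc`-strong-count liveness heuristic for an explicit `watch`-channel stop signal, matching the shutdown style of the other background tasks in this patch. A minimal sketch of the pattern (a toy counter stands in for the real miniblock query):

```rust
use std::time::Duration;
use tokio::sync::watch;

/// A periodic updater that exits when the stop signal flips to `true`,
/// mirroring the loop below (a counter stands in for the DB query).
async fn update_loop(stop_receiver: watch::Receiver<bool>) -> anyhow::Result<()> {
    let mut latest = 0_u64;
    loop {
        if *stop_receiver.borrow() {
            println!("stopping; last value = {latest}");
            return Ok(());
        }
        latest += 1; // stand-in for "fetch the latest sealed miniblock"
        tokio::time::sleep(Duration::from_millis(10)).await;
    }
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let (stop_sender, stop_receiver) = watch::channel(false);
    let task = tokio::spawn(update_loop(stop_receiver));
    tokio::time::sleep(Duration::from_millis(50)).await;
    stop_sender.send_replace(true); // request shutdown
    task.await??; // the task exits cleanly instead of lingering forever
    Ok(())
}
```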
+ if *stop_receiver.borrow() { tracing::debug!("Stopping latest sealed miniblock updates"); - break; + return Ok(()); } let mut connection = connection_pool.access_storage_tagged("api").await.unwrap(); - let last_sealed_miniblock = connection - .blocks_web3_dal() + let Some(last_sealed_miniblock) = connection + .blocks_dal() .get_sealed_miniblock_number() - .await; + .await? + else { + tokio::time::sleep(update_interval).await; + continue; + }; drop(connection); - match last_sealed_miniblock { - Ok(number) => { - number_updater.update(number); - } - Err(err) => tracing::warn!( - "Failed fetching latest sealed miniblock to update the watch channel: {err}" - ), - } + number_updater.update(last_sealed_miniblock); tokio::time::sleep(update_interval).await; } }; @@ -165,39 +190,21 @@ impl SealedMiniblockNumber { } /// Holder for the data required for the API to be functional. -#[derive(Debug)] -pub struct RpcState { +#[derive(Debug, Clone)] +pub struct RpcState { pub(crate) installed_filters: Arc>, pub connection_pool: ConnectionPool, pub tree_api: Option, - pub tx_sender: TxSender, + pub tx_sender: TxSender, pub sync_state: Option, pub(super) api_config: InternalApiConfig, + /// Number of the first locally available miniblock / L1 batch. May differ from 0 if the node state was recovered + /// from a snapshot. + pub(super) start_info: BlockStartInfo, pub(super) last_sealed_miniblock: SealedMiniblockNumber, - // The flag that enables redirect of eth get logs implementation to - // implementation with virtual block translation to miniblocks - pub logs_translator_enabled: bool, -} - -// Custom implementation is required due to generic param: -// Even though it's under `Arc`, compiler doesn't generate the `Clone` implementation unless -// an unnecessary bound is added. -impl Clone for RpcState { - fn clone(&self) -> Self { - Self { - installed_filters: self.installed_filters.clone(), - connection_pool: self.connection_pool.clone(), - tx_sender: self.tx_sender.clone(), - tree_api: self.tree_api.clone(), - sync_state: self.sync_state.clone(), - api_config: self.api_config.clone(), - last_sealed_miniblock: self.last_sealed_miniblock.clone(), - logs_translator_enabled: self.logs_translator_enabled, - } - } } -impl RpcState { +impl RpcState { pub fn parse_transaction_bytes(&self, bytes: &[u8]) -> Result<(L2Tx, H256), Web3Error> { let chain_id = self.api_config.l2_chain_id; let (tx_request, hash) = api::TransactionRequest::from_bytes(bytes, chain_id)?; @@ -216,6 +223,34 @@ impl RpcState { } } + pub(crate) async fn resolve_block( + &self, + connection: &mut StorageProcessor<'_>, + block: api::BlockId, + method_name: &'static str, + ) -> Result { + self.start_info.ensure_not_pruned(block)?; + let result = connection.blocks_web3_dal().resolve_block_id(block).await; + result + .map_err(|err| internal_error(method_name, err))? 
+ .ok_or(Web3Error::NoBlock) + } + + pub(crate) async fn resolve_block_args( + &self, + connection: &mut StorageProcessor<'_>, + block: api::BlockId, + method_name: &'static str, + ) -> Result { + BlockArgs::new(connection, block, self.start_info) + .await + .map_err(|err| match err { + BlockArgsError::Pruned(number) => Web3Error::PrunedBlock(number), + BlockArgsError::Missing => Web3Error::NoBlock, + BlockArgsError::Database(err) => internal_error(method_name, err), + }) + } + pub async fn resolve_filter_block_number( &self, block_number: Option, @@ -232,12 +267,10 @@ impl RpcState { .connection_pool .access_storage_tagged("api") .await - .unwrap(); - Ok(conn - .blocks_web3_dal() - .resolve_block_id(block_id) + .map_err(|err| internal_error(METHOD_NAME, err))?; + Ok(self + .resolve_block(&mut conn, block_id, METHOD_NAME) .await - .map_err(|err| internal_error(METHOD_NAME, err))? .unwrap()) // ^ `unwrap()` is safe: `resolve_block_id(api::BlockId::Number(_))` can only return `None` // if called with an explicit number, and we've handled this case earlier. @@ -318,7 +351,9 @@ impl RpcState { .access_storage_tagged("api") .await .unwrap(); - let block_number = resolve_block(&mut connection, block_id, METHOD_NAME).await?; + let block_number = self + .resolve_block(&mut connection, block_id, METHOD_NAME) + .await?; let address_historical_nonce = connection .storage_web3_dal() .get_address_historical_nonce(from, block_number) @@ -328,225 +363,11 @@ impl RpcState { } Ok(()) } - - /// Returns logs for the given filter, taking into account block.number migration with virtual blocks - pub async fn translate_get_logs(&self, filter: Filter) -> Result, Web3Error> { - const METHOD_NAME: &str = "translate_get_logs"; - - let method_latency = API_METRICS.start_call(METHOD_NAME); - // no support for block hash filtering - if filter.block_hash.is_some() { - return Err(Web3Error::InvalidFilterBlockHash); - } - - if let Some(topics) = &filter.topics { - if topics.len() > EVENT_TOPIC_NUMBER_LIMIT { - return Err(Web3Error::TooManyTopics); - } - } - - let mut conn = self - .connection_pool - .access_storage_tagged("api") - .await - .unwrap(); - - // get virtual block upgrade info - let upgrade_info = conn - .storage_dal() - .get_by_key(&StorageKey::new( - AccountTreeId::new(SYSTEM_CONTEXT_ADDRESS), - VIRTUIAL_BLOCK_UPGRADE_INFO_POSITION, - )) - .await - .ok_or_else(|| { - internal_error( - METHOD_NAME, - "Failed to get virtual block upgrade info from DB".to_string(), - ) - })?; - let (virtual_block_start_batch, virtual_block_finish_l2_block) = - unpack_block_upgrade_info(h256_to_u256(upgrade_info)); - let from_miniblock_number = - if let Some(BlockNumber::Number(block_number)) = filter.from_block { - self.resolve_miniblock_from_block( - block_number.as_u64(), - true, - virtual_block_start_batch, - virtual_block_finish_l2_block, - ) - .await? - } else { - let block_number = filter.from_block.unwrap_or(BlockNumber::Latest); - let block_id = BlockId::Number(block_number); - conn.blocks_web3_dal() - .resolve_block_id(block_id) - .await - .map_err(|err| internal_error(METHOD_NAME, err))? - .unwrap() - .0 - }; - - let to_miniblock_number = if let Some(BlockNumber::Number(block_number)) = filter.to_block { - self.resolve_miniblock_from_block( - block_number.as_u64(), - true, - virtual_block_start_batch, - virtual_block_finish_l2_block, - ) - .await? 
- } else { - let block_number = filter.to_block.unwrap_or(BlockNumber::Latest); - let block_id = BlockId::Number(block_number); - conn.blocks_web3_dal() - .resolve_block_id(block_id) - .await - .map_err(|err| internal_error(METHOD_NAME, err))? - .unwrap() - .0 - }; - - // It is considered that all logs of the miniblock where created in the last virtual block - // of this miniblock. In this case no logs are created. - // When the given virtual block range is a subrange of some miniblock virtual block range. - // e.g. given virtual block range is [11, 12] and the miniblock = 5 virtual block range is [10, 14]. - // Then `to_miniblock_number` will be 4 and `from_miniblock_number` will be 5. 4 < 5. - if to_miniblock_number < from_miniblock_number { - return Ok(vec![]); - } - - let block_filter = Filter { - from_block: Some(from_miniblock_number.into()), - to_block: Some(to_miniblock_number.into()), - ..filter.clone() - }; - - let result = self - .filter_events_changes( - block_filter, - MiniblockNumber(from_miniblock_number), - MiniblockNumber(to_miniblock_number), - ) - .await; - - method_latency.observe(); - result - } - - async fn resolve_miniblock_from_block( - &self, - block_number: u64, - is_from: bool, - virtual_block_start_batch: u64, - virtual_block_finish_l2_block: u64, - ) -> Result { - const METHOD_NAME: &str = "resolve_miniblock_from_block"; - - let mut conn = self - .connection_pool - .access_storage_tagged("api") - .await - .unwrap(); - - if block_number < virtual_block_start_batch { - let l1_batch = L1BatchNumber(block_number as u32); - let miniblock_range = conn - .blocks_web3_dal() - .get_miniblock_range_of_l1_batch(l1_batch) - .await - .map(|minmax| minmax.map(|(min, max)| (U64::from(min.0), U64::from(max.0)))) - .map_err(|err| internal_error(METHOD_NAME, err))?; - - match miniblock_range { - Some((batch_first_miniblock, batch_last_miniblock)) => { - if is_from { - Ok(batch_first_miniblock.as_u32()) - } else { - Ok(batch_last_miniblock.as_u32()) - } - } - _ => Err(Web3Error::NoBlock), - } - } else if virtual_block_finish_l2_block > 0 && block_number >= virtual_block_finish_l2_block - { - u32::try_from(block_number).map_err(|_| Web3Error::NoBlock) - } else { - // we have to deal with virtual blocks here - let virtual_block_miniblock = if is_from { - conn.blocks_web3_dal() - .get_miniblock_for_virtual_block_from(virtual_block_start_batch, block_number) - .await - .map_err(|err| internal_error(METHOD_NAME, err))? - } else { - conn.blocks_web3_dal() - .get_miniblock_for_virtual_block_to(virtual_block_start_batch, block_number) - .await - .map_err(|err| internal_error(METHOD_NAME, err))? - }; - virtual_block_miniblock.ok_or(Web3Error::NoBlock) - } - } - - async fn filter_events_changes( - &self, - filter: Filter, - from_block: MiniblockNumber, - to_block: MiniblockNumber, - ) -> Result, Web3Error> { - const METHOD_NAME: &str = "filter_events_changes"; - - let addresses: Vec<_> = filter - .address - .map_or_else(Vec::default, |address| address.0); - let topics: Vec<_> = filter - .topics - .into_iter() - .flatten() - .enumerate() - .filter_map(|(idx, topics)| topics.map(|topics| (idx as u32 + 1, topics.0))) - .collect(); - let get_logs_filter = GetLogsFilter { - from_block, - to_block: filter.to_block, - addresses, - topics, - }; - - let mut storage = self - .connection_pool - .access_storage_tagged("api") - .await - .unwrap(); - - // Check if there is more than one block in range and there are more than `req_entities_limit` logs that satisfies filter. 
- // In this case we should return error and suggest requesting logs with smaller block range. - if from_block != to_block - && storage - .events_web3_dal() - .get_log_block_number(&get_logs_filter, self.api_config.req_entities_limit) - .await - .map_err(|err| internal_error(METHOD_NAME, err))? - .is_some() - { - return Err(Web3Error::TooManyLogs(self.api_config.req_entities_limit)); - } - - let logs = storage - .events_web3_dal() - .get_logs(get_logs_filter, i32::MAX as usize) - .await - .map_err(|err| internal_error(METHOD_NAME, err))?; - - Ok(logs) - } } -/// Contains mapping from index to `Filter` with optional location. -#[derive(Default, Debug)] -pub(crate) struct Filters { - state: HashMap<U256, InstalledFilter>, - max_cap: usize, -} +/// Contains mapping from index to `Filter`s with optional location. +#[derive(Debug)] +pub(crate) struct Filters(LruCache<U256, InstalledFilter>); #[derive(Debug)] struct InstalledFilter { @@ -559,7 +380,7 @@ struct InstalledFilter { impl InstalledFilter { pub fn new(filter: TypedFilter) -> Self { - let guard = FILTER_METRICS.metrics_count[&FilterType::from(&filter)].inc_guard(1); + let guard = FILTER_METRICS.filter_count[&FilterType::from(&filter)].inc_guard(1); Self { filter, _guard: guard, @@ -585,44 +406,40 @@ impl Drop for InstalledFilter { fn drop(&mut self) { let filter_type = FilterType::from(&self.filter); - FILTER_METRICS.filter_count[&filter_type].observe(self.request_count); + FILTER_METRICS.request_count[&filter_type].observe(self.request_count); FILTER_METRICS.filter_lifetime[&filter_type].observe(self.created_at.elapsed()); } } impl Filters { /// Instantiates `Filters` with given max capacity. - pub fn new(max_cap: usize) -> Self { - Self { - state: Default::default(), - max_cap, - } + pub fn new(max_cap: Option<usize>) -> Self { + let state = match max_cap { + Some(max_cap) => { + LruCache::new(max_cap.try_into().expect("Filter capacity should not be 0")) + } + None => LruCache::unbounded(), + }; + Self(state) } /// Adds filter to the state and returns its key. pub fn add(&mut self, filter: TypedFilter) -> U256 { let idx = loop { let val = H256::random().to_fixed_bytes().into(); - if !self.state.contains_key(&val) { + if !self.0.contains(&val) { break val; } }; - self.state.insert(idx, InstalledFilter::new(filter)); - - // Check if we reached max capacity - if self.state.len() > self.max_cap { - if let Some(first) = self.state.keys().next().cloned() { - self.remove(first); - } - } + self.0.push(idx, InstalledFilter::new(filter)); idx } /// Retrieves filter from the state. pub fn get_and_update_stats(&mut self, index: U256) -> Option<TypedFilter> { - let installed_filter = self.state.get_mut(&index)?; + let installed_filter = self.0.get_mut(&index)?; installed_filter.update_stats(); @@ -631,13 +448,53 @@ impl Filters { /// Updates filter in the state. pub fn update(&mut self, index: U256, new_filter: TypedFilter) { - if let Some(installed_filter) = self.state.get_mut(&index) { + if let Some(installed_filter) = self.0.get_mut(&index) { installed_filter.filter = new_filter; } } /// Removes filter from the map.
pub fn remove(&mut self, index: U256) -> bool { - self.state.remove(&index).is_some() + self.0.pop(&index).is_some() + } +} + +#[cfg(test)] +mod tests { + use chrono::NaiveDateTime; + + #[test] + fn test_filters_functionality() { + use super::*; + + let mut filters = Filters::new(Some(2)); + + let filter1 = TypedFilter::Events(Filter::default(), MiniblockNumber::default()); + let filter2 = TypedFilter::Blocks(MiniblockNumber::default()); + let filter3 = TypedFilter::PendingTransactions(NaiveDateTime::default()); + + let idx1 = filters.add(filter1.clone()); + let idx2 = filters.add(filter2); + let idx3 = filters.add(filter3); + + assert_eq!(filters.0.len(), 2); + assert!(!filters.0.contains(&idx1)); + assert!(filters.0.contains(&idx2)); + assert!(filters.0.contains(&idx3)); + + filters.get_and_update_stats(idx2); + + let idx1 = filters.add(filter1); + assert_eq!(filters.0.len(), 2); + assert!(filters.0.contains(&idx1)); + assert!(filters.0.contains(&idx2)); + assert!(!filters.0.contains(&idx3)); + + filters.remove(idx1); + + assert_eq!(filters.0.len(), 1); + assert!(!filters.0.contains(&idx1)); + assert!(filters.0.contains(&idx2)); + assert!(!filters.0.contains(&idx3)); } } diff --git a/core/lib/zksync_core/src/api_server/web3/tests.rs b/core/lib/zksync_core/src/api_server/web3/tests.rs deleted file mode 100644 index 0d595830d34..00000000000 --- a/core/lib/zksync_core/src/api_server/web3/tests.rs +++ /dev/null @@ -1,145 +0,0 @@ -use tokio::sync::watch; - -use std::{sync::Arc, time::Instant}; - -use zksync_config::configs::{ - api::Web3JsonRpcConfig, - chain::{NetworkConfig, StateKeeperConfig}, - ContractsConfig, -}; -use zksync_dal::ConnectionPool; -use zksync_health_check::CheckHealth; -use zksync_state::PostgresStorageCaches; -use zksync_types::{L1BatchNumber, U64}; -use zksync_web3_decl::{ - jsonrpsee::http_client::HttpClient, - namespaces::{EthNamespaceClient, ZksNamespaceClient}, -}; - -use super::*; -use crate::{ - api_server::tx_sender::TxSenderConfig, - genesis::{ensure_genesis_state, GenesisParams}, -}; - -const TEST_TIMEOUT: Duration = Duration::from_secs(5); -const POLL_INTERVAL: Duration = Duration::from_millis(50); - -/// Mock [`L1GasPriceProvider`] that returns a constant value. -struct MockL1GasPriceProvider(u64); - -impl L1GasPriceProvider for MockL1GasPriceProvider { - fn estimate_effective_gas_price(&self) -> u64 { - self.0 - } -} - -impl ApiServerHandles { - /// Waits until the server health check reports the ready state. 
- pub(crate) async fn wait_until_ready(&self) { - let started_at = Instant::now(); - loop { - assert!( - started_at.elapsed() <= TEST_TIMEOUT, - "Timed out waiting for API server" - ); - let health = self.health_check.check_health().await; - if health.status().is_ready() { - break; - } - tokio::time::sleep(POLL_INTERVAL).await; - } - } - - pub(crate) async fn shutdown(self) { - let stop_server = async { - for task in self.tasks { - task.await - .expect("Server panicked") - .expect("Server terminated with error"); - } - }; - tokio::time::timeout(TEST_TIMEOUT, stop_server) - .await - .unwrap(); - } -} - -pub(crate) async fn spawn_http_server( - network_config: &NetworkConfig, - pool: ConnectionPool, - stop_receiver: watch::Receiver, -) -> ApiServerHandles { - let contracts_config = ContractsConfig::for_tests(); - let web3_config = Web3JsonRpcConfig::for_tests(); - let state_keeper_config = StateKeeperConfig::for_tests(); - let api_config = InternalApiConfig::new(network_config, &web3_config, &contracts_config); - let tx_sender_config = - TxSenderConfig::new(&state_keeper_config, &web3_config, api_config.l2_chain_id); - - let storage_caches = PostgresStorageCaches::new(1, 1); - let gas_adjuster = Arc::new(MockL1GasPriceProvider(1)); - let (tx_sender, vm_barrier) = crate::build_tx_sender( - &tx_sender_config, - &web3_config, - &state_keeper_config, - pool.clone(), - pool.clone(), - gas_adjuster, - storage_caches, - ) - .await; - - ApiBuilder::jsonrpsee_backend(api_config, pool) - .http(0) // Assign random port - .with_threads(1) - .with_tx_sender(tx_sender, vm_barrier) - .enable_api_namespaces(Namespace::NON_DEBUG.to_vec()) - .build(stop_receiver) - .await - .expect("Failed spawning JSON-RPC server") -} - -#[tokio::test] -async fn http_server_can_start() { - let pool = ConnectionPool::test_pool().await; - let network_config = NetworkConfig::for_tests(); - let mut storage = pool.access_storage().await.unwrap(); - if storage.blocks_dal().is_genesis_needed().await.unwrap() { - ensure_genesis_state( - &mut storage, - network_config.zksync_network_id, - &GenesisParams::mock(), - ) - .await - .unwrap(); - } - drop(storage); - - let (stop_sender, stop_receiver) = watch::channel(false); - let server_handles = spawn_http_server(&network_config, pool, stop_receiver).await; - server_handles.wait_until_ready().await; - - test_http_server_methods(server_handles.local_addr).await; - - stop_sender.send_replace(true); - server_handles.shutdown().await; -} - -async fn test_http_server_methods(local_addr: SocketAddr) { - let client = ::builder() - .build(format!("http://{local_addr}/")) - .unwrap(); - let block_number = client.get_block_number().await.unwrap(); - assert_eq!(block_number, U64::from(0)); - - let l1_batch_number = client.get_l1_batch_number().await.unwrap(); - assert_eq!(l1_batch_number, U64::from(0)); - - let genesis_l1_batch = client - .get_l1_batch_details(L1BatchNumber(0)) - .await - .unwrap() - .unwrap(); - assert!(genesis_l1_batch.base.root_hash.is_some()); -} diff --git a/core/lib/zksync_core/src/api_server/web3/tests/debug.rs b/core/lib/zksync_core/src/api_server/web3/tests/debug.rs new file mode 100644 index 00000000000..874cc019a3d --- /dev/null +++ b/core/lib/zksync_core/src/api_server/web3/tests/debug.rs @@ -0,0 +1,164 @@ +//! Tests for the `debug` Web3 namespace. 
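+//!
+//! Like the other modules under `tests`, each case implements the `HttpTest` trait from
+//! `tests/mod.rs` and is driven by `test_http_server`. A minimal sketch of the pattern
+//! (hypothetical test, shown for orientation only):
+//!
+//! ```ignore
+//! #[derive(Debug)]
+//! struct SmokeTest;
+//!
+//! #[async_trait]
+//! impl HttpTest for SmokeTest {
+//!     async fn test(&self, client: &HttpClient, _pool: &ConnectionPool) -> anyhow::Result<()> {
+//!         assert_eq!(client.get_block_number().await?, U64::from(0));
+//!         Ok(())
+//!     }
+//! }
+//!
+//! #[tokio::test]
+//! async fn smoke() {
+//!     test_http_server(SmokeTest).await;
+//! }
+//! ```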
+ +use zksync_types::{tx::TransactionExecutionResult, vm_trace::Call, BOOTLOADER_ADDRESS}; +use zksync_web3_decl::namespaces::DebugNamespaceClient; + +use super::*; + +fn execute_l2_transaction_with_traces() -> TransactionExecutionResult { + let first_call_trace = Call { + from: Address::repeat_byte(1), + to: Address::repeat_byte(2), + gas: 100, + gas_used: 42, + ..Call::default() + }; + let second_call_trace = Call { + from: Address::repeat_byte(0xff), + to: Address::repeat_byte(0xab), + value: 123.into(), + gas: 58, + gas_used: 10, + input: b"input".to_vec(), + output: b"output".to_vec(), + ..Call::default() + }; + TransactionExecutionResult { + call_traces: vec![first_call_trace, second_call_trace], + ..execute_l2_transaction(create_l2_transaction(1, 2)) + } +} + +#[derive(Debug)] +struct TraceBlockTest(MiniblockNumber); + +#[async_trait] +impl HttpTest for TraceBlockTest { + async fn test(&self, client: &HttpClient, pool: &ConnectionPool) -> anyhow::Result<()> { + let tx_results = [execute_l2_transaction_with_traces()]; + let mut storage = pool.access_storage().await?; + let new_miniblock = store_miniblock(&mut storage, self.0, &tx_results).await?; + drop(storage); + + let block_ids = [ + api::BlockId::Number((*self.0).into()), + api::BlockId::Number(api::BlockNumber::Latest), + api::BlockId::Hash(new_miniblock.hash), + ]; + let expected_calls: Vec<_> = tx_results[0] + .call_traces + .iter() + .map(|call| api::DebugCall::from(call.clone())) + .collect(); + + for block_id in block_ids { + let block_traces = match block_id { + api::BlockId::Number(number) => client.trace_block_by_number(number, None).await?, + api::BlockId::Hash(hash) => client.trace_block_by_hash(hash, None).await?, + }; + + assert_eq!(block_traces.len(), 1); // equals to the number of transactions in the block + let api::ResultDebugCall { result } = &block_traces[0]; + assert_eq!(result.from, Address::zero()); + assert_eq!(result.to, BOOTLOADER_ADDRESS); + assert_eq!(result.gas, tx_results[0].transaction.gas_limit()); + assert_eq!(result.calls, expected_calls); + } + + let missing_block_number = api::BlockNumber::from(*self.0 + 100); + let error = client + .trace_block_by_number(missing_block_number, None) + .await + .unwrap_err(); + if let ClientError::Call(error) = error { + assert_eq!(error.code(), ErrorCode::InvalidParams.code()); + assert!( + error.message().contains("Block") && error.message().contains("doesn't exist"), + "{error:?}" + ); + assert!(error.data().is_none(), "{error:?}"); + } else { + panic!("Unexpected error: {error:?}"); + } + + Ok(()) + } +} + +#[tokio::test] +async fn tracing_block() { + test_http_server(TraceBlockTest(MiniblockNumber(1))).await; +} + +#[derive(Debug)] +struct TraceTransactionTest; + +#[async_trait] +impl HttpTest for TraceTransactionTest { + async fn test(&self, client: &HttpClient, pool: &ConnectionPool) -> anyhow::Result<()> { + let tx_results = [execute_l2_transaction_with_traces()]; + let mut storage = pool.access_storage().await?; + store_miniblock(&mut storage, MiniblockNumber(1), &tx_results).await?; + drop(storage); + + let expected_calls: Vec<_> = tx_results[0] + .call_traces + .iter() + .map(|call| api::DebugCall::from(call.clone())) + .collect(); + + let result = client + .trace_transaction(tx_results[0].hash, None) + .await? 
+ .context("no transaction traces")?; + assert_eq!(result.from, Address::zero()); + assert_eq!(result.to, BOOTLOADER_ADDRESS); + assert_eq!(result.gas, tx_results[0].transaction.gas_limit()); + assert_eq!(result.calls, expected_calls); + + Ok(()) + } +} + +#[tokio::test] +async fn tracing_transaction() { + test_http_server(TraceTransactionTest).await; +} + +#[derive(Debug)] +struct TraceBlockTestWithSnapshotRecovery; + +#[async_trait] +impl HttpTest for TraceBlockTestWithSnapshotRecovery { + fn storage_initialization(&self) -> StorageInitialization { + StorageInitialization::empty_recovery() + } + + async fn test(&self, client: &HttpClient, pool: &ConnectionPool) -> anyhow::Result<()> { + let snapshot_miniblock_number = + MiniblockNumber(StorageInitialization::SNAPSHOT_RECOVERY_BLOCK); + let missing_miniblock_numbers = [ + MiniblockNumber(0), + snapshot_miniblock_number - 1, + snapshot_miniblock_number, + ]; + + for number in missing_miniblock_numbers { + let error = client + .trace_block_by_number(number.0.into(), None) + .await + .unwrap_err(); + assert_pruned_block_error(&error, 24); + } + + TraceBlockTest(snapshot_miniblock_number + 1) + .test(client, pool) + .await?; + Ok(()) + } +} + +#[tokio::test] +async fn tracing_block_after_snapshot_recovery() { + test_http_server(TraceBlockTestWithSnapshotRecovery).await; +} diff --git a/core/lib/zksync_core/src/api_server/web3/tests/filters.rs b/core/lib/zksync_core/src/api_server/web3/tests/filters.rs new file mode 100644 index 00000000000..3c21be1b4be --- /dev/null +++ b/core/lib/zksync_core/src/api_server/web3/tests/filters.rs @@ -0,0 +1,261 @@ +//! Tests for filter-related methods in the `eth` namespace. + +use zksync_web3_decl::{jsonrpsee::core::ClientError as RpcError, types::FilterChanges}; + +use super::*; + +#[derive(Debug)] +struct BasicFilterChangesTest { + snapshot_recovery: bool, +} + +#[async_trait] +impl HttpTest for BasicFilterChangesTest { + fn storage_initialization(&self) -> StorageInitialization { + if self.snapshot_recovery { + StorageInitialization::empty_recovery() + } else { + StorageInitialization::Genesis + } + } + + async fn test(&self, client: &HttpClient, pool: &ConnectionPool) -> anyhow::Result<()> { + let block_filter_id = client.new_block_filter().await?; + let tx_filter_id = client.new_pending_transaction_filter().await?; + let tx_result = execute_l2_transaction(create_l2_transaction(1, 2)); + let new_tx_hash = tx_result.hash; + let new_miniblock = store_miniblock( + &mut pool.access_storage().await?, + MiniblockNumber(if self.snapshot_recovery { + StorageInitialization::SNAPSHOT_RECOVERY_BLOCK + 1 + } else { + 1 + }), + &[tx_result], + ) + .await?; + + let block_filter_changes = client.get_filter_changes(block_filter_id).await?; + assert_matches!( + block_filter_changes, + FilterChanges::Hashes(hashes) if hashes == [new_miniblock.hash] + ); + let block_filter_changes = client.get_filter_changes(block_filter_id).await?; + assert_matches!(block_filter_changes, FilterChanges::Hashes(hashes) if hashes.is_empty()); + + let tx_filter_changes = client.get_filter_changes(tx_filter_id).await?; + assert_matches!( + tx_filter_changes, + FilterChanges::Hashes(hashes) if hashes == [new_tx_hash] + ); + let tx_filter_changes = client.get_filter_changes(tx_filter_id).await?; + assert_matches!(tx_filter_changes, FilterChanges::Hashes(hashes) if hashes.is_empty()); + + // Check uninstalling the filter. 
+ let removed = client.uninstall_filter(block_filter_id).await?; + assert!(removed); + let removed = client.uninstall_filter(block_filter_id).await?; + assert!(!removed); + + let err = client + .get_filter_changes(block_filter_id) + .await + .unwrap_err(); + assert_matches!(err, RpcError::Call(err) if err.code() == ErrorCode::InvalidParams.code()); + Ok(()) + } +} + +#[tokio::test] +async fn basic_filter_changes() { + test_http_server(BasicFilterChangesTest { + snapshot_recovery: false, + }) + .await; +} + +#[tokio::test] +async fn basic_filter_changes_after_snapshot_recovery() { + test_http_server(BasicFilterChangesTest { + snapshot_recovery: true, + }) + .await; +} + +#[derive(Debug)] +struct LogFilterChangesTest { + snapshot_recovery: bool, +} + +#[async_trait] +impl HttpTest for LogFilterChangesTest { + fn storage_initialization(&self) -> StorageInitialization { + if self.snapshot_recovery { + StorageInitialization::empty_recovery() + } else { + StorageInitialization::Genesis + } + } + + async fn test(&self, client: &HttpClient, pool: &ConnectionPool) -> anyhow::Result<()> { + let all_logs_filter_id = client.new_filter(Filter::default()).await?; + let address_filter = Filter { + address: Some(Address::repeat_byte(23).into()), + ..Filter::default() + }; + let address_filter_id = client.new_filter(address_filter).await?; + let topics_filter = Filter { + topics: Some(vec![Some(H256::repeat_byte(42).into())]), + ..Filter::default() + }; + let topics_filter_id = client.new_filter(topics_filter).await?; + + let mut storage = pool.access_storage().await?; + let first_local_miniblock = if self.snapshot_recovery { + StorageInitialization::SNAPSHOT_RECOVERY_BLOCK + 1 + } else { + 1 + }; + let (_, events) = store_events(&mut storage, first_local_miniblock, 0).await?; + drop(storage); + let events: Vec<_> = events.iter().collect(); + + let all_logs = client.get_filter_changes(all_logs_filter_id).await?; + let FilterChanges::Logs(all_logs) = all_logs else { + panic!("Unexpected getFilterChanges output: {:?}", all_logs); + }; + assert_logs_match(&all_logs, &events); + + let address_logs = client.get_filter_changes(address_filter_id).await?; + let FilterChanges::Logs(address_logs) = address_logs else { + panic!("Unexpected getFilterChanges output: {:?}", address_logs); + }; + assert_logs_match(&address_logs, &[events[0], events[3]]); + + let topics_logs = client.get_filter_changes(topics_filter_id).await?; + let FilterChanges::Logs(topics_logs) = topics_logs else { + panic!("Unexpected getFilterChanges output: {:?}", topics_logs); + }; + assert_logs_match(&topics_logs, &[events[1], events[3]]); + + let new_all_logs = client.get_filter_changes(all_logs_filter_id).await?; + let FilterChanges::Hashes(new_all_logs) = new_all_logs else { + panic!("Unexpected getFilterChanges output: {:?}", new_all_logs); + }; + assert!(new_all_logs.is_empty()); + Ok(()) + } +} + +#[tokio::test] +async fn log_filter_changes() { + test_http_server(LogFilterChangesTest { + snapshot_recovery: false, + }) + .await; +} + +#[tokio::test] +async fn log_filter_changes_after_snapshot_recovery() { + test_http_server(LogFilterChangesTest { + snapshot_recovery: true, + }) + .await; +} + +#[derive(Debug)] +struct LogFilterChangesWithBlockBoundariesTest; + +#[async_trait] +impl HttpTest for LogFilterChangesWithBlockBoundariesTest { + async fn test(&self, client: &HttpClient, pool: &ConnectionPool) -> anyhow::Result<()> { + let lower_bound_filter = Filter { + from_block: Some(api::BlockNumber::Number(2.into())), + ..Filter::default() + 
}; + let lower_bound_filter_id = client.new_filter(lower_bound_filter).await?; + let upper_bound_filter = Filter { + to_block: Some(api::BlockNumber::Number(1.into())), + ..Filter::default() + }; + let upper_bound_filter_id = client.new_filter(upper_bound_filter).await?; + let bounded_filter = Filter { + from_block: Some(api::BlockNumber::Number(1.into())), + to_block: Some(api::BlockNumber::Number(1.into())), + ..Filter::default() + }; + let bounded_filter_id = client.new_filter(bounded_filter).await?; + + let mut storage = pool.access_storage().await?; + let (_, events) = store_events(&mut storage, 1, 0).await?; + drop(storage); + let events: Vec<_> = events.iter().collect(); + + let lower_bound_logs = client.get_filter_changes(lower_bound_filter_id).await?; + assert_matches!( + lower_bound_logs, + FilterChanges::Hashes(hashes) if hashes.is_empty() + ); + // ^ Since `FilterChanges` is serialized w/o a tag, an empty array will be deserialized + // as `Hashes(_)` (the first declared variant). + + let upper_bound_logs = client.get_filter_changes(upper_bound_filter_id).await?; + let FilterChanges::Logs(upper_bound_logs) = upper_bound_logs else { + panic!("Unexpected getFilterChanges output: {:?}", upper_bound_logs); + }; + assert_logs_match(&upper_bound_logs, &events); + let bounded_logs = client.get_filter_changes(bounded_filter_id).await?; + let FilterChanges::Logs(bounded_logs) = bounded_logs else { + panic!("Unexpected getFilterChanges output: {:?}", bounded_logs); + }; + assert_eq!(bounded_logs, upper_bound_logs); + + // Add another miniblock with events to the storage. + let mut storage = pool.access_storage().await?; + let (_, new_events) = store_events(&mut storage, 2, 4).await?; + drop(storage); + let new_events: Vec<_> = new_events.iter().collect(); + + let lower_bound_logs = client.get_filter_changes(lower_bound_filter_id).await?; + let FilterChanges::Logs(lower_bound_logs) = lower_bound_logs else { + panic!("Unexpected getFilterChanges output: {:?}", lower_bound_logs); + }; + assert_logs_match(&lower_bound_logs, &new_events); + + let new_upper_bound_logs = client.get_filter_changes(upper_bound_filter_id).await?; + assert_matches!(new_upper_bound_logs, FilterChanges::Hashes(hashes) if hashes.is_empty()); + let new_bounded_logs = client.get_filter_changes(bounded_filter_id).await?; + assert_matches!(new_bounded_logs, FilterChanges::Hashes(hashes) if hashes.is_empty()); + + // Add miniblock #3. It should not be picked up by the bounded and upper bound filters, + // and should be picked up by the lower bound filter. 
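+        // As with the first poll above, "no new logs" is observed as an empty
+        // `FilterChanges::Hashes(_)`, since the untagged enum deserializes an empty
+        // array into its first declared variant.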
+ let mut storage = pool.access_storage().await?; + let (_, new_events) = store_events(&mut storage, 3, 8).await?; + drop(storage); + let new_events: Vec<_> = new_events.iter().collect(); + + let bounded_logs = client.get_filter_changes(bounded_filter_id).await?; + let FilterChanges::Hashes(bounded_logs) = bounded_logs else { + panic!("Unexpected getFilterChanges output: {:?}", bounded_logs); + }; + assert!(bounded_logs.is_empty()); + + let upper_bound_logs = client.get_filter_changes(upper_bound_filter_id).await?; + let FilterChanges::Hashes(upper_bound_logs) = upper_bound_logs else { + panic!("Unexpected getFilterChanges output: {:?}", upper_bound_logs); + }; + assert!(upper_bound_logs.is_empty()); + + let lower_bound_logs = client.get_filter_changes(lower_bound_filter_id).await?; + let FilterChanges::Logs(lower_bound_logs) = lower_bound_logs else { + panic!("Unexpected getFilterChanges output: {:?}", lower_bound_logs); + }; + assert_logs_match(&lower_bound_logs, &new_events); + Ok(()) + } +} + +#[tokio::test] +async fn log_filter_changes_with_block_boundaries() { + test_http_server(LogFilterChangesWithBlockBoundariesTest).await; +} diff --git a/core/lib/zksync_core/src/api_server/web3/tests/mod.rs b/core/lib/zksync_core/src/api_server/web3/tests/mod.rs new file mode 100644 index 00000000000..1cfd6af269f --- /dev/null +++ b/core/lib/zksync_core/src/api_server/web3/tests/mod.rs @@ -0,0 +1,811 @@ +use std::{collections::HashMap, time::Instant}; + +use assert_matches::assert_matches; +use async_trait::async_trait; +use jsonrpsee::core::ClientError; +use tokio::sync::watch; +use zksync_config::configs::{ + api::Web3JsonRpcConfig, + chain::{NetworkConfig, StateKeeperConfig}, + ContractsConfig, +}; +use zksync_dal::{transactions_dal::L2TxSubmissionResult, ConnectionPool, StorageProcessor}; +use zksync_health_check::CheckHealth; +use zksync_types::{ + api, + block::{BlockGasCount, MiniblockHeader}, + fee::TransactionExecutionMetrics, + get_nonce_key, + l2::L2Tx, + storage::get_code_key, + tx::{ + tx_execution_info::TxExecutionStatus, ExecutionMetrics, IncludedTxLocation, + TransactionExecutionResult, + }, + utils::storage_key_for_eth_balance, + AccountTreeId, Address, L1BatchNumber, Nonce, StorageKey, StorageLog, VmEvent, H256, U64, +}; +use zksync_web3_decl::{ + jsonrpsee::{http_client::HttpClient, types::error::ErrorCode}, + namespaces::{EthNamespaceClient, ZksNamespaceClient}, +}; + +use super::{metrics::ApiTransportLabel, *}; +use crate::{ + api_server::{ + execution_sandbox::testonly::MockTransactionExecutor, + tx_sender::tests::create_test_tx_sender, + }, + genesis::{ensure_genesis_state, GenesisParams}, + utils::testonly::{ + create_l1_batch, create_l1_batch_metadata, create_l2_transaction, create_miniblock, + prepare_empty_recovery_snapshot, prepare_recovery_snapshot, + }, +}; + +mod debug; +mod filters; +mod snapshots; +mod vm; +mod ws; + +const TEST_TIMEOUT: Duration = Duration::from_secs(10); +const POLL_INTERVAL: Duration = Duration::from_millis(50); + +impl ApiServerHandles { + /// Waits until the server health check reports the ready state. 
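+    ///
+    /// Panics if the server does not become ready within `TEST_TIMEOUT`. Typical usage,
+    /// mirroring `test_http_server` below:
+    ///
+    /// ```ignore
+    /// let server_handles = spawn_http_server(&network_config, pool, tx_executor, stop_receiver).await;
+    /// server_handles.wait_until_ready().await;
+    /// ```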
+ pub(crate) async fn wait_until_ready(&self) { + let started_at = Instant::now(); + loop { + assert!( + started_at.elapsed() <= TEST_TIMEOUT, + "Timed out waiting for API server" + ); + let health = self.health_check.check_health().await; + if health.status().is_ready() { + break; + } + tokio::time::sleep(POLL_INTERVAL).await; + } + } + + pub(crate) async fn shutdown(self) { + let stop_server = async { + for task in self.tasks { + match task.await { + Ok(Ok(())) => { /* Task successfully completed */ } + Err(err) if err.is_cancelled() => { + // Task was canceled since the server runtime which runs the task was dropped. + // This is fine. + } + Err(err) => panic!("Server task panicked: {err:?}"), + Ok(Err(err)) => panic!("Server task failed: {err:?}"), + } + } + }; + tokio::time::timeout(TEST_TIMEOUT, stop_server) + .await + .expect(format!("panicking at {}", chrono::Utc::now()).as_str()); + } +} + +pub(crate) async fn spawn_http_server( + network_config: &NetworkConfig, + pool: ConnectionPool, + tx_executor: MockTransactionExecutor, + stop_receiver: watch::Receiver, +) -> ApiServerHandles { + spawn_server( + ApiTransportLabel::Http, + network_config, + pool, + None, + tx_executor, + stop_receiver, + ) + .await + .0 +} + +async fn spawn_ws_server( + network_config: &NetworkConfig, + pool: ConnectionPool, + stop_receiver: watch::Receiver, + websocket_requests_per_minute_limit: Option, +) -> (ApiServerHandles, mpsc::UnboundedReceiver) { + spawn_server( + ApiTransportLabel::Ws, + network_config, + pool, + websocket_requests_per_minute_limit, + MockTransactionExecutor::default(), + stop_receiver, + ) + .await +} + +async fn spawn_server( + transport: ApiTransportLabel, + network_config: &NetworkConfig, + pool: ConnectionPool, + websocket_requests_per_minute_limit: Option, + tx_executor: MockTransactionExecutor, + stop_receiver: watch::Receiver, +) -> (ApiServerHandles, mpsc::UnboundedReceiver) { + let contracts_config = ContractsConfig::for_tests(); + let web3_config = Web3JsonRpcConfig::for_tests(); + let api_config = InternalApiConfig::new(network_config, &web3_config, &contracts_config); + let (tx_sender, vm_barrier) = + create_test_tx_sender(pool.clone(), api_config.l2_chain_id, tx_executor.into()).await; + let (pub_sub_events_sender, pub_sub_events_receiver) = mpsc::unbounded_channel(); + + let mut namespaces = Namespace::DEFAULT.to_vec(); + namespaces.extend([Namespace::Debug, Namespace::Snapshots]); + + let server_builder = match transport { + ApiTransportLabel::Http => ApiBuilder::jsonrpsee_backend(api_config, pool).http(0), + ApiTransportLabel::Ws => { + let mut builder = ApiBuilder::jsonrpsee_backend(api_config, pool) + .ws(0) + .with_subscriptions_limit(100); + if let Some(websocket_requests_per_minute_limit) = websocket_requests_per_minute_limit { + builder = builder + .with_websocket_requests_per_minute_limit(websocket_requests_per_minute_limit); + } + builder + } + }; + let server_handles = server_builder + .with_polling_interval(POLL_INTERVAL) + .with_tx_sender(tx_sender, vm_barrier) + .with_pub_sub_events(pub_sub_events_sender) + .enable_api_namespaces(namespaces) + .build(stop_receiver) + .await + .expect("Failed spawning JSON-RPC server"); + (server_handles, pub_sub_events_receiver) +} + +#[async_trait] +trait HttpTest: Send + Sync { + /// Prepares the storage before the server is started. The default implementation performs genesis. 
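+    ///
+    /// Tests exercising snapshot recovery override this method, e.g.:
+    ///
+    /// ```ignore
+    /// fn storage_initialization(&self) -> StorageInitialization {
+    ///     StorageInitialization::empty_recovery()
+    /// }
+    /// ```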
+    fn storage_initialization(&self) -> StorageInitialization {
+        StorageInitialization::Genesis
+    }
+
+    fn transaction_executor(&self) -> MockTransactionExecutor {
+        MockTransactionExecutor::default()
+    }
+
+    async fn test(&self, client: &HttpClient, pool: &ConnectionPool) -> anyhow::Result<()>;
+}
+
+/// Storage initialization strategy.
+#[derive(Debug)]
+enum StorageInitialization {
+    Genesis,
+    Recovery {
+        logs: Vec<StorageLog>,
+        factory_deps: HashMap<H256, Vec<u8>>,
+    },
+}
+
+impl StorageInitialization {
+    const SNAPSHOT_RECOVERY_BLOCK: u32 = 23;
+
+    fn empty_recovery() -> Self {
+        Self::Recovery {
+            logs: vec![],
+            factory_deps: HashMap::new(),
+        }
+    }
+
+    async fn prepare_storage(
+        &self,
+        network_config: &NetworkConfig,
+        storage: &mut StorageProcessor<'_>,
+    ) -> anyhow::Result<()> {
+        match self {
+            Self::Genesis => {
+                if storage.blocks_dal().is_genesis_needed().await? {
+                    ensure_genesis_state(
+                        storage,
+                        network_config.zksync_network_id,
+                        &GenesisParams::mock(),
+                    )
+                    .await?;
+                }
+            }
+            Self::Recovery { logs, factory_deps } if logs.is_empty() && factory_deps.is_empty() => {
+                prepare_empty_recovery_snapshot(storage, Self::SNAPSHOT_RECOVERY_BLOCK).await;
+            }
+            Self::Recovery { logs, factory_deps } => {
+                prepare_recovery_snapshot(storage, Self::SNAPSHOT_RECOVERY_BLOCK, logs).await;
+                storage
+                    .storage_dal()
+                    .insert_factory_deps(
+                        MiniblockNumber(Self::SNAPSHOT_RECOVERY_BLOCK),
+                        factory_deps,
+                    )
+                    .await;
+            }
+        }
+        Ok(())
+    }
+}
+
+async fn test_http_server(test: impl HttpTest) {
+    let pool = ConnectionPool::test_pool().await;
+    let network_config = NetworkConfig::for_tests();
+    let mut storage = pool.access_storage().await.unwrap();
+    test.storage_initialization()
+        .prepare_storage(&network_config, &mut storage)
+        .await
+        .expect("Failed preparing storage for test");
+    drop(storage);
+
+    let (stop_sender, stop_receiver) = watch::channel(false);
+    let server_handles = spawn_http_server(
+        &network_config,
+        pool.clone(),
+        test.transaction_executor(),
+        stop_receiver,
+    )
+    .await;
+    server_handles.wait_until_ready().await;
+
+    let client = <HttpClient>::builder()
+        .build(format!("http://{}/", server_handles.local_addr))
+        .unwrap();
+    test.test(&client, &pool).await.unwrap();
+
+    stop_sender.send_replace(true);
+    server_handles.shutdown().await;
+}
+
+fn assert_logs_match(actual_logs: &[api::Log], expected_logs: &[&VmEvent]) {
+    assert_eq!(actual_logs.len(), expected_logs.len());
+    for (actual_log, &expected_log) in actual_logs.iter().zip(expected_logs) {
+        assert_eq!(actual_log.address, expected_log.address);
+        assert_eq!(actual_log.topics, expected_log.indexed_topics);
+        assert_eq!(actual_log.data.0, expected_log.value);
+    }
+}
+
+fn execute_l2_transaction(transaction: L2Tx) -> TransactionExecutionResult {
+    TransactionExecutionResult {
+        hash: transaction.hash(),
+        transaction: transaction.into(),
+        execution_info: ExecutionMetrics::default(),
+        execution_status: TxExecutionStatus::Success,
+        refunded_gas: 0,
+        operator_suggested_refund: 0,
+        compressed_bytecodes: vec![],
+        call_traces: vec![],
+        revert_reason: None,
+    }
+}
+
+/// Stores a miniblock with the given number and transaction results, returning the new
+/// miniblock header.
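+///
+/// A typical call site, as used throughout these tests:
+///
+/// ```ignore
+/// let mut storage = pool.access_storage().await?;
+/// let tx_results = [execute_l2_transaction(create_l2_transaction(1, 2))];
+/// let new_miniblock = store_miniblock(&mut storage, MiniblockNumber(1), &tx_results).await?;
+/// drop(storage);
+/// ```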
+async fn store_miniblock( + storage: &mut StorageProcessor<'_>, + number: MiniblockNumber, + transaction_results: &[TransactionExecutionResult], +) -> anyhow::Result { + for result in transaction_results { + let l2_tx = result.transaction.clone().try_into().unwrap(); + let tx_submission_result = storage + .transactions_dal() + .insert_transaction_l2(l2_tx, TransactionExecutionMetrics::default()) + .await; + assert_matches!(tx_submission_result, L2TxSubmissionResult::Added); + } + + let new_miniblock = create_miniblock(number.0); + storage + .blocks_dal() + .insert_miniblock(&new_miniblock) + .await?; + storage + .transactions_dal() + .mark_txs_as_executed_in_miniblock(new_miniblock.number, transaction_results, 1.into()) + .await; + Ok(new_miniblock) +} + +async fn seal_l1_batch( + storage: &mut StorageProcessor<'_>, + number: L1BatchNumber, +) -> anyhow::Result<()> { + let header = create_l1_batch(number.0); + storage + .blocks_dal() + .insert_l1_batch(&header, &[], BlockGasCount::default(), &[], &[], 0) + .await?; + storage + .blocks_dal() + .mark_miniblocks_as_executed_in_l1_batch(number) + .await?; + let metadata = create_l1_batch_metadata(number.0); + storage + .blocks_dal() + .save_l1_batch_metadata(number, &metadata, H256::zero(), false) + .await?; + Ok(()) +} + +async fn store_events( + storage: &mut StorageProcessor<'_>, + miniblock_number: u32, + start_idx: u32, +) -> anyhow::Result<(IncludedTxLocation, Vec)> { + let new_miniblock = create_miniblock(miniblock_number); + let l1_batch_number = L1BatchNumber(miniblock_number); + storage + .blocks_dal() + .insert_miniblock(&new_miniblock) + .await?; + let tx_location = IncludedTxLocation { + tx_hash: H256::repeat_byte(1), + tx_index_in_miniblock: 0, + tx_initiator_address: Address::repeat_byte(2), + }; + let events = vec![ + // Matches address, doesn't match topics + VmEvent { + location: (l1_batch_number, start_idx), + address: Address::repeat_byte(23), + indexed_topics: vec![], + value: start_idx.to_le_bytes().to_vec(), + }, + // Doesn't match address, matches topics + VmEvent { + location: (l1_batch_number, start_idx + 1), + address: Address::zero(), + indexed_topics: vec![H256::repeat_byte(42)], + value: (start_idx + 1).to_le_bytes().to_vec(), + }, + // Doesn't match address or topics + VmEvent { + location: (l1_batch_number, start_idx + 2), + address: Address::zero(), + indexed_topics: vec![H256::repeat_byte(1), H256::repeat_byte(42)], + value: (start_idx + 2).to_le_bytes().to_vec(), + }, + // Matches both address and topics + VmEvent { + location: (l1_batch_number, start_idx + 3), + address: Address::repeat_byte(23), + indexed_topics: vec![H256::repeat_byte(42), H256::repeat_byte(111)], + value: (start_idx + 3).to_le_bytes().to_vec(), + }, + ]; + storage + .events_dal() + .save_events( + MiniblockNumber(miniblock_number), + &[(tx_location, events.iter().collect())], + ) + .await; + Ok((tx_location, events)) +} + +#[derive(Debug)] +struct HttpServerBasicsTest; + +#[async_trait] +impl HttpTest for HttpServerBasicsTest { + async fn test(&self, client: &HttpClient, _pool: &ConnectionPool) -> anyhow::Result<()> { + let block_number = client.get_block_number().await?; + assert_eq!(block_number, U64::from(0)); + + let l1_batch_number = client.get_l1_batch_number().await?; + assert_eq!(l1_batch_number, U64::from(0)); + + let genesis_l1_batch = client + .get_l1_batch_details(L1BatchNumber(0)) + .await? 
+ .context("No genesis L1 batch")?; + assert!(genesis_l1_batch.base.root_hash.is_some()); + Ok(()) + } +} + +#[tokio::test] +async fn http_server_basics() { + test_http_server(HttpServerBasicsTest).await; +} + +#[derive(Debug)] +struct BlockMethodsWithSnapshotRecovery; + +#[async_trait] +impl HttpTest for BlockMethodsWithSnapshotRecovery { + fn storage_initialization(&self) -> StorageInitialization { + StorageInitialization::empty_recovery() + } + + async fn test(&self, client: &HttpClient, pool: &ConnectionPool) -> anyhow::Result<()> { + let error = client.get_block_number().await.unwrap_err(); + if let ClientError::Call(error) = error { + assert_eq!(error.code(), ErrorCode::InvalidParams.code()); + } else { + panic!("Unexpected error: {error:?}"); + } + + let block = client + .get_block_by_number(api::BlockNumber::Latest, false) + .await?; + assert!(block.is_none()); + let block = client.get_block_by_number(1_000.into(), false).await?; + assert!(block.is_none()); + + let mut storage = pool.access_storage().await?; + store_miniblock(&mut storage, MiniblockNumber(24), &[]).await?; + drop(storage); + + let block_number = client.get_block_number().await?; + let expected_block_number = StorageInitialization::SNAPSHOT_RECOVERY_BLOCK + 1; + assert_eq!(block_number, expected_block_number.into()); + + for block_number in [api::BlockNumber::Latest, expected_block_number.into()] { + let block = client + .get_block_by_number(block_number, false) + .await? + .context("no latest block")?; + assert_eq!(block.number, expected_block_number.into()); + } + + for number in [0, 1, expected_block_number - 1] { + let error = client + .get_block_details(MiniblockNumber(number)) + .await + .unwrap_err(); + assert_pruned_block_error(&error, expected_block_number); + let error = client + .get_raw_block_transactions(MiniblockNumber(number)) + .await + .unwrap_err(); + assert_pruned_block_error(&error, expected_block_number); + + let error = client + .get_block_transaction_count_by_number(number.into()) + .await + .unwrap_err(); + assert_pruned_block_error(&error, expected_block_number); + let error = client + .get_block_by_number(number.into(), false) + .await + .unwrap_err(); + assert_pruned_block_error(&error, expected_block_number); + } + + Ok(()) + } +} + +fn assert_pruned_block_error(error: &ClientError, first_retained_block: u32) { + if let ClientError::Call(error) = error { + assert_eq!(error.code(), ErrorCode::InvalidParams.code()); + assert!( + error + .message() + .contains(&format!("first retained block is {first_retained_block}")), + "{error:?}" + ); + assert!(error.data().is_none(), "{error:?}"); + } else { + panic!("Unexpected error: {error:?}"); + } +} + +#[tokio::test] +async fn block_methods_with_snapshot_recovery() { + test_http_server(BlockMethodsWithSnapshotRecovery).await; +} + +#[derive(Debug)] +struct L1BatchMethodsWithSnapshotRecovery; + +#[async_trait] +impl HttpTest for L1BatchMethodsWithSnapshotRecovery { + fn storage_initialization(&self) -> StorageInitialization { + StorageInitialization::empty_recovery() + } + + async fn test(&self, client: &HttpClient, pool: &ConnectionPool) -> anyhow::Result<()> { + let error = client.get_l1_batch_number().await.unwrap_err(); + if let ClientError::Call(error) = error { + assert_eq!(error.code(), ErrorCode::InvalidParams.code()); + } else { + panic!("Unexpected error: {error:?}"); + } + + let mut storage = pool.access_storage().await?; + let miniblock_number = StorageInitialization::SNAPSHOT_RECOVERY_BLOCK + 1; + store_miniblock(&mut storage, 
MiniblockNumber(miniblock_number), &[]).await?; + seal_l1_batch(&mut storage, L1BatchNumber(miniblock_number)).await?; + drop(storage); + + let l1_batch_number = client.get_l1_batch_number().await?; + assert_eq!(l1_batch_number, miniblock_number.into()); + + // `get_miniblock_range` method + let miniblock_range = client + .get_miniblock_range(L1BatchNumber(miniblock_number)) + .await? + .context("no range for sealed L1 batch")?; + assert_eq!(miniblock_range.0, miniblock_number.into()); + assert_eq!(miniblock_range.1, miniblock_number.into()); + + let miniblock_range_for_future_batch = client + .get_miniblock_range(L1BatchNumber(miniblock_number) + 1) + .await?; + assert_eq!(miniblock_range_for_future_batch, None); + + let error = client + .get_miniblock_range(L1BatchNumber(miniblock_number) - 1) + .await + .unwrap_err(); + assert_pruned_l1_batch_error(&error, miniblock_number); + + // `get_l1_batch_details` method + let details = client + .get_l1_batch_details(L1BatchNumber(miniblock_number)) + .await? + .context("no details for sealed L1 batch")?; + assert_eq!(details.number, L1BatchNumber(miniblock_number)); + + let details_for_future_batch = client + .get_l1_batch_details(L1BatchNumber(miniblock_number) + 1) + .await?; + assert!( + details_for_future_batch.is_none(), + "{details_for_future_batch:?}" + ); + + let error = client + .get_l1_batch_details(L1BatchNumber(miniblock_number) - 1) + .await + .unwrap_err(); + assert_pruned_l1_batch_error(&error, miniblock_number); + + Ok(()) + } +} + +fn assert_pruned_l1_batch_error(error: &ClientError, first_retained_l1_batch: u32) { + if let ClientError::Call(error) = error { + assert_eq!(error.code(), ErrorCode::InvalidParams.code()); + assert!( + error.message().contains(&format!( + "first retained L1 batch is {first_retained_l1_batch}" + )), + "{error:?}" + ); + assert!(error.data().is_none(), "{error:?}"); + } else { + panic!("Unexpected error: {error:?}"); + } +} + +#[tokio::test] +async fn l1_batch_methods_with_snapshot_recovery() { + test_http_server(L1BatchMethodsWithSnapshotRecovery).await; +} + +#[derive(Debug)] +struct StorageAccessWithSnapshotRecovery; + +#[async_trait] +impl HttpTest for StorageAccessWithSnapshotRecovery { + fn storage_initialization(&self) -> StorageInitialization { + let address = Address::repeat_byte(1); + let code_key = get_code_key(&address); + let code_hash = H256::repeat_byte(2); + let balance_key = storage_key_for_eth_balance(&address); + let logs = vec![ + StorageLog::new_write_log(code_key, code_hash), + StorageLog::new_write_log(balance_key, H256::from_low_u64_be(123)), + StorageLog::new_write_log( + StorageKey::new(AccountTreeId::new(address), H256::zero()), + H256::repeat_byte(0xff), + ), + ]; + let factory_deps = [(code_hash, b"code".to_vec())].into(); + StorageInitialization::Recovery { logs, factory_deps } + } + + async fn test(&self, client: &HttpClient, pool: &ConnectionPool) -> anyhow::Result<()> { + let mut storage = pool.access_storage().await?; + + let address = Address::repeat_byte(1); + let first_local_miniblock = StorageInitialization::SNAPSHOT_RECOVERY_BLOCK + 1; + for number in [0, 1, first_local_miniblock - 1] { + let number = api::BlockIdVariant::BlockNumber(number.into()); + let error = client.get_code(address, Some(number)).await.unwrap_err(); + assert_pruned_block_error(&error, first_local_miniblock); + let error = client.get_balance(address, Some(number)).await.unwrap_err(); + assert_pruned_block_error(&error, first_local_miniblock); + let error = client + .get_storage_at(address, 
0.into(), Some(number)) + .await + .unwrap_err(); + assert_pruned_block_error(&error, 24); + } + + store_miniblock(&mut storage, MiniblockNumber(first_local_miniblock), &[]).await?; + drop(storage); + + for number in [api::BlockNumber::Latest, first_local_miniblock.into()] { + let number = api::BlockIdVariant::BlockNumber(number); + let code = client.get_code(address, Some(number)).await?; + assert_eq!(code.0, b"code"); + let balance = client.get_balance(address, Some(number)).await?; + assert_eq!(balance, 123.into()); + let storage_value = client + .get_storage_at(address, 0.into(), Some(number)) + .await?; + assert_eq!(storage_value, H256::repeat_byte(0xff)); + } + Ok(()) + } +} + +#[tokio::test] +async fn storage_access_with_snapshot_recovery() { + test_http_server(StorageAccessWithSnapshotRecovery).await; +} + +#[derive(Debug)] +struct TransactionCountTest; + +#[async_trait] +impl HttpTest for TransactionCountTest { + async fn test(&self, client: &HttpClient, pool: &ConnectionPool) -> anyhow::Result<()> { + let test_address = Address::repeat_byte(11); + let mut storage = pool.access_storage().await?; + let mut miniblock_number = MiniblockNumber(0); + for nonce in [0, 1] { + let mut committed_tx = create_l2_transaction(10, 200); + committed_tx.common_data.initiator_address = test_address; + committed_tx.common_data.nonce = Nonce(nonce); + miniblock_number += 1; + store_miniblock( + &mut storage, + miniblock_number, + &[execute_l2_transaction(committed_tx)], + ) + .await?; + let nonce_log = StorageLog::new_write_log( + get_nonce_key(&test_address), + H256::from_low_u64_be((nonce + 1).into()), + ); + storage + .storage_logs_dal() + .insert_storage_logs(miniblock_number, &[(H256::zero(), vec![nonce_log])]) + .await; + } + + let pending_count = client.get_transaction_count(test_address, None).await?; + assert_eq!(pending_count, 2.into()); + + let mut pending_tx = create_l2_transaction(10, 200); + pending_tx.common_data.initiator_address = test_address; + pending_tx.common_data.nonce = Nonce(2); + storage + .transactions_dal() + .insert_transaction_l2(pending_tx, TransactionExecutionMetrics::default()) + .await; + + let pending_count = client.get_transaction_count(test_address, None).await?; + assert_eq!(pending_count, 3.into()); + + let latest_block_numbers = [api::BlockNumber::Latest, miniblock_number.0.into()]; + for number in latest_block_numbers { + let number = api::BlockIdVariant::BlockNumber(number); + let latest_count = client + .get_transaction_count(test_address, Some(number)) + .await?; + assert_eq!(latest_count, 2.into()); + } + + let earliest_block_numbers = [api::BlockNumber::Earliest, 0.into()]; + for number in earliest_block_numbers { + let number = api::BlockIdVariant::BlockNumber(number); + let historic_count = client + .get_transaction_count(test_address, Some(number)) + .await?; + assert_eq!(historic_count, 0.into()); + } + + let number = api::BlockIdVariant::BlockNumber(1.into()); + let historic_count = client + .get_transaction_count(test_address, Some(number)) + .await?; + assert_eq!(historic_count, 1.into()); + + let number = api::BlockIdVariant::BlockNumber(100.into()); + let error = client + .get_transaction_count(test_address, Some(number)) + .await + .unwrap_err(); + if let ClientError::Call(error) = error { + assert_eq!(error.code(), ErrorCode::InvalidParams.code()); + } else { + panic!("Unexpected error: {error:?}"); + } + Ok(()) + } +} + +#[tokio::test] +async fn getting_transaction_count_for_account() { + test_http_server(TransactionCountTest).await; +} + 
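+// The recovery variant below executes no transactions locally; instead it seeds the
+// account nonce straight into the snapshot storage logs, e.g.:
+//
+//     StorageLog::new_write_log(get_nonce_key(&test_address), H256::from_low_u64_be(3))
+//
+// so `eth_getTransactionCount` must be served from the recovered state alone.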
+#[derive(Debug)] +struct TransactionCountAfterSnapshotRecoveryTest; + +#[async_trait] +impl HttpTest for TransactionCountAfterSnapshotRecoveryTest { + fn storage_initialization(&self) -> StorageInitialization { + let test_address = Address::repeat_byte(11); + let nonce_log = + StorageLog::new_write_log(get_nonce_key(&test_address), H256::from_low_u64_be(3)); + StorageInitialization::Recovery { + logs: vec![nonce_log], + factory_deps: HashMap::new(), + } + } + + async fn test(&self, client: &HttpClient, pool: &ConnectionPool) -> anyhow::Result<()> { + let test_address = Address::repeat_byte(11); + let pending_count = client.get_transaction_count(test_address, None).await?; + assert_eq!(pending_count, 3.into()); + + let mut pending_tx = create_l2_transaction(10, 200); + pending_tx.common_data.initiator_address = test_address; + pending_tx.common_data.nonce = Nonce(3); + let mut storage = pool.access_storage().await?; + storage + .transactions_dal() + .insert_transaction_l2(pending_tx, TransactionExecutionMetrics::default()) + .await; + + let pending_count = client.get_transaction_count(test_address, None).await?; + assert_eq!(pending_count, 4.into()); + + let pruned_block_numbers = [ + api::BlockNumber::Earliest, + 0.into(), + StorageInitialization::SNAPSHOT_RECOVERY_BLOCK.into(), + ]; + for number in pruned_block_numbers { + let number = api::BlockIdVariant::BlockNumber(number); + let error = client + .get_transaction_count(test_address, Some(number)) + .await + .unwrap_err(); + assert_pruned_block_error(&error, StorageInitialization::SNAPSHOT_RECOVERY_BLOCK + 1); + } + + let latest_miniblock_number = StorageInitialization::SNAPSHOT_RECOVERY_BLOCK + 1; + store_miniblock(&mut storage, MiniblockNumber(latest_miniblock_number), &[]).await?; + + let latest_block_numbers = [api::BlockNumber::Latest, latest_miniblock_number.into()]; + for number in latest_block_numbers { + let number = api::BlockIdVariant::BlockNumber(number); + let latest_count = client + .get_transaction_count(test_address, Some(number)) + .await?; + assert_eq!(latest_count, 3.into()); + } + Ok(()) + } +} + +#[tokio::test] +async fn getting_transaction_count_for_account_after_snapshot_recovery() { + test_http_server(TransactionCountAfterSnapshotRecoveryTest).await; +} diff --git a/core/lib/zksync_core/src/api_server/web3/tests/snapshots.rs b/core/lib/zksync_core/src/api_server/web3/tests/snapshots.rs new file mode 100644 index 00000000000..1765a7c2397 --- /dev/null +++ b/core/lib/zksync_core/src/api_server/web3/tests/snapshots.rs @@ -0,0 +1,101 @@ +//! Tests for the `snapshots` Web3 namespace. 
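+//!
+//! A snapshot is only exposed once storage-log filepaths have been stored for all of its
+//! chunks; while chunks are missing, both listing methods treat it as absent:
+//!
+//! ```ignore
+//! let all_snapshots = client.get_all_snapshots().await?;
+//! assert_eq!(all_snapshots.snapshots_l1_batch_numbers, []);
+//! ```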
+ +use std::collections::HashSet; + +use zksync_web3_decl::namespaces::SnapshotsNamespaceClient; + +use super::*; + +#[derive(Debug)] +struct SnapshotBasicsTest { + chunk_ids: HashSet, +} + +impl SnapshotBasicsTest { + const CHUNK_COUNT: u64 = 5; + + fn new(chunk_ids: impl IntoIterator) -> Self { + let chunk_ids: HashSet<_> = chunk_ids.into_iter().collect(); + assert!(chunk_ids.iter().all(|&id| id < Self::CHUNK_COUNT)); + Self { chunk_ids } + } + + fn is_complete_snapshot(&self) -> bool { + self.chunk_ids == HashSet::from_iter(0..Self::CHUNK_COUNT) + } +} + +#[async_trait] +impl HttpTest for SnapshotBasicsTest { + async fn test(&self, client: &HttpClient, pool: &ConnectionPool) -> anyhow::Result<()> { + let mut storage = pool.access_storage().await.unwrap(); + store_miniblock( + &mut storage, + MiniblockNumber(1), + &[execute_l2_transaction(create_l2_transaction(1, 2))], + ) + .await?; + seal_l1_batch(&mut storage, L1BatchNumber(1)).await?; + storage + .snapshots_dal() + .add_snapshot(L1BatchNumber(1), Self::CHUNK_COUNT, "file:///factory_deps") + .await?; + + for &chunk_id in &self.chunk_ids { + let path = format!("file:///storage_logs/chunk{chunk_id}"); + storage + .snapshots_dal() + .add_storage_logs_filepath_for_snapshot(L1BatchNumber(1), chunk_id, &path) + .await?; + } + + let all_snapshots = client.get_all_snapshots().await?; + if self.is_complete_snapshot() { + assert_eq!(all_snapshots.snapshots_l1_batch_numbers, [L1BatchNumber(1)]); + } else { + assert_eq!(all_snapshots.snapshots_l1_batch_numbers, []); + } + + let snapshot_header = client + .get_snapshot_by_l1_batch_number(L1BatchNumber(1)) + .await?; + let snapshot_header = if self.is_complete_snapshot() { + snapshot_header.context("no snapshot for L1 batch #1")? + } else { + assert!(snapshot_header.is_none()); + return Ok(()); + }; + + assert_eq!(snapshot_header.l1_batch_number, L1BatchNumber(1)); + assert_eq!(snapshot_header.miniblock_number, MiniblockNumber(1)); + assert_eq!( + snapshot_header.factory_deps_filepath, + "file:///factory_deps" + ); + + assert_eq!( + snapshot_header.storage_logs_chunks.len(), + self.chunk_ids.len() + ); + for chunk in &snapshot_header.storage_logs_chunks { + assert!(self.chunk_ids.contains(&chunk.chunk_id)); + assert!(chunk.filepath.starts_with("file:///storage_logs/")); + } + Ok(()) + } +} + +#[tokio::test] +async fn snapshot_without_chunks() { + test_http_server(SnapshotBasicsTest::new([])).await; +} + +#[tokio::test] +async fn snapshot_with_some_chunks() { + test_http_server(SnapshotBasicsTest::new([0, 2, 4])).await; +} + +#[tokio::test] +async fn snapshot_with_all_chunks() { + test_http_server(SnapshotBasicsTest::new(0..SnapshotBasicsTest::CHUNK_COUNT)).await; +} diff --git a/core/lib/zksync_core/src/api_server/web3/tests/vm.rs b/core/lib/zksync_core/src/api_server/web3/tests/vm.rs new file mode 100644 index 00000000000..ba5ca2ead00 --- /dev/null +++ b/core/lib/zksync_core/src/api_server/web3/tests/vm.rs @@ -0,0 +1,237 @@ +//! Tests for the VM-instantiating methods (e.g., `eth_call`). 
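+//!
+//! The VM itself is stubbed out with `MockTransactionExecutor`: each test registers the
+//! canned execution results it expects, keyed by calldata or transaction hash, e.g.:
+//!
+//! ```ignore
+//! let mut tx_executor = MockTransactionExecutor::default();
+//! tx_executor.insert_call_response(
+//!     b"call".to_vec(),
+//!     ExecutionResult::Success { output: b"output".to_vec() },
+//! );
+//! ```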
+ +// TODO: Test other VM methods (`debug_traceCall`, `eth_estimateGas`) + +use multivm::interface::ExecutionResult; +use zksync_types::{ + get_intrinsic_constants, transaction_request::CallRequest, L2ChainId, PackedEthSignature, U256, +}; +use zksync_utils::u256_to_h256; + +use super::*; + +#[derive(Debug)] +struct CallTest; + +impl CallTest { + fn call_request() -> CallRequest { + CallRequest { + from: Some(Address::repeat_byte(1)), + to: Some(Address::repeat_byte(2)), + data: Some(b"call".to_vec().into()), + ..CallRequest::default() + } + } +} + +#[async_trait] +impl HttpTest for CallTest { + fn transaction_executor(&self) -> MockTransactionExecutor { + let mut tx_executor = MockTransactionExecutor::default(); + tx_executor.insert_call_response( + Self::call_request().data.unwrap().0, + ExecutionResult::Success { + output: b"output".to_vec(), + }, + ); + tx_executor + } + + async fn test(&self, client: &HttpClient, _pool: &ConnectionPool) -> anyhow::Result<()> { + let call_result = client.call(Self::call_request(), None).await?; + assert_eq!(call_result.0, b"output"); + + let valid_block_numbers = [ + api::BlockNumber::Pending, + api::BlockNumber::Latest, + 0.into(), + ]; + for number in valid_block_numbers { + let number = api::BlockIdVariant::BlockNumber(number); + let call_result = client.call(Self::call_request(), Some(number)).await?; + assert_eq!(call_result.0, b"output"); + } + + let invalid_block_number = api::BlockNumber::from(100); + let number = api::BlockIdVariant::BlockNumber(invalid_block_number); + let error = client + .call(Self::call_request(), Some(number)) + .await + .unwrap_err(); + if let ClientError::Call(error) = error { + assert_eq!(error.code(), ErrorCode::InvalidParams.code()); + } else { + panic!("Unexpected error: {error:?}"); + } + + Ok(()) + } +} + +#[tokio::test] +async fn call_method_basics() { + test_http_server(CallTest).await; +} + +#[derive(Debug)] +struct CallTestAfterSnapshotRecovery; + +#[async_trait] +impl HttpTest for CallTestAfterSnapshotRecovery { + fn storage_initialization(&self) -> StorageInitialization { + StorageInitialization::empty_recovery() + } + + fn transaction_executor(&self) -> MockTransactionExecutor { + CallTest.transaction_executor() + } + + async fn test(&self, client: &HttpClient, pool: &ConnectionPool) -> anyhow::Result<()> { + let call_result = client.call(CallTest::call_request(), None).await?; + assert_eq!(call_result.0, b"output"); + let pending_block_number = api::BlockIdVariant::BlockNumber(api::BlockNumber::Pending); + let call_result = client + .call(CallTest::call_request(), Some(pending_block_number)) + .await?; + assert_eq!(call_result.0, b"output"); + + let first_local_miniblock = StorageInitialization::SNAPSHOT_RECOVERY_BLOCK + 1; + let first_miniblock_numbers = [api::BlockNumber::Latest, first_local_miniblock.into()]; + for number in first_miniblock_numbers { + let number = api::BlockIdVariant::BlockNumber(number); + let error = client + .call(CallTest::call_request(), Some(number)) + .await + .unwrap_err(); + if let ClientError::Call(error) = error { + assert_eq!(error.code(), ErrorCode::InvalidParams.code()); + } else { + panic!("Unexpected error: {error:?}"); + } + } + + let pruned_block_numbers = [0, 1, StorageInitialization::SNAPSHOT_RECOVERY_BLOCK]; + for number in pruned_block_numbers { + let number = api::BlockIdVariant::BlockNumber(number.into()); + let error = client + .call(CallTest::call_request(), Some(number)) + .await + .unwrap_err(); + assert_pruned_block_error(&error, 
StorageInitialization::SNAPSHOT_RECOVERY_BLOCK + 1); + } + + let mut storage = pool.access_storage().await?; + store_miniblock(&mut storage, MiniblockNumber(first_local_miniblock), &[]).await?; + drop(storage); + + for number in first_miniblock_numbers { + let number = api::BlockIdVariant::BlockNumber(number); + let call_result = client.call(CallTest::call_request(), Some(number)).await?; + assert_eq!(call_result.0, b"output"); + } + Ok(()) + } +} + +#[tokio::test] +async fn call_method_after_snapshot_recovery() { + test_http_server(CallTestAfterSnapshotRecovery).await; +} + +#[derive(Debug)] +struct SendRawTransactionTest { + snapshot_recovery: bool, +} + +impl SendRawTransactionTest { + fn transaction_bytes_and_hash() -> (Vec, H256) { + let private_key = H256::repeat_byte(11); + let address = PackedEthSignature::address_from_private_key(&private_key).unwrap(); + + let tx_request = api::TransactionRequest { + chain_id: Some(L2ChainId::default().as_u64()), + from: Some(address), + to: Some(Address::repeat_byte(2)), + value: 123_456.into(), + gas: (get_intrinsic_constants().l2_tx_intrinsic_gas * 2).into(), + gas_price: StateKeeperConfig::for_tests().minimal_l2_gas_price.into(), + input: vec![1, 2, 3, 4].into(), + ..api::TransactionRequest::default() + }; + let mut rlp = Default::default(); + tx_request.rlp(&mut rlp, L2ChainId::default().as_u64(), None); + let data = rlp.out(); + let signed_message = PackedEthSignature::message_to_signed_bytes(&data); + let signature = PackedEthSignature::sign_raw(&private_key, &signed_message).unwrap(); + + let mut rlp = Default::default(); + tx_request.rlp(&mut rlp, L2ChainId::default().as_u64(), Some(&signature)); + let data = rlp.out(); + let (_, tx_hash) = + api::TransactionRequest::from_bytes(&data, L2ChainId::default()).unwrap(); + (data.into(), tx_hash) + } + + fn balance_storage_log() -> StorageLog { + let private_key = H256::repeat_byte(11); + let address = PackedEthSignature::address_from_private_key(&private_key).unwrap(); + let balance_key = storage_key_for_eth_balance(&address); + StorageLog::new_write_log(balance_key, u256_to_h256(U256::one() << 64)) + } +} + +#[async_trait] +impl HttpTest for SendRawTransactionTest { + fn storage_initialization(&self) -> StorageInitialization { + if self.snapshot_recovery { + let logs = vec![Self::balance_storage_log()]; + StorageInitialization::Recovery { + logs, + factory_deps: HashMap::default(), + } + } else { + StorageInitialization::Genesis + } + } + + fn transaction_executor(&self) -> MockTransactionExecutor { + let mut tx_executor = MockTransactionExecutor::default(); + tx_executor.insert_tx_response( + Self::transaction_bytes_and_hash().1, + ExecutionResult::Success { output: vec![] }, + ); + tx_executor + } + + async fn test(&self, client: &HttpClient, pool: &ConnectionPool) -> anyhow::Result<()> { + if !self.snapshot_recovery { + // Manually set sufficient balance for the transaction account. 
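+            // The balance slot is derived via `storage_key_for_eth_balance` (see
+            // `balance_storage_log` above); in the snapshot-recovery case the same log
+            // is injected through `StorageInitialization::Recovery` instead.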
+ let mut storage = pool.access_storage().await?; + storage + .storage_dal() + .apply_storage_logs(&[(H256::zero(), vec![Self::balance_storage_log()])]) + .await; + } + + let (tx_bytes, tx_hash) = Self::transaction_bytes_and_hash(); + let send_result = client.send_raw_transaction(tx_bytes.into()).await?; + assert_eq!(send_result, tx_hash); + Ok(()) + } +} + +#[tokio::test] +async fn send_raw_transaction_basics() { + test_http_server(SendRawTransactionTest { + snapshot_recovery: false, + }) + .await; +} + +#[tokio::test] +async fn send_raw_transaction_after_snapshot_recovery() { + test_http_server(SendRawTransactionTest { + snapshot_recovery: true, + }) + .await; +} diff --git a/core/lib/zksync_core/src/api_server/web3/tests/ws.rs b/core/lib/zksync_core/src/api_server/web3/tests/ws.rs new file mode 100644 index 00000000000..0a82c3d0f21 --- /dev/null +++ b/core/lib/zksync_core/src/api_server/web3/tests/ws.rs @@ -0,0 +1,666 @@ +//! WS-related tests. + +use async_trait::async_trait; +use jsonrpsee::core::{client::ClientT, params::BatchRequestBuilder, ClientError}; +use reqwest::StatusCode; +use tokio::sync::watch; +use zksync_config::configs::chain::NetworkConfig; +use zksync_dal::ConnectionPool; +use zksync_types::{api, Address, L1BatchNumber, H256, U64}; +use zksync_web3_decl::{ + jsonrpsee::{ + core::client::{Subscription, SubscriptionClientT}, + rpc_params, + ws_client::{WsClient, WsClientBuilder}, + }, + namespaces::{EthNamespaceClient, ZksNamespaceClient}, + types::{BlockHeader, PubSubFilter}, +}; + +use super::*; +use crate::api_server::web3::metrics::SubscriptionType; + +#[allow(clippy::needless_pass_by_ref_mut)] // false positive +async fn wait_for_subscription( + events: &mut mpsc::UnboundedReceiver, + sub_type: SubscriptionType, +) { + let wait_future = tokio::time::timeout(TEST_TIMEOUT, async { + loop { + let event = events + .recv() + .await + .expect("Events emitter unexpectedly dropped"); + if matches!(event, PubSubEvent::Subscribed(ty) if ty == sub_type) { + break; + } else { + tracing::trace!(?event, "Skipping event"); + } + } + }); + wait_future + .await + .expect("Timed out waiting for subscription") +} + +#[allow(clippy::needless_pass_by_ref_mut)] // false positive +async fn wait_for_notifiers( + events: &mut mpsc::UnboundedReceiver, + sub_types: &[SubscriptionType], +) { + let wait_future = tokio::time::timeout(TEST_TIMEOUT, async { + loop { + let event = events + .recv() + .await + .expect("Events emitter unexpectedly dropped"); + if matches!(event, PubSubEvent::NotifyIterationFinished(ty) if sub_types.contains(&ty)) + { + break; + } else { + tracing::trace!(?event, "Skipping event"); + } + } + }); + wait_future.await.expect("Timed out waiting for notifier"); +} + +#[tokio::test] +async fn notifiers_start_after_snapshot_recovery() { + let pool = ConnectionPool::test_pool().await; + let mut storage = pool.access_storage().await.unwrap(); + prepare_empty_recovery_snapshot(&mut storage, StorageInitialization::SNAPSHOT_RECOVERY_BLOCK) + .await; + + let (stop_sender, stop_receiver) = watch::channel(false); + let (events_sender, mut events_receiver) = mpsc::unbounded_channel(); + let mut subscribe_logic = EthSubscribe::new(); + subscribe_logic.set_events_sender(events_sender); + let notifier_handles = + subscribe_logic.spawn_notifiers(pool.clone(), POLL_INTERVAL, stop_receiver); + assert!(!notifier_handles.is_empty()); + + // Wait a little doing nothing and check that notifier tasks are still active (i.e., have not panicked). 
+ tokio::time::sleep(POLL_INTERVAL).await; + for handle in ¬ifier_handles { + assert!(!handle.is_finished()); + } + + // Emulate creating the first miniblock; check that notifiers react to it. + let first_local_miniblock = MiniblockNumber(StorageInitialization::SNAPSHOT_RECOVERY_BLOCK + 1); + store_miniblock(&mut storage, first_local_miniblock, &[]) + .await + .unwrap(); + + wait_for_notifiers( + &mut events_receiver, + &[ + SubscriptionType::Blocks, + SubscriptionType::Txs, + SubscriptionType::Logs, + ], + ) + .await; + + stop_sender.send_replace(true); + for handle in notifier_handles { + handle.await.unwrap().expect("Notifier task failed"); + } +} + +#[async_trait] +trait WsTest: Send + Sync { + /// Prepares the storage before the server is started. The default implementation performs genesis. + fn storage_initialization(&self) -> StorageInitialization { + StorageInitialization::Genesis + } + + async fn test( + &self, + client: &WsClient, + pool: &ConnectionPool, + pub_sub_events: mpsc::UnboundedReceiver, + ) -> anyhow::Result<()>; + + fn websocket_requests_per_minute_limit(&self) -> Option { + None + } +} + +async fn test_ws_server(test: impl WsTest) { + let pool = ConnectionPool::test_pool().await; + let network_config = NetworkConfig::for_tests(); + let mut storage = pool.access_storage().await.unwrap(); + test.storage_initialization() + .prepare_storage(&network_config, &mut storage) + .await + .expect("Failed preparing storage for test"); + drop(storage); + + let (stop_sender, stop_receiver) = watch::channel(false); + let (server_handles, pub_sub_events) = spawn_ws_server( + &network_config, + pool.clone(), + stop_receiver, + test.websocket_requests_per_minute_limit(), + ) + .await; + server_handles.wait_until_ready().await; + + let client = WsClientBuilder::default() + .build(format!("ws://{}", server_handles.local_addr)) + .await + .unwrap(); + test.test(&client, &pool, pub_sub_events).await.unwrap(); + + stop_sender.send_replace(true); + server_handles.shutdown().await; +} + +#[derive(Debug)] +struct WsServerCanStartTest; + +#[async_trait] +impl WsTest for WsServerCanStartTest { + async fn test( + &self, + client: &WsClient, + _pool: &ConnectionPool, + _pub_sub_events: mpsc::UnboundedReceiver, + ) -> anyhow::Result<()> { + let block_number = client.get_block_number().await?; + assert_eq!(block_number, U64::from(0)); + + let l1_batch_number = client.get_l1_batch_number().await?; + assert_eq!(l1_batch_number, U64::from(0)); + + let genesis_l1_batch = client + .get_l1_batch_details(L1BatchNumber(0)) + .await? + .context("missing genesis L1 batch")?; + assert!(genesis_l1_batch.base.root_hash.is_some()); + Ok(()) + } +} + +#[tokio::test] +async fn ws_server_can_start() { + test_ws_server(WsServerCanStartTest).await; +} + +#[derive(Debug)] +struct BasicSubscriptionsTest { + snapshot_recovery: bool, +} + +#[async_trait] +impl WsTest for BasicSubscriptionsTest { + fn storage_initialization(&self) -> StorageInitialization { + if self.snapshot_recovery { + StorageInitialization::empty_recovery() + } else { + StorageInitialization::Genesis + } + } + + async fn test( + &self, + client: &WsClient, + pool: &ConnectionPool, + mut pub_sub_events: mpsc::UnboundedReceiver, + ) -> anyhow::Result<()> { + // Wait for the notifiers to get initialized so that they don't skip notifications + // for the created subscriptions. 
+ wait_for_notifiers( + &mut pub_sub_events, + &[SubscriptionType::Blocks, SubscriptionType::Txs], + ) + .await; + + let params = rpc_params!["newHeads"]; + let mut blocks_subscription = client + .subscribe::("eth_subscribe", params, "eth_unsubscribe") + .await?; + wait_for_subscription(&mut pub_sub_events, SubscriptionType::Blocks).await; + + let params = rpc_params!["newPendingTransactions"]; + let mut txs_subscription = client + .subscribe::("eth_subscribe", params, "eth_unsubscribe") + .await?; + wait_for_subscription(&mut pub_sub_events, SubscriptionType::Txs).await; + + let mut storage = pool.access_storage().await?; + let tx_result = execute_l2_transaction(create_l2_transaction(1, 2)); + let new_tx_hash = tx_result.hash; + let miniblock_number = MiniblockNumber(if self.snapshot_recovery { + StorageInitialization::SNAPSHOT_RECOVERY_BLOCK + 1 + } else { + 1 + }); + let new_miniblock = store_miniblock(&mut storage, miniblock_number, &[tx_result]).await?; + drop(storage); + + let received_tx_hash = tokio::time::timeout(TEST_TIMEOUT, txs_subscription.next()) + .await + .context("Timed out waiting for new tx hash")? + .context("Pending txs subscription terminated")??; + assert_eq!(received_tx_hash, new_tx_hash); + let received_block_header = tokio::time::timeout(TEST_TIMEOUT, blocks_subscription.next()) + .await + .context("Timed out waiting for new block header")? + .context("New blocks subscription terminated")??; + assert_eq!( + received_block_header.number, + Some(new_miniblock.number.0.into()) + ); + assert_eq!(received_block_header.hash, Some(new_miniblock.hash)); + assert_eq!( + received_block_header.timestamp, + new_miniblock.timestamp.into() + ); + blocks_subscription.unsubscribe().await?; + Ok(()) + } +} + +#[tokio::test] +async fn basic_subscriptions() { + test_ws_server(BasicSubscriptionsTest { + snapshot_recovery: false, + }) + .await; +} + +#[tokio::test] +async fn basic_subscriptions_after_snapshot_recovery() { + test_ws_server(BasicSubscriptionsTest { + snapshot_recovery: true, + }) + .await; +} + +#[derive(Debug)] +struct LogSubscriptionsTest { + snapshot_recovery: bool, +} + +#[derive(Debug)] +struct LogSubscriptions { + all_logs_subscription: Subscription, + address_subscription: Subscription, + topic_subscription: Subscription, +} + +impl LogSubscriptions { + async fn new( + client: &WsClient, + pub_sub_events: &mut mpsc::UnboundedReceiver, + ) -> anyhow::Result { + // Wait for the notifier to get initialized so that it doesn't skip notifications + // for the created subscriptions. 
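+        // (Same rationale as in `BasicSubscriptionsTest::test()` above: only subscribe
+        // once the logs notifier has completed an iteration.)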
+ wait_for_notifiers(pub_sub_events, &[SubscriptionType::Logs]).await; + + let params = rpc_params!["logs"]; + let all_logs_subscription = client + .subscribe::("eth_subscribe", params, "eth_unsubscribe") + .await?; + let address_filter = PubSubFilter { + address: Some(Address::repeat_byte(23).into()), + topics: None, + }; + let params = rpc_params!["logs", address_filter]; + let address_subscription = client + .subscribe::("eth_subscribe", params, "eth_unsubscribe") + .await?; + let topic_filter = PubSubFilter { + address: None, + topics: Some(vec![Some(H256::repeat_byte(42).into())]), + }; + let params = rpc_params!["logs", topic_filter]; + let topic_subscription = client + .subscribe::("eth_subscribe", params, "eth_unsubscribe") + .await?; + for _ in 0..3 { + wait_for_subscription(pub_sub_events, SubscriptionType::Logs).await; + } + + Ok(Self { + all_logs_subscription, + address_subscription, + topic_subscription, + }) + } +} + +#[async_trait] +impl WsTest for LogSubscriptionsTest { + fn storage_initialization(&self) -> StorageInitialization { + if self.snapshot_recovery { + StorageInitialization::empty_recovery() + } else { + StorageInitialization::Genesis + } + } + + async fn test( + &self, + client: &WsClient, + pool: &ConnectionPool, + mut pub_sub_events: mpsc::UnboundedReceiver, + ) -> anyhow::Result<()> { + let LogSubscriptions { + mut all_logs_subscription, + mut address_subscription, + mut topic_subscription, + } = LogSubscriptions::new(client, &mut pub_sub_events).await?; + + let mut storage = pool.access_storage().await?; + let miniblock_number = if self.snapshot_recovery { + StorageInitialization::SNAPSHOT_RECOVERY_BLOCK + 1 + } else { + 1 + }; + let (tx_location, events) = store_events(&mut storage, miniblock_number, 0).await?; + drop(storage); + let events: Vec<_> = events.iter().collect(); + + let all_logs = collect_logs(&mut all_logs_subscription, 4).await?; + for (i, log) in all_logs.iter().enumerate() { + assert_eq!(log.transaction_index, Some(0.into())); + assert_eq!(log.log_index, Some(i.into())); + assert_eq!(log.transaction_hash, Some(tx_location.tx_hash)); + assert_eq!(log.block_number, Some(miniblock_number.into())); + } + assert_logs_match(&all_logs, &events); + + let address_logs = collect_logs(&mut address_subscription, 2).await?; + assert_logs_match(&address_logs, &[events[0], events[3]]); + + let topic_logs = collect_logs(&mut topic_subscription, 2).await?; + assert_logs_match(&topic_logs, &[events[1], events[3]]); + + wait_for_notifiers(&mut pub_sub_events, &[SubscriptionType::Logs]).await; + + // Check that no new notifications were sent to subscribers. + tokio::time::timeout(POLL_INTERVAL, all_logs_subscription.next()) + .await + .unwrap_err(); + tokio::time::timeout(POLL_INTERVAL, address_subscription.next()) + .await + .unwrap_err(); + tokio::time::timeout(POLL_INTERVAL, topic_subscription.next()) + .await + .unwrap_err(); + Ok(()) + } +} + +async fn collect_logs( + sub: &mut Subscription, + expected_count: usize, +) -> anyhow::Result> { + let mut logs = Vec::with_capacity(expected_count); + for _ in 0..expected_count { + let log = tokio::time::timeout(TEST_TIMEOUT, sub.next()) + .await + .context("Timed out waiting for new log")? 
+ .context("Logs subscription terminated")??; + logs.push(log); + } + Ok(logs) +} + +#[tokio::test] +async fn log_subscriptions() { + test_ws_server(LogSubscriptionsTest { + snapshot_recovery: false, + }) + .await; +} + +#[tokio::test] +async fn log_subscriptions_after_snapshot_recovery() { + test_ws_server(LogSubscriptionsTest { + snapshot_recovery: true, + }) + .await; +} + +#[derive(Debug)] +struct LogSubscriptionsWithNewBlockTest; + +#[async_trait] +impl WsTest for LogSubscriptionsWithNewBlockTest { + async fn test( + &self, + client: &WsClient, + pool: &ConnectionPool, + mut pub_sub_events: mpsc::UnboundedReceiver, + ) -> anyhow::Result<()> { + let LogSubscriptions { + mut all_logs_subscription, + mut address_subscription, + .. + } = LogSubscriptions::new(client, &mut pub_sub_events).await?; + + let mut storage = pool.access_storage().await?; + let (_, events) = store_events(&mut storage, 1, 0).await?; + drop(storage); + let events: Vec<_> = events.iter().collect(); + + let all_logs = collect_logs(&mut all_logs_subscription, 4).await?; + assert_logs_match(&all_logs, &events); + + // Create a new block and wait for the pub-sub notifier to run. + let mut storage = pool.access_storage().await?; + let (_, new_events) = store_events(&mut storage, 2, 4).await?; + drop(storage); + let new_events: Vec<_> = new_events.iter().collect(); + + let all_new_logs = collect_logs(&mut all_logs_subscription, 4).await?; + assert_logs_match(&all_new_logs, &new_events); + + let address_logs = collect_logs(&mut address_subscription, 4).await?; + assert_logs_match( + &address_logs, + &[events[0], events[3], new_events[0], new_events[3]], + ); + Ok(()) + } +} + +#[tokio::test] +async fn log_subscriptions_with_new_block() { + test_ws_server(LogSubscriptionsWithNewBlockTest).await; +} + +#[derive(Debug)] +struct LogSubscriptionsWithManyBlocksTest; + +#[async_trait] +impl WsTest for LogSubscriptionsWithManyBlocksTest { + async fn test( + &self, + client: &WsClient, + pool: &ConnectionPool, + mut pub_sub_events: mpsc::UnboundedReceiver, + ) -> anyhow::Result<()> { + let LogSubscriptions { + mut all_logs_subscription, + mut address_subscription, + .. + } = LogSubscriptions::new(client, &mut pub_sub_events).await?; + + // Add two blocks in the storage atomically. 
+ let mut storage = pool.access_storage().await?; + let mut transaction = storage.start_transaction().await?; + let (_, events) = store_events(&mut transaction, 1, 0).await?; + let events: Vec<_> = events.iter().collect(); + let (_, new_events) = store_events(&mut transaction, 2, 4).await?; + let new_events: Vec<_> = new_events.iter().collect(); + transaction.commit().await?; + drop(storage); + + let all_logs = collect_logs(&mut all_logs_subscription, 4).await?; + assert_logs_match(&all_logs, &events); + let all_new_logs = collect_logs(&mut all_logs_subscription, 4).await?; + assert_logs_match(&all_new_logs, &new_events); + + let address_logs = collect_logs(&mut address_subscription, 4).await?; + assert_logs_match( + &address_logs, + &[events[0], events[3], new_events[0], new_events[3]], + ); + Ok(()) + } +} + +#[tokio::test] +async fn log_subscriptions_with_many_new_blocks_at_once() { + test_ws_server(LogSubscriptionsWithManyBlocksTest).await; +} + +#[derive(Debug)] +struct LogSubscriptionsWithDelayTest; + +#[async_trait] +impl WsTest for LogSubscriptionsWithDelayTest { + async fn test( + &self, + client: &WsClient, + pool: &ConnectionPool, + mut pub_sub_events: mpsc::UnboundedReceiver, + ) -> anyhow::Result<()> { + // Store a miniblock w/o subscriptions being present. + let mut storage = pool.access_storage().await?; + store_events(&mut storage, 1, 0).await?; + drop(storage); + + while pub_sub_events.try_recv().is_ok() { + // Drain all existing pub-sub events. + } + wait_for_notifiers(&mut pub_sub_events, &[SubscriptionType::Logs]).await; + + let params = rpc_params!["logs"]; + let mut all_logs_subscription = client + .subscribe::("eth_subscribe", params, "eth_unsubscribe") + .await?; + let address_and_topic_filter = PubSubFilter { + address: Some(Address::repeat_byte(23).into()), + topics: Some(vec![Some(H256::repeat_byte(42).into())]), + }; + let params = rpc_params!["logs", address_and_topic_filter]; + let mut address_and_topic_subscription = client + .subscribe::("eth_subscribe", params, "eth_unsubscribe") + .await?; + for _ in 0..2 { + wait_for_subscription(&mut pub_sub_events, SubscriptionType::Logs).await; + } + + let mut storage = pool.access_storage().await?; + let (_, new_events) = store_events(&mut storage, 2, 4).await?; + drop(storage); + let new_events: Vec<_> = new_events.iter().collect(); + + let all_logs = collect_logs(&mut all_logs_subscription, 4).await?; + assert_logs_match(&all_logs, &new_events); + let address_and_topic_logs = collect_logs(&mut address_and_topic_subscription, 1).await?; + assert_logs_match(&address_and_topic_logs, &[new_events[3]]); + + // Check the behavior of remaining subscriptions if a subscription is dropped. 
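+        // Unsubscribing from one subscription must not disturb delivery to the others.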
+ all_logs_subscription.unsubscribe().await?; + let mut storage = pool.access_storage().await?; + let (_, new_events) = store_events(&mut storage, 3, 8).await?; + drop(storage); + + let address_and_topic_logs = collect_logs(&mut address_and_topic_subscription, 1).await?; + assert_logs_match(&address_and_topic_logs, &[&new_events[3]]); + Ok(()) + } +} + +#[tokio::test] +async fn log_subscriptions_with_delay() { + test_ws_server(LogSubscriptionsWithDelayTest).await; +} + +#[derive(Debug)] +struct RateLimitingTest; + +#[async_trait] +impl WsTest for RateLimitingTest { + async fn test( + &self, + client: &WsClient, + _pool: &ConnectionPool, + _pub_sub_events: mpsc::UnboundedReceiver, + ) -> anyhow::Result<()> { + client.chain_id().await.unwrap(); + client.chain_id().await.unwrap(); + client.chain_id().await.unwrap(); + let expected_err = client.chain_id().await.unwrap_err(); + + if let ClientError::Call(error) = expected_err { + assert_eq!(error.code() as u16, StatusCode::TOO_MANY_REQUESTS.as_u16()); + assert_eq!(error.message(), "Too many requests"); + assert!(error.data().is_none()); + } else { + panic!("Unexpected error returned: {expected_err}"); + } + + Ok(()) + } + + fn websocket_requests_per_minute_limit(&self) -> Option { + Some(NonZeroU32::new(3).unwrap()) + } +} + +#[tokio::test] +async fn rate_limiting() { + test_ws_server(RateLimitingTest).await; +} + +#[derive(Debug)] +struct BatchGetsRateLimitedTest; + +#[async_trait] +impl WsTest for BatchGetsRateLimitedTest { + async fn test( + &self, + client: &WsClient, + _pool: &ConnectionPool, + _pub_sub_events: mpsc::UnboundedReceiver, + ) -> anyhow::Result<()> { + client.chain_id().await.unwrap(); + client.chain_id().await.unwrap(); + + let mut batch = BatchRequestBuilder::new(); + batch.insert("eth_chainId", rpc_params![]).unwrap(); + batch.insert("eth_chainId", rpc_params![]).unwrap(); + + let mut expected_err = client + .batch_request::(batch) + .await + .unwrap() + .into_ok() + .unwrap_err(); + + let error = expected_err.next().unwrap(); + + assert_eq!(error.code() as u16, StatusCode::TOO_MANY_REQUESTS.as_u16()); + assert_eq!(error.message(), "Too many requests"); + assert!(error.data().is_none()); + + Ok(()) + } + + fn websocket_requests_per_minute_limit(&self) -> Option { + Some(NonZeroU32::new(3).unwrap()) + } +} + +#[tokio::test] +async fn batch_rate_limiting() { + test_ws_server(BatchGetsRateLimitedTest).await; +} diff --git a/core/lib/zksync_core/src/basic_witness_input_producer/mod.rs b/core/lib/zksync_core/src/basic_witness_input_producer/mod.rs index e4d605d2545..e91ccc4864e 100644 --- a/core/lib/zksync_core/src/basic_witness_input_producer/mod.rs +++ b/core/lib/zksync_core/src/basic_witness_input_producer/mod.rs @@ -1,24 +1,22 @@ -use anyhow::Context; -use std::sync::Arc; -use std::time::Instant; +use std::{sync::Arc, time::Instant}; +use anyhow::Context; +use async_trait::async_trait; +use multivm::interface::{L2BlockEnv, VmInterface}; +use tokio::{runtime::Handle, task::JoinHandle}; use zksync_dal::{basic_witness_input_producer_dal::JOB_MAX_ATTEMPT, ConnectionPool}; use zksync_object_store::{ObjectStore, ObjectStoreFactory}; use zksync_queued_job_processor::JobProcessor; -use zksync_types::witness_block_state::WitnessBlockState; -use zksync_types::{L1BatchNumber, L2ChainId}; +use zksync_types::{witness_block_state::WitnessBlockState, L1BatchNumber, L2ChainId}; -use async_trait::async_trait; -use multivm::interface::{L2BlockEnv, VmInterface}; -use tokio::runtime::Handle; -use tokio::task::JoinHandle; +use self::{ + 
metrics::METRICS, + vm_interactions::{create_vm, execute_tx}, +}; mod metrics; mod vm_interactions; -use self::metrics::METRICS; -use self::vm_interactions::{create_vm, execute_tx}; - /// Component that extracts all data (from DB) necessary to run a Basic Witness Generator. /// Does this by rerunning an entire L1Batch and extracting information from both the VM run and DB. /// This component will upload Witness Inputs to the object store. @@ -39,7 +37,7 @@ impl BasicWitnessInputProducer { ) -> anyhow::Result { Ok(BasicWitnessInputProducer { connection_pool, - object_store: store_factory.create_store().await.into(), + object_store: store_factory.create_store().await, l2_chain_id, }) } @@ -192,10 +190,6 @@ impl JobProcessor for BasicWitnessInputProducer { .mark_job_as_successful(job_id, started_at, &object_path) .await .context("failed to mark job as successful for BasicWitnessInputProducer")?; - transaction - .witness_generator_dal() - .mark_witness_inputs_job_as_queued(job_id) - .await; transaction .commit() .await diff --git a/core/lib/zksync_core/src/basic_witness_input_producer/vm_interactions.rs b/core/lib/zksync_core/src/basic_witness_input_producer/vm_interactions.rs index 464ab1f92d0..8ad2a66155d 100644 --- a/core/lib/zksync_core/src/basic_witness_input_producer/vm_interactions.rs +++ b/core/lib/zksync_core/src/basic_witness_input_producer/vm_interactions.rs @@ -1,15 +1,16 @@ use anyhow::{anyhow, Context}; - -use crate::state_keeper::io::common::load_l1_batch_params; - -use multivm::interface::{VmInterface, VmInterfaceHistoryEnabled}; -use multivm::vm_latest::HistoryEnabled; -use multivm::VmInstance; +use multivm::{ + interface::{VmInterface, VmInterfaceHistoryEnabled}, + vm_latest::HistoryEnabled, + VmInstance, +}; use tokio::runtime::Handle; use zksync_dal::StorageProcessor; use zksync_state::{PostgresStorage, StoragePtr, StorageView, WriteStorage}; use zksync_types::{L1BatchNumber, L2ChainId, Transaction}; +use crate::state_keeper::io::common::load_l1_batch_params; + pub(super) type VmAndStorage<'a> = ( VmInstance>, HistoryEnabled>, StoragePtr>>, @@ -73,6 +74,7 @@ pub(super) fn execute_tx( vm.make_snapshot(); if vm .execute_transaction_with_bytecode_compression(tx.clone(), true) + .0 .is_ok() { vm.pop_snapshot_no_rollback(); @@ -83,6 +85,7 @@ pub(super) fn execute_tx( vm.rollback_to_the_latest_snapshot(); if vm .execute_transaction_with_bytecode_compression(tx.clone(), false) + .0 .is_err() { return Err(anyhow!("compression can't fail if we don't apply it")); diff --git a/core/lib/zksync_core/src/block_reverter/mod.rs b/core/lib/zksync_core/src/block_reverter/mod.rs index 1170af9d5ba..e45bef7eb21 100644 --- a/core/lib/zksync_core/src/block_reverter/mod.rs +++ b/core/lib/zksync_core/src/block_reverter/mod.rs @@ -1,27 +1,26 @@ +use std::{path::Path, time::Duration}; + use bitflags::bitflags; use serde::Serialize; use tokio::time::sleep; - -use std::path::Path; -use std::time::Duration; - use zksync_config::{ContractsConfig, ETHSenderConfig}; use zksync_contracts::zksync_contract; use zksync_dal::ConnectionPool; +use zksync_eth_signer::{EthereumSigner, PrivateKeySigner, TransactionParameters}; use zksync_merkle_tree::domain::ZkSyncTree; use zksync_state::RocksdbStorage; use zksync_storage::RocksDB; -use zksync_types::aggregated_operations::AggregatedActionType; -use zksync_types::ethabi::Token; -use zksync_types::web3::{ - contract::{Contract, Options}, - transports::Http, - types::{BlockId, BlockNumber}, - Web3, +use zksync_types::{ + aggregated_operations::AggregatedActionType, + 
ethabi::Token, + web3::{ + contract::{Contract, Options}, + transports::Http, + types::{BlockId, BlockNumber}, + Web3, + }, + L1BatchNumber, PackedEthSignature, H160, H256, U256, }; -use zksync_types::{L1BatchNumber, PackedEthSignature, H160, H256, U256}; - -use zksync_eth_signer::{EthereumSigner, PrivateKeySigner, TransactionParameters}; bitflags! { pub struct BlockReverterFlags: u32 { @@ -191,7 +190,7 @@ impl BlockReverter { storage_root_hash: H256, ) { let db = RocksDB::new(path); - let mut tree = ZkSyncTree::new_lightweight(db); + let mut tree = ZkSyncTree::new_lightweight(db.into()); if tree.next_l1_batch_number() <= last_l1_batch_to_keep { tracing::info!("Tree is behind the L1 batch to revert to; skipping"); diff --git a/core/lib/zksync_core/src/consensus/mod.rs b/core/lib/zksync_core/src/consensus/mod.rs index a229666e76c..423ef75b7ce 100644 --- a/core/lib/zksync_core/src/consensus/mod.rs +++ b/core/lib/zksync_core/src/consensus/mod.rs @@ -1,6 +1,212 @@ //! Consensus-related functionality. +#![allow(clippy::redundant_locals)] +use std::collections::{HashMap, HashSet}; -mod payload; -mod proto; +use anyhow::Context as _; +use serde::de::Error; +use zksync_concurrency::{ctx, error::Wrap as _, scope}; +use zksync_consensus_crypto::{Text, TextFmt}; +use zksync_consensus_executor as executor; +use zksync_consensus_roles::{node, validator}; +use zksync_consensus_storage::BlockStore; +use zksync_dal::ConnectionPool; +use zksync_types::Address; -pub(crate) use self::payload::Payload; +use self::storage::Store; +use crate::sync_layer::sync_action::ActionQueueSender; + +mod storage; +#[cfg(test)] +pub(crate) mod testonly; +#[cfg(test)] +mod tests; + +#[derive(PartialEq, Eq, Hash)] +pub struct SerdeText(pub T); + +impl<'de, T: TextFmt> serde::Deserialize<'de> for SerdeText { + fn deserialize>(d: D) -> Result { + Ok(Self( + T::decode(Text::new(<&str>::deserialize(d)?)).map_err(Error::custom)?, + )) + } +} + +/// Config (shared between main node and external node) which implements `serde` encoding +/// and therefore can be flattened into env vars. +#[derive(serde::Deserialize)] +pub struct SerdeConfig { + /// Local socket address to listen for the incoming connections. + pub server_addr: std::net::SocketAddr, + /// Public address of this node (should forward to `server_addr`) + /// that will be advertised to peers, so that they can connect to this + /// node. + pub public_addr: std::net::SocketAddr, + + /// Validator private key. Should be set only for the validator node. + pub validator_key: Option>, + + /// Validators participating in consensus. + pub validator_set: Vec>, + + /// Key of this node. It uniquely identifies the node. + pub node_key: SerdeText, + /// Limit on the number of inbound connections outside + /// of the `static_inbound` set. + pub gossip_dynamic_inbound_limit: u64, + /// Inbound gossip connections that should be unconditionally accepted. + pub gossip_static_inbound: HashSet>, + /// Outbound gossip connections that the node should actively try to + /// establish and maintain. + pub gossip_static_outbound: HashMap, std::net::SocketAddr>, + + pub operator_address: Option
, +} + +impl SerdeConfig { + /// Extracts consensus executor config from the `SerdeConfig`. + fn executor(&self) -> anyhow::Result { + Ok(executor::Config { + server_addr: self.server_addr, + validators: validator::ValidatorSet::new( + self.validator_set.iter().map(|k| k.0.clone()), + ) + .context("validator_set")?, + node_key: self.node_key.0.clone(), + gossip_dynamic_inbound_limit: self.gossip_dynamic_inbound_limit, + gossip_static_inbound: self + .gossip_static_inbound + .iter() + .map(|k| k.0.clone()) + .collect(), + gossip_static_outbound: self + .gossip_static_outbound + .iter() + .map(|(k, v)| (k.0.clone(), *v)) + .collect(), + }) + } + + /// Extracts a validator config from the `SerdeConfig`. + pub(crate) fn validator(&self) -> anyhow::Result { + let key = self + .validator_key + .as_ref() + .context("validator_key is required")?; + Ok(executor::ValidatorConfig { + key: key.0.clone(), + public_addr: self.public_addr, + }) + } +} + +impl TryFrom for MainNodeConfig { + type Error = anyhow::Error; + fn try_from(cfg: SerdeConfig) -> anyhow::Result { + Ok(Self { + executor: cfg.executor()?, + validator: cfg.validator()?, + operator_address: cfg + .operator_address + .context("operator_address is required")?, + }) + } +} + +/// Main node consensus config. +#[derive(Debug, Clone)] +pub struct MainNodeConfig { + pub executor: executor::Config, + pub validator: executor::ValidatorConfig, + pub operator_address: Address, +} + +impl MainNodeConfig { + /// Task generating consensus certificates for the miniblocks generated by `StateKeeper`. + /// Broadcasts the blocks with certificates to gossip network peers. + pub async fn run(self, ctx: &ctx::Ctx, pool: ConnectionPool) -> anyhow::Result<()> { + anyhow::ensure!( + self.executor.validators + == validator::ValidatorSet::new(vec![self.validator.key.public()]).unwrap(), + "currently only consensus with just 1 validator is supported" + ); + scope::run!(&ctx, |ctx, s| async { + let store = Store::new(pool, self.operator_address); + let mut block_store = store.clone().into_block_store(); + block_store + .try_init_genesis(ctx, &self.validator.key) + .await + .wrap("block_store.try_init_genesis()")?; + let (block_store, runner) = BlockStore::new(ctx, Box::new(block_store)) + .await + .wrap("BlockStore::new()")?; + s.spawn_bg(runner.run(ctx)); + let executor = executor::Executor { + config: self.executor, + block_store, + validator: Some(executor::Validator { + config: self.validator, + replica_store: Box::new(store.clone()), + payload_manager: Box::new(store.clone()), + }), + }; + executor.run(ctx).await + }) + .await + } +} + +/// External node consensus config. +#[derive(Debug, Clone)] +pub struct FetcherConfig { + executor: executor::Config, + operator_address: Address, +} + +impl TryFrom for FetcherConfig { + type Error = anyhow::Error; + fn try_from(cfg: SerdeConfig) -> anyhow::Result { + Ok(Self { + executor: cfg.executor()?, + operator_address: cfg + .operator_address + .context("operator_address is required")?, + }) + } +} + +impl FetcherConfig { + /// Task fetching L2 blocks using peer-to-peer gossip network. 
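+    /// A minimal usage sketch (the `pool` / `actions_sender` names are illustrative;
+    /// see the consensus tests for how the action queue is wired up in practice):
+    ///
+    /// ```ignore
+    /// scope::run!(ctx, |ctx, s| async {
+    ///     s.spawn_bg(fetcher_config.run(ctx, pool.clone(), actions_sender));
+    ///     // ... run the state keeper consuming the action queue ...
+    /// })
+    /// .await
+    /// ```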
+ pub async fn run( + self, + ctx: &ctx::Ctx, + pool: ConnectionPool, + actions: ActionQueueSender, + ) -> anyhow::Result<()> { + tracing::info!( + "Starting gossip fetcher with {:?} and node key {:?}", + self.executor, + self.executor.node_key.public(), + ); + + scope::run!(ctx, |ctx, s| async { + let store = Store::new(pool, self.operator_address); + let mut block_store = store.clone().into_block_store(); + block_store + .set_actions_queue(ctx, actions) + .await + .wrap("block_store.set_actions_queue()")?; + let (block_store, runner) = BlockStore::new(ctx, Box::new(block_store)) + .await + .wrap("BlockStore::new()")?; + s.spawn_bg(runner.run(ctx)); + let executor = executor::Executor { + config: self.executor, + block_store, + validator: None, + }; + executor.run(ctx).await + }) + .await + } +} diff --git a/core/lib/zksync_core/src/consensus/payload.rs b/core/lib/zksync_core/src/consensus/payload.rs deleted file mode 100644 index 8d53fdf21f3..00000000000 --- a/core/lib/zksync_core/src/consensus/payload.rs +++ /dev/null @@ -1,99 +0,0 @@ -use anyhow::Context as _; - -use zksync_consensus_roles::validator; -use zksync_protobuf::{required, ProtoFmt}; -use zksync_types::api::en::SyncBlock; -use zksync_types::{Address, L1BatchNumber, Transaction, H256}; - -/// L2 block (= miniblock) payload. -#[derive(Debug)] -pub(crate) struct Payload { - pub hash: H256, - pub l1_batch_number: L1BatchNumber, - pub timestamp: u64, - pub l1_gas_price: u64, - pub l2_fair_gas_price: u64, - pub virtual_blocks: u32, - pub operator_address: Address, - pub transactions: Vec, -} - -impl ProtoFmt for Payload { - type Proto = super::proto::Payload; - - fn read(message: &Self::Proto) -> anyhow::Result { - let mut transactions = Vec::with_capacity(message.transactions.len()); - for (i, tx) in message.transactions.iter().enumerate() { - transactions.push( - required(&tx.json) - .and_then(|json_str| Ok(serde_json::from_str(json_str)?)) - .with_context(|| format!("transaction[{i}]"))?, - ); - } - - Ok(Self { - hash: required(&message.hash) - .and_then(|bytes| Ok(<[u8; 32]>::try_from(bytes.as_slice())?.into())) - .context("hash")?, - l1_batch_number: L1BatchNumber( - *required(&message.l1_batch_number).context("l1_batch_number")?, - ), - timestamp: *required(&message.timestamp).context("timestamp")?, - l1_gas_price: *required(&message.l1_gas_price).context("l1_gas_price")?, - l2_fair_gas_price: *required(&message.l2_fair_gas_price) - .context("l2_fair_gas_price")?, - virtual_blocks: *required(&message.virtual_blocks).context("virtual_blocks")?, - operator_address: required(&message.operator_address) - .and_then(|bytes| Ok(<[u8; 20]>::try_from(bytes.as_slice())?.into())) - .context("operator_address")?, - transactions, - }) - } - fn build(&self) -> Self::Proto { - Self::Proto { - hash: Some(self.hash.as_bytes().into()), - l1_batch_number: Some(self.l1_batch_number.0), - timestamp: Some(self.timestamp), - l1_gas_price: Some(self.l1_gas_price), - l2_fair_gas_price: Some(self.l2_fair_gas_price), - virtual_blocks: Some(self.virtual_blocks), - operator_address: Some(self.operator_address.as_bytes().into()), - // Transactions are stored in execution order, therefore order is deterministic. - transactions: self - .transactions - .iter() - .map(|t| super::proto::Transaction { - // TODO: There is no guarantee that json encoding here will be deterministic. 
- json: Some(serde_json::to_string(t).unwrap()), - }) - .collect(), - } - } -} - -impl TryFrom for Payload { - type Error = anyhow::Error; - - fn try_from(block: SyncBlock) -> anyhow::Result { - Ok(Self { - hash: block.hash.unwrap_or_default(), - l1_batch_number: block.l1_batch_number, - timestamp: block.timestamp, - l1_gas_price: block.l1_gas_price, - l2_fair_gas_price: block.l2_fair_gas_price, - virtual_blocks: block.virtual_blocks.unwrap_or(0), - operator_address: block.operator_address, - transactions: block.transactions.context("Transactions are required")?, - }) - } -} - -impl Payload { - pub fn decode(payload: &validator::Payload) -> anyhow::Result { - zksync_protobuf::decode(&payload.0) - } - - pub fn encode(&self) -> validator::Payload { - validator::Payload(zksync_protobuf::encode(self)) - } -} diff --git a/core/lib/zksync_core/src/consensus/proto/mod.rs b/core/lib/zksync_core/src/consensus/proto/mod.rs deleted file mode 100644 index e6ac37696c2..00000000000 --- a/core/lib/zksync_core/src/consensus/proto/mod.rs +++ /dev/null @@ -1,2 +0,0 @@ -#![allow(warnings)] -include!(concat!(env!("OUT_DIR"), "/src/consensus/proto/gen.rs")); diff --git a/core/lib/zksync_core/src/consensus/storage/mod.rs b/core/lib/zksync_core/src/consensus/storage/mod.rs new file mode 100644 index 00000000000..516ba7eb19c --- /dev/null +++ b/core/lib/zksync_core/src/consensus/storage/mod.rs @@ -0,0 +1,434 @@ +//! Storage implementation based on DAL. +use anyhow::Context as _; +use zksync_concurrency::{ctx, error::Wrap as _, sync, time}; +use zksync_consensus_bft::PayloadManager; +use zksync_consensus_roles::validator; +use zksync_consensus_storage::{BlockStoreState, PersistentBlockStore, ReplicaState, ReplicaStore}; +use zksync_dal::{consensus_dal::Payload, ConnectionPool}; +use zksync_types::{Address, MiniblockNumber}; + +#[cfg(test)] +mod testonly; + +use crate::sync_layer::{ + fetcher::{FetchedBlock, FetcherCursor}, + sync_action::ActionQueueSender, +}; + +/// Context-aware `zksync_dal::StorageProcessor` wrapper. +pub(super) struct CtxStorage<'a>(zksync_dal::StorageProcessor<'a>); + +impl<'a> CtxStorage<'a> { + /// Wrapper for `access_storage_tagged()`. + pub async fn access(ctx: &ctx::Ctx, pool: &'a ConnectionPool) -> ctx::Result> { + Ok(Self( + ctx.wait(pool.access_storage_tagged("consensus")).await??, + )) + } + + /// Wrapper for `start_transaction()`. + pub async fn start_transaction<'b, 'c: 'b>( + &'c mut self, + ctx: &ctx::Ctx, + ) -> ctx::Result> { + Ok(CtxStorage( + ctx.wait(self.0.start_transaction()) + .await? + .context("sqlx")?, + )) + } + + /// Wrapper for `blocks_dal().get_sealed_miniblock_number()`. + pub async fn last_miniblock_number( + &mut self, + ctx: &ctx::Ctx, + ) -> ctx::Result { + let number = ctx + .wait(self.0.blocks_dal().get_sealed_miniblock_number()) + .await? + .context("sqlx")? + .context("no miniblocks in storage")?; // FIXME (PLA-703): handle empty storage + Ok(validator::BlockNumber(number.0.into())) + } + + /// Wrapper for `commit()`. + pub async fn commit(self, ctx: &ctx::Ctx) -> ctx::Result<()> { + Ok(ctx.wait(self.0.commit()).await?.context("sqlx")?) + } + + /// Wrapper for `consensus_dal().block_payload()`. + pub async fn payload( + &mut self, + ctx: &ctx::Ctx, + number: validator::BlockNumber, + operator_address: Address, + ) -> ctx::Result> { + Ok(ctx + .wait( + self.0 + .consensus_dal() + .block_payload(number, operator_address), + ) + .await??) + } + + /// Wrapper for `consensus_dal().first_certificate()`. 
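+    /// As with the other wrappers here, the DAL future is awaited via `ctx.wait()`,
+    /// which makes the call cancellation-aware.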
+ pub async fn first_certificate( + &mut self, + ctx: &ctx::Ctx, + ) -> ctx::Result> { + Ok(ctx + .wait(self.0.consensus_dal().first_certificate()) + .await??) + } + + /// Wrapper for `consensus_dal().last_certificate()`. + pub async fn last_certificate( + &mut self, + ctx: &ctx::Ctx, + ) -> ctx::Result> { + Ok(ctx + .wait(self.0.consensus_dal().last_certificate()) + .await??) + } + + /// Wrapper for `consensus_dal().certificate()`. + pub async fn certificate( + &mut self, + ctx: &ctx::Ctx, + number: validator::BlockNumber, + ) -> ctx::Result> { + Ok(ctx + .wait(self.0.consensus_dal().certificate(number)) + .await??) + } + + /// Wrapper for `consensus_dal().insert_certificate()`. + pub async fn insert_certificate( + &mut self, + ctx: &ctx::Ctx, + cert: &validator::CommitQC, + operator_address: Address, + ) -> ctx::Result<()> { + Ok(ctx + .wait( + self.0 + .consensus_dal() + .insert_certificate(cert, operator_address), + ) + .await??) + } + + /// Wrapper for `consensus_dal().replica_state()`. + pub async fn replica_state(&mut self, ctx: &ctx::Ctx) -> ctx::Result> { + Ok(ctx.wait(self.0.consensus_dal().replica_state()).await??) + } + + /// Wrapper for `consensus_dal().set_replica_state()`. + pub async fn set_replica_state( + &mut self, + ctx: &ctx::Ctx, + state: &ReplicaState, + ) -> ctx::Result<()> { + Ok(ctx + .wait(self.0.consensus_dal().set_replica_state(state)) + .await? + .context("sqlx")?) + } + + /// Wrapper for `FetcherCursor::new()`. + pub async fn new_fetcher_cursor(&mut self, ctx: &ctx::Ctx) -> ctx::Result { + Ok(ctx.wait(FetcherCursor::new(&mut self.0)).await??) + } +} + +#[derive(Debug)] +struct Cursor { + inner: FetcherCursor, + actions: ActionQueueSender, +} + +impl Cursor { + /// Advances the cursor by converting the block into actions and pushing them + /// to the actions queue. + /// Does nothing and returns Ok() if the block has been already processed. + /// Returns an error if a block with an earlier block number was expected. + async fn advance(&mut self, block: &validator::FinalBlock) -> anyhow::Result<()> { + let number = MiniblockNumber( + u32::try_from(block.header().number.0) + .context("Integer overflow converting block number")?, + ); + let payload = + Payload::decode(&block.payload).context("Failed deserializing block payload")?; + let want = self.inner.next_miniblock; + // Some blocks are missing. + if number > want { + return Err(anyhow::anyhow!("expected {want:?}, got {number:?}")); + } + // Block already processed. + if number < want { + return Ok(()); + } + let block = FetchedBlock { + number, + l1_batch_number: payload.l1_batch_number, + last_in_batch: payload.last_in_batch, + protocol_version: payload.protocol_version, + timestamp: payload.timestamp, + reference_hash: Some(payload.hash), + l1_gas_price: payload.l1_gas_price, + l2_fair_gas_price: payload.l2_fair_gas_price, + fair_pubdata_price: payload.fair_pubdata_price, + virtual_blocks: payload.virtual_blocks, + operator_address: payload.operator_address, + transactions: payload.transactions, + }; + self.actions.push_actions(self.inner.advance(block)).await; + Ok(()) + } +} + +/// Wrapper of `ConnectionPool` implementing `ReplicaStore` and `PayloadManager`. +#[derive(Clone, Debug)] +pub(super) struct Store { + pool: ConnectionPool, + operator_address: Address, +} + +/// Wrapper of `ConnectionPool` implementing `PersistentBlockStore`. +#[derive(Debug)] +pub(super) struct BlockStore { + inner: Store, + /// Mutex preventing concurrent execution of `store_next_block` calls. 
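+    /// Stays `None` until `set_actions_queue()` is called; keeping the cursor inside
+    /// the mutex also serializes cursor advancement.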
+ store_next_block_mutex: sync::Mutex>, +} + +impl Store { + /// Creates a `Store`. `pool` should have multiple connections to work efficiently. + pub fn new(pool: ConnectionPool, operator_address: Address) -> Self { + Self { + pool, + operator_address, + } + } + + /// Converts `Store` into a `BlockStore`. + pub fn into_block_store(self) -> BlockStore { + BlockStore { + inner: self, + store_next_block_mutex: sync::Mutex::new(None), + } + } +} + +impl BlockStore { + /// Generates and stores the genesis cert (signed by `validator_key`) for the last sealed miniblock. + /// No-op if db already contains a genesis cert. + pub async fn try_init_genesis( + &mut self, + ctx: &ctx::Ctx, + validator_key: &validator::SecretKey, + ) -> ctx::Result<()> { + let mut storage = CtxStorage::access(ctx, &self.inner.pool) + .await + .wrap("access()")?; + // Fetch last miniblock number outside of the transaction to avoid taking a lock. + let number = storage + .last_miniblock_number(ctx) + .await + .wrap("last_miniblock_number()")?; + + let mut txn = storage + .start_transaction(ctx) + .await + .wrap("start_transaction()")?; + if txn + .first_certificate(ctx) + .await + .wrap("first_certificate()")? + .is_some() + { + return Ok(()); + } + let payload = txn + .payload(ctx, number, self.inner.operator_address) + .await + .wrap("payload()")? + .context("miniblock disappeared")?; + let (genesis, _) = zksync_consensus_bft::testonly::make_genesis( + &[validator_key.clone()], + payload.encode(), + number, + ); + txn.insert_certificate(ctx, &genesis.justification, self.inner.operator_address) + .await + .wrap("insert_certificate()")?; + txn.commit(ctx).await.wrap("commit()") + } + + /// Sets an `ActionQueueSender` in the `BlockStore`. See `store_next_block()` for details. + pub async fn set_actions_queue( + &mut self, + ctx: &ctx::Ctx, + actions: ActionQueueSender, + ) -> ctx::Result<()> { + let mut storage = CtxStorage::access(ctx, &self.inner.pool) + .await + .wrap("access()")?; + let inner = storage + .new_fetcher_cursor(ctx) + .await + .wrap("new_fetcher_cursor()")?; + *sync::lock(ctx, &self.store_next_block_mutex).await? = Some(Cursor { inner, actions }); + Ok(()) + } +} + +#[async_trait::async_trait] +impl PersistentBlockStore for BlockStore { + async fn state(&self, ctx: &ctx::Ctx) -> ctx::Result { + let mut storage = CtxStorage::access(ctx, &self.inner.pool) + .await + .wrap("access()")?; + let first = storage + .first_certificate(ctx) + .await + .wrap("first_certificate()")? + .context("store is empty")?; + let last = storage + .last_certificate(ctx) + .await + .wrap("last_certificate()")? + .context("store is empty")?; + Ok(BlockStoreState { first, last }) + } + + async fn block( + &self, + ctx: &ctx::Ctx, + number: validator::BlockNumber, + ) -> ctx::Result { + let storage = &mut CtxStorage::access(ctx, &self.inner.pool) + .await + .wrap("access()")?; + let justification = storage + .certificate(ctx, number) + .await + .wrap("certificate()")? + .context("not found")?; + let payload = storage + .payload(ctx, number, self.inner.operator_address) + .await + .wrap("payload()")? + .context("miniblock disappeared from storage")?; + Ok(validator::FinalBlock { + payload: payload.encode(), + justification, + }) + } + + /// If actions queue is set (and the block has not been stored yet), + /// the block will be translated into a sequence of actions. + /// The received actions should be fed + /// to `ExternalIO`, so that `StateKeeper` will store the corresponding miniblock in the db. 
+ /// + /// `store_next_block()` call will wait synchronously for the miniblock. + /// Once miniblock is observed in storage, `store_next_block()` will store a cert for this + /// miniblock. + async fn store_next_block( + &self, + ctx: &ctx::Ctx, + block: &validator::FinalBlock, + ) -> ctx::Result<()> { + // This mutex prevents concurrent `store_next_block` calls. + let mut guard = ctx.wait(self.store_next_block_mutex.lock()).await?; + if let Some(cursor) = &mut *guard { + cursor.advance(block).await.context("cursor.advance()")?; + } + const POLL_INTERVAL: time::Duration = time::Duration::milliseconds(50); + loop { + let mut storage = CtxStorage::access(ctx, &self.inner.pool) + .await + .wrap("access()")?; + let number = storage + .last_miniblock_number(ctx) + .await + .wrap("last_miniblock_number()")?; + if number >= block.header().number { + storage + .insert_certificate(ctx, &block.justification, self.inner.operator_address) + .await + .wrap("insert_certificate()")?; + return Ok(()); + } + drop(storage); + ctx.sleep(POLL_INTERVAL).await?; + } + } +} + +#[async_trait::async_trait] +impl ReplicaStore for Store { + async fn state(&self, ctx: &ctx::Ctx) -> ctx::Result> { + let storage = &mut CtxStorage::access(ctx, &self.pool).await.wrap("access()")?; + storage.replica_state(ctx).await.wrap("replica_state()") + } + + async fn set_state(&self, ctx: &ctx::Ctx, state: &ReplicaState) -> ctx::Result<()> { + let storage = &mut CtxStorage::access(ctx, &self.pool).await.wrap("access()")?; + storage + .set_replica_state(ctx, state) + .await + .wrap("set_replica_state()") + } +} + +#[async_trait::async_trait] +impl PayloadManager for Store { + /// Currently (for the main node) proposing is implemented as just converting a miniblock from db (without a cert) into a + /// payload. + async fn propose( + &self, + ctx: &ctx::Ctx, + block_number: validator::BlockNumber, + ) -> ctx::Result { + const POLL_INTERVAL: time::Duration = time::Duration::milliseconds(50); + let mut storage = CtxStorage::access(ctx, &self.pool).await.wrap("access()")?; + storage + .certificate(ctx, block_number.prev()) + .await + .wrap("certificate()")? + .with_context(|| format!("parent of {block_number:?} is missing"))?; + drop(storage); + loop { + let mut storage = CtxStorage::access(ctx, &self.pool).await.wrap("access()")?; + if let Some(payload) = storage + .payload(ctx, block_number, self.operator_address) + .await + .wrap("payload()")? + { + return Ok(payload.encode()); + } + drop(storage); + ctx.sleep(POLL_INTERVAL).await?; + } + } + + /// Verify that `payload` is a correct proposal for the block `block_number`. + /// Currently (for the main node) it is implemented as checking whether the received payload + /// matches the miniblock in the db. + async fn verify( + &self, + ctx: &ctx::Ctx, + block_number: validator::BlockNumber, + payload: &validator::Payload, + ) -> ctx::Result<()> { + let want = self.propose(ctx, block_number).await?; + let want = Payload::decode(&want).context("Payload::decode(want)")?; + let got = Payload::decode(payload).context("Payload::decode(got)")?; + if got != want { + return Err(anyhow::anyhow!("unexpected payload: got {got:?} want {want:?}").into()); + } + Ok(()) + } +} diff --git a/core/lib/zksync_core/src/consensus/storage/testonly.rs b/core/lib/zksync_core/src/consensus/storage/testonly.rs new file mode 100644 index 00000000000..a0c32c57a69 --- /dev/null +++ b/core/lib/zksync_core/src/consensus/storage/testonly.rs @@ -0,0 +1,47 @@ +//! Storage test helpers. 
+use anyhow::Context as _; +use zksync_concurrency::{ctx, error::Wrap as _, time}; +use zksync_consensus_roles::validator; +use zksync_consensus_storage as storage; + +use super::{BlockStore, CtxStorage}; + +impl BlockStore { + /// Waits for the `number` miniblock to have a certificate. + pub async fn wait_for_certificate( + &self, + ctx: &ctx::Ctx, + number: validator::BlockNumber, + ) -> ctx::Result<()> { + const POLL_INTERVAL: time::Duration = time::Duration::milliseconds(100); + loop { + let mut storage = CtxStorage::access(ctx, &self.inner.pool) + .await + .wrap("access()")?; + if storage.certificate(ctx, number).await?.is_some() { + return Ok(()); + } + ctx.sleep(POLL_INTERVAL).await?; + } + } + + /// Waits for `want_last` block to have certificate, then fetches all miniblocks with certificates + /// and verifies them. + pub async fn wait_for_blocks_and_verify( + &self, + ctx: &ctx::Ctx, + validators: &validator::ValidatorSet, + want_last: validator::BlockNumber, + ) -> ctx::Result> { + self.wait_for_certificate(ctx, want_last).await?; + let blocks = storage::testonly::dump(ctx, self).await; + let got_last = blocks.last().context("empty store")?.header().number; + assert_eq!(got_last, want_last); + for block in &blocks { + block + .validate(validators, 1) + .context(block.header().number)?; + } + Ok(blocks) + } +} diff --git a/core/lib/zksync_core/src/consensus/testonly.rs b/core/lib/zksync_core/src/consensus/testonly.rs new file mode 100644 index 00000000000..ebbd43ee920 --- /dev/null +++ b/core/lib/zksync_core/src/consensus/testonly.rs @@ -0,0 +1,387 @@ +//! Utilities for testing the consensus module. +use anyhow::Context as _; +use rand::Rng; +use zksync_concurrency::{ctx, error::Wrap as _, scope, sync, time}; +use zksync_consensus_roles::validator; +use zksync_contracts::{BaseSystemContractsHashes, SystemContractCode}; +use zksync_dal::ConnectionPool; +use zksync_types::{ + api, block::MiniblockHasher, Address, L1BatchNumber, L2ChainId, MiniblockNumber, + ProtocolVersionId, H256, +}; + +use crate::{ + consensus::{ + storage::{BlockStore, CtxStorage}, + Store, + }, + genesis::{ensure_genesis_state, GenesisParams}, + state_keeper::{ + seal_criteria::NoopSealer, tests::MockBatchExecutorBuilder, MiniblockSealer, + ZkSyncStateKeeper, + }, + sync_layer::{ + sync_action::{ActionQueue, ActionQueueSender, SyncAction}, + ExternalIO, MainNodeClient, SyncState, + }, + utils::testonly::{create_l1_batch_metadata, create_l2_transaction}, +}; + +#[derive(Debug, Default)] +pub(crate) struct MockMainNodeClient { + prev_miniblock_hash: H256, + l2_blocks: Vec, +} + +impl MockMainNodeClient { + /// `miniblock_count` doesn't include a fictive miniblock. Returns hashes of generated transactions. 
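+    /// (The fictive miniblock is the empty miniblock closing each L1 batch; one is
+    /// generated here in addition to the `miniblock_count` regular miniblocks.)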
+ pub fn push_l1_batch(&mut self, miniblock_count: u32) -> Vec { + let l1_batch_number = self + .l2_blocks + .last() + .map_or(L1BatchNumber(0), |block| block.l1_batch_number + 1); + let number_offset = self.l2_blocks.len() as u32; + + let mut tx_hashes = vec![]; + let l2_blocks = (0..=miniblock_count).map(|number| { + let is_fictive = number == miniblock_count; + let number = number + number_offset; + let mut hasher = MiniblockHasher::new( + MiniblockNumber(number), + number.into(), + self.prev_miniblock_hash, + ); + + let transactions = if is_fictive { + vec![] + } else { + let transaction = create_l2_transaction(10, 100); + tx_hashes.push(transaction.hash()); + hasher.push_tx_hash(transaction.hash()); + vec![transaction.into()] + }; + let miniblock_hash = hasher.finalize(if number == 0 { + ProtocolVersionId::Version0 // The genesis block always uses the legacy hashing mode + } else { + ProtocolVersionId::latest() + }); + self.prev_miniblock_hash = miniblock_hash; + + api::en::SyncBlock { + number: MiniblockNumber(number), + l1_batch_number, + last_in_batch: is_fictive, + timestamp: number.into(), + l1_gas_price: 2, + l2_fair_gas_price: 3, + fair_pubdata_price: Some(24), + base_system_contracts_hashes: BaseSystemContractsHashes::default(), + operator_address: Address::repeat_byte(2), + transactions: Some(transactions), + virtual_blocks: Some(!is_fictive as u32), + hash: Some(miniblock_hash), + protocol_version: ProtocolVersionId::latest(), + } + }); + + self.l2_blocks.extend(l2_blocks); + tx_hashes + } +} + +#[async_trait::async_trait] +impl MainNodeClient for MockMainNodeClient { + async fn fetch_system_contract_by_hash( + &self, + _hash: H256, + ) -> anyhow::Result { + anyhow::bail!("Not implemented"); + } + + async fn fetch_genesis_contract_bytecode( + &self, + _address: Address, + ) -> anyhow::Result>> { + anyhow::bail!("Not implemented"); + } + + async fn fetch_protocol_version( + &self, + _protocol_version: ProtocolVersionId, + ) -> anyhow::Result { + anyhow::bail!("Not implemented"); + } + + async fn fetch_genesis_l1_batch_hash(&self) -> anyhow::Result { + anyhow::bail!("Not implemented"); + } + + async fn fetch_l2_block_number(&self) -> anyhow::Result { + if let Some(number) = self.l2_blocks.len().checked_sub(1) { + Ok(MiniblockNumber(number as u32)) + } else { + anyhow::bail!("Not implemented"); + } + } + + async fn fetch_l2_block( + &self, + number: MiniblockNumber, + with_transactions: bool, + ) -> anyhow::Result> { + let Some(mut block) = self.l2_blocks.get(number.0 as usize).cloned() else { + return Ok(None); + }; + if !with_transactions { + block.transactions = None; + } + Ok(Some(block)) + } +} + +/// Fake StateKeeper for tests. +pub(super) struct StateKeeper { + // Batch of the `last_block`. + last_batch: L1BatchNumber, + last_block: MiniblockNumber, + // timestamp of the last block. + last_timestamp: u64, + batch_sealed: bool, + + fee_per_gas: u64, + gas_per_pubdata: u32, + operator_address: Address, + + pub(super) actions_sender: ActionQueueSender, + pub(super) pool: ConnectionPool, +} + +/// Fake StateKeeper task to be executed in the background. +pub(super) struct StateKeeperRunner { + actions_queue: ActionQueue, + operator_address: Address, + pool: ConnectionPool, +} + +impl StateKeeper { + /// Constructs and initializes a new `StateKeeper`. + /// Caller has to run `StateKeeperRunner.run()` task in the background. 
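+    /// Performs genesis with `operator_address` as the first validator if the storage
+    /// is empty; otherwise resumes from the last sealed miniblock.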
+ pub async fn new( + pool: ConnectionPool, + operator_address: Address, + ) -> anyhow::Result<(Self, StateKeeperRunner)> { + // ensure genesis + let mut storage = pool.access_storage().await.context("access_storage()")?; + if storage + .blocks_dal() + .is_genesis_needed() + .await + .context("is_genesis_needed()")? + { + let mut params = GenesisParams::mock(); + params.first_validator = operator_address; + ensure_genesis_state(&mut storage, L2ChainId::default(), ¶ms) + .await + .context("ensure_genesis_state()")?; + } + + let last_l1_batch_number = storage + .blocks_dal() + .get_sealed_l1_batch_number() + .await + .context("get_sealed_l1_batch_number()")? + .context("no L1 batches in storage")?; + let last_miniblock_header = storage + .blocks_dal() + .get_last_sealed_miniblock_header() + .await + .context("get_last_sealed_miniblock_header()")? + .context("no miniblocks in storage")?; + + let pending_batch = storage + .blocks_dal() + .pending_batch_exists() + .await + .context("pending_batch_exists()")?; + let (actions_sender, actions_queue) = ActionQueue::new(); + Ok(( + Self { + last_batch: last_l1_batch_number + if pending_batch { 1 } else { 0 }, + last_block: last_miniblock_header.number, + last_timestamp: last_miniblock_header.timestamp, + batch_sealed: !pending_batch, + fee_per_gas: 10, + gas_per_pubdata: 100, + operator_address, + actions_sender, + pool: pool.clone(), + }, + StateKeeperRunner { + operator_address, + actions_queue, + pool: pool.clone(), + }, + )) + } + + fn open_block(&mut self) -> SyncAction { + if self.batch_sealed { + self.last_batch += 1; + self.last_block += 1; + self.last_timestamp += 5; + self.batch_sealed = false; + SyncAction::OpenBatch { + number: self.last_batch, + timestamp: self.last_timestamp, + l1_gas_price: 2, + l2_fair_gas_price: 3, + fair_pubdata_price: Some(24), + operator_address: self.operator_address, + protocol_version: ProtocolVersionId::latest(), + first_miniblock_info: (self.last_block, 1), + } + } else { + self.last_block += 1; + self.last_timestamp += 2; + SyncAction::Miniblock { + number: self.last_block, + timestamp: self.last_timestamp, + virtual_blocks: 0, + } + } + } + + /// Pushes a new miniblock with `transactions` transactions to the `StateKeeper`. + pub async fn push_block(&mut self, transactions: usize) { + assert!(transactions > 0); + let mut actions = vec![self.open_block()]; + for _ in 0..transactions { + let tx = create_l2_transaction(self.fee_per_gas, self.gas_per_pubdata); + actions.push(SyncAction::Tx(Box::new(tx.into()))); + } + actions.push(SyncAction::SealMiniblock); + self.actions_sender.push_actions(actions).await; + } + + /// Pushes `SealBatch` command to the `StateKeeper`. + pub async fn seal_batch(&mut self) { + // Each batch ends with an empty block (aka fictive block). + let mut actions = vec![self.open_block()]; + actions.push(SyncAction::SealBatch { virtual_blocks: 0 }); + self.actions_sender.push_actions(actions).await; + self.batch_sealed = true; + } + + /// Pushes `count` random miniblocks to the StateKeeper. + pub async fn push_random_blocks(&mut self, rng: &mut impl Rng, count: usize) { + for _ in 0..count { + // 20% chance to seal an L1 batch. + // `seal_batch()` also produces a (fictive) block. + if rng.gen_range(0..100) < 20 { + self.seal_batch().await; + } else { + self.push_block(rng.gen_range(3..8)).await; + } + } + } + + /// Last block that has been pushed to the `StateKeeper` via `ActionQueue`. + /// It might NOT be present in storage yet. 
+ pub fn last_block(&self) -> validator::BlockNumber { + validator::BlockNumber(self.last_block.0 as u64) + } + + /// Creates a new `BlockStore` for the underlying `ConnectionPool`. + pub fn store(&self) -> BlockStore { + Store::new(self.pool.clone(), self.operator_address).into_block_store() + } + + // Wait for all pushed miniblocks to be produced. + pub async fn wait_for_miniblocks(&self, ctx: &ctx::Ctx) -> ctx::Result<()> { + const POLL_INTERVAL: time::Duration = time::Duration::milliseconds(100); + + loop { + let mut storage = CtxStorage::access(ctx, &self.pool).await.wrap("access()")?; + if storage + .payload(ctx, self.last_block(), self.operator_address) + .await + .wrap("storage.payload()")? + .is_some() + { + return Ok(()); + } + ctx.sleep(POLL_INTERVAL).await?; + } + } +} + +/// Waits for L1 batches to be sealed and then populates them with mock metadata. +async fn run_mock_metadata_calculator(ctx: &ctx::Ctx, pool: &ConnectionPool) -> anyhow::Result<()> { + const POLL_INTERVAL: time::Duration = time::Duration::milliseconds(100); + let mut n = { + let mut storage = pool.access_storage().await.context("access_storage()")?; + storage + .blocks_dal() + .get_last_l1_batch_number_with_metadata() + .await + .context("get_last_l1_batch_number_with_metadata()")? + .context("no L1 batches in Postgres")? + }; + while let Ok(()) = ctx.sleep(POLL_INTERVAL).await { + let mut storage = pool.access_storage().await.context("access_storage()")?; + let last = storage + .blocks_dal() + .get_sealed_l1_batch_number() + .await + .context("get_sealed_l1_batch_number()")? + .context("no L1 batches in Postgres")?; + + while n < last { + n += 1; + let metadata = create_l1_batch_metadata(n.0); + storage + .blocks_dal() + .save_l1_batch_metadata(n, &metadata, H256::zero(), false) + .await + .context("save_l1_batch_metadata()")?; + } + } + Ok(()) +} + +impl StateKeeperRunner { + /// Executes the StateKeeper task. 
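+    /// Wires a `ZkSyncStateKeeper` to an `ExternalIO` instance backed by the action
+    /// queue, with mocked batch execution and metadata calculation, and runs it until
+    /// the context is canceled.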
+ pub async fn run(self, ctx: &ctx::Ctx) -> anyhow::Result<()> { + scope::run!(ctx, |ctx, s| async { + let (stop_sender, stop_receiver) = sync::watch::channel(false); + let (miniblock_sealer, miniblock_sealer_handle) = + MiniblockSealer::new(self.pool.clone(), 5); + let io = ExternalIO::new( + miniblock_sealer_handle, + self.pool.clone(), + self.actions_queue, + SyncState::new(), + Box::::default(), + self.operator_address, + u32::MAX, + L2ChainId::default(), + ) + .await; + s.spawn_bg(miniblock_sealer.run()); + s.spawn_bg(run_mock_metadata_calculator(ctx, &self.pool)); + s.spawn_bg( + ZkSyncStateKeeper::new( + stop_receiver, + Box::new(io), + Box::new(MockBatchExecutorBuilder), + Box::new(NoopSealer), + ) + .run(), + ); + ctx.canceled().await; + stop_sender.send_replace(true); + Ok(()) + }) + .await + } +} diff --git a/core/lib/zksync_core/src/consensus/tests.rs b/core/lib/zksync_core/src/consensus/tests.rs new file mode 100644 index 00000000000..dddac56f99d --- /dev/null +++ b/core/lib/zksync_core/src/consensus/tests.rs @@ -0,0 +1,326 @@ +use std::ops::Range; + +use tracing::Instrument as _; +use zksync_concurrency::{ctx, scope}; +use zksync_consensus_executor::testonly::{connect_full_node, ValidatorNode}; +use zksync_consensus_storage as storage; +use zksync_consensus_storage::PersistentBlockStore as _; +use zksync_consensus_utils::no_copy::NoCopy; +use zksync_dal::{connection::TestTemplate, ConnectionPool}; +use zksync_types::Address; + +use super::*; +use crate::consensus::storage::CtxStorage; + +const OPERATOR_ADDRESS: Address = Address::repeat_byte(17); + +async fn make_blocks( + ctx: &ctx::Ctx, + pool: &ConnectionPool, + mut range: Range, +) -> ctx::Result> { + let rng = &mut ctx.rng(); + let mut storage = CtxStorage::access(ctx, pool).await.wrap("access()")?; + let mut blocks: Vec = vec![]; + while !range.is_empty() { + let payload = storage + .payload(ctx, range.start, OPERATOR_ADDRESS) + .await + .wrap(range.start)? + .context("payload not found")? + .encode(); + let header = match blocks.last().as_ref() { + Some(parent) => validator::BlockHeader::new(parent.header(), payload.hash()), + None => validator::BlockHeader::genesis(payload.hash(), range.start), + }; + blocks.push(validator::FinalBlock { + payload, + justification: validator::testonly::make_justification( + rng, + &header, + validator::ProtocolVersion::EARLIEST, + ), + }); + range.start = range.start.next(); + } + Ok(blocks) +} + +#[tokio::test(flavor = "multi_thread")] +async fn test_validator_block_store() { + zksync_concurrency::testonly::abort_on_panic(); + let ctx = &ctx::test_root(&ctx::RealClock); + let rng = &mut ctx.rng(); + let pool = ConnectionPool::test_pool().await; + + // Fill storage with unsigned miniblocks. + // Fetch a suffix of blocks that we will generate (fake) certs for. + let want = scope::run!(ctx, |ctx, s| async { + // Start state keeper. + let (mut sk, runner) = testonly::StateKeeper::new(pool.clone(), OPERATOR_ADDRESS).await?; + s.spawn_bg(runner.run(ctx)); + sk.push_random_blocks(rng, 10).await; + sk.wait_for_miniblocks(ctx).await?; + let range = Range { + start: validator::BlockNumber(4), + end: sk.last_block(), + }; + make_blocks(ctx, &sk.pool, range) + .await + .context("make_blocks") + }) + .await + .unwrap(); + + // Insert blocks one by one and check the storage state. 
+ for (i, block) in want.iter().enumerate() { + let store = Store::new(pool.clone(), OPERATOR_ADDRESS).into_block_store(); + store.store_next_block(ctx, block).await.unwrap(); + assert_eq!(want[..i + 1], storage::testonly::dump(ctx, &store).await); + } +} + +// In the current implementation, consensus certificates are created asynchronously +// for the miniblocks constructed by the StateKeeper. This means that consensus actor +// is effectively just back filling the consensus certificates for the miniblocks in storage. +#[tokio::test(flavor = "multi_thread")] +async fn test_validator() { + zksync_concurrency::testonly::abort_on_panic(); + let ctx = &ctx::test_root(&ctx::AffineClock::new(10.)); + let rng = &mut ctx.rng(); + + scope::run!(ctx, |ctx, s| async { + // Start state keeper. + let pool = ConnectionPool::test_pool().await; + let (mut sk, runner) = testonly::StateKeeper::new(pool, OPERATOR_ADDRESS).await?; + s.spawn_bg(runner.run(ctx)); + + // Populate storage with a bunch of blocks. + sk.push_random_blocks(rng, 5).await; + sk.wait_for_miniblocks(ctx) + .await + .context("sk.wait_for_miniblocks(<1st phase>)")?; + + let cfg = ValidatorNode::for_single_validator(&mut ctx.rng()); + let validators = cfg.node.validators.clone(); + + // Restart consensus actor a couple times, making it process a bunch of blocks each time. + for iteration in 0..3 { + scope::run!(ctx, |ctx, s| async { + // Start consensus actor (in the first iteration it will select a genesis block and + // store a cert for it). + let cfg = MainNodeConfig { + executor: cfg.node.clone(), + validator: cfg.validator.clone(), + operator_address: OPERATOR_ADDRESS, + }; + s.spawn_bg(cfg.run(ctx, sk.pool.clone())); + sk.store() + .wait_for_certificate(ctx, sk.last_block()) + .await + .context("wait_for_certificate(<1st phase>)")?; + + // Generate couple more blocks and wait for consensus to catch up. + sk.push_random_blocks(rng, 3).await; + sk.store() + .wait_for_certificate(ctx, sk.last_block()) + .await + .context("wait_for_certificate(<2nd phase>)")?; + + // Synchronously produce blocks one by one, and wait for consensus. + for _ in 0..2 { + sk.push_random_blocks(rng, 1).await; + sk.store() + .wait_for_certificate(ctx, sk.last_block()) + .await + .context("wait_for_certificate(<3rd phase>)")?; + } + + sk.store() + .wait_for_blocks_and_verify(ctx, &validators, sk.last_block()) + .await + .context("wait_for_blocks_and_verify()")?; + Ok(()) + }) + .await + .context(iteration)?; + } + Ok(()) + }) + .await + .unwrap(); +} + +// Test running a validator node and a couple of full nodes (aka fetchers). +// Validator is producing signed blocks and fetchers are expected to fetch +// them directly or indirectly. +#[tokio::test(flavor = "multi_thread")] +async fn test_fetcher() { + const FETCHERS: usize = 2; + + zksync_concurrency::testonly::abort_on_panic(); + let ctx = &ctx::test_root(&ctx::AffineClock::new(10.)); + let rng = &mut ctx.rng(); + + // topology: + // validator <-> fetcher <-> fetcher <-> ... 
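+    // i.e., only the first fetcher is connected to the validator directly; the
+    // remaining fetchers must receive blocks transitively over gossip.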
+    let cfg = ValidatorNode::for_single_validator(rng);
+    let validators = cfg.node.validators.clone();
+    let mut cfg = MainNodeConfig {
+        executor: cfg.node,
+        validator: cfg.validator,
+        operator_address: OPERATOR_ADDRESS,
+    };
+    let mut fetcher_cfgs = vec![connect_full_node(rng, &mut cfg.executor)];
+    while fetcher_cfgs.len() < FETCHERS {
+        let cfg = connect_full_node(rng, fetcher_cfgs.last_mut().unwrap());
+        fetcher_cfgs.push(cfg);
+    }
+    let fetcher_cfgs: Vec<_> = fetcher_cfgs
+        .into_iter()
+        .map(|executor| FetcherConfig {
+            executor,
+            operator_address: OPERATOR_ADDRESS,
+        })
+        .collect();
+
+    // Create an initial database snapshot, which contains a cert for the genesis block.
+    let pool = scope::run!(ctx, |ctx, s| async {
+        let pool = ConnectionPool::test_pool().await;
+        let (mut sk, runner) = testonly::StateKeeper::new(pool, OPERATOR_ADDRESS).await?;
+        s.spawn_bg(runner.run(ctx));
+        s.spawn_bg(cfg.clone().run(ctx, sk.pool.clone()));
+        sk.push_random_blocks(rng, 5).await;
+        sk.store()
+            .wait_for_certificate(ctx, sk.last_block())
+            .await?;
+        Ok(sk.pool)
+    })
+    .await
+    .unwrap();
+    let template = TestTemplate::freeze(pool).await.unwrap();
+
+    // Run the validator and fetchers in parallel.
+    scope::run!(ctx, |ctx, s| async {
+        // Run the validator.
+        let pool = template.create_db().await?;
+        let (mut validator, runner) = testonly::StateKeeper::new(pool, OPERATOR_ADDRESS).await?;
+        s.spawn_bg(async {
+            runner
+                .run(ctx)
+                .instrument(tracing::info_span!("validator"))
+                .await
+                .context("validator")
+        });
+        s.spawn_bg(cfg.run(ctx, validator.pool.clone()));
+
+        // Run the fetchers.
+        let mut fetchers = vec![];
+        for (i, cfg) in fetcher_cfgs.into_iter().enumerate() {
+            let i = NoCopy::from(i);
+            let pool = template.create_db().await?;
+            let (fetcher, runner) = testonly::StateKeeper::new(pool, OPERATOR_ADDRESS).await?;
+            fetchers.push(fetcher.store());
+            s.spawn_bg(async {
+                let i = i;
+                runner
+                    .run(ctx)
+                    .instrument(tracing::info_span!("fetcher", i = *i))
+                    .await
+                    .with_context(|| format!("fetcher{}", *i))
+            });
+            s.spawn_bg(cfg.run(ctx, fetcher.pool, fetcher.actions_sender));
+        }
+
+        // Make the validator produce blocks and wait for the fetchers to get them.
+        validator.push_random_blocks(rng, 5).await;
+        let want_last = validator.last_block();
+        let want = validator
+            .store()
+            .wait_for_blocks_and_verify(ctx, &validators, want_last)
+            .await?;
+        for fetcher in &fetchers {
+            assert_eq!(
+                want,
+                fetcher
+                    .wait_for_blocks_and_verify(ctx, &validators, want_last)
+                    .await?
+            );
+        }
+        Ok(())
+    })
+    .await
+    .unwrap();
+}
+
+// Test the fetcher backfilling missing certs.
+#[tokio::test(flavor = "multi_thread")]
+async fn test_fetcher_backfill_certs() {
+    zksync_concurrency::testonly::abort_on_panic();
+    let ctx = &ctx::test_root(&ctx::AffineClock::new(10.));
+    let rng = &mut ctx.rng();
+
+    let cfg = ValidatorNode::for_single_validator(rng);
+    let mut cfg = MainNodeConfig {
+        executor: cfg.node,
+        validator: cfg.validator,
+        operator_address: OPERATOR_ADDRESS,
+    };
+    let fetcher_cfg = FetcherConfig {
+        executor: connect_full_node(rng, &mut cfg.executor),
+        operator_address: OPERATOR_ADDRESS,
+    };
+
+    // Create an initial database snapshot, which contains some blocks: some with certs, some
+    // without.
+    let pool = scope::run!(ctx, |ctx, s| async {
+        let pool = ConnectionPool::test_pool().await;
+        let (mut sk, runner) = testonly::StateKeeper::new(pool, OPERATOR_ADDRESS).await?;
+        s.spawn_bg(runner.run(ctx));
+
+        // Some blocks with certs.
+        scope::run!(ctx, |ctx, s| async {
+            s.spawn_bg(cfg.clone().run(ctx, sk.pool.clone()));
+            sk.push_random_blocks(rng, 5).await;
+            sk.store()
+                .wait_for_certificate(ctx, sk.last_block())
+                .await?;
+            Ok(())
+        })
+        .await?;
+
+        // Some blocks without certs.
+        sk.push_random_blocks(rng, 5).await;
+        sk.wait_for_miniblocks(ctx).await?;
+        Ok(sk.pool)
+    })
+    .await
+    .unwrap();
+    let template = TestTemplate::freeze(pool).await.unwrap();
+
+    // Run the validator and fetcher in parallel.
+    scope::run!(ctx, |ctx, s| async {
+        // Run the validator.
+        let pool = template.create_db().await?;
+        let (mut validator, runner) = testonly::StateKeeper::new(pool, OPERATOR_ADDRESS).await?;
+        s.spawn_bg(runner.run(ctx));
+        s.spawn_bg(cfg.run(ctx, validator.pool.clone()));
+
+        // Run the fetcher.
+        let pool = template.create_db().await?;
+        let (fetcher, runner) = testonly::StateKeeper::new(pool, OPERATOR_ADDRESS).await?;
+        let fetcher_store = fetcher.store();
+        s.spawn_bg(runner.run(ctx));
+        s.spawn_bg(fetcher_cfg.run(ctx, fetcher.pool, fetcher.actions_sender));
+
+        // Make the validator produce new blocks and
+        // wait for the fetcher to get both the missing certs and the new blocks.
+        validator.push_random_blocks(rng, 5).await;
+        fetcher_store
+            .wait_for_certificate(ctx, validator.last_block())
+            .await?;
+        Ok(())
+    })
+    .await
+    .unwrap();
+}
diff --git a/core/lib/zksync_core/src/consistency_checker/mod.rs b/core/lib/zksync_core/src/consistency_checker/mod.rs
index cb122ca2f47..768684e5fba 100644
--- a/core/lib/zksync_core/src/consistency_checker/mod.rs
+++ b/core/lib/zksync_core/src/consistency_checker/mod.rs
@@ -1,204 +1,319 @@
-use std::time::Duration;
+use std::{fmt, time::Duration};
 
+use anyhow::Context as _;
+use tokio::sync::watch;
 use zksync_contracts::PRE_BOOJUM_COMMIT_FUNCTION;
-use zksync_dal::ConnectionPool;
-use zksync_types::{
-    web3::{error, ethabi, transports::Http, types::TransactionId, Web3},
-    L1BatchNumber,
+use zksync_dal::{ConnectionPool, StorageProcessor};
+use zksync_eth_client::{clients::QueryClient, Error as L1ClientError, EthInterface};
+use zksync_types::{web3::ethabi, L1BatchNumber, H256};
+
+use crate::{
+    metrics::{CheckerComponent, EN_METRICS},
+    utils::wait_for_l1_batch_with_metadata,
 };
 
-use crate::metrics::{CheckerComponent, EN_METRICS};
+#[cfg(test)]
+mod tests;
 
-#[derive(Debug)]
-pub struct ConsistencyChecker {
-    // ABI of the zkSync contract
-    contract: ethabi::Contract,
-    // How many past batches to check when starting
-    max_batches_to_recheck: u32,
-    web3: Web3<Http>,
-    db: ConnectionPool,
+#[derive(Debug, thiserror::Error)]
+enum CheckError {
+    #[error("Web3 error communicating with L1")]
+    Web3(#[from] L1ClientError),
+    #[error("Internal error")]
+    Internal(#[from] anyhow::Error),
 }
 
-const SLEEP_DELAY: Duration = Duration::from_secs(5);
+impl From<zksync_dal::SqlxError> for CheckError {
+    fn from(err: zksync_dal::SqlxError) -> Self {
+        Self::Internal(err.into())
+    }
+}
 
-impl ConsistencyChecker {
-    pub fn new(web3_url: &str, max_batches_to_recheck: u32, db: ConnectionPool) -> Self {
-        let web3 = Web3::new(Http::new(web3_url).unwrap());
-        let contract = zksync_contracts::zksync_contract();
-        Self {
-            web3,
-            contract,
-            max_batches_to_recheck,
-            db,
-        }
+trait UpdateCheckedBatch: fmt::Debug + Send + Sync {
+    fn update_checked_batch(&mut self, last_checked_batch: L1BatchNumber);
+}
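// ---------------------------------------------------------------------------
// Editorial sketch, not part of the patch: `UpdateCheckedBatch` is a narrow
// observer hook, so tests can substitute their own sink for checked batch
// numbers instead of the metric-reporting default below. A hypothetical
// channel-backed implementation (the `tokio` mpsc channel is an assumption):
impl UpdateCheckedBatch for tokio::sync::mpsc::UnboundedSender<L1BatchNumber> {
    fn update_checked_batch(&mut self, last_checked_batch: L1BatchNumber) {
        // Deliberately ignore send errors: the receiving side may have exited.
        self.send(last_checked_batch).ok();
    }
}
// ---------------------------------------------------------------------------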
+/// Default [`UpdateCheckedBatch`] implementation that reports the batch number as a metric.
+impl UpdateCheckedBatch for () {
+    fn update_checked_batch(&mut self, last_checked_batch: L1BatchNumber) {
+        EN_METRICS.last_correct_batch[&CheckerComponent::ConsistencyChecker]
+            .set(last_checked_batch.0.into());
     }
+}
 
-    async fn check_commitments(&self, batch_number: L1BatchNumber) -> Result<bool, error::Error> {
-        let mut storage = self.db.access_storage().await.unwrap();
+/// Consistency checker behavior when L1 commit data divergence is detected.
+// This is a temporary workaround for a bug that sometimes leads to incorrect L1 batch data returned by the server
+// (and thus persisted by external nodes). Eventually, we want to go back to bailing on L1 data mismatch;
+// for now, it's only enabled for the unit tests.
+#[derive(Debug)]
+enum L1DataMismatchBehavior {
+    #[cfg(test)]
+    Bail,
+    Log,
+}
+
+/// L1 commit data loaded from Postgres.
+#[derive(Debug)]
+struct LocalL1BatchCommitData {
+    is_pre_boojum: bool,
+    l1_commit_data: ethabi::Token,
+    commit_tx_hash: H256,
+}
 
-        let storage_l1_batch = storage
+impl LocalL1BatchCommitData {
+    /// Returns `Ok(None)` if Postgres doesn't contain all data necessary to check the L1 commitment
+    /// for the specified batch.
+    async fn new(
+        storage: &mut StorageProcessor<'_>,
+        batch_number: L1BatchNumber,
+    ) -> anyhow::Result<Option<Self>> {
+        let Some(storage_l1_batch) = storage
             .blocks_dal()
             .get_storage_l1_batch(batch_number)
-            .await
-            .unwrap()
-            .unwrap_or_else(|| panic!("L1 batch #{} not found in the database", batch_number));
+            .await?
+        else {
+            return Ok(None);
+        };
 
-        let commit_tx_id = storage_l1_batch
-            .eth_commit_tx_id
-            .unwrap_or_else(|| panic!("Commit tx not found for L1 batch #{}", batch_number))
-            as u32;
+        let Some(commit_tx_id) = storage_l1_batch.eth_commit_tx_id else {
+            return Ok(None);
+        };
+        let commit_tx_hash = storage
+            .eth_sender_dal()
+            .get_confirmed_tx_hash_by_eth_tx_id(commit_tx_id as u32)
+            .await?
+            .with_context(|| {
+                format!("Commit tx hash not found in the database for tx id {commit_tx_id}")
+            })?;
 
-        let block_metadata = storage
+        let Some(l1_batch) = storage
             .blocks_dal()
             .get_l1_batch_with_metadata(storage_l1_batch)
-            .await
-            .unwrap()
-            .unwrap_or_else(|| {
-                panic!(
-                    "Metadata for L1 batch #{} not found in the database",
-                    batch_number
-                )
-            });
+            .await?
+        else {
+            return Ok(None);
+        };
 
-        let commit_tx_hash = storage
-            .eth_sender_dal()
-            .get_confirmed_tx_hash_by_eth_tx_id(commit_tx_id)
-            .await
-            .unwrap()
-            .unwrap_or_else(|| {
-                panic!(
-                    "Commit tx hash not found in the database. Commit tx id: {}",
-                    commit_tx_id
-                )
-            });
+        let is_pre_boojum = l1_batch
+            .header
+            .protocol_version
+            .map_or(true, |version| version.is_pre_boojum());
+        let metadata = &l1_batch.metadata;
 
-        tracing::info!(
-            "Checking commit tx {} for batch {}",
+        // For Boojum batches, `bootloader_initial_content_commitment` and `events_queue_commitment`
+        // are (temporarily) only computed by the metadata calculator if it runs with the full tree.
+        // I.e., for these batches, we may have partial metadata in Postgres, which would not be sufficient
+        // to compute the local L1 commitment.
+        if !is_pre_boojum
+            && (metadata.bootloader_initial_content_commitment.is_none()
+                || metadata.events_queue_commitment.is_none())
+        {
+            return Ok(None);
+        }
+
+        Ok(Some(Self {
+            is_pre_boojum,
+            l1_commit_data: l1_batch.l1_commit_data(),
             commit_tx_hash,
-            batch_number.0
-        );
+        }))
+    }
+}
 
-        // we can't get tx calldata from db because it can be fake
-        let commit_tx = self
-            .web3
-            .eth()
-            .transaction(TransactionId::Hash(commit_tx_hash))
-            .await?
- .expect("Commit tx not found on L1"); +#[derive(Debug)] +pub struct ConsistencyChecker { + /// ABI of the zkSync contract + contract: ethabi::Contract, + /// How many past batches to check when starting + max_batches_to_recheck: u32, + sleep_interval: Duration, + l1_client: Box, + l1_batch_updater: Box, + l1_data_mismatch_behavior: L1DataMismatchBehavior, + pool: ConnectionPool, +} + +impl ConsistencyChecker { + const DEFAULT_SLEEP_INTERVAL: Duration = Duration::from_secs(5); + + pub fn new(web3_url: &str, max_batches_to_recheck: u32, pool: ConnectionPool) -> Self { + let web3 = QueryClient::new(web3_url).unwrap(); + Self { + contract: zksync_contracts::zksync_contract(), + max_batches_to_recheck, + sleep_interval: Self::DEFAULT_SLEEP_INTERVAL, + l1_client: Box::new(web3), + l1_batch_updater: Box::new(()), + l1_data_mismatch_behavior: L1DataMismatchBehavior::Log, + pool, + } + } + + async fn check_commitments( + &self, + batch_number: L1BatchNumber, + local: &LocalL1BatchCommitData, + ) -> Result { + let commit_tx_hash = local.commit_tx_hash; + tracing::info!("Checking commit tx {commit_tx_hash} for L1 batch #{batch_number}"); let commit_tx_status = self - .web3 - .eth() - .transaction_receipt(commit_tx_hash) + .l1_client + .get_tx_status(commit_tx_hash, "consistency_checker") .await? - .expect("Commit tx receipt not found on L1") - .status; + .with_context(|| format!("Receipt for tx {commit_tx_hash:?} not found on L1"))?; + if !commit_tx_status.success { + let err = anyhow::anyhow!("Main node gave us a failed commit tx"); + return Err(err.into()); + } - assert_eq!( - commit_tx_status, - Some(1.into()), - "Main node gave us a failed commit tx" - ); + // We can't get tx calldata from db because it can be fake. + let commit_tx_input_data = self + .l1_client + .get_tx(commit_tx_hash, "consistency_checker") + .await? + .with_context(|| format!("Commit for tx {commit_tx_hash:?} not found on L1"))? + .input; + // TODO (PLA-721): Check receiving contract and selector - let commit_function = if block_metadata - .header - .protocol_version - .unwrap() - .is_pre_boojum() - { - PRE_BOOJUM_COMMIT_FUNCTION.clone() + let commit_function = if local.is_pre_boojum { + &*PRE_BOOJUM_COMMIT_FUNCTION } else { - self.contract.function("commitBatches").unwrap().clone() + self.contract + .function("commitBatches") + .context("L1 contract does not have `commitBatches` function")? }; + let commitment = + Self::extract_commit_data(&commit_tx_input_data.0, commit_function, batch_number) + .with_context(|| { + format!("Failed extracting commit data for transaction {commit_tx_hash:?}") + })?; + Ok(commitment == local.l1_commit_data) + } - let commitments = commit_function - .decode_input(&commit_tx.input.0[4..]) - .unwrap() + fn extract_commit_data( + commit_tx_input_data: &[u8], + commit_function: ðabi::Function, + batch_number: L1BatchNumber, + ) -> anyhow::Result { + let mut commit_input_tokens = commit_function + .decode_input(&commit_tx_input_data[4..]) + .with_context(|| format!("Failed decoding calldata for L1 commit function"))?; + let mut commitments = commit_input_tokens .pop() - .unwrap() + .context("Unexpected signature for L1 commit function")? .into_array() - .unwrap(); + .context("Unexpected signature for L1 commit function")?; // Commit transactions usually publish multiple commitments at once, so we need to find // the one that corresponds to the batch we're checking. 
         // Commit transactions usually publish multiple commitments at once, so we need to find
         // the one that corresponds to the batch we're checking.
-        let first_batch_number = match &commitments[0] {
-            ethabi::Token::Tuple(tuple) => tuple[0].clone().into_uint().unwrap().as_usize(),
-            _ => panic!("ABI does not match the expected one"),
+        let first_batch_commitment = commitments
+            .first()
+            .context("List of L1 batch commitments is empty")?;
+        let ethabi::Token::Tuple(first_batch_commitment) = first_batch_commitment else {
+            anyhow::bail!("Unexpected signature for L1 commit function");
         };
-        let commitment = &commitments[batch_number.0 as usize - first_batch_number];
+        let first_batch_number = first_batch_commitment
+            .first()
+            .context("Unexpected signature for L1 commit function")?;
+        let first_batch_number = first_batch_number
+            .clone()
+            .into_uint()
+            .context("Unexpected signature for L1 commit function")?;
+        let first_batch_number = usize::try_from(first_batch_number)
+            .map_err(|_| anyhow::anyhow!("Integer overflow for L1 batch number"))?;
+        // ^ `TryFrom` has a `&str` error here, so we can't use `.context()`.
 
-        Ok(commitment == &block_metadata.l1_commit_data())
+        let commitment = (batch_number.0 as usize)
+            .checked_sub(first_batch_number)
+            .and_then(|offset| {
+                (offset < commitments.len()).then(|| commitments.swap_remove(offset))
+            });
+        commitment.with_context(|| {
+            let actual_range = first_batch_number..(first_batch_number + commitments.len());
+            format!(
+                "Malformed commitment data; it should prove L1 batch #{batch_number}, \
+                 but it actually proves batches #{actual_range:?}"
+            )
+        })
     }
 
-    async fn last_committed_batch(&self) -> L1BatchNumber {
-        self.db
+    async fn last_committed_batch(&self) -> anyhow::Result<Option<L1BatchNumber>> {
+        Ok(self
+            .pool
             .access_storage()
-            .await
-            .unwrap()
+            .await?
             .blocks_dal()
             .get_number_of_last_l1_batch_committed_on_eth()
-            .await
-            .unwrap()
-            .unwrap_or(L1BatchNumber(0))
+            .await?)
     }
 
-    pub async fn run(
-        self,
-        stop_receiver: tokio::sync::watch::Receiver<bool>,
-    ) -> anyhow::Result<()> {
-        let mut batch_number: L1BatchNumber = self
+    pub async fn run(mut self, mut stop_receiver: watch::Receiver<bool>) -> anyhow::Result<()> {
+        // It doesn't make sense to start the checker until we have at least one L1 batch with metadata.
+        let earliest_l1_batch_number =
+            wait_for_l1_batch_with_metadata(&self.pool, self.sleep_interval, &mut stop_receiver)
+                .await?;
+
+        let Some(earliest_l1_batch_number) = earliest_l1_batch_number else {
+            return Ok(()); // Stop signal received
+        };
+
+        let last_committed_batch = self
             .last_committed_batch()
-            .await
+            .await?
+            .unwrap_or(earliest_l1_batch_number);
+        let first_batch_to_check: L1BatchNumber = last_committed_batch
             .0
             .saturating_sub(self.max_batches_to_recheck)
-            .max(1)
             .into();
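// ---------------------------------------------------------------------------
// Editorial sketch, not part of the patch: the clamping above and below boils
// down to "start `max_batches_to_recheck` batches behind the last committed
// batch, but never before the earliest batch with metadata and never at the
// genesis batch". A hypothetical standalone form of that arithmetic:
fn first_batch_to_check(
    last_committed: u32,
    max_batches_to_recheck: u32,
    earliest_with_metadata: u32,
) -> u32 {
    last_committed
        .saturating_sub(max_batches_to_recheck) // don't underflow below batch 0
        .max(earliest_with_metadata) // the batch must be present in Postgres
        .max(1) // the genesis batch is never committed on L1, so skip it
}
// ---------------------------------------------------------------------------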
+        // We shouldn't check batches that are not present in the storage, and we should skip
+        // the genesis batch since it's not committed on L1.
+        let first_batch_to_check = first_batch_to_check
+            .max(earliest_l1_batch_number)
+            .max(L1BatchNumber(1));
+        tracing::info!(
+            "Last committed L1 batch is #{last_committed_batch}; starting checks from L1 batch #{first_batch_to_check}"
+        );
 
-        tracing::info!("Starting consistency checker from batch {}", batch_number.0);
-
+        let mut batch_number = first_batch_to_check;
         loop {
             if *stop_receiver.borrow() {
                 tracing::info!("Stop signal received, consistency_checker is shutting down");
                 break;
             }
 
-            let metadata = self
-                .db
-                .access_storage()
-                .await
-                .unwrap()
-                .blocks_dal()
-                .get_l1_batch_metadata(batch_number)
-                .await
-                .unwrap();
-            let batch_has_metadata = metadata
-                .map(|m| {
-                    m.metadata.bootloader_initial_content_commitment.is_some()
-                        && m.metadata.events_queue_commitment.is_some()
-                })
-                .unwrap_or(false);
-
+            let mut storage = self.pool.access_storage().await?;
             // The batch might be already committed but not yet processed by the external node's tree
             // OR the batch might be processed by the external node's tree but not yet committed.
             // We need both.
-            if !batch_has_metadata || self.last_committed_batch().await < batch_number {
-                tokio::time::sleep(SLEEP_DELAY).await;
+            let Some(local) = LocalL1BatchCommitData::new(&mut storage, batch_number).await? else {
+                tokio::time::sleep(self.sleep_interval).await;
                 continue;
-            }
+            };
+            drop(storage);
 
-            match self.check_commitments(batch_number).await {
+            match self.check_commitments(batch_number, &local).await {
                 Ok(true) => {
-                    tracing::info!("Batch {} is consistent with L1", batch_number.0);
-                    EN_METRICS.last_correct_batch[&CheckerComponent::ConsistencyChecker]
-                        .set(batch_number.0.into());
-                    batch_number.0 += 1;
+                    tracing::info!("L1 batch #{batch_number} is consistent with L1");
+                    self.l1_batch_updater.update_checked_batch(batch_number);
+                    batch_number += 1;
                 }
-                Ok(false) => {
-                    anyhow::bail!("Batch {} is inconsistent with L1", batch_number.0);
+                Ok(false) => match &self.l1_data_mismatch_behavior {
+                    #[cfg(test)]
+                    L1DataMismatchBehavior::Bail => {
+                        anyhow::bail!("L1 batch #{batch_number} is inconsistent with L1");
+                    }
+                    L1DataMismatchBehavior::Log => {
+                        tracing::warn!("L1 batch #{batch_number} is inconsistent with L1");
+                    }
+                },
+                Err(CheckError::Web3(err)) => {
+                    tracing::warn!("Error accessing L1; will retry after a delay: {err}");
+                    tokio::time::sleep(self.sleep_interval).await;
                 }
-                Err(e) => {
-                    tracing::warn!("Consistency checker error: {}", e);
-                    tokio::time::sleep(SLEEP_DELAY).await;
+                Err(CheckError::Internal(err)) => {
+                    let context =
+                        format!("Failed verifying consistency of L1 batch #{batch_number}");
+                    return Err(err.context(context));
                 }
             }
         }
diff --git a/core/lib/zksync_core/src/consistency_checker/tests/commit_l1_batch_200000_testnet_goerli.calldata b/core/lib/zksync_core/src/consistency_checker/tests/commit_l1_batch_200000_testnet_goerli.calldata
new file mode 100644
index 0000000000000000000000000000000000000000..8018825804e4b15e5708a12061eb8d684b5a912a
GIT binary patch
literal 35012
[... 35012 bytes of base85-encoded binary test calldata omitted ...]
literal 0
HcmV?d00001

diff --git a/core/lib/zksync_core/src/consistency_checker/tests/commit_l1_batch_351000-351004_mainnet.calldata b/core/lib/zksync_core/src/consistency_checker/tests/commit_l1_batch_351000-351004_mainnet.calldata
new file mode 100644
index 0000000000000000000000000000000000000000..f7983ec06bef5f7ebf4fb441aa932fe44fdd7e61
GIT binary patch
literal 72932
[... 72932 bytes of base85-encoded binary test calldata omitted ...]
z!|&5$Whu#y#v6izZem;~1>IyBG&>qsqOp2%=8X>n7wCpte7!~A>+ER8fk$U)zlePg zc~kzl@A{@s^cy>bj()?9av*Q)q-?d-=`J!ejr;SuoKQ@Z#+pR&`jScJz=$_KWQcf^ z-&P<>69a-7(fH8-v^o8j3=J7rJ_Wvg&Xg@>FFwYR|535D2eAYLXyf1X5uSl{f%TTb zrA)&L&{7=ZKb2{ojbisZvuw?N)Es~&3C%`Ig>B{F#o3Cc7=J@M8j(Qo7G0%HI^8+ z*Gp>4p3A#tBmPYcnoAn7pf{`mdXtYY-l*S6F_KQ3u_qv(VK3yZJjRpKB2jw@&8(;oOr6y@ALf-hQ=7zlSAF8dk41ddd`X=DqCcddPT`Ag6 z&&1i)iL&g~4)hj}r?c8d_7R+=8q@E+BiNGpE7gdQIN66=#DwyAJJm;?x$LPmpDLr= z#5fAwC|imsH<^QmwlSWKKgA4@Dzii(b*42u4DY2VBnQ_xbk3Y={1A2p*J-?Ddc7Kv zZG5kt$TrcE3a*C4$DBp9P4I4_U%tz`8B+Ga^?=9>=o^2wl1HL_m0XVh3Cby&gadKx zd!fRBIQB6H#7VmRp`4Mp>)v11jo3KZSsh+!3s!-pTG5G(WBIDlF~4Z7?yId8b~l5Y z_>ub`oR|1e)v*Ff@=pcF^HIB7cKbJCF^*1jNymGT(fRUk%mh1V!1xP%^f*s z_MB>cHZLf>`F{0d6P6lJ6*`#M>E8+f&$=u2Rfbn7ZjGDtA=&&^^UOW-Jm1xRW88_< zFG^jBEW2>eV@kojw?@L^@Y;+3OT3x4b(11yyj|d!8uqHoi+6d_ zJr-R8ae{P~C4Q{7H++7kL0heA?!X;`Hq@Lsa$ClnALT|hJsMmtT&zgrUq2aAzUEa*thq#2Bfv2VwUaJJO@5wd{ zdFw9nA3J=I`QDek{=L6W1J7m&z`vGiH%$Swf1STynL&B}-M?sA{uh>n)#JkCAIcT~ z9ifL#4b3!a{x~$|$||z+pD1Vk?HU@=d(qfiIf9f+FUwX|OMedjtLEWGWx%gBkoU4s zhdQCWS*oL#Mmf7&iociBs|O?c)EoZiceiFlU$Vg+YP^|%e1Q4k{qO_ISSd7l zwxb_fzNk(}pS-e1U2fBe#lEjjb2*`kF8HMQRfr5C3|~GGRMeE_H65ZBvwh4mnvjSnl6?>w$Xpr?$txk+Oc+ z9$-#Jd|`0J`e)u&h8J1~dv7+byVmG1!wcKCu6W!L_f)y`s+O*$FTJWQ+Y9g1fqC(Y zHFxdOf2Hj3dhFb>6HDUHJe8B}TD@mk^$xEpC|3fehZ@dw>b#)iE>v_K8xp7Htk+rv zJ(S4i5jMVRT~)!Xm}@HaqB4efwT;PiVJ9{-RlfE9@R{Quk{419c^gJho2$RA^pP#a JmX9C2{C~IK#qR(B literal 0 HcmV?d00001 diff --git a/core/lib/zksync_core/src/consistency_checker/tests/commit_l1_batch_4470_testnet_sepolia.calldata b/core/lib/zksync_core/src/consistency_checker/tests/commit_l1_batch_4470_testnet_sepolia.calldata new file mode 100644 index 0000000000000000000000000000000000000000..d1ecde78199a848733672b25171d809e54479997 GIT binary patch literal 1956 zcmXSrk2uPJ7YLTFe6yD~x18~Jbpii|G{=;_nU--K_8(0e=Ih99*E+Sn4zG5mlWXwG z10|1Ma${`!u52@@u6EO%fzhD_Y{i%(+kKvPNU^=<(#D`Ury#oHvQ~A4hrd^EE|yR^Vpi_D*7)Bt`45YhKRUKE;zfs2 z(Wh^rN=yABUSS41(q| z!<7OV=<%1@xaTo2#b7g+PI6i;ND%ZBV;ENn&`=5k?s@!O95I=GE!I8U%C?HUNuX` z#YPtmj2nPTKO3p4F4&mMz`(Tdt|QZNK>;<}AZDy2o{9Ha7ru zYIeds1TqL1ZlJ)^>SP6T-~^Kw1A}u8LtFC1r40@o24_8TK1v<8-@3JB{VUzGd=ohG zf)*%M-B183%4OIyFMY}9d{+%K@0l8L*BiG^V*20qbe^DPfv?lzhB-&K02Spilzk5p zxjE~bRm;RDf8@$#b~+>*`!~KclDxrrbi)ie!62Zbe1`NjYmc0}a?#w?x#de|u L1BatchWithMetadata { + L1BatchWithMetadata { + header: create_l1_batch(number), + metadata: create_l1_batch_metadata(number), + factory_deps: vec![], + } +} + +const PRE_BOOJUM_PROTOCOL_VERSION: ProtocolVersionId = ProtocolVersionId::Version10; + +fn create_pre_boojum_l1_batch_with_metadata(number: u32) -> L1BatchWithMetadata { + let mut l1_batch = L1BatchWithMetadata { + header: create_l1_batch(number), + metadata: create_l1_batch_metadata(number), + factory_deps: vec![], + }; + l1_batch.header.protocol_version = Some(PRE_BOOJUM_PROTOCOL_VERSION); + l1_batch.metadata.bootloader_initial_content_commitment = None; + l1_batch.metadata.events_queue_commitment = None; + l1_batch +} + +fn build_commit_tx_input_data(batches: &[L1BatchWithMetadata]) -> Vec { + let commit_tokens = batches.iter().map(L1BatchWithMetadata::l1_commit_data); + let commit_tokens = ethabi::Token::Array(commit_tokens.collect()); + + let mut encoded = vec![]; + // Fake Solidity function selector (not checked for now) + encoded.extend_from_slice(b"fake"); + // Mock an additional argument used in real `commitBlocks` / `commitBatches`. In real transactions, + // it's taken from the L1 batch previous to `batches[0]`, but since this argument is not checked, + // it's OK to use `batches[0]`. 
+    let prev_header_tokens = batches[0].l1_header_data();
+    encoded.extend_from_slice(&ethabi::encode(&[prev_header_tokens, commit_tokens]));
+    encoded
+}
+
+fn create_mock_checker(client: MockEthereum, pool: ConnectionPool) -> ConsistencyChecker {
+    ConsistencyChecker {
+        contract: zksync_contracts::zksync_contract(),
+        max_batches_to_recheck: 100,
+        sleep_interval: Duration::from_millis(10),
+        l1_client: Box::new(client),
+        l1_batch_updater: Box::new(()),
+        l1_data_mismatch_behavior: L1DataMismatchBehavior::Bail,
+        pool,
+    }
+}
+
+impl UpdateCheckedBatch for mpsc::UnboundedSender<L1BatchNumber> {
+    fn update_checked_batch(&mut self, last_checked_batch: L1BatchNumber) {
+        self.send(last_checked_batch).ok();
+    }
+}
+
+#[test]
+fn build_commit_tx_input_data_is_correct() {
+    let contract = zksync_contracts::zksync_contract();
+    let commit_function = contract.function("commitBatches").unwrap();
+    let batches = vec![
+        create_l1_batch_with_metadata(1),
+        create_l1_batch_with_metadata(2),
+    ];
+
+    let commit_tx_input_data = build_commit_tx_input_data(&batches);
+
+    for batch in &batches {
+        let commit_data = ConsistencyChecker::extract_commit_data(
+            &commit_tx_input_data,
+            commit_function,
+            batch.header.number,
+        )
+        .unwrap();
+        assert_eq!(commit_data, batch.l1_commit_data());
+    }
+}
+
+#[test]
+fn extracting_commit_data_for_boojum_batch() {
+    let contract = zksync_contracts::zksync_contract();
+    let commit_function = contract.function("commitBatches").unwrap();
+    // Calldata taken from the commit transaction for `https://sepolia.explorer.zksync.io/batch/4470`;
+    // `https://sepolia.etherscan.io/tx/0x300b9115037028b1f8aa2177abf98148c3df95c9b04f95a4e25baf4dfee7711f`
+    let commit_tx_input_data = include_bytes!("commit_l1_batch_4470_testnet_sepolia.calldata");
+
+    let commit_data = ConsistencyChecker::extract_commit_data(
+        commit_tx_input_data,
+        commit_function,
+        L1BatchNumber(4_470),
+    )
+    .unwrap();
+
+    assert_matches!(
+        commit_data,
+        ethabi::Token::Tuple(tuple) if tuple[0] == ethabi::Token::Uint(4_470.into())
+    );
+
+    for bogus_l1_batch in [0, 1, 1_000, 4_469, 4_471, 100_000] {
+        ConsistencyChecker::extract_commit_data(
+            commit_tx_input_data,
+            commit_function,
+            L1BatchNumber(bogus_l1_batch),
+        )
+        .unwrap_err();
+    }
+}
+
+#[test]
+fn extracting_commit_data_for_multiple_batches() {
+    let contract = zksync_contracts::zksync_contract();
+    let commit_function = contract.function("commitBatches").unwrap();
+    // Calldata taken from the commit transaction for `https://explorer.zksync.io/batch/351000`;
+    // `https://etherscan.io/tx/0xbd8dfe0812df0da534eb95a2d2a4382d65a8172c0b648a147d60c1c2921227fd`
+    let commit_tx_input_data = include_bytes!("commit_l1_batch_351000-351004_mainnet.calldata");
+
+    for l1_batch in 351_000..=351_004 {
+        let commit_data = ConsistencyChecker::extract_commit_data(
+            commit_tx_input_data,
+            commit_function,
+            L1BatchNumber(l1_batch),
+        )
+        .unwrap();
+
+        assert_matches!(
+            commit_data,
+            ethabi::Token::Tuple(tuple) if tuple[0] == ethabi::Token::Uint(l1_batch.into())
+        );
+    }
+
+    for bogus_l1_batch in [350_000, 350_999, 351_005, 352_000] {
+        ConsistencyChecker::extract_commit_data(
+            commit_tx_input_data,
+            commit_function,
+            L1BatchNumber(bogus_l1_batch),
+        )
+        .unwrap_err();
+    }
+}
+
+#[test]
+fn extracting_commit_data_for_pre_boojum_batch() {
+    // Calldata taken from the commit transaction for `https://goerli.explorer.zksync.io/batch/200000`;
+    // `https://goerli.etherscan.io/tx/0xfd2ef4ccd1223f502cc4a4e0f76c6905feafabc32ba616e5f70257eb968f20a3`
+    let commit_tx_input_data =
+        include_bytes!("commit_l1_batch_200000_testnet_goerli.calldata");
+
+    let commit_data = ConsistencyChecker::extract_commit_data(
+        commit_tx_input_data,
+        &PRE_BOOJUM_COMMIT_FUNCTION,
+        L1BatchNumber(200_000),
+    )
+    .unwrap();
+
+    assert_matches!(
+        commit_data,
+        ethabi::Token::Tuple(tuple) if tuple[0] == ethabi::Token::Uint(200_000.into())
+    );
+}
+
+#[derive(Debug, Clone, Copy)]
+enum SaveAction<'a> {
+    InsertBatch(&'a L1BatchWithMetadata),
+    SaveMetadata(&'a L1BatchWithMetadata),
+    InsertCommitTx(L1BatchNumber),
+}
+
+impl SaveAction<'_> {
+    async fn apply(
+        self,
+        storage: &mut StorageProcessor<'_>,
+        commit_tx_hash_by_l1_batch: &HashMap<L1BatchNumber, H256>,
+    ) {
+        match self {
+            Self::InsertBatch(l1_batch) => {
+                storage
+                    .blocks_dal()
+                    .insert_l1_batch(&l1_batch.header, &[], BlockGasCount::default(), &[], &[], 0)
+                    .await
+                    .unwrap();
+            }
+            Self::SaveMetadata(l1_batch) => {
+                storage
+                    .blocks_dal()
+                    .save_l1_batch_metadata(
+                        l1_batch.header.number,
+                        &l1_batch.metadata,
+                        H256::default(),
+                        l1_batch.header.protocol_version.unwrap().is_pre_boojum(),
+                    )
+                    .await
+                    .unwrap();
+            }
+            Self::InsertCommitTx(l1_batch_number) => {
+                let commit_tx_hash = commit_tx_hash_by_l1_batch[&l1_batch_number];
+                storage
+                    .eth_sender_dal()
+                    .insert_bogus_confirmed_eth_tx(
+                        l1_batch_number,
+                        AggregatedActionType::Commit,
+                        commit_tx_hash,
+                        chrono::Utc::now(),
+                    )
+                    .await
+                    .unwrap();
+            }
+        }
+    }
+}
+
+type SaveActionMapper = fn(&[L1BatchWithMetadata]) -> Vec<SaveAction<'_>>;
+
+/// Various strategies to persist L1 batches in the DB. Strings are added for debugging failed test cases.
+const SAVE_ACTION_MAPPERS: [(&str, SaveActionMapper); 4] = [
+    ("sequential_metadata_first", |l1_batches| {
+        l1_batches
+            .iter()
+            .flat_map(|batch| {
+                [
+                    SaveAction::InsertBatch(batch),
+                    SaveAction::SaveMetadata(batch),
+                    SaveAction::InsertCommitTx(batch.header.number),
+                ]
+            })
+            .collect()
+    }),
+    ("sequential_commit_txs_first", |l1_batches| {
+        l1_batches
+            .iter()
+            .flat_map(|batch| {
+                [
+                    SaveAction::InsertBatch(batch),
+                    SaveAction::InsertCommitTx(batch.header.number),
+                    SaveAction::SaveMetadata(batch),
+                ]
+            })
+            .collect()
+    }),
+    ("all_metadata_first", |l1_batches| {
+        let commit_tx_actions = l1_batches
+            .iter()
+            .map(|batch| SaveAction::InsertCommitTx(batch.header.number));
+        l1_batches
+            .iter()
+            .map(SaveAction::InsertBatch)
+            .chain(l1_batches.iter().map(SaveAction::SaveMetadata))
+            .chain(commit_tx_actions)
+            .collect()
+    }),
+    ("all_commit_txs_first", |l1_batches| {
+        let commit_tx_actions = l1_batches
+            .iter()
+            .map(|batch| SaveAction::InsertCommitTx(batch.header.number));
+        l1_batches
+            .iter()
+            .map(SaveAction::InsertBatch)
+            .chain(commit_tx_actions)
+            .chain(l1_batches.iter().map(SaveAction::SaveMetadata))
+            .collect()
+    }),
+];
+
+#[test_casing(12, Product(([10, 3, 1], SAVE_ACTION_MAPPERS)))]
+#[tokio::test]
+async fn normal_checker_function(
+    batches_per_transaction: usize,
+    (mapper_name, save_actions_mapper): (&'static str, SaveActionMapper),
+) {
+    println!("Using save_actions_mapper={mapper_name}");
+
+    let pool = ConnectionPool::test_pool().await;
+    let mut storage = pool.access_storage().await.unwrap();
+    ensure_genesis_state(&mut storage, L2ChainId::default(), &GenesisParams::mock())
+        .await
+        .unwrap();
+
+    let l1_batches: Vec<_> = (1..=10).map(create_l1_batch_with_metadata).collect();
+    let mut commit_tx_hash_by_l1_batch = HashMap::with_capacity(l1_batches.len());
+    let client = MockEthereum::default();
+
+    for (i, l1_batches) in l1_batches.chunks(batches_per_transaction).enumerate() {
+        let
input_data = build_commit_tx_input_data(l1_batches); + let signed_tx = client.sign_prepared_tx( + input_data.clone(), + Options { + nonce: Some(i.into()), + ..Options::default() + }, + ); + let signed_tx = signed_tx.unwrap(); + client.send_raw_tx(signed_tx.raw_tx).await.unwrap(); + client.execute_tx(signed_tx.hash, true, 1); + + commit_tx_hash_by_l1_batch.extend( + l1_batches + .iter() + .map(|batch| (batch.header.number, signed_tx.hash)), + ); + } + + let (l1_batch_updates_sender, mut l1_batch_updates_receiver) = mpsc::unbounded_channel(); + let checker = ConsistencyChecker { + l1_batch_updater: Box::new(l1_batch_updates_sender), + ..create_mock_checker(client, pool.clone()) + }; + + let (stop_sender, stop_receiver) = watch::channel(false); + let checker_task = tokio::spawn(checker.run(stop_receiver)); + + // Add new batches to the storage. + for save_action in save_actions_mapper(&l1_batches) { + save_action + .apply(&mut storage, &commit_tx_hash_by_l1_batch) + .await; + tokio::time::sleep(Duration::from_millis(7)).await; + } + + // Wait until all batches are checked. + loop { + let checked_batch = l1_batch_updates_receiver.recv().await.unwrap(); + if checked_batch == l1_batches.last().unwrap().header.number { + break; + } + } + + // Send the stop signal to the checker and wait for it to stop. + stop_sender.send_replace(true); + checker_task.await.unwrap().unwrap(); +} + +#[test_casing(4, SAVE_ACTION_MAPPERS)] +#[tokio::test] +async fn checker_processes_pre_boojum_batches( + (mapper_name, save_actions_mapper): (&'static str, SaveActionMapper), +) { + println!("Using save_actions_mapper={mapper_name}"); + + let pool = ConnectionPool::test_pool().await; + let mut storage = pool.access_storage().await.unwrap(); + let genesis_params = GenesisParams { + protocol_version: PRE_BOOJUM_PROTOCOL_VERSION, + ..GenesisParams::mock() + }; + ensure_genesis_state(&mut storage, L2ChainId::default(), &genesis_params) + .await + .unwrap(); + storage + .protocol_versions_dal() + .save_protocol_version_with_tx(ProtocolVersion::default()) + .await; + + let l1_batches: Vec<_> = (1..=5) + .map(create_pre_boojum_l1_batch_with_metadata) + .chain((6..=10).map(create_l1_batch_with_metadata)) + .collect(); + let mut commit_tx_hash_by_l1_batch = HashMap::with_capacity(l1_batches.len()); + let client = MockEthereum::default(); + + for (i, l1_batch) in l1_batches.iter().enumerate() { + let input_data = build_commit_tx_input_data(slice::from_ref(l1_batch)); + let signed_tx = client.sign_prepared_tx( + input_data.clone(), + Options { + nonce: Some(i.into()), + ..Options::default() + }, + ); + let signed_tx = signed_tx.unwrap(); + client.send_raw_tx(signed_tx.raw_tx).await.unwrap(); + client.execute_tx(signed_tx.hash, true, 1); + + commit_tx_hash_by_l1_batch.insert(l1_batch.header.number, signed_tx.hash); + } + + let (l1_batch_updates_sender, mut l1_batch_updates_receiver) = mpsc::unbounded_channel(); + let checker = ConsistencyChecker { + l1_batch_updater: Box::new(l1_batch_updates_sender), + ..create_mock_checker(client, pool.clone()) + }; + + let (stop_sender, stop_receiver) = watch::channel(false); + let checker_task = tokio::spawn(checker.run(stop_receiver)); + + // Add new batches to the storage. + for save_action in save_actions_mapper(&l1_batches) { + save_action + .apply(&mut storage, &commit_tx_hash_by_l1_batch) + .await; + tokio::time::sleep(Duration::from_millis(7)).await; + } + + // Wait until all batches are checked. 
+ loop { + let checked_batch = l1_batch_updates_receiver.recv().await.unwrap(); + if checked_batch == l1_batches.last().unwrap().header.number { + break; + } + } + + // Send the stop signal to the checker and wait for it to stop. + stop_sender.send_replace(true); + checker_task.await.unwrap().unwrap(); +} + +#[test_casing(2, [false, true])] +#[tokio::test] +async fn checker_functions_after_snapshot_recovery(delay_batch_insertion: bool) { + let pool = ConnectionPool::test_pool().await; + let mut storage = pool.access_storage().await.unwrap(); + storage + .protocol_versions_dal() + .save_protocol_version_with_tx(ProtocolVersion::default()) + .await; + + let l1_batch = create_l1_batch_with_metadata(99); + + let commit_tx_input_data = build_commit_tx_input_data(slice::from_ref(&l1_batch)); + let client = MockEthereum::default(); + let signed_tx = client.sign_prepared_tx( + commit_tx_input_data.clone(), + Options { + nonce: Some(0.into()), + ..Options::default() + }, + ); + let signed_tx = signed_tx.unwrap(); + let commit_tx_hash = signed_tx.hash; + client.send_raw_tx(signed_tx.raw_tx).await.unwrap(); + client.execute_tx(commit_tx_hash, true, 1); + + let save_actions = [ + SaveAction::InsertBatch(&l1_batch), + SaveAction::SaveMetadata(&l1_batch), + SaveAction::InsertCommitTx(l1_batch.header.number), + ]; + let commit_tx_hash_by_l1_batch = HashMap::from([(l1_batch.header.number, commit_tx_hash)]); + + if !delay_batch_insertion { + for &save_action in &save_actions { + save_action + .apply(&mut storage, &commit_tx_hash_by_l1_batch) + .await; + } + } + + let (l1_batch_updates_sender, mut l1_batch_updates_receiver) = mpsc::unbounded_channel(); + let checker = ConsistencyChecker { + l1_batch_updater: Box::new(l1_batch_updates_sender), + ..create_mock_checker(client, pool.clone()) + }; + let (stop_sender, stop_receiver) = watch::channel(false); + let checker_task = tokio::spawn(checker.run(stop_receiver)); + + if delay_batch_insertion { + tokio::time::sleep(Duration::from_millis(10)).await; + for &save_action in &save_actions { + save_action + .apply(&mut storage, &commit_tx_hash_by_l1_batch) + .await; + } + } + + // Wait until the batch is checked. 
+    let checked_batch = l1_batch_updates_receiver.recv().await.unwrap();
+    assert_eq!(checked_batch, l1_batch.header.number);
+
+    stop_sender.send_replace(true);
+    checker_task.await.unwrap().unwrap();
+}
+
+#[derive(Debug, Clone, Copy)]
+enum IncorrectDataKind {
+    MissingStatus,
+    MismatchedStatus,
+    BogusCommitDataFormat,
+    MismatchedCommitDataTimestamp,
+    CommitDataForAnotherBatch,
+    CommitDataForPreBoojum,
+}
+
+impl IncorrectDataKind {
+    const ALL: [Self; 6] = [
+        Self::MissingStatus,
+        Self::MismatchedStatus,
+        Self::BogusCommitDataFormat,
+        Self::MismatchedCommitDataTimestamp,
+        Self::CommitDataForAnotherBatch,
+        Self::CommitDataForPreBoojum,
+    ];
+
+    async fn apply(self, client: &MockEthereum, l1_batch: &L1BatchWithMetadata) -> H256 {
+        let (commit_tx_input_data, successful_status) = match self {
+            Self::MissingStatus => {
+                return H256::zero(); // Do not execute the transaction
+            }
+            Self::MismatchedStatus => {
+                let commit_tx_input_data = build_commit_tx_input_data(slice::from_ref(l1_batch));
+                (commit_tx_input_data, false)
+            }
+            Self::BogusCommitDataFormat => {
+                let mut bogus_tx_input_data = b"test".to_vec(); // Preserve the function selector
+                bogus_tx_input_data
+                    .extend_from_slice(&ethabi::encode(&[ethabi::Token::Bool(true)]));
+                (bogus_tx_input_data, true)
+            }
+            Self::MismatchedCommitDataTimestamp => {
+                let mut l1_batch = create_l1_batch_with_metadata(1);
+                l1_batch.header.timestamp += 1;
+                let bogus_tx_input_data = build_commit_tx_input_data(slice::from_ref(&l1_batch));
+                (bogus_tx_input_data, true)
+            }
+            Self::CommitDataForAnotherBatch => {
+                let l1_batch = create_l1_batch_with_metadata(100);
+                let bogus_tx_input_data = build_commit_tx_input_data(slice::from_ref(&l1_batch));
+                (bogus_tx_input_data, true)
+            }
+            Self::CommitDataForPreBoojum => {
+                let mut l1_batch = create_l1_batch_with_metadata(1);
+                l1_batch.header.protocol_version = Some(ProtocolVersionId::Version0);
+                let bogus_tx_input_data = build_commit_tx_input_data(slice::from_ref(&l1_batch));
+                (bogus_tx_input_data, true)
+            }
+        };
+
+        let signed_tx = client.sign_prepared_tx(
+            commit_tx_input_data,
+            Options {
+                nonce: Some(0.into()),
+                ..Options::default()
+            },
+        );
+        let signed_tx = signed_tx.unwrap();
+        client.send_raw_tx(signed_tx.raw_tx).await.unwrap();
+        client.execute_tx(signed_tx.hash, successful_status, 1);
+        signed_tx.hash
+    }
+}
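The `IncorrectDataKind::apply` method above concentrates every corruption in one place, so adding a new failure mode is a single new `match` arm while the detection test stays unchanged. A minimal, self-contained sketch of the same fault-injection pattern (all names below are invented for illustration; nothing here is from the zksync codebase):

```rust
#[derive(Debug, Clone, Copy)]
enum Fault {
    None,
    FlippedStatus,
    WrongPayload,
}

impl Fault {
    const ALL: [Self; 3] = [Self::None, Self::FlippedStatus, Self::WrongPayload];

    // Each variant yields one corrupted (payload, status) pair; the test loop
    // below stays identical no matter how many failure modes are added.
    fn apply(self, payload: &[u8]) -> (Vec<u8>, bool) {
        match self {
            Self::None => (payload.to_vec(), true),
            Self::FlippedStatus => (payload.to_vec(), false),
            Self::WrongPayload => (b"bogus".to_vec(), true),
        }
    }
}

fn main() {
    for fault in Fault::ALL {
        let (payload, ok) = fault.apply(b"expected");
        let accepted = ok && payload == b"expected";
        // Only the fault-free case should pass the "checker".
        assert_eq!(accepted, matches!(fault, Fault::None));
    }
}
```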
+
+#[test_casing(6, Product((IncorrectDataKind::ALL, [false])))]
+// ^ `snapshot_recovery = true` is tested below; we don't want to run it with all incorrect data kinds
+#[tokio::test]
+async fn checker_detects_incorrect_tx_data(kind: IncorrectDataKind, snapshot_recovery: bool) {
+    let pool = ConnectionPool::test_pool().await;
+    let mut storage = pool.access_storage().await.unwrap();
+    if snapshot_recovery {
+        storage
+            .protocol_versions_dal()
+            .save_protocol_version_with_tx(ProtocolVersion::default())
+            .await;
+    } else {
+        ensure_genesis_state(&mut storage, L2ChainId::default(), &GenesisParams::mock())
+            .await
+            .unwrap();
+    }
+
+    let l1_batch = create_l1_batch_with_metadata(if snapshot_recovery { 99 } else { 1 });
+    let client = MockEthereum::default();
+    let commit_tx_hash = kind.apply(&client, &l1_batch).await;
+    let commit_tx_hash_by_l1_batch = HashMap::from([(l1_batch.header.number, commit_tx_hash)]);
+
+    let save_actions = [
+        SaveAction::InsertBatch(&l1_batch),
+        SaveAction::SaveMetadata(&l1_batch),
+        SaveAction::InsertCommitTx(l1_batch.header.number),
+    ];
+    for save_action in save_actions {
+        save_action
+            .apply(&mut storage, &commit_tx_hash_by_l1_batch)
+            .await;
+    }
+    drop(storage);
+
+    let checker = create_mock_checker(client, pool);
+    let (_stop_sender, stop_receiver) = watch::channel(false);
+    // The checker must stop with an error.
+    tokio::time::timeout(Duration::from_secs(30), checker.run(stop_receiver))
+        .await
+        .expect("Timed out waiting for checker to stop")
+        .unwrap_err();
+}
+
+#[tokio::test]
+async fn checker_detects_incorrect_tx_data_after_snapshot_recovery() {
+    checker_detects_incorrect_tx_data(IncorrectDataKind::CommitDataForAnotherBatch, true).await;
+}
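For reference, the calldata that `build_commit_tx_input_data` fabricates and `extract_commit_data` picks apart follows the standard Solidity ABI layout: a 4-byte function selector followed by the ABI-encoded arguments. A hedged sketch of that round trip using recent versions of the `ethabi` crate; the single-field tuple is a stand-in for the much larger real commit-data tuple:

```rust
use ethabi::{ParamType, Token};

fn main() {
    // Layout of commit calldata: 4-byte function selector, then the
    // ABI-encoded arguments. The tests above substitute the selector `b"fake"`.
    let batch = Token::Tuple(vec![Token::Uint(4_470u64.into())]); // stand-in commit data
    let mut calldata = b"fake".to_vec();
    calldata.extend_from_slice(&ethabi::encode(&[Token::Array(vec![batch.clone()])]));

    // Extraction mirrors what `extract_commit_data` has to do: skip the
    // selector, decode the argument list, then index into the batch array.
    let schema = ParamType::Array(Box::new(ParamType::Tuple(vec![ParamType::Uint(256)])));
    let decoded = ethabi::decode(&[schema], &calldata[4..]).unwrap();
    assert_eq!(decoded, vec![Token::Array(vec![batch])]);
}
```

Bogus batch numbers fail because the decoded tuple's first field no longer matches the requested batch, which is exactly what the `unwrap_err` loops assert.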
diff --git a/core/lib/zksync_core/src/data_fetchers/error.rs b/core/lib/zksync_core/src/data_fetchers/error.rs
deleted file mode 100644
index bdee4921008..00000000000
--- a/core/lib/zksync_core/src/data_fetchers/error.rs
+++ /dev/null
@@ -1,83 +0,0 @@
-use std::time::Duration;
-
-use thiserror::Error;
-
-#[derive(Debug, Clone, Error)]
-pub enum ApiFetchError {
-    #[error("Requests to the remote API were rate limited. Should wait for {} seconds", .0.as_secs())]
-    RateLimit(Duration),
-    #[error("Remote API is unavailable. Used URL: {0}")]
-    ApiUnavailable(String),
-    #[error("Unexpected JSON format. Error: {0}")]
-    UnexpectedJsonFormat(String),
-    #[error("Unable to receive data due to request timeout")]
-    RequestTimeout,
-    #[error("Unspecified error: {0}")]
-    Other(String),
-}
-
-#[derive(Debug, Clone)]
-pub struct ErrorAnalyzer {
-    fetcher: String,
-    min_errors_to_report: u64,
-    error_counter: u64,
-    requested_delay: Option<Duration>,
-}
-
-impl ErrorAnalyzer {
-    pub fn new(fetcher: &str) -> Self {
-        const MIN_ERRORS_FOR_REPORT: u64 = 20;
-
-        Self {
-            fetcher: fetcher.to_string(),
-            min_errors_to_report: MIN_ERRORS_FOR_REPORT,
-            error_counter: 0,
-            requested_delay: None,
-        }
-    }
-
-    pub fn reset(&mut self) {
-        self.error_counter = 0;
-    }
-
-    pub async fn update(&mut self) {
-        if self.error_counter >= self.min_errors_to_report {
-            tracing::error!(
-                "[{}] A lot of requests to the remote API failed in a row. Current error count: {}",
-                &self.fetcher,
-                self.error_counter
-            );
-        }
-
-        if let Some(time) = self.requested_delay.take() {
-            tokio::time::sleep(time).await;
-        }
-    }
-
-    pub fn process_error(&mut self, error: ApiFetchError) {
-        let fetcher = &self.fetcher;
-        self.error_counter += 1;
-        match error {
-            ApiFetchError::RateLimit(time) => {
-                tracing::warn!(
-                    "[{}] Remote API notified us about rate limiting. Going to wait {} seconds before next loop iteration",
-                    fetcher,
-                    time.as_secs()
-                );
-                self.requested_delay = Some(time);
-            }
-            ApiFetchError::UnexpectedJsonFormat(err) => {
-                tracing::warn!("[{}] Parse data error: {}", fetcher, err);
-            }
-            ApiFetchError::ApiUnavailable(err) => {
-                tracing::warn!("[{}] Remote API is unavailable: {}", fetcher, err);
-            }
-            ApiFetchError::RequestTimeout => {
-                tracing::warn!("[{}] Request for data timed out", fetcher);
-            }
-            ApiFetchError::Other(err) => {
-                tracing::warn!("[{}] Unspecified API error: {}", fetcher, err);
-            }
-        }
-    }
-}
diff --git a/core/lib/zksync_core/src/data_fetchers/mod.rs b/core/lib/zksync_core/src/data_fetchers/mod.rs
deleted file mode 100644
index f04a80c315e..00000000000
--- a/core/lib/zksync_core/src/data_fetchers/mod.rs
+++ /dev/null
@@ -1,34 +0,0 @@
-//! This module provides several third-party API data fetchers.
-//! Examples of fetchers we use:
-//!
-//! - Token price fetcher, which updates prices for all the tokens we use in zkSync.
-//!   Data of this fetcher is used to calculate fees.
-//! - Token trading volume fetcher, which updates trading volumes for tokens.
-//!   Data of this fetcher is used to decide whether we are going to accept fees in this token.
-//!
-//! Every data fetcher is represented by an autonomic routine, which spend most of the time sleeping;
-//! once in the configurable interval it fetches the data from an API and store it into the database.
-
-use tokio::sync::watch;
-use tokio::task::JoinHandle;
-use zksync_config::FetcherConfig;
-use zksync_dal::ConnectionPool;
-
-pub mod error;
-pub mod token_list;
-pub mod token_price;
-
-pub fn run_data_fetchers(
-    config: &FetcherConfig,
-    network: zksync_types::network::Network,
-    pool: ConnectionPool,
-    stop_receiver: watch::Receiver<bool>,
-) -> Vec<JoinHandle<anyhow::Result<()>>> {
-    let list_fetcher = token_list::TokenListFetcher::new(config.clone(), network);
-    let price_fetcher = token_price::TokenPriceFetcher::new(config.clone());
-
-    vec![
-        tokio::spawn(list_fetcher.run(pool.clone(), stop_receiver.clone())),
-        tokio::spawn(price_fetcher.run(pool.clone(), stop_receiver.clone())),
-    ]
-}
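The deleted `run_data_fetchers` shows the general shape of this module: each fetcher is a detached task that owns a clone of a shared stop signal and reports back through a `JoinHandle`. A minimal sketch of that pattern with invented names (assumes `tokio` and `anyhow` only):

```rust
use tokio::{sync::watch, task::JoinHandle};

// Each fetcher loop is an independent task holding a clone of the stop signal.
async fn fetcher_loop(name: &'static str, mut stop: watch::Receiver<bool>) -> anyhow::Result<()> {
    loop {
        if *stop.borrow() {
            break;
        }
        // ... fetch and persist one round of data here ...
        if stop.changed().await.is_err() {
            break; // sender dropped, shut down
        }
    }
    println!("{name} fetcher stopped");
    Ok(())
}

fn spawn_fetchers(stop: &watch::Receiver<bool>) -> Vec<JoinHandle<anyhow::Result<()>>> {
    vec![
        tokio::spawn(fetcher_loop("token_list", stop.clone())),
        tokio::spawn(fetcher_loop("token_price", stop.clone())),
    ]
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let (stop_sender, stop_receiver) = watch::channel(false);
    let handles = spawn_fetchers(&stop_receiver);
    stop_sender.send(true)?;
    for handle in handles {
        handle.await??;
    }
    Ok(())
}
```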
diff --git a/core/lib/zksync_core/src/data_fetchers/token_list/mock.rs b/core/lib/zksync_core/src/data_fetchers/token_list/mock.rs
deleted file mode 100644
index c813888cf52..00000000000
--- a/core/lib/zksync_core/src/data_fetchers/token_list/mock.rs
+++ /dev/null
@@ -1,80 +0,0 @@
-use std::{collections::HashMap, fs::read_to_string, path::PathBuf, str::FromStr};
-
-use async_trait::async_trait;
-use serde::{Deserialize, Serialize};
-
-use zksync_types::network::Network;
-use zksync_types::{
-    tokens::{TokenMetadata, ETHEREUM_ADDRESS},
-    Address,
-};
-
-use crate::data_fetchers::error::ApiFetchError;
-
-use super::FetcherImpl;
-
-#[derive(Debug, Clone)]
-pub struct MockTokenListFetcher {
-    tokens: HashMap<Address, TokenMetadata>,
-}
-
-impl MockTokenListFetcher {
-    pub fn new(network: Network) -> Self {
-        let network = network.to_string();
-        let tokens: HashMap<_, _> = get_genesis_token_list(&network)
-            .into_iter()
-            .map(|item| {
-                let addr = Address::from_str(&item.address[2..]).unwrap();
-                let metadata = TokenMetadata {
-                    name: item.name,
-                    symbol: item.symbol,
-                    decimals: item.decimals,
-                };
-
-                (addr, metadata)
-            })
-            .chain(std::iter::once((
-                ETHEREUM_ADDRESS,
-                TokenMetadata {
-                    name: "Ethereum".into(),
-                    symbol: "ETH".into(),
-                    decimals: 18,
-                },
-            )))
-            .collect();
-
-        Self { tokens }
-    }
-}
-
-#[async_trait]
-impl FetcherImpl for MockTokenListFetcher {
-    async fn fetch_token_list(&self) -> Result<HashMap<Address, TokenMetadata>, ApiFetchError> {
-        Ok(self.tokens.clone())
-    }
-}
-
-/// Tokens that added when deploying contract
-#[derive(Debug, Clone, Serialize, Deserialize)]
-struct TokenGenesisListItem {
-    /// Address (prefixed with 0x)
-    pub address: String,
-    /// Powers of 10 in 1.0 token (18 for default ETH-like tokens)
-    pub decimals: u8,
-    /// Token symbol
-    pub symbol: String,
-    /// Token name
-    pub name: String,
-}
-
-fn get_genesis_token_list(network: &str) -> Vec<TokenGenesisListItem> {
-    let mut file_path: PathBuf = std::env::var("ZKSYNC_HOME")
-        .unwrap_or_else(|_| panic!("ZKSYNC_HOME variable should be set"))
-        .parse()
-        .unwrap_or_else(|_| panic!("Failed to parse ZKSYNC_HOME env variable"));
-    file_path.push("etc");
-    file_path.push("tokens");
-    file_path.push(network);
-    file_path.set_extension("json");
-    serde_json::from_str(&read_to_string(file_path).unwrap()).unwrap()
-}
diff --git a/core/lib/zksync_core/src/data_fetchers/token_list/mod.rs b/core/lib/zksync_core/src/data_fetchers/token_list/mod.rs
deleted file mode 100644
index 3981ea8ea40..00000000000
--- a/core/lib/zksync_core/src/data_fetchers/token_list/mod.rs
+++ /dev/null
@@ -1,134 +0,0 @@
-//! Token list fetcher is an entity capable of receiving information about token symbols, decimals, etc.
-//!
-//! Since we accept manual token addition to zkSync, we must be aware of some scam-tokens that are trying
-//! to pretend to be something else. This is why we don't rely on the information that is provided by
-//! the token smart contract itself.
-//! Instead, we analyze somewhat truthful information source to pick the list of relevant tokens.
-//!
-//! If requested token is not in the list provided by the API, it's symbol will be displayed as
-//! "ERC20-{token_address}" and decimals will be set to 18 (default value for most of the tokens).
-
-use std::{
-    collections::{HashMap, HashSet},
-    time::Duration,
-};
-
-use async_trait::async_trait;
-use tokio::sync::watch;
-
-use zksync_config::{configs::fetcher::TokenListSource, FetcherConfig};
-use zksync_dal::{ConnectionPool, StorageProcessor};
-use zksync_types::network::Network;
-use zksync_types::{tokens::TokenMetadata, Address};
-
-use super::error::{ApiFetchError, ErrorAnalyzer};
-
-mod mock;
-mod one_inch;
-
-#[async_trait]
-pub trait FetcherImpl: std::fmt::Debug + Send + Sync {
-    /// Retrieves the list of known tokens.
-    async fn fetch_token_list(&self) -> Result<HashMap<Address, TokenMetadata>, ApiFetchError>;
-}
-
-#[derive(Debug)]
-pub struct TokenListFetcher {
-    config: FetcherConfig,
-    fetcher: Box<dyn FetcherImpl>,
-    error_handler: ErrorAnalyzer,
-}
-
-impl TokenListFetcher {
-    fn create_fetcher(config: &FetcherConfig, network: Network) -> Box<dyn FetcherImpl> {
-        let token_list_config = &config.token_list;
-        match token_list_config.source {
-            TokenListSource::OneInch => {
-                Box::new(one_inch::OneInchTokenListFetcher::new(config)) as Box<dyn FetcherImpl>
-            }
-            TokenListSource::Mock => {
-                Box::new(mock::MockTokenListFetcher::new(network)) as Box<dyn FetcherImpl>
-            }
-        }
-    }
-
-    pub fn new(config: FetcherConfig, network: Network) -> Self {
-        let fetcher = Self::create_fetcher(&config, network);
-        let error_handler = ErrorAnalyzer::new("TokenListFetcher");
-        Self {
-            config,
-            fetcher,
-            error_handler,
-        }
-    }
-
-    pub async fn run(
-        mut self,
-        pool: ConnectionPool,
-        stop_receiver: watch::Receiver<bool>,
-    ) -> anyhow::Result<()> {
-        let mut fetching_interval =
-            tokio::time::interval(self.config.token_list.fetching_interval());
-
-        loop {
-            if *stop_receiver.borrow() {
-                tracing::info!("Stop signal received, token_list_fetcher is shutting down");
-                break;
-            }
-
-            fetching_interval.tick().await;
-            self.error_handler.update().await;
-
-            let mut token_list = match self.fetch_token_list().await {
-                Ok(list) => {
-                    self.error_handler.reset();
-                    list
-                }
-                Err(err) => {
-                    self.error_handler.process_error(err);
-                    continue;
-                }
-            };
-
-            // We assume that token metadata does not change, thus we only looking for the new tokens.
-            let mut storage = pool.access_storage().await.unwrap();
-            let unknown_tokens = self.load_unknown_tokens(&mut storage).await;
-            token_list.retain(|token, _data| unknown_tokens.contains(token));
-
-            self.update_tokens(&mut storage, token_list).await;
-        }
-        Ok(())
-    }
-
-    async fn fetch_token_list(&self) -> Result<HashMap<Address, TokenMetadata>, ApiFetchError> {
-        const AWAITING_TIMEOUT: Duration = Duration::from_secs(2);
-
-        let fetch_future = self.fetcher.fetch_token_list();
-
-        tokio::time::timeout(AWAITING_TIMEOUT, fetch_future)
-            .await
-            .map_err(|_| ApiFetchError::RequestTimeout)?
-    }
-
-    async fn update_tokens(
-        &self,
-        storage: &mut StorageProcessor<'_>,
-        tokens: HashMap<Address, TokenMetadata>,
-    ) {
-        let mut tokens_dal = storage.tokens_dal();
-        for (token, metadata) in tokens {
-            tokens_dal
-                .update_well_known_l1_token(&token, metadata)
-                .await;
-        }
-    }
-
-    async fn load_unknown_tokens(&self, storage: &mut StorageProcessor<'_>) -> HashSet<Address> {
-        storage
-            .tokens_dal()
-            .get_unknown_l1_token_addresses()
-            .await
-            .into_iter()
-            .collect()
-    }
-}
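The deleted `fetch_token_list` wrapper bounds the boxed fetcher with a two-second deadline and folds the elapsed timer into the fetcher's own error type. The same shape in isolation (names invented; assumes `tokio` with the `time` feature):

```rust
use std::time::Duration;

#[derive(Debug)]
enum FetchError {
    Timeout,
    Other(String),
}

async fn slow_fetch() -> Result<Vec<String>, FetchError> {
    tokio::time::sleep(Duration::from_secs(10)).await;
    Ok(vec!["DAI".to_owned()])
}

#[tokio::main]
async fn main() {
    let result = tokio::time::timeout(Duration::from_secs(2), slow_fetch())
        .await
        // `timeout` adds an outer `Result`; map its `Elapsed` error, then flatten.
        .map_err(|_| FetchError::Timeout)
        .and_then(|inner| inner);
    assert!(matches!(result, Err(FetchError::Timeout)));
}
```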
diff --git a/core/lib/zksync_core/src/data_fetchers/token_list/one_inch.rs b/core/lib/zksync_core/src/data_fetchers/token_list/one_inch.rs
deleted file mode 100644
index 1d022e4700d..00000000000
--- a/core/lib/zksync_core/src/data_fetchers/token_list/one_inch.rs
+++ /dev/null
@@ -1,57 +0,0 @@
-use std::{collections::HashMap, str::FromStr};
-
-use async_trait::async_trait;
-use reqwest::{Client, Url};
-use serde::{Deserialize, Serialize};
-
-use zksync_config::FetcherConfig;
-use zksync_types::{tokens::TokenMetadata, Address};
-
-use crate::data_fetchers::error::ApiFetchError;
-
-use super::FetcherImpl;
-
-#[derive(Debug, Clone)]
-pub struct OneInchTokenListFetcher {
-    client: Client,
-    addr: Url,
-}
-
-impl OneInchTokenListFetcher {
-    pub fn new(config: &FetcherConfig) -> Self {
-        Self {
-            client: Client::new(),
-            addr: Url::from_str(&config.token_list.url).expect("failed parse One Inch URL"),
-        }
-    }
-}
-
-#[async_trait]
-impl FetcherImpl for OneInchTokenListFetcher {
-    async fn fetch_token_list(&self) -> Result<HashMap<Address, TokenMetadata>, ApiFetchError> {
-        let token_list_url = self
-            .addr
-            .join("/v3.0/1/tokens")
-            .expect("failed to join URL path");
-
-        let token_list = self
-            .client
-            .get(token_list_url.clone())
-            .send()
-            .await
-            .map_err(|err| {
-                ApiFetchError::ApiUnavailable(format!("{} , Error: {}", token_list_url, err))
-            })?
-            .json::<OneInchTokensResponse>()
-            .await
-            .map_err(|err| ApiFetchError::UnexpectedJsonFormat(err.to_string()))?
-            .tokens;
-
-        Ok(token_list)
-    }
-}
-
-#[derive(Debug, Clone, Serialize, Deserialize)]
-pub(super) struct OneInchTokensResponse {
-    pub tokens: HashMap<Address, TokenMetadata>,
-}
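The core of the deleted fetcher is a typed `reqwest` GET with transport and parse failures mapped to distinct errors. A rough stand-alone equivalent (assumes `reqwest` built with its `json` feature plus `serde` and `serde_json`; the base URL below is illustrative, the real one came from `FetcherConfig`):

```rust
use std::collections::HashMap;

use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct TokensResponse {
    tokens: HashMap<String, serde_json::Value>,
}

async fn fetch_tokens(client: &reqwest::Client) -> Result<TokensResponse, String> {
    client
        .get("https://api.1inch.io/v3.0/1/tokens") // illustrative URL
        .send()
        .await
        .map_err(|err| format!("API unavailable: {err}"))?
        .json::<TokensResponse>()
        .await
        .map_err(|err| format!("unexpected JSON format: {err}"))
}

#[tokio::main]
async fn main() {
    let client = reqwest::Client::new();
    match fetch_tokens(&client).await {
        Ok(response) => println!("fetched {} tokens", response.tokens.len()),
        Err(err) => eprintln!("{err}"),
    }
}
```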
diff --git a/core/lib/zksync_core/src/data_fetchers/token_price/coingecko.rs b/core/lib/zksync_core/src/data_fetchers/token_price/coingecko.rs
deleted file mode 100644
index a046c23ea2d..00000000000
--- a/core/lib/zksync_core/src/data_fetchers/token_price/coingecko.rs
+++ /dev/null
@@ -1,167 +0,0 @@
-use std::{collections::HashMap, str::FromStr};
-
-use async_trait::async_trait;
-use chrono::{DateTime, NaiveDateTime, Utc};
-use futures::try_join;
-use itertools::Itertools;
-use num::{rational::Ratio, BigUint};
-use reqwest::{Client, Url};
-use serde::{Deserialize, Serialize};
-
-use zksync_config::FetcherConfig;
-use zksync_types::{
-    tokens::{TokenPrice, ETHEREUM_ADDRESS},
-    Address,
-};
-use zksync_utils::UnsignedRatioSerializeAsDecimal;
-
-use crate::data_fetchers::error::ApiFetchError;
-
-use super::FetcherImpl;
-
-#[derive(Debug, Clone)]
-pub struct CoinGeckoFetcher {
-    client: Client,
-    addr: Url,
-}
-
-impl CoinGeckoFetcher {
-    pub fn new(config: &FetcherConfig) -> Self {
-        Self {
-            client: Client::new(),
-            addr: Url::from_str(&config.token_price.url).expect("failed parse CoinGecko URL"),
-        }
-    }
-
-    pub async fn fetch_erc20_token_prices(
-        &self,
-        tokens: &[Address],
-    ) -> Result<HashMap<Address, CoinGeckoTokenPrice>, ApiFetchError> {
-        let token_price_url = self
-            .addr
-            .join("api/v3/simple/token_price/ethereum")
-            .expect("failed to join URL path");
-
-        let mut token_prices = HashMap::new();
-        let mut fetching_interval = tokio::time::interval(tokio::time::Duration::from_secs(1));
-        // Splitting is needed to avoid 'Request-URI Too Large' error.
-        for tokens_chunk in tokens.chunks(10) {
-            fetching_interval.tick().await;
-            let comma_separated_token_addresses = tokens_chunk
-                .iter()
-                .map(|token_addr| format!("{:#x}", token_addr))
-                .join(",");
-
-            let token_prices_chunk = self
-                .client
-                .get(token_price_url.clone())
-                .query(&[
-                    (
-                        "contract_addresses",
-                        comma_separated_token_addresses.as_str(),
-                    ),
-                    ("vs_currencies", "usd"),
-                    ("include_last_updated_at", "true"),
-                    ("include_24hr_change", "true"),
-                ])
-                .send()
-                .await
-                .map_err(|err| {
-                    ApiFetchError::ApiUnavailable(format!("{} , Error: {}", token_price_url, err))
-                })?
-                .json::<HashMap<Address, CoinGeckoTokenPrice>>()
-                .await
-                .map_err(|err| ApiFetchError::UnexpectedJsonFormat(err.to_string()))?;
-            token_prices.extend(token_prices_chunk);
-        }
-
-        Ok(token_prices)
-    }
-
-    pub async fn fetch_ethereum_price(&self) -> Result<CoinGeckoTokenPrice, ApiFetchError> {
-        let coin_price_url = self
-            .addr
-            .join("api/v3/simple/price")
-            .expect("failed to join URL path");
-
-        let mut token_prices = self
-            .client
-            .get(coin_price_url.clone())
-            .query(&[
-                ("ids", "ethereum"),
-                ("vs_currencies", "usd"),
-                ("include_last_updated_at", "true"),
-                ("include_24hr_change", "true"),
-            ])
-            .send()
-            .await
-            .map_err(|err| {
-                ApiFetchError::ApiUnavailable(format!("{} , Error: {}", coin_price_url, err))
-            })?
-            .json::<HashMap<String, CoinGeckoTokenPrice>>()
-            .await
-            .map_err(|err| ApiFetchError::UnexpectedJsonFormat(err.to_string()))?;
-
-        let eth_token_price = token_prices
-            .remove("ethereum")
-            .ok_or_else(|| ApiFetchError::Other("Failed to get ether price".to_string()))?;
-
-        Ok(eth_token_price)
-    }
-}
-
-#[async_trait]
-impl FetcherImpl for CoinGeckoFetcher {
-    async fn fetch_token_price(
-        &self,
-        tokens: &[Address],
-    ) -> Result<HashMap<Address, TokenPrice>, ApiFetchError> {
-        let token_prices = {
-            // We have to find out the ether price separately from the erc20 tokens,
-            // so we will launch requests concurrently
-            if tokens.contains(&ETHEREUM_ADDRESS) {
-                let (mut token_prices, ethereum_price) = try_join!(
-                    self.fetch_erc20_token_prices(tokens),
-                    self.fetch_ethereum_price(),
-                )?;
-                token_prices.insert(ETHEREUM_ADDRESS, ethereum_price);
-
-                token_prices
-            } else {
-                self.fetch_erc20_token_prices(tokens).await?
-            }
-        };
-
-        let result = token_prices
-            .into_iter()
-            .map(|(address, coingecko_token_price)| {
-                let usd_price = coingecko_token_price.usd;
-
-                let last_updated = {
-                    let naive_last_updated =
-                        NaiveDateTime::from_timestamp_opt(coingecko_token_price.last_updated_at, 0)
-                            .unwrap();
-                    DateTime::<Utc>::from_naive_utc_and_offset(naive_last_updated, Utc)
-                };
-
-                let token_price = TokenPrice {
-                    usd_price,
-                    last_updated,
-                };
-
-                (address, token_price)
-            })
-            .collect();
-
-        Ok(result)
-    }
-}
-
-#[derive(Debug, Clone, Serialize, Deserialize)]
-pub struct CoinGeckoTokenPrice {
-    /// timestamp (milliseconds)
-    pub last_updated_at: i64,
-    pub usd_24h_change: Option<f64>,
-    #[serde(with = "UnsignedRatioSerializeAsDecimal")]
-    pub usd: Ratio<BigUint>,
-}
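The chunks-of-ten loop in the deleted CoinGecko fetcher exists only to keep the query string under URI length limits; the batching itself is ordinary slice work, shown here in isolation:

```rust
fn main() {
    let addresses: Vec<String> = (0..25).map(|i| format!("0x{i:040x}")).collect();

    for chunk in addresses.chunks(10) {
        // Each `param` would become one `contract_addresses=...` query value.
        let param = chunk.join(",");
        assert!(param.split(',').count() <= 10);
    }
}
```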
diff --git a/core/lib/zksync_core/src/data_fetchers/token_price/mock.rs b/core/lib/zksync_core/src/data_fetchers/token_price/mock.rs
deleted file mode 100644
index 6e5f4893e53..00000000000
--- a/core/lib/zksync_core/src/data_fetchers/token_price/mock.rs
+++ /dev/null
@@ -1,50 +0,0 @@
-use std::collections::HashMap;
-
-use async_trait::async_trait;
-use chrono::Utc;
-use num::{rational::Ratio, BigUint};
-use zksync_types::{
-    tokens::{TokenPrice, ETHEREUM_ADDRESS},
-    Address,
-};
-
-use crate::data_fetchers::error::ApiFetchError;
-
-use super::FetcherImpl;
-
-#[derive(Debug, Default, Clone)]
-pub struct MockPriceFetcher;
-
-impl MockPriceFetcher {
-    pub fn new() -> Self {
-        Self
-    }
-
-    pub fn token_price(&self, token: &Address) -> TokenPrice {
-        let raw_base_price = if *token == ETHEREUM_ADDRESS {
-            1500u64
-        } else {
-            1u64
-        };
-        let usd_price = Ratio::from_integer(BigUint::from(raw_base_price));
-
-        TokenPrice {
-            usd_price,
-            last_updated: Utc::now(),
-        }
-    }
-}
-
-#[async_trait]
-impl FetcherImpl for MockPriceFetcher {
-    async fn fetch_token_price(
-        &self,
-        tokens: &[Address],
-    ) -> Result<HashMap<Address, TokenPrice>, ApiFetchError> {
-        let data: HashMap<_, _> = tokens
-            .iter()
-            .map(|token| (*token, self.token_price(token)))
-            .collect();
-        Ok(data)
-    }
-}
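The deleted mock is the standard trait-object test double: implement the same `async` trait as the real client and return canned values. Reduced to its skeleton (invented names; assumes the `async-trait` crate and `tokio`):

```rust
use std::collections::HashMap;

use async_trait::async_trait;

#[async_trait]
trait PriceSource {
    async fn fetch_prices(&self, tokens: &[&'static str]) -> HashMap<&'static str, u64>;
}

#[derive(Debug, Default)]
struct MockPriceSource;

#[async_trait]
impl PriceSource for MockPriceSource {
    async fn fetch_prices(&self, tokens: &[&'static str]) -> HashMap<&'static str, u64> {
        // Deterministic stub data, mirroring the 1500-vs-1 split above.
        tokens
            .iter()
            .map(|&token| (token, if token == "ETH" { 1_500 } else { 1 }))
            .collect()
    }
}

#[tokio::main]
async fn main() {
    let source = MockPriceSource;
    let prices = source.fetch_prices(&["ETH", "DAI"]).await;
    assert_eq!(prices["ETH"], 1_500);
}
```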
diff --git a/core/lib/zksync_core/src/data_fetchers/token_price/mod.rs b/core/lib/zksync_core/src/data_fetchers/token_price/mod.rs
deleted file mode 100644
index 8e7d5575f69..00000000000
--- a/core/lib/zksync_core/src/data_fetchers/token_price/mod.rs
+++ /dev/null
@@ -1,134 +0,0 @@
-//! Token price fetcher is responsible for maintaining actual prices for tokens that are used in zkSync.
-
-use std::{collections::HashMap, time::Duration};
-
-use async_trait::async_trait;
-
-use zksync_config::{configs::fetcher::TokenPriceSource, FetcherConfig};
-use zksync_dal::{ConnectionPool, StorageProcessor};
-use zksync_types::{tokens::TokenPrice, Address};
-
-use super::error::{ApiFetchError, ErrorAnalyzer};
-use bigdecimal::FromPrimitive;
-use num::{rational::Ratio, BigUint};
-use tokio::sync::watch;
-
-pub mod coingecko;
-pub mod mock;
-
-#[async_trait]
-pub trait FetcherImpl: std::fmt::Debug + Send + Sync {
-    /// Retrieves the token price in USD.
-    async fn fetch_token_price(
-        &self,
-        tokens: &[Address],
-    ) -> Result<HashMap<Address, TokenPrice>, ApiFetchError>;
-}
-
-#[derive(Debug)]
-pub struct TokenPriceFetcher {
-    minimum_required_liquidity: Ratio<BigUint>,
-    config: FetcherConfig,
-    fetcher: Box<dyn FetcherImpl>,
-    error_handler: ErrorAnalyzer,
-}
-
-impl TokenPriceFetcher {
-    fn create_fetcher(config: &FetcherConfig) -> Box<dyn FetcherImpl> {
-        let token_price_config = &config.token_price;
-        match token_price_config.source {
-            TokenPriceSource::CoinGecko => {
-                Box::new(coingecko::CoinGeckoFetcher::new(config)) as Box<dyn FetcherImpl>
-            }
-            TokenPriceSource::CoinMarketCap => {
-                unimplemented!()
-            }
-            TokenPriceSource::Mock => {
-                Box::new(mock::MockPriceFetcher::new()) as Box<dyn FetcherImpl>
-            }
-        }
-    }
-
-    pub fn new(config: FetcherConfig) -> Self {
-        let fetcher = Self::create_fetcher(&config);
-        let error_handler = ErrorAnalyzer::new("TokenPriceFetcher");
-        Self {
-            minimum_required_liquidity: Ratio::from_integer(
-                BigUint::from_u64(0).unwrap(), // We don't use minimum required liquidity in the server anymore.
-            ),
-            config,
-            fetcher,
-            error_handler,
-        }
-    }
-
-    pub async fn run(
-        mut self,
-        pool: ConnectionPool,
-        stop_receiver: watch::Receiver<bool>,
-    ) -> anyhow::Result<()> {
-        let mut fetching_interval =
-            tokio::time::interval(self.config.token_price.fetching_interval());
-
-        loop {
-            if *stop_receiver.borrow() {
-                tracing::info!("Stop signal received, token_price_fetcher is shutting down");
-                break;
-            }
-
-            fetching_interval.tick().await;
-            self.error_handler.update().await;
-
-            // We refresh token list in case new tokens were added.
-            let mut storage = pool.access_storage().await.unwrap();
-            let tokens = self.get_tokens(&mut storage).await;
-
-            // Vector of received token prices in the format of (`token_addr`, `price_in_usd`, `fetch_timestamp`).
-            let token_prices = match self.fetch_token_price(&tokens).await {
-                Ok(prices) => {
-                    self.error_handler.reset();
-                    prices
-                }
-                Err(err) => {
-                    self.error_handler.process_error(err);
-                    continue;
-                }
-            };
-            self.store_token_prices(&mut storage, token_prices).await;
-        }
-        Ok(())
-    }
-
-    async fn fetch_token_price(
-        &self,
-        tokens: &[Address],
-    ) -> Result<HashMap<Address, TokenPrice>, ApiFetchError> {
-        const AWAITING_TIMEOUT: Duration = Duration::from_secs(2);
-
-        let fetch_future = self.fetcher.fetch_token_price(tokens);
-
-        tokio::time::timeout(AWAITING_TIMEOUT, fetch_future)
-            .await
-            .map_err(|_| ApiFetchError::RequestTimeout)?
-    }
-
-    async fn store_token_prices(
-        &self,
-        storage: &mut StorageProcessor<'_>,
-        token_prices: HashMap<Address, TokenPrice>,
-    ) {
-        let mut tokens_dal = storage.tokens_dal();
-        for (token, price) in token_prices {
-            tokens_dal.set_l1_token_price(&token, price).await;
-        }
-    }
-
-    /// Returns the list of "interesting" tokens, e.g. ones that can be used to pay fees.
-    /// We don't actually need prices for other tokens.
-    async fn get_tokens(&self, storage: &mut StorageProcessor<'_>) -> Vec<Address> {
-        storage
-            .tokens_dal()
-            .get_l1_tokens_by_volume(&self.minimum_required_liquidity)
-            .await
-    }
-}
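Both deleted `run` loops follow one template: fixed-interval ticks, a cooperative stop check, and errors that skip a round instead of killing the task. A compact sketch of that loop under the same assumptions (`tokio`, `anyhow`; all names invented):

```rust
use std::time::Duration;

use tokio::sync::watch;

async fn run_loop(stop: watch::Receiver<bool>) -> anyhow::Result<()> {
    let mut interval = tokio::time::interval(Duration::from_millis(10));
    loop {
        if *stop.borrow() {
            break;
        }
        interval.tick().await;
        match fetch_round().await {
            Ok(count) => println!("fetched {count} records"),
            // A failed round is logged and skipped; the task itself survives.
            Err(err) => {
                eprintln!("fetch failed, will retry next tick: {err}");
                continue;
            }
        }
    }
    Ok(())
}

async fn fetch_round() -> anyhow::Result<usize> {
    Ok(3) // stand-in for a real API call
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let (stop_sender, stop_receiver) = watch::channel(false);
    let task = tokio::spawn(run_loop(stop_receiver));
    tokio::time::sleep(Duration::from_millis(50)).await;
    stop_sender.send(true)?;
    task.await?
}
```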
diff --git a/core/lib/zksync_core/src/eth_sender/aggregator.rs b/core/lib/zksync_core/src/eth_sender/aggregator.rs
index 9b6cd1d16ce..a043b871b1e 100644
--- a/core/lib/zksync_core/src/eth_sender/aggregator.rs
+++ b/core/lib/zksync_core/src/eth_sender/aggregator.rs
@@ -1,12 +1,13 @@
+use std::sync::Arc;
+
 use zksync_config::configs::eth_sender::{ProofLoadingMode, ProofSendingMode, SenderConfig};
 use zksync_contracts::BaseSystemContractsHashes;
 use zksync_dal::StorageProcessor;
-use zksync_object_store::ObjectStore;
-use zksync_prover_utils::gcs_proof_fetcher::load_wrapped_fri_proofs_for_range;
+use zksync_object_store::{ObjectStore, ObjectStoreError};
 use zksync_types::{
     aggregated_operations::{
         AggregatedActionType, AggregatedOperation, L1BatchCommitOperation, L1BatchExecuteOperation,
-        L1BatchProofOperation,
+        L1BatchProofForL1, L1BatchProofOperation,
     },
     commitment::L1BatchWithMetadata,
     helpers::unix_timestamp_ms,
@@ -25,11 +26,11 @@ pub struct Aggregator {
     proof_criteria: Vec<Box<dyn L1BatchPublishCriterion>>,
     execute_criteria: Vec<Box<dyn L1BatchPublishCriterion>>,
     config: SenderConfig,
-    blob_store: Box<dyn ObjectStore>,
+    blob_store: Arc<dyn ObjectStore>,
 }
 
 impl Aggregator {
-    pub fn new(config: SenderConfig, blob_store: Box<dyn ObjectStore>) -> Self {
+    pub fn new(config: SenderConfig, blob_store: Arc<dyn ObjectStore>) -> Self {
         Self {
             commit_criteria: vec![
                 Box::from(NumberCriterion {
@@ -91,16 +92,19 @@ impl Aggregator {
     pub async fn get_next_ready_operation(
         &mut self,
         storage: &mut StorageProcessor<'_>,
-        prover_storage: &mut StorageProcessor<'_>,
         base_system_contracts_hashes: BaseSystemContractsHashes,
         protocol_version_id: ProtocolVersionId,
         l1_verifier_config: L1VerifierConfig,
     ) -> Option<AggregatedOperation> {
-        let last_sealed_l1_batch_number = storage
+        let Some(last_sealed_l1_batch_number) = storage
             .blocks_dal()
             .get_sealed_l1_batch_number()
             .await
-            .unwrap();
+            .unwrap()
+        else {
+            return None; // No L1 batches in Postgres; no operations are ready yet
+        };
+
         if let Some(op) = self
             .get_execute_operations(
                 storage,
@@ -113,7 +117,6 @@ impl Aggregator {
         } else if let Some(op) = self
             .get_proof_operation(
                 storage,
-                prover_storage,
                 *self.config.aggregated_proof_sizes.iter().max().unwrap(),
                 last_sealed_l1_batch_number,
                 l1_verifier_config,
@@ -223,7 +226,6 @@ impl Aggregator {
     async fn load_real_proof_operation(
         storage: &mut StorageProcessor<'_>,
-        prover_storage: &mut StorageProcessor<'_>,
         l1_verifier_config: L1VerifierConfig,
         proof_loading_mode: &ProofLoadingMode,
         blob_store: &dyn ObjectStore,
@@ -259,10 +261,7 @@ impl Aggregator {
         }
         let proofs = match proof_loading_mode {
             ProofLoadingMode::OldProofFromDb => {
-                prover_storage
-                    .prover_dal()
-                    .get_final_proofs_for_blocks(batch_to_prove, batch_to_prove)
-                    .await
+                unreachable!("OldProofFromDb is not supported anymore")
             }
             ProofLoadingMode::FriProofFromGcs => {
                 load_wrapped_fri_proofs_for_range(batch_to_prove, batch_to_prove, blob_store).await
@@ -338,7 +337,6 @@ impl Aggregator {
     async fn get_proof_operation(
         &mut self,
         storage: &mut StorageProcessor<'_>,
-        prover_storage: &mut StorageProcessor<'_>,
         limit: usize,
         last_sealed_l1_batch: L1BatchNumber,
         l1_verifier_config: L1VerifierConfig,
@@ -347,7 +345,6 @@ impl Aggregator {
             ProofSendingMode::OnlyRealProofs => {
                 Self::load_real_proof_operation(
                     storage,
-                    prover_storage,
                    l1_verifier_config,
                     &self.config.proof_loading_mode,
                     &*self.blob_store,
@@ -373,7 +370,6 @@ impl Aggregator {
                 // if there is a sampled proof then send it, otherwise check for skipped ones.
                 if let Some(op) = Self::load_real_proof_operation(
                     storage,
-                    prover_storage,
                     l1_verifier_config,
                     &self.config.proof_loading_mode,
                     &*self.blob_store,
@@ -423,3 +419,23 @@ async fn extract_ready_subrange(
         .collect(),
     )
 }
+
+pub async fn load_wrapped_fri_proofs_for_range(
+    from: L1BatchNumber,
+    to: L1BatchNumber,
+    blob_store: &dyn ObjectStore,
+) -> Vec<L1BatchProofForL1> {
+    let mut proofs = Vec::new();
+    for l1_batch_number in from.0..=to.0 {
+        let l1_batch_number = L1BatchNumber(l1_batch_number);
+        match blob_store.get(l1_batch_number).await {
+            Ok(proof) => proofs.push(proof),
+            Err(ObjectStoreError::KeyNotFound(_)) => (), // do nothing, proof is not ready yet
+            Err(err) => panic!(
+                "Failed to load proof for batch {}: {}",
+                l1_batch_number.0, err
+            ),
+        }
+    }
+    proofs
+}
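The `Box<dyn ObjectStore>` to `Arc<dyn ObjectStore>` switch in this diff is what lets the same blob store handle be shared by several components instead of each owning its own instance. The mechanism in miniature (the `Store` trait below is a stand-in, not the real `ObjectStore` API):

```rust
use std::sync::Arc;

trait Store: Send + Sync {
    fn name(&self) -> &'static str;
}

struct GcsStore;

impl Store for GcsStore {
    fn name(&self) -> &'static str {
        "gcs"
    }
}

struct Aggregator {
    blob_store: Arc<dyn Store>,
}

struct Api {
    blob_store: Arc<dyn Store>,
}

fn main() {
    let store: Arc<dyn Store> = Arc::new(GcsStore);
    // Cloning the `Arc` copies the pointer, not the store; with `Box` each
    // component would need its own separately constructed instance.
    let aggregator = Aggregator { blob_store: store.clone() };
    let api = Api { blob_store: store };
    assert_eq!(aggregator.blob_store.name(), api.blob_store.name());
}
```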
diff --git a/core/lib/zksync_core/src/eth_sender/error.rs b/core/lib/zksync_core/src/eth_sender/error.rs
index 080e252c92c..206bbf2d583 100644
--- a/core/lib/zksync_core/src/eth_sender/error.rs
+++ b/core/lib/zksync_core/src/eth_sender/error.rs
@@ -1,10 +1,9 @@
-use zksync_eth_client::types;
 use zksync_types::web3::contract;
 
 #[derive(Debug, thiserror::Error)]
 pub enum ETHSenderError {
     #[error("Ethereum gateway Error {0}")]
-    EthereumGateWayError(#[from] types::Error),
+    EthereumGateWayError(#[from] zksync_eth_client::Error),
     #[error("Token parsing Error: {0}")]
     ParseError(#[from] contract::Error),
 }
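This change only redirects which error type feeds the `#[from]` conversion; the mechanism itself is `thiserror` generating `From` impls so that `?` can wrap underlying errors automatically. A tiny self-contained example using standard-library error types:

```rust
use thiserror::Error;

#[derive(Debug, Error)]
enum SenderError {
    #[error("gateway error: {0}")]
    Gateway(#[from] std::io::Error),
    #[error("parse error: {0}")]
    Parse(#[from] std::num::ParseIntError),
}

fn parse_nonce(input: &str) -> Result<u64, SenderError> {
    // `?` uses the generated `From` impls to wrap the underlying error.
    Ok(input.trim().parse::<u64>()?)
}

fn main() {
    assert!(matches!(parse_nonce("17"), Ok(17)));
    assert!(matches!(parse_nonce("abc"), Err(SenderError::Parse(_))));
}
```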
diff --git a/core/lib/zksync_core/src/eth_sender/eth_tx_aggregator.rs b/core/lib/zksync_core/src/eth_sender/eth_tx_aggregator.rs
index 8059eb9d7ea..2632466d957 100644
--- a/core/lib/zksync_core/src/eth_sender/eth_tx_aggregator.rs
+++ b/core/lib/zksync_core/src/eth_sender/eth_tx_aggregator.rs
@@ -1,11 +1,10 @@
-use std::convert::TryInto;
+use std::{convert::TryInto, sync::Arc};
 
 use tokio::sync::watch;
-
 use zksync_config::configs::eth_sender::SenderConfig;
 use zksync_contracts::BaseSystemContractsHashes;
 use zksync_dal::{ConnectionPool, StorageProcessor};
-use zksync_eth_client::BoundEthInterface;
+use zksync_eth_client::{BoundEthInterface, CallFunctionArgs};
 use zksync_types::{
     aggregated_operations::AggregatedOperation,
     contracts::{Multicall3Call, Multicall3Result},
@@ -13,17 +12,22 @@
     ethabi::{Contract, Token},
     protocol_version::{L1VerifierConfig, VerifierParams},
     vk_transform::l1_vk_commitment,
-    web3::contract::{tokens::Tokenizable, Error, Options},
+    web3::contract::{
+        tokens::{Detokenize, Tokenizable},
+        Error,
+    },
     Address, ProtocolVersionId, H256, U256,
 };
 
-use crate::eth_sender::{
-    metrics::{PubdataKind, METRICS},
-    zksync_functions::ZkSyncFunctions,
-    Aggregator, ETHSenderError,
+use crate::{
+    eth_sender::{
+        metrics::{PubdataKind, METRICS},
+        zksync_functions::ZkSyncFunctions,
+        Aggregator, ETHSenderError,
+    },
+    gas_tracker::agg_l1_batch_base_cost,
+    metrics::BlockL1Stage,
 };
-use crate::gas_tracker::agg_l1_batch_base_cost;
-use crate::metrics::BlockL1Stage;
 
 /// Data queried from L1 using multicall contract.
 #[derive(Debug)]
@@ -40,6 +44,7 @@ pub struct MulticallData {
 #[derive(Debug)]
 pub struct EthTxAggregator {
     aggregator: Aggregator,
+    eth_client: Arc<dyn BoundEthInterface>,
     config: SenderConfig,
     timelock_contract_address: Address,
     l1_multicall3_address: Address,
@@ -52,6 +57,7 @@ impl EthTxAggregator {
     pub fn new(
         config: SenderConfig,
         aggregator: Aggregator,
+        eth_client: Arc<dyn BoundEthInterface>,
         timelock_contract_address: Address,
         l1_multicall3_address: Address,
         main_zksync_contract_address: Address,
@@ -61,6 +67,7 @@ impl EthTxAggregator {
         Self {
             config,
             aggregator,
+            eth_client,
             timelock_contract_address,
             l1_multicall3_address,
             main_zksync_contract_address,
@@ -69,29 +76,20 @@ impl EthTxAggregator {
         }
     }
 
-    pub async fn run<E: BoundEthInterface>(
+    pub async fn run(
         mut self,
         pool: ConnectionPool,
-        prover_pool: ConnectionPool,
-        eth_client: E,
         stop_receiver: watch::Receiver<bool>,
     ) -> anyhow::Result<()> {
         loop {
             let mut storage = pool.access_storage_tagged("eth_sender").await.unwrap();
-            let mut prover_storage = prover_pool
-                .access_storage_tagged("eth_sender")
-                .await
-                .unwrap();
 
             if *stop_receiver.borrow() {
                 tracing::info!("Stop signal received, eth_tx_aggregator is shutting down");
                 break;
             }
 
-            if let Err(err) = self
-                .loop_iteration(&mut storage, &mut prover_storage, &eth_client)
-                .await
-            {
+            if let Err(err) = self.loop_iteration(&mut storage).await {
                 // Web3 API request failures can cause this,
                 // and anything more important is already properly reported.
                 tracing::warn!("eth_sender error {err:?}");
@@ -102,24 +100,14 @@ impl EthTxAggregator {
         Ok(())
     }
 
-    pub(super) async fn get_multicall_data<E: BoundEthInterface>(
-        &mut self,
-        eth_client: &E,
-    ) -> Result<MulticallData, ETHSenderError> {
+    pub(super) async fn get_multicall_data(&mut self) -> Result<MulticallData, ETHSenderError> {
         let calldata = self.generate_calldata_for_multicall();
-        let aggregate3_result = eth_client
-            .call_contract_function(
-                &self.functions.aggregate3.name,
-                calldata,
-                None,
-                Options::default(),
-                None,
-                self.l1_multicall3_address,
-                self.functions.multicall_contract.clone(),
-            )
-            .await?;
-
-        self.parse_multicall_data(aggregate3_result)
+        let args = CallFunctionArgs::new(&self.functions.aggregate3.name, calldata).for_contract(
+            self.l1_multicall3_address,
+            self.functions.multicall_contract.clone(),
+        );
+        let aggregate3_result = self.eth_client.call_contract_function(args).await?;
+        self.parse_multicall_data(Token::from_tokens(aggregate3_result)?)
     }
 
     // Multicall's aggregate function accepts 1 argument - arrays of different contract calls.
@@ -194,8 +182,8 @@ impl EthTxAggregator {
         ]
     }
 
-    // The role of the method below is to detokenize multicall call's result, which is actually a token.
-    // This token is an array of tuples like (bool, bytes), that contain the status and result for each contract call.
+    // The role of the method below is to de-tokenize multicall call's result, which is actually a token.
+    // This token is an array of tuples like `(bool, bytes)`, that contain the status and result for each contract call.
     pub(super) fn parse_multicall_data(
         &self,
         token: Token,
@@ -300,9 +288,8 @@ impl EthTxAggregator {
     }
 
     /// Loads current verifier config on L1
-    async fn get_recursion_scheduler_level_vk_hash<E: BoundEthInterface>(
+    async fn get_recursion_scheduler_level_vk_hash(
         &mut self,
-        eth_client: &E,
         verifier_address: Address,
         contracts_are_pre_boojum: bool,
     ) -> Result<H256, ETHSenderError> {
@@ -312,68 +299,46 @@ impl EthTxAggregator {
         // tracing::debug!("Calling get_verification_key");
         if contracts_are_pre_boojum {
             let abi = Contract {
-                functions: vec![(
+                functions: [(
                     self.functions.get_verification_key.name.clone(),
                     vec![self.functions.get_verification_key.clone()],
                 )]
-                .into_iter()
-                .collect(),
+                .into(),
                 ..Default::default()
             };
-            let vk = eth_client
-                .call_contract_function(
-                    &self.functions.get_verification_key.name,
-                    (),
-                    None,
-                    Default::default(),
-                    None,
-                    verifier_address,
-                    abi,
-                )
-                .await?;
-            Ok(l1_vk_commitment(vk))
+            let args = CallFunctionArgs::new(&self.functions.get_verification_key.name, ())
+                .for_contract(verifier_address, abi);
+
+            let vk = self.eth_client.call_contract_function(args).await?;
+            Ok(l1_vk_commitment(Token::from_tokens(vk)?))
         } else {
             let get_vk_hash = self.functions.verification_key_hash.as_ref();
             // tracing::debug!("Calling verificationKeyHash");
-            let vk_hash = eth_client
-                .call_contract_function(
-                    &get_vk_hash.unwrap().name,
-                    (),
-                    None,
-                    Default::default(),
-                    None,
-                    verifier_address,
-                    self.functions.verifier_contract.clone(),
-                )
-                .await?;
-            Ok(vk_hash)
+            let args = CallFunctionArgs::new(&get_vk_hash.unwrap().name, ())
+                .for_contract(verifier_address, self.functions.verifier_contract.clone());
+            let vk_hash = self.eth_client.call_contract_function(args).await?;
+            Ok(H256::from_tokens(vk_hash)?)
         }
     }
 
-    #[tracing::instrument(skip(self, storage, eth_client))]
-    async fn loop_iteration<E: BoundEthInterface>(
+    #[tracing::instrument(skip(self, storage))]
+    async fn loop_iteration(
         &mut self,
         storage: &mut StorageProcessor<'_>,
-        prover_storage: &mut StorageProcessor<'_>,
-        eth_client: &E,
     ) -> Result<(), ETHSenderError> {
         let MulticallData {
             base_system_contracts_hashes,
             verifier_params,
             verifier_address,
            protocol_version_id,
-        } = self.get_multicall_data(eth_client).await.map_err(|err| {
+        } = self.get_multicall_data().await.map_err(|err| {
            tracing::error!("Failed to get multicall data {err:?}");
            err
        })?;
         let contracts_are_pre_boojum = protocol_version_id.is_pre_boojum();
 
         let recursion_scheduler_level_vk_hash = self
-            .get_recursion_scheduler_level_vk_hash(
-                eth_client,
-                verifier_address,
-                contracts_are_pre_boojum,
-            )
+            .get_recursion_scheduler_level_vk_hash(verifier_address, contracts_are_pre_boojum)
             .await
             .map_err(|err| {
                 tracing::error!("Failed to get VK hash from the Verifier {err:?}");
@@ -387,7 +352,6 @@ impl EthTxAggregator {
             .aggregator
             .get_next_ready_operation(
                 storage,
-                prover_storage,
                 base_system_contracts_hashes,
                 protocol_version_id,
                 l1_verifier_config,
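The `CallFunctionArgs::new(..).for_contract(..)` calls introduced above replace a seven-argument client method with a builder that carries everything an `eth_call` needs in one value. A simplified imitation of that builder shape (the types below are stand-ins, not the real `zksync_eth_client` definitions):

```rust
#[derive(Debug)]
struct CallArgs {
    function: String,
    params: Vec<u8>,
    contract: Option<[u8; 20]>,
}

impl CallArgs {
    fn new(function: &str, params: Vec<u8>) -> Self {
        Self {
            function: function.to_owned(),
            params,
            contract: None,
        }
    }

    // Builder step mirroring `for_contract(address, abi)` in the diff above.
    fn for_contract(mut self, address: [u8; 20]) -> Self {
        self.contract = Some(address);
        self
    }
}

fn main() {
    let args = CallArgs::new("aggregate3", vec![0x01, 0x02]).for_contract([0u8; 20]);
    assert_eq!(args.function, "aggregate3");
    assert!(args.contract.is_some());
}
```

The accompanying `Detokenize` import is what makes `Token::from_tokens(..)` and `H256::from_tokens(..)` available for turning the raw call result back into typed values.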
BoundEthInterface, Error, ExecutedTxStatus, RawTransactionBytes, SignedCallResult, }; use zksync_types::{ eth_sender::EthTx, - web3::{contract::Options, error::Error as Web3Error}, + web3::{ + contract::Options, + error::Error as Web3Error, + types::{BlockId, BlockNumber}, + }, L1BlockNumber, Nonce, H256, U256, }; use zksync_utils::time::seconds_since_epoch; use super::{metrics::METRICS, ETHSenderError}; -use crate::l1_gas_price::L1TxParamsProvider; -use crate::metrics::BlockL1Stage; +use crate::{l1_gas_price::L1TxParamsProvider, metrics::BlockL1Stage}; #[derive(Debug)] struct EthFee { @@ -43,22 +43,22 @@ pub(super) struct L1BlockNumbers { /// The component is responsible for managing sending eth_txs attempts: /// Based on eth_tx queue the component generates new attempt with the minimum possible fee, -/// save it to the database, and send it to ethereum. +/// save it to the database, and send it to Ethereum. /// Based on eth_tx_history queue the component can mark txs as stuck and create the new attempt /// with higher gas price #[derive(Debug)] -pub struct EthTxManager { - ethereum_gateway: E, +pub struct EthTxManager { + ethereum_gateway: Arc, config: SenderConfig, - gas_adjuster: Arc, + gas_adjuster: Arc, } -impl EthTxManager -where - E: BoundEthInterface + Sync, - G: L1TxParamsProvider, -{ - pub fn new(config: SenderConfig, gas_adjuster: Arc, ethereum_gateway: E) -> Self { +impl EthTxManager { + pub fn new( + config: SenderConfig, + gas_adjuster: Arc, + ethereum_gateway: Arc, + ) -> Self { Self { ethereum_gateway, config, @@ -174,7 +174,7 @@ where return Err(ETHSenderError::from(Error::from(Web3Error::Internal))); } - // Increase `priority_fee_per_gas` by at least 20% to prevent "replacement transaction underpriced" error. + // Increase `priority_fee_per_gas` by at least 20% to prevent "replacement transaction under-priced" error. Ok((previous_priority_fee + (previous_priority_fee / 5) + 1) .max(self.gas_adjuster.get_priority_fee())) } @@ -207,7 +207,7 @@ where base_fee_per_gas, priority_fee_per_gas, signed_tx.hash, - signed_tx.raw_tx.clone(), + signed_tx.raw_tx.as_ref(), ) .await .unwrap() @@ -232,7 +232,7 @@ where &self, storage: &mut StorageProcessor<'_>, tx_history_id: u32, - raw_tx: Vec, + raw_tx: RawTransactionBytes, current_block: L1BlockNumber, ) -> Result { match self.ethereum_gateway.send_raw_tx(raw_tx).await { @@ -285,7 +285,7 @@ where (latest_block_number.saturating_sub(confirmations) as u32).into() } else { self.ethereum_gateway - .block("finalized".to_string(), "eth_tx_manager") + .block(BlockId::Number(BlockNumber::Finalized), "eth_tx_manager") .await? .expect("Finalized block must be present on L1") .number @@ -303,7 +303,7 @@ where Ok(L1BlockNumbers { finalized, latest }) } - // Monitors the inflight transactions, marks mined ones as confirmed, + // Monitors the in-flight transactions, marks mined ones as confirmed, // returns the one that has to be resent (if there is one). pub(super) async fn monitor_inflight_transactions( &mut self, @@ -333,7 +333,7 @@ where // If the `operator_nonce.latest` <= `tx.nonce`, this means // that `tx` is not mined and we should resend it. - // We only resend the first unmined transaction. + // We only resend the first un-mined transaction. if operator_nonce.latest <= tx.nonce { // None means txs hasn't been sent yet let first_sent_at_block = storage @@ -367,9 +367,9 @@ where } None => { // The nonce has increased but we did not find the receipt. 
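Two details in the `L1BlockNumbers` hunk above are worth isolating: the finalized block is derived locally via `saturating_sub` when a confirmation count is configured, and only otherwise queried from the node, now with the typed `BlockId::Number(BlockNumber::Finalized)` instead of the stringly-typed `"finalized"`. A sketch of the local branch, with a stand-in `L1BlockNumber`:

```rust
// Stand-in for `zksync_types::L1BlockNumber`.
#[derive(Debug, Clone, Copy, PartialEq)]
struct L1BlockNumber(u32);

// With an explicit confirmation count, "finalized" is just `latest - confirmations`;
// `None` models the branch where the node is asked for its finalized block instead.
fn finalized_block(latest: u64, confirmations: Option<u64>) -> Option<L1BlockNumber> {
    match confirmations {
        Some(conf) => Some(L1BlockNumber(latest.saturating_sub(conf) as u32)),
        None => None, // caller queries `BlockId::Number(BlockNumber::Finalized)`
    }
}

fn main() {
    assert_eq!(finalized_block(100, Some(3)), Some(L1BlockNumber(97)));
    assert_eq!(finalized_block(2, Some(5)), Some(L1BlockNumber(0))); // saturates at zero
    assert!(finalized_block(100, None).is_none()); // would hit the node instead
}
```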
- // This is an error because such a big reorg may cause transactions that were + // This is an error because such a big re-org may cause transactions that were // previously recorded as confirmed to become pending again and we have to - // make sure it's not the case - otherwise eth_sender may not work properly. + // make sure it's not the case - otherwise `eth_sender` may not work properly. tracing::error!( "Possible block reorgs: finalized nonce increase detected, but no tx receipt found for tx {:?}", &tx @@ -410,7 +410,7 @@ where ) { for tx in storage.eth_sender_dal().get_unsent_txs().await.unwrap() { // Check already sent txs not marked as sent and mark them as sent. - // The common reason for this behaviour is that we sent tx and stop the server + // The common reason for this behavior is that we sent tx and stop the server // before updating the database let tx_status = self.get_tx_status(tx.tx_hash).await; @@ -435,12 +435,12 @@ where .send_raw_transaction( storage, tx.id, - tx.signed_raw_tx.clone(), + RawTransactionBytes::new_unchecked(tx.signed_raw_tx.clone()), l1_block_numbers.latest, ) .await { - tracing::warn!("Error {:?} in sending tx {:?}", error, &tx); + tracing::warn!("Error sending transaction {tx:?}: {error}"); } } } @@ -561,8 +561,8 @@ where self.send_unsent_txs(&mut storage, l1_block_numbers).await; } - // It's mandatory to set last_known_l1_block to zero, otherwise the first iteration - // will never check inflight txs status + // It's mandatory to set `last_known_l1_block` to zero, otherwise the first iteration + // will never check in-flight txs status let mut last_known_l1_block = L1BlockNumber(0); loop { let mut storage = pool.access_storage_tagged("eth_sender").await.unwrap(); diff --git a/core/lib/zksync_core/src/eth_sender/grafana_metrics.rs b/core/lib/zksync_core/src/eth_sender/grafana_metrics.rs deleted file mode 100644 index e69de29bb2d..00000000000 diff --git a/core/lib/zksync_core/src/eth_sender/metrics.rs b/core/lib/zksync_core/src/eth_sender/metrics.rs index 2f4a225e570..4bce1bf1a1f 100644 --- a/core/lib/zksync_core/src/eth_sender/metrics.rs +++ b/core/lib/zksync_core/src/eth_sender/metrics.rs @@ -1,9 +1,8 @@ //! Metrics for the Ethereum sender component. -use vise::{Buckets, Counter, EncodeLabelSet, EncodeLabelValue, Family, Gauge, Histogram, Metrics}; - use std::{fmt, time::Duration}; +use vise::{Buckets, Counter, EncodeLabelSet, EncodeLabelValue, Family, Gauge, Histogram, Metrics}; use zksync_dal::StorageProcessor; use zksync_types::{aggregated_operations::AggregatedActionType, eth_sender::EthTx}; use zksync_utils::time::seconds_since_epoch; @@ -78,7 +77,7 @@ pub(super) struct EthSenderMetrics { pub used_priority_fee_per_gas: Histogram, /// Last L1 block observed by the Ethereum sender. pub last_known_l1_block: Gauge, - /// Number of inflight txs produced by the Ethereum sender. + /// Number of in-flight txs produced by the Ethereum sender. 
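`send_raw_transaction` now takes `RawTransactionBytes` instead of a bare `Vec<u8>`, and bytes loaded back from the database are wrapped via `new_unchecked`. A minimal stand-in mirroring just the two operations these hunks rely on (the real type lives in `zksync_eth_client`):

```rust
// A thin newtype marking a byte blob as a signed, ready-to-send transaction,
// so arbitrary `Vec<u8>` values can't be passed to the gateway by accident.
#[derive(Debug, Clone, PartialEq)]
pub struct RawTransactionBytes(Vec<u8>);

impl RawTransactionBytes {
    // Escape hatch for bytes whose provenance is already trusted,
    // e.g. payloads read back from `eth_txs_history`.
    pub fn new_unchecked(bytes: Vec<u8>) -> Self {
        Self(bytes)
    }
}

impl AsRef<[u8]> for RawTransactionBytes {
    fn as_ref(&self) -> &[u8] {
        &self.0
    }
}

fn main() {
    let from_db = vec![0xf8, 0x6b]; // pretend these came from the database
    let raw = RawTransactionBytes::new_unchecked(from_db);
    assert_eq!(raw.as_ref()[0], 0xf8);
}
```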
pub number_of_inflight_txs: Gauge, #[metrics(buckets = GAS_BUCKETS)] pub l1_gas_used: Family>, diff --git a/core/lib/zksync_core/src/eth_sender/publish_criterion.rs b/core/lib/zksync_core/src/eth_sender/publish_criterion.rs index 33fd33ad577..85f6a46c960 100644 --- a/core/lib/zksync_core/src/eth_sender/publish_criterion.rs +++ b/core/lib/zksync_core/src/eth_sender/publish_criterion.rs @@ -1,8 +1,7 @@ -use async_trait::async_trait; -use chrono::Utc; - use std::fmt; +use async_trait::async_trait; +use chrono::Utc; use zksync_dal::StorageProcessor; use zksync_types::{ aggregated_operations::AggregatedActionType, commitment::L1BatchWithMetadata, L1BatchNumber, diff --git a/core/lib/zksync_core/src/eth_sender/tests.rs b/core/lib/zksync_core/src/eth_sender/tests.rs index 51166fc794a..73f484a1fd7 100644 --- a/core/lib/zksync_core/src/eth_sender/tests.rs +++ b/core/lib/zksync_core/src/eth_sender/tests.rs @@ -1,15 +1,13 @@ -use assert_matches::assert_matches; -use std::sync::{atomic::Ordering, Arc}; +use std::sync::Arc; +use assert_matches::assert_matches; use once_cell::sync::Lazy; - use zksync_config::{ configs::eth_sender::{ProofSendingMode, SenderConfig}, ContractsConfig, ETHSenderConfig, GasAdjusterConfig, }; -use zksync_contracts::BaseSystemContractsHashes; use zksync_dal::{ConnectionPool, StorageProcessor}; -use zksync_eth_client::{clients::mock::MockEthereum, EthInterface}; +use zksync_eth_client::{clients::MockEthereum, EthInterface}; use zksync_object_store::ObjectStoreFactory; use zksync_types::{ aggregated_operations::{ @@ -23,24 +21,21 @@ use zksync_types::{ Address, L1BatchNumber, L1BlockNumber, ProtocolVersionId, H256, }; -use crate::eth_sender::{ - eth_tx_manager::L1BlockNumbers, Aggregator, ETHSenderError, EthTxAggregator, EthTxManager, +use crate::{ + eth_sender::{ + eth_tx_manager::L1BlockNumbers, Aggregator, ETHSenderError, EthTxAggregator, EthTxManager, + }, + l1_gas_price::GasAdjuster, + utils::testonly::create_l1_batch, }; -use crate::l1_gas_price::GasAdjuster; -// Alias to conveniently call static methods of ETHSender. -type MockEthTxManager = EthTxManager, GasAdjuster>>; +// Alias to conveniently call static methods of `ETHSender`. 
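`EthTxManager` drops its `<E: BoundEthInterface, G: L1TxParamsProvider>` type parameters in favor of `Arc`-owned trait objects (presumably `Arc<dyn BoundEthInterface>` and `Arc<dyn L1TxParamsProvider>`, judging from the constructor). A minimal sketch of that generics-to-trait-objects migration:

```rust
use std::sync::Arc;

// Before: one monomorphized manager per gateway type parameter.
// After: a single concrete struct over a trait object.
trait Gateway: Send + Sync {
    fn send(&self, raw: &[u8]);
}

struct Manager {
    gateway: Arc<dyn Gateway>, // was `E: Gateway` as a type parameter
}

impl Manager {
    fn new(gateway: Arc<dyn Gateway>) -> Self {
        Self { gateway }
    }
}

struct NoopGateway;

impl Gateway for NoopGateway {
    fn send(&self, raw: &[u8]) {
        println!("sending {} bytes", raw.len());
    }
}

fn main() {
    // Tests can now inject a mock without an alias naming every type parameter,
    // which is why `MockEthTxManager` below collapses to plain `EthTxManager`.
    let manager = Manager::new(Arc::new(NoopGateway));
    manager.gateway.send(&[0u8; 3]);
}
```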
+type MockEthTxManager = EthTxManager; static DUMMY_OPERATION: Lazy = Lazy::new(|| { AggregatedOperation::Execute(L1BatchExecuteOperation { l1_batches: vec![L1BatchWithMetadata { - header: L1BatchHeader::new( - L1BatchNumber(1), - 1, - Address::default(), - BaseSystemContractsHashes::default(), - ProtocolVersionId::latest(), - ), + header: create_l1_batch(1), metadata: default_l1_batch_metadata(), factory_deps: Vec::new(), }], @@ -83,9 +78,7 @@ impl EthSenderTester { .with_non_ordering_confirmation(non_ordering_confirmations) .with_multicall_address(contracts_config.l1_multicall3_addr), ); - gateway - .block_number - .fetch_add(Self::WAIT_CONFIRMATIONS, Ordering::Relaxed); + gateway.advance_block_number(Self::WAIT_CONFIRMATIONS); let gas_adjuster = Arc::new( GasAdjuster::new( @@ -112,6 +105,7 @@ impl EthSenderTester { aggregator_config.clone(), store_factory.create_store().await, ), + gateway.clone(), // zkSync contract address Address::random(), contracts_config.l1_multicall3_addr, @@ -174,7 +168,7 @@ async fn confirm_many() -> anyhow::Result<()> { } // check that we sent something - assert_eq!(tester.gateway.sent_txs.read().unwrap().len(), 5); + assert_eq!(tester.gateway.sent_tx_count(), 5); assert_eq!( tester .storage() @@ -190,7 +184,7 @@ async fn confirm_many() -> anyhow::Result<()> { for hash in hashes { tester .gateway - .execute_tx(hash, true, EthSenderTester::WAIT_CONFIRMATIONS)?; + .execute_tx(hash, true, EthSenderTester::WAIT_CONFIRMATIONS); } let to_resend = tester @@ -220,7 +214,7 @@ async fn confirm_many() -> anyhow::Result<()> { Ok(()) } -// Tests that we resend first unmined transaction every block with an increased gas price. +// Tests that we resend first un-mined transaction every block with an increased gas price. #[tokio::test] async fn resend_each_block() -> anyhow::Result<()> { let connection_pool = ConnectionPool::test_pool().await; @@ -251,7 +245,7 @@ async fn resend_each_block() -> anyhow::Result<()> { .await?; // check that we sent something and stored it in the db - assert_eq!(tester.gateway.sent_txs.read().unwrap().len(), 1); + assert_eq!(tester.gateway.sent_tx_count(), 1); assert_eq!( tester .storage() @@ -264,10 +258,18 @@ async fn resend_each_block() -> anyhow::Result<()> { 1 ); - let sent_tx = tester.gateway.sent_txs.read().unwrap()[&hash]; + let sent_tx = tester + .gateway + .get_tx(hash, "") + .await + .unwrap() + .expect("no transaction"); assert_eq!(sent_tx.hash, hash); - assert_eq!(sent_tx.nonce, 0); - assert_eq!(sent_tx.base_fee.as_usize(), 18); // 6 * 3 * 2^0 + assert_eq!(sent_tx.nonce, 0.into()); + assert_eq!( + sent_tx.max_fee_per_gas.unwrap() - sent_tx.max_priority_fee_per_gas.unwrap(), + 18.into() // `6 * 3 * 2^0` + ); // now, median is 5 tester.gateway.advance_block_number(2); @@ -294,7 +296,7 @@ async fn resend_each_block() -> anyhow::Result<()> { .await?; // check that transaction has been resent - assert_eq!(tester.gateway.sent_txs.read().unwrap().len(), 2); + assert_eq!(tester.gateway.sent_tx_count(), 2); assert_eq!( tester .storage() @@ -307,9 +309,17 @@ async fn resend_each_block() -> anyhow::Result<()> { 1 ); - let resent_tx = tester.gateway.sent_txs.read().unwrap()[&resent_hash]; - assert_eq!(resent_tx.nonce, 0); - assert_eq!(resent_tx.base_fee.as_usize(), 30); // 5 * 3 * 2^1 + let resent_tx = tester + .gateway + .get_tx(resent_hash, "") + .await + .unwrap() + .expect("no transaction"); + assert_eq!(resent_tx.nonce, 0.into()); + assert_eq!( + resent_tx.max_fee_per_gas.unwrap() - resent_tx.max_priority_fee_per_gas.unwrap(), + 30.into() // 
`5 * 3 * 2^1` + ); Ok(()) } @@ -342,7 +352,7 @@ async fn dont_resend_already_mined() -> anyhow::Result<()> { .unwrap(); // check that we sent something and stored it in the db - assert_eq!(tester.gateway.sent_txs.read().unwrap().len(), 1); + assert_eq!(tester.gateway.sent_tx_count(), 1); assert_eq!( tester .storage() @@ -358,7 +368,7 @@ async fn dont_resend_already_mined() -> anyhow::Result<()> { // mine the transaction but don't have enough confirmations yet tester .gateway - .execute_tx(hash, true, EthSenderTester::WAIT_CONFIRMATIONS - 1)?; + .execute_tx(hash, true, EthSenderTester::WAIT_CONFIRMATIONS - 1); let to_resend = tester .manager @@ -368,7 +378,7 @@ async fn dont_resend_already_mined() -> anyhow::Result<()> { ) .await?; - // check that transaction is still considered inflight + // check that transaction is still considered in-flight assert_eq!( tester .storage() @@ -419,17 +429,16 @@ async fn three_scenarios() -> anyhow::Result<()> { } // check that we sent something - assert_eq!(tester.gateway.sent_txs.read().unwrap().len(), 3); + assert_eq!(tester.gateway.sent_tx_count(), 3); // mined & confirmed tester .gateway - .execute_tx(hashes[0], true, EthSenderTester::WAIT_CONFIRMATIONS)?; - + .execute_tx(hashes[0], true, EthSenderTester::WAIT_CONFIRMATIONS); // mined but not confirmed tester .gateway - .execute_tx(hashes[1], true, EthSenderTester::WAIT_CONFIRMATIONS - 1)?; + .execute_tx(hashes[1], true, EthSenderTester::WAIT_CONFIRMATIONS - 1); let (to_resend, _) = tester .manager @@ -440,7 +449,7 @@ async fn three_scenarios() -> anyhow::Result<()> { .await? .expect("we should be trying to resend the last tx"); - // check that last 2 transactions are still considered inflight + // check that last 2 transactions are still considered in-flight assert_eq!( tester .storage() @@ -489,8 +498,7 @@ async fn failed_eth_tx() { // fail this tx tester .gateway - .execute_tx(hash, false, EthSenderTester::WAIT_CONFIRMATIONS) - .unwrap(); + .execute_tx(hash, false, EthSenderTester::WAIT_CONFIRMATIONS); tester .manager .monitor_inflight_transactions( @@ -858,7 +866,7 @@ async fn test_parse_multicall_data() { async fn get_multicall_data() { let connection_pool = ConnectionPool::test_pool().await; let mut tester = EthSenderTester::new(connection_pool, vec![100; 100], false).await; - let multicall_data = tester.aggregator.get_multicall_data(&tester.gateway).await; + let multicall_data = tester.aggregator.get_multicall_data().await; assert!(multicall_data.is_ok()); } @@ -872,21 +880,14 @@ async fn insert_genesis_protocol_version(tester: &EthSenderTester) { } async fn insert_l1_batch(tester: &EthSenderTester, number: L1BatchNumber) -> L1BatchHeader { - let mut header = L1BatchHeader::new( - number, - 0, - Address::zero(), - BaseSystemContractsHashes::default(), - Default::default(), - ); - header.is_finished = true; + let header = create_l1_batch(number.0); // Save L1 batch to the database tester .storage() .await .blocks_dal() - .insert_l1_batch(&header, &[], Default::default(), &[], &[]) + .insert_l1_batch(&header, &[], Default::default(), &[], &[], 0) .await .unwrap(); tester @@ -978,9 +979,7 @@ async fn send_operation( async fn confirm_tx(tester: &mut EthSenderTester, hash: H256) { tester .gateway - .execute_tx(hash, true, EthSenderTester::WAIT_CONFIRMATIONS) - .unwrap(); - + .execute_tx(hash, true, EthSenderTester::WAIT_CONFIRMATIONS); tester .manager .monitor_inflight_transactions( diff --git a/core/lib/zksync_core/src/eth_watch/client.rs b/core/lib/zksync_core/src/eth_watch/client.rs index 
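The test updates above move from poking `MockEthereum` internals (`sent_txs.read().unwrap()`, the atomic `block_number`) to a small accessor surface: `sent_tx_count()`, `advance_block_number()`, an infallible `execute_tx()`, and `get_tx()` for inspecting fees; note that the assertions now derive the base fee as `max_fee_per_gas - max_priority_fee_per_gas`. A self-contained stand-in showing the shape of that surface, not the real `zksync_eth_client::clients::MockEthereum`:

```rust
// Toy mock gateway mirroring only the accessors the tests above use.
#[derive(Default)]
struct MockGateway {
    block_number: u64,
    sent: Vec<[u8; 32]>,
}

impl MockGateway {
    // Replaces fetch-adding the atomic `block_number` field directly.
    fn advance_block_number(&mut self, by: u64) -> u64 {
        self.block_number += by;
        self.block_number
    }

    fn send_raw_tx(&mut self, hash: [u8; 32]) {
        self.sent.push(hash);
    }

    // Replaces reading the `sent_txs` map through a lock.
    fn sent_tx_count(&self) -> usize {
        self.sent.len()
    }

    // Now infallible: marking a known tx as mined cannot fail, hence no `Result`.
    fn execute_tx(&mut self, _hash: [u8; 32], _success: bool, _confirmations: u64) {}
}

fn main() {
    let mut gw = MockGateway::default();
    gw.advance_block_number(10);
    gw.send_raw_tx([1; 32]);
    gw.execute_tx([1; 32], true, 10);
    assert_eq!(gw.sent_tx_count(), 1);
}
```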
af38ac79ae7..08e62c3f4ea 100644 --- a/core/lib/zksync_core/src/eth_watch/client.rs +++ b/core/lib/zksync_core/src/eth_watch/client.rs @@ -1,11 +1,14 @@ +use std::fmt; + use zksync_contracts::verifier_contract; -use zksync_eth_client::{types::Error as EthClientError, EthInterface}; +use zksync_eth_client::{CallFunctionArgs, Error as EthClientError, EthInterface}; use zksync_types::{ ethabi::{Contract, Token}, vk_transform::l1_vk_commitment, web3::{ self, - types::{BlockNumber, FilterBuilder, Log}, + contract::tokens::Detokenize, + types::{BlockId, BlockNumber, FilterBuilder, Log}, }, Address, H256, }; @@ -22,8 +25,14 @@ pub enum Error { InfiniteRecursion, } +impl From for Error { + fn from(err: web3::contract::Error) -> Self { + Self::EthClient(err.into()) + } +} + #[async_trait::async_trait] -pub trait EthClient { +pub trait EthClient: 'static + fmt::Debug + Send + Sync { /// Returns events in a given block range. async fn get_events( &self, @@ -44,8 +53,8 @@ const TOO_MANY_RESULTS_INFURA: &str = "query returned more than"; const TOO_MANY_RESULTS_ALCHEMY: &str = "response size exceeded"; #[derive(Debug)] -pub struct EthHttpQueryClient { - client: E, +pub struct EthHttpQueryClient { + client: Box, topics: Vec, zksync_contract_addr: Address, /// Address of the `Governance` contract. It's optional because it is present only for post-boojum chains. @@ -55,9 +64,9 @@ pub struct EthHttpQueryClient { confirmations_for_eth_event: Option, } -impl EthHttpQueryClient { +impl EthHttpQueryClient { pub fn new( - client: E, + client: Box, zksync_contract_addr: Address, governance_address: Option
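`eth_watch::client` gains a conversion from `web3::contract::Error` into its own `Error`, routed through the `EthClient` variant, so detokenization failures can be propagated with `?`. A self-contained sketch of the pattern with stand-in types:

```rust
// Stand-in for `web3::contract::Error`.
#[derive(Debug)]
struct ContractError(String);

#[derive(Debug)]
enum Error {
    EthClient(String),
}

impl From<ContractError> for Error {
    fn from(err: ContractError) -> Self {
        Self::EthClient(err.0)
    }
}

fn detokenize(raw: &str) -> Result<u64, ContractError> {
    raw.parse().map_err(|_| ContractError("invalid token".into()))
}

fn scheduler_vk_hash(raw: &str) -> Result<u64, Error> {
    // `?` converts the contract error via the `From` impl above.
    Ok(detokenize(raw)?)
}

fn main() {
    assert!(scheduler_vk_hash("42").is_ok());
    assert!(matches!(scheduler_vk_hash("nope"), Err(Error::EthClient(_))));
}
```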
, confirmations_for_eth_event: Option, @@ -101,41 +110,23 @@ impl EthHttpQueryClient { } #[async_trait::async_trait] -impl EthClient for EthHttpQueryClient { +impl EthClient for EthHttpQueryClient { async fn scheduler_vk_hash(&self, verifier_address: Address) -> Result { // This is here for backward compatibility with the old verifier: // Legacy verifier returns the full verification key; // New verifier returns the hash of the verification key. - let vk_hash = self - .client - .call_contract_function( - "verificationKeyHash", - (), - None, - Default::default(), - None, - verifier_address, - self.verifier_contract_abi.clone(), - ) - .await; + let args = CallFunctionArgs::new("verificationKeyHash", ()) + .for_contract(verifier_address, self.verifier_contract_abi.clone()); + let vk_hash_tokens = self.client.call_contract_function(args).await; - if let Ok(Token::FixedBytes(vk_hash)) = vk_hash { - Ok(H256::from_slice(&vk_hash)) + if let Ok(tokens) = vk_hash_tokens { + Ok(H256::from_tokens(tokens)?) } else { - let vk = self - .client - .call_contract_function( - "get_verification_key", - (), - None, - Default::default(), - None, - verifier_address, - self.verifier_contract_abi.clone(), - ) - .await?; - Ok(l1_vk_commitment(vk)) + let args = CallFunctionArgs::new("get_verification_key", ()) + .for_contract(verifier_address, self.verifier_contract_abi.clone()); + let vk = self.client.call_contract_function(args).await?; + Ok(l1_vk_commitment(Token::from_tokens(vk)?)) } } @@ -225,7 +216,7 @@ impl EthClient for EthHttpQueryClient EventProcessor for GovernanceUpgradesEventProcessor { +impl EventProcessor for GovernanceUpgradesEventProcessor { async fn process_events( &mut self, storage: &mut StorageProcessor<'_>, - client: &W, + client: &dyn EthClient, events: Vec, ) -> Result<(), Error> { let mut upgrades = Vec::new(); @@ -65,7 +66,7 @@ impl EventProcessor for GovernanceUpgradesEventProcessor ); continue; }; - // Scheduler VK is not present in proposal event. It is hardcoded in verifier contract. + // Scheduler VK is not present in proposal event. It is hard coded in verifier contract. let scheduler_vk_hash = if let Some(address) = upgrade.verifier_address { Some(client.scheduler_vk_hash(address).await?) 
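`scheduler_vk_hash` keeps its backward-compatibility split: ask the new verifier for `verificationKeyHash` first, and only fall back to fetching the full key and committing to it locally (`l1_vk_commitment`) for the legacy verifier. The shape of that fallback, with toy stand-ins:

```rust
// Stand-in for `l1_vk_commitment`: derive a hash-like value from the full key.
fn commitment(full_key: u64) -> u64 {
    full_key.wrapping_mul(0x9E37_79B9_7F4A_7C15)
}

// `Some` models the new verifier answering `verificationKeyHash` directly;
// `None` models the legacy verifier, where we must derive the hash ourselves.
fn verification_key_hash(new_getter: Option<u64>, full_key: u64) -> u64 {
    match new_getter {
        Some(hash) => hash,
        None => commitment(full_key),
    }
}

fn main() {
    assert_eq!(verification_key_hash(Some(7), 99), 7);
    assert_eq!(verification_key_hash(None, 99), commitment(99));
}
```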
} else { diff --git a/core/lib/zksync_core/src/eth_watch/event_processors/mod.rs b/core/lib/zksync_core/src/eth_watch/event_processors/mod.rs index 84ea1eeb04c..0a068033f2b 100644 --- a/core/lib/zksync_core/src/eth_watch/event_processors/mod.rs +++ b/core/lib/zksync_core/src/eth_watch/event_processors/mod.rs @@ -1,18 +1,21 @@ -use crate::eth_watch::client::{Error, EthClient}; +use std::fmt; + use zksync_dal::StorageProcessor; use zksync_types::{web3::types::Log, H256}; +use crate::eth_watch::client::{Error, EthClient}; + pub mod governance_upgrades; pub mod priority_ops; pub mod upgrades; #[async_trait::async_trait] -pub trait EventProcessor: Send + std::fmt::Debug { +pub trait EventProcessor: 'static + fmt::Debug + Send + Sync { /// Processes given events async fn process_events( &mut self, storage: &mut StorageProcessor<'_>, - client: &W, + client: &dyn EthClient, events: Vec, ) -> Result<(), Error>; diff --git a/core/lib/zksync_core/src/eth_watch/event_processors/priority_ops.rs b/core/lib/zksync_core/src/eth_watch/event_processors/priority_ops.rs index 7da78d68c0b..5b6ed4b9dc3 100644 --- a/core/lib/zksync_core/src/eth_watch/event_processors/priority_ops.rs +++ b/core/lib/zksync_core/src/eth_watch/event_processors/priority_ops.rs @@ -33,11 +33,11 @@ impl PriorityOpsEventProcessor { } #[async_trait::async_trait] -impl EventProcessor for PriorityOpsEventProcessor { +impl EventProcessor for PriorityOpsEventProcessor { async fn process_events( &mut self, storage: &mut StorageProcessor<'_>, - _client: &W, + _client: &dyn EthClient, events: Vec, ) -> Result<(), Error> { let mut priority_ops = Vec::new(); diff --git a/core/lib/zksync_core/src/eth_watch/event_processors/upgrades.rs b/core/lib/zksync_core/src/eth_watch/event_processors/upgrades.rs index 210b540c48e..e7f906cdf07 100644 --- a/core/lib/zksync_core/src/eth_watch/event_processors/upgrades.rs +++ b/core/lib/zksync_core/src/eth_watch/event_processors/upgrades.rs @@ -1,4 +1,5 @@ use std::convert::TryFrom; + use zksync_dal::StorageProcessor; use zksync_types::{web3::types::Log, ProtocolUpgrade, ProtocolVersionId, H256}; @@ -28,11 +29,11 @@ impl UpgradesEventProcessor { } #[async_trait::async_trait] -impl EventProcessor for UpgradesEventProcessor { +impl EventProcessor for UpgradesEventProcessor { async fn process_events( &mut self, storage: &mut StorageProcessor<'_>, - client: &W, + client: &dyn EthClient, events: Vec, ) -> Result<(), Error> { let mut upgrades = Vec::new(); @@ -42,7 +43,7 @@ impl EventProcessor for UpgradesEventProcessor { { let upgrade = ProtocolUpgrade::try_from(event) .map_err(|err| Error::LogParse(format!("{:?}", err)))?; - // Scheduler VK is not present in proposal event. It is hardcoded in verifier contract. + // Scheduler VK is not present in proposal event. It is hard coded in verifier contract. let scheduler_vk_hash = if let Some(address) = upgrade.verifier_address { Some(client.scheduler_vk_hash(address).await?) } else { diff --git a/core/lib/zksync_core/src/eth_watch/metrics.rs b/core/lib/zksync_core/src/eth_watch/metrics.rs index e5166f137ca..c96b8c08483 100644 --- a/core/lib/zksync_core/src/eth_watch/metrics.rs +++ b/core/lib/zksync_core/src/eth_watch/metrics.rs @@ -1,9 +1,9 @@ //! Metrics for Ethereum watcher. 
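Across these files, `EventProcessor` loses its `W: EthClient` type parameter; processors now receive `&dyn EthClient` per call and can be stored heterogeneously as `Vec<Box<dyn EventProcessor>>`. A synchronous stand-in of the dispatch (the real trait is async via `#[async_trait]`):

```rust
use std::fmt;

trait EthClient: fmt::Debug {
    fn get_events(&self) -> Vec<String>;
}

// Non-generic trait: the client arrives as `&dyn EthClient` at call time.
trait EventProcessor: 'static + fmt::Debug + Send + Sync {
    fn process_events(&mut self, client: &dyn EthClient, events: Vec<String>);
}

#[derive(Debug)]
struct PriorityOps;
impl EventProcessor for PriorityOps {
    fn process_events(&mut self, _client: &dyn EthClient, events: Vec<String>) {
        println!("priority ops: {} events", events.len());
    }
}

#[derive(Debug)]
struct Upgrades;
impl EventProcessor for Upgrades {
    fn process_events(&mut self, _client: &dyn EthClient, events: Vec<String>) {
        println!("upgrades: {} events", events.len());
    }
}

#[derive(Debug)]
struct FakeClient;
impl EthClient for FakeClient {
    fn get_events(&self) -> Vec<String> {
        vec!["log".into()]
    }
}

fn main() {
    let client = FakeClient;
    // Different processor types live in one collection, as in `EthWatch`.
    let mut processors: Vec<Box<dyn EventProcessor>> =
        vec![Box::new(PriorityOps), Box::new(Upgrades)];
    let events = client.get_events();
    for processor in processors.iter_mut() {
        processor.process_events(&client, events.clone());
    }
}
```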
-use vise::{Buckets, Counter, EncodeLabelSet, EncodeLabelValue, Family, Histogram, Metrics}; - use std::time::Duration; +use vise::{Buckets, Counter, EncodeLabelSet, EncodeLabelValue, Family, Histogram, Metrics}; + #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, EncodeLabelValue, EncodeLabelSet)] #[metrics(label = "stage", rename_all = "snake_case")] pub(super) enum PollStage { diff --git a/core/lib/zksync_core/src/eth_watch/mod.rs b/core/lib/zksync_core/src/eth_watch/mod.rs index fdb629bce28..5aac8624d47 100644 --- a/core/lib/zksync_core/src/eth_watch/mod.rs +++ b/core/lib/zksync_core/src/eth_watch/mod.rs @@ -4,10 +4,9 @@ //! Poll interval is configured using the `ETH_POLL_INTERVAL` constant. //! Number of confirmations is configured using the `CONFIRMATIONS_FOR_ETH_EVENT` environment variable. -use tokio::{sync::watch, task::JoinHandle}; - use std::time::Duration; +use tokio::{sync::watch, task::JoinHandle}; use zksync_config::ETHWatchConfig; use zksync_dal::{ConnectionPool, StorageProcessor}; use zksync_eth_client::EthInterface; @@ -17,12 +16,6 @@ use zksync_types::{ ProtocolVersionId, }; -mod client; -mod event_processors; -mod metrics; -#[cfg(test)] -mod tests; - use self::{ client::{Error, EthClient, EthHttpQueryClient, RETRY_LIMIT}, event_processors::{ @@ -32,6 +25,12 @@ use self::{ metrics::{PollStage, METRICS}, }; +mod client; +mod event_processors; +mod metrics; +#[cfg(test)] +mod tests; + #[derive(Debug)] struct EthWatchState { last_seen_version_id: ProtocolVersionId, @@ -40,32 +39,32 @@ struct EthWatchState { } #[derive(Debug)] -pub struct EthWatch { - client: W, +pub struct EthWatch { + client: Box, poll_interval: Duration, - event_processors: Vec>>, + event_processors: Vec>, last_processed_ethereum_block: u64, } -impl EthWatch { +impl EthWatch { pub async fn new( diamond_proxy_address: Address, governance_contract: Option, - mut client: W, + mut client: Box, pool: &ConnectionPool, poll_interval: Duration, ) -> Self { let mut storage = pool.access_storage_tagged("eth_watch").await.unwrap(); - let state = Self::initialize_state(&client, &mut storage).await; + let state = Self::initialize_state(&*client, &mut storage).await; tracing::info!("initialized state: {:?}", state); let priority_ops_processor = PriorityOpsEventProcessor::new(state.next_expected_priority_id); let upgrades_processor = UpgradesEventProcessor::new(state.last_seen_version_id); - let mut event_processors: Vec>> = vec![ + let mut event_processors: Vec> = vec![ Box::new(priority_ops_processor), Box::new(upgrades_processor), ]; @@ -93,7 +92,10 @@ impl EthWatch { } } - async fn initialize_state(client: &W, storage: &mut StorageProcessor<'_>) -> EthWatchState { + async fn initialize_state( + client: &dyn EthClient, + storage: &mut StorageProcessor<'_>, + ) -> EthWatchState { let next_expected_priority_id: PriorityOpId = storage .transactions_dal() .last_priority_id() @@ -150,7 +152,7 @@ impl EthWatch { // thus entering priority mode, which is not desired. 
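On a failed iteration, the watcher re-derives `last_processed_ethereum_block` from storage via `initialize_state` rather than advancing past unprocessed events; the comment in the hunk below warns that advancing could leave priority operations unprocessed. A sketch of that recovery shape, with stand-in functions:

```rust
struct Watcher {
    last_processed_block: u64,
}

// Stand-in for `initialize_state(..).last_processed_ethereum_block`.
fn initialize_cursor_from_storage() -> u64 {
    42
}

// Stand-in for one polling iteration; here it always fails.
fn process_new_blocks(_from: u64) -> Result<u64, String> {
    Err("transient RPC failure".into())
}

fn main() {
    let mut watcher = Watcher {
        last_processed_block: 42,
    };
    match process_new_blocks(watcher.last_processed_block) {
        Ok(next) => watcher.last_processed_block = next,
        Err(err) => {
            eprintln!("Failed to process new blocks {err}");
            // Roll the cursor back to whatever storage says was fully processed.
            watcher.last_processed_block = initialize_cursor_from_storage();
        }
    }
    assert_eq!(watcher.last_processed_block, 42);
}
```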
tracing::error!("Failed to process new blocks {}", error); self.last_processed_ethereum_block = - Self::initialize_state(&self.client, &mut storage) + Self::initialize_state(&*self.client, &mut storage) .await .last_processed_ethereum_block; } @@ -178,7 +180,7 @@ impl EthWatch { for processor in self.event_processors.iter_mut() { processor - .process_events(storage, &self.client, events.clone()) + .process_events(storage, &*self.client, events.clone()) .await?; } self.last_processed_ethereum_block = to_block; @@ -186,10 +188,10 @@ impl EthWatch { } } -pub async fn start_eth_watch( +pub async fn start_eth_watch( config: ETHWatchConfig, pool: ConnectionPool, - eth_gateway: E, + eth_gateway: Box, diamond_proxy_addr: Address, governance: (Contract, Address), stop_receiver: watch::Receiver, @@ -204,7 +206,7 @@ pub async fn start_eth_watch( let mut eth_watch = EthWatch::new( diamond_proxy_addr, Some(governance.0), - eth_client, + Box::new(eth_client), &pool, config.poll_interval(), ) diff --git a/core/lib/zksync_core/src/eth_watch/tests.rs b/core/lib/zksync_core/src/eth_watch/tests.rs index d7627a56c13..d606a15107c 100644 --- a/core/lib/zksync_core/src/eth_watch/tests.rs +++ b/core/lib/zksync_core/src/eth_watch/tests.rs @@ -1,17 +1,13 @@ -use std::collections::HashMap; -use std::convert::TryInto; -use std::sync::Arc; +use std::{collections::HashMap, convert::TryInto, sync::Arc}; use tokio::sync::RwLock; - use zksync_contracts::{governance_contract, zksync_contract}; use zksync_dal::{ConnectionPool, StorageProcessor}; -use zksync_types::protocol_version::{ProtocolUpgradeTx, ProtocolUpgradeTxCommonData}; -use zksync_types::web3::types::{Address, BlockNumber}; use zksync_types::{ ethabi::{encode, Hash, Token}, l1::{L1Tx, OpProcessingType, PriorityQueueType}, - web3::types::Log, + protocol_version::{ProtocolUpgradeTx, ProtocolUpgradeTxCommonData}, + web3::types::{Address, BlockNumber, Log}, Execute, L1TxCommonData, PriorityOpId, ProtocolUpgrade, ProtocolVersion, ProtocolVersionId, Transaction, H256, U256, }; @@ -21,6 +17,7 @@ use crate::eth_watch::{ client::EthClient, event_processors::upgrades::UPGRADE_PROPOSAL_SIGNATURE, EthWatch, }; +#[derive(Debug)] struct FakeEthClientData { transactions: HashMap>, diamond_upgrades: HashMap>, @@ -71,7 +68,7 @@ impl FakeEthClientData { } } -#[derive(Clone)] +#[derive(Debug, Clone)] struct FakeEthClient { inner: Arc>, } @@ -212,7 +209,7 @@ async fn test_normal_operation_l1_txs() { let mut watcher = EthWatch::new( Address::default(), None, - client.clone(), + Box::new(client.clone()), &connection_pool, std::time::Duration::from_nanos(1), ) @@ -260,7 +257,7 @@ async fn test_normal_operation_upgrades() { let mut watcher = EthWatch::new( Address::default(), None, - client.clone(), + Box::new(client.clone()), &connection_pool, std::time::Duration::from_nanos(1), ) @@ -321,7 +318,7 @@ async fn test_gap_in_upgrades() { let mut watcher = EthWatch::new( Address::default(), None, - client.clone(), + Box::new(client.clone()), &connection_pool, std::time::Duration::from_nanos(1), ) @@ -360,7 +357,7 @@ async fn test_normal_operation_governance_upgrades() { let mut watcher = EthWatch::new( Address::default(), Some(governance_contract()), - client.clone(), + Box::new(client.clone()), &connection_pool, std::time::Duration::from_nanos(1), ) @@ -422,7 +419,7 @@ async fn test_gap_in_single_batch() { let mut watcher = EthWatch::new( Address::default(), None, - client.clone(), + Box::new(client.clone()), &connection_pool, std::time::Duration::from_nanos(1), ) @@ -452,7 +449,7 @@ 
async fn test_gap_between_batches() { let mut watcher = EthWatch::new( Address::default(), None, - client.clone(), + Box::new(client.clone()), &connection_pool, std::time::Duration::from_nanos(1), ) @@ -487,7 +484,7 @@ async fn test_overlapping_batches() { let mut watcher = EthWatch::new( Address::default(), None, - client.clone(), + Box::new(client.clone()), &connection_pool, std::time::Duration::from_nanos(1), ) diff --git a/core/lib/zksync_core/src/fee_model.rs b/core/lib/zksync_core/src/fee_model.rs new file mode 100644 index 00000000000..d84546da602 --- /dev/null +++ b/core/lib/zksync_core/src/fee_model.rs @@ -0,0 +1,392 @@ +use std::{fmt, sync::Arc}; + +use zksync_types::{ + fee_model::{ + BatchFeeInput, FeeModelConfig, FeeModelConfigV2, FeeParams, FeeParamsV1, FeeParamsV2, + L1PeggedBatchFeeModelInput, PubdataIndependentBatchFeeModelInput, + }, + U256, +}; +use zksync_utils::ceil_div_u256; + +use crate::l1_gas_price::L1GasPriceProvider; + +/// Trait responsible for providing fee info for a batch +pub trait BatchFeeModelInputProvider: fmt::Debug + 'static + Send + Sync { + /// Returns the batch fee with scaling applied. This may be used to account for the fact that the L1 gas and pubdata prices may fluctuate, esp. + /// in API methods that should return values that are valid for some period of time after the estimation was done. + fn get_batch_fee_input_scaled( + &self, + l1_gas_price_scale_factor: f64, + l1_pubdata_price_scale_factor: f64, + ) -> BatchFeeInput { + let params = self.get_fee_model_params(); + + match params { + FeeParams::V1(params) => BatchFeeInput::L1Pegged(compute_batch_fee_model_input_v1( + params, + l1_gas_price_scale_factor, + )), + FeeParams::V2(params) => { + BatchFeeInput::PubdataIndependent(compute_batch_fee_model_input_v2( + params, + l1_gas_price_scale_factor, + l1_pubdata_price_scale_factor, + )) + } + } + } + + /// Returns the batch fee input as-is, i.e. without any scaling for the L1 gas and pubdata prices. + fn get_batch_fee_input(&self) -> BatchFeeInput { + self.get_batch_fee_input_scaled(1.0, 1.0) + } + + /// Returns the fee model parameters. + fn get_fee_model_params(&self) -> FeeParams; + + fn get_erc20_conversion_rate(&self) -> u64; +} + +/// The struct that represents the batch fee input provider to be used in the main node of the server, i.e. +/// it explicitly gets the L1 gas price from the provider and uses it to calculate the batch fee input instead of getting +/// it from other node. +#[derive(Debug)] +pub(crate) struct MainNodeFeeInputProvider { + provider: Arc, + config: FeeModelConfig, +} + +impl BatchFeeModelInputProvider for MainNodeFeeInputProvider { + fn get_fee_model_params(&self) -> FeeParams { + match self.config { + FeeModelConfig::V1(config) => FeeParams::V1(FeeParamsV1 { + config, + l1_gas_price: self.provider.estimate_effective_gas_price(), + }), + FeeModelConfig::V2(config) => FeeParams::V2(FeeParamsV2 { + config, + l1_gas_price: self.provider.estimate_effective_gas_price(), + l1_pubdata_price: self.provider.estimate_effective_pubdata_price(), + }), + } + } + + fn get_erc20_conversion_rate(&self) -> u64 { + self.provider.get_erc20_conversion_rate() + } +} + +impl MainNodeFeeInputProvider { + pub(crate) fn new(provider: Arc, config: FeeModelConfig) -> Self { + Self { provider, config } + } +} + +/// Calculates the batch fee input based on the main node parameters. +/// This function uses the `V1` fee model, i.e. where the pubdata price does not include the proving costs. 
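The new `fee_model.rs` centers on `BatchFeeModelInputProvider`: implementations only supply raw `FeeParams` (for the main node, from an `Arc`-held gas price provider, presumably `Arc<dyn L1GasPriceProvider>`), while scaling and the unscaled entry point are default methods shared by every implementation. A simplified, self-contained rendition of that trait layout:

```rust
// Simplified stand-ins for `FeeParams` / `BatchFeeInput`.
#[derive(Debug, Clone, Copy)]
struct FeeParams {
    l1_gas_price: u64,
    minimal_l2_gas_price: u64,
}

#[derive(Debug, Clone, Copy, PartialEq)]
struct BatchFeeInput {
    l1_gas_price: u64,
    fair_l2_gas_price: u64,
}

trait BatchFeeModelInputProvider {
    // The only required method: raw parameters, no scaling.
    fn get_fee_model_params(&self) -> FeeParams;

    // Scaling hedges against price movement between estimation and sealing.
    fn get_batch_fee_input_scaled(&self, l1_gas_price_scale_factor: f64) -> BatchFeeInput {
        let params = self.get_fee_model_params();
        BatchFeeInput {
            l1_gas_price: (params.l1_gas_price as f64 * l1_gas_price_scale_factor) as u64,
            fair_l2_gas_price: params.minimal_l2_gas_price,
        }
    }

    // The common entry point is just "scaled by 1.0".
    fn get_batch_fee_input(&self) -> BatchFeeInput {
        self.get_batch_fee_input_scaled(1.0)
    }
}

struct FixedProvider;

impl BatchFeeModelInputProvider for FixedProvider {
    fn get_fee_model_params(&self) -> FeeParams {
        FeeParams {
            l1_gas_price: 1_000_000_000,
            minimal_l2_gas_price: 100_000_000,
        }
    }
}

fn main() {
    let provider = FixedProvider;
    assert_eq!(provider.get_batch_fee_input().l1_gas_price, 1_000_000_000);
    assert_eq!(provider.get_batch_fee_input().fair_l2_gas_price, 100_000_000);
    assert_eq!(provider.get_batch_fee_input_scaled(2.0).l1_gas_price, 2_000_000_000);
}
```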
+fn compute_batch_fee_model_input_v1(
+    params: FeeParamsV1,
+    l1_gas_price_scale_factor: f64,
+) -> L1PeggedBatchFeeModelInput {
+    let l1_gas_price = (params.l1_gas_price as f64 * l1_gas_price_scale_factor) as u64;
+
+    L1PeggedBatchFeeModelInput {
+        l1_gas_price,
+        fair_l2_gas_price: params.config.minimal_l2_gas_price,
+    }
+}
+
+/// Calculates the batch fee input based on the main node parameters.
+/// This function uses the `V2` fee model, i.e. where the pubdata price does not include the proving costs.
+fn compute_batch_fee_model_input_v2(
+    params: FeeParamsV2,
+    l1_gas_price_scale_factor: f64,
+    l1_pubdata_price_scale_factor: f64,
+) -> PubdataIndependentBatchFeeModelInput {
+    let FeeParamsV2 {
+        config,
+        l1_gas_price,
+        l1_pubdata_price,
+    } = params;
+
+    let FeeModelConfigV2 {
+        minimal_l2_gas_price,
+        compute_overhead_part,
+        pubdata_overhead_part,
+        batch_overhead_l1_gas,
+        max_gas_per_batch,
+        max_pubdata_per_batch,
+    } = config;
+
+    // Firstly, we scale the gas price and pubdata price if needed.
+    let l1_gas_price = (l1_gas_price as f64 * l1_gas_price_scale_factor) as u64;
+    let l1_pubdata_price = (l1_pubdata_price as f64 * l1_pubdata_price_scale_factor) as u64;
+
+    // While the final results of the calculations are not expected to have any overflows, the intermediate computations
+    // might, so we use U256 for them.
+    let l1_batch_overhead_wei = U256::from(l1_gas_price) * U256::from(batch_overhead_l1_gas);
+
+    let fair_l2_gas_price = {
+        // Firstly, we calculate which part of the overall overhead each unit of L2 gas should cover.
+        let l1_batch_overhead_per_gas =
+            ceil_div_u256(l1_batch_overhead_wei, U256::from(max_gas_per_batch));
+
+        // Then, we multiply by the `compute_overhead_part` to get the computation overhead per unit of gas.
+        // Also, this means that if we almost never close batches because of compute, the `compute_overhead_part` should be zero and so
+        // it is possible that the computation costs include no overhead.
+        let gas_overhead_wei =
+            (l1_batch_overhead_per_gas.as_u64() as f64 * compute_overhead_part) as u64;
+
+        // We sum up the minimal L2 gas price (i.e. the raw prover/compute cost of a single L2 gas) and the overhead for the batch being closed.
+        minimal_l2_gas_price + gas_overhead_wei
+    };
+
+    let fair_pubdata_price = {
+        // Firstly, we calculate which part of the overall overhead each pubdata byte should cover.
+        let l1_batch_overhead_per_pubdata =
+            ceil_div_u256(l1_batch_overhead_wei, U256::from(max_pubdata_per_batch));
+
+        // Then, we multiply by the `pubdata_overhead_part` to get the overhead for each pubdata byte.
+        // Also, this means that if we almost never close batches because of pubdata, the `pubdata_overhead_part` should be zero and so
+        // it is possible that the pubdata costs include no overhead.
+        let pubdata_overhead_wei =
+            (l1_batch_overhead_per_pubdata.as_u64() as f64 * pubdata_overhead_part) as u64;
+
+        // We sum up the raw L1 pubdata price (i.e. the expected price of publishing a single pubdata byte) and the overhead for the batch being closed.
+        l1_pubdata_price + pubdata_overhead_wei
+    };
+
+    PubdataIndependentBatchFeeModelInput {
+        l1_gas_price,
+        fair_l2_gas_price,
+        fair_pubdata_price,
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    // To test that overflow never happens, we'll use a giant L1 gas price, i.e.
+    // an almost realistic, very large value of 100k gwei. Since it is so large, we'll also
+    // use it for the L1 pubdata price.
+    const GIANT_L1_GAS_PRICE: u64 = 100_000_000_000_000;
+
+    // As a small L2 gas price, we'll use the value of 1 wei.
+    const SMALL_L1_GAS_PRICE: u64 = 1;
+
+    #[test]
+    fn test_compute_batch_fee_model_input_v2_giant_numbers() {
+        let config = FeeModelConfigV2 {
+            minimal_l2_gas_price: GIANT_L1_GAS_PRICE,
+            // We generally don't expect those values to be larger than 1. Still, in theory the operator
+            // may need to set higher values in extreme cases.
+            compute_overhead_part: 5.0,
+            pubdata_overhead_part: 5.0,
+            // The batch overhead would likely never grow beyond that
+            batch_overhead_l1_gas: 1_000_000,
+            // Let's imagine that for some reason the limit is relatively small
+            max_gas_per_batch: 50_000_000,
+            // The pubdata will likely never go below that
+            max_pubdata_per_batch: 100_000,
+        };
+
+        let params = FeeParamsV2 {
+            config,
+            l1_gas_price: GIANT_L1_GAS_PRICE,
+            l1_pubdata_price: GIANT_L1_GAS_PRICE,
+        };
+
+        // We'll use a scale factor of 3.0
+        let input = compute_batch_fee_model_input_v2(params, 3.0, 3.0);
+
+        assert_eq!(input.l1_gas_price, GIANT_L1_GAS_PRICE * 3);
+        assert_eq!(input.fair_l2_gas_price, 130_000_000_000_000);
+        assert_eq!(input.fair_pubdata_price, 15_300_000_000_000_000);
+    }
+
+    #[test]
+    fn test_compute_batch_fee_model_input_v2_small_numbers() {
+        // Here we assume that the operator wants to make things as cheap as possible for users.
+        let config = FeeModelConfigV2 {
+            minimal_l2_gas_price: SMALL_L1_GAS_PRICE,
+            compute_overhead_part: 0.0,
+            pubdata_overhead_part: 0.0,
+            batch_overhead_l1_gas: 0,
+            max_gas_per_batch: 50_000_000,
+            max_pubdata_per_batch: 100_000,
+        };
+
+        let params = FeeParamsV2 {
+            config,
+            l1_gas_price: SMALL_L1_GAS_PRICE,
+            l1_pubdata_price: SMALL_L1_GAS_PRICE,
+        };
+
+        let input = compute_batch_fee_model_input_v2(params, 1.0, 1.0);
+
+        assert_eq!(input.l1_gas_price, SMALL_L1_GAS_PRICE);
+        assert_eq!(input.fair_l2_gas_price, SMALL_L1_GAS_PRICE);
+        assert_eq!(input.fair_pubdata_price, SMALL_L1_GAS_PRICE);
+    }
+
+    #[test]
+    fn test_compute_batch_fee_model_input_v2_only_pubdata_overhead() {
+        // Here we use a sensible config, but one where only pubdata is used to close the batch
+        let config = FeeModelConfigV2 {
+            minimal_l2_gas_price: 100_000_000_000,
+            compute_overhead_part: 0.0,
+            pubdata_overhead_part: 1.0,
+            batch_overhead_l1_gas: 700_000,
+            max_gas_per_batch: 500_000_000,
+            max_pubdata_per_batch: 100_000,
+        };
+
+        let params = FeeParamsV2 {
+            config,
+            l1_gas_price: GIANT_L1_GAS_PRICE,
+            l1_pubdata_price: GIANT_L1_GAS_PRICE,
+        };
+
+        let input = compute_batch_fee_model_input_v2(params, 1.0, 1.0);
+        assert_eq!(input.l1_gas_price, GIANT_L1_GAS_PRICE);
+        // The fair L2 gas price is identical to the minimal one.
+        assert_eq!(input.fair_l2_gas_price, 100_000_000_000);
+        // The fair pubdata price is the minimal one plus the overhead.
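The expected values in these tests can be reproduced by hand from `compute_batch_fee_model_input_v2`. A runnable check of the `giant_numbers` assertions above and of the `800_000_000_000_000` pubdata expectation asserted just below this note:

```rust
// Hand-check of the fee-model test expectations, following the v2 formula step
// by step. `u128` stands in for the `U256` the real code uses to dodge
// intermediate overflow; `div_ceil` mirrors `ceil_div_u256`.
fn main() {
    // `giant_numbers`: both prices are 100k gwei, both scale factors are 3.0.
    let giant: u128 = 100_000_000_000_000;
    let scaled = giant * 3;
    let overhead_wei = scaled * 1_000_000; // batch_overhead_l1_gas; 3 * 10^20 overflows u64
    let per_gas = overhead_wei.div_ceil(50_000_000); // max_gas_per_batch -> 6 * 10^12
    let gas_overhead = (per_gas as f64 * 5.0) as u128; // compute_overhead_part = 5.0
    assert_eq!(giant + gas_overhead, 130_000_000_000_000); // fair_l2_gas_price
    let per_pubdata = overhead_wei.div_ceil(100_000); // max_pubdata_per_batch -> 3 * 10^15
    let pubdata_overhead = (per_pubdata as f64 * 5.0) as u128; // pubdata_overhead_part = 5.0
    assert_eq!(scaled + pubdata_overhead, 15_300_000_000_000_000); // fair_pubdata_price

    // `only_pubdata_overhead`: unscaled giant price, 700k overhead gas, part 1.0.
    let overhead_wei = giant * 700_000;
    let per_pubdata = overhead_wei.div_ceil(100_000); // 7 * 10^14
    assert_eq!(giant + per_pubdata, 800_000_000_000_000); // fair_pubdata_price
}
```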
+        assert_eq!(input.fair_pubdata_price, 800_000_000_000_000);
+    }
+
+    #[test]
+    fn test_compute_batch_fee_model_input_v2_only_compute_overhead() {
+        // Here we use a sensible config, but one where only compute is used to close the batch
+        let config = FeeModelConfigV2 {
+            minimal_l2_gas_price: 100_000_000_000,
+            compute_overhead_part: 1.0,
+            pubdata_overhead_part: 0.0,
+            batch_overhead_l1_gas: 700_000,
+            max_gas_per_batch: 500_000_000,
+            max_pubdata_per_batch: 100_000,
+        };
+
+        let params = FeeParamsV2 {
+            config,
+            l1_gas_price: GIANT_L1_GAS_PRICE,
+            l1_pubdata_price: GIANT_L1_GAS_PRICE,
+        };
+
+        let input = compute_batch_fee_model_input_v2(params, 1.0, 1.0);
+        assert_eq!(input.l1_gas_price, GIANT_L1_GAS_PRICE);
+        // The fair L2 gas price is identical to the minimal one, plus the overhead
+        assert_eq!(input.fair_l2_gas_price, 240_000_000_000);
+        // The fair pubdata price is equal to the original one.
+        assert_eq!(input.fair_pubdata_price, GIANT_L1_GAS_PRICE);
+    }
+
+    #[test]
+    fn test_compute_batch_fee_model_input_v2_param_tweaking() {
+        // In this test we check that each param behaves as expected
+        let base_config = FeeModelConfigV2 {
+            minimal_l2_gas_price: 100_000_000_000,
+            compute_overhead_part: 0.5,
+            pubdata_overhead_part: 0.5,
+            batch_overhead_l1_gas: 700_000,
+            max_gas_per_batch: 500_000_000,
+            max_pubdata_per_batch: 100_000,
+        };
+
+        let base_params = FeeParamsV2 {
+            config: base_config,
+            l1_gas_price: 1_000_000_000,
+            l1_pubdata_price: 1_000_000_000,
+        };
+
+        let base_input = compute_batch_fee_model_input_v2(base_params, 1.0, 1.0);
+
+        let base_input_larger_l1_gas_price = compute_batch_fee_model_input_v2(
+            FeeParamsV2 {
+                l1_gas_price: base_params.l1_gas_price * 2,
+                ..base_params
+            },
+            1.0,
+            1.0,
+        );
+        let base_input_scaled_l1_gas_price =
+            compute_batch_fee_model_input_v2(base_params, 2.0, 1.0);
+        assert_eq!(
+            base_input_larger_l1_gas_price, base_input_scaled_l1_gas_price,
+            "Scaling has the correct effect for the L1 gas price"
+        );
+        assert!(
+            base_input.fair_l2_gas_price < base_input_larger_l1_gas_price.fair_l2_gas_price,
+            "L1 gas price increase raises L2 gas price"
+        );
+        assert!(
+            base_input.fair_pubdata_price < base_input_larger_l1_gas_price.fair_pubdata_price,
+            "L1 gas price increase raises pubdata price"
+        );
+
+        let base_input_larger_pubdata_price = compute_batch_fee_model_input_v2(
+            FeeParamsV2 {
+                l1_pubdata_price: base_params.l1_pubdata_price * 2,
+                ..base_params
+            },
+            1.0,
+            1.0,
+        );
+        let base_input_scaled_pubdata_price =
+            compute_batch_fee_model_input_v2(base_params, 1.0, 2.0);
+        assert_eq!(
+            base_input_larger_pubdata_price, base_input_scaled_pubdata_price,
+            "Scaling has the correct effect for the pubdata price"
+        );
+        assert_eq!(
+            base_input.fair_l2_gas_price, base_input_larger_pubdata_price.fair_l2_gas_price,
+            "L1 pubdata increase has no effect on L2 gas price"
+        );
+        assert!(
+            base_input.fair_pubdata_price < base_input_larger_pubdata_price.fair_pubdata_price,
+            "Pubdata price increase raises pubdata price"
+        );
+
+        let base_input_larger_max_gas = compute_batch_fee_model_input_v2(
+            FeeParamsV2 {
+                config: FeeModelConfigV2 {
+                    max_gas_per_batch: base_config.max_gas_per_batch * 2,
+                    ..base_config
+                },
+                ..base_params
+            },
+            1.0,
+            1.0,
+        );
+        assert!(
+            base_input.fair_l2_gas_price > base_input_larger_max_gas.fair_l2_gas_price,
+            "Max gas increase lowers L2 gas price"
+        );
+        assert_eq!(
+            base_input.fair_pubdata_price, base_input_larger_max_gas.fair_pubdata_price,
+            "Max gas increase has no effect on pubdata price"
+        );
+
+        let
base_input_larger_max_pubdata = compute_batch_fee_model_input_v2( + FeeParamsV2 { + config: FeeModelConfigV2 { + max_pubdata_per_batch: base_config.max_pubdata_per_batch * 2, + ..base_config + }, + ..base_params + }, + 1.0, + 1.0, + ); + assert_eq!( + base_input.fair_l2_gas_price, base_input_larger_max_pubdata.fair_l2_gas_price, + "Max pubdata increase has no effect on L2 gas price" + ); + assert!( + base_input.fair_pubdata_price > base_input_larger_max_pubdata.fair_pubdata_price, + "Max pubdata increase lowers pubdata price" + ); + } +} diff --git a/core/lib/zksync_core/src/gas_tracker/constants.rs b/core/lib/zksync_core/src/gas_tracker/constants.rs index 4eb9475cb0f..00c96486a72 100644 --- a/core/lib/zksync_core/src/gas_tracker/constants.rs +++ b/core/lib/zksync_core/src/gas_tracker/constants.rs @@ -1,5 +1,5 @@ -// Currently, every AGGR_* cost is overestimated, -// so there are safety margins around 100_000 -- 200_000 +// Currently, every `AGGR_* cost` is overestimated, +// so there are safety margins around `100_000 -- 200_000` pub(super) const AGGR_L1_BATCH_COMMIT_BASE_COST: u32 = 242_000; pub(super) const AGGR_L1_BATCH_PROVE_BASE_COST: u32 = 1_000_000; diff --git a/core/lib/zksync_core/src/genesis.rs b/core/lib/zksync_core/src/genesis.rs index 39a8645767d..54989bab93e 100644 --- a/core/lib/zksync_core/src/genesis.rs +++ b/core/lib/zksync_core/src/genesis.rs @@ -3,15 +3,14 @@ //! setups the required databases, and outputs the data required to initialize a smart contract. use anyhow::Context as _; - +use multivm::utils::get_max_gas_per_pubdata_byte; use zksync_contracts::BaseSystemContracts; use zksync_dal::StorageProcessor; use zksync_merkle_tree::domain::ZkSyncTree; - use zksync_types::{ - block::DeployedContract, - block::{legacy_miniblock_hash, BlockGasCount, L1BatchHeader, MiniblockHeader}, + block::{BlockGasCount, DeployedContract, L1BatchHeader, MiniblockHasher, MiniblockHeader}, commitment::{L1BatchCommitment, L1BatchMetadata}, + fee_model::BatchFeeInput, get_code_key, get_system_context_init_logs, protocol_version::{L1VerifierConfig, ProtocolVersion}, tokens::{TokenInfo, TokenMetadata, ETHEREUM_ADDRESS}, @@ -19,8 +18,7 @@ use zksync_types::{ AccountTreeId, Address, L1BatchNumber, L2ChainId, LogQuery, MiniblockNumber, ProtocolVersionId, StorageKey, StorageLog, StorageLogKind, Timestamp, H256, }; -use zksync_utils::{be_words_to_bytes, h256_to_u256}; -use zksync_utils::{bytecode::hash_bytecode, u256_to_h256}; +use zksync_utils::{be_words_to_bytes, bytecode::hash_bytecode, h256_to_u256, u256_to_h256}; use crate::metadata_calculator::L1BatchWithLogs; @@ -111,7 +109,7 @@ pub async fn ensure_genesis_state( vec![], H256::zero(), H256::zero(), - protocol_version.is_pre_boojum(), + *protocol_version, ); save_genesis_l1_batch_metadata( @@ -287,21 +285,21 @@ pub(crate) async fn create_genesis_l1_batch( 0, first_validator_address, base_system_contracts.hashes(), - ProtocolVersionId::latest(), + protocol_version, ); genesis_l1_batch_header.is_finished = true; let genesis_miniblock_header = MiniblockHeader { number: MiniblockNumber(0), timestamp: 0, - hash: legacy_miniblock_hash(MiniblockNumber(0)), + hash: MiniblockHasher::legacy_hash(MiniblockNumber(0)), l1_tx_count: 0, l2_tx_count: 0, base_fee_per_gas: 0, - l1_gas_price: 0, - l2_fair_gas_price: 0, + gas_per_pubdata_limit: get_max_gas_per_pubdata_byte(protocol_version.into()), + batch_fee_input: BatchFeeInput::l1_pegged(0, 0), base_system_contracts_hashes: base_system_contracts.hashes(), - protocol_version: 
Some(ProtocolVersionId::latest()), + protocol_version: Some(protocol_version), virtual_blocks: 0, }; @@ -319,6 +317,7 @@ pub(crate) async fn create_genesis_l1_batch( BlockGasCount::default(), &[], &[], + 0, ) .await .unwrap(); @@ -441,7 +440,7 @@ mod tests { #[tokio::test] async fn running_genesis_with_big_chain_id() { let pool = ConnectionPool::test_pool().await; - let mut conn: StorageProcessor<'_> = pool.access_storage().await.unwrap(); + let mut conn = pool.access_storage().await.unwrap(); conn.blocks_dal().delete_genesis().await.unwrap(); let params = GenesisParams { @@ -464,4 +463,19 @@ mod tests { let root_hash = metadata.unwrap().unwrap().metadata.root_hash; assert_ne!(root_hash, H256::zero()); } + + #[tokio::test] + async fn running_genesis_with_non_latest_protocol_version() { + let pool = ConnectionPool::test_pool().await; + let mut conn = pool.access_storage().await.unwrap(); + let params = GenesisParams { + protocol_version: ProtocolVersionId::Version10, + ..GenesisParams::mock() + }; + + ensure_genesis_state(&mut conn, L2ChainId::max(), ¶ms) + .await + .unwrap(); + assert!(!conn.blocks_dal().is_genesis_needed().await.unwrap()); + } } diff --git a/core/lib/zksync_core/src/house_keeper/blocks_state_reporter.rs b/core/lib/zksync_core/src/house_keeper/blocks_state_reporter.rs index 6ba94cbac6d..695c2008e13 100644 --- a/core/lib/zksync_core/src/house_keeper/blocks_state_reporter.rs +++ b/core/lib/zksync_core/src/house_keeper/blocks_state_reporter.rs @@ -1,10 +1,11 @@ use async_trait::async_trait; - use zksync_dal::ConnectionPool; -use zksync_prover_utils::periodic_job::PeriodicJob; use zksync_utils::time::seconds_since_epoch; -use crate::metrics::{BlockL1Stage, BlockStage, L1StageLatencyLabel, APP_METRICS}; +use crate::{ + house_keeper::periodic_job::PeriodicJob, + metrics::{BlockL1Stage, BlockStage, L1StageLatencyLabel, APP_METRICS}, +}; #[derive(Debug)] pub struct L1BatchMetricsReporter { @@ -21,30 +22,25 @@ impl L1BatchMetricsReporter { } async fn report_metrics(&self) { + let mut block_metrics = vec![]; let mut conn = self.connection_pool.access_storage().await.unwrap(); - let mut block_metrics = vec![ - ( - conn.blocks_dal() - .get_sealed_l1_batch_number() - .await - .unwrap(), - BlockStage::Sealed, - ), - ( - conn.blocks_dal() - .get_last_l1_batch_number_with_metadata() - .await - .unwrap(), - BlockStage::MetadataCalculated, - ), - ( - conn.blocks_dal() - .get_last_l1_batch_number_with_witness_inputs() - .await - .unwrap(), - BlockStage::MerkleProofCalculated, - ), - ]; + let last_l1_batch = conn + .blocks_dal() + .get_sealed_l1_batch_number() + .await + .unwrap(); + if let Some(number) = last_l1_batch { + block_metrics.push((number, BlockStage::Sealed)); + } + + let last_l1_batch_with_metadata = conn + .blocks_dal() + .get_last_l1_batch_number_with_metadata() + .await + .unwrap(); + if let Some(number) = last_l1_batch_with_metadata { + block_metrics.push((number, BlockStage::MetadataCalculated)); + } let eth_stats = conn.eth_sender_dal().get_eth_l1_batches().await.unwrap(); for (tx_type, l1_batch) in eth_stats.saved { diff --git a/core/lib/zksync_core/src/house_keeper/fri_proof_compressor_job_retry_manager.rs b/core/lib/zksync_core/src/house_keeper/fri_proof_compressor_job_retry_manager.rs index fc26524e992..7cf2c231b67 100644 --- a/core/lib/zksync_core/src/house_keeper/fri_proof_compressor_job_retry_manager.rs +++ b/core/lib/zksync_core/src/house_keeper/fri_proof_compressor_job_retry_manager.rs @@ -2,7 +2,8 @@ use std::time::Duration; use async_trait::async_trait; use 
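The genesis hunks above thread the configured protocol version through both the L1 batch header and the miniblock header instead of pinning `ProtocolVersionId::latest()`, which is what the new `running_genesis_with_non_latest_protocol_version` test exercises. A minimal sketch of the difference, with stand-in types:

```rust
// Stand-in protocol versions; `Version18` plays the role of "latest".
#[derive(Debug, Clone, Copy, PartialEq)]
enum ProtocolVersionId {
    Version10,
    Version18,
}

impl ProtocolVersionId {
    fn latest() -> Self {
        Self::Version18
    }
}

struct MiniblockHeader {
    protocol_version: Option<ProtocolVersionId>,
}

fn genesis_miniblock(protocol_version: ProtocolVersionId) -> MiniblockHeader {
    // Previously this was `Some(ProtocolVersionId::latest())` regardless of config.
    MiniblockHeader {
        protocol_version: Some(protocol_version),
    }
}

fn main() {
    let header = genesis_miniblock(ProtocolVersionId::Version10);
    assert_eq!(header.protocol_version, Some(ProtocolVersionId::Version10));
    assert_ne!(header.protocol_version, Some(ProtocolVersionId::latest()));
}
```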
zksync_dal::ConnectionPool; -use zksync_prover_utils::periodic_job::PeriodicJob; + +use crate::house_keeper::periodic_job::PeriodicJob; #[derive(Debug)] pub struct FriProofCompressorJobRetryManager { diff --git a/core/lib/zksync_core/src/house_keeper/fri_proof_compressor_queue_monitor.rs b/core/lib/zksync_core/src/house_keeper/fri_proof_compressor_queue_monitor.rs index 7a86dcf905f..88e1f0f6465 100644 --- a/core/lib/zksync_core/src/house_keeper/fri_proof_compressor_queue_monitor.rs +++ b/core/lib/zksync_core/src/house_keeper/fri_proof_compressor_queue_monitor.rs @@ -2,7 +2,7 @@ use async_trait::async_trait; use zksync_dal::ConnectionPool; use zksync_types::proofs::JobCountStatistics; -use zksync_prover_utils::periodic_job::PeriodicJob; +use crate::house_keeper::periodic_job::PeriodicJob; const PROOF_COMPRESSOR_SERVICE_NAME: &str = "proof_compressor"; @@ -58,6 +58,26 @@ impl PeriodicJob for FriProofCompressorStatsReporter { stats.in_progress as f64, "type" => "in_progress" ); + + let oldest_not_compressed_batch = self + .pool + .access_storage() + .await + .unwrap() + .fri_proof_compressor_dal() + .get_oldest_not_compressed_batch() + .await; + + if let Some(l1_batch_number) = oldest_not_compressed_batch { + metrics::gauge!( + format!( + "prover_fri.{}.oldest_not_compressed_batch", + PROOF_COMPRESSOR_SERVICE_NAME + ), + l1_batch_number.0 as f64 + ); + } + Ok(()) } diff --git a/core/lib/zksync_core/src/house_keeper/fri_prover_job_retry_manager.rs b/core/lib/zksync_core/src/house_keeper/fri_prover_job_retry_manager.rs index fefb7333a67..8ff847a5ca9 100644 --- a/core/lib/zksync_core/src/house_keeper/fri_prover_job_retry_manager.rs +++ b/core/lib/zksync_core/src/house_keeper/fri_prover_job_retry_manager.rs @@ -2,7 +2,8 @@ use std::time::Duration; use async_trait::async_trait; use zksync_dal::ConnectionPool; -use zksync_prover_utils::periodic_job::PeriodicJob; + +use crate::house_keeper::periodic_job::PeriodicJob; #[derive(Debug)] pub struct FriProverJobRetryManager { diff --git a/core/lib/zksync_core/src/house_keeper/fri_prover_queue_monitor.rs b/core/lib/zksync_core/src/house_keeper/fri_prover_queue_monitor.rs index ba731ede944..90f90759b32 100644 --- a/core/lib/zksync_core/src/house_keeper/fri_prover_queue_monitor.rs +++ b/core/lib/zksync_core/src/house_keeper/fri_prover_queue_monitor.rs @@ -1,12 +1,14 @@ use async_trait::async_trait; use zksync_config::configs::fri_prover_group::FriProverGroupConfig; use zksync_dal::ConnectionPool; -use zksync_prover_utils::periodic_job::PeriodicJob; + +use crate::house_keeper::periodic_job::PeriodicJob; #[derive(Debug)] pub struct FriProverStatsReporter { reporting_interval_ms: u64, prover_connection_pool: ConnectionPool, + db_connection_pool: ConnectionPool, config: FriProverGroupConfig, } @@ -14,17 +16,19 @@ impl FriProverStatsReporter { pub fn new( reporting_interval_ms: u64, prover_connection_pool: ConnectionPool, + db_connection_pool: ConnectionPool, config: FriProverGroupConfig, ) -> Self { Self { reporting_interval_ms, prover_connection_pool, + db_connection_pool, config, } } } -/// Invoked periodically to push prover queued/inprogress job statistics +/// Invoked periodically to push prover queued/in-progress job statistics #[async_trait] impl PeriodicJob for FriProverStatsReporter { const SERVICE_NAME: &'static str = "FriProverStatsReporter"; @@ -35,11 +39,11 @@ impl PeriodicJob for FriProverStatsReporter { for ((circuit_id, aggregation_round), stats) in stats.into_iter() { // BEWARE, HERE BE DRAGONS. 
- // In database, the circuit_id stored is the circuit for which the aggregation is done, + // In database, the `circuit_id` stored is the circuit for which the aggregation is done, // not the circuit which is running. // There is a single node level aggregation circuit, which is circuit 2. // This can aggregate multiple leaf nodes (which may belong to different circuits). - // This reporting is a hacky forced way to use circuit_id 2 which will solve autoscalers. + // This reporting is a hacky forced way to use `circuit_id` 2 which will solve auto scalers. // A proper fix will be later provided to solve this at database level. let circuit_id = if aggregation_round == 2 { 2 @@ -82,6 +86,55 @@ impl PeriodicJob for FriProverStatsReporter { "circuit_id" => circuit_id.to_string(), "aggregation_round" => aggregation_round.to_string()); } + + // FIXME: refactor metrics here + + let mut db_conn = self.db_connection_pool.access_storage().await.unwrap(); + + let oldest_unpicked_batch = match db_conn + .proof_generation_dal() + .get_oldest_unpicked_batch() + .await + { + Some(l1_batch_number) => l1_batch_number.0 as f64, + // if there is no unpicked batch in database, we use sealed batch number as a result + None => { + db_conn + .blocks_dal() + .get_sealed_l1_batch_number() + .await + .unwrap() + .unwrap() + .0 as f64 + } + }; + metrics::gauge!("fri_prover.oldest_unpicked_batch", oldest_unpicked_batch); + + if let Some(l1_batch_number) = db_conn + .proof_generation_dal() + .get_oldest_not_generated_batch() + .await + { + metrics::gauge!( + "fri_prover.oldest_not_generated_batch", + l1_batch_number.0 as f64 + ) + } + + for aggregation_round in 0..3 { + if let Some(l1_batch_number) = conn + .fri_prover_jobs_dal() + .min_unproved_l1_batch_number_for_aggregation_round(aggregation_round.into()) + .await + { + metrics::gauge!( + "fri_prover.oldest_unprocessed_block_by_round", + l1_batch_number.0 as f64, + "aggregation_round" => aggregation_round.to_string() + ) + } + } + Ok(()) } diff --git a/core/lib/zksync_core/src/house_keeper/fri_scheduler_circuit_queuer.rs b/core/lib/zksync_core/src/house_keeper/fri_scheduler_circuit_queuer.rs index ab9eba1fc66..70911339a8f 100644 --- a/core/lib/zksync_core/src/house_keeper/fri_scheduler_circuit_queuer.rs +++ b/core/lib/zksync_core/src/house_keeper/fri_scheduler_circuit_queuer.rs @@ -1,7 +1,7 @@ use async_trait::async_trait; use zksync_dal::ConnectionPool; -use zksync_prover_utils::periodic_job::PeriodicJob; +use crate::house_keeper::periodic_job::PeriodicJob; #[derive(Debug)] pub struct SchedulerCircuitQueuer { diff --git a/core/lib/zksync_core/src/house_keeper/fri_witness_generator_jobs_retry_manager.rs b/core/lib/zksync_core/src/house_keeper/fri_witness_generator_jobs_retry_manager.rs index f81cb03dc37..3aa21bdd534 100644 --- a/core/lib/zksync_core/src/house_keeper/fri_witness_generator_jobs_retry_manager.rs +++ b/core/lib/zksync_core/src/house_keeper/fri_witness_generator_jobs_retry_manager.rs @@ -2,7 +2,8 @@ use std::time::Duration; use async_trait::async_trait; use zksync_dal::ConnectionPool; -use zksync_prover_utils::periodic_job::PeriodicJob; + +use crate::house_keeper::periodic_job::PeriodicJob; #[derive(Debug)] pub struct FriWitnessGeneratorJobRetryManager { diff --git a/core/lib/zksync_core/src/house_keeper/fri_witness_generator_queue_monitor.rs b/core/lib/zksync_core/src/house_keeper/fri_witness_generator_queue_monitor.rs index 15b56e16553..f198d27d97b 100644 --- a/core/lib/zksync_core/src/house_keeper/fri_witness_generator_queue_monitor.rs +++ 
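`FriProverStatsReporter` above falls back to the sealed L1 batch number when `get_oldest_unpicked_batch` returns `None`, so the gauge keeps reporting a meaningful value once the prover has caught up. The decision in isolation:

```rust
// When nothing is waiting to be picked, report the last sealed batch instead,
// so the "oldest unpicked" gauge never goes stale or missing.
fn oldest_unpicked_gauge(oldest_unpicked: Option<u32>, sealed: u32) -> f64 {
    match oldest_unpicked {
        Some(batch) => batch as f64,
        // No unpicked batch: the prover is fully caught up with sealed batches.
        None => sealed as f64,
    }
}

fn main() {
    assert_eq!(oldest_unpicked_gauge(Some(41), 45), 41.0);
    assert_eq!(oldest_unpicked_gauge(None, 45), 45.0);
}
```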
b/core/lib/zksync_core/src/house_keeper/fri_witness_generator_queue_monitor.rs @@ -4,7 +4,7 @@ use async_trait::async_trait; use zksync_dal::ConnectionPool; use zksync_types::proofs::{AggregationRound, JobCountStatistics}; -use zksync_prover_utils::periodic_job::PeriodicJob; +use crate::house_keeper::periodic_job::PeriodicJob; const FRI_WITNESS_GENERATOR_SERVICE_NAME: &str = "fri_witness_generator"; diff --git a/core/lib/zksync_core/src/house_keeper/gpu_prover_queue_monitor.rs b/core/lib/zksync_core/src/house_keeper/gpu_prover_queue_monitor.rs deleted file mode 100644 index 7ddb1bd75dd..00000000000 --- a/core/lib/zksync_core/src/house_keeper/gpu_prover_queue_monitor.rs +++ /dev/null @@ -1,67 +0,0 @@ -use async_trait::async_trait; -use zksync_dal::ConnectionPool; - -use zksync_prover_utils::periodic_job::PeriodicJob; - -#[derive(Debug)] -pub struct GpuProverQueueMonitor { - synthesizer_per_gpu: u16, - reporting_interval_ms: u64, - prover_connection_pool: ConnectionPool, -} - -impl GpuProverQueueMonitor { - pub fn new( - synthesizer_per_gpu: u16, - reporting_interval_ms: u64, - prover_connection_pool: ConnectionPool, - ) -> Self { - Self { - synthesizer_per_gpu, - reporting_interval_ms, - prover_connection_pool, - } - } -} - -/// Invoked periodically to push prover job statistics to Prometheus -/// Note: these values will be used for auto-scaling circuit-synthesizer -#[async_trait] -impl PeriodicJob for GpuProverQueueMonitor { - const SERVICE_NAME: &'static str = "GpuProverQueueMonitor"; - - async fn run_routine_task(&mut self) -> anyhow::Result<()> { - let prover_gpu_count_per_region_zone = self - .prover_connection_pool - .access_storage() - .await - .unwrap() - .gpu_prover_queue_dal() - .get_prover_gpu_count_per_region_zone() - .await; - - for ((region, zone), num_gpu) in prover_gpu_count_per_region_zone { - let synthesizers = self.synthesizer_per_gpu as u64 * num_gpu; - if synthesizers > 0 { - tracing::info!( - "Would be spawning {} circuit synthesizers in region {} zone {}", - synthesizers, - region, - zone - ); - } - metrics::gauge!( - "server.circuit_synthesizer.jobs", - synthesizers as f64, - "region" => region, - "zone" => zone, - "type" => "queued" - ); - } - Ok(()) - } - - fn polling_interval_ms(&self) -> u64 { - self.reporting_interval_ms - } -} diff --git a/core/lib/zksync_core/src/house_keeper/mod.rs b/core/lib/zksync_core/src/house_keeper/mod.rs index b14a089f911..2c029d42e77 100644 --- a/core/lib/zksync_core/src/house_keeper/mod.rs +++ b/core/lib/zksync_core/src/house_keeper/mod.rs @@ -6,9 +6,5 @@ pub mod fri_prover_queue_monitor; pub mod fri_scheduler_circuit_queuer; pub mod fri_witness_generator_jobs_retry_manager; pub mod fri_witness_generator_queue_monitor; -pub mod gpu_prover_queue_monitor; -pub mod prover_job_retry_manager; -pub mod prover_queue_monitor; +pub mod periodic_job; pub mod waiting_to_queued_fri_witness_job_mover; -pub mod waiting_to_queued_witness_job_mover; -pub mod witness_generator_queue_monitor; diff --git a/core/lib/prover_utils/src/periodic_job.rs b/core/lib/zksync_core/src/house_keeper/periodic_job.rs similarity index 97% rename from core/lib/prover_utils/src/periodic_job.rs rename to core/lib/zksync_core/src/house_keeper/periodic_job.rs index e58ff33e789..3f73a01ce20 100644 --- a/core/lib/prover_utils/src/periodic_job.rs +++ b/core/lib/zksync_core/src/house_keeper/periodic_job.rs @@ -1,6 +1,6 @@ use std::time::Duration; -use anyhow::Context as _; +use anyhow::Context; use async_trait::async_trait; use tokio::time::sleep; diff --git 
a/core/lib/zksync_core/src/house_keeper/prover_job_retry_manager.rs b/core/lib/zksync_core/src/house_keeper/prover_job_retry_manager.rs deleted file mode 100644 index 4142f1d5766..00000000000 --- a/core/lib/zksync_core/src/house_keeper/prover_job_retry_manager.rs +++ /dev/null @@ -1,57 +0,0 @@ -use std::time::Duration; - -use async_trait::async_trait; -use zksync_dal::ConnectionPool; - -use zksync_prover_utils::periodic_job::PeriodicJob; - -#[derive(Debug)] -pub struct ProverJobRetryManager { - max_attempts: u32, - processing_timeout: Duration, - retry_interval_ms: u64, - prover_connection_pool: ConnectionPool, -} - -impl ProverJobRetryManager { - pub fn new( - max_attempts: u32, - processing_timeout: Duration, - retry_interval_ms: u64, - prover_connection_pool: ConnectionPool, - ) -> Self { - Self { - max_attempts, - processing_timeout, - retry_interval_ms, - prover_connection_pool, - } - } -} - -/// Invoked periodically to re-queue stuck prover jobs. -#[async_trait] -impl PeriodicJob for ProverJobRetryManager { - const SERVICE_NAME: &'static str = "ProverJobRetryManager"; - - async fn run_routine_task(&mut self) -> anyhow::Result<()> { - let stuck_jobs = self - .prover_connection_pool - .access_storage() - .await - .unwrap() - .prover_dal() - .requeue_stuck_jobs(self.processing_timeout, self.max_attempts) - .await; - let job_len = stuck_jobs.len(); - for stuck_job in stuck_jobs { - tracing::info!("re-queuing prover job {:?}", stuck_job); - } - metrics::counter!("server.prover.requeued_jobs", job_len as u64); - Ok(()) - } - - fn polling_interval_ms(&self) -> u64 { - self.retry_interval_ms - } -} diff --git a/core/lib/zksync_core/src/house_keeper/prover_queue_monitor.rs b/core/lib/zksync_core/src/house_keeper/prover_queue_monitor.rs deleted file mode 100644 index 5b41ee74ac9..00000000000 --- a/core/lib/zksync_core/src/house_keeper/prover_queue_monitor.rs +++ /dev/null @@ -1,84 +0,0 @@ -use async_trait::async_trait; -use zksync_config::configs::ProverGroupConfig; -use zksync_dal::ConnectionPool; -use zksync_prover_utils::circuit_name_to_numeric_index; - -use zksync_prover_utils::periodic_job::PeriodicJob; - -#[derive(Debug)] -pub struct ProverStatsReporter { - reporting_interval_ms: u64, - prover_connection_pool: ConnectionPool, - config: ProverGroupConfig, -} - -impl ProverStatsReporter { - pub fn new( - reporting_interval_ms: u64, - prover_connection_pool: ConnectionPool, - config: ProverGroupConfig, - ) -> Self { - Self { - reporting_interval_ms, - prover_connection_pool, - config, - } - } -} - -/// Invoked periodically to push job statistics to Prometheus -/// Note: these values will be used for manually scaling provers. 
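All the house-keeper reporters touched by this patch, including the deleted `ProverStatsReporter` quoted here, are driven by the same `PeriodicJob` trait, which this PR moves from `zksync_prover_utils` into `house_keeper::periodic_job`. A minimal sketch of that driver, reconstructed from the imports visible in the rename hunk above; the exact loop body and error context are assumptions:

use std::time::Duration;

use anyhow::Context as _;
use async_trait::async_trait;
use tokio::time::sleep;

#[async_trait]
pub trait PeriodicJob: Send + Sync {
    const SERVICE_NAME: &'static str;

    /// One unit of work, e.g. pushing gauges to Prometheus.
    async fn run_routine_task(&mut self) -> anyhow::Result<()>;

    fn polling_interval_ms(&self) -> u64;

    /// Default driver: repeat the routine task at a fixed cadence, forever.
    async fn run(mut self) -> anyhow::Result<()>
    where
        Self: Sized,
    {
        let timeout = Duration::from_millis(self.polling_interval_ms());
        loop {
            self.run_routine_task()
                .await
                .context("run_routine_task()")?;
            sleep(timeout).await;
        }
    }
}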
-#[async_trait] -impl PeriodicJob for ProverStatsReporter { - const SERVICE_NAME: &'static str = "ProverStatsReporter"; - - async fn run_routine_task(&mut self) -> anyhow::Result<()> { - let mut conn = self.prover_connection_pool.access_storage().await.unwrap(); - let stats = conn.prover_dal().get_prover_jobs_stats_per_circuit().await; - - for (circuit_name, stats) in stats.into_iter() { - let group_id = self - .config - .get_group_id_for_circuit_id(circuit_name_to_numeric_index(&circuit_name).unwrap()) - .unwrap(); - - metrics::gauge!( - "server.prover.jobs", - stats.queued as f64, - "type" => "queued", - "prover_group_id" => group_id.to_string(), - "circuit_name" => circuit_name.clone(), - "circuit_type" => circuit_name_to_numeric_index(&circuit_name).unwrap().to_string() - ); - - metrics::gauge!( - "server.prover.jobs", - stats.in_progress as f64, - "type" => "in_progress", - "prover_group_id" => group_id.to_string(), - "circuit_name" => circuit_name.clone(), - "circuit_type" => circuit_name_to_numeric_index(&circuit_name).unwrap().to_string() - ); - } - - if let Some(min_unproved_l1_batch_number) = - conn.prover_dal().min_unproved_l1_batch_number().await - { - metrics::gauge!("server.block_number", min_unproved_l1_batch_number.0 as f64, "stage" => "circuit_aggregation") - } - - let lag_by_circuit_type = conn - .prover_dal() - .min_unproved_l1_batch_number_by_basic_circuit_type() - .await; - - for (circuit_type, l1_batch_number) in lag_by_circuit_type { - metrics::gauge!("server.block_number", l1_batch_number.0 as f64, "stage" => format!("circuit_{}", circuit_type)); - } - Ok(()) - } - - fn polling_interval_ms(&self) -> u64 { - self.reporting_interval_ms - } -} diff --git a/core/lib/zksync_core/src/house_keeper/waiting_to_queued_fri_witness_job_mover.rs b/core/lib/zksync_core/src/house_keeper/waiting_to_queued_fri_witness_job_mover.rs index 2fd00bcd6f6..df9208f1f45 100644 --- a/core/lib/zksync_core/src/house_keeper/waiting_to_queued_fri_witness_job_mover.rs +++ b/core/lib/zksync_core/src/house_keeper/waiting_to_queued_fri_witness_job_mover.rs @@ -1,7 +1,7 @@ use async_trait::async_trait; use zksync_dal::ConnectionPool; -use zksync_prover_utils::periodic_job::PeriodicJob; +use crate::house_keeper::periodic_job::PeriodicJob; #[derive(Debug)] pub struct WaitingToQueuedFriWitnessJobMover { diff --git a/core/lib/zksync_core/src/house_keeper/waiting_to_queued_witness_job_mover.rs b/core/lib/zksync_core/src/house_keeper/waiting_to_queued_witness_job_mover.rs deleted file mode 100644 index c99603676ec..00000000000 --- a/core/lib/zksync_core/src/house_keeper/waiting_to_queued_witness_job_mover.rs +++ /dev/null @@ -1,96 +0,0 @@ -use async_trait::async_trait; -use zksync_dal::ConnectionPool; - -use zksync_prover_utils::periodic_job::PeriodicJob; - -#[derive(Debug)] -pub struct WaitingToQueuedWitnessJobMover { - job_moving_interval_ms: u64, - prover_connection_pool: ConnectionPool, -} - -impl WaitingToQueuedWitnessJobMover { - pub fn new(job_mover_interval_ms: u64, prover_connection_pool: ConnectionPool) -> Self { - Self { - job_moving_interval_ms: job_mover_interval_ms, - prover_connection_pool, - } - } - - async fn move_jobs(&mut self) { - self.move_leaf_aggregation_jobs().await; - self.move_node_aggregation_jobs().await; - self.move_scheduler_jobs().await; - } - - async fn move_leaf_aggregation_jobs(&mut self) { - let mut conn = self.prover_connection_pool.access_storage().await.unwrap(); - let l1_batch_numbers = conn - .witness_generator_dal() - 
.move_leaf_aggregation_jobs_from_waiting_to_queued() - .await; - let len = l1_batch_numbers.len(); - for l1_batch_number in l1_batch_numbers { - tracing::info!( - "Marked leaf aggregation job for l1_batch {} as queued", - l1_batch_number - ); - } - metrics::counter!( - "server.leaf_witness_generator.waiting_to_queued_jobs_transitions", - len as u64 - ); - } - - async fn move_node_aggregation_jobs(&mut self) { - let mut conn = self.prover_connection_pool.access_storage().await.unwrap(); - let l1_batch_numbers = conn - .witness_generator_dal() - .move_node_aggregation_jobs_from_waiting_to_queued() - .await; - let len = l1_batch_numbers.len(); - for l1_batch_number in l1_batch_numbers { - tracing::info!( - "Marking node aggregation job for l1_batch {} as queued", - l1_batch_number - ); - } - metrics::counter!( - "server.node_witness_generator.waiting_to_queued_jobs_transitions", - len as u64 - ); - } - - async fn move_scheduler_jobs(&mut self) { - let mut conn = self.prover_connection_pool.access_storage().await.unwrap(); - let l1_batch_numbers = conn - .witness_generator_dal() - .move_scheduler_jobs_from_waiting_to_queued() - .await; - let len = l1_batch_numbers.len(); - for l1_batch_number in l1_batch_numbers { - tracing::info!( - "Marking scheduler aggregation job for l1_batch {} as queued", - l1_batch_number - ); - } - metrics::counter!( - "server.scheduler_witness_generator.waiting_to_queued_jobs_transitions", - len as u64 - ); - } -} - -#[async_trait] -impl PeriodicJob for WaitingToQueuedWitnessJobMover { - const SERVICE_NAME: &'static str = "WaitingToQueuedWitnessJobMover"; - - async fn run_routine_task(&mut self) -> anyhow::Result<()> { - self.move_jobs().await; - Ok(()) - } - - fn polling_interval_ms(&self) -> u64 { - self.job_moving_interval_ms - } -} diff --git a/core/lib/zksync_core/src/house_keeper/witness_generator_queue_monitor.rs b/core/lib/zksync_core/src/house_keeper/witness_generator_queue_monitor.rs deleted file mode 100644 index 40a8e2a6613..00000000000 --- a/core/lib/zksync_core/src/house_keeper/witness_generator_queue_monitor.rs +++ /dev/null @@ -1,122 +0,0 @@ -use std::collections::HashMap; - -use async_trait::async_trait; -use zksync_dal::ConnectionPool; -use zksync_types::proofs::{AggregationRound, JobCountStatistics}; - -use zksync_prover_utils::periodic_job::PeriodicJob; - -const WITNESS_GENERATOR_SERVICE_NAME: &str = "witness_generator"; - -#[derive(Debug)] -pub struct WitnessGeneratorStatsReporter { - reporting_interval_ms: u64, - prover_connection_pool: ConnectionPool, -} - -impl WitnessGeneratorStatsReporter { - pub fn new(reporting_interval_ms: u64, prover_connection_pool: ConnectionPool) -> Self { - Self { - reporting_interval_ms, - prover_connection_pool, - } - } - - async fn get_job_statistics( - prover_connection_pool: &ConnectionPool, - ) -> HashMap { - let mut conn = prover_connection_pool.access_storage().await.unwrap(); - HashMap::from([ - ( - AggregationRound::BasicCircuits, - conn.witness_generator_dal() - .get_witness_jobs_stats(AggregationRound::BasicCircuits) - .await, - ), - ( - AggregationRound::LeafAggregation, - conn.witness_generator_dal() - .get_witness_jobs_stats(AggregationRound::LeafAggregation) - .await, - ), - ( - AggregationRound::NodeAggregation, - conn.witness_generator_dal() - .get_witness_jobs_stats(AggregationRound::NodeAggregation) - .await, - ), - ( - AggregationRound::Scheduler, - conn.witness_generator_dal() - .get_witness_jobs_stats(AggregationRound::Scheduler) - .await, - ), - ]) - } -} - -fn emit_metrics_for_round(round: 
AggregationRound, stats: JobCountStatistics) { - if stats.queued > 0 || stats.in_progress > 0 { - tracing::trace!( - "Found {} free and {} in progress {:?} witness generators jobs", - stats.queued, - stats.in_progress, - round - ); - } - - metrics::gauge!( - format!("server.{}.jobs", WITNESS_GENERATOR_SERVICE_NAME), - stats.queued as f64, - "type" => "queued", - "round" => format!("{:?}", round) - ); - - metrics::gauge!( - format!("server.{}.jobs", WITNESS_GENERATOR_SERVICE_NAME), - stats.in_progress as f64, - "type" => "in_progress", - "round" => format!("{:?}", round) - ); -} - -/// Invoked periodically to push job statistics to Prometheus -/// Note: these values will be used for auto-scaling job processors -#[async_trait] -impl PeriodicJob for WitnessGeneratorStatsReporter { - const SERVICE_NAME: &'static str = "WitnessGeneratorStatsReporter"; - - async fn run_routine_task(&mut self) -> anyhow::Result<()> { - let stats_for_all_rounds = Self::get_job_statistics(&self.prover_connection_pool).await; - let mut aggregated = JobCountStatistics::default(); - for (round, stats) in stats_for_all_rounds { - emit_metrics_for_round(round, stats); - aggregated = aggregated + stats; - } - - if aggregated.queued > 0 { - tracing::trace!( - "Found {} free {} in progress witness generators jobs", - aggregated.queued, - aggregated.in_progress - ); - } - - metrics::gauge!( - format!("server.{}.jobs", WITNESS_GENERATOR_SERVICE_NAME), - aggregated.queued as f64, - "type" => "queued" - ); - - metrics::gauge!( - format!("server.{}.jobs", WITNESS_GENERATOR_SERVICE_NAME), - aggregated.in_progress as f64, - "type" => "in_progress" - ); - Ok(()) - } - - fn polling_interval_ms(&self) -> u64 { - self.reporting_interval_ms - } -} diff --git a/core/lib/zksync_core/src/l1_gas_price/gas_adjuster/bounded_gas_adjuster.rs b/core/lib/zksync_core/src/l1_gas_price/gas_adjuster/bounded_gas_adjuster.rs deleted file mode 100644 index 1db1d638687..00000000000 --- a/core/lib/zksync_core/src/l1_gas_price/gas_adjuster/bounded_gas_adjuster.rs +++ /dev/null @@ -1,46 +0,0 @@ -use std::{fmt, sync::Arc}; - -use crate::{l1_gas_price::L1GasPriceProvider, state_keeper::metrics::KEEPER_METRICS}; - -/// Gas adjuster that bounds the gas price to the specified value. -/// We need this to prevent the gas price from growing too much, because our bootloader is sensitive for the gas price and can fail if it's too high. -/// And for mainnet it's not the case, but for testnet we can have a situation when the gas price is too high. 
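This deleted wrapper is superseded by a `bound_gas_price` method added directly to `GasAdjuster` further down in this diff. The clamping rule, extracted as a standalone sketch; the log message is taken from the diff, and the real method additionally bumps `KEEPER_METRICS.gas_price_too_high`:

/// Clamp an estimated L1 gas price to the configured ceiling, warning when
/// the cap kicks in.
fn bound_gas_price(gas_price: u64, max_l1_gas_price: u64) -> u64 {
    if gas_price > max_l1_gas_price {
        tracing::warn!(
            "Effective gas price is too high: {gas_price}, using max allowed: {max_l1_gas_price}"
        );
        // The real method also increments a "gas price too high" metric here.
        return max_l1_gas_price;
    }
    gas_price
}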
-pub struct BoundedGasAdjuster { - max_gas_price: u64, - default_gas_adjuster: Arc, -} - -impl fmt::Debug for BoundedGasAdjuster { - fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { - f.debug_struct("BoundedGasAdjuster") - .field("max_gas_price", &self.max_gas_price) - .finish() - } -} - -impl BoundedGasAdjuster { - pub fn new(max_gas_price: u64, default_gas_adjuster: Arc) -> Self { - Self { - max_gas_price, - default_gas_adjuster, - } - } -} - -impl L1GasPriceProvider for BoundedGasAdjuster { - fn estimate_effective_gas_price(&self) -> u64 { - let default_gas_price = self.default_gas_adjuster.estimate_effective_gas_price(); - if default_gas_price > self.max_gas_price { - tracing::warn!( - "Effective gas price is too high: {default_gas_price}, using max allowed: {}", - self.max_gas_price - ); - KEEPER_METRICS.gas_price_too_high.inc(); - return self.max_gas_price; - } - default_gas_price - } - fn get_erc20_conversion_rate(&self) -> u64 { - self.default_gas_adjuster.get_erc20_conversion_rate() - } -} diff --git a/core/lib/zksync_core/src/l1_gas_price/gas_adjuster/erc_20_fetcher.rs b/core/lib/zksync_core/src/l1_gas_price/gas_adjuster/erc_20_fetcher.rs index 5a79909aa4d..4402d0147c2 100644 --- a/core/lib/zksync_core/src/l1_gas_price/gas_adjuster/erc_20_fetcher.rs +++ b/core/lib/zksync_core/src/l1_gas_price/gas_adjuster/erc_20_fetcher.rs @@ -1,6 +1,5 @@ -use serde::Deserialize; -use serde::Serialize; -use zksync_eth_client::types::Error; +use serde::{Deserialize, Serialize}; +use zksync_eth_client::Error; #[derive(Deserialize, Serialize, Debug)] struct EthValue { eth: serde_json::value::Number, diff --git a/core/lib/zksync_core/src/l1_gas_price/gas_adjuster/mod.rs b/core/lib/zksync_core/src/l1_gas_price/gas_adjuster/mod.rs index b73027cdd28..f34f6e7801f 100644 --- a/core/lib/zksync_core/src/l1_gas_price/gas_adjuster/mod.rs +++ b/core/lib/zksync_core/src/l1_gas_price/gas_adjuster/mod.rs @@ -1,23 +1,23 @@ //! This module determines the fees to pay in txs containing blocks submitted to the L1. -use ::metrics::atomics::AtomicU64; -use tokio::sync::watch; - use std::{ collections::VecDeque, - sync::{Arc, RwLock}, + sync::{atomic::AtomicU64, Arc, RwLock}, }; +use tokio::sync::watch; use zksync_config::GasAdjusterConfig; -use zksync_eth_client::{types::Error, EthInterface}; +use zksync_eth_client::{Error, EthInterface}; +use zksync_system_constants::L1_GAS_PER_PUBDATA_BYTE; -pub mod bounded_gas_adjuster; +use self::{erc_20_fetcher::get_erc_20_value_in_wei, metrics::METRICS}; +use super::{L1GasPriceProvider, L1TxParamsProvider}; +use crate::state_keeper::metrics::KEEPER_METRICS; pub mod erc_20_fetcher; + mod metrics; #[cfg(test)] mod tests; -use self::{erc_20_fetcher::get_erc_20_value_in_wei, metrics::METRICS}; -use super::{L1GasPriceProvider, L1TxParamsProvider}; /// This component keeps track of the median base_fee from the last `max_base_fee_samples` blocks. /// It is used to adjust the base_fee of transactions sent to L1. 
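To make the doc comment above concrete: the adjuster keeps a bounded window of recent `base_fee` samples and works off their median. A simplified, self-contained sketch of that statistic; the real `GasStatisticsInner` (imported by the tests further down) carries more state than this:

use std::collections::VecDeque;

/// Bounded window of recent `base_fee` samples; illustrative only.
struct BaseFeeSamples {
    max_samples: usize,
    samples: VecDeque<u64>,
}

impl BaseFeeSamples {
    fn new(max_samples: usize) -> Self {
        Self {
            max_samples,
            samples: VecDeque::with_capacity(max_samples),
        }
    }

    /// Push a new sample, evicting the oldest once the window is full.
    fn add(&mut self, base_fee: u64) {
        if self.samples.len() == self.max_samples {
            self.samples.pop_front();
        }
        self.samples.push_back(base_fee);
    }

    /// Median of the retained samples (upper median for an even count).
    fn median(&self) -> Option<u64> {
        if self.samples.is_empty() {
            return None;
        }
        let mut sorted: Vec<u64> = self.samples.iter().copied().collect();
        sorted.sort_unstable();
        Some(sorted[sorted.len() / 2])
    }
}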
@@ -90,6 +90,19 @@ impl GasAdjuster { Ok(()) } + fn bound_gas_price(&self, gas_price: u64) -> u64 { + let max_l1_gas_price = self.config.max_l1_gas_price(); + if gas_price > max_l1_gas_price { + tracing::warn!( + "Effective gas price is too high: {gas_price}, using max allowed: {}", + max_l1_gas_price + ); + KEEPER_METRICS.gas_price_too_high.inc(); + return max_l1_gas_price; + } + gas_price + } + pub async fn run(self: Arc, stop_receiver: watch::Receiver) -> anyhow::Result<()> { loop { if *stop_receiver.borrow() { @@ -118,7 +131,16 @@ impl L1GasPriceProvider for GasAdjuster { let effective_gas_price = self.get_base_fee(0) + self.get_priority_fee(); - (self.config.internal_l1_pricing_multiplier * effective_gas_price as f64) as u64 + let calculated_price = + (self.config.internal_l1_pricing_multiplier * effective_gas_price as f64) as u64; + + // Bound the price if it's too high. + self.bound_gas_price(calculated_price) + } + + fn estimate_effective_pubdata_price(&self) -> u64 { + // For now, pubdata is only sent via calldata, so its price is pegged to the L1 gas price. + self.estimate_effective_gas_price() * L1_GAS_PER_PUBDATA_BYTE as u64 } /// TODO: This is for an easy refactor to test things, @@ -142,7 +164,7 @@ impl L1TxParamsProvider for GasAdjuster { // Currently we use an exponential formula. // The alternative is a linear one: - // let scale_factor = a + b * time_in_mempool as f64; + // `let scale_factor = a + b * time_in_mempool as f64;` let scale_factor = a * b.powf(time_in_mempool as f64); let median = self.statistics.median(); METRICS.median_base_fee_per_gas.set(median); @@ -159,11 +181,11 @@ impl L1TxParamsProvider for GasAdjuster { // Priority fee is set to constant, sourced from config. // Reasoning behind this is the following: - // High priority_fee means high demand for block space, - // which means base_fee will increase, which means priority_fee + // High `priority_fee` means high demand for block space, + // which means `base_fee` will increase, which means `priority_fee` // will decrease. The EIP-1559 mechanism is designed such that - // base_fee will balance out priority_fee in such a way that - // priority_fee will be a small fraction of the overall fee. + // `base_fee` will balance out `priority_fee` in such a way that + // `priority_fee` will be a small fraction of the overall fee. 
fn get_priority_fee(&self) -> u64 { self.config.default_priority_fee_per_gas } diff --git a/core/lib/zksync_core/src/l1_gas_price/gas_adjuster/tests.rs b/core/lib/zksync_core/src/l1_gas_price/gas_adjuster/tests.rs index a0c6dac365c..2feb65f1cbd 100644 --- a/core/lib/zksync_core/src/l1_gas_price/gas_adjuster/tests.rs +++ b/core/lib/zksync_core/src/l1_gas_price/gas_adjuster/tests.rs @@ -1,8 +1,9 @@ -use super::{GasAdjuster, GasStatisticsInner}; -use std::collections::VecDeque; -use std::sync::Arc; +use std::{collections::VecDeque, sync::Arc}; + use zksync_config::GasAdjusterConfig; -use zksync_eth_client::clients::mock::MockEthereum; +use zksync_eth_client::clients::MockEthereum; + +use super::{GasAdjuster, GasStatisticsInner}; /// Check that we compute the median correctly #[test] diff --git a/core/lib/zksync_core/src/l1_gas_price/main_node_fetcher.rs b/core/lib/zksync_core/src/l1_gas_price/main_node_fetcher.rs index 7815e07a71a..6284120ceef 100644 --- a/core/lib/zksync_core/src/l1_gas_price/main_node_fetcher.rs +++ b/core/lib/zksync_core/src/l1_gas_price/main_node_fetcher.rs @@ -1,19 +1,19 @@ use std::{ sync::{ atomic::{AtomicU64, Ordering}, - Arc, + Arc, RwLock, }, time::Duration, }; use tokio::sync::watch::Receiver; - +use zksync_types::fee_model::FeeParams; use zksync_web3_decl::{ jsonrpsee::http_client::{HttpClient, HttpClientBuilder}, namespaces::ZksNamespaceClient, }; -use super::{erc_20_fetcher, L1GasPriceProvider}; +use crate::{fee_model::BatchFeeModelInputProvider, l1_gas_price::gas_adjuster::erc_20_fetcher}; const SLEEP_INTERVAL: Duration = Duration::from_secs(5); @@ -24,18 +24,18 @@ const SLEEP_INTERVAL: Duration = Duration::from_secs(5); /// The same algorithm cannot be consistently replicated on the external node side, /// since it relies on the configuration, which may change. #[derive(Debug)] -pub struct MainNodeGasPriceFetcher { +pub struct MainNodeFeeParamsFetcher { client: HttpClient, - gas_price: AtomicU64, + main_node_fee_params: RwLock, erc20_value_in_wei: AtomicU64, } -impl MainNodeGasPriceFetcher { +impl MainNodeFeeParamsFetcher { pub fn new(main_node_url: &str) -> Self { Self { client: Self::build_client(main_node_url), - gas_price: AtomicU64::new(1u64), // Start with 1 wei until the first update. 
erc20_value_in_wei: AtomicU64::new(1u64), + main_node_fee_params: RwLock::new(FeeParams::sensible_v1_default()), } } @@ -48,11 +48,11 @@ impl MainNodeGasPriceFetcher { pub async fn run(self: Arc, stop_receiver: Receiver) -> anyhow::Result<()> { loop { if *stop_receiver.borrow() { - tracing::info!("Stop signal received, MainNodeGasPriceFetcher is shutting down"); + tracing::info!("Stop signal received, MainNodeFeeParamsFetcher is shutting down"); break; } - let main_node_gas_price = match self.client.get_l1_gas_price().await { + let main_node_fee_params = match self.client.get_fee_params().await { Ok(price) => price, Err(err) => { tracing::warn!("Unable to get the gas price: {}", err); @@ -61,8 +61,8 @@ impl MainNodeGasPriceFetcher { continue; } }; - self.gas_price - .store(main_node_gas_price.as_u64(), Ordering::Relaxed); + *self.main_node_fee_params.write().unwrap() = main_node_fee_params; + tokio::time::sleep(SLEEP_INTERVAL).await; self.erc20_value_in_wei.store( @@ -74,9 +74,9 @@ impl MainNodeGasPriceFetcher { } } -impl L1GasPriceProvider for MainNodeGasPriceFetcher { - fn estimate_effective_gas_price(&self) -> u64 { - self.gas_price.load(Ordering::Relaxed) +impl BatchFeeModelInputProvider for MainNodeFeeParamsFetcher { + fn get_fee_model_params(&self) -> FeeParams { + *self.main_node_fee_params.read().unwrap() } fn get_erc20_conversion_rate(&self) -> u64 { diff --git a/core/lib/zksync_core/src/l1_gas_price/mod.rs b/core/lib/zksync_core/src/l1_gas_price/mod.rs index 3811070d42e..c021e3afe5c 100644 --- a/core/lib/zksync_core/src/l1_gas_price/mod.rs +++ b/core/lib/zksync_core/src/l1_gas_price/mod.rs @@ -1,9 +1,9 @@ //! This module determines the fees to pay in txs containing blocks submitted to the L1. -pub use gas_adjuster::bounded_gas_adjuster::BoundedGasAdjuster; -pub use gas_adjuster::erc_20_fetcher; +use std::fmt; + pub use gas_adjuster::GasAdjuster; -pub use main_node_fetcher::MainNodeGasPriceFetcher; +pub use main_node_fetcher::MainNodeFeeParamsFetcher; pub use singleton::GasAdjusterSingleton; mod gas_adjuster; @@ -12,12 +12,17 @@ pub mod singleton; /// Abstraction that provides information about the L1 gas price currently /// observed by the application. -pub trait L1GasPriceProvider { +pub trait L1GasPriceProvider: fmt::Debug + 'static + Send + Sync { /// Returns a best guess of a realistic value for the L1 gas price. /// Return value is in wei. fn estimate_effective_gas_price(&self) -> u64; fn get_erc20_conversion_rate(&self) -> u64; + + /// Returns a best guess of a realistic value for the L1 pubdata price. + /// Note that starting with EIP4844 it will become independent from the gas price. + /// Return value is in wei. 
+ fn estimate_effective_pubdata_price(&self) -> u64; } /// Extended version of `L1GasPriceProvider` that can provide parameters diff --git a/core/lib/zksync_core/src/l1_gas_price/singleton.rs b/core/lib/zksync_core/src/l1_gas_price/singleton.rs index 4808dee548b..639f00c52b9 100644 --- a/core/lib/zksync_core/src/l1_gas_price/singleton.rs +++ b/core/lib/zksync_core/src/l1_gas_price/singleton.rs @@ -1,10 +1,14 @@ -use crate::l1_gas_price::{BoundedGasAdjuster, GasAdjuster}; -use anyhow::Context as _; use std::sync::Arc; -use tokio::sync::{watch, OnceCell}; -use tokio::task::JoinHandle; + +use anyhow::Context as _; +use tokio::{ + sync::{watch, OnceCell}, + task::JoinHandle, +}; use zksync_config::GasAdjusterConfig; -use zksync_eth_client::clients::http::QueryClient; +use zksync_eth_client::clients::QueryClient; + +use crate::l1_gas_price::GasAdjuster; /// Special struct for creating a singleton of `GasAdjuster`. /// This is needed only for running the server. @@ -49,16 +53,6 @@ impl GasAdjusterSingleton { adjuster.clone() } - pub async fn get_or_init_bounded( - &mut self, - ) -> anyhow::Result>>> { - let adjuster = self.get_or_init().await.context("get_or_init()")?; - Ok(Arc::new(BoundedGasAdjuster::new( - self.gas_adjuster_config.max_l1_gas_price(), - adjuster, - ))) - } - pub fn run_if_initialized( self, stop_signal: watch::Receiver, diff --git a/core/lib/zksync_core/src/lib.rs b/core/lib/zksync_core/src/lib.rs index d52fd76661f..a246b43216a 100644 --- a/core/lib/zksync_core/src/lib.rs +++ b/core/lib/zksync_core/src/lib.rs @@ -3,11 +3,11 @@ use std::{net::Ipv4Addr, str::FromStr, sync::Arc, time::Instant}; use anyhow::Context as _; +use fee_model::MainNodeFeeInputProvider; use futures::channel::oneshot; use prometheus_exporter::PrometheusExporterConfig; use temp_config_store::TempConfigStore; use tokio::{sync::watch, task::JoinHandle}; - use zksync_circuit_breaker::{ l1_txs::FailedL1TransactionChecker, replication_lag::ReplicationLagChecker, CircuitBreaker, CircuitBreakerChecker, CircuitBreakerError, @@ -20,36 +20,68 @@ use zksync_config::{ StateKeeperConfig, }, contracts::ProverAtGenesis, - database::MerkleTreeMode, + database::{MerkleTreeConfig, MerkleTreeMode}, }, ApiConfig, ContractsConfig, DBConfig, ETHSenderConfig, PostgresConfig, }; use zksync_contracts::{governance_contract, BaseSystemContracts}; use zksync_dal::{healthcheck::ConnectionPoolHealthCheck, ConnectionPool}; -use zksync_eth_client::clients::http::QueryClient; -use zksync_eth_client::EthInterface; -use zksync_eth_client::{clients::http::PKSigningClient, BoundEthInterface}; +use zksync_eth_client::{ + clients::{PKSigningClient, QueryClient}, + BoundEthInterface, CallFunctionArgs, EthInterface, +}; use zksync_health_check::{CheckHealth, HealthStatus, ReactiveHealthCheck}; use zksync_object_store::{ObjectStore, ObjectStoreFactory}; -use zksync_prover_utils::periodic_job::PeriodicJob; use zksync_queued_job_processor::JobProcessor; use zksync_state::PostgresStorageCaches; use zksync_types::{ - proofs::AggregationRound, + fee_model::FeeModelConfig, protocol_version::{L1VerifierConfig, VerifierParams}, system_contracts::get_system_smart_contracts, + web3::contract::tokens::Detokenize, L2ChainId, PackedEthSignature, ProtocolVersionId, }; -use zksync_verification_key_server::get_cached_commitments; + +use crate::{ + api_server::{ + contract_verification, + execution_sandbox::{VmConcurrencyBarrier, VmConcurrencyLimiter}, + healthcheck::HealthCheckHandle, + tx_sender::{ApiContracts, TxSender, TxSenderBuilder, TxSenderConfig}, + web3, 
+ web3::{state::InternalApiConfig, ApiServerHandles, Namespace}, + }, + basic_witness_input_producer::BasicWitnessInputProducer, + eth_sender::{Aggregator, EthTxAggregator, EthTxManager}, + eth_watch::start_eth_watch, + house_keeper::{ + blocks_state_reporter::L1BatchMetricsReporter, + fri_proof_compressor_job_retry_manager::FriProofCompressorJobRetryManager, + fri_proof_compressor_queue_monitor::FriProofCompressorStatsReporter, + fri_prover_job_retry_manager::FriProverJobRetryManager, + fri_prover_queue_monitor::FriProverStatsReporter, + fri_scheduler_circuit_queuer::SchedulerCircuitQueuer, + fri_witness_generator_jobs_retry_manager::FriWitnessGeneratorJobRetryManager, + fri_witness_generator_queue_monitor::FriWitnessGeneratorStatsReporter, + periodic_job::PeriodicJob, + waiting_to_queued_fri_witness_job_mover::WaitingToQueuedFriWitnessJobMover, + }, + l1_gas_price::{GasAdjusterSingleton, L1GasPriceProvider}, + metadata_calculator::{MetadataCalculator, MetadataCalculatorConfig}, + metrics::{InitStage, APP_METRICS}, + state_keeper::{ + create_state_keeper, MempoolFetcher, MempoolGuard, MiniblockSealer, SequencerSealer, + }, +}; pub mod api_server; pub mod basic_witness_input_producer; pub mod block_reverter; -mod consensus; +pub mod consensus; pub mod consistency_checker; -pub mod data_fetchers; pub mod eth_sender; pub mod eth_watch; +mod fee_model; pub mod gas_tracker; pub mod genesis; pub mod house_keeper; @@ -61,48 +93,7 @@ pub mod reorg_detector; pub mod state_keeper; pub mod sync_layer; pub mod temp_config_store; -pub mod witness_generator; - -use crate::api_server::healthcheck::HealthCheckHandle; -use crate::api_server::tx_sender::{TxSender, TxSenderBuilder, TxSenderConfig}; -use crate::api_server::web3::{state::InternalApiConfig, ApiServerHandles, Namespace}; -use crate::basic_witness_input_producer::BasicWitnessInputProducer; -use crate::eth_sender::{Aggregator, EthTxManager}; -use crate::house_keeper::fri_proof_compressor_job_retry_manager::FriProofCompressorJobRetryManager; -use crate::house_keeper::fri_proof_compressor_queue_monitor::FriProofCompressorStatsReporter; -use crate::house_keeper::fri_prover_job_retry_manager::FriProverJobRetryManager; -use crate::house_keeper::fri_prover_queue_monitor::FriProverStatsReporter; -use crate::house_keeper::fri_scheduler_circuit_queuer::SchedulerCircuitQueuer; -use crate::house_keeper::fri_witness_generator_jobs_retry_manager::FriWitnessGeneratorJobRetryManager; -use crate::house_keeper::fri_witness_generator_queue_monitor::FriWitnessGeneratorStatsReporter; -use crate::house_keeper::{ - blocks_state_reporter::L1BatchMetricsReporter, gpu_prover_queue_monitor::GpuProverQueueMonitor, - prover_job_retry_manager::ProverJobRetryManager, prover_queue_monitor::ProverStatsReporter, - waiting_to_queued_fri_witness_job_mover::WaitingToQueuedFriWitnessJobMover, - waiting_to_queued_witness_job_mover::WaitingToQueuedWitnessJobMover, - witness_generator_queue_monitor::WitnessGeneratorStatsReporter, -}; -use crate::l1_gas_price::{GasAdjusterSingleton, L1GasPriceProvider}; -use crate::metadata_calculator::{ - MetadataCalculator, MetadataCalculatorConfig, MetadataCalculatorModeConfig, -}; -use crate::state_keeper::{create_state_keeper, MempoolFetcher, MempoolGuard, MiniblockSealer}; -use crate::witness_generator::{ - basic_circuits::BasicWitnessGenerator, leaf_aggregation::LeafAggregationWitnessGenerator, - node_aggregation::NodeAggregationWitnessGenerator, scheduler::SchedulerWitnessGenerator, -}; -use crate::{ - api_server::{ - contract_verification, - 
execution_sandbox::{VmConcurrencyBarrier, VmConcurrencyLimiter}, - tx_sender::ApiContracts, - web3, - }, - data_fetchers::run_data_fetchers, - eth_sender::EthTxAggregator, - eth_watch::start_eth_watch, - metrics::{InitStage, APP_METRICS}, -}; +mod utils; /// Inserts the initial information about zkSync tokens into the database. pub async fn genesis_init( @@ -141,17 +132,13 @@ pub async fn genesis_init( }; let eth_client = QueryClient::new(eth_client_url)?; - let vk_hash: zksync_types::H256 = eth_client - .call_contract_function( - "verificationKeyHash", - (), - None, - Default::default(), - None, - contracts_config.verifier_addr, - zksync_contracts::verifier_contract(), - ) - .await?; + let args = CallFunctionArgs::new("verificationKeyHash", ()).for_contract( + contracts_config.verifier_addr, + zksync_contracts::verifier_contract(), + ); + + let vk_hash = eth_client.call_contract_function(args).await?; + let vk_hash = zksync_types::H256::from_tokens(vk_hash)?; assert_eq!( vk_hash, l1_verifier_config.recursion_scheduler_level_vk_hash, @@ -221,17 +208,12 @@ pub fn setup_sigint_handler() -> oneshot::Receiver<()> { pub enum Component { /// Public Web3 API running on HTTP server. HttpApi, - /// Public Web3 API running on HTTP/WebSocket server and redirect eth_getLogs to another method. - ApiTranslator, /// Public Web3 API (including PubSub) running on WebSocket server. WsApi, /// REST API for contract verification. ContractVerificationApi, /// Metadata calculator. Tree, - // TODO(BFT-273): Remove `TreeLightweight` component as obsolete - TreeLightweight, - TreeBackup, /// Merkle tree API. TreeApi, EthWatcher, @@ -239,19 +221,14 @@ pub enum Component { EthTxAggregator, /// Manager for eth tx. EthTxManager, - /// Data fetchers: list fetcher, volume fetcher, price fetcher. - DataFetcher, /// State keeper. StateKeeper, /// Produces input for basic witness generator and uploads it as bin encoded file (blob) to GCS. /// The blob is later used as input for Basic Witness Generators. BasicWitnessInputProducer, - /// Witness Generator. The first argument is a number of jobs to process. If None, runs indefinitely. - /// The second argument is the type of the witness-generation performed - WitnessGenerator(Option, AggregationRound), /// Component for housekeeping task such as cleaning blobs from GCS, reporting metrics etc. Housekeeper, - /// Component for exposing API's to prover for providing proof generation data and accepting proofs. + /// Component for exposing APIs to prover for providing proof generation data and accepting proofs. 
ProofDataHandler, } @@ -269,53 +246,15 @@ impl FromStr for Components { Component::ContractVerificationApi, ])), "http_api" => Ok(Components(vec![Component::HttpApi])), - "http_api_translator" => Ok(Components(vec![Component::ApiTranslator])), "ws_api" => Ok(Components(vec![Component::WsApi])), "contract_verification_api" => Ok(Components(vec![Component::ContractVerificationApi])), - "tree" | "tree_new" => Ok(Components(vec![Component::Tree])), - "tree_lightweight" | "tree_lightweight_new" => { - Ok(Components(vec![Component::TreeLightweight])) - } - "tree_backup" => Ok(Components(vec![Component::TreeBackup])), + "tree" => Ok(Components(vec![Component::Tree])), "tree_api" => Ok(Components(vec![Component::TreeApi])), - "data_fetcher" => Ok(Components(vec![Component::DataFetcher])), "state_keeper" => Ok(Components(vec![Component::StateKeeper])), "housekeeper" => Ok(Components(vec![Component::Housekeeper])), "basic_witness_input_producer" => { Ok(Components(vec![Component::BasicWitnessInputProducer])) } - "witness_generator" => Ok(Components(vec![ - Component::WitnessGenerator(None, AggregationRound::BasicCircuits), - Component::WitnessGenerator(None, AggregationRound::LeafAggregation), - Component::WitnessGenerator(None, AggregationRound::NodeAggregation), - Component::WitnessGenerator(None, AggregationRound::Scheduler), - ])), - "one_shot_witness_generator" => Ok(Components(vec![ - Component::WitnessGenerator(Some(1), AggregationRound::BasicCircuits), - Component::WitnessGenerator(Some(1), AggregationRound::LeafAggregation), - Component::WitnessGenerator(Some(1), AggregationRound::NodeAggregation), - Component::WitnessGenerator(Some(1), AggregationRound::Scheduler), - ])), - "one_shot_basic_witness_generator" => { - Ok(Components(vec![Component::WitnessGenerator( - Some(1), - AggregationRound::BasicCircuits, - )])) - } - "one_shot_leaf_witness_generator" => Ok(Components(vec![Component::WitnessGenerator( - Some(1), - AggregationRound::LeafAggregation, - )])), - "one_shot_node_witness_generator" => Ok(Components(vec![Component::WitnessGenerator( - Some(1), - AggregationRound::NodeAggregation, - )])), - "one_shot_scheduler_witness_generator" => { - Ok(Components(vec![Component::WitnessGenerator( - Some(1), - AggregationRound::Scheduler, - )])) - } "eth" => Ok(Components(vec![ Component::EthWatcher, Component::EthTxAggregator, @@ -333,7 +272,6 @@ impl FromStr for Components { pub async fn initialize_components( configs: &TempConfigStore, components: Vec, - use_prometheus_push_gateway: bool, ) -> anyhow::Result<( Vec>>, watch::Sender, @@ -351,10 +289,6 @@ pub async fn initialize_components( .build() .await .context("failed to build connection_pool")?; - let prover_connection_pool = ConnectionPool::builder(postgres_config.prover_url()?, pool_size) - .build() - .await - .context("failed to build prover_connection_pool")?; let replica_connection_pool = ConnectionPool::builder(postgres_config.replica_url()?, pool_size) .set_statement_timeout(statement_timeout) @@ -399,11 +333,7 @@ pub async fn initialize_components( .prometheus_config .clone() .context("prometheus_config")?; - let prom_config = if use_prometheus_push_gateway { - PrometheusExporterConfig::push(prom_config.gateway_endpoint(), prom_config.push_interval()) - } else { - PrometheusExporterConfig::pull(prom_config.listener_port) - }; + let prom_config = PrometheusExporterConfig::pull(prom_config.listener_port); let (prometheus_health_check, prometheus_health_updater) = ReactiveHealthCheck::new("prometheus_exporter"); @@ -424,7 +354,6 @@ 
pub async fn initialize_components( if components.contains(&Component::WsApi) || components.contains(&Component::HttpApi) || components.contains(&Component::ContractVerificationApi) - || components.contains(&Component::ApiTranslator) { let api_config = configs.api_config.clone().context("api_config")?; let state_keeper_config = configs @@ -458,9 +387,9 @@ pub async fn initialize_components( let started_at = Instant::now(); tracing::info!("Initializing HTTP API"); let bounded_gas_adjuster = gas_adjuster - .get_or_init_bounded() + .get_or_init() .await - .context("gas_adjuster.get_or_init_bounded()")?; + .context("gas_adjuster.get_or_init()")?; let server_handles = run_http_api( &postgres_config, &tx_sender_config, @@ -472,7 +401,6 @@ pub async fn initialize_components( stop_receiver.clone(), bounded_gas_adjuster.clone(), state_keeper_config.save_call_traces, - components.contains(&Component::ApiTranslator), storage_caches.clone().unwrap(), ) .await @@ -498,9 +426,9 @@ pub async fn initialize_components( let started_at = Instant::now(); tracing::info!("initializing WS API"); let bounded_gas_adjuster = gas_adjuster - .get_or_init_bounded() + .get_or_init() .await - .context("gas_adjuster.get_or_init_bounded()")?; + .context("gas_adjuster.get_or_init()")?; let server_handles = run_ws_api( &postgres_config, &tx_sender_config, @@ -512,7 +440,6 @@ pub async fn initialize_components( replica_connection_pool.clone(), stop_receiver.clone(), storage_caches, - components.contains(&Component::ApiTranslator), ) .await .context("run_ws_api")?; @@ -552,9 +479,9 @@ pub async fn initialize_components( let started_at = Instant::now(); tracing::info!("initializing State Keeper"); let bounded_gas_adjuster = gas_adjuster - .get_or_init_bounded() + .get_or_init() .await - .context("gas_adjuster.get_or_init_bounded()")?; + .context("gas_adjuster.get_or_init()")?; add_state_keeper_to_task_futures( &mut task_futures, &postgres_config, @@ -595,7 +522,7 @@ pub async fn initialize_components( start_eth_watch( eth_watch_config, eth_watch_pool, - query_client.clone(), + Box::new(query_client.clone()), main_zksync_contract_address, governance, stop_receiver.clone(), @@ -615,10 +542,6 @@ pub async fn initialize_components( .build() .await .context("failed to build eth_sender_pool")?; - let eth_sender_prover_pool = ConnectionPool::singleton(postgres_config.prover_url()?) 
- .build() - .await - .context("failed to build eth_sender_prover_pool")?; let eth_sender = configs .eth_sender_config @@ -633,17 +556,15 @@ pub async fn initialize_components( eth_sender.sender.clone(), store_factory.create_store().await, ), + Arc::new(eth_client), contracts_config.validator_timelock_addr, contracts_config.l1_multicall3_addr, main_zksync_contract_address, nonce.as_u64(), ); - task_futures.push(tokio::spawn(eth_tx_aggregator_actor.run( - eth_sender_pool, - eth_sender_prover_pool, - eth_client, - stop_receiver.clone(), - ))); + task_futures.push(tokio::spawn( + eth_tx_aggregator_actor.run(eth_sender_pool, stop_receiver.clone()), + )); let elapsed = started_at.elapsed(); APP_METRICS.init_latency[&InitStage::EthTxAggregator].set(elapsed); tracing::info!("initialized ETH-TxAggregator in {elapsed:?}"); @@ -668,7 +589,7 @@ pub async fn initialize_components( .get_or_init() .await .context("gas_adjuster.get_or_init()")?, - eth_client, + Arc::new(eth_client), ); task_futures.extend([tokio::spawn( eth_tx_manager_actor.run(eth_manager_pool, stop_receiver.clone()), @@ -678,22 +599,6 @@ pub async fn initialize_components( tracing::info!("initialized ETH-TxManager in {elapsed:?}"); } - if components.contains(&Component::DataFetcher) { - let started_at = Instant::now(); - let fetcher_config = configs.fetcher_config.clone().context("fetcher_config")?; - let eth_network = configs.network_config.clone().context("network_config")?; - tracing::info!("initializing data fetchers"); - task_futures.extend(run_data_fetchers( - &fetcher_config, - eth_network.network, - connection_pool.clone(), - stop_receiver.clone(), - )); - let elapsed = started_at.elapsed(); - APP_METRICS.init_latency[&InitStage::DataFetcher].set(elapsed); - tracing::info!("initialized data fetchers in {elapsed:?}"); - } - add_trees_to_task_futures( configs, &mut task_futures, @@ -704,17 +609,6 @@ pub async fn initialize_components( ) .await .context("add_trees_to_task_futures()")?; - add_witness_generator_to_task_futures( - configs, - &mut task_futures, - &components, - &connection_pool, - &prover_connection_pool, - &store_factory, - &stop_receiver, - ) - .await - .context("add_witness_generator_to_task_futures()")?; if components.contains(&Component::BasicWitnessInputProducer) { let singleton_connection_pool = ConnectionPool::singleton(postgres_config.master_url()?) 
@@ -783,10 +677,9 @@ async fn add_state_keeper_to_task_futures, - object_store: Box, + object_store: Arc, stop_receiver: watch::Receiver, ) -> anyhow::Result<()> { - let fair_l2_gas_price = state_keeper_config.fair_l2_gas_price; let pool_builder = ConnectionPool::singleton(postgres_config.master_url()?); let state_keeper_pool = pool_builder .build() @@ -802,6 +695,11 @@ async fn add_state_keeper_to_task_futures, ) -> anyhow::Result<()> { - if components.contains(&Component::TreeBackup) { - anyhow::bail!("Tree backup mode is disabled"); + if !components.contains(&Component::Tree) { + anyhow::ensure!( + !components.contains(&Component::TreeApi), + "Merkle tree API cannot be started without a tree component" + ); + return Ok(()); } let db_config = configs.db_config.clone().context("db_config")?; @@ -871,38 +772,19 @@ async fn add_trees_to_task_futures( .contains(&Component::TreeApi) .then_some(&api_config); - let has_tree_component = components.contains(&Component::Tree); - let has_lightweight_component = components.contains(&Component::TreeLightweight); - let mode = match (has_tree_component, has_lightweight_component) { - (true, true) => anyhow::bail!( - "Cannot start a node with a Merkle tree in both full and lightweight modes. \ - Since the storage layout is mode-independent, choose either of modes and run \ - the node with it." - ), - (false, true) => MetadataCalculatorModeConfig::Lightweight, - (true, false) => match db_config.merkle_tree.mode { - MerkleTreeMode::Lightweight => MetadataCalculatorModeConfig::Lightweight, - MerkleTreeMode::Full => MetadataCalculatorModeConfig::Full { - store_factory: Some(store_factory), - }, - }, - (false, false) => { - anyhow::ensure!( - !components.contains(&Component::TreeApi), - "Merkle tree API cannot be started without a tree component" - ); - return Ok(()); - } + let object_store = match db_config.merkle_tree.mode { + MerkleTreeMode::Lightweight => None, + MerkleTreeMode::Full => Some(store_factory.create_store().await), }; run_tree( task_futures, healthchecks, &postgres_config, - &db_config, + &db_config.merkle_tree, api_config, &operation_config, - mode, + object_store, stop_receiver, ) .await @@ -914,29 +796,32 @@ async fn run_tree( task_futures: &mut Vec>>, healthchecks: &mut Vec>, postgres_config: &PostgresConfig, - db_config: &DBConfig, + merkle_tree_config: &MerkleTreeConfig, api_config: Option<&MerkleTreeApiConfig>, operation_manager: &OperationsManagerConfig, - mode: MetadataCalculatorModeConfig<'_>, + object_store: Option>, stop_receiver: watch::Receiver, ) -> anyhow::Result<()> { let started_at = Instant::now(); - let mode_str = if matches!(mode, MetadataCalculatorModeConfig::Full { .. 
}) { + let mode_str = if matches!(merkle_tree_config.mode, MerkleTreeMode::Full) { "full" } else { "lightweight" }; tracing::info!("Initializing Merkle tree in {mode_str} mode"); - let config = - MetadataCalculatorConfig::for_main_node(&db_config.merkle_tree, operation_manager, mode); - let metadata_calculator = MetadataCalculator::new(&config).await; + let config = MetadataCalculatorConfig::for_main_node(merkle_tree_config, operation_manager); + let metadata_calculator = MetadataCalculator::new(config, object_store).await; if let Some(api_config) = api_config { let address = (Ipv4Addr::UNSPECIFIED, api_config.port).into(); - let server_task = metadata_calculator - .tree_reader() - .run_api_server(address, stop_receiver.clone()); - task_futures.push(tokio::spawn(server_task)); + let tree_reader = metadata_calculator.tree_reader(); + let stop_receiver = stop_receiver.clone(); + task_futures.push(tokio::spawn(async move { + tree_reader + .await + .run_api_server(address, stop_receiver) + .await + })); } let tree_health_check = metadata_calculator.tree_health_check(); @@ -945,11 +830,7 @@ async fn run_tree( .build() .await .context("failed to build connection pool")?; - let prover_pool = ConnectionPool::singleton(postgres_config.prover_url()?) - .build() - .await - .context("failed to build prover_pool")?; - let tree_task = tokio::spawn(metadata_calculator.run(pool, prover_pool, stop_receiver)); + let tree_task = tokio::spawn(metadata_calculator.run(pool, stop_receiver)); task_futures.push(tree_task); let elapsed = started_at.elapsed(); @@ -984,101 +865,6 @@ async fn add_basic_witness_input_producer_to_task_futures( Ok(()) } -async fn add_witness_generator_to_task_futures( - configs: &TempConfigStore, - task_futures: &mut Vec>>, - components: &[Component], - connection_pool: &ConnectionPool, - prover_connection_pool: &ConnectionPool, - store_factory: &ObjectStoreFactory, - stop_receiver: &watch::Receiver, -) -> anyhow::Result<()> { - // We don't want witness generator to run on local nodes, as it's CPU heavy. 
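An aside on shutdown: every task spawned from this file, including the witness generators being deleted here, receives a clone of the same `watch` channel and polls it, as seen in the `GasAdjuster::run` and `MainNodeFeeParamsFetcher::run` hunks above. That contract as a minimal sketch; the function name and sleep interval are illustrative:

use std::time::Duration;

use tokio::sync::watch;

/// Skeleton of the cooperative-shutdown loop used by long-running tasks in
/// this file.
async fn run_until_stopped(stop_receiver: watch::Receiver<bool>) -> anyhow::Result<()> {
    loop {
        if *stop_receiver.borrow() {
            tracing::info!("Stop signal received, shutting down");
            break;
        }
        // ... one unit of work goes here ...
        tokio::time::sleep(Duration::from_secs(5)).await;
    }
    Ok(())
}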
- if std::env::var("ZKSYNC_LOCAL_SETUP") == Ok("true".to_owned()) { - return Ok(()); - } - - let generator_params = components.iter().filter_map(|component| { - if let Component::WitnessGenerator(batch_size, component_type) = component { - Some((*batch_size, *component_type)) - } else { - None - } - }); - - for (batch_size, component_type) in generator_params { - let started_at = Instant::now(); - tracing::info!( - "initializing the {component_type:?} witness generator, batch size: {batch_size:?}" - ); - - let vk_commitments = get_cached_commitments(); - let protocol_versions = prover_connection_pool - .access_storage() - .await - .unwrap() - .protocol_versions_dal() - .protocol_version_for(&vk_commitments) - .await; - let config = configs - .witness_generator_config - .clone() - .context("witness_generator_config")?; - let task = match component_type { - AggregationRound::BasicCircuits => { - let witness_generator = BasicWitnessGenerator::new( - config, - store_factory, - protocol_versions.clone(), - connection_pool.clone(), - prover_connection_pool.clone(), - ) - .await; - tokio::spawn(witness_generator.run(stop_receiver.clone(), batch_size)) - } - AggregationRound::LeafAggregation => { - let witness_generator = LeafAggregationWitnessGenerator::new( - config, - store_factory, - protocol_versions.clone(), - connection_pool.clone(), - prover_connection_pool.clone(), - ) - .await; - tokio::spawn(witness_generator.run(stop_receiver.clone(), batch_size)) - } - AggregationRound::NodeAggregation => { - let witness_generator = NodeAggregationWitnessGenerator::new( - config, - store_factory, - protocol_versions.clone(), - connection_pool.clone(), - prover_connection_pool.clone(), - ) - .await; - tokio::spawn(witness_generator.run(stop_receiver.clone(), batch_size)) - } - AggregationRound::Scheduler => { - let witness_generator = SchedulerWitnessGenerator::new( - config, - store_factory, - protocol_versions.clone(), - connection_pool.clone(), - prover_connection_pool.clone(), - ) - .await; - tokio::spawn(witness_generator.run(stop_receiver.clone(), batch_size)) - } - }; - task_futures.push(task); - - let elapsed = started_at.elapsed(); - APP_METRICS.init_latency[&InitStage::WitnessGenerator(component_type)].set(elapsed); - tracing::info!("initialized {component_type:?} witness generator in {elapsed:?}"); - } - Ok(()) -} - async fn add_house_keeper_to_task_futures( configs: &TempConfigStore, task_futures: &mut Vec>>, @@ -1088,13 +874,16 @@ async fn add_house_keeper_to_task_futures( .clone() .context("house_keeper_config")?; let postgres_config = configs.postgres_config.clone().context("postgres_config")?; - let connection_pool = ConnectionPool::singleton(postgres_config.replica_url()?) 
- .build() - .await - .context("failed to build a connection pool")?; + let connection_pool = ConnectionPool::builder( + postgres_config.replica_url()?, + postgres_config.max_connections()?, + ) + .build() + .await + .context("failed to build a connection pool")?; let l1_batch_metrics_reporter = L1BatchMetricsReporter::new( house_keeper_config.l1_batch_metrics_reporting_interval_ms, - connection_pool, + connection_pool.clone(), ); let prover_connection_pool = ConnectionPool::builder( @@ -1104,43 +893,7 @@ async fn add_house_keeper_to_task_futures( .build() .await .context("failed to build a prover_connection_pool")?; - let prover_group_config = configs - .prover_group_config - .clone() - .context("prover_group_config")?; - let prover_configs = configs.prover_configs.clone().context("prover_configs")?; - let gpu_prover_queue = GpuProverQueueMonitor::new( - prover_group_config.synthesizer_per_gpu, - house_keeper_config.gpu_prover_queue_reporting_interval_ms, - prover_connection_pool.clone(), - ); - let config = prover_configs.non_gpu.clone(); - let prover_job_retry_manager = ProverJobRetryManager::new( - config.max_attempts, - config.proof_generation_timeout(), - house_keeper_config.prover_job_retrying_interval_ms, - prover_connection_pool.clone(), - ); - let prover_stats_reporter = ProverStatsReporter::new( - house_keeper_config.prover_stats_reporting_interval_ms, - prover_connection_pool.clone(), - prover_group_config.clone(), - ); - let waiting_to_queued_witness_job_mover = WaitingToQueuedWitnessJobMover::new( - house_keeper_config.witness_job_moving_interval_ms, - prover_connection_pool.clone(), - ); - let witness_generator_stats_reporter = WitnessGeneratorStatsReporter::new( - house_keeper_config.witness_generator_stats_reporting_interval_ms, - prover_connection_pool.clone(), - ); - - task_futures.push(tokio::spawn(witness_generator_stats_reporter.run())); - task_futures.push(tokio::spawn(gpu_prover_queue.run())); task_futures.push(tokio::spawn(l1_batch_metrics_reporter.run())); - task_futures.push(tokio::spawn(prover_stats_reporter.run())); - task_futures.push(tokio::spawn(waiting_to_queued_witness_job_mover.run())); - task_futures.push(tokio::spawn(prover_job_retry_manager.run())); // All FRI Prover related components are configured below. let fri_prover_config = configs @@ -1192,6 +945,7 @@ async fn add_house_keeper_to_task_futures( let fri_prover_stats_reporter = FriProverStatsReporter::new( house_keeper_config.fri_prover_stats_reporting_interval_ms, prover_connection_pool.clone(), + connection_pool.clone(), fri_prover_group_config, ); task_futures.push(tokio::spawn(fri_prover_stats_reporter.run())); @@ -1242,30 +996,31 @@ fn build_storage_caches( Ok(storage_caches) } -async fn build_tx_sender( +async fn build_tx_sender( tx_sender_config: &TxSenderConfig, web3_json_config: &Web3JsonRpcConfig, state_keeper_config: &StateKeeperConfig, replica_pool: ConnectionPool, master_pool: ConnectionPool, - l1_gas_price_provider: Arc, + l1_gas_price_provider: Arc, storage_caches: PostgresStorageCaches, -) -> (TxSender, VmConcurrencyBarrier) { - let mut tx_sender_builder = TxSenderBuilder::new(tx_sender_config.clone(), replica_pool) +) -> (TxSender, VmConcurrencyBarrier) { + let sequencer_sealer = SequencerSealer::new(state_keeper_config.clone()); + let tx_sender_builder = TxSenderBuilder::new(tx_sender_config.clone(), replica_pool) .with_main_connection_pool(master_pool) - .with_state_keeper_config(state_keeper_config.clone()); - - // Add rate limiter if enabled. 
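The builder change above inverts a dependency: instead of carrying a raw `StateKeeperConfig`, the `TxSender` now receives its sealing policy as an injected trait object via `with_sealer`. A hedged sketch of that shape; apart from `SequencerSealer` and `with_sealer`, every name here is invented for illustration:

use std::sync::Arc;

/// Illustrative seal-policy abstraction.
trait SealPolicy: Send + Sync {
    fn should_seal(&self, pending_txs: usize) -> bool;
}

struct SequencerSealer {
    max_txs_per_batch: usize,
}

impl SealPolicy for SequencerSealer {
    fn should_seal(&self, pending_txs: usize) -> bool {
        pending_txs >= self.max_txs_per_batch
    }
}

struct TxSenderBuilder {
    sealer: Option<Arc<dyn SealPolicy>>,
}

impl TxSenderBuilder {
    fn with_sealer(mut self, sealer: Arc<dyn SealPolicy>) -> Self {
        self.sealer = Some(sealer);
        self
    }
}

The payoff is that alternative policies (for example, a no-op sealer on nodes that never seal batches) can be injected without fabricating a full state-keeper config.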
- if let Some(transactions_per_sec_limit) = web3_json_config.transactions_per_sec_limit { - tx_sender_builder = tx_sender_builder.with_rate_limiter(transactions_per_sec_limit); - }; + .with_sealer(Arc::new(sequencer_sealer)); let max_concurrency = web3_json_config.vm_concurrency_limit(); let (vm_concurrency_limiter, vm_barrier) = VmConcurrencyLimiter::new(max_concurrency); + let batch_fee_input_provider = MainNodeFeeInputProvider::new( + l1_gas_price_provider, + FeeModelConfig::from_state_keeper_config(state_keeper_config), + ); + let tx_sender = tx_sender_builder .build( - l1_gas_price_provider, + Arc::new(batch_fee_input_provider), Arc::new(vm_concurrency_limiter), ApiContracts::load_from_disk(), storage_caches, @@ -1275,7 +1030,7 @@ async fn build_tx_sender( } #[allow(clippy::too_many_arguments)] -async fn run_http_api( +async fn run_http_api( postgres_config: &PostgresConfig, tx_sender_config: &TxSenderConfig, state_keeper_config: &StateKeeperConfig, @@ -1286,7 +1041,6 @@ async fn run_http_api( stop_receiver: watch::Receiver, gas_adjuster: Arc, with_debug_namespace: bool, - with_logs_request_translator_enabled: bool, storage_caches: PostgresStorageCaches, ) -> anyhow::Result { let (tx_sender, vm_barrier) = build_tx_sender( @@ -1300,30 +1054,27 @@ async fn run_http_api( ) .await; - let namespaces = if with_debug_namespace { - Namespace::ALL.to_vec() - } else { - Namespace::NON_DEBUG.to_vec() - }; + let mut namespaces = Namespace::DEFAULT.to_vec(); + if with_debug_namespace { + namespaces.push(Namespace::Debug) + } + namespaces.push(Namespace::Snapshots); + let last_miniblock_pool = ConnectionPool::singleton(postgres_config.replica_url()?) .build() .await .context("failed to build last_miniblock_pool")?; - let mut api_builder = + let api_builder = web3::ApiBuilder::jsonrpsee_backend(internal_api.clone(), replica_connection_pool) .http(api_config.web3_json_rpc.http_port) .with_last_miniblock_pool(last_miniblock_pool) .with_filter_limit(api_config.web3_json_rpc.filters_limit()) - .with_threads(api_config.web3_json_rpc.http_server_threads()) .with_tree_api(api_config.web3_json_rpc.tree_api_url()) .with_batch_request_size_limit(api_config.web3_json_rpc.max_batch_request_size()) .with_response_body_size_limit(api_config.web3_json_rpc.max_response_body_size()) .with_tx_sender(tx_sender, vm_barrier) .enable_api_namespaces(namespaces); - if with_logs_request_translator_enabled { - api_builder = api_builder.enable_request_translator(); - } api_builder.build(stop_receiver).await } @@ -1339,7 +1090,6 @@ async fn run_ws_api( replica_connection_pool: ConnectionPool, stop_receiver: watch::Receiver, storage_caches: PostgresStorageCaches, - with_logs_request_translator_enabled: bool, ) -> anyhow::Result { let (tx_sender, vm_barrier) = build_tx_sender( tx_sender_config, @@ -1356,8 +1106,11 @@ async fn run_ws_api( .await .context("failed to build last_miniblock_pool")?; - let mut api_builder = - web3::ApiBuilder::jsonrpc_backend(internal_api.clone(), replica_connection_pool) + let mut namespaces = Namespace::DEFAULT.to_vec(); + namespaces.push(Namespace::Snapshots); + + let api_builder = + web3::ApiBuilder::jsonrpsee_backend(internal_api.clone(), replica_connection_pool) .ws(api_config.web3_json_rpc.ws_port) .with_last_miniblock_pool(last_miniblock_pool) .with_filter_limit(api_config.web3_json_rpc.filters_limit()) @@ -1370,14 +1123,10 @@ async fn run_ws_api( .websocket_requests_per_minute_limit(), ) .with_polling_interval(api_config.web3_json_rpc.pubsub_interval()) - 
.with_threads(api_config.web3_json_rpc.ws_server_threads()) .with_tree_api(api_config.web3_json_rpc.tree_api_url()) .with_tx_sender(tx_sender, vm_barrier) - .enable_api_namespaces(Namespace::NON_DEBUG.to_vec()); + .enable_api_namespaces(namespaces); - if with_logs_request_translator_enabled { - api_builder = api_builder.enable_request_translator(); - } api_builder.build(stop_receiver.clone()).await } @@ -1388,12 +1137,10 @@ async fn circuit_breakers_for_components( ) -> anyhow::Result>> { let mut circuit_breakers: Vec> = Vec::new(); - if components.iter().any(|c| { - matches!( - c, - Component::EthTxAggregator | Component::EthTxManager | Component::StateKeeper - ) - }) { + if components + .iter() + .any(|c| matches!(c, Component::EthTxAggregator | Component::EthTxManager)) + { let pool = ConnectionPool::singleton(postgres_config.replica_url()?) .build() .await @@ -1404,10 +1151,7 @@ async fn circuit_breakers_for_components( if components.iter().any(|c| { matches!( c, - Component::HttpApi - | Component::WsApi - | Component::ApiTranslator - | Component::ContractVerificationApi + Component::HttpApi | Component::WsApi | Component::ContractVerificationApi ) }) { let pool = ConnectionPool::singleton(postgres_config.replica_url()?) diff --git a/core/lib/zksync_core/src/metadata_calculator/helpers.rs b/core/lib/zksync_core/src/metadata_calculator/helpers.rs index ffd87b92d16..563e643d7e1 100644 --- a/core/lib/zksync_core/src/metadata_calculator/helpers.rs +++ b/core/lib/zksync_core/src/metadata_calculator/helpers.rs @@ -1,9 +1,5 @@ //! Various helpers for the metadata calculator. -use serde::{Deserialize, Serialize}; -#[cfg(test)] -use tokio::sync::mpsc; - use std::{ collections::BTreeMap, future::Future, @@ -11,15 +7,19 @@ use std::{ time::Duration, }; +use serde::{Deserialize, Serialize}; +#[cfg(test)] +use tokio::sync::mpsc; use zksync_config::configs::database::MerkleTreeMode; use zksync_dal::StorageProcessor; use zksync_health_check::{Health, HealthStatus}; use zksync_merkle_tree::{ domain::{TreeMetadata, ZkSyncTree, ZkSyncTreeReader}, - Key, MerkleTreeColumnFamily, NoVersionError, TreeEntryWithProof, + recovery::MerkleTreeRecovery, + Database, Key, NoVersionError, RocksDBWrapper, TreeEntry, TreeEntryWithProof, TreeInstruction, }; use zksync_storage::{RocksDB, RocksDBOptions, StalledWritesRetries}; -use zksync_types::{block::L1BatchHeader, L1BatchNumber, StorageLog, H256}; +use zksync_types::{block::L1BatchHeader, L1BatchNumber, StorageKey, H256}; use super::metrics::{LoadChangesStage, TreeUpdateStage, METRICS}; @@ -38,11 +38,64 @@ impl From for Health { } } +/// Creates a RocksDB wrapper with the specified params. 
+pub(super) async fn create_db( + path: PathBuf, + block_cache_capacity: usize, + memtable_capacity: usize, + stalled_writes_timeout: Duration, + multi_get_chunk_size: usize, +) -> RocksDBWrapper { + tokio::task::spawn_blocking(move || { + create_db_sync( + &path, + block_cache_capacity, + memtable_capacity, + stalled_writes_timeout, + multi_get_chunk_size, + ) + }) + .await + .unwrap() +} + +fn create_db_sync( + path: &Path, + block_cache_capacity: usize, + memtable_capacity: usize, + stalled_writes_timeout: Duration, + multi_get_chunk_size: usize, +) -> RocksDBWrapper { + tracing::info!( + "Initializing Merkle tree database at `{path}` with {multi_get_chunk_size} multi-get chunk size, \ + {block_cache_capacity}B block cache, {memtable_capacity}B memtable capacity, \ + {stalled_writes_timeout:?} stalled writes timeout", + path = path.display() + ); + + let mut db = RocksDB::with_options( + path, + RocksDBOptions { + block_cache_capacity: Some(block_cache_capacity), + large_memtable_capacity: Some(memtable_capacity), + stalled_writes_retries: StalledWritesRetries::new(stalled_writes_timeout), + }, + ); + if cfg!(test) { + // We need sync writes for the unit tests to execute reliably. With the default config, + // some writes to RocksDB may occur, but not be visible to the test code. + db = db.with_sync_writes(); + } + let mut db = RocksDBWrapper::from(db); + db.set_multi_get_chunk_size(multi_get_chunk_size); + db +} + /// Wrapper around the "main" tree implementation used by [`MetadataCalculator`]. /// /// Async methods provided by this wrapper are not cancel-safe! This is probably not an issue; /// `ZkSyncTree` is only indirectly available via `MetadataCalculator::run()` entrypoint -/// which consumes `self`. That is, if `MetadataCalculator::run()` is cancelled (which we don't currently do, +/// which consumes `self`. That is, if `MetadataCalculator::run()` is canceled (which we don't currently do, /// at least not explicitly), all `MetadataCalculator` data including `ZkSyncTree` is discarded. /// In the unlikely case you get a "`ZkSyncTree` is in inconsistent state" panic, /// cancellation is most probably the reason. 
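The `create_db` wrapper above and the `AsyncTree` methods below share one pattern: RocksDB and `ZkSyncTree` operations are synchronous, so they are shipped to Tokio's blocking thread pool via `spawn_blocking`, with the tree temporarily moved out of `self` for the duration of the call. A minimal, self-contained sketch of that round-trip, using invented stand-ins (`BlockingTree`, `AsyncWrapper`) rather than the real zksync types:

```rust
// Illustrative sketch only; `BlockingTree` and `AsyncWrapper` are invented
// stand-ins for `ZkSyncTree` / `AsyncTree` to show the ownership round-trip.
use tokio::task;

struct BlockingTree {
    entries: Vec<u64>,
}

impl BlockingTree {
    fn insert(&mut self, value: u64) {
        // Stand-in for synchronous RocksDB work.
        self.entries.push(value);
    }
}

struct AsyncWrapper {
    // `None` only transiently, while the tree is moved into `spawn_blocking`.
    inner: Option<BlockingTree>,
}

impl AsyncWrapper {
    const INCONSISTENT_MSG: &'static str =
        "wrapper is in an inconsistent state: an async method was cancelled mid-flight";

    async fn insert(&mut self, value: u64) {
        // Move the tree out of `self`, do the blocking work on the blocking
        // thread pool, then move it back in.
        let mut tree = self.inner.take().expect(Self::INCONSISTENT_MSG);
        let tree = task::spawn_blocking(move || {
            tree.insert(value);
            tree
        })
        .await
        .unwrap();
        self.inner = Some(tree);
    }
}

#[tokio::main]
async fn main() {
    let mut wrapper = AsyncWrapper {
        inner: Some(BlockingTree { entries: vec![] }),
    };
    wrapper.insert(42).await;
    assert_eq!(wrapper.inner.as_ref().unwrap().entries, [42]);
}
```

If the future is dropped between `take()` and the reassignment, `inner` stays `None`; that is exactly why the real wrappers document their async methods as not cancel-safe and panic with `INCONSISTENT_MSG` on the next call.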
@@ -54,68 +107,19 @@ pub(super) struct AsyncTree { impl AsyncTree { const INCONSISTENT_MSG: &'static str = - "`ZkSyncTree` is in inconsistent state, which could occur after one of its async methods was cancelled"; + "`AsyncTree` is in inconsistent state, which could occur after one of its async methods was cancelled"; - pub async fn new( - db_path: PathBuf, - mode: MerkleTreeMode, - multi_get_chunk_size: usize, - block_cache_capacity: usize, - memtable_capacity: usize, - stalled_writes_timeout: Duration, - ) -> Self { - tracing::info!( - "Initializing Merkle tree at `{db_path}` with {multi_get_chunk_size} multi-get chunk size, \ - {block_cache_capacity}B block cache, {memtable_capacity}B memtable capacity, \ - {stalled_writes_timeout:?} stalled writes timeout", - db_path = db_path.display() - ); - - let mut tree = tokio::task::spawn_blocking(move || { - let db = Self::create_db( - &db_path, - block_cache_capacity, - memtable_capacity, - stalled_writes_timeout, - ); - match mode { - MerkleTreeMode::Full => ZkSyncTree::new(db), - MerkleTreeMode::Lightweight => ZkSyncTree::new_lightweight(db), - } - }) - .await - .unwrap(); - - tree.set_multi_get_chunk_size(multi_get_chunk_size); + pub fn new(db: RocksDBWrapper, mode: MerkleTreeMode) -> Self { + let tree = match mode { + MerkleTreeMode::Full => ZkSyncTree::new(db), + MerkleTreeMode::Lightweight => ZkSyncTree::new_lightweight(db), + }; Self { inner: Some(tree), mode, } } - fn create_db( - path: &Path, - block_cache_capacity: usize, - memtable_capacity: usize, - stalled_writes_timeout: Duration, - ) -> RocksDB { - let db = RocksDB::with_options( - path, - RocksDBOptions { - block_cache_capacity: Some(block_cache_capacity), - large_memtable_capacity: Some(memtable_capacity), - stalled_writes_retries: StalledWritesRetries::new(stalled_writes_timeout), - }, - ); - if cfg!(test) { - // We need sync writes for the unit tests to execute reliably. With the default config, - // some writes to RocksDB may occur, but not be visible to the test code. - db.with_sync_writes() - } else { - db - } - } - fn as_ref(&self) -> &ZkSyncTree { self.inner.as_ref().expect(Self::INCONSISTENT_MSG) } @@ -147,7 +151,10 @@ impl AsyncTree { self.as_ref().root_hash() } - pub async fn process_l1_batch(&mut self, storage_logs: Vec) -> TreeMetadata { + pub async fn process_l1_batch( + &mut self, + storage_logs: Vec>, + ) -> TreeMetadata { let mut tree = self.inner.take().expect(Self::INCONSISTENT_MSG); let (tree, metadata) = tokio::task::spawn_blocking(move || { let metadata = tree.process_l1_batch(&storage_logs); @@ -207,6 +214,104 @@ impl AsyncTreeReader { } } +/// Async wrapper for [`MerkleTreeRecovery`]. +#[derive(Debug, Default)] +pub(super) struct AsyncTreeRecovery { + inner: Option>, + mode: MerkleTreeMode, +} + +impl AsyncTreeRecovery { + const INCONSISTENT_MSG: &'static str = + "`AsyncTreeRecovery` is in inconsistent state, which could occur after one of its async methods was cancelled"; + + pub fn new(db: RocksDBWrapper, recovered_version: u64, mode: MerkleTreeMode) -> Self { + Self { + inner: Some(MerkleTreeRecovery::new(db, recovered_version)), + mode, + } + } + + pub fn recovered_version(&self) -> u64 { + self.inner + .as_ref() + .expect(Self::INCONSISTENT_MSG) + .recovered_version() + } + + /// Returns tree entries for the specified keys. 
+ pub async fn entries(&mut self, keys: Vec) -> Vec { + let tree = self.inner.take().expect(Self::INCONSISTENT_MSG); + let (entry, tree) = tokio::task::spawn_blocking(move || (tree.entries(&keys), tree)) + .await + .unwrap(); + self.inner = Some(tree); + entry + } + + /// Returns the current hash of the tree. + pub async fn root_hash(&mut self) -> H256 { + let tree = self.inner.take().expect(Self::INCONSISTENT_MSG); + let (root_hash, tree) = tokio::task::spawn_blocking(move || (tree.root_hash(), tree)) + .await + .unwrap(); + self.inner = Some(tree); + root_hash + } + + /// Extends the tree with a chunk of recovery entries. + pub async fn extend(&mut self, entries: Vec) { + let mut tree = self.inner.take().expect(Self::INCONSISTENT_MSG); + let tree = tokio::task::spawn_blocking(move || { + tree.extend_random(entries); + tree + }) + .await + .unwrap(); + + self.inner = Some(tree); + } + + pub async fn finalize(self) -> AsyncTree { + let tree = self.inner.expect(Self::INCONSISTENT_MSG); + let db = tokio::task::spawn_blocking(|| tree.finalize()) + .await + .unwrap(); + AsyncTree::new(db, self.mode) + } +} + +/// Tree at any stage of its life cycle. +#[derive(Debug)] +pub(super) enum GenericAsyncTree { + /// Uninitialized tree. + Empty { + db: RocksDBWrapper, + mode: MerkleTreeMode, + }, + /// The tree during recovery. + Recovering(AsyncTreeRecovery), + /// Tree that is fully recovered and can operate normally. + Ready(AsyncTree), +} + +impl GenericAsyncTree { + pub async fn new(db: RocksDBWrapper, mode: MerkleTreeMode) -> Self { + tokio::task::spawn_blocking(move || { + let Some(manifest) = db.manifest() else { + return Self::Empty { db, mode }; + }; + if let Some(version) = manifest.recovered_version() { + Self::Recovering(AsyncTreeRecovery::new(db, version, mode)) + } else { + Self::Ready(AsyncTree::new(db, mode)) + } + }) + .await + .unwrap() + } +} + /// Component implementing the delay policy in [`MetadataCalculator`] when there are no /// L1 batches to seal. #[derive(Debug, Clone)] @@ -214,7 +319,7 @@ pub(super) struct Delayer { delay_interval: Duration, // Notifies the tests about the next L1 batch number and tree root hash when the calculator // runs out of L1 batches to process. (Since RocksDB is exclusive, we cannot just create - // another instance to check these params on the test side without stopping the calc.) + // another instance to check these params on the test side without stopping the calculation.) 
#[cfg(test)] pub delay_notifier: mpsc::UnboundedSender<(L1BatchNumber, H256)>, } @@ -228,6 +333,10 @@ impl Delayer { } } + pub fn delay_interval(&self) -> Duration { + self.delay_interval + } + #[cfg_attr(not(test), allow(unused))] // `tree` is only used in test mode pub fn wait(&self, tree: &AsyncTree) -> impl Future { #[cfg(test)] @@ -242,7 +351,7 @@ impl Delayer { #[cfg_attr(test, derive(PartialEq))] pub(crate) struct L1BatchWithLogs { pub header: L1BatchHeader, - pub storage_logs: Vec, + pub storage_logs: Vec>, } impl L1BatchWithLogs { @@ -276,15 +385,22 @@ impl L1BatchWithLogs { .await; touched_slots_latency.observe_with_count(touched_slots.len()); + let leaf_indices_latency = METRICS.start_load_stage(LoadChangesStage::LoadLeafIndices); + let hashed_keys_for_writes: Vec<_> = + touched_slots.keys().map(StorageKey::hashed_key).collect(); + let l1_batches_for_initial_writes = storage + .storage_logs_dal() + .get_l1_batches_and_indices_for_initial_writes(&hashed_keys_for_writes) + .await; + leaf_indices_latency.observe_with_count(hashed_keys_for_writes.len()); + let mut storage_logs = BTreeMap::new(); for storage_key in protective_reads { touched_slots.remove(&storage_key); // ^ As per deduplication rules, all keys in `protective_reads` haven't *really* changed // in the considered L1 batch. Thus, we can remove them from `touched_slots` in order to simplify // their further processing. - - let log = StorageLog::new_read_log(storage_key, H256::zero()); - // ^ The tree doesn't use the read value, so we set it to zero. + let log = TreeInstruction::Read(storage_key); storage_logs.insert(storage_key, log); } tracing::debug!( @@ -292,45 +408,17 @@ impl L1BatchWithLogs { touched_slots.len() ); - // We don't want to update the tree with zero values which were never written to per storage log - // deduplication rules. If we write such values to the tree, it'd result in bogus tree hashes because - // new (bogus) leaf indices would be allocated for them. To filter out those values, it's sufficient - // to check when a `storage_key` was first written per `initial_writes` table. If this never occurred - // or occurred after the considered `l1_batch_number`, this means that the write must be ignored. - // - // Note that this approach doesn't filter out no-op writes of the same value, but this is fine; - // since no new leaf indices are allocated in the tree for them, such writes are no-op on the tree side as well. - let hashed_keys_for_zero_values: Vec<_> = touched_slots - .iter() - .filter(|(_, value)| { - // Only zero values are worth checking for initial writes; non-zero values are always - // written per deduplication rules. 
- value.is_zero() - }) - .map(|(key, _)| key.hashed_key()) - .collect(); - METRICS - .load_changes_zero_values - .observe(hashed_keys_for_zero_values.len()); - - let latency = METRICS.start_load_stage(LoadChangesStage::LoadInitialWritesForZeroValues); - let l1_batches_for_initial_writes = storage - .storage_logs_dal() - .get_l1_batches_and_indices_for_initial_writes(&hashed_keys_for_zero_values) - .await; - latency.observe_with_count(hashed_keys_for_zero_values.len()); - for (storage_key, value) in touched_slots { - let write_matters = if value.is_zero() { - let initial_write_batch_for_key = - l1_batches_for_initial_writes.get(&storage_key.hashed_key()); - initial_write_batch_for_key.map_or(false, |&(number, _)| number <= l1_batch_number) - } else { - true - }; - - if write_matters { - storage_logs.insert(storage_key, StorageLog::new_write_log(storage_key, value)); + if let Some(&(initial_write_batch_for_key, leaf_index)) = + l1_batches_for_initial_writes.get(&storage_key.hashed_key()) + { + // Filter out logs that correspond to deduplicated writes. + if initial_write_batch_for_key <= l1_batch_number { + storage_logs.insert( + storage_key, + TreeInstruction::write(storage_key, leaf_index, value), + ); + } } } @@ -345,9 +433,8 @@ impl L1BatchWithLogs { #[cfg(test)] mod tests { use tempfile::TempDir; - use zksync_dal::ConnectionPool; - use zksync_types::{proofs::PrepareBasicCircuitsJob, L2ChainId, StorageKey, StorageLogKind}; + use zksync_types::{proofs::PrepareBasicCircuitsJob, L2ChainId, StorageKey, StorageLog}; use super::*; use crate::{ @@ -386,6 +473,10 @@ mod tests { .storage_logs_dal() .get_previous_storage_values(&hashed_keys, l1_batch_number) .await; + let l1_batches_for_initial_writes = storage + .storage_logs_dal() + .get_l1_batches_and_indices_for_initial_writes(&hashed_keys) + .await; for storage_key in protective_reads { let previous_value = previous_values[&storage_key.hashed_key()].unwrap_or_default(); @@ -397,16 +488,17 @@ mod tests { ); } - storage_logs.insert( - storage_key, - StorageLog::new_read_log(storage_key, previous_value), - ); + storage_logs.insert(storage_key, TreeInstruction::Read(storage_key)); } for (storage_key, value) in touched_slots { let previous_value = previous_values[&storage_key.hashed_key()].unwrap_or_default(); if previous_value != value { - storage_logs.insert(storage_key, StorageLog::new_write_log(storage_key, value)); + let (_, leaf_index) = l1_batches_for_initial_writes[&storage_key.hashed_key()]; + storage_logs.insert( + storage_key, + TreeInstruction::write(storage_key, leaf_index, value), + ); } } @@ -467,15 +559,15 @@ mod tests { } async fn create_tree(temp_dir: &TempDir) -> AsyncTree { - AsyncTree::new( + let db = create_db( temp_dir.path().to_owned(), - MerkleTreeMode::Full, - 500, 0, 16 << 20, // 16 MiB, Duration::ZERO, // writes should never be stalled in tests + 500, ) - .await + .await; + AsyncTree::new(db, MerkleTreeMode::Full) } async fn assert_log_equivalence( @@ -608,7 +700,7 @@ mod tests { let read_logs_count = l1_batch_with_logs .storage_logs .iter() - .filter(|log| log.kind == StorageLogKind::Read) + .filter(|log| matches!(log, TreeInstruction::Read(_))) .count(); assert_eq!(read_logs_count, 7); diff --git a/core/lib/zksync_core/src/metadata_calculator/metrics.rs b/core/lib/zksync_core/src/metadata_calculator/metrics.rs index f2bedf47229..87ab8fb377f 100644 --- a/core/lib/zksync_core/src/metadata_calculator/metrics.rs +++ b/core/lib/zksync_core/src/metadata_calculator/metrics.rs @@ -1,11 +1,11 @@ //! 
Metrics for `MetadataCalculator`. +use std::time::{Duration, Instant}; + use vise::{ Buckets, EncodeLabelSet, EncodeLabelValue, Family, Gauge, Histogram, LatencyObserver, Metrics, + Unit, }; - -use std::time::{Duration, Instant}; - use zksync_types::block::L1BatchHeader; use zksync_utils::time::seconds_since_epoch; @@ -35,7 +35,7 @@ pub(super) enum LoadChangesStage { LoadL1BatchHeader, LoadProtectiveReads, LoadTouchedSlots, - LoadInitialWritesForZeroValues, + LoadLeafIndices, } /// Latency metric for a certain stage of the tree update. @@ -175,3 +175,38 @@ impl MetadataCalculator { APP_METRICS.block_latency[&BlockStage::Tree].observe(Duration::from_secs(latency)); } } + +#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, EncodeLabelValue, EncodeLabelSet)] +#[metrics(label = "stage", rename_all = "snake_case")] +pub(super) enum RecoveryStage { + LoadChunkStarts, + Finalize, +} + +#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, EncodeLabelValue, EncodeLabelSet)] +#[metrics(label = "stage", rename_all = "snake_case")] +pub(super) enum ChunkRecoveryStage { + AcquireConnection, + LoadEntries, + LockTree, + ExtendTree, +} + +/// Metrics for Merkle tree recovery driven by the metadata calculator. +#[derive(Debug, Metrics)] +#[metrics(prefix = "server_metadata_calculator_recovery")] +pub(super) struct MetadataCalculatorRecoveryMetrics { + /// Number of chunks recovered. + pub recovered_chunk_count: Gauge, + /// Latency of a tree recovery stage (not related to the recovery of a particular chunk; + /// those metrics are tracked in the `chunk_latency` histogram). + #[metrics(buckets = Buckets::LATENCIES, unit = Unit::Seconds)] + pub latency: Family>, + /// Latency of a chunk recovery stage. + #[metrics(buckets = Buckets::LATENCIES, unit = Unit::Seconds)] + pub chunk_latency: Family>, +} + +#[vise::register] +pub(super) static RECOVERY_METRICS: vise::Global = + vise::Global::new(); diff --git a/core/lib/zksync_core/src/metadata_calculator/mod.rs b/core/lib/zksync_core/src/metadata_calculator/mod.rs index 7289347fec0..e5a93d7d3de 100644 --- a/core/lib/zksync_core/src/metadata_calculator/mod.rs +++ b/core/lib/zksync_core/src/metadata_calculator/mod.rs @@ -1,10 +1,13 @@ //! This module applies updates to the ZkSyncTree, calculates metadata for sealed blocks, and //! stores them in the DB. -use tokio::sync::watch; - -use std::time::Duration; +use std::{ + future::{self, Future}, + sync::Arc, + time::Duration, +}; +use tokio::sync::watch; use zksync_config::configs::{ chain::OperationsManagerConfig, database::{MerkleTreeConfig, MerkleTreeMode}, @@ -12,57 +15,35 @@ use zksync_config::configs::{ use zksync_dal::{ConnectionPool, StorageProcessor}; use zksync_health_check::{HealthUpdater, ReactiveHealthCheck}; use zksync_merkle_tree::domain::TreeMetadata; -use zksync_object_store::ObjectStoreFactory; +use zksync_object_store::ObjectStore; use zksync_types::{ block::L1BatchHeader, commitment::{L1BatchCommitment, L1BatchMetadata}, - H256, + ProtocolVersionId, H256, }; -mod helpers; -mod metrics; -#[cfg(test)] -pub(crate) mod tests; -mod updater; - pub(crate) use self::helpers::{AsyncTreeReader, L1BatchWithLogs, MerkleTreeInfo}; use self::{ - helpers::Delayer, + helpers::{create_db, Delayer, GenericAsyncTree}, metrics::{TreeUpdateStage, METRICS}, updater::TreeUpdater, }; use crate::gas_tracker::commit_gas_count_for_l1_batch; -/// Part of [`MetadataCalculator`] related to the operation mode of the Merkle tree. 
-#[derive(Debug, Clone, Copy)] -pub enum MetadataCalculatorModeConfig<'a> { - /// In this mode, `MetadataCalculator` computes Merkle tree root hashes and some auxiliary information - /// for L1 batches, but not witness inputs. - Lightweight, - /// In this mode, `MetadataCalculator` will compute commitments and witness inputs for all storage operations - /// and optionally put witness inputs into the object store as provided by `store_factory` (e.g., GCS). - Full { - store_factory: Option<&'a ObjectStoreFactory>, - }, -} - -impl MetadataCalculatorModeConfig<'_> { - fn to_mode(self) -> MerkleTreeMode { - if matches!(self, Self::Full { .. }) { - MerkleTreeMode::Full - } else { - MerkleTreeMode::Lightweight - } - } -} +mod helpers; +mod metrics; +mod recovery; +#[cfg(test)] +pub(crate) mod tests; +mod updater; /// Configuration of [`MetadataCalculator`]. #[derive(Debug)] -pub struct MetadataCalculatorConfig<'a> { +pub struct MetadataCalculatorConfig { /// Filesystem path to the RocksDB instance that stores the tree. - pub db_path: &'a str, + pub db_path: String, /// Configuration of the Merkle tree mode. - pub mode: MetadataCalculatorModeConfig<'a>, + pub mode: MerkleTreeMode, /// Interval between polling Postgres for updates if no progress was made by the tree. pub delay_interval: Duration, /// Maximum number of L1 batches to get from Postgres on a single update iteration. @@ -79,15 +60,14 @@ pub struct MetadataCalculatorConfig<'a> { pub stalled_writes_timeout: Duration, } -impl<'a> MetadataCalculatorConfig<'a> { +impl MetadataCalculatorConfig { pub(crate) fn for_main_node( - merkle_tree_config: &'a MerkleTreeConfig, - operation_config: &'a OperationsManagerConfig, - mode: MetadataCalculatorModeConfig<'a>, + merkle_tree_config: &MerkleTreeConfig, + operation_config: &OperationsManagerConfig, ) -> Self { Self { - db_path: &merkle_tree_config.path, - mode, + db_path: merkle_tree_config.path.clone(), + mode: merkle_tree_config.mode, delay_interval: operation_config.delay_interval(), max_l1_batches_per_iter: merkle_tree_config.max_l1_batches_per_iter, multi_get_chunk_size: merkle_tree_config.multi_get_chunk_size, @@ -100,31 +80,43 @@ impl<'a> MetadataCalculatorConfig<'a> { #[derive(Debug)] pub struct MetadataCalculator { - updater: TreeUpdater, + tree: GenericAsyncTree, + tree_reader: watch::Sender>, + object_store: Option>, delayer: Delayer, health_updater: HealthUpdater, + max_l1_batches_per_iter: usize, } impl MetadataCalculator { /// Creates a calculator with the specified `config`. 
- pub async fn new(config: &MetadataCalculatorConfig<'_>) -> Self { - // TODO (SMA-1726): restore the tree from backup if appropriate - - let mode = config.mode.to_mode(); - let object_store = match config.mode { - MetadataCalculatorModeConfig::Full { store_factory } => match store_factory { - Some(f) => Some(f.create_store().await), - None => None, - }, - MetadataCalculatorModeConfig::Lightweight => None, - }; - let updater = TreeUpdater::new(mode, config, object_store).await; + pub async fn new( + config: MetadataCalculatorConfig, + object_store: Option>, + ) -> Self { + assert!( + config.max_l1_batches_per_iter > 0, + "Maximum L1 batches per iteration is misconfigured to be 0; please update it to a positive value" + ); + + let db = create_db( + config.db_path.clone().into(), + config.block_cache_capacity, + config.memtable_capacity, + config.stalled_writes_timeout, + config.multi_get_chunk_size, + ) + .await; + let tree = GenericAsyncTree::new(db, config.mode).await; let (_, health_updater) = ReactiveHealthCheck::new("tree"); Self { - updater, + tree, + tree_reader: watch::channel(None).0, + object_store, delayer: Delayer::new(config.delay_interval), health_updater, + max_l1_batches_per_iter: config.max_l1_batches_per_iter, } } @@ -134,24 +126,38 @@ impl MetadataCalculator { } - /// Returns a reference to the tree reader. - pub(crate) fn tree_reader(&self) -> AsyncTreeReader { - self.updater.tree().reader() + /// Returns a future resolving to the tree reader once the tree is initialized. + pub(crate) fn tree_reader(&self) -> impl Future { + let mut receiver = self.tree_reader.subscribe(); + async move { + loop { + if let Some(reader) = receiver.borrow().clone() { + break reader; + } + if receiver.changed().await.is_err() { + tracing::info!("Tree dropped without getting ready; not resolving tree reader"); + future::pending::<()>().await; + } + } + } } pub async fn run( self, pool: ConnectionPool, - prover_pool: ConnectionPool, stop_receiver: watch::Receiver, ) -> anyhow::Result<()> { - self.updater - .loop_updating_tree( - self.delayer, - &pool, - &prover_pool, - stop_receiver, - self.health_updater, - ) + let tree = self + .tree + .ensure_ready(&pool, &stop_receiver, &self.health_updater) + .await?; + let Some(tree) = tree else { + return Ok(()); // recovery was aborted because a stop signal was received + }; + self.tree_reader.send_replace(Some(tree.reader())); + + let updater = TreeUpdater::new(tree, self.max_l1_batches_per_iter, self.object_store); + updater + .loop_updating_tree(self.delayer, &pool, stop_receiver, self.health_updater) + .await } @@ -185,10 +191,12 @@ impl MetadataCalculator { events_queue_commitment: Option, bootloader_initial_content_commitment: Option, ) -> L1BatchMetadata { - let is_pre_boojum = header + // The commitment generation is the same for all pre-boojum versions, so if the version is not present, we just supply the + // last pre-boojum version. 
+ // TODO(PLA-731): make sure that protocol version is not an Option + let protocol_version = header .protocol_version - .map(|v| v.is_pre_boojum()) - .unwrap_or(true); + .unwrap_or(ProtocolVersionId::last_potentially_undefined()); let merkle_root_hash = tree_metadata.root_hash; @@ -204,12 +212,12 @@ impl MetadataCalculator { tree_metadata.state_diffs, bootloader_initial_content_commitment.unwrap_or_default(), events_queue_commitment.unwrap_or_default(), - is_pre_boojum, + protocol_version, ); let commitment_hash = commitment.hash(); tracing::trace!("L1 batch commitment: {commitment:?}"); - let l2_l1_messages_compressed = if is_pre_boojum { + let l2_l1_messages_compressed = if protocol_version.is_pre_boojum() { commitment.l2_l1_logs_compressed().to_vec() } else { commitment.system_logs_compressed().to_vec() @@ -235,6 +243,4 @@ impl MetadataCalculator { tracing::trace!("L1 batch metadata: {metadata:?}"); metadata } - - // TODO (SMA-1726): Integrate tree backup mode } diff --git a/core/lib/zksync_core/src/metadata_calculator/recovery/mod.rs b/core/lib/zksync_core/src/metadata_calculator/recovery/mod.rs new file mode 100644 index 00000000000..0d37dd02417 --- /dev/null +++ b/core/lib/zksync_core/src/metadata_calculator/recovery/mod.rs @@ -0,0 +1,425 @@ +//! High-level recovery logic for the Merkle tree. +//! +//! # Overview +//! +//! Tree recovery works by checking Postgres and Merkle tree state on metadata calculator initialization. +//! Depending on these states, we can have one of the following situations: +//! +//! - Tree is recovering. +//! - Tree is empty and should be recovered (i.e., there's a snapshot in Postgres). +//! - Tree is empty and should be built from scratch. +//! - Tree is ready for normal operation (i.e., it's not empty and is not recovering). +//! +//! If recovery is necessary, it starts or resumes by loading the Postgres snapshot in chunks +//! and feeding each chunk to the tree. Chunks are loaded concurrently since this is the most +//! I/O-heavy operation; the concurrency is naturally limited by the number of connections to +//! Postgres in the supplied connection pool, but we explicitly use a [`Semaphore`] to control it +//! and avoid running into DB timeout errors. Before starting recovery in chunks, we filter out +//! chunks that have already been recovered by checking if the first key in a chunk is present +//! in the tree. (Note that for this to work, chunks **must** always be defined in the same way.) +//! +//! The recovery logic is fault-tolerant and supports graceful shutdown. If recovery is interrupted, +//! recovery of the remaining chunks continues when the metadata calculator is restarted. +//! +//! Recovery performs basic sanity checks to ensure that the tree won't end up containing garbage data. +//! E.g., it's checked that the tree always recovers from the same snapshot, and that the tree root hash +//! after recovery matches the one in the Postgres snapshot. 
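To make the chunking concrete: the number of chunks is the ceiling of the snapshot log count divided by the desired chunk size (with the 200,000-entry chunk size defined below, a 160M-log snapshot splits into 800 chunks, as the unit tests assert). The semaphore-based throttling described in the overview, reduced to a self-contained sketch with an invented `recover_chunk` stand-in for the real per-chunk routine:

```rust
// Illustrative sketch only: cap the number of in-flight chunk tasks with a
// `Semaphore` so the Postgres connection pool is not exhausted.
use anyhow::Context as _;
use futures::future;
use tokio::sync::Semaphore;

async fn recover_chunk(chunk_id: usize) -> anyhow::Result<()> {
    // Stand-in for the real routine: query Postgres for the key range,
    // lock the tree, and extend it with the loaded entries.
    println!("recovered chunk {chunk_id}");
    Ok(())
}

async fn recover_all(chunk_count: usize, concurrency_limit: usize) -> anyhow::Result<()> {
    let semaphore = Semaphore::new(concurrency_limit);
    let chunk_tasks = (0..chunk_count).map(|chunk_id| {
        let semaphore = &semaphore;
        async move {
            // At most `concurrency_limit` permits are out at any moment.
            let _permit = semaphore
                .acquire()
                .await
                .context("semaphore is never closed")?;
            recover_chunk(chunk_id).await
        }
    });
    // Fails fast if any chunk fails; all chunks share the permit budget.
    future::try_join_all(chunk_tasks).await?;
    Ok(())
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // E.g., ceil(1_000_000 logs / 200_000 per chunk) = 5 chunks.
    recover_all(5, 2).await
}
```

The real implementation additionally checks the stop receiver between stages and shares the tree behind a `Mutex`; the sketch keeps only the permit-bounded fan-out.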
+ +use std::{ + fmt, ops, + sync::atomic::{AtomicUsize, Ordering}, +}; + +use anyhow::Context as _; +use async_trait::async_trait; +use futures::future; +use serde::{Deserialize, Serialize}; +use tokio::sync::{watch, Mutex, Semaphore}; +use zksync_dal::{ConnectionPool, StorageProcessor}; +use zksync_health_check::{Health, HealthStatus, HealthUpdater}; +use zksync_merkle_tree::TreeEntry; +use zksync_types::{snapshots::SnapshotRecoveryStatus, MiniblockNumber, H256, U256}; +use zksync_utils::u256_to_h256; + +use super::{ + helpers::{AsyncTree, AsyncTreeRecovery, GenericAsyncTree}, + metrics::{ChunkRecoveryStage, RecoveryStage, RECOVERY_METRICS}, +}; + +#[cfg(test)] +mod tests; + +/// Handler of recovery life cycle events. This functionality is encapsulated in a trait to be able +/// to control recovery behavior in tests. +#[async_trait] +trait HandleRecoveryEvent: fmt::Debug + Send + Sync { + fn recovery_started(&mut self, _chunk_count: usize, _recovered_chunk_count: usize) { + // Default implementation does nothing + } + + async fn chunk_started(&self) { + // Default implementation does nothing + } + + async fn chunk_recovered(&self) { + // Default implementation does nothing + } +} + +/// Information about a Merkle tree during its snapshot recovery. +#[derive(Debug, Clone, Copy, Serialize, Deserialize)] +struct RecoveryMerkleTreeInfo { + mode: &'static str, // always set to "recovery" to distinguish from `MerkleTreeInfo` + chunk_count: usize, + recovered_chunk_count: usize, +} + +/// [`HealthUpdater`]-based [`HandleRecoveryEvent`] implementation. +#[derive(Debug)] +struct RecoveryHealthUpdater<'a> { + inner: &'a HealthUpdater, + chunk_count: usize, + recovered_chunk_count: AtomicUsize, +} + +impl<'a> RecoveryHealthUpdater<'a> { + fn new(inner: &'a HealthUpdater) -> Self { + Self { + inner, + chunk_count: 0, + recovered_chunk_count: AtomicUsize::new(0), + } + } +} + +#[async_trait] +impl HandleRecoveryEvent for RecoveryHealthUpdater<'_> { + fn recovery_started(&mut self, chunk_count: usize, recovered_chunk_count: usize) { + self.chunk_count = chunk_count; + *self.recovered_chunk_count.get_mut() = recovered_chunk_count; + RECOVERY_METRICS + .recovered_chunk_count + .set(recovered_chunk_count); + } + + async fn chunk_recovered(&self) { + let recovered_chunk_count = self.recovered_chunk_count.fetch_add(1, Ordering::SeqCst) + 1; + RECOVERY_METRICS + .recovered_chunk_count + .set(recovered_chunk_count); + let health = Health::from(HealthStatus::Ready).with_details(RecoveryMerkleTreeInfo { + mode: "recovery", + chunk_count: self.chunk_count, + recovered_chunk_count, + }); + self.inner.update(health); + } +} + +#[derive(Debug, Clone, Copy)] +struct SnapshotParameters { + miniblock: MiniblockNumber, + expected_root_hash: H256, + log_count: u64, +} + +impl SnapshotParameters { + /// This is intentionally not configurable because chunks must be the same for the entire recovery + /// (i.e., not changed after a node restart). 
+ const DESIRED_CHUNK_SIZE: u64 = 200_000; + + async fn new(pool: &ConnectionPool, recovery: &SnapshotRecoveryStatus) -> anyhow::Result { + let miniblock = recovery.miniblock_number; + let expected_root_hash = recovery.l1_batch_root_hash; + + let mut storage = pool.access_storage().await?; + let log_count = storage + .storage_logs_dal() + .count_miniblock_storage_logs(miniblock) + .await + .with_context(|| format!("Failed getting number of logs for miniblock #{miniblock}"))?; + + Ok(Self { + miniblock, + expected_root_hash, + log_count, + }) + } + + fn chunk_count(&self) -> usize { + zksync_utils::ceil_div(self.log_count, Self::DESIRED_CHUNK_SIZE) as usize + } +} + +/// Options for tree recovery. +#[derive(Debug)] +struct RecoveryOptions<'a> { + chunk_count: usize, + concurrency_limit: usize, + events: Box, +} + +impl GenericAsyncTree { + /// Ensures that the tree is ready for the normal operation, recovering it from a Postgres snapshot + /// if necessary. + pub async fn ensure_ready( + self, + pool: &ConnectionPool, + stop_receiver: &watch::Receiver, + health_updater: &HealthUpdater, + ) -> anyhow::Result> { + let (tree, snapshot_recovery) = match self { + Self::Ready(tree) => return Ok(Some(tree)), + Self::Recovering(tree) => { + let snapshot_recovery = get_snapshot_recovery(pool).await?.context( + "Merkle tree is recovering, but Postgres doesn't contain snapshot recovery information", + )?; + let recovered_version = tree.recovered_version(); + anyhow::ensure!( + u64::from(snapshot_recovery.l1_batch_number.0) == recovered_version, + "Snapshot L1 batch in Postgres ({snapshot_recovery:?}) differs from the recovered Merkle tree version \ + ({recovered_version})" + ); + tracing::info!("Resuming tree recovery with status: {snapshot_recovery:?}"); + (tree, snapshot_recovery) + } + Self::Empty { db, mode } => { + if let Some(snapshot_recovery) = get_snapshot_recovery(pool).await? { + tracing::info!( + "Starting Merkle tree recovery with status {snapshot_recovery:?}" + ); + let l1_batch = snapshot_recovery.l1_batch_number; + let tree = AsyncTreeRecovery::new(db, l1_batch.0.into(), mode); + (tree, snapshot_recovery) + } else { + // Start the tree from scratch. The genesis block will be filled in `TreeUpdater::loop_updating_tree()`. 
+ return Ok(Some(AsyncTree::new(db, mode))); + } + } + }; + + let snapshot = SnapshotParameters::new(pool, &snapshot_recovery).await?; + tracing::debug!("Obtained snapshot parameters: {snapshot:?}"); + let recovery_options = RecoveryOptions { + chunk_count: snapshot.chunk_count(), + concurrency_limit: pool.max_size() as usize, + events: Box::new(RecoveryHealthUpdater::new(health_updater)), + }; + tree.recover(snapshot, recovery_options, pool, stop_receiver) + .await + } +} + +impl AsyncTreeRecovery { + async fn recover( + mut self, + snapshot: SnapshotParameters, + mut options: RecoveryOptions<'_>, + pool: &ConnectionPool, + stop_receiver: &watch::Receiver, + ) -> anyhow::Result> { + let chunk_count = options.chunk_count; + let chunks: Vec<_> = Self::hashed_key_ranges(chunk_count).collect(); + tracing::info!( + "Recovering Merkle tree from Postgres snapshot in {chunk_count} concurrent chunks" + ); + + let mut storage = pool.access_storage().await?; + let remaining_chunks = self + .filter_chunks(&mut storage, snapshot.miniblock, &chunks) + .await?; + drop(storage); + options + .events + .recovery_started(chunk_count, chunk_count - remaining_chunks.len()); + tracing::info!( + "Filtered recovered key chunks; {} / {chunk_count} chunks remaining", + remaining_chunks.len() + ); + + let tree = Mutex::new(self); + let semaphore = Semaphore::new(options.concurrency_limit); + let chunk_tasks = remaining_chunks.into_iter().map(|chunk| async { + let _permit = semaphore + .acquire() + .await + .context("semaphore is never closed")?; + options.events.chunk_started().await; + Self::recover_key_chunk(&tree, snapshot.miniblock, chunk, pool, stop_receiver).await?; + options.events.chunk_recovered().await; + anyhow::Ok(()) + }); + future::try_join_all(chunk_tasks).await?; + + if *stop_receiver.borrow() { + return Ok(None); + } + + let finalize_latency = RECOVERY_METRICS.latency[&RecoveryStage::Finalize].start(); + let mut tree = tree.into_inner(); + let actual_root_hash = tree.root_hash().await; + anyhow::ensure!( + actual_root_hash == snapshot.expected_root_hash, + "Root hash of recovered tree {actual_root_hash:?} differs from expected root hash {:?}", + snapshot.expected_root_hash + ); + let tree = tree.finalize().await; + let finalize_latency = finalize_latency.observe(); + tracing::info!( + "Finished tree recovery in {finalize_latency:?}; resuming normal tree operation" + ); + Ok(Some(tree)) + } + + fn hashed_key_ranges(count: usize) -> impl Iterator> { + assert!(count > 0); + let mut stride = U256::MAX / count; + let stride_minus_one = if stride < U256::MAX { + stride += U256::one(); + stride - 1 + } else { + stride // `stride` is really 1 << 256 == U256::MAX + 1 + }; + + (0..count).map(move |i| { + let start = stride * i; + let (mut end, is_overflow) = stride_minus_one.overflowing_add(start); + if is_overflow { + end = U256::MAX; + } + u256_to_h256(start)..=u256_to_h256(end) + }) + } + + /// Filters out `key_chunks` for which recovery was successfully performed. 
+ async fn filter_chunks( + &mut self, + storage: &mut StorageProcessor<'_>, + snapshot_miniblock: MiniblockNumber, + key_chunks: &[ops::RangeInclusive], + ) -> anyhow::Result>> { + let chunk_starts_latency = + RECOVERY_METRICS.latency[&RecoveryStage::LoadChunkStarts].start(); + let chunk_starts = storage + .storage_logs_dal() + .get_chunk_starts_for_miniblock(snapshot_miniblock, key_chunks) + .await + .context("Failed getting chunk starts")?; + let chunk_starts_latency = chunk_starts_latency.observe(); + tracing::debug!( + "Loaded start entries for {} chunks in {chunk_starts_latency:?}", + key_chunks.len() + ); + + let existing_starts = chunk_starts + .iter() + .enumerate() + .filter_map(|(i, &start)| Some((i, start?))); + let start_keys = existing_starts + .clone() + .map(|(_, start_entry)| start_entry.key) + .collect(); + let tree_entries = self.entries(start_keys).await; + + let mut output = vec![]; + for (tree_entry, (i, db_entry)) in tree_entries.into_iter().zip(existing_starts) { + if tree_entry.is_empty() { + output.push(key_chunks[i].clone()); + continue; + } + anyhow::ensure!( + tree_entry.value == db_entry.value && tree_entry.leaf_index == db_entry.leaf_index, + "Mismatch between entry for key {:0>64x} in Postgres snapshot for miniblock #{snapshot_miniblock} \ + ({db_entry:?}) and tree ({tree_entry:?}); the recovery procedure may be corrupted", + db_entry.key + ); + } + Ok(output) + } + + async fn recover_key_chunk( + tree: &Mutex, + snapshot_miniblock: MiniblockNumber, + key_chunk: ops::RangeInclusive, + pool: &ConnectionPool, + stop_receiver: &watch::Receiver, + ) -> anyhow::Result<()> { + let acquire_connection_latency = + RECOVERY_METRICS.chunk_latency[&ChunkRecoveryStage::AcquireConnection].start(); + let mut storage = pool.access_storage().await?; + acquire_connection_latency.observe(); + + if *stop_receiver.borrow() { + return Ok(()); + } + + let entries_latency = + RECOVERY_METRICS.chunk_latency[&ChunkRecoveryStage::LoadEntries].start(); + let all_entries = storage + .storage_logs_dal() + .get_tree_entries_for_miniblock(snapshot_miniblock, key_chunk.clone()) + .await + .with_context(|| { + format!("Failed getting entries for chunk {key_chunk:?} in snapshot for miniblock #{snapshot_miniblock}") + })?; + drop(storage); + let entries_latency = entries_latency.observe(); + tracing::debug!( + "Loaded {} entries for chunk {key_chunk:?} in {entries_latency:?}", + all_entries.len() + ); + + if *stop_receiver.borrow() { + return Ok(()); + } + + // Sanity check: all entry keys must be distinct. Otherwise, we may end up writing non-final values + // to the tree, since we don't enforce any ordering on entries besides by the hashed key. 
+ for window in all_entries.windows(2) { + let [prev_entry, next_entry] = window else { + unreachable!(); + }; + anyhow::ensure!( + prev_entry.key != next_entry.key, + "node snapshot in Postgres is corrupted: entries {prev_entry:?} and {next_entry:?} \ + have same hashed_key" + ); + } + + let all_entries = all_entries + .into_iter() + .map(|entry| TreeEntry { + key: entry.key, + value: entry.value, + leaf_index: entry.leaf_index, + }) + .collect(); + let lock_tree_latency = + RECOVERY_METRICS.chunk_latency[&ChunkRecoveryStage::LockTree].start(); + let mut tree = tree.lock().await; + lock_tree_latency.observe(); + + if *stop_receiver.borrow() { + return Ok(()); + } + + let extend_tree_latency = + RECOVERY_METRICS.chunk_latency[&ChunkRecoveryStage::ExtendTree].start(); + tree.extend(all_entries).await; + let extend_tree_latency = extend_tree_latency.observe(); + tracing::debug!( + "Extended Merkle tree with entries for chunk {key_chunk:?} in {extend_tree_latency:?}" + ); + Ok(()) + } +} + +async fn get_snapshot_recovery( + pool: &ConnectionPool, +) -> anyhow::Result> { + let mut storage = pool.access_storage_tagged("metadata_calculator").await?; + Ok(storage + .snapshot_recovery_dal() + .get_applied_snapshot_status() + .await?) +} diff --git a/core/lib/zksync_core/src/metadata_calculator/recovery/tests.rs b/core/lib/zksync_core/src/metadata_calculator/recovery/tests.rs new file mode 100644 index 00000000000..180894f2fc7 --- /dev/null +++ b/core/lib/zksync_core/src/metadata_calculator/recovery/tests.rs @@ -0,0 +1,354 @@ +//! Tests for metadata calculator snapshot recovery. + +use std::{path::PathBuf, time::Duration}; + +use assert_matches::assert_matches; +use tempfile::TempDir; +use test_casing::test_casing; +use tokio::sync::mpsc; +use zksync_config::configs::{ + chain::OperationsManagerConfig, + database::{MerkleTreeConfig, MerkleTreeMode}, +}; +use zksync_health_check::{CheckHealth, ReactiveHealthCheck}; +use zksync_merkle_tree::{domain::ZkSyncTree, TreeInstruction}; +use zksync_types::{L1BatchNumber, L2ChainId, StorageLog}; +use zksync_utils::h256_to_u256; + +use super::*; +use crate::{ + genesis::{ensure_genesis_state, GenesisParams}, + metadata_calculator::{ + helpers::create_db, + tests::{ + extend_db_state, extend_db_state_from_l1_batch, gen_storage_logs, run_calculator, + setup_calculator, + }, + MetadataCalculator, MetadataCalculatorConfig, + }, + utils::testonly::prepare_recovery_snapshot, +}; + +#[test] +fn calculating_hashed_key_ranges_with_single_chunk() { + let mut ranges = AsyncTreeRecovery::hashed_key_ranges(1); + let full_range = ranges.next().unwrap(); + assert_eq!(full_range, H256::zero()..=H256([0xff; 32])); +} + +#[test] +fn calculating_hashed_key_ranges_for_256_chunks() { + let ranges = AsyncTreeRecovery::hashed_key_ranges(256); + let mut start = H256::zero(); + let mut end = H256([0xff; 32]); + + for (i, range) in ranges.enumerate() { + let i = u8::try_from(i).unwrap(); + start.0[0] = i; + end.0[0] = i; + assert_eq!(range, start..=end); + } +} + +#[test_casing(5, [3, 7, 23, 100, 255])] +fn calculating_hashed_key_ranges_for_arbitrary_chunks(chunk_count: usize) { + let ranges: Vec<_> = AsyncTreeRecovery::hashed_key_ranges(chunk_count).collect(); + assert_eq!(ranges.len(), chunk_count); + + for window in ranges.windows(2) { + let [prev_range, range] = window else { + unreachable!(); + }; + assert_eq!( + h256_to_u256(*range.start()), + h256_to_u256(*prev_range.end()) + 1 + ); + } + assert_eq!(*ranges.first().unwrap().start(), H256::zero()); + 
assert_eq!(*ranges.last().unwrap().end(), H256([0xff; 32])); +} + +#[test] +fn calculating_chunk_count() { + let mut snapshot = SnapshotParameters { + miniblock: MiniblockNumber(1), + log_count: 160_000_000, + expected_root_hash: H256::zero(), + }; + assert_eq!(snapshot.chunk_count(), 800); + + snapshot.log_count += 1; + assert_eq!(snapshot.chunk_count(), 801); + + snapshot.log_count = 100; + assert_eq!(snapshot.chunk_count(), 1); +} + +async fn create_tree_recovery(path: PathBuf, l1_batch: L1BatchNumber) -> AsyncTreeRecovery { + let db = create_db( + path, + 0, + 16 << 20, // 16 MiB, + Duration::ZERO, // writes should never be stalled in tests + 500, + ) + .await; + AsyncTreeRecovery::new(db, l1_batch.0.into(), MerkleTreeMode::Full) +} + +#[tokio::test] +async fn basic_recovery_workflow() { + let pool = ConnectionPool::test_pool().await; + let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); + let snapshot_recovery = prepare_recovery_snapshot_with_genesis(&pool, &temp_dir).await; + let snapshot = SnapshotParameters::new(&pool, &snapshot_recovery) + .await + .unwrap(); + + assert!(snapshot.log_count > 200); + + let (_stop_sender, stop_receiver) = watch::channel(false); + for chunk_count in [1, 4, 9, 16, 60, 256] { + println!("Recovering tree with {chunk_count} chunks"); + + let tree_path = temp_dir.path().join(format!("recovery-{chunk_count}")); + let tree = create_tree_recovery(tree_path, L1BatchNumber(1)).await; + let (health_check, health_updater) = ReactiveHealthCheck::new("tree"); + let recovery_options = RecoveryOptions { + chunk_count, + concurrency_limit: 1, + events: Box::new(RecoveryHealthUpdater::new(&health_updater)), + }; + let tree = tree + .recover(snapshot, recovery_options, &pool, &stop_receiver) + .await + .unwrap() + .expect("Tree recovery unexpectedly aborted"); + + assert_eq!(tree.root_hash(), snapshot_recovery.l1_batch_root_hash); + let health = health_check.check_health().await; + assert_matches!(health.status(), HealthStatus::Ready); + } +} + +async fn prepare_recovery_snapshot_with_genesis( + pool: &ConnectionPool, + temp_dir: &TempDir, +) -> SnapshotRecoveryStatus { + let mut storage = pool.access_storage().await.unwrap(); + ensure_genesis_state(&mut storage, L2ChainId::from(270), &GenesisParams::mock()) + .await + .unwrap(); + let mut logs = gen_storage_logs(100..300, 1).pop().unwrap(); + + // Add all logs from the genesis L1 batch to `logs` so that they cover all state keys. + let genesis_logs = storage + .storage_logs_dal() + .get_touched_slots_for_l1_batch(L1BatchNumber(0)) + .await; + let genesis_logs = genesis_logs + .into_iter() + .map(|(key, value)| StorageLog::new_write_log(key, value)); + logs.extend(genesis_logs); + extend_db_state(&mut storage, vec![logs]).await; + drop(storage); + + // Ensure that metadata for L1 batch #1 is present in the DB. 
+ let (calculator, _) = setup_calculator(&temp_dir.path().join("init"), pool).await; + let l1_batch_root_hash = run_calculator(calculator, pool.clone()).await; + + SnapshotRecoveryStatus { + l1_batch_number: L1BatchNumber(1), + l1_batch_root_hash, + miniblock_number: MiniblockNumber(1), + miniblock_root_hash: H256::zero(), // not used + last_finished_chunk_id: Some(0), + total_chunk_count: 1, + } +} + +#[derive(Debug)] +struct TestEventListener { + expected_recovered_chunks: usize, + stop_threshold: usize, + processed_chunk_count: AtomicUsize, + stop_sender: watch::Sender, +} + +impl TestEventListener { + fn new(stop_threshold: usize, stop_sender: watch::Sender) -> Self { + Self { + expected_recovered_chunks: 0, + stop_threshold, + processed_chunk_count: AtomicUsize::new(0), + stop_sender, + } + } + + fn expect_recovered_chunks(mut self, count: usize) -> Self { + self.expected_recovered_chunks = count; + self + } +} + +#[async_trait] +impl HandleRecoveryEvent for TestEventListener { + fn recovery_started(&mut self, _chunk_count: usize, recovered_chunk_count: usize) { + assert_eq!(recovered_chunk_count, self.expected_recovered_chunks); + } + + async fn chunk_recovered(&self) { + let processed_chunk_count = self.processed_chunk_count.fetch_add(1, Ordering::SeqCst) + 1; + if processed_chunk_count >= self.stop_threshold { + self.stop_sender.send_replace(true); + } + } +} + +#[test_casing(3, [5, 7, 8])] +#[tokio::test] +async fn recovery_fault_tolerance(chunk_count: usize) { + let pool = ConnectionPool::test_pool().await; + let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); + let snapshot_recovery = prepare_recovery_snapshot_with_genesis(&pool, &temp_dir).await; + + let tree_path = temp_dir.path().join("recovery"); + let tree = create_tree_recovery(tree_path.clone(), L1BatchNumber(1)).await; + let (stop_sender, stop_receiver) = watch::channel(false); + let recovery_options = RecoveryOptions { + chunk_count, + concurrency_limit: 1, + events: Box::new(TestEventListener::new(1, stop_sender)), + }; + let snapshot = SnapshotParameters::new(&pool, &snapshot_recovery) + .await + .unwrap(); + assert!(tree + .recover(snapshot, recovery_options, &pool, &stop_receiver) + .await + .unwrap() + .is_none()); + + // Emulate a restart and recover 2 more chunks. + let mut tree = create_tree_recovery(tree_path.clone(), L1BatchNumber(1)).await; + assert_ne!(tree.root_hash().await, snapshot_recovery.l1_batch_root_hash); + let (stop_sender, stop_receiver) = watch::channel(false); + let recovery_options = RecoveryOptions { + chunk_count, + concurrency_limit: 1, + events: Box::new(TestEventListener::new(2, stop_sender).expect_recovered_chunks(1)), + }; + assert!(tree + .recover(snapshot, recovery_options, &pool, &stop_receiver) + .await + .unwrap() + .is_none()); + + // Emulate another restart and recover remaining chunks. 
+ let mut tree = create_tree_recovery(tree_path.clone(), L1BatchNumber(1)).await; + assert_ne!(tree.root_hash().await, snapshot_recovery.l1_batch_root_hash); + let (stop_sender, stop_receiver) = watch::channel(false); + let recovery_options = RecoveryOptions { + chunk_count, + concurrency_limit: 1, + events: Box::new( + TestEventListener::new(usize::MAX, stop_sender).expect_recovered_chunks(3), + ), + }; + let tree = tree + .recover(snapshot, recovery_options, &pool, &stop_receiver) + .await + .unwrap() + .expect("Tree recovery unexpectedly aborted"); + assert_eq!(tree.root_hash(), snapshot_recovery.l1_batch_root_hash); +} + +#[derive(Debug)] +enum RecoveryWorkflowCase { + Stop, + CreateBatch, +} + +impl RecoveryWorkflowCase { + const ALL: [Self; 2] = [Self::Stop, Self::CreateBatch]; +} + +#[test_casing(2, RecoveryWorkflowCase::ALL)] +#[tokio::test] +async fn entire_recovery_workflow(case: RecoveryWorkflowCase) { + let pool = ConnectionPool::test_pool().await; + // Emulate the recovered view of Postgres. Unlike with previous tests, we don't perform genesis. + let snapshot_logs = gen_storage_logs(100..300, 1).pop().unwrap(); + let mut storage = pool.access_storage().await.unwrap(); + let snapshot_recovery = prepare_recovery_snapshot(&mut storage, 23, &snapshot_logs).await; + + let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); + let merkle_tree_config = MerkleTreeConfig { + path: temp_dir.path().to_str().unwrap().to_owned(), + ..MerkleTreeConfig::default() + }; + let calculator_config = MetadataCalculatorConfig::for_main_node( + &merkle_tree_config, + &OperationsManagerConfig { delay_interval: 50 }, + ); + let mut calculator = MetadataCalculator::new(calculator_config, None).await; + let (delay_sx, mut delay_rx) = mpsc::unbounded_channel(); + calculator.delayer.delay_notifier = delay_sx; + + let (stop_sender, stop_receiver) = watch::channel(false); + let tree_reader = calculator.tree_reader(); + let calculator_task = tokio::spawn(calculator.run(pool.clone(), stop_receiver)); + + match case { + // Wait until the tree is fully initialized and stop the calculator. + RecoveryWorkflowCase::Stop => { + let tree_info = tree_reader.await.info().await; + assert_eq!(tree_info.root_hash, snapshot_recovery.l1_batch_root_hash); + assert_eq!(tree_info.leaf_count, 200); + assert_eq!( + tree_info.next_l1_batch_number, + snapshot_recovery.l1_batch_number + 1 + ); + } + + // Emulate state keeper adding a new L1 batch to Postgres. + RecoveryWorkflowCase::CreateBatch => { + tree_reader.await; + + let mut storage = storage.start_transaction().await.unwrap(); + let mut new_logs = gen_storage_logs(500..600, 1).pop().unwrap(); + // Logs must be sorted by `log.key` to match their enum index assignment + new_logs.sort_unstable_by_key(|log| log.key); + + extend_db_state_from_l1_batch( + &mut storage, + snapshot_recovery.l1_batch_number + 1, + [new_logs.clone()], + ) + .await; + storage.commit().await.unwrap(); + + // Wait until the inserted L1 batch is processed by the calculator. 
+ let new_root_hash = loop { + let (next_l1_batch, root_hash) = delay_rx.recv().await.unwrap(); + if next_l1_batch == snapshot_recovery.l1_batch_number + 2 { + break root_hash; + } + }; + + let all_tree_instructions: Vec<_> = snapshot_logs + .iter() + .chain(&new_logs) + .enumerate() + .map(|(i, log)| TreeInstruction::write(log.key, i as u64 + 1, log.value)) + .collect(); + let expected_new_root_hash = + ZkSyncTree::process_genesis_batch(&all_tree_instructions).root_hash; + assert_ne!(expected_new_root_hash, snapshot_recovery.l1_batch_root_hash); + assert_eq!(new_root_hash, expected_new_root_hash); + } + } + + stop_sender.send_replace(true); + calculator_task.await.expect("calculator panicked").unwrap(); +} diff --git a/core/lib/zksync_core/src/metadata_calculator/tests.rs b/core/lib/zksync_core/src/metadata_calculator/tests.rs index 5e86db6087b..da158ff11ef 100644 --- a/core/lib/zksync_core/src/metadata_calculator/tests.rs +++ b/core/lib/zksync_core/src/metadata_calculator/tests.rs @@ -1,28 +1,32 @@ +//! Tests for the metadata calculator component life cycle. + +use std::{future::Future, ops, panic, path::Path, sync::Arc, time::Duration}; + use assert_matches::assert_matches; use itertools::Itertools; use tempfile::TempDir; use tokio::sync::{mpsc, watch}; - -use std::{future::Future, ops, panic, path::Path, time::Duration}; - -use zksync_config::configs::{chain::OperationsManagerConfig, database::MerkleTreeConfig}; -use zksync_contracts::BaseSystemContracts; +use zksync_config::configs::{ + chain::OperationsManagerConfig, + database::{MerkleTreeConfig, MerkleTreeMode}, +}; use zksync_dal::{ConnectionPool, StorageProcessor}; use zksync_health_check::{CheckHealth, HealthStatus}; use zksync_merkle_tree::domain::ZkSyncTree; use zksync_object_store::{ObjectStore, ObjectStoreFactory}; use zksync_types::{ - block::{miniblock_hash, BlockGasCount, L1BatchHeader, MiniblockHeader}, + block::{BlockGasCount, L1BatchHeader}, proofs::PrepareBasicCircuitsJob, AccountTreeId, Address, L1BatchNumber, L2ChainId, MiniblockNumber, StorageKey, StorageLog, H256, }; use zksync_utils::u32_to_h256; -use super::{ - L1BatchWithLogs, MetadataCalculator, MetadataCalculatorConfig, MetadataCalculatorModeConfig, +use super::{GenericAsyncTree, L1BatchWithLogs, MetadataCalculator, MetadataCalculatorConfig}; +use crate::{ + genesis::{ensure_genesis_state, GenesisParams}, + utils::testonly::{create_l1_batch, create_miniblock}, }; -use crate::genesis::{ensure_genesis_state, GenesisParams}; const RUN_TIMEOUT: Duration = Duration::from_secs(30); @@ -40,30 +44,27 @@ where #[tokio::test] async fn genesis_creation() { let pool = ConnectionPool::test_pool().await; - let prover_pool = ConnectionPool::test_pool().await; let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); let (calculator, _) = setup_calculator(temp_dir.path(), &pool).await; - run_calculator(calculator, pool.clone(), prover_pool).await; + run_calculator(calculator, pool.clone()).await; let (calculator, _) = setup_calculator(temp_dir.path(), &pool).await; - assert_eq!( - calculator.updater.tree().next_l1_batch_number(), - L1BatchNumber(1) - ); -} -// TODO (SMA-1726): Restore tests for tree backup mode + let GenericAsyncTree::Ready(tree) = &calculator.tree else { + panic!("Unexpected tree state: {:?}", calculator.tree); + }; + assert_eq!(tree.next_l1_batch_number(), L1BatchNumber(1)); +} #[tokio::test] async fn basic_workflow() { let pool = ConnectionPool::test_pool().await; - let prover_pool = ConnectionPool::test_pool().await; let 
temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); let (calculator, object_store) = setup_calculator(temp_dir.path(), &pool).await; reset_db_state(&pool, 1).await; - let merkle_tree_hash = run_calculator(calculator, pool.clone(), prover_pool).await; + let merkle_tree_hash = run_calculator(calculator, pool.clone()).await; // Check the hash against the reference. let expected_tree_hash = expected_tree_hash(&pool).await; @@ -77,10 +78,10 @@ async fn basic_workflow() { assert!(merkle_paths.iter().all(|log| log.is_write)); let (calculator, _) = setup_calculator(temp_dir.path(), &pool).await; - assert_eq!( - calculator.updater.tree().next_l1_batch_number(), - L1BatchNumber(2) - ); + let GenericAsyncTree::Ready(tree) = &calculator.tree else { + panic!("Unexpected tree state: {:?}", calculator.tree); + }; + assert_eq!(tree.next_l1_batch_number(), L1BatchNumber(2)); } async fn expected_tree_hash(pool: &ConnectionPool) -> H256 { @@ -90,9 +91,9 @@ async fn expected_tree_hash(pool: &ConnectionPool) -> H256 { .get_sealed_l1_batch_number() .await .unwrap() - .0; + .expect("No L1 batches in Postgres"); let mut all_logs = vec![]; - for i in 0..=sealed_l1_batch_number { + for i in 0..=sealed_l1_batch_number.0 { let logs = L1BatchWithLogs::new(&mut storage, L1BatchNumber(i)).await; let logs = logs.unwrap().storage_logs; all_logs.extend(logs); @@ -103,7 +104,6 @@ async fn expected_tree_hash(pool: &ConnectionPool) -> H256 { #[tokio::test] async fn status_receiver_has_correct_states() { let pool = ConnectionPool::test_pool().await; - let prover_pool = ConnectionPool::test_pool().await; let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); let (mut calculator, _) = setup_calculator(temp_dir.path(), &pool).await; @@ -122,7 +122,7 @@ async fn status_receiver_has_correct_states() { let (delay_sx, mut delay_rx) = mpsc::unbounded_channel(); calculator.delayer.delay_notifier = delay_sx; - let calculator_handle = tokio::spawn(calculator.run(pool, prover_pool, stop_rx)); + let calculator_handle = tokio::spawn(calculator.run(pool, stop_rx)); delay_rx.recv().await.unwrap(); assert_eq!( tree_health_check.check_health().await.status(), @@ -152,19 +152,18 @@ async fn status_receiver_has_correct_states() { #[tokio::test] async fn multi_l1_batch_workflow() { let pool = ConnectionPool::test_pool().await; - let prover_pool = ConnectionPool::test_pool().await; // Collect all storage logs in a single L1 batch let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); let (calculator, _) = setup_calculator(temp_dir.path(), &pool).await; reset_db_state(&pool, 1).await; - let root_hash = run_calculator(calculator, pool.clone(), prover_pool.clone()).await; + let root_hash = run_calculator(calculator, pool.clone()).await; // Collect the same logs in multiple L1 batches let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); let (calculator, object_store) = setup_calculator(temp_dir.path(), &pool).await; reset_db_state(&pool, 10).await; - let multi_block_root_hash = run_calculator(calculator, pool, prover_pool).await; + let multi_block_root_hash = run_calculator(calculator, pool).await; assert_eq!(multi_block_root_hash, root_hash); let mut prev_index = None; @@ -189,20 +188,18 @@ async fn multi_l1_batch_workflow() { #[tokio::test] async fn running_metadata_calculator_with_additional_blocks() { let pool = ConnectionPool::test_pool().await; - let prover_pool = ConnectionPool::test_pool().await; let temp_dir = 
TempDir::new().expect("failed get temporary directory for RocksDB"); let calculator = setup_lightweight_calculator(temp_dir.path(), &pool).await; reset_db_state(&pool, 5).await; - run_calculator(calculator, pool.clone(), prover_pool.clone()).await; + run_calculator(calculator, pool.clone()).await; let mut calculator = setup_lightweight_calculator(temp_dir.path(), &pool).await; let (stop_sx, stop_rx) = watch::channel(false); let (delay_sx, mut delay_rx) = mpsc::unbounded_channel(); calculator.delayer.delay_notifier = delay_sx; - let calculator_handle = - tokio::spawn(calculator.run(pool.clone(), prover_pool.clone(), stop_rx)); + let calculator_handle = tokio::spawn(calculator.run(pool.clone(), stop_rx)); // Wait until the calculator has processed initial L1 batches. let (next_l1_batch, _) = tokio::time::timeout(RUN_TIMEOUT, delay_rx.recv()) .await @@ -234,30 +231,25 @@ async fn running_metadata_calculator_with_additional_blocks() { // Switch to the full tree. It should pick up from the same spot and result in the same tree root hash. let (calculator, _) = setup_calculator(temp_dir.path(), &pool).await; - let root_hash_for_full_tree = run_calculator(calculator, pool, prover_pool).await; + let root_hash_for_full_tree = run_calculator(calculator, pool).await; assert_eq!(root_hash_for_full_tree, updated_root_hash); } #[tokio::test] async fn shutting_down_calculator() { let pool = ConnectionPool::test_pool().await; - let prover_pool = ConnectionPool::test_pool().await; let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); - let (merkle_tree_config, mut operation_config) = create_config(temp_dir.path()); + let (merkle_tree_config, mut operation_config) = + create_config(temp_dir.path(), MerkleTreeMode::Lightweight); operation_config.delay_interval = 30_000; // ms; chosen to be larger than `RUN_TIMEOUT` - let calculator = setup_calculator_with_options( - &merkle_tree_config, - &operation_config, - &pool, - MetadataCalculatorModeConfig::Lightweight, - ) - .await; + let calculator = + setup_calculator_with_options(&merkle_tree_config, &operation_config, &pool, None).await; reset_db_state(&pool, 5).await; let (stop_sx, stop_rx) = watch::channel(false); - let calculator_task = tokio::spawn(calculator.run(pool, prover_pool, stop_rx)); + let calculator_task = tokio::spawn(calculator.run(pool, stop_rx)); tokio::time::sleep(Duration::from_millis(100)).await; stop_sx.send_replace(true); run_with_timeout(RUN_TIMEOUT, calculator_task) @@ -271,11 +263,10 @@ async fn test_postgres_backup_recovery( insert_batch_without_metadata: bool, ) { let pool = ConnectionPool::test_pool().await; - let prover_pool = ConnectionPool::test_pool().await; let temp_dir = TempDir::new().expect("failed get temporary directory for RocksDB"); let calculator = setup_lightweight_calculator(temp_dir.path(), &pool).await; reset_db_state(&pool, 5).await; - run_calculator(calculator, pool.clone(), prover_pool.clone()).await; + run_calculator(calculator, pool.clone()).await; // Simulate recovery from a DB snapshot in which some newer L1 batches are erased. 
let last_batch_after_recovery = L1BatchNumber(3); @@ -297,6 +288,7 @@ async fn test_postgres_backup_recovery( BlockGasCount::default(), &[], &[], + 0, ) .await .unwrap(); @@ -309,7 +301,7 @@ async fn test_postgres_backup_recovery( let (delay_sx, mut delay_rx) = mpsc::unbounded_channel(); calculator.delayer.delay_notifier = delay_sx; - let calculator_handle = tokio::spawn(calculator.run(pool.clone(), prover_pool, stop_rx)); + let calculator_handle = tokio::spawn(calculator.run(pool.clone(), stop_rx)); // Wait until the calculator has processed initial L1 batches. let (next_l1_batch, _) = tokio::time::timeout(RUN_TIMEOUT, delay_rx.recv()) .await @@ -322,7 +314,7 @@ async fn test_postgres_backup_recovery( for batch_header in &removed_batches { let mut txn = storage.start_transaction().await.unwrap(); txn.blocks_dal() - .insert_l1_batch(batch_header, &[], BlockGasCount::default(), &[], &[]) + .insert_l1_batch(batch_header, &[], BlockGasCount::default(), &[], &[], 0) .await .unwrap(); insert_initial_writes_for_batch(&mut txn, batch_header.number).await; @@ -369,27 +361,29 @@ async fn postgres_backup_recovery_with_excluded_metadata() { pub(crate) async fn setup_calculator( db_path: &Path, pool: &ConnectionPool, -) -> (MetadataCalculator, Box) { - let store_factory = &ObjectStoreFactory::mock(); - let (merkle_tree_config, operation_manager) = create_config(db_path); - let mode = MetadataCalculatorModeConfig::Full { - store_factory: Some(store_factory), - }; +) -> (MetadataCalculator, Arc) { + let store_factory = ObjectStoreFactory::mock(); + let store = store_factory.create_store().await; + let (merkle_tree_config, operation_manager) = create_config(db_path, MerkleTreeMode::Full); let calculator = - setup_calculator_with_options(&merkle_tree_config, &operation_manager, pool, mode).await; + setup_calculator_with_options(&merkle_tree_config, &operation_manager, pool, Some(store)) + .await; (calculator, store_factory.create_store().await) } async fn setup_lightweight_calculator(db_path: &Path, pool: &ConnectionPool) -> MetadataCalculator { - let mode = MetadataCalculatorModeConfig::Lightweight; - let (db_config, operation_config) = create_config(db_path); - setup_calculator_with_options(&db_config, &operation_config, pool, mode).await + let (db_config, operation_config) = create_config(db_path, MerkleTreeMode::Lightweight); + setup_calculator_with_options(&db_config, &operation_config, pool, None).await } -fn create_config(db_path: &Path) -> (MerkleTreeConfig, OperationsManagerConfig) { +fn create_config( + db_path: &Path, + mode: MerkleTreeMode, +) -> (MerkleTreeConfig, OperationsManagerConfig) { let db_config = MerkleTreeConfig { path: path_to_string(&db_path.join("new")), - ..Default::default() + mode, + ..MerkleTreeConfig::default() }; let operation_config = OperationsManagerConfig { @@ -402,11 +396,11 @@ async fn setup_calculator_with_options( merkle_tree_config: &MerkleTreeConfig, operation_config: &OperationsManagerConfig, pool: &ConnectionPool, - mode: MetadataCalculatorModeConfig<'_>, + object_store: Option>, ) -> MetadataCalculator { let calculator_config = - MetadataCalculatorConfig::for_main_node(merkle_tree_config, operation_config, mode); - let metadata_calculator = MetadataCalculator::new(&calculator_config).await; + MetadataCalculatorConfig::for_main_node(merkle_tree_config, operation_config); + let metadata_calculator = MetadataCalculator::new(calculator_config, object_store).await; let mut storage = pool.access_storage().await.unwrap(); if 
storage.blocks_dal().is_genesis_needed().await.unwrap() { @@ -424,7 +418,6 @@ fn path_to_string(path: &Path) -> String { pub(crate) async fn run_calculator( mut calculator: MetadataCalculator, pool: ConnectionPool, - prover_pool: ConnectionPool, ) -> H256 { let (stop_sx, stop_rx) = watch::channel(false); let (delay_sx, mut delay_rx) = mpsc::unbounded_channel(); @@ -440,7 +433,7 @@ pub(crate) async fn run_calculator( root_hash }); - run_with_timeout(RUN_TIMEOUT, calculator.run(pool, prover_pool, stop_rx)) + run_with_timeout(RUN_TIMEOUT, calculator.run(pool, stop_rx)) .await .unwrap(); delayer_handle.await.unwrap() @@ -478,50 +471,33 @@ pub(super) async fn extend_db_state( new_logs: impl IntoIterator>, ) { let mut storage = storage.start_transaction().await.unwrap(); - let next_l1_batch = storage + let sealed_l1_batch = storage .blocks_dal() .get_sealed_l1_batch_number() .await .unwrap() - .0 - + 1; - - let base_system_contracts = BaseSystemContracts::load_from_disk(); - for (idx, batch_logs) in (next_l1_batch..).zip(new_logs) { - let batch_number = L1BatchNumber(idx); - let mut header = L1BatchHeader::new( - batch_number, - 0, - Address::default(), - base_system_contracts.hashes(), - Default::default(), - ); - header.is_finished = true; + .expect("no L1 batches in Postgres"); + extend_db_state_from_l1_batch(&mut storage, sealed_l1_batch + 1, new_logs).await; + storage.commit().await.unwrap(); +} +pub(super) async fn extend_db_state_from_l1_batch( + storage: &mut StorageProcessor<'_>, + next_l1_batch: L1BatchNumber, + new_logs: impl IntoIterator>, +) { + assert!(storage.in_transaction(), "must be called in DB transaction"); + + for (idx, batch_logs) in (next_l1_batch.0..).zip(new_logs) { + let header = create_l1_batch(idx); + let batch_number = header.number; // Assumes that L1 batch consists of only one miniblock. 
- let miniblock_number = MiniblockNumber(idx); - let miniblock_header = MiniblockHeader { - number: miniblock_number, - timestamp: header.timestamp, - hash: miniblock_hash( - miniblock_number, - header.timestamp, - H256::zero(), - H256::zero(), - ), - l1_tx_count: header.l1_tx_count, - l2_tx_count: header.l2_tx_count, - base_fee_per_gas: header.base_fee_per_gas, - l1_gas_price: 0, - l2_fair_gas_price: 0, - base_system_contracts_hashes: base_system_contracts.hashes(), - protocol_version: Some(Default::default()), - virtual_blocks: 0, - }; + let miniblock_header = create_miniblock(idx); + let miniblock_number = miniblock_header.number; storage .blocks_dal() - .insert_l1_batch(&header, &[], BlockGasCount::default(), &[], &[]) + .insert_l1_batch(&header, &[], BlockGasCount::default(), &[], &[], 0) .await .unwrap(); storage @@ -538,9 +514,8 @@ pub(super) async fn extend_db_state( .mark_miniblocks_as_executed_in_l1_batch(batch_number) .await .unwrap(); - insert_initial_writes_for_batch(&mut storage, batch_number).await; + insert_initial_writes_for_batch(storage, batch_number).await; } - storage.commit().await.unwrap(); } async fn insert_initial_writes_for_batch( @@ -619,7 +594,8 @@ async fn remove_l1_batches( .blocks_dal() .get_sealed_l1_batch_number() .await - .unwrap(); + .unwrap() + .expect("no L1 batches in Postgres"); assert!(sealed_l1_batch_number >= last_l1_batch_to_keep); let mut batch_headers = vec![]; diff --git a/core/lib/zksync_core/src/metadata_calculator/updater.rs b/core/lib/zksync_core/src/metadata_calculator/updater.rs index ed38dae14ed..917ab68fbff 100644 --- a/core/lib/zksync_core/src/metadata_calculator/updater.rs +++ b/core/lib/zksync_core/src/metadata_calculator/updater.rs @@ -1,64 +1,47 @@ //! Tree updater trait and its implementations. 
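For orientation, the updater implemented in this file boils down to a polling loop: read the last sealed L1 batch from Postgres, feed any new batches to the Merkle tree (at most `max_l1_batches_per_iter` at a time), and idle when the tree has caught up. A minimal sketch of that control flow, using hypothetical `Storage`/`Tree` stand-ins rather than the real zksync types:

```rust
/// Hypothetical stand-in for Postgres access.
struct Storage;

impl Storage {
    /// Number of the last sealed L1 batch, if any batches exist.
    fn last_sealed_batch(&self) -> Option<u32> {
        Some(5)
    }
}

/// Hypothetical stand-in for the async Merkle tree.
struct Tree {
    next_batch: u32,
}

impl Tree {
    fn process_batch(&mut self, number: u32) {
        println!("processing L1 batch #{number}");
        self.next_batch = number + 1;
    }
}

/// One polling step; returns `false` when the tree has caught up with Postgres.
fn updater_step(storage: &Storage, tree: &mut Tree, max_per_iter: u32) -> bool {
    let Some(last_sealed) = storage.last_sealed_batch() else {
        return false; // empty storage, e.g. right after snapshot recovery
    };
    // Cap the amount of work per iteration, like `max_l1_batches_per_iter`.
    let last_requested = (tree.next_batch + max_per_iter - 1).min(last_sealed);
    if tree.next_batch > last_requested {
        return false; // nothing new to process
    }
    for number in tree.next_batch..=last_requested {
        tree.process_batch(number);
    }
    true
}

fn main() {
    let storage = Storage;
    let mut tree = Tree { next_batch: 0 };
    // The real loop also honors a stop signal and sleeps between iterations.
    while updater_step(&storage, &mut tree, 2) {}
    assert_eq!(tree.next_batch, 6);
}
```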
+use std::{ops, sync::Arc, time::Instant}; + use anyhow::Context as _; use futures::{future, FutureExt}; use tokio::sync::watch; - -use std::{ops, time::Instant}; - use zksync_commitment_utils::{bootloader_initial_content_commitment, events_queue_commitment}; use zksync_config::configs::database::MerkleTreeMode; use zksync_dal::{ConnectionPool, StorageProcessor}; use zksync_health_check::HealthUpdater; use zksync_merkle_tree::domain::TreeMetadata; use zksync_object_store::ObjectStore; -use zksync_types::{block::L1BatchHeader, writes::InitialStorageWrite, L1BatchNumber, H256, U256}; +use zksync_types::{ + block::L1BatchHeader, writes::InitialStorageWrite, L1BatchNumber, ProtocolVersionId, H256, U256, +}; use super::{ helpers::{AsyncTree, Delayer, L1BatchWithLogs}, metrics::{TreeUpdateStage, METRICS}, - MetadataCalculator, MetadataCalculatorConfig, + MetadataCalculator, }; +use crate::utils::wait_for_l1_batch; #[derive(Debug)] pub(super) struct TreeUpdater { tree: AsyncTree, max_l1_batches_per_iter: usize, - object_store: Option>, + object_store: Option>, } impl TreeUpdater { - pub async fn new( - mode: MerkleTreeMode, - config: &MetadataCalculatorConfig<'_>, - object_store: Option>, + pub fn new( + tree: AsyncTree, + max_l1_batches_per_iter: usize, + object_store: Option>, ) -> Self { - assert!( - config.max_l1_batches_per_iter > 0, - "Maximum L1 batches per iteration is misconfigured to be 0; please update it to positive value" - ); - - let db_path = config.db_path.into(); - let tree = AsyncTree::new( - db_path, - mode, - config.multi_get_chunk_size, - config.block_cache_capacity, - config.memtable_capacity, - config.stalled_writes_timeout, - ) - .await; Self { tree, - max_l1_batches_per_iter: config.max_l1_batches_per_iter, + max_l1_batches_per_iter, object_store, } } - pub fn tree(&self) -> &AsyncTree { - &self.tree - } - async fn process_l1_batch( &mut self, l1_batch: L1BatchWithLogs, @@ -104,7 +87,6 @@ impl TreeUpdater { async fn process_multiple_batches( &mut self, storage: &mut StorageProcessor<'_>, - prover_storage: &mut StorageProcessor<'_>, l1_batch_numbers: ops::RangeInclusive, ) -> L1BatchNumber { let start = Instant::now(); @@ -185,32 +167,6 @@ impl TreeUpdater { // right away without having to implement dedicated code. if let Some(object_key) = &object_key { - let protocol_version_id = storage - .blocks_dal() - .get_batch_protocol_version_id(l1_batch_number) - .await - .unwrap(); - if let Some(id) = protocol_version_id { - if !prover_storage - .protocol_versions_dal() - .prover_protocol_version_exists(id) - .await - { - let protocol_version = storage - .protocol_versions_dal() - .get_protocol_version(id) - .await - .unwrap(); - prover_storage - .protocol_versions_dal() - .save_prover_protocol_version(protocol_version) - .await; - } - } - prover_storage - .witness_generator_dal() - .save_witness_inputs(l1_batch_number, object_key, protocol_version_id) - .await; storage .basic_witness_input_producer_dal() .create_basic_witness_input_producer_job(l1_batch_number) @@ -250,14 +206,14 @@ impl TreeUpdater { .await .unwrap(); - let is_pre_boojum = header + // TODO(PLA-731): ensure that the protocol version is always available. 
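+        // Until then, a missing version falls back to `ProtocolVersionId::last_potentially_undefined()`;
+        // the events queue commitment below is only computed for post-boojum versions, and the
+        // bootloader content commitment becomes version-dependent.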
+ let protocol_version = header .protocol_version - .map(|v| v.is_pre_boojum()) - .unwrap_or(true); - let events_queue_commitment = (!is_pre_boojum).then(|| { + .unwrap_or(ProtocolVersionId::last_potentially_undefined()); + let events_queue_commitment = (!protocol_version.is_pre_boojum()).then(|| { let events_queue = events_queue.expect("Events queue is required for post-boojum batch"); - events_queue_commitment(&events_queue, is_pre_boojum) + events_queue_commitment(&events_queue, protocol_version) .expect("Events queue commitment is required for post-boojum batch") }); events_queue_commitment_latency.observe(); @@ -271,7 +227,7 @@ impl TreeUpdater { .unwrap() .unwrap(); let bootloader_initial_content_commitment = - bootloader_initial_content_commitment(&initial_bootloader_contents, is_pre_boojum); + bootloader_initial_content_commitment(&initial_bootloader_contents, protocol_version); bootloader_commitment_latency.observe(); ( @@ -283,14 +239,17 @@ impl TreeUpdater { async fn step( &mut self, mut storage: StorageProcessor<'_>, - mut prover_storage: StorageProcessor<'_>, next_l1_batch_to_seal: &mut L1BatchNumber, ) { - let last_sealed_l1_batch = storage + let Some(last_sealed_l1_batch) = storage .blocks_dal() .get_sealed_l1_batch_number() .await - .unwrap(); + .unwrap() + else { + tracing::trace!("No L1 batches to seal: Postgres storage is empty"); + return; + }; let last_requested_l1_batch = next_l1_batch_to_seal.0 + self.max_l1_batches_per_iter as u32 - 1; let last_requested_l1_batch = last_requested_l1_batch.min(last_sealed_l1_batch.0); @@ -302,7 +261,7 @@ impl TreeUpdater { } else { tracing::info!("Updating Merkle tree with L1 batches #{l1_batch_numbers:?}"); *next_l1_batch_to_seal = self - .process_multiple_batches(&mut storage, &mut prover_storage, l1_batch_numbers) + .process_multiple_batches(&mut storage, l1_batch_numbers) .await; } } @@ -312,19 +271,25 @@ impl TreeUpdater { mut self, delayer: Delayer, pool: &ConnectionPool, - prover_pool: &ConnectionPool, mut stop_receiver: watch::Receiver, health_updater: HealthUpdater, ) -> anyhow::Result<()> { - let mut storage = pool - .access_storage_tagged("metadata_calculator") - .await - .unwrap(); + let Some(earliest_l1_batch) = + wait_for_l1_batch(pool, delayer.delay_interval(), &mut stop_receiver).await? + else { + return Ok(()); // Stop signal received + }; + let mut storage = pool.access_storage_tagged("metadata_calculator").await?; // Ensure genesis creation let tree = &mut self.tree; if tree.is_empty() { - let logs = L1BatchWithLogs::new(&mut storage, L1BatchNumber(0)) + assert_eq!( + earliest_l1_batch, + L1BatchNumber(0), + "Non-zero earliest L1 batch is not supported without previous tree recovery" + ); + let logs = L1BatchWithLogs::new(&mut storage, earliest_l1_batch) .await .context("Missing storage logs for the genesis L1 batch")?; tree.process_l1_batch(logs.storage_logs).await; @@ -332,50 +297,50 @@ impl TreeUpdater { } let mut next_l1_batch_to_seal = tree.next_l1_batch_number(); - let current_db_batch = storage - .blocks_dal() - .get_sealed_l1_batch_number() - .await - .unwrap(); + let current_db_batch = storage.blocks_dal().get_sealed_l1_batch_number().await?; let last_l1_batch_with_metadata = storage .blocks_dal() .get_last_l1_batch_number_with_metadata() - .await - .unwrap(); + .await?; drop(storage); tracing::info!( "Initialized metadata calculator with {max_batches_per_iter} max L1 batches per iteration. 
\ - Next L1 batch for Merkle tree: {next_l1_batch_to_seal}, current Postgres L1 batch: {current_db_batch}, \ - last L1 batch with metadata: {last_l1_batch_with_metadata}", + Next L1 batch for Merkle tree: {next_l1_batch_to_seal}, current Postgres L1 batch: {current_db_batch:?}, \ + last L1 batch with metadata: {last_l1_batch_with_metadata:?}", max_batches_per_iter = self.max_l1_batches_per_iter ); - let backup_lag = - (last_l1_batch_with_metadata.0 + 1).saturating_sub(next_l1_batch_to_seal.0); - METRICS.backup_lag.set(backup_lag.into()); - let tree_info = tree.reader().info().await; health_updater.update(tree_info.into()); - if next_l1_batch_to_seal > last_l1_batch_with_metadata + 1 { - // Check stop signal before proceeding with a potentially time-consuming operation. - if *stop_receiver.borrow_and_update() { - tracing::info!("Stop signal received, metadata_calculator is shutting down"); - return Ok(()); - } + // It may be the case that we don't have any L1 batches with metadata in Postgres, e.g. after + // recovering from a snapshot. We cannot wait for such a batch to appear (*this* is the component + // responsible for their appearance!), but fortunately most of the updater doesn't depend on it. + if let Some(last_l1_batch_with_metadata) = last_l1_batch_with_metadata { + let backup_lag = + (last_l1_batch_with_metadata.0 + 1).saturating_sub(next_l1_batch_to_seal.0); + METRICS.backup_lag.set(backup_lag.into()); + + if next_l1_batch_to_seal > last_l1_batch_with_metadata + 1 { + // Check stop signal before proceeding with a potentially time-consuming operation. + if *stop_receiver.borrow_and_update() { + tracing::info!("Stop signal received, metadata_calculator is shutting down"); + return Ok(()); + } - tracing::warn!( - "Next L1 batch of the tree ({next_l1_batch_to_seal}) is greater than last L1 batch with metadata in Postgres \ - ({last_l1_batch_with_metadata}); this may be a result of restoring Postgres from a snapshot. \ - Truncating Merkle tree versions so that this mismatch is fixed..." - ); - tree.revert_logs(last_l1_batch_with_metadata); - tree.save().await; - next_l1_batch_to_seal = tree.next_l1_batch_number(); - tracing::info!("Truncated Merkle tree to L1 batch #{next_l1_batch_to_seal}"); + tracing::warn!( + "Next L1 batch of the tree ({next_l1_batch_to_seal}) is greater than last L1 batch with metadata in Postgres \ + ({last_l1_batch_with_metadata}); this may be a result of restoring Postgres from a snapshot. \ + Truncating Merkle tree versions so that this mismatch is fixed..." 
+ ); + tree.revert_logs(last_l1_batch_with_metadata); + tree.save().await; + next_l1_batch_to_seal = tree.next_l1_batch_number(); + tracing::info!("Truncated Merkle tree to L1 batch #{next_l1_batch_to_seal}"); - let tree_info = tree.reader().info().await; - health_updater.update(tree_info.into()); + let tree_info = tree.reader().info().await; + health_updater.update(tree_info.into()); + } } loop { @@ -383,18 +348,10 @@ impl TreeUpdater { tracing::info!("Stop signal received, metadata_calculator is shutting down"); break; } - let storage = pool - .access_storage_tagged("metadata_calculator") - .await - .unwrap(); - let prover_storage = prover_pool - .access_storage_tagged("metadata_calculator") - .await - .unwrap(); + let storage = pool.access_storage_tagged("metadata_calculator").await?; let snapshot = *next_l1_batch_to_seal; - self.step(storage, prover_storage, &mut next_l1_batch_to_seal) - .await; + self.step(storage, &mut next_l1_batch_to_seal).await; let delay = if snapshot == *next_l1_batch_to_seal { tracing::trace!( "Metadata calculator (next L1 batch: #{next_l1_batch_to_seal}) \ diff --git a/core/lib/zksync_core/src/metrics.rs b/core/lib/zksync_core/src/metrics.rs index fdb043e211f..2c1559aae27 100644 --- a/core/lib/zksync_core/src/metrics.rs +++ b/core/lib/zksync_core/src/metrics.rs @@ -1,11 +1,10 @@ //! Application-wide metrics. -use vise::{Buckets, Counter, EncodeLabelSet, EncodeLabelValue, Family, Gauge, Histogram, Metrics}; - use std::{fmt, time::Duration}; +use vise::{Buckets, Counter, EncodeLabelSet, EncodeLabelValue, Family, Gauge, Histogram, Metrics}; use zksync_dal::transactions_dal::L2TxSubmissionResult; -use zksync_types::{aggregated_operations::AggregatedActionType, proofs::AggregationRound}; +use zksync_types::aggregated_operations::AggregatedActionType; #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, EncodeLabelValue, EncodeLabelSet)] #[metrics(label = "stage")] @@ -17,9 +16,7 @@ pub(crate) enum InitStage { EthWatcher, EthTxAggregator, EthTxManager, - DataFetcher, Tree, - WitnessGenerator(AggregationRound), BasicWitnessInputProducer, } @@ -33,9 +30,7 @@ impl fmt::Display for InitStage { Self::EthWatcher => formatter.write_str("eth_watcher"), Self::EthTxAggregator => formatter.write_str("eth_tx_aggregator"), Self::EthTxManager => formatter.write_str("eth_tx_manager"), - Self::DataFetcher => formatter.write_str("data_fetchers"), Self::Tree => formatter.write_str("tree"), - Self::WitnessGenerator(round) => write!(formatter, "witness_generator_{round:?}"), Self::BasicWitnessInputProducer => formatter.write_str("basic_witness_input_producer"), } } @@ -62,7 +57,6 @@ pub(crate) enum BlockStage { Sealed, Tree, MetadataCalculated, - MerkleProofCalculated, L1 { l1_stage: BlockL1Stage, tx_type: AggregatedActionType, @@ -75,7 +69,6 @@ impl fmt::Display for BlockStage { Self::Sealed => formatter.write_str("sealed"), Self::Tree => formatter.write_str("tree"), Self::MetadataCalculated => formatter.write_str("metadata_calculated"), - Self::MerkleProofCalculated => formatter.write_str("merkle_proof_calculated"), Self::L1 { l1_stage, tx_type } => { let l1_stage = match l1_stage { BlockL1Stage::Saved => "save", // not "saved" for backward compatibility @@ -180,9 +173,9 @@ pub(crate) struct ExternalNodeMetrics { pub synced: Gauge, /// Current sync lag of the external node. pub sync_lag: Gauge, - /// Number of the last L1 batch checked by the reorg detector or consistency checker. + /// Number of the last L1 batch checked by the re-org detector or consistency checker. 
pub last_correct_batch: Family>, - /// Number of the last miniblock checked by the reorg detector or consistency checker. + /// Number of the last miniblock checked by the re-org detector or consistency checker. pub last_correct_miniblock: Family>, } diff --git a/core/lib/zksync_core/src/proof_data_handler/mod.rs b/core/lib/zksync_core/src/proof_data_handler/mod.rs index 898ac4652ba..7a5b8bc69b3 100644 --- a/core/lib/zksync_core/src/proof_data_handler/mod.rs +++ b/core/lib/zksync_core/src/proof_data_handler/mod.rs @@ -1,8 +1,7 @@ -use crate::proof_data_handler::request_processor::RequestProcessor; +use std::{net::SocketAddr, sync::Arc}; + use anyhow::Context as _; -use axum::extract::Path; -use axum::{routing::post, Json, Router}; -use std::net::SocketAddr; +use axum::{extract::Path, routing::post, Json, Router}; use tokio::sync::watch; use zksync_config::{ configs::{proof_data_handler::ProtocolVersionLoadingMode, ProofDataHandlerConfig}, @@ -16,6 +15,8 @@ use zksync_types::{ H256, }; +use crate::proof_data_handler::request_processor::RequestProcessor; + mod request_processor; fn fri_l1_verifier_config(contracts_config: &ContractsConfig) -> L1VerifierConfig { @@ -33,7 +34,7 @@ fn fri_l1_verifier_config(contracts_config: &ContractsConfig) -> L1VerifierConfi pub(crate) async fn run_server( config: ProofDataHandlerConfig, contracts_config: ContractsConfig, - blob_store: Box, + blob_store: Arc, pool: ConnectionPool, mut stop_receiver: watch::Receiver, ) -> anyhow::Result<()> { diff --git a/core/lib/zksync_core/src/proof_data_handler/request_processor.rs b/core/lib/zksync_core/src/proof_data_handler/request_processor.rs index f091993812b..bc9873d99ed 100644 --- a/core/lib/zksync_core/src/proof_data_handler/request_processor.rs +++ b/core/lib/zksync_core/src/proof_data_handler/request_processor.rs @@ -1,26 +1,27 @@ -use axum::extract::Path; -use axum::response::Response; -use axum::{http::StatusCode, response::IntoResponse, Json}; -use std::convert::TryFrom; -use std::sync::Arc; +use std::{convert::TryFrom, sync::Arc}; + +use axum::{ + extract::Path, + http::StatusCode, + response::{IntoResponse, Response}, + Json, +}; use zksync_config::configs::{ proof_data_handler::ProtocolVersionLoadingMode, ProofDataHandlerConfig, }; -use zksync_types::commitment::serialize_commitments; -use zksync_types::web3::signing::keccak256; -use zksync_utils::u256_to_h256; - use zksync_dal::{ConnectionPool, SqlxError}; use zksync_object_store::{ObjectStore, ObjectStoreError}; -use zksync_types::protocol_version::FriProtocolVersionId; use zksync_types::{ - protocol_version::L1VerifierConfig, + commitment::serialize_commitments, + protocol_version::{FriProtocolVersionId, L1VerifierConfig}, prover_server_api::{ ProofGenerationData, ProofGenerationDataRequest, ProofGenerationDataResponse, SubmitProofRequest, SubmitProofResponse, }, + web3::signing::keccak256, L1BatchNumber, H256, }; +use zksync_utils::u256_to_h256; #[derive(Clone)] pub(crate) struct RequestProcessor { @@ -31,7 +32,6 @@ pub(crate) struct RequestProcessor { } pub(crate) enum RequestProcessorError { - NoPendingBatches, ObjectStore(ObjectStoreError), Sqlx(SqlxError), } @@ -39,10 +39,6 @@ pub(crate) enum RequestProcessorError { impl IntoResponse for RequestProcessorError { fn into_response(self) -> Response { let (status_code, message) = match self { - Self::NoPendingBatches => ( - StatusCode::NOT_FOUND, - "No pending batches to process".to_owned(), - ), RequestProcessorError::ObjectStore(err) => { tracing::error!("GCS error: {:?}", err); ( @@ -69,13 +65,13 
@@ impl IntoResponse for RequestProcessorError { impl RequestProcessor { pub(crate) fn new( - blob_store: Box, + blob_store: Arc, pool: ConnectionPool, config: ProofDataHandlerConfig, l1_verifier_config: Option, ) -> Self { Self { - blob_store: Arc::from(blob_store), + blob_store, pool, config, l1_verifier_config, @@ -88,15 +84,19 @@ impl RequestProcessor { ) -> Result, RequestProcessorError> { tracing::info!("Received request for proof generation data: {:?}", request); - let l1_batch_number = self + let l1_batch_number_result = self .pool .access_storage() .await .unwrap() .proof_generation_dal() .get_next_block_to_be_proven(self.config.proof_generation_timeout()) - .await - .ok_or(RequestProcessorError::NoPendingBatches)?; + .await; + + let l1_batch_number = match l1_batch_number_result { + Some(number) => number, + None => return Ok(Json(ProofGenerationDataResponse::Success(None))), // no batches pending to be proven + }; let blob = self .blob_store @@ -125,7 +125,9 @@ impl RequestProcessor { l1_verifier_config, }; - Ok(Json(ProofGenerationDataResponse::Success(proof_gen_data))) + Ok(Json(ProofGenerationDataResponse::Success(Some( + proof_gen_data, + )))) } pub(crate) async fn submit_proof( diff --git a/core/lib/zksync_core/src/reorg_detector/mod.rs b/core/lib/zksync_core/src/reorg_detector/mod.rs index 4f7b8eb5503..c399ed4c488 100644 --- a/core/lib/zksync_core/src/reorg_detector/mod.rs +++ b/core/lib/zksync_core/src/reorg_detector/mod.rs @@ -1,214 +1,306 @@ -use std::{future::Future, time::Duration}; +use std::{fmt, future::Future, time::Duration}; +use anyhow::Context as _; +use async_trait::async_trait; use tokio::sync::watch; use zksync_dal::ConnectionPool; -use zksync_types::{L1BatchNumber, MiniblockNumber}; +use zksync_types::{L1BatchNumber, MiniblockNumber, H256}; use zksync_web3_decl::{ - jsonrpsee::core::Error as RpcError, - jsonrpsee::http_client::{HttpClient, HttpClientBuilder}, + jsonrpsee::{ + core::ClientError as RpcError, + http_client::{HttpClient, HttpClientBuilder}, + }, namespaces::{EthNamespaceClient, ZksNamespaceClient}, - RpcResult, }; -use crate::metrics::{CheckerComponent, EN_METRICS}; +use crate::{ + metrics::{CheckerComponent, EN_METRICS}, + utils::wait_for_l1_batch_with_metadata, +}; + +#[cfg(test)] +mod tests; + +#[derive(Debug, thiserror::Error)] +enum HashMatchError { + #[error("RPC error calling main node")] + Rpc(#[from] RpcError), + #[error( + "Unrecoverable error: the earliest L1 batch #{0} in the local DB \ + has mismatched hash with the main node. Make sure you're connected to the right network; \ + if you've recovered from a snapshot, re-check snapshot authenticity. \ + Using an earlier snapshot could help." + )] + EarliestHashMismatch(L1BatchNumber), + #[error("Internal error")] + Internal(#[from] anyhow::Error), +} + +impl From for HashMatchError { + fn from(err: zksync_dal::SqlxError) -> Self { + Self::Internal(err.into()) + } +} + +fn is_transient_err(err: &RpcError) -> bool { + matches!(err, RpcError::Transport(_) | RpcError::RequestTimeout) +} + +#[async_trait] +trait MainNodeClient: fmt::Debug + Send + Sync { + async fn miniblock_hash(&self, number: MiniblockNumber) -> Result, RpcError>; + + async fn l1_batch_root_hash(&self, number: L1BatchNumber) -> Result, RpcError>; +} + +#[async_trait] +impl MainNodeClient for HttpClient { + async fn miniblock_hash(&self, number: MiniblockNumber) -> Result, RpcError> { + Ok(self + .get_block_by_number(number.0.into(), false) + .await? 
+ .map(|block| block.hash)) + } + + async fn l1_batch_root_hash(&self, number: L1BatchNumber) -> Result, RpcError> { + Ok(self + .get_l1_batch_details(number) + .await? + .and_then(|batch| batch.base.root_hash)) + } +} + +trait UpdateCorrectBlock: fmt::Debug + Send + Sync { + fn update_correct_block( + &mut self, + last_correct_miniblock: MiniblockNumber, + last_correct_l1_batch: L1BatchNumber, + ); +} -const SLEEP_INTERVAL: Duration = Duration::from_secs(5); +/// Default implementation of [`UpdateCorrectBlock`] that reports values as metrics. +impl UpdateCorrectBlock for () { + fn update_correct_block( + &mut self, + last_correct_miniblock: MiniblockNumber, + last_correct_l1_batch: L1BatchNumber, + ) { + EN_METRICS.last_correct_batch[&CheckerComponent::ReorgDetector] + .set(last_correct_miniblock.0.into()); + EN_METRICS.last_correct_miniblock[&CheckerComponent::ReorgDetector] + .set(last_correct_l1_batch.0.into()); + } +} -/// This is a component that is responsible for detecting the batch reorgs. -/// Batch reorg is a rare event of manual intervention, when the node operator +/// This is a component that is responsible for detecting the batch re-orgs. +/// Batch re-org is a rare event of manual intervention, when the node operator /// decides to revert some of the not yet finalized batches for some reason /// (e.g. inability to generate a proof), and then potentially /// re-organize transactions in them to fix the problem. /// /// To detect them, we constantly check the latest sealed batch root hash, -/// and in the event of mismatch, we know that there has been a reorg. +/// and in the event of mismatch, we know that there has been a re-org. /// We then perform a binary search to find the latest correct block /// and revert all batches after it, to keep being consistent with the main node. /// /// This is the only component that is expected to finish its execution -/// in the even of reorg, since we have to restart the node after a rollback is performed, +/// in the even of re-org, since we have to restart the node after a rollback is performed, /// and is special-cased in the `zksync_external_node` crate. #[derive(Debug)] pub struct ReorgDetector { - client: HttpClient, + client: Box, + block_updater: Box, pool: ConnectionPool, - should_stop: watch::Receiver, + stop_receiver: watch::Receiver, + sleep_interval: Duration, } impl ReorgDetector { - pub fn new(url: &str, pool: ConnectionPool, should_stop: watch::Receiver) -> Self { + const DEFAULT_SLEEP_INTERVAL: Duration = Duration::from_secs(5); + + pub fn new(url: &str, pool: ConnectionPool, stop_receiver: watch::Receiver) -> Self { let client = HttpClientBuilder::default() .build(url) .expect("Failed to create HTTP client"); Self { - client, + client: Box::new(client), + block_updater: Box::new(()), pool, - should_stop, + stop_receiver, + sleep_interval: Self::DEFAULT_SLEEP_INTERVAL, } } /// Compares hashes of the given local miniblock and the same miniblock from main node. - async fn miniblock_hashes_match(&self, miniblock_number: MiniblockNumber) -> RpcResult { - let local_hash = self - .pool - .access_storage() - .await - .unwrap() + async fn miniblock_hashes_match( + &self, + miniblock_number: MiniblockNumber, + ) -> Result { + let mut storage = self.pool.access_storage().await?; + let local_hash = storage .blocks_dal() .get_miniblock_header(miniblock_number) - .await - .unwrap() - .unwrap_or_else(|| { - panic!( - "Header does not exist for local miniblock #{}", - miniblock_number - ) - }) + .await? 
+ .with_context(|| { + format!("Header does not exist for local miniblock #{miniblock_number}") + })? .hash; + drop(storage); - let Some(hash) = self - .client - .get_block_by_number(miniblock_number.0.into(), false) - .await? - .map(|header| header.hash) - else { + let Some(remote_hash) = self.client.miniblock_hash(miniblock_number).await? else { // Due to reorg, locally we may be ahead of the main node. // Lack of the hash on the main node is treated as a hash match, // We need to wait for our knowledge of main node to catch up. return Ok(true); }; - Ok(hash == local_hash) + if remote_hash != local_hash { + tracing::warn!( + "Reorg detected: local hash {local_hash:?} doesn't match the hash from \ + main node {remote_hash:?} (miniblock #{miniblock_number})" + ); + } + Ok(remote_hash == local_hash) } /// Compares root hashes of the latest local batch and of the same batch from the main node. - async fn root_hashes_match(&self, l1_batch_number: L1BatchNumber) -> RpcResult { - // Unwrapping is fine since the caller always checks that these root hashes exist. - let local_hash = self - .pool - .access_storage() - .await - .unwrap() + async fn root_hashes_match( + &self, + l1_batch_number: L1BatchNumber, + ) -> Result { + let mut storage = self.pool.access_storage().await?; + let local_hash = storage .blocks_dal() .get_l1_batch_state_root(l1_batch_number) - .await - .unwrap() - .unwrap_or_else(|| { - panic!( - "Root hash does not exist for local batch #{}", - l1_batch_number - ) - }); - let Some(hash) = self - .client - .get_l1_batch_details(l1_batch_number) .await? - .and_then(|b| b.base.root_hash) - else { + .with_context(|| { + format!("Root hash does not exist for local batch #{l1_batch_number}") + })?; + drop(storage); + + let Some(remote_hash) = self.client.l1_batch_root_hash(l1_batch_number).await? else { // Due to reorg, locally we may be ahead of the main node. // Lack of the root hash on the main node is treated as a hash match, // We need to wait for our knowledge of main node to catch up. return Ok(true); }; - Ok(hash == local_hash) + + if remote_hash != local_hash { + tracing::warn!( + "Reorg detected: local root hash {local_hash:?} doesn't match the state hash from \ + main node {remote_hash:?} (L1 batch #{l1_batch_number})" + ); + } + Ok(remote_hash == local_hash) } - /// Localizes a reorg: performs binary search to determine the last non-diverged block. - async fn detect_reorg(&self, diverged_l1_batch: L1BatchNumber) -> RpcResult { + /// Localizes a re-org: performs binary search to determine the last non-diverged block. + async fn detect_reorg( + &self, + known_valid_l1_batch: L1BatchNumber, + diverged_l1_batch: L1BatchNumber, + ) -> Result { // TODO (BFT-176, BFT-181): We have to look through the whole history, since batch status updater may mark - // a block as executed even if the state diverges for it. - binary_search_with(1, diverged_l1_batch.0, |number| { + // a block as executed even if the state diverges for it. 
+ binary_search_with(known_valid_l1_batch.0, diverged_l1_batch.0, |number| { self.root_hashes_match(L1BatchNumber(number)) }) .await .map(L1BatchNumber) } - pub async fn run(mut self) -> Option { + pub async fn run(mut self) -> anyhow::Result> { loop { match self.run_inner().await { - Ok(l1_batch_number) => return l1_batch_number, - Err(err @ RpcError::Transport(_) | err @ RpcError::RequestTimeout) => { + Ok(l1_batch_number) => return Ok(l1_batch_number), + Err(HashMatchError::Rpc(err)) if is_transient_err(&err) => { tracing::warn!("Following transport error occurred: {err}"); tracing::info!("Trying again after a delay"); - tokio::time::sleep(SLEEP_INTERVAL).await; - } - Err(err) => { - panic!("Unexpected error in the reorg detector: {}", err); + tokio::time::sleep(self.sleep_interval).await; } + Err(HashMatchError::Internal(err)) => return Err(err), + Err(err) => return Err(err.into()), } } } - async fn run_inner(&mut self) -> RpcResult> { + async fn run_inner(&mut self) -> Result, HashMatchError> { + let earliest_l1_batch_number = wait_for_l1_batch_with_metadata( + &self.pool, + self.sleep_interval, + &mut self.stop_receiver, + ) + .await?; + + let Some(earliest_l1_batch_number) = earliest_l1_batch_number else { + return Ok(None); // Stop signal received + }; + tracing::debug!( + "Checking root hash match for earliest L1 batch #{earliest_l1_batch_number}" + ); + if !self.root_hashes_match(earliest_l1_batch_number).await? { + return Err(HashMatchError::EarliestHashMismatch( + earliest_l1_batch_number, + )); + } + loop { - let should_stop = *self.should_stop.borrow(); + let should_stop = *self.stop_receiver.borrow(); - let sealed_l1_batch_number = self - .pool - .access_storage() - .await - .unwrap() + // At this point, we are guaranteed to have L1 batches and miniblocks in the storage. + let mut storage = self.pool.access_storage().await?; + let sealed_l1_batch_number = storage .blocks_dal() .get_last_l1_batch_number_with_metadata() - .await - .unwrap(); - - let sealed_miniblock_number = self - .pool - .access_storage() - .await - .unwrap() + .await? + .context("L1 batches table unexpectedly emptied")?; + let sealed_miniblock_number = storage .blocks_dal() .get_sealed_miniblock_number() - .await - .unwrap(); + .await? + .context("miniblocks table unexpectedly emptied")?; + drop(storage); tracing::trace!( "Checking for reorgs - L1 batch #{sealed_l1_batch_number}, \ - miniblock number #{sealed_miniblock_number}" + miniblock number #{sealed_miniblock_number}" ); let root_hashes_match = self.root_hashes_match(sealed_l1_batch_number).await?; let miniblock_hashes_match = self.miniblock_hashes_match(sealed_miniblock_number).await?; - // The only event that triggers reorg detection and node rollback is if the + // The only event that triggers re-org detection and node rollback is if the // hash mismatch at the same block height is detected, be it miniblocks or batches. // // In other cases either there is only a height mismatch which means that one of - // the nodes needs to do catching up, howver it is not certain that there is actually - // a reorg taking place. + // the nodes needs to do catching up; however, it is not certain that there is actually + // a re-org taking place. 
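+            // (If only the miniblock hash diverges, the divergence is in a not-yet-sealed L1 batch,
+            // which is why the search below starts from `sealed_l1_batch_number + 1` in that case.)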
if root_hashes_match && miniblock_hashes_match { - EN_METRICS.last_correct_batch[&CheckerComponent::ReorgDetector] - .set(sealed_l1_batch_number.0.into()); - EN_METRICS.last_correct_miniblock[&CheckerComponent::ReorgDetector] - .set(sealed_miniblock_number.0.into()); + self.block_updater + .update_correct_block(sealed_miniblock_number, sealed_l1_batch_number); } else { - if !root_hashes_match { - tracing::warn!( - "Reorg detected: last state hash doesn't match the state hash from \ - main node (L1 batch #{sealed_l1_batch_number})" - ); - } - if !miniblock_hashes_match { - tracing::warn!( - "Reorg detected: last state hash doesn't match the state hash from \ - main node (MiniblockNumber #{sealed_miniblock_number})" - ); - } - tracing::info!("Searching for the first diverged batch"); - let last_correct_l1_batch = self.detect_reorg(sealed_l1_batch_number).await?; + let diverged_l1_batch_number = if root_hashes_match { + sealed_l1_batch_number + 1 // Non-sealed L1 batch has diverged + } else { + sealed_l1_batch_number + }; + + tracing::info!("Searching for the first diverged L1 batch"); + let last_correct_l1_batch = self + .detect_reorg(earliest_l1_batch_number, diverged_l1_batch_number) + .await?; tracing::info!( "Reorg localized: last correct L1 batch is #{last_correct_l1_batch}" ); return Ok(Some(last_correct_l1_batch)); } + if should_stop { tracing::info!("Shutting down reorg detector"); return Ok(None); } - tokio::time::sleep(SLEEP_INTERVAL).await; + tokio::time::sleep(self.sleep_interval).await; } } } @@ -220,6 +312,8 @@ where { while left + 1 < right { let middle = (left + right) / 2; + assert!(middle < right); // middle <= (right - 2 + right) / 2 = right - 1 + if f(middle).await? { left = middle; } else { @@ -228,16 +322,3 @@ where } Ok(left) } - -#[cfg(test)] -mod tests { - /// Tests the binary search algorithm. - #[tokio::test] - async fn test_binary_search() { - for divergence_point in [1, 50, 51, 100] { - let mut f = |x| async move { Ok::<_, ()>(x < divergence_point) }; - let result = super::binary_search_with(0, 100, &mut f).await; - assert_eq!(result, Ok(divergence_point - 1)); - } - } -} diff --git a/core/lib/zksync_core/src/reorg_detector/tests.rs b/core/lib/zksync_core/src/reorg_detector/tests.rs new file mode 100644 index 00000000000..f9495286b1f --- /dev/null +++ b/core/lib/zksync_core/src/reorg_detector/tests.rs @@ -0,0 +1,496 @@ +//! Tests for the reorg detector component. 
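The invariant documented by the `assert!(middle < right)` added to `binary_search_with` above is easier to see in a synchronous analogue. Assuming the predicate holds on a prefix of the range (hashes match up to the divergence point) and holds at `left` — which is what the new earliest-batch check guarantees — the loop returns the last value for which it holds:

```rust
/// Synchronous analogue of `binary_search_with`: returns the largest `x` such that `f`
/// holds on `left..=x`, assuming `f(left)` is true and `f` is monotone (true on a
/// prefix of the range, false afterwards).
fn binary_search_with(mut left: u32, mut right: u32, f: impl Fn(u32) -> bool) -> u32 {
    while left + 1 < right {
        let middle = (left + right) / 2;
        // Since `left <= right - 2` here, `middle <= right - 1`, i.e. `middle < right`.
        debug_assert!(middle < right);
        if f(middle) {
            left = middle; // hashes still match at `middle`
        } else {
            right = middle; // divergence is at `middle` or earlier
        }
    }
    left
}

fn main() {
    // Mirrors the `binary_search_with_simple_predicate` test below.
    for divergence_point in [1, 50, 51, 100] {
        let last_correct = binary_search_with(0, 100, |x| x < divergence_point);
        assert_eq!(last_correct, divergence_point - 1);
    }
}
```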
+ +use std::{ + collections::HashMap, + sync::{Arc, Mutex}, +}; + +use assert_matches::assert_matches; +use test_casing::{test_casing, Product}; +use tokio::sync::mpsc; +use zksync_dal::StorageProcessor; +use zksync_types::{ + block::{BlockGasCount, MiniblockHeader}, + L2ChainId, ProtocolVersion, +}; + +use super::*; +use crate::{ + genesis::{ensure_genesis_state, GenesisParams}, + utils::testonly::{create_l1_batch, create_miniblock}, +}; + +async fn store_miniblock(storage: &mut StorageProcessor<'_>, number: u32, hash: H256) { + let header = MiniblockHeader { + hash, + ..create_miniblock(number) + }; + storage + .blocks_dal() + .insert_miniblock(&header) + .await + .unwrap(); +} + +async fn seal_l1_batch(storage: &mut StorageProcessor<'_>, number: u32, hash: H256) { + let header = create_l1_batch(number); + storage + .blocks_dal() + .insert_l1_batch(&header, &[], BlockGasCount::default(), &[], &[], 0) + .await + .unwrap(); + storage + .blocks_dal() + .mark_miniblocks_as_executed_in_l1_batch(L1BatchNumber(number)) + .await + .unwrap(); + storage + .blocks_dal() + .set_l1_batch_hash(L1BatchNumber(number), hash) + .await + .unwrap(); +} + +/// Tests the binary search algorithm. +#[tokio::test] +async fn binary_search_with_simple_predicate() { + for divergence_point in [1, 50, 51, 100] { + let mut f = |x| async move { Ok::<_, ()>(x < divergence_point) }; + let result = binary_search_with(0, 100, &mut f).await; + assert_eq!(result, Ok(divergence_point - 1)); + } +} + +type ResponsesMap = HashMap; + +#[derive(Debug, Clone, Copy)] +enum RpcErrorKind { + Transient, + Fatal, +} + +impl From for RpcError { + fn from(kind: RpcErrorKind) -> Self { + match kind { + RpcErrorKind::Transient => Self::RequestTimeout, + RpcErrorKind::Fatal => Self::HttpNotImplemented, + } + } +} + +#[derive(Debug, Default)] +struct MockMainNodeClient { + miniblock_hash_responses: ResponsesMap, + l1_batch_root_hash_responses: ResponsesMap, + error_kind: Arc>>, +} + +#[async_trait] +impl MainNodeClient for MockMainNodeClient { + async fn miniblock_hash(&self, number: MiniblockNumber) -> Result, RpcError> { + if let &Some(error_kind) = &*self.error_kind.lock().unwrap() { + return Err(error_kind.into()); + } + + if let Some(response) = self.miniblock_hash_responses.get(&number) { + Ok(Some(*response)) + } else { + Ok(None) + } + } + + async fn l1_batch_root_hash(&self, number: L1BatchNumber) -> Result, RpcError> { + if let &Some(error_kind) = &*self.error_kind.lock().unwrap() { + return Err(error_kind.into()); + } + + if let Some(response) = self.l1_batch_root_hash_responses.get(&number) { + Ok(Some(*response)) + } else { + Ok(None) + } + } +} + +impl UpdateCorrectBlock for mpsc::UnboundedSender<(MiniblockNumber, L1BatchNumber)> { + fn update_correct_block( + &mut self, + last_correct_miniblock: MiniblockNumber, + last_correct_l1_batch: L1BatchNumber, + ) { + self.send((last_correct_miniblock, last_correct_l1_batch)) + .ok(); + } +} + +#[test_casing(4, Product(([false, true], [false, true])))] +#[tokio::test] +async fn normal_reorg_function(snapshot_recovery: bool, with_transient_errors: bool) { + let pool = ConnectionPool::test_pool().await; + let mut storage = pool.access_storage().await.unwrap(); + if snapshot_recovery { + storage + .protocol_versions_dal() + .save_protocol_version_with_tx(ProtocolVersion::default()) + .await; + } else { + ensure_genesis_state(&mut storage, L2ChainId::default(), &GenesisParams::mock()) + .await + .unwrap(); + } + + let mut client = MockMainNodeClient::default(); + let l1_batch_numbers = if 
snapshot_recovery { + 11_u32..=20 + } else { + 1_u32..=10 + }; + let last_l1_batch_number = L1BatchNumber(*l1_batch_numbers.end()); + let last_miniblock_number = MiniblockNumber(*l1_batch_numbers.end()); + let miniblock_and_l1_batch_hashes = l1_batch_numbers.map(|number| { + let miniblock_hash = H256::from_low_u64_be(number.into()); + client + .miniblock_hash_responses + .insert(MiniblockNumber(number), miniblock_hash); + let l1_batch_hash = H256::repeat_byte(number as u8); + client + .l1_batch_root_hash_responses + .insert(L1BatchNumber(number), l1_batch_hash); + (number, miniblock_hash, l1_batch_hash) + }); + let miniblock_and_l1_batch_hashes: Vec<_> = miniblock_and_l1_batch_hashes.collect(); + + if with_transient_errors { + *client.error_kind.lock().unwrap() = Some(RpcErrorKind::Transient); + // "Fix" the client after a certain delay. + let error_kind = Arc::clone(&client.error_kind); + tokio::spawn(async move { + tokio::time::sleep(Duration::from_millis(100)).await; + *error_kind.lock().unwrap() = None; + }); + } + + let (stop_sender, stop_receiver) = watch::channel(false); + let (block_update_sender, mut block_update_receiver) = + mpsc::unbounded_channel::<(MiniblockNumber, L1BatchNumber)>(); + let detector = ReorgDetector { + client: Box::new(client), + block_updater: Box::new(block_update_sender), + pool: pool.clone(), + stop_receiver, + sleep_interval: Duration::from_millis(10), + }; + let detector_task = tokio::spawn(detector.run()); + + for (number, miniblock_hash, l1_batch_hash) in miniblock_and_l1_batch_hashes { + store_miniblock(&mut storage, number, miniblock_hash).await; + tokio::time::sleep(Duration::from_millis(10)).await; + seal_l1_batch(&mut storage, number, l1_batch_hash).await; + tokio::time::sleep(Duration::from_millis(10)).await; + } + + while let Some((miniblock, l1_batch)) = block_update_receiver.recv().await { + assert!(miniblock <= last_miniblock_number); + assert!(l1_batch <= last_l1_batch_number); + if miniblock == last_miniblock_number && l1_batch == last_l1_batch_number { + break; + } + } + + // Check detector shutdown + stop_sender.send_replace(true); + let task_result = detector_task.await.unwrap(); + assert_eq!(task_result.unwrap(), None); +} + +#[tokio::test] +async fn detector_stops_on_fatal_rpc_error() { + let pool = ConnectionPool::test_pool().await; + let mut storage = pool.access_storage().await.unwrap(); + ensure_genesis_state(&mut storage, L2ChainId::default(), &GenesisParams::mock()) + .await + .unwrap(); + + let client = MockMainNodeClient::default(); + *client.error_kind.lock().unwrap() = Some(RpcErrorKind::Fatal); + + let (_stop_sender, stop_receiver) = watch::channel(false); + let detector = ReorgDetector { + client: Box::new(client), + block_updater: Box::new(()), + pool: pool.clone(), + stop_receiver, + sleep_interval: Duration::from_millis(10), + }; + // Check that the detector stops when a fatal RPC error is encountered. 
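+    // `HttpNotImplemented` is not classified as transient by `is_transient_err`, so `run()`
+    // must bail out with an error instead of retrying with a delay.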
+ detector.run().await.unwrap_err(); +} + +#[tokio::test] +async fn reorg_is_detected_on_batch_hash_mismatch() { + let pool = ConnectionPool::test_pool().await; + let mut storage = pool.access_storage().await.unwrap(); + ensure_genesis_state(&mut storage, L2ChainId::default(), &GenesisParams::mock()) + .await + .unwrap(); + + let (_stop_sender, stop_receiver) = watch::channel(false); + let mut client = MockMainNodeClient::default(); + let miniblock_hash = H256::from_low_u64_be(23); + client + .miniblock_hash_responses + .insert(MiniblockNumber(1), miniblock_hash); + client + .l1_batch_root_hash_responses + .insert(L1BatchNumber(1), H256::repeat_byte(1)); + client + .miniblock_hash_responses + .insert(MiniblockNumber(2), miniblock_hash); + client + .l1_batch_root_hash_responses + .insert(L1BatchNumber(2), H256::repeat_byte(2)); + + let detector = ReorgDetector { + client: Box::new(client), + block_updater: Box::new(()), + pool: pool.clone(), + stop_receiver, + sleep_interval: Duration::from_millis(10), + }; + let detector_task = tokio::spawn(detector.run()); + + store_miniblock(&mut storage, 1, miniblock_hash).await; + seal_l1_batch(&mut storage, 1, H256::repeat_byte(1)).await; + store_miniblock(&mut storage, 2, miniblock_hash).await; + seal_l1_batch(&mut storage, 2, H256::repeat_byte(0xff)).await; + // ^ Hash of L1 batch #2 differs from that on the main node. + + let task_result = detector_task.await.unwrap(); + let last_correct_l1_batch = task_result.unwrap(); + assert_eq!(last_correct_l1_batch, Some(L1BatchNumber(1))); +} + +#[tokio::test] +async fn reorg_is_detected_on_miniblock_hash_mismatch() { + let pool = ConnectionPool::test_pool().await; + let mut storage = pool.access_storage().await.unwrap(); + ensure_genesis_state(&mut storage, L2ChainId::default(), &GenesisParams::mock()) + .await + .unwrap(); + + let (_stop_sender, stop_receiver) = watch::channel(false); + let mut client = MockMainNodeClient::default(); + let miniblock_hash = H256::from_low_u64_be(23); + client + .miniblock_hash_responses + .insert(MiniblockNumber(1), miniblock_hash); + client + .l1_batch_root_hash_responses + .insert(L1BatchNumber(1), H256::repeat_byte(1)); + client + .miniblock_hash_responses + .insert(MiniblockNumber(2), miniblock_hash); + client + .miniblock_hash_responses + .insert(MiniblockNumber(3), miniblock_hash); + + let detector = ReorgDetector { + client: Box::new(client), + block_updater: Box::new(()), + pool: pool.clone(), + stop_receiver, + sleep_interval: Duration::from_millis(10), + }; + let detector_task = tokio::spawn(detector.run()); + + store_miniblock(&mut storage, 1, miniblock_hash).await; + seal_l1_batch(&mut storage, 1, H256::repeat_byte(1)).await; + store_miniblock(&mut storage, 2, miniblock_hash).await; + store_miniblock(&mut storage, 3, H256::repeat_byte(42)).await; + // ^ Hash of the miniblock #3 differs from that on the main node. + + let task_result = detector_task.await.unwrap(); + let last_correct_l1_batch = task_result.unwrap(); + assert_eq!(last_correct_l1_batch, Some(L1BatchNumber(1))); + // ^ All locally stored L1 batches should be correct. +} + +#[derive(Debug, Clone, Copy)] +enum StorageUpdateStrategy { + /// Prefill the local storage with all block data. + Prefill, + /// Sequentially add a new L1 batch after the previous one was checked. 
+ Sequential, +} + +impl StorageUpdateStrategy { + const ALL: [Self; 2] = [Self::Prefill, Self::Sequential]; +} + +#[test_casing(16, Product(([false, true], [2, 3, 5, 8], StorageUpdateStrategy::ALL)))] +#[tokio::test] +async fn reorg_is_detected_on_historic_batch_hash_mismatch( + snapshot_recovery: bool, + last_correct_batch: u32, + storage_update_strategy: StorageUpdateStrategy, +) { + assert!(last_correct_batch < 10); + let (l1_batch_numbers, last_correct_batch) = if snapshot_recovery { + (11_u32..=20, last_correct_batch + 10) + } else { + (1_u32..=10, last_correct_batch) + }; + + let pool = ConnectionPool::test_pool().await; + let mut storage = pool.access_storage().await.unwrap(); + storage + .protocol_versions_dal() + .save_protocol_version_with_tx(ProtocolVersion::default()) + .await; + let earliest_l1_batch_number = l1_batch_numbers.start() - 1; + store_miniblock(&mut storage, earliest_l1_batch_number, H256::zero()).await; + seal_l1_batch(&mut storage, earliest_l1_batch_number, H256::zero()).await; + + let mut client = MockMainNodeClient::default(); + let miniblock_and_l1_batch_hashes = l1_batch_numbers.clone().map(|number| { + let mut miniblock_hash = H256::from_low_u64_be(number.into()); + client + .miniblock_hash_responses + .insert(MiniblockNumber(number), miniblock_hash); + let mut l1_batch_hash = H256::repeat_byte(number as u8); + client + .l1_batch_root_hash_responses + .insert(L1BatchNumber(number), l1_batch_hash); + + if number > last_correct_batch { + miniblock_hash = H256::zero(); + l1_batch_hash = H256::zero(); + } + (number, miniblock_hash, l1_batch_hash) + }); + let mut miniblock_and_l1_batch_hashes: Vec<_> = miniblock_and_l1_batch_hashes.collect(); + + if matches!(storage_update_strategy, StorageUpdateStrategy::Prefill) { + for &(number, miniblock_hash, l1_batch_hash) in &miniblock_and_l1_batch_hashes { + store_miniblock(&mut storage, number, miniblock_hash).await; + seal_l1_batch(&mut storage, number, l1_batch_hash).await; + } + } + + let (_stop_sender, stop_receiver) = watch::channel(false); + let (block_update_sender, mut block_update_receiver) = + mpsc::unbounded_channel::<(MiniblockNumber, L1BatchNumber)>(); + let detector = ReorgDetector { + client: Box::new(client), + block_updater: Box::new(block_update_sender), + pool: pool.clone(), + stop_receiver, + sleep_interval: Duration::from_millis(10), + }; + let detector_task = tokio::spawn(detector.run()); + + if matches!(storage_update_strategy, StorageUpdateStrategy::Sequential) { + let mut last_number = earliest_l1_batch_number; + while let Some((miniblock, l1_batch)) = block_update_receiver.recv().await { + if miniblock == MiniblockNumber(last_number) && l1_batch == L1BatchNumber(last_number) { + let (number, miniblock_hash, l1_batch_hash) = + miniblock_and_l1_batch_hashes.remove(0); + assert_eq!(number, last_number + 1); + store_miniblock(&mut storage, number, miniblock_hash).await; + seal_l1_batch(&mut storage, number, l1_batch_hash).await; + last_number = number; + } + } + } + + let task_result = detector_task.await.unwrap(); + let last_correct_l1_batch = task_result.unwrap(); + assert_eq!( + last_correct_l1_batch, + Some(L1BatchNumber(last_correct_batch)) + ); +} + +#[tokio::test] +async fn stopping_reorg_detector_while_waiting_for_l1_batch() { + let pool = ConnectionPool::test_pool().await; + let mut storage = pool.access_storage().await.unwrap(); + assert!(storage.blocks_dal().is_genesis_needed().await.unwrap()); + drop(storage); + + let (stop_sender, stop_receiver) = watch::channel(false); + let detector 
= ReorgDetector { + client: Box::::default(), + block_updater: Box::new(()), + pool, + stop_receiver, + sleep_interval: Duration::from_millis(10), + }; + let detector_task = tokio::spawn(detector.run()); + + stop_sender.send_replace(true); + + let task_result = detector_task.await.unwrap(); + let last_correct_l1_batch = task_result.unwrap(); + assert_eq!(last_correct_l1_batch, None); +} + +#[tokio::test] +async fn detector_errors_on_earliest_batch_hash_mismatch() { + let pool = ConnectionPool::test_pool().await; + let mut storage = pool.access_storage().await.unwrap(); + let genesis_root_hash = + ensure_genesis_state(&mut storage, L2ChainId::default(), &GenesisParams::mock()) + .await + .unwrap(); + assert_ne!(genesis_root_hash, H256::zero()); + + let mut client = MockMainNodeClient::default(); + client + .l1_batch_root_hash_responses + .insert(L1BatchNumber(0), H256::zero()); + + let (_stop_sender, stop_receiver) = watch::channel(false); + let mut detector = ReorgDetector { + client: Box::new(client), + block_updater: Box::new(()), + pool: pool.clone(), + stop_receiver, + sleep_interval: Duration::from_millis(10), + }; + + let err = detector.run_inner().await.unwrap_err(); + assert_matches!(err, HashMatchError::EarliestHashMismatch(L1BatchNumber(0))); +} + +#[tokio::test] +async fn detector_errors_on_earliest_batch_hash_mismatch_with_snapshot_recovery() { + let pool = ConnectionPool::test_pool().await; + let mut client = MockMainNodeClient::default(); + client + .l1_batch_root_hash_responses + .insert(L1BatchNumber(3), H256::zero()); + + let (_stop_sender, stop_receiver) = watch::channel(false); + let mut detector = ReorgDetector { + client: Box::new(client), + block_updater: Box::new(()), + pool: pool.clone(), + stop_receiver, + sleep_interval: Duration::from_millis(10), + }; + + tokio::spawn(async move { + tokio::time::sleep(Duration::from_millis(20)).await; + let mut storage = pool.access_storage().await.unwrap(); + storage + .protocol_versions_dal() + .save_protocol_version_with_tx(ProtocolVersion::default()) + .await; + store_miniblock(&mut storage, 3, H256::from_low_u64_be(3)).await; + seal_l1_batch(&mut storage, 3, H256::from_low_u64_be(3)).await; + }); + + let err = detector.run_inner().await.unwrap_err(); + assert_matches!(err, HashMatchError::EarliestHashMismatch(L1BatchNumber(3))); +} diff --git a/core/lib/zksync_core/src/state_keeper/batch_executor/mod.rs b/core/lib/zksync_core/src/state_keeper/batch_executor/mod.rs index fffabe0b7e2..16e1e83f677 100644 --- a/core/lib/zksync_core/src/state_keeper/batch_executor/mod.rs +++ b/core/lib/zksync_core/src/state_keeper/batch_executor/mod.rs @@ -1,13 +1,6 @@ -use async_trait::async_trait; -use once_cell::sync::OnceCell; -use tokio::{ - sync::{mpsc, oneshot}, - task::JoinHandle, -}; - -use multivm::MultivmTracer; use std::{fmt, sync::Arc}; +use async_trait::async_trait; use multivm::{ interface::{ ExecutionResult, FinishedL1Batch, Halt, L1BatchEnv, L2BlockEnv, SystemEnv, VmExecutionMode, @@ -15,18 +8,19 @@ use multivm::{ }, tracers::CallTracer, vm_latest::HistoryEnabled, - VmInstance, + MultiVMTracer, VmInstance, +}; +use once_cell::sync::OnceCell; +use tokio::{ + sync::{mpsc, oneshot}, + task::JoinHandle, }; use zksync_dal::ConnectionPool; use zksync_state::{RocksdbStorage, StorageView, WriteStorage}; use zksync_types::{vm_trace::Call, witness_block_state::WitnessBlockState, Transaction, U256}; use zksync_utils::bytecode::CompressedBytecodeInfo; -#[cfg(test)] -mod tests; - use crate::{ - gas_tracker::{gas_count_from_metrics, 
gas_count_from_tx_and_metrics}, metrics::{InteractionType, TxStage, APP_METRICS}, state_keeper::{ metrics::{ExecutorCommand, TxExecutionStage, EXECUTOR_METRICS, KEEPER_METRICS}, @@ -34,6 +28,9 @@ use crate::{ }, }; +#[cfg(test)] +mod tests; + /// Representation of a transaction executed in the virtual machine. #[derive(Debug, Clone)] pub(crate) enum TxExecutionResult { @@ -91,6 +88,7 @@ pub struct MainBatchExecutorBuilder { max_allowed_tx_gas_limit: U256, upload_witness_inputs_to_gcs: bool, enum_index_migration_chunk_size: usize, + optional_bytecode_compression: bool, } impl MainBatchExecutorBuilder { @@ -101,6 +99,7 @@ impl MainBatchExecutorBuilder { save_call_traces: bool, upload_witness_inputs_to_gcs: bool, enum_index_migration_chunk_size: usize, + optional_bytecode_compression: bool, ) -> Self { Self { state_keeper_db_path, @@ -109,6 +108,7 @@ impl MainBatchExecutorBuilder { max_allowed_tx_gas_limit, upload_witness_inputs_to_gcs, enum_index_migration_chunk_size, + optional_bytecode_compression, } } } @@ -137,6 +137,7 @@ impl L1BatchExecutorBuilder for MainBatchExecutorBuilder { l1_batch_params, system_env, self.upload_witness_inputs_to_gcs, + self.optional_bytecode_compression, ) } } @@ -160,6 +161,7 @@ impl BatchExecutorHandle { l1_batch_env: L1BatchEnv, system_env: SystemEnv, upload_witness_inputs_to_gcs: bool, + optional_bytecode_compression: bool, ) -> Self { // Since we process `BatchExecutor` commands one-by-one (the next command is never enqueued // until a previous command is processed), capacity 1 is enough for the commands channel. @@ -167,6 +169,7 @@ impl BatchExecutorHandle { let executor = BatchExecutor { save_call_traces, max_allowed_tx_gas_limit, + optional_bytecode_compression, commands: commands_receiver, }; @@ -287,6 +290,7 @@ pub(super) enum Command { pub(super) struct BatchExecutor { save_call_traces: bool, max_allowed_tx_gas_limit: U256, + optional_bytecode_compression: bool, commands: mpsc::Receiver, } @@ -327,7 +331,7 @@ impl BatchExecutor { }; resp.send((vm_block_result, witness_block_state)).unwrap(); - // storage_view cannot be accessed while borrowed by the VM, + // `storage_view` cannot be accessed while borrowed by the VM, // so this is the only point at which storage metrics can be obtained let metrics = storage_view.as_ref().borrow_mut().metrics(); EXECUTOR_METRICS.batch_storage_interaction_duration[&InteractionType::GetValue] @@ -366,7 +370,12 @@ impl BatchExecutor { // Execute the transaction. let latency = KEEPER_METRICS.tx_execution_time[&TxExecutionStage::Execution].start(); - let (tx_result, compressed_bytecodes, call_tracer_result) = self.execute_tx_in_vm(tx, vm); + let (tx_result, compressed_bytecodes, call_tracer_result) = + if self.optional_bytecode_compression { + self.execute_tx_in_vm_with_optional_compression(tx, vm) + } else { + self.execute_tx_in_vm(tx, vm) + }; latency.observe(); APP_METRICS.processed_txs[&TxStage::StateKeeper].inc(); APP_METRICS.processed_l1_txs[&TxStage::StateKeeper].inc_by(tx.is_l1().into()); @@ -378,7 +387,7 @@ impl BatchExecutor { }; } - let tx_metrics = Self::get_execution_metrics(Some(tx), &tx_result); + let tx_metrics = ExecutionMetricsForCriteria::new(Some(tx), &tx_result); let (bootloader_dry_run_result, bootloader_dry_run_metrics) = self.dryrun_block_tip(vm); match &bootloader_dry_run_result.result { @@ -434,11 +443,7 @@ impl BatchExecutor { result } - // Err when transaction is rejected. 
- // Ok(TxExecutionStatus::Success) when the transaction succeeded - // Ok(TxExecutionStatus::Failure) when the transaction failed. - // Note that failed transactions are considered properly processed and are included in blocks - fn execute_tx_in_vm( + fn execute_tx_in_vm_with_optional_compression( &self, tx: &Transaction, vm: &mut VmInstance, @@ -453,8 +458,8 @@ impl BatchExecutor { // that will not be published (e.g. due to out of gas), we use the following scheme: // We try to execute the transaction with compressed bytecodes. // If it fails and the compressed bytecodes have not been published, - // it means that there is no sense in pollutting the space of compressed bytecodes, - // and so we reexecute the transaction, but without compressions. + // it means that there is no sense in polluting the space of compressed bytecodes, + // and so we re-execute the transaction, but without compression. // Saving the snapshot before executing vm.make_snapshot(); @@ -466,7 +471,7 @@ impl BatchExecutor { vec![] }; - if let Ok(result) = + if let (Ok(()), result) = vm.inspect_transaction_with_bytecode_compression(tracer.into(), tx.clone(), true) { let compressed_bytecodes = vm.get_last_tx_compressed_bytecodes(); @@ -487,8 +492,10 @@ impl BatchExecutor { vec![] }; - let result = vm - .inspect_transaction_with_bytecode_compression(tracer.into(), tx.clone(), false) + let result = + vm.inspect_transaction_with_bytecode_compression(tracer.into(), tx.clone(), false); + result + .0 .expect("Compression can't fail if we don't apply it"); let compressed_bytecodes = vm.get_last_tx_compressed_bytecodes(); @@ -498,7 +505,46 @@ impl BatchExecutor { .unwrap() .take() .unwrap_or_default(); - (result, compressed_bytecodes, trace) + (result.1, compressed_bytecodes, trace) + } + + // Err when transaction is rejected. + // `Ok(TxExecutionStatus::Success)` when the transaction succeeded + // `Ok(TxExecutionStatus::Failure)` when the transaction failed. + // Note that failed transactions are considered properly processed and are included in blocks + fn execute_tx_in_vm( + &self, + tx: &Transaction, + vm: &mut VmInstance, + ) -> ( + VmExecutionResultAndLogs, + Vec, + Vec, + ) { + let call_tracer_result = Arc::new(OnceCell::default()); + let tracer = if self.save_call_traces { + vec![CallTracer::new(call_tracer_result.clone()).into_tracer_pointer()] + } else { + vec![] + }; + + let (published_bytecodes, mut result) = + vm.inspect_transaction_with_bytecode_compression(tracer.into(), tx.clone(), true); + if published_bytecodes.is_ok() { + let compressed_bytecodes = vm.get_last_tx_compressed_bytecodes(); + + let trace = Arc::try_unwrap(call_tracer_result) + .unwrap() + .take() + .unwrap_or_default(); + (result, compressed_bytecodes, trace) + } else { + // Transaction failed to publish bytecodes, we reject it so initiator doesn't pay fee. 
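The comments above describe the snapshot-and-retry scheme: execute with compressed bytecodes first, and if they cannot be published, roll back and re-execute without compression. A minimal sketch of that control flow with simplified stand-in types (`Vm` and `ExecResult` are illustrative, not the `multivm` API):

```rust
struct Vm {
    compression_ok: bool,
}
struct ExecResult;

impl Vm {
    fn make_snapshot(&mut self) {}
    fn rollback_to_snapshot(&mut self) {}
    fn pop_snapshot_no_rollback(&mut self) {}
    /// `Err(())` models "compressed bytecodes could not be published".
    fn execute(&mut self, with_compression: bool) -> Result<ExecResult, ()> {
        if with_compression && !self.compression_ok {
            Err(())
        } else {
            Ok(ExecResult)
        }
    }
}

fn execute_with_optional_compression(vm: &mut Vm) -> ExecResult {
    // Snapshot first, so a failed compressed run can be undone.
    vm.make_snapshot();
    match vm.execute(true) {
        Ok(result) => {
            vm.pop_snapshot_no_rollback();
            result
        }
        Err(()) => {
            // No point in polluting the space of compressed bytecodes:
            // undo the failed run and retry without compression.
            vm.rollback_to_snapshot();
            vm.execute(false)
                .expect("compression can't fail if we don't apply it")
        }
    }
}

fn main() {
    let mut vm = Vm { compression_ok: false };
    let _result = execute_with_optional_compression(&mut vm); // takes the fallback path
}
```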
+ result.result = ExecutionResult::Halt { + reason: Halt::FailedToPublishCompressedBytecodes, + }; + (result, Default::default(), Default::default()) + } } fn dryrun_block_tip( @@ -520,7 +566,7 @@ impl BatchExecutor { let stage_latency = KEEPER_METRICS.tx_execution_time[&TxExecutionStage::DryRunGetExecutionMetrics].start(); - let metrics = Self::get_execution_metrics(None, &block_tip_result); + let metrics = ExecutionMetricsForCriteria::new(None, &block_tip_result); stage_latency.observe(); let stage_latency = KEEPER_METRICS.tx_execution_time @@ -532,20 +578,4 @@ impl BatchExecutor { total_latency.observe(); (block_tip_result, metrics) } - - fn get_execution_metrics( - tx: Option<&Transaction>, - execution_result: &VmExecutionResultAndLogs, - ) -> ExecutionMetricsForCriteria { - let execution_metrics = execution_result.get_execution_metrics(tx); - let l1_gas = match tx { - Some(tx) => gas_count_from_tx_and_metrics(tx, &execution_metrics), - None => gas_count_from_metrics(&execution_metrics), - }; - - ExecutionMetricsForCriteria { - l1_gas, - execution_metrics, - } - } } diff --git a/core/lib/zksync_core/src/state_keeper/batch_executor/tests/mod.rs b/core/lib/zksync_core/src/state_keeper/batch_executor/tests/mod.rs index 05a8220bb83..362afe20437 100644 --- a/core/lib/zksync_core/src/state_keeper/batch_executor/tests/mod.rs +++ b/core/lib/zksync_core/src/state_keeper/batch_executor/tests/mod.rs @@ -1,15 +1,13 @@ use assert_matches::assert_matches; - use zksync_dal::ConnectionPool; +use zksync_test_account::Account; use zksync_types::PriorityOpId; -mod tester; - use self::tester::Tester; use super::TxExecutionResult; use crate::state_keeper::batch_executor::tests::tester::{AccountLoadNextExecutable, TestConfig}; -use zksync_test_account::Account; +mod tester; /// Ensures that the transaction was executed successfully. fn assert_executed(execution_result: &TxExecutionResult) { diff --git a/core/lib/zksync_core/src/state_keeper/batch_executor/tests/tester.rs b/core/lib/zksync_core/src/state_keeper/batch_executor/tests/tester.rs index cd72f3eeb07..413a12bdf2e 100644 --- a/core/lib/zksync_core/src/state_keeper/batch_executor/tests/tester.rs +++ b/core/lib/zksync_core/src/state_keeper/batch_executor/tests/tester.rs @@ -1,11 +1,11 @@ //! Testing harness for the batch executor. //! Contains helper functionality to initialize test context and perform tests without too much boilerplate. +use multivm::{ + interface::{L1BatchEnv, SystemEnv}, + vm_latest::constants::INITIAL_STORAGE_WRITE_PUBDATA_BYTES, +}; use tempfile::TempDir; - -use multivm::interface::{L1BatchEnv, SystemEnv}; -use multivm::vm_latest::constants::INITIAL_STORAGE_WRITE_PUBDATA_BYTES; - use zksync_config::configs::chain::StateKeeperConfig; use zksync_contracts::{get_loadnext_contract, test_contracts::LoadnextContractExecutionParams}; use zksync_dal::ConnectionPool; @@ -19,13 +19,15 @@ use zksync_types::{ }; use zksync_utils::u256_to_h256; -use crate::genesis::create_genesis_l1_batch; -use crate::state_keeper::{ - batch_executor::BatchExecutorHandle, - tests::{default_l1_batch_env, default_system_env, BASE_SYSTEM_CONTRACTS}, +use crate::{ + genesis::create_genesis_l1_batch, + state_keeper::{ + batch_executor::BatchExecutorHandle, + tests::{default_l1_batch_env, default_system_env, BASE_SYSTEM_CONTRACTS}, + }, }; -const DEFAULT_GAS_PER_PUBDATA: u32 = 100; +const DEFAULT_GAS_PER_PUBDATA: u32 = 10000; const CHAIN_ID: u32 = 270; /// Representation of configuration parameters used by the state keeper. 
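Call sites above switch from `Self::get_execution_metrics(..)` to `ExecutionMetricsForCriteria::new(..)`; judging by the removed helper, the constructor presumably carries the same logic. A sketch under that assumption, with stand-in types for the real `zksync_types`/`multivm` ones:

```rust
struct Transaction;
struct ExecutionMetrics;
struct BlockGasCount;
struct VmExecutionResultAndLogs;

impl VmExecutionResultAndLogs {
    fn get_execution_metrics(&self, _tx: Option<&Transaction>) -> ExecutionMetrics {
        ExecutionMetrics
    }
}

fn gas_count_from_tx_and_metrics(_tx: &Transaction, _m: &ExecutionMetrics) -> BlockGasCount {
    BlockGasCount
}

fn gas_count_from_metrics(_m: &ExecutionMetrics) -> BlockGasCount {
    BlockGasCount
}

struct ExecutionMetricsForCriteria {
    l1_gas: BlockGasCount,
    execution_metrics: ExecutionMetrics,
}

impl ExecutionMetricsForCriteria {
    // Same logic as the removed `get_execution_metrics`, now a constructor:
    // L1 gas is derived differently depending on whether a tx is present.
    fn new(tx: Option<&Transaction>, result: &VmExecutionResultAndLogs) -> Self {
        let execution_metrics = result.get_execution_metrics(tx);
        let l1_gas = match tx {
            Some(tx) => gas_count_from_tx_and_metrics(tx, &execution_metrics),
            None => gas_count_from_metrics(&execution_metrics),
        };
        Self {
            l1_gas,
            execution_metrics,
        }
    }
}
```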
@@ -110,6 +112,7 @@ impl Tester { l1_batch, system_env, self.config.upload_witness_inputs_to_gcs, + false, ) } diff --git a/core/lib/zksync_core/src/state_keeper/extractors.rs b/core/lib/zksync_core/src/state_keeper/extractors.rs index e542b5b0959..e31020734f5 100644 --- a/core/lib/zksync_core/src/state_keeper/extractors.rs +++ b/core/lib/zksync_core/src/state_keeper/extractors.rs @@ -1,13 +1,12 @@ //! Pure functions that convert data as required by the state keeper. -use chrono::{DateTime, TimeZone, Utc}; - use std::{ convert::TryFrom, fmt, time::{Duration, Instant}, }; +use chrono::{DateTime, TimeZone, Utc}; use zksync_dal::StorageProcessor; use zksync_types::{L1BatchNumber, U256}; use zksync_utils::h256_to_u256; diff --git a/core/lib/zksync_core/src/state_keeper/io/common.rs b/core/lib/zksync_core/src/state_keeper/io/common.rs index c99508322ef..dad43b7ee9d 100644 --- a/core/lib/zksync_core/src/state_keeper/io/common.rs +++ b/core/lib/zksync_core/src/state_keeper/io/common.rs @@ -1,12 +1,14 @@ use std::time::Duration; -use multivm::interface::{L1BatchEnv, L2BlockEnv, SystemEnv, TxExecutionMode}; -use multivm::vm_latest::constants::BLOCK_GAS_LIMIT; +use multivm::{ + interface::{L1BatchEnv, L2BlockEnv, SystemEnv, TxExecutionMode}, + vm_latest::constants::BLOCK_GAS_LIMIT, +}; use zksync_contracts::BaseSystemContracts; use zksync_dal::StorageProcessor; use zksync_types::{ - Address, L1BatchNumber, L2ChainId, MiniblockNumber, ProtocolVersionId, H256, U256, - ZKPORTER_IS_AVAILABLE, + fee_model::BatchFeeInput, Address, L1BatchNumber, L2ChainId, MiniblockNumber, + ProtocolVersionId, H256, U256, ZKPORTER_IS_AVAILABLE, }; use zksync_utils::u256_to_h256; @@ -20,8 +22,7 @@ pub(crate) fn l1_batch_params( fee_account: Address, l1_batch_timestamp: u64, previous_batch_hash: U256, - l1_gas_price: u64, - fair_l2_gas_price: u64, + fee_input: BatchFeeInput, first_miniblock_number: MiniblockNumber, prev_miniblock_hash: H256, base_system_contracts: BaseSystemContracts, @@ -44,8 +45,7 @@ pub(crate) fn l1_batch_params( previous_batch_hash: Some(u256_to_h256(previous_batch_hash)), number: current_l1_batch_number, timestamp: l1_batch_timestamp, - l1_gas_price, - fair_l2_gas_price, + fee_input, fee_account, enforced_base_fee: None, first_l2_block: L2BlockEnv { @@ -122,8 +122,7 @@ pub(crate) async fn load_l1_batch_params( fee_account, pending_miniblock_header.timestamp, previous_l1_batch_hash, - pending_miniblock_header.l1_gas_price, - pending_miniblock_header.l2_fair_gas_price, + pending_miniblock_header.batch_fee_input, pending_miniblock_number, prev_miniblock_hash, base_system_contracts, diff --git a/core/lib/zksync_core/src/state_keeper/io/mempool.rs b/core/lib/zksync_core/src/state_keeper/io/mempool.rs index f10ad87580c..9ba6ff0ac9f 100644 --- a/core/lib/zksync_core/src/state_keeper/io/mempool.rs +++ b/core/lib/zksync_core/src/state_keeper/io/mempool.rs @@ -1,5 +1,3 @@ -use async_trait::async_trait; - use std::{ cmp, collections::HashMap, @@ -7,9 +5,11 @@ use std::{ time::{Duration, Instant}, }; -use multivm::interface::{FinishedL1Batch, L1BatchEnv, SystemEnv}; -use multivm::vm_latest::utils::fee::derive_base_fee_and_gas_per_pubdata; - +use async_trait::async_trait; +use multivm::{ + interface::{FinishedL1Batch, L1BatchEnv, SystemEnv}, + utils::derive_base_fee_and_gas_per_pubdata, +}; use zksync_config::configs::chain::StateKeeperConfig; use zksync_dal::ConnectionPool; use zksync_mempool::L2TxFilter; @@ -23,7 +23,7 @@ use zksync_types::{ use zksync_utils::time::millis_since_epoch; use crate::{ - 
l1_gas_price::L1GasPriceProvider, + fee_model::BatchFeeModelInputProvider, state_keeper::{ extractors, io::{ @@ -43,21 +43,20 @@ use crate::{ /// Decides which batch parameters should be used for the new batch. /// This is an IO for the main server application. #[derive(Debug)] -pub(crate) struct MempoolIO { +pub(crate) struct MempoolIO { mempool: MempoolGuard, pool: ConnectionPool, - object_store: Box, + object_store: Arc, timeout_sealer: TimeoutSealer, filter: L2TxFilter, current_miniblock_number: MiniblockNumber, miniblock_sealer_handle: MiniblockSealerHandle, current_l1_batch_number: L1BatchNumber, fee_account: Address, - fair_l2_gas_price: u64, validation_computational_gas_limit: u32, delay_interval: Duration, // Used to keep track of gas prices to set accepted price per pubdata byte in blocks. - l1_gas_price_provider: Arc, + batch_fee_input_provider: Arc, l2_erc20_bridge_addr: Address, chain_id: L2ChainId, @@ -65,10 +64,7 @@ pub(crate) struct MempoolIO { virtual_blocks_per_miniblock: u32, } -impl IoSealCriteria for MempoolIO -where - G: L1GasPriceProvider + 'static + Send + Sync, -{ +impl IoSealCriteria for MempoolIO { fn should_seal_l1_batch_unconditionally(&mut self, manager: &UpdatesManager) -> bool { self.timeout_sealer .should_seal_l1_batch_unconditionally(manager) @@ -80,10 +76,7 @@ where } #[async_trait] -impl StateKeeperIO for MempoolIO -where - G: L1GasPriceProvider + 'static + Send + Sync, -{ +impl StateKeeperIO for MempoolIO { fn current_l1_batch_number(&self) -> L1BatchNumber { self.current_l1_batch_number } @@ -113,12 +106,10 @@ where .await?; // Initialize the filter for the transactions that come after the pending batch. // We use values from the pending block to match the filter with one used before the restart. - let (base_fee, gas_per_pubdata) = derive_base_fee_and_gas_per_pubdata( - l1_batch_env.l1_gas_price, - l1_batch_env.fair_l2_gas_price, - ); + let (base_fee, gas_per_pubdata) = + derive_base_fee_and_gas_per_pubdata(l1_batch_env.fee_input, system_env.version.into()); self.filter = L2TxFilter { - l1_gas_price: l1_batch_env.l1_gas_price, + fee_input: l1_batch_env.fee_input, fee_per_gas: base_fee, gas_per_pubdata: gas_per_pubdata as u32, }; @@ -136,26 +127,17 @@ where ) -> Option<(SystemEnv, L1BatchEnv)> { let deadline = Instant::now() + max_wait; + let prev_l1_batch_hash = self.load_previous_l1_batch_hash().await; + + let MiniblockHeader { + timestamp: prev_miniblock_timestamp, + hash: prev_miniblock_hash, + .. + } = self.load_previous_miniblock_header().await; + // Block until at least one transaction in the mempool can match the filter (or timeout happens). // This is needed to ensure that block timestamp is not too old. for _ in 0..poll_iters(self.delay_interval, max_wait) { - // We create a new filter each time, since parameters may change and a previously - // ignored transaction in the mempool may be scheduled for the execution. - self.filter = l2_tx_filter(self.l1_gas_price_provider.as_ref(), self.fair_l2_gas_price); - // We only need to get the root hash when we're certain that we have a new transaction. - if !self.mempool.has_next(&self.filter) { - tokio::time::sleep(self.delay_interval).await; - continue; - } - - let prev_l1_batch_hash = self.load_previous_l1_batch_hash().await; - - let MiniblockHeader { - timestamp: prev_miniblock_timestamp, - hash: prev_miniblock_hash, - .. - } = self.load_previous_miniblock_header().await; - // We cannot create two L1 batches or miniblocks with the same timestamp (forbidden by the bootloader). 
// Hence, we wait until the current timestamp is larger than the timestamp of the previous miniblock. // We can use `timeout_at` since `sleep_past` is cancel-safe; it only uses `sleep()` async calls. @@ -165,11 +147,10 @@ where ); let current_timestamp = current_timestamp.await.ok()?; - tracing::info!( - "(l1_gas_price, fair_l2_gas_price) for L1 batch #{} is ({}, {})", + tracing::trace!( + "Fee input for L1 batch #{} is {:#?}", self.current_l1_batch_number.0, - self.filter.l1_gas_price, - self.fair_l2_gas_price + self.filter.fee_input ); let mut storage = self.pool.access_storage().await.unwrap(); let (base_system_contracts, protocol_version) = storage @@ -177,13 +158,24 @@ where .base_system_contracts_by_timestamp(current_timestamp) .await; + // We create a new filter each time, since parameters may change and a previously + // ignored transaction in the mempool may be scheduled for the execution. + self.filter = l2_tx_filter( + self.batch_fee_input_provider.as_ref(), + protocol_version.into(), + ); + // We only need to get the root hash when we're certain that we have a new transaction. + if !self.mempool.has_next(&self.filter) { + tokio::time::sleep(self.delay_interval).await; + continue; + } + return Some(l1_batch_params( self.current_l1_batch_number, self.fee_account, current_timestamp, prev_l1_batch_hash, - self.filter.l1_gas_price, - self.fair_l2_gas_price, + self.filter.fee_input, self.current_miniblock_number, prev_miniblock_hash, base_system_contracts, @@ -274,6 +266,7 @@ where self.current_l1_batch_number, self.current_miniblock_number, self.l2_erc20_bridge_addr, + false, ); self.miniblock_sealer_handle.submit(command).await; self.current_miniblock_number += 1; @@ -398,13 +391,13 @@ async fn sleep_past(timestamp: u64, miniblock: MiniblockNumber) -> u64 { } } -impl MempoolIO { +impl MempoolIO { #[allow(clippy::too_many_arguments)] pub(in crate::state_keeper) async fn new( mempool: MempoolGuard, - object_store: Box, + object_store: Arc, miniblock_sealer_handle: MiniblockSealerHandle, - l1_gas_price_provider: Arc, + batch_fee_input_provider: Arc, pool: ConnectionPool, config: &StateKeeperConfig, delay_interval: Duration, @@ -422,16 +415,19 @@ impl MempoolIO { ); let mut storage = pool.access_storage_tagged("state_keeper").await.unwrap(); - let last_sealed_l1_batch_header = storage + // TODO (PLA-703): Support no L1 batches / miniblocks in the storage + let last_sealed_l1_batch_number = storage .blocks_dal() - .get_newest_l1_batch_header() + .get_sealed_l1_batch_number() .await - .unwrap(); + .unwrap() + .expect("No L1 batches sealed"); let last_miniblock_number = storage .blocks_dal() .get_sealed_miniblock_number() .await - .unwrap(); + .unwrap() + .expect("empty storage not supported"); // FIXME (PLA-703): handle empty storage drop(storage); @@ -442,14 +438,13 @@ impl MempoolIO { timeout_sealer: TimeoutSealer::new(config), filter: L2TxFilter::default(), // ^ Will be initialized properly on the first newly opened batch - current_l1_batch_number: last_sealed_l1_batch_header.number + 1, + current_l1_batch_number: last_sealed_l1_batch_number + 1, miniblock_sealer_handle, current_miniblock_number: last_miniblock_number + 1, fee_account: config.fee_account_addr, - fair_l2_gas_price: config.fair_l2_gas_price, validation_computational_gas_limit, delay_interval, - l1_gas_price_provider, + batch_fee_input_provider, l2_erc20_bridge_addr, chain_id, virtual_blocks_interval: config.virtual_blocks_interval, @@ -514,7 +509,7 @@ impl MempoolIO { /// Getters required for testing the MempoolIO. 
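The timestamp rule quoted above (no two batches or miniblocks may share a timestamp) is enforced by `sleep_past`. A simplified sketch of such a cancel-safe waiting loop, assuming tokio and a `seconds_since_epoch` helper like the one these modules import; the real function also logs and handles larger clock skew:

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

fn seconds_since_epoch() -> u64 {
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("time went backwards")
        .as_secs()
}

/// Waits until the wall clock is *strictly* past `prev_timestamp` and returns
/// the new timestamp. Cancel-safe: the only await point is `sleep()`.
async fn sleep_past(prev_timestamp: u64) -> u64 {
    loop {
        let current = seconds_since_epoch();
        if current > prev_timestamp {
            return current;
        }
        // Same second as the previous block (or minor skew): poll until the
        // timestamp strictly increases, so the bootloader never sees two
        // blocks with equal timestamps.
        tokio::time::sleep(Duration::from_millis(100)).await;
    }
}
```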
#[cfg(test)] -impl MempoolIO { +impl MempoolIO { pub(super) fn filter(&self) -> &L2TxFilter { &self.filter } @@ -523,7 +518,6 @@ impl MempoolIO { #[cfg(test)] mod tests { use tokio::time::timeout_at; - use zksync_utils::time::seconds_since_epoch; use super::*; diff --git a/core/lib/zksync_core/src/state_keeper/io/mod.rs b/core/lib/zksync_core/src/state_keeper/io/mod.rs index 06397c3cd17..d1366858116 100644 --- a/core/lib/zksync_core/src/state_keeper/io/mod.rs +++ b/core/lib/zksync_core/src/state_keeper/io/mod.rs @@ -1,13 +1,11 @@ -use async_trait::async_trait; -use tokio::sync::{mpsc, oneshot}; - use std::{ fmt, time::{Duration, Instant}, }; +use async_trait::async_trait; use multivm::interface::{FinishedL1Batch, L1BatchEnv, SystemEnv}; - +use tokio::sync::{mpsc, oneshot}; use zksync_dal::ConnectionPool; use zksync_types::{ block::MiniblockExecutionData, protocol_version::ProtocolUpgradeTx, @@ -15,10 +13,6 @@ use zksync_types::{ Transaction, }; -pub(crate) mod common; -pub(crate) mod mempool; -pub(crate) mod seal_logic; - pub(crate) use self::mempool::MempoolIO; use super::{ metrics::{MiniblockQueueStage, MINIBLOCK_METRICS}, @@ -26,6 +20,9 @@ use super::{ updates::{MiniblockSealCommand, UpdatesManager}, }; +pub(crate) mod common; +pub(crate) mod mempool; +pub(crate) mod seal_logic; #[cfg(test)] mod tests; @@ -130,7 +127,7 @@ struct Completable { /// Handle for [`MiniblockSealer`] allowing to submit [`MiniblockSealCommand`]s. #[derive(Debug)] -pub(crate) struct MiniblockSealerHandle { +pub struct MiniblockSealerHandle { commands_sender: mpsc::Sender>, latest_completion_receiver: Option>, // If true, `submit()` will wait for the operation to complete. @@ -143,8 +140,8 @@ impl MiniblockSealerHandle { /// Submits a new sealing `command` to the sealer that this handle is attached to. /// /// If there are currently too many unprocessed commands, this method will wait until - /// enough of them are processed (i.e., there is backpressure). - pub async fn submit(&mut self, command: MiniblockSealCommand) { + /// enough of them are processed (i.e., there is back pressure). + pub(crate) async fn submit(&mut self, command: MiniblockSealCommand) { let miniblock_number = command.miniblock_number; tracing::debug!( "Enqueuing sealing command for miniblock #{miniblock_number} with #{} txs (L1 batch #{})", @@ -209,7 +206,7 @@ impl MiniblockSealerHandle { /// Component responsible for sealing miniblocks (i.e., storing their data to Postgres). #[derive(Debug)] -pub(crate) struct MiniblockSealer { +pub struct MiniblockSealer { pool: ConnectionPool, is_sync: bool, // Weak sender handle to get queue capacity stats. @@ -220,10 +217,7 @@ pub(crate) struct MiniblockSealer { impl MiniblockSealer { /// Creates a sealer that will use the provided Postgres connection and will have the specified /// `command_capacity` for unprocessed sealing commands. - pub(crate) fn new( - pool: ConnectionPool, - mut command_capacity: usize, - ) -> (Self, MiniblockSealerHandle) { + pub fn new(pool: ConnectionPool, mut command_capacity: usize) -> (Self, MiniblockSealerHandle) { let is_sync = command_capacity == 0; command_capacity = command_capacity.max(1); diff --git a/core/lib/zksync_core/src/state_keeper/io/seal_logic.rs b/core/lib/zksync_core/src/state_keeper/io/seal_logic.rs index ca2dc641909..e6eaf635470 100644 --- a/core/lib/zksync_core/src/state_keeper/io/seal_logic.rs +++ b/core/lib/zksync_core/src/state_keeper/io/seal_logic.rs @@ -1,31 +1,34 @@ //! 
This module is a source-of-truth on what is expected to be done when sealing a block. //! It contains the logic of the block sealing, which is used by both the mempool-based and external node IO. -use itertools::Itertools; use std::{ collections::HashMap, time::{Duration, Instant}, }; -use multivm::interface::{FinishedL1Batch, L1BatchEnv}; +use itertools::Itertools; +use multivm::{ + interface::{FinishedL1Batch, L1BatchEnv}, + utils::{get_batch_base_fee, get_max_gas_per_pubdata_byte}, +}; use zksync_dal::StorageProcessor; use zksync_system_constants::ACCOUNT_CODE_STORAGE_ADDRESS; use zksync_types::{ - block::unpack_block_info, - l2_to_l1_log::{SystemL2ToL1Log, UserL2ToL1Log}, - CURRENT_VIRTUAL_BLOCK_INFO_POSITION, SYSTEM_CONTEXT_ADDRESS, -}; -use zksync_types::{ - block::{L1BatchHeader, MiniblockHeader}, + block::{unpack_block_info, L1BatchHeader, MiniblockHeader}, event::{extract_added_tokens, extract_long_l2_to_l1_messages}, + l1::L1Tx, + l2::L2Tx, + l2_to_l1_log::{SystemL2ToL1Log, UserL2ToL1Log}, + protocol_version::ProtocolUpgradeTx, storage_writes_deduplicator::{ModifiedSlot, StorageWritesDeduplicator}, tx::{ tx_execution_info::DeduplicatedWritesMetrics, IncludedTxLocation, TransactionExecutionResult, }, zkevm_test_harness::witness::sort_storage_access::sort_storage_access_queries, - AccountTreeId, Address, ExecuteTransactionCommon, L1BatchNumber, LogQuery, MiniblockNumber, - StorageKey, StorageLog, StorageLogQuery, StorageValue, Transaction, VmEvent, H256, + AccountTreeId, Address, ExecuteTransactionCommon, L1BatchNumber, L1BlockNumber, LogQuery, + MiniblockNumber, ProtocolVersionId, StorageKey, StorageLog, StorageLogQuery, StorageValue, + Transaction, VmEvent, CURRENT_VIRTUAL_BLOCK_INFO_POSITION, H256, SYSTEM_CONTEXT_ADDRESS, }; // TODO (SMA-1206): use seconds instead of milliseconds. use zksync_utils::{h256_to_u256, time::millis_since_epoch, u256_to_h256}; @@ -35,6 +38,7 @@ use crate::{ state_keeper::{ extractors, metrics::{L1BatchSealStage, MiniblockSealStage, L1_BATCH_METRICS, MINIBLOCK_METRICS}, + types::ExecutionMetricsForCriteria, updates::{MiniblockSealCommand, UpdatesManager}, }, }; @@ -57,12 +61,21 @@ impl UpdatesManager { progress.observe(None); let progress = L1_BATCH_METRICS.start(L1BatchSealStage::FictiveMiniblock); - self.extend_from_fictive_transaction(finished_batch.block_tip_execution_result); + let ExecutionMetricsForCriteria { + l1_gas: batch_tip_l1_gas, + execution_metrics: batch_tip_execution_metrics, + } = ExecutionMetricsForCriteria::new(None, &finished_batch.block_tip_execution_result); + self.extend_from_fictive_transaction( + finished_batch.block_tip_execution_result, + batch_tip_l1_gas, + batch_tip_execution_metrics, + ); // Seal fictive miniblock with last events and storage logs. let miniblock_command = self.seal_miniblock_command( l1_batch_env.number, current_miniblock_number, l2_erc20_bridge_addr, + false, // fictive miniblocks don't have txs, so it's fine to pass `false` here. 
); miniblock_command.seal_inner(&mut transaction, true).await; progress.observe(None); @@ -125,12 +138,13 @@ impl UpdatesManager { l2_to_l1_messages, bloom: Default::default(), used_contract_hashes: finished_batch.final_execution_state.used_contract_hashes, - base_fee_per_gas: l1_batch_env.base_fee(), + base_fee_per_gas: get_batch_base_fee(l1_batch_env, self.protocol_version().into()), l1_gas_price: self.l1_gas_price(), l2_fair_gas_price: self.fair_l2_gas_price(), base_system_contracts_hashes: self.base_system_contract_hashes(), protocol_version: Some(self.protocol_version()), system_logs: finished_batch.final_execution_state.system_logs, + pubdata_input: finished_batch.pubdata_input, }; let events_queue = finished_batch @@ -142,9 +156,12 @@ impl UpdatesManager { .insert_l1_batch( &l1_batch, finished_batch.final_bootloader_memory.as_ref().unwrap(), - self.l1_batch.l1_gas_count, + self.pending_l1_gas_count(), &events_queue, &finished_batch.final_execution_state.storage_refunds, + self.pending_execution_metrics() + .estimated_circuits_used + .ceil() as u32, ) .await .unwrap(); @@ -274,6 +291,36 @@ impl MiniblockSealCommand { async fn seal_inner(&self, storage: &mut StorageProcessor<'_>, is_fictive: bool) { self.assert_valid_miniblock(is_fictive); + let mut transaction = storage.start_transaction().await.unwrap(); + if self.pre_insert_txs { + let progress = MINIBLOCK_METRICS.start(MiniblockSealStage::PreInsertTxs, is_fictive); + for tx in &self.miniblock.executed_transactions { + if let Ok(l1_tx) = L1Tx::try_from(tx.transaction.clone()) { + let l1_block_number = L1BlockNumber(l1_tx.common_data.eth_block as u32); + transaction + .transactions_dal() + .insert_transaction_l1(l1_tx, l1_block_number) + .await; + } else if let Ok(l2_tx) = L2Tx::try_from(tx.transaction.clone()) { + // Using `Default` for execution metrics should be OK here, since this data is not used on the EN. 
+ transaction + .transactions_dal() + .insert_transaction_l2(l2_tx, Default::default()) + .await; + } else if let Ok(protocol_system_upgrade_tx) = + ProtocolUpgradeTx::try_from(tx.transaction.clone()) + { + transaction + .transactions_dal() + .insert_system_transaction(protocol_system_upgrade_tx) + .await; + } else { + unreachable!("Transaction {:?} is neither L1 nor L2", tx.transaction); + } + } + progress.observe(Some(self.miniblock.executed_transactions.len())); + } + let l1_batch_number = self.l1_batch_number; let miniblock_number = self.miniblock_number; let started_at = Instant::now(); @@ -291,7 +338,6 @@ impl MiniblockSealCommand { event_count = self.miniblock.events.len() ); - let mut transaction = storage.start_transaction().await.unwrap(); let miniblock_header = MiniblockHeader { number: miniblock_number, timestamp: self.miniblock.timestamp, @@ -299,10 +345,14 @@ impl MiniblockSealCommand { l1_tx_count: l1_tx_count as u16, l2_tx_count: l2_tx_count as u16, base_fee_per_gas: self.base_fee_per_gas, - l1_gas_price: self.l1_gas_price, - l2_fair_gas_price: self.fair_l2_gas_price, + batch_fee_input: self.fee_input, base_system_contracts_hashes: self.base_system_contracts_hashes, protocol_version: self.protocol_version, + gas_per_pubdata_limit: get_max_gas_per_pubdata_byte( + self.protocol_version + .unwrap_or(ProtocolVersionId::last_potentially_undefined()) + .into(), + ), virtual_blocks: self.miniblock.virtual_blocks, }; diff --git a/core/lib/zksync_core/src/state_keeper/io/tests/mod.rs b/core/lib/zksync_core/src/state_keeper/io/tests/mod.rs index 0c13a7a614b..b210307f2b9 100644 --- a/core/lib/zksync_core/src/state_keeper/io/tests/mod.rs +++ b/core/lib/zksync_core/src/state_keeper/io/tests/mod.rs @@ -1,33 +1,35 @@ -use futures::FutureExt; - use std::time::Duration; -use multivm::vm_latest::utils::fee::derive_base_fee_and_gas_per_pubdata; +use futures::FutureExt; +use multivm::utils::derive_base_fee_and_gas_per_pubdata; use zksync_contracts::BaseSystemContractsHashes; use zksync_dal::ConnectionPool; use zksync_mempool::L2TxFilter; use zksync_types::{ - block::BlockGasCount, tx::ExecutionMetrics, AccountTreeId, Address, L1BatchNumber, - MiniblockNumber, ProtocolVersionId, StorageKey, VmEvent, H256, U256, + block::BlockGasCount, + fee_model::{BatchFeeInput, PubdataIndependentBatchFeeModelInput}, + tx::ExecutionMetrics, + AccountTreeId, Address, L1BatchNumber, MiniblockNumber, ProtocolVersionId, StorageKey, VmEvent, + H256, U256, }; use zksync_utils::time::seconds_since_epoch; -use crate::state_keeper::tests::{create_l1_batch_metadata, default_l1_batch_env}; - -use crate::state_keeper::{ - io::{MiniblockParams, MiniblockSealer, StateKeeperIO}, - mempool_actor::l2_tx_filter, - tests::{ - create_execution_result, create_transaction, create_updates_manager, - default_vm_block_result, Query, +use self::tester::Tester; +use crate::{ + state_keeper::{ + io::{MiniblockParams, MiniblockSealer, StateKeeperIO}, + mempool_actor::l2_tx_filter, + tests::{ + create_execution_result, create_transaction, create_updates_manager, + default_l1_batch_env, default_vm_block_result, Query, + }, + updates::{MiniblockSealCommand, MiniblockUpdates, UpdatesManager}, }, - updates::{MiniblockSealCommand, MiniblockUpdates, UpdatesManager}, + utils::testonly::create_l1_batch_metadata, }; mod tester; -use self::tester::Tester; - /// Ensure that MempoolIO.filter is correctly initialized right after mempool initialization. 
#[tokio::test] async fn test_filter_initialization() { @@ -50,25 +52,25 @@ async fn test_filter_with_pending_batch() { tester.genesis(&connection_pool).await; - // Insert a sealed batch so there will be a prev_l1_batch_state_root. + // Insert a sealed batch so there will be a `prev_l1_batch_state_root`. // These gas values are random and don't matter for filter calculation as there will be a // pending batch the filter will be based off of. tester - .insert_miniblock(&connection_pool, 1, 5, 55, 555) + .insert_miniblock(&connection_pool, 1, 5, BatchFeeInput::l1_pegged(55, 555)) .await; tester.insert_sealed_batch(&connection_pool, 1).await; // Inserting a pending miniblock that isn't included in a sealed batch means there is a pending batch. // The gas values are randomly chosen but so affect filter values calculation. - let (give_l1_gas_price, give_fair_l2_gas_price) = (100, 1000); + + let fee_input = BatchFeeInput::PubdataIndependent(PubdataIndependentBatchFeeModelInput { + l1_gas_price: 100, + fair_l2_gas_price: 1000, + fair_pubdata_price: 500, + }); + tester - .insert_miniblock( - &connection_pool, - 2, - 10, - give_l1_gas_price, - give_fair_l2_gas_price, - ) + .insert_miniblock(&connection_pool, 2, 10, fee_input) .await; let (mut mempool, _) = tester.create_test_mempool_io(connection_pool, 1).await; @@ -77,33 +79,33 @@ async fn test_filter_with_pending_batch() { mempool.load_pending_batch().await; let (want_base_fee, want_gas_per_pubdata) = - derive_base_fee_and_gas_per_pubdata(give_l1_gas_price, give_fair_l2_gas_price); + derive_base_fee_and_gas_per_pubdata(fee_input, ProtocolVersionId::latest().into()); let want_filter = L2TxFilter { - l1_gas_price: give_l1_gas_price, + fee_input, fee_per_gas: want_base_fee, gas_per_pubdata: want_gas_per_pubdata as u32, }; assert_eq!(mempool.filter(), &want_filter); } -/// Ensure that MempoolIO.filter is modified correctly if there is no pending batch. +/// Ensure that `MempoolIO.filter` is modified correctly if there is no pending batch. #[tokio::test] async fn test_filter_with_no_pending_batch() { let connection_pool = ConnectionPool::test_pool().await; let tester = Tester::new(); tester.genesis(&connection_pool).await; - // Insert a sealed batch so there will be a prev_l1_batch_state_root. + // Insert a sealed batch so there will be a `prev_l1_batch_state_root`. // These gas values are random and don't matter for filter calculation. tester - .insert_miniblock(&connection_pool, 1, 5, 55, 555) + .insert_miniblock(&connection_pool, 1, 5, BatchFeeInput::l1_pegged(55, 555)) .await; tester.insert_sealed_batch(&connection_pool, 1).await; // Create a copy of the tx filter that the mempool will use. let want_filter = l2_tx_filter( - &tester.create_gas_adjuster().await, - tester.fair_l2_gas_price(), + &tester.create_batch_fee_input_provider().await, + ProtocolVersionId::latest().into(), ); // Create a mempool without pending batch and ensure that filter is not initialized just yet. @@ -136,7 +138,7 @@ async fn test_timestamps_are_distinct( tester.set_timestamp(prev_miniblock_timestamp); tester - .insert_miniblock(&connection_pool, 1, 5, 55, 555) + .insert_miniblock(&connection_pool, 1, 5, BatchFeeInput::l1_pegged(55, 555)) .await; if delay_prev_miniblock_compared_to_batch { tester.set_timestamp(prev_miniblock_timestamp - 1); @@ -146,8 +148,8 @@ async fn test_timestamps_are_distinct( let (mut mempool, mut guard) = tester.create_test_mempool_io(connection_pool, 1).await; // Insert a transaction to trigger L1 batch creation. 
let tx_filter = l2_tx_filter( - &tester.create_gas_adjuster().await, - tester.fair_l2_gas_price(), + &tester.create_batch_fee_input_provider().await, + ProtocolVersionId::latest().into(), ); tester.insert_tx(&mut guard, tx_filter.fee_per_gas, tx_filter.gas_per_pubdata); @@ -189,8 +191,7 @@ async fn l1_batch_timestamp_respects_prev_miniblock_with_clock_skew() { #[tokio::test] async fn processing_storage_logs_when_sealing_miniblock() { let connection_pool = ConnectionPool::test_pool().await; - let mut miniblock = - MiniblockUpdates::new(0, 1, H256::zero(), 1, Some(ProtocolVersionId::latest())); + let mut miniblock = MiniblockUpdates::new(0, 1, H256::zero(), 1, ProtocolVersionId::latest()); let tx = create_transaction(10, 100); let storage_logs = [ @@ -239,12 +240,16 @@ async fn processing_storage_logs_when_sealing_miniblock() { miniblock_number: MiniblockNumber(3), miniblock, first_tx_index: 0, - l1_gas_price: 100, - fair_l2_gas_price: 100, + fee_input: BatchFeeInput::PubdataIndependent(PubdataIndependentBatchFeeModelInput { + l1_gas_price: 100, + fair_l2_gas_price: 100, + fair_pubdata_price: 100, + }), base_fee_per_gas: 10, base_system_contracts_hashes: BaseSystemContractsHashes::default(), protocol_version: Some(ProtocolVersionId::latest()), l2_erc20_bridge_addr: Address::default(), + pre_insert_txs: false, }; let mut conn = connection_pool .access_storage_tagged("state_keeper") @@ -285,8 +290,7 @@ async fn processing_storage_logs_when_sealing_miniblock() { async fn processing_events_when_sealing_miniblock() { let pool = ConnectionPool::test_pool().await; let l1_batch_number = L1BatchNumber(2); - let mut miniblock = - MiniblockUpdates::new(0, 1, H256::zero(), 1, Some(ProtocolVersionId::latest())); + let mut miniblock = MiniblockUpdates::new(0, 1, H256::zero(), 1, ProtocolVersionId::latest()); let events = (0_u8..10).map(|i| VmEvent { location: (l1_batch_number, u32::from(i / 4)), @@ -315,12 +319,16 @@ async fn processing_events_when_sealing_miniblock() { miniblock_number, miniblock, first_tx_index: 0, - l1_gas_price: 100, - fair_l2_gas_price: 100, + fee_input: BatchFeeInput::PubdataIndependent(PubdataIndependentBatchFeeModelInput { + l1_gas_price: 100, + fair_l2_gas_price: 100, + fair_pubdata_price: 100, + }), base_fee_per_gas: 10, base_system_contracts_hashes: BaseSystemContractsHashes::default(), protocol_version: Some(ProtocolVersionId::latest()), l2_erc20_bridge_addr: Address::default(), + pre_insert_txs: false, }; let mut conn = pool.access_storage_tagged("state_keeper").await.unwrap(); conn.protocol_versions_dal() @@ -399,14 +407,14 @@ async fn test_miniblock_and_l1_batch_processing( .get_sealed_miniblock_number() .await .unwrap(), - MiniblockNumber(2) // + fictive miniblock + Some(MiniblockNumber(2)) // + fictive miniblock ); let l1_batch_header = conn .blocks_dal() .get_l1_batch_header(L1BatchNumber(1)) .await .unwrap() - .unwrap(); + .expect("No L1 batch #1"); assert_eq!(l1_batch_header.l2_tx_count, 1); assert!(l1_batch_header.is_finished); } @@ -434,6 +442,7 @@ async fn miniblock_sealer_handle_blocking() { L1BatchNumber(1), MiniblockNumber(1), Address::default(), + false, ); sealer_handle.submit(seal_command).await; @@ -442,6 +451,7 @@ async fn miniblock_sealer_handle_blocking() { L1BatchNumber(1), MiniblockNumber(2), Address::default(), + false, ); { let submit_future = sealer_handle.submit(seal_command); @@ -470,6 +480,7 @@ async fn miniblock_sealer_handle_blocking() { L1BatchNumber(2), MiniblockNumber(3), Address::default(), + false, ); 
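The tests above build `BatchFeeInput` two ways: via `l1_pegged(..)` and via a `PubdataIndependent(..)` struct literal. A plausible shape for that type, inferred purely from this usage (the real definition lives in `zksync_types::fee_model`; field types are assumptions):

```rust
#[derive(Debug, Clone, Copy)]
pub struct L1PeggedBatchFeeModelInput {
    pub l1_gas_price: u64,
    pub fair_l2_gas_price: u64,
}

#[derive(Debug, Clone, Copy)]
pub struct PubdataIndependentBatchFeeModelInput {
    pub l1_gas_price: u64,
    pub fair_l2_gas_price: u64,
    pub fair_pubdata_price: u64,
}

#[derive(Debug, Clone, Copy)]
pub enum BatchFeeInput {
    L1Pegged(L1PeggedBatchFeeModelInput),
    PubdataIndependent(PubdataIndependentBatchFeeModelInput),
}

impl BatchFeeInput {
    /// Convenience constructor matching `BatchFeeInput::l1_pegged(55, 555)` above.
    pub fn l1_pegged(l1_gas_price: u64, fair_l2_gas_price: u64) -> Self {
        Self::L1Pegged(L1PeggedBatchFeeModelInput {
            l1_gas_price,
            fair_l2_gas_price,
        })
    }
}
```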
sealer_handle.submit(seal_command).await; let command = sealer.commands_receiver.recv().await.unwrap(); @@ -489,6 +500,7 @@ async fn miniblock_sealer_handle_parallel_processing() { L1BatchNumber(1), MiniblockNumber(i), Address::default(), + false, ); sealer_handle.submit(seal_command).await; } diff --git a/core/lib/zksync_core/src/state_keeper/io/tests/tester.rs b/core/lib/zksync_core/src/state_keeper/io/tests/tester.rs index 875bf89e048..27261f4e36c 100644 --- a/core/lib/zksync_core/src/state_keeper/io/tests/tester.rs +++ b/core/lib/zksync_core/src/state_keeper/io/tests/tester.rs @@ -1,25 +1,27 @@ //! Testing harness for the IO. -use multivm::vm_latest::constants::BLOCK_GAS_LIMIT; use std::{sync::Arc, time::Duration}; -use zksync_object_store::ObjectStoreFactory; -use zksync_config::configs::chain::StateKeeperConfig; -use zksync_config::GasAdjusterConfig; +use multivm::vm_latest::constants::BLOCK_GAS_LIMIT; +use zksync_config::{configs::chain::StateKeeperConfig, GasAdjusterConfig}; use zksync_contracts::BaseSystemContracts; use zksync_dal::ConnectionPool; -use zksync_eth_client::clients::mock::MockEthereum; +use zksync_eth_client::clients::MockEthereum; +use zksync_object_store::ObjectStoreFactory; use zksync_types::{ - block::{L1BatchHeader, MiniblockHeader}, + block::MiniblockHeader, + fee_model::{BatchFeeInput, FeeModelConfig, FeeModelConfigV1}, protocol_version::L1VerifierConfig, system_contracts::get_system_smart_contracts, - Address, L1BatchNumber, L2ChainId, MiniblockNumber, PriorityOpId, ProtocolVersionId, H256, + Address, L2ChainId, PriorityOpId, ProtocolVersionId, H256, }; use crate::{ + fee_model::MainNodeFeeInputProvider, genesis::create_genesis_l1_batch, l1_gas_price::GasAdjuster, state_keeper::{io::MiniblockSealer, tests::create_transaction, MempoolGuard, MempoolIO}, + utils::testonly::{create_l1_batch, create_miniblock}, }; #[derive(Debug)] @@ -37,7 +39,7 @@ impl Tester { } } - pub(super) async fn create_gas_adjuster(&self) -> GasAdjuster { + async fn create_gas_adjuster(&self) -> GasAdjuster { let eth_client = MockEthereum::default().with_fee_history(vec![0, 4, 6, 8, 7, 5, 5, 8, 10, 9]); @@ -57,8 +59,18 @@ impl Tester { .unwrap() } + pub(super) async fn create_batch_fee_input_provider(&self) -> MainNodeFeeInputProvider { + let gas_adjuster = Arc::new(self.create_gas_adjuster().await); + MainNodeFeeInputProvider::new( + gas_adjuster, + FeeModelConfig::V1(FeeModelConfigV1 { + minimal_l2_gas_price: self.minimal_l2_gas_price(), + }), + ) + } + // Constant value to be used both in tests and inside of the IO. 
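The blocking assertions above hinge on the sealer's bounded command queue. A minimal model of the submit side, assuming tokio channels (the real handle also keeps per-command completion receivers and metrics):

```rust
use tokio::sync::{mpsc, oneshot};

struct SealCommand(u32); // stand-in for `MiniblockSealCommand`

/// Submitting blocks once the bounded queue is full: that is the back
/// pressure described in the `MiniblockSealerHandle::submit` docs.
async fn submit(
    commands: &mpsc::Sender<(SealCommand, oneshot::Sender<()>)>,
    command: SealCommand,
    is_sync: bool,
) {
    let (completion_tx, completion_rx) = oneshot::channel();
    commands
        .send((command, completion_tx))
        .await
        .expect("sealer task dropped");
    if is_sync {
        // Synchronous mode (capacity 0 in the real sealer): also wait for
        // the command to be fully processed, not just enqueued.
        completion_rx.await.expect("sealer task dropped");
    }
}
```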
- pub(super) fn fair_l2_gas_price(&self) -> u64 { + pub(super) fn minimal_l2_gas_price(&self) -> u64 { 100 } @@ -66,15 +78,22 @@ impl Tester { &self, pool: ConnectionPool, miniblock_sealer_capacity: usize, - ) -> (MempoolIO>, MempoolGuard) { + ) -> (MempoolIO, MempoolGuard) { let gas_adjuster = Arc::new(self.create_gas_adjuster().await); + let batch_fee_input_provider = MainNodeFeeInputProvider::new( + gas_adjuster, + FeeModelConfig::V1(FeeModelConfigV1 { + minimal_l2_gas_price: self.minimal_l2_gas_price(), + }), + ); + let mempool = MempoolGuard::new(PriorityOpId(0), 100); let (miniblock_sealer, miniblock_sealer_handle) = MiniblockSealer::new(pool.clone(), miniblock_sealer_capacity); tokio::spawn(miniblock_sealer.run()); let config = StateKeeperConfig { - fair_l2_gas_price: self.fair_l2_gas_price(), + minimal_l2_gas_price: self.minimal_l2_gas_price(), virtual_blocks_interval: 1, virtual_blocks_per_miniblock: 1, ..StateKeeperConfig::default() @@ -85,7 +104,7 @@ impl Tester { mempool.clone(), object_store, miniblock_sealer_handle, - gas_adjuster, + Arc::new(batch_fee_input_provider), pool, &config, Duration::from_secs(1), @@ -124,43 +143,28 @@ impl Tester { pool: &ConnectionPool, number: u32, base_fee_per_gas: u64, - l1_gas_price: u64, - l2_fair_gas_price: u64, + fee_input: BatchFeeInput, ) { let mut storage = pool.access_storage_tagged("state_keeper").await.unwrap(); storage .blocks_dal() .insert_miniblock(&MiniblockHeader { - number: MiniblockNumber(number), timestamp: self.current_timestamp, - hash: H256::default(), - l1_tx_count: 0, - l2_tx_count: 0, base_fee_per_gas, - l1_gas_price, - l2_fair_gas_price, + batch_fee_input: fee_input, base_system_contracts_hashes: self.base_system_contracts.hashes(), - protocol_version: Some(ProtocolVersionId::latest()), - virtual_blocks: 0, + ..create_miniblock(number) }) .await .unwrap(); } pub(super) async fn insert_sealed_batch(&self, pool: &ConnectionPool, number: u32) { - let mut batch_header = L1BatchHeader::new( - L1BatchNumber(number), - self.current_timestamp, - Address::default(), - self.base_system_contracts.hashes(), - Default::default(), - ); - batch_header.is_finished = true; - + let batch_header = create_l1_batch(number); let mut storage = pool.access_storage_tagged("state_keeper").await.unwrap(); storage .blocks_dal() - .insert_l1_batch(&batch_header, &[], Default::default(), &[], &[]) + .insert_l1_batch(&batch_header, &[], Default::default(), &[], &[], 0) .await .unwrap(); storage diff --git a/core/lib/zksync_core/src/state_keeper/keeper.rs b/core/lib/zksync_core/src/state_keeper/keeper.rs index 1ff31d62d41..4c7c45d819c 100644 --- a/core/lib/zksync_core/src/state_keeper/keeper.rs +++ b/core/lib/zksync_core/src/state_keeper/keeper.rs @@ -1,10 +1,11 @@ -use anyhow::Context as _; -use tokio::sync::watch; - -use std::convert::Infallible; -use std::time::{Duration, Instant}; +use std::{ + convert::Infallible, + time::{Duration, Instant}, +}; +use anyhow::Context as _; use multivm::interface::{Halt, L1BatchEnv, SystemEnv}; +use tokio::sync::watch; use zksync_types::{ block::MiniblockExecutionData, l2::TransactionType, protocol_version::ProtocolUpgradeTx, storage_writes_deduplicator::StorageWritesDeduplicator, Transaction, @@ -57,7 +58,7 @@ pub struct ZkSyncStateKeeper { stop_receiver: watch::Receiver, io: Box, batch_executor_base: Box, - sealer: Option, + sealer: Box, } impl ZkSyncStateKeeper { @@ -65,26 +66,13 @@ impl ZkSyncStateKeeper { stop_receiver: watch::Receiver, io: Box, batch_executor_base: Box, - sealer: ConditionalSealer, + sealer: 
Box, ) -> Self { Self { stop_receiver, io, batch_executor_base, - sealer: Some(sealer), - } - } - - pub fn without_sealer( - stop_receiver: watch::Receiver, - io: Box, - batch_executor_base: Box, - ) -> Self { - Self { - stop_receiver, - io, - batch_executor_base, - sealer: None, + sealer, } } @@ -138,10 +126,6 @@ impl ZkSyncStateKeeper { } } }; - println!("Before multiplying by l2 gas price"); - // l1_batch_env.fair_l2_gas_price *= - // crate::l1_gas_price::erc_20_fetcher::get_erc_20_value_in_wei().await; - println!("Price of l2 gas: {}", l1_batch_env.fair_l2_gas_price); let protocol_version = system_env.version; let mut updates_manager = UpdatesManager::new( l1_batch_env.clone(), @@ -653,18 +637,14 @@ impl ZkSyncStateKeeper { writes_metrics: block_writes_metrics, }; - if let Some(sealer) = &self.sealer { - sealer.should_seal_l1_batch( - self.io.current_l1_batch_number().0, - updates_manager.batch_timestamp() as u128 * 1_000, - updates_manager.pending_executed_transactions_len() + 1, - &block_data, - &tx_data, - updates_manager.protocol_version(), - ) - } else { - SealResolution::NoSeal - } + self.sealer.should_seal_l1_batch( + self.io.current_l1_batch_number().0, + updates_manager.batch_timestamp() as u128 * 1_000, + updates_manager.pending_executed_transactions_len() + 1, + &block_data, + &tx_data, + updates_manager.protocol_version(), + ) } }; (resolution, exec_result) diff --git a/core/lib/zksync_core/src/state_keeper/mempool_actor.rs b/core/lib/zksync_core/src/state_keeper/mempool_actor.rs index 2c369d35a0f..673f0d5ea7a 100644 --- a/core/lib/zksync_core/src/state_keeper/mempool_actor.rs +++ b/core/lib/zksync_core/src/state_keeper/mempool_actor.rs @@ -1,28 +1,27 @@ -use tokio::sync::watch; - use std::{sync::Arc, time::Duration}; -use multivm::vm_latest::utils::fee::derive_base_fee_and_gas_per_pubdata; +use multivm::utils::derive_base_fee_and_gas_per_pubdata; +use tokio::sync::watch; use zksync_config::configs::chain::MempoolConfig; use zksync_dal::ConnectionPool; use zksync_mempool::L2TxFilter; +use zksync_types::VmVersion; use super::{metrics::KEEPER_METRICS, types::MempoolGuard}; -use crate::l1_gas_price::L1GasPriceProvider; +use crate::{api_server::execution_sandbox::BlockArgs, fee_model::BatchFeeModelInputProvider}; /// Creates a mempool filter for L2 transactions based on the current L1 gas price. /// The filter is used to filter out transactions from the mempool that do not cover expenses /// to process them. 
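Schematically, the filter described above keeps only transactions whose fee parameters cover the current batch fee input; something like the following check, with simplified fields (the real matching lives in `zksync_mempool`):

```rust
struct L2TxFilter {
    fee_per_gas: u64,
    gas_per_pubdata: u32,
}

struct QueuedTx {
    max_fee_per_gas: u64,
    gas_per_pubdata_limit: u32,
}

// A queued L2 tx is only eligible if it covers the current fee parameters.
fn matches(filter: &L2TxFilter, tx: &QueuedTx) -> bool {
    tx.max_fee_per_gas >= filter.fee_per_gas
        && tx.gas_per_pubdata_limit >= filter.gas_per_pubdata
}
```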
-pub fn l2_tx_filter( - gas_price_provider: &G, - fair_l2_gas_price: u64, +pub fn l2_tx_filter( + batch_fee_input_provider: &dyn BatchFeeModelInputProvider, + vm_version: VmVersion, ) -> L2TxFilter { - let effective_gas_price = gas_price_provider.estimate_effective_gas_price(); + let fee_input = batch_fee_input_provider.get_batch_fee_input(); - let (base_fee, gas_per_pubdata) = - derive_base_fee_and_gas_per_pubdata(effective_gas_price, fair_l2_gas_price); + let (base_fee, gas_per_pubdata) = derive_base_fee_and_gas_per_pubdata(fee_input, vm_version); L2TxFilter { - l1_gas_price: effective_gas_price, + fee_input, fee_per_gas: base_fee, gas_per_pubdata: gas_per_pubdata as u32, } @@ -31,20 +30,20 @@ pub fn l2_tx_filter( #[derive(Debug)] pub struct MempoolFetcher { mempool: MempoolGuard, - l1_gas_price_provider: Arc, + batch_fee_input_provider: Arc, sync_interval: Duration, sync_batch_size: usize, } -impl MempoolFetcher { +impl MempoolFetcher { pub fn new( mempool: MempoolGuard, - l1_gas_price_provider: Arc, + batch_fee_input_provider: Arc, config: &MempoolConfig, ) -> Self { Self { mempool, - l1_gas_price_provider, + batch_fee_input_provider, sync_interval: config.sync_interval(), sync_batch_size: config.sync_batch_size, } @@ -55,7 +54,6 @@ impl MempoolFetcher { pool: ConnectionPool, remove_stuck_txs: bool, stuck_tx_timeout: Duration, - fair_l2_gas_price: u64, stop_receiver: watch::Receiver, ) -> anyhow::Result<()> { { @@ -78,7 +76,18 @@ impl MempoolFetcher { let latency = KEEPER_METRICS.mempool_sync.start(); let mut storage = pool.access_storage_tagged("state_keeper").await.unwrap(); let mempool_info = self.mempool.get_mempool_info(); - let l2_tx_filter = l2_tx_filter(self.l1_gas_price_provider.as_ref(), fair_l2_gas_price); + + let latest_miniblock = BlockArgs::pending(&mut storage).await; + let protocol_version = latest_miniblock + .resolve_block_info(&mut storage) + .await + .unwrap() + .protocol_version; + + let l2_tx_filter = l2_tx_filter( + self.batch_fee_input_provider.as_ref(), + protocol_version.into(), + ); let (transactions, nonces) = storage .transactions_dal() diff --git a/core/lib/zksync_core/src/state_keeper/metrics.rs b/core/lib/zksync_core/src/state_keeper/metrics.rs index f16b311e805..8f1b3319df5 100644 --- a/core/lib/zksync_core/src/state_keeper/metrics.rs +++ b/core/lib/zksync_core/src/state_keeper/metrics.rs @@ -1,15 +1,14 @@ //! General-purpose state keeper metrics. -use vise::{ - Buckets, Counter, EncodeLabelSet, EncodeLabelValue, Family, Gauge, Histogram, LatencyObserver, - Metrics, -}; - use std::{ sync::{Mutex, Weak}, time::Duration, }; +use vise::{ + Buckets, Counter, EncodeLabelSet, EncodeLabelValue, Family, Gauge, Histogram, LatencyObserver, + Metrics, +}; use zksync_mempool::MempoolStore; use super::seal_criteria::SealResolution; @@ -168,14 +167,13 @@ pub(super) enum L1BatchSealStage { FilterWrittenSlots, InsertInitialWrites, CommitL1Batch, - ExternalNodeStoreTransactions, } /// Buckets for positive integer, not-so-large values (e.g., initial writes count). const COUNT_BUCKETS: Buckets = Buckets::values(&[ 10.0, 20.0, 50.0, 100.0, 200.0, 500.0, 1_000.0, 2_000.0, 5_000.0, 10_000.0, 20_000.0, 50_000.0, ]); -/// Buckets for sealing deltas for L1 batches (in seconds). The expected delta is ~1 minute. +/// Buckets for sealing deltas for L1 batches (in seconds). The expected delta is approximately 1 minute. 
const L1_BATCH_SEAL_DELTA_BUCKETS: Buckets = Buckets::values(&[ 0.1, 0.5, 1.0, 5.0, 10.0, 20.0, 30.0, 40.0, 60.0, 90.0, 120.0, 180.0, 240.0, 300.0, ]); @@ -205,7 +203,7 @@ pub(crate) struct L1BatchMetrics { /// Number of entities stored in Postgres during a specific stage of sealing an L1 batch. #[metrics(buckets = COUNT_BUCKETS)] sealed_entity_count: Family>, - /// Latency of sealing an L1 batch split by the stage and divided by the number of entiries + /// Latency of sealing an L1 batch split by the stage and divided by the number of entries /// stored in the stage. #[metrics(buckets = Buckets::LATENCIES)] sealed_entity_per_unit: Family>, @@ -221,10 +219,6 @@ impl L1BatchMetrics { latency_per_unit: &self.sealed_entity_per_unit[&stage], } } - - pub(crate) fn start_storing_on_en(&self) -> LatencyObserver<'_> { - self.sealed_time_stage[&L1BatchSealStage::ExternalNodeStoreTransactions].start() - } } #[vise::register] @@ -241,6 +235,7 @@ pub(super) enum MiniblockQueueStage { #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, EncodeLabelValue)] #[metrics(rename_all = "snake_case")] pub(super) enum MiniblockSealStage { + PreInsertTxs, InsertMiniblockHeader, MarkTransactionsInMiniblock, InsertStorageLogs, @@ -285,7 +280,7 @@ pub(super) struct MiniblockMetrics { /// Number of entities stored in Postgres during a specific stage of sealing a miniblock. #[metrics(buckets = COUNT_BUCKETS)] sealed_entity_count: Family>, - /// Latency of sealing a miniblock split by the stage and divided by the number of entiries + /// Latency of sealing a miniblock split by the stage and divided by the number of entries /// stored in the stage. #[metrics(buckets = Buckets::LATENCIES)] sealed_entity_per_unit: Family>, diff --git a/core/lib/zksync_core/src/state_keeper/mod.rs b/core/lib/zksync_core/src/state_keeper/mod.rs index bdc1f90e206..b1534d9612f 100644 --- a/core/lib/zksync_core/src/state_keeper/mod.rs +++ b/core/lib/zksync_core/src/state_keeper/mod.rs @@ -1,14 +1,23 @@ -use tokio::sync::watch; -use zksync_object_store::ObjectStore; - use std::sync::Arc; +use tokio::sync::watch; use zksync_config::{ configs::chain::{MempoolConfig, NetworkConfig, StateKeeperConfig}, ContractsConfig, DBConfig, }; use zksync_dal::ConnectionPool; -use zksync_system_constants::MAX_TXS_IN_BLOCK; +use zksync_object_store::ObjectStore; + +use self::io::MempoolIO; +pub use self::{ + batch_executor::{L1BatchExecutorBuilder, MainBatchExecutorBuilder}, + io::{MiniblockSealer, MiniblockSealerHandle}, + keeper::ZkSyncStateKeeper, +}; +pub(crate) use self::{ + mempool_actor::MempoolFetcher, seal_criteria::SequencerSealer, types::MempoolGuard, +}; +use crate::fee_model::BatchFeeModelInputProvider; mod batch_executor; pub(crate) mod extractors; @@ -16,26 +25,14 @@ pub(crate) mod io; mod keeper; mod mempool_actor; pub(crate) mod metrics; -pub(crate) mod seal_criteria; +pub mod seal_criteria; #[cfg(test)] pub(crate) mod tests; pub(crate) mod types; pub(crate) mod updates; -pub use self::{ - batch_executor::{L1BatchExecutorBuilder, MainBatchExecutorBuilder}, - keeper::ZkSyncStateKeeper, -}; -pub(crate) use self::{ - io::MiniblockSealer, mempool_actor::MempoolFetcher, seal_criteria::ConditionalSealer, - types::MempoolGuard, -}; - -use self::io::{MempoolIO, MiniblockSealerHandle}; -use crate::l1_gas_price::L1GasPriceProvider; - #[allow(clippy::too_many_arguments)] -pub(crate) async fn create_state_keeper( +pub(crate) async fn create_state_keeper( contracts_config: &ContractsConfig, state_keeper_config: StateKeeperConfig, db_config: &DBConfig, @@ -43,21 
+40,11 @@ pub(crate) async fn create_state_keeper( mempool_config: &MempoolConfig, pool: ConnectionPool, mempool: MempoolGuard, - l1_gas_price_provider: Arc, + batch_fee_input_provider: Arc, miniblock_sealer_handle: MiniblockSealerHandle, - object_store: Box, + object_store: Arc, stop_receiver: watch::Receiver, -) -> ZkSyncStateKeeper -where - G: L1GasPriceProvider + 'static + Send + Sync, -{ - assert!( - state_keeper_config.transaction_slots <= MAX_TXS_IN_BLOCK, - "Configured transaction_slots ({}) must be lower than the bootloader constant MAX_TXS_IN_BLOCK={}", - state_keeper_config.transaction_slots, - MAX_TXS_IN_BLOCK - ); - +) -> ZkSyncStateKeeper { let batch_executor_base = MainBatchExecutorBuilder::new( db_config.state_keeper_db_path.clone(), pool.clone(), @@ -65,13 +52,14 @@ where state_keeper_config.save_call_traces, state_keeper_config.upload_witness_inputs_to_gcs, state_keeper_config.enum_index_migration_chunk_size(), + false, ); let io = MempoolIO::new( mempool, object_store, miniblock_sealer_handle, - l1_gas_price_provider, + batch_fee_input_provider, pool, &state_keeper_config, mempool_config.delay_interval(), @@ -81,11 +69,11 @@ where ) .await; - let sealer = ConditionalSealer::new(state_keeper_config); + let sealer = SequencerSealer::new(state_keeper_config); ZkSyncStateKeeper::new( stop_receiver, Box::new(io), Box::new(batch_executor_base), - sealer, + Box::new(sealer), ) } diff --git a/core/lib/zksync_core/src/state_keeper/seal_criteria/conditional_sealer.rs b/core/lib/zksync_core/src/state_keeper/seal_criteria/conditional_sealer.rs index 2b5fb9fba48..cc7ba37ef9c 100644 --- a/core/lib/zksync_core/src/state_keeper/seal_criteria/conditional_sealer.rs +++ b/core/lib/zksync_core/src/state_keeper/seal_criteria/conditional_sealer.rs @@ -1,7 +1,10 @@ //! This module represents the conditional sealer, which can decide whether the batch //! should be sealed after executing a particular transaction. -//! It is used on the main node to decide when the batch should be sealed (as opposed to the external node, -//! which unconditionally follows the instructions from the main node). +//! +//! The conditional sealer abstraction allows to implement different sealing strategies, e.g. the actual +//! sealing strategy for the main node or noop sealer for the external node. + +use std::fmt; use zksync_config::configs::chain::StateKeeperConfig; use zksync_types::ProtocolVersionId; @@ -9,28 +12,51 @@ use zksync_types::ProtocolVersionId; use super::{criteria, SealCriterion, SealData, SealResolution, AGGREGATION_METRICS}; /// Checks if an L1 batch should be sealed after executing a transaction. +pub trait ConditionalSealer: 'static + fmt::Debug + Send + Sync { + /// Finds a reason why a transaction with the specified `data` is unexecutable. + /// + /// Can be used to determine whether the transaction can be executed by the sequencer. + fn find_unexecutable_reason( + &self, + data: &SealData, + protocol_version: ProtocolVersionId, + ) -> Option<&'static str>; + + /// Returns the action that should be taken by the state keeper after executing a transaction. + fn should_seal_l1_batch( + &self, + l1_batch_number: u32, + block_open_timestamp_ms: u128, + tx_count: usize, + block_data: &SealData, + tx_data: &SealData, + protocol_version: ProtocolVersionId, + ) -> SealResolution; +} + +/// Implementation of [`ConditionalSealer`] used by the main node. +/// Internally uses a set of [`SealCriterion`]s to determine whether the batch should be sealed. 
/// /// The checks are deterministic, i.e., should depend solely on execution metrics and [`StateKeeperConfig`]. /// Non-deterministic seal criteria are expressed using [`IoSealCriteria`](super::IoSealCriteria). #[derive(Debug)] -pub struct ConditionalSealer { +pub struct SequencerSealer { config: StateKeeperConfig, sealers: Vec<Box<dyn SealCriterion>>, } -impl ConditionalSealer { - /// Finds a reason why a transaction with the specified `data` is unexecutable. - pub(crate) fn find_unexecutable_reason( - config: &StateKeeperConfig, +impl ConditionalSealer for SequencerSealer { + fn find_unexecutable_reason( + &self, data: &SealData, protocol_version: ProtocolVersionId, ) -> Option<&'static str> { - for sealer in &Self::default_sealers() { + for sealer in &self.sealers { const MOCK_BLOCK_TIMESTAMP: u128 = 0; const TX_COUNT: usize = 1; let resolution = sealer.should_seal( - config, + &self.config, MOCK_BLOCK_TIMESTAMP, TX_COUNT, data, @@ -44,20 +70,7 @@ impl ConditionalSealer { None } - pub(crate) fn new(config: StateKeeperConfig) -> Self { - let sealers = Self::default_sealers(); - Self { config, sealers } - } - - #[cfg(test)] - pub(in crate::state_keeper) fn with_sealers( - config: StateKeeperConfig, - sealers: Vec<Box<dyn SealCriterion>>, - ) -> Self { - Self { config, sealers } - } - - pub fn should_seal_l1_batch( + fn should_seal_l1_batch( &self, l1_batch_number: u32, block_open_timestamp_ms: u128, @@ -99,18 +112,57 @@ impl ConditionalSealer { } final_seal_resolution } +} + +impl SequencerSealer { + pub(crate) fn new(config: StateKeeperConfig) -> Self { + let sealers = Self::default_sealers(); + Self { config, sealers } + } + + #[cfg(test)] + pub(in crate::state_keeper) fn with_sealers( + config: StateKeeperConfig, + sealers: Vec<Box<dyn SealCriterion>>, + ) -> Self { + Self { config, sealers } + } fn default_sealers() -> Vec<Box<dyn SealCriterion>> { vec![ Box::new(criteria::SlotsCriterion), Box::new(criteria::GasCriterion), Box::new(criteria::PubDataBytesCriterion), - Box::new(criteria::InitialWritesCriterion), - Box::new(criteria::RepeatedWritesCriterion), - Box::new(criteria::MaxCyclesCriterion), - Box::new(criteria::ComputationalGasCriterion), + Box::new(criteria::CircuitsCriterion), Box::new(criteria::TxEncodingSizeCriterion), - Box::new(criteria::L2ToL1LogsCriterion), ] } } + +/// Implementation of [`ConditionalSealer`] that never seals the batch. +/// Can be used in contexts where, for example, state keeper configuration is not available, +/// or the decision to seal a batch is taken by some other component.
+#[derive(Debug)] +pub struct NoopSealer; + +impl ConditionalSealer for NoopSealer { + fn find_unexecutable_reason( + &self, + _data: &SealData, + _protocol_version: ProtocolVersionId, + ) -> Option<&'static str> { + None + } + + fn should_seal_l1_batch( + &self, + _l1_batch_number: u32, + _block_open_timestamp_ms: u128, + _tx_count: usize, + _block_data: &SealData, + _tx_data: &SealData, + _protocol_version: ProtocolVersionId, + ) -> SealResolution { + SealResolution::NoSeal + } +} diff --git a/core/lib/zksync_core/src/state_keeper/seal_criteria/criteria/geometry_seal_criteria.rs b/core/lib/zksync_core/src/state_keeper/seal_criteria/criteria/geometry_seal_criteria.rs index 1ec0c66e4d7..7878621a729 100644 --- a/core/lib/zksync_core/src/state_keeper/seal_criteria/criteria/geometry_seal_criteria.rs +++ b/core/lib/zksync_core/src/state_keeper/seal_criteria/criteria/geometry_seal_criteria.rs @@ -1,11 +1,7 @@ -use multivm::vm_latest::constants::{ERGS_PER_CIRCUIT, MAX_CYCLES_FOR_TX}; use std::fmt; + use zksync_config::configs::chain::StateKeeperConfig; -use zksync_types::{ - circuit::{GEOMETRY_CONFIG, SCHEDULER_UPPER_BOUND}, - tx::tx_execution_info::{DeduplicatedWritesMetrics, ExecutionMetrics}, - ProtocolVersionId, -}; +use zksync_types::{tx::tx_execution_info::ExecutionMetrics, ProtocolVersionId}; // Local uses use crate::state_keeper::seal_criteria::{SealCriterion, SealData, SealResolution}; @@ -14,20 +10,12 @@ use crate::state_keeper::seal_criteria::{SealCriterion, SealData, SealResolution // Otherwise witness generation will fail and proof won't be generated. #[derive(Debug, Default)] -pub struct RepeatedWritesCriterion; -#[derive(Debug, Default)] -pub struct InitialWritesCriterion; -#[derive(Debug, Default)] -pub struct MaxCyclesCriterion; -#[derive(Debug, Default)] -pub struct ComputationalGasCriterion; -#[derive(Debug, Default)] -pub struct L2ToL1LogsCriterion; +pub struct CircuitsCriterion; trait MetricExtractor { const PROM_METRIC_CRITERION_NAME: &'static str; fn limit_per_block(protocol_version: ProtocolVersionId) -> usize; - fn extract(metric: &ExecutionMetrics, writes: &DeduplicatedWritesMetrics) -> usize; + fn extract(metric: &ExecutionMetrics) -> usize; } impl SealCriterion for T @@ -50,15 +38,13 @@ where * config.close_block_at_geometry_percentage) .round(); - if T::extract(&tx_data.execution_metrics, &tx_data.writes_metrics) > reject_bound as usize { + if T::extract(&tx_data.execution_metrics) > reject_bound as usize { SealResolution::Unexecutable("ZK proof cannot be generated for a transaction".into()) - } else if T::extract(&block_data.execution_metrics, &block_data.writes_metrics) + } else if T::extract(&block_data.execution_metrics) >= T::limit_per_block(protocol_version_id) { SealResolution::ExcludeAndSeal - } else if T::extract(&block_data.execution_metrics, &block_data.writes_metrics) - > close_bound as usize - { + } else if T::extract(&block_data.execution_metrics) > close_bound as usize { SealResolution::IncludeAndSeal } else { SealResolution::NoSeal @@ -70,85 +56,21 @@ where } } -impl MetricExtractor for RepeatedWritesCriterion { - const PROM_METRIC_CRITERION_NAME: &'static str = "repeated_storage_writes"; - - fn limit_per_block(protocol_version_id: ProtocolVersionId) -> usize { - if protocol_version_id.is_pre_boojum() { - GEOMETRY_CONFIG.limit_for_repeated_writes_pubdata_hasher as usize - } else { - // In boojum there is no limit for repeated writes. 
- usize::MAX - } - } - - fn extract(_metrics: &ExecutionMetrics, writes: &DeduplicatedWritesMetrics) -> usize { - writes.repeated_storage_writes - } -} - -impl MetricExtractor for InitialWritesCriterion { - const PROM_METRIC_CRITERION_NAME: &'static str = "initial_storage_writes"; - - fn limit_per_block(protocol_version_id: ProtocolVersionId) -> usize { - if protocol_version_id.is_pre_boojum() { - GEOMETRY_CONFIG.limit_for_initial_writes_pubdata_hasher as usize - } else { - // In boojum there is no limit for initial writes. - usize::MAX - } - } - - fn extract(_metrics: &ExecutionMetrics, writes: &DeduplicatedWritesMetrics) -> usize { - writes.initial_storage_writes - } -} - -impl MetricExtractor for MaxCyclesCriterion { - const PROM_METRIC_CRITERION_NAME: &'static str = "max_cycles"; - - fn limit_per_block(_protocol_version_id: ProtocolVersionId) -> usize { - MAX_CYCLES_FOR_TX as usize - } - - fn extract(metrics: &ExecutionMetrics, _writes: &DeduplicatedWritesMetrics) -> usize { - metrics.cycles_used as usize - } -} - -impl MetricExtractor for ComputationalGasCriterion { - const PROM_METRIC_CRITERION_NAME: &'static str = "computational_gas"; +impl MetricExtractor for CircuitsCriterion { + const PROM_METRIC_CRITERION_NAME: &'static str = "circuits"; fn limit_per_block(_protocol_version_id: ProtocolVersionId) -> usize { // We subtract constant to take into account that circuits may be not fully filled. // This constant should be greater than number of circuits types // but we keep it larger to be on the safe side. - const MARGIN_NUMBER_OF_CIRCUITS: usize = 100; - const MAX_NUMBER_OF_MUTLIINSTANCE_CIRCUITS: usize = - SCHEDULER_UPPER_BOUND as usize - MARGIN_NUMBER_OF_CIRCUITS; + const MARGIN_NUMBER_OF_CIRCUITS: usize = 10000; + const MAX_NUMBER_OF_CIRCUITS: usize = (1 << 14) + (1 << 13) - MARGIN_NUMBER_OF_CIRCUITS; - MAX_NUMBER_OF_MUTLIINSTANCE_CIRCUITS * ERGS_PER_CIRCUIT as usize + MAX_NUMBER_OF_CIRCUITS } - fn extract(metrics: &ExecutionMetrics, _writes: &DeduplicatedWritesMetrics) -> usize { - metrics.computational_gas_used as usize - } -} - -impl MetricExtractor for L2ToL1LogsCriterion { - const PROM_METRIC_CRITERION_NAME: &'static str = "l2_to_l1_logs"; - - fn limit_per_block(protocol_version_id: ProtocolVersionId) -> usize { - if protocol_version_id.is_pre_boojum() { - GEOMETRY_CONFIG.limit_for_l1_messages_merklizer as usize - } else { - // In boojum there is no limit for L2 to L1 logs. 
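Editor's note: the several geometry criteria collapse into a single `CircuitsCriterion`, and the per-batch limit becomes a plain circuit count: `(1 << 14) + (1 << 13) - 10_000 = 16_384 + 8_192 - 10_000 = 14_576` circuits. The sketch below reproduces the reject/close threshold ladder used by the generic `SealCriterion` impl earlier in this file; the two percentage constants are example config values (only `reject_tx_at_geometry_percentage: 0.95` appears in tests later in this patch), and the enum is a stand-in.

```rust
// Example values for the two config percentages; the real numbers come from
// `StateKeeperConfig` (`reject_tx_at_geometry_percentage` and friends).
const REJECT_TX_AT: f64 = 0.95;
const CLOSE_BLOCK_AT: f64 = 0.9;

#[derive(Debug, PartialEq)]
enum SealResolution {
    NoSeal,
    IncludeAndSeal,
    ExcludeAndSeal,
    Unexecutable,
}

fn resolve(tx_value: usize, block_value: usize, limit: usize) -> SealResolution {
    let reject_bound = (limit as f64 * REJECT_TX_AT).round() as usize;
    let close_bound = (limit as f64 * CLOSE_BLOCK_AT).round() as usize;

    if tx_value > reject_bound {
        SealResolution::Unexecutable // one tx alone would overflow the prover
    } else if block_value >= limit {
        SealResolution::ExcludeAndSeal // over the hard limit: retry the tx in the next batch
    } else if block_value > close_bound {
        SealResolution::IncludeAndSeal // close to the limit: keep the tx, then seal
    } else {
        SealResolution::NoSeal
    }
}

fn main() {
    // The new circuits limit: (1 << 14) + (1 << 13) - 10_000
    //   = 16_384 + 8_192 - 10_000 = 14_576 circuits per batch.
    let limit = (1usize << 14) + (1usize << 13) - 10_000;
    assert_eq!(limit, 14_576);

    assert_eq!(resolve(1, limit, limit), SealResolution::ExcludeAndSeal);
    assert_eq!(resolve(1, limit - 1, limit), SealResolution::IncludeAndSeal);
    assert_eq!(resolve(20_000, 0, limit), SealResolution::Unexecutable);
}
```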
- usize::MAX - } - } - - fn extract(metrics: &ExecutionMetrics, _writes: &DeduplicatedWritesMetrics) -> usize { - metrics.l2_to_l1_logs + fn extract(metrics: &ExecutionMetrics) -> usize { + metrics.estimated_circuits_used.ceil() as usize } } @@ -166,7 +88,6 @@ mod tests { fn test_no_seal_block_resolution( block_execution_metrics: ExecutionMetrics, - block_writes_metrics: DeduplicatedWritesMetrics, criterion: &dyn SealCriterion, protocol_version: ProtocolVersionId, ) { @@ -177,7 +98,6 @@ mod tests { 0, &SealData { execution_metrics: block_execution_metrics, - writes_metrics: block_writes_metrics, ..SealData::default() }, &SealData::default(), @@ -188,7 +108,6 @@ mod tests { fn test_include_and_seal_block_resolution( block_execution_metrics: ExecutionMetrics, - block_writes_metrics: DeduplicatedWritesMetrics, criterion: &dyn SealCriterion, protocol_version: ProtocolVersionId, ) { @@ -199,7 +118,6 @@ mod tests { 0, &SealData { execution_metrics: block_execution_metrics, - writes_metrics: block_writes_metrics, ..SealData::default() }, &SealData::default(), @@ -210,7 +128,6 @@ mod tests { fn test_exclude_and_seal_block_resolution( block_execution_metrics: ExecutionMetrics, - block_writes_metrics: DeduplicatedWritesMetrics, criterion: &dyn SealCriterion, protocol_version: ProtocolVersionId, ) { @@ -221,7 +138,6 @@ mod tests { 0, &SealData { execution_metrics: block_execution_metrics, - writes_metrics: block_writes_metrics, ..SealData::default() }, &SealData::default(), @@ -232,7 +148,6 @@ mod tests { fn test_unexecutable_tx_resolution( tx_execution_metrics: ExecutionMetrics, - tx_writes_metrics: DeduplicatedWritesMetrics, criterion: &dyn SealCriterion, protocol_version: ProtocolVersionId, ) { @@ -244,7 +159,6 @@ mod tests { &SealData::default(), &SealData { execution_metrics: tx_execution_metrics, - writes_metrics: tx_writes_metrics, ..SealData::default() }, protocol_version, @@ -259,17 +173,11 @@ mod tests { macro_rules! test_scenario_execution_metrics { ($criterion: tt, $metric_name: ident, $metric_type: ty, $protocol_version: expr) => { let config = get_config(); - let writes_metrics = DeduplicatedWritesMetrics::default(); let block_execution_metrics = ExecutionMetrics { $metric_name: ($criterion::limit_per_block($protocol_version) / 2) as $metric_type, ..ExecutionMetrics::default() }; - test_no_seal_block_resolution( - block_execution_metrics, - writes_metrics, - &$criterion, - $protocol_version, - ); + test_no_seal_block_resolution(block_execution_metrics, &$criterion, $protocol_version); let block_execution_metrics = ExecutionMetrics { $metric_name: ($criterion::limit_per_block($protocol_version) - 1) as $metric_type, @@ -278,7 +186,6 @@ mod tests { test_include_and_seal_block_resolution( block_execution_metrics, - writes_metrics, &$criterion, $protocol_version, ); @@ -290,7 +197,6 @@ mod tests { test_exclude_and_seal_block_resolution( block_execution_metrics, - writes_metrics, &$criterion, $protocol_version, ); @@ -303,117 +209,16 @@ mod tests { ..ExecutionMetrics::default() }; - test_unexecutable_tx_resolution( - tx_execution_metrics, - writes_metrics, - &$criterion, - $protocol_version, - ); + test_unexecutable_tx_resolution(tx_execution_metrics, &$criterion, $protocol_version); }; } - macro_rules! 
test_scenario_writes_metrics { - ($criterion:tt, $metric_name:ident, $metric_type:ty, $protocol_version:expr) => { - let config = get_config(); - let execution_metrics = ExecutionMetrics::default(); - let block_writes_metrics = DeduplicatedWritesMetrics { - $metric_name: ($criterion::limit_per_block($protocol_version) / 2) as $metric_type, - ..Default::default() - }; - test_no_seal_block_resolution( - execution_metrics, - block_writes_metrics, - &$criterion, - $protocol_version, - ); - - let block_writes_metrics = DeduplicatedWritesMetrics { - $metric_name: ($criterion::limit_per_block($protocol_version) - 1) as $metric_type, - ..Default::default() - }; - - test_include_and_seal_block_resolution( - execution_metrics, - block_writes_metrics, - &$criterion, - $protocol_version, - ); - - let block_writes_metrics = DeduplicatedWritesMetrics { - $metric_name: ($criterion::limit_per_block($protocol_version)) as $metric_type, - ..Default::default() - }; - - test_exclude_and_seal_block_resolution( - execution_metrics, - block_writes_metrics, - &$criterion, - $protocol_version, - ); - - let tx_writes_metrics = DeduplicatedWritesMetrics { - $metric_name: ($criterion::limit_per_block($protocol_version) as f64 - * config.reject_tx_at_geometry_percentage - + 1f64) - .round() as $metric_type, - ..Default::default() - }; - - test_unexecutable_tx_resolution( - execution_metrics, - tx_writes_metrics, - &$criterion, - $protocol_version, - ); - }; - } - - #[test] - fn repeated_writes_seal_criterion() { - test_scenario_writes_metrics!( - RepeatedWritesCriterion, - repeated_storage_writes, - usize, - ProtocolVersionId::Version17 - ); - } - - #[test] - fn initial_writes_seal_criterion() { - test_scenario_writes_metrics!( - InitialWritesCriterion, - initial_storage_writes, - usize, - ProtocolVersionId::Version17 - ); - } - - #[test] - fn max_cycles_seal_criterion() { - test_scenario_execution_metrics!( - MaxCyclesCriterion, - cycles_used, - u32, - ProtocolVersionId::Version17 - ); - } - #[test] fn computational_gas_seal_criterion() { test_scenario_execution_metrics!( - ComputationalGasCriterion, - computational_gas_used, - u32, - ProtocolVersionId::Version17 - ); - } - - #[test] - fn l2_to_l1_logs_seal_criterion() { - test_scenario_execution_metrics!( - L2ToL1LogsCriterion, - l2_to_l1_logs, - usize, + CircuitsCriterion, + estimated_circuits_used, + f32, ProtocolVersionId::Version17 ); } diff --git a/core/lib/zksync_core/src/state_keeper/seal_criteria/criteria/mod.rs b/core/lib/zksync_core/src/state_keeper/seal_criteria/criteria/mod.rs index 8e0d89e8e0f..4e30f2a8b60 100644 --- a/core/lib/zksync_core/src/state_keeper/seal_criteria/criteria/mod.rs +++ b/core/lib/zksync_core/src/state_keeper/seal_criteria/criteria/mod.rs @@ -5,12 +5,7 @@ mod slots; mod tx_encoding_size; pub(in crate::state_keeper) use self::{ - gas::GasCriterion, - geometry_seal_criteria::{ - ComputationalGasCriterion, InitialWritesCriterion, L2ToL1LogsCriterion, MaxCyclesCriterion, - RepeatedWritesCriterion, - }, - pubdata_bytes::PubDataBytesCriterion, - slots::SlotsCriterion, + gas::GasCriterion, geometry_seal_criteria::CircuitsCriterion, + pubdata_bytes::PubDataBytesCriterion, slots::SlotsCriterion, tx_encoding_size::TxEncodingSizeCriterion, }; diff --git a/core/lib/zksync_core/src/state_keeper/seal_criteria/criteria/pubdata_bytes.rs b/core/lib/zksync_core/src/state_keeper/seal_criteria/criteria/pubdata_bytes.rs index 61f30d724a7..ec778cdf083 100644 --- a/core/lib/zksync_core/src/state_keeper/seal_criteria/criteria/pubdata_bytes.rs +++ 
b/core/lib/zksync_core/src/state_keeper/seal_criteria/criteria/pubdata_bytes.rs @@ -26,7 +26,7 @@ impl SealCriterion for PubDataBytesCriterion { let block_size = block_data.execution_metrics.size() + block_data.writes_metrics.size(protocol_version); // For backward compatibility, we need to keep calculating the size of the pubdata based - // StorageDeduplication metrics. All vm versions + // `StorageDeduplication` metrics. All vm versions // after vm with virtual blocks will provide the size of the pubdata in the execution metrics. let tx_size = if tx_data.execution_metrics.pubdata_published == 0 { tx_data.execution_metrics.size() + tx_data.writes_metrics.size(protocol_version) diff --git a/core/lib/zksync_core/src/state_keeper/seal_criteria/criteria/slots.rs b/core/lib/zksync_core/src/state_keeper/seal_criteria/criteria/slots.rs index 4c21c41e5e4..41d99b8274b 100644 --- a/core/lib/zksync_core/src/state_keeper/seal_criteria/criteria/slots.rs +++ b/core/lib/zksync_core/src/state_keeper/seal_criteria/criteria/slots.rs @@ -1,3 +1,4 @@ +use multivm::utils::get_bootloader_max_txs_in_batch; use zksync_types::ProtocolVersionId; use crate::state_keeper::seal_criteria::{ @@ -16,8 +17,15 @@ impl SealCriterion for SlotsCriterion { tx_count: usize, _block_data: &SealData, _tx_data: &SealData, - _protocol_version: ProtocolVersionId, + protocol_version: ProtocolVersionId, ) -> SealResolution { + let max_txs_in_batch = get_bootloader_max_txs_in_batch(protocol_version.into()); + assert!( + config.transaction_slots <= max_txs_in_batch, + "Configured transaction_slots ({}) must be lower than the bootloader constant MAX_TXS_IN_BLOCK={} for protocol version {}", + config.transaction_slots, max_txs_in_batch, protocol_version as u16 + ); + if tx_count >= config.transaction_slots { SealResolution::IncludeAndSeal } else { diff --git a/core/lib/zksync_core/src/state_keeper/seal_criteria/criteria/tx_encoding_size.rs b/core/lib/zksync_core/src/state_keeper/seal_criteria/criteria/tx_encoding_size.rs index ed24e371933..02683e501d9 100644 --- a/core/lib/zksync_core/src/state_keeper/seal_criteria/criteria/tx_encoding_size.rs +++ b/core/lib/zksync_core/src/state_keeper/seal_criteria/criteria/tx_encoding_size.rs @@ -1,4 +1,4 @@ -use multivm::vm_latest::constants::BOOTLOADER_TX_ENCODING_SPACE; +use multivm::utils::get_bootloader_encoding_space; use zksync_types::ProtocolVersionId; use crate::state_keeper::seal_criteria::{ @@ -16,18 +16,21 @@ impl SealCriterion for TxEncodingSizeCriterion { _tx_count: usize, block_data: &SealData, tx_data: &SealData, - _protocol_version_id: ProtocolVersionId, + protocol_version_id: ProtocolVersionId, ) -> SealResolution { + let bootloader_tx_encoding_space = + get_bootloader_encoding_space(protocol_version_id.into()); + let reject_bound = - (BOOTLOADER_TX_ENCODING_SPACE as f64 * config.reject_tx_at_geometry_percentage).round(); - let include_and_seal_bound = (BOOTLOADER_TX_ENCODING_SPACE as f64 + (bootloader_tx_encoding_space as f64 * config.reject_tx_at_geometry_percentage).round(); + let include_and_seal_bound = (bootloader_tx_encoding_space as f64 * config.close_block_at_geometry_percentage) .round(); if tx_data.cumulative_size > reject_bound as usize { let message = "Transaction cannot be included due to large encoding size"; SealResolution::Unexecutable(message.into()) - } else if block_data.cumulative_size > BOOTLOADER_TX_ENCODING_SPACE as usize { + } else if block_data.cumulative_size > bootloader_tx_encoding_space as usize { SealResolution::ExcludeAndSeal } else if 
block_data.cumulative_size > include_and_seal_bound as usize { SealResolution::IncludeAndSeal @@ -47,6 +50,9 @@ mod tests { #[test] fn seal_criterion() { + let bootloader_tx_encoding_space = + get_bootloader_encoding_space(ProtocolVersionId::latest().into()); + // Create an empty config and only setup fields relevant for the test. let config = StateKeeperConfig { reject_tx_at_geometry_percentage: 0.95, @@ -72,7 +78,7 @@ mod tests { 0, &SealData::default(), &SealData { - cumulative_size: BOOTLOADER_TX_ENCODING_SPACE as usize + 1, + cumulative_size: bootloader_tx_encoding_space as usize + 1, ..SealData::default() }, ProtocolVersionId::latest(), @@ -89,7 +95,7 @@ mod tests { 0, 0, &SealData { - cumulative_size: BOOTLOADER_TX_ENCODING_SPACE as usize + 1, + cumulative_size: bootloader_tx_encoding_space as usize + 1, ..SealData::default() }, &SealData { @@ -105,7 +111,7 @@ mod tests { 0, 0, &SealData { - cumulative_size: BOOTLOADER_TX_ENCODING_SPACE as usize, + cumulative_size: bootloader_tx_encoding_space as usize, ..SealData::default() }, &SealData { diff --git a/core/lib/zksync_core/src/state_keeper/seal_criteria/mod.rs b/core/lib/zksync_core/src/state_keeper/seal_criteria/mod.rs index 99cb25c654d..bf44c7af0ec 100644 --- a/core/lib/zksync_core/src/state_keeper/seal_criteria/mod.rs +++ b/core/lib/zksync_core/src/state_keeper/seal_criteria/mod.rs @@ -25,7 +25,7 @@ use zksync_utils::time::millis_since; mod conditional_sealer; pub(super) mod criteria; -pub(crate) use self::conditional_sealer::ConditionalSealer; +pub use self::conditional_sealer::{ConditionalSealer, NoopSealer, SequencerSealer}; use super::{extractors, metrics::AGGREGATION_METRICS, updates::UpdatesManager}; use crate::gas_tracker::{gas_count_from_tx_and_metrics, gas_count_from_writes}; @@ -104,7 +104,7 @@ impl SealData { } } -pub(super) trait SealCriterion: fmt::Debug + Send + 'static { +pub(super) trait SealCriterion: fmt::Debug + Send + Sync + 'static { fn should_seal( &self, config: &StateKeeperConfig, diff --git a/core/lib/zksync_core/src/state_keeper/tests/mod.rs b/core/lib/zksync_core/src/state_keeper/tests/mod.rs index c5841fd8b1b..6f71dc35bd9 100644 --- a/core/lib/zksync_core/src/state_keeper/tests/mod.rs +++ b/core/lib/zksync_core/src/state_keeper/tests/mod.rs @@ -1,5 +1,3 @@ -use once_cell::sync::Lazy; - use std::{ sync::{ atomic::{AtomicBool, AtomicU64, Ordering}, @@ -8,42 +6,45 @@ use std::{ time::Instant, }; -use multivm::interface::{ - CurrentExecutionState, ExecutionResult, FinishedL1Batch, L1BatchEnv, L2BlockEnv, Refunds, - SystemEnv, TxExecutionMode, VmExecutionResultAndLogs, VmExecutionStatistics, +use multivm::{ + interface::{ + CurrentExecutionState, ExecutionResult, FinishedL1Batch, L1BatchEnv, L2BlockEnv, Refunds, + SystemEnv, TxExecutionMode, VmExecutionResultAndLogs, VmExecutionStatistics, + }, + vm_latest::{constants::BLOCK_GAS_LIMIT, VmExecutionLogs}, }; -use multivm::vm_latest::{constants::BLOCK_GAS_LIMIT, VmExecutionLogs}; +use once_cell::sync::Lazy; use zksync_config::configs::chain::StateKeeperConfig; use zksync_contracts::{BaseSystemContracts, BaseSystemContractsHashes}; use zksync_system_constants::ZKPORTER_IS_AVAILABLE; use zksync_types::{ aggregated_operations::AggregatedActionType, - block::{legacy_miniblock_hash, miniblock_hash, BlockGasCount, MiniblockExecutionData}, - commitment::{L1BatchMetaParameters, L1BatchMetadata}, - fee::Fee, - l2::L2Tx, - transaction_request::PaymasterParams, + block::{BlockGasCount, MiniblockExecutionData, MiniblockHasher}, + fee_model::{BatchFeeInput, 
PubdataIndependentBatchFeeModelInput}, tx::tx_execution_info::ExecutionMetrics, - Address, L1BatchNumber, L2ChainId, LogQuery, MiniblockNumber, Nonce, ProtocolVersionId, + Address, L1BatchNumber, L2ChainId, LogQuery, MiniblockNumber, ProtocolVersionId, StorageLogQuery, StorageLogQueryType, Timestamp, Transaction, H256, U256, }; mod tester; -pub(crate) use self::tester::TestBatchExecutorBuilder; use self::tester::{ bootloader_tip_out_of_gas, pending_batch_data, random_tx, rejected_exec, successful_exec, successful_exec_with_metrics, TestScenario, }; -use crate::gas_tracker::l1_batch_base_cost; -use crate::state_keeper::{ - keeper::POLL_WAIT_DURATION, - seal_criteria::{ - criteria::{GasCriterion, SlotsCriterion}, - ConditionalSealer, +pub(crate) use self::tester::{MockBatchExecutorBuilder, TestBatchExecutorBuilder}; +use crate::{ + gas_tracker::l1_batch_base_cost, + state_keeper::{ + keeper::POLL_WAIT_DURATION, + seal_criteria::{ + criteria::{GasCriterion, SlotsCriterion}, + SequencerSealer, + }, + types::ExecutionMetricsForCriteria, + updates::UpdatesManager, }, - types::ExecutionMetricsForCriteria, - updates::UpdatesManager, + utils::testonly::create_l2_transaction, }; pub(super) static BASE_SYSTEM_CONTRACTS: Lazy = @@ -70,40 +71,19 @@ pub(super) fn default_l1_batch_env( previous_batch_hash: None, number: L1BatchNumber(number), timestamp, - l1_gas_price: 1, - fair_l2_gas_price: 1, fee_account, enforced_base_fee: None, first_l2_block: L2BlockEnv { number, timestamp, - prev_block_hash: legacy_miniblock_hash(MiniblockNumber(number - 1)), + prev_block_hash: MiniblockHasher::legacy_hash(MiniblockNumber(number - 1)), max_virtual_blocks_to_create: 1, }, - } -} - -pub(crate) fn create_l1_batch_metadata(number: u32) -> L1BatchMetadata { - L1BatchMetadata { - root_hash: H256::from_low_u64_be(number.into()), - rollup_last_leaf_index: u64::from(number) + 20, - merkle_root_hash: H256::from_low_u64_be(number.into()), - initial_writes_compressed: vec![], - repeated_writes_compressed: vec![], - commitment: H256::from_low_u64_be(number.into()), - l2_l1_messages_compressed: vec![], - l2_l1_merkle_root: H256::from_low_u64_be(number.into()), - block_meta_params: L1BatchMetaParameters { - zkporter_is_available: ZKPORTER_IS_AVAILABLE, - bootloader_code_hash: BASE_SYSTEM_CONTRACTS.bootloader.hash, - default_aa_code_hash: BASE_SYSTEM_CONTRACTS.default_aa.hash, - }, - aux_data_hash: H256::zero(), - meta_parameters_hash: H256::zero(), - pass_through_data_hash: H256::zero(), - events_queue_commitment: Some(H256::zero()), - bootloader_initial_content_commitment: Some(H256::zero()), - state_diffs_compressed: vec![], + fee_input: BatchFeeInput::PubdataIndependent(PubdataIndependentBatchFeeModelInput { + fair_l2_gas_price: 1, + fair_pubdata_price: 1, + l1_gas_price: 1, + }), } } @@ -127,6 +107,7 @@ pub(super) fn default_vm_block_result() -> FinishedL1Batch { storage_refunds: Vec::new(), }, final_bootloader_memory: Some(vec![]), + pubdata_input: Some(vec![]), } } @@ -139,32 +120,6 @@ pub(super) fn create_updates_manager() -> UpdatesManager { ) } -pub(crate) fn create_l2_transaction(fee_per_gas: u64, gas_per_pubdata: u32) -> L2Tx { - let fee = Fee { - gas_limit: 1000_u64.into(), - max_fee_per_gas: fee_per_gas.into(), - max_priority_fee_per_gas: 0_u64.into(), - gas_per_pubdata_limit: gas_per_pubdata.into(), - }; - let mut tx = L2Tx::new_signed( - Address::random(), - vec![], - Nonce(0), - fee, - U256::zero(), - L2ChainId::from(271), - &H256::random(), - None, - PaymasterParams::default(), - ) - .unwrap(); - // Input means 
all transaction data (NOT calldata, but all tx fields) that came from the API. - // This input will be used for the derivation of the tx hash, so put some random to it to be sure - // that the transaction hash is unique. - tx.set_input(H256::random().0.to_vec(), H256::random()); - tx -} - pub(super) fn create_transaction(fee_per_gas: u64, gas_per_pubdata: u32) -> Transaction { create_l2_transaction(fee_per_gas, gas_per_pubdata).into() } @@ -195,6 +150,7 @@ pub(super) fn create_execution_result( computational_gas_used: 0, total_log_queries, pubdata_published: 0, + estimated_circuits_used: 0.0, }, refunds: Refunds::default(), } @@ -246,7 +202,7 @@ async fn sealed_by_number_of_txs() { transaction_slots: 2, ..StateKeeperConfig::default() }; - let sealer = ConditionalSealer::with_sealers(config, vec![Box::new(SlotsCriterion)]); + let sealer = SequencerSealer::with_sealers(config, vec![Box::new(SlotsCriterion)]); TestScenario::new() .seal_miniblock_when(|updates| updates.miniblock.executed_transactions.len() == 1) @@ -267,10 +223,10 @@ async fn sealed_by_gas() { close_block_at_gas_percentage: 0.5, ..StateKeeperConfig::default() }; - let sealer = ConditionalSealer::with_sealers(config, vec![Box::new(GasCriterion)]); + let sealer = SequencerSealer::with_sealers(config, vec![Box::new(GasCriterion)]); let l1_gas_per_tx = BlockGasCount { - commit: 1, // Both txs together with block_base_cost would bring it over the block 31_001 commit bound. + commit: 1, // Both txs together with `block_base_cost` would bring it over the block `31_001` commit bound. prove: 0, execute: 0, }; @@ -316,7 +272,7 @@ async fn sealed_by_gas_then_by_num_tx() { transaction_slots: 3, ..StateKeeperConfig::default() }; - let sealer = ConditionalSealer::with_sealers( + let sealer = SequencerSealer::with_sealers( config, vec![Box::new(GasCriterion), Box::new(SlotsCriterion)], ); @@ -353,7 +309,7 @@ async fn batch_sealed_before_miniblock_does() { transaction_slots: 2, ..StateKeeperConfig::default() }; - let sealer = ConditionalSealer::with_sealers(config, vec![Box::new(SlotsCriterion)]); + let sealer = SequencerSealer::with_sealers(config, vec![Box::new(SlotsCriterion)]); // Miniblock sealer will not return true before the batch is sealed because the batch only has 2 txs. 
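Editor's note: the test changes above show the shape of the new fee model input. Instead of two loose `l1_gas_price` / `fair_l2_gas_price` fields, `L1BatchEnv` now carries a single `fee_input`, and the pubdata-independent variant adds `fair_pubdata_price`. A simplified stand-in follows (the real `BatchFeeInput::PubdataIndependent` wraps a separate struct and has more variants); it shows why accessors like the `l1_gas_price()` used by `UpdatesManager` later in this patch can simply delegate.

```rust
// Simplified stand-in for the `BatchFeeInput` introduced by this diff; the
// variant and field names match the diff, the overall shape does not.
#[derive(Debug, Clone, Copy)]
enum BatchFeeInput {
    PubdataIndependent {
        fair_l2_gas_price: u64,
        fair_pubdata_price: u64,
        l1_gas_price: u64,
    },
}

impl BatchFeeInput {
    fn l1_gas_price(&self) -> u64 {
        match self {
            Self::PubdataIndependent { l1_gas_price, .. } => *l1_gas_price,
        }
    }

    fn fair_l2_gas_price(&self) -> u64 {
        match self {
            Self::PubdataIndependent { fair_l2_gas_price, .. } => *fair_l2_gas_price,
        }
    }
}

fn main() {
    // Mirrors the values used by `default_l1_batch_env` in the tests above.
    let fee_input = BatchFeeInput::PubdataIndependent {
        fair_l2_gas_price: 1,
        fair_pubdata_price: 1,
        l1_gas_price: 1,
    };
    assert_eq!(fee_input.l1_gas_price(), 1);
    assert_eq!(fee_input.fair_l2_gas_price(), 1);
}
```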
TestScenario::new() @@ -378,7 +334,7 @@ async fn rejected_tx() { transaction_slots: 2, ..StateKeeperConfig::default() }; - let sealer = ConditionalSealer::with_sealers(config, vec![Box::new(SlotsCriterion)]); + let sealer = SequencerSealer::with_sealers(config, vec![Box::new(SlotsCriterion)]); let rejected_tx = random_tx(1); TestScenario::new() @@ -400,7 +356,7 @@ async fn bootloader_tip_out_of_gas_flow() { transaction_slots: 2, ..StateKeeperConfig::default() }; - let sealer = ConditionalSealer::with_sealers(config, vec![Box::new(SlotsCriterion)]); + let sealer = SequencerSealer::with_sealers(config, vec![Box::new(SlotsCriterion)]); let first_tx = random_tx(1); let bootloader_out_of_gas_tx = random_tx(2); @@ -438,20 +394,22 @@ async fn pending_batch_is_applied() { transaction_slots: 3, ..StateKeeperConfig::default() }; - let sealer = ConditionalSealer::with_sealers(config, vec![Box::new(SlotsCriterion)]); + let sealer = SequencerSealer::with_sealers(config, vec![Box::new(SlotsCriterion)]); let pending_batch = pending_batch_data(vec![ MiniblockExecutionData { number: MiniblockNumber(1), timestamp: 1, - prev_block_hash: miniblock_hash(MiniblockNumber(0), 0, H256::zero(), H256::zero()), + prev_block_hash: MiniblockHasher::new(MiniblockNumber(0), 0, H256::zero()) + .finalize(ProtocolVersionId::latest()), virtual_blocks: 1, txs: vec![random_tx(1)], }, MiniblockExecutionData { number: MiniblockNumber(2), timestamp: 2, - prev_block_hash: miniblock_hash(MiniblockNumber(1), 1, H256::zero(), H256::zero()), + prev_block_hash: MiniblockHasher::new(MiniblockNumber(1), 1, H256::zero()) + .finalize(ProtocolVersionId::latest()), virtual_blocks: 1, txs: vec![random_tx(2)], }, @@ -494,7 +452,7 @@ async fn unconditional_sealing() { transaction_slots: 2, ..StateKeeperConfig::default() }; - let sealer = ConditionalSealer::with_sealers(config, vec![Box::new(SlotsCriterion)]); + let sealer = SequencerSealer::with_sealers(config, vec![Box::new(SlotsCriterion)]); TestScenario::new() .seal_l1_batch_when(move |_| batch_seal_trigger_checker.load(Ordering::Relaxed)) @@ -524,12 +482,13 @@ async fn miniblock_timestamp_after_pending_batch() { transaction_slots: 2, ..StateKeeperConfig::default() }; - let sealer = ConditionalSealer::with_sealers(config, vec![Box::new(SlotsCriterion)]); + let sealer = SequencerSealer::with_sealers(config, vec![Box::new(SlotsCriterion)]); let pending_batch = pending_batch_data(vec![MiniblockExecutionData { number: MiniblockNumber(1), timestamp: 1, - prev_block_hash: miniblock_hash(MiniblockNumber(0), 0, H256::zero(), H256::zero()), + prev_block_hash: MiniblockHasher::new(MiniblockNumber(0), 0, H256::zero()) + .finalize(ProtocolVersionId::latest()), virtual_blocks: 1, txs: vec![random_tx(1)], }]); @@ -567,7 +526,7 @@ async fn time_is_monotonic() { transaction_slots: 2, ..StateKeeperConfig::default() }; - let sealer = ConditionalSealer::with_sealers(config, vec![Box::new(SlotsCriterion)]); + let sealer = SequencerSealer::with_sealers(config, vec![Box::new(SlotsCriterion)]); TestScenario::new() .seal_miniblock_when(|updates| updates.miniblock.executed_transactions.len() == 1) @@ -618,7 +577,7 @@ async fn protocol_upgrade() { transaction_slots: 2, ..StateKeeperConfig::default() }; - let sealer = ConditionalSealer::with_sealers(config, vec![Box::new(SlotsCriterion)]); + let sealer = SequencerSealer::with_sealers(config, vec![Box::new(SlotsCriterion)]); TestScenario::new() .seal_miniblock_when(|updates| updates.miniblock.executed_transactions.len() == 1) diff --git 
a/core/lib/zksync_core/src/state_keeper/tests/tester.rs b/core/lib/zksync_core/src/state_keeper/tests/tester.rs index 8d0d1fb047e..9ac886270d3 100644 --- a/core/lib/zksync_core/src/state_keeper/tests/tester.rs +++ b/core/lib/zksync_core/src/state_keeper/tests/tester.rs @@ -1,6 +1,3 @@ -use async_trait::async_trait; -use tokio::sync::{mpsc, watch}; - use std::{ collections::{HashMap, HashSet, VecDeque}, convert::TryInto, @@ -8,27 +5,32 @@ use std::{ time::{Duration, Instant}, }; -use multivm::interface::{ - ExecutionResult, FinishedL1Batch, L1BatchEnv, L2BlockEnv, SystemEnv, TxExecutionMode, - VmExecutionResultAndLogs, +use async_trait::async_trait; +use multivm::{ + interface::{ + ExecutionResult, FinishedL1Batch, L1BatchEnv, L2BlockEnv, SystemEnv, TxExecutionMode, + VmExecutionResultAndLogs, + }, + vm_latest::constants::BLOCK_GAS_LIMIT, }; -use multivm::vm_latest::constants::BLOCK_GAS_LIMIT; +use tokio::sync::{mpsc, watch}; use zksync_types::{ - block::MiniblockExecutionData, protocol_version::ProtocolUpgradeTx, + block::MiniblockExecutionData, fee_model::BatchFeeInput, protocol_version::ProtocolUpgradeTx, witness_block_state::WitnessBlockState, Address, L1BatchNumber, L2ChainId, MiniblockNumber, ProtocolVersionId, Transaction, H256, }; -use crate::state_keeper::{ - batch_executor::{BatchExecutorHandle, Command, L1BatchExecutorBuilder, TxExecutionResult}, - io::{MiniblockParams, PendingBatchData, StateKeeperIO}, - seal_criteria::{ConditionalSealer, IoSealCriteria}, - tests::{ - create_l2_transaction, default_l1_batch_env, default_vm_block_result, BASE_SYSTEM_CONTRACTS, +use crate::{ + state_keeper::{ + batch_executor::{BatchExecutorHandle, Command, L1BatchExecutorBuilder, TxExecutionResult}, + io::{MiniblockParams, PendingBatchData, StateKeeperIO}, + seal_criteria::{IoSealCriteria, SequencerSealer}, + tests::{default_l1_batch_env, default_vm_block_result, BASE_SYSTEM_CONTRACTS}, + types::ExecutionMetricsForCriteria, + updates::UpdatesManager, + ZkSyncStateKeeper, }, - types::ExecutionMetricsForCriteria, - updates::UpdatesManager, - ZkSyncStateKeeper, + utils::testonly::create_l2_transaction, }; const FEE_ACCOUNT: Address = Address::repeat_byte(0x11); @@ -188,7 +190,7 @@ impl TestScenario { /// Launches the test. /// Provided `SealManager` is expected to be externally configured to adhere the written scenario logic. 
- pub(crate) async fn run(self, sealer: ConditionalSealer) { + pub(crate) async fn run(self, sealer: SequencerSealer) { assert!(!self.actions.is_empty(), "Test scenario can't be empty"); let batch_executor_base = TestBatchExecutorBuilder::new(&self); @@ -198,7 +200,7 @@ impl TestScenario { stop_receiver, Box::new(io), Box::new(batch_executor_base), - sealer, + Box::new(sealer), ); let sk_thread = tokio::spawn(sk.run()); @@ -541,8 +543,7 @@ pub(crate) struct TestIO { stop_sender: watch::Sender, batch_number: L1BatchNumber, timestamp: u64, - l1_gas_price: u64, - fair_l2_gas_price: u64, + fee_input: BatchFeeInput, miniblock_number: MiniblockNumber, fee_account: Address, scenario: TestScenario, @@ -559,8 +560,7 @@ impl TestIO { stop_sender, batch_number: L1BatchNumber(1), timestamp: 1, - l1_gas_price: 1, - fair_l2_gas_price: 1, + fee_input: BatchFeeInput::default(), miniblock_number: MiniblockNumber(1), fee_account: FEE_ACCOUNT, scenario, @@ -649,8 +649,7 @@ impl StateKeeperIO for TestIO { previous_batch_hash: Some(H256::zero()), number: self.batch_number, timestamp: self.timestamp, - l1_gas_price: self.l1_gas_price, - fair_l2_gas_price: self.fair_l2_gas_price, + fee_input: self.fee_input, fee_account: self.fee_account, enforced_base_fee: None, first_l2_block: first_miniblock_info, @@ -769,3 +768,35 @@ impl StateKeeperIO for TestIO { None } } + +/// `L1BatchExecutorBuilder` which doesn't check anything at all. Accepts all transactions. +// FIXME: move to `utils`? +#[derive(Debug)] +pub(crate) struct MockBatchExecutorBuilder; + +#[async_trait] +impl L1BatchExecutorBuilder for MockBatchExecutorBuilder { + async fn init_batch( + &mut self, + _l1batch_params: L1BatchEnv, + _system_env: SystemEnv, + ) -> BatchExecutorHandle { + let (send, recv) = mpsc::channel(1); + let handle = tokio::task::spawn(async { + let mut recv = recv; + while let Some(cmd) = recv.recv().await { + match cmd { + Command::ExecuteTx(_, resp) => resp.send(successful_exec()).unwrap(), + Command::StartNextMiniblock(_, resp) => resp.send(()).unwrap(), + Command::RollbackLastTx(_) => panic!("unexpected rollback"), + Command::FinishBatch(resp) => { + // Blanket result, it doesn't really matter. 
+ resp.send((default_vm_block_result(), None)).unwrap(); + return; + } + } + } + }); + BatchExecutorHandle::from_raw(handle, send) + } +} diff --git a/core/lib/zksync_core/src/state_keeper/types.rs b/core/lib/zksync_core/src/state_keeper/types.rs index 5e74cc1b4de..2dbce8dd207 100644 --- a/core/lib/zksync_core/src/state_keeper/types.rs +++ b/core/lib/zksync_core/src/state_keeper/types.rs @@ -3,12 +3,14 @@ use std::{ sync::{Arc, Mutex}, }; +use multivm::interface::VmExecutionResultAndLogs; use zksync_mempool::{L2TxFilter, MempoolInfo, MempoolStore}; use zksync_types::{ block::BlockGasCount, tx::ExecutionMetrics, Address, Nonce, PriorityOpId, Transaction, }; use super::metrics::StateKeeperGauges; +use crate::gas_tracker::{gas_count_from_metrics, gas_count_from_tx_and_metrics}; #[derive(Debug, Clone)] pub struct MempoolGuard(Arc>); @@ -64,3 +66,21 @@ pub struct ExecutionMetricsForCriteria { pub l1_gas: BlockGasCount, pub execution_metrics: ExecutionMetrics, } + +impl ExecutionMetricsForCriteria { + pub fn new( + tx: Option<&Transaction>, + execution_result: &VmExecutionResultAndLogs, + ) -> ExecutionMetricsForCriteria { + let execution_metrics = execution_result.get_execution_metrics(tx); + let l1_gas = match tx { + Some(tx) => gas_count_from_tx_and_metrics(tx, &execution_metrics), + None => gas_count_from_metrics(&execution_metrics), + }; + + ExecutionMetricsForCriteria { + l1_gas, + execution_metrics, + } + } +} diff --git a/core/lib/zksync_core/src/state_keeper/updates/l1_batch_updates.rs b/core/lib/zksync_core/src/state_keeper/updates/l1_batch_updates.rs index fdaa0b036f9..7f18edb3320 100644 --- a/core/lib/zksync_core/src/state_keeper/updates/l1_batch_updates.rs +++ b/core/lib/zksync_core/src/state_keeper/updates/l1_batch_updates.rs @@ -1,9 +1,12 @@ +use zksync_types::{ + block::BlockGasCount, + priority_op_onchain_data::PriorityOpOnchainData, + tx::{tx_execution_info::ExecutionMetrics, TransactionExecutionResult}, + ExecuteTransactionCommon, +}; + use super::miniblock_updates::MiniblockUpdates; use crate::gas_tracker::new_block_gas_count; -use zksync_types::block::BlockGasCount; -use zksync_types::priority_op_onchain_data::PriorityOpOnchainData; -use zksync_types::tx::tx_execution_info::ExecutionMetrics; -use zksync_types::{tx::TransactionExecutionResult, ExecuteTransactionCommon}; #[derive(Debug, Clone, PartialEq)] pub struct L1BatchUpdates { @@ -44,6 +47,7 @@ impl L1BatchUpdates { #[cfg(test)] mod tests { + use multivm::vm_latest::TransactionVmExt; use zksync_types::{ProtocolVersionId, H256}; use super::*; @@ -51,12 +55,11 @@ mod tests { gas_tracker::new_block_gas_count, state_keeper::tests::{create_execution_result, create_transaction}, }; - use multivm::vm_latest::TransactionVmExt; #[test] fn apply_miniblock_with_empty_tx() { let mut miniblock_accumulator = - MiniblockUpdates::new(0, 0, H256::zero(), 1, Some(ProtocolVersionId::latest())); + MiniblockUpdates::new(0, 0, H256::zero(), 1, ProtocolVersionId::latest()); let tx = create_transaction(10, 100); let expected_tx_size = tx.bootloader_encoding_size(); diff --git a/core/lib/zksync_core/src/state_keeper/updates/miniblock_updates.rs b/core/lib/zksync_core/src/state_keeper/updates/miniblock_updates.rs index d0a4f035f51..4dd561e72aa 100644 --- a/core/lib/zksync_core/src/state_keeper/updates/miniblock_updates.rs +++ b/core/lib/zksync_core/src/state_keeper/updates/miniblock_updates.rs @@ -1,18 +1,18 @@ -use multivm::interface::{ExecutionResult, L2BlockEnv, VmExecutionResultAndLogs}; -use multivm::vm_latest::TransactionVmExt; use 
std::collections::HashMap; -use zksync_types::l2_to_l1_log::{SystemL2ToL1Log, UserL2ToL1Log}; +use multivm::{ + interface::{ExecutionResult, L2BlockEnv, VmExecutionResultAndLogs}, + vm_latest::TransactionVmExt, +}; use zksync_types::{ - block::{legacy_miniblock_hash, miniblock_hash, BlockGasCount}, + block::{BlockGasCount, MiniblockHasher}, event::extract_bytecodes_marked_as_known, - tx::tx_execution_info::TxExecutionStatus, - tx::{ExecutionMetrics, TransactionExecutionResult}, + l2_to_l1_log::{SystemL2ToL1Log, UserL2ToL1Log}, + tx::{tx_execution_info::TxExecutionStatus, ExecutionMetrics, TransactionExecutionResult}, vm_trace::Call, MiniblockNumber, ProtocolVersionId, StorageLogQuery, Transaction, VmEvent, H256, }; use zksync_utils::bytecode::{hash_bytecode, CompressedBytecodeInfo}; -use zksync_utils::concat_and_hash; #[derive(Debug, Clone, PartialEq)] pub struct MiniblockUpdates { @@ -29,9 +29,8 @@ pub struct MiniblockUpdates { pub timestamp: u64, pub number: u32, pub prev_block_hash: H256, - pub txs_rolling_hash: H256, pub virtual_blocks: u32, - pub protocol_version: Option, + pub protocol_version: ProtocolVersionId, } impl MiniblockUpdates { @@ -40,7 +39,7 @@ impl MiniblockUpdates { number: u32, prev_block_hash: H256, virtual_blocks: u32, - protocol_version: Option, + protocol_version: ProtocolVersionId, ) -> Self { Self { executed_transactions: vec![], @@ -55,19 +54,26 @@ impl MiniblockUpdates { timestamp, number, prev_block_hash, - txs_rolling_hash: H256::zero(), virtual_blocks, protocol_version, } } - pub(crate) fn extend_from_fictive_transaction(&mut self, result: VmExecutionResultAndLogs) { + pub(crate) fn extend_from_fictive_transaction( + &mut self, + result: VmExecutionResultAndLogs, + l1_gas_count: BlockGasCount, + execution_metrics: ExecutionMetrics, + ) { self.events.extend(result.logs.events); self.storage_logs.extend(result.logs.storage_logs); self.user_l2_to_l1_logs .extend(result.logs.user_l2_to_l1_logs); self.system_l2_to_l1_logs .extend(result.logs.system_l2_to_l1_logs); + + self.l1_gas_count += l1_gas_count; + self.block_execution_metrics += execution_metrics; } pub(crate) fn extend_from_executed_transaction( @@ -124,12 +130,9 @@ impl MiniblockUpdates { self.l1_gas_count += tx_l1_gas_this_tx; self.block_execution_metrics += execution_metrics; self.txs_encoding_size += tx.bootloader_encoding_size(); - self.storage_logs .extend(tx_execution_result.logs.storage_logs); - self.txs_rolling_hash = concat_and_hash(self.txs_rolling_hash, tx.hash()); - self.executed_transactions.push(TransactionExecutionResult { hash: tx.hash(), transaction: tx, @@ -145,15 +148,15 @@ impl MiniblockUpdates { /// Calculates miniblock hash based on the protocol version. 
pub(crate) fn get_miniblock_hash(&self) -> H256 { - match self.protocol_version { - Some(id) if id >= ProtocolVersionId::Version13 => miniblock_hash( - MiniblockNumber(self.number), - self.timestamp, - self.prev_block_hash, - self.txs_rolling_hash, - ), - _ => legacy_miniblock_hash(MiniblockNumber(self.number)), + let mut digest = MiniblockHasher::new( + MiniblockNumber(self.number), + self.timestamp, + self.prev_block_hash, + ); + for tx in &self.executed_transactions { + digest.push_tx_hash(tx.hash); } + digest.finalize(self.protocol_version) } pub(crate) fn get_miniblock_env(&self) -> L2BlockEnv { @@ -168,14 +171,15 @@ impl MiniblockUpdates { #[cfg(test)] mod tests { + use multivm::vm_latest::TransactionVmExt; + use super::*; use crate::state_keeper::tests::{create_execution_result, create_transaction}; - use multivm::vm_latest::TransactionVmExt; #[test] fn apply_empty_l2_tx() { let mut accumulator = - MiniblockUpdates::new(0, 0, H256::random(), 0, Some(ProtocolVersionId::latest())); + MiniblockUpdates::new(0, 0, H256::random(), 0, ProtocolVersionId::latest()); let tx = create_transaction(10, 100); let bootloader_encoding_size = tx.bootloader_encoding_size(); accumulator.extend_from_executed_transaction( diff --git a/core/lib/zksync_core/src/state_keeper/updates/mod.rs b/core/lib/zksync_core/src/state_keeper/updates/mod.rs index dc72893e703..7718882af28 100644 --- a/core/lib/zksync_core/src/state_keeper/updates/mod.rs +++ b/core/lib/zksync_core/src/state_keeper/updates/mod.rs @@ -1,21 +1,22 @@ -use multivm::interface::{L1BatchEnv, VmExecutionResultAndLogs}; - +use multivm::{ + interface::{L1BatchEnv, VmExecutionResultAndLogs}, + utils::get_batch_base_fee, +}; use zksync_contracts::BaseSystemContractsHashes; -use zksync_types::vm_trace::Call; use zksync_types::{ - block::BlockGasCount, storage_writes_deduplicator::StorageWritesDeduplicator, - tx::tx_execution_info::ExecutionMetrics, Address, L1BatchNumber, MiniblockNumber, - ProtocolVersionId, Transaction, + block::BlockGasCount, fee_model::BatchFeeInput, + storage_writes_deduplicator::StorageWritesDeduplicator, + tx::tx_execution_info::ExecutionMetrics, vm_trace::Call, Address, L1BatchNumber, + MiniblockNumber, ProtocolVersionId, Transaction, }; use zksync_utils::bytecode::CompressedBytecodeInfo; -pub mod l1_batch_updates; -pub mod miniblock_updates; - pub(crate) use self::{l1_batch_updates::L1BatchUpdates, miniblock_updates::MiniblockUpdates}; - use super::io::MiniblockParams; +pub mod l1_batch_updates; +pub mod miniblock_updates; + /// Most of the information needed to seal the l1 batch/mini-block is contained within the VM, /// things that are not captured there are accumulated externally. /// `MiniblockUpdates` keeps updates for the pending mini-block. 
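Editor's note: the hash computation above is now deferred. Instead of maintaining a `txs_rolling_hash` field after every transaction, `get_miniblock_hash` folds the executed transactions into a `MiniblockHasher` and finalizes once, per protocol version. The stand-in sketch below mimics that flow; std's hasher replaces the real keccak-based hashing and `u64` replaces `H256`, so the produced values are not real miniblock hashes.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for the crate's `MiniblockHasher`: push tx hashes one by one,
// materialize the digest only in `finalize`.
struct MiniblockHasher {
    number: u32,
    timestamp: u64,
    prev_block_hash: u64,
    txs_rolling_hash: u64,
}

impl MiniblockHasher {
    fn new(number: u32, timestamp: u64, prev_block_hash: u64) -> Self {
        Self { number, timestamp, prev_block_hash, txs_rolling_hash: 0 }
    }

    fn push_tx_hash(&mut self, tx_hash: u64) {
        // "concat and hash" fold over executed transactions, as in the
        // removed `concat_and_hash` helper (here with std's hasher).
        let mut hasher = DefaultHasher::new();
        (self.txs_rolling_hash, tx_hash).hash(&mut hasher);
        self.txs_rolling_hash = hasher.finish();
    }

    fn finalize(self, legacy: bool) -> u64 {
        let mut hasher = DefaultHasher::new();
        if legacy {
            // pre-v13 protocol versions derive the hash from the number only
            self.number.hash(&mut hasher);
        } else {
            (self.number, self.timestamp, self.prev_block_hash, self.txs_rolling_hash)
                .hash(&mut hasher);
        }
        hasher.finish()
    }
}

fn main() {
    let mut digest = MiniblockHasher::new(1, 1, 0);
    for tx_hash in [11, 22, 33] {
        digest.push_tx_hash(tx_hash);
    }
    println!("stand-in miniblock hash: {:#x}", digest.finalize(false));
}
```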
@@ -25,8 +26,7 @@ use super::io::MiniblockParams; #[derive(Debug, Clone, PartialEq)] pub struct UpdatesManager { batch_timestamp: u64, - l1_gas_price: u64, - fair_l2_gas_price: u64, + batch_fee_input: BatchFeeInput, base_fee_per_gas: u64, base_system_contract_hashes: BaseSystemContractsHashes, protocol_version: ProtocolVersionId, @@ -43,9 +43,8 @@ impl UpdatesManager { ) -> Self { Self { batch_timestamp: l1_batch_env.timestamp, - l1_gas_price: l1_batch_env.l1_gas_price, - fair_l2_gas_price: l1_batch_env.fair_l2_gas_price, - base_fee_per_gas: l1_batch_env.base_fee(), + batch_fee_input: l1_batch_env.fee_input, + base_fee_per_gas: get_batch_base_fee(&l1_batch_env, protocol_version.into()), protocol_version, base_system_contract_hashes, l1_batch: L1BatchUpdates::new(), @@ -54,7 +53,7 @@ impl UpdatesManager { l1_batch_env.first_l2_block.number, l1_batch_env.first_l2_block.prev_block_hash, l1_batch_env.first_l2_block.max_virtual_blocks_to_create, - Some(protocol_version), + protocol_version, ), storage_writes_deduplicator: StorageWritesDeduplicator::new(), } @@ -69,11 +68,11 @@ impl UpdatesManager { } pub(crate) fn l1_gas_price(&self) -> u64 { - self.l1_gas_price + self.batch_fee_input.l1_gas_price() } pub(crate) fn fair_l2_gas_price(&self) -> u64 { - self.fair_l2_gas_price + self.batch_fee_input.fair_l2_gas_price() } pub(crate) fn seal_miniblock_command( @@ -81,18 +80,19 @@ impl UpdatesManager { l1_batch_number: L1BatchNumber, miniblock_number: MiniblockNumber, l2_erc20_bridge_addr: Address, + pre_insert_txs: bool, ) -> MiniblockSealCommand { MiniblockSealCommand { l1_batch_number, miniblock_number, miniblock: self.miniblock.clone(), first_tx_index: self.l1_batch.executed_transactions.len(), - l1_gas_price: self.l1_gas_price, - fair_l2_gas_price: self.fair_l2_gas_price, + fee_input: self.batch_fee_input, base_fee_per_gas: self.base_fee_per_gas, base_system_contracts_hashes: self.base_system_contract_hashes, protocol_version: Some(self.protocol_version), l2_erc20_bridge_addr, + pre_insert_txs, } } @@ -121,10 +121,16 @@ impl UpdatesManager { ); } - pub(crate) fn extend_from_fictive_transaction(&mut self, result: VmExecutionResultAndLogs) { + pub(crate) fn extend_from_fictive_transaction( + &mut self, + result: VmExecutionResultAndLogs, + l1_gas_count: BlockGasCount, + execution_metrics: ExecutionMetrics, + ) { self.storage_writes_deduplicator .apply(&result.logs.storage_logs); - self.miniblock.extend_from_fictive_transaction(result); + self.miniblock + .extend_from_fictive_transaction(result, l1_gas_count, execution_metrics); } /// Pushes a new miniblock with the specified timestamp into this manager. The previously @@ -135,7 +141,7 @@ impl UpdatesManager { self.miniblock.number + 1, self.miniblock.get_miniblock_hash(), miniblock_params.virtual_blocks, - Some(self.protocol_version), + self.protocol_version, ); let old_miniblock_updates = std::mem::replace(&mut self.miniblock, new_miniblock_updates); self.l1_batch @@ -166,12 +172,15 @@ pub(crate) struct MiniblockSealCommand { pub miniblock_number: MiniblockNumber, pub miniblock: MiniblockUpdates, pub first_tx_index: usize, - pub l1_gas_price: u64, - pub fair_l2_gas_price: u64, + pub fee_input: BatchFeeInput, pub base_fee_per_gas: u64, pub base_system_contracts_hashes: BaseSystemContractsHashes, pub protocol_version: Option, pub l2_erc20_bridge_addr: Address, + /// Whether transactions should be pre-inserted to DB. + /// Should be set to `true` for EN's IO as EN doesn't store transactions in DB + /// before they are included into miniblocks. 
+ pub pre_insert_txs: bool, } #[cfg(test)] diff --git a/core/lib/zksync_core/src/sync_layer/batch_status_updater.rs b/core/lib/zksync_core/src/sync_layer/batch_status_updater.rs deleted file mode 100644 index 8e7ebe7a985..00000000000 --- a/core/lib/zksync_core/src/sync_layer/batch_status_updater.rs +++ /dev/null @@ -1,366 +0,0 @@ -use chrono::{DateTime, Utc}; -use tokio::sync::watch::Receiver; - -use std::time::Duration; - -use zksync_dal::ConnectionPool; -use zksync_types::{ - aggregated_operations::AggregatedActionType, api::BlockDetails, L1BatchNumber, MiniblockNumber, - H256, -}; -use zksync_web3_decl::{ - jsonrpsee::http_client::{HttpClient, HttpClientBuilder}, - namespaces::ZksNamespaceClient, - RpcResult, -}; - -use super::metrics::{FetchStage, L1BatchStage, FETCHER_METRICS}; -use crate::metrics::EN_METRICS; - -/// Represents a change in the batch status. -/// It may be a batch being committed, proven or executed. -#[derive(Debug)] -pub(crate) struct BatchStatusChange { - pub(crate) number: L1BatchNumber, - pub(crate) l1_tx_hash: H256, - pub(crate) happened_at: DateTime, -} - -#[derive(Debug, Default)] -struct StatusChanges { - commit: Vec, - prove: Vec, - execute: Vec, -} - -impl StatusChanges { - fn new() -> Self { - Self::default() - } - - /// Returns true if there are no status changes. - fn is_empty(&self) -> bool { - self.commit.is_empty() && self.prove.is_empty() && self.execute.is_empty() - } -} - -/// Module responsible for fetching the batch status changes, i.e. one that monitors whether the -/// locally applied batch was committed, proven or executed on L1. -/// -/// In essence, it keeps track of the last batch number per status, and periodically polls the main -/// node on these batches in order to see whether the status has changed. If some changes were picked up, -/// the module updates the database to mirror the state observable from the main node. -#[derive(Debug)] -pub struct BatchStatusUpdater { - client: HttpClient, - pool: ConnectionPool, - - last_executed_l1_batch: L1BatchNumber, - last_proven_l1_batch: L1BatchNumber, - last_committed_l1_batch: L1BatchNumber, -} - -impl BatchStatusUpdater { - pub async fn new(main_node_url: &str, pool: ConnectionPool) -> Self { - let client = HttpClientBuilder::default() - .build(main_node_url) - .expect("Unable to create a main node client"); - - let mut storage = pool.access_storage_tagged("sync_layer").await.unwrap(); - let last_executed_l1_batch = storage - .blocks_dal() - .get_number_of_last_l1_batch_executed_on_eth() - .await - .unwrap() - .unwrap_or_default(); - let last_proven_l1_batch = storage - .blocks_dal() - .get_number_of_last_l1_batch_proven_on_eth() - .await - .unwrap() - .unwrap_or_default(); - let last_committed_l1_batch = storage - .blocks_dal() - .get_number_of_last_l1_batch_committed_on_eth() - .await - .unwrap() - .unwrap_or_default(); - drop(storage); - - Self { - client, - pool, - - last_committed_l1_batch, - last_proven_l1_batch, - last_executed_l1_batch, - } - } - - pub async fn run(mut self, stop_receiver: Receiver) -> anyhow::Result<()> { - loop { - if *stop_receiver.borrow() { - tracing::info!("Stop signal received, exiting the batch status updater routine"); - return Ok(()); - } - // Status changes are created externally, so that even if we will receive a network error - // while requesting the changes, we will be able to process what we already fetched. 
- let mut status_changes = StatusChanges::new(); - if let Err(err) = self.get_status_changes(&mut status_changes).await { - tracing::warn!("Failed to get status changes from the database: {err}"); - }; - - if status_changes.is_empty() { - const DELAY_INTERVAL: Duration = Duration::from_secs(5); - tokio::time::sleep(DELAY_INTERVAL).await; - continue; - } - - self.apply_status_changes(status_changes).await; - } - } - - /// Goes through the already fetched batches trying to update their statuses. - /// Returns a collection of the status updates grouped by the operation type. - /// - /// Fetched changes are capped by the last locally applied batch number, so - /// it's safe to assume that every status change can safely be applied (no status - /// changes "from the future"). - async fn get_status_changes(&self, status_changes: &mut StatusChanges) -> RpcResult<()> { - let total_latency = EN_METRICS.update_batch_statuses.start(); - let last_sealed_batch = self - .pool - .access_storage_tagged("sync_layer") - .await - .unwrap() - .blocks_dal() - .get_newest_l1_batch_header() - .await - .unwrap() - .number; - - let mut last_committed_l1_batch = self.last_committed_l1_batch; - let mut last_proven_l1_batch = self.last_proven_l1_batch; - let mut last_executed_l1_batch = self.last_executed_l1_batch; - - let mut batch = last_executed_l1_batch.next(); - // In this loop we try to progress on the batch statuses, utilizing the same request to the node to potentially - // update all three statuses (e.g. if the node is still syncing), but also skipping the gaps in the statuses - // (e.g. if the last executed batch is 10, but the last proven is 20, we don't need to check the batches 11-19). - while batch <= last_sealed_batch { - // While we may receive `None` for the `self.current_l1_batch`, it's OK: open batch is guaranteed to not - // be sent to L1. - let request_latency = FETCHER_METRICS.requests[&FetchStage::GetMiniblockRange].start(); - let Some((start_miniblock, _)) = self.client.get_miniblock_range(batch).await? else { - return Ok(()); - }; - request_latency.observe(); - - // We could've used any miniblock from the range, all of them share the same info. - let request_latency = FETCHER_METRICS.requests[&FetchStage::GetBlockDetails].start(); - let Some(batch_info) = self - .client - .get_block_details(MiniblockNumber(start_miniblock.as_u32())) - .await? - else { - // We cannot recover from an external API inconsistency. - panic!( - "Node API is inconsistent: miniblock {} was reported to be a part of {} L1 batch, \ - but API has no information about this miniblock", start_miniblock, batch - ); - }; - request_latency.observe(); - - Self::update_committed_batch(status_changes, &batch_info, &mut last_committed_l1_batch); - Self::update_proven_batch(status_changes, &batch_info, &mut last_proven_l1_batch); - Self::update_executed_batch(status_changes, &batch_info, &mut last_executed_l1_batch); - - // Check whether we can skip a part of the range. - if batch_info.base.commit_tx_hash.is_none() { - // No committed batches after this one. - break; - } else if batch_info.base.prove_tx_hash.is_none() && batch < last_committed_l1_batch { - // The interval between this batch and the last committed one is not proven. - batch = last_committed_l1_batch.next(); - } else if batch_info.base.executed_at.is_none() && batch < last_proven_l1_batch { - // The interval between this batch and the last proven one is not executed. 
- batch = last_proven_l1_batch.next(); - } else { - batch += 1; - } - } - - total_latency.observe(); - Ok(()) - } - - fn update_committed_batch( - status_changes: &mut StatusChanges, - batch_info: &BlockDetails, - last_committed_l1_batch: &mut L1BatchNumber, - ) { - if batch_info.base.commit_tx_hash.is_some() - && batch_info.l1_batch_number == last_committed_l1_batch.next() - { - assert!( - batch_info.base.committed_at.is_some(), - "Malformed API response: batch is committed, but has no commit timestamp" - ); - status_changes.commit.push(BatchStatusChange { - number: batch_info.l1_batch_number, - l1_tx_hash: batch_info.base.commit_tx_hash.unwrap(), - happened_at: batch_info.base.committed_at.unwrap(), - }); - tracing::info!("Batch {}: committed", batch_info.l1_batch_number); - FETCHER_METRICS.l1_batch[&L1BatchStage::Committed] - .set(batch_info.l1_batch_number.0.into()); - *last_committed_l1_batch += 1; - } - } - - fn update_proven_batch( - status_changes: &mut StatusChanges, - batch_info: &BlockDetails, - last_proven_l1_batch: &mut L1BatchNumber, - ) { - if batch_info.base.prove_tx_hash.is_some() - && batch_info.l1_batch_number == last_proven_l1_batch.next() - { - assert!( - batch_info.base.proven_at.is_some(), - "Malformed API response: batch is proven, but has no prove timestamp" - ); - status_changes.prove.push(BatchStatusChange { - number: batch_info.l1_batch_number, - l1_tx_hash: batch_info.base.prove_tx_hash.unwrap(), - happened_at: batch_info.base.proven_at.unwrap(), - }); - tracing::info!("Batch {}: proven", batch_info.l1_batch_number); - FETCHER_METRICS.l1_batch[&L1BatchStage::Proven] - .set(batch_info.l1_batch_number.0.into()); - *last_proven_l1_batch += 1; - } - } - - fn update_executed_batch( - status_changes: &mut StatusChanges, - batch_info: &BlockDetails, - last_executed_l1_batch: &mut L1BatchNumber, - ) { - if batch_info.base.execute_tx_hash.is_some() - && batch_info.l1_batch_number == last_executed_l1_batch.next() - { - assert!( - batch_info.base.executed_at.is_some(), - "Malformed API response: batch is executed, but has no execute timestamp" - ); - status_changes.execute.push(BatchStatusChange { - number: batch_info.l1_batch_number, - l1_tx_hash: batch_info.base.execute_tx_hash.unwrap(), - happened_at: batch_info.base.executed_at.unwrap(), - }); - tracing::info!("Batch {}: executed", batch_info.l1_batch_number); - FETCHER_METRICS.l1_batch[&L1BatchStage::Executed] - .set(batch_info.l1_batch_number.0.into()); - *last_executed_l1_batch += 1; - } - } - - /// Inserts the provided status changes into the database. - /// The status changes are applied to the database by inserting bogus confirmed transactions (with - /// some fields missing/substituted) only to satisfy API needs; this component doesn't expect the updated - /// tables to be ever accessed by the `eth_sender` module. 
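Editor's note: the polling loop deleted above (and re-created in the new module below) advances through batches with a gap-skipping rule: stop when a batch is not yet committed, and jump over ranges that cannot contain new prove/execute information. A simplified standalone version of that rule, with stand-in types instead of the real `BlockDetails`:

```rust
#[derive(Debug, Clone, Copy)]
struct BatchL1Status {
    committed: bool,
    proven: bool,
    executed: bool,
}

/// Returns `None` when polling should stop (nothing past this batch is
/// committed yet), otherwise the next batch number to query.
fn next_batch_to_query(
    batch: u32,
    status: BatchL1Status,
    last_committed: u32,
    last_proven: u32,
) -> Option<u32> {
    if !status.committed {
        None // no committed batches after this one
    } else if !status.proven && batch < last_committed {
        Some(last_committed + 1) // skip the committed-but-unproven range
    } else if !status.executed && batch < last_proven {
        Some(last_proven + 1) // skip the proven-but-unexecuted range
    } else {
        Some(batch + 1)
    }
}

fn main() {
    let status = BatchL1Status { committed: true, proven: false, executed: false };
    // Batch 11 is committed but unproven while batches up to 20 are committed:
    // nothing in 12..=20 can yield new prove info, so resume at 21.
    assert_eq!(next_batch_to_query(11, status, 20, 10), Some(21));
}
```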
- async fn apply_status_changes(&mut self, changes: StatusChanges) { - let total_latency = EN_METRICS.batch_status_updater_loop_iteration.start(); - let mut connection = self.pool.access_storage_tagged("sync_layer").await.unwrap(); - - let mut transaction = connection.start_transaction().await.unwrap(); - - let last_sealed_batch = transaction - .blocks_dal() - .get_newest_l1_batch_header() - .await - .unwrap() - .number; - - for change in changes.commit.into_iter() { - tracing::info!( - "Commit status change: number {}, hash {}, happened at {}", - change.number, - change.l1_tx_hash, - change.happened_at - ); - - assert!( - change.number <= last_sealed_batch, - "Incorrect update state: unknown batch marked as committed" - ); - - transaction - .eth_sender_dal() - .insert_bogus_confirmed_eth_tx( - change.number, - AggregatedActionType::Commit, - change.l1_tx_hash, - change.happened_at, - ) - .await - .unwrap(); - self.last_committed_l1_batch = change.number; - } - for change in changes.prove.into_iter() { - tracing::info!( - "Prove status change: number {}, hash {}, happened at {}", - change.number, - change.l1_tx_hash, - change.happened_at - ); - - assert!( - change.number <= self.last_committed_l1_batch, - "Incorrect update state: proven batch must be committed" - ); - - transaction - .eth_sender_dal() - .insert_bogus_confirmed_eth_tx( - change.number, - AggregatedActionType::PublishProofOnchain, - change.l1_tx_hash, - change.happened_at, - ) - .await - .unwrap(); - self.last_proven_l1_batch = change.number; - } - for change in changes.execute.into_iter() { - tracing::info!( - "Execute status change: number {}, hash {}, happened at {}", - change.number, - change.l1_tx_hash, - change.happened_at - ); - - assert!( - change.number <= self.last_proven_l1_batch, - "Incorrect update state: executed batch must be proven" - ); - - transaction - .eth_sender_dal() - .insert_bogus_confirmed_eth_tx( - change.number, - AggregatedActionType::Execute, - change.l1_tx_hash, - change.happened_at, - ) - .await - .unwrap(); - self.last_executed_l1_batch = change.number; - } - - transaction.commit().await.unwrap(); - - total_latency.observe(); - } -} diff --git a/core/lib/zksync_core/src/sync_layer/batch_status_updater/mod.rs b/core/lib/zksync_core/src/sync_layer/batch_status_updater/mod.rs new file mode 100644 index 00000000000..4a670349723 --- /dev/null +++ b/core/lib/zksync_core/src/sync_layer/batch_status_updater/mod.rs @@ -0,0 +1,470 @@ +//! Component responsible for updating L1 batch status. + +use std::{fmt, time::Duration}; + +use anyhow::Context as _; +use async_trait::async_trait; +use chrono::{DateTime, Utc}; +#[cfg(test)] +use tokio::sync::mpsc; +use tokio::sync::watch; +use zksync_dal::{ConnectionPool, StorageProcessor}; +use zksync_types::{ + aggregated_operations::AggregatedActionType, api, L1BatchNumber, MiniblockNumber, H256, +}; +use zksync_web3_decl::{ + jsonrpsee::{ + core::ClientError, + http_client::{HttpClient, HttpClientBuilder}, + }, + namespaces::ZksNamespaceClient, +}; + +use super::metrics::{FetchStage, FETCHER_METRICS}; +use crate::{metrics::EN_METRICS, utils::projected_first_l1_batch}; + +#[cfg(test)] +mod tests; + +fn l1_batch_stage_to_action_str(stage: AggregatedActionType) -> &'static str { + match stage { + AggregatedActionType::Commit => "committed", + AggregatedActionType::PublishProofOnchain => "proven", + AggregatedActionType::Execute => "executed", + } +} + +/// Represents a change in the batch status. +/// It may be a batch being committed, proven or executed. 
+#[derive(Debug)]
+struct BatchStatusChange {
+    number: L1BatchNumber,
+    l1_tx_hash: H256,
+    happened_at: DateTime<Utc>,
+}
+
+#[derive(Debug, Default)]
+struct StatusChanges {
+    commit: Vec<BatchStatusChange>,
+    prove: Vec<BatchStatusChange>,
+    execute: Vec<BatchStatusChange>,
+}
+
+impl StatusChanges {
+    /// Returns true if there are no status changes.
+    fn is_empty(&self) -> bool {
+        self.commit.is_empty() && self.prove.is_empty() && self.execute.is_empty()
+    }
+}
+
+#[derive(Debug, thiserror::Error)]
+enum UpdaterError {
+    #[error("JSON-RPC error communicating with main node")]
+    Web3(#[from] ClientError),
+    #[error("Internal error")]
+    Internal(#[from] anyhow::Error),
+}
+
+impl From<zksync_dal::SqlxError> for UpdaterError {
+    fn from(err: zksync_dal::SqlxError) -> Self {
+        Self::Internal(err.into())
+    }
+}
+
+#[async_trait]
+trait MainNodeClient: fmt::Debug + Send + Sync {
+    /// Returns any miniblock in the specified L1 batch.
+    async fn resolve_l1_batch_to_miniblock(
+        &self,
+        number: L1BatchNumber,
+    ) -> Result<Option<MiniblockNumber>, ClientError>;
+
+    async fn block_details(
+        &self,
+        number: MiniblockNumber,
+    ) -> Result<Option<api::BlockDetails>, ClientError>;
+}
+
+#[async_trait]
+impl MainNodeClient for HttpClient {
+    async fn resolve_l1_batch_to_miniblock(
+        &self,
+        number: L1BatchNumber,
+    ) -> Result<Option<MiniblockNumber>, ClientError> {
+        let request_latency = FETCHER_METRICS.requests[&FetchStage::GetMiniblockRange].start();
+        let number = self
+            .get_miniblock_range(number)
+            .await?
+            .map(|(start, _)| MiniblockNumber(start.as_u32()));
+        request_latency.observe();
+        Ok(number)
+    }
+
+    async fn block_details(
+        &self,
+        number: MiniblockNumber,
+    ) -> Result<Option<api::BlockDetails>, ClientError> {
+        let request_latency = FETCHER_METRICS.requests[&FetchStage::GetBlockDetails].start();
+        let details = self.get_block_details(number).await?;
+        request_latency.observe();
+        Ok(details)
+    }
+}
+
+/// Cursors for the last executed / proven / committed L1 batch numbers.
+#[derive(Debug, Clone, Copy, PartialEq)]
+struct UpdaterCursor {
+    last_executed_l1_batch: L1BatchNumber,
+    last_proven_l1_batch: L1BatchNumber,
+    last_committed_l1_batch: L1BatchNumber,
+}
+
+impl UpdaterCursor {
+    async fn new(storage: &mut StorageProcessor<'_>) -> anyhow::Result<Self> {
+        let first_l1_batch_number = projected_first_l1_batch(storage).await?;
+        // Use the snapshot L1 batch, or the genesis batch if we are not using a snapshot. Technically, the snapshot L1 batch
+        // is not necessarily proven / executed yet, but since it and earlier batches are not stored, it serves as
+        // a natural lower boundary for the cursor.
+        let starting_l1_batch_number = L1BatchNumber(first_l1_batch_number.saturating_sub(1));
+
+        let last_executed_l1_batch = storage
+            .blocks_dal()
+            .get_number_of_last_l1_batch_executed_on_eth()
+            .await?
+            .unwrap_or(starting_l1_batch_number);
+        let last_proven_l1_batch = storage
+            .blocks_dal()
+            .get_number_of_last_l1_batch_proven_on_eth()
+            .await?
+            .unwrap_or(starting_l1_batch_number);
+        let last_committed_l1_batch = storage
+            .blocks_dal()
+            .get_number_of_last_l1_batch_committed_on_eth()
+            .await?
+            .unwrap_or(starting_l1_batch_number);
+        Ok(Self {
+            last_executed_l1_batch,
+            last_proven_l1_batch,
+            last_committed_l1_batch,
+        })
+    }
+
+    fn extract_tx_hash_and_timestamp(
+        batch_info: &api::BlockDetails,
+        stage: AggregatedActionType,
+    ) -> (Option<H256>, Option<DateTime<Utc>>) {
+        match stage {
+            AggregatedActionType::Commit => {
+                (batch_info.base.commit_tx_hash, batch_info.base.committed_at)
+            }
+            AggregatedActionType::PublishProofOnchain => {
+                (batch_info.base.prove_tx_hash, batch_info.base.proven_at)
+            }
+            AggregatedActionType::Execute => {
+                (batch_info.base.execute_tx_hash, batch_info.base.executed_at)
+            }
+        }
+    }
+
+    fn update(
+        &mut self,
+        status_changes: &mut StatusChanges,
+        batch_info: &api::BlockDetails,
+    ) -> anyhow::Result<()> {
+        for stage in [
+            AggregatedActionType::Commit,
+            AggregatedActionType::PublishProofOnchain,
+            AggregatedActionType::Execute,
+        ] {
+            self.update_stage(status_changes, batch_info, stage)?;
+        }
+        Ok(())
+    }
+
+    fn update_stage(
+        &mut self,
+        status_changes: &mut StatusChanges,
+        batch_info: &api::BlockDetails,
+        stage: AggregatedActionType,
+    ) -> anyhow::Result<()> {
+        let (l1_tx_hash, happened_at) = Self::extract_tx_hash_and_timestamp(batch_info, stage);
+        let (last_l1_batch, changes_to_update) = match stage {
+            AggregatedActionType::Commit => (
+                &mut self.last_committed_l1_batch,
+                &mut status_changes.commit,
+            ),
+            AggregatedActionType::PublishProofOnchain => {
+                (&mut self.last_proven_l1_batch, &mut status_changes.prove)
+            }
+            AggregatedActionType::Execute => (
+                &mut self.last_executed_l1_batch,
+                &mut status_changes.execute,
+            ),
+        };
+
+        // Check whether we have all data for the update.
+        let Some(l1_tx_hash) = l1_tx_hash else {
+            return Ok(());
+        };
+        if batch_info.l1_batch_number != last_l1_batch.next() {
+            return Ok(());
+        }
+
+        let action_str = l1_batch_stage_to_action_str(stage);
+        let happened_at = happened_at.with_context(|| {
+            format!("Malformed API response: batch is {action_str}, but has no relevant timestamp")
+        })?;
+        changes_to_update.push(BatchStatusChange {
+            number: batch_info.l1_batch_number,
+            l1_tx_hash,
+            happened_at,
+        });
+        tracing::info!("Batch {}: {action_str}", batch_info.l1_batch_number);
+        FETCHER_METRICS.l1_batch[&stage.into()].set(batch_info.l1_batch_number.0.into());
+        *last_l1_batch += 1;
+        Ok(())
+    }
+}
+
+/// Component responsible for fetching the batch status changes, i.e. one that monitors whether the
+/// locally applied batch was committed, proven or executed on L1.
+///
+/// In essence, it keeps track of the last batch number per status, and periodically polls the main
+/// node on these batches in order to see whether the status has changed. If some changes were picked up,
+/// the module updates the database to mirror the state observable from the main node. This is required for other components
+/// (e.g., the API server and the consistency checker) to function properly. E.g., the API server returns commit / prove / execute
+/// L1 transaction information in `zks_getBlockDetails` and `zks_getL1BatchDetails` RPC methods.
+#[derive(Debug)]
+pub struct BatchStatusUpdater {
+    client: Box<dyn MainNodeClient>,
+    pool: ConnectionPool,
+    sleep_interval: Duration,
+    /// Test-only sender of status changes each time they are produced and applied to the storage.
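To make the control flow of `update_stage` above easier to see in isolation, here is a rough sketch with simplified types (`u32` batch numbers and byte arrays instead of `L1BatchNumber` / `H256`); it mirrors the two guards and the cursor bump, but is not the real implementation:

```rust
#[derive(Debug)]
struct Change {
    number: u32,
    l1_tx_hash: [u8; 32],
}

// Advance one stage cursor, mirroring the guards in `update_stage`.
fn update_stage(
    last_for_stage: &mut u32,
    changes: &mut Vec<Change>,
    batch_number: u32,
    l1_tx_hash: Option<[u8; 32]>,
) {
    // Guard 1: the stage has not happened for this batch yet.
    let Some(l1_tx_hash) = l1_tx_hash else { return };
    // Guard 2: statuses are applied strictly sequentially.
    if batch_number != *last_for_stage + 1 {
        return;
    }
    changes.push(Change { number: batch_number, l1_tx_hash });
    *last_for_stage = batch_number;
}

fn main() {
    let (mut last, mut changes) = (0u32, Vec::new());
    update_stage(&mut last, &mut changes, 1, Some([1; 32]));
    update_stage(&mut last, &mut changes, 3, Some([3; 32])); // ignored: gap at #2
    assert_eq!(last, 1);
    assert_eq!(changes.len(), 1);
}
```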
+    #[cfg(test)]
+    changes_sender: mpsc::UnboundedSender<StatusChanges>,
+}
+
+impl BatchStatusUpdater {
+    const DEFAULT_SLEEP_INTERVAL: Duration = Duration::from_secs(5);
+
+    pub fn new(main_node_url: &str, pool: ConnectionPool) -> anyhow::Result<Self> {
+        let client = HttpClientBuilder::default()
+            .build(main_node_url)
+            .context("Unable to create a main node client")?;
+        Ok(Self::from_parts(
+            Box::new(client),
+            pool,
+            Self::DEFAULT_SLEEP_INTERVAL,
+        ))
+    }
+
+    fn from_parts(
+        client: Box<dyn MainNodeClient>,
+        pool: ConnectionPool,
+        sleep_interval: Duration,
+    ) -> Self {
+        Self {
+            client,
+            pool,
+            sleep_interval,
+            #[cfg(test)]
+            changes_sender: mpsc::unbounded_channel().0,
+        }
+    }
+
+    pub async fn run(self, stop_receiver: watch::Receiver<bool>) -> anyhow::Result<()> {
+        let mut storage = self.pool.access_storage_tagged("sync_layer").await?;
+        let mut cursor = UpdaterCursor::new(&mut storage).await?;
+        drop(storage);
+        tracing::info!("Initialized batch status updater cursor: {cursor:?}");
+
+        loop {
+            if *stop_receiver.borrow() {
+                tracing::info!("Stop signal received, exiting the batch status updater routine");
+                return Ok(());
+            }
+
+            // Status changes are created externally, so that even if we receive a network error
+            // while requesting the changes, we are still able to process what we already fetched.
+            let mut status_changes = StatusChanges::default();
+            // Note that we don't update `cursor` here (it is copied), but rather only in `apply_status_changes`.
+            match self.get_status_changes(&mut status_changes, cursor).await {
+                Ok(()) => { /* everything went smoothly */ }
+                Err(UpdaterError::Web3(err)) => {
+                    tracing::warn!("Failed to get status changes from the main node: {err}");
+                }
+                Err(UpdaterError::Internal(err)) => return Err(err),
+            }
+
+            if status_changes.is_empty() {
+                tokio::time::sleep(self.sleep_interval).await;
+            } else {
+                self.apply_status_changes(&mut cursor, status_changes)
+                    .await?;
+            }
+        }
+    }
+
+    /// Goes through the already fetched batches trying to update their statuses.
+    ///
+    /// Fetched changes are capped by the last locally applied batch number, so
+    /// it's safe to assume that every status change can be applied (no status
+    /// changes "from the future").
+    async fn get_status_changes(
+        &self,
+        status_changes: &mut StatusChanges,
+        mut cursor: UpdaterCursor,
+    ) -> Result<(), UpdaterError> {
+        let total_latency = EN_METRICS.update_batch_statuses.start();
+        let Some(last_sealed_batch) = self
+            .pool
+            .access_storage_tagged("sync_layer")
+            .await?
+            .blocks_dal()
+            .get_sealed_l1_batch_number()
+            .await?
+        else {
+            return Ok(()); // No L1 batches in the storage yet; do nothing.
+        };
+
+        let mut batch = cursor.last_executed_l1_batch.next();
+        // In this loop we try to progress on the batch statuses, utilizing the same request to the node to potentially
+        // update all three statuses (e.g. if the node is still syncing), but also skipping the gaps in the statuses
+        // (e.g. if the last executed batch is 10, but the last proven is 20, we don't need to check the batches 11-19).
+        while batch <= last_sealed_batch {
+            // While we may receive `None` for the `self.current_l1_batch`, it's OK: an open batch
+            // is guaranteed not to be sent to L1.
+            let miniblock_number = self.client.resolve_l1_batch_to_miniblock(batch).await?;
+            let Some(miniblock_number) = miniblock_number else {
+                return Ok(());
+            };
+
+            let Some(batch_info) = self.client.block_details(miniblock_number).await? else {
+                // We cannot recover from an external API inconsistency.
+                let err = anyhow::anyhow!(
+                    "Node API is inconsistent: miniblock {miniblock_number} was reported to be part of L1 batch {batch}, \
+                     but the API has no information about this miniblock",
+                );
+                return Err(err.into());
+            };
+
+            cursor.update(status_changes, &batch_info)?;
+
+            // Check whether we can skip a part of the range.
+            if batch_info.base.commit_tx_hash.is_none() {
+                // No committed batches after this one.
+                break;
+            } else if batch_info.base.prove_tx_hash.is_none()
+                && batch < cursor.last_committed_l1_batch
+            {
+                // The interval between this batch and the last committed one is not proven.
+                batch = cursor.last_committed_l1_batch.next();
+            } else if batch_info.base.executed_at.is_none() && batch < cursor.last_proven_l1_batch {
+                // The interval between this batch and the last proven one is not executed.
+                batch = cursor.last_proven_l1_batch.next();
+            } else {
+                batch += 1;
+            }
+        }
+
+        total_latency.observe();
+        Ok(())
+    }
+
+    /// Inserts the provided status changes into the database.
+    /// The status changes are applied to the database by inserting bogus confirmed transactions (with
+    /// some fields missing/substituted) only to satisfy API needs; this component doesn't expect the updated
+    /// tables to ever be accessed by the `eth_sender` module.
+    async fn apply_status_changes(
+        &self,
+        cursor: &mut UpdaterCursor,
+        changes: StatusChanges,
+    ) -> anyhow::Result<()> {
+        let total_latency = EN_METRICS.batch_status_updater_loop_iteration.start();
+        let mut connection = self.pool.access_storage_tagged("sync_layer").await?;
+        let mut transaction = connection.start_transaction().await?;
+        let last_sealed_batch = transaction
+            .blocks_dal()
+            .get_sealed_l1_batch_number()
+            .await?
+            .context("L1 batches disappeared from Postgres")?;
+
+        for change in &changes.commit {
+            tracing::info!(
+                "Commit status change: number {}, hash {}, happened at {}",
+                change.number,
+                change.l1_tx_hash,
+                change.happened_at
+            );
+            anyhow::ensure!(
+                change.number <= last_sealed_batch,
+                "Incorrect update state: unknown batch marked as committed"
+            );
+
+            transaction
+                .eth_sender_dal()
+                .insert_bogus_confirmed_eth_tx(
+                    change.number,
+                    AggregatedActionType::Commit,
+                    change.l1_tx_hash,
+                    change.happened_at,
+                )
+                .await?;
+            cursor.last_committed_l1_batch = change.number;
+        }
+
+        for change in &changes.prove {
+            tracing::info!(
+                "Prove status change: number {}, hash {}, happened at {}",
+                change.number,
+                change.l1_tx_hash,
+                change.happened_at
+            );
+            anyhow::ensure!(
+                change.number <= cursor.last_committed_l1_batch,
+                "Incorrect update state: proven batch must be committed"
+            );
+
+            transaction
+                .eth_sender_dal()
+                .insert_bogus_confirmed_eth_tx(
+                    change.number,
+                    AggregatedActionType::PublishProofOnchain,
+                    change.l1_tx_hash,
+                    change.happened_at,
+                )
+                .await?;
+            cursor.last_proven_l1_batch = change.number;
+        }
+
+        for change in &changes.execute {
+            tracing::info!(
+                "Execute status change: number {}, hash {}, happened at {}",
+                change.number,
+                change.l1_tx_hash,
+                change.happened_at
+            );
+            anyhow::ensure!(
+                change.number <= cursor.last_proven_l1_batch,
+                "Incorrect update state: executed batch must be proven"
+            );
+
+            transaction
+                .eth_sender_dal()
+                .insert_bogus_confirmed_eth_tx(
+                    change.number,
+                    AggregatedActionType::Execute,
+                    change.l1_tx_hash,
+                    change.happened_at,
+                )
+                .await?;
+            cursor.last_executed_l1_batch = change.number;
+        }
+        transaction.commit().await?;
+        total_latency.observe();
+
+        #[cfg(test)]
+        self.changes_sender.send(changes).ok();
+        Ok(())
+    }
+}
diff --git
a/core/lib/zksync_core/src/sync_layer/batch_status_updater/tests.rs b/core/lib/zksync_core/src/sync_layer/batch_status_updater/tests.rs
new file mode 100644
index 00000000000..7ca6e73c37c
--- /dev/null
+++ b/core/lib/zksync_core/src/sync_layer/batch_status_updater/tests.rs
@@ -0,0 +1,442 @@
+//! Tests for batch status updater.
+
+use std::{future, sync::Arc};
+
+use chrono::TimeZone;
+use test_casing::{test_casing, Product};
+use tokio::sync::{watch, Mutex};
+use zksync_contracts::BaseSystemContractsHashes;
+use zksync_types::{block::BlockGasCount, Address, L2ChainId, ProtocolVersionId};
+
+use super::*;
+use crate::{
+    genesis::{ensure_genesis_state, GenesisParams},
+    sync_layer::metrics::L1BatchStage,
+    utils::testonly::{create_l1_batch, create_miniblock, prepare_empty_recovery_snapshot},
+};
+
+async fn seal_l1_batch(storage: &mut StorageProcessor<'_>, number: L1BatchNumber) {
+    let mut storage = storage.start_transaction().await.unwrap();
+    // Insert a mock miniblock so that `get_block_details()` will return values.
+    let miniblock = create_miniblock(number.0);
+    storage
+        .blocks_dal()
+        .insert_miniblock(&miniblock)
+        .await
+        .unwrap();
+
+    let l1_batch = create_l1_batch(number.0);
+    storage
+        .blocks_dal()
+        .insert_l1_batch(&l1_batch, &[], BlockGasCount::default(), &[], &[], 0)
+        .await
+        .unwrap();
+    storage
+        .blocks_dal()
+        .mark_miniblocks_as_executed_in_l1_batch(number)
+        .await
+        .unwrap();
+    storage.commit().await.unwrap();
+}
+
+/// Mapping `L1BatchNumber` -> `L1BatchStage` for a continuous range of numbers.
+#[derive(Debug, Clone, Default, PartialEq)]
+struct L1BatchStagesMap {
+    first_batch_number: L1BatchNumber,
+    stages: Vec<L1BatchStage>,
+}
+
+impl L1BatchStagesMap {
+    fn empty(first_batch_number: L1BatchNumber, len: usize) -> Self {
+        Self {
+            first_batch_number,
+            stages: vec![L1BatchStage::Open; len],
+        }
+    }
+
+    fn new(first_batch_number: L1BatchNumber, stages: Vec<L1BatchStage>) -> Self {
+        assert!(stages.windows(2).all(|window| {
+            let [prev, next] = window else { unreachable!() };
+            prev >= next
+        }));
+        Self {
+            first_batch_number,
+            stages,
+        }
+    }
+
+    fn get(&self, number: L1BatchNumber) -> Option<L1BatchStage> {
+        let Some(index) = number.0.checked_sub(self.first_batch_number.0) else {
+            return None;
+        };
+        self.stages.get(index as usize).copied()
+    }
+
+    fn iter(&self) -> impl Iterator<Item = (L1BatchNumber, L1BatchStage)> + '_ {
+        self.stages
+            .iter()
+            .enumerate()
+            .map(|(i, &stage)| (self.first_batch_number + i as u32, stage))
+    }
+
+    fn update(&mut self, changes: &StatusChanges) {
+        self.update_to_stage(&changes.commit, L1BatchStage::Committed);
+        self.update_to_stage(&changes.prove, L1BatchStage::Proven);
+        self.update_to_stage(&changes.execute, L1BatchStage::Executed);
+    }
+
+    fn update_to_stage(&mut self, batch_changes: &[BatchStatusChange], target: L1BatchStage) {
+        for change in batch_changes {
+            let number = change.number;
+            let index = number
+                .0
+                .checked_sub(self.first_batch_number.0)
+                .unwrap_or_else(|| panic!("stage is missing for L1 batch #{number}"));
+            let stage = self
+                .stages
+                .get_mut(index as usize)
+                .unwrap_or_else(|| panic!("stage is missing for L1 batch #{number}"));
+            assert!(
+                *stage < target,
+                "Invalid update for L1 batch #{number}: {stage:?} -> {target:?}"
+            );
+            *stage = target;
+        }
+    }
+
+    async fn assert_storage(&self, storage: &mut StorageProcessor<'_>) {
+        for (number, stage) in self.iter() {
+            let local_details = storage
+                .blocks_web3_dal()
+                .get_block_details(MiniblockNumber(number.0), Address::zero())
+                .await
+                .unwrap()
+                .unwrap_or_else(|| panic!("no details for block #{number}"));
+            let expected_details = mock_block_details(number.0, stage);
+
+            assert_eq!(
+                local_details.base.commit_tx_hash,
+                expected_details.base.commit_tx_hash
+            );
+            assert_eq!(
+                local_details.base.committed_at,
+                expected_details.base.committed_at
+            );
+            assert_eq!(
+                local_details.base.prove_tx_hash,
+                expected_details.base.prove_tx_hash
+            );
+            assert_eq!(
+                local_details.base.proven_at,
+                expected_details.base.proven_at
+            );
+            assert_eq!(
+                local_details.base.execute_tx_hash,
+                expected_details.base.execute_tx_hash
+            );
+            assert_eq!(
+                local_details.base.executed_at,
+                expected_details.base.executed_at
+            );
+        }
+    }
+}
+
+fn mock_block_details(number: u32, stage: L1BatchStage) -> api::BlockDetails {
+    api::BlockDetails {
+        number: MiniblockNumber(number),
+        l1_batch_number: L1BatchNumber(number),
+        base: api::BlockDetailsBase {
+            timestamp: number.into(),
+            l1_tx_count: 0,
+            l2_tx_count: 0,
+            root_hash: Some(H256::zero()),
+            status: api::BlockStatus::Sealed,
+            commit_tx_hash: (stage >= L1BatchStage::Committed).then(|| H256::repeat_byte(1)),
+            committed_at: (stage >= L1BatchStage::Committed)
+                .then(|| Utc.timestamp_opt(100, 0).unwrap()),
+            prove_tx_hash: (stage >= L1BatchStage::Proven).then(|| H256::repeat_byte(2)),
+            proven_at: (stage >= L1BatchStage::Proven).then(|| Utc.timestamp_opt(200, 0).unwrap()),
+            execute_tx_hash: (stage >= L1BatchStage::Executed).then(|| H256::repeat_byte(3)),
+            executed_at: (stage >= L1BatchStage::Executed)
+                .then(|| Utc.timestamp_opt(300, 0).unwrap()),
+            l1_gas_price: 1,
+            l2_fair_gas_price: 2,
+            base_system_contracts_hashes: BaseSystemContractsHashes::default(),
+        },
+        operator_address: Address::zero(),
+        protocol_version: Some(ProtocolVersionId::default()),
+    }
+}
+
+#[derive(Debug, Default)]
+struct MockMainNodeClient(Arc<Mutex<L1BatchStagesMap>>);
+
+impl From<L1BatchStagesMap> for MockMainNodeClient {
+    fn from(map: L1BatchStagesMap) -> Self {
+        Self(Arc::new(Mutex::new(map)))
+    }
+}
+
+#[async_trait]
+impl MainNodeClient for MockMainNodeClient {
+    async fn resolve_l1_batch_to_miniblock(
+        &self,
+        number: L1BatchNumber,
+    ) -> Result<Option<MiniblockNumber>, ClientError> {
+        let map = self.0.lock().await;
+        Ok(map
+            .get(number)
+            .is_some()
+            .then_some(MiniblockNumber(number.0)))
+    }
+
+    async fn block_details(
+        &self,
+        number: MiniblockNumber,
+    ) -> Result<Option<api::BlockDetails>, ClientError> {
+        let map = self.0.lock().await;
+        let Some(stage) = map.get(L1BatchNumber(number.0)) else {
+            return Ok(None);
+        };
+        Ok(Some(mock_block_details(number.0, stage)))
+    }
+}
+
+fn mock_change(number: L1BatchNumber) -> BatchStatusChange {
+    BatchStatusChange {
+        number,
+        l1_tx_hash: H256::zero(),
+        happened_at: DateTime::default(),
+    }
+}
+
+fn mock_updater(
+    client: MockMainNodeClient,
+    pool: ConnectionPool,
+) -> (BatchStatusUpdater, mpsc::UnboundedReceiver<StatusChanges>) {
+    let (changes_sender, changes_receiver) = mpsc::unbounded_channel();
+    let mut updater =
+        BatchStatusUpdater::from_parts(Box::new(client), pool, Duration::from_millis(10));
+    updater.changes_sender = changes_sender;
+    (updater, changes_receiver)
+}
+
+#[tokio::test]
+async fn updater_cursor_for_storage_with_genesis_block() {
+    let pool = ConnectionPool::test_pool().await;
+    let mut storage = pool.access_storage().await.unwrap();
+    ensure_genesis_state(&mut storage, L2ChainId::default(), &GenesisParams::mock())
+        .await
+        .unwrap();
+    for number in [1, 2] {
+        seal_l1_batch(&mut storage, L1BatchNumber(number)).await;
+    }
+
+    let mut cursor = UpdaterCursor::new(&mut storage).await.unwrap();
+    assert_eq!(cursor.last_committed_l1_batch, L1BatchNumber(0));
+
assert_eq!(cursor.last_proven_l1_batch, L1BatchNumber(0)); + assert_eq!(cursor.last_executed_l1_batch, L1BatchNumber(0)); + + let (updater, _) = mock_updater(MockMainNodeClient::default(), pool.clone()); + let changes = StatusChanges { + commit: vec![mock_change(L1BatchNumber(1)), mock_change(L1BatchNumber(2))], + prove: vec![mock_change(L1BatchNumber(1))], + execute: vec![], + }; + updater + .apply_status_changes(&mut cursor, changes) + .await + .unwrap(); + + assert_eq!(cursor.last_committed_l1_batch, L1BatchNumber(2)); + assert_eq!(cursor.last_proven_l1_batch, L1BatchNumber(1)); + assert_eq!(cursor.last_executed_l1_batch, L1BatchNumber(0)); + + let restored_cursor = UpdaterCursor::new(&mut storage).await.unwrap(); + assert_eq!(restored_cursor, cursor); +} + +#[tokio::test] +async fn updater_cursor_after_snapshot_recovery() { + let pool = ConnectionPool::test_pool().await; + let mut storage = pool.access_storage().await.unwrap(); + prepare_empty_recovery_snapshot(&mut storage, 23).await; + + let cursor = UpdaterCursor::new(&mut storage).await.unwrap(); + assert_eq!(cursor.last_committed_l1_batch, L1BatchNumber(23)); + assert_eq!(cursor.last_proven_l1_batch, L1BatchNumber(23)); + assert_eq!(cursor.last_executed_l1_batch, L1BatchNumber(23)); +} + +#[test_casing(4, Product(([false, true], [false, true])))] +#[tokio::test] +async fn normal_updater_operation(snapshot_recovery: bool, async_batches: bool) { + let pool = ConnectionPool::test_pool().await; + let mut storage = pool.access_storage().await.unwrap(); + let first_batch_number = if snapshot_recovery { + prepare_empty_recovery_snapshot(&mut storage, 23).await; + L1BatchNumber(24) + } else { + ensure_genesis_state(&mut storage, L2ChainId::default(), &GenesisParams::mock()) + .await + .unwrap(); + L1BatchNumber(1) + }; + + let target_batch_stages = L1BatchStagesMap::new( + first_batch_number, + vec![ + L1BatchStage::Executed, + L1BatchStage::Proven, + L1BatchStage::Proven, + L1BatchStage::Committed, + L1BatchStage::Committed, + L1BatchStage::Open, + ], + ); + let batch_numbers: Vec<_> = target_batch_stages + .iter() + .map(|(number, _)| number) + .collect(); + + if !async_batches { + // Make all L1 batches present in the storage from the start. 
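The `L1BatchStagesMap::new` constructor used by these tests asserts that stages never increase along the batch sequence; since L1 confirmations happen in batch order, batch N+1 can never be further along than batch N. A tiny sketch of that invariant, with stages encoded as integers purely for brevity (an assumption of this sketch, not the tests' actual representation):

```rust
// Stages as integers, for brevity: Open=0, Committed=1, Proven=2, Executed=3.
fn stages_are_monotonic(stages: &[u8]) -> bool {
    // Walking from older to newer batches, the stage may only stay equal or drop.
    stages.windows(2).all(|w| w[0] >= w[1])
}

fn main() {
    assert!(stages_are_monotonic(&[3, 2, 2, 1, 1, 0])); // a valid fixture
    assert!(!stages_are_monotonic(&[1, 2])); // batch #2 ahead of batch #1: invalid
}
```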
+ for &number in &batch_numbers { + seal_l1_batch(&mut storage, number).await; + } + } + + let client = MockMainNodeClient::from(target_batch_stages.clone()); + let (updater, mut changes_receiver) = mock_updater(client, pool.clone()); + let (stop_sender, stop_receiver) = watch::channel(false); + let updater_task = tokio::spawn(updater.run(stop_receiver)); + + let batches_task = if async_batches { + let pool = pool.clone(); + tokio::spawn(async move { + let mut storage = pool.access_storage().await.unwrap(); + for &number in &batch_numbers { + seal_l1_batch(&mut storage, number).await; + tokio::time::sleep(Duration::from_millis(15)).await; + } + }) + } else { + tokio::spawn(future::ready(())) + }; + + let mut observed_batch_stages = + L1BatchStagesMap::empty(first_batch_number, target_batch_stages.stages.len()); + loop { + let changes = changes_receiver.recv().await.unwrap(); + observed_batch_stages.update(&changes); + if observed_batch_stages == target_batch_stages { + break; + } + } + + batches_task.await.unwrap(); + target_batch_stages.assert_storage(&mut storage).await; + stop_sender.send_replace(true); + updater_task.await.unwrap().expect("updater failed"); +} + +#[test_casing(2, [false, true])] +#[tokio::test] +async fn updater_with_gradual_main_node_updates(snapshot_recovery: bool) { + let pool = ConnectionPool::test_pool().await; + let mut storage = pool.access_storage().await.unwrap(); + let first_batch_number = if snapshot_recovery { + prepare_empty_recovery_snapshot(&mut storage, 23).await; + L1BatchNumber(24) + } else { + ensure_genesis_state(&mut storage, L2ChainId::default(), &GenesisParams::mock()) + .await + .unwrap(); + L1BatchNumber(1) + }; + + let target_batch_stages = L1BatchStagesMap::new( + first_batch_number, + vec![ + L1BatchStage::Executed, + L1BatchStage::Proven, + L1BatchStage::Proven, + L1BatchStage::Committed, + L1BatchStage::Committed, + L1BatchStage::Open, + ], + ); + let mut observed_batch_stages = + L1BatchStagesMap::empty(first_batch_number, target_batch_stages.stages.len()); + + for (number, _) in target_batch_stages.iter() { + seal_l1_batch(&mut storage, number).await; + } + + let client = MockMainNodeClient::from(observed_batch_stages.clone()); + + // Gradually update information provided by the main node. 
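Both of these tests drive the same convergence loop: consume change notifications and fold them into an observed state until it equals the target. A minimal sketch of that pattern, with `std::sync::mpsc` standing in for the tokio channel the tests use:

```rust
use std::sync::mpsc;

fn main() {
    let (tx, rx) = mpsc::channel();
    let target = vec![2u8, 1]; // target stages for two batches

    // Updates arrive incrementally, possibly interleaved across batches.
    for update in [(0usize, 1u8), (1, 1), (0, 2)] {
        tx.send(update).unwrap();
    }

    let mut observed = vec![0u8; 2];
    while observed != target {
        let (batch, stage) = rx.recv().unwrap();
        assert!(stage > observed[batch], "stages only move forward");
        observed[batch] = stage;
    }
    println!("observed stages converged: {observed:?}");
}
```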
+ let client_map = Arc::clone(&client.0); + let final_stages = target_batch_stages.clone(); + let storage_task = tokio::spawn(async move { + for max_stage in [ + L1BatchStage::Committed, + L1BatchStage::Proven, + L1BatchStage::Executed, + ] { + let mut client_map = client_map.lock().await; + for (stage, &final_stage) in client_map.stages.iter_mut().zip(&final_stages.stages) { + *stage = final_stage.min(max_stage); + } + drop(client_map); + tokio::time::sleep(Duration::from_millis(15)).await; + } + }); + + let (updater, mut changes_receiver) = mock_updater(client, pool.clone()); + let (stop_sender, stop_receiver) = watch::channel(false); + let updater_task = tokio::spawn(updater.run(stop_receiver)); + + loop { + let changes = changes_receiver.recv().await.unwrap(); + observed_batch_stages.update(&changes); + if observed_batch_stages == target_batch_stages { + break; + } + } + + storage_task.await.unwrap(); + target_batch_stages.assert_storage(&mut storage).await; + stop_sender.send_replace(true); + updater_task.await.unwrap().expect("updater failed"); + + drop(storage); + test_resuming_updater(pool, target_batch_stages).await; +} + +async fn test_resuming_updater(pool: ConnectionPool, initial_batch_stages: L1BatchStagesMap) { + let target_batch_stages = L1BatchStagesMap::new( + initial_batch_stages.first_batch_number, + vec![L1BatchStage::Executed; 6], + ); + + let client = MockMainNodeClient::from(target_batch_stages.clone()); + let (updater, mut changes_receiver) = mock_updater(client, pool.clone()); + let (stop_sender, stop_receiver) = watch::channel(false); + let updater_task = tokio::spawn(updater.run(stop_receiver)); + + let mut observed_batch_stages = initial_batch_stages; + loop { + let changes = changes_receiver.recv().await.unwrap(); + observed_batch_stages.update(&changes); + if observed_batch_stages == target_batch_stages { + break; + } + } + + let mut storage = pool.access_storage().await.unwrap(); + target_batch_stages.assert_storage(&mut storage).await; + stop_sender.send_replace(true); + updater_task.await.unwrap().expect("updater failed"); +} diff --git a/core/lib/zksync_core/src/sync_layer/client.rs b/core/lib/zksync_core/src/sync_layer/client.rs index 5d4f61a4f2a..a13fba2d65c 100644 --- a/core/lib/zksync_core/src/sync_layer/client.rs +++ b/core/lib/zksync_core/src/sync_layer/client.rs @@ -1,10 +1,9 @@ //! Client abstractions for syncing between the external node and the main node. 
-use anyhow::Context as _; -use async_trait::async_trait; - use std::{collections::HashMap, convert::TryInto, fmt}; +use anyhow::Context as _; +use async_trait::async_trait; use zksync_contracts::{BaseSystemContracts, BaseSystemContractsHashes, SystemContractCode}; use zksync_system_constants::ACCOUNT_CODE_STORAGE_ADDRESS; use zksync_types::{ diff --git a/core/lib/zksync_core/src/sync_layer/external_io.rs b/core/lib/zksync_core/src/sync_layer/external_io.rs index dcc38334a99..c6098c65a43 100644 --- a/core/lib/zksync_core/src/sync_layer/external_io.rs +++ b/core/lib/zksync_core/src/sync_layer/external_io.rs @@ -1,19 +1,14 @@ -use async_trait::async_trait; - -use std::{ - collections::HashMap, - convert::{TryFrom, TryInto}, - iter::FromIterator, - time::Duration, -}; +use std::{collections::HashMap, convert::TryInto, iter::FromIterator, time::Duration}; +use async_trait::async_trait; +use futures::future; use multivm::interface::{FinishedL1Batch, L1BatchEnv, SystemEnv}; use zksync_contracts::{BaseSystemContracts, SystemContractCode}; use zksync_dal::ConnectionPool; use zksync_types::{ - ethabi::Address, l1::L1Tx, l2::L2Tx, protocol_version::ProtocolUpgradeTx, - witness_block_state::WitnessBlockState, L1BatchNumber, L1BlockNumber, L2ChainId, - MiniblockNumber, ProtocolVersionId, Transaction, H256, U256, + ethabi::Address, fee_model::BatchFeeInput, protocol_version::ProtocolUpgradeTx, + witness_block_state::WitnessBlockState, L1BatchNumber, L2ChainId, MiniblockNumber, + ProtocolVersionId, Transaction, H256, U256, }; use zksync_utils::{be_words_to_bytes, bytes_to_be_words}; @@ -28,9 +23,9 @@ use crate::{ extractors, io::{ common::{l1_batch_params, load_pending_batch, poll_iters}, - MiniblockParams, PendingBatchData, StateKeeperIO, + MiniblockParams, MiniblockSealerHandle, PendingBatchData, StateKeeperIO, }, - metrics::{KEEPER_METRICS, L1_BATCH_METRICS}, + metrics::KEEPER_METRICS, seal_criteria::IoSealCriteria, updates::UpdatesManager, }, @@ -47,6 +42,7 @@ const POLL_INTERVAL: Duration = Duration::from_millis(100); /// to the one in the mempool IO (which is used in the main node). 
#[derive(Debug)] pub struct ExternalIO { + miniblock_sealer_handle: MiniblockSealerHandle, pool: ConnectionPool, current_l1_batch_number: L1BatchNumber, @@ -63,7 +59,9 @@ pub struct ExternalIO { } impl ExternalIO { + #[allow(clippy::too_many_arguments)] pub async fn new( + miniblock_sealer_handle: MiniblockSealerHandle, pool: ConnectionPool, actions: ActionQueue, sync_state: SyncState, @@ -73,29 +71,33 @@ impl ExternalIO { chain_id: L2ChainId, ) -> Self { let mut storage = pool.access_storage_tagged("sync_layer").await.unwrap(); - let last_sealed_l1_batch_header = storage + // TODO (PLA-703): Support no L1 batches / miniblocks in the storage + let last_sealed_l1_batch_number = storage .blocks_dal() - .get_newest_l1_batch_header() + .get_sealed_l1_batch_number() .await - .unwrap(); + .unwrap() + .expect("No L1 batches sealed"); let last_miniblock_number = storage .blocks_dal() .get_sealed_miniblock_number() .await - .unwrap(); + .unwrap() + .expect("empty storage not supported"); // FIXME (PLA-703): handle empty storage drop(storage); tracing::info!( "Initialized the ExternalIO: current L1 batch number {}, current miniblock number {}", - last_sealed_l1_batch_header.number + 1, + last_sealed_l1_batch_number + 1, last_miniblock_number + 1, ); sync_state.set_local_block(last_miniblock_number); Self { + miniblock_sealer_handle, pool, - current_l1_batch_number: last_sealed_l1_batch_header.number + 1, + current_l1_batch_number: last_sealed_l1_batch_number + 1, current_miniblock_number: last_miniblock_number + 1, actions, sync_state, @@ -108,7 +110,6 @@ impl ExternalIO { async fn load_previous_l1_batch_hash(&self) -> U256 { let mut storage = self.pool.access_storage_tagged("sync_layer").await.unwrap(); - let wait_latency = KEEPER_METRICS.wait_for_prev_hash_time.start(); let (hash, _) = extractors::wait_for_prev_l1_batch_params(&mut storage, self.current_l1_batch_number) @@ -117,6 +118,18 @@ impl ExternalIO { hash } + async fn load_previous_miniblock_hash(&self) -> H256 { + let prev_miniblock_number = self.current_miniblock_number - 1; + let mut storage = self.pool.access_storage_tagged("sync_layer").await.unwrap(); + let header = storage + .blocks_dal() + .get_miniblock_header(prev_miniblock_number) + .await + .unwrap() + .unwrap_or_else(|| panic!("Miniblock #{prev_miniblock_number} is missing")); + header.hash + } + async fn load_base_system_contracts_by_version_id( &self, id: ProtocolVersionId, @@ -215,10 +228,7 @@ impl IoSealCriteria for ExternalIO { } fn should_seal_miniblock(&mut self, _manager: &UpdatesManager) -> bool { - matches!( - self.actions.peek_action(), - Some(SyncAction::SealMiniblock(_)) - ) + matches!(self.actions.peek_action(), Some(SyncAction::SealMiniblock)) } } @@ -304,18 +314,24 @@ impl StateKeeperIO for ExternalIO { timestamp, l1_gas_price, l2_fair_gas_price, + fair_pubdata_price, operator_address, protocol_version, first_miniblock_info: (miniblock_number, virtual_blocks), - prev_miniblock_hash, }) => { assert_eq!( number, self.current_l1_batch_number, "Batch number mismatch" ); - tracing::info!("Getting previous L1 batch hash"); - let previous_l1_batch_hash = self.load_previous_l1_batch_hash().await; - tracing::info!("Previous L1 batch hash: {previous_l1_batch_hash}"); + tracing::info!("Getting previous L1 batch hash and miniblock hash"); + let (previous_l1_batch_hash, previous_miniblock_hash) = future::join( + self.load_previous_l1_batch_hash(), + self.load_previous_miniblock_hash(), + ) + .await; + tracing::info!( + "Previous L1 batch hash: {previous_l1_batch_hash}, 
previous miniblock hash: {previous_miniblock_hash}" + ); let base_system_contracts = self .load_base_system_contracts_by_version_id(protocol_version) @@ -325,10 +341,14 @@ impl StateKeeperIO for ExternalIO { operator_address, timestamp, previous_l1_batch_hash, - l1_gas_price, - l2_fair_gas_price, + BatchFeeInput::for_protocol_version( + protocol_version, + l2_fair_gas_price, + fair_pubdata_price, + l1_gas_price, + ), miniblock_number, - prev_miniblock_hash, + previous_miniblock_hash, base_system_contracts, self.validation_computational_gas_limit, protocol_version, @@ -438,60 +458,18 @@ impl StateKeeperIO for ExternalIO { async fn seal_miniblock(&mut self, updates_manager: &UpdatesManager) { let action = self.actions.pop_action(); - let Some(SyncAction::SealMiniblock(consensus)) = action else { + let Some(SyncAction::SealMiniblock) = action else { panic!("State keeper requested to seal miniblock, but the next action is {action:?}"); }; - let mut storage = self.pool.access_storage_tagged("sync_layer").await.unwrap(); - let mut transaction = storage.start_transaction().await.unwrap(); - - let store_latency = L1_BATCH_METRICS.start_storing_on_en(); - // We don't store the transactions in the database until they're executed to not overcomplicate the state - // recovery on restart. So we have to store them here. - for tx in &updates_manager.miniblock.executed_transactions { - if let Ok(l1_tx) = L1Tx::try_from(tx.transaction.clone()) { - let l1_block_number = L1BlockNumber(l1_tx.common_data.eth_block as u32); - transaction - .transactions_dal() - .insert_transaction_l1(l1_tx, l1_block_number) - .await; - } else if let Ok(l2_tx) = L2Tx::try_from(tx.transaction.clone()) { - // Using `Default` for execution metrics should be OK here, since this data is not used on the EN. - transaction - .transactions_dal() - .insert_transaction_l2(l2_tx, Default::default()) - .await; - } else if let Ok(protocol_system_upgrade_tx) = - ProtocolUpgradeTx::try_from(tx.transaction.clone()) - { - transaction - .transactions_dal() - .insert_system_transaction(protocol_system_upgrade_tx) - .await; - } else { - unreachable!("Transaction {:?} is neither L1 nor L2", tx.transaction); - } - } - store_latency.observe(); - // Now transactions are stored, and we may mark them as executed. let command = updates_manager.seal_miniblock_command( self.current_l1_batch_number, self.current_miniblock_number, self.l2_erc20_bridge_addr, + true, ); - command.seal(&mut transaction).await; - - // We want to add miniblock consensus fields atomically with the miniblock data so that we - // don't need to deal with corner cases (e.g., a miniblock w/o consensus fields). - if let Some(consensus) = &consensus { - transaction - .blocks_dal() - .set_miniblock_consensus_fields(self.current_miniblock_number, consensus) - .await - .unwrap(); - } - transaction.commit().await.unwrap(); + self.miniblock_sealer_handle.submit(command).await; self.sync_state .set_local_block(self.current_miniblock_number); @@ -508,12 +486,15 @@ impl StateKeeperIO for ExternalIO { finished_batch: FinishedL1Batch, ) -> anyhow::Result<()> { let action = self.actions.pop_action(); - let Some(SyncAction::SealBatch { consensus, .. }) = action else { + let Some(SyncAction::SealBatch { .. }) = action else { anyhow::bail!( "State keeper requested to seal the batch, but the next action is {action:?}" ); }; + // We cannot start sealing an L1 batch until we've sealed all miniblocks included in it. 
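A sketch of why a single wait is enough here, assuming (as the comment above implies) that seal commands are completed in FIFO order; the miniature handle below is hypothetical and does not reproduce the real `MiniblockSealerHandle` API:

```rust
use tokio::sync::{mpsc, oneshot};

#[tokio::main]
async fn main() {
    // Queue of seal commands; each carries a oneshot for completion signaling.
    let (command_tx, mut command_rx) = mpsc::channel::<oneshot::Sender<()>>(8);

    // The "sealer" task completes submitted commands strictly in order.
    tokio::spawn(async move {
        while let Some(done) = command_rx.recv().await {
            // ... persisting one miniblock would happen here ...
            let _ = done.send(());
        }
    });

    // Submit two seal commands, remembering the latest completion receiver.
    let mut last_done = None;
    for _ in 0..2 {
        let (done_tx, done_rx) = oneshot::channel();
        command_tx.send(done_tx).await.unwrap();
        last_done = Some(done_rx);
    }

    // Because the queue is FIFO, awaiting the newest command's completion
    // implies every earlier command has completed too.
    last_done.unwrap().await.unwrap();
    println!("all miniblocks sealed; safe to seal the L1 batch");
}
```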
+ self.miniblock_sealer_handle.wait_for_all_commands().await; + let mut storage = self.pool.access_storage_tagged("sync_layer").await.unwrap(); let mut transaction = storage.start_transaction().await.unwrap(); updates_manager @@ -525,13 +506,6 @@ impl StateKeeperIO for ExternalIO { self.l2_erc20_bridge_addr, ) .await; - if let Some(consensus) = &consensus { - transaction - .blocks_dal() - .set_miniblock_consensus_fields(self.current_miniblock_number, consensus) - .await - .unwrap(); - } transaction.commit().await.unwrap(); tracing::info!("Batch {} is sealed", self.current_l1_batch_number); @@ -539,6 +513,8 @@ impl StateKeeperIO for ExternalIO { // Mimic the metric emitted by the main node to reuse existing Grafana charts. APP_METRICS.block_number[&BlockStage::Sealed].set(self.current_l1_batch_number.0.into()); + self.sync_state + .set_local_block(self.current_miniblock_number); self.current_miniblock_number += 1; // Due to fictive miniblock being sealed. self.current_l1_batch_number += 1; Ok(()) diff --git a/core/lib/zksync_core/src/sync_layer/fetcher.rs b/core/lib/zksync_core/src/sync_layer/fetcher.rs index 4aabd163f21..98e0a025ea8 100644 --- a/core/lib/zksync_core/src/sync_layer/fetcher.rs +++ b/core/lib/zksync_core/src/sync_layer/fetcher.rs @@ -1,14 +1,13 @@ -use anyhow::Context as _; -use tokio::sync::watch; - use std::time::Duration; +use anyhow::Context as _; +use tokio::sync::watch; use zksync_dal::StorageProcessor; use zksync_types::{ - api::en::SyncBlock, block::ConsensusBlockFields, Address, L1BatchNumber, MiniblockNumber, + api::en::SyncBlock, block::MiniblockHasher, Address, L1BatchNumber, MiniblockNumber, ProtocolVersionId, H256, }; -use zksync_web3_decl::jsonrpsee::core::Error as RpcError; +use zksync_web3_decl::jsonrpsee::core::ClientError as RpcError; use super::{ client::{CachingMainNodeClient, MainNodeClient}, @@ -23,39 +22,51 @@ const RETRY_DELAY_INTERVAL: Duration = Duration::from_secs(5); /// Common denominator for blocks fetched by an external node. 
 #[derive(Debug)]
-pub(super) struct FetchedBlock {
+pub(crate) struct FetchedBlock {
     pub number: MiniblockNumber,
     pub l1_batch_number: L1BatchNumber,
     pub last_in_batch: bool,
     pub protocol_version: ProtocolVersionId,
     pub timestamp: u64,
-    pub hash: H256,
+    pub reference_hash: Option<H256>,
     pub l1_gas_price: u64,
     pub l2_fair_gas_price: u64,
+    pub fair_pubdata_price: Option<u64>,
     pub virtual_blocks: u32,
     pub operator_address: Address,
     pub transactions: Vec<Transaction>,
-    pub consensus: Option<ConsensusBlockFields>,
 }
 
 impl FetchedBlock {
-    fn from_sync_block(block: SyncBlock) -> Self {
-        Self {
+    fn compute_hash(&self, prev_miniblock_hash: H256) -> H256 {
+        let mut hasher = MiniblockHasher::new(self.number, self.timestamp, prev_miniblock_hash);
+        for tx in &self.transactions {
+            hasher.push_tx_hash(tx.hash());
+        }
+        hasher.finalize(self.protocol_version)
+    }
+}
+
+impl TryFrom<SyncBlock> for FetchedBlock {
+    type Error = anyhow::Error;
+
+    fn try_from(block: SyncBlock) -> anyhow::Result<Self> {
+        Ok(Self {
             number: block.number,
             l1_batch_number: block.l1_batch_number,
             last_in_batch: block.last_in_batch,
             protocol_version: block.protocol_version,
             timestamp: block.timestamp,
-            hash: block.hash.unwrap_or_default(),
+            reference_hash: block.hash,
             l1_gas_price: block.l1_gas_price,
             l2_fair_gas_price: block.l2_fair_gas_price,
+            fair_pubdata_price: block.fair_pubdata_price,
             virtual_blocks: block.virtual_blocks.unwrap_or(0),
             operator_address: block.operator_address,
             transactions: block
                 .transactions
-                .expect("Transactions are always requested"),
-            consensus: block.consensus,
-        }
+                .context("Transactions are always requested")?,
+        })
     }
 }
@@ -63,7 +74,7 @@ impl FetchedBlock {
 #[derive(Debug)]
 pub struct FetcherCursor {
     // Fields are public for testing purposes.
-    pub(super) next_miniblock: MiniblockNumber,
+    pub(crate) next_miniblock: MiniblockNumber,
     pub(super) prev_miniblock_hash: H256,
     pub(super) l1_batch: L1BatchNumber,
 }
@@ -71,11 +82,13 @@ impl FetcherCursor {
     /// Loads the cursor from Postgres.
     pub async fn new(storage: &mut StorageProcessor<'_>) -> anyhow::Result<Self> {
-        let last_sealed_l1_batch_header = storage
+        // TODO (PLA-703): Support no L1 batches / miniblocks in the storage
+        let last_sealed_l1_batch_number = storage
             .blocks_dal()
-            .get_newest_l1_batch_header()
+            .get_sealed_l1_batch_number()
             .await
-            .context("Failed getting newest L1 batch header")?;
+            .context("Failed getting sealed L1 batch number")?
+            .context("No L1 batches sealed")?;
         let last_miniblock_header = storage
             .blocks_dal()
             .get_last_sealed_miniblock_header()
@@ -97,21 +110,33 @@ impl FetcherCursor {
         // Decide whether the next batch should be explicitly opened or not.
         let l1_batch = if was_new_batch_open {
             // No `OpenBatch` action needed.
-            last_sealed_l1_batch_header.number + 1
+            last_sealed_l1_batch_number + 1
         } else {
             // We need to open the next batch.
-            last_sealed_l1_batch_header.number
+            last_sealed_l1_batch_number
         };
 
         Ok(Self {
             next_miniblock,
-            l1_batch,
             prev_miniblock_hash,
+            l1_batch,
         })
     }
 
-    pub(super) fn advance(&mut self, block: FetchedBlock) -> Vec<SyncAction> {
+    pub(crate) fn advance(&mut self, block: FetchedBlock) -> Vec<SyncAction> {
         assert_eq!(block.number, self.next_miniblock);
 
+        let local_block_hash = block.compute_hash(self.prev_miniblock_hash);
+        if let Some(reference_hash) = block.reference_hash {
+            if local_block_hash != reference_hash {
+                // This is a warning, not an assertion, because a hash mismatch may occur after a reorg.
+                // Indeed, `self.prev_miniblock_hash` may differ from the hash of the updated previous miniblock.
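Since the hashes chain, here is a toy illustration (with `DefaultHasher` standing in for the cryptographic `MiniblockHasher`) of why a reorged ancestor invalidates every descendant hash: each hash commits to the block metadata, each transaction hash, and the previous block's hash.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy stand-in for `MiniblockHasher`: commits to the block number, timestamp,
// previous hash, and all transaction hashes.
fn block_hash(number: u32, timestamp: u64, prev_hash: u64, tx_hashes: &[u64]) -> u64 {
    let mut hasher = DefaultHasher::new();
    (number, timestamp, prev_hash).hash(&mut hasher);
    for tx_hash in tx_hashes {
        tx_hash.hash(&mut hasher);
    }
    hasher.finish()
}

fn main() {
    let genesis = 0;
    let h1 = block_hash(1, 100, genesis, &[11, 22]);
    let h2 = block_hash(2, 101, h1, &[33]);
    // Replacing block #1 (e.g. after a reorg) changes block #2's hash as well.
    let h1_reorged = block_hash(1, 100, genesis, &[11]);
    assert_ne!(h2, block_hash(2, 101, h1_reorged, &[33]));
}
```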
+ tracing::warn!( + "Mismatch between the locally computed and received miniblock hash for {block:?}; \ + local_block_hash = {local_block_hash:?}, prev_miniblock_hash = {:?}", + self.prev_miniblock_hash + ); + } + } let mut new_actions = Vec::new(); if block.l1_batch_number != self.l1_batch { @@ -132,11 +157,11 @@ impl FetcherCursor { timestamp: block.timestamp, l1_gas_price: block.l1_gas_price, l2_fair_gas_price: block.l2_fair_gas_price, + fair_pubdata_price: block.fair_pubdata_price, operator_address: block.operator_address, protocol_version: block.protocol_version, // `block.virtual_blocks` can be `None` only for old VM versions where it's not used, so it's fine to provide any number. first_miniblock_info: (block.number, block.virtual_blocks), - prev_miniblock_hash: self.prev_miniblock_hash, }); FETCHER_METRICS.l1_batch[&L1BatchStage::Open].set(block.l1_batch_number.0.into()); self.l1_batch += 1; @@ -162,13 +187,12 @@ impl FetcherCursor { new_actions.push(SyncAction::SealBatch { // `block.virtual_blocks` can be `None` only for old VM versions where it's not used, so it's fine to provide any number. virtual_blocks: block.virtual_blocks, - consensus: block.consensus, }); } else { - new_actions.push(SyncAction::SealMiniblock(block.consensus)); + new_actions.push(SyncAction::SealMiniblock); } self.next_miniblock += 1; - self.prev_miniblock_hash = block.hash; + self.prev_miniblock_hash = local_block_hash; new_actions } @@ -221,7 +245,7 @@ impl MainNodeFetcher { { tracing::warn!("Following transport error occurred: {err}"); tracing::info!("Trying again after a delay"); - tokio::time::sleep(RETRY_DELAY_INTERVAL).await; // TODO (BFT-100): Implement the fibonacci backoff. + tokio::time::sleep(RETRY_DELAY_INTERVAL).await; // TODO (BFT-100): Implement the Fibonacci back-off. } else { return Err(err.context("Unexpected error in the fetcher")); } @@ -280,8 +304,7 @@ impl MainNodeFetcher { request_latency.observe(); let block_number = block.number; - let fetched_block = FetchedBlock::from_sync_block(block); - let new_actions = self.cursor.advance(fetched_block); + let new_actions = self.cursor.advance(block.try_into()?); tracing::info!( "New miniblock: {block_number} / {}", diff --git a/core/lib/zksync_core/src/sync_layer/genesis.rs b/core/lib/zksync_core/src/sync_layer/genesis.rs index 4f7501fb0c3..77678a3b412 100644 --- a/core/lib/zksync_core/src/sync_layer/genesis.rs +++ b/core/lib/zksync_core/src/sync_layer/genesis.rs @@ -1,5 +1,4 @@ use anyhow::Context as _; - use zksync_dal::StorageProcessor; use zksync_types::{ block::DeployedContract, protocol_version::L1VerifierConfig, diff --git a/core/lib/zksync_core/src/sync_layer/gossip/buffered/mod.rs b/core/lib/zksync_core/src/sync_layer/gossip/buffered/mod.rs deleted file mode 100644 index 41ca50e1cf2..00000000000 --- a/core/lib/zksync_core/src/sync_layer/gossip/buffered/mod.rs +++ /dev/null @@ -1,340 +0,0 @@ -//! Buffered [`BlockStore`] implementation. - -use async_trait::async_trait; - -use std::{collections::BTreeMap, ops, time::Instant}; - -#[cfg(test)] -use zksync_concurrency::ctx::channel; -use zksync_concurrency::{ - ctx, scope, - sync::{self, watch, Mutex}, -}; -use zksync_consensus_roles::validator::{BlockNumber, FinalBlock}; -use zksync_consensus_storage::{BlockStore, StorageError, StorageResult, WriteBlockStore}; - -#[cfg(test)] -mod tests; - -use super::{ - metrics::{BlockResponseKind, METRICS}, - utils::MissingBlockNumbers, -}; - -/// [`BlockStore`] variation that upholds additional invariants as to how blocks are processed. 
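The fetcher's retry loop above still sleeps for a fixed `RETRY_DELAY_INTERVAL` and leaves Fibonacci back-off as a TODO (BFT-100). Purely as an illustration of what such a back-off could look like, capped to avoid unbounded delays (this is not the project's planned implementation):

```rust
use std::time::Duration;

// Illustrative only: successive delays follow the Fibonacci sequence,
// clamped to `cap` seconds.
struct FibonacciBackoff {
    prev: u64,
    next: u64,
    cap: u64,
}

impl FibonacciBackoff {
    fn new(base_secs: u64, cap_secs: u64) -> Self {
        Self { prev: 0, next: base_secs, cap: cap_secs }
    }

    fn next_delay(&mut self) -> Duration {
        let current = self.next.min(self.cap);
        let sum = self.prev + self.next;
        self.prev = self.next;
        self.next = sum;
        Duration::from_secs(current)
    }
}

fn main() {
    let mut backoff = FibonacciBackoff::new(5, 60);
    let delays: Vec<_> = (0..6).map(|_| backoff.next_delay().as_secs()).collect();
    assert_eq!(delays, [5, 5, 10, 15, 25, 40]);
}
```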
-/// -/// The invariants are as follows: -/// -/// - Stored blocks always have contiguous numbers; there are no gaps. -/// - Blocks can be scheduled to be added using [`Self::schedule_next_block()`] only. New blocks do not -/// appear in the store otherwise. -#[async_trait] -pub(super) trait ContiguousBlockStore: BlockStore { - /// Schedules a block to be added to the store. Unlike [`WriteBlockStore::put_block()`], - /// there is no expectation that the block is added to the store *immediately*. It's - /// expected that it will be added to the store eventually, which will be signaled via - /// a subscriber returned from [`BlockStore::subscribe_to_block_writes()`]. - /// - /// [`Buffered`] guarantees that this method will only ever be called: - /// - /// - with the next block (i.e., one immediately after [`BlockStore::head_block()`]) - /// - sequentially (i.e., multiple blocks cannot be scheduled at once) - async fn schedule_next_block(&self, ctx: &ctx::Ctx, block: &FinalBlock) -> StorageResult<()>; -} - -/// In-memory buffer or [`FinalBlock`]s received from peers, but not executed and persisted locally yet. -/// -/// Unlike with executed / persisted blocks, there may be gaps between blocks in the buffer. -/// These blocks are shared with peers using the gossip network, but are not persisted and lost -/// on the node restart. -#[derive(Debug)] -struct BlockBuffer { - store_block_number: BlockNumber, - blocks: BTreeMap, -} - -impl BlockBuffer { - fn new(store_block_number: BlockNumber) -> Self { - Self { - store_block_number, - blocks: BTreeMap::new(), - } - } - - fn head_block(&self) -> Option { - self.blocks.values().next_back().cloned() - } - - #[tracing::instrument(level = "trace", skip(self))] - fn set_store_block(&mut self, store_block_number: BlockNumber) { - assert!( - store_block_number > self.store_block_number, - "`ContiguousBlockStore` invariant broken: unexpected new head block number" - ); - - self.store_block_number = store_block_number; - let old_len = self.blocks.len(); - self.blocks = self.blocks.split_off(&store_block_number.next()); - // ^ Removes all entries up to and including `store_block_number` - tracing::debug!("Removed {} blocks from buffer", old_len - self.blocks.len()); - METRICS.buffer_size.set(self.blocks.len()); - } - - fn last_contiguous_block_number(&self) -> BlockNumber { - // By design, blocks in the underlying store are always contiguous. - let mut last_number = self.store_block_number; - for &number in self.blocks.keys() { - if number > last_number.next() { - return last_number; - } - last_number = number; - } - last_number - } - - fn missing_block_numbers(&self, mut range: ops::Range) -> Vec { - // Clamp the range start so we don't produce extra missing blocks. - range.start = range.start.max(self.store_block_number.next()); - if range.is_empty() { - return vec![]; // Return early to not trigger panic in `BTreeMap::range()` - } - - let keys = self.blocks.range(range.clone()).map(|(&num, _)| num); - MissingBlockNumbers::new(range, keys).collect() - } - - fn put_block(&mut self, block: FinalBlock) { - let block_number = block.header.number; - assert!(block_number > self.store_block_number); - // ^ Must be checked previously - self.blocks.insert(block_number, block); - tracing::debug!(%block_number, "Inserted block in buffer"); - METRICS.buffer_size.set(self.blocks.len()); - } -} - -/// Events emitted by [`Buffered`] storage. -#[cfg(test)] -#[derive(Debug)] -pub(super) enum BufferedStorageEvent { - /// Update was received from the underlying storage. 
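The `BlockBuffer` being removed here centers on one computation worth spelling out: `last_contiguous_block_number`. A sketch with bare `u64` block numbers instead of the consensus types: start from the last persisted block and walk the sorted buffer keys until the first gap.

```rust
use std::collections::BTreeMap;

// Highest block number reachable from `store_block` without gaps, given a
// buffer of pending block numbers (`BTreeMap` keys iterate in sorted order).
fn last_contiguous(store_block: u64, buffer: &BTreeMap<u64, ()>) -> u64 {
    let mut last = store_block;
    for &number in buffer.keys() {
        if number > last + 1 {
            return last; // first gap: later blocks are not yet reachable
        }
        last = number;
    }
    last
}

fn main() {
    // Store holds blocks up to #2; buffer holds #3, #4 and #6 (gap at #5).
    let buffer: BTreeMap<u64, ()> = [3, 4, 6].into_iter().map(|n| (n, ())).collect();
    assert_eq!(last_contiguous(2, &buffer), 4);
}
```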
- UpdateReceived(BlockNumber), -} - -/// [`BlockStore`] with an in-memory buffer for pending blocks. -/// -/// # Data flow -/// -/// The store is plugged into the `SyncBlocks` actor, so that it can receive new blocks -/// from peers over the gossip network and to share blocks with peers. Received blocks are stored -/// in a [`BlockBuffer`]. The `SyncBlocks` actor doesn't guarantee that blocks are received in order, -/// so we have a background task that waits for successive blocks and feeds them to -/// the underlying storage ([`ContiguousBlockStore`]). The underlying storage executes and persists -/// blocks using the state keeper; see [`PostgresBlockStorage`](super::PostgresBlockStorage) for more details. -/// This logic is largely shared with the old syncing logic using JSON-RPC; the only differing part -/// is producing block data. -/// -/// Once a block is processed and persisted by the state keeper, it can be removed from the [`BlockBuffer`]; -/// we do this in another background task. Removing blocks from the buffer ensures that it doesn't -/// grow infinitely; it also allows to track syncing progress via metrics. -#[derive(Debug)] -pub(super) struct Buffered { - inner: T, - inner_subscriber: watch::Receiver, - block_writes_sender: watch::Sender, - buffer: Mutex, - #[cfg(test)] - events_sender: channel::UnboundedSender, -} - -impl Buffered { - /// Creates a new buffered storage. The buffer is initially empty. - pub fn new(store: T) -> Self { - let inner_subscriber = store.subscribe_to_block_writes(); - let store_block_number = *inner_subscriber.borrow(); - tracing::debug!( - store_block_number = store_block_number.0, - "Initialized buffer storage" - ); - Self { - inner: store, - inner_subscriber, - block_writes_sender: watch::channel(store_block_number).0, - buffer: Mutex::new(BlockBuffer::new(store_block_number)), - #[cfg(test)] - events_sender: channel::unbounded().0, - } - } - - #[cfg(test)] - fn set_events_sender(&mut self, sender: channel::UnboundedSender) { - self.events_sender = sender; - } - - pub(super) fn inner(&self) -> &T { - &self.inner - } - - #[cfg(test)] - async fn buffer_len(&self) -> usize { - self.buffer.lock().await.blocks.len() - } - - /// Listens to the updates in the underlying storage. - #[tracing::instrument(level = "trace", skip_all)] - async fn listen_to_updates(&self, ctx: &ctx::Ctx) { - let mut subscriber = self.inner_subscriber.clone(); - loop { - let store_block_number = { - let Ok(number) = sync::changed(ctx, &mut subscriber).await else { - return; // Do not propagate cancellation errors - }; - *number - }; - tracing::debug!( - store_block_number = store_block_number.0, - "Underlying block number updated" - ); - - let Ok(mut buffer) = sync::lock(ctx, &self.buffer).await else { - return; // Do not propagate cancellation errors - }; - buffer.set_store_block(store_block_number); - #[cfg(test)] - self.events_sender - .send(BufferedStorageEvent::UpdateReceived(store_block_number)); - } - } - - /// Schedules blocks in the underlying store as they are pushed to this store. 
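`listen_to_updates` above is, at its core, a watch-channel subscription. A condensed sketch using tokio's `watch` directly (the deleted code goes through `zksync_concurrency` wrappers, which this does not reproduce):

```rust
use tokio::sync::watch;

#[tokio::main]
async fn main() {
    // The store publishes the number of the last persisted block.
    let (tx, mut rx) = watch::channel(0u64);

    let listener = tokio::spawn(async move {
        // Wakes on every change; intermediate values may be coalesced,
        // which is fine because only the latest value matters.
        while rx.changed().await.is_ok() {
            let number = *rx.borrow();
            println!("underlying store advanced to block #{number}");
            if number >= 3 {
                break;
            }
        }
    });

    for number in 1..=3 {
        tx.send_replace(number);
    }
    listener.await.unwrap();
}
```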
- #[tracing::instrument(level = "trace", skip_all, err)] - async fn schedule_blocks(&self, ctx: &ctx::Ctx) -> StorageResult<()> { - let mut blocks_subscriber = self.block_writes_sender.subscribe(); - - let mut next_scheduled_block_number = { - let Ok(buffer) = sync::lock(ctx, &self.buffer).await else { - return Ok(()); // Do not propagate cancellation errors - }; - buffer.store_block_number.next() - }; - loop { - loop { - let block = match self.buffered_block(ctx, next_scheduled_block_number).await { - Err(ctx::Canceled) => return Ok(()), // Do not propagate cancellation errors - Ok(None) => break, - Ok(Some(block)) => block, - }; - self.inner.schedule_next_block(ctx, &block).await?; - next_scheduled_block_number = next_scheduled_block_number.next(); - } - // Wait until some more blocks are pushed into the buffer. - let Ok(number) = sync::changed(ctx, &mut blocks_subscriber).await else { - return Ok(()); // Do not propagate cancellation errors - }; - tracing::debug!(block_number = number.0, "Received new block"); - } - } - - async fn buffered_block( - &self, - ctx: &ctx::Ctx, - number: BlockNumber, - ) -> ctx::OrCanceled> { - Ok(sync::lock(ctx, &self.buffer) - .await? - .blocks - .get(&number) - .cloned()) - } - - /// Runs background tasks for this store. This method **must** be spawned as a background task - /// which should be running as long at the [`Buffered`] is in use; otherwise, it will function incorrectly. - pub async fn run_background_tasks(&self, ctx: &ctx::Ctx) -> StorageResult<()> { - scope::run!(ctx, |ctx, s| { - s.spawn(async { - self.listen_to_updates(ctx).await; - Ok(()) - }); - self.schedule_blocks(ctx) - }) - .await - } -} - -#[async_trait] -impl BlockStore for Buffered { - async fn head_block(&self, ctx: &ctx::Ctx) -> StorageResult { - let buffered_head_block = sync::lock(ctx, &self.buffer).await?.head_block(); - if let Some(block) = buffered_head_block { - return Ok(block); - } - self.inner.head_block(ctx).await - } - - async fn first_block(&self, ctx: &ctx::Ctx) -> StorageResult { - // First block is always situated in the underlying store - self.inner.first_block(ctx).await - } - - async fn last_contiguous_block_number(&self, ctx: &ctx::Ctx) -> StorageResult { - Ok(sync::lock(ctx, &self.buffer) - .await? - .last_contiguous_block_number()) - } - - async fn block( - &self, - ctx: &ctx::Ctx, - number: BlockNumber, - ) -> StorageResult> { - let started_at = Instant::now(); - { - let buffer = sync::lock(ctx, &self.buffer).await?; - if number > buffer.store_block_number { - let block = buffer.blocks.get(&number).cloned(); - METRICS.get_block_latency[&BlockResponseKind::InMemory] - .observe(started_at.elapsed()); - return Ok(block); - } - } - let block = self.inner.block(ctx, number).await?; - METRICS.get_block_latency[&BlockResponseKind::Persisted].observe(started_at.elapsed()); - Ok(block) - } - - async fn missing_block_numbers( - &self, - ctx: &ctx::Ctx, - range: ops::Range, - ) -> StorageResult> { - // By design, the underlying store has no missing blocks. - Ok(sync::lock(ctx, &self.buffer) - .await? 
- .missing_block_numbers(range)) - } - - fn subscribe_to_block_writes(&self) -> watch::Receiver { - self.block_writes_sender.subscribe() - } -} - -#[async_trait] -impl WriteBlockStore for Buffered { - async fn put_block(&self, ctx: &ctx::Ctx, block: &FinalBlock) -> StorageResult<()> { - let buffer_block_latency = METRICS.buffer_block_latency.start(); - { - let mut buffer = sync::lock(ctx, &self.buffer).await?; - let block_number = block.header.number; - if block_number <= buffer.store_block_number { - let err = anyhow::anyhow!( - "Cannot replace a block #{block_number} since it is already present in the underlying storage", - ); - return Err(StorageError::Database(err)); - } - buffer.put_block(block.clone()); - } - self.block_writes_sender.send_replace(block.header.number); - buffer_block_latency.observe(); - Ok(()) - } -} diff --git a/core/lib/zksync_core/src/sync_layer/gossip/buffered/tests.rs b/core/lib/zksync_core/src/sync_layer/gossip/buffered/tests.rs deleted file mode 100644 index de5ef8a88cb..00000000000 --- a/core/lib/zksync_core/src/sync_layer/gossip/buffered/tests.rs +++ /dev/null @@ -1,287 +0,0 @@ -//! Tests for buffered storage. - -use assert_matches::assert_matches; -use async_trait::async_trait; -use rand::{rngs::StdRng, seq::SliceRandom, Rng}; -use test_casing::test_casing; - -use std::{iter, ops}; - -use zksync_concurrency::{ - ctx::{self, channel}, - scope, - sync::{self, watch}, - time, -}; -use zksync_consensus_roles::validator::{BlockHeader, BlockNumber, FinalBlock, Payload}; -use zksync_consensus_storage::{BlockStore, InMemoryStorage, StorageResult, WriteBlockStore}; - -use super::*; - -fn init_store(rng: &mut impl Rng) -> (FinalBlock, InMemoryStorage) { - let payload = Payload(vec![]); - let genesis_block = FinalBlock { - header: BlockHeader::genesis(payload.hash()), - payload, - justification: rng.gen(), - }; - let block_store = InMemoryStorage::new(genesis_block.clone()); - (genesis_block, block_store) -} - -fn gen_blocks(rng: &mut impl Rng, genesis_block: FinalBlock, count: usize) -> Vec { - let blocks = iter::successors(Some(genesis_block), |parent| { - let payload = Payload(vec![]); - let header = BlockHeader { - parent: parent.header.hash(), - number: parent.header.number.next(), - payload: payload.hash(), - }; - Some(FinalBlock { - header, - payload, - justification: rng.gen(), - }) - }); - blocks.skip(1).take(count).collect() -} - -#[derive(Debug)] -struct MockContiguousStore { - inner: InMemoryStorage, - block_sender: channel::UnboundedSender, -} - -impl MockContiguousStore { - fn new(inner: InMemoryStorage) -> (Self, channel::UnboundedReceiver) { - let (block_sender, block_receiver) = channel::unbounded(); - let this = Self { - inner, - block_sender, - }; - (this, block_receiver) - } - - async fn run_updates( - &self, - ctx: &ctx::Ctx, - mut block_receiver: channel::UnboundedReceiver, - ) -> StorageResult<()> { - let rng = &mut ctx.rng(); - while let Ok(block) = block_receiver.recv(ctx).await { - let head_block_number = self.head_block(ctx).await?.header.number; - assert_eq!(block.header.number, head_block_number.next()); - - let sleep_duration = time::Duration::milliseconds(rng.gen_range(0..5)); - ctx.sleep(sleep_duration).await?; - self.inner.put_block(ctx, &block).await?; - } - Ok(()) - } -} - -#[async_trait] -impl BlockStore for MockContiguousStore { - async fn head_block(&self, ctx: &ctx::Ctx) -> StorageResult { - self.inner.head_block(ctx).await - } - - async fn first_block(&self, ctx: &ctx::Ctx) -> StorageResult { - self.inner.first_block(ctx).await 
- } - - async fn last_contiguous_block_number(&self, ctx: &ctx::Ctx) -> StorageResult { - self.inner.last_contiguous_block_number(ctx).await - } - - async fn block( - &self, - ctx: &ctx::Ctx, - number: BlockNumber, - ) -> StorageResult> { - self.inner.block(ctx, number).await - } - - async fn missing_block_numbers( - &self, - ctx: &ctx::Ctx, - range: ops::Range, - ) -> StorageResult> { - self.inner.missing_block_numbers(ctx, range).await - } - - fn subscribe_to_block_writes(&self) -> watch::Receiver { - self.inner.subscribe_to_block_writes() - } -} - -#[async_trait] -impl ContiguousBlockStore for MockContiguousStore { - async fn schedule_next_block(&self, _ctx: &ctx::Ctx, block: &FinalBlock) -> StorageResult<()> { - tracing::trace!(block_number = block.header.number.0, "Scheduled next block"); - self.block_sender.send(block.clone()); - Ok(()) - } -} - -#[tracing::instrument(level = "trace", skip(shuffle_blocks))] -async fn test_buffered_storage( - initial_block_count: usize, - block_count: usize, - block_interval: time::Duration, - shuffle_blocks: impl FnOnce(&mut StdRng, &mut [FinalBlock]), -) { - let ctx = &ctx::test_root(&ctx::RealClock); - let rng = &mut ctx.rng(); - - let (genesis_block, block_store) = init_store(rng); - let mut initial_blocks = gen_blocks(rng, genesis_block.clone(), initial_block_count); - for block in &initial_blocks { - block_store.put_block(ctx, block).await.unwrap(); - } - initial_blocks.insert(0, genesis_block.clone()); - - let (block_store, block_receiver) = MockContiguousStore::new(block_store); - let mut buffered_store = Buffered::new(block_store); - let (events_sender, mut events_receiver) = channel::unbounded(); - buffered_store.set_events_sender(events_sender); - - // Check initial values returned by the store. - let last_initial_block = initial_blocks.last().unwrap().clone(); - assert_eq!( - buffered_store.head_block(ctx).await.unwrap(), - last_initial_block - ); - for block in &initial_blocks { - let block_result = buffered_store.block(ctx, block.header.number).await; - assert_eq!(block_result.unwrap().as_ref(), Some(block)); - } - let mut subscriber = buffered_store.subscribe_to_block_writes(); - assert_eq!( - *subscriber.borrow(), - BlockNumber(initial_block_count as u64) - ); - - let mut blocks = gen_blocks(rng, last_initial_block, block_count); - shuffle_blocks(rng, &mut blocks); - let last_block_number = BlockNumber((block_count + initial_block_count) as u64); - - scope::run!(ctx, |ctx, s| async { - s.spawn_bg(buffered_store.inner().run_updates(ctx, block_receiver)); - s.spawn_bg(buffered_store.run_background_tasks(ctx)); - - for (idx, block) in blocks.iter().enumerate() { - buffered_store.put_block(ctx, block).await?; - let new_block_number = *sync::changed(ctx, &mut subscriber).await?; - assert_eq!(new_block_number, block.header.number); - - // Check that all written blocks are immediately accessible. - for existing_block in initial_blocks.iter().chain(&blocks[0..=idx]) { - let number = existing_block.header.number; - assert_eq!( - buffered_store.block(ctx, number).await?.as_ref(), - Some(existing_block) - ); - } - assert_eq!(buffered_store.first_block(ctx).await?, genesis_block); - - let expected_head_block = blocks[0..=idx] - .iter() - .max_by_key(|block| block.header.number) - .unwrap(); - assert_eq!(buffered_store.head_block(ctx).await?, *expected_head_block); - - let expected_last_contiguous_block = blocks[(idx + 1)..] 
- .iter() - .map(|block| block.header.number) - .min() - .map_or(last_block_number, BlockNumber::prev); - assert_eq!( - buffered_store.last_contiguous_block_number(ctx).await?, - expected_last_contiguous_block - ); - - ctx.sleep(block_interval).await?; - } - - let mut inner_subscriber = buffered_store.inner().subscribe_to_block_writes(); - while buffered_store - .inner() - .last_contiguous_block_number(ctx) - .await? - < last_block_number - { - sync::changed(ctx, &mut inner_subscriber).await?; - } - - // Check events emitted by the buffered storage. This also ensures that all underlying storage - // updates are processed before proceeding to the following checks. - let expected_numbers = (initial_block_count as u64 + 1)..=last_block_number.0; - for expected_number in expected_numbers.map(BlockNumber) { - assert_matches!( - events_receiver.recv(ctx).await?, - BufferedStorageEvent::UpdateReceived(number) if number == expected_number - ); - } - - assert_eq!(buffered_store.buffer_len().await, 0); - Ok(()) - }) - .await - .unwrap(); -} - -// Choose intervals so that they are both smaller and larger than the sleep duration in -// `MockContiguousStore::run_updates()`. -const BLOCK_INTERVALS: [time::Duration; 4] = [ - time::Duration::ZERO, - time::Duration::milliseconds(3), - time::Duration::milliseconds(5), - time::Duration::milliseconds(10), -]; - -#[test_casing(4, BLOCK_INTERVALS)] -#[tokio::test] -async fn buffered_storage_with_sequential_blocks(block_interval: time::Duration) { - test_buffered_storage(0, 30, block_interval, |_, _| { - // Do not perform shuffling - }) - .await; -} - -#[test_casing(4, BLOCK_INTERVALS)] -#[tokio::test] -async fn buffered_storage_with_random_blocks(block_interval: time::Duration) { - test_buffered_storage(0, 30, block_interval, |rng, blocks| blocks.shuffle(rng)).await; -} - -#[test_casing(4, BLOCK_INTERVALS)] -#[tokio::test] -async fn buffered_storage_with_slightly_shuffled_blocks(block_interval: time::Duration) { - test_buffered_storage(0, 30, block_interval, |rng, blocks| { - for chunk in blocks.chunks_mut(4) { - chunk.shuffle(rng); - } - }) - .await; -} - -#[test_casing(4, BLOCK_INTERVALS)] -#[tokio::test] -async fn buffered_storage_with_initial_blocks(block_interval: time::Duration) { - test_buffered_storage(10, 20, block_interval, |_, _| { - // Do not perform shuffling - }) - .await; -} - -#[test_casing(4, BLOCK_INTERVALS)] -#[tokio::test] -async fn buffered_storage_with_initial_blocks_and_slight_shuffling(block_interval: time::Duration) { - test_buffered_storage(10, 20, block_interval, |rng, blocks| { - for chunk in blocks.chunks_mut(5) { - chunk.shuffle(rng); - } - }) - .await; -} diff --git a/core/lib/zksync_core/src/sync_layer/gossip/conversions.rs b/core/lib/zksync_core/src/sync_layer/gossip/conversions.rs index 8face4e6942..de9f00093fa 100644 --- a/core/lib/zksync_core/src/sync_layer/gossip/conversions.rs +++ b/core/lib/zksync_core/src/sync_layer/gossip/conversions.rs @@ -1,31 +1,11 @@ //! Conversion logic between server and consensus types. 
 
-use anyhow::Context as _;
-
-use zksync_consensus_roles::validator::{BlockHeader, BlockNumber, FinalBlock};
-use zksync_types::{
-    api::en::SyncBlock, block::ConsensusBlockFields, MiniblockNumber, ProtocolVersionId,
-};
+use zksync_consensus_roles::validator::FinalBlock;
+use zksync_dal::blocks_dal::ConsensusBlockFields;
+use zksync_types::MiniblockNumber;
 
 use crate::{consensus, sync_layer::fetcher::FetchedBlock};
 
-pub(super) fn sync_block_to_consensus_block(mut block: SyncBlock) -> anyhow::Result<FinalBlock> {
-    let number = BlockNumber(block.number.0.into());
-    let consensus = block.consensus.take().context("Missing consensus fields")?;
-    let payload: consensus::Payload = block.try_into().context("Missing `SyncBlock` data")?;
-    let payload = payload.encode();
-    let header = BlockHeader {
-        parent: consensus.parent,
-        number,
-        payload: payload.hash(),
-    };
-    Ok(FinalBlock {
-        header,
-        payload,
-        justification: consensus.justification,
-    })
-}
-
 impl FetchedBlock {
     pub(super) fn from_gossip_block(
         block: &FinalBlock,
@@ -40,11 +20,12 @@ impl FetchedBlock {
             number: MiniblockNumber(number),
             l1_batch_number: payload.l1_batch_number,
             last_in_batch,
-            protocol_version: ProtocolVersionId::latest(), // FIXME
+            protocol_version: payload.protocol_version,
             timestamp: payload.timestamp,
-            hash: payload.hash,
+            reference_hash: Some(payload.hash),
             l1_gas_price: payload.l1_gas_price,
             l2_fair_gas_price: payload.l2_fair_gas_price,
+            fair_pubdata_price: payload.fair_pubdata_price,
             virtual_blocks: payload.virtual_blocks,
             operator_address: payload.operator_address,
             transactions: payload.transactions,
diff --git a/core/lib/zksync_core/src/sync_layer/gossip/metrics.rs b/core/lib/zksync_core/src/sync_layer/gossip/metrics.rs
deleted file mode 100644
index f67c150b99c..00000000000
--- a/core/lib/zksync_core/src/sync_layer/gossip/metrics.rs
+++ /dev/null
@@ -1,29 +0,0 @@
-//! Metrics for gossip-powered syncing.
-
-use vise::{Buckets, EncodeLabelSet, EncodeLabelValue, Family, Gauge, Histogram, Metrics, Unit};
-
-use std::time::Duration;
-
-#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, EncodeLabelValue, EncodeLabelSet)]
-#[metrics(label = "kind", rename_all = "snake_case")]
-pub(super) enum BlockResponseKind {
-    Persisted,
-    InMemory,
-}
-
-#[derive(Debug, Metrics)]
-#[metrics(prefix = "external_node_gossip_fetcher")]
-pub(super) struct GossipFetcherMetrics {
-    /// Number of currently buffered unexecuted blocks.
-    pub buffer_size: Gauge<usize>,
-    /// Latency of a `get_block` call.
-    #[metrics(unit = Unit::Seconds, buckets = Buckets::LATENCIES)]
-    pub get_block_latency: Family<BlockResponseKind, Histogram<Duration>>,
-    /// Latency of putting a block into the buffered storage. This may include the time to queue
-    /// block actions, but does not include block execution.
-    #[metrics(unit = Unit::Seconds, buckets = Buckets::LATENCIES)]
-    pub buffer_block_latency: Histogram<Duration>,
-}
-
-#[vise::register]
-pub(super) static METRICS: vise::Global<GossipFetcherMetrics> = vise::Global::new();
diff --git a/core/lib/zksync_core/src/sync_layer/gossip/mod.rs b/core/lib/zksync_core/src/sync_layer/gossip/mod.rs
deleted file mode 100644
index 630ded95345..00000000000
--- a/core/lib/zksync_core/src/sync_layer/gossip/mod.rs
+++ /dev/null
@@ -1,93 +0,0 @@
-//! Consensus adapter for EN synchronization logic.
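Editorial aside on the `GossipFetcherMetrics` removed above: its `vise` histograms are used with the start-a-timer/observe pattern that appears verbatim in the `WriteBlockStore for Buffered` implementation at the top of this diff. A minimal sketch, assuming the deleted `METRICS` global (not part of the patch):

```rust
// Sketch only: mirrors the calls in the deleted `Buffered::put_block` above.
let buffer_block_latency = METRICS.buffer_block_latency.start(); // start the timer
// ... insert the block into the in-memory buffer ...
buffer_block_latency.observe(); // record the elapsed time into the histogram
```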
- -use anyhow::Context as _; -use tokio::sync::watch; - -use std::sync::Arc; - -use zksync_concurrency::{ctx, scope}; -use zksync_consensus_executor::{Executor, ExecutorConfig}; -use zksync_consensus_roles::node; -use zksync_dal::ConnectionPool; - -mod buffered; -mod conversions; -mod metrics; -mod storage; -#[cfg(test)] -mod tests; -mod utils; - -use self::{buffered::Buffered, storage::PostgresBlockStorage}; -use super::{fetcher::FetcherCursor, sync_action::ActionQueueSender}; - -/// Starts fetching L2 blocks using peer-to-peer gossip network. -pub async fn run_gossip_fetcher( - pool: ConnectionPool, - actions: ActionQueueSender, - executor_config: ExecutorConfig, - node_key: node::SecretKey, - mut stop_receiver: watch::Receiver, -) -> anyhow::Result<()> { - scope::run!(&ctx::root(), |ctx, s| async { - s.spawn_bg(run_gossip_fetcher_inner( - ctx, - pool, - actions, - executor_config, - node_key, - )); - if stop_receiver.changed().await.is_err() { - tracing::warn!( - "Stop signal sender for gossip fetcher was dropped without sending a signal" - ); - } - tracing::info!("Stop signal received, gossip fetcher is shutting down"); - Ok(()) - }) - .await -} - -async fn run_gossip_fetcher_inner( - ctx: &ctx::Ctx, - pool: ConnectionPool, - actions: ActionQueueSender, - executor_config: ExecutorConfig, - node_key: node::SecretKey, -) -> anyhow::Result<()> { - tracing::info!( - "Starting gossip fetcher with {executor_config:?} and node key {:?}", - node_key.public() - ); - - let mut storage = pool - .access_storage_tagged("sync_layer") - .await - .context("Failed acquiring Postgres connection for cursor")?; - let cursor = FetcherCursor::new(&mut storage).await?; - drop(storage); - - let store = PostgresBlockStorage::new(pool, actions, cursor); - let buffered = Arc::new(Buffered::new(store)); - let store = buffered.inner(); - let executor = Executor::new(executor_config, node_key, buffered.clone()) - .context("Node executor misconfiguration")?; - - scope::run!(ctx, |ctx, s| async { - s.spawn_bg(async { - store - .run_background_tasks(ctx) - .await - .context("`PostgresBlockStorage` background tasks failed") - }); - s.spawn_bg(async { - buffered - .run_background_tasks(ctx) - .await - .context("`Buffered` storage background tasks failed") - }); - - executor.run(ctx).await.context("Node executor terminated") - }) - .await -} diff --git a/core/lib/zksync_core/src/sync_layer/gossip/storage/mod.rs b/core/lib/zksync_core/src/sync_layer/gossip/storage/mod.rs deleted file mode 100644 index d4e95c9e2d4..00000000000 --- a/core/lib/zksync_core/src/sync_layer/gossip/storage/mod.rs +++ /dev/null @@ -1,219 +0,0 @@ -//! Storage implementation based on DAL. 
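Editorial aside: the `run_gossip_fetcher_inner` function removed above doubles as the usage contract for the storage deleted below; both `PostgresBlockStorage` and the `Buffered` wrapper are inert unless their background tasks are spawned. A condensed sketch of that wiring (names as in the removed code; `executor` construction omitted):

```rust
// Sketch only, condensed from the deleted `run_gossip_fetcher_inner` above.
let store = PostgresBlockStorage::new(pool, actions, cursor);
let buffered = Arc::new(Buffered::new(store));
scope::run!(ctx, |ctx, s| async {
    // Neither storage works correctly unless these background tasks keep running.
    s.spawn_bg(buffered.inner().run_background_tasks(ctx));
    s.spawn_bg(buffered.run_background_tasks(ctx));
    executor.run(ctx).await // the consensus executor consumes `buffered`
})
.await
```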
-
-use anyhow::Context as _;
-use async_trait::async_trait;
-
-use std::ops;
-
-use zksync_concurrency::{
-    ctx,
-    sync::{self, watch, Mutex},
-    time,
-};
-use zksync_consensus_roles::validator::{BlockNumber, FinalBlock};
-use zksync_consensus_storage::{BlockStore, StorageError, StorageResult};
-use zksync_dal::{ConnectionPool, StorageProcessor};
-use zksync_types::{Address, MiniblockNumber};
-
-#[cfg(test)]
-mod tests;
-
-use super::{buffered::ContiguousBlockStore, conversions::sync_block_to_consensus_block};
-use crate::sync_layer::{
-    fetcher::{FetchedBlock, FetcherCursor},
-    sync_action::{ActionQueueSender, SyncAction},
-};
-
-#[derive(Debug)]
-struct CursorWithCachedBlock {
-    inner: FetcherCursor,
-    maybe_last_block_in_batch: Option<FetchedBlock>,
-}
-
-impl From<FetcherCursor> for CursorWithCachedBlock {
-    fn from(inner: FetcherCursor) -> Self {
-        Self {
-            inner,
-            maybe_last_block_in_batch: None,
-        }
-    }
-}
-
-impl CursorWithCachedBlock {
-    fn advance(&mut self, block: FetchedBlock) -> Vec<Vec<SyncAction>> {
-        let mut actions = Vec::with_capacity(2);
-        if let Some(mut prev_block) = self.maybe_last_block_in_batch.take() {
-            prev_block.last_in_batch = prev_block.l1_batch_number != block.l1_batch_number;
-            actions.push(self.inner.advance(prev_block));
-        }
-
-        // We take advantage of the fact that the last block in a batch is a *fictive* block that
-        // does not contain transactions. Thus, any block with transactions cannot be last in an L1 batch.
-        let can_be_last_in_batch = block.transactions.is_empty();
-        if can_be_last_in_batch {
-            self.maybe_last_block_in_batch = Some(block);
-            // We cannot convert the block into actions yet, since we don't know whether it seals an L1 batch.
-        } else {
-            actions.push(self.inner.advance(block));
-        }
-        actions
-    }
-}
-
-/// Postgres-based [`BlockStore`] implementation. New blocks are scheduled to be written via
-/// [`ContiguousBlockStore`] trait, which internally uses an [`ActionQueueSender`] to queue
-/// block data (miniblock and L1 batch parameters, transactions) for the state keeper. Block data processing
-/// is shared with JSON-RPC-based syncing.
-#[derive(Debug)]
-pub(super) struct PostgresBlockStorage {
-    pool: ConnectionPool,
-    actions: ActionQueueSender,
-    block_sender: watch::Sender<BlockNumber>,
-    cursor: Mutex<CursorWithCachedBlock>,
-}
-
-impl PostgresBlockStorage {
-    /// Creates a new storage handle. `pool` should have multiple connections to work efficiently.
-    pub fn new(pool: ConnectionPool, actions: ActionQueueSender, cursor: FetcherCursor) -> Self {
-        let current_block_number = cursor.next_miniblock.0.saturating_sub(1).into();
-        Self {
-            pool,
-            actions,
-            block_sender: watch::channel(BlockNumber(current_block_number)).0,
-            cursor: Mutex::new(cursor.into()),
-        }
-    }
-
-    /// Runs background tasks for this store. This method **must** be spawned as a background task
-    /// which should be running as long as the [`PostgresBlockStorage`] is in use; otherwise,
-    /// it will function incorrectly.
- pub async fn run_background_tasks(&self, ctx: &ctx::Ctx) -> StorageResult<()> { - const POLL_INTERVAL: time::Duration = time::Duration::milliseconds(50); - loop { - let sealed_miniblock_number = match self.sealed_miniblock_number(ctx).await { - Ok(number) => number, - Err(err @ StorageError::Database(_)) => return Err(err), - Err(StorageError::Canceled(_)) => return Ok(()), // Do not propagate cancellation errors - }; - self.block_sender.send_if_modified(|number| { - if *number != sealed_miniblock_number { - *number = sealed_miniblock_number; - true - } else { - false - } - }); - if let Err(ctx::Canceled) = ctx.sleep(POLL_INTERVAL).await { - return Ok(()); // Do not propagate cancellation errors - } - } - } - - async fn storage(&self, ctx: &ctx::Ctx) -> StorageResult> { - ctx.wait(self.pool.access_storage_tagged("sync_layer")) - .await? - .context("Failed to connect to Postgres") - .map_err(StorageError::Database) - } - - async fn block( - ctx: &ctx::Ctx, - storage: &mut StorageProcessor<'_>, - number: MiniblockNumber, - ) -> StorageResult> { - let operator_address = Address::default(); // FIXME: where to get this address from? - let Some(block) = ctx - .wait( - storage - .sync_dal() - .sync_block(number, operator_address, true), - ) - .await? - .with_context(|| format!("Failed getting miniblock #{number} from Postgres")) - .map_err(StorageError::Database)? - else { - return Ok(None); - }; - let block = sync_block_to_consensus_block(block).map_err(StorageError::Database)?; - Ok(Some(block)) - } - - async fn sealed_miniblock_number(&self, ctx: &ctx::Ctx) -> StorageResult { - let mut storage = self.storage(ctx).await?; - let number = ctx - .wait(storage.blocks_dal().get_sealed_miniblock_number()) - .await? - .context("Failed getting sealed miniblock number") - .map_err(StorageError::Database)?; - Ok(BlockNumber(number.0.into())) - } -} - -#[async_trait] -impl BlockStore for PostgresBlockStorage { - async fn head_block(&self, ctx: &ctx::Ctx) -> StorageResult { - let mut storage = self.storage(ctx).await?; - let miniblock_number = ctx - .wait(storage.blocks_dal().get_sealed_miniblock_number()) - .await? - .context("Failed getting sealed miniblock number") - .map_err(StorageError::Database)?; - // ^ The number can get stale, but it's OK for our purposes - Ok(Self::block(ctx, &mut storage, miniblock_number) - .await? - .with_context(|| format!("Miniblock #{miniblock_number} disappeared from Postgres")) - .map_err(StorageError::Database)?) - } - - async fn first_block(&self, ctx: &ctx::Ctx) -> StorageResult { - let mut storage = self.storage(ctx).await?; - Self::block(ctx, &mut storage, MiniblockNumber(0)) - .await? 
- .context("Genesis miniblock not present in Postgres") - .map_err(StorageError::Database) - } - - async fn last_contiguous_block_number(&self, ctx: &ctx::Ctx) -> StorageResult { - self.sealed_miniblock_number(ctx).await - } - - async fn block( - &self, - ctx: &ctx::Ctx, - number: BlockNumber, - ) -> StorageResult> { - let number = u32::try_from(number.0) - .context("block number is too large") - .map_err(StorageError::Database)?; - let mut storage = self.storage(ctx).await?; - Self::block(ctx, &mut storage, MiniblockNumber(number)).await - } - - async fn missing_block_numbers( - &self, - _ctx: &ctx::Ctx, - _range: ops::Range, - ) -> StorageResult> { - Ok(vec![]) // The storage never has missing blocks by construction - } - - fn subscribe_to_block_writes(&self) -> watch::Receiver { - self.block_sender.subscribe() - } -} - -#[async_trait] -impl ContiguousBlockStore for PostgresBlockStorage { - async fn schedule_next_block(&self, ctx: &ctx::Ctx, block: &FinalBlock) -> StorageResult<()> { - // last_in_batch` is always set to `false` by this call; it is properly set by `CursorWithCachedBlock`. - let fetched_block = - FetchedBlock::from_gossip_block(block, false).map_err(StorageError::Database)?; - let actions = sync::lock(ctx, &self.cursor).await?.advance(fetched_block); - for actions_chunk in actions { - // We don't wrap this in `ctx.wait()` because `PostgresBlockStorage` will get broken - // if it gets reused after context cancellation. - self.actions.push_actions(actions_chunk).await; - } - Ok(()) - } -} diff --git a/core/lib/zksync_core/src/sync_layer/gossip/storage/tests.rs b/core/lib/zksync_core/src/sync_layer/gossip/storage/tests.rs deleted file mode 100644 index 437c5188330..00000000000 --- a/core/lib/zksync_core/src/sync_layer/gossip/storage/tests.rs +++ /dev/null @@ -1,127 +0,0 @@ -//! Tests for Postgres storage implementation. 
- -use rand::{thread_rng, Rng}; - -use zksync_concurrency::scope; -use zksync_types::L2ChainId; - -use super::*; -use crate::{ - genesis::{ensure_genesis_state, GenesisParams}, - sync_layer::{ - gossip::tests::{ - add_consensus_fields, assert_first_block_actions, assert_second_block_actions, - load_final_block, - }, - tests::run_state_keeper_with_multiple_miniblocks, - ActionQueue, - }, -}; - -const TEST_TIMEOUT: time::Duration = time::Duration::seconds(10); - -#[tokio::test] -async fn block_store_basics_for_postgres() { - let pool = ConnectionPool::test_pool().await; - run_state_keeper_with_multiple_miniblocks(pool.clone()).await; - - let mut storage = pool.access_storage().await.unwrap(); - add_consensus_fields(&mut storage, &thread_rng().gen(), 3).await; - let cursor = FetcherCursor::new(&mut storage).await.unwrap(); - drop(storage); - let (actions_sender, _) = ActionQueue::new(); - let storage = PostgresBlockStorage::new(pool.clone(), actions_sender, cursor); - - let ctx = &ctx::test_root(&ctx::RealClock); - let genesis_block = BlockStore::first_block(&storage, ctx).await.unwrap(); - assert_eq!(genesis_block.header.number, BlockNumber(0)); - let head_block = BlockStore::head_block(&storage, ctx).await.unwrap(); - assert_eq!(head_block.header.number, BlockNumber(2)); - let last_contiguous_block_number = storage.last_contiguous_block_number(ctx).await.unwrap(); - assert_eq!(last_contiguous_block_number, BlockNumber(2)); - - let block = storage - .block(ctx, BlockNumber(1)) - .await - .unwrap() - .expect("no block #1"); - assert_eq!(block.header.number, BlockNumber(1)); - let missing_block = storage.block(ctx, BlockNumber(3)).await.unwrap(); - assert!(missing_block.is_none(), "{missing_block:?}"); -} - -#[tokio::test] -async fn subscribing_to_block_updates_for_postgres() { - let pool = ConnectionPool::test_pool().await; - let mut storage = pool.access_storage().await.unwrap(); - if storage.blocks_dal().is_genesis_needed().await.unwrap() { - ensure_genesis_state(&mut storage, L2ChainId::default(), &GenesisParams::mock()) - .await - .unwrap(); - } - let cursor = FetcherCursor::new(&mut storage).await.unwrap(); - // ^ This is logically incorrect (the storage should not be updated other than using - // `ContiguousBlockStore`), but for testing subscriptions this is fine. - drop(storage); - let (actions_sender, _) = ActionQueue::new(); - let storage = PostgresBlockStorage::new(pool.clone(), actions_sender, cursor); - let mut subscriber = storage.subscribe_to_block_writes(); - - let ctx = &ctx::test_root(&ctx::RealClock); - scope::run!(&ctx.with_timeout(TEST_TIMEOUT), |ctx, s| async { - s.spawn_bg(storage.run_background_tasks(ctx)); - s.spawn(async { - run_state_keeper_with_multiple_miniblocks(pool.clone()).await; - Ok(()) - }); - - loop { - let block = *sync::changed(ctx, &mut subscriber).await?; - if block == BlockNumber(2) { - // We should receive at least the last update. 
- break; - } - } - Ok(()) - }) - .await - .unwrap(); -} - -#[tokio::test] -async fn processing_new_blocks() { - let pool = ConnectionPool::test_pool().await; - run_state_keeper_with_multiple_miniblocks(pool.clone()).await; - - let mut storage = pool.access_storage().await.unwrap(); - add_consensus_fields(&mut storage, &thread_rng().gen(), 3).await; - let first_block = load_final_block(&mut storage, 1).await; - let second_block = load_final_block(&mut storage, 2).await; - storage - .transactions_dal() - .reset_transactions_state(MiniblockNumber(0)) - .await; - storage - .blocks_dal() - .delete_miniblocks(MiniblockNumber(0)) - .await - .unwrap(); - let cursor = FetcherCursor::new(&mut storage).await.unwrap(); - drop(storage); - - let (actions_sender, mut actions) = ActionQueue::new(); - let storage = PostgresBlockStorage::new(pool.clone(), actions_sender, cursor); - let ctx = &ctx::test_root(&ctx::RealClock); - let ctx = &ctx.with_timeout(TEST_TIMEOUT); - storage - .schedule_next_block(ctx, &first_block) - .await - .unwrap(); - assert_first_block_actions(&mut actions).await; - - storage - .schedule_next_block(ctx, &second_block) - .await - .unwrap(); - assert_second_block_actions(&mut actions).await; -} diff --git a/core/lib/zksync_core/src/sync_layer/gossip/tests.rs b/core/lib/zksync_core/src/sync_layer/gossip/tests.rs deleted file mode 100644 index ca3ce29f4d3..00000000000 --- a/core/lib/zksync_core/src/sync_layer/gossip/tests.rs +++ /dev/null @@ -1,339 +0,0 @@ -//! Tests for consensus adapters for EN synchronization logic. - -use assert_matches::assert_matches; -use test_casing::{test_casing, Product}; - -use zksync_concurrency::{ctx, scope, time}; -use zksync_consensus_executor::testonly::FullValidatorConfig; -use zksync_consensus_roles::validator::{self, FinalBlock}; -use zksync_consensus_storage::{InMemoryStorage, WriteBlockStore}; -use zksync_dal::{ConnectionPool, StorageProcessor}; -use zksync_types::{block::ConsensusBlockFields, Address, L1BatchNumber, MiniblockNumber}; - -use super::*; -use crate::{ - consensus, - sync_layer::{ - sync_action::SyncAction, - tests::{ - mock_l1_batch_hash_computation, run_state_keeper_with_multiple_l1_batches, - run_state_keeper_with_multiple_miniblocks, StateKeeperHandles, - }, - ActionQueue, - }, -}; - -const CLOCK_SPEEDUP: i64 = 20; -const POLL_INTERVAL: time::Duration = time::Duration::milliseconds(50 * CLOCK_SPEEDUP); - -/// Loads a block from the storage and converts it to a `FinalBlock`. -pub(super) async fn load_final_block( - storage: &mut StorageProcessor<'_>, - number: u32, -) -> FinalBlock { - let sync_block = storage - .sync_dal() - .sync_block(MiniblockNumber(number), Address::repeat_byte(1), true) - .await - .unwrap() - .unwrap_or_else(|| panic!("no sync block #{number}")); - conversions::sync_block_to_consensus_block(sync_block).unwrap() -} - -pub async fn block_payload(storage: &mut StorageProcessor<'_>, number: u32) -> validator::Payload { - let sync_block = storage - .sync_dal() - .sync_block(MiniblockNumber(number), Address::repeat_byte(1), true) - .await - .unwrap() - .unwrap_or_else(|| panic!("no sync block #{number}")); - consensus::Payload::try_from(sync_block).unwrap().encode() -} - -/// Adds consensus information for the specified `count` of miniblocks, starting from the genesis. 
-pub(super) async fn add_consensus_fields( - storage: &mut StorageProcessor<'_>, - validator_key: &validator::SecretKey, - count: u32, -) { - let mut prev_block_hash = validator::BlockHeaderHash::from_bytes([0; 32]); - let validator_set = validator::ValidatorSet::new([validator_key.public()]).unwrap(); - for number in 0..count { - let payload = block_payload(storage, number).await; - let block_header = validator::BlockHeader { - parent: prev_block_hash, - number: validator::BlockNumber(number.into()), - payload: payload.hash(), - }; - let replica_commit = validator::ReplicaCommit { - protocol_version: validator::CURRENT_VERSION, - view: validator::ViewNumber(number.into()), - proposal: block_header, - }; - let replica_commit = validator_key.sign_msg(replica_commit); - let justification = validator::CommitQC::from(&[replica_commit], &validator_set) - .expect("Failed creating QC"); - - let consensus = ConsensusBlockFields { - parent: prev_block_hash, - justification, - }; - storage - .blocks_dal() - .set_miniblock_consensus_fields(MiniblockNumber(number), &consensus) - .await - .unwrap(); - prev_block_hash = block_header.hash(); - } -} - -pub(super) async fn assert_first_block_actions(actions: &mut ActionQueue) -> Vec { - let mut received_actions = vec![]; - while !matches!(received_actions.last(), Some(SyncAction::SealMiniblock(_))) { - received_actions.push(actions.recv_action().await); - } - assert_matches!( - received_actions.as_slice(), - [ - SyncAction::OpenBatch { - number: L1BatchNumber(1), - timestamp: 1, - first_miniblock_info: (MiniblockNumber(1), 1), - .. - }, - SyncAction::Tx(_), - SyncAction::Tx(_), - SyncAction::Tx(_), - SyncAction::Tx(_), - SyncAction::Tx(_), - SyncAction::SealMiniblock(_), - ] - ); - received_actions -} - -pub(super) async fn assert_second_block_actions(actions: &mut ActionQueue) -> Vec { - let mut received_actions = vec![]; - while !matches!(received_actions.last(), Some(SyncAction::SealMiniblock(_))) { - received_actions.push(actions.recv_action().await); - } - assert_matches!( - received_actions.as_slice(), - [ - SyncAction::Miniblock { - number: MiniblockNumber(2), - timestamp: 2, - virtual_blocks: 1, - }, - SyncAction::Tx(_), - SyncAction::Tx(_), - SyncAction::Tx(_), - SyncAction::SealMiniblock(_), - ] - ); - received_actions -} - -#[test_casing(4, Product(([false, true], [false, true])))] -#[tokio::test] -async fn syncing_via_gossip_fetcher(delay_first_block: bool, delay_second_block: bool) { - zksync_concurrency::testonly::abort_on_panic(); - let pool = ConnectionPool::test_pool().await; - let tx_hashes = run_state_keeper_with_multiple_miniblocks(pool.clone()).await; - - let mut storage = pool.access_storage().await.unwrap(); - let genesis_block_payload = block_payload(&mut storage, 0).await; - let ctx = &ctx::test_root(&ctx::AffineClock::new(CLOCK_SPEEDUP as f64)); - let rng = &mut ctx.rng(); - let mut validator = FullValidatorConfig::for_single_validator(rng, genesis_block_payload); - let validator_set = validator.node_config.validators.clone(); - let external_node = validator.connect_full_node(rng); - - let (genesis_block, blocks) = - get_blocks_and_reset_storage(storage, &validator.validator_key).await; - let [first_block, second_block] = blocks.as_slice() else { - unreachable!("Unexpected blocks in storage: {blocks:?}"); - }; - tracing::trace!("Node storage reset"); - - let validator_storage = Arc::new(InMemoryStorage::new(genesis_block)); - if !delay_first_block { - validator_storage.put_block(ctx, first_block).await.unwrap(); - if 
!delay_second_block { - validator_storage - .put_block(ctx, second_block) - .await - .unwrap(); - } - } - let validator = Executor::new( - validator.node_config, - validator.node_key, - validator_storage.clone(), - ) - .unwrap(); - // ^ We intentionally do not run consensus on the validator node, since it'll produce blocks - // with payloads that cannot be parsed by the external node. - - let (actions_sender, mut actions) = ActionQueue::new(); - let (keeper_actions_sender, keeper_actions) = ActionQueue::new(); - let state_keeper = StateKeeperHandles::new(pool.clone(), keeper_actions, &[&tx_hashes]).await; - scope::run!(ctx, |ctx, s| async { - s.spawn_bg(validator.run(ctx)); - s.spawn_bg(run_gossip_fetcher_inner( - ctx, - pool.clone(), - actions_sender, - external_node.node_config, - external_node.node_key, - )); - - if delay_first_block { - ctx.sleep(POLL_INTERVAL).await?; - validator_storage.put_block(ctx, first_block).await.unwrap(); - if !delay_second_block { - validator_storage - .put_block(ctx, second_block) - .await - .unwrap(); - } - } - - let received_actions = assert_first_block_actions(&mut actions).await; - // Manually replicate actions to the state keeper. - keeper_actions_sender.push_actions(received_actions).await; - - if delay_second_block { - validator_storage - .put_block(ctx, second_block) - .await - .unwrap(); - } - - let received_actions = assert_second_block_actions(&mut actions).await; - keeper_actions_sender.push_actions(received_actions).await; - state_keeper - .wait(|state| state.get_local_block() == MiniblockNumber(2)) - .await; - Ok(()) - }) - .await - .unwrap(); - - // Check that received blocks have consensus fields persisted. - let mut storage = pool.access_storage().await.unwrap(); - for number in [1, 2] { - let block = load_final_block(&mut storage, number).await; - block.justification.verify(&validator_set, 1).unwrap(); - } -} - -async fn get_blocks_and_reset_storage( - mut storage: StorageProcessor<'_>, - validator_key: &validator::SecretKey, -) -> (FinalBlock, Vec) { - let sealed_miniblock_number = storage - .blocks_dal() - .get_sealed_miniblock_number() - .await - .unwrap(); - add_consensus_fields(&mut storage, validator_key, sealed_miniblock_number.0 + 1).await; - let genesis_block = load_final_block(&mut storage, 0).await; - - let mut blocks = Vec::with_capacity(sealed_miniblock_number.0 as usize); - for number in 1..=sealed_miniblock_number.0 { - blocks.push(load_final_block(&mut storage, number).await); - } - - storage - .transactions_dal() - .reset_transactions_state(MiniblockNumber(0)) - .await; - storage - .blocks_dal() - .delete_miniblocks(MiniblockNumber(0)) - .await - .unwrap(); - storage - .blocks_dal() - .delete_l1_batches(L1BatchNumber(0)) - .await - .unwrap(); - (genesis_block, blocks) -} - -#[test_casing(4, [3, 2, 1, 0])] -#[tokio::test] -async fn syncing_via_gossip_fetcher_with_multiple_l1_batches(initial_block_count: usize) { - assert!(initial_block_count <= 3); - zksync_concurrency::testonly::abort_on_panic(); - - let pool = ConnectionPool::test_pool().await; - let tx_hashes = run_state_keeper_with_multiple_l1_batches(pool.clone()).await; - let tx_hashes: Vec<_> = tx_hashes.iter().map(Vec::as_slice).collect(); - - let mut storage = pool.access_storage().await.unwrap(); - let genesis_block_payload = block_payload(&mut storage, 0).await; - let ctx = &ctx::test_root(&ctx::AffineClock::new(CLOCK_SPEEDUP as f64)); - let rng = &mut ctx.rng(); - let mut validator = FullValidatorConfig::for_single_validator(rng, genesis_block_payload); - let 
validator_set = validator.node_config.validators.clone();
-    let external_node = validator.connect_full_node(rng);
-
-    let (genesis_block, blocks) =
-        get_blocks_and_reset_storage(storage, &validator.validator_key).await;
-    assert_eq!(blocks.len(), 3); // 2 real + 1 fictive blocks
-    tracing::trace!("Node storage reset");
-    let (initial_blocks, delayed_blocks) = blocks.split_at(initial_block_count);
-
-    let validator_storage = Arc::new(InMemoryStorage::new(genesis_block));
-    for block in initial_blocks {
-        validator_storage.put_block(ctx, block).await.unwrap();
-    }
-    let validator = Executor::new(
-        validator.node_config,
-        validator.node_key,
-        validator_storage.clone(),
-    )
-    .unwrap();
-
-    let (actions_sender, actions) = ActionQueue::new();
-    let state_keeper = StateKeeperHandles::new(pool.clone(), actions, &tx_hashes).await;
-    scope::run!(ctx, |ctx, s| async {
-        s.spawn_bg(validator.run(ctx));
-        s.spawn_bg(async {
-            for block in delayed_blocks {
-                ctx.sleep(POLL_INTERVAL).await?;
-                validator_storage.put_block(ctx, block).await?;
-            }
-            Ok(())
-        });
-
-        let cloned_pool = pool.clone();
-        s.spawn_bg(async {
-            mock_l1_batch_hash_computation(cloned_pool, 1).await;
-            Ok(())
-        });
-        s.spawn_bg(run_gossip_fetcher_inner(
-            ctx,
-            pool.clone(),
-            actions_sender,
-            external_node.node_config,
-            external_node.node_key,
-        ));
-
-        state_keeper
-            .wait(|state| state.get_local_block() == MiniblockNumber(3))
-            .await;
-        Ok(())
-    })
-    .await
-    .unwrap();
-
-    // Check that received blocks have consensus fields persisted.
-    let mut storage = pool.access_storage().await.unwrap();
-    for number in [1, 2, 3] {
-        let block = load_final_block(&mut storage, number).await;
-        block.justification.verify(&validator_set, 1).unwrap();
-    }
-}
diff --git a/core/lib/zksync_core/src/sync_layer/gossip/utils.rs b/core/lib/zksync_core/src/sync_layer/gossip/utils.rs
deleted file mode 100644
index 8407821a2ec..00000000000
--- a/core/lib/zksync_core/src/sync_layer/gossip/utils.rs
+++ /dev/null
@@ -1,48 +0,0 @@
-use std::{iter, ops};
-
-use zksync_consensus_roles::validator::BlockNumber;
-
-/// Iterator over missing block numbers.
-pub(crate) struct MissingBlockNumbers<I: Iterator> {
-    range: ops::Range<BlockNumber>,
-    existing_numbers: iter::Peekable<I>,
-}
-
-impl<I> MissingBlockNumbers<I>
-where
-    I: Iterator<Item = BlockNumber>,
-{
-    /// Creates a new iterator based on the provided params.
-    pub(crate) fn new(range: ops::Range<BlockNumber>, existing_numbers: I) -> Self {
-        Self {
-            range,
-            existing_numbers: existing_numbers.peekable(),
-        }
-    }
-}
-
-impl<I> Iterator for MissingBlockNumbers<I>
-where
-    I: Iterator<Item = BlockNumber>,
-{
-    type Item = BlockNumber;
-
-    fn next(&mut self) -> Option<Self::Item> {
-        // Loop while existing numbers match the starting numbers from the range. The check
-        // that the range is non-empty is redundant given how `existing_numbers` are constructed
-        // (they are guaranteed to be lesser than the upper range bound); we add it just to be safe.
-        while !self.range.is_empty()
-            && matches!(self.existing_numbers.peek(), Some(&num) if num == self.range.start)
-        {
-            self.range.start = self.range.start.next();
-            self.existing_numbers.next(); // Advance to the next number
-        }
-
-        if self.range.is_empty() {
-            return None;
-        }
-        let next_number = self.range.start;
-        self.range.start = self.range.start.next();
-        Some(next_number)
-    }
-}
diff --git a/core/lib/zksync_core/src/sync_layer/metrics.rs b/core/lib/zksync_core/src/sync_layer/metrics.rs
index c3082c51052..51c569b7fcc 100644
--- a/core/lib/zksync_core/src/sync_layer/metrics.rs
+++ b/core/lib/zksync_core/src/sync_layer/metrics.rs
@@ -1,9 +1,10 @@
 //! Metrics for the synchronization layer of external node.
 
-use vise::{Buckets, Counter, EncodeLabelSet, EncodeLabelValue, Family, Gauge, Histogram, Metrics};
-
 use std::time::Duration;
 
+use vise::{Buckets, Counter, EncodeLabelSet, EncodeLabelValue, Family, Gauge, Histogram, Metrics};
+use zksync_types::aggregated_operations::AggregatedActionType;
+
 #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, EncodeLabelValue, EncodeLabelSet)]
 #[metrics(label = "stage", rename_all = "snake_case")]
 pub(super) enum FetchStage {
@@ -12,7 +13,9 @@ pub(super) enum FetchStage {
     SyncL2Block,
 }
 
-#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, EncodeLabelValue, EncodeLabelSet)]
+#[derive(
+    Debug, Clone, Copy, PartialEq, Eq, Hash, PartialOrd, Ord, EncodeLabelValue, EncodeLabelSet,
+)]
 #[metrics(label = "stage", rename_all = "snake_case")]
 pub(super) enum L1BatchStage {
     Open,
@@ -21,6 +24,16 @@ pub(super) enum L1BatchStage {
     Executed,
 }
 
+impl From<AggregatedActionType> for L1BatchStage {
+    fn from(ty: AggregatedActionType) -> Self {
+        match ty {
+            AggregatedActionType::Commit => Self::Committed,
+            AggregatedActionType::PublishProofOnchain => Self::Proven,
+            AggregatedActionType::Execute => Self::Executed,
+        }
+    }
+}
+
 #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, EncodeLabelValue, EncodeLabelSet)]
 #[metrics(label = "method", rename_all = "snake_case")]
 pub(super) enum CachedMethod {
diff --git a/core/lib/zksync_core/src/sync_layer/mod.rs b/core/lib/zksync_core/src/sync_layer/mod.rs
index df059947e3e..e216ef4f8c5 100644
--- a/core/lib/zksync_core/src/sync_layer/mod.rs
+++ b/core/lib/zksync_core/src/sync_layer/mod.rs
@@ -3,7 +3,6 @@ mod client;
 pub mod external_io;
 pub mod fetcher;
 pub mod genesis;
-mod gossip;
 mod metrics;
 pub(crate) mod sync_action;
 mod sync_state;
@@ -11,6 +10,6 @@ mod sync_state;
 mod tests;
 
 pub use self::{
-    client::MainNodeClient, external_io::ExternalIO, gossip::run_gossip_fetcher,
-    sync_action::ActionQueue, sync_state::SyncState,
+    client::MainNodeClient, external_io::ExternalIO, sync_action::ActionQueue,
+    sync_state::SyncState,
 };
diff --git a/core/lib/zksync_core/src/sync_layer/sync_action.rs b/core/lib/zksync_core/src/sync_layer/sync_action.rs
index b4f56999d4f..abaa1d93359 100644
--- a/core/lib/zksync_core/src/sync_layer/sync_action.rs
+++ b/core/lib/zksync_core/src/sync_layer/sync_action.rs
@@ -1,9 +1,5 @@
 use tokio::sync::mpsc;
-
-use zksync_types::{
-    block::ConsensusBlockFields, Address, L1BatchNumber, MiniblockNumber, ProtocolVersionId,
-    Transaction, H256,
-};
+use zksync_types::{Address, L1BatchNumber, MiniblockNumber, ProtocolVersionId, Transaction};
 
 use super::metrics::QUEUE_METRICS;
 
@@ -55,7 +51,7 @@ impl ActionQueueSender {
                     return Err(format!("Unexpected Tx: {:?}", actions));
                 }
             }
-            SyncAction::SealMiniblock(_) | SyncAction::SealBatch { .. } => {
+            SyncAction::SealMiniblock | SyncAction::SealBatch { .. } => {
                 if !opened || miniblock_sealed {
                     return Err(format!("Unexpected SealMiniblock/SealBatch: {:?}", actions));
                 }
@@ -133,11 +129,11 @@ pub(crate) enum SyncAction {
         timestamp: u64,
         l1_gas_price: u64,
         l2_fair_gas_price: u64,
+        fair_pubdata_price: Option<u64>,
         operator_address: Address,
         protocol_version: ProtocolVersionId,
         // Miniblock number and virtual blocks count.
         first_miniblock_info: (MiniblockNumber, u32),
-        prev_miniblock_hash: H256,
     },
     Miniblock {
         number: MiniblockNumber,
@@ -149,13 +145,11 @@
     /// that they are sealed, but at the same time the next miniblock may not exist yet.
/// By having a dedicated action for that we prevent a situation where the miniblock is kept open on the EN until /// the next one is sealed on the main node. - SealMiniblock(Option), + SealMiniblock, /// Similarly to `SealMiniblock` we must be able to seal the batch even if there is no next miniblock yet. SealBatch { /// Virtual blocks count for the fictive miniblock. virtual_blocks: u32, - /// Consensus-related fields for the fictive miniblock. - consensus: Option, }, } @@ -177,10 +171,10 @@ mod tests { timestamp: 1, l1_gas_price: 1, l2_fair_gas_price: 1, + fair_pubdata_price: Some(1), operator_address: Default::default(), protocol_version: ProtocolVersionId::latest(), first_miniblock_info: (1.into(), 1), - prev_miniblock_hash: H256::default(), } } @@ -209,14 +203,11 @@ mod tests { } fn seal_miniblock() -> SyncAction { - SyncAction::SealMiniblock(None) + SyncAction::SealMiniblock } fn seal_batch() -> SyncAction { - SyncAction::SealBatch { - virtual_blocks: 1, - consensus: None, - } + SyncAction::SealBatch { virtual_blocks: 1 } } #[test] @@ -251,7 +242,7 @@ mod tests { // Unexpected tx (vec![tx()], "Unexpected Tx"), (vec![open_batch(), seal_miniblock(), tx()], "Unexpected Tx"), - // Unexpected OpenBatch/Miniblock + // Unexpected `OpenBatch/Miniblock` ( vec![miniblock(), miniblock()], "Unexpected OpenBatch/Miniblock", @@ -264,7 +255,7 @@ mod tests { vec![open_batch(), miniblock()], "Unexpected OpenBatch/Miniblock", ), - // Unexpected SealMiniblock + // Unexpected `SealMiniblock` (vec![seal_miniblock()], "Unexpected SealMiniblock"), ( vec![miniblock(), seal_miniblock(), seal_miniblock()], diff --git a/core/lib/zksync_core/src/sync_layer/sync_state.rs b/core/lib/zksync_core/src/sync_layer/sync_state.rs index 8ba2cbb4e98..78b78d4aa4d 100644 --- a/core/lib/zksync_core/src/sync_layer/sync_state.rs +++ b/core/lib/zksync_core/src/sync_layer/sync_state.rs @@ -40,7 +40,7 @@ impl SyncState { let mut inner = self.inner.write().unwrap(); if let Some(local_block) = inner.local_block { if block.0 < local_block.0 { - // Probably it's fine -- will be checked by the reorg detector. + // Probably it's fine -- will be checked by the re-org detector. tracing::warn!( "main_node_block({}) is less than local_block({})", block, @@ -56,7 +56,7 @@ impl SyncState { let mut inner = self.inner.write().unwrap(); if let Some(main_node_block) = inner.main_node_block { if block.0 > main_node_block.0 { - // Probably it's fine -- will be checked by the reorg detector. + // Probably it's fine -- will be checked by the re-org detector. tracing::warn!( "local_block({}) is greater than main_node_block({})", block, @@ -86,7 +86,7 @@ impl SyncState { (inner.main_node_block, inner.local_block) { let Some(block_diff) = main_node_block.0.checked_sub(local_block.0) else { - // We're ahead of the main node, this situation is handled by the reorg detector. + // We're ahead of the main node, this situation is handled by the re-org detector. return (true, Some(0)); }; (block_diff <= SYNC_MINIBLOCK_DELTA, Some(block_diff)) @@ -137,7 +137,7 @@ mod tests { sync_state.set_main_node_block(MiniblockNumber(1)); sync_state.set_local_block(MiniblockNumber(2)); - // ^ should not panic, as we defer the situation to the reorg detector. + // ^ should not panic, as we defer the situation to the re-org detector. // At the same time, we should consider ourselves synced unless `ReorgDetector` tells us otherwise. 
assert!(sync_state.is_synced()); @@ -149,7 +149,7 @@ mod tests { sync_state.set_local_block(MiniblockNumber(2)); sync_state.set_main_node_block(MiniblockNumber(1)); - // ^ should not panic, as we defer the situation to the reorg detector. + // ^ should not panic, as we defer the situation to the re-org detector. // At the same time, we should consider ourselves synced unless `ReorgDetector` tells us otherwise. assert!(sync_state.is_synced()); diff --git a/core/lib/zksync_core/src/sync_layer/tests.rs b/core/lib/zksync_core/src/sync_layer/tests.rs index 4a337bbf5dc..3e508accefc 100644 --- a/core/lib/zksync_core/src/sync_layer/tests.rs +++ b/core/lib/zksync_core/src/sync_layer/tests.rs @@ -1,132 +1,34 @@ //! High-level sync layer tests. -use async_trait::async_trait; -use tokio::{sync::watch, task::JoinHandle}; - use std::{ collections::{HashMap, VecDeque}, iter, time::{Duration, Instant}, }; +use tokio::{sync::watch, task::JoinHandle}; use zksync_config::configs::chain::NetworkConfig; -use zksync_contracts::{BaseSystemContractsHashes, SystemContractCode}; use zksync_dal::{ConnectionPool, StorageProcessor}; use zksync_types::{ - api, Address, L1BatchNumber, L2ChainId, MiniblockNumber, ProtocolVersionId, Transaction, H256, + fee_model::{BatchFeeInput, PubdataIndependentBatchFeeModelInput}, + Address, L1BatchNumber, L2ChainId, MiniblockNumber, ProtocolVersionId, Transaction, H256, }; use super::{fetcher::FetcherCursor, sync_action::SyncAction, *}; use crate::{ api_server::web3::tests::spawn_http_server, + consensus::testonly::MockMainNodeClient, genesis::{ensure_genesis_state, GenesisParams}, state_keeper::{ - tests::{create_l1_batch_metadata, create_l2_transaction, TestBatchExecutorBuilder}, + seal_criteria::NoopSealer, tests::TestBatchExecutorBuilder, MiniblockSealer, ZkSyncStateKeeper, }, + utils::testonly::{create_l1_batch_metadata, create_l2_transaction}, }; const TEST_TIMEOUT: Duration = Duration::from_secs(10); const POLL_INTERVAL: Duration = Duration::from_millis(50); - -#[derive(Debug, Default)] -struct MockMainNodeClient { - l2_blocks: Vec, -} - -impl MockMainNodeClient { - /// `miniblock_count` doesn't include a fictive miniblock. Returns hashes of generated transactions. 
- fn push_l1_batch(&mut self, miniblock_count: u32) -> Vec { - let l1_batch_number = self - .l2_blocks - .last() - .map_or(L1BatchNumber(0), |block| block.l1_batch_number + 1); - let number_offset = self.l2_blocks.len() as u32; - - let mut tx_hashes = vec![]; - let l2_blocks = (0..=miniblock_count).map(|number| { - let is_fictive = number == miniblock_count; - let transactions = if is_fictive { - vec![] - } else { - let transaction = create_l2_transaction(10, 100); - tx_hashes.push(transaction.hash()); - vec![transaction.into()] - }; - let number = number + number_offset; - - api::en::SyncBlock { - number: MiniblockNumber(number), - l1_batch_number, - last_in_batch: is_fictive, - timestamp: number.into(), - root_hash: Some(H256::repeat_byte(1)), - l1_gas_price: 2, - l2_fair_gas_price: 3, - base_system_contracts_hashes: BaseSystemContractsHashes::default(), - operator_address: Address::repeat_byte(2), - transactions: Some(transactions), - virtual_blocks: Some(!is_fictive as u32), - hash: Some(H256::repeat_byte(1)), - protocol_version: ProtocolVersionId::latest(), - consensus: None, - } - }); - - self.l2_blocks.extend(l2_blocks); - tx_hashes - } -} - -#[async_trait] -impl MainNodeClient for MockMainNodeClient { - async fn fetch_system_contract_by_hash( - &self, - _hash: H256, - ) -> anyhow::Result { - anyhow::bail!("Not implemented"); - } - - async fn fetch_genesis_contract_bytecode( - &self, - _address: Address, - ) -> anyhow::Result>> { - anyhow::bail!("Not implemented"); - } - - async fn fetch_protocol_version( - &self, - _protocol_version: ProtocolVersionId, - ) -> anyhow::Result { - anyhow::bail!("Not implemented"); - } - - async fn fetch_genesis_l1_batch_hash(&self) -> anyhow::Result { - anyhow::bail!("Not implemented"); - } - - async fn fetch_l2_block_number(&self) -> anyhow::Result { - if let Some(number) = self.l2_blocks.len().checked_sub(1) { - Ok(MiniblockNumber(number as u32)) - } else { - anyhow::bail!("Not implemented"); - } - } - - async fn fetch_l2_block( - &self, - number: MiniblockNumber, - with_transactions: bool, - ) -> anyhow::Result> { - let Some(mut block) = self.l2_blocks.get(number.0 as usize).cloned() else { - return Ok(None); - }; - if !with_transactions { - block.transactions = None; - } - Ok(Some(block)) - } -} +pub const OPERATOR_ADDRESS: Address = Address::repeat_byte(1); fn open_l1_batch(number: u32, timestamp: u64, first_miniblock_number: u32) -> SyncAction { SyncAction::OpenBatch { @@ -134,10 +36,10 @@ fn open_l1_batch(number: u32, timestamp: u64, first_miniblock_number: u32) -> Sy timestamp, l1_gas_price: 2, l2_fair_gas_price: 3, - operator_address: Default::default(), + fair_pubdata_price: Some(4), + operator_address: OPERATOR_ADDRESS, protocol_version: ProtocolVersionId::latest(), first_miniblock_info: (MiniblockNumber(first_miniblock_number), 1), - prev_miniblock_hash: H256::default(), } } @@ -157,12 +59,16 @@ impl StateKeeperHandles { ensure_genesis(&mut pool.access_storage().await.unwrap()).await; let sync_state = SyncState::new(); + let (miniblock_sealer, miniblock_sealer_handle) = MiniblockSealer::new(pool.clone(), 5); + tokio::spawn(miniblock_sealer.run()); + let io = ExternalIO::new( + miniblock_sealer_handle, pool, actions, sync_state.clone(), Box::::default(), - Address::repeat_byte(1), + OPERATOR_ADDRESS, u32::MAX, L2ChainId::default(), ) @@ -174,10 +80,11 @@ impl StateKeeperHandles { batch_executor_base.push_successful_transactions(tx_hashes_in_l1_batch); } - let state_keeper = ZkSyncStateKeeper::without_sealer( + let state_keeper = 
ZkSyncStateKeeper::new( stop_receiver, Box::new(io), Box::new(batch_executor_base), + Box::new(NoopSealer), ); Self { stop_sender, @@ -240,7 +147,7 @@ async fn external_io_basics() { let tx = create_l2_transaction(10, 100); let tx_hash = tx.hash(); let tx = SyncAction::Tx(Box::new(tx.into())); - let actions = vec![open_l1_batch, tx, SyncAction::SealMiniblock(None)]; + let actions = vec![open_l1_batch, tx, SyncAction::SealMiniblock]; let (actions_sender, action_queue) = ActionQueue::new(); let state_keeper = @@ -260,8 +167,15 @@ async fn external_io_basics() { .unwrap() .expect("Miniblock #1 is not persisted"); assert_eq!(miniblock.timestamp, 1); - assert_eq!(miniblock.l1_gas_price, 2); - assert_eq!(miniblock.l2_fair_gas_price, 3); + + let expected_fee_input = + BatchFeeInput::PubdataIndependent(PubdataIndependentBatchFeeModelInput { + fair_l2_gas_price: 3, + fair_pubdata_price: 4, + l1_gas_price: 2, + }); + + assert_eq!(miniblock.batch_fee_input, expected_fee_input); assert_eq!(miniblock.l1_tx_count, 0); assert_eq!(miniblock.l2_tx_count, 1); @@ -271,7 +185,7 @@ async fn external_io_basics() { .await .unwrap() .expect("Transaction not persisted"); - assert_eq!(tx_receipt.block_number, Some(1.into())); + assert_eq!(tx_receipt.block_number, 1.into()); assert_eq!(tx_receipt.transaction_index, 0.into()); } @@ -283,7 +197,7 @@ pub(super) async fn run_state_keeper_with_multiple_miniblocks(pool: ConnectionPo }); let first_miniblock_actions: Vec<_> = iter::once(open_l1_batch) .chain(txs) - .chain([SyncAction::SealMiniblock(None)]) + .chain([SyncAction::SealMiniblock]) .collect(); let open_miniblock = SyncAction::Miniblock { @@ -297,7 +211,7 @@ pub(super) async fn run_state_keeper_with_multiple_miniblocks(pool: ConnectionPo }); let second_miniblock_actions: Vec<_> = iter::once(open_miniblock) .chain(more_txs) - .chain([SyncAction::SealMiniblock(None)]) + .chain([SyncAction::SealMiniblock]) .collect(); let tx_hashes = extract_tx_hashes( @@ -337,7 +251,7 @@ async fn external_io_with_multiple_miniblocks() { let sync_block = storage .sync_dal() - .sync_block(MiniblockNumber(number), Address::repeat_byte(1), true) + .sync_block(MiniblockNumber(number), OPERATOR_ADDRESS, true) .await .unwrap() .unwrap_or_else(|| panic!("Sync block #{} is not persisted", number)); @@ -371,7 +285,7 @@ async fn test_external_io_recovery(pool: ConnectionPool, mut tx_hashes: Vec { + SyncAction::SealMiniblock => { assert_eq!(tx_count_in_miniblock, 1); } } @@ -587,8 +498,13 @@ async fn fetcher_with_real_server() { // Start the API server. 
     let network_config = NetworkConfig::for_tests();
     let (stop_sender, stop_receiver) = watch::channel(false);
-    let server_handles =
-        spawn_http_server(&network_config, pool.clone(), stop_receiver.clone()).await;
+    let server_handles = spawn_http_server(
+        &network_config,
+        pool.clone(),
+        Default::default(),
+        stop_receiver.clone(),
+    )
+    .await;
     server_handles.wait_until_ready().await;
     let server_addr = &server_handles.local_addr;
@@ -640,7 +556,7 @@ async fn fetcher_with_real_server() {
                 assert_eq!(tx.hash(), tx_hashes.pop_front().unwrap());
                 tx_count_in_miniblock += 1;
             }
-            SyncAction::SealMiniblock(_) => {
+            SyncAction::SealMiniblock => {
                 assert_eq!(
                     tx_count_in_miniblock,
                     miniblock_number_to_tx_count[&current_miniblock_number]
diff --git a/core/lib/zksync_core/src/temp_config_store.rs b/core/lib/zksync_core/src/temp_config_store.rs
index d1cd9f32670..cfa9ceed379 100644
--- a/core/lib/zksync_core/src/temp_config_store.rs
+++ b/core/lib/zksync_core/src/temp_config_store.rs
@@ -8,13 +8,13 @@ use zksync_config::{
         fri_prover_group::FriProverGroupConfig, house_keeper::HouseKeeperConfig,
         FriProofCompressorConfig, FriProverConfig, FriWitnessGeneratorConfig, PrometheusConfig,
-        ProofDataHandlerConfig, ProverGroupConfig, WitnessGeneratorConfig,
+        ProofDataHandlerConfig, WitnessGeneratorConfig,
     },
     ApiConfig, ContractsConfig, DBConfig, ETHClientConfig, ETHSenderConfig, ETHWatchConfig,
-    FetcherConfig, GasAdjusterConfig, ObjectStoreConfig, PostgresConfig, ProverConfigs,
+    GasAdjusterConfig, ObjectStoreConfig, PostgresConfig,
 };
 
-// TODO (QIT-22): This structure is going to be removed when components will be respnsible for their own configs.
+// TODO (QIT-22): This structure is going to be removed when components will be responsible for their own configs.
 /// A temporary config store allowing to pass deserialized configs from `zksync_server` to `zksync_core`.
 /// All the configs are optional, since for some component combination it is not needed to pass all the configs.
 #[derive(Debug)]
@@ -35,7 +35,6 @@ pub struct TempConfigStore {
     pub fri_witness_generator_config: Option<FriWitnessGeneratorConfig>,
     pub prometheus_config: Option<PrometheusConfig>,
     pub proof_data_handler_config: Option<ProofDataHandlerConfig>,
-    pub prover_group_config: Option<ProverGroupConfig>,
     pub witness_generator_config: Option<WitnessGeneratorConfig>,
     pub api_config: Option<ApiConfig>,
     pub contracts_config: Option<ContractsConfig>,
@@ -43,8 +42,6 @@ pub struct TempConfigStore {
     pub eth_client_config: Option<ETHClientConfig>,
     pub eth_sender_config: Option<ETHSenderConfig>,
     pub eth_watch_config: Option<ETHWatchConfig>,
-    pub fetcher_config: Option<FetcherConfig>,
     pub gas_adjuster_config: Option<GasAdjusterConfig>,
-    pub prover_configs: Option<ProverConfigs>,
     pub object_store_config: Option<ObjectStoreConfig>,
 }
diff --git a/core/lib/zksync_core/src/utils/mod.rs b/core/lib/zksync_core/src/utils/mod.rs
new file mode 100644
index 00000000000..7d919d31f88
--- /dev/null
+++ b/core/lib/zksync_core/src/utils/mod.rs
@@ -0,0 +1,132 @@
+//! Miscellaneous utils used by multiple components.
+
+use std::time::Duration;
+
+use anyhow::Context as _;
+use tokio::sync::watch;
+use zksync_dal::{ConnectionPool, StorageProcessor};
+use zksync_types::L1BatchNumber;
+
+#[cfg(test)]
+pub(crate) mod testonly;
+
+/// Repeatedly polls the DB until there is an L1 batch. We may not have such a batch initially
+/// if the DB is recovered from an application-level snapshot.
+///
+/// Returns the number of the *earliest* L1 batch, or `None` if the stop signal is received.
+pub(crate) async fn wait_for_l1_batch(
+    pool: &ConnectionPool,
+    poll_interval: Duration,
+    stop_receiver: &mut watch::Receiver<bool>,
+) -> anyhow::Result<Option<L1BatchNumber>> {
+    loop {
+        if *stop_receiver.borrow() {
+            return Ok(None);
+        }
+
+        let mut storage = pool.access_storage().await?;
+        let sealed_l1_batch_number = storage.blocks_dal().get_earliest_l1_batch_number().await?;
+        drop(storage);
+
+        if let Some(number) = sealed_l1_batch_number {
+            return Ok(Some(number));
+        }
+        tracing::debug!("No L1 batches are present in DB; trying again in {poll_interval:?}");
+
+        // We don't check the result: if a stop signal is received, we'll return at the start
+        // of the next iteration.
+        tokio::time::timeout(poll_interval, stop_receiver.changed())
+            .await
+            .ok();
+    }
+}
+
+/// Repeatedly polls the DB until there is an L1 batch with metadata. We may not have such a batch initially
+/// if the DB is recovered from an application-level snapshot.
+///
+/// Returns the number of the *earliest* L1 batch with metadata, or `None` if the stop signal is received.
+pub(crate) async fn wait_for_l1_batch_with_metadata(
+    pool: &ConnectionPool,
+    poll_interval: Duration,
+    stop_receiver: &mut watch::Receiver<bool>,
+) -> anyhow::Result<Option<L1BatchNumber>> {
+    loop {
+        if *stop_receiver.borrow() {
+            return Ok(None);
+        }
+
+        let mut storage = pool.access_storage().await?;
+        let sealed_l1_batch_number = storage
+            .blocks_dal()
+            .get_earliest_l1_batch_number_with_metadata()
+            .await?;
+        drop(storage);
+
+        if let Some(number) = sealed_l1_batch_number {
+            return Ok(Some(number));
+        }
+        tracing::debug!(
+            "No L1 batches with metadata are present in DB; trying again in {poll_interval:?}"
+        );
+        tokio::time::timeout(poll_interval, stop_receiver.changed())
+            .await
+            .ok();
+    }
+}
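Editorial aside: a hedged usage sketch for `wait_for_l1_batch` above; the surrounding component loop and variable names are illustrative assumptions, not taken from the patch:

```rust
// Sketch only: a component that must not start before the first L1 batch exists,
// e.g. right after snapshot recovery.
let Some(earliest_batch) =
    wait_for_l1_batch(&pool, Duration::from_secs(1), &mut stop_receiver).await?
else {
    return Ok(()); // stop signal received before any L1 batch appeared
};
tracing::info!("First L1 batch is #{earliest_batch}; starting component");
```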
+
+/// Returns the projected number of the first locally available L1 batch. The L1 batch is **not**
+/// guaranteed to be present in the storage!
+pub(crate) async fn projected_first_l1_batch(
+    storage: &mut StorageProcessor<'_>,
+) -> anyhow::Result<L1BatchNumber> {
+    let snapshot_recovery = storage
+        .snapshot_recovery_dal()
+        .get_applied_snapshot_status()
+        .await
+        .context("failed getting snapshot recovery status")?;
+    Ok(snapshot_recovery.map_or(L1BatchNumber(0), |recovery| recovery.l1_batch_number + 1))
+}
+
+#[cfg(test)]
+mod tests {
+    use zksync_types::L2ChainId;
+
+    use super::*;
+    use crate::genesis::{ensure_genesis_state, GenesisParams};
+
+    #[tokio::test]
+    async fn waiting_for_l1_batch_success() {
+        let pool = ConnectionPool::test_pool().await;
+        let (_stop_sender, mut stop_receiver) = watch::channel(false);
+
+        let pool_copy = pool.clone();
+        tokio::spawn(async move {
+            tokio::time::sleep(Duration::from_millis(25)).await;
+            let mut storage = pool_copy.access_storage().await.unwrap();
+            ensure_genesis_state(&mut storage, L2ChainId::default(), &GenesisParams::mock())
+                .await
+                .unwrap();
+        });
+
+        let l1_batch = wait_for_l1_batch(&pool, Duration::from_millis(10), &mut stop_receiver)
+            .await
+            .unwrap();
+        assert_eq!(l1_batch, Some(L1BatchNumber(0)));
+    }
+
+    #[tokio::test]
+    async fn waiting_for_l1_batch_cancellation() {
+        let pool = ConnectionPool::test_pool().await;
+        let (stop_sender, mut stop_receiver) = watch::channel(false);
+
+        tokio::spawn(async move {
+            tokio::time::sleep(Duration::from_millis(25)).await;
+            stop_sender.send_replace(true);
+        });
+
+        let l1_batch = wait_for_l1_batch(&pool, Duration::from_secs(30), &mut stop_receiver)
+            .await
+            .unwrap();
+        assert_eq!(l1_batch, None);
+    }
+}
diff --git a/core/lib/zksync_core/src/utils/testonly.rs b/core/lib/zksync_core/src/utils/testonly.rs
new file mode 100644
index 00000000000..e6883f2585e
--- /dev/null
+++ b/core/lib/zksync_core/src/utils/testonly.rs
@@ -0,0 +1,206 @@
+//! Test utils.
+
+use multivm::utils::get_max_gas_per_pubdata_byte;
+use zksync_contracts::BaseSystemContractsHashes;
+use zksync_dal::StorageProcessor;
+use zksync_merkle_tree::{domain::ZkSyncTree, TreeInstruction};
+use zksync_system_constants::ZKPORTER_IS_AVAILABLE;
+use zksync_types::{
+    block::{L1BatchHeader, MiniblockHeader},
+    commitment::{L1BatchMetaParameters, L1BatchMetadata},
+    fee::Fee,
+    fee_model::BatchFeeInput,
+    l2::L2Tx,
+    snapshots::SnapshotRecoveryStatus,
+    transaction_request::PaymasterParams,
+    Address, L1BatchNumber, L2ChainId, MiniblockNumber, Nonce, ProtocolVersion, ProtocolVersionId,
+    StorageLog, H256, U256,
+};
+
+use crate::l1_gas_price::L1GasPriceProvider;
+
+/// Creates a miniblock header with the specified number and deterministic contents.
+pub(crate) fn create_miniblock(number: u32) -> MiniblockHeader {
+    MiniblockHeader {
+        number: MiniblockNumber(number),
+        timestamp: number.into(),
+        hash: H256::from_low_u64_be(number.into()),
+        l1_tx_count: 0,
+        l2_tx_count: 0,
+        base_fee_per_gas: 100,
+        batch_fee_input: BatchFeeInput::l1_pegged(100, 100),
+        gas_per_pubdata_limit: get_max_gas_per_pubdata_byte(ProtocolVersionId::latest().into()),
+        base_system_contracts_hashes: BaseSystemContractsHashes::default(),
+        protocol_version: Some(ProtocolVersionId::latest()),
+        virtual_blocks: 1,
+    }
+}
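+
+// Editor's illustrative sketch (not part of the original patch): the helpers here
+// are deterministic in `number`, so repeated calls produce identical fixtures. The
+// test below is hypothetical.
+#[test]
+fn miniblock_fixture_is_deterministic() {
+    let miniblock = create_miniblock(7);
+    assert_eq!(miniblock.number, MiniblockNumber(7));
+    assert_eq!(miniblock.hash, H256::from_low_u64_be(7));
+    // Calling the helper again yields the same header contents.
+    assert_eq!(create_miniblock(7).hash, miniblock.hash);
+}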
+
+/// Creates an L1 batch header with the specified number and deterministic contents.
+pub(crate) fn create_l1_batch(number: u32) -> L1BatchHeader {
+    let mut header = L1BatchHeader::new(
+        L1BatchNumber(number),
+        number.into(),
+        Address::default(),
+        BaseSystemContractsHashes::default(),
+        ProtocolVersionId::latest(),
+    );
+    header.is_finished = true;
+    header
+}
+
+/// Creates metadata for an L1 batch with the specified number.
+pub(crate) fn create_l1_batch_metadata(number: u32) -> L1BatchMetadata {
+    L1BatchMetadata {
+        root_hash: H256::from_low_u64_be(number.into()),
+        rollup_last_leaf_index: u64::from(number) + 20,
+        merkle_root_hash: H256::from_low_u64_be(number.into()),
+        initial_writes_compressed: vec![],
+        repeated_writes_compressed: vec![],
+        commitment: H256::from_low_u64_be(number.into()),
+        l2_l1_messages_compressed: vec![],
+        l2_l1_merkle_root: H256::from_low_u64_be(number.into()),
+        block_meta_params: L1BatchMetaParameters {
+            zkporter_is_available: ZKPORTER_IS_AVAILABLE,
+            bootloader_code_hash: BaseSystemContractsHashes::default().bootloader,
+            default_aa_code_hash: BaseSystemContractsHashes::default().default_aa,
+        },
+        aux_data_hash: H256::zero(),
+        meta_parameters_hash: H256::zero(),
+        pass_through_data_hash: H256::zero(),
+        events_queue_commitment: Some(H256::zero()),
+        bootloader_initial_content_commitment: Some(H256::zero()),
+        state_diffs_compressed: vec![],
+    }
+}
+
+/// Creates an L2 transaction with randomized parameters.
+pub(crate) fn create_l2_transaction(fee_per_gas: u64, gas_per_pubdata: u32) -> L2Tx {
+    let fee = Fee {
+        gas_limit: 1000_u64.into(),
+        max_fee_per_gas: fee_per_gas.into(),
+        max_priority_fee_per_gas: 0_u64.into(),
+        gas_per_pubdata_limit: gas_per_pubdata.into(),
+    };
+    let mut tx = L2Tx::new_signed(
+        Address::random(),
+        vec![],
+        Nonce(0),
+        fee,
+        U256::zero(),
+        L2ChainId::from(271),
+        &H256::random(),
+        None,
+        PaymasterParams::default(),
+    )
+    .unwrap();
+    // Input means all transaction data (NOT calldata, but all tx fields) that came from the API.
+    // This input will be used for the derivation of the tx hash, so put some random data into it
+    // to ensure that the transaction hash is unique.
+    tx.set_input(H256::random().0.to_vec(), H256::random());
+    tx
+}
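+
+// Editor's illustrative sketch (not part of the original patch): because the input
+// is randomized, two otherwise identical transactions get distinct hashes. The test
+// below is hypothetical.
+#[test]
+fn created_transactions_have_unique_hashes() {
+    let tx1 = create_l2_transaction(100, 200);
+    let tx2 = create_l2_transaction(100, 200);
+    assert_ne!(tx1.hash(), tx2.hash());
+}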
+
+/// Prepares a recovery snapshot without performing genesis.
+pub(crate) async fn prepare_recovery_snapshot(
+    storage: &mut StorageProcessor<'_>,
+    l1_batch_number: u32,
+    snapshot_logs: &[StorageLog],
+) -> SnapshotRecoveryStatus {
+    let mut storage = storage.start_transaction().await.unwrap();
+
+    let written_keys: Vec<_> = snapshot_logs.iter().map(|log| log.key).collect();
+    let tree_instructions: Vec<_> = snapshot_logs
+        .iter()
+        .enumerate()
+        .map(|(i, log)| TreeInstruction::write(log.key, i as u64 + 1, log.value))
+        .collect();
+    let l1_batch_root_hash = ZkSyncTree::process_genesis_batch(&tree_instructions).root_hash;
+
+    storage
+        .protocol_versions_dal()
+        .save_protocol_version_with_tx(ProtocolVersion::default())
+        .await;
+    // TODO (PLA-596): Don't insert L1 batches / miniblocks once the relevant foreign keys are removed
+    let miniblock = create_miniblock(l1_batch_number);
+    storage
+        .blocks_dal()
+        .insert_miniblock(&miniblock)
+        .await
+        .unwrap();
+    let l1_batch = create_l1_batch(l1_batch_number);
+    storage
+        .blocks_dal()
+        .insert_l1_batch(&l1_batch, &[], Default::default(), &[], &[], 0)
+        .await
+        .unwrap();
+
+    storage
+        .storage_logs_dedup_dal()
+        .insert_initial_writes(l1_batch.number, &written_keys)
+        .await;
+    storage
+        .storage_logs_dal()
+        .insert_storage_logs(miniblock.number, &[(H256::zero(), snapshot_logs.to_vec())])
+        .await;
+    storage
+        .storage_dal()
+        .apply_storage_logs(&[(H256::zero(), snapshot_logs.to_vec())])
+        .await;
+
+    let snapshot_recovery = SnapshotRecoveryStatus {
+        l1_batch_number: l1_batch.number,
+        l1_batch_root_hash,
+        miniblock_number: miniblock.number,
+        miniblock_root_hash: H256::zero(), // not used
+        last_finished_chunk_id: None,
+        total_chunk_count: 100,
+    };
+    storage
+        .snapshot_recovery_dal()
+        .set_applied_snapshot_status(&snapshot_recovery)
+        .await
+        .unwrap();
+    storage.commit().await.unwrap();
+    snapshot_recovery
+}
+
+// TODO (PLA-596): Replace with `prepare_recovery_snapshot(.., &[])`
+pub(crate) async fn prepare_empty_recovery_snapshot(
+    storage: &mut StorageProcessor<'_>,
+    l1_batch_number: u32,
+) -> SnapshotRecoveryStatus {
+    storage
+        .protocol_versions_dal()
+        .save_protocol_version_with_tx(ProtocolVersion::default())
+        .await;
+
+    let snapshot_recovery = SnapshotRecoveryStatus {
+        l1_batch_number: l1_batch_number.into(),
+        l1_batch_root_hash: H256::zero(),
+        miniblock_number: l1_batch_number.into(),
+        miniblock_root_hash: H256::zero(), // not used
+        last_finished_chunk_id: None,
+        total_chunk_count: 100,
+    };
+    storage
+        .snapshot_recovery_dal()
+        .set_applied_snapshot_status(&snapshot_recovery)
+        .await
+        .unwrap();
+    snapshot_recovery
+}
+
+/// Mock [`L1GasPriceProvider`] that returns a constant value.
+#[derive(Debug)] +pub(crate) struct MockL1GasPriceProvider(pub u64); + +impl L1GasPriceProvider for MockL1GasPriceProvider { + fn estimate_effective_gas_price(&self) -> u64 { + self.0 + } + + fn estimate_effective_pubdata_price(&self) -> u64 { + self.0 * u64::from(zksync_system_constants::L1_GAS_PER_PUBDATA_BYTE) + } +} diff --git a/core/lib/zksync_core/src/witness_generator/basic_circuits.rs b/core/lib/zksync_core/src/witness_generator/basic_circuits.rs deleted file mode 100644 index c700d59120a..00000000000 --- a/core/lib/zksync_core/src/witness_generator/basic_circuits.rs +++ /dev/null @@ -1,645 +0,0 @@ -use async_trait::async_trait; -use rand::Rng; -use serde::{Deserialize, Serialize}; - -use std::{ - collections::{hash_map::DefaultHasher, HashMap, HashSet}, - hash::{Hash, Hasher}, - sync::Arc, - time::Instant, -}; - -use multivm::vm_latest::{ - constants::MAX_CYCLES_FOR_TX, HistoryDisabled, SimpleMemory, StorageOracle as VmStorageOracle, -}; -use zksync_config::configs::{ - witness_generator::BasicWitnessGeneratorDataSource, WitnessGeneratorConfig, -}; -use zksync_dal::ConnectionPool; -use zksync_object_store::{Bucket, ObjectStore, ObjectStoreFactory, StoredObject}; -use zksync_queued_job_processor::JobProcessor; -use zksync_state::{PostgresStorage, ReadStorage, ShadowStorage, StorageView, WitnessStorage}; -use zksync_system_constants::BOOTLOADER_ADDRESS; -use zksync_types::{ - circuit::GEOMETRY_CONFIG, - proofs::{AggregationRound, BasicCircuitWitnessGeneratorInput, PrepareBasicCircuitsJob}, - zkevm_test_harness::toolset::GeometryConfig, - zkevm_test_harness::{ - abstract_zksync_circuit::concrete_circuits::ZkSyncCircuit, - bellman::bn256::Bn256, - witness::full_block_artifact::{BlockBasicCircuits, BlockBasicCircuitsPublicInputs}, - witness::oracle::VmWitnessOracle, - SchedulerCircuitInstanceWitness, - }, - Address, L1BatchNumber, ProtocolVersionId, H256, U256, USED_BOOTLOADER_MEMORY_BYTES, -}; -use zksync_utils::{bytes_to_chunks, expand_memory_contents, h256_to_u256, u256_to_h256}; - -use super::{ - precalculated_merkle_paths_provider::PrecalculatedMerklePathsProvider, - storage_oracle::StorageOracle, utils::save_prover_input_artifacts, METRICS, -}; - -pub struct BasicCircuitArtifacts { - basic_circuits: BlockBasicCircuits, - basic_circuits_inputs: BlockBasicCircuitsPublicInputs, - scheduler_witness: SchedulerCircuitInstanceWitness, - circuits: Vec>>, -} - -#[derive(Debug)] -struct BlobUrls { - basic_circuits_url: String, - basic_circuits_inputs_url: String, - scheduler_witness_url: String, - circuit_types_and_urls: Vec<(&'static str, String)>, -} - -#[derive(Clone)] -pub struct BasicWitnessGeneratorJob { - block_number: L1BatchNumber, - job: PrepareBasicCircuitsJob, -} - -#[derive(Debug)] -pub struct BasicWitnessGenerator { - config: WitnessGeneratorConfig, - object_store: Arc, - protocol_versions: Vec, - connection_pool: ConnectionPool, - prover_connection_pool: ConnectionPool, -} - -impl BasicWitnessGenerator { - pub async fn new( - config: WitnessGeneratorConfig, - store_factory: &ObjectStoreFactory, - protocol_versions: Vec, - connection_pool: ConnectionPool, - prover_connection_pool: ConnectionPool, - ) -> Self { - Self { - config, - object_store: store_factory.create_store().await.into(), - protocol_versions, - connection_pool, - prover_connection_pool, - } - } - - async fn process_job_impl( - config: WitnessGeneratorConfig, - object_store: Arc, - connection_pool: ConnectionPool, - prover_connection_pool: ConnectionPool, - basic_job: BasicWitnessGeneratorJob, - started_at: 
Instant, - ) -> anyhow::Result> { - let BasicWitnessGeneratorJob { block_number, job } = basic_job; - - if let Some(blocks_proving_percentage) = config.blocks_proving_percentage { - // Generate random number in (0; 100). - let threshold = rand::thread_rng().gen_range(1..100); - // We get value higher than `blocks_proving_percentage` with prob = `1 - blocks_proving_percentage`. - // In this case job should be skipped. - if threshold > blocks_proving_percentage { - METRICS.skipped_blocks.inc(); - tracing::info!( - "Skipping witness generation for block {}, blocks_proving_percentage: {}", - block_number.0, - blocks_proving_percentage - ); - let mut storage = connection_pool.access_storage().await.unwrap(); - storage - .blocks_dal() - .set_skip_proof_for_l1_batch(block_number) - .await - .unwrap(); - let mut prover_storage = prover_connection_pool.access_storage().await.unwrap(); - prover_storage - .witness_generator_dal() - .mark_witness_job_as_skipped(block_number, AggregationRound::BasicCircuits) - .await; - return Ok(None); - } - } - - METRICS.sampled_blocks.inc(); - tracing::info!( - "Starting witness generation of type {:?} for block {}", - AggregationRound::BasicCircuits, - block_number.0 - ); - - Ok(Some( - process_basic_circuits_job( - object_store, - config, - connection_pool, - started_at, - block_number, - job, - ) - .await, - )) - } -} - -#[async_trait] -impl JobProcessor for BasicWitnessGenerator { - type Job = BasicWitnessGeneratorJob; - type JobId = L1BatchNumber; - // The artifact is optional to support skipping blocks when sampling is enabled. - type JobArtifacts = Option; - - const SERVICE_NAME: &'static str = "basic_circuit_witness_generator"; - - async fn get_next_job(&self) -> anyhow::Result> { - let mut prover_connection = self.prover_connection_pool.access_storage().await.unwrap(); - let last_l1_batch_to_process = self.config.last_l1_batch_to_process(); - - Ok( - match prover_connection - .witness_generator_dal() - .get_next_basic_circuit_witness_job( - self.config.witness_generation_timeout(), - self.config.max_attempts, - last_l1_batch_to_process, - &self.protocol_versions, - ) - .await - { - Some(metadata) => { - let job = get_artifacts(metadata.block_number, &self.object_store).await; - Some((job.block_number, job)) - } - None => None, - }, - ) - } - - async fn save_failure(&self, job_id: L1BatchNumber, started_at: Instant, error: String) -> () { - let attempts = self - .prover_connection_pool - .access_storage() - .await - .unwrap() - .witness_generator_dal() - .mark_witness_job_as_failed( - AggregationRound::BasicCircuits, - job_id, - started_at.elapsed(), - error, - ) - .await; - - if attempts >= self.config.max_attempts { - self.connection_pool - .access_storage() - .await - .unwrap() - .blocks_dal() - .set_skip_proof_for_l1_batch(job_id) - .await - .unwrap(); - } - } - - #[allow(clippy::async_yields_async)] - async fn process_job( - &self, - job: BasicWitnessGeneratorJob, - started_at: Instant, - ) -> tokio::task::JoinHandle>> { - let object_store = Arc::clone(&self.object_store); - let config = self.config.clone(); - tokio::spawn(Self::process_job_impl( - config, - object_store, - self.connection_pool.clone(), - self.prover_connection_pool.clone(), - job, - started_at, - )) - } - - async fn save_result( - &self, - job_id: L1BatchNumber, - started_at: Instant, - optional_artifacts: Option, - ) -> anyhow::Result<()> { - match optional_artifacts { - None => (), - Some(artifacts) => { - let blob_urls = save_artifacts(job_id, artifacts, &self.object_store).await; - 
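-                // Editor's annotation: artifacts go to the object store first; only the
-                // resulting blob URLs are persisted to Postgres (in `update_database`
-                // below, within a single DB transaction). A crash between the two steps
-                // can orphan blobs, but never leaves DB rows pointing at missing data.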
update_database(&self.prover_connection_pool, started_at, job_id, blob_urls).await; - } - } - Ok(()) - } - - fn max_attempts(&self) -> u32 { - self.config.max_attempts - } - - async fn get_job_attempts(&self, _job_id: &Self::JobId) -> anyhow::Result { - // Witness generator will be removed soon in favor of FRI one, so returning blank value. - Ok(1) - } -} - -pub async fn process_basic_circuits_job( - object_store: Arc, - config: WitnessGeneratorConfig, - connection_pool: ConnectionPool, - started_at: Instant, - block_number: L1BatchNumber, - job: PrepareBasicCircuitsJob, -) -> BasicCircuitArtifacts { - let witness_gen_input = - build_basic_circuits_witness_generator_input(connection_pool.clone(), job, block_number) - .await; - let (basic_circuits, basic_circuits_inputs, scheduler_witness) = - generate_witness(object_store, config, connection_pool, witness_gen_input).await; - let circuits = basic_circuits.clone().into_flattened_set(); - - tracing::info!( - "Witness generation for block {} is complete in {:?}. Number of circuits: {}", - block_number.0, - started_at.elapsed(), - circuits.len() - ); - - BasicCircuitArtifacts { - basic_circuits, - basic_circuits_inputs, - scheduler_witness, - circuits, - } -} - -async fn update_database( - prover_connection_pool: &ConnectionPool, - started_at: Instant, - block_number: L1BatchNumber, - blob_urls: BlobUrls, -) { - let mut prover_connection = prover_connection_pool.access_storage().await.unwrap(); - let mut transaction = prover_connection.start_transaction().await.unwrap(); - let protocol_version = transaction - .witness_generator_dal() - .protocol_version_for_l1_batch(block_number) - .await - .unwrap_or_else(|| { - panic!( - "No system version exist for l1 batch {} for basic circuits", - block_number.0 - ) - }); - transaction - .witness_generator_dal() - .create_aggregation_jobs( - block_number, - &blob_urls.basic_circuits_url, - &blob_urls.basic_circuits_inputs_url, - blob_urls.circuit_types_and_urls.len(), - &blob_urls.scheduler_witness_url, - protocol_version, - ) - .await; - transaction - .prover_dal() - .insert_prover_jobs( - block_number, - blob_urls.circuit_types_and_urls, - AggregationRound::BasicCircuits, - protocol_version, - ) - .await; - transaction - .witness_generator_dal() - .mark_witness_job_as_successful( - block_number, - AggregationRound::BasicCircuits, - started_at.elapsed(), - ) - .await; - - transaction.commit().await.unwrap(); - METRICS.processing_time[&AggregationRound::BasicCircuits.into()].observe(started_at.elapsed()); -} - -async fn get_artifacts( - block_number: L1BatchNumber, - object_store: &dyn ObjectStore, -) -> BasicWitnessGeneratorJob { - let job = object_store.get(block_number).await.unwrap(); - BasicWitnessGeneratorJob { block_number, job } -} - -async fn save_artifacts( - block_number: L1BatchNumber, - artifacts: BasicCircuitArtifacts, - object_store: &dyn ObjectStore, -) -> BlobUrls { - let basic_circuits_url = object_store - .put(block_number, &artifacts.basic_circuits) - .await - .unwrap(); - let basic_circuits_inputs_url = object_store - .put(block_number, &artifacts.basic_circuits_inputs) - .await - .unwrap(); - let scheduler_witness_url = object_store - .put(block_number, &artifacts.scheduler_witness) - .await - .unwrap(); - let circuit_types_and_urls = save_prover_input_artifacts( - block_number, - &artifacts.circuits, - object_store, - AggregationRound::BasicCircuits, - ) - .await; - BlobUrls { - basic_circuits_url, - basic_circuits_inputs_url, - scheduler_witness_url, - circuit_types_and_urls, - } -} 
- -// If making changes to this method, consider moving this logic to the DAL layer and make -// `PrepareBasicCircuitsJob` have all fields of `BasicCircuitWitnessGeneratorInput`. -pub async fn build_basic_circuits_witness_generator_input( - connection_pool: ConnectionPool, - witness_merkle_input: PrepareBasicCircuitsJob, - l1_batch_number: L1BatchNumber, -) -> BasicCircuitWitnessGeneratorInput { - let mut connection = connection_pool.access_storage().await.unwrap(); - let block_header = connection - .blocks_dal() - .get_l1_batch_header(l1_batch_number) - .await - .unwrap() - .unwrap(); - let initial_heap_content = connection - .blocks_dal() - .get_initial_bootloader_heap(l1_batch_number) - .await - .unwrap() - .unwrap(); - let (previous_block_hash, previous_block_timestamp) = connection - .blocks_dal() - .get_l1_batch_state_root_and_timestamp(l1_batch_number - 1) - .await - .unwrap() - .expect("cannot generate witness before the root hash is computed"); - BasicCircuitWitnessGeneratorInput { - block_number: l1_batch_number, - previous_block_timestamp, - previous_block_hash, - block_timestamp: block_header.timestamp, - used_bytecodes_hashes: block_header.used_contract_hashes, - initial_heap_content, - merkle_paths_input: witness_merkle_input, - } -} - -pub async fn generate_witness( - object_store: Arc, - config: WitnessGeneratorConfig, - connection_pool: ConnectionPool, - input: BasicCircuitWitnessGeneratorInput, -) -> ( - BlockBasicCircuits, - BlockBasicCircuitsPublicInputs, - SchedulerCircuitInstanceWitness, -) { - let mut connection = connection_pool.access_storage().await.unwrap(); - let header = connection - .blocks_dal() - .get_l1_batch_header(input.block_number) - .await - .unwrap() - .unwrap(); - let bootloader_code_bytes = connection - .storage_dal() - .get_factory_dep(header.base_system_contracts_hashes.bootloader) - .await - .expect("Bootloader bytecode should exist"); - let bootloader_code = bytes_to_chunks(&bootloader_code_bytes); - let account_bytecode_bytes = connection - .storage_dal() - .get_factory_dep(header.base_system_contracts_hashes.default_aa) - .await - .expect("Default aa bytecode should exist"); - let account_bytecode = bytes_to_chunks(&account_bytecode_bytes); - let bootloader_contents = - expand_memory_contents(&input.initial_heap_content, USED_BOOTLOADER_MEMORY_BYTES); - let account_code_hash = h256_to_u256(header.base_system_contracts_hashes.default_aa); - - let hashes: HashSet = input - .used_bytecodes_hashes - .iter() - // SMA-1555: remove this hack once updated to the latest version of zkevm_test_harness - .filter(|&&hash| hash != h256_to_u256(header.base_system_contracts_hashes.bootloader)) - .map(|hash| u256_to_h256(*hash)) - .collect(); - - let mut used_bytecodes = connection.storage_dal().get_factory_deps(&hashes).await; - if input.used_bytecodes_hashes.contains(&account_code_hash) { - used_bytecodes.insert(account_code_hash, account_bytecode); - } - - assert_eq!( - hashes.len(), - used_bytecodes.len(), - "{} factory deps are not found in DB", - hashes.len() - used_bytecodes.len() - ); - - // `DbStorageProvider` was designed to be used in API, so it accepts miniblock numbers. - // Probably, we should make it work with L1 batch numbers too. 
- let (_, last_miniblock_number) = connection - .blocks_dal() - .get_miniblock_range_of_l1_batch(input.block_number - 1) - .await - .unwrap() - .expect("L1 batch should contain at least one miniblock"); - let storage_refunds = connection - .blocks_dal() - .get_storage_refunds(input.block_number) - .await - .unwrap() - .unwrap(); - - drop(connection); - let rt_handle = tokio::runtime::Handle::current(); - - // The following part is CPU-heavy, so we move it to a separate thread. - tokio::task::spawn_blocking(move || { - // NOTE: this `match` will be moved higher up, as we need to load EVERYTHING from Blob, not just storage - // Until we can derive Storage from Merkle Paths, we'll have this version as testing ground. - let storage: Box = match config.data_source { - BasicWitnessGeneratorDataSource::FromPostgres => { - let connection = rt_handle - .block_on(connection_pool.access_storage()) - .unwrap(); - Box::new(PostgresStorage::new( - rt_handle.clone(), - connection, - last_miniblock_number, - true, - )) - } - BasicWitnessGeneratorDataSource::FromPostgresShadowBlob => { - let connection = rt_handle - .block_on(connection_pool.access_storage()) - .unwrap(); - let block_state = rt_handle.block_on(object_store.get(header.number)).unwrap(); - let source_storage = Box::new(PostgresStorage::new( - rt_handle.clone(), - connection, - last_miniblock_number, - true, - )); - let checked_storage = Box::new(WitnessStorage::new(block_state)); - Box::new(ShadowStorage::new( - source_storage, - checked_storage, - input.block_number, - )) - } - BasicWitnessGeneratorDataSource::FromBlob => { - let block_state = rt_handle.block_on(object_store.get(header.number)).unwrap(); - Box::new(WitnessStorage::new(block_state)) - } - }; - let mut tree = PrecalculatedMerklePathsProvider::new( - input.merkle_paths_input, - input.previous_block_hash.0, - ); - - let storage_view = StorageView::new(storage); - let storage_view = storage_view.to_rc_ptr(); - let vm_storage_oracle: VmStorageOracle>, HistoryDisabled> = - VmStorageOracle::new(storage_view); - let storage_oracle = StorageOracle::new(vm_storage_oracle, storage_refunds); - let memory: SimpleMemory = SimpleMemory::default(); - let mut hasher = DefaultHasher::new(); - GEOMETRY_CONFIG.hash(&mut hasher); - tracing::info!( - "generating witness for block {} using geometry config hash: {}", - input.block_number.0, - hasher.finish() - ); - - if config - .dump_arguments_for_blocks - .contains(&input.block_number.0) - { - rt_handle.block_on(save_run_with_fixed_params_args_to_gcs( - object_store, - input.block_number.0, - last_miniblock_number.0, - Address::zero(), - BOOTLOADER_ADDRESS, - bootloader_code.clone(), - bootloader_contents.clone(), - false, - account_code_hash, - used_bytecodes.clone(), - Vec::default(), - MAX_CYCLES_FOR_TX as usize, - GEOMETRY_CONFIG, - tree.clone(), - )); - } - - zksync_types::zkevm_test_harness::external_calls::run_with_fixed_params( - Address::zero(), - BOOTLOADER_ADDRESS, - bootloader_code, - bootloader_contents, - false, - account_code_hash, - used_bytecodes, - Vec::default(), - MAX_CYCLES_FOR_TX as usize, - GEOMETRY_CONFIG, - storage_oracle, - memory, - &mut tree, - ) - }) - .await - .unwrap() -} - -#[allow(clippy::too_many_arguments)] -async fn save_run_with_fixed_params_args_to_gcs( - object_store: Arc, - l1_batch_number: u32, - last_miniblock_number: u32, - caller: Address, - entry_point_address: Address, - entry_point_code: Vec<[u8; 32]>, - initial_heap_content: Vec, - zk_porter_is_available: bool, - default_aa_code_hash: U256, - 
used_bytecodes: HashMap>, - ram_verification_queries: Vec<(u32, U256)>, - cycle_limit: usize, - geometry: GeometryConfig, - tree: PrecalculatedMerklePathsProvider, -) { - let run_with_fixed_params_input = RunWithFixedParamsInput { - l1_batch_number, - last_miniblock_number, - caller, - entry_point_address, - entry_point_code, - initial_heap_content, - zk_porter_is_available, - default_aa_code_hash, - used_bytecodes, - ram_verification_queries, - cycle_limit, - geometry, - tree, - }; - object_store - .put(L1BatchNumber(l1_batch_number), &run_with_fixed_params_input) - .await - .unwrap(); -} - -#[derive(Debug, Serialize, Deserialize, Clone, PartialEq)] -pub struct RunWithFixedParamsInput { - pub l1_batch_number: u32, - pub last_miniblock_number: u32, - pub caller: Address, - pub entry_point_address: Address, - pub entry_point_code: Vec<[u8; 32]>, - pub initial_heap_content: Vec, - pub zk_porter_is_available: bool, - pub default_aa_code_hash: U256, - pub used_bytecodes: HashMap>, - pub ram_verification_queries: Vec<(u32, U256)>, - pub cycle_limit: usize, - pub geometry: GeometryConfig, - pub tree: PrecalculatedMerklePathsProvider, -} - -impl StoredObject for RunWithFixedParamsInput { - const BUCKET: Bucket = Bucket::WitnessInput; - type Key<'a> = L1BatchNumber; - - fn encode_key(key: Self::Key<'_>) -> String { - format!("run_with_fixed_params_input_{}.bin", key) - } - - zksync_object_store::serialize_using_bincode!(); -} diff --git a/core/lib/zksync_core/src/witness_generator/leaf_aggregation.rs b/core/lib/zksync_core/src/witness_generator/leaf_aggregation.rs deleted file mode 100644 index 4c9201b65f6..00000000000 --- a/core/lib/zksync_core/src/witness_generator/leaf_aggregation.rs +++ /dev/null @@ -1,340 +0,0 @@ -use async_trait::async_trait; - -use std::{collections::HashMap, time::Instant}; - -use zksync_config::configs::WitnessGeneratorConfig; -use zksync_dal::ConnectionPool; -use zksync_object_store::{ObjectStore, ObjectStoreFactory}; -use zksync_queued_job_processor::JobProcessor; -use zksync_types::{ - circuit::LEAF_SPLITTING_FACTOR, - proofs::{AggregationRound, PrepareLeafAggregationCircuitsJob, WitnessGeneratorJobMetadata}, - zkevm_test_harness::{ - abstract_zksync_circuit::concrete_circuits::ZkSyncCircuit, bellman::bn256::Bn256, - bellman::plonk::better_better_cs::setup::VerificationKey, - encodings::recursion_request::RecursionRequest, encodings::QueueSimulator, witness, - witness::oracle::VmWitnessOracle, LeafAggregationOutputDataWitness, - }, - L1BatchNumber, ProtocolVersionId, -}; -use zksync_verification_key_server::{ - get_ordered_vks_for_basic_circuits, get_vks_for_basic_circuits, get_vks_for_commitment, -}; - -use super::{utils::save_prover_input_artifacts, METRICS}; - -pub struct LeafAggregationArtifacts { - leaf_layer_subqueues: Vec, 2, 2>>, - aggregation_outputs: Vec>, - leaf_circuits: Vec>>, -} - -#[derive(Debug)] -struct BlobUrls { - leaf_layer_subqueues_url: String, - aggregation_outputs_url: String, - circuit_types_and_urls: Vec<(&'static str, String)>, -} - -#[derive(Clone)] -pub struct LeafAggregationWitnessGeneratorJob { - block_number: L1BatchNumber, - job: PrepareLeafAggregationCircuitsJob, -} - -#[derive(Debug)] -pub struct LeafAggregationWitnessGenerator { - config: WitnessGeneratorConfig, - object_store: Box, - protocol_versions: Vec, - connection_pool: ConnectionPool, - prover_connection_pool: ConnectionPool, -} - -impl LeafAggregationWitnessGenerator { - pub async fn new( - config: WitnessGeneratorConfig, - store_factory: &ObjectStoreFactory, - 
protocol_versions: Vec, - connection_pool: ConnectionPool, - prover_connection_pool: ConnectionPool, - ) -> Self { - Self { - config, - object_store: store_factory.create_store().await, - protocol_versions, - connection_pool, - prover_connection_pool, - } - } - - fn process_job_sync( - leaf_job: LeafAggregationWitnessGeneratorJob, - started_at: Instant, - ) -> LeafAggregationArtifacts { - let LeafAggregationWitnessGeneratorJob { block_number, job } = leaf_job; - - tracing::info!( - "Starting witness generation of type {:?} for block {}", - AggregationRound::LeafAggregation, - block_number.0 - ); - process_leaf_aggregation_job(started_at, block_number, job) - } -} - -#[async_trait] -impl JobProcessor for LeafAggregationWitnessGenerator { - type Job = LeafAggregationWitnessGeneratorJob; - type JobId = L1BatchNumber; - type JobArtifacts = LeafAggregationArtifacts; - - const SERVICE_NAME: &'static str = "leaf_aggregation_witness_generator"; - - async fn get_next_job(&self) -> anyhow::Result> { - let mut prover_connection = self.prover_connection_pool.access_storage().await.unwrap(); - let last_l1_batch_to_process = self.config.last_l1_batch_to_process(); - - Ok( - match prover_connection - .witness_generator_dal() - .get_next_leaf_aggregation_witness_job( - self.config.witness_generation_timeout(), - self.config.max_attempts, - last_l1_batch_to_process, - &self.protocol_versions, - ) - .await - { - Some(metadata) => { - let job = get_artifacts(metadata, &*self.object_store).await; - Some((job.block_number, job)) - } - None => None, - }, - ) - } - - async fn save_failure(&self, job_id: L1BatchNumber, started_at: Instant, error: String) -> () { - let attempts = self - .prover_connection_pool - .access_storage() - .await - .unwrap() - .witness_generator_dal() - .mark_witness_job_as_failed( - AggregationRound::LeafAggregation, - job_id, - started_at.elapsed(), - error, - ) - .await; - - if attempts >= self.config.max_attempts { - self.connection_pool - .access_storage() - .await - .unwrap() - .blocks_dal() - .set_skip_proof_for_l1_batch(job_id) - .await - .unwrap(); - } - } - - #[allow(clippy::async_yields_async)] - async fn process_job( - &self, - job: LeafAggregationWitnessGeneratorJob, - started_at: Instant, - ) -> tokio::task::JoinHandle> { - tokio::task::spawn_blocking(move || Ok(Self::process_job_sync(job, started_at))) - } - - async fn save_result( - &self, - job_id: L1BatchNumber, - started_at: Instant, - artifacts: LeafAggregationArtifacts, - ) -> anyhow::Result<()> { - let leaf_circuits_len = artifacts.leaf_circuits.len(); - let blob_urls = save_artifacts(job_id, artifacts, &*self.object_store).await; - update_database( - &self.prover_connection_pool, - started_at, - job_id, - leaf_circuits_len, - blob_urls, - ) - .await; - Ok(()) - } - - fn max_attempts(&self) -> u32 { - self.config.max_attempts - } - - async fn get_job_attempts(&self, _job_id: &Self::JobId) -> anyhow::Result { - // Witness generator will be removed soon in favor of FRI one, so returning blank value. 
- Ok(1) - } -} - -pub fn process_leaf_aggregation_job( - started_at: Instant, - block_number: L1BatchNumber, - job: PrepareLeafAggregationCircuitsJob, -) -> LeafAggregationArtifacts { - let stage_started_at = Instant::now(); - - let verification_keys: HashMap< - u8, - VerificationKey>>, - > = get_vks_for_basic_circuits(); - - tracing::info!( - "Verification keys loaded in {:?}", - stage_started_at.elapsed() - ); - - // we need the list of vks that matches the list of job.basic_circuit_proofs - let vks_for_aggregation: Vec< - VerificationKey>>, - > = get_ordered_vks_for_basic_circuits(&job.basic_circuits, &verification_keys); - - let (all_vk_committments, set_committment, g2_points) = - witness::recursive_aggregation::form_base_circuits_committment(get_vks_for_commitment( - verification_keys, - )); - - tracing::info!("Commitments generated in {:?}", stage_started_at.elapsed()); - - let stage_started_at = Instant::now(); - - let (leaf_layer_subqueues, aggregation_outputs, leaf_circuits) = - witness::recursive_aggregation::prepare_leaf_aggregations( - job.basic_circuits, - job.basic_circuits_inputs, - job.basic_circuits_proofs, - vks_for_aggregation, - LEAF_SPLITTING_FACTOR, - all_vk_committments, - set_committment, - g2_points, - ); - - tracing::info!( - "prepare_leaf_aggregations took {:?}", - stage_started_at.elapsed() - ); - tracing::info!( - "Leaf witness generation for block {} is complete in {:?}. Number of circuits: {}", - block_number.0, - started_at.elapsed(), - leaf_circuits.len() - ); - - LeafAggregationArtifacts { - leaf_layer_subqueues, - aggregation_outputs, - leaf_circuits, - } -} - -async fn update_database( - prover_connection_pool: &ConnectionPool, - started_at: Instant, - block_number: L1BatchNumber, - leaf_circuits_len: usize, - blob_urls: BlobUrls, -) { - let mut prover_connection = prover_connection_pool.access_storage().await.unwrap(); - let mut transaction = prover_connection.start_transaction().await.unwrap(); - - // inserts artifacts into the node_aggregation_witness_jobs table - // and advances it to waiting_for_proofs status - transaction - .witness_generator_dal() - .save_leaf_aggregation_artifacts( - block_number, - leaf_circuits_len, - &blob_urls.leaf_layer_subqueues_url, - &blob_urls.aggregation_outputs_url, - ) - .await; - let system_version = transaction - .witness_generator_dal() - .protocol_version_for_l1_batch(block_number) - .await - .unwrap_or_else(|| { - panic!( - "No system version exist for l1 batch {} for leaf agg", - block_number.0 - ) - }); - transaction - .prover_dal() - .insert_prover_jobs( - block_number, - blob_urls.circuit_types_and_urls, - AggregationRound::LeafAggregation, - system_version, - ) - .await; - transaction - .witness_generator_dal() - .mark_witness_job_as_successful( - block_number, - AggregationRound::LeafAggregation, - started_at.elapsed(), - ) - .await; - - transaction.commit().await.unwrap(); - METRICS.processing_time[&AggregationRound::LeafAggregation.into()] - .observe(started_at.elapsed()); -} - -async fn get_artifacts( - metadata: WitnessGeneratorJobMetadata, - object_store: &dyn ObjectStore, -) -> LeafAggregationWitnessGeneratorJob { - let basic_circuits = object_store.get(metadata.block_number).await.unwrap(); - let basic_circuits_inputs = object_store.get(metadata.block_number).await.unwrap(); - - LeafAggregationWitnessGeneratorJob { - block_number: metadata.block_number, - job: PrepareLeafAggregationCircuitsJob { - basic_circuits_inputs, - basic_circuits_proofs: metadata.proofs, - basic_circuits, - }, - } -} - -async 
fn save_artifacts(
-    block_number: L1BatchNumber,
-    artifacts: LeafAggregationArtifacts,
-    object_store: &dyn ObjectStore,
-) -> BlobUrls {
-    let leaf_layer_subqueues_url = object_store
-        .put(block_number, &artifacts.leaf_layer_subqueues)
-        .await
-        .unwrap();
-    let aggregation_outputs_url = object_store
-        .put(block_number, &artifacts.aggregation_outputs)
-        .await
-        .unwrap();
-    let circuit_types_and_urls = save_prover_input_artifacts(
-        block_number,
-        &artifacts.leaf_circuits,
-        object_store,
-        AggregationRound::LeafAggregation,
-    )
-    .await;
-    BlobUrls {
-        leaf_layer_subqueues_url,
-        aggregation_outputs_url,
-        circuit_types_and_urls,
-    }
-}
diff --git a/core/lib/zksync_core/src/witness_generator/mod.rs b/core/lib/zksync_core/src/witness_generator/mod.rs
deleted file mode 100644
index 10a7ff861bd..00000000000
--- a/core/lib/zksync_core/src/witness_generator/mod.rs
+++ /dev/null
@@ -1,88 +0,0 @@
-//! `WitnessGenerator` component is responsible for generating prover jobs
-//! and saving artifacts needed for the next round of proof aggregation.
-//!
-//! That is, every aggregation round needs two sets of input:
-//! * computed proofs from the previous round
-//! * some artifacts that the witness generator of previous round(s) returns.
-//!
-//! There are four rounds of proofs for every block,
-//! each of them starts with an invocation of `WitnessGenerator` with a corresponding `WitnessGeneratorJobType`:
-//! * `WitnessGeneratorJobType::BasicCircuits`:
-//!   generates basic circuits (circuits like `Main VM` - up to 50 * 48 = 2400 circuits):
-//!   input table: `basic_circuit_witness_jobs` (todo SMA-1362: will be renamed from `witness_inputs`)
-//!   artifact/output table: `leaf_aggregation_jobs` (also creates job stubs in `node_aggregation_jobs` and `scheduler_aggregation_jobs`)
-//!   value in `aggregation_round` field of `prover_jobs` table: 0
-//! * `WitnessGeneratorJobType::LeafAggregation`:
-//!   generates leaf aggregation circuits (up to 48 circuits of type `LeafAggregation`)
-//!   input table: `leaf_aggregation_jobs`
-//!   artifact/output table: `node_aggregation_jobs`
-//!   value in `aggregation_round` field of `prover_jobs` table: 1
-//! * `WitnessGeneratorJobType::NodeAggregation`
-//!   generates one circuit of type `NodeAggregation`
-//!   input table: `leaf_aggregation_jobs`
-//!   value in `aggregation_round` field of `prover_jobs` table: 2
-//! * scheduler circuit
-//!   generates one circuit of type `Scheduler`
-//!   input table: `scheduler_witness_jobs`
-//!   value in `aggregation_round` field of `prover_jobs` table: 3
-//!
-//! One round of prover generation consists of:
-//! * `WitnessGenerator` picks up the next `queued` job in its input table and processes it
-//!   (invoking the corresponding helper function in `zkevm_test_harness` repo)
-//! * it saves the generated circuits to `prover_jobs` table and the other artifacts to its output table
-//! * the individual proofs are picked up by the provers, processed, and marked as complete.
-//! * when the last proof for this round is computed, the prover updates the row in the output table
-//!   setting its status to `queued`
-//! * `WitnessGenerator` picks up such job and proceeds to the next round
-//!
-//! Note that the very first input table (`basic_circuit_witness_jobs` (todo SMA-1362: will be renamed from `witness_inputs`))
-//! is populated by the tree (as the input artifact for the `WitnessGeneratorJobType::BasicCircuits` is the merkle proofs)
-
-use vise::{Buckets, Counter, EncodeLabelSet, EncodeLabelValue, Family, Histogram, Metrics};
-
-use std::{fmt, time::Duration};
-
-use zksync_types::proofs::AggregationRound;
-
-pub mod basic_circuits;
-pub mod leaf_aggregation;
-pub mod node_aggregation;
-mod precalculated_merkle_paths_provider;
-pub mod scheduler;
-mod storage_oracle;
-#[cfg(test)]
-mod tests;
-mod utils;
-
-#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, EncodeLabelSet, EncodeLabelValue)]
-#[metrics(label = "stage", format = "wit_gen_{}")]
-struct StageLabel(AggregationRound);
-
-impl From<AggregationRound> for StageLabel {
-    fn from(round: AggregationRound) -> Self {
-        Self(round)
-    }
-}
-
-impl fmt::Display for StageLabel {
-    fn fmt(&self, formatter: &mut fmt::Formatter<'_>) -> fmt::Result {
-        formatter.write_str(match self.0 {
-            AggregationRound::BasicCircuits => "basic_circuits",
-            AggregationRound::LeafAggregation => "leaf_aggregation",
-            AggregationRound::NodeAggregation => "node_aggregation",
-            AggregationRound::Scheduler => "scheduler",
-        })
-    }
-}
-
-#[derive(Debug, Metrics)]
-#[metrics(prefix = "server_witness_generator")]
-struct WitnessGeneratorMetrics {
-    #[metrics(buckets = Buckets::LATENCIES)]
-    processing_time: Family<StageLabel, Histogram<Duration>>,
-    skipped_blocks: Counter,
-    sampled_blocks: Counter,
-}
-
-#[vise::register]
-static METRICS: vise::Global<WitnessGeneratorMetrics> = vise::Global::new();
diff --git a/core/lib/zksync_core/src/witness_generator/node_aggregation.rs b/core/lib/zksync_core/src/witness_generator/node_aggregation.rs
deleted file mode 100644
index 6d884563c9d..00000000000
--- a/core/lib/zksync_core/src/witness_generator/node_aggregation.rs
+++ /dev/null
@@ -1,378 +0,0 @@
-use async_trait::async_trait;
-
-use std::{collections::HashMap, env, time::Instant};
-
-use zksync_config::configs::WitnessGeneratorConfig;
-use zksync_dal::ConnectionPool;
-use zksync_object_store::{ObjectStore, ObjectStoreFactory};
-use zksync_queued_job_processor::JobProcessor;
-use zksync_types::{
-    circuit::{
-        LEAF_CIRCUIT_INDEX, LEAF_SPLITTING_FACTOR, NODE_CIRCUIT_INDEX, NODE_SPLITTING_FACTOR,
-    },
-    proofs::{AggregationRound, PrepareNodeAggregationCircuitJob, WitnessGeneratorJobMetadata},
-    zkevm_test_harness::{
-        abstract_zksync_circuit::concrete_circuits::ZkSyncCircuit,
-        bellman::bn256::Bn256,
-        bellman::plonk::better_better_cs::setup::VerificationKey,
-        ff::to_hex,
-        witness::{
-            self,
-            oracle::VmWitnessOracle,
-            recursive_aggregation::{erase_vk_type, padding_aggregations},
-        },
-        NodeAggregationOutputDataWitness,
-    },
-    L1BatchNumber, ProtocolVersionId,
-};
-use zksync_verification_key_server::{
-    get_vk_for_circuit_type, get_vks_for_basic_circuits, get_vks_for_commitment,
-};
-
-use super::{utils::save_prover_input_artifacts, METRICS};
-
-pub struct NodeAggregationArtifacts {
-    final_node_aggregation: NodeAggregationOutputDataWitness<Bn256>,
-    node_circuits: Vec<ZkSyncCircuit<Bn256, VmWitnessOracle<Bn256>>>,
-}
-
-#[derive(Debug)]
-struct BlobUrls {
-    node_aggregations_url: String,
-    circuit_types_and_urls: Vec<(&'static str, String)>,
-}
-
-#[derive(Clone)]
-pub struct NodeAggregationWitnessGeneratorJob {
-    block_number: L1BatchNumber,
-    job: PrepareNodeAggregationCircuitJob,
-}
-
-#[derive(Debug)]
-pub struct NodeAggregationWitnessGenerator {
-    config: WitnessGeneratorConfig,
-    object_store: Box<dyn ObjectStore>,
-    protocol_versions: Vec<ProtocolVersionId>,
-    connection_pool: ConnectionPool,
-    prover_connection_pool: ConnectionPool,
-}
-
-impl NodeAggregationWitnessGenerator {
-    pub async fn new(
-        config: WitnessGeneratorConfig,
store_factory: &ObjectStoreFactory, - protocol_versions: Vec, - connection_pool: ConnectionPool, - prover_connection_pool: ConnectionPool, - ) -> Self { - Self { - config, - object_store: store_factory.create_store().await, - protocol_versions, - connection_pool, - prover_connection_pool, - } - } - - fn process_job_sync( - config: WitnessGeneratorConfig, - node_job: NodeAggregationWitnessGeneratorJob, - started_at: Instant, - ) -> anyhow::Result { - let NodeAggregationWitnessGeneratorJob { block_number, job } = node_job; - - tracing::info!( - "Starting witness generation of type {:?} for block {}", - AggregationRound::NodeAggregation, - block_number.0 - ); - Ok(process_node_aggregation_job( - config, - started_at, - block_number, - job, - )) - } -} - -#[async_trait] -impl JobProcessor for NodeAggregationWitnessGenerator { - type Job = NodeAggregationWitnessGeneratorJob; - type JobId = L1BatchNumber; - type JobArtifacts = NodeAggregationArtifacts; - - const SERVICE_NAME: &'static str = "node_aggregation_witness_generator"; - - async fn get_next_job(&self) -> anyhow::Result> { - let mut prover_connection = self.prover_connection_pool.access_storage().await.unwrap(); - let last_l1_batch_to_process = self.config.last_l1_batch_to_process(); - - Ok( - match prover_connection - .witness_generator_dal() - .get_next_node_aggregation_witness_job( - self.config.witness_generation_timeout(), - self.config.max_attempts, - last_l1_batch_to_process, - &self.protocol_versions, - ) - .await - { - Some(metadata) => { - let job = get_artifacts(metadata, &*self.object_store).await; - Some((job.block_number, job)) - } - None => None, - }, - ) - } - - async fn save_failure(&self, job_id: L1BatchNumber, started_at: Instant, error: String) -> () { - let attempts = self - .prover_connection_pool - .access_storage() - .await - .unwrap() - .witness_generator_dal() - .mark_witness_job_as_failed( - AggregationRound::NodeAggregation, - job_id, - started_at.elapsed(), - error, - ) - .await; - - if attempts >= self.config.max_attempts { - self.connection_pool - .access_storage() - .await - .unwrap() - .blocks_dal() - .set_skip_proof_for_l1_batch(job_id) - .await - .unwrap(); - } - } - - #[allow(clippy::async_yields_async)] - async fn process_job( - &self, - job: NodeAggregationWitnessGeneratorJob, - started_at: Instant, - ) -> tokio::task::JoinHandle> { - let config = self.config.clone(); - tokio::task::spawn_blocking(move || Self::process_job_sync(config, job, started_at)) - } - - async fn save_result( - &self, - job_id: L1BatchNumber, - started_at: Instant, - artifacts: NodeAggregationArtifacts, - ) -> anyhow::Result<()> { - let blob_urls = save_artifacts(job_id, artifacts, &*self.object_store).await; - update_database(&self.prover_connection_pool, started_at, job_id, blob_urls).await; - Ok(()) - } - - fn max_attempts(&self) -> u32 { - self.config.max_attempts - } - - async fn get_job_attempts(&self, _job_id: &Self::JobId) -> anyhow::Result { - // Witness generator will be removed soon in favor of FRI one, so returning blank value. 
- Ok(1) - } -} - -pub fn process_node_aggregation_job( - config: WitnessGeneratorConfig, - started_at: Instant, - block_number: L1BatchNumber, - job: PrepareNodeAggregationCircuitJob, -) -> NodeAggregationArtifacts { - let stage_started_at = Instant::now(); - zksync_prover_utils::ensure_initial_setup_keys_present( - &config.initial_setup_key_path, - &config.key_download_url, - ); - env::set_var("CRS_FILE", config.initial_setup_key_path); - tracing::info!("Keys loaded in {:?}", stage_started_at.elapsed()); - metrics::histogram!("server.prover.download_time", started_at.elapsed()); - - let stage_started_at = Instant::now(); - - let verification_keys: HashMap< - u8, - VerificationKey>>, - > = get_vks_for_basic_circuits(); - - let padding_aggregations = padding_aggregations(NODE_SPLITTING_FACTOR); - - let (_, set_committment, g2_points) = - witness::recursive_aggregation::form_base_circuits_committment(get_vks_for_commitment( - verification_keys, - )); - - let node_aggregation_vk = get_vk_for_circuit_type(NODE_CIRCUIT_INDEX); - - let leaf_aggregation_vk = get_vk_for_circuit_type(LEAF_CIRCUIT_INDEX); - - let (_, leaf_aggregation_vk_committment) = - witness::recursive_aggregation::compute_vk_encoding_and_committment(erase_vk_type( - leaf_aggregation_vk.clone(), - )); - - let (_, node_aggregation_vk_committment) = - witness::recursive_aggregation::compute_vk_encoding_and_committment(erase_vk_type( - node_aggregation_vk, - )); - - tracing::info!( - "commitments: basic set: {:?}, leaf: {:?}, node: {:?}", - to_hex(&set_committment), - to_hex(&leaf_aggregation_vk_committment), - to_hex(&node_aggregation_vk_committment) - ); - tracing::info!("Commitments generated in {:?}", stage_started_at.elapsed()); - - let stage_started_at = Instant::now(); - let (_, final_node_aggregations, node_circuits) = - zksync_types::zkevm_test_harness::witness::recursive_aggregation::prepare_node_aggregations( - job.previous_level_proofs, - leaf_aggregation_vk, - true, - 0, - job.previous_level_leafs_aggregations, - Vec::default(), - job.previous_sequence, - LEAF_SPLITTING_FACTOR, - NODE_SPLITTING_FACTOR, - padding_aggregations, - set_committment, - node_aggregation_vk_committment, - leaf_aggregation_vk_committment, - g2_points, - ); - - tracing::info!( - "prepare_node_aggregations took {:?}", - stage_started_at.elapsed() - ); - - assert_eq!( - node_circuits.len(), - 1, - "prepare_node_aggregations returned more than one circuit" - ); - assert_eq!( - final_node_aggregations.len(), - 1, - "prepare_node_aggregations returned more than one node aggregation" - ); - - tracing::info!( - "Node witness generation for block {} is complete in {:?}. 
Number of circuits: {}", - block_number.0, - started_at.elapsed(), - node_circuits.len() - ); - - NodeAggregationArtifacts { - final_node_aggregation: final_node_aggregations.into_iter().next().unwrap(), - node_circuits, - } -} - -async fn update_database( - prover_connection_pool: &ConnectionPool, - started_at: Instant, - block_number: L1BatchNumber, - blob_urls: BlobUrls, -) { - let mut prover_connection = prover_connection_pool.access_storage().await.unwrap(); - let mut transaction = prover_connection.start_transaction().await.unwrap(); - - // inserts artifacts into the scheduler_witness_jobs table - // and advances it to waiting_for_proofs status - transaction - .witness_generator_dal() - .save_node_aggregation_artifacts(block_number, &blob_urls.node_aggregations_url) - .await; - let protocol_version = transaction - .witness_generator_dal() - .protocol_version_for_l1_batch(block_number) - .await - .unwrap_or_else(|| { - panic!( - "No system version exist for l1 batch {} for node agg", - block_number.0 - ) - }); - transaction - .prover_dal() - .insert_prover_jobs( - block_number, - blob_urls.circuit_types_and_urls, - AggregationRound::NodeAggregation, - protocol_version, - ) - .await; - transaction - .witness_generator_dal() - .mark_witness_job_as_successful( - block_number, - AggregationRound::NodeAggregation, - started_at.elapsed(), - ) - .await; - - transaction.commit().await.unwrap(); - METRICS.processing_time[&AggregationRound::NodeAggregation.into()] - .observe(started_at.elapsed()); -} - -async fn get_artifacts( - metadata: WitnessGeneratorJobMetadata, - object_store: &dyn ObjectStore, -) -> NodeAggregationWitnessGeneratorJob { - let leaf_layer_subqueues = object_store - .get(metadata.block_number) - .await - .expect("leaf_layer_subqueues not found in queued `node_aggregation_witness_jobs` job"); - let aggregation_outputs = object_store - .get(metadata.block_number) - .await - .expect("aggregation_outputs not found in queued `node_aggregation_witness_jobs` job"); - - NodeAggregationWitnessGeneratorJob { - block_number: metadata.block_number, - job: PrepareNodeAggregationCircuitJob { - previous_level_proofs: metadata.proofs, - previous_level_leafs_aggregations: aggregation_outputs, - previous_sequence: leaf_layer_subqueues, - }, - } -} - -async fn save_artifacts( - block_number: L1BatchNumber, - artifacts: NodeAggregationArtifacts, - object_store: &dyn ObjectStore, -) -> BlobUrls { - let node_aggregations_url = object_store - .put(block_number, &artifacts.final_node_aggregation) - .await - .unwrap(); - let circuit_types_and_urls = save_prover_input_artifacts( - block_number, - &artifacts.node_circuits, - object_store, - AggregationRound::NodeAggregation, - ) - .await; - BlobUrls { - node_aggregations_url, - circuit_types_and_urls, - } -} diff --git a/core/lib/zksync_core/src/witness_generator/precalculated_merkle_paths_provider.rs b/core/lib/zksync_core/src/witness_generator/precalculated_merkle_paths_provider.rs deleted file mode 100644 index 96705de7e91..00000000000 --- a/core/lib/zksync_core/src/witness_generator/precalculated_merkle_paths_provider.rs +++ /dev/null @@ -1,261 +0,0 @@ -use serde::{Deserialize, Serialize}; -use zksync_types::proofs::{PrepareBasicCircuitsJob, StorageLogMetadata}; -use zksync_types::zkevm_test_harness::blake2::Blake2s256; -use zksync_types::zkevm_test_harness::witness::tree::BinaryHasher; -use zksync_types::zkevm_test_harness::witness::tree::{ - BinarySparseStorageTree, EnumeratedBinaryLeaf, LeafQuery, ZkSyncStorageLeaf, -}; - -#[derive(Debug, 
Clone, PartialEq, Deserialize, Serialize)]
-pub struct PrecalculatedMerklePathsProvider {
-    // We keep the root hash of the last processed leaf, as it is needed by the witness generator.
-    pub root_hash: [u8; 32],
-    // The ordered list of expected leaves to be interacted with
-    pub pending_leaves: Vec<StorageLogMetadata>,
-    // The index that would be assigned to the next new leaf
-    pub next_enumeration_index: u64,
-    // For every Storage Write Log we expect two invocations: `get_leaf` and `insert_leaf`.
-    // We set this flag to `true` after the initial `get_leaf` is invoked.
-    pub is_get_leaf_invoked: bool,
-}
-
-impl PrecalculatedMerklePathsProvider {
-    pub fn new(input: PrepareBasicCircuitsJob, root_hash: [u8; 32]) -> Self {
-        let next_enumeration_index = input.next_enumeration_index();
-        tracing::debug!("Initializing PrecalculatedMerklePathsProvider. Initial root_hash: {:?}, initial next_enumeration_index: {:?}", root_hash, next_enumeration_index);
-        Self {
-            root_hash,
-            pending_leaves: input.into_merkle_paths().collect(),
-            next_enumeration_index,
-            is_get_leaf_invoked: false,
-        }
-    }
-}
-
-impl BinarySparseStorageTree<256, 32, 32, 8, 32, Blake2s256, ZkSyncStorageLeaf>
-    for PrecalculatedMerklePathsProvider
-{
-    fn empty() -> Self {
-        unreachable!("`empty` must not be invoked by the witness generator code");
-    }
-
-    fn next_enumeration_index(&self) -> u64 {
-        self.next_enumeration_index
-    }
-
-    fn set_next_enumeration_index(&mut self, _value: u64) {
-        unreachable!(
-            "`set_next_enumeration_index` must not be invoked by the witness generator code"
-        );
-    }
-
-    fn root(&self) -> [u8; 32] {
-        self.root_hash
-    }
-
-    fn get_leaf(&mut self, index: &[u8; 32]) -> LeafQuery<256, 32, 32, 32, ZkSyncStorageLeaf> {
-        tracing::trace!(
-            "Invoked get_leaf({:?}). pending leaves size: {:?}. current root: {:?}",
-            index,
-            self.pending_leaves.len(),
-            self.root()
-        );
-        assert!(
-            !self.is_get_leaf_invoked,
-            "`get_leaf()` invoked more than once or get_leaf is invoked when insert_leaf was expected"
-        );
-        let next = self.pending_leaves.first().unwrap_or_else(|| {
-            panic!(
-                "invoked `get_leaf({:?})` with empty `pending_leaves`",
-                index
-            )
-        });
-        self.root_hash = next.root_hash;
-
-        assert_eq!(
-            &next.leaf_hashed_key_array(),
-            index,
-            "`get_leaf` hashed key mismatch"
-        );
-
-        let mut res = LeafQuery {
-            leaf: ZkSyncStorageLeaf {
-                index: next.leaf_enumeration_index,
-                value: next.value_read,
-            },
-            first_write: next.first_write,
-            index: *index,
-            merkle_path: next.clone().into_merkle_paths_array(),
-        };
-
-        if next.is_write {
-            // If it is a write, the next invocation will be `insert_leaf` with the very same parameters
-            self.is_get_leaf_invoked = true;
-            if res.first_write {
-                res.leaf.index = 0;
-            }
-        } else {
-            // If it is a read, the next invocation will relate to the next `pending_leaf`
-            self.pending_leaves.remove(0);
-        };
-
-        res
-    }
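-
-    // Editor's annotation: the write protocol documented on the struct above plays
-    // out as a call pair. For a write log, the generator first calls `get_leaf(key)`
-    // (which sets `is_get_leaf_invoked` and keeps the leaf in `pending_leaves`), then
-    // `insert_leaf(key, leaf)` below (which pops it). Read logs are consumed by
-    // `get_leaf` alone.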
-
-    fn insert_leaf(
-        &mut self,
-        index: &[u8; 32],
-        leaf: ZkSyncStorageLeaf,
-    ) -> LeafQuery<256, 32, 32, 32, ZkSyncStorageLeaf> {
-        tracing::trace!(
-            "Invoked insert_leaf({:?}). pending leaves size: {:?}. current root: {:?}",
-            index,
-            self.pending_leaves.len(),
-            self.root()
-        );
-
-        assert!(
-            self.is_get_leaf_invoked,
-            "`get_leaf()` is expected to be invoked before `insert_leaf()`"
-        );
-        let next = self.pending_leaves.remove(0);
-        self.root_hash = next.root_hash;
-
-        assert!(
-            next.is_write,
-            "invoked `insert_leaf({:?})`, but get_leaf() expected",
-            index
-        );
-
-        assert_eq!(
-            &next.leaf_hashed_key_array(),
-            index,
-            "insert_leaf hashed key mismatch",
-        );
-
-        assert_eq!(
-            &next.value_written, &leaf.value,
-            "insert_leaf enumeration index mismatch",
-        );
-
-        // reset is_get_leaf_invoked for the next get/insert invocation
-        self.is_get_leaf_invoked = false;
-
-        // if this insert was in fact the very first insert, it should bump the `next_enumeration_index`
-        self.next_enumeration_index = self
-            .next_enumeration_index
-            .max(next.leaf_enumeration_index + 1);
-
-        LeafQuery {
-            leaf: ZkSyncStorageLeaf {
-                index: next.leaf_enumeration_index,
-                value: next.value_written,
-            },
-            first_write: next.first_write,
-            index: *index,
-            merkle_path: next.into_merkle_paths_array(),
-        }
-    }
-
-    // Segregates the given leafs into two groups:
-    // * leafs that are updated for the first time;
-    // * leafs that have been updated before.
-    // The caller must ensure that the `indexes` and `leafs` iterators have the same length,
-    // and that the merkle paths specified during initialization contain the same number of write
-    // leaf nodes as the leafs passed as arguments.
-    fn filter_renumerate<'a>(
-        &self,
-        mut indexes: impl Iterator<Item = &'a [u8; 32]>,
-        mut leafs: impl Iterator<Item = ZkSyncStorageLeaf>,
-    ) -> (
-        u64,
-        Vec<([u8; 32], ZkSyncStorageLeaf)>,
-        Vec<ZkSyncStorageLeaf>,
-    ) {
-        tracing::trace!(
-            "invoked filter_renumerate(), pending leaves size: {:?}",
-            self.pending_leaves.len()
-        );
-        let mut first_writes = vec![];
-        let mut updates = vec![];
-        let write_pending_leaves = self
-            .pending_leaves
-            .iter()
-            .filter(|&l| l.is_write)
-            .collect::<Vec<_>>();
-        let write_pending_leaves_iter = write_pending_leaves.iter();
-        let mut length = 0;
-        for (&pending_leaf, (idx, mut leaf)) in
-            write_pending_leaves_iter.zip((&mut indexes).zip(&mut leafs))
-        {
-            leaf.set_index(pending_leaf.leaf_enumeration_index);
-            if pending_leaf.first_write {
-                first_writes.push((*idx, leaf));
-            } else {
-                updates.push(leaf);
-            }
-            length += 1;
-        }
-        assert_eq!(
-            length,
-            write_pending_leaves.len(),
-            "pending leaves: len({}) must be of same length as leafs and indexes: len({})",
-            write_pending_leaves.len(),
-            length
-        );
-        assert!(
-            indexes.next().is_none(),
-            "indexes must be of same length as leafs and pending leaves: len({})",
-            write_pending_leaves.len()
-        );
-        assert!(
-            leafs.next().is_none(),
-            "leafs must be of same length as indexes and pending leaves: len({})",
-            write_pending_leaves.len()
-        );
-        (self.next_enumeration_index, first_writes, updates)
-    }
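-
-    // Editor's annotation: `filter_renumerate` splits write leaves into two buckets:
-    // first writes are returned together with their 32-byte key (so they can be
-    // enumerated), while repeated writes are returned bare; both get their leaf
-    // index stamped from the precomputed `pending_leaves`.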
Index: {:?}, root: {:?})", - query.index, - root - ); - - let mut leaf_bytes = vec![0u8; 8 + 32]; // can make a scratch space somewhere later on - leaf_bytes[8..].copy_from_slice(query.leaf.value()); - - let leaf_index_bytes = query.leaf.current_index().to_be_bytes(); - leaf_bytes[0..8].copy_from_slice(&leaf_index_bytes); - - let leaf_hash = Blake2s256::leaf_hash(&leaf_bytes); - - let mut current_hash = leaf_hash; - for level in 0..256 { - let (l, r) = if is_right_side_node(&query.index, level) { - (&query.merkle_path[level], &current_hash) - } else { - (&current_hash, &query.merkle_path[level]) - }; - - let this_level_hash = Blake2s256::node_hash(level, l, r); - - current_hash = this_level_hash; - } - - root == &current_hash - } -} - -fn is_right_side_node<const N: usize>(index: &[u8; N], depth: usize) -> bool { - debug_assert!(depth < N * 8); - let byte_idx = depth / 8; - let bit_idx = depth % 8; - - index[byte_idx] & (1u8 << bit_idx) != 0 -} diff --git a/core/lib/zksync_core/src/witness_generator/scheduler.rs b/core/lib/zksync_core/src/witness_generator/scheduler.rs deleted file mode 100644 index ae8c2daff73..00000000000 --- a/core/lib/zksync_core/src/witness_generator/scheduler.rs +++ /dev/null @@ -1,376 +0,0 @@ -use async_trait::async_trait; - -use std::{collections::HashMap, slice, time::Instant}; - -use zksync_config::configs::WitnessGeneratorConfig; -use zksync_dal::ConnectionPool; -use zksync_object_store::{ObjectStore, ObjectStoreFactory}; -use zksync_queued_job_processor::JobProcessor; -use zksync_types::{ - circuit::{ - LEAF_CIRCUIT_INDEX, LEAF_SPLITTING_FACTOR, NODE_CIRCUIT_INDEX, NODE_SPLITTING_FACTOR, - }, - proofs::{AggregationRound, PrepareSchedulerCircuitJob, WitnessGeneratorJobMetadata}, - zkevm_test_harness::{ - abstract_zksync_circuit::concrete_circuits::ZkSyncCircuit, - bellman::{bn256::Bn256, plonk::better_better_cs::setup::VerificationKey}, - sync_vm::scheduler::BlockApplicationWitness, - witness::{self, oracle::VmWitnessOracle, recursive_aggregation::erase_vk_type}, - }, - L1BatchNumber, ProtocolVersionId, -}; -use zksync_verification_key_server::{ - get_vk_for_circuit_type, get_vks_for_basic_circuits, get_vks_for_commitment, -}; - -use super::{utils::save_prover_input_artifacts, METRICS}; - -pub struct SchedulerArtifacts { - final_aggregation_result: BlockApplicationWitness, - scheduler_circuit: ZkSyncCircuit<Bn256, VmWitnessOracle<Bn256>>, -} - -#[derive(Clone)] -pub struct SchedulerWitnessGeneratorJob { - block_number: L1BatchNumber, - job: PrepareSchedulerCircuitJob, -} - -#[derive(Debug)] -pub struct SchedulerWitnessGenerator { - config: WitnessGeneratorConfig, - object_store: Box<dyn ObjectStore>, - protocol_versions: Vec<ProtocolVersionId>, - connection_pool: ConnectionPool, - prover_connection_pool: ConnectionPool, -} - -impl SchedulerWitnessGenerator { - pub async fn new( - config: WitnessGeneratorConfig, - store_factory: &ObjectStoreFactory, - protocol_versions: Vec<ProtocolVersionId>, - connection_pool: ConnectionPool, - prover_connection_pool: ConnectionPool, - ) -> Self { - Self { - config, - object_store: store_factory.create_store().await, - protocol_versions, - connection_pool, - prover_connection_pool, - } - } - - fn process_job_sync( - scheduler_job: SchedulerWitnessGeneratorJob, - started_at: Instant, - ) -> SchedulerArtifacts { - let SchedulerWitnessGeneratorJob { block_number, job } = scheduler_job; - - tracing::info!( - "Starting witness generation of type {:?} for block {}", - AggregationRound::Scheduler, - block_number.0 - ); - process_scheduler_job(started_at, block_number, job) - } -} - -#[async_trait] -impl JobProcessor for SchedulerWitnessGenerator { - 
type Job = SchedulerWitnessGeneratorJob; - type JobId = L1BatchNumber; - type JobArtifacts = SchedulerArtifacts; - - const SERVICE_NAME: &'static str = "scheduler_witness_generator"; - - async fn get_next_job(&self) -> anyhow::Result> { - let mut connection = self.connection_pool.access_storage().await.unwrap(); - let mut prover_connection = self.prover_connection_pool.access_storage().await.unwrap(); - let last_l1_batch_to_process = self.config.last_l1_batch_to_process(); - - match prover_connection - .witness_generator_dal() - .get_next_scheduler_witness_job( - self.config.witness_generation_timeout(), - self.config.max_attempts, - last_l1_batch_to_process, - &self.protocol_versions, - ) - .await - { - Some(metadata) => { - let prev_metadata = connection - .blocks_dal() - .get_l1_batch_metadata(metadata.block_number - 1) - .await - .unwrap(); - let previous_aux_hash = prev_metadata - .as_ref() - .map_or([0u8; 32], |e| e.metadata.aux_data_hash.0); - let previous_meta_hash = - prev_metadata.map_or([0u8; 32], |e| e.metadata.meta_parameters_hash.0); - let job = get_artifacts( - metadata, - previous_aux_hash, - previous_meta_hash, - &*self.object_store, - ) - .await; - Ok(Some((job.block_number, job))) - } - None => Ok(None), - } - } - - async fn save_failure(&self, job_id: L1BatchNumber, started_at: Instant, error: String) -> () { - let attempts = self - .prover_connection_pool - .access_storage() - .await - .unwrap() - .witness_generator_dal() - .mark_witness_job_as_failed( - AggregationRound::Scheduler, - job_id, - started_at.elapsed(), - error, - ) - .await; - - if attempts >= self.config.max_attempts { - self.connection_pool - .access_storage() - .await - .unwrap() - .blocks_dal() - .set_skip_proof_for_l1_batch(job_id) - .await - .unwrap(); - } - } - - #[allow(clippy::async_yields_async)] - async fn process_job( - &self, - job: SchedulerWitnessGeneratorJob, - started_at: Instant, - ) -> tokio::task::JoinHandle> { - tokio::task::spawn_blocking(move || Ok(Self::process_job_sync(job, started_at))) - } - - async fn save_result( - &self, - job_id: L1BatchNumber, - started_at: Instant, - artifacts: SchedulerArtifacts, - ) -> anyhow::Result<()> { - let circuit_types_and_urls = - save_artifacts(job_id, &artifacts.scheduler_circuit, &*self.object_store).await; - update_database( - &self.connection_pool, - &self.prover_connection_pool, - started_at, - job_id, - artifacts.final_aggregation_result, - circuit_types_and_urls, - ) - .await; - Ok(()) - } - - fn max_attempts(&self) -> u32 { - self.config.max_attempts - } - - async fn get_job_attempts(&self, _job_id: &Self::JobId) -> anyhow::Result { - // Witness generator will be removed soon in favor of FRI one, so returning blank value. 
- Ok(1) - } -} - -pub fn process_scheduler_job( - started_at: Instant, - block_number: L1BatchNumber, - job: PrepareSchedulerCircuitJob, -) -> SchedulerArtifacts { - let stage_started_at = Instant::now(); - - let verification_keys: HashMap< - u8, - VerificationKey>>, - > = get_vks_for_basic_circuits(); - - let (_, set_committment, g2_points) = - witness::recursive_aggregation::form_base_circuits_committment(get_vks_for_commitment( - verification_keys, - )); - - tracing::info!( - "Verification keys loaded in {:?}", - stage_started_at.elapsed() - ); - - let leaf_aggregation_vk = get_vk_for_circuit_type(LEAF_CIRCUIT_INDEX); - - let node_aggregation_vk = get_vk_for_circuit_type(NODE_CIRCUIT_INDEX); - - let (_, leaf_aggregation_vk_committment) = - witness::recursive_aggregation::compute_vk_encoding_and_committment(erase_vk_type( - leaf_aggregation_vk, - )); - - let (_, node_aggregation_vk_committment) = - witness::recursive_aggregation::compute_vk_encoding_and_committment(erase_vk_type( - node_aggregation_vk.clone(), - )); - - tracing::info!("Commitments generated in {:?}", stage_started_at.elapsed()); - let stage_started_at = Instant::now(); - - let (scheduler_circuit, final_aggregation_result) = - witness::recursive_aggregation::prepare_scheduler_circuit( - job.incomplete_scheduler_witness, - job.node_final_proof_level_proof, - node_aggregation_vk, - job.final_node_aggregations, - set_committment, - node_aggregation_vk_committment, - leaf_aggregation_vk_committment, - job.previous_aux_hash, - job.previous_meta_hash, - (LEAF_SPLITTING_FACTOR * NODE_SPLITTING_FACTOR) as u32, - g2_points, - ); - - tracing::info!( - "prepare_scheduler_circuit took {:?}", - stage_started_at.elapsed() - ); - - tracing::info!( - "Scheduler generation for block {} is complete in {:?}", - block_number.0, - started_at.elapsed() - ); - - SchedulerArtifacts { - final_aggregation_result, - scheduler_circuit, - } -} - -pub async fn update_database( - connection_pool: &ConnectionPool, - prover_connection_pool: &ConnectionPool, - started_at: Instant, - block_number: L1BatchNumber, - final_aggregation_result: BlockApplicationWitness, - circuit_types_and_urls: Vec<(&'static str, String)>, -) { - let mut connection = connection_pool.access_storage().await.unwrap(); - let block = connection - .blocks_dal() - .get_l1_batch_metadata(block_number) - .await - .unwrap() - .expect("L1 batch should exist"); - - assert_eq!( - block.metadata.aux_data_hash.0, final_aggregation_result.aux_data_hash, - "Commitment for aux data is wrong" - ); - - assert_eq!( - block.metadata.pass_through_data_hash.0, final_aggregation_result.passthrough_data_hash, - "Commitment for pass through data is wrong" - ); - - assert_eq!( - block.metadata.meta_parameters_hash.0, final_aggregation_result.meta_data_hash, - "Commitment for metadata is wrong" - ); - - assert_eq!( - block.metadata.commitment.0, final_aggregation_result.block_header_hash, - "Commitment is wrong" - ); - - let mut prover_connection = prover_connection_pool.access_storage().await.unwrap(); - let mut transaction = prover_connection.start_transaction().await.unwrap(); - let protocol_version = transaction - .witness_generator_dal() - .protocol_version_for_l1_batch(block_number) - .await - .unwrap_or_else(|| { - panic!( - "No system version exist for l1 batch {} for node agg", - block_number.0 - ) - }); - transaction - .prover_dal() - .insert_prover_jobs( - block_number, - circuit_types_and_urls, - AggregationRound::Scheduler, - protocol_version, - ) - .await; - - transaction - 
.witness_generator_dal() - .save_final_aggregation_result( - block_number, - final_aggregation_result.aggregation_result_coords, - ) - .await; - - transaction - .witness_generator_dal() - .mark_witness_job_as_successful( - block_number, - AggregationRound::Scheduler, - started_at.elapsed(), - ) - .await; - - transaction.commit().await.unwrap(); - METRICS.processing_time[&AggregationRound::Scheduler.into()].observe(started_at.elapsed()); -} - -async fn save_artifacts( - block_number: L1BatchNumber, - scheduler_circuit: &ZkSyncCircuit>, - object_store: &dyn ObjectStore, -) -> Vec<(&'static str, String)> { - save_prover_input_artifacts( - block_number, - slice::from_ref(scheduler_circuit), - object_store, - AggregationRound::Scheduler, - ) - .await -} - -async fn get_artifacts( - metadata: WitnessGeneratorJobMetadata, - previous_aux_hash: [u8; 32], - previous_meta_hash: [u8; 32], - object_store: &dyn ObjectStore, -) -> SchedulerWitnessGeneratorJob { - let scheduler_witness = object_store.get(metadata.block_number).await.unwrap(); - let final_node_aggregations = object_store.get(metadata.block_number).await.unwrap(); - - SchedulerWitnessGeneratorJob { - block_number: metadata.block_number, - job: PrepareSchedulerCircuitJob { - incomplete_scheduler_witness: scheduler_witness, - final_node_aggregations, - node_final_proof_level_proof: metadata.proofs.into_iter().next().unwrap(), - previous_aux_hash, - previous_meta_hash, - }, - } -} diff --git a/core/lib/zksync_core/src/witness_generator/storage_oracle.rs b/core/lib/zksync_core/src/witness_generator/storage_oracle.rs deleted file mode 100644 index 112b4eb5988..00000000000 --- a/core/lib/zksync_core/src/witness_generator/storage_oracle.rs +++ /dev/null @@ -1,46 +0,0 @@ -use zksync_types::zkevm_test_harness::zk_evm::abstractions::{ - RefundType, RefundedAmounts, Storage, -}; -use zksync_types::{LogQuery, Timestamp}; - -#[derive(Debug)] -pub(super) struct StorageOracle { - inn: T, - storage_refunds: std::vec::IntoIter, -} - -impl StorageOracle { - pub fn new(inn: T, storage_refunds: Vec) -> Self { - Self { - inn, - storage_refunds: storage_refunds.into_iter(), - } - } -} - -impl Storage for StorageOracle { - fn estimate_refunds_for_write( - &mut self, - _monotonic_cycle_counter: u32, - _partial_query: &LogQuery, - ) -> RefundType { - let pubdata_bytes = self.storage_refunds.next().expect("Missing refund"); - RefundType::RepeatedWrite(RefundedAmounts { - pubdata_bytes, - ergs: 0, - }) - } - - fn execute_partial_query(&mut self, monotonic_cycle_counter: u32, query: LogQuery) -> LogQuery { - self.inn - .execute_partial_query(monotonic_cycle_counter, query) - } - - fn finish_frame(&mut self, timestamp: Timestamp, panicked: bool) { - self.inn.finish_frame(timestamp, panicked) - } - - fn start_frame(&mut self, timestamp: Timestamp) { - self.inn.start_frame(timestamp) - } -} diff --git a/core/lib/zksync_core/src/witness_generator/tests.rs b/core/lib/zksync_core/src/witness_generator/tests.rs deleted file mode 100644 index 38a77331fa3..00000000000 --- a/core/lib/zksync_core/src/witness_generator/tests.rs +++ /dev/null @@ -1,296 +0,0 @@ -use crate::witness_generator::precalculated_merkle_paths_provider::PrecalculatedMerklePathsProvider; -use std::convert::TryInto; -use zksync_types::proofs::StorageLogMetadata; -use zksync_types::zkevm_test_harness::witness::tree::{BinarySparseStorageTree, ZkSyncStorageLeaf}; - -#[test] -fn test_filter_renumerate_all_first_writes() { - let logs = vec![ - generate_storage_log_metadata( - 
"DDC60818D8F7CFE42514F8EA3CC52806DDC60818D8F7CFE42514F8EA3CC52806", - "12E9FF974B0FAEE514AD4AC50E2BDC6E12E9FF974B0FAEE514AD4AC50E2BDC6E", - false, - false, - 1, - ), - generate_storage_log_metadata( - "BDA1617CC883E2251D3BE0FD9B3F3064BDA1617CC883E2251D3BE0FD9B3F3064", - "D14917FCB067922F92322025D1BA50B4D14917FCB067922F92322025D1BA50B4", - true, - true, - 2, - ), - generate_storage_log_metadata( - "77F035AD50811CFABD956F6F1B48E48277F035AD50811CFABD956F6F1B48E482", - "7CF33B959916CC9B56F21C427ED7CA187CF33B959916CC9B56F21C427ED7CA18", - true, - true, - 3, - ), - ]; - let precalculated_merkle_paths_provider = PrecalculatedMerklePathsProvider { - root_hash: string_to_array( - "4AF44B3D5D4F9C7B117A68351AAB65CF4AF44B3D5D4F9C7B117A68351AAB65CF", - ), - pending_leaves: logs, - next_enumeration_index: 4, - is_get_leaf_invoked: false, - }; - let (leafs, indices) = generate_leafs_indices(); - - let (_, first_writes, updates) = - precalculated_merkle_paths_provider.filter_renumerate(indices.iter(), leafs.into_iter()); - assert_eq!(2, first_writes.len()); - assert_eq!(0, updates.len()); -} - -#[test] -fn test_filter_renumerate_all_repeated_writes() { - let logs = vec![ - generate_storage_log_metadata( - "DDC60818D8F7CFE42514F8EA3CC52806DDC60818D8F7CFE42514F8EA3CC52806", - "12E9FF974B0FAEE514AD4AC50E2BDC6E12E9FF974B0FAEE514AD4AC50E2BDC6E", - false, - false, - 1, - ), - generate_storage_log_metadata( - "BDA1617CC883E2251D3BE0FD9B3F3064BDA1617CC883E2251D3BE0FD9B3F3064", - "D14917FCB067922F92322025D1BA50B4D14917FCB067922F92322025D1BA50B4", - true, - false, - 2, - ), - generate_storage_log_metadata( - "77F035AD50811CFABD956F6F1B48E48277F035AD50811CFABD956F6F1B48E482", - "7CF33B959916CC9B56F21C427ED7CA187CF33B959916CC9B56F21C427ED7CA18", - true, - false, - 3, - ), - ]; - let precalculated_merkle_paths_provider = PrecalculatedMerklePathsProvider { - root_hash: string_to_array( - "4AF44B3D5D4F9C7B117A68351AAB65CF4AF44B3D5D4F9C7B117A68351AAB65CF", - ), - pending_leaves: logs, - next_enumeration_index: 4, - is_get_leaf_invoked: false, - }; - let (leafs, indices) = generate_leafs_indices(); - - let (_, first_writes, updates) = - precalculated_merkle_paths_provider.filter_renumerate(indices.iter(), leafs.into_iter()); - assert_eq!(0, first_writes.len()); - assert_eq!(2, updates.len()); -} - -#[test] -fn test_filter_renumerate_repeated_writes_with_first_write() { - let logs = vec![ - generate_storage_log_metadata( - "DDC60818D8F7CFE42514F8EA3CC52806DDC60818D8F7CFE42514F8EA3CC52806", - "12E9FF974B0FAEE514AD4AC50E2BDC6E12E9FF974B0FAEE514AD4AC50E2BDC6E", - false, - false, - 1, - ), - generate_storage_log_metadata( - "BDA1617CC883E2251D3BE0FD9B3F3064BDA1617CC883E2251D3BE0FD9B3F3064", - "D14917FCB067922F92322025D1BA50B4D14917FCB067922F92322025D1BA50B4", - true, - false, - 2, - ), - generate_storage_log_metadata( - "77F035AD50811CFABD956F6F1B48E48277F035AD50811CFABD956F6F1B48E482", - "7CF33B959916CC9B56F21C427ED7CA187CF33B959916CC9B56F21C427ED7CA18", - true, - true, - 3, - ), - ]; - let precalculated_merkle_paths_provider = PrecalculatedMerklePathsProvider { - root_hash: string_to_array( - "4AF44B3D5D4F9C7B117A68351AAB65CF4AF44B3D5D4F9C7B117A68351AAB65CF", - ), - pending_leaves: logs, - next_enumeration_index: 4, - is_get_leaf_invoked: false, - }; - let (leafs, indices) = generate_leafs_indices(); - - let (_, first_writes, updates) = - precalculated_merkle_paths_provider.filter_renumerate(indices.iter(), leafs.into_iter()); - assert_eq!(1, first_writes.len()); - assert_eq!(1, updates.len()); - assert_eq!(3, 
first_writes[0].1.index); - assert_eq!(2, updates[0].index); -} - -#[test] -#[should_panic(expected = "leafs must be of same length as indexes")] -fn test_filter_renumerate_panic_when_leafs_and_indices_are_of_different_length() { - let logs = vec![ - generate_storage_log_metadata( - "DDC60818D8F7CFE42514F8EA3CC52806DDC60818D8F7CFE42514F8EA3CC52806", - "12E9FF974B0FAEE514AD4AC50E2BDC6E12E9FF974B0FAEE514AD4AC50E2BDC6E", - false, - false, - 1, - ), - generate_storage_log_metadata( - "BDA1617CC883E2251D3BE0FD9B3F3064BDA1617CC883E2251D3BE0FD9B3F3064", - "D14917FCB067922F92322025D1BA50B4D14917FCB067922F92322025D1BA50B4", - true, - false, - 2, - ), - generate_storage_log_metadata( - "77F035AD50811CFABD956F6F1B48E48277F035AD50811CFABD956F6F1B48E482", - "7CF33B959916CC9B56F21C427ED7CA187CF33B959916CC9B56F21C427ED7CA18", - true, - true, - 3, - ), - ]; - let precalculated_merkle_paths_provider = PrecalculatedMerklePathsProvider { - root_hash: string_to_array( - "4AF44B3D5D4F9C7B117A68351AAB65CF4AF44B3D5D4F9C7B117A68351AAB65CF", - ), - pending_leaves: logs, - next_enumeration_index: 4, - is_get_leaf_invoked: false, - }; - - let leafs = vec![ - generate_leaf( - 1, - "AD558076F725ED8B5E5B42920422E9BEAD558076F725ED8B5E5B42920422E9BE", - ), - generate_leaf( - 1, - "98A0EADBD6118391B744252DA348873C98A0EADBD6118391B744252DA348873C", - ), - generate_leaf( - 2, - "72868932BBB002043AF50363EEB65AE172868932BBB002043AF50363EEB65AE1", - ), - ]; - let indices = [ - string_to_array("5534D106E0B590953AC0FC7D65CA3B2E5534D106E0B590953AC0FC7D65CA3B2E"), - string_to_array("00309D72EF0AD9786DA9044109E1704B00309D72EF0AD9786DA9044109E1704B"), - ]; - - precalculated_merkle_paths_provider.filter_renumerate(indices.iter(), leafs.into_iter()); -} - -#[test] -#[should_panic(expected = "indexes must be of same length as leafs and pending leaves")] -fn test_filter_renumerate_panic_when_indices_and_pending_leaves_are_of_different_length() { - let logs = vec![ - generate_storage_log_metadata( - "DDC60818D8F7CFE42514F8EA3CC52806DDC60818D8F7CFE42514F8EA3CC52806", - "12E9FF974B0FAEE514AD4AC50E2BDC6E12E9FF974B0FAEE514AD4AC50E2BDC6E", - false, - false, - 1, - ), - generate_storage_log_metadata( - "BDA1617CC883E2251D3BE0FD9B3F3064BDA1617CC883E2251D3BE0FD9B3F3064", - "D14917FCB067922F92322025D1BA50B4D14917FCB067922F92322025D1BA50B4", - true, - false, - 2, - ), - generate_storage_log_metadata( - "77F035AD50811CFABD956F6F1B48E48277F035AD50811CFABD956F6F1B48E482", - "7CF33B959916CC9B56F21C427ED7CA187CF33B959916CC9B56F21C427ED7CA18", - true, - true, - 3, - ), - ]; - let precalculated_merkle_paths_provider = PrecalculatedMerklePathsProvider { - root_hash: string_to_array( - "4AF44B3D5D4F9C7B117A68351AAB65CF4AF44B3D5D4F9C7B117A68351AAB65CF", - ), - pending_leaves: logs, - next_enumeration_index: 4, - is_get_leaf_invoked: false, - }; - - let leafs = vec![ - generate_leaf( - 1, - "AD558076F725ED8B5E5B42920422E9BEAD558076F725ED8B5E5B42920422E9BE", - ), - generate_leaf( - 1, - "98A0EADBD6118391B744252DA348873C98A0EADBD6118391B744252DA348873C", - ), - generate_leaf( - 2, - "72868932BBB002043AF50363EEB65AE172868932BBB002043AF50363EEB65AE1", - ), - ]; - let indices = [ - string_to_array("5534D106E0B590953AC0FC7D65CA3B2E5534D106E0B590953AC0FC7D65CA3B2E"), - string_to_array("00309D72EF0AD9786DA9044109E1704B00309D72EF0AD9786DA9044109E1704B"), - string_to_array("930058748339A83E06F0D1D22937E92A930058748339A83E06F0D1D22937E92A"), - ]; - - precalculated_merkle_paths_provider.filter_renumerate(indices.iter(), leafs.into_iter()); -} - -fn 
generate_leafs_indices() -> (Vec, Vec<[u8; 32]>) { - let leafs = vec![ - generate_leaf( - 1, - "AD558076F725ED8B5E5B42920422E9BEAD558076F725ED8B5E5B42920422E9BE", - ), - generate_leaf( - 2, - "72868932BBB002043AF50363EEB65AE172868932BBB002043AF50363EEB65AE1", - ), - ]; - let indices = vec![ - string_to_array("5534D106E0B590953AC0FC7D65CA3B2E5534D106E0B590953AC0FC7D65CA3B2E"), - string_to_array("00309D72EF0AD9786DA9044109E1704B00309D72EF0AD9786DA9044109E1704B"), - ]; - (leafs, indices) -} - -fn generate_leaf(index: u64, value: &str) -> ZkSyncStorageLeaf { - ZkSyncStorageLeaf { - index, - value: string_to_array(value), - } -} - -fn string_to_array(value: &str) -> [u8; 32] { - let array_value: [u8; 32] = hex::decode(value) - .expect("Hex decoding failed") - .try_into() - .unwrap(); - array_value -} - -fn generate_storage_log_metadata( - root_hash: &str, - merkle_path: &str, - is_write: bool, - first_write: bool, - leaf_enumeration_index: u64, -) -> StorageLogMetadata { - StorageLogMetadata { - root_hash: string_to_array(root_hash), - is_write, - first_write, - merkle_paths: vec![string_to_array(merkle_path)], - leaf_hashed_key: Default::default(), - leaf_enumeration_index, - value_written: [0; 32], - value_read: [0; 32], - } -} diff --git a/core/lib/zksync_core/src/witness_generator/utils.rs b/core/lib/zksync_core/src/witness_generator/utils.rs deleted file mode 100644 index 2135eddb3cc..00000000000 --- a/core/lib/zksync_core/src/witness_generator/utils.rs +++ /dev/null @@ -1,27 +0,0 @@ -use zksync_object_store::{CircuitKey, ObjectStore}; -use zksync_types::zkevm_test_harness::abstract_zksync_circuit::concrete_circuits::ZkSyncCircuit; -use zksync_types::zkevm_test_harness::bellman::bn256::Bn256; -use zksync_types::zkevm_test_harness::witness::oracle::VmWitnessOracle; -use zksync_types::{proofs::AggregationRound, L1BatchNumber}; - -pub async fn save_prover_input_artifacts( - block_number: L1BatchNumber, - circuits: &[ZkSyncCircuit>], - object_store: &dyn ObjectStore, - aggregation_round: AggregationRound, -) -> Vec<(&'static str, String)> { - // We intentionally process circuits sequentially to not overwhelm the object store. 
- let mut types_and_urls = Vec::with_capacity(circuits.len()); - for (sequence_number, circuit) in circuits.iter().enumerate() { - let circuit_type = circuit.short_description(); - let circuit_key = CircuitKey { - block_number, - sequence_number, - circuit_type, - aggregation_round, - }; - let blob_url = object_store.put(circuit_key, circuit).await.unwrap(); - types_and_urls.push((circuit_type, blob_url)); - } - types_and_urls -} diff --git a/core/tests/cross_external_nodes_checker/README.md b/core/tests/cross_external_nodes_checker/README.md index b6df859d906..78c1fe48b40 100644 --- a/core/tests/cross_external_nodes_checker/README.md +++ b/core/tests/cross_external_nodes_checker/README.md @@ -19,7 +19,7 @@ Run the server ``` zk init -zk server --components api,tree,eth,data_fetcher,state_keeper +zk server --components api,tree,eth,state_keeper ``` Run the EN diff --git a/core/tests/cross_external_nodes_checker/src/checker.rs b/core/tests/cross_external_nodes_checker/src/checker.rs index 61421816c60..0ddd179c266 100644 --- a/core/tests/cross_external_nodes_checker/src/checker.rs +++ b/core/tests/cross_external_nodes_checker/src/checker.rs @@ -7,7 +7,6 @@ use std::{ use serde_json::Value; use tokio::{sync::watch::Receiver, time::sleep}; - use zksync_types::{ api::{BlockDetails, BlockNumber, L1BatchDetails}, web3::types::U64, @@ -15,15 +14,17 @@ use zksync_types::{ }; use zksync_utils::wait_for_tasks::wait_for_tasks; use zksync_web3_decl::{ - jsonrpsee::core::Error, - jsonrpsee::http_client::{HttpClient, HttpClientBuilder}, + jsonrpsee::{ + core::ClientError, + http_client::{HttpClient, HttpClientBuilder}, + }, namespaces::{EnNamespaceClient, EthNamespaceClient, ZksNamespaceClient}, types::FilterBuilder, RpcResult, }; -use crate::config::{CheckerConfig, RpcMode}; use crate::{ + config::{CheckerConfig, RpcMode}, divergence::{Divergence, DivergenceDetails}, helpers::compare_json, }; @@ -366,14 +367,14 @@ impl Checker { tracing::debug!("Maybe checking batch {}", miniblock_batch_number); // We should check batches only the first time we encounter them per instance - // (i.e., next_instance_batch_to_check == miniblock_batch_number) + // (i.e., `next_instance_batch_to_check == miniblock_batch_number`) match instance_batch_to_check.cmp(&miniblock_batch_number) { Greater => return Ok(()), // This batch has already been checked. Less => { // Either somehow a batch wasn't checked or a non-genesis miniblock was set as the start // miniblock. In the latter case, update the `next_batch_to_check` map and check the batch. 
if self.start_miniblock == Some(MiniblockNumber(0)) { - return Err(Error::Custom(format!( + return Err(ClientError::Custom(format!( "the next batch number to check (#{}) is less than current miniblock batch number (#{}) for instance {}", instance_batch_to_check, miniblock_batch_number, diff --git a/core/tests/cross_external_nodes_checker/src/config.rs b/core/tests/cross_external_nodes_checker/src/config.rs index 6273b3405a0..636a4fd9ae5 100644 --- a/core/tests/cross_external_nodes_checker/src/config.rs +++ b/core/tests/cross_external_nodes_checker/src/config.rs @@ -116,9 +116,10 @@ fn default_subscription_duration() -> Option { #[cfg(test)] mod tests { - use super::*; use std::env; + use super::*; + #[test] fn success() { let config = r#" diff --git a/core/tests/cross_external_nodes_checker/src/divergence.rs b/core/tests/cross_external_nodes_checker/src/divergence.rs index 7f18f5fa605..18c910349f7 100644 --- a/core/tests/cross_external_nodes_checker/src/divergence.rs +++ b/core/tests/cross_external_nodes_checker/src/divergence.rs @@ -1,4 +1,5 @@ use std::fmt; + use zksync_types::{web3::types::U64, MiniblockNumber}; #[derive(Debug, Clone)] diff --git a/core/tests/cross_external_nodes_checker/src/helpers.rs b/core/tests/cross_external_nodes_checker/src/helpers.rs index 14843e55868..6247b5e8c8a 100644 --- a/core/tests/cross_external_nodes_checker/src/helpers.rs +++ b/core/tests/cross_external_nodes_checker/src/helpers.rs @@ -1,7 +1,7 @@ +use std::{collections::HashMap, future::Future, time::Duration}; + use futures::channel::oneshot; use serde_json::{Map, Value}; -use std::future::Future; -use std::{collections::HashMap, time::Duration}; use tokio::time::sleep; /// Sets up an interrupt handler and returns a future that resolves once an interrupt signal is received. @@ -132,9 +132,10 @@ impl ExponentialBackoff { #[cfg(test)] mod tests { - use super::*; use serde_json::json; + use super::*; + #[test] fn test_same_json() { let json1 = json!({ diff --git a/core/tests/cross_external_nodes_checker/src/main.rs b/core/tests/cross_external_nodes_checker/src/main.rs index 45192fe20fa..7199c1cbd32 100644 --- a/core/tests/cross_external_nodes_checker/src/main.rs +++ b/core/tests/cross_external_nodes_checker/src/main.rs @@ -1,4 +1,8 @@ -extern crate core; +use tokio::sync::watch; +use zksync_utils::wait_for_tasks::wait_for_tasks; + +use self::{checker::Checker, pubsub_checker::PubSubChecker}; +use crate::{config::CheckerConfig, helpers::setup_sigint_handler}; mod checker; mod config; @@ -6,13 +10,6 @@ mod divergence; mod helpers; mod pubsub_checker; -use crate::config::CheckerConfig; -use crate::helpers::setup_sigint_handler; -use checker::Checker; -use pubsub_checker::PubSubChecker; -use tokio::sync::watch; -use zksync_utils::wait_for_tasks::wait_for_tasks; - #[tokio::main] async fn main() -> anyhow::Result<()> { #[allow(deprecated)] // TODO (QIT-21): Use centralized configuration approach. 
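The checker hunk above decides, per instance, whether a batch still needs checking by comparing the instance's cursor against the batch number of the current miniblock. A minimal sketch of that ordering logic, simplified to plain `u32` values (the function name and error text here are illustrative, not the checker's actual API):

```rust
use std::cmp::Ordering::{Equal, Greater, Less};

// Simplified view of the per-instance bookkeeping: a batch is checked exactly
// once, the first time its number matches the instance's cursor.
fn should_check_batch(
    next_batch_to_check: u32,
    miniblock_batch_number: u32,
) -> Result<bool, String> {
    match next_batch_to_check.cmp(&miniblock_batch_number) {
        Greater => Ok(false), // This batch has already been checked.
        Equal => Ok(true),    // First encounter; check it and advance the cursor.
        Less => Err(format!(
            "the next batch number to check (#{next_batch_to_check}) is less than \
             the current miniblock batch number (#{miniblock_batch_number})"
        )),
    }
}

fn main() {
    assert_eq!(should_check_batch(5, 5), Ok(true)); // first encounter
    assert_eq!(should_check_batch(6, 5), Ok(false)); // already checked
    assert!(should_check_batch(4, 5).is_err()); // a batch was skipped
}
```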
diff --git a/core/tests/cross_external_nodes_checker/src/pubsub_checker.rs b/core/tests/cross_external_nodes_checker/src/pubsub_checker.rs index 78860210297..3e000e83d8f 100644 --- a/core/tests/cross_external_nodes_checker/src/pubsub_checker.rs +++ b/core/tests/cross_external_nodes_checker/src/pubsub_checker.rs @@ -1,14 +1,10 @@ -use crate::{ - config::CheckerConfig, - divergence::{Divergence, DivergenceDetails}, - helpers::{compare_json, ExponentialBackoff}, -}; -use anyhow::Context as _; use std::{ collections::HashMap, sync::Arc, time::{Duration, Instant}, }; + +use anyhow::Context as _; use tokio::{ select, spawn, sync::{watch::Receiver, Mutex as TokioMutex}, @@ -20,7 +16,7 @@ use zksync_web3_decl::{ jsonrpsee::{ core::{ client::{Subscription, SubscriptionClientT}, - Error, + ClientError, }, rpc_params, ws_client::{WsClient, WsClientBuilder}, @@ -28,6 +24,12 @@ use zksync_web3_decl::{ types::{BlockHeader, PubSubResult}, }; +use crate::{ + config::CheckerConfig, + divergence::{Divergence, DivergenceDetails}, + helpers::{compare_json, ExponentialBackoff}, +}; + const MAX_RETRIES: u32 = 6; const GRACE_PERIOD: Duration = Duration::from_secs(60); const SUBSCRIPTION_TIMEOUT: Duration = Duration::from_secs(120); @@ -264,7 +266,7 @@ impl PubSubChecker { // Extract the block header and block number from the pubsub result that is expected to be a header. async fn extract_block_info( &self, - pubsub_res: Result, + pubsub_res: Result, ) -> Result<(BlockHeader, U64), anyhow::Error> { let PubSubResult::Header(header) = pubsub_res? else { return Err(anyhow::anyhow!("Received non-header pubsub result")); diff --git a/core/tests/loadnext/README.md b/core/tests/loadnext/README.md index 99c0a47d5b7..52b4c68dec3 100644 --- a/core/tests/loadnext/README.md +++ b/core/tests/loadnext/README.md @@ -1,6 +1,6 @@ # Loadnext: loadtest for zkSync -Loadnext is an utility for random stress-testing the zkSync server. It is capable of simulating the behavior of many +Loadnext is a utility for random stress-testing the zkSync server. It is capable of simulating the behavior of many independent users of zkSync network, who are sending quasi-random requests to the server. The general flow is as follows: @@ -8,7 +8,7 @@ The general flow is as follows: - The master account performs an initial deposit to L2 - Paymaster on L2 is funded if necessary - The L2 master account distributes funds to the participating accounts (`accounts_amount` configuration option) -- Each account continiously sends L2 transactions as configured in `contract_execution_params` configuration option. At +- Each account continuously sends L2 transactions as configured in `contract_execution_params` configuration option. At any given time there are no more than `max_inflight_txs` transactions in flight for each account. - Once each account is done with the initial deposit, the test is run for `duration_sec` seconds. - After the test is finished, the master account withdraws all the remaining funds from L2. @@ -18,7 +18,7 @@ The general flow is as follows: It: -- doesn't care whether the server is alive or not. At worst, it will just consider the test failed. +- doesn't care whether the server is alive or not. In the worst-case scenario, it will simply mark the test as failed. - does a unique set of operations for each participating account. - sends transactions and priority operations. - sends incorrect transactions as well as correct ones and compares the outcome to the expected one. 
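The `max_inflight_txs` bound described in the README above can be pictured as a semaphore around transaction submission. A minimal, self-contained sketch under that assumption (not loadnext's actual implementation; `submit_and_wait` is a hypothetical stand-in for sending a transaction and polling its receipt):

```rust
use std::{sync::Arc, time::Duration};
use tokio::sync::Semaphore;

#[tokio::main]
async fn main() {
    run_account(8).await; // e.g. at most 8 transactions in flight
}

async fn run_account(max_inflight_txs: usize) {
    let inflight = Arc::new(Semaphore::new(max_inflight_txs));
    let mut handles = Vec::new();
    for tx_id in 0..100u32 {
        // Acquire a permit before submitting: at most `max_inflight_txs`
        // submissions can be awaiting their receipts at any moment.
        let permit = inflight.clone().acquire_owned().await.unwrap();
        handles.push(tokio::spawn(async move {
            submit_and_wait(tx_id).await;
            drop(permit); // Receipt received; free a slot for the next tx.
        }));
    }
    for handle in handles {
        handle.await.unwrap();
    }
}

async fn submit_and_wait(tx_id: u32) {
    // Stand-in for sending a transaction and polling for its receipt.
    tokio::time::sleep(Duration::from_millis(10)).await;
    println!("tx {tx_id} confirmed");
}
```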
diff --git a/core/tests/loadnext/src/account/api_request_executor.rs b/core/tests/loadnext/src/account/api_request_executor.rs index e1e09004d4e..18d25a1da9c 100644 --- a/core/tests/loadnext/src/account/api_request_executor.rs +++ b/core/tests/loadnext/src/account/api_request_executor.rs @@ -2,7 +2,6 @@ use std::time::Instant; use rand::seq::IteratorRandom; use regex::Regex; - use zksync::{ error::{ClientError, RpcError}, types::FilterBuilder, diff --git a/core/tests/loadnext/src/account/mod.rs b/core/tests/loadnext/src/account/mod.rs index bead354cdea..9a1243151d0 100644 --- a/core/tests/loadnext/src/account/mod.rs +++ b/core/tests/loadnext/src/account/mod.rs @@ -1,18 +1,16 @@ -use futures::{channel::mpsc, SinkExt}; use std::{ collections::VecDeque, sync::Arc, time::{Duration, Instant}, }; -use tokio::sync::RwLock; +use futures::{channel::mpsc, SinkExt}; +use tokio::sync::RwLock; use zksync::{error::ClientError, operations::SyncTransactionHandle, HttpClient}; -use zksync_types::{api::TransactionReceipt, Address, Nonce, H256, U256, U64}; -use zksync_web3_decl::jsonrpsee::core::Error as CoreError; - use zksync_contracts::test_contracts::LoadnextContractExecutionParams; +use zksync_types::{api::TransactionReceipt, Address, Nonce, H256, U256, U64}; +use zksync_web3_decl::jsonrpsee::core::ClientError as CoreError; -use crate::utils::format_gwei; use crate::{ account::tx_command_executor::SubmitResult, account_pool::{AddressPool, TestWallet}, @@ -20,6 +18,7 @@ use crate::{ config::{LoadtestConfig, RequestLimiters}, constants::{MAX_L1_TRANSACTIONS, POLLING_INTERVAL}, report::{Report, ReportBuilder, ReportLabel}, + utils::format_gwei, }; mod api_request_executor; @@ -59,7 +58,7 @@ pub struct AccountLifespan { contract_execution_params: LoadnextContractExecutionParams, /// Pool of account addresses, used to generate commands. addresses: AddressPool, - /// Successful transactions, required for requesting api + /// Successful transactions, required for requesting API successfully_sent_txs: Arc>>, /// L1 ERC-20 token used in the test. main_l1_token: Address, @@ -137,7 +136,7 @@ impl AccountLifespan { let is_l1_transaction = matches!(command.command_type, TxType::L1Execute | TxType::Deposit); if is_l1_transaction && l1_tx_count >= MAX_L1_TRANSACTIONS { - continue; // Skip command to not run out of ethereum on L1 + continue; // Skip command to not run out of Ethereum on L1 } // The new transaction should be sent only if mempool is not full @@ -222,7 +221,7 @@ impl AccountLifespan { expected_outcome: &ExpectedOutcome, ) -> ReportLabel { match expected_outcome { - ExpectedOutcome::TxSucceed if transaction_receipt.status == Some(U64::one()) => { + ExpectedOutcome::TxSucceed if transaction_receipt.status == U64::one() => { // If it was a successful `DeployContract` transaction, set the contract // address for subsequent usage by `Execute`. if let Some(address) = transaction_receipt.contract_address { @@ -233,7 +232,7 @@ impl AccountLifespan { // Transaction succeed and it should have. ReportLabel::done() } - ExpectedOutcome::TxRejected if transaction_receipt.status == Some(U64::zero()) => { + ExpectedOutcome::TxRejected if transaction_receipt.status == U64::zero() => { // Transaction failed and it should have. 
ReportLabel::done() } diff --git a/core/tests/loadnext/src/account/pubsub_executor.rs b/core/tests/loadnext/src/account/pubsub_executor.rs index 2dec5dbd8c6..d3c9d7144f1 100644 --- a/core/tests/loadnext/src/account/pubsub_executor.rs +++ b/core/tests/loadnext/src/account/pubsub_executor.rs @@ -1,9 +1,7 @@ -use futures::{stream, TryStreamExt}; - use std::time::{Duration, Instant}; -use zksync::error::ClientError; -use zksync::types::PubSubFilterBuilder; +use futures::{stream, TryStreamExt}; +use zksync::{error::ClientError, types::PubSubFilterBuilder}; use zksync_web3_decl::{ jsonrpsee::{ core::client::{Subscription, SubscriptionClientT}, diff --git a/core/tests/loadnext/src/account/tx_command_executor.rs b/core/tests/loadnext/src/account/tx_command_executor.rs index 9fb8631fdc1..599656cd034 100644 --- a/core/tests/loadnext/src/account/tx_command_executor.rs +++ b/core/tests/loadnext/src/account/tx_command_executor.rs @@ -1,12 +1,13 @@ use std::time::Instant; -use zksync::web3::ethabi; -use zksync::EthNamespaceClient; + use zksync::{ error::ClientError, ethereum::PriorityOpHolder, utils::{ get_approval_based_paymaster_input, get_approval_based_paymaster_input_for_estimation, }, + web3::ethabi, + EthNamespaceClient, }; use zksync_eth_client::EthInterface; use zksync_system_constants::MAX_L1_TRANSACTION_GAS_LIMIT; @@ -16,14 +17,13 @@ use zksync_types::{ Address, H256, U256, }; -use crate::account::ExecutionType; -use crate::utils::format_gwei; use crate::{ - account::AccountLifespan, + account::{AccountLifespan, ExecutionType}, command::{IncorrectnessModifier, TxCommand, TxType}, constants::{ETH_CONFIRMATION_TIMEOUT, ETH_POLLING_INTERVAL}, corrupted_tx::Corrupted, report::ReportLabel, + utils::format_gwei, }; #[derive(Debug)] @@ -201,7 +201,7 @@ impl AccountLifespan { }?; // Update current nonce for future txs - // If the transaction has a tx_hash and is small enough to be included in a block, this tx will change the nonce. + // If the transaction has a `tx_hash` and is small enough to be included in a block, this tx will change the nonce. // We can be sure that the nonce will be changed based on this assumption. if let SubmitResult::TxHash(_) = &result { self.current_nonce = Some(nonce + 1) @@ -445,13 +445,11 @@ impl AccountLifespan { .get_transaction_receipt(tx_hash) .await?; - let receipt = if response.as_ref().and_then(|r| r.block_number).is_some() { - response.unwrap() - } else { + let Some(receipt) = response else { return Ok(None); }; - let block_number = receipt.block_number.unwrap(); + let block_number = receipt.block_number; let response = self .wallet diff --git a/core/tests/loadnext/src/account_pool.rs b/core/tests/loadnext/src/account_pool.rs index 556bee7f402..a6ae8cd6816 100644 --- a/core/tests/loadnext/src/account_pool.rs +++ b/core/tests/loadnext/src/account_pool.rs @@ -3,7 +3,6 @@ use std::{collections::VecDeque, convert::TryFrom, str::FromStr, sync::Arc, time use once_cell::sync::OnceCell; use rand::Rng; use tokio::time::timeout; - use zksync::{signer::Signer, HttpClient, HttpClientBuilder, Wallet, ZksNamespaceClient}; use zksync_eth_signer::PrivateKeySigner; use zksync_types::{tx::primitives::PackedEthSignature, Address, L2ChainId, H256}; @@ -76,7 +75,7 @@ pub struct TestWallet { } /// Pool of accounts to be used in the test. -/// Each account is represented as `zksync::Wallet` in order to provide convenient interface of interation with zkSync. +/// Each account is represented as `zksync::Wallet` in order to provide convenient interface of interaction with zkSync. 
#[derive(Debug)] pub struct AccountPool { /// Main wallet that will be used to initialize all the test wallets. @@ -91,7 +90,7 @@ impl AccountPool { /// Generates all the required test accounts and prepares `Wallet` objects. pub async fn new(config: &LoadtestConfig) -> anyhow::Result { let l2_chain_id = L2ChainId::try_from(config.l2_chain_id).unwrap(); - // Create a client for pinging the rpc. + // Create a client for pinging the RPC. let client = HttpClientBuilder::default() .build(&config.l2_rpc_address) .unwrap(); diff --git a/core/tests/loadnext/src/command/api.rs b/core/tests/loadnext/src/command/api.rs index e865ab00031..278f4e1a749 100644 --- a/core/tests/loadnext/src/command/api.rs +++ b/core/tests/loadnext/src/command/api.rs @@ -1,6 +1,5 @@ use num::Integer; use rand::RngCore; - use zksync::EthNamespaceClient; use zksync_types::api; @@ -54,7 +53,7 @@ impl AllWeighted for ApiRequestType { pub struct ApiRequest { /// Type of the request to be performed. pub request_type: ApiRequestType, - /// ZkSync block number, generated randomly. + /// zkSync block number, generated randomly. pub block_number: api::BlockNumber, } @@ -74,7 +73,7 @@ async fn random_block_number(wallet: &SyncWallet, rng: &mut LoadtestRng) -> api: match block_number { BlockNumber::Committed => api::BlockNumber::Committed, BlockNumber::Number => { - // Choose a random block in the range [0, latest_committed_block_number). + // Choose a random block in the range `[0, latest_committed_block_number)`. match wallet .provider .get_block_by_number(api::BlockNumber::Committed, false) diff --git a/core/tests/loadnext/src/command/tx_command.rs b/core/tests/loadnext/src/command/tx_command.rs index 945a7ca16bb..84e07d1f0d2 100644 --- a/core/tests/loadnext/src/command/tx_command.rs +++ b/core/tests/loadnext/src/command/tx_command.rs @@ -1,7 +1,6 @@ use once_cell::sync::OnceCell; use rand::Rng; use static_assertions::const_assert; - use zksync_types::{Address, U256}; use crate::{ diff --git a/core/tests/loadnext/src/config.rs b/core/tests/loadnext/src/config.rs index d62f4cdb63e..2cc6317059d 100644 --- a/core/tests/loadnext/src/config.rs +++ b/core/tests/loadnext/src/config.rs @@ -1,12 +1,9 @@ +use std::{path::PathBuf, time::Duration}; + use serde::Deserialize; use tokio::sync::Semaphore; - -use std::path::PathBuf; -use std::time::Duration; - use zksync_contracts::test_contracts::LoadnextContractExecutionParams; -use zksync_types::network::Network; -use zksync_types::{Address, L2ChainId, H160}; +use zksync_types::{network::Network, Address, L2ChainId, H160}; use crate::fs_utils::read_tokens; @@ -155,8 +152,8 @@ fn default_l1_rpc_address() -> String { fn default_master_wallet_pk() -> String { // Use this key only for localhost because it is compromised! - // Using this key for rinkeby will result in losing rinkeby ETH. - // Corresponding wallet is 0x36615Cf349d7F6344891B1e7CA7C72883F5dc049 + // Using this key for Rinkeby will result in losing Rinkeby ETH. + // Corresponding wallet is `0x36615Cf349d7F6344891B1e7CA7C72883F5dc049` let result = "7726827caac94a7f9e1b160f7ea819f172f7b6f9d2a97f992c38edeab82d4110".to_string(); tracing::info!("Using default MASTER_WALLET_PK: {result}"); result @@ -184,7 +181,7 @@ fn default_main_token() -> H160 { // Read token addresses from `etc/tokens/localhost.json`. Use the first one // as a main token since all of them are suitable. 
- // 0xeb8f08a975Ab53E34D8a0330E0D34de942C95926 for rinkeby + // `0xeb8f08a975Ab53E34D8a0330E0D34de942C95926` for Rinkeby let tokens = read_tokens(Network::Localhost).expect("Failed to parse tokens file"); let main_token = tokens.first().expect("Loaded tokens list is empty"); tracing::info!("Main token: {main_token:?}"); @@ -228,7 +225,7 @@ fn default_seed() -> Option { } fn default_l2_chain_id() -> u64 { - // 270 for rinkeby + // 270 for Rinkeby let result = L2ChainId::default().as_u64(); tracing::info!("Using default L2_CHAIN_ID: {result}"); result @@ -239,14 +236,14 @@ pub fn get_default_l2_rpc_address() -> String { } fn default_l2_rpc_address() -> String { - // https://z2-dev-api.zksync.dev:443 for stage2 + // `https://z2-dev-api.zksync.dev:443` for stage2 let result = get_default_l2_rpc_address(); tracing::info!("Using default L2_RPC_ADDRESS: {result}"); result } fn default_l2_ws_rpc_address() -> String { - // ws://z2-dev-api.zksync.dev:80/ws for stage2 + // `ws://z2-dev-api.zksync.dev:80/ws` for stage2 let result = "ws://127.0.0.1:3051".to_string(); tracing::info!("Using default L2_WS_RPC_ADDRESS: {result}"); result @@ -315,9 +312,9 @@ impl TransactionWeights { impl Default for TransactionWeights { fn default() -> Self { Self { - deposit: 0.1, + deposit: 0.05, withdrawal: 0.5, - l1_transactions: 0.1, + l1_transactions: 0.05, l2_transactions: 1.0, } } diff --git a/core/tests/loadnext/src/corrupted_tx.rs b/core/tests/loadnext/src/corrupted_tx.rs index c3ada60472e..c51b0c88d02 100644 --- a/core/tests/loadnext/src/corrupted_tx.rs +++ b/core/tests/loadnext/src/corrupted_tx.rs @@ -1,13 +1,13 @@ use async_trait::async_trait; - use zksync::signer::Signer; -use zksync_eth_signer::{error::SignerError, EthereumSigner}; -use zksync_types::{Address, EIP712TypedStructure, Eip712Domain, PackedEthSignature, H256}; +use zksync_eth_signer::{ + error::SignerError, raw_ethereum_tx::TransactionParameters, EthereumSigner, +}; +use zksync_types::{ + fee::Fee, l2::L2Tx, Address, EIP712TypedStructure, Eip712Domain, PackedEthSignature, H256, +}; use crate::command::IncorrectnessModifier; -use zksync_eth_signer::raw_ethereum_tx::TransactionParameters; -use zksync_types::fee::Fee; -use zksync_types::l2::L2Tx; /// Trait that exists solely to extend the signed zkSync transaction interface, providing the ability /// to modify transaction in a way that will make it invalid. 
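The trait described above takes a valid signed transaction and deliberately damages one aspect of it, so the server's rejection paths can be exercised. A toy sketch of the idea; `Modifier` and `SignedTx` are hypothetical, simplified stand-ins for `IncorrectnessModifier` and the real transaction type:

```rust
#[derive(Clone, Copy, Debug)]
enum Modifier {
    ZeroFee,
    CorruptedSignature,
}

struct SignedTx {
    fee: u64,
    signature: Vec<u8>,
}

fn corrupt(mut tx: SignedTx, modifier: Modifier) -> SignedTx {
    match modifier {
        // Underpriced transaction: should be rejected by fee validation.
        Modifier::ZeroFee => tx.fee = 0,
        // Flip one byte so signature recovery yields a different address.
        Modifier::CorruptedSignature => {
            if let Some(byte) = tx.signature.first_mut() {
                *byte ^= 0xff;
            }
        }
    }
    tx
}

fn main() {
    let tx = SignedTx { fee: 100, signature: vec![0xab; 65] };
    let bad = corrupt(tx, Modifier::ZeroFee);
    assert_eq!(bad.fee, 0);
}
```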
@@ -94,14 +94,14 @@ impl EthereumSigner for CorruptedSigner { #[cfg(test)] mod tests { - use super::*; use zksync_eth_signer::PrivateKeySigner; - use zksync_types::fee::Fee; - use zksync_types::L2ChainId; use zksync_types::{ - tokens::ETHEREUM_ADDRESS, tx::primitives::PackedEthSignature, Address, Nonce, H256, + fee::Fee, tokens::ETHEREUM_ADDRESS, tx::primitives::PackedEthSignature, Address, L2ChainId, + Nonce, H256, }; + use super::*; + const AMOUNT: u64 = 100; const FEE: u64 = 100; const NONCE: Nonce = Nonce(1); diff --git a/core/tests/loadnext/src/executor.rs b/core/tests/loadnext/src/executor.rs index 08d1ce47d6a..e87fd22b867 100644 --- a/core/tests/loadnext/src/executor.rs +++ b/core/tests/loadnext/src/executor.rs @@ -1,14 +1,15 @@ -use anyhow::anyhow; -use futures::{channel::mpsc, future, SinkExt}; - use std::sync::Arc; -use zksync::ethereum::{PriorityOpHolder, DEFAULT_PRIORITY_FEE}; -use zksync::utils::{ - get_approval_based_paymaster_input, get_approval_based_paymaster_input_for_estimation, +use anyhow::anyhow; +use futures::{channel::mpsc, future, SinkExt}; +use zksync::{ + ethereum::{PriorityOpHolder, DEFAULT_PRIORITY_FEE}, + utils::{ + get_approval_based_paymaster_input, get_approval_based_paymaster_input_for_estimation, + }, + web3::{contract::Options, types::TransactionReceipt}, + EthNamespaceClient, EthereumProvider, ZksNamespaceClient, }; -use zksync::web3::{contract::Options, types::TransactionReceipt}; -use zksync::{EthNamespaceClient, EthereumProvider, ZksNamespaceClient}; use zksync_eth_client::{BoundEthInterface, EthInterface}; use zksync_eth_signer::PrivateKeySigner; use zksync_system_constants::MAX_L1_TRANSACTION_GAS_LIMIT; @@ -17,14 +18,14 @@ use zksync_types::{ REQUIRED_L1_TO_L2_GAS_PER_PUBDATA_BYTE, U256, U64, }; -use crate::report::ReportBuilder; -use crate::utils::format_eth; use crate::{ account::AccountLifespan, account_pool::AccountPool, config::{ExecutionConfig, LoadtestConfig, RequestLimiters}, constants::*, + report::ReportBuilder, report_collector::{LoadtestResult, ReportCollector}, + utils::format_eth, }; /// Executor is the entity capable of running the loadtest flow. @@ -53,7 +54,7 @@ impl Executor { ) -> anyhow::Result { let pool = AccountPool::new(&config).await?; - // derive l2 main token address + // derive L2 main token address let l2_main_token = pool .master_wallet .ethereum(&config.l1_rpc_address) @@ -470,7 +471,7 @@ impl Executor { .commit_timeout(COMMIT_TIMEOUT) .wait_for_commit() .await?; - if result.status == Some(U64::zero()) { + if result.status == U64::zero() { return Err(anyhow::format_err!("Transfer failed")); } } diff --git a/core/tests/loadnext/src/fs_utils.rs b/core/tests/loadnext/src/fs_utils.rs index d5b92b3c7a9..9fee9916f91 100644 --- a/core/tests/loadnext/src/fs_utils.rs +++ b/core/tests/loadnext/src/fs_utils.rs @@ -1,14 +1,10 @@ //! Utilities used for reading tokens, contracts bytecode and ABI from the //! filesystem. -use std::fs::File; -use std::io::BufReader; -use std::path::Path; +use std::{fs::File, io::BufReader, path::Path}; use serde::Deserialize; - -use zksync_types::network::Network; -use zksync_types::{ethabi::Contract, Address}; +use zksync_types::{ethabi::Contract, network::Network, Address}; /// A token stored in `etc/tokens/{network}.json` files. 
#[derive(Debug, Deserialize)] @@ -93,9 +89,10 @@ pub fn loadnext_contract(path: &Path) -> anyhow::Result { #[cfg(test)] mod tests { - use super::*; use std::path::PathBuf; + use super::*; + #[test] fn check_read_test_contract() { let test_contracts_path = { diff --git a/core/tests/loadnext/src/main.rs b/core/tests/loadnext/src/main.rs index 5d35c4e7f79..595532706c7 100644 --- a/core/tests/loadnext/src/main.rs +++ b/core/tests/loadnext/src/main.rs @@ -4,19 +4,17 @@ //! Without required variables provided, test is launched in the localhost/development mode with some hard-coded //! values to check the local zkSync deployment. -use tokio::sync::watch; - use std::time::Duration; -use prometheus_exporter::PrometheusExporterConfig; -use zksync_config::configs::api::PrometheusConfig; - use loadnext::{ command::TxType, config::{ExecutionConfig, LoadtestConfig}, executor::Executor, report_collector::LoadtestResult, }; +use prometheus_exporter::PrometheusExporterConfig; +use tokio::sync::watch; +use zksync_config::configs::api::PrometheusConfig; #[tokio::main] async fn main() -> anyhow::Result<()> { diff --git a/core/tests/loadnext/src/report.rs b/core/tests/loadnext/src/report.rs index 0ea86a49de0..e6c6bfdb551 100644 --- a/core/tests/loadnext/src/report.rs +++ b/core/tests/loadnext/src/report.rs @@ -2,8 +2,8 @@ use std::time::Duration; use zksync_types::Address; -use crate::account::ExecutionType; use crate::{ + account::ExecutionType, all::All, command::{ApiRequest, ApiRequestType, SubscriptionType, TxCommand, TxType}, }; diff --git a/core/tests/loadnext/src/report_collector/metrics_collector.rs b/core/tests/loadnext/src/report_collector/metrics_collector.rs index 6c5867901b1..90775f039eb 100644 --- a/core/tests/loadnext/src/report_collector/metrics_collector.rs +++ b/core/tests/loadnext/src/report_collector/metrics_collector.rs @@ -151,7 +151,7 @@ mod tests { #[test] fn histogram_window_size() { - // Vector of ((window_idx, window_size), expected_range)). + // Vector of `((window_idx, window_size), expected_range))`. let test_vector = [ ((0, 100), (0, 99)), ((1, 100), (100, 199)), diff --git a/core/tests/loadnext/src/report_collector/mod.rs b/core/tests/loadnext/src/report_collector/mod.rs index a7798e0bea7..6a7a5de39ba 100644 --- a/core/tests/loadnext/src/report_collector/mod.rs +++ b/core/tests/loadnext/src/report_collector/mod.rs @@ -1,8 +1,8 @@ +use std::time::{Duration, Instant}; + use futures::{channel::mpsc::Receiver, StreamExt}; use operation_results_collector::OperationResultsCollector; -use std::time::{Duration, Instant}; - use crate::{ report::{ActionType, Report, ReportLabel}, report_collector::metrics_collector::MetricsCollector, diff --git a/core/tests/loadnext/src/report_collector/operation_results_collector.rs b/core/tests/loadnext/src/report_collector/operation_results_collector.rs index 63f2bb7dbf9..ab460af839b 100644 --- a/core/tests/loadnext/src/report_collector/operation_results_collector.rs +++ b/core/tests/loadnext/src/report_collector/operation_results_collector.rs @@ -1,7 +1,7 @@ -use crate::report::{ActionType, ReportLabel}; - use std::{fmt, time::Duration}; +use crate::report::{ActionType, ReportLabel}; + /// Collector that analyzes the outcomes of the performed operations. /// Currently it's solely capable of deciding whether test was failed or not. /// API requests are counted separately. 
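A toy version of the pass/fail decision described in the `OperationResultsCollector` doc comment above; the tally-and-threshold logic is illustrative only (the 10% failure threshold is an arbitrary assumption, not the crate's actual criterion):

```rust
#[derive(Default)]
struct ResultsCollector {
    successes: u64,
    failures: u64,
}

impl ResultsCollector {
    fn record(&mut self, ok: bool) {
        if ok {
            self.successes += 1;
        } else {
            self.failures += 1;
        }
    }

    // Illustrative criterion: the run passes if at most 10% of operations failed.
    fn test_passed(&self) -> bool {
        let total = self.successes + self.failures;
        total == 0 || self.failures * 10 <= total
    }
}

fn main() {
    let mut collector = ResultsCollector::default();
    for i in 0..100 {
        collector.record(i % 20 != 0); // 5% failures
    }
    assert!(collector.test_passed());
}
```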
diff --git a/core/tests/loadnext/src/rng.rs b/core/tests/loadnext/src/rng.rs index 4d5ab84c714..20f163e69ee 100644 --- a/core/tests/loadnext/src/rng.rs +++ b/core/tests/loadnext/src/rng.rs @@ -1,7 +1,6 @@ use std::convert::TryInto; use rand::{rngs::SmallRng, seq::SliceRandom, thread_rng, RngCore, SeedableRng}; - use zksync::web3::signing::keccak256; use zksync_types::H256; @@ -46,7 +45,7 @@ impl LoadtestRng { // We chain the current seed bytes and the Ethereum private key together, // and then calculate the hash of this data. // This way we obtain a derived seed, unique for each wallet, which will result in - // an uniques set of operations for each account. + // an unique set of operations for each account. let input_bytes: Vec = self .seed .iter() diff --git a/core/tests/loadnext/src/utils.rs b/core/tests/loadnext/src/utils.rs index 3f528e97e34..95f61c8cee8 100644 --- a/core/tests/loadnext/src/utils.rs +++ b/core/tests/loadnext/src/utils.rs @@ -1,4 +1,5 @@ use std::ops::Div; + use zksync_types::U256; pub fn format_eth(value: U256) -> String { diff --git a/core/tests/revert-test/tests/revert-and-restart.test.ts b/core/tests/revert-test/tests/revert-and-restart.test.ts index c4ab6823c10..bbbd5136859 100644 --- a/core/tests/revert-test/tests/revert-and-restart.test.ts +++ b/core/tests/revert-test/tests/revert-and-restart.test.ts @@ -76,7 +76,7 @@ describe('Block reverting test', function () { process.env.DATABASE_MERKLE_TREE_MODE = 'full'; // Run server in background. - const components = 'api,tree,eth,data_fetcher,state_keeper'; + const components = 'api,tree,eth,state_keeper'; utils.background(`zk server --components ${components}`, [null, logs, logs]); // Server may need some time to recompile if it's a cold run, so wait for it. let iter = 0; @@ -172,7 +172,7 @@ describe('Block reverting test', function () { process.env.ETH_SENDER_SENDER_AGGREGATED_BLOCK_EXECUTE_DEADLINE = '1'; // Run server. - utils.background('zk server --components api,tree,eth,data_fetcher,state_keeper', [null, logs, logs]); + utils.background('zk server --components api,tree,eth,state_keeper', [null, logs, logs]); await utils.sleep(10); const balanceBefore = await alice.getBalance(); @@ -184,7 +184,16 @@ describe('Block reverting test', function () { amount: depositAmount, to: alice.address }); - let receipt = await depositHandle.waitFinalize(); + const l1TxResponse = await alice._providerL1().getTransaction(depositHandle.hash); + // ethers doesn't work well with block reversions, so wait for the receipt before calling `.waitFinalize()`. + const l2Tx = await alice._providerL2().getL2TransactionFromPriorityOp(l1TxResponse); + let receipt = null; + do { + receipt = await tester.syncWallet.provider.getTransactionReceipt(l2Tx.hash); + await utils.sleep(1); + } while (receipt == null); + + await depositHandle.waitFinalize(); expect(receipt.status).to.be.eql(1); const balanceAfter = await alice.getBalance(); @@ -200,7 +209,7 @@ describe('Block reverting test', function () { await killServerAndWaitForShutdown(tester); // Run again. 
- utils.background(`zk server --components=api,tree,eth,data_fetcher,state_keeper`, [null, logs, logs]); + utils.background(`zk server --components=api,tree,eth,state_keeper`, [null, logs, logs]); await utils.sleep(10); // Trying to send a transaction from the same address again @@ -219,7 +228,13 @@ async function checkedRandomTransfer(sender: zkweb3.Wallet, amount: BigNumber) { to: receiver.address, value: amount }); - const txReceipt = await transferHandle.wait(); + + // ethers doesn't work well with block reversions, so we poll for the receipt manually. + let txReceipt = null; + do { + txReceipt = await sender.provider.getTransactionReceipt(transferHandle.hash); + await utils.sleep(1); + } while (txReceipt == null); const senderBalance = await sender.getBalance(); const receiverBalance = await receiver.getBalance(); diff --git a/core/tests/ts-integration/README.md b/core/tests/ts-integration/README.md index b3707cac664..c29e15d936d 100644 --- a/core/tests/ts-integration/README.md +++ b/core/tests/ts-integration/README.md @@ -97,7 +97,7 @@ implemented, register them at [setup file](./src/jest-setup/add-matchers.ts) and ### Matcher modifiers `toBeAccepted` and `toBeRejected` matchers accept modifiers. You can see one (`shouldChangeETHBalances`) above. There -are others (like `shouldChangeTokenBalances` or `shouldOnlyTakeFee`), and if needed you can create your onw ones. +are others (like `shouldChangeTokenBalances` or `shouldOnlyTakeFee`), and if needed you can create your own ones. These modifiers would be applied to the transaction receipt, and you can implement any kind of custom logic there. To do so, you just need to declare a class that inherits `MatcherModifier` class and implements the `check` method. @@ -134,7 +134,7 @@ finalization: it may take several hours to generate a proof and send it onchain. Because of that, the framework supports "fast" and "long" modes. `TestMaster` objects have `isFastMode` method to determine which mode is currently being used. -If you're going to write a test that can make test run duration longer, it is adviced to guard the "long" part with the +If you're going to write a test that can make test run duration longer, it is advised to guard the "long" part with the corresponding check. 
By default, "long" mode is assumed, and to enable the "fast" mode one must set the `ZK_INTEGRATION_TESTS_FAST_MODE` diff --git a/core/tests/ts-integration/contracts/counter/zkVM_bytecode.txt b/core/tests/ts-integration/contracts/counter/zkVM_bytecode.txt new file mode 100644 index 00000000000..d47161d8fcd --- /dev/null +++ b/core/tests/ts-integration/contracts/counter/zkVM_bytecode.txt @@ -0,0 +1 @@ +0x000100000000000200000000000103550000008003000039000000400030043f0000000003010019000000600330027000000027033001970000000102200190000000160000c13d000000040230008c000000550000413d000000000201043b000000e002200270000000290420009c0000001e0000a13d0000002a0420009c0000002b0000613d0000002b0420009c000000320000613d0000002c0120009c0000004a0000613d000000550000013d0000000001000416000000000101004b000000550000c13d0000002001000039000001000010044300000120000004430000002801000041000000970001042e0000002d0420009c000000470000613d0000002e0220009c000000550000c13d0000000002000416000000000202004b000000550000c13d000000040230008a000000200220008c000000550000413d0000000401100370000000000101043b000000570000013d0000000001000416000000000101004b000000550000c13d000000000100041a000000800010043f0000003101000041000000970001042e0000000002000416000000000202004b000000550000c13d000000040230008a000000200220008c000000550000413d0000000401100370000000000101043b000000000200041a0000000001120019000000000221004b000000000200001900000001020040390000000102200190000000570000613d0000002f0100004100000000001004350000001101000039000000040010043f000000300100004100000098000104300000000001000416000000000101004b000000550000c13d00000000010300190096005a0000040f009600730000040f000000400200043d00000000001204350000002701000041000000270320009c0000000002018019000000400120021000000032011001c7000000970001042e00000000010000190000009800010430000000000010041b0000000001000019000000970001042e000000040110008a00000033020000410000003f0310008c000000000300001900000000030220190000003301100197000000000401004b0000000002008019000000330110009c000000000203c019000000000102004b000000710000613d00000000010003670000002402100370000000000202043b000000000302004b0000000003000019000000010300c039000000000332004b000000710000c13d0000000401100370000000000101043b000000000001042d00000000010000190000009800010430000000000300041a0000000001130019000000000331004b0000000003000019000000010300403900000001033001900000007e0000c13d000000000010041b000000000202004b000000840000c13d000000000001042d0000002f0100004100000000001004350000001101000039000000040010043f00000030010000410000009800010430000000400100043d00000044021000390000003403000041000000000032043500000024021000390000001a030000390000000000320435000000350200004100000000002104350000000402100039000000200300003900000000003204350000002702000041000000270310009c0000000001028019000000400110021000000036011001c700000098000104300000009600000432000000970001042e000000980001043000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000ffffffff0000000200000000000000000000000000000040000001000000000000000000000000000000000000000000000000000000000000000000000000006d4ce63b000000000000000000000000000000000000000000000000000000006d4ce63c000000000000000000000000000000000000000000000000000000007cf5dab0000000000000000000000000000000000000000000000000000000008f9c4749000000000000000000000000000000000000000000000000000000000436dad60000000000000000000000000000000000000000000000000000000060fe47b14e487b710000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000240000000000000000000000000000000000000
0000000000000000000000000200000008000000000000000000000000000000000000000000000000000000020000000000000000000000000800000000000000000000000000000000000000000000000000000000000000054686973206d6574686f6420616c77617973207265766572747300000000000008c379a000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000006400000000000000000000000000000000000000000000000000000000000000000000000000000000000000005912077dbccfda7879a1765d5b280b2a37169a97193512ab62bd94e5027c06b3 \ No newline at end of file diff --git a/core/tests/ts-integration/contracts/custom-account/custom-paymaster.sol b/core/tests/ts-integration/contracts/custom-account/custom-paymaster.sol index 164aee98518..e55f093cb78 100644 --- a/core/tests/ts-integration/contracts/custom-account/custom-paymaster.sol +++ b/core/tests/ts-integration/contracts/custom-account/custom-paymaster.sol @@ -63,7 +63,7 @@ contract CustomPaymaster is IPaymaster { bool success = _transaction.payToTheBootloader(); require(success, "Failed to transfer funds to the bootloader"); - // For now, refunds are not supported, so we just test the fact that the transfered context is correct + // For now, refunds are not supported, so we just test the fact that the transferred context is correct txCounter += 1; context = abi.encode(txCounter); } else { diff --git a/core/tests/ts-integration/contracts/custom-account/interfaces/IPaymaster.sol b/core/tests/ts-integration/contracts/custom-account/interfaces/IPaymaster.sol index cf5ced94878..1bd5b81f32b 100644 --- a/core/tests/ts-integration/contracts/custom-account/interfaces/IPaymaster.sol +++ b/core/tests/ts-integration/contracts/custom-account/interfaces/IPaymaster.sol @@ -37,7 +37,7 @@ interface IPaymaster { /// @param _context, the context of the execution, returned by the "validateAndPayForPaymasterTransaction" method. /// @param _transaction, the users' transaction. /// @param _txResult, the result of the transaction execution (success or failure). - /// @param _maxRefundedGas, the upper bound on the amout of gas that could be refunded to the paymaster. + /// @param _maxRefundedGas, the upper bound on the amount of gas that could be refunded to the paymaster. /// @dev The exact amount refunded depends on the gas spent by the "postOp" itself and so the developers should /// take that into account. 
function postTransaction( diff --git a/core/tests/ts-integration/hardhat.config.ts b/core/tests/ts-integration/hardhat.config.ts index 166feea91d9..00abe2b32ef 100644 --- a/core/tests/ts-integration/hardhat.config.ts +++ b/core/tests/ts-integration/hardhat.config.ts @@ -4,7 +4,7 @@ import '@matterlabs/hardhat-zksync-vyper'; export default { zksolc: { - version: '1.3.16', + version: '1.3.21', compilerSource: 'binary', settings: { isSystem: true @@ -20,9 +20,9 @@ export default { } }, solidity: { - version: '0.8.21' + version: '0.8.23' }, vyper: { - version: '0.3.3' + version: '0.3.10' } }; diff --git a/core/tests/ts-integration/package.json b/core/tests/ts-integration/package.json index 95423097eaf..b27c2d855b6 100644 --- a/core/tests/ts-integration/package.json +++ b/core/tests/ts-integration/package.json @@ -4,13 +4,14 @@ "license": "MIT", "private": true, "scripts": { - "test": "zk f jest --forceExit --testTimeout 60000", + "test": "zk f jest --forceExit --testTimeout 120000", "long-running-test": "zk f jest", "fee-test": "RUN_FEE_TEST=1 zk f jest -- fees.test.ts", "api-test": "zk f jest -- api/web3.test.ts api/debug.test.ts", "contract-verification-test": "zk f jest -- api/contract-verification.test.ts", "build": "hardhat compile", - "build-yul": "hardhat run scripts/compile-yul.ts" + "build-yul": "hardhat run scripts/compile-yul.ts", + "snapshots-creator-test": "zk f jest -- api/snapshots-creator.test.ts" }, "devDependencies": { "@matterlabs/hardhat-zksync-deploy": "^0.6.1", diff --git a/core/tests/ts-integration/scripts/compile-yul.ts b/core/tests/ts-integration/scripts/compile-yul.ts index 26f779878ae..bdf25364418 100644 --- a/core/tests/ts-integration/scripts/compile-yul.ts +++ b/core/tests/ts-integration/scripts/compile-yul.ts @@ -7,7 +7,7 @@ import { getZksolcUrl, saltFromUrl } from '@matterlabs/hardhat-zksync-solc'; import { getCompilersDir } from 'hardhat/internal/util/global-dir'; import path from 'path'; -const COMPILER_VERSION = '1.3.16'; +const COMPILER_VERSION = '1.3.21'; const IS_COMPILER_PRE_RELEASE = false; async function compilerLocation(): Promise { diff --git a/core/tests/ts-integration/src/system.ts b/core/tests/ts-integration/src/system.ts index 6e29f71f7ca..4aa3c26534f 100644 --- a/core/tests/ts-integration/src/system.ts +++ b/core/tests/ts-integration/src/system.ts @@ -1,7 +1,7 @@ import { BigNumber, BytesLike, ethers } from 'ethers'; import { Provider, utils } from 'zksync-web3'; -const L1_CONTRACTS_FOLDER = `${process.env.ZKSYNC_HOME}/contracts/ethereum/artifacts/cache/solpp-generated-contracts`; +const L1_CONTRACTS_FOLDER = `${process.env.ZKSYNC_HOME}/contracts/l1-contracts/artifacts/cache/solpp-generated-contracts`; const DIAMOND_UPGRADE_INIT_ABI = new ethers.utils.Interface( require(`${L1_CONTRACTS_FOLDER}/zksync/upgrade-initializers/DiamondUpgradeInit1.sol/DiamondUpgradeInit1.json`).abi ); diff --git a/core/tests/ts-integration/src/test-master.ts b/core/tests/ts-integration/src/test-master.ts index 5318bbcc16d..fad2827e4f1 100644 --- a/core/tests/ts-integration/src/test-master.ts +++ b/core/tests/ts-integration/src/test-master.ts @@ -30,7 +30,7 @@ export class TestMaster { const contextStr = process.env.ZKSYNC_JEST_TEST_CONTEXT; if (!contextStr) { - throw new Error('Test context was not initalized; unable to load context environment variable'); + throw new Error('Test context was not initialized; unable to load context environment variable'); } const context = JSON.parse(contextStr) as TestContext; diff --git 
a/core/tests/ts-integration/tests/api/contract-verification.test.ts b/core/tests/ts-integration/tests/api/contract-verification.test.ts index cfda8a81074..61112ba685e 100644 --- a/core/tests/ts-integration/tests/api/contract-verification.test.ts +++ b/core/tests/ts-integration/tests/api/contract-verification.test.ts @@ -9,11 +9,12 @@ import { sleep } from 'zksync-web3/build/src/utils'; // Regular expression to match ISO dates. const DATE_REGEX = /\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d{6})?/; -const ZKSOLC_VERSION = 'v1.3.16'; -const SOLC_VERSION = '0.8.21'; +const ZKSOLC_VERSION = 'v1.3.21'; +const SOLC_VERSION = '0.8.23'; +const ZK_VM_SOLC_VERSION = 'zkVM-0.8.23-1.0.0'; const ZKVYPER_VERSION = 'v1.3.13'; -const VYPER_VERSION = '0.3.3'; +const VYPER_VERSION = '0.3.10'; type HttpMethod = 'POST' | 'GET'; @@ -67,6 +68,32 @@ describe('Tests for the contract verification API', () => { await expectVerifyRequestToSucceed(requestId, requestBody); }); + test('should test zkVM solc contract verification', async () => { + let artifact = contracts.counter; + // TODO: use plugin compilation when it's ready instead of pre-compiled bytecode. + artifact.bytecode = fs.readFileSync( + `${process.env.ZKSYNC_HOME}/core/tests/ts-integration/contracts/counter/zkVM_bytecode.txt`, + 'utf8' + ); + + const counterContract = await deployContract(alice, artifact, []); + const constructorArguments = counterContract.interface.encodeDeploy([]); + + const requestBody = { + contractAddress: counterContract.address, + contractName: 'contracts/counter/counter.sol:Counter', + sourceCode: getContractSource('counter/counter.sol'), + compilerZksolcVersion: ZKSOLC_VERSION, + compilerSolcVersion: ZK_VM_SOLC_VERSION, + optimizationUsed: true, + constructorArguments, + isSystem: true + }; + let requestId = await query('POST', '/contract_verification', undefined, requestBody); + + await expectVerifyRequestToSucceed(requestId, requestBody); + }); + test('should test multi-files contract verification', async () => { const contractFactory = new zksync.ContractFactory(contracts.create.abi, contracts.create.bytecode, alice); const contractHandle = await contractFactory.deploy({ diff --git a/core/tests/ts-integration/tests/api/snapshots-creator.test.ts b/core/tests/ts-integration/tests/api/snapshots-creator.test.ts new file mode 100644 index 00000000000..63d55a3d8a9 --- /dev/null +++ b/core/tests/ts-integration/tests/api/snapshots-creator.test.ts @@ -0,0 +1,86 @@ +import { TestMaster } from '../../src/index'; +import fs from 'fs'; +import * as zlib from 'zlib'; +import * as protobuf from 'protobufjs'; +import { snapshots_creator } from 'zk/build/run/run'; +import path from 'path'; + +describe('Snapshots API tests', () => { + let testMaster: TestMaster; + + beforeAll(() => { + testMaster = TestMaster.getInstance(__filename); + + if (process.env.ZKSYNC_ENV!.startsWith('ext-node')) { + console.warn("You are trying to run snapshots creator tests on external node. 
It's not supported."); + } + }); + + async function runCreator() { + await snapshots_creator(); + } + + async function rpcRequest(name: string, params: any) { + const response = await testMaster.mainAccount().provider.send(name, params); + return response; + } + + async function getAllSnapshots() { + return await rpcRequest('snapshots_getAllSnapshots', []); + } + + async function getSnapshot(snapshotL1Batch: number) { + return rpcRequest('snapshots_getSnapshot', [snapshotL1Batch]); + } + + async function decompressGzip(filePath: string): Promise { + return new Promise((resolve, reject) => { + const readStream = fs.createReadStream(filePath); + const gunzip = zlib.createGunzip(); + let chunks: Uint8Array[] = []; + + gunzip.on('data', (chunk) => chunks.push(chunk)); + gunzip.on('end', () => resolve(Buffer.concat(chunks))); + gunzip.on('error', reject); + + readStream.pipe(gunzip); + }); + } + async function createAndValidateSnapshot() { + const existingBatchNumbers = (await getAllSnapshots()).snapshotsL1BatchNumbers as number[]; + await runCreator(); + const newBatchNumbers = (await getAllSnapshots()).snapshotsL1BatchNumbers as number[]; + const addedSnapshots = newBatchNumbers.filter((x) => existingBatchNumbers.indexOf(x) === -1); + expect(addedSnapshots.length).toEqual(1); + + const l1BatchNumber = addedSnapshots[0]; + const fullSnapshot = await getSnapshot(l1BatchNumber); + const miniblockNumber = fullSnapshot.miniblockNumber; + + const protoPath = path.join(process.env.ZKSYNC_HOME as string, 'core/lib/types/src/proto/mod.proto'); + const root = await protobuf.load(protoPath); + const SnapshotStorageLogsChunk = root.lookupType('zksync.types.SnapshotStorageLogsChunk'); + + expect(fullSnapshot.l1BatchNumber).toEqual(l1BatchNumber); + for (let chunkMetadata of fullSnapshot.storageLogsChunks) { + const chunkPath = path.join(process.env.ZKSYNC_HOME as string, chunkMetadata.filepath); + const output = SnapshotStorageLogsChunk.decode(await decompressGzip(chunkPath)) as any; + expect(output['storageLogs'].length > 0); + for (const storageLog of output['storageLogs'] as any[]) { + const snapshotAccountAddress = '0x' + storageLog['accountAddress'].toString('hex'); + const snapshotKey = '0x' + storageLog['storageKey'].toString('hex'); + const snapshotValue = '0x' + storageLog['storageValue'].toString('hex'); + const snapshotL1BatchNumber = storageLog['l1BatchNumberOfInitialWrite']; + const valueOnBlockchain = await testMaster + .mainAccount() + .provider.getStorageAt(snapshotAccountAddress, snapshotKey, miniblockNumber); + expect(snapshotValue).toEqual(valueOnBlockchain); + expect(snapshotL1BatchNumber).toBeLessThanOrEqual(l1BatchNumber); + } + } + } + + test('snapshots can be created', async () => { + await createAndValidateSnapshot(); + }); +}); diff --git a/core/tests/ts-integration/tests/api/web3.test.ts b/core/tests/ts-integration/tests/api/web3.test.ts index bd553c474bb..f8d0ea284df 100644 --- a/core/tests/ts-integration/tests/api/web3.test.ts +++ b/core/tests/ts-integration/tests/api/web3.test.ts @@ -279,9 +279,9 @@ describe('web3 API compatibility tests', () => { let newTxHash: string | null = null; // We can't use `once` as there may be other pending txs sent together with our one. wsProvider.on('pending', async (txHash) => { - const receipt = await alice.provider.getTransactionReceipt(txHash); + const tx = await alice.provider.getTransaction(txHash); // We're waiting for the exact transaction to appear. 
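The change above swaps `getTransactionReceipt` for `getTransaction` in the `pending` listener: a transaction surfaced by the `pending` event has not been mined yet, so it has no receipt, and filtering on receipt fields could never match. A minimal sketch of the corrected pattern (assuming an ethers v5 WebSocket provider; the helper name is illustrative, not part of the diff):

```ts
import { ethers } from 'ethers';

// Resolve with the hash of the first pending tx addressed to `recipient`.
function waitForPendingTxTo(wsProvider: ethers.providers.WebSocketProvider, recipient: string): Promise<string> {
    return new Promise((resolve) => {
        wsProvider.on('pending', async (txHash: string) => {
            // A pending tx has no receipt yet; `getTransaction` is the call that
            // can return its fields (`to`, `value`, ...) at this stage.
            const tx = await wsProvider.getTransaction(txHash);
            if (tx && tx.to === recipient) {
                resolve(txHash);
            }
        });
    });
}
```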
- if (!receipt || receipt.to != uniqueRecipient) { + if (!tx || tx.to != uniqueRecipient) { // Not the transaction we're looking for. return; } @@ -784,34 +784,10 @@ describe('web3 API compatibility tests', () => { expect(exactProtocolVersion).toMatchObject(expectedProtocolVersion); }); - test('Should check zks_getLogsWithVirtualBlocks endpoint', async () => { - let logs; - logs = await alice.provider.send('zks_getLogsWithVirtualBlocks', [{ fromBlock: '0x0', toBlock: '0x0' }]); - expect(logs).toEqual([]); - - logs = await alice.provider.send('zks_getLogsWithVirtualBlocks', [{ fromBlock: '0x1', toBlock: '0x2' }]); - expect(logs.length > 0).toEqual(true); - - logs = await alice.provider.send('zks_getLogsWithVirtualBlocks', [{ fromBlock: '0x2', toBlock: '0x1' }]); - expect(logs).toEqual([]); - - logs = await alice.provider.send('zks_getLogsWithVirtualBlocks', [{ fromBlock: '0x3', toBlock: '0x3' }]); - expect(logs.length > 0).toEqual(true); - - await expect( - alice.provider.send('zks_getLogsWithVirtualBlocks', [{ fromBlock: '0x100000000', toBlock: '0x100000000' }]) // 2^32 - ).toBeRejected(); - await expect( - alice.provider.send('zks_getLogsWithVirtualBlocks', [ - { fromBlock: '0x10000000000000000', toBlock: '0x10000000000000000' } // 2^64 - ]) - ).toBeRejected(); - }); - test('Should check transaction signature', async () => { const CHAIN_ID = +process.env.CHAIN_ETH_ZKSYNC_NETWORK_ID!; const value = 1; - const gasLimit = 300000; + const gasLimit = 350000; const gasPrice = await alice.provider.getGasPrice(); const data = '0x'; const to = alice.address; diff --git a/core/tests/ts-integration/tests/contracts.test.ts b/core/tests/ts-integration/tests/contracts.test.ts index fea1b15845a..e7d5fcf3a23 100644 --- a/core/tests/ts-integration/tests/contracts.test.ts +++ b/core/tests/ts-integration/tests/contracts.test.ts @@ -306,6 +306,30 @@ describe('Smart contract behavior checks', () => { ).toBeAccepted([]); }); + test('Should reject tx with not enough gas for publishing bytecode', async () => { + // Send a transaction with a big unique factory dep and provide enough gas for validation but not for bytecode publishing. + // Transaction should be rejected by the API. + + const BYTECODE_LEN = 50016; + const bytecode = ethers.utils.hexlify(ethers.utils.randomBytes(BYTECODE_LEN)); + + // Estimate gas for "no-op". It's a good estimate for validation gas. + const gasLimit = await alice.estimateGas({ + to: alice.address, + data: '0x' + }); + + await expect( + alice.sendTransaction({ + to: alice.address, + gasLimit, + customData: { + factoryDeps: [bytecode] + } + }) + ).toBeRejected('not enough gas to publish compressed bytecodes'); + }); + afterAll(async () => { await testMaster.deinitialize(); }); diff --git a/core/tests/ts-integration/tests/custom-account.test.ts b/core/tests/ts-integration/tests/custom-account.test.ts index 27b3eca1ccf..29778ac4882 100644 --- a/core/tests/ts-integration/tests/custom-account.test.ts +++ b/core/tests/ts-integration/tests/custom-account.test.ts @@ -219,6 +219,8 @@ describe('Tests for the custom account behavior', () => { const transfer = await erc20.populateTransaction.transfer(alice.address, TRANSFER_AMOUNT); const nonce = await alice.provider.getTransactionCount(badCustomAccount.address); + // delayedTx should pass API checks (if not, an error will be thrown on the next line) + // but should be rejected by the state-keeper (checked later).
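This comment (and its twin in `mempool.test.ts` further below) documents a deliberate pattern: `delayedTx` is crafted so that it passes the API's static checks but is later rejected by the state-keeper. Since there is no API that reports a state-keeper rejection, the updated tests approximate "was rejected" by asserting that no receipt appears for several seconds. A hedged sketch of that check as a reusable helper (the helper name is illustrative, not part of the diff):

```ts
import * as zksync from 'zksync-web3';

// Assert that `txHash` stays unmined for roughly `attempts` seconds.
// Mirrors the polling loop added further down in this hunk.
async function expectNotMined(provider: zksync.Provider, txHash: string, attempts = 5) {
    for (let i = 0; i < attempts; ++i) {
        const receipt = await provider.getTransactionReceipt(txHash);
        expect(receipt).toBeNull(); // still not mined
        await zksync.utils.sleep(1000);
    }
}
```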
const delayedTx = await sendCustomAccountTransaction( transfer, alice.provider, @@ -226,8 +228,6 @@ describe('Tests for the custom account behavior', () => { undefined, nonce + 1 ); - // delayedTx passed API checks (since we got the response) but should be rejected by the state-keeper. - const rejection = expect(delayedTx).toBeReverted(); // Increase nonce and set flag to do many calculations during validation. const validationGasLimit = +process.env.CHAIN_STATE_KEEPER_VALIDATION_COMPUTATIONAL_GAS_LIMIT!; @@ -235,7 +235,15 @@ describe('Tests for the custom account behavior', () => { await expect( sendCustomAccountTransaction(tx, alice.provider, badCustomAccount.address, undefined, nonce) ).toBeAccepted(); - await rejection; + + // We don't have a good check that tx was indeed rejected. + // The most we can do is to ensure that the tx wasn't mined for some time. + const attempts = 5; + for (let i = 0; i < attempts; ++i) { + const receipt = await alice.provider.getTransactionReceipt(delayedTx.hash); + expect(receipt).toBeNull(); + await zksync.utils.sleep(1000); + } }); afterAll(async () => { diff --git a/core/tests/ts-integration/tests/custom-erc20-bridge.test.ts b/core/tests/ts-integration/tests/custom-erc20-bridge.test.ts index da8a7863975..e3c517c9ba3 100644 --- a/core/tests/ts-integration/tests/custom-erc20-bridge.test.ts +++ b/core/tests/ts-integration/tests/custom-erc20-bridge.test.ts @@ -9,11 +9,7 @@ import { spawn as _spawn } from 'child_process'; import * as zksync from 'zksync-web3'; import * as ethers from 'ethers'; import { scaledGasPrice } from '../src/helpers'; -import { - L1ERC20BridgeFactory, - TransparentUpgradeableProxyFactory, - AllowListFactory -} from 'l1-zksync-contracts/typechain'; +import { L1ERC20BridgeFactory, TransparentUpgradeableProxyFactory } from 'l1-contracts/typechain'; import { sleep } from 'zk/build/utils'; describe('Tests for the custom bridge behavior', () => { @@ -40,18 +36,11 @@ describe('Tests for the custom bridge behavior', () => { }); await transferTx.wait(); - let allowList = new AllowListFactory(alice._signerL1()); - let allowListContract = await allowList.deploy(alice.address); - await allowListContract.deployTransaction.wait(2); - // load the l1bridge contract let l1bridgeFactory = new L1ERC20BridgeFactory(alice._signerL1()); const gasPrice = await scaledGasPrice(alice); - let l1Bridge = await l1bridgeFactory.deploy( - process.env.CONTRACTS_DIAMOND_PROXY_ADDR!, - allowListContract.address - ); + let l1Bridge = await l1bridgeFactory.deploy(process.env.CONTRACTS_DIAMOND_PROXY_ADDR!); await l1Bridge.deployTransaction.wait(2); let l1BridgeProxyFactory = new TransparentUpgradeableProxyFactory(alice._signerL1()); let l1BridgeProxy = await l1BridgeProxyFactory.deploy(l1Bridge.address, bob.address, '0x'); @@ -59,23 +48,22 @@ describe('Tests for the custom bridge behavior', () => { await l1BridgeProxy.deployTransaction.wait(2); const isLocalSetup = process.env.ZKSYNC_LOCAL_SETUP; - const baseCommandL1 = isLocalSetup ? `yarn --cwd /contracts/ethereum` : `cd $ZKSYNC_HOME && yarn l1-contracts`; + const baseCommandL1 = isLocalSetup + ?
`yarn --cwd /contracts/l1-contracts` + : `cd $ZKSYNC_HOME && yarn l1-contracts`; let args = `--private-key ${alice.privateKey} --erc20-bridge ${l1BridgeProxy.address}`; let command = `${baseCommandL1} initialize-bridges ${args}`; await spawn(command); - const setAccessModeTx = await allowListContract.setAccessMode(l1BridgeProxy.address, 2); - await setAccessModeTx.wait(); - let l1bridge2 = new L1ERC20BridgeFactory(alice._signerL1()).attach(l1BridgeProxy.address); - const maxAttempts = 5; + const maxAttempts = 200; let ready = false; for (let i = 0; i < maxAttempts; ++i) { const l2Bridge = await l1bridge2.l2Bridge(); if (l2Bridge != ethers.constants.AddressZero) { const code = await alice._providerL2().getCode(l2Bridge); - if (code.length > 0) { + if (code.length > 2) { ready = true; break; } diff --git a/core/tests/ts-integration/tests/fees.test.ts b/core/tests/ts-integration/tests/fees.test.ts index 41016d447ac..6041ec8c2b2 100644 --- a/core/tests/ts-integration/tests/fees.test.ts +++ b/core/tests/ts-integration/tests/fees.test.ts @@ -10,12 +10,15 @@ * */ import * as utils from 'zk/build/utils'; +import * as fs from 'fs'; import { TestMaster } from '../src/index'; import * as zksync from 'zksync-web3'; import { BigNumber, ethers } from 'ethers'; import { Token } from '../src/types'; +const logs = fs.createWriteStream('fees.log', { flags: 'a' }); + // Unless `RUN_FEE_TEST` is provided, skip the test suit const testFees = process.env.RUN_FEE_TEST ? describe : describe.skip; @@ -73,23 +76,51 @@ testFees('Test fees', () => { }) ).wait(); - let reports = ['ETH transfer:\n\n', 'ERC20 transfer:\n\n']; + // Warming up slots for the receiver + await ( + await alice.sendTransaction({ + to: receiver, + value: BigNumber.from(1) + }) + ).wait(); + + await ( + await alice.sendTransaction({ + data: aliceErc20.interface.encodeFunctionData('transfer', [receiver, BigNumber.from(1)]), + to: tokenDetails.l2Address + }) + ).wait(); + + let reports = [ + 'ETH transfer (to new):\n\n', + 'ETH transfer (to old):\n\n', + 'ERC20 transfer (to new):\n\n', + 'ERC20 transfer (to old):\n\n' + ]; for (const gasPrice of L1_GAS_PRICES_TO_TEST) { reports = await appendResults( alice, - [feeTestL1Receipt, feeTestL1ReceiptERC20], + [feeTestL1Receipt, feeTestL1Receipt, feeTestL1ReceiptERC20, feeTestL1ReceiptERC20], // We always regenerate new addresses for transaction requests in order to estimate the cost for a new account [ { to: ethers.Wallet.createRandom().address, value: BigNumber.from(1) }, + { + to: receiver, + value: BigNumber.from(1) + }, { data: aliceErc20.interface.encodeFunctionData('transfer', [ ethers.Wallet.createRandom().address, BigNumber.from(1) ]), to: tokenDetails.l2Address + }, + { + data: aliceErc20.interface.encodeFunctionData('transfer', [receiver, BigNumber.from(1)]), + to: tokenDetails.l2Address } ], gasPrice, @@ -147,7 +178,9 @@ async function updateReport( const estimatedPrice = estimatedL2GasPrice.mul(estimatedL2GasLimit); const balanceBefore = await sender.getBalance(); - await (await sender.sendTransaction(transactionRequest)).wait(); + const transaction = await sender.sendTransaction(transactionRequest); + console.log(`Sending transaction: ${transaction.hash}`); + await transaction.wait(); const balanceAfter = await sender.getBalance(); const balanceDiff = balanceBefore.sub(balanceAfter); @@ -190,12 +223,13 @@ async function setInternalL1GasPrice(provider: zksync.Provider, newPrice?: strin } catch (_) {} // Run server in background. 
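Two details in the hunks above deserve a note. First, the bridge-readiness check now requires `code.length > 2` because `eth_getCode` returns the string `'0x'` (length 2) for an account with no code, so the old `code.length > 0` condition was always true. Second, the fee test's "warming up slots" transfers exist because the first write to an account's balance slot is an initial write, which publishes more pubdata and therefore costs more than a repeated write; pre-funding `receiver` with 1 wei of ETH and one token unit lets the test report both the "to new" and "to old" fee profiles. A condensed sketch of the warm-up (the helper name is illustrative, not part of the diff):

```ts
import { BigNumber, ethers } from 'ethers';
import * as zksync from 'zksync-web3';

// Initialize the receiver's ETH and ERC20 balance slots so that later transfers
// to it are repeated writes ("to old") rather than more expensive initial writes.
async function warmUpReceiver(alice: zksync.Wallet, erc20: ethers.Contract, l2TokenAddress: string, receiver: string) {
    await (await alice.sendTransaction({ to: receiver, value: BigNumber.from(1) })).wait();
    await (
        await alice.sendTransaction({
            to: l2TokenAddress,
            data: erc20.interface.encodeFunctionData('transfer', [receiver, BigNumber.from(1)])
        })
    ).wait();
}
```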
- let command = 'zk server --components api,tree,eth,data_fetcher,state_keeper'; + let command = 'zk server --components api,tree,eth,state_keeper'; command = `DATABASE_MERKLE_TREE_MODE=full ${command}`; if (newPrice) { - command = `ETH_SENDER_GAS_ADJUSTER_INTERNAL_ENFORCED_L1_GAS_PRICE=${newPrice} ${command}`; + // We need to ensure that each transaction gets into its own batch for more fair comparison. + command = `CHAIN_STATE_KEEPER_TRANSACTION_SLOTS=1 ETH_SENDER_GAS_ADJUSTER_INTERNAL_ENFORCED_L1_GAS_PRICE=${newPrice} ${command}`; } - const zkSyncServer = utils.background(command, 'ignore'); + const zkSyncServer = utils.background(command, [null, logs, logs]); if (disconnect) { zkSyncServer.unref(); diff --git a/core/tests/ts-integration/tests/l1.test.ts b/core/tests/ts-integration/tests/l1.test.ts index a1ef9564bd9..c0a84640794 100644 --- a/core/tests/ts-integration/tests/l1.test.ts +++ b/core/tests/ts-integration/tests/l1.test.ts @@ -278,7 +278,7 @@ describe('Tests for L1 behavior', () => { } const contract = await deployContract(alice, contracts.writesAndMessages, []); - const MAX_PUBDATA_PER_BATCH = ethers.BigNumber.from(SYSTEM_CONFIG['MAX_PUBDATA_PER_BATCH']); + const MAX_PUBDATA_PER_BATCH = ethers.BigNumber.from(SYSTEM_CONFIG['PRIORITY_TX_PUBDATA_PER_BATCH']); // We check that we will run out of gas if we send a bit // smaller than `MAX_PUBDATA_PER_BATCH` amount of pubdata in a single tx. const calldata = contract.interface.encodeFunctionData('big_l2_l1_message', [ @@ -338,60 +338,16 @@ function maxL2GasLimitForPriorityTxs(): number { let maxGasBodyLimit = +process.env.CONTRACTS_PRIORITY_TX_MAX_GAS_LIMIT!; const overhead = getOverheadForTransaction( - ethers.BigNumber.from(maxGasBodyLimit), - ethers.BigNumber.from(zksync.utils.REQUIRED_L1_TO_L2_GAS_PER_PUBDATA_LIMIT), - // We can just pass 0 as `encodingLength` because `overheadForPublicData` and `overheadForGas` - // will be greater than `overheadForLength` for large `gasLimit`. 
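The hunk that follows replaces the batch-overhead derivation with a simple linear model: the transaction overhead becomes `max(TX_SLOT_OVERHEAD_GAS, TX_LENGTH_BYTE_OVERHEAD_GAS * encodingLength)`, i.e. `max(10_000, 10 * len)`. Under that model the slot overhead dominates for any encoding up to 1,000 bytes, which is why the caller can keep passing `0` as the encoding length for a typical transaction. A worked sketch of the crossover, using the constants from the new function:

```ts
// The simplified overhead model introduced below.
const TX_SLOT_OVERHEAD_GAS = 10_000;
const TX_LENGTH_BYTE_OVERHEAD_GAS = 10;

const overhead = (encodingLength: number) =>
    Math.max(TX_SLOT_OVERHEAD_GAS, TX_LENGTH_BYTE_OVERHEAD_GAS * encodingLength);

console.log(overhead(0)); // 10000: the slot-overhead floor
console.log(overhead(600)); // 10000: 10 * 600 = 6000 is still below the floor
console.log(overhead(2000)); // 20000: length overhead takes over past 1000 bytes
```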
+ // We can just pass 0 as `encodingLength` because the overhead for the transaction's slot + // will be greater than `overheadForLength` for a typical transaction ethers.BigNumber.from(0) ); return maxGasBodyLimit + overhead; } -function getOverheadForTransaction( - bodyGasLimit: ethers.BigNumber, - gasPricePerPubdata: ethers.BigNumber, - encodingLength: ethers.BigNumber -): number { - const BATCH_OVERHEAD_L2_GAS = ethers.BigNumber.from(SYSTEM_CONFIG['BATCH_OVERHEAD_L2_GAS']); - const L1_GAS_PER_PUBDATA_BYTE = ethers.BigNumber.from(SYSTEM_CONFIG['L1_GAS_PER_PUBDATA_BYTE']); - const BATCH_OVERHEAD_L1_GAS = ethers.BigNumber.from(SYSTEM_CONFIG['BATCH_OVERHEAD_L1_GAS']); - const BATCH_OVERHEAD_PUBDATA = BATCH_OVERHEAD_L1_GAS.div(L1_GAS_PER_PUBDATA_BYTE); - - const MAX_TRANSACTIONS_IN_BATCH = ethers.BigNumber.from(SYSTEM_CONFIG['MAX_TRANSACTIONS_IN_BATCH']); - const BOOTLOADER_TX_ENCODING_SPACE = ethers.BigNumber.from(SYSTEM_CONFIG['BOOTLOADER_TX_ENCODING_SPACE']); - // TODO (EVM-67): possibly charge overhead for pubdata - // const MAX_PUBDATA_PER_BATCH = ethers.BigNumber.from(SYSTEM_CONFIG['MAX_PUBDATA_PER_BATCH']); - const L2_TX_MAX_GAS_LIMIT = ethers.BigNumber.from(SYSTEM_CONFIG['L2_TX_MAX_GAS_LIMIT']); - - const maxBlockOverhead = BATCH_OVERHEAD_L2_GAS.add(BATCH_OVERHEAD_PUBDATA.mul(gasPricePerPubdata)); - - // The overhead from taking up the transaction's slot - const txSlotOverhead = ceilDiv(maxBlockOverhead, MAX_TRANSACTIONS_IN_BATCH); - let blockOverheadForTransaction = txSlotOverhead; - - // The overhead for occupying the bootloader memory can be derived from encoded_len - const overheadForLength = ceilDiv(encodingLength.mul(maxBlockOverhead), BOOTLOADER_TX_ENCODING_SPACE); - if (overheadForLength.gt(blockOverheadForTransaction)) { - blockOverheadForTransaction = overheadForLength; - } - - // The overhead for possible published public data - // TODO (EVM-67): possibly charge overhead for pubdata - // let maxPubdataInTx = ceilDiv(bodyGasLimit, gasPricePerPubdata); - // let overheadForPublicData = ceilDiv(maxPubdataInTx.mul(maxBlockOverhead), MAX_PUBDATA_PER_BATCH); - // if (overheadForPublicData.gt(blockOverheadForTransaction)) { - // blockOverheadForTransaction = overheadForPublicData; - // } - - // The overhead for gas that could be used to use single-instance circuits - let overheadForSingleInstanceCircuits = ceilDiv(bodyGasLimit.mul(maxBlockOverhead), L2_TX_MAX_GAS_LIMIT); - if (overheadForSingleInstanceCircuits.gt(blockOverheadForTransaction)) { - blockOverheadForTransaction = overheadForSingleInstanceCircuits; - } - - return blockOverheadForTransaction.toNumber(); -} +function getOverheadForTransaction(encodingLength: ethers.BigNumber): number { + const TX_SLOT_OVERHEAD_GAS = 10_000; + const TX_LENGTH_BYTE_OVERHEAD_GAS = 10; -function ceilDiv(a: ethers.BigNumber, b: ethers.BigNumber): ethers.BigNumber { - return a.add(b.sub(1)).div(b); + return Math.max(TX_SLOT_OVERHEAD_GAS, TX_LENGTH_BYTE_OVERHEAD_GAS * encodingLength.toNumber()); } diff --git a/core/tests/ts-integration/tests/l2-weth.test.ts b/core/tests/ts-integration/tests/l2-weth.test.ts index 2218aabb618..849f956b6cd 100644 --- a/core/tests/ts-integration/tests/l2-weth.test.ts +++ b/core/tests/ts-integration/tests/l2-weth.test.ts @@ -5,8 +5,8 @@ import { TestMaster } from '../src/index'; import * as zksync from 'zksync-web3'; import { scaledGasPrice, waitUntilBlockFinalized } from '../src/helpers'; -import { WETH9, WETH9Factory } from 'l1-zksync-contracts/typechain'; -import { L2Weth, L2WethFactory } from
'l2-zksync-contracts/typechain'; +import { WETH9, WETH9Factory } from 'l1-contracts/typechain'; +import { L2Weth, L2WethFactory } from 'l2-contracts/typechain'; import { BigNumber, ethers } from 'ethers'; import { shouldChangeETHBalances, diff --git a/core/tests/ts-integration/tests/mempool.test.ts b/core/tests/ts-integration/tests/mempool.test.ts index 1da8fcdc037..160b2a2b81a 100644 --- a/core/tests/ts-integration/tests/mempool.test.ts +++ b/core/tests/ts-integration/tests/mempool.test.ts @@ -104,18 +104,23 @@ describe('Tests for the mempool behavior', () => { const fund = gasLimit.mul(gasPrice).mul(13).div(10); await alice.sendTransaction({ to: poorBob.address, value: fund }).then((tx) => tx.wait()); + // delayedTx should pass API checks (if not, an error will be thrown on the next line) + // but should be rejected by the state-keeper (checked later). const delayedTx = await poorBob.sendTransaction({ to: poorBob.address, nonce: nonce + 1 }); - // delayedTx passed API checks (since we got the response) but should be rejected by the state-keeper. - const rejection = expect(delayedTx).toBeReverted(); await expect(poorBob.sendTransaction({ to: poorBob.address, nonce })).toBeAccepted(); - await rejection; - // Now check that there is only one executed transaction for the account. - await expect(poorBob.getTransactionCount()).resolves.toEqual(1); + // We don't have a good check that tx was indeed rejected. + // The most we can do is to ensure that the tx wasn't mined for some time. + const attempts = 5; + for (let i = 0; i < attempts; ++i) { + const receipt = await alice.provider.getTransactionReceipt(delayedTx.hash); + expect(receipt).toBeNull(); + await zksync.utils.sleep(1000); + } }); afterAll(async () => { diff --git a/core/tests/ts-integration/tests/system.test.ts b/core/tests/ts-integration/tests/system.test.ts index 0eaf8c23b46..1a4d9abb334 100644 --- a/core/tests/ts-integration/tests/system.test.ts +++ b/core/tests/ts-integration/tests/system.test.ts @@ -30,6 +30,26 @@ describe('System behavior checks', () => { alice = testMaster.mainAccount(); }); + test('Network should be supporting Cancun+Deneb', async () => { + const address_a = '0x000000000000000000000000000000000000000A'; + const address_b = '0x000000000000000000000000000000000000000b'; + + const transaction_a = { + to: address_a, + data: '0x' + }; + + await expect(alice.providerL1!.call(transaction_a)).rejects.toThrow(); + + const transaction_b = { + to: address_b, + data: '0x' + }; + + const result_b = await alice.providerL1!.call(transaction_b); + expect(result_b).toEqual('0x'); + }); + test('Should check that system contracts and SDK create same CREATE/CREATE2 addresses', async () => { const deployerContract = new zksync.Contract( zksync.utils.CONTRACT_DEPLOYER_ADDRESS, @@ -341,7 +361,7 @@ describe('System behavior checks', () => { function bootloaderUtilsContract() { const BOOTLOADER_UTILS_ADDRESS = '0x000000000000000000000000000000000000800c'; const BOOTLOADER_UTILS = new ethers.utils.Interface( - require(`${process.env.ZKSYNC_HOME}/etc/system-contracts/artifacts-zk/cache-zk/solpp-generated-contracts/BootloaderUtilities.sol/BootloaderUtilities.json`).abi + require(`${process.env.ZKSYNC_HOME}/contracts/system-contracts/artifacts-zk/contracts-preprocessed/BootloaderUtilities.sol/BootloaderUtilities.json`).abi ); return new ethers.Contract(BOOTLOADER_UTILS_ADDRESS, BOOTLOADER_UTILS, alice); diff --git a/core/tests/upgrade-test/tests/upgrade.test.ts b/core/tests/upgrade-test/tests/upgrade.test.ts index 80e2c96c4b4..b285d8b051b
100644 --- a/core/tests/upgrade-test/tests/upgrade.test.ts +++ b/core/tests/upgrade-test/tests/upgrade.test.ts @@ -8,7 +8,7 @@ import fs from 'fs'; import { TransactionResponse } from 'zksync-web3/build/src/types'; import { BytesLike } from '@ethersproject/bytes'; -const L1_CONTRACTS_FOLDER = `${process.env.ZKSYNC_HOME}/contracts/ethereum/artifacts/cache/solpp-generated-contracts`; +const L1_CONTRACTS_FOLDER = `${process.env.ZKSYNC_HOME}/contracts/l1-contracts/artifacts/cache/solpp-generated-contracts`; const L1_DEFAULT_UPGRADE_ABI = new ethers.utils.Interface( require(`${L1_CONTRACTS_FOLDER}/upgrades/DefaultUpgrade.sol/DefaultUpgrade.json`).abi ); @@ -19,10 +19,10 @@ const ADMIN_FACET_ABI = new ethers.utils.Interface( require(`${L1_CONTRACTS_FOLDER}/zksync/facets/Admin.sol/AdminFacet.json`).abi ); const L2_FORCE_DEPLOY_UPGRADER_ABI = new ethers.utils.Interface( - require(`${process.env.ZKSYNC_HOME}/contracts/zksync/artifacts-zk/cache-zk/solpp-generated-contracts/ForceDeployUpgrader.sol/ForceDeployUpgrader.json`).abi + require(`${process.env.ZKSYNC_HOME}/contracts/l2-contracts/artifacts-zk/cache-zk/solpp-generated-contracts/ForceDeployUpgrader.sol/ForceDeployUpgrader.json`).abi ); const COMPLEX_UPGRADER_ABI = new ethers.utils.Interface( - require(`${process.env.ZKSYNC_HOME}/etc/system-contracts/artifacts-zk/cache-zk/solpp-generated-contracts/ComplexUpgrader.sol/ComplexUpgrader.json`).abi + require(`${process.env.ZKSYNC_HOME}/contracts/system-contracts/artifacts-zk/contracts-preprocessed/ComplexUpgrader.sol/ComplexUpgrader.json`).abi ); const COUNTER_BYTECODE = require(`${process.env.ZKSYNC_HOME}/core/tests/ts-integration/artifacts-zk/contracts/counter/counter.sol/Counter.json`).deployedBytecode; @@ -66,7 +66,7 @@ describe('Upgrade test', function () { process.env.CHAIN_STATE_KEEPER_BLOCK_COMMIT_DEADLINE_MS = '2000'; // Run server in background. utils.background( - 'cd $ZKSYNC_HOME && cargo run --bin zksync_server --release -- --components=api,tree,eth,data_fetcher,state_keeper', + 'cd $ZKSYNC_HOME && cargo run --bin zksync_server --release -- --components=api,tree,eth,state_keeper', [null, logs, logs] ); // Server may need some time to recompile if it's a cold run, so wait for it. @@ -130,7 +130,7 @@ describe('Upgrade test', function () { }); step('Send l1 tx for saving new bootloader', async () => { - const path = `${process.env.ZKSYNC_HOME}/etc/system-contracts/bootloader/build/artifacts/playground_batch.yul/playground_batch.yul.zbin`; + const path = `${process.env.ZKSYNC_HOME}/contracts/system-contracts/bootloader/build/artifacts/playground_batch.yul.zbin`; const bootloaderCode = ethers.utils.hexlify(fs.readFileSync(path)); bootloaderHash = ethers.utils.hexlify(hashBytecode(bootloaderCode)); const txHandle = await tester.syncWallet.requestExecute({ @@ -197,7 +197,7 @@ describe('Upgrade test', function () { ).wait(); // Wait for server to process L1 event. - await utils.sleep(10); + await utils.sleep(30); }); step('Check bootloader is updated on L2', async () => { @@ -257,7 +257,7 @@ describe('Upgrade test', function () { // Run again. 
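The restarted server here drops the `data_fetcher` component from the `--components` list, consistent with the other command changes in this patch, and as in `fees.test.ts` above (where `'ignore'` was replaced by `[null, logs, logs]`) server output is captured in a log file rather than discarded. A hedged sketch of the logging pattern (assuming, as the usage here suggests, that `utils.background` forwards its second argument to Node's `child_process` stdio option):

```ts
import * as fs from 'fs';
import * as utils from 'zk/build/utils';

// Append server output to a log file instead of discarding it. In a stdio spec,
// `null` leaves stdin at its default, while the two stream entries receive
// stdout and stderr respectively.
const logs = fs.createWriteStream('upgrade.log', { flags: 'a' });
utils.background(
    'cd $ZKSYNC_HOME && cargo run --bin zksync_server --release -- --components=api,tree,eth,state_keeper',
    [null, logs, logs]
);
```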
utils.background( - 'cd $ZKSYNC_HOME && cargo run --bin zksync_server --release -- --components=api,tree,eth,data_fetcher,state_keeper &> upgrade.log', + 'cd $ZKSYNC_HOME && cargo run --bin zksync_server --release -- --components=api,tree,eth,state_keeper &> upgrade.log', [null, logs, logs] ); await utils.sleep(10); diff --git a/core/tests/vm-benchmark/README.md b/core/tests/vm-benchmark/README.md index 4d66f287a70..cecbdb31d0c 100644 --- a/core/tests/vm-benchmark/README.md +++ b/core/tests/vm-benchmark/README.md @@ -1,7 +1,8 @@ # Benchmarking the VM -Currently all benchmarking happens on contract deployment bytecodes. These can execute arbitrary code, so that is -surprisingly useful. This library can be used to build more complex benchmarks, however. +Currently all benchmarking happens on contract deployment bytecodes. Since contract deployment bytecodes can execute +arbitrary code, they are surprisingly useful for benchmarking. This library can be used to build more complex +benchmarks, however. ## Benchmarking @@ -28,7 +29,7 @@ them to "benches/iai.rs". ## Profiling (Linux only) You can also use `sh perf.sh bytecode_file` to produce data that can be fed into the -[firefox profiler](profiler.firefox.com) for a specific bytecode. +[firefox profiler](https://profiler.firefox.com/) for a specific bytecode. ## Fuzzing diff --git a/core/tests/vm-benchmark/benches/diy_benchmark.rs b/core/tests/vm-benchmark/benches/diy_benchmark.rs index 8f5b6cd685b..b99837d8eab 100644 --- a/core/tests/vm-benchmark/benches/diy_benchmark.rs +++ b/core/tests/vm-benchmark/benches/diy_benchmark.rs @@ -1,5 +1,6 @@ -use criterion::black_box; use std::time::{Duration, Instant}; + +use criterion::black_box; use vm_benchmark_harness::{cut_to_allowed_bytecode_size, get_deploy_tx, BenchmarkingVm}; fn main() { diff --git a/core/tests/vm-benchmark/harness/src/lib.rs b/core/tests/vm-benchmark/harness/src/lib.rs index b7da44aed92..e4f26a20f49 100644 --- a/core/tests/vm-benchmark/harness/src/lib.rs +++ b/core/tests/vm-benchmark/harness/src/lib.rs @@ -1,16 +1,20 @@ -use multivm::interface::{ - L2BlockEnv, TxExecutionMode, VmExecutionMode, VmExecutionResultAndLogs, VmInterface, +use std::{cell::RefCell, rc::Rc}; + +use multivm::{ + interface::{ + L2BlockEnv, TxExecutionMode, VmExecutionMode, VmExecutionResultAndLogs, VmInterface, + }, + utils::get_max_gas_per_pubdata_byte, + vm_latest::{constants::BLOCK_GAS_LIMIT, HistoryEnabled, Vm}, }; -use multivm::vm_latest::{constants::BLOCK_GAS_LIMIT, HistoryEnabled, Vm}; use once_cell::sync::Lazy; -use std::{cell::RefCell, rc::Rc}; use zksync_contracts::{deployer_contract, BaseSystemContracts}; use zksync_state::{InMemoryStorage, StorageView}; -use zksync_system_constants::ethereum::MAX_GAS_PER_PUBDATA_BYTE; use zksync_types::{ - block::legacy_miniblock_hash, + block::MiniblockHasher, ethabi::{encode, Token}, fee::Fee, + fee_model::BatchFeeInput, helpers::unix_timestamp_ms, l2::L2Tx, utils::storage_key_for_eth_balance, @@ -36,7 +40,7 @@ pub fn cut_to_allowed_bytecode_size(bytes: &[u8]) -> Option<&[u8]> { static STORAGE: Lazy = Lazy::new(|| { let mut storage = InMemoryStorage::with_system_contracts(hash_bytecode); - // give PRIVATE_KEY some money + // give `PRIVATE_KEY` some money let my_addr = PackedEthSignature::address_from_private_key(&PRIVATE_KEY).unwrap(); let key = storage_key_for_eth_balance(&my_addr); storage.set_value(key, zksync_utils::u256_to_h256(U256([0, 0, 1, 0]))); @@ -66,14 +70,16 @@ impl BenchmarkingVm { previous_batch_hash: None, number: L1BatchNumber(1), timestamp, - 
l1_gas_price: 50_000_000_000, // 50 gwei - fair_l2_gas_price: 250_000_000, // 0.25 gwei + fee_input: BatchFeeInput::l1_pegged( + 50_000_000_000, // 50 gwei + 250_000_000, // 0.25 gwei + ), fee_account: Address::random(), enforced_base_fee: None, first_l2_block: L2BlockEnv { number: 1, timestamp, - prev_block_hash: legacy_miniblock_hash(MiniblockNumber(0)), + prev_block_hash: MiniblockHasher::legacy_hash(MiniblockNumber(0)), max_virtual_blocks_to_create: 100, }, }, @@ -116,7 +122,9 @@ pub fn get_deploy_tx(code: &[u8]) -> Transaction { gas_limit: U256::from(30000000u32), max_fee_per_gas: U256::from(250_000_000), max_priority_fee_per_gas: U256::from(0), - gas_per_pubdata_limit: U256::from(MAX_GAS_PER_PUBDATA_BYTE), + gas_per_pubdata_limit: U256::from(get_max_gas_per_pubdata_byte( + ProtocolVersionId::latest().into(), + )), }, U256::zero(), L2ChainId::from(270), @@ -133,9 +141,10 @@ pub fn get_deploy_tx(code: &[u8]) -> Transaction { #[cfg(test)] mod tests { - use crate::*; use zksync_contracts::read_bytecode; + use crate::*; + #[test] fn can_deploy_contract() { let test_contract = read_bytecode( diff --git a/core/tests/vm-benchmark/src/compare_iai_results.rs b/core/tests/vm-benchmark/src/compare_iai_results.rs index d67d7238683..d903d727117 100644 --- a/core/tests/vm-benchmark/src/compare_iai_results.rs +++ b/core/tests/vm-benchmark/src/compare_iai_results.rs @@ -1,6 +1,5 @@ -use std::collections::HashMap; -use std::fs::File; -use std::io::BufReader; +use std::{collections::HashMap, fs::File, io::BufReader}; + use vm_benchmark::parse_iai::parse_iai; fn main() { diff --git a/core/tests/vm-benchmark/src/find_slowest.rs b/core/tests/vm-benchmark/src/find_slowest.rs index 947f944541c..2bc2a894d2d 100644 --- a/core/tests/vm-benchmark/src/find_slowest.rs +++ b/core/tests/vm-benchmark/src/find_slowest.rs @@ -2,6 +2,7 @@ use std::{ io::Write, time::{Duration, Instant}, }; + use vm_benchmark_harness::*; fn main() { diff --git a/core/tests/vm-benchmark/src/iai_results_to_prometheus.rs b/core/tests/vm-benchmark/src/iai_results_to_prometheus.rs index dc3c8f6d98f..396d59948a8 100644 --- a/core/tests/vm-benchmark/src/iai_results_to_prometheus.rs +++ b/core/tests/vm-benchmark/src/iai_results_to_prometheus.rs @@ -1,4 +1,5 @@ use std::io::BufReader; + use vm_benchmark::parse_iai::IaiResult; fn main() { diff --git a/core/tests/vm-benchmark/src/with_prometheus.rs b/core/tests/vm-benchmark/src/with_prometheus.rs index e9d4f2e57ed..1fcf5652c6d 100644 --- a/core/tests/vm-benchmark/src/with_prometheus.rs +++ b/core/tests/vm-benchmark/src/with_prometheus.rs @@ -1,6 +1,7 @@ -use metrics_exporter_prometheus::PrometheusBuilder; use std::time::Duration; +use metrics_exporter_prometheus::PrometheusBuilder; + pub fn with_prometheus(f: F) { println!("Pushing results to Prometheus"); diff --git a/deny.toml b/deny.toml index 7fa3c835088..b50b165b72f 100644 --- a/deny.toml +++ b/deny.toml @@ -8,7 +8,6 @@ yanked = "warn" notice = "warn" ignore = [ "RUSTSEC-2023-0018", - "RUSTSEC-2023-0071" ] [licenses] diff --git a/docker-compose-cpu-runner.yml b/docker-compose-cpu-runner.yml index 98b42fc7357..7e728001e00 100644 --- a/docker-compose-cpu-runner.yml +++ b/docker-compose-cpu-runner.yml @@ -33,6 +33,6 @@ services: postgres: image: "postgres:14" ports: - - "5432:5432" + - 127.0.0.1:5432:5432 environment: - POSTGRES_HOST_AUTH_METHOD=trust diff --git a/docker-compose-gpu-runner-cuda-12-0.yml b/docker-compose-gpu-runner-cuda-12-0.yml index 9b5feea15b0..2dd05299f58 100644 --- a/docker-compose-gpu-runner-cuda-12-0.yml +++ 
b/docker-compose-gpu-runner-cuda-12-0.yml @@ -47,6 +47,6 @@ services: postgres: image: "postgres:14" ports: - - "5432:5432" + - 127.0.0.1:5432:5432 environment: - POSTGRES_HOST_AUTH_METHOD=trust diff --git a/docker-compose-gpu-runner.yml b/docker-compose-gpu-runner.yml index b2683111378..a3997a63089 100644 --- a/docker-compose-gpu-runner.yml +++ b/docker-compose-gpu-runner.yml @@ -38,6 +38,6 @@ services: postgres: image: "postgres:14" ports: - - "5432:5432" + - 127.0.0.1:5432:5432 environment: - POSTGRES_HOST_AUTH_METHOD=trust diff --git a/docker-compose-runner-nightly.yml b/docker-compose-runner-nightly.yml index 0ea8f7d0764..5cd9294ffae 100644 --- a/docker-compose-runner-nightly.yml +++ b/docker-compose-runner-nightly.yml @@ -3,15 +3,35 @@ services: zk: image: matterlabs/zk-environment:latest2.0-lightweight-nightly extends: - file: docker-compose-runner.yml + file: docker-compose.yml service: zk postgres: extends: - file: docker-compose-runner.yml + file: docker-compose.yml service: postgres geth: extends: - file: docker-compose-runner.yml + file: docker-compose.yml service: geth + + create-beacon-chain-genesis: + extends: + file: docker-compose.yml + service: create-beacon-chain-genesis + + validator: + extends: + file: docker-compose.yml + service: validator + + beacon: + extends: + file: docker-compose.yml + service: beacon + + geth-genesis: + extends: + file: docker-compose.yml + service: geth-genesis diff --git a/docker-compose-runner.yml b/docker-compose-runner.yml deleted file mode 100644 index 2a373876792..00000000000 --- a/docker-compose-runner.yml +++ /dev/null @@ -1,38 +0,0 @@ -version: '3.2' -services: - geth: - image: "matterlabs/geth:latest" - environment: - - PLUGIN_CONFIG - - zk: - image: "matterlabs/zk-environment:latest2.0-lightweight" - depends_on: - - geth - - postgres - security_opt: - - seccomp:unconfined - command: tail -f /dev/null - volumes: - - .:/usr/src/zksync - - /usr/src/cache:/usr/src/cache - - /var/run/docker.sock:/var/run/docker.sock - environment: - - CACHE_DIR=/usr/src/cache - - SCCACHE_CACHE_SIZE=50g - - SCCACHE_GCS_BUCKET=matterlabs-infra-sccache-storage - - SCCACHE_GCS_SERVICE_ACCOUNT=gha-ci-runners@matterlabs-infra.iam.gserviceaccount.com - - SCCACHE_ERROR_LOG=/tmp/sccache_log.txt - - SCCACHE_GCS_RW_MODE=READ_WRITE - - CI=1 - - GITHUB_WORKSPACE=$GITHUB_WORKSPACE - env_file: - - ./.env - extra_hosts: - - "host:host-gateway" - postgres: - image: "postgres:14" - ports: - - "5432:5432" - environment: - - POSTGRES_HOST_AUTH_METHOD=trust diff --git a/docker-compose-zkstack-common.yml b/docker-compose-zkstack-common.yml new file mode 100644 index 00000000000..5d92de5d31d --- /dev/null +++ b/docker-compose-zkstack-common.yml @@ -0,0 +1,35 @@ +version: '3.2' +networks: + zkstack: + driver: bridge +services: + geth: + image: "matterlabs/geth:latest" + ports: + - "127.0.0.1:8545:8545" + - "127.0.0.1:8546:8546" + volumes: + - type: bind + source: ./volumes/geth + target: /var/lib/geth/data + networks: + - zkstack + container_name: geth + postgres: + image: "postgres:14" + container_name: postgres + ports: + - "127.0.0.1:5432:5432" + volumes: + - type: bind + source: ./volumes/postgres + target: /var/lib/postgresql/data + environment: + # We bind only to 127.0.0.1, so setting insecure password is acceptable here + - POSTGRES_PASSWORD=notsecurepassword + command: + - "postgres" + - "-c" + - "max_connections=1000" + networks: + - zkstack diff --git a/docker-compose.yml b/docker-compose.yml index dc1f375645a..1a7c0f01e95 100644 --- a/docker-compose.yml +++ 
b/docker-compose.yml @@ -1,18 +1,113 @@ version: '3.2' services: + create-beacon-chain-genesis: + image: "gcr.io/prysmaticlabs/prysm/cmd/prysmctl:HEAD-c6801d" + command: + - testnet + - generate-genesis + - --fork=deneb + - --num-validators=64 + - --genesis-time-delay=5 + - --output-ssz=/consensus/genesis.ssz + - --chain-config-file=/prysm/config.yml + - --geth-genesis-json-in=/geth/standard-dev.json + - --geth-genesis-json-out=/execution/genesis.json + volumes: + - ./docker/geth:/geth/:ro + - ./docker/prysm:/prysm/:ro + - ./volumes/geth:/execution + - ./volumes/prysm:/consensus + geth-genesis: + image: "ethereum/client-go:v1.13.5" + command: --datadir=/execution init /execution/genesis.json + volumes: + - ./volumes/geth:/execution + depends_on: + create-beacon-chain-genesis: + condition: service_completed_successfully geth: - image: "matterlabs/geth:latest" + image: "ethereum/client-go:v1.13.5" + ports: + - 8551:8551 + - 8545:8545 + - 8546:8546 + volumes: + - ./volumes/geth:/var/lib/geth/data + - ./docker/geth/:/geth/:ro + command: + - --networkid=9 + - --datadir=/var/lib/geth/data + - --http + - --http.api=engine,eth,web3,personal,net,debug + - --http.addr=0.0.0.0 + - --http.corsdomain=* + - --http.vhosts=* + - --ws + - --ws.addr=0.0.0.0 + - --ws.port=8546 + - --ws.origins=* + - --nodiscover + - --authrpc.addr=0.0.0.0 + - --authrpc.vhosts=* + - --authrpc.jwtsecret=/var/lib/geth/data/jwtsecret + - --allow-insecure-unlock + - --unlock=0x8a91dc2d28b689474298d91899f0c1baf62cb85b + - --password=/var/lib/geth/data/password.sec + - --syncmode=full + depends_on: + beacon: + condition: service_started + geth-genesis: + condition: service_completed_successfully + beacon: + image: "gcr.io/prysmaticlabs/prysm/beacon-chain:HEAD-c6801d" + command: + - --datadir=/consensus/beacon/ + - --min-sync-peers=0 + - --genesis-state=/consensus/genesis.ssz + - --bootstrap-node= + - --interop-eth1data-votes + - --chain-config-file=/consensus/config.yml + - --contract-deployment-block=0 + - --chain-id=9 + - --rpc-host=0.0.0.0 + - --grpc-gateway-host=0.0.0.0 + - --execution-endpoint=http://geth:8551 + - --accept-terms-of-use + - --jwt-secret=/execution/jwtsecret + - --suggested-fee-recipient=0x8a91dc2d28b689474298d91899f0c1baf62cb85b + - --minimum-peers-per-subnet=0 + - --enable-debug-rpc-endpoints ports: - - "8545:8545" - - "8546:8546" + - 4000:4000 + - 3500:3500 + - 8080:8080 + - 6060:6060 + - 9090:9090 volumes: - - type: bind - source: ./volumes/geth - target: /var/lib/geth/data + - ./volumes/prysm:/consensus + - ./volumes/geth:/execution + depends_on: + create-beacon-chain-genesis: + condition: service_completed_successfully + validator: + image: "gcr.io/prysmaticlabs/prysm/validator:HEAD-c6801d" + command: + - --beacon-rpc-provider=beacon:4000 + - --datadir=/consensus/validatordata + - --accept-terms-of-use + - --interop-num-validators=64 + - --interop-start-index=0 + - --chain-config-file=/consensus/config.yml + depends_on: + beacon: + condition: service_started + volumes: + - ./volumes/prysm:/consensus postgres: image: "postgres:14" ports: - - "5432:5432" + - 127.0.0.1:5432:5432 volumes: - type: bind source: ./volumes/postgres @@ -20,3 +115,29 @@ services: environment: - POSTGRES_HOST_AUTH_METHOD=trust + # This is specific to runner + zk: + image: "matterlabs/zk-environment:latest2.0-lightweight" + security_opt: + - seccomp:unconfined + command: tail -f /dev/null + volumes: + - .:/usr/src/zksync + - /usr/src/cache:/usr/src/cache + - /var/run/docker.sock:/var/run/docker.sock + - 
./hardhat-nodejs:/root/.cache/hardhat-nodejs + environment: + - CACHE_DIR=/usr/src/cache + - SCCACHE_CACHE_SIZE=50g + - SCCACHE_GCS_BUCKET=matterlabs-infra-sccache-storage + - SCCACHE_GCS_SERVICE_ACCOUNT=gha-ci-runners@matterlabs-infra.iam.gserviceaccount.com + - SCCACHE_ERROR_LOG=/tmp/sccache_log.txt + - SCCACHE_GCS_RW_MODE=READ_WRITE + - CI=1 + - GITHUB_WORKSPACE=$GITHUB_WORKSPACE + env_file: + - ./.env + extra_hosts: + - "host:host-gateway" + profiles: + - runner diff --git a/docker/contract-verifier/Dockerfile b/docker/contract-verifier/Dockerfile index 1f244b38906..6680f99e89a 100644 --- a/docker/contract-verifier/Dockerfile +++ b/docker/contract-verifier/Dockerfile @@ -24,7 +24,7 @@ RUN apt-get update && apt-get install -y curl libpq5 ca-certificates wget python # install zksolc 1.3.x RUN skip_versions="v1.3.12 v1.3.15" && \ - for VERSION in $(seq -f "v1.3.%g" 0 16); do \ + for VERSION in $(seq -f "v1.3.%g" 0 19); do \ if echo " $skip_versions " | grep -q -w " $VERSION "; then \ continue; \ fi; \ @@ -53,10 +53,14 @@ RUN mkdir -p /etc/vyper-bin/0.3.9 \ && wget -O vyper0.3.9 https://github.com/vyperlang/vyper/releases/download/v0.3.9/vyper.0.3.9%2Bcommit.66b96705.linux \ && mv vyper0.3.9 /etc/vyper-bin/0.3.9/vyper \ && chmod +x /etc/vyper-bin/0.3.9/vyper +RUN mkdir -p /etc/vyper-bin/0.3.10 \ + && wget -O vyper0.3.10 https://github.com/vyperlang/vyper/releases/download/v0.3.10/vyper.0.3.10%2Bcommit.91361694.linux \ + && mv vyper0.3.10 /etc/vyper-bin/0.3.10/vyper \ + && chmod +x /etc/vyper-bin/0.3.10/vyper COPY --from=builder /usr/src/zksync/target/release/zksync_contract_verifier /usr/bin/ -COPY etc/system-contracts/bootloader/build/artifacts/ /etc/system-contracts/bootloader/build/artifacts/ -COPY etc/system-contracts/artifacts-zk /etc/system-contracts/artifacts-zk +COPY contracts/system-contracts/bootloader/build/artifacts/ /contracts/system-contracts/bootloader/build/artifacts/ +COPY contracts/system-contracts/artifacts-zk /contracts/system-contracts/artifacts-zk # CMD tail -f /dev/null ENTRYPOINT ["zksync_contract_verifier"] diff --git a/docker/contract-verifier/install-all-solc.sh b/docker/contract-verifier/install-all-solc.sh old mode 100644 new mode 100755 index 8f4d8c38a5c..83d9147cf6f --- a/docker/contract-verifier/install-all-solc.sh +++ b/docker/contract-verifier/install-all-solc.sh @@ -18,3 +18,11 @@ do ls etc/solc-bin/ done + +# Download zkVM solc +for version in $(curl -s https://api.github.com/repos/matter-labs/era-solidity/releases?per_page=200 | jq -r '.[].tag_name') +do + mkdir -p etc/solc-bin/zkVM-$version/ + wget https://github.com/matter-labs/era-solidity/releases/download/$version/solc-linux-amd64-$version -O etc/solc-bin/zkVM-$version/solc + chmod +x etc/solc-bin/zkVM-$version/solc +done diff --git a/docker/external-node/Dockerfile b/docker/external-node/Dockerfile index 7e1e7c36395..c21f00daad2 100644 --- a/docker/external-node/Dockerfile +++ b/docker/external-node/Dockerfile @@ -18,7 +18,7 @@ WORKDIR /usr/src/zksync COPY . . 
RUN cargo build --release -RUN cargo install sqlx-cli --version 0.5.13 +RUN cargo install sqlx-cli --version 0.7.3 FROM debian:bookworm-slim @@ -28,12 +28,12 @@ COPY --from=builder /usr/src/zksync/target/release/zksync_external_node /usr/bin COPY --from=builder /usr/src/zksync/target/release/block_reverter /usr/bin COPY --from=builder /usr/local/cargo/bin/sqlx /usr/bin COPY --from=builder /usr/src/zksync/docker/external-node/entrypoint.sh /usr/bin -COPY etc/system-contracts/bootloader/build/artifacts/ /etc/system-contracts/bootloader/build/artifacts/ -COPY etc/system-contracts/contracts/artifacts/ /etc/system-contracts/contracts/artifacts/ -COPY etc/system-contracts/contracts/precompiles/artifacts/ /etc/system-contracts/contracts/precompiles/artifacts/ -COPY etc/system-contracts/artifacts-zk /etc/system-contracts/artifacts-zk -COPY contracts/ethereum/artifacts/ /contracts/ethereum/artifacts/ -COPY contracts/zksync/artifacts-zk/ /contracts/zksync/artifacts-zk/ +COPY contracts/system-contracts/bootloader/build/artifacts/ /contracts/system-contracts/bootloader/build/artifacts/ +COPY contracts/system-contracts/contracts-preprocessed/artifacts/ /contracts/system-contracts/contracts-preprocessed/artifacts/ +COPY contracts/system-contracts/contracts-preprocessed/precompiles/artifacts/ /contracts/system-contracts/contracts-preprocessed/precompiles/artifacts/ +COPY contracts/system-contracts/artifacts-zk /contracts/system-contracts/artifacts-zk +COPY contracts/l1-contracts/artifacts/ /contracts/l1-contracts/artifacts/ +COPY contracts/l2-contracts/artifacts-zk/ /contracts/l2-contracts/artifacts-zk/ COPY etc/tokens/ /etc/tokens/ COPY etc/ERC20/ /etc/ERC20/ COPY etc/multivm_bootloaders/ /etc/multivm_bootloaders/ diff --git a/docker/geth/jwtsecret b/docker/geth/jwtsecret new file mode 100644 index 00000000000..9fb480ef6a2 --- /dev/null +++ b/docker/geth/jwtsecret @@ -0,0 +1 @@ +0xfad2709d0bb03bf0e8ba3c99bea194575d3e98863133d1af638ed056d1d59345 diff --git a/docker/geth/standard-dev.json b/docker/geth/standard-dev.json index 556af1632a6..9a5ea071707 100644 --- a/docker/geth/standard-dev.json +++ b/docker/geth/standard-dev.json @@ -12,10 +12,12 @@ "istanbulBlock": 0, "berlinBlock": 0, "londonBlock": 0, - "clique": { - "period": 1, - "epoch": 30000 - } + "arrowGlacierBlock": 0, + "grayGlacierBlock": 0, + "shanghaiTime": 0, + "cancunTime": 0, + "terminalTotalDifficulty": 0, + "terminalTotalDifficultyPassed": true }, "nonce": "0x0", "timestamp": "0x5ca9158b", @@ -28,6 +30,10 @@ "0000000000000000000000000000000000000000": { "balance": "0x1" }, + "4242424242424242424242424242424242424242": { + "code": 
"0x60806040526004361061003f5760003560e01c806301ffc9a71461004457806322895118146100b6578063621fd130146101e3578063c5f2892f14610273575b600080fd5b34801561005057600080fd5b5061009c6004803603602081101561006757600080fd5b8101908080357bffffffffffffffffffffffffffffffffffffffffffffffffffffffff1916906020019092919050505061029e565b604051808215151515815260200191505060405180910390f35b6101e1600480360360808110156100cc57600080fd5b81019080803590602001906401000000008111156100e957600080fd5b8201836020820111156100fb57600080fd5b8035906020019184600183028401116401000000008311171561011d57600080fd5b90919293919293908035906020019064010000000081111561013e57600080fd5b82018360208201111561015057600080fd5b8035906020019184600183028401116401000000008311171561017257600080fd5b90919293919293908035906020019064010000000081111561019357600080fd5b8201836020820111156101a557600080fd5b803590602001918460018302840111640100000000831117156101c757600080fd5b909192939192939080359060200190929190505050610370565b005b3480156101ef57600080fd5b506101f8610fd0565b6040518080602001828103825283818151815260200191508051906020019080838360005b8381101561023857808201518184015260208101905061021d565b50505050905090810190601f1680156102655780820380516001836020036101000a031916815260200191505b509250505060405180910390f35b34801561027f57600080fd5b50610288610fe2565b6040518082815260200191505060405180910390f35b60007f01ffc9a7000000000000000000000000000000000000000000000000000000007bffffffffffffffffffffffffffffffffffffffffffffffffffffffff1916827bffffffffffffffffffffffffffffffffffffffffffffffffffffffff1916148061036957507f85640907000000000000000000000000000000000000000000000000000000007bffffffffffffffffffffffffffffffffffffffffffffffffffffffff1916827bffffffffffffffffffffffffffffffffffffffffffffffffffffffff1916145b9050919050565b603087879050146103cc576040517f08c379a00000000000000000000000000000000000000000000000000000000081526004018080602001828103825260268152602001806116ec6026913960400191505060405180910390fd5b60208585905014610428576040517f08c379a00000000000000000000000000000000000000000000000000000000081526004018080602001828103825260368152602001806116836036913960400191505060405180910390fd5b60608383905014610484576040517f08c379a000000000000000000000000000000000000000000000000000000000815260040180806020018281038252602981526020018061175f6029913960400191505060405180910390fd5b670de0b6b3a76400003410156104e5576040517f08c379a00000000000000000000000000000000000000000000000000000000081526004018080602001828103825260268152602001806117396026913960400191505060405180910390fd5b6000633b9aca0034816104f457fe5b061461054b576040517f08c379a00000000000000000000000000000000000000000000000000000000081526004018080602001828103825260338152602001806116b96033913960400191505060405180910390fd5b6000633b9aca00348161055a57fe5b04905067ffffffffffffffff80168111156105c0576040517f08c379a00000000000000000000000000000000000000000000000000000000081526004018080602001828103825260278152602001806117126027913960400191505060405180910390fd5b60606105cb82611314565b90507f649bbc62d0e31342afea4e5cd82d4049e7e1ee912fc0889aa790803be39038c589898989858a8a610600602054611314565b60405180806020018060200180602001806020018060200186810386528e8e82818152602001925080828437600081840152601f19601f82011690508083019250505086810385528c8c82818152602001925080828437600081840152601f19601f82011690508083019250505086810384528a818151815260200191508051906020019080838360005b838110156106a657808201518184015260208101905061068b565b50505050905090810190601f1680156106d35780820380516001836020036101000a031916815260200191505b50868103835289898281815260200192508082843760008184015260
1f19601f820116905080830192505050868103825287818151815260200191508051906020019080838360005b8381101561073757808201518184015260208101905061071c565b50505050905090810190601f1680156107645780820380516001836020036101000a031916815260200191505b509d505050505050505050505050505060405180910390a1600060028a8a600060801b6040516020018084848082843780830192505050826fffffffffffffffffffffffffffffffff19166fffffffffffffffffffffffffffffffff1916815260100193505050506040516020818303038152906040526040518082805190602001908083835b6020831061080e57805182526020820191506020810190506020830392506107eb565b6001836020036101000a038019825116818451168082178552505050505050905001915050602060405180830381855afa158015610850573d6000803e3d6000fd5b5050506040513d602081101561086557600080fd5b8101908080519060200190929190505050905060006002808888600090604092610891939291906115da565b6040516020018083838082843780830192505050925050506040516020818303038152906040526040518082805190602001908083835b602083106108eb57805182526020820191506020810190506020830392506108c8565b6001836020036101000a038019825116818451168082178552505050505050905001915050602060405180830381855afa15801561092d573d6000803e3d6000fd5b5050506040513d602081101561094257600080fd5b8101908080519060200190929190505050600289896040908092610968939291906115da565b6000801b604051602001808484808284378083019250505082815260200193505050506040516020818303038152906040526040518082805190602001908083835b602083106109cd57805182526020820191506020810190506020830392506109aa565b6001836020036101000a038019825116818451168082178552505050505050905001915050602060405180830381855afa158015610a0f573d6000803e3d6000fd5b5050506040513d6020811015610a2457600080fd5b810190808051906020019092919050505060405160200180838152602001828152602001925050506040516020818303038152906040526040518082805190602001908083835b60208310610a8e5780518252602082019150602081019050602083039250610a6b565b6001836020036101000a038019825116818451168082178552505050505050905001915050602060405180830381855afa158015610ad0573d6000803e3d6000fd5b5050506040513d6020811015610ae557600080fd5b810190808051906020019092919050505090506000600280848c8c604051602001808481526020018383808284378083019250505093505050506040516020818303038152906040526040518082805190602001908083835b60208310610b615780518252602082019150602081019050602083039250610b3e565b6001836020036101000a038019825116818451168082178552505050505050905001915050602060405180830381855afa158015610ba3573d6000803e3d6000fd5b5050506040513d6020811015610bb857600080fd5b8101908080519060200190929190505050600286600060401b866040516020018084805190602001908083835b60208310610c085780518252602082019150602081019050602083039250610be5565b6001836020036101000a0380198251168184511680821785525050505050509050018367ffffffffffffffff191667ffffffffffffffff1916815260180182815260200193505050506040516020818303038152906040526040518082805190602001908083835b60208310610c935780518252602082019150602081019050602083039250610c70565b6001836020036101000a038019825116818451168082178552505050505050905001915050602060405180830381855afa158015610cd5573d6000803e3d6000fd5b5050506040513d6020811015610cea57600080fd5b810190808051906020019092919050505060405160200180838152602001828152602001925050506040516020818303038152906040526040518082805190602001908083835b60208310610d545780518252602082019150602081019050602083039250610d31565b6001836020036101000a038019825116818451168082178552505050505050905001915050602060405180830381855afa158015610d96573d6000803e3d6000fd5b5050506040513d6020811015610dab57600080fd5b81019080805190602001909291905050509050858114610e16576040517f08c379a0000000000000000000000000000000000000000
00000000000000000815260040180806020018281038252605481526020018061162f6054913960600191505060405180910390fd5b6001602060020a0360205410610e77576040517f08c379a000000000000000000000000000000000000000000000000000000000815260040180806020018281038252602181526020018061160e6021913960400191505060405180910390fd5b60016020600082825401925050819055506000602054905060008090505b6020811015610fb75760018083161415610ec8578260008260208110610eb757fe5b018190555050505050505050610fc7565b600260008260208110610ed757fe5b01548460405160200180838152602001828152602001925050506040516020818303038152906040526040518082805190602001908083835b60208310610f335780518252602082019150602081019050602083039250610f10565b6001836020036101000a038019825116818451168082178552505050505050905001915050602060405180830381855afa158015610f75573d6000803e3d6000fd5b5050506040513d6020811015610f8a57600080fd5b8101908080519060200190929190505050925060028281610fa757fe5b0491508080600101915050610e95565b506000610fc057fe5b5050505050505b50505050505050565b6060610fdd602054611314565b905090565b6000806000602054905060008090505b60208110156111d057600180831614156110e05760026000826020811061101557fe5b01548460405160200180838152602001828152602001925050506040516020818303038152906040526040518082805190602001908083835b60208310611071578051825260208201915060208101905060208303925061104e565b6001836020036101000a038019825116818451168082178552505050505050905001915050602060405180830381855afa1580156110b3573d6000803e3d6000fd5b5050506040513d60208110156110c857600080fd5b810190808051906020019092919050505092506111b6565b600283602183602081106110f057fe5b015460405160200180838152602001828152602001925050506040516020818303038152906040526040518082805190602001908083835b6020831061114b5780518252602082019150602081019050602083039250611128565b6001836020036101000a038019825116818451168082178552505050505050905001915050602060405180830381855afa15801561118d573d6000803e3d6000fd5b5050506040513d60208110156111a257600080fd5b810190808051906020019092919050505092505b600282816111c057fe5b0491508080600101915050610ff2565b506002826111df602054611314565b600060401b6040516020018084815260200183805190602001908083835b6020831061122057805182526020820191506020810190506020830392506111fd565b6001836020036101000a0380198251168184511680821785525050505050509050018267ffffffffffffffff191667ffffffffffffffff1916815260180193505050506040516020818303038152906040526040518082805190602001908083835b602083106112a55780518252602082019150602081019050602083039250611282565b6001836020036101000a038019825116818451168082178552505050505050905001915050602060405180830381855afa1580156112e7573d6000803e3d6000fd5b5050506040513d60208110156112fc57600080fd5b81019080805190602001909291905050509250505090565b6060600867ffffffffffffffff8111801561132e57600080fd5b506040519080825280601f01601f1916602001820160405280156113615781602001600182028036833780820191505090505b50905060008260c01b90508060076008811061137957fe5b1a60f81b8260008151811061138a57fe5b60200101907effffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff1916908160001a905350806006600881106113c657fe5b1a60f81b826001815181106113d757fe5b60200101907effffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff1916908160001a9053508060056008811061141357fe5b1a60f81b8260028151811061142457fe5b60200101907effffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff1916908160001a9053508060046008811061146057fe5b1a60f81b8260038151811061147157fe5b60200101907effffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff1916908160001a905350806003600881106114ad57fe5b1a60f81b826004815181106114be57fe5b60200101907effffffffffffffffffffffffffff
ffffffffffffffffffffffffffffffffff1916908160001a905350806002600881106114fa57fe5b1a60f81b8260058151811061150b57fe5b60200101907effffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff1916908160001a9053508060016008811061154757fe5b1a60f81b8260068151811061155857fe5b60200101907effffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff1916908160001a9053508060006008811061159457fe5b1a60f81b826007815181106115a557fe5b60200101907effffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff1916908160001a90535050919050565b600080858511156115ea57600080fd5b838611156115f757600080fd5b600185028301915084860390509450949250505056fe4465706f736974436f6e74726163743a206d65726b6c6520747265652066756c6c4465706f736974436f6e74726163743a207265636f6e7374727563746564204465706f7369744461746120646f6573206e6f74206d6174636820737570706c696564206465706f7369745f646174615f726f6f744465706f736974436f6e74726163743a20696e76616c6964207769746864726177616c5f63726564656e7469616c73206c656e6774684465706f736974436f6e74726163743a206465706f7369742076616c7565206e6f74206d756c7469706c65206f6620677765694465706f736974436f6e74726163743a20696e76616c6964207075626b6579206c656e6774684465706f736974436f6e74726163743a206465706f7369742076616c756520746f6f20686967684465706f736974436f6e74726163743a206465706f7369742076616c756520746f6f206c6f774465706f736974436f6e74726163743a20696e76616c6964207369676e6174757265206c656e677468a2646970667358221220230afd4b6e3551329e50f1239e08fa3ab7907b77403c4f237d9adf679e8e43cf64736f6c634300060b0033", + "balance": "0x0" + }, "8a91dc2d28b689474298d91899f0c1baf62cb85b": { "balance": "0x4B3B4CA85A86C47A098A224000000000" }, diff --git a/docker/local-node/Dockerfile b/docker/local-node/Dockerfile index 3fe18d14d91..b6d8857a4f6 100644 --- a/docker/local-node/Dockerfile +++ b/docker/local-node/Dockerfile @@ -6,12 +6,15 @@ WORKDIR / # Install required dependencies RUN apt-get update; apt-get install -y make bash git openssl libssl-dev gcc g++ curl pkg-config software-properties-common jq wget -RUN apt-get install -y libpq5 ca-certificates postgresql-client && rm -rf /var/lib/apt/lists/* +RUN apt-get install -y curl gnupg libpq5 ca-certificates postgresql-client && rm -rf /var/lib/apt/lists/* # Install node and yarn -RUN curl -sL https://deb.nodesource.com/setup_18.x | bash - -RUN apt-get install -y nodejs -RUN npm install -g yarn +ENV NODE_MAJOR=18 +RUN mkdir -p /etc/apt/keyrings && \ + wget -c -O - https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg && \ + echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_$NODE_MAJOR.x nodistro main" | tee /etc/apt/sources.list.d/nodesource.list && \ + apt-get update && apt-get install nodejs npm -y && \ + npm install -g yarn && npm install -g cspell && npm install -g markdown-link-check # Copy compiler (both solc and zksolc) binaries # Obtain `solc` 0.8.12. 
@@ -48,9 +51,9 @@ RUN cd /infrastructure/zk && yarn && yarn build && cd / # Build `local-setup-preparation` tool RUN cd /infrastructure/local-setup-preparation && yarn && cd / # Build L1 contracts package (contracts themselves should be already built) -RUN cd /contracts/ethereum && yarn && cd / +RUN cd /contracts/l1-contracts && yarn && cd / # Same for L2 contracts -RUN cd /contracts/zksync && yarn && cd / +RUN cd /contracts/l2-contracts && yarn && cd / # setup entrypoint script COPY ./docker/local-node/entrypoint.sh /usr/bin/ diff --git a/docker/local-node/entrypoint.sh b/docker/local-node/entrypoint.sh index e96674d6bdc..4998c7cc46e 100755 --- a/docker/local-node/entrypoint.sh +++ b/docker/local-node/entrypoint.sh @@ -27,7 +27,7 @@ then sed -i 's!^merkle_tree_backup_path=.*$!merkle_tree_backup_path="/var/lib/zksync/data/backups"!' /etc/env/base/database.toml # Switch zksolc compiler source from docker to binary - sed -i "s!'docker'!'binary'!" /contracts/zksync/hardhat.config.ts + sed -i "s!'docker'!'binary'!" /contracts/l2-contracts/hardhat.config.ts # Compile configs again (with changed values) yarn start config compile diff --git a/docker/proof-fri-compressor/Dockerfile b/docker/proof-fri-compressor/Dockerfile index a4654701311..e18c0c27f55 100644 --- a/docker/proof-fri-compressor/Dockerfile +++ b/docker/proof-fri-compressor/Dockerfile @@ -19,7 +19,7 @@ RUN curl https://sh.rustup.rs -sSf | bash -s -- -y && \ WORKDIR /usr/src/zksync COPY . . -RUN cargo build --release +RUN cargo build --release --bin zksync_proof_fri_compressor FROM debian:bookworm-slim diff --git a/docker/prover-fri-gateway/Dockerfile b/docker/prover-fri-gateway/Dockerfile index b0f11949551..256621e8df7 100644 --- a/docker/prover-fri-gateway/Dockerfile +++ b/docker/prover-fri-gateway/Dockerfile @@ -17,7 +17,7 @@ RUN curl https://sh.rustup.rs -sSf | bash -s -- -y && \ WORKDIR /usr/src/zksync COPY . . -RUN cargo build --release +RUN cargo build --release --bin zksync_prover_fri_gateway FROM debian:bookworm-slim RUN apt-get update && apt-get install -y curl libpq5 ca-certificates && rm -rf /var/lib/apt/lists/* diff --git a/docker/prover-fri/Dockerfile b/docker/prover-fri/Dockerfile index a4406b34ec7..8244aea06b2 100644 --- a/docker/prover-fri/Dockerfile +++ b/docker/prover-fri/Dockerfile @@ -17,7 +17,7 @@ RUN curl https://sh.rustup.rs -sSf | bash -s -- -y && \ WORKDIR /usr/src/zksync COPY . . 
-RUN cargo build --release +RUN cargo build --release --bin zksync_prover_fri FROM debian:bookworm-slim RUN apt-get update && apt-get install -y curl libpq5 ca-certificates && rm -rf /var/lib/apt/lists/* diff --git a/docker/prover-gar/Dockerfile b/docker/prover-gar/Dockerfile deleted file mode 100644 index ced97d6d7e7..00000000000 --- a/docker/prover-gar/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -# Will work locally only after prior universal key download and Docker login to the private registry - -ARG PROVER_IMAGE=latest -FROM us-docker.pkg.dev/matterlabs-infra/matterlabs-docker/prover-v2:2.0-$PROVER_IMAGE as prover - -FROM nvidia/cuda:11.8.0-runtime-ubuntu22.04 as app - -# HACK copying to root is the only way to make Docker layer caching work for these files for some reason -COPY *.bin / -COPY setup_2\^26.key /setup_2\^26.key - -RUN apt-get update && apt-get install -y libpq5 ca-certificates openssl && rm -rf /var/lib/apt/lists/* - -COPY --from=prover etc/system-contracts/bootloader/build/artifacts/ /etc/system-contracts/bootloader/build/artifacts/ -COPY --from=prover etc/system-contracts/artifacts-zk /etc/system-contracts/artifacts-zk -COPY --from=prover contracts/ethereum/artifacts/ /contracts/ethereum/artifacts/ -COPY --from=prover contracts/zksync/artifacts-zk/ /contracts/zksync/artifacts-zk/ -COPY --from=prover core/bin/verification_key_generator_and_server/data/ /core/bin/verification_key_generator_and_server/data/ -COPY --from=prover /usr/bin/zksync_prover /usr/bin/ - -ENTRYPOINT ["zksync_prover"] diff --git a/docker/prover-gpu-fri/Dockerfile b/docker/prover-gpu-fri/Dockerfile index b3edd5758ab..6a01926051e 100644 --- a/docker/prover-gpu-fri/Dockerfile +++ b/docker/prover-gpu-fri/Dockerfile @@ -24,7 +24,7 @@ RUN curl -Lo cmake-3.24.2-linux-x86_64.sh https://github.com/Kitware/CMake/relea WORKDIR /usr/src/zksync COPY . . 
-RUN cargo build --release --features "gpu" +RUN cargo build --release --features "gpu" --bin zksync_prover_fri FROM nvidia/cuda:12.2.0-runtime-ubuntu22.04 diff --git a/docker/prover/Dockerfile b/docker/prover/Dockerfile deleted file mode 100644 index a883aa02797..00000000000 --- a/docker/prover/Dockerfile +++ /dev/null @@ -1,63 +0,0 @@ -# Will work locally only after prior contracts build and universal setup key download - -FROM nvidia/cuda:11.8.0-devel-ubuntu22.04 as builder - -ARG DEBIAN_FRONTEND=noninteractive - -RUN apt-get update && apt-get install -y curl jq clang openssl libssl-dev gcc g++ \ - pkg-config build-essential libclang-dev && \ - rm -rf /var/lib/apt/lists/* - -ENV RUSTUP_HOME=/usr/local/rustup \ - CARGO_HOME=/usr/local/cargo \ - PATH=/usr/local/cargo/bin:$PATH - -RUN curl https://sh.rustup.rs -sSf | bash -s -- -y && \ - rustup install nightly-2023-08-21 && \ - rustup default nightly-2023-08-21 - -WORKDIR /usr/src/zksync - -ARG ERA_BELLMAN_CUDA_RELEASE -ENV ERA_BELLMAN_CUDA_RELEASE=$ERA_BELLMAN_CUDA_RELEASE -ENV GITHUB_OWNER=matter-labs -ENV GITHUB_REPO=era-bellman-cuda - -RUN set -e; \ - if [ -z "$ERA_BELLMAN_CUDA_RELEASE" ]; then \ - ERA_BELLMAN_CUDA_RELEASE="latest"; \ - fi; \ - if [ "$ERA_BELLMAN_CUDA_RELEASE" = "latest" ]; then \ - ERA_BELLMAN_CUDA_RELEASE=$(curl --silent "https://api.github.com/repos/${GITHUB_OWNER}/${GITHUB_REPO}/releases" | jq -r '.[0].tag_name'); \ - fi; \ - source_url="https://github.com/${GITHUB_OWNER}/${GITHUB_REPO}/archive/refs/tags/${ERA_BELLMAN_CUDA_RELEASE}.tar.gz"; \ - binary_url="https://github.com/${GITHUB_OWNER}/${GITHUB_REPO}/releases/download/${ERA_BELLMAN_CUDA_RELEASE}/bellman-cuda.tar.gz"; \ - curl --silent --location "$source_url" --output bellman-cuda-source.tar.gz; \ - curl --silent --location "$binary_url" --output bellman-cuda.tar.gz; \ - mkdir -p bellman-cuda; \ - tar xvfz bellman-cuda.tar.gz -C ./bellman-cuda; \ - tar xvfz bellman-cuda-source.tar.gz -C ./bellman-cuda --strip-components=1 - -ENV BELLMAN_CUDA_DIR=/usr/src/zksync/bellman-cuda - -COPY . . 
- -RUN cargo build --release --features gpu - -FROM nvidia/cuda:11.8.0-runtime-ubuntu22.04 as runner - -ARG DEBIAN_FRONTEND=noninteractive - -RUN apt-get update && apt-get install -y libpq5 ca-certificates openssl && rm -rf /var/lib/apt/lists/* - -COPY etc/system-contracts/bootloader/build/artifacts/ /etc/system-contracts/bootloader/build/artifacts/ -COPY etc/system-contracts/artifacts-zk /etc/system-contracts/artifacts-zk -COPY contracts/ethereum/artifacts/ /contracts/ethereum/artifacts/ -COPY contracts/zksync/artifacts-zk/ /contracts/zksync/artifacts-zk/ -COPY setup_2\^26.key /etc/ - -COPY core/bin/verification_key_generator_and_server/data/ /core/bin/verification_key_generator_and_server/data/ - -COPY --from=builder /usr/src/zksync/target/release/zksync_prover /usr/bin/ - -ENTRYPOINT ["zksync_prover"] diff --git a/docker/prysm/config.yml b/docker/prysm/config.yml new file mode 100644 index 00000000000..0b674138155 --- /dev/null +++ b/docker/prysm/config.yml @@ -0,0 +1,30 @@ +CONFIG_NAME: interop +PRESET_BASE: interop + +# Genesis +GENESIS_FORK_VERSION: 0x20000089 + +# Altair +ALTAIR_FORK_EPOCH: 0 +ALTAIR_FORK_VERSION: 0x20000090 + +# Merge +BELLATRIX_FORK_EPOCH: 0 +BELLATRIX_FORK_VERSION: 0x20000091 +TERMINAL_TOTAL_DIFFICULTY: 0 + +# Capella +CAPELLA_FORK_EPOCH: 0 +CAPELLA_FORK_VERSION: 0x20000092 +MAX_WITHDRAWALS_PER_PAYLOAD: 16 + +# Deneb +DENEB_FORK_VERSION: 0x20000093 +DENEB_FORK_EPOCH: 0 + +# Time parameters +SECONDS_PER_SLOT: 2 +SLOTS_PER_EPOCH: 6 + +# Deposit contract +DEPOSIT_CONTRACT_ADDRESS: 0x4242424242424242424242424242424242424242 diff --git a/docker/server-v2/Dockerfile b/docker/server-v2/Dockerfile index 1b79fb32854..a7d8fc7487f 100644 --- a/docker/server-v2/Dockerfile +++ b/docker/server-v2/Dockerfile @@ -31,16 +31,14 @@ EXPOSE 3030 COPY --from=builder /usr/src/zksync/target/release/zksync_server /usr/bin COPY --from=builder /usr/src/zksync/target/release/block_reverter /usr/bin COPY --from=builder /usr/src/zksync/target/release/merkle_tree_consistency_checker /usr/bin -COPY --from=builder /usr/src/zksync/target/release/rocksdb_util /usr/bin -COPY etc/system-contracts/bootloader/build/artifacts/ /etc/system-contracts/bootloader/build/artifacts/ -COPY etc/system-contracts/contracts/artifacts/ /etc/system-contracts/contracts/artifacts/ -COPY etc/system-contracts/contracts/precompiles/artifacts/ /etc/system-contracts/contracts/precompiles/artifacts/ -COPY etc/system-contracts/artifacts-zk /etc/system-contracts/artifacts-zk -COPY contracts/ethereum/artifacts/ /contracts/ethereum/artifacts/ -COPY contracts/zksync/artifacts-zk/ /contracts/zksync/artifacts-zk/ +COPY contracts/system-contracts/bootloader/build/artifacts/ /contracts/system-contracts/bootloader/build/artifacts/ +COPY contracts/system-contracts/contracts-preprocessed/artifacts/ /contracts/system-contracts/contracts-preprocessed/artifacts/ +COPY contracts/system-contracts/contracts-preprocessed/precompiles/artifacts/ /contracts/system-contracts/contracts-preprocessed/precompiles/artifacts/ +COPY contracts/system-contracts/artifacts-zk /contracts/system-contracts/artifacts-zk +COPY contracts/l1-contracts/artifacts/ /contracts/l1-contracts/artifacts/ +COPY contracts/l2-contracts/artifacts-zk/ /contracts/l2-contracts/artifacts-zk/ COPY etc/tokens/ /etc/tokens/ COPY etc/ERC20/ /etc/ERC20/ COPY etc/multivm_bootloaders/ /etc/multivm_bootloaders/ -COPY core/bin/verification_key_generator_and_server/data/ /core/bin/verification_key_generator_and_server/data/ ENTRYPOINT ["zksync_server"] diff --git 
a/docker/circuit-synthesizer/Dockerfile b/docker/snapshots-creator/Dockerfile similarity index 50% rename from docker/circuit-synthesizer/Dockerfile rename to docker/snapshots-creator/Dockerfile index 831fe167223..897f28f8780 100644 --- a/docker/circuit-synthesizer/Dockerfile +++ b/docker/snapshots-creator/Dockerfile @@ -1,9 +1,11 @@ +# syntax=docker/dockerfile:experimental FROM debian:bookworm-slim as builder -ARG DEBIAN_FRONTEND=noninteractive +WORKDIR /usr/src/zksync +COPY . . RUN apt-get update && apt-get install -y curl clang openssl libssl-dev gcc g++ \ - pkg-config build-essential libclang-dev && \ + pkg-config build-essential libclang-dev linux-libc-dev liburing-dev && \ rm -rf /var/lib/apt/lists/* ENV RUSTUP_HOME=/usr/local/rustup \ @@ -14,16 +16,13 @@ RUN curl https://sh.rustup.rs -sSf | bash -s -- -y && \ rustup install nightly-2023-08-21 && \ rustup default nightly-2023-08-21 -WORKDIR /usr/src/zksync -COPY . . - -RUN cargo build --release +RUN cargo build --release --bin snapshots_creator FROM debian:bookworm-slim -RUN apt-get update && apt-get install -y curl openssl libpq5 ca-certificates && rm -rf /var/lib/apt/lists/* +RUN apt-get update && apt-get install -y curl libpq5 liburing-dev ca-certificates && \ + rm -rf /var/lib/apt/lists/* -COPY core/bin/verification_key_generator_and_server/data/ /core/bin/verification_key_generator_and_server/data/ -COPY --from=builder /usr/src/zksync/target/release/zksync_circuit_synthesizer /usr/bin/ +COPY --from=builder /usr/src/zksync/target/release/snapshots_creator /usr/bin -ENTRYPOINT ["zksync_circuit_synthesizer"] +ENTRYPOINT ["snapshots_creator"] diff --git a/docker/witness-generator/Dockerfile b/docker/witness-generator/Dockerfile index f2fe7926a92..f431339d3e9 100644 --- a/docker/witness-generator/Dockerfile +++ b/docker/witness-generator/Dockerfile @@ -17,7 +17,7 @@ RUN curl https://sh.rustup.rs -sSf | bash -s -- -y && \ WORKDIR /usr/src/zksync COPY . . -RUN cargo build --release +RUN cargo build --release --bin zksync_witness_generator FROM debian:bookworm-slim diff --git a/docker/witness-vector-generator/Dockerfile b/docker/witness-vector-generator/Dockerfile index b366006009e..5861f3e5162 100644 --- a/docker/witness-vector-generator/Dockerfile +++ b/docker/witness-vector-generator/Dockerfile @@ -17,7 +17,7 @@ RUN curl https://sh.rustup.rs -sSf | bash -s -- -y && \ WORKDIR /usr/src/zksync COPY . . 
-RUN cargo build --release +RUN cargo build --release --bin zksync_witness_vector_generator FROM debian:bookworm-slim diff --git a/docker/zk-environment/20.04_amd64_cuda_11_8.Dockerfile b/docker/zk-environment/20.04_amd64_cuda_11_8.Dockerfile index 9aa7a2b0067..bd77e680d5f 100644 --- a/docker/zk-environment/20.04_amd64_cuda_11_8.Dockerfile +++ b/docker/zk-environment/20.04_amd64_cuda_11_8.Dockerfile @@ -50,10 +50,13 @@ RUN apt update; apt install -y docker-ce-cli # Configurate git to fetch submodules correctly (https://stackoverflow.com/questions/38378914/how-to-fix-git-error-rpc-failed-curl-56-gnutls) RUN git config --global http.postBuffer 1048576000 -# Install node and yarn -RUN wget -c -O - https://deb.nodesource.com/setup_18.x | bash - -RUN apt-get install -y nodejs -RUN npm install -g yarn +# Install Node and yarn +ENV NODE_MAJOR=18 +RUN mkdir -p /etc/apt/keyrings && \ + wget -c -O - https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg && \ + echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_$NODE_MAJOR.x nodistro main" | tee /etc/apt/sources.list.d/nodesource.list && \ + apt-get update && apt-get install nodejs -y && \ + npm install -g yarn # Install Rust and required cargo packages ENV RUSTUP_HOME=/usr/local/rustup \ @@ -72,7 +75,7 @@ RUN echo "deb http://packages.cloud.google.com/apt cloud-sdk main" > /etc/apt/so RUN wget -c -O - https://sh.rustup.rs | bash -s -- -y RUN rustup install nightly-2023-08-21 RUN rustup default stable -RUN cargo install --version=0.5.13 sqlx-cli +RUN cargo install --version=0.7.3 sqlx-cli RUN cargo install cargo-nextest # Copy compiler (both solc and zksolc) binaries diff --git a/docker/zk-environment/20.04_amd64_cuda_12_0.Dockerfile b/docker/zk-environment/20.04_amd64_cuda_12_0.Dockerfile index ed10b252974..d0bb05fed16 100644 --- a/docker/zk-environment/20.04_amd64_cuda_12_0.Dockerfile +++ b/docker/zk-environment/20.04_amd64_cuda_12_0.Dockerfile @@ -49,9 +49,12 @@ RUN apt update; apt install -y docker-ce-cli RUN git config --global http.postBuffer 1048576000 # Install node and yarn -RUN wget -c -O - https://deb.nodesource.com/setup_18.x | bash - -RUN apt-get install -y nodejs -RUN npm install -g yarn +ENV NODE_MAJOR=18 +RUN mkdir -p /etc/apt/keyrings && \ + wget -c -O - https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg && \ + echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_$NODE_MAJOR.x nodistro main" | tee /etc/apt/sources.list.d/nodesource.list && \ + apt-get update && apt-get install nodejs -y && \ + npm install -g yarn # Install Rust and required cargo packages ENV RUSTUP_HOME=/usr/local/rustup \ @@ -70,7 +73,7 @@ RUN echo "deb http://packages.cloud.google.com/apt cloud-sdk main" > /etc/apt/so RUN wget -c -O - https://sh.rustup.rs | bash -s -- -y RUN rustup install nightly-2023-08-21 RUN rustup default stable -RUN cargo install --version=0.5.13 sqlx-cli +RUN cargo install --version=0.7.3 sqlx-cli RUN cargo install cargo-nextest # Copy compiler (both solc and zksolc) binaries diff --git a/docker/zk-environment/Dockerfile b/docker/zk-environment/Dockerfile index f86aeaddb11..0c714db68e4 100644 --- a/docker/zk-environment/Dockerfile +++ b/docker/zk-environment/Dockerfile @@ -83,8 +83,8 @@ ENV NODE_MAJOR=18 RUN mkdir -p /etc/apt/keyrings && \ wget -c -O - https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg 
&& \ echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_$NODE_MAJOR.x nodistro main" | tee /etc/apt/sources.list.d/nodesource.list && \ - apt-get update && apt-get install nodejs -y && \ - npm install -g yarn + apt-get update && apt-get install nodejs npm -y && \ + npm install -g yarn && npm install -g cspell && npm install -g markdown-link-check # Install Rust and required cargo packages ENV RUSTUP_HOME=/usr/local/rustup \ @@ -103,8 +103,9 @@ RUN echo "deb [arch=${ARCH}] http://packages.cloud.google.com/apt cloud-sdk main RUN wget -c -O - https://sh.rustup.rs | bash -s -- -y && \ rustup default stable -RUN cargo install --version=0.5.13 sqlx-cli +RUN cargo install --version=0.7.3 sqlx-cli RUN cargo install cargo-nextest +RUN cargo install cargo-spellcheck # Copy compiler (both solc and zksolc) binaries # Obtain `solc` 0.8.20. diff --git a/docs/advanced/README.md b/docs/advanced/README.md deleted file mode 100644 index 032153fb2dc..00000000000 --- a/docs/advanced/README.md +++ /dev/null @@ -1,14 +0,0 @@ -# Advanced documentation - -This documentation is aimed at advanced users who are interested in developing the zkSync Era itself (rather than just -the contracts on top) - and would like to understand how the system works internally. - -The documents in this directory are not meant to be a full specification, but give you the rough understanding of the -system internals. - -Suggested order of reading: - -- [Initialization](01_initialization.md) -- [Deposits](02_deposits.md) -- [Withdrawals](03_withdrawals.md) -- [Contracts](contracts.md) diff --git a/docs/advanced/blocks_and_batches.md b/docs/advanced/blocks_and_batches.md deleted file mode 100644 index 7c8f136e887..00000000000 --- a/docs/advanced/blocks_and_batches.md +++ /dev/null @@ -1,85 +0,0 @@ -# Blocks & Batches - How we package transactions - -In this article, we will explore the processing of transactions, how we group them into blocks, what it means to "seal" -a block, and why it is important to have rollbacks in our virtual machine (VM). - -At the basic level, we have individual transactions. However, to execute them more efficiently, we group them together -into blocks & batches - -## L1 Batch vs L2 Block (a.k.a MiniBlock) vs Transaction - -To help visualize the concept, here are two images: - -![Block layout][block_layout] - -You can refer to the Block layout image to see how the blocks are organized. It provides a graphical representation of -how transactions are arranged within the blocks and the arrangement of L2 blocks within L1 "batches." - -![Explorer example][explorer_example] - -### L2 blocks (aka Miniblocks) - -Currently, the L2 blocks do not have a major role in the system, until we transition to a decentralized sequencer. We -introduced them mainly as a "compatibility feature" to accommodate various tools, such as Metamask, which expect a block -that changes frequently. This allows these tools to provide feedback to users, confirming that their transaction has -been added. - -As of now, an L2 block is created every 2 seconds (controlled by StateKeeper's config `miniblock_commit_deadline_ms`), -and it includes all the transactions received during that time period. This periodic creation of L2 blocks ensures that -transactions are processed and included in the blocks regularly. - -### L1 batches - -L1 batches play a crucial role because they serve as the fundamental unit for generating proofs. 
From the perspective of -the virtual machine (VM), each L1 batch represents the execution of a single program, specifically the Bootloader. The -Bootloader internally processes all the transactions belonging to that particular batch. Therefore, the L1 batch serves -as the container for executing the program and handling the transactions within it. - -#### So how large can L1 batch be - -Most blockchains use factors like time and gas usage to determine when a block should be closed or sealed. However, our -case is a bit more complex because we also need to consider prover capacity and limits related to publishing to L1. - -The decision of when to seal the block is handled by the code in the [conditional_sealer][conditional_sealer] module. It -maintains a list of `SealCriterion` and at the time of writing this article, [we have 9 reasons to seal the -block][reasons_for_sealing], which include: - -- Transaction slots limit (currently set to 750 transactions in `StateKeeper`'s config - `transaction_slots`). -- Gas limit (currently set to `MAX_L2_TX_GAS_LIMIT` = 80M). -- Published data limit (as each L1 batch must publish information about the changed slots to L1, so all the changes must - fit within the L1 transaction limit, currently set to `MAX_PUBDATA_PER_L1_BATCH`= 120k). -- zkEVM Geometry limits - For certain operations like merklelization, there is a maximum number of circuits that can be - included in a single L1 batch. If this limit is exceeded, we wouldn't be able to generate the proof. - -We also have a `TimeoutCriterion` - but it is not enabled. - -However, these sealing criteria pose a significant challenge because it is difficult to predict in advance whether -adding a given transaction to the current batch will exceed the limits or not. This unpredictability adds complexity to -the process of determining when to seal the block. - -#### What if a transaction doesn't fit - -To handle situations where a transaction exceeds the limits of the currently active L1 batch, we employ a "try and -rollback" approach. This means that we attempt to add the transaction to the active L1 batch, and if we receive a -`ExcludeAndSeal` response indicating that it doesn't fit, we roll back the virtual machine (VM) to the state before the -transaction was attempted. - -Implementing this approach introduces a significant amount of complexity in the `oracles` (also known as interfaces) of -the VM. These oracles need to support snapshotting and rolling back operations to ensure consistency when handling -transactions that don't fit. - -In a separate article, we will delve into more details about how these oracles and the VM work, providing a -comprehensive understanding of their functionality and interactions. 
- -[block_layout]: - https://user-images.githubusercontent.com/128217157/236494232-aeed380c-78f6-4fda-ab2a-8de26c1089ff.png - 'block layout' -[explorer_example]: - https://user-images.githubusercontent.com/128217157/236500717-165470ad-30b8-4ad6-97ed-fc29c8eb1fe0.png - 'explorer example' -[conditional_sealer]: - https://github.com/matter-labs/zksync-era/blob/main/core/lib/zksync_core/src/state_keeper/seal_criteria/conditional_sealer.rs#20 - 'Conditional Sealer' -[reasons_for_sealing]: - https://github.com/matter-labs/zksync-era/blob/main/core/lib/zksync_core/src/state_keeper/seal_criteria/mod.rs#L106 - 'Reasons for Sealing' diff --git a/docs/advanced/01_initialization.md b/docs/guides/advanced/01_initialization.md similarity index 94% rename from docs/advanced/01_initialization.md rename to docs/guides/advanced/01_initialization.md index 2d804f5032c..f44af176fb8 100644 --- a/docs/advanced/01_initialization.md +++ b/docs/guides/advanced/01_initialization.md @@ -1,11 +1,11 @@ # zkSync deeper dive -The goal of this doc, is to show you some more details on how zkSync works internally. +The goal of this doc is to show you some more details on how zkSync works internally. Please do the dev_setup.md and development.md (these commands do all the heavy lifting on starting the components of the system). -Now let's take a look what's inside: +Now let's take a look at what's inside: ### Initialization (zk init) @@ -20,7 +20,7 @@ there, make sure to run `zk` (that compiles this code), before re-running `zk in As first step, it gets the docker images for postgres and geth. -Geth (one of the ethereum clients) will be used to setup our own copy of L1 chain (that our local zkSync would use). +Geth (one of the Ethereum clients) will be used to setup our own copy of L1 chain (that our local zkSync would use). Postgres is one of the two databases, that is used by zkSync (the other one is RocksDB). Currently most of the data is stored in postgres (blocks, transactions etc) - while RocksDB is only storing the state (Tree & Map) - and it used by @@ -53,7 +53,7 @@ contents). As our network has just started, the database would be quite empty. -You can see the schema for the database in [dal/README.md](../../core/lib/dal/README.md) TODO: add the link to the +You can see the schema for the database in [dal/README.md](../../../core/lib/dal/README.md) TODO: add the link to the document with DB schema. #### Docker diff --git a/docs/advanced/02_deposits.md b/docs/guides/advanced/02_deposits.md similarity index 97% rename from docs/advanced/02_deposits.md rename to docs/guides/advanced/02_deposits.md index 9d9ac4d526b..0ca78c9bd88 100644 --- a/docs/advanced/02_deposits.md +++ b/docs/guides/advanced/02_deposits.md @@ -158,11 +158,11 @@ The call to requestL2Transaction, is adding the transaction to the priorityQueue The zk server (that you started with `zk server` command) is listening on events that are emitted from this contract (via eth_watcher module - -[`loop_iteration` function](https://github.com/matter-labs/zksync-era/blob/main/core/lib/zksync_core/src/eth_watch/mod.rs#L163) +[`loop_iteration` function](https://github.com/matter-labs/zksync-era/blob/main/core/lib/zksync_core/src/eth_watch/mod.rs#L161) ) and adds them to the postgres database (into `transactions` table). 
You can actually check it - by running the psql and looking at the contents of the table - then you'll notice that
-transaction was succesfully inserted, and it was also marked as 'priority' (as it came from L1) - as regular
+transaction was successfully inserted, and it was also marked as 'priority' (as it came from L1) - as regular
transactions that are received by the server directly are not marked as priority. You can verify that this is your transaction, by looking at the `l1_block_number` column (it should match the
diff --git a/docs/advanced/03_withdrawals.md b/docs/guides/advanced/03_withdrawals.md similarity index 97% rename from docs/advanced/03_withdrawals.md rename to docs/guides/advanced/03_withdrawals.md index 003121d8646..1ea1f5d1bd6 100644
--- a/docs/advanced/03_withdrawals.md
+++ b/docs/guides/advanced/03_withdrawals.md
@@ -81,7 +81,7 @@ This is a good opportunity to talk about system contracts that are automatically
list here [in github](https://github.com/matter-labs/era-system-contracts/blob/436d57da2fb35c40e38bcb6637c3a090ddf60701/scripts/constants.ts#L29)
-This is the place were we specify that `bootloader` is at address 0x8001, `NonceHolder` at 0x8003 etc.
+This is the place where we specify that `bootloader` is at address 0x8001, `NonceHolder` at 0x8003 etc.
This brings us to [L2EthToken.sol](https://github.com/matter-labs/era-system-contracts/blob/main/contracts/L2EthToken.sol) that has the
@@ -120,7 +120,7 @@ BTW - all the transactions are sent to the 0x54e address - which is the `Diamond
be different on your local node - see previous tutorial for more info) . And inside, all three methods above belong to
-[Executor.sol](https://github.com/matter-labs/era-contracts/blob/main/ethereum/contracts/zksync/facets/Executor.sol)
+[Executor.sol](https://github.com/matter-labs/era-contracts/blob/main/l1-contracts/contracts/zksync/facets/Executor.sol)
facet and you can look at [README](https://github.com/matter-labs/era-contracts/blob/main/docs/Overview.md#executorfacet) to see the details of what each method does.
diff --git a/docs/guides/advanced/0_alternative_vm_intro.md b/docs/guides/advanced/0_alternative_vm_intro.md new file mode 100644 index 00000000000..74e70edc953
--- /dev/null
+++ b/docs/guides/advanced/0_alternative_vm_intro.md
@@ -0,0 +1,311 @@
+# zkEVM internals
+
+## zkEVM clarifier
+
+[Back to ToC](../../specs/README.md)
+
+The zkSync zkEVM plays a fundamentally different role in the zkStack than the EVM does in Ethereum. The EVM is used to
+execute code in Ethereum's state transition function. This STF needs a client to implement and run it. Ethereum has a
+multi-client philosophy: there are multiple clients, and they are written in Go, Rust, and other traditional programming
+languages, all running and verifying the same STF.
+
+We have a different set of requirements: we need to produce a proof that some client executed the STF correctly. The
+first consequence is that the client needs to be hard-coded; we cannot have the same multi-client philosophy. This
+client is the zkEVM; it can run the STF efficiently, including execution of smart contracts similarly to the EVM. The
+zkEVM was also designed to be proven efficiently.
+
+For efficiency reasons the zkEVM is similar to the EVM. This makes executing smart contracts inside of it easy. It
+also has special features that are not in the EVM but are needed for the rollup's STF: storage, gas metering,
+precompiles and other things.
+Some of these features are implemented as system contracts while others are built into the
+VM. System Contracts are contracts with special permissions, deployed at predefined addresses. Finally, we have the
+bootloader, which is also a contract, although it is not deployed at any address. This is the STF that is ultimately
+executed by the zkEVM, and it executes the transaction against the state.
+
+
+
+Full specification of the zkEVM is beyond the scope of this document. However, this section will give you most of the
+details needed for understanding the L2 system smart contracts & basic differences between EVM and zkEVM. Note also that
+usually understanding the EVM is needed for efficient smart contract development. Understanding the zkEVM goes beyond
+this; it is needed for developing the rollup itself.
+
+## Registers and memory management
+
+On the EVM, during transaction execution, the following memory areas are available:
+
+- `memory` itself.
+- `calldata`, the immutable slice of parent memory.
+- `returndata`, the immutable slice returned by the latest call to another contract.
+- `stack`, where the local variables are stored.
+
+Unlike the EVM, which is a stack machine, the zkEVM has 16 registers. Instead of receiving input from `calldata`, the
+zkEVM starts by receiving a _pointer_ in its first register (basically a packed struct with 4 elements: the memory page
+id, and the start and length of the slice to which it points) to the calldata page of the parent. Similarly, a
+transaction can receive some other additional data within its registers at the start of the program: whether the
+transaction should invoke the constructor
+([more about deployments here](https://github.com/matter-labs/zksync-era/blob/main/docs/specs/zk_evm/system_contracts.md#contractdeployer--immutablesimulator)),
+whether the transaction has the `isSystem` flag, etc. The meaning of each of these flags will be expanded further in this
+section.
+
+_Pointers_ are a separate type in the VM. It is only possible to:
+
+- Read some value within a pointer.
+- Shrink the pointer by reducing the slice to which it points.
+- Receive the pointer to the `returndata` as calldata.
+- Pointers can be stored only on stack/registers to make sure that the other contracts can not read `memory/returndata`
+  of contracts they are not supposed to.
+- A pointer can be converted to the u256 integer representing it, but an integer can not be converted to a pointer, to
+  prevent unallowed memory access.
+- It is not possible to return a pointer that points to a memory page with an id smaller than the one for the current page.
+  What this means is that it is only possible to `return` a pointer to the memory of the current frame or one of the
+  pointers returned by the subcalls of the current frame.
+
+### Memory areas in zkEVM
+
+For each frame, the following memory areas are allocated:
+
+- _Heap_ (plays the same role as `memory` on Ethereum).
+- _AuxHeap_ (auxiliary heap). It has the same properties as Heap, but it is used by the compiler to encode
+  calldata/copy the `returndata` from the calls to system contracts, so as not to interfere with the standard Solidity
+  memory alignment.
+- _Stack_. Unlike Ethereum, the stack is not the primary place to get arguments for opcodes. The biggest difference
+  between the stack on the zkEVM and the EVM is that on zkSync the stack can be accessed at any location (just like
+  memory). While users do not pay for the growth of the stack, the stack can be fully cleared at the end of the frame,
+  so the overhead is minimal.
+- _Code_. The memory area from which the VM executes the code of the contract. The contract itself can not read the code
+  page; it is only read implicitly by the VM.
+
+Also, as mentioned in the previous section, the contract receives the pointer to the calldata.
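+To make the pointer layout concrete, here is an illustrative Rust sketch of such a "fat pointer" (the text above names
+the page id, start and length; the `offset` field and all names here are assumptions for illustration, not the actual
+zkEVM definitions):
+
+```rust
+/// Hypothetical layout of a zkEVM fat pointer.
+#[derive(Clone, Copy, Debug)]
+struct FatPointer {
+    offset: u32,      // current offset within the slice (assumed fourth field)
+    memory_page: u32, // id of the memory page the pointer refers to
+    start: u32,       // start of the slice within the page
+    length: u32,      // length of the slice
+}
+
+impl FatPointer {
+    /// A pointer can only be shrunk, never widened, so a callee can never
+    /// see more memory than its caller chose to expose.
+    fn shrink(self, new_length: u32) -> Option<Self> {
+        (new_length <= self.length).then_some(Self { length: new_length, ..self })
+    }
+}
+```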
+
+### Managing returndata & calldata
+
+Whenever a contract finishes its execution, the parent’s frame receives a _pointer_ as `returndata`. This pointer may
+point to the child frame’s Heap/AuxHeap or it can even be the same `returndata` pointer that the child frame received
+from some of its child frames.
+
+The same goes with the `calldata`. Whenever a contract starts its execution, it receives the pointer to the calldata.
+The parent frame can provide any valid pointer as the calldata, which means it can either be a pointer to the slice of
+the parent’s frame memory (heap or auxHeap) or it can be some valid pointer that the parent frame has received before as
+calldata/returndata.
+
+A contract simply remembers the calldata pointer at the start of the execution frame (this is by design of the compiler)
+and the latest received returndata pointer.
+
+An important implication of this is that it is now possible to do the following calls without any memory copying:
+
+A → B → C
+
+where C receives a slice of the calldata received by B.
+
+The same goes for returning data:
+
+A ← B ← C
+
+There is no need to copy returned data if B returns a slice of the returndata returned by C.
+
+Note, that you can _not_ use the pointer that you received via calldata as returndata (i.e. return it at the end of the
+execution frame). Otherwise, it would be possible that the returndata points to the memory slice of the active frame and
+allows editing the `returndata`. It means that in the examples above, C could not return a slice of its calldata without
+memory copying.
+
+Some of these memory optimizations can be seen utilized in the
+[EfficientCall](https://github.com/code-423n4/2023-10-zksync/blob/main/code/system-contracts/contracts/libraries/EfficientCall.sol)
+library, which allows performing a call while reusing the slice of calldata that the frame already has, without memory
+copying.
+
+### Returndata & precompiles
+
+Some of the operations which are opcodes on Ethereum have become calls to some of the system contracts. The most
+notable examples are `Keccak256`, `SystemContext`, etc. Note, that, if done naively, the following lines of code would
+work differently on zkSync and Ethereum:
+
+```solidity
+pop(call(...))
+keccak(...)
+returndatacopy(...)
+```
+
+This is because the call to the keccak precompile would modify the `returndata`. To avoid this, our compiler does not
+override the latest `returndata` pointer after calls to such opcode-like precompiles.
+
+## zkEVM specific opcodes
+
+While some Ethereum opcodes are not supported out of the box, some of the new opcodes were added to facilitate the
+development of the system contracts.
+
+Note, that this list does not aim to be specific about the internals, but rather explains the methods in
+[SystemContractHelper.sol](https://github.com/code-423n4/2023-10-zksync/blob/main/code/system-contracts/contracts/libraries/SystemContractHelper.sol).
+
+### **Only for kernel space**
+
+These opcodes are allowed only for contracts in kernel space (i.e. system contracts). If executed in other places they
+result in `revert(0,0)`.
+
+- `mimic_call`. The same as a normal `call`, but it can alter the `msg.sender` field of the transaction.
+- `to_l1`. Sends a system L2→L1 log to Ethereum.
+  The structure of this log can be seen
+  [here](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/contracts/ethereum/contracts/zksync/Storage.sol#L47).
+- `event`. Emits an L2 log to zkSync. Note, that L2 logs are not equivalent to Ethereum events. Each L2 log can emit 64
+  bytes of data (the actual size is 88 bytes, because it includes the emitter address, etc). A single Ethereum event is
+  represented with multiple `event` logs. This opcode is only used by the `EventWriter` system contract.
+- `precompile_call`. This is an opcode that accepts two parameters: the uint256 representing the packed parameters for
+  it as well as the ergs to burn. Besides the price for the precompile call itself, it burns the provided ergs and
+  executes the precompile. The action it performs depends on `this` during execution:
+  - If it is the address of the `ecrecover` system contract, it performs the ecrecover operation.
+  - If it is the address of the `sha256`/`keccak256` system contracts, it performs the corresponding hashing operation.
+  - It does nothing (i.e. just burns ergs) otherwise. It can be used to burn ergs needed for L2→L1 communication or
+    publication of bytecodes onchain.
+- `setValueForNextFarCall` sets `msg.value` for the next `call`/`mimic_call`. Note, that it does not mean that the value
+  will be really transferred. It just sets the corresponding `msg.value` context variable. The transferring of ETH
+  should be done via other means by the system contract that uses this parameter. Note, that this method has no effect
+  on `delegatecall`, since `delegatecall` inherits the `msg.value` of the previous frame.
+- `increment_tx_counter` increments the counter of the transactions within the VM. The transaction counter is used
+  mostly for the VM’s internal tracking of events. The opcode is used only in the bootloader, after the end of each
+  transaction.
+
+Note, that currently we do not have access to the `tx_counter` within the VM (i.e. for now it is possible to increment it
+and it will be automatically used for logs such as `event`s as well as system logs produced by `to_l1`, but we can not
+read it). We need to read it to publish the _user_ L2→L1 logs, so `increment_tx_counter` is always accompanied by the
+corresponding call to the
+[SystemContext](https://github.com/matter-labs/zksync-era/blob/main/docs/specs/zk_evm/system_contracts.md#systemcontext)
+contract.
+
+More on the difference between system and user logs can be read
+[here](https://github.com/code-423n4/2023-10-zksync/blob/main/docs/Smart%20contract%20Section/Handling%20pubdata%20in%20Boojum.md).
+
+- `set_pubdata_price` sets the price (in gas) for publishing a single byte of pubdata.
+
+### **Generally accessible**
+
+Here are opcodes that can be generally accessed by any contract. Note that while the VM allows access to these methods,
+it does not mean that this is easy: the compiler might not have convenient support for some use-cases yet.
+
+- `near_call`. It is basically a “framed” jump to some location of the code of your contract. The differences between a
+  `near_call` and an ordinary jump are:
+  1. It is possible to provide an ergsLimit for it. Note, that unlike “`far_call`”s (i.e. calls between contracts) the
+     63/64 rule does not apply to them.
+  2. If the near call frame panics, all state changes made by it are reversed. Please note, that the memory changes will
+     **not** be reverted.
+- `getMeta`.
+  Returns a u256 packed value of the
+  [ZkSyncMeta](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/system-contracts/contracts/libraries/SystemContractHelper.sol#L42)
+  struct. Note that this is not tight packing. The struct is formed by the
+  [following rust code](https://github.com/matter-labs/era-zkevm_opcode_defs/blob/c7ab62f4c60b27dfc690c3ab3efb5fff1ded1a25/src/definitions/abi/meta.rs#L4).
+- `getCodeAddress` — receives the address of the executed code. This is different from `this`, since in the case of
+  delegatecalls `this` is preserved, but `codeAddress` is not.
+
+### Flags for calls
+
+Besides the calldata, it is also possible to provide additional information to the callee when doing `call`,
+`mimic_call`, or `delegate_call`. The called contract will receive the following information in its first 12 registers
+at the start of execution:
+
+- _r1_ — the pointer to the calldata.
+- _r2_ — the pointer with flags of the call. This is a mask, where each bit is set only if the corresponding flag has
+  been set for the call. Currently, two flags are supported:
+  - 0-th bit: `isConstructor` flag. This flag can only be set by system contracts and denotes whether the account should
+    execute its constructor logic. Note, unlike Ethereum, there is no separation between constructor & deployment
+    bytecode. More on that can be read
+    [here](https://github.com/matter-labs/zksync-era/blob/main/docs/specs/zk_evm/system_contracts.md#contractdeployer--immutablesimulator).
+  - 1-st bit: `isSystem` flag. Whether the call is intended to invoke a system contract’s function. While most of the
+    system contracts’ functions are relatively harmless, accessing some with calldata only may break the invariants of
+    Ethereum, e.g. if the system contract uses `mimic_call`: no one expects that by calling a contract some operations
+    may be done in the name of the caller. This flag can only be set if the callee is in kernel space.
+- The remaining registers r3..r12 are non-empty only if the `isSystem` flag is set. There may be arbitrary values
+  passed, which we call `extraAbiParams`.
+
+In the compiler implementation, these flags are remembered by the contract and can be accessed later during
+execution via special
+[simulations](https://github.com/code-423n4/2023-10-zksync/blob/main/docs/VM%20Section/How%20compiler%20works/instructions/extensions/overview.md).
+
+If the caller provides inappropriate flags (i.e. tries to set the `isSystem` flag when the callee is not in the kernel
+space), the flags are ignored.
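+To make this concrete, here is a minimal Rust sketch of decoding the _r2_ flags mask described above (the constant and
+function names are illustrative assumptions, not the VM’s actual definitions):
+
+```rust
+// Hypothetical decoding of the r2 call-flags mask; bit positions follow the
+// list above (bit 0: isConstructor, bit 1: isSystem).
+const IS_CONSTRUCTOR_BIT: u64 = 1 << 0;
+const IS_SYSTEM_BIT: u64 = 1 << 1;
+
+fn decode_call_flags(r2: u64) -> (bool, bool) {
+    let is_constructor = r2 & IS_CONSTRUCTOR_BIT != 0;
+    let is_system = r2 & IS_SYSTEM_BIT != 0;
+    (is_constructor, is_system)
+}
+```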
+
+### `onlySystemCall` modifier
+
+Some of the system contracts can act on behalf of the user or have a very important impact on the behavior of the
+account. That’s why we wanted to make it clear that users can not invoke potentially dangerous operations by doing a
+simple EVM-like `call`. Whenever a user wants to invoke some of the operations which we considered dangerous, they must
+provide the “`isSystem`” flag with them.
+
+The `onlySystemCall` modifier checks that the call was either done with the “isSystemCall” flag provided or that it was
+done by another system contract (since Matter Labs is fully aware of system contracts).
+
+### Simulations via our compiler
+
+In the future, we plan to introduce our “extended” version of Solidity with more supported opcodes than the original
+one. However, right now this is beyond the capacity of the team, so in order to represent accessing zkSync-specific
+opcodes, we use the `call` opcode with certain constant parameters that will be automatically replaced by the compiler
+with the zkEVM native opcode.
+
+Example:
+
+```solidity
+function getCodeAddress() internal view returns (address addr) {
+    address callAddr = CODE_ADDRESS_CALL_ADDRESS;
+    assembly {
+        addr := staticcall(0, callAddr, 0, 0xFFFF, 0, 0)
+    }
+}
+
+```
+
+In the example above, the compiler will detect that the static call is done to the constant `CODE_ADDRESS_CALL_ADDRESS`
+and so it will replace it with the opcode for getting the code address of the current execution.
+
+The full list of opcode simulations can be found
+[here](https://github.com/code-423n4/2023-10-zksync/blob/main/docs/VM%20Section/How%20compiler%20works/instructions/extensions/call.md).
+
+We also use
+[verbatim-like](https://github.com/code-423n4/2023-10-zksync/blob/main/docs/VM%20Section/How%20compiler%20works/instructions/extensions/verbatim.md)
+statements to access zkSync-specific opcodes in the bootloader.
+
+All the usages of the simulations in our Solidity code are implemented in the
+[SystemContractHelper](https://github.com/code-423n4/2023-10-zksync/blob/main/code/system-contracts/contracts/libraries/SystemContractHelper.sol)
+library and the
+[SystemContractsCaller](https://github.com/code-423n4/2023-10-zksync/blob/main/code/system-contracts/contracts/libraries/SystemContractsCaller.sol)
+library.
+
+**Simulating** `near_call` **(in Yul only)**
+
+In order to use `near_call`, i.e. to call a local function while providing a limit of ergs (gas) that this function can
+use, the following syntax is used:
+
+The function should contain the `ZKSYNC_NEAR_CALL` string in its name and accept at least 1 input parameter. The first
+input parameter is the packed ABI of the `near_call`. Currently, it is equal to the number of ergs to be passed with the
+`near_call`.
+
+Whenever a `near_call` panics, the `ZKSYNC_CATCH_NEAR_CALL` function is called.
+
+_Important note:_ the compiler behaves in a way that if there is a `revert` in the bootloader, the
+`ZKSYNC_CATCH_NEAR_CALL` is not called and the parent frame is reverted as well. The only way to revert only the
+`near_call` frame is to trigger the VM’s _panic_ (it can be triggered with either an invalid opcode or an out-of-gas
+error).
+
+_Important note 2:_ The 63/64 rule does not apply to `near_call`. Also, if 0 gas is provided to the near call, then
+actually all of the available gas will go to it.
+
+### Notes on security
+
+To prevent unintended substitution, the compiler requires the `--system-mode` flag to be passed during compilation for
+the above substitutions to work.
+
+## Bytecode hashes
+
+On zkSync, the bytecode hashes are stored in the following format:
+
+- The 0th byte denotes the version of the format. Currently the only version that is used is “1”.
+- The 1st byte is `0` for deployed contracts’ code and `1` for the contract code
+  [that is being constructed](https://github.com/matter-labs/zksync-era/blob/main/docs/specs/zk_evm/system_contracts.md#constructing-vs-non-constructing-code-hash).
+- The 2nd and 3rd bytes denote the length of the contract in 32-byte words as a big-endian 2-byte number.
+- The next 28 bytes are the last 28 bytes of the sha256 hash of the contract’s bytecode.
+
+The bytes are ordered in little-endian order (i.e. the same way as for `bytes32`).
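+Putting the pieces together, here is a minimal Rust sketch of this hash format (assuming the `sha2` crate for SHA-256;
+the function name and error handling are illustrative, not the actual zksync-era API). It also enforces the validity
+rules listed in the next section:
+
+```rust
+use sha2::{Digest, Sha256};
+
+/// Illustrative computation of the versioned bytecode hash described above.
+fn hash_bytecode(bytecode: &[u8]) -> Result<[u8; 32], &'static str> {
+    // Validity rules (see "Bytecode validity" below).
+    if bytecode.len() % 32 != 0 {
+        return Err("length must be a multiple of 32 bytes");
+    }
+    let words = bytecode.len() / 32;
+    if words >= 1 << 16 {
+        return Err("length in words must fit into 2 bytes");
+    }
+    if words % 2 == 0 {
+        return Err("length in words must be odd");
+    }
+
+    let digest = Sha256::digest(bytecode);
+    let mut hash = [0u8; 32];
+    hash[0] = 1; // version byte: the only version currently in use
+    hash[1] = 0; // 0 = deployed code, 1 = code that is being constructed
+    hash[2..4].copy_from_slice(&(words as u16).to_be_bytes()); // length in words, big-endian
+    hash[4..32].copy_from_slice(&digest[4..]); // last 28 bytes of sha256(bytecode)
+    Ok(hash)
+}
+```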
+ +### Bytecode validity + +A bytecode is valid if it: + +- Has its length in bytes divisible by 32 (i.e. consists of an integer number of 32-byte words). +- Has a length of less than 2^16 words (i.e. its length in words fits into 2 bytes). +- Has an odd length in words (i.e. the 3rd byte is an odd number). + +Note, that it does not have to consist of only correct opcodes. In case the VM encounters an invalid opcode, it will +simply revert (similar to how EVM would treat them). + +A call to a contract with invalid bytecode can not be proven. That is why it is **essential** that no contract with +invalid bytecode is ever deployed on zkSync. It is the job of the +[KnownCodesStorage](https://github.com/matter-labs/zksync-era/blob/main/docs/specs/zk_evm/system_contracts.md#knowncodestorage) +to ensure that all allowed bytecodes in the system are valid. diff --git a/docs/advanced/advanced_debugging.md b/docs/guides/advanced/advanced_debugging.md similarity index 100% rename from docs/advanced/advanced_debugging.md rename to docs/guides/advanced/advanced_debugging.md diff --git a/docs/advanced/bytecode_compression.md b/docs/guides/advanced/compression.md similarity index 83% rename from docs/advanced/bytecode_compression.md rename to docs/guides/advanced/compression.md index 6f94277f801..12071e79891 100644 --- a/docs/advanced/bytecode_compression.md +++ b/docs/guides/advanced/compression.md @@ -2,7 +2,7 @@ ## Overview -As we are a rollup - all the bytecodes that contracts use in our chain must be copied into L1 (so that the chain can be +As we are a rollup - all the bytecodes that contracts used in our chain must be copied into L1 (so that the chain can be reconstructed from L1 if needed). Given the want/need to cutdown on space used, bytecode is compressed prior to being posted to L1. At a high level @@ -10,8 +10,8 @@ bytecode is chunked into opcodes (which have a size of 8 bytes), assigned a 2 by sequence (indexes) are verified and sent to L1. This process is split into 2 different parts: (1) [the server side operator](https://github.com/matter-labs/zksync-era/blob/main/core/lib/utils/src/bytecode.rs#L31) handling the compression and (2) -[the system contract](https://github.com/matter-labs/era-system-contracts/blob/main/contracts/BytecodeCompressor.sol) -verifying that the compression is correct before sending to L1. +[the system contract](https://github.com/matter-labs/era-system-contracts/blob/main/contracts/Compressor.sol) verifying +that the compression is correct before sending to L1. ## Example @@ -31,7 +31,7 @@ Dictionary would be: 3 -> 0xC (count: 1) ``` -Note that '1' maps to '0xD', as it occurs twice, and first occurrence is earlier than first occurence of 0xB, that also +Note that '1' maps to '0xD', as it occurs twice, and first occurrence is earlier than first occurrence of 0xB, that also occurs twice. Compressed bytecode: @@ -90,8 +90,7 @@ return [len(dictionary), dictionary.keys(order=index asc), encoded_data] ## System Contract Compression Verification & Publishing -The -[Bytecode Compressor](https://github.com/matter-labs/era-system-contracts/blob/main/contracts/BytecodeCompressor.sol) +The [Bytecode Compressor](https://github.com/matter-labs/era-system-contracts/blob/main/contracts/Compressor.sol) contract performs validation on the compressed bytecode generated on the server side. 
At the current moment, publishing bytecode to L1 may only be called by the bootloader but in the future anyone will be
 able to publish compressed bytecode with no change to the underlying algorithm.
@@ -99,10 +98,10 @@ with no change to the underlying algorithm.
 
 ### Verification & Publication
 
 The function `publishCompressBytecode` takes in both the original `_bytecode` and the `_rawCompressedData` , the latter
-of which comes from the output of the server’s compression algorithm. Looping over the encoded data, derived from
-`_rawCompressedData` , the corresponding chunks are pulled from the dictionary and compared to the original byte code,
-reverting if there is a mismatch. After the encoded data has been verified, it is published to L1 and marked accordingly
-within the `KnownCodesStorage` contract.
+of which comes from the server’s compression algorithm output. Looping over the encoded data, derived from
+`_rawCompressedData`, the corresponding chunks are retrieved from the dictionary and compared to the original byte
+code, reverting if there is a mismatch. After the encoded data has been verified, it is published to L1 and marked
+accordingly within the `KnownCodesStorage` contract.
 
 Pseudo-code implementation:
diff --git a/docs/advanced/contracts.md b/docs/guides/advanced/contracts.md
similarity index 92%
rename from docs/advanced/contracts.md
rename to docs/guides/advanced/contracts.md
index 9b44268827c..502d9b04cad 100644
--- a/docs/advanced/contracts.md
+++ b/docs/guides/advanced/contracts.md
@@ -32,7 +32,7 @@ a bunch of registers. More details on this will be written in the future article
 Having a different VM means that we must have a separate compiler [zk-solc](https://github.com/matter-labs/zksolc-bin) -
 as the bytecode that is produced by this compiler has to use the zkEVM specific opcodes.
 
-While having a separte compiler introduces a bunch of challenges (for example, we need a custom
+While having a separate compiler introduces a bunch of challenges (for example, we need custom
 [hardhat plugins](https://github.com/matter-labs/hardhat-zksync) ), it brings a bunch of benefits too: for example it
 allows us to move some of the VM logic (like new contract deployment) into System contracts - which allows faster &
 cheaper modifications and increased flexibility.
@@ -86,9 +86,9 @@ changed - so updating the same slot multiple times doesn't increase the amount o
 
 ### Account abstraction and some method calls
 
-As `zkSync` has a built-in AccountAbstration (more on this in a separate article) - you shouldn't depend on some of the
-solidity functions (like `ecrecover` - that checks the keys, or `tx.origin`) - in all the cases, the compiler will try
-to warn you.
+As `zkSync` has a built-in Account Abstraction (more on this in a separate article) - you shouldn't depend on some of
+the solidity functions (like `ecrecover` - that checks the keys, or `tx.origin`) - in all the cases, the compiler will
+try to warn you.
 
 ## Summary
diff --git a/docs/advanced/prover.md b/docs/guides/advanced/deeper_overview.md
similarity index 96%
rename from docs/advanced/prover.md
rename to docs/guides/advanced/deeper_overview.md
index 02e69c4d38e..7fa4a009a92 100644
--- a/docs/advanced/prover.md
+++ b/docs/guides/advanced/deeper_overview.md
@@ -1,6 +1,8 @@
-# Overview
+# Deeper Overview
 
-The purpose of this document is to explain our new proof system from an engineering standpoint.
We will examine the code +[Back to ToC](../../../README.md) + +The purpose of this section is to explain our new proof system from an engineering standpoint. We will examine the code examples and how the libraries communicate. Let's begin by discussing our constraint system. In the previous prover, we utilized the Bellman repository. However, in @@ -67,8 +69,8 @@ pub struct SelectionGate { } ``` -Internaly the `Variable` object is `pub struct Variable(pub(crate) u64);` - so it is an index to the position within the -constraint system object. +Internally the `Variable` object is `pub struct Variable(pub(crate) u64);` - so it is an index to the position within +the constraint system object. And now let's see how we can add this gate into the system. @@ -86,7 +88,7 @@ pub fn select>( ``` And then there is a block of code for witness evaluation (let's skip it for now), and the final block that adds the gate -to the constrain system `cs`: +to the constraint system `cs`: ```rust if ::SetupConfig::KEEP_SETUP { @@ -184,7 +186,7 @@ pub struct UInt32 { pub(crate) variable: Variable, } impl CSAllocatable for UInt32 { - // So the 'witness' type (concrete value) for U32 is u32 - no surprsise ;-) + // So the 'witness' type (concrete value) for U32 is u32 - no surprises ;-) type Witness = u32; ... } @@ -204,12 +206,12 @@ filled with concrete values. ### CsAllocatable -Implements CsAllocatable - which allows you to directly 'allocate' this struct within constraing system (similarly to +Implements CsAllocatable - which allows you to directly 'allocate' this struct within constraint system (similarly to how we were operating on regular 'Variables' above). ### CSSelectable -Implements the `Selectable` trait - that allows this struct to participage in operations like conditionally select (so +Implements the `Selectable` trait - that allows this struct to participate in operations like conditionally select (so it can be used as 'a' or 'b' in the Select gate example above). ### CSVarLengthEncodable @@ -245,7 +247,7 @@ pub struct ZkSyncUniformCircuitInstance>, // Configuration - that is circuit specific, in case of MainVM - the configuration - // is simply the amount of opcodes that we put wihtin 1 circuit. + // is simply the amount of opcodes that we put within 1 circuit. pub config: std::sync::Arc, // Circuit 'friendly' hash function. @@ -361,7 +363,7 @@ entry_point_code: Vec<[u8; 32]>, // for read lobkc must be a bootloader code initial_heap_content: Vec, // bootloader starts with non-deterministic heap zk_porter_is_available: bool, default_aa_code_hash: U256, -used_bytecodes: std::collections::HashMap>, // auxilary information to avoid passing a full set of all used codes +used_bytecodes: std::collections::HashMap>, // auxiliary information to avoid passing a full set of all used codes ram_verification_queries: Vec<(u32, U256)>, // we may need to check that after the bootloader's memory is filled cycle_limit: usize, round_function: R, // used for all queues implementation diff --git a/docs/advanced/gas_and_fees.md b/docs/guides/advanced/fee_model.md similarity index 96% rename from docs/advanced/gas_and_fees.md rename to docs/guides/advanced/fee_model.md index 800d27299c2..af598c95053 100644 --- a/docs/advanced/gas_and_fees.md +++ b/docs/guides/advanced/fee_model.md @@ -28,7 +28,7 @@ to L1. The maximum gas for a transaction is 80 million (80M/4k = 20k). 
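+As a quick sanity check of the maximum-gas arithmetic above (80M / 4k = 20k), here is a tiny illustrative Rust snippet.
+It assumes, as this section does, that a byte of pubdata published to L1 costs roughly 4k gas:
+
+```rust
+/// With a cap of 80M gas per transaction and ~4k gas charged per pubdata byte,
+/// a single transaction can publish at most ~20k bytes of pubdata.
+fn max_pubdata_bytes(tx_gas_limit: u64, gas_per_pubdata_byte: u64) -> u64 {
+    tx_gas_limit / gas_per_pubdata_byte
+}
+
+fn main() {
+    assert_eq!(max_pubdata_bytes(80_000_000, 4_000), 20_000);
+}
+```
+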
### L2 Fair price -The L2 fair gas price is currently determined by the StateKeeper configuration and is set at 0.25 Gwei (see +The L2 fair gas price is currently determined by the StateKeeper/Sequencer configuration and is set at 0.25 Gwei (see `fair_l2_gas_price` in the config). This price is meant to cover the compute costs (CPU + GPU) for the sequencer and prover. It can be changed as needed, with a safety limit of 10k Gwei in the bootloader. Once the system is decentralized, more deterministic rules will be established for this price. @@ -86,7 +86,7 @@ transaction. ```rust let gas_remaining_before = vm.gas_remaining(); execute_tx(); -let gas_used = gas_remainig_before = vm.gas_remaining(); +let gas_used = gas_remaining_before - vm.gas_remaining(); ``` ## Gas estimation @@ -127,5 +127,5 @@ There are a few reasons why refunds might be 'larger' on zkSync (i.e., why we mi https://github.com/matter-labs/zksync-era/blob/main/core/lib/zksync_core/src/l1_gas_price/gas_adjuster/mod.rs#L30 'gas_adjuster' [get_txs_fee_in_wei]: - https://github.com/matter-labs/zksync-era/blob/d590b3f0965a23eb0011779aab829d86d4fdc1d1/core/bin/zksync_core/src/api_server/tx_sender/mod.rs#L450 + https://github.com/matter-labs/zksync-era/blob/714a8905d407de36a906a4b6d464ec2cab6eb3e8/core/lib/zksync_core/src/api_server/tx_sender/mod.rs#L656 'get_txs_fee_in_wei' diff --git a/docs/advanced/how_call_works.md b/docs/guides/advanced/how_call_works.md similarity index 94% rename from docs/advanced/how_call_works.md rename to docs/guides/advanced/how_call_works.md index cf9adc12f68..7f283cf8e0c 100644 --- a/docs/advanced/how_call_works.md +++ b/docs/guides/advanced/how_call_works.md @@ -69,7 +69,8 @@ opcodes similar to EVM, but operates on registers rather than a stack. We have t 'pure rust' without circuits (in the zk_evm repository), and the other has circuits (in the sync_vm repository). In this example, the api server uses the 'zk_evm' implementation without circuits. -Most of the code that the server uses to interact with the VM is in [core/lib/vm/src/vm.rs][vm_code]. +Most of the code that the server uses to interact with the VM is in +[core/lib/multivm/src/versions/vm_latest/implementation/execution.rs][vm_code]. In this line, we're calling self.state.cycle(), which executes a single VM instruction. You can see that we do a lot of things around this, such as executing multiple tracers after each instruction. 
This allows us to debug and provide
@@ -115,11 +116,11 @@ In this article, we covered the 'life of a call' from the RPC to the inner worki
   https://github.com/matter-labs/zksync-era/blob/main/core/lib/zksync_core/src/api_server/execution_sandbox/execute.rs
   'execution sandbox'
 [vm_code]:
-  https://github.com/matter-labs/zksync-2-dev/blob/dc3b3d6b055c558b0e1a76ef5de3184291489d9f/core/lib/vm/src/vm.rs#L544
+  https://github.com/matter-labs/zksync-era/blob/ccd13ce88ff52c3135d794c6f92bec3b16f2210f/core/lib/multivm/src/versions/vm_latest/implementation/execution.rs#L108
   'vm code'
 [bootloader_code]:
   https://github.com/matter-labs/era-system-contracts/blob/93a375ef6ccfe0181a248cb712c88a1babe1f119/bootloader/bootloader.yul
   'bootloader code'
 [init_vm_inner]:
-  https://github.com/matter-labs/zksync-2-dev/blob/dc3b3d6b055c558b0e1a76ef5de3184291489d9f/core/lib/vm/src/vm_with_bootloader.rs#L348
+  https://github.com/matter-labs/zksync-era/blob/main/core/lib/multivm/src/versions/vm_m6/vm_with_bootloader.rs#L330
   'vm constructor'
diff --git a/docs/advanced/how_l2_messaging_works.md b/docs/guides/advanced/how_l2_messaging_works.md
similarity index 90%
rename from docs/advanced/how_l2_messaging_works.md
rename to docs/guides/advanced/how_l2_messaging_works.md
index c62fe4afc5b..0fea2be7d50 100644
--- a/docs/advanced/how_l2_messaging_works.md
+++ b/docs/guides/advanced/how_l2_messaging_works.md
@@ -126,8 +126,8 @@ sent.
 
 ### Retrieving Full Message Contents
 
 We go through all the Events generated during the run [here][iterate_over_events] and identify those coming from the
-`L1_MESSENGER_ADDRESS` that correspond to the `L1MessageSent` topic. These Events represent the 'emit' calls executed in
-Part 2.
+`L1_MESSENGER_ADDRESS` that correspond to the `L1MessageSent` topic. These Events represent the 'emit' calls executed
+in Part 2.
 
 ### Retrieving Message Hashes
 
@@ -177,7 +177,7 @@ return actualRootHash == calculatedRootHash;
 
 ## Summary
 
-In this article, we've travelled through a vast array of topics: from a user contract dispatching a message to L1 by
+In this article, we've traveled through a vast array of topics: from a user contract dispatching a message to L1 by
 invoking a system contract, to this message's hash making its way all the way to the VM via special opcodes. We've also
 explored how it's ultimately included in the execution results (as part of QueryLogs), gathered by the State Keeper, and
 transmitted to L1 for final verification.
 
@@ -194,21 +194,17 @@
[vm_execution_result]: https://github.com/matter-labs/zksync-era/blob/43d7bd587a84b1b4489f4c6a4169ccb90e0df467/core/lib/vm/src/vm.rs#L81 [log_queries]: - https://github.com/matter-labs/zk_evm_abstractions/blob/839721a4ae2093c5c0aa8ffd49758f32ecd172ed/src/queries.rs#L30C2-L30C2 -[aux_bytes]: - https://github.com/matter-labs/zkevm_opcode_defs/blob/780ce4129a95ab9a68abf0d60c156ee8df6008c2/src/system_params.rs#L37C39-L37C39 + https://github.com/matter-labs/era-zk_evm_abstractions/blob/15a2af404902d5f10352e3d1fac693cc395fcff9/src/queries.rs#L30C2-L30C2 +[aux_bytes]: https://github.com/matter-labs/era-zkevm_opcode_defs/blob/v1.3.2/src/system_params.rs#L37C39-L37C39 [event_sink]: https://github.com/matter-labs/zksync-era/blob/43d7bd587a84b1b4489f4c6a4169ccb90e0df467/core/lib/vm/src/event_sink.rs#L116 [log_writing_in_vm]: https://github.com/matter-labs/era-zk_evm/blob/v1.3.2/src/opcodes/execution/log.rs -[log_opcode]: https://github.com/matter-labs/zkevm_opcode_defs/blob/v1.3.2/src/definitions/log.rs#L16 +[log_opcode]: https://github.com/matter-labs/era-zkevm_opcode_defs/blob/v1.3.2/src/definitions/log.rs#L16 [zkevm_assembly_parse]: - https://github.com/matter-labs/zkEVM-assembly/blob/fcfeb51e45544a629d4279b3455def847dcc2505/src/assembly/instruction/log.rs#L32 + https://github.com/matter-labs/era-zkEVM-assembly/blob/v1.3.2/src/assembly/instruction/log.rs#L32 [executor_sol]: https://github.com/matter-labs/era-contracts/blob/3a4506522aaef81485d8abb96f5a6394bd2ba69e/ethereum/contracts/zksync/facets/Executor.sol#L26 [mainet_executor]: https://etherscan.io/address/0x389a081BCf20e5803288183b929F08458F1d863D - -[sepolia_tx]: -[0x18c2a113d18c53237a4056403047ff9fafbf772cb83ccd44bb5b607f8108a64c](https://sepolia.etherscan.io/tx/0x18c2a113d18c53237a4056403047ff9fafbf772cb83ccd44bb5b607f8108a64c) - +[sepolia_tx]: https://sepolia.etherscan.io/tx/0x18c2a113d18c53237a4056403047ff9fafbf772cb83ccd44bb5b607f8108a64c [mailbox_log_inclusion]: https://github.com/matter-labs/era-contracts/blob/3a4506522aaef81485d8abb96f5a6394bd2ba69e/ethereum/contracts/zksync/facets/Mailbox.sol#L54 diff --git a/docs/advanced/how_transaction_works.md b/docs/guides/advanced/how_transaction_works.md similarity index 99% rename from docs/advanced/how_transaction_works.md rename to docs/guides/advanced/how_transaction_works.md index 5507596efda..3ee9c30a205 100644 --- a/docs/advanced/how_transaction_works.md +++ b/docs/guides/advanced/how_transaction_works.md @@ -74,7 +74,7 @@ The transaction can have three different results in state keeper: - Success - Failure (but still included in the block, and gas was charged) - Rejection - when it fails validation, and cannot be included in the block. This last case should (in theory) never - happen - as we cannot charge the fee in such scenario, and it opens the possiblity for the DDoS attack. + happen - as we cannot charge the fee in such scenario, and it opens the possibility for the DDoS attack. [transaction_request_from_bytes]: https://github.com/matter-labs/zksync-era/blob/main/core/lib/types/src/transaction_request.rs#L196 diff --git a/docs/advanced/prover_keys.md b/docs/guides/advanced/prover_keys.md similarity index 97% rename from docs/advanced/prover_keys.md rename to docs/guides/advanced/prover_keys.md index 6e127f431fc..34660492bf2 100644 --- a/docs/advanced/prover_keys.md +++ b/docs/guides/advanced/prover_keys.md @@ -114,7 +114,7 @@ For SNARK circuits (like snark_wrapper), we use keccak as hash function. For STA friendly hash function (currently Poseidon2). 
[basic_circuit_list]: - https://github.com/matter-labs/era-zkevm_test_harness/blob/3cd647aa57fc2e1180bab53f7a3b61ec47502a46/circuit_definitions/src/circuit_definitions/base_layer/mod.rs#L80 + https://github.com/matter-labs/era-zkevm_test_harness/blob/3cd647aa57fc2e1180bab53f7a3b61ec47502a46/circuit_definitions/src/circuit_definitions/base_layer/mod.rs#L77 [recursive_circuit_list]: https://github.com/matter-labs/era-zkevm_test_harness/blob/3cd647aa57fc2e1180bab53f7a3b61ec47502a46/circuit_definitions/src/circuit_definitions/recursion_layer/mod.rs#L29 [verification_key_list]: @@ -124,4 +124,4 @@ friendly hash function (currently Poseidon2). [prover_setup_data]: https://github.com/matter-labs/zksync-era/blob/d2ca29bf20b4ec2d9ec9e327b4ba6b281d9793de/prover/vk_setup_data_generator_server_fri/src/lib.rs#L61 [verifier_computation]: - https://github.com/matter-labs/era-contracts/blob/dev/ethereum/contracts/zksync/Verifier.sol#L268 + https://github.com/matter-labs/era-contracts/blob/dev/l1-contracts/contracts/zksync/Verifier.sol#268 diff --git a/docs/advanced/pubdata.md b/docs/guides/advanced/pubdata.md similarity index 88% rename from docs/advanced/pubdata.md rename to docs/guides/advanced/pubdata.md index 6bab6a85a46..f0e159a8010 100644 --- a/docs/advanced/pubdata.md +++ b/docs/guides/advanced/pubdata.md @@ -8,9 +8,9 @@ Pubdata in zkSync can be divided up into 4 different categories: 4. Storage writes Using data corresponding to these 4 facets, across all executed batches, we’re able to reconstruct the full state of L2. -One thing to note is that the way that the data is represented changes in a pre-boojum and post-boojum zkSync Era. At a -high level, in a pre-boojum era these are represented as separate fields while in boojum they are packed into a single -bytes array. +One thing to note is that the way that the data is represented changes in a pre-boojum and post-boojum zkEVM. At a high +level, in a pre-boojum era these are represented as separate fields while in boojum they are packed into a single bytes +array. > Note: Once 4844 gets integrated this bytes array will move from being part of the calldata to blob data. @@ -22,9 +22,9 @@ executed, we then will pull the transaction input and the relevant fields, apply current state of L2. One thing to note is that in both systems some of the contract bytecode is compressed into an array of indices where -each 2 byte index corresponds to an 8 byte word in a dictionary. More on how that is done -[here](./bytecode_compression.md). Once the bytecode has been expanded, the hash can be taken and checked against the -storage writes within the `AccountCodeStorage` contract which connects an address on L2 with the 32 byte code hash: +each 2 byte index corresponds to an 8 byte word in a dictionary. More on how that is done [here](./compression.md). 
Once +the bytecode has been expanded, the hash can be taken and checked against the storage writes within the +`AccountCodeStorage` contract which connects an address on L2 with the 32 byte code hash: ```solidity function _storeCodeHash(address _address, bytes32 _hash) internal { @@ -36,7 +36,7 @@ function _storeCodeHash(address _address, bytes32 _hash) internal { ``` -## Pre-Boojum Era +### Pre-Boojum Era In pre-boojum era the superset of pubdata fields and input to the `commitBlocks` function follows the following format: @@ -53,7 +53,7 @@ In pre-boojum era the superset of pubdata fields and input to the `commitBlocks` /// @param repeatedStorageChanges Storage write access as a concatenation index-value /// @param l2Logs concatenation of all L2 -> L1 logs in the block /// @param l2ArbitraryLengthMessages array of hash preimages that were sent as value of L2 logs by special system L2 contract -/// @param factoryDeps (contract bytecodes) array of l2 bytecodes that were deployed +/// @param factoryDeps (contract bytecodes) array of L2 bytecodes that were deployed struct CommitBlockInfo { uint64 blockNumber; uint64 timestamp; @@ -79,15 +79,15 @@ The 4 main fields to look at here are: 1. Structure: `num entries as u32 || for each entry: (8 byte id, 32 bytes final value)` 3. `factoryDeps`: An array of uncompressed bytecodes 4. `l2ArbitraryLengthMessages` : L2 → L1 Messages - 1. We don’t need them all, we are just concerned with messages sent from the `Compressor/BytcodeCompressor` contract - 2. These messages will follow the compression algorithm outline [here](./bytecode_compression.md) + 1. We don’t need them all, we are just concerned with messages sent from the `Compressor/BytecodeCompressor` contract + 2. These messages will follow the compression algorithm outline [here](./compression.md) For the ids on the repeated writes, they are generated as we process the first time keys. For example: if we see `[, ]` (starting from an empty state) then we can assume that the next time a write happens to `key1` it will be encoded as `<1, new_val>` and so on and so forth. There is a little shortcut here where the last new id generated as part of a batch will be in the `indexRepeatedStorageChanges` field. -## Post-Boojum Era +### Post-Boojum Era ```solidity /// @notice Data needed to commit new block @@ -124,9 +124,9 @@ The 2 main fields needed for state reconstruction are the bytecodes and the stat structure and reasoning in the old system (as explained above). The state diffs will follow the compression illustrated below. -## Compression of State Diffs in Post-Boojum Era +### Compression of State Diffs in Post-Boojum Era -### Keys +#### Keys Keys will be packed in the same way as they were before boojum. The only change is that we’ll avoid using the 8-byte enumeration index and will pack it to the minimal necessary number of bytes. This number will be part of the pubdata. @@ -137,7 +137,7 @@ bytes on nonce/balance key, but ultimately the complexity may not be worth it. There is some room for the keys that are being written for the first time, however, these are rather more complex and achieve only a one-time effect (when the key is published for the first time). -### Values +#### Values Values are much easier to compress, since they usually contain only zeroes. Also, we can leverage the nature of how those values are changed. For instance if nonce has been increased only by 1, we do not need to write the entire 32-byte @@ -158,7 +158,7 @@ that it has been zeroed out). 
For `NoCompression` the whole 32 byte value is use So the format of the pubdata will be the following: -#### Part 1. Header +##### Part 1. Header - `` — this will enable easier automated unpacking in the future. Currently, it will be only equal to `1`. @@ -166,7 +166,7 @@ So the format of the pubdata will be the following: - ``. At the beginning it will be equal to `4`, but then it will automatically switch to `5` when needed. -#### Part 2. Initial writes +##### Part 2. Initial writes - `` (since each initial write publishes at least 32 bytes for key, then `2^16 * 32 = 2097152` will be enough for a lot of time (right now with the limit of 120kb it will take more than 15 L1 @@ -188,10 +188,3 @@ the writes will be repeated ones. - packing type as a 1 byte value, which consists of 5 bits to denote the length of the packing and 3 bits to denote the type of the packing (either `Add`, `Sub`, `Transform` or `NoCompression`). - The packed value itself. - -## L2 State Recosntruction Tool - -Given the structure above, there is a tool, created by the [Equilibrium Team](https://equilibrium.co/) that solely uses -L1 pubdata for reconstructing the state and verifying that the state root on L1 can be created using pubdata. A link to -the repo can be found [here](https://github.com/eqlabs/zksync-state-reconstruct). The way the tool works is by parsing -out all the L1 pubdata for an executed batch, compaing the state roots after each batch is processed. diff --git a/docs/advanced/zk_intuition.md b/docs/guides/advanced/zk_intuition.md similarity index 88% rename from docs/advanced/zk_intuition.md rename to docs/guides/advanced/zk_intuition.md index 7ea7dad0a44..e567ebf7ca8 100644 --- a/docs/advanced/zk_intuition.md +++ b/docs/guides/advanced/zk_intuition.md @@ -1,4 +1,4 @@ -# Intuition guide to ZK in zkSync +# Intuition guide to ZK in zkEVM **WARNING**: This guide simplifies the complex details of how we use ZK in our systems, just to give you a better understanding. We're leaving out a lot of details to keep things brief. @@ -7,7 +7,7 @@ understanding. We're leaving out a lot of details to keep things brief. In our case, the prover takes public input and witness (which is huge - you'll see below), and produces a proof, but the verifier takes (public input, proof) only, without witness. This means that the huge witness doesn't have to be -submitted to L1. This property can be used for many things, like privacy, but here we use it to ipmlement an efficient +submitted to L1. This property can be used for many things, like privacy, but here we use it to implement an efficient rollup that publishes the least required amount of data to L1. ## Basic overview @@ -20,7 +20,7 @@ Let’s break down the basic steps involved when a transaction is made within ou - **Verify proof on L1:** This means checking that the fancy math was done right on the Ethereum network (referred to as L1). -## Generate Witness - What Does It Mean +## What It Means to Generate a Witness When our State Keeper processes a transaction, it carries out a bunch of operations and assumes certain conditions without openly stating them. However, when it comes to ZK, we need to show clear evidence that these conditions hold. @@ -62,7 +62,7 @@ pub fn compute_decommitter_circuit_snapshots< ... ) -> ( Vec>, - CodeDecommittmentsDeduplicatorInstanceWitness, + CodeDecommitmentsDeduplicatorInstanceWitness, ) ``` @@ -80,12 +80,12 @@ the hashes we mentioned earlier. 
This is similar merkle paths that we discussed ### Where is the Code -The job of generating witnesses, which we discussed earlier, is handled by a the witness generator. Initially, this was +The job of generating witnesses, which we discussed earlier, is handled by the witness generator. Initially, this was located in a module [zksync core witness]. However, for the new proof system, the team began to shift this function to a new location called [separate witness binary]. Inside this new location, after the necessary data is fetched from storage, the witness generator calls another piece of -code from [zkevm_test_harness witness] named `run_with_fixed_params`. This code is responsible for createing the +code from [zkevm_test_harness witness] named `run_with_fixed_params`. This code is responsible for creating the witnesses themselves (which can get really HUGE). ## Generating the Proof @@ -139,16 +139,13 @@ version 1.4.0. [witness_example]: https://github.com/matter-labs/era-zkevm_test_harness/tree/main/src/witness/individual_circuits/decommit_code.rs#L24 -[verifier]: - https://github.com/matter-labs/zksync-2-contracts/blob/d9785355518edc7f686fb2c91ff7d1caced9f9b8/ethereum/contracts/zksync/Plonk4VerifierWithAccessToDNext.sol#L284 +[verifier]: https://github.com/matter-labs/era-contracts/blob/main/l1-contracts/contracts/zksync/Verifier.sol [bellman repo]: https://github.com/matter-labs/bellman [bellman cuda repo]: https://github.com/matter-labs/era-bellman-cuda [example ecrecover circuit]: - https://github.com/matter-labs/sync_vm/blob/683ade0bbb445f3e2ceb82dd3f4346a0c5d16a78/src/glue/ecrecover_circuit/mod.rs#L157 -[zksync core witness]: - https://github.com/matter-labs/zksync-era/blob/main/core/lib/zksync_core/src/witness_generator/mod.rs + https://github.com/matter-labs/era-sync_vm/blob/v1.3.2/src/glue/ecrecover_circuit/mod.rs#L157 [separate witness binary]: https://github.com/matter-labs/zksync-era/blob/main/prover/witness_generator/src/main.rs [zkevm_test_harness witness]: - https://github.com/matter-labs/zkevm_test_harness/blob/0c17bc7baa4e0b64634d414942ef4200d8613bbd/src/external_calls.rs#L575 -[heavy_ops_service repo]: https://github.com/matter-labs/heavy-ops-service/tree/v1.3.2 + https://github.com/matter-labs/era-zkevm_test_harness/blob/fb47657ae3b6ff6e4bb5199964d3d37212978200/src/external_calls.rs#L579 +[heavy_ops_service repo]: https://github.com/matter-labs/era-heavy-ops-service [franklin repo]: https://github.com/matter-labs/franklin-crypto diff --git a/docs/architecture.md b/docs/guides/architecture.md similarity index 98% rename from docs/architecture.md rename to docs/guides/architecture.md index dbac73fa09a..e87f4bca7e5 100644 --- a/docs/architecture.md +++ b/docs/guides/architecture.md @@ -62,7 +62,6 @@ This section provides a physical map of folders & files in this repository. - `/multivm`: A wrapper over several versions of VM that have been used by the main node. - `/object_store`: Abstraction for storing blobs outside the main data store. - `/prometheus_exporter`: Prometheus data exporter. - - `/prover_utils`: Utilities related to the proof generation. - `/queued_job_processor`: An abstraction for async job processing - `/state`: A state keeper responsible for handling transaction execution and creating miniblocks and L1 batches. - `/storage`: An encapsulated database interface. 
diff --git a/docs/development.md b/docs/guides/development.md similarity index 61% rename from docs/development.md rename to docs/guides/development.md index 955189cb3ab..c6410bbfcbb 100644 --- a/docs/development.md +++ b/docs/guides/development.md @@ -89,6 +89,73 @@ Currently the following criteria are checked: - Other code should always be formatted via `zk fmt`. - Dummy Prover should not be staged for commit (see below for the explanation). +## Spell Checking + +In our development workflow, we utilize a spell checking process to ensure the quality and accuracy of our documentation +and code comments. This is achieved using two primary tools: `cspell` and `cargo-spellcheck`. This section outlines how +to use these tools and configure them for your needs. + +### Using the Spellcheck Command + +The spell check command `zk spellcheck` is designed to check for spelling errors in our documentation and code. To run +the spell check, use the following command: + +``` +zk spellcheck +Options: +--pattern : Specifies the glob pattern for files to check. Default is docs/**/*. +--use-cargo: Utilize cargo spellcheck. +--use-cspell: Utilize cspell. +``` + +## Link Checking + +To maintain the integrity and reliability of our documentation, we make use of a link checking process using the +`markdown-link-check` tool. This ensures that all links in our markdown files are valid and accessible. The following +section describes how to use this tool and configure it for specific needs. + +### Using the Link Check Command + +The link check command `zk linkcheck` is designed to verify the integrity of links in our markdown files. To execute the +link check, use the following command: + +``` +zk linkcheck +Options: +--config : Path to the markdown-link-check configuration file. Default is './checks-config/links.json'. +``` + +### General Rules + +**Code References in Comments**: When referring to code elements within development comments, they should be wrapped in +backticks. For example, reference a variable as `block_number`. + +**Code Blocks in Comments**: For larger blocks of pseudocode or commented-out code, use code blocks formatted as +follows: + +```` +// ``` +// let overhead_for_pubdata = { +// let numerator: U256 = overhead_for_block_gas * total_gas_limit +// + gas_per_pubdata_byte_limit * U256::from(MAX_PUBDATA_PER_BLOCK); +// let denominator = +// gas_per_pubdata_byte_limit * U256::from(MAX_PUBDATA_PER_BLOCK) + overhead_for_block_gas; +// ``` +```` + +**Language Settings**: We use the Hunspell language setting of `en_US`. + +**CSpell Usage**: For spell checking within the `docs/` directory, we use `cspell`. The configuration for this tool is +found in `cspell.json`. It's tailored to check our documentation for spelling errors. + +**Cargo-Spellcheck for Rust and Dev Comments**: For Rust code and development comments, `cargo-spellcheck` is used. Its +configuration is maintained in `era.cfg`. + +### Adding Words to the Dictionary + +To add a new word to the spell checker dictionary, navigate to `/spellcheck/era.dic` and include the word. Ensure that +the word is relevant and necessary to be included in the dictionary to maintain the integrity of our documentation. 
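+
+For example, to allow a new (made-up) term such as `zkfoo`, you would append it to `/spellcheck/era.dic` on its own
+line, following the format of the existing entries:
+
+```
+zkfoo
+```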
+ ## Using Dummy Prover By default, the chosen prover is a "dummy" one, meaning that it doesn't actually compute proofs but rather uses mocks to diff --git a/docs/external-node/01_intro.md b/docs/guides/external-node/01_intro.md similarity index 100% rename from docs/external-node/01_intro.md rename to docs/guides/external-node/01_intro.md diff --git a/docs/external-node/02_configuration.md b/docs/guides/external-node/02_configuration.md similarity index 95% rename from docs/external-node/02_configuration.md rename to docs/guides/external-node/02_configuration.md index 06448428340..fae903e5e97 100644 --- a/docs/external-node/02_configuration.md +++ b/docs/guides/external-node/02_configuration.md @@ -2,8 +2,8 @@ This document outlines various configuration options for the EN. Currently, the EN requires the definition of numerous environment variables. To streamline this process, we provide prepared configs for the zkSync Era - for both -[mainnet](prepared_configs/mainnet-config.env) and [testnet](prepared_configs/testnet-config.env). You can use these -files as a starting point and modify only the necessary sections. +[mainnet](prepared_configs/mainnet-config.env) and [testnet](prepared_configs/testnet-sepolia-config.env). You can use +these files as a starting point and modify only the necessary sections. ## Database @@ -20,7 +20,7 @@ recommended to use an NVME SSD for RocksDB. RocksDB requires two variables to be ## L1 Web3 client EN requires a connection to an Ethereum node. The corresponding env variable is `EN_ETH_CLIENT_URL`. Make sure to set -the URL corresponding to the correct L1 network (L1 mainnet for L2 mainnet and L1 goerli for L2 testnet). +the URL corresponding to the correct L1 network (L1 mainnet for L2 mainnet and L1 sepolia for L2 testnet). Note: Currently, the EN makes 2 requests to the L1 per L1 batch, so the Web3 client usage for a synced node should not be high. However, during the synchronization phase the new batches would be persisted on the EN quickly, so make sure diff --git a/docs/external-node/03_running.md b/docs/guides/external-node/03_running.md similarity index 100% rename from docs/external-node/03_running.md rename to docs/guides/external-node/03_running.md diff --git a/docs/external-node/04_observability.md b/docs/guides/external-node/04_observability.md similarity index 100% rename from docs/external-node/04_observability.md rename to docs/guides/external-node/04_observability.md diff --git a/docs/external-node/05_troubleshooting.md b/docs/guides/external-node/05_troubleshooting.md similarity index 100% rename from docs/external-node/05_troubleshooting.md rename to docs/guides/external-node/05_troubleshooting.md diff --git a/docs/external-node/06_components.md b/docs/guides/external-node/06_components.md similarity index 100% rename from docs/external-node/06_components.md rename to docs/guides/external-node/06_components.md diff --git a/docs/external-node/prepared_configs/mainnet-config.env b/docs/guides/external-node/prepared_configs/mainnet-config.env similarity index 94% rename from docs/external-node/prepared_configs/mainnet-config.env rename to docs/guides/external-node/prepared_configs/mainnet-config.env index a50e0034113..546e572f6cb 100644 --- a/docs/external-node/prepared_configs/mainnet-config.env +++ b/docs/guides/external-node/prepared_configs/mainnet-config.env @@ -41,8 +41,6 @@ EN_FILTERS_LIMIT=10000 EN_SUBSCRIPTIONS_LIMIT=10000 # Interval for polling the DB for pubsub (in ms). 
EN_PUBSUB_POLLING_INTERVAL=200 -# Number of threads per API server. -EN_THREADS_PER_SERVER=128 # Tx nonce: how far ahead from the committed nonce can it be. # This shouldn't be larger than the value on the main node (50). EN_MAX_NONCE_AHEAD=50 @@ -83,9 +81,6 @@ EN_MAIN_NODE_URL=https://zksync2-mainnet.zksync.io EN_L2_CHAIN_ID=324 EN_L1_CHAIN_ID=1 -EN_BOOTLOADER_HASH=0x010007794e73f682ad6d27e86b6f71bbee875fc26f5708d1713e7cfd476098d3 -EN_DEFAULT_AA_HASH=0x0100067d861e2f5717a12c3e869cfb657793b86bbb0caa05cc1421f16c5217bc - # Optional, required only if sentry is configured. EN_SENTRY_ENVIRONMENT=zksync_mainnet diff --git a/docs/external-node/prepared_configs/testnet-config.env b/docs/guides/external-node/prepared_configs/testnet-goerli-config-deprecated.env similarity index 94% rename from docs/external-node/prepared_configs/testnet-config.env rename to docs/guides/external-node/prepared_configs/testnet-goerli-config-deprecated.env index ff905f4d931..e5c0ac947df 100644 --- a/docs/external-node/prepared_configs/testnet-config.env +++ b/docs/guides/external-node/prepared_configs/testnet-goerli-config-deprecated.env @@ -41,8 +41,6 @@ EN_FILTERS_LIMIT=10000 EN_SUBSCRIPTIONS_LIMIT=10000 # Interval for polling the DB for pubsub (in ms). EN_PUBSUB_POLLING_INTERVAL=200 -# Number of threads per API server. -EN_THREADS_PER_SERVER=128 # Tx nonce: how far ahead from the committed nonce can it be. # This shouldn't be larger than the value on the main node (50). EN_MAX_NONCE_AHEAD=50 @@ -83,9 +81,6 @@ EN_MAIN_NODE_URL=https://zksync2-testnet.zksync.dev EN_L2_CHAIN_ID=280 EN_L1_CHAIN_ID=5 -EN_BOOTLOADER_HASH=0x010007794e73f682ad6d27e86b6f71bbee875fc26f5708d1713e7cfd476098d3 -EN_DEFAULT_AA_HASH=0x0100067d861e2f5717a12c3e869cfb657793b86bbb0caa05cc1421f16c5217bc - # Optional, required only if sentry is configured. EN_SENTRY_ENVIRONMENT=zksync_testnet diff --git a/docs/guides/external-node/prepared_configs/testnet-sepolia-config.env b/docs/guides/external-node/prepared_configs/testnet-sepolia-config.env new file mode 100644 index 00000000000..bc1898f89be --- /dev/null +++ b/docs/guides/external-node/prepared_configs/testnet-sepolia-config.env @@ -0,0 +1,92 @@ +# ------------------------------------------------------------------------ +# -------------- YOU MUST CHANGE THE FOLLOWING VARIABLES ----------------- +# ------------------------------------------------------------------------ + +# URL of the Postgres DB. +DATABASE_URL=postgres://postgres@localhost/zksync_local_ext_node +# PostgreSQL connection pool size +DATABASE_POOL_SIZE=50 + +# Folder where the state_keeper cache will be stored (RocksDB). +# If containerized, this path should be mounted to a volume. +EN_STATE_CACHE_PATH=./db/ext-node/state_keeper +# Folder where the Merkle Tree will be stored (RocksDB). +# If containerized, this path should be mounted to a volume. +EN_MERKLE_TREE_PATH=./db/ext-node/lightweight + +# URL of the Ethereum client (e.g. infura / alchemy). +EN_ETH_CLIENT_URL=http://127.0.0.1:8545 + +# ------------------------------------------------------------------------ +# -------------- YOU MAY CONFIGURE THE FOLLOWING VARIABLES --------------- +# ------------------------------------------------------------------------ + +# Port on which to serve the HTTP JSONRPC API. +EN_HTTP_PORT=3060 +# Port on which to serve the WebSocket JSONRPC API. +EN_WS_PORT=3061 + +# Port on which to serve metrics to be collected by Prometheus. +# If not set, metrics won't be collected. 
+# EN_PROMETHEUS_PORT=3322 + +# Port on which to serve the healthcheck endpoint (to check if the service is live). +EN_HEALTHCHECK_PORT=3081 + +# Max possible limit of entities to be requested at once. +EN_REQ_ENTITIES_LIMIT=10000 +# Max possible limit of filters to be active at once. +EN_FILTERS_LIMIT=10000 +# Max possible limit of subscriptions to be active at once. +EN_SUBSCRIPTIONS_LIMIT=10000 +# Interval for polling the DB for pubsub (in ms). +EN_PUBSUB_POLLING_INTERVAL=200 +# Tx nonce: how far ahead from the committed nonce can it be. +# This shouldn't be larger than the value on the main node (50). +EN_MAX_NONCE_AHEAD=50 +# The multiplier to use when suggesting gas price. Should be higher than one, +# otherwise if the L1 prices soar, the suggested gas price won't be sufficient to be included in block. +EN_GAS_PRICE_SCALE_FACTOR=1.2 +# The factor by which to scale the gasLimit +EN_ESTIMATE_GAS_SCALE_FACTOR=1.2 +# The max possible number of gas that `eth_estimateGas` is allowed to overestimate. +EN_ESTIMATE_GAS_ACCEPTABLE_OVERESTIMATION=1000 +# Max possible size of an ABI encoded tx (in bytes). +# This shouldn't be larger than the value on the main node. +EN_MAX_TX_SIZE=1000000 +# Enabled JSON-RPC API namespaces. Also available: en, debug. +EN_API_NAMESPACES=eth,net,web3,zks,pubsub + +# Settings related to sentry and opentelemetry. +MISC_LOG_FORMAT=plain +MISC_SENTRY_URL=unset +MISC_SENTRY_PANIC_INTERVAL=1800 +MISC_SENTRY_ERROR_INTERVAL=10800 +MISC_OTLP_URL=unset + +# Settings related to Rust logging and backtraces. +# You can read about the format [here](https://docs.rs/env_logger/0.10.0/env_logger/#enabling-logging) to fine-tune logging. +RUST_LOG=zksync_core=debug,zksync_dal=info,zksync_eth_client=info,zksync_merkle_tree=info,zksync_storage=info,zksync_state=debug,zksync_types=info,vm=info,zksync_external_node=info,zksync_utils=debug, +RUST_BACKTRACE=full +RUST_LIB_BACKTRACE=1 + + +# ------------------------------------------------------------------------ +# -------------- THE FOLLOWING VARIABLES DEPEND ON THE ENV --------------- +# ------------------------------------------------------------------------ + +# URL of the main zkSync node. +EN_MAIN_NODE_URL=https://sepolia.era.zksync.dev + +EN_L2_CHAIN_ID=300 +EN_L1_CHAIN_ID=11155111 + +# Optional, required only if sentry is configured. +EN_SENTRY_ENVIRONMENT=zksync_testnet + +# ------------------------------------------------------------------------ +# -------------- THE FOLLOWING VARIABLES ARE NOT USED -------------------- +# -------------- BUT HAVE TO BE SET. JUST LEAVE THEM AS IS --------------- +# ------------------------------------------------------------------------ + +ZKSYNC_HOME=/ diff --git a/docs/launch.md b/docs/guides/launch.md similarity index 90% rename from docs/launch.md rename to docs/guides/launch.md index 90ee266eeb5..17898e3350b 100644 --- a/docs/launch.md +++ b/docs/guides/launch.md @@ -17,11 +17,9 @@ zk # installs and builds zk itself zk init ``` -During the first initialization you have to download around 8 GB of setup files, this should be done once. If you have a -problem on this step of the initialization, see help for the `zk run plonk-setup` command. - -If you face any other problems with the `zk init` command, go to the [Troubleshooting](#Troubleshooting) section at the -end of this file. There are solutions for some common error cases. 
+If you face any other problems with the `zk init` command, go to the +[Troubleshooting](https://github.com/matter-labs/zksync-era/blob/main/docs/guides/launch.md#troubleshooting) section at +the end of this file. There are solutions for some common error cases. To completely reset the dev environment: @@ -114,29 +112,6 @@ cargo run --release --bin zksync_verification_key_generator ``` -## Running the setup key generator on machine with GPU - -- uncomment `"core/bin/setup_key_generator_and_server",` from root `Cargo.toml` file. -- ensure that the setup_2^26.key in the current directory, the file can be downloaded from - - -```shell -export BELLMAN_CUDA_DIR=$PWD -# To generate setup key for specific circuit type[0 - 17], 2 below corresponds to circuit type 2. -cargo +nightly run --features gpu --release --bin zksync_setup_key_generator -- --numeric-circuit 2 -``` - -## Running the setup key generator on machine without GPU - -- uncomment `"core/bin/setup_key_generator_and_server",` from root `Cargo.toml` file. -- ensure that the setup_2^26.key in the current directory, the file can be downloaded from - - -```shell -# To generate setup key for specific circuit type[0 - 17], 2 below corresponds to circuit type 2. -cargo +nightly run --release --bin zksync_setup_key_generator -- --numeric-circuit 2 -``` - ## Generating binary verification keys for existing json verification keys ```shell diff --git a/docs/guides/repositories.md b/docs/guides/repositories.md new file mode 100644 index 00000000000..154f0bccdd5 --- /dev/null +++ b/docs/guides/repositories.md @@ -0,0 +1,80 @@ +# Repositories + +## zkSync + +### Core components + +| Public repository | Description | +| --------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| [zksync-era](https://github.com/matter-labs/zksync-era) | zk server logic, including the APIs and database accesses | +| [zksync-wallet-vue](https://github.com/matter-labs/zksync-wallet-vue) | Wallet frontend | +| [era-contracts](https://github.com/matter-labs/era-contracts) | L1 & L2 contracts, that are used to manage bridges and communication between L1 & L2. 
Privileged contracts that are running on L2 (like Bootloader or ContractDeployer) | + +### Compiler + +| Public repository | Description | +| ------------------------------------------------------------------------------------- | ------------------------------------------------------------------- | +| [era-compiler-tester](https://github.com/matter-labs/era-compiler-tester) | Integration testing framework for running executable tests on zkEVM | +| [era-compiler-tests](https://github.com/matter-labs/era-compiler-tests) | Collection of executable tests for zkEVM | +| [era-compiler-llvm](https://github.com/matter-labs//era-compiler-llvm) | zkEVM fork of the LLVM framework | +| [era-compiler-solidity](https://github.com/matter-labs/era-compiler-solidity) | Solidity Yul/EVMLA compiler front end | +| [era-compiler-vyper](https://github.com/matter-labs/era-compiler-vyper) | Vyper LLL compiler front end | +| [era-compiler-llvm-context](https://github.com/matter-labs/era-compiler-llvm-context) | LLVM IR generator logic shared by multiple front ends | +| [era-compiler-common](https://github.com/matter-labs/era-compiler-common) | Common compiler constants | +| [era-compiler-llvm-builder](https://github.com/matter-labs/era-compiler-llvm-builder) | Tool for building our fork of the LLVM framework | + +### zkEVM / crypto + +| Public repository | Description | +| ------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------- | +| [era-zkevm_opcode_defs](https://github.com/matter-labs/era-zkevm_opcode_defs) | Opcode definitions for zkEVM - main dependency for many other repos | +| [era-zk_evm](https://github.com/matter-labs/era-zk_evm) | EVM implementation in pure rust, without circuits | +| [era-sync_vm](https://github.com/matter-labs/era-sync_vm) | EVM implementation using circuits | +| [era-zkEVM-assembly](https://github.com/matter-labs/era-zkEVM-assembly) | Code for parsing zkEVM assembly | +| [era-zkevm_test_harness](https://github.com/matter-labs/era-zkevm_test_harness) | Tests that compare the two implementation of the zkEVM - the non-circuit one (zk_evm) and the circuit one (sync_vm) | +| [era-zkevm_tester](https://github.com/matter-labs/era-zkevm_tester) | Assembly runner for zkEVM testing | +| [era-boojum](https://github.com/matter-labs/era-boojum) | New proving system library - containing gadgets and gates | +| [era-shivini](https://github.com/matter-labs/era-shivini) | Cuda / GPU implementation for the new proving system | +| [era-zkevm_circuits](https://github.com/matter-labs/era-zkevm_circuits) | Circuits for the new proving system | +| [franklin-crypto](https://github.com/matter-labs/franklin-crypto) | Gadget library for the Plonk / plookup | +| [rescue-poseidon](https://github.com/matter-labs/rescue-poseidon) | Library with hash functions used by the crypto repositories | +| [snark-wrapper](https://github.com/matter-labs/snark-wrapper) | Circuit to wrap the final FRI proof into snark for improved efficiency | + +#### Old proving system + +| Public repository | Description | +| ----------------------------------------------------------------------------- | ------------------------------------------------------------------- | +| [era-bellman-cuda](https://github.com/matter-labs/era-bellman-cuda) | Cuda implementations for cryptographic functions used by the prover | +| [era-heavy-ops-service](https://github.com/matter-labs/era-heavy-ops-service) | Main 
circuit prover that requires GPU to run | +| [era-circuit_testing](https://github.com/matter-labs/era-circuit_testing) | ?? | + +### Tools & contract developers + +| Public repository | Description | +| --------------------------------------------------------------- | ----------------------------------------------------------------------------- | +| [era-test-node](https://github.com/matter-labs/era-test-node) | In memory node for development and smart contract debugging | +| [local-setup](https://github.com/matter-labs/local-setup) | Docker-based zk server (together with L1), that can be used for local testing | +| [zksync-cli](https://github.com/matter-labs/zksync-cli) | Command line tool to interact with zksync | +| [block-explorer](https://github.com/matter-labs/block-explorer) | Online blockchain browser for viewing and analyzing zkSync chain | +| [dapp-portal](https://github.com/matter-labs/dapp-portal) | zkSync Wallet + Bridge DApp | +| [hardhat-zksync](https://github.com/matter-labs/hardhat-zksync) | zkSync Hardhat plugins | +| [zksolc-bin](https://github.com/matter-labs/zksolc-bin) | solc compiler binaries | +| [zkvyper-bin](https://github.com/matter-labs/zkvyper-bin) | vyper compiler binaries | + +### Examples & documentation + +| Public repository | Description | +| --------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------- | +| [zksync-web-era-docs](https://github.com/matter-labs/zksync-web-era-docs) | [Public zkSync documentation](https://era.zksync.io/docs/), API descriptions etc. | +| [zksync-contract-templates](https://github.com/matter-labs/zksync-contract-templates) | Quick contract deployment and testing with tools like Hardhat on Solidity or Vyper | +| [zksync-frontend-templates](https://github.com/matter-labs/zksync-frontend-templates) | Rapid UI development with templates for Vue, React, Next.js, Nuxt, Vite, etc. | +| [zksync-scripting-templates](https://github.com/matter-labs/zksync-scripting-templates) | Automated interactions and advanced zkSync operations using Node.js | +| [tutorials](https://github.com/matter-labs/tutorials) | Tutorials for developing on zkSync | + +## zkSync Lite + +| Public repository | Description | +| --------------------------------------------------------------------------- | -------------------------------- | +| [zksync](https://github.com/matter-labs/zksync) | zkSync Lite implementation | +| [zksync-docs](https://github.com/matter-labs/zksync-docs) | Public zkSync Lite documentation | +| [zksync-dapp-checkout](https://github.com/matter-labs/zksync-dapp-checkout) | Batch payments DApp | diff --git a/docs/setup-dev.md b/docs/guides/setup-dev.md similarity index 81% rename from docs/setup-dev.md rename to docs/guides/setup-dev.md index cb42a2b1c7c..a833fa10f45 100644 --- a/docs/setup-dev.md +++ b/docs/guides/setup-dev.md @@ -1,5 +1,34 @@ # Installing dependencies +## TL;DR + +If you run on 'clean' Debian on GCP: + +```bash +# Rust +curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh +# NVM +curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.5/install.sh | bash +# All necessary stuff +sudo apt-get install build-essential pkg-config cmake clang lldb lld libssl-dev postgresql +# Docker +sudo usermod -aG docker YOUR_USER + +## You might need to re-connect (due to usermod change). 
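+# Note: the usermod call above assumes Docker itself is already installed
+# (see the Docker section below / https://docs.docker.com/install/).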
+ +# Node & yarn +nvm install 18 +npm install -g yarn +yarn set version 1.22.19 + +# SQL tools +cargo install sqlx-cli --version 0.7.3 +# Stop default postgres (as we'll use the docker one) +sudo systemctl stop postgresql +# Start docker. +sudo systemctl start docker +``` + ## Supported operating systems zkSync currently can be launched on any \*nix operating system (e.g. any linux distribution or MacOS). @@ -7,7 +36,7 @@ zkSync currently can be launched on any \*nix operating system (e.g. any linux d If you're using Windows, then make sure to use WSL 2, since WSL 1 is known to cause troubles. Additionally, if you are going to use WSL 2, make sure that your project is located in the _linux filesystem_, since -accessing NTFS partitions from inside of WSL is very slow. +accessing NTFS partitions from within WSL is very slow. If you're using MacOS with an ARM processor (e.g. M1/M2), make sure that you are working in the _native_ environment (e.g. your terminal and IDE don't run in Rosetta, and your toolchain is native). Trying to work with zkSync code via @@ -20,15 +49,15 @@ If you are a NixOS user or would like to have a reproducible environment, skip t Install `docker`. It is recommended to follow the instructions from the [official site](https://docs.docker.com/install/). -Note: currently official site proposes using Docker Desktop for linux, which is a GUI tool with plenty of quirks. If you +Note: currently official site proposes using Docker Desktop for Linux, which is a GUI tool with plenty of quirks. If you want to only have CLI tool, you need the `docker-ce` package and you can follow [this guide](https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-20-04) for Ubuntu. Installing `docker` via `snap` or from the default repository can cause troubles. -You need to install both `docker` and `docker-compose`. +You need to install both `docker` and `docker compose`. -**Note:** `docker-compose` is installed automatically with `Docker Desktop`. +**Note:** `docker compose` is installed automatically with `Docker Desktop`. **Note:** On linux you may encounter the following error when you’ll try to work with `zksync`: @@ -53,7 +82,7 @@ sudo usermod -a -G docker your_user_name After that, you should logout and login again (user groups are refreshed after the login). The problem should be solved at this step. -If logging out does not help, restarting the computer should. +If logging out does not resolve the issue, restarting the computer should. ## `Node` & `Yarn` @@ -61,10 +90,10 @@ If logging out does not help, restarting the computer should. `Node.js`, we suggest you to install [nvm](https://github.com/nvm-sh/nvm). It will allow you to update `Node.js` version easily in the future (by running `nvm use` in the root of the repository) 2. Install `yarn` (make sure to get version 1.22.19 - you can change the version by running `yarn set version 1.22.19`). - Instructions can be found on the [official site](https://classic.yarnpkg.com/en/docs/install/). - Check if `yarn` is installed by running `yarn -v`. If you face any problems when installing `yarn`, it might be the - case that your package manager installed the wrong package.Make sure to thoroughly follow the instructions above on - the official website. It contains a lot of troubleshooting guides in it. + Instructions can be found on the [official site](https://classic.yarnpkg.com/en/docs/install/). Check if `yarn` is + installed by running `yarn -v`. 
If you face any problems when installing `yarn`, it might be the case that your + package manager installed the wrong package.Make sure to thoroughly follow the instructions above on the official + website. It contains a lot of troubleshooting guides in it. ## `Axel` @@ -94,7 +123,7 @@ Make sure the version is higher than `2.17.10`. In order to compile RocksDB, you must have LLVM available. On debian-based linux it can be installed as follows: -On linux: +On debian-based linux: ```bash sudo apt-get install build-essential pkg-config cmake clang lldb lld @@ -115,7 +144,7 @@ On mac: brew install openssl ``` -On linux: +On debian-based linux: ```bash sudo apt-get install libssl-dev @@ -167,7 +196,7 @@ On mac: brew install postgresql@14 ``` -On linux: +On debian-based linux: ```bash sudo apt-get install postgresql @@ -188,17 +217,27 @@ SQLx is a Rust library we use to interact with Postgres, and its CLI is used to features of the library. ```bash -cargo install sqlx-cli --version 0.5.13 +cargo install sqlx-cli --version 0.7.3 ``` ## Solidity compiler `solc` Install the latest solidity compiler. +On mac: + ```bash brew install solidity ``` +On debian-based linux: + +```bash +sudo add-apt-repository ppa:ethereum/ethereum +sudo apt-get update +sudo apt-get install solc +``` + Alternatively, download a [precompiled version](https://github.com/ethereum/solc-bin) and add it to your PATH. ## Python diff --git a/docs/repositories.md b/docs/repositories.md deleted file mode 100644 index 7250c5aef22..00000000000 --- a/docs/repositories.md +++ /dev/null @@ -1,75 +0,0 @@ -# Repositories - -## zkSync Era - -### Core components - -| Public repository | Description | -| --------------------------------------------------------------------- | --------------------------------------------------------- | -| [zksync-era](https://github.com/matter-labs/zksync-era) | zk server logic, including the APIs and database accesses | -| [zksync-wallet-vue](https://github.com/matter-labs/zksync-wallet-vue) | Wallet frontend | - -### Contracts - -| Public repository | Description | -| --------------------------------------------------------------------------- | ------------------------------------------------------------------------------------- | -| [era-contracts](https://github.com/matter-labs/era-contracts) | L1 & L2 contracts, that are used to manage bridges and communication between L1 & L2. 
| -| [era-system-contracts](https://github.com/matter-labs/era-system-contracts) | Privileged contracts that are running on L2 (like Bootloader oc ContractDeployer) | -| [v2-testnet-contracts](https://github.com/matter-labs/v2-testnet-contracts) | | - -### Compiler - -| Internal repository | Public repository | Description | -| ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------- | ------------------------------------------------------------------- | -| [compiler-tester](https://github.com/matter-labs/compiler-tester) | [era-compiler-tester](https://github.com/matter-labs/era-compiler-tester) | Integration testing framework for running executable tests on zkEVM | -| [compiler-tests](https://github.com/matter-labs/compiler-tests) | [era-compiler-tests](https://github.com/matter-labs/era-compiler-tests) | Collection of executable tests for zkEVM | -| [compiler-llvm](https://github.com/matter-labs/compiler-llvm) | [era-compiler-llvm](https://github.com/matter-labs/compiler-llvm) | zkEVM fork of the LLVM framework | -| [compiler-solidity](https://github.com/matter-labs/compiler-solidity) | [era-compiler-solidity](https://github.com/matter-labs/era-compiler-solidity) | Solidity Yul/EVMLA compiler front end | -| [compiler-vyper](https://github.com/matter-labs/compiler-vyper) | [era-compiler-vyper](https://github.com/matter-labs/era-compiler-vyper) | Vyper LLL compiler front end | -| [compiler-llvm-context](https://github.com/matter-labs/compiler-llvm-context) | [era-compiler-llvm-context](https://github.com/matter-labs/era-compiler-llvm-context) | LLVM IR generator logic shared by multiple front ends | -| [compiler-common](https://github.com/matter-labs/compiler-common) | [era-compiler-common](https://github.com/matter-labs/era-compiler-common) | Common compiler constants | -| | [era-compiler-llvm-builder](https://github.com/matter-labs/era-compiler-llvm-builder) | Tool for building our fork of the LLVM framework | - -### zkEVM - -| Internal repository | Public repository | Description | -| ----------------------------------------------------------------------- | ------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------- | -| [zkevm_opcode_defs](https://github.com/matter-labs/zkevm_opcode_defs) | [era-zkevm_opcode_defs](https://github.com/matter-labs/era-zkevm_opcode_defs) | Opcode definitions for zkEVM - main dependency for many other repos | -| [zk_evm](https://github.com/matter-labs/zk_evm) | [era-zk_evm](https://github.com/matter-labs/era-zk_evm) | EVM implementation in pure rust, without circuits | -| [sync_vm](https://github.com/matter-labs/sync_evm) | [era-sync_vm](https://github.com/matter-labs/era-sync_vm) | EVM implementation using circuits | -| [zkEVM-assembly](https://github.com/matter-labs/zkEVM-assembly) | [era-zkEVM-assembly](https://github.com/matter-labs/era-zkEVM-assembly) | Code for parsing zkEVM assembly | -| [zkevm_test_harness](https://github.com/matter-labs/zkevm_test_harness) | [era-zkevm_test_harness](https://github.com/matter-labs/era-zkevm_test_harness) | Tests that compare the two implementation of the zkEVM - the non-circuit one (zk_evm) and the circuit one (sync_vm) | -| [circuit_testing](https://github.com/matter-labs/circuit_testing) | [era-cicruit_testing](https://github.com/matter-labs/era-circuit_testing) | ?? 
| -| [heavy-ops-service](https://github.com/matter-labs/heavy-ops-service) | [era-heavy-ops-service](https://github.com/matter-labs/era-heavy-ops-service) | Main circuit prover, that requires GPU to run. | -| [bellman-cuda](https://github.com/matter-labs/bellman-cuda) | [era-bellman-cuda](https://github.com/matter-labs/era-bellman-cuda) | Cuda implementations for cryptographic functions used by the prover | -| [zkevm_tester](https://github.com/matter-labs/zkevm_tester) | [era-zkevm_tester](https://github.com/matter-labs/era-zkevm_tester) | Assembly runner for zkEVM testing | - -### Tools & contract developers - -| Public repository | Description | -| --------------------------------------------------------------- | ----------------------------------------------------------------------------- | -| [local-setup](https://github.com/matter-labs/local-setup) | Docker-based zk server (together with L1), that can be used for local testing | -| [zksolc-bin](https://github.com/matter-labs/zksolc-bin) | repository with solc compiler binaries | -| [zkvyper-bin](https://github.com/matter-labs/zkvyper-bin) | repository with vyper compiler binaries | -| [zksync-cli](<(https://github.com/matter-labs/zksync-cli)>) | Command line tool to interact with zksync | -| [hardhat-zksync](https://github.com/matter-labs/hardhat-zksync) | Plugins for hardhat | - -### Examples & documentation - -| Public repository | Description | -| ------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------ | -| [zksync-web-era-docs](https://github.com/matter-labs/zksync-web-era-docs) | Public documentation, API descriptions etc. Source code for [public docs](https://era.zksync.io/docs/) | -| [era-tutorial-examples](https://github.com/matter-labs/era-tutorial-examples) | List of tutorials | -| [custom-paymaster-tutorial](https://github.com/matter-labs/custom-paymaster-tutorial) | ?? | -| [daily-spendlimit-tutorial](https://github.com/matter-labs/daily-spendlimit-tutorial) | ?? | -| [custom-aa-tutorial](https://github.com/matter-labs/custom-aa-tutorial) | Tutorial for Account Abstraction | -| [era-hardhat-with-plugins](https://github.com/matter-labs/era-hardhat-with-plugins) | ?? | -| [zksync-hardhat-template](https://github.com/matter-labs/zksync-hardhat-template) | ?? | - -## zkSync Lite (v1) - -| Public repository | Description | -| --------------------------------------------------------------------------- | ---------------------------------- | -| [zksync](https://github.com/matter-labs/zksync) | zksync Lite/v1 implementation | -| [zksync-docs](https://github.com/matter-labs/zksync-docs) | Public documentation for zkSync v1 | -| [zksync-dapp-checkout](https://github.com/matter-labs/zksync-dapp-checkout) | ?? | diff --git a/docs/specs/README.md b/docs/specs/README.md new file mode 100644 index 00000000000..34103584e54 --- /dev/null +++ b/docs/specs/README.md @@ -0,0 +1,36 @@ +# ZK Stack protocol specs + +1. [Introduction](./introduction.md) +1. [Overview](./overview.md) +1. [L1 Contracts](./l1_smart_contracts.md) +1. [zkEVM](./zk_evm/README.md) + - [VM Overview](./zk_evm/vm_overview.md) + - [VM Specification](./zk_evm/vm_specification/README.md) + - [Bootloader](./zk_evm/bootloader.md) + - [System Contracts](./zk_evm/system_contracts.md) + - [Precompiles](./zk_evm/precompiles.md) + - [Account Abstraction](./zk_evm/account_abstraction.md) + - [Fee Model](./zk_evm/fee_model.md) +1. 
[L1<->L2 communication](./l1_l2_communication/README.md)
+   - [Overview - Deposits and Withdrawals](./l1_l2_communication/overview_deposits_withdrawals.md)
+   - [L2->L1 messages](./l1_l2_communication/l2_to_l1.md)
+   - [L1->L2 messages](./l1_l2_communication/l1_to_l2.md)
+1. [Blocks and batches](./blocks_batches.md)
+1. [Data Availability](./data_availability/README.md)
+   - [Overview](./data_availability/overview.md)
+   - [Pubdata](./data_availability/pubdata.md)
+   - [Compression](./data_availability/compression.md)
+   - [Reconstruction](./data_availability/reconstruction.md)
+   - [Validium and zkPorter](./data_availability/validium_zk_porter.md)
+1. [Prover](./prover/README.md)
+   - [Overview - Boojum](./prover/overview.md)
+   - [ZK Terminology](./prover/zk_terminology.md)
+   - [Getting Started](./prover/getting_started.md)
+   - [Circuits](./prover/circuits/)
+   - [Circuit testing](./prover/circuit_testing.md)
+   - [Boojum gadgets](./prover/boojum_gadgets.md)
+   - [Boojum function: check_if_satisfied](./prover/boojum_function_check_if_satisfied.md)
+1. [The Hyperchain](./the_hyperchain/README.md)
+   - [Overview](./the_hyperchain/overview.md)
+   - [Shared Bridge](./the_hyperchain/shared_bridge.md)
+   - [Hyperbridges](./the_hyperchain/hyperbridges.md)
diff --git a/docs/specs/blocks_batches.md b/docs/specs/blocks_batches.md
new file mode 100644
index 00000000000..bd3df88539c
--- /dev/null
+++ b/docs/specs/blocks_batches.md
@@ -0,0 +1,274 @@
+# Blocks & Batches - How we package transactions
+
+In this article, we will explore the processing of transactions, how we group them into blocks, what it means to "seal"
+a block, and why it is important to have rollbacks in our virtual machine (VM).
+
+At the basic level, we have individual transactions. However, to execute them more efficiently, we group them together
+into blocks & batches.
+
+## L1 Batch vs L2 Block (a.k.a. MiniBlock) vs Transaction
+
+To help visualize the concept, here are two images:
+
+![Block layout][block_layout]
+
+You can refer to the Block layout image to see how the blocks are organized. It provides a graphical representation of
+how transactions are arranged within the blocks and the arrangement of L2 blocks within L1 "batches."
+
+![Explorer example][explorer_example]
+
+### L2 blocks (aka Miniblocks)
+
+Currently, the L2 blocks do not have a major role in the system until we transition to a decentralized sequencer. We
+introduced them mainly as a "compatibility feature" to accommodate various tools, such as MetaMask, which expect a block
+that changes frequently. This allows these tools to provide feedback to users, confirming that their transaction has
+been added.
+
+As of now, an L2 block is created every 2 seconds (controlled by StateKeeper's config `miniblock_commit_deadline_ms`),
+and it includes all the transactions received during that time period. This periodic creation of L2 blocks ensures that
+transactions are processed and included in the blocks regularly.
+
+### L1 batches
+
+L1 batches play a crucial role because they serve as the fundamental unit for generating proofs. From the perspective of
+the virtual machine (VM), each L1 batch represents the execution of a single program, specifically the Bootloader. The
+Bootloader internally processes all the transactions belonging to that particular batch. Therefore, the L1 batch serves
+as the container for executing the program and handling the transactions within it.
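+
+To make the containment relationship concrete, here is a minimal illustrative sketch; the type names are ours, for
+illustration only, and are not actual node types:
+
+```python
+from dataclasses import dataclass
+
+@dataclass
+class Transaction:
+    tx_hash: bytes
+
+@dataclass
+class L2Block:  # a.k.a. miniblock, created roughly every 2 seconds
+    number: int
+    transactions: list[Transaction]
+
+@dataclass
+class L1Batch:  # the unit of proving: one run of the Bootloader
+    number: int
+    l2_blocks: list[L2Block]
+```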
+
+#### So how large can L1 batch be
+
+Most blockchains use factors like time and gas usage to determine when a block should be closed or sealed. However, our
+case is a bit more complex because we also need to consider prover capacity and limits related to publishing to L1.
+
+The decision of when to seal the block is handled by the code in the [conditional_sealer][conditional_sealer] module. It
+maintains a list of `SealCriterion`, and at the time of writing this article [we have 9 reasons to seal the
+block][reasons_for_sealing], which include:
+
+- Transaction slots limit (currently set to 750 transactions in `StateKeeper`'s config - `transaction_slots`).
+- Gas limit (currently set to `MAX_L2_TX_GAS_LIMIT` = 80M).
+- Published data limit (as each L1 batch must publish information about the changed slots to L1, all the changes must
+  fit within the L1 transaction limit, currently set to `MAX_PUBDATA_PER_L1_BATCH` = 120k).
+- zkEVM geometry limits - for certain operations like Merkleization, there is a maximum number of circuits that can be
+  included in a single L1 batch. If this limit is exceeded, we wouldn't be able to generate the proof.
+
+We also have a `TimeoutCriterion`, but it is not enabled.
+
+However, these sealing criteria pose a significant challenge because it is difficult to predict in advance whether
+adding a given transaction to the current batch will exceed the limits or not. This unpredictability adds complexity to
+the process of determining when to seal the block.
+
+#### What if a transaction doesn't fit
+
+To handle situations where a transaction exceeds the limits of the currently active L1 batch, we employ a "try and
+rollback" approach. This means that we attempt to add the transaction to the active L1 batch, and if we receive an
+`ExcludeAndSeal` response indicating that it doesn't fit, we roll back the virtual machine (VM) to the state before the
+transaction was attempted.
+
+Implementing this approach introduces a significant amount of complexity in the `oracles` (also known as interfaces) of
+the VM. These oracles need to support snapshotting and rolling back operations to ensure consistency when handling
+transactions that don't fit.
+
+In a separate article, we will delve into more details about how these oracles and the VM work, providing a
+comprehensive understanding of their functionality and interactions.
+
+[block_layout]:
+  https://user-images.githubusercontent.com/128217157/236494232-aeed380c-78f6-4fda-ab2a-8de26c1089ff.png
+  'block layout'
+[explorer_example]:
+  https://user-images.githubusercontent.com/128217157/236500717-165470ad-30b8-4ad6-97ed-fc29c8eb1fe0.png
+  'explorer example'
+[conditional_sealer]:
+  https://github.com/matter-labs/zksync-era/blob/main/core/lib/zksync_core/src/state_keeper/seal_criteria/conditional_sealer.rs#20
+  'Conditional Sealer'
+[reasons_for_sealing]:
+  https://github.com/matter-labs/zksync-era/blob/main/core/lib/zksync_core/src/state_keeper/seal_criteria/mod.rs#L106
+  'Reasons for Sealing'
+
+## Deeper dive
+
+### Glossary
+
+- Batch - a set of transactions that the bootloader processes (`commitBatches`, `proveBatches`, and `executeBatches`
+  work with it). A batch consists of multiple transactions.
+- L2 block - a non-intersecting subset of consecutively executed transactions. This is the kind of block you see in the
+  API, and this is the one that will _eventually_ be used for `block.number`/`block.timestamp`/etc. This will happen
+  _eventually_, since at the time of this writing the virtual block migration is being
+  [run](#migration--virtual-blocks-logic).
+- Virtual block — blocks whose data will be returned in the contract execution environment during the migration. They
+  are called "virtual", since they have no trace in our API, i.e. it is not possible to query information about them in
+  any way.
+
+### Motivation
+
+Before the recent upgrade, `block.number`, `block.timestamp`, as well as `blockhash` in Solidity returned information
+about _batches_, i.e. large blocks that are proven on L1 and which consist of many small L2 blocks. At the same time,
+the API returns `block.number` and `block.timestamp` for L2 blocks.
+
+L2 blocks were created for fast soft confirmation in wallets and block explorers. For example, MetaMask shows
+transactions as confirmed only after the block in which the transaction was mined. If the user had to wait for the
+batch confirmation, it would take at least minutes (for soft confirmation) and hours for full confirmation, which is
+very bad UX. But the API could return soft confirmation much earlier through L2 blocks.
+
+There was a huge outcry in the community for us to return the information for L2 blocks in `block.number`,
+`block.timestamp`, as well as `blockhash`, because of the discrepancy between the runtime execution and the data
+returned by the API.
+
+However, there were over 15 million L2 blocks but fewer than 200k batches, meaning that if we simply "switched" from
+returning the L1 batches' info to the L2 blocks' info, some contracts (especially those that use `block.number` for
+measuring time intervals instead of `block.timestamp`) would break. For that reason, we decided to have an accelerated
+migration process, i.e. `block.number` will grow faster and faster, until it reaches roughly 8x the L2 block production
+speed, allowing it to gradually catch up with the L2 block number, after which the L2 `block.number` will be returned.
+The blocks whose info is returned during this process are called "virtual blocks". Their information will never be
+available in any of our APIs, which should not be a major breaking change, since our API already mostly works with L2
+blocks, while L1 batches' information is returned in the runtime.
+
+### Adapting for Solidity
+
+In order to get the returned value for `block.number`, `block.timestamp` and `blockhash`, our compiler used the
+following functions:
+
+- `getBlockNumber`
+- `getBlockTimestamp`
+- `getBlockHashEVM`
+
+During the migration process, these will return the values of the virtual blocks. After the migration is complete, they
+will return values for L2 blocks. The sketch below summarizes this behavior.
+
+### Migration status
+
+At the time of this writing, the migration has been completed on testnet, i.e. there we already return only the L2
+block information. However, the [migration](https://github.com/zkSync-Community-Hub/zksync-developers/discussions/87)
+on mainnet is still ongoing and will most likely end in late October / early November.
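+
+A minimal sketch of which number the execution environment reports at each stage; the function name and inputs are
+illustrative, not the actual `SystemContext` API:
+
+```python
+def reported_block_number(batch_number: int, virtual_block_number: int,
+                          l2_block_number: int) -> int:
+    # Pre-upgrade behavior returned the batch number; during the migration the
+    # faster-growing virtual block number is returned; once virtual blocks
+    # catch up with L2 blocks, the L2 block number is returned.
+    if virtual_block_number == 0:
+        return batch_number
+    if virtual_block_number < l2_block_number:
+        return virtual_block_number
+    return l2_block_number
+```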
+
+## Blocks' processing and consistency checks
+
+Our `SystemContext` contract makes it possible to get information about batches and L2 blocks. Some of the information
+is hard to calculate onchain, for instance, time. The timing information (for both batches and L2 blocks) is provided
+by the operator. In order to check that the operator provided realistic values, certain checks are done on L1.
+Generally though, we try to check as much as we can on L2.
+
+## Initializing L1 batch
+
+At the start of the batch, the operator
+[provides](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/system-contracts/bootloader/bootloader.yul#L3636)
+the timestamp of the batch, its number and the hash of the previous batch. The root hash of the Merkle tree serves as
+the root hash of the batch.
+
+The SystemContext can immediately check whether the provided number is the correct batch number. It also immediately
+sends the previous batch hash to L1, where it will be checked during the commit operation. Also, some general
+consistency checks are performed. This logic can be found
+[here](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/system-contracts/contracts/SystemContext.sol#L416).
+
+## L2 blocks processing and consistency checks
+
+### `setL2Block`
+
+Before each transaction, we call the `setL2Block`
+[method](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/system-contracts/bootloader/bootloader.yul#L2605).
+There we provide some data about the L2 block that the transaction belongs to:
+
+- `_l2BlockNumber` The number of the new L2 block.
+- `_l2BlockTimestamp` The timestamp of the new L2 block.
+- `_expectedPrevL2BlockHash` The expected hash of the previous L2 block.
+- `_isFirstInBatch` Whether this method is called for the first time in the batch.
+- `_maxVirtualBlocksToCreate` The maximum number of virtual blocks to create with this L2 block.
+
+If two transactions belong to the same L2 block, only the first one may have a non-zero `_maxVirtualBlocksToCreate`. The
+rest of the data must be the same.
+
+The `setL2Block`
+[performs](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/system-contracts/contracts/SystemContext.sol#L312)
+a lot of consistency checks similar to the ones for the L1 batch.
+
+### L2 blockhash calculation and storage
+
+Unlike the L1 batch's hash, the L2 blocks' hashes can be checked on L2.
+
+The hash of an L2 block is
+`keccak256(abi.encode(_blockNumber, _blockTimestamp, _prevL2BlockHash, _blockTxsRollingHash))`, where
+`_blockTxsRollingHash` is defined in the following way:
+
+`_blockTxsRollingHash = 0` for an empty block.
+
+`_blockTxsRollingHash = keccak(0, tx1_hash)` for a block with one tx.
+
+`_blockTxsRollingHash = keccak(keccak(0, tx1_hash), tx2_hash)` for a block with two txs, etc.
+
+To add a transaction hash to the current miniblock we use the `appendTransactionToCurrentL2Block`
+[function](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/system-contracts/contracts/SystemContext.sol#L373).
+
+Since zkSync is a state-diff based rollup, there is no way to deduce the hashes of the L2 blocks based on the
+transactions in the batch (because there is no access to the transactions' hashes). At the same time, in order to serve
+the `blockhash` method, the VM requires the knowledge of some of the previous L2 block hashes. In order to save on
+pubdata (by making sure that the same storage slots are reused, i.e. we only have repeated writes) we
+[store](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/system-contracts/contracts/SystemContext.sol#L70)
+only the last 257 block hashes. A sketch of these hash calculations follows.
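+
+A minimal sketch of the formulas above in Python, assuming the `eth_utils` package for `keccak`; note that
+`abi.encode` pads static types to 32-byte words, which is what the manual byte concatenation below mimics:
+
+```python
+from eth_utils import keccak
+
+def txs_rolling_hash(tx_hashes: list[bytes]) -> bytes:
+    rolling = b"\x00" * 32  # _blockTxsRollingHash = 0 for an empty block
+    for tx_hash in tx_hashes:
+        # _blockTxsRollingHash = keccak(previous_rolling_hash, tx_hash)
+        rolling = keccak(rolling + tx_hash)
+    return rolling
+
+def l2_block_hash(number: int, timestamp: int, prev_hash: bytes, rolling: bytes) -> bytes:
+    # keccak256(abi.encode(_blockNumber, _blockTimestamp, _prevL2BlockHash, _blockTxsRollingHash))
+    return keccak(
+        number.to_bytes(32, "big")
+        + timestamp.to_bytes(32, "big")
+        + prev_hash
+        + rolling
+    )
+```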
+
+You can read more about repeated writes and how the pubdata is processed
+[here](https://github.com/code-423n4/2023-10-zksync/blob/main/docs/Smart%20contract%20Section/Handling%20L1%E2%86%92L2%20ops%20on%20zkSync.md).
+
+We store only the last 257 block hashes, since the EVM requires only the 256 previous ones and we use 257 as a safe
+margin.
+
+### Legacy blockhash
+
+When initializing L2 blocks that do not have their hashes stored on L2 (basically these are blocks before the migration
+upgrade), we use the following formula for their hash:
+
+`keccak256(abi.encodePacked(uint32(_blockNumber)))`
+
+### Timing invariants
+
+While the timestamp of each L2 block is provided by the operator, there are some timing invariants that the system
+preserves:
+
+- For each L2 block, its timestamp should be > the timestamp of the previous L2 block.
+- For each L2 block, its timestamp should be ≥ the timestamp of the batch it belongs to.
+- Each batch must start with a new L2 block (i.e. an L2 block cannot span across batches).
+- The timestamp of a batch must be ≥ the timestamp of the latest L2 block which belonged to the previous batch.
+- The timestamp of the last miniblock in a batch cannot go too far into the future. This is enforced by publishing an
+  L2→L1 log with the timestamp, which is then checked on L1.
+
+## Fictive L2 block & finalizing the batch
+
+At the end of the batch, the bootloader calls `setL2Block`
+[one more time](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/system-contracts/bootloader/bootloader.yul#L3812)
+to allow the operator to create a new empty block. This is done purely for technical reasons inside the node, where
+each batch ends with an empty L2 block.
+
+We do not explicitly enforce that the last block is empty, as that would complicate the development process and
+testing; in practice it is empty, and either way it should be secure.
+
+Also, at the end of the batch we send the timestamp of the batch as well as the timestamp of the last miniblock, in
+order to check on L1 that both of these are realistic. Checking any other L2 block's timestamp is not required since
+all of them are enforced to be between those two.
+
+## Migration & virtual blocks' logic
+
+As already explained above, for a smoother upgrade for the ecosystem, a migration is being performed during which,
+instead of returning either batch information or L2 block information, we return the virtual block information until
+virtual blocks catch up with the L2 block number.
+
+### Production of the virtual blocks
+
+- In each batch, there should be at least one virtual block created.
+- Whenever a new L2 block is created, the operator can select how many virtual blocks it wants to create (see the
+  sketch below). This can be any number; however, once the virtual block number exceeds the L2 block number, the
+  migration is considered complete and we switch to the mode where the L2 block information is returned.
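+
+A minimal sketch of this production rule, under the assumption (not spelled out above) that nothing special happens to
+the counter itself once it passes the L2 block number:
+
+```python
+def advance_virtual_blocks(virtual_block_number: int, l2_block_number: int,
+                           max_virtual_blocks_to_create: int) -> tuple[int, bool]:
+    # The operator picks max_virtual_blocks_to_create for each new L2 block.
+    new_virtual = virtual_block_number + max_virtual_blocks_to_create
+    # Once virtual blocks catch up with L2 blocks, the migration is complete.
+    migration_complete = new_virtual >= l2_block_number
+    return new_virtual, migration_complete
+```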
+
+## Additional note on blockhashes
+
+Note that if we used some complex formula for the virtual blocks' hashes (like we do for L2 blocks), we would have to
+put all of them into storage for data availability. Even if we used the same storage trick that we used for the L2
+blocks, where we store only the last 257 block hashes, under the current load/migration plans we would expect roughly
+~250 virtual blocks per batch, practically meaning that we would publish all of these hashes anyway. This would be too
+expensive. That is why we have to use the simple formula `keccak(uint256(number))` for now. Note that these hashes do
+not collide with the legacy miniblock hashes, since legacy miniblock hashes are calculated as `keccak(uint32(number))`.
+
+Also, we need to keep the consistency of previous blockhashes, i.e. if `blockhash(X)` returns a non-zero value, it
+should be consistent among the future blocks. For instance, let's say that the hash of batch `1000` is `1`,
+i.e. `blockhash(1000) = 1`. Then, when we migrate to virtual blocks, we need to ensure that `blockhash(1000)` will
+return either 0 (if and only if the block is more than 256 blocks old) or `1`. Because of that, `blockhash` will use
+the following
+[logic](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/system-contracts/contracts/SystemContext.sol#L103):
+
+- For blocks that were created before the virtual block upgrade, use the batch hashes.
+- For blocks that were created during the virtual block upgrade, use `keccak(uint256(number))`.
+- For blocks that were created after the virtual blocks have caught up with the L2 blocks, use the L2 block hashes.
diff --git a/docs/specs/data_availability/README.md b/docs/specs/data_availability/README.md
new file mode 100644
index 00000000000..590343485f4
--- /dev/null
+++ b/docs/specs/data_availability/README.md
@@ -0,0 +1,5 @@
+# Data availability
+
+- [Overview](./overview.md)
+- [Pubdata](./pubdata.md)
+- [Compression](./compression.md)
diff --git a/docs/specs/data_availability/compression.md b/docs/specs/data_availability/compression.md
new file mode 100644
index 00000000000..0a0057ac9fa
--- /dev/null
+++ b/docs/specs/data_availability/compression.md
@@ -0,0 +1,125 @@
+# State diff Compression
+
+The most basic strategy to publish state diffs is to publish them in either of the following two forms:
+
+- When a key is updated for the first time — `<key, value>`, where `key` is the 32-byte derived key and `value` is the
+  new 32-byte value of the slot.
+- When a key is updated for the second time and onwards — `<enumeration_index, value>`, where `enumeration_index` is an
+  8-byte id of the slot and `value` is the new 32-byte value of the slot.
+
+This compression strategy will utilize a similar idea of treating keys and values separately, and it focuses on
+compressing each of them efficiently.
+
+## Keys
+
+Keys will be packed in the same way as they were before. The only change is that we'll avoid using the 8-byte
+enumeration index and will pack it into the minimal necessary number of bytes. This number will be part of the pubdata.
+Once a key has been used, it can already use the 4- or 5-byte enumeration index, and it is very hard to have something
+cheaper for keys that have been used already. An opportunity would be remembering the ids of accounts to spare some
+bytes on nonce/balance keys, but ultimately the complexity may not be worth it.
+
+There is some room for optimization of the keys that are being written for the first time; however, optimizing those is
+more complex and achieves only a one-time effect (when the key is published for the first time), so it may be in scope
+for future upgrades.
+
+## Values
+
+Values are much easier to compress, since they usually contain only zeroes. Also, we can leverage the nature of how
+those values are changed.
+For instance, if a nonce has been increased only by 1, we do not need to write the entire 32-byte new value; we can
+just note that the slot has been _increased_ and then supply only the 1-byte value by which it was increased. This way,
+instead of 32 bytes we need to publish only 2 bytes: the first byte to denote which operation has been applied and the
+second byte to denote the number by which the addition has been made.
+
+We have the following 4 types of changes: `Add`, `Sub`, `Transform`, `NoCompression`, where:
+
+- `NoCompression` denotes that the whole 32-byte value will be provided.
+- `Add` denotes that the value has been increased (modulo 2^256).
+- `Sub` denotes that the value has been decreased (modulo 2^256).
+- `Transform` denotes that the value has simply been changed (i.e. we disregard any potential relation between the
+  previous and the new value, though the new value might be small enough to save on the number of bytes).
+
+The byte size of the output can be anywhere from 0 to 31 (0 also makes sense for `Transform`, since it denotes that the
+value has been zeroed out). For `NoCompression` the whole 32-byte value is used.
+
+So the format of the pubdata is the following:
+
+**Part 1. Header.**
+
+- `<version (1 byte)>` — this will enable easier automated unpacking in the future. Currently, it will only be equal to
+  `1`.
+- `<total_logs_len (3 bytes)>` — we need only 3 bytes to describe the total length of the L2→L1 logs.
+- `<the number of bytes used for enumeration indexes (1 byte)>`. It should be equal to the minimal number of bytes
+  required to represent the enum indexes for repeated writes.
+
+**Part 2. Initial writes.**
+
+- `<number of initial writes (2 bytes)>` - the number of initial writes. Since each initial write publishes at least 32
+  bytes for the key, `2^16 * 32 = 2097152` will be enough for a long time (right now, with the limit of 120kb, it would
+  take more than 15 L1 txs to use up all the space there).
+- Then, for each `<key, value>` pair of each initial write:
+  - print the key as the 32-byte derived key.
+  - the packing type as a 1-byte value, which consists of 5 bits to denote the length of the packing and 3 bits to
+    denote the type of the packing (either `Add`, `Sub`, `Transform` or `NoCompression`).
+  - The packed value itself.
+
+**Part 3. Repeated writes.**
+
+Note that there is no need to write the number of repeated writes, since we know that until the end of the pubdata, all
+the writes will be repeated ones.
+
+- For each `<key, value>` pair of each repeated write:
+  - print the key as the derived key, using the number of bytes provided in the header.
+  - the packing type as a 1-byte value, which consists of 5 bits to denote the length of the packing and 3 bits to
+    denote the type of the packing (either `Add`, `Sub`, `Transform` or `NoCompression`).
+  - The packed value itself.
+
+## Impact
+
+This setup allows us to achieve nearly 75% compression for values, and 50% gains overall in terms of the storage logs,
+based on historical data.
+
+## Encoding of packing type
+
+Since we have `32 * 3 + 1` ways to pack a state diff, we need at least 7 bits to represent the packing type. To make
+parsing easier, we will use 8 bits, i.e. 1 byte.
+
+We will use the first 5 bits to represent the length of the bytes (from 0 to 31 inclusive) to be used. The other 3 bits
+will be used to represent the type of the packing: `Add`, `Sub`, `Transform`, `NoCompression`.
+
+## Worst case scenario
+
+The worst case for such packing is when we have to pack a completely random new value: it will take us 32 bytes to pack
+plus 1 byte to denote which type it is. However, for such a write the user will anyway pay for at least 32 bytes.
+Adding an additional byte is roughly a 3% increase, which will likely be barely felt by users, most of whom use storage
+slots for balances, etc., which consume only 7-9 bytes for the packed value.
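+
+A minimal sketch of the value packing described above; the 5-bit length / 3-bit type split is as specified, while the
+numeric ids of the packing types and the handling of the length bits for `NoCompression` are assumptions for
+illustration:
+
+```python
+PACKING_TYPE = {"NoCompression": 0, "Add": 1, "Sub": 2, "Transform": 3}  # assumed ids
+MOD = 2**256
+
+def pack_value(initial: int, final: int) -> tuple[int, bytes]:
+    # Try Add, Sub and Transform and keep whichever needs the fewest bytes.
+    candidates = {
+        "Add": (final - initial) % MOD,
+        "Sub": (initial - final) % MOD,
+        "Transform": final,
+    }
+    fitting = {k: v for k, v in candidates.items() if (v.bit_length() + 7) // 8 <= 31}
+    if not fitting:
+        # Fall back to the full 32-byte value; the length bits are unused here (assumed).
+        return PACKING_TYPE["NoCompression"], final.to_bytes(32, "big")
+    kind = min(fitting, key=lambda k: fitting[k].bit_length())
+    nbytes = (fitting[kind].bit_length() + 7) // 8
+    # 1-byte packing type: 5 bits for the byte length, 3 bits for the type.
+    return (nbytes << 3) | PACKING_TYPE[kind], fitting[kind].to_bytes(nbytes, "big")
+```
+
+For example, bumping a nonce from `5` to `6` would yield the packing byte `(1 << 3) | Add = 0x09` followed by the
+single byte `0x01`.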
+
+## Why do we need to repeat the same packing method id
+
+You might have noticed that for each `<key, value>` pair, we always first write the packing type and then the packed
+value. The reader might ask whether it would be more efficient to just supply the packing id once and then list all the
+`<key, value>` pairs that use such packing.
+
+I.e. instead of listing
+
+(key = 0, type = 1, value = 1), (key = 1, type = 1, value = 3), (key = 2, type = 1, value = 4), …
+
+just write:
+
+type = 1, (key = 0, value = 1), (key = 1, value = 3), (key = 2, value = 4), …
+
+There are two reasons for it:
+
+- A minor reason: sometimes it is less efficient in case the packing is used for very few slots (since for correct
+  unpacking we need to provide the number of slots for each packing type).
+- A fundamental reason: currently, enum indices are stored directly in the Merkle tree and have a very strict
+  incrementing order enforced by the circuits (they are assigned in order of the `(address, key)` pairs), and this
+  order is generally not accessible from pubdata.
+
+All this means that we are not allowed to change the order of the "first writes" above: their indexes are directly
+recoverable from their order, and so we can not permute them. If we were to reorder keys without supplying the new
+enumeration indices for them, the state would be unrecoverable. Always supplying the new enum index may add an
+additional 5 bytes for each key, which might negate the compression benefits in a lot of cases. Even if the compression
+were still beneficial, the added complexity may not be worth it.
+
+That being said, we _could_ rearrange those for _repeated_ writes, but for now we stick to the same value compression
+format for simplicity.
diff --git a/docs/specs/data_availability/overview.md b/docs/specs/data_availability/overview.md
new file mode 100644
index 00000000000..d2a5f077896
--- /dev/null
+++ b/docs/specs/data_availability/overview.md
@@ -0,0 +1,19 @@
+# Overview
+
+To support being a rollup, the ZK Stack needs to post the data of the chain on L1. Instead of submitting the data of
+each transaction, we submit how the state of the blockchain changes; this change is called the state diff. This
+approach makes transactions that change the same storage slots very cheap, since these transactions don't incur
+additional data costs.
+
+Besides the state diff, we also [post additional](./pubdata.md) data to L1, such as the L2->L1 messages, the L2->L1
+logs, and the bytecodes of the deployed smart contracts.
+
+We also [compress](./compression.md) all the data that we send to L1, to reduce the costs of posting it.
+
+By posting all the data to L1, we can [reconstruct](./reconstruction.md) the state of the chain from the data on L1.
+This is a key security property of the rollup.
+
+If the chain chooses not to post this data, it becomes a validium. This makes transactions there much cheaper, but
+less secure. Because we use state diffs to post data, we can combine the rollup and validium features by separating
+storage slots that need to post data from the ones that don't. This construction combines the benefits of rollups and
+validiums, and it is called a [zkPorter](./validium_zk_porter.md).
diff --git a/docs/specs/data_availability/pubdata.md b/docs/specs/data_availability/pubdata.md
new file mode 100644
index 00000000000..3584a043055
--- /dev/null
+++ b/docs/specs/data_availability/pubdata.md
@@ -0,0 +1,446 @@
+# Handling pubdata in Boojum
+
+Pubdata in zkSync can be divided up into 4 different categories:
+
+1. L2 to L1 Logs
+2. L2 to L1 Messages
+3. Smart Contract Bytecodes
+4. Storage writes
+
+Using data corresponding to these 4 facets, across all executed batches, we're able to reconstruct the full state of
+L2. With the upgrade to our new proof system, Boojum, the way this data is represented will change. At a high level, in
+the pre-Boojum system these are represented as separate fields, while for Boojum they will be packed into a single
+bytes array. Once 4844 gets integrated, this bytes array will move from being part of the calldata to blob data.
+
+While the structure of the pubdata changes, the way in which one can go about pulling the information will remain the
+same. Basically, we just need to filter all of the transactions to the L1 zkSync contract for only the `commitBatches`
+transactions where the proposed block has been referenced by a corresponding `executeBatches` call (the reason for this
+is that a committed or even proven block can be reverted but an executed one cannot). Once we have all the committed
+batches that have been executed, we then pull the transaction input and the relevant fields, applying them in order to
+reconstruct the current state of L2.
+
+## L2→L1 communication
+
+### L2→L1 communication before Boojum
+
+While there were quite a few changes during the Boojum upgrade, most of the scheme remains the same, so explaining how
+it worked before gives some background on why certain decisions are made and kept for backward compatibility.
+
+[L2→L1 communication before Boojum](https://github.com/code-423n4/2023-10-zksync/blob/main/docs/Smart%20contract%20Section/Handling%20pubdata%20in%20Boojum/L2%E2%86%92L1%20communication%20before%20Boojum.md)
+
+The most important feature that we'll need to maintain in Boojum for backward compatibility is to provide a similar
+Merkle tree of L2→L1 logs with the long L2→L1 messages and priority operations' statuses.
+
+Before Boojum, whenever we sent an L2→L1 long message, a _log_ was necessarily appended to the Merkle tree of L2→L1
+messages on L1. In Boojum we'll have to maintain this fact. Having the priority operations' statuses is important to
+enable
+[proving](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/contracts/ethereum/contracts/bridge/L1ERC20Bridge.sol#L255)
+failed deposits for bridges.
+
+### Changes with Boojum
+
+#### Problems with the previous approach
+
+- There was a limit of 512 L2→L1 logs per batch, which is very limiting. It caused our blocks to be forcefully closed
+  based on the number of these messages instead of having the pubdata as the only limit.
+- In the ideal world, we would like to have the tree adapt to the requirements of the batch, with any number of leaves
+  possible (in practice, a maximum of 2048 would likely be enough for the foreseeable future).
+- Extending the tree in the circuits will be hard to do and hard to maintain.
+- The hash of the contents of the L2→L1 messages needs to be rehashed to support the danksharding blobs, so we want to
+  keep only the essential logs as part of the calldata, and the rest should be separated so that they can be moved to
+  the EIP4844 blob in the future.
+
+#### Solution
+
+We will implement the calculation of the Merkle root of the L2→L1 messages via a system contract, as part of the
+`L1Messenger`. Basically, whenever a user emits a log that needs to be Merklized, the `L1Messenger` contract will
+append it to its rolling hash. Then, at the end of the batch, during the formation of the blob, it will receive the
+original preimages from the operator, verify them, and include the logs in the blob.
+
+We will now call the logs that are created by users and are Merklized _user_ logs, and the logs that are emitted
+natively by the VM _system_ logs. Here is a short comparison table for better understanding:
+
+| System logs | User logs |
+| --- | --- |
+| Emitted by the VM via an opcode. | The VM knows nothing about them. |
+| Consistency and correctness is enforced by the verifier on L1 (i.e. their hash is part of the block commitment). | Consistency and correctness is enforced by the L1Messenger system contract. The correctness of the behavior of the L1Messenger is enforced implicitly by the prover, in the sense that it proves the correctness of the execution overall. |
+| We don't calculate their Merkle root. | We calculate their Merkle root on the L1Messenger system contract. |
+| We have a constant, small number of those. | We can have as many as needed, as long as the commitBatches function on L1 remains executable (it is the job of the operator to ensure that only such transactions are selected). |
+| In EIP4844 they will remain part of the calldata. | In EIP4844 they will become part of the blobs. |
+
+#### Backwards-compatibility
+
+Note that, to maintain a unified interface with the previous version of the protocol, the leaves of the Merkle tree
+will have to maintain the following structure:
+
+```solidity
+struct L2Log {
+    uint8 l2ShardId;
+    bool isService;
+    uint16 txNumberInBlock;
+    address sender;
+    bytes32 key;
+    bytes32 value;
+}
+```
+
+The leaf itself will look the following way:
+
+```solidity
+bytes32 hashedLog = keccak256(
+    abi.encodePacked(_log.l2ShardId, _log.isService, _log.txNumberInBlock, _log.sender, _log.key, _log.value)
+);
+```
+
+`keccak256` will continue being the hash function for the Merkle tree.
+
+To put it shortly, the proofs for L2→L1 log inclusion will continue to have exactly the same format as they did in the
+pre-Boojum system, which avoids breaking changes for SDKs and bridges alike.
+
+#### Implementation of `L1Messenger`
+
+The L1Messenger contract will maintain a rolling hash of all the L2ToL1 logs, `chainedLogsHash`, as well as a rolling
+hash of messages, `chainedMessagesHash`. Whenever a contract wants to send an L2→L1 log, the following operation will
+be
+[applied](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/system-contracts/contracts/L1Messenger.sol#L110):
+
+`chainedLogsHash = keccak256(chainedLogsHash, hashedLog)`. L2→L1 logs have the same 88-byte format as in the current
+version of zkSync; a sketch of this hashing follows below.
+
+Note that the user is charged for the future computation that will be needed to calculate the final Merkle root. It is
+roughly 4x higher than the cost to calculate the hash of the leaf, since the eventual tree might have roughly 4x the
+number of nodes. In any case, this will likely be a relatively negligible part compared to the cost of the pubdata.
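+
+A minimal sketch of the leaf hashing and the rolling hash, assuming the `eth_utils` package for `keccak`; note that
+`abi.encodePacked` concatenates the fields at their natural widths, which is what gives the 88-byte log format
+(1 + 1 + 2 + 20 + 32 + 32 = 88):
+
+```python
+from eth_utils import keccak
+
+def hashed_log(l2_shard_id: int, is_service: bool, tx_number_in_block: int,
+               sender: bytes, key: bytes, value: bytes) -> bytes:
+    # keccak256(abi.encodePacked(uint8, bool, uint16, address, bytes32, bytes32))
+    packed = (
+        l2_shard_id.to_bytes(1, "big")
+        + (b"\x01" if is_service else b"\x00")
+        + tx_number_in_block.to_bytes(2, "big")
+        + sender  # 20-byte address
+        + key     # bytes32
+        + value   # bytes32
+    )
+    assert len(packed) == 88
+    return keccak(packed)
+
+def append_to_rolling_hash(chained_logs_hash: bytes, log_hash: bytes) -> bytes:
+    # chainedLogsHash = keccak256(chainedLogsHash, hashedLog)
+    return keccak(chained_logs_hash + log_hash)
+```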
+
+At the end of the execution, the bootloader will
+[provide](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/system-contracts/bootloader/bootloader.yul#L2470)
+a list of all the L2ToL1 logs as well as the messages in this block to the L1Messenger (this will be provided by the
+operator in the memory of the bootloader). The L1Messenger checks that the rolling hash from the provided logs is the
+same as the `chainedLogsHash` and calculates the Merkle tree of the provided messages. Right now, we always build a
+Merkle tree of size `2048`, but we charge the user as if the tree were built dynamically based on the number of leaves
+in there. The implementation of the dynamic tree has been postponed until later upgrades.
+
+#### Long L2→L1 messages & bytecodes
+
+Before, the fact that the correct preimages for L2→L1 messages and bytecodes were provided was checked on the L1 side.
+Now, it will be done on L2.
+
+If the user wants to send an L2→L1 message, its preimage is also
+[appended](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/system-contracts/contracts/L1Messenger.sol#L125)
+to the messages' rolling hash: `chainedMessagesHash = keccak256(chainedMessagesHash, keccak256(message))`.
+
+A very similar approach is used for bytecodes, where their rolling hash is calculated and then the preimages are
+provided at the end of the batch to form the full pubdata for the batch.
+
+Note that, for backward compatibility, just like before, any long message or bytecode is accompanied by the
+corresponding user L2→L1 log.
+
+#### Using system L2→L1 logs vs the user logs
+
+The content of the L2→L1 logs by the L1Messenger will go to the blob of EIP4844, meaning that all the data that
+belongs to the L1Messenger's L2→L1 log tree should not be needed during block commitment. Also, note that in the
+future we will remove the calculation of the Merkle root of the built-in L2→L1 messages.
+
+The only places where the built-in L2→L1 messaging should continue to be used are:
+
+- Logs by SystemContext (they are needed on commit to check the previous block hash).
+- Logs by L1Messenger for the Merkle root of the L2→L1 tree as well as the hash of the `totalPubdata`.
+- `chainedPriorityTxsHash` and `numberOfLayer1Txs` from the bootloader (read more about it below).
+
+#### Obtaining `txNumberInBlock`
+
+To have the same log format, the `txNumberInBlock` must be obtained. While it is internally counted in the VM, there is
+currently no opcode to retrieve this number. We will have a public variable `txNumberInBlock` in the `SystemContext`,
+which will be incremented with each new transaction, and we will retrieve this variable from there. It is
+[zeroed out](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/system-contracts/contracts/SystemContext.sol#L458)
+at the end of the batch.
+
+### Bootloader implementation
+
+The bootloader has a memory segment dedicated to the ABI-encoded data of the L1ToL2Messenger to perform the
+`publishPubdataAndClearState` call.
+
+At the end of the execution of the batch, the operator should provide the corresponding data in the bootloader memory,
+i.e. user L2→L1 logs, long messages, bytecodes, etc.
+After that, the
+[call](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/system-contracts/bootloader/bootloader.yul#L2484)
+to the `L1Messenger` system contract is performed, which should validate the adherence of the pubdata to the required
+format.
+
+## Bytecode Publishing
+
+Within pubdata, bytecodes are published in 1 of 2 ways: (1) uncompressed via `factoryDeps` (pre-Boojum this is within
+its own field, and post-Boojum as part of the `totalPubdata`) and (2) compressed via long L2→L1 messages.
+
+### Uncompressed Bytecode Publishing
+
+With Boojum, `factoryDeps` are included within the `totalPubdata` bytes and have the following format:
+`number of bytecodes || forEachBytecode (length of bytecode(n) || bytecode(n))`.
+
+### Compressed Bytecode Publishing
+
+This part stays the same in pre- and post-Boojum zkSync. Unlike uncompressed bytecodes, which are published as part of
+`factoryDeps`, compressed bytecodes are published as long L2→L1 messages, which can be seen
+[here](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/system-contracts/contracts/Compressor.sol#L80).
+
+#### Bytecode Compression Algorithm — Server Side
+
+This is the part that is responsible for taking bytecode that has already been chunked into 8-byte words, performing
+validation, and compressing it.
+
+Each 8-byte word from the chunked bytecode is assigned a 2-byte index (the constraint on the size of the chunk → index
+dictionary is 2^16 - 1 elements). The length of the dictionary, the dictionary entries (the index is implied by the
+order), and the indexes are all concatenated together to yield the final compressed version.
+
+For bytecode to be considered valid it must satisfy the following:
+
+1. Bytecode length must be less than 2097120 ((2^16 - 1) \* 32) bytes.
+2. Bytecode length must be a multiple of 32.
+3. The number of 32-byte words cannot be even.
+
+The following is a simplified version of the algorithm:
+
+```python
+def compress(chunked_bytecode):
+    # chunk -> (count, first_pos)
+    statistic = {}
+    for position, chunk in enumerate(chunked_bytecode):
+        count, first_pos = statistic.get(chunk, (0, position))
+        statistic[chunk] = (count + 1, first_pos)
+
+    # We want the more frequently used chunks to have smaller ids to save on
+    # calldata (zero bytes cost less); ties are broken by first occurrence.
+    sorted_chunks = sorted(statistic, key=lambda c: (-statistic[c][0], statistic[c][1]))
+
+    # Index is assigned through order, so the dictionary keys are in index order.
+    dictionary = {chunk: index for index, chunk in enumerate(sorted_chunks)}
+    encoded_data = [dictionary[chunk] for chunk in chunked_bytecode]
+
+    return len(dictionary), list(dictionary), encoded_data
+```
+
+#### Verification And Publishing — L2 Contract
+
+The function `publishCompressedBytecode` takes in both the original `_bytecode` and the `_rawCompressedData`, the
+latter of which comes from the output of the server's compression algorithm. Looping over the encoded data, derived
+from `_rawCompressedData`, the corresponding chunks are pulled from the dictionary and compared to the original
+bytecode, reverting if there is a mismatch. After the encoded data has been verified, it is published to L1 and marked
+accordingly within the `KnownCodesStorage` contract.
+
+Pseudo-code implementation:
+
+```python
+# Pseudo-code: sendToL1 / markAsPublished stand in for L2 system calls.
+length_of_dict = int.from_bytes(_rawCompressedData[:2], "big")
+# Offset by the 2 bytes used to store the length; each dictionary entry is an 8-byte chunk.
+dictionary = _rawCompressedData[2 : 2 + length_of_dict * 8]
+encoded_data = _rawCompressedData[2 + length_of_dict * 8 :]
+
+assert len(dictionary) % 8 == 0   # each element should be 8 bytes
+assert length_of_dict <= 2**16    # at most 2^16 dictionary entries
+assert len(encoded_data) * 4 == len(_bytecode)  # each chunk is 8 bytes and each index is 2 bytes, so they differ by a factor of 4
+
+for index in range(0, len(encoded_data), 2):
+    dict_index = int.from_bytes(encoded_data[index : index + 2], "big")
+    encoded_chunk = dictionary[dict_index * 8 : (dict_index + 1) * 8]
+    # Pull from index * 4 to account for the difference in element size (2-byte index vs 8-byte chunk).
+    real_chunk = _bytecode[index * 4 : index * 4 + 8]
+    assert encoded_chunk == real_chunk
+
+# Sending the compressed bytecode to L1 for data availability
+sendToL1(_rawCompressedData)
+markAsPublished(hash(_bytecode))
+```
+
+## Storage diff publishing
+
+zkSync is a state-diff-based rollup, and so publishing the correct state diffs plays an integral role in ensuring data
+availability.
+
+### How publishing of storage diffs worked before Boojum
+
+As always, in order to understand the new system better, some information about the previous one is important.
+
+Before, the system contracts had no clue about storage diffs. It was the job of the operator to provide the
+`initialStorageChanges` and `repeatedStorageWrites` (more on the difference will be explained below). The information
+to commit the block looked the following way:
+
+```solidity
+struct CommitBlockInfo {
+    uint64 blockNumber;
+    uint64 timestamp;
+    uint64 indexRepeatedStorageChanges;
+    bytes32 newStateRoot;
+    uint256 numberOfLayer1Txs;
+    bytes32 l2LogsTreeRoot;
+    bytes32 priorityOperationsHash;
+    bytes initialStorageChanges;
+    bytes repeatedStorageChanges;
+    bytes l2Logs;
+    bytes[] l2ArbitraryLengthMessages;
+    bytes[] factoryDeps;
+}
+```
+
+These two fields would then be included in the block commitment and checked by the verifier.
+
+### Difference between initial and repeated writes
+
+zkSync publishes the state changes that happened within the batch instead of the transactions themselves. Meaning that
+if, for instance, some storage slot `S` under account `A` has changed to value `V`, we could publish a triple of
+`A,S,V`. By observing all the triples, users could restore the state of zkSync. However, note that our tree, unlike
+Ethereum's, is not account-based (i.e. there is no first layer of depth 160 of the Merkle tree corresponding to
+accounts and second layer of depth 256 of the Merkle tree corresponding to users). Our tree is "flat", i.e. a slot `S`
+under account `A` is just stored in the leaf number `H(S,A)`. Our tree is of depth 256 + 8 (the 256 is for these hashed
+account/key pairs and the 8 is for potential shards in the future; we currently have only one shard and it is
+irrelevant for the rest of the document).
+
+We call this `H(S,A)` the _derived key_, because it is derived from the address and the actual key in the storage of
+the account. Since our tree is flat, whenever a change happens, we can publish a pair `DK, V`, where `DK=H(S,A)`.
+
+However, there is an optimization that can be done:
+
+- Whenever a key is changed for the first time, we publish a pair of `DK,V` and we assign some sequential id to this
+  derived key. This is called an _initial write_. It happens for the first time and that's why we must publish the
+  full key.
+- If this storage slot is published in some of the subsequent batches, instead of publishing the whole `DK`, we can use
+  the sequential id instead. This is called a _repeated write_.
+
+For instance, if the slots `A`,`B` (I'll use latin letters instead of 32-byte hashes for readability) changed their
+values to `12`,`13` accordingly, then in the batch where it happened they will be published in the following format:
+
+- `(A, 12), (B, 13)`. Let's say that the last sequential id ever used is 6. Then, `A` will receive the id of `7` and
+  `B` will receive the id of `8`.
+
+Let's say that in the next block, they change their values to `13`,`14`. Then, their diff will be published in the
+following format:
+
+- `(7, 13), (8, 14)`.
+
+The id is permanently assigned to each storage key that was ever published. While in the description above it may not
+seem like a huge boost, each `DK` is 32 bytes long while the id is at most 8 bytes long.
+
+We call this id the _enumeration_index_.
+
+Note that the enumeration indexes are assigned in the order of the sorted array of (address, key), i.e. they are
+internally sorted. The enumeration indexes are part of the state Merkle tree, so it is **crucial** that the initial
+writes are published in the correct order, so that anyone could restore the correct enum indexes for the storage
+slots. In addition, an enumeration index of `0` indicates that the storage write is an initial write.
+
+### State diffs after Boojum upgrade
+
+Firstly, let's define what we'll call the `stateDiffs`. A _state diff_ is an element of the following structure:
+
+[https://github.com/matter-labs/era-zkevm_test_harness/blob/3cd647aa57fc2e1180bab53f7a3b61ec47502a46/circuit_definitions/src/encodings/state_diff_record.rs#L8](https://github.com/matter-labs/era-zkevm_test_harness/blob/3cd647aa57fc2e1180bab53f7a3b61ec47502a46/circuit_definitions/src/encodings/state_diff_record.rs#L8)
+
+Basically, it contains all the values which might interest us about the state diff:
+
+- `address` where the storage has been changed.
+- `key` (the original key inside the address).
+- `derived_key` — `H(key, address)` as described in the previous section.
+  - Note that the hashing algorithm currently used here is `Blake2s`.
+- `enumeration_index` — Enumeration index as explained above. It is equal to 0 if the write is initial, and it contains
+  the non-zero enumeration index if it is a repeated write (indexes are numbered starting from 1).
+- `initial_value` — The value that was present in the key at the start of the batch.
+- `final_value` — The value that the key has changed to by the end of the batch.
+
+We will consider `stateDiffs` an array of such objects, sorted by (address, key).
+
+This is the internal structure that is used by the circuits to represent the state diffs. The most basic "compression"
+algorithm is the one described above (see the sketch below):
+
+- For initial writes, write the pair of (`derived_key`, `final_value`).
+- For repeated writes, write the pair of (`enumeration_index`, `final_value`).
+
+Note that values like `initial_value`, `address` and `key` are not used in the "simplified" algorithm above, but they
+will be helpful for the more advanced compression algorithms in the future. The
+[algorithm](#state-diff-compression-format) for Boojum will already utilize the difference between the `initial_value`
+and `final_value` for saving on pubdata.
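+
+A minimal sketch of this basic scheme; the `StateDiff` record mirrors the fields listed above, and the fixed 8-byte
+width for enumeration indexes is an assumption for illustration:
+
+```python
+from dataclasses import dataclass
+
+@dataclass
+class StateDiff:  # assumed shape, mirroring the fields listed above
+    address: bytes
+    key: int
+    derived_key: bytes      # 32 bytes
+    enumeration_index: int  # 0 for initial writes
+    initial_value: int
+    final_value: int
+
+def basic_compress(state_diffs: list[StateDiff]) -> bytes:
+    out = b""
+    for diff in state_diffs:  # assumed sorted by (address, key)
+        if diff.enumeration_index == 0:
+            # Initial write: publish the full 32-byte derived key.
+            out += diff.derived_key + diff.final_value.to_bytes(32, "big")
+        else:
+            # Repeated write: the (at most 8-byte) enumeration index is enough.
+            out += diff.enumeration_index.to_bytes(8, "big") + diff.final_value.to_bytes(32, "big")
+    return out
+```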
+
+### How the new pubdata verification would work
+
+#### L2
+
+1. The operator provides both the full `stateDiffs` (i.e. the array of the structs above) and the compressed state
+   diffs (i.e. the array which contains the state diffs, compressed by the algorithm explained
+   [below](#state-diff-compression-format)).
+2. The L1Messenger must verify that the compressed version is consistent with the original stateDiffs.
+3. Once verified, the L1Messenger will publish the _hash_ of the original state diffs via a system log. It will also
+   include the compressed state diffs in the totalPubdata to be published onto L1.
+
+#### L1
+
+1. While committing the block, L1
+   [verifies](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/contracts/ethereum/contracts/zksync/facets/Executor.sol#L139)
+   that the operator has provided the full preimage for the totalPubdata (which includes L2→L1 logs, L2→L1 messages,
+   bytecodes as well as the compressed state diffs).
+2. The block commitment
+   [includes](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/contracts/ethereum/contracts/zksync/facets/Executor.sol#L462)
+   the hash of the `stateDiffs`. Thus, the ZKP verification will fail if the provided stateDiff hash is not correct.
+
+It is a secure construction because the proof can be verified only if both the execution was correct and the hash of
+the provided `stateDiffs` is correct. This means that the L1Messenger indeed received the array of correct
+`stateDiffs` and, assuming the L1Messenger is working correctly, double-checked that the compression is of the correct
+format, while the L1 contracts at the commit stage double-checked that the operator provided the preimage for the
+compressed state diffs.
+
+### State diff compression format
+
+The following algorithm is used for the state diff compression:
+
+[State diff compression v1 spec](https://github.com/code-423n4/2023-10-zksync/blob/main/docs/Smart%20contract%20Section/Handling%20pubdata%20in%20Boojum/State%20diff%20compression%20v1%20spec.md)
+
+## General pubdata format
+
+At the end of the execution of the batch, the bootloader provides the `L1Messenger` with the preimages for the user
+L2→L1 logs, the L2→L1 long messages as well as the uncompressed bytecodes. It also provides the compressed state diffs
+as well as the original expanded state diff entries.
+
+It will check that the preimages are correct as well as the fact that the compression is correct. It will output the
+following three values via system logs:
+
+- The root of the L2→L1 log Merkle tree. It will be stored and used for proving withdrawals.
+- The hash of the `totalPubdata` (i.e. the pubdata that contains the preimages above as well as the packed state
+  diffs).
+- The hash of the state diffs provided by the operator (it will later be included in the block commitment and will be
+  enforced by the circuits).
+
+The `totalPubdata` has the following structure (a parsing sketch follows the list):
+
+1. First 4 bytes — the number of user L2→L1 logs in the batch.
+2. Then, the concatenation of packed L2→L1 user logs.
+3. Next, 4 bytes — the number of long L2→L1 messages in the batch.
+4. Then, the concatenation of L2→L1 messages, each in the format of `<4 byte length || actual_message>`.
+5. Next, 4 bytes — the number of uncompressed bytecodes in the batch.
+6. Then, the concatenation of uncompressed bytecodes, each in the format of `<4 byte length || actual_bytecode>`.
+7. Next, 4 bytes — the length of the compressed state diffs.
+8. Then, the state diffs, compressed by the spec [above](#state-diff-compression-format).
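+
+A minimal sketch of parsing this layout, assuming (per the L1Messenger section above) that each packed user log is 88
+bytes:
+
+```python
+LOG_SIZE = 88  # packed user L2->L1 log size in bytes
+
+def parse_total_pubdata(data: bytes):
+    def read_u32(ofs: int) -> tuple[int, int]:
+        return int.from_bytes(data[ofs : ofs + 4], "big"), ofs + 4
+
+    def read_length_prefixed(ofs: int, count: int) -> tuple[list[bytes], int]:
+        items = []
+        for _ in range(count):
+            length, ofs = read_u32(ofs)
+            items.append(data[ofs : ofs + length])
+            ofs += length
+        return items, ofs
+
+    # 1-2. user L2->L1 logs
+    num_logs, ofs = read_u32(0)
+    logs = [data[ofs + i * LOG_SIZE : ofs + (i + 1) * LOG_SIZE] for i in range(num_logs)]
+    ofs += num_logs * LOG_SIZE
+
+    # 3-4. long L2->L1 messages
+    num_messages, ofs = read_u32(ofs)
+    messages, ofs = read_length_prefixed(ofs, num_messages)
+
+    # 5-6. uncompressed bytecodes
+    num_bytecodes, ofs = read_u32(ofs)
+    bytecodes, ofs = read_length_prefixed(ofs, num_bytecodes)
+
+    # 7-8. compressed state diffs
+    diffs_len, ofs = read_u32(ofs)
+    compressed_state_diffs = data[ofs : ofs + diffs_len]
+    return logs, messages, bytecodes, compressed_state_diffs
+```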
+
+With Boojum, the interface for committing batches is the following:
+
+```solidity
+/// @notice Data needed to commit new batch
+/// @param batchNumber Number of the committed batch
+/// @param timestamp Unix timestamp denoting the start of the batch execution
+/// @param indexRepeatedStorageChanges The serial number of the shortcut index that's used as a unique identifier for storage keys that were used twice or more
+/// @param newStateRoot The state root of the full state tree
+/// @param numberOfLayer1Txs Number of priority operations to be processed
+/// @param priorityOperationsHash Hash of all priority operations from this batch
+/// @param bootloaderHeapInitialContentsHash Hash of the initial contents of the bootloader heap. In practice it serves as the commitment to the transactions in the batch.
+/// @param eventsQueueStateHash Hash of the events queue state. In practice it serves as the commitment to the events in the batch.
+/// @param systemLogs Concatenation of all L2 -> L1 system logs in the batch
+/// @param totalL2ToL1Pubdata Total pubdata committed to as part of the bootloader run. Contents are: l2Tol1Logs <> l2Tol1Messages <> publishedBytecodes <> stateDiffs
+struct CommitBatchInfo {
+  uint64 batchNumber;
+  uint64 timestamp;
+  uint64 indexRepeatedStorageChanges;
+  bytes32 newStateRoot;
+  uint256 numberOfLayer1Txs;
+  bytes32 priorityOperationsHash;
+  bytes32 bootloaderHeapInitialContentsHash;
+  bytes32 eventsQueueStateHash;
+  bytes systemLogs;
+  bytes totalL2ToL1Pubdata;
+}
+```
diff --git a/docs/specs/data_availability/reconstruction.md b/docs/specs/data_availability/reconstruction.md
new file mode 100644
index 00000000000..a97f22298e0
--- /dev/null
+++ b/docs/specs/data_availability/reconstruction.md
@@ -0,0 +1,7 @@
+# L2 State Reconstruction Tool
+
+Given that we post all data to L1, there is a tool, created by the [Equilibrium Team](https://equilibrium.co/), that
+uses solely L1 pubdata to reconstruct the state and to verify that the state root on L1 can be recreated from the
+pubdata. A link to the repo can be found [here](https://github.com/eqlabs/zksync-state-reconstruct). The tool works by
+parsing out all the L1 pubdata for each executed batch and comparing the state roots after each batch is
+processed.
diff --git a/docs/specs/data_availability/validium_zk_porter.md b/docs/specs/data_availability/validium_zk_porter.md
new file mode 100644
index 00000000000..e2b0b1408e6
--- /dev/null
+++ b/docs/specs/data_availability/validium_zk_porter.md
@@ -0,0 +1,7 @@
+# Validium and zkPorter
+
+Chains may choose not to post their data to L1, in which case they become validiums. This makes transactions there
+much cheaper, but less secure. Because the ZK Stack uses state diffs to post data, it can combine the rollup and
+validium features by separating storage slots that need to post data from the ones that don't. This construction
+combines the benefits of rollups and validiums, and it is called a
+[zkPorter](https://blog.matter-labs.io/zkporter-composable-scalability-in-l2-beyond-zkrollup-2a30c4d69a75).
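+
+To illustrate the idea, here is a conceptual Rust sketch. It is purely illustrative: the `DaMode` flag and the
+per-slot policy are hypothetical, not part of the actual protocol. The state diffs of a batch are partitioned by a
+per-slot data-availability mode, and only the rollup part is included in the pubdata posted to L1:
+
+```rust
+/// Hypothetical per-slot data-availability policy.
+#[derive(Clone, Copy, PartialEq)]
+pub enum DaMode {
+    /// Diffs for this slot are posted to L1 (rollup guarantees).
+    Rollup,
+    /// Diffs for this slot are kept off L1 (validium/zkPorter guarantees).
+    Validium,
+}
+
+pub struct SlotDiff {
+    pub derived_key: [u8; 32],
+    pub final_value: [u8; 32],
+    pub mode: DaMode,
+}
+
+/// Splits a batch's state diffs by policy. Only the first (rollup) part goes
+/// into the published pubdata; the validium part is still proven by the
+/// circuits, just never posted to L1.
+pub fn split_by_da_mode(diffs: Vec<SlotDiff>) -> (Vec<SlotDiff>, Vec<SlotDiff>) {
+    diffs.into_iter().partition(|d| d.mode == DaMode::Rollup)
+}
+```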
diff --git a/docs/specs/img/L2_Components.png b/docs/specs/img/L2_Components.png
new file mode 100644
index 0000000000000000000000000000000000000000..1f9dcf39392e8278d840ee70fb91dd7f63fc6983
GIT binary patch
literal 75908
[base85-encoded binary image data omitted]

literal 0
HcmV?d00001

diff --git a/docs/specs/img/diamondProxy.jpg b/docs/specs/img/diamondProxy.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..9e1bc58ce4232b56609148716a3896a5f4f4cdd4
GIT binary patch
literal 127297
[base85-encoded binary image data omitted]
z?p=aVjwB zUYQA$j5m$LaPEd$2$T!XBTPmcq!=Xg-kMUF9ijkH%8(k!s_&ct1_$XSKQ(~Pq@28s zJ7>srax8|I#4i1LPmZb#`fgrANJ`L-$d?Z>g!QCmGh9>q=SsBDewy851=;T&q+E)lqv&IY1LJQ&R zKg{85Qx)9>FFBohEaPZGMjum2klE%-+>wnj`jqQXfs0J~&etuT?R`_|qQ$x3pUac2 zN~EXcKnfK9;}t>xpW88qvrqZqE_o^Utgl-38I)ni%f$9o{)R@OAAuAGY6;ZW@&0*g zar0Tk0j#g+F_NG8<;sCiswXp+Fl%|LhZB|7Y$P$J5wjg z0a#S~Oag{=3$+rE7$H=DUxMMsyOdVk^{{NO_vyCeleZ~)=+i}$F+43k|Av=cWgeL_ zQ#IZWP1$Iz3NHcajqv!4N4d`*rub4mycjtYCt19o-Rm4__AvSd_@W2ncvs8Gn*9C~ zMmi+fsize!q429PTg*}TEV7stp)yg)k6F?#-L)SX283VO44w?1$u36z) zOhK0KAikkH_Z~i{3$AV?0k1s5?Y%EWhMw9oxogL%1h#Ik6$`V7+xBY_#*Kt*u44x zIEcKKZMN@%Rnvi$O+rr z+UXeb5I|W6aTmCX@8IlA-YEh)5#j3(`IfvPlyH^x>LoUemZmncgGQ}ymZAf4<86JR zwPz{{UHKg%;?vf7X~#+Si}4^i9EZZ6o#FCPKR_zN&vZ-kKX?K;`q@=*hxy&*fPO0? z`aFJpb)0+{o^p*>B<<*>WvT6o_o73u60}5n$=lllg_hu|hhvA^nLYMLUTv=Ff-xC! z!5f0%_uKZ=pUExkWI43|*!-z{c5btPTZ0GTLbAtEi#nz?M|8ZRll-HY(OZYmZy;B- zR^1Y*NNK~Da~gcKHe?O!T_!8mgy$m)(P;O`KoseFhP*r8X>*8lnF)*8Re*{h&SMQA z_rht{GW=ibR9eja;p+iiP6yFb1MOeo&^h5eoSxjmyU(j9nr_oZO#(Zi+aIHRkytNe zyGgTD@7-3>#a5NNj8~KX9o;H>*yxXgTU}sMDTrBg7&wyNf|qteH!eoAkx-p?Q4tRA zzH@qO7222HbuRWUyJjRJXP=>J6yFR;5vwAf^?mhkt;&mAw0ZrAw~MCDK%@%Ixk*wB zK{s=a&qqXL;a5vKa7(cT^vh9e9=xOjb9KvOJD0}qRSXuFYo9q%nJRjCn>o;jF+$m} zu{Y@FzZ%~D^>FtO_&@R=I-hha%NTUF2nnoK8g*k+Yy!G=)Mypguir}Cq62OOJqVsL z>Lr9GzD|rI1oY#_XIUzTK@3xI&PolZ|FhFnJo+5CvW>5#mkzu{&%vI~_<~5{9%<@M z9UjxU6&;`a^;kx9fC)545QTq-sa;P6C!nAMx0dA@qwZdxHr$Q*%Fnm@YM*#tfhnep zbqrxv!~U%kuVzfH+mla{Bxz<9$z90tR&WDP;4tsT3UGp^Q8`?5}< z>%eTN&ynr=(VP;Q$T>rA7~$dsc{Myj$oc1dPc<)jWxRnDKs2G|Hmkh;iQ?UwczjIa zbL1u|<#~3Dfj8*KiA|*dwWyV7@i-@f;XqVZRn;L$A`XiS0?`kuhf@@79Z$ektBH{p1NrXj84IWNj)Y%QZUhonrooC z|0jlbYdPUeFhZIRD2-Fqip7ZNl4Bu-F^9Kox!``ChpDK}2%F~*`9vptng4>ZekET7 zFG&Ur7s06H-|-_Cg+EKnRLHa|);nvtFZ~!`DGx5`*Je-!4>298F&T;pMl?(vQS%3*s&Qlv`5+sZBpn=fBmQ?h1Wr)1* zQNA7$<9Wxruk%APHa+sByj<(#o+(bF1IomaM zXUG(HfhLX+B-!I5Mx3DuH1Yc3le?P=#v?A4T2HHOHNzd4`Yv5%ffw*5&5?M?r)i`7 zl|^K8(j$^e$IW}o*b1qbY|CDm({0spvCuK?_q-QQk)5$&?Y0R}mRj@*Ix&K~HCv2v zFpz~@j)%^_TCv8+9S^RRPTm*s)Z8oH)A?4Q{AEXby{oL}MD&RtG>1T;K*oxa9lX!s z-`r!gKV%{%Cb{`#K@RPH?$1U?_%Sil^4U;5*f; zr9SXTYx>~FS1GX(EC3x4_WSe;^)!Nu&I2Zuh|v$XZ6|_z)?Y2&?Vjp%nqy5(w{m!d zGstH+bUDCYXN;so{P|IJ`wmE$;Ker0B)BQKFpk?o6t}|0ChVzUG(@B1n4n{#zpRN!!Y06VC4%0U7 z`7jlXghr{jjb0wpKE4<#!TE4Ts=GN!`-p~@#fb~(H_$X6``#H|jQbE1<;eIC0=U*( zoK)1Az4^{{#Xq2;^tGNrktvv6193U+!~3}^me$s^(?Ktq9&^O#9%Zj1&CMyT6VDD~>Sdj&O81s>m@1Z^8cF4O^PLX6*>k8}+j5z4#!!DKpu zQi_pkNh)9bE6w%KPdczY3@UBpxC9Q8h~#Z{6NRHYk9pNX&{L|<-him5{_f`qFpu#e zcm`2{`pCe#rLvhf01R76rH)Se8$G*{c^Q;;>6j{6!$AJ__bdB9cTzT~5%Vhy5wm)b#H>TvkMD(Gofwu}WgY!NBZgusoN ztmQs4buO_sdsMz3#Jgz{A|bcy28?#~*)X~Y>=V^yB($>IjZ z7!1v=i%=a}#*k;)_5%2293;N=5J~F{VJCJ?eA(jj$>;3yX}GIb9zF}SYrB$YF1Z)B z>-DS~6|8e*WR4&R3nXf-?;o~nhS?{7by&#bwVuH0$4n)*aG z|Eb>GTXb5q7R*;%WYr8G2P4!DPm-eibsnj%sarbxgj(?%<~^mEg{AtZ-p}d;e#B^y z6mWzKVJgFC|ZF47lv zJ717+vXy(B0`v8K;r#C2nc`Cf6BRsvXvHOd8n1;N4AkP}2V0||=d`QM$>g5mE9XE-wU*6|@6;dQ_C}oc~&S{AbT&yQZ zKYsD^Y~6SJbMKcnTDo>#*NC`QW?h#!BglUt7a-4c8R-zz(5srWL(Gn_haYupn9eu9 zY2b*L&u5qAYrK9#^cW!i6tQR>0rD;KI{Ps@$!xE>O2PF`|+>Sq5YpD28Fxo*TJ&bK4>)O};f zbJ+f9P0CS{6G-sXAD0^~`E1Q;c}f4q4aJGr_xFsmEZ;G(2D)%+UkQHM9Q1VKz}Jr_1XRzAT{REXmw}ukNe~lBV8X8pZ-xydaPWXSPM=(C z#k79Kyw3%Xe(^JzkN1ZvFo-U@z&enVrP$tXn2?|J{?QvwoXf7~aGkd2x!cM9EzRsI zE+U8Tq$mwCIM&?Z{v3mT>iM}YZ)kFPwyyMUb5t>W^t@##G0RZ4U+rAa!H}Ip8M0r` zcCGV=cB`$-pXJ5Nv4jVJ*_iVspju?_CCsJ5>nGwcIH&gfp+P{ z*MbZy4aizd+e;1MhFJCHE56dNau^54`9=8`XTvpa3al(Eo;jMsYTz!Dyg*Y&I1wHu zriUP7F|4mD0uX+$oR1b6et6dWVbSEzlKpOMq3@UjCK!5nwjE$JT|>nDrx)rxrOWcZwkS5=@8S z#M4q>*d*kqB37Xxyl`{X?y)#j!}t3A>^>JRezPbMVgZRA2jQQBTUZj(UhO?y>6!j^ 
z>U5^<(baqxt+#9*{#{(Ece2d)1(06OS=`doS|vwq|NlJ5c2W9l6=bw zca5i2*_Y$%%Mq7vNEbjpU{YEQ3D8s{VFZ+*kwgi{4kV1%rSWeJbiUQO$3I#1Sn>z9 zQvRWnkbkuRf6Rh6fh#8JAvA;HhLNBtk}ie{)82h-R`V`%G^sW)4V6KK#H@a``qIpA zAv8n#Fs(G+?2SL(?xsR89B%y_YIszv(%bXGk-mw6{kJbfF^(sG%^>yjrn=LL^LVL^ z=(E)O$nR6Fptt3VECXec>8m!9*GF2afvl9cHRXSVcXz>$4i8Pz#_%D%yO5FLLUEYKg5k4`_NuJs8)BS&jE>JC#o-xqkZ#^P z_N^j#$laA%7_Z-q&!>P;n1ypp{42ShM|vlAl}ik=w4iNw23 zS&ucFXjdf}JMv|Zc)-5{_r#yWriKj>P8AGbRm^qBt6(v%qR`7a?(7S z?6fkZwQkwnIs%cXj_?SEXUJG`g5M6Tpn$f!e%65Segy-4IR@5@IB9Xs&uv!9(!dC;-ZYY zyDB0G?W)z4uHTq;*X&F)a>@Y6dCEc3{G2U>VnI;qMDmk(X?!U|?8V$=mZ6qw9eK4H z0rQ7W=3ni5mOdNp#>7Q%m@HitnM1Q(6SrCD3Wy)im&Dgsm6<&6EtfhqvgG60c+Y~< z#i={%ZrArrIiW>!OV$CJgkP@FQ*Q#~1YS^Dxt=tLlaltglP9FTFhNF)-K!PO5bhT4 zQGOaL%f)d2&HhG+!E(O9Cj-}6Ud}X?w&>%Z4Z?lPd_|7YN-B{)kDmFg$&sy+^@}%3 z-luZBI{WlqLT0kRHpBMN9V1P|wK1qqhBQf(WZ=5!B(YdecDAceS=sN({uIr#zjn|7 zjy!ngthva#W<^>gF?T$A3fDB0h~llJIAhU_Uv2zfE@jR*Px|K$8w@p!Oz z+i1Y`UEqG{QZA-bV-nSCKRKRIm-0;srz+0qyO)mgRQvYooI`Q;MoDLvs97vIMVG-u z3HZ?(SWR-AMJMFWq#EcGkl2P4iuq1%%lukiIXryZI>nl-GCrNze9PXhS8~b9o=30& zqPb*jlt_9;aJMHFh2k+-Oox=0t1xeJpl#XLYxvll?A(jRe%dL#F!k`8?%Z+n_adKU zSmFa#hGy3pNkwM}UWwq~TK749Q&m@du)Dhj8v^ndPo{-c(uO^B{JJ%Egir5Se5Rooiy z%tgjfoQrL4RQR;yM^>Qtm02TdCNH4r(cfYsi-=;T|&WbeaYN7G{l8TVw#V8kawd?CWV39V&Y8x=>nX8HaRa_! zkZt4CHjFG5x=vNgYOO>+#&}Axy5JW&{ZDmv9SiNr30Arr+L*0Umd`GJu<~0`VAehb z$G{8eY~KZ7r$7Fiu7!V@>J_GW4O7dS9Lg&WPwc>OL_=KRV^bacoKcr{16Ehu)4!KK z`sn;>V)MIkVCNx0O5?nB)NvRWNvjK_b*${p!IIp>36NNdN#z7MY=%Y4jRb7D8m{Bs zh$kQL3-i!Rw#+q%-YV-kb7J-$PXIBRG=K%Qx+cAapmMS9ZMF)?%)D0ku6sY{KwjPy z=Ihi8f!ha^elTMLBUaH7cw}$DepiI)XsamV9RB1q+Ze*c&v5!a?J3W*eYnZA(H@UG z8aam)B`ewFpt?3QGR_n+QYH>Fac5$wRq$qkmx5KezfF>_V1xgi~AdrVa6T;dNa z(O!A~O@D3)f%=x5$PZz7OD~zktSdPK?n)b zI|2fNLX;vPO}f-b7g3NdH6*A=PbeXf;bpcw&o5$vml1LPvAV^ zz!PEzAtmqtQNNtPVZF4LNdD%vQ(^ajdxo;qUSy5^X~j#s^il3l8qFIh^3xbT#s`Db zTH^Ju2WXnpT|K+f_vy&Tv_E$gSQyKSSQU$Oh&AZPE&>f<@e{hvw5RgC%uh z-6Jf`zklpN;a2wUmG06WvPHTZvPMj223aHj+UpSH_!hLaQH}Eag_D}VaF~{OV?&`! 
z-UlyNiF~&mm9cMj;=J~3Za97!5$@)`-Huk#kYVr-c$)F%My+9BmuQFmeZND@L%a0K zcqw+KYO;wccTzEQ`Kj?2*4qS5uQ3a(g|(IJ>LDygNbr+b3LFWTT}aU(4YibG2l3bI zIka+A>K@t2$Hz`z4JdBA0a0z-f|QdC$XTR945Z$h0P?UK(=8c?iAhs^URLghpy7eNLJ2BfSIpre`M6mI^ zl~eDYOGYbB)WJOuN=xOz1xJulr0xPDoE%JCikd1i(?;*I8<}w0@>))nZyhR?QkLO; zyCS@QWtcnA;NSs!_>nf>nf7lAa3`~{Mt;y0oBH7ENwljG=+1dI(Q-S-MfKPV@#<{@ zOA(WX8!;h5?t35cYV$KKX>n6F@+K%ltN5Na7@O%-+%BCLjiqj|v3&z>ONE)&%9PA@ zUNQ{66XNUeKyo`Jr8zFlCS_3vf16Z57!E%g<%i}mQFazNoSJ89@0dHHC#lCH%W*Kl z`;b-5amlLVBG?@?X`-qHC4>xIV`$2x(5QSHt4q%ZHBP3b*&2wwG1y;5ul;(m?}+_F zNy)%F+{&rMsCGLQ&_zCm;y|7vs)l=wak2OX_g1EJn7J8qj2+Cmzt39N!1Fv;n=wyp zM(P!4R`(MEH1Iowd%;ryGOLDdb$wfm%vRA$Y681dr=v>{F<$iK*r-Uo1 zt4&LVH*)*?~tgq<*@4hIT$hCdGscq?YxuuJ!|>adxegM;)AJy$L__~eLa*F zOVOSxpu!_S1Jw%)rNAx`dqTb9*!8%Rk9t&e-Szy|FX`bLmZls3RP5A;`UCBU>e?Xo z6a$8?!id-g)tfP5ig@9*`33K>ozjhKf|acowFjZa7_bIhzWz{ISQ(R#J!nsy6&3RNW z+u~S!vMeW0p?Utj!>xw`SibI$6H73&A)-Z4{jeSNNAu<;xob%NMb+VQ35M49-jTFD zFDw|jZxfRAAHV6nT>`r2!P-%j2aBSKH0uWOL2I_4a|)8}W}J`64mh zC#m5z$KQ&HEUI|K*g+`qAdvFT;W*d7Rf?wX9=egVO+$5e-_F!&h3((UWR7o|91seM zUXAO1^0?9p#e;e`+BCd}*d1XnKa9zr9Hp0acQXv*K3&f^kg)&w)qMrZ#m(N1kf1e> zP#XI*K`^@UfD2}~3R={)oT5KtTGPK)C*vk`s!%HO>&3{Wsf5>x`G=Htwp_WxG>>6# zJThDBv4F$FBGH2N+5-6v{ZoDgV=?(($Bo(rB+YBa@^mjf731deb!_R0!g6cLj!j7sG>DAKX7BWggSBOlIHF*Pa9X;K%ZdPT!M5wjTK-qRP-e%$h^XUfr zSfA%k*^-jMHyD+Dln!zv<>-{qS_jD^g14>GPh~(pi>mp){Agk77y$lF_el%wAH1dA zz4JJ2nR2Nb!^7B3ahx(8j-*Lv`PjFN&huyZeFipvieHG z-F0{bL(!-DGNp4G5w-=4T%loV-VYWFZ3y2XmeqomDVcLHjS7?fZS}6(PfjIRE$w;A zvMoT9gHW;vC(hu9TP`aT&OAYLH#ioL>v5-(@7;@?67D-V_FmV&(Kz{+NStmDUN!@VU%DP*zw!H_p8 zXUa%rmxNpGjOAYs5s3w&%m@O2mshslyqEHL%?V%wY^AzsrlSQs`ycp6D3DEzS&=E z^nYV506cE*Jh;>K_v9$ZdxK>5PhfJrfNBJtox>eP;Y&xY-*Og_iUwN}j%I@~twp%y> zEmapY5=RpniV~mqxa&c%;Zih(Fy)4tnZYQdY1!51421Eo4pD`e7H$mVH&p zN1MNaOaA?S#Q$HN$iL(FzyDw1m!uAgh^zA`X(BH48PzPj($(}3uad-)=Yw17U*0h) z@c@c{1;2sea2H95TN?{C!M66)#)}L{lztqfu=DpKgg8VK4>(m~5$4uKcDbgL*x7#7QV2kZ+Hk=4bd=dTLHf(0 zXGOx!xDumyEllk*Xu~>&hvL_g416L<-1)9KE#D!UhjU}M1Df;G2X=gb7`aMzAr|BF zY1;ia92la~e$#6(2W!{1H4lok>Kjq(w&jT8qa}Rf)mt5njiW+zx1nRqbFI|jIzTG8 zKz~Y;uB?a)8#NL5Hhjy!CaiHj@!*Fa^I7u&KBeEt_9QWvqG~uyJAV&%`{qKDIiSN| z76%3J+Z*Lrd;#1UE67;JcK~&K-kTUk@w8RyltGVwi5NR8HqD>`72uqv6`yI#p()bN zv6Izskzkd`*gUxd5b%s4lpeg^4A6J_e216}Fb-RRZXWRw>V89MnM1$3{+D%*7R+Ip-C)%?{1-^TBcWrn2K~D0xcwDSS3VmX9p-G_&me;MP~ zmciA#p04W#bFP0Cc;66F%keh)J?j-Sfp3q_vd9pJ!z|C04_O*lh=isy+l-Fylw5uG zfK)4!-Tqmp!r?-|t>c6m1TS(g<>IvX1=mSAqI+E7oTL-g^h+?`yHoU!cDgsNsBiNr z(aj71{+MAvd~gjX;pe-pCT#chHCjD zqz-36Z8v-2ss);{aY)5Xrz#1x-DlCwidT5&nuTsAg>J6$>PYpKCp1a2D-M)O`QIK6 z(u3T}kf0!l!_DaHT7tylr0jGpqXX5hwle#UbjjV2ju+k~a3lKB4Wa#K`a1-!A;RF$ zucVtOttTxucYV@!opRCdlk|P|JdS=3r@WU9(eQCP=bqIwKqLi`Dn#WT=^*669H(=aB@L9Vt+kmuMIRcg>qlDOVP%coakd}DiG=kw z4ya(ZJKYPU$LI1m553#3Xjs1YroPf{JtPFhI4?RqE$V&OfTjEkg=ww^ItJiNuPACKk|MtT!EMZ2O*dI26yU3z{!T}MXa;YH@x znrAJe(9N&tMQ|55j&^kji?^Wcn|=}Y;xocbaKrKQ)9Jj_a8`y@B$X8pTi7H%nF@JW zMPvyRsO&#H;_;QfzJOhR|K=go>+(x=c?cXY9m9~L%$SLkv_ivreRdD&P4FwtVQ<#M zm)|6p%3b*msXRcM*m0hcNWA7DM5!~R#N*-om%+@Ew4;SXiW6fkI|h_S&kzhEoeuLC z#I)=QJ;x^fl+!Prp9$Z{Oi_e~X*k{{DYR$E)B~IR7v*Z6s;Y|Gt~+v4Z%Z2NXsCUe ze-Bb8{{Gepv6*pf&-#y!{c^^d`AOFGtl>%EbC|W&`!DkJ{J<{?U7g_unw4sN{N37HP1jSAkWGmWvJ>e=I$oRDWk#tk9HZ=&{{UFZNnv9aZ;SgO zjVsk5V=JL-uesGM`!j@zlX0L#;GS*dC$4Q9KHcX=sH_U-9orkSZI9H-85Rx+qZ38% zz(q^1fHxxh#VqPOM6~f3QYORn6f{`64J8r1>qKk)6Ss#N z3$m(Yn7?9!u$7o5M9e8!DEGXi=@1Sw>l-`f+VCAR5zTwn^whVa4jSw6wxf-X0Gv8c?xm-$Ft`U2 zr=1o172)g1Y?tL}ePD!`z zN;Dn1&ARN%AA4^!`tBP}^UkUL{;2FDoZQ;NQ!{bqCpmuJkNYiOU)m4&;OvL$i z-;@xQCq*bFVUI6m50)O7hj<))*!Tqkxf?%PR|}N$z~>=Yy%|o7fwG{l0`YsZR_UhX 
z8LA4xo`*R-Lm#K$?&sEK?_-Zgw3zJxE)=_n8A+L3+MM37>t&(LDsF>PGC_B!Vv)-- zvnR!7iRj?C(2gMPh-*l#(w!{nWT(dVePY48+*ZhOON(vyC$osQ&1&#K4fAKn{e3F9 z9WGl->0XWkWQ?Ogd52tSS^Lt*ygNMMS9`=QSsK#fW>3PN@FAHg4_?ybVXRpEX&)1l z0jW|`Ef2nih4WnaZ4GNqRX7*th#l=4+fRt5hPo~aG1zIx8p=MNZ_5zxuQ$p6cI+Pc z;hadwBfjxh)us=k-WHBXe{J`XitjA3_^~k3wRRCxL06J5a|C_ErOZ_ROUX#)%Hiv; zul3dUE{;hWwx-CjUAuGF+Sj86bixcm4-hR<@kPgmTQ>C4E^Er~wLM{7nm}Cq)Ve4v zB{t=2>81y{Ex|)^ATGawYGkkw8&mNcwGiW>W4CQJqMnA_h`I0(deS8~qLCLVOUlTu z?TMnnvgR#k0}dFBr1GM&%3`7Zr9Vp_NK;=*Wp_{*~A`&F8` za*Em>@wRxx_Db%(@OJtObL7WzBJ9B$)ntqV$~`+YRGU<0eOXT+`=EGRq4C>?#xI{~ z7{|WL8#yZ!aKzN>zH|{kQ;&3T{Xzaw%~oIe1QYzHQE|n#imQd!Jb9FC#cM)rj_y2l zMF1kfBHP_~92EH_(KKo2T$%*Q#s_}6sxcN{oE`PS=!#X2#e)>r0O1ka{bmy-H?lxV z<~_||*PFr;eh9hz@7dPH(c&Qtz+t?=7?9xIMD0WMjK(xeL$%_Ry9;8e_K|qNEUp6H z9GMaWfUNW-jK=Re_1RL=j8y!90Ohbl^G=M&9-8UDo^ka%1nadZAx1`emw?ZXXhoF67B&bC9oB%( zUf#Kp!J%6j-_SL-aL0f3q{Q=QK3z?WLuRTiB8#DA!PiCA9~U!8)WnQ!eyz2uXJVLb zu9pO_FH@u~7sg)JnP!d7Wk@8R;JS)hd`0!oGXpYSMb%auClbfO(C~>fj6p3njQBk^ zD`nGg-fTxbV%pmK6C=Bs52U<&M8^-=lSZwFMFPo#A*CYHX4l zR5a65o+5M7mc8z=R@`%?4fq}!D-kH&L}XXn0BPG!oJif-co2D%WESke%6Yxgn#b%G za@rl20L$?W3q!jD$L?0!^9_ZfhcLFo?zU%$(xLS_gs!Fyi=ht}p}CWHr8TkMJ-)fXa6^-7?bGpv;zisx*M^{9 zsvTHluku?=a)ZE#%0Hx$B5G@kfNMPhbo0xicfE>y?BAFhZS?UQeZEA#lmlc@oG@QF zK2}TBryNRBpC;FM^J7r`XLxO!{clr=VPlGCStosGB9Y4Ya8xhhvR#i+lrYZwHb1jR z@pT7S1aQ%i24mxuDOR;}@Lej@A61?bB6+Q;(nx!2Mq>T5FAUI;$v&?hPxu5;5v~0- z9C@35t!oV1tKxs!?Sg%`tPUE*j*1s5+)!xHg5Z#c$|qSx=bD%T$x zS+2TJ2|$41X#+gdEY)klAUlLGgCxc!u)r?Mqfzq*}oW-vuC zR$8NO?+Y5>1UIJU_IegWXk!{1rG<{bz}EAp6X_1cX=S#p{=TR4*XV5|bjls|c8y00SIOghaMVD0CWKjn?n$s&-gF&dRhM<*d;NN92o#emqj zSg)GZB^^z>@Zk#>Cnl6mo9=Z;^4(ZmfLU!@u4LIw*Utv=R^ zU0&n}k0ZXjBgEGmTnL%}JUonM1?CV6!3gG}-EgLF@g2ynD$`Tbd^nT&+8fMH@QFAV zFl+QAY#+!=Uek+zq~%DfK`j2FkHid&bfBMt&5)!iUtj9{rdtTn*gU$v(X22shKDtA zD0)oEbIT|QpXx-zmrHJ}zkkmL0lfNSCDfyf6`r&Ic0Dg^0}%U4D3I*QNQx1uxc3?8 zK7>ux)iU4oE@kFguzQ?-dU$fg@)Wo2YrAW%WK$?)ix~gDaB@d31ns7wvr!W z6=_FG4?P9iO}R<2#0?RwF`#jWq9hm-XlSM81s~<<1F7D`jyCgM2~95xAiQOmQo>h)QY2_I{$t|og~!cMJctMiu>TFpQ<%B>~mY@kaC z!&5K2bBc#vZ0&Un>^=QD;n>W}BS%T%%M0lw%#_=}=<-6=bWL(=X*}&jVQrV&UX8N$ z1&#Wr(_eM3sC5Bx2MmNQnl$xp8~(!$*mhLE9Zv)0Zb|jK!}YmO&OH6h^-X`~;^#Xr zE{G*eSx%r}Bv=^DI!C*hVt=k`uwp)GLZGd%K;?{PD!2M-4?k1+Z%K21$78mA?n7_o z0R@95P!a}+yerNTW3dq0v$FH(7ltGSM)YVKi;K{JX1OTAt@pnQ_|QXFVHZQOG#JNz zdIpWA8Bn*O-iuJ2_Y&XN>Xr;Hcu2>s7~Gcxu4eafG}U3N$D1U< zv0XsWRpknF?GPIHQ-;V>eSFm8SdrKJX?pCQQC5}gORQ8=t}f2+kYVlh9Po%lkQ?kI zdQ0q~I6tRU4IM6b9^Q7sc9?@4jk&3V$P}7}#fblK>+q(j8boU-cC?v2U`P|^awpUH zUBkQK5vM@Q>%|={SfGX*d1AX)Vcl<$Z7imGi1#V%lpE~}>)h>S#4Sr0K_M)I ziFE5K0n0U;*3AIpU_z(i8zh^6<8j=mUNlp^b%m3iR&Q>CSsmYG8)xpUSdJJVr5|0` zJ$$eFRndp1owqsY!!bd!et~Lq*)KRAMl0B)(1_}WVE_hv_LzQ^?5CMK@@KAl2}RnV zMP3bX-Tev6ygp~X(fi$eii3zeS(tWX2z{s68sq-kKhdyLLDazPMB<>0@NpMo>>W*) zi5JYDD%uxdd6)dm=>^~(?I9w%Zf8*t9r3vq5kr&u%=wplZYTrq7&v&YU4MF-5{8z~ zSvN`|6+W-1kC{JQ(UCydJ0I{$4RvRz>c@>1rEq9=e5yBdETUYSMQ!1%LfPgSnMPu9 z6o}RU)5f5{YHk?Z6+Kejrj9Px+B17w@SDQ*W301`N2RFBB^%XnnkM16)*ycI-0=A@ zD!E*0e)G8sShKAC@H4pnG;sX?gDC?4#Xn>I?=g3VcLEcWuSo4_1PjHKGvmv~P9DvUUB!mNAmN!Y#lNkw8@$TejO=2B_zbi0Y&r8alP_4ojg%SnN; zem4*t%eKoB)|U-2E!WC$?pgt3?emn+P~TN9RtDQ3_4w?as2F$LMUqtOmr(|Npw5r} z_@wJ(y((21)icRA>Io0qQZvNdGg{>RdQt27lIm-~z5LpRqsVkL7mS6yqM-2}Jd~Wq-sp3j7oxlY64NeLXX0?BXm)a__b8VX+RA zlV3-!z(eoat>dvHQDi@2MoT75t9QtdF+XlAKTxnFG5!9Q|Ek@h9_0M1{tEixk(+B> zRz>ptOC9WTPFj)Am@NM*3yp)y8Da1l_{^)4M+_y#Cmd_zu1i|M3WzodX*tO_qrR#h z)wEmNjHeF@cYZTM;Y9+pio{7DQScd95;C0>NA9G+iOFB;;XC>%C2gF|$Jf%O`MSP7 zg(SRHihuf4pcAcobP7R?Kz%|~YV9Wt0xPJ4%S_{|^mNxc>KTG$P}La=?Went?a|&V 
zR+d9(Rsfg;;h5zgQY7_s@~;%8rB6yXCThSYA8un=Dt(#HScD*f;$s8Al|EiR z^}`I&VVuokN%)q53&j>ej{u^!jf+p(;bmbl#WcRWIuHLta~7C0Ph(H~PXVPJ21*BMa-OrV0(q9u0Ii!|8o;#U^Gv=z_Brxr>}QX&JVh)T$O;ZXFTOi8T< zABp1)=<9iU4b#|)ZRYQBMR&UGxf8rvrT6x_2ib~w@iKQo_N>hkMk}#sq$SU4ym-WI zNz(dSyuY}Zz}1dZviK5FM)RaZ#So$$TRt4Vj1u-qx^uzCMDo*H-z2A|$;j3#0q-Br z3Ilk_spLZedeoL2MaSfrDwCfROk7yGvbLq5ePpRb9*>`>p(A`e9D5+ow(Of79 zQWTIOQoISGZWj(MCGT)cIWiMB`Gyj&Z+5Qk&Ji|#rnw9M6+$9E@FEiHNd-=75?Bwi zsrQ+xfh)jr!23vcXAkx8&LSHb=9v;6FwCz!EHWzH62J6A&E@*yg7JNRrw_`)zOCXuB zcNT+tBteOcTL6~c%RySMX1gek9Y;Dg&N&SU?-YM>(;^FlI6`*?Rst*_%>qUOq4D%r zG+0L*gURS+Fm*ZeQbmx&%U4nF>B^LWDj*dzM}(za=yW8BHfilACUuGNXl80$*${S& z)uHI?p+W{!cM~r+lQ+x@dkv4gol<(G8LlA$*+jHps+Jd82$v%~rk!ryKuMLzIwP~vCc$|V|V3M)(+xDDopRQsp3o?Q}tlx<^Q@_Dr@o}8~b-LuheZZ868lf90?zGG^6b9duGr^!QAaoK7~ zWgW89xgW^hKR2Pg1gi!;@XziPog7N?D16g3dt%3ZP!u^hrj`S>T>v?YAv!nRf>^oO z2nDI^bNJAapl-blu3O195^F!2Sjfx3N9j9+e07+d(-S_^o8ung@1S;fi6j>v*A_O^ zFy8`<$k-{>PQ;nm3foy2!&-iAAA(ALFNTsmDN&-QbkII^d+8C+s}C=Hc3Wq~4y$ba zx4`xPHizo^F+xcjhM6WzjOmD`-OrOxe}s@TFBb7jHGJ2`4%}+jy7{k{va`WNP@U+? zuvWyV!Zd4?oWfd{n|c#@H_WLbg2k~qrQ48Ty}zgX`l&Bv&$eX=&Fq0?J~G!=2c{0} z5Se1LRy$3iLa@M-w#I$^o@cW_Qq|^uMDw#(2@~kYlv5Rb?C(tY5GKjG-;$4fxL-Hs zDbu<6YU?BG;04Q>93U7yk0%!>k06f`4I+@2@8mKski$z8)E#V$Sf_>Z(D+R3O`P!U zDQE!h6VQlxt#>6nmMAQHBDoyfQ1hdy#Ja*r+MEFAPzrE1Y_FfoI6|BWl+UJ;Ms>?i zGcB+Mx@Tlwae|k^TX6z3Ko`;QiL#8LxVq}>uaNx2nx;$BwFg~&PoJO;EjD6;u*O9K z7hLbGDxaUXKqF+C+HR*g|EAuE>cV($Cp1}&Qmd?wSKA+beiGz{&kVHJ&*{2}b92n{wdbgvpEJPl!9b!1JFF+k~WQP^d@Gn#JfkT4C(HV0RB20tJ zLmC%5B#>sF@$R&xSe)~N4bV_3A(TWiJR%Vi_1Z7>fr@|hZ9@e-baoQ{ZZ9HJQr4z9prD@iB&QWf7W2x>5ZlRfTxB)N;Tqw-=eMQeH%m$GczFbf1Q;#s{ z{R-Y!iAQFDtc~@km2FW@G@HjZ|4-9r!?hxh?63w{nP%SX2hd3}sQh=dlm78@vZ@nG z1bytAM8p{TNvM3cY-fuxu5xFfAaW5Wd4nK~Ub1B2r@JMkBE$yMb z9AMf~V4%%9s__U}q$B-RBR5hkbHq(TA@GyIFS8wvP zCCI%jHuxs1^tnfNZaeZx*eXf;88j&@41UvNG#hNg`(ST-yiq+4zgjI%FSyOtG$uWf z0&>F^-ql1t_N}_9o`n{o_mo5Z9g4D^wbxAn|8uK?f<5h`ygcD8-d2MbHU!oL{=-e- z&#nsp4WF6BKDONE8ynx(#sBhRZ-9e?`_4T??Vf#&auf`e3n@oJy+B$Mmk85%Y?P$E zl;xQ)q3nKJ;QKu;WqRId->AJzlV!pYjT-JLr8b?d(_DK~5_oGt^h z#ePaD{%RcHe=}+5ywJsEGiFB4Nj zG|UC#32wW)ETf~{>Z@Y$x7XsWEJJ*v$m;Y|ntA;k47x_)@Gj-Gn%ashtG6eD(^1E{ zwkoM`jN^mEis<~^?+|(LCpM$bvcZa+&=>}A+*|~cs=J7%FFpAVVVPG0{lnspaaZ+0 z2LyFf_=khUwLEl-G@~zrcLtAJ-0{*dicyTSh%}lu|Kxr@cf0>SaF7GzOl&7OZ{mGq z92@~yYp!`}tS$~E_XyUW$PmGhvwzrbLdhs>?S=0U>tk{1?|;Cxv7dRshF~4~VM$oh zQwLJ>;bsoMkbc^2JdVf5VwIA-)G2Jxe>208p;O`rjJHy$N6tZgP|9#qqpX>7^C%fp_*|If-JI-0gJSy|#ZT8~XkP&cSrUPEGaF?k zs4i?gr|7iWv6fZTRty|v+uNvW*z+)4;`oIEuEs0SRnGWoU`pm@ZpTVwB}WpWiyZjd zMLrfD<#QI#^`F0Z*Rr@ffd?mq6NcZ@4luZM9IGPQ_*2k)bQ{p{O?6Y-} ziYBUsN=`OjA+f|05MvwUXcDZ2ZQW?CBoNEKl)z;qFO)Sm8mYY3l$I|#?R9>2dyMK~ ziIWE3EW>_`0~kqZ-ny=KCJ}fp6>-A6mOMWxNJ|TK_1sD@?hqg8dLTF3y3l9P6!;%^?2kM4tNrkAUSpN-eYzOzrc^m639*Jj4P}BQ>2N;# z4V+o%J2Fdp4@B6pT33&CS@AP>WCw%}9CSG6sajy~C=#&@wwqYWAQ)tWEnlYd!8pKz zPwx5-iT=pCvPi;^V@L`eh}~U;9OlyLLAlvZW5YBA+@x`lUHStz9yz|U&oz0w&c_bNAE7W2GEj$5!xDRkrcrj2{icI% zc!@U}cXA1mFk?DsV*KKjZv3s?Pnu~d05K}N*V~vkkX~$WAttwfbojXVC2r~6){eDRPG1xGC&y{?YYGhEF*0bt{v|x6fyjV$T>g0o z1KE8MxV6u)rRmuR-JGXCJ&#lj(e10;e^DZU;}ZK5hJ@%}MKgpsFpyJY^kCOcBNafX zwN+poHJ~Y%EP(t|x1$;4GyBVc>*58bm6<=sz0g0$y*P@G$DhKPKm4=sPw~v(;5You zW6rX6VTeTBUkAQl1+>a_*x$Gi^e?aWB<%37&I+er9|6My=wGhA81UYIxj-SvnqQq2 zU`)zi9|8ZuP5aAX0dCqa&Vb)@&wg=O{4cp@IbjTTC>3_;O!6#=`$+dK4(HLx!`jTr zerL0)r3KbDe!WaC>d%v1=;;cXKJF+%AM3$^=kQNeC#TjS;dh|SIS?lw<&F#vN>pYC z0nRj^?8`_!cu2Y|{ebE8%c|ykua>x>*LHYSZfa9?Y1`}Q;p>m#E*>L^ANpyUuD3b0 zq>0nUD>kOi8V__{th62`a|9*wDb-%pkw;-DX04+JF-ZArpN^A{KD=s}hjrFJmUGm!?wrTGgO zV~RAfc=1YO>|nGgruvHr(KdDpj%YUn0i6v)4t 
zD+U)BtduOOYuoTF+4XXA3Q(>(STjNIHWGw}6{C|NiS@<$6RoI#U!A#zXpniX1wvb3 z9Dv6^u>1ccayM5riu(-PCOYMeI;Ga7hR5KgC5wDS>QrvjXV5E%!*}@=xUS@-&AqvS z?I*u{0a1?#<9OHpyn~9AN3EGsk1@Je_3hE25<&nTx~pG)o+1z~LRg1>!jdlyVBXhI z?!I{5jlQ)%z;Tu@uMOfe0uq)`R7a$7b*ljD$JsQ@Dpg+frwnHXPJH#3>Th^}gQ|w= z$$M_lqQGgNt0o^0k6Qxi%CGW)KjkHVtk^%*rhnumeT|NQ%fD)Nyo|vu^i$Okr-;C5g;#ZTL%TlG|Zi(f~oEZ{luiB}p=jF8tr=FtitfPlj z)_IJ^k)54yA{BQG9Pz=lxXC5h7`bXZ(2ZY~yXgJ!^|jMy-gO3#S8j#jJ$Rk9l!)E& z1Ud@>ZaTEMr?KiGsa7=DdKc!@h3L4vXI!JdlkFqpQd^by+<8N!vF1K^y1i4~*Vi>k z*CJKKt< zv?x!EhSp8e;Zx`q*`8AWC!P&;fuqOV4P#{-^e*srLhGSKJPJm;Km5ele`lf1#p$NocV|X39v-QV)#qY4k`u6ly@a^BWuFD; z9Oe4K&u^kco$fs7*t(c$2ctb+mCb>b=Ppo=!+<4X4_*5vnlcDMf^OL_THt@QHU6Uw z`9~}0=TOTZ?X$n?eg3Na`S(ZI{37hsGh8WsqV--cQYYg2!zsc3^_~5JtUpq&l5=|g z0D$^s>+&p?o{DvpH=bB5W(fLR?bDMn>@6G&(7e^P%YnS-Bg=4by#pRD;q;J_*rfL1 zMO}_rddji$Ql#DDmUnoSm_#2v4I0kzOxjc+El*7hFF*-Js^la(Mq&=mG;Y`COu%rZ zMprcpkGXX5#78jS9u0|b`YanN^FC8UBl1{2_9mPdgiq;_l zVNeCzGq3s=uOBHc@D2JJKXC@-FuE|V)K1D2xV8W{RPh*bB)fi;yV}2o3z-zLdUpAF zPG6DZqgZNAs)YXR;NvvNChnIV;XgpZ|CeA?w0+5z%)gdF{0-oliyljgqga{5QCWWA z$G;0aP$jF+e}{1V954O}HW2~I|JmOMi-YihMPF+;H&M6_7i;on_KJ0rHjjr6+I);8 z#qrPp+avI@PhROzY&c-C_!@eOQmu{F6?SB5`L7}}alh)7{4MMRb;2hQM%k}b{MAhw zOyCX6?!H(kIrzj`vtzVKy6rB4n2Ka3V&S0odigq;q_u}co*gdo0t*haZexi zCia(5w{Z3^&^0&>jAG)0<10}Id{on6(;HzL_HKd`kyQPGeMzNr`su>7uO%v!PU;=N zbP1sN-6f8-s~wA%ly6nNg_y-Tg=fFwRsM3i`^?%6osHXdn#W6U0Oa}sq{dOIv;?Oa z&j5?y238pE3}(`y`L$#jy&`deTHMG3Q71ooZjT+EbQM<fU&y#PI<5DH9*d>gAlwm&b#qOAx($k=E%f;O09?C)3 zn??KPW5aNvqcDmN?G&aiL!h#*wmSXMT{)-+f6j&VPn4_uUtx+DNd9U;BjsqX1_d~8nq8fUg| zAQ!|_LXhMbLi5PdA;4MZhH8gW@4ol8xGVLs=8Y8VsTN6%=%7yldY%HCvZO>j>N#K` ztdk~FNb^%qtEKVo)&2r05lD5Z9ytrf?Pe2hf>pHp%1Fd@Y8E4mz-F=SB&f9`Pl?W5f(!U`+kwY>32w$U4T~Z@5G3iSIZe}ryD(*!2CkX)vHT4NI8@r_iE_l z3aZd-AdPNN($z!RFHMU_!2Bp<;PCDdBp6{G;zDC&J>Vm|qBCBfdjov~VXgFrgVQ*zqzz7m z*a7yQKr^ZhvjV&sxZn$XFh=MA)B5k?QcC*wo9NK{*M8DXNToTFZwxMBSZpzTxVmbR zBK2rqmqa$*b$vG$P+_JLPNG{BTwr7+kRZ%z7NMT~_8ECU3LgC(|5&5&cir%R2qswH zqN|@O_E7iIP8;zb5kYzcD!;p=`-ZM9EC64~l1pH`2U+2$g`7c{N^ip5H`OOury$#& zPyv9~nMphI9r6ma?0ZT5=<7Y3#}I%nNmg*iZS8ep{`O=3?oZQrf6M1Kl3bZKe!{}( z$@TcrGw+t;m7D!}`o^z(*Pl&;rEGz;jLf@Umd;Nd6B+bB9|w zc@2_lwCZJGA*dwFuQ(cS0mJlIKchu0juD)+W%T<j$t~6_n=4CN;4j|oRk0)`Tp*TVErdkS(#S>A}8sDuvW5QiB}_TM4;4ZdcJVc;8;Br`_aq7ERlimZBkN@SRt;7jO%9gG-X$X3Sh z47ku9C}ltFM{GiWS_H)0+5PFL=fhuSXuNR@!;(1c!o30XT0%GW?5`&y{?0i0TOAUZ zzoN$!7^ap43_U;rzQ#TU?nlq&QFJJV%og_@qT{j*zlq&Eiikv$A@$!OA7z8+QSi@$ z3=#CF18_Q59`oF9-ef;EB5Vnh!w;VTUJ|rbu#t?=oh3gHJeFU8hMU)fVfy31C(x>)cwr42izX}!TzYZ0b3~2lYjMKve1NyC3-y!`W zl^{-+d()m&H{+-Ro@VsDp#hp;A^+Xe82V4<8~zbZexBa_5l#MxCVyEEzX%?GL=)iS z^-rwc|I?#ML1Az5C#mfTJ}d!pe^G{-Cq`fI#pwL?H15Cr^k);g|IYWARz3bn9)Xq#m2mH*EmAV_M7yM`aoQdT4Pji&laTJ+2T53sa+!h}6 zFmgFu7Js-0ZmN2OV9AHRVYYO*AbxuP;10pQ57OWMgM|G*De%Y=?~>osUosSdvgOn~ zii4s}@YgZ!qbahENQknImi4%a_IS%OC7ebpDIRCKCOt5`sM-bNB5f5YrTsZy#5(yaya7NJAU%iXRu3Cfz2Wp)Rl z-TFGM{WIc9GR$jy47Lf>1X$^z8FsW1_jXQ4>Fs1KuQ0E(U_0FjqyRN|) z>}Dj}%M?H2v9T70NR6v3RuIQ=@I}Ns!>ZfQlO4{zmb||+W-eMAs8X+v(3~mSdGjsJ zTi&rMCh1pRM_ae$+iEu5xwT7QO3s%4Pe0`N6NY`n2Ui_=dB@$FpOOpZ?WYZzVbFPl7T=jz6;pK!&a+6<&W<+`aW~LEx6|Ru&OM(c^gIz2 z&;J*y^k)7Qq-Gk zPy2p{L>`^zf}MWMVp7nR_w2y~;T_lG1D^)P2VJB|0w1Rgc%W?wvrnQaiSx)6!%_cn z%Tr1tZ|?FP>oe4JB`T0j=xLS9qcf))B-BZv&zFn@~b0&d(Wv8U}@6&_q z-ZWAV5aT}K5pb_@FCBw+=aPJroEBv~l(ENc+gDxD-AkkQIUyB}lG~Mxs-;~a+WUdy zR2T|$rC5%ZL^eth{o9|AM74=eZII_!%7%}^7l@bM?yL=mn8e3qPs>)=pEyokZW2ju0^jbat}Qh+bXqLBN91j@}VS>#3qM zZZOM= zy-Vu=hm_bixL(I0%=n;^x%G}K@g}tT=CZKHkrQVPG<{0?5ufVx=miz`H4vabsVe_D zO6C>g9rfa1QTH)=)}lJBZhSr!gKBBr zUHY&@K4iesFeminjflt09XdgXD3l`oG5nethtF8+IpitI&=r%Y+0WJe73E?Hp9-T5 
zL!AW_ILaKI_n)24V`;n&bde_LU~Cm@n46X$CDPQsWw@5!n2_QUv#YVc@KSkOsWzrf%`LsETpN}|-RlZ7irlG}Clg!eyw2w_ezo3T5Kt$@7=8$g$x zC#Z&*emxbAt}IghOzwEn<>yt>iK846Y)A8%PWvlNWyIejr|nOF;rz5JrOMqH&8wS>nzJNR=E8Wx+}hfNof|AzKm&4C7)xFT{i*gv zX1r=}k>`g~ljmKIKB=oluj$u3xq8yP*E}oQ!++$>)qa*nu(AW)Xhy?DsQs@0AA8>& z)O5S8OGl(iN06XYsY>s`LK6|OP=zSH2ndL@Kqvyz1q1{HRI1W@mC&owi`05zVBMkde-y6Db}afvbu5vD_dP*Qa}Bb z?lVM}g|7@Dc-}S)(z6tVNW`2$Rv;ZxAmqrNhA;Zw(8Ct^#5k}w>Xp)|8HZp_T^{mr zFDTL;Uhm`2_{h8k>w>TlC|6Y0Ytd2B$3MIR?U^NCaaAs=yWFFc?cUWI;Zf!(A8%77 zW`S)JX@b9#1f~eLknsQ*Vv=QAXJF2XzRAG>`aXE(QlxTeXZPRt)cUUpQ&s?3F}9mn z*rw^nhE2j0y(U~<`JI@^pSj)U9BqEH!t6+2KUNEvv>pb`9Fs!;8pa+uG8V)O%)__uQ+!cgoh{Skm}+-~!klvfvt0cP`fnI1a#+NbxA<3)XemZ-qZJvx~pu z5JIE%$FG*XMqBOhDG;E`)B)%}Ip=}^hJtPcz%Z4;Q1?azNEJ*f@JMC&Hs2*#wc5S) z4Cjl}4H}x~IfCOvL}L#KT!1$koKpQZG&cHi`2!KOAW+huqI z(kSF&uqgP~y$EDHtyNnnJiNcUCQyw+jbfzwIa8zUbcp6Y5PQx9d%&#Oj-dTpdKhI; z!2+{q6x@Y7KV34)#lq#>fhdm6J#{?}?*YWdbVJ=%S}+xFYWFC3^yx%C`Eq=%;(J?Y za^L74b?_bFVFXF1J&-~Gjc%CQgw&nxp5m2oMJs$T7Lp|4K~zV)uPuXVz;id=r3H0? zuLk_fc*AFEtmUEuV2%R~tRnkIj zT3xZjXp>JE+GU+0WV~8W8b4DSYJIV7;>#kvgns9BktOq{IZ!dBiU4JY2%J(=c@sk0 zb`iIuHwkk}HT{(z@nfoILUHmfSo z(W!#(*6tS>9*OwXr(}@O1f$i1T`R!13<<#Wkmldzm6hqHD1mi?WH@31T(rqY0Br0g zadd!{21+wDaIaJ>!hj!@7Xx^5SWFnAuC6mdzh)e*fjv(|P6uBn2&4Dw0Kv6&_=**m z;Z#k-Zn|oGtz6{s^_EE5pD_fWatuovkc8Cfd=e7Mk1g8jxs&_;T4ES;Spc{XffMaH z?P3m7f^p?zta|HEZYs}OrUf#%hTblymiAs%@{r=K6lxM~x4EXrPL3Z=mP{%w?nn)J zdV`83fF)=r2IJ8^5*17?uI`P>mg!p8UJ4^&c^&M9;);{rH3%fo-giBmzzP%d>xlGN zz{<$8?2eCGV#FwKy!yS}*_drn#pzvrA)T7r9YGxR3GAl*;2P0gK)e(*Z;JxO5cDzh zJzT~;g_3fJB~OLPiN$o5-kfK1(e+9i=k9-^SfL_=-6XMNX2M#19p)0lpf{eO=x@7~ zN$2`-j9(Vquetr59ILF5RY7=!b3|I*-XvYcyh_Y-IrnwXBe2+V{yRt0!)jfVj$OTV zcuwLQ%}bbuT}HhAEa2zSB$F4W7KnS2Cc2m-oDLJJZe$y(~!tSA9N$4G^OFHrV;9<`8&QPJnuT}gQ&lMRuJI2 z%2qW;N0##;(Sqcz0GgXX@t5Hhev*czjb_XCUwzwrdse+Qw&(QxP2Xj^5tTIx(M4Kt zHMlZrcUY4es>CmdQ7BIA4VH`-ZDj)cJG-5Js@r(0<9mlj)4O)VCX)=`3#Wp)z~`8; zabsYYTsEiz&U59G&fx5Uq?9~f%SbN@qr9ic>`y}tD1 z7hb#MXYA)U+BE!JmyJ5Vd3WVpC8o6rF75bUXsMkaDp-6s=5|%mJD+pKbuE<4UT>dy zketX#c!EpKBM9L-Ys5c1r+I1 zW{FxVLITAJ!pvHTbCkEI_UQBVNSXR|8%qYg{*?xcvl(o~=u;dX&*x`}&6-g#c(0I6 zf44?tkoEImV98Lwlr-Nbo(J4bR{6Feel%^aMBqBNh0z3$lUOmdZ=lBI+ZXT7X&+jA zwm28l+870!|FFM~TTp!ukNeH7FS4vmXVx6gQ$COj@*pF=WISXwyVVC^dSequI1i zhV7?Qj&#tK?})-8m>a{mRsNj@#uNcl;`$By!IaU=lRZ(3afLm_ORl%z;rA!@AJv+2 z9+r;w=c$Nq-bqUOiLFQgyFfyW@J3LYi37npJ$1Jt^S&?6MZ92`$BiKKb3^Z1s zi19{3-uNiXLzr`&@ z2ma^)jn2e(*f$J(dhNk4HrvF}8++F=*rOguZEoBedN|y7TMPx!nAdnXEE8%cDqRr&_9aciD$j zN9cgwf}@#tPcVtvn5f;Vm8Lp8{iB2};dgq;UE+GT(`u6VB;+56$Ax@k>9G`|fZYJ% zEibp0|C=is(Z0lp_F37&E~(e37979rf1q1)I58A1e&c$-oy|&HyDBF;`7ebaycU#x z`~axQePrIWU8--T3TJOF``jp4Ln+XKRj#;^)x*!%Fq+k)bg?c!KwRJ*LLO6Ak#>*L zf4<&Lw9OL|3#NuGXJA1W+E_MoR59S#{g*+I?pH|&%UjgdhEy!~T98MSsx>v2O$0c4 zU&SqF{+d8-bP&PIrUW*8H-J&@I$F+5yj-5)x~vYbePZaBY#>4>MDbmVYE_J~@*{H- zRAD8N8aE}H($OX`ak5q?E~3cwV&Touh7;^+misc3T--{`$TUfmpoktWehFsvYtSFB zle~(TS|qlzA$JjS5@+NA=7Nq4@g}_ zM4J>rz7nIRCcx$z zyj-}H>N5qZUp;i@6QTd|JqgqFq}Ju!FtZ3iz5w(Lqgoj;DaiUL49o(Eq%#j4&PcF?V_1*RLlVt@N^J5o;gcCN5%;$0W zdAt<=;LP_cOP2W0q5Hzd9ZwJapC%oC#SAIB1y6!1sT@h|D~|6pTqPW)XB&OL<_tE z0A25>C4tJesaj@s=YZ6N_mox(mew8eCfhdz1~=F4v(WLrsR)JJf#a>O;NW;6zuu9a zQ=?*6P4?PJ`O~J<*ddLkT3M~b>jTo{c9J@JTKCLSAW4v#^sb8NKgjshwS&;Q?W#=^ z@<04-;M3~85+2wfjW-GUk2M|79X(@tF+6yWjNv0_0uoLVB52}X%YfQ7MJAv+&Hmbr zsy+blDUzb5^gbhGfum|CJRx;z`+>TJ&tz40d6F9C=FJbLgP>~2ZaKUf8;xSThD#)| zs`%jFh&of}&Qjgvu%B%hPWP3(=cSo?;!ySnnLr8R7gCfI=0thcg0si;VV7`yC`EK) zmj{o_+;DlN#z2UbWjpl!V5Hm8`-!lwxc!YHj?>)TMI;O-CX$UQ>PpYOxG|a4;hB|J zOdo%<({Fgk+~m?&hnxx#bJy)*_`r$?TpH{EPOJ(3@dw$3Ra^wq8wG4~XQ1G7cg;aC 
z6w~$xncPrH;c%8KRsw5BY;%V@<;GkfC}*{AeJq)KVii>2!{()%)Ub9)`oTr@!O@qU z{Pf@r^f5B`+|O*wo`9a5SetVXHTHl3E;}p&a1z9ni*_L{0dDXuj||y5m()k=taME1 zE`6&n@Q|!lditi^;19C)8x@H#H^5^WLPZjGC2?Y4Ju+|7mMGj5dE5%j@5rA{_ndz2 z;O#Fk=YtOypE33?o31AAaV>*pYdVsgW?(gl#C?LPWVO2Q^r>fKGV4e7G!?**l|xkA zG>y+zeVOuC1g*v)qyI<@=^FGZT48pA;JBgzy4hmY6p1A*Eg82YyAvm3cMcSG_(}`(C<0C1sc^*{q;L$otCe90xP66{)2*ZelXn_6ywQ z8@L;C0r16jZQuTUVWFtVMQmz+S44z7ZOauIrau$z!apo1T)P@ zD&Q5K7P@J!OB8n;OR-%WS-Fp^=nFoNma6q#QozvUezY9;w@j8i^&Nj+VL;58Db|UY zrVhpnI+NsQ3TBwve#Ml2M(&Y@(dHI{;5406>ION^r@_xSTC`Fs4I9(&75-2h5QMjC-|A49lu?R~gq6^lzUmXIHCxhX&a`$BU|JT=uSJWgB&k z%}XEL8SaRwFCRKl?t^AauyyE$9(ZILzX&e6!ik?3!?8GtzNF!i7mCf#vXx*z%#dB- zKG}mfm?9l_eeukGyepzzZp zl??u8TrbFc1IdsJP4^3ma@Uz$WAj(R}?+8i4=6&MO527(;(m z3E;Ak-A9UbS@_?7KnAd{=pF-f#4KQ3=UFyzC*8b56s-914-BzCrOL_qSUu>vJ>h;5 zZ?Pu1`~0&1M&`r1D>1^-`#(mh@9S$+LBjta2a7TUBoypr(3Wd20ub~(%7qpU9KTwVy3;|mGEwq~ zT}Ftt{`1nZxs?&c360e>Yp3*~MDHtkci;lNA+&-*AweFf+&J)g^$P*=BRqZ+I->_( zA3K5ow48LJ}5dmW)fR^8KFx!04 zMY@Ex%8sn5aItYNN47XVb92wSZJ#Ep!+2GyLG*Ku3Qb!%TC@ic3v5%XpKv?&VBh=| zS>^AN@+~#>nC*N1M`yC=OSyC@Im!o+k=(sCAS1!4VWtf($vtWT+yI5EN%J`d(bAqrpBO5k91>AhCMXgBv?X&!cZ*dK+sH1x{Yt}- zRo!~Bqs0OKzryd=M>U0^Y8cCfJ|v8N%c9F%cYRp$;A7)q-X*JU_xVQwAD9D3&cO-< z)-e(vX<`OOwOgmwLe!8sZn5=$aetu!8>e`r*GG2MUXIUmE0gFx3SQ0vF+o+3iEMr@ zeZ3dQJ5YXZ%Z)Fdnwagav$-L!b-d}K>Hl~>mjUK6WORu%W^vG!{rK35t38e8*nInx zi{D^@-h__#toqL7Zn&HneRkt%#TbYgm(>+4HPKs!G_;s7uadm~#N6mow`Ka*gShHd z&!i7CN6dH;DB~#H$tcztecG)WF`VIqRZ6|G-=z3M;iewSUf&@_7 z9TAkIVaNr68GypDw2sdGyyY!6wZ(h8l;6HjsB`g#%RQwG)3fp0*#cK3cAK}c zkfuhVnrPS?cmtFRrFK?ZBW5lbw#N&$Z!*i`6=o`y)p_y#<6A&_A7O9+!f^rJVokQc zcE8i9IVI>kE+_1Lhrf_ynE%;;M&1iFdG!fni;PS~#lk4Jt)<7ypxVAyw3)Jmcj^5O z^*aSZ>Iqf&JRXe_G>mlqLb(fxn^=M9&g2*}-wrS=Pw{n=C2zjEY`D417gr)Cs2(sg z1*z#HupxSXf#TIAFheMhI8%JU5>~@{<1SwA$>5RFf#Q!>M=}RqH@pO&wJ3`McFH*b z%fYd$f{Z&Vq4i+_&cB@8Y3Ba7$fZ zr+bDj{-%fx{0#?T5I2bGG9?zAtP}I3Pn=2-=Bta1 zz90T89)FcI2&!2Hv&+NdVYIThZ+Szn)7Bxq|aV8Nitfzi{TyT$u_vf~M8awg?ax_a>&{COa5Q()sy6 z$S!+CLQc8>EPS-giSh?j!X7wc)i4#2#|MB}#KUQsm+{t%(r%&`(>{dh+3}dn3-zxE z96x2Q3vLbm!3p7fc@Vb5UxQmd0>=xw!W(#_ZE|ihP8MDo*ZRRj9Ju@R!wY70(^YsU z9CtMv(Ipeb&x;FyGYd|dcGa_((}#b3AJ=yC`I0SOVsE5y1yZ3&j&aDq%q%Yai8r!sip_O)oi7-h$wc&<33iBZYn`eRE71{%p_vq!lEb z)^v{TWZfa%ras66c0Z4vIj#U|c6neAif*Nv=*U7=(0b&>8 z_2-res4r&QVDYi}w9o>{Fa}UJ5lOW&Pm$3yw$n$@(jlqQD0VA+T?tuuUGQA6(1e zs3JFTSF?34<@M66LT~L373gU$1>rzroi12?Njka z(eswjbUYg|VP98VB!(TY+8XF(&XX@lx!&vb5j6g2hGqWt(Slx854VG_ZQWy2p=2X1 zwM@cKoGm`s!tVeZGZW0WUGaIdjjB@V^N49<}8@ZJWo6CP3^li65p6Oa_HEbM4V# z+%)-dQTLlBr1*Q+4~FZ`*)7t1XU;aWU6&^(s{6sZmv}MmISL-dw=Ymp}W@DB>7Df9w3y z{7X|3K#$)5$w(#vz}m_=LBC$qmm{$5K+^aMX9*;Y7S&Q|B)T#|0nJlp$>SW0w|CHy!J&pM4{_@ z-O{b4TJhS0Kqe_|LZvxdDWgRLn*@VC}9H6D$Xn84DKvfx^jCO;4L&u1y}; z@QD_78XJnf|E%+X@8uP^O`A9gfMh9y6+;dGsZrG0ITQG@k-M5y@(F$4b< zqi=$IdjrFA?S0jL-d%GtuWK3QI+ytBEv0018)C+%sfAy^X#XVmq{>10K8-#g&j^sQ z9O2HO*}fiFwQhW?JbXU-Yu082F_mylp=~JE++VA7A^B z8n`v&=I#3FqZ__{Rfkg3Sg`ngBXwP`5}u86)tSwM9(V|Xj-)aVOk=PgbIe>Bx7L@^ zos*Wz6nRI6bM+Ur;TG!`<)Nqp=oN`bM2I>q;6c;Z`AhZ|^7-AuPL#gK6$!7N0v)`; z;GHJhR8hVUOiq-C6<94|Gl>N%hNE>vpQ7Z`(=sa}Ep~Q?Jf2U)_7`RBvrEVfmS5Dl zt706&w0}?2h8WYT4UYYT%srmDr{TK4?2c*k-j42vUDqs$rt=$=pKMf3clOiqbisqo z;5QGu>vL$M_bNfH)cRno+h_@ZAqbv-_Ybn;LRI28&IoOUOl&8LpWeh-<-u%bUD$|)exxx^@3#A& zl-wJWb@<4xF=>>b!Twn|t zp#_L{V5Vij0Ifs}69sVcaEKceD9GNuge|1x!!~0Be$k@fyR&G^m9zre62b%Y26k~N zgZ>7v$z5COXL%J-*WKiwRr{ZXD$KjU8CsdkpsHZH6GORS^v9~Gdw8yFS)g!i|60|z zUJ~thKJOKA!3&6o2@>6v5-f9!C|}0QXStsjXTAgct@~hPSvhQH!~nSAse_R~1BV)& zzWhivA9DT}w5vUB!Px?GX=*w$1i&4ctPDGehQOc&NA+Wl?*wh)yI?Y?ImQ!%=w_ye 
zx$MOA8D}NdFLk^Nf6f1dt@lH~-o`hwARY*C4t3mytxiCJR=d^l2vIt$ZK0eECEArs z3pGJ!Z-fR50Hx?$Gylgic`WI9rTex1>O!H=70;;qe$OUo0Txsy2yNiO;OF`Tr;i%S zcGa()6i$5dv%-_gtGZ(LvoiJMTNU-dnOR63W5LF2utVaT8iGo#p~%V0fhSpFX`C0b zOVko2ncv@N-6-xe4sjiCVCVON&o;^0l)^ z3>36H+pd$M4Gqm<2A0KXP7V(e5+9nQ+0OB8 zR?SPDF?W(*e5XM*Eq`!;dw9T}f@T4-hDDIO5skm0URv$M$O4!a0G|%Rvd=M|pNHyjt(m zuF4QNDBFLvg=1z6F+W4o4Ke&#=X;d-cuTH4c!Yf#sYuzV?j^C$C_y^!fhKRg;9BM? z_`ryT*GK8%6UB)4Pu$K@MI6lTa1M_@;j(7g(}L`^-Tqrly=~UNPj_hW05;id9!q6Xt`$Be4O5eUCHa^ye zLae}SGVn?nnP1Uvfd`(&6~T3|O8h`j2xMwP;%OU66KSr98u^En5!1^1`{qL|h`#ff z(V2~rJ-DY*IL!T$X-`0WOI@w+MgBZ)(A6~FKudeKre(sIZ1{v=CUFpFjMsqj;rOK? z$i&D`V@2N}(yocJKt{-Rxjw_uALOp{Dm?BerUV1Sdg7EYGrQGFad^kwGV}=P0{o}ZicEKGf4vp1OMl8X^v#ZA(Y(RMq@1im=bCK} z*Ar&c%~@(Zhab%l z@)PccbAD;?1y>&V37hIc^4iop&HgQfg-Y9W1Y9V4zUd?L;0R6#wuCD=9tAo-zuG9r z4%B%|Ns}|_klK@;(J}k!3SbBF`4<{r;kkMF0;Sju@y{&bmzD2dyH4bU3KDbMSO|2a zDe!rK5N@;w7u0CFh`PEug(%rJwF`fNJQeY@O;6;e~@vhYm3H0ZISkF z4jVLqS(edz7G5-R;+@+G-7lV$A7G0q(OK9g3}|0`9PQMNNwSPMw6eFZ;vW z4Zf@Z+mD2PGfCbQwwe+bLPtDLEP}1^LYQDXU0tAGdUe?BzFkke!Xl1;206Jz+d%57 zZRYx#G~&s==f%>uuJ6rPq8{aXA<9eDvvZ{l18#={tYn;95mW#p5{*UBIbWaaaka;0 zy(iSxK%~jn`>qVWD(*E%hQ-R0V`lzB|C;!F`q$3?PX8)7^+kGgf}`NnP!*ZiZ9H3o zI^%_nfqmg4IV*Dg#c}w3p`^1%#_>P7w+1S}Xvf`8XjAALu&$LFQtmKXExStLSZj*Og@we7x;FF%Rqk zP^uw7?(Q&;put)5R-s1N<~!W{-6W~6)SM+7gxQO_mblp`y6X>{NuR42H{$$V!BP^u zF{~4)hFIve%)}T;>p8vh{HEqdj4s|xo1Dg!dm5Rt%ovB(qbJ1IfS?46Ukrp5;de#8 zDN`rStZcu$Y%hCZL-Y27&tibqIYmYBN9Ga8&xU2kH?3}XQG%nZt5=F2O1@_N)N|uU zL4L9bUHkb7gSGvUCB`jlT|6ym-1-7BlVFY_Qlj3Oss`s)lr`(iwvTI+O@-f4)Mq(E z-yY^YZ+6YN);n=i>GqgBjM14hlPT1ZDZB8ePUZZ3dzkWPZRBGPEwUhq4Pd$eR4uxD ziU~JM=?+#ve|X@^x;)_PDp{X$M~BQ=J^jYz_TCMuQCbvePXogp{1aANp5cKmA9j6T zE>Jsq+#$<7=T@yRcy}k+SZQzrKqJ0$eF!ch&B1EWJIAH|67&&ug8;<}8TIv?#mh==Ig z1l#D7k^H5B^CUV`w7pCYo4``5D)cKf>{?5jpX~!q-fCUFrKkP8ao4CQ_)HH>cc0<) z=1HRn`n+6|Y}(jXb-}tR)G`k*m9E!E7(<7?fGrcZEg9Q!JfO+d%KH^}1rj;)>yfOV zjpgMcUIvag7OyXqx`v&PxKEKP@P#ZqDf4h70$x4SUJ*vRoZbUwXjS|rU8K(JDim7` zxm!X1w&$YC#!Er*bAziFk^t8`)1y4R-VJKNqgHi&Ze1bmDpdEQU|z`JpluDco;phq zBTRehC>b)wKP)t+Y(5jET1? zYo)Rj_V%=tB2STTc4{g7+DUI*d)lOffwbZrN!4O*9%_gt1k|SSw*7cfA;iTJxy|Wh z4szapl$?JAWaGDqc?y8tWMtp+bQTgeyJMi57~{Uy;O|3c)567x?pC_vki6oOAtcCa zOMN|AF}?|}M4~Im6aD>YBbRUi4=A<+mOi=YZ{dlm9&F+g{K(L~BKP7-;Ms6I0rxZsA2pqXAijyH$-14QA5E8peW#hjWb? zpy2QE9d@s2`q0Uy>XdknWzc-?@cXuXP%dKqL($!g-nH8nu zaEOTy@y=dMEFwGxNM4i#=SiA640GoTh)YZh_3i@$=NIh^j0f(Uc5})QxmRkBK@pH@ z({|?Yi`#y=?V6khRt^e235Wh0&6(G>jR`e!jfR8WtRbv)v_Kyk{jkrP0o~VOG}rZT zG+4%T+t{mV{?d;XWGrU;FF-F@@)`WKU+{=H9jrH zw{)bzYtBNyTyk%hv3sJKsi;GiM_gBz7m_i$Or7OPikHqi7GsCQ3}UV>AG zu$Esx6Zwk5vV|^z?Eq^$u0TD>)ls;Y7c`izG?Uz|K^|W?*B+q!{PZJ}3^pBI6j=sd z&iyqUh3+y)F&Z_!J0VlLJDX#ExlXD6G}*(^pF8!z_wK4+%ML)pHWcPk@fuF9ei?;e zh8naqRi*tp-&OSNwy_PBBv9 z8gEbzBz0}Y{YS3!f-Rpd#~~G9sbG{ltGV;o&A63s>sv$C%Sjnl#_m-l)dJ;&(`79{ z)-%Wh`7NNY$C`f}@GW;&4ER>6^wxvdaEoMG(i#%{v1tmBmzXO)v4gCz!so5`h!fXoEZxsOev-~pdx$v z9;ks8%a8s8Zd-`v|CK$@C(_n0xuf+Bs*wn8RYrb}QT>5N;ogaeU;FbHj+mNZCEZrw$w?VbSeUyvy zo!N~7I;32r$38}+J0Iy0{4kdTKrW1!`Ce|)XCJ*^=Co3hZe~{Fh zU*+i|m!+2qX52)Ow@CnmXUi4H+%?*^8X)zW?@Jd6oJoyq@cwk^Hg)QnXq+5NuujD$ zxEEZlyNh4JGc!9|OQXA7ID-d4isx;e9Se!Q?CCQ3%Jm;PLOT}?+Lyc5h! 
z;9WMNnI&kV3sV^rqe^W8{Yrc75{KR6>yQA{% zgK842c>u3b24IQVXs2 z?L4bB1S4l+U(yTmhvr+fz5ZVC;l^4-@)&oOD7A?e-H61%u*CiWU2rG38WCQnr%%iP!YQ6R8h2+^f+YvqxXc zeXE~Ut3J(r35;3f6M-ZC8JxrKpfl4Bo3xW@zl*jQiWWUQ#@CzlCklE?QdiRTdZtX* zIs$kyl!&4(Zr@7vH{)sl_Fjf8M`(w?$ z^b{zqM=O-h3p(1+9bh}578CnGb85!8#QYmlZ5cuSjvSeHuZzy^nJ62-r|2-)mDUoa zTA7>y%+~QVq`@VD8AP{I4qXDzy0=i}yw#$`yc$T)Ce@f0FUAp6@&!opVR$9KFSUZD zej*D1`&_|2tvO(R;mKU-wfuN7YPa<`@Bz>JbcXI^{Q-IMqjUm?;Jwy1;)mmwET_!d z6RofIDKb==C444xAkNxG9Wp~2nodT>YLYY5dI0G_XJ@6T8^Ou938-f}sj8(*7B zO`Tcg77F7DT@e(R*`o(9F%>S9Bu!6xt1C@*nW0v^MM>DY#`Iz&ir{d-)x zMv9TGXNeguxg&8f{mNCrWV53Q331t^4=c{f3w6h$dx>~2f*`H|Exv-B>56J?nN-)6 z%3F+$bqF!0$hr?5X=bx#dV)xoeCJ$+Dj~aN$aoVCn~4r*e^1J2LCg7;tm3y{`e}IVofZOB>&T!} zczS~Bv^oec;J49paj~f$#&xUMYOVzR!=CX|jHqCohv#KJuNHvyT#*T0e%B64$PpVM z(xWiMue@YKk=7?Ln(8N49vQ zj3P)ENj;MlZa?zJiPeWIFBx_&TQ#uh<*jS0P^#N1i-c$Z707ta3G9s88X%?6fsffN z6d0<6FzT{#`5`6D_kp|yXf%d7J6|-Z!|ND1m(JdTk&PZa(e=DInDx3cL=SMmw-2b& z3SkY_3&HC;CO-3?>d^zn!wYdQ{3|{eQ=VlLzsT55f!&x{dN2W~t(Rg7Lr&6h{<7M6 zk%|0T9Ub`V3)jv`&UT1q=gu8;L#oy1bfeT|@YlDl`=<>bA-1n_EchFS+PJp9MU!*g2L24=^G~?@w0$Ec{4O;_&J&bN4JEKj z#4LFT)i#8Ax0TENWw!mNz#m5o3X{FGth%Jsb^mRS$eVdzPq;f&$Oy!0v`EPc94Bm@ zK@&}BWwlvj?ebXx->YWzU}Hoep!DuuLOui2)JAS4pfm=IK0(424x)0|RvWA88{uyE zhPS!-kp=_oF9zsKpEf70U4FnGo;JYELo5Sk;tx6y^y-)Gmt`oN5wn?0+3RPVz2=Uy z8+zjdN5dn6?!LB1*LOnODcYbI*sc^_8aHE_Yc)yWx>U9{8L4m4yN()Ha@d9z+nJ?{ zrqt2OliMOedu+=xbAx?-Bu=zp!K#Hxsw-oDx~WSQF%N%7?X~a~>i6R3e$~@E?#FrK z6wrCvI4gA5O2laxQ1!x@4E^)WGLG>-SCUbRHO)+lPnc`ez4YEcP!<)#PnkuG9K+B> z%Tb*pVX9WsiDb|*!eqlz+s}V(W^VO%o;UP+&5+ds=h^p9KK^v_+saz!J zqs!XL{M}mwn;C`c`mF~YLi1i3&zhz31L_<8rHinRtH)8;fT|CoYQIrT_7&oMpG-^{`zlAysTpKBbvP{e zNk^xt2pxJ`V~{dGb+MM{S6^uAiu@GZ52Vc0jRJkVeJ)n6g2eS1bZHCk&rGz5qTx|c z=a1Gn%brZAYI({R9h=6w#`~E5Y@HkX=cq0WzCEYHPL2aKHCRc&#Z9Am0m@d3^Iyp- zX|Zxg7&l(;VJ!JBYLISXVhVZtH2ItZZ5%K)1N1W7(|>2m#MGDBRti}j=hTyLT^Ztg zu+5ctUY3YREffyv(~)Ejz~A}v)bPKFei;Uy2+Wq-H@m3#A}@rmUhO{Z z3TUqcGlX@=RUnO+w%}AR{c87aj$<}jk~O5&_`1iQJlC%l+0y zYYFR6KRHOS<{#BO=jU>G+FjpkfH%Bt_@yA9*{SfJ$>ExLbRSvgV36w$1VrkA)fjb` zS#NY}UO_*1slf!1c<`vEV6 z9t0jr96Ms6#3~X4^g((jfMm$|yjl`$o_MS8jvcOQ$JXNubWMLiLlxmo zZQrV@@y~l*&85Z!m#R{~zZY&$UZ&94`2-Nul?lAK9jrMZ<|v)n5t;!He=gEWSLzTJ?Y6<7~k0m?F$ z3s=Apeno4b-<%iOnY zR$L{n`gpy*=D^V9G_r)C1PnoP0#5s6n4LgW>I(jWC`(&mV&P|z$?OpQsfgB(V=i?5q+^>_Vqye@wkELI58pW!o7AO{IufAa3bM#^cZ(2B# zLL`1kv{!r}2?h^>$TTH!jkp~YZA4^AxvVx%Bi?P7N6+uZT8FwPW~U+NJ#;ZuggITPYx*`G;&o_1r}Fr^Si0?DX&x@5ViwFs0=T)=h?o=Sx*^25 zEQ69FSp%%W{CmMyqC#h`B0VE`@wy$@d9b4qa4S<~`jdEY76|?3CaYvCIRo#P{hsfR zHvEOo=}~>Hd=lEnPQG`7D+BIEfQkRM0RZWdC9q8(oqK34jPbzw<1{eSR?;3ZIUv$} z5RZf9Eo@1Nhon*Mz#nA%<9#if7R22ZK3h=%KCaz{Wf=7mX&A;NeIU=M>Aya@o4uMm zE#hm-EmXrHlpL6O2uNayeVAK{z)Nhu%3*!zXfuK;YSy#SkArm7XIKu1~ zIe5w7EYJGMEMN^B%$&bC*kK)_j0tvTDGo z>aupkHw}Y$GGD-^@n8Vd4c%v-+58T)5srZiA25(6RlMYnT>puhwdY(pyjM&MLR%Dql&NHs$>TY4>Lj57?96dF~>MDUXau=cCaZ7GuHJw_vPkvspTrORr5j zU(it;PQ38T8hzhd|Acsu1$sGSi!^*lqBAeS-5IjL@JDM#m#B-+sCxe>{~`2nW&Key z55JOC`<~lEoTGC;s3u1OofxU^^XUoBwpZ1vt*XE}zIxtDM<_mI^F>#fY9!y6Rlb>l z8sL@$W+d%q58{z%Xws*h^Dd^HvO%z!eV~^9$Y}I)ZU!6JjAp7pjm)le%>;wS>NIYY zm2)1N7bN?;u~WYg&RKD>xfS}>bBN3pV~W)TxJ2+-1UB;GM)!#7VCOXbp6b@PNYdWg zvVER$BXYu-j8$$T95OZ&LgL2(PJ~f5=-d$bH}X~@G>$A5GM!qWGBi=5);7@4eo=` z16{J<5#)pKql`dv|UvA4I7>*t&RU z@aX;Rdz%bllO_PuqUjsMC=H#C>bpMld&1U{Hbj|KE-?8{Rjo7&JVjn3BxM|A( z3BVXYyO-fFio@4{S8SByX)$8t*W_}J8-L5OiO>Rn_-~kF`_4a+CZzoVuLlt_X8^%< zt~zESyXz0QjnSEcOe7pj5BjCBGY)$WJ_A&zU|2vI{g>EGR&Y0<#|~kH?Q(*TT}Npr zkN=4P-=7Ki1^dA|fL7dU3=U7~bJGP_@{&6zOmG>Xj)xs;LxX_Iw;sTm^3!~NMg6&C zxCm{56gyeq7$6z>)F%&kUb@gNgHYf*Ky?21!_!GJSW6Wgje`84IQT`4aePE@4wR!sYpP9|?$k7mM*Kl_F 
zNb#2#`~9(Tv;UUx|LZd8(EfcG{%wl;@4hnscbXCU9SID^z|&E(12IPSmuLgT{eQ0k z<^Not-#`EBA_0r{Z~Lt0FPim$Ts;s^|5LpFudn_@!Tvu*LwfW-gjLW?EmA}e<{-cFNgBK6zTtcFO2_`(`4*lr^%lV$Wr{>MfkVZ|5Bv?mm>Y&Vu9jc ziuC`mZx11spo!K6!%P2Ykx-7g1kC%z7T1|dE-`=RswB5C>QF0;<=WBiV#jglU zOZ-gG#W+Y^Suw|5$J|7UOhsg-r`(yOmy^%ZVW-$%yV3sACS+fG1i^oiSOvBp;c&jc zbJ4d4%xu{u8%Y;be5+`(@D(F!MGE$z=D7M-oMu6Uvco?YJWU-t(eQjI@X5nJK(C250c`$b<524aM;`L zs(z6fl06HaRk~nF0uXfe)zm5WO|CC#%GBA4#lMoZD4W{LDY2q4elUBZ$S*H)EH7O( zW5!zOiX&p)!cw78fQppwB+Na?27!G>%>3NgjLx6A_5%fD9biwgC@5v6z0k{dRif!u ze7k(Y*J4F??_W_!2*Z{p19~_Gd5H!DD8virbDE6h-05-;lczfEOdJkTD*lto@}JdK zge`!n@KCJ>9gbqk^?cnD-kZK|R+IdCZmyh}5|?BH5(wB}V{ zZnPB2iOcg!*>)H+7|3^F>WSSQ9#tRUD?Yo`J5RpIx^XbV6#NrXeGv1qu`Wmrl^#`w z%(z9>HhFnsgXgaYD-NJt7l>5=w1&u!#7y{RDb4_p&I5p_fJEH?VDG)dn%vfX(I6m7 ziy}ot2vP*4DN2zREHn|piqay2ARr*pfh^tkWL_> zgbz|Y!@2i5XYRSy+H22y?|Ghc_nCjm6JkO}#yiGae#H|$kjIJZxJ#skI{$>Vyy71I z>xq9Y&tIG3|B_J1bHwx_iVbtBzPHm|>T~3!VsLiF_gfjqQXk?kZGI-~%jkNaGz0{H za}@Gl6+9nS`g~oJv={eUfotZvN)kJsl8G2Vku$!46IW(8cWs}Ws}HvN1<+(3TWv8k zW6bqYv2Lh=O8s~opJ9)Lj51yIUYppvp~JF)N&CnaHlM0}xN-t*p2-FyU(0N#&8L## zfcPQu_44AuVeu{jm8(Bt6+-VeecgF;@7ImP|B2Vh-}wjaYU8=t)oF})iltdtNFv=5 zJ-?h#D9co?P3%^>Q9e*0ymRZl8-AlW4Bw0i=3sjv8pqi@PM>VdR3dg2YFXbRZu2$y z-hHvzH5_4cFXeW{hdMVYY?9hz3-m|C*Kj^YH9I}bB-iB4H^4&uZu6OY?}+F13!e|L zr+l3D*iL;Ef`ubY&?uG`-yA@lo4D7bAVa|09qYk1*kIaG%jd%sf5Pqn!z4pvnd!ft z_}B9MwJHAZ2!-C$tBgGMqkOd&Rd`BE@I6g#Yklrm0>(aL$sd zRi8o8Y^=or1EZMu;lg}Bxh8SW)=7!3Qs3F2VDq$1)KZpDaxL4I>&3l2gYrk(*Nd_} z^n#<8X7xEgdfD29 zU5IoO3bu|v^A0K?zal^MIak@MHaE`AepM)QNR+7w`2QxSoB+fA4-EvwAA1Zoe3gGa z4_f(aUH;k!zZz`+UG~8#B0G=h=BV9eIfRSjh@&Y|50B>CS?@N?jx!4~`moJd24Em~ zCEwd)zx7kp8K)-9US`efwt1U=yL4$M{_q;fwLgpV<-V0K(~z~l{ONH^MC9_hz?QxB z&9aC2-pdhTFRAd8(MqzF>`W)$9G~*RIP|#bbvgYh} zM9s}($1b5tl-M`H&9TlSw(e|`OH71H}$mRfq=+|Kif;kUZ zvI({`!=hYk7C&LaI0ylpx`Sh|3sp$YelwOHl7wXojR1kOTA+smHDL-sI{w#d{o>w#Jn07p>aeqhxq z7oqVBfA)K1*}wRi3wnj&vFCIn%6ywgH=(neDo5g9PMP+hUAlda$cJ<`M;`xJSaow* z3{c4bcIZLz9A*hP^u&5O?LdVeGg@NJz5$B(uP%gcei#c-D5A|+aG>H`)redj#{YzE z&*n%YXh=@f!j_+~(I71AGV%NrkWjIP0_Py)#lbP3kQ<%pL@@c5f|5cWpDiyI z?;bWyBP&@rKEAfKIT$nN|HO7dji7yRvQlG_0CWiV;#rRs%h?Iy$o0Jxq8SxA)&*ac zShL~%1yufL*xG;eI8vnAkZE)g)BO`xYSu!8L*-3_^Y@*}?Z@Q>^}@ei81QjAcO@Xz z;nJpN7&k#{7Dbze-$pF!{O+i11xF^FGe-^PX6`AVy}!QtF~-9LB3CN}AYTUU4QAaV@?pPC z=3=L%so*sFpRLNjwJiU|zkKA?hS7l*YxZmGRpg+QH9b8ckh-^{o}-Ygf>u?3aBMon zKcvD@(RMp(S5O2@@(|StWQ)T0eyfK6_7OmVeHd7UN;A(f(V1}eo}Rqr`UvEnzs~}d z|30ZC^n&*3V#*&=OWfZlm2CU0zh3mOHT^?j@z<96V@dJfA)=yxD=2>%JBJTq>2be> z{w5@l7e|`1Q?J!Z&(?0B=f?Qjo~5g0W-|J1&1ElqncjRZQoc^nudP{S6`QNI21-_5 zji0dlXUJykq#)F~`f>KYzXOZ@ParJ-TEl<#{0kU5k`WBVsBdAu;#a0~L3z8yb~+N{ z&H=)ni}y4Jw{dEKfF*ka|G#*j-@oUExV<&w|m^I`@@tWv1(4h>=pRYdu?du?Fs6&ppdfarpxI4kt{~uG?e&Gqkb}k+D z75MI;8>d|779`7iPEyr^#MP!1{|uemAfr;AHd{YflUdB8E@GiNnMB^&`lWAw5p1VTcBL7~ z=CR>j|Ffz60YL2k30p(@4k~dfE?}qj_y6|Q{DaoUe+FXvFa562-;^GGx7sYw{|cGv z({lDb5Q@Xa6!^Ggnun+`n~pSHKfH77{+w4f*lj9L`heUkLU_d9V~|ts^3o9q{k)2r znb1{8f13U2;wH5(e6S^BY3Y9f{y;oN(#|2*wDolfCq=NtF6%trRSL~XB#-Kl^UK_QpNg z$|?vcFgzQy&+CB8r+1+b`=*y#hcOIYX_ZvU)$%&oQ$oG`QOMf7GHU$`i7)BuPD{}k|Bsc)aQUKV0wuqvbpHF# z7|_+L13=3kXCY9-x;#zi#3%o@S2+BrpD?-HdK`1-9C6tp70g=x4!-oe-H!O#`@i+j zg|`u*y}w+!;t>32$hj;P)HW0pFq3;x0(JR8(_{$3WwCs$u#1W#NP7(;FKG;dQ|c-_ zDv{hF;TRj{M?Pjwu)JT(_{MrW%q*v!Fr!DbNPQrHFP4-ajCAS8Yz6_{Ay5PK`xnx? 
z+e37yS|sE)Oar2%FOhkarCPcN>~M#wPgn7(0t$Go@jKMk3*$Df(eeT;`Tg9+h?>&1 zqre5j33@t)81fo3yA^hdVoyEOur4*5)!ZC%kyIwWUVXh~{5d(u^tiAtUy{mlIe9gS zrA61TvqR!oyZjk7^7iGHF$yi8xx;_>vD6bE1kQc9(?yg=UcQ7RlZXi9EFp%tyebC6 zbe-y0V;g$?&=@+vz?ZWh)=D-NOuucwQW9^tc<0!Cu}(V<>*|no*tYYyt?btHKuOq- zFd-6d;1gq>{hW2co37M?kXQ(l7caTJ`0<0ILu1%$WAu}u!GnHB^5Gv3T{l|%<)t(v zgG69jx{6pw3#V%(Xq;S%_hsw74d$g8^W>$(BI77CgLUP}MM9GXf7ng-OXOu-!n$<> ze9+zjD~49G3*7wb`-xQ(S+`C7HV)=MpI}(q;Mpm&cMD3Mv1(j! z*`@Jh>uT&XxCx;N1$?Aq!W`-Dj6^7expyifdwOv9D^;gCdF_^p9`zlDOR~E64vX*^ zhZeh(Vyaljx~km^(JJS=YGr2kL&1%?ocqfC5EG5{TdoOA1Yo>7g;N+DSEgc_vzfFws8_^7cA%*ycZXZrCD`EU4{>$SCFS`Ws-hXj6!+l)tRTyN-XQc37t*CLHTd;H+2XXBx!%;8#oKj2ScO`{tHnZ>%A~o zOficN3&{c?xZ+|TdRfhn_^OBJoI?Eru>L$Ii}$N?7i3JC_fs#~>PJ6t#557h&&NKu z9FvwGi(FZaJB`#T8ShzOd?a0YesLVDhPxsd_lR`w&@=eJjLOn4ic$f~gg$$s4N2V?1E7q!(O*;MypR=R-fxpQmj+zT#9O7I+>c^=(BfyWq}B3t9uqoYf_ zPcB}ZTb|0PF{fVgUp$s|C>3+6yIB>+oi^|M$_Mg3d_C?b%trbq=G>Yg@+u^heu|3#kUrcw%!#;Cyjw1WGPT8!-o`AcdiKOp0NDQkc@~@7qrV zpZRX>EpN7DSp9bRV5Y@=u00CAPgG^}m6HQ}u?yY}zf?HFl{X>l8_Iq5L3XBomcx&wB@=w@GCn{!i zD-z_XLj;mUB*91-vK-TXPAST2_SKl<3pzLl*fgi=IxOLhmDW&R5&ZskYs& zN5#!(Nf()SO%08o!Gk!&$?oOsX;c(S$Vh|(#3;))f`jSl~q!T?$fPvP^nOdI%K1~QJYvd+1W%Fjihx{z4f4NqXuPx4}B}^>Wcm&r*X<{ z+x==iIwF|WRbQOZ9szHg|uW*pX|7vw*tb^n#2uu!)Gl<6m z2y46pAfmT}*63E$)6qXnRAZ+9-%V84@Jt=xM8w2&B3C|ww>+HmZ<#Py$`l4W3p<+{ z&AB7}uR3n11Z4S7Sd&Ilh6L4+8GQhH$|tu)Op_vc`Zdbs+2!Wz`EbtkY>r2|+Jh!~(L z1(cfyY)PY3f7)z|-pgX9G3&^krb@BqCk#ek5z1j$w8|qZ@IgN9$g@qo{jw=b0WDgq zxTo;d{w2iNNnu?5TirU{A4?r;EM=~qoAmC?71k^SXS`>>BmrT5#9bq<9xa48n%KPK z!@I8|i=^R1O_dOz>WOlTZ&~6;@CV>(=LGg~nS?jdq!?!Gyf6gaz5y>{q#EXgcX(Tz zcCl1?f1uit!1Mt1t;movol`36Bx4({=IhGbD?3t*t;afRlrt{#QFqaoVi@BD5 z+-#!kN##?{fP-L(iUDD~Xzuy_GX^3+bCYRkf(iJg=v7LLsWtS-=xFGOvmq(+JquPS<@4j9{jq~zdiEup zc>5$e`J(3i4gBL5?)ly3S{Jcin}#}3@#|)DF3do$bxrh;14Isefqsc*jdu^7CaP>J zR}p9zSo6#LT$KH6O7jrGiX8)l5j$clTN=Y%0A=g>&}Vv7{AA0OVA%~657kAt@<%B@ zQg3rB+#rme^KObgZHzoSMYx4A!n7kx@QuV++geq~pZ<r&XYZhQp7Z7xHsi#fWR`Tg*sn5d17|;T}fjLKQ~AjHuEqVl{>WK#9KvE z>Z6ZQbq%iY4bUhpm_dgM=)rc~VY}%3NM4p+`PI>r6=m z(@~aI=3f4^jKD*uPK@f9=x zZrNmiNbu|>cX@^iNvCNO=U^CumfQHF+}=WKRx9J`+40i>YCBtVnOP>6YQF(A0v6hh zXFW-8_UC*aU3cnbdO)E+VVT&c*nOkC2DbFUPh-TST4_v&j3Y5FED#%b)z?AYTg`i3 zF~dU)f1{JbY^rc)E3SP9jJh%%Y#L=^HkD)EvZQw^&2i3Fr>_CoIy4 zs~;l=y+mnPef4&&cmOBvI<;t;Nszx6gRHJ6N^lT^m_dAb=|tYxthIDIZK+Ot74GXL z_F`$s*ztPU!?AMZRbH{EwJXj`*Y(6~;KP|u(bKr8@F_w)aSO|sT|m49kJ00To>D4m z<p`5+shcC|6sv6~fe6`k^=0b$5vjb3 z>GPj>W@L1vb+5>N-B|Fu;_ml_-~ZoQ7%XQtk+UZ(;49H%fjdkCvKrH05Hh29t5`+K zMUlznI_VjSy_KJKe`pls`EIOqb;=J_+zZ`e20g)m8il9La2IVFX-%!dFRd}2372yu z{W<(DAgl^M0d}IqPZ%o_b&~UuxDhzl;iH`n^=jb<`6!|?YF`TvPL`LI#;s3GElXTUbgq zEcZ|5+%UE?$YQzrbH`0EykTbn}OMP*b<6fR7 z^)jVRQ&v$SaVgTJZacDaM2s$kkFVaJ4Hci{xj1`1X!UIrvtIvUW9*aQ2A~FP%i3nm zIcYg|2sCqj*&3eg5*Cn1??8SbKJ>kvsRB{2&_`QP!bEqR`e>7p92h09nT`7!q*Z{`r74FN3qH*#_3o%&=^zz6?H``;2i7R;s zjgI{^aq}NiSNJ!K-;0!bYh?Q$|9b?|Tlu%NnXsAt0;4+xxGEj>R_>IOFXIFKJ;enC zOudPowU6btzeC==(eYhw{|qnYY}i4_lIhF&wnKxZwjdIAwu*K>{*AuTQ(wVV@o6~~ z&H1>l-mwc~yw?kVRE-(jmW)99V;kzXk^(d#gJW(8DnoJ@BGuA90J7k|Cw!4u^&JZRV&dNB;(u| zIj?oa?tvLS-)ODS$6hgyy_yWfZ?s8e2f1F_v?@usKoDoYhz0vI2+H~8{0oxs7YA(k zz;>TI#S98F?lMw4OhzU-DOdHe4pHt%XF1G>ENDwyiMeexXLR9*lIx=nlC}!Ghd8&* z47fO0HtH$%5MGezhKEq7w>6XTVJaRIQ2&R)i7dv)_(vM53EE3mUM(Y+EYJ}h9Tlbq z4XA7)?c~DX9gc*1g9{Cm0Od~Z3Dr{kCi3P3$%`Mt?tSt9q_@*-53*4#_VdGMa2-IN zfBdTrNGm=C*-A7JwxQ4=fx*3z--;hJDs3rEYlmjbL&s(pPGV6*-=15e$e}5rsOw-amsLMS`)##62`s4@Ss5MGNJ!Kf z(dm>c&(!JmQLI9bSV(O8VX3m)W|>mE0!-Yjp*^TZtPHsIw`QU5kSD5w$?W6Ut%E~` zO;4is$ncd^1s!RCg5~h9WNz0Ur}2U60#crA^yRr~Wb~DDpicNHMP<7NwQJ0_=3<)4 
zT@l$pOEa4fI=Yt)Hxsbnn=ZydEpRv}W_v*w{e@01gKw@t^(eywa>(g%X}D?A9MUn6 z!ZKW&HAZY>X|kUXNzXF)fJxeHran~V^%{Mn;Z*7S9`7?}Yf=l<~ zO5M3=g30gZiD|id=>k2%e|e0>TqU%2K+4D#oD6FlG)>C}inLX)FHW7GDj!UH>K;IA z2p21*9B5JStV7-0fZu1ZZ7>4q7&5lJ&dEw=al|0pbCw3X>L=`)zBOc@fy3F8JEFC& zeR?t7>Evklyv#Ra_XV}02hLp<7g`%sH`X6m=N%&ei}zOSyQZXyvEgG-dZXUPr(Tx{ z28-r{nFVs*Esr!7j;9OBZ(G`QOSiQN7Vgc)cH~Z*brA(?cZ4ajL9kvElHiE1#u|aW zree}7jXzl)6&fFKU+ht^iH9dW-|zN&+Z&Gb%)^3%wOgSTrWvLgCB00%i!UL>pqfU6 z>DcL1kNe?-*XOR9KbxsbRvW?@?oc-uv!TSHFl_4AYZE0 z-TTOJj)maXsNpSFA8q$lkf|v$mZ6$pjzBoZ33h(9{4-C$L06)hz53>1cUHt<*um2O z6;GV zzaEvZTze?vEV%~zu~-#>^iy70UZ8Tqy(q-N{(5|AOM*|>p=X@-N_N?u9F!`3RzgD| zXvz4)^Vr5Fr%U+)*jH8`2ueJ*%`#f-M65TS5A9_`pO2>ZpY1QDs!OWNG-<4ehNLN~ zcQz>`zYWsWKXw1?6G7ncSOQ8ZIuQ@;#YGf#v;(;K)N7SlETjY~SqfoLf*h6SJ2ox8 z(3rWcB5owKbW_xfK%|;r(krp^)~OH3#=3m3wbq810q9vr$ekH{0h+Dv>l~R`H%WS` z_~dk+o>s$=X_<4^?FH+~?xX;J-DSl3Si;RtR8r9BI|=Ia*sLNwH-V$;hz@O(IKJCa zPinR?Zs0LqEizkS>;3P?68MbGCvw?oNKZr}QJ7=U#@xf{(LwnVGf{G%PQAarl=C@8 z;r0PN`axR(oZ3TQ;g2uI_CAwY0LL{_i6aTvt@yQgkI{80+Y(IU6VAuKDc$yimRg|2b&d!_k^uKq4%$iZAe0DU@}Xgr*yut7ODi( zIV`xhRsi$)<`UYgZk=26yJuOqRz;e!gX!vNhbHE5!jJs@$4plmv_tmb=sILRY88;F zzZT|1TWxwwl=Rqf?Dg&(kAtfROk+Mv_zw;n#3~#sQ*hlov&jdh`V3KV(6(c=Sfa&S zhY=4UU$D%JBh@qzzWbUM-#)@+`@bIiJs5!NA3`vG8@TSl zvf}Xk$T@#7I`6#kuZE)7pMC#ZW`u%IZ!(swzLrXxS%GpcsI1#kP*&v1{VfkpC@Ulz z?-b1JUE>DaT7XmFsS!Ib0J=CP4v?F9E+odE!oCU8%v5LlQnDM0lY>|c6I=ow8!ZUXLL zE##M_wC<)@PHf(>V`pFjLk_!!1?iKBgFal`Q1nN%3n~psvqeEHNcTD#ZzoA@drKJH z+HxGmI>3&?s6mebXL_K3(>l6+bS^Epyj|>0t9AOuEK&%@9FAJNvcqtNd7i6p1k_SP4!yMxSmLqi)py#&eLjx3{Me%nsF7@+@|>NV+LJ@k_~o z%_2#!lCfT4SjdW8aLO-7%GF(M72D-|%s+7KZmj^0Z8Nr=ok>e8BkW(|Lx=Q<|?{#g~bxT5jw2 zpF_OEN*L`lDrV}Ti^;ZkDeh_3!j2vp0T;O*b@gVIA34eQ`A7oY7_kf3>=1tp5Wo@O z0;UsPp`L8@=`JE4@L(J*L)7CVGP&q{^~8i=ogF-#Gs&Yv-^0^9zOEHD3Enfmm))9F zAbwcoPL-s?B~E{s9%vf@PXYcejx1W8JYVAIEMw{6q#gTEzQlcxte}yrbF1_LW9b4g z8wdfjftSC^tWYz6!*L}H9S8N$5~zo)pimA2_Y?N4%@|RZ3W3UxXXNWO4WcI)(G9Nj zcqx8K7h1k9ZWE-{4hhd@Mth32P70bG@me(^iV#T|VVExL4nr6o$HaQV*rkLCFNR+) zm}E>7uWH0NDYKt0+gsT-1q+3L>4EGD}KvL{k1@ zbEhRKeCz!>9?5|EO@@bm!cG=gLrRD_GJS{yk70RhG+DzF;V1jmt~Dw+**S)!3{~gm zRCnEvykBw2sC1f!t+OUxxfoE|qd@F+ey@^pbWTyK`K_xdIhm>t^5*ASSTEvFq?x}Vg%0JLw$j8N175I=|J*;QAGa_1Qp3Vg0^Ph*e~Y{bm;I4?`aG5IKEM;Qcn1JZk`bZUBV`e}Ro4?6lIO1B7mbO6#Avtsjb(>mrElic%bm$s4~e=Hph{@=wfo!vNsPef}eKIoMZriS9ghb}r zcl!F3?2ofBvR3)rq5EaTYXIwvAHtF}>fyJ_h=Lr2@784Y8MBJ2lrNSRh4M9)kL=D} z`%qYaYg>^|g7wbfo zFU6f)N>hcAV3?LLZP40Qr96&|KN$xx>eBs&_IGRS$6R4AA!%<0s_%~cgRLBys?wUyBZ;&vqCO8an0$OXb& zS!+_x+kBTK(gNP)($1d4!Z8SsQ_r>iXVU%6hwr}qAif{U$F13J zr4x~~6%8&wmfcCCOo!A}2`ZV1oEb%Mc)xpKB#vstR3EyoWx2^%vI5&Bl@P6%? z-ugQc7SG@b92u`)&6sfc4>FQnz|gc^+-l<|%i{Qx{_2m?ijzbyfSI+G(+Ot_Qs9(I zE5?Y$2ky=|OqccZPjrDN3Avs@r|Fhl3a~_5X~YI! 
z?h9ek?B{b@YErhe1g~7PAhnDmU+WeXg13adYf(@ZGn(}gNRy>uZuJmJxkT8N{)2cS zNUr4^U2MJ}gg1*g(3Ddd(xvJ=JmGsFaR6rOf9J^#7<(hF%kw@IYKgOV@Ko`FRY(dk zX%QB!3Olt!cqaSGUNQb7Zwm>!AEei#i(a$919S+^-r&jJ+C}^!0g7V>&@%sOf(gdf zodLeL{hWp}#5zy!x_l0{RlS4r9wcMW;$XyoL2~#bDS{86^+3l@iOM_%2FNOK_C~#b z0|E{ZIo3lA&^IbPk60HB{AbWL*V+E^44=qXGkV??)m5cs85f*kDXKAdvz!krB*RFq zi~>*n$r>0&3ruU>v2};BQ=_1FRVM>xJ1r{9_6$$x7TucSThGLbvYcrZwCI>0mG5Ko zg-V;AUvRl3k5&Isa_71dfv+@lMHFUBYGH}u>*?q*MjPZodOg3R&k$?Tb&&Ds?15wD zb~)cojbB6@k80&f=wC3z$a03Ui=f58L+}1bo7Y`nm--n?<5+qE^{D03zC;>I;wKD2 z0LTm1(Qiu@><2&S$1+f{?sPayq5d=S>9w~Z$jeLg#H6+1$(@tio5EAgmY=~J;LvWy z9p`YSE$e)LVfID!fcL)H5%2C475aV%c39T8DKyi)thWtSu>t|~98dQcBL(W5;$dNj zn6}gc81|5iau4Q^6SmI8z&($(ty^tHW9K(domqF@G2^0#a%I}%x85f!{DhrafIRFb#GM~!4%+l+g=sg(v`ZC^z|3!XqG7a=p7L5Bk1%KJ2V&yIve+k_HX6#mkFCV#@+6QIG|y; zCbpi!)Z+{|@vvW3yHGTQ(A&(3B_&_gvfk?ik2tvb?*Hg~IT0_=e8XPp>y9;U4nLA+ z*20j{q?J?8&|YqyFGUCn1k_rhKNGH9bM=b%YL5_Ueev#7k21e`GX-`KOI8PgUz%xs z`yh(R%mHZhz>xHM)W9U;3z1!P5I!hR>9|8(9$s}`(SPhUjQnVB@~HUo+XjF6Q z*KKW0;cyy&-aKL=y+)+UtLV$Yob7$R$S$?Cfq9aq@Jq?dJnu5IZ$6iXzg|J!!ItCd z7jP&ES?D5d>3Giz+miQ{-bd&zw+)O%DY^~%Z@n9@lu;s#ilAF#9fD!|Xl{hD(e1ZQN`YHx< z749yS7u%q(l6fo-eGt(X-Z#gm1in-;c9vNqKE4lI+%=wN59Ab_H5fSp$*h0kg`rQGM=MV zLvx>trnQAx$z(PfJ%7DfQd&UF;-a`^K$)q$@|AQ|UD+*X!(&6~QuWxm@p-_u2uCwsgS?vbYzOXxwMZMi+w5xqVWW8; z);p>9nO7?ET726km&`NxgR;6x=a4ZNoMY5V2(xZT^MDt6Z85St%twm=Y)Pgq9(_8QON zx3HwgY3C1M`iSBhFZq26sDWZJTQ(2v&XHL}E~Fw7F;jtr2?WW#8NLnEfhe(!uRSny zHTWj+?p&#Ujf$i8Z2eKJpGu<+J{9#jLT}?_exuh-d8epjJK zi+a5o(~+z7qAa`JKsn80W0S*}zQ7ZASf(=f_`<+NTaX0Yei&xc9iV`?{64*L|Hm!f zXRS+H_HUos@bS4Z)ecKLW4)$}y@Q+wS4U2pV>D~q0Wx1Afn_xYA55VrY#SPe3fyit zeu)$s9dehj4ZftKFBlgu#TB)-p5z6pq1%rpIXe)fwr0#(?pemv6!#1g+AGAS;^zuN zd_5zxYI~OFVv;1MDYScu2V~o@U!1j^M3UdVxZ#njFjsShC0sd8bg^ zrVv^E>HVmY%f7W#`=r$M2Ik3M-GRsznpxwh(5@%;D9zN>vx<=vW3 zsEUZ5?KphyG4pC0Q*~MbS=>TJJe#p|xGB`w{KaqMbb+|rlFVjj#f#bM?Fu*KwSzAS zRhh$rTdX5NK5-g9C|_TOfkTNTD?KZtdb$QhyZoku$mZldatNIv>yPKYnCjg^6%dPY zQPmeo#9K1FG;C5qZA;-)hE?g)9;3^aR>urz^#U@nt{7HtGdFZU;Ad zvWp*5>ddG3)x>1$kM9)%2{*f0nMo*__)# zRibBq-3)=#o4&7l3DZR9xaXR$2$G7i`K=Un^eJ0nuKXLmh zSa7l@5ZZGxTy=zZ|9;njtnhvokRch?0HsMDpu%4J34?{1u^)q)Y*>ZS1O?`{zqz`4 zy21WH-=R!KW_RXSTY0~JJpH_;DD8mZ za+`mV+--_2zpIU@Xe3Oq|6}c+%y?#r0D67}y^WDU$Hi1@k*wr6UERiIairnV5Wh#& zEqfy4w8Jzc+pgh_y-27bi!ibe9u&dAW{n-XUP0N3=xo~sxzx;Wwa!)#8gv+H$P;377oaG2st4u(VUCL4`02sK%XXsGhz17O6uJ zb)>RpAaQ(2{hQF@n#EFUT4&L>h_H?nc9+pd$cuS{(=fWZS|FQcNOP+R^m=jiZT^1v z+^3rP8Bhq~@Ie%s;1KHFEL1&4xEyS`=XL9qe*n zqPOK}P?(?mMrDj#^sGTf4_++;sHXx8svq z>p5e91*b7L{sL_}cAj=OBdVrwd&j`?L`hmtNN%J^yX?2TC+^Pv38ZaJu&^4Y%HAoE z3Bcoa_}slSXDb3l9XcZPCYJGjLS7nn@vO3Rp7j`(m1FqLq04wFOmza`7iMbZYI+H>CALi6l5%T3lLY29${YKvOgZvgiW29} z0o3<$lqa|vd2Ji z-n*fAG9ex#3USd9bxUjOrtdR8ccvtE)?VYeuPZI1BbdNNez1gk2`B^;&qH2fJ|aa5 zRnL|rdqi#I=7fAdey7cFw{1n;_-6|mRZAz5!H3`04EQf4W!OR?^R zM5Ddk23tvLL6lj?-d$;~rD@F*+qOp?P|0hZlK(D>sQj$kY!&Wki~ubf+Ij4n&hJqz zOW1lOq}UNvtH@FZm~9mtV&sAq#Y(-~CN$t}l1XIY<%|3q4{SIx`Ib z)7Dy8?dRR6Igz2L%b1VgqG>qDQg59m*Qg*M!3SJ`XJ!rUUX8}^gkAhqE%&_34k3KM z$vO_UeFnCzfhD=Rxx*LYBt;;-02Vyo`*KlHqHKA&Ns?~okt{iw*f~>}4Gbo{QDqTK z50|i^h=$SVL5HhWFV=Iucj-8*?&D3ssHkwxtus;LoDoDZr+^MBa>wR$a)2fj-;mCI z_U&D+$LWcZish5p7E`3=w-1 zZo2jtaQ@_(&o?D)$>9&BZFKavppD*E4Mc5qn(4o1r^0?>XP^62u337-ZAo|d9*?{3 zp}J@AO<>vHjG&{2Nd5H%^0o)$mZpy+?$`~RJXz;{k*v`4oj$4=dR+;oo zn6kg8Z+$wUbY)uT)kcd-7D>>}0MUcJHM)_dMw1{H-lq-Kdpr(5binVr@kRB*7u;Vm zPqce zOQWB`w*V%TZ<_%ZU7E8gYzyr%w9q^)TmAsPzkpmJTXgGkOh}5}t#q#YXYSeWiWrk< zM$pb7whn;-t8Y};A#XXKt>%4`n@mH_S3PZce+}V`l0$p7^da9NKvT8FX*blCJ8bkg z|NG2Kl~P>$eYF)!TuB`|;s)u>+Kp>%k{E$KM;RK;1W1|KfZGDuO_IUrd|C*|_J(;; 
z=3%cSC7iBlB&I_%JHP{gEaQlFX1U9-I0OJFsNNung5h`=X9LL{ym#YJzYy#L{5Ere z9q|B2C9aggGeiQ;Fo_MHrT~*f)HRjovco(0`;h}=(X6=_X8!;$a=UVGw^dyz?Dufd zdFzg> zhy~WFL-uQ|mN-}ej0c?KMWYtorDh5#oYpf8@teG#H8hgI&THqs%^D(TPS~PpW1aix zUfRwJ4>`pr7~bTZeWer`#0_c;&#HX6*kB78`jFM97t4}#3L;=&7ua#1qgmL4hMjSl z=M0u#6-d;I8b;Bn@!;489Lx?`q|UT53M7{1YD^4Ig{tG+jr*es_oOoK{ZdRCKCcy-4P`&dtkQiod^%I3U%`$*0W} zQ69e~hI-&>YI=%09~_y2CI^vF3p2td_1xHNRs_Qv!=mph4z1tiC+UIzg~jCr^!`@PCHrg^I>WPS&q55d6~6Lhiq z5x+f`z?8OX>9Xd!2v}eTt}H$?<6amC$Qkp&M;P+S4_mZn(R*MI6%GFYXc6}yTE-@A z(>!bM0}PQMDtAb=!0~QC55q-EYQWxTXdR64;5O@g6iMbY-iYR{lUahGY3m3aYyf4} zCl6YxT<=45N7IVHe}m-0`}9T4cOLqo<`jN?7Ioo1iU+eqtU|*Hsvg0|lkjZO!5*ma zhj?!`d10siazOnclmRTPFu3Qg{^vy=^1EpWIjj<*lT3ZW>Ty}&};s9 z;Oqe~(HMC%9&O{m)|ND2agjV#fNaNLKleIR2i!OxsCHwr4e6rtYDQaNA6x(jo-E^v z-)<=G-ulrY5wJY4R8Dq#vbBuULWgV#8HM%*nKiM(Gx~5~ zamj3*Tu>5Z0q3Yml_SP@FgC?!hq1~p@D4-!=#I9=Y0ExJ?cyiFd4a}<9F7YK@;=pr26$x=;Pwj5 zJ_qk4QA(YXgX+vW=zUN3KhO6LI@8FmjG)t8z|Q^%;OA)W*;z1@c|W8A^eOx@et-KrEG^%{glSlY{j#1Ljkpb7IDKSRa3X} z*9=r@jf;hs=g9+xvGbgA$6xS=4e(z!Hbdg2(fKDnVa0`c%-2= zxPIgeU!@2wbU#kbQ&m{VSOLLj2YxIh?V7wv!hC=bJ{r2nD5K8CZ@YF!PYnk1Onf&# z?FoR%ikkhiRDX&A^2}#4jXrF8_@AsuE&pahlz2!Y;u1F;00 zz{3nzM}qTjXOai1gY1mItgtbn6Za{tyY+x}25=4tvZzAI7_dvS44%9<>}Rnux1 zp`P%9VZ06Eu9gORn4o-4pS=yl==;z?Dv~u>sjlgXt=CzSBfpcOy<#B5(L@hNJU9t~ zo(T>+GgSD={pe-E1hT4O2aj{K>VPk>&L+&MxoR@yluld$fEvRjKZ%E*mzP^#+zWwqo;FE}VUs6Ux9Lv&da13c$v zBvVYYqLoN6uI2gpwjbeEt^U|`?ZA8(ZI33^>fh~$Ufc)A(;|G3UmuyN!h{f7 z6?+^0k1aLF{{V}><42)uWN(@tdol!h{{a4v9PR7B{{R<%#*aqV008_uk@35^{{Y7c zeO|@=8Vfqvjzp$vcRlzr*tMI+9r_*_pP>Q!I!44M0g1_<(~i`X$<;RXRl)Men0aKZ zybL?>s)zbRi{qXv19B@kErV-D0Z+6(N9Z4~A17{1-m+_l$F4E2jZ>N2q3cI$xm&I^ z%oaPr9PmL>R&ysa>3iu_fRYRV?A8Z-6Nj;2sU5@Kg?4Ti?GfJxT=(0JcAl-N%2s7i zAu@eqhf{^4Hmaxe+8a3+XyuHE6kk;Miy9$AHeo#sJxlD@dzeHD@`UsO;>qw6FgFstD^? 
zJVWB+g%B`1?2PuRe!Sr$eC!Ot*6IL#!-)qqTp24ZUl`J}B0O;n6IK~1EbT^HF{@>) z)o~}zv$sYIWY7U*TuCYxGEJb*cd%O)V}Obv0iH4T&89MWiMnE^%J4w)a35kPZ|>aB zJaVJ6HwYLRxb!2BdK~SQ0A830s}+@S)iuD?Yvq7ABPTiBscFo>(@+*j- zSv=j`;RV(-m-5DWFALy(%Fdap3$ZgbPam*);+z$>_y*IxQLM@16{*K+Az%TWow&7T z@edHwiQ;Fx@5vdIk;;!6?uvZ|eEQ<`YuGB?28-h>U|6VRj}u%R8tQvgddxp_oF6}5 z--xq-{lrWE03G$>MB&PhB?OtNdO&#q_#=zSmJEfYsL(U*p9Fdai*Oog%xYh9OcueB zfLLepV+@=bAPQ~sc3+l8hG2dvNuwO$f_XU3Gs*HQHfr1yPQnNJ4kjGa7=^bF)UN5j z-&=$JrheE)u_vAQwvUsbfNw4hw~^5qe4Q21sN65XxO?FQQcO&M?Q=WA#6tXmo+e-k z9D+K?AO&H(rnKeju6HPV7sbVRx7bO7e1_NC2p?b_QmSOnpSX5JiiP@)cZ4zTN!6AC z_5vvct&y&ti_tWB?+6d1nfR$$eARR^7-Oy6 zIMAf)AQh8Iii%`PMQN|+#%RnEmtYr@Vx#1gPYthG=C?=V0EspHoSFkW!O9R)XR36bwB z8Q1}Ma7Aj(pG#^0T}2EG^WzvEfd2rJE)3q-WQUebMFFdA<@-u(wq(Vh%9GBpEDHiB zW=Q0u$)YCV-XLDqz_2*@@UOB8TkVB22Il#=HQ)1A@S^9?fQpJiT-gSylCg`7h&&K%Y!OPzBU5Hm8GTv&j!P z?GZ=~Gz{IE$9sT0obgv|Xu!EJT))W@?S$z03=G1}8TsL0?~0qS$;Nl|72x9QiXy7r?O4zZ&z`6n2hhnq(J9IyyzgT}x5HMt+A=ZYAKTbOXC0KURsg7Q1)@6>`QsQO0BZ_C`vzh3G&dHV(#{m#j4*V^6*&=Gw zz1h>X-66&ZBHRx?jOP?bK9@O~GQvDV^?fO&s~SWKCQ{?Vcc zEMV(8#);n=?hzwI6>K01QyD?}k5kpf@%xFTyvQ)td4L1Uw!zS*oC$P&}8J z-(UwMKn&DnMT}i)UlrDBel1D66z7X9(b6_({dwY@8O959V~_#qft}%?I}p@OHru*y zkTX4aAbQXBvLMY_uFhzY;8+H?25`TU3<36}5^7YYt7^h!I84|yT}9SdFE9?oUEakN zTrqrhG62R#!`6NH1eqJAZPiZz+caIyKt4Mn5-;Cws>_p1?=$DglaektnTEv%ZaPK0 z5$nbdjsEE_eZD1TF_>!ziWw5!P<*kn&z{acwTo>`gcFQr{ z(p|3d4gnK0aelnlq2?>#Eno`k;E8+66NmT>FTpch5xpG;5!|_49V;=w!Lufd z0L(R=@hxKEf)$v|9`SKm1DA57rC&_n1i{E327HlD*=6Obn(Y&_CO8?yj~pOrB%T`I zDlk5~>4R8%XAg4a3(`?`n9bJ!3mfa#!4?6Z(57+2m^isPnJ)d-{JMvWpBu3WY=4wH zVxgKXQM}ADd4U~KY@#l(&`$>dFR@)=etiA-uLp=ZT~yOd3^W)A27Lh>PGPZgR`Qk_ zxv6tml+~QwqI>Wj^cl?_G`-{4)+at1J{N>b0Y^0 z4qDY4W|O3K-(~%{w?qwB-Wz47l1%1C=Li$ds`)JB=dwDY z@kfix)J=0RYIlZv^VyOprJ$n89vTZ9v@cl|Ks<~wX_M(Bn-a?sGrYjVTv?(hc{5Id zn~05NkU5GyERP*SSuvC@K|7q=MgC<@~?~y&^D_b=b#Tgo%pW_ z#JS}{`b{`raF?#<9bf?ZeGy-4y>nL+2&CA1l#d$NyL9IXnalBv@IyuRL)^J`@lZ`X zilLzREa1-(-ZOM!(emw2#Y?#csF-U2GvkR1U@(^gi?58y*0ZyNCVxO8<7_ij#+kE7 z)X?9Vx>OKT1;QEfX9GPRd_APn`@`1w_eop*t14QX zU9+jdruoPmt<8X{$oXJ1(d*ylK}BI9UTaIcT|4)-xxf_JKQ9)#^(cFZO+>cZY7WUs_ZHVr)VYHW@AIn#kr7S_Ip7tJC9Aq$tEafnM za(^uB!obLPf%0Dk)eB&GH=4|Rv$=DVYk7wO+3Y|&GeOSKTp=>N04!JxI4nlbeB);u z*Jm~H`-lV?I$p%eo}{ShiMmqOj9USlY-2ON!?G+Tvs`I2xx}>ta*L!;#tZn&!YHSX zQ>$GXiY~Q`k-QEJDI{t2r3WTzx+uFhAZFblv#7J!f?j7}8eY}RJw|~DRTR>)~KOn=d3vW$S9|n zN}tYEc7)e+1RUP8bWyw(&(D$tm4+(YOTnxLe${%~u$Z%|8>oesuZ|0P8vRI@Q-=P* z+o^QLXY8gZfw{j}E*II{*AX63l{3k_)%7_p150-5Rx|Q9>j#U1XT@oGET z2=yx2qa~5&9hWd(ceF&m1cUQj&$b5zz7X}bPb!`TB`ur)H^JVPG>=|=2k1poAq5`+ zkI7XD@^%r{Ule@90ck-y-;TziU~*@(Kd%|&of*v5t7R=>x+vqVti#jwFOE#>2j3A< ze8yU`#^}KpmL5b;vFb!76;rkngqXBa#Zh^_qI_U*$~-f7JklpFRJRba%G8dMe6-x( zQ|KAX@5pAhaO_1>HrO*KZC3{^jYIlqI-*>zwJ$Grx|gcxG0g1az6Lg&i{{fNQ^qHFXaKlrT2H9WT8-gsTUyPGo4M#%H2B$pPBaCmW(0 z;2Pa`aw?47__ZuEKP`>{*uErEuir|LcRWDB6qm?gVWU2{Fd1+G2R-Hq(9s-Ko0*fN zot(9f@&*TvYhPu65PtE>kLi|9(@*6YFYUln2aPApouJV`G?%ad?9Y>2Uo)DaEmbKz z)?@(O5#E5z`w$!3`Kf`@o4(>itj=87C3mFkGwLyb^h9)A-hxYo_^4lb-@z0>*DxM-y2~ zLRF7pFV_Q$;+^}(lUE3!n)oSSlKAa1c!I>qG}F(1dk-c^yXgeAurmXARH>BJa6EMs zMBHG8^5706HmN309ZN9RFjv$#7na~mp1ze>7oZn_0Bg|}5~uXyz~2+fqmb`d01W=5 z6QUR;tvK-(NR4bZ$1vPo0q(YABmt~1gyosJHcB*H5e4kL1L%A45Hq)W(W-&G8G+*@ zep(avGcCy@e0P8?VWI(LkQWq4V(%n@U|A-ZZtrQlt^i;bSrR5$+N;(88&ET{{sTOg zK>6=Rz|&1L0BU!HSU;H=w*}YFU|1j1U%}=_Ca|Lforb$PnLd>=KN{bJx;KYiet7Fo zBJO0Ws33*=uU(KmOW&}YXI)~Oh_NyDIYXJ8&nBPT|70I9LZSQWnYk6wN=2#)4_qxK#( zVutJnXN&-RhxH<$Aw@o^-rnj$j<6n5tn z)-wb5BQ2fI8ib*fMl&FDp7oE6{K&8e?u>99s0%8-a61E^JQ#6<1(6MHV8v+dA| z@rrFi_Df!Oo`AFTW7LT*7MEVBN7NRi`LZc=w*m+L%W{WynXhSV|K&FoyKBwDpY 
zW23;%k?22e_~aF96MhI8f*=8TpE>db1_78lMR}3lk(e3T&KmMWgiQ_Az+zs~{Dyo1 z_2Oq3FjSe!bLSF;wh5?BX?JaY07Sv!2ZI<^?dC?YSS-7lL$F@U@qjGj>_&2E=Zy`h zG}*;w@9qR=!wvqmZqFG5U3kmn{`}zS z8ihwMFw+jC4#SL=iFjurWYiAKHIF}^7f8I_h%p^!J!uCdWn{7r8LgZ=*LQ$eU=G1jH3L4d43(>uO%4->@U}t9lL=HBZv5i;E zL9%-H5++-Q5tK`(iJM;i>L~}F%8?T(g&3qTc^g+czNPaGD zqKRvi4Jm$bR(;?0`pobmu3 zL*v&F*xvL@wR!Le`13Gnh9bdtChgLgJI1>&DxFQCLh30XEFq)~oG&J3qBG&-VCW|bCH}xc`;tdo* za9YP8W*+Ut{y06jE1Kt*I8U(xQ-i*C(#-Z4dvIGa!|`j6 z#GDdA!C6-XsN#u92gn=&rU%zw4WD6}{a9I$&nF&8Wj8Wt!39utcp08dm&4}|5O}z5 znetiOIGzWtv*#h6-=<3T#5Jd7_Gifv%o8!gJ1p(P*{p)eVgeRqgnmg?IUpA^15aZZ z8v)T)Ig`t@hCf?oh_%pp_Q7;1QCXB> zmOD`WABpjW!kiW_?ZGtQK3gp0175)MAX=5fZ8TMPM_^!?#q`PgZyYkQJ1jHyAdmyq zL5!9(st3t_er<#JR;vF1wjFzL9|gN${uJu3?T2362}jf7e2`x!oBBzm3KS^44!xKE z07r@SRp0*ri@)PXqif7P{VV?f10G`V1DQ3I4QIWnVY=Lb5a;sI=?b=F)@11=8mI?i zCI{F%HNEWrHVyrI!By4!=4ta>?-au1L8-5e#(IWUJ*nz#4afmLVC>xjdhi2T@xdTJT2(L`(;WqN z&7X{tRf-B&VR?sdjwP(wqmux3&;YV7;&obbQwZ>(l2W0#ek}g58rTm?6vqpa#YzZhiC5o8Yx%|3WpDPu->)mZ_`oY)8_xgKUI}ISFEv|tkch* zJ>jnKi-D9hsqP-ZF1agXnz3y(NGwIm5Xb<0VfErt+pX<4MC@*7cpsSps{)9)1r$C& zSotrB4qUDcQn2y@$L0m`#~W98s79YynK0@id4SofSPEDVn*pL7zCU6tAI<^%`I311 zum>5Rk-q!=2p6&Q#6C)b%WkNc1%cbXEs>R3Hg1`y3t($L4H^6KS+W7V(Je`F&q<-= zgm$$gnz^H95_AIjo7I}j*`54g7{-3N$C2u)da@_cczSW{3bkj=s#a3z zbjh)Vt+Be!88OzskBscTFkQXWj6znGGHaspYHo4XFg%Fm~^ZYhVamme9IK^)5M|Y}gZ$@xAN3G)8N<_M8<0|S}f(G9G z0cN}6M9IzX%mEA#0JB^*?SiDfRP?ut-^CX>pI**bY*hI|_nO%<9(b^L{{Va>mtB18 zhI8$nxD=9nrf$gv6Egz~JBI{SI?ZXh%#j9$ip{_d@G#)W7oV>bJs3Cwsx;#~jMcXD zdaqDS(OJH*FxSBhpFCY-N@|~yyZ{z3viPEBIF|)V;jfIF7&;7w0eE^IWERdAX)2{vdHaEOqwFgbvI_PE*UUyGwq#`L#*XY!7+sAY~}`e2?}$S zx5n+-D5386-(~!K2OOqNor6@2%@B4O*6XpG*7qxhVJ&SVm_Bu!&ifU=MB$ULY5-n>{vbGx{ONc=jdc+~fR+*CvX1&+m* z44wNTEbQ7jY6?3u-*4-UD=a+cd{R`4FDIuJG};~5#(DTJi1Y^iK7Oppc7SYL=P`xoC95ZRL}+-yj>3>jCcxYwUTDnhAkuX^Au+%0n7KRhP+pg@v6E%-G zYrmZ0Rg7C8YA&V@>S5ahhg0^VS(M??x@32@yezT6a6?fxD7gC~=@*LLVe)Xl`}iN7gA`R{#Q-pFhC@Dc z=ZH2tY=hhd0Pqp=+;C5AwI}aRj&2^R`efyKl&l!9jd}}YR5AsYam9KwQ~@PybO^P< z4FP_B#8aa_UCS>ODC-3)@s2CoRP0+zor4$0ec{z5*gW^aEwF9*At`VN4_PM5SnJE4 zLp$QaPUn0;B}s*u@$WOpvBVjXuPu}`n#+(3SD|5L93+94*dAM{BR&P8Kb5?;IJW3|9f{B7uNhR+lVEK(JaYguzeb3Y#>)Ybs9VBK%)|v( z$=RTc@xxb3nkT?%WU&2ssjg=Z(MTABuronp;DQRw8p+J(l4ab6`WOob032ttgM8*& zjtUyt^WUDx8YqO57eNnn7p-p;Rn}EFOx2b#oXm|74B)kw*%q#5sk3z51Pv9Np?-ZA z=g;TB#O_;Ut)%hQn8tQtA{GDy5l-a(Ee4L$>ypuBfWpL9 zvx5BaG!>ghZBkO0nEb2{La5{j_}Zjm=5BKr$}Dad0I?to&ypy?+|gOFrZo4fHV%GI zfq8+6gfOG*wd{UVsbWR%+!>5UTBT%p5KT7(8L$i4?5iNjg|)0p9lX zK*NGP(LQ3XH`sEzPjN-G#I*&dEigBh+QSsUJ%}HyDPOWv1rT;~Ey=Jf@W3<~^ZRgT zNh}ZM#%MG&OdA^5CLPF+QcRf2rL$HjK4uS4ato}{Wq=LHSYhg4d<=5TAHx-VSS_4P z>XjuYs45v-N#gTe*czA`D4!oa{=9L2MAZKP7n&e)469 zWw$2AZd?F-nGgWuKi52KY&x28k{Fkzf(9Obcm>(7GWO-6I}8Ja+l-d~0CMvB%qnVb zqdHU+&bjagjxhBgqE#wUva+d^s#J{D$1Aek5qX22%kMtKN86MC02NfO6GN$0bWrQK z!}Yx2gJiS0JlA(Aam!lDTr>yfrC|% z*j0)UGpuS6Pa z@y8#kS{a+IMS*>nRuDQMvZZtAa{j_ zW$}{;db0>|*uJ}Pyc`>77I_^-Wb$>VlC7~!*oVxXqP{VL+VmJ7b36@HjMr_n<6?#P zx%X`RqLblfM_T zO17VJ{#5Rqy=F_IX7IrC@)`Csm}HF(zoe%%g&RAf@!7UW+aQZIue>sG`jLuBD9O&s z+O(BUY;xh#z%{UQfB+bA`GEqm@Jc<)Un!O$Me5S@1wBx@HV!b;G*#V?2JqH#=TrFU zvWkz!GgZME!_!^Rae<&_VSSyv`6JM5-ZX6_@ydn>w*+rBkAg9stsI)CqkcR~U2ldH zEQ<0VXQR`C7@!8s5`vOu)?VccW;G%bcjki@t2sAV=4)Ab?@Q=h!?DN_Wn9uuwyxe- zoh11elY|+ez8izVu*@$sbMeMyv)3veWXX1puuT9nFg*v@g1v}UOUrhbi8W%f&D~$V z8J>ePygm3`J+NWjP~xb0unK_r#8cEvPa0l5G$zk)M)#_1#r z@y!&om}KJ%@??rBZEY^ASW79CF}z`Ze4LzT>%zuuT90=kwKogM#EfuEw_~*!QngYO zur`a{3g`rnJ^lC;oz57%aMDz`2JQ}Qa^s8;zk)4jnm%gUXcJZPGR8a=K=trI?Po{M zB~AebfxPcWU>YIq#)+8igPRx6&fsq}AZk@?!3kbcklec@z4nwSKP!2|#R 
zzydC($nF${FDiU?I0LZxFS04;(%Gm2Qti~j`_32-1d>Vb&2ggKUnpo89^~#MkrX9vgksFhKwdEaQqPkSDz!>UV%1zIY{_(|d%il*t7z14J>q2V@zd z&T5dJ@in8GU`=uwi|17A*BzKoor0~?V(l9q-UfDPI2V~3o-Mx&zz;#iBR3}5P*632 zA>&Y`&q^j8~<+>dE36=xvN1%2!h4h-<7voDjX zrXB6U5YNx&#VWvr`PG~7Hc767KHLY-pUhdm+;{G&uLl9bgNfXp07^gml~Y3f&OgQ@iZM-dN3y5kxxJ7wuX>xSv3aQk&4drB=A{Zyw zKhwSi-T*U`OB6wMj2$0bmWvKIUAocdQ7bB%v{4jI1WXn)q#!38xa<<*KF$-(X~d{{UdtTID1HNE=^*4}Ue>!P%e&vvuqX4D@!wA~1fT znn+?E74}4gSSa%r{dQysvzE$$)k*RnHw_UmWCaZ_=JbbX=1M%^<*N=GM`jKXFfa== zLmN3>PU9W&=8pA}E7&kzWjj@HO!J;|_98t_=XtT(iXvf+W<&EMsLlb3xydHux`?%$ z)v72ZiHvuwU}hge5oYn8@{q2Amw|u=en_OI+ZxOoAfgz}%^UpvadwKp*g{+v^%*mS z2#%D&M*!c+O~yf)x5GdYzzmj0BwlYV&l6sY;yT#o{5MN44SbNy6%k1IFFgUt9$=Si zJM8wU4UxYU+L<+kMMbfI-tVbyg_*)QoYe_ab2rO^AY48qNl>ht$4=K#Kr>#<;Cv7`Xy?O=Hy{Q%Jji2~3*=n{!tjRy>4qyGBE7-p zYO4o4NNmkN!Nj$j7$u0BdF+Vn9$>0q`1mgavL~9X7%^`&TPHQ!N&yCm{WM=nk@2gg zQ_hN7@vPAUGDnNdnWd}}1-LcDhm{KKCPvd7!p#t==1s#a7REfr+GVP{#a_8)`4vqA zPbRQe_3TAanfTMod_y7QKFbRR+X|%Ukd5qXK zF3Zw9HO#>E*%8|s4qK|C>z*iZ97GQ*iJu|FCdYc_(M=cG6VXDP#>kt*RnfU(7pIAg zcZKK1aXFc;i-S`@2Lun`m_5YgJNO0RD*M32Dl)2SSNk_ z2cjsJS$)*gyzGL8Z_sAV!fUCl^jUrIIWk8W)v*HuzWsN=agP%qFioZU?Sr6s8GBTZ z92unC)36T94j2=Lj4$7SH+wHUnj$_52&5fh52@5RD1hAT^*v|V&> zFu(vjhu?}o74jdz@k9-Sa5Ydh4P(e=e!xQ+%}_H07sgvAL$h2`GoMM=I&X%(h=6hD z(W3aOTBku3cT^pjf$|1>HN-n?(cyb2^lu9BSFG|XI8IbKaz_u= zPFhO!_GCQqUWhev%MXJ806rY?WY$R3LpT^1UJ=Q$sEd0P00X~3LEs$$WF*L~A9B9W zYL0EMT`sW9@7ET28(e~U@^>r zEh*aQFJ%K7;hwrdhU#BH?ULN4}!e|AU=;H@|1!bWva@AE0 zMUi)Xn5TZYQ@N7w+$!zb_{!OyC|u02Tv(n=mOvA;VS%8n4V2lHHSEL6sDaGk2Z=Ym zOB`RGH4U0POcj!Kft_IW?~FAY+^T9>pN`12>8@rnrn1H@PX*g0uf^xGEp8uu4@eN9 zJM8_*ZwtnTeN~Th&$5xt|0UU>Y zk;K+R6;$s#Gui3HbEY?E2~@zC1^)m-8a)U=ZB|LfI){Cl`3^cY0zhhl3i9-bZxja) zik8{P8*>HM07wq-yg1=hlT_uTmgwvjGYp0r4?cJrT6V?aTcow~WsLv;TLka281WDW zZT$NokVD+F$2{8%C@RiXQ=swdu)s8Iyubn+6(BZMR6JygA;$u&9J*)YRoJK2?hDv_ zhmtGkXZxJx8C6X94ZOs`b3Ow*@)tgqh{2^)KNVF!N(P|=P9jX z38#2x0}CST!7htKcAVE4Bbk%+LG;DhtPOelk=be7wd9j@eT}`SwtF;Ld{QF)+AN-- zWr4xii1;=)#DYPPH7A}hTHPyKQ|Pr-2VF|2b?D^Q7C^fS<=Q5x_^A`eSi?Q$Vabvx zW$YALyXVV+*n>E;r;Ut;S3RvJr~$X9m>bXo<6g{x>B7kju)}3k;?DG$GMj7(WLt8s zQ*$wy$&j8r-*=w47=R*(S-^LN`8yGatcWalieKr(hbtSE)pohws^(^A&m6D*XeKKl z&~#rymUNOhn%w9OqR8L3l1SGyS-5Hd)WKk|4#2YTI5H2NZmX^U0e*9jE1CLnv~1FS zGM&;!g6rcumo0&cdhFo$;`|mW?+ha{Zi3gI0`%P>@_vR4sn7x_sta+Ra6fVn3g!zqAmgXd5uCQ6 zPFfDyE~3i?nJhnU2N+?@joi7|e4g_#RP}-Dg6Z`(##HI-)s8Sr0MJ2a1{dJ=;~}b% zy_Caon1hz+7-K4x{c-C*Gdy!lgPaszlJ5idB0AY*B~+7%Y7-PrU}t%Na&yE@mAO)J zD>N|;vxZJ@yg5Hx;8$m|r6&`WU8&@dM8eMgNc2|9HZs#^qV3zc$;_R)3?Cz407X|R zEZf__`6oUew!L_NR0nF!!TT9)M(=xG#E|N5FxTin{O)j^;r=>kV5ERp7we0Ss#6Qo zbyI}(w{&~bC(i>-n=a+*Oj-r%ol-JpHroWP>_lm|jC6z;?JZae`)RUy?DUoSfC;yBEgmk3XLQ zHBg%Q8GBQiH9K4&0QY=Tr_~JRYLT>2@$QCq;Xy{q>ZB!7x3V{2W&!e`K2C6fE!y$& zadA&G-J82T91we;0MbcrH=mhGN62w2rKJ!qbONWr$dUNMxUatUeP^?ZvMeF#C}7j8Kuk*;9# z`hp8yX3YbT&L1Gq%`6SqV8 z_Kc^WR3xY-rU(FEe>^>mz(Z%CDNcqLXR|Qgy`kdYA4wV@fq|a#frXr4XP?xAxs<7^ z^Kj7omjUuw06Q@R!bc}#ju)X|S$=yU+OVnE(oD_Thm7HTh@;OJ{LFA)J$Lg;m>;zU z9Pf5O(0qqf)bBG5Vb2(p`@&jO+0?;|nwU53%>1i`ux}3KkYzZjA55uwf^jjh9Byup zCQRlQN14t@xyx0Y-T($>2=?r<__*t)cp7bGw$kXyIs&(fO0eFnW#e5_geg zLyXbqh*nwGemhaF*Z_dr%X%M~c-E(S(F{u*JJ>V31>RF0LEZqD5~09y5o%)s&m{(X_* z$qu~`aI7YA^%naz+AOIV^opaSm)>QQNh0_;EHlr3jsxI<`n72TIt|4}`9{wsvqh{} z7BB+?4huVwK{BH$DpYC7Y^eZvauiLjd+gEm^TsbVe>tfq)4Aly+c0GzcwiW)*lTCd z1{w|{S_6%X-vtP9WgF)c9Fd04Y8PnA&X}A`ffk0(9TaQ;MH3A889DC=^fez)sPdW* z8oL8D5eqPbU6JIKtNConS$!(SH-^ary|K7kMf?^=qV`E07 zvp7bVTTG6grcE^vMBTkR0`LRD=g57JZZY|l6Fg@2VKsEu1-*b~v6y6EgSW{RvsaTD ztvHm_<|gMKC5PC_e`5P1J#u6mF)%g0@cdrqfBnk5&2&`*O-l?GygfVPV5F)On#q!U 
z?7$uI_v0OMvSzLP=LECqRMnoy-OmtqjuWQZ8=i3Ncj9b=d4N?J;yCUbB0?%^EYOg; zs1U$K8y3F9$;o^^lAJNCWeH50CtzbP9)ZRmCpoIr^ury)mm!UQDVdq0lb_R#BV+Xp znwzC23RLfrU=3EzXU<`w88e<_3nzNCgHV}}KILJxMr*A!ksyF)&Hx>}nLU3}HdYOg zmdY71Lqk+-2nTHboUGN6R1!IoFWa?P>@;`|E*7gp$8IETT4X>zaejk}ZeC2JRzS{X zjeXu!R&Una?@dDtfuB5LTVtxPnleIEx+@w0c-jmv$vLP5)?0(R zJq&O(Lc{apEv6{-=Ck0DpwI!sqc8(A>>0+(Mo3hUvPxSrKm)E_(T5;LW5pC|n2>W{ zQ2d<8bety4HXx_+8I<~V?sCPg;ME%sK75g`$Y)_Vp;=jeDi$?3k1cZ}MZ>futuT1Z z4_nVhd2f&L*L5u)-?1c8Y)tCgq`%@1X+QQcGT$=Y+%|fZ=Qz^q!336wO z8RG$xO>k$J`|vY%4nYvc?VX1n##)I@vpIvW8m)?(V=%!BFUcMZ#+glLjnrSu8}PjO z^XH9Sz3f>f^QkP>?JF=9?1~y>b1HKX&;T%4!zP36z<${4W$B4CblBkUrnkwA^aYvy z$SavVX+~311tmyiKrz4%J%=8As-{n>)m1slhOw@EdIPZpLHy@91ij-7_ANrHX4Ph3`HFcAFY_ZmMr$fX)sEN97vv5I(jF|C(}2aI=&Y+& zUj}y;pRv;=n+Aa0&cn&a6muLEmS#cswQ%CS5NM$O#o8{20{*0F58EfW zH)3E0m4SWyapf-u>hhqft9JBs@o;#Pq)gFSUn~Xze0t8O-;t9lx@&+K21D(|MjqAE zn!)10P)&Zo4ZOXhKmap5h=c_Mo!e(h?@8o9FUb~FgPD@k=95s*fB|NG9Pup8N9h~9 zJQF?xi-F?-RFrL*q4RjSb$I3g2YhvvP8Z+JbIw+A#8)kp&xgSZR2Di6kWK+r4&>Yo zGDS?>sCTtZ05d;sD5uSrMR3Fb3&Q-7VDQbh78-^J6^nJ&U%`?C2;R16r>E*b2!-uW zmaSsAYA1LcTuP002w)(-yZp$iU>CXs)KI`8_QdZ5!;sE8N#JT$*ySOrmV8z19uIRGqQGh%fQ+7onKfCw!36Pu)6ZaNi-}apqXe+sD2dp8cXBg# z7{NBX#ck{ppSB*0PHZp5UsUyL8j*ss?aPD&CRwm2Unm6et? zWrxWVoWD(Ej2a=J>;nLRj@E1#Xo}j}JW~<)Z#Ox?%}SuzrVhNN>0q6Pa7O2;>f*MT zSu@3Uf&h*{2(s(VRaD0f8V`;%L${jvgyZfcdPrO)NHBB`mrU@mPWvKT#tF-1*lbM9 z4{Y@@I`YlG>z>s+7qcCixYN@;LxJ=$Hn3!-6OK zXrHrWHXFefyk^magvkZa z*widO`N0J%IGIeEt16iptPMagRpahKh8q%BkJDGc+Q2?Ag@cTi9!m@&P$PSMLrn5V zE~6aKPVj*Bk5dFm#bhB1xo=fV0&LPST<_$*DHaRWi`m0#{#W=2szd>2Npa^0Fg~;$ zPgpd~UCAB;33ejqf_=?>h#7d~4Ob4+zWeisJQdW8(EcNk!1KUkoOPA4dn~*!FbL*% z?^hmR7=0-1o6KaAba&w{8PAd-8q0unnlZgdq@`9}&hUbX2YkpJ9q57_{rL4Mm?Hfo6HF|>j38%BGr=@++VnvOkAmy~g4QpeCHI6l@ex!P zO*`^Lb8~~Pz=yeicp71xI}uI=nyN#M=$1jd>RoY^TJz8`V2R$y0W>@36zJj`Pyhqh z9(AewC+arS$%7hUi$4`#Vmrbodcn^VIQqL4)@L1Ny35k03#`-~f}MlEo(1Nw-!^-B zfEWh>@xbQHcC{|rLtF<~GTDP(y~;*;54IhMwsTX+>jtjEPrR+Vord58Cz;O^WxXFG zR7`B&cThPcHJmkBl3S_R;1<5X4G{$1F6C2QbuT+C2&XqH8m0+>+1rf*Lr1h)THV{_ z7ByE3@^D46O_*W1c>)}(v)w49d{~uk82IZSQ+OXdM3%`Aa`n3I(*pq4d=bB~nzr1^M&DDIi|xG4Iz2LP?uP(QD2VEK6tze+A`DvFxM7lS#zpl`vV8C9imh2az*B z`6IW3dNx$k4y^ODzt;%dHqA*?b5>3z(mlnz)t$rJghSuG0KZ zJ8;F+v;n#cVB|dg_%%eq%OG8%_k!Vg96r3A3MsA-btHTM zVWYPUy-U2OnH&L{q^eO17dI;nQ--zqx*5WMe>D=my?5T>9H7&t=v zI-K#^2*8kh)q^>cM~NQjxEOKSt~pIX097T_#ypwqJomtjDzfdW5MsLEwc(t?395JK zOIQNtpC15U9$}*Inq`r{8)_nw1hnMX>+4@Rnd=DTjMsA%m$4j%26yp=@Fh=GNKP55 z(p;j*ngAQf7;F6q5!+;je>Gt&eIvbOkTb?b6VG|*IFD_-vP6Z%+r_xzwKZ90^RV*6LF9L-jSOxGjVTsxq1Gi+m_Jh57l?$_bq_V@I@f70yd3_?Fj4n zLab*B2Kz2B#y`Uc1bP0ak|xe%;h-nv7BDjo#C6%Fc%IEGFBq!$PWPi4^hUIzBrBAedx} zX$S@8dODbdt%8Q|<8Mj0H~TBY`&a%Gs*mkgUfc>nUe&*Zx}*DH*S7+Zrit=FFqQL- zC{Uq6Xmz&uI+|fAs1#FyVD=+^S^94qmKtjv=$=h|xby!2gTgz@Im1-W%hp0SfW&N) z$QqjTSQ_C3{2Vm2GR@7(>db2+E!zcb`cE=tPs?8$W3zg4sz;x&5vrn&LBz?}fO^kj z)ctKVcRXHsGGOe$Gyobo1C#XN1D?x!O&{Uk3e7X_9tBUbgPhEyF6J{O)^goNEXSZ>Y%brS%*FR&mv4+N&RCQPvG zNzMs|DlDDJfVfRQ#12=8sODmomzuaa03yIA0VgvB5H*c($N?2gP82tuGQ;^rmHU@n(}sY1i^h4d69-w2Q_UzleLvQvFkYt-GOOq_7e;R{#FJ6=EJ-%$pc+nuD6orekE+U6L&s(4$mW| zW&nJ8<1ci)g8*7f={^0XEWg65>UPZmAeSJ5D1r#OFyId%V0w{_Q+q<1$sHc* zqApr5GBa|i=QXqEvNa8~tP)B!>I>s|%+GQIvmy5V2=jc9>~zVdCPmNJg1kKxW=GA` z4~b@!2eFT|m#t=7pd^;)2ZUjMc+GZ;B%jodnMtJXjhl4<)^e}}b)f2Z&RYCmnHiZN z9Z8=jBo?D5oXuSaiPu}G^4MDJhYxjv*2VMK7|pqlbiDgSqcNg(pcJg+(LMC+yg2U+ zk?0AmpXRCX8S9Luw%J{Fb<0YET_$qpbyyZN$$gi`)@vzn8i9I78X%a{HN#@^{{R43 zf2}wy@@ssNR|+Pb;$XTV>A-r(?<%wfI1Izgi|UeOb6KINo#5OuA`woGBGYt`4l}YE zg*IesH=j-mWuPYsxsiY_xvb%-;9h)T&+d7UDEtiHt%(=Zk?v_E2xKrsz~ZuSijABs 
z3I&D-07SvL1%f{K?btL;PcCAVgM^Rg=?wF(j$$YZH~OAQ<1&TwFIw2;HU#WeuTQ~q@x8`r!OqnX}q2%BQh-mL77@6!i8rkWnvxJkx&blyl z;-$eQOzenk1iDC8JJrDo`jiUx2#ILG`U=M8D?j;ktZVkc70@IV9kDjEH-2-994 zNHtW`yw8Hj^Jj(B>^?b3=yr?V6x66Ozbacf`j{}R+tiQ^e|p*YL^e4lzTYK>;&lD! zSe-w5IbZzH%ub)Z4_1>k+Y_VjMxo&LXgR3n5;e~wq(OT-V1K4@l;U*#=)R!pPl~mK zs(yMJxeRXRgY_?uY6(rrt!Tyz-W4Nc-cwEx*N%_57^K!txDa8dco+sq-Bt`~S>RZG zN%8g|cA=+`Ic%{uC;%84f!XLlP07$$mLeO$*XdQ1#J9z=mYqfltA;tj9G%Ic>AC>D zi7t7<`!7Ctg7DJ?%>*<7*dAmwFxE7aQZtS^W@KyW@Yn{K238NY&8XLsNu#=92{1_m z8uVS7H=4ns@HpeKIg)jR=4&{soo|zjhKi??AaVEd1YPa<`x&h$^*N2^;#{yBWwqI! z$jkO1b0llYKZ=?f#%E)Y;K1X)4QcM?DSK7!TG_N*LY%u+LX~~!Z869;2k}%oWD2x0B6gvPRotB3s zjms(G;bs)vBm1b1wn~bQUK(kxhyrJMf$KQ+*d2)h_9MibI(Q5!W{bN}!64TjsF`gd zl9fvL<>x4@0bm$6&Txmtah`GTz*eL3bv20WIjCxvZH(GHry4t6&EW?i2%li}#(A>r z0INjTPW5+V!4BLIZ5DhL3ve_OG+UvZ0PoM+fchlYMyi%5&YgOc{#Ok*} zOE-nU-8r+I_&rDxZFpbSLa6h}hV;R0Pr{5{BaDYja*sZwla*m8(F}mLj&O&}sa?XH zIg4A?NW&b@k_lEt2l9YXOl|Be?i^RY-j-^tor;ILE@p^W250DULr|5Nd9Rr}5bg4e z1!PME5zuJf=~WLv1TnQRZe?yC0p~*gs>mv(WQaDW96mJ<7aNmY8gm(NFi6<6%Z&9O z{{Rw}4pg!nz!P~fXoBi>Mzx0=FUby;osZH}koCt>I5ar|usROeR%Sx99?g7 zxo=TwLekBw0l$B_XuCA57e(y8n0yzvyzozw_}IfE#)_5oQHf}ZEL~?K{BP>QxBM|3d|9_ z3^RaP{{SRq=(bU^rtQ+;2`+bTHw~s?Z zRSc1<`Jg*!hC}*o7s0HOLpg0mt5rzl+sk=Ez=M;sC7%2bnSt1ymx0ESC#>C5)O^lN z#e5uxf&rsGNHC#vP5#{Uu)`B2%VbsL)@!sGB5E~6*(`P)h@ME)w&Vf5N}nd7*@9u8 zzZtH>T(qaF4aRcOW0)`GPml~Rvd-m^xKuI}D`!$w9Vuc`<6O)+@3P4B+|YPQVD8N8 zo`<-W4_v%=Pf%c*=x3ifu^agVIh+I4Kx`!xy*TZtHUt|wU2K`p8z;FDHqBPAQdUzn z0rl*eEx9lbKo8dV9QerYYW80GYN4!cb(x*_z|$Ia^B0%ZJRZzQko1q5KJa6lc&epU z3`-B>uDu29_7*t(09j@CbdygeK#$8xR(J7kb`vZ$u!8U~1J7lV?>bk)23sQnQ zv9KP!a247E&wv;zVO%2H<~gWbm!5nl)QW z^3@(zo;+T1*_cWjE_h2U0B&!|U`4Lm z@g)e_R^db+OuT9t+W~-BeuN(llX5PS24QC_xT4c+{Z2tfPfM2dDHZ4$u?Wj|amjHIi_n;b^z71ODY(xa`W1N%$WwKyQbaKl{I4Ci^|zeHh7CbH$9)wHYS5oT_l8tCIJ0e=OT z#-U%<%w?_{v}uC1=$(sScy=%Em{}Q3TR=!EM@r8Vt5`6Y9sA39iP#wT`jKEVCdjmD zTnVqhxEu5pU$aUl<0V7mgck#4D{^;*nal%6>)4NDY;j`^p8|M(QDx)GI|rE|Dyf`F zlbEwGbOvXFXJ${-%#&I=Aa&i=Wn+a!(~q)`LV2%}^+ zj@e3OVV`6ed3u(ivG}Uda|1}sEVGaf_!_E~V@=Vr$8Vn^*}Gq!I9oxV>6}hr@kf;8 zo!XvRc4!&-;TR`{)mrZQa*j>}|ei)=ym0wNa`P z3BzZG2cp9Rv!26?-K|va+0s%8|CBh6bPVLy^pa>}Y@E6GSm{@+K zYIWr8-0fAJspNymhag3z>?u&zz2}REatTe z3sOvF%e3oSyfycW>^m&7%Q*PYJP(0_$B3UCT}c~DmTNefPY_vYWRbS?m6^9{Ik(63t8}5c%#M>wmaWcg26BkKJfP=K$R?KJDRShyU*K?GwdsCK`pEt zK3-cYl~eC*YUahWw&b$&yyMUlwrZ@@Ev%Fdx&8iceA%oonLnG0>UlVU*NwYQHN z1n8#8Hih0lUU2cl=gMBH=Y94Ba$pIfM~0YrYvGNvg1@bhIapNg>E&CsmVzd#;f#0a z0F6=kY+1PFs?8SHKsjmgj2*jDz}I|f(kWm}uvWW%eU^>KU(~_%2hyO*VxJO<&;EelnJrSc7cfd~sn( zRO&XZ*-Tdp0M@}i;livWh%j`L>8N@2fxy#my6ltwm>{c62--TpvY(V2{lMmSCR;^% z4t-K4i#e){LwQxJHO&ywjCLO!cpd!tH#3&*u-hWDT=qB6XC9@fYX#Y`2~!mBaOeyT zW6xlHc*ehCSu>nf6TA$}&ldBEH6}YJjZKY`wulH83mbgaZcYz#QD(CyAy2eH1T5W8SaF<0 zmbu^WUpN>=+d6FKvblnaWWX?49@)VBsMALLKrrKj*~5S(5_o2asUuMu!e57&%`GsDbTmjD2yahYvKnUXYiU0uT7qyU$VH#5mS7}vmj)N%Xw2y9WBo*RZ9RYauD-HMS)(UIA^>9 z1Cn}q8Kzc7PD{ZZuX?rFq{%kb8`*OAo;x#NCt?g6NOzN0A>Y7uamSRZs^+n!YK2-R zxHfixeBr+sN2#_^S^RmV8B}JN0;UFn)^Heppm3{O-n2KHA;iQywKzyI5Uj>|^0Ru_w zcIM_vHk~C;vHiNCR@GrhuTE|l=$WFhIlnjs_8v&qCQd<{R}Szmurx=Ec1u&rsVe?o zJer_p2rYqNau`|f$F$EKWnHmd07NXw$Lqk84x|L1eT^Cn_Kv(4m3$#2GNW{JiBz1~ zy9+nV{+og`eVR6$9?)d$WXc#SyO2P@IXLh0FN0mN)(@L5?m;qM$EX>DmWQ7QkUjV{ z@MfY|qb65qYr93uweM<(2DaSy_RrXhEr9Sbi>pWC`B0d0u0}@2&GQH0*O&qN;mH=V zd2aszF$65g8O{hMtk80-y-r25!{4*&l}=i@wp8>OLmaj*k-aBgyOo2(c8r|A{cE{ znVV`rS$X1mU~wJErn6H)H1qoq^ZXthGRq4jQW1kZ(^_BO?R!sZ!{^RTX}g8j*1E?9 z+d4OEP24VOe4G)p&w!bh_70pTx`1Jfdh@}y&DqXTf4w_H1)-y6elvz|a6ma-i$X-; zcCCLhlcBvzkVVR_FVP 
zK-=6_KHF|joT_L{WX+Aao^x(y1`v9{+PPJ`Gmjpw$v6+~9k;d>hl1R<8;l4r$m%$Zgi&N8DVFs2=FE^&* z&1hr~QqNvC2jOD4hh{_f*};fcDA1>HF^wCKaw@c-#8bC8Ql$yQ!Il4 zfG?j&c;p^tq&9N9WkGZlu;9W;I0xZ?zs>+J@4!e!-6Vfy5Ai1ucmTn8$>57XKM5R+ z%@+v)*@S$2kuEr_iVG#z*kBp#=Y^-_Dczp}lsclas)m(Lp_@z}VpwP`XMPOn=aQsY zH<6&YZw*4r4D;C$9O0sF-G;p986>uFytZ!MYI)5Cul3>pCiMwzuQ7q9zyWC6QQ8ul zJz(6}u|@i>xD4z74%`n>n{t!X@4{O+Y2v4A(kOP_lf{I}&t`Ip@)4u=gWVs6nK)RIx1Gz$`F0 z%ls0-1CvbMakc6L&pZ9dJKA^50}p`jg?9%vX6aEq@cb6y!Z%4~lHJxrTi+(L#gh)E zz&rl{T<}Lz*Yu4ge70*f4eflEA_iuO_W2+&7m7A;3NFMPw`#UNy9_^5fJ)kicPFFO zt7fxY7q47~3ms=+^PDlyh#F2d?PpF)1H~<|f(JR5&JbM{fzmbtsGZ=ooPNA+9uID2 zCr^To<;x;yB6upQiP&NKoOz0akj`rgg{3&gTJme2{oTI1FVEbM(NIJYZru$44S*~@ z44NTT{^i*+41GzhpToi82kR{M;JiPyh7Oo2?s$Vk@q4F>;H|6-?~Hw8GtQ>7*B-O* zmXNPcQ&s^y?H3X}@#rrGIl}WM z?`c&O@&*o(_v6p8plm935`CvivChWtOxmsI==JP|(nDg+%V$xd1?#!11vk(^u77?d z13MCK=|mAcd-KCg*WXoqfGvKA9IB_zsVZGp$R-N~_5dx;agsMPJuoypdr=5?4~E5S zIC65?GOmz9{!7j~P|w}Jc;FqUU~b%;a1XPp<$AU1vN_^tg^1sbkFgr9Jk2&_;RF*5 zjjy~l#d4koTdPOOII`Q^DUHaEUYhWh)&Bs5+OPXz*S7+YSG9lPHmm;Fb?v~UX`+09 zK+E~&ls*{>3`4E&!GW@C4C3_unGa$$S;YB^m!~6^t`8LhyzsaqFYsCgH(Bl<;gI$Y zEa>)Yg=7+?@4W?&8ve$3SYQqrEXB#=ZQYA)4B!e!P(1M+vEw&L(7eFFEI93l(f8x$ z&jAVVU!o&A*?^EPsF|Dg>Ea$0`NN+fz@>O^zhoIyv=jhuxEY>JVV(*b9gka`;2m&h zKKxeHfSJvlJ}~Qn91Hs#aAX2mzYu_Y#h=dbFr0ao91FI{fG&E>&IxCE**L@31DXva z6jC>Fy*5|Hdk5cWTT8&k+ zxr6CCfU?a`#thH;K>Z8j$rWzYXEjoxdI+#c5wgboetd2`5b-$+52bv6$1#P8S^Y@1 zTR2^|=`sb*@IQVx+CHi_2`alpFGDv*zdnbzZ*ppP~S>7 zGfw-%2f1Wwrbh?E179(FIFF0KwJF7HZ`DafI+MQ3!zM`NZd7w(=6By2?L#|jEoZ$b zy@7p^DuAlKGqrJsU>m7`0AGGcSsEBIxDk`)oVdQKh~`eCm#!M7paQtrFM-fl5vbB) zYqkdgz-w8bk~9gEWf6XBrh3sc^W)1e-aFov-rb$xXMT9PT$gw>iSWIm6tE{G4olY* zd&?f07zPKs8JHgih>ugPe8pa~zZsUrwuQXUnMtOmYdb&;5mfb7ua`#0cDY)IjPY!jWpUHo^hA# z_4*GbYTzyhVSpH4*n+c(0Bl&Rp`UbUfwa{(^X7}-S*$%nf{X8r@w3F`ui7rAi_b@L z2>vs)w-69m{Yc{k31;R9t%0;fUHWQvGM53&9v>hho;(9uRZgpFwsBWJzZP!r-C4*7 zpgh2^Bhuawn@dif+26$`D~FA0{{WoWenwm$T=I7>k28Eyn1vzn(L}%{oBsfd5k57S zvh%aB89g`}AbP~;*(_R^1w!%48`kIB=W77BYUhhj00XseLYi!Pf$0fCCC^m>m}#Fd~iOF1}yGe zx?!B~1GyY_0tv0O7Ah#6<^Z|+k!$c+7EJz>b*}9yWRPa1$rbG*W0GmuH)A=d%*}Jo z4Z_VF@lQ!?t)&6HAtz7|Kh6~}#xVVOAcDDnywq=0K+ihaJj{Uvbd z_{7#@!AvsPzV23LMSyVmtn#Nhk+3#~`1f_sCsSeMjB<9ei%r~ODS2OHY!xCa-ius& z%soiSY8g86*}AAMz2<#&_D9g{frN2i3Gr!x+z%Bhm0HQNSehW(eJxzrICr2B3+(5B znr*fW`M4;#-hGJErwFKW7iM-@xtK)mwJKnx;VA9OeZZ}Ks=iNHOs5jCr#D=Hj&p7q z%mYR6Td`*>S=Qk)eAQ4DmK~Gey^25lH%jKK}sf5wUE2&6PDu zyHx>T2HkSH!0%hvg|Z6OM)=3>O|)gd_~hX#sOdf@<6&T8Y~CL*yo8A zf-aoz6#|K#67Foi%LB}cs%;ZPd#}Q!2XagOspxEFAoBP{K&YPQX4eDA*NWkAnCbrY5r4Y{gRq)^gy? z!LxS%08M$Hp$mJQfbd)pPLB)Ed(}N#Dnd^&ov{G09U>1y&%8azbhQu)BFuoQ;dlY} z?~6U9wtU&Fs$(@k5q9?U4J1zc5Z9de&NRs#IZw))CDATuAGD4|k2%0qo-I%e(}K0@ zRKPx;_g-@~tb)CrQ)$$X)m`It@F2iOLfc) zFXN3h(qBt{s4tyZZC!5KZ+lmpd@-qC98DFOF;lIQBX}jwhO>_%7P$8>gq`TKs^(aE z0$ONa-wPwdQD=qRHzMOz^bZuW3_b=P;agvWH2|eUQT%N)0B<2!GIN=M#@n^lX`@iL zbRWi+2r=H{MPYIPrf)R;dR#R zB9oH{hH=gpfMnlV-Yz9rt8zzd+D7#@QIzA@FGA-{ny z&<$BF?o$H+Sp8eGNi`V2Cz7eF@Gx?KVJu)^cbsyWEsWH{FLvJY$r~oW50`U)q5l9p z3WD3?0biXP<9lOy*W%{#_}`7Au?pdYam5FBKZItTUhfAc>rt5RxvgYan!xmG6s*Yd{^Q0efaLTl3LuwGCEvBK2)iHK~_S@0vU}S@;8hJqV&Ri`xC1x>Z?CWl~_VqbED4cfc$&^zVVp zPM^I5v$2uIP2zSXY1RFXuJ8|tjiP!RZlPQ7{cgVJ6_ySIGlY1?WEjXFVhA=ZQ?E>I8cw>7 z>}b232RG<2Jdy0k+WXKFOu!2T>y5TKU2UIC6W+0m4Gs?6dA?;N#-XK^2O19@H9VXd zERRL)T@X3~cE4Z`*9gj)u-`l%Uaq}{VC2>h@xUTbPfeAHc+B43wW?LJk0^?0)7BE! 
zr+=9qdhn~D>SSuxRdxc)OWiX;swR*b;Fx5v{X65(dpdcIK+@@eV=8RWL?cB8*eo|( zOV#5ujP%==Wb)E@M$YxhK{y;2i|7ezmdX5TqQ+(!8`))+0f(_b8!-$V~T^ri+BNng(Hz`xl4nd=w)sv4tL^RKN{$derfP{Re-y z80BF!kV>aD9I2-GvaWtiCV$121@AEJBzC%74T6n#OButR1sF8c0)6I@4JE%=Scmty9#@%0d{7O&4uI z5Z)W>*P!Fp4=6f0W=SzRaeyQT)PQ8L;~cHFXzFV|R!1i{CerYGz2vg~O&P=M#(r$= zX<&RseJ8^5p)+iKw}T@tC?8@TEs{5u04JefThEVHxmhlQcmaX%Uy=g9Lpy-@z$ZACpegz;F+QpXukNfqXQUSKI8yK7=U(G{1?uz^Vz)K z>hNw1ms+Y+)DlV33D~ZK?nMv=4^f8`X0uh!vhiw*-j(2qhkyQfxNS|f=v7qLQEZ$M zj+A|#I;->-gsK-PI&u8~4ngt0sAJ;mRaY7 zWmSZP>9J({6RMc#55UI1!_q*Sx5Xnm#MAJs%#IKXWqm^%e53><6wc71rq+Xq^B zhG%4J73LdiLGB(*=p&Q95s@Oc2t&w0@*=o~+3&)ED;H2}a3a}1Fj56buAzi3U6T9Qe1{()-` zv6J*8tPkTWPP{6N{G-}JUAU$D#bYnruJT~>kt9(q{sk3!Dmlh26`@4;r# z^N;{=vDg3r2YLGOg;bKGl1^q4Y)khiCXM{|MyMo%&yWJfS}k3*xwEgiUF^Dc5=zOD zsHKT7)E}Q7vZEL#d&9L3#Y4!)h?^aYf{JtSN3J$Gv{p69xRof=c9=Cp53sOvgc+Z) zAn_fvkE7x3RvT2B;J0ZlVz^?M8LS2z91$*+Go8_(3E7I(&6a+>_^I0K3>D?`6x0hB z14Gb;?3MTg-C0ulUWQKiMgIWRtX_RV86zimCz06sI~6w|eaMdg030Q54Wr{`cYtpI zdC|Wl6jsXuGGlhSnic?o*$Ogx<XR{K#J>nyi*-*N3F`cLvYt zUSHZo)E-t-1lleRK@O*Q0J9pv*B*k{3d_>810>%lU=tOM)trH#J&42hi?b9c3drg8 z>I7#Bpa5%NqG4tE>^|L=L4_Z*<=YcUZ14eQfnWmGX8_=U8PUK&eILY8=FN>)*ahh4 z?p^rBk%EA*Uw&}V5IH-QI-fF_G+Y6jP0et4`Nl+1ye!UfdnyFqy=Y*#0!N=3FkPo@ zB4b-k)j2oz#JIu`W!S2NNioF3=w@lYR6UO7)dXdQGxWH+qrH8`SBkb;dYMcW*XTi>JfBgs{qh4nX7 zOeAw84YP(zSYUh{k+S$6FkR(aP&@u8)?3$i&2Y2WIJ$?iYHxPNL6X36=)D$*17bej zxQ{--I!a~-oDH#ohd**HW~hT=Nia|kk0cFN2W2ZT3_HLK3A(_hj<$Z1XW{<<3bh(2w)W+p(fc21 zMoki)AyK(mz-00u0APCZMv+aqO@X~=pjd8zFi*A|5RpYRQUt@-h*Kwpep z&+Rf!rFeQ6wXhb8N54!A7`jXrMSPveWk`x(;MWVlJ8(NsIo?{ge1eu=VZz#U$#Y>O zcbw8Qb$y4%2;LsS!8XY{ZeVNXULvx0BakefWX(zT(Ty|{1)H460b}5^BpT1CV3|1F zTKx0_ zM%e5C+#rtXioD^XES@9c*;{lu!W6pm`Cv$Ry{NhA5AiBph)Z8EsOD@}YkPIuYYiIm zd+<+QJo#WtxDeOhh!oq+!g!W-fF772ct>V_2!j1(9zRs`nXYFCL=4{}HJI@rXfh@q zuqPT6qY--kNhZc zg+nKwB;&=?6jlJd4#>?Gl1x^5Y+4{@t@cGYBRr(0hN5ZUxIt`&8W#Ns)1)uda^3Az z#Q;YEnVF@=3+RFa^L@ge%0A=(9Oer@pH3<|&4j9pz*`Dj1_oirW%hazS^EamZdnWI zGU%+Sh3hw0A%~wEERvE6ZG%MA&f)7|SNwC9YYZ61jc=03^xc00Cx;@_X@`m411-1OT>{iwt=U4zR2jW0b;sI;&^2 zal6JzbWm9pY_p&1#m3QGsIpiAz;V{0`2~Crlm1O%!2N$U@g$7bK>ko?VmpFCBd0ixbe7n(qK*m(f&?fpn( zl4~^yi`mL%iKNI5@Gt|$HL?I_{c=GxpJmU1lSSrJUbN48G`oBm_~EP71`JHlc94-a zIk3obS?0uR$&SwJ#`A`uaxT6^{4`$F%rDQ_*XBp)8pzq-vyMC{=VO&rs;Ft=raSv} z%q+n_QZ!nsq}nwwlZni_s(uo~TLQy#ogT%V&t4T>?Tp?CD$(yvEi$F}akB{x?M5CJ z>mZKIK?6?pP0AKWilbevtIC3>+4F`o$&;6c6C^A(yN4MRYP>rH?aXMJLwC4^(fx2t zi45K@!P9rmRIyYH3=g&u9uz!uI)YzMiv0j*P|3Vd$#owhiMm7&ImJC}@<%w3xRnBa zGBf-S0q`6JP|Vk=wtY{ofF3K*K?7TWEaN{y4Cbwnl`0Ha=s*AsUNw)NK5*+`#GRaz zEC+;+TGK6?X3Ym^v&7#V5-j#x$M(073y8 zcFv|qJl+rmI@}!)T&imhYb|BExXju__AeT3Q}|4(f3_a}c;H?MwwwGT)j!(~zdUd= z8n~Y%6eWD)N)#wiTA^S$@9Rgk+l^ z$62hwstK9iuw?Ti*A(=TjWxfsEG9Gl2Q?$5Pod*{e&@vcv%F z4QKigdf62+A7V3ovrLomi%A+80jOgwXF2|aT;cx!R0&6i*zO^YHHGJAzW|IYY63#RCO;Y20fHfVAb018Rc_KG zbyHSSI%V0pGgu8{!7$O^kC)4Y^RMHE#rC+jHXDb9T%j7&wp}>Jm8DKPQz3^_e?J&U zn6|Y?{7J7S5_Hq~BIms`{6YSkXXDDmC{o zp32ek&N(|B{Shw@+$nCjY5Peb@nyU18+f_y73Qf8RrUnvx`Xavo-<`^zji;V8U+V3 zvYc?$bpZEQHeMOy9|p^TU@is+#s>$xJ;%M`*FE# z-OB9tf6*;K%t`5)pK%}gi8gLWcMekwEh;bF$yTS=Oahk9GOYwwUBRMX5n;j z8J+nrK6u!XnzVSkjnu#m42F^9FgWud8h~KzI3rUz+vk#IHFHC;=yeY}eYi3)>Am~X z-x=9jlEz`}Q&u%98ll)J^FYY3bJ-Tnk+YKU(F~*H;Bx7N%M;7F$rr7qU2Y-hPeccB zlj{<9;fruJZyE{`+3cv;SCh7gJFy@O%_}wEXR#V?4VFf9{jtgRlO%S6_$f^hHmBR< z_aj9`$)A)6WX#Qgu{ZAvgtv~MSd|kNL=4eUMD!j=?V`_uBm*@bDhahf%=sc&({qsi zBDbLoa|QEBFy4svn~NFRUdyMIwO*9r36nHh8Ym#KtY>;49gaA*+r*MxPizvDlgWeR z?5A#m^VYV!=ikp7D$E_WKD`4D4B-L(5uj(W8CECIR442l2>$>!{{VW~_-a|nbin?V z=2e`U zPcf0S#snL0OK0b@6ig4!8+A1U2ki1=pXytjs&>YqWxqJ-&4(Gz#hJTd 
zL=359p~4&ndE>8xz+lz$2g`?4%38UZ`QQ=R zgFW)JG}BMY7+(^o2~)ZQCP$Dh-UV`NzdV1_5qpnkc=RL!>d^9@g%p`By*BTU&U)=!-vriGdN;U7ACg8eY) z=FTW5;om@Vvu621xVf-Xxot zOD3UZfNP+6Jn;A9bK%>bU=14hKaA`K%Z^A6C<>kr-5Vto|pJmij>DvWoAm9Nj@H9q`v>9_I8$F*U#iA(Q+dYAt^M@ZC zSL{T35TSZLALWXBk(2vK!U15kk*r|c@zwM4M6zm5SuRy4W*E-U51cKY1GYrXHJ6-B zmrM^p0iSHl&)khST1R==05U0sDMtf|G`aJclBbTAmO{jBr;|pIChFAgJ1=Y{xAp)n$~kyA%KOA zgR%ACRpQIJRM%Z|Gv_md<69nmuot|9Qh9zLiygwN8;K0pFWNt?UCda4n|DhAh5lq; z1I7N4bIIyJywwx<%-k|?L6XeF=)*b2up12#ZG7(MCGnG}8x}?3MNr`EmDoB^7I5{- z67)>r&a|m?WvNYaZtJXQ_VPo!Yaqj0 z5uMY|v>L~f)I1pH%gVNqfC_!lX1W%7{^WWCX%i=NRHggS2T*70{{YDzUD%tDXSG^! zes?$?m|5?~v1b%`PVB>Xf&tiqb~ff~DSQVE_Hk6}8@KyB@irylxt)O+IxUnrGOn=l zZ#GA&<+R!i7z15k0cLQIH?~LjDC{u*0FD9rhZivLXVehTK=gO57iKP3Z1JK1867jc z{YWLO9H{Yj>Kb+)4W7JQQc_7$H2kUZjS&X`%>Xm=B0SZa59aEix`q}8L;ZX4p1S^c zCxGu-UkJ-)&vF@ZOJV|fO~~K936tg^vw#Z%{T6p3UPOAlI{-8Qa6t!%P&6?UeV?}( zBh9hWO_eZFH}{4^3>eH6wo&ZCeew^AtkSYJ9HJN7ll0b`%h-p|Howz9(1H44&NR3X zzsMbtRb15QRaFYJR-jvaKDo9 zN9sYZ2Jtglc*X#Q>p5?Vo6|vuop~Y)pKkM5;A%7tw6juMIj7jE^c3?ielz%hCd{72 z@HK~iHfjQiFxqp5h-8&DnxSBD9fx=XWt7C7N%Z-ot3VmvvGNA9k55V4VDA%K{%VQY z-?A3Ud({?!F$Q$wl!IP>sAgU`s3x;t<*yfY>7DUk3J4B-QTh?rqV5txxq_B~`X+wtV4J zVz^1$s1^b&avh!haBH`TlJ$ns8d1#@`8=&0{qqaV{ebf$3NwHL+}r1>=&hZqNiiQy zay*@2Yu6hsJm=&=*O}iK(VS~kV`a^$-kpYr0m0%lY}dDA*NeGbwsX9N_rZlRR@0Ic zm18xT17%n1!y7?_0ZM^HQZ)F$;-~)x`Qi`nPm^F@Yk%Nj%eaFbnStoSGw6Hp{I2x(y#W?e)hPpAlO?@W7Yd zKPK1i&>U#Tsvw%P-x4&#jN?Dr)0F&8BvU1UG6a1?gQ-wp=THpq4E=TSg?LreF3hD= zs~gb+N5&3JIL|yQFOM?AI7oHz8i3 znk6Sqd=*zkb?>tM0A~REAKIlsLsV@TsQJp$r>kJOXtCY_eoMoXTx+x%JD$>8v|mC9 zy$ zr|XBP5lGCy)mF<6#k2Ym^Q*DvDZ*c&ycQZ3rqe+392U|{7|t8GYWVF;5~u=O>?J zT`bA5+NRuQr&C4Ga@;nNRY)fiW{hP|v6Tln`2s0ulXMf1U5cM5Kra9QzB2Z27esa6 zW{B1{EM~OjyS2{q&*)@}qfFEc783X2f3w@xXpt>dP|hY#Fdua`eFz1S%7&fzhNf>h zbcOBN0lHhb0A~`mg@$!gyE`7l3cOnepPjC0{{R*+^v=j7{{YbvJ1A#z0R3w;oy}d( zrq5>&l)b~(Ch24CnZ!&^pS>J{AHn{p%y6px52xvLUX|JGmwCe1~LdOeRQ5wiMBi9MSpNXCNKt>;z@#o@4LC9D7jG3mgE4u76P}%-OE+i~_&_ z12a_g0CSvUE>_-GHTll?IIe1Z7XJVhWy|FLKB#e@;C#pejXkO{wi%j3cbVKffO}Ge zS|LKBxl}|_B+Xab(k#U(*>orvCNU8h%R`zKD=4B zeu2D5*D%Rt_wm4dlA&V3hUobN>JIzjzOC9Uj*n&&^uXQ$-UzfjhOzxfbqw_x!F;OW zYLwN>ARltfd@U)7a|*SsOq)j_bF^<+P!6QelgtYt{tA=nY_-I~RD{cRm@i?PO8|VN z1C!T`t7yARsP>UJ@r?1+jl<^65y*-m*a#VH<#uiocF}W1!Y6nt2VsOUiwhPpHKw`e zaBl_rHvM3DkY8o?S*`}Edp<2a;0w^t=Y(SWDDa0uWV?EDyvXB1GXSsv`}~L7jp+9i zV5$>^EMmV>XUTsbr(^*8 zDPa(1p?d>PCB%36+MjX;XPRXoGicO`G3AIjYGxOpcYt;e^uXfcb^zv{17BROWPyNw zKJ-uX&>2iT2Gfv2&n^ZY{{Yv6H|KnsYNopz8sX+=_aej1m-1OhBLmIVDNq^7rHU!6 z*`R0W1Xv;+YF+q_%rIFrPx`VR#68N`Elm3mqvlD)J0tDkW`XwP@i5tV1}1m2u&_G- z3m?>vJGMLTPm(p?2zUFNDqfYxyxOxjt9j>Odmg-F>;l0v=Q#R}ur%W@kUDw0Q~H#+ zi1Sb>=0v!T_sK!6s6D}HyBTx5w|at?p5~;v5 z-W|xHVO~=!A6YTM{BRgS2J*&bDSGXTq+FwddJgzeBsTRl>e}}kn%FEz0w_I=r!n`M+V*b4aHkUI zFuo?$OgEWWAfNJ-7xWH0QbB_xZ`_N7K+J4U>G&h(&pLg{68d~UO^{eUT8e)d+#P^+ zA_3gb879@%H!8P76RJQn`SGIG)8)donaYrB)dQRh%=tf4fm)tg#mfa|DrW|_x;}FR zK=OKLA4j!k8Ia958o zihZ3nU&Mzvee=1Z#^}uT*#rkFK=>>^#zYY3I70Ev)7B~GNr@GuaeCN5VnwtYd2K@&M#T}mPe4Qgpee{?%f`Yd9Nx> zT&QVon4MMk#b7W|WR*1ahKQEr?~4g)HZ>03c=|O@Oc%u)ZzoW>hIl{KnYBOx)62Z+_r$;pvr3e@59_Ud*Z<7B9&58NH+^E39~CZ`PT z&&-3plwt$*dc20r&L>ZnvQcph$gd*pJNQScKeitJc;IFUc9;AjRNvbUzdUd|8d%>X z6KDQ$r3w@%EmpU$1rwJeF`20~Knx9J01Wg2m>g=`U7fp?GYWkN*eN|THz=ff1Ase` zl6W+>LE{RI1=GAQL)bXD+7#8>$-9NuP|O;hY2RCbAAv3hCsfE)`_>wuRXFZGahnXHyhXTqXs4jwr3WkFW zkrNEk*_P3lxqBJ0fF64R5Y}<$N|JD-?s)cUf5w(G4(9|`Fxri9&4RL}CnoW_$l#vw zob2!Gh(SAQW1zs#CU_4-+dR?MXq})Kkttw(Xhz#sm^`Ctr)XGbyJ!tzlk^TTDQ+8* z1pz=Xjj8$|8#ktxPg{+xs43>(xygT48PUHd zZ$G~Pn{0XUdu3GuAL$cn)IZ0++>Ihg3514UW`x@S2ca1!g+st?LCae4x|s*42OlZ3 
zj=jhPm)Gw!!rDvERQ2=eq`pk-FEd*N4?+5IvdJVZDT8q>Vxg_QUU#2h58sS`W!M3< zYJEl2u`Tcb&OKN0F?4Exo@2H)mR^cC=M0hcNv|JG74abmtW@54V~B;FTv^i&cRHE6 z`olrQ+NxE%P1Ril3>&U_I}sM1mZZn}+N$G!N-C_+f30Ll z8VI~-2%YIF7lDDFAE6#^+7yL4O0Rtxk@I{)BM4-8Cjya)%1F3Yu^mR z(+JYuHkBfSb53P~<0a`HaL)nn<-Rhkq>HFvic;iVENAEdMxjhoa&v0lWM~nCzDU7LxzDTzA;lOZ=;EY$x*Vf6Ikl-%MYbJOO;-<;HF(Oy z^TIEnYyv;2H}LF@IYu8)uOwq9_o0}{{pjLwxN##$Aet1D>ZAeR)u+cExZ5+zeukMy zn5dd9lJRI>Xrsq?1?C)jBg(Yf1twJtsfcIZo*;*&VH+=o1mk?#y0WD7OWB~0seiu- zE=_96ZKa^LJNcW0Ye?C>Ln@uqiEb~ESyT+~3-gD53sJ;8mS>L@Q`5PuhGLWHJL7EO z1HSP0nkeCF$IF+EWCDYU7z3Cp+2DPizf7oG88 zI0?;9GBwmsbCZY58&%g(2?JJ$7$h1mZVxduJFm+jDPlW z*y7rsxe;u25Rb^M?n&8*MR09Mn>%>OQA^Y=QW_ z6UZ4dUVLYsIPHRvZ#3Ct?oTC^1;CClB;6eO3=Ri@aAG8pW{VYdVpYmHyH$W}_hof; zc`K*q@_o$~lIFnF+&Egy;JKcF&&L}U*|uoB?`Jq{Py?7tjR##wat41=_+AU$3!}3` zXu1Z8wVJLL7+Jsv@5Z!hboBIm3P)UT4^~zqz-pyvu*Rf3&zV!rcmbT9syo_~S7rv9 zH?~H@3}en7+* z2VNaBNd&0|IL+^c69#Z-lcG6tEbeP3hi#39+61~i^aZk{cD z*L5RpR}+fDaXY;?&W{W5L69P|#;b-)0>Fz86e0LS^&xnjuW$TFY|K_z3;-5NP2Oa) z$%qbehiUu;zqxHPfi$eMxDP-EWi-#%aFQQ${S-m-A^&a7`9@_ZAxDSv62=e^xv7KA9rOgW)g9$(x zq#6v&Bh_a~ARot$&g6J9!Pe+Tv;HNBah?GMSpK1zEY(!dunfS!BQIK&e9oM4DpAlH ztDX1R*s?Y-rCR+W>w1w2WlT$VRa7%tE~LN?2?ILRt_wWcj@1;SvD2w@> zM)?5IfE&PT<3D59fCR%GRkL>36^EmS_2@a2;#|OiqIchZa{!MaSM9fI%5aAU@@MJi zk7ayVQQ)NP*9#0U$Z>#eMv0R)tsbbL9qZV7`w(DEWKvHK!;x7g-YXG{x>M&)KNs)8 zN5uIbl)yhuBr~SvtOZrtq1M^Pa7#1UGr?+S+FuCfYl_Uza|w<46mn?#03TjA4Uepx zl9xG>>_O+sOu!5;4HiaP{hkdGDpT{=6JH;@M)}& zGV8lRdB4I1;&Y1SC&q=JV^GP-t}{)|zu!_dZwS_YE!qk}hhQAF4{YNx>kKh4@jOyr$*z4731`@1a^)gpkeURi^2PqtuS^}a+Bi+hPvICEu}=D>&R36k@=Y3o(n*1+xF5-k>;sH;5=fJQer{`Z;OH5}(wrOA8dqmhdBp7|FOAOb zU}kn5hoIoPp4nXwB$))sb}nY7hJ;7#5&^*oAeY9aFZ-H^1V7^=r;MGaxVEqx9(YGdI{G!k(~d7COUdPLS6x8P z76qO7Cs~I$8pMPPgFBuLs``vsLzH2tzBeXIEDlJ$zxkuiTUXjEt(ep_q}7TIpWrrM zd2fwoLq0!3I}Uv`)41YfOm`&4d8e)_phXQY&&2kz;PAUHJ=B$@ye^l zj6w3u!ei6|zg`n`AXt=ZV(mCCxBr+OwK?&Sbq?Ts{lc2FsoOv%JnR(>U}j z`cLscV-vL(vv^!Hn0D6y_9M-Pv|AX#&A^$%VB6-Uo^g%#kC{uRw99wMC>z)U2s!V8 z$DQ|-!Zq!Ve5RgoplA7ImU)ruz^6PtERn53iUBuR(E4x%q9pAWJH)-)A%Sg0K&+#b4{-@YRJPIA}4qU91Ly)Va+aw7~GM;RL&b?R6E(&c=jGn;6XQy zL4@SM!0Zf{hjJu8ld(mrPOQCE><-`rSHDFM8Y-HX`H+&}VCD;P;czk609Q;Wa-$i1C;Pa4TZTe@ZG*_+n=Sw+`O*aW{%|bQ*ijfCs-P}16VIG2nO)f%rEc3knTs20sE6b8hAJX zgP6zle{$03_R1d4U9_<7-xn|d?gRtBc)K!o#Xp=bET zA`dDZ(R^$ZX1(}2;C|1f4Dq2*a;7b*#b=A4w)$>xFS9}cW&WHA6v>BjFj@=s8hYMs z_2j&zv{nhuHsan?dBEJV1MXM{>_sNt)1qt>ut5XrvuWPvm{E{?O=5D@wvJ)S*DM~-vdU^ESF#7 z1gHccwTk^8-D}j-OwUILVg+jicRCtox#TxXH9!N?KXNT+b19U{XyM!C%oQ`&*x-OF z@#2Qvx)_FFW@nt>m!@VKOC{m&4uf)nR%kFG@a+fz{u4Yl78k{RRT6~aWOUTNK)RnK zYUOuG_sjqf*ns)nPNbg|qaU81AK*jl!5!zgG7iD}aZa-aD>&YlXo;)NdE&@;1G$=6 zFbvwkqymho^T0Df127EcXAq-M!2_`cG-!;-!vu{rHG$b0cye`?GJHPSG88D6Czh`` znlzI%PW4?Z2mp8Cf?HMSj|a47tmxgvbifjH>7D_e@)N8pPwGB2k1*_R}~(xmoPWE_$)&I0CGHKAu|S2b<|T^^K^b>Y2Y`8^@NJ?zUiBY z3CLLKRQU|TIKnf$jZeZHHXov>x`cYRkE)+kn-*ja?>Bfh8pe9T1F}4u9-Y|Z#)q=0 zdJCv}-67br^M!XNbs1hZ;57I;e$rg4JLhudl~Mfd3ngJ> z`w-pn4H%dAA)Cqm0O2kR=tgL17n`%jj?Y310mh2j$0#b-CU7(}_QJ@`CZS+BO-zxY z$ri9o`Of&ZX8PKg&y4bYcx^oK zqr@G`flEAA@ph-!&wsxe{U2PQa*BegXr1b{-mpwL^V#i$Q3f7wRMalAYuV5NW}97~ z1xa7cUm$n41-Dk+#Bv9*d=4TOBmo_fV7rla?+>bK7oVS!86QWFvAh?}h!f%@!^sDJ zCB#SGMlEepd={7BlPGqle*0>NN;ff}7d)YCF~G5-K2 zHP-Y008@aPhLSgx`$W()F?3;nV}c8+cYx+-;{!q+Kz9NXULVG-fGrnaRlSa zzxA1%$rU+j2qR-t!1=ZQ{8%&aw0&TLj9HnP{{SXGuNjBDH#bvIMQfw@ss@&slCrE3 zL7i_=;h-3+tY8Oc8Rvl}4A-1GR);x_jp`I|QB?FcbQ!|_{g!#~Ygie~oH(f03l&{p zGXT-ak^-k~zDD9=o=Ysw5vc}VZZP#&NLL8bu1+?ISTjwWxw{;nB1928Ilwde@z?;s z;nP=(ts#@uOW?2xxOKsh`ta6w_Ts&>^v}u6n;)SW$2Xe$Ob3FIf`F%dtI46P@S?9(%8@66I zLwfmhU$)C5CzrgMu|W0!VVE?cJ$=UfA#R<1d*82qj{6bHN_Z 
zU%U9m(F0va^NOnyq~m>;SQ$$*mdok55We~KZF{a`&HMr za6b>@e4tuaxsp(!LV^`*eehZ|!L31wz-S++an^7c&00G@r@hhn=K)(EnLA-FoF>tG zG0Xt{2rz#9&6{-842@9$2%rOc%E=5KaMz#VLWLKtYEYpAI&TSUBNg$e3OW2XNsf>ezE*8%(VwhU^yUAa8MBpbRoK zZX6ay1>tD_0Qa>m=u`Pp<{pGHgYdVf+@zBl;!d=+TP)t%b;$YQtUY)N@Q1CWb)8o) zUYweu{gYMz0IFQ(W^wcTErwJZkd%O5Nxb-m6x(T1Ijrhb{Yg~Oq|_6<+N=wQm~re{ zN~)JtQXd$}o#P}2W^m32$sRA)9*FRDSpF$ZP}gI?KTl~0CogQt3&MZoc>=f>~3Y&2*ID1HKq)1Xnzfa{%v*0iTlA>e;I7=sBG6qW8FzFf&9I&!e6&M%#X? zLn4ydQU3sPe_l1Y+{p=+$||+=7%*HtNb;l+Z)!BW>SCA9kt)^=w$XH6pxIpRbF}74 zx;8A^I6=rVJ1@wFyW`Nn4OzDgI}u0%z>hL`J-nGpsdr|^@HbEWfC$|1#bySWsOXVWL@;h~T@0wtp)YL8gDm-`UoIcsc~ zfE~!R1bvG;nyB0tv1QjgGC-!-@B_j2tpA zpJ!EX#A=u-bkjAJYT13*EaNb;D4sbtj+K0&=UmLr6&ogVb$?QL%fQ{gOWajbB!`1J zX^CZqwgKPBc_Y~#26ENU<831D&#?~>FuX6{%Wh@Yrnmg;gPH`pk&EFv-Ac zkyf2{Gs%OmIhpH%2k6XJJVg^&jFazLkk|>szzWzwJp%9K#`}r!~gf)$Ch#wW2oqM^y8+NC7;il%|Teybq z>s_1~@Ao3l<7*$}t3?sYoqhptLO7cR!{`sthX72`WSkjJ9`HmT8<_t9C0o>o{Bf55 z03=(~fql(>L-Zkr&40n1SxSK*+!G#XqvN9=2Xj@L|WvMP}t@Aga2!@O74DpIT(J$;av)4FBN*iae*WRtMn05n3 zVeuLxi6wZ&oo`Rh9mv7^c?&cpfC}U@u>7}tBhud6pY6FVQ|;5+?z^z&f7 zAW|DbWxPDs&is7VgWzC{J~^`fO6Syw>s0JsbsLME=h%Uq6TT!%TVDMj?ZHp9%V4ok zMBI=7g0|AyU0luDAfrPz@{SD`pBz$C&AeJ$j^b#yYKR6E`V9X7vTId% z=x8><2DI~(Fr9+;-;lz{1=?Oo@?VJvHINM))-kqwTluscP^KSG-;T(I;HrPn#C%SV zy%HE_#r#Rd(Y|mkCDZgre{zvNa)X*`MJ_-rHm1G(^Elinn!yQGA1apaCU^~lJ@JFg zv#^(kA)IQmc59rWl7I*EV2vZ}f$cT%H|DFF z&}CTRkQT@E0zG`%s+9s3ZYvp>S)e1xn;Z-Fi7!vT>>PS+n*{+;@5BfYmx(8=3&3hj~Y-Nb!#LK>xg@C>I@i9bBYp7b6oxL+gak;Aem34u267> zd?mob%OeypG)%!x2+}ZmU#X4N*U- z8pp4 zwKRoR`%rxOg3_nEt59wWShH-Lhp&zY_Di#-ujm?rs+`MFSrbLjGc|9?VmuyvXIom@ z)laqBjakh=y^R3O7%>CtTOr6FUJABFUQZ=b8QmPAo0)srQ`2l8$(IBS&d$Mw@Nfg| zvA^Q9vS``)Hm|%J8)H|P{EQd(1K*EbNf;<^(ocX7TzQMZQG#w~lV^O4zyr@e(2ryON*IBlIOf|uTz zm>x*|J)MFZ(coWY`tOg;O{7bFNng}*tUQfou7Eq2#OydZK-&wsBT9SrA(AEF&Zwwwk!5^BH$w~-h+;Ln&~t9JhI3dP@o zX6VgDFYU(Rv)QSoOukA6dE*f{(P%l6>?F<usWSE^l zdM>gD=AyK>$ZbZNO*O@J##8S{r@R;>o;ac&uaGb%N1XeC9$duf`_bw@1S!HEA)V9t z@j>7^$;e@6xdV8G=cB?mSx~ZDn7B&_@9|pP%A4QX;3tsxo;;Z%YBF^hw*bkY04$4D zg7r_V#xp)m5c9(`1uw`M!_$wLh2m4giLa;s05>2*uAm2+A9*BDCBUL;BX|H_e%w|$ zr>~f|Uc`P>woLF~<*+R+Fikw>dBP&#?;zcEHGpL9L7!YsaTnCz|r=?r6 zpl*-=&vowa?z#SBHB`y#DWWl26+ zW%OVicZ=zTXW=RkN}G##AJjQZ9PP|DY26y>(koeF$SP)yJziZw^J_qUAwT5_fYT5Io1=+R)@PQlzaCvh@xN9#u5Jbx6wd^W@( zkcY#RK|YjdJcnylhd92|n}IH!Odcg4v^`UKkZvQh2jefYXfW~110 z9=Gw8h^!}9CQQ)I|7s6Q?hq|LELe6o@<URM`yW&oc^j2*7Mm|f<2u%_>ocBp5$~Wg9BL$N@b-9g1PLO+D|N1NgfvFs%n7VhZ7_- z(gbff7~4J0Fra1gJ;1%c8L7Ozlf0-69Xzv#n1YP?fG4!q@fW*fS54G&@w?S}z|{(* z8sLn89uHTdWQL3kSW;G0!;paIzu3Xz8DmDL0Kj<2_L^U;>FF8m`+<>m%!8}w_!EiI`t9mCRaJuo$>*RwkU;9spO=X=Uuk&FELlf z!?b*9UA9=Jn-%9WtJ^VNU~X#Vlr;4~mr+US^ZI+@6E`=-246wr#tRQnav7m~zqvUE zMv$p^eM^jTJf5L&>ezR%6vd|P^Cj8!bLCt{P+;)EhtFnTetMA+(GZGtiu>Nznc=l)Ah!d5S#m`P(!hY&3bPFc^-7e+_x)e_dNy$sUHf+hIY;DZwcX z|9QFN>WG`Q%~3HcP1kA$GGaTDxe+oMt1z+|0qge>ph<)C+$B|ox1efN@ew#uig3FT zfe%vpAFsxoe^)bKma+!&f)&QO8(RoY6SL(j9%^{`GzR0%hZSWq=$@0|EekA~$G#z@ znW~*e-`584)+O;WCA<@5CS_iXB|JiC^hSiuR)oDisJAUl@EW?MOsD?N^jWm$tbK#Q zzc!J>t=R-tt{%mE9QMk7lf**;Sm@cMoQ%QJIsB6IjJ-AZ*8y*g}->O$Op-bVy z*r&l}!a29?$$@{w%=;fl^>hyi!MM4JbHQW~+b)+#N(X9cRaZ&DEFVcnm)BY?f^%LK z`?&iQP=*RN^lM|3+uJ#wXMOR7&ALWhWCkGxTjm=?T9-7GCJ*B>o$@dz}TDS6P_P%94aU-kmkqe6 zt=k_X=_rZ=7iFvdfzK4XKk*27iz<0JI=sDjIdh+htOxAXXE+E8ukZw(F6bgolY~j! 
zIr~!hGXrU$1+J`v{6log`0QQ`u!p3d1zB?nQSQ|!$0Nv{M4K|&P396o$pahTzk#IN z?RZjZvT}A(=ev*I(Z|zC8-YexB24Ms(xlF9wx@ml+%pDgX!A(o5+5TUdM8w~C<}Es zd%sLTPM)Pk;+k1-tkPp!5Z2rolW(7g!n@ndye~cTK-LV_nKB?rhX^Q?vnT9 z9GFMG7umy$jW7$p$nmn;3Q*8$T%b#zvLk%8>87AQvX+wa;mvZXJw@Ync~ar~MBS8o zPK#z1Z=I`W|0o^@Z}**$P$xf#ZVLGX$IqSoXZ|Vz|rtw=ZEa+W5u19`-HPlPel0N#Ws%);UnsMo9GwmNpp8;7fz)?+(8=nezSa=G)!gty^T z*6dN|QXCEp>g3S6YS!wf3e`%Kd{te&*o zMbPY|GJpL%=|eq_Kj`sp9yvkyF0-$xf)KOp5&V50X-30Mig|s)iVYnGoGB)? ziWlCLQ{0iqj z!t9$;eWZSOXUGM`;-7U1*}_-E==ED*Kc_JXJ>P*lg^};qw2#A!GOP}~@?mvX?Vi6h z?$PkRUf?}6_A8PgU0%@L;9h86ky)30g9^)hu3l;5YbqBchl7EMVoFjvL=&S|PX9nW z1?vUkQO(Fjg5ThT_i_uo|1zJ?rU*Yrte>?+tOIFqIa1SM2u0CRO?%^)w9LY{8m7529%J z^!nU6dB*?&Uf;7!ESd6?`AzgNs1(V&9L1qRLO*ST;r;z3BxO;9ytW2C9tgE|i5;3| z!NyeKLP6pO{{f0mw-P)JC2f8Bs?XW-q7)$iIq*E;Uq>5FDSGmJ0>5Q=PZdL+rS@uA^HX|Gw_Bjc-~ zy3Mhp)ZRe@;q{Hn)Y0g~jVBVfL4hv$12e(V6nnqVw-ifJ6z2v%P;r+1$kc8|TR=O% zkZCY^0$5v|hv7!F-3W!q1HyykoVYaFkVtJPvu4tXLGOzg)A%uCu?=>!2?viDE+|B( z03ob^!$VOU;4LUF;}ol#5xWmCky_8OYn%KJfKM2G&#AoE*p;L@|2a78jfh8L?W1dL zE{k^L1##Sbwxz81=&Gtzu26Y|I+%>-H8)M|B_upy4iQYy&5j&JXI0SvpP+JA}v0i5)z6?!7E?6TSw7v~@YvOBD%0Kp8 zbZs9g82J`iZuLF1@s-{vZbf0pv9#WyV#?a+K3xbAUvJkU^;W4#MOu{-3bvhp>{Vzz zgz)UXQIUTOzC}2PiM``NeOQVYA0Iecp9zBh14MljdVDi59l_aH8TTI`hCC<$#gm|? zRYP+~lHu!rfD1Sy#72W7UDVO4lrCCVp4n|2GUW6~7ppIiuNqYYqsuh3@ z7-vc%vZ*U{{@Zi-TQ^TZ&jp9xY55!rTFYg5JcE-W|1lW>uefXYd3km80RX&z=6kW@ z2>}7bf6UheGB{h(f7^@hfxkbQwQnqQo^+RN0{{rm*J+0S_!SKRZOoCkdLs_+>{U z{BHrC;E`;0iptdX#`em`zZ^EJ!5b1uNWFyCi}va0^((8Qehsh6?F+Wori1n!DDaN1 zqA6=z`UpO?%r%R6>&DRL5E1*VO9vZ9-WKeXx}W~a@_$m3OmO$^k7;r!IpX^vMkdWv z?3#qcdQ=9kHXrUqG}%ktDs*>f8UHAe;{}twxfOXzl2Jd2aaD&BC6_U|A&`TXzJ9h@ zzPOQBTP>!*RWV|Ixi#Y5R37P@rV9c7B2r5?q=JoAR9oUV7=%W?+B*aM0JaSb zE(sjke~ChWQvDuDk@7v3k=ay^c7gm-Dan|eUk9#|!rX1O@4 z0C}wW;?g)B-{QbIjF7Yv7#M|FM)(apXh-&7wjr=kIq#=+v2~XkUcO@p&Xy=&bzX&>G{H<`yyPsqkx-4x1aTEpeKYIW^!c$d( zD^2(P6A=K{0}v4}AOBt5Ih;aruA8OKbsJi}@c5QKioT9{dVPNy$7;f&xA&C@LZWh? z$hq-!1KV^e8L(Qv4E!EPW;8JSYy3rNo+2!6(Iwju>|DS-8YTT<5OL#3$60lo$fSACY0TAStLMfdy-_Aym7TCSY>JC3#9iM}t5GPh#MJIj+{E$#Wj z8V&;9lZI}?^Hgwe!RhId5ww+(xwi2`jbd;4;Z28pzODK|(2e}DU2XJw>!*sfS1o-! zC*%gM*skm%%(H{-vgman#=d|E>P5MgrLv&w}1Y?ZGB$tZa(l1 zv;G5U91^~((OSdQkbY5=)DL%6wLxza{h<%|S;VCCGR zx|6NY>-xmYvM7eb48^0rn)N)`fi_!Z8eR}b>lj0DZYHV}N4dFF!u4+STA+Yc>eEId zEX*Nm>v$?(0SOPb;-05GxudnvRGk7?ww{TF-p>Or)J_^CwAq)S#Y#eDBpHncc(fg_ zk-^vI+q^lBNA*5UR`z?}dl^Ufr0>wEBPI6FM+*K)s>N~N7uA`)SmG0{v)yqwTm^2~}3f7m>#=&1T^2VpPc zPy~?{2cqrZP9O`9$s5yfCtb2yy2qI?pV!vw7H4HI^XWkPA(H~lQOJgzSKq3q*JJg2 z@Jpl0>SYN)lr6Y-$5&el?&!KA+YTVsw(AqCoAStA-XA^JC3!gXcsidv%XYqYKWA+y z)H!C3Dx;tD_nF{MCz|>_k1`oTXrjbnBo7*Xbh+Wz4 z)U>3CAa1flNYPX#s=u@BO1y{@WLhXxTO`&2_k8;?`9y@H#k6wHI|e+(&p2A`=90Kn zXBLcJeZy!vNqzRZv;5k>1b{?+!)5F5+GVv%CSAcz5I zF+{EJ3|ouJudZ*#G--v$+5S%|$v_PzE7>Cp;v8cClOezcUnO@{L0d8^=pn$!ApSB$gbk8a@lqW zM}qIfUbCC#=NEQ&$<(mw3t}*jqbKs{d+MWVRvdj|L7!^{oX(o?xwr|(5H#Fz;SubYw?lKlFEfeWq+RmgBT_r1*v}=>VBirX! 
zL#2SZ9))OTd5>xTJTWRuopWZb)X+2F?FDc+I`|Q0DV9DH9sS(ZyS~*67yf7Iw;>At zYAxL`CAovgj6i=CcibFuL}gS6;KjR|zC{mRjmvAVCdgN(7L!>xG@D2}u3B z1NTTX#C~umzRVWL@dda5Q~Yvv5gs$C+cB$A~?&!{N}3Q z>6~Pda^YlStdhPYp{kU=(u$q-n#mJE)Qj7wwP3tvXtxoJ*su6sI{JyL9aW~Ugp zHYO=2NOi2S25Eld@gypF-@6v_yh+QF)NYzx>L99Mj}g;|G%mZ$*!~7SZc?u3B?WXv zz?ALvS`wQWcLj*u> z>tqLq^Mt!S*dL(9`pPuJHy&2s+SxtzalHV=`6py8c5Rf68^giV+$rh*;^rvtJZ*xOF8W@#Nu~CEtPBKf1JWJ@}o>pcDX;n7&nzd&Az}!~|&| zBXu?sCG(V)4Bp@ib@`7Ei`H}tyD+DVC;8Qn;}?pnlVa}ogNHn*?P>9AQZqk16=eyD zJBURPOl?1kREO#I4IMcBx)I~7($%^;!*l!(Kr3$Qdh*gvd)layNXIIM2H<`!XBsAG zn7$mH!=_cAeZvWV|F(Tuoj)W?+%oKy%!e@;n&%JI^0G%|kTZGLr*QqvIr@Vx!IuV$ z7KOvxnOpkciSVk)$}Lf?bBx|L!&G6pw}0R1Gc{}F6sk!XA|n`0P*SDkDk@+#Eu|@^ zZz`%X%^C63mjuPY{V6$9{DUJK^?FUpRYR%r;|L>wN*3BGtH?QS))N7`F;l3>$5u-_ zzlH~10^=FTJK>g8gj_bEdsI7v~s^X?`~M0%ZAS>@PCDELrC%Y1p;I zn%Z*P)DrISDL?$$tW5C=9wsZYAnRkuV|tb!^+~QcFOJ0gIFi?-SrU33`<&D!_^J7T zD>t1Qc*;qarb==`?nhXzmib%WvQ!=J1*m)pvP&qSc_u7cbkI$C5RaPH zx-U$pV_jBJKKKvtM7V5ZA2_Dab3Jk*TWS#8{6V=quSs?(V%*J1WW;>pMG0*8wWyUc z%2`j$8r0}^bF;S8{~ti|O}fHMHqxpM$$CmsY!QUvR@|W#DiYg@gK5;KBoI#xXKlG!iYRXRtE&r2Zr>o`;g+B3CAf! z4)<;&LpNrbx}C?v8ry7<+OVi0^W{Ey3&u*{^<#}}02hU}nmGK7oa0HK_3u$;v$FF8 zbGjz6=3WnrahtD%-!@--aOi{MX=Rgc>*%?=oEjBNpx*$}FOhfyQOHV8N+Fk+4U9)z z|9V_|T{&}*nSy&b;=H)X)%iRw^`4h3mg+}Yf>$-NtL7>+0{>I)r3VS%)z5g17mAJ7 zU-~jCi#9AQayLg^)|L@~uLgcY@q49h8z6z7x>yh;Lj%gI9IFl^r17%!Y#jZFzNh_6 z=FP|B78%xRt8xecU?a96*`C{-@0{=#Vj3Y`#{f~o1ffO1zho21jCzHTAXQ7syzAqg zRuIita?)7pddM|JVL=Jj710_hbYNe8UleK}mR(o7(#w4-W%)GH<5RFGBNrZ2`AiE^EHJ{TNZc z(67ynl`~%rtKVo98zKy^kqRp5uMbVD<3qZZKe~IxBUj_~sLSj2b?%4EVQCd}{2|7C zrUSSTn-PkR30d9+w7rRAM&GZq!2WP-{%94XNi)IZJMvmTSdg;pF7$mkwusyRwFtsZ z8r9Z059vzLuPuozAjRPjyLogi;B5BfK2#*Ca94cOp-qjuM{2W3K_W?})Kt%WRq2F% z$4WrV&!uYg(_T>(c?`2c_?NO>U{(w~c!JbyDg0B)zOji55D&}U({CB;s9MX;a{){u ze-TWyWGh&sHo=a|MNwZ%jKMmy3OrE8lDZK*ofBd%uNr5bxHdHu)}GDu%L2q}{TX|# zh`i?7$7)#0zX+S45P&4P!cYfwWv0M^Tg$WB>AxI_GG2(zTpYL0hpKZQ=H^(h^oWm* zayM-XX^`3TH>`YR-hUpXoeP2@B*9lx^zDY<(gp_rJrD*b3D>ozxJP?w9`(;ivnL*a z8w}{~J@4=0Ay7v<0gR&Hxf=_lDO1fI*Mk}o0$qccUDvc`6B08la4h?I6&v|+#wulM zP>nU*tw?Uf)gl@w#P3P7`>TQ%((Z3j{PeiRw3fX!y1S&Es)>ComupK}wU?wTYsBhK?9Wc$-#}F(v0# z>jwRBudRJ~^u-QvrWf+2HM?rE=B%c zB??~|)?#B*F8vN^(}iLjOK1434cB5 zMfhqzES!gAc*!24ob8KG%om8DzM>?()XflR#}II zHtc)cAUx=y6W+mkzFcBmnS*oua^*Bt`1)+Si{3v}=U9geTET(=2k;VK2HgJx42B-P z_p7P7GXrC@lEQIg0qE+-ur&#(zA*p${+^9t@v{M^n^o8lO9w^;3^;{>ne_Zb|0k!y zGgZvQII@uyhMD;bd=B8%C{@1a(Qrkt^tX_dv-l4{PV$V(`<|#GadX8G zbYXlxP>B6>A7muVnec*r%`W#J0FanLxWq0eg#x7I^t<;6dP!O*Q}gwnz25Fv$~S6B z9~s!d>ev>gQwsZrotwD9gt^SM^!6oVG9`6>RIUCaGEo0Fy}8BVE&Zp+uw9TqMr#_s zgQO6CLNCA?S%2X&!LWc>z;=?M0f3Z_+kF$anUN;mG}d`rU;T2?JWYy5pcvthM4+Xm zK60(n>0YH&0CikKYAb$#d>~OGi)rc~K^o4-o6fE)ii&p24ge1Kr69$BfN3;^5KU6{ zc71#BH-R%l)<1_v$rT-CtTp*1m=|hupG(`iWwUUWP;}VXj&9D5rzW{esg3TJa_KmaRW46*^z#cpnc?x`aMH=4gve zJR8|&Acc8jK18Pd^7XEs`&|Mz00&lw`a%iVWx*2k%(PK5qWC5@qk{x<7BRo*W&lwQE;gzGzIYY0i zfPjVpIa1u*x)6SUw%hlu-^*e6PXgT~uNc=k621V~CWG~D)e>VbW zu#lxu`va;p?Wcji@EOzA&T#9daIf?lDi{-VFuyji^ip43y2Rl2u@GpoF&lFVe6iX) zpf`0DSYH6hGRhR)uf7)C>h^?YC+v0kzJ5$S$57C{)5y7peGX(1o45x_l}7nqL}U&P z{CEq!bywD2%sn-xo}3CQ^YDs%%9aq`men2m@Y=B<2M3YMxoFr6u>oH*v97wtE8+!E z@*eR4?3Zhjo{dW?q)j+ng)nu&XEYGnvQ{d^5LqM`EiWT%`!nKj4DV}W68GIUKO!@? 
zVBeN=S6z+A%8@`0Ak5wQuf(n!zpTx1hK8$v5oGqO#l7K)1LOwuR=@WlTZ;x_i-cU0 ze{fHIbh&2mch7cGr9gfHE5;y$orO5v*(1UrX zw>dR<+{8m`o*^L*0XTrI@>k>Qhezz+c>IkS$8_O9s0Au_s@6Jvm3Uv>Oe7 zsYz_=!wZ_pEhqqp4AxtE$=Q5b1pO6otN(4VDBvi*XOoA^oHnZJDWh|8bD1DxS3F#6 z46q%w2-w_16ZDBUw-rSwVVR7s_9-qs{0CT0d;3xu*-BSPZYS;2vM=+@WD++aZ#sB>pLs2fcN1gKVSN zoZNCB<{CRUuXgfasq90P|{304BW*T7+d-izRvV8f8C_z0_9K#(CyTZei{L9E#7> z^jc!$yUFNKok07^uQLCC@Be)2IvdO24zXE(5uPdqkt*7KdnbQ#3lXqiO*(yj_y|l#R$JFXb!sstKaTpDbAUHWK)=v@FhOCY~}BSvXe`Z!;+a zRzvBPU2JNitF$B(9El7mepL4ogNL$Ogjn17 ztoZ`#Xmfl$#F+)b$h_>tY4g_kxU_%bKX$DFwGFBC!Z+-v*lhh>#=xl^?&Ycu7Z z+k;HjVu(;?9t!+;q&l=e$SP=T&+NnouTyUP&ng~+DhGfN4FX%-wh0QwG8Oc?=ZQ1> z*JR9hY3|O{BMlTrt3C>-)z_zjr;)lx5HSHTpd59AJkWfWi!7=yQ;rf97s}W!t6d7j zx^29~SctOlk_xXU%M8gudAj6+uAEH>ofZTF>&+@}4XxmR_T`SJjiB1sw+#!q z41qXW7f6;Mn8JF!T!78Z&L5@+K^c#`XmFuP_x-o(X53$F6P51caW8fndOQH?C$?S) z!Bj(a*ZT089L3MPaV7RGt+C(52L(5(ugGX!L1!4ILkNZ)9}jwB%5yN4Vki(G^qQ)9 z?vaEL`DkW7CE+M=EaVI~rgncK#;l&C@xdRu3A3dI5gStHR!_#k?E4xQs~9@?^+!>b z04{bjtCS&jZ#QNqttH7c=a5vAP6<9DZ7c}~m@_fGjI{*EHElj(ZM7bKe4hf|z4<9W4v(n}7`4XAQ|iZZ zYpVeX<@&{;%>|8i-%$OJQT_w8VLjE+Va@ktw+qhM3nkM0LZBM^3XkOHJa9MAG|z$H zT)ss?5z$Vwo@Y zU81e6m^uSpVdOzCd9lLcDOr};fMOEPb#fv{9h{jDFvC@jF)9>ej>m5qj}O+QCB&19Kaf(xEgtERq}M z(8Jk6%MVt9v5j@&+6w=2mz3gy(?0ii zHzy7r<5$$T>sC-u6nY^NX>U9-GDi+9xQzM`S(>>a)VbRgp z-~gB3?1yu0#R%Z&jk!3)PKGI0y?3?-C(FIEc z4jVVzHHcImW9$lm71>ZAbc>1d*NfMF-(CK8pyG?`X^bL-LIvFEOEC(5=(%f|rxwUq ze(ezn-w51s?rRRnd7;r;nUSiXqrG^YxzCmfsja>DRpV<7$Gv+Od78AArVW7lz*ixU z&jd!DMOhYz&q3Wit0AQeyZCimY}$=iQ=>%X)(wYi3vYu+yB&9tH`H46x=nhxc2Va_ zcAs5N`gkZLORP)Mdz)|Im3z-%4wK}Ld{%DCVoeZJr??CTU!=b9p5hD~-)={$yU&w%-bjhg`182-#{B;FEcFbNGg>KP{ ztDWUOqHXif!AX!cjUg3g~vrKx}0?(y8w~TFb zy8b)<%cHMvy{^KrJmK`QV-n)!9*VuKj+^%*(?B|^-8&yL$r9wzyPl$o_Ln(=%+m*vO~lKaIs`yXc#Wjmhr{LJq?KDGt_(~A(HlXa)9N7L*k z)V?Uyzelo#{&wH5Y7S&jpJ$#H6de=%v&$adq_Shj<|3=!q>>}$`fxa0(l%UP2`p`0 z`c@2m;>-$7k6n-Zt5w&}@0?Yt`^OAZx#mF5uHLK^y6emzh9J8oj zu4;hT&l78%RH$3f>#sxxT#I)5?(F=31!T{RRu*#OF#*>V1uawdg2I_6!+S1x8?v<3b- zQfQV?AWr8bl@l@t4sAEw7b!9!@Zkfj1*9zd16Pp>$;o=3NF{WgCwaz+<~VtF}45`vHzKWO?aFxBGh?)2F1~h8w832g?%`GG z2(xA-{*GJVOBKe>;c_|v>FssxQWWGQw#JK4$oXd(R#DOYtw300n=eQGlg1wVL_WAAzqAUB)D$sl%z9vmDm5@_cSkUb+v`bF0XOwDW3@|Ic{){~K}t-$lvSw?4v_QpCmQ@+RHyBF8tXNpmN1 z8dpD>UN<;AUNkw*4PIN3{;?na95Wa59YoP@_-q7w!WG*m7gtH2$X7`l`Ft=dsR4XM z;I`2t%wK&M*&Bf8?RHR~X{?mXELmX0f`{D4Dq_vic~&zh;5&|xR}#oT-qKT>SJ{FQ zdpBveWbkQAlMimNnzA+a)Op)UE7heAA($o4nR|ah2xlHflawQ2g!%7NhI=aL7XqZE zahopSOV3gRkXJPDS}AKHchao#5FT7`%B zp7yz$VtEhL!Xi;S;hC`@J=DoZ6G=1CFI1VwVM)k|e<+^rKn9mgHbz)P9HLj_W}aic zacZV122=CDi|!3N#@)(umCVz7ckT|Y8aavxb*axy2I)#1(a$+uXb-T&u#r`X==djX zm=~FoRj&e2ulBL%dL9ZlTn{6P+CYFl!!w5|78tOuD~T4{83*imsOcGL95BL98_ef>rQF6@vo1?qm5EMf5*;xP~>R`BQLGYEJ$_WedvOXpWw zfQGusxCs|Ih8@k1NM@z#{Nkmfq zKKwY4aT02*$<$hR*_j{&!x}@;6X%5S&dCN!Z?i{}C+;Ob0>x##X-LP6cc%FNMu|8u zS7?+5Fuh8Rkh$dla7pifagsjXLf#)kh;{JN1n)moM1^YK{vi2){#<$XOg-u8fW z=MP;NJnXiW95}7dm%mr>wf1J4vzFZlXmkI4o9Yi((gg%&#tkrH9d(vVm*IKudrVz9TNB&@Qz44n|&>oDo(?Y!pk33Goo6E69>5&7dJFLJLF{ax-?>WKVO;M;&^C*w^>oQy~D@ovP72pCinnK9{O-ysizRS--|&x zkn+mY`9FYvIm=bFn}f)Yg1w;}#^@yq#>g7IKhx+gP#bYR?PtbH3Q7UTuBsyuik0|& zgx>V%zVJlOblIB*{DW*JRC*!r!=a?w!j4KblSdIaIK3(8=Vik zmp~Ttu-?pSmlR_PgHMi;yD#k&`>@k@*XlJ{&7HSd(oQt(@>(5C=6SSLOxY zc9AGkoA&U?^&ZKN4aop$z3b}slHmgT1mXf9y0%!a80M%edvxXZ)UH=L5h&c?9+3lK zeC4($zgHr`lwfCj@}ld`ZwVH%`$1ysp;cFqi39kUSB9HTLMWro_N%k$@%Q8kmU*unG`p=L0`_SG$&^ZfJ6_rW+HR!S!ezfv6*<46-o&4aY5yIDRoca7YkZF_-SAvK>Kpb++AZaL$zk zb{2^ybN2E)uq8k5hC;-%`&nsMrPuJW9S25ys>1Vs)o=gSnwXBe-x3A0t?V!D55sjw zG2=t`%xD6I=^GSshI7pKUaZNA0!u*EEqrC>2Gjr>e;tk&yF#Ty^_b0Z(U7Z=^d12p 
zgxZ$qG>!kPy8(h5pfvSN$MvP%J9-8UOedB^X+LVNJDg|0YD78X-rE$5;C_f7RZB=7^Wg8UqNy0x&}i z0Mvv`n5Dd+)m_4q%12{CD(?moQ4Uu+j4T)6-SB^a2{}ei**ayt6h)pK{))W<$84%< zDxgF#l{1pn4}P2~`JysvqG7FK38*7_=Dn2rVqzog{Y@v?wz&nGul6z9l7qED*{E6c zM(lx+tFZ`gqR%FRl3$T{*>g6hRj_LD+p7`_i->j z7+Zu~mr6ytp#BSY;SGOmLYCf4V=d>!l~L&8F;7r8{Xmbe1p^zN9=6KtvmLWeb=20> zd6QdgAp5>UN)yOXS!?gfO+_?8>6`bl2@Vl!$*$^Ys3{f2Z!3K!Q+nG_4#e=E7&q2S z&AlCEy|oqlDC_G?TjN%_fK> z0M2w`9y3plq|G}O`TV{tK{a7azz%Wy!K<8>!z!4L$MJn(X77k z^o940xKt#7?kO?)z&b)KeV@mCk;Y;p}ASJBcbWrE<%Y@toDQZRB; z`0Dyed#PpKxl0t)CCbftmEe}@U4F5y>f6Uf$`ln_Yrvy*d{lUF(D8a~&*ShZ^789E znE&GGDo}5n|GZN{P%`Li3SDQKbzoot&>yccKLfGVJzB37+X%8ZQ|5tvV;QZIt8Wu| zX8!}67FajJI}<`l6Ly|t)AZPX)rX8W@pMsV-Q-xeV18yDHyh7j-dL3RqBmPlEO$-q zA|KELc4zrJP9uoVUT^cZJyXmpvU=;&j|5hw>iT6aUSgTX@9p?3p+hhLI*WQXz_rLG ze-0d~GihHz367A(=ZXIyt8&_64`rKhh$S*X?fqcy^+cG-V}53N$jYHL?9vU1AUDsW zGr^$1M^Bk5L-*shrCgT!^0(_h0J5}P42945NY3XkW~+~0`#?pS3%+sqkD@r}mmVpW zWaCC({G3`^VcXmU1 z6vKsWG~ADRr47h7hqk$7Bh#|<7BLlLEsA~eQNkl}{e&cYu4TM9=y+=%YT9nUY`}Lx zPO)?@1OEZ)!=GN?3v-a$I{h@4Tpr!IVZua;$cSpqLq-WCYhkCq@40yEq0rR7Vw|Su ze1MmyNw*a;o_F@w9AGrX)597CzF-SKd|{FO_b1Ns(&u_lmS4pcSuVe99+~VCy^otN zW9zi-D6@0?@FO3o;s|Yvn~1U-@z4f*G@437ga^G7co(g=Y3Ik)IqATGzot|P|H;Ca zuUshzUB(q?T=hhpFX=DfDzV7m@CVJNVWxe^B0RQn@*xa^eYN= z>vDv?d84%QbmEHiU&SzKTFmtlY8tQFy7SYq^RK8=hz5-$Ni=9yj6OCLa`zm^ydAk}NlL1ej>AR_N~N_r7p6 zUNHt<6uo7hb8&oPk4&!fDxF$zHks0|*zEqN{_&F=`d^kYcx7{+X2poWT?9MZ4SZwH zp8uz@uMCT-Yug?eVCazUl#US?LSR7YMnX~;x+SGVTDlR0p(F$Z2`NFTp#*6`N`XPi z5u_2xZ_oWa_j|m@`|JBPd+%9mW*=)^>sse|)#{X%^OC<;Evj34in}PO_6}w)Em{C? z=$TWYhwkRu5Q%!ED`xt*KAnH=q|KfbXr))fv`@&6jQRzATM|*AVMLu_n@KSp9*e3$v&`_jU29wd(kh}>K(p~xaA}jE_~7J+af<( z#)OQ=1ncW9mKcrbI?mq(fh^7EeKrT0RhLXuCisWB1HUqWrKwp`kT!GXgU-XavJ7m{ zF}hoiTqwx-dD*Bwvrz7ps(yvjh_g{`YKiJRXMS`?(;(kI`@P+?cX>6zJ-ePIW)#|Q zA?sU0O%R{Hd>PZA#k12vc58apaXC4*#WWK8-(K#%eaf7wGpj#}=?m`k4+W9jfDRu% z%_x`MH(oU4jGsC#oIub)C60T=&xTS>-iunp6u6So}E;-B9 z-GJQ9-o}|_ytGX z1P>(tc$v9n676gIYf$U9f?kcEpQ1LkN%HA0BW@ug<5?4tFhVXudj3K^1wiGus#BGf4%xz12Y6PaGK0(74bK1g)X!xO=-hb zuCiVo?E&SpHONT-)TXlrVpiWiN;^zfrj8D zJadmfUWBqdNy#7(i}|g!1fZD+`a;^nasSSc5IG+XBGrX71nD{WtJVjK-#>11aJjrm8ZeGDzhLChi z8BgBAhKD9F-KLzRU*KfA?geU$hY50s75w*M6WHud#@-QXa@X5w8ceDy*ruQF1#ohx zb)m;W+3B{N-3L+2I{K)gLbT7U{6v!)vwt>B%SUyHE?m$i(jKgG6$H=or1#8JtL{SX zxD9UMZ!|C#MsC;;Z3OwuE~Ai>Sqx$Pj;BrjZbFYz-;P23b9m#A>aUzP+5f=eE;Re! 
zc*Trtie>^9!CdQ*v8xESa?{JjLVdd*2PzOLhoJ7f* z>)d;Vn(q{9XK6`$tM8|(6T7S}fuzF{V#AcX1iLW@E&Dg1`PXS}>+JQ+RVW$|zhfuD z;ZMauzU@1bmBUGYl2Y4er`D7e+V&tb-ZiEqZt{q)SY!CD~|7!5h+DMvTeX|VtMo3XVCTZOW+fx~%ZrqQ9 zRwLie&^$>$c9|mxaQb;)E20R99&sscM@%uc={dex5G#y5kN0JthU5nJO!dc|jF`OX z%Rn;?Qs7KWOhtvz9Uq2JiZ-tlm3R zkNfcAw7JL3f>uL0KsvZR@p?Mni@5%--j4@!5N63wb1JW@CU&^w5wjD^YW~L*$Vkji zZ57Q47zZCd`u{f${--57DUihvzgxQQADTB$#l}usNZ-EwMrmDQuRlVjHuY6wBqdY@ zuc+fIki0!cd;{8uN4H$-u>)4r(vB+r4B@sqAnR9Bo}zslg@3y_u{uH1n1`fEg!I&I0pFH*fPrB*5}3-0paY&_m9&Jh4k|)Ek1@N7jM6(1e;w@VJx)W!s6evc!MxrTEnr0O`gu-aNRc6HNF41wc*epXlj1YfE!;rf8<18L5SzkD;XFOx zysJl1tJ|rRxbS^d)1ds7yF>zxOm=vAet0sanE7bBgbSf)!mpoGbc69Q1$0%RROxJ= z+TmQHJYvaDuNT^N`@4@SU!J6z&eW3$Q9^6)LBGw%PCe6Plo~zbBb+6Yl4rvE8o_2wR`WR}yI&2pRnWaO_|g*a{6Vgi@f0*$x=FSWxzjY7 ztU#YA>>OW`2k^;*0hs$o#Z@CnGuW=Pn7PS@^;?7466;w>31TyWW*Mm73b%x2Yiy`3 zl=t@ladH=Z^Qp$iEkxsu1I5F9qUnDUv>A7 z%0TwslK7_;kfN_}RbjO$iAiJs3^MOMeu|$873nEXVTR2CuW^{3-O)^<`EkvUyvBmh z#lnwqJc+MkOctT{B8Z$z~H#;3Ni#p6LZ6y|?H5 zS@m(P+z(x)Bll; zeCLFeYqN|XEVCq!mn_V?UA}ORjWeL|j*(7=YC7BDR69DCb0VC@NsQNsF|*?1=D1bf zFIbL5bmBOWk0anRr6g}!RS29G+q1Si_>`POdQu#!S?3EQ0ZDE(RaO{o8&Axd*lUV8kJQ5FL$aMGj-Z_T-RbI8NZ-P z911lU0nMQ-cA^3^Z)clIB=9UPsSOa=N<67Wl|3u$nYy*fY_UkQ8#&@+a(8gayI~3i z>X7e=xCG+6NdQ-Y$}#fF`xP})?-rom2|q)zco-ME3cr)ZauH4bIDyS z5!-{91JMbdu0zDb73XkDYYW0#S>#Ud*WG~jUi`c&M+eZDE;b!=J>6!v;JCZdqIgw{ zyaBbcn06QFIkg7Ox*e<|rJpD7B%X1+O3fy2HGA#W9+OVjxgMXl%#s6vr@@6}(98Sy zY^$56jMYIl$!R`P`ziBEJVYfAq7lF;nCsdg$eUXfy%1aXu*K{e5;>3A^YrvRaRHlj z%_>Xo_fJc zgEF5Whd$9%=y-yAq;FHN8^wK5C;niM5cEtT8I`oz>laxFFT3Y;UwFIKcaORJwc5b> zm;J8e2S8OG$b1r;Kh&goDlho9Zc}LU0N)VI+-WP=8N>+YW1)R(8&eM_8pv{sb%!B^ zc-;?OsamZEMX26U4h1;uMW_l+^jh%#RY4)h5Fw=Gv|0VD6;rG65*wwOP^zFzKe0Sf z=9={Dh(i`4J%MiOv89nmRYA26p~J7xWx+1{UT%dEA>d3_Gw$xsx3h7BzbfxMosbwR zF3B2=$_vz)^in3J1dbWzMBv?(wYi);Lp&#Y!+>+*e2Xe*m(E#K_e;;@_>tCS1{;m< z!IS7EqzC4n)%%c+jgPjRQWYQ&8u#R5@F=$;Q0s5<9pIXIxX&Ef@a~~F0+LsICeOCj zFx6`Q;BWj5=yVEyqsKLZyx#pVTpkR}b+QyI$x}_6Med8Xlj~Z_hqvfn^zW~Yx)J!r zM1cOpo=fbj%CPtnh5?cm-J{!}#7b{DDyjSf4&2At`Co0FJp5?q|7^<)aQ(>q0Pf8I zurA-57}D|xyC#GI5oGZJUPv12p?GwcC9mW@k{FJGhHjhb(Mi2I6tAXAYAz9^U@(BY1$J(j-qBCDUr#s$Dch=+PEMn%zE-zY z@Dwu&%9_B5&41Fz!@4t%QUV{Vx9L<8AYVqXVd(!x;$D**hUE>;mSb$!9G})J2~$|1 z`+8a6{W789`ZCb&cfnU0sv(1obyMFr81E$d~2gibgRZ&1IZ`t79f zZM6n}VE*f-KdWSYGtb~H=bltm`h9C!6~$;hoSz!S+MyFXx=cJ}JGjZc(8{k4k|P#5euY36GV(8O>hb-o?0`Hv1Z1_BnjLG>X@fk-5j zlx95@)vq5VGXvu}tPKCjxHjyZD%^MkCroVWGlL$g_$4$>%)SZ!dCIH2n!cZu4L+YM zahbqhP_We*5tRCPUtS;yVbPz9GaqVOA4RB$aTdG*b^i`Ob9oljQ*~AOu$c{NB+=)Q z`FD&PWTpU)2JM$Fo`n}UR<0nZib6y_S8lbpC^}vQFr(abE^pCfD|*4QJOxIaJ@t-F z4qO}*4_<}4Wr=j|)Ed9Aey8RrY{Q{Ll~aln&bc<^eH%DOG^O^MP5%be*DJIcM)gsx z%CYXYQF^AV!N`k8EAi+==Fw`~*X7T2mCUAwC%>p0jX_;^ zHs@>?%HSV4etO0=5@pK|&rTRab2*OIIS?Bs>Cn00L-zML`L`>WW@%kCi{VnkkKOB& zGh!R|1_zH&ztz^OtN5&BJH3YpTz;V=*YD12;MSVVp0b5d9hV+y+rKj~N=N5%SjmOO zP;mvuvp_|LJ@mZ=DF0oEDpbP$k$_ySpfhQ8z`Q-!y#eI@v5EX$hZZ}C3 z>O(DTsD>+*reFe>!!yGMj}~9f(*a2!!a!u|#&agnC7tlZ1B>bD_3id}`27n>VU|^< zg$6wbGCuLe6nBcciK4Tr6U`UB! z5fbmy69}hmWVy5Bb?CkMG(~o>2*&=du8 zJVzB0WefNn{|nsX1n6A%-@P~tEG#_3J|m6ap6rseo?1-9pJVwu4qp!4WU>7Nfcb!E zEsOCmq1w%FscM1u0X>MV1j|{@RZ_hhc6?9A6!zrS){ezUdcPLK!kd4J_iS4LF?qt%Ch^O4bc1D%=|4w)A?` zUFucL4_!V9GIAVh{OGp8-i=k0pX18@x_SsTcfvgrm&(;A&0fba>FZH|c6j2pj4!!) 
diff --git a/docs/specs/introduction.md b/docs/specs/introduction.md
new file mode 100644
index 00000000000..66f8f95adb8
--- /dev/null
+++ b/docs/specs/introduction.md
@@ -0,0 +1,19 @@
+# Introduction
+
+The goal of the ZK Stack is to power the internet of value. Value needs to be secured, and only blockchains are able to
+provide the level of security that the internet needs. The ZK Stack can be used to launch zero-knowledge rollups, which
+are extra secure blockchains.
+
+ZK Rollups use advanced mathematics called zero-knowledge proofs to show that the execution of the rollup was done
+correctly. They also send ("roll up") their data to another chain; in our case this is Ethereum. The ZK Stack uses the
+zkEVM to execute transactions, making it Ethereum compatible.
+
+These two techniques allow the rollup to be verified externally. Unlike traditional blockchains, where you have to run a
+node to verify all transactions, the state of the rollup can be easily checked by external participants by validating
+the proof.
+
+These external validators of a rollup can be other rollups. This means we can connect rollups trustlessly, and create a
+network of rollups. This network is called the hyperchain.
+
+These specs will provide a high-level overview of the zkEVM and a full specification of its more technical components,
+such as the prover, compiler, and the VM itself. We also specify the foundations of the hyperchain ecosystem.
diff --git a/docs/specs/l1_l2_communication/README.md b/docs/specs/l1_l2_communication/README.md
new file mode 100644
index 00000000000..6cccb85f457
--- /dev/null
+++ b/docs/specs/l1_l2_communication/README.md
@@ -0,0 +1,5 @@
+# L1<->L2 Communication
+
+- [Overview - Deposits and Withdrawals](./overview_deposits_withdrawals.md)
+- [L2->L1 messages](./l2_to_l1.md)
+- [L1->L2 messages](./l1_to_l2.md)
diff --git a/docs/specs/l1_l2_communication/l1_to_l2.md b/docs/specs/l1_l2_communication/l1_to_l2.md
new file mode 100644
index 00000000000..ed1605a039a
--- /dev/null
+++ b/docs/specs/l1_l2_communication/l1_to_l2.md
@@ -0,0 +1,170 @@
+# Handling L1→L2 ops
+
+The transactions on zkSync can be initiated not only on L2, but also on L1. There are two types of transactions that can
+be initiated on L1:
+
+- Priority operations. These are the kinds of operations that any user can create.
+- Upgrade transactions. These can be created only during upgrades.
+
+### Prerequisites
+
+Please read the full
+[article](https://github.com/code-423n4/2023-10-zksync/blob/main/docs/Smart%20contract%20Section/System%20contracts%20bootloader%20description.md)
+on the general system contracts / bootloader structure, as well as the pubdata structure of the Boojum system, to understand
+[the difference](https://github.com/code-423n4/2023-10-zksync/blob/main/docs/Smart%20contract%20Section/Handling%20pubdata%20in%20Boojum.md)
+between system and user logs.
+
+## Priority operations
+
+### Initiation
+
+A new priority operation can be appended by calling the
+[requestL2Transaction](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/contracts/ethereum/contracts/zksync/facets/Mailbox.sol#L236)
+method on L1. This method will perform several checks for the transaction, making sure that it is processable and
+provides enough fee to compensate the operator for this transaction. Then, this transaction will be
+[appended](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/contracts/ethereum/contracts/zksync/facets/Mailbox.sol#L369C1-L369C1)
+to the priority queue.
+
+### Bootloader
+
+Whenever an operator sees a priority operation, it can include the transaction in the batch. While for normal L2
+transactions the account abstraction protocol will ensure that the `msg.sender` has indeed agreed to start a transaction
+in its name, for L1→L2 transactions there is no signature verification. In order to verify that the operator
+includes only transactions that were indeed requested on L1, the bootloader
+[maintains](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/system-contracts/bootloader/bootloader.yul#L970)
+two variables:
+
+- `numberOfPriorityTransactions` (maintained at `PRIORITY_TXS_L1_DATA_BEGIN_BYTE` of bootloader memory)
+- `priorityOperationsRollingHash` (maintained at `PRIORITY_TXS_L1_DATA_BEGIN_BYTE + 32` of the bootloader memory)
+
+Whenever a priority transaction is processed, `numberOfPriorityTransactions` gets incremented by 1, while
+`priorityOperationsRollingHash` is assigned to `keccak256(priorityOperationsRollingHash, processedPriorityOpHash)`,
+where `processedPriorityOpHash` is the hash of the priority operation that has just been processed.
+
+Also, for each priority transaction, we
+[emit](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/system-contracts/bootloader/bootloader.yul#L966)
+a user L2→L1 log with its hash and result, which basically means that it will get Merklized and users will be able to
+prove on L1 that a certain priority transaction has succeeded or failed (which can be helpful to reclaim your funds from
+bridges if the L2 part of the deposit has failed).
+
+Then, at the end of the batch, we
+[submit](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/system-contracts/bootloader/bootloader.yul#L3819)
+these two values as system L2→L1 logs.
+
+### Batch commit
+
+During batch commit, the contract will remember those values, but not validate them in any way.
+
+### Batch execution
+
+During batch execution, we pop `numberOfPriorityTransactions` operations from the top of the priority queue and
+[verify](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/contracts/ethereum/contracts/zksync/facets/Executor.sol#L282)
+that their rolling hash is indeed equal to `priorityOperationsRollingHash`.
+
+## Upgrade transactions
+
+### Initiation
+
+Upgrade transactions can only be created during a system upgrade. This is done by having the `DiamondProxy` delegatecall to an
+implementation that manually puts this transaction into the storage of the DiamondProxy. Note that since this happens
+during the upgrade, there are no “real” checks on the structure of this transaction. We do have
+[some validation](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/contracts/ethereum/contracts/upgrades/BaseZkSyncUpgrade.sol#L175),
+but it is purely on the side of the implementation which the `DiamondProxy` delegatecalls to and so may be lifted if the
+implementation is changed.
+
+The hash of the currently required upgrade transaction is
+[stored](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/contracts/ethereum/contracts/zksync/Storage.sol#L138)
+under `l2SystemContractsUpgradeTxHash`.
+
+We will also track the batch where the upgrade has been committed in the `l2SystemContractsUpgradeBatchNumber`
+[variable](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/contracts/ethereum/contracts/zksync/Storage.sol#L141).
+
+We cannot support multiple upgrades in parallel, i.e. the next upgrade should start only after the previous one has
+been completed.
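+
+For intuition, the “one upgrade at a time” rule boils down to a guard along these lines (a hedged sketch, not the actual
+`DiamondProxy` code; the storage variable name comes from the linked `Storage.sol`, everything else here is illustrative):
+
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.0;
+
+contract UpgradeSlotSketch {
+    // Mirrors the storage variable described above; zero means "no pending upgrade".
+    bytes32 public l2SystemContractsUpgradeTxHash;
+
+    // Illustrative guard (access control omitted): a new upgrade transaction can
+    // only be scheduled once the previous one has been executed and cleared.
+    function setUpgradeTxHash(bytes32 newUpgradeTxHash) external {
+        require(l2SystemContractsUpgradeTxHash == bytes32(0), "previous upgrade not finished");
+        l2SystemContractsUpgradeTxHash = newUpgradeTxHash;
+    }
+}
+```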
+
+### Bootloader
+
+The upgrade transactions are processed just like priority transactions, with only the following differences:
+
+- We can have only one upgrade transaction per batch, and this transaction must be the first transaction in the batch.
+- The system contracts upgrade transaction is not appended to `priorityOperationsRollingHash` and doesn't increment
+  `numberOfPriorityTransactions`. Instead, its hash is sent via a system L2→L1 log _before_ it gets executed.
+  Note that this is an important property. More on it [below](#security-considerations).
+
+### Commit
+
+After an upgrade has been initiated, it will be required that the next commit batches operation already contains the
+system upgrade transaction. It is
+[checked](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/contracts/ethereum/contracts/zksync/facets/Executor.sol#L157)
+by verifying the corresponding L2→L1 log.
+
+We also remember that the upgrade transaction has been processed in this batch (by amending the
+`l2SystemContractsUpgradeBatchNumber` variable).
+
+### Revert
+
+In the very rare event that the team needs to revert the batch with the upgrade on zkSync, the
+`l2SystemContractsUpgradeBatchNumber` is
+[reset](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/contracts/ethereum/contracts/zksync/facets/Executor.sol#L412).
+
+Note, however, that we do not “remember” that certain batches had a version before the upgrade, i.e. if the reverted
+batches have to be re-executed, the upgrade transaction must still be present there, even if some of the deleted
+batches were committed before the upgrade and thus didn’t contain the transaction.
+
+### Execute
+
+Once the batch with the upgrade transaction has been executed, we
+[delete](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/contracts/ethereum/contracts/zksync/facets/Executor.sol#L304)
+these values from storage for efficiency and to signify that the upgrade has been fully processed and that a new upgrade can be
+initiated.
+
+## Security considerations
+
+Since the operator can put any data into the bootloader memory and for L1→L2 transactions the bootloader has to blindly
+trust it and rely on L1 contracts to validate it, it may be a very powerful tool for a malicious operator. Note that
+while the governance mechanism is generally trusted, we try to limit our trust in the operator as much as possible,
+since in the future anyone will be able to become an operator.
+
+Some time ago, we _used to_ have a system where the upgrades could be done via L1→L2 transactions, i.e. the
+implementation of the `DiamondProxy` upgrade would
+[include](https://github.com/matter-labs/era-contracts/blob/f06a58360a2b8e7129f64413998767ac169d1efd/ethereum/contracts/zksync/upgrade-initializers/DIamondUpgradeInit2.sol#L27)
+a priority transaction (with `from` equal to, for instance, `FORCE_DEPLOYER`) with all the upgrade params.
+
+In Boojum, though, having such logic would be dangerous and would allow for the following attack:
+
+- Let’s say that we have at least 1 priority operation in the priority queue. This can be any operation, initiated by
+  anyone.
+- The operator puts a malicious priority operation with an upgrade into the bootloader memory. This operation was never
+  included in the priority operations queue and it is not an upgrade transaction. However, as already mentioned above,
+  the bootloader has no idea which priority / upgrade transactions are correct, and so this transaction will be processed.
+
+The most important caveat of this malicious upgrade is that it may change the implementation of the `Keccak256` precompile
+to return any values that the operator needs.
+
+- When `priorityOperationsRollingHash` is updated, instead of the “correct” rolling hash of the priority
+  transactions, the one that would appear with the correct topmost priority operation is returned. The operator can’t
+  amend the behaviour of `numberOfPriorityTransactions`, but that won’t help much, since
+  `priorityOperationsRollingHash` will still match on L1 at the execution step.
+
+That’s why the concept of the upgrade transaction is needed: this is the only transaction that can initiate transactions
+out of the kernel space and thus change bytecodes of system contracts. That’s why it must be the first one and that’s
+why we
+[emit](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/system-contracts/bootloader/bootloader.yul#L587)
+its hash via a system L2→L1 log before actually processing it.
+
+### Why it doesn’t break on the previous version of the system
+
+This section is not required for understanding Boojum, but it is useful for those willing to analyze the production system that is
+deployed at the time of this writing.
+
+Note that the hash of the transaction is calculated before the transaction is executed:
+[https://github.com/matter-labs/era-system-contracts/blob/3e954a629ad8e01616174bde2218241b360fda0a/bootloader/bootloader.yul#L1055](https://github.com/matter-labs/era-system-contracts/blob/3e954a629ad8e01616174bde2218241b360fda0a/bootloader/bootloader.yul#L1055)
+
+And then we publish its hash on L1 via a _system_ L2→L1 log:
+[https://github.com/matter-labs/era-system-contracts/blob/3e954a629ad8e01616174bde2218241b360fda0a/bootloader/bootloader.yul#L1133](https://github.com/matter-labs/era-system-contracts/blob/3e954a629ad8e01616174bde2218241b360fda0a/bootloader/bootloader.yul#L1133)
+
+In the new upgrade system, the `priorityOperationsRollingHash` is calculated on L2, and so if something in the middle
+changes the implementation of `Keccak256`, it may lead to the full `priorityOperationsRollingHash` being maliciously
+crafted. In the pre-Boojum system, we publish all the hashes of the priority transactions via system L2→L1 logs and then the
+rolling hash is calculated on L1. This means that if at least one of the hashes is incorrect, then the entire rolling hash
+will not match either.
diff --git a/docs/specs/l1_l2_communication/l2_to_l1.md b/docs/specs/l1_l2_communication/l2_to_l1.md
new file mode 100644
index 00000000000..c13194bdec9
--- /dev/null
+++ b/docs/specs/l1_l2_communication/l2_to_l1.md
@@ -0,0 +1,72 @@
+# L2→L1 communication
+
+The L2→L1 communication is more fundamental than the L1→L2 communication, as the latter relies on the former. L2→L1
+communication happens by the L1 smart contract verifying messages alongside the proofs. The only “provable” part of the
+communication from L2 to L1 is the native L2→L1 logs emitted by the VM. These can be emitted by the `to_l1`
+[opcode](https://github.com/code-423n4/2023-10-zksync/blob/main/docs/Smart%20contract%20Section/System%20contracts%20bootloader%20description.md).
+Each log consists of the following fields:
+
+```solidity
+struct L2Log {
+  uint8 l2ShardId;
+  bool isService;
+  uint16 txNumberInBatch;
+  address sender;
+  bytes32 key;
+  bytes32 value;
+}
+```
+
+Where:
+
+- `l2ShardId` is the id of the shard in which the opcode was called (it is currently always 0).
+- `isService` is a boolean flag that is not used right now.
+- `txNumberInBatch` is the number of the transaction in the batch where the log happened. This number is taken from the
+  internal counter which is incremented each time the `increment_tx_counter` is
+  [called](https://github.com/code-423n4/2023-10-zksync/blob/main/docs/Smart%20contract%20Section/System%20contracts%20bootloader%20description.md).
+- `sender` is the value of `this` in the frame where the L2→L1 log was emitted.
+- `key` and `value` are just two 32-byte values that could be used to carry some data with the log.
+
+The hashed array of these logs is then included in the
+[batch commitment](https://github.com/matter-labs/era-contracts/blob/f06a58360a2b8e7129f64413998767ac169d1efd/ethereum/contracts/zksync/facets/Executor.sol#L493).
+Because of that, we know that if the proof verifies, then the L2→L1 logs provided by the operator were correct, so we can
+use that fact to produce more complex structures. Before Boojum, such logs were also Merklized within the circuits, and so
+the Merkle tree’s root hash was included in the batch commitment as well.
+
+## Important system values
+
+The two `key` and `value` fields are enough for a lot of system-related use cases, such as sending the timestamp of the batch,
+the previous batch hash, etc. They were and are
+[used](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/system-contracts/contracts/SystemContext.sol#L438)
+to verify the correctness of the batch's timestamps and hashes. You can read more about block processing
+[here](https://github.com/code-423n4/2023-10-zksync/blob/main/docs/Smart%20contract%20Section/Batches%20&%20L2%20blocks%20on%20zkSync.md).
+
+## Long L2→L1 messages & bytecodes
+
+However, sometimes users want to send long messages beyond the 64 bytes that `key` and `value` allow. But as already
+said, these L2→L1 logs are the only way that the L2 can communicate with the outside world. How do we provide long
+messages?
+
+Let’s add a `sendToL1` method in L1Messenger, where the main idea is the following:
+
+- Let’s submit an L2→L1 log with `key = msg.sender` (the actual sender of the long message) and
+  `value = keccak256(message)`.
+- Now, during batch commitment the operator will have to provide an array of such long L2→L1 messages, and it will be
+  checked on L1 that indeed for each such log the correct preimage was provided.
+
+A very similar idea is used to publish uncompressed bytecodes on L1 (the compressed bytecodes are sent via the long
+L2→L1 messages mechanism as explained above).
+
+Note, however, that whenever someone wants to prove that a certain message was present, they need to compose the L2→L1
+log and prove its presence.
+
+## Priority operations
+
+Also, for each priority operation, we send its hash and its status via an L2→L1 log. On L1 we then
+reconstruct the rolling hash of the processed priority transactions, allowing us to verify during the
+`executeBatches` method that the batch indeed contained the correct priority operations.
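+
+For intuition, the L1 side of this check can be pictured as follows (a hedged sketch, not the actual `Executor` code;
+the seed constant and the exact ABI encoding are assumptions here):
+
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.0;
+
+library RollingHashSketch {
+    // Recompute the rolling hash of priority operations popped from the queue, in order.
+    // The result must equal the `priorityOperationsRollingHash` reported via the system L2→L1 log.
+    function recompute(bytes32[] memory canonicalTxHashes) internal pure returns (bytes32 rollingHash) {
+        rollingHash = keccak256(""); // assumed seed ("empty string" hash); illustrative only
+        for (uint256 i = 0; i < canonicalTxHashes.length; ++i) {
+            rollingHash = keccak256(abi.encode(rollingHash, canonicalTxHashes[i]));
+        }
+    }
+}
+```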
+
+Importantly, since both the hash and the status were sent, it is possible to
+[prove](https://github.com/code-423n4/2023-10-zksync/blob/ef99273a8fdb19f5912ca38ba46d6bd02071363d/code/contracts/ethereum/contracts/bridge/L1ERC20Bridge.sol#L255)
+that the L2 part of a deposit has failed and ask the bridge to release funds.
diff --git a/docs/specs/l1_l2_communication/overview_deposits_withdrawals.md b/docs/specs/l1_l2_communication/overview_deposits_withdrawals.md
new file mode 100644
index 00000000000..4137fe1f1b5
--- /dev/null
+++ b/docs/specs/l1_l2_communication/overview_deposits_withdrawals.md
@@ -0,0 +1,13 @@
+# Overview - Deposits and Withdrawals
+
+The zkEVM supports general message passing for L1<->L2 communication. Proofs are settled on L1, so the core of this process
+is the [L2->L1] message passing process. [L1->L2] messages are recorded on L1 inside a priority queue; the sequencer
+picks them up from there and executes them in the zkEVM. The zkEVM sends an L2->L1 message about the L1 transactions that it
+processed, and the rollup's proof is only valid if the processed transactions were exactly right.
+
+There is an asymmetry between the two directions, however: in the L1->L2 direction we support starting message calls by having
+a special transaction type called L1 transactions, while in the L2->L1 direction we only support message passing.
+
+In particular, deposits and withdrawals of ether also use the above methods. For deposits, the L1->L2 transaction is sent
+with empty calldata, the recipient's address, and the deposited value. When withdrawing, an L2->L1 message is sent. This
+is then processed by the smart contract holding the ether on L1, which releases the funds.
diff --git a/docs/specs/l1_smart_contracts.md b/docs/specs/l1_smart_contracts.md
new file mode 100644
index 00000000000..b5b0a484559
--- /dev/null
+++ b/docs/specs/l1_smart_contracts.md
@@ -0,0 +1,289 @@
+# L1 Smart contracts
+
+This document presumes familiarity with rollups. For a better understanding, consider reading the overview
+[here](./overview.md).
+
+Rollups inherit security and decentralization guarantees from Ethereum, on which they store information about changes in
+their own state, providing validity proofs for state transitions, implementing a communication mechanism, etc. In
+practice, all of this is achieved by smart contracts built on top of Ethereum. This document details the architecture of
+the L2's contracts on Ethereum Layer 1. We also have contracts that support the hyperchain ecosystem; we cover those in
+the [Shared Bridge](./the_hyperchain/shared_bridge.md) section. The Shared Bridge relies on these individual contracts.
+
+## Diamond
+
+Technically, this L1 smart contract acts as a connector between Ethereum (L1) and a single L2. It checks the validity
+proof and data availability, handles L2 <-> L1 communication, finalizes L2 state transitions, and more.
+
+![diamondProxy.png](./img/diamondProxy.jpg)
+
+### DiamondProxy
+
+The main contract uses the [EIP-2535](https://eips.ethereum.org/EIPS/eip-2535) diamond proxy pattern. It is an in-house
+implementation that is inspired by the [mudgen reference implementation](https://github.com/mudgen/Diamond). It has no
+external functions, only the fallback that delegates a call to one of the facets (target/implementation contracts). So
+even the upgrade system is a separate facet that can be replaced.
+
+One of the differences from the reference implementation is access freezability. Each of the facets has an associated
+parameter that indicates if it is possible to freeze access to the facet. Privileged actors can freeze the **diamond**
+(not a specific facet!), and then all facets with the marker `isFreezable` will be inaccessible until the governor or admin
+unfreezes the diamond. Note that this is very dangerous, since freezing the diamond can freeze the upgrade system, in which
+case the diamond would be frozen forever.
+
+The diamond proxy pattern is very flexible and extendable. For now, it allows splitting implementation contracts by
+their logical meaning, removes the limit on bytecode size per contract, and implements security features such as
+freezing. In the future, it can also be viewed as [EIP-6900](https://eips.ethereum.org/EIPS/eip-6900) for
+[zkStack](https://blog.matter-labs.io/introducing-the-zk-stack-c24240c2532a), where each hyperchain can implement a
+subset of allowed implementation contracts.
+
+### GettersFacet
+
+A separate facet whose only function is to provide `view` and `pure` methods. It also implements
+[diamond loupe](https://eips.ethereum.org/EIPS/eip-2535#diamond-loupe), which makes managing facets easier. This contract
+must never be frozen.
+
+### AdminFacet
+
+Controls changes to the privileged addresses, such as the governor and validators, and to system parameters (L2
+bootloader bytecode hash, verifier address, verifier parameters, etc.). It also manages the freezing/unfreezing and
+execution of upgrades in the diamond proxy.
+
+The admin facet is controlled by two entities:
+
+- Governance - A separate smart contract that can perform critical changes to the system, such as protocol upgrades. This
+  contract is controlled by two multisigs: one managed by the Matter Labs team, and another that will be a multisig of
+  well-respected contributors in the crypto space. Only together can they perform an instant upgrade; the Matter Labs team
+  alone can only schedule an upgrade with a delay.
+- Admin - A multisig smart contract managed by Matter Labs that can perform non-critical changes to the system, such as
+  granting validator permissions. Note that the Admin is the same multisig as the owner of the Governance.
+
+### MailboxFacet
+
+The facet that handles L2 <-> L1 communication, an overview of which can be found in the
+[docs](https://era.zksync.io/docs/dev/developer-guides/bridging/l1-l2-interop.html).
+
+The Mailbox performs three functions:
+
+- L1 <-> L2 communication.
+- Bridging native Ether to the L2 (with the launch of the Shared Bridge this will be moved).
+- Censorship resistance mechanism (in the research stage).
+
+L1 -> L2 communication is implemented as requesting an L2 transaction on L1 and executing it on L2. This means a user
+can call a function on the L1 contract that saves the data about the transaction in a queue. Later on, a validator can
+process it on L2 and mark it as processed in the L1 priority queue. Currently, it is used for sending information from
+L1 to L2 or implementing multi-layer protocols.
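+
+For illustration, requesting an L1 -> L2 call looks roughly like this from a contract's perspective (a hedged sketch
+based on the `requestL2Transaction` method linked above; treat the exact parameter list as indicative rather than
+authoritative, and the interface and contract names here as hypothetical):
+
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.0;
+
+// Minimal interface sketch for the Mailbox facet; the real signature may differ.
+interface IMailboxSketch {
+    function requestL2Transaction(
+        address contractL2,
+        uint256 l2Value,
+        bytes calldata data,
+        uint256 l2GasLimit,
+        uint256 l2GasPerPubdataByteLimit,
+        bytes[] calldata factoryDeps,
+        address refundRecipient
+    ) external payable returns (bytes32 canonicalTxHash);
+}
+
+contract L1ToL2Caller {
+    IMailboxSketch public immutable zkSync; // the DiamondProxy address
+
+    constructor(IMailboxSketch _zkSync) {
+        zkSync = _zkSync;
+    }
+
+    // msg.value covers the L2 fee; the request ends up in the priority queue.
+    function callL2(address l2Contract, bytes calldata data, uint256 l2GasLimit, uint256 gasPerPubdata)
+        external
+        payable
+        returns (bytes32 canonicalTxHash)
+    {
+        canonicalTxHash = zkSync.requestL2Transaction{value: msg.value}(
+            l2Contract, 0, data, l2GasLimit, gasPerPubdata, new bytes[](0), msg.sender
+        );
+    }
+}
+```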
+
+_NOTE_: While the user requests the transaction from L1, the transaction initiated on L2 will have the following `msg.sender`:
+
+```solidity
+  address sender = msg.sender;
+  if (sender != tx.origin) {
+    sender = AddressAliasHelper.applyL1ToL2Alias(msg.sender);
+  }
+```
+
+where
+
+```solidity
+uint160 constant offset = uint160(0x1111000000000000000000000000000000001111);
+
+function applyL1ToL2Alias(address l1Address) internal pure returns (address l2Address) {
+  unchecked {
+    l2Address = address(uint160(l1Address) + offset);
+  }
+}
+```
+
+For most rollups, address aliasing is needed to prevent cross-chain exploits that would otherwise be possible if
+we simply reused the same L1 addresses as the L2 sender. In the zkEVM, the address derivation rule is different from
+Ethereum's, so cross-chain exploits are already impossible. However, the zkEVM may add full EVM support in the future, so
+applying address aliasing leaves room for future EVM compatibility.
+
+The L1 -> L2 communication is also used for bridging ether. The user should include a `msg.value` when initiating a
+transaction request on the L1 contract. Before executing a transaction on L2, the specified address will be credited
+with the funds. To withdraw funds, the user should call the `withdraw` function on the `L2EtherToken` system contract. This will
+burn the funds on L2, allowing the user to reclaim them through the `finalizeEthWithdrawal` function on the
+`MailboxFacet`.
+
+More about L1->L2 operations can be found
+[here](https://github.com/code-423n4/2023-10-zksync/blob/main/docs/Smart%20contract%20Section/Handling%20L1→L2%20ops%20on%20zkSync.md).
+
+L2 -> L1 communication, in contrast to L1 -> L2 communication, is based only on transferring information, and not on
+transaction execution on L1. The full description of the mechanism for sending information from L2 to L1 can be
+found
+[here](https://github.com/code-423n4/2023-10-zksync/blob/main/docs/Smart%20contract%20Section/Handling%20pubdata%20in%20Boojum.md).
+
+### ExecutorFacet
+
+A contract that accepts L2 batches, enforces data availability, and checks the validity of zk-proofs.
+
+The state transition is divided into three stages:
+
+- `commitBatches` - check the L2 batch timestamp, process the L2 logs, save data for a batch, and prepare data for the zk-proof.
+- `proveBatches` - validate the zk-proof.
+- `executeBatches` - finalize the state, mark L1 -> L2 communications as processed, and save the Merkle tree with L2 logs.
+
+Each L2 -> L1 system log will have a key that is part of the following:
+
+```solidity
+enum SystemLogKey {
+  L2_TO_L1_LOGS_TREE_ROOT_KEY,
+  TOTAL_L2_TO_L1_PUBDATA_KEY,
+  STATE_DIFF_HASH_KEY,
+  PACKED_BATCH_AND_L2_BLOCK_TIMESTAMP_KEY,
+  PREV_BATCH_HASH_KEY,
+  CHAINED_PRIORITY_TXN_HASH_KEY,
+  NUMBER_OF_LAYER_1_TXS_KEY,
+  EXPECTED_SYSTEM_CONTRACT_UPGRADE_TX_HASH_KEY
+}
+```
+
+When a batch is committed, we process L2 -> L1 system logs. Here are the invariants that are expected there:
+
+- In a given batch there will be either 7 or 8 system logs. The 8th log is only required for a protocol upgrade.
+- There will be a single log for each key that is contained within `SystemLogKey`.
+- Three logs from the `L2_TO_L1_MESSENGER` with keys:
+  - `L2_TO_L1_LOGS_TREE_ROOT_KEY`
+  - `TOTAL_L2_TO_L1_PUBDATA_KEY`
+  - `STATE_DIFF_HASH_KEY`
+- Two logs from `L2_SYSTEM_CONTEXT_SYSTEM_CONTRACT_ADDR` with keys:
+  - `PACKED_BATCH_AND_L2_BLOCK_TIMESTAMP_KEY`
+  - `PREV_BATCH_HASH_KEY`
+- Two or three logs from `L2_BOOTLOADER_ADDRESS` with keys:
+  - `CHAINED_PRIORITY_TXN_HASH_KEY`
+  - `NUMBER_OF_LAYER_1_TXS_KEY`
+  - `EXPECTED_SYSTEM_CONTRACT_UPGRADE_TX_HASH_KEY`
+- No logs from other addresses (this may change in the future).
+
+### DiamondInit
+
+It is a one-function contract that implements the logic of initializing a diamond proxy. It is called only once, in the
+diamond constructor, and is not saved in the diamond as a facet.
+
+Implementation detail - the function returns a magic value just like it is designed in
+[EIP-1271](https://eips.ethereum.org/EIPS/eip-1271), but the magic value is 32 bytes in size.
+
+## Bridges
+
+Bridges are completely separate contracts from the Diamond. They are a wrapper for L1 <-> L2 communication, with contracts
+on both L1 and L2. Upon locking assets on L1, a request is sent to mint these bridged assets on L2. Upon burning assets
+on L2, a request is sent to unlock them on L1.
+
+Unlike the native Ether bridging, all other assets can be bridged by a custom implementation relying on the trustless
+L1 <-> L2 communication.
+
+### L1ERC20Bridge
+
+The "standard" implementation of the ERC20 token bridge. It works only with regular ERC20 tokens, i.e. not with
+fee-on-transfer tokens or tokens with other custom logic for handling user balances.
+
+- `deposit` - lock funds inside the contract and send a request to mint bridged assets on L2.
+- `claimFailedDeposit` - unlock funds if the deposit was initiated but then failed on L2.
+- `finalizeWithdrawal` - unlock funds for a valid withdrawal request from L2.
+
+The owner of the L1ERC20Bridge is the Governance contract.
+
+### L2ERC20Bridge
+
+The L2 counterpart of the L1 ERC20 bridge.
+
+- `withdraw` - initiate a withdrawal by burning funds on the contract and sending a corresponding message to L1.
+- `finalizeDeposit` - finalize the deposit and mint funds on L2. The function is only callable by the L1 bridge.
+
+The owner of the L2ERC20Bridge and the contracts related to it is the Governance contract.
+
+### L1WethBridge
+
+The custom bridge exclusively handles transfers of WETH tokens between the two domains. It is designed to streamline and
+enhance the user experience for bridging WETH tokens by minimizing the number of transactions required and reducing
+liquidity fragmentation, thus improving efficiency.
+
+This contract accepts WETH deposits on L1, unwraps them to ETH, and sends the ETH to the L2 WETH bridge contract, where
+it is wrapped back into WETH and delivered to the L2 recipient.
+
+Thus, the deposit is made in one transaction, and the user receives L2 WETH that can be unwrapped to ETH.
+
+For withdrawals, the contract receives ETH from the L2 WETH bridge contract, wraps it into WETH, and sends the WETH to
+the L1 recipient.
+
+The owner of the L1WethBridge contract is the Governance contract.
+
+### L2WethBridge
+
+The L2 counterpart of the L1 WETH bridge.
+
+The owner of the L2WethBridge and L2Weth contracts is the Governance contract.
+
+## Governance
+
+This contract manages calls for all governed zkEVM contracts on L1 and L2. Mostly, it is used for upgradability and
+changing critical system parameters. The contract has minimum delay settings for call execution.
+
+Each upgrade consists of two steps:
+
+- Scheduling - The owner can schedule upgrades in two different manners:
+  - Fully transparent data. All the targets, calldata, and upgrade conditions are known to the community before upgrade
+    execution.
+  - Shadow upgrade. The owner only shows a commitment to the upgrade. This upgrade type is mostly useful for fixing
+    critical issues in the production environment.
+- Upgrade execution - the Owner or the Security Council can perform the upgrade with previously scheduled parameters.
+  - Upgrade with delay. Scheduled operations must wait out the delay period. Both the owner and the Security Council can
+    execute this type of upgrade.
+  - Instant upgrade. Scheduled operations can be executed at any moment. Only the Security Council can perform this type
+    of upgrade.
+
+Please note that both the Owner and the Security Council can cancel the upgrade before its execution.
+
+The diagram below outlines the complete journey from the initiation of an operation to its execution.
+
+![governance.png](./img/governance.jpg)
+
+## ValidatorTimelock
+
+An intermediate smart contract between the validator EOA account and the zkSync smart contract. Its primary purpose is
+to provide a trustless means of delaying batch execution without modifying the main zkSync contract. zkSync actively
+monitors the chain activity and reacts to any suspicious activity by freezing the chain. This allows time for
+investigation and mitigation before resuming normal operations.
+
+It is a temporary solution to prevent any significant impact of a validator hot key leakage while the network is in
+the Alpha stage.
+
+This contract consists of four main functions `commitBatches`, `proveBatches`, `executeBatches`, and `revertBatches`,
+which can be called only by the validator.
+
+When the validator calls `commitBatches`, the same calldata will be propagated to the zkSync contract (the `DiamondProxy`
+through `call`, where it invokes the `ExecutorFacet` through `delegatecall`), and a timestamp is assigned to these
+batches to track when they were committed by the validator, in order to enforce a delay between the committing and
+execution of batches. Then, the validator can prove the already committed batches regardless of the mentioned timestamp,
+and again the same calldata (related to the `proveBatches` function) will be propagated to the zkSync contract. After
+the `delay` has elapsed, the validator is allowed to call `executeBatches` to propagate the same calldata to the zkSync
+contract.
+
+The owner of the ValidatorTimelock contract is the same as the owner of the Governance contract - the Matter Labs multisig.
+
+## Allowlist
+
+An auxiliary contract that controls the permission access list. It is used in bridges and diamond proxies to control which
+addresses can interact with them in the Alpha release. Currently, it is supposed to set all permissions to public.
+
+The owner of the Allowlist contract is the Governance contract.
+
+## Deposit Limitation
+
+The amount of deposits can be limited. This limitation is applied on an account level and is not time-based. In other
+words, each account cannot deposit more than the defined cap. The tokens and the cap can be set through governance
+transactions. Moreover, there is an allow listing mechanism as well (only some allow listed accounts can call some
+specific functions). So, the combination of the deposit limitation and allow listing limits the deposits of an
+allow listed account to at most the defined cap.
+
+```solidity
+struct Deposit {
+  bool depositLimitation;
+  uint256 depositCap;
+}
+```
+
+Currently, the limit is used only for blocking deposits of a specific token (turning on the limitation and setting the
+limit to zero). In the near future, this functionality will be completely removed.
diff --git a/docs/specs/overview.md b/docs/specs/overview.md
new file mode 100644
index 00000000000..0f87d91c5c7
--- /dev/null
+++ b/docs/specs/overview.md
@@ -0,0 +1,38 @@
+# Overview
+
+As stated in the introduction, the ZK Stack can be used to launch rollups. These rollups need some operators to run
+them: the sequencer and the prover. They create blocks and proofs, and submit them to the L1 contract.
+
+A user submits their transaction to the sequencer. The job of the sequencer is to collect transactions and execute them
+using the zkEVM, and to provide a soft confirmation to the user that their transaction was executed. If the user chooses,
+they can force the sequencer to include their transaction by submitting it via L1. After the sequencer executes the
+block, it sends it over to the prover, who creates a cryptographic proof of the block's execution. This proof is then
+sent to the L1 contract alongside the necessary data. On the L1, a [smart contract](./l1_smart_contracts.md) verifies
+that the proof is valid and all the data has been submitted, and the rollup's state is also updated in the contract.
+
+![Components](./img/L2_Components.png)
+
+The core of this mechanism is the execution of transactions. The ZK Stack uses the [zkEVM](./zk_evm/README.md) for
+this, which is similar to the EVM, but its role is different than the EVM's role in Ethereum.
+
+Transactions can also be submitted via L1. This happens via the same process that allows
+[L1<>L2 communication](./l1_l2_communication/README.md). This method provides the rollup with censorship resistance, and
+allows trustless bridges to the L1.
+
+The sequencer collects transactions into [blocks](./blocks_batches.md), similarly to Ethereum. To provide the
+best UX, the protocol has small blocks with quick soft confirmations for the users. Unlike Ethereum, the zkEVM does not
+just have blocks, but also batches, which are just collections of blocks. A batch is the unit that the prover
+processes.
+
+Before we submit a proof, we send the [data](./data_availability/README.md) to L1. Instead of submitting the data of each
+transaction, we submit how the state of the blockchain changes; this change is called the state diff. This approach
+allows the transactions that change the same storage slots to be very cheap, since these transactions don't incur
+additional data costs.
+
+Finally, at the end of the process, we [create the proofs](./data_availability/README.md) and send them to L1. Our Boojum
+proof system provides excellent performance, and can be run on just 16 GB of GPU RAM. This will enable the proof
+generation to be truly decentralized.
+
+Up to this point we have only talked about a single chain. We will connect these chains into a single ecosystem, called
+[the hyperchain](./the_hyperchain/README.md).
diff --git a/docs/specs/prover/README.md b/docs/specs/prover/README.md
new file mode 100644
index 00000000000..4dccfc9a833
--- /dev/null
+++ b/docs/specs/prover/README.md
@@ -0,0 +1,9 @@
+# Prover
+
+- [Overview](./overview.md)
+- [ZK terminology](./zk_terminology.md)
+- [Getting Started](./getting_started.md)
+- [Circuits](./circuits/README.md)
+- [Circuit testing](./circuit_testing.md)
+- [Boojum gadgets](./boojum_gadgets.md)
+- [Boojum function: check_if_satisfied](./boojum_function_check_if_satisfied.md)
diff --git a/docs/specs/prover/boojum_function_check_if_satisfied.md b/docs/specs/prover/boojum_function_check_if_satisfied.md
new file mode 100644
index 00000000000..922889b90d4
--- /dev/null
+++ b/docs/specs/prover/boojum_function_check_if_satisfied.md
@@ -0,0 +1,97 @@
+# Boojum function: check_if_satisfied
+
+Note: Please read our other documentation and tests first before reading this page.
+
+Our circuits (and tests) depend on a function from Boojum called
+[check_if_satisfied](https://github.com/matter-labs/era-boojum/blob/main/src/cs/implementations/satisfiability_test.rs#L11).
+You don’t need to understand it to run circuit tests, but it can be informative to learn more about Boojum and our proof
+system.
+
+First we prepare the constants, variables, and witness. As a reminder, the constants are just constant numbers, the
+variables are circuit columns that are under PLONK copy-permutation constraints (so they are close in semantics to variables
+in programming languages), and the witness consists of ephemeral values that can be used to prove certain constraints, for
+example by providing an inverse if the variable must be non-zero.
+
+![Check_if_satisfied.png](./img/boojum_function_check_if_satisfied/check_if_satisfied.png)
+
+Next we prepare a view. Instead of working with all of the columns at once, it can be helpful to work with only a
+subset.
+
+![Check_if_satisfied(1).png](<./img/boojum_function_check_if_satisfied/Check_if_satisfied(1).png>)
+
+Next we create the paths_mappings. For each gate in the circuit, we create a vector of booleans in the correct shape.
+Later, when we traverse the gates with actual inputs, we’ll be able to remember which gates should be satisfied at
+particular rows by computing the corresponding selector using constant columns and the paths_mappings.
+
+![Check_if_satisfied(2).png](<./img/boojum_function_check_if_satisfied/Check_if_satisfied(2).png>)
+
+Now, we have to actually check everything. The checks for the rows depend on whether they are under general purpose
+columns or under special purpose columns.
+
+**General purpose rows:**
+
+For each row and gate, we need several things:
+
+- Evaluator for the gate, to compute the result of the gate
+- Path for the gate from the paths_mappings, to locate the gate
+- Constants_placement_offset, to find the constants
+- Num_terms in the evaluator
+  - If this is zero, we can skip the row since there is nothing to do
+- Gate_debug_name
+- num_constants_used
+- this_view
+- placement (described below)
+- evaluation function
+
+![Check_if_satisfied(3).png](<./img/boojum_function_check_if_satisfied/Check_if_satisfied(3).png>)
+
+Placement is either UniqueOnRow or MultipleOnRow. UniqueOnRow means there is only one gate on the row (typically because
+the gate is larger / more complicated). MultipleOnRow means there are multiple gates within the same row (typically
+because the gate is smaller). For example, if a gate only needs 30 columns, but we have 150 columns, we could include
+five copies of that gate in the same row.
+
+Next, if the placement is UniqueOnRow, we call evaluate_over_general_purpose_columns. All of the evaluations should be
+equal to zero, or we panic.
+
+![Check_if_satisfied(4).png](<./img/boojum_function_check_if_satisfied/Check_if_satisfied(4).png>)
+
+If the placement is MultipleOnRow, we again call evaluate_over_general_purpose_columns. If any of the evaluations are
+non-zero, we log some extra debug information, and then panic.
+
+![Check_if_satisfied(7).png](<./img/boojum_function_check_if_satisfied/Check_if_satisfied(7).png>)
+
+This concludes evaluating and checking the general purpose rows. Now we will check the specialized rows.
+
+![Check_if_satisfied(8).png](<./img/boojum_function_check_if_satisfied/Check_if_satisfied(8).png>)
+
+We start by initializing vectors for specialized_placement_data, evaluation_functions, views, and evaluator_names. Then,
+we iterate over each gate_type_id and evaluator.
+
+![Check_if_satisfied(9).png](<./img/boojum_function_check_if_satisfied/Check_if_satisfied(9).png>)
+
+If gate_type_id is a LookupFormalGate, we don’t need to do anything in this loop because it is handled by the lookup
+table. For all other cases, we need to check that the evaluator’s total_quotient_terms_over_all_repetitions is non-zero.
+
+![Check_if_satisfied(11).png](<./img/boojum_function_check_if_satisfied/Check_if_satisfied(11).png>)
+
+Next, we get num_terms, num_repetitions, share_constants, total_terms, initial_offset, per_repetition_offset, and
+total_constants_available. All of these together form our placement data.
+
+![Check_if_satisfied(12).png](<./img/boojum_function_check_if_satisfied/Check_if_satisfied(12).png>)
+
+![Check_if_satisfied(13).png](<./img/boojum_function_check_if_satisfied/Check_if_satisfied(13).png>)
+
+Once we know the placement_data, we can keep it for later, as well as the evaluator for this gate.
+
+![Check_if_satisfied(14).png](<./img/boojum_function_check_if_satisfied/Check_if_satisfied(14).png>)
+
+We will also keep the view and evaluator name. This is all the data we need from our specialized columns.
+
+To complete the satisfiability test on the special columns, we just need to loop through and check that each of the
+evaluations is zero.
+
+![Check_if_satisfied(16).png](<./img/boojum_function_check_if_satisfied/Check_if_satisfied(16).png>)
+
+![Check_if_satisfied(17).png](<./img/boojum_function_check_if_satisfied/Check_if_satisfied(17).png>)
+
+Now we have checked every value on every row, so the satisfiability test has passed, and we can return true.
diff --git a/docs/specs/prover/boojum_gadgets.md b/docs/specs/prover/boojum_gadgets.md
new file mode 100644
index 00000000000..eb6e9ce719b
--- /dev/null
+++ b/docs/specs/prover/boojum_gadgets.md
@@ -0,0 +1,189 @@
+# Boojum gadgets
+
+Boojum gadgets are low-level implementations of tools for constraint systems. They consist of various types: curves,
+hash functions, lookup tables, and different circuit types. These gadgets are mostly adapted from
+[franklin-crypto](https://github.com/matter-labs/franklin-crypto), with additional hash functions added. These gadgets
+have been changed to use the Goldilocks field (of order 2^64 - 2^32 + 1), which is much smaller than that of bn256. This
+allows us to make the proof system much cheaper.
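+
+A standard fact about the Goldilocks prime (noted here for intuition; it is a property of the field itself, not of
+Boojum) is that reduction modulo p is very cheap on 64-bit hardware:
+
+```latex
+p = 2^{64} - 2^{32} + 1
+\quad\Rightarrow\quad
+2^{64} \equiv 2^{32} - 1 \pmod{p},
+\qquad
+2^{96} \equiv -1 \pmod{p}
+```
+
+So the high limbs of a 128-bit product fold back into 64 bits with a few additions and subtractions, which keeps field
+multiplication fast on both CPUs and GPUs.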
+
+## Circuit types
+
+These are the types we use for circuits:
+
+**Num (Number):**
+
+```rust
+pub struct Num<F: SmallField> {
+    pub(crate) variable: Variable,
+    pub(crate) _marker: std::marker::PhantomData<F>,
+}
+```
+
+**Boolean:**
+
+```rust
+pub struct Boolean<F: SmallField> {
+    pub(crate) variable: Variable,
+    pub(crate) _marker: std::marker::PhantomData<F>,
+}
+```
+
+**U8:**
+
+```rust
+pub struct UInt8<F: SmallField> {
+    pub(crate) variable: Variable,
+    pub(crate) _marker: std::marker::PhantomData<F>,
+}
+```
+
+**U16:**
+
+```rust
+pub struct UInt16<F: SmallField> {
+    pub(crate) variable: Variable,
+    pub(crate) _marker: std::marker::PhantomData<F>,
+}
+```
+
+**U32:**
+
+```rust
+pub struct UInt32<F: SmallField> {
+    pub(crate) variable: Variable,
+    pub(crate) _marker: std::marker::PhantomData<F>,
+}
+```
+
+**U160:**
+
+```rust
+pub struct UInt160<F: SmallField> {
+    pub inner: [UInt32<F>; 5],
+}
+```
+
+**U256:**
+
+```rust
+pub struct UInt256<F: SmallField> {
+    pub inner: [UInt32<F>; 8],
+}
+```
+
+**U512:**
+
+```rust
+pub struct UInt512<F: SmallField> {
+    pub inner: [UInt32<F>; 16],
+}
+```
+
+Every type consists of a Variable (the number inside Variable is just the index):
+
+```rust
+pub struct Variable(pub(crate) u64);
+```
+
+which is represented in the current field. Variable is quite diverse, and to get “good” alignment and size we manually
+manage the encoding so that it can be represented as both a copyable variable and a witness.
+
+The implementations of these circuit types are all similar. We can also divide them into main and dependent classes:
+types like U8-U512 are decoded into Nums inside functions so that they can be used in logical operations. As mentioned
+above, the property of these types is to perform logical operations and allocate witnesses.
+
+Let's demonstrate this with the Boolean example:
+
+```rust
+impl<F: SmallField> CSAllocatable<F> for Boolean<F> {
+    type Witness = bool;
+    fn placeholder_witness() -> Self::Witness {
+        false
+    }
+
+    #[inline(always)]
+    fn allocate_without_value<CS: ConstraintSystem<F>>(cs: &mut CS) -> Self {
+        let var = cs.alloc_variable_without_value();
+
+        Self::from_variable_checked(cs, var)
+    }
+
+    fn allocate<CS: ConstraintSystem<F>>(cs: &mut CS, witness: Self::Witness) -> Self {
+        let var = cs.alloc_single_variable_from_witness(F::from_u64_unchecked(witness as u64));
+
+        Self::from_variable_checked(cs, var)
+    }
+}
+```
+
+As you can see, you can allocate both with and without witnesses.
+
+## Hash functions
+
+In gadgets we have several hash implementations:
+
+- blake2s
+- keccak256
+- poseidon/poseidon2
+- sha256
+
+Each of them performs a different function in our proof system.
+
+## Queues
+
+One of the most important gadgets in our system is the queue. It helps us to send data between circuits. Here is a quick
+explanation of how it works:
+
+```rust
+struct CircuitQueue {
+    head: HashState,
+    tail: HashState,
+    length: UInt32,
+    witness: VecDeque<Element>,
+}
+```
+
+The structure consists of `head` and `tail` commitments that are basically rolling hashes. It also has the `length` of
+the queue. These three fields are allocated inside the constraint system. Additionally, there is a `witness` that keeps
+the actual values currently stored in the queue.
+
+And here are the main functions:
+
+```rust
+fn push(&mut self, value: Element) {
+    // increment length
+    // head = hash(head, value)
+    // witness.push_back(value.witness)
+}
+
+fn pop(&mut self) -> Element {
+    // check length > 0
+    // decrement length
+    // value = witness.pop_front()
+    // tail = hash(tail, value)
+    // return value
+}
+
+fn final_check(&self) -> Element {
+    // check that length == 0
+    // check that head == tail
+}
+```
+
+So the key point of how the queue proves that the popped elements are the same as the pushed ones is the equality of the
+rolling hashes stored in the `head` and `tail` fields.
+
+Also, we check that we can’t pop an element before it was pushed. This is done by checking that `length > 0` when popping.
+
+It is very important to make the `final_check`, which basically checks the equality of the two hashes. If the queue is
+never emptied and we never check the equality of `head` and `tail` at the end, we also haven’t proven that the elements
+we popped are correct.
+
+For now, we use the poseidon2 hash. Here are the links to the queue implementations:
+
+- [CircuitQueue](https://github.com/matter-labs/era-boojum/blob/main/src/gadgets/queue/mod.rs#L29)
+- [FullStateCircuitQueue](https://github.com/matter-labs/era-boojum/blob/main/src/gadgets/queue/full_state_queue.rs#L20C12-L20C33)
+
+The difference is that we actually compute and store a hash inside CircuitQueue during `push` and `pop` operations. But
+in FullStateCircuitQueue our `head` and `tail` are just states of sponges. So instead of computing a full hash, we just
+absorb a pushed (popped) element.
diff --git a/docs/specs/prover/circuit_testing.md b/docs/specs/prover/circuit_testing.md
new file mode 100644
index 00000000000..4c8a2a5f210
--- /dev/null
+++ b/docs/specs/prover/circuit_testing.md
@@ -0,0 +1,59 @@
+# Circuit testing
+
+This page explains unit tests for circuits. Specifically, it goes through a unit test of
+[ecrecover](https://github.com/matter-labs/era-zkevm_circuits/blob/main/src/ecrecover/mod.rs#L796). The tests for other
+circuits are very similar.
+
+Many of the tests for different circuits are nearly identical, for example:
+
+- test_signature_for_address_verification (ecrecover)
+- test_code_unpacker_inner
+- test_demultiplex_storage_logs_inner
+- and several others.
+
+If you understand one, you will quickly be able to understand them all.
+
+Let’s focus on ecrecover. Ecrecover is a precompile that, given your signature, can compute your address. If our circuit
+works correctly, we should be able to recover the proper address, and be able to prove that the computation was done
+correctly.
+
+![Contest(4).png](<./img/circuit_testing/Contest(4).png>)
+
+The test begins by defining the geometry, max_variables, and max_trace_len. This data will be used to create the
+constraint system. Next, we define a helper function:
+
+![Contest(5).png](<./img/circuit_testing/Contest(5).png>)
+
+To help run the test, we have a helper function called configure that returns a builder. The builder knows all of the
+gates and the gate placement strategy, which will be useful for setting up the constraint system.
+
+![Contest(6).png](<./img/circuit_testing/Contest(6).png>)
+
+The constraint system is almost ready! We still need to add the lookup tables for common boolean functions:
+
+![Contest(7).png](<./img/circuit_testing/Contest(7).png>)
+
+Now the constraint system is ready! We can start the main part of the test!
+
+![Contest(8).png](<./img/circuit_testing/Contest(8).png>)
+
+Here we have hard coded a secret key with its associated public key, and generate a signature. We will test our circuit
+on these inputs! Next we “allocate” these inputs as witnesses:
+
+![Contest(9).png](<./img/circuit_testing/Contest(9).png>)
+
+We have to use special integer types because we are working in a finite field.
+
+![Contest(10).png](<./img/circuit_testing/Contest(10).png>)
+
+The constants here are specific to the curve used, and are described in detail by code comments in the
+ecrecover_precompile_inner_routine.
+
+Finally we can call the ecrecover_precompile_inner_routine:
+
+![Contest(11).png](<./img/circuit_testing/Contest(11).png>)
+
+Lastly, we need to check to make sure that 1) we recovered the correct address, and 2) the constraint system can be
+satisfied, meaning the proof works.
+
+![Contest(12).png](<./img/circuit_testing/Contest(12).png>)
diff --git a/docs/specs/prover/circuits/README.md b/docs/specs/prover/circuits/README.md
new file mode 100644
index 00000000000..f0da38ab3ae
--- /dev/null
+++ b/docs/specs/prover/circuits/README.md
@@ -0,0 +1,17 @@
+# Circuits
+
+- [Overview](./overview.md)
+- [Code decommitter](./code_decommitter.md)
+- [Demux log queue](./demux_log_queue.md)
+- [ECRecover](./ecrecover.md)
+- [Keccak round function](./keccak_round_function.md)
+- [L1 messages hasher](./l1_messages_hasher.md)
+- [Log sorter](./log_sorter.md)
+- [Main VM](./main_vm.md)
+- [RAM permutation](./ram_permutation.md)
+- [Sha256 round function](./sha256_round_function.md)
+- [Sort decommitments](./sort_decommitments.md)
+- [Sorting and deduplication](./sorting_and_deduplicating.md)
+- [Sorting](./sorting.md)
+- [Storage application](./storage_application.md)
+- [Storage sorter](./storage_sorter.md)
diff --git a/docs/specs/prover/circuits/code_decommitter.md b/docs/specs/prover/circuits/code_decommitter.md
new file mode 100644
index 00000000000..2e5b9609de1
--- /dev/null
+++ b/docs/specs/prover/circuits/code_decommitter.md
@@ -0,0 +1,208 @@
+# CodeDecommitter
+
+## CodeDecommitter PI
+
+### [Input](https://github.com/matter-labs/era-zkevm_circuits/blob/main/src/code_unpacker_sha256/input.rs#L80)
+
+```rust
+pub struct CodeDecommitterInputData {
+    pub memory_queue_initial_state: QueueState,
+    pub sorted_requests_queue_initial_state: QueueState,
+}
+```
+
+### [Output](https://github.com/matter-labs/era-zkevm_circuits/blob/main/src/code_unpacker_sha256/input.rs#L100)
+
+```rust
+pub struct CodeDecommitterOutputData {
+    pub memory_queue_final_state: QueueState,
+}
+```
+
+### [FSM Input and FSM Output](https://github.com/matter-labs/era-zkevm_circuits/blob/main/src/code_unpacker_sha256/input.rs#L61)
+
+```rust
+pub struct CodeDecommitterFSMInputOutput {
+    pub internal_fsm: CodeDecommitmentFSM,
+    pub decommitment_requests_queue_state: QueueState,
+    pub memory_queue_state: QueueState,
+}
+
+pub struct CodeDecommitmentFSM {
+    pub sha256_inner_state: [UInt32; 8], // 8 uint32 words of internal sha256 state
+    pub hash_to_compare_against: UInt256,
+    pub current_index: UInt32,
+    pub current_page: UInt32,
+    pub timestamp: UInt32,
+    pub num_rounds_left: UInt16,
+    pub length_in_bits: UInt32,
+    pub state_get_from_queue: Boolean,
+    pub state_decommit: Boolean,
+    pub finished: Boolean,
+}
+```
+
+## Main circuit logic
+
+This circuit takes a queue of decommit requests from the DecommitSorter circuit. For each decommit request, it checks that
+
+## Main circuit logic
+
+This circuit takes a queue of decommit requests from the DecommitSorter circuit. For each decommit request, it checks
+that the linear hash of all opcodes equals the hash stored in the request. It also writes the code to the
+corresponding memory page. In short, it unpacks the bytecode from the request, updates the memory queue, and checks
+correctness.
+
+### [First part](https://github.com/matter-labs/era-zkevm_circuits/blob/main/src/code_unpacker_sha256/mod.rs#L48)
+
+The circuit begins by allocating the input part of the PI.
+
+```rust
+let CodeDecommitterCircuitInstanceWitness {
+    closed_form_input,
+    sorted_requests_queue_witness,
+    code_words,
+} = witness;
+
+let mut structured_input =
+    CodeDecommitterCycleInputOutput::alloc_ignoring_outputs(cs, closed_form_input.clone());
+```
+
+We choose which `memory_queue` state and `decommitments_queue` state to continue working with.
+
+```rust
+let requests_queue_state = QueueState::conditionally_select(
+    cs,
+    structured_input.start_flag,
+    &structured_input
+        .observable_input
+        .sorted_requests_queue_initial_state,
+    &structured_input
+        .hidden_fsm_input
+        .decommitment_requests_queue_state,
+);
+
+let memory_queue_state = QueueState::conditionally_select(
+    cs,
+    structured_input.start_flag,
+    &structured_input.observable_input.memory_queue_initial_state,
+    &structured_input.hidden_fsm_input.memory_queue_state,
+);
+```
+
+We do the same with the inner FSM part.
+
+```rust
+let initial_state = CodeDecommitmentFSM::conditionally_select(
+    cs,
+    structured_input.start_flag,
+    &starting_fsm_state,
+    &structured_input.hidden_fsm_input.internal_fsm,
+);
+```
+
+### [Main part](https://github.com/matter-labs/era-zkevm_circuits/blob/main/src/code_unpacker_sha256/mod.rs#L168)
+
+This is the part where all the main logic is implemented. First, we take a new decommit request if the queue is not
+yet empty.
+
+```rust
+let (may_be_new_request, _) =
+    unpack_requests_queue.pop_front(cs, state.state_get_from_queue);
+```
+
+Then we update the state of the circuit.
+
+```rust
+state.num_rounds_left = UInt16::conditionally_select(
+    cs,
+    state.state_get_from_queue,
+    &length_in_rounds,
+    &state.num_rounds_left,
+);
+...
+```
+
+Then we create two write memory queries and push them to the memory queue.
+
+```rust
+let mem_query_0 = MemoryQuery {
+    timestamp: state.timestamp,
+    memory_page: state.current_page,
+    index: state.current_index,
+    rw_flag: boolean_true,
+    value: code_word_0,
+    is_ptr: boolean_false,
+};
+
+let mem_query_1 = MemoryQuery {
+    timestamp: state.timestamp,
+    memory_page: state.current_page,
+    index: state.current_index,
+    rw_flag: boolean_true,
+    value: code_word_1,
+    is_ptr: boolean_false,
+};
+
+memory_queue.push(cs, mem_query_0, state.state_decommit);
+memory_queue.push(cs, mem_query_1, process_second_word);
+```
+
+Now we create a new input for the hash to be absorbed.
+
+```rust
+let mut sha256_input = [zero_u32; 16];
+for (dst, src) in sha256_input.iter_mut().zip(
+    code_word_0_be_bytes
+        .array_chunks::<4>()
+        .chain(code_word_1_be_bytes.array_chunks::<4>()),
+) {
+    *dst = UInt32::from_be_bytes(cs, *src);
+}
+```
+
+And absorb it into the current state.
+
+```rust
+let mut new_internal_state = state.sha256_inner_state;
+round_function_over_uint32(cs, &mut new_internal_state, &sha256_input);
+```
+
+Then we update the current state.
+
+```rust
+state.sha256_inner_state = <[UInt32; 8]>::conditionally_select(
+    cs,
+    state.state_decommit,
+    &new_internal_state,
+    &state.sha256_inner_state,
+);
+```
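+
+Outside the circuit, the invariant being enforced here is simply that a running hash over all unpacked code words
+matches the committed hash from the request. A hedged plain-Rust analogy (using the `sha2` crate; the circuit computes
+the same thing with the in-circuit round function, absorbing two words per round, plus padding and length bookkeeping):
+
+```rust
+use sha2::{Digest, Sha256};
+
+/// Illustrative only: the linear hash of the decommitted code words must
+/// equal the hash stored in the decommit request.
+fn linear_code_hash(code_words: &[[u8; 32]]) -> [u8; 32] {
+    let mut hasher = Sha256::new();
+    for word in code_words {
+        hasher.update(word); // the circuit absorbs two 32-byte words per round
+    }
+    hasher.finalize().into()
+}
+```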
+
+Finally, we check the hash if necessary.
+
+```rust
+for (part_of_first, part_of_second) in hash
+    .inner
+    .iter()
+    .zip(state.hash_to_compare_against.inner.iter())
+{
+    Num::conditionally_enforce_equal(
+        cs,
+        finalize,
+        &part_of_first.into_num(),
+        &part_of_second.into_num(),
+    );
+}
+```
+
+### [Final part](https://github.com/matter-labs/era-zkevm_circuits/blob/main/src/code_unpacker_sha256/mod.rs#L111)
+
+Now we update the PI output parts and compute a commitment. Then we allocate it as public variables.
+
+```rust
+let compact_form =
+    ClosedFormInputCompactForm::from_full_form(cs, &structured_input, round_function);
+
+let input_commitment = commit_variable_length_encodable_item(cs, &compact_form, round_function);
+for el in input_commitment.iter() {
+    let gate = PublicInputGate::new(el.get_variable());
+    gate.add_to_cs(cs);
+}
+```
diff --git a/docs/specs/prover/circuits/demux_log_queue.md b/docs/specs/prover/circuits/demux_log_queue.md
new file mode 100644
index 00000000000..f84c8d1ea1e
--- /dev/null
+++ b/docs/specs/prover/circuits/demux_log_queue.md
@@ -0,0 +1,226 @@
+# DemuxLogQueue
+
+## DemuxLogQueue PI
+
+### [Input](https://github.com/matter-labs/era-zkevm_circuits/blob/main/src/demux_log_queue/input.rs#L49)
+
+```rust
+pub struct LogDemuxerInputData {
+    pub initial_log_queue_state: QueueState,
+}
+```
+
+### [Output](https://github.com/matter-labs/era-zkevm_circuits/blob/main/src/fsm_input_output/circuit_inputs/main_vm.rs#L33)
+
+```rust
+pub struct LogDemuxerOutputData {
+    pub storage_access_queue_state: QueueState,
+    pub events_access_queue_state: QueueState,
+    pub l1messages_access_queue_state: QueueState,
+    pub keccak256_access_queue_state: QueueState,
+    pub sha256_access_queue_state: QueueState,
+    pub ecrecover_access_queue_state: QueueState,
+}
+```
+
+### [FSM Input and FSM Output](https://github.com/matter-labs/era-zkevm_circuits/blob/main/src/demux_log_queue/input.rs#L22)
+
+```rust
+pub struct LogDemuxerFSMInputOutput {
+    pub initial_log_queue_state: QueueState,
+    pub storage_access_queue_state: QueueState,
+    pub events_access_queue_state: QueueState,
+    pub l1messages_access_queue_state: QueueState,
+    pub keccak256_access_queue_state: QueueState,
+    pub sha256_access_queue_state: QueueState,
+    pub ecrecover_access_queue_state: QueueState,
+}
+```
+
+## Main circuit logic
+
+The Log_Demuxer receives the log_queue as input; it consists of storage requests, events, L1 message requests, and
+requests to the ecrecover, sha256, and keccak256 precompiles. The circuit divides this queue into six new queues. See
+our diagram.
+
+### Start
+
+The entry function of the circuit is `demultiplex_storage_logs_enty_point`. We start by allocating the queue
+witnesses:
+
+```rust
+let mut structured_input =
+    LogDemuxerInputOutput::alloc_ignoring_outputs(cs, closed_form_input.clone());
+```
+
+Then we must verify that no elements have already been retrieved from the queue:
+
+```rust
+structured_input
+    .observable_input
+    .initial_log_queue_state
+    .enforce_trivial_head(cs);
+```
+
+Since `tail` is roughly an equivalent of a Merkle tree root and `head` is an equivalent of the current node hash, we
+provide a path witness when we pop elements and require that we properly end up at the root. So we must prove that the
+head is zero:
+
+```rust
+pub fn enforce_trivial_head<CS: ConstraintSystem<F>>(&self, cs: &mut CS) {
+    let zero_num = Num::zero(cs);
+    for el in self.head.iter() {
+        Num::enforce_equal(cs, el, &zero_num);
+    }
+}
+```
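+
+To make the head/tail mechanics concrete, here is a hedged plain-Rust analogy of the circuit queues (a toy integer
+mixer stands in for the Poseidon2-based hashing, and, matching the convention used here, `tail` accumulates pushed
+elements while `head` accumulates popped ones, so a freshly built queue must have an all-zero head):
+
+```rust
+use std::collections::VecDeque;
+
+/// Toy rolling-hash queue, illustrative only.
+struct ToyQueue {
+    head: u64,
+    tail: u64,
+    length: usize,
+    witness: VecDeque<u64>,
+}
+
+/// Stand-in for the circuit hash; any mixing function works for the demo.
+fn mix(acc: u64, value: u64) -> u64 {
+    acc.wrapping_mul(0x9e37_79b9_7f4a_7c15).rotate_left(17) ^ value
+}
+
+impl ToyQueue {
+    fn push(&mut self, value: u64) {
+        self.length += 1;
+        self.tail = mix(self.tail, value);
+        self.witness.push_back(value);
+    }
+
+    fn pop(&mut self) -> u64 {
+        assert!(self.length > 0); // can't pop before a push
+        self.length -= 1;
+        let value = self.witness.pop_front().expect("witness matches length");
+        self.head = mix(self.head, value);
+        value
+    }
+
+    /// Popped elements equal pushed ones iff the rolling hashes meet.
+    fn final_check(&self) {
+        assert_eq!(self.length, 0);
+        assert_eq!(self.head, self.tail);
+    }
+}
+```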
+
+Depending on `start_flag`, we select whether to take the queue state from `observable_input` or from `fsm_input` (the
+internal intermediate queue):
+
+```rust
+let state = QueueState::conditionally_select(
+    cs,
+    structured_input.start_flag,
+    &structured_input.observable_input.initial_log_queue_state,
+    &structured_input.hidden_fsm_input.initial_log_queue_state,
+);
+```
+
+We wrap the state and witnesses in `StorageLogQueue`, thereby preparing the input data for the `inner` part:
+
+```rust
+let mut initial_queue = StorageLogQueue::<F, R>::from_state(cs, state);
+use std::sync::Arc;
+let initial_queue_witness = CircuitQueueWitness::from_inner_witness(initial_queue_witness);
+initial_queue.witness = Arc::new(initial_queue_witness);
+```
+
+For the remaining queues, we select between an empty state and the state from the FSM:
+
+```rust
+let queue_states_from_fsm = [
+    &structured_input.hidden_fsm_input.storage_access_queue_state,
+    &structured_input.hidden_fsm_input.events_access_queue_state,
+    &structured_input
+        .hidden_fsm_input
+        .l1messages_access_queue_state,
+    &structured_input
+        .hidden_fsm_input
+        .keccak256_access_queue_state,
+    &structured_input.hidden_fsm_input.sha256_access_queue_state,
+    &structured_input
+        .hidden_fsm_input
+        .ecrecover_access_queue_state,
+];
+
+let empty_state = QueueState::empty(cs);
+let [mut storage_access_queue, mut events_access_queue, mut l1messages_access_queue, mut keccak256_access_queue, mut sha256_access_queue, mut ecrecover_access_queue] =
+    queue_states_from_fsm.map(|el| {
+        let state = QueueState::conditionally_select(
+            cs,
+            structured_input.start_flag,
+            &empty_state,
+            &el,
+        );
+        StorageLogQueue::<F, R>::from_state(cs, state)
+    });
+```
+
+Having prepared all the queues as `input_queues`, we call the `inner` part:
+
+```rust
+demultiplex_storage_logs_inner(cs, &mut initial_queue, input_queues, limit);
+```
+
+The last step is to form the final state. The `completed` flag shows whether `initial_queue` is empty. If it is not,
+we fill the `fsm_output`. If it is empty, we select the `observable_output` for the different queues.
+
+Finally, we compute a commitment to the public input and allocate it as witness variables.
+
+```rust
+let compact_form =
+    ClosedFormInputCompactForm::from_full_form(cs, &structured_input, round_function);
+
+let input_commitment = commit_variable_length_encodable_item(cs, &compact_form, round_function);
+for el in input_commitment.iter() {
+    let gate = PublicInputGate::new(el.get_variable());
+    gate.add_to_cs(cs);
+}
+```
+
+### Inner part
+
+This is the logic part of the circuit. It consumes the main queue, `storage_log_queue`, and separates it into the
+other queues. Having set up the input queues, we allocate the constant addresses `keccak_precompile_address`,
+`sha256_precompile_address`, `ecrecover_precompile_address`, and the constants `STORAGE_AUX_BYTE`, `EVENT_AUX_BYTE`,
+`L1_MESSAGE_AUX_BYTE`, `PRECOMPILE_AUX_BYTE`. Execution proceeds until we have popped all elements from
+`storage_log_queue`.
+We have the appropriate flags for this, which depend on each other:
+
+```rust
+let queue_is_empty = storage_log_queue.is_empty(cs);
+let execute = queue_is_empty.negated(cs);
+```
+
+Here, we set flags depending on the popped element’s data:
+
+```rust
+let is_storage_aux_byte = UInt8::equals(cs, &aux_byte_for_storage, &popped.0.aux_byte);
+let is_event_aux_byte = UInt8::equals(cs, &aux_byte_for_event, &popped.0.aux_byte);
+let is_l1_message_aux_byte =
+    UInt8::equals(cs, &aux_byte_for_l1_message, &popped.0.aux_byte);
+let is_precompile_aux_byte =
+    UInt8::equals(cs, &aux_byte_for_precompile_call, &popped.0.aux_byte);
+
+let is_keccak_address = UInt160::equals(cs, &keccak_precompile_address, &popped.0.address);
+let is_sha256_address = UInt160::equals(cs, &sha256_precompile_address, &popped.0.address);
+let is_ecrecover_address =
+    UInt160::equals(cs, &ecrecover_precompile_address, &popped.0.address);
+```
+
+We set the right flag for the shard:
+
+```rust
+let is_rollup_shard = popped.0.shard_id.is_zero(cs);
+let is_porter_shard = is_rollup_shard.negated(cs);
+```
+
+We combine the execution flags and push the popped element into the matching output queues:
+
+```rust
+let execute_rollup_storage = Boolean::multi_and(cs, &[is_storage_aux_byte, is_rollup_shard, execute]);
+let execute_porter_storage = Boolean::multi_and(cs, &[is_storage_aux_byte, is_porter_shard, execute]);
+
+let execute_event = Boolean::multi_and(cs, &[is_event_aux_byte, execute]);
+let execute_l1_message = Boolean::multi_and(cs, &[is_l1_message_aux_byte, execute]);
+let execute_keccak_call = Boolean::multi_and(cs, &[is_precompile_aux_byte, is_keccak_address, execute]);
+let execute_sha256_call = Boolean::multi_and(cs, &[is_precompile_aux_byte, is_sha256_address, execute]);
+let execute_ecrecover_call = Boolean::multi_and(cs, &[is_precompile_aux_byte, is_ecrecover_address, execute]);
+
+let bitmask = [
+    execute_rollup_storage,
+    execute_event,
+    execute_l1_message,
+    execute_keccak_call,
+    execute_sha256_call,
+    execute_ecrecover_call,
+];
+
+push_with_optimize(
+    cs,
+    [
+        rollup_storage_queue,
+        events_queue,
+        l1_messages_queue,
+        keccak_calls_queue,
+        sha256_calls_queue,
+        ecdsa_calls_queue,
+    ],
+    bitmask,
+    popped.0,
+);
+```
+
+Note: since we do not have a porter shard yet, its flag is enforced to be `false`:
+
+```rust
+let boolean_false = Boolean::allocated_constant(cs, false);
+Boolean::enforce_equal(cs, &execute_porter_storage, &boolean_false);
+```
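+
+As a mental model, the inner part is a one-hot router. A hedged plain-Rust analogy follows; the constants and types
+are illustrative placeholders (not the real aux-byte values or precompile addresses), and plain control flow replaces
+the circuit’s unconditional flag computation with boolean masks:
+
+```rust
+// Placeholder values, chosen only for the demo.
+const STORAGE_AUX_BYTE: u8 = 0;
+const EVENT_AUX_BYTE: u8 = 1;
+const L1_MESSAGE_AUX_BYTE: u8 = 2;
+const PRECOMPILE_AUX_BYTE: u8 = 3;
+const KECCAK_ADDRESS: u32 = 0x8010;
+const SHA256_ADDRESS: u32 = 0x8002;
+const ECRECOVER_ADDRESS: u32 = 0x8001;
+
+enum Target {
+    RollupStorage,
+    Event,
+    L1Message,
+    Keccak,
+    Sha256,
+    Ecrecover,
+}
+
+/// Route a popped log into exactly one output queue, or nowhere.
+fn route(aux_byte: u8, address: u32, shard_id: u8) -> Option<Target> {
+    match aux_byte {
+        STORAGE_AUX_BYTE if shard_id == 0 => Some(Target::RollupStorage),
+        STORAGE_AUX_BYTE => None, // porter shard: enforced impossible in-circuit
+        EVENT_AUX_BYTE => Some(Target::Event),
+        L1_MESSAGE_AUX_BYTE => Some(Target::L1Message),
+        PRECOMPILE_AUX_BYTE if address == KECCAK_ADDRESS => Some(Target::Keccak),
+        PRECOMPILE_AUX_BYTE if address == SHA256_ADDRESS => Some(Target::Sha256),
+        PRECOMPILE_AUX_BYTE if address == ECRECOVER_ADDRESS => Some(Target::Ecrecover),
+        _ => None,
+    }
+}
+```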
diff --git a/docs/specs/prover/circuits/ecrecover.md b/docs/specs/prover/circuits/ecrecover.md
new file mode 100644
index 00000000000..e1bc2347caf
--- /dev/null
+++ b/docs/specs/prover/circuits/ecrecover.md
@@ -0,0 +1,320 @@
+# Ecrecover
+
+## Ecrecover PI
+
+### [Input](https://github.com/matter-labs/era-zkevm_circuits/blob/main/src/fsm_input_output/circuit_inputs/main_vm.rs#L9)
+
+```rust
+pub struct PrecompileFunctionInputData {
+    pub initial_log_queue_state: QueueState,
+    pub initial_memory_queue_state: QueueState,
+}
+```
+
+### [Output](https://github.com/matter-labs/era-zkevm_circuits/blob/main/src/base_structures/precompile_input_outputs/mod.rs#L42)
+
+```rust
+pub struct PrecompileFunctionOutputData {
+    pub final_memory_state: QueueState,
+}
+```
+
+### [FSM Input and FSM Output](https://github.com/matter-labs/era-zkevm_circuits/blob/main/src/keccak256_round_function/input.rs#L59)
+
+```rust
+pub struct EcrecoverCircuitFSMInputOutput {
+    pub log_queue_state: QueueState,
+    pub memory_queue_state: QueueState,
+}
+```
+
+## Main circuit logic
+
+This circuit implements the ecrecover precompile described in the Ethereum yellow paper.
+
+The purpose of ecrecover is to recover the signer’s public key from a digital signature.
+
+A special note about this circuit: there are hardcoded ‘valid’ field element values provided to the circuit. This
+prevents the circuit from becoming unsatisfiable when the user-provided inputs are incorrect: when the circuit detects
+bad inputs, they are swapped out for the hardcoded values. In this event, exceptions are logged and pushed into a
+vector that is returned to the caller, informing them that the provided inputs were incorrect and the result should be
+discarded.
+
+Most of the relevant circuit logic resides in the `ecrecover_precompile_inner_routine` function. Let’s take the circuit
+step by step.
+
+1. The circuit starts off by declaring a set of constants which are useful to have throughout the circuit. These include
+   the B parameter of the secp256k1 curve, the constant -1 in the curve’s base field, and the base field and scalar
+   field moduli. We also create the vector that should capture any exceptions.
+
+```rust
+let curve_b = Secp256Affine::b_coeff();
+
+let mut minus_one = Secp256Fq::one();
+minus_one.negate();
+
+let mut curve_b_nn =
+    Secp256BaseNNField::<F>::allocated_constant(cs, curve_b, &base_field_params);
+let mut minus_one_nn =
+    Secp256BaseNNField::<F>::allocated_constant(cs, minus_one, &base_field_params);
+
+let secp_n_u256 = U256([
+    scalar_field_params.modulus_u1024.as_ref().as_words()[0],
+    scalar_field_params.modulus_u1024.as_ref().as_words()[1],
+    scalar_field_params.modulus_u1024.as_ref().as_words()[2],
+    scalar_field_params.modulus_u1024.as_ref().as_words()[3],
+]);
+let secp_n_u256 = UInt256::allocated_constant(cs, secp_n_u256);
+
+let secp_p_u256 = U256([
+    base_field_params.modulus_u1024.as_ref().as_words()[0],
+    base_field_params.modulus_u1024.as_ref().as_words()[1],
+    base_field_params.modulus_u1024.as_ref().as_words()[2],
+    base_field_params.modulus_u1024.as_ref().as_words()[3],
+]);
+let secp_p_u256 = UInt256::allocated_constant(cs, secp_p_u256);
+
+let mut exception_flags = ArrayVec::<_, EXCEPTION_FLAGS_ARR_LEN>::new();
+```
+
+1. Next, the circuit checks whether or not the given `x` input (which is the x-coordinate of the signature) falls within
+   the scalar field of the curve. Since, in ecrecover, `x = r + kn`, almost any `r` will encode a unique x-coordinate,
+   except for when `r > scalar_field_modulus`. If that is the case, `x = r + n`; otherwise, `x = r`. `x` is recovered
+   here from `r`.
+
+```rust
+let [y_is_odd, x_overflow, ..] =
+    Num::<F>::from_variable(recid.get_variable()).spread_into_bits::<_, 8>(cs);
+
+let (r_plus_n, of) = r.overflowing_add(cs, &secp_n_u256);
+let mut x_as_u256 = UInt256::conditionally_select(cs, x_overflow, &r_plus_n, &r);
+let error = Boolean::multi_and(cs, &[x_overflow, of]);
+exception_flags.push(error);
+
+// we handle x separately as it is the only element of base field of a curve (not a scalar field element!)
+// check that x < q - order of base point on Secp256 curve
+// if it is not actually the case - mask x to be zero
+let (_res, is_in_range) = x_as_u256.overflowing_sub(cs, &secp_p_u256);
+x_as_u256 = x_as_u256.mask(cs, is_in_range);
+let x_is_not_in_range = is_in_range.negated(cs);
+exception_flags.push(x_is_not_in_range);
+```
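+
+In plain terms, this step computes `x` from `r` and the overflow bit carried by the recovery id. A hedged sketch with
+toy 128-bit integers (the real values are 256-bit, and instead of returning `None` the circuit masks `x` to zero and
+records an exception flag):
+
+```rust
+/// Illustrative only: recover the curve x-coordinate from the signature's r.
+/// In ecrecover, x = r + k*n with k in {0, 1}; the recovery id says whether
+/// r overflowed the group order n. The result must be a base-field element.
+fn recover_x(r: u128, n: u128, p: u128, x_overflow: bool) -> Option<u128> {
+    let x = if x_overflow { r.checked_add(n)? } else { r };
+    if x < p {
+        Some(x)
+    } else {
+        None // out of range: the circuit masks x and raises an exception
+    }
+}
+```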
+
+1. Then, all field elements are interpreted as such within the circuit. As they are passed in, they are simply byte
+   arrays which are initially interpreted as `UInt256` numbers. These get converted to field elements by using the
+   conversion functions defined near the top of the file. Additionally, checks are done to make sure that none of the
+   passed-in field elements are zero.
+
+```rust
+let mut x_fe = convert_uint256_to_field_element(cs, &x_as_u256, &base_field_params);
+
+let (mut r_fe, r_is_zero) =
+    convert_uint256_to_field_element_masked(cs, &r, &scalar_field_params);
+exception_flags.push(r_is_zero);
+let (mut s_fe, s_is_zero) =
+    convert_uint256_to_field_element_masked(cs, &s, &scalar_field_params);
+exception_flags.push(s_is_zero);
+
+// NB: although it is not strictly an exception we also assume that hash is never zero as field element
+let (mut message_hash_fe, message_hash_is_zero) =
+    convert_uint256_to_field_element_masked(cs, &message_hash, &scalar_field_params);
+exception_flags.push(message_hash_is_zero);
+```
+
+1. Now we are going to compute `t` and check whether or not it is a quadratic residue in the base field. To start, we
+   take the `x` we calculated before and compute `t = x^3 + b`, where `b` is the B parameter of the secp256k1 curve.
+   We check to make sure that `t` is not zero.
+
+```rust
+let mut t = x_fe.square(cs); // x^2
+t = t.mul(cs, &mut x_fe); // x^3
+t = t.add(cs, &mut curve_b_nn); // x^3 + b
+
+let t_is_zero = t.is_zero(cs);
+exception_flags.push(t_is_zero);
+```
+
+1. The Legendre symbol for `t` is computed to do a quadratic residue check. We need to compute `t^((p-1)/2)`, which
+   corresponds to `t^{2^255} / (t^{2^31} * t^{2^8} * t^{2^7} * t^{2^6} * t^{2^5} * t^{2^3} * t)`. First, an array of
+   powers of `t` of the form `t^{2^i}` is created, up to `t^{2^255}`. Then, we multiply together all the elements in
+   the denominator of the equation, namely `t^{2^31} * t^{2^8} * t^{2^7} * t^{2^6} * t^{2^5} * t^{2^3} * t`. Lastly,
+   the division is performed, and we end up with the Legendre symbol.
+
+```rust
+let t_is_zero = t.is_zero(cs); // We first do a zero check
+exception_flags.push(t_is_zero);
+
+// if t is zero then just mask
+let t = Selectable::conditionally_select(cs, t_is_zero, &valid_t_in_external_field, &t);
+
+// array of powers of t of the form t^{2^i} starting from i = 0 to 255
+let mut t_powers = Vec::with_capacity(X_POWERS_ARR_LEN);
+t_powers.push(t);
+
+for _ in 1..X_POWERS_ARR_LEN {
+    let prev = t_powers.last_mut().unwrap();
+    let next = prev.square(cs);
+    t_powers.push(next);
+}
+
+let mut acc = t_powers[0].clone();
+for idx in [3, 5, 6, 7, 8, 31].into_iter() {
+    let other = &mut t_powers[idx];
+    acc = acc.mul(cs, other);
+}
+let mut legendre_symbol = t_powers[255].div_unchecked(cs, &mut acc);
+```
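+
+The same check in miniature: a hedged plain-Rust sketch of the Euler-criterion test the circuit performs, with a toy
+64-bit modulus instead of the secp256k1 base field and square-and-multiply instead of the fixed power chain above:
+
+```rust
+/// t is a quadratic residue mod an odd prime p iff t^((p-1)/2) == 1 (mod p);
+/// for a nonresidue the result is p - 1, i.e. -1. Illustrative only.
+fn is_quadratic_residue(t: u64, p: u64) -> bool {
+    mod_pow(t, (p - 1) / 2, p) == 1
+}
+
+/// Binary exponentiation with u128 intermediates to avoid overflow.
+fn mod_pow(mut base: u64, mut exp: u64, p: u64) -> u64 {
+    let mut acc: u64 = 1;
+    base %= p;
+    while exp > 0 {
+        if exp & 1 == 1 {
+            acc = ((acc as u128 * base as u128) % p as u128) as u64;
+        }
+        base = ((base as u128 * base as u128) % p as u128) as u64;
+        exp >>= 1;
+    }
+    acc
+}
+```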
+
+1. Before we proceed to the quadratic residue check, we take advantage of the powers we just calculated to compute the
+   square root of `t`, in order to determine whether the y-coordinate of the signature we’ve passed is positive or
+   negative.
+
+```rust
+let mut acc_2 = t_powers[2].clone();
+for idx in [4, 5, 6, 7, 30].into_iter() {
+    let other = &mut t_powers[idx];
+    acc_2 = acc_2.mul(cs, other);
+}
+
+let mut may_be_recovered_y = t_powers[254].div_unchecked(cs, &mut acc_2);
+may_be_recovered_y.normalize(cs);
+let mut may_be_recovered_y_negated = may_be_recovered_y.negated(cs);
+may_be_recovered_y_negated.normalize(cs);
+
+let [lowest_bit, ..] =
+    Num::<F>::from_variable(may_be_recovered_y.limbs[0]).spread_into_bits::<_, 16>(cs);
+
+// if lowest bit != parity bit, then we need conditionally select
+let should_swap = lowest_bit.xor(cs, y_is_odd);
+let may_be_recovered_y = Selectable::conditionally_select(
+    cs,
+    should_swap,
+    &may_be_recovered_y_negated,
+    &may_be_recovered_y,
+);
+```
+
+1. Then, we proceed with the quadratic residue check. In case `t` is a nonresidue, we swap out our inputs for the
+   hardcoded ‘valid’ inputs.
+
+```rust
+let t_is_nonresidue =
+    Secp256BaseNNField::<F>::equals(cs, &mut legendre_symbol, &mut minus_one_nn);
+exception_flags.push(t_is_nonresidue);
+// unfortunately, if t is found to be a quadratic nonresidue, we can't simply let x to be zero,
+// because then t_new = 7 is again a quadratic nonresidue. So, in this case we let x to be 9, then
+// t = 16 is a quadratic residue
+let x =
+    Selectable::conditionally_select(cs, t_is_nonresidue, &valid_x_in_external_field, &x_fe);
+let y = Selectable::conditionally_select(
+    cs,
+    t_is_nonresidue,
+    &valid_y_in_external_field,
+    &may_be_recovered_y,
+);
+```
+
+1. The next step is computing the public key. We compute the public key `Q` by calculating `Q = (s * X - hash * G) / r`.
+   We can simplify this in-circuit by calculating `s / r` and `hash / r` separately, and then doing an MSM to get the
+   combined output. First, we pre-compute these divided field elements, and then compute the point like so:
+
+```rust
+let mut r_fe_inversed = r_fe.inverse_unchecked(cs);
+let mut s_by_r_inv = s_fe.mul(cs, &mut r_fe_inversed);
+let mut message_hash_by_r_inv = message_hash_fe.mul(cs, &mut r_fe_inversed);
+
+s_by_r_inv.normalize(cs);
+message_hash_by_r_inv.normalize(cs);
+
+let mut gen_negated = Secp256Affine::one();
+gen_negated.negate();
+let (gen_negated_x, gen_negated_y) = gen_negated.into_xy_unchecked();
+let gen_negated_x =
+    Secp256BaseNNField::allocated_constant(cs, gen_negated_x, base_field_params);
+let gen_negated_y =
+    Secp256BaseNNField::allocated_constant(cs, gen_negated_y, base_field_params);
+
+let s_by_r_inv_normalized_lsb_bits: Vec<_> = s_by_r_inv
+    .limbs
+    .iter()
+    .map(|el| Num::<F>::from_variable(*el).spread_into_bits::<_, 16>(cs))
+    .flatten()
+    .collect();
+let message_hash_by_r_inv_lsb_bits: Vec<_> = message_hash_by_r_inv
+    .limbs
+    .iter()
+    .map(|el| Num::<F>::from_variable(*el).spread_into_bits::<_, 16>(cs))
+    .flatten()
+    .collect();
+
+let mut recovered_point = (x, y);
+let mut generator_point = (gen_negated_x, gen_negated_y);
+// now we do multiexponentiation
+let mut q_acc =
+    SWProjectivePoint::<F, Secp256Affine, Secp256BaseNNField<F>>::zero(cs, base_field_params);
+
+// we should start from MSB, double the accumulator, then conditionally add
+for (cycle, (x_bit, hash_bit)) in s_by_r_inv_normalized_lsb_bits
+    .into_iter()
+    .rev()
+    .zip(message_hash_by_r_inv_lsb_bits.into_iter().rev())
+    .enumerate()
+{
+    if cycle != 0 {
+        q_acc = q_acc.double(cs);
+    }
+    let q_plus_x = q_acc.add_mixed(cs, &mut recovered_point);
+    let mut q_0: SWProjectivePoint<F, Secp256Affine, Secp256BaseNNField<F>> =
+        Selectable::conditionally_select(cs, x_bit, &q_plus_x, &q_acc);
+
+    let q_plux_gen = q_0.add_mixed(cs, &mut generator_point);
+    let q_1 = Selectable::conditionally_select(cs, hash_bit, &q_plux_gen, &q_0);
+
+    q_acc = q_1;
+}
+
+let ((mut q_x, mut q_y), is_infinity) =
+    q_acc.convert_to_affine_or_default(cs, Secp256Affine::one());
+exception_flags.push(is_infinity);
+let any_exception = Boolean::multi_or(cs, &exception_flags[..]);
+
+q_x.normalize(cs);
+q_y.normalize(cs);
+```
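+
+The loop above is a textbook interleaved double-and-add. A hedged toy model, with integers mod `m` under addition
+standing in for curve points (“double” is `acc + acc`, “add” is `+ p` or `+ g`), shows the exact loop shape:
+
+```rust
+/// Computes k1*p + k2*g in the toy group (Z_m, +), mirroring the in-circuit
+/// MSM: one shared doubling per bit, then a conditional add of the recovered
+/// point for the s/r bit and of the negated generator for the hash/r bit.
+/// (Doubling the identity on the first iteration is a harmless difference
+/// from the circuit's `if cycle != 0` guard.) Illustrative only.
+fn toy_msm(k1: u64, k2: u64, p: u64, g: u64, m: u64) -> u64 {
+    let (p, g, m) = (p as u128, g as u128, m as u128);
+    let mut acc: u128 = 0; // identity element ("point at infinity")
+    for i in (0..64).rev() {
+        acc = (acc + acc) % m; // double
+        if (k1 >> i) & 1 == 1 {
+            acc = (acc + p) % m; // conditionally add the recovered point
+        }
+        if (k2 >> i) & 1 == 1 {
+            acc = (acc + g) % m; // conditionally add the (negated) generator
+        }
+    }
+    acc as u64
+}
+```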
+
+1. Now that we have our public key recovered, the last thing we need to do is take the keccak hash of the public key
+   and then take the last 20 bytes of the digest to recover the address.
+
+```rust
+let zero_u8 = UInt8::zero(cs);
+
+let mut bytes_to_hash = [zero_u8; 64];
+let it = q_x.limbs[..16]
+    .iter()
+    .rev()
+    .chain(q_y.limbs[..16].iter().rev());
+
+for (dst, src) in bytes_to_hash.array_chunks_mut::<2>().zip(it) {
+    let limb = unsafe { UInt16::from_variable_unchecked(*src) };
+    *dst = limb.to_be_bytes(cs);
+}
+
+let mut digest_bytes = keccak256(cs, &bytes_to_hash);
+// digest is 32 bytes, but we need only 20 to recover address
+digest_bytes[0..12].copy_from_slice(&[zero_u8; 12]); // empty out top bytes
+digest_bytes.reverse();
+```
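+
+Outside the circuit, this is the standard Ethereum address derivation. A hedged plain-Rust equivalent (using the
+`sha3` crate; the in-circuit version above has to do the limb and byte-order bookkeeping explicitly):
+
+```rust
+use sha3::{Digest, Keccak256};
+
+/// Illustrative only: the address is the last 20 bytes of
+/// keccak256(x || y), where x and y are the recovered public key's
+/// coordinates as 32-byte big-endian values.
+fn address_from_pubkey(x_be: [u8; 32], y_be: [u8; 32]) -> [u8; 20] {
+    let mut hasher = Keccak256::new();
+    hasher.update(x_be);
+    hasher.update(y_be);
+    let digest = hasher.finalize(); // 32 bytes
+    let mut address = [0u8; 20];
+    address.copy_from_slice(&digest[12..]); // drop the top 12 bytes
+    address
+}
+```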
+
+1. At this point, we are basically done! What’s left now is to ensure we send a masked value in case of any exception,
+   and then we can output the resulting address and any exceptions which occurred for the caller to handle. This wraps
+   up the ecrecover circuit!
+
+```rust
+let written_value_unmasked = UInt256::from_le_bytes(cs, digest_bytes);
+
+let written_value = written_value_unmasked.mask_negated(cs, any_exception);
+let all_ok = any_exception.negated(cs);
+
+(all_ok, written_value) // Return any exceptions and the resulting address value
+```
diff --git a/docs/specs/prover/circuits/img/diagram.png b/docs/specs/prover/circuits/img/diagram.png
new file mode 100644
index 0000000000000000000000000000000000000000..75ebf7c3f8879565b29c8b3c2033c4bd6054b67f
GIT binary patch
literal 12668
z=(9VA6gfPj5K9J1Wp^U~>*_WA;lAA*q*KJ3+ys*HoqFY^?$4<4Nl)tvFg1#poF7Jq@aXQD8jL=V`)AF zI&XL!Ugy}44IDVbz%vIbNrcrKaISp<8`S9}xXVL2-S~BL#+74&ab@Z`2gV(^>O-f) z&gDs%o{766&Sa(u6S#Nfp9Ez(>xSs53yl+n*NM!qqh~g3 z{x(6sEF69F4o2S8#g!`zlY;_EEbDj#1Dnv#^=lBOR2yrT9ih(GX#o>(kfsUyf z5%j0C=<4*^Z(>@`)>ET-Z$UX~L6ELTKKCs`lzgS2sH~zyc46Lsl-!0Gz#b|&%%qds zpo_PbTU~ISnn|AT(rX`+J`%}&2(&Dlk>LfwUTK{ZqYEOQEW9Co8boA3)sV8%DSX@H zo8{S?q*s$UXze*3cwr(=(vA-hD$@7{gopPcZ}hLKQ8RZfJCa3uBZ5K3xvXa6u5o8H zH!`Kew|hGvS8&)90KaqrQKE&wXx}M2-l*X4`aVFi zo%A&mZOfhtGBvyJR6xj-{%UlhHjCz!UE4y(?5K~));#FXt1b+t;Ycn$hERQK0!j1x z_LBD#uMF=>;F+!5VH{=#u1b*oIGD`mO1to?_d?{EDVnjQ!S*ga9|8m8O1oOpWsbE9T4n zP~6QpLRhlH(*qvdBpwsFIC0e>_oUYBAXEs8qAJU+5PpId+;Ym*l6OAtiUJd@8^T8B ztPS4~BOF&9g71^@zN|Hxmy*kp8x&Z=+Fcq7uM#V`kX!1qAWb1}cRWfVvyc^<5W~zt zttcdMIolx&ss{URGg5BtO}My{I;Yc5K5F zHG{LJ@>uM0>&}nmbrBWyCwZj28gx}j+BOPyigCo+jQ&1l2cIWC{g<-QUF3>w(jGC4 z{^G)%U-SXN5{xGq%AfrxSXZ(twzMi;1$(gTH`~N$e*LnJcv@Q|zx$m<7oH&*9UXrw z^|L}!S%h3-uu%-A;enY)sK%(AYXBvO0E z|0MCBv#|<_IyvZfS42SzK}nuJ2U=e{jN^OGTB#I zGpk}=po%|^jkUlrJXTcjaY6vnrsHFBc8>J*$f{e(x0UT=?nIrA!;;2}^($55%^k!o za3c)c;+`Io(#n!NHeMW{80MoWOj7D(vpC59aEHP$J~KP#j%xY4WMZ%A8?s{?x$>@v zb)=8T+^QWVXUcJuJYN~!beSWw9e7GGsX8d0m%l+M-bYPM7w>$Uk6oYxWy22v8yv7? zW5%K62N_>-c;;&BUou8@*g^x0<70B$i_r>c?6aO?ZJxD#oIj(K7Rsy@ApLp%gvS*W zJdS(+P)Fv7$#zF&PnPK6K4wIC?r{DiOX6XSbEkV3Nrbc#H>^hLE*y4FYVx|Y`|dDx zEveAet%s7ktoz8O}1smrpYTN@Bg8tS+?RCldtVgkFMVJO1M46S&zZ`EotD z$O>hn*v$%2DZqHFG>Bg;iVxkVs(&dt&`U1|&)Q9IgR6Qqo6 z$pmkiT-{q=atJ&N(eevRYIwz5k-Wxk;@|qKBwnM&WS@LZ$c~8rQf660#Wa;Jt6fds zR5*fu&3PBv&UVzXAxeUu>Ri3b+kb_3s8Tdk)FUeYw(}ILV$yCV-*IUx)$#|Zujr;D zRRN0DHXT6%qJGi#Mbm;u>Hl&<1bbnlbl8T>k`B6Ec%@tJ2k$7gURWLO3v|gxjJi1w zJ?>?iASF~89fEA{Q^;8zjU|6+VW9g_jYirO;CFlU!=CbO@S1hp8C(Bmv24G2Vc4Z2 z5bVXcPvs5CF6?_d0k*RGwpMjRl}>W3cnv4hgK_x@a;4~g0Vd&<8z&nbrwSTF7!Wlw z?KjIKOe0v#SIUIvlvtRxR7p?Z9z`S(Ot0h1Lr1%C>f6(;i%?>g&`GwfnX^o>px?a< z0^EEZeSR1!F}QXEOv|OfjruO{x80x4u8mb8>V;DQO+bxIjiZH$C61ekf38Pn$1@im z@`BtMe=cCr20muGbh`(HHWR*8r*`dLSn9c{#QStqVnK0t)N4(&TCidIUw<%`1joT@ ztosc^1g`qs-L6Ood?g9{+!r4d-4n6xHB&QP-}-Z9ouc3gTn>F#WQf74~dZ>yZt z{cwt95g>5*AQVkopTfeJ{`EZj%QP1@?gf3m3cv80jJAI}jzr`>UVWqI;;+gO;&>`B ze8TH(=q&H&I31PYyE~*u9goK`(X;1sJJaosfnQ3L_?kCXi|m z50gEhY6CY4Hr7Jd2Uyk83tbXgyqV}o$bV*9n>^uo?|BnxtdEX+Avz%c8N8}g`_gFJ z&2Ea+&ic15s-*DzVBSZ4yqNy(3BG0GH6l3`L{KXCMt#$%kR<~ra^GZHO-&DbTRJGZ6TfMTHSjHeI1E3)ug8ElPbyO zQ+_b%_L)qiQ()UPja_*j8s=$j_nx9a{12{o+e_F^kopC&M!L8)8?|+i8qBtqWL|LtZemIVggX%6S%!aFg^#rMgD!N==#1 z>Gof0?Z3wh!1KA@c_b(FC`EO?beN}4NxE1~@O1J@7e~i=)tAGjF*S<8CA}F6ZUbJ} z9$WPIMBI&&<^g=wnM{gElS(Tt5yLoGl`faWq)Oyv{C(b>^oIl=*lxekmywNtsUZ3$m zB`s^zKQ+p|p#q|#Sg*${F+H|*ZZW2BLR$&#UOi)J`u}A3L~&`W1FzmnQ%qIKKV2%}^panF;1}TDLldik^YanK_DWa_03g^n#u2t{ zwXQTzw9*rGPIK6O4$Q#lS&83)t{zvROM_ye@@|=t9DVy&Z0f@d65+cnraW_$R$9J= z!!5!Lj?@*7I0OH__Gm8Go|`t8o*|bFJqV==03)X@}62Ec;fEP zgt?y#ro3!#8wlhdvlXzR5&C0;_*58M9hfc}nFg1IU>44e${$R1(LTNHF|?m~RC4x7 z@m4^>t7T>z_dza6p6JP5b!0lWGAzwp)jQI-*G2H*-R%?)&Raq&Hwub$PMuU2Qqe8y zlB^7EkAXi;7}=4o7VM5bO>ixElN*WHiF&)5#^)J9c^ilSmWs@=e2vMXk7e870swyP z*|GhT9<9W7jNO&sdN9nOcwL zpE!@BX<3MgjKpMYI*z?=5{u?JrY5!?k2Cn_0Qvkjn4J^ws7e7);UO4DdIwVlZ9y8d zxyj`?arNMWVXLIVD}S$Q{)uriW>W9TuFT*Vsxfo6*}GcYLl*VhPA?+wA3oViC?FO| zkZ`xM2>c$#{O&8CQ~JhNCYeQG{=a8i2IE_wmw-T(F7fF$A?Q-awjKyF#IJPIEmkfj z&&VQx9@#Wt^<Z<7Cv^KL$8vC40m(JO1_K`QLd#9{2AA5;RDkEM&hLsBrE)cJu3c=^-WbNDwjpF+PyMyML|!HA`|j5)r?vAO zIHr@itGKX)`7hOz>d7niIh`bjrE&nlsLf=Ga&qV_9v}OliuQ?QvnnbBtP6I8o?0|= zS+LhhI~T^j(Y;>Jc_&Nw4B!#frFXup*rn!aUjD=&?jZ*!yzg}qNV<4@kU?nK)kbmE zNB;dQ#AD-*dea%R=YVk2ve&C8J3Pz6EKvDPRXXN@VBI=c*^OBxZbPWHo}2gO(W5gB 
zx`mH4yO0;)N6)0=cE^iSEXD+1v0TwP&6!1{3p3Br%Z+rJrDLafD6_i=28X&QN$HKb z7vaN^vzIMjaUfWW11{lRSx?Cz8?oCREHBH4$4yM2KMFDz4 z{C$jE0*O4#(O|RZ5yq@`8AZIe!(TojbiQ&ACQX%1S*67Xc~;+zaj3Dh*BT7_|C$tRM}NDp_3J(t0fU;ZE}v%Rt4(^Iz+_ zW=o*-pAn2{qwDoUOWJ-O)VOp9wk#77>v$OL31_Ov6oB1?0N%EhS4YfwTTOc!SIN zLkCi*>5G^X(p2bm!(iW0!fSXyD@_N!XM^%_)FAu) zR{3AMvuoBE;Q0DYUX3~Bg6zSy-w&Mc=~n3kg70Wq8sRR0GjpKE(Uza*H*y+VQ>Uhrre+W4ub1ozcWSdps9v`#3c~ z=CoZ~`30Ve(iPgd)dC$@zFRJwV-5H3>(k-2H8crmXe?i;It*SjkW-0EVO6K zr|rt#>~fBnZF5FUy(LbO_o-V>JxXLRYtIs-fA?f7kv{T3s4`p!ZXXDf7B!M$HL_A@ z+i2Flk+|p+w}!s?QRedvFI*mWO5gdR_&CO~iStTvN6mBP<9(#S1HnWp5<#}ls{n;r z&A5p&b2{{}BZ1WO4LK&0j!&PBi9xxH-ZTpA815J|ZN1ru#`i88EMxdB!d!>3{6Wfd zUNvdJq7xwuJd}7E+?aC&hb%QyFD)UBA%6lvI)LGy4!wK7!y0#GrWV`l&tD6FZfWjf zO;PgaS)E`yKE6yInF(vRaL07GH}LX$0w#6K2`;2;~=9u1z)OJ5{JnMcPY=Gp&&1<|SG1HcD`)B926v#K&Mt9v3fG;mNE+Pyn9` zd$FZ@+<{;t@?7{Y-p__<|Ir!!6exHFAQbm{@FD$apEb=EGbv(G%YS2#Av3FQzvkhyASc~gm0>(QDIkMaZ%kl2pL8&%(a_2kZx8&%>^ zJx>!hXxIl5GL~vDo9aS=Y?3bU8muk@55N3T#P-kUC9k_-6djGY+nrz6xyk1Lk3ql8 zpo^!=%3ub_-P0h9nJzcgIGy-PaB|f}jjLY0*(Z)oy)h2UdX&@qI*a*Sk4F@8tj*fP z$0>@*w8!4i-)bAj6>91&gSm}XXZu!`_ZoBxK!J~gYw74soVrWg1G6|i|0TiuQ6lkJ z45?>%#lF<B|FwrZh0qBHSBXZ1vG&Erf3| zczP)(KBG(M{`%48Z=$!D=+#euuO|7Sky&PVS{?4KgsZwtlYVbHLPKDO)aWBo3cE>u zk8PyS;t+*o$d3u8;`J_~H5EBItdmDF$WcVTMQE z>SiWXC(l`<4m~`oc!`i%{s3XxYpuo~(vg7gJw9hvQq^>%W@V%~JDg*vO4kaO@M{Z( z@6D|YC5N4-N6EkHZyg(OaYkyYT*BCkFvIF?>`!^Nl8rD_x+s%B2>Rf_BYxukp;{-y zuI;8zKCt$L#tcbq4d3{)bjne*YzfYUGU}P|{ke``awsZGTWW)5y_0ew@b`f>z*9ko zl+6zt40$F>TMcu6gtLh^Vez9LUHww=iwe#VQ7`AIgiB`YxdM=MkM-WlmePNBpok$j zQvqt9&U@LuNz|e#1ooNah1G23@9AG9YkCg#JTX?z`pt(5nvWQv=t3Y@aBa`{cl6!& ziZ+B(&yoc_455lu00!3vTE@yitji50_+I3fEYN^cJ-I4iVFe|$x1fMdXy;FS)9z9B zmfolWpKH9Su*^RJW=|b-Kn?g(-;--gsHhHSF^3um+MT1Drs?fYCegQH&qW<->0^jJ z4iK&D0IN@02TSKB5&20>FpDDZ;qzElw|Pkj6%K|o$b+N?ta>wH8FZ&AIg?qV0vswe ziSZ?TDp>$yx?ZuMq6TFrmRmxo*u8Yxva?KAqZi8`XrmcgL7(LiGc0yrIJiE00V9fs zy0Lyu$M+U`#!&q>&T=R zZ?LF*HxKgV>Z>6dWrj(BIe z3%bHYg~~py#4A)#v{K`CGg$2YI6!{e(<)6J5Pyyy*<4I(?2}N zcfwS_ja}E*5r|ng1Rt01ftz(ylvCa0LC+C7esyVy6G2?5?_Vfge6Kv^7}?!MO`0L* z`%R~y4D0;+xPr{W+3xIOZ4}PU9>cGwPF$)b5y#}*VvLgz{FpT3Gown1Q<)>Ul}EW{hU*={by{GMK%Nv$XC1@JzJ-NkNLW#jkh7CmfLtpBiA z72>5OtAf<-`~x>MNYrbR&a#bfeKO|mI)RavQ#^wVuhP-cAt`II>Rp$-XQAc&93 zy)QH5EO3M{R83CmR|o@xW^e_;X?UZ@gR0nF080rj+gn3$!q}JrdK=bOw)|?cs)(-u z3A`$U7tB&_#iC%qaP)h6eqjK6@CP^j>@C)|3}c0(EiV_j`vDnwNm8VXh6!xh#|#@! zki4BCv`0>FoURSH1RT0QaUI9JChVr4X4gOAQ^A+DSaFFe&^M9N8sM%EO4| z^yYfj`7?KD=6>Wa7s)eHgj06{oCNmVD!e#4C^pqG6k}8;CX#eT)Cj$5U&(1~6w@&! 
z_J(3do|JPOQ&LoKlo)r{HvsjMuNQ!GVMBcMdULJQ*HnJd7^$e-6p6MM;jGXZ!a-mp z?b9Cf)pMYuSBie~WmjFklO|9D0A1ouv1t2U6`u`9Op$p46}i+V+^wJ5mDsA=^~F2t z5!)f`FW%ef=S_?Nl@D$WU4zW789m{)$#Ji9P|gnR3Vno1DDbR3qdfP|%{3wwlrWZ` zrLEBffHIqe)VcM|0N>Fsb}98oBMsF^JTxKG3{JuK4#{f_Rt@~Lj$DrmToxkA>r<9+#!`YxtVKF$vS-csJKAWVzyLr=f zK2L2A_>bFEuvGQFU>w=3zqye9sqZPAt==T1by{zWPrMeW0!X#*zU~hlpM`XsV{`eO zT0WU3P_@RE0cEd{jIDMBhsR5|8q=SS0BA~BqsM^!2zwBCTqiy{l(4B{ys!aD6#8HL zx%a<;tJzAg>+64wwspH8!a5&I`mc8XV?r*^ zy`=n*+A$7JWXjn5)%}|vZ9*U0LMMp*i2ea;hwLu_h>biC4Gg-(_@`sm*%VD-Meu@C ztW47y*D~`wiAn{rp2F#yTWm*P=ZmhGd0_9;{I+5s9Szf=1^4a{fbZJeaCg^lg9h1VSnLiE z*(tfTKP|Z6CEN$4e@cM+MaTnXzN?-OF0ae-rx5~$9N58Ez}eQ}v^E3I@Et2f4l}5d zNahxA#d}1yO5Yh@Wn21J&1(&k+ZB%Wk_I8VTBi^?rZx&V#Vorc}0VbT^-(``9-%U60D1yFqQptvs z0bD);h$Koj?sJ=LmWjz$8DCx7(8usSz$`(H@#-7&F!m#ajFipA#CVJPz8XVb52FLG z+q>!Os3`O+;spFG5kCxCh=f7g@9*+(g5)7kA@nk<7W_v4$!62+UG^ZB+b8X}_v{B{ zCgAG<@9+2V7ejeQoYrZXR=KNF&@Ab}h0PoHMANN_g+BmJ9dtBpHk@lWKpVgSgo|JB z0N2cQgZ^>Zjq5DY^D`F>W&b8xZW=U#Y(>S=n6&10Cx^wDt<*#9=h%xYT68d#gTE?mVrQn!S{U^-|KrCZ!rZHc zwy_b&Laq#3RoEzJlE8L=B=?&J*tVVq7tC5g*^ESc7|0x#T7a_7=EHB1Q7l;eSR zQ{4(Pg}M4@Y8nOROUw5A$e+s8-)&l_)FHn5PN}oSPcx{9w%PG@ilJIK>ub;r2xz2$ zkLLTke*3yXEhYVhHwZCQ`lV80=Wfu8ImryNS0JANt_$C&g8Sf-z3^)OmC5h4HpR1( zh;^tT_{1ib!q*LA_E=15ZG;#y76~Jqs8(f;uJ%Su-qs(vu)L$yx{FHe1NnB=G0H@6 z00JX(dvQAbw;^4SZ-6W^_4pf;a)X+<0Y?-V&izehER{loX3nAJcqU3Gj)oBy`Pg8- z$(yLm(^ybkim{+36G47S>txazo0C1zG1|=@h<;ODn76#Q^?jG)J)mOI@?;ZdFMLJy z_lX|ZM4E~aOtTL#|4>Vm#Z#+ObAX%b34pAF;XmGF_;Yn0o@U3F$pTcq5j%W^NK6`7 z!te9l?15aWPuiFX$dMd?zP&da5nahfDRNCP0FWbCl0|eg0LB#;;0aW%u{S#Nf{(GL zZaN#Df$GqV&p}V3sZajOD6+b|^uB+N#K>EiaSR3<2Do)aD_&BD8)(5Az6Ol!L-ilw zzpvD&q(%&!6FIZ2Le(gr)t)4Z-WC2$jU z1!#n}#ec$qMlJn&6Lc_ukol4Q_;b38@@bve4Mc2fJTw?!B!$W$793Q~QIP)V8rETf zK6du!$L{yHTXY-wIE@XCQyWnLF%ox(%4oB-z$pcZlr`7JCY~2b>wwOX$_EzF<2@-#}Z)dUi==1sodQgi#_& zswq+|brtU_wxFd%?q8-DbdgeS+Jn6msS6z4)OqdO(vpEdc_ zcx#}i5u6FPJFulUvxxJ~-fxJ?9w+bcl8bQ>4cJAtF+Bwx8xS3p@EXzX1Dz2!0U z(JRWUjg5+0wWCT_b?kZ&^Ba_sZy51l0^22M^$7ZSyMZHhzCW{RDm|~kvE3$p(!o`! 
zQJ#dNj2kswQ`-|_RV3uwOb$OX)StxtcX1Tk>zB)b5NRQo)fo4 zXVnyd_Vh}NFi#-Vcg`POaS1jQ`d69YvYcbAj_SDS*aah>krQ+-%W1)|$Jj z^Xc=XE7~x)Ret=Yr2&OhLyc8|>vAw^TPn)&Nn`lHjKK(6Sp=bA5PXf)*@CU6`g*80 zFy9$KU$K^f-ef3x<>T zM&J^)b(_J^bHp*mn{Wp^rOR&M|NhfHm9i-rDzgp)_mdDJDLA*1Xn+V;e%yioj10e%LcL=ofL$);a&zkn>N{Tf}& zYq(i)HYlJd!sc&+%UVYj0fhmXg0Th-)_2K=cfh!Ue(Lv3_}HZU!Y5Aw0O01-Ws)#5 zRRJ(}W%5gSUx;xZtNDt3ou3xx7g|o`Ke+ca6wVj5-s{2*TA>${F7nQ=EMqWYb1O`@ zSB57nY9d)&Arcn|uHrxg)3N^9ZEiP5n~a8fyp`{Fi7R3|Ru#6vb^y|lwXoUu)>zKh z@BYFgVSkqJ35jtj(>VKtLJ}Dy#6=uz;=gH=hMi`uD>{eIgec#8-}@Rj=EUrO2mtW> zTnf6Hs`{L*SJ(|8l1=0RI#~Egfy-332_kK>U{t(=6Ditap#AR)~`Lui-M04J;`k9IkqdG-fufNx!@9GR{jqPOYsUlnvTyv z-v<$OzME%^6C~K;CgL3g)|~VfW_SLM?L!s`*ku+lvx_Trwrk!jThc<$pa#{t8yX;W z20+>$B&A=POX{w)h7{pVUjE4>x6&HcWuL#8#Lq0^j7z+E=ZQc|;c#HoleF<}AH1O` zZbpG028+poh;(8AV@G}?b8OW`hx;^TcEFMW))=j|s;Vy|2>F&HTaKgvvom$^R=l`s zxu2+w^OR>|mr$JE&9Ny^8lRzy2r=D#h`pk)HE;~Oz#|s-N<1NB`%sM6G4={?GH*Z6 zt<`J*=V@KQBK15kFvEQ1^%!S+n8mvJ#e8`*vIMtLD-wWZMsA&DX3%w>EB{iA9<_!$ z0$)w3QWXq5>d7P)f_;pskKs!o!LM4VarBO;a-c5NCbL!bdIB5fJ?TMzVsgnqbb7|Q zfG|t9PuZ`zIIx6)Daj2(8?KZ5!c)BJG4cd)C{U{3=AoAtDf ziZ`U^E8mIb9toqqJf!8O|C7nU4Wr78%74J3YH~cG+IHK+h+RX0d`#kiSFZO<1$T&m zjnJniM;R&Iu{==Qy3`owUWEJCopdKSqJ9W#C6eI2X#RYa5GankkuYxy=Xn&}DB@gg zs^9BL0R;r$hXZPd|23vx&%5lEr~CCqUuNc<8h2pljJJ`K#k>pcutb*(sz&Gl$YRG# zZM~6;U{DSH(bdPiSoVd&k?gpKs+`JyWj9)y8S3Ua4uon$5%~X>g9)`bZQaH^5?~gS z_@PDfPk(l#t;~HEvz3dn@h~+@ZCv8~*bLK&Tv7T%9^4@j?n7qARiPcjs_Y0d4{>F| zgSg@q0iijsN2!3#u8RUsBbR{!H>?dXMQ*VSCLN7#vUZ33fE#o4 zD0G9XgHXcF9Qe3Hq_D@MO*A|2HCQH(pEyMTK_poiy7&8__!qfZ_1IT^ zS8siO)|A+BD8Jg;p;5fLuCw#bA7L*T*$vK0Y)fQ(aG#CWKyJ8e18ne=%n6_e8}*CeS23f$7u0(I{7f=YE!Ev3{H-;5}AWrEj4Q~T!<`b zR3cS>8-F!FV}!WDKP4w;5~e%jmInK3X@wj)skr-{M}r>O;3db}Hpz z%(;Z7gUCJ&jx*)S-O4d_GO{K;!r&MK)L{FR`4xaZbNPwio4wWeaiI9|!z2rO9f2@s zJJVFs^JwO4;9Ao1^9Ri4cD!UOGw~#|z+)67Keo8~-s0Z@+jrDYTm^Z;1K0s^M)PeW zGttj0-MKlmtC?bzI_i%(^!xN)IGXPo{Y{tk#N1_jdskw%BXHF8_XzHIDiF+f4JN)r z^=0|(wyNoP%LDXV zoNII{s$^yXG0`8`*oW=thJ<9fwUC+=&kNU{Oa-7*JZ+-S|6ko;5tIsJ$4DqF>lh9SknfrP(>qn~i7tTrGUjoy&{ z;cswK7)@Emfr_iKq^pLt0|vOrz!~vfxi!v4=WD+4F&QWL+xbxZl&~9^%-04oG0!Rj z!=E)l3uFO0dp4KF65#NgB?k(iQk&$IQ@rNq3yjg&c>#ai8Tj&EQn!#swfJNs*Xeo@ zIH{}nmHz~FA#ZQBko%llTW7;Yqvu*?a7vo4djnHj4YxZFCY5yLZPrI{=kO?<0I<+n_Di4%^cHct_AY+Uc_?9}3F zFDGBMVd*!9bV3F;ZeE-6_mZmvVe#9?g{g9?vM+%A!X-xYrjOBz2+`EthqXEl#059o z98U}8bLv@mwwo)A7-HB>G+$l)YemuTP-TGljI~WfxBA}DPDg(n32-?1p1~U)@oUB{ zOXj7Y>)5TWj0=YsF1NA6U~>EYxX~dYAs)A~ocl{Xp1O2dhUKA>yYd~4Nrb~O-r4gX z565&C7~EZu6rDJlTE3YkeUp*I&c;S>2RqadqpB0vwEqu|@EVN+XXm^w@9CJ51(%Rj z*vq0k`%e=BRZeqg3+yq~pg)|#rk-G1ff#%y-EGy`zsQxU+QJ&*ghiWDd0QTQab*4|o(O5pcnJwQHDSdS3g^P_k+iL^1o2%NyKNI%w zG2kkXKhiutvcOZ}rz_3lVu1@*ixs}0(Xo2_dVY~P6U=unU=!>M6hL-{M@N@V-_uYw zNIZfaFWpt4A6UIp0>-sXu=F-BB%-(x+7u+m8lVdhVF%b4?2o8t^12m72#~}*i(Fqu=E8eGLq3;{fyxI7u!YIYi!&!4@cdNaJmy% z!K>)3PNT);o`9Xg7of?drgVGh)fjf}wF$0k_LxIDD)^lyk3WB+Tbn0f{%`OT4LnW; z(R+BVYE$(~L^K1_!VGOw%hqT9wr^FbcRm>jLw7!TAk4|Pcr7{dpD~$V#4{7B8$(j; zzME!LV;{OB+MWdlyMCX|%I#w7W1s1AhX2FfoBu=I{r}^G7Dl8(BFUw!S;|t`LuB7Z zO4(A`NsKfYDqGp=lC83@qwI{b)+j16_8BH5m2D zq#QlbI`&6nO!e8`S?9F)?iwpU^|3Wd`)&8)ke6-Hq?(jfZgFGTvFWFaH8gl}D4gN8 zcx~VpI#(FzNuMxsNkK4Yb;$bG8Rg7&PUt zNoiEX)GI#&CKST8WNsUN$?HRgomR2f_lR2$r&0OSsAWLHI5Oj3acC2`76~^gICfi; zt4^BT=VKsT2vs4;_ui*}NH;z=gVgyx@FLhk0PT*_c9AXih_W93~rA;99g6 zll#}>_xLL?7!~NP%bDG{g}Y~Z%Ke;>2|0gfV5+rohTb9ht0!NaQnpB9ypPCt(B$DG zI`PWVmiYqR3HDZH!`J_GJcWVPsNU`jB&?xZ1BHw=#HLLmc4#LNG0k!=2iaXK>aiJ) zPeTY~$Cvm>7A{6^C5@A|u7P|{L{E)Z7^949jWam=5uX82akFg%F9-|f1ui%mVUj81 zz)9}58@L&Tc)c@R-ZxqNWYz^T3g~OJC`Df_z0sDB6T^Q$!3kT^+!zHc9;d0e6n@pj 
zde@8RaC}Z%{andvdkwCv;jDPLuDO1&5a*8yG>V)T8O*cMo1Bz&dn!8)1u3qVJ;tM; zDCi;(2A6BvgCkPJoyHHy^9m1cuW60B&%*hhh#cVnH&nK8;l_~X6tv3h*_THD6o@SL zJ-y_$CRsIPG4`m~B|cOJ5tK9fqE+DRF4;%K!*%2ea=OZlvMgT~ zo#Vz)txz#+VSkKn7L%bfsxelL{!n0D*rM}GO}imKNP^-iU$7g5xb%Ej^-(=jzBPqI z?q~I?kmecY#XZhcHq;*ndObNcwLxnkeWXPM_zMe3jiiqFqMD+$&kr9Y_G5oORQ^Iv zTFHs+CB`Ze2e}^k@ag_-6?B(Fo7Uw%aBc8{wUi^k9or%@h7I(M>F5 zg-Q<*%oVNS$R|&pa3&xj&vlzX5ov!h_D7Snxr~)m@flsEfl%dsY-eR!^3e5mEI_c! zz~$jwT5+$qNE}NZTD6f7#r_SxRwcPF5=sI!uM>Rx`_P1nZ0G%j45dC zRl8$<3^o#zpXJE%vVmYjOrU3V(EacI&sip})DGxcYY>KTKE` zI%XXqPnt4|SeeWjDXt6}?eV_hM*1;%@egt(?lW>kBTT0+FhxD(w0dcVe`qR3Y#(*L z?y@QBt!mIgc`KcqD_RWTT;4BYBa-V*LQ)4yBq@n($rZH(vHtetU%gPK>tk1X@ZAEl zGy5fZP+LX(lRR7m7a369rrV9h&ge!9>l$u9eHt_V!mKMIN1W>=kwe=|$hA5?RsPm* zZbZ*ukulvx!S?!G`PL~l#Mq;}fbQs{L%sb^ku#@BQlPZQ4K$dYLiDOSq%dZFUAJ*@ zXdHiZ?zr5?1M)WyN`dPnGF$6YfmqHSxIeiXGvOQXyD#ltUFZO_n7amw%62<_rjBn+ zORnJuBMJ1h1Pf%=bza=Q8CfQyZGIaGH#ih3gmHWs()s$njrK#)2wia9So+RE&%`{9 zB=xSOzB$eD@01EBo|s?eO$g*Grp(@0v`*gcU|rU(0|l~!zW<~%^(}JAuwW~}khA>o zvvP{UMcow+D-l(Ft`QE-0yqWjgQ`s@7FY*e0kIl_0gDf8PA*0#q+=H4}dNSJkr_4f%9?#z(QefI^9{F!hJJ9r(juP3M z+tBaUwuNs+?XOY%=sv3aIjiW|s?viyPp+z!Gr(jx|IJG&ZLRQCjWkWi26}e8ImYRJ zr<-@-yyE?~fr4ldjJk=8y>YdI<} z-GlIYpwK48H1UzVFjQGR1=8cUI{d0vE97OXe46O{3S4*bYvQ*1J2?5}-5*(R>3-29 z8A1GoVK24z0L6qc(HI4rY`wU0(c9)=3RRSZ~8i;e2AXcF*fuI(Rkn0%_H|; z;~mcNZ=3lX7YZ`_)tt$q$zsfIcW7S%iPImcczXBQK6Dat^XK-yDSAeJ!G1&1+v9ilMw* z6g*=wWve@sP;&pX?DcZIu5x)#>uqfglWJ@3nBUW7Pr=Y}josf*(zLAKwMwRRI5C5z zTFfs{ZP*SAz6Yp1^t4j7Qs;1f`Rzgziu^)#0tH zh7xd)SAWjhhhE0?_I~+#i9R80KPKcu2qRCZl%qtVk&hkEai3fi9jf@So3aa8vH13J znu_0A2Hg}qY8SD+O@)zF^vw+^#nOQpfnLwS)%1OtMPg#)Bk3_$$2NSrGZt>js5bXt?0oU$5 zp7d0IwmGIMsNvEkWmtNzFh4Hf`*3(6ly$T(7Abmt)T(i?nM_~xd52ec6A5<)$BjwQ z1J?6dcEBoAKDQ@N{4f?^DewM?R>cJ7o@>TdV&$$0(4hdUWFO{oM|`SAvwvPcNLOpJ zQcZGGmsspM+pe>ZfgCDhbqsM+=Xixb9Wh~D&J4`-psM>KUy`8BX6?OlO>XBZo^Cs* zufpv=cG7BnpA#&K1zVM-YQ0UBreA|>Vj_( zFoMMaZrBISB0&SuRKAb+8%$fJ&If7mq^th+dZRY%Xk|72_s#B-cSkTPP$5cjK)Qxp zhC^jg$voe8RDs>_r>7`*oP8hELb=AJ2##VT$j=HPtcs4R_!jWsvO&;L(U>qkK8{-| zD8sy|INx>d57V-StjNsHWT4AwxfIy?np5 z#^asy6;E~ru4ZV4_vg$hr7*eU(KZI84)o({cg?gNF>y^gO4)_4}Z zLQm(j$Bx#KBYj8$?)sy}iIk{Y;DqQFNYR@X4fp-I|G|>JA>(@TAS&BRl@@Xe- z(?pbU^qmf;WKLvuo@!R&LKF4bL~Yd+Th5hJFZ7^S#2B8?6!8rVg~1g$wTYK}&!uP_ zDolj`%_&7ho@`F;kFjATb*Lg0ZBZ&*3gc(U)hGI{mrrh04C}6M>Ty0`k+*U?Pt)AF zuICE~X6JPBb}GNk81VmhM;%sC3A+9DaC4$DeNQBAh!82CxN#AROB8@}yVj4*vPMrxl*_vu0DKurA6m9Mz8Z z;Dy9j@$h=Z(0zZqh+yvIUKX`k>N)Mg@=IEm7BmGTtVITm@=S!dwh@tv6wUK+XR}dU0^40P!0k&hqFHmp zL(9yQa@vsT-Wjf!)y!=t=S)saJaV-T3=>DHl27G!uV(iH*=8ADAMqDid;9DDbz8ep z+^OK{DOsfRXZrW$kSmdB{YitVJWy^J~`f>SCQ5MHv z35wcv+&JFA7_4(uR#{g_-{%a@{m!;#FHVTM+m4LGQJ}C8&{LG8XX1w{-d`Ly=(%o(}3Dd;rUq7;9L@}P26^4+z-<{cn`U2O0EgjEAjZhY>ETsZRBBRgk z_N;`^*8aEhA-bE{GMH3)hKF2VrI4|=YRJ|&jFPF9TW=6^Gytso>s&kksas8~#)zR%=Y zJskC2T|i`&(xV_3322Qh&WwFG4&7RJTKQn2b+5BtxrfG#ol;()~-x> zzVG3{f(_Oj%cffy>oHmz*X?W8tj6-^`4*9#RodPLML>05AgIpyVpQ&9a21oa{$^>^ zYGje81A6o}6rxtT0}{sWtE~2_%hQcGX0G_a)3>cvUn4GHO!oJ%6bE)zdxe7hBql&| z>gB$NYSen$!@o93-Zfm4^1EcU{lMIH-%YVAPr9)vw(V~-X{=87&M#9o!P=dKraDh21X2uV3`zCWujP;OSw0ei*tCTT} zsl4yYkW?mzs$E-mU}8`-a#Hu<3HiW!QLR&(ImG$45#4ieq1~!ExLe(hO=hIhQJuHI zNCT@#6PJ`QSi7n&XO%kcG-GR>8l!+zv*t9#*tidWJdt1DLZBGM|$Oy27_)0cnxl3rPn3@ z@9+O_e*Vuse%~Ka8cB@}-*YGa_3=B(Mq6tyjMcVSS!9fdh`%tljGJb=Au#*V*N^SaQRq{nysx78Sv5CXyA# zkXk(K;$;+Sb?3nIpp`TMV!aKFi`uaEWzCd%_U=m6woftZX`@I8LqX{L`mW#hPTFSn zwMET*z-mX(N)=^lvx%VR8mS@e;T<;+k^=VG)gP(IE*z{!gSI}Z7*KAAUgTd}9w2@F zj!3DMNDK0yp&1PBrZRbd-`(K0RR$Fk2k3L%4C^U_F*6f_m9h&IEbyf&`_etTv!So^CQ>nY=pby|k-{wpW5 
z7L(cRG!Ap^0q+fg#La$ruZ|!PG}9f3O8~9p*3dDl!o14lA$kPlqD;eXvVLAR`)lmB-V}j%D)uy6ZIhwM($aQNJ&vErh~`+>?QL4lYpNlwJHE7Kq{+ z{pB0LeZ**r;C+8G2T#Zx2zV)SIf%|+c8q@*cCPblImu*DM1a9|K^k`(r-!P~KMxg> zsz~f7P2tXbc?w^QYLT@~OJ(h4M5Yf^g}&eHIp(-5)uw*~8n({?a)KsT$|R=VOx8&>h#tzZStpJ(ys zpv_-Z?@!`Sn;1HD0*dc`w+K|mnnkhEKk4KO71$6;gcca3p$06Q|${P{Evsa2u-}+d^Zia=S!}T@0cHwUuSszVlZ_`99L(QIk^Nqg@Lypv zf0_h<#oB_3T~3-km)_WD-|kqI{-y6Dy9)7YL3Igp<5P*}rqWRLb8%a@S2QNkv*=MK zn9PQ`o+}m1EczOl3A9QujN}*lFlmub-ZDUyxVJaP@QPev!8jb_{CMskwq(_~HkDw}kE7r{7fyO;K;E z2pHV&KJLfhwi$PMQWodgBov~itr`|79rI2+q2}}F8DQGK`*u-HQ0M@qG(TDINJksx zc_+cB>GMO?<4wI#iG%Wt5f%jwvwz07e%Sn6|0}(Cg`;DR1mmEF^UQxKC+1!zDK}6% zAow6v7WVw^|Fo(w*_x7(c%7j65e??yI!--F^77Ih4RQZA7)XhuxgYICIj7~vC%8i4O|+~3gx+d-}|>L%yZ+mPMl9sUAfvaf}exQ zm@NtSuZA8H5A?1q#;hM%TZ46fLPe`(3601bwiU)d*Dqb$AsT&IWwBQ_bo$0p8TtyI zUozBUp2`42`OI&nR8|jD&6Fcp)wwntMEKK zV#z^^$)iVuT!yy7EY$yduxagCH`7~vz3;D2j61S(X>2VxV@au=XB`kBAKwI{o??h( zvvz}=f5{Y6uz31^4~3tfziRD;zMK@RrfSN(H~&JNAGsqzL&AR_15Cy2`3240=L}A6 zNiW!N+W)YK+@Ob}iRc2y6k^E@y+oY&YMcUW+sN{**NR-=HmUq08%|EJLg&eG#;VRL zi=W~ORs82GqyUoM+h^{RL9MhQTrJ#cz$<7FM0MM>@?>chGsAF`_9o6QD=Sw8?b6@s z565d9T5SZgI!2BjyQjh22p%f_`$K)zk6gOR6@5W>oRdiGkP$$CA3oW_VlTrUB2)L! znmF?|OO56wcs?B^5o!s!uQHFz_BAS1u;6{!xd(*$wwNF{gnYoDohzZ$iDcXrZC9G1 z*BYj7t>v)Uk!>=lgFYeSUp1{&rT!b#y~WDVQ7fz(&@ zck{9;eXaZK@6VdS*kc>GJagN7C9DnIyEXXU?L4}99RIO(R8}wiyY0vKKJsC`pg3Mp z_|z>uql4cUa85$i6%0(1>~%in-?U*$GD*>%&Q@(q(AdS-ovpzWomx6YK`gzOdD|qR zk+kp8T5Pk(5Gt2L++>BY{p&MLeAuv>C)4x&QUK2bJgPP*UW@0a7$Lj=IUAMjS~l9W}7k3HJkZzPwH=$Q=qo1_9GFf;shi~ zgb$b97TR{p#xkeKJkRbmt47k*rG3|Wid-8?>>1_Hh_!a&X;2JE^k##Jdanj;dEzOz zz5nE|`2sMJ<403Yi(e_p`JkDynv%;D**EDjW*06p>1Hg#E>o=a!n~}eSZ#^m z-@wM5Z7)pK&KlYC*Kz9y=C+wGUx%BJy~0-R^tp#Rw_aH4FLH=|w+jr1^gsznn6+kT zB;9pY`Tl`Y=!y2OM=pOK;!Y?$O+U2g=cEb&m@hQAlof`6LJ&q4MEvF!s7msQ8lfKB zBdTs_h==G#1NFv4&drB?t!oTv$phHD6?P#5jXqN;OnTE65#MLx2j%bNxSdPUenwPp z4m+L7f%C~zm0EEta7-rdx4%F4a@PW9f3XXtU(Ic+u>S(O9Qo~&8t35AQ_lYVSk*46 zbA_lGVpd`Dvsbb+&$hs*5wKs=B;Yu`4@Xl^MJtY=<*iW?oN;xwH1-$IjI$pKAi^^%d+3Wf}-!I5k2{Ey1+_ z2^z_^YfH~?G#W-6+*nlUmn#?i_Gy6mZ0EeR3PMt!K(O(KJbC!HxQJPnCg%~K3ssm2$Ln{rVdFgRB^?+Pi;)>w?n0g0W5R@7e>0OtQX92( z=gN7yqZY*F19Ct76uHb;ZG|xZtr$Hg!A}^^E8xK8?_24M#bfKND^y@{-t*za%7<9V zl-{GX0m4AVRZt=2DNGEazCNju1e%hrDH!}lyR zxUon;P4*Q!_h!6D1wJJv&)hlf7UD{$I|8DY-9Qw_D^}Tn#lu4V04U$vY}=bm2Q-pQ zP+^h1rl0|HuV+K9+T5VgjXQ%`muAM2kA-y5pk?LrkZaWgUiQ6mFKGS9MPY z(=OcG;xFt`S2PIvxfjxu#$hFTbh6p!&ZG5y ztHc?Na21Q8&eutj?*^9JnV$T1C_nNat7F_8y=^>4Q*AC_7oQ(>XT;bn*|$3SKq;wP zG{K|AJZ(=lAW2>o=6dv!w0A+)zB+~VktTCou4DX|V<$bZV{bC83z6Zi%tRu1OEU=& zv3uOrT^b!YHR#EWm5)&bK~Z5-ywa+E24T^BJ8UGUZA;*-pz-gnp+=~J!JKwO!XTCf zHga;K0Za)RxF{BRWSMFWhXkgv37}ZC%wA|J4_H1)e4dqv-$t%hO!ae-)eY71eMsNF z6C&(qQO8v|ci^(2DEK7)P-(`Crv-@|!cVf`LKb0c_8* z*QNrCir3a~(h8xZ=S|qc2SEVevjOPQ+A}2>mAm8Nk(4#)+%`#%>(&J7Dr|gL^U1R| z)JisQYM-x)ZuEIETC0a(n5u{kA*NtQO~X*URz|SZvSBG6x70uz=wKz4GrW7M zXHNi;(1P@N7>sAP_KWXDBB|)2pKPQ*&D`5U6Zkm|kp}HhCpjJQVAH<+`?Fja zpH&NHAp+K1FVvqyZ`4s*7$Q7y|B8AEI8;AJ?Z15RQ1~QcgHxL5h)T#k#AuLOJc=i< zPk||vHH9(OBOc`&&2}2x8kvFkG9}36eq}710Esh5&CB8txrHVEhj#o_!0K6<0&9?Zatoo+baGT0Ta43 ztYwhpK_zl3n#KDwN_j$E5mJw2^oSOSR#pT=zU}#_L+l-lgvEQu64@-T|I#Y`nOa z_$wVlly4=6xNuJ0(bl$E?l}L`lgTm}^B}B_hsvTKTOKpOgeuV3XD3qF=$%}seXS6a z1n^#D(7z6P++g@t>hMm>Dvs=j(`ee>M5B-RuX8H5I_Dc5qxC2Hlz_uNLp<7(KH5;k zY+uj0G)j%y$;S~@vxUtbVZS7!boC_Or(?1J5bBqxT|t!ZsofyVKNM;Ul2m2sJPBfcp^jgYEJIgazraOcck5vA#oa3k0TJbS`SIcKNngjxYKk zZA@Kxb89Pn__<3rFqKrktObvOr98@h?cq1Kwhy8OADIH$Z$(T(=t2lV$KBaMwn2hDaQ{k?klz}Nqc_{UF~=tD7d{1k?<`z5^bZaA`osW*(elpTtr#? 
z!_fvwqwz{!d7ufQwp&;#^oZaY-B200)yH(<`!>dg?pWTrF+4f|tMc{g&_0GJB0c4O zSi9bUx0IO5ls&1o-kSFO`){dcqsiSP@nh<}-sd=*#ukrYV7Ua$>vn!%&LFD#y4GuW z6x?{&BfeP-u1CIqp)MhptIMCO#OY!bXr03Oj*Qkgo%J&mvZF3c*Y~8}Y)#ORS&>R# zHS>>#+1WvOo$WJfwDdYn&Tp*rCr$1N`)}WW=E3PhTopbAAYVI>cSv)M*}e?|!&7aS z#^No5Y?wFa_eh7@!(WjvF$Ujw2Z-ud>3uGM7&hQH>kdvkItAkS#BV9B_Hg5NHMdw) zvR;T(hMZ#9=<}%4ZNPo|He`0ZS0d-LxI5a95-e4r9E&6Ubc0d-oPmBuQk@Y@rUYV8 z1Klo3K$w{;K01glR(bi2EPcooaL`H=e?8=HnRZey3It4_$l&a>qh4jVr`GAH9mUd4 z^S(2mblg49w)3@HjmxMIiu*PjQ~sOoW`Q(2y(6%!thUX8=5$KMRuzji6gZ>3@wXUc z4*$AE7`|D6gR2P5lnh0b1}9G!_RmuIPE(n;`WM2|J*&^22MoD&LOIZ~c~Dcnd}3=p z>#q|HTP$z@HR=6yB&$~mO0PK74U6QC=nCB@&oKtqW0-8VU#@~!y8TcO=F$y_HU2pK z*I-(`eqtZ){^le03A}uaCUKI{1|Ofvz`{Uu)N%B>p)TlF!7L@_rgzOvQPmE^eqV6d zB5=io%MbfDA6^NQ@d{PHJkYebmQ(D`tZs+}j%KXS_nB)Sh*ZZ~`uJX11;!B?``n=k z`fn%1>D}v2^G&U9Mtk{yS%O2(A%wC6XzU85?-ySJspkF;E*Z$D`eazZ56%vK0jBm zSOPYB`OCF^-Lfa>{=x%rr4#ytp=tcZNdt-tINnn<^6FejSa;u^C*ATPrpHcm-|urM zYmROMJ5T%IvZFI{m85RB_at*y9kivDLT;xGFu?-5M46+*N+9!gxprv$#1)wEFZju` zP#J(@Nci-JFHJXYr%d$ayE128dD1nrX+BA@F#Io#B+R_Gy#ii3koxwqn)D{d5mAtY zMI0$m^==5=XNo)v6ue>&O^qlR=gr$|*@~|y7k+hJ%IvNyX zd*&LHeiFb5H0ZSg}kIFS{dkg8Jip*ccvw{bsrA7J{-E+1*vQS;)6CS zb@|}Z9)w^p+ZJtWFXpVfLdz;rbsj@i$jLODGet0A8(VvW+OJdSYZd&NA)jabE#Thd zRZ(!$R!*;gqsmgbKotb=9N7$OOakHHrw=RP>-!UIR4cwJsX-Ug=HNFQy}N;tpn-B_ zJm@j!tbJKLo_zQcPF4?uaE7%8!#8qO%$?GUl{#o|u{rL0k6lUkIcLQ!5dJodduG-` zBfeu*w6+tgF(>Q(aIkoGM~u+k$TshQCo@a-L*%(m|HiWKi!xxrRg{F8M^Dgl6gbj` z${fl^+lz{eM_ex_TCxeYCjOIo-@IXJxfdrY3a+H+5Vq&kJHy?eBY5hJT9mvB8`k@0nrh*x~w!y?d-+ctZ|R|zgwA*oabg#*Z@$Ge7#_!Q?UlegSj!ejuB(sf%^ za}-x3pf?;JQ?n~nY+!K6nGZ8^$|AsRAmJ>=+fJXsN__uL+A0LX+7i5o>eBbyH1jBi zyxEAnu-Of z&QTkMug-6vP!aU2&9pAIFu-n^I^L{LmTp+|$hZai>{Wuu-=B@XfaP!7=)?e(nAkQe zGnd4s%z5=bGO2m;v6+yITMjM`@gaSM%=u+oSz(gO)&kCGGci)QNs5Mq(w&@h&RTkR z6=M*ft9C44uH0A=O^LjOw|~k#zk)TRAwczaHwmEOdiKF=%-UzYO6=3{nQ-%)&RTb@ zxf3-w-6b|j%`DAC@PK#@U!O3pVpo0sl@!Kp(IpY_o7o4jml|Wcv2M)`Y#fLLMxL4&z0p%NPg~2r z*-L4kcMcYN7u^%`O^!KiAegJJIhoztrGP8@E=xhpKRsAk=l5_PX(4`BKSh&-HjD2+ z%>{6CgCTrTKa#Rr0j$ZZ3?1>k9pz^$TEk@E=D0G%t4Y_I8CiP-{8|MYUq#5uxZLM6 zerv}xlNO9shE`)+PzrcX*tV=+wFG=*t0;`mKW!+_n+k^mGN+l~KHUp>2g>blU#iM6 z6Y=Ni$Y`5KIM%&YZFwnW?Ebi`!e)N)ugKCU;EV|wKX1pZS zOIzH*mEQdR$|w3z7`rz{pdOkCcOYZUMon^_8<56)DgM_7+d;YSFrZE>ouz&{hv&R` z)6`-wj(77ntYoNKd?Cz_2jlhYVRR(tSaGaq^qsSn5?mmyZV8ftq9OWjv;#KwG9q=&j|yA{}A zahe-ohUC2(JbY!u_WfL)K75sS1dE*9BE4wl@k9yTn>?+E+kA* zBsXo&DnNq^g~aRwH%rZ&O7KetPz3yjK~l8=sT3cQ4AL0K>FI5kU_u$52bRJz!^=tR zOQzijA9Ks+7ZRK2CMG>?E9$L{qU+hgb!!ij&wU)ep#JK&#cMJekB4t|8};9taApQ2 z?yhz+OO0IZr4t~2tdzm@F8t10>?NfYa|Vq=@*joeHBM5md(E}j#7zg&k_L9u*UR^M z)jcCjiNwBFaf;#dvj>nA^X6mDAu=b2liNTQ^E~r5ZBr1x!HscC3d|j{mvChu-OV3b z`8pPQPu5*h@m@_Go9xxO(5#-Nzvc<xJYAXRphG$OGuNw7*9g9(ns?bw*!0i zTZ@z?fWh(;_A}_GFHFcH(YW&S8il8s&ADyPVmkC^j)StTQCHO5)u=e2m8bAeMbS>? 
z&cx9W(L0 zC8T|^GTPPf-NK$fK3BZ!o7%axO4g~;$$UAhFJGpG-&^!6L_NCcmBEXJ_BL?(Y^Vxv1?%j6o z77&@7J+C1a+~IT4YM*r1JlD&YSr_`mvQ(3{d_l;PFdS}B0JC04>&P#KBCw2ba`vmpP8)NO$?f!|-FvWV$Ysos2)_^TJfOR3-0oi2Eq^NT zw9IRtizytqPlmDFL@0fPEJbX;X?llPce~F8tIK&i%>kMm!CmN(7`ot0n=j`a8wu9%YhB;yE!f`MVSx`p_p%9o6Oh)(dV-lf>a$|qb#Hu1!LFTP zpR67Ga*Bm(D=%EPB6TqD?!nrkd%~ExN{Y;pgEqFs5*()AGt`BndQF{n1^Mn>6+Hyh z2F`lP?23&5fs>*=)iA&fvqMfWVuBM9UA>+WJm(C4`^!#>*W@9wsGiEv09>+k%$Q(Ptp~?Zx-M%Y z&f{;}>|3!7-G_}p@!8S-@u3dW0fW7Ho7e8I+jHTgbC~!q$+iWXeYck?a`jphC?6(o!tCJ1 zEC55plnD1^2!!)uWHjPY-0$|2%Ivky2H)kC|66g@gLdK8R@$Dw`(qL6_5@u@Byo=H zB|$8k%x){q&NYk7nXlVLdz7*Mo!9;Ww-^}Z&ySx~q9!u%>|w)n9pzHl4YZ^n10QOi zTj03*(*Cyird@)MZVzSMxy7M$r%7siaVQ{oOU{hGdZC+NlERVt(mux8l8Z6j?@npS z3g{UYyi!Kv%Kb~f&T%hDg=T^er&n(TPRx6N%f6~5Wd7Evy5zeG#&nl&jeVWWM@{<} z##`NrGn((sU+A8Z4&jBfl2t>I7r;)D$xH-Gn)z~ANBq56=!jmO+M@-To5EZ^+1ju^ zBzbYNB30#UH*?zez2QhR`)EL+AC)N~35C4D_f2`a&Xtb?=mm7`cYm1sSPCf|MLLof zw*=pS(Es}Nw$9~zXlYdf_;aT_yU#Bvre;cPIyA2$_|!z~!>RZIZDwPzY(Q|M=8&66 zh>YOHQuF(vdMb^=ZbPbh!U&$3ow3aDtPnGrT+Di_^TV3NJQ!l<243T(0$v=E{R*Fz z&)*XzRpwXwK6^P+HN`KQ^xfNw`H)R;cb3!Z!){^WPz*iPzgwwH2vIAI!JCK$N5mdP zo>x?xQO#2|p6bGW#Df-|kp#rC>`q4_cBfu3sa*sakV3>C2zk?m{nBP`tmd81alqtO zVrpfU<5#GX%ar_Yij`Fa_AGT8$t;PhxG+5FhmxZ0qXpAEF6opIO^+6!HIhm^T$BpB zMIU(pVH`V=ORQ`VO{Tc7&O0WS~RrZt2BF_6{RBLSpgN<&6? zapUVm8OmwDT)ky7CoNURB2iqP|7?La7cPY2!^;x)py&x{-eP7j3t0r!mDDm}35{`cQV_P(W4t3AKp_ib0yOw&uiif}+aS zSM?(M+&d>VCv`ymGnX$?!a#^STy6mCN$8kwT#U#Q;dA^azdH2W$C&QVz1Z!A+=a#9Ps?Piw6BVaPH>uSvMO2- zqqfr&>+%u<;iH20HB!G5LBQ*EZdkTblHW3Ei(xTRskyUdbqkh1!QFo+vpuTo4AE-} z;7RA0IrCTb@V&!ohnRT8R`wT+=L7F~%td^M-OKHHgfx4XGNY-EvBVM&W%skSDxv$m z#ts*72=A-Bpt+M`$@6B{lw}nq96q|g*0$M^u*cGww;C#6?q@>q{Ia8Xa8V>6-eaqY zq(GpLP|I5VQl-QCg0aVcyiXC;6r<7r5ZOeVCOeI{a)%$tf`d9kg+4J=dt|KAsJY#p=Jjfd;C+Go z&8z3@q=!|9v=H#zksFbd5Shl!B3bXb+X1U=5UN0a?nqmC%5lh~j$lz!v~sf^z`x$; z`crhgwAs{rH0Vruak!n(+}`^}Q@5FRpHq0rbBQ@?hqV1$nR9TnkH{9RK;ryyx8^OT zBjSc9@+OPd#13yxFWcx*f!xgbD@WeCGpqo7ZD6VI%a)Z#OMvXiI?QNcgUU7X?_UMf zG{E%I%)6kGM*9s> z>?HzKNG>kI1%C%I`G=paj0L}6o8^1xz1Ab#EF`5FNHbWfaKe)JGwB3JYSpr!Xq|NlN!XC=_T zKCtl)8&CdM z0|H%YM%U^hD4)GZ)ravU)nl9vgaG-4(|Ob=p6GyQWzUG{eXTBAWD4kY-)cbSv0C25XwN^CK8Uso>YJ($K!t_q%>rZkH-ZCdH08nFTR@G@MCI2Nq_a|! 
z`a=1qT<=Ko9W!~=DeFT$#P83}U?)oFajZ);h=!eET-*rMLCj%@SUGn5Im9>1#9-aV z)C13=eA&(CnRuSj1vZZErUx75bJ8kXBObEe-M-Oz{ouBmWR>Q6eU4d!7zCjoAX?== zl)!PO;0#fgKMO||-2QcALQLf?RbDM0SLsNrUwT`7YI0WEBz{n+x8Hj!O!GfS*R+y12W3RlbDh1L)CE0JyOToKApA;90w} zk45e7opxV70Hj0GwE!JBqB`&LzuVYC_pj1*>$4F%Iq7 zn?U_2{lTvrpH9p@HtG0~k6QjXcd$$_WVEyV>X;Cd4ENFtb8rz?8z`9laGxI-FK%d| zfcS{etb`qvt@8Ww$|08fCNmxhXs}oa*{&Rg!@_!#hOTS;w>q?~eABlLy%j$xsEwqj zm&T>#<`UTcs_h-`lCIKu;YYi^@3{*kNy)b^#t>IvX&Y2C-G9vGecJ5OxwQ@{8Y*7T zyxr>-!?z!hjDKz3mn+tgYQ|5dvM1EUv$9+u_S9|va4AHpEd&->Oq|^W(^R5qj#>pa z_CS@Q6&C{kP(#gJeDU%Lx&q|eFBc{usjd06rKRE^Y8>li7!r-7e#ZG$*!Tsx^a-rY zq@DK*T0TC5>i0O~QI(-biPOtyK@x_pba;);Qmwd{pr&-}&L6Uz7Mi>#Xg(7Ck8vaF$?;!$96c& z^*T|*Q$xSeJ$^_yyz8zrO2Homvpb-Grv$96>&Fyd7E4M}6hJt=JjE=|8S+xFNk-6M zZ-MAEOL6!;36!;jR zyZnV4FB)OG-(K^MB7v5d>gjx-ole`m&JcW$*;}pNXZa)M>FF9E_gL2QQRZS2hwvuJ zrLLml{eUwvZo6DC{S1geiiebvBJ{J|DhxoMumT3+J7grN=l;>=E4p=#0TO-ygw8s1fJ6GT-a>ZV3$9vsRYYJiJE7h2C821g)wiwKH+07V-z6D~y?3)2V{J5( zWO9;K^`}a?vs)`A?tF+~0-JXASID^R>v9_Px4P&L7A->}Hs8#$; zc<2#iK15ibpAmKHEOq8p@Kmc|N(zPqaOMm+AXP{%#%LpRWa#LVZM$$Gb1mXr1w_hY zH|Aa8hc3nUkLun}-JGo;vw6SLA3d=R@1Cr++Cr1d9$tidiuZH(b}a<*C&u9S$-<&u z<+|(V$@2nq0(_$S2pwk2<%L3t!>BrQ=YE+&H3RTv&X2`~?ECpoI(jOPS~2~ZvcC;T zzf$*dAUVgL?FVX|l@iLuCqd#o4?&R3M}aEE#G(Xd{GG<#F0rvv3vL|IPSi$?Ynw$O zT{-jbp#8>w8a)#%zZSBhYqr$GU50~ln73M!4C zEe@zB_Cl&(owcdvK&Wj4W=tpE(^e;F|MuD-!W3E76O%N=EBi>?-D6{?LIrtKb`}?l zy~Z&ol}IYpon7A3pNq}=@X|<ws0f)4)zF(NRY6nb{CvX-?HL+qbQVki znSf%t;+KW!W&!x|kIpX?H}A`ktIi(UA#7!vwkuavLZZgF?*5=d-oeKj9}o= zIIDgB>coS|hAhnM)%SqB@Jlcv0#u%wmSciW=M}922KsSE6GPt0+;+s;H0PpqZB5 z^=?gEPEh2vAzD^K>QHu28r1xy0RskKQQi>oq{pzux6SeykV3Qs%o2A<+6t&6soQMu zYCIwJv99o`XRlmiO*k#0fzF)s5Y#gTb&eiog{gmb1FZ#X33SC}%V%GKL@{Omm#NiN z0oE4q>A=vf%HLT-;%fV)H*5>gTL*jW;X6EG{tuxI^5?88Q#(NL_-TVbX92&pvv4Nn zDsGLDPCEoqV5F_DS+A{2O)&j3V(7`IdqA!c_AJL$t(-3M_7S*2UY1LsZo9m}TY{ER z*pq@}cm3@U*iRP~0th)M{a1jAKyR3c^~ZwZ=$?Oy_S@bm9EHA2?e{O!#xs~UKmYb{ zYF!yizuDF`Tm~g|*PRdcGobj<0Sca5;sOQ!{|OiWBNg~w;hZZ~4DZ8-h#v zy!GUpf0(|PMVDHeYa&fm=KZBa(;u^fX8HGDTja~Cr!Y=LRq<1SXyT$nGfA+OE~vr= zyS@Qi0T(I3&Tv7!l=A%XbS*Zf?JB+fK+qMUPl9_?=&Ch8dOYdB{Z%KA4C!axmEI77 zf*%IYxXShBYpO%_GC*3W1kol1^STzv%z*q?ZRtlO27dmWn6mQ9p-^!XovHDo1f}Qz zR=RcrUJb2u)WH8#zWcVJ#|D~a8Q*eN%QQ$h` z?>!TnFL;N*@#_4~}k?;o-kNu_YgBcLzq)D#|kb zrSe+%-?Lh@k(KRg`wUL{(sV}AV3SlZ=^oVjlrdl)!Y~0E>%p&Ob}h{m9|-^Q~@nEU{Fp!tSMpMo}G#MJR-}qOa z)0pc(n_kx0J)pFn_I(w#$AjLZ>X**pLG*8Wu} z^_B{G_sf&?@s^CRBmH&m!%CousJ%`ZSjBZti~@;1r`><+MZDIm$zx{f21*0q)9jih z4;5Pne0jDz3`|ATQ!sri9#dp4XcVl!jbfx)!qKC4sfk3N;z#ELzqj0Cm4H~a?NYye zTa4lryG7NN?+x`ckLj!3jwY-IeQ=sWlNA2+@T{i-24o(Clh~_Y!T|N8KYabm#BeYq z=o$xOC7LeXoB3{zXFoW4d%n48s{ot!i+m18us!Dme>EN7`)I1{PT|YS-~yJFwr~CZ zSVNeT$Zub-Gj6kgX=aUX2~;g5$P&IbP`^oB(|)l@e+hQ)9^(IF?@hy@?%zIO)#a+R zNRc8Gm8~qLgiw~Tj=dBiOEDNKj1XFEjV)VbXN)D;GDg-ZEpjQ$jCID+ZXL=pmTAoM z{SB$>fB)~}c-}n6^XBfQW`6Vip8N7S&!2{tFeUKSOhq{J?z0uCMD_ptG&&=0DY86W zpkAk@+d<7!C1!T01%0m&XKXPj&Kg?2Q!@k>GtspFQ${E1jhu&m-tS`R=y8PR7uDXH z^T|DwV=7B2Fev68=v-DsO~13=xc$hGbK1JwsQ;-C1?0X0BF{{-vE8XbYXU4P#sc>= zwr9_hydTgwjUSv5k^f1vGl$H0-hj8!oe=#`3Yl!?lH)l2KL}^C8U25M{tr9!e>C|& zIr*1k`u|cO$~PwN+xv5WtA^e#M#`t>cfK+v#@AggTd_o{+Iucf?Hi-V^fvzd*!EAb zO8+P`_NV+&XD}@SS;({1!MA9Z@lNq61H-O(UcKNH0)}Aa%8IpjWJ+YbZ!UXp*LBn$# z+2;?c@&JD-BL8CCp0Gcr%-3HKZyapwZN%(u1TRtq9&>|K`BtXH@fTo7*j0PwslZh? 
z_*}F9+6mlaKCP$V8B_2BIuImlzzgjL?v1X~r>fqOg5G-OwXVPDO59Lh4FjFLPg}(X zAQyj`KRPu&O2mc3754j)1wv>dC zzVu}iKP) zD^-&i#YEPnw{_K+j<$?@&IEg*F1f|AcOa-EXJ%^E>} zqt@P@b;wT#`k<(#`J=~FWf?fnARlApn#o_xAF2?r3HV9vXN~2F|2v&xu2OEf+-BuMo;+yRW~o$XIE5 zWh_{*!oh*s%ejX`n782IXD++HKCWCKQpT(U=S05_{l8J6xq6B)OkHjB-QJmwAu)9v z@#;N%CSWwH#}TEJ9sM_rFby3OCqxLQ2oYoK4&kofEmS-O>$>yVsqP-(cqf+J=j-lA zyTa_4%<%j2ZGL%SW8mHWbhXAsg1V1EaB^hBTyU!0r7L5baYjK2A9;C&d0u}AKl>G8 ztc!Gpn{un$R8e#m6u>xV2ILsyq=kdW`pyy@>=AP_pl3FUFani_0$w%WwL0f?39_pL!x^Y=RiFgmD66h9HoB z%7gX5sw%kYwy)h3J*e_x$SCuT8aFP9%>f+o=Z`MWBIJ1U;#{=d9BWudG2-DH5Yx=r z1%9TT75QvZ?fVKJ+WkpU?L$+^5UY{(afqeKqP7PO+s!E&9Jo5)!5^{9o&nxHB{ix1 zL7KQF+cvoFN2?0?#C3uQF3RKB>6EYuqtnQ&jg)^hpJs_}rN9{TuM~xUDl+#}tCQ~2 z-x$aeNb3F$bH}N|6!g%-ZP_L3b42gO*?X1~L&s6Vfp=1PO287C(dH_tDUPuM`&RFF z9VZvr@}bL?kL+^_s$>#nO2#fQa3oxbILQv@m-d>L9vcS7j)j$sdRoqlR|J!L&(3l- z!Ek_R7?Cd5+XCx1X9q3Gzt-D;^mc~k={1iC$1{2DI}Xo87PPyy&Cjd6aysbbfrH_^mvY=Z zmlwq8&$zQ4cy>9ssKYgZ9f^3O`BeRrQuE887qi}C=c}9<_tGPHQs)VE>@XFX=a{my zhU2GckddDS* zoxip$M-1=dW^j_NFWA;C{0!v}4|=dm{?nsj+mT}Msx%@@Im|R<-7T}A7}0kVfZalI zOX~}FXzzKR6|q^~&!Xo)6{Ak&aXM@Xm&)iMNRUmpojMn1$`@6V<#@5fH z8|UwPo?h_0)VE98)_*Z+nYgPstB=kh@|#gQWG`3mLe;RH@*2bcydGQv;XhIvd_Qva z-o!ud9Yq|SP{HHqb*+|mB4h45XxmFie!O4fk#}uFrKQaBcT$IZB4E=Sub>~ZepDF> zT6F6Ne(jXL9N8TzxpQEd8d~nQI^gmic*bU%u9@X<5pJK85%i7Ce-WT_;bkGd5LI!5 z_4LV(9|_O+vZ=?vU1kOK&>&2#nS$d*?z5rOM0BEUI~ zUxn;R9nGI`-G#@TeYI&|L*a_7eSh)$SvYl(TN5ogpT&8^6z1Oj81yd;IlOu*wgmop zDtCK7+NT+Q+kIm$JARATx<43Tq+aWD{CvRAgAT8j#Ekn%LpZSA<5$bhUS;# zDXfgK2yGLEZH?JDFbK+bAMS-a1;6r&0ZGzkA7w^o*okeSszEzx+;9~Pll7c9$iyRX zdsohP{EXdxZ!H>|YUN<`ZI-+YriZ4_jSQt2f{Xvt8D|pk3`!Z>{w6N#&fg<)Z=s(2 z;|E&$2a_3HVLqDY*@UdM%k+5VPSvKa2w>$5M9D%nCEw-CHZFY1%)R}oyTlq*alqr z{kGx#TRWu$SOU#Q%DZY?(rJJf8tR86g+bKTVd@RK$W*%Y3ff9ZzQx#tq>a;hPQQ^3}f%K7vAgNfLiZ9mHG#dYrCimzD$W6yHWbj0 zaE~KJOz`AeKYl+s$73pjQU=&9|EWuHaMIVZt-qWmx*jN)9j`qKxXWIi8sJAvL;4gD zJYnnKwPkOr^np|_U!<2?jsC?%=N{11u*sXA86qxiAgaTd*+LeUq zTi&c#P+|4^d*n}{KxM_fF5>ozJ;~69%>5)=m`_GaH%(}*`_@BP_F=i%F(@6%W`#TC zOK%>K>QBUL=QPi$eRg@|HQctX!S?!x=XYMr%cId^@G{F0FIORWmUFyl6qhopfqZKS z$tQo{Zw$!wQQ6@AiEN&7DbrXRW9XCB@C}VO*GL3!)~T|2&*>4Hqs7qlVsYnB)DPG_ zMT-3*{&tA?4Xea|ys2^*XNoCja^l(@0nY&DLa%sdG}fg&fBprzGkW zJrn(mmc!1wP%>SBQM*<~RWF?{78?0Bfv-G^QwJ8&BN&89UO??**VLZ4UFJ!;a?R5B zWcq^Mt4-0owi6|uy(k{eRTT+`RAg{PMO>wt6!_F&d`eG9wI?WAq2Z+3xpXY8T>CyR z)<;4gb1@Zt7Czqb`8+}e_Z~jp{Z`1$f+v6WqU3XTH1MPiYucy`>1+L)6>WSjvgYzI zqkD=1ockc-*r{V?*Mg6fSaEoLN|1Y+WxJ=V=0zsCl6Cs8?<2V6uy^k+XT+i;;pSJT z0(0ka)SMA+Ywy1f_vvq7x$cX; z%Li4P-7vF67)pWols<=lE=p&fczn=wT~jGWK?7^CpBQ*gT zw8=UN%hbttfV{m#1-=`dgv7hz7hR_j|+>W*O=-_3xi+D%@d6J4ZV| z-~P1OB$t`P_`2=eZSJU5`G%ZH2mhHQU}m0bKe%oxm6w4L&7=dI>dD6u=3{&F}Mc&12d_n#KvO8kjh;^9CzhTnvkhrIizM#4Uswd z&m%!V9q~23y~nXY74%Q9HuBf1S5vtM1SnhfwiGZU7E=52%EdHEj`(sDhsbUI8A5vT zO^rHc6g&dRc* z9SL*7MkHC#7pBP;|{Sb+C2!i8qpb_Op+wHO5ovUVXGG?UekKybeA%u+t|I zydj`m5B>Dt!K@YMnXslse`mC*Rb7N@+Q()Ag(kPTu+*>NdkgOUQN* z4ei!Wfn!At)plaMU*3HEjCc^sBb)$}x|GOuhP+$c+Y=0WmGIi@k3Twyw+hjlN59C2 zq%$Wbgie?QFUJ)GPoG~W@D3Tk2oFUH{&Dv{T41#u^1Px`6>U3*k!DP>|Xp*Wu{@8$J(T*!y4kbkO zR}4jJjLS5jC`COK84wft?t?Pj8*?9Jfy0X65ile4S`U@e)(5Q_e~C1^(}BxB3NnS1 zfnU)eSD^x&%YZrAwqj1GIxAvX1;jEnZxHeC8Vnu7vkJtRDin#no`i7ia zLTxc`b>8)^k4Fn1S~_ACY^Ad#tRV2#BfJ|?;3N$S8or3+mg?6=KIi>o-$KSN{C_&< zufLpg_Czm<;>pf-7L-~SS_zBVEZIS3@%@*LnY}d^tO7^!WiC|ELL^L&X7xIeP=kXzFbW5jvauc8eO8QPYc*3FICP)PEiKGxOBa z6S`Mkd8jyp*(dpaVup*6p$%MESd9|Rix%*op~K8t=r942TrFCcu)4F5m@!iV%2~PFRiC?FRgzbWdhj&wJ80iWLT8Mbgq_} zRJMeLO9un&^ha4lbM3>n=R8Zn832IE~PA7+~8Atm8D`5oh_YL1OjjqM7g z=EmfsO^IkrZ2K-#pq-OHv!N<}INPx(c!R#p&;4$D*X+~F-4An3jFPh}n-^ 
z1%0e!s)trtGwmsysGyoEEY0VYs)2F7fxl_S#pk4Nj+Z3)I`%k5%9&Bb>2=W(Ky$pj z^q-HvnSkVNub%$jK5*g8(2*YNt=#u;y( zZY409UpU5|re|%ODrSuhr7FK_RBLD+TAqG+_Ge4%GUKw}YeDtp$eb&LnSrUmt|6nT z3|S2A<`{Lju@-IF+M{68^5H@kO-TN|=)pAsI4z!uoYx(yd1m3B`hu^fUOLW3|JleQ$u041OLCQc&p9+gZT^YIuV^V1lsccwqvfo1>~1%5 z0PhVuGoEqj~J3&_E(_u+kGD6K!P}?v<+n3Qi8$|OR z0H|jWPdBIa4ANwelmvk$^UpQKxK;*r)f5~$gw=h;@bNcIn>5Qzd!~niuglZq3Fh-9Ip%!t%hpv*&l}qv@h`V_oaWgkNs9>~#;)Ixny5l{fjf&7v}+4X8)IwV)WbJqs%sLqDUi635yqK8h+ z4k`HPn13-xxk&A3Fe3Z+`s5d1N;*g+%;f1xw*1K29vJxjQs&Q=U9zK6mW;tAPyZ*9 zKXPdXS?!HwJLUb`NhfJn3uE&dNg3T7t0EKwk)Ew2?CM6vNX*d^=C|pt9-xXULSrdK zpHFz$TKbe9zSw5q{`DX1Koqts++*w?84!eKt+~&Dq72v0Y}KGi*q6;Q*Wn6P2FGb^61 zo5-IHsKjv&%8x*0WOpBwKq0c+V$7+1y`wH;-e2RmmAZEz81>DCHJdO>(N(Oj?URT- zE}U!TBCS)c%gBdyBw*`6wC?bZLx)J5T<|z!o?Kn?J735T@ayUDL;BFB~{aL~f*m zoW6PqTI0y-ukm7n)jhZ76w@^rBOsw+8mle7YUHg85q$@!CDB|6?lf-@n{>C4qnvZK zRwJqcl|BKoM=1KDmU)3&>iVcK)8cz6VcO%Y8R?t)Ios2f!z5wb5ct-UKJ?jVgNn#V zyfyW_m9P1*MQi?iOL+Ly)ve2ebq!+k3S79bH6g46tqfO8YFHCcUg^}f0|~I&T!jqv zITrm=vCsH*?a|Zc(WcaM%0tP*f_%#D`H~hB^vUxsl)Wnn9)T(hZg<@$Zxj#Yc%CISuW3A5_VS@XJ7*G@Wfk{8&mGj`eXFr?*Zy`8_iMF&4eJ|w}A zR!jLD#1VY#c2N6Lv=fzW$h?nRi zmMiYqmcsw=yt*-MoiA|d0aCV|vo%8jQw=$R@_M&uZg#~tEDVuDQ%0b+sJgV?7qHMo z2^bBic4rDTOt3jsc#M2;xpvCWq0h?a@D~!egckW!{DzI&3qH0MJ;-gI{z@L%<-;*^ zrOHrXy^WRc*rvyoDavRN42GA={8Qb~sRZW~X|OA*tP#vPFb6sZ(Mq_7zpn{pn?1N_ z`a(@wT-$-AvNo~LO^45UtC7dauQ%x5=$$wD!Y?|~;2mS3kWr>TjOQKwS$c)!sN9o} z?C?95;YRkmNX5>S4r-RQxA)LLg$;1QdnmMwh8qPZ_UCM>oL9hF9?fDz$~NwyaC7lp z0DI_R@h$-;reDh>JS>L8!PPPBtIu+dY*__vbJE`R#k_eUDqCCVn52B!ulWhk zAQh|wT&lw*kS@;$n~+H8yl5e>&62T+OAz%rWv+9=rU+A>VeMl=Lw%V_RxUtndPaLS zi5mG9zho-!veo6#n(6Z0XKlrJ|FyV`TS(paI7!KgBymV%TB*yRUD|uufPZgdlTjL? z9qE_2Qx^U6E|<1)Y|K&=y(w(fF5gX3>UXS0%k_W8KFXlG$&L!kp@t9)@8CVHF>-o2 zMekhA$n$k%i6+C?#^+ao&`SMsMN%}kZDmzJC$()S%E^9x+iir$w&s6z|EL*PI?UtFJoa-}w0{ydTV*{*pJdh7Q zf00_!VNpV1xo*TGwJZa8VE2O>PR3rJpX+$Lk&(I+UN^^_3J(5W54$EoTGD|RU+F0e z7MD6RG9>vUE3;aAYlE+U;Aow`X>Gf{e+gc(;V`leEkSR{&zXAZVT=&m8ECoqRNw#x z>5?0F(mxv3l}A)SQ0G)fE2HCs716l*<>IN+C2lH1o)nmB79E2>H}t-6s=D*McI$W* zv1rI1ugiNw7;{wA%B?xz5mX zsqnLNkF%rn%)7^4h4XMRZ}&vB++vpw25nTwQ(KWoXYI549a_?6oItd0sSb)4JYu$> zNfqS#_1!kAj7cyLY(h8ey18xN{Q32&1qTBpl{_&;y^eNiYSv_48!pv?dAR$JT*+yN z2k}4sK3=hKKS3<}6k(IhwF%=Qn8vkBiBIlu-*Wz^h~6vaFP_Xk3w*+h?>*Uak`nkR z5w98B*L2>zj>3hOBSXb|paguYO29L^5eFc?Kv-b5hI*X*HhuA2dzU;%_4%#q_wp4j z#4VR-#M)#@7_aLsLs|h{(Ul9@P%g zEz2ZtCp2kZZku&P%0YcbuMDQ$CDMEJI^%-m^n8kwNPb6Sd1#6msAUCuTp=jrY&tFk z8ZaRDx&%2^D9SK(_M@o|qzs@GuI}}dKNNhLpl&!SEVDSSubE*W54S%Q9rDO&?CGuT z)^#PLnQ|zV8KRmL7q-1N*^64rE?t`b9lfq(EoHszF*$O-tKFM3530{PTyuw@1E_VN zv`cExnU;Kh1M3v5rjMDe7dBsSW54AHSDJfu>zCI&e1GX<^vv>*!~m>h*~Bsb)7t)1 zW~OI12L&oxIj06m2M6^>?uHGAY4mHYNowBhR0lSro!YP!cv)y z(U6@hKbGehUhdpdI_jTiv&$->qVK#xjUH4nu7$2NJ1(38We=J&G>9lr8I>;@B`SM4 zF`v_*9d2e}>LO%i%ejM*j#fjcU1D0Zx}IW*lBhnMEuY!MrP@(71+*E@;Sw>~FB7kE zPN{c|7%Zr(-?gv9_E-JZz*|}EG167$+Dhe{L3r==Tsp`589BheILi}QsZm>@gs_a5 ze_~J_1OwQie+=%U8j`O>u0Pplt3NRwIK!i;|2CQHZ~xPlQl*K#O)<;Jub3;dgV?F- zJF)G~nnQ%fWW9XVN=93dWiGNh%hW#EjdC(D7Ar|S%nULZuNtx`+LJ$Q94f?VO39BmyVJBg9EIlEf?aQBKEfal%LXQ6 zo3zL%W3z+Of%EY31U-UXogrD7HlK@nl&DtRg#Vhso3klF!>U*?fSoYUe{w8h(&Ehqgwk(yRUQ_0;JpF9o2qD=`y)^Cj5 zPL8h@j{hW$vXGe1)Dfr+7Bcy$pfMga!LA$g!1Vr^ERx;W{K;eLpUadaBj>C@c$)!F zeqwg}t`81iQk{4VI~37hzs8;wi=(nC%)3y=D?|H~SG+S?DjL@&j;4RtY)n4K8&)uz{ zgioY5a#%%(zPsu5El&kL@>j_&MOa#%SSTC>LBusr?W1s22y{Yt7o(?UOpi$YlXo{V zn#8nQ!ZLM%fV&+umsEzy5_Ih6XS1}qU8fwmwoG5R7 z$vf(ZFI00YV}#U^iq<}!-3iP1t|i(HtnTZ38Tn238{0|F)$$&*#6p^MNpR|L35j8i z{nU~dSN($IKe0TBh>Mi-GcgF{jNX86{$Y98C$+c-=mLHXD49SF=tPw 
z9Xz{u4uG8KCc~Yfr%h|DK?zi@4^1jWTl|!WR;~~`sK5EAo|x>B#BTB{88<>WB8V#0 z;t8G2%40XiaVIkJC>r}#9V4Wa>mqgGK9?$L@uyeslvlrQf?ng0*$*eu9?Oslv3n>` zEcz(H=LGdKHwCW4?a)s7@r_IT8$#3ad+&C!lWrP?kiDGQXD9MT|p-no<21c&;;aILExaxbC-m9 z?3a4@2kBYrjrg}Zp3n_;!zbqDXr9)A8&iXx;I^H-nqlcu>uVUqEmUX6yXcEZQtK%k zrp+TD5k`w!l_594GGU^M=5AL+KBcF_mDaf!@o-0^(^_T;T0ITy>EJ&v z=ku^H{(A(x|&3t1-4oN~E;0hQ@*DFlPp7M-O4{0Qrsfh)a`3E!l3+E_PCuK(x zGpmORPPEKZy^T{4G$hZ}1<#CAb6wXbdr2aV#sCRWD4Xr|b>D~05U<|p`CckGGqLS2 zK8(@Q=-+<$4Y}LV!elQDTG;Avtl7qMvudb|DtSupx489S0V3|L$~}tOQFT}9LB4|N z7+iPB_Hc}>1B77aL0LK^mg&v zEAU;c_SI&F-4ZZ2Cdb{+h&Lq28<@u>1$jLT(ut|nyS0mo^E)MXQ;YC9pA1J;&Q#LUugz|NEB3vVd!^Qa1DXR-S!BDLHP z=<4(L?t|8Ru*&ehveOgEg&tFBOxFZ<8VUdh1GoKFdXp4jpK%x)m{4*9pZ%-lZk+b* z{4D|o1W$w3{74?MXZx(6qNI&hAD}>RKynb3GW5gW^Jucnt9|G8mxm^jhTU@SCJyX= zU8r#m;ceO&i0!{EgFg~|Mk&Um^Qex3FhXT)Hlk(y|;Hc{HWPMK++BuetYa>nke zO_hF8b>^4Ox8;s1@<1(E?22C-OjjC>8RyjBqQK+lZ> z+7KJK8HXN2RlE~O3>4|DbxepS=MPEVp=l;P+f2Qt$vN{C zu{5WV4Km=mc2O^^LxuezIk`hVN?yt)3eS{cy!*EJT+PW?3{n8!Y3zQK0fyY&Z!db{d0noZ}5H2wK(%MtFkEi zAXc(%`t6J$YT$E|hg}gei9Tk_lX=XIkD;Q5G@PeGeK)Wm^zbk;_p(Rzg5zImKzw4AhXE#$xib#7u27YY4;i2> zB#1EQmkBIx){o+#@nGfIx#g_ozN~pxEBEsFhavHR0)o|fR$23Gx=p)XJ;Tzj6InMymRD% z=@Vl4=+&Njh*!Fj6T0DfvuSnrZAxdUb)!w%+Dpt52aVBcUvmW{DBkbqu4%NC7FvCn-Junl89=%P*)06uxOXO9`J=f>pD%}_nh-@MAYeX3Mo`>mOq5_ zsZ@qr%3X3tfE@JPs5f0G-EMgZ=jn}6qV_>X3Cff$71OmgfsBKmzUvM|Oc@1YS1hO* zQFEMLU+Q*--{;fTPjpxO5l;y3F7o-Zd1qbe>;f1AK3#;FlTN%zB+LzwYU#&*<<-$j zQD3Vznb-2)cOW%s>OWKD6PyjGL)c)4N`c>k6RsPKAK^2D*}Q#ZiNq% zGP9f?KtuV*w@Kj#_?>XeTZu;Buo^mMV~JlUyx>CjNDN9_674TZ{fvClgC9Qw4KhR- zwt9%Zd})ts1)DG-%$&BOE#Y+y?O|JvB>bhFk}C1SZ{)(1hys`sgiuZ^nYQoVC^qu7 z)8QAcr+pwMG2J9r^pLnhV0fuO*XPk1+)vzn{t{_bdS`X}K;46S{ zv zf*smyouj~Lr4(v>{%78hXT0a-5vUCF$T*j!EW={z&5pibSC{51ZD-Hwq$_?x6z zzt^R1{kkcKv4Z>*T2ub@%@r?H9crbA8wW>`E`2XE2H|KqAF>)a{`)?pGeEw9eAgyA zWC_e&<3Ll0^~}pi(63jPIO4D?dJjbJnCq7L5A&ybT|Zs9WS9opYA{w?nTHTR7C9K% zq5Bcw{@(06ywYYdJifI4>>+K-Dt$kH0E!&jHrG}l-&=IR^JZ3IvcCM{x$SkCY3=NN^ z4s}AIT#jFK9e}F>KhA*sxK|ru3-IHqj^ge0SO~u~?4hIY@I^27H0l_Te#9ILbdN;h zyXB>R#N+kI=)AAw=%6&wEU_S2&G~occB39%&-Z}#20@}p0V6B7ZpCgJ|3WSlUd^xW}mdpZ}-k&zs~qXMdD)2K4Ec< zxLaz;w@PwMo9@L8Y$>1*A-3%aUqK>Wo#lk-SZxekmu&|5#kzpY`b3Mw9nrHAe9HRd z_#`fIlv0VYnA5}!RHtFl0!Fft^mZ4sTH$MQ({2^iW6)oAd5W>uf1OG~*_(rULK*@W zPcQm*swyLTV7t$A(mp(w4If6@ZX)S8+El28)OA`C*XS+wf){Stn>TDA)B?G)9=@}S z!zMZxG|Sv-i!O7qu~bSiUBWve*P`+Jlq^clWY(3~oI0UY{=mt}n%A~ZvV#kTr)R#J z4>gjFWe=H=YAoV}$x83k9_oyKx|{gDVwjYV)Nd_7=z;Lhu}ovWmAu@;ocVj(ddC%1 zt2xZ>!=OY)!RQu_B+w0j%+)z`ey-MZo66MI1{f#5^(~VZuVs?E%Z4q|CHTDc&C^ft z9h+^0kL%e-3Q;ASdgElUh&Q;SOEI%cW(TA~2V@{9o!fCZe?i<1Jf?yg=R3>m=Pbtu z$6JTA42f+EXp20AvKjq>K+E~v(HXbQ+n&E{ZPjD@rr5K7V5&5|Mn3!jFllV0eX=ez z0OAH^nf;M(ckR`;%KIyHIn5)!_-%W#e%Oe7#=U7LI8;MComF5n{Ql<&|59&Ee`0^j zE=hlzpuzXmk{Oy+GHFi`l)(lrxT;ruwUNdRt`$j6T(ujMklm0`*)Nrf-dSVz<3)@u zkM;MIgYG$+sjv^d648$#Eri3fEXyj#pHk0D=_ch`L{(7PH4-GgydGsb|5t@j^gzt5 zUT-Ojs4-n5a8Z@`vHjm#hc#b>jy zKomf%%u7u>!2}SGmThWX4j6Y!cyBl4-~WYwdz3#n(|Rhw*162;eS5GKY$~p898OKN z`zwdlv#5|E&3E!ZrR9$gj>fJ_ah`4V4*%tnlanvRfpHWqSbsM#LnC%LHD6FV6nz9$ z`?UYr-S9!3DQA&tDlP+{zl1K56t-D$sg`|;4(#{fKMC*!Recb=9-$CIbD6mMMLU61 z{&}_^g_bLF$=#l9V;iR&wwR7X2qo;cHxoZbIBSot-(*3?Io%@+drZCErvYN+6~J)Q zh&R1xo2vlC(^g%}w0x6z&0&nU$5YN=o1#>47hHX}sj^-E+u;b~xOgf0V+$tLNb)hu zayk!@Hk*1Y&1~e>rPiO{Q&ERM$-C68I4#L+*=xq2*^>^H_|QB25@EQ+!u|Gk#dkID zb8W9JGeGBx-17D+H?x z(nj>sZNnxvx)$P3$Im1(Nu;5fdcRY-8+@5zl1XL59)B)>@ZO-h(^z*lC*bX zJLrZ16?Y!?1oVcq*@k;OuP?_f)gbxzM6Z}Ad#FVRz>?A?Aw>yXI}&(pU`0L$H@{Az z0ob=&F;sW$7V-P^5U>UnC5VmZH<@q*h@dyGaUeC@sv>&k8{3O8${RSPqf1# z`h;Ph68yoNvNmY<#ku zk=d#eKMD)Mwq22WAVG2z_~;dW1ZSQb!C=8}-wgl#s@0q;;5 
z@$ow=K9uj>71w%j?JRY}Y7f+Tyz@{I2;lRx*9=A1FXRS186o(+5R>hfu#h7%ZKuETVFoo> z-Hg1RKAt0fXLVXfRLv^#5ox4alAA?Ji_W*+Pw#AB--}f(Dtgq+T4?5mX3zx#TO-UOO}~A4zhXM(XF}!jI0QiM-xFg*2ff z;LmY0ur!5v-=pTJopmDWgyqN=erEIQ%aI#CuMUOWUzO5d#5h?Uyl<)}2qzS6rJ-aLXx|3X#JbMb2kKRP+VZl*MrjynvF9344U;$cn@jab{*h}%=(-F(DN;ZVJoH!KFbL~&v6iR5n|FwVtPI$oIL~9 zS9n6AY2uO#!Hwk><48UE$1|_Gdb;{9ty>vCSPfd-slP5?vkKWhzdBsjeK@?fK@390 z1G=YvWQ>Jds2WYnQkY7WAMUSooRc6N;9-4GDT&1Dl1Sd30B>xGUY&R-0RT6G| z#$)3OA$+0Hq*=}m0LEs|Us0Ssz8Y(7x4U`05E`7KF~B5eB;IEK;a?^m`yTjj<^rBQ zQnJvSkQ($Q4R3S##;VS&g59N8VNGrcq1di=Xp`AW*_8(MPVc0wiGBAN@R9EkkdFi{ z5-=yHrS7|N1*m;o$8P^kV4&PgLtfw@z*4}chWWwD$jfg|9azLNZo9CG9PJ?(_UGZA zU{KNR3cTY^J!81mva}vpo}4d!&mLj$-f-3C39$9~DpP<>cK{tZf9MKT4Vy6QU!+S4815o+ZS^BQF5=*0sysdmf@te~Dt_ON`be!7 z+`d4$4WPHo8of)Pg&~EPkwEUEPC?TlwU=K5sauKZ_bja^g4&D~sLPB*oWFeP`lnaD z?QPyp0kXh?9>qB0;AYs9oRvfovrIvVSC4>SI$HZRt%}SPV4%!mwf3nO9Tr zE2l@>hOB@Evb#VA6qB!^B^U^B;r8lvilAqP%nr;CEO4r44@3-HFk{#$rs9wruCFys zklrBKehPKOAPs!Q7R|19ASwQkq~!A3uWDdbgm#MWAc&j0x)0_wY4I-EPEXwzHEe)5 z@BvJev*8KQxC2-z!u;GYSFT=Y^6eL#97p``7JL@69on)@tHSNd6DcQE>K>Xupg(V5 z8kHI*ZD(XZWX|%iO6F3Rig0->3+F~n_Sd{1h+}))G8Pl%^rAVl z)52*8EXAt$O%xmy?m(3bVCuHUoL~99N;hRmsP zmDHqeHb7>iO47pRna_B)>Uc}hTZd|5Sa)3PB+=JCSyH=^iOFmo4Sa)))p(tiA4r;2 zEh=>CNpR?&@BACEYm$uzR#?FU$3}CjeVW>>&8|_DhBHYgQyJDsc(Vb!yZ@*0CqoX1 zuVxd}Sq5@E4(46ck4j89@xVeiL{FEORpyAo@59m{keQ%WBd?WAH2Tpw9@uz>cK{f? zspy^38}D?kDyt&Ss#a;g3Q6&;4&Dl!MkRZG1xyvMID<7@Bjxw9B1qcVDhJyNRIUW_ zI{!XL|B*;Jv<0ps_*`N|dqMep5s^SU#uY7RYLCIms6MnUMoJ=$$=*&VB}U&XBjnYd zi&7mU@{B%xYE{=%5gdqDecCVW?F0>hKeTtW$woS46yRZUHgvM!yqrM&scwAe2RZ@8*cp4grA&_ElM zQV{*jtEWM9J5vgcsFqH!hkKyCKn6B_VENjIHCd0QJX)Q5y2*MOyD`u@S^!U5Zw;9= z=^H#@jc57+o^*%kZu`XN#$u=6Xse|{X?yJ^j9{xZcC=~Q2Vo`8xFzs4-u&#F7_|yc z?mFSR_bt!9x}VjWqUI^YAF9rn(hDubgRFMO&@0t7%Vuby%LFP4tvN1^HLC>gZ0u~C z-Ow+rd7mCWnOp!ef#E*y#{56~*S{Gpzn~2t3^tD~Og%YTG74v-W>IvNct26xL0QWtb9*YdIgSL$#0cPabTV(te zcW9`!tWX;mX9$_3k-h1-GOBVD*{PH9u$yd=x2L(Z*~9CYYYie&?(;+0z{bOv#sWM* zoXxmHjW-BT3k?P@+X7(hzpe{|+LVT?ZA!2KOl+y@I0RLdc7m=t0b1$fSF=QXJRg##7j-j1tM1rXfUBU9NzjPe zeu~}LguP88|&9CxiTTjy#apEs-eg;6kjE5Q`l zIxtJ@$KG$kjt3zsu;W3^aj=1G=bG#tg_61N@4RHU3V|bB{}GJn)*H~=JJ*XedUMn{tB0;A z37~`{7(}CZU2tk+_{j*sld;~Gxm(2!u-IYa z-Ty)mu=SP&BDFwfC$0^%sp<%g9JgEKy17_LqqnDN17R*9qs! z*E>Q7ge9X>D>%~-jRgR+++Uf_%%mK2(0C_JlZNn}+j5ydpqcyS^C#*D@mu!)=Fb`P z6ORKSSRLWSylyH$H0IK-UJdAl#=x&31l*Q zZ_)Q07KNz56)?4jQLhoYvsK1X-5P-gJ6Y5OzUE3q-!?70keK!@0Mz6uE+Pm!$-SQJ zxj4m#oy$GZhmq7jn6}Nk>oaLgQ;|Ir_&fWfu{})5FM}b$Yk#=l)7hdFdyK-)SG%S3 zsa>GEtTUuLs;Z-1bOwwgf^LM# z^YJTCUdQ}};5wQ- z3V+!pE2*cR6x6yeYtnP_!UZd4x}?hIHy?UQ&|-=8R2`tvr=Q>&Z>7x!o_zEz;OWp} z;Z#c<7+v%Qpn8gNYWzP?{}NjL_QMx_AM$kWt-|z3-&97L(udDsJ3T0@TK?Ne!Co~e zDuH&lfE9pUUi7Kj=vCb=plRu@pB*wCuO>N@$9Yjy8ohVeV0y|Rhn3}|ZV0jMR{T^;F*uy32Ww%VjOheL z+9u&j_npFha_(2OeBhQK2R_F9Rz4NI;%*-RgR=}-Eq4AWv;}lC z9YRXSi02A{;a)?)lm6B*eiHPif$YFe(bRuLZ)8OWewtmqn2PM?Q1(V+{IfIm$speJKG)C~9O|*a{1zKb>X`}#%Y{zPRxdQDFQ2uW_aqGs zPmgXa(~^FEaF&xIILmn(O3k6QEXLll4-SlJ% zfR%yUe0{E?VVkLeF#}1bKk)QA?+;*QzD811Yfuga^v*XGRik? 
zsV4VZ--#yinJ&H~Z|*_h3jVpU)Lw!A^&2fvb-Dp$Lz;U2xeY(Y1U=`CZ+(6dZ2qYm zzwt8f|Ngxfp-JWxomzF^oGC)J^X*XHx{#Kx-nev>y8HI^UnYqk9Pt%c9diw;Yk)l} zDT+{83>q-$Y4isRJQ!ea`RL z4)%kcxda~S47Lxo2TsKXU-qi!keVYb_ppHFXn9jrxhRx(;nmP)w7aa$c|0Qglf z@KXzN`+Zhw%nwPP>4CHJ1ZMJ3-!0kO=v%-#TyvIs3P#D8Dt~h$jw?}@1IKg(g4gAwO{tOY98Zx{o?1Wtz2@ou?fsw;EP}W^~GYa@A5}drRQku zh3k{CjViiJjbJH~=q88QPf09B)r+Cr<-?cNM|T(9 zpg^P+wh(P$QW?7|NW+|S`_8kVEl4GpbKDO8*PU+b=ZQuI?&IL-1WR(jwkz-$6aMKy z4wKE>--17aF@|4%yb<(&fBZj!tV+oL3B&)7+QHs*d2s8~j&*%%`KO?@1q^?%?VRy1 zyT%dl;d@gNQ<=kfrxzF#`LCaNwcA?%IzSxh(0#xHNY8&65)OI<*Ae`P1ACz!*m21* z0qk`U*l`Oiu?+}-o(r4adA-}l9c*Ca;Ftr|9N=id|8;`d6pjf9?0>ZO3$(?Oqx1D| z8v?#w9=^I~^8fvRBn#vpbOb9TImZ5FT{(IkL1A24Q+rRo=mw5#<$v2X)$?HGCD>NU zQ3%=tU?(CtTeb|a<(JDINw$D3lN`5|L51$O;pC8dV9efs$;2!&i#W4g_ijn`k{>3AMBuB(yG&7oMTur{FWES zs@(;l&7l7}OtvaW1axA8KgVx(=MTPFMELPShsjzbZp+2yFd3xQve_!aUcL<+z1#jp z#SVcj;^*zupaL|scr7PHl{nb(HU(5<-SDbFi<>syRIv399kdBB=Y1v>Mw^aRq`M|YUQ@M)I&_=BPhpsn|hkAYg#|J5qOd%p$ zol~eRWhup2Duv3fvL(qDvS(yQ_R4lD`xdfi%Q6^DA!OglSPrs|A7N?rV8o&+EGP>ic1^E5Df`uuVh%G&xag1AYHCSj5S@bRe+ojQ=`R zZ%?Q!6-vcIzue*39N^Nw1L6$WmphW7h^Us0q4*iKn|T^!*X=ax%Erfn0Zp01*Kz2N1+>+ZgrUsDtk5gl|~D(N&P8 z$v|LH(D)`8%OqtNZvq=+``;VX272pHKs^_rPVi7Wn<>ca2C?Ap zzpQeEc!)D`v6=$ZZOg?4)X(g;)y4jo?U()#-lBW=voZ2g^h0y3k%c=38!Ltx{#qB{ z$HYSA|2VMU0)F&}(COzN|6mdt7r1#T`~y5-UGmzceb@J25B80@A;`~AD=cVPr*!hV zvdp_9C%k!c_)RUui(V|25Yn(FeS;LrU-mzYDQRwc#D&3F@Wpw<{IHY)o8BIa z&011dA?Q7{qPs*GsCh+xAtA>$Uk)xvL0ZSa-5sSK+z|V~9Fy1Jnbk|0t#b2(1^H+{ zq9%U^e{*^6yGnb>T({xf!4+^aS~rsmli-kmbF-~YE}6U*=Vl1fceSy{P3bqf{CrwnTwH%r$<7kK1`0UX*|c;Uv%@50M5M^z!4dG!L`lWSB`aSp zg9lIg4|rSk@Zt|I;Iu?p@ti`zXm8=&3z#48e@En zT^Vu%1o?rmy%Y7k0=s5W`*VmArm*Ca?ugRVTT+f_7T?x>Qf_LxG1m`X`!RagO0>+& zyVJft4W6?&rI(X?PbbdfmIc>0@OBs&DG0h3Yjc~cCeUCiqcVEtHR=s=Z0x1+*+xvd zZbQ#?D5EMks@D^~hyd{XdVYR&x;azo$OR0_-NDw`ywrGm^W6a-N>4m^dmSy^5>whO znYp^%Kf1Z{jna#>H+|5*pq@q=R@kB(ZZ%$cV@jPGuO6b6Bw*!TD)WksdhKRPm3j*O z*@{4veRUyxa0=z^hJ=0(Qi3vP%3$284rbg!$7)69RZr76~tyOz|~Ci53V zyZL-^iL~D8`3Zya>6??vDQ11|0uya}Gca8i>^J>*Q23B!4n4o@_N}-YE`J{4ej0QO zr_HZ1`KIWc3P9enL|JAj>;9mj{GWyD=vICk%^dHD%6YwsiCp1O9L?5orbwR&vWpjZ ztS9Ejyon1q)xB-^LRoHXy*q3y zaQ1x%rCGq_#V68ILxW)wnuZi2bhjUSu$c6sn)F3wt#tHmzg=uqtAYUR*as5!|E6Bd9a9cHO-`e3e0_BmBfyoo4n0jiNiT>F zlEiFx6qt)W``j=YaFWIDVsO`0MI^O6lWx2OS=*w*lPBWual!kupr2Cb3@M?ET+tpt z{dP_?5BY1X!EjSKQC{G++Gw^x>VoR_3LAckMNoOuSkFpM+H+WmLJ3~aw8SEDnHe|# zs*TdDTAs@{>~1RlEO@OG^~~t_)X=jMTl!rEcfPR#Gn_( znIeV*S?($;y+K&MF`gn%D06dt9uR6W@}DYjq+nKh%ADj)w)mCuqBLis1GF~(F3FO_ zY0|;}G;Yi;bieKTB+YpL!xv0un!-GZ)=mookW!j@@Rz7t1d-)F^3$f9ythkce=N#f zhqt0Mm_yMjT8soQ6uN$Vt|u!BuXITM5o}-Yo`st7dC{RKN$1aUeyHHlf+hO77&au~ z?`oYN{O3yhL$Z_L^|`!*VWK@O8QnBJu>nWAzkE(%*tqVnVx>+?IlvVw2j6k`J)zk< z9FQ>alMp^?%wj`}0?*lrJLlfy#5FCiX@qHo>D8W~yZL6yp=^zMnfq71v0^8ubeE>@6ynR0D|C~`8kMCp;q}Bb z`DZk&%0Azr7;IFbSJD05XL@WcxzpGNU)1sk@rEQS#y72xy~e42`-wCP9bbWQnwBR@ zSgZ$>(*mq(9AL9M_pT3^wh~-_kS{i}Biu;ly|B6UrP9sSNK4|%z7KUnn3qzLsa1jk zE_xb@$=U-S@XJ$5h4}TV+sK4sl%^2)z9eo);t}d*xAnFgA8}M(fcEQg?I+0mqSonP zohhB{LF__@UMs)*On6b+|4R3=nllXOyu{+^>qllCqIsO?K2HB~K=;SS52Na6CZ`V1 zX<=8c$etgRNclwS$MKzS}fOZ?avH{C&ApYmp0+hj*wM%a@$%F_E`eshBNllM#ES9~`-yo4iV>Y~ z7Np&7x7dr<2WoM$XO=l7ziw*tJvvzLe@v7V-`UQrwl(%;EBvvhP;e8;&5@$xaAG)e zja!a5DvtJSWjBvv%b9YYf6x>77qhoj=gh&nS3qedo51H-&GOl^oOCG;_p1u~61lEl zV`BLD27cNkPnDRFbpEkV(1oagCwPh8txWBl+=BMw-k&e81uCM2rT>Vi;ug38bpIi; z8`o36DW&iDdN!k{Kj7x!Lcz$&GNIb1A~%=%iKS?qON%{1CWuGhnn|vTW{~S(q(uim z&m;fx(vg}EOfGzx&Ktm=Z7Dgax8=0*^?lmQAo%{9;A!zn;H{c?%ts+w{ooej{@Tz* zKiAB~2XlG0aMC4BtU~%S7Kq9Xksv1jBj~MeTO!`hyVIICBCgSdEO$w>V8 z6~AaTj4O6gQ+S-5Md`(PtPNXlldIdR{)lQhb8)`wXHM_;DQiy?SP%;&#Ncl4Gcd@o 
zl#$Ix*g~80{>HU|o@mp8>FZiYYGAV`{a(GJTnL;}EEL*DLaLAHdLf%BWDf?-3$d5f z&0+XAjyA=U?au7XuLJaWDA^dd6q%*Xpa7E<8(wM&w&AxaGhFl*01`KE8l_tI#QrEfg+IlJs#d+ zh5ePhE6~~*D+berQwNLtU+XTZulSTs`-&V<<`y*elUVAELP}4+no=nI%0Rj<@-cXP+B6K3>I+PI+UCA;>S?5UHgHHO|ToJpUKk?NVB)s0!uTT4qC ze=AxL*u)k3w5=?XRAGM>pi(^4s)XjhUD6A+kv-ANYy0~YyTpPg@`SSHfw!^M1$RMG z^B;_s)gjB zN@ahxGnM3^x}hwkDO2%x-?d8`aKlsS4ku1(BijT``whFp zT>fm+%Ux}A;$LmUqU(_M%Lci?`^I8Eg!z4_*wy3P<(0cqSpJOe=-yTSm<=?#sk6@j zp-wmFvmbo2w|~^$&c4NDsC1O5leeqkpMAfjb#!+j_3d6+6yOD19HZ~`6sdzwa8v;Y zV@{Wmb~j=#f;Q%aG|s#~Md1aA_7p3vR`-nqZ(KKv%MYot^N}vgAg37&&cPV95jBWk zSJKGGXP6Li#-#c|^*gWM3hrF*i>Q*>^J~P7kp+!RMUsI4`o-3kr{i}8RoEm@;T_@? zAWr0(m=N`1X&sRX<{?unX?;m}3x}{QseOFtnU|{_z&vCHFJ^7YtBvXrcnxNTdMgFV z40;o8~bS0`R+p> z1lx>V@_Gbr-)^%PZ&WpQM{Os*8CBGoHMD|MHWw}=olVu=7;|UF_b&=Oji}sQz1*8` zJT!t6K+6;R&toW>{-L+hIxHGOWQ~_Exu7e~*DFqe=Rb4j9S~ojFBnw5_9(_$QkI8P;?UVMD2oCw}bWkBGMfHmD&3vw3)!xigcBHm|`mKeohJr!{Vt@FQnJ`($yKd*$5br(-NqvlB+ktsRH! zr!Yik)(Rx1hI&&CO5E2MzYteC%>i_aK{qLQ}CQ` z{rWHz1ApM@{vqPMgu9jDRj&CRx5%wFitND%oQ1vpk3iq7O6k2aXmTdqDB^GuCo*f& zK1t15Y9DpDYlbN*k#kZq8$R{rB|~n)6?YSD-_K=b(LZn~6EW;vgN85xOwN-R%iqt$ z5CJ=KEYQvM;VmY!`i79~)N+ld;S!C=DJswMiDo&HJQxU#-flvtYYYSuuqNB9C4&g+Y`V3-TSQ0jqDZum3@P@-`?kRjTI@2 zmkbegL|Rz}6WP-1ZgAEJzsNMYlyAgZb3oul$Ac0Zy(go-OdiS&pJJn<3v!+aXBOhi zyCfNW6`3w7kY>-fEBxxHih-MTL~zv%XbZ1I;jTvgs7`t<1iH}m(ynSbd~$7<_NMO; zCp~ti9QTw|m~)c9IseN{6d`yJJkiW{BN<-LYMoB$k|{O1%jqsa4UnL1GMmlQT;mMw zr<0Bw$a~C`>w6&y-Rl&BS?%7o65n2q z5^p!o*qOOJxj8BEND2RTA<$&enP=yvwP_4e3Vs z4f#hdo+j1<61li|c>O}R{yBNGnQ^9%Z*JTbyyGWvQ!TetdHNI3hyAaz{=Ju)JU*c} z5~WnQvEqyuN~gaUsBjOPCITA#nWAb_KM8IQ^~zm2?~k*O47rA5x#~>9 z!zzYeoM#rE{|v!w;{uhMOzg4fKcy;TSe%x9_zj1Zxp@z55Hfz5-Clvsk_=?KRsiGa z{eN`JeP=UmW1wI2Id$@zF61{yVZE65P`dk@-_bg3W{JqTEVp)cDq+!+Tc{<%z~o2w z64Zn=reV$?13u-?Wn$&9xL~-JwM|%Z9c*)K_m_9$(siBtI?;_B4BU<$E}Y?ZMGRXt z%Bx591FENY?52b1*P+Bh7Ak#&M%TDtHgBv-IL;!XquqvQNb5>Rz9yo@(_bi%M6G>D^o73Kedwh zH20^UBO%l1Zp0+G_7WBzYmzP6jneXKA=gWWE2jAsQ)HfwY<@cZs2Ex~uZ?iWGt613 zrKobdp497@jpWy*&X}An_>J1&a{O?(Ihn>evaWPXc&5{dYi7VAPc^KxNMJy@;YwTA z0ei1sd9a&D_uQ+utnZu`5u0`0F@(C4x|1~}o9+A?9+Zl!`9qszawU;S{kQ|8S`FR0 z=yy>1S=dRx!jNCgbmw@P2kdmo>zwdao)?)t>~(W8`(kr7H9Hm^GnE@s@bkz7Q4S{3 z=5vnvANWX0nR32ue#3-;$xYdtKmyWijf@G2awMnIQr!CGzf@FSif}LG=!i#5`bn75 zmM3FK+qAV*t~E=BShI+lJOBi&iC4p5T^|wgb3P#2ZO}svq)61U$tjx!Au|9#$aJBf zp`O@$*kNikkeo#oxm2j~rhJ)!FLyEEa`$Gkxwhg2d2+#&@@y@~qST=ElA6k$8B#x- zgA;aZ4;#=<7P5A&wh4ORnS0MK+o+6xQD5B{LFx+cIduDl$!j5vMqy=c=>R`sXlwz`kUYQ-fiBMQgKaUwzv7b6D;dC#uO?J%yW1mvqXJQ7&=HN97O8UW=&` zK(C)4jsl`^#x93CKvZcEN+#)67*o0wh!+g1uHy2zLPT9m$r~l|v8!dApV!;-<*J-MbhtBZkKW{h#H49m=@m}Z+R3~EsPL5cc~zcSX?aP=|Jb8j zo+j^)2x%yIw%8P4l>CX`#$@ZT<^^GRwu7r^?A0Fj+5lMTf!D*TRs$DsvVozk3aVrU z<*}gAoNHs@}yZ)tccG%4}z(*uu|lZ?F=Z73>R@LcCa?dRru+mrT{y zh7Nh6xRs+j9Zg3%jQ*54$Ni)n{si(HwzmTjTL@zKwe2}e|MRiz_4XK`*!Roj^{AwG zFPptr&S%wWRo&{RS5)rn;lY~oZQuVIsC*cGPr~c(%kBd)Uuq+eQLFgcYXkynsKau& ziC;&YlTg4I%l5?FJduS{&}>gzi0m6I%ib}g>t%&l$k?FY;39Bs12*dc6He9R2QoooDTE(--CKE;^Yqj~nXit*};tfowh zhD+RT%nNw#Dami^%3_vDo2%e;=+t~H3xx`D{|OcTV+Wz=Y!IC@7g7R(vs22^FUZlx z6V;>m>+<|3JP(Q8+c~qRY=(i$w9$5`uS3!#!ZR&2R2L-?PDq1 z_(=e4`)TF#Vylt(GP@hnXM%J7@n3PzrC}eF^$nBX*t6cU5U)2suN?igw7>(xN`)yLzr6Y@_HnC;TR&m5R$-o zuT>oyN?FL~JZY+2u=- z0Fmw=srgydFwZk-=Ua(k zA*0rN;Uq^``zP}WYXJpeSi_qi8oJc|kw%$grt-It2eswoRgqVd&5P2elcUVAE*cAE zDil)EI~20UuaB>1#g^f;R$~ug1^hM7G3Ds7Zlgacs;Pw0&nn1}6ey}$saa3ctYjHU$u6Fts0QV@6(BnY#nUC_9@P6@lum$&bp0pFQJl^~ zs^S5#v`(tWFP7kL=X4&K_t4V!lh~M-q=YQXJpI+L$95#iYY50w)eU8N-AJIs0vNjs zr$}Y9X_x?{w+k~r$H1jz%2L!DuLWJ2^CMIBUhQeWgZ@%gSEo#7T<6GJQ~s)u@BL!) 
[... base85-encoded GIT binary patch data omitted ...]

literal 0
HcmV?d00001

diff --git a/docs/specs/prover/circuits/img/image.png b/docs/specs/prover/circuits/img/image.png
new file mode 100644
index 0000000000000000000000000000000000000000..cb6f0be17dec7662434e17c0c2cad0c0ef27f0f6
GIT binary patch
literal 227383

[... base85-encoded binary image data omitted ...]
zZ?`Xi9L9W*2rtXbfGJE78Ixl8_)X7#iRgnVz$ z<5hn|p=e@m}&5E~PR$=dB9Y9%9)|%~qcG=u(?4Z=Gp(;pZ zV_UdSl|Hmk*g31RJxke*+*}@_EVR1VGFkJ8nA#lHYpBbo0lt)x|FooB4VP?|pS(*)PU>(8`L}Yzv0JsR|615nDR-n_~0H|vlTLC#w=0?p3 zKq2F2;#}(Wgg*sOlw>x*7YmR!U^~(DKNL^(=qSv99VMH!O96atzAn3f-p@GT1l~V& z`@po|4@i>&_GcoPj3z)gPS=6(Z=@AiU#5%x{{DU+zkhxHp|k;I>W3ln0z(T52FHWO zW`Sa{iWJ5n?<{-oD*alUd=Ea$Bm0`xIyZjK;D2;o`@{KlLnN?Fw;OeZV!jrMvMWln zkM~`@ObWAkwUs~;U0%c-$}|>o%LX-`g%mMTcEaURT*D5A*F&)mmQ<3l7~+uhRl^*cAzYi@eKO+?v& z&b6o{p&70AS1h%~>i70>*L!2hSUyX2kU(iC%NtGZNrKjMywi~4OBp}X1=;-Yb15Z( zJQ*`eK{K9nsR2%{TzorFfjl|X8R|anAXaEp%9CKAZdx+fAYikPejYs#B6+4uhoJ;$ zA)n2|Bnd85ZN9kAL0;GIGd_SugPe$n-S|j255t!Fwe1$Xg0Qlypt8aNz1cJ^`fls# zpsy>ZmY=(^$!sGRXKd%e0PZuXpfBcs6q#_b7(eWm7)$^&ev;C`O3G>={pL^vjPx^| z1pl!2uv8hU6nHy^V+qRNu+RJTYj4TOb$-EPDqL6k@#p4_-Yha{>QsQHuyz9r?I4?W zUmINm6*MCCOWwI0e=cN#AH3@N<*zDk<3<;AN={uL>Tz7422nI|{* z9UKZ~!j9wnzni6L=DwcH?YB$9AC!t2V}Fy8`rrUWHq#unLf2tj90o0r;AyB!e!6R* z)Og8Os`b`!j)O~w@oMj1eUWb2bN5EqDZ4>XVhI8jbwaee03P0qDw)v$X_h6|n4 zo!ZfKVm^;%7O@OpxG%SMGr;hGe1I<7ELJMm*Mf|1J%aIo?)>~rkR&E94wJiZ-3oziA z>|+X!%wgXaFmm);pma`IdgK@ZtNou()ZZ(oVE^%}b`<|y=yu?bY5CA!P0Jb6)DVUU`%Fq~$$4)# ziGfOt;N(Ope;CJ&V}6VEZu`{YB`gZ}dwz*>8=~L-Dw!2eR{@sD)}AR=vPtOk!=$l)xj2iFk1~ zE07RGZ;vp^p5b3HqEyq8hx|(FxB`tOV;-~B4&hFjhwzj(*YNbFBw6q!eVRGEt1Vmt zks!-HScI0sG&!y@lxMs+v(jEYiq4xL%*H!@wGtat8fm`k+<5W%od_YR#QC}C}oPFEGVfrBkq)s8*BHucSn4OMRWut@G?lD#? zMzmocQkfLerWJt^()*e!Fb?dWV9|IJAG$dueF@>^XjZyc9yjOY#C_^ z>|1eh@VEYL2aL^1E_9Q$c6MFmY09Z6AuwN9V`ar5CDPsxm2xic!2%LZ%%B?oeIH zsmpCtwWizE$-%rbr(5RbuNDeAMukLg_SDG_P4n@aO`#M~3E6_rQNDUBz@g(+WPxhv^ z+FhX=7tg(j__jCIr(AsOdeP+l+3*>B7KpmpfmFbpu~xGi)Eyliw+mu@>oiKnZ8RTx z^lIQ*jg-}3il)czyq$BMGud_mb1W)0NfSPkKdnJ2?yU{ijO7sHRCilCTXE9)Xk(FL z6`!Cz?^q6m>HZAt*(E;CxWdH4_9MTxia`Z~p*xX*Eq!}S%gem^EH&K_<#GuImmJUR zSF)1Q?2zG_+QfXyAR}6H&!CkUTtIe_RnbxdN5bL+!bZL;uxgh=b*`j`QLdwDj5gsQ z--!TW6|SXx5rCbpj$|i;OU{{@J|S-*P@cVEBGq!U=-$2@pV;^Hwf41YTb(RNb+r~9 zPIc>7it*bEtX&yY&yHzh3NE>BAG!D}`5a87rp(8=WX^lxc%4veTxV;!YK6opt9BBX zID~zOi3F!SRV$96Mu)j9 zxrvSVZG=^gT_@`}_bJvzOGS)aD5j_>LbC$qlV>p_!P%nU&KHl|#W7 zB-<)iPK-LAoqOq{=I*ET?sT1ce!fP=x3{P4FSF4#%+dC&mMj?eDC1UxJO$d`6Ve*s z`pJ3>Ejo(o^b_@*c;aAH{|reAxnitHCk2kkA8_6*(FAP2<1x~BCSS$KQ|(Lxg&v*^v% zE)rB26q}kiw$ky{+_REcYdp(NlYN&O2i=Q;Qp%gN8w{wcFW#SxF9=14wD595^iq@SjG ze12&XjQ4>`gFH>layW%RM(5*fu4jilKRK-H6TH4W-;gw=cA>f_m#{eVZ12(7BXZv% z%=Z))J|9_NHe_u7q>q-XbFM08ee^<6-W4%!g=E^hvll)z5+mfFLPn=h8gvl`-)Mt0 zpkq#@JtY^PZq1rL0QPuMckv|~RJ|24mu|YaTPuFCmm!p9$;UP$B%XFkxFyg?$Jn`vcMmHUgyYbjB>+$L(h3H|z%z zPnLpYV1^C~*d*~KdwYEC`=lPr2uE0@oVlE8WPjmviFKM=_9ak+Nd!~V3r6jED(~o; zna{nls>KA|l`oqs9aBuVMp;#@qwL9d&kPk`uv^hO zcKVUerMzU7)^vg4na48C5hiOyOiSt>h9X5a6|4m`Q?Hi}UJ_W7d0xabCzM3+)9YjG1s#K}q}y`bTLCN^Ic?>7bq?7Zyv^sP>AKKc#V zQ?ZW-ZA1JaUCK_bJNPlJ*G27-H!`nx38q;6Fw7A8@lq^@(@UpTf~c_$2@V1fbpu8jB!kXtU2;s9U)*W0>1+%dbAZbPK(hc@cNDO z`>q$(Jf88C+rdK)fNSdCFqL(uJD`542h1o=Zd6#zOT^x(xQ!l35sA=XHQL_EXqgY9 z+5(^V;f^zNVf^C^KER0B7klbp4($WKzQeoT`I=!r{)fRjhq``>>g8)<^8Vhnj&Ef#ld4Uui&>#gMIz~V3BFShE(ywitco%;mgSDge_4X+)bCTBH#c82_9K)8R zR#7^n+v!Fkh3*kOLYBCU*%NWoo;_zvYep>gJ2ufx_dV!*fbpUgoQel4cLg5;X}jfS zrDEJ7!-}#LFl;j`7k4_f?#gEtpf$0P+rQieYh&uv^T3;{97<0yQhreD0NWDlt0c(Ut{tHJ-DUw_YB{zf+Q8`tiCE^{&7 zB&D`Y@ssc^D`CXGxH5IUNvG-%M7@->?oC|b3F3M7cGEtQtuPM)bZP3p_#bW z+@p1piiz>83DxjOcMX*GS9X@SDu_Incv1B1l=wxik z425n;_|`u0?y&P-LSbQz$D~%^((iR0=ol=(%@=LH-`{i4NxS-m$FfJX&iT;q|*~A~;a2Uw+BuP|y z*Lfou`Om0lO^kh|T_70qggmP1={+(2RnLL6pt^d2(1L+ysI;yq-( zzv?OiHg6>f!D`T-nHwM?oP|Q zR~gT`)cXysDod572C8=QvqLL%PjM~8)Kk9E;(%njJQCar8}unznJGPrXfKU#nVemG@g=YA8Ijs}mBCUxcLmJ%#2ZHsevXho|_846anx#lC 
zdT6JzbJ_!|n*P?F8mGC7F#lJWOZf3>l2a3gA9a$F-P+ChHugfE%JtR#c@m@Z99AUn zw99D^+4j6pzvG&BA~1DORfV0}LURXf>AtKUmU=B^Pdw2?bFFYjYP0>+( zfo{DjP4@s7_1sjUgL0_nkLmW24QvdTLAXC&kl{0|3s-B(xvNto_5*UQ%6D#Me;mK! zRUQ6Z%lSh?CI@{s!2U-JR0SXaLUJFl!p0X~qk9ZBViW5$$UFDE4x%cLQ&tDm+BZd) zWrgJ5yf2qO&<#N%e3T^l%QfKIwUU`xhGx2ft?I4VSn+^b8Sx5U?K+o(j{dQ4PNwei$$6nDlA2!}NH0J6@brd7oWnFHqJ;D+U57 zK18ipm>|-^ma0DC6iXbcvwg9o#r`Tw&DAUIOdSvvef6PT$5mtLgH5=GS5NpmL5?$q zb^++{q1#x28*>LsaHv7@%dOJSx`wbv3ckW+pIsmMz7}@und|tbTbzY>40$EYz(2)Q zh(!kzq!@dt_$Y>!4{Zu@ZqTo|_~LZhR)*zW)wQ%tG3DBWvM=kiz=7Ik=Aw6j?jg`+ z9}+GarZ&*fLdrccc|K*jZ{_g%nop;Mob2_Ry2rw#Z<1xGZr&6r8jLzOwu~YB-+_u; z?}*XM4ROslRUW7=c3DUIu*WW|>U8soLDSBb0N)z&J)iGK!JC+85TOiVAxFSmY`HX$ zSg)-d*lBgrET-@GzH-^4fsoNF#(Yo6SyJu%d7w(U-aE{Bk$aG&~20e;H1*LRb^ z=IVn>YV(H8)zu0g$5c~hRsDQo53R{vR82P=`ywLDMp;`1l`NHR5A!DgEpkEh29v$T zO=?i??T}(LDXeR@YFsdM`pn0cxcln@Y{p}sfB7@FR1}sTi0OKP0<+_{r!kAexFWSV z{>h{}bS2bDbvAYzgFgH3ufLc=^<0GLCjopjfkz`ytEDf7b~|_>jsW_P>Ya(UZNR3r zC?L?6h726pcziaDAx|J+!(5<+CP!6nsRr!%BkB%uOgP1Dh-#IbBM8t&>)y5K4N3EB z8@xO8ZboxtrElWzuaEtCt?d7SpV^vG)Be0|ZlYDtz^hcDL|VoT*C?*^ z>u(QLRIN-=Q@(Fo%wd|W;0yTXR5}@Kq9qBTCnNwNAHek|9n*)2hHkdkvUEcc6i555 zHOg}qFotRWSq^BBS;x=3zY8_TI}WxRpSBV%<3BoA7+-F6;V}JdQe8_!(vMXR^97YBh zE^?rrIb}2ZD{cBHKZzmGw;Adh}ZF&Su^EuFSJzV@Lhb|3o?TH=?w^7l-~oaNbrosjq!1ti?@Zh$W0t zqbySWQ@$H{)E{0ysjTO62{+cxx%4E+fGecU6T zaHaB*7Ox6IG1UbO8sPi$D}{nzshb00-_~Cov&g33#`O8e5H{OOS-NcB=S0?_e{tyd zOZ;hg{@acPDvTPQ(?Kf-cE37*FkzIWCFl}+sVTFl|BdGn&XmIoCET(@fBE6u5Y*E| z{uL}W(4utk`&RN+hI7H-9#vu{>{U6jw;mRN> zBQo+!)*Tt7Pq5puup}X-K2vEQKyo-s)ERc!S zbD+|Ks>?>xKuyt2S%WMQX09emefr?w+D{WALHip*Q!c7+D@2!K0}BP-mEOB#);n@j z+w^qX)}P<9;TW?@h<4vT61*7dD#d7V!J&#SH@LHa_5)Hyq;0{F0+=`7^w-E9%cw~& z_KhrmD3~QVHQR=?!8k@%z87nfZR>dTYCU~I`K%2e>-w=tSg^%}?x^Yebed%=1>z~- zm1z~33CU8;t5)Ne z)9y$o$(Q0D!)G2$^HZj?DLd>%$6Z>3bNwCPyVmAD=`Q>H#$wNEt?=%q+gIb)Ord7z zK5Qkn*?+=lyamBv~|97qE(QERzobX zUS-cPf!?|3s#5vd_eE`WmF)RTxZ+$*?ZP`COzqM_?=ZQBOSl4bQ9j$m_7(06Wee7}z z2`_d5C^sE`mfe+3vj`|rXA$^1%XDK7t4JD%xcUfXJU)QSTmwCr;Q#_7z_2IrClbVh zv9a%k)d`<&-Tp;iUqCh6UqyUnTmQFMTiC=D^tkKe$g(#uKwSDBYJZzZD5X!+l}r=iv#d{h7o?GVz(bh7TGPSYt~gv zQ|g7S?5C;&;e!XZ+96vZEK^oD7Q(QhMv*pK>vs@HCrXKhJv^bi+n;)@hd)0R7`$IB z_im?^qRr#-o3`PBR9w64!6D4dGt6#=ntKUha&hzUc1Z43nc>&I2Rz>Kx$SL|IwH|& z&;~)5IUo|JG9bh!f}0#OdX%L1K246{>=`OxYJO|9`bv*zb^T7V7z{=t^@G$-S{Y&?q`O}NM(KC1uXNhe)=y7(_cNYa~#@b zHf&NEmy(@$H%;YKxYmM2fN%6r^x(AakX8efp-btD^P~jF##c<*x;x>^HI*&36b`+K z5?ecZLOyNWpDh}Z@()Tt2`SR-#8JA2FUzF~!v0QX$oqt45v zcCNc6u4Z0I!v1PVd=!%*cM8`Rb&VQRa>651YL{-@w2ll!9I1r(g7u)dWD2F)zZA zlE0c7^tzyp{R53e9P2s9VWX-Lu>Y>(;VMRYQ%ZhV#|f(WdBx|k<|i>@0+DWnVzYS8McxybFVFlT8WoMU$%r$tn|JS zmZwO4BFJi*0G*?)USV+K9^?u!04CGkEVS`gxR()ftWQ6D36- zUpkLJ!**X+Q+w_7*|JeO0%NO3wO^5a~YB1QQqau^|aiT8o_>J19f}sb? 
zYE-4>`{al|ZD<&1db7|3Pr-1Z^hr4B3jq0K9&7XhjI`zKQN4d%Bu}8h%GKkRb}}Gw z9I$K7zS{mYAVuvs3Dm!I^WKhr1orhoLvQ#OFIG$2 zdmka+T*!{@Qz1tlF&=x)0L3q9>@9E_Ul~YrKj3zN+C6jYH7VDw52PXaw1V~LRs#P^ z4ZqaA*Y@sl`S9I>Un)f;RWVAFuNR#mg&A~k z-j{U~Gk%`JJ@d_N<*%Ak{7uvPKMSS*ubd|$J!I}L^!dDDo3OR>RNNUblD9I3aGgeo zA*0CpDJ;XU*_b^)ARgX=lNQxVGOq0z`@(G884lh{0h-Dk)YMF*+)OM)Hi{ZWizua} zl=C;QLA)*6K^?&NEBj^k@02}LcCZ7BQJ#tsxMz8pmoJDbfPrKdz&)Un0DBv2SIWAL z(N7uKiL77JcWaMps9zeE407=Ii+gjJO&ii?c3mie*#I1)oNj`1Zs-pgA7cM}O{{o+ zNiu&Zw0iSfo{QDI@#Az$hiHi-5(-$0;h^2T=ia|<7^(Q`GnuT3fDJkmtz@SidA8DY zb~wDX{QT|+DFU{SnOP_Gk|7FO>L{ZWhTc^T8*34LPY~pHnh#y_*_Gc>*cqZvKY_Pa z`$i^nkcqeze+q*1G4{hy-D#P%NIeOAh#q0v?(xILkn?H*;YV=&xBtY&_zl>G`zrt( zh|0bCv(2<`_`8%X)V6lSpU_=Z7GpK%zuF<0U*^ZN0Vf=Eht%ilZdLdloAv#T^HH<k2>}Cp{9o4bLY!h zhxyV^kZ+c~odJcmpk1E&fcbx02>TDx6elnz;~Kue8pS}@G)+UVKd01J@ZEmlc+NQG zdB09X$8&wxB5y8dtsyoUKomGnxZ_(s-oVob+>b7RGi&V%WCjsoH4w+Pl~ zdIYTe&$a~iEGqWyhQAD8e~w}wcUi~#{oISu5S{{`(7=5sRKpJTnk@(TtQ$2!cf$Rl zMIQE*MJ8b<96BEx>hixW-}a%uPFvb%DSiU~)b9*g*hRhcF0(e*$txXb8|n6lWm zcPw2u)*#qvsra)t02)TSjGOPb9_UfHETEpe>)@M?J?<&G8Y#QvUU7lcDY1e!@bAb3 zWcO)6MkIeMF{nN4frkG%bukBC@$_@&vdp*A;w6G+uN*vrn|{Bg3_3ZyeE0-CbQ`~- z(f~3B>(3hO{n9>lR`=UJ%fR_Hg&8IE;#aWD!}>@lCv+6M(^5x3CyuoV&Gy~=XVzzJ zY>r$>uwA#|8aHzgPLCrcKpc)4+)@zAv-;)=AAe5EWIjME1CLw*d7=!Iqoz6zYTvd6(p8%5qFtiA&ep70^C`N=K{~Q1#9A@%L zN#@Hh`=|8|C_ZmbRESqk%u0Kwr6sZ8RfGqW!3dNP2c{?u{621?z@rzK%S+j}mp4m~ z(c&~iTXa#^3-2rGJRX2n1zPFn}6;-2+ zL347s-!EQl&LEL*SZVG{Q@We{sD`hGcPtgbOu>8pWc+clOh6Apgoy9ccU4r*M{h?o zR`;i?6?Q+0TUDGCzobn}Osc!KOFG~L#vc4o?Y`_Y*cB|@7MGQyk}!t`=28quU!7 zT;&ohPqv@h7ka!WnY{Y67ulz`?Fvp8N0c2C3nDLI5{sy~RZ2gK{y5=_jekb9wxuy) zU&!_vs}gQkf&?=~)W^~wveRgv2$+BgxG!*YqzRSkBMhuUAujV?URipa{hw zjw52csa@@xl2#Oga*AINwevkQH`t#4%lnWR-8gu#LiOQx8uQXa&zM02M`FJwg{NHI z-D{$cp-clODx(?COhx%nxoC2f*rusa`F>O&(aKNjNrJO{j?SUH**JwP(I8&+)9f`? 
z6O)1c36wQqlr*ye1_SI{pzJJ#MHm0rx%Hr%o)Xu&)o%CYV_b@?a;uzGhr3fm8SJm@i6zWAvu~`^u`IXu0YbQTH3_W)JJ0 z*?s;tCLyv2vXW21BN0Lk`Y1z6do%@a&YEZ#s?@n*jPb6n8W+3wr8c-V?Z8%?I0yC` zlLP!dwBgq<+HtnWW1mg2azr%A?s zTD-*CJeHU&*Kb z>(cCBH4yq2(`-g7^XWzATgK@kprFx#q1+FXL&~zePoxDReHrwiv|EKc{GO>STD1tD z`i;dhG8Su4`Rw>$7C%YE&PC#T*NOiS!=pjVHO+1 znF?JQnZYIH9FC@8xxVZjFCN*PD9-3Uw5vK*aZc#4K5pvRW9l^C3gk$1*@JDQ#X2z2 z;GW@uqta(YqhBINTUvdMt8>mi8Q75;bnLU5LLc{4Z;OXg@~d2tFZim5pykt8x+#!- zimKSxf!pEb#yjY*9-W_Am31$4Z9ZK|+xTYhC$&3~A9NTdUsnu4VoVg#voUq@5zU9H z9AC#q39}pg4&9ARAfoCo)6Ti51)qO&KJH_esmVRRDvlf4Pfc?8G%=;vDz+b)SY2r_t>^)MkX zmhrc2F znzCqb?LDPh{PbPAXb-Mbf8#lR{N_XLyGfC?X0uMoNcxQbN-edVq~C%~fC|+aylmCI za00S@3mfP!Mhh>}-}~WPr{cTtdBg*vLR!p6k@FE4uU|c_e-v#u*fVSJvx6{Y%%=nR zjcZP$BI}ryo*xjUG^!D+$rh}pJ4Tr3{7Mt8mI{RZfb8dr!7NK$1%<+nwLFlmRB~$L zic_0pKDG^aw>tg_%9gTyd198j^EqU$2=_|>^&Rotdch<|5SYa`9QlF~+bHZ-o7C?& zc7Fo@nQoHfmshzsB0`bp*HezjNA-jiqdKS^Gq(y!zGwSD#idPaKTK(HSE5fwTSLB| zfsfjVw6Wijv&OGGjx<4_b`EYL6eDs>CKY2{S6){h&fYG`&cwP7_#Jsy{!UtuZH{e`PDmCUtH7fT}jRS zMKD~g7pBb!r;IiGmo6TzDg(0HbGpV~dWnjM)n=PWP_e4}MqXS>Vvh*$>1FJqaJ8Zj zQjMXaV8|BsAb3rnx!G3dcwE+?uGu})6m9FCBjljsw7f@eiy#In8~pKv$_fyiqZWna zaR8=t7lPI9*`@bO!A^Bo=Xr0<#5HyEBQiTCU{DGzmS`f{*ODh$8f!bG3$&i4QRFPi zW}n!$GKL>#bQS0PVMKm*pq#)EFA&qs`$Up8?^ad(v( zpk?|iBJem(;U>x>vc^Q>0)$)FU$9EVGmvp``p&m%Q%1T7L=O5Q2<$)g(|rXbaI6f0 zvv{Fdi+&&v)bI_Yk9|J#+k^dFAPee`W-kahdf*2{8XOalMzv?2$``QvbZgGz*9D7) zjAeg+RC-91)3KCe}&o)Al<1~JBRChNnsCzQw0cuG6lmWEN|f>(l3s2 z=e|>id^Qi?Z7Q){mHG1P`wP?YMU#qi(F}XeR_)zg3XTVyPRKNIL?NUY;v>OOaIr+t zJgCtvUBnfO-a5+Jr$$-c8s+aV9JYY3kM6jcSGdTd5nk%1M;#(^#D9mdSp6f{`fVor2k%c%&^fyV_e$Ml9002a zWB)~+qGlQ4;0#p;6PabHP;jAjsVk=mgR;*P0O%EU5zj1Eyp2u8&mN~jnNj}m4I8H{ zbTf9{i@=9j1~K@`7f<+${?Vew|Bha-M&zGBu)7QA=|5e z++6<7SN>l&wfgH39(26;J_}4WD@Z%EH5n?FYf&$|Si)?Zng1@3@z-cANN}7bOg_Ft zS76vFkXecr-!nm-QNv&8j7pl{t;Jq}#jW@WUye?X2a-qQVvlCXUf>ww6hc7hn}jZSg3Va6VS_=0h9&|kxEnP5sqgNuvj z=!R>Grg{muB?h=D%l`y=kH{r#T5lzS`z#aDnp25wTf3|8{e)rLFC@Cjl^AaK`R1ro zVPjCh!J^R9DJR%Y@fUdEcmWw|ixtrF;=<(l30M?qg_G4&>{V@i-ue8VU67eNJ2RD; ziz$1!_ty_b{HYQJ`Ud>Fgv$T6>l7gbz{ZP|l=x7Ry%3`1rT1-=hx4j(Kw5iLjq=zN z8LLJK<5N3FPlK6PSnyGDLvaBv6XjRs}5wZL(ffj$(sQ*quNLc`nwW>O9BW+Iy7j z8#_coppP#Gt0FcG>-0&UEl>^yf)czL0rve!Tgov9w$k1qXC)@p?F_7h49-fYCq#B_p#r>I5-juno{z(^ikPKCjDwY#3|_JP+?Tk)Q6|( zqk6sPy1sF4Ri&jvOdgS{+c4}X5z3j(RMx(0Y6V|z9fO2_DN*#V;SgBRwqA&KSheEW z1>J{|2Ce9{5(#^=Y5Bm~%c@7O;&Pz7khk66dEG|kdOoyw9d8Gj0#QlqmF|_4wHf|o zV^6XVCIwolvC>daDm;xp(O(?zE7h_TU~xZozfano_a436Go?H&%YTw90-LnElKQeJ0A18@kxY8XYkE_)uoP8fvJ2eh>X9wg4qVT4C>K5FKP(CHg!K z&A&!bY^!_z{>Y+*$LY+g#xWeS=0SINa-O>)%dJQ0jmM5#kRg=EEnR>Ebw8!yp}hX; z#)WHU*_8%FxtezO<>C&A(O&+Vf7h!2zuOJ|E02RBCy>dt9t^FpptHc;ZpdeBta4qf z39Q$rPRN6_H6F~D%D-aH@b6f%{xgGz=j3&+JI0pw5btX2w&K@r_@$%#%gL0E$6$r6 ze9Kl567KWI$NFD=MR21*gm*m%U#`r=AVCz*_BA48tO;MyqQZO)maL8|?C`I%GhJ)X z!IDj(h<}jZ&i3P1U6nvD=I~IG|9B#qR!%?a;x*<18d&0?&f_k~XC^Rtllh0C1qTvF zK~P`1PS)Xo(m>{HM()#w1<*0-W+k&8f5qsXBeFrsd#Oz2qC&#_VaRr2YZ3n|dHMSD zV9{iSnTB6{`2~BQ!0@a0!*C)G&xh31^j}xZIhGYt8UHZ#wc^cdaKOx%{|DsgYAzUF z-veoa!fbO;$`rm#pD8@0vkoeMlfmG7_at2&GhjOeKY$0OZsQqpd0b$pq7YD(&}fMP z65vD*3Y^LNK7_U782F`};<}G%xy9-vY@812`{2*gw_xb|FN1Kcx&BGU%O4PO@Dd2E zMIxcB=qa#L$F>hNtx^GjGNHyFkPXc*1Yj5Vjs#3)LAv%|-1t}TYYe1FHt(9jY>Nf0 zF*ssD&f0UhI(6 z`*{4XiTnHf_`mbKr0cH`==srM(PQ<(lto>?rzQ5Cm8VX#sID zG>)Xv?Uh=SJI@{;@_AH=w~ zHN3w56K$K(J|g}TZIg$*;u_j;ibe4Qgy)RQ1FOWGfekHDFMFD=k9iGgzI`?Tc|=2K zuk{ge`KTBU{u}?C%i7D3H<4+$R;K z$7qbG4aBO*bUbZR6=xq{X)$~$_D#u>;4UHfanmK2y1ucZI)DQEjm0;2aL8w*o2myq zA5&l_y2$__KNG9Y z5GwwRvZk_LvUGst*H6S;>OVweEm<5q*OvVDXpNjNByDlez(zR4LfFAf&};`cIf?U( z187MBfy_Wut^Z2vvb4+#)<-zcU5x6K8w8UEIb 
z1sHiq?|J;aZybcjy96*zW-wmwBdMfYiJ+ZZAa2k5)##u*_9(rwUY>1Pa?VyC?JX0B z_W)T=va*0TG?L-(^V;e;@U3M#emQ4j!ZJN#XBwMV&-FC-=LAW&!7x?% z|Mrvh3rq5k9IO63QTi)L#BlEd5;JSCFJJc;bvK|MmXf#s96XK7B|owaQ>*)Dih#fJ zy$9$EtU|^u3Safux{e)79m$8~q5nEwxx!85DnWO#q`BL(o_w4c&!>?6~=51;QAKuiirb5Pl+m9^yV_1R~|1h4zU=@99x)a*X%G9iVm8 z_64Zc{X858!swsJxM3>afA4p_h?L5P!X%fjt8rDC)!;or*$$j4tMOqwV2}hf-c; z+y`^wKH5myn;I`2w2oQK`)6B5l;2xb{mK*IpZ%V{zMt~bHwYQk*(XTzGGa3>cu`I| zI)m9O#uPh#t%5(eYYWB@N5j+$bPGVNU+NFSq5b4DuEU1iZYS?z>kb)|dRR*Kvdb$7 z-EM4jeGqQg(8$AcRDJu11dE!^)x^Uze>e$u1;4% zQbUx%VIlLAwc|x9%}+_=uX&Y{X>@i1uMk54I%IZD8D6{X^GO`4`Azn*5tuG-@)A+> zt*`s5)9NZrpf=^1xYtw}=`9$bF;_OZ(tB5p!azC#P7jsU$cMFE6qoEUj*Pq%Hx?zXdp)%M&HK(45_QFbLRED-%?yY`x9X>x&=wR2oeDJ9b)o!@8ZO5M}X{T(Qt?z?Dk&@X}`5*`bT~qPz?TgOj%#|W3TvkKt2DH zh2CFnCh$*;OYwDhzvFk_5&ya~{^u*8)o|*AiWfe%8AUg3R^MNCENj{$aC&r@*VxGnY!tXY0)qEOU9R z_@Syf>7#HEULm%O_vGqY3xaP@&PKUKmnPr;>^$K#H=!JJp(-`g+2Hu){%FpMlGC(q zt*#!Za@J3e3Q1an5}UBx#OA&`xXl+@ZTTTQ{pJ%FMm`JGy@`-cPo`Uo0Rc7$JG4f` z1-zbqIX*G=%H3=LgGpZ~|1jiH^blc5@vWNY>rt;qxs9Fif6%G zut!=ra;gW?VP@3pSxu!+!1nH)6m6dMB2>z;rifGX92-^(*R{J&w@|Jqi8sQJ#H%X@TG|<&qg|oSg;8F+J-ZgZISO~cvzx?~r^UM7bfEG%JC6!DsIDAs zPse*&ruiM*7+={o&FTLINDTN(nc`ZVr>+#u(WN6$*A1o;-uhL&X}=85y4|5XJg&w_ zI)XE6ERun33MT^`l=rW9GmC?#{P#(tok2OCj~$bb!m}pk?=hYu+U=<=A#us$r0c|u ze34j$0Ci6Z1yCNEV zY*Ux^v(GVX0TpY5w$N+j5AVeQ^=jxI#3d+q(vr)LD^D)MJIqrpA=FH`n4K-q( zN#RL=(bC}aTD@j9A-oN`wEz>TM1GO+r2fflyt1Ji>CUZ>HJR({oQ`96?otaH+`qx7 z`lT68iI<+^Z{@~NxN+Imxn;G#f|-A*PF)yD4!Rs2aGdUdteErz4#(HmL5kM*)M;x- zZp4c(;I6pV3KaCEeB`Uol}S^JO06YyE`w{=p|x2zS82Iz6d3TC>Ow<)Z~W;)Wd;u0 zf0UNaie?L%i|y2PIz(CBUPYiecRWyD$#DyP_u~f@rveW?j#6A(StgqvI-2j~>jxH= z7b}n_c{K)f@8gB|CDtalA6B9sn(?j-x1#whmI9G zqTt!>E>Lt z^hJP}8}OLT`Ea?J@)g)zxeaoT2B<gbUwR7}-*GV1yo&*3Jkn^ts`TXAF!0+M!m{b-42n~@r0t9U-6<@N3PIBJJsxrmy_zSLm(##*7{E7fXZ#uNKZo6k|=& zM1}5&t?CLnozQ3y;M$BiMcKY-5I)5<=(l#v>Wjkgl+b$>J5SFoeNCfeg{qA4ZrBhj zlQz#8t#IwJq_!*xU8={!(r$Y)@N;DYaEfd?=tqM{GiZpD_}d7phuQb7b-c&B52$av zygYv1ss9SBHIxZx^YT5zIW+C2-yzw}EUu#2y4xxTr>B%^?ae=;x+jjGlzExe#lcV* zybf!B;HqTSoET%Pgg1z)XCrO{4+5IEMSe=~b+JjA%bN;AE;+(je3l;fef3OF4N9>& z;c$PKqb!XdUJlR5^~S{)WTsH^afujKp^Q0^qJ9STcAr z+$*Hjaxf^^_U?6+?(t4W&#E$lR0 zb{2VrD0{vP*Lzp)vtN-^Bh`)Z@uwFq@-^qH>Yfu`*A+Qi_ZxYjX54dN3L>&SEo~@x zr#8-_!zo6Od#^~%@#Z1a)R^9}dm4>YNh>y~EEMy@U`hm=4eW|Eb$!&a%i+sQOb=G2 zgsSxhA5X-V-?OGm(2)sx6s0J_V=N5_nJB!6pL3Xv{i0kGCD)x+S6&r*X$L)9bwl$~ zdK(LT?fj1q6vf{+I++v7GhkT?b(K1yJERvi zS|u*4D1r{2#$lQn4-jQSl@^+n%!^ZW2QejHJ`%44OJwM3pYo3IKc;*WK3SR|UJrJT$WlEn?0 z*Ama%f*4iKO;u4ptM5H+j9gX3KG3`i22;#Z zl-mp%ijrXzi&H0w;<)ivbgo)ajtXQn;CT~;lfUnEj6Hx_*YfcYKCh$5Uk%UY6^AsV z7*yzIMawKmsyVR=xK)&Vx?QxB9rcrzBt2nu<$I&BtE--`$HUZx`3L|VR^My}+RW>q zUp}zvmw#N3{!5S6n1@GqfmNPVIi0OXddw$IBS+?%Vka~DJ_is*@Z8K=b4?Aewn5Fn z6wKd)o>yk=ZFN;(#^4o~5JxvIqaCfJPWYN9X?en7@c)Bw@<(=pe(iVta6tcI9i#8E z+&>&D|AS`s?|o+fWPHj`((z#G@=0TxN@5Sr8LP^Vy%NvYi~3kT(#+~4*f?lc?Rb~%<;*6%54*&Cq`i$J__RQXDu9m2 zovE+O`%(d@jH)ph@Fn4ev=i+Z*T=q?77V?^Ia1yO4oK=N6av(apxxjYJQ_e7a>?<> zSu$+P_vvQCZtJ8hSGOhlu1~sc>)q>Ex^XG_uY*heg*ozvdiR=MsX@EebM&v0G_c|$ zev|S|vJ%_J-Xxsr4^DGvAm_#qOXga^$1on;`Esa8SsNn%toVU+r9CaX_gks&xRj|7 zc6VlOi!&sB1s%8qTCNvB^l@!R)m~F*VjOLI$InD8lF3LY5EaqsXTjM9u`mfY6YZ_n zPv6wIv_d|N=q2+L&9~J=KCGR{R^*_k;lnbe{s~8Q*X?*&SvEmHKE8>H#2kTh z;?PZm`XjiL5BbtxSRP@bvDF3#>spiEtxP^hA%es!f1eUUz126n2x9X7W@0?@(DU9F zeoAMZ<5YLwZ?60L*q!2sDb2S}$g4McCguaYYZamyUb-2>4yVf@ZZ{v%f04y!ds3*U z@(C=BPW>~!LSt)5QDFGla5JB{)_$s*6s9GX)f~YFKVO|7i8w*)oKm2>-i`q66<$ zIkvU6@t|%52zpNoFs7&d!NGX`$THW!}UdM2ORgL(An0cO3TnOTz^OQ z0h4sPfwj%erZ;hmR~43ZKan{#Z8mHvZeT=BM;mI&3`>H$ZUA3&8ixD>%(Rn}7Tl&V%5bq>Qkz8@$ 
zMkwQ}O6-@kkk}XNg?@WZ8&GJjH=8(2qnlZW3YR4$?CA1NO#;!^`;2%C!3yUaG;2dK zt-|DeiuX=tL~^38K7_@a@GN&Y&%mpUT!?ymD($m22xtsHi$9pCj}V$poN_l7S(N~! zk-F&*CLV3861qW=2=-$xqgdFLZd~IhA%~vgr&mPS34xIU{*QEVCPr!DDmPQyJ>eBNmuWuk%gvog93BTd&8yk zFQ3`Z$wKSI`pJ&nDo@|bx>bG?(TS$oe+s~j@K#L$+iKj^N|1OJX$3;UEkv$c__`Xr z0=&_6{ea>*UOHOuQMe2zLZcmT;B7ItQ+S@^uncFsDE^&?7iG)BO5E5%+VMFIiv2@ zy6Vb3%BftTLrVQeuVaW(^JF4rHgP<{y(pHKW@fFG~?Ipm*n`A07z)BQLV2-iai zj4L=w^hRylz;v^mU5$5K3nimrQbqv|gJo|;MIzZ#2tLGpBos^EWAx4n2qj?2QxqQi z1=MH~T*K!;a=tG!X#q+dB>b2Sw8#8=1h@ZxE=);vYv+VP!JXhbvVVL1>AaON~sS?tMlzRqX96@%74 zA35nRP>V>2!G9C#;)E;XQ<_Vbl>Cg`GTuED87=oZx$v=O@YV3~=czihRJV(9loTTo zhRc6Z5u|8?Ys&yOJ1lZCa#!RE2420H$nAhLJ&sG5Fnd65>MP_gF=%GF5~;`7cf{6$ zW0SIKcXxin_6eS!th)dN*2}CR0Opskc7Xu!+EXqO?EqncI{~DN`H2; zE{C9?K)39f1`q|H-fz~b1aO3-P-V{f3b&Cix~L1dIOZ}gbo`09u<?wBRHCss?pW0R7f74Flw1qtUieJKDTQFSn zQ2|I@Z{Bdo!+W9H|9D^ufhO{m8{f(P!dlIzHtnG=magWs*5Y|PC5#Stb%h>0i@;7z ztxs9$y-9YLr8$H#zHtSj)mFTa5-iXJ_PFhf$2n78lRJt+6K1_AoPn$9G(m|7 zWoGvEf|0_`g?xz+`Uvtt<*({?qMz340qPi^3ciADQvljzsVzX&qyqTCDJnz{;94Mf z*FqH(n)t73e?q9pADB+>KL;2vkKW9?PM$4t2j7<#KM}LaDUf-7CQM!6@kY2B0gx7V zPux+I9xWv|S7XTc0R8O0O3wT%3x}-VGortI8T-DFvq`bF@;ff2e~Y3o@c-feUN7Xo z_KH8WfKK@l6NjOx)q-DS2TCj9GVL$m=LdyGRe(~nM_O#$m`&WR8cL>)8zy0N9Hs_Z z#IwM*j^R`}Czbo+;>kw^izT7yrUTvxt2(6>m}`y^McZEwj;*B~mpSQ146}699B~$> zrn5O6rQ{hR<%RlKHtfaDSzgUaiJ%cm5eNqf+gX6rUOcajotbVkI81WFO=MK0@?-=Q z~yi!|X@LnoqR0#Il?b4GcVf z?V6UZFb>3%?18N_CLdzo6I-;L$dDzz@JNmBQvFaD|F)vx_U%ZM?nFY8IUis=GhSw< zFbiQJhP07+U*lcwLd^^Z&g$G58a~J&k(7Cdh9dIH4A&y?yVhF}rT2`9W^*nrP%87>dOmYD8X`PNw zVXB!i;*CgGXrdDB^5TqQEfT9LzhF|4LEvUO{7mEIPSV7gjYDx1oV|$q6l*4C{4-#- zG1y)sA0K|`3FWvYG1j2`y?u;zP*Qs!bJxr9n+l*2Ma7vR zCcUF9J(@_ahXcX-iw-~tqU+mdMd?6}IBK}Z9Df2rhM13rki_@X2b61S6pp%2i9Mhl z*eu-t7&a68C`@hXl?h)Jrx?ZOX`p}VD#<=uG{HxY`w&+zMZEn|nxwbMEuG;!IbfB} zdGx$ex_6c^!vXCRxH&@TkiRs`ILTrE85D~*i0}7xa*PGgenm_MYBt&0$Fq5yO*=a} z=AmjyIOGiEjvAxoTNep6K% zEk$g{6=)EY#_*OdA{A0QktDA;vI$8n{8ACfFUHw;>{YB%o0_EZc6!0njRPY z3PLFG08GeUZYT5L1jSEb3h;k?yoa<$Bj>F@AioSC_t`ue0e&kAv+jaAjQsL$;VVc_ z3Pa zIhd)&CK+yFD7SC=Q-O4vN`fJcfNK>Ov1c}SNbXLG!3VjW!xaTNK+7iBe{5bw_Bd!x zU>Oc-2soY^db9pAFd*8z|LhB(ddO%q{<;2aKTz8+3H8bu}iI5|aNz4-L3V39a_9(t!a;-m!=EXa#{P3h zR%xD-zEf@h_zKrL&4e=hc;$Ii>#wBNlzJIT;7SB;%A7x==#W^gkKhBk#i~RF+?jIe zTY&gndXuYQZN|!zEst)hH`RBADb#N)Ijx(yIVMjg_@){sw{P=f3Fx^Z+F5keS#*SD z@Y2+&lFAKFNnVCnxtGoJVw`kc3tx7^ zv!V397O{4!_kaZbN%>Nh)_}|qF3|1fXgWh&b0FBgGgI2U|J)BPqD=tM3k#?>%ae0BcErYo582;*+FoO(sUgCrRr{2?v^eLIvb|oavMRlvn;%!w?c@Z8Nx+D&S@+&gjfui@PXr!bd>(xbW!)s0N_13g#q90x5x;2KFlmv;@ue zIO-vzmCp2BSxgUct$|rwS|}m49j4rZDlft;&lZxqDj>9HRS6Z9`1%4L2Hc&Jq)e>2 zA(ueP<@xFhMn=}GhvI_NT4%vwfSsetH6YEMt0B5pQ-4BXJD|>Wk^1#_SEPiGP6y=# zTxPumYi8--PM{425zwq~dtyp1P}f&|Mr?F@rOO{@r8l^sl$g$x@-}0a<4Fa}nfWZH z?GStqycj4xwM~f;zwr=faQyKqUU_QyMedfX-cmM(T$(J~_>QH~Nd%~36BmfhWI5BX3xhWpyQq2uthS(R36rN^n142(YNye@6fvB*{cv;uaP zzC~stLM7yF1%Y+y*RC)qrdU=H@s7Hy@S*zZTC<|lHPbm^rcYD6fZ@p*0K{54dsw>V zD~NRy8jIk-OY}GANuh*&L*qIHvtIirC}`>LrHD9$XgIWt7Be*)A zO6k1RrPgQzPx7syD;`kFaT@L>Hr#d-l>edKXh*f(caC^LoQhS&RXD*Q+%bNN)%5$zf zV)E_bM7c)cLrHHzAm(e1^r-t%x?jLRD<=_L25v-0rP0suNBF`v8Wog0jSNl{vNNif zP5Cul+_=uX43W@6z@j>5t7CS!3H|_Z#upp862!j3TL@Q9nMFoQ#8om|>C%twl+R&&E~RY-u39jBm2+`&S|G_eZ+Mbg7Glk2@S+Fum9rCc^SKduR% z5q9ZLKP;S(b!YMx!0G@yBbrhCfFnyfwmkuH0Pg-?t1=&EDIRUKm`at@>EM@^(q6}; z(&C(C!5Oj2sHn&}T`ba!<|i5uQiz>sSZ=drX8emKUz>ScGj9$fRsr3Fm{*5p zBL;4sTLhkB1TXx2AJQ8gXDdBKQpjJ-;LAT!RuLTEbHCs6iqrd*?8L5zecC7Ijk?Po z=Lz39b%1e<-6YxGWnVK3 zcDfnKOB~e1Z#Q`KM%POXWv>^WmDHj|U8$@Jyr&zmuE&tJAzZlhadMdZhPk&Q0Xo`F zup{Q=k$d@h$w!GO^_o8aC#F^uxsN{c9rS)Bx5($T?sQK~1uE=bZ=d}1v?n%9vAf5Z 
zFhYE^Criwo3yi9@aeE09v8b$$S$9yvgF2q=6FNBÿ́CV5d83! zEnuR^#PmUbFDWD=?AfbT@8y*(9pSPh=ZK9S^+mygQ>l~!_NOVm=daSn_Rc}a7xam; zv?~yBqGV1#Pfq8+?S&HZK4rZp zmeUzev>1vV7+{SF60*sXEaUN2=ab#DKn$U{Nayf_<`qQESDlP^YMI<}>YwqPX~JFJ zmPlrP_qk_dXOW#MbTGl!_QPCaA|N_P^51iV(ath9A(^`EW*yudLSMGC_$X#7M3>Vk zC}*)Ux;toupSDWS0YqM@fm}@sFS0y{AK1pSNBWCPv%oSB+Le@JN;9Tbo%^wSCTiA3 zVpYsthAd$kjt%o%%ix*E(==*Ai^!wE(WZ&TL~cnkuQ%(J_HW`}TJ*TCpZsvXzpyKN z$IwdaZlrU25eU)rSBj_^kZ6^qlb)n6w|@me)PR$2(9wVUE}VZ)7uh)qSk!E7V5j%x zwE_04nF4m2o#@aIOg$X~cgSzfUNjZ2wu&Bimww7Sn)nw4{sE77^{3%}IAr6vo?fWF z^M~4t1u1;#&qvTtkfEZp=e1<6$u=u992r$Kp37t;0-*Pp-L1%0@Yt=ns;W|7`IK25 zolsf5-bs%HovJFzHMXuqpRlJqpAaI55q{ql1O-uX!2c7kL(3UPgPrYiXiGH;OFO?O z-lTQ}hY70%yUjvr7Vj-`T}xhxnW7%cdY`%6vi`QESS4BCFN~VP^FxWty10LwmwuA>4`(LZ;gT+yjDe!{pjkUDA)~f zm>D;y$!yxj%HKWQQ(AUYYq+0ATeJLr{Gzy1p4bmt4{~Wm zS&v;Ry zy6!&7GG)qhF-#LeKeY#`zWm#*scU@p)%#C;h0y^<%?yy`Pb*S@&UGHpSP0*L1YTAK z8V-wxL#F|O&JXaKgEW;fW;aV$LLNp%34_;#2USAJNLmj zO!jV4Jfa!Vq70r*A#(*meD_2H_Ep+jgZW#KJ0h7PK*j0W8aXO>12qyt))$|R+z$s7 z*foPeyA^+1!GBM6OH@X-Ql;-ezpXeP41^_3sN2=;kt8sl-je*b9xwMoT=;1BDX+pj zi?OqIToL}9_DZ`nLVqhxt}7=dOw)KFEA@xt<$%Xvmi@lwOYiEq&={@s4mvRSxJ>*5 zaBC|l_#{|^PB5^dWV3$em*}7%2JYJPlFa52SyNjp^mKE(aFG|TinsCc3Z{{&5>$Cgc zo2lRV-)b?~+(uNyG?$9JGY9%F-L=qd=y-R8Mx!{4jnH`nN@}Po>4sNMv-(b}c*1i#@wa};3PR67A`H3QHq=yU5klhVn zlOE{&GNtm5i>Kf9Mj<_nO+65T;{ZaObP+jbrByv7;aLCJt=oNA%*fDJ`hnH*#`zm& z{4kU|IlW+~fyfUyMwm6M#iE#@WKo!Ytr_&?R1a&J2h_`FzBbUPR9;Gl@`eDs77qY- zfr1HRlo{j@#wk=BaFEr@8n=Q8POYwboo_b||MY3rL1u^XW`thEjp9zusqCckO42iZ zpXxOd%Q4_!LzB6Q`~;r?Nh!Du7IuHBe_B)c-k1%u;j_z1yh+PJnVRC(PHmMLci?Fld8t@{|2ep9dj*20ofU~D z3E`Ayi|!_1@W)z?TAgkEFz}#oE@-?S)Me%}dvfL&QsHM^+P(SsbaToqRe3=HVaqf{2F>Yhu@cMu%&@Uriz# zTzc5kZzDuLvzMI1k*v~+m6m5WjeZbjC9wG_mf#&p5Wz#+p>%E-8WJzor)XU7#QHSE zpkk<}$S&x9M}b^TrHp4I$T7j-5@-v;K^}&bfq~pO-d2DJ-V=v;H?8v1m-SCVAlJ*{ zJ+Y#WG!}v}iln#Cr8rS*2WZZ3f?1KSsIiop_T_FDJR?>rKHE)AF}+3n#Umf4G)jBB z)H`uPZ~Qkn=Y&~8YtFRk!*gb&quIf)(^t;mnWbYgdEa}Nfsb`6TUw846$uw}M`TQy z$MtaO6uvT5=VA3`6uTILT2Y-VvLY1k$4dc5o8ryvL#DwBQCV-Sf+8RKNXC~L9J5~5 zh=1CfcF6ja>511zKx$FMo;^+Cb_2M0I}&m3vzzmj@P}nN<%emm7Ato$;tXrY9gZC2 zbSPs}+(u`D-3{WVIaT!bm|+fWYyqZv4s#wHkx^&|hhyVA1KJ;3z36WGoSAraW%-m4 z@fN=r{PKQ2J9c3?@+q7RS9!Nwz7BWy#e?d}op{SGZkuy#QzB+s&U~H`Dr?t3^M^+} zcH6Pw2!9Ly9YsWEH}@KN=(Jjt*G;|FW&}s-$Xtd zqN*1b6U&8oU+)c zDlHOXW$Y#?u8k$KB)67-s60CDfB!b>D&sVGbb3xTvc31tF1Y29vSw~~$R|yIcPm)< zCmK<~h=z&m_%5F(*)ETV+rgEPRyqj9GKL?bOuFrCl^goFg*;jbW@GX8y) zPIcbi75+VYxD8+;bO`Lyd&kFhOm1lLKzH7kIH4Geg-UiYHeFu(c=?j26ZCSAk38i? z^O7#@mu-SaQSD08i7IEGk9e4R4?SNRx?UrfFFDa0%Aq^2eNR@fg(aWXSBCeEn&F;1 z>XWO9dJZ=Fv%p0sK-YL&umFL3K-&NlT<$34~urDA!jS>TX5QlvO9Sh%*Gt)aRP8cB8;Wmu%BFzxm#besDa%o-;h9jUGvSRI2 zGVd@l7wB%(JMf(;OFbO9N543{3vK|9fptF!gjkJ>)6HaZ(hMh<7ou7FBkc7U8seHh>gS<(>!;g9YCyQ>K_h9+K;udh$u! zq|H)JGfqS*g3R<&KKij!XwCPF%vWxSvL6;$lCLgLO{Oa~u!#xwL3+E~BdyZN?H zV!VygWOK8Q@f=UCg7P& zOj%;28rKMV8IL_oe+|m)F(nibqw_qmBg}y1^yxQ8z6f%R3B956X0$&=30PwOBpkvh zVZ1_v*P5o;!7e?BX5H#xZ;A3JHUn*)n%3_SEi@y|N9h(+GWJ=206D z4M}8V1pKqI$XtQ#;LYd*8U=IBsW1>7Ks*Rax& z!BUlI7@5GU-BvG$@iI2gm|&v1DRVbrjAz~9t`A!+{=;tUCBdY2*ZKo~c<`I)TT32dLH z-d%Hild-$m!y@$V2&YckgT)O}-RPYPGw$?Y;hBvR|R2 zN$%==X0782Xg6_gc*PfQDxHwOcaGf4W%|SiZS_=PcsI-b)con4<>;!p_zOPwEJi9m zDYM?BSy>wm0h;6SbI6#n?FPmKCH*uMc7;zQll#fJQmQX(O&tv?G3Hs(O#Kz99byL; zX2D~Xv)+lot(U)76CklRhWkAllSAi zz>Yu{0q%ynN~E6!A03(cZ0XdplE35&um>#_8N@k~)h*JD8j6kYD$*8#Zm7}iSA*Tx zVxR>?#@WR1!n12^8i5kwOQ$R(S3Hl+a}VjJQbk-zSf_61Hh-L0LY5#tBD{UwSIgvq z!(8a+Tiw=;fgFXnA4!Up)L>Hi(0sW#YV^U%nD9B{K9E5d7jmSVp$np2N5mqA$8&f+Q8r!Phm_{Bc8)+(G|^p@3U9{P753D{?mmcNNM? 
z(k9-@nx1eSE0p0l{MdCspY=nnm1EQ;eAeBhOA7B!YpkADS8XPg6AR|}slEVCLxm)d zS%V|)Q^Ei=)J6kccyVE5ZF^h-h*OcW2Wv)>x*1+(>d@4Q5YqOG)K0*RvE zbw%ZzzJ^fH$6D&F=ogw|Ez$L7sqOhZI@XF!HkY*DLMkD#MQ|i`dUU#tz*cHu<F4VRt zn12vI6Uoh~D~vu4B)a45gor_m8vsH85nX1BWLwNalE<}luVGzt*6rUUdqm2-tQJOi zYdN&j`ixzxebi|A3BBip7|-M(w$Iej>|Mq|ZyOEw)YPa)dZ8YQ6_h#6Z8&AHw3B7O)0iFQX#a|+K{Ln6gz<#N*=0~yh@!t&M=lI6MPx1wDCum&& z?F>{+0D^Ib2gk9K79CmuF`r9!Y; zP7O_sUK+oUBzo>R%PEoJn9_?Nv&<>_`|GcSEQ|$KZg4pI&Ae}*22q459r9raX#!RU^pBomppEDrDLNdSubv`)W1HA!0W#4$CXO85Yd1|yh3pG_{Zo-yx42ZNqEZS3n8{#1=l{3hAODPmXCQH9y ziB&M%b=k#Z-V6I}|D%CxWGg=vEeK{xY zhMh3T>(vBCqQ3As`FO%bhQiYtoKzt(X)&OPQ(56>6Ihs@Gl=Ci0OmKwzwosG?WzBT z4o>JSgqA!2Sr34e^6$fmm4M-462P6h|96J@>u|s4t3528e6h!jTn9Aa`kia}zwd8v zCj#X8j;fbHCkiOk>`MHl&ILx!_o{ILm!9;J(+t%49r}Nx)ZgEb7eDME9)B@m{;oL< z3Jv^UmZts94fo%0-?U%*_dEWN7zA8R6F8NrNp)f)iawg_U71gnZ4%$hR|KS_Kt{7o}d+*$`w~YOp z_$;{p*c|QVS$yEK!d#C~1A-Mfs?hG+o(0>OUHLFsYdXvkxo6ZA5-n0$TQ=b%R^1uS zE^#KA#+7nY>8C~4-}3j50f=9x@C~hju0|#bXJB0xnS)83e_r8bOZDvbeEiw}n>cr6e(dB&2{CHBBmWHu9f52aNqd#B&BO3fW^FZ-$ zRv7zX-$HjF?@SkCB9Wfp!|upqh%>N_R)ydvX?2Be6&;oL^jPc4iw3E5;@=)qV!S>A zED&39Y~o01E0+sE>Trf~6Njy)_n*w}BU_d3_TI0VUza=?Gkr4jEpPg+d(qj zah!~tNn+cS)U~Ui1FLUAwCdn~Q6M2$l)i=8RDodt?irBg?l9NS;b_|dd^|~qct)fh zbtDu`W{N68ua3XFbD}prS1wjzwVCO0%edAmY|#=!MV5sr&rLV)fuq%^_6!KB%cZUS zW7wnJ)st_bsSG2WJiY^AYCvWZsS5{Ma?j=f@Hmmv1vs1m2jJ{_8bH$k51{c$45eQ| z^OyEbo`->frqJvFgcA7~hDVZVEC4#q3H~4Nvbt?LlQ@`;4NL`*jfbo`8CPV;GeUcCWZ1lm3;Huud0hBK9q?wzz^>;#<9vwT~5&tA$ z<$NpIzbkHd@;${4;Gp*F#^24q2toi^-jCA-{(hbiF99BQD)NWfN+kNFH%#yHsQ}r{ ztu!Pl8uAh_iu?#6De28X_Zj)50h{J-F!@3jnfC-vfgG6#=n%~~0q)jBl9MUeDwz ziP+fl#!h=b3~**biP0gWBOkWs>neH8Q_l8&e?!{e-wOJ}rBjsa=pSy#@E4{{e9H)k z!3%!j$N8m2wma7fDXWN{H5h{eh}qJ!ZSg*8x!8Sq{Y&CoNmNI(Jpx)e8Q<#e3Ec$O zsvZtK_bX>Ki%*`IG=q#3(guF_aq~zW;N|w!NWIt05ht;PIW=pYqsP)3=Vy-0JDLUk z$}xZSue6lI@;_ol@l4}%dc!vG;5d?%K%1IJ!O59>K0Ud4{Hf+dQQQM^53;-!)6h!? ztF-<=;`N#Fi1+e5D}Em)SR1M@AkA~Soke}wpsqB)qH2M4Hy=Vpj3gULzCeziegu&2 zagSG`M z*nyoaG6d13Np|wUka*mYq!ewE@;Jd5*avObzJh4EfG13Nc&zLyZ7T$~1|2mZ-wpZm zjekzh|E5I&035#TiFg|;-0>Ds*AP94&X3^klVAAl*$wnxt3(6{6~Z0L0> zgmcQX?d2w>gA&d;L$S`?eDlK=KR4t3J&Oka7sf_@{r78g{VaM#zXRVwt=l2FG1K@3 z@-0{fsIQ>r98Xx(s+BuwjvBc{U;{R*!yrY`>);=FZKoO-#6(I;HB2Bs!2}lt8<%=^2u-aE7A z&fMR9@9*!dv(DN3>^x`h=lOm=PnrwW?(r>(fMc%cYtICt%HZH}IlhYLB3}-WWA)ep z7GIj*08Q}v5ar8m*9cvOIW*PMf9Di)ly)vEtC<|In9?v+qhcfu)F-O#%s z%;7?|oibRGH*Q>hsQ2B=j_n7})#_xAWLHGpADn?j+`XF}Ih+lTc>}hyR)BQ{2Aki#Ua%>i%+plypu zZI>er3$JOay55{WJ??>j^+cg=wqaJqNWkNiMYx2@LH;C>kBjz2K)(qU(S^v^!Ka1v$H}MX?p=@)howaahk#XLO+DT(+C?pN>0%XX zOb2O3XQ9Qo6z-RDs-W>x5&p_Oo>%S!Ik`aTpiiGzKmCHFmK<_kg46li>?ravC&y8m zRd8ew(fMtiWbonH$($LZ`HL3~A=1&0Qx50vzp>I$!aPW*0oshHg#N5J?Wtsh`X-dL z(k4(`xZ8+rGN=O3(;K}jZBf4!4~n;4lg688926dwzEAn)h^-XR|LN z9QSFnG<}M$X?qMq$9!05OzI*KjeIqW)o4Kv>OLDWlI%vSoc%vuE1J=4)NW+|8GGj9 zlYF^?LDW7*o*sY9`$CPDXTWjS`2`@0c|p6XTxB@=7W-z6%T-=mi-xY!0jrOea`u=e?J*dWfc8__++ugykoMK zkXB(tf)HY=OGjQW^?67W3fgP0Co z(s)E6g1$(*NxC^{&T2DacQpFNmA2qOr3cWh5h2kh+10TPT7mt-Kg)M4z_<}bmjbdV zK5wAB%#&ykT|E>U8dYWW>$hTo4g8ZPTBpr^$ z+`|j_9Yd!m(RH9n0pVIG7%2_c(AwVC)X91cG?Q=e2rxKu1 zReK8i%9j>6XHIq#FWdR^PY7x>XI<`=b-t%~*EytA;tAXSsh*g_Q|-o-wxyOg2!6Uw z5{QHvX-B_savqXNRW}PFG;ah2DU|yP9huzt7P>*h82w<< z&K;@@u^y!wST*h2@OI_Q{2AYM;W#(RQY-aK7kf5{mknGh>QK|9EZ`Ivs}M)N;Q6R* z!zsEpwO^Q=;{gcl`r}O#t@i5gyL^V;QfDK5{a+iLNyYRDj0fvO1?QNQL09pGg<8sR~%a! 
zLnzdHC2ZIUPtLJq?3X)5R>(V9aU*0lcTYQGsqt@SDBtTGllq#HNQeb;%9V71wKF%a%@<11hs5Z|9!3R++XEW;Ji0w@nQ@Ufso|^z3TXOcv&1&kXV(NPA(H3hDGd{zFfTW{P~#1?9(; zq!I*TxJ#1n^b{X!pI0_zLNTO0%e9fu_lUUGC2?tu!R zb|tE_@g&Ny8hFrs@K%~$S+rQ!9OEfRGh^?eLC@9SjNGM!g+*IJi>=q02TLeNzBz!D z$4Ola_B67i_p1`acxwl2&#P>W0Xy}yP&KX@5}!_j1UnZ&0TP2m#b76xN+7H20u`M> zH-B+`0Xab$#U!^9Q$Ec(cOg!=RGgemlU2DG;!xv>YG+i^Y^PDufYe`tVpApa^-`Vn zuKk3xwk$x_x>XNt8nj$u zhSi-K8R$~nI6d%Y+m&|__E8gTo7552qj%~LICV57Xf2so0Q*XK+vM=uQ>pZY4_)KF}ffA@6tT|;Hg zno^c28R^)l!@ti|ppKHEKh2s!4W4Hgf}uM#KS8ldMlVYeeHeKM3E#izL*4$9jP>7i zCI5Nt-^}{|k>`-*4?@rU_p==)54~7|{%Z7tKodn7fWC8Yd?O|u?KaE{pu5mbt0}sM z&MP)wRo4Tvcfc4x#gLun$yCXSJs0+T24y&QFQ4#l0+$3M!yS~r;S&v`as=9 zdc`PM89~D7cA#=}VLx)%>chq}AS5W@>8sJ@woHlUorA$U_ncoDZEf)Pb>GM^5Ov7y zCjkAlYsf4MVB`f|h_afcD?`~pnp!oMT%FLpSV`R5+kt88M49OVU40GtzSo;nwjoE@ z7r93UmoX?0Qjun_woe=eCtNnA{xC>Uep9$a1zC6d=lgQuF2wb3^LVxouz(UVlmVc; z=uBYsC0_lDc$UwSbYpAOvch@jrntr9@0?`w6Fg^rOKH=(rSb;#uDJymsASH-4`Hp+ z?i4)0FiN`TB?|g&#sZ~Zs>(lQk+*SfZ(>6};!iZ~mdm=&6T-4*X^h&XnbMu6n0!dp ziw`JlOq zEfPK_)z{!_SFriaX{|7<(4WytcH3tt)&7k=r7~ASG-xB}+HuWWF46f~ zDTp*ZH8=o%A&yb?UU-y$DJZnPKNs`<0XynYyoj)Qsb^fWS}J z1#JJOaRg0!nmIMnPkO6#XhvRj7J1WY)*?LAyXBB?@L9a?eN!pDpOWcD7y7ZPI{Uag zK>fgD>X;p)vJ5yxDcCk0RikAcFd&QIEH92a4=6Zdp5fe%%gSFciEd!CyTQ-l(RNoK z>Th?d4D;E^l37ya*P4&ZU%^NjdhH*R4w<`{?CF^x94~c7)}5aZB2$VPM(n3sfyZYV zN8p>uC_-1E%#U#BhiJ!{m9^R+MVssGE$r8L`+N+1MH0$q&GXYV^!3kbE5MfOhI`r& z(@A~1G0@SW{wzcgad8;2I^dz^gjz9a$a()cO`Esa+%V3{3BuU-lyDyA{R7ypq2C5N zg2~*+)}{*Wa>x#K+nYic^DljA4U;@^a}(J$!FzjOrI);+vR068i3y_;?eCA7P8{B8 zZbNf5r`&kzTm*5vuF(6V!XE8a>=v<_v;SV=%?~Dn{QE49HpjvoK4aoJz@nrL33Pet z3A>`MzW$)j%|*XVa8Y#8v1%$llp**4x|>dFVf2B} zrR^*aa#AMDXB+BAR9H<8P}2zvHJl2>uDRbWYb~${QLM#}eG^{3yzkO$kpV6CSiRAw z_(P1?Dgal9;`yNKyo45TTG!UAF3jt^DetQDaqDuW-p`Zm{z-h#Pf(un6Gcp7Ttj0( ze|{GQ&uC}|_tO78j8IlEpEZSg+@IB&yR+;aHb#MZ zYaYT)?lc%59YoT`?=>fpilC(NW@7Yw`Ev&gv|?q=z`kMm;GVo5iMX?!4!Og5eNlI& zHrA;AjzHU@^ZK^H8f7i!H03=8O=9@XPp8c&mRAid4}MTZ^=Y!XM1*L1$cQF$5T@iv zB_g0N*!=kbDZ@m;ILx1xp-Abm+Cl`122h&ye`Y@sp8ySNU6sIQP0SQrBw(4^T*SyK zodp7}6E*VXQUZLM3y`JF9+7F00wM+G2lC#968vKM+TSjw!?3t*x{^7Q+pqOVO4S;a zSJzhF{Q0zHlwM`zPR2H`XW_|~dy2hW0v{U?t!$8(7bEb;u4fs|;nx6VN<>eqQK4o+ zyE!%3Yc9I(w{El1@|Tz9-~WPWtJ0V4_~lg`Jaf;^?QV3?%)fvXTILCWK!T>A49kGg z_3q|oVn!x`v}irLGa1@wR3Ih&RO@MUX@dKKpm>oR+^;|XPdHDx#S)%hkU?YYFzwV) z^x6u32NQ=f`~{H}9D{8VkZ6@em^yt2Y12Rr)y)2`XTrIeuNc z@6n>a)xNjoE@n%YJH}6qTd&N|wMIXklSD4(pr^y_mLx*qb|d=GCl824($X26X8D8T^s5a zrqr^Y?narw=lhBmD!Mhid>uFV^@GX0XTS>EQ6p661CREHQ=Hrj->-1=IPJa}PMa6+ z01TEei6Ng`@TY)C(kf-A-0S&p@ZtPVss=&nAFTKlKU>HsE-qP2-Hh&h7r6z&R zFT)Ax3kTdO^O)czXVC-dn^HaOVfS3tMjn>EZhD<4D;%@2;fZ`B;iDy?LQjl|L9PSaWG2qLnRgZxvRhYfw%Zxz#6Q9~Kb8>T@{b*}X}o z2t5E;qYU@Kxnf{4Nntu-A)}`QyeS(~5uwK6TyeIq@-~Be+U$%nM)-r?NW45JwO(_Y zl~iOw;@Fg>27zh7c`<-gjr1Pt!XKTG$E6K~nY0z`I=vpt1bCN7!Oe>n41#d&5Eu91h?#lS*y2Up1kpqSUI~ZJdwfXJg7VqKH<*`Q?GmRQcg}csRC+6|;TuY5Fa@a)E z1fq=Nz5$W$ase5xXQ0CSN8K^V&tVdHnBWBtSp%4;+Uri2inXaHCmY-)B8B@A=WEZ_ z4CB>iByJ9zh)DNXk-5912LT^7iUdCtGkok+d9`n@6(x6TC))ODfqP!vP&=g*r@;B( zM5Q&V6Drb&er<3--G z#-d^nad0s`A>>luUQ4GA`XDl`o2fm%JS9HXL-37TwHmL_Hy+4zMphm*kUAax%+30i zPE+&@h&*pWQ}W2{9aPgrOixjei{wERn_GQvouS8SFaHB=9F|wQ{GXG07N=5nw{tUmBx z;TVKA$#ngKq`~%w`2bhRQsrJU{9gBQNs zInBCcF`Tdf)f1x|Q++a-oaUsrG6pUasy{3A-EmvuZ~qw%eh!nWt3x^ z#`S`W++nqI2QWrc5N}$XOgf%d;mB4zeh-LSxF~+_(>~)k5#51wGjq9PvDL3XCA??9 zb)F15_f5DHenA9LA|N6&kBq*1IinhICLjB}R@Yf=70kBz*!L<{Jg_uXR_XytVOK0~ zl{H(%zZwt%@jGxX>_YPGJ3oeMw?R_iI$i>+F)VctH*u1i)1u=jUp4fc%OJOj-`+7$ zceOQ2@XSgE`;=-?g8B_6!n}YS3UXffFmWjO>4IVPw#5rytAe~U?Oq46O`+klpGz}~neEG58=vhHH3XMj+fwbTukwfVYn?XuRun(Md!C=U?$k2y1V 
zlzpe(iC>WWX}%>pm%v2#7zi~=_y>s9|B*uZKkxlNeDEB9BqrTo3<#Jcm%Y!KTc z>Z#Y{v~#Ox91P_j99}lN#^WHh>8_%}uBtCyuz+3dgCG1{3_Q*oZ)({(ZI?&K!oxR*!~Yi)?2jslX&<{dDF*Pelibj4NW@QL3k}aOWb2yP?>YerVzyrH#!crKB9B`=EJP zb-Mvss)ji0Xzp7z#J@Fp)~IiB;MRwX7m}whk+P`QG{N0Z5eYgIN_NF8a`@!CB%jXwI>S|`JaE86XD=*~of7%CkLIiP(AoqpLhjO4SbfGh1_{MZ$5ld12 z1K*vCpti@JxQ5?z_uW(^M2HUvO1;v~O*aubZ<_1}ZOrP0r7W=HPk#~Dz`Y!8kD?yx zX=a|pjRtu;x9K>1A3H{LaCyrkdkV-Y&&iYf#`-b!yHXsCJFr$y^7HPun72+o-Hx?V z`72Y_G>b=dKFTTyI1882LFrkW;IEX5Pq!t+>^r%MqU7y<7X!q1ZWhP2n4_aO=S|gX zqseNN>;m_XT zCsjjL-mlEP(+|X=Px5%?y+6P)4f4ky&`kPiZUdCn&O5i$BFYR&(9LrSO5r-mk_EC# zqe5-j{k|uJI0rqf!@tF1@BS0p85f$*+^6E0YFy zT7s{0G>mJg)XV_So;Be>FBx z-Yz1QVwiMaiY-@i;FU*}gIhPv5Otf10W(2JQIb%1o#s_U60xkIn09TVc=S-dvUE;A zX9b^fQ6sHIAcMWjp+MrwQRm^W5QOhpPjLr>X{dGaMq4Ac@`+bbw!23SY#B#s`q_b}t( zv-jy5uk4o2b+X6mD8<5s6!6+8nGblG8Q;d6!!;?RQL^x;nd`*ilkL=kuC0A$Grq)w z*ZEi$BkfwKGvxjhJsIE`Y4;6b{~Gf|3q@#pX;SG&pyhXT%Ro2F@(WIY*l45}g9ATVDZtP6I32>V$c9^fN&PJ+ic zGQ2*CN?3&VZYL87ae7ejbjO7<>WFt}&YiNxzAWsbJJ%lV4rvK`lrqR+pLKhr)A}-m zLy0DVEbkkQdCyd*BI4T5sw?r%0sZh-@g^N@&I*m!;rpm>gPV^475Sh_~@iUs7bM}H1fpjBc-NtDvkZJ>Dla*T8*hP zIs}kgQaZvMgHmuFfO!?_G$pL~Xxj+6q49NN!KXP!D>1Bq;)KF`j=TsYdU7_9W|a@= zi`j3Wsha<03V1Vv7DINMkE!EcVgB@g=;yn z+|nqpJcE*9RDQ=y`}e>bR}x6i!!m4)Xa-~90L}tg7anD<4#(nJg+I}Zu}#kAlFDJ6 z%5mMuE`VcECT(o#FYM7|aS=eMihi0xh+mmFY8{S$`FeaSW96~D@>^BsCZSZVTWOD^ za|$mk+Zw2`?B)FL?y-I+;a1k{~N z4QMd>E!<1-vm;>u?uD4eMMtT|3G5@UvyojQ7HTTJihRbi2JW{hD&AHznmg#cUSCq! z(Cgz>U$=7zv*AscYuZ@4@|F1>9w9Q<0_)X>+K#|a1NQxR2^dbM7tZCk_hb|q z+RcQ1zyxmIay=)Gz&@H2FmQj* zuTlIioKysFRk$K;dZzrG^wEz;^`VwmT`ftl3@A6sheF;ZeA7uqpE{$OvkcG<|l2bDl{wxu(Emy0)4~S`XzE{qL z=*2y4UnN`hDzVZfDRft9>Q-q$pkk1FW3vOPV1>N0h5C8LVE=)yZ>1JVw&ri7DF^H2 zx)-qMuDPhLc}Z%MlO^uGNCzpzt?*+;kTzH#CMHR z>y+-DQow@h)6?H2e!LyuI9buG**w`=RTBElJo4rdU7B0yv4b|*F*ey+eJXxgDZe28 z(0h6!RPa=Pchb`S`ZzYy`mi}R`qZvdLAgKi&Uk)_;Kr!mD$$(S+d zAw3bGh%WHLvib}0M@Q@2oa0Sm;QVv-q9(?4y>V@jq_vi4vUCW$vbFyLb{CXBA3^C* zwtI^#dPdM)Qsl?D#)fgpOmnHod98|%Uus+q@R-$#7;=annCpc^vlydjl!cuFz=dkq zyQ{}CB0U~9%xe&UuS_x2ELKq(X%4zFq<&{Q;~M+g zYjo3O7K2&rGk=0ZV*eH}4yZ1d|N&P!R1YmH@T(HWf(I9&WI|#8U5_&+*oo zMA7~hx#-u)iHpJ|A#sGWOIpY8!Vyy|3vV`~cIp8ZQk~D60yw0SYK$eWJg@FX;ZKeQ z^yJK6_B2@5d}zY1-5i|&iAkobQu;GdvZa<)pvO3GKNYG@vtKH+ucaHFx~K%w2+$>jgre!Gs!M*kXt!r zqWW8J1e`JgcC71Xh;l6e4!&r5P!@3W7lgRP%t%Dg<<$sGO?_cBH;JmW$wplOgD$ik zR^R_~V6D;-2KbfCAbjGYvBO+wZI2!c#9){Acqeos2 z{emPUJ^;u2MTk*G@R!L4b?A2tCHDzrrD5ydh`-WvHHO@ZIY-R`8x7KZ8Rfb@ir;0^%Ia-m=52nX&d{y-* z*{1k$p`d)i!i-?$^^DPxTP|ZvSSMKlYq&$u`kIRc+VXYJLPt8gfuc&~FzbHqk!s z{X(!j#1quGDfi8X3?|qwzdig)g3CUn1Pj~r5^c|6z8#e_3+X|&U1ZC8{q<)hf+(|t zz3nKv1Ha$s;|o6YSJlqWuRtl65*O+c1@zodSC=l(1IJAhlhSdbVm{^*ie$9Zl?Ds7 zq~B7n*|A$V@{SFjLq9%KaeQi6LtLyqkF?t@7G*c;H5P_AMvt-L@fFkz7WPcDXs0<7 z6B81zXmm?hD<|&1WgKk+b}F*p^q1IYH63FIiBlyrsW=)g; zA}9OA#t$ov#f8ezCo}n8vz3XQHv(zX5?46p?*D_)IRC6Ef&U86jsAndtNvZq{kP6I zfBycTa-#p7$JEoTe>8Z~pKJEdHT!p31FU~Eyu#mUTz|$bP>b;|K9{V2G>h54`1|-* cUypy!@w5KX#PENv?SJ`r`}bPktiPuJ8=x?zC;$Ke literal 0 HcmV?d00001 diff --git a/docs/specs/img/zk-the-collective-action.jpeg b/docs/specs/img/zk-the-collective-action.jpeg new file mode 100644 index 0000000000000000000000000000000000000000..1d75fcb856fdc0b072147ad8267a53d9c2711689 GIT binary patch literal 260019 zcmb6AbyVF@*XRppvvGHKD^QBd#$AfL755^=wK&BciaW)%aB?LEy)|S@?H!0GNIC ze`xdnuNckR#={EU;0!)9y2Fjboh5|pgm(YcS^nvk|J6nR={}yWp71sr|8#d9Eg86O z4cA%h{%^YF|4p}Yb^q5s9^OXO*~#mlt$*^*V@#WOy4vtHDtx8{JOC{~0g(RJfA}_B zTuJ}{at#3Brvzya5I04{(V5P1Fp z$N*r3fAVh$h6^Gx;y*z}Mn*zLLq$hNLq$VF$Hc)z$H2xwL&L(u!p6bH#luC%#3#VV zC4lR={|*BBw? 
zq^S>&n2}FrhKZYBQ`^I{dFzaXj#)rZR>#81+ABOwN=`TQx3Az9DM0`N`2XP$A`$`^833W6!fRjS0sjU9?h+UU9SIQ( zA6fv62txYj4;~>h9fLGCuNIM%CJG~;g{7N2Dlxx|c4$~i@nTthML$V7mOI)V#$DUZ-EU zW+R;DKJ>low=K}B+FQ=af|oC_jcOltmnyfj zi;L;Wx$PKT<3bT>808@=Ub^0v^mt|5JNd$O8?CHd)ofzFn%@%xR2%6!{YjMU!8kmX1;K9qgjeO2xECt6C{V$F;JW^Rog`AeXZ4w?#=gMXi zQ$cCob-Xzkrgrtrvop)uBfRsADho(3E|ptMu+ZOg{G!rMJmT`)BV0qOy?%@fIa*bR z)18<@lTISpMLvYf2^v!lCavW%>GQ8-x>+4C*cCl!5{&s%)X#v5BLw-X-HJM>j9Ox` zhg_GWE6tW zxJ-rI$^=Og6$OkEFRD-dPyCQ^krP)LW~1%Qk((hZmytIW)FC^l(UGekVSg&4ce2Dp zuEWG1bhB74EUVVd`Qr>TuA<40jdD}DntB(Bc$S@pP(#;8K2KaW8$K$&8wX=qDVy%c zCdPfF5cEkb_gZf-M{ZB;r`kkttg`atypxb{+Hw?PkToG?21xd?p%*e?x|ZEW-lTFv zw2n|M%);(h%PKKnhTYu98wC`>_)n|)uh@*Kz$Ehag^UKs6nvpz)29d4%>FXG=_r*y z^^y6Hc#=;eIuBH4^D5zp&j2O4m~uZXZYT+qrfcE|U>u=RRZ6Ws8K8ZEf`@krdLX3s zaj*I;O>jbAGG$pc=XB}@Fc+0K^8!GOH8GPl7MEw<4n^m!6RqsTWN$4^1&IlB=P)BN zsaf)5DoYxTe0vqwP5CaBl^HPc8y~pZ8X+E~U-h5Bplcv|NT8KH;u<8x5@oTO|VuQ)Hh8I=V#ycK$s86S17fF$pFOR-&hS^vwr!qBa;5pcv z?eIum%ra7|PMDI-*yN8y#@!-t81N!R&zW9gV~Dk%|60Mi;>SzNKGMRn;)&j{#b7I7 zZUp}kFJ(>r5mMTbT~`3hnRv;pM&dj}z~3nKRpWOEZx#xrr~SN}N5k9tv^G1=jQU%` z#uRXKPZ8aPU$9D9m=;X=Od3|u(?IAktK}l%$RH(ZZf<_P-%g!}w_+R3fw3lFJyMSo z*qlTPK2-j}6?nmSFYGT&|7Ep2K_;?ugN9*STx3f*Dw@Z(W+=*1l59&fy(PqhWF)lH z-zUqlIav&~gIG!}=myN|JcRu1R@C^H;uGh@RG!p6g>| zTMybACIC;u*{y9eHjwxZabgy=royO(L@^lUV`OO{Gm1660Y2qlhKYhek{2#7PY?%Y zab6|F4F*O#yd-^*rZ3865_1QBM2&W2AtTg!HN?qQef!cp=zs)GSdiJFI1eWkwOsdM zwDjDczy$LPfUJCX6k3kT?;Tct-~m?Qv5w9f3CpZ1M~~uOQb=pdd%X{W!$2(@9VuaZ)g5!#0wDfMmf%pz3tYHx-#y8OS7idWqtyjosNHoPV& zt%kGa2GbJ7U3@a_sYn|zJswXKI+{wh2|^YPOEhsp9zyr902Yl;nM~+}EW0Dp5txa_ zh3I7HLdqpR{nMBSziur&Js-GlgnXc!E5DL5J*5xsHPv>uSe&QhpeVUe8jE!IM1#dQGJ{zXRFs0Ou`|3YDVd90+fXuQv1X|f*!`GUC*3V%z1Ho+=F zp0Ai@gwoP@8~G65rnc2|XOzveU#GREK!$$`&vcASu0$M60YaL^4?Zs+6i1 zcXMnUKK0tXi#uFiP0Wq6_>+ws<2a%sE*GD>z4!nm!uwtxWL1B3>BNt0ApZM1lpldy1I1MbYKKJp_J->#{RM-fW_t+7)O5w_7XMK zBQSgJ;GKl8BPh$JSv1dr=D0l?F((y=IN$D@#>Rfi^W;OzJ8 zQ#5Emj{}koo)3**Xk8~Ybh$3I%Nf243|)?7n0;$E;Rd}Q_<9 z$)!W^Yq6mR_h&?}81jaj_?4{h15RluFPK4ecNlLj1*|lY{te7EPZ^A8LNP@-BJH;* z_>vGq^1_f>KM$#Z5G_au=P6c<4l?8!tw zzdk+u<+>60A(KxWbmr%q9$vksPsx#LDAPzVVH<##DKyl;7G|RShZaL5I?yA|=&3?; zyCDwW2Fr@~^LU=!2(ddEM9~0j99gw*a5s4;2IzC}DDIpu^a3Dn ze{GJf6gC?bg_mf6u37 zTFrNbPPU3YTq{N~tW80CX1Dsv)()GJTCkige0CDUDRZx0VycG}3rN57OI`PGiX@ofo8NEpa=)JwqbJOjDWvre3%z{5v z3rJ&%u=6-_e6o<90;G|WQL4)w*X#-=F#m3WG+{7}md~4J8YyqXiEaGRNg-T(#FOC5 z4F~*1RAF^L#oi0@wLI>z^woB-$`S8KJxjG;+K%oy4@rSU19d-v*HM%Fc#W$?@)WTd zPugjM8Esf}voa!#TN3jWjH%*!Lk1Q4^KlW(x$LQ*O9a+{+Gj!$3 z0?f?q!238sT+aaNGUJnsiBfB^0<$AICe(*^l@rbzeT)#! zc+_qB*bP5S_DMU-Va()XQ@>{EbDhrM7gbD3uJm?uQC76%Vz4tCI>@$>Ss(#~B!hPJ z1Sf`5?ilTFDpi5n-s?)#hxnBcv)iDG^|*vg`kCBiSWHPaMe!>n*!^a34=MJv_mGOd zh^U;v%i8!}n$V8Sm-W5i#?vXkqa?Y;{F%B^^G}9pHlYIMAWLY;ur3?-XHxm%6C=e% z)Y6^P*8XxL&8i^dWyh+{sV{RZh*=H&wD`8xMN&{-;EbK7 zlBSS2m~=tB!RKlMftc@r?bocPB%1E)g8K__!`x%g0Iq1SlhNfm^m*W7w@3OWm^2aEh;FktMJ{Dr{Cj1yPYQH*jbUnqE#T-q)**%Ko?8oqOr)%YU0q`M z6BOS^7Jf%Zc22r!RX3n{Lj7}zf*dH}02cWEKsdj?bkZH28}QvXK=-If!sDWFnEc6! 
zfhmaOR=Qg~HP`X(GQnFzqN7S3`ViW%*T?gM?16zfkn-aXvv;V}*4>WL4N_k~y~-VA z858{*xyPi>zyq!Jr5Iji>s_bGFs#+XF8(GXAZ=GH`U;W?s24v2f6n!o-S9?xm)X7M z095rg+VO9~`{t5|W1Rf@R5RX!e+ez)wGU~gIy-{C-p?XolGC7&^#QwQ0LVr@>8Th^OW#s(H zHn=+o@}yNbquJv=$7k8kSZlR(n-%6orY&1xbGI${xnM7sXWead>B?S z&sd!&xtJ%?^oWM+@uq{&eZfmENDvmk)5I?xE$+S})*Ic7{<;Pk%YiKQ#8g%4f_b+; z{*H6n`mN7-MZvB8c6_fD<4WsBxp|Gi%S%CRIVuP*3xQ> zlX9j9qsl3WOX^-JD^=qVj8*N8_|6zrXo1Yh!g@)bfo~AvmE3G?d8aDdX!>{L`TKTL z5jhMzIX#8G7e+a0`jDgS&d)YU!s5XNZr01q6^CJ{2ooj^``I^J@}$UR`U3mfw8kkr zkl+l)G?fb<`URiYBB@;OjxSr6%DpLEMK=+CgF14Q&v*=4o<5)>PBQaqIE2L+DD~M-)NgvX!iIi6tjTS$NqjWS-#BWe+*XR^e(Du zDbIz)*Xoi8fG&`tQhEG+s6X0{pb6!0F#W|I<2@fps9sd7a7SR8LVR7r51RomcpC=) zo+PL>_x80vmzlBPN4@rA6H5^D-hDmq*}3_x+s~5Oq82RijrxPradj2N71ikiJw}xA z>a9ph%+&CSr)(Lfs{J-ifzPAjm)|&s*#oBl$B1zCvAb$zUrlSsGvJrpslmzHG^x($ z;W2fzAH%?*0rpT3DhQys@b^|&iOH>i{m~)809osa+@l1%X>>yUd8<1~X0;D#r$IfJ z4h!>CU1rRSWf8zd6gr(OxZH=Vk}|x?X?)iSEL*Emus8C^Gw!e}-s3r-w8Od#Eo5y=g-${X`}NF z$5lHM38A%Hv4P?~B05-J?24kXQbn|ZI}Y1ZL;MPYZedX9xuLnwm4d7$6i<>b#1N0# z^EBy4v$~|$*!#i;OC}-yzQI?7pLP4qHe?FOFc>!gCgcs?+`;yMyXoQw6Y%e^b|ZVT z_dGJw=3%Vnbjgxps584wWTGiy9EtshE^CvC!ux8&NcwMmIhM27r?&;2W(G{a82wBS zZg|c7lr@%UYZmpWhx&&)1o$RgpBbfl!8_4fo~~~1y(y7#^p$y467p}+8;~8Zt2RT| zllxN{>3?|WcJ5{Gh^APd0W(ODenwq(lJtFYRGjJI1D)2;HurJyiflSKU5mIw7~5*Z z@)5I)CGQ2eiFEQ$nxkwXl{pZ{-zu1`zE0 z?f2shQ(*yNWDwp+wFm8O=H;3XBwYY8Zb^yI6A@=unHpUHh3x5rW+*s2 zTA4|ae0*hxSMgE#7F#S772r=W;ZS?zQr8UgI9RPp9Xn;sVt=RAQ#A~-Ya=FzneODo z9Z|t+JTn%?DLz7RS&BW#jL(}3;>&T? z)|+5SR4#_93^w;yE2Cv=9JV0a9`P5AZz@p}?JoH8tLg$Wyf=$^q(1SbneZZhv66m~ zBlvqSD>NKUNNjiWI-C3#_VUvQU0v-Dj28-O`OGMZF(t9adw%ROO`bj<6TIf$2xaex zaAKoU7+@DQoRR#(8kJAvVfDy9f9gsOAnh-(8^BF9W%qvn@voYi=T#l!dccjvqL6;u z$e!GET}N3`9ft@+#C17?u51P95;+H*bL_+#vIBz1G^{%P4CWGxvT`%0l(5CnyGC<# z;XxwyUn|A=AKQcq5yss8QUahlR7^p1`8j>VA%p51ANgPBAUpabnF)PQLu@nGyW-Ek zgV6dlfUq%Zwn2%PIvoDm8OKN@N`7QtDJhC z{^tVYTE9g*p~nMWd&=W5_=_fDJg32=VU#(Yfy|5kI?cf|ka^wi`fn` zEla7)f|q|XoG6eE0jv!AWaMwXh&+GZNn!q(j?|Mw%@IQur`*&MS3h2A8;02@WJa77keky}x)I%#R)rRo?b-&U z3_v)y6F-^$t#o`(nAkW#h=o$ZlxxNPdlt-r!4M0{L_jCCAE6=*sQ45#Ai~FRy&e03 zViDnFng0h{Y8ElpgGD`GEFMSfaz8nO;@~kw^(SF|f&{adU^2k(6W`0-WZfg) zjn`N%2A)2{k+r3Gv8*{gNu{Ixx=1`%1*O&ip%)N6uk^gS!ugS~!}Yz8|HAOZ_S#Lf zvJ)O-V@Z|$7h}NVRp~_I{Ie_RTYtL7Pnr4n@Zi?{n&m&SI!XWZG)6fql`fBb;|Ls1!3JDH+(1qMJ{tXhGN|Y$Wn?h=voKu{%x9<3LX zOI@!S=l){Z{Yc_xfBOt9B($5Nz+cD}D8dKy-M0o~4F_%EArp_Y*Uo6>5A^03-N?<< zVd~%kM3FCT*`zo#VPCevPa3V#-MO>Q8~s^UdYKB2xfD1On}0IY!wwZ^L7_$ z0hAuX-Khl0N-CrmFD$EQ3JH1j);k4$7r1&Z!EnNE+kHQWRg}Qa=}Tp4M!8qAz#ZP6 zz9O55y*^m!f0*TXi#^#qvZW=*^{J*PZ7i(_Z;+c?N3^!S54PT%OLVbmuNPwGwqx!JW}?*chl+e>fBt3Qw21e}f4|?)@ow%f9`M=NUl53Ue9_DNZfW;sM)U?J4s2 z_!(S~vioD++9 zd0kr`-N`Ze&uH2xkVu=GKfS%w6tDTN>ZEgJEz3WY=HORbow^bpvo~SF9^Z{}dojz^ zG%@61mQC&rI{}=?M;E?)F#pBr#L!%qd5Rp2UuFcNz@h^1au@kl_ib>VAV=q3d9xeA z)}@Z@jtF!UNiRA`zNp&0s@+Lo^Nw4BzXizOkAIGAjJ|%FpISb)o>Abp#852farKG^ zErRwj?k!Srbu)9{D`y@W*{4QK>g5K%_42)V^X1T$MscpYI^hV>q>P9#nokIdgoy#cjlm7b$;JKqg#eyJhN@q%!_@AZA!E-f4u51C+Whp zf{y#ofK9sDjYxwf!{@5CHc>&lc&a)#42RiP4goVA=CBQ0_XYyIRkr0nk3q9egY?%K2TDi~0(8!7W0=sd_sf(+HIXdm}w_$sPczzs1a-i@dn0br;+g=M4?P z+PMNL7wX(lA8=?(kgsdKJP7B_4a*OQRK&$H$Fk3WyTYB7**o5n zg@zzLvC5~*q>F`bt9HKwHsC}>Vfq35m%HKnGcal(v86%-ze{ue0aduavaHU!k6wc# zj-aPxA2{N``7h$gmsl2NXKae7Q0LMdnFvTRLwY0e(yUYcFHI1UrW2)bc#vYXnObAo zXIfbfJ=kF;rHw7|;OIs#Ih);VuUleWiO}v#^{N8LF>SVjav{Dx-EDoAF_MX9Pqki1 zNxGs&Iy5(;*1e)>!+66xy;SP!aH@GYIJl))LtN}%Ve$9e zQJq)wp{`=ImJ*j&hmIkiN$1FbI#0hv-g+r$Nc`uA1!!M$(r03Dc5~_1@Rb_%PY!l- zbP!dnU_L5gZv;JCuj$<6o+NzA<9_UKV2e1BLJw0^5S#56xZFvkua=GtQmqH5>hGhLAWz z0@;FlXgfkbI36unwI~I4efJJxv+uhq6n$6{!?2yd6t49)XCcf1Hy#6h4oU) 
zvAsQd4IO!yicaC&nl*ccz|in(mbfexD7!WzK5ug=2?1lyPSe+2=FMxbpeEGLA(KBN z)3UJnon~XhTOlDE!CG6Qho4IVykKWG#p^J-`bpj&kkOi)@PLEcju$O1q@>84g~?*K zBI(K+sl4yLC#5^WZnsICo2iHm?DSSb4!wlCEu^%8aN zY|XJ}fJp2h${G9^*dA&k<{^^BQ_S?jF{&K-3?Ao_KLcvyc}hLJKIA7)JGe?yQM;3| zDazxfI*EA`*u9ycjb_E`n+YsDc)Ec)K7U5zTaNS)1XUSYAKJmlv@r+bX2A1s$o`Ic zP~DVh6-FtVy%7^iIW&C7s*^6rk<+64V~0BgdyFn!w5m2jr;TvZhiha#?vM+V1xXh^ zGzr}w93jk=tFfLWYfBleb>9Sct%7#2hu#r|kiBpL-m={^l~J|~41CswVDiWj3}L-t z%7d+1nV)RdPNuR`=(V$R2{L8q0t?8*l#-=nr;QOnA`F+y?=bwS1k@EMKi@12gBE=PF$U zdxF3A_!stoK)~CQYg+p|qNI5IjWBEr1u?GjYwgy-J;AQl8l=jR`oo{HByu(;C@DA% zNEn*c6gpi#LcG1Q0KT$91vsH5yP7tcdAb0flI&DMoP*Sv*ys_wRyHAJD%sdaq~foW zvX&HzO-L1gtaGiM!H$on)53*uS!*!_R?Lsx@=M`xz4lA{u+mk}vy%b!a!=zoz~MXIg(IGuaS&SE}n;%um( zeif?`@odfS_oZg64KND9E9%Z5`yHG8u?=w?@iHthjx#EENbR+@_5lQeA^%B*Gt7Eb z>WM;E@3->QliNyqe2#d8J#DgMeO{qYRmSt5IE#Pcfw``Z63EjtzUDW2ah7P$1yJjWga(Ukymu*ie3aOfYz90|V;Kd$v=G79-2KAMEXvkTY zaKWyj;?%Hq>K`lze(0B8ajkP*kKEiz18=(7SluNaw7wAM9CrU+-VpC?d5_oNAuFv; zK@N~%F%@J+;&I#C$#}|23opV`QvtVeC$++kipjFo?n z%C*LDv>dGcU`1H(V+>m4#kNszqE`m*VlrK{yjLV9PBtypM&L$&NPW8OccFl0f6T-6 zu;|M|L{;-;G#RmZ@S4hHZPg?9<#!F5EbpYrQ)$yBHUE?IuTplE=Qvh9!~S)bc}1j}T%%o=l$1GTO8j-=wKe{sJ`bdLiBJ9)j^u02!v$)GN!IpO?9l zWXsdZo6TGs&Hh-kOVZu37E2c-3AyLbVrBCmnK+2APRMYAb1GXJEvoprGRf=3k=&-q zhT&x*C+O^usfafTtYF7{+H7msBTt&k^IAMzFo}GF?chU;yyNL)z6+MdmGb~w9 z(o!L{_M~ATXrU-t`}B*?FOM#waJnd~MTuQy48wdklkD7SxIH-Rdlx;C$^Ir|*#?9X zkUJ0qY=yRt@$-%Bao!{k#WM@mq8pTUhyYqhm_-T8PU>1>Dr@}O_kucTZ?wDI?m>a< zR!qg2ePCr4^px+D?b+KrZG7RB-v__T$5%H(1egMY-MPDQSK*)bK|FggJ7vs*-Xl%j zJ(|99Gkm|FVytwq9n48;YjkB0!9A1Yd9m&@?_W9!*vy_49n%_!MA78he^c!$9_Zi` zUQ`Ci$G^LWr9PTod1n1WJ&&Q zr|$56VNI*j<=OPkCNhR%+;PSiN7C|Ri)^c|w3{gv?y?S=!cOgzm~!YR9ju)9av_+# zfzf#RYY0?!WYF-33`lB%K{9Or39zHLG#~*_<0Tc zX}gJ%45R1<3&e<@=FHb{*1xq?2Wv<;G5G(w55|<&0KY3G-WDWXYj3yxzFGRIyt0Fj zvvD5dD6u!zK9U@scU**f8dU7|A)r>52*QH zh(?G#)e*#8WQvkJDBKq_?cQG_f#@7R=e``dtwJGGQ^YH`B>JYx6c%N3W4ILm$3h(S zx6@&xP~ue80ugN%sL?-dY}(c$=!?1;idf2T&hVm6Iyf=ICg&&9e~Psg5UIYFKGHUlU>b?Owic~-CF}fHu^BZRu}%Ex!9qffV;+$s zhZ)68U9}M0#JQ40qFc^H64fSoN1||;29Aa^f2?JVp+Cj6 zhdnBBZ4Ljvnre*~CK<{PYG1qqtT@^<4jO~@WM%SuIKQ-2 z;li_gEb`b-3ixYF<=<#0zB}rj)#?Wy$zZ#lV&E*s-ai95WVKsUsfT~$%0{-+#JjE# zbtT>)vCEj%d3pn&{kZVSCzTY_C*4=>y&&0f0vTau@;P5IgDNA2tnC4E~YXrzHE$iOA&?BI&9xUl4iDuPn8N zmaOss2B3PHH!4=*3Oa;Y`#u2jg04tK#m93sRznK80%hjt@F_)vOWEGdk3kbPlKyGC zJC5`~6Usssa(I%Irv->cDUg=rcazWqeeiA+IKZt{H5psUZo|IG8ZlF?Tgt=TF{>uj z{H4W`-6Sp}PXj(e2h>J?!H$>)>5&FgaO2?K43i#51yy0S<(&~j*9IgQ&v>ugLl6CP zKQOetXzT0Gq3<+@J8TQGIV+B$wrECFH(&58>&YUfk4DsQ#363$?P2;e5T2+&P(+s( zIpN7SYq%y<8Fs^HH%Y9u+^^?+2E#t-mD3y|h>PhYHGloy+k3?`pT_;G2gIQ@uJ=Ag zaGSBpy4FNyg4jM39v|nB;l5+h^kZIur3@@Gg?^ySC`{M-`T8(5DuTX^@>AeIG|P1&Ty-?0f3-#WUo470%>^tQitRG_j2WK{Rl{(skKYvVjAd|y6 z=`TUkE&uW$KaV}>E8#QGNrq?Mn?5~3+3Plnv`{fZa#kJv~l;^1c>c_U%} z1M^@F0#2gHOW&pU*t?Hu@d;<-!%xIWMGS_ z9(|}Yfj-aTy?BwCa-o1bwfIMb(Ke&Ap8!Dm38L_JRV)2B~ytEdl$MWi~OH*-~g zZC%3$iB*z~S6&rOhTP%w(3H$2sY^C-9sJtHvE*%2EXF)X{0qBJguZfHPi3>dTB$;4 z(SES?z|1h?!-0LkeOgyo2^E4rU#UHxfh{VUY*gJG7BVcAsW~1R%tTfEShlm%Vh4$% z3RCF!IlToq5njGl>WqP3rTK=xD19mCj5p#zE~|4X5{LHhL1bN3&Oe-ITh30uDTV-k zXa2E^BW0nzm{T{jXoY9}yiY?imnr5XOllNJ_83AYqivn0*G|n6h=@nOMfaMYe*8he zq?NE;_xjCw+~dlSy`{V;6qckhK_%*MZ?A~m3&d8Q^52(kpAS*eme1lMmaL@TS3QoV zXp<0D{B$`{WBh}}YvP@MKf^gje(wrw_^ZZ@cf*Mh7AF?yrIk$Whg z0hYa%a_;+#IMv4?7h-Nh_)3X=8|%Mx7!%FOKmnoG$AWbh^0!QYPJSx8AJHIX>CnsooNMps(@$<(ibJerA}?j^3f~>jYHTE z#-aPsER7m#rAMNv7^Zm4Dn8cb8#FCuju)=mBKT1y8rlmX7fJF3(M)!Dv9)ZZ-!y2<|9PD)bq|^1bwPHR%UwvA2wiR%!>pAUu&qc z6eXF5riVhbH6%|o{dS%oPwtg{sa@WEO3r*!>CU~z+ZW(2qcNwh>-Bv5YwHZ%1eMW` z_GM8W#THqngaoQkY@%5PJNbe#w8X 
zNA66ME-%2eknvtDGY7+asz+3zQZKDeulhvS;)@{d;XM!8`QNj;)x?-a9^j2e#?sE= z%$LWPW<~3}?iA$Tusv}kT5YKVu$;1EU|o)8s^DFMvLuwo5Sm~9$!2XMqZ0z&gn!vg zyDlu}8}EZQHy3+j7T)-^0 z%x3SxDPTW9yCAIn5h#!~!Z}x)?AZL7NIC3)3sYt|+I)R=1i@wRUBw|~XQl?IlZ+Au z0XI_aqccbgK8X@2l!GO(9UX1I!S0ng| z3(ETqnN?7XPj!>lc)Ck?Tl9A52kOc0_CbHG+@j%F^ERYx@Oy zha1v`wChHOQnAb5^B1PM3K?RY^oB;Y0VT__x`rj2VfqC!3n)VfTz+>sABzN((dm(A zUe}B$H4l(K#DJSg+VByi!;#BT@^zcxi9^s9_mVjcE%}JJ7nTrc(bTs`JY~U3pD1b$ z=_{v&x--6H_lj_bSEQHxTKn~8UP$bS{8^5?k(owWTP${Iv{YpeCwYymww{jqtsh6kEyavW)E-; z54)Aeqx&jq-u2hN;hh*vVO(adRKhp>TBiI`1RQ@FTXD6d!!D@y)u*Q(v572-=>h&v zMN#x-M2H=MP5CF@eyITbH-=G&{;+<_j<9{p@Ov!xa#lkR6oXXngj^nLVhR~xBFNgX z)tT9X+cgZp3hDb^i?NlqVT`z_%HNWISQZ$y59Ptqu$-)M^re}({O-Zj`96L=r3CyQ zAnJGUf4!A#s#cB>cO1axW$?5I4*Yn;thpK&p8Qu$)islN{kY8Hp16sM42 zU62O)Ry#R6B4h2gI7GJ!$ou3-sHVb_JLOhSI0;JLi+`NYmC?wa{lgyVZ9AUb84w1w z4G}MY$f55ZWfQ`>_eY+Y0`un=yuZ&plTT6CsBb+Q|3SUHUq|g z`=QN=c;imWgB=uJ!_D88FA%X`<@Fdo7R&i%DO_dm zrOqDcoH$|HjD!!Q7@fUCxlI2W7_#^yoEU%BDW(3m$o!LgecXM1bg7VtAx3HbFliVw z>Ph)e_AGc}>(7q|f`{Q6nJR*bTt91@Cq+}xb}JVZS4mPr2A%=PPC0v$&f9Vezw-+P zLbo622K)QJW9&cWVaXnS*txJ|CEJj&(Kgod-DMsIg_v$!A?i10MEt3@wWysSbvb&i zFJG7}J8@f)HcFDq^X_kHKy#`H@lPCjx?pp_zBv|i8rsof;#f?Y2!VWS?}*T)*5S}R@{+N$%Awt|sQXjn)#7}D}4Ll|l% z0h8G^hb7(|g&-M>{tkR_5&DL@td26p?dNcB$NUo{T=41l;kF>U5wFkk_LFn-^5uLS~Q}b@B zkfXxjGSkHOcFxm6Mpq9FkFcwT@DEdL@I-Z;aod;O0VB9h=+z0CX0`X^471_WQe|F z1wrM1gNpwJBK{v~B>yi9JN^ApK5ZTxgeWz2q4~US%D4(`7%&E5H#nqbFtqcEcbmhE zL&+>E9Z8^Q`;)Uv7^*f+f5e+t_3dU!)b*KG>m6UNL^qx!CWV)rWH~YN9JXQcVICp& ze2S#;kd-*TJBw>0(0jiq9v?yD>6P4LA4@|Ny3y^)s|kbPh$_n2fJgG&xyGzHWVgOA zg@={8#APi_5lELhR&;^Lm1c_8tdLKWc%e?J^R+-Mv3};$zjemC(_3z!S&o|D= z^R|n{J=hkHzW29SAlpGLblpc%p0doW+z23uKZzWoI&Qb{YTM2awe~X9!D}CP1+aM` z6|(6?_X<0R-v+!s17Goes+w>Q_njw!Of9nNrD8@7`i8CMt1K$0TrgZN*`psQQ!wX6 z@V9pomC5ZR>60%-@dh;LKaVc1Blkw>C-H8@#NhD478&^ZhpkU!-~ScA3i$Qw7LITp z@X7)8H}(`OXr7OiJ4Os}o}H)^Sos;Bwac|w8veafNB>;=mRSay1`~2FyGXxCnJAp6 z0BMc3GERi#c|*3`?Ab_IES>=c95Dme{(d?3b62`?cTqkCJUMpML>Ka?*}qX#nhKk?<3PP|UE!u@0uIC@W-nonCy?V^N*a?J~E48J&?RC4sLAOFM;b>0^ z!!Nm9{p_4~PeXSeE-G{`N5Pp4Ph8SMqJ;;fA2{(wo`LDg0WQ-+rf=L&MoG6&j9{yW zj<#DU&N*iM^=L|DfN2Fh5dA~|RWL|hUwQ^OoDV+y|7@9&R6PSbEPc0a>g#`m+KTae ziTnpU!v~{^XI`vA4Du!X>Fbn+7cfM1qP=^0zl#id2`>ofmP}OzmA9b(h*k6W5S-JG&5dNfBEDK^xzX5 zOJZY>rK&W^PfTHSp-NGEzk^Js?~=oQ;Z});CyjNAg+x?YNs!YONL;{%1&P? zv9!iMsxSR5vX4w667HFbTOw~j_)A~*5W4u8a-E+9LT@Pa7@7GbAN3jGC^(WV>V}?r zl|b>^4*atov&>Rxw#(Tf*ArCc@|^oNwY*D;DvXOk5-Dx8|K8;ZkZMy{A;l~fKVDe> z5qyR~KZ&pHvs7u@RVsP8inw75%ghvH6gcPZ{t+}$le zk>XM;xVxrMpcE-?2^5#&MT)!A&2{gy^X$&f?2CPq%s-jQOeUG+cfRsD<8n$OUq^+R zv>1h>$vel>KT582=I@%qdzc5 ze?aM_4YPo~(!ov@uQ%4dln%c;JnKZ4|73X<{tG)C{{>+a=fR8me1=8E@i6)gzLGtU zg>*E7q4>mhC^hQJBF-XKtRn9y;vpZ_M&xDKUW}~H>gUPjVl>ZuY^eR{xZG^(9cl?n zZw4g)t=2`CC50cF^RY}-uHee#JbR9?VfCr!)K;5Ahuf`rAm3mVw;Zs#S%l8y3dul~ zF@|I}E+OOxJ4)5ZZd6?0+?CFQlSsFPF9Yt5SnL~PjMAfEO&ZiDe1=3yb&CqlkFnzg z>}UXKGjElaO2%J9BFR>k)MG9VPwED_<zgl#0RILp#06i(&wjld+Otoxnxq7{ zGf_~2julImr*vv=Ia0#M4>yt6pEarPxy`0F`^fBhZiYqMn7JxVekeAqGJ?*cw7|s|U>)MlUxt7I zSlkpmrRwBij`FADO&{!0=}+hdGEs?0E>at~slZ0Lnyko+7i@jOn|<#Z zhDsu`9M-XM;KX>Nl$}fx@(048$q0sZj$xg`hLWVLDVmiV)W_59&M30Xzt)x%bo)gs z9el{zAz!Vd9}7Gu8){{rc1aa*03k{j1L(Dz%K5!TutkaUK24CaH@|wrLO7CDjdLff znC;FN`s9vhVCeY;1|#;VOjhng$4;&5TpvY}Ov?*o(v)ob_Ph??Ug%+0b~u=6?4d;tDELF1m+9F<8MlcS-%?lU_K75FoQ))b34x+OwI1nu;Xw+DM`h$UI>jV&R4Q0ylNQsn41yhf$WbV6NcJ( zRhF=U*CDEBKrDw1z!;x*5R)KB7E^^v-Pj*CY(^Ct3`8;nqP9;4Gq>y-RnTm?-{A4U zf?kb#dRO(?n>Q-34Nf=p!vo}cK;m{|CZn1@ekDhe>)2tzX4pt0Dp50#K{VW}b4Sp* z?pQ%j=ay-WvKTEYYQd@zDGlq7fkq)z~}=Kf=k)|H;4= zY^>+Tvck!}A7uAW4)sfbA_{7%ie=7u_e3?(7mhC+5bM`iyOEK2HM? 
zmNj`>6&0+@friP4AfJgaV78<@dmv@T*|{D!9Gr__rFAbRZttC>puF;5tF89U3vlvwoGyRKBF_x&-md>x5ua zXkWMR^C;$Un|wrFr%ORiylTxJ5;x9JU7j3CFW~HwXRJ$S_*i~`0EofW3j0V&3-?a- zoJz6PCSi?$o_i32mq4iIQbV4g!%L3ayM4Z1NikoeJo$9wxyx!QwMT6F=BzH4n(xhg zxrMk?@LcY9aLSsn8BT^MU<4T7Iug;yvGqf$hS;AYv(|xYVR@KjnP_&8z^IPf+&DVB zA<6cZl=>q&i9H8c^}SHk2WK2fl`5-;tub82tC_IPmfa%+qq&?wQWtE&HBaKs;9g^O zZ1&VITRkT4hWHWuSp}SheOxF!!D4unZcbij z?VfYz`aeKoQ74Mq2Xkj04?sz~a?Sp}@%vt5&)!HZTP3gp8PVogBN>lRi1>T*H*!oA z>#8tF{IIIpx8`+-Q195;?RLrgvJfZqiJHSN^Lp4^W=>s^}HJ%GUH*VFw^A+=5%g_woMdYzYP@u)n98oUhkeT}?Hwr-+y zC;u`I_bbbjA~DkTz|ia}TlnwW7a-ErQi%W1gdiIGL=azZ%NPbHLEJzIoz?Kh!si&P zgix@^PMwRlE`Z>QP8M0Z^(BdNRBCCJ6FbAA6J21U61_1=WL_$l+iYhTLQPt~>f2k3 zmc;AVQJan)(eT3tLs)F|?>Y2-^WuOf8@L~j$T4GlrH`4ca7t6_1zK0AWz%j~*b+S! zsFwq_+uN8%3bSo`peV&Ge{H$6=EnK*&7@J0BvND!W0Hc`ZNZG+sBTe18p+>#fx^Fk z=kE>~f?4PLjA4SJw{Q14I6mP){vd?Hm38v79iG}Ov0CH3fnl64IjAkf_}(EuwbQ)^ zhW%nx2PF7;?4Q0Fo+PBIMi@eLK^H5kU#qRac`QapsRqNmn=V$xzf4H6+28kH(Ite3 zG(Te+cU~PUup>B@R4fR%zw*2&8L$bK?4jL54`bf+@z7Vx;)xE_kA`G#5WDPNF~^Y~ z8`l4!S#U%?&$%@ym0t*-et@JAFpccY5iM>N-vRgv}*Y-PT~V zKvbmmM=6sjasj+5wZu47-AKf)>S}-myM!n#|6T5r<(nV(uFlD6y`A_xrf`{<6!s>+^Ip+#-j2XK?g9l+lo(w$t@Rd)=Jun!UbKO*j* zyp0UsfxMaeLUQ^;)V=aBs!n8SUnHGqI#FFs?()CwMDht1N>lAR=IOlb3&Te$yV+6t zwS*s^Ga+d#d|YfEtm2o_8^SzcB(iHWA|cJ0skZ^5uMbS&X-`{k z_Z3$U6LI=C?#B3@Q#Q~x-b-w#8A$Gxdq}7}!ACiCZk7K(Gt_^f{r@v5&s8AlUP`Z) zUpZ6TsHrGrf$w#F{nA0SLNFjBQ_Xz+TTq^>FvqST5en(Ucn9v)x4H38aN_GNrNwTF zp{i0Gz&w1Z|BaW#Hz&3@ztfKSO^WFJ;gxiMX69VcJUQ=bL}D-QdcGer7DC;IwLP#K z3m-CYXCB173hk@>JgJcseTvKrdZU|b0)-tDxdW^gy#kFc$DD2M1?_h_6A+3!J^Ps|wmgu_7RRGs=!PX{ zIN=IaG1x;ME8Q=KC4kffI74>0^+FXQEkLDjh zrTuy*5))QpbX)QL=>wdfu@T?8+BNoN?jPWW8T*aCE`>Wz&sUn7L%0F3LvxM!G77Qc zwdMt`S1GB$*NyH~jWU&_6 zOw8yy$FmvmymiX@;W6|hja3hiG*)A|B2&y+eJ?e$htlbS#rimLi%qbq6o z*R-1EfSn#=+lW45Tu(0I5hhNe6()7G6e&zu4=<+9qwZon;W>DlclP7yz%$EaVJfjL zK<#OWD_8jCC3R7x5%t>V@ZHg1ToLwHvW@URyOQGgIG<}Z<2Voz+r#g|I$q&_-jTS@ zfNO-;gQJv_vwH+0$lw>89Jaub-v;3+!qNEqMZRXQ!rPcsi%4_Aj3k6bzr#;8SU16$ zS&i}EFX6RIfXGhaW^o8%Xm!|4_{~XZ2fEx%f8|_wkj3M0ZxI@bOJK6Z=gsh$BzPrU zOVi5F`i_IM@G13+p&TpD+SgJSOLzDP`452qLcgi`5JY|cfyGCV@=eyy|KX!}T~shK zZBTvrYguv1lZe+FIU&*Qm`nqGQ zVZc0`^FxP*VOFvU!L;F*tz<%Qb=_wPF~9fO=~*vZdGiJm0al$TBR{N5<50<~Y|yKT zCa=}Ke^;PWh0s91RZ+7(nlfr6UF}H-8W=^D9i`o2V?K#wf%^xpWc>FOn?DROepS$Y zk>)B?O{A!eT>d#3LriKETF^naLKNWJ+Hgl)3^km4pY@!-kC4%MMOLf6!sFnYS8~mq zA?bx`I+<#>$tZ<#X?e6a)pv6XOQi=VHK_rJs)S_0kbG#)DDFs*ol{Yol0XCDS1_Bp zJ!hr&xgnDj!go4{CA8R&oHmz|#Ess8KM%g=`du^L=cIZY=^%UlRx?47h1wyEk?0;w z6JX+i$&-$9x(<(Z5;+rC#U8F}zVs)^^11W7Mt-YGeQCbh#s!c|b&!&hnxl|o(r!*E z$zR@*2gXjv7__~|6|Jf6?-Xl>?j4j2txF=01uB^dEq+VWw zX^pu*O8K<;MX*h>Z2rRa1!ZDAi23erXJW>8mdYg(P<)0JVxx^ur2!a(g)21zEl2d`skbj#1-eVO0}i z{RI_*C2J|&5y~GvflXzU!zh*c-TAe#5Pwzhcp-!@$}C$?`;h*|aMA_k%mq{Dk>Eub zn#W$yQyCC?%ShX!2PW$K?HT*Ji4qJuV23n*-?W|- zdbj34VOGw&yd$LYbtB8pZj}_kw*e%|k=S6Un+mDrqk$C(Q@c+`j;DKCmseCHoMCA^ zyQzWcB7aod&b|?_)E>0M-)GO?@{Zv~-olSi)R4+zaa5QV&~Mdyhv+Qs=H%HaCpNM9 znXL9C4)N;eZYwHv>@qQ^XsdLKP~BeNC=p?VNEN$GZ7~h@ChN^gVG(eAB52@2O4SAD zqIWk=r$&b=In%5-I%hwyqK~?S@L0^LeLk;*L#97y022lRU_ZUYan!7`Okwhq^cFF7 zcwNm{n?&>mG5{vWgz){BSd{J7hygg(kz5sOSo%La%+pb2iHO!)rbM zQUb${e|IT)jRh46{&vi<6vx-lL@Xy;5=#RCQsg_^07iQ;j44Akqb{578RQ586WSs% zqjn>dF-saUO62_wEUIS`;x#@gSq7w|wAD+0WV(4cDn!pU*4p3?#=Pws%ioM}n~xXqC&>CO zCtIs)Ap}(B4djX_C+Gp@VM#@DA|&rhQKC03gPuGQCXdOwMnoLXP_K>n2FlL}0uD!# zo26-{9c=(;#Y-cTdVP*ZmTw2aC86WUw@BQKn%xdt3L@l2!~RR5aaq}F1uF`dT0e3? 
z4v{rBkjQtexry_W9T&$xkdp{{Q*MN5YaJufV@7cB;aM6*jv<#L<$Y0s=rl4%=l#ED ztLJoUH{W*t0SZOLP@N4AJ(~M2Er{RlT~d$@NXe6J{ZJd=_&f>p=8vM55wzb){&w~$ z>-t~@1J;Ry{vIWwODMuOJC;;VfAmb;oCf*A{G-WYk11hI7q2fB?*|#=B+M9VvXemC zOw?ulIJf@L1Qd|u!l;0;(jo3cNs%QsTOptNWQwP>K$iZwK)Tp|BfoxZix1#4yux+fU9%;-S<(4OLM%2 zu<56DsaCwkL_u+7LW^)iT>kip&Z5b=i-BjRcI2eBg6!o&)Jna3_Q|R8c|w^do-}#e zC%O zZ51>feoeev_4UU4jd><*FjAaL8YV&pTk z*PnqZrh^4JUgT$zd`e<0>sB`{t@O$d`w_Jy7xWgCs*In7G(P4wD~rMd^L%I_(MLw< zwa_jhOTN)%0z)o2k>h>I8ltE1E2h%JX`r3)u_z|&uOc&k`F`1F=B(1Gnxhbhn~^n~ zFhCi{rqKia)b_+jHa?w0Y!Nke^;OS-$?J2e<7x6u7gnF;)#+d5dG3_4La`tmef|E6M>6Z#u$xWpu@`O}TbWV%m}!+l{S0Wj%Z+FR5RR zQEJN6Ir(5fb(8;N+Nst*-~7}W6j2?qFYYI$?J zBDWicHHTeWuMF$%<)Zsmhns)4c&mg!_B`a~7WIY0>GO!5J4enK67Te=9< zTz1L}`gCO=)-K;KVJ<3|hFFblr04{2!`|4-W+1nCGS+S0bjRk4A6z9Ap*EB-q=}ek zdwn$eIRNaPM7tH7Vp$HQrtpZ5KZ!??o#`DJW(!TS{Z+Kj{pY=#qv(B&uM#x zP28r2eCkoUxwN5i(Gh_g_(5Ut$k^;{+5BJZm7+6|5%|Qaj}WZ;J*~o3^RBSzzX8_% z5`GyfT&WM0E9<`8N6e)0MGujT5f)fC5SS8Ei?Hi>Q}dh}rtP6-?nGiSGesI8$)2M- zD>~OKgcCv^;PR62Q+)o)wtm+TTkNJKmyQVYmoulIGor$PF>{fjLa+Mqg1UOoMLKq& zi02FIi_)zfd4ebg2frD7nla6-fZzT{4E(v)}?c&Xt z+^4sE!*W&RhDNK%X&5$j7}_mZqaSK8|H%a}@MDc|bN{$F2#czm@-~F4 z_Yms8eP>}0wofoF4PLF_$@gxcOe8d7 z&yL==3;4)V#bLjAvw#ALv1~+R$RR6C=)0EFoV(++ZNU2rC>MkH>Q50HVHrN2Yko`g zNW9BBG9<$yXg-T^k0Y;>893@-n)8t>={+17L$5rmqOg&w_DV2bzMsImV5HNv|| zCOWxWC7jcNX8;^Gyg7lNgT(xHls8dpG1#jXNf0f^mqUK%HIt&c)y(it-aO@FCS@mT2Y7m&?hwXHOS*1U}-o(4u);K3%nfa>DGTMUfn50a=+b& z4*}u)Riv@Es3$F=dP^F{07W~afLV_he{H7y46m-*TDC*0PqvsqL)Ltv5&_>9W2mNB zPTqe02l(1>vyH1#XYq8tboN>jqTofO^?of(%)IHEqh2^iBU~~+CTtPTqJmH2wj!?C zANrUAux0-N3-h8Uhv}V137fc=_@ZvP0V%dXTJGAabRun;vMewYs->vh z8SrX*iah!XeMu){sNeNy4|?2ubTqpbLDdj%B;N!Wj?W`oyC0qt*{c4roBe#PJV6aMpi~u-NO9`mEVYg_!TX z9zzq>3{!xG(_1V>qLtQ524-xGF5c4ZJ(R6&#_bk)8NRB)6@bg>e}H!8-iH85NhB|> zPB<7~EcKQq$mE$vP4e@#*$?Yjerz_U)-Tar11SjMdy!^?VmaVgwB!`YSD4 z-Ak-`v)?gI%mDc%S=uc`vc2Ny+mfeQoy!Hd4gl*r?!gf(dzv=59PTYdgOe1R!X%nD zap1ZD@2j>%!2-`~Ma}H;A6EHK`Z}l~N!>;xF!zoBF_(oNB*itcWhLPeW6fLmAF5W3 zi*__->e#19lnLI5JkHwg^}2ZND2U6 zi%XauJ@~Bhq)doQR)&*r7haTL^z0-&dLtdPRYfi$z)_NI z;!7O;h5lwWImRw`^Uqfav^G2J8A1bfpx9>EC!RBP(2}yf4Ng%tiypZ`w!+}ERvO6L z;$|C5Q)MDG+kNhc1{KZTh{MuqDXFo&9IRH;*Tmq}(BHLnnf07~@#qP5bB>8x?6=V! 
z8lAjk6w$w3#bkNsx->k+y4*9hd-YIL#ybL!$ncXzrxmDwSX&o%n*Yy->wZGD??eqI zSV3w{&L9%ePR3w(KkO8}Z~J(LM!Q_zXk_H0R`fpGhkb@bq1l{T%|0y51BOv}gPFvi z)RqP`{Q0XEv|Rgu?lmDKPhe!5?&1eP={^^Q)Y=6aEkI{ejqCnIIg%zrDd zD@O`i`^1c{QnR;Xksn}lKpY8x80F8*tZBQ_D_ui1tcNZmTm9y9VSrfQ%~l4-fdaGk zKy@#_$)o;}(rAUxywD7%R63CqPyU*pxo7f;n>Slsrfrw%qZFAh&Dn#J)Q$I7uEZNh zDC75W4gy7AIh@0&{DE&Flr9H?o_Y+C1oLMhCCk&lZR>%jq|VV`1qsXCzX&e1qtBWg z@@B+==SK)cL7$ERyGdP5OOZH)UO(u1a8x~d{XF0~37kl7#iWAL&2NedC=^>%?y#Wh zYJ@&*fqD-=l#Ue^tTd&mYPG+MQineu?r=o`qPNb%S4W)_j=|Sof?6UQyW^bs#Bt@O zb}GkE$YaR3PUTyT1pDOzo!0NwbXi{8jte$YvqM?_S@3q&CQ$%{0y!JzF|Cg%o^Nckrd^`4RcSo z!h(4%Z15)Bk)f}7sM24kCbP1-%X_%SR#4>J?EqwF5jC!idLl~a3~ri?cXjFTYCFE# z*yU30S171vZk%`5&inJvcth6@^37xg`A5|4xZ=XqGEST4XTST-AcZWI(vnvE!5q2d zQ+bV+&O#ERh<#UdcZ1m#KXw7T!ilwl3uH}Xd>qB!k+x)|uxkAna1V7K0<}*m`2k05 z5ksLlbO^Rt9CW9Kr@2>1*>+UrnC%{X!iqz`VReUkuz+)vVq!A1|NUmO7Og67{S z^KsK?@O)vtO{a2uZc@J)B4eo8Mk&u>$HBP$vB^A}KT&i&a;Rz(1Ga*xxA#!|FXcmO z9`8+x`G5S3@$&-sZI5_p6#MLWhDLEI(ss-wEuqE4zuQ4x!FeeHLjs6EDR~22i1bjp!%P@2v@Ti?GqXss9=CNy zk_@>~-N5zs0mu;}Apuk+u4c_ZuT=l~c}A>|h;*n@bH=CYX)*Z$;^)mkY_~{gc$j!(3$Ug8yA&O52HZx?v)$#g4Yi3C}dWU1zpJ=*;em8l7J}St8 z)oMBQCoAtJS~LrjW>Prl*o$Nx1)-42t72VvzzhS=HxQ04B#a(M*}?L+Yi+jyV%d{a zCM8WFL!rMf>LSp5zNb`tbX)vs=g^1JT0&DBBg-Z}gG3;Rj!%v+fTZa4yd+^c@`6MQ{hR~zZik3_+(G>H)9fdSvJ^Mn#e3$?l2sLJW( zp0VnMEJd2tid6h*(?9D{jnKoxv@1pcp$38|3vG1j<55@zLO_Db(>6-%056JMVS0s~ zy#P`ME7G@oh`Y69vxH?csP*8eS7 z9|*_CmF$lGV(wxFzGvYdDa& z0KB8IA1@H8GooHLa0Rn0)2REaQd^TAUvtIMb9^Luj;4A^8;j&T8}RGVz!;FMA&`Pi zeu~*cjkF=%Y~0Pn=oBeDh_dWv@1E&pa3s8*e^V=i#fAj*5fWy}LC(&%iLiaKAsXAb z?pDqq~dJdShgtBR6qD&PhDk#QG(f_2;@DZ4lAB)AQA!M09O-t^&}H^ z35@O9Ow^)_d|K|O zx+uv#+@q>t@4R1TK;im0VIpR4!4J*_&B(PDaF$ zm49;HGho=p4uzzRb$|Kl6F518vF-AvtI$E!Df|axI0arV-$SikbhqDTE3UTZHmXM~GL`~UXAGyQ{6N09E znCgt$ zKF{1D9I30@hL_~Oyt2wrCrAPVFN4wZc_N>ORPk4l>KF62KHA?j9~+tI7A`Gt#KY7? 
z&s(nsue2zivKb?gySpr{MgzBUSm+GnXyQmL>DzK2whF|lu{~0B`;g^AMoJD)C_!78 z2L<^aPb5ZNhQt7)5P#h+0wq`(i0+-`2g39LDfVWd$hmaMA|X$-fMCV1qq6c$b%p(3 z4n}2(P&yI4u+MR-L0C3YeG7cpN?EPq&e4~?xAOXXAoaI5N2p^dDT$a8-4>;bYSS@Z zLE9lj9jGT;CDRDFOgc%+Gc68))|w+|*rwcfYPvB~DYYFtC5LG2DfjT@>Z zhwY|4Hgb<~x{He#k{a!ZEo&J(nFZO&j-8~8wyqu7cME}m=iY!W*wkCy=KNKMUj@g+q(Wq+prX!8Ro zEN^J%w-2my8U= z8LCpp;K=dZeLm5xfJY7VkmJcGw0z^}cPfz`Xi77DSFnxb>M_>g)js|-h)}2{!g3lZ zA)@G?++A=kG;)b|o$bs18**w|r?&lK7=vz(nwpC`%6ee#Q=01&CFKTZojrCiS|OS& zj@@-bNAhAW$Ewgiir_Iy0)o_7bZB-r3>j~7ybJ|p26;I;!fKgtYe!2J0d=yEWHEpH zQBO)MhOM#d?Wlz%S*O@*_j_BFkV1`2XI&t#O4FWj#tz>5e#JNDeMkJbV)oLDj$rIiwD=}i0MI}-}2H?qz!VDYF6T} zgcb`5RTqz6e?@`tT>-Yw>zE5nq&f!sXzKRIEOcDXwT#pc$EVJk5!_9;W$!ph1@vYs z*x^{FK@iegA$j=LyHsQlE(2unum7GC3l(!V-wjVEM7%&8XYdKq}3nE^MeC-@(Z|aPrB{KJdvyL_K0IL$+oS_?x^C2#_@0UMMd@ zN-O$s6?I8Fwi;DwbCKww(%|xoCV%^`cC5}lQQ%XLT8xKHsORJUF_Ht+v^cKz#JhzC zY7hJp@`?4(2|`x0bhF@_`1(nNR&cP+V0{sY{vLHM_A+B%Vo>^66PJoEF*a3q@44k)krQ`F8!cnY*{7D z(gv_>uCD_T6fsT&pR|dyLOqCe8 zUZgZu1=hV5UgE0C_UUb>GI>QjdOd;7jR_}mWCl>`*AJ)i5uw#l?3ZWZEO)fVCMWt@ z=oX>B6fY#6Df<`$@04){pDxBnpQpP&=u>ysob6)&i2D)GHgK0I{$I$Rz#aTC1dl!A zRF(QaF~m1=!8P~n)dUCDkCbt$)_ch>9*2j@TGsCGM5Vd56c)yTZ4)i|@fVg&AKZFN z98U?b*INY4r~>u@{_vy)uJNDoWK{{d4n#y2_YDsFms;u^w42lw72C-V)rbzs?Y z2QKef)cgW>NL$?2OH;nl5LUVS(^~)lo>A{Km<$ZXojRYZ!M*G0I3aPz|gmA;{ z_8Zmlvx^1Gw@EqG4&67mbVO{g-YGFBms6R*M~WD{d3)9^L_FPPD3(mM#h;n(r@JLQ zS;9G+ezaGPE|g@7Upo4e-Cm_{Cy1a|erfAe;iPgj1K*YsKI_@@W-8|^Ppe@uOIZ~~ z-jj(*Va5RbJb!KiIg*(eW4RSq`CaK^wa%+LdBgcz-{J%8Ihx!H!zocPYO``cUnRiexviLn+JpMlYdE zFnc2sVLx9_gHcJD>aD4dX>Dv)GY)eVaWF(#{}1m)f;|WH2?&Cv)9Gfg^4&3s z3TIC>cZl`a)B(b1wo@*v-F5f;AB5@3+9M>(KI{HjV)`BGI**wukVw2gio5W3?bHE3 zke3vC+-nFxPP4WQB_a|-S6bCy(^^4FWf7Vp0$e`4Q+z+#)Q*ATG-hEc2 z3>-1X)Y3_njeO4cMBlOg3N`IHuGyF1mbmvEDZTxsrt>ums-4~QdeM}SbMfmLcR9q4 z7e22(wsM<&wDd&$g0uEq+c)ll%YAA@&d39)Nm82J6Mo905#@pRcOPhdr(T8@=I6C<O)98n48-%=>ZEm;tmxB0koE$7yF=VY0;p`h*3OFP0>)=z#XP|LrY@& zX{#~x!UoMryo*G@r9kOSrdn5Yj?b)K!Y317p?1HTB@BRv1xXv#VohhTwg_3?9cnTD z%;88@`*j2zSC&{*`8xA$tv4T!Q>(tn{Z`H5 zGqyvu7F^q;j!F4CrjoNZQ`eTYo_v=~)N{wY+DiHh*_y9@pPvuH@^JXApP(CD{fD9a zgmBma6pAkxm^x=#ovct|kD{^5du60&ATz-}nu?qin6_X%No>*70~J=gh+R){_1gfD zICVW>5!GM!Z98ZX23#d4|ZJ#IYo>o@|zj>}fNv6iQ{^UGG}ay}Mq2;A6(drEHioi71KKd5uw#W=JXM&6uD z-9OOR6*Zq-@SLgU(}X(K9yE1T7HnGu1cF~ivzAB#YHK+=Bb6v~K2R+U1YYQ%f^Kmb zRF-E+ILui53qAN{o+hzL-G%!5{753;&@_JUjXn8z?ZqNQRdPRsKIUo7kK7U%9ctNj zhh5mIK$Yj0j%Ki!&2zfBX)@8dL`w}XSjcH6m$qX}K}W3zI-S&{8({u!Pz_ zKnYO>gKnmZ85koE$_)#rkW6|fhC$e)7FtK1%bPl7Fp@LNJKLN177$D+6WT#@Ig z-cv}mM@pUweMJ)vlx}gMqL)T8LGxS=;fdtR@x4bOyQEB&g8KI`#@xdK_K&5HCxaA_ zY+Hcuf4M3M(LE1hx3QI3BTY4AADnzrLKkgc!Ijul=v#rWFmjv!oxorfx`l*KuA;{* zU`89*F%!5CIF(-%RR&>9Uet~mb{o!pQz1+n>f7ypU54!HAy1E5!SE@{df42uG2h+~ z$hpi*z+`M1U)VGy$t|?mS)6!8=)KL(Q72r0SbqLn^gM+^9s;di!E_77?Apft0G}+n zK(75r$%KihXOtvN;vILpj#<-b!v0?6+XIK2<$i)@u1GDHX(APyNJ=LJ`&`W2Z=}== zEk}Kepz7;j`^jBantOIb%;SB?H&jCMMX0*m^;_%PpDazU4(%mb{nbRWj|8_rDLicL zuja&N<|d@%KQiZwUWjZ}hFUNv0^kxr;b6_ydK=9&_x)~4I@%Fc@vr~DU;hiU26ax^ zU7n|<@BZL%rkrY;|lAccnik=}#Fi*i3tU`b;yOt>sxnqgY@5b{#$O3nidxDa{6FDxHSMBPQaIrPwhdz0%$X+}SV`>1R*5;C(DPh=8bV_e$@?NpwF!L!)5 zpL0QONhzc+Ahc-->PRu}etpz*xY!U^_dlex{{ia%Z=E6Xe{<><|KrsCuSCSXxa5D^ z3lt6yFXE+)ktA@F5ZwjbnVmE!sHHG%ZYZ{tX9qlIXB-X?q7CRVxi#zY@>$7y=9MgP zkc6UMKKz2_IO4$9>2aqZ_a4cQlLijIzy8GklL=6nJAg)eJ}FVuOJe!sQjJ@!Z&r(! 
z)QBQ#%x{3fS9kW{fbWAU0Fho`@b`s22A20QB;y9{6tU#g^#19>Lcry4!Zgsi{DaTj znZiGS<@S`j=_L|3o;q%ea0d?2(Nxr2zrlok1C=nkce9|okK3v6Q|t&n0wJO|w-%Qi zE&zxP6b~Cp8+K>3mT+=iXN@ejm+zzU(JW*;WQFr<(8abynW{O0^I#JW7%x%@*N){H z?T}V;&+K8bl=Jj2U&qj0!1indCc#o9+++2alYK7%5SMtga=Smy8FW<%)_Xh;wp z>n=;$$GD0Zl6n9c=FJ9$8BU)?+bjXd*$Deb$d`cR_M9}lg;{I_K4iiznNOY{dA#oV zHi!f;cS@Bx^!@>U9mt7#x`|UGhNG%scV^}P^~l%{_g;ySg43#K(BXSdw4J+K9UF)x ztR6l{m)R(C;TKIpdP4KLHi3`l3i2~E*cMe~#qVikS_{anW|^%6>nr8!Z^iC&fZ0Ou zs!uUgGRF0Rn(L0?o#|Xkt#Z1y2r1dl4a=P{>}E5eSV0f6Vl&tCByiPf`VU|s!#HzY ztpeH*g>eVN1LKJBb=_CY2`6tFP!ctdtxfs~jy;?9UzDbWA|7ck2hxY8DVz#+HpHQl zt(CvDNqlw3j9#P{8AJD9crN`B`AKLV#w+I=Hm%^g0ovbwh>c;Y*B&(9<(IyOg&I@p3oI#_Y*P$kxW@9{vG7 zR~Yr-SRmX!Mc`yXteK0%j~*-4e%@$)%ik(@_zi@Cp{8znffvL*hVy1nrkYXP$_G^} zozXb&e8oye6;&;vS?a>B#c+T#HeV>5^SIYI?^$th>JzW7~>rPQcQ zY~+NZd4+YKKYy{wV;;A2OJOFf?>&%4=3zj`pkJTFIw~6u_~w317l>&&DEYHQ$}4z& z{^?b+wkYz;A4iuMgZZk+-_?cys*#ZG+QdvQiSP@sSE1^w0@Xi&hor_k7=CY&TO5lO zLpHT(c#MRm$v``OjeuLPr1=o?dotc38{q7YxC$B{P0X-heSo52hJh`y|JFMQa}Fya+y{$J{)XX za23}Y==9V-M=%5&#)y&XrRUy_Ppr{gOTsP{m#pVXIc|hM?^T|et8M@c>M+0PKM(t) zC*oBh(Tud>d`rR$&~88Fs))&7^ux z3l}pL1(CL1zp{_1kGS99JT^h%YXA*!Yaac%L@|!M{~otTJX`TPgm6`&a4=w4cJofm z3SxO4QE$&hHaQJ$H8)}B_wJOo9cVJV=ju5}yRBf$!jhPg80me}1fr3NY+q9e@uGXo z{3@OOV9@*JDT1K1$E4xx5A&%EVGsoLe*bpQF^mZ_%PYzz$#xR%4DCqN*d^IPF#W!P zrHJizk_{BF{(jkxhrI0SQZG0Di0c(5Z*+fy9sB}5TKWu`B8STuHKX)Q*)_AzxfsAn zP_dY6^LMR(fN7SnAat7|ZzJ;DS2e0ZEc$0I(bs;>t9{~FI^D&k#7(9c$=~wM|D#n1 zf`V}FR!zg5V)tZ33K=@rEt6@dr(@-nzLacXq6H{&%H-+3{OR~lv4}(n5-1A7Zm)To z@_bOWfFU6fL(Z;8=0i{~)x!i|wR?Zzh5PEKsJ!&M6VDi_z4SX0dq(z#D~UX-UZ?DC zyc)yPQW{0aU*Owsl~i3Mk*?zQ3rR1gB4P?VlBaA>`DdYcVuab5`j;IM{Rrt*sx<-y z5@y!x(}uuigrdK4GEKE`hU`+8fX&%-ZDn`~ZpBpXvi2tW9kCs z*BqDVm08C03=fh*0(W-e5)QAj!My4c5U{u#Cz8utCDr9OXU`pKJ^LtLfesM{*?`uE z+FaZ*dj>M#wGc4ma2^CApP;5erJ&D4TMaJepkPhR6R8bqw;9gMX6v5UsPSl>;Oar} zbGr~3vcZxTBi_R(#AzYai`Kk=`tMLjxQ<~arAuf8FfQ$2{9Sy*f|4@Ul{ZlIsx*g$ z8|%(1;mma#m40do?03?zIcf&0lDEm+m^Xj`6rOa>9HGoUUdtH-E}J;Q6s|rt={ZJu zK{ejUg{^;0$cm<-5?{i#dBf42F7?=gxxj5Yfqvh`-illuo!TdU=b-3{vTXNKqC+;9 zV4*50QY~k3*C9H_okpnN-mz}W+PvH0U(meewaW14gV_r!s4C9RvaS3dWSwPHTm7S@ z1Hs+hwWUxf4#C|Wid%7acXusX+})u-k>DPzIK{oV`#X96ckZm2JKu6X=47qp{C4(! z9y51xslu8uj@@Smum`QC_6!UZ)#ic_#cs>okFoAaqKM);r`*1Sx~w23%2yU*G3CZ@ zUPro7xS0Z3Hz)nD%9-#zuiKgvl~{op{o<0_>V}rCyv!jdrs-RDha{uU6q!TGj`*O4 z#FVfC^@mf=>jc}@l-2t?{&Z0K1nI;^(WPN+0$lWF67Y5xPv;r^$4hdctleB~RSG=O zM0~SC{s-qI0f-m%D#6IUhCS7NrB<16lr(<~P@nsDAeG}HmYV*>85zTZi8Fo8`)tXnOHn)?!BnUzA8eVSE zssW-qO^J#wT6QC7Y4=myzhq=97-|kGgk5><5Y{{$5?VyAEzd>6iCf9*>=`{*TIoWq zNxg&jJUn+1nwRci5fb#M%CO)cuPU; zSvF8XUwO_WZA@g}fq~roKLAn#WeO!mBxV!=9vCkoSMaYSWmI&L2G;mR*OqM4;+2*~ zGY{AU(yA%6o8UZ=i-+AQTt=mXkECRN1V`|HiEM=iEpOcC*JYm;3NYyr^ccnV@^$i! zGKr~z$hJy-u8dgV`|A86{K%4x(Xd>)554zLfhkN$2Za|JJPP7AB3GUX@d&{SI(2l+ z)|*kC2LO_bX!Hi`6Lthetg#f(uIs?8c_d6|aka>OH6$~b-JkNnr8&Plo>@aL)IZJpV0p8rlM z5_X0{4skQbOlylYg_Og=NDQ~|@ zJT+j$bY>?sWK$e!L~$wC5C;v-H{d=5DwQ6}^SVjBkPR3Rsf zJ$XL;8XR21K}S6}=EGRN%1|CUhf}yj!Aqg1BZXPbRHsJT8O!NuJh*6_hj!as^UO6E z-C~vuwkOwO?bF#Ye3mj9_~W|n2p`s{ddj-(s;{>zV3VIE73^DQ@cbGID=enX$lIm40q(xe^#X!3;5Vz znN(4+Nv_qVF#H+vag8kl7|Mw?T>xDlwP(k2p&$GL!`?K(rv?}4OyX_na4hBAN<`Q? 
zVB)x;tDbNyd-v1H=mC>P-~jMwk+B6d{Kugg2zG8BTULj^6mbFyKDI;hkULdD{jY~( zIKit2u^yFW+kUL z=QL44DsNEo5LN^D@}fBeL>}Q(?BpB_9-Psss}@i-MpHtHBH8_k_*}%d10Q`gH2aMm zY|UNiZAVIM7p}o-9liwc$Og>=*=8*oLOS@ClUs$E5R6U6i1uMrD&-bjp2n>=kjSA1 zGz{J)ZRTMQb;P+fF#2{jBMH$3O#g?29+sUK+9XGo^?LW9Ttq z3q(cZ3Ujy5)of=K$PEo!rCYhJ5{!$!yI*z_Xv4PmTrRGRVfL$Qj+kxi8q6iF=~C!!BA@y{LIAz?LrIX00Pzr6MT@Jf7>CG0vtK?t zN$#BOUM8#R+&rujrStP)!idan?VUoK66o{pHQ9c!=~<6wm=cdFJrs$(_~({$W#ij*N302&)kvzyIg8Mv7426M^D*3S8LYXKHXK#+4^z;` zP0RpJ!^BqN1^bu%|s6Vug zbX*KgIu|TIiG_0)zDzqmUdAm1=bfbZN6kU6XPh6m;$|bKQqHAY9vQiN%(7mcjo4wU zEuuHMtfyco!`;LR9`WZ!|iv7XP<++-(?j>-fRP_KC>X zx!cXZC$q|PO3uWu^&VQGrQz~BdgR>|tZn-!x@gTak_OkR)roHkNKlfQ**%oGbsGvX zBMsIvW*kFRbLB(pZuXJWz)(eY;ibah1O9{;MwC`Q8x)ELeU1!y@I>F4V2#ell$Jh| zgvmEJ8Q(oMG7$?W{4a4|3N%p?{a57oKZ_avKRRwB*8qFyCCTECX4>&194vicI)d#8 z!}na#!(%{dOxh#{hz8o6rKYC7adk@&O2DBF z5Y0AVu7#@iA0Ic0CuosuDC`(-VWuNlbMN}%{HA*D)rAYVE>GXw)hc+)m?u@p0&VBe z^mxQ-mTr<+<4=;wP4nw}Ku=p+`*Rt`3{UPqC>p+oP6v~K_a^D9rs-SY13@oNiy)FP&=+V~KkSzbf|b<#(~N&>eJ)e-?_jOYG_Evi(+c;)K*S?|ug?gs zT3~dNUzTA(+DJb*#tt`OUwG_88PJEKL(8STvm)eqsWuA}MQ$B22(>4YFh-T#y@&;= zc0S2&J9)?pLI?6`Hq(FFG;saq-q!*voU$@mVt33hHGA%oHJE|1(#SvEXLv_p@2z@6 zXe#Vyk4{tokKR-k28ckL+1$z%=~;;INmb5oj%cgwgvytJcr&dr@)@JTdAfYYl1J8A zTz+z_TZ^}bgvwi*J3WzxPD~>SBROsXL_Eo!O?KcDX%=^N07m9Py)^g>*I#wJ^1gYj zw0XF2t;*)Yr&$TspO?}c2@T8XRTsI;krUl9~nxl{&VN6k&<@44un z79VJ4XWU0-=jRc#w(+!lSnIuAh0Q-yG3nGt@6#O%GeI>%TOP73iUa{%1`3w)hrm6I zfLEDSwPkkhV92FlXtayd6I{3Okr-4rpl4_@S=F3@k~=c!4~3hR$q9O0$qM4L8E`Rc z%)*IJg9Jl^$uk=iTQWCw7Q1z)*@V?&|1^{2<7xuXwyanWjIg+UjNK+1=I2c|k8|AY z+jfN)qXQSa!%;7#q&x^H=~Ok&oirWt`5CY+FhBi$KF) zT5DV1_nZ<$;a;%t)mZ%5@1fYdVJF?#yb4efr5fV7i^&u53%0uVgIraR+NR3e_i+=_ zkLfuN^=CO^Re2vEGTRbdSzB;&WdzDQ^YVB-1Ny?7WFnRP2gokKq#l&=V~J{#q?P*s z6tq0OoiZGrOoTAB*h@wJerCCfLMX8%qZ<4JESQL!q;Yfl_O7hdWbUye*;+S8@AM4^)+sCTWqT)6J`_>9GJl|4XOd)^T7Ncm zj~ir`MeWzkV056MN3Dt~jr?;`o5O2;eo8Gvp%?4c_;f1LaWf z>~29AsD6rpoSa4ZUq6z$5Idglxz-xVq}MFyxUxXyXu;;nx|&LcO$Rm4dg+wIu-n0QyN%Ca7&97 zVl(~$ z2>y45f&*R2EK#l&J|%^=!xabz;4j^S1C0OW=;t0Fos*;g?+7I_%w>fs>Hw)KIS*cnBkrrXQ@g3tm!xiKFc3O zp-*kdHC_qbNXbb+bb(+rj%IxqpsGCyZH6(RD>|RBaT+a$TXnMZgdmVW*pzaYIZu>Q zjNWUZXw>-g;7xdBSX+CIOyB8TB#AwEUVDhCDULxFWcnj@dI{hn1)dZCQoWbQnA(Zp zI|oMCwkI&6h`jVJIcmALJJBDC<8S)KHJn|`{ovkjp^*c8&e;kUkPqa>$@{_fbQBBi zzzl6pSZTE6uvsC3sH)mAWrNa+!l>yN^Bxf$nu-FpLy#F@1_lrFlPnX{))zO&br(+5 z-B|+)6qc$#`6W$vup1zsWdc+ooA_=22r_TpUC8Y`kV7hm zY}&ax7YDYlyP$LYu$efe+d$x|vsb)}2>$Bhn~AVpPx%4Rr3Ire9_beqVnl9?7xFgtW=y#znb16%(N+lX358tC2&u$+aMH80WN@J zwYU$o5Vb|yIi+kx^=akl#Jyokv0gfYQCt3O*^gHOc?DimRfpaoj9xy7=p_~!J0mKp z3nZvyxwFBcVS%x%(-(u=dtf4p{k>vh1_n|M=|?Cz*@8UwZjjE9s^L?x|8@`dCk*Ha*jH##zFPuK{F{SkC`zby0+kHTcNFB0slb<6~9c07c&GEM7;nB2_>D- z^_R6inhYkr9H7%s{5u90rD@<@#gji1;TqIpy*geBa95jF{2_T0Dz%<$mkzO}GPZ(6 zh3y?@?HX8*^3Q_(@;uzmIPNFRLwO@MZC>9tqI)rSzyQ%k3i-qXciRt7 zliJEX)ra-h?s{bH47{6ldy<(}gOBEM&M*A6gR=JdksApBf)Vq@^xq53Phk(pdh zCigG#kf&$K7>s%H?fafV6Q|dpvXEg>uTb1_W)h1*xA-6;Eg0y|uT9d{+gm@GSaU+$ zKRFO!v5emwcX3Z;<4oBv}#K2h5NWE^+0gK|Nl5~|Nohm6c<~GdV0$e$I7_tsyN*gfvGpZ~1 z8Y+Dv3A3}xkKR8V2ynf*d&ZIx0Y&UCqad;;-OoE0WiL96+aRkp`I1di)F8A*1|Y#) z4W$}iL029=lDJGh+hyT)s6u{iV%i5T-gZ&P9>zb5Io4RmJqkuhNJjnR`R2Smr+K%*L0$|u zT?%p#O_B#QATecxiQjowfCD@O<{OB26b@~~SDKQGes1!peoM^AO9u9 zuhoAUZLDELb%` zLjOr%hweVjmm6&5JOU03N%g4QjOo}K3Zee&5pnW_W_&iN<#vs#ATsaoO9!l!;YEAe z=FpX%;-aG3s-ru_bB!zhxy%?4gt{Z1rFIrI-*a2Dh%K{XPIn<-_?Mc2uqZGB=>zB2 zA+#jaK|2boBp7E+ZnQTMqhY7R&+dhh#>7L-d8L(QAuur}1ytsHW86~;kg07946jZO zORlNdM1(%E97Xk(8F7QGEti*3TqNJLdJdyJ;?c6GXY(E+Da#^_tAn)T+!!aLX-r^O zVLAgDOde&yFD#pI-0gI2?c^oQ@{0ii5MYN>z0cnyx?4M+%<30^C7$u^zS1>aPeR|Y 
zAws)lEGF5<7!(ue%IYBuU!rXrjlN05cE)kG%d>=?NSwGPdmuVM{EntAG57IY0rhId z8X&u)UBBxmSwowE z;Swlw^Th_BN4@DQSG58s68j2XuyuHWn|e3P0ejjsZHj*}uln$Bm=o6uqrJ2}JlREt zf9OoB+1f^D>Lrj5P^G7mKl#uG3-q1l5hI%$Sh8 z0b2$)k;W}!0sP0KDP8{n%84>>Zz+{X@=R_A?a~8{Aia6eDunULX_1+&buZ^Q% zJMwIz*2`cPVGg4Xnk={(ybBY=9utW1dD~%NTvN1@SUghR&e_i;N0e7%vh^OiMmh+f zV1tiBHlv##RLY={zjqyIoA`$^RU(ww5A{i}d-iiM{0p1(JyWu`?8@kSy6$}mfvv{O z`ShgxJ5=?*F01|lSuOP~oxkp08{hBx#u4@{J@oOu{@9E^`Ivj7 zW<~k&gG!-D#qED1FD3p{J^4=+<$q%%|4$X=|MCoS{ZCj2quX0gRAG%aS-)C4LR~^^ z(QT)zn>2l7aVhQ5@hn~^aRln3{+yFiED9Ujam((h&a`N+M4#KttmoMv4AS^Q4krHX zqTz3wh@S1+w?}P%;Q$aoyx}?E^^1AhN}cLBe~$CMbv$&CYnA%cjZW66+jj9S!>KNB8Ul zEO3p4xdGSgl&D%FGL^%a=U8#2Y@JIJHx0Xrmsy7|HaazKL2cyg4_n!K%$D@>rOj}s zG>bopM+9!P-u64FlTIh@Rf$;GC?x*@gwf1}Wh-cIoX94_5O3N)j^O%uc+;?9(XVG3 zc}EK1-MV2y3rLSXfq5if{fLE-l0s4u>+ec9AWmI$EHKK$Pucb`DIwAfUrwpZ}nm@r? z?^=Rq1-9`vX(vs$aT6u`rBFQVl)T*$fim)N9>t(UW|JmX7e6cR4iVJg-ZzUwGKowU zK}hElReo55K{i=69A=l#*Q?7UuT><%U9eT&LZcNU{Y9}_kGVf)j8XR~tbj^z8 z^@ni^<2C~oyi-<~;&oO*f&$B``?_LJVJ7z}WW)K-j?9m0d?(WOp*_mK62@2W$R_1@ zH~T;DO`=nVg5E}J%NR#M4)#W7S*@r9P+arbn^zMX5U8k#SO*U-l(~_{{^=j0QN9ti zEWpEz1L(zD(>YuL#xnmJuALuYgI`(xSx5lg2eP^S;kqV2T<=Yx;ZQtuiQdnhwa9}f^FyC;&HTy8IiIjbXw?SvcS|}3^O0RhPGy~k2 znYCzs<26fh-qq^%|FKG;_qUHVqToT1nyPn|+&)Ek;N`n)@{fl?rojfTT1!eY`p`g* z3>gSi;1QTV9ieg9 zuHMIDy)4d1rv{US zU6waYN(YDx%d$`(VQ#0qHAqfm(ZRBt!*o|K^_9PG3>tv=3stMT>P`}>^UD7qtJ8~5 z#&TY;gFh%rA3oGDuO&}<$-Yy%{3dQgA1h!WKsY4^N=1#@j#VF~j@5K#8W@`` z2`uzedm+N)cN~!6YJj9gbNQIl74@bkVpkNk$`3e2=E7Rg@D3ZiBy zlOw-7r{Uc-zzVPx^Qf|{-BLhQF5#xC?xX3t+XNCX{yrC6Tl9W;f_b{|7DXRiTM~%a zN_cPcKB$;yT;QiUn18OvI+lD-rAoOSJo0X+d^4lT_>w*W{n???q1=w%d^g0LmJFU; zpg)yTK^CDr@9%zJ#+>Y1hn=D9<$rn6#GD+|`A-~jPmUgrPorMAO;$gRk)R&|#ea-> zJ{~~vA1FH?I9H?Jckuz>kT?MERZ7l1W{nxdQ7`N@&3t zjq^9%e$h^FlI1@bT-yFt5TC}j8pz-)C61dmVnM6kQd7m+md2jTFM?Q@8b8au8S;lgB*lMrfHHVBM!aO|; zrO(|b^8||aB$36zoE$CN5-$S~hp{c2V)|3ivv|I-XSZ3@biNwv*kT%NIj4Q)TE!PQ zv!7u$_dZ{Wgvo_I^d2Da@BO%*aLFdY%X$hI;Hapl%p@0n`zG}X;d$>5d9bl1Xm?)S z2r1^=oI%&HH&!Qm+vJx+qJbG4JUV-NgEhN9LqJ@kJUaOn8jSHCFqCSnL}2o5)HaTP z5=^4fCf}F^%=U5CJ`?pk%VKX>=0T)FxPE^-T#xrisW@iIvb%*Z+KTmDs_gKgs;H#V^L-;|TAaI-u5+k>Sy^V|on)Nz&t6d970b8X{TPBC_b$^wL~Ch$fmCon zR=nBA9_wg><>R;bNmEl@_iejnM5h(xe~_H#$p{Hl4-w*q!{z{s04eqTb${f{6@AZh zW#M+fLPsEZF(Fr&m#nJZeu7hYli3xN+Y-Q;heSC18dFiUz{nH0LKnz%MDn&IFp~n+ zDmzRrh*te21JCw)aX>6_xV^xnGNMLx-m#BUXHzUUY2)HCBFN9f)y|F^7 z&PW9TA|~qSx`m}x;sMp(0x1JN-spPlw1iD#HaKlX3-FUq3xXDYmaxA#?fpvE7bN(X zl@!o zbcyb|l~pkMx`PRCXcax&&j?ON@UvREH>x|=)BHj~fDGq}MQ-T>3T3s?oNW=CDn)gX zKiztxKY>e|dq|jR8ZoaXKMIvv5x4G)|D*{7Ar?TmbbYkuG$wpDu5$TN7oAK@a;Q3P z<4;It)`SJf5Lc`BTN>o@{7mycuk6mCMR>zBxBz@Fg}zsoz1Q4}GTBX&QWtG851&PD z8RtHIe>Q8TNE3JZe&}W4Gx;bm<;2KdOK&U0ryq{UQ4R$mI~ITr@rO%BgkSb&u^C&@ z!=bF{k8ux9ZW{(WI?mRM4xlgj#KZqW({R zwsoCYS`NDuY470sM~u&^xpg6OVT&49xM)wA97y*}`uf%AxrCEt5x@mxv{>5y@g8@7qGdjbc$Z zZr!fF(S(NGv5re@a9${sClE)WyvKo4Ou~tnKq&R37p??`^SU^f3>&so`3+Gs%t$AN z8Xd16w$0xL-3PPiVssR0su$2LZy5OqJa&zo_+U}&XTpZrJt?7wmCcZvRi2G;;kU6k zt2XAPtDF@_;wjz*Ct_eb84R2Wqv~jI0b$nFpk}9f=jGUPh_0zks04D%J;Im2dvUKT zbL3?6nX_4#G~!B%1l|}E^}A8*Gn-@z=#`tkH>gjifF|TlJ*oqT=2O(UMeH96Kju?c z56oV6*1mp7!f$36xGdlj_=a5w#7fpy1#9!LQg->ncF#Fx&kNV;K(R!JQ$i-2ng0 zN@^Kpf4e7x$B|K#wBNY#d^~dIDpO*3dd^+|fi?TwnkrG<`^6I9fSpj8obKTu6*PD;NyVWE|T;bha)IIJwhy^s1QhKE`}8eX#a*bdWr09&3G@B z{|<5;(8JB8cUDR5Q-kG0`@)Lky&dfNNSzJcpm?qX^msiR(>;Zpkob%Sp1Ftv=5-(a z(2Ve%7!kLHh^(_M7fr>Gc5IzY9%zF&)D$IH*_5DFQZjG;cEyAtrHR*3`{Urg_*blh zL=RPVeTI{X`q3jnr+CGPNSMf!FS$PstM|8|-VRdz7G{Hf=e{Z9E_pnr$CQNkTixx$ z22M&k7Yh_-i%e|szDIW*P*SN(xiPeX;7joF$g(zyV1ecio#hdC+?xPHrar(J zBirNaqylekM$a|C`#&>gBd{5+~>!B0txFkd#g5xJNfm} 
z{a>vW?P=T8CjYll5pcI8_1o zx^R&f1UwL4gjpaRCn(z&x4H=D<-czQRW%>Vy@S?~c!hKGvjBIF)0!7A!H5!jL7lHH zmRt7?w5kw14!{!+WdM)CRpNDvKkrT1sL(ldbA|)`$+0y=8A%25Xa#T#1LD*-jnDlfGPb}_4^~{iF~+g-WTzfDwmyQ%29EiMk8Fj~gr8L`)|ZUR z{{2sy_N$euCS!%U>kJpYL!U8vAe`<<)Z_7pT-o*e$^#F@40*qJT;Ufu-imo)U002> zR(9edJM+Ps)`uTNu#S#sQFLro?O!V;@4h^mRGSeUeVMY`MHk`x{n22YQTBbQ-{)JR z?4K#*so|6gm_xP(+8~w$!pJ`%zh zmQ#K&e!ZbvQN8${qr?xteSC2tOks!7ke}!3rStR*RNJoNi11^DmJ~`}heFvoRk{xI z*PYlHt_kv6Y?(z*Ttc#(qaIf9kC@Z5N7B+CO}nlBmik4(+-W)AY?AFxH7ZOYUGlyV z@m#4?K8d+lly_-6eueD4INQ&@_*0^}P(S+k@COiiDxbMY=_;gt_g=uqp%S;!gL|~& zC&h#P0=?DAHYp|pCL`#vFbe7zR9$F|yb$Ux*45DM!|d)nn@&7hUNb9P{6B~&+{fJ< zEjxM6Kcnh-xk1Q7w(}mTjNQR?e(YflMH{sy?QjYPvuXV(Kh+za(cljwM1OV?o9r4K-1^%~rAXfDs4pNwBcaM0zS zdF1q9-HQfLXp`W_W=i1{|4>M>H(^0u45}Tcc!Hv?^l)Hoj~457t%0efC6~O$k)X;T)us=cnPBH{pyHZhR=>nN+yA=F3>^&0}}YINK| zn`gkakoa8P+j`aq5$Rm*o?_oD*0-vDTNbybV$06`!^%hJJN$#b6Bskfi3JsUh>Tqxf) z^UiYSw>e^0SN*^IF-y#V%t{$!h$lVmq$qQVIT3s`1_2;sh-*mg0VvF4DX53&lYoJI zK5%I&hVfBxt8P12j?0ZoBEN{$$9<0AFx2T_Z}r>q>zM}33EIBE-@Mr|8`Qo#OP0R$ zuQBV%7FP|-2ne}bCMJ;rIEbr!^UE^C_HSh)y7pf9^fEEHIvBU}LWqXs5aFdpf;rg~ z*9c_Ln14FUW(=`)C0#&G{8a?CCZa<#SH0tQ!C>~{c1NZ$n*Ll!l38Aq_) zzgnlQ>sV8kLDMnJ?OtPO#vvs<#JGVX2u_4{zHyJ~%lm9K%6svuJ@3$jHEm^u*V#ZI z4WM%=gfPeFmJ?0Y@AeYD`LZy&O;#)%J??q2{9lx{J5I1a6yu7x@Sl(bPo!t=A3*3y z?7syPhG{kxA-Fm^C=z5PKq0XaHtHV$nv(GKbzpu-hxGcQQ7ZMRNRjXWZsKcK{^8EI z(FMjcsTp|D_t)ypGY#P{h~)r{FDtrBy{j?-7JPXK9T3w*Q(J_w8ZRjCxK`pfTWqxT zo)%CDsb#D}yGo0(5C5PwJhJAAMq+#1mMMwKdq7`o6q{|U1N96?{#6`J=~yrPx5@IhrZQR$0yVOU`NdJr;*Kru?AJ;VQD+E7!tLJ7h?Va zr1gBwUq(^`wEcekTxXR1N84)eFnV%UfNF3FxtVj#J$pKjPMVc4cD+q;T$?CY^->2T zwH1v@Rq{ee;Bt-iIT<@_TqCB3N%7{Jb6^VEA$+KmUCmWHz&GuswKJT`2odl4OdKp;av^fO5z))+6zg6q`Jb6xa<*^DnzR*oXV(P?uS-xo9AQckWML8S1gEM~FROGoZJ_>vGIpr}jE~Cvup>yRtbxlE zT%clI6SoZaN9Bp%u65c6Ua#^2}!Avns}}l9`L%jzBom7n!p8V#;c!&15eGGLqA? 
zFg=YQM)?bB!)hZkK=bSlo)5nqEKuRM&6AZ@Wxv$GtPgokB_ni@0^=RN0^a<2D{}@P z7Ic{Vw!q%hOfnE7B_$Hp`Rj-}JL#YP$pthqXvdf3^s>8xZUL-zWCz~DN-(9vkrM{> zze~HaHvF!>iD*tta7WD580tEbOawhx9C+BGe))CTH{2@E*@GJd>3?A1c5JLdv?LY8 zehN%XK5_QUiDg~ii`#^QMQMvQ6skG+F&722h%&QV$CN$>`#(=^eI?`hv{;dGD{^Tq z99xqhWu0%zxk69WGq-Oqpg;5jR^gP#I6ip&Ea%&u-F7X7gMf3=42L)pSJpM0Y(#?W zaY&WLao%G*n znWWtAB$y?;(x5o!B>3Qbz7g5FS(0X%nbdE5f~5VTJ|FS}6F%(!9*odb`6bqwzssJ{ zj9BeRF2kpP3jo-jOEV4FvU`^dYs^X0>R4;kJbc%sBXEqpG}{Y^4+JHz@1qd1fImg+;qmmHa_4A1 zNbeu2T=_EQYXwYr{6jJjiSu0mjF=}?rBrO?WMLUZ1Ul{Hv3wZs?N1WzDC1W5g;i*a zPVqsr9l=|X4xHZ=foKvpPnT_J4%51b3R7&O=f59!kcdAzKu&NGSoV)5^chp#G}yG> zcq4@h@r{>Y<$m)kOw&}tM?}9|y0fRq9)y|=#eDLZ*4Ly}QYe*si>$~Ab%LP(iMWQj z=ws0j`iUI4UU1ioHCXvB*Ppc7d4c0dSE;MRWt;rX$(VH7QGmRo^0#E=3X`zF0Am($ z@i4~0A~kQ)M1K79tnReRdzU&bUD0eFS*O=?VQSVBUiw#f{9loIgkvp3IcCs?&_2oJ z(-(j)%jQvwcn+g;Prxx2V97g3l0EI0yOOi7+y8{Q{wLzr#aN6FJsjSuv31A$xs zH%#*x%VQrG7{$LWz=@z=Fn{ltHrWEx-8Y7l&A!50dQYA`Ug!0p)Yy{u2TL*C7+neq z)yf7Yad92u@<%P%gU@9gQYqyS0<;KQoe;8MIJ=Xq_>SY<_kCktX*MI6zEr4he346+ zeKZt*`Iz56yWo~SDx`4vfkvELk?fa@Vg*fl^Vx`nEq?{}h|Pc}U@6Au6`YWu<)V!+aLu z4HS=^#N%W}2RKpjXpI!oDBSGCJd-k5IVu=^Ur-F{uH^_{!+64Fh>3gH2?Arp`3TaydnN$Ajkz=Md|S;KTy1qFXhqP(xO( zB^ka%MA@b)z~SA|YGMx5HW+&sV(7Hz0J5qRkRP_o0%fxH_QDYTj}K9)vjMofhw;3y z*fMsl`Qb=E-E&mScGoKBog?NRN11XM;e~ z-Hp8HvY2kmEYt}mn~x|H6JrAy;B(0qgG*^&dJ})~?uz^*AmAd$wh3A%_;oSt-4vue zR5veN4gw|lGwD9mec`%#*xhsi8|M8^l2iW&Kz}%3-er9=kpH?~oXU0*uju_I)9yu= zwpXhPdWHy^09&a(qy7{DOtl*}cu)nCy#@sKlIb_|rB(x}x3X5&#(*$e?K%+gz`^jV z8Z3cy+c%*-8ANXP=<=ky5#$s;=lQYeW!{jLcVt);1`i7HQ;EwZv~Yg{=F|u;P@i7S z6&sbpd#dQAxaW7+wMp=_LFo5@z6%dDaeCxm@rSTQsBv+X2#Z@CyUrieTU4B>0H9(r%dg`xYis zw4c|it2v1JdjRSV1&bwWBZ~*e>f!nJI~~oo&5Cltx8odC$Yv#%m&1|*;Is)GBgz6` z<021Yw0_tYkls7QuuM`KJv{`u`zE_kh}fuvyW2JWs4^HmiCgei56h)47L62;wxgt% z@4&W@zIHk#(-N3fes)&1H@-r=4ek-SW;GdhSiZm=D-J=|&wT;O-|YsKfxW5@qq4bI zeQ|_yy@R2N$RBv$_bt&Xs6}*3L{l^Ox%={-I!-u-kNXUs9Ao{SkzjCgC!Q7 zZ+H5a(29r*(moDQ`MKywwz7=)mX0S!lB*td8-mFNwW0VS0H@LL<0CiPSrVPLGTmG^ z2#55BUGWw^hC;_EN8Bf7@S*EM>pIC1bgmo;k}CWaiVPnVp3x-$;Mvh8=_K$I2CpIv z;zw9=6e7^#it!?O&z@fT#n~Rvme)SFFlsL^%gljYf1jWcxOk+;+ zXSSB}XOHl_PdXkQyl4P4=G$w@wQp|rbE=oxTVPhwix@d4h`IbE1HXXZ)!eq_7G@>z z7Jz}o@8d|rtLWjo-1))VtfxO51*`bqyQhGsmcY}`5f!l~^oldVV6z2J2jR`0#akcz z=HxV%iIik6uQMuvpVs@@IQsjT^(T6S!Z5Ud|tK zY`OiNSPEaZ<_!<4Q|}}Q8-_QIfxG+g%202-27OLtu7W~v4&W_n&+)f$I^SP?K|s{Y zjztA*7EW&^a5pC7@8`Gvy813a|N9;X%i%#%KOs!c2Oa2lK#z^f^!yju8$FEIxB)1B zZ6^9QeI_A~Bekc;oKBcI`cUB2B(M>te*J>Jy9J}tFC9T1gJ{ZFXDA}@gTUe1*-#jy zpl%YaWse26wqTymXxN7Gv?l82hlJAI4^N@8hPYM;>?lcerI%6Cj{PS+Ef9YIK3S!c zE31!ZhiQPMV}*bFO!;rWP&*uakJzU0zLsU7z>gr zo!Sw3Lhbvj{}9pq#unN)+08>JV`i;%&N#3x9Ai>+lIG@E+856%%)EiLCLsC@IE#s} zrU>0ds!#2`8muuO&+L&j-s<@@iIL{rUFbbQ?rpr6@HxT0o(sSpe)97g^B;gsGAPaH zUbe~tA$Y@>jJ&F+|ocRSto1_|{_*aJd4lpXTf4Txt`G3iENzep_W=;RKYYQjKX^HgCvJxi5eP^{)8MCj|Og-R&?(*kffgs4g-OuT#1-=Z?MFj3z0MN4OaWo%$~7#E>DXbd1y!rp^mLFhp*k3fRm7Kj7<^!rFtlX$Mi z05SHW8&?pOb!UNS#8^*_=Xeoc+znUNOy~0+bF&d;eI24D#8#KR z)>^dyAy!JA+1oZexf_=a1Qr%Up_iv-st?hh`I0v3%?Yz?LNT-=HA3eeF?HZl)v19= zcjFiI$8cmml6un*?`9j#$5%_{=(bz|>2)Du%%fGw-XjAgPa~6NyF!QEG{M!bmkN>k zlFT&0(9drbZLz#M*u1h*C*mC)atW9=6$Ce2-0U*0;^Vfk!gZH0}kv zGR>vpVyeld@qX4Vh(QSkxAi6zVz)2X#r8$L{h8$t{whnp(zO)w#kT~2d#g`{(R@^a zbCj8V6wNDa11%~IY49hZrD}-TMS-ykLSO~ty*Ap4dEJN%SJk1z&siOe9Sk zo#Ch{h7?UJTn!oNJ3qK(04hx{D9Dpm>I(9XiTyq+v&;t-a6GY!&k4{Uf4P<4gGb_emJ~#_RuGM>S4pVg@tosjLs@rO=*< zHT*wf%XW#*BY(pG;I}N4{r|RLFr0rG8NP`lXVT8OGl!OT1_4G92ee<0Ud#b_zQQ+6 z)y`;V*U5wtqH1JNcx}N6>`Gu4OVrksQ(mVbDG7tW+G&)wu23-EC0S^?p|lgA;q!i^ zI&T)RDzfC-y@4^2cALh^GxvAeWDtvg4JKku9q-P4P{*?rk&ms2Tva+V?LYu=F 
z?Fl%t6ENR^agg-(;Whh=af%b@YC`m%8*IA!klieCN;ryb3xLpuLp%_qyg6;5)aBkc z2%2q>n#1_seH5*5JFl9>fDqxveU#d*N$AgpDcK*<4E3h@gxQ81sqj3Xx?S#RkVEa# zgDq=q9g2H+IRCzWW0yE&Ad5h7$kNdCbuniYTmQX>^yV_MC4zEUFYqM>VyIU&w34Nx ztjMcyhL#5A*jplNSy6pT+K(#cJ6ov=VXpH1t(QWDT<2Wk)VE?7W1x+g$OH3_Sj+CW zA>ycbP38XfXYA;7fXa-#26l59+{D;{9Y50|!=j4P-}cIZT#@ z()ozvp@w<97X6SGs>TttSoFzPzil{*E)s53soVbmC>5Dm^wab4l)DxCH*Hn6#me85 zzpT#+d+|f(w&NSBGd6wK->x%rUHe4sw|Bqoo=?)IQbqD|bh4mQ^Xv8xVl{dh(})@& zSDlN>^X*puA`jG7+s~v<8~^px%_fE|hnC4V#YRfO`8(Qq$TY+{l3PLyK~)pqF5i$z zKKnE`S1KVH?mXV~K=vvTD>T0{Ec4Z)Qh$`F{KlhokH`nM7t5BVU>KU87Lhi*;j=f2 zJsvwyG`nFb7cvF?!>1Gb0!MD#rUtGdG#;<#7v##)YiY4OFC!@N>%1h8# z?3RoR_s84ss8d+{!gSFS#6-G|1UDo1wDi18#*xm4F4t!|xGczcd4+`xC=_*Q*Tc{v zjS@=lji?bEY%50zT)it~;z1C{6O#vawK@2kMlp}Ec-PZOTVrpRhd2A?cc_WSWdepa zzUFXCc`kH|F>`j^1r5-;PyKuXeDao9c_OBirF+mMEl#aJv=FVXYo((?Q@@w(z?3 zv089#BBwb^fK_DY%2Me1a0P^J(eev|g#I$MfACs}%nip%)v6tJDz_AIA`7}+^>Fh} z?HXVCbKSjP45zkqD6g1kr31RUAOl1j7tyNu?Ao$bqiz`2}7vE;IVDc8uK( zfG+No^)lPFWZ5fMaib9RXlV^vL#;exJTgTFd0Ymd{0I0-Ag!;}cbx2_*$$|dz*>o+ ztVS+>lU42zh_@XBm)E!~wiW_r$h{|kF#SlFP00=r+TC}aXA@sC->`5xu-Owho-VZM-L!&LiZy@e{zszN(fnS?DNH@QUUGx%Z8;M*{J~R}!3+_im9=nr}Q5L?8;CpJ^q7X-9Nb*Lb+@BFtwzja~`S8fZ-CQm` z$Z?lYM1di;BG~&?2%8@tk7ED?!rJLZa0%x=p^JitCZDn*uGApSUn#<>>&_0C1-}QI z==)4D)0gcDp!Q#(U3=WRqoah4jG^vR{0CT!1BXOT>5q-?<|c@+2oQ&wb&Ww-WJZ{aAN*l zFDCITrzK6-b`h~Xl*Z0ynqvG0NSXyuaM$nz5E9}47D;^ID$VJ|_IoTMMRDh#!G!n> ziWNL=8l6YvWQ@@zM61G>hjkw3JYgX`E`5V?MdmWxVE+Nln*$bRM3O znNMB{vAHz8H30yH{_W*p`aliMi78BCvvWc8nwVcHnQmmZBI6aW;@qA=5B+p<7J3X^PlA13kev`9bL2n@Xccu9hEj+*2$IE@`hr8UkYwu$Pq*;O!DZP0%K1vw9_ zSNj?$3St_#NZDbXvS9QLr0GADl*A^g`^`jrm$C<=0Z$To&_H#-c(PKB=}&eI2-0D$ z>~tDZ6Nx7K(}oH--{wQ>IJUlT#|G0hmTVut7-BidtNg<|LQ?)Y+3V~p69TI^jPVn$ z@;?>bl@=WL+%^4oH57j{7_S=7qFWo1%0BD=l-N4Ca6u-McEtC8S90v$I)(K+^x0<( zlpN3LkBNhB#p_{TV5=nE8;!s_EuY#eFUndnq?UOneeiz(@Md5Spq8ObzWq^mZga80%^t*!?+TSM_0v2Ud^C_o+~2N{)29 zq7=@Y_VNd@I;_l4_ff!v$Ke+K@(0oXJ*4vdO>OocK#uULl1=n}pgmxohkWiK--i{S zaQXj{H2(bmb&mgUc81^Od1mHHhLM?Jh1Sru4rm}3d@^yC3h#FckYNXUyTSHpns39D zxCDN7knCb6lzuw=ko4k&ueZ2`mk7L8hr)`~fpbB39wYpjAC>5v#u>J!pe*2AHnT1I znT1k(gRCmq=GkgqcIjF7Q12UE)a97zMKa{|5Q_MOpmuY#v^I~ZR<~01ZBR}&GHIy! 
zux;||Ha4<^(gz%)E;;vb_I;H$%!Ft+15pH3n=lb!ZL!1gTsr6DBN%<>n@TYZl~eRM z#pEMuXCynxN4og_IW%rM*gw9gkY69`^eVZd)q)t^K+7QYf1n2a>nBA4_A`m~JCceF z@&_XV8PM_25fv>51(1Tzs`mSKL3QGHjtOs(Uwoa#+2ViDT_$?JOf0%u#XJc*6Os~* zKTb@bvD9JEMFLMc?n?9WX$2XXRB{w~0@*+aofgM7DM9}Tq?;d*?I#v7dg1ycp6z$C zf4|C@XJdgj$XHARot`idTNL(i(k7UY$6ETztQ5L77rxajnIN|w_lXB_N|$-vesLgm z90kcSuxiw|!_5aF#SiiY%rmlX04MyMcph|tR!%=eZNH?mLm-VqqDAL2%g-|pMQ4B# z7+CgtaQEPydG(VrCYK~AirF@DghY0yo@}en1r*QZ-yP{a!WsH02LA@{lNF=NW)w3n z0S6pC-q90sA$$dK#d+jX-MO&~hAEN*`a69R6CUU#zjF>n|Pp`K(%m*PRqNybciQmI_HDn@}qG<~?w8jOLV9MI)( zz2fQtK!3sC`%fv_wsKqdrasyh=MmIHn$M)9#oRWxu0p|M2y@$=Uj&^#b8`#HrWhaB zhM06^Mw2`e>6wlPt`^TtdH(a8GZs0iFF3G!+)g+B(4BbhOh;c}{<(;|Z*(j140T{B zn8XyHcyyhuMCo{PXB_~;wQ~uFV_Y|dM_Y`pd^e;0bgbb7jHZa4SyQmO{38D1E>+cP z&$cn1Qa9f#wT(=i;jcjRliS@nuzC3Nor8XHh6QNr43$qHD5!iOwO2fb>|%2tqgN(W zG3ZleghUkHy6a|}T87Dq6AI&v|2~I*pv!0nq(yw9dHHh@!t3ZBlgW>qj4u+R#CiNf zT5_C*;{{adV68w_Fe9D2u%2GX$-A9nAGAv(H=qOJ{HqD?SuZ#^Z(Fs;0ADfw!Q~DH zAyhG6kM`DV{vbf~VZ}S_l-cUH|JUnf?o|0=ZKLO0k?T>yY{db+jfx{jJR*Oib$Pt8 z-AjCat~QvPY*_mh#YM*H+CKc-6TuOZ*T2a|IgM|)u)30i@)svgt1i<>CJE7 zFZ-zQ*mv++Bvb7FIC7*H-Me%`aVhq+?eRHgq$C%gZpDTy`kQGOySsIg^G8$|8=kP zd*`)&^rXSrQ0L#c$O$@j4ucHm;GAtI@m;siCr04Cl&-Ux%#cNENf;0Fg_ef*B1kFj zfvzj4PU3ej5)u611jzb6`;lGT-m<}6^Y4b{Gb)*Y9)jO%dUimjd`1D2*{1G{AdW2z zXSkj)WsDru@OAV()jR3Ml(OiEfCE#-4C^V9Yg9*Xs>YU>%D>sZMyP7N=KNX1Y|f}Mr@*V5tU?u)a_5Hr^$zb z0!-trYLo2Ou0}3o_S4o*AGk4AZJ8D``!+U|c^6kWiNi8q!c7D7uo@dTg1P*(mKZw} z4e3Lfk^aILo9IFjhZ7C98XuKx}xo|YMdb0=CD zLNvliye^6WP!10gQ5-U?g-B{tPSI%n{khaIdV2b_eY*04E}ZcIS{Ie)I?U*E7U#*z3C<;g4;%fRXe60L@qr7$B!O_WsbvA4@M% zq9%J>=vcY7E4m`;Zl4k-LrcSMG}#YDP+Feq%?!<+U*G_YC-uY?agd9k+yJjI455e! z7&zS?)q(vnST#Vbryh~PiM`3-j^*(N@hvh-s89V7(T+*9exip#Vma7J^ul1Tpo`?EAdoFwJxu6_Rov$L1|=EwQ<7CFC7k?a-vv7%8;$?6Ri_>vg1=hBht zDQ8PG0__84@mis3-Pha!Z1dmH{sXKJxd{PMM>hodQ#SVYBo=jdz#fq!OS>A|M5=bNG}!fRx)XZc2mJeD zWq6xV5ukE4;~c>Eu8~(d@aL*RFs03UN;>%>9WYkNys47YcCzYD4IOsYC7yOHxI%EZ znw)zCEJP#O-biadEc0^1z<>c!uUUMKUPhEEXGd94GmnLfDs=vwr*I&oS1^K$JbjT! zx7|&uHy=DooW%^9LN5SlU}<5%yFLDoO6K2mJ8PmrXmD#__ z6t9k-# zx~{;jZFYulKd(h|y1I~sJAlS*z!$RocRo$N%fuIawt6t{+m|Z@dm;}(`?r!n`RA;^ z)O)C6j%aB+jq={rUmC{(zuE*bwNo9B-9om?5L{eh+aEugATbwpS$gk26zC!7g`ndB zNN*z$EH&C!=6B811?0+DOq0Oid|Gt!HeSZbt4G$`2$BsVB5oWvuw*B%tnPvC(QPYm zD(XQHdsQpm8g0RSMo&k)3KWD)CBri-W-9EsGgsud2VgnGT)J`dVIZj-NVFgudupQ1hcm#Y>E%%0HbHic0lsKi!L z%bQ4wnddmt8qmyv`){B)QSZj8J+y1LZPE zmZiL`q*3U)V%kN{e!S3Ex5*7Dg`FlfSIU>W7vH3jFqL^pz4FmjD8e%R_TN=-8^2&U zv#dQzGo%#3i*Gmg8>iSI8$;V2M$o*kqHx(flQfSJB)tBSCI_6gS7a!8Hpo}&^0x@-DIteug=$atESkP0 zNsdRP+LO)x6-U$v&T;iNj}HF=iu7(%gvx1Vf#Q9L?oBJ?+PvaI)T z*3v$>+o<*70`EQI%1%prz>EpNlSYu|Pp4Cgbb(K0P7oZ)CvCo~0k!kN#<|*L`dqYg zd}lcg)cIRQ)Cm$KPi7C1P6vA;E|Ud@Dpdq^i6ENpjKl(^@!537TK$nJXQx=|HYo3A zU-yTN?sYU!*&&y2LsT+O=`Ht?0roTbj`zdf7iT`BwrYm^nXtI)Rx1zRz*-@v>XL7< zzliEPHzzUOB>}sODy&~s*yJ@o+=~%dFZww(m$}6=(yrUoPgX8V)?yoGc`dZ+4qgHm zk(?J=Qn=?&6)*K)Gj|P-coeD(u`pH!X{wLp1M%l~ohHAZ>@5*L2g9RzZq$k)I(PqZ zTelAA$yO+Cz*Ql~1#zGPZgV06+wjke*-IzCy6ScJ*?P}d5Ced%z}uvZ^MInQUpS3v zo(u|<_uh+1sm8AN?vD(061;}c24MP8byW2a@+I69 zCzL_q(GvdzRZj80xl0^PT2=BPGE^&DjB$+Z-kzxR5E>JcXS^n7djTlJ>saE&<(mSI zei{~1)!w@{E=S?4&^43ADwUWfQ)KQPBC?+k+6*pg z-sl7cFYl$q^8*tRf@5228u^#8<-8pAxXawuYaQ>2a(X`~vna<1e6KU}VgB?9#n{Ak z(4#kGJXBGFTZ~e=Se0^ zPe#E|ms#uG0cva|uTQ+O1$S~^NfLaY=K1Xv(Mh~V{!H5lrus`|=yo;|w`3<>K8+(? 
z0?gH-JH;L1&=7_|#e)J$u_u=J?Seec@gxuiBzgVn`!Y8PbJARYNM_PFI>kBZ0ww^@ z0NQnYcG;I;;rj9*Tt)|>w!=dV*>=RAk`h>;a4`(mNM*%I+AfibiK}O80hh#2?GF}9 zP%MryFAR^7=$}x26E@Qy!G|1se2I39(FBaKa3feDZ%51dg$w}VsY&w4tlx}aT{|Mr zZFBVFP)yR`dKby-924`?DG#ZfBgEqnxIp@(Ap&b2fEot?TE)M^KiEXL2ZWOOEpbf= zMt#%hMftrwWs1Y(roo$$DHJI~Oe4w)+ySAStDA=2wOs{t>L%vh{f;=%{B%znG3?Ar z7>|8pOMX4J6P6xc7fzWxxYvEvjjSISzmTs%s+P%7|Lf0RRw#qNGpwJrg?$!zcCU+# zBY}rxk?3=jWuhrvwn)lvxi#D|6Ynd*YPvbr{Bsb7!mNpj*+&M?;1R5shI9uQ=;rgW z-kMw`l_BQ;>QQba4?zscD;@3|#5*P#7U;zN0FGq^0lcA|f|~oLX#kQgId$PJZS2C> zo^ICzkwwVrfZ)AhX*OY9KSG?vL>$*0g|DEqpsYW$G4EVx4i0z2ui^*fdbCFBaxi(? zSF~-oMCxsub$~VaFqFBT#~%G?dz-n%GsC>xa+?^KOn&uBfAyI>i^tGLUqe-Bn=G-t z@pw>$(L5Q+z4lc6n_WBHZ#9)vLM$D|pFOP>3R3pCC_hkjtx(GdoQx@@q}T?YxTGlY zu0rB9;Pw4^MU-t9hoFDqPKU~eHP|!R+1)S_=dnvc0YAnnloO|a>Ej!-UgH@W(6QdS`$muzizn$O?j4*wVz3}a!3LunIZx_a7UABPtHo|{wtWH1N1=}EQ-g(M zQ5o%_S4!{qS74h}=MK4V6TVLW@xTtDh$8MTLwr~iLH6Ghnt`b`-nPc^AtJElLg{+ntEZ&4-JjjId&Re0-QR=1vBz z8RavN*vywwVZx9<6mr=O&$B!(lD;m>J|>IRRx-0}K3pr7iA*SN#ukW0C3b2Umd0R^ zmQ^3cMrZ2?BPzEQ)kg~oI6%xH3$|te5HtJ}17`}{^XCoIqn;2V0Bx2pS1fCEc@WQU z0X&b}NI>W2rIX}9izS@g;t8eS}GA`%Ohe+MhSS9@L-P5SOd!ULk zTW#lWe;Bx5%gGKyW4tfssJ2m`2rwj;5Qk?tINXu>a)5+HhL#=Q3mEaSG*}EF%l`p* zJP35bqo)4vzpRa$XVCK0nyO~Cc3U{h?EqhgRw`yPkPd#6Z<`*D&|zGmp7q3U2Jrnc zBC*X#v8h5Pa}t(ffHfJ%QT%k~J10|S*c#Fm#W(6%i#)_3d~qh*QyxL^wdEh-jE^9T z|4{fz0|V}4wPJu@s4bv>Jfa?lnW?q7`KDYy$1yCG0tH@1ApawI@UXKJ{GJfDLH4xC z=mFb}@d*OTq;b~c$C~cuDf%f!-|cOxoVWoQN9JV?Pj2^i?ptU|E0PSl@Ii3$LrI^u zW(xOW#-+_OLmRZG7wa+eAS*`ezHJN~z?#bss~DX%n-i$8KF<^4)Su+1k>SpGJ?rm% z{~^+8Se^jUcgQ4a`Jfp?6=_=HLzAmK;KxqV@m!=aiaZ>sP7wok*ZhNt)%V-mJf2%< zoAufi3t%0tn8ah3P(q3CHUNL?3R69CXUaRWWYgZ~~1_J44--u6aPpU0Eg?l-;3bF{evKuOjDiV`?75-wEH_7ge8W{*AI*bVvM6#jJy5IS=u)Y z9wUDNRK_hV1}}MiOc2R=%J~idVe{TTF8Oii?k;7hdeYP1b;(HgZ8TKrkB&>*yleIa z;SH4^9A=&}Im_wpew<2bUYcm{4+FS2-M}dAy`F#Rf&HHo)@wxqT=>-=cro*SW13L4 z?QuwTR-%jES_2R>@AKxPrEwOMrNu&sam3g`A|mTwt4_#>>5{)O8JatDnGnAa=EiSa zuQD9Rk!}a^@AjPtPHI9+6sERbjZ;4R=-iL=rL3cL+pkoWCvw^JWluc4I8#V4Dl0Dz zv20@g(d%1|{x`AZ)kN|I_^ft6vixQr6FW3VK(bphXGxx%`yXI}9tj$!oKVG)%HRzZ z?-WtqhXIZd{?+<&8wiwpNRJ@}K~uIu0rohH`3q=4^DHPNE7N|#C$Co4zR366!*Gs-VPjc9Pc5D0y?uFXi*=%cPA57kw1=cxg^9mfb*%c}1G|Y4 z8Ob)YE!QoH=~)*K-!MiJNO0-}Ha1RIIUk{tnJb*y z6{>6z&T3lub=`$`ym_n2X^%QJfmtp!yYi@dfta30JUp}Qt!rPCwd&Siy1G8Preuh` zESGh5p2cS6H4KJ1owkA}b3iE$r2|BSxr$k2-8SmN8Ad*2A-s@J5dpoS1!1PDSpZlL zRYoBkpFR5FF&JZT<7~t#Hb>bW*%j}r$v#V6FI3Dsm?GNimZ)w(Qls##f~lBE$qCqS zB?Si=XeDasvMQ zv4CFw?T&s6_IwEPI~`@FJ~10+eVix`#ULEDjeY4%F&Q?16u^Swp9du%K&IpUz=tll zpw~3Y60q0pjCUtFHcB*+gLyXs^FQ0;Za72vGbGXO7EbacG0q9mKJj1fBO|m}A#4Y< zNy%`LJv)7Uc*#7B(HGv$zx+cd&Z>vx!f*Mmxl&+8oeiZ!E$s7rvEZVla){j*VBa2z z!;oMU4@El6u~fo3)m|zDfL(v=3p(*y1y}q1zyc`bK9#AcLJ+S3Yo>wA^z7H=h73A$wb;e|D8L>w9jTW)5$M(2=fVD++4RXF@)IKhjOe z#IpDTui^F>!u){-J$1z3F`djd4?)sZP>hos&HOO(lpiwBbpPREl4ht8iA4zFSvd^G zDxtoX+$k$WXr7zux;cOw!>Lkg(6#^oA~l$jD$NJbU_@Td8;j$At&mK9cs3-tO+l|w+LiX zG0`lux=p~!nq_lDZl(Wce&dABy<^Q6XeD}9bC3%BV7l!MrVPPAq13af+OGBR zpzVb#q*&DtbzFJawb|)U>m{m9UR|PnDm!Wsr!0cv@QE&AB2A^CNzT;?I+L4^At9HA ziz<1=e4hu&q3w{QMrLqAEVz17*#8CWyDZbohH_iEjX!_*o)+QQ3%w8*Sv$f1tJ%?x z)0EO?s$OXAE=|l1Tnc3PecR0$RI5|92gdkpz}k*OIAJkK>HsxNZCqIJ>#u{vQOn}* zwkv(v+M#Rg8F2U2^m?d1+v@2D;{i6i(1J|!fI+|>+?E*e z4*h8Uan#KUN>&*LCkgTQuE?qEG&8~F#vvO)&*H|-t}%G!1cX(AeO+O{+*fUFD6XkO zhX$^|1NHHH*fsqNlvq~UB5!6nz<($Cb4z<2D7Do#*J-#DGq?ARkCWDroP>%EULw~fm%vk7wz->WPlrqC)PJ$X!Q_819Rrb0| zXcmf~4B{(2_=cJG%j7SIUqQmQnW*g4mhb-w6XX710Cpb*C|Ukz}U_A~`n(R#ACH^!m+pHf_5t zIsLia91oNyE&U@;8B0r3E`Q&YvIDp?ki>4cTP;b2Kd$UW%#pt8u{9EjqJjG;!cCd1 
z`vBGk$p`G|f)oO#t;AHt)(JW_ci}f8dSJ{p2C^Qoj_gX-0KuOjbnK%?zo^=r9$rOi*`Xso(&s%B@)+V=St=O|Ee_o?LK7?E zGaWwZ0=m+4tG*5bY7^SR(FG54F;JdpW_{VTNI?)=g5P4}dXR;2#R>u5Zd8JqRl)p8 zowhsucWPOz?3t%A*FQHtN!Dr88|CKOO^QeBY#sm{9%Is`-Kez2O3UP9SE(cg4>xc? zqu_v$_N2uTPv!Bm%keV9U#GpXc`;i4%t3)T7sV;kBqCbNp`^~zmP*y)_+p9_;($;@ z?JhfZpinNDP6a3iS~-v@EdEwF{~}3v8jO0$fq`ZY`v4zb&9!a%ySaCVs^^~@*n^X& z0mdk(M|SYC$aE8m16DO3Mnw>~IQ0;7ey znKtv;r9;k8g4}KPyFXKgSXV9_@llmo6UvfL*C3mlz5*j{mzXdn8`BP6mBaiD4& z#yvoDRa1PXif>6~mzn=KmLn+j9=67(@O6}uOj|*zyqz1Dqfsh-kZw46P5ww-&wJ;K zp_=rgq298G5y96?ctE0=h0aq4%U<|$mxllR#owliJ8uV$cahs5DnA1s8T?sRN`cqs^iHJq(;6Ic=DAB!}doyOkg z<&n%%`EhXPI^tecR&CLn;BF%yyr5x&WZY3PMN5%z4=~nLIUi~!CUt7z7CBdYSwvl+ z${;7iF9UAI#_(nDZF{n7N;l%^f1D>~#Rx%&@sk9!DX&EMNA|&CB$5)p`MkCEY5rQy zm}~*R?o>TtJXPrJdVTxVeyby%lL??r$||o^-HabjvjD*Iv5IIb&8R^7beMyAC&bD{ z>BQRsUyIc$^yilYd6pJeaWV%u>?_?CkOa_M`*x#F`Q_!BHS4ZlJ3;RzRQF1~N80b* z;!CI{DRWff#9Jg&lmli7-!Hh(zpJ|C*QY)8%Tee_#)Rm;7*N_Vun>KmYrmnOK+F`; zs-;=*U?gYwYVtH{5a8GQAKh~-06^8OW*_Dy@ot|eaOW||W<25q) zd(&tx;36dGB8!AZmW4zv=Qb;HZ+(FJEDum&4dM?bjUy%1yP5vz4&_UGSLvV-o$-a0 z@3!2U0rGXpA`o3XKM6^|`MpL=VfXT%%dvfzYoYX7Ja`+2e2t4wR!MxKzW+WIimjMI zotGk{%LDFENkX{&<=wkmQ9Z^b%MPf=s>Ap&ca+lVY-PC6swc`cqgm)dhgx1kQJ)#< zB11I@Fe-@9)h#*sumpK`YZ_=V{0OJcIW|5VvYpdF zrZDK`W7&&dN%@KXTob$rLrl%N-uxo$1C5#@JebTbH43k=2Fxb@{kIbUEUsNdLSBn$ zcdYhjA`E1!XNGeE1sCZ7s7@!K+3;lb`SsSq>jMHa!c>}qClO`M0={-~TS0}^Z5Vya zP9FUgDaxyr?d|p7Fe`+e|K1|P4+(p0Ss9iH)b`W{(+fzw z5jv7)Van9Y4!mZyR%f#MN?I47wGGjg7FN2@B+Q#;b_%!Vg=kHTar4d#RVy(`s{JPwgpB8@<2qDLx`|07C; z@jt>PxFE58urr>OdSzOLX%GVLk_8g5G}KAfNAi#So|=8IidQJlmc~3+7m>WW1A!}F z2EVcq6os8b_32G^rd`yLsL&+P-ps_u>K9`0U#TmCz)@6*3)gNX8Z;Pi4wlL$I5TKjZw?|r&#|94Sj&3mOVj4*t5 z4O58%smvOsZCDT!W3k6|nIF#3CCVZT3 zh@aQ5#}e;K@%69RMe5E_xqGFwwCwL2^?t-xxW&cn>)~gyn3xcF2%6?{A(K-gC5RPf zri!v!?FA_1lg8ThX(``g__EKeNB8zMZno_>4dhpS;u#(R*TVe@DSFzW5k@C3Yyk0? 
zF6;IWg`ghA)o74PL4?!(sCrG_st)p8^a9SNJ)}ErGYHS`l7@T~6OZXb3&ZXwjW?Ox(e8`n{ETqLZuF(7iN*CINh0H3X zVa?(w=gMKHk6ah)7c12;I6iLTq^z_fB~h6zhZs2;u{Zd ze6lz!u}L+#Q(r=I5y_aBlKnh|{5hdxQt0X)gXLtd+Zy=|MSSOkk1D-W7fGR7Xn9jW z+nZDUwtLVQ8D9%+Oy;7!hmIIPGo^p)QMyqck%M}6N2P{aubD0a>=wq_l*~tt1E`UW zeTOk1&1PV5oT~p3q~VQN0LW3pPBI#s4cjmJ3gIeC&|uG_%q#Ll3sp2%g$rp%3QBn@ z(E8d%BzZ*9-JaT&p_qW(QVv=2mLDyt)YQ#7;T~E9sc}d&CAvFE<6p$9Nz4^GLd{>M z{J&p2oo~cnv@mkxercA@nS69bY`l@yO}J=r=z;{AWs*lm@Za`%Y?Y!;dNiud%vR-D zC~h)Cm!k>UESm;12-QG!oHUVky3e$|9}})W|A5Gvz-P?ZYpL8?Tnfro_rbwS{C<~AJjir4a4&O{qXs3VX3k_NgP zc$NK-+!UcT6)>ZAIM=-O`-6RJ4eE`@i~lICey(_ExzLa+(CJ4~1%SYt+zJeTVcE^9 z)gdYHS7)kW{Vj6(#8ty9&8^(TKickaKfX=01b#l?vwu8x*QZ~~PZgfbCPAi~;4!@0 z?ao!kscY2TQ34?C$(bflQhQSQ-tr0Z*c(pFCiV`MI8)df zL9WAg5D_Aln0v1Z7}S`F2xV}xA}|vYHK874pNX@mJ$&vpJ;xj2Kh^W`3x;vHh+Dwt zloOW6O$KPwlD%h#-7F`Br)_!6zxPPm036L%VyWItMw%TAF&SnT%Lj0Mxunt<%qT2b zKY8QiizwS&He%yqD+4vjAm>>Rdsf~uP#)m7vr~Jm{lUXYik{7qiXh_-fjx|kAQRJd zKIntHNYeEDoP6AkXNTd~^14t%FY=kkF2;0AdQ){%kG#@NO8TDBlmv_9mDm%;6zDt~ zlL{^O?dOCu0Rll$%UhDY_>d83V2dTNPo$zs@{sa6XwEelp@zq;V)|N0o9{Cu+sLFvu{vdt;cH5+_a9&;s>%GNsA<)K!jIt(>pd;wRmzp7|MpeC^107K(^Cn9 zNgd6^%tjj@)F|*PBA`{o3!W<8k+|HQF+%5ur3BZ982KUMkFw<3!@3i2lp-$^uTPKP zLy$I@ggeh>=5K1!X<`?ZVV@SKf{p|l?nDP^w|C$Ub$feI6f zLUt@Y#&@GPEEL_5sZ;X^BZVI`ZYB&y#{v?)WYO;wRL&*+!;1DRFBB-pLYJdio0jh4 zv>N{z?uU4-Y-_T8HD#^4tqP*?%*l!AXQ$h8$qax&qfO|7o9mWlSt80Qxp_p~-pXDm z1p?;bbJVO-w%Q$g$ed@0D^MB%N`j;C#A^O??%*A>>`kgD6JzfO8YJSPi0xd{ST9@a z$gHnWG39AxXlAy%l;wH_X(8cOBEK-7gX^b_vZ#T>sn`Xi6>Qlvz`tvx3KC+I1^5;NtI_=zV+SdzdI{8i?5n9W5>FU_~HbX(+ zwx|ep9!BuR4-y9f=`33UyDuJXoxWz|&R6bzg+P__yE_lyazbT*lJkrw;?v=?@$hA zbu>w3?P&l6d?f}J%g^TBMiZI9T+Zi|`>o11LV$#wIm7Npn+T~WFQ0P(6&OLci~;HMqx0h&X@Y+KuOU+&r!4^u zMlV_~pHQ9e6}{ehXg_Gswmw(>8;A5C^fiJ$|D2bpFR2Y$hxd|8)A9!_(zDL(1@iJy zkoi>RRUCuoI=avS^gWWRI*dQtq5b}0B2!8#WBW%uQiH!ISR}c_(u@MU#7Qxi3uAL; z?#XhTuIXk&4HkQ-tM1r)pu;fFzwfxi!Z-1Gcg^e46@jpvU6C=lT$Sa) zu{jo7lx8w52y~u-_~5al;4w>C*OX^@=G?k*?c`-TJpM8RxXfHx$5{6+xjK`{vn_g?S5g#7+Q}=; z6?T`9|M_uT84z~Z`7hh~rYyt5k*Y+Z9?FSOftBDM_lJ|tN_PQ23dmTRc->tdBR@EwM@Sl zpxgN7_jx5cNAP)a_XRAr)U#BV+Nk4{_qCL$3T|`*1lOOl6A||YI4w8b%lo^nD|I4i zhMJDkzuc&ko*QW?j}pt-++l$E8*bsyg23m4#B?dv8U;>Shnw_{_vVIU5mVaHJotZU z<$qbLbBzy!UrqZLi3b+fxTRZl*iz(^E`zHA+6HpRdf&)ICqD}Yi5^lzuV!r3hKU5Y=^AD{T9rD zUUQ=A$cdl_46d#a`>#qWI^pgxwJEn5++Afwu5{Q-csLgDd);F^=q>u*{d zi4Kd5lVih&<%;uAK2IuA^HZ-%n}tez5QG4^rvBwhkP1U= z#r`VQA>Y^bh0S2xS^U1d4g z=Wj!RND(doAbSXVXN(4+@H{!_1ZO2CaDGiL*GgWpsYh%R2DuUsX<}qA&2i0Z=mh~N zU3i&Hn{Pt7MdwCo!KYintfjJ+6HGj}cq9NmwcW?cfKFUg>dKKCA`&edJMq_%wS&6x zmaL4qsbqgnYwKTkP-Kw~S}_4)dF^0ljqHVn;k(uFi$rg-$A3<5T_JF+$!x?6$jreJ z8QAVk2acI%Js1OkEnHnW_Bc$UWZ zx(>r4sVa*0Z41hM44l)T40EHucAPX}!I#-Y8fyrtLh@zB;F#sz;gXeQB#&W}!0Mzb zd+rc8WyLLB!dgx)_A?p)#l8UmpmW;mo~t6fV`qzsdfaz19;q{FJ@Y#Np2 zUD3|x5H2oPZV|+xJb-SMra<$!Cje$$7w692MSQ%ZKHZNoj@8=ET4StR)bfRR!6PM= zM-mWUXVVHr7noL|*o`F)-0wx-8Ei+SG}1!=d(lwx&Mr}+>O5WRC;}+bmGM20C1vor41g`AY>Zp){d`wYF5wDm@q_PWf&gWfelSWsUP^*rLnEMg}fOEfz zU8iluag!k6473CmPMZJR)tA(5=%i`F^-NYtdD zilj{(<<0Z$l|VG_MA;A`6Cc?&50}0wd1eg zvN6B*ksDOaZx!Dd;EutpXWtmS_q2GOD!pfD)5=zA-olZR{{tuxxZRD(2~87n2^5Xi z4+OEu;@Rvc@V9Q?Qdfcyv$9%pgs=94PaX}!BY(jR!dvWp6xFQayV8&&xZqx4IY!&Q zCwRNfq{H(FGaOJ9{ z5^PXEkV$J4IUV46D|MnkfKJI9VWnvp1gE=je~?rknho9ggt&l36Hsa#V@=lfu;dUp z=7OhxBqG^8@hDs*j`kypiMT5$vS`Z%eyq|6o7Zp*o0ZfzZ-hT2WBWi}VUZJDSw~`- ztwGH)1}sc|6US*ut0GI@rcuQIR>=ziwT(I=Sg@sgq=iLyHYE4qq00_CBCC-rzH4i< zjEh`im#d(4m9)^!p0E@SwySkXoI*0=rsWS9^rz&5r15~QH`0kT##IMlIu!{g-gdLq z>GEt~?CFw-5R|o+v4dcs-&H5Go;AH9h?&?2x7$YS!|E#cx3$FyiqK@!sTj?(J`E5@ z=vM2xksPKnpX*>iGB)DSmu<^JEGDMKXr!+lhh0@f`!^04yv$iIUtkRYhmwvETyUeC 
zo(F&~91Q(T?cLTc>99xFA3uT050ccpAPv*l{u*RzO31m z_3Hik-=Cr623$+ZKSMEJo~NOLVxpT~T)Y2I!VaSyV7e>fob6Z|YX0BG5jpTwMrl6! zdSJC+R2C9=BUJLVE^;LQWpW#51%Bsp^n$6c535Vi;qG-ucQp$p_d8v;Yp(wRNqy0b zjH3Ug+n%wvqG=gQ?>U~4bDyH=(~4FZy3yl>1wOA6K3-^r{#ZEr6r8Yo#hilk!lm_J z&?{bDmn_`yXVfNa)02AV4qq$clM_`{)zRbt;#IvkIBF+8FjBR9sN?YQnX841v5{L| z-05BfZ$foh>6tv{rl-ekcR$r~atPZ_{;oAz=cw_V32BigKJs zq4X^~?8{fYwW{Rkf0Fl-*~6Q06qzGSU*{@V{3^=lto@4Yf=J1D3SaORebS5-kO9`3Ez480@dm#*3~@5#>R=;_F72AUCA$ zNaLl@-55=;8L_xawDZ7M?Q#$W4_K6U$r3;~h1pKB?(dayU(wAb)`5=f=gWt@F6jsF zkF;KlNp{YoXww0uSc-C>y`97!^ghB3tvafV2zp(U75P-W9<`Rd4gID1N3-RDu*k8;OGMcx8Hj#->=-j$`QF5U zPRC6xSJ^(sPLmFnlh!6%r`lilu4pVmiWw1C7;Fr`c5Aw`oFU6P`rytJ@hLScxt7ce04)n@XYgafE{(Nlp+i zO$TL_o!2{EeQWI*9)hdi#@aU%_lM-OGLxF~6K>sm-n@k0?Ca9xOmwF+8L+Jf;c}=MExVb_OeiikLikYMXc!=+`wtyPBY{Pl(E%Te7H{A?Cm@vq_}y*EJn`L!8HB1`jEZ5HVkkJIj|^(jUM$)2`hdp5 zl3|Y`-$@a)9={`%nKN_6@4SozyL`mkU7OZ3=C486V>*K7KMilK zoWl8BufENQ({ZD5#8t!gd~~(*)qy%1@^nQTUhwk}XlL8s8*NS&sCIJmlLq=D5LH-O1ACfNYumvKCoEBfSGt)F}naxp6oyLK_&N@ zOJ(&_A+EuLBi?BH8mg4_$Ta>lVXceMOwOa)7JCtI`T6J6RJCRPpm0}5Vl4_MrBU0A z!!2a^gakekM+J8mf8^{_k~)9=aLzURvFv%>9l%M+pMH*f!)|h0O`sK-Mweud+#vE8 zk8&H>;~e(pL5?QvM(S9d>(S`(!?eA@_j{KE$(HA@2Qs#Yi=bFQ(8|+M)abHx zYbCs7@Ty&3&fXQPws3;x{_4+kMZ2#zd&ox^u3=1mx#3@tl7@@2Wr31F09L=-S};DV zN#&f3airUoW_O~ec9`&vgoT6ajZ5Vc(K-9N==orcT06ni~X*hm68`XWXa zfLA~V)C}$!_~?yT$@bX4+s#HO8MZntk!ckrEhp)P3a1Fy@6YYf11qbPaWEsJ;63jU z$vY1r%ntxyUP!&1)|Q;>?Em=JOc_ta=G{{nEgjoha@xfmZqz{XV9g%%A9kVNa*(xa`lJ)x$F{-i)XmjL$sgHFb6As4 z>AdM#rLiG>AouT-=JOjj?0U$6gGETS0zi)_Og zgBp=hfVcq4n!yqA_>zECr@&-R%Y-sS*$;z)=SXAZzshp|R`gWKPDt3JoFs|{w@RDz z_6|O;hINuC3ul##R(g_+)!%~dD6Y^rfJSnaH!NS>S}f2pyS;Uxbk}+C zTh$rUt}5_O72Z1D*C|1n6(%14ywi4ylt`_@Pg;RuG_FNrARx4WExI93`|WK^c9{0& zKL^w`_zJB9l!C#nxJOwK%ZjY&?L z65g$F9vqq+y|EW3_~WUC!55{MY}@a5QEz{AB>r#Ns5L@;(ja)}sf?m;gr-Y3c9H*d zWfS}U1lNkZ9>CD%)XuL=xb-kXXKrQUZLy#1Cx1!r>e9D;STFhcZPZHEcZ*1wf)XsG zfoi36Dvn_?IygAM6d7TAAfa0{k_)NnEB4o_-0{Lvi4bkiatsd*%;z;4L%r@v^e(OE zk#_G*`{f>eVGA-jNhA%Pxa)S)K~=PZE9yMh>$2agERoA!=WNplWZaz|#K850zm{J& zioI$4yY*-8ljPV3Noafs<7c6^?agX%I&c5A3l;NxrhBhv2460}5 zRfYMzuGT;Kz|?fq#dngbO=k`}n*~u`FmGz0NyEh}Pn!OgRK&gJ<>gW^z|{c|f3q`& zK(6Eq5N3%H!U_OvqQUiOa^hU^n1fgV5zK=ktFcW~oWNBYG!B>IkP%zZ;`?pg@`hVQ z1v-x&W}e1mJH6LZFF?x~-Ia&>y#sE1JJt=fm-=%{W-jSz>RV%<(vk;j8EcWPwBd@T z%uVFEc$;^n-Y}blMFjf9C3G`qPkI`npo{TB}8{`k%Ee3JWcZECgV|ia~ ze-L75=|}!Xv5rxU)!jajkT=5{ml3A?+lpbq+mX9Y$@&zps=34#zGaKrH%UvWSp#If zswNiW4Myd-$~T2COUm<3!f*db^9e~4u%$89RkPbhT+j&nok$rDYne)9S{-x7Rx*f- zCR`}yJ>ehk?0hcO3pg=Rp@y>gPMC`n@gTrE11V^+6(vImESdV8I5Gq$p8MOum?ML~ z1ugwdw^qp9K> zUBUUm*P2E{)0n&}DiGoxW*F+bl6UT*Zb8AFO6lir5m%PjyRPQ~>qf5vW z;+V28fNyl_xQdI2WCBa&hdm8-TS%(?i?jZ=+xU(=&9?~&k`3y`mygkZUY;WP+rx_m zKoRnWsA68}cs(9B?;!c%%+e14TD;nr&<`4Ar<1WCikI?3gdfxMR}XE@{L51-oy{pP zuCSb_B*A<=V1gmKaeHH)PlR&EAEDz@jOkS$`?hzcou-kIerPL1B373P@#cG!XYOz$ zCNs66u(jAZ)jgRWAu+Y%3@h%T>XrEFZ435IY@8l0s{dWB_#Z$ezZH+20Fr2lGGdMPRsHCEm#-JA{ku35kk*UV-_mm2}Gz|{5GN2U#*5Ii|X)=~WA_Wiox zHvf@RBQ9(TH;zAIjiqMl0$y&24Hx^x9r1v@;^9e^ixBC28V|#Ec_WaO@ol3PzEn0j zN?H9cpNctw_b_i$WYF9A`(()Iw_~Rg(Nu%Y`w+|n0>1j5Sb)Mn?_U=7ZcDT#qUz0FWYLC$x#Oz2)4q}B6p>WbIOqWqMo+l@1MmEa zoM(+0g{{aH@Qfx;Tz>V8)S*EqtGbT)wdL-#Yb@ep`z*sLRkx_%SDu=s{7l>l@BU6= ztCUI1?yWoP!ae1F-*ty!zlr{w=tB;~`!5xqm+Xg=N(We7prO=%aWb8u2m=GD(oE40 zf9UR+RpqUPh$NTAmtHJgSqSfmp-?JLABGQDHz_-!Z^0AmMi>|A_vdb5*(u#k3z~fSMYWZxd3#~T8 zv|jKrx!Tc*`E5*)OTC?l(jwl#>^q0WAP{T76<81fCG5#ea!R}r>8FbtD=K!iL1HDG zW3O4h@XKNBh3R*X?Wyi?ioamJ1fWuxy-_J(Jmy^}9`fA_`inu5`&)3MbDcM$MYrVLyK`Sx0RqGy- z`cs*shSE2H16<&K@+5q`_vF%i+5Q(p_W1MHA~{pX+p0kkSK`DP$0D(xq5Arj*B#13 
z6LLLtAm5SHPmY3n@Mza(RD`q1_pCd_&LXvFQB_50S+CIy|0;ct4p zZzlc&P+K;g{eGHdbdBl3@0X$$NKCq54#{*XV88h!9a1!v{jAFRP^+kvN~JX20mnVH z4L#d_JI!Cs7U|6j4O1L^_LXY&ia?6ZAVs_MOHU_47fA*!gBddwnzoaciKrwjd}-{4 z_T}W&0`}pZq91W&axC^g3*#d!I^4S#+awmG>-;?aDc~P$VEK~DmAMU})}FqA8xaTb zqY)!^I$88hHUfUUS&jIFEmCa+Qr%CuR-@rB`y_Owwx2aG({)SSlM*+t0ayKXcGjY| z$DEFg!A4(38cLX`@xKGitZq`BW>hcrf%(>Rp>Ny%RCDY#SbA#I3t` zaZ_552p2la_Lg8l6mbpfeTf5#VGDYU4lQHilod^s_wMEy-6;BGw*yO{TDE{{^}ZaI zO2x3-;EX`gpV4*&mt!)r=ln&=Yhp(=p%ur@d_s;mX!-|=k&%$-<6nPbIgTUgG+)+C z2{Cj&QvG<4By{X%wMP;wW0~GELB6H^w<@0Jd9k8O)59OjciY7B!SX$r9>NMJwHYf@ zIYtju3MW#w^0zs~k*9fZ5XHY$3||Bhm*$>h+dk2%J-T0Ggv(hH=!Qp8mqzT&Ds}s( zf+z14lK6{P^e(Ot`rew0eJRjf-E~wCG=@zoRxhzkoe`enrnSIYQ0mMtf8wIEDzcqF zP)6dS*$-gWOI4C0+Y@GpXffVa@@j~bZ8?bE2g@;&def1ZwC0iPz4Nds$@?i1x(k=m z%bm@xDcl*ti1a0TdLc9x0KO3^d9-Q+N za|ENmGpNmRq;tX^ubd>4?oSzt(bp0od>eY*jc?b{5_g~ULiiSOy;`4rABAcKQDISb z6yx6*#J!j#=5`(6g2OA%Fk#x9T8kC-G<}ksXxY)9cA}_&XA`O6jG%}Zh8wBQF}ypp z*lh#Lh|zOM7G1WbU`Gq{7;7|z+Jshv(n>h>Po-D(VCSJ`ce%&AuW<`iTg%q7iNc?; z7JqSfkCJk>6)ZTT_d)~b_1^L{{eg>-xo%m?_3)p}QKGkrn6l)4@V#Uv_5$a|`@H;( zUxVzSS?Jv+Q^GatTaAlROhrTK=iHu3zJW^m_^O0COV7BYzr!?73;$el^Ue5pLa;OU z4pda0T*E|!2)0Qz8V=98u^n==4mM-EY+)v^ZTa{7P2bFky!ZMP-7@iA2^e6JURC4M z3H3vPVJfFy@XOOMj@wR_jPjpz`Q+Uv2g>C}z6OzsY+_Y!^F>|DFTS=gYJp+}67t}W z?NR-6tskhcAyB&SoEj%@=NtS4_o=|viOP2yaTA2bKMhZ8Y~7UKP3g{?n5}zB`osSM z9y`V(#45fC%vEeIVVKr_Y!bwA23I#O761GW!^}1y;Lu3uAM&^wqtebKRWK$3wr^7y z+`pU8*{~Ji+oQ9y#5se89_Bj(5~S(f;tJbb@}Lce4FnN@EVtgZ9r^@L{CJ_Yq#J9a z3Qb9Cws1225_)dC9PX4Y2bOZKFTqB&{C_XiH35u#2^z0PWr zEgwOXH%Z6nNq%~gGKGoO0N<^JUR;K*p5MagUaIyngL5@1lwips=C-4&@SJ8I&N*%KYW(?bs}F{ zb4mX(`!DK~h;f97Rfnu1=G%t)|H z)|a@<`_ll$hrc4w1EL<4hm4+VgWHTiu7H&LGz409@mI$>uP{%~gFHj?&$O6qYhe|H zF__&(+H=cI5oaSaWVU15786s?xRWnUoN%z>9xt^RGZ;$ zg`KT2WKrIf+!}>gpth4?wddiaBbT0Xv#3TV zsNF4MjPkWT&iY;)rFZLP=W%&Q%O^AvJGNvJ;dJ*a_@&NCfDMG5DN03{Eos<%tlN~| z0iLS!YbY`ZD_T|>lG#2|IuHC;_gcBplpoN8mjaGh9O;>gR?r)JmqADMmJb;w<1B1M z;=RdHKdsR6yXyUG9#&Y+y2FJIHssJd1Pb&ks$si&u>7t;(Pf+3OK777oBX0noHVQ` ztBygie_Mesmu^)?!ybo3>`Vlhpo-5{t?hjgb!mw1;Tz_J)`Csv_h^l|K>Lt`4tS>k z1OM1c5@K0-FKeU|CKfkRK_XuM^4;l?nyqaHQxM1G)R*nr_Y6yCb8=S3k(Q$3~{SfqMOWS`Qo)e~5Hjwh^<@Z<{T~Sks4bgzQYg7Whp?lwk zXl_BK1q_8s0glqI0=yMIu#bv;`(>FiTi3m<*69Axx7dzx?j1nSI%@8PGC-S|W`7mM z=yMYPc;Br3{69b)S{uTc1a%0h_=}IxBh)v;=y>v3S>jfc^%#Em$NIKE zFnZ#8#ryQG)$i8=8G7b(dc;rmaw$`Hj^ZD;Iv{Cjn6*uw zwD)7>)Dl)aFn2c`agKIz>yjOtMmb`E^v9h%q;>6gIpq;>6-u_tjX$D2;-dYiZ}+kK zia$L%e)bgpxaZ03$UVZZiz=bWO;#o6?!};m{?6+NTq2Y%k6Emn1q&RbNF^DY^1)D8F|; zagKiZppNo}hxHIW;2)A7d@ZT4zHMMc3B8v4AkR1T&#{?Ry~ek{1-L@KeTq$I^2Wy= z!2X;mNT4}~vRqUF45MaAl4Q0+iug5NscGq9#c>!-8!t$<$lMCZL2*Y(vZnSfeULUN z6h-!{^g5WaVR=Ew$FN=irHr>Rb|qh!XdRW15{9s!fyvKj?MN;COry(noHizA9Cm_1 zx^?+yJ24>;Xk+x3p}0mqpWzUk??Annt@9*N4uzi_sD4+;t>4EKXw1~;M-R!ZZH46O zg6VU!gC0S6kOx>%X9Sf%LmWwg={78{*QI6@&x+kd9x8?bi9+~R+=1I0#U zzg@2k^*Fb#Wg;HQ_2a+Q6pC<#)l@{pA)&>EZT{**oD zW*P@5EC`6p|lT8-C=N@N% z@-Wtk*4TtOwAGo3BWRI$SWL#^IMNYA+?)V~t8`VaqyCTfl`oMUr)E+nu-K3Bl~DWn z5w^M{(al+lsvauOI?6k z0stMgwWxMdw~#gg8mrHMU_&-HFvuB*BMh)lD+WKeLfi?fq*2rtDrSqRoQ9Z<$uXjw zUX%)roJZo34R(wJ-(+NXzII;!Tc6of7Q!EUQp60bg>z}-JJo5&ib~Sw!*s)z%tj{g z!8Wk*rX8Y1Kin6ua%jiK@hUpK1eXlj5rgJ|8E5AI`le8-kc#WM;y#byxeD?IwzdNZYA?H|V2{z4iTlYEG$u*EYcNF>) zGsk5t-Po{39;gRKgZ{gOZBpIBr)37Y-q#0cl{ z^igfn`z1&EkgY=cn|VkB&QS=){n&`dMtp?XnOR$F%NUUIEe4BKuf~b?X;;_E6`SBA0n?mtg&9jA#$;YBLaO8dZE45 z$kg?K$k^6m`)lr5ypmvk?iw~g1T~;mn{9Hh!Plv?sk#Fl$QLlaJqHs44koCerySJf zaOhmXl{v(m7MQjB3-jg3d4xY-NOj)2I?Rdv83yyglX9HTX4{5A);KB?C;mM`TnGz( zQj69cy($U;{R$X!fVzuMn6g6Y4D5#1$%wB{XB_9Q3=A&wwbz{pu(nNd!spdM@% 
z7HnZNCl_6GpyDGhcj`Rv9b#|$0a_T0pnSayg|{UTG$rig&AaUXR`((*qxmIg23ibt zYT)^$!EGaVTAN@XRfDn}iH`zSKsts-?(+*tuS#}&KPJk49({h=?fl6Bz04^ckecVw~$NS<)JvtQ^`f?%07pqq8(xy&EIVJd32R*=QF0bL?3Y_fXsOs20+SL&-@~ zOJOphjw97N`V+^m^?$eju@W}tKcRUx9J6jLpsOM_6DT$cP( zNz2?tX3~Y`q=e?S@Q%vdzFxnweq(4)H@OANwId#Wiwdq=zLm zFBr`BXNfwHfs&g+hJ|J+uzpD6kOHcEVW>O~1F4$G)W+M?+z3Xxuvpi4KSxKBc>x~m zwoVUPa6jT<$qkVhK?+%%w<|4xrhoZ-hw~2Y(4~lfzGCea6uq2eJuy%tfME)+ zFoHnQF8cn)>wNl`_oCBkT}os3!t`P`tK&v^;3O~&K&OfV~g6Mnb z0XGMz=gl~pwMk+J2KeXu2^w?Dc0A~-B`H>%2%3`9zMbN$u;F7}Qou`f;N0ATQ2PxM zzsn2^UZuTVZv%ei3Qd5`N+#9PK9 zA&4pMI>3D?AHK>64T*?coiwii%*<|v>V=d?S?2nFq_%q`QrnDz3hCZU-jkdtN$9kn zY!LObf6AUj1>-pLjmiVg_X70b=L8~vHj&WY25limnw?A5SIoZhfINR9&L$85e{#@G z&N3XNt20iHu00@$RTQJjBIGw}Z`S1j=7~{$H>}b^b>O)-Oj^=_Wfxdbi>=^i$tyb9DtMC>_^cu+y$9-$;?5QtpYQl=4Q^ z#%!X8h(Y&s(}7am?v1%d#zBhpQeFv9mxj z8K*spn-J<4937wVoMaj*abCMw2#kcF%jk+LD(k1xjMAxKmo%WKnW}1A4OC!uTzf!B zOEmb*_pG=5jo(}zA7ILTrLQnG?%>gH-IpWv!WyV5WWtwhJ?*)sbiRpOT7ISp9xP2A zn^7YDfZsVbSeMnNdG}8s(>SFEw-|(TZd5ti+!US-Hh4mBeO-j!0YEYfEAdWQ!&yu( zUiWl_Z1BygF8Oq%`k$7sq`a~?HX1zm4`7lndRv-DVyqR|@3a_d7rD6d`k5W7<={Sv zMv&%F;{Ho(=?Fs*`-Yh0AI=qzMf5WXem8(dVhplCc2D}q|Fl5GPd;$kYw>zRF~JMZ z<<@Sz@O4abIXb{6gQnH#d{s|Divg9<0lwGK|JSHEoJ!E)vKGhNkv_#lOX2-X!@vDy zO%%pJk|f69-xq#6QQxrZab_6h-a_3TYUTvXZIBF@CluH%ktxx$C^0J4DrI zJ4I&iI}n0@rtW&$)+OEq$T{=owrd!GHGy9T4}{EB6M^X>t=W)9i-G4xjexOkszjAY zmj1)_@boJlb3&|~={mN>TQ5a|B#H3C*FWEtPeD6*u3qFGAzP2!RkYkJWV60YA|wOQ z6t)*cLX+0lf?is$Jk1g$bG#+j-8w7G5~S$8Bsbk$xoItWSm-6pj4CwHD55U>gCX=J zscKNi7gHK(C1Gp*3w-tI#0=_1Qj?dJ9X)ydRrbs&LyUpjSA{APQNwsT)MWFk&>~MW zu*K>GEB*&nXlU*eMEPJ~Am!VR$l8#%5(@D0GS4H1gz%}+KnuI@0B1(>P%=l*Ro$i) z4F~zixM}=`efQ0{=&7JVFrVw+#2W|BwQ~U;)608>Ra?nO4k+*$xd=Khr}C+BFJJF# zfXS$NsjxoPJ86n8fN498;G-XF+VB1-t5S=(Di*e>=&W?w%Kf_}Mbj5qRf*M<9_GlU z(%JmqAylEWLh}MXRbQSyugc1H+577P5Z}{2$kXNPVAZlh33c~nKg&c_DS>_7W$j`e zZUMiC%*d5fQkBF4%#-Q0bup|DOZ1rE>ce-!gQQbe+iq_+C>U}Uy*B0>6NV}+(V;5Q z^$Jb$cs$*VmWdcz3@LSJl})wle96}D${i6*ZS(x+*=j~#TVC2y%~DmZcApb3(k^WL2y)Y#kBFAo7p49j)!8JhCce7 z8vrvdN42MiX6tB>OAAU9+zQ6@8Uo(BiGec^;g6kRPJi5AtukR!bbWvO^OkQW7L{ln zJkcBc!(>XXO!(n+Ai?)PzzidL!YRtQ9yWQBA3ooqGyN63c4;*GZQ>~Adp8~Nl}%Mg z!*{d|B+l9s0z|qAZ+LEwjBJ>m#JIzy&8ud-Ane3FQcrK=EH9D^g_D}om`mhEByvMU z1{nh^Bx2beT)RTaJRj9U>wnW={#7~)#PybXxJg1@F7c)~F=R>ziHOvb=0&Y{&KSS1 z(vhdEJqE4fuQ_m4{t{!Ed}m*Mi`0+Dao`WqT)r0^suAVgT90fMTWN-FcVSJdU1s{M z+R_1p#i(?zsX3`0)z~WK={wRf9^KL9dLzYMcaL6@{;u(JCiiumAkPa%-vd%o`usP~ z0T&7phzIUs0M|v*d({v&0Rwc7iNKY5?zgidgV4}I*faGbyQf!z@uM~mR*wu>wP}nn zyS4w&iKO$4&xW5C<(L&0Bro&;dK7zh{{eUsU)?ANJw~2k${@qg*Kf$#;I1x*_GkI7 z<*38No6|eNiP91|sQ|KsTY}ho2paUsRKy_li5Y)qj68kd5q2slohc}tjk)$`+r9=0 z%UAKZ#u%(1kfV~;B*L@BT3bYa{i-OtutP6O>X}Zzc3=;^$zwKN_mo8ER85wq9y0im zI7iT|1K>T7&T6ARY6fxyJixU4>G*gw`4n7B`dxnqI%XCmAKA+AXjH+DJ{g58ibtLg ze3;A42@UWK?ul@n_IW?4HM?Q-e3Gbv@6Ps;;Wd*bs-1L`3E%Z`T<qpg<3W^M zma-dowgP9C_@t5T|1D~SU0QU}i<+?6#1W^^{{Z;zg2(88CvJBx9QWaZXi#Ysac_r= zT^q+8@pXAVvGz*%$;IQ%2PS)RC}kb=lG z@jE^7%-LVkuUte+f}egicVPPgj(gCJA33{IR@Vqg%WNm$3T|t~F$Ioqx^0a9D5goX zXeY)ANb*_ksFrr)bYduhOYJse;0~GDlwi7js?w`3Re+(Y)i;AdM~Gl(ev_N0w@;qK zfFuStLepF$ZNR%V-ztRp?VJ0JGV-T-#$HlBHwBKz6Md#Hk##soO|h{v3Qv-ocVZ2V zK2i@+9yxA#_C3j0R{GR47TD7FaWp9hZ8RJYPDLk(vR}&mn;>q2+^%hc zj{A`IelpHe9krw`H!V!%FS6XT+nA*L-5zq=y$WS`&v(KcY@I!T#e>}JyR*U`H~@|vii&Wd;KimiqGH7$0&&T>~M4w1Fg`<#wW$DfzI-9YYXta<8&wS zxwL<5-0J4qE=4FpoWpLKjQ)k$-{ZZtuY!vyUySGn zKhh(6jeLjg8Y`hVHfsXuSUZdT;81i!Kg=n5oILG?#G9W30$e%%l)itFOPbzlIY};Pe~S9dY~08bxj7mv zMCzlkGaLUFZ+8WRI(v!JZ2>>xp?8^P%EEt|e zR$8k60jj)i&*P-r#jdV4KqWLqX^X>*#n>rfUDEb|tPVo&XryeSbL;%@wTl%i~M?&wv4| 
z2Yz{*M0DIScB7vT6)t*CJPadPaV%Qp94}OIo&v>;nH3f`j9VCPxF9NeTvhD0QyxTZ z3%tMWf9&RFgr2~NG+iCzhT_(`)tG%bVVmFEh+BfTTYMr$*W1nAUGG!Hqe|d-;u|Ar6VK$(n1tNPXBw4CNs2Nlljuy^jzt%PleBl!Xo;h~ z*(e(X=(j&jQm-LrfW+)xa=(Z#HLPZp-qh_FoVa3>{O0atWgBS3usy?a%?wY;{c) z{#QMpovB@ijKuR^LMlHq@q|ShF*hUfaIThpe%RVF{9{mS!MVZhjrk1N5{ZQ7$>#&_ zpcWXW@3OLC2j6O*c0LaWf;uFv0kLH?xkAc{H%VDRi}gAUP|kSNqKX*rChCf?MEjnN z3qeyed%~&D!1z=7t{CWA*9pl*fXDFpaHn`lIxMF;G&3FtuP|8n?u1v4mFIar*JF;h z(xNX=_%=Ri56#)+SR{~UPwWdcN)G#*Q=hyR}Vm96E-+TrulG z^V?cs7a>85e6}YgeW2b^E2Mco;Pqo63XchJGH^K~%m6zyu5hr=v36PT%a=W3oL zGi;I>#%Mw?81;!Ba!5O?WJ2;c&46p0LiHp7zI4`q%P>?IfzixRREV`KC}P9d(ry_> zU-|V*LSo-$!XndN)GB_6#pd~1M3Z>HjkxVj6y*D;S__L3yQT=AP%VL|bv*}gnU+t^ zk{MDmwEdIX3Xf}7B^Rms%s^jhJsy07@bvA9Y-h}Grg632XHJBQ9FxqX4A_5pL&;%Z$DM0UC^1cvTbk4?Q=q1GLyg_3EOLB}3?n^k;`x^%ml%pmV?=Y@t4$N^Q5Yh*sd)~D~ zJc}_je1>g*_5mpxGoYP%MhGN?$6`}5DTW(Jdqcxl&khN#z(y;FqU-P3;F*ljvTWXj zP-TaVd+yal+Q{~iL7axEgxNa?rn>1%bjaM?GaGK|vusyo}U8J2e;EI)%CdU=Do%abqK1ivgxdqn$ z@(!GC9Po0`tHmcbhved2{fHJgayGVwTZTF61al|;3-LRgN=amOclWU+SXH%af@W;IG7WtZB#f}k=M{Ra-%R9= zjwJnuvTdknLW(QBX_vp?X6PwRb`*=9uDLiIVP zcNR&+wQl-bgEWkcVYo@n*lcZ~6#JHZ?X+3MQwe?4N1AYTS_h*LZk)*3@VF9gSQ{$j zLUBS&&J!8|4Gs18`S|nJ4(5UBOaP_8O+w}m znzVd}UChr?MH6UULyqaMfNs4W#)@=ZLPzvw|9h2&K!Ru4H($k@;grEj5C69j{bNhB z+Fz;CswTxcbpqDf37OT&L72_jpsl+yT%3mHG;|gkKx!&1y$r{-*2fsWl`{{bS@2sA z$!`#TH_P|)8c+0NkMqFH=I8qU82K>s9GWctpEQ@fA<9*aKEC!wi*(pQny$CfGA_AX z7SvuWgCt7>3cps|^_K`o^R_VQnBy%3?ITkZ5Oe!o3Y!F4xbzdI-)@2Tr>>`4hl|V* zi1()?{@+>i!L1*h3hK)5XCUtIegry52tgXdCK1vM^M&pMKno> zYK3i4&njjkHV4g?;0ewOHo6L8f`qWUryneQA2@$#`VtU~+@}ZnZsN{~xGgy1?B_^Q zzOzdyFUY+i3?-8VNFU!N8R00N>7eYNe?J(o1_sH?xN=}M-bucw8-w{W`aVyrFh;Kf z!UMWbg7g2{>FqGb3&6&rFx5*lF`EpRu*kjib4-sbMD6|TXoR88h}XmJ&h^^cjxR0;bI9A0*6aPWcp_$w2C&nfkf4}9{@tCxZa(1 zo-?si%*ryhR=$71ylq(76oH+NH*0}O;Y=&=&B$&?O3&;eYn9i23r|MaNl%BgjZaDX z#MBfcdn(`GlQ8=xIXP=)XQHSBrH84kN zO!S#tO&X#kSi>#9)%|m$KoRymA10hg+$GALLQ}c(NjIod(H93%^*s)s&Y7v9IvZ>| z`rbDqJIs_-p5{U|Tm0b!9Xs9=6(49Vpymv`xj?J}r3X4bTz~c}buO?lX1EF%B_x^W zWaxhTDc`!OsRJ{lJuf*JZ4>FoeWdtzZ^M+^j@#kJo*TW=PI|}k>^NWQMg7AYztg(a zY~`69(9F$l6;_56Z9!|MF_%Nm@up;xDj|6mcZvTEKpJ1!vMUh079h=++W0WkePT+R zmDps}o|j@#;*Is_gb>AaBs1I*F1Vv*E&x_^5$tTMylE0?M-__|cqf(xkWZNPE3d{Z@Y<*93f2KGZzeHn)TFQ4mwp@lQnw33o(6h3X z{67E~LFc~t?+p1YzC3y1X*G2_oYU@~E(rgOAgBH{2#4lLxUMnL7yn7is?bpP4xQ9ANTlw6&{56WtL6Krv5Q#U3X4>DPp* z)(=nCuN}W=&!gol+1cZWo5aQEkqyl*bIx$a%-XjsOD|WP^~2swnlKC9E&&;Zu)zb) z0|sQ^6j08?)^Se$WP=2aMLh_cHFEB1{lbkC!AQ?p>xwFdTG6zjRr8v+hI5V;VyAgK zeB`{PfyfXqTOIV2&$;i##8Osg`;wW5I6*B2@lbo^lCwStr+NGLW&N6d*~QW*nb?|P zd6-30mfyRk>8PH66&d>QKyzH+Yz1U`;+9Ak)HC15A?!f`m;vGBf%!gp^+EXLJ&3)P z&vFDxip*JtB5C6*&T+}i$CY#)l=)iGx1YT zz_-?Zd=7#rBcL<$!{I@U#bu5O^U@8@X+nhx2t%(V!i5SPEF$d9v$yV&V#Nbl!=nTd zL^YY7i1*in_k-Ofvw2!m8LEw@wNl9ZrCocs^eFv!&HM~J9uS{@+Qnj`D6|X0PTbDR z&q1x+;0X1K+cKiBhYgo%c`yl@SI5H1xAReA!SyK}iUlu1wzQX7ORlBrxhki6! 
z9@FIWQt@R<(B#jw8cuwP`Tqc3Gim+HP@Kygfj!i+k*;OtyERdB&(fYnk&GI4ie?iz zYqZvW(#J>|DxHR5o;K=_W?elW4C0Y=Qs=>)JR`6V_4FeQn7lGw??mqNaA50sG($LK zJI=_yKas1gXP6x*T~JoTY3VezGGOG0IXkr!y4k6Q;|_`|WD0p4th3L9osLunlFCaj$pIz-m$R!2ZsVei2GZMKyt zovF=ab6IZlEXF{=WPWsw=mq`wJNVB`RvR*%&Lq;T2G^la_L zY_g1z&ScIPk{e`J2abcOJJDD#J0Jitu0mq9K{3|c12tdK!RP=Ct$?v zR1pn*^Mk3ZlEzNd!!%uj*M>~t4_|vt%oaWrFqFk=YQmVB%)U8G4r0dlK*80A#&#bZ zRaOf=2@KV)Oe03~M@5<9&Xsui%~2wDy_mi&bY@}4o(#6S66c#aoxVY!ZwNEI4~_zI zP7aC-?7d1kqwP`YS121cO#yH`Ho@#gET1OUD_ldcEM|#6`_#kSI+cafd&NSng;W3F;1$ zy)FF%{{ZHW12*eTuC&pkxlr`E;U)TeUVzaGgA+UNhh@7@f##_hnLqqf?k)qwaG_vt)NLbZ9Kk+NTH>{p&YArHL4?(_g3K z&#V0iXHZTV*t)_n`ZW{ucDJG zJb}T%403m#gUB5EA-m7gW7&T?IU;JD_pYdgjulUXk{;wTU!cQ|7>dXR>+fJ@e0PLA z%{7QH!ysqR5LbIw6Em}{PX5?OD>|XT3}C-FM2qh{Jr+kNAO@&drHAT7)C@K!1|`^* zSRJ@|pIR6mTFmf_7qhHT1Gx-GAplr>=i825PoW^dGdxSo!;Hcn2ZHF$44E;Jvw-7} zXLM*M&wp|pumCzs_P7o$YmyF5<_&r(r0L%#O&=%lTQrpd{)hhn-Uy^(Rz5K3$_t_g z=Y~(I5?3?$yF{o569qvjYH`;nZU8JVeDPpol7!48gmQPML= z`lodLzKF;+?=4S!mQ74-y%R*tJ0J7LxPNLhUAle6mMO*#PuPamBf?;zWR7htbnq~` zu=LJ3LyY5+$6$ngEEZ_fz~NtfGlVep!^1>)s7?d3tWR7JHl42VzT&!IvZuy+;C>z% zBhv*Jnb!nSd4Ymy@#hQUt%d-2>E37U#dY>F!42?aw?20GP6i) zgBc6~XN3F7CO<+(caUOV_2HMPVV@?rYpa}rekJFAOz|CK zQD+utzB?=lm>Pke&O7hEIIsgS1r0mz_9K~P9dMaZ+9szNpo4r2KQJOdW9P&2p_22_ z5iH0+pB<4kvj^7SfVR8NAYh^wR{sFPyE6bj{{S2jB8`b(&k@*3_d5-H4*d29zZM0@ z6)fHE^bfV$vWPZc^XZ4;efZGMGtHTO(U|@uJ-E;g70da_Okng9sLn>wL^W9vGuVb?J!OA0J0;G45^9et2?CU(m+MS3#7y#MV&mI0IGy(fj>$Kjex0j;6!i5Sc zOemcS6e#85P@zJOT$CtKqn9NL6eyDA$-=m8jZQ$1H&0)a`Kza7de$?KNK8ZFVyJTj zd5Pj`BIQ(1SR8WM_%H&lIA3~%{Q8VM`OeJ|2;KlCfQKIe#es$}VZ}PwhOQ!t226ny z1-f+f$#Sz9TR7C?jxJkc5wCsuGsLho0g>K;;+{x}cwTqMFKIwJ^O3mWvS&Hx(1A%cOJILeleA6lY{0f1)?#AEde$~l|6T}@_jur|PK)E6}G zzl=HK9G?aZ+1h{8MsLIiuvIVv3iMTnt|G0^5{@IBX2Bk6A)|N(8kQ=agNoM2*4}CP zLo-A>abU5SUyu*mh)uBEut_}OW=_}~GG~Ge`KB4DWN01wFAF&P5y;*(y(f5=fChUH zM}81q*>zny&K6mX$qow^0JBH87f>jwRkJKZy>_|m*V_o{*zthWPdY{%nIVAZb`^TN z?8p$CHgt&I3L|-)=8p%N41_wim(cJu_Q$(7JWDY`^DqdnlY)z5J5xECm}l%qSC0^E z@Yi;N$hZV-!LwWsqvIzom6ag6$fRd^f# z&L5!=0Ht;*{@PC#`z(8ib*rKXh59(o{{S*98#V@MU?=CFJPt|R<0|50ntCz;_A)^k zo@3V(+($=hCM9+EAkCuWR$tsW4}(3Thurhv&K|@DbbYapsCaXAv9GEi`NsHsKK80g~1YCp9vMJ|CF9yNS&T)*A z#F#S-%YxQNdL_mn0CPd?)|DdT5!L9b7v9}octG)YByD9$gE2hoI6rVA>EmE|GGvkSr*&-M=$k{@JUY?dr(13DUuW88Na4 z91F`^12loF7N!Y*LKz^-Z4H-kwyOUCp>qRnocVl~b^-@}@>+00DX6h42*H z3=j-G02v_Gvpe3hPm)^8^8psled4JIDL{_PKqpi#<;rc92g)ba03H5>RW>kM`Qe*3 zcli`d_#u<$iac->t5EA*XITFL0)$I8a_Le=?AjxcHJC;4!$3FkB~+8wJ)A{Ilg(4! zm3O+>l3`#SoP6=9Y{hJ9aaBLAAFvUS%9*=1czF0N&pq~MF!v*1*^6Wu((53BO3C?$FU6ZrWgy78mNcY z5EihSVkQ&}i%pP)9y~z(e+z zJKrglWD?`~A+|ssKC`JuMnLVvSr0HrXM-kxvapq_Hh|q!FL|5{IrYXXw<(hutI0Z& zCi zI-3Q2sH-I0>hUU12jop{vQvqs9Bck$q-#} z5bdp!`1V6-Iho_lSH?b00F0-Ij9iqyz2O#cg_@Rg=g-RmPSot z`@>Q3-eHm}Cb=1(ej|qUb{iFVYXst^wqT+(d)3tB(dWgQCxWggBvw{yF10T(yw69! 
[GIT binary patch payload omitted]