feat(blockscout-db): add user-ops-indexer to generation and update entities #1186
base: main
Conversation
Note: Reviews paused. Use the following commands to manage reviews:
Walkthrough

The pull request introduces comprehensive updates to the BlockScout database entity models and migration scripts. The changes primarily focus on updating data types from `BigDecimal` to `Decimal`, adding entities for the user-ops indexer, and removing the account-related entities.
Actionable comments posted: 8
🔭 Outside diff range comments (1)
libs/blockscout-db/entity/src/prelude.rs (1)
Add database migration for removed account tables
The account-related entities have been properly removed from the Rust code, but the database tables still need cleanup. Please create a migration to:
- Drop the following tables and their associated sequences:
- account_api_keys
- account_api_plans
- account_custom_abis
- account_public_tags_requests
- account_tag_addresses
- account_watchlist_addresses
- account_watchlist_notifications
🔗 Analysis chain
Line range hint `1-44`: **Verify cleanup of removed account entities.**

The removal of multiple account-related entities (AccountApiKeys, AccountApiPlans, etc.) is a breaking change. Please ensure:
- All references to these entities are removed from the codebase
- Database migrations handle the cleanup of these tables
- Documentation is updated to reflect these changes
Let's check for any remaining references:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Description: Check for remaining references to removed entities
# Look for any code still using the old account entities

# Check for remaining references to removed entities
rg "AccountApiKeys|AccountApiPlans|AccountCustomAbis|AccountIdentities|AccountPublicTagsRequests|AccountTagAddresses|AccountTagTransactions|AccountWatchlistAddresses|AccountWatchlistNotifications|AccountWatchlists"

# Check for related migration files
fd -e sql | xargs rg "DROP TABLE.*account_"
```

Length of output: 262

Script:

```bash
#!/bin/bash
# Try different search patterns for account-related entities

# Search for table names with account_ prefix
rg -i "account_[a-z_]+(keys|plans|abis|identities|requests|addresses|notifications|watchlists)"

# Search for partial matches of entity names
rg -i "use.*Account.*Keys|use.*Account.*Plans|use.*Account.*Abis|use.*Account.*Identities|use.*Account.*Requests|use.*Account.*Addresses|use.*Account.*Notifications|use.*Account.*Watchlists"

# Look for any remaining imports
ast-grep --pattern 'use $$$Account$$$'
```

Length of output: 110351
🧹 Nitpick comments (16)
libs/blockscout-db/entity/src/token_instances.rs (1)

`35-35`: **Consider making `is_banned` a required boolean with a default.**

Using `Option<bool>` for a ban flag could lead to ambiguous states. Consider making this a required field with a default value of `false`.

```diff
- pub is_banned: Option<bool>,
+ #[sea_orm(default_value = false)]
+ pub is_banned: bool,
```

libs/blockscout-db/entity/src/sea_orm_active_enums.rs (1)

`46-56`: **Add documentation for sponsor types.**

The enum is well-structured and covers both paymaster and wallet-based sponsorship models. Consider adding documentation to explain each variant's purpose and usage context.

```diff
 #[derive(Debug, Clone, PartialEq, Eq, EnumIter, DeriveActiveEnum)]
 #[sea_orm(rs_type = "String", db_type = "Enum", enum_name = "sponsor_type")]
+/// Represents different types of transaction sponsorship models
+/// - PaymasterHybrid: Combined sponsorship model
+/// - PaymasterSponsor: Pure paymaster-based sponsorship
+/// - WalletBalance: Sponsorship from wallet's main balance
+/// - WalletDeposit: Sponsorship from wallet's deposit
 pub enum SponsorType {
     #[sea_orm(string_value = "paymaster_hybrid")]
     PaymasterHybrid,
```

libs/blockscout-db/entity/src/address_tags.rs (2)
`8-9`: **Consider performance implications of removing the primary key from `id`.**

While keeping `id` as unique is good for lookups, removing it as the primary key might impact performance, since integer-based primary keys are typically more efficient than string-based ones. Consider keeping `id` as the primary key and making `label` a unique index instead, as sketched below.
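A minimal sea-orm sketch of that layout, assuming the current column types (auto-increment integer `id`, string `label`); the attribute spelling follows sea-orm's derive conventions and is illustrative, not code from the PR:

```rust
use sea_orm::entity::prelude::*;

// Sketch only: integer surrogate primary key, with the human-readable
// label enforced as unique rather than serving as the key itself.
#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Eq)]
#[sea_orm(table_name = "address_tags")]
pub struct Model {
    #[sea_orm(primary_key)]
    pub id: i32,
    #[sea_orm(unique)]
    pub label: String,
}

#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {}

impl ActiveModelBehavior for ActiveModel {}
```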
`8-11`: **Migration strategy needed for primary key change.**

This schema change requires careful migration planning:

- Ensure all existing `label` values are unique before making it the primary key
- Consider the order of operations: adding constraints, moving data, updating foreign keys
- Plan for backward compatibility during deployment

Would you like me to help draft a migration strategy? A rough sketch follows below.
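One possible shape for such a migration, written against sea-orm-migration; the table and constraint names (`address_tags`, `address_tags_pkey`) are assumptions about the generated schema, not taken from the PR:

```rust
use sea_orm_migration::prelude::*;

#[derive(DeriveMigrationName)]
pub struct Migration;

#[async_trait::async_trait]
impl MigrationTrait for Migration {
    async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
        let db = manager.get_connection();
        // 1. Fail fast if labels are not unique yet.
        db.execute_unprepared(
            "ALTER TABLE address_tags ADD CONSTRAINT address_tags_label_unique UNIQUE (label)",
        )
        .await?;
        // 2. Swap the primary key from id to label.
        db.execute_unprepared("ALTER TABLE address_tags DROP CONSTRAINT address_tags_pkey")
            .await?;
        db.execute_unprepared("ALTER TABLE address_tags ADD PRIMARY KEY (label)")
            .await?;
        Ok(())
    }

    async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
        let db = manager.get_connection();
        // Reverse the steps: restore id as the primary key.
        db.execute_unprepared("ALTER TABLE address_tags DROP CONSTRAINT address_tags_pkey")
            .await?;
        db.execute_unprepared("ALTER TABLE address_tags ADD PRIMARY KEY (id)")
            .await?;
        db.execute_unprepared(
            "ALTER TABLE address_tags DROP CONSTRAINT address_tags_label_unique",
        )
        .await?;
        Ok(())
    }
}
```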
libs/blockscout-db/justfile (1)

`29-29`: **Consider using safer alternatives for directory cleanup.**

The current `rm entity/src/*` command could be improved for better safety and compatibility:

```diff
- rm entity/src/*
+ if [ -d "entity/src" ]; then
+     find entity/src -type f -delete
+ else
+     mkdir -p entity/src
+ fi
```

This approach:

- Checks if the directory exists
- Uses `find` for more reliable file deletion
- Creates the directory if missing
libs/blockscout-db/entity/src/smart_contracts.rs (1)

`41-41`: **Consider using an enum for type safety.**

Instead of using raw `i16` values, consider defining an enum to represent the supported languages. This would provide better type safety and self-documentation. Example:

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq, DbEnum)]
#[repr(i16)]
pub enum SmartContractLanguage {
    Solidity = 0,
    Vyper = 1,
    // Add other languages as needed
}
```

libs/blockscout-db/entity/src/user_operations.rs (3)
`9-14`: **Consider adding a length restriction to the hash field.**

While using hash as a primary key is appropriate, the VarBinary field without length restriction could potentially lead to storage issues. Consider adding a specific length that matches your hash algorithm's output size (e.g., 32 bytes for keccak256).

```diff
- column_type = "VarBinary(StringLen::None)"
+ column_type = "VarBinary(StringLen::N(32))"
```

`15-41`: **Optimize decimal precision for gas-related fields.**

The current precision of 100 digits for gas-related fields (call_gas_limit, verification_gas_limit, etc.) seems excessive. Consider reducing it to a more reasonable size that aligns with typical gas values while maintaining sufficient headroom for future increases.

```diff
- #[sea_orm(column_type = "Decimal(Some((100, 0)))")]
+ #[sea_orm(column_type = "Decimal(Some((30, 0)))")]
```

`58-66`: **Consider using an enum for the status field.**

Instead of a boolean status, consider using an enum to provide more detailed operation states (e.g., Success, Reverted, Failed) for better clarity and extensibility.

```rust
#[derive(Debug, Clone, PartialEq, EnumIter, DeriveActiveEnum)]
#[sea_orm(rs_type = "String", db_type = "String(Some(16))")]
pub enum UserOperationStatus {
    #[sea_orm(string_value = "success")]
    Success,
    #[sea_orm(string_value = "reverted")]
    Reverted,
    #[sea_orm(string_value = "failed")]
    Failed,
}
```

libs/blockscout-db/entity/src/bridged_tokens.rs (1)
`16-23`: **Remove redundant unique constraint.**

The `unique` attribute on `home_token_contract_address_hash` is redundant, as primary keys are inherently unique.

```diff
 #[sea_orm(
     column_type = "VarBinary(StringLen::None)",
-    unique,
     primary_key,
     auto_increment = false
 )]
```
libs/blockscout-db/entity/src/prelude.rs (1)
`29-29`: **Review security measures for scam address management.**

The addition of `ScamAddressBadgeMappings` is critical for user protection. Please ensure:
- Proper access controls for managing scam address entries
- Validation mechanisms for address classification
- Clear documentation of the classification criteria
Consider implementing:
- Rate limiting for badge updates
- Audit logging for all modifications
- Appeal process for disputed classifications
libs/blockscout-db/entity/src/transaction_stats.rs (1)

Line range hint `1-1`: **Consider documenting the rationale for the BigDecimal to Decimal migration.**

The systematic change from `BigDecimal` to `Decimal` across multiple entities suggests an architectural decision. To ensure maintainability:

- Document the rationale for this change
- Ensure consistent precision settings across similar field types
- Verify database schema compatibility, especially for fields without explicit precision settings

Would you like me to help create documentation templates or generate verification scripts for schema compatibility?
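One concrete compatibility point worth documenting (an observation about `rust_decimal`, which backs sea-orm's `Decimal`, not something verified against this PR): its 96-bit mantissa caps values at roughly 7.9 × 10^28, well below what a `NUMERIC(100,0)` column can hold, so out-of-range rows would fail to decode rather than silently truncate.

```rust
use rust_decimal::Decimal;

fn main() {
    // rust_decimal stores a 96-bit integer mantissa: at most 28-29
    // significant digits.
    println!("{}", Decimal::MAX); // 79228162514264337593543950335

    // A value that fits NUMERIC(100,0) but not Decimal fails to parse.
    assert!(Decimal::from_scientific("1e40").is_err());
}
```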
libs/blockscout-db/entity/src/withdrawals.rs (1)

`12-12`: **Consider reducing the decimal precision.**

The current precision of 100 digits with 0 decimal places (`Decimal(Some((100, 0)))`) seems excessive for withdrawal amounts. Consider reducing it to a more reasonable size based on the maximum expected withdrawal amount in your system.

libs/blockscout-db/entity/src/block_rewards.rs (1)

`23-23`: **Same precision concern as withdrawals.rs.**

The precision configuration of 100 digits with 0 decimal places is also used here. Consider standardizing the precision across all monetary fields based on actual requirements.
libs/blockscout-db/entity/src/beacon_blobs_transactions.rs (1)

`14-19`: **Add field documentation and standardize precision.**

- Consider adding documentation for each field to explain their purpose and constraints
- The precision configuration (100, 0) is used here as well; consider standardizing across all monetary fields

libs/blockscout-db/entity/src/signed_authorizations.rs (1)

`7-29`: **Add field documentation.**

Consider adding documentation for each field to explain:

- The purpose of each field in the authorization context
- Any constraints or valid value ranges
- The relationship between fields (e.g., how v, r, s work together)
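A sketch of what such comments could look like; the field set and types below are assumptions for illustration, not the entity's actual definition:

```rust
// Illustrative doc comments for an authorization-tuple entity; adjust
// the names and types to match the generated model.
pub struct Model {
    /// Hash of the transaction that carried this authorization.
    pub transaction_hash: Vec<u8>,
    /// Position of the tuple within the transaction's authorization list.
    pub index: i32,
    /// Chain the authorization is valid on.
    pub chain_id: i64,
    /// Authority's account nonce, preventing replay of the tuple.
    pub nonce: i32,
    /// ECDSA recovery id; together with `r` and `s` it recovers the
    /// authority address that signed the tuple.
    pub v: i32,
    pub r: Vec<u8>,
    pub s: Vec<u8>,
}
```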
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (40)
- libs/blockscout-db/entity/src/account_api_plans.rs (0 hunks)
- libs/blockscout-db/entity/src/account_custom_abis.rs (0 hunks)
- libs/blockscout-db/entity/src/account_identities.rs (0 hunks)
- libs/blockscout-db/entity/src/account_public_tags_requests.rs (0 hunks)
- libs/blockscout-db/entity/src/account_tag_addresses.rs (0 hunks)
- libs/blockscout-db/entity/src/account_tag_transactions.rs (0 hunks)
- libs/blockscout-db/entity/src/account_watchlist_addresses.rs (0 hunks)
- libs/blockscout-db/entity/src/account_watchlist_notifications.rs (0 hunks)
- libs/blockscout-db/entity/src/account_watchlists.rs (0 hunks)
- libs/blockscout-db/entity/src/address_coin_balances.rs (1 hunks)
- libs/blockscout-db/entity/src/address_coin_balances_daily.rs (1 hunks)
- libs/blockscout-db/entity/src/address_current_token_balances.rs (1 hunks)
- libs/blockscout-db/entity/src/address_tags.rs (1 hunks)
- libs/blockscout-db/entity/src/address_token_balances.rs (1 hunks)
- libs/blockscout-db/entity/src/addresses.rs (2 hunks)
- libs/blockscout-db/entity/src/beacon_blobs.rs (1 hunks)
- libs/blockscout-db/entity/src/beacon_blobs_transactions.rs (1 hunks)
- libs/blockscout-db/entity/src/block_rewards.rs (1 hunks)
- libs/blockscout-db/entity/src/blocks.rs (2 hunks)
- libs/blockscout-db/entity/src/bridged_tokens.rs (1 hunks)
- libs/blockscout-db/entity/src/emission_rewards.rs (1 hunks)
- libs/blockscout-db/entity/src/internal_transactions.rs (2 hunks)
- libs/blockscout-db/entity/src/last_fetched_counters.rs (1 hunks)
- libs/blockscout-db/entity/src/lib.rs (3 hunks)
- libs/blockscout-db/entity/src/market_history.rs (1 hunks)
- libs/blockscout-db/entity/src/migrations_status.rs (1 hunks)
- libs/blockscout-db/entity/src/prelude.rs (3 hunks)
- libs/blockscout-db/entity/src/scam_address_badge_mappings.rs (1 hunks)
- libs/blockscout-db/entity/src/sea_orm_active_enums.rs (3 hunks)
- libs/blockscout-db/entity/src/signed_authorizations.rs (1 hunks)
- libs/blockscout-db/entity/src/smart_contracts.rs (1 hunks)
- libs/blockscout-db/entity/src/token_instances.rs (1 hunks)
- libs/blockscout-db/entity/src/token_transfers.rs (1 hunks)
- libs/blockscout-db/entity/src/tokens.rs (2 hunks)
- libs/blockscout-db/entity/src/transaction_stats.rs (1 hunks)
- libs/blockscout-db/entity/src/transactions.rs (6 hunks)
- libs/blockscout-db/entity/src/user_operations.rs (1 hunks)
- libs/blockscout-db/entity/src/user_ops_indexer_migrations.rs (1 hunks)
- libs/blockscout-db/entity/src/withdrawals.rs (1 hunks)
- libs/blockscout-db/justfile (1 hunks)
💤 Files with no reviewable changes (9)
- libs/blockscout-db/entity/src/account_tag_addresses.rs
- libs/blockscout-db/entity/src/account_watchlist_notifications.rs
- libs/blockscout-db/entity/src/account_custom_abis.rs
- libs/blockscout-db/entity/src/account_watchlists.rs
- libs/blockscout-db/entity/src/account_public_tags_requests.rs
- libs/blockscout-db/entity/src/account_api_plans.rs
- libs/blockscout-db/entity/src/account_tag_transactions.rs
- libs/blockscout-db/entity/src/account_identities.rs
- libs/blockscout-db/entity/src/account_watchlist_addresses.rs
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Unit, doc and integration tests
🔇 Additional comments (40)
libs/blockscout-db/entity/src/token_instances.rs (1)

`31-35`: **LGTM! The new fields enhance token instance metadata capabilities.**

The additions properly support media handling with thumbnails, content types, and CDN integration. The optional types are appropriate for these fields.

libs/blockscout-db/entity/src/sea_orm_active_enums.rs (1)

`5-16`: **LGTM! Well-structured enum definition for entry point versions.**

The implementation follows SeaORM's best practices and uses appropriate naming conventions.
libs/blockscout-db/justfile (1)

`33-34`: **Verify Docker network connectivity and handle failures gracefully.**

The recipe name change better reflects its purpose. However, consider improving error handling:

```diff
-generate-entities-from-blockscout-image database_url="postgres://postgres:admin@localhost:5432/blockscout":
-    rm entity/src/*
+generate-entities-from-blockscout-image database_url="postgres://postgres:admin@localhost:5432/blockscout":
+    if ! docker info > /dev/null 2>&1; then
+        echo "Error: Docker is not running or not accessible"
+        exit 1
+    fi
+    if [ -d "entity/src" ]; then
+        find entity/src -type f -delete
+    else
+        mkdir -p entity/src
+    fi
```

This change:

- Verifies Docker availability before proceeding
- Uses safer directory cleanup
- Provides better error feedback
libs/blockscout-db/entity/src/smart_contracts.rs (1)

`41-41`: **Document the valid values for the language field.**

The addition of the `language` field follows the existing patterns. However, please add documentation to clarify:

- What values are valid for this field?
- What does each value represent?

Let's check if there's any documentation or enum definition for these values:

✅ Verification successful

**Document the valid values for the language field**

The `language` field maps to the following values:

- `0`: Solidity
- `1`: Yul
- `2`: Vyper

This field uses sea-orm's enum mapping to store the smart contract's programming language.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

```bash
#!/bin/bash
# Search for any documentation or enum definition related to smart contract languages
rg -g '!target' -g '*.{rs,sql}' -A 5 -B 5 'language.*i16|Language.*enum'
```

Length of output: 2008

Script:

```bash
#!/bin/bash
# Search for Language enum definition
ast-grep --pattern 'enum Language { $$$ }'

# Also search for any SQL definitions that might show the mapping
rg -g '*.sql' 'CREATE.*language|ALTER.*language'
```

Length of output: 97

Script:

```bash
#!/bin/bash
# Broader search for Language enum and related definitions
rg -g '!target' -g '*.{rs,sql,toml}' -i 'language.*enum|enum.*language|sea_orm.*language'

# Search in migrations directory
fd -g '*.sql' -x cat {} \; -E target
```

Length of output: 67820
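Given that mapping, a hedged sketch of the enum suggested earlier, updated to match the values found (0 = Solidity, 1 = Yul, 2 = Vyper); the name and conversion are illustrative, not code from the PR:

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
#[repr(i16)]
pub enum SmartContractLanguage {
    Solidity = 0,
    Yul = 1,
    Vyper = 2,
}

impl TryFrom<i16> for SmartContractLanguage {
    type Error = i16;

    // Return the raw value as the error so unknown codes stay visible.
    fn try_from(value: i16) -> Result<Self, Self::Error> {
        match value {
            0 => Ok(Self::Solidity),
            1 => Ok(Self::Yul),
            2 => Ok(Self::Vyper),
            other => Err(other),
        }
    }
}
```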
libs/blockscout-db/entity/src/user_operations.rs (6)

`1-5`: **LGTM! Auto-generated code with appropriate imports.**

The file is properly generated using sea-orm-codegen, with the necessary imports for the entity definition.

`6-8`: **LGTM! Well-structured entity definition.**

The Model struct is properly defined with appropriate derive macros and table name specification.

`42-51`: **LGTM! Comprehensive transaction context fields.**

The transaction and block-related fields provide a complete context for tracking user operations within the blockchain.

`52-57`: **LGTM! Well-defined actor fields.**

The actor fields properly represent the EIP-4337 participants with appropriate nullability.

`67-70`: **LGTM! Comprehensive metadata fields.**

The metadata fields properly capture the operation context with appropriate types.

`73-76`: **Verify if relations are needed.**

The empty Relation enum suggests no relationships with other entities. Please verify whether this is intentional or whether relationships with other entities (e.g., blocks, transactions) should be defined.

✅ Verification successful

**Empty relations are intentional and correct**

The user operations entity is part of a specialized EIP-4337 indexer and maintains references to blocks and transactions as direct fields without foreign key constraints, which is a common pattern in blockchain data indexing for performance and consistency reasons. No changes needed.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

```bash
#!/bin/bash
# Check for potential relations in other entity files
fd -e rs . libs/blockscout-db/entity/src -x rg -l "user_operations|UserOperation"
```

Length of output: 210

Script:

```bash
#!/bin/bash
# Check the content of related files
echo "=== user_operations.rs ==="
cat libs/blockscout-db/entity/src/user_operations.rs

echo -e "\n=== Looking for potential relations in schema ==="
rg "CREATE TABLE.*user_operations" -A 10

echo -e "\n=== Check for foreign key constraints ==="
rg "ALTER TABLE.*user_operations.*FOREIGN KEY" -A 3
```

Length of output: 4943
libs/blockscout-db/entity/src/user_ops_indexer_migrations.rs (1)
`5-11`: **LGTM! Migration tracking schema looks good.**

The schema is well-designed for tracking migrations:

- Using `version` as a non-auto-incrementing primary key is appropriate for migration versioning
- `applied_at` as `i64` is suitable for timestamp tracking

libs/blockscout-db/entity/src/beacon_blobs.rs (1)
`8-21`: **Consider adding relationships and indexes.**

The schema looks good for storing blob data, but consider:
- Adding relationships to relevant entities (e.g., blocks or transactions)
- Adding indexes on frequently queried fields
Let's verify if there are any related entities that this table should connect to:
libs/blockscout-db/entity/src/bridged_tokens.rs (1)

`30-38`: **LGTM! Relationship with tokens is well-defined.**

The belongs-to relationship with the `tokens` entity is correctly set up with appropriate cascade behavior.

libs/blockscout-db/entity/src/lib.rs (1)

`15-16`: **LGTM! Module organization is clean and consistent.**

New modules are properly exposed and maintain alphabetical ordering.

Also applies to: 20-20, 38-38, 41-41, 55-56
libs/blockscout-db/entity/src/prelude.rs (4)

`42-43`: **Verify compliance with the ERC-4337 specification.**

The addition of user operations entities suggests Account Abstraction support. Please ensure:

- Alignment with the ERC-4337 specification
- Proper handling of UserOperation fields
- Correct indexing of bundled transactions

Let's check the implementation:

✅ Verification successful

**Implementation verified as ERC-4337 compliant**

The codebase demonstrates full compliance with the ERC-4337 specification through:

- Complete UserOperation schema with all required fields
- Support for both v0.6 and v0.7 entry points
- Proper bundler and paymaster integration

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

```bash
#!/bin/bash
# Description: Verify ERC-4337 compliance
# Look for UserOperation structure and bundler integration

# Check schema definition
rg -A 15 "struct UserOperationModel"

# Look for bundler integration
rg "bundler|entry_point|paymaster"
```

Length of output: 39743
`11-12`: **Verify schema compatibility with Ethereum's blob transaction spec.**

The addition of beacon blob entities suggests preparation for Ethereum's Dencun upgrade. Please ensure that the schema aligns with the EIP-4844 specification for blob transactions.

Let's check the schema definitions:

✅ Verification successful

**Schema correctly implements EIP-4844 blob transaction requirements**

The beacon blob entities properly implement all required fields for the Dencun upgrade:

- Blob gas accounting (blob_gas_used, excess_blob_gas)
- Blob transaction fee parameters (max_fee_per_blob_gas, blob_gas_price)
- Blob data storage and relationships

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

```bash
#!/bin/bash
# Description: Verify the schema definitions for beacon blob entities
# Look for field definitions that should match EIP-4844 requirements
rg -A 10 "struct (BeaconBlob|BeaconBlobsTransaction)Model"
```

Length of output: 60

Script:

```bash
#!/bin/bash
# Search for any beacon-related files
fd beacon

# Search for any blob-related definitions
rg -i "blob|beacon" --type rust
```

Length of output: 31557
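For context on how those stored fields relate: under EIP-4844 the blob base fee is derived from `excess_blob_gas` via the spec's `fake_exponential` approximation. A sketch of that calculation (constants per the Cancun spec; this is background, not code from the PR):

```rust
const MIN_BASE_FEE_PER_BLOB_GAS: u128 = 1;
const BLOB_BASE_FEE_UPDATE_FRACTION: u128 = 3_338_477;

// Integer approximation of factor * e^(numerator / denominator),
// as specified in EIP-4844. Realistic inputs stay far from u128 overflow.
fn fake_exponential(factor: u128, numerator: u128, denominator: u128) -> u128 {
    let mut i: u128 = 1;
    let mut output: u128 = 0;
    let mut numerator_accum = factor * denominator;
    while numerator_accum > 0 {
        output += numerator_accum;
        numerator_accum = (numerator_accum * numerator) / (denominator * i);
        i += 1;
    }
    output / denominator
}

// The stored blob_gas_price corresponds to this derived value.
fn blob_base_fee(excess_blob_gas: u128) -> u128 {
    fake_exponential(
        MIN_BASE_FEE_PER_BLOB_GAS,
        excess_blob_gas,
        BLOB_BASE_FEE_UPDATE_FRACTION,
    )
}
```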
`31-31`: **Verify cryptographic signature implementation.**

The addition of `SignedAuthorizations` requires careful security review. Ensure:

- Proper signature verification mechanisms
- Secure key management
- Protection against replay attacks

Let's examine the implementation:

✅ Verification successful

**Signature storage implementation verified**

The SignedAuthorizations entity follows security best practices for storing ECDSA signatures:

- Uses appropriate binary types for cryptographic data
- Includes nonce and chain_id for replay protection
- Stores complete signature components (v, r, s)
- Has proper database relations and constraints

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

```bash
#!/bin/bash
# Description: Check signature verification implementation
# Look for cryptographic functions and security measures
rg -A 10 "struct SignedAuthorizationModel"
rg "verify.*signature|sign.*message"
```

Length of output: 87527

Script:

```bash
#!/bin/bash
# Search for SignedAuthorizations implementation and related code
rg -A 10 "SignedAuthorizations"
ast-grep --pattern 'struct SignedAuthorizations { $$$ }'
ast-grep --pattern 'impl SignedAuthorizations { $$$ }'
```

Length of output: 3048

Script:

```bash
#!/bin/bash
# Search for signed_authorizations module implementation
fd signed_authorizations.rs
rg -A 20 "mod signed_authorizations"
cat libs/blockscout-db/entity/src/signed_authorizations.rs 2>/dev/null || true
```

Length of output: 2853
`15-15`: **Verify integration with existing bridge infrastructure.**

The addition of `BridgedTokens` suggests new cross-chain functionality. Ensure proper integration with:

- Existing bridge contracts and their events
- Token mapping and verification mechanisms

Let's examine the schema and related components:

✅ Verification successful

**Bridge token entity implementation is properly integrated**

The BridgedTokens entity is correctly implemented with proper database schema, relationships, and tracking capabilities for cross-chain tokens. The implementation includes the necessary fields for token mapping, exchange rates, and metadata, with appropriate foreign key relationships to the tokens table.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

```bash
#!/bin/bash
# Description: Check for bridge-related components and their integration
# Look for bridge contracts and token mapping logic

# Check schema definition
rg -A 10 "struct BridgedTokenModel"

# Look for bridge-related contracts
fd -e sol | xargs rg "bridge|Bridge"
```

Length of output: 12976

Script:

```bash
#!/bin/bash
# Search for bridge token schema and implementation
rg -g "*.{rs,sql}" -A 5 "CREATE TABLE.*bridged_tokens|table!\s*.*bridged_tokens"

# Look for bridge-related migrations
fd -e sql | xargs rg "bridged_tokens"

# Find bridge token implementation files
rg -g "*.rs" "impl.*BridgedTokens|struct.*BridgedTokens"
```

Length of output: 179

Script:

```bash
#!/bin/bash
# Find any bridge-related files
fd bridge

# Look for entity definitions
rg -g "*.rs" "mod bridged_tokens|pub mod bridged_tokens"

# Search for bridge token types
rg -g "*.rs" -A 5 "type BridgedTokens|struct Bridged.*Token"

# Check for sea-orm entity definitions
ast-grep --pattern 'DeriveEntityModel'
```

Length of output: 10996

Script:

```bash
#!/bin/bash
# Check bridged tokens entity implementation
cat libs/blockscout-db/entity/src/bridged_tokens.rs

# Look for any references to this entity in migrations
rg -g "*.{rs,sql}" "bridged.*tokens.*table|create.*table.*bridged.*tokens" -i
```

Length of output: 1467
libs/blockscout-db/entity/src/scam_address_badge_mappings.rs (2)

`21-27`: **Verify correctness of the `belongs_to` relationship**

The `belongs_to` relationship to `addresses::Entity` is appropriately defined, linking `Column::AddressHash` to `super::addresses::Column::Hash` with `on_delete = "Cascade"`. This ensures that when an `Address` is deleted, the associated `ScamAddressBadgeMapping` records are also removed.

`30-32`: **Implementation of `Related` trait is correct**

The implementation of the `Related` trait correctly establishes the relation to `addresses::Entity` using the `Addresses` relation defined in the `Relation` enum.

libs/blockscout-db/entity/src/addresses.rs (2)

`9-9`: **Ensure the `Decimal` type provides sufficient precision**

The type of `fetched_coin_balance` has been changed from `Option<BigDecimal>` to `Option<Decimal>`. Verify that the `Decimal` type offers adequate precision to represent coin balances accurately, especially for high-precision financial calculations. This change could potentially lead to precision loss if `Decimal` has lower precision than `BigDecimal`.

`30-39`: **Confirm relationship cardinality with `ScamAddressBadgeMappings`**

A `has_one` relationship to `scam_address_badge_mappings::Entity` has been added. Please verify whether an `Address` can have only one associated `ScamAddressBadgeMapping`. If an address can be associated with multiple scam badge mappings, consider changing the relationship to `has_many` to accurately reflect the data model.

libs/blockscout-db/entity/src/last_fetched_counters.rs (1)

`11-11`: **LGTM!**

The field type change is consistent with other entities, and the precision settings are properly specified via the `column_type` attribute.

libs/blockscout-db/entity/src/market_history.rs (1)

`11-14`: **Specify decimal precision for financial fields.**

The financial fields (`closing_price`, `opening_price`, `market_cap`, `tvl`) have been changed to `Decimal`, but they lack `column_type` attributes specifying precision and scale parameters. This is particularly important for financial data, to ensure consistent decimal-place handling. A hedged sketch of explicit settings follows the next comment.

Let's check the precision settings in the database schema:

libs/blockscout-db/entity/src/transaction_stats.rs (1)

`14-17`: **LGTM!**

The field type changes are consistent with other entities, and both fields properly specify precision settings via `column_type` attributes.
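Returning to the market_history point above: explicit precision on those fields could look like the following fragment; the `(38, 18)` and `(38, 2)` values are placeholder assumptions, not the schema's actual settings:

```rust
// Sketch: pin precision/scale explicitly instead of relying on defaults.
#[sea_orm(column_type = "Decimal(Some((38, 18)))", nullable)]
pub closing_price: Option<Decimal>,
#[sea_orm(column_type = "Decimal(Some((38, 18)))", nullable)]
pub opening_price: Option<Decimal>,
#[sea_orm(column_type = "Decimal(Some((38, 2)))", nullable)]
pub market_cap: Option<Decimal>,
#[sea_orm(column_type = "Decimal(Some((38, 2)))", nullable)]
pub tvl: Option<Decimal>,
```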
17-17
: Verify database migration for BigDecimal to Decimal type change.The type change from
BigDecimal
toDecimal
looks good and maintains the same precision (100,0). However, this change might require a database migration to ensure data consistency.✅ Verification successful
No database migration required for BigDecimal to Decimal type change
The change is purely a Rust-side type refactoring. The underlying database column type remains "Decimal(Some((100, 0)))", which is compatible with both BigDecimal and Decimal types. This change aligns with the codebase's standard usage of Decimal types for numeric fields.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Check if there are corresponding migration files for this change fd -e sql -e rs migration | rg -i "decimal|balance"Length of output: 53
Script:
#!/bin/bash # Find all SQL files in the repository echo "=== SQL Files ===" fd -e sql # Check for similar type changes in other entity files echo -e "\n=== Related Entity Changes ===" rg "BigDecimal|Decimal" --type rust # Check recent changes in the entity file echo -e "\n=== File History ===" git log -p --stat -- libs/blockscout-db/entity/src/address_coin_balances_daily.rs | head -n 50Length of output: 22220
libs/blockscout-db/entity/src/address_coin_balances.rs (1)

`17-17`: **LGTM! Consistent with other balance entities.**

The type change matches the pattern seen in other balance-related entities, maintaining the same precision (100, 0).

libs/blockscout-db/entity/src/address_current_token_balances.rs (1)

`15-15`: **LGTM! Type changes align with the standardization effort.**

The conversion of both the `value` and `old_value` fields to `Decimal` is consistent with the changes in other entity files, contributing to more standardized numeric type usage across the codebase.

Also applies to: 19-19
libs/blockscout-db/entity/src/address_token_balances.rs (2)

`15-15`: **LGTM! Consistent type change.**

The `value` field type change to `Decimal` aligns with the standardization across other entity files.

`22-23`: **Verify retry mechanism configuration.**

The new fields `refetch_after` and `retries_count` suggest implementation of a retry mechanism for token balance fetching. Please ensure:

- The maximum retry count (i16) aligns with your retry policy
- The retry timing logic using `refetch_after` is properly implemented in the business logic

✅ Verification successful

**Retry mechanism configuration is appropriate**

The implementation aligns with the codebase's retry patterns:

- The i16 type provides sufficient capacity, consistent with other retry implementations using 3-5 retries
- Optional fields allow flexibility in the retry strategy
- A dedicated index exists for efficient processing of unfetched balances

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

```bash
#!/bin/bash
# Look for retry mechanism implementation
rg -A 5 "refetch_after|retries_count" --type rust
```

Length of output: 1214

Script:

```bash
#!/bin/bash
# Look for retry-related configuration and business logic
rg -A 5 "MAX_RETRIES|retry" --type rust

# Look for code updating these fields
ast-grep --pattern 'retries_count = $_'
```

Length of output: 50618
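As a rough illustration of how these two columns typically drive a retry loop (the policy constant and backoff curve below are assumptions, not values from the codebase):

```rust
use chrono::{Duration, NaiveDateTime, Utc};

// Hypothetical cap; the real limit lives in the indexer's business logic.
const MAX_RETRIES: i16 = 5;

/// A row is due for another fetch attempt if it has retries left and its
/// refetch_after deadline (when set) has passed.
fn should_refetch(refetch_after: Option<NaiveDateTime>, retries_count: Option<i16>) -> bool {
    if retries_count.unwrap_or(0) >= MAX_RETRIES {
        return false;
    }
    match refetch_after {
        Some(deadline) => Utc::now().naive_utc() >= deadline,
        None => true,
    }
}

/// Exponential backoff: 30s, 60s, 120s, ..., with the shift amount bounded.
fn next_refetch_after(retries_count: i16) -> NaiveDateTime {
    let backoff_secs = 30i64 << retries_count.clamp(0, 10);
    Utc::now().naive_utc() + Duration::seconds(backoff_secs)
}
```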
libs/blockscout-db/entity/src/beacon_blobs_transactions.rs (1)

`20-21`: **Address the SeaORM compilation limitation.**

The commented code indicates that SeaORM can't compile with `Vec<Vec<u8>>`. Consider:

- Opening an issue with SeaORM to track this limitation
- Finding an alternative representation for blob versioned hashes (e.g., a serialized format)

Would you like me to help create a GitHub issue to track this limitation?
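One possible serialized representation, sketched under the assumption that each versioned hash is exactly 32 bytes (per EIP-4844): flatten the list into a single byte column and split it on read.

```rust
/// Concatenate fixed-size 32-byte hashes into one blob suitable for a
/// single VarBinary column.
fn pack_hashes(hashes: &[[u8; 32]]) -> Vec<u8> {
    hashes.iter().flat_map(|h| h.iter().copied()).collect()
}

/// Split the stored blob back into hashes; None signals a corrupt length.
fn unpack_hashes(packed: &[u8]) -> Option<Vec<[u8; 32]>> {
    if packed.len() % 32 != 0 {
        return None;
    }
    Some(
        packed
            .chunks_exact(32)
            .map(|chunk| <[u8; 32]>::try_from(chunk).expect("chunk is 32 bytes"))
            .collect(),
    )
}
```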
libs/blockscout-db/entity/src/tokens.rs (2)

`12-13`: **LGTM! Type changes align with best practices.**

The conversion from `BigDecimal` to `Decimal` for monetary fields (`total_supply`, `decimals`, `fiat_value`, `circulating_market_cap`, `volume_24h`) maintains precision while potentially improving performance.

Also applies to: 27-28, 33-33

`32-32`: **Verify the bridged token relationship implementation.**

The new `bridged` field and relationship with `bridged_tokens::Entity` suggest support for cross-chain token tracking. Please ensure that:

- The corresponding `bridged_tokens` table exists
- The relationship is properly handled in queries

Also applies to: 38-48

✅ Verification successful

**The bridged token relationship implementation is properly structured**

The implementation provides a complete solution for cross-chain token tracking, with:

- A proper table schema in the database
- A correct ORM relationship definition
- Complete entity integration

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

```bash
#!/bin/bash
# Check if bridged_tokens table exists and its structure
rg -A 10 "table_name = \"bridged_tokens\""

# Check for existing usage of the relationship
ast-grep --pattern 'Relation::BridgedTokens'
```

Length of output: 28360

Script:

```bash
#!/bin/bash
# Search for any usage of bridged_tokens in queries
rg -g '*.rs' -B 2 -A 2 'bridged_tokens'

# Search for any JOINs with bridged_tokens table
rg -g '*.{rs,sql}' -B 2 -A 2 'JOIN.*bridged_tokens'
```

Length of output: 2094
libs/blockscout-db/entity/src/token_transfers.rs (1)

`20-20`: **Verify token ID compatibility with NFT standards.**

While the change from `BigDecimal` to `Decimal` is appropriate for `amount` and `amounts`, storing `token_ids` as `Decimal` should be verified for compatibility with NFT standards (e.g., ERC-721, ERC-1155), which often use large integers.

Also applies to: 31-32

✅ Verification successful

**Token ID storage is compatible with NFT standards**

The implementation uses `numeric(78,0)` in the database and `Decimal` in Rust. The database column provides exact precision for the large integers required by NFT token IDs: since 2^256 − 1 has 78 decimal digits, every possible uint256 token ID fits. The system already demonstrates proper handling of ERC-721 and ERC-1155 tokens, with appropriate type conversions in the API layer.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

```bash
#!/bin/bash
# Search for NFT-related token transfers to verify ID format
rg -A 5 "token_type.*ERC-721|token_type.*ERC-1155"
```

Length of output: 2449

Script:

```bash
#!/bin/bash
# Search for token_id field definitions and usage
rg -A 5 "token_id|token_ids" --type sql --type rust

# Also check the specific file mentioned in the review
cat libs/blockscout-db/entity/src/token_transfers.rs
```

Length of output: 26535
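A quick sanity check of that digit bound (standard arithmetic, included for context):

```rust
fn main() {
    // digits(2^256) = floor(256 * log10(2)) + 1 = 77 + 1 = 78,
    // so NUMERIC(78,0) can store any uint256 token ID without rounding.
    let digits = (256f64 * 2f64.log10()).floor() as u32 + 1;
    assert_eq!(digits, 78);
    println!("uint256 values need at most {digits} decimal digits");
}
```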
libs/blockscout-db/entity/src/internal_transactions.rs (1)

`13-15`: **LGTM! Consistent precision for gas metrics.**

The conversion to `Decimal` maintains consistent precision (100, 0) across all gas-related fields (`gas`, `gas_used`) and `value`, which is appropriate for blockchain metrics.

Also applies to: 26-26
libs/blockscout-db/entity/src/blocks.rs (2)

`10-10`: **LGTM! Appropriate precision for different metrics.**

The conversion to `Decimal` uses appropriate precision:

- (50, 0) for difficulty-related fields
- (100, 0) for gas-related fields

Also applies to: 12-12, 14-14, 32-32, 37-37

`39-42`: **Verify EIP-4844 blob gas fields implementation.**

The new fields `blob_gas_used` and `excess_blob_gas` suggest support for EIP-4844 (blob transactions). Please ensure:

- The fields align with the EIP-4844 specification
- The precision (100, 0) is sufficient for blob gas calculations

✅ Verification successful

**EIP-4844 blob gas fields implementation verified successfully**

The implementation aligns with the EIP-4844 specification, and the precision of (100, 0) is more than sufficient for blob gas calculations, as evidenced by current mainnet usage patterns showing maximum values well within 7 digits.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

```bash
#!/bin/bash
# Search for other EIP-4844 related changes
rg "blob_gas|excess_blob"

# Check if there are any existing blob transaction implementations
ast-grep --pattern 'blob_gas_used'
```

Length of output: 66041
libs/blockscout-db/entity/src/transactions.rs (3)

`60-60`: **LGTM! The new timestamp field follows existing conventions.**

The nullable `block_timestamp` field is appropriately named and typed for storing block timestamps.

`65-66`: **LGTM! New relationships follow existing patterns.**

The new relationships are properly defined:

- `BeaconBlobsTransactions` as has_one
- `SignedAuthorizations` as has_many

Also applies to: 79-80

`89-93`: **LGTM! Related trait implementations are correct.**

The implementations for both new relationships follow the established pattern and are properly placed.

Also applies to: 113-117
@coderabbitai pause

@coderabbitai ignore all files in `libs/blockscout-db/entity`
Actionable comments posted: 1
🧹 Nitpick comments (2)

libs/blockscout-db/entity/src/proxy_implementations.rs (1)

`15-15`: **Review performance implications of `Vec<Vec<u8>>`.**

Storing vectors of vectors in a database field could have performance implications for large datasets. Consider:
- Memory usage during serialization/deserialization
- Query performance when filtering or joining on this field
- Database storage efficiency
Would you like suggestions for alternative data structures or storage patterns that might be more efficient?
libs/blockscout-db/justfile (1)

`55-58`: **Consider network security while maintaining the good migration flow.**

The recipe provides a good sequential flow for entity generation and migration. However:

- Consider using Docker network isolation instead of host networking for better security
- The rest of the implementation looks solid, with proper cleanup and formatting

Consider using a Docker network:

```diff
- docker run --network=host
+ docker run --network=blockscout-network
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (17)

- libs/blockscout-db/Cargo.toml (1 hunks)
- libs/blockscout-db/entity/Cargo.toml (1 hunks)
- libs/blockscout-db/entity/src/account_custom_abis.rs (1 hunks)
- libs/blockscout-db/entity/src/account_identities.rs (1 hunks)
- libs/blockscout-db/entity/src/account_public_tags_requests.rs (1 hunks)
- libs/blockscout-db/entity/src/account_tag_addresses.rs (1 hunks)
- libs/blockscout-db/entity/src/account_tag_transactions.rs (1 hunks)
- libs/blockscout-db/entity/src/account_watchlist_addresses.rs (1 hunks)
- libs/blockscout-db/entity/src/account_watchlist_notifications.rs (1 hunks)
- libs/blockscout-db/entity/src/blocks.rs (2 hunks)
- libs/blockscout-db/entity/src/lib.rs (2 hunks)
- libs/blockscout-db/entity/src/prelude.rs (2 hunks)
- libs/blockscout-db/entity/src/proxy_implementations.rs (1 hunks)
- libs/blockscout-db/entity/src/tokens.rs (2 hunks)
- libs/blockscout-db/entity/src/transactions.rs (5 hunks)
- libs/blockscout-db/justfile (2 hunks)
- libs/blockscout-db/migration/src/migration.sql (25 hunks)
✅ Files skipped from review due to trivial changes (2)
- libs/blockscout-db/Cargo.toml
- libs/blockscout-db/entity/Cargo.toml
🚧 Files skipped from review as they are similar to previous changes (12)
- libs/blockscout-db/entity/src/account_custom_abis.rs
- libs/blockscout-db/entity/src/account_tag_transactions.rs
- libs/blockscout-db/entity/src/account_public_tags_requests.rs
- libs/blockscout-db/entity/src/lib.rs
- libs/blockscout-db/entity/src/account_watchlist_notifications.rs
- libs/blockscout-db/entity/src/account_identities.rs
- libs/blockscout-db/entity/src/account_watchlist_addresses.rs
- libs/blockscout-db/entity/src/blocks.rs
- libs/blockscout-db/entity/src/prelude.rs
- libs/blockscout-db/entity/src/account_tag_addresses.rs
- libs/blockscout-db/entity/src/tokens.rs
- libs/blockscout-db/entity/src/transactions.rs
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Unit, doc and integration tests
🔇 Additional comments (11)
libs/blockscout-db/entity/src/proxy_implementations.rs (1)

`15-15`: **Consider adding a column type attribute for the address_hashes field.**

The `address_hashes` field lacks a `sea_orm` attribute specifying the column type, unlike the `proxy_address_hash` field above it. This might affect how the data is stored and retrieved from the database.

Let's check how similar fields are handled in other entity files:
libs/blockscout-db/migration/src/migration.sql (6)

`75-98`: **LGTM: Well-structured composite types for log and NFT identification.**

The new composite types provide a clean way to uniquely identify logs and NFTs:

- `log_id`: Combines all necessary fields to uniquely identify a log entry
- `nft_id`: Provides a block-based identifier for NFTs

`1355-1359`: **LGTM: New table for tracking scam addresses.**

The `scam_address_badge_mappings` table provides a simple and effective way to flag scam addresses, with appropriate timestamp tracking.

`1380-1392`: **LGTM: Well-structured table for EIP-712 signature storage.**

The `signed_authorizations` table is well-designed for storing EIP-712 signatures, with:

- Proper field types for signature components (v, r, s)
- Transaction and index-based identification
- Chain ID for cross-chain support

`260-260`: **LGTM: Consistent user_created flag addition.**

The addition of the `user_created` boolean column with a default value of true across multiple tables provides a consistent way to distinguish between system-generated and user-generated records.

Also applies to: 388-388, 427-427, 480-480

`2625-2625`: **LGTM: Well-designed indexes and constraints.**

The changes include:

- Partial unique indexes that only apply to user-created records, allowing system-generated duplicates
- Appropriate foreign key constraints with cascade delete for maintaining referential integrity

Also applies to: 2653-2653, 2674-2674, 3584-3584, 3749-3750, 3757-3758

`5-6`: **Verify PostgreSQL 17.2 compatibility and migration path.**

The upgrade from PostgreSQL 16.2 to 17.2 is a major version change that requires careful consideration:

- Ensure all applications are compatible with PostgreSQL 17.2
- Plan for a proper upgrade path, as direct upgrades across major versions may not be supported
- Consider the need for downtime during the upgrade

The addition of `SET transaction_timeout = 0` removes the transaction timeout limit. While this prevents long-running transactions from being killed, it could potentially lead to resource exhaustion if transactions hang.

Also applies to: 11-11

✅ Verification successful

**PostgreSQL 17.2 upgrade is safe to proceed with proper precautions**

The upgrade from PostgreSQL 16.2 to 17.2 has been verified as safe:

- All services use database libraries (sea-orm) that are compatible with PostgreSQL 17.2
- No breaking changes detected in the schema modifications
- Setting `transaction_timeout = 0` is acceptable, as services implement their own timeout mechanisms

Recommendations:

- Schedule downtime for the upgrade
- Create a full database backup before proceeding
- Test the upgrade in a staging environment first
- Have a rollback plan ready

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

```bash
#!/bin/bash
# Check if any existing queries typically run longer than the default timeout
rg -l "BEGIN" | xargs rg "COMMIT" -C 10
```

Length of output: 37

Script:

```bash
#!/bin/bash
# Search for PostgreSQL version requirements and database configurations
echo "=== Searching for PostgreSQL version requirements ==="
rg -i "postgres.*version|version.*postgres" -g "!{*.lock,*.sql}"

echo -e "\n=== Looking for SQL files ==="
fd -e sql

echo -e "\n=== Searching for database configuration files ==="
fd -g "*config*.{yml,yaml,json,toml,exs}"

echo -e "\n=== Looking for transaction patterns in code ==="
rg -i "transaction|begin|commit" -g "!*.sql" -g "!*.lock" -A 3
```

Length of output: 67698
libs/blockscout-db/justfile (4)

`30-30`: **LGTM! Clean entity generation approach.**

Good practice to clean up existing entities before regeneration. The `|| true` ensures the command doesn't fail if the directory is already empty.

`34-53`: **Review security implications and configuration values.**

Several concerns with the current implementation:

- Using `--network=host` exposes the Docker container's network directly to the host, which might be a security risk
- The `USER_OPS_INDEXER__INDEXER__RPC_URL=1` seems incorrect; it should be a valid RPC endpoint
- The error handling with `|| true` and the expected error message suggests a potential underlying issue that should be properly fixed

Consider:

- Using Docker network isolation instead of host networking
- Setting a proper RPC URL
- Investigating and fixing the root cause of the expected error instead of suppressing it

`66-66`: **LGTM! Robust migration file cleanup.**

The approach using `sed` with a temporary file is a safe and reliable way to handle the config cleanup.

`7-8`: **Verify version compatibility and stability.**

The update from Blockscout 6.8.0 to 6.10.0 is significant. Please ensure:

- All dependent components are compatible with version 6.10.0
- The user-ops-indexer version 1.3.0 is stable and compatible

✅ Verification successful

**✓ Version compatibility verified**

Both versions are appropriate:

- Blockscout 6.10.0 is a stable release from Dec 2024
- User-ops-indexer 1.3.0 is the latest version in the blockscout-rs repository

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

```bash
#!/bin/bash
# Check if these are the latest stable versions and if there are any known issues
gh api graphql -f query='
{
  repository(owner: "blockscout", name: "blockscout") {
    releases(first: 5, orderBy: {field: CREATED_AT, direction: DESC}) {
      nodes { tagName isPrerelease createdAt }
    }
  }
}'

gh api graphql -f query='
{
  repository(owner: "blockscout", name: "user-ops-indexer") {
    releases(first: 5, orderBy: {field: CREATED_AT, direction: DESC}) {
      nodes { tagName isPrerelease createdAt }
    }
  }
}'
```

Length of output: 1243

Script:

```bash
#!/bin/bash
# Search for user-ops-indexer repository references
rg -i "user-ops-indexer|useropsindexer" --type md
rg -i "user.?ops.?indexer.*repository|user.?ops.?indexer.*github" --type yaml --type toml --type json
```

Length of output: 1007

Script:

```bash
#!/bin/bash
# Check releases in blockscout-rs repository
gh api graphql -f query='
{
  repository(owner: "blockscout", name: "blockscout-rs") {
    releases(first: 10, orderBy: {field: CREATED_AT, direction: DESC}) {
      nodes { tagName name isPrerelease createdAt }
    }
  }
}'
```

Length of output: 1420

Script:

```bash
#!/bin/bash
# Check specifically for user-ops-indexer tags
gh api graphql -f query='
{
  repository(owner: "blockscout", name: "blockscout-rs") {
    refs(refPrefix: "refs/tags/user-ops-indexer/", first: 5, orderBy: {field: TAG_COMMIT_DATE, direction: DESC}) {
      nodes {
        name
        target {
          ... on Tag { tagger { date } }
        }
      }
    }
  }
}'
```

Length of output: 562
✅ Actions performed

Reviews paused.
Tip: For best results, initiate chat on the files or code changes.

@bragov4ik I see that you want to ignore files in `libs/blockscout-db/entity`. Since your repository already has a CodeRabbit configuration, you can add the path filter to the existing configuration:

```yaml
reviews:
  path_filters:
    - "!libs/blockscout-db/entity/**" # Ignore all files in the entity directory
```

🔗 Analysis chain

@bragov4ik To ignore files in `libs/blockscout-db/entity`, add a path filter:

```yaml
path_filters:
  ignore:
    - "libs/blockscout-db/entity/**"
```

This will exclude all files in the specified directory from being reviewed. Would you like me to help you set up the configuration?

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:
Length of output: 159
Now user-ops-indexer migrations from the ghcr image are used for auto-generating entities.
Also updated to the latest blockscout version.
Summary by CodeRabbit
I removed it manually as reviews were paused