
feat(ingest/s3): Partition support #11083

Merged — 31 commits into datahub-project:master from s3_partition_support, Aug 22, 2024

Conversation

@treff7es (Contributor) commented on Aug 2, 2024

Checklist

  • The PR conforms to DataHub's Contributing Guideline (particularly Commit Message Format)
  • Links to related issues (if applicable)
  • Tests for the changes have been added/updated (if applicable)
  • Docs related to the changes have been added/updated (if applicable). If a new feature has been added, a usage guide has been added for it.
  • For any breaking change/potential downtime/deprecation/big changes an entry has been made in Updating DataHub

Summary by CodeRabbit

  • New Features

    • Enhanced documentation for S3 dataset ingestion, including flexible path configuration and partition auto-detection.
    • Added new attributes for data ingestion configuration, improving control over partition handling and metadata management.
    • Introduced a new Folder class to better manage folder metadata for partitioned datasets.
    • New JSON configurations for S3 data source integration with advanced partition detection options.
  • Bug Fixes

    • Restructured JSON schemas for improved clarity and consistency in metadata representation across various test cases.
  • Tests

    • Improved validation logic in S3 integration tests to ensure uploaded files match expected values.

@coderabbitai bot (Contributor) commented on Aug 2, 2024

Important: Review skipped — automatic reviews are disabled on this repository. Check the settings in the CodeRabbit UI or the .coderabbit.yaml file; to trigger a single review, invoke the @coderabbitai review command.

Walkthrough

The changes enhance metadata ingestion for S3 data sources: partition auto-detection, new partition-sorting capabilities, and refined metadata structures make dataset configuration more flexible, while upload-validation logic in the integration tests and restructured golden-file JSON schemas keep metadata representation consistent.

Changes

| File(s) | Change Summary |
| --- | --- |
| metadata-ingestion/docs/sources/s3/s3.md | Updated documentation on path specifications; added a partition auto-detection example. |
| metadata-ingestion/src/datahub/ingestion/source/data_lake_common/path_spec.py | Added new classes and methods for sorting and partition management; enhanced the Config class. |
| metadata-ingestion/src/datahub/ingestion/source/s3/config.py | Introduced get_all_partitions and generate_partition_aspects attributes on DataLakeSourceConfig. |
| metadata-ingestion/src/datahub/ingestion/source/s3/source.py | Added a Folder class for partition metadata; updated the TableData class to use Folder instances. |
| metadata-ingestion/tests/integration/s3/golden-files/*.json | Modified JSON structures for metadata; introduced lastRunId; updated schema fields and values. |
| metadata-ingestion/tests/integration/s3/test_s3.py | Enhanced validation logic for uploaded files using FILE_LIST_FOR_VALIDATION. |
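As a concrete illustration, the new options might be combined in a path spec like the sketch below. `allow_double_stars` and `default_extension` appear verbatim in the PR's tests (quoted later in this thread); passing `autodetect_partitions` and `include_hidden_folders` as keyword arguments with these values is an assumption based on the review comments below.

```python
from datahub.ingestion.source.data_lake_common.path_spec import PathSpec

# A "**" suffix lets the source discover key=value partition folders on its
# own instead of spelling them out in the include pattern.
path_spec = PathSpec(
    include="s3://my-bucket/events/{table}/**",
    default_extension="csv",
    allow_double_stars=True,
    autodetect_partitions=True,    # new in this PR
    include_hidden_folders=False,  # skip "."/"_"-prefixed folders (assumed rule)
)
```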

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant System
    participant S3

    User->>System: Request data ingestion
    System->>S3: Access files from specified path
    S3-->>System: Return file metadata
    System->>System: Apply partition auto-detection
    System->>User: Confirm successful ingestion
```

🐰 In fields so bright, changes take flight,
New paths are mapped, oh what a delight!
With folders and partitions, we hop with glee,
Data flows smoothly, as swift as can be.
In the world of S3, we dance and play,
Celebrating updates, hip-hip-hooray! 🌟



@github-actions bot added the ingestion label (PR or Issue related to the ingestion of metadata) on Aug 2, 2024
Commit: Fix doc generation

@treff7es marked this pull request as ready for review on August 5, 2024, 19:16.
@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR, between commits 20c54e2 and 56e14e2.

Files selected for processing (10)
  • metadata-ingestion/src/datahub/ingestion/source/data_lake_common/path_spec.py (10 hunks)
  • metadata-ingestion/src/datahub/ingestion/source/s3/source.py (18 hunks)
  • metadata-ingestion/tests/integration/s3/golden-files/s3/golden_mces_folder_partition_basic.json (15 hunks)
  • metadata-ingestion/tests/integration/s3/golden-files/s3/golden_mces_folder_partition_keyval.json (15 hunks)
  • metadata-ingestion/tests/integration/s3/golden-files/s3/golden_mces_folder_partition_update_schema.json (15 hunks)
  • metadata-ingestion/tests/integration/s3/golden-files/s3/golden_mces_folder_partition_update_schema_with_partition_autodetect.json (1 hunks)
  • metadata-ingestion/tests/integration/s3/golden-files/s3/golden_mces_folder_partition_with_partition_autodetect_traverse_all.json (1 hunks)
  • metadata-ingestion/tests/integration/s3/golden-files/s3/golden_mces_folder_partition_with_partition_autodetect_traverse_min_max.json (1 hunks)
  • metadata-ingestion/tests/integration/s3/sources/s3/folder_partition_with_partition_autodetect_traverse_all.json (1 hunks)
  • metadata-ingestion/tests/integration/s3/sources/s3/folder_partition_with_partition_autodetect_traverse_min_max.json (1 hunks)
Files skipped from review due to trivial changes (1)
  • metadata-ingestion/tests/integration/s3/sources/s3/folder_partition_with_partition_autodetect_traverse_all.json
Files skipped from review as they are similar to previous changes (5)
  • metadata-ingestion/tests/integration/s3/golden-files/s3/golden_mces_folder_partition_keyval.json
  • metadata-ingestion/tests/integration/s3/golden-files/s3/golden_mces_folder_partition_update_schema_with_partition_autodetect.json
  • metadata-ingestion/tests/integration/s3/golden-files/s3/golden_mces_folder_partition_with_partition_autodetect_traverse_all.json
  • metadata-ingestion/tests/integration/s3/golden-files/s3/golden_mces_folder_partition_with_partition_autodetect_traverse_min_max.json
  • metadata-ingestion/tests/integration/s3/sources/s3/folder_partition_with_partition_autodetect_traverse_min_max.json
Additional context used
Ruff
metadata-ingestion/src/datahub/ingestion/source/data_lake_common/path_spec.py

427-427: Use enumerate() for index variable num in for loop

(SIM113)

metadata-ingestion/src/datahub/ingestion/source/s3/source.py

996-996: Use bool(...) instead of True if ... else False

Replace with `bool(...)`

(SIM210)

Additional comments not posted (31)
metadata-ingestion/src/datahub/ingestion/source/data_lake_common/path_spec.py (11)

43-52: New Enum Class SortKeyType Looks Good

The new enum class SortKeyType is well-defined and categorizes sorting keys appropriately.


54-77: New Class SortKey Looks Good

The new class SortKey is well-structured and includes a useful validator method for converting date formats.
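For readers without the diff open, here is a condensed sketch of roughly what these two definitions could look like. Only SortKeyType.STRING (referenced later as the default) and the Java-to-Python date-format conversion are visible in this thread, so the other enum members and field names are assumptions.

```python
from enum import Enum
from typing import Optional

from pydantic import BaseModel, Field, validator


class SortKeyType(Enum):
    STRING = "string"      # appears as the default later in this thread
    INTEGER = "integer"    # assumed member
    FLOAT = "float"        # assumed member
    DATETIME = "datetime"  # assumed member


class SortKey(BaseModel):
    key: str = Field(description="The partition key to sort on.")
    type: SortKeyType = Field(default=SortKeyType.STRING)
    date_format: Optional[str] = Field(
        default=None,
        description="Java SimpleDateFormat pattern, converted to strptime tokens below.",
    )

    @validator("date_format")
    def convert_to_python_format(cls, v: Optional[str]) -> Optional[str]:
        # Mirrors the conversion excerpted later in this review: Java tokens
        # are rewritten into their %-prefixed Python equivalents.
        if v is None:
            return None
        for java_format, python_format in {"yyyy": "Y", "MM": "m", "dd": "d"}.items():
            v = v.replace(java_format, f"%{python_format}")
        return v
```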


111-116: New Attribute sort_key Looks Good

The new optional attribute sort_key is well-defined and useful for sorting partitions.


133-136: New Attribute autodetect_partitions Looks Good

The new boolean attribute autodetect_partitions is well-defined and useful for enabling or disabling partition autodetection.


138-141: New Attribute traversal_method Looks Good

The new attribute traversal_method is well-defined and useful for specifying the folder traversal method.


143-146: New Attribute include_hidden_folders Looks Good

The new boolean attribute include_hidden_folders is well-defined and useful for enabling or disabling the inclusion of hidden folders in the traversal.
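Taken together, the three new Config fields could be declared roughly as follows; the defaults shown and the use of a plain string for the traversal mode are assumptions (the real code presumably uses an enum, discussed later as ALL / MIN_MAX / MAX).

```python
from typing import Optional

from pydantic import BaseModel, Field


class PathSpec(BaseModel):  # abbreviated sketch; the real class has many more fields
    sort_key: Optional["SortKey"] = Field(
        default=None,
        description="Sort key used to order partitions.",
    )
    autodetect_partitions: bool = Field(
        default=True,  # assumed default
        description="Detect key=value partition folders automatically.",
    )
    traversal_method: str = Field(
        default="MAX",  # assumed; ALL / MIN_MAX / MAX modes are discussed below
        description="How partition folders are traversed.",
    )
    include_hidden_folders: bool = Field(
        default=False,
        description="Whether hidden folders are included in traversal.",
    )
```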


148-161: New Method is_path_hidden Looks Good

The new method is_path_hidden is well-implemented and useful for determining if a path or its directories are hidden.
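A plausible shape for this check, assuming (as is common in data-lake layouts) that components beginning with "." or "_" count as hidden:

```python
def is_path_hidden(self, path: str) -> bool:
    # A path is considered hidden if any of its components starts with
    # "." or "_" (the exact prefix rules are an assumption, not quoted
    # from the PR).
    return any(
        part.startswith(".") or part.startswith("_")
        for part in path.split("/")
        if part
    )
```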


Line range hint 163-189:
Modified Method allowed Looks Good

The allowed method has been appropriately modified to include the ignore_ext parameter, enhancing its flexibility.


360-430: New Method get_partition_from_path Looks Good

The new method get_partition_from_path is well-implemented and useful for extracting partition keys from paths based on the defined sorting and partitioning logic.

Tools
Ruff

427-427: Use enumerate() for index variable num in for loop

(SIM113)
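Usage in the PR's tests (quoted later in this thread) suggests an interface like the following; the sample path and the list-of-(key, value)-tuples return shape are assumptions.

```python
from datahub.ingestion.source.data_lake_common.path_spec import PathSpec

ps = PathSpec(
    include="s3://my-bucket/{table}/**",
    default_extension="csv",
    allow_double_stars=True,
)
partitions = ps.get_partition_from_path(
    "s3://my-bucket/food_csv/year=2023/month=4/part1.csv"
)
# key=value folder names become ordered (key, value) pairs
assert partitions == [("year", "2023"), ("month", "4")]
```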


481-518: New Method extract_datetime_partition Looks Good

The new method extract_datetime_partition is well-implemented and useful for parsing datetime values from paths based on the defined sorting keys.
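A reviewer below asks whether this method is used anywhere; as a rough sketch of its likely behavior (the names and control flow here are assumptions), it would try to parse a datetime out of the path using the sort key's converted date format:

```python
from datetime import datetime
from typing import Optional


def extract_datetime_partition(self, path: str) -> Optional[datetime]:
    # Try each path component against the sort key's (already
    # Java->Python converted) date format; the first match wins.
    if self.sort_key is None or self.sort_key.date_format is None:
        return None
    for part in path.split("/"):
        try:
            return datetime.strptime(part, self.sort_key.date_format)
        except ValueError:
            continue
    return None
```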


192-195: Modified Method dir_allowed Looks Good

The dir_allowed method has been appropriately modified to enhance its functionality.

metadata-ingestion/tests/integration/s3/golden-files/s3/golden_mces_folder_partition_basic.json (5)

10-12: New Fields in customProperties Look Good

The new fields number_of_partitions and partitions enhance the metadata's descriptive capabilities.


16-21: New Timestamps created and lastModified Look Good

The new timestamps created and lastModified improve data lineage and auditing capabilities.


27-28: New Field lastRunId in systemMetadata Looks Good

The new field lastRunId standardizes the tracking of metadata ingestion runs.


558-559: Changes to entityType Look Good

The changes to entityType from dataset to container and vice versa reflect a reclassification of certain metadata entities.

Also applies to: 579-580


582-582: Changes to aspectName Look Good

The changes to aspectName indicate a reorganization of how metadata is structured.

Also applies to: 623-623

metadata-ingestion/tests/integration/s3/golden-files/s3/golden_mces_folder_partition_update_schema.json (5)

10-12: New Fields in customProperties Look Good

The new fields number_of_partitions and partitions enhance the metadata's descriptive capabilities.


16-21: New Timestamps created and lastModified Look Good

The new timestamps created and lastModified improve data lineage and auditing capabilities.


27-28: New Field lastRunId in systemMetadata Looks Good

The new field lastRunId standardizes the tracking of metadata ingestion runs.


558-559: Changes to entityType Look Good

The changes to entityType from dataset to container and vice versa reflect a reclassification of certain metadata entities.

Also applies to: 579-580


582-582: Changes to aspectName Look Good

The changes to aspectName indicate a reorganization of how metadata is structured.

Also applies to: 623-623

metadata-ingestion/src/datahub/ingestion/source/s3/source.py (10)

212-220: LGTM!

The Folder class is well-structured and encapsulates folder metadata effectively.
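Based on the excerpts later in this thread (partition_id and is_partition appear in a quoted diff, and TableData gains a partitions: List[Folder] field), the class plausibly looks something like this; the remaining fields are assumptions inferred from the created/lastModified timestamps and sizes surfaced in the golden files.

```python
import dataclasses
from datetime import datetime
from typing import List, Optional, Tuple


@dataclasses.dataclass
class Folder:
    partition_id: Optional[List[Tuple[str, str]]]  # (key, value) pairs, if any
    is_partition: bool
    # Assumed fields, not quoted from the PR:
    creation_time: Optional[datetime] = None
    modification_time: Optional[datetime] = None
    size: int = 0
    sample_file: Optional[str] = None
```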


236-238: LGTM!

The TableData class structure is appropriate and the new attributes for partitions are well-integrated.


505-527: LGTM!

The method appropriately handles partition keys using the new Folder structure.


620-652: LGTM!

The method effectively generates summaries for partitions.


685-764: LGTM!

The method effectively handles partition data and integrates the new attributes appropriately.


860-888: LGTM!

The method appropriately handles the new partitions parameter.


957-1003: LGTM!

The method effectively handles partition data and returns a list of Folder instances.

Tools
Ruff

996-996: Use bool(...) instead of True if ... else False

Replace with `bool(...)`

(SIM210)


Line range hint 1008-1129:
LGTM!

The method effectively handles partition data and yields additional partition information.


Line range hint 1134-1155:
LGTM!

The method effectively handles partition data and yields additional partition information.


1174-1178: LGTM!

The method effectively handles partition data and integrates the new attributes appropriately.

Excerpt under review (path_spec.py):

```python
)
type: SortKeyType = Field(
    default=SortKeyType.STRING,
    description="The date format to use when sorting. This is used to parse the date from the key. The format should follow the java [SimpleDateFormat](https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html) format.",
```
Collaborator:

Looks like the description needs to change here.

Excerpt under review (the date-format validator):

```python
    return None
else:
    for java_format, python_format in java_to_python_mapping.items():
        v = v.replace(java_format, f"%{python_format}")
```
Collaborator:

Why do we accept date_format in the Java format if we convert it to the Python format here before actually using it?

@treff7es (Contributor, Author):

In the future this should be supported across platforms (Java, Python, etc.), and the Java format felt more common.
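To make the conversion concrete, here is an illustration of the mechanism being discussed; the token table itself is an assumption, since only the replace-based pattern is quoted above.

```python
# Hypothetical subset of the Java SimpleDateFormat -> strptime mapping:
java_to_python_mapping = {
    "yyyy": "Y",  # 4-digit year  -> %Y
    "MM": "m",    # month         -> %m
    "dd": "d",    # day of month  -> %d
    "HH": "H",    # 24-hour clock -> %H
}

date_format = "yyyy/MM/dd"
for java_format, python_format in java_to_python_mapping.items():
    date_format = date_format.replace(java_format, f"%{python_format}")
assert date_format == "%Y/%m/%d"
```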

Merge commit — conflicts resolved in metadata-ingestion/src/datahub/ingestion/source/s3/source.py.
@mayurinehate (Collaborator) left a comment:

Looking good.

Two review comments on metadata-ingestion/docs/sources/s3/s3.md (outdated; resolved).
```python
# From the PR's path_spec tests (path_spec, path, and expected are the
# parametrized test inputs):
from datahub.ingestion.source.data_lake_common.path_spec import PathSpec

ps = PathSpec(include=path_spec, default_extension="csv", allow_double_stars=True)
assert ps.allowed(path)
partitions = ps.get_partition_from_path(path)
assert partitions == expected
```
Collaborator:

Thank you for adding these.

Excerpt under review (TableData fields):

```python
timestamp: datetime
size: int
partitions: List[Folder]
```
Collaborator:

Nice! So the browser would walk the file system to yield one BrowsePath per identified table, and that table would then be emitted as a dataset if it's allowed by the patterns, right?

@treff7es (Contributor, Author):

Yes, exactly.
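In pseudocode, the flow confirmed above is roughly the following; all names here are illustrative, not the source's actual API.

```python
for browse_path in s3_browser(path_spec):           # one BrowsePath per table
    table_data = extract_table_data(browse_path)     # carries partition Folders
    if source_config.is_allowed(table_data.table_path):
        yield from emit_dataset_workunits(table_data)  # emitted as a dataset
```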

Excerpt under review (docstring Args):

```python
path_spec (PathSpec): The path specification used to determine partitioning.
bucket (Any): The S3 bucket object.
prefix (str): The prefix path in the S3 bucket to list objects from.
partition (Optional[str]): An optional partition string to append to the prefix.
```
Collaborator:

partition can be removed from this docstring now that it's been removed from the method.

Excerpt under review (source diff):

```python
@@ -263,6 +509,44 @@ def _extract_table_name(self, named_vars: dict) -> str:
        raise ValueError("path_spec.table_name is not set")
    return self.table_name.format_map(named_vars)

def extract_datetime_partition(
```
Collaborator:

Is this used anywhere? It would be good to add unit tests for it too.

Excerpt under review:

```python
partitions.append(
    Folder(
        partition_id=id,
        is_partition=True if id else False,
```
Collaborator:

Should we set partitions to an empty list ([]) if id is absent?

Excerpt under review:

```python
        protocol=protocol,
        min=True,
    )
    dirs_to_process.append(dirs_to_process_min[0])
```
Collaborator:

Should we push the traversal mode (max or min_max) down into get_dirs_to_process? Without that, list_folders would be called twice.
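The reviewer's suggestion amounts to something like the sketch below, where a single listing serves both ends of a MIN_MAX traversal; the function and variable names are assumptions.

```python
folders = sorted(list_folders(bucket_name, prefix))  # one listing, not two
if traversal_method == "MIN_MAX" and folders:
    dirs_to_process = [folders[0], folders[-1]]      # oldest and newest partition
else:  # "ALL"
    dirs_to_process = folders
```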

treff7es and others added 7 commits on August 21, 2024, 10:41 (two co-authored with coderabbitai[bot]).
@mayurinehate (Collaborator) left a comment:

LGTM

@treff7es merged commit ef6a410 into datahub-project:master on Aug 22, 2024 — 57 of 58 checks passed.
@treff7es deleted the s3_partition_support branch on August 22, 2024, 15:55.
Labels: ingestion (PR or Issue related to the ingestion of metadata) · 3 participants