feat(ingest/s3): Partition support #11083

Merged · 31 commits · Aug 22, 2024
1bb1626
Initial commit for s3 partition support
treff7es Aug 2, 2024
7774e5f
Merge branch 'master' into s3_partition_support
treff7es Aug 2, 2024
7db9079
black linting
treff7es Aug 2, 2024
e8d6f3b
Merge branch 'master' into s3_partition_support
treff7es Aug 5, 2024
45efdb7
Changing enum to check if doc build succeeds
treff7es Aug 5, 2024
664bcd7
Update to use the latest model
treff7es Aug 5, 2024
af6df2a
Update golden files
treff7es Aug 5, 2024
6f1de05
Update golden files
treff7es Aug 5, 2024
05a132b
Stabilizing s3 integration tests
treff7es Aug 5, 2024
34f629b
Updating golden again
treff7es Aug 5, 2024
c247299
Updating goldens again
treff7es Aug 5, 2024
35bc791
Fix formatting
treff7es Aug 5, 2024
8d20952
Merge branch 'master' into s3_partition_support
treff7es Aug 5, 2024
45ee511
- Adding option to disable partition aspect generation for backward c…
treff7es Aug 5, 2024
f51a3e9
Black formatting
treff7es Aug 5, 2024
20c54e2
Update doc
treff7es Aug 5, 2024
e401ee9
Merge branch 'master' into s3_partition_support
treff7es Aug 5, 2024
56e14e2
Fix typos
treff7es Aug 5, 2024
a4cbdb9
Addressing pr review comments
treff7es Aug 12, 2024
d5aee9d
Merge branch 'master' into s3_partition_support
treff7es Aug 12, 2024
b0c8799
Merge branch 'master' into s3_partition_support
treff7es Aug 13, 2024
4d6502a
Fix linter issues
treff7es Aug 13, 2024
cb56197
Update metadata-ingestion/docs/sources/s3/s3.md
treff7es Aug 21, 2024
ab36e00
Update metadata-ingestion/src/datahub/ingestion/source/data_lake_comm…
treff7es Aug 21, 2024
662fa52
Update metadata-ingestion/src/datahub/ingestion/source/data_lake_comm…
treff7es Aug 21, 2024
12d6150
Update metadata-ingestion/src/datahub/ingestion/source/s3/source.py
treff7es Aug 21, 2024
a07216e
Update metadata-ingestion/src/datahub/ingestion/source/s3/source.py
treff7es Aug 21, 2024
e8c4f74
Update metadata-ingestion/docs/sources/s3/s3.md
treff7es Aug 21, 2024
b929f6a
Addressing pr review comments
treff7es Aug 21, 2024
ee2b827
Merge branch 'master' into s3_partition_support
treff7es Aug 21, 2024
0ce1174
Update golden files
treff7es Aug 21, 2024
33 changes: 29 additions & 4 deletions metadata-ingestion/docs/sources/s3/s3.md
@@ -3,16 +3,31 @@

Path Specs (`path_specs`) is a list of Path Spec (`path_spec`) objects, where each individual `path_spec` represents one or more datasets. The include path (`path_spec.include`) represents the formatted path to the dataset. This path must end with `*.*` or `*.[ext]` to represent the leaf level. If `*.[ext]` is provided, then only files with the specified extension will be scanned. "`.[ext]`" can be any of the [supported file types](#supported-file-types). Refer to [example 1](#example-1---individual-file-as-dataset) below for more details.

All folder levels need to be specified in include path. You can use `/*/` to represent a folder level and avoid specifying exact folder name. To map folder as a dataset, use `{table}` placeholder to represent folder level for which dataset is to be created. For a partitioned dataset, you can use placeholder `{partition_key[i]}` to represent name of `i`th partition and `{partition[i]}` to represent value of `i`th partition. During ingestion, `i` will be used to match partition_key to partition. Refer [example 2 and 3](#example-2---folder-of-files-as-dataset-without-partitions) below for more details.
All folder levels need to be specified in the include path. You can use `/*/` to represent a folder level and avoid specifying the exact folder name. To map a folder as a dataset, use the `{table}` placeholder to represent the folder level for which a dataset is to be created. For a partitioned dataset, you can use the placeholder `{partition_key[i]}` to represent the name of the `i`th partition and `{partition_value[i]}` to represent its value. During ingestion, `i` will be used to match each partition_key to its partition value. Refer to [examples 2 and 3](#example-2---folder-of-files-as-dataset-without-partitions) below for more details.
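As an illustrative sketch only (not DataHub's actual implementation), an include path with `{table}`, `{partition_key[i]}`, and `{partition_value[i]}` placeholders can be thought of as compiling down to a regular expression with named groups; the helper name `match_include` below is hypothetical:

```python
import re


def match_include(include: str, path: str) -> dict:
    """Hypothetical helper: match a concrete S3 path against an include
    path containing {table} / {partition_key[i]}={partition_value[i]}
    placeholders. A simplified sketch, not DataHub's real matcher."""
    # Escape the literal parts of the include path, then swap the
    # escaped placeholders for named capture groups.
    pattern = re.escape(include)
    pattern = pattern.replace(re.escape("{table}"), r"(?P<table>[^/]+)")
    for i in range(3):  # support a few partition levels for the sketch
        pattern = pattern.replace(
            re.escape(f"{{partition_key[{i}]}}"), rf"(?P<key{i}>[^/=]+)"
        )
        pattern = pattern.replace(
            re.escape(f"{{partition_value[{i}]}}"), rf"(?P<value{i}>[^/]+)"
        )
    pattern = pattern.replace(re.escape("*.parquet"), r"[^/]+\.parquet")
    m = re.fullmatch(pattern, path)
    return m.groupdict() if m else {}


groups = match_include(
    "s3://test-bucket/{table}/{partition_key[0]}={partition_value[0]}/*.parquet",
    "s3://test-bucket/orders/year=2024/part-0.parquet",
)
# groups -> {'table': 'orders', 'key0': 'year', 'value0': '2024'}
```

This also shows why `i` matters: `key0`/`value0` are captured as a pair, so the source can associate each partition key with the value found at the same folder level.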

Exclude paths (`path_spec.exclude`) can be used to ignore paths that are not relevant to the current `path_spec`. This path cannot have named variables (`{}`). An exclude path can have `**` to represent multiple folder levels. Refer to [example 4](#example-4---folder-of-files-as-dataset-with-partitions-and-exclude-filter) below for more details.

Refer to [example 5](#example-5---advanced---either-individual-file-or-folder-of-files-as-dataset) if your bucket has a more complex dataset representation.


**Additional points to note**
- Folder names should not contain `{`, `}`, `*`, or `/`.
- The named variable `{folder}` is reserved for internal use; please do not use it as a named variable.

#### Partitioned Dataset support
If your dataset is partitioned in the `partition_key=partition_value` format, then the partition keys and values are auto-detected.

Otherwise, you can specify partitions in the path_spec in the following ways:
1. Specify the partition key and value in the path, like => `{partition_key[0]}={partition_value[0]}/{partition_key[1]}={partition_value[1]}/{partition_key[2]}={partition_value[2]}`
2. Partition keys can be specified using named variables in the path_spec, like => `year={year}/month={month}/day={day}`
3. If the path is in the form `/value1/value2/value3`, the source infers the partition values from the path and assigns the keys `partition_0`, `partition_1`, `partition_2`, etc.
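The detection logic described above can be sketched roughly as follows (an illustrative simplification, not DataHub's actual code; the function name `detect_partitions` is hypothetical): `key=value` segments yield named partitions, and plain value segments fall back to positional `partition_<i>` keys.

```python
def detect_partitions(relative_path: str) -> list[tuple[str, str]]:
    """Sketch of partition auto-detection for a path below the {table}
    folder. Not DataHub's real implementation."""
    partitions = []
    for i, segment in enumerate(relative_path.strip("/").split("/")):
        if "=" in segment:
            # Hive-style key=value folder: keep the declared key.
            key, _, value = segment.partition("=")
            partitions.append((key, value))
        else:
            # Plain value folder: synthesize a positional key.
            partitions.append((f"partition_{i}", segment))
    return partitions


detect_partitions("year=2024/month=08/day=22")
# -> [('year', '2024'), ('month', '08'), ('day', '22')]
detect_partitions("2024/08/22")
# -> [('partition_0', '2024'), ('partition_1', '08'), ('partition_2', '22')]
```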

The dataset creation time is determined by the creation time of the earliest created file in the lowest partition, while the last updated time is determined by the last updated time of the most recently updated file in the highest partition.

How the source determines the highest/lowest partition is based on the traversal method set in the path_spec.
- If the traversal method is set to `MAX`, then the source will try to find the latest partition by ordering the partitions at each level and picking the latest one. This traversal method won't look for the earliest partition/creation time, but it is the fastest.
- If the traversal method is set to `MIN_MAX`, then the source will try to find the latest and earliest partitions by ordering the partitions at each level and picking the latest/earliest one. This traversal sorts folders purely by name, so it is fast, but it doesn't guarantee that the latest partition will contain the most recently created file.
- If the traversal method is set to `ALL`, then the source will find the latest and earliest partitions by listing all the files in all the partitions and determining the creation/last-modified times from the files themselves. This is the slowest method, but for datasets that are not partitioned by time it is the only way to find the latest/earliest partition.
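A minimal sketch of the `MIN_MAX` idea described above (assumed behavior, not DataHub's actual traversal, which walks live S3 listings level by level): the earliest and latest partitions are picked purely by folder-name ordering.

```python
def min_max_partition(partition_folders: list[str]) -> tuple[str, str]:
    """Sketch of MIN_MAX traversal: sort partition folder names
    lexicographically and take the first and last. Fast, but name
    order is only a proxy for file creation order."""
    ordered = sorted(partition_folders)
    return ordered[0], ordered[-1]


earliest, latest = min_max_partition(["year=2022", "year=2024", "year=2023"])
# earliest == 'year=2022', latest == 'year=2024'
```

This is also why `MIN_MAX` carries the caveat above: a file written late into an old partition (e.g. a backfill into `year=2022`) would be missed, whereas `ALL` would find it by listing every file.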

### Path Specs - Examples
#### Example 1 - Individual file as Dataset
@@ -73,7 +88,12 @@ test-bucket
Path specs config to ingest folders `orders` and `returns` as datasets:
```
path_specs:
- include: s3://test-bucket/{table}/{partition_key[0]}={partition[0]}/{partition_key[1]}={partition[1]}/*.parquet
- include: s3://test-bucket/{table}/{partition_key[0]}={partition_value[0]}/{partition_key[1]}={partition_value[1]}/*.parquet
```
or with partition auto-detection:
```
path_specs:
- include: s3://test-bucket/{table}/
```

One can also use `include: s3://test-bucket/{table}/*/*/*.parquet` here; however, the above format is preferred as it allows declaring partitions explicitly.
@@ -99,11 +119,15 @@ test-bucket
Path specs config to ingest folder `orders` as dataset but not folder `tmp_orders`:
```
path_specs:
- include: s3://test-bucket/{table}/{partition_key[0]}={partition[0]}/{partition_key[1]}={partition[1]}/*.parquet
- include: s3://test-bucket/{table}/{partition_key[0]}={partition_value[0]}/{partition_key[1]}={partition_value[1]}/*.parquet
exclude:
- **/tmp_orders/**
```

or with partition auto-detection:
```
path_specs:
- include: s3://test-bucket/{table}/
```

#### Example 5 - Advanced - Either Individual file OR Folder of files as Dataset

@@ -150,6 +174,7 @@ Above config has 3 path_specs and will ingest following datasets
s3://my-bucket/foo/tests/bar.avro # single file table
s3://my-bucket/foo/tests/*.* # multiple file level tables
s3://my-bucket/foo/tests/{table}/*.avro #table without partition
s3://my-bucket/foo/tests/{table}/ # table with partition auto-detection. Partitions can only be detected if they are in the key=value format
s3://my-bucket/foo/tests/{table}/*/*.avro #table where partitions are not specified
s3://my-bucket/foo/tests/{table}/*.* # table where neither partitions nor the data type are specified
s3://my-bucket/{dept}/tests/{table}/*.avro # specifying keywords to be used in display name