Docs: model kind concepts move inc by partition (#3713)
treysp authored Jan 25, 2025
1 parent 64e1c3f commit 42ffe94
264 changes: 133 additions & 131 deletions docs/concepts/models/model_kinds.md
Depending on the target engine, models of the `INCREMENTAL_BY_TIME_RANGE` kind are materialized using the following strategies:

| Engine   | Strategy                          |
|----------|-----------------------------------|
| Postgres | DELETE by time range, then INSERT |
| DuckDB   | DELETE by time range, then INSERT |

## INCREMENTAL_BY_PARTITION

Models of the `INCREMENTAL_BY_PARTITION` kind are computed incrementally based on partition. A set of columns defines the model's partitioning key, and a partition is the group of rows with the same partitioning key value.

!!! info "Should you use this model kind?"

    Any model kind can use a partitioned **table** by specifying the [`partitioned_by` key](../models/overview.md#partitioned_by) in the `MODEL` DDL. The "partition" in `INCREMENTAL_BY_PARTITION` is about how the data is **loaded** when the model runs.

    `INCREMENTAL_BY_PARTITION` models are inherently [non-idempotent](../glossary.md#idempotency), so restatements and other actions can cause data loss. This makes them more complex to manage than other model kinds.

    In most scenarios, an `INCREMENTAL_BY_TIME_RANGE` model can meet your needs and will be easier to manage. The `INCREMENTAL_BY_PARTITION` model kind should only be used when the data must be loaded by partition (usually for performance reasons).

This model kind is designed for the scenario where data rows should be loaded and updated as a group based on their shared value for the partitioning key.

It may be used with any SQL engine. SQLMesh will automatically create partitioned tables on engines that support explicit table partitioning (e.g., [BigQuery](https://cloud.google.com/bigquery/docs/creating-partitioned-tables), [Databricks](https://docs.databricks.com/en/sql/language-manual/sql-ref-partition.html)).

New rows are loaded based on their partitioning key value:

- If a partitioning key in newly loaded data is not present in the model table, the new partitioning key and its data rows are inserted.
- If a partitioning key in newly loaded data is already present in the model table, **all the partitioning key's existing data rows in the model table are replaced** with the partitioning key's data rows in the newly loaded data.
- If a partitioning key is present in the model table but not present in the newly loaded data, the partitioning key's existing data rows are not modified and remain in the model table.
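The three rules above can be illustrated with a small, self-contained sketch (a toy emulation using Python's `sqlite3`, not SQLMesh's actual load logic; the table and data are hypothetical):

```python
import sqlite3

# Existing model table has partitions "east" and "west"; the new batch
# contains "east" (replaced) and "north" (inserted); "west" is untouched.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (region TEXT, value INTEGER)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [("east", 1), ("east", 2), ("west", 3)])

new_batch = [("east", 10), ("north", 20)]

# Emulate the load: delete every existing row whose partitioning key
# appears in the new batch, then insert all of the new rows.
new_keys = sorted({region for region, _ in new_batch})
conn.executemany("DELETE FROM events WHERE region = ?",
                 [(k,) for k in new_keys])
conn.executemany("INSERT INTO events VALUES (?, ?)", new_batch)

rows = sorted(conn.execute("SELECT region, value FROM events"))
print(rows)  # [('east', 10), ('north', 20), ('west', 3)]
```

Note that both rows of the "east" partition were replaced by the single new "east" row, while "west" survived unchanged.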

This kind should only be used for datasets that have the following traits:

* The dataset's records can be grouped by a partitioning key.
* Each record has a partitioning key associated with it.
* It is appropriate to upsert records, so existing records can be overwritten by new arrivals when their partitioning keys match.
* All existing records associated with a given partitioning key can be safely removed or overwritten whenever any new record arrives with that partitioning key value.

The column defining the partitioning key is specified in the model's `MODEL` DDL `partitioned_by` key. This example shows the `MODEL` DDL for an `INCREMENTAL_BY_PARTITION` model whose partition key is the row's value for the `region` column:

```sql linenums="1" hl_lines="4"
MODEL (
  name db.events,
  kind INCREMENTAL_BY_PARTITION,
  partitioned_by region,
);
```

Compound partition keys are also supported, such as `region` and `department`:

```sql linenums="1" hl_lines="4"
MODEL (
  name db.events,
  kind INCREMENTAL_BY_PARTITION,
  partitioned_by (region, department),
);
```

Date and/or timestamp column expressions are also supported (varies by SQL engine). This BigQuery example's partition key is based on the month each row's `event_date` occurred:

```sql linenums="1" hl_lines="4"
MODEL (
  name db.events,
  kind INCREMENTAL_BY_PARTITION,
  partitioned_by DATETIME_TRUNC(event_date, MONTH)
);
```
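As a rough illustration of what that partition key computes, BigQuery's `DATETIME_TRUNC(event_date, MONTH)` maps each row to the first day of its month, so rows from the same calendar month share one partition. A minimal Python sketch of the same truncation (`month_partition_key` is a hypothetical helper for illustration, not part of SQLMesh):

```python
from datetime import date

def month_partition_key(event_date: date) -> date:
    """Truncate a date to the first day of its month, mirroring
    BigQuery's DATETIME_TRUNC(event_date, MONTH)."""
    return event_date.replace(day=1)

# Rows from different days of March 2025 share one partition key.
assert month_partition_key(date(2025, 3, 7)) == date(2025, 3, 1)
assert month_partition_key(date(2025, 3, 28)) == date(2025, 3, 1)
```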

!!! warning "Only full restatements supported"

    Partial data [restatements](../plans.md#restatement-plans) are used to reprocess part of a table's data (usually a limited time range).

    Partial data restatement is not supported for `INCREMENTAL_BY_PARTITION` models. If you restate an `INCREMENTAL_BY_PARTITION` model, its entire table will be recreated from scratch.

    Restating `INCREMENTAL_BY_PARTITION` models may lead to data loss and should be performed with care.

### Example

This is a fuller example of how you would use this model kind in practice. It limits the number of partitions to backfill based on time range in the `partitions_to_update` CTE.

```sql linenums="1"
MODEL (
  name demo.incremental_by_partition_demo,
  kind INCREMENTAL_BY_PARTITION,
  partitioned_by user_segment,
);

-- This is the source of truth for what partitions need to be updated and will join to the product usage data.
-- It could be an INCREMENTAL_BY_TIME_RANGE model that reads in the user_segment values last updated in the past 30 days to reduce scope.
-- Use this strategy to reduce full restatements.
WITH partitions_to_update AS (
  SELECT DISTINCT
    user_segment
  FROM demo.incremental_by_time_range_demo -- upstream table tracking which user segments to update
  WHERE last_updated_at BETWEEN DATE_SUB(@start_dt, INTERVAL 30 DAY) AND @end_dt
),

product_usage AS (
  SELECT
    product_id,
    customer_id,
    last_usage_date,
    usage_count,
    feature_utilization_score,
    user_segment
  FROM sqlmesh-public-demo.tcloud_raw_data.product_usage
  WHERE user_segment IN (SELECT user_segment FROM partitions_to_update) -- partition filter applied here
)

SELECT
  product_id,
  customer_id,
  last_usage_date,
  usage_count,
  feature_utilization_score,
  user_segment,
  CASE
    WHEN usage_count > 100 AND feature_utilization_score > 0.7 THEN 'Power User'
    WHEN usage_count > 50 THEN 'Regular User'
    WHEN usage_count IS NULL THEN 'New User'
    ELSE 'Light User'
  END AS user_type
FROM product_usage
```

**Note**: Partial data [restatement](../plans.md#restatement-plans) is not supported for this model kind, which means that the entire table will be recreated from scratch if restated. This may lead to data loss.

### Materialization strategy
Depending on the target engine, models of the `INCREMENTAL_BY_PARTITION` kind are materialized using the following strategies:

| Engine | Strategy |
|------------|-----------------------------------------|
| Databricks | REPLACE WHERE by partitioning key |
| Spark | INSERT OVERWRITE by partitioning key |
| Snowflake | DELETE by partitioning key, then INSERT |
| BigQuery | DELETE by partitioning key, then INSERT |
| Redshift | DELETE by partitioning key, then INSERT |
| Postgres | DELETE by partitioning key, then INSERT |
| DuckDB | DELETE by partitioning key, then INSERT |
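For the engines in the table that lack a native partition-overwrite primitive, the DELETE-then-INSERT strategy amounts to generating two statements per run. A simplified sketch of that idea (`partition_overwrite_sql` is a hypothetical illustration, not SQLMesh's adapter code; a real adapter would bind values as parameters and apply engine-specific quoting):

```python
def partition_overwrite_sql(table: str, key_column: str,
                            keys: list, source: str) -> tuple:
    """Build the two statements of the DELETE-by-partitioning-key-then-INSERT
    strategy: delete rows whose key appears in the new batch, then insert
    the new batch's rows."""
    placeholders = ", ".join("?" for _ in keys)
    delete_stmt = f"DELETE FROM {table} WHERE {key_column} IN ({placeholders})"
    insert_stmt = f"INSERT INTO {table} SELECT * FROM {source}"
    return delete_stmt, insert_stmt

delete_stmt, insert_stmt = partition_overwrite_sql(
    "db.events", "region", ["east", "north"], "new_batch"
)
print(delete_stmt)  # DELETE FROM db.events WHERE region IN (?, ?)
print(insert_stmt)  # INSERT INTO db.events SELECT * FROM new_batch
```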

## INCREMENTAL_BY_UNIQUE_KEY

Models of the `INCREMENTAL_BY_UNIQUE_KEY` kind are computed incrementally based on a key that is unique for each data row.
Due to there being no standard, each vendor has a different implementation with
We would recommend using standard SQLMesh model types in the first instance. However, if you do need to use Managed models, you still gain other SQLMesh benefits like the ability to use them in [virtual environments](../../concepts/overview#build-a-virtual-environment).

See [Managed Models](./managed_models.md) for more information on which engines are supported and which properties are available.

