
Releases: delta-io/delta

Delta Lake 1.2.0

13 Apr 21:05

We are excited to announce the release of Delta Lake 1.2.0 on Apache Spark 3.2. Similar to Apache Spark™, we have released Maven artifacts for both Scala 2.12 and Scala 2.13.

Key features in this release

  • Support for multi-cluster writes to Delta Lake tables stored in S3. Users now have the option of specifying a new, experimental LogStore implementation that supports concurrent reads and writes to a single Delta Lake table in S3 from multiple Spark drivers. See the documentation for more details.

  • Support for compacting small files (optimize) into larger files in a Delta Lake table. Fewer data files improve read performance by reducing metadata size and per-file overheads such as the cost of opening and closing files. See the documentation for more details; a combined example covering compaction, restore, and column renaming follows this feature list.

  • Support for data skipping using column statistics. Column statistics are collected for each file as part of the Delta Lake table writes. These statistics can be used during the reading of a Delta Lake table to skip reading files not matching the filters in the query. See the documentation for more details.

  • Support for restoring a Delta table to an earlier version. Restoring to an earlier version number or to the version at a specific timestamp is supported using the SQL command, Scala APIs, or Python APIs. See the documentation for more details.

  • Support for column renaming in a Delta Lake table without the need to rewrite the underlying Parquet data files. See the documentation for more details.

  • Support for arbitrary characters in column names in Delta tables. Previously, the supported set of characters was limited by what the Parquet data format allows. Column names containing special characters such as space, tab, ",", "{", and "(" are now supported. See the documentation for more details.

  • Support for automatic data skipping using generated columns. For any partition column that is a generated column, partition filters will be automatically generated from any data filters on its generating column(s), when possible.

  • Support for Google Cloud Storage is now generally available. See the documentation on how to read and write Delta Lake tables in Google Cloud Storage.

  • Other notable changes

    • Create a new module delta-storage. This extracts the LogStore interface and implementations out into a separate module, which is published as its own JAR. This enables new LogStore implementations without depending on the complete Delta JARs. See the migration guide for more details.
    • Improve the error messages and exceptions to be better organized and queryable.
    • Support for gettimestamp expression in generated columns.
    • Snapshot/Checkpoint management improvements
      • Make loading snapshots resilient to corrupt checkpoints in Delta. When reading a checkpoint fails, we try to search for an alternative checkpoint and use it to construct a snapshot.
      • Fix snapshot writing so that the write does not fail when a checkpoint fails due to non-fatal errors.
      • Optimization to reduce the number of list calls to storage.
    • Improved output metrics for DELETE table command.
    • Improved output metrics for UPDATE table command.
    • Optimize the merge operation on Delta tables with a large number of columns.
    • Fix a NullPointerException when trying to reference a DeltaLog created with a SparkContext that has stopped.
    • Fix an issue in handling null partition column values in the change data capture feature.
    • Fix an issue in adding a new column to the Delta table when the preceding column is of type Array.
    • Fix an issue where the file list iterator was not closed when reading large log files in the Delta Streaming source.
    • Throw proper exceptions when searching for a Delta table in the catalog.
    • Fix a schema evolution issue when the column type is an array of structs.
    • Better handling of FileNotFoundException when reading Delta log files to distinguish between the corrupt log files and no files found.
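
The following is a minimal sketch (not taken from the release notes) showing how the compaction, restore, and column-renaming features above can be exercised from PySpark. The table path /tmp/delta/events and the column names are hypothetical, and it assumes Delta Lake 1.2.0 with the Delta SQL extension and catalog configured.

```python
# Hedged sketch: OPTIMIZE, RESTORE, and column renaming on an existing
# Delta table. The path and column names are made up for illustration.
from pyspark.sql import SparkSession
from delta import configure_spark_with_delta_pip

builder = (
    SparkSession.builder.appName("delta-1.2-features")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

path = "/tmp/delta/events"  # hypothetical table location

# Compact small files into larger ones.
spark.sql(f"OPTIMIZE delta.`{path}`")

# Roll the table back to an earlier version (a timestamp also works).
spark.sql(f"RESTORE TABLE delta.`{path}` TO VERSION AS OF 0")

# Rename a column without rewriting the Parquet files. Column mapping mode
# 'name' and the newer protocol versions are required first.
spark.sql(f"""
    ALTER TABLE delta.`{path}` SET TBLPROPERTIES (
        'delta.columnMapping.mode' = 'name',
        'delta.minReaderVersion' = '2',
        'delta.minWriterVersion' = '5')
""")
spark.sql(f"ALTER TABLE delta.`{path}` RENAME COLUMN eventType TO event_type")
```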

Benchmark Framework

Independent of this release, we have also built a framework for writing large-scale performance benchmarks on Delta tables using a real cluster. Currently, the framework provides a TPC-DS-inspired benchmark to measure ingestion time (e.g., the time taken to create the TPC-DS tables) and query times. We encourage the community to contribute more benchmarks that measure the performance of different real-world workloads on Delta tables.

Credits

Adam Binford, Alex Liu, Allison Portis, Anton Okolnychyi, Bart Samwel, Carmen Kwan, Chang Yong Lik, Christian Williams, Christos Stavrakakis, David Lewis, Denny Lee, Fabio Badalì, Fred Liu, Gengliang Wang, Hoang Pham, Hussein Nagree, Hyukjin Kwon, Jackie Zhang, Jan Paw, John ODwyer, Junlin Zeng, Junyong Lee, Kam Cheung Ting, Kapil Sreedharan, Lars Kroll, Liwen Sun, Maksym Dovhal, Mariusz Krynski, Meng Tong, Peng Zhong, Prakhar Jain, Pranav, Ryan Johnson, Sabir Akhadov, Scott Sandre, Shixiong Zhu, Sri Tikkireddy, Tathagata Das, Tyson Condie, Vegard Stikbakke, Venkata Sai Akhil Gudesa, Venki Korukanti, Vini Jaiswal, Wenchen Fan, Will Jones, Xinyi Yu, Yann Byron, Yaohua Zhao, Yijia Cui

Delta Lake 1.0.1

10 Feb 21:39

We are excited to announce the release of Delta Lake 1.0.1 on Apache Spark™ 3.1, which back-ports bug fixes from Delta Lake 1.1.0 to Delta Lake 1.0.0.

The details of the fixed bugs are as follows:

  • Fix for rare data corruption issue on GCS - Experimental GCS support released in Delta Lake 1.0 has a rare bug that can lead to Delta tables being unreadable due to partially written transaction log files. This issue has now been fixed (1, 2).

  • Fix for the incorrect return object in Python DeltaTable.convertToDelta() - This existing API now returns the correct Python object of type delta.tables.DeltaTable instead of an incorrectly typed, and therefore unusable, object.

  • Fix for incorrect handling of special characters (e.g. spaces) in paths by MERGE/UPDATE/DELETE operations

  • Fix for Hadoop configurations not being used to write checkpoints

  • Improvements to DeltaTableBuilder API introduced in Delta 1.0.0

    • Fix for bug that prevented passing of multiple partition columns in Python DeltaTableBuilder.partitionBy.
    • Throw error when column data type is not specified.

Credits
Jarred Parrett, Shixiong Zhu, Tathagata Das, Tom Lynch, Yijia Cui, Yaohua Zhao, gurunath

Delta Lake 1.1.0

03 Dec 20:53

We are excited to announce the release of Delta Lake 1.1.0 on Apache Spark 3.2. Similar to Apache Spark™, we have released Maven artifacts for both Scala 2.12 and Scala 2.13. The key features in this release are as follows.

  • Performance improvements in MERGE operation - On partitioned tables, MERGE operations will automatically repartition the output data before writing to files. This ensures better performance out-of-the-box for both the MERGE operation as well as subsequent read operations.

  • Support for passing Hadoop configurations via DataFrameReader/Writer options - You can now set Hadoop FileSystem configurations (e.g., access credentials) via DataFrameReader/Writer options. Previously, the only way to pass such configurations was to set a Spark session configuration, which applied the same value to all reads and writes. Now you can set different values for each read and write. See the documentation for more details.

  • Support for arbitrary expressions in the replaceWhere DataFrameWriter option - Instead of expressions only on partition columns, you can now use arbitrary expressions in the replaceWhere DataFrameWriter option. That is, you can replace arbitrary data in a table directly with DataFrame writes. See the documentation for more details; a hedged example combining this with per-write Hadoop options follows this feature list.

  • Improvements to nested field resolution and schema evolution in MERGE operation on array of structs - When applying the MERGE operation on a target table having a column typed as an array of nested structs, the nested columns between the source and target data are now resolved by name and not by position in the struct. This ensures structs in arrays have a consistent behavior with structs outside arrays. When automatic schema evolution is enabled for MERGE, nested columns in structs in arrays will follow the same evolution rules (e.g., column added if no column by the same name exists in the table) as columns in structs outside arrays. See the documentation for more details.

  • Support for Generated Columns in MERGE operation - You can now apply MERGE operations on tables having Generated Columns.

  • Fix for rare data corruption issue on GCS - Experimental GCS support released in Delta Lake 1.0 has a rare bug that can lead to Delta tables being unreadable due to partially written transaction log files. This issue has now been fixed (1, 2).

  • Fix for the incorrect return object in Python DeltaTable.convertToDelta() - This existing API now returns the correct Python object of type delta.tables.DeltaTable instead of an incorrectly typed, and therefore unusable, object.

  • Python type annotations - We have added Python type annotations, which improve auto-completion in editors that support type hints. Optionally, you can enable static checking through mypy or IDE built-in checkers (for example, PyCharm).

  • Other notable changes

    • Removed support to read tables with certain special characters in partition column name. See migration guide for details.
    • Support for “delta.`path`” in DeltaTable.forName() for consistency with other APIs
    • Improvements to DeltaTableBuilder API introduced in Delta 1.0.0
      • Fix for bug that prevented passing of multiple partition columns in Python DeltaTableBuilder.partitionBy.
      • Throw error when column data type is not specified.
    • Improved support for MERGE/UPDATE/DELETE on temp views.
    • Support for setting userMetadata in the commit information when creating or replacing tables.
    • Fix for an incorrect analysis exception in MERGE with multiple INSERT and UPDATE clauses and automatic schema evolution enabled.
    • Fix for incorrect handling of special characters (e.g. spaces) in paths by MERGE/UPDATE/DELETE operations.
    • Fix to prevent Vacuum parallel mode from being affected by Adaptive Query Execution, which is enabled by default in Apache Spark 3.2.
    • Fix for earliest valid time travel version.
    • Fix for Hadoop configurations not being used to write checkpoints.
    • Multiple fixes (1, 2, 3) to Delta Constraints.
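
Below is a minimal sketch (not from the release notes) of the per-write Hadoop options and the arbitrary replaceWhere expression mentioned above. The storage account, access key, paths, and column names are hypothetical, and it assumes a SparkSession already configured for Delta Lake 1.1.0.

```python
# Hedged sketch: per-write Hadoop configuration and arbitrary replaceWhere.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # assumes Delta extension/catalog already set

df = spark.range(100).withColumnRenamed("id", "user_id")

# Pass a file system credential for this write only (options starting with
# "fs." are forwarded to the Hadoop configuration of this read/write).
(df.write.format("delta")
   .option("fs.azure.account.key.myaccount.dfs.core.windows.net", "<access-key>")
   .save("abfss://data@myaccount.dfs.core.windows.net/delta/users"))

# replaceWhere now accepts arbitrary expressions, not only partition columns.
df.write.format("delta").mode("overwrite").save("/tmp/delta/users")
(df.filter("user_id >= 50")
   .write.format("delta").mode("overwrite")
   .option("replaceWhere", "user_id >= 50")
   .save("/tmp/delta/users"))
```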

Credits
Abhishek Somani, Adam Binford, Alex Jing, Alexandre Lopes, Allison Portis, Bogdan Raducanu, Bart Samwel, Burak Yavuz, David Lewis, Eunjin Song, Feng Zhu, Flavio Cruz, Florian Valeye, Fred Liu, Guy Khazma, Jacek Laskowski, Jackie Zhang, Jarred Parrett, JassAbidi, Jose Torres, Junlin Zeng, Junyong Lee, KamCheung Ting, Karen Feng, Lars Kroll, Li Zhang, Linhong Liu, Liwen Sun, Maciej, Max Gekk, Meng Tong, Prakhar Jain, Pranav Anand, Rahul Mahadev, Ryan Johnson, Sabir Akhadov, Scott Sandre, Shixiong Zhu, Shuting Zhang, Tathagata Das, Terry Kim, Tom Lynch, Vijayan Prabhakaran, Vítor Mussa, Wenchen Fan, Yaohua Zhao, Yijia Cui, YuXuan Tay, Yuchen Huo, Yuhong Chen, Yuming Wang, Yuyuan Tang, Zach Schuermann, ericfchang, gurunath

Delta Lake 1.0.0

24 May 23:22

We are excited to announce the release of Delta Lake 1.0.0 on Apache Spark 3.1. The key features in this release are as follows.

  • Unlimited MATCHED and NOT MATCHED clauses for merge operations in SQL - With the upgrade to Apache Spark 3.1, the MERGE SQL command now supports any number of WHEN MATCHED and WHEN NOT MATCHED clauses (the Scala, Java, and Python APIs have supported unlimited clauses since 0.8.0 on Spark 3.0). See the documentation on MERGE for more details.

  • New programmatic APIs to create tables - Delta Lake now allows you to directly create new Delta tables programmatically (Scala, Java, and Python) without using DataFrame APIs. We have introduced new DeltaTableBuilder and DeltaColumnBuilder APIs to specify all the table details that you can specify through SQL CREATE TABLE. See the documentation for details and examples; a hedged example follows this feature list.

  • Experimental support for Generated Columns - Delta Lake now supports Generated Columns which are a special type of columns whose values are automatically generated based on a user-specified function over other columns in the Delta table. You can use most built-in SQL functions in Apache Spark to generate the values of these generated columns. For example, you can automatically generate a date column (for partitioning the table by date) from the timestamp column; any writes into the table need only specify the data for the timestamp column. You can create Delta tables with Generated Columns using the new programmatic APIs to create tables. See the documentation for details.

  • Simplified storage configuration - Delta Lake can now automatically load the correct LogStore needed for common storage systems hosting the Delta table being read or written to. Users no longer need to explicitly configure the LogStore implementation if they are running Delta Lake on AWS S3, Azure blob stores, and HDFS. This also allows the same application to simultaneously read and write to Delta tables on different cloud storage systems. The scheme of the Delta table path is used to dynamically load the necessary LogStore implementation. Using storage systems other than the ones listed above still needs explicit configuration. See the documentation on storage configuration for details.

  • Experimental support for additional cloud storage systems - Delta Lake now has experimental support for Google Cloud Storage, Oracle Cloud Storage, and IBM Cloud Object Storage. You will have to add an additional Maven artifact delta-contribs to access the corresponding LogStores, and explicitly configure the LogStore names for the relevant path schemes. See the documentation on storage configuration for details. In addition, we have also defined a more stable LogStore API for building custom implementations.

  • Public APIs for catching exceptions due to conflicts - The exceptions thrown on conflict between concurrent operations have now been converted to public APIs. This allows you to catch those exceptions and retry your write operations. See the API documentation for details.

  • PyPI release - Delta Lake can now be installed from PyPI with pip install delta-spark. However, along with pip installation, you also have to configure the SparkSession. See the documentation for details.

  • Other notable changes

    • New Maven artifact delta-contribs, which contains contributions from the community that are still experimental and need more testing before being packaged in the main artifact delta-core.
    • Execution time metrics for UPDATE, DELETE, and MERGE operations are available in table history.
    • Fixed multiple bugs in schema evolution of nested columns in MERGE operation.
    • Fixed bug in handling dots in column names.
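
The sketch below (not from the release notes) shows a pip-installed delta-spark session created with configure_spark_with_delta_pip and a table defined through the new DeltaTableBuilder API, including an experimental generated column. The table and column names are hypothetical.

```python
# Hedged sketch: DeltaTableBuilder with a generated column on Delta Lake 1.0.0.
from pyspark.sql import SparkSession
from delta import configure_spark_with_delta_pip
from delta.tables import DeltaTable

builder = (
    SparkSession.builder.appName("delta-1.0-builder")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

(DeltaTable.createIfNotExists(spark)
    .tableName("events")
    .addColumn("eventId", "BIGINT")
    .addColumn("eventTime", "TIMESTAMP")
    # Generated column: derived from eventTime on every write.
    .addColumn("eventDate", "DATE", generatedAlwaysAs="CAST(eventTime AS DATE)")
    .partitionedBy("eventDate")
    .execute())
```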

Delta Sharing
In relation to this release, we have also introduced the new Delta Sharing project, an open protocol for secure real-time exchange of large datasets that enables organizations to share data in real time regardless of which computing platforms they use. It is a simple REST protocol that securely shares access to part of a cloud dataset and leverages modern cloud storage systems, such as S3, ADLS, or GCS, to reliably transfer data. See the project repository and the release notes for details.

Credits
Alex Ott, Ali Afroozeh, Antonio, Bruno Palos, Burak Yavuz, Christopher Grant, Denny Lee, Gengliang Wang, Guy Khazma, Howard Xiao, Jacek Laskowski, Joe Widen, Jose Torres, Lars Kroll, Linhong Liu, Meng Tong, Prakhar Jain, Pranav Anand, R. Tyler Croy, Rahul Mahadev, Ranu Vikram, Sabir Akhadov, Shixiong Zhu, Stefan Zeiger, Tathagata Das, Tom van Bussel, Vijayan Prabhakaran, Vivek Bhaskar, Wenchen Fan, Yijia Cui, Yingyi Bu, Yuchen Huo, Brenner Heintz, fvaleye, Herman van Hovell, Liwen Sun, Mahmoud Mahdi, Yaohua Zhao

Delta Lake 0.8.0

05 Feb 02:23

We are excited to announce the release of Delta Lake 0.8.0, which introduces the following key features.

  • Unlimited MATCHED and NOT MATCHED clauses for merge operations in Scala, Java, and Python - merge operations now support any number of whenMatched and whenNotMatched clauses. In addition, merge queries that unconditionally delete matched rows no longer throw errors on multiple matches. See the documentation for details.

  • MERGE operation now supports schema evolution of nested columns - Schema evolution of nested columns now has the same semantics as that of top-level columns. For example, new nested columns can be automatically added to a StructType column. See Automatic schema evolution in Merge for details.

  • MERGE INTO and UPDATE operations now resolve nested struct columns by name - The UPDATE and MERGE INTO commands now resolve nested struct columns by name. That is, when comparing or assigning columns of type StructType, the order of the nested columns does not matter (exactly as with the order of top-level columns). To revert to resolving by position, set the Spark configuration "spark.databricks.delta.resolveMergeUpdateStructsByName.enabled" to "false".

  • Check constraints on Delta tables - Delta now supports CHECK constraints. When supplied, Delta automatically verifies that data added to a table satisfies the specified constraint expression. To add CHECK constraints, use the ALTER TABLE ADD CONSTRAINT command. See the documentation for details.

  • Start streaming a table from a specific version (#474) - When using Delta as a streaming source, you can use the options startingTimestamp or startingVersion to start processing the table from that version onwards. You can also set startingVersion to latest to skip existing data in the table and stream only the new incoming data. See the documentation for details; a hedged example follows this feature list.

  • Ability to perform parallel deletes with VACUUM (#395) - When using VACUUM, you can set the session configuration "spark.databricks.delta.vacuum.parallelDelete.enabled" to "true" in order to use Spark to perform the deletion of files in parallel (based on the number of shuffle partitions). See the documentation for details.

  • Use Scala implicits to simplify read and write APIs - You can import io.delta.implicits._ to use the delta method with Spark read and write APIs such as spark.read.delta("/my/table/path"). See the documentation for details.
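
Below is a minimal sketch (not from the release notes) of a merge with multiple whenMatched clauses, a streaming read that starts from a specific version, and the parallel-delete setting for VACUUM. The table path, columns, and source rows are hypothetical.

```python
# Hedged sketch: multi-clause merge, startingVersion streaming, parallel VACUUM.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()  # assumes Delta 0.8.0 already configured
path = "/tmp/delta/accounts"                # hypothetical, pre-existing table

target = DeltaTable.forPath(spark, path)
updates = spark.createDataFrame([(1, "close"), (2, "update"), (3, "new")],
                                ["id", "action"])

(target.alias("t")
  .merge(updates.alias("s"), "t.id = s.id")
  .whenMatchedDelete(condition="s.action = 'close'")
  .whenMatchedUpdate(condition="s.action = 'update'", set={"status": "'active'"})
  .whenNotMatchedInsert(values={"id": "s.id", "status": "'new'"})
  .execute())

# Start a stream from version 5 of the table instead of from the beginning.
changes = (spark.readStream.format("delta")
           .option("startingVersion", "5")
           .load(path))

# Delete files in parallel when vacuuming.
spark.conf.set("spark.databricks.delta.vacuum.parallelDelete.enabled", "true")
target.vacuum()
```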

Credits
Adam Binford, Alan Jin, Alex liu, Ali Afroozeh, Andrew Fogarty, Burak Yavuz, David Lewis, Gengliang Wang, HyukjinKwon, Jacek Laskowski, Jose Torres, Kian Ghodoussi, Linhong Liu, Liwen Sun, Mahmoud Mahdi, Maryann Xue, Michael Armbrust, Mike Dias, Pranav Anand, Rahul Mahadev, Scott Sandre, Shixiong Zhu, Stephanie Bodoff, Tathagata Das, Wenchen Fan, Wesley Hoffman, Xiao Li, Yijia Cui, Yuanjian Li, Zach Schuermann, contrun, ekoifman, Yi Wu

Delta Lake 0.7.0

18 Jun 21:22

We are excited to announce the release of Delta Lake 0.7.0 on Apache Spark 3.0. This is the first release on Spark 3.x and adds support for metastore-defined tables and SQL DDLs. The key features in this release are as follows.

  • Support for defining tables in the Hive metastore (#85) - You can now define Delta tables in the Hive metastore and use the table name in all SQL operations.

    This integration uses the Catalog APIs introduced in Spark 3.0. You must enable the Delta Catalog by setting additional configurations when starting your SparkSession. See the documentation for details; a hedged configuration example follows this feature list.

  • Support for SQL Delete, Update and Merge - With Spark 3.0, you can now use SQL DML operations DELETE, UPDATE and MERGE. See the documentation for details.

  • Support for automatic and incremental Presto/Athena manifest generation (#453) - You can now use ALTER TABLE SET TBLPROPERTIES to enable automatic regeneration of the Presto/Athena manifest files on every operation on a Delta table. This regeneration is incremental, that is, manifest files are updated for only the partitions that have been updated by the operation. See the documentation for details.

  • Support for controlling the retention of the table history - You can now use ALTER TABLE SET TBLPROPERTIES to configure how long the table history and delete files are maintained in Delta tables. See the documentation for details.

  • Support for adding user-defined metadata in Delta table commits - You can now add user-defined metadata as strings in commits made to a Delta table by any operation. For DataFrame.write and DataFrame.writeStream operations, you can set the option userMetadata. For other operations, you can set the SparkSession configuration spark.databricks.delta.commitInfo.userMetadata. See the documentation for details.

  • Support Azure Data Lake Storage Gen2 (#288) - Spark 3.0 has support for Hadoop 3.2 libraries which enables support for Azure Data Lake Storage Gen2. See the documentation for details on how to configure Delta Lake with the correct versions of Spark and Hadoop libraries for Azure storage systems.

  • Improved support for streaming one-time triggers - With Spark 3.0, we now ensure that one-time trigger (also known as Trigger.Once) processes all outstanding data in a Delta table in a single micro-batch even if rate limits are set with the DataStreamReader option maxFilesPerTrigger.
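
The sketch below (not from the release notes) shows the extra SparkSession configuration the Delta Catalog needs on Spark 3.0, a metastore-defined table, SQL DML on it, user-defined commit metadata, and the table property that keeps Presto/Athena manifests up to date. The table name and values are hypothetical.

```python
# Hedged sketch: Delta Catalog setup, metastore table, SQL DML, user metadata,
# and automatic manifest regeneration on Delta Lake 0.7.0 / Spark 3.0.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("delta-0.7-catalog")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

spark.sql("CREATE TABLE IF NOT EXISTS events (id BIGINT, status STRING) USING DELTA")

# Tag subsequent commits with user-defined metadata.
spark.conf.set("spark.databricks.delta.commitInfo.userMetadata", "nightly-load")

spark.sql("INSERT INTO events VALUES (1, 'new'), (2, 'stale')")
spark.sql("DELETE FROM events WHERE status = 'stale'")
spark.sql("UPDATE events SET status = 'seen' WHERE id = 1")

# Regenerate Presto/Athena manifests incrementally on every operation.
spark.sql("""
    ALTER TABLE events
    SET TBLPROPERTIES (delta.compatibility.symlinkFormatManifest.enabled = true)
""")
```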

Due to the significant internal changes, workloads on previous versions of Delta using the DeltaTable programmatic APIs may require additional changes to migrate to 0.7.0. See the Migration Guide for details.

Credits
Alan Jin, Alex Ott, Burak Yavuz, Jose Torres, Pranav Anand, QP Hou, Rahul Mahadev, Rob Kelly, Shixiong Zhu, Subhash Burramsetty, Tathagata Das, Wesley Hoffman, Yin Huai, Youngbin Kim, Zach Schuermann, Eric Chang, Herman van Hovell, Mahmoud Mahdi

Delta Lake 0.6.1

26 May 23:03

We are excited to announce the release of Delta Lake 0.6.1, which fixes a few critical bugs in merge operation and operation metrics. If you are using version 0.6.0, it is strongly recommended that you upgrade to version 0.6.1. The details of the fixed bugs are as follows:

  • Invalid MERGE INTO AnalysisExceptions (#419) - A couple of bugs related to merge operation were causing analysis errors in 0.6.0 on previously supported merge queries.

    • Fixing one of these bugs required reverting a minor change made to the DeltaTable API in 0.6.0. In 0.6.1 (as in 0.5.0), if the table’s schema has changed since the creation of the DeltaTable instance, DeltaTable.toDF() does not return a DataFrame with the latest schema. In such scenarios, you must recreate the DeltaTable instance for it to recognize the latest schema.
  • Incorrect operations metrics in history - 0.6.0 reported an incorrect number of rows processed during Update and Delete. This is fixed in 0.6.1.

Credits
Alan Jin, Jose Torres, Rahul Mahadev, Tathagata Das

Delta Lake 0.6.0

23 Apr 01:01

We are excited to announce the release of Delta Lake 0.6.0, which introduces schema evolution and performance improvements in merge, and operation metrics in table history. The key features in this release are:

  • Support for schema evolution in merge operations (#170) - You can now automatically evolve the schema of the top-level columns of a Delta table with the merge operation. This is useful in scenarios where you want to upsert change data into a table and the schema of the data changes over time. Instead of detecting and applying schema changes before upserting, merge can simultaneously evolve the schema and upsert the changes. See the documentation for details; a hedged example follows this feature list.

  • Improved merge performance with automatic repartitioning (#349) - When merging into partitioned tables, you can choose to automatically repartition the data by the partition columns before writing to the table. In cases where the merge operation on a partitioned table is slow because it generates too many small files (#345), enabling automatic repartition can improve performance. See the documentation for details.

  • Improved performance when there is no insert clause (#342) - You can now get better performance in a merge operation if it does not have any insert clause.

  • Operation metrics in DESCRIBE HISTORY (#312) - You can now see operation metrics (for example, number of files and rows changed) for all writes, updates, and deletes on a Delta table in the table history. See the documentation for details.

  • Support for reading Delta tables from any file system (#347) - You can now read Delta tables on any storage system with a Hadoop FileSystem implementation. However, writing to Delta tables still requires configuring a LogStore implementation that gives the necessary guarantees on the storage system. See the documentation for details.
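
Below is a minimal sketch (not from the release notes) of the merge-related settings in this release: automatic schema evolution and repartitioning before write, followed by the new operation metrics in the table history. The table path and columns are hypothetical.

```python
# Hedged sketch: schema evolution in merge, repartition before write,
# and operation metrics in history on Delta Lake 0.6.0.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()  # assumes Delta already configured

# Let merge add new top-level columns found in the source data.
spark.conf.set("spark.databricks.delta.schema.autoMerge.enabled", "true")
# Repartition by the partition columns before writing to avoid many small files.
spark.conf.set("spark.databricks.delta.merge.repartitionBeforeWrite.enabled", "true")

target = DeltaTable.forPath(spark, "/tmp/delta/customers")  # hypothetical table
changes = spark.createDataFrame([(1, "a@example.com", "gold")],
                                ["id", "email", "tier"])  # "tier" may be a new column

(target.alias("t")
  .merge(changes.alias("s"), "t.id = s.id")
  .whenMatchedUpdateAll()
  .whenNotMatchedInsertAll()
  .execute())

# The merge's operation metrics now show up in DESCRIBE HISTORY / history().
target.history(1).select("operation", "operationMetrics").show(truncate=False)
```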

Credits

Ali Afroozeh, Andrew Fogarty, Anurag870, Burak Yavuz, Erik LaBianca, Gengliang Wang, IonutBoicuAms, Jakub Orłowski, Jose Torres, KevinKarlBob, Michael Armbrust, Pranav Anand, Rahul Govind, Rahul Mahadev, Shixiong Zhu, Steve Suh, Tathagata Das, Timothy Zhang, Tom van Bussel, Wesley Hoffman, Xiao Li, chet, Eugene Koifman, Herman van Hovell, hongdd, lswyyy, lys0716, Mahmoud Mahdi, Maryann Xue

Delta Lake 0.5.0

13 Dec 02:08

We are excited to announce the release of Delta Lake 0.5.0, which introduces Presto/Athena support and improved concurrency. The key features in this release are:

  • Support for other processing engines using manifest files (#76) - You can now query Delta tables from Presto and Amazon Athena using manifest files, which you can generate using the Scala, Java, Python, and SQL APIs. See the documentation for details; a hedged example follows this feature list.

  • Improved concurrency for all Delta Lake operations (#9, #72, #228) - You can now run more Delta Lake operations concurrently. Delta Lake’s optimistic concurrency control has been improved by making conflict detection more fine-grained. This makes it easier to run complex workflows on Delta tables. For example:

    • Running deletes (e.g. for GDPR compliance) concurrently on older partitions while newer partitions are being appended.
    • Running updates and merges concurrently on disjoint sets of partitions.
    • Running file compactions concurrently with appends (see below).

    See the documentation on concurrency control for more details.

  • Improved support for file compaction (#146) - You can now compact files by rewriting them with the DataFrameWriter option dataChange set to false. This option allows a compaction operation to run concurrently with other batch and streaming operations. See this example in the documentation for details.

  • Improved performance for insert-only merge (#246) - Delta Lake now provides more optimized performance for merge operations that have only insert clauses and no update clauses. Furthermore, Delta Lake ensures that writes from such insert-only merges only append new data to the table. Hence, you can now use Structured Streaming and insert-only merges to do continuous deduplication of data (e.g. logs). See this example in the documentation for details.

  • SQL Support for Convert-to-Delta (#175) - You can now use SQL to convert a Parquet table to Delta (Scala, Java, and Python were already supported in 0.4.0). See the documentation for details.

  • Experimental support for Snowflake and Redshift Spectrum - You can now query Delta tables from Snowflake and Redshift Spectrum. This support is considered experimental in this release. See the documentation for details.
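
The sketch below (not from the release notes) combines three of the features above: manifest generation for Presto/Athena, file compaction with dataChange set to false, and an insert-only merge used for deduplication. The path and columns are hypothetical.

```python
# Hedged sketch: manifest generation, dataChange=false compaction, and
# insert-only merge deduplication on Delta Lake 0.5.0.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()  # assumes Delta already configured
path = "/tmp/delta/logs"                    # hypothetical, pre-existing table
table = DeltaTable.forPath(spark, path)

# Write symlink manifest files so Presto/Athena can query the table.
table.generate("symlink_format_manifest")

# Compact into 16 files; dataChange=false marks the rewrite as containing no
# new data, so it can run concurrently with appends.
(spark.read.format("delta").load(path)
   .repartition(16)
   .write.format("delta")
   .mode("overwrite")
   .option("dataChange", "false")
   .save(path))

# Insert-only merge: append only records whose id is not already in the table.
new_logs = (spark.createDataFrame([(42, "GET /index")], ["id", "line"])
            .dropDuplicates(["id"]))
(table.alias("t")
  .merge(new_logs.alias("s"), "t.id = s.id")
  .whenNotMatchedInsertAll()
  .execute())
```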

Credits

Andreas Neumann, Andrew Fogarty, Burak Yavuz, Denny Lee, Fabio B. Silva, JassAbidi, Matthew Powers, Mukul Murthy, Nicolas Paris, Pranav Anand, Rahul Mahadev, Reynold Xin, Shixiong Zhu, Tathagata Das, Tomas Bartalos, Xiao Li

Thank you for your contributions.

Delta Lake 0.4.0

01 Oct 02:23

We are excited to announce the release of Delta Lake 0.4.0, which introduces Python APIs for manipulating and managing data in Delta tables. The key features in this release are:

Python APIs for DML and utility operations (#89) - You can now use Python APIs to update/delete/merge data in Delta Lake tables and to run utility operations (i.e., vacuum, history) on them. These are great for building complex workloads in Python, e.g., Slowly Changing Dimension (SCD) operations, merging change data for replication, and upserts from streaming queries. See the documentation for more details.
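
A minimal sketch (not from the release notes) of these Python APIs is shown below; the path, columns, and thresholds are hypothetical, and it assumes a SparkSession configured for Delta Lake 0.4.0.

```python
# Hedged sketch: update, delete, merge, vacuum, and history from Python.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()  # assumes Delta already configured
table = DeltaTable.forPath(spark, "/tmp/delta/people")  # hypothetical table

table.update(condition=col("age") < 0, set={"age": "0"})  # repair bad rows
table.delete(condition="age > 120")                       # drop implausible rows

# Upsert change data (a simple SCD-style merge).
upserts = spark.createDataFrame([(7, 31)], ["id", "age"])
(table.alias("t")
  .merge(upserts.alias("s"), "t.id = s.id")
  .whenMatchedUpdateAll()
  .whenNotMatchedInsertAll()
  .execute())

table.vacuum(336)        # remove no-longer-needed files older than 14 days
table.history().show()   # audit the operations above
```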

Convert-to-Delta (#78) - You can now convert a Parquet table in place to a Delta Lake table without rewriting any of the data. This is great for converting very large Parquet tables which would be costly to rewrite as a Delta table. Furthermore, this process is reversible - you can convert a Parquet table to a Delta Lake table, operate on it (e.g., delete or merge), and easily convert it back to a Parquet table. See the documentation for more details.
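
Below is a minimal sketch (not from the release notes) of an in-place conversion; the directory and partition schema are hypothetical.

```python
# Hedged sketch: convert a partitioned Parquet directory to Delta in place.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()  # assumes Delta already configured

# No data is rewritten; only the transaction log is created.
deltaTable = DeltaTable.convertToDelta(
    spark, "parquet.`/tmp/parquet/events`", "date DATE")

# To convert back to plain Parquet, vacuum with a zero-hour retention and
# delete the _delta_log directory, as described in the documentation.
```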

SQL for utility operations - You can now use SQL to run utility operations vacuum and history. See the documentation for more details on how to configure Spark to execute these Delta-specific SQL commands.
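
A hedged sketch of the SQL utility commands follows; the path is hypothetical, and the Delta SQL extension is assumed to be configured as described in the documentation.

```python
# Hedged sketch: VACUUM and DESCRIBE HISTORY through SQL on Delta Lake 0.4.0.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .getOrCreate()
)

spark.sql("VACUUM '/tmp/delta/people' RETAIN 336 HOURS")
spark.sql("DESCRIBE HISTORY '/tmp/delta/people'").show(truncate=False)
```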

To try out Delta Lake 0.4.0, please follow the Delta Lake Quickstart.