diff --git a/site/blog/2024-06-26-datapackage-v2-release/README.md b/site/blog/2024-06-26-datapackage-v2-release/README.md
index fc48cd4c1..2075d2cbc 100644
--- a/site/blog/2024-06-26-datapackage-v2-release/README.md
+++ b/site/blog/2024-06-26-datapackage-v2-release/README.md
@@ -7,9 +7,9 @@ image: /img/blog/DP-release.png
 description: We are very excited to announce the release of Data Package v2
 author: Sara Petti
 ---
-We are very excited to announce the release of the version 2.0 of the Frictionless Standard. Thanks to the generous support of [NLnet](https://nlnet.nl/) from November last year we were able to [focus on reviewing the Frictionless Standard](https://frictionlessdata.io/blog/2023/11/15/frictionless-specs-update/#additional-deliverables) in order to include features that were often requested throughout the years and improve extensibility for domain-specific implementations.
+We are very excited to announce the release of version 2.0 of Data Package (previously known as Frictionless Specs). Thanks to the generous support of [NLnet](https://nlnet.nl/) from November last year, we were able to [focus on reviewing Data Package](https://frictionlessdata.io/blog/2023/11/15/frictionless-specs-update/#additional-deliverables) in order to include features that had often been requested over the years and to improve extensibility for domain-specific implementations.
 
-The Frictionless Standard for data containerisation is Data Package, which consists of a set of simple yet extensible specifications to describe datasets, data files and tabular data. It is a data definition language (DDL) and data API that enhances data FAIRness (findability, accessibility, interoperability, and reusability). Since the last releases of Data Package and all its components, the community had requested a few features throught the years that would improve the standard support for specific data types and simplify the extensions. Some of the Internet requirements also have evolved meanwhile. We therefore used [the issues that had accumulated in the GitHub repository](https://github.com/frictionlessdata/datapackage/issues) to build our Roadmap.
+Data Package is a standard for data containerisation, which consists of a set of simple yet extensible specifications to describe datasets, data files and tabular data. It is a data definition language (DDL) and data API that enhances data FAIRness (findability, accessibility, interoperability, and reusability). Since the last releases of Data Package and its components, the community had requested a number of features throughout the years that would improve the standard's support for specific data types and simplify extensions. Some Internet requirements have also evolved in the meantime. We therefore used [the issues that had accumulated in the GitHub repository](https://github.com/frictionlessdata/datapackage/issues) to build our Roadmap.
 
 In parallel we assembled an outstanding Data Package Working Group composed of experts from the community. We carefully selected a diverse group of people who could bring to the discussion table different use-cases, formats, and data types that we would need the Standard to support, and crafted together with them [a governance model](https://datapackage.org/overview/governance/) that is explicit, in order to create an environment that adequately supports new contributions and ensures project sustainability.
 
@@ -23,13 +23,16 @@ During these months we have been working on the core specifications that compose
 
 During the update process we tried to be as little disruptive as possible, avoiding breaking changes when possible.
 
-We put a lot of effort into removing ambiguity, cutting or clarifying under-defined features, and promoting some well-oiled [recipes](https://datapackage.org/recipes/caching-of-resources/) into the Standard itself. Among other things, we added versioning, to ensure consistency and reliability over time, and added support for categorical data.
+We put a lot of effort into removing ambiguity, cutting or clarifying under-defined features, and promoting some well-oiled [recipes](https://datapackage.org/recipes/caching-of-resources/) into the Standard itself. An example of a recipe (or pattern, as they were called in v1) that has been promoted to the Standard is the [Missing values per field
+one](https://specs.frictionlessdata.io/patterns/#missing-values-per-field).
+
+Among other things, we added versioning, to ensure consistency and reliability over time, and support for categorical data.
 
 If you are curious and would like to know more details about what changes with version 2, go and have a look at the [Changelog](https://datapackage.org/overview/changelog/) we published.
 
-To increase and facilitate adoption, we published a [metadata mapper written in Python](https://github.com/frictionlessdata/dplib-py). We have also worked on Data Package integrations for the most notable open data portals out there. We therefore proposed a Data Package serializer to [Invenio RDM](https://inveniordm.web.cern.ch/), created a pull request that exposes `datapackage.json` as a metadata export target in the [Open Science Framework](https://www.cos.io/) system, and built an extension that adds a `datapackage.json` endpoint to every dataset in [CKAN](https://github.com/frictionlessdata/ckanext-datapackage).
+To increase and facilitate adoption, we published a [metadata mapper written in Python](https://github.com/frictionlessdata/dplib-py). We have also worked on Data Package integrations for the most notable open data portals out there. Many people in the community use Zenodo, so we definitely wanted to target it. As Zenodo is moving its system to [Invenio RDM](https://inveniordm.web.cern.ch/), we proposed a Data Package serializer to that project. We also created a pull request that exposes `datapackage.json` as a metadata export target in the [Open Science Framework](https://www.cos.io/) system, and built an extension that adds a `datapackage.json` endpoint to every dataset in [CKAN](https://github.com/frictionlessdata/ckanext-datapackage).
 
-If you want to know more about how to coordinate a Standard update, we shared our main takeaways at FOSDEM 2024. The presentation was recorded, and you can watch it [here](https://fosdem.org/2024/schedule/event/fosdem-2024-3109-updating-open-data-standards/).
+If you want to know more about how to coordinate a standard update, we shared our main takeaways at FOSDEM 2024. The presentation was recorded, and you can watch it [here](https://fosdem.org/2024/schedule/event/fosdem-2024-3109-updating-open-data-standards/).
 
 ## And what happens now?
 
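For reviewers who want a concrete picture of the features the second hunk mentions (field-level missing values, categorical data, and versioning), here is a minimal, illustrative sketch of a `datapackage.json` descriptor. The dataset, resource, and field names are invented, and the `$schema` profile URL and the field-level `missingValues` and `categories` properties are written from the v2 changelog as summarised above; the published specs at https://datapackage.org remain the authoritative reference.

```json
{
  "$schema": "https://datapackage.org/profiles/2.0/datapackage.json",
  "name": "example-dataset",
  "resources": [
    {
      "name": "survey",
      "path": "survey.csv",
      "schema": {
        "fields": [
          {
            "name": "age",
            "type": "integer",
            "missingValues": ["-99", ""]
          },
          {
            "name": "smoker",
            "type": "string",
            "categories": ["yes", "no", "unknown"]
          }
        ]
      }
    }
  ]
}
```

Declaring `missingValues` on a single field lets one column use sentinel codes such as `-99` without imposing them on every column of the resource, which is what the promoted "Missing values per field" recipe was designed for, while `$schema` records which version of the standard the descriptor targets.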