Releases: koordinates/kart
v0.10.6

New features
- New `kart data rm` command to delete datasets and commit the result. #490
- Minimal patches: `kart create-patch` now supports `--patch-type minimal`, which creates a much smaller patch by relying on the patch recipient having the HEAD commit in their repository. #482
- `kart apply` now applies both types of patch.
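A sketch of the minimal-patch workflow follows; the commit reference, redirection, and file name are illustrative, not prescribed by Kart:

```shell
# Sender: create a patch that relies on the recipient already having
# the HEAD commit, rather than embedding full context (much smaller).
kart create-patch --patch-type minimal HEAD > my-change.kartpatch

# Recipient: applying a minimal patch only works if the sender's HEAD
# commit is already present in this repository.
kart apply my-change.kartpatch
```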
Behavioural changes
- Added a specification for allowed characters & path components in dataset names - see Valid Dataset Names.
- Information about the current spatial filter is now shown in `status`. #456
- `kart log` now accepts a `--` marker to signal that all remaining arguments are dataset names. #498
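As with git, the `--` separator removes ambiguity between options or revisions and dataset names. A hypothetical example (the dataset name is made up):

```shell
# Everything after `--` is treated as a dataset name, even if it
# could otherwise be mistaken for an option or a revision.
kart log -- my-dataset
```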
Bugs fixed
- Made Kart more robust to manual edits to the GPKG working copy that don't leave the metadata exactly as Kart would leave it (such as leaving unneeded table rows in `gpkg_contents`). #491
- Diffing between an old commit and the current working copy no longer fails when datasets have been deleted in the intervening commits.
- Existing auto-incrementing integer PK sequences are now overwritten properly in GPKG working copies. #468
- `import` from a Postgres or MSSQL source will no longer prepend the database schema name to the imported dataset path.
v0.10.5
v0.10.4

Major changes
- Added basic support for spatial filters. The spatial filter can be updated during an `init`, `clone` or `checkout` by supplying the option `--spatial-filter=CRS;GEOMETRY`, where CRS is a string such as `EPSG:4326` and GEOMETRY is a polygon or multipolygon specified using WKT or hex-encoded WKB. When a spatial filter is set, the working copy will only contain features that intersect the spatial filter, and changes that happened outside the working copy are not shown to the user unless specifically required. #456
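For example, cloning with a spatial filter might look like this; the repository URL and the polygon are illustrative only:

```shell
# Only features intersecting this WKT polygon (coordinates in
# EPSG:4326) are checked out into the working copy.
kart clone https://example.com/some-repo.git \
  --spatial-filter="EPSG:4326;POLYGON((174 -41,175 -41,175 -42,174 -42,174 -41))"
```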
Other changes
- Auto-incrementing integer PKs: when the working copy is written, Kart now sets up a sequence which supplies the next unassigned PK value and sets it as the default value for the PK column. This helps the user find the next unassigned PK, which can be non-obvious, particularly when a spatial filter has been applied and not all features are present in the working copy. #468
- Bugfix: set GDAL and PROJ environment variables on startup, which fixes an issue where Kart may or may not work properly depending on whether GDAL and PROJ are appropriately configured in the user's environment.
- Bugfix: `kart restore` now simply discards all working copy changes, as intended - previously it would complain if there were "structural" schema differences between the working copy and HEAD.
- Bugfix: the MySQL working copy now works without a timezone database - previously it required that at least `UTC` was defined in such a database.
- Feature-count estimates are now more accurate and generally also faster. #467
- `kart log` now supports output in JSON-lines format, so that large logs can be streamed before being entirely generated.
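JSON-lines output emits one JSON object per line, so a consumer can process each record as it arrives instead of waiting for the whole document. A minimal sketch of consuming such a stream; the sample records here are made up for illustration and do not reflect the actual `kart log` output schema:

```python
import json

# Illustrative JSON-lines stream - two records, one JSON object per line.
stream = [
    '{"commit": "abc123", "message": "Edit roads"}',
    '{"commit": "def456", "message": "Import hydro"}',
]

def iter_records(lines):
    """Yield one decoded object per non-empty line, as it arrives."""
    for line in lines:
        line = line.strip()
        if line:
            yield json.loads(line)

for record in iter_records(stream):
    print(record["commit"], record["message"])
```

Because each line is a complete JSON document, this works equally well reading line-by-line from a pipe.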
v0.10.2

- Added support for the geometry `POINT EMPTY` in the SQL Server working copy.
- Bugfix: fixed an error when writing diff output to a file. #453
- Bugfix: when checking out a dataset that has an integer primary key as a GPKG working copy, Kart now continues to use the actual primary key instead of overriding it, even if the primary key column isn't the first column. #455
Note on 0.10.0
Kart v0.10.0 introduces a new repository structure, dubbed 'Datasets V3', which is now the default. Datasets V2 continues to be supported, but all newly created repos are V3 going forward. See CHANGELOG.md or the previous release notes for more details about 0.10.0.
v0.10.1

Fix for kart upgrade
Fixed `kart upgrade` so that it preserves more complicated (or yet-to-be-released) features of V2 repos as they are upgraded to V3. #448
Specifically:
- `generated-pks.json` metadata - extra metadata found in datasets that have an automatically generated primary key and which are maintained by repeatedly importing from a primary-key-less datasource
- attachments (which are not yet fully supported by Kart) - arbitrary files kept alongside datasets, such as license or readme files

Other changes
- `kart show` now supports all the same options as `kart diff`.
- Both `kart diff` and `kart show` now support output in JSON-lines format, so that large diffs can be processed as the diff is generated.
- Bugfix: diffs containing a mixture of primary key types can now be shown (necessary in the case where the primary key type has changed).
- Some performance improvements - less startup overhead.
v0.10.0

Kart v0.10.0 introduces a new repository structure, dubbed 'Datasets V3', which is now the default. Datasets V2 continues to be supported, but all newly created repos are V3 going forward.

Datasets V3
- Entire repositories can be upgraded from V2 to V3 using `kart upgrade EXISTING_REPO NEW_REPO`.
- Anything which works in a V2 repo should work in a V3 repo and vice versa.
- V3 repos are more performant for large datasets - in V2 repos, size-on-disk climbs quickly once dataset size exceeds 16 million features.
Other major changes in this release
- The working copy can now be a MySQL database (previously only GPKG, PostGIS and SQL Server working copies were supported). The commands `init`, `clone` and `create-workingcopy` now all accept working copy paths in the form `mysql://HOST/DBNAME`. #399
  - Read the documentation at docs/MYSQL_WC.md
- Import of tables using `kart import` is now supported from any type of database that Kart also supports writing to as a working copy - namely GPKG, PostGIS, SQL Server and MySQL.
- Support for rapidly calculating or estimating feature counts - see below.
Other minor changes
- Change to `kart data ls` JSON output: it now includes whether the repo is Kart or Sno branded.
- Importing from a datasource now samples the first geometry to check the number of dimensions - in case the datasource actually has 3 or 4 dimensions but this fact is not stored in the column metadata (which is not necessarily required by all source types). #337
- Bugfix: creating a working copy while switching branch now creates a working copy with the post-switch branch checked out, not the pre-switch branch.
- Bugfix: GPKG spatial indexes are now created and deleted properly regardless of the case (upper-case or lower-case) of the table name and geometry column.
- A few bugfixes involving accurately roundtripping boolean and blob types through different working copy types.
- Bugfix: 3D and 4D geometries are now properly roundtripped through the SQL Server working copy.
- Fixed help text for discarding changes to refer to `kart restore` instead of `kart reset`, as `kart restore` is now the simplest way to discard changes. #426
- `import`: PostGIS internal views/tables are no longer listed by `--list` or imported by `--all-tables`, and can't be imported by name either. #439
- `upgrade` no longer adds a `main` or `master` branch to upgraded repos.
Calculating feature counts for diffs
Kart now includes ways to calculate or estimate feature counts for diffs. This encompasses the following changes:
- `diff` now accepts `--only-feature-count=<ACCURACY>`, which produces a feature count for the diff.
- `log` now accepts `--with-feature-count=<ACCURACY>`, which adds a feature count to each commit when used with `-o json`.
- All calculated feature counts are stored in a SQLite database in the repo's `.kart` directory.
- Feature counts for commit diffs can be populated in bulk with the new `build-annotations` command.
v0.9.0 (First "Kart" release)

Major changes in this release
- First and foremost, the name - we're now called Kart!

Other changes
- Various local config and structure which was named after `sno` is now named after `kart` - for instance, a Kart repo's objects are now hidden inside a `.kart` folder. Sno repos with the older names will continue to be supported going forward. To modify a repo in place to use the `kart`-based names instead of the `sno` ones, use `kart upgrade-to-kart PATH`.
- `import` & `init` are often much faster now because they do imports in parallel subprocesses. Use `--num-processes` to control this behaviour. #408
- `status -o json` now shows which branch you are on, even if that branch doesn't have any commits yet.
v0.8.0

Breaking changes in this release
- Internally, Sno now stores XML metadata in an XML file, instead of nested inside a JSON file. This is part of a longer-term plan to make it easier to attach metadata or other files to a repository in a straightforward way, without having to understand JSON internals. Unfortunately, diffing commits where the XML metadata has been written by Sno 0.8.0 won't work in Sno 0.7.1 or earlier - it will fail with `binascii.Error`.
- Backwards compatibility with Datasets V1 ends at Sno 0.8.0 - all Sno commands except `sno upgrade` will no longer work in a V1 repository. Since Datasets V2 has been the default since Sno 0.5.0, most users will be unaffected. Remaining V1 repositories can be upgraded to V2 using `sno upgrade EXISTING_REPO NEW_REPO`, and the ability to upgrade from V1 to V2 continues to be supported indefinitely. #342
- `sno init` now sets the head branch to `main` by default, instead of `master`. To override this, add `--initial-branch=master`.
- `reset` now behaves more like `git reset` - specifically, `sno reset COMMIT` stays on the same branch but sets the branch tip to be `COMMIT`. #60
- `import` now accepts a `--replace-ids` argument for much faster importing of small changesets from large sources. #378
Other major changes
- The working copy can now be a SQL Server database (previously only GPKG and PostGIS working copies were supported). The commands `init`, `clone` and `create-workingcopy` now all accept working copy paths in the form `mssql://[HOST]/DBNAME/SCHEMA`. #362
  - Currently requires that the ODBC driver for SQL Server is installed.
  - Read the documentation at `docs/SQL_SERVER_WC.md`.
- Support for detecting features which have changed slightly during a re-import from a data source without a primary key, and reimporting them with the same primary key as last time, so they show as edits as opposed to inserts. #212
Minor changes
- Optimised GPKG working copies for better performance with large datasets.
- Bugfix: fixed issues roundtripping certain type metadata in the PostGIS working copy - specifically geometry types with 3 or more dimensions (Z/M values) and numeric types with scale.
- Bugfix: if a database schema already exists, Sno no longer tries to create it, and it no longer matters if Sno lacks permission to do so. #391
- Internal dependency change: Sno no longer depends on apsw; instead it depends on SQLAlchemy.
- `init` now accepts a `--initial-branch` option.
- `clone` now accepts a `--filter` option (advanced users only).
- `show -o json` now includes the commit hash in the output.
- `import` from Postgres now uses a server-side cursor, which means Sno uses less memory.
- Improved log formatting at higher verbosity levels - `sno -vvv` will log SQL queries to the console for debugging.
v0.7.1

JSON syntax-highlighting fix
- Any command which outputs JSON would fail in 0.7.0 when run in a terminal unless a JSON style other than `--pretty` was explicitly specified, due to a change in the pygments library which Sno's JSON syntax-highlighting code failed to accommodate. This is fixed in the 0.7.1 release. #335

0.7.0

Major changes in this release
- Support for importing data without a primary key. Since the Sno model requires that every feature has a primary key, primary keys are assigned during import. #212
- Support for checking out a dataset with a string primary key (or other non-integer primary key) as a GPKG working copy. #307

Minor features / fixes:
- Improved error recovery: Sno commands now write to the working copy within a single transaction, which is rolled back if the command fails. #281
- Dependency upgrades (GDAL, Git, Pygit2, Libgit2, Spatialite, GEOS). #327
- Bugfixes:
  - `sno meta set` didn't allow updates to `schema.json`
  - Fixed a potential `KeyError` in `Schema._try_align`
  - Fixed a potential unexpected `NoneType` in `WorkingCopy.is_dirty`
- Imports now preserve fixed-precision numeric types in most situations.
- Imports now preserve the length of text/string fields.
- Imported fields of type `numeric` are now stored internally as strings, as required by the Datasets V2 spec. #325
v0.7.0