129837: sql, row: support INSERT & UPDATE of implicit regional by row partitioning under READ COMMITTED isolation r=mw5h a=mw5h

Under non-serializable isolation levels, we need to lock the keys of unique
indexes with implicit partitioning in all partitions, so that a racing
write cannot insert a conflicting row into a different partition. Since
there is no support for predicate locks at the KV layer, we approximate them
by writing tombstone values to the partitions not being inserted into. This
causes some write amplification, but it keeps conflicting rows from being
written.

In this patch, we create the infrastructure for writing these tombstones
by modifying the RowHelper to generate a list of keys to lock.

For unique indexes with a single ENUM implicit partition column (e.g.
regional by row), write a tombstone to each partition to ensure
uniqueness when isolation level is not serializable.

This patch also reverts the earlier insert fast-path work that wrote
tombstones, since those tombstones are now written by the row inserter.
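
As a rough illustration of the tombstone idea, here is a minimal,
self-contained Go sketch. The helper name, key layout, and partition handling
are invented for illustration only and are not the actual row inserter code:
a write of a unique value into one partition also produces a CPut-with-nil
"tombstone" key for every other partition, so a racing write of the same
value elsewhere must conflict.

```go
package main

import "fmt"

// tombstoneKeys is a hypothetical sketch: for a unique index whose only
// implicit partition column is an ENUM, a write into the target partition
// must also lock the corresponding key in every other partition.
func tombstoneKeys(indexPrefix string, partitions []string, target string, uniqueVal int) (putKey string, tombstones []string) {
	for _, p := range partitions {
		key := fmt.Sprintf("%s/%s/%d", indexPrefix, p, uniqueVal)
		if p == target {
			// The real index entry, written with a CPut expecting no existing value.
			putKey = key
			continue
		}
		// Every other partition gets a CPut(key, nil) "tombstone" so that a
		// racing transaction writing the same unique value in another
		// partition hits a conflict instead of silently succeeding.
		tombstones = append(tombstones, key)
	}
	return putKey, tombstones
}

func main() {
	parts := []string{"one", "two", "three", "four", "five"}
	put, locks := tombstoneKeys("/Table/110/2", parts, "two", 4)
	fmt.Println("index entry:", put)
	fmt.Println("tombstones: ", locks)
}
```

The KV trace in the logic test below shows the real behavior: one CPut with a
value in the target partition, plus tombstone CPuts in the remaining
partitions.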

132570: kvserver: unskip v1 flow control integration tests under duress r=sumeerbhola a=kvoli

Only the last commit is relevant.

---

These tests were skipped to unblock merging #132125 and were later
(presumably) fixed by #132563.

Un-skip all `TestFlowControl.*` v1 integration tests under duress.

Resolves: #132310
Release note: None

132571: kvserver: unskip v2 flow control integration tests under duress  r=sumeerbhola a=kvoli

Similar to the prior commit, un-skip the v2 flow control integration
tests under duress.

These were disabled due to flakiness.

Resolves: #132272
Release note: None

132590: testutils,kvserver: disable store rebalancer when replication manual r=andrewbaptist,sumeerbhola a=kvoli

`ReplicationManual` can be specified in tests that use a test cluster to prevent lease/replica movements.

This proves vital if the test asserts exactly on the placement of a range or its leaseholder.

Also disable the store rebalancer, which previously could still move both leases and replicas even under `ReplicationManual` (see the sketch below).
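
A minimal sketch of how a test opts into manual replication; with this
change, the store rebalancer should also stay quiet for clusters started
this way. The test name and body are hypothetical; `StartTestCluster`,
`TestClusterArgs`, and `ReplicationManual` are the existing test harness APIs.

```go
package kvserver_test

import (
	"context"
	"testing"

	"github.com/cockroachdb/cockroach/pkg/base"
	"github.com/cockroachdb/cockroach/pkg/testutils/testcluster"
)

// TestManualReplicationPlacement sketches a test that places replicas and
// leases explicitly and asserts on their exact placement.
func TestManualReplicationPlacement(t *testing.T) {
	ctx := context.Background()
	tc := testcluster.StartTestCluster(t, 3, base.TestClusterArgs{
		// ReplicationManual disables the replicate/lease queues; with this
		// change the store rebalancer is disabled for this mode as well, so
		// no background process moves ranges behind the test's back.
		ReplicationMode: base.ReplicationManual,
	})
	defer tc.Stopper().Stop(ctx)

	// The test can now call tc.AddVotersOrFatal / tc.TransferRangeLeaseOrFatal
	// and assert on exact placement without interference.
}
```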

Informs: #132310
Informs: #132272
Release note: None

132591: roachtest: elide mode=default in ldr tests r=dt a=dt

The only valid modes are `immediate` and `validated`.

Release note: none.
Epic: none.
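
The fix (shown in the `logical_data_replication.go` diff further below) boils
down to only emitting a `WITH mode=...` clause when a non-default mode is
requested. A small standalone Go sketch of that elision pattern, with
illustrative type and constant names rather than the roachtest's own:

```go
package main

import "fmt"

// ldrMode is an illustrative stand-in for the roachtest's mode enum.
type ldrMode int

const (
	defaultMode ldrMode = iota
	immediate
	validated
)

func (m ldrMode) String() string {
	switch m {
	case immediate:
		return "immediate"
	case validated:
		return "validated"
	default:
		return ""
	}
}

// options elides the WITH clause entirely for the default mode, since
// "default" is not a mode the statement accepts.
func options(m ldrMode) string {
	if m == defaultMode {
		return ""
	}
	return fmt.Sprintf("WITH mode='%s'", m)
}

func main() {
	fmt.Printf("%q\n", options(defaultMode)) // ""
	fmt.Printf("%q\n", options(validated))   // "WITH mode='validated'"
}
```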

132592: cluster-ui: bump version to 24.3.0-prerelease.1 r=xinhaoz a=xinhaoz

Bump the cluster-ui pkg version.

Epic: none

Release note: None

132609: release: released CockroachDB version 24.3.0-alpha.2. Next version: 24.3.0-alpha.3 r=celiala a=cockroach-teamcity


Release note: None
Epic: None
Release justification: non-production (release infra) change.


Co-authored-by: Matt White <[email protected]>
Co-authored-by: Austen McClernon <[email protected]>
Co-authored-by: David Taylor <[email protected]>
Co-authored-by: Xin Hao Zhang <[email protected]>
Co-authored-by: Justin Beaver <[email protected]>
6 people committed Oct 14, 2024
8 parents ce02890 + 9b07464 + 9925b69 + efffda9 + 55a4719 + 30ba549 + 32f5121 + 3c1433e commit 71c6071
Showing 35 changed files with 647 additions and 217 deletions.
2 changes: 1 addition & 1 deletion pkg/build/version.txt
@@ -1 +1 @@
v24.3.0-alpha.2
v24.3.0-alpha.3
4 changes: 2 additions & 2 deletions pkg/ccl/crosscluster/logical/lww_kv_processor.go
@@ -332,13 +332,13 @@ func newKVTableWriter(

// TODO(dt): pass these some sort of flag to have them use versions of CPut
// or a new LWW KV API. For now they're not detecting/handling conflicts.
ri, err := row.MakeInserter(ctx, nil, evalCtx.Codec, tableDesc, writeCols, a, &evalCtx.Settings.SV, internal, nil)
ri, err := row.MakeInserter(ctx, nil, evalCtx.Codec, tableDesc, nil /* uniqueWithTombstoneIndexes */, writeCols, a, &evalCtx.Settings.SV, internal, nil)
if err != nil {
return nil, err
}
rd := row.MakeDeleter(evalCtx.Codec, tableDesc, readCols, &evalCtx.Settings.SV, internal, nil)
ru, err := row.MakeUpdater(
ctx, nil, evalCtx.Codec, tableDesc, readCols, writeCols, row.UpdaterDefault, a, &evalCtx.Settings.SV, internal, nil,
ctx, nil, evalCtx.Codec, tableDesc, nil /* uniqueWithTombstoneIndexes */, readCols, writeCols, row.UpdaterDefault, a, &evalCtx.Settings.SV, internal, nil,
)
if err != nil {
return nil, err
@@ -0,0 +1,213 @@
# LogicTest: local

statement ok
CREATE TYPE part_type AS ENUM ('one', 'two', 'three', 'four', 'five');

statement ok
SET experimental_enable_implicit_column_partitioning = true

statement ok
SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL READ COMMITTED

statement ok
CREATE TABLE t_double (
pk INT PRIMARY KEY,
a part_type,
b part_type,
c INT,
UNIQUE INDEX (c)
) PARTITION ALL BY LIST (a, b) (
PARTITION one VALUES IN (('one', 'one')),
PARTITION two VALUES IN (('two', 'two'))
)

# Test that we don't allow writes to tables with multiple partition columns.
statement error pgcode 0A000 pq: unimplemented: explicit unique checks are not yet supported under read committed isolation
INSERT INTO t_double VALUES (1, 'one', 'one', 10), (2, 'two', 'two', 20)

statement ok
CREATE TABLE t_int (
pk INT PRIMARY KEY,
a INT NOT NULL,
c INT,
UNIQUE INDEX (c)
) PARTITION ALL BY LIST (a) (
PARTITION one VALUES IN (1),
PARTITION two VALUES IN (2)
)

# Test that we don't allow writes to tables with non-enum partition columns.
statement error pgcode 0A000 pq: unimplemented: explicit unique checks are not yet supported under read committed isolation
INSERT INTO t_int VALUES (1, 1, 10), (2, 2, 20)

statement ok
CREATE TABLE t (
pk INT PRIMARY KEY,
a part_type,
b INT,
c INT,
d INT,
j JSON,
UNIQUE INDEX (c),
FAMILY (pk, a, b, c, d, j)
) PARTITION ALL BY LIST(a) (
PARTITION one VALUES IN ('one'),
PARTITION two VALUES IN ('two'),
PARTITION three VALUES IN ('three'),
PARTITION four VALUES IN ('four'),
PARTITION five VALUES IN ('five')
)

query T
SELECT create_statement FROM [SHOW CREATE TABLE t]
----
CREATE TABLE public.t (
pk INT8 NOT NULL,
a public.part_type NOT NULL,
b INT8 NULL,
c INT8 NULL,
d INT8 NULL,
j JSONB NULL,
CONSTRAINT t_pkey PRIMARY KEY (pk ASC),
UNIQUE INDEX t_c_key (c ASC),
FAMILY fam_0_pk_a_b_c_d_j (pk, a, b, c, d, j)
) PARTITION ALL BY LIST (a) (
PARTITION one VALUES IN (('one')),
PARTITION two VALUES IN (('two')),
PARTITION three VALUES IN (('three')),
PARTITION four VALUES IN (('four')),
PARTITION five VALUES IN (('five'))
)
-- Warning: Partitioned table with no zone configurations.

query T
EXPLAIN (OPT, CATALOG) SELECT * FROM t
----
TABLE t
├── pk int not null
├── a part_type not null
├── b int
├── c int
├── d int
├── j jsonb
├── crdb_internal_mvcc_timestamp decimal [hidden] [system]
├── tableoid oid [hidden] [system]
├── crdb_internal_origin_id int4 [hidden] [system]
├── crdb_internal_origin_timestamp decimal [hidden] [system]
├── FAMILY fam_0_pk_a_b_c_d_j (pk, a, b, c, d, j)
├── CHECK (a IN (x'20':::@100106, x'40':::@100106, x'80':::@100106, x'a0':::@100106, x'c0':::@100106))
├── PRIMARY INDEX t_pkey
│ ├── a part_type not null (implicit)
│ ├── pk int not null
│ └── partitions
│ ├── one
│ │ └── partition by list prefixes
│ │ └── ('one')
│ ├── two
│ │ └── partition by list prefixes
│ │ └── ('two')
│ ├── three
│ │ └── partition by list prefixes
│ │ └── ('three')
│ ├── four
│ │ └── partition by list prefixes
│ │ └── ('four')
│ └── five
│ └── partition by list prefixes
│ └── ('five')
├── UNIQUE INDEX t_c_key
│ ├── a part_type not null (implicit)
│ ├── c int
│ ├── pk int not null (storing)
│ └── partitions
│ ├── one
│ │ └── partition by list prefixes
│ │ └── ('one')
│ ├── two
│ │ └── partition by list prefixes
│ │ └── ('two')
│ ├── three
│ │ └── partition by list prefixes
│ │ └── ('three')
│ ├── four
│ │ └── partition by list prefixes
│ │ └── ('four')
│ └── five
│ └── partition by list prefixes
│ └── ('five')
├── UNIQUE WITHOUT INDEX (pk)
└── UNIQUE WITHOUT INDEX (c)
scan t
└── check constraint expressions
└── a IN ('one', 'two', 'three', 'four', 'five')

statement ok
SET tracing = kv

statement ok
INSERT INTO t VALUES (1, 'two', 3, 4, 5)

statement error pgcode 23505 pq: duplicate key value violates unique constraint "t_pkey"
INSERT INTO t VALUES (1, 'one', 3, 6, 5)

statement error pgcode 23505 pq: duplicate key value violates unique constraint "t_c_key"
INSERT INTO t VALUES (2, 'three', 3, 4, 5)

statement ok
INSERT INTO t VALUES (2, 'four', 3, 6, 5)

statement error pgcode 23505 pq: duplicate key value violates unique constraint "t_pkey"
UPDATE t SET pk = 1 WHERE c = 6;

statement error pgcode 23505 pq: duplicate key value violates unique constraint "t_c_key"
UPDATE t SET c = 4 WHERE pk = 2

query T
SELECT message FROM [SHOW TRACE FOR SESSION] WHERE message LIKE 'CPut%'
----
CPut /Table/110/1/"@"/1/0 -> /TUPLE/3:3:Int/3/1:4:Int/4/1:5:Int/5
CPut /Table/110/1/" "/1/0 -> nil (tombstone)
CPut /Table/110/1/"\x80"/1/0 -> nil (tombstone)
CPut /Table/110/1/"\xa0"/1/0 -> nil (tombstone)
CPut /Table/110/1/"\xc0"/1/0 -> nil (tombstone)
CPut /Table/110/2/" "/4/0 -> nil (tombstone)
CPut /Table/110/2/"\x80"/4/0 -> nil (tombstone)
CPut /Table/110/2/"\xa0"/4/0 -> nil (tombstone)
CPut /Table/110/2/"\xc0"/4/0 -> nil (tombstone)
CPut /Table/110/1/" "/1/0 -> /TUPLE/3:3:Int/3/1:4:Int/6/1:5:Int/5
CPut /Table/110/1/"@"/1/0 -> nil (tombstone)
CPut /Table/110/1/"\x80"/1/0 -> nil (tombstone)
CPut /Table/110/1/"\xa0"/1/0 -> nil (tombstone)
CPut /Table/110/1/"\xc0"/1/0 -> nil (tombstone)
CPut /Table/110/2/"@"/6/0 -> nil (tombstone)
CPut /Table/110/2/"\x80"/6/0 -> nil (tombstone)
CPut /Table/110/2/"\xa0"/6/0 -> nil (tombstone)
CPut /Table/110/2/"\xc0"/6/0 -> nil (tombstone)
CPut /Table/110/1/"\x80"/2/0 -> /TUPLE/3:3:Int/3/1:4:Int/4/1:5:Int/5
CPut /Table/110/1/" "/2/0 -> nil (tombstone)
CPut /Table/110/1/"@"/2/0 -> nil (tombstone)
CPut /Table/110/1/"\xa0"/2/0 -> nil (tombstone)
CPut /Table/110/1/"\xc0"/2/0 -> nil (tombstone)
CPut /Table/110/2/" "/4/0 -> nil (tombstone)
CPut /Table/110/2/"@"/4/0 -> nil (tombstone)
CPut /Table/110/2/"\xa0"/4/0 -> nil (tombstone)
CPut /Table/110/2/"\xc0"/4/0 -> nil (tombstone)
CPut /Table/110/1/"\xa0"/2/0 -> /TUPLE/3:3:Int/3/1:4:Int/6/1:5:Int/5
CPut /Table/110/1/" "/2/0 -> nil (tombstone)
CPut /Table/110/1/"@"/2/0 -> nil (tombstone)
CPut /Table/110/1/"\x80"/2/0 -> nil (tombstone)
CPut /Table/110/1/"\xc0"/2/0 -> nil (tombstone)
CPut /Table/110/2/" "/6/0 -> nil (tombstone)
CPut /Table/110/2/"@"/6/0 -> nil (tombstone)
CPut /Table/110/2/"\x80"/6/0 -> nil (tombstone)
CPut /Table/110/2/"\xc0"/6/0 -> nil (tombstone)
CPut /Table/110/1/"\xa0"/1/0 -> /TUPLE/3:3:Int/3/1:4:Int/6/1:5:Int/5
CPut /Table/110/1/" "/1/0 -> nil (tombstone)
CPut /Table/110/1/"@"/1/0 -> nil (tombstone)
CPut /Table/110/1/"\x80"/1/0 -> nil (tombstone)
CPut /Table/110/1/"\xc0"/1/0 -> nil (tombstone)
CPut /Table/110/2/"\xa0"/4/0 -> /BYTES/0x8a (expecting does not exist)
CPut /Table/110/2/" "/4/0 -> nil (tombstone)
CPut /Table/110/2/"@"/4/0 -> nil (tombstone)
CPut /Table/110/2/"\x80"/4/0 -> nil (tombstone)
CPut /Table/110/2/"\xc0"/4/0 -> nil (tombstone)
@@ -78,7 +78,7 @@ CREATE TABLE river (
LOCALITY REGIONAL BY ROW AS region

statement ok
GRANT INSERT ON TABLE university TO testuser
GRANT INSERT, UPDATE, SELECT ON TABLE university TO testuser

# Test non-conflicting INSERT.

@@ -117,10 +117,15 @@ INSERT INTO university (name, mascot, postal_code)
VALUES ('Thompson Rivers', 'wolves', 'V2C 0C8'), ('Evergreen State', 'geoducks', '98505')
ON CONFLICT (mascot) DO NOTHING

# TODO(mw5h): Temporary until ON CONFLICT works
statement ok
INSERT INTO university (name, mascot, postal_code) VALUES ('Evergreen State', 'geoducks', '98505')

query TTT
SELECT name, mascot, postal_code FROM university ORDER BY name
----
Western Oregon wolves 97361
Evergreen State geoducks 98505
Western Oregon wolves 97361

statement error pgcode 0A000 explicit unique checks are not yet supported under read committed isolation
INSERT INTO volcano VALUES
@@ -170,10 +175,10 @@ UPSERT INTO river VALUES ('us-east-1', 'Fraser', 'Salish Sea')

# Test conflicting UPDATE.

statement error pgcode 0A000 explicit unique checks are not yet supported under read committed isolation
statement error pgcode 23505 pq: duplicate key value violates unique constraint "university_mascot_key"
UPDATE university SET mascot = 'wolves' WHERE name = 'Evergreen State'

statement error pgcode 0A000 explicit unique checks are not yet supported under read committed isolation
statement ok
UPDATE volcano SET origin = 'Fought over Loowit and was transformed by Saghalie.' WHERE name = 'Mount St. Helens'

statement error pgcode 23505 pq: duplicate key value violates unique constraint "city_pkey"\nDETAIL: Key \(name, state_or_province\)=\('Vancouver', 'BC'\) already exists\.
@@ -232,5 +237,41 @@ awaitstatement conflict
query TTT
SELECT name, mascot, postal_code FROM university ORDER BY name
----
CMU Scottie Dog 15213
Western Oregon wolves 97361
CMU Scottie Dog 15213
Evergreen State geoducks 98505
Western Oregon wolves 97361

statement ok
INSERT INTO university VALUES ('Central Michigan University', 'Chippewas', '97858');

statement ok
UPDATE university SET name = 'Carnegie Mellon University' WHERE name = 'CMU';

statement ok
BEGIN

statement ok
UPDATE university SET name = 'CMU' WHERE name = 'Carnegie Mellon University';

user testuser

statement async conflict error pgcode 23505 pq: duplicate key value violates unique constraint "university_pkey"
UPDATE university SET name = 'CMU' WHERE name = 'Central Michigan University'

user root

statement ok
COMMIT

awaitstatement conflict

statement error pgcode 23505 pq: duplicate key value violates unique constraint "university_mascot_key"
UPDATE university SET mascot = 'wolves' WHERE name = 'CMU'

query TTT
SELECT name, mascot, postal_code FROM university ORDER BY name
----
CMU Scottie Dog 15213
Central Michigan University Chippewas 97858
Evergreen State geoducks 98505
Western Oregon wolves 97361
2 changes: 1 addition & 1 deletion pkg/ccl/logictestccl/tests/local/BUILD.bazel
@@ -9,7 +9,7 @@ go_test(
"//pkg/ccl/logictestccl:testdata", # keep
],
exec_properties = {"test.Pool": "large"},
shard_count = 47,
shard_count = 48,
tags = ["cpu:1"],
deps = [
"//pkg/base",
7 changes: 7 additions & 0 deletions pkg/ccl/logictestccl/tests/local/generated_test.go

2 changes: 1 addition & 1 deletion pkg/cmd/roachtest/tests/logical_data_replication.go
@@ -736,7 +736,7 @@ func setupLDR(

startLDR := func(targetDB *sqlutils.SQLRunner, sourceURL string) int {
options := ""
if mode.String() != "" {
if mode != Default {
options = fmt.Sprintf("WITH mode='%s'", mode)
}
targetDB.Exec(t, fmt.Sprintf("USE %s", dbName))