V3 casl 530 #355
Conversation
…ng deprecated structs.
…way do it in the client).
@@ -0,0 +1,568 @@
-- noinspection SqlResolveForFile @ routine/"ST_GeomFromTWKB"
What's the purpose of this file? Do we plan to use it? If so, maybe it's worth adding a ticket and referencing it here?
It's a backup for now, in case we need something from it. We can delete it when we make the first release of 3.0.0 that is not alpha or beta.
IMMUTABLE
PARALLEL SAFE
SET search_path FROM CURRENT
CREATE OR REPLACE FUNCTION naksha_tags(tags bytea, flags int4) RETURNS jsonb
I wonder if there's an option to atomically test functions such as this one?
By atomically I mean without setting up any additional resources (like creating tables, writing collections and using other parts of our "regular" flow).
I was thinking of executing raw SQL on test DB and asserting the stdout or something.
WDYT?
It should be possible to write tests for it; it's a matter of time and effort. I skipped this for now due to lack of time, as the current priority is getting the storage-api integration ready.
return naksha_geometry_in_type(naksha_geo_type(flags),geo_bytes);
END;
$$;
How much of this "green" section was actually changed and how much is just the effect of weird GitHub diff colouring? ;)
The fix-me file is basically just a backup of everything that seems to have no use, so GitHub is right that all of this is new, but we should not care about this file. We will delete it; I just kept the code around in case I missed something while reducing the used naksha-sql.
// The store_number contains the partition-number in the lower 8-bit.
PgColumn.store_number -> {
    require(partitionCount in 2..256) { "Invalid partition-count, expect 2 .. 256, found : $partitionCount" }
    "PARTITION BY RANGE (((${PgColumn.store_number} & 255) % $partitionCount))"
It's a general partition-related comment, not specifically about this snippet:
I've seen that apart from tags, you have also changed some stuff related to partitioning, like:
- validating partition count
- enhancing the check for id-based partitioning
What was the reasoning and is it worth testing?
Actually, I just saw that we calculate an MD5 hash for every insert or update in both the database and the client, and I wanted to get rid of that. The first idea was to partition by tuple_number, because we use it when loading feature data from the database, and if Postgres could directly know from which partition to load the tuple, that would be great. Sadly, PostgreSQL does not allow generated columns in partition keys, but the same number that is part of tuple_number is also part of store_number, so I used that instead.
Eventually, this prevents the database from having to calculate an MD5 hash for every row modification (the client already does this anyway, so why do it twice?).
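The arithmetic behind this can be sketched in Java (hedged: the field layout here is an assumption taken from the comment above, namely that the lower 8 bits of store_number carry the partition number; the real constants live in the Naksha sources). It mirrors the SQL partition key `((store_number & 255) % partitionCount)`, so no hash needs to be computed in the database:

```java
// Sketch of the partition-key arithmetic described above.
// Assumed layout: the lower 8 bits of store_number hold the partition number.
public final class PartitionKey {

    // Mirrors the SQL expression: PARTITION BY RANGE (((store_number & 255) % partitionCount))
    static int partitionIndex(long storeNumber, int partitionCount) {
        if (partitionCount < 2 || partitionCount > 256) {
            throw new IllegalArgumentException(
                "Invalid partition-count, expect 2 .. 256, found: " + partitionCount);
        }
        return (int) ((storeNumber & 255L) % partitionCount);
    }

    public static void main(String[] args) {
        // A store_number whose low byte is 0x2A (42): with 8 partitions it lands in 42 % 8 = 2.
        long storeNumber = 0xABCD002AL;
        System.out.println(partitionIndex(storeNumber, 8));   // 2
        System.out.println(partitionIndex(storeNumber, 256)); // 42
    }
}
```

Because the partition number is already embedded in the key the client computes, the same cheap mask-and-modulo works identically on both sides of the wire.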
Should have made a separate PR for this.
*/
const val JBON = 2 shl TAGS_SHIFT
const val JSON = 2 shl TAGS_SHIFT
I hope that we always used references and not hard-coded values for these encodings - otherwise, we might end up surprised (JBON and JSON swapped) 😄
The SQL code uses hard-coded values (because otherwise we would need to pay for a plv8 call just to read a constant), which is exactly what led to the problem that tag decoding differed from feature decoding. As tag decoding was broken anyway, I just ensured that features and tags are in sync and that the SQL code is simply the same.
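For illustration, here is a Java sketch of the bit-field pattern that constants like `2 shl TAGS_SHIFT` follow (hedged: the TAGS_SHIFT position, the field width, and the encoding codes below are assumptions for the example, not the real Naksha values). The point is that each encoding is a small integer shifted into its own field of the flags int, and any decoder — client or SQL — must agree on the exact raw numbers, which is why hard-coded values in the SQL side can silently drift:

```java
// Sketch of bit-field flag encoding. Assumed values: TAGS_SHIFT, the field
// width and the encoding codes are illustrative, not the real Naksha constants.
public final class TagFlags {
    static final int TAGS_SHIFT = 4;               // assumed position of the tags-encoding field
    static final int TAGS_MASK = 0xF << TAGS_SHIFT; // assumed 4-bit field width

    static final int TAGS_JBON = 2 << TAGS_SHIFT;  // same pattern as `2 shl TAGS_SHIFT` in Kotlin
    static final int TAGS_JSON = 3 << TAGS_SHIFT;  // a distinct code, so decoders can tell them apart

    // Extract the tags-encoding code from a packed flags value.
    static int tagsEncoding(int flags) {
        return (flags & TAGS_MASK) >>> TAGS_SHIFT;
    }

    public static void main(String[] args) {
        int flags = TAGS_JSON | 0x1;               // other bit-fields can coexist in the same int
        System.out.println(tagsEncoding(flags));   // 3
    }
}
```

If the SQL side hard-codes the raw numbers instead of reading such constants, a swap like the JBON/JSON one the reviewer joked about goes unnoticed until decoding breaks.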