A relational insert #818
Yes, that is correct, for now, RPCs are the only* way to do that. |
@ruslantalpa Eh, that's a bit of a bummer. Both solutions sound more complicated to me than they should be. There really needs to be a streamlined way of doing transactions, just as there is an easy and streamlined way of getting your data. But reading through other issues I see that there are already some ideas to implement transactions. Guess I will have to wait for them before adopting PostgREST. |
I have a proposal for this. Once #690 is solved, we could take advantage of the new `columns` query parameter. Having these tables:

CREATE TABLE items (
CREATE TABLE items (
id serial PRIMARY KEY,
name text NOT NULL
);
CREATE TABLE subitems (
id serial PRIMARY KEY,
name text NOT NULL,
item_id integer REFERENCES items(id)
);

POST /items?columns=name,subitems(name)
{
"name": "item 1",
"subitems": [
{"name": "subitem 1"},
{"name": "subitem 2"}
]
}

This would generate the following SQL (using data-modifying CTEs):

WITH
payload AS (
select '{"name": "item 1", "subitems": [{"name": "subitem 1"}, {"name": "subitem 2"}]}'::json as data),
ins1 AS (
insert into items(name)
select
name
from json_populate_record(null::items, (select data from payload)) _
returning id AS item_id)
insert into subitems(name, item_id)
select
name,
(select item_id from ins1) as item_id
from json_populate_recordset(null::subitems, (select data from payload)->'subitems') _;

Since we know the tables' relationship, we can infer that the `item_id` for the subitems comes from the parent insert. This would be a major feature and requires a fair amount of work, so I'll require sponsorship to implement it. |
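The CTE above splits the nested payload into one insert per table, with the foreign key filled in from the parent's RETURNING clause. The same splitting can be sketched client-side (a purely illustrative helper, not PostgREST code; `split_payload` and the `<parent_id>` placeholder are hypothetical):

```python
def split_payload(payload, child_key, fk_placeholder="<parent_id>"):
    """Separate the parent record from its nested child records.

    The foreign key value is unknown until the parent INSERT returns,
    so it is represented by a placeholder, mirroring the
    `(select item_id from ins1)` reference in the SQL above.
    """
    parent = {k: v for k, v in payload.items() if k != child_key}
    children = [dict(c, item_id=fk_placeholder) for c in payload.get(child_key, [])]
    return parent, children

parent, children = split_payload(
    {"name": "item 1", "subitems": [{"name": "subitem 1"}, {"name": "subitem 2"}]},
    child_key="subitems",
)
# parent   -> {"name": "item 1"}
# children -> [{"name": "subitem 1", "item_id": "<parent_id>"},
#              {"name": "subitem 2", "item_id": "<parent_id>"}]
```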
The query gets more complicated when there are multiple parent records to insert with their children. But here's a query that works: https://gist.github.com/steve-chavez/60473d1765b5175012f5cc15695ae0b1. @ruslantalpa Perhaps you may have some feedback about that. |
the direction is good, however: it might be doable, but it's not trivial, so maybe first try to split the problem into a few smaller ones and solve each stage (you started from the last stage). maybe leave this part as it is for now and focus on the following problem.
the above is "brainstorming" around this feature, trying to split the problem into stages |
Isn't that information in the query string? |
@ruslantalpa Yes, with the new `columns` parameter. For example, for inserting items + subitems + pieces, the querystring would be:

POST /items?columns=name,subitems(name,pieces(name))
[
{"name": "item 1", "subitems": [
{"name": "subitem 1", "pieces": [{"name": "piece 1"}, {"name": "piece 2"}, {"name": "piece 3"}]},
{"name": "subitem 2", "pieces": [{"name": "piece 4"}, {"name": "piece 5"}, {"name": "piece 6"}]},
{"name": "subitem 3", "pieces": [{"name": "piece 7"}, {"name": "piece 8"}]}]},
{"name": "item 2", "subitems": [
{"name": "subitem 4", "pieces": [{"name": "piece 9"}, {"name": "piece 10"}]},
{"name": "subitem 5", "pieces": [{"name": "piece 11"}]}]}
]

Besides that, looking at the schema cache is necessary to complete these parts:

json_to_recordset((select data from payload)) as (name text, subitems json)

Edit: added some comments in the gist. |
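The nested `columns` syntax in the querystring above could be produced from a nested spec. A small sketch (hypothetical helper; the syntax follows the proposal in this thread, not a released PostgREST feature):

```python
def build_columns(spec):
    """Render a nested column spec into a `columns` querystring value.

    Leaf columns map to None; embedded tables map to a nested dict,
    e.g. {"name": None, "subitems": {...}} -> "name,subitems(...)".
    """
    parts = []
    for col, sub in spec.items():
        parts.append(col if sub is None else f"{col}({build_columns(sub)})")
    return ",".join(parts)

qs = build_columns({"name": None, "subitems": {"name": None, "pieces": {"name": None}}})
# -> "name,subitems(name,pieces(name))"
```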
Ok, this kind of takes care of 1,2,4 |
That's what I do with |
For an example of what...
...from this comment looks like, check out the repo I made with a demo of how to build an RPC: https://gitlab.com/tomsaleeba/postgrest-rpc-complex-insert-demo Be warned, there was a lot of learning PL/pgSQL as I went to get that to work, so it's probably not pretty. |
Hello, |
@tomsaleeba I'll be using your example solution until this feature is implemented, but I can't help but think that there is a faster way of inserting than using a FOREACH loop. Actually I spent some time and rewrote the body of your function to avoid the loop:

CREATE OR REPLACE FUNCTION chicken_aio_rt2(coop coop, chickens chicken[])
RETURNS chicken_aio_type AS $$
DECLARE
new_coop_id int;
new_chicken_ids int[];
BEGIN
RAISE WARNING 'coop %, chickens %', coop, chickens;
INSERT INTO coop (colour, built) VALUES(
coop.colour,
coop.built
) RETURNING coop_id INTO new_coop_id;
WITH created_chicken_ids AS (
INSERT INTO chicken (name, is_laying, coop)
SELECT name, is_laying, new_coop_id FROM (
SELECT * FROM unnest(chickens)
) AS subquery
RETURNING chicken_id
)
SELECT array_agg(chicken_id) INTO new_chicken_ids
FROM created_chicken_ids;
RETURN (new_coop_id, new_chicken_ids);
END
$$ LANGUAGE plpgsql;

I have not benchmarked or measured the difference between the two implementations yet, however this seems a little bit nicer to me. |
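For reference, a function like this would be invoked through PostgREST's `/rpc/` endpoint, with the JSON body keys matching the function's parameter names. A minimal sketch of building that request body (URL, auth, and exact composite/array serialization depend on the deployment):

```python
import json

# Argument names must match the function signature: (coop coop, chickens chicken[]).
# Composite and array arguments are passed as a JSON object and a JSON array.
body = {
    "coop": {"colour": "red", "built": "2023-01-01"},
    "chickens": [
        {"name": "Henrietta", "is_laying": True},
        {"name": "Clucker", "is_laying": False},
    ],
}
payload = json.dumps(body)
# POST http://localhost:3000/rpc/chicken_aio_rt2 with `payload` as the body
# and a Content-Type: application/json header.
```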
Some higher level thoughts on this issue. The "relational insert" compared to a "regular insert" is similar to "embedded resources" vs. a "simple resource". In #2144 we are discussing the extension of embedding through computed/virtual columns via functions. Those can provide the same functionality as the auto-detected embedding. What if we turned that around and provided a way to create functions that support inserting into a separate table from one request? Something roughly along the lines of:

create table clients (
client int primary key generated by default as identity,
name text
);
create table projects (
project int primary key generated by default as identity,
name text,
client int references clients
);
-- this would currently be auto-detected anyway and is just for demonstration
create function client(projects)
returns setof clients
rows 1
language sql as $$
select * from clients where client = $1.client
$$;
create function client(projects, clients)
returns projects
language sql as $$
insert into clients (name)
select $2.name
returning $1.project, $1.name, clients.client
$$;

which would then allow us to do something like:

POST /projects?columns=name,client(name) HTTP/1.1
{ "name": "New Project", "client": { "name": "New Client" } }

PostgREST would parse the `columns` parameter and dispatch the nested "client" object to that function. I'm not sure whether a function signature like this will actually work nicely, but I guess it's close. Those functions would not have to be used for "relational" inserts, however. We can think of them more generally as "computed setters" vs. "computed getters" (computed/virtual columns). |
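Parsing `columns=name,client(name)` into a tree is the first step such dispatching would need. A sketch of that parsing (illustrative only; PostgREST's real grammar and parser differ):

```python
def parse_columns(s):
    """Parse a `columns` value like "name,client(name)" into a nested dict:
    {"name": None, "client": {"name": None}}."""
    def parse(i):
        tree, name = {}, ""
        while i < len(s):
            c = s[i]
            if c == ",":
                if name:
                    tree[name], name = None, ""
                i += 1
            elif c == "(":
                # Recurse into the embedded table's column list.
                tree[name], i = parse(i + 1)
                name = ""
            elif c == ")":
                if name:
                    tree[name] = None
                return tree, i + 1
            else:
                name += c
                i += 1
        if name:
            tree[name] = None
        return tree, i
    return parse(0)[0]
```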
Tangentially related; it's very common in crud applications to want to update M2M relations where the foreign table records already exist (so only the join table needs to be updated). Is this already being considered? (if so, dare I ask, is some solution to this being worked on currently?) example:
|
So you would need to think about nested updates as well for inserts, although your CASCADE settings should take care of most of this. Here is how Hasura does it: https://hasura.io/docs/latest/mutations/postgres/insert/#pg-nested-inserts J |
Any updates on it? |
@wolfgangwalther - How would this work with a J |
I think that would just be a matter of adding some |
Isn't there already a J |
I don't think there is a difference between update or insert for the relational part. The main entity needs to be either created - or it already exists. But the related entities must always be "made to match". This is simple in the insert case, because there are certainly no related entities to be deleted or updated. But the more complex query for the update case won't hurt in the insert case either. So I think it should be possible to use the same |
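The "made to match" idea above can be sketched as a diff between existing and desired child rows (a toy model with a hypothetical `diff_related` helper, not PostgREST code; rows carrying an "id" key are assumed to refer to existing rows):

```python
def diff_related(existing_ids, desired):
    """Compute what "make the related rows match" means:
    which child rows to insert, update, or delete."""
    desired_ids = {row["id"] for row in desired if "id" in row}
    inserts = [row for row in desired if "id" not in row]
    updates = [row for row in desired if "id" in row]
    deletes = sorted(set(existing_ids) - desired_ids)
    return inserts, updates, deletes

# For an INSERT of the parent there are no existing children, so only
# `inserts` is non-empty -- the same, more general query covers both cases.
ins, upd, dele = diff_related([1, 2, 3], [{"id": 1, "name": "a"}, {"name": "new"}])
# ins -> [{"name": "new"}], upd -> [{"id": 1, "name": "a"}], dele -> [2, 3]
```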
@steve-chavez Is the design for this feature completed or is it still in discussion? |
So I think creating the virtual column mutations would definitely be the first step here, and could potentially be a workaround for transactions. For a trivial example with tags:
How do you add a post with tags?

{
'title': 'my post',
'content': 'blah, blah, blah...',
'post_tag': [
tags: [{
name: 'hiking',
}, {
name: 'swimming'
}]
]
}

If you can even visualize this correctly, this gets complicated real quickly. In Dgraph, Prisma, Hasura, or any other ORM, you don't actually think about the junction table (or the relationship properties). So it would be:

{
'title': 'my post',
'content': 'blah, blah, blah...',
'tags': [{
name: 'hiking',
}, {
name: 'swimming'
}]
}

Which makes more sense conceptually. However, we also have different problems to consider:
This is just a sample of problems to think about. So, for now, until PostgREST figures out a good mental model, virtual mutation columns would definitely be a life saver. They would allow me to choose the nested input shape myself. Then I could just do:

{
'title': 'my post',
'content': 'blah, blah, blah...',
'tags': ['swimming', 'hiking']
With a computed column like:

create function tags(posts, text[])
returns setof post_tag
language sql as $$
  with
  -- delete existing post_tag rows for this post
  deleted as (
    delete from post_tag
    where post_id = $1.post_id
  ),
  -- upsert tags
  insert_tags as (
    insert into tags (name)
    select unnest($2)
    on conflict (name) do nothing
    returning tag_id
  )
  -- insert new post_tag rows
  insert into post_tag (post_id, tag_id)
  select $1.post_id, insert_tags.tag_id
  from insert_tags
  returning *;
$$;

Which is what I really want. While this seems like a lot of work, it is so much simpler to use. I don't want this as a final solution, but this would definitely help simplify my code for now. Thanks, J |
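If a client already holds the verbose nested form, reshaping it into the `text[]` payload that such a setter accepts is a one-liner (hypothetical helper name, for illustration):

```python
def tag_names(body):
    """Reshape [{"name": "hiking"}, ...] into the flat ["hiking", ...]
    form that the text[] computed-setter argument would accept."""
    return [t["name"] for t in body.get("tags", [])]

simple = {
    "title": "my post",
    "content": "blah, blah, blah...",
    "tags": tag_names({"tags": [{"name": "swimming"}, {"name": "hiking"}]}),
}
# simple["tags"] -> ["swimming", "hiking"]
```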
So thinking more about the "virtual [column] mutations" approach mentioned in #818 (comment) and #818 (comment): My original function interface was wrong - the virtual mutation needs to return the type of the resources it inserted, of course. As discussed in #3226 (comment), we'll need to support RETURNING all the way through, to avoid the snapshot-not-visible problem of CTEs and embedding. This also means that each virtual mutation needs to return at least all the rows it created, because those won't be visible. But those are not the only invisible rows: if you UPDATE or DELETE inside the virtual column's function, those changes won't be visible in the remaining query either. Thus, virtual mutations should always return all rows, including those they did not modify. In #3226 I mentioned that a PostgREST-native relational insert might need to use UNION to combine the new rows with the old rows. If using a virtual mutation, this would need to replace that part, too. Example:
A PostgREST-native insert query could look roughly like this:

WITH
inserted_posts AS (INSERT INTO posts ... RETURNING ...),
inserted_tags AS (INSERT INTO tags ... ON CONFLICT DO NOTHING RETURNING ...),
all_tags AS (SELECT ... FROM tags WHERE tag IN (payload.tags) UNION SELECT * FROM inserted_tags),
inserted_post_tags AS (INSERT INTO post_tags ... RETURNING ...)
SELECT ...
-- regular PostgREST-select, including embeddings - but every table reference to posts, tags and post_tags is replaced with inserted_posts, all_tags and inserted_post_tags

But if you wanted custom behavior for the tags-insert, you could maybe do it like so:

CREATE FUNCTION tags(posts, <something>) RETURNS SETOF tags
LANGUAGE SQL AS $$
-- custom query with these properties
-- INSERT INTO tags
-- RETURNING tags (inserted and existing, so via CTE + UNION or so)
$$;

Then the query for PostgREST could look like this:

WITH
inserted_posts AS (INSERT INTO posts ... RETURNING ...),
inserted_tags AS (SELECT tags.* FROM inserted_posts, tags(inserted_posts, payload.tags)),
inserted_post_tags AS (INSERT INTO post_tags ... RETURNING ...)
SELECT ...

Now, about the `<something>` argument type. With the following functions you could support multiple different kinds of POST requests:

CREATE TABLE tags (
tag text,
type text DEFAULT 'A'
);
CREATE FUNCTION tags(posts, text[]) RETURNS SETOF tags ...
CREATE FUNCTION tags(posts, tags[]) RETURNS SETOF tags ...

This would use the first overload with
While this would use the second overload with
(added some aliases in random places to highlight how the mapping between the json body and the function arguments / names could work)

All of the "virtual mutations" only really make sense when a basic, PostgREST-native relational insert is already there, thus...
... I disagree with that. I don't think we can add virtual mutations as a first step. We need to add basic relational insert first. |
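A note on why the `all_tags` CTE above needs the UNION at all: in PostgreSQL, `INSERT ... ON CONFLICT DO NOTHING RETURNING` only returns the rows actually inserted, so pre-existing tags must be selected separately and combined. A toy in-memory model of that behavior:

```python
def insert_on_conflict_do_nothing(table, names):
    """Model of INSERT .. ON CONFLICT DO NOTHING RETURNING: only rows
    that were actually inserted come back; pre-existing rows do not."""
    inserted = [n for n in names if n not in table]
    table.update(inserted)
    return inserted

tags = {"hiking"}                    # "hiking" already exists
payload_tags = ["hiking", "swimming"]
inserted = insert_on_conflict_do_nothing(tags, payload_tags)
# Without the UNION, "hiking" would be missing from the result:
existing = [n for n in payload_tags if n not in inserted]
all_tags = existing + inserted
```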
Hi! Will this feature be on the roadmap? |
@noobmaster19 Yes, it is! You can track it in #3226, which I'm retaking after #3640 is completed. |
Having a (simplified) model with a many to many relation like so
Now lets assume that in my application I have a form that lets you add a new customer. On that form, besides creating a new customer I also want to immediately assign him to a known bank. Of course I could just: (in pseudo code)
newCustomerId = POST Prefer: return=representation /customers?select=id { "first_name": "John", "last_name": "Doe" }
POST /customer_bank_accounts { "account_number": 123456788901234556152, "customer_id": ${newCustomerId}, "bank_id": "some_preexisting_id_of_a_bank_that_the_user_selected" }
However this of course is not transactional, and the problem is that if step 2 fails I would like to roll back and discard the creation of the customer. But boom tss, the customer has already been created!
Is there an api that would let me to do this like so?
POST http://localhost:3000/customers?select=*,customer_bank_accounts{account_number, bank_id}
From the docs I figure that the only way to achieve such would be to manually write a procedure and then call it via rpc, would that be correct?
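The non-atomicity of the two sequential POSTs can be illustrated with a toy in-memory model (purely illustrative; `FakeApi` and its methods are hypothetical, not a real PostgREST client):

```python
class FakeApi:
    """Stand-in for two independent HTTP requests with no shared transaction."""
    def __init__(self):
        self.customers, self.accounts = [], []

    def post_customer(self, body):
        self.customers.append(body)
        return len(self.customers)  # pretend this is the new customer id

    def post_account(self, body):
        if body.get("bank_id") is None:
            raise ValueError("constraint violation")
        self.accounts.append(body)

api = FakeApi()
cid = api.post_customer({"first_name": "John", "last_name": "Doe"})
try:
    api.post_account({"customer_id": cid, "bank_id": None})  # step 2 fails
except ValueError:
    pass
# The orphaned customer remains: no transaction spans both calls.
# An RPC wrapping both inserts in one function avoids this.
```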