insertBatch to generate multiple INSERT statements #888
Comments
That would change the nature of this operation from atomic to non-atomic.
You can't make it atomic anyway (except with transactions) once you exceed the maximum number of parameters.
Yes, but implicitly turning an atomic operation into a non-atomic one is a massive footgun. This should be explicitly enabled by the programmer who makes the call, so the developer knows that this batch insert may result in multiple queries. Also, splitting one massive insert into multiple smaller ones is useful for more than handling the parameter limit: I generally avoid really big inserts because they may block the table for other queries, so the system may be more responsive if you do 10 small inserts instead of one big one. Having a method that does this for you would be useful even when I don't exceed the parameter limit in my queries.
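For illustration, here is a minimal sketch of how such an explicit opt-in could behave. The `insertBatch` signature, the `maxRowsPerStatement` option, and the `execute` callback are assumptions made up for this sketch, not the library's actual API:

```ts
// Hypothetical sketch: chunked batch insert behind an explicit opt-in.
interface InsertBatchOptions {
  // Maximum rows per generated INSERT; when omitted, everything stays in a
  // single (atomic) statement and oversized batches fail at the driver.
  maxRowsPerStatement?: number;
}

async function insertBatch(
  table: string,
  rows: Record<string, unknown>[],
  execute: (sql: string, params: unknown[]) => Promise<void>,
  options: InsertBatchOptions = {}
): Promise<void> {
  const chunkSize = options.maxRowsPerStatement ?? rows.length; // default: one statement
  for (let i = 0; i < rows.length; i += chunkSize) {
    const chunk = rows.slice(i, i + chunkSize);
    const columns = Object.keys(chunk[0]);
    const params: unknown[] = [];
    const tuples = chunk.map(row =>
      `(${columns.map(col => {
        params.push(row[col]);
        return `$${params.length}`; // PostgreSQL-style placeholders
      }).join(', ')})`
    );
    // Each chunk is its own statement, so the overall operation is no longer
    // atomic unless the caller wraps it in a transaction.
    await execute(
      `INSERT INTO ${table} (${columns.join(', ')}) VALUES ${tuples.join(', ')}`,
      params
    );
  }
}
```

Because the option defaults to "one statement", the current atomic behaviour is preserved unless the caller opts in to splitting.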
Makes sense. The default value of such an option must be
Good point, but
By default, it is better to use the driver-specific max value and allow the developer to change it.
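A rough sketch of what driver-specific defaults with a developer override might look like. The numbers below are commonly cited engine maximums and should be verified per driver and version; both the numbers and the names here are assumptions for illustration:

```ts
// Illustrative defaults only; actual limits depend on driver and server version.
const DEFAULT_MAX_PARAMS: Record<string, number> = {
  postgres: 65535,  // 16-bit parameter count in the wire protocol
  mysql: 65535,     // prepared-statement placeholder count is also 16-bit
  sqlserver: 2100,  // "maximum of 2100 parameters" per request
  sqlite: 999,      // conservative default; newer builds allow 32766
};

function maxRowsPerStatement(
  driver: string,
  columnsPerRow: number,
  override?: number // developer can raise or lower the default
): number {
  const maxParams = override ?? DEFAULT_MAX_PARAMS[driver] ?? 999;
  return Math.max(1, Math.floor(maxParams / columnsPerRow));
}
```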
Either way is fine with me, as long as we put a warning in the docs about whether it is atomic or not.
To be clear: I was talking about a limit based on the number of inserted rows, not the number of params used by the query. If I want to insert 50k records in 50 queries (1k records per query), then the limit I need to pass to the function should
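A small sketch of that row-based chunking, using a hypothetical `chunkRows` helper (not part of the library), to show it is independent of how many parameters each row uses:

```ts
// rowLimit is the limit described above (rows per statement); a
// parameter-based limit would instead divide maxParams by the column count.
function chunkRows<T>(rows: T[], rowLimit: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < rows.length; i += rowLimit) {
    chunks.push(rows.slice(i, i + rowLimit));
  }
  return chunks;
}

// 50k records with a row limit of 1000 -> 50 INSERT statements,
// regardless of how many columns (and therefore parameters) each row has.
const batches = chunkRows(new Array(50_000).fill({}), 1000);
console.log(batches.length); // 50
```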
Yes. That sounds useful as well.
Right now, if the batch is too big, we run into DBMS limits like the following:

[screenshot of the error]

The screenshot is from PostgreSQL.

I suggest doing two things, one of them being to have insertBatch generate more than one INSERT statement when the batch exceeds that limit.
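As a back-of-the-envelope illustration of the limit in question: the 65535 figure below is PostgreSQL's protocol-level parameter maximum, used here as an assumed example value rather than something taken from the library:

```ts
// Rough check for when a single INSERT would exceed the parameter limit.
const PG_MAX_PARAMS = 65535; // assumed PostgreSQL protocol maximum

function exceedsParamLimit(rowCount: number, columnCount: number): boolean {
  return rowCount * columnCount > PG_MAX_PARAMS;
}

// e.g. with 10 columns per row, anything above 6553 rows needs to be split.
console.log(exceedsParamLimit(6553, 10)); // false
console.log(exceedsParamLimit(6554, 10)); // true
```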