Searching for a Better Way to Handle Bulk Inserts #4629
-
Hello Discussion Forum,

My issue is that I need to parse through a JSON file that has 10k records, each with ~70 fields of data. When written to disk the file in question is only ~18 MB, so holding it all in memory isn't an issue. My destinations are a single chain of 3 steps.

ISSUES
Is there a better way of handling a situation where thousands of records need to be added to a table in a single go?

Things I have tried

Extra Info
-
I'd suggest not doing it in Mirth at all. For very large inserts, command-line DB tools are much faster. You can shell out of Mirth to do that, of course, or just cron something. Network drive problems would present a problem irrespective of how you do the insert.
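To make that concrete for SQL Server (which is what the database turns out to be later in the thread), one command-line route is a short T-SQL script run through sqlcmd from a scheduled task: OPENROWSET(BULK ..., SINGLE_CLOB) can pull the whole file in server-side in a single statement. This is only a sketch, the file path and staging table name are placeholders, and the file has to be readable by the SQL Server instance itself, which is where a network drive could still bite.

```sql
-- Sketch only: load the entire JSON file in one statement, outside of Mirth.
-- The path and table name are hypothetical placeholders.
INSERT INTO dbo.raw_json_staging (payload)
SELECT BulkColumn   -- OPENROWSET(..., SINGLE_CLOB) returns one column named BulkColumn
FROM OPENROWSET (BULK 'C:\data\hr_extract.json', SINGLE_CLOB) AS src;
```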
-
Well, all you had to say was Windows :)
-
Do a two-stage insert to your DB.
This pushes some of the logic into your DB, but if MC is just making SQL calls to poke in the JSON, then SQL calls to SELECT+INSERT that JSON, it keeps the heavy lifting in SQL Server while the logic stays visible from MC. You'll want to review https://docs.microsoft.com/en-us/sql/relational-databases/json/json-data-sql-server?view=sql-server-ver15 . I know PG has native JSON column types but it doesn't look like SQL Server does, though it does have the query tools to read the JSON. Apologies if I'm overlooking any SQL Server quirks; I don't use it as often as I do other SQL engines.
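As a rough sketch of the SELECT+INSERT half: SQL Server's OPENJSON (covered in the linked docs) can shred a staged JSON array into typed rows. This assumes a staging table like the raw_hr_data one described just below; the destination table, column names, and JSON paths are illustrative assumptions, not from the original post.

```sql
-- Sketch: second stage, shredding the staged JSON into the real destination table.
-- Destination table, columns, and JSON property paths are hypothetical.
INSERT INTO dbo.hr_records (employee_id, last_name, hire_date)
SELECT j.employee_id, j.last_name, j.hire_date
FROM dbo.raw_hr_data AS r
CROSS APPLY OPENJSON(r.payload)
    WITH (
        employee_id INT           '$.employeeId',
        last_name   NVARCHAR(100) '$.lastName',
        hire_date   DATE          '$.hireDate'
    ) AS j;
```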
Do a two-stage insert to your DB. First, insert the raw JSON into a staging table such as raw_hr_data, whose only columns are your JSON data and maybe some timestamps to show when the data was inserted.
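A minimal sketch of what that staging table and the first-stage insert from the Mirth channel could look like; the column names and types are assumptions, not something specified in the thread.

```sql
-- Sketch: staging table that holds only the raw JSON plus an insert timestamp.
-- Column names and types are assumptions.
CREATE TABLE dbo.raw_hr_data (
    id          INT IDENTITY(1,1) PRIMARY KEY,
    payload     NVARCHAR(MAX) NOT NULL,                       -- raw JSON from the channel
    inserted_at DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()   -- when the data was inserted
);

-- First stage: Mirth just pokes the JSON in as-is, e.g. one parameterized statement
-- from a Database Writer destination:
-- INSERT INTO dbo.raw_hr_data (payload) VALUES (?);
```

The second stage is then the OPENJSON-style SELECT+INSERT sketched above, run either by a later destination in the chain or by a stored procedure the channel calls.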