My app manages deep learning models and their metadata; the models are around 50-100 MB each.
I am using CockroachDB for storage because it has no hard limit on its BYTES / BYTEA / BLOB columns, which is handy when the binary data is infrequently accessed. The binary data is kept for record-keeping and statistical study; it is usually written once and rarely read.
I know this is not best practice, but I don't want to set up object storage just for these files.
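For reference, a sketch of the kind of table involved; the table and column names are assumptions, not taken from the actual app. CockroachDB treats BYTES, BYTEA, and BLOB as aliases for the same variable-length binary type:

```ts
// Illustrative schema only; names are hypothetical.
// BYTES (aliased as BYTEA / BLOB) has no hard size limit in CockroachDB,
// which is what makes storing the weights inline workable here.
const createModels = `
  CREATE TABLE IF NOT EXISTS models (
    id     STRING PRIMARY KEY,
    meta   JSONB,          -- model metadata
    params BYTES NOT NULL  -- serialized weights, ~50-100 MB per row
  )
`;
```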
With the current version, once the buffer grows beyond ~30 MB, V8 starts throwing heap allocation failures; I believe denoland/rusty_v8#427 is not enough to absorb such a sudden memory spike.
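For context, a minimal sketch of the pattern that triggers the spike (the `client` import and its query signature are assumptions, standing in for any Postgres-wire driver):

```ts
// Hypothetical driver handle; any Postgres-wire client talking to
// CockroachDB hits the same problem, since the payload is one value.
import { client } from "./db.ts";

// Reading the model materializes the full 50-100 MB file as a single
// Uint8Array on the V8 heap...
const params: Uint8Array = await Deno.readFile("./model.onnx");

// ...and binding it as one BYTES parameter forces the driver to build
// the whole wire message in memory as well, so peak usage is a multiple
// of the file size, which is where the ~30 MB failures show up.
await client.query(
  "INSERT INTO models (id, params) VALUES ($1, $2)",
  ["model-v1", params],
);
```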
Would it make sense for the parameters to be transferable via ReadableStream and WritableStream?
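As a sketch of what that could look like from the caller's side, assuming a hypothetical stream-aware binding on the driver (`queryStream` below is made up for illustration):

```ts
import { client } from "./db.ts";

// Deno.FsFile.readable exposes the file as a ReadableStream<Uint8Array>,
// so the weights are pulled from disk chunk by chunk.
const file = await Deno.open("./model.onnx", { read: true });

// Hypothetical API: the driver consumes the stream and writes each chunk
// straight into the protocol message, so peak memory stays near one
// chunk instead of the whole 50-100 MB payload.
await client.queryStream(
  "INSERT INTO models (id, params) VALUES ($1, $2)",
  ["model-v1", file.readable],
);
```

On the read side, the driver could symmetrically hand back a ReadableStream for the BYTES column, or pipe it into a caller-supplied WritableStream, so a large value never has to exist as one contiguous buffer.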