
feat: Allow custom storage engines to prematurely terminate stream without calling cb(err) #1277

Open · wants to merge 1 commit into master
Conversation

agadzinski93 commented Sep 28, 2024

Greetings,

I added a single line of code that allows authors of custom storage engines to call file.stream.destroy() to terminate the stream early and still call cb(null, info) instead of cb(err). This lets them:

  • Allow users to continue to the next non-error middleware
  • Keep any chunked data that was streamed via handlers such as file.stream.on('data', (data: Buffer) => {})

Without this feature, accomplishing this goal requires multer to pipe the entire stream, which is costly (performance, bandwidth, time) for larger files.
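To illustrate the intended usage, here is a sketch of a custom storage engine that keeps only the first chunk of an upload and aborts the rest early. The class name FirstChunkStorage and the 1 KiB cut-off are my own illustration, not part of the PR; only the _handleFile/_removeFile contract and the file.stream.destroy() followed by cb(null, info) pattern come from the description above.

```javascript
const LIMIT = 1024; // hypothetical cut-off: keep only the first 1 KiB

// Hypothetical custom multer storage engine implementing the pattern
// this PR enables: destroy the stream early, then report success.
class FirstChunkStorage {
  _handleFile(req, file, cb) {
    const chunks = [];
    let finished = false;

    file.stream.on('data', (data) => {
      if (finished) return;
      chunks.push(data);
      const head = Buffer.concat(chunks);
      if (head.length >= LIMIT) {
        finished = true;
        // With this PR, destroying the stream no longer forces cb(err):
        file.stream.destroy();
        cb(null, { head: head.subarray(0, LIMIT) });
      }
    });

    file.stream.on('end', () => {
      // Small files finish before the limit is reached.
      if (finished) return;
      finished = true;
      cb(null, { head: Buffer.concat(chunks) });
    });
  }

  _removeFile(req, file, cb) {
    cb(null);
  }
}
```

An instance of this class would be passed to multer as the storage option, e.g. multer({ storage: new FirstChunkStorage() }).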

Since the destroy method is not currently used anywhere in the codebase, this change should not affect existing users or their storage engines.

After debugging the code, the only difference when using this feature is that busboy's finish event is not triggered (the primary issue, since the variable readFinished is never set to true). However, this should be fine: the done function is still called, which triggers req.unpipe(busboy) and removes all event listeners.

The finish event still triggers normally when you don't destroy the stream.

I also placed the line of code at the end of the _handleFile callback to ensure the author's custom _handleFile has already executed. All an author has to do is destroy the stream at some point before calling cb(null, info).

I'd greatly appreciate it if you can review this! Thanks!

Edit:
I should also note that when using file.stream.destroy() on larger files (roughly >3MB in my experience), Fetch/XHR requests will still hold their connection open unless res.header('Connection', 'close') is called at some point.

However, doing the above results in Firefox and Chrome returning network-reset errors rather than whatever status code and response body the programmer returns. See here for details. A proper response is generated with Postman (which I can confirm from testing), so it is 100% a browser issue. As such, Multer could add that header to the response object, but because of this issue I think it is best, for now, to leave that part in the hands of the authors of custom storage engines or their consumers.

Edit 2:
To those interested, I opened issues with Mozilla and Chromium to address XHR/Fetch requests producing network errors, instead of the expected API response, when a large file upload is terminated and the Connection response header is set to close.
