Large file uploads (>4 GB) not working in Safari; high memory usage for uploads in Safari #125

Open
timgcarlson opened this issue Jan 11, 2023 · 6 comments
Labels
bug Something isn't working

Comments

@timgcarlson

My team really appreciates this library, and overall it's working out great for us. The one issue we are encountering is uploading large zip files (> 4 GB) with Safari on macOS. All other major browsers seem to work just fine, but Safari gives this error as soon as the upload of a large file starts via the uploadToS3 function:

[Error] NotReadableError: The I/O read operation failed.

Uploads from Safari (< 4 GB) also seem to use a lot more system memory during the upload than Chrome. On Safari, system memory usage spikes, the fans go wild (pre-M1 MBP), and I get this warning message in the browser: "This webpage is using significant memory. Closing it may improve the responsiveness of your Mac."

Any ideas on what the issue could be here? Should files larger than 4 GB work in Safari? Let me know if there is any other information I can provide to help diagnose the issue if it's not reproducible in other projects.

Thank you!

@ryanto
Owner

ryanto commented Jan 12, 2023

Hey, thanks for the issue! Not sure, but it sounds like there could be a bug in this library. Could you share your upload code?

@timgcarlson
Author

Sure, here's the code relating to the upload. Let me know if there is anything else that could help.

// components/Upload.tsx

const { files, resetFiles, uploadToS3 } = useS3Upload({
  endpoint: '/api/appUpload',
});

const onUpload = async () => {
  try {
    await uploadToS3(selectedFile, {
      endpoint: {
        request: {
          body: {
            userId: user.id,
            appId,
            uploadId
          },
          headers: {},
        },
      },
    });
  } catch (error) {
    // handle error
    // This is where error.message is "NotReadableError: The I/O read operation failed."
    // on large files in Safari
  }
};
// pages/api/appUpload.ts

import { NextApiRequest } from 'next';
import { getSession } from 'next-auth/react';
import { APIRoute } from 'next-s3-upload';

export const getS3AppBuildPath = async (req: NextApiRequest) => {
  const { uploadId, userId, appId } = req.body;

  if (!userId || !appId || !uploadId) {
    throw new Error('Bad request');
  }

  const session = await getSession({ req });

  if (!session) {
    throw new Error('Not authenticated');
  }

  return `${appId}/${uploadId}/bundle.zip`;
};

export default APIRoute.configure({
  accessKeyId: process.env.S3_UPLOAD_KEY,
  secretAccessKey: process.env.S3_UPLOAD_SECRET,
  bucket: process.env.S3_APP_UPLOAD_BUCKET,
  region: process.env.S3_UPLOAD_REGION,
  async key(req: NextApiRequest) {
    return await getS3AppBuildPath(req);
  },
});

@ryanto
Owner

ryanto commented Jan 19, 2023

Hmm, OK, your code looks spot on.

I'll try to test this out with Safari and see if I can get you an answer. Sorry you're running into this issue.

@ryanto
Owner

ryanto commented Jan 19, 2023

Looks like this is a bug in lib-storage. We use lib-storage under the hood to do the upload.

In the first thread, someone had a solution using patch-package. Pretty ugly :(

I'll try to reproduce and post something in those threads.
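
For context, the upload is handed off to lib-storage's Upload class, which splits the body into parts and uploads them concurrently. A minimal sketch of that call (not this library's exact internals; the bucket, key, and region values are placeholders):

// sketch.ts — illustrative only, assuming @aws-sdk/client-s3 and @aws-sdk/lib-storage
import { S3Client } from '@aws-sdk/client-s3';
import { Upload } from '@aws-sdk/lib-storage';

async function uploadFile(file: File) {
  const client = new S3Client({ region: 'us-east-1' }); // placeholder region

  const upload = new Upload({
    client,
    params: {
      Bucket: 'example-bucket', // placeholder
      Key: 'example/bundle.zip', // placeholder
      Body: file, // the browser File/Blob
    },
    partSize: 5 * 1024 * 1024, // 5 MB parts (the S3 minimum)
    queueSize: 4, // number of parts uploaded concurrently
  });

  await upload.done();
}

The NotReadableError presumably surfaces when lib-storage reads chunks out of the Blob, so the fix (or a patch-package workaround) has to happen at that layer rather than in the uploadToS3 call.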

@ryanto ryanto added the bug Something isn't working label Jan 19, 2023
@cjjenkinson

Also experiencing this, so bumping.

@ErikPlachta

ErikPlachta commented Feb 27, 2023

Could we maybe get away with using multipart upload if the file size is over N and the browser is Safari?

"Multipart upload allows you to upload a single object as a set of parts. Each part is a contiguous portion of the object's data. You can upload these object parts independently and in any order. If transmission of any part fails, you can retransmit that part without affecting other parts. After all parts of your object are uploaded, Amazon S3 assembles these parts and creates the object. In general, when your object size reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single operation."

https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html
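
One way that could look in practice, purely as a sketch: slice the File in the browser and PUT each part to a presigned multipart URL. The /api/presignPart and /api/completeUpload endpoints, the Safari check, and the size cutoff below are all hypothetical and not part of next-s3-upload. file.slice() returns a lazy reference rather than reading the bytes into memory, which should also help with the memory spikes.

// multipartFallback.ts — illustrative sketch only
const FOUR_GB = 4 * 1024 * 1024 * 1024;
const PART_SIZE = 100 * 1024 * 1024; // 100 MB per part

const isSafari = /^((?!chrome|android).)*safari/i.test(navigator.userAgent);

async function uploadWithFallback(file: File, uploadId: string) {
  if (!isSafari || file.size <= FOUR_GB) {
    // normal path: await uploadToS3(file, ...)
    return;
  }

  const parts: { ETag: string; PartNumber: number }[] = [];

  for (let i = 0; i * PART_SIZE < file.size; i++) {
    // slice() is lazy: the bytes are only read when the PUT body is sent
    const chunk = file.slice(i * PART_SIZE, (i + 1) * PART_SIZE);

    // hypothetical endpoint that presigns an UploadPart URL for this part number
    const { url } = await fetch(
      `/api/presignPart?uploadId=${uploadId}&partNumber=${i + 1}`
    ).then((res) => res.json());

    const res = await fetch(url, { method: 'PUT', body: chunk });

    // the bucket's CORS config must expose the ETag header for this to work
    parts.push({ ETag: res.headers.get('ETag')!, PartNumber: i + 1 });
  }

  // hypothetical endpoint that calls CompleteMultipartUpload server-side
  await fetch(`/api/completeUpload?uploadId=${uploadId}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ parts }),
  });
}

Whether this actually sidesteps the Safari read error would still need testing, since each chunk is read through the same Blob API the failing path uses.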
