The current configuration requires 2x the storage capacity on the origin server, because we compress the files there before transporting them to the runner, so the originals and the compressed archive sit side by side.
Solution
We can transfer the files to the runner with rsync, then move them from the runner into object storage. Once the files are in the bucket, we can zip them there.
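A minimal sketch of that flow, assuming the runner has rsync and a configured AWS CLI; the host, paths, and bucket name are placeholders:

```bash
# On the runner: pull the files from the origin server. The -z flag
# compresses the network stream only, so no extra disk space is used
# on the origin.
rsync -avz user@origin.example.com:/var/data/export/ ./export/

# Push the files into object storage uncompressed; they can be
# compressed once they are in the bucket.
aws s3 sync ./export/ s3://your-bucket/export/
```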
Limitations
As before, the GitHub runner has to provide enough storage for the transferred files; at the time of writing this is approximately 30 GiB.
We can use the aws s3 cp command to upload large files to S3; the CLI performs multipart uploads automatically once a file exceeds its configured multipart_threshold (there is no --multipart-upload option). For example:
aws s3 cp largefile.txt s3://your-bucket/largefile.txt
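If the defaults need tuning for very large files, the multipart behavior is adjustable through the CLI configuration; a small sketch, with illustrative values rather than recommendations:

```bash
# Start multipart uploads for files larger than 64 MB (default is 8 MB).
aws configure set default.s3.multipart_threshold 64MB
# Upload in 16 MB parts.
aws configure set default.s3.multipart_chunksize 16MB
```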
Zip the Uploaded Files in S3 Bucket: the --exclude and --include options of aws s3 cp only filter which objects a command operates on; they do not create an archive, and S3 offers no server-side compression. To compress objects already in the bucket, the data has to be streamed back through a compressor on the runner, as sketched below.
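A minimal sketch of that streaming pattern, using gzip rather than zip (gzip works on a pipe, zip generally does not) and an illustrative object key:

```bash
# Stream the object to stdout, compress on the fly, and upload the result
# back to the bucket; no full local copy is written to the runner's disk.
aws s3 cp s3://your-bucket/export/largefile.txt - \
  | gzip \
  | aws s3 cp - s3://your-bucket/export/largefile.txt.gz
```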