migrating s3 put to tm upload #28
Conversation
Looks good to me. Do we already have tests covering the upload/putObject?
I believe we have tests covering this. There are some integration tests that are disabled but can be run locally (some of them are designed to actually interact with S3, Box, or whatever store you are working with).
Agree with Matt on the version bump.
If you are changing a public signature, it should be a version bump.
@matthewgraf I set up tests to run against Minio (dockerized S3), so S3 tests are running in Travis (I filed issue #25 to fix codecov publishing). Agree we should bump the version. Otherwise this looks good to me.
@rolandomanrique where is the version changed? I was looking over this with @lumengxi and it looks like the version in
Reconciling all version references to the same version: 0.3.0.
#release @rolandomanrique |
TODO: bump the version.
Relevant:
Q: How much data can I store in Amazon S3?
The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 terabytes. The largest object that can be uploaded in a single PUT is 5 gigabytes. For objects larger than 100 megabytes, customers should consider using the Multipart Upload capability.
AWS guidance is to use TransferManager/multipart upload for anything over 100 MB.
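The thresholds above can be sketched as a small helper that decides which upload path an object's size calls for. This is a hypothetical illustration, not code from this PR; in practice TransferManager applies the multipart switch automatically, and the class/method names here (`S3UploadStrategy`, `chooseStrategy`) are made up for the example.

```java
// Hypothetical sketch of the S3 size limits quoted above; not part of this repo.
public class S3UploadStrategy {

    // Largest object allowed in a single PUT: 5 GB.
    static final long MAX_SINGLE_PUT = 5L * 1024 * 1024 * 1024;
    // AWS recommends multipart upload above 100 MB.
    static final long MULTIPART_THRESHOLD = 100L * 1024 * 1024;
    // Maximum S3 object size: 5 TB.
    static final long MAX_OBJECT_SIZE = 5L * 1024 * 1024 * 1024 * 1024;

    /** Pick an upload strategy for an object of the given size in bytes. */
    public static String chooseStrategy(long sizeBytes) {
        if (sizeBytes < 0 || sizeBytes > MAX_OBJECT_SIZE) {
            throw new IllegalArgumentException("size outside S3 object range");
        }
        if (sizeBytes > MAX_SINGLE_PUT) {
            return "multipart-required";    // single PUT would be rejected
        }
        if (sizeBytes > MULTIPART_THRESHOLD) {
            return "multipart-recommended"; // per AWS guidance over 100 MB
        }
        return "single-put";
    }

    public static void main(String[] args) {
        System.out.println(chooseStrategy(50L * 1024 * 1024));        // well under 100 MB
        System.out.println(chooseStrategy(500L * 1024 * 1024));       // over 100 MB
        System.out.println(chooseStrategy(6L * 1024 * 1024 * 1024));  // over 5 GB
    }
}
```

Migrating from `putObject` to TransferManager's `upload` means this decision no longer lives in caller code: TransferManager picks single-PUT or multipart itself based on size.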