Author: Chou-han Yang (@chouhanyang)
Current Maintainers: Debodirno Chandra (@debodirno) | Naveen Vardhi (@rozuur) | Navin Pai (@navinpai)
- Fully migrated from the old boto 2.x to the new boto3 library, which provides a more reliable and up-to-date S3 backend.
- Support S3 --API-ServerSideEncryption along with 36 new API pass-through options. See the API pass-through options section for the complete list.
- Support batch delete (with the delete_objects API) to delete up to 1000 files with a single call, 100+ times faster than sequential deletion.
- Support the S4CMD_OPTS environment variable for commonly used options, such as --API-ServerSideEncryption, across all your s4cmd operations (see the example after this list).
- Support moving files larger than 5GB with multipart upload, 20+ times faster than a sequential move operation when moving large files.
- Support timestamp filtering with the --last-modified-before and --last-modified-after options for all operations. Human-friendly timestamps are supported, e.g. --last-modified-before='2 months ago'.
- Faster upload with lazy evaluation of md5 hash.
- List large numbers of files with S3 pagination; memory is the only limit.
- New directory-to-directory dsync command: a better, standalone implementation that replaces the old sync command, which was built on top of the get/put/mv commands. --delete-removed works for all cases, including local to s3, s3 to local, and s3 to s3. The sync command preserves its old behavior in this version for compatibility.
- Support for S3-compatible storage services such as DreamHost and Cloudian using --endpoint-url (Community Supported Beta Feature).
- Tested on Python 2.7, 3.6, 3.7, 3.8, 3.9, and nightly.
- Special thanks to onera.com for supporting s4cmd.
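For example, to apply server-side encryption to every s4cmd invocation and prune stale temporary files, something like the following should work (the bucket and file names here are hypothetical placeholders):

export S4CMD_OPTS='--API-ServerSideEncryption=AES256'
s4cmd put backup.tar.gz s3://my-bucket/backups/
s4cmd del --last-modified-before='2 months ago' s3://my-bucket/tmp/*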
S4cmd is a command-line utility for accessing Amazon S3, inspired by s3cmd.
We have used s3cmd heavily for a number of scripted, data-intensive applications. However, as the need for a variety of small improvements arose, we created our own implementation, s4cmd. It is intended as an alternative to s3cmd, with enhanced performance, better support for large files, and a number of additional features and fixes that we have found useful.
It strives to be compatible with the most common usage scenarios for s3cmd. It does not offer exact drop-in compatibility, due to a number of corner cases where different behavior seems preferable, or for bugfixes.
S4cmd supports the regular commands you might expect for fetching and storing files in S3: ls, put, get, cp, mv, sync, del, du.
The main features that distinguish s4cmd are:
- Simple (less than 1500 lines of code) and implemented in pure Python, based on the widely used Boto3 library.
- Multi-threaded/multi-connection implementation for enhanced performance on all commands. As with many network-intensive applications (like web browsers), accessing S3 in a single-threaded way is often significantly less efficient than having multiple connections actively transferring data at once. In general, we get a 2X boost to upload/download speeds from this.
- Path handling: S3 is not a traditional filesystem with built-in support for directory structure: internally, there are only objects, not directories or folders. However, most people use S3 in a hierarchical structure, with paths separated by slashes, to emulate traditional filesystems. S4cmd follows conventions to more closely replicate the behavior of traditional filesystems in certain corner cases. For example, "ls" and "cp" work much like in Unix shells, to avoid odd surprises. (For examples see compatibility notes below.)
- Wildcard support: Wildcards, including multiple levels of wildcards, like in Unix shells, are handled. For example: s3://my-bucket/my-folder/20120512/*/*chunk00?1?
- Automatic retry: failed tasks are executed again after a delay.
- Multi-part upload support for files larger than 5GB.
- Proper handling of MD5s with respect to multi-part uploads (for the sordid details of this, see below).
- Miscellaneous enhancements and bugfixes:
- Partial file creation: Avoid creating empty target files if source does not exist. Avoid creating partial output files when commands are interrupted.
- General thread safety: Tool can be interrupted or killed at any time without being blocked by child threads or leaving incomplete or corrupt files in place.
- Ensure exit code is nonzero on all failure scenarios (a very important feature in scripts).
- Expected handling of symlinks (they are followed).
- Support for both s3:// and s3n:// prefixes (the latter is common with Amazon Elastic MapReduce).
Limitations:
- No CloudFront or other feature support.
- Currently, we simulate sync with get and put with --recursive --force --sync-check.
You can install s4cmd from PyPI.
pip install s4cmd
- Copy or create a symbolic link so you can run s4cmd.py as s4cmd. (It is just a single file!)
- If you already have a ~/.s3cfg file from configuring s3cmd, credentials from this file will be used. Otherwise, set the S3_ACCESS_KEY and S3_SECRET_KEY environment variables to contain your S3 credentials.
- If no keys are provided, but an IAM role is associated with the EC2 instance, it will be used transparently.
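For example, to provide credentials through the environment (the key values shown are placeholders):

export S3_ACCESS_KEY=<your-access-key>
export S3_SECRET_KEY=<your-secret-key>
s4cmd ls s3://my-bucket/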
s4cmd ls [path]
List all contents of a directory.
- -r/--recursive: recursively display all contents including subdirectories under the given path.
- -d/--show-directory: show the directory entry instead of its content.
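For example, to recursively list everything under a prefix (the bucket name is a placeholder):

s4cmd ls -r s3://my-bucket/logs/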
s4cmd put [source] [target]
Upload local files to S3.
- -r/--recursive: also upload directories recursively.
- -s/--sync-check: check md5 hash to avoid uploading the same content.
- -f/--force: override existing file instead of showing error message.
- -n/--dry-run: emulate the operation without real upload.
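For example, to recursively upload a directory while skipping files whose md5 already matches the remote copy (the paths are placeholders):

s4cmd put -r -s my-local-dir s3://my-bucket/my-dir/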
s4cmd get [source] [target]
Download files from S3 to the local filesystem.
- -r/--recursive: also download directories recursively.
- -s/--sync-check: check md5 hash to avoid downloading the same content.
- -f/--force: override existing file instead of showing error message.
- -n/--dry-run: emulate the operation without real download.
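For example, to recursively download a directory, overwriting local files that already exist (the paths are placeholders):

s4cmd get -r -f s3://my-bucket/my-dir/ my-local-dir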
s4cmd dsync [source dir] [target dir]
Synchronize the contents of two directories. The directory can be either local or remote, but currently it does not support two local directories.
- -r/--recursive: also sync directories recursively.
- -s/--sync-check: check md5 hash to avoid syncing the same content.
- -f/--force: override existing file instead of showing error message.
- -n/--dry-run: emulate the operation without real sync.
- --delete-removed: delete files not in source directory.
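For example, to mirror a local directory to S3, removing remote files that no longer exist locally (the paths are placeholders):

s4cmd dsync -r --delete-removed my-local-dir s3://my-bucket/my-dir/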
s4cmd sync [source] [target]
(Obsolete, use dsync instead.) Synchronize the contents of two directories. The directory can be either local or remote, but currently it does not support two local directories. This command simply invokes the get/put/mv commands.
- -r/--recursive: also sync directories recursively.
- -s/--sync-check: check md5 hash to avoid syncing the same content.
- -f/--force: override existing file instead of showing error message.
- -n/--dry-run: emulate the operation without real sync.
- --delete-removed: delete files not in source directory. Only works when syncing local directory to s3 directory.
s4cmd cp [source] [target]
Copy a file or a directory from one S3 location to another.
- -r/--recursive: also copy directories recursively.
- -s/--sync-check: check md5 hash to avoid copying the same content.
- -f/--force: override existing file instead of showing error message.
- -n/--dry-run: emulate the operation without real copy.
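For example, to copy a directory tree between two S3 locations (the paths are placeholders):

s4cmd cp -r s3://my-bucket/dirA/ s3://my-bucket/dirB/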
s4cmd mv [source] [target]
Move a file or a directory from one S3 location to another.
- -r/--recursive: also move directories recursively.
- -s/--sync-check: check md5 hash to avoid moving the same content.
- -f/--force: override existing file instead of showing error message.
- -n/--dry-run: emulate the operation without real move.
s4cmd del [path]
Delete files or directories on S3.
- -r/--recursive: also delete directories recursively.
- -n/--dry-run: emulate the operation without real delete.
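Because deletion is destructive, a dry run first is a sensible habit (the path is a placeholder):

s4cmd del -r -n s3://my-bucket/tmp/
s4cmd del -r s3://my-bucket/tmp/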
s4cmd du [path]
Get the size of the given directory.
Available parameters:
- -r/--recursive: also add sizes of sub-directories recursively.
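For example (the bucket name is a placeholder):

s4cmd du -r s3://my-bucket/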
s4cmd control options, shared by all commands:
- -p/--config: path to s3cfg config file
- -f/--force: force overwrite files when download or upload
- -r/--recursive: recursively checking subdirectories
- -s/--sync-check: check file md5 before download or upload
- -n/--dry-run: trial run without actual download or upload
- -t/--retry: number of retries before giving up
- --retry-delay: seconds to sleep between retries
- -c/--num-threads: number of concurrent threads
- --endpoint-url: endpoint url used in boto3 client
- -d/--show-directory: show directory instead of its content
- --ignore-empty-source: ignore empty source from s3
- --use-ssl: (obsolete) use SSL connection to S3
- --verbose: verbose output
- --debug: debug output
- --validate: (obsolete) validate lookup operation
- -D/--delete-removed: delete remote files that do not exist in source after sync
- --multipart-split-size: size in bytes to split multipart transfers
- --max-singlepart-download-size: files with size (in bytes) greater than this will be downloaded in multipart transfers
- --max-singlepart-upload-size: files with size (in bytes) greater than this will be uploaded in multipart transfers
- --max-singlepart-copy-size: files with size (in bytes) greater than this will be copied in multipart transfers
- --batch-delete-size: number of files (<1000) to be combined in batch delete
- --last-modified-before: condition on files whose last modified dates are before the given parameter
- --last-modified-after: condition on files whose last modified dates are after the given parameter
These options are translated directly into boto3 API calls. Each option is passed only to the APIs that accept it as a parameter. For example, --API-ServerSideEncryption applies to put_object and create_multipart_upload, but not to list_buckets or get_object; therefore, providing --API-ServerSideEncryption to s4cmd ls has no effect.
For more information, please see the boto3 S3 documentation: http://boto3.readthedocs.io/en/latest/reference/services/s3.html
- --API-ACL: The canned ACL to apply to the object.
- --API-CacheControl: Specifies caching behavior along the request/reply chain.
- --API-ContentDisposition: Specifies presentational information for the object.
- --API-ContentEncoding: Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field.
- --API-ContentLanguage: The language the content is in.
- --API-ContentMD5: The base64-encoded 128-bit MD5 digest of the part data.
- --API-ContentType: A standard MIME type describing the format of the object data.
- --API-CopySourceIfMatch: Copies the object if its entity tag (ETag) matches the specified tag.
- --API-CopySourceIfModifiedSince: Copies the object if it has been modified since the specified time.
- --API-CopySourceIfNoneMatch: Copies the object if its entity tag (ETag) is different than the specified ETag.
- --API-CopySourceIfUnmodifiedSince: Copies the object if it hasn't been modified since the specified time.
- --API-CopySourceRange: The range of bytes to copy from the source object. The range value must use the form bytes=first-last, where first and last are the zero-based byte offsets to copy. For example, bytes=0-9 indicates that you want to copy the first ten bytes of the source. You can copy a range only if the source object is greater than 5 GB.
- --API-CopySourceSSECustomerAlgorithm: Specifies the algorithm to use when decrypting the source object (e.g., AES256).
- --API-CopySourceSSECustomerKeyMD5: Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure the encryption key was transmitted without error. Note that this parameter is automatically populated if it is not provided; including it is not required.
- --API-CopySourceSSECustomerKey: Specifies the customer-provided encryption key for Amazon S3 to use to decrypt the source object. The encryption key provided in this header must be one that was used when the source object was created.
- --API-ETag: Entity tag returned when the part was uploaded.
- --API-Expires: The date and time at which the object is no longer cacheable.
- --API-GrantFullControl: Gives the grantee READ, READ_ACP, and WRITE_ACP permissions on the object.
- --API-GrantReadACP: Allows grantee to read the object ACL.
- --API-GrantRead: Allows grantee to read the object data and its metadata.
- --API-GrantWriteACP: Allows grantee to write the ACL for the applicable object.
- --API-IfMatch: Return the object only if its entity tag (ETag) is the same as the one specified; otherwise, return a 412 (precondition failed).
- --API-IfModifiedSince: Return the object only if it has been modified since the specified time; otherwise, return a 304 (not modified).
- --API-IfNoneMatch: Return the object only if its entity tag (ETag) is different from the one specified; otherwise, return a 304 (not modified).
- --API-IfUnmodifiedSince: Return the object only if it has not been modified since the specified time; otherwise, return a 412 (precondition failed).
- --API-Metadata: A map (as a JSON string) of metadata to store with the object in S3.
- --API-MetadataDirective: Specifies whether the metadata is copied from the source object or replaced with metadata provided in the request.
- --API-MFA: The concatenation of the authentication device's serial number, a space, and the value that is displayed on your authentication device.
- --API-RequestPayer: Confirms that the requester knows that she or he will be charged for the request. Bucket owners need not specify this parameter in their requests. Documentation on downloading objects from requester-pays buckets can be found at http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html
- --API-ServerSideEncryption: The server-side encryption algorithm used when storing this object in S3 (e.g., AES256, aws:kms).
- --API-SSECustomerAlgorithm: Specifies the algorithm to use when encrypting the object (e.g., AES256).
- --API-SSECustomerKeyMD5: Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure the encryption key was transmitted without error. Note that this parameter is automatically populated if it is not provided; including it is not required.
- --API-SSECustomerKey: Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded; Amazon does not store the encryption key. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm header.
- --API-SSEKMSKeyId: Specifies the AWS KMS key ID to use for object encryption. All GET and PUT requests for an object protected by AWS KMS will fail if not made via SSL or using SigV4. Documentation on configuring any of the officially supported AWS SDKs and CLI can be found at http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html#specify-signature-version
- --API-StorageClass: The type of storage to use for the object. Defaults to 'STANDARD'.
- --API-VersionId: VersionId used to reference a specific version of the object.
- --API-WebsiteRedirectLocation: If the bucket is configured as a website, redirects requests for this object to another object in the same bucket or to an external URL. Amazon S3 stores the value of this header in the object metadata.
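For example, to upload a file with AES256 server-side encryption and an explicit storage class (the file and bucket names are placeholders):

s4cmd put --API-ServerSideEncryption=AES256 --API-StorageClass=STANDARD_IA myfile s3://my-bucket/myfile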
Simply enable the --debug option to see the full log of s4cmd. If you also need to check which boto3 APIs s4cmd invokes, you can run:
s4cmd --debug [op] .... 2>&1 >/dev/null | grep S3APICALL
to see all the parameters sent to the S3 API.
Prefix matching: in s3cmd, unlike in traditional filesystems, a path prefix matches listed objects:
>> s3cmd ls s3://my-bucket/ch
s3://my-bucket/charlie/
s3://my-bucket/chyang/
In s4cmd, behavior is the same as with a Unix shell:
>> s4cmd ls s3://my-bucket/ch
(empty)
To get prefix behavior, use explicit wildcards instead: s4cmd ls s3://my-bucket/ch*
Similarly, the sync and cp commands emulate the Unix cp command, so directory-to-directory sync uses different syntax:
>> s3cmd sync s3://bucket/path/dirA s3://bucket/path/dirB/
will copy the contents of dirA into dirB.
>> s4cmd sync s3://bucket/path/dirA s3://bucket/path/dirB/
will copy dirA itself into dirB.
To achieve the s3cmd behavior, use wildcards:
s4cmd sync s3://bucket/path/dirA/* s3://bucket/path/dirB/
Note that s4cmd does not support rsync's convention in which a trailing slash distinguishes copying dirA itself from copying dirA/*; use an explicit wildcard instead.
No automatic overwrite for the put command: s4cmd put fileA s3://bucket/path/fileB will return an error if fileB already exists. Use -f to force overwriting, just as with the get command.
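For example (the file and bucket names are placeholders):

s4cmd put fileA s3://my-bucket/path/fileB     # fails if fileB already exists
s4cmd put -f fileA s3://my-bucket/path/fileB  # overwrites fileB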
Bugfixes for handling of non-existent paths: s3cmd often creates empty files when specified paths do not exist. For example, s3cmd get s3://my-bucket/no_such_file downloads an empty file, while s4cmd get s3://my-bucket/no_such_file returns an error; likewise, s3cmd put no_such_file s3://my-bucket/ uploads an empty file, while s4cmd put no_such_file s3://my-bucket/ returns an error.
Etags, MD5s and multi-part uploads: Traditionally, the etag of an object in S3 has been its MD5. However, this changed with the introduction of S3 multi-part uploads; in this case the etag is still a unique ID, but it is not the MD5 of the file. Amazon has not revealed the definition of the etag in this case, so there is no way we can calculate and compare MD5s based on the etag header in general. The workaround we use is to upload the MD5 as a supplemental content header (called "md5", instead of "etag"). This enables s4cmd to check the MD5 hash before upload or download. The only limitation is that this only works for files uploaded via s4cmd. Programs that do not understand this header will still have to download and verify the MD5 directly.
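For instance, a quick way to inspect the supplemental header on an object uploaded by s4cmd is a head-object call with a generic S3 client such as the AWS CLI (the bucket and key are placeholders); the "md5" key should appear in the returned Metadata map:

aws s3api head-object --bucket my-bucket --key path/to/file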
Unimplemented features:
- CloudFront or other feature support beyond basic S3 access.
Credits:
- Bloomreach http://www.bloomreach.com
- Onera http://www.onera.com