Utility to walk an S3 hierarchy. An analog of find for AWS S3.
```sh
USAGE:
    s3find [OPTIONS] <path> [SUBCOMMAND]

FLAGS:
    -h, --help       Prints help information
    -V, --version    Prints version information

OPTIONS:
        --aws-access-key <aws_access_key>
            AWS access key. Optional

        --aws-region <aws_region>
            The region to use. Default value is us-east-1 [default: us-east-1]

        --aws-secret-key <aws_secret_key>
            AWS secret key. Optional
        --size <bytes_size>...
            File size for match:
                5k  - exact match 5k,
                +5k - bigger than 5k,
                -5k - smaller than 5k,

            Possible file size units are as follows:
                k - kilobytes (1024 bytes)
                M - megabytes (1024 kilobytes)
                G - gigabytes (1024 megabytes)
                T - terabytes (1024 gigabytes)
                P - petabytes (1024 terabytes)

        --iname <ipatern>...
            Case-insensitive glob pattern for match, can be multiple

        --name <npatern>...
            Glob pattern for match, can be multiple

        --regex <rpatern>...
            Regex pattern for match, can be multiple

        --mtime <time>...
            Modification time for match, a time period:
                +5d - for period from now-5d to now
                -5d - for period before now-5d

            Possible time units are as follows:
                s - seconds
                m - minutes
                h - hours
                d - days
                w - weeks
            Can be multiple, but should be overlapping
ARGS:
    <path>    S3 path to walk through. It should be s3://bucket/path

SUBCOMMANDS:
    -copy        Copy matched keys to a S3 destination
    -delete      Delete matched keys
    -download    Download matched keys
    -exec        Exec any shell program with every key
    -ls          Print the list of matched keys
    -lstags      Print the list of matched keys with tags
    -move        Move matched keys to a S3 destination
    -print       Extended print with detail information
    -public      Make the matched keys publicly available (read-only)
    -tags        Set the tags (overwrite) for the matched keys
    help         Prints this message or the help of the given subcommand(s)
The authorization flow is the following chain:
  * use credentials from arguments provided by users
  * use environment variable credentials: AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
  * use credentials via the AWS profile file.
    The profile can be set via the environment variable AWS_PROFILE.
    The profile file can be set via the environment variable AWS_SHARED_CREDENTIALS_FILE.
  * use the AWS instance IAM profile
  * use the AWS container IAM profile
```
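As an example of the environment-variable step in that chain, credentials can be exported before invoking s3find; the key values, bucket, and path below are placeholders.

```sh
# Placeholder credentials picked up from the environment (second step of the chain)
export AWS_ACCESS_KEY_ID='AKIAEXAMPLE'
export AWS_SECRET_ACCESS_KEY='example-secret-key'

# List matched keys using those credentials
s3find 's3://example-bucket/example-path' --name '*' -ls
```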
Print the matched keys with extended details

```sh
s3find 's3://example-bucket/example-path' --name '*' -print
```
Delete the matched keys

```sh
s3find 's3://example-bucket/example-path' --name '*' -delete
```
List the matched keys

```sh
s3find 's3://example-bucket/example-path' --name '*' -ls
```
List the matched keys with tags

```sh
s3find 's3://example-bucket/example-path' --name '*' -lstags
```
Run a shell command for every matched key

```sh
s3find 's3://example-bucket/example-path' --name '*' -exec 'echo {}'
```
Download the matched keys

```sh
s3find 's3://example-bucket/example-path' --name '*' -download
```
Set (overwrite) tags on the matched keys

```sh
s3find 's3://example-bucket/example-path' --name '9' -tags 'key:value' 'env:staging'
```
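A natural follow-up, sketched here with the same placeholder bucket and path, is to confirm the new tags with the -lstags subcommand:

```sh
# Verify the tags that were just applied (placeholder bucket/path)
s3find 's3://example-bucket/example-path' --name '9' -lstags
```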
Make the matched keys publicly available (read-only)

```sh
s3find 's3://example-bucket/example-path' --name '9' -public
```
Case-insensitive glob matching

```sh
s3find 's3://example-bucket/example-path' --iname '*s*' -ls
```
Regex matching

```sh
s3find 's3://example-bucket/example-path' --regex '1$' -print
```
Files with the exact size of 0 bytes

```sh
s3find 's3://example-bucket/example-path' --size 0 -print
```
Files bigger than 10 megabytes

```sh
s3find 's3://example-bucket/example-path' --size +10M -print
```
Files smaller than 10 kilobytes

```sh
s3find 's3://example-bucket/example-path' --size -10k -print
```
Files matched by modification time

```sh
s3find 's3://example-bucket/example-path' --mtime 10 -print
```
Files modified within the last 10 minutes

```sh
s3find 's3://example-bucket/example-path' --mtime +10m -print
```
Files modified more than 10 hours ago

```sh
s3find 's3://example-bucket/example-path' --mtime -10h -print
```
Files with size between 10 and 20 bytes

```sh
s3find 's3://example-bucket/example-path' --size +10 --size -20 -print
```
Combine size and name filters

```sh
s3find 's3://example-bucket/example-path' --size +10 --name '*file*' -print
```
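Any of the filters can be combined in the same way; for instance, a sketch mixing --mtime and --name (the bucket, path, and pattern are placeholders):

```sh
# Keys older than one day whose names match '*.log' (placeholder bucket/path)
s3find 's3://example-bucket/example-path' --mtime -1d --name '*.log' -print
```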
Requirements: Rust and Cargo
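If a Rust toolchain is not already installed, rustup is the usual way to get one (a sketch for Unix-like shells; see https://rustup.rs):

```sh
# Install Rust and Cargo via rustup (assumes a Unix-like shell with curl)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```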
```sh
# build from a local checkout
cargo build --release

# install from a local checkout
cargo install --path .

# install from the GitHub repository
cargo install --git https://github.com/AnderEnder/s3find-rs

# install from crates.io
cargo install s3find
```
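Once installed (Cargo places binaries in ~/.cargo/bin by default), a quick sanity check is to print the help text:

```sh
# Confirm the binary is on PATH and working
s3find --help
```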