# find_duplicate_files

This program finds duplicate files by calculating their hash values.

The chosen hashing algorithm is BLAKE3.
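The general approach can be sketched in a few lines of Rust. The snippet below is only an illustration, not this program's actual implementation: it assumes the `blake3` and `walkdir` crates as dependencies, reads each file fully into memory for brevity, and groups paths by their BLAKE3 digest so that any group with more than one entry is a set of duplicates.

```rust
use std::{collections::HashMap, fs, io, path::PathBuf};

use walkdir::WalkDir;

fn main() -> io::Result<()> {
    // Map each BLAKE3 digest (as a hex string) to the files that produced it.
    let mut groups: HashMap<String, Vec<PathBuf>> = HashMap::new();

    for entry in WalkDir::new(".").into_iter().filter_map(Result::ok) {
        if entry.file_type().is_file() {
            // Whole file read into memory for brevity; a real tool
            // would hash large files in a streaming fashion.
            let bytes = fs::read(entry.path())?;
            let hash = blake3::hash(&bytes).to_hex().to_string();
            groups.entry(hash).or_default().push(entry.path().to_path_buf());
        }
    }

    // Any digest shared by more than one path identifies duplicate contents.
    for (hash, paths) in groups.iter().filter(|(_, paths)| paths.len() > 1) {
        println!("{hash}: {paths:#?}");
    }
    Ok(())
}
```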

To find duplicate files in a directory, run the command: `find_duplicate_files`

## Help

Type `find_duplicate_files -h` in the terminal to see the help messages and all available options:

```
find_duplicate_files -h

find duplicate files according to their blake3 hash

Usage: find_duplicate_files [OPTIONS]

Options:
  -f, --full_path    Prints full path of duplicate files, otherwise relative path
  -g, --generate     If provided, outputs the completion file for given shell [possible values: bash, elvish, fish, powershell, zsh]
  -j, --json         Print to output in json format
  -m, --max_depth    Set the maximum depth to search for duplicate files
  -o, --omit_hidden  Omit hidden files (starts with '.'), otherwise search all files
  -p, --path         Set the path where to look for duplicate files, otherwise use the current directory
  -s, --sort         Sort result by file size, otherwise sort by number of duplicate files
  -t, --time         Show total execution time
  -h, --help         Print help (see more with '--help')
  -V, --version      Print version
```
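These options can be combined. As an example (the target directory here is arbitrary), the call below searches `~/Documents`, omits hidden files, sorts the result by file size and reports the total execution time:

```
find_duplicate_files -p ~/Documents -o -s -t
```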

## Building

To build and install from source, run the following command:

```
cargo install find_duplicate_files
```

Another option is to clone/copy the project from github, compile and generate the executable:

```
git clone https://github.com/claudiofsr/find_duplicate_files.git
cd find_duplicate_files
cargo b -r && cargo install --path=.
```

## Mutually exclusive features: jwalk or walkdir

In general, jwalk (default) is faster than walkdir.

But if you prefer to use walkdir: `cargo b -r && cargo install --path=. --features walkdir`
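The sketch below is not this crate's actual source; it only illustrates the usual pattern behind such mutually exclusive features: the same helper is compiled against walkdir when that feature is enabled and against jwalk otherwise, so callers are unaffected by the choice (the function name `collect_files` is made up for the example).

```rust
use std::path::{Path, PathBuf};

/// Collect every regular file below `root` with the walkdir backend.
#[cfg(feature = "walkdir")]
fn collect_files(root: &Path) -> Vec<PathBuf> {
    walkdir::WalkDir::new(root)
        .into_iter()
        .filter_map(Result::ok)
        .filter(|entry| entry.file_type().is_file())
        .map(|entry| entry.path().to_path_buf())
        .collect()
}

/// Collect every regular file below `root` with jwalk, which walks
/// directories in parallel and is therefore the faster default.
#[cfg(not(feature = "walkdir"))]
fn collect_files(root: &Path) -> Vec<PathBuf> {
    jwalk::WalkDir::new(root)
        .into_iter()
        .filter_map(Result::ok)
        .filter(|entry| entry.file_type.is_file())
        .map(|entry| entry.path())
        .collect()
}

fn main() {
    // Whichever backend was compiled in, the calling code stays the same.
    let files = collect_files(Path::new("."));
    println!("found {} files", files.len());
}
```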