A small utility to count tokens and, optionally, character ngrams in a whitespace-tokenized corpus.
It outputs frequency-sorted lists of items.
```bash
$ corpus-count -c /path/to/corpus.txt -n /path/to/ngramoutput.txt \
    -t /path/to/tokenoutput.txt
$ corpus-count -c /path/to/corpus.txt
$ corpus-count < /path/to/corpus.txt
$ corpus-count -c /path/to/corpus.txt -n /path/to/ngramoutput.txt \
    -t /path/to/tokenoutput.txt --tokenmin 30 --ngrammin 30
$ corpus-count -c /path/to/corpus.txt -n /path/to/ngramoutput.txt \
    -t /path/to/tokenoutput.txt --tokenmin 30 --ngrammin 30 --filter_first
```
Ngram counting is enabled by giving an argument to the `--ngram_count` (or `-n`) flag.
Without the `--filter_first` flag, ngram counts are determined before tokens are
filtered, so tokens that appear fewer than `--token_min` times can still contribute
to the count of an ngram. If this flag is set, tokens are filtered first and only
in-vocabulary tokens influence the counts of ngrams.
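The `--filter_first` semantics can be sketched as follows. This is a hypothetical illustration, not the crate's actual code; the function names `count_tokens` and `ngram_source` are made up for this example:

```rust
use std::collections::HashMap;

// Count how often each token occurs in the corpus.
fn count_tokens(corpus: &[&str]) -> HashMap<String, usize> {
    let mut counts = HashMap::new();
    for &tok in corpus {
        *counts.entry(tok.to_string()).or_insert(0) += 1;
    }
    counts
}

// Decide which tokens feed the ngram counter. Without filter_first, every
// token contributes; with it, tokens occurring fewer than token_min times
// are dropped before any ngrams are extracted.
fn ngram_source<'a>(corpus: &[&'a str], token_min: usize, filter_first: bool) -> Vec<&'a str> {
    if !filter_first {
        return corpus.to_vec();
    }
    let counts = count_tokens(corpus);
    corpus
        .iter()
        .copied()
        .filter(|t| counts[*t] >= token_min)
        .collect()
}

fn main() {
    let corpus = ["a", "a", "b"];
    // With filter_first and token_min = 2, "b" (count 1) never reaches
    // the ngram counter; without it, all three tokens do.
    println!("{:?}", ngram_source(&corpus, 2, true));
    println!("{:?}", ngram_source(&corpus, 2, false));
}
```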
By default, tokens are bracketed with `<` and `>` before ngrams are extracted.
This affects only the ngrams, not the tokens themselves, and can be toggled with
the `--no_bracket` flag. The minimum and maximum ngram lengths can be set with
the `--min_n` and `--max_n` flags.
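Bracketed character-ngram extraction can be sketched like this. It is a minimal illustration of the idea, not the crate's implementation, and `char_ngrams` is an invented name:

```rust
// Extract all character ngrams of length min_n..=max_n from a token,
// optionally bracketing it with '<' and '>' first so that ngrams at the
// start and end of a word are distinguishable.
fn char_ngrams(token: &str, min_n: usize, max_n: usize, bracket: bool) -> Vec<String> {
    let bracketed = if bracket {
        format!("<{}>", token)
    } else {
        token.to_string()
    };
    let chars: Vec<char> = bracketed.chars().collect();
    let mut ngrams = Vec::new();
    for n in min_n..=max_n {
        if n > chars.len() {
            break;
        }
        for window in chars.windows(n) {
            ngrams.push(window.iter().collect());
        }
    }
    ngrams
}

fn main() {
    // "cat" becomes "<cat>"; its trigrams are "<ca", "cat", "at>".
    println!("{:?}", char_ngrams("cat", 3, 3, true));
}
```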
Rust is required, and is most easily installed through https://rustup.rs.

```bash
cargo install corpus-count
```