# chunker

A minimalistic alternative to rayon’s par_iter/par_chunks + inner iteration + reduce pattern for parallel processing of slices, with a progress bar by default. Despite its name, this crate is really tiny: only 80 SLOC.
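For reference, here is roughly the rayon pattern this crate condenses into a single call, using the same sum-of-squares task as the first example below (rayon is shown only for comparison and is not part of this crate; the chunk size is arbitrary):

```rust
use rayon::prelude::*;

fn sum_of_squares(input: &[i64]) -> i64 {
    input
        .par_chunks(1024)                                        // split the slice across threads
        .map(|chunk| chunk.iter().map(|&i| i * i).sum::<i64>())  // inner iteration over each chunk
        .reduce(|| 0, |a, b| a + b)                              // combine per-chunk results
}
```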

## Usage

```
cargo add chunker
```

Call chunker::run or chunker::run_mut with these arguments (see the examples below):

- the input slice to process,
- a chunker::Config (chunker::Config::default() is a reasonable starting point),
- a closure producing each thread's initial state,
- a closure folding one item from the slice into that state,
- a closure that combines the per-thread results it receives from a channel into the final value, which the call returns.

## Example

Sum of squares:

```rust
chunker::run(
    &input,
    chunker::Config::default(),
    || 0,
    |thread_sum, i| *thread_sum += i * i,
    |rx| rx.iter().sum::<i64>(),
)
```
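A minimal complete program around that call might look like this (a sketch; it assumes, as the word-count example below shows, that chunker::run returns the value produced by the final closure):

```rust
fn main() {
    // Hypothetical input; any slice works.
    let input: Vec<i64> = (1..=1_000).collect();

    let total = chunker::run(
        &input,
        chunker::Config::default(),
        || 0i64,                               // each thread starts with its own sum
        |thread_sum, i| *thread_sum += i * i,  // fold one item into the thread's sum
        |rx| rx.iter().sum::<i64>(),           // combine the per-thread sums
    );

    println!("{total}");
}
```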

Simple parallel implementation of word counting:

```rust
use std::{collections::HashMap, io::{stdin, stdout, Read, Write, BufWriter}, cmp::Reverse};

fn main() {
    let mut text = String::new();
    stdin().read_to_string(&mut text).unwrap();
    let lower = text.to_ascii_lowercase();
    let lines: Vec<_> = lower.lines().collect();

    let word_counts = chunker::run(
        &lines,
        chunker::Config::default(),
        || HashMap::<&str, u32>::new(), // per-thread word counts
        |counts, line| {
            for word in line.split_whitespace() {
                *counts.entry(word).or_default() += 1;
            }
        },
        // merge the per-thread maps into one
        |rx| rx.into_iter().reduce(|mut word_counts, counts| {
            for (word, count) in counts {
                *word_counts.entry(word).or_default() += count;
            }

            word_counts
        }).unwrap(),
    );

    let mut sorted_word_counts: Vec<_> = word_counts.iter().collect();
    sorted_word_counts.sort_unstable_by_key(|&(_, count)| Reverse(count));
    let mut stdout = BufWriter::new(stdout().lock());

    for (word, count) in sorted_word_counts {
        writeln!(stdout, "{word} {count}").unwrap();
    }
}
```
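Assuming the program above is saved as examples/count_words.rs in the crate, it can be built in release mode before benchmarking:

```
$ cargo build --release --examples
```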

```
$ hyperfine 'target/release/examples/count_words <kjvbible_x10.txt'
Benchmark 1: target/release/examples/count_words <kjvbible_x10.txt
  Time (mean ± σ):      84.0 ms ±   1.3 ms    [User: 314.6 ms, System: 28.2 ms]
  Range (min … max):    82.4 ms …  87.8 ms    34 runs
```