
A fast parser for fastq.

This library can process fastq files at about the speed of coreutils `wc -l` (about 2GB/s on my laptop; seqan manages about 150MB/s). It also makes it easy to distribute the processing of fastq records across many cores, without losing much of the performance.

See the documentation for details.

## Benchmarks

We compare this library with the fastq parser in rust-bio, with the C++ library seqan 2.0.0, with `kseq.h`, and with `wc -l`.

We test four scenarios:

- A 2GB test file, uncompressed on a ramdisk. The program counts the number of records in the file.
- The test file is lz4 compressed on disk, with an empty page cache. Again, the program just counts the number of records.
- The test file is lz4 compressed on disk with an empty page cache, but the program sends the records to a different thread, which counts them.
- The same as scenario 3, but with gzip compression.

All measurements are taken with a 2GB test file (TODO describe!) on a Haswell i7-4510U @ 2GHz. Each program is executed three times (clearing the OS page cache where appropriate) and the best time is used. Libraries without native support for a compression algorithm get the input via a pipe from `zcat` or `lz4 -d`. The C and C++ programs are compiled with gcc 6.2.1 with the flags `-O3 -march=native`. All programs can be found in the examples directory of this repository.

|          | ramdisk | lz4     | lz4 + thread | gzip    | gzip + thread |
| -------- | ------- | ------- | ------------ | ------- | ------------- |
| wc -l    | 2.3GB/s | 1.2GB/s | NA           | 300MB/s | NA            |
| fastq    | 1.9GB/s | 1.9GB/s | 1.6GB/s      | 650MB/s | 620MB/s       |
| rust-bio | 730MB/s | NA      | 250MB/s      | NA      | NA            |
| seqan    | 150MB/s | NA      | NA           | NA      | NA            |
| kseq.h   | 980MB/s | 680MB/s | NA           | NA      | NA            |


## Examples

Count the number of fastq records that contain an N

```rust
use fastq::{Parser, Record};

let reader = ::std::io::stdin();
let mut parser = Parser::new(reader);
let mut total: usize = 0;

parser.each(|record| {
    if record.seq().contains(&b'N') {
        total += 1;
    }
    true // continue parsing
}).unwrap();
println!("{}", total);
```

And an (unnecessarily) parallel version of the same:

```rust
const N_THREADS: usize = 2;

use fastq::{Parser, Record};

let reader = ::std::io::stdin();
let parser = Parser::new(reader);

let results: Vec<u64> = parser.parallel_each(N_THREADS, |record_sets| {
    // We can initialize thread local variables here.
    let mut thread_total = 0;

// We iterate over sets of records
for record_set in record_sets {
    for record in record_set.iter() {
        if record.seq().contains(&b'N') {
            thread_total += 1;
        }
    }
}

// The values we return (it can be any type implementing `Send`)
// are collected from the different threads by
// `parser.parallel_each` and returned. See doc for a description of
// the error handling.
thread_total

}).expect("Invalid fastq file");

// Add up the results from the individual worker threads
let total: u64 = results.iter().sum();
println!("{}", total);
```