qsv is a command line program for indexing, slicing, analyzing, splitting, enriching,
validating & joining CSV files. Commands are simple, fast and composable:
NOTE: qsv is a fork of the popular xsv utility, merging several pending PRs since xsv 0.13.0's release, along with additional features & commands for data-wrangling. See FAQ for more details. (NEW and EXTENDED commands are marked accordingly).
| Command | Description |
| --- | --- |
| apply | Apply a series of string, profanity, similarity, date, currency & geocoding transformations to a CSV column. (NEW) |
| behead | Drop headers from a CSV. (NEW) |
| cat | Concatenate CSV files by row or by column. |
| count[^1] | Count the rows in a CSV file. (Instantaneous with an index.) |
| dedup[^2] | Remove redundant rows. (NEW) |
| enum | Add a new column enumerating rows by adding a column of incremental or uuid identifiers. Can also be used to copy a column or fill a new column with a constant value. (NEW) |
| exclude[^1] | Removes a set of CSV data from another set based on the specified columns. (NEW) |
| explode | Explode rows into multiple ones by splitting a column value based on the given separator. (NEW) |
| fill | Fill empty values. (NEW) |
| fixlengths | Force a CSV to have same-length records by either padding or truncating them. |
| flatten | A flattened view of CSV records. Useful for viewing one record at a time, e.g. qsv slice -i 5 data.csv \| qsv flatten. |
| fmt | Reformat a CSV with different delimiters, record terminators or quoting rules. (Supports ASCII delimited data.) (EXTENDED) |
| foreach | Loop over a CSV to execute bash commands. (*nix only) (NEW) |
| frequency[^1] | Build frequency tables of each column. (Uses parallelism to go faster if an index is present.) |
| generate | Generate test data by profiling a CSV using a Markov decision process. (NEW) |
| headers | Show the headers of a CSV. Or show the intersection of all headers between many CSV files. |
| index | Create an index for a CSV. This is very quick & provides constant time indexing into the CSV file. |
| input | Read a CSV with exotic quoting/escaping rules. |
| join[^1] | Inner, outer, cross, anti & semi joins. Uses a simple hash index to make it fast. (EXTENDED) |
| jsonl | Convert newline-delimited JSON to CSV. (NEW) |
| lua | Execute a Lua script over CSV lines to transform, aggregate or filter them. (NEW) |
| partition | Partition a CSV based on a column value. |
| pseudo | Pseudonymise the values of the given column by replacing them with an incremental identifier. (NEW) |
| rename | Rename the columns of a CSV efficiently. (NEW) |
| replace | Replace CSV data using a regex. (NEW) |
| reverse[^2] | Reverse the order of rows in a CSV. Unlike the sort --reverse command, it preserves the order of rows with the same key. (NEW) |
| sample[^1] | Randomly draw rows from a CSV using reservoir sampling (i.e., use memory proportional to the size of the sample). (EXTENDED) |
| search | Run a regex over a CSV. Applies the regex to each field individually & shows only matching rows. (EXTENDED) |
| searchset | Run multiple regexes over a CSV in a single pass. Applies the regexes to each field individually & shows only matching rows. (NEW) |
| select | Select or re-order columns. (EXTENDED) |
| slice[^1] | Slice rows from any part of a CSV. When an index is present, this only has to parse the rows in the slice (instead of all rows leading up to the start of the slice). |
| sort | Sorts CSV data alphabetically, numerically, reverse or random (with optional seed) order. (EXTENDED) |
| split[^1] | Split one CSV file into many CSV files of N chunks. |
| stats[^1][^3] | Show basic types & statistics of each column in a CSV. (i.e., sum, min/max, min/max length, mean, stddev, variance, quartiles, IQR, lower/upper fences, skew, median, mode, cardinality & nullcount) (EXTENDED) |
| table[^2] | Show aligned output of a CSV using elastic tabstops. (EXTENDED) |
| transpose[^2] | Transpose rows/columns of a CSV. (NEW) |
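To illustrate how these commands compose, here is a hypothetical pipeline (the file `data.csv` and the column names `City` and `State` are illustrative, not from the source):

```shell
# select two columns, keep only rows whose State column matches a regex,
# then align the output with elastic tabstops
qsv select City,State data.csv | qsv search -s State 'NY' | qsv table
```

Because every command reads CSV on stdin and writes CSV on stdout, any stage can be swapped out or extended without temporary files.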
Binaries for Windows, Linux and macOS are available from GitHub.
Alternatively, you can compile from source by installing Cargo (Rust's package manager) and installing qsv using Cargo:

```bash
cargo install qsv
```
Compiling from this repository also works similarly:

```bash
git clone git://github.com/jqnatividad/qsv
cd qsv
cargo build --release
```

The compiled binary will end up in `./target/release/qsv`.
Building qsv requires Rust version 1.56+.
qsv's command-line options are quite extensive. Thankfully, since it uses docopt for CLI processing, we can take advantage of docopt.rs' tab completion support to make it easier to use qsv at the command-line (currently, only bash shell is supported):
```bash
cargo install docopt
echo "DOCOPT_WORDLIST_BIN=\"$(which docopt-wordlist)\"" >> $HOME/.bash_completion
echo "source \"$(pwd)/scripts/docopt-wordlist.bash\"" >> $HOME/.bash_completion
echo "complete -F _docopt_wordlist_commands qsv" >> $HOME/.bash_completion
```
qsv recognizes CSV (`.csv` file extension) and TSV files (`.tsv` and `.tab` file extensions). CSV files are assumed to have "," (comma) as a delimiter, and TSV files, "\t" (tab) as a delimiter. The delimiter is a single ASCII character that can be set either by the `--delimiter` command-line option or with the `QSV_DEFAULT_DELIMITER` environment variable.
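For example, both forms below read a semicolon-delimited file (a hypothetical session; the file `data.csv` is illustrative):

```shell
# per invocation, via the command-line option
qsv headers --delimiter ";" data.csv

# or for the whole session, via the environment variable
export QSV_DEFAULT_DELIMITER=";"
qsv headers data.csv
```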
- `QSV_DEFAULT_DELIMITER` - single ASCII character to use as delimiter. Overrides the `--delimiter` option. Defaults to "," (comma) for CSV files and "\t" (tab) for TSV files when not set. Note that this will also set the delimiter for qsv's output.
- `QSV_NO_HEADERS` - when set, the first row will NOT be interpreted as headers. Supersedes `QSV_TOGGLE_HEADERS`.
- `QSV_TOGGLE_HEADERS` - if set to `1`, toggles the header setting - i.e. inverts qsv's header behavior, with no headers being the default, so that setting `--no-headers` will actually mean headers will not be ignored.
- `QSV_MAX_JOBS` - number of jobs to use for parallelized commands (currently `frequency`, `split` and `stats`). If not set, max_jobs is set to the number of logical processors divided by four. See Parallelization for more info.
- `QSV_REGEX_UNICODE` - if set, makes the `search`, `searchset` and `replace` commands unicode-aware. For increased performance, these commands are not unicode-aware by default: they will ignore unicode values when matching and will panic when unicode characters are used in the regex.
- `QSV_RDR_BUFFER_CAPACITY` - set to change the reader buffer size (bytes - default when not set: 16384).
- `QSV_WTR_BUFFER_CAPACITY` - set to change the writer buffer size (bytes - default when not set: 65536).

Modern CPUs have various features that the Rust compiler can take advantage of to increase performance. If you want the compiler to take advantage of these CPU-specific speed-ups, set this environment variable BEFORE installing/compiling qsv:
On Linux and macOS:

```bash
export CARGO_BUILD_RUSTFLAGS='-C target-cpu=native'
```

On Windows PowerShell:

```powershell
$env:CARGO_BUILD_RUSTFLAGS='-C target-cpu=native'
```
Do note though that the resulting binary will only run on machines with the
same architecture as the machine you installed/compiled from.
To find out your CPU architecture and other valid values for `target-cpu`:

```bash
rustc --print target-cpus
rustc --print cfg | grep -i target_feature
rustc --print cfg -C target-cpu=native | grep -i target_feature
rustc --print target-features
```
By default, qsv uses an alternative allocator - mimalloc,
a performance-oriented allocator from Microsoft.
If you want to use the standard allocator, use the `--no-default-features` flag when installing/compiling qsv, e.g.:

```bash
cargo install qsv --no-default-features
```

or

```bash
cargo build --release --no-default-features
```
To find out what memory allocator qsv is using, run `qsv --version`. After the qsv version number, the allocator used is displayed ("`standard`" or "`mimalloc`"). Note that mimalloc is not supported on the `x86_64-pc-windows-gnu` and `arm` targets, and you'll need to use the "standard" allocator on those platforms.
Depending on your filesystem's configuration (e.g. block size, file system type, writing to remote file systems such as sshfs, efs or nfs, SSD or rotating magnetic disks, etc.), you can also fine-tune qsv's read/write buffers.

By default, the read buffer size is set to 16k; you can change it by setting the environment variable `QSV_RDR_BUFFER_CAPACITY` in bytes. The same is true for the write buffer (default: 64k) with the `QSV_WTR_BUFFER_CAPACITY` environment variable.
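For instance, to double both buffers you could set the variables before invoking qsv (the values below are illustrative, not tuned recommendations):

```shell
# 32k read buffer, 128k write buffer (both in bytes)
export QSV_RDR_BUFFER_CAPACITY=32768
export QSV_WTR_BUFFER_CAPACITY=131072
```

Benchmark with your own data before settling on values; the optimal sizes depend heavily on the filesystem characteristics listed above.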
Several commands support parallelization - `stats`, `frequency` and `split`.
Previously, these commands spawned several jobs equal to the number of logical processors. After extensive benchmarking, it turns out that doing so often made multi-threaded runs slower than single-threaded ones.

Parallelized jobs do increase performance - to a point. After a certain number of threads, there are not only diminishing returns; the parallelization overhead actually results in slower runs.
Starting with qsv 0.22.0, a heuristic of setting the maximum number of jobs to the number of logical processors divided by 4 is applied. The user can still manually override this using the `--jobs` command-line option or the `QSV_MAX_JOBS` environment variable, but testing shows negative returns start at around this point.
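The heuristic can be sketched in shell terms as follows (the clamp to a minimum of 1 is an assumption for machines with fewer than 4 logical processors, and the final qsv invocation is illustrative and commented out):

```shell
# compute qsv's default job count: logical processors divided by 4
LOGICAL=$(nproc)              # number of logical processors (Linux coreutils)
JOBS=$(( LOGICAL / 4 ))
if [ "$JOBS" -lt 1 ]; then JOBS=1; fi   # assumed floor of 1 job
echo "default jobs: $JOBS"

# override explicitly if you know better for your workload:
# qsv stats --jobs "$JOBS" data.csv
```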
These observations were gathered using the benchmark script, using a relatively large file (520MB, 41-column, 1M-row sample of NYC's 311 data). Performance will vary based on environment - CPU architecture, amount of memory, operating system, I/O speed, and the number of background tasks - so this heuristic will not work for every situation.
To find out your jobs setting, run `qsv --version`. The last number shown is the default number of jobs qsv will use for parallelized commands if `--jobs` and `QSV_MAX_JOBS` are not specified.
Use and fine-tune the benchmark script when tweaking qsv's performance to your environment. Don't be afraid to change the benchmark data and the qsv commands to something that is more representative of your workloads.
Use the generated TSV files to meter and compare performance across platforms. You'd be surprised how performance varies across environments - e.g. qsv's `join` performs abysmally on Windows' WSL running Ubuntu 20.04 LTS, taking 172.44 seconds. On the same machine, running in a VirtualBox VM with the same Ubuntu version, `join` finished in 1.34 seconds - two orders of magnitude faster!
However, `stats` performs twice as fast on WSL vs the VirtualBox VM - 2.80 seconds vs 5.33 seconds for the `stats_index` benchmark.
Dual-licensed under MIT or the UNLICENSE.
qsv was made possible by datHere - Data Infrastructure Engineering.
Standards-based, best-of-breed, open source solutions to make your Data Useful, Usable & Used.
This project is unrelated to Intel's Quick Sync Video.