qsv: Ultra-fast CSV data-wrangling toolkit


![qsv logo](docs/images/qsv-logo.png)

qsv is a command line program for indexing, slicing, analyzing, splitting, enriching, validating & joining CSV files. Commands are simple, fast & composable.

Table of Contents

* [Available Commands](#available-commands)
* [Installation](#installation)
* [Whirlwind Tour](docs/whirlwind_tour.md#a-whirlwind-tour)
* [Cookbook](https://github.com/jqnatividad/qsv/wiki)
* [FAQ](https://github.com/jqnatividad/qsv/discussions/categories/faq)
* [Changelog](https://github.com/jqnatividad/qsv/blob/master/CHANGELOG.md#changelog)
* [Performance Tuning](https://github.com/jqnatividad/qsv/blob/master/docs/PERFORMANCE.md#performance-tuning)
* [Benchmarks](docs/BENCHMARKS.md)
* [NYC School of Data 2022 slides](https://docs.google.com/presentation/d/e/2PACX-1vQ12ndZL--gkz0HLQRaxqsNOwzddkv1iUKB3sq661yA77OPlAsmHJHpjaqt9s9QEf73VqMfb0cv4jHU/pub?start=false&loop=false&delayms=3000)
* [Sponsor](#sponsor)

ℹ️ NOTE: qsv is a fork of the popular xsv utility, merging several pending PRs since xsv 0.13.0's May 2018 release. It also has numerous new features & 54 additional commands/subcommands/operations (for a total of 74). See FAQ for more details.

Available commands

| Command | Description |
| --- | --- |
| apply[^1] | Apply a series of string, date, currency & geocoding transformations to a CSV column. It also has some basic NLP functions (similarity, sentiment analysis, profanity, eudex & language detection). |
| behead | Drop headers from a CSV. |
| cat | Concatenate CSV files by row or by column. |
| count[^2] | Count the rows in a CSV file. (Instantaneous with an index.) |
| dedup[^3] | Remove redundant rows. |
| enum | Add a new column enumerating rows by adding a column of incremental or uuid identifiers. Can also be used to copy a column or fill a new column with a constant value. |
| excel | Exports a specified Excel/ODS sheet to a CSV file. |
| exclude[^2] | Removes a set of CSV data from another set based on the specified columns. |
| explode | Explode rows into multiple ones by splitting a column value based on the given separator. |
| extsort[^5] | Sort an arbitrarily large CSV/text file using a multithreaded external merge sort algorithm. |
| fetch | Fetches HTML/data from web pages or web services for every row. Comes with jql JSON query language support, dynamic throttling (RateLimit) & caching with optional Redis support for persistent caching. |
| fill | Fill empty values. |
| fixlengths | Force a CSV to have same-length records by either padding or truncating them. |
| flatten | A flattened view of CSV records. Useful for viewing one record at a time. e.g. qsv slice -i 5 data.csv \| qsv flatten. |
| fmt | Reformat a CSV with different delimiters, record terminators or quoting rules. (Supports ASCII delimited data.) |
| foreach[^1] | Loop over a CSV to execute bash commands. (Not available on Windows.) |
| frequency[^2] | Build frequency tables of each column. (Uses multithreading to go faster if an index is present.) |
| generate[^1] | Generate test data by profiling a CSV using Markov decision process machine learning. |
| headers | Show the headers of a CSV. Or show the intersection of all headers between many CSV files. |
| index | Create an index for a CSV. This is very quick & provides constant time indexing into the CSV file. Also enables multithreading for the frequency, split, stats and schema commands. |
| input[^2] | Read CSV data with special quoting, trimming, line-skipping and UTF-8 transcoding rules. Typically used to "normalize" a CSV for further processing with other qsv commands. |
| join[^2] | Inner, outer, cross, anti & semi joins. Uses a simple hash index to make it fast. |
| jsonl | Convert newline-delimited JSON (JSONL/NDJSON) to CSV. |
| lua[^1] | Execute a Lua 5.4.4 script over CSV lines to transform, aggregate or filter them. |
| partition | Partition a CSV based on a column value. |
| pseudo | Pseudonymise the values of the given column by replacing them with an incremental identifier. |
| py[^1] | Evaluate a Python expression over CSV lines to transform, aggregate or filter them. Python's f-strings are particularly useful for extended formatting (Python 3.8+ required). |
| rename | Rename the columns of a CSV efficiently. |
| replace | Replace CSV data using a regex. |
| reverse[^3] | Reverse the order of rows in a CSV. Unlike the sort --reverse command, it preserves the order of rows with the same key. |
| sample[^2] | Randomly draw rows (with optional seed) from a CSV using reservoir sampling (i.e., using memory proportional to the size of the sample). |
| schema[^4] | Infer a schema from CSV data and output it in JSON Schema format. Uses multithreading to go faster if an index is present. See the validate command. |
| search | Run a regex over a CSV. Applies the regex to each field individually & shows only matching rows. |
| searchset | Run multiple regexes over a CSV in a single pass. Applies the regexes to each field individually & shows only matching rows. |
| select | Select, re-order, duplicate or drop columns. |
| slice[^2] | Slice rows from any part of a CSV. When an index is present, this only has to parse the rows in the slice (instead of all rows leading up to the start of the slice). |
| sniff[^2] | Quickly sniff CSV metadata (delimiter, header row, preamble rows, quote character, flexible, is_utf8, number of records, number of fields, field names & data types). |
| sort[^5] | Sorts CSV data in alphabetical, numerical, reverse or random (with optional seed) order. |
| split[^2] | Split one CSV file into many CSV files of N chunks. (Uses multithreading to go faster if an index is present.) |
| stats[^2][^4] | Infer data types & compute descriptive statistics for each column in a CSV (sum, min/max, min/max length, mean, stddev, variance, nullcount, quartiles, IQR, lower/upper fences, skewness, median, mode & cardinality). Uses multithreading to go faster if an index is present. |
| table[^3] | Show aligned output of a CSV using elastic tabstops. |
| tojsonl[^4] | Converts CSV to newline-delimited JSON (JSONL/NDJSON). |
| transpose[^3] | Transpose the rows/columns of a CSV. |
| validate[^2] | Validate CSV data with JSON Schema (see the schema command). If no jsonschema file is provided, validates whether a CSV conforms to the RFC 4180 standard. |

Installation

Pre-built binaries for Windows, Linux and macOS are available from GitHub, including binaries compiled with Rust Nightly/Unstable (click here for more info).

There are three versions of qsv. qsv enables all features valid for the target platform[^6]; qsvlite has all features disabled (half the size of qsv); qsvdp is optimized for use with DataPusher+, with only DataPusher+ relevant commands and the self-update engine removed (a sixth of the size of qsv).

Alternatively, you can compile from source by installing Cargo (Rust's package manager) and installing qsv using Cargo:

```bash
cargo install qsv --features all_full
```

If you encounter compilation errors, ensure you're using the exact version of the dependencies qsv was built with by issuing:

```bash
cargo install qsv --locked --features all_full
```

Compiling from this repository also works similarly:

```bash
git clone git@github.com:jqnatividad/qsv.git
cd qsv
cargo build --release --features all_full

# or if you encounter compilation errors
cargo build --release --locked --features all_full
```

The compiled binary will end up in ./target/release/.

To enable optional features, use cargo --features (see Feature Flags for more info):

```bash
cargo install qsv --features apply,generate,lua,fetch,foreach,python,full

# or to build qsvlite
cargo install qsv --features lite

# or to build qsvdp
cargo install qsv --features datapusher_plus

# or when compiling from a local repo
cargo build --release --features apply,generate,lua,fetch,foreach,python,full

# for qsvlite
cargo build --release --features lite

# for qsvdp
cargo build --release --features datapusher_plus
```

ℹ️ NOTE: The python feature is not enabled on the cross-compiled pre-built binaries, as there is no access to a native python interpreter for those platforms (aarch64, i686, and arm) on GitHub's action runners. Compile natively on those platforms with Python 3.8+ installed if you want to enable the python feature.

Minimum Supported Rust Version

Building qsv requires Rust stable - currently version 1.62.1.

Tab Completion

qsv's command-line options are quite extensive. Thankfully, since it uses docopt for CLI processing, we can take advantage of docopt.rs' tab completion support to make it easier to use qsv at the command-line (currently, only bash shell is supported):

```bash
# install docopt-wordlist
cargo install docopt

# IMPORTANT: run these commands from the root directory of your qsv git repository
# to set up bash qsv tab completion
echo "DOCOPT_WORDLIST_BIN=\"$(which docopt-wordlist)\"" >> $HOME/.bash_completion
echo "source \"$(pwd)/scripts/docopt-wordlist.bash\"" >> $HOME/.bash_completion
echo "complete -F _docopt_wordlist_commands qsv" >> $HOME/.bash_completion
```

File formats

qsv recognizes UTF-8/ASCII encoded CSV (.csv) and TSV (.tsv and .tab) files. CSV files are assumed to have "," (comma) as the delimiter, and TSV files "\t" (tab). The delimiter is a single ascii character that can be set with the --delimiter command-line option or the QSV_DEFAULT_DELIMITER environment variable, or detected automatically when QSV_SNIFF_DELIMITER is set.

When using the --output option, note that qsv will UTF-8 encode the file and automatically change the delimiter used in the generated file based on the file extension - i.e. comma for .csv, tab for .tsv and .tab files.
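To illustrate what this extension-based delimiter switch amounts to, here is a sketch using only standard tools (no qsv required); the sample file and its contents are invented for illustration, with awk standing in for qsv's rewrite of the delimiter when writing a .tsv:

```shell
# Hypothetical comma-delimited sample.
printf 'city,pop\nNYC,8804190\n' > data.csv

# Rewrite each record with tab as the output field separator,
# mirroring what writing to a .tsv file does to the delimiter.
awk -F',' -v OFS='\t' '{$1=$1; print}' data.csv > data.tsv
cat data.tsv
```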

JSONL/NDJSON files are also recognized and converted to CSV with the jsonl command.

The fetch command also produces JSONL files when it's invoked without the --new-column option.

The sniff and validate commands produce JSON files with their --json and --pretty-json options.

The schema command produces JSON Schema Validation (Draft 7) files with the ".schema.json" file extension, which can be used with the validate command.
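For reference, here is a hand-written sketch of what a Draft 7 schema file of this kind can look like; the column names and constraints are invented for illustration and are not output produced by qsv:

```shell
# Write a minimal, hypothetical JSON Schema (Draft 7) file of the kind
# the validate command consumes.
cat > data.csv.schema.json <<'EOF'
{
  "$schema": "https://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "city": { "type": "string", "minLength": 1 },
    "pop":  { "type": "integer", "minimum": 0 }
  },
  "required": ["city", "pop"]
}
EOF
```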

The excel command recognizes Excel and Open Document Spreadsheet (ODS) files (.xls, .xlsx, .xlsm, .xlsb and .ods files).

RFC 4180

qsv validates against the RFC 4180 CSV standard. However, in real life CSV formats vary significantly, so qsv is deliberately not strictly compliant with the specification, allowing it to process "real-world" CSV files. qsv leverages the awesome Rust CSV library, which in turn is built on top of the csv-core library, to read CSV files.

Click here to find out more about how qsv conforms to the standard with csv-core.
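As a quick refresher, the core of RFC 4180 quoting can be shown with a two-line file (the sample data is invented):

```shell
# Fields containing commas, double quotes or line breaks must be wrapped in
# double quotes, and embedded double quotes are doubled.
printf 'id,comment\n1,"She said ""hi"", then left"\n' > rfc4180.csv
cat rfc4180.csv
```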

UTF-8 Encoding

qsv requires UTF-8 encoded (of which ASCII is a subset) input files. On startup, it scans the input to check whether it's UTF-8 encoded (for files, the first 8k; for stdin, the entire buffer), and will abort if it's not, unless QSV_SKIPUTF8_CHECK is set. On Linux and macOS, UTF-8 encoding is the default.

Previously, several qsv commands worked with non utf-8 encoded files. But to increase performance, qsv versions since 0.38 make extensive use of from_utf8_unchecked so as not to pay the repetitive utf-8 validation penalty, no matter how small, even for already utf-8 encoded files.

Only two commands still work with non utf-8 encoded files - sniff and input. sniff is meant to screen files and detect if a file is utf-8 encoded; and input, to prepare files for further qsv processing.

Should you need to re-encode CSV/TSV files, you can use the input command to transcode to UTF-8. It will replace all invalid UTF-8 sequences with � (the Unicode replacement character). Alternatively, there are several utilities you can use to do so on Linux/macOS and Windows.
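For example, on Linux/macOS one such utility is iconv; a sketch, with a fabricated sample file:

```shell
# Create a Latin-1 (ISO-8859-1) encoded sample (\351 is the Latin-1 byte
# for the character é) ...
printf 'name,city\nJos\351,M\351xico\n' > latin1.csv

# ... and transcode it to UTF-8 so qsv can process it.
iconv -f ISO-8859-1 -t UTF-8 latin1.csv > utf8.csv
cat utf8.csv
```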

Windows Usage Note

Unlike other modern operating systems, Windows' default encoding is UTF16-LE. This will cause problems when redirecting qsv's output to a CSV file and trying to open it with Excel (which ignores the comma delimiter, with everything in the first column):

```bash
qsv stats wcp.csv > wcpstats.csv
```

Which is weird, since you would think Microsoft Excel would properly recognize UTF16-LE encoded CSV files. Regardless, to create a properly UTF-8 encoded file, use the --output option instead:

```bash
qsv stats wcp.csv --output wcpstats.csv
```

Environment Variables

| Variable | Description |
| --- | --- |
| QSV_DEFAULT_DELIMITER | single ascii character to use as delimiter. Overrides the --delimiter option. Defaults to "," (comma) for CSV files and "\t" (tab) for TSV files when not set. Note that this will also set the delimiter for qsv's output to stdout. However, using the --output option, regardless of this environment variable, will automatically change the delimiter used in the generated file based on the file extension - i.e. comma for .csv, tab for .tsv and .tab files. |
| QSV_SNIFF_DELIMITER | if set, the delimiter is automatically detected. Overrides QSV_DEFAULT_DELIMITER and the --delimiter option. Note that this does not work with stdin. |
| QSV_NO_HEADERS | if set, the first row will NOT be interpreted as headers. Supersedes QSV_TOGGLE_HEADERS. |
| QSV_TOGGLE_HEADERS | if set to 1, toggles the header setting - i.e. inverts qsv's header behavior, with no headers being the default, and setting --no-headers will actually mean headers will not be ignored. |
| QSV_AUTOINDEX | if set, automatically create an index when none is detected. Also automatically updates stale indices. |
| QSV_COMMENT_CHAR | set to an ascii character. If set, any lines (including the header) that start with this character are ignored. |
| QSV_MAX_JOBS | number of jobs to use for multithreaded commands (currently apply, dedup, extsort, frequency, schema, sort, split, stats and validate). If not set, max_jobs is set to the detected number of logical processors. See Multithreading for more info. |
| QSV_NO_UPDATE | if set, prohibit the self-update version check for the latest qsv release published on GitHub. |
| QSV_PREFER_DMY | if set, date parsing will use DMY format. Otherwise, use MDY format (used with the apply datefmt, schema, sniff & stats commands). |
| QSV_REGEX_UNICODE | if set, makes the search, searchset and replace commands unicode-aware. By default, for increased performance, these commands are not unicode-aware: they ignore unicode values when matching and panic when unicode characters are used in the regex. |
| QSV_SKIPUTF8_CHECK | if set, skip the UTF-8 encoding check. Otherwise, qsv scans the first 8k of files. For stdin, it scans the entire buffer. |
| QSV_RDR_BUFFER_CAPACITY | reader buffer size (default (bytes): 16384). |
| QSV_WTR_BUFFER_CAPACITY | writer buffer size (default (bytes): 65536). |
| QSV_LOG_LEVEL | desired log level (default: off; error, warn, info, trace, debug). |
| QSV_LOG_DIR | when logging is enabled, the directory where the log files will be stored. If the specified directory does not exist, qsv will attempt to create it. If not set, the log files are created in the directory where qsv was started. See Logging for more info. |
| QSV_REDIS_CONNECTION_STRING | the fetch command can use Redis to cache responses. Set to connect to the desired Redis instance (default: redis://127.0.0.1:6379/1). For more info on valid Redis connection string formats, see https://docs.rs/redis/latest/redis/#connection-parameters. |
| QSV_REDIS_MAX_POOL_SIZE | the maximum Redis connection pool size (default: 20). |
| QSV_REDIS_TTL_SECONDS | time-to-live of Redis cached values (default (seconds): 2419200 (28 days)). |
| QSV_REDIS_TTL_REFRESH | if set, enables cache hits to refresh the TTL of cached values. |
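A hypothetical shell session staging a few of these variables before running qsv (qsv itself is not invoked here; the final command just shows what a subsequent qsv invocation would inherit):

```shell
export QSV_MAX_JOBS=4        # cap multithreaded commands at 4 jobs
export QSV_LOG_LEVEL=debug   # verbose logging
export QSV_NO_UPDATE=1       # skip the self-update version check

# Every qsv invocation in this shell now inherits these settings.
env | grep '^QSV_' | sort
```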

Several dependencies also have environment variables that influence qsv's performance & behavior:

ℹ️ NOTE: To get a list of all active qsv-relevant environment variables, run qsv --envlist.

Feature Flags

qsv has several features:

The following "power-user" commands can be abused and present "foot-shooting" scenarios.

ℹ️ NOTE: qsvlite, as the name implies, always has non-default features disabled. qsv can be built with any combination of the above features using the cargo --features & --no-default-features flags. The pre-built qsv binaries have all applicable features valid for the target platform[^6].

License

Dual-licensed under MIT or the UNLICENSE.

Sponsor

| qsv was made possible by |
|:---:|
| [![datHere Logo](docs/images/datHere-logo-withtagline.png)](https://datHere.com) |
| Standards-based, best-of-breed, open source solutions to make your **Data Useful, Usable & Used.** |

Naming Collision

This project is unrelated to Intel's Quick Sync Video.