rust-rdkafka


A fully asynchronous, [futures]-based Kafka client library for Rust based on [librdkafka].

The library

rust-rdkafka provides a safe Rust interface to librdkafka. The master branch is currently based on librdkafka 1.3.0.

Documentation

Features

The main features provided at the moment are:

One million messages per second

rust-rdkafka is designed to be easy and safe to use thanks to the abstraction layer written in Rust, while at the same time being extremely fast thanks to the librdkafka C library.

Here are some benchmark results using the rust-rdkafka BaseProducer, sending data to a single Kafka 0.11 process running on localhost (default configuration, 3 partitions). Hardware: Dell laptop with an Intel Core i7-4712HQ @ 2.30GHz.

For more numbers, check out the kafka-benchmark project.

Client types

rust-rdkafka provides low level and high level consumers and producers.

Low level: the [BaseConsumer] and the [BaseProducer], simple wrappers around the librdkafka clients that must be polled periodically by the user.

High level: the [StreamConsumer] and the [FutureProducer], which build on the low level clients and integrate with the [futures] ecosystem.

For more information about consumers and producers, refer to their module-level documentation.

Warning: the library is under active development and the APIs are likely to change.

Asynchronous data processing with tokio-rs

[tokio-rs] is a platform for fast processing of asynchronous events in Rust. The interfaces exposed by the [StreamConsumer] and the [FutureProducer] allow rust-rdkafka users to easily integrate Kafka consumers and producers within the tokio-rs platform, and write asynchronous message processing code. Note that rust-rdkafka can be used without tokio-rs.

To see rust-rdkafka in action with tokio-rs, check out the [asynchronous processing example] in the examples folder.
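As a rough sketch of what the integration looks like (assuming the StreamConsumer API as of rdkafka 0.23 and a tokio 0.2 runtime; the broker address, group id, and topic name below are placeholders), the consumer exposes incoming messages as a futures Stream:

```rust
use futures::stream::StreamExt;
use rdkafka::config::ClientConfig;
use rdkafka::consumer::{Consumer, StreamConsumer};
use rdkafka::message::Message;

#[tokio::main]
async fn main() {
    // Placeholder broker, group, and topic; adjust for your deployment.
    let consumer: StreamConsumer = ClientConfig::new()
        .set("group.id", "example-group")
        .set("bootstrap.servers", "localhost:9092")
        .create()
        .expect("consumer creation failed");

    consumer.subscribe(&["example-topic"]).expect("subscribe failed");

    // `start()` turns the consumer into an asynchronous message stream.
    let mut stream = consumer.start();
    while let Some(message) = stream.next().await {
        match message {
            Ok(m) => println!("payload: {:?}", m.payload_view::<str>()),
            Err(e) => eprintln!("Kafka error: {}", e),
        }
    }
}
```

This requires a running broker, so it is only a sketch; the asynchronous processing example shows a complete, working version.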

At-least-once delivery

At-least-once delivery semantics are common in many streaming applications: every message is guaranteed to be processed at least once; in case of temporary failure, the message can be re-processed and/or re-delivered, but no message will be lost.

In order to implement at-least-once delivery, the stream processing application has to carefully commit the offset only once the message has been processed. Committing the offset too early might instead cause message loss, since upon recovery the consumer will start from the next message, skipping the one where the failure occurred.

To see how to implement at-least-once delivery with rdkafka, check out the [at-least-once delivery example] in the examples folder. To know more about delivery semantics, check the [message delivery semantics] chapter in the Kafka documentation.
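The commit-after-processing rule can be illustrated without Kafka at all. This hypothetical in-memory simulation (all names are made up for illustration) shows a consumer that crashes mid-stream and then resumes from the last committed offset: the failed message is re-processed, but nothing is lost or skipped:

```rust
// Hypothetical simulation of at-least-once semantics: the offset is
// committed only AFTER a message has been fully processed.

fn run_consumer(
    log: &[&str],                // the partition's message log
    committed: &mut usize,       // durable committed offset
    processed: &mut Vec<String>, // record of processed messages
    crash_at: Option<usize>,     // simulate a failure at this offset
) {
    for offset in *committed..log.len() {
        if crash_at == Some(offset) {
            return; // crash: offset NOT committed, message will be redelivered
        }
        processed.push(log[offset].to_string()); // process the message first,
        *committed = offset + 1;                 // then commit its offset
    }
}

fn main() {
    let log = ["a", "b", "c"];
    let mut committed = 0;
    let mut processed = Vec::new();

    // First run crashes while handling "b": only "a" gets committed.
    run_consumer(&log, &mut committed, &mut processed, Some(1));
    assert_eq!(committed, 1);

    // The restart resumes from the committed offset: "b" and "c" are
    // processed; nothing was lost, and nothing was skipped.
    run_consumer(&log, &mut committed, &mut processed, None);
    assert_eq!(processed, ["a", "b", "c"]);
}
```

Committing before processing would have advanced `committed` past "b" before the crash, and the restarted consumer would silently skip it.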

Users

Here are some of the projects using rust-rdkafka:

If you are using rust-rdkafka, please let me know!

Installation

Add this to your Cargo.toml:

```toml
[dependencies]
rdkafka = { version = "0.23", features = ["cmake-build"] }
```

This crate will compile librdkafka from sources and link it statically to your executable. To compile librdkafka you'll need:

Note that using the CMake build system, via the cmake-build feature, is strongly encouraged. The default build system has a known issue that can cause corrupted builds.

By default a submodule with the librdkafka sources pinned to a specific commit will be used to compile and statically link the library. The dynamic-linking feature can be used to instead dynamically link rdkafka to the system's version of librdkafka. Example:

```toml
[dependencies]
rdkafka = { version = "0.23", features = ["dynamic-linking"] }
```

For a full listing of features, consult the rdkafka-sys crate's documentation. All of rdkafka-sys features are re-exported as rdkafka features.

Compiling from sources

To compile from sources, you'll have to update the submodule containing librdkafka:

```bash
git submodule update --init
```

and then compile using cargo, selecting the features that you want. Example:

```bash
cargo build --features "ssl gssapi"
```

Examples

You can find examples in the examples folder. To run them:

```bash
cargo run --example <example_name> -- <example_args>
```

Tests

Unit tests

The unit tests can run without a Kafka broker present:

```bash
cargo test --lib
```

Automatic testing

rust-rdkafka contains a suite of tests which is automatically executed by Travis in docker-compose. Given the amount of interaction rust-rdkafka has with C code, the tests are run under valgrind to check for memory errors and leaks.

To run the full suite using docker-compose:

```bash
./test_suite.sh
```

To run locally, instead:

```bash
KAFKA_HOST="kafka_server:9092" cargo test
```

In this case, a broker is expected to be running on KAFKA_HOST. The broker must be configured with a default partition number of 3 and topic autocreation enabled in order for the tests to succeed.

Debugging

rust-rdkafka uses the log and env_logger crates to handle logging. Logging can be enabled using the RUST_LOG environment variable, for example:

```bash
RUST_LOG="librdkafka=trace,rdkafka::client=debug" cargo test
```

This will configure the logging level of librdkafka to trace, and the level of the client module of the Rust client to debug. To actually receive logs from librdkafka, you also have to set the debug option in the producer or consumer configuration (see librdkafka configuration).

To enable debugging in your project, make sure you initialize the logger with env_logger::init() or equivalent.
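A minimal sketch of that initialization (assuming the `log` and `env_logger` crates as dependencies; the log message is illustrative):

```rust
fn main() {
    // Reads the RUST_LOG environment variable to configure log levels;
    // must run before any rdkafka client is created so no logs are missed.
    env_logger::init();
    log::info!("logger initialized; rdkafka logs will now be emitted");
}
```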

rdkafka-sys

See rdkafka-sys.

Contributors

Thanks to:

* Thijs Cadier - thijsc

Alternatives