This is a proof-of-concept HDFS client written natively in Rust. All other clients I have found, in any language, are simply wrappers around libhdfs and require the same Java dependencies, so I wanted to see if I could write one from scratch, given that HDFS isn't really changing very often anymore. Several basic features are working, however it is not nearly as robust as the real HDFS client.
What this is not trying to do is implement every HDFS client/FileSystem interface, just the parts involving reading and writing data.

Here is a list of currently supported features, as well as unsupported but possible future features.
The client will attempt to read the Hadoop configs `core-site.xml` and `hdfs-site.xml` in the directory `$HADOOP_CONF_DIR`, or, if that isn't defined, `$HADOOP_HOME/etc/hadoop`. Currently the supported configs that are used are:
- `dfs.ha.namenodes` - name service support
- `dfs.namenode.rpc-address.*` - name service support
All other settings are currently assumed to be the defaults. For instance, security is assumed to be enabled and SASL negotiation is always done, but on insecure clusters this will just do SIMPLE authentication. Any setups that require other customized Hadoop client configs may not work correctly.
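As an illustration of the two supported configs, a minimal `hdfs-site.xml` for an HA nameservice might look like the following (the nameservice name `mycluster` and the host names are made up for this example):

```xml
<configuration>
  <!-- Logical nameservice with two namenodes; names are illustrative -->
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <!-- RPC address for each namenode in the nameservice -->
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>namenode1.example.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>namenode2.example.com:8020</value>
  </property>
</configuration>
```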
On Mac:

```bash
brew install gsasl krb5
export BINDGEN_EXTRA_CLANG_ARGS="-I/opt/homebrew/include"
export LIBRARY_PATH=/opt/homebrew/lib
cargo build --features token,kerberos
```
On Ubuntu:

```bash
apt-get install clang libkrb5-dev libgsasl-dev
cargo build --features token,kerberos
```
- `token` - enables token based DIGEST-MD5 authentication support. This uses the `gsasl` native library and only supports authentication, not integrity or confidentiality
- `kerberos` - enables Kerberos GSSAPI authentication support. This uses the `libgssapi` crate and supports integrity as well as confidentiality
- `object_store` - provides an `object_store` wrapper around the HDFS client
- `rs` - supports Reed-Solomon codecs for erasure coded reads. It relies on a fork of https://github.com/rust-rse/reed-solomon-erasure, so you must include a patch for it to compile:
```toml
[patch.crates-io]
reed-solomon-erasure = { git = "https://github.com/Kimahriman/reed-solomon-erasure.git", branch = "SNB/23C24_external_matrix" }
```
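Putting the features together, enabling them in a downstream `Cargo.toml` would look something like this sketch (the version requirement is illustrative; use whatever release you depend on):

```toml
[dependencies]
# Version is illustrative; pin to the release you actually use
hdfs-native = { version = "*", features = ["token", "kerberos", "rs"] }

[patch.crates-io]
# Only required when the "rs" feature is enabled
reed-solomon-erasure = { git = "https://github.com/Kimahriman/reed-solomon-erasure.git", branch = "SNB/23C24_external_matrix" }
```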
The tests are mostly integration tests that utilize a small Java application in `rust/minidfs/` that runs a custom `MiniDFSCluster`. To run the tests, you need to have Java, Maven, Hadoop binaries, and Kerberos tools available and on your path. Any Java version between 8 and 17 should work.
```bash
cargo test -p hdfs-native --features token,kerberos,rs,integration-test
```
See the Python README for details on the Python bindings.