# datafusion-objectstore-hdfs

HDFS as a remote ObjectStore for DataFusion.

This crate introduces `HadoopFileSystem` as a remote `ObjectStore`, which provides the ability to query files on HDFS.
For HDFS access, we leverage the library fs-hdfs. Basically, this library only provides Rust FFI APIs for `libhdfs`, which is compiled from a set of C files provided by the official Hadoop community. Since `libhdfs` is itself just a C interface wrapper and the real implementation of HDFS access is a set of Java jars, in order to make this crate work, we need to prepare the Hadoop client jars and the JRE environment.
## Prerequisites

- Install Java.

- Specify and export `JAVA_HOME`.
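For example, on a Linux machine the exports might look like this (the JDK path below is an assumption; adjust it to your actual installation):

```shell
# Path is an assumption; point JAVA_HOME at your actual JDK installation
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH=$PATH:$JAVA_HOME/bin
```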
To get a Hadoop distribution, download a recent stable release from one of the Apache Download Mirrors. Currently, both Hadoop 2 and Hadoop 3 are supported.
Unpack the downloaded Hadoop distribution. For example, suppose the folder is `/opt/hadoop`. Then prepare some environment variables:

```shell
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
```
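Since both Hadoop 2 and Hadoop 3 are supported, it can be handy to check which major version an unpacked distribution is. A small sketch (the helper name and the example path are illustrative, not part of this crate):

```shell
# Hypothetical helper: infer the Hadoop major version (2 or 3) from the
# name of an unpacked distribution folder such as /opt/hadoop-3.3.6
hadoop_major_version() {
  basename "$1" | sed -n 's/^hadoop-\([0-9]\).*/\1/p'
}

hadoop_major_version /opt/hadoop-3.3.6   # prints 3
```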
First, we need to add the library path for the JVM-related dependencies. An example for macOS:

```shell
export DYLD_LIBRARY_PATH=$JAVA_HOME/jre/lib/server
```
Since our compiled `libhdfs` is a JNI native implementation, it requires the proper `CLASSPATH` to load the Hadoop-related jars. An example:

```shell
export CLASSPATH=$CLASSPATH:`hadoop classpath --glob`
```
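The `hadoop classpath --glob` helper requires `$HADOOP_HOME/bin` to already be on `PATH`. As a hedged fallback sketch, the jar list can also be assembled by globbing the distribution folder directly (the helper name below is illustrative, not part of this crate):

```shell
# Hypothetical fallback: collect all jars under the given directory into a
# colon-separated list, approximating what `hadoop classpath --glob` produces
hadoop_jar_classpath() {
  find "$1" -name '*.jar' 2>/dev/null | tr '\n' ':'
}

export CLASSPATH="$CLASSPATH:$(hadoop_jar_classpath "${HADOOP_HOME:-/opt/hadoop}")"
```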
## Usage

Suppose there's an HDFS directory,

```rust
let hdfs_file_uri = "hdfs://localhost:8020/testing/tpch_1g/parquet/line_item";
```

in which there's a list of Parquet files. Then we can query these Parquet files as follows:
```rust
let ctx = SessionContext::new();
// Register the HDFS object store so that `hdfs://` URLs can be resolved
ctx.runtime_env().register_object_store("hdfs", "", Arc::new(HadoopFileSystem));
let table_name = "line_item";
println!(
    "Register table {} with parquet file {}",
    table_name, hdfs_file_uri
);
ctx.register_parquet(table_name, &hdfs_file_uri, ParquetReadOptions::default())
    .await?;

let sql = "SELECT count(*) FROM line_item";
let result = ctx.sql(sql).await?.collect().await?;
```
## Testing

First clone the test data repository:

```shell
git submodule update --init --recursive
```
Then run the tests:

```shell
cargo test
```

During the testing, an HDFS cluster will be mocked and started automatically.
To run the tests with the feature `hdfs3` enabled:

```shell
cargo build --no-default-features --features datafusion-objectstore-hdfs/hdfs3,datafusion-objectstore-hdfs-testing/hdfs3,datafusion-hdfs-examples/hdfs3
cargo test --no-default-features --features datafusion-objectstore-hdfs/hdfs3,datafusion-objectstore-hdfs-testing/hdfs3,datafusion-hdfs-examples/hdfs3
```
Run the ballista-sql test by:

```shell
cargo run --bin ballista-sql --no-default-features --features hdfs3
```