This crate is a safe Rust wrapper of the [TensorFlow Lite C API]. Its API is very similar to that of the [TensorFlow Lite Swift API].
The targets below are tested; however, others may work, too.
* iOS: `aarch64-apple-ios` and `x86_64-apple-ios`
* MacOS: `x86_64-apple-darwin`
* Linux: `x86_64-unknown-linux-gnu`
* Android: `aarch64-linux-android` and `armv7-linux-androideabi`
* Windows (see details)
See the compilation section for build instructions for your target. Please read the Optimized Build section carefully.
The following optional Cargo features are available:
* `xnnpack` - Compiles XNNPACK and allows you to use the XNNPACK delegate. See details of XNNPACK here.
* `xnnpack_qs8` - Compiles XNNPACK with additional build flags to accelerate inference of operators with symmetric quantization. See details in this blog post. Implies `xnnpack`.
* `xnnpack_qu8` - Similar to `xnnpack_qs8`, but accelerates a few operators with asymmetric quantization. Implies `xnnpack`.

Note: `xnnpack` is already enabled for iOS, but `xnnpack_qs8` and `xnnpack_qu8` should be enabled manually (see the example below).
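For instance, a feature can be enabled on the command line when building; the feature name below comes from the list above, and the invocation is just one common way to do it (it can equally be listed under the `tflitec` dependency in your `Cargo.toml`):

```shell
# Enable the XNNPACK delegate feature for this build
cargo build --release --features xnnpack
```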
The example below shows running inference on a TensorFlow Lite model.
```rust
use tflitec::interpreter::{Interpreter, Options};
use tflitec::tensor;
use tflitec::model::Model;

// Create interpreter options
let mut options = Options::default();
options.thread_count = 1;

// Load example model which outputs y = 3 * x
let model = Model::new("tests/add.bin")?;
// Or initialize with model bytes if it is not available as a file
// let model_data = std::fs::read("tests/add.bin")?;
// let model = Model::from_bytes(&model_data)?;

// Create interpreter
let interpreter = Interpreter::new(&model, Some(options))?;
// Resize input
let input_shape = tensor::Shape::new(vec![10, 8, 8, 3]);
let input_element_count = input_shape.dimensions().iter().copied().reduce(std::ops::Mul::mul).unwrap();
interpreter.resize_input(0, input_shape)?;
// Allocate tensors if you just created Interpreter or resized its inputs
interpreter.allocate_tensors()?;

// Create dummy input
let data = (0..input_element_count).map(|x| x as f32).collect::<Vec<f32>>();

let input_tensor = interpreter.input(0)?;
assert_eq!(input_tensor.data_type(), tensor::DataType::Float32);

// Copy input to buffer of first tensor (with index 0)
// You have 2 options:
// Set data using Tensor handle if you have it already:
assert!(input_tensor.set_data(&data[..]).is_ok());
// Or set data using Interpreter:
assert!(interpreter.copy(&data[..], 0).is_ok());

// Invoke interpreter
assert!(interpreter.invoke().is_ok());

// Get output tensor
let output_tensor = interpreter.output(0)?;

assert_eq!(output_tensor.shape().dimensions(), &vec![10, 8, 8, 3]);
let output_vector = output_tensor.data::<f32>().to_vec();
let expected: Vec<f32> = data.iter().map(|e| e * 3.0).collect();
assert_eq!(expected, output_vector);
```
As described in the compilation section, `libtensorflowlite_c` is built during compilation, and this step may take a few minutes. To allow reusing a prebuilt library, you can set the `TFLITEC_PREBUILT_PATH` or `TFLITEC_PREBUILT_PATH_<NORMALIZED_TARGET>` environment variable (the latter has precedence). `NORMALIZED_TARGET` is the target triple converted to uppercase, with hyphens replaced by underscores, as in the cargo configuration environment variables. Below you can find example values for different `TARGET`s:
* `TFLITEC_PREBUILT_PATH_AARCH64_APPLE_IOS=/path/to/TensorFlowLiteC.framework`
* `TFLITEC_PREBUILT_PATH_ARMV7_LINUX_ANDROIDEABI=/path/to/libtensorflowlite_c.so`
* `TFLITEC_PREBUILT_PATH_X86_64_APPLE_DARWIN=/path/to/libtensorflowlite_c.dylib`
* `TFLITEC_PREBUILT_PATH_X86_64_PC_WINDOWS_MSVC=/path/to/tensorflowlite_c.dll`
Note that the prebuilt `.dll` file must have the corresponding `.lib` file in the same directory. You can find these files under the `OUT_DIR` after you compile the library for the first time; then copy them to a persistent path and set the environment variable.
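For example, a build that reuses a previously copied Linux library might look like the sketch below; the `/opt/tflitec/...` path is purely illustrative, not something the crate expects:

```shell
# Reuse a prebuilt TensorFlow Lite C library for the x86_64 Linux target
# (the path below is hypothetical; point it at your own copy)
TFLITEC_PREBUILT_PATH_X86_64_UNKNOWN_LINUX_GNU=/opt/tflitec/libtensorflowlite_c.so \
  cargo build --release
```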
You can activate the `xnnpack` features with a prebuilt library, too. However, that library must have been built with XNNPACK; otherwise you will see a linking error.
Some TensorFlow header files are downloaded from GitHub during compilation, with or without a prebuilt binary. However, some users may have difficulty accessing GitHub. Hence, you can pass a local header directory with the `TFLITEC_HEADER_DIR` or `TFLITEC_HEADER_DIR_<NORMALIZED_TARGET>` environment variable (the latter has precedence). See the example command below:
```shell
TFLITEC_HEADER_DIR=/path/to/tensorflow_v2.9.1_headers cargo build --release
```
This crate builds the `libtensorflowlite_c` dynamic library and must be linked against it. This is not an issue
if you build and run a binary target with `cargo run`. However, if you run your binary directly, you must
have the `libtensorflowlite_c` dynamic library in your library search path. You can either copy the built library under
`target/{release,debug}/build/tflitec-*/out` to one of the system library search paths
(such as `/usr/lib` or `/usr/local/lib`), or add that `out` directory to the search path.
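On Linux, for instance, adding the build output directory to the dynamic linker's search path could look roughly like the following; the binary name is hypothetical and the glob assumes a single `tflitec-*` build directory:

```shell
# Make the freshly built libtensorflowlite_c.so visible to the dynamic linker
export LD_LIBRARY_PATH="$(echo "$PWD"/target/release/build/tflitec-*/out):$LD_LIBRARY_PATH"
./target/release/my_app  # hypothetical binary built against tflitec
```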
Similarly, if you distribute a prebuilt library that depends on this crate, you must distribute `libtensorflowlite_c` too, or document this requirement in your library to instruct your users.
The current version of the crate builds tag `v2.9.1` of the [tensorflow project].
The compiled dynamic library or Framework will be available under the `OUT_DIR`
(see [cargo documentation]) of `tflitec`.
You won't need this most of the time, because the crate output is linked appropriately.
In addition, it may be worth reading the prebuilt library support section to make your builds faster.
For all environments and targets you will need to have:
* `git` CLI to fetch [TensorFlow]

To build [TensorFlow] for your machine with native optimizations, or to pass other `--copt`s to [Bazel], set the environment variable as below:
```sh
TFLITEC_BAZEL_COPTS="OPT1 OPT2 ..." # space separated values will be passed as --copt=OPTN to bazel
TFLITEC_BAZEL_COPTS="-march=native" # for native optimized build
TFLITEC_BAZEL_COPTS_X86_64_APPLE_DARWIN="-march=native" # target-specific variant
```
Some OSs or targets may require additional steps. For Android, set the environment variables below:
* `ANDROID_NDK_HOME`
* `ANDROID_NDK_API_LEVEL`
* `ANDROID_SDK_HOME`
* `ANDROID_API_LEVEL`
* `ANDROID_BUILD_TOOLS_VERSION`
In addition, set `BINDGEN_EXTRA_CLANG_ARGS` so that bindgen can find the NDK headers, for example:

```sh
HOST_TAG=darwin-x86_64 # as example
TARGET_TRIPLE=arm-linux-androideabi # as example
BINDGEN_EXTRA_CLANG_ARGS="\
-I${ANDROID_NDK_HOME}/sources/cxx-stl/llvm-libc++/include/ \
-I${ANDROID_NDK_HOME}/sysroot/usr/include/ \
-I${ANDROID_NDK_HOME}/toolchains/llvm/prebuilt/${HOST_TAG}/sysroot/usr/include/${TARGET_TRIPLE}/"
```
* (Recommended) [cargo-ndk] simplifies the `cargo build` process. Recent versions of the tool have a `--bindgen` flag which sets the `BINDGEN_EXTRA_CLANG_ARGS` variable appropriately, so you can skip the step above. See the sketch after this item.
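With cargo-ndk installed, an Android build could look roughly like the following; the target is illustrative, and the `--bindgen` flag is the one referred to above:

```shell
# Build for a 64-bit Android target, letting cargo-ndk configure bindgen
cargo ndk -t aarch64-linux-android --bindgen build --release
```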
Windows support is experimental. It is tested on Windows 10. You should follow the instructions in
the Setup for Windows section of the [TensorFlow Build Instructions for Windows]. In other words,
you should install the following before building:
* Python 3.8.x 64 bit (the instructions suggest 3.6.x but this package is tested with 3.8.x)
* [Bazel]
* [MSYS2]
* Visual C++ Build Tools 2019
Do not forget to add the relevant paths to the `%PATH%` environment variable by following the
[TensorFlow Build Instructions for Windows] carefully (the only exception is the Python version).