This crate is a safe Rust wrapper of [TensorFlow Lite C API]. Its API is very similar to that of [TensorFlow Lite Swift API].
The targets below are tested; others may work, too.
* iOS: `aarch64-apple-ios` and `x86_64-apple-ios`
* macOS: `x86_64-apple-darwin`
* Linux: `x86_64-unknown-linux-gnu`
* Android: `aarch64-linux-android` and `armv7-linux-androideabi`
See the Compilation section for build instructions for your target, and please read the Optimized Build section carefully.
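For example, cross-compiling for one of the targets above is an ordinary `cargo` invocation, assuming the matching Rust target and platform toolchain are installed:

```sh
# Add the Rust target once (example: 64-bit ARM iOS)
rustup target add aarch64-apple-ios
# Build the crate for that target
cargo build --target aarch64-apple-ios --release
```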
The crate has the following optional features:
* `xnnpack` - Compiles XNNPACK and allows you to use the XNNPACK delegate. See details of XNNPACK here.
* `xnnpack_qs8` - Compiles XNNPACK with additional build flags to accelerate inference of operators with symmetric quantization. See details in this blog post. Implies `xnnpack`.
* `xnnpack_qu8` - Similar to `xnnpack_qs8`, but accelerates a few operators with asymmetric quantization. Implies `xnnpack`.

Note: `xnnpack` is already enabled for iOS, but `xnnpack_qs8` and `xnnpack_qu8` should be enabled manually.
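To enable a feature, declare it on the dependency in your `Cargo.toml`. A minimal sketch; the version number is illustrative, use the one you depend on:

```toml
[dependencies]
tflitec = { version = "0.2", features = ["xnnpack"] }
```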
The example below shows running inference on a TensorFlow Lite model.
```rust
use tflitec::interpreter::{Interpreter, Options};
use tflitec::tensor;

// Create interpreter options
let mut options = Options::default();
options.thread_count = 1;

// Load example model which outputs y = 3 * x
let interpreter = Interpreter::with_model_path("tests/add.bin", Some(options))?;
// Resize input
let input_shape = tensor::Shape::new(vec![10, 8, 8, 3]);
interpreter.resize_input(0, input_shape)?;
// Allocate tensors if you just created Interpreter or resized its inputs
interpreter.allocate_tensors()?;

// Create dummy input
let input_element_count = 10 * 8 * 8 * 3;
let data = (0..input_element_count).map(|x| x as f32).collect::<Vec<f32>>();

let input_tensor = interpreter.input(0)?;
assert_eq!(input_tensor.data_type(), tensor::DataType::Float32);

// Copy input to buffer of first tensor (with index 0)
// You have 2 options:
// Set data using Tensor handle if you have it already
assert!(input_tensor.set_data(&data[..]).is_ok());
// Or set data using Interpreter:
assert!(interpreter.copy(&data[..], 0).is_ok());

// Invoke interpreter
assert!(interpreter.invoke().is_ok());

// Get output tensor
let output_tensor = interpreter.output(0)?;
assert_eq!(output_tensor.shape().dimensions(), &vec![10, 8, 8, 3]);
let output_vector = output_tensor.data::<f32>().to_vec();
```
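Note that the example uses the `?` operator, so it must live in a function that returns a `Result`. A minimal sketch of such a wrapper, assuming the crate's error type converts into `Box<dyn std::error::Error>`:

```rust
use tflitec::interpreter::{Interpreter, Options};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let interpreter = Interpreter::with_model_path("tests/add.bin", Some(Options::default()))?;
    interpreter.allocate_tensors()?;
    // ... resize inputs, set data, and invoke as shown above ...
    Ok(())
}
```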
The current version of the crate builds the `r2.6` branch of the [tensorflow project]. The compiled dynamic library or Framework will be available under the `OUT_DIR` (see [cargo documentation]) of `tflitec`.
You won't need this most of the time, because the crate output is linked appropriately.
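If you do need to locate the artifact yourself (for example, to ship it alongside a non-Cargo build), you can search Cargo's build output. A hedged sketch, assuming the standard TensorFlow Lite C library name `libtensorflowlite_c`:

```sh
# Search the build output for the dynamic library (library name assumed)
find target -name "libtensorflowlite_c.*"
```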
For all environments and targets you will need to have:
* `git` CLI to fetch [TensorFlow]
* [Bazel] to build [TensorFlow] (a version in the range `[3.7.2, 4.99.0]`)
To build [TensorFlow] for your machine with native optimizations, or to pass other `--copt`s to [Bazel], set the environment variable below:

```sh
BAZEL_COPTS="OPT1 OPT2 ..." # space separated values will be passed as --copt=OPTN to bazel
BAZEL_COPTS="-march=native" # for native optimized build
```
Some OSs or targets may require additional steps.
For Android, set the environment variables below:
* `ANDROID_NDK_HOME`
* `ANDROID_NDK_API_LEVEL`
* `ANDROID_SDK_HOME`
* `ANDROID_API_LEVEL`
* `ANDROID_BUILD_TOOLS_VERSION`
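A sketch with illustrative values; paths and version numbers depend on your installation:

```sh
export ANDROID_NDK_HOME="$HOME/Android/Sdk/ndk/21.4.7075529" # hypothetical path
export ANDROID_NDK_API_LEVEL=21
export ANDROID_SDK_HOME="$HOME/Android/Sdk"                  # hypothetical path
export ANDROID_API_LEVEL=30
export ANDROID_BUILD_TOOLS_VERSION=30.0.3
```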
Also, `bindgen` requires extra clang arguments to locate the NDK headers when cross compiling:

```sh
HOST_TAG=darwin-x86_64 # as example
TARGET_TRIPLE=arm-linux-androideabi # as example
BINDGEN_EXTRA_CLANG_ARGS="\
-I${ANDROID_NDK_HOME}/sources/cxx-stl/llvm-libc++/include/ \
-I${ANDROID_NDK_HOME}/sysroot/usr/include/ \
-I${ANDROID_NDK_HOME}/toolchains/llvm/prebuilt/${HOST_TAG}/sysroot/usr/include/${TARGET_TRIPLE}/"
```
* (Recommended) [cargo-ndk] simplifies the `cargo build` process.
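For example, with [cargo-ndk] installed, building for a 64-bit ARM Android device could look like this (a sketch following cargo-ndk's documented flags):

```sh
# Build for arm64-v8a and place the produced libraries under ./jniLibs
cargo ndk -t arm64-v8a -o ./jniLibs build --release
```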