This crate provides a safe Rust wrapper over the [TensorFlow Lite C API]. Its API is very similar to that of the [TensorFlow Lite Swift API].

## Supported Targets

The targets below are tested. However, others may work, too.

* iOS: `aarch64-apple-ios` and `x86_64-apple-ios`
* macOS: `x86_64-apple-darwin`
* Linux: `x86_64-unknown-linux-gnu`
* Android: `aarch64-linux-android` and `armv7-linux-androideabi`
* Windows (see details below)
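For cross compilation you also need the Rust standard library for the chosen target. A minimal sketch, assuming `rustup` manages your toolchain:

```sh
# Install the Rust standard library for a cross target (example: 64-bit Android)
rustup target add aarch64-linux-android
```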

See the Compilation section for build instructions for your target, and please read the Optimized Build section carefully.

## Features

Note: the `xnnpack` feature is already enabled for iOS, but `xnnpack_qs8` and `xnnpack_qu8` must be enabled manually.
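For example, with a recent Cargo you can enable these features from the command line when adding the dependency (a sketch; you can also edit the `features` list in your `Cargo.toml` directly):

```sh
# Enable the XNNPACK-related features of tflitec (requires Cargo >= 1.62)
cargo add tflitec --features xnnpack,xnnpack_qs8,xnnpack_qu8
```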

## Examples

The example below shows running inference on a TensorFlow Lite model.

```rust
use tflitec::interpreter::{Interpreter, Options};
use tflitec::tensor;
use std::path::MAIN_SEPARATOR;

// Create interpreter options
let mut options = Options::default();
options.thread_count = 1;

// Load example model which outputs y = 3 * x
let path = format!("tests{}add.bin", MAIN_SEPARATOR);
let interpreter = Interpreter::with_model_path(&path, Some(options))?;
// Resize input
let input_shape = tensor::Shape::new(vec![10, 8, 8, 3]);
interpreter.resize_input(0, input_shape)?;
// Allocate tensors if you just created Interpreter or resized its inputs
interpreter.allocate_tensors()?;

// Create dummy input
let input_element_count = 10 * 8 * 8 * 3;
let data = (0..input_element_count).map(|x| x as f32).collect::<Vec<f32>>();

let input_tensor = interpreter.input(0)?;
assert_eq!(input_tensor.data_type(), tensor::DataType::Float32);

// Copy input to buffer of first tensor (with index 0)
// You have 2 options:
// Set data using Tensor handle if you have it already
assert!(input_tensor.set_data(&data[..]).is_ok());
// Or set data using Interpreter:
assert!(interpreter.copy(&data[..], 0).is_ok());

// Invoke interpreter
assert!(interpreter.invoke().is_ok());

// Get output tensor
let output_tensor = interpreter.output(0)?;

assert_eq!(output_tensor.shape().dimensions(), &vec![10, 8, 8, 3]);
let output_vector = output_tensor.data::<f32>().to_vec();
let expected: Vec<f32> = data.iter().map(|e| e * 3.0).collect();
assert_eq!(expected, output_vector);

// The line below is needed for doctest, please ignore it
Ok::<(), tflitec::Error>(())
```

## Prebuilt Library Support

As described in the Compilation section, `libtensorflowlite_c` is built during compilation, and this step may take a few minutes. To reuse a prebuilt library, you can set the `TFLITEC_PREBUILT_PATH` or `TFLITEC_PREBUILT_PATH_<NORMALIZED_TARGET>` environment variable (the latter takes precedence). `NORMALIZED_TARGET` is the target triple converted to uppercase, with hyphens replaced by underscores, as in the cargo configuration environment variables. Below you can find example values for different `TARGET`s:
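The following illustrates the naming scheme for two of the tested targets (the paths are placeholders; the library file extension depends on the platform):

```sh
# TARGET = x86_64-apple-darwin -> NORMALIZED_TARGET = X86_64_APPLE_DARWIN
export TFLITEC_PREBUILT_PATH_X86_64_APPLE_DARWIN=/path/to/libtensorflowlite_c.dylib
# TARGET = x86_64-unknown-linux-gnu -> NORMALIZED_TARGET = X86_64_UNKNOWN_LINUX_GNU
export TFLITEC_PREBUILT_PATH_X86_64_UNKNOWN_LINUX_GNU=/path/to/libtensorflowlite_c.so
```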

You can find these files under the `OUT_DIR` after you compile the library for the first time; copy them to a persistent path and set the environment variable.
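A minimal sketch of that workflow on Linux, assuming a hypothetical `~/.tflitec/` directory as the persistent location (the exact `OUT_DIR` path varies per build):

```sh
# Locate the library produced by the first build (illustrative)
find target/release/build -path "*tflitec*" -name "libtensorflowlite_c.so"
# Copy it to a persistent path and point future builds at it
cp <found_path> ~/.tflitec/libtensorflowlite_c.so
export TFLITEC_PREBUILT_PATH=~/.tflitec/libtensorflowlite_c.so
```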

### XNNPACK support

You can activate the `xnnpack` features with a prebuilt library, too. However, that library must have been built with XNNPACK; otherwise you will get a linking error.
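If you are unsure whether a prebuilt library includes XNNPACK, a quick heuristic check on Linux is to look for the delegate symbols (assuming `nm` from binutils is available):

```sh
# Expect symbols such as TfLiteXNNPackDelegateCreate if XNNPACK was built in
nm -D /path/to/libtensorflowlite_c.so | grep -i xnnpack
```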

## Compilation

The current version of the crate builds tag `v2.9.1` of the [tensorflow project]. The compiled dynamic library or Framework will be available under the `OUT_DIR` (see the [cargo documentation]) of `tflitec`. You won't need this output most of the time, because the crate output is linked appropriately. In addition, it may be worth reading the Prebuilt Library Support section to make your builds faster. For all environments and targets you will need to have:

* [Bazel] (to build [TensorFlow])
* Python 3 (required by the [TensorFlow] build)

### Optimized Build

To build [TensorFlow] for your machine with native optimizations, or to pass other `--copt`s to [Bazel], set the environment variable below:

```sh
TFLITEC_BAZEL_COPTS="OPT1 OPT2 ..." # space separated values will be passed as --copt=OPTN to bazel
TFLITEC_BAZEL_COPTS="-march=native" # for native optimized build
```

Some operating systems or targets may require additional steps.

### Android

* Set the `BINDGEN_EXTRA_CLANG_ARGS` environment variable, as described in https://developer.android.com/ndk/guides/other_build_systems:

```sh
HOST_TAG=darwin-x86_64 # as example
TARGET_TRIPLE=arm-linux-androideabi # as example
BINDGEN_EXTRA_CLANG_ARGS="\
-I${ANDROID_NDK_HOME}/sources/cxx-stl/llvm-libc++/include/ \
-I${ANDROID_NDK_HOME}/sysroot/usr/include/ \
-I${ANDROID_NDK_HOME}/toolchains/llvm/prebuilt/${HOST_TAG}/sysroot/usr/include/${TARGET_TRIPLE}/"
```

* (Recommended) [cargo-ndk] simplifies the `cargo build` process. Recent versions of the tool have a `--bindgen` flag, which sets the `BINDGEN_EXTRA_CLANG_ARGS` variable appropriately, so you can skip the step above (see the sketch after this list).
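A typical invocation might look like the following sketch, assuming [cargo-ndk] is installed and the NDK environment is set up:

```sh
# Build for 64-bit Android; --bindgen sets BINDGEN_EXTRA_CLANG_ARGS for you
cargo ndk --target aarch64-linux-android --bindgen build --release
```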

### Windows

Windows support is experimental; it is tested on Windows 10. You should follow the instructions in the "Setup for Windows" section of the [TensorFlow Build Instructions for Windows]. In other words, you should install the following before building:

* Python 3.8.x, 64-bit (the instructions suggest 3.6.x, but this package is tested with 3.8.x)
* [Bazel]
* [MSYS2]
* Visual C++ Build Tools 2019

Do not forget to add the relevant paths to the `%PATH%` environment variable by following the [TensorFlow Build Instructions for Windows] carefully (the only exception is the Python version).