This crate is a safe Rust wrapper around the [TensorFlow Lite C API]. Its API closely follows that of the [TensorFlow Lite Swift API].

## Supported Targets

The targets below are tested; others may work, too.

* iOS: `aarch64-apple-ios` and `x86_64-apple-ios`
* macOS: `x86_64-apple-darwin`
* Linux: `x86_64-unknown-linux-gnu`
* Android: `aarch64-linux-android` and `armv7-linux-androideabi`
* Windows (see details)

See the Compilation section for build instructions for your target, and please read the Optimized Build section carefully.

## Features

Note: the `xnnpack` feature is already enabled for iOS, but `xnnpack_qs8` and `xnnpack_qu8` must be enabled manually.
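For targets where they are not enabled by default, the features can be requested in `Cargo.toml`. A minimal sketch (the version number is illustrative; pin to the release you actually use):

```toml
[dependencies]
# Feature names from the section above; the version is a placeholder
tflitec = { version = "*", features = ["xnnpack", "xnnpack_qs8", "xnnpack_qu8"] }
```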

## Examples

The example below shows running inference on a TensorFlow Lite model.

```rust
use tflitec::interpreter::{Interpreter, Options};
use tflitec::tensor;
use std::path::MAIN_SEPARATOR;

// Create interpreter options
let mut options = Options::default();
options.thread_count = 1;

// Load example model which outputs y = 3 * x
let path = format!("tests{}add.bin", MAIN_SEPARATOR);
let interpreter = Interpreter::with_model_path(&path, Some(options))?;
// Resize input
let input_shape = tensor::Shape::new(vec![10, 8, 8, 3]);
interpreter.resize_input(0, input_shape)?;
// Allocate tensors if you just created Interpreter or resized its inputs
interpreter.allocate_tensors()?;

// Create dummy input
let input_element_count = 10 * 8 * 8 * 3;
let data = (0..input_element_count).map(|x| x as f32).collect::<Vec<f32>>();

let input_tensor = interpreter.input(0)?;
assert_eq!(input_tensor.data_type(), tensor::DataType::Float32);

// Copy input to buffer of first tensor (with index 0)
// You have 2 options:
// Set data using Tensor handle if you have it already
assert!(input_tensor.set_data(&data[..]).is_ok());
// Or set data using Interpreter:
assert!(interpreter.copy(&data[..], 0).is_ok());

// Invoke interpreter
assert!(interpreter.invoke().is_ok());

// Get output tensor
let output_tensor = interpreter.output(0)?;

assert_eq!(output_tensor.shape().dimensions(), &vec![10, 8, 8, 3]);
let output_vector = output_tensor.data::<f32>().to_vec();
let expected: Vec<f32> = data.iter().map(|e| e * 3.0).collect();
assert_eq!(expected, output_vector);

// The line below is needed for doctest, please ignore it
Ok::<(), tflitec::Error>(())
```

## Compilation

The current version of the crate builds the r2.6 branch of the [tensorflow project]. The compiled dynamic library or Framework will be available under the OUT_DIR (see the [cargo documentation]) of tflitec. You won't need it most of the time, because the crate output is linked appropriately. For all environments and targets you will need to have:

### Optimized Build

To build [TensorFlow] for your machine with native optimizations, or to pass other `--copt`s to [Bazel], set the environment variable below:

```sh
BAZEL_COPTS="OPT1 OPT2 ..." # space separated values will be passed as --copt=OPTN to bazel
BAZEL_COPTS="-march=native" # for native optimized build
```
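As a concrete sketch, the variable can also be set inline for a single build, assuming you build the crate with cargo (the flag shown is illustrative and machine-dependent):

```shell
# Pass a native-optimization copt to Bazel for this one build only
BAZEL_COPTS="-march=native" cargo build --release
```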

Some OSs or targets may require additional steps.

### Android

See https://developer.android.com/ndk/guides/other_build_systems.

```sh
HOST_TAG=darwin-x86_64 # as example
TARGET_TRIPLE=arm-linux-androideabi # as example
BINDGEN_EXTRA_CLANG_ARGS="\
-I${ANDROID_NDK_HOME}/sources/cxx-stl/llvm-libc++/include/ \
-I${ANDROID_NDK_HOME}/sysroot/usr/include/ \
-I${ANDROID_NDK_HOME}/toolchains/llvm/prebuilt/${HOST_TAG}/sysroot/usr/include/${TARGET_TRIPLE}/"
```

* (Recommended) [cargo-ndk] simplifies the `cargo build` process.
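A sketch of the cargo-ndk route (the target ABIs and output directory below are illustrative; consult the cargo-ndk documentation for your NDK setup):

```sh
cargo install cargo-ndk
# Build for two common Android ABIs; -o copies the resulting
# libraries into ./jniLibs (directory name is an example)
cargo ndk -t armeabi-v7a -t arm64-v8a -o ./jniLibs build --release
```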

### Windows

Windows support is experimental and is tested on Windows 10. You should follow the instructions in the Setup for Windows section of the [TensorFlow Build Instructions for Windows]. In other words, you should install the following before building:

* Python 3.8.x 64 bit (the instructions suggest 3.6.x, but this package is tested with 3.8.x)
* Bazel, supported versions: [3.7.2, 3.99.0]
* [MSYS2]
* Visual C++ Build Tools 2019

Do not forget to add the relevant paths to the %PATH% environment variable by following the [TensorFlow Build Instructions for Windows] carefully (the only exception is the Python version).