Safe MMDeploy Rust wrapper.
To build this repo successfully, you should install some prerequisite packages first. The following guide has been tested on Ubuntu on an x86 device.
Step 0. Install Rust if you don't have it.
```bash
apt install curl
curl --proto '=https' --tlsv1.2 https://sh.rustup.rs -sSf | sh
```
Step 1. Install Clang, required by Bindgen.
```bash
apt install llvm-dev libclang-dev clang
```
Step 2. Download and install the pre-built mmdeploy package. Currently, `mmdeploy-sys` is built upon the pre-built packages of mmdeploy, so this repo only supports the OnnxRuntime and TensorRT backends. Don't be disappointed: a build-from-source script is in progress, and once it is finished we will be able to deploy models with all backends supported by mmdeploy in Rust.
```bash
apt install wget
```
If you want to deploy models with OnnxRuntime:
```bash
wget https://github.com/open-mmlab/mmdeploy/releases/download/v0.9.0/mmdeploy-0.9.0-linux-x86_64-onnxruntime1.8.1.tar.gz
tar -zxvf mmdeploy-0.9.0-linux-x86_64-onnxruntime1.8.1.tar.gz
pushd mmdeploy-0.9.0-linux-x86_64-onnxruntime1.8.1
export MMDEPLOY_DIR=$(pwd)/sdk
export LD_LIBRARY_PATH=$MMDEPLOY_DIR/lib:$LD_LIBRARY_PATH
popd

wget https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-linux-x64-1.8.1.tgz
tar -zxvf onnxruntime-linux-x64-1.8.1.tgz
cd onnxruntime-linux-x64-1.8.1
export ONNXRUNTIME_DIR=$(pwd)
export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH
```
If you want to deploy models with TensorRT:
Note the CUDA version: 11. This script only works on machines with CUDA 11.x.
```bash
wget https://github.com/open-mmlab/mmdeploy/releases/download/v0.9.0/mmdeploy-0.9.0-linux-x86_64-cuda11.1-tensorrt8.2.3.0.tar.gz
tar -zxvf mmdeploy-0.9.0-linux-x86_64-cuda11.1-tensorrt8.2.3.0.tar.gz
pushd mmdeploy-0.9.0-linux-x86_64-cuda11.1-tensorrt8.2.3.0
export MMDEPLOY_DIR=$(pwd)/sdk
export LD_LIBRARY_PATH=$MMDEPLOY_DIR/lib:$LD_LIBRARY_PATH
popd

export TENSORRT_DIR=$(pwd)/TensorRT-8.2.3.0
export LD_LIBRARY_PATH=${TENSORRT_DIR}/lib:$LD_LIBRARY_PATH

export CUDNN_DIR=$(pwd)/cuda
export LD_LIBRARY_PATH=$CUDNN_DIR/lib64:$LD_LIBRARY_PATH
```
Step 3. (Optional) Install OpenCV, required by the examples.
```bash
apt install libopencv-dev
```
Step 4. (Optional) Download converted ONNX models from mmdeploy-converted-models.
```bash
apt install git-lfs
git clone https://github.com/liu-mengyang/mmdeploy-converted-models --depth=1
```
Please read the previous section to make sure the required packages have been installed before using this crate.
Update your `Cargo.toml`:

```toml
mmdeploy = "0.9.0"
```
Good news: now you can use Rust to build your fantastic applications powered by MMDeploy!
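To sketch what such an application can look like, here is a minimal, hypothetical classifier program modeled on the bundled `classifier` example (invoked below with `cargo run --example classifier`). The `mmdeploy::classifier::Classifier` path, the `new`/`apply` signatures, and the result fields are illustrative assumptions, not a confirmed API; see `examples/classifier.rs` in the repo for the exact code.

```rust
use mmdeploy::classifier::Classifier; // assumed module path, for illustration
use opencv::imgcodecs::{imread, IMREAD_COLOR}; // the examples require OpenCV (Step 3)

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load the demo image with OpenCV.
    let img = imread("./images/demos/mmcls_demo.jpg", IMREAD_COLOR)?;

    // Build a classifier from a converted model; passing "cuda" instead of
    // "cpu" runs it on GPU. `Classifier::new(model_path, device, device_id)`
    // is an assumed signature.
    let mut classifier = Classifier::new("../mmdeploy-converted-models/resnet", "cpu", 0)?;

    // Run inference; the `label_id`/`score` fields mirror the MMDeploy C API's
    // mmdeploy_classification_t and are assumed here.
    for res in classifier.apply(&img)? {
        println!("label: {}, score: {:.4}", res.label_id, res.score);
    }
    Ok(())
}
```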
Take a look by running some examples! In these examples, `cpu` is the default inference device. If you choose to deploy models on GPU, replace all `cpu` in the test commands with `cuda`.
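For example, the classifier command below becomes `cargo run --example classifier cuda ../mmdeploy-converted-models/resnet ./images/demos/mmcls_demo.jpg` (assuming the model was converted for a CUDA backend).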
You can:
Deploy image classification models converted by MMDeploy.
The example deploys a ResNet model converted by the ONNXRUNTIME target on a CPU device.
```bash
cargo run --example classifier cpu ../mmdeploy-converted-models/resnet ./images/demos/mmcls_demo.jpg
```
Deploy object detection models converted by MMDeploy.
The example deploys a FasterRCNN model converted by the ONNXRUNTIME target on a CPU device.
```bash
cargo run --example detector cpu ../mmdeploy-converted-models/faster-rcnn-ort ./images/demos/mmdet_demo.jpg
```
A rendered result is saved in the current directory as `output_detection.png`.
Deploy object segmentation models converted by MMDeploy.
The example deploys a DeepLabv3 model converted by the ONNXRUNTIME target on a CPU device.
```bash
cargo run --example segmentor cpu ../mmdeploy-converted-models/deeplabv3 ./images/demos/mmseg_demo.png
```
A rendered result is saved in the current directory as `output_segmentation.png`.
Deploy pose detection models converted by MMDeploy.
The example deploys an HRNet model converted by the ONNXRUNTIME target on a CPU device.
```bash
cargo run --example pose_detector cpu ../mmdeploy-converted-models/hrnet ./images/demos/mmpose_demo.jpg
```
A rendered result is saved in the current directory as `output_pose.png`.
Deploy rotated detection models converted by MMDeploy.
The example deploys a RetinaNet model converted by the ONNXRUNTIME target on a CPU device.
```bash
cargo run --example rotated_detector cpu ../mmdeploy-converted-models/retinanet ./images/demos/mmrotate_demo.jpg
```
A rendered result is saved in the current directory as `output_rotated_detection.png`.
Deploy text detection and text recognition models converted by MMDeploy.
The example deploys a DBNet model for detection and a CRNN model for recognition both converted by the ONNXRUNTIME target on a CPU device.
```bash
cargo run --example ocr cpu ../mmdeploy-converted-models/dbnet ../mmdeploy-converted-models/crnn ./images/demos/mmocr_demo.jpg
```
A rendered result is saved in the current directory as `output_ocr.png`.
Deploy restorer models converted by MMDeploy.
The example deploys an EDSR model for restoration converted by the ONNXRUNTIME target on a CPU device.
```bash
cargo run --example restorer cpu ../mmdeploy-converted-models/edsr ./images/demos/mmediting_demo.png
```
A rendered result is saved in the current directory as `output_restorer.png`.