ONNX Runtime: building on Ubuntu 16.04 (a build that made me question my life)
1. Introduction
What is ONNX Runtime?
ONNX Runtime is a high-performance inference engine for machine-learning models in the ONNX format, available on Linux, Windows and macOS.
Why use ONNX Runtime?
Because a trained model has to be put to use. After painstakingly collecting data and training a model, it would be a shame if all it ever earned was a ranking on some benchmark. Getting it into a real application is far more rewarding, although it also raises the bar: the model needs stronger generalization, fewer parameters, and it still has to hold its accuracy. Sometimes, when a task first lands on my desk, it feels like "mission impossible", but my boss is great, so it gets done step by step. Anyway, I digress...
2. Building ONNX Runtime on Ubuntu 16.04
a) Dependencies
First, the dependencies. Because I compiled some of these libraries from source to match specific versions, I did not install everything with apt-get (an apt-get sketch follows the package list below).
PACKAGE_LIST="autotools-dev \
automake \
build-essential \
git apt-transport-https apt-utils \
ca-certificates \
pkg-config \
wget \
zlib1g \
zlib1g-dev \
libssl-dev \
curl libcurl4-openssl-dev \
autoconf \
sudo \
gfortran \
python3-dev \
language-pack-en \
libopenblas-dev \
liblttng-ust0 \
libcurl3 \
libssl1.0.0 \
libkrb5-3 \
libicu55 \
libtinfo-dev \
libtool \
aria2 \
bzip2 \
unzip \
zip \
rsync libunwind8 libpng16-dev libexpat1-dev \
python3-setuptools python3-numpy python3-wheel python python3-pip python3-pytest \
libprotobuf-dev libprotobuf9v5 protobuf-compiler \
openjdk-8-jdk"
Heads up, key point: this one package deserves a special mention. If it is not installed, building the tests will fail; I learned that the hard way!
language-pack-en
Run the following commands:
locale-gen en_US.UTF-8
update-locale LANG=en_US.UTF-8
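A quick sanity check that the locale actually took effect (this check is my own addition, not part of the original steps):
# Verify the generated locale and the system default
locale -a | grep -i en_US          # should list en_US.utf8
cat /etc/default/locale            # should contain LANG=en_US.UTF-8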
b) git clone the corresponding branch
git clone --recursive https://github.com/Microsoft/onnxruntime -b your_branch
cd onnxruntime
git submodule update --init --recursive
2000 years later!
c) Build
You will need to configure the build for your own setup.
The full set of configurable parameters looks like this:
2020-04-24 03:26:15,615 Build [DEBUG] - Running subprocess in '/home/felaim/Documents/code/onnxruntime/build/Linux/Release'
['/usr/local/bin/cmake', '/home/felaim/Documents/code/onnxruntime/cmake', '-Donnxruntime_RUN_ONNX_TESTS=OFF', '-Donnxruntime_GENERATE_TEST_REPORTS=ON', '-Donnxruntime_DEV_MODE=ON', '-DPYTHON_EXECUTABLE=/usr/bin/python3', '-Donnxruntime_USE_CUDA=ON', '-Donnxruntime_USE_NSYNC=OFF', '-Donnxruntime_CUDNN_HOME=/usr/lib/x86_64-linux-gnu/', '-Donnxruntime_USE_AUTOML=OFF', '-Donnxruntime_CUDA_HOME=/usr/local/cuda', '-Donnxruntime_USE_JEMALLOC=OFF', '-Donnxruntime_USE_MIMALLOC=OFF', '-Donnxruntime_ENABLE_PYTHON=OFF', '-Donnxruntime_BUILD_CSHARP=OFF', '-Donnxruntime_BUILD_JAVA=OFF', '-Donnxruntime_BUILD_SHARED_LIB=ON', '-Donnxruntime_USE_EIGEN_FOR_BLAS=ON', '-Donnxruntime_USE_OPENBLAS=OFF', '-Donnxruntime_USE_DNNL=OFF', '-Donnxruntime_USE_MKLML=OFF', '-Donnxruntime_USE_GEMMLOWP=OFF', '-Donnxruntime_USE_NGRAPH=OFF', '-Donnxruntime_USE_OPENVINO=OFF', '-Donnxruntime_USE_OPENVINO_MYRIAD=OFF', '-Donnxruntime_USE_OPENVINO_GPU_FP32=OFF', '-Donnxruntime_USE_OPENVINO_GPU_FP16=OFF', '-Donnxruntime_USE_OPENVINO_CPU_FP32=OFF', '-Donnxruntime_USE_OPENVINO_VAD_M=OFF', '-Donnxruntime_USE_OPENVINO_VAD_F=OFF', '-Donnxruntime_USE_NNAPI=OFF', '-Donnxruntime_USE_OPENMP=ON', '-Donnxruntime_USE_TVM=OFF', '-Donnxruntime_USE_LLVM=OFF', '-Donnxruntime_ENABLE_MICROSOFT_INTERNAL=OFF', '-Donnxruntime_USE_BRAINSLICE=OFF', '-Donnxruntime_USE_NUPHAR=OFF', '-Donnxruntime_USE_EIGEN_THREADPOOL=OFF', '-Donnxruntime_USE_TENSORRT=ON', '-Donnxruntime_TENSORRT_HOME=path to tensorrt', '-Donnxruntime_CROSS_COMPILING=OFF', '-Donnxruntime_BUILD_SERVER=OFF', '-Donnxruntime_BUILD_x86=OFF', '-Donnxruntime_USE_FULL_PROTOBUF=ON', '-Donnxruntime_DISABLE_CONTRIB_OPS=OFF', '-Donnxruntime_MSVC_STATIC_RUNTIME=OFF', '-Donnxruntime_ENABLE_LANGUAGE_INTEROP_OPS=OFF', '-Donnxruntime_USE_DML=OFF', '-Donnxruntime_USE_TELEMETRY=OFF', '-DCUDA_CUDA_LIBRARY=/usr/local/cuda/lib64/stubs', '-Donnxruntime_PYBIND_EXPORT_OPSCHEMA=OFF', '-DCMAKE_BUILD_TYPE=Release']
Based on my own needs, I configured the build as follows:
./build.sh --build_shared_lib --config Release --use_cuda --cudnn_home /usr/lib/x86_64-linux-gnu/ --cuda_home /usr/local/cuda --use_tensorrt --tensorrt_home your_path_to_tensorrt
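A few other build.sh switches can be handy depending on your setup; the flags below are ones I believe exist on recent branches, but double-check ./build.sh --help on your checkout:
# Hedged examples only; verify the flag names against ./build.sh --help.
./build.sh --config Release --build_shared_lib --parallel     # use multiple compile jobs
./build.sh --config Release --build_shared_lib --skip_tests   # build without running the unit tests
./build.sh --config Release --enable_pybind --build_wheel     # additionally produce a Python wheel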
After working through network problems, version mismatches and a series of other odd issues, I finally got to 100% tests passed!
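For reference, with the command above the build output lands under build/Linux/Release; here is a short sketch of what to look for (the exact file names are my assumption for a --build_shared_lib build):
# Default build tree for the Release config
ls build/Linux/Release/libonnxruntime.so*      # the shared library
ls build/Linux/Release/onnxruntime_test_all    # the main unit-test executable
ls include/onnxruntime/core/session/           # public C/C++ API headers in the source tree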
There is really very little material about this online, and I stepped into pit after pit... Many thanks to Dr. Guo for his guidance; I am doing my best to follow!
That's all for now, back to coding! Blind spots everywhere.