Convert a TensorFlow Model and Run Accelerated Inference with OpenVINO

Published: 2023-01-04

Compiled by 李翊玮

To run a neural network with the OpenVINO™ Toolkit, you first need to convert it to the Intermediate Representation (IR) format. For that you need Model Optimizer, a command-line tool that ships with the OpenVINO™ Toolkit. The easiest way to get it is from PyPI:

pip install openvino-dev
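Note that converting TensorFlow models also requires the TensorFlow framework dependencies. In recent openvino-dev releases these are published as pip "extras", so if the plain install above does not pull them in, installing with the extra should (the exact extra names depend on your OpenVINO version, so check them for the release you use):

pip install "openvino-dev[tensorflow2]"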

Model Optimizer supports TensorFlow models directly, so you can convert one straight from the terminal with the following command:

mo --input_model v3-small_224_1.0_float.pb --input_shape "[1,224,224,3]"

This converts the v3-small_224_1.0_float.pb model, telling Model Optimizer that its input is a 224x224 RGB image. You can of course specify more parameters, such as preprocessing steps or the desired model precision (FP32 or FP16):

mo --input_model v3-small_224_1.0_float.pb --input_shape "[1,224,224,3]" --mean_values="[127.5,127.5,127.5]" --scale_values="[127.5]" --data_type FP16
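The --mean_values and --scale_values options bake the preprocessing (pixel - mean) / scale into the converted model. As a quick arithmetic check of what that does to an 8-bit pixel range (this snippet is for illustration only and is not part of the original demo):

import numpy as np

# Model Optimizer embeds (pixel - 127.5) / 127.5 into the IR,
# which maps the 8-bit range [0, 255] onto [-1, 1].
pixels = np.array([0, 127.5, 255], dtype=np.float32)
print((pixels - 127.5) / 127.5)  # -> [-1.  0.  1.]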

With these options, the model will normalize all pixels to the [-1, 1] value range, and inference will run in FP16. After the command finishes, you should see output like the following, listing all explicit and implicit parameters: the model path, input shape, chosen precision, channel reversal, mean and scale values, conversion parameters, and so on:

Exporting TensorFlow model to IR... This may take a few minutes.
Model Optimizer arguments:
Common parameters:
      - Path to the Input Model: /home/adrian/repos/openvino_notebooks/notebooks/101-tensorflow-to-openvino/model/v3-small_224_1.0_float.pb
      - Path for generated IR: /home/adrian/repos/openvino_notebooks/notebooks/101-tensorflow-to-openvino/model
      - IR output name: v3-small_224_1.0_float
      - Log level: ERROR
      - Batch: Not specified, inherited from the model
      - Input layers: Not specified, inherited from the model
      - Output layers: Not specified, inherited from the model
      - Input shapes: [1,224,224,3]
      - Mean values: [127.5,127.5,127.5]
      - Scale values: [127.5]
      - Scale factor: Not specified
      - Precision of IR: FP16
      - Enable fusing: True
      - Enable grouped convolutions fusing: True
      - Move mean values to preprocess section: None
      - Reverse input channels: False
TensorFlow specific parameters:
      - Input model in text protobuf format: False
      - Path to model dump for TensorBoard: None
      - List of shared libraries with TensorFlow custom layers implementation: None
      - Update the configuration file with input/output node names: None
      - Use configuration file used to generate the model with Object Detection API: None
      - Use the config file: None
      - Inference Engine found in: /home/adrian/repos/openvino_notebooks/openvino_env/lib/python3.8/site-packages/openvino
Inference Engine version: 2021.4.1-3926-14e67d86634-releases/2021/4
Model Optimizer version: 2021.4.1-3926-14e67d86634-releases/2021/4
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /home/adrian/repos/openvino_notebooks/notebooks/101-tensorflow-to-openvino/model/v3-small_224_1.0_float.xml
[ SUCCESS ] BIN file: /home/adrian/repos/openvino_notebooks/notebooks/101-tensorflow-to-openvino/model/v3-small_224_1.0_float.bin
[ SUCCESS ] Total execution time: 9.97 seconds.
It's been a while, check for a new version of Intel(R) Distribution of OpenVINO(TM) toolkit here https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html?cid=other&source=prod&campid=ww_2022_bu_IOTG_OpenVINO-2022-1&content=upg_all&medium=organic or on the GitHub*
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at https://docs.openvino.ai

The SUCCESS messages at the end mean everything was converted correctly. You should now have the Intermediate Representation (IR), which consists of two files: .xml and .bin.

You are now ready to load this network into the Inference Engine and run inference. The code below assumes the model is used for ImageNet classification.

import cv2
import numpy as np
from openvino.inference_engine import IECore

# Load the model
ie = IECore()
net = ie.read_network(model="v3-small_224_1.0_float.xml", weights="v3-small_224_1.0_float.bin")
exec_net = ie.load_network(network=net, device_name="CPU")
input_key = next(iter(exec_net.input_info))
output_key = next(iter(exec_net.outputs.keys()))

# Load the image
# The MobileNet network expects images in RGB format
image = cv2.cvtColor(cv2.imread(filename="image.jpg"), code=cv2.COLOR_BGR2RGB)
# resize to MobileNet image shape
input_image = cv2.resize(src=image, dsize=(224, 224))
# reshape to network input shape
input_image = np.expand_dims(input_image.transpose(2, 0, 1), axis=0)

# Do inference
result = exec_net.infer(inputs={input_key: input_image})[output_key]
result_index = np.argmax(result)

# Convert the inference result to a class name.
imagenet_classes = open("imagenet_2012.txt").read().splitlines()
# The model description states that for this model, class 0 is background,
# so we add background at the beginning of imagenet_classes
imagenet_classes = ['background'] + imagenet_classes
print(imagenet_classes[result_index])

And the image is recognized! You get the classification result n02099267 flat-coated retriever. You can try it yourself with the source code demo.
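The conversion log above also recommends moving to OpenVINO API 2.0, introduced in release 2022.1. As a minimal sketch (not part of the original demo) of the same inference with that API, reusing the IR files and the input_image and imagenet_classes prepared in the code above:

import numpy as np
from openvino.runtime import Core

core = Core()
# read_model picks up the .bin weights next to the .xml automatically
model = core.read_model(model="v3-small_224_1.0_float.xml")
compiled_model = core.compile_model(model=model, device_name="CPU")
output_layer = compiled_model.output(0)

# A compiled model can be called directly; the result dict is indexed by output
result = compiled_model([input_image])[output_layer]
print(imagenet_classes[np.argmax(result)])

Switching device_name to "GPU" or "AUTO" runs the same model on other available devices without changing the rest of the code.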
