Semi-automated OpenVINO benchmark_app with variable parameters

Overview

A front-end for the OpenVINO benchmark_app that automatically runs benchmarks over all combinations of user-specified variable parameters.

Description

This program lets users specify variable parameters for the OpenVINO benchmark_app and automatically runs the benchmark with every combination of the given parameters.
The program generates a report file in CSV format with a date-and-time-coded file name ('result_DDmm-HHMMSS.csv'). You can analyze or visualize the benchmark results with MS Excel or another spreadsheet application.

The program is just a front-end for the official OpenVINO benchmark_app.
It uses benchmark_app as the core benchmark logic, so the performance results it reports are consistent with those measured by benchmark_app directly.
The command line parameters and their meanings are also compatible with benchmark_app.
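
For reference, the sketch below illustrates how a front-end like this can delegate a single run to benchmark_app via a subprocess call. It is only an illustration of the wrapping idea, not the tool's actual implementation; the 'benchmark_app' console script is assumed to be on PATH (installed with the OpenVINO development tools).

import subprocess

def run_benchmark(args):
    # Build the command line for one benchmark_app run.
    cmd = ['benchmark_app'] + [str(a) for a in args]
    # Capture stdout so the latency/throughput lines can be parsed afterwards.
    proc = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return proc.stdout

# One run, equivalent to a single row of the generated report.
output = run_benchmark(['-m', 'resnet.xml', '-niter', 100,
                        '-nthreads', 1, '-nstreams', 1, '-d', 'CPU', '-cdir', 'cache'])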

Requirements

  • OpenVINO 2022.1 or higher
    This program is not compatible with OpenVINO 2021.

How to run

  1. Install the required Python modules.
python -m pip install --upgrade pip setuptools
python -m pip install -r requirements.txt
  2. Run the auto benchmark (command line example).
python auto_benchmark_app.py -m resnet.xml -niter 100 -nthreads %1,2,4,8 -nstreams %1,2 -d %CPU,GPU -cdir cache

With this command line, -nthreads has 4 options (1, 2, 4, 8), -nstreams has 2 options (1, 2), and -d has 2 options (CPU, GPU). As a result, 16 (4x2x2) benchmarks will be performed in total.
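
The expansion is simply a Cartesian product of the value lists. The short sketch below (illustrative only, not the tool's internal code) reproduces the 16-run count for the command line above.

from itertools import product

nthreads = [1, 2, 4, 8]     # from -nthreads %1,2,4,8
nstreams = [1, 2]           # from -nstreams %1,2
devices  = ['CPU', 'GPU']   # from -d %CPU,GPU

# Every combination becomes one benchmark_app invocation.
combinations = list(product(nthreads, nstreams, devices))
print(len(combinations))    # 16 = 4 x 2 x 2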

Parameter options

You can specify variable parameters by adding one of the following prefixes to a parameter value (an illustrative expansion sketch follows the table).

Prefix  Type       Description / Example
$       range      $1,8,2 == range(1,8,2) => [1,3,5,7]
                   All range()-compatible expressions are possible, e.g. $1,5 or $5,1,-1
%       list       %CPU,GPU => ['CPU', 'GPU'], %1,2,4,8 => [1,2,4,8]
@       ir-models  @models == IR models in the './models' directory => ['resnet.xml', 'googlenet.xml', ...]
                   This option recursively searches for '.xml' files in the specified directory.
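
As an illustration of the three prefixes (not necessarily how the tool parses them internally), a value could be expanded roughly like this:

import glob
import os

def expand(value):
    # '$' : range expression, e.g. '$1,8,2' -> [1, 3, 5, 7]
    if value.startswith('$'):
        return list(range(*[int(v) for v in value[1:].split(',')]))
    # '%' : plain list, e.g. '%CPU,GPU' -> ['CPU', 'GPU']
    if value.startswith('%'):
        return value[1:].split(',')
    # '@' : recursive search for IR model (.xml) files under the given directory
    if value.startswith('@'):
        return glob.glob(os.path.join(value[1:], '**', '*.xml'), recursive=True)
    # No prefix: a fixed parameter, treated as a single-element list
    return [value]

print(expand('$1,8,2'))    # [1, 3, 5, 7]
print(expand('%CPU,GPU'))  # ['CPU', 'GPU']

Values produced by '%' stay as strings in this sketch, which is sufficient because they are passed straight back to benchmark_app on the command line.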

Examples of command line

python auto_benchmark_app.py -cdir cache -m resnet.xml -nthreads $1,5 -nstreams %1,2,4,8 -d %CPU,GPU

  • Runs the benchmark with -nthreads=range(1,5)=[1,2,3,4], -nstreams=[1,2,4,8], and -d=['CPU','GPU']. 32 (4x4x2) combinations in total.

python auto_benchmark_app.py -m @models -niter 100 -nthreads %1,2,4,8 -nstreams %1,2 -d CPU -cdir cache

  • Runs the benchmark with -m=[all '.xml' files found under the 'models' directory], -nthreads=[1,2,4,8], -nstreams=[1,2], and -d='CPU'.

Example of a result file

The last 4 items in each line are the performance data, in the order 'count', 'duration (ms)', 'latency AVG (ms)', and 'throughput (fps)'. A short parsing sketch follows the example below.

#CPU: Intel(R) Core(TM) i7-10700K CPU @ 3.80GHz
#MEM: 33947893760
#OS: Windows-10-10.0.22000-SP0
#OpenVINO: 2022.1.0-7019-cdb9bec7210-releases/2022/1
#Last 4 items in the lines : test count, duration (ms), latency AVG (ms), and throughput (fps)
benchmark_app.py,-m,models\FP16\googlenet-v1.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,100,772.55,30.20,129.44
benchmark_app.py,-m,models\FP16\resnet-50-tf.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,100,1917.62,75.06,52.15
benchmark_app.py,-m,models\FP16\squeezenet1.1.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,100,195.28,7.80,512.10
benchmark_app.py,-m,models\FP16-INT8\googlenet-v1.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,104,337.09,24.75,308.53
benchmark_app.py,-m,models\FP16-INT8\resnet-50-tf.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,100,1000.39,38.85,99.96
benchmark_app.py,-m,models\FP16-INT8\squeezenet1.1.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,104,64.22,4.69,1619.38
benchmark_app.py,-m,models\FP32\googlenet-v1.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,100,778.90,30.64,128.39
benchmark_app.py,-m,models\FP32\resnet-50-tf.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,100,1949.73,76.91,51.29
benchmark_app.py,-m,models\FP32\squeezenet1.1.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,100,182.59,7.58,547.69
benchmark_app.py,-m,models\FP32-INT8\googlenet-v1.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,104,331.73,24.90,313.51
benchmark_app.py,-m,models\FP32-INT8\resnet-50-tf.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,100,968.38,38.45,103.27
benchmark_app.py,-m,models\FP32-INT8\squeezenet1.1.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,104,67.70,5.04,1536.23
benchmark_app.py,-m,models\FP16\googlenet-v1.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,1536.14,15.30,65.10
benchmark_app.py,-m,models\FP16\resnet-50-tf.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,3655.59,36.50,27.36
benchmark_app.py,-m,models\FP16\squeezenet1.1.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,366.73,3.68,272.68
benchmark_app.py,-m,models\FP16-INT8\googlenet-v1.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,872.87,8.66,114.56
benchmark_app.py,-m,models\FP16-INT8\resnet-50-tf.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,1963.67,19.54,50.93
benchmark_app.py,-m,models\FP16-INT8\squeezenet1.1.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,242.28,2.34,412.74
benchmark_app.py,-m,models\FP32\googlenet-v1.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,1506.14,14.96,66.39
benchmark_app.py,-m,models\FP32\resnet-50-tf.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,3593.88,35.88,27.83
benchmark_app.py,-m,models\FP32\squeezenet1.1.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,366.28,3.56,273.01
benchmark_app.py,-m,models\FP32-INT8\googlenet-v1.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,876.52,8.69,114.09
benchmark_app.py,-m,models\FP32-INT8\resnet-50-tf.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,1934.72,19.25,51.69
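
Beyond a spreadsheet, the report is easy to post-process in Python. The sketch below (the file name is hypothetical) skips the '#' header lines and picks out the four performance fields of each row:

import csv

results = []
with open('result_0131-123456.csv') as f:    # hypothetical file name
    for row in csv.reader(f):
        # Skip blank lines and the '#CPU:', '#MEM:', ... header comments.
        if not row or row[0].startswith('#'):
            continue
        count, duration, latency, fps = row[-4:]
        results.append({'args': row[:-4],
                        'count': int(count),
                        'duration_ms': float(duration),
                        'latency_avg_ms': float(latency),
                        'throughput_fps': float(fps)})

# Example: report the configuration with the highest throughput.
best = max(results, key=lambda r: r['throughput_fps'])
print(best['throughput_fps'], ' '.join(best['args']))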

Owner

Yasunori Shimura