sog4onnx

Overview

Simple ONNX operation generator (Simple Operation Generator for ONNX). sog4onnx generates a single-OP ONNX model from externally specified variables, attributes, and an opset. It is part of the simple-onnx-processing-tools collection:

https://github.com/PINTO0309/simple-onnx-processing-tools


Key concept

  • Variable, Constant, Operation, and Attribute can all be generated externally.
  • The opset can be specified externally.
  • The tool performs no consistency checks on Operations, because new OPs are added frequently and the definitions of existing OPs change with each new version of the ONNX opset.
  • Only one OP can be defined at a time; the goal is to build arbitrary ONNX graphs by combining the generated models with snc4onnx, sne4onnx, snd4onnx, and scs4onnx.
  • List of parameters that can be specified: https://github.com/onnx/onnx/blob/main/docs/Operators.md

1. Setup

1-1. HostPC

### option
$ echo 'export PATH=$HOME/.local/bin:$PATH' >> ~/.bashrc \
&& source ~/.bashrc

### run
$ pip install -U onnx \
&& python3 -m pip install -U onnx_graphsurgeon --index-url https://pypi.ngc.nvidia.com \
&& pip install -U sog4onnx

1-2. Docker

### docker pull
$ docker pull pinto0309/sog4onnx:latest

### docker build
$ docker build -t pinto0309/sog4onnx:latest .

### docker run
$ docker run --rm -it -v `pwd`:/workdir pinto0309/sog4onnx:latest
$ cd /workdir

2. CLI Usage

$ sog4onnx -h

usage: sog4onnx [-h]
  --op_type OP_TYPE
  --opset OPSET
  --op_name OP_NAME
  [--input_variables NAME TYPE VALUE]
  [--output_variables NAME TYPE VALUE]
  [--attributes NAME DTYPE VALUE]
  [--output_onnx_file_path OUTPUT_ONNX_FILE_PATH]
  [--non_verbose]

optional arguments:
  -h, --help
        show this help message and exit

  --op_type OP_TYPE
        ONNX OP type.
        https://github.com/onnx/onnx/blob/main/docs/Operators.md

  --opset OPSET
        ONNX opset number.

  --op_name OP_NAME
        OP name.

  --input_variables NAME DTYPE VALUE
        input_variables can be specified multiple times.
        --input_variables variable_name numpy.dtype shape
        https://github.com/onnx/onnx/blob/main/docs/Operators.md

        e.g.
        --input_variables i1 float32 [1,3,5,5] \
        --input_variables i2 int32 [1] \
        --input_variables i3 float64 [1,3,224,224]

  --output_variables NAME DTYPE VALUE
        output_variables can be specified multiple times.
        --output_variables variable_name numpy.dtype shape
        https://github.com/onnx/onnx/blob/main/docs/Operators.md

        e.g.
        --output_variables o1 float32 [1,3,5,5] \
        --output_variables o2 int32 [1] \
        --output_variables o3 float64 [1,3,224,224]

  --attributes NAME DTYPE VALUE
        attributes can be specified multiple times.
        dtype is one of "float32" or "float64" or "int32" or "int64" or "str".
        --attributes name dtype value
        https://github.com/onnx/onnx/blob/main/docs/Operators.md

        e.g.
        --attributes alpha float32 1.0 \
        --attributes beta float32 1.0 \
        --attributes transA int32 0 \
        --attributes transB int32 0

  --output_onnx_file_path OUTPUT_ONNX_FILE_PATH
        Output onnx file path.
        If not specified, a file with the OP type name is generated.

        e.g. op_type="Gemm" -> Gemm.onnx

  --non_verbose
        Do not show all information logs. Only error logs are displayed.

3. In-script Usage

$ python
>>> from sog4onnx import generate
>>> help(generate)
Help on function generate in module sog4onnx.onnx_operation_generator:

generate(
  op_type: str,
  opset: int,
  op_name: str,
  input_variables: dict,
  output_variables: dict,
  attributes: Union[dict, NoneType] = None,
  output_onnx_file_path: Union[str, NoneType] = '',
  non_verbose: Union[bool, NoneType] = False
) -> onnx.onnx_ml_pb2.ModelProto

    Parameters
    ----------
    op_type: str
        ONNX op type.
        See below for the types of OPs that can be specified.
        https://github.com/onnx/onnx/blob/main/docs/Operators.md

        e.g. "Add", "Div", "Gemm", ...

    opset: int
        ONNX opset number.

        e.g. 11

    op_name: str
        OP name.

    input_variables: Optional[dict]
        Specify input variables for the OP to be generated.
        See below for the variables that can be specified.
        https://github.com/onnx/onnx/blob/main/docs/Operators.md
        {"input_var_name1": [numpy.dtype, shape], "input_var_name2": [dtype, shape], ...}

        e.g.
        input_variables = {
          "name1": [np.float32, [1,224,224,3]],
          "name2": [np.bool_, [0]],
          ...
        }

    output_variables: Optional[dict]
        Specify output variables for the OP to be generated.
        See below for the variables that can be specified.
        https://github.com/onnx/onnx/blob/main/docs/Operators.md
        {"output_var_name1": [numpy.dtype, shape], "output_var_name2": [dtype, shape], ...}

        e.g.
        output_variables = {
          "name1": [np.float32, [1,224,224,3]],
          "name2": [np.bool_, [0]],
          ...
        }

    attributes: Optional[dict]
        Specify attributes for the OP to be generated.
        See below for the attributes that can be specified.
        When specifying Tensor format values, specify an array converted to np.ndarray.
        https://github.com/onnx/onnx/blob/main/docs/Operators.md
        {"attr_name1": value1, "attr_name2": value2, "attr_name3": value3, ...}

        e.g.
        attributes = {
          "alpha": 1.0,
          "beta": 1.0,
          "transA": 0,
          "transB": 0
        }
        Default: None

    output_onnx_file_path: Optional[str]
        Output onnx file path.
        If not specified, no .onnx file is output.
        Default: ''

    non_verbose: Optional[bool]
        Do not show all information logs. Only error logs are displayed.
        Default: False

    Returns
    -------
    single_op_graph: onnx.ModelProto
        Single op onnx ModelProto

4. CLI Execution

$ sog4onnx \
--op_type Gemm \
--opset 1 \
--op_name gemm_custom1 \
--input_variables i1 float32 [1,2,3] \
--input_variables i2 float32 [1,1] \
--input_variables i3 int32 [0] \
--output_variables o1 float32 [1,2,3] \
--attributes alpha float32 1.0 \
--attributes beta float32 1.0 \
--attributes transA int32 0 \
--attributes transB int32 0

5. In-script Execution

import numpy as np
from sog4onnx import generate

single_op_graph = generate(
    op_type = 'Gemm',
    opset = 1,
    op_name = "gemm_custom1",
    input_variables = {
      "i1": [np.float32, [1,2,3]],
      "i2": [np.float32, [1,1]],
      "i3": [np.int32, [0]],
    },
    output_variables = {
      "o1": [np.float32, [1,2,3]],
    },
    attributes = {
      "alpha": 1.0,
      "beta": 1.0,
      "broadcast": 0,
      "transA": 0,
      "transB": 0,
    },
    non_verbose = True,
)
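
The return value single_op_graph is a standard onnx.ModelProto, so it can be saved and validated with the regular onnx package. Because sog4onnx itself performs no consistency checks (see Key concept), running the official checker afterwards is a reasonable follow-up. A minimal sketch, assuming the generate() call above and an arbitrary output file name:

import onnx

# Save the generated single-OP graph (the file name is arbitrary).
onnx.save(single_op_graph, 'gemm_custom1.onnx')

# sog4onnx does not validate the OP definition, so use the official
# checker; it raises an error if the definition is invalid for the opset.
onnx.checker.check_model(single_op_graph)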

6. Sample

6-1. opset=1, Gemm

$ sog4onnx \
--op_type Gemm \
--opset 1 \
--op_name gemm_custom1 \
--input_variables i1 float32 [1,2,3] \
--input_variables i2 float32 [1,1] \
--input_variables i3 int32 [0] \
--output_variables o1 float32 [1,2,3] \
--attributes alpha float32 1.0 \
--attributes beta float32 1.0 \
--attributes transA int32 0 \
--attributes transB int32 0 \
--non_verbose


6-2. opset=11, Add

$ sog4onnx \
--op_type Add \
--opset 11 \
--op_name add_custom1 \
--input_variables i1 float32 [1,2,3] \
--input_variables i2 float32 [1,2,3] \
--output_variables o1 float32 [1,2,3] \
--non_verbose


6-3. opset=11, NonMaxSuppression

$ sog4onnx \
--op_type NonMaxSuppression \
--opset 11 \
--op_name nms_custom1 \
--input_variables boxes float32 [1,6,4] \
--input_variables scores float32 [1,1,6] \
--input_variables max_output_boxes_per_class int64 [1] \
--input_variables iou_threshold float32 [1] \
--input_variables score_threshold float32 [1] \
--output_variables selected_indices int64 [3,3] \
--attributes center_point_box int64 1


6-4. opset=11, Constant

$ sog4onnx \
--op_type Constant \
--opset 11 \
--op_name const_custom1 \
--output_variables boxes float32 [1,6,4] \
--attributes value float32 \
[[\
[0.5,0.5,1.0,1.0],\
[0.5,0.6,1.0,1.0],\
[0.5,0.4,1.0,1.0],\
[0.5,10.5,1.0,1.0],\
[0.5,10.6,1.0,1.0],\
[0.5,100.5,1.0,1.0]\
]]
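
The same Constant OP can also be generated in-script. As noted in section 3, Tensor-format attribute values are passed as np.ndarray. The sketch below mirrors the CLI example above; it assumes an empty input_variables dict is accepted for input-less OPs such as Constant.

import numpy as np
from sog4onnx import generate

# Tensor-format attribute values must be passed as np.ndarray (see section 3).
const_value = np.asarray(
    [[
        [0.5, 0.5, 1.0, 1.0],
        [0.5, 0.6, 1.0, 1.0],
        [0.5, 0.4, 1.0, 1.0],
        [0.5, 10.5, 1.0, 1.0],
        [0.5, 10.6, 1.0, 1.0],
        [0.5, 100.5, 1.0, 1.0],
    ]],
    dtype=np.float32,
)

single_op_graph = generate(
    op_type = 'Constant',
    opset = 11,
    op_name = 'const_custom1',
    input_variables = {},  # Constant takes no inputs (assumed to be accepted)
    output_variables = {
        'boxes': [np.float32, [1,6,4]],
    },
    attributes = {
        'value': const_value,
    },
    output_onnx_file_path = 'Constant.onnx',
)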


7. Reference

  1. https://github.com/onnx/onnx/blob/main/docs/Operators.md
  2. https://docs.nvidia.com/deeplearning/tensorrt/onnx-graphsurgeon/docs/index.html
  3. https://github.com/NVIDIA/TensorRT/tree/main/tools/onnx-graphsurgeon
  4. https://github.com/PINTO0309/sne4onnx
  5. https://github.com/PINTO0309/snd4onnx
  6. https://github.com/PINTO0309/snc4onnx
  7. https://github.com/PINTO0309/scs4onnx
  8. https://github.com/PINTO0309/PINTO_model_zoo

8. Issues

https://github.com/PINTO0309/simple-onnx-processing-tools/issues


Releases
  • 1.0.15(Nov 20, 2022)

    • Fixed a bug where Constant and ConstantOfShape opsets were not set

    Full Changelog: https://github.com/PINTO0309/sog4onnx/compare/1.0.14...1.0.15

  • 1.0.14(Sep 8, 2022)

    • Add short form parameter
      $ sog4onnx -h
      
      usage: sog4onnx [-h]
        -ot OP_TYPE
        -os OPSET
        -on OP_NAME
        [-iv NAME TYPE VALUE]
        [-ov NAME TYPE VALUE]
        [-a NAME DTYPE VALUE]
        [-of OUTPUT_ONNX_FILE_PATH]
        [-n]
      
      optional arguments:
        -h, --help
          show this help message and exit
      
        -ot OP_TYPE, --op_type OP_TYPE
          ONNX OP type.
          https://github.com/onnx/onnx/blob/main/docs/Operators.md
      
        -os OPSET, --opset OPSET
          ONNX opset number.
      
        -on OP_NAME, --op_name OP_NAME
          OP name.
      
        -iv INPUT_VARIABLES INPUT_VARIABLES INPUT_VARIABLES, --input_variables INPUT_VARIABLES INPUT_VARIABLES INPUT_VARIABLES
          input_variables can be specified multiple times.
          --input_variables variable_name numpy.dtype shape
          https://github.com/onnx/onnx/blob/main/docs/Operators.md
      
          e.g.
          --input_variables i1 float32 [1,3,5,5] \
          --input_variables i2 int32 [1] \
          --input_variables i3 float64 [1,3,224,224]
      
        -ov OUTPUT_VARIABLES OUTPUT_VARIABLES OUTPUT_VARIABLES, --output_variables OUTPUT_VARIABLES OUTPUT_VARIABLES OUTPUT_VARIABLES
          output_variables can be specified multiple times.
          --output_variables variable_name numpy.dtype shape
          https://github.com/onnx/onnx/blob/main/docs/Operators.md
      
          e.g.
          --output_variables o1 float32 [1,3,5,5] \
          --output_variables o2 int32 [1] \
          --output_variables o3 float64 [1,3,224,224]
      
        -a ATTRIBUTES ATTRIBUTES ATTRIBUTES, --attributes ATTRIBUTES ATTRIBUTES ATTRIBUTES
          attributes can be specified multiple times.
          dtype is one of "float32" or "float64" or "int32" or "int64" or "str".
          --attributes name dtype value
          https://github.com/onnx/onnx/blob/main/docs/Operators.md
      
          e.g.
          --attributes alpha float32 1.0 \
          --attributes beta float32 1.0 \
          --attributes transA int32 0 \
          --attributes transB int32 0
      
        -of OUTPUT_ONNX_FILE_PATH, --output_onnx_file_path OUTPUT_ONNX_FILE_PATH
          Output onnx file path.
          If not specified, a file with the OP type name is generated.
      
          e.g. op_type="Gemm" -> Gemm.onnx
      
        -n, --non_verbose
          Do not show all information logs. Only error logs are displayed.
      
  • 1.0.13(Jun 10, 2022)

  • 1.0.12(Jun 7, 2022)

  • 1.0.11(May 25, 2022)

  • 1.0.10(May 15, 2022)

  • 1.0.9(Apr 26, 2022)

    • Added op_name as an input parameter, allowing OPs to be named.
      • CLI
        sog4onnx [-h]
          --op_type OP_TYPE
          --opset OPSET
          --op_name OP_NAME
          [--input_variables NAME TYPE VALUE]
          [--output_variables NAME TYPE VALUE]
          [--attributes NAME DTYPE VALUE]
          [--output_onnx_file_path OUTPUT_ONNX_FILE_PATH]
          [--non_verbose]
        
      • In-script
        generate(
          op_type: str,
          opset: int,
          op_name: str,
          input_variables: dict,
          output_variables: dict,
          attributes: Union[dict, NoneType] = None,
          output_onnx_file_path: Union[str, NoneType] = '',
          non_verbose: Union[bool, NoneType] = False
        ) -> onnx.onnx_ml_pb2.ModelProto
        
  • 1.0.8(Apr 15, 2022)

  • 1.0.7(Apr 14, 2022)

  • 1.0.6(Apr 14, 2022)

  • 1.0.5(Apr 13, 2022)

  • 1.0.4(Apr 13, 2022)

  • 1.0.3(Apr 12, 2022)

  • 1.0.2(Apr 12, 2022)

  • 1.0.1(Apr 12, 2022)

  • 1.0.0(Apr 12, 2022)

  • 0.0.2(Apr 12, 2022)

  • 0.0.1(Apr 12, 2022)

Owner
Katsuya Hyodo
Hobby programmer. Intel Software Innovator Program member.