tflite2tensorflow

Overview

Generate saved_model, tfjs, tf-trt, EdgeTPU, CoreML, quantized tflite and .pb from .tflite.


1. Supported Layers

No. TFLite Layer TF Layer Remarks
1 CONV_2D tf.nn.conv2d
2 DEPTHWISE_CONV_2D tf.nn.depthwise_conv2d
3 MAX_POOL_2D tf.nn.max_pool
4 PAD tf.pad
5 MIRROR_PAD tf.raw_ops.MirrorPad
6 RELU tf.nn.relu
7 PRELU tf.keras.layers.PReLU
8 RELU6 tf.nn.relu6
9 RESHAPE tf.reshape
10 ADD tf.add
11 SUB tf.math.subtract
12 CONCATENATION tf.concat
13 LOGISTIC tf.math.sigmoid
14 TRANSPOSE_CONV tf.nn.conv2d_transpose
15 MUL tf.multiply
16 HARD_SWISH x*tf.nn.relu6(x+3)*0.16666667 Or x*tf.nn.relu6(x+3)*0.16666666
17 AVERAGE_POOL_2D tf.keras.layers.AveragePooling2D
18 FULLY_CONNECTED tf.keras.layers.Dense
19 RESIZE_BILINEAR tf.image.resize Or tf.image.resize_bilinear
20 RESIZE_NEAREST_NEIGHBOR tf.image.resize Or tf.image.resize_nearest_neighbor
21 MEAN tf.math.reduce_mean
22 SQUARED_DIFFERENCE tf.math.squared_difference
23 RSQRT tf.math.rsqrt
24 DEQUANTIZE (const)
25 FLOOR tf.math.floor
26 TANH tf.math.tanh
27 DIV tf.math.divide
28 FLOOR_DIV tf.math.floordiv
29 SUM tf.math.reduce_sum
30 POW tf.math.pow
31 SPLIT tf.split
32 SOFTMAX tf.nn.softmax
33 STRIDED_SLICE tf.strided_slice
34 TRANSPOSE tf.transpose
35 SPACE_TO_DEPTH tf.nn.space_to_depth
36 DEPTH_TO_SPACE tf.nn.depth_to_space
37 REDUCE_MAX tf.math.reduce_max
38 Convolution2DTransposeBias tf.nn.conv2d_transpose, tf.math.add CUSTOM, MediaPipe
39 LEAKY_RELU tf.keras.layers.LeakyReLU
40 MAXIMUM tf.math.maximum
41 MINIMUM tf.math.minimum
42 MaxPoolingWithArgmax2D tf.raw_ops.MaxPoolWithArgmax CUSTOM, MediaPipe
43 MaxUnpooling2D tf.cast, tf.shape, tf.math.floordiv, tf.math.floormod, tf.ones_like, tf.shape, tf.concat, tf.reshape, tf.transpose, tf.scatter_nd CUSTOM, MediaPipe
44 GATHER tf.gather
45 CAST tf.cast
46 SLICE tf.slice
47 PACK tf.stack
48 UNPACK tf.unstack
49 ARG_MAX tf.math.argmax
50 EXP tf.exp
51 TOPK_V2 tf.math.top_k
52 LOG_SOFTMAX tf.nn.log_softmax
53 L2_NORMALIZATION tf.math.l2_normalize
54 LESS tf.math.less
55 LESS_EQUAL tf.math.less_equal
56 GREATER tf.math.greater
57 GREATER_EQUAL tf.math.greater_equal
58 NEG tf.math.negative
59 WHERE tf.where
60 SELECT tf.where
61 SELECT_V2 tf.where
62 PADV2 tf.raw_ops.PadV2
63 SIN tf.math.sin
64 TILE tf.tile
65 EQUAL tf.math.equal
66 NOT_EQUAL tf.math.not_equal
67 LOG tf.math.log
68 SQRT tf.math.sqrt
69 ARG_MIN tf.math.argmin
70 REDUCE_PROD tf.math.reduce_prod
71 LOGICAL_OR tf.math.logical_or
72 LOGICAL_AND tf.math.logical_and
73 LOGICAL_NOT tf.math.logical_not
74 REDUCE_MIN tf.math.reduce_min
75 REDUCE_ANY tf.math.reduce_any
76 SQUARE tf.math.square
77 ZEROS_LIKE tf.zeros_like
78 FILL tf.fill
79 FLOOR_MOD tf.math.floormod
80 RANGE tf.range
81 ABS tf.math.abs
82 UNIQUE tf.unique
83 CEIL tf.math.ceil
84 REVERSE_V2 tf.reverse
85 ADD_N tf.math.add_n
86 GATHER_ND tf.gather_nd
87 COS tf.math.cos
88 RANK tf.math.rank
89 ELU tf.nn.elu
90 WHILE tf.while_loop
91 REVERSE_SEQUENCE tf.reverse_sequence
92 MATRIX_DIAG tf.linalg.diag
93 ROUND tf.math.round
94 NON_MAX_SUPPRESSION_V4 tf.raw_ops.NonMaxSuppressionV4
95 NON_MAX_SUPPRESSION_V5 tf.raw_ops.NonMaxSuppressionV5
96 SCATTER_ND tf.scatter_nd
97 SEGMENT_SUM tf.math.segment_sum
98 CUMSUM tf.math.cumsum
99 BROADCAST_TO tf.broadcast_to
100 RFFT2D tf.signal.rfft2d
101 L2_POOL_2D tf.square, tf.keras.layers.AveragePooling2D, tf.sqrt
102 LOCAL_RESPONSE_NORMALIZATION tf.nn.local_response_normalization
103 RELU_N1_TO_1 tf.minimum, tf.maximum
104 SPLIT_V tf.raw_ops.SplitV
105 MATRIX_SET_DIAG tf.linalg.set_diag
106 SHAPE tf.shape
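
Most rows above map one-to-one, but fused ops such as HARD_SWISH (row 16) are decomposed into several primitive TF ops. A minimal Python sketch of the equivalent computation, assuming TensorFlow 2.x is installed:

import numpy as np
import tensorflow as tf

def hard_swish(x):
    # Row 16 above: HARD_SWISH decomposed into primitive ops,
    # x * relu6(x + 3) * 0.16666667, i.e. x * relu6(x + 3) / 6.
    return x * tf.nn.relu6(x + 3.0) * 0.16666667

x = tf.constant(np.linspace(-6.0, 6.0, 5), dtype=tf.float32)
print(hard_swish(x).numpy())  # [0. 0. 0. 3. 6.]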

2. Environment

3. Setup

To install using the Python Package Index (PyPI), use the following command.

$ pip3 install tflite2tensorflow --upgrade

Or, to install the latest source code from the main branch, use the following command.

$ pip3 install git+https://github.com/PINTO0309/tflite2tensorflow --upgrade

This installs a customized TensorFlow Lite runtime with support for the MediaPipe custom OPs (MaxPoolingWithArgmax2D, MaxUnpooling2D, Convolution2DTransposeBias), FlexDelegate, and XNNPACK. If tflite_runtime does not install properly, please follow the instructions in the article "Add a custom OP to the TFLite runtime to build the whl installer (for Python)" to build it in the environment you are using.

$ sudo pip3 uninstall tensorboard-plugin-wit tb-nightly tensorboard \
                      tf-estimator-nightly tensorflow-gpu \
                      tensorflow tf-nightly tensorflow_estimator tflite_runtime -y

### Customized version of TensorFlow Lite installation
$ sudo gdown --id 1RWZmfFgtxm3muunv6BSf4yU29SKKFXIh
$ sudo chmod +x tflite_runtime-2.4.1-py3-none-any.whl
$ sudo pip3 install tflite_runtime-2.4.1-py3-none-any.whl

### Install the full TensorFlow package
$ sudo pip3 install tensorflow==2.4.1
 or
$ sudo pip3 install tf-nightly

### Download flatc
$ flatbuffers/1.12.0/download.sh

### Download schema.fbs
$ wget https://github.com/PINTO0309/tflite2tensorflow/raw/main/schema/schema.fbs

If the downloaded flatc does not work properly, please build it in your environment.

$ git clone -b v1.12.0 https://github.com/google/flatbuffers.git
$ cd flatbuffers && mkdir build && cd build
$ cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE=Release ..
$ make -j$(nproc)

The Windows version of flatc v1.12.0 can be downloaded from here: https://github.com/google/flatbuffers/releases/download/v1.12.0/flatc_windows.zip
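
tflite2tensorflow uses flatc together with schema.fbs to dump the .tflite flatbuffer to JSON before rebuilding the graph (the same "output json command" appears in the logs quoted in the Comments section below). A minimal sketch of that step as a standalone check, with the flatc, schema.fbs and model paths assumed relative to the current directory:

import subprocess

# Dump model.tflite to model.json using flatc and the TFLite schema.
# This mirrors the command tflite2tensorflow prints at startup.
subprocess.run(
    ['./flatc', '-t', '--strict-json', '--defaults-json',
     '-o', '.', 'schema.fbs', '--', 'model.tflite'],
    check=True,
)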

4. Usage / Execution sample

4-1. Command line options

usage: tflite2tensorflow [-h] --model_path MODEL_PATH --flatc_path
                         FLATC_PATH --schema_path SCHEMA_PATH
                         [--model_output_path MODEL_OUTPUT_PATH]
                         [--output_pb OUTPUT_PB]
                         [--output_no_quant_float32_tflite OUTPUT_NO_QUANT_FLOAT32_TFLITE]
                         [--output_weight_quant_tflite OUTPUT_WEIGHT_QUANT_TFLITE]
                         [--output_float16_quant_tflite OUTPUT_FLOAT16_QUANT_TFLITE]
                         [--output_integer_quant_tflite OUTPUT_INTEGER_QUANT_TFLITE]
                         [--output_full_integer_quant_tflite OUTPUT_FULL_INTEGER_QUANT_TFLITE]
                         [--output_integer_quant_type OUTPUT_INTEGER_QUANT_TYPE]
                         [--string_formulas_for_normalization STRING_FORMULAS_FOR_NORMALIZATION]
                         [--calib_ds_type CALIB_DS_TYPE]
                         [--ds_name_for_tfds_for_calibration DS_NAME_FOR_TFDS_FOR_CALIBRATION]
                         [--split_name_for_tfds_for_calibration SPLIT_NAME_FOR_TFDS_FOR_CALIBRATION]
                         [--download_dest_folder_path_for_the_calib_tfds DOWNLOAD_DEST_FOLDER_PATH_FOR_THE_CALIB_TFDS]
                         [--tfds_download_flg TFDS_DOWNLOAD_FLG]
                         [--output_tfjs OUTPUT_TFJS]
                         [--output_tftrt OUTPUT_TFTRT]
                         [--output_coreml OUTPUT_COREML]
                         [--output_edgetpu OUTPUT_EDGETPU]
                         [--replace_swish_and_hardswish REPLACE_SWISH_AND_HARDSWISH]
                         [--optimizing_hardswish_for_edgetpu OPTIMIZING_HARDSWISH_FOR_EDGETPU]
                         [--replace_prelu_and_minmax REPLACE_PRELU_AND_MINMAX]

optional arguments:
  -h, --help            show this help message and exit
  --model_path MODEL_PATH
                        input tflite model path (*.tflite)
  --flatc_path FLATC_PATH
                        flatc file path (flatc)
  --schema_path SCHEMA_PATH
                        schema.fbs path (schema.fbs)
  --model_output_path MODEL_OUTPUT_PATH
                        The output folder path of the converted model file
  --output_pb OUTPUT_PB
                        .pb output switch
  --output_no_quant_float32_tflite OUTPUT_NO_QUANT_FLOAT32_TFLITE
                        float32 tflite output switch
  --output_weight_quant_tflite OUTPUT_WEIGHT_QUANT_TFLITE
                        weight quant tflite output switch
  --output_float16_quant_tflite OUTPUT_FLOAT16_QUANT_TFLITE
                        float16 quant tflite output switch
  --output_integer_quant_tflite OUTPUT_INTEGER_QUANT_TFLITE
                        integer quant tflite output switch
  --output_full_integer_quant_tflite OUTPUT_FULL_INTEGER_QUANT_TFLITE
                        full integer quant tflite output switch
  --output_integer_quant_type OUTPUT_INTEGER_QUANT_TYPE
                        Input and output types when doing Integer Quantization
                        ('int8 (default)' or 'uint8')
  --string_formulas_for_normalization STRING_FORMULAS_FOR_NORMALIZATION
                        String formulas for normalization. It is evaluated by
                        Python's eval() function. Default: '(data -
                        [127.5,127.5,127.5]) / [127.5,127.5,127.5]'
  --calib_ds_type CALIB_DS_TYPE
                        Types of data sets for calibration. tfds or
                        numpy(Future Implementation)
  --ds_name_for_tfds_for_calibration DS_NAME_FOR_TFDS_FOR_CALIBRATION
                        Dataset name for TensorFlow Datasets for calibration.
                        https://www.tensorflow.org/datasets/catalog/overview
  --split_name_for_tfds_for_calibration SPLIT_NAME_FOR_TFDS_FOR_CALIBRATION
                        Split name for TensorFlow Datasets for calibration.
                        https://www.tensorflow.org/datasets/catalog/overview
  --download_dest_folder_path_for_the_calib_tfds DOWNLOAD_DEST_FOLDER_PATH_FOR_THE_CALIB_TFDS
                        Download destination folder path for the calibration
                        dataset. Default: $HOME/TFDS
  --tfds_download_flg TFDS_DOWNLOAD_FLG
                        True to automatically download datasets from
                        TensorFlow Datasets. True or False
  --output_tfjs OUTPUT_TFJS
                        tfjs model output switch
  --output_tftrt OUTPUT_TFTRT
                        tftrt model output switch
  --output_coreml OUTPUT_COREML
                        coreml model output switch
  --output_edgetpu OUTPUT_EDGETPU
                        edgetpu model output switch
  --replace_swish_and_hardswish REPLACE_SWISH_AND_HARDSWISH
                        [Future support] Replace swish and hard-swish with
                        each other
  --optimizing_hardswish_for_edgetpu OPTIMIZING_HARDSWISH_FOR_EDGETPU
                        Optimizing hardswish for edgetpu
  --replace_prelu_and_minmax REPLACE_PRELU_AND_MINMAX
                        Replace prelu and minimum/maximum with each other
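
The formula passed to --string_formulas_for_normalization is evaluated with Python's eval() against each calibration image, bound to the name data. A minimal illustrative sketch of that mechanism (not the converter's exact code):

import numpy as np

formula = '(data - [127.5,127.5,127.5]) / [127.5,127.5,127.5]'  # the default

# One synthetic HWC calibration image in place of a real dataset sample.
data = np.random.randint(0, 256, (224, 224, 3)).astype(np.float32)

normalized = eval(formula)  # 'data' is the only name the formula may reference
print(normalized.min(), normalized.max())  # stays within [-1.0, 1.0]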

4-2. Step 1 : Generating saved_model and FreezeGraph (.pb)

$ tflite2tensorflow \
  --model_path segm_full_v679.tflite \
  --flatc_path ./flatc \
  --schema_path schema.fbs \
  --output_pb True

or

$ tflite2tensorflow \
  --model_path segm_full_v679.tflite \
  --flatc_path ./flatc \
  --schema_path schema.fbs \
  --output_pb True \
  --optimizing_hardswish_for_edgetpu True

4-3. Step 2 : Generation of quantized tflite, TFJS, TF-TRT, EdgeTPU, and CoreML

$ tflite2tensorflow \
  --model_path segm_full_v679.tflite \
  --flatc_path ./flatc \
  --schema_path schema.fbs \
  --output_no_quant_float32_tflite True \
  --output_weight_quant_tflite True \
  --output_float16_quant_tflite True \
  --output_integer_quant_tflite True \
  --string_formulas_for_normalization 'data / 255.0' \
  --output_tfjs True \
  --output_coreml True \
  --output_tftrt True

or

$ tflite2tensorflow \
  --model_path segm_full_v679.tflite \
  --flatc_path ./flatc \
  --schema_path schema.fbs \
  --output_no_quant_float32_tflite True \
  --output_weight_quant_tflite True \
  --output_float16_quant_tflite True \
  --output_integer_quant_tflite True \
  --output_edgetpu True \
  --string_formulas_for_normalization 'data / 255.0' \
  --output_tfjs True \
  --output_coreml True \
  --output_tftrt True
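
After Step 2 completes, a quick sanity check is to run the regenerated Float32 tflite with the standard interpreter. A minimal sketch; model_float32.tflite matches the sample in the next section, while the saved_model output folder is the tool's default and is an assumption here:

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='saved_model/model_float32.tflite')
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed one random frame and confirm the graph executes end to end.
dummy = np.random.rand(*inp['shape']).astype(np.float32)
interpreter.set_tensor(inp['index'], dummy)
interpreter.invoke()
print(interpreter.get_tensor(out['index']).shape)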

5. Sample image

This is the result of converting MediaPipe's Meet Segmentation model (segm_full_v679.tflite / Float16 / Google Meet) to saved_model and then reconverting it to Float32 tflite. The GPU-optimized Convolution2DTransposeBias layer is replaced with the standard TransposeConv and BiasAdd layers fully automatically. The weights and biases of the Float16 Dequantize layer are automatically back-quantized to Float32 precision. The generated Float32-precision saved_model can then be easily converted to Float16, INT8, EdgeTPU, TFJS, TF-TRT, CoreML, ONNX, and OpenVINO.

[Comparison images. Before: segm_full_v679.tflite / After: model_float32.tflite]
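
The replacement described above follows row 38 of the layer table: the MediaPipe custom op Convolution2DTransposeBias becomes tf.nn.conv2d_transpose followed by tf.math.add. A minimal sketch of the equivalent standard-op computation, with illustrative shapes:

import tensorflow as tf

x = tf.random.normal([1, 64, 64, 16])        # NHWC input
filters = tf.random.normal([2, 2, 8, 16])    # [h, w, out_ch, in_ch]
bias = tf.random.normal([8])

# Convolution2DTransposeBias ~ TransposeConv + BiasAdd (row 38 above).
y = tf.nn.conv2d_transpose(
    x, filters, output_shape=[1, 128, 128, 8], strides=[1, 2, 2, 1])
y = tf.math.add(y, bias)
print(y.shape)  # (1, 128, 128, 8)
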
Comments
  • The same generated IR gives different results on CPU and MYRIAD

    1. OS Ubuntu 18.04

    2. OS Architecture x86_64

    3. Version of OpenVINO 2021.2.185

    4. Version of tflite2tensorflow 1.8.0

    11. URL or source code for simple inference testing code: https://github.com/geaxgx/openvino_blazepose

    12. Issue Details

Hi Katsuya, I would like to have your opinion on the following problem: I have used tflite2tensorflow 1.8.0 to convert the new version of the mediapipe Blazepose models (mediapipe version 0.8.4). In this new version, we now have 3 pose landmark models (full, lite and heavy) and one common pose detection model. I then test my code BlazeposeOpenvino.py with each model, first on CPU and then on MYRIAD (OAK-D). Everything seems to work great except when I run the heavy model on the MYRIAD. For instance, below is the output of the heavy model running on the CPU (which looks accurate): [image: heavy_on_cpu] And below is the output of the same heavy model running on the MYRIAD X: [image: heavy_on_myriad] We can see the skeleton is kind of distorted. Actually, in many cases, with other images, we don't even have a skeleton drawn because the score given by the model is too low.

Do you have an idea of what the problem is? The heavy model is much bigger (27 MB for the .bin file) than the other models. It takes about 30 seconds to load on the Myriad X. But I guess the size is still acceptable, otherwise I would get some error messages.

    In case, you want to reproduce the problem:

    # Clone my repo
    git clone https://github.com/geaxgx/openvino_blazepose.git
    cd openvino_blazepose
    # Start your tflite2tensorflow docker
    ./docker_tflite2tensorflow.sh
    cd workdir/models
    # In my repo, there are only the FP32 IR version of the models, so we need to download the original tflite file 
    # and then convert it using tflite2tensorflow. All is done with the following command:
    ./get_and_convert_heavy.sh
    # Exit docker container
    exit
    
    # To test with the heavy model running on the CPU :
    python BlazeposeOpenvino.py --lm_xml models/pose_landmark_heavy_FP16.xml -i img/yoga.jpg
    # To test on the MYRIAD
    python BlazeposeOpenvino.py --lm_xml models/pose_landmark_heavy_FP16.xml -i img/yoga.jpg --lm_device MYRIAD 
    

    You can test also with the image img/yoga2.jpg (no skeleton detected).

    Thank you.

    opened by geaxgx 37
  • Palm detection model: tflite and OpenVINO IR model give different outputs

    1. OS Ubuntu 18.04

    2. OS Architecture x86_64

    3. Version of OpenVINO 2021.2.185 (the one from your dockerfile)

    4. Version of TensorFlow e.g. v2.4.1 (the one from your dockerfile)

    9. Download URL for .tflite : https://github.com/google/mediapipe/blob/master/mediapipe/modules/palm_detection/palm_detection.tflite

Hi Pinto! First of all, I want to thank you for your latest version of tflite2tensorflow. The dockerfile will surely make users' lives much easier!

I have installed and run the docker image of tflite2tensorflow to convert the Mediapipe palm detection model (see link above) into OpenVINO IR format. This model takes 128x128 images as input, whereas the previous model took 256x256 images. When running the FP32 model on my CPU, I noticed that sometimes the palm bounding box seemed a bit off. When comparing with the output from the original tflite model, we can see the bounding boxes are not the same. Below is the output from the FP32 OpenVINO model: [image: output_hands_openvino_128]

Below is the output from the tflite model: [image: output_hands_tflite_128]

    Note that if I compare the outputs of the older 256x256 models, there are no differences between tflite and Openvino versions.

Do you have an idea of what could explain the different outputs? Using Netron, I can see that the new tflite model now uses Prelu and ResizeBilinear, which were not used in the older model. I don't see how Prelu could cause differences in the conversion process, but ResizeBilinear may be trickier (converted into Interpolate). Do you have any thoughts about that?

    Thanks for your help! I would like to use the new model, which is much faster than the previous version.

    I can send you the code to reproduce the problem if you want.

    bug feature_request 
    opened by geaxgx 19
  • ValueError: Dimension size, given by scalar input 1 must be in range [-1, 1)

I am trying to convert these tflite models: https://github.com/breizhn/DTLN-aec/tree/main/pretrained_models

The command I ran was:

tflite2tensorflow --model_path dtln_aec_128_2.tflite --flatc_path ../flatc --schema_path ../schema.fbs --output_pb

But ran into this error:

    ValueError: Dimension size, given by scalar input 1 must be in range [-1, 1) for '{{node split_functional_5/lstm_6/lstm_cell_6/StatefulPartitionedCall/split}} = Split[T=DT_FLOAT, num_split=4](split_functional_5/lstm_6/lstm_cell_6/StatefulPartitionedCall/split/split_dim, BiasAdd_functional_5/lstm_6/lstm_cell_6/StatefulPartitionedCall/BiasAdd)' with input shapes: [], [512] and with computed input tensors: input[0] = <1>.

I tried with both the pip installation and the docker version. Is it a bug or something that is not supported?

    opened by SaneBow 14
  • The shape of the output layer is different from the result of the tensorflow2onnx transformation.

    Issue Type

    Bug

    OS

    Ubuntu

    OS architecture

    x86_64

    Programming Language

    Python

    Framework

    ONNX

    Download URL for tflite file

    VertexAI Auto ML classification TFlite Format model

    Convert Script

    ●tflite2tensorflow

docker run -it --rm --gpus all -v `pwd`:/home/user/workdir ghcr.io/pinto0309/tflite2tensorflow:latest

tflite2tensorflow \
  --model_path model.tflite \
  --flatc_path ../flatc \
  --schema_path ../schema.fbs \
  --output_pb \
  --optimizing_for_openvino_and_myriad \
  --rigorous_optimization_for_myriad

tflite2tensorflow \
  --model_path model.tflite \
  --flatc_path ../flatc \
  --schema_path ../schema.fbs \
  --output_onnx \
  --onnx_opset 13 \
  --output_openvino_and_myriad

●tensorflow2onnx

pip install git+https://github.com/onnx/tensorflow-onnx

    python -m tf2onnx.convert --opset 13 --tflite model.tflite --output model.onnx --dequantize

    Description

I'm trying to convert the attached file from tflite to onnx. (Eventually I want to run it on Myriad.) In the source tflite, the input to softmax is [1×8], but after tflite2tensorflow it is [7×7×8]. If you try tensorflow2onnx, the input to softmax becomes [1×8], as in tflite. I think [1×8] is correct because it is an 8-class classification model.

[tflite] [image: tflite_netron]

[tflite2tensorflow (saved_model and onnx)] [image: tflite2tensorflow_onnx]

[tensorflow2onnx] [image: tensorflow2onnx_onnx]

    Relevant Log Output

    None.
    

    Source code for simple inference testing code

    No response

    opened by ichigosyakure 8
  • Converted mediapipe model pose_detection does not work

    Hi , thanks for your great work.

Recently, I have been working on pose estimation with mediapipe. I converted the pose_detection.tflite model to onnx with your tflite2tensorflow; the conversion process ran fine, and the log info shows that the conversion succeeded. But when I use the converted .onnx model, the output values seem incorrect and differ from what the original .tflite model produces.

In the original tflite model, the max confidence value of a bounding box is 0.9, but using the converted model, the max value is only 0.078, which is not correct. I also tried the model you've already converted, and the result is also not right. Is there something wrong in my steps or code?

1. Windows 10

    2. x86_64

    3. Version of OpenVINO : none

    4. Version of TensorFlow e.g. v2.6.0

    5. Version of TensorRT : none

    6. Version of TFJS : none

    7. Version of coremltools : none

    8. Version of ONNX : 1.10.1

    9. Download URL for .tflite IR model

    10. URL of the repository from which the transformed model was taken : https://github.com/google/mediapipe/tree/master/mediapipe/modules/pose_detection

    11. URL or source code for simple inference testing code

    import cv2
    import numpy as np
    import onnxruntime

    # NOTE: 'interpreter', 'input_details' and 'output_details' below refer to
    # a tf.lite.Interpreter set up in a part of the script not shown here.
    def image_preprocess(img):
        img = cv2.resize(img, dsize=(224, 224), interpolation=cv2.INTER_AREA)
        img = img.astype("float32")
        img /= 255.0
        print(img.shape)
        img.resize((1, 224, 224, 3))
        return img
    
    def inference(img):
        # Test the model on random input data.
        input_shape = input_details[0]['shape']
        input_data = np.array(img, dtype=np.float32)
        interpreter.set_tensor(input_details[0]['index'], input_data)
    
        interpreter.invoke()
    
        # The function `get_tensor()` returns a copy of the tensor data.
        # Use `tensor()` in order to get a pointer to the tensor.
        output_data = interpreter.get_tensor(output_details[0]['index'])
        output_data1 = interpreter.get_tensor(output_details[1]['index'])
    
        result = [output_data, output_data1]
    
        print("result[0] shape:", result[0].shape)  # (1, 2254, 12)
        print("result[1] shape:", result[1].shape)  # (1, 2254, 1)
        return result
    
    model = "model_float32.onnx"
    
    # load model
    session = onnxruntime.InferenceSession(model, None)
    input_name = session.get_inputs()[0].name
    output_name = session.get_outputs()[0].name
    print(input_name)
    print(output_name)
    

    12. Issue Details

    bug 
    opened by Jerryzhangzhao 8
  • Densify ?

    1. Ubuntu 18.04

    2. OS Architecture x86_64

    3. OpenVINO e.g. 2021.4.582

    9. Download URL for .tflite IR model https://github.com/google/mediapipe/blob/master/mediapipe/modules/pose_detection/pose_detection.tflite

Hi @PINTO0309! The new mediapipe version 0.8.6 comes with new models for Blazepose (that's a never-ending story :-). The size of the pose detection model (link above) has been significantly reduced (from ~7.5 MB to ~3 MB), but unfortunately the model uses a layer named Densify that is not implemented in tflite2tensorflow. I guess it is a relatively new layer. When trying to visualize its data in Netron, I get an "Invalid tensor data size" message. [image]

    Do you think Densify can be easily implemented in your tools ? Note that it is not something I am eagerly waiting for since I can do without it by using the previous version of the pose detection model.

    enhancement feature_request 
    opened by geaxgx 8
  • I was trying to convert hair_segmentation.tflite in google colab. and getting following error.

output json command = /content/flatbuffers/build/flatc -t --strict-json --defaults-json -o . /content/flatbuffers/build/schema.fbs -- /content/hair_segmentation.tflite

Traceback (most recent call last):
  File "/usr/local/bin/tflite2tensorflow", line 2882, in <module>
    main()
  File "/usr/local/bin/tflite2tensorflow", line 2556, in main
    ops, op_types = parse_json(jsonfile_path)
  File "/usr/local/bin/tflite2tensorflow", line 247, in parse_json
    j = json.load(open(jsonfile_path))
FileNotFoundError: [Errno 2] No such file or directory: './/content/hair_segmentation.json'

    bug 
    opened by soham24 8
  • Add option to optimize tf.math.reduce_prod to Myriad (OAK)

    Issue Type

    Feature Request

    OS

    Other

    OS architecture

    Other

    Programming Language

    Other

    Framework

    OpenVINO, Myriad Inference Engine

    Download URL for tflite file

    None

    Convert Script

    None

    Description

    Replace tf.math.reduce_prod with tf.math.multiply https://www.tensorflow.org/api_docs/python/tf/math/reduce_prod

    https://github.com/PINTO0309/PINTO_model_zoo/tree/main/282_face_landmark_with_attention

    https://github.com/iwatake2222/play_with_tflite/tree/master/pj_tflite_face_landmark_with_attention

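
The idea behind the request is that reduce_prod over an axis of known, static size can be rewritten as a chain of element-wise multiplies, which the Myriad plugin handles. A minimal sketch of the rewrite (illustrative, not the converter's implementation):

import tensorflow as tf

x = tf.random.normal([1, 4, 3])

ref = tf.math.reduce_prod(x, axis=1)   # the op to be replaced for Myriad

parts = tf.unstack(x, axis=1)          # 4 tensors of shape [1, 3]
prod = parts[0]
for p in parts[1:]:
    prod = tf.math.multiply(prod, p)   # the requested replacement

print(bool(tf.reduce_all(tf.abs(ref - prod) < 1e-5)))  # True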

    Relevant Log Output

    None
    

    Source code for simple inference testing code

    No response

    enhancement feature_request 
    opened by PINTO0309 7
  • Converted mediapipe palm detection CoreML model cannot be deployed in an iOS app

    Issue Type

    Bug

    OS

    Mac OS

    OS architecture

    x86_64

    Programming Language

    Python

    Framework

    CoreML

    Download URL for tflite file

    https://github.com/google/mediapipe/blob/master/mediapipe/modules/palm_detection/palm_detection_full.tflite

    Convert Script

    docker run -it --rm \
      -v `pwd`:/home/user/workdir \
      ghcr.io/pinto0309/tflite2tensorflow:latest
    
    tflite2tensorflow \
      --model_path palm_detection_full.tflite \
      --flatc_path ../flatc \
      --schema_path ../schema.fbs \
      --output_pb
    
    tflite2tensorflow \
      --model_path palm_detection_full.tflite \
      --flatc_path ../flatc \
      --schema_path ../schema.fbs \
      --output_no_quant_float32_tflite \
      --output_dynamic_range_quant_tflite \
      --output_weight_quant_tflite \
      --output_float16_quant_tflite \
      --output_integer_quant_tflite \
      --string_formulas_for_normalization 'data / 255.0' \
      --output_tfjs \
      --output_coreml \
      --output_tftrt_float32 \
      --output_tftrt_float16 \
      --output_onnx \
      --onnx_opset 13 \
      --output_openvino_and_myriad
    

    Description

When I deployed the converted MediaPipe palm detection CoreML model in an iOS app, I got an error like:

    2022-02-15 16:31:50.577312+0000 MLModelCamera[12602:4637828] [espresso] [Espresso::handle_ex_plan] exception=Espresso exception: "Generic error": [Exception from layer 240: model_1/model/add_24/add]Espresso exception: "Invalid blob shape": elementwise_kernel_cpu: Cannot broadcast [12, 2, 1, 256, 1] and [12, 1, 1, 256, 1] status=-1
    

It seems like the ResizeBilinear op causes this problem. Is there any way to fix this issue? Thanks in advance.

The related code in tflite2tensorflow.py is:

            elif op_type == 'RESIZE_BILINEAR':
                input_tensor = tensors[op['inputs'][0]]
                size_detail = interpreter._get_tensor_details(op['inputs'][1])
                size = interpreter.get_tensor(size_detail['index'])
                size_height = size[0]
                size_width  = size[1]
    
                options = op['builtin_options']
                align_corners = options['align_corners']
                half_pixel_centers = options['half_pixel_centers']
    
                def upsampling2d_bilinear(x, size_height, size_width, align_corners, half_pixel_centers):
                    if optimizing_for_edgetpu_flg:
                        return tf.image.resize_bilinear(x, (size_height, size_width))
                    else:
                        if optimizing_for_openvino_and_myriad:
                            if half_pixel_centers:
                                return tf.image.resize_bilinear(
                                    x,
                                    (size_height, size_width),
                                    align_corners=False,
                                    half_pixel_centers=half_pixel_centers
                                )
                            else:
                                return tf.image.resize_bilinear(
                                    x,
                                    (size_height, size_width),
                                    align_corners=True,
                                    half_pixel_centers=half_pixel_centers
                                )
                        else:
                            return tfv2.image.resize(
                                x,
                                [size_height, size_width],
                                method='bilinear'
                            )
    
                output_detail = interpreter._get_tensor_details(op['outputs'][0])
                lambda_name = get_op_name(output_detail['name']) + '_lambda'
                output_tensor_lambda = tf.keras.layers.Lambda(
                    upsampling2d_bilinear,
                    arguments={'size_height': size_height,
                                'size_width': size_width,
                                'align_corners': align_corners,
                                'half_pixel_centers': half_pixel_centers},
                    name=lambda_name
                )(input_tensor)
                output_tensor = tf.identity(
                    output_tensor_lambda,
                    name=get_op_name(output_detail['name'])
                )
                tensors[output_detail['index']] = output_tensor
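
For reference, the two code paths above do not sample the source grid the same way: tfv2.image.resize uses half-pixel centers, while the legacy resize_bilinear with align_corners=True aligns corner pixels. A small sketch of the difference, using the tf.compat.v1 alias for the legacy op (an assumption about the runtime at hand):

import tensorflow as tf

x = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])

a = tf.image.resize(x, [8, 8], method='bilinear')           # half-pixel centers
b = tf.compat.v1.image.resize_bilinear(x, (8, 8),
                                       align_corners=True)  # corner-aligned
print(float(tf.reduce_max(tf.abs(a - b))))                  # non-zero: grids differ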
    

    Relevant Log Output

    2022-02-15 16:31:50.577312+0000 MLModelCamera[12602:4637828] [espresso] [Espresso::handle_ex_plan] exception=Espresso exception: "Generic error": [Exception from layer 240: model_1/model/add_24/add]Espresso exception: "Invalid blob shape": elementwise_kernel_cpu: Cannot broadcast [12, 2, 1, 256, 1] and [12, 1, 1, 256, 1] status=-1
    

    Source code for simple inference testing code

    No response

    opened by TaoZappar 7
  • Latest docker image: error: unrecognized arguments

Hello Pinto, I am following the tutorial from here: https://github.com/geaxgx/openvino_hand_tracker. Can it be that in the latest docker version the API has changed?

This is what I do:

[email protected]:/tmp/depthai_hand_tracker$ docker run --gpus all -it --rm \
  -v `pwd`:/workspace/resources \
  -e LOCAL_UID=$(id -u $USER) \
  -e LOCAL_GID=$(id -g $USER) \
  pinto0309/tflite2tensorflow:latest bash

    NVIDIA Release 20.09 (build 15985252)

    NVIDIA TensorRT 7.1.3 (c) 2016-2020, NVIDIA CORPORATION. All rights reserved. Container image (c) 2020, NVIDIA CORPORATION. All rights reserved.

    https://developer.nvidia.com/tensorrt

    To install Python sample dependencies, run /opt/tensorrt/python/python_setup.sh

    To install open source parsers, plugins, and samples, run /opt/tensorrt/install_opensource.sh. See https://github.com/NVIDIA/TensorRT for more information.

error: XDG_RUNTIME_DIR not set in the environment.
[setupvars.sh] OpenVINO environment initialized

wget https://github.com/google/mediapipe/blob/master/mediapipe/modules/hand_landmark/hand_landmark.tflite
wget https://github.com/google/mediapipe/blob/master/mediapipe/modules/palm_detection/palm_detection.tflite

tflite2tensorflow --model_path ./hand_landmark.tflite --model_output_path palm_detection --flatc_path /home/user/flatc --schema_path /home/user/schema.fbs --output_pb True

tflite2tensorflow: error: unrecognized arguments: True

tflite2tensorflow --model_path ./hand_landmark.tflite --model_output_path palm_detection --flatc_path /home/user/flatc --schema_path /home/user/schema.fbs --output_pb

output json command = /home/user/flatc -t --strict-json --defaults-json -o . /home/user/schema.fbs -- ./hand_landmark.tflite

/home/user/flatc: error: binary "./hand_landmark.tflite" does not have expected file_identifier "TFL3", use --raw-binary to read this file anyway.

    Thanks Marco

    enhancement feature_request 
    opened by MaHoef 7
  • KeyError: 'operator_codes'

Running tflite2tensorflow in a docker container, against a tflite model generated by the AutoML Vision service in Google Cloud, produces:

sh-5.0$ tflite2tensorflow \
    --model_path model.tflite \
    --flatc_path ../flatc \
    --schema_path ../schema.fbs \
    --output_pb \
    --optimizing_for_openvino_and_myriad

Traceback (most recent call last):
  File "/usr/local/bin/tflite2tensorflow", line 6201, in <module>
    main()
  File "/usr/local/bin/tflite2tensorflow", line 5592, in main
    ops, json_tensor_details, op_types, full_json = parse_json(jsonfile_path)
  File "/usr/local/bin/tflite2tensorflow", line 265, in parse_json
    op_types = [v['builtin_code'] for v in j['operator_codes']]
KeyError: 'operator_codes'

    The tflite output from AutoML Vision was three files: model.tflite, tflite_metadata.json, and dict.txt (which contains the list of labels).

I first received an error that it could not find model.json, so I renamed tflite_metadata.json to model.json, which produced the error above.

Here are the contents of model.json (tflite_metadata.json); obviously it does not include operator codes.

    { "inferenceType": "QUANTIZED_UINT8", "inputShape": [ 1, 320, 320, 3 ], "inputTensor": "normalized_input_image_tensor", "maxDetections": 40, "outputTensorRepresentation": [ "bounding_boxes", "class_labels", "class_confidences", "num_of_boxes" ], "outputTensors": [ "TFLite_Detection_PostProcess", "TFLite_Detection_PostProcess:1", "TFLite_Detection_PostProcess:2", "TFLite_Detection_PostProcess:3" ] }

    Is tflite2tensorflow expecting a different set of files?

    opened by mtyeager 4
  • The UNIDIRECTIONAL_SEQUENCE_LSTM layer is not yet implemented.

    OS you are using: MacOS 11.4

    Version of TensorFlow: v2.5.0

    Environment: Docker

    Under tf 2.5.0, I converted my pre-trained model from saved_model to tflite.

Afterwards, in a Docker container, when I was converting this tflite model to pb format using tflite2tensorflow, the following error occurred:

    ERROR: The UNIDIRECTIONAL_SEQUENCE_LSTM layer is not yet implemented.
    

(In this experiment, I did not perform quantization/optimization, but later on I plan to use tflite to quantize my model, which is to be saved as .tflite; that is why I did not directly convert saved_model to pb.)

    enhancement feature_request 
    opened by peiwenhuang27 3
Releases: v1.22.0

Owner: Katsuya Hyodo. Hobby programmer. Intel Software Innovator Program member.