The Hailo Model Zoo includes pre-trained models and a full building and evaluation environment

Overview

Hailo Model Zoo

The Hailo Model Zoo provides pre-trained models for high-performance deep learning applications. Using the Hailo Model Zoo, you can measure the full-precision accuracy of each model, its quantized accuracy using the Hailo Emulator, and its accuracy on the Hailo-8 device. Finally, you can generate the Hailo Executable Format (HEF) binary file to speed up development and build high-quality applications accelerated with Hailo-8. The models are optimized for high accuracy on public datasets and can be used to benchmark the Hailo quantization scheme.

Usage

Quick Start Guide

  • Install the Hailo Dataflow Compiler and enter the virtualenv. If you are not a Hailo customer, please contact hailo.ai
  • Clone the Hailo Model Zoo
git clone https://github.com/hailo-ai/hailo_model_zoo.git
  • Run the setup script
cd hailo_model_zoo; pip install -e .
  • Run the Hailo Model Zoo. For example, to parse the YOLOv3 model:
python hailo_model_zoo/main.py parse yolov3

Getting Started

For further functionality, please see the GETTING_STARTED page (full installation instructions and usage examples). The Hailo Model Zoo uses the Hailo Dataflow Compiler for parsing, quantization, emulation, and compilation of the deep learning models. Full functionality includes the stages below (an example command sequence follows the list):

  • Parse: translation of the input model into Hailo's internal representation.
  • Profiler: generate a profiling report of the model. The report contains information about your model and its expected performance on the Hailo hardware.
  • Quantize: numeric translation of the input model into a compressed integer representation.
  • Compile: run the Hailo compiler to generate the Hailo Executable Format (HEF) file, which can be executed on the Hailo hardware.
  • Evaluate: infer the model using the Hailo Emulator or the Hailo hardware and report the model's accuracy.
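
For example, taking the parse command from the Quick Start above, a possible end-to-end flow for a single model might look like the sketch below. The remaining sub-command names are assumptions that mirror the stages listed above (the quantization stage has been called quantize in some releases and optimize in others); check python hailo_model_zoo/main.py --help for the exact names in your version.

python hailo_model_zoo/main.py parse yolov3
python hailo_model_zoo/main.py profile yolov3
python hailo_model_zoo/main.py quantize yolov3
python hailo_model_zoo/main.py compile yolov3
python hailo_model_zoo/main.py eval yolov3

Each stage builds on the previous one, ending with a HEF file that can be deployed on Hailo-8 hardware and an accuracy measurement from the Evaluate stage.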

For further information about the Hailo Dataflow Compiler please contact hailo.ai.

Models

A full list of the pre-trained models can be found here.

License

The Hailo Model Zoo is released under the MIT license. Please see the LICENSE file for more information.

Contact

Please visit hailo.ai for support / requests / issues.

Comments
  • yolov7.hef vs yolov5m_wo_spp_60p.hef

    Hi,

    As far as I know, yolov7 is faster and more accurate than yolov5.

    In our tests:

    gst-launch-1.0 rtspsrc location=rtsp://xxxxx/ISAPI/Streaming/Channels/101 name=src_0 ! decodebin ! videoscale ! \
      video/x-raw, pixel-aspect-ratio=1/1 ! videoconvert ! \
      queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
      hailonet hef-path=/local/shared_with_docker/yolov7.hef is-active=true batch-size=1 ! \
      queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
      hailofilter function-name=yolov5 so-path=/local/workspace/tappas/apps/gstreamer/libs/post_processes//libyolo_post.so config-path=/local/workspace/tappas/apps/gstreamer/general/detection/resources/configs/yolov5.json qos=false ! \
      queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
      hailooverlay ! videoconvert ! \
      fpsdisplaysink video-sink=xvimagesink name=hailo_display sync=false text-overlay=false -v | grep -e hailo_display -e hailodevicestats

    yolov7.hef is almost 7 times slower than the yolov5m_wo_spp_60p.hef version.

    opened by MyraBaba 19
  • Error: Model uses too many reources: 136 Layer-Controllers

    onnx: 1.11.0
    torch: 1.12.1
    torchvision: 0.13.1
    

    Hi, I have fine-tuned the yolov5m_wo_spp.pt model in the yolov5 v6.2 framework. Then I exported the model to ONNX (with opset 11), also in the yolov5 v6.2 framework. When I compile this ONNX model with hailomz compile, the model optimization is done correctly, but then it throws the following error:

    997/1000 [============================>.] - ETA: 1s - total_distill_loss: 0.1018 - _distill_loss_yolov5m_wo_spp/conv93: 0.0353 - _distill_loss_yolov5m_wo_spp/conv84: 0.0367 - _distill_loss_yolov5m_wo_spp/conv74
    998/1000 [============================>.] - ETA: 0s - total_distill_loss: 0.1018 - _distill_loss_yolov5m_wo_spp/conv93: 0.0353 - _distill_loss_yolov5m_wo_spp/conv84: 0.0367 - _distill_loss_yolov5m_wo_spp/conv74
    999/1000 [============================>.] - ETA: 0s - total_distill_loss: 0.1018 - _distill_loss_yolov5m_wo_spp/conv93: 0.0353 - _distill_loss_yolov5m_wo_spp/conv84: 0.0367 - _distill_loss_yolov5m_wo_spp/conv74
    1000/1000 [==============================] - ETA: 0s - total_distill_loss: 0.1018 - _distill_loss_yolov5m_wo_spp/conv93: 0.0353 - _distill_loss_yolov5m_wo_spp/conv84: 0.0367 - _distill_loss_yolov5m_wo_spp/conv74
    1000/1000 [==============================] - 477s 477ms/step - total_distill_loss: 0.1018 - _distill_loss_yolov5m_wo_spp/conv93: 0.0353 - _distill_loss_yolov5m_wo_spp/conv84: 0.0367 - _distill_loss_yolov5m_wo_spp/conv74: 0.0297
    [info] Fine Tune is done (completion time is 00:38:18.70)
    Calibration: 64entries [00:48,  1.32entries/s]
    
    [info] Model Optimization is done
    [info] Loading model script on yolov5m_wo_spp
    [info] Loading network parameters
    [info] Starting Hailo allocation and compilation flow
    [info] Using Single-context flow
    [info] Resources optimization guidelines: Strategy -> GREEDY Objective -> REQUIRED_FPS
    [info] Resources optimization params: max_control_utilization=120%, max_compute_utilization=100%, max_memory_utilization (weights)=100%, max_input_aligner_utilization=100%, max_apu_utilization=100%
    [info] Running Auto-Merger
    [info] Auto-Merger is done
    [info] Adding a portal between conv27( index=19 604, name=conv27, ) and concat7, type: L4
    [info] Starting context partition
    [info] Context partition is done (0s 2ms)
    [info] Adding format conversion layer 'auto_reshape_from_input_layer1_to_merged_layer_normalization1_space_to_depth1' after input_layer1
    [info] Adding format conversion layer 'auto_reshape_from_conv74_to_output_layer1' after conv74
    [info] Adding format conversion layer 'auto_reshape_from_conv84_to_output_layer2' after conv84
    [info] Adding format conversion layer 'auto_reshape_from_conv93_to_output_layer3' after conv93
    Model uses too many reources: 136 Layer-Controllers
    [critical] Model uses too many reources: 136 Layer-Controllers
    [error] Failed to produce compiled graph
    [error] Tried to deserialize allocator result on failure, but got another exception: No output graph, deserialization failed.
    Traceback (most recent call last):
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/bin/hailomz", line 33, in <module>
        sys.exit(load_entry_point('hailo-model-zoo', 'console_scripts', 'hailomz')())
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/sources/model_zoo/hailo_model_zoo/main.py", line 181, in main
        run(args)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/sources/model_zoo/hailo_model_zoo/main.py", line 170, in run
        return handlers[args.command](args)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/sources/model_zoo/hailo_model_zoo/main_driver.py", line 132, in compile
        compile_model(runner, network_info, args.results_dir, model_script_path=args.model_script_path)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/sources/model_zoo/hailo_model_zoo/core/main_utils.py", line 298, in compile_model
        hef = runner.compile()
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
        return func(self, *args, **kwargs)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/runner/client_runner.py", line 661, in compile
        return self._get_hef_hw_representation()
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
        return func(self, *args, **kwargs)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/runner/client_runner.py", line 707, in _get_hef_hw_representation
        serialized_hef = self._sdk_backend.get_hef_hw_representation(fps, allocator_script, mapping_timeout)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1156, in get_hef_hw_representation
        hef, mapped_graph_file = self._get_hef_hw_representation(fps, allocator_script, mapping_timeout)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1151, in _get_hef_hw_representation
        hef, mapped_graph_file, auto_alls = self.hef_full_build(fps, mapping_timeout, model_params, allocator_script)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1128, in hef_full_build
        auto_alls, self._mapped_graph, self._integrated_graph = allocator.create_mapping_and_full_build_hef(
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/allocator/hailo_tools_runner.py", line 568, in create_mapping_and_full_build_hef
        self.call_builder(network_graph_path, output_path, compilation_output_proto=compilation_output_proto,
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/allocator/hailo_tools_runner.py", line 527, in call_builder
        self.run_builder(network_graph_path, output_path, **kwargs)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/allocator/hailo_tools_runner.py", line 394, in run_builder
        raise e.internal_exception("Hailo tools builder failed:", hailo_tools_error=e.hailo_tools_error) from None
    hailo_sdk_client.sdk_backend.sdk_backend_exceptions.BackendAllocatorException: Hailo tools builder failed: Model uses too many reources: 136 Layer-Controllers
    

    If I train the model in the Hailo yolov5 retraining Docker container, the compilation works fine. Any idea what this error means?

    opened by frenky-strasak 3
  • Is there a specific implementation limit? (multitasking models, cascaded models, or large models)

    Hi, I have not applied for a Developer Zone account yet (will it be difficult to apply for all-pass?). I wonder if the Hailo-8 chip can run some large models at the same time? Or can you tell me what the implementation limits are? E.g.:

    1. Operator compatibility (what is the highest ONNX opset version supported, and is coverage better than OpenVINO in general?)
    2. What is the on-chip memory size available for storing/computing tensors? Can it run super-resolution models with higher output resolutions? Can the chip run very wide fully connected layers?
    3. Hailo-8 can multitask; if I run keypoint detection, ReID, and depth estimation at the same time with frame skipping, will the chip's compute or memory capacity be overloaded? How can I spot where the overload is, or estimate it?
    4. Has your team considered having multiple Hailo-8 chips "chained" to run some difficult tasks? This should be super cool.
    opened by BICHENG 2
  • Dataflow Compiler v3.17.0 not available in Developer Zone

    Hi, in the latest changelog update you mentioned that the repository was updated to use the Dataflow Compiler v3.17.0. However, in the Developer Zone only version 3.16.0 is available. How can we get the latest Dataflow Compiler v3.17.0? Can you please add it to the Developer Zone?

    opened by kmaerkl 2
  • Old version of yolov5 in retraining docker container.

    Hi, for what reason does the retraining Docker container contain the old yolov5 v2.0? Is it possible to use newer versions of yolov5, such as v6.2? Are these newer versions of yolov5 compatible with the Dataflow Compiler for optimizing and compiling models to Hailo HEF files? Thanks!

    opened by frenky-strasak 1
  • YoloV7-tiny with on-chip NMS

    Dear Hailo, the output structure of Yolov5 and Yolov7 is the same IIRC, so it should be possible to run the NMS on-chip. I wanted to test this, so I took the yolov5xs_wo_spp_nms model from this zoo as a reference. When downloading it, I get this NMS config JSON:

    {
      "nms_scores_th": 0.01,
      "nms_iou_th": 1.0,
      "image_dims": [512, 512],
      "max_proposals_per_class": 80,
      "background_removal": false,
      "input_division_factor": 8,
      "classes": 80,
      "bbox_decoders": [
          {
              "name": "bbox_decoder53",
              "w": [
                  10,
                  16,
                  33
              ],
              "h": [
                  13,
                  30,
                  23
              ],
              "stride": 8,
              "encoded_layer": "conv53"
          },
          {
              "name": "bbox_decoder61",
              "w": [
                  30,
                  62,
                  59
              ],
              "h": [
                  61,
                  45,
                  119
              ],
              "stride": 16,
              "encoded_layer": "conv61"
          },
          {
              "name": "bbox_decoder69",
              "w": [
                  116,
                  156,
                  373 
              ],
              "h": [
                  90,
                  198,
                  326
              ],
              "stride": 32,
              "encoded_layer": "conv69"
          }
      ]
    }
    

    I cannot find a description of these parameters in the documentation anywhere. While I understand some parameters, like the names and anchors (w, h, stride, etc.), I don't get these ones:

      "background_removal": false,
      "input_division_factor": 8,
    

    Can you help me with these parameters? And did you ever test yolov7 with on-chip decode/NMS? Further, in the yolov5xs alls file, the following settings are made:

    buffers(proposal_generator0, proposal_generator0_concat, 2, FULL_ROW)
    buffers(proposal_generator1, proposal_generator0_concat, 2, FULL_ROW)
    buffers(proposal_generator2, proposal_generator0_concat, 2, FULL_ROW)
    buffers(proposal_generator0_concat, nms1, 2)
    

    which I don't really understand. Could you explain the usage of those as well? Thanks!

    Cheers

    opened by dnns92 1
  • CVE-2007-4559 Patch

    Patching CVE-2007-4559

    Hi, we are security researchers from the Advanced Research Center at Trellix. We have begun a campaign to patch a widespread bug named CVE-2007-4559. CVE-2007-4559 is a 15-year-old bug in the Python tarfile package. By using extract() or extractall() on a tarfile object without sanitizing input, a maliciously crafted .tar file could perform a directory path traversal attack. We found at least one unsanitized extractall() in your codebase and are providing a patch for you via pull request. The patch essentially checks whether all tarfile members will be extracted safely and throws an exception otherwise. We encourage you to use this patch or your own solution to secure against CVE-2007-4559. Further technical information about the vulnerability can be found in this blog.

    If you have further questions, you may contact us through this project's lead researcher, Kasimir Schulz.

    opened by TrellixVulnTeam 0
  • Rectangle hef model does not work

    Hi, in the yolov5 retraining container (v.2) I exported yolov5m_wo_spp.pt to ONNX with a rectangular shape: python models/export.py --weights model.pt --img 352 640 --batch 1 (--img H W)

    Then I compiled this ONNX model: hailomz compile --ckpt model.onnx --calib-path calib_dataset --yaml yolov5m_wo_spp.yaml, where I changed yolov5m_wo_spp.yaml like this:

    • I added the preprocessing part with the corresponding shape:
    preprocessing:
      network_type: detection
      input_shape:
      - 352
      - 640
      - 3
    
    • and I changed the info section (the output shapes were found with the Netron tool):
    info:
      input_shape: 352x640x3
      output_shape: 11x20x18, 22x40x18, 44x80x18
    

    The complete file is here: yolov5m_wo_spp.zip

    The compilation looks fine. When I deploy the HEF file to my pipeline, I see wrong bboxes, which are doubled and shifted, but they move in the same way as the object to be detected.

    What am I missing? Should I also modify the yolov5m_wo_spp.alls file? Could you point me in the right direction, please? Thank you!

    opened by frenky-strasak 3
  • Fix yolo postprocessing when batches are used

    Hi,

    Currently the YOLO post-processing does not work when batches are provided. I fixed the bug in this branch: https://github.com/DavidBecht/hailo_model_zoo

    BR

    opened by DavidBecht 0
  • Illegal instruction (core dumped)

    Hi,

    hailomz gives the error below all the time (in Docker).

    All other commands and TAPPAS are working without any problem.

    hailomz -h
    Illegal instruction (core dumped)

    opened by MyraBaba 1
  • When I run the HEF, there is something wrong. How to compile a model to HEF on an ARM arch?

    [HailoRT] [error] CHECK failed - Failed opening non-compatible HEF with the following unsupported extensions: KO Run ASAP (KO_RUN_ASAP)
    [HailoRT] [error] CHECK_SUCCESS failed with status=26
    [HailoRT] [error] Failed parsing HEF file
    [HailoRT] [error] Failed creating HEF
    [HailoRT] [error] CHECK_EXPECTED failed with status=26

    opened by riverfrank 2
  • Post-processing yolov5s personface output

    Hi,

    As a result of inference with the yolov5s_personface model, I get 3 vectors of dimensions [1, 40, 40, 21], [1, 20, 20, 21], [1, 80, 80, 21]; what's the correct/fastest procedure to decode them in order to get a list of detections (such as [x_min, y_min, x_max, y_max, score, class])? (See the decode sketch after this comment.)

    Thanks

    opened by aux82716 6
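
For reference, the following is a minimal NumPy sketch of the standard YOLOv5 anchor decoding applied to outputs shaped like those above (21 channels = 3 anchors x (5 + 2 classes)). It is not Hailo's official post-processing: the anchors, strides, and 640x640 input size are assumed defaults and should be verified against the model's configuration (e.g. its NMS config or alls file).

import numpy as np

# Assumed YOLOv5 default anchors (w, h) per stride -- verify against the model config.
ANCHORS = {
    8:  [(10, 13), (16, 30), (33, 23)],
    16: [(30, 61), (62, 45), (59, 119)],
    32: [(116, 90), (156, 198), (373, 326)],
}
NUM_CLASSES = 2        # person, face
SCORE_THRESHOLD = 0.25

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_scale(raw, stride):
    """raw: [1, H, W, 3 * (5 + NUM_CLASSES)] -> list of [x_min, y_min, x_max, y_max, score, class]."""
    _, h, w, _ = raw.shape
    pred = _sigmoid(raw.astype(np.float32).reshape(h, w, 3, 5 + NUM_CLASSES))
    gy, gx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    grid = np.stack([gx, gy], axis=-1)[:, :, None, :]        # [H, W, 1, 2] cell offsets
    anchors = np.array(ANCHORS[stride], dtype=np.float32)    # [3, 2]
    xy = (pred[..., 0:2] * 2.0 - 0.5 + grid) * stride        # box centers in input pixels
    wh = (pred[..., 2:4] * 2.0) ** 2 * anchors               # box sizes in input pixels
    scores = pred[..., 4:5] * pred[..., 5:]                  # objectness * class probabilities
    boxes = np.concatenate([xy - wh / 2.0, xy + wh / 2.0], axis=-1)
    best_score = scores.max(axis=-1)
    best_class = scores.argmax(axis=-1)
    keep = best_score > SCORE_THRESHOLD
    return [[*map(float, b), float(s), int(c)]
            for b, s, c in zip(boxes[keep], best_score[keep], best_class[keep])]

# Usage sketch: out80, out40, out20 are the [1, 80, 80, 21], [1, 40, 40, 21] and
# [1, 20, 20, 21] outputs; coordinates come out in 640x640 input pixels.
# detections = decode_scale(out80, 8) + decode_scale(out40, 16) + decode_scale(out20, 32)
# Finish with per-class NMS (e.g. cv2.dnn.NMSBoxes) to drop overlapping boxes.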
Releases: v2.5
Owner: Hailo