Model Quantization Benchmark


Introduction

MQBench is an open-source model quantization toolkit based on PyTorch fx.

The vision of MQBench is to provide:

  • SOTA Algorithms. With MQBench, hardware vendors and researchers can benefit from the latest research progress in academia.
  • Powerful Toolkits. With the toolkit, quantization nodes can be inserted into the original PyTorch module automatically with respect to the specific hardware. After training, the quantized model can be smoothly converted to a format that can run inference on the real device (see the sketch below).
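
A minimal end-to-end sketch of that workflow, pieced together from the API calls that appear in the issues further down this page (the backend choice, model, and input-shape key are illustrative, not prescribed):

import torch
import torchvision.models as models

from mqbench.prepare_by_platform import prepare_by_platform, BackendType
from mqbench.utils.state import enable_calibration, enable_quantization
from mqbench.convert_deploy import convert_deploy

model = models.resnet18(pretrained=True).eval()

# Insert fake-quantize nodes according to the target hardware backend.
model = prepare_by_platform(model, BackendType.Tensorrt)

# Calibration: observers collect activation ranges, quantization stays off.
enable_calibration(model)
with torch.no_grad():
    for _ in range(16):
        model(torch.randn(1, 3, 224, 224))

# Turn fake quantization on for QAT or quantized evaluation.
enable_quantization(model)

# Export to a backend-consumable format (input name/shape are illustrative).
convert_deploy(model, BackendType.Tensorrt, {"x": [1, 3, 224, 224]})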

Installation


git clone git@github.com:ModelTC/MQBench.git
cd MQBench
python setup.py install
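
A quick sanity check that the installation succeeded (a trivial snippet; it only assumes the package is importable as mqbench):

# List the hardware backends this MQBench build knows about.
from mqbench.prepare_by_platform import BackendType
print([b.name for b in BackendType])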

Documentation

MQBench aims to support (1) various deployable quantization algorithms and (2) hardware backend libraries to facilitate the development of the community.

For detailed information, please refer to the MQBench documentation.

Citation

If you use this toolkit or benchmark in your research, please cite this project.

@article{MQBench,
  title   = {MQBench: Towards Reproducible and Deployable Model Quantization Benchmark},
  author  = {Yuhang Li* and Mingzhu Shen* and Jian Ma* and Yan Ren* and Mingxin Zhao* and
             Qi Zhang* and Ruihao Gong* and Fengwei Yu and Junjie Yan},
  journal = {Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks},
  year    = {2021}
}

License

This project is released under the Apache 2.0 license.

Comments
  • torch.save fails when saving the quantized .pth model before deploy

    I want to save the quantized .pth model before deploy, but torch.save fails:

    File "/opt/conda/lib/python3.8/site-packages/torch/serialization.py", line 379, in save
      _save(obj, opened_zipfile, pickle_module, pickle_protocol)
    File "/opt/conda/lib/python3.8/site-packages/torch/serialization.py", line 484, in _save
      pickler.dump(obj)
    AttributeError: Can't pickle local object 'ObserverBase.__init__.<locals>.PerChannelLoadHook'

    opened by wangshankun 13
  • Exporting ResNet to Tengine with MQBench commit a5582f416d62d9c40d3f5023b81ce71bb8791dd9, fp32 to uint8, accuracy drops by 30 points

    Note up front: this issue is based on commit a5582f416d62d9c40d3f5023b81ce71bb8791dd9, because the current master exports an empty scale file for Tengine, so the conversion was done on that commit.

    I used this version of MQBench for QAT training of a ResNet; QAT accuracy stays essentially unchanged, around 90 acc. I then exported the QAT-trained model with convert_deploy, specifying BackendType=Tengine_u8; the export produces an onnx file and the corresponding scale file. The onnx opens fine in Netron and the scale file looks normal. I then used Tengine's conversion tool to convert the onnx to a tmfile, and Tengine's post-training quantization tool quant_tools_uint8 to load the converted tmfile and scale and quantize. The resulting uint8 model runs on the NPU with Tengine, but accuracy drops to 0.58, and the cosine similarity reported by the quantization tool is around 0.5-0.6 for the last few layers, where it should normally be around 0.8-0.9.

    So where does the problem lie? Is the generated scale wrong, or are some nodes not aligned between Tengine and MQBench? I would appreciate help looking into this, many thanks.

    Below is the scale file exported by MQBench after QAT; the model is too large to upload.

    Link: https://pan.baidu.com/s/1FbdT7ZHEYnciIQKcWtlMVg extraction code: cz7f

    Stale 
    opened by RedHandLM 11
  • Quantizing YOLOX with the latest mqbench and backend=tengine_u8 fails: AttributeError: 'dict' object has no attribute 'detach'

    Using the UP framework with the latest mqbench for QAT training of YOLOX and selecting backend=tengine_u8 raises: AttributeError: 'dict' object has no attribute 'detach'

    Below is the QAT configuration file used:

    num_classes: &num_classes 13
    runtime:
      aligned: true
        # async_norm: True
      special_bn_init: true
      task_names: quant_det
      runner:
        type: quant
    
    quant:
      quant_type: qat
      deploy_backend: Tengine_u8
      cali_batch_size: 900
      prepare_args:
        extra_qconfig_dict:
          w_observer: MinMaxObserver
          a_observer: EMAMinMaxObserver
          w_fakequantize: FixedFakeQuantize
          a_fakequantize: FixedFakeQuantize
        leaf_module: [Space2Depth, FrozenBatchNorm2d]
        extra_quantizer_dict:
          additional_module_type: [ConvFreezebn2d, ConvFreezebnReLU2d]
    
    
    mixup:
      type: yolox_mixup_cv2
      kwargs:
        extra_input: true
        input_size: [640, 640]
        mixup_scale: [0.8, 1.6]
        fill_color: 0
    
    mosaic:
      type: mosaic
      kwargs:
        extra_input: true
        tar_size: 640
        fill_color: 0
    
    random_perspective:
      type: random_perspective_yolox
      kwargs:
        degrees: 10.0 # 0.0
        translate: 0.1
        scale: [0.1, 2.0] # 0.5
        shear: 2.0 # 0.0
        perspective: 0.0
        fill_color: 0  # 0
        border: [-320, -320]
    
    augment_hsv:
      type: augment_hsv
      kwargs:
        hgain: 0.015
        sgain: 0.7
        vgain: 0.4
        color_mode: BGR
    
    flip:
      type: flip
      kwargs:
        flip_p: 0.5
    
    to_tensor: &to_tensor
      type: custom_to_tensor
    
    train_resize: &train_resize
      type: keep_ar_resize_max
      kwargs:
        max_size: 640
        random_size: [15, 25]
        scale_step: 32
        padding_type: left_top
        padding_val: 0
    
    test_resize: &test_resize
      type: keep_ar_resize_max
      kwargs:
        max_size: 640
        padding_type: left_top
        padding_val: 0
    
    dataset:
      train:
        dataset:
          type: coco
          kwargs:
            meta_file: train.json
            image_reader:
              type: fs_opencv
              kwargs:
                image_dir: &img_root /images/
                color_mode: BGR
            transformer: [*train_resize, *to_tensor]
        batch_sampler:
          type: base
          kwargs:
            sampler:
              type: dist
              kwargs: {}
            batch_size: 4
      test:
        dataset:
          type: coco
          kwargs:
            meta_file: &gt_file val.json
            image_reader:
              type: fs_opencv
              kwargs:
                image_dir: *img_root
                color_mode: BGR
            transformer: [*test_resize, *to_tensor]
            evaluator:
              type: COCO
              kwargs:
                gt_file: *gt_file
                iou_types: [bbox]
        batch_sampler:
          type: base
          kwargs:
            sampler:
              type: dist
              kwargs: {}
            batch_size: 4
      dataloader:
        type: base
        kwargs:
          num_workers: 4
          alignment: 32
          worker_init: true
          pad_type: batch_pad
    
    trainer: # Required.
      max_epoch: &max_epoch 6             # total epochs for the training
      save_freq: 1
      test_freq: 1
      only_save_latest: false
      optimizer:                 # optimizer = SGD(params,lr=0.01,momentum=0.937,weight_decay=0.0005)
        register_type: yolov5
        type: SGD
        kwargs:
          lr: 0.0000003125
          momentum: 0.9
          nesterov: true
          weight_decay: 0.0      # weight_decay = 0.0005 * batch_size / 64
      lr_scheduler:              # lr_scheduler = MultStepLR(optimizer, milestones=[9,14],gamma=0.1)
        warmup_epochs: 0        # set to be 0 to disable warmup. When warmup,  target_lr = init_lr * total_batch_size
        warmup_type: linear
        warmup_ratio: 0.001
        type: MultiStepLR
        kwargs:
          milestones: [2, 4]     # epochs to decay lr
          gamma: 0.1             # decay rate
    
    saver:
      save_dir: checkpoints/yolox_s_ret_a1_comloc_quant_tengine
      results_dir: results_dir/yolox_s_ret_a1_comloc_quant_tengine
      resume_model: /United-Perception/train_config/pretrain/300_65_ckpt_best.pth
      auto_resume: True
    
    
    
    ema:
      enable: false
      ema_type: exp
      kwargs:
        decay: 0.9998
    
    net:
    - name: backbone
      type: yolox_s
      kwargs:
        out_layers: [2, 3, 4]
        out_strides: [8, 16, 32]
        normalize: {type: mqbench_freeze_bn}
        act_fn: {type: Silu}
    - name: neck
      prev: backbone
      type: YoloxPAFPN
      kwargs:
        depth: 0.33
        out_strides: [8, 16, 32]
        normalize: {type: mqbench_freeze_bn}
        act_fn: {type: Silu}
    - name: roi_head
      prev: neck
      type: YoloXHead
      kwargs:
        num_classes: *num_classes
        width: 0.5
        num_point: &dense_points 1
        normalize: {type: mqbench_freeze_bn}
        act_fn: {type: Silu}
    - name: post_process
      prev: roi_head
      type: retina_post_iou
      kwargs:
        num_classes: *num_classes
                                      # number of classes including background; for RPN it's 2, for RetinaNet it's 81
        cfg:
          cls_loss:
            type: quality_focal_loss
            kwargs:
              gamma: 2.0
          iou_branch_loss:
            type: sigmoid_cross_entropy
          loc_loss:
            type: compose_loc_loss
            kwargs:
              loss_cfg:
              - type: iou_loss
                kwargs:
                  loss_type: giou
                  loss_weight: 1.0
              - type: l1_loss
                kwargs:
                  loss_weight: 1.0
          anchor_generator:
            type: hand_craft
            kwargs:
              anchor_ratios: [1]    # anchor strides are provided as feature strides by feature extractor
              anchor_scales: [4]   # scale of anchors relative to feature map
          roi_supervisor:
            type: atss
            kwargs:
              top_n: 9
              use_iou: true
          roi_predictor:
            type: base
            kwargs:
              pre_nms_score_thresh: 0.05    # to reduce computation
              pre_nms_top_n: 1000
              post_nms_top_n: 1000
              roi_min_size: 0                 # minimum scale of a valid roi
              merger:
                type: retina
                kwargs:
                  top_n: 100
                  nms:
                    type: naive
                    nms_iou_thresh: 0.65
    
    

    Below is the error message:

    [MQBENCH] INFO: Enable observer and Disable quantize for act_fake_quant
    [MQBENCH] INFO: Enable observer and Disable quantize for act_fake_quant
    [MQBENCH] INFO: Enable observer and Disable quantize for act_fake_quant
    Traceback (most recent call last):
      File "/opt/conda/lib/python3.8/runpy.py", line 194, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "/opt/conda/lib/python3.8/runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "/data/lsc/United-Perception/up/__main__.py", line 27, in <module>
        main()
      File "/data/lsc/United-Perception/up/__main__.py", line 21, in main
        args.run(args)
      File "/data/lsc/United-Perception/up/commands/train.py", line 144, in _main
        launch(main, args.num_gpus_per_machine, args.num_machines, args=args, start_method=args.fork_method)
      File "/data/lsc/United-Perception/up/utils/env/launch.py", line 52, in launch
        mp.start_processes(
      File "/opt/conda/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
        while not context.join():
      File "/opt/conda/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 150, in join
        raise ProcessRaisedException(msg, error_index, failed_process.pid)
    torch.multiprocessing.spawn.ProcessRaisedException: 
    
    -- Process 3 terminated with the following error:
    Traceback (most recent call last):
      File "/opt/conda/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
        fn(i, *args)
      File "/data/lsc/United-Perception/up/utils/env/launch.py", line 117, in _distributed_worker
        main_func(args)
      File "/data/lsc/United-Perception/up/commands/train.py", line 134, in main
        runner = RUNNER_REGISTRY.get(runner_cfg['type'])(cfg, **runner_cfg['kwargs'])
      File "/data/lsc/United-Perception/up/tasks/quant/runner/quant_runner.py", line 17, in __init__
        super(QuantRunner, self).__init__(config, work_dir, training)
      File "/data/lsc/United-Perception/up/runner/base_runner.py", line 59, in __init__
        self.build()
      File "/data/lsc/United-Perception/up/tasks/quant/runner/quant_runner.py", line 34, in build
        self.calibrate()
      File "/data/lsc/United-Perception/up/tasks/quant/runner/quant_runner.py", line 182, in calibrate
        self.model(batch)
      File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/data/lsc/United-Perception/up/tasks/quant/models/model_helper.py", line 76, in forward
        output = submodule(input)
      File "/opt/conda/lib/python3.8/site-packages/torch/fx/graph_module.py", line 308, in wrapped_call
        return cls_call(self, *args, **kwargs)
      File "/opt/conda/lib/python3.8/site-packages/torch/fx/graph_module.py", line 308, in wrapped_call
        return cls_call(self, *args, **kwargs)
      File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "<eval_with_key_2>", line 4, in forward
        input_1_post_act_fake_quantizer = self.input_1_post_act_fake_quantizer(input_1);  input_1 = None
      File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/data/lsc/United-Perception/MQBench/mqbench/fake_quantize/fixed.py", line 20, in forward
        self.activation_post_process(X.detach())
    AttributeError: 'dict' object has no attribute 'detach'
    

    Could you help take a look at what the problem is? Does mqbench not support Tengine yet?
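
    For context, the traceback shows the graph-input fake quantizer receiving the whole batch dict: FixedFakeQuantize.forward calls X.detach(), which a dict does not have. A hedged sketch of a workaround on the calibration side, assuming the batch is a dict holding the image tensor under a key such as 'image' (the key name is an assumption about the UP batch format):

    # The fake quantizer inserted at the graph input expects a Tensor, but the
    # calibration loop passes the whole batch dict. Feeding only the image
    # tensor avoids the dict hitting .detach().
    def calibrate_step(model, batch):
        x = batch["image"] if isinstance(batch, dict) else batch
        return model(x)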

    opened by RedHandLM 11
  • Hi, will export to QLinear save weights in int8?

    Using the tensorrt backend, will QLinear make the onnx model smaller? I got an error when trying to save to QLinear:

    deploy/common.py", line 138, in optimize_model
        assert node_detect, "Graph is illegel, error occured!"
    AssertionError: Graph is illegel, error occured!
    
    
    bug 
    opened by jinfagang 10
  • how to use in mmdet build model

    When building the model with mmdet, the model looks like: object { ModuleList aaaa, ModuleList bbb }. When using prepare_by_platform to trace it, I get an error like: TypeError: 'xxxobject' object does not support indexing

    Stale 
    opened by 791136190 10
  • how to ptq for Faster RCNN or SSD?

    From the QDrop paper, I notice the benchmark results include Faster RCNN.

    Could you provide these examples?

    In addition, it would be great to also provide PTQ for SSD, another important object detection network.
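
    Not an official example, but patterned on the ptq_reconstruction call quoted in a later issue on this page, a detector backbone could in principle be run through advanced PTQ like this (the config fields and values are assumptions drawn from that issue, not a verified recipe):

    from easydict import EasyDict
    from mqbench.advanced_ptq import ptq_reconstruction

    # stacked_tensor: a list of calibration batches collected beforehand;
    # model: the prepared (fake-quantized) detector.
    ptq_reconstruction_config = EasyDict(dict(
        pattern='block',                      # reconstruction granularity (assumed)
        scale_lr=4.0e-5,                      # illustrative hyperparameters
        warm_up=0.2,
        weight=0.01,
        max_count=20000,
        b_range=[20, 2],
        keep_gpu=True,
        round_mode='learned_hard_sigmoid',    # AdaRound-style rounding
        prob=0.5,                             # QDrop drop probability (assumed)
    ))
    model = ptq_reconstruction(model, stacked_tensor, ptq_reconstruction_config)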

    opened by wangshankun 9
  • onnx inference

    Hello. I finished converting the model to quantized onnx, but I can't use onnxruntime for inference. Error log: No Op registered for LearnablePerTensorAffine with domain_version of 11
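
    A likely cause (hedged): exporting directly with torch.onnx leaves MQBench's fake-quantize nodes such as LearnablePerTensorAffine in the graph as custom ops that onnxruntime has no kernels for; convert_deploy is the path that folds them into backend-legal quantization parameters. A minimal sketch (backend, input name, and shape are illustrative):

    from mqbench.convert_deploy import convert_deploy
    from mqbench.prepare_by_platform import BackendType

    # convert_deploy removes the fake-quantize custom ops and emits a
    # deployable onnx (e.g. mqbench_qmodel_deploy_model.onnx, as named in a
    # later issue on this page) plus the backend's quantization parameters.
    convert_deploy(model, BackendType.Tensorrt, {"x": [1, 3, 224, 224]})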

    opened by www516717402 9
  • Multiple inputs with different scales; quantization degrades the result

    The model for this task has two inputs. One is an image, which goes through the backbone to produce image features. The other is the bbox coordinates predicted by a separate detection model, normalized to float values between 0 and 1. The coordinates go through a linear layer and upsampling conv layers, and the result is concatenated with the image features. After quantizing with MQBench, the INT8 inference results are accurate for head1 but show a clear accuracy loss for head2. The network is defined as follows:

    [image: network definition]

    How is a network with this structure usually handled?

    Stale 
    opened by zhouyang1989 8
  • DDP multi-gpu training issues with Imagenet example

    I am trying to run multi-GPU QAT training using the Imagenet example code. It runs into an issue after the first training iteration update:

    RuntimeError: grad.numel() == bucket_view.numel() INTERNAL ASSERT FAILED at "/pytorch/torch/lib/c10d/reducer.cpp":343, please report a bug to PyTorch.

    The code works fine with multi-GPU training if I comment out the wrapper code that quantizes the original model, i.e., model = prepare_by_platform(model, args.backend). Did anyone encounter the same issue?
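
    A guess worth checking (not a confirmed fix): DDP builds its gradient buckets from the parameters present at wrap time, so if graph rewriting or calibration changes fake-quantize parameters afterwards, bucket sizes can mismatch. Finishing prepare and calibration before wrapping might help; a sketch of that ordering, with calib_loader and local_rank as illustrative names:

    import torch
    from mqbench.prepare_by_platform import prepare_by_platform
    from mqbench.utils.state import enable_calibration, enable_quantization

    model = prepare_by_platform(model, args.backend)

    # Settle observer/fake-quantize state before DDP registers its buckets.
    enable_calibration(model)
    with torch.no_grad():
        for images, _ in calib_loader:
            model(images)
    enable_quantization(model)

    model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])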

    opened by kartikgupta-at-anu 7
  • KeyError for AdaRound and QDrop

    When I use MQBench to quantize the RLFN model with QDrop and AdaRound, errors occur. Env: Ubuntu 18.04, CUDA 11.1, MQBench version: e2175203c8e62596e66500a720a6cb1d1fc1dacd. RLFN is a super-resolution model from https://github.com/ofsoundof/NTIRE2022_ESR (model id 4).

    Error:

    [MQBENCH] INFO: Disable observer and Disable quantize.
    [MQBENCH] INFO: Disable observer and Enable quantize.
    [MQBENCH] INFO: prepare layer reconstruction for fea_conv
    [MQBENCH] INFO: the node list is below!
    [MQBENCH] INFO: [input_1_post_act_fake_quantizer, fea_conv, fea_conv_post_act_fake_quantizer_2]
    Traceback (most recent call last):
      File "quant.py", line 158, in <module>
        main()
      File "quant.py", line 137, in main
        model = ptq_reconstruction(model, stacked_tensor, EasyDict(ptq_reconstruction_config))
      File ".../mqbench/advanced_ptq.py", line 636, in ptq_reconstruction
        fp32_module = fp32_modules[qnode2fpnode_dict[layer_node_list[-1]]]
    KeyError: fea_conv_post_act_fake_quantizer_2

    Here is my code tracing and analysis.

    (1) model.code:

    def forward(self, input):
        input_1 = input
        input_1_post_act_fake_quantizer = self.input_1_post_act_fake_quantizer(input_1);  input_1 = None
        fea_conv = self.fea_conv(input_1_post_act_fake_quantizer);  input_1_post_act_fake_quantizer = None
        fea_conv_post_act_fake_quantizer_2 = self.fea_conv_post_act_fake_quantizer(fea_conv)
        fea_conv_post_act_fake_quantizer_1 = self.fea_conv_post_act_fake_quantizer(fea_conv)
        fea_conv_post_act_fake_quantizer = self.fea_conv_post_act_fake_quantizer(fea_conv);  fea_conv = None
        ...

    (2) The problem: several quant-model nodes share one node.target, so quant_named_nodes is missing keys. In mqbench/advanced_ptq.py, qnode2fpnode(quant_modules, fp32_modules):

    def qnode2fpnode(quant_modules, fp32_modules):
        quant_named_nodes = {node.target: node for node in quant_modules}
        # node: fea_conv_post_act_fake_quantizer_2 -> node.target: fea_conv_post_act_fake_quantizer
        # node: fea_conv_post_act_fake_quantizer_1 -> node.target: fea_conv_post_act_fake_quantizer
        fp32_named_nodes = {node.target: node for node in fp32_modules}
        qnode2fpnode_dict = {quant_named_nodes[key]: fp32_named_nodes[key] for key in quant_named_nodes}
        return qnode2fpnode_dict

    I am not familiar with the trained-PTQ process, so I am looking forward to your suggestions and solutions.
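
    The analysis above pinpoints the collision: several graph nodes share one node.target, so a dict keyed by target drops entries. A hedged sketch of one possible direction (keying by node.name, which is unique per fx node) — untested, and it assumes node names line up between the quant and fp32 graphs, which would need checking:

    def qnode2fpnode(quant_modules, fp32_modules):
        # node.name is unique per fx node, while node.target collides when
        # the same fake-quantizer module is called several times.
        quant_named_nodes = {node.name: node for node in quant_modules}
        fp32_named_nodes = {node.name: node for node in fp32_modules}
        return {quant_named_nodes[k]: fp32_named_nodes[k]
                for k in quant_named_nodes if k in fp32_named_nodes}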

    opened by feixiang7701 7
  • MQBench results are not bit-exact with SNPE DSP results

    MQBench is a very interesting project.

    Environment: pytorch 1.8.1; MQBench: branch main, e2175203; SNPE: snpe-1.61.0.3358

    Question: I ran a simple test with a model of just two conv layers, comparing MQBench's quantized results with the SNPE DSP results, and found they are not bit-exact. Is this expected, or did I do something wrong?

    Reproduction

    • MQBench quantization
    # Imports needed by the snippet below.
    import os
    import random

    import numpy as np
    import torch
    import torch.nn as nn

    from mqbench.prepare_by_platform import prepare_by_platform, BackendType
    from mqbench.utils.state import enable_calibration, enable_quantization
    from mqbench.convert_deploy import convert_deploy

    def seed_torchv2(seed: int = 42) -> None:
        np.random.seed(seed)
        random.seed(seed)
        os.environ['PYTHONHASHSEED'] = str(seed)
        torch.backends.cudnn.benchmark = False
        torch.backends.cudnn.deterministic = True
        torch.manual_seed(seed)
        torch.cuda.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
    
    class Net(nn.Module):
        def __init__(self):
            super(Net, self).__init__()
            self.avg_pool = nn.AdaptiveAvgPool2d(1)
            self.conv = nn.Conv2d(3, 128,1,1, bias=True)
            self.conv2 = nn.Conv2d(128, 20,1,1,bias=True)
            self.relu = nn.ReLU()
            self.flat = nn.Flatten(1)
    
        def forward(self, x): # (1,3,20,20)
            x = self.avg_pool(x)
            x = self.conv(x)
            x = self.conv2(x)
            x = self.flat(x)
            return x
    
        
    SIZE = 20
    backend = BackendType.SNPE
    
    np.set_printoptions(suppress=True, precision=6)
    torch.set_printoptions(6)
    seed_torchv2(42)
    
    
    def gen_input_data(length=100):
        data = []
        for _ in range(length):
            data.append(np.ones((1,3,SIZE,SIZE), dtype=np.float32) * 0.1 * np.random.randint(0, 10))
        return np.stack(data, axis=0)
    
    
    model = Net()          # use vision pre-defined model
    model.eval()
    
    train_data = gen_input_data(100)
    dummy_input = np.zeros((1,3,SIZE,SIZE), dtype=np.float32) + 0.5
    
    
    print("pytorch fp32 result")
    print(model(torch.from_numpy(dummy_input.copy())).float())
    
    
    # quant
    model = prepare_by_platform(model, backend)
    
    enable_calibration(model)
    
    for i, d in enumerate(train_data):
        _ = model(torch.from_numpy(d).float())
    
    enable_quantization(model)
    
    
    print("quant sim result")
    print(model(torch.from_numpy(dummy_input.copy())).float())
    
    
    input_shape = {"image":[1,3,SIZE,SIZE]}
    convert_deploy(model, backend, input_shape)
    
    # save dummy input and test it on DSP
    image = dummy_input.copy()
    assert image.shape == (1,3,SIZE,SIZE)
    assert image.dtype == np.float32
    image.tofile("./tmp.raw")
    print("#" * 50)
    
    pytorch fp32 result
    tensor([[-0.347889, -0.289117, -0.083191, -0.222827,  0.124699,  0.235278,
              0.434433, -0.302174, -0.047763,  0.229472, -0.037784,  0.082496,
             -0.150852, -0.170281,  0.130777,  0.146441, -0.494992, -0.182881,
              0.600709, -0.063706]], grad_fn=<ViewBackward>)
    
    quant sim result
    tensor([[-0.344930, -0.290467, -0.081694, -0.222389,  0.131618,  0.231466,
              0.435701, -0.299544, -0.049924,  0.226927, -0.036308,  0.081694,
             -0.149772, -0.172465,  0.131618,  0.149772, -0.494702, -0.181542,
              0.599088, -0.063540]], grad_fn=<ViewBackward>)
    
    • DLC conversion:
      ./snpe-onnx-to-dlc --input_network mqbench_qmodel_deploy_model.onnx --output_path tmp.dlc --quantization_overrides mqbench_qmodel_clip_ranges.json
      ./snpe-dlc-quantize --input_dlc tmp.dlc --input_list tmp_file.txt --output_dlc tmp_quat_mq.dlc --override_params --bias_bitwidth 32
      tmp_file.txt and tmp_file_android.txt each list a single file, tmp.raw, which the Python program above saves as a 3x20x20 float file.

    • SNPE DSP run:
      ./snpe-net-run --container /sdcard/tmp_quat_mq.dlc --input_list /sdcard/tmp_file_android.txt --use_dsp

    74.raw (20,)
    [-0.34493  -0.285929 -0.081694 -0.222389  0.127079  0.236005  0.435701
     -0.299544 -0.049924  0.226927 -0.036308  0.081694 -0.149772 -0.172465
      0.131618  0.149772 -0.490163 -0.177003  0.599088 -0.068078]

    Comparing the quant sim result with the DSP result, several entries disagree (indices 1, 4, 5, 16, 17, and 19).

    good first issue Stale 
    opened by changewOw 7
  • Problem generating the deploy model with ONNX-QNN

    Hello, thank you very much for your excellent work. I am a beginner with MQBench. When quantizing a vgg19 model with the QNN scheme, I found that with the following config the generated onnx model cannot go through the next conversion step, i.e., removing the fake-quantize blocks and generating the deploy model. How can this problem be solved?

                extra_qconfig_dict = {
                    'w_observer': 'ClipStdObserver',
                    'a_observer': 'ClipStdObserver',
                    'w_fakequantize': 'DSQFakeQuantize',
                    'a_fakequantize': 'DSQFakeQuantize',
                    'w_qscheme': {
                        'bit': 8,
                        'symmetry': True,
                        'per_channel': False,
                        'pot_scale': True
                    },
                    'a_qscheme': {
                        'bit': 8,
                        'symmetry': True,
                        'per_channel': False,
                        'pot_scale': True
                    }
                }
                prepare_custom_config_dict = {
                    'extra_qconfig_dict': extra_qconfig_dict
                }
                self.model = prepare_by_platform(self.model, BackendType.ONNX_QNN, prepare_custom_config_dict)
    

    The error message is as follows:

      File "openpose_mqb.py", line 411, in train
        convert_deploy(self.model, BackendType.ONNX_QNN, input_shape, model_name = 'model_QNN')
      File "MQBench-0.0.6-py3.9.egg/mqbench/convert_deploy.py", line 184, in convert_deploy
        convert_function(deploy_model, **kwargs)
      File "MQBench-0.0.6-py3.9.egg/mqbench/convert_deploy.py", line 138, in deploy_qparams_tvm
        ONNXQNNPass(onnx_model_path).run(model_name)
      File "MQBench-0.0.6-py3.9.egg/mqbench/deploy/deploy_onnx_qnn.py", line 273, in run
        self.format_qlinear_dtype_pass()
      File "MQBench-0.0.6-py3.9.egg/mqbench/deploy/deploy_onnx_qnn.py", line 258, in format_qlinear_dtype_pass
        scale, zero_point, qmin, qmax = node.input[1], node.input[2], node.input[3], node.input[4]
    IndexError: list index (3) out of range
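
    A hedged guess at the cause: the QNN deploy pass expects QuantizeLinear-style nodes carrying scale, zero_point, qmin, and qmax inputs, and DSQFakeQuantize's onnx export may not produce all of them (hence the IndexError on node.input[3]). The configs quoted elsewhere on this page use FixedFakeQuantize for deployable backends, so switching the fake-quantize type may be worth a try (unverified):

    from mqbench.prepare_by_platform import prepare_by_platform, BackendType

    # model: the fp32 module to quantize.
    extra_qconfig_dict = {
        'w_observer': 'MinMaxObserver',
        'a_observer': 'EMAMinMaxObserver',
        'w_fakequantize': 'FixedFakeQuantize',   # exports plain quantize params
        'a_fakequantize': 'FixedFakeQuantize',
    }
    prepare_custom_config_dict = {'extra_qconfig_dict': extra_qconfig_dict}
    model = prepare_by_platform(model, BackendType.ONNX_QNN, prepare_custom_config_dict)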
    
    opened by Zhoukai1234 1
  • The QAT top1@acc of mobilenet_v2 a4w4 LSQ cannot be reproduced as the 70.6% shown in the paper

    Hi, thanks for providing this amazing quantization framework! I want to reproduce the top1@acc of mobilenet_v2 a4w4 LSQ under the academic setting. The quantization configuration is as below:

    dict(qtype='affine',
         w_qscheme=QuantizeScheme(symmetry=True, per_channel=True, pot_scale=False, bit=4, symmetric_range=False, p=2.4),
         a_qscheme=QuantizeScheme(symmetry=True, per_channel=False, pot_scale=False, bit=4, symmetric_range=False, p=2.4),
         default_weight_quantize=LearnableFakeQuantize,
         default_act_quantize=LearnableFakeQuantize,
         default_weight_observer=MSEObserver,
         default_act_observer=EMAMSEObserver),
    

    For the training strategy, I set weight_decay=0, lr=1e-3 and batch_size=128 per GPU on 8 Nvidia A100 cards. The adjust_learning_rate strategy is kept the same as in main.py. However, the highest top1@acc I reproduced on the validation set was only 68.66%, far from the 70.6% presented in the paper.

    Which part have I missed?

    opened by LuletterSoul 0
  • TraceError when running PTQ quantization on yolov5s

    Hi everyone,

    Today I tried to use mqbench to run PTQ quantization on yolov5s.

    The yolov5s model comes from: https://github.com/ultralytics/yolov5.git

    When I tried the following code to quantize yolov5s:

    from mqbench.prepare_by_platform import prepare_by_platform, BackendType
    
    backend = BackendType.ONNX_QNN
    model = prepare_by_platform(model, backend)
    

    I ran into this problem:

    torch.fx.proxy.TraceError: symbolically traced variables cannot be used as inputs to control flow
    

    Does anyone know a good, simple way to handle this?
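
    One common way around fx control-flow trace errors (hedged; the module class named here is an assumption about the yolov5 codebase, not taken from this repo) is to mark the offending modules as leaf modules so the tracer does not descend into their forward:

    from mqbench.prepare_by_platform import prepare_by_platform, BackendType
    from models.yolo import Detect   # yolov5's detection head (illustrative)

    # Modules whose forward contains data-dependent control flow can be kept
    # opaque to the fx tracer via 'leaf_module'.
    prepare_custom_config_dict = {'leaf_module': [Detect]}
    model = prepare_by_platform(model, BackendType.ONNX_QNN, prepare_custom_config_dict)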

    opened by xiaopengaia 2
  • How to fuse conv and bn layers

    Hi everyone,

    Today, while quantizing resnet50, I wanted to fuse the conv and bn layers before quantizing.

    I found the file fuser_method_mappings.py and called the fuse_conv_freezebn function.

    But to do the fusion I had to extract each conv and bn from the network individually and merge them by hand, which seems overly cumbersome.

    So I looked in r50_8_8.yaml for a parameter that fuses conv and bn, without success.

    How do people fuse bn and conv layers? Is there a simpler way, or is there a parameter in r50_8_8.yaml that handles this?

    Any pointers would be appreciated, thanks!
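
    For what it's worth (hedged, inferred from the QAT config quoted earlier on this page, which uses mqbench_freeze_bn and ConvFreezebn2d modules): conv-bn fusion normally does not need manual calls; prepare_by_platform rewrites the traced graph and applies the fusion patterns during prepare. A quick way to check is to inspect the prepared module types:

    import torchvision.models as models
    from mqbench.prepare_by_platform import prepare_by_platform, BackendType

    model = models.resnet50(pretrained=True).train()
    model = prepare_by_platform(model, BackendType.Tensorrt)

    # If fusion ran during prepare, fused conv-bn module types should show up here.
    print({type(m).__name__ for m in model.modules()})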

    opened by xiaopengaia 2