A treasure chest for visual recognition powered by PaddlePaddle

Overview

简体中文 | English

PaddleClas

Introduction

PaddleClas is an image recognition toolkit built on PaddlePaddle for both industry and academia. It helps users train better vision models and put them into real-world applications.

Recent updates

  • 2021.11.1 Released the PP-ShiTu technical report and added a beverage recognition demo.
  • 2021.10.23 Released PP-ShiTu, a lightweight image recognition system that completes recognition against a gallery of 100k+ images in 0.2 s on CPU. Click here to try it out.
  • 2021.09.17 Released the PP-LCNet series of ultra-lightweight backbone models. On an Intel CPU, a single image is predicted in about 5 ms, with 80.82% Top-1 accuracy on ImageNet-1K, surpassing ResNet152. See the paper or the PP-LCNet model introduction for details; metrics and pretrained weights can be downloaded here.
  • more

Features

  • PP-ShiTu lightweight image recognition system: integrates object detection, feature learning, and image retrieval modules, and is applicable to a wide range of image recognition tasks. Recognition against a gallery of 100k+ images takes only 0.2 s on CPU.

  • PP-LCNet lightweight CPU backbone: a lightweight backbone network designed specifically for CPU devices, clearly surpassing competing models in both speed and accuracy.

  • Rich pretrained model zoo: provides 175 ImageNet pretrained models across 36 series, including 7 curated series whose structures can be quickly modified.

  • Comprehensive and easy-to-use feature learning components: integrates 12 metric learning methods such as ArcMargin and triplet loss, which can be combined and switched freely through configuration files (see the sketch after this list).

  • SSLD knowledge distillation: 14 classification pretrained models with accuracy generally improved by more than 3%; among them, the ResNet50_vd model reaches 84.0% Top-1 accuracy on ImageNet-1k, and the Res2Net200_vd pretrained model reaches up to 85.1% Top-1 accuracy.
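
As a concrete illustration of the metric-learning component mentioned above, below is a minimal NumPy sketch of the arc-margin idea (one of the integrated methods). It is only an illustrative example with assumed shapes and hyperparameters, not the PaddleClas implementation; in PaddleClas the loss is selected through the configuration file.

    import numpy as np

    def arc_margin_logits(features, class_weights, labels, s=30.0, m=0.5):
        """Cosine logits with an additive angular margin on the target class."""
        # L2-normalize embeddings and class centers so their dot product is a cosine
        f = features / np.linalg.norm(features, axis=1, keepdims=True)
        w = class_weights / np.linalg.norm(class_weights, axis=1, keepdims=True)
        cosine = f @ w.T
        theta = np.arccos(np.clip(cosine, -1.0, 1.0))
        # add the margin m only to the angle of each sample's ground-truth class
        margin = np.zeros_like(cosine)
        margin[np.arange(len(labels)), labels] = m
        return s * np.cos(theta + margin)

    # toy usage: 4 samples, 8-dim embeddings, 3 classes
    feats = np.random.randn(4, 8)
    weights = np.random.randn(3, 8)
    logits = arc_margin_logits(feats, weights, labels=np.array([0, 2, 1, 0]))
    print(logits.shape)  # (4, 3); these logits feed into softmax cross-entropy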

Join the technical discussion group

  • You can scan the WeChat group QR code below to join the PaddleClas WeChat group for faster Q&A and discussion with developers from many industries. We look forward to your joining.

Quick experience

Quick experience of PP-ShiTu image recognition: click here.

Tutorials

Introduction to the PP-ShiTu image recognition system

PP-ShiTu is a practical, lightweight, general-purpose image recognition system composed of three main modules: mainbody detection, feature learning, and vector retrieval. The system applies multiple strategies across eight aspects (backbone selection and tuning, loss function selection, data augmentation, learning-rate schedules, regularization parameter selection, use of pretrained models, and model pruning and quantization) to optimize the model of each module, yielding a system that recognizes an image against a gallery of 100k+ images in only 0.2 s on CPU. For more details, please refer to the PP-ShiTu technical scheme.
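
To make the detect-embed-retrieve flow described above easier to picture, here is a minimal NumPy sketch. The functions detect_mainbody and extract_feature are hypothetical placeholders standing in for the detection and feature-extraction models, and the retrieval step is a brute-force cosine search rather than the indexed retrieval PP-ShiTu actually uses.

    import numpy as np

    def detect_mainbody(image):
        """Hypothetical detector: return crops of candidate objects in the image."""
        return [image]  # placeholder: treat the whole image as a single candidate

    def extract_feature(crop):
        """Hypothetical embedding model: map a crop to an L2-normalized vector."""
        v = np.resize(np.asarray(crop, dtype="float32").ravel(), 128)
        return v / (np.linalg.norm(v) + 1e-12)

    def retrieve(query_vec, gallery_vecs, gallery_labels, top_k=5):
        """Brute-force cosine-similarity search over the gallery."""
        sims = gallery_vecs @ query_vec
        order = np.argsort(-sims)[:top_k]
        return [(gallery_labels[i], float(sims[i])) for i in order]

    # toy gallery: 1,000 L2-normalized 128-d vectors with string labels
    gallery = np.random.randn(1000, 128).astype("float32")
    gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)
    labels = [f"item_{i}" for i in range(len(gallery))]

    image = np.random.rand(224, 224, 3)
    for crop in detect_mainbody(image):
        print(retrieve(extract_feature(crop), gallery, labels, top_k=3))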

PP-ShiTu image recognition system demos

  • Bottled beverage recognition
  • Product recognition
  • Cartoon character recognition
  • Logo recognition
  • Vehicle recognition

License

This project is released under the Apache 2.0 license.

Contribution

Contributions to PaddleClas are very welcome, and we greatly appreciate your feedback. If you would like to contribute code to PaddleClas, please refer to the contribution guide.

  • Many thanks to nblib for fixing the RandErasing data augmentation configuration file in PaddleClas.
  • Many thanks to chenpy228 for fixing several typos in the PaddleClas documentation.
  • Many thanks to jm12138 for adding the ViT, DeiT, and RepVGG model series to PaddleClas.
Comments
  • The ONNX model exported from a PaddleClas MultiLabel model gives buggy inference results

    The inference results of the ONNX model exported from the MultiLabel model differ greatly from those of the Paddle model.

    1. Code: PaddleClas develop branch; 2. Environment: docker image paddlepaddle/paddle:2.1.0-gpu-cuda11.2-cudnn8

    Here are the steps I took:

    Export the Paddle inference model:

    python3 tools/export_model.py -c ./ppcls/configs/quick_start/professional/MobileNetV1_multilabel_intent_eval.yaml -o Arch.pretrained="./output/k1000_focal_2/MobileNetV2/latest"

    Install the required packages:

    python -m pip install -i https://pypi.tuna.tsinghua.edu.cn/simple --upgrade pip
    pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
    pip install paddle2onnx onnx onnx-simplifier onnxruntime-gpu

    Export to ONNX:

    paddle2onnx --model_dir inference/ --model_filename inference.pdmodel --params_filename inference.pdiparams --save_file focal2.onnx --opset_version 10 --enable_dev_version True --enable_onnx_checker True

    For the same input image, the Paddle output is: (screenshot)

    But the ONNX output is: (screenshot)

    I am building a 130-class multi-label classifier. The two outputs differ greatly in value; after applying sigmoid and sorting the label probabilities, the top label is the same, but the values are clearly very different.

    By the way, the official example https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/en/quick_start/quick_start_multilabel_classification_en.md, which uses NUS-WIDE-SCENE for the multi-label task, runs into the same problem when the final model is exported to ONNX.

    Is this a mistake on my side, or is there a workaround? I hope to get an answer soon; otherwise I will probably switch to PyTorch next week to finish my work.
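
    One way to quantify the discrepancy described above is to run the same tensor through both runtimes and compare the raw outputs. The following is a minimal sketch, assuming the inference/ directory and focal2.onnx produced by the commands in this report; a random tensor stands in for a real preprocessed image, so it only illustrates the comparison, not the exact PaddleClas preprocessing.

        import numpy as np
        import onnxruntime as ort
        from paddle.inference import Config, create_predictor

        # stand-in for a preprocessed image of shape (1, 3, 224, 224)
        x = np.random.rand(1, 3, 224, 224).astype("float32")

        # Paddle inference output
        cfg = Config("inference/inference.pdmodel", "inference/inference.pdiparams")
        predictor = create_predictor(cfg)
        inp = predictor.get_input_handle(predictor.get_input_names()[0])
        inp.copy_from_cpu(x)
        predictor.run()
        paddle_out = predictor.get_output_handle(predictor.get_output_names()[0]).copy_to_cpu()

        # ONNX Runtime output on the same tensor
        sess = ort.InferenceSession("focal2.onnx")
        onnx_out = sess.run(None, {sess.get_inputs()[0].name: x})[0]

        # a large value here reproduces the mismatch reported above
        print("max abs diff:", np.abs(paddle_out - onnx_out).max())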

    status/close 
    opened by xddun 21
  • add support for windows and cpu

    Add support for Windows and CPU.

    • set the environment

      • for windows: set PYTHONPATH=%cd%;%PYTHONPATH%
      • for linux: export PYTHONPATH=$PWD:$PYTHONPATH
    • train

    python tools/train_multi_platform.py -c configs/ResNet/ResNet50.yaml
    
    • eval
    python -u  tools/eval_multi_platform.py -c configs/ResNet/ResNet50.yaml -o pretrained_model=./output/ResNet50/1/ppcls -o VALID.batch_size=2
    
    • If you want to train on CPU, you should use tools/train_multi_platform.py and set use_gpu to False explicitly in the config file (*.yaml). The following is an example.
    mode: 'train'
    ARCHITECTURE:
        name: 'ResNet50'
    
    pretrained_model: ""
    model_save_dir: "./output/"
    classes_num: 1000
    use_gpu: False
    ..
    ..
    ..
    
    opened by littletomatodonkey 15
  • The self-trained LCNet mainbody detection model fails at prediction with TypeError: only size-1 arrays can be converted to Python scalars

    1. PaddleClas and PaddlePaddle versions: PaddleClas release/2.3, PaddlePaddle 2.2.0
    2. Versions of other products involved: PaddleDetection release/2.3
    3. Training environment: Linux, Python 3.7, CUDA 11.2, cuDNN 8
    4. Dataset: CUB_200_2011, with the category_id of every detection box changed to 1; the mainbody detection model was trained with PaddleDetection starting from picodet_PPLCNet_x2_5_mainbody_lite_v1.0_pretrained.pdparams. Training configs and logs: yml&log.zip
    5. After training, the model was exported with PaddleDetection/blob/release/2.3/tools/export_model.py. Prediction with PaddleDetection/blob/release/2.3/tools/infer.py works fine, but prediction with PaddleClas/blob/release/2.3/deploy/python/predict_det.py raises the following error:

    (screenshot of the TypeError traceback)

    6. Also, how should the shape-unmatched warnings in the training log be resolved? Thanks.
    opened by niancheng 14
  • export_model.py fails with an error when run

    Following section "4. Use the inference model for model inference" in the documentation, an error occurred when running the following code:

    python tools/export_model.py \
        --model MobileNetV3_large_x1_0 \
        --pretrained_model ./output/MobileNetV3_large_x1_0/best_model/ppcls \
        --output_path ./inference
    

    The error message is as follows:

    
    
    /home/zhouwentest/comSoftware/python3.7/lib/python3.7/site-packages/paddle/fluid/layers/utils.py:26: DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
    Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
      def convert_to_list(value, n, name, dtype=np.int):
    W0318 10:31:46.486126 59140 device_context.cc:362] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 10.2, Runtime API Version: 10.2
    W0318 10:31:46.489892 59140 device_context.cc:372] device: 0, cuDNN Version: 7.6.
    /home/zhouwentest/comSoftware/python3.7/lib/python3.7/site-packages/paddle/fluid/layers/utils.py:77: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3,and in 3.9 it will stop working
      return (isinstance(seq, collections.Sequence) and
    Traceback (most recent call last):
      File "tools/export_model.py", line 79, in <module>
        main()
      File "tools/export_model.py", line 75, in main
        paddle.jit.save(model, os.path.join(args.output_path, "inference"))
      File "<decorator-gen-60>", line 2, in save
      File "/home/zhouwentest/comSoftware/python3.7/lib/python3.7/site-packages/paddle/fluid/wrapped_decorator.py", line 25, in __impl__
        return wrapped_func(*args, **kwargs)
      File "/home/zhouwentest/comSoftware/python3.7/lib/python3.7/site-packages/paddle/fluid/dygraph/base.py", line 39, in __impl__
        return func(*args, **kwargs)
      File "/home/zhouwentest/comSoftware/python3.7/lib/python3.7/site-packages/paddle/fluid/dygraph/jit.py", line 681, in save
        inner_input_spec)
      File "/home/zhouwentest/comSoftware/python3.7/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 488, in concrete_program_specify_input_spec
        *desired_input_spec)
      File "/home/zhouwentest/comSoftware/python3.7/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 402, in get_concrete_program
        concrete_program, partial_program_layer = self._program_cache[cache_key]
      File "/home/zhouwentest/comSoftware/python3.7/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 711, in __getitem__
        self._caches[item] = self._build_once(item)
      File "/home/zhouwentest/comSoftware/python3.7/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 702, in _build_once
        class_instance=cache_key.class_instance)
      File "<decorator-gen-58>", line 2, in from_func_spec
      File "/home/zhouwentest/comSoftware/python3.7/lib/python3.7/site-packages/paddle/fluid/wrapped_decorator.py", line 25, in __impl__
        return wrapped_func(*args, **kwargs)
      File "/home/zhouwentest/comSoftware/python3.7/lib/python3.7/site-packages/paddle/fluid/dygraph/base.py", line 39, in __impl__
        return func(*args, **kwargs)
      File "/home/zhouwentest/comSoftware/python3.7/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 658, in from_func_spec
        error_data.raise_new_exception()
      File "/home/zhouwentest/comSoftware/python3.7/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/error.py", line 189, in raise_new_exception
        six.exec_("raise new_exception from None")
      File "<string>", line 1, in <module>
    AttributeError: In transformed code:
    
        File "tools/export_model.py", line 51, in forward (* user code *)
    	x = self.pre_net(inputs)
        File "/home/zhouwentest/comSoftware/python3.7/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_call_func.py", line 220, in convert_call
    	forward_func = convert_to_static(forward_func)
        File "/home/zhouwentest/comSoftware/python3.7/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 141, in convert_to_static
    	static_func = _FUNCTION_CACHE.convert_with_cache(function)
        File "/home/zhouwentest/comSoftware/python3.7/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 78, in convert_with_cache
    	static_func = self._convert(func)
        File "/home/zhouwentest/comSoftware/python3.7/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 116, in _convert
    	root_wrapper = self._dygraph_to_static.get_static_ast(root)
        File "/home/zhouwentest/comSoftware/python3.7/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/ast_transformer.py", line 61, in get_static_ast
    	self.transfer_from_node_type(self.static_analysis_root)
        File "/home/zhouwentest/comSoftware/python3.7/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/ast_transformer.py", line 92, in transfer_from_node_type
    	self._apply(transformer, node_wrapper, log_level=index + 1)
        File "/home/zhouwentest/comSoftware/python3.7/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/ast_transformer.py", line 65, in _apply
    	transformer(node_wrapper).transform()
        File "/home/zhouwentest/comSoftware/python3.7/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/loop_transformer.py", line 435, in transform
    	self.visit(self.root)
        File "/home/zhouwentest/comSoftware/python3.7/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/loop_transformer.py", line 438, in visit
    	self.generic_visit(node)
        File "/home/zhouwentest/comSoftware/python3.7/lib/python3.7/ast.py", line 326, in generic_visit
    	value = self.visit(value)
        File "/home/zhouwentest/comSoftware/python3.7/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/loop_transformer.py", line 441, in visit
    	self.replace_stmt_list(node.body)
        File "/home/zhouwentest/comSoftware/python3.7/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/loop_transformer.py", line 457, in replace_stmt_list
    	new_stmts = self.get_for_stmt_nodes(body_list[i])
        File "/home/zhouwentest/comSoftware/python3.7/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/loop_transformer.py", line 479, in get_for_stmt_nodes
    	stmts_tuple = current_for_node_parser.parse()
        File "/home/zhouwentest/comSoftware/python3.7/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/utils.py", line 1030, in parse
    	return self._parse_for_stmts()
        File "/home/zhouwentest/comSoftware/python3.7/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/utils.py", line 1094, in _parse_for_stmts
    	target_node, assign_node = self._build_assign_var_slice_node()
        File "/home/zhouwentest/comSoftware/python3.7/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/utils.py", line 1245, in _build_assign_var_slice_node
    	slice=gast.Index(value=gast.Name(
        AttributeError: module 'gast' has no attribute 'Index'
    
    
    opened by duohappy 14
  • Following the PaddleClas documentation on Paddle Serving deployment, the startup log looks normal, but calling python3 pipeline_http_client.py fails

    CentOS environment, PaddlePaddle 2.1.0. Running python3.7 python/predict_system.py -c configs/inference_general.yaml -o Global.use_gpu=False directly gives correct results, but after Serving deployment, calling python3 pipeline_http_client.py fails with the following error:

    File "/usr/local/lib/python3.7/dist-packages/paddle_serving_server/pipeline/operator.py", line 966, in _run_process
      feed_batch, typical_logid)
    File "/usr/local/lib/python3.7/dist-packages/paddle_serving_server/pipeline/operator.py", line 577, in process
      log_id=typical_logid)
    File "/usr/local/lib/python3.7/dist-packages/paddle_serving_app/local_predict.py", line 329, in predict
      self.predictor.run()
    ValueError: In user code:

    File "tools/export_model.py", line 115, in <module>
      main()
    File "tools/export_model.py", line 111, in main
      run(FLAGS, cfg)
    File "tools/export_model.py", line 77, in run
      trainer.export(FLAGS.output_dir)
    File "/paddle/code/gry/mainbody/PaddleDetection/ppdet/engine/trainer.py", line 582, in export
      input_spec, static_model.forward.main_program,
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 537, in main_program
      concrete_program = self.concrete_program
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 453, in concrete_program
      return self.concrete_program_specify_input_spec(input_spec=None)
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 491, in concrete_program_specify_inpu                                                                                   t_spec
      *desired_input_spec)
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 401, in get_concrete_program
      concrete_program, partial_program_layer = self._program_cache[cache_key]
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 714, in __getitem__
      self._caches[item] = self._build_once(item)
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 705, in _build_once
      class_instance=cache_key.class_instance)
    File "<decorator-gen-64>", line 2, in from_func_spec
    
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/wrapped_decorator.py", line 25, in __impl__
      return wrapped_func(*args, **kwargs)
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/base.py", line 40, in __impl__
      return func(*args, **kwargs)
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 655, in from_func_spec
      outputs = static_func(*inputs)
    File "/paddle/temp/tmpufw8i6kf.py", line 29, in forward
      false_fn_1, (), (), (out,))
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 210, in convert_ifelse
      return _run_py_ifelse(pred, true_fn, false_fn, true_args, false_args)
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 235, in _run_py_ifelse
      return true_fn(*true_args) if pred else false_fn(*false_args)
    File "/paddle/code/gry/mainbody/PaddleDetection/ppdet/modeling/architectures/meta_arch.py", line 28, in forward
      out = self.get_pred()
    File "/paddle/temp/tmp6d02fg7t.py", line 27, in get_pred
      self), (__return_value_0,))
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 210, in convert_ifelse
      return _run_py_ifelse(pred, true_fn, false_fn, true_args, false_args)
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 235, in _run_py_ifelse
      return true_fn(*true_args) if pred else false_fn(*false_args)
    File "/paddle/code/gry/mainbody/PaddleDetection/ppdet/modeling/architectures/picodet.py", line 91, in get_pred
      bbox_pred, bbox_num = self._forward()
    File "/paddle/temp/tmp99s3_qui.py", line 38, in _forward
      self), (__return_value_1,))
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 210, in convert_ifelse
      return _run_py_ifelse(pred, true_fn, false_fn, true_args, false_args)
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 235, in _run_py_ifelse
      return true_fn(*true_args) if pred else false_fn(*false_args)
    File "/paddle/code/gry/mainbody/PaddleDetection/ppdet/modeling/architectures/picodet.py", line 73, in _forward
      scale_factor)
    File "/paddle/code/gry/mainbody/PaddleDetection/ppdet/modeling/heads/gfl_head.py", line 478, in post_process
      scale_factor, self.cell_offset)
    File "/paddle/temp/tmpjnprum8k.py", line 34, in decode
      batch_scores, scale_factor, im_shape, bbox_preds, img_id])
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 42, in convert_while_loop
      loop_vars = _run_paddle_while_loop(cond, body, loop_vars)
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 52, in _run_paddle_while_loop
      loop_vars = control_flow.while_loop(cond, body, loop_vars)
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/layers/control_flow.py", line 1203, in while_loop
      output_vars = body(*new_loop_vars)
    File "/paddle/code/gry/mainbody/PaddleDetection/ppdet/modeling/heads/gfl_head.py", line 467, in decode
      cell_offset=cell_offset)
    File "/paddle/temp/tmpmxo1kufy.py", line 57, in get_bboxes_single
      mlvl_bboxes,))
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 210, in convert_ifelse
      return _run_py_ifelse(pred, true_fn, false_fn, true_args, false_args)
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 235, in _run_py_ifelse
      return true_fn(*true_args) if pred else false_fn(*false_args)
    File "/paddle/code/gry/mainbody/PaddleDetection/ppdet/modeling/heads/gfl_head.py", line 449, in get_bboxes_single
      mlvl_bboxes /= im_scale
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/layers/math_op_patch.py", line 328, in __impl__
      attrs={'axis': axis})
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/framework.py", line 2942, in append_op
      attrs=kwargs.get("attrs", None))
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/framework.py", line 2014, in __init__
      for frame in traceback.extract_stack():
    
    InvalidArgumentError: Broadcast dimension mismatch. Operands could not be broadcast together with the shape of X = [2500, 4] and the shape of Y = [2]. Received [4] in X is not equal to [2] in Y at i:1.
      [Hint: Expected x_dims_array[i] == y_dims_array[i] || x_dims_array[i] <= 1 || y_dims_array[i] <= 1 == true, but received x_dims_array[i] == y_dims_array[i] || x_dims_array[i] <= 1 || y_dims_array[i] <= 1:0 != true:1.] (at /paddle/paddle/fluid/operators/elementwise/elementwise_op_function.h:240)
      [operator < elementwise_div > error]
    

    ERROR 2021-12-17 06:39:09,195 [operator.py:1024] (log_id=0) det failed to predict.
    ERROR 2021-12-17 06:39:09,202 [dag.py:410] (data_id=0 log_id=0) Failed to predict: (log_id=0) det failed to predict.

    opened by ckseweccsdssaawqwgjgDwww 13
  • Error when using the image recognition model on GPU

    --------------------------------------
    C++ Traceback (most recent call last):
    --------------------------------------
    0   paddle::framework::SignalHandle(char const*, int)
    1   paddle::platform::GetCurrentTraceBackString[abi:cxx11]()
    
    ----------------------
    Error Message Summary:
    ----------------------
    FatalError: `Segmentation fault` is detected by the operating system.
      [TimeInfo: *** Aborted at 1636099131 (unix time) try "date -d @1636099131" if you are using GNU date ***]
      [SignalInfo: *** SIGSEGV (@0x0) received by PID 3521 (TID 0x7fe7f9808740) from PID 0 ***]
    
    Segmentation fault (core dumped)
    

    It works fine on CPU, but the GPU reports the error above. The Paddle version is 2.1.3-gpu.

    opened by DrewdropLife 13
  • Issue 2: the same code trains MultiLabelLoss correctly on AI Studio, but fails when trained locally

    The same code (training with MultiLabelLoss works on AI Studio, but goes wrong when I train locally):

    Example: https://aistudio.baidu.com/aistudio/projectdetail/4247343

    Excerpts from the training logs:


    Training MultiLabelLoss on AI Studio works fine, as shown here:

    		===========================================================
    		==        PaddleClas is powered by PaddlePaddle !        ==
    		===========================================================
    		==                                                       ==
    		==   For more info please go to the following website.   ==
    		==                                                       ==
    		==       https://github.com/PaddlePaddle/PaddleClas      ==
    		===========================================================
    		
    		[2022/07/16 17:07:52] ppcls INFO: Arch : 
    		[2022/07/16 17:07:52] ppcls INFO:     class_num : 33
    		[2022/07/16 17:07:52] ppcls INFO:     name : MobileNetV1
    		[2022/07/16 17:07:52] ppcls INFO:     pretrained : True
    		[2022/07/16 17:07:52] ppcls INFO: DataLoader : 
    		[2022/07/16 17:07:52] ppcls INFO:     Eval : 
    		[2022/07/16 17:07:52] ppcls INFO:         dataset : 
    		[2022/07/16 17:07:52] ppcls INFO:             cls_label_path : ./dataset/NUS-WIDE-SCENE/NUS-SCENE-dataset/multilabel_test_list.txt
    		[2022/07/16 17:07:52] ppcls INFO:             image_root : ./dataset/NUS-WIDE-SCENE/NUS-SCENE-dataset/images/
    		[2022/07/16 17:07:52] ppcls INFO:             name : MultiLabelDataset
    		[2022/07/16 17:07:52] ppcls INFO:             transform_ops : 
    		[2022/07/16 17:07:52] ppcls INFO:                 DecodeImage : 
    		[2022/07/16 17:07:52] ppcls INFO:                     channel_first : False
    		[2022/07/16 17:07:52] ppcls INFO:                     to_rgb : True
    		[2022/07/16 17:07:52] ppcls INFO:                 ResizeImage : 
    		[2022/07/16 17:07:52] ppcls INFO:                     resize_short : 256
    		[2022/07/16 17:07:52] ppcls INFO:                 CropImage : 
    		[2022/07/16 17:07:52] ppcls INFO:                     size : 224
    		[2022/07/16 17:07:52] ppcls INFO:                 NormalizeImage : 
    		[2022/07/16 17:07:52] ppcls INFO:                     mean : [0.485, 0.456, 0.406]
    		[2022/07/16 17:07:52] ppcls INFO:                     order : 
    		[2022/07/16 17:07:52] ppcls INFO:                     scale : 1.0/255.0
    		[2022/07/16 17:07:52] ppcls INFO:                     std : [0.229, 0.224, 0.225]
    		[2022/07/16 17:07:52] ppcls INFO:         loader : 
    		[2022/07/16 17:07:52] ppcls INFO:             num_workers : 4
    		[2022/07/16 17:07:52] ppcls INFO:             use_shared_memory : True
    		[2022/07/16 17:07:52] ppcls INFO:         sampler : 
    		[2022/07/16 17:07:52] ppcls INFO:             batch_size : 256
    		[2022/07/16 17:07:52] ppcls INFO:             drop_last : False
    		[2022/07/16 17:07:52] ppcls INFO:             name : DistributedBatchSampler
    		[2022/07/16 17:07:52] ppcls INFO:             shuffle : False
    		[2022/07/16 17:07:52] ppcls INFO:     Train : 
    		[2022/07/16 17:07:52] ppcls INFO:         dataset : 
    		[2022/07/16 17:07:52] ppcls INFO:             cls_label_path : ./dataset/NUS-WIDE-SCENE/NUS-SCENE-dataset/multilabel_train_list.txt
    		[2022/07/16 17:07:52] ppcls INFO:             image_root : ./dataset/NUS-WIDE-SCENE/NUS-SCENE-dataset/images/
    		[2022/07/16 17:07:52] ppcls INFO:             name : MultiLabelDataset
    		[2022/07/16 17:07:52] ppcls INFO:             transform_ops : 
    		[2022/07/16 17:07:52] ppcls INFO:                 DecodeImage : 
    		[2022/07/16 17:07:52] ppcls INFO:                     channel_first : False
    		[2022/07/16 17:07:52] ppcls INFO:                     to_rgb : True
    		[2022/07/16 17:07:52] ppcls INFO:                 RandCropImage : 
    		[2022/07/16 17:07:52] ppcls INFO:                     size : 224
    		[2022/07/16 17:07:52] ppcls INFO:                 RandFlipImage : 
    		[2022/07/16 17:07:52] ppcls INFO:                     flip_code : 1
    		[2022/07/16 17:07:52] ppcls INFO:                 NormalizeImage : 
    		[2022/07/16 17:07:52] ppcls INFO:                     mean : [0.485, 0.456, 0.406]
    		[2022/07/16 17:07:52] ppcls INFO:                     order : 
    		[2022/07/16 17:07:52] ppcls INFO:                     scale : 1.0/255.0
    		[2022/07/16 17:07:52] ppcls INFO:                     std : [0.229, 0.224, 0.225]
    		[2022/07/16 17:07:52] ppcls INFO:         loader : 
    		[2022/07/16 17:07:52] ppcls INFO:             num_workers : 4
    		[2022/07/16 17:07:52] ppcls INFO:             use_shared_memory : True
    		[2022/07/16 17:07:52] ppcls INFO:         sampler : 
    		[2022/07/16 17:07:52] ppcls INFO:             batch_size : 64
    		[2022/07/16 17:07:52] ppcls INFO:             drop_last : False
    		[2022/07/16 17:07:52] ppcls INFO:             name : DistributedBatchSampler
    		[2022/07/16 17:07:52] ppcls INFO:             shuffle : True
    		[2022/07/16 17:07:52] ppcls INFO: Global : 
    		[2022/07/16 17:07:52] ppcls INFO:     checkpoints : None
    		[2022/07/16 17:07:52] ppcls INFO:     device : gpu
    		[2022/07/16 17:07:52] ppcls INFO:     epochs : 10
    		[2022/07/16 17:07:52] ppcls INFO:     eval_during_train : True
    		[2022/07/16 17:07:52] ppcls INFO:     eval_interval : 1
    		[2022/07/16 17:07:52] ppcls INFO:     image_shape : [3, 224, 224]
    		[2022/07/16 17:07:52] ppcls INFO:     output_dir : ./output/
    		[2022/07/16 17:07:52] ppcls INFO:     pretrained_model : None
    		[2022/07/16 17:07:52] ppcls INFO:     print_batch_step : 10
    		[2022/07/16 17:07:52] ppcls INFO:     save_inference_dir : ./inference
    		[2022/07/16 17:07:52] ppcls INFO:     save_interval : 1
    		[2022/07/16 17:07:52] ppcls INFO:     use_multilabel : True
    		[2022/07/16 17:07:52] ppcls INFO:     use_visualdl : False
    		[2022/07/16 17:07:52] ppcls INFO: Infer : 
    		[2022/07/16 17:07:52] ppcls INFO:     PostProcess : 
    		[2022/07/16 17:07:52] ppcls INFO:         class_id_map_file : None
    		[2022/07/16 17:07:52] ppcls INFO:         name : MultiLabelTopk
    		[2022/07/16 17:07:52] ppcls INFO:         topk : 5
    		[2022/07/16 17:07:52] ppcls INFO:     batch_size : 10
    		[2022/07/16 17:07:52] ppcls INFO:     infer_imgs : ./deploy/images/0517_2715693311.jpg
    		[2022/07/16 17:07:52] ppcls INFO:     transforms : 
    		[2022/07/16 17:07:52] ppcls INFO:         DecodeImage : 
    		[2022/07/16 17:07:52] ppcls INFO:             channel_first : False
    		[2022/07/16 17:07:52] ppcls INFO:             to_rgb : True
    		[2022/07/16 17:07:52] ppcls INFO:         ResizeImage : 
    		[2022/07/16 17:07:52] ppcls INFO:             resize_short : 256
    		[2022/07/16 17:07:52] ppcls INFO:         CropImage : 
    		[2022/07/16 17:07:52] ppcls INFO:             size : 224
    		[2022/07/16 17:07:52] ppcls INFO:         NormalizeImage : 
    		[2022/07/16 17:07:52] ppcls INFO:             mean : [0.485, 0.456, 0.406]
    		[2022/07/16 17:07:52] ppcls INFO:             order : 
    		[2022/07/16 17:07:52] ppcls INFO:             scale : 1.0/255.0
    		[2022/07/16 17:07:52] ppcls INFO:             std : [0.229, 0.224, 0.225]
    		[2022/07/16 17:07:52] ppcls INFO:         ToCHWImage : None
    		[2022/07/16 17:07:52] ppcls INFO: Loss : 
    		[2022/07/16 17:07:52] ppcls INFO:     Eval : 
    		[2022/07/16 17:07:52] ppcls INFO:         MultiLabelLoss : 
    		[2022/07/16 17:07:52] ppcls INFO:             weight : 1.0
    		[2022/07/16 17:07:52] ppcls INFO:     Train : 
    		[2022/07/16 17:07:52] ppcls INFO:         MultiLabelLoss : 
    		[2022/07/16 17:07:52] ppcls INFO:             weight : 1.0
    		[2022/07/16 17:07:52] ppcls INFO: Metric : 
    		[2022/07/16 17:07:52] ppcls INFO:     Eval : 
    		[2022/07/16 17:07:52] ppcls INFO:         HammingDistance : None
    		[2022/07/16 17:07:52] ppcls INFO:         AccuracyScore : None
    		[2022/07/16 17:07:52] ppcls INFO:     Train : 
    		[2022/07/16 17:07:52] ppcls INFO:         HammingDistance : None
    		[2022/07/16 17:07:52] ppcls INFO:         AccuracyScore : None
    		[2022/07/16 17:07:52] ppcls INFO: Optimizer : 
    		[2022/07/16 17:07:52] ppcls INFO:     lr : 
    		[2022/07/16 17:07:52] ppcls INFO:         learning_rate : 0.1
    		[2022/07/16 17:07:52] ppcls INFO:         name : Cosine
    		[2022/07/16 17:07:52] ppcls INFO:     momentum : 0.9
    		[2022/07/16 17:07:52] ppcls INFO:     name : Momentum
    		[2022/07/16 17:07:52] ppcls INFO:     regularizer : 
    		[2022/07/16 17:07:52] ppcls INFO:         coeff : 4e-05
    		[2022/07/16 17:07:52] ppcls INFO:         name : L2
    		[2022/07/16 17:07:52] ppcls INFO: profiler_options : None
    		[2022/07/16 17:07:52] ppcls INFO: train with paddle 2.3.0 and device Place(gpu:0)
    		[2022/07/16 17:07:57] ppcls INFO: unique_endpoints {''}
    		[2022/07/16 17:07:57] ppcls INFO: Downloading MobileNetV1_pretrained.pdparams from https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/MobileNetV1_pretrained.pdparams
    		[2022/07/16 17:07:59] ppcls WARNING: The training strategy provided by PaddleClas is based on 4 gpus. But the number of gpu is 1 in current training. Please modify the stategy (learning rate, batch size and so on) if use this config to train.
    		[2022/07/16 17:08:01] ppcls INFO: [Train][Epoch 1/10][Iter: 0/273]lr(CosineAnnealingDecay): 0.09999993, HammingDistance: 0.49716, AccuracyScore: 0.50284, MultiLabelLoss: 0.85628, loss: 0.85628, batch_cost: 1.44437s, reader_cost: 1.23644, ips: 44.31011 samples/s, eta: 1:05:43
    		[2022/07/16 17:08:02] ppcls INFO: [Train][Epoch 1/10][Iter: 10/273]lr(CosineAnnealingDecay): 0.09999530, HammingDistance: 0.15526, AccuracyScore: 0.84474, MultiLabelLoss: 0.37622, loss: 0.37622, batch_cost: 0.10000s, reader_cost: 0.00025, ips: 640.00290 samples/s, eta: 0:04:31
    		[2022/07/16 17:08:03] ppcls INFO: [Train][Epoch 1/10][Iter: 20/273]lr(CosineAnnealingDecay): 0.09998404, HammingDistance: 0.11718, AccuracyScore: 0.88282, MultiLabelLoss: 0.32476, loss: 0.32476, batch_cost: 0.11404s, reader_cost: 0.00106, ips: 561.19280 samples/s, eta: 0:05:09
    		[2022/07/16 17:08:04] ppcls INFO: [Train][Epoch 1/10][Iter: 30/273]lr(CosineAnnealingDecay): 0.09996617, HammingDistance: 0.10278, AccuracyScore: 0.89722, MultiLabelLoss: 0.29221, loss: 0.29221, batch_cost: 0.12300s, reader_cost: 0.00443, ips: 520.30787 samples/s, eta: 0:05:32
    		[2022/07/16 17:08:05] ppcls INFO: [Train][Epoch 1/10][Iter: 40/273]lr(CosineAnnealingDecay): 0.09994168, HammingDistance: 0.09368, AccuracyScore: 0.90632, MultiLabelLoss: 0.27051, loss: 0.27051, batch_cost: 0.12503s, reader_cost: 0.00764, ips: 511.86440 samples/s, eta: 0:05:36
    		[2022/07/16 17:08:07] ppcls INFO: [Train][Epoch 1/10][Iter: 50/273]lr(CosineAnnealingDecay): 0.09991057, HammingDistance: 0.08857, AccuracyScore: 0.91143, MultiLabelLoss: 0.25566, loss: 0.25566, batch_cost: 0.12603s, reader_cost: 0.00750, ips: 507.79879 samples/s, eta: 0:05:37
    		[2022/07/16 17:08:08] ppcls INFO: [Train][Epoch 1/10][Iter: 60/273]lr(CosineAnnealingDecay): 0.09987286, HammingDistance: 0.08468, AccuracyScore: 0.91532, MultiLabelLoss: 0.24471, loss: 0.24471, batch_cost: 0.13030s, reader_cost: 0.00920, ips: 491.16795 samples/s, eta: 0:05:47
    		[2022/07/16 17:08:10] ppcls INFO: [Train][Epoch 1/10][Iter: 70/273]lr(CosineAnnealingDecay): 0.09982854, HammingDistance: 0.08121, AccuracyScore: 0.91879, MultiLabelLoss: 0.23447, loss: 0.23447, batch_cost: 0.13031s, reader_cost: 0.00969, ips: 491.11827 samples/s, eta: 0:05:46
    		[2022/07/16 17:08:12] ppcls INFO: [Train][Epoch 1/10][Iter: 80/273]lr(CosineAnnealingDecay): 0.09977762, HammingDistance: 0.07914, AccuracyScore: 0.92086, MultiLabelLoss: 0.22757, loss: 0.22757, batch_cost: 0.14074s, reader_cost: 0.01000, ips: 454.72867 samples/s, eta: 0:06:12
    		[2022/07/16 17:08:14] ppcls INFO: [Train][Epoch 1/10][Iter: 90/273]lr(CosineAnnealingDecay): 0.09972011, HammingDistance: 0.07785, AccuracyScore: 0.92215, MultiLabelLoss: 0.22251, loss: 0.22251, batch_cost: 0.14647s, reader_cost: 0.00899, ips: 436.95943 samples/s, eta: 0:06:26
    		[2022/07/16 17:08:15] ppcls INFO: [Train][Epoch 1/10][Iter: 100/273]lr(CosineAnnealingDecay): 0.09965602, HammingDistance: 0.07627, AccuracyScore: 0.92373, MultiLabelLoss: 0.21681, loss: 0.21681, batch_cost: 0.14683s, reader_cost: 0.01051, ips: 435.88615 samples/s, eta: 0:06:26
    		[2022/07/16 17:08:16] ppcls INFO: [Train][Epoch 1/10][Iter: 110/273]lr(CosineAnnealingDecay): 0.09958535, HammingDistance: 0.07491, AccuracyScore: 0.92509, MultiLabelLoss: 0.21235, loss: 0.21235, batch_cost: 0.14429s, reader_cost: 0.01024, ips: 443.54869 samples/s, eta: 0:06:18
    		[2022/07/16 17:08:18] ppcls INFO: [Train][Epoch 1/10][Iter: 120/273]lr(CosineAnnealingDecay): 0.09950812, HammingDistance: 0.07390, AccuracyScore: 0.92610, MultiLabelLoss: 0.20877, loss: 0.20877, batch_cost: 0.14396s, reader_cost: 0.01148, ips: 444.55331 samples/s, eta: 0:06:15
    		[2022/07/16 17:08:19] ppcls INFO: [Train][Epoch 1/10][Iter: 130/273]lr(CosineAnnealingDecay): 0.09942433, HammingDistance: 0.07272, AccuracyScore: 0.92728, MultiLabelLoss: 0.20525, loss: 0.20525, batch_cost: 0.14281s, reader_cost: 0.01087, ips: 448.14375 samples/s, eta: 0:06:11
    		[2022/07/16 17:08:20] ppcls INFO: [Train][Epoch 1/10][Iter: 140/273]lr(CosineAnnealingDecay): 0.09933399, HammingDistance: 0.07197, AccuracyScore: 0.92803, MultiLabelLoss: 0.20234, loss: 0.20234, batch_cost: 0.14260s, reader_cost: 0.01128, ips: 448.80568 samples/s, eta: 0:06:09
    		[2022/07/16 17:08:22] ppcls INFO: [Train][Epoch 1/10][Iter: 150/273]lr(CosineAnnealingDecay): 0.09923712, HammingDistance: 0.07122, AccuracyScore: 0.92878, MultiLabelLoss: 0.19955, loss: 0.19955, batch_cost: 0.14106s, reader_cost: 0.01144, ips: 453.69443 samples/s, eta: 0:06:03
    		[2022/07/16 17:08:23] ppcls INFO: [Train][Epoch 1/10][Iter: 160/273]lr(CosineAnnealingDecay): 0.09913373, HammingDistance: 0.07050, AccuracyScore: 0.92950, MultiLabelLoss: 0.19703, loss: 0.19703, batch_cost: 0.14162s, reader_cost: 0.01223, ips: 451.90739 samples/s, eta: 0:06:03
    		[2022/07/16 17:08:24] ppcls INFO: [Train][Epoch 1/10][Iter: 170/273]lr(CosineAnnealingDecay): 0.09902383, HammingDistance: 0.06968, AccuracyScore: 0.93032, MultiLabelLoss: 0.19439, loss: 0.19439, batch_cost: 0.14092s, reader_cost: 0.01232, ips: 454.16743 samples/s, eta: 0:06:00
    		[2022/07/16 17:08:26] ppcls INFO: [Train][Epoch 1/10][Iter: 180/273]lr(CosineAnnealingDecay): 0.09890745, HammingDistance: 0.06891, AccuracyScore: 0.93109, MultiLabelLoss: 0.19201, loss: 0.19201, batch_cost: 0.14031s, reader_cost: 0.01235, ips: 456.12892 samples/s, eta: 0:05:57
    		[2022/07/16 17:08:27] ppcls INFO: [Train][Epoch 1/10][Iter: 190/273]lr(CosineAnnealingDecay): 0.09878458, HammingDistance: 0.06841, AccuracyScore: 0.93159, MultiLabelLoss: 0.19022, loss: 0.19022, batch_cost: 0.14028s, reader_cost: 0.01253, ips: 456.23303 samples/s, eta: 0:05:56
    		[2022/07/16 17:08:28] ppcls INFO: [Train][Epoch 1/10][Iter: 200/273]lr(CosineAnnealingDecay): 0.09865526, HammingDistance: 0.06789, AccuracyScore: 0.93211, MultiLabelLoss: 0.18832, loss: 0.18832, batch_cost: 0.13975s, reader_cost: 0.01238, ips: 457.94703 samples/s, eta: 0:05:53
    		[2022/07/16 17:08:30] ppcls INFO: [Train][Epoch 1/10][Iter: 210/273]lr(CosineAnnealingDecay): 0.09851949, HammingDistance: 0.06747, AccuracyScore: 0.93253, MultiLabelLoss: 0.18667, loss: 0.18667, batch_cost: 0.13930s, reader_cost: 0.01244, ips: 459.45184 samples/s, eta: 0:05:51
    		[2022/07/16 17:08:31] ppcls INFO: [Train][Epoch 1/10][Iter: 220/273]lr(CosineAnnealingDecay): 0.09837730, HammingDistance: 0.06700, AccuracyScore: 0.93300, MultiLabelLoss: 0.18511, loss: 0.18511, batch_cost: 0.13931s, reader_cost: 0.01251, ips: 459.40462 samples/s, eta: 0:05:49
    		[2022/07/16 17:08:32] ppcls INFO: [Train][Epoch 1/10][Iter: 230/273]lr(CosineAnnealingDecay): 0.09822870, HammingDistance: 0.06672, AccuracyScore: 0.93328, MultiLabelLoss: 0.18408, loss: 0.18408, batch_cost: 0.13934s, reader_cost: 0.01277, ips: 459.31651 samples/s, eta: 0:05:48
    		[2022/07/16 17:08:34] ppcls INFO: [Train][Epoch 1/10][Iter: 240/273]lr(CosineAnnealingDecay): 0.09807371, HammingDistance: 0.06645, AccuracyScore: 0.93355, MultiLabelLoss: 0.18297, loss: 0.18297, batch_cost: 0.13894s, reader_cost: 0.01266, ips: 460.61978 samples/s, eta: 0:05:45
    		[2022/07/16 17:08:35] ppcls INFO: [Train][Epoch 1/10][Iter: 250/273]lr(CosineAnnealingDecay): 0.09791236, HammingDistance: 0.06608, AccuracyScore: 0.93392, MultiLabelLoss: 0.18163, loss: 0.18163, batch_cost: 0.13939s, reader_cost: 0.01296, ips: 459.14592 samples/s, eta: 0:05:45
    		[2022/07/16 17:08:36] ppcls INFO: [Train][Epoch 1/10][Iter: 260/273]lr(CosineAnnealingDecay): 0.09774466, HammingDistance: 0.06579, AccuracyScore: 0.93421, MultiLabelLoss: 0.18057, loss: 0.18057, batch_cost: 0.13863s, reader_cost: 0.01254, ips: 461.66447 samples/s, eta: 0:05:42
    		[2022/07/16 17:08:38] ppcls INFO: [Train][Epoch 1/10][Iter: 270/273]lr(CosineAnnealingDecay): 0.09757064, HammingDistance: 0.06547, AccuracyScore: 0.93453, MultiLabelLoss: 0.17950, loss: 0.17950, batch_cost: 0.13838s, reader_cost: 0.01274, ips: 462.49553 samples/s, eta: 0:05:40
    		[2022/07/16 17:08:38] ppcls INFO: [Train][Epoch 1/10][Avg]HammingDistance: 0.06545, AccuracyScore: 0.93455, MultiLabelLoss: 0.17936, loss: 0.17936
    		[2022/07/16 17:08:42] ppcls INFO: [Eval][Epoch 1][Iter: 0/69]MultiLabelLoss: 0.11665, loss: 0.11665, HammingDistance: 0.04013, AccuracyScore: 0.95987, batch_cost: 3.60435s, reader_cost: 3.00752, ips: 71.02521 images/sec
    

    Training MultiLabelLoss locally goes wrong, as shown here:

    		===========================================================
    		==        PaddleClas is powered by PaddlePaddle !        ==
    		===========================================================
    		==                                                       ==
    		==   For more info please go to the following website.   ==
    		==                                                       ==
    		==       https://github.com/PaddlePaddle/PaddleClas      ==
    		===========================================================
    		
    		[2022/07/18 09:40:48] ppcls INFO: Arch : 
    		[2022/07/18 09:40:48] ppcls INFO:     class_num : 33
    		[2022/07/18 09:40:48] ppcls INFO:     name : MobileNetV1
    		[2022/07/18 09:40:48] ppcls INFO:     pretrained : True
    		[2022/07/18 09:40:48] ppcls INFO: DataLoader : 
    		[2022/07/18 09:40:48] ppcls INFO:     Eval : 
    		[2022/07/18 09:40:48] ppcls INFO:         dataset : 
    		[2022/07/18 09:40:48] ppcls INFO:             cls_label_path : ./dataset/NUS-WIDE-SCENE/NUS-SCENE-dataset/multilabel_test_list.txt
    		[2022/07/18 09:40:48] ppcls INFO:             image_root : ./dataset/NUS-WIDE-SCENE/NUS-SCENE-dataset/images/
    		[2022/07/18 09:40:48] ppcls INFO:             name : MultiLabelDataset
    		[2022/07/18 09:40:48] ppcls INFO:             transform_ops : 
    		[2022/07/18 09:40:48] ppcls INFO:                 DecodeImage : 
    		[2022/07/18 09:40:48] ppcls INFO:                     channel_first : False
    		[2022/07/18 09:40:48] ppcls INFO:                     to_rgb : True
    		[2022/07/18 09:40:48] ppcls INFO:                 ResizeImage : 
    		[2022/07/18 09:40:48] ppcls INFO:                     resize_short : 256
    		[2022/07/18 09:40:48] ppcls INFO:                 CropImage : 
    		[2022/07/18 09:40:48] ppcls INFO:                     size : 224
    		[2022/07/18 09:40:48] ppcls INFO:                 NormalizeImage : 
    		[2022/07/18 09:40:48] ppcls INFO:                     mean : [0.485, 0.456, 0.406]
    		[2022/07/18 09:40:48] ppcls INFO:                     order : 
    		[2022/07/18 09:40:48] ppcls INFO:                     scale : 1.0/255.0
    		[2022/07/18 09:40:48] ppcls INFO:                     std : [0.229, 0.224, 0.225]
    		[2022/07/18 09:40:48] ppcls INFO:         loader : 
    		[2022/07/18 09:40:48] ppcls INFO:             num_workers : 4
    		[2022/07/18 09:40:48] ppcls INFO:             use_shared_memory : True
    		[2022/07/18 09:40:48] ppcls INFO:         sampler : 
    		[2022/07/18 09:40:48] ppcls INFO:             batch_size : 256
    		[2022/07/18 09:40:48] ppcls INFO:             drop_last : False
    		[2022/07/18 09:40:48] ppcls INFO:             name : DistributedBatchSampler
    		[2022/07/18 09:40:48] ppcls INFO:             shuffle : False
    		[2022/07/18 09:40:48] ppcls INFO:     Train : 
    		[2022/07/18 09:40:48] ppcls INFO:         dataset : 
    		[2022/07/18 09:40:48] ppcls INFO:             cls_label_path : ./dataset/NUS-WIDE-SCENE/NUS-SCENE-dataset/multilabel_train_list.txt
    		[2022/07/18 09:40:48] ppcls INFO:             image_root : ./dataset/NUS-WIDE-SCENE/NUS-SCENE-dataset/images/
    		[2022/07/18 09:40:48] ppcls INFO:             name : MultiLabelDataset
    		[2022/07/18 09:40:48] ppcls INFO:             transform_ops : 
    		[2022/07/18 09:40:48] ppcls INFO:                 DecodeImage : 
    		[2022/07/18 09:40:48] ppcls INFO:                     channel_first : False
    		[2022/07/18 09:40:48] ppcls INFO:                     to_rgb : True
    		[2022/07/18 09:40:48] ppcls INFO:                 RandCropImage : 
    		[2022/07/18 09:40:48] ppcls INFO:                     size : 224
    		[2022/07/18 09:40:48] ppcls INFO:                 RandFlipImage : 
    		[2022/07/18 09:40:48] ppcls INFO:                     flip_code : 1
    		[2022/07/18 09:40:48] ppcls INFO:                 NormalizeImage : 
    		[2022/07/18 09:40:48] ppcls INFO:                     mean : [0.485, 0.456, 0.406]
    		[2022/07/18 09:40:48] ppcls INFO:                     order : 
    		[2022/07/18 09:40:48] ppcls INFO:                     scale : 1.0/255.0
    		[2022/07/18 09:40:48] ppcls INFO:                     std : [0.229, 0.224, 0.225]
    		[2022/07/18 09:40:48] ppcls INFO:         loader : 
    		[2022/07/18 09:40:48] ppcls INFO:             num_workers : 4
    		[2022/07/18 09:40:48] ppcls INFO:             use_shared_memory : True
    		[2022/07/18 09:40:48] ppcls INFO:         sampler : 
    		[2022/07/18 09:40:48] ppcls INFO:             batch_size : 64
    		[2022/07/18 09:40:48] ppcls INFO:             drop_last : False
    		[2022/07/18 09:40:48] ppcls INFO:             name : DistributedBatchSampler
    		[2022/07/18 09:40:48] ppcls INFO:             shuffle : True
    		[2022/07/18 09:40:48] ppcls INFO: Global : 
    		[2022/07/18 09:40:48] ppcls INFO:     checkpoints : None
    		[2022/07/18 09:40:48] ppcls INFO:     device : gpu
    		[2022/07/18 09:40:48] ppcls INFO:     epochs : 10
    		[2022/07/18 09:40:48] ppcls INFO:     eval_during_train : True
    		[2022/07/18 09:40:48] ppcls INFO:     eval_interval : 1
    		[2022/07/18 09:40:48] ppcls INFO:     image_shape : [3, 224, 224]
    		[2022/07/18 09:40:48] ppcls INFO:     output_dir : ./output/
    		[2022/07/18 09:40:48] ppcls INFO:     pretrained_model : None
    		[2022/07/18 09:40:48] ppcls INFO:     print_batch_step : 10
    		[2022/07/18 09:40:48] ppcls INFO:     save_inference_dir : ./inference
    		[2022/07/18 09:40:48] ppcls INFO:     save_interval : 1
    		[2022/07/18 09:40:48] ppcls INFO:     use_multilabel : True
    		[2022/07/18 09:40:48] ppcls INFO:     use_visualdl : False
    		[2022/07/18 09:40:48] ppcls INFO: Infer : 
    		[2022/07/18 09:40:48] ppcls INFO:     PostProcess : 
    		[2022/07/18 09:40:48] ppcls INFO:         class_id_map_file : None
    		[2022/07/18 09:40:48] ppcls INFO:         name : MultiLabelTopk
    		[2022/07/18 09:40:48] ppcls INFO:         topk : 5
    		[2022/07/18 09:40:48] ppcls INFO:     batch_size : 10
    		[2022/07/18 09:40:48] ppcls INFO:     infer_imgs : ./deploy/images/0517_2715693311.jpg
    		[2022/07/18 09:40:48] ppcls INFO:     transforms : 
    		[2022/07/18 09:40:48] ppcls INFO:         DecodeImage : 
    		[2022/07/18 09:40:48] ppcls INFO:             channel_first : False
    		[2022/07/18 09:40:48] ppcls INFO:             to_rgb : True
    		[2022/07/18 09:40:48] ppcls INFO:         ResizeImage : 
    		[2022/07/18 09:40:48] ppcls INFO:             resize_short : 256
    		[2022/07/18 09:40:48] ppcls INFO:         CropImage : 
    		[2022/07/18 09:40:48] ppcls INFO:             size : 224
    		[2022/07/18 09:40:48] ppcls INFO:         NormalizeImage : 
    		[2022/07/18 09:40:48] ppcls INFO:             mean : [0.485, 0.456, 0.406]
    		[2022/07/18 09:40:48] ppcls INFO:             order : 
    		[2022/07/18 09:40:48] ppcls INFO:             scale : 1.0/255.0
    		[2022/07/18 09:40:48] ppcls INFO:             std : [0.229, 0.224, 0.225]
    		[2022/07/18 09:40:48] ppcls INFO:         ToCHWImage : None
    		[2022/07/18 09:40:48] ppcls INFO: Loss : 
    		[2022/07/18 09:40:48] ppcls INFO:     Eval : 
    		[2022/07/18 09:40:48] ppcls INFO:         MultiLabelLoss : 
    		[2022/07/18 09:40:48] ppcls INFO:             weight : 1.0
    		[2022/07/18 09:40:48] ppcls INFO:     Train : 
    		[2022/07/18 09:40:48] ppcls INFO:         MultiLabelLoss : 
    		[2022/07/18 09:40:48] ppcls INFO:             weight : 1.0
    		[2022/07/18 09:40:48] ppcls INFO: Metric : 
    		[2022/07/18 09:40:48] ppcls INFO:     Eval : 
    		[2022/07/18 09:40:48] ppcls INFO:         HammingDistance : None
    		[2022/07/18 09:40:48] ppcls INFO:         AccuracyScore : None
    		[2022/07/18 09:40:48] ppcls INFO:     Train : 
    		[2022/07/18 09:40:48] ppcls INFO:         HammingDistance : None
    		[2022/07/18 09:40:48] ppcls INFO:         AccuracyScore : None
    		[2022/07/18 09:40:48] ppcls INFO: Optimizer : 
    		[2022/07/18 09:40:48] ppcls INFO:     lr : 
    		[2022/07/18 09:40:48] ppcls INFO:         learning_rate : 0.1
    		[2022/07/18 09:40:48] ppcls INFO:         name : Cosine
    		[2022/07/18 09:40:48] ppcls INFO:     momentum : 0.9
    		[2022/07/18 09:40:48] ppcls INFO:     name : Momentum
    		[2022/07/18 09:40:48] ppcls INFO:     regularizer : 
    		[2022/07/18 09:40:48] ppcls INFO:         coeff : 4e-05
    		[2022/07/18 09:40:48] ppcls INFO:         name : L2
    		[2022/07/18 09:40:48] ppcls INFO: profiler_options : None
    		[2022/07/18 09:40:48] ppcls INFO: train with paddle 2.3.0 and device Place(gpu:0)
    		[2022/07/18 09:40:55] ppcls INFO: unique_endpoints {''}
    		[2022/07/18 09:40:55] ppcls INFO: Found C:\Users\Administrator/.paddleclas/weights\MobileNetV1_pretrained.pdparams
    		[2022/07/18 09:40:55] ppcls WARNING: The training strategy provided by PaddleClas is based on 4 gpus. But the number of gpu is 1 in current training. Please modify the stategy (learning rate, batch size and so on) if use this config to train.
    		[2022/07/18 09:40:57] ppcls INFO: [Train][Epoch 1/10][Iter: 0/273]lr(CosineAnnealingDecay): 0.09999993, HammingDistance: 0.78883, AccuracyScore: 0.20360, MultiLabelLoss: 0.00000, loss: 0.00000, batch_cost: 1.35928s, reader_cost: 0.35935, ips: 47.08391 samples/s, eta: 1:01:50
    		[2022/07/18 09:41:01] ppcls INFO: [Train][Epoch 1/10][Iter: 10/273]lr(CosineAnnealingDecay): 0.09999530, HammingDistance: 0.36359, AccuracyScore: 0.63572, MultiLabelLoss: 15540.73622, loss: 15540.73622, batch_cost: 0.47913s, reader_cost: 0.13281, ips: 133.57495 samples/s, eta: 0:21:43
    		[2022/07/18 09:41:06] ppcls INFO: [Train][Epoch 1/10][Iter: 20/273]lr(CosineAnnealingDecay): 0.09998404, HammingDistance: 0.38591, AccuracyScore: 0.61373, MultiLabelLoss: 36724.26478, loss: 36724.26478, batch_cost: 0.49703s, reader_cost: 0.14355, ips: 128.76380 samples/s, eta: 0:22:26
    		[2022/07/18 09:41:11] ppcls INFO: [Train][Epoch 1/10][Iter: 30/273]lr(CosineAnnealingDecay): 0.09996617, HammingDistance: 0.37392, AccuracyScore: 0.62584, MultiLabelLoss: 24898.26703, loss: 24898.26703, batch_cost: 0.49876s, reader_cost: 0.14542, ips: 128.31781 samples/s, eta: 0:22:26
    		[2022/07/18 09:41:23] ppcls INFO: [Train][Epoch 1/10][Iter: 40/273]lr(CosineAnnealingDecay): 0.09994168, HammingDistance: 0.39181, AccuracyScore: 0.60800, MultiLabelLoss: -92611.72725, loss: -92611.72725, batch_cost: 0.46481s, reader_cost: 0.10980, ips: 137.69074 samples/s, eta: 0:20:50
    		[2022/07/18 09:41:28] ppcls INFO: [Train][Epoch 1/10][Iter: 50/273]lr(CosineAnnealingDecay): 0.09991057, HammingDistance: 0.38774, AccuracyScore: 0.61211, MultiLabelLoss: -50456.68341, loss: -50456.68341, batch_cost: 0.46702s, reader_cost: 0.11344, ips: 137.03984 samples/s, eta: 0:20:51
    		[2022/07/18 09:41:33] ppcls INFO: [Train][Epoch 1/10][Iter: 60/273]lr(CosineAnnealingDecay): 0.09987286, HammingDistance: 0.38567, AccuracyScore: 0.61421, MultiLabelLoss: -22486.64255, loss: -22486.64255, batch_cost: 0.46899s, reader_cost: 0.11690, ips: 136.46209 samples/s, eta: 0:20:52
    		[2022/07/18 09:41:37] ppcls INFO: [Train][Epoch 1/10][Iter: 70/273]lr(CosineAnnealingDecay): 0.09982854, HammingDistance: 0.38313, AccuracyScore: 0.61676, MultiLabelLoss: -19252.86648, loss: -19252.86648, batch_cost: 0.46990s, reader_cost: 0.11836, ips: 136.19938 samples/s, eta: 0:20:49
    		[2022/07/18 09:41:42] ppcls INFO: [Train][Epoch 1/10][Iter: 80/273]lr(CosineAnnealingDecay): 0.09977762, HammingDistance: 0.39454, AccuracyScore: 0.60536, MultiLabelLoss: -16924.69063, loss: -16924.69063, batch_cost: 0.47057s, reader_cost: 0.11924, ips: 136.00645 samples/s, eta: 0:20:46
    		[2022/07/18 09:41:47] ppcls INFO: [Train][Epoch 1/10][Iter: 90/273]lr(CosineAnnealingDecay): 0.09972011, HammingDistance: 0.39503, AccuracyScore: 0.60489, MultiLabelLoss: 349381514.77505, loss: 349381514.77505, batch_cost: 0.47108s, reader_cost: 0.12027, ips: 135.85876 samples/s, eta: 0:20:43
    		[2022/07/18 09:41:52] ppcls INFO: [Train][Epoch 1/10][Iter: 100/273]lr(CosineAnnealingDecay): 0.09965602, HammingDistance: 0.39561, AccuracyScore: 0.60431, MultiLabelLoss: 314791201.08847, loss: 314791201.08847, batch_cost: 0.47132s, reader_cost: 0.12060, ips: 135.78894 samples/s, eta: 0:20:39
    		[2022/07/18 09:41:56] ppcls INFO: [Train][Epoch 1/10][Iter: 110/273]lr(CosineAnnealingDecay): 0.09958535, HammingDistance: 0.39322, AccuracyScore: 0.60671, MultiLabelLoss: 286436854.72710, loss: 286436854.72710, batch_cost: 0.47196s, reader_cost: 0.12160, ips: 135.60517 samples/s, eta: 0:20:36
    		[2022/07/18 09:42:01] ppcls INFO: [Train][Epoch 1/10][Iter: 120/273]lr(CosineAnnealingDecay): 0.09950812, HammingDistance: 0.38938, AccuracyScore: 0.61056, MultiLabelLoss: 266475598.22907, loss: 266475598.22907, batch_cost: 0.47222s, reader_cost: 0.12176, ips: 135.53073 samples/s, eta: 0:20:32
    		[2022/07/18 09:42:06] ppcls INFO: [Train][Epoch 1/10][Iter: 130/273]lr(CosineAnnealingDecay): 0.09942433, HammingDistance: 0.39447, AccuracyScore: 0.60547, MultiLabelLoss: 246135450.44263, loss: 246135450.44263, batch_cost: 0.47256s, reader_cost: 0.12227, ips: 135.43263 samples/s, eta: 0:20:28
    		[2022/07/18 09:42:11] ppcls INFO: [Train][Epoch 1/10][Iter: 140/273]lr(CosineAnnealingDecay): 0.09933399, HammingDistance: 0.39562, AccuracyScore: 0.60433, MultiLabelLoss: 228682885.43340, loss: 228682885.43340, batch_cost: 0.47297s, reader_cost: 0.12293, ips: 135.31618 samples/s, eta: 0:20:24
    		[2022/07/18 09:42:15] ppcls INFO: [Train][Epoch 1/10][Iter: 150/273]lr(CosineAnnealingDecay): 0.09923712, HammingDistance: 0.39547, AccuracyScore: 0.60448, MultiLabelLoss: 213542288.18181, loss: 213542288.18181, batch_cost: 0.47321s, reader_cost: 0.12339, ips: 135.24643 samples/s, eta: 0:20:20
    		[2022/07/18 09:42:20] ppcls INFO: [Train][Epoch 1/10][Iter: 160/273]lr(CosineAnnealingDecay): 0.09913373, HammingDistance: 0.39550, AccuracyScore: 0.60446, MultiLabelLoss: 200279231.33699, loss: 200279231.33699, batch_cost: 0.47362s, reader_cost: 0.12389, ips: 135.12851 samples/s, eta: 0:20:17
    		[2022/07/18 09:42:25] ppcls INFO: [Train][Epoch 1/10][Iter: 170/273]lr(CosineAnnealingDecay): 0.09902383, HammingDistance: 0.39739, AccuracyScore: 0.60257, MultiLabelLoss: 400101751.08385, loss: 400101751.08385, batch_cost: 0.47380s, reader_cost: 0.12424, ips: 135.07861 samples/s, eta: 0:20:12
    		[2022/07/18 09:42:30] ppcls INFO: [Train][Epoch 1/10][Iter: 180/273]lr(CosineAnnealingDecay): 0.09890745, HammingDistance: 0.39598, AccuracyScore: 0.60398, MultiLabelLoss: 378067441.94226, loss: 378067441.94226, batch_cost: 0.47395s, reader_cost: 0.12437, ips: 135.03441 samples/s, eta: 0:20:08
    		[2022/07/18 09:42:35] ppcls INFO: [Train][Epoch 1/10][Iter: 190/273]lr(CosineAnnealingDecay): 0.09878458, HammingDistance: 0.39366, AccuracyScore: 0.60630, MultiLabelLoss: 8491173.37458, loss: 8491173.37458, batch_cost: 0.47426s, reader_cost: 0.12457, ips: 134.94716 samples/s, eta: 0:20:04
    		[2022/07/18 09:42:39] ppcls INFO: [Train][Epoch 1/10][Iter: 200/273]lr(CosineAnnealingDecay): 0.09865526, HammingDistance: 0.39221, AccuracyScore: 0.60775, MultiLabelLoss: 2851103.73324, loss: 2851103.73324, batch_cost: 0.47446s, reader_cost: 0.12475, ips: 134.89158 samples/s, eta: 0:20:00
    		[2022/07/18 09:42:44] ppcls INFO: [Train][Epoch 1/10][Iter: 210/273]lr(CosineAnnealingDecay): 0.09851949, HammingDistance: 0.39390, AccuracyScore: 0.60606, MultiLabelLoss: -2290493.99880, loss: -2290493.99880, batch_cost: 0.47456s, reader_cost: 0.12484, ips: 134.86298 samples/s, eta: 0:19:55
    		[2022/07/18 09:42:49] ppcls INFO: [Train][Epoch 1/10][Iter: 220/273]lr(CosineAnnealingDecay): 0.09837730, HammingDistance: 0.39434, AccuracyScore: 0.60562, MultiLabelLoss: -2186844.87674, loss: -2186844.87674, batch_cost: 0.47479s, reader_cost: 0.12499, ips: 134.79596 samples/s, eta: 0:19:51
    		[2022/07/18 09:42:54] ppcls INFO: [Train][Epoch 1/10][Iter: 230/273]lr(CosineAnnealingDecay): 0.09822870, HammingDistance: 0.39287, AccuracyScore: 0.60710, MultiLabelLoss: -2093130.91939, loss: -2093130.91939, batch_cost: 0.47494s, reader_cost: 0.12492, ips: 134.75454 samples/s, eta: 0:19:47
    		[2022/07/18 09:42:58] ppcls INFO: [Train][Epoch 1/10][Iter: 240/273]lr(CosineAnnealingDecay): 0.09807371, HammingDistance: 0.39554, AccuracyScore: 0.60443, MultiLabelLoss: -2003783.23691, loss: -2003783.23691, batch_cost: 0.47507s, reader_cost: 0.12506, ips: 134.71665 samples/s, eta: 0:19:42
    		[2022/07/18 09:43:03] ppcls INFO: [Train][Epoch 1/10][Iter: 250/273]lr(CosineAnnealingDecay): 0.09791236, HammingDistance: 0.39686, AccuracyScore: 0.60311, MultiLabelLoss: -1933839.57987, loss: -1933839.57987, batch_cost: 0.47519s, reader_cost: 0.12525, ips: 134.68186 samples/s, eta: 0:19:38
    		[2022/07/18 09:43:08] ppcls INFO: [Train][Epoch 1/10][Iter: 260/273]lr(CosineAnnealingDecay): 0.09774466, HammingDistance: 0.39622, AccuracyScore: 0.60375, MultiLabelLoss: -1859746.09497, loss: -1859746.09497, batch_cost: 0.47531s, reader_cost: 0.12542, ips: 134.64980 samples/s, eta: 0:19:34
    		[2022/07/18 09:43:13] ppcls INFO: [Train][Epoch 1/10][Iter: 270/273]lr(CosineAnnealingDecay): 0.09757064, HammingDistance: 0.39745, AccuracyScore: 0.60253, MultiLabelLoss: -1780679.59171, loss: -1780679.59171, batch_cost: 0.47541s, reader_cost: 0.12558, ips: 134.62017 samples/s, eta: 0:19:29
    		[2022/07/18 09:43:13] ppcls INFO: [Train][Epoch 1/10][Avg]HammingDistance: 0.39777, AccuracyScore: 0.60220, MultiLabelLoss: -1774132.37936, loss: -1774132.37936
    		[2022/07/18 09:43:15] ppcls INFO: [Eval][Epoch 1][Iter: 0/69]MultiLabelLoss: 0.00000, loss: 0.00000, HammingDistance: 0.51136, AccuracyScore: 0.48864, batch_cost: 1.85924s, reader_cost: 1.34365, ips: 137.69075 images/sec
    		[2022/07/18 09:43:29] ppcls INFO: [Eval][Epoch 1][Iter: 10/69]MultiLabelLoss: 0.00000, loss: 0.00000, HammingDistance: 0.23842, AccuracyScore: 0.76158, batch_cost: 1.34105s, reader_cost: 0.80984, ips: 190.89550 images/sec
    		[2022/07/18 09:43:42] ppcls INFO: [Eval][Epoch 1][Iter: 20/69]MultiLabelLoss: 0.00000, loss: 0.00000, HammingDistance: 0.18564, AccuracyScore: 0.81436, batch_cost: 1.34072s, reader_cost: 0.80951, ips: 190.94184 images/sec
    		[2022/07/18 09:43:56] ppcls INFO: [Eval][Epoch 1][Iter: 30/69]MultiLabelLoss: 0.00000, loss: 0.00000, HammingDistance: 0.16790, AccuracyScore: 0.83210, batch_cost: 1.34485s, reader_cost: 0.81545, ips: 190.35529 images/sec
    		[2022/07/18 09:44:09] ppcls INFO: [Eval][Epoch 1][Iter: 40/69]MultiLabelLoss: 0.00000, loss: 0.00000, HammingDistance: 0.16036, AccuracyScore: 0.83964, batch_cost: 1.34756s, reader_cost: 0.81722, ips: 189.97333 images/sec
    		[2022/07/18 09:44:23] ppcls INFO: [Eval][Epoch 1][Iter: 50/69]MultiLabelLoss: 0.00000, loss: 0.00000, HammingDistance: 0.15452, AccuracyScore: 0.84548, batch_cost: 1.34637s, reader_cost: 0.81584, ips: 190.14109 images/sec
    		[2022/07/18 09:44:36] ppcls INFO: [Eval][Epoch 1][Iter: 60/69]MultiLabelLoss: 0.00000, loss: 0.00000, HammingDistance: 0.14978, AccuracyScore: 0.85022, batch_cost: 1.34533s, reader_cost: 0.81467, ips: 190.28854 images/sec
    		[2022/07/18 09:44:46] ppcls INFO: [Eval][Epoch 1][Avg]MultiLabelLoss: 0.00000, loss: 0.00000, HammingDistance: 0.14743, AccuracyScore: 0.85257
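    		One plausible explanation for MultiLabelLoss values this large (and negative) is that the training labels are not multi-hot 0/1 vectors; BCE-style multi-label losses only stay in a sensible range for targets in {0, 1}, and un-normalised targets make the loss explode or change sign. A minimal, hypothetical illustration (it assumes nothing about the actual dataset used here):

    import paddle
    import paddle.nn.functional as F

    logits = paddle.to_tensor([[2.0, -1.0]])
    ok_labels = paddle.to_tensor([[1.0, 0.0]])     # valid multi-hot targets
    bad_labels = paddle.to_tensor([[255.0, 0.0]])  # e.g. targets that were never normalised to {0, 1}

    print(F.binary_cross_entropy_with_logits(logits, ok_labels))   # small positive loss
    print(F.binary_cross_entropy_with_logits(logits, bad_labels))  # huge magnitude, negative loss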
    


    opened by gg22mm 12
  • Quick-start flower recognition training reports an error

    Quick-start flower recognition training reports an error

    Welcome to PaddleClas, and thank you very much for your contribution! When opening an issue, please provide the following information so that we can locate the problem quickly and resolve it effectively:

    1. PaddleClas and PaddlePaddle versions: please provide the version numbers or branch information you are using, e.g. PaddleClas release/2.2 and PaddlePaddle 2.1.0
    2. Version numbers of other products involved: if you are using other products such as PaddleServing or PaddleInference together with PaddleClas, please provide their version numbers
    3. Training environment information: a. operating system: Windows 10; b. Python version: Python 3.7; c. CUDA/cuDNN versions, e.g. CUDA 10.1 / cuDNN 7.6.5
    4. Complete code (any changes compared with the code in the repo), detailed error messages, and related logs

    I just set up the environment and followed this document: https://github.com/PaddlePaddle/PaddleClas/blob/release/2.3/docs/zh_CN/quick_start/quick_start_classification_new_user.md

    I ran this command: python3 tools/train.py -c ./ppcls/configs/quick_start/ResNet50_vd.yaml -o Arch.pretrained=True

    Traceback (most recent call last):
      File "tools/train.py", line 32, in <module>
        engine.train()
      File "D:\PYproject\PaddleClas-release-2.3\ppcls\engine\engine.py", line 265, in train
        self.train_epoch_func(self, epoch_id, print_batch_step)
      File "D:\PYproject\PaddleClas-release-2.3\ppcls\engine\train\train.py", line 49, in train_epoch
        out = forward(engine, batch)
      File "D:\PYproject\PaddleClas-release-2.3\ppcls\engine\train\train.py", line 77, in forward
        return engine.model(batch[0])
      File "D:\Anaconda\envs\paddle\lib\site-packages\paddle\fluid\dygraph\layers.py", line 902, in __call__
        outputs = self.forward(*inputs, **kwargs)
      File "D:\PYproject\PaddleClas-release-2.3\ppcls\arch\backbone\legendary_models\resnet.py", line 350, in forward
        x = self.stem(x)
      File "D:\Anaconda\envs\paddle\lib\site-packages\paddle\fluid\dygraph\layers.py", line 902, in __call__
        outputs = self.forward(*inputs, **kwargs)
      File "D:\Anaconda\envs\paddle\lib\site-packages\paddle\fluid\dygraph\container.py", line 98, in forward
        input = layer(input)
      File "D:\Anaconda\envs\paddle\lib\site-packages\paddle\fluid\dygraph\layers.py", line 902, in __call__
        outputs = self.forward(*inputs, **kwargs)
      File "D:\PYproject\PaddleClas-release-2.3\ppcls\arch\backbone\legendary_models\resnet.py", line 134, in forward
        x = self.conv(x)
      File "D:\Anaconda\envs\paddle\lib\site-packages\paddle\fluid\dygraph\layers.py", line 902, in __call__
        outputs = self.forward(*inputs, **kwargs)
      File "D:\Anaconda\envs\paddle\lib\site-packages\paddle\nn\layer\conv.py", line 667, in forward
        use_cudnn=self._use_cudnn)
      File "D:\Anaconda\envs\paddle\lib\site-packages\paddle\nn\functional\conv.py", line 114, in _conv_nd
        pre_bias = getattr(core.ops, op_type)(x, weight, *attrs)
    OSError: (External) Cudnn error, CUDNN_STATUS_EXECUTION_FAILED (at C:/home/workspace/Paddle_release/paddle/fluid/operators/conv_cudnn_op.cu:348) [operator < conv2d > error]

    There is also a warning earlier about the worker-process setting:

    D:\Anaconda\envs\paddle\lib\site-packages\paddle\fluid\reader.py:358: UserWarning: DataLoader with multi-process mode is not supported on MacOs and Windows currently. Please use signle-process mode with num_workers = 0 instead "DataLoader with multi-process mode is not supported on MacOs and Windows currently."
    But num_workers in the code really is set to 0.
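
    For reference, CUDNN_STATUS_EXECUTION_FAILED in the very first convolution often points at the local CUDA/cuDNN/driver installation rather than at the training config. A minimal sanity check (plain PaddlePaddle utilities, nothing PaddleClas-specific) that verifies whether a convolution can run on the GPU at all:

    import paddle

    paddle.utils.run_check()           # builds and runs a small conv model on the visible device(s)
    print(paddle.device.get_device())  # e.g. "gpu:0" or "cpu"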

    opened by Z-mingyu 12
  • When I deploy a classification model with hub, this problem appears as soon as I call it

    When I deploy a classification model with hub, this problem appears as soon as I call it

    ./tools/infer/utils.py:227: DeprecationWarning: The binary mode of fromstring is deprecated, as it behaves surprisingly on unicode inputs. Use frombuffer instead data = np.fromstring(data, dtype).reshape(shape)


    C++ Traceback (most recent call last):

    0   paddle::framework::SignalHandle(char const*, int)
    1   paddle::platform::GetCurrentTraceBackString[abi:cxx11]


    Error Message Summary:

    FatalError: Segmentation fault is detected by the operating system. [TimeInfo: *** Aborted at 1621321952 (unix time) try "date -d @1621321952" if you are using GNU date ***] [SignalInfo: *** SIGSEGV (@0x0) received by PID 1516 (TID 0x7f41fab9f700) from PID 0 ***]

    Segmentation fault
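
    Incidentally, the DeprecationWarning at the top concerns NumPy's np.fromstring and is most likely unrelated to the segmentation fault; the replacement NumPy suggests is np.frombuffer. A tiny, self-contained sketch of the swap (illustrative only, not the PaddleClas code):

    import numpy as np

    shape, dtype = (2, 3), np.float32
    data = np.arange(6, dtype=dtype).tobytes()

    old = np.fromstring(data, dtype).reshape(shape)        # emits DeprecationWarning
    new = np.frombuffer(data, dtype=dtype).reshape(shape)  # drop-in replacement (read-only view)
    assert (old == new).all()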

    opened by intjun 11
  • Why does the infer output have no corresponding label name? I can't follow the [quick start, classification, one image infer] tutorial

    Why does the infer output have no corresponding label name? I can't follow the [quick start, classification, one image infer] tutorial

    Welcome to PaddleClas, and thank you very much for your contribution! When opening an issue, please provide the following information so that we can locate the problem quickly and resolve it effectively:

    1. PaddleClas and PaddlePaddle versions: please provide the version numbers or branch information you are using, e.g. PaddleClas release/2.2 and PaddlePaddle 2.1.0
    2. Version numbers of other products involved: if you are using other products such as PaddleServing or PaddleInference together with PaddleClas, please provide their version numbers
    3. Training environment information: a. operating system, e.g. Linux/Windows/MacOS; b. Python version, e.g. Python 3.6/7/8; c. CUDA/cuDNN versions, e.g. CUDA 10.2 / cuDNN 7.6.5
    4. Complete code (any changes compared with the code in the repo), detailed error messages, and related logs

    I reproduced https://github.com/PaddlePaddle/PaddleClas/blob/release/2.3/docs/zh_CN/quick_start/quick_start_classification_new_user.md ("Get started with PaddleClas in 30 minutes", trial version), and the demo output is the same: [{'class_ids': [20, 9, 29, 34, 23], 'scores': [0.03, 0.02606, 0.02249, 0.02051, 0.01818], 'file_name': 'dataset/flowers102/jpg/image_00001.jpg', 'label_names': []}]. Why is there no label_names? Is the highest score of 0.03 the recommended class, and which label does it correspond to? P.S. The flowers102 imagelabels.mat and setid.mat files are both in place. Any help is greatly appreciated, and thanks in advance to everyone for taking the time to answer.
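
    The score list in the output is sorted, so class_id 20 with score 0.03 is the top-1 prediction; an empty label_names usually just means no class-id-to-name mapping file was configured for post-processing. A hedged sketch of doing the mapping by hand, assuming a label-map file with one "<class_id> <label_name>" entry per line (the file path below is a placeholder):

    label_map_path = "dataset/flowers102/flowers102_label_list.txt"  # assumed location and format

    with open(label_map_path, "r") as f:
        id2name = dict(line.strip().split(" ", 1) for line in f if line.strip())

    result = {"class_ids": [20, 9, 29, 34, 23],
              "scores": [0.03, 0.02606, 0.02249, 0.02051, 0.01818]}
    label_names = [id2name.get(str(cid), "unknown") for cid in result["class_ids"]]
    print(list(zip(result["class_ids"], label_names, result["scores"])))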

    opened by aibama 10
  • Error during training

    Error during training

    Welcome to PaddleClas, and thank you very much for your contribution! When opening an issue, please provide the following information so that we can locate the problem quickly and resolve it effectively:

    1. PaddleClas and PaddlePaddle versions: please provide the version numbers or branch information you are using, e.g. PaddleClas release/2.2 and PaddlePaddle 2.1.0
    2. Version numbers of other products involved: if you are using other products such as PaddleServing or PaddleInference together with PaddleClas, please provide their version numbers
    3. Training environment information: a. operating system, e.g. Linux/Windows/MacOS; b. Python version, e.g. Python 3.6/7/8; c. CUDA/cuDNN versions, e.g. CUDA 10.2 / cuDNN 7.6.5
    4. Complete code (any changes compared with the code in the repo), detailed error messages, and related logs

    Training command: python tools/train.py -c ./ppcls/configs/quick_start/professional/ResNet50_vd_CIFAR100.yaml -o Global.output_dir="output_CIFAR" -o Arch.pretrained=True

    Training runs and prints results, but it keeps reporting errors!!

    C:\Users\86151/.paddleclas/weights\ResNet50_vd_pretrained.pdparams
    C:\Users\86151\Anaconda3\lib\site-packages\paddle\fluid\dygraph\layers.py:1301: UserWarning: Skip loading for fc.weight. fc.weight receives a shape [2048, 1000], but the expected shape is [2048, 12].
      warnings.warn(("Skip loading for {}. ".format(key) + str(err)))
    C:\Users\86151\Anaconda3\lib\site-packages\paddle\fluid\dygraph\layers.py:1301: UserWarning: Skip loading for fc.bias. fc.bias receives a shape [1000], but the expected shape is [12].
      warnings.warn(("Skip loading for {}. ".format(key) + str(err)))
    [2021/10/27 15:51:29] root INFO: train with paddle 2.1.3 and device CUDAPlace(0)
    C:\Users\86151\Anaconda3\lib\site-packages\paddle\fluid\reader.py:358: UserWarning: DataLoader with multi-process mode is not supported on MacOs and Windows currently. Please use signle-process mode with num_workers = 0 instead
      "DataLoader with multi-process mode is not supported on MacOs and Windows currently."
    [2021/10/27 15:51:30] root INFO: [Train][Epoch 1/300][Iter: 0/34]lr: 0.04000, CELoss: 2.48894, loss: 2.48894, top1: 0.03125, top5: 0.48438, batch_cost: 0.87071s, reader_cost: 0.15462, ips: 73.50348 images/sec, eta: 2:28:01
    [2021/10/27 15:51:30] root ERROR: Exception occured when parse line: ./dataset/Cat/cat_12_train/tO6cKGH8uPEayzmeZJ51Fdr2Tx3fBYSn.jpg with msg: 'NoneType' object has no attribute 'shape'
    [2021/10/27 15:51:30] root ERROR: Exception occured when parse line: ./dataset/Cat/cat_12_train/YfsxcFB9D3LvkdQyiXlqnNZ4STwope2r.jpg with msg: 'NoneType' object has no attribute 'shape'
    [2021/10/27 15:51:31] root ERROR: Exception occured when parse line: ./dataset/Cat/cat_12_train/yGcJHV8Uuft6grFs7QWnK5CTAZvYzdDO.jpg with msg: 'NoneType' object has no attribute 'shape'
    [2021/10/27 15:51:31] root INFO: [Train][Epoch 1/300][Iter: 10/34]lr: 0.04000, CELoss: 2.60209, loss: 2.60209, top1: 0.09375, top5: 0.46023, batch_cost: 0.12350s, reader_cost: 0.00682, ips: 518.20466 images/sec, eta: 0:20:58
    [2021/10/27 15:51:31] root ERROR: Exception occured when parse line: ./dataset/Cat/cat_12_train/3yMZzWekKmuoGOF60ICQxldhBEc9Ra15.jpg with msg: 'NoneType' object has no attribute 'shape'
    [2021/10/27 15:51:32] root ERROR: Exception occured when parse line: ./dataset/Cat/cat_12_train/5nKsehtjrXCZqbAcSW13gxB8E6z2Luy7.jpg with msg: 'NoneType' object has no attribute 'shape'
    [2021/10/27 15:51:33] root INFO: [Train][Epoch 1/300][Iter: 20/34]lr: 0.04000, CELoss: 2.55338, loss: 2.55338, top1: 0.11905, top5: 0.50074, batch_cost: 0.16357s, reader_cost: 0.04662, ips: 391.27521 images/sec, eta: 0:27:45
    [2021/10/27 15:51:35] root INFO: [Train][Epoch 1/300][Iter: 30/34]lr: 0.04000, CELoss: 2.52793, loss: 2.52793, top1: 0.12349, top5: 0.50907, batch_cost: 0.17201s, reader_cost: 0.05527, ips: 372.08184 images/sec, eta: 0:29:09
    [2021/10/27 15:51:35] root INFO: [Train][Epoch 1/300][Avg]CELoss: 2.52594, loss: 2.52594, top1: 0.12500, top5: 0.51468
    [2021/10/27 15:51:35] root INFO: [Train][Epoch 2/300][Iter: 0/34]lr: 0.04000, CELoss: 2.40325, loss: 2.40325, top1: 0.10938, top5: 0.59375, batch_cost: 0.17696s, reader_cost: 0.06030, ips: 361.67293 images/sec, eta: 0:29:58
    [2021/10/27 15:51:37] root INFO: [Train][Epoch 2/300][Iter: 10/34]lr: 0.04000, CELoss: 2.35911, loss: 2.35911, top1: 0.17898, top5: 0.60511, batch_cost: 0.17437s, reader_cost: 0.05785, ips: 367.02882 images/sec, eta: 0:29:30
    [2021/10/27 15:51:38] root ERROR: Exception occured when parse line: ./dataset/Cat/cat_12_train/3yMZzWekKmuoGOF60ICQxldhBEc9Ra15.jpg with msg: 'NoneType' object has no attribute 'shape'
    [2021/10/27 15:51:39] root ERROR: Exception occured when parse line: ./dataset/Cat/cat_12_train/YfsxcFB9D3LvkdQyiXlqnNZ4STwope2r.jpg with msg: 'NoneType' object has no attribute 'shape'
    [2021/10/27 15:51:39] root INFO: [Train][Epoch 2/300][Iter: 20/34]lr: 0.04000, CELoss: 2.34280, loss: 2.34280, top1: 0.18378, top5: 0.61607, batch_cost: 0.18401s, reader_cost: 0.06794, ips: 347.80181 images/sec, eta: 0:31:06
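
    The repeated "'NoneType' object has no attribute 'shape'" errors mean OpenCV returned None for those images, i.e. the files at the listed paths could not be read or decoded. A hedged sketch for scanning a PaddleClas-style train list and flagging unreadable images (the list path and its "<image_path> <label>" line format are assumptions):

    import cv2

    train_list = "./dataset/Cat/train_list.txt"  # assumed list file, one "<image_path> <label>" per line

    with open(train_list, "r") as f:
        for line in f:
            path = line.strip().split(" ")[0]
            if cv2.imread(path) is None:         # None means a missing, corrupt, or undecodable file
                print("unreadable image:", path)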

    opened by Sun-Shun 10
  • [WIP] support dbb module for ResNet

    [WIP] support dbb module for ResNet

    1. Add the DiverseBranchBlock module;
    2. ResNet supports a DBB variant built from DiverseBranchBlock by setting micro_block="DiverseBranchBlock" (a rough sketch of the block follows the TODO list below);
    3. ResNet supports the official vb variant by setting use_first_short_conv=False;
    4. Add the ResNet18_dbb training config ResNet18_dbb.yaml.

    TODO:

    1. support PCALighting;
    2. perf & refactor BNWithPad, IdentityBasedConv, Conv2D, etc.;
    3. upload pretrained and inference model.
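
    For context, a heavily simplified, hypothetical sketch of what a training-time Diverse Branch Block looks like: four parallel conv/BN branches whose outputs are summed. The actual implementation additionally uses identity-based convolutions, padding-aware BN and structural re-parameterization for inference, so this is only an illustration of the branch layout, not the code in this PR:

    import paddle
    import paddle.nn as nn

    class SimpleDBB(nn.Layer):
        """Training-time DBB sketch: k x k, 1x1, 1x1-avg and 1x1-kxk branches, summed."""

        def __init__(self, in_c, out_c, k=3, stride=1):
            super().__init__()
            pad = k // 2
            self.branch_kxk = nn.Sequential(
                nn.Conv2D(in_c, out_c, k, stride, pad, bias_attr=False),
                nn.BatchNorm2D(out_c))
            self.branch_1x1 = nn.Sequential(
                nn.Conv2D(in_c, out_c, 1, stride, 0, bias_attr=False),
                nn.BatchNorm2D(out_c))
            self.branch_avg = nn.Sequential(
                nn.Conv2D(in_c, out_c, 1, 1, 0, bias_attr=False),
                nn.BatchNorm2D(out_c),
                nn.AvgPool2D(k, stride, pad),
                nn.BatchNorm2D(out_c))
            self.branch_1x1_kxk = nn.Sequential(
                nn.Conv2D(in_c, out_c, 1, 1, 0, bias_attr=False),
                nn.BatchNorm2D(out_c),
                nn.Conv2D(out_c, out_c, k, stride, pad, bias_attr=False),
                nn.BatchNorm2D(out_c))

        def forward(self, x):
            return (self.branch_kxk(x) + self.branch_1x1(x)
                    + self.branch_avg(x) + self.branch_1x1_kxk(x))

    With stride 1 and matching channels the block is shape-preserving, e.g. SimpleDBB(64, 64)(paddle.randn([1, 64, 56, 56])) returns a [1, 64, 56, 56] tensor.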
    opened by TingquanGao 1
  • Service deployment keeps failing with error_msg: invalid arg list: ['fetch_list']

    Service deployment keeps failing with error_msg: invalid arg list: ['fetch_list']

    Welcome to PaddleClas, and thank you very much for your contribution! When opening an issue, please provide the following information so that we can locate the problem quickly and resolve it effectively:

    1. PaddleClas and PaddlePaddle versions: PaddleClas release/2.5
    2. Version numbers of other products involved:
    paddle-bfloat         0.1.7
    paddle-serving-app    0.9.0
    paddle-serving-client 0.9.0
    paddle-serving-server 0.9.0
    paddle2onnx           1.0.5
    paddleclas            2.5.1
    paddlefsl             1.1.0
    paddlehub             2.3.1
    paddlenlp             2.4.3
    paddlepaddle          2.4.1
    
    3. Training environment information: a. operating system: Linux, registry.baidubce.com/paddlepaddle/serving:0.9.0-devel; b. Python version: Python 3.6; c. CUDA/cuDNN version: cpu
    4. Complete code (any changes compared with the code in the repo), detailed error messages, and related logs

    Following this documentation: https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/deployment/image_classification/paddle_serving.md

    Using my own trained model, I verified that inference runs:

    python3 python/predict_cls.py -c configs/sun.yaml
    
    2022-12-19 20:02:54 INFO:         ToCHWImage : None
    E1219 20:02:54.029156 21438 analysis_config.cc:96] Please compile with gpu to EnableGpu()
    IMG_8512.JPG:	class id(s): [0, 4, 1, 3, 5], score(s): [0.62, 0.18, 0.14, 0.03, 0.02], label_name(s): ['optometry-0', 'eye-axis-2', 'optometry-1', 'eye-axis-1', 'eye-axis-3']
    

    But when deploying it as a service, it always fails with this error:

    {'err_no': 5000, 'err_msg': '[imagenet] failed to predict. Log_id: 2  Raise_msg:  invalid arg list  ClassName: Op.process.<locals>.feed_fetch_list_check_helper  FunctionName: feed_fetch_list_check_helper. Please check the input dict and checkout PipelineServingLogs/pipeline.log for more details.', 'key': [], 'value': [], 'tensors': []}
    

    PipelineServingLogs/pipeline.log

    ERROR 2022-12-19 19:54:52,613 [error_catch.py:106]
    Log_id: 1
    Traceback (most recent call last):
      File "/usr/local/lib/python3.6/site-packages/paddle_serving_server/pipeline/error_catch.py", line 97, in wrapper
        res = func(*args, **kw)
      File "/usr/local/lib/python3.6/site-packages/paddle_serving_server/pipeline/error_catch.py", line 160, in wrapper
        raise CustomException(CustomExceptionCode.INPUT_PARAMS_ERROR, "invalid arg list: {}".format(invalid_argument_list), True)
    paddle_serving_server.pipeline.error_catch.CustomException:
    	exception_code: 5000
    	exception_type: INPUT_PARAMS_ERROR
    	error_msg: invalid arg list: ['fetch_list']
    	is_send_to_user: True
    

    serving_server_conf.prototxt

    feed_var {
      name: "inputs"
      alias_name: "inputs"
      is_lod_tensor: false
      feed_type: 1
      shape: 3
      shape: 224
      shape: 224
    }
    fetch_var {
      name: "softmax_1.tmp_0"
      alias_name: "prediction"
      is_lod_tensor: false
      fetch_type: 1
      shape: 1000
    }
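
    For reference, the client in the PaddleClas pipeline serving example sends the request as a JSON body with parallel key/value lists. A hedged sketch of that client, assuming the default imagenet op name and port 18080 from the example and a placeholder image path:

    import base64
    import json
    import requests

    url = "http://127.0.0.1:18080/imagenet/prediction"
    with open("./images/demo.jpg", "rb") as f:           # placeholder image path
        image = base64.b64encode(f.read()).decode("utf8")

    data = {"key": ["image"], "value": [image]}
    response = requests.post(url=url, data=json.dumps(data))
    print(response.json())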
    
    opened by sunzhaoyang 2
Releases(v2.5.1)
  • v2.5.1(Dec 5, 2022)

  • v2.5.0(Nov 14, 2022)

  • v2.4.0(Jul 7, 2022)

    1. Release Practical Ultra Light-weight image Classification (PULC) solutions. PULC models run inference within 3 ms on CPU devices, with accuracy on par with SwinTransformer.
    2. Release 9 PULC models, including person attribute, traffic sign recognition, text image orientation classification, etc.
    3. Release the PP-HGNet classification network, which is suitable for GPU devices.
    4. Release the PP-LCNet v2 classification network, which is suitable for CPU devices.
    5. Add CSwinTransformer, PVTv2, MobileViT and VAN.
    6. Add BoT ReID models.

    Source code(tar.gz)
    Source code(zip)
  • v2.3.1(Feb 7, 2022)

    1. Update the PP-ShiTu model and add an 18 MB model series.
    2. Fully upgrade the documentation.
    3. Add C++ inference.
    4. Add a C++ pipeline serving mode.
    5. Add a demo for Paddle Lite on Android.

    Source code(tar.gz)
    Source code(zip)
  • 2.3.0(Oct 14, 2021)

    1. Add lightweight models, including detection and feature extraction.
    2. Add the PP-LCNet backbone model, which is extremely fast on CPU devices.
    3. Support PaddleServing and PaddleSlim.
    4. Switch the vector search module to Faiss, based on extensive compatibility feedback.
    5. Add PKSampler, which is more stable for multi-card training.
    6. Legendary models can now output intermediate results.
    7. Add a DeepHash module, which can compress float features into binary features.
    8. SwinTransformer, Twins and DeiT achieve the same accuracy as the originals when trained from scratch.

    Source code(tar.gz)
    Source code(zip)
  • 2.2.1(Aug 11, 2021)

    1. Add the Swin Transformer series of models.
    2. Support static-graph training, as well as DALI and FP16 training.
    3. Support building the feature gallery with batch size > 1, and support adding new features to an existing feature gallery.
    4. Fix bugs and update documentation.

    Source code(tar.gz)
    Source code(zip)
  • 2.2.0(Jun 16, 2021)

    1. Architecture
       1.1. ppcls backbones are now separated into two groups: legendary models and the model zoo.
       1.2. Legendary models inherit from a new base class, TheseusLayer, which allows stopping at an intermediate point or even changing the architecture.
    2. Metric Learning
       2.1. Add many metric learning components, including gears, which can be inserted into the arch, and losses.
       2.2. PaddleClas now supports classification tasks and metric learning tasks with the same trainer; you only need to switch configs.
    3. Vector Search
       3.1. Integrate the Mobius vector search algorithm.
    4. Applications
       4.1. Add new applications: product recognition, logo recognition, car classification, car ReID and cartoon character recognition.
       4.2. Add a new image recognition pipeline, which contains detection, feature extraction and vector search.
    5. New models
       5.1. Add LeViT, Twins, TNT, DLA, HarDNet and RedNet models.
    Source code(tar.gz)
    Source code(zip)
  • v2.1.0(Apr 23, 2021)

  • v2.0.0(Feb 8, 2021)

    Release Note

    Support for the dynamic graph programming paradigm, adapted to Paddle 2.0, including:

    1. 29 series of classification network structures and training configurations, 134 models' pretrained weights and their evaluation metrics.
    2. SSLD Knowledge Distillation. Based on this SSLD distillation strategy, the top-1 acc of the distilled model is generally increased by more than 3%.
    3. Data augmentation: PaddleClas provides detailed introduction of 8 data augmentation algorithms such as AutoAugment, Cutout, Cutmix, code reproduction and effect evaluation in a unified experimental environment.
    4. Pretrained model with 100,000 categories: Based on ResNet50_vd model, Baidu open sourced the ResNet50_vd pretrained model trained on a 100,000-category dataset. In some practical scenarios, the accuracy based on the pretrained weights can be increased by up to 30%.
    5. A variety of training modes, including multi-machine training, mixed precision training, etc.
    6. A variety of inference and deployment solutions, including TensorRT inference, Paddle-Lite inference, model service deployment, model quantification, Paddle Hub, etc.
    7. Support Linux, Windows, macOS and other systems.
    8. Support training/evaluation on CPU/GPU/XPU.
    Source code(tar.gz)
    Source code(zip)