PyTorch implementation of the OCNet series and SegFix.

Overview

openseg.pytorch


News

  • 2021/09/14 MMSegmentation has supported our ISANet; refer to ISANet for more details.

  • 2021/08/13 We have released the implementation of HRFormer; the combination of HRFormer and OCR achieves better semantic segmentation performance.

  • 2021/03/12 The long-awaited acceptance is finally here: our "OCNet: Object Context Network for Scene Parsing" has been accepted by IJCV-2021. It consolidates two of our previous technical reports, OCNet and ISA. Congratulations to all the co-authors!

  • 2021/02/16 We support PyTorch 1.7, mixed-precision, and distributed training. Based on the PaddleClas ImageNet-pretrained weights, we achieve 83.22% on Cityscapes val, 59.62% on PASCAL-Context val (new SOTA), 45.20% on COCO-Stuff val (new SOTA), 58.21% on LIP val and 47.98% on ADE20K val. Please check out the pytorch-1.7 branch for more details.

  • 2020/12/07 PaddleSeg has supported our ISA and HRNet + OCR. Jittor has also supported our ResNet-101 + OCR.

  • 2020/08/16 MMSegmentation has supported our HRNet + OCR.

  • 2020/07/20 Researchers from AInnovation achieved Rank#1 on the ADE20K leaderboard by training our HRNet + OCR with a semi-supervised learning scheme. More details are in their technical report.

  • 2020/07/09 OCR (Spotlight) and SegFix have been accepted by ECCV-2020. Notably, researchers from NVIDIA set a new state-of-the-art on the Cityscapes leaderboard (85.4%) by combining our HRNet + OCR with a new hierarchical multi-scale attention scheme.

  • 2020/05/11 We have released the checkpoints/logs of "HRNet + OCR" on all five benchmarks (Cityscapes, ADE20K, LIP, PASCAL-Context and COCO-Stuff) in the Model Zoo. Please feel free to try our method on your own dataset.

  • 2020/04/18 We have released some of our checkpoints/logs of OCNet, ISA, OCR and SegFix. We highly recommend using SegFix to improve your segmentation results, as it is very easy and fast to apply.

  • 2020/03/12 Our SegFix can be used to improve the performance of various SOTA methods on both semantic segmentation and instance segmentation; e.g., "PolyTransform + SegFix" achieves Rank#2 on the Cityscapes leaderboard (instance segmentation track) with a performance of 41.2%.

  • 2020/01/13 The source code for the reproduced HRNet + OCR has been made public.

  • 2020/01/09 "HRNet + OCR + SegFix" achieves Rank#1 on the Cityscapes leaderboard with an mIoU of 84.5%.

  • 2019/09/25 We have released the paper OCR, which is the method behind our Rank#2 entry on the Cityscapes leaderboard.

  • 2019/07/31 We have released the paper ISA, which is easy to use and implement while being much more efficient than OCNet or DANet, both of which are based on conventional self-attention.

  • 2019/07/23 We (HRNet + OCR w/ ASP) achieve Rank#1 on the Cityscapes leaderboard (with a single model) on 3 of the 4 metrics.

  • 2019/05/27 We achieve SOTA on six different semantic segmentation benchmarks: Cityscapes, ADE20K, LIP, PASCAL-Context, PASCAL-VOC and COCO-Stuff. We provide the source code for our approach on all six benchmarks.

Model Zoo and Baselines

We provide a set of baseline results and trained models available for download in the Model Zoo.

Introduction

This is the official code of OCR, OCNet, ISA and SegFix. OCR, OCNet and ISA focus on better context aggregation mechanisms (for the semantic segmentation task), while SegFix focuses on addressing boundary errors (for both semantic segmentation and instance segmentation tasks). We highlight the overall frameworks of OCR and SegFix in the figures below:

OCR

Fig.1 - Illustrating the pipeline of OCR. (i) form the soft object regions in the pink dashed box. (ii) estimate the object region representations in the purple dashed box. (iii) compute the object contextual representations and the augmented representations in the orange dashed box.
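
To make the three steps concrete, here is a minimal PyTorch sketch of the OCR computation. It is a simplification under assumed shapes: the full module also applies learned 1x1 transforms to the queries, keys and values, which are omitted here.

import torch
import torch.nn.functional as F

def ocr_sketch(pixel_feats, coarse_logits):
    # pixel_feats: (B, C, H, W) backbone features; coarse_logits: (B, K, H, W) region scores
    B, C, H, W = pixel_feats.shape
    feats = pixel_feats.flatten(2)                               # (B, C, N) with N = H*W
    # (i) soft object regions: normalize each class map over the spatial positions
    regions = F.softmax(coarse_logits.flatten(2), dim=2)         # (B, K, N)
    # (ii) object region representations: region-weighted average of pixel features
    region_feats = torch.bmm(regions, feats.transpose(1, 2))     # (B, K, C)
    # (iii) pixel-region attention -> object contextual representations
    sim_map = torch.bmm(feats.transpose(1, 2), region_feats.transpose(1, 2))  # (B, N, K)
    sim_map = F.softmax(C ** -0.5 * sim_map, dim=-1)
    context = torch.bmm(sim_map, region_feats)                   # (B, N, C)
    context = context.transpose(1, 2).reshape(B, C, H, W)
    # augmented representation: concatenate the context with the original features
    return torch.cat([pixel_feats, context], dim=1)             # (B, 2C, H, W)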

SegFix

Fig.2 - Illustrating the SegFix framework: In the training stage, we first send the input image through a backbone to predict a feature map. Then we apply a boundary branch to predict a binary boundary map and a direction branch to predict a direction map, which is masked with the binary boundary map. We apply a boundary loss and a direction loss to the predicted boundary map and direction map, respectively. In the testing stage, we first convert the direction map to an offset map and then refine the segmentation results of any existing method according to the offset map.
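
The test-stage refinement reduces to a gather along the predicted offsets. Below is a minimal, assumption-laden PyTorch illustration of the idea: each boundary pixel takes the label of the (more reliable) interior pixel its offset points to. The repository's implementation handles details such as offset scaling and re-application differently.

import torch

def segfix_refine(coarse_label, offset, boundary_mask):
    # coarse_label:  (H, W) integer labels from any segmentation model
    # offset:        (H, W, 2) integer (dy, dx) offsets pointing toward the interior
    # boundary_mask: (H, W) bool, True on predicted boundary pixels
    H, W = coarse_label.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    # follow the offsets, clamped to stay inside the image
    ny = (ys + offset[..., 0]).clamp(0, H - 1).long()
    nx = (xs + offset[..., 1]).clamp(0, W - 1).long()
    refined = coarse_label.clone()
    # boundary pixels copy the label of the interior pixel they point to
    refined[boundary_mask] = coarse_label[ny[boundary_mask], nx[boundary_mask]]
    return refined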

Citation

Please consider citing our work if you find it helpful:

@article{YuanW18,
  title={OCNet: Object Context Network for Scene Parsing},
  author={Yuhui Yuan and Jingdong Wang},
  journal={arXiv preprint arXiv:1809.00916},
  year={2018}
}

@article{HuangYGZCW19,
  title={Interlaced Sparse Self-Attention for Semantic Segmentation},
  author={Lang Huang and Yuhui Yuan and Jianyuan Guo and Chao Zhang and Xilin Chen and Jingdong Wang},
  journal={arXiv preprint arXiv:1907.12273},
  year={2019}
}

@article{YuanCW20,
  title={Object-Contextual Representations for Semantic Segmentation},
  author={Yuhui Yuan and Xilin Chen and Jingdong Wang},
  journal={arXiv preprint arXiv:1909.11065},
  year={2020}
}

@article{YuanXCW20,
  title={SegFix: Model-Agnostic Boundary Refinement for Segmentation},
  author={Yuhui Yuan and Jingyi Xie and Xilin Chen and Jingdong Wang},
  journal={arXiv preprint arXiv:2007.04269},
  year={2020}
}

@article{YuanFHZCW21,
  title={HRFormer: High-Resolution Transformer for Dense Prediction},
  author={Yuhui Yuan and Rao Fu and Lang Huang and Weihong Lin and Chao Zhang and Xilin Chen and Jingdong Wang},
  journal={arXiv preprint arXiv:2110.09408},
  year={2021}
}

Acknowledgment

This project is developed based on segbox.pytorch; its author, donnyyou, retains all copyright of the reproduced DeepLabv3- and PSPNet-related code.

Comments
  • questions/issues on training segfix with own data

    questions/issues on training segfix with own data

    I was excited to try segfix training on my own data.

    I could produce the .mat files for the train and val data. Training works with run_h_48_d_4_segfix.sh and the loss converges, but during validation the IoU is more or less random (I have 2 classes):

    2020-08-20 10:47:41,932 INFO [base.py, 32] Result for mask
    2020-08-20 10:47:41,932 INFO [base.py, 48] Mean IOU: 0.7853758111568029
    2020-08-20 10:47:41,933 INFO [base.py, 49] Pixel ACC: 0.9692584678389714
    2020-08-20 10:47:41,933 INFO [base.py, 54] F1 Score: 0.7523384841507573 Precision: 0.7928424176432377 Recall: 0.7157718538603068
    2020-08-20 10:47:41,933 INFO [base.py, 32] Result for dir (mask)
    2020-08-20 10:47:41,933 INFO [base.py, 48] Mean IOU: 0.5390945167184129
    2020-08-20 10:47:41,933 INFO [base.py, 49] Pixel ACC: 0.7248566725097775
    2020-08-20 10:47:41,933 INFO [base.py, 32] Result for dir (GT)
    2020-08-20 10:47:41,934 INFO [base.py, 48] Mean IOU: 0.41990305666871003
    2020-08-20 10:47:41,934 INFO [base.py, 49] Pixel ACC: 0.6007717101395131

    To investigate the issue further, I tried to analyze the predicted .mat files with bash scripts/cityscapes/segfix/run_h_48_d_4_segfix.sh segfix_pred_val 1.

    with "input_size": [640, 480] this exception happens: File "/home/rsa-key-20190908/openseg.pytorch/lib/datasets/tools/collate.py", line 108, in collate assert pad_height >= 0 and pad_width >= 0 after fixing it more or less, iv got similar results as val during training They were around 3Kb instead of ~70kb btw, it took "input_size": [640, 480] config from "test": { leave instead "val": {

    Is it possible that validation only works with "input_size": [2048, 1024]? Can you give me any hints on how to manually verify the correctness of the .mat files? Currently I'm diving into 2007.04269.pdf and the code of dt_offset_generator.py to get an understanding.
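
    As a generic sanity check (the file name below is a placeholder, and the actual keys depend on how the files were saved), the contents of such a .mat file can be listed with scipy to compare array shapes against the expected image resolution:

    import scipy.io as sio

    mat = sio.loadmat("example_offset.mat")  # placeholder path
    for key, val in mat.items():
        if not key.startswith("__"):         # skip scipy's metadata entries
            print(key, getattr(val, "shape", None), getattr(val, "dtype", None))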

    opened by marcok 18
  • How to prepare the Cityscapes data

    How to prepare the Cityscapes data

    Hello. I'm trying to reproduce your Cityscapes results for our BMVC paper.

    After following the data directory format in the config.profile file and running bash ./scripts/cityscapes/hrnet/run_h_48_d_4_ocr.sh val 1, I get this error:

    ERROR: Found no prediction for ground truth /home/arash/openseg.pytorch/dataset/cityscapes/val/label/munster_000027_000019_gtFine_labelIds.png

    Could you explain how you prepared the data? Thanks.

    opened by arashash 15
  • About the JSON file: what should the input size and crop size be based on?

    About the JSON file: what should the input size and crop size be based on?

    My dataset image size is 256×256, and I don't know how to modify the JSON file.

    {
        "dataset": "BDCI",
        "method": "fcn_segmentor",
        "data": {
          "image_tool": "cv2",
          "input_mode": "BGR",
          "num_classes": 7,
          "label_list": [0, 1, 2, 3, 4, 5, 6, 255],
          "data_dir": "~/DataSet/BDCI",
          "workers": 8
        },
       "train": {
          "batch_size": 16,
          "data_transformer": {
            "size_mode": "fix_size",
            "input_size": [256, 256],
            "align_method": "only_pad",
            "pad_mode": "random"
          }
        },
        "val": {
          "batch_size": 4,
          "mode": "ss_test",
          "data_transformer": {
            "size_mode": "fix_size",
            "input_size": [256, 256],
            "align_method": "only_pad"
          }
        },
        "test": {
          "batch_size": 4,
          "mode": "ss_test",
          "out_dir": "~/DataSet/BDCI/seg_result/BDCI",
          "data_transformer": {
            "size_mode": "fix_size",
            "input_size": [256, 256],
            "align_method": "only_pad"
          }
        },
        "train_trans": {
          "trans_seq": ["random_resize", "random_crop", "random_hflip", "random_brightness"],
          "random_brightness": {
            "ratio": 1.0,
            "shift_value": 10
          },
          "random_hflip": {
            "ratio": 0.5,
            "swap_pair": []
          },
          "random_resize": {
            "ratio": 1.0,
            "method": "random",
            "scale_range": [0.5, 2.0],
            "aspect_range": [0.9, 1.1]
          },
          "random_crop":{
            "ratio": 1.0,
            "crop_size": [256, 256],
            "method": "random",
            "allow_outside_center": false
          }
        },
        "val_trans": {
          "trans_seq": []
        },
        "normalize": {
          "div_value": 255.0,
          "mean_value": [0.485, 0.456, 0.406],
          "mean": [0.485, 0.456, 0.406],
          "std": [0.229, 0.224, 0.225]
        },
        "checkpoints": {
          "checkpoints_name": "fs_baseocnet_BDCI_seg",
          "checkpoints_dir": "./checkpoints/BDCI",
          "save_iters": 500
        },
        "network":{
          "backbone": "deepbase_resnet101_dilated8",
          "multi_grid": [1, 1, 1],
          "model_name": "base_ocnet",
          "bn_type": "inplace_abn",
          "stride": 8,
          "factors": [[8, 8]],
          "loss_weights": {
            "corr_loss": 0.01,
            "aux_loss": 0.4,
            "seg_loss": 1.0
          }
        },
        "logging": {
          "logfile_level": "info",
          "stdout_level": "info",
          "log_file": "./log/BDCI/fs_baseocnet_BDCI_seg.log",
          "log_format": "%(asctime)s %(levelname)-7s %(message)s",
          "rewrite": true
        },
        "lr": {
          "base_lr": 0.01,
          "metric": "iters",
          "lr_policy": "lambda_poly",
          "step": {
            "gamma": 0.5,
            "step_size": 100
          }
        },
        "solver": {
          "display_iter": 10,
          "test_interval": 1000,
          "max_iters": 40000
        },
        "optim": {
          "optim_method": "sgd",
          "adam": {
            "betas": [0.9, 0.999],
            "eps": 1e-08,
            "weight_decay": 0.0001
          },
          "sgd": {
            "weight_decay": 0.0005,
            "momentum": 0.9,
            "nesterov": false
          }
        },
        "loss": {
          "loss_type": "fs_auxce_loss",
          "params": {
            "ce_weight": [0.8373, 0.9180, 0.8660, 1.0345, 1.0166, 0.9969, 0.9754,
                          1.0489, 0.8786, 1.0023, 0.9539, 0.9843, 1.1116, 0.9037,
                          1.0865, 1.0955, 1.0865, 1.1529, 1.0507],
            "ce_reduction": "elementwise_mean",
            "ce_ignore_index": -1,
            "ohem_minkeep": 100000,
            "ohem_thresh": 0.9
          }
        }
    }
    
    

    Here is my JSON file. When I try to train on my dataset, I get a size mismatch error (see the attached screenshots), and the environment requirements should be satisfied.

    I also attached screenshots of my validation error, my config.profile, and my log file.

    opened by ShiMinghao0208 12
  • Problem occurred in hrnet_backbone.py

    Problem occurred in hrnet_backbone.py

    Dear Author,

    Thank you for your excellent work, but some errors are reported for backbones.

    checkpoint names:
    checkpoints/cityscapes/hrnet_w48_ocr_1_latest.pth
    
    
    commands:
    (for HRNet-W48:)
    python -u main.py --configs configs/cityscapes/H_48_D_4.json --drop_last y --backbone hrnet48 --model_name hrnet_w48_ocr --checkpoints_name hrnet_w48_ocr_1 --phase test --gpu 0 --resume ./checkpoints/cityscapes/hrnet_w48_ocr_1_latest.pth --loss_type fs_auxce_loss --test_dir input_images --out_dir output_images
    

    Error messages:

    2020-07-15 21:00:10,470 INFO [module_runner.py, 44] BN Type is inplace_abn.
    Traceback (most recent call last):
      File "main.py", line 214, in <module>
        model = Tester(configer)
      File "/home/dai/code/semantic_segmentation/9/openseg.pytorch-master/segmentor/tester.py", line 69, in __init__
        self._init_model()
      File "/home/dai/code/semantic_segmentation/9/openseg.pytorch-master/segmentor/tester.py", line 72, in _init_model
        self.seg_net = self.model_manager.semantic_segmentor()
      File "/home/dai/code/semantic_segmentation/9/openseg.pytorch-master/lib/models/model_manager.py", line 81, in semantic_segmentor
        model = SEG_MODEL_DICT[model_name](self.configer)
      File "/home/dai/code/semantic_segmentation/9/openseg.pytorch-master/lib/models/nets/hrnet.py", line 105, in __init__
        self.backbone = BackboneSelector(configer).get_backbone()
      File "/home/dai/code/semantic_segmentation/9/openseg.pytorch-master/lib/models/backbones/backbone_selector.py", line 34, in get_backbone
        model = HRNetBackbone(self.configer)(**params)
      File "/home/dai/code/semantic_segmentation/9/openseg.pytorch-master/lib/models/backbones/hrnet/hrnet_backbone.py", line 598, in __call__
        bn_momentum=0.1)
      File "/home/dai/code/semantic_segmentation/9/openseg.pytorch-master/lib/models/backbones/hrnet/hrnet_backbone.py", line 307, in __init__
        self.bn1 = ModuleHelper.BatchNorm2d(bn_type=bn_type)(64, momentum=bn_momentum)
    TypeError: 'NoneType' object is not callable

    Could you please tell me what is wrong? Thank you.

    opened by daixiaolei623 12
  • Problem with OCR similarity map

    Problem with OCR similarity map

    Thanks for sharing this wonderful work with us!

    I have a problem with the computation of the similarity map in the OCR module. At line 131 of lib/models/seg_hrnet_ocr.py: sim_map = (self.key_channels**-.5) * sim_map. Why multiply sim_map by a small value (self.key_channels**-.5) before the softmax?

    During validation, I printed the final result of sim_map and found that all values in this map are very close to 0.0526 (i.e., 1/19), which means the probabilities of a pixel i belonging to the different object regions k are almost equal. Doesn't this contradict the assumption that the similarity map should represent the relation between the i-th pixel and the k-th object region?

    #######################

    Your former answer:

    • Multiplying the small value is following the original self-attention scheme. Please refer to the last paragraph of 3.2.1 in the paper "Attention Is All You Need". However, we find this small factor does not influence the segmentation performance.

    • As the final result of the sim_map, we do not understand why all the values are almost the same in your case. What checkpoints are you testing? How about the performance of the used checkpoint? Please provide more information so that we can help you.

    #########################

    Thanks a lot for your reply! I used the checkpoint posted on HRNet-OCR. The segmentation performance is good and the mIoU is 81.6, too. During inference, I printed 10 random rows of the sim_map (see the attached screenshots): all values in this map are very close to 0.0526 (equal to 1/19).
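
    A tiny numerical sketch (not repository code) of the scaling point from the answer above: without the (self.key_channels**-.5) factor, the dot-product logits have a standard deviation of roughly sqrt(d) and the softmax saturates.

    import torch

    torch.manual_seed(0)
    d = 256                                  # plays the role of self.key_channels
    q = torch.randn(1, d)                    # one pixel query
    k = torch.randn(19, d)                   # 19 object regions, as on Cityscapes
    logits = q @ k.t()                       # raw dot products, std ~ sqrt(d)
    print(torch.softmax(logits, dim=1).max())               # typically near 1: saturated
    print(torch.softmax(d ** -0.5 * logits, dim=1).max())   # noticeably softer distribution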

    opened by Mayy1994 11
  • SegFix paper link

    SegFix paper link

    Hi!

    Thanks for your nice work. It is really impressive. I'm interested in the SegFix algorithm. Could you send a copy of the paper "SegFix: Model-Agnostic Boundary Refinement for Segmentation", since I cannot find it on arXiv.

    Best, David

    opened by davidblom603 8
  • The performance of resnet101-ocr

    The performance of resnet101-ocr

    Hi, I want to reproduce the results of the OCR paper, specifically for PASCAL-Context and ADE20K. Should I use the HRNet-OCR repo or this repo? In fact, I followed the default settings of HRNet-OCR and just replaced HRNet with ResNet-101, but I cannot reproduce the results on PASCAL-Context (54.8% mIoU) and ADE20K (45.3% mIoU).

    opened by ydhongHIT 7
  • Test sets results

    Test sets results

    For comparison in our paper, we are looking for the detailed test set results (class IoUs) of these prediction files that you shared: https://drive.google.com/drive/folders/156vMABydr7btdPDBU6b9J-e0jJHuPI73 Do you happen to have a snapshot of the submission results obtained with these predictions? Thank you for your consideration.

    opened by arashash 7
  • class-id mapping for mapillary dataset

    class-id mapping for mapillary dataset

    I cannot find any class-id mapping in the README or the config file, e.g., Road in the ground truth has a label of 0, traffic light is 1, unlabeled is 255, etc.

    Could you provide the mapping for v1.2 of Mapillary?

    opened by lingorX 6
  • When I use H_SEGFIX.json to train on the Cityscapes dataset, I meet this error:

    When I use H_SEGFIX.json to train on the Cityscapes dataset, I meet this error:

    In loss_helper.py, in the calculation of the loss function, the input is two tensors ([1,8,128,128] and [1,2,128,128]), and the corresponding labels are three tensors ([1,512,512], [1,512,512], [1,512,512]).

    targets = targets_.clone().unsqueeze(1).float()
    AttributeError: 'list' object has no attribute 'clone'
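
    As a generic illustration (not the repository's actual fix): .clone() is a tensor method, so a list of per-task label tensors has to be handled per element or stacked into a single tensor first.

    import torch

    targets_ = [torch.zeros(1, 512, 512) for _ in range(3)]   # list of label tensors

    # targets_.clone()  # fails: 'list' object has no attribute 'clone'
    targets = [t.clone().unsqueeze(1).float() for t in targets_]  # per-element handling
    # or, since all entries share one shape, stack them into a single tensor first:
    stacked = torch.stack(targets_).clone().float()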

    opened by qingchengboy 6
  • How to draw pictures

    How to draw pictures

    How were these figures made: the Coarse Label Map, Offset Map, Refined Label Map, Distance Map, Direction Map, and the last one? Which of them were drawn with software (and which software), which were produced by a program, and can the program be open-sourced? I want to apply Figure 2 and Figure 3 to my own grayscale maps; if the code can be open-sourced, could that happen in the near future? Thank you very much.

    opened by Klaviersonate 5
  • need *.mat when I want to train segfix on my own dataset

    need *.mat when I want to train segfix on my own dataset

    I want to train SegFix on my own dataset with the script "scripts/cityscapes/segfix/run_hx_20_d_2_segfix_trainval.sh", but it seems to need *.mat files (see the attached screenshot). How can I solve this problem? Thank you.

    opened by jhyin12 0
  • preprocess scripts for LIP

    preprocess scripts for LIP

    Thanks for your work!

    I downloaded the LIP dataset from here, and got the dataset folder structure below:

    .
    |-- ATR
    |   `-- humanparsing
    |       |-- JPEGImages
    |       `-- SegmentationClassAug
    |-- CIHP
    |   `-- instance-level_human_parsing
    |       |-- Testing
    |       |   `-- Images
    |       |-- Training
    |       |   |-- Categories
    |       |   |-- Category_ids
    |       |   |-- Human
    |       |   |-- Human_ids
    |       |   |-- Images
    |       |   |-- Instance_ids
    |       |   `-- Instances
    |       `-- Validation
    |           |-- Categories
    |           |-- Category_ids
    |           |-- Human
    |           |-- Human_ids
    |           |-- Images
    |           |-- Instance_ids
    |           `-- Instances
    `-- LIP
    

    That is different from the structure you mentioned in GETTING_STARTED.md:

    
    ├── lip
    │   ├── atr
    │   │   ├── edge
    │   │   ├── image
    │   │   └── label
    │   ├── cihp
    │   │   ├── image
    │   │   └── label
    │   ├── train
    │   │   ├── edge
    │   │   ├── image
    │   │   └── label
    │   ├── val
    │   │   ├── edge
    │   │   ├── image
    │   │   └── label
    
    

    Could you please provide the scripts to preprocess the LIP dataset? Thanks a lot!

    opened by shouyanxiang 0
  • Result of refinement by SegFix on HRNet / HRNet-Semantic-Segmentation open source

    Result of refinement by SegFix on HRNet / HRNet-Semantic-Segmentation open source

    From my understanding, the two open-source repos (HRNet-Semantic-Segmentation & openseg.pytorch) don't differ greatly.

    So I applied SegFix to the results generated from HRNet-Semantic-Segmentation; the original mIoU is in the attached screenshot.

    Naturally, I assumed that the final mIoU after applying SegFix would increase. However, that's not the case: the mIoU actually decreased to 80.29.

    I applied SegFix the way described in MODEL_ZOO.md (screenshot attached).

    Is this the correct way to apply SegFix? Or is there any other way to apply SegFix?

    opened by Jonnyboyyyy 0
  • question about flops

    question about flops

    How do you calculate the FLOPs in Figure 4 for an input size of 512 × 97 × 97? I used the formula in the attached screenshot, but the result is much larger than expected.

    opened by HaoGuo98 0