Panoptic SegFormer: Delving Deeper into Panoptic Segmentation with Transformers

Overview

This repository contains the code for Panoptic SegFormer: Delving Deeper into Panoptic Segmentation with Transformers (arXiv, 2021).


Results

Results on COCO val

Backbone                 Method              Lr Schd  PQ    Config  Download
R-50                     Panoptic-SegFormer  1x       48.0  config  model
R-50                     Panoptic-SegFormer  2x       49.6  config  model
R-101                    Panoptic-SegFormer  2x       50.6  config  model
PVTv2-B5 (much lighter)  Panoptic-SegFormer  2x       55.6  config  model
Swin-L (window size 7)   Panoptic-SegFormer  2x       55.8  config  model

Install

Prerequisites

  • Linux
  • Python 3.6+
  • PyTorch 1.5+
  • torchvision
  • CUDA 9.2+ (If you build PyTorch from source, CUDA 9.0 is also compatible)
  • GCC 5+
  • mmcv-full==1.3.4
  • mmdet==2.12.0 (higher versions may not work)
  • timm==0.4.5
  • einops==0.3.0
  • Pillow==8.0.1
  • opencv-python==4.5.2
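
As a convenience (not part of the repo), the pinned versions above can be sanity-checked with a short script; the names passed to pkg_resources are the PyPI distribution names:

import pkg_resources
import torch

# Report the installed PyTorch build first, since its CUDA version matters most.
print('torch', torch.__version__, 'CUDA', torch.version.cuda)

pins = [('mmcv-full', '1.3.4'), ('mmdet', '2.12.0'), ('timm', '0.4.5'),
        ('einops', '0.3.0'), ('Pillow', '8.0.1'), ('opencv-python', '4.5.2')]
for name, expected in pins:
    try:
        installed = pkg_resources.get_distribution(name).version
    except pkg_resources.DistributionNotFound:
        installed = 'not installed'
    status = 'ok' if installed == expected else 'CHECK'
    print(f'{name}: installed {installed}, expected {expected} [{status}]')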

Note: PyTorch 1.8 has a bug in its adamw.py that is fixed in PyTorch 1.9; you can easily patch it locally by porting the difference from the PyTorch 1.9 version.

Install Panoptic SegFormer

python setup.py install 

Datasets

When this project began, mmdet did not yet support panoptic segmentation officially, so the dataset is converted from the panoptic segmentation format to an instance (detection) segmentation format for convenience (a rough sketch of what this conversion does is shown after step 2 below).

1. Prepare data (COCO)

cd Panoptic-SegFormer
mkdir datasets
cd datasets
ln -s path_to_coco coco
mkdir annotations/
cd annotations
wget http://images.cocodataset.org/annotations/panoptic_annotations_trainval2017.zip
unzip panoptic_annotations_trainval2017.zip

Then the directory structure should be the following:

Panoptic-SegFormer
├── datasets
│   ├── annotations/
│   │   ├── panoptic_train2017/
│   │   ├── panoptic_train2017.json
│   │   ├── panoptic_val2017/
│   │   └── panoptic_val2017.json
│   └── coco/ 
│
├── config
├── checkpoints
├── easymd
...

2. Convert panoptic format to detection format

cd Panoptic-SegFormer
./tools/convert_panoptic_coco.sh coco

Then the directory structure should be the following:

Panoptic-SegFormer
├── datasets
│   ├── annotations/
│   │   ├── panoptic_train2017/
│   │   ├── panoptic_train2017_detection_format.json
│   │   ├── panoptic_train2017.json
│   │   ├── panoptic_val2017/
│   │   ├── panoptic_val2017_detection_format.json
│   │   └── panoptic_val2017.json
│   └── coco/ 
│
├── config
├── checkpoints
├── easymd
...
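
A rough, self-contained sketch of what this panoptic-to-detection conversion amounts to is shown below; the repo's convert_panoptic_coco.sh is the authoritative implementation, while the sketch only relies on the standard COCO panoptic PNG encoding and pycocotools, with paths following the layout above:

import json
import numpy as np
from PIL import Image
from pycocotools import mask as mask_utils

panoptic_json = 'datasets/annotations/panoptic_val2017.json'
panoptic_png_dir = 'datasets/annotations/panoptic_val2017'
output_json = 'datasets/annotations/panoptic_val2017_detection_format.json'

coco = json.load(open(panoptic_json))
detections = []
for ann in coco['annotations']:
    # COCO panoptic PNGs encode each pixel's segment id as R + 256*G + 256^2*B.
    pan = np.array(Image.open(f"{panoptic_png_dir}/{ann['file_name']}"), dtype=np.uint32)
    pan_id = pan[..., 0] + 256 * pan[..., 1] + 256 ** 2 * pan[..., 2]
    for seg in ann['segments_info']:
        # One instance-style annotation (RLE mask) per panoptic segment.
        rle = mask_utils.encode(np.asfortranarray((pan_id == seg['id']).astype(np.uint8)))
        rle['counts'] = rle['counts'].decode('utf-8')  # make the RLE JSON-serializable
        detections.append({
            'id': len(detections) + 1,  # segment ids are only unique per image, so re-number
            'image_id': ann['image_id'],
            'category_id': seg['category_id'],
            'segmentation': rle,
            'bbox': seg['bbox'],
            'area': seg['area'],
            'iscrowd': seg['iscrowd'],
        })

json.dump({'images': coco['images'], 'annotations': detections,
           'categories': coco['categories']}, open(output_json, 'w'))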

Run (panoptic segmentation)

Train

Train on a single machine with 8 GPUs:

./tools/dist_train.sh ./configs/panformer/panformer_r50_24e_coco_panoptic.py 8

Test

./tools/dist_test.sh ./configs/panformer/panformer_r50_24e_coco_panoptic.py path/to/model.pth 8
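
There is no dedicated single-image demo script in the repo (see the Comments section below); the following is a minimal sketch of how one might run inference on a single picture with the mmdet API that the configs build on. The checkpoint and image paths are placeholders, and the structure of the returned result is only printed here rather than assumed.

from mmdet.apis import init_detector, inference_detector
import easymd  # noqa: F401 -- importing easymd registers this repo's custom modules with mmdet

config = './configs/panformer/panformer_r50_24e_coco_panoptic.py'
checkpoint = 'path/to/model.pth'  # e.g. a checkpoint downloaded from the Results table
image = 'path/to/image.jpg'

model = init_detector(config, checkpoint=checkpoint, device='cuda:0')
result = inference_detector(model, image)

# Inspect what the panoptic head returns before trying to visualize it.
print(type(result))
if isinstance(result, dict):
    print(list(result.keys()))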

Citing

If you use Panoptic SegFormer in your research, please use the following BibTeX entry.

@article{li2021panoptic,
  title={Panoptic SegFormer},
  author={Li, Zhiqi and Wang, Wenhai and Xie, Enze and Yu, Zhiding and Anandkumar, Anima and Alvarez, Jose M and Lu, Tong and Luo, Ping},
  journal={arXiv},
  year={2021}
}

Acknowledgement

Mainly based on Deformable DETR from MMDetection.

Many thanks to other open-source works: timm, Panoptic FCN, MaskFormer, QueryInst

Comments
  • How to demo the result on one picture?

    Dear friend, thank you for your great work. We do not want to download the COCO dataset; we just want to provide one picture, segment it, and show the result. How can we do that? Best regards,

    opened by delldu 3
  • What does pvt_v2_ap in the code stand for?

    I found there are many names that are hard to understand. For example, what does pvt_v2_ap stand for? And what does single_stage_w_mask stand for?

    (screenshot omitted)

    And what are the differences between those files?

    opened by jinfagang 2
  • How to visualize a demo image?

    Dear friend, how do I visualize the segmentation result for custom images? I ran inference.py and did not get a good result (output screenshot omitted).

    I think there are some faults in my code.

    Here is my code:

    from mmcv.runner import checkpoint
    from mmdet.apis.inference import init_detector,LoadImage, inference_detector
    import easymd
    import cv2
    import random
    import colorsys
    import numpy as np
    
    def random_colors(N, bright=True):
        brightness = 1.0 if bright else 0.7
        hsv = [(i / float(N), 1, brightness) for i in range(N)]
        colors = list(map(lambda c: colorsys.hsv_to_rgb(*c), hsv))
        random.shuffle(colors)
        return colors
    
    def apply_mask(image, mask, color, alpha=0.5):
        for c in range(3):
            image[:, :, c] = np.where(mask == 0,
                                      image[:, :, c],
                                      image[:, :, c] *
                                      (1 - alpha) + alpha * color[c] * 255)
        return image
    
    config = './configs/panformer/panformer_pvtb5_24e_coco_panoptic.py'
    #checkpoints = './checkpoints/pseg_r101_r50_latest.pth'
    checkpoints = "./checkpoints/panoptic_segformer_pvtv2b5_2x.pth"
    img_path = "img_path "
    mask_save_path = "save_path"
    
    colors = random_colors(80)
    
    model = init_detector(config,checkpoint=checkpoints)
    
    results = inference_detector(model, [img_path])
    
    img = cv2.imread(img_path)
    
    seg = results['segm'][0]
    N = len(seg)
    
    masked_image = img.copy()
    for i in range(N):
        color = colors[i]
        masks = np.sum(seg[i], axis=0)
        masked_image = apply_mask(masked_image, masks, color)
        # for mask in seg[i]:
        #     masked_image = apply_mask(masked_image, mask, color)
    
    # cv2.imshow("a", masked_image)
    
    opened by garriton 0
  • Location Decoder loss

    https://github.com/zhiqi-li/Panoptic-SegFormer/blob/e604ef810eaf5101106d221db4b6970c2daca5c9/easymd/models/panformer/panformer_head.py#L360-L364

    Why does the location decoder only compute the losses of the first L-1 layers rather than all L layers?

    opened by hust-nj 0
  • Instructions for a single-GPU run

    Hi, thanks for sharing your work. I was trying to run it on a single GPU. Would you please add some instructions or scripts for running on a single GPU? That would be a great help.

    Kind regards, Abdullah

    opened by nazib 1
  • Impossible to debug, single_gpu code paths are broken

    It seems that multi-GPU training and evaluation work great; however, when trying to debug, you usually opt for a single GPU.
    In that case the code breaks in several places during evaluation of the validation set.
    Any chance for a hotfix? :)

    To reproduce, try to run the code from PyCharm in debug mode while there's only one GPU available.

    opened by aviadmx 1
  • Why are instance annotations required along with the panoptic ones?

    The model solves the panoptic segmentation task, so why does the validation dataset use the instance segmentation annotations?

    data = dict(
        samples_per_gpu=2,
        workers_per_gpu=2,
        train=dict(
            type=dataset_type,
            ann_file= './datasets/annotations/panoptic_train2017_detection_format.json',
            img_prefix=data_root + 'train2017/',
            pipeline=train_pipeline),
        val=dict( 
          
            segmentations_folder='./seg',
            gt_json = './datasets/annotations/panoptic_val2017.json',
            gt_folder = './datasets/annotations/panoptic_val2017',
            type=dataset_type,
            ann_file=data_root + 'annotations/instances_val2017.json', # Why?
            img_prefix=data_root + 'val2017/',
            pipeline=test_pipeline),
        test=dict(
            segmentations_folder='./seg',
            gt_json = './datasets/annotations/panoptic_val2017.json',
            gt_folder = './datasets/annotations/panoptic_val2017',
            type=dataset_type,
            #ann_file= './datasets/coco/annotations/image_info_test-dev2017.json',
            ann_file=data_root + 'annotations/instances_val2017.json', # Why?
            #img_prefix=data_root + '/test2017/',
            img_prefix=data_root + 'val2017/',
            pipeline=test_pipeline)
            )
    

    We eventually use the instances_val2017.json file instead of panoptic_val2017.json.

    opened by aviadmx 3
  • Loading checkpoint

    When loading the Swin-L checkpoint by adding a load_from line to the config configs/panformer/panformer_swinl_24e_coco_panoptic.py as follows:

    load_from='./pretrained/panoptic_segformer_swinl_2x.pth'
    

    Loading fails with an error about mismatched keys:

    unexpected key in source state_dict: bbox_head.cls_branches2.0.weight, bbox_head.cls_branches2.0.bias, bbox_head.cls_branches2.1.weight, bbox_head.cls_branches2.1.bias, bbox_head.cls_branches2.2.weight, bbox_head.cls_branches2.2.bias, bbox_head.cls_branches2.3.weight, bbox_head.cls_branches2.3.bias, bbox_head.mask_head.blocks.0.head_norm1.weight, bbox_head.mask_head.blocks.0.head_norm1.bias, bbox_head.mask_head.blocks.0.attn.q.weight, bbox_head.mask_head.blocks.0.attn.q.bias, bbox_head.mask_head.blocks.0.attn.k.weight, bbox_head.mask_head.blocks.0.attn.k.bias, bbox_head.mask_head.blocks.0.attn.v.weight, bbox_head.mask_head.blocks.0.attn.v.bias, bbox_head.mask_head.blocks.0.attn.proj.weight, bbox_head.mask_head.blocks.0.attn.proj.bias, bbox_head.mask_head.blocks.0.attn.linear_l1.0.weight, bbox_head.mask_head.blocks.0.attn.linear_l1.0.bias, bbox_head.mask_head.blocks.0.attn.linear_l2.0.weight, bbox_head.mask_head.blocks.0.attn.linear_l2.0.bias, bbox_head.mask_head.blocks.0.attn.linear_l3.0.weight, bbox_head.mask_head.blocks.0.attn.linear_l3.0.bias, bbox_head.mask_head.blocks.0.attn.linear.0.weight, bbox_head.mask_head.blocks.0.attn.linear.0.bias, bbox_head.mask_head.blocks.0.head_norm2.weight, bbox_head.mask_head.blocks.0.head_norm2.bias, bbox_head.mask_head.blocks.0.mlp.fc1.weight, bbox_head.mask_head.blocks.0.mlp.fc1.bias, bbox_head.mask_head.blocks.0.mlp.fc2.weight, bbox_head.mask_head.blocks.0.mlp.fc2.bias, bbox_head.mask_head.blocks.1.head_norm1.weight, bbox_head.mask_head.blocks.1.head_norm1.bias, bbox_head.mask_head.blocks.1.attn.q.weight, bbox_head.mask_head.blocks.1.attn.q.bias, bbox_head.mask_head.blocks.1.attn.k.weight, bbox_head.mask_head.blocks.1.attn.k.bias, bbox_head.mask_head.blocks.1.attn.v.weight, bbox_head.mask_head.blocks.1.attn.v.bias, bbox_head.mask_head.blocks.1.attn.proj.weight, bbox_head.mask_head.blocks.1.attn.proj.bias, bbox_head.mask_head.blocks.1.attn.linear_l1.0.weight, bbox_head.mask_head.blocks.1.attn.linear_l1.0.bias, bbox_head.mask_head.blocks.1.attn.linear_l2.0.weight, bbox_head.mask_head.blocks.1.attn.linear_l2.0.bias, bbox_head.mask_head.blocks.1.attn.linear_l3.0.weight, bbox_head.mask_head.blocks.1.attn.linear_l3.0.bias, bbox_head.mask_head.blocks.1.attn.linear.0.weight, bbox_head.mask_head.blocks.1.attn.linear.0.bias, bbox_head.mask_head.blocks.1.head_norm2.weight, bbox_head.mask_head.blocks.1.head_norm2.bias, bbox_head.mask_head.blocks.1.mlp.fc1.weight, bbox_head.mask_head.blocks.1.mlp.fc1.bias, bbox_head.mask_head.blocks.1.mlp.fc2.weight, bbox_head.mask_head.blocks.1.mlp.fc2.bias, bbox_head.mask_head.blocks.2.head_norm1.weight, bbox_head.mask_head.blocks.2.head_norm1.bias, bbox_head.mask_head.blocks.2.attn.q.weight, bbox_head.mask_head.blocks.2.attn.q.bias, bbox_head.mask_head.blocks.2.attn.k.weight, bbox_head.mask_head.blocks.2.attn.k.bias, bbox_head.mask_head.blocks.2.attn.v.weight, bbox_head.mask_head.blocks.2.attn.v.bias, bbox_head.mask_head.blocks.2.attn.proj.weight, bbox_head.mask_head.blocks.2.attn.proj.bias, bbox_head.mask_head.blocks.2.attn.linear_l1.0.weight, bbox_head.mask_head.blocks.2.attn.linear_l1.0.bias, bbox_head.mask_head.blocks.2.attn.linear_l2.0.weight, bbox_head.mask_head.blocks.2.attn.linear_l2.0.bias, bbox_head.mask_head.blocks.2.attn.linear_l3.0.weight, bbox_head.mask_head.blocks.2.attn.linear_l3.0.bias, bbox_head.mask_head.blocks.2.attn.linear.0.weight, bbox_head.mask_head.blocks.2.attn.linear.0.bias, bbox_head.mask_head.blocks.2.head_norm2.weight, bbox_head.mask_head.blocks.2.head_norm2.bias, 
bbox_head.mask_head.blocks.2.mlp.fc1.weight, bbox_head.mask_head.blocks.2.mlp.fc1.bias, bbox_head.mask_head.blocks.2.mlp.fc2.weight, bbox_head.mask_head.blocks.2.mlp.fc2.bias, bbox_head.mask_head.blocks.3.head_norm1.weight, bbox_head.mask_head.blocks.3.head_norm1.bias, bbox_head.mask_head.blocks.3.attn.q.weight, bbox_head.mask_head.blocks.3.attn.q.bias, bbox_head.mask_head.blocks.3.attn.k.weight, bbox_head.mask_head.blocks.3.attn.k.bias, bbox_head.mask_head.blocks.3.attn.v.weight, bbox_head.mask_head.blocks.3.attn.v.bias, bbox_head.mask_head.blocks.3.attn.proj.weight, bbox_head.mask_head.blocks.3.attn.proj.bias, bbox_head.mask_head.blocks.3.attn.linear_l1.0.weight, bbox_head.mask_head.blocks.3.attn.linear_l1.0.bias, bbox_head.mask_head.blocks.3.attn.linear_l2.0.weight, bbox_head.mask_head.blocks.3.attn.linear_l2.0.bias, bbox_head.mask_head.blocks.3.attn.linear_l3.0.weight, bbox_head.mask_head.blocks.3.attn.linear_l3.0.bias, bbox_head.mask_head.blocks.3.attn.linear.0.weight, bbox_head.mask_head.blocks.3.attn.linear.0.bias, bbox_head.mask_head.blocks.3.head_norm2.weight, bbox_head.mask_head.blocks.3.head_norm2.bias, bbox_head.mask_head.blocks.3.mlp.fc1.weight, bbox_head.mask_head.blocks.3.mlp.fc1.bias, bbox_head.mask_head.blocks.3.mlp.fc2.weight, bbox_head.mask_head.blocks.3.mlp.fc2.bias, bbox_head.mask_head.attnen.q.weight, bbox_head.mask_head.attnen.q.bias, bbox_head.mask_head.attnen.k.weight, bbox_head.mask_head.attnen.k.bias, bbox_head.mask_head.attnen.linear_l1.0.weight, bbox_head.mask_head.attnen.linear_l1.0.bias, bbox_head.mask_head.attnen.linear_l2.0.weight, bbox_head.mask_head.attnen.linear_l2.0.bias, bbox_head.mask_head.attnen.linear_l3.0.weight, bbox_head.mask_head.attnen.linear_l3.0.bias, bbox_head.mask_head.attnen.linear.0.weight, bbox_head.mask_head.attnen.linear.0.bias, bbox_head.mask_head2.blocks.0.head_norm1.weight, bbox_head.mask_head2.blocks.0.head_norm1.bias, bbox_head.mask_head2.blocks.0.attn.q.weight, bbox_head.mask_head2.blocks.0.attn.q.bias, bbox_head.mask_head2.blocks.0.attn.k.weight, bbox_head.mask_head2.blocks.0.attn.k.bias, bbox_head.mask_head2.blocks.0.attn.v.weight, bbox_head.mask_head2.blocks.0.attn.v.bias, bbox_head.mask_head2.blocks.0.attn.proj.weight, bbox_head.mask_head2.blocks.0.attn.proj.bias, bbox_head.mask_head2.blocks.0.attn.linear_l1.0.weight, bbox_head.mask_head2.blocks.0.attn.linear_l1.0.bias, bbox_head.mask_head2.blocks.0.attn.linear_l2.0.weight, bbox_head.mask_head2.blocks.0.attn.linear_l2.0.bias, bbox_head.mask_head2.blocks.0.attn.linear_l3.0.weight, bbox_head.mask_head2.blocks.0.attn.linear_l3.0.bias, bbox_head.mask_head2.blocks.0.attn.linear.0.weight, bbox_head.mask_head2.blocks.0.attn.linear.0.bias, bbox_head.mask_head2.blocks.0.head_norm2.weight, bbox_head.mask_head2.blocks.0.head_norm2.bias, bbox_head.mask_head2.blocks.0.mlp.fc1.weight, bbox_head.mask_head2.blocks.0.mlp.fc1.bias, bbox_head.mask_head2.blocks.0.mlp.fc2.weight, bbox_head.mask_head2.blocks.0.mlp.fc2.bias, bbox_head.mask_head2.blocks.0.self_attention.qkv.weight, bbox_head.mask_head2.blocks.0.self_attention.qkv.bias, bbox_head.mask_head2.blocks.0.self_attention.proj.weight, bbox_head.mask_head2.blocks.0.self_attention.proj.bias, bbox_head.mask_head2.blocks.0.norm3.weight, bbox_head.mask_head2.blocks.0.norm3.bias, bbox_head.mask_head2.blocks.1.head_norm1.weight, bbox_head.mask_head2.blocks.1.head_norm1.bias, bbox_head.mask_head2.blocks.1.attn.q.weight, bbox_head.mask_head2.blocks.1.attn.q.bias, bbox_head.mask_head2.blocks.1.attn.k.weight, bbox_head.mask_head2.blocks.1.attn.k.bias, 
bbox_head.mask_head2.blocks.1.attn.v.weight, bbox_head.mask_head2.blocks.1.attn.v.bias, bbox_head.mask_head2.blocks.1.attn.proj.weight, bbox_head.mask_head2.blocks.1.attn.proj.bias, bbox_head.mask_head2.blocks.1.attn.linear_l1.0.weight, bbox_head.mask_head2.blocks.1.attn.linear_l1.0.bias, bbox_head.mask_head2.blocks.1.attn.linear_l2.0.weight, bbox_head.mask_head2.blocks.1.attn.linear_l2.0.bias, bbox_head.mask_head2.blocks.1.attn.linear_l3.0.weight, bbox_head.mask_head2.blocks.1.attn.linear_l3.0.bias, bbox_head.mask_head2.blocks.1.attn.linear.0.weight, bbox_head.mask_head2.blocks.1.attn.linear.0.bias, bbox_head.mask_head2.blocks.1.head_norm2.weight, bbox_head.mask_head2.blocks.1.head_norm2.bias, bbox_head.mask_head2.blocks.1.mlp.fc1.weight, bbox_head.mask_head2.blocks.1.mlp.fc1.bias, bbox_head.mask_head2.blocks.1.mlp.fc2.weight, bbox_head.mask_head2.blocks.1.mlp.fc2.bias, bbox_head.mask_head2.blocks.1.self_attention.qkv.weight, bbox_head.mask_head2.blocks.1.self_attention.qkv.bias, bbox_head.mask_head2.blocks.1.self_attention.proj.weight, bbox_head.mask_head2.blocks.1.self_attention.proj.bias, bbox_head.mask_head2.blocks.1.norm3.weight, bbox_head.mask_head2.blocks.1.norm3.bias, bbox_head.mask_head2.blocks.2.head_norm1.weight, bbox_head.mask_head2.blocks.2.head_norm1.bias, bbox_head.mask_head2.blocks.2.attn.q.weight, bbox_head.mask_head2.blocks.2.attn.q.bias, bbox_head.mask_head2.blocks.2.attn.k.weight, bbox_head.mask_head2.blocks.2.attn.k.bias, bbox_head.mask_head2.blocks.2.attn.v.weight, bbox_head.mask_head2.blocks.2.attn.v.bias, bbox_head.mask_head2.blocks.2.attn.proj.weight, bbox_head.mask_head2.blocks.2.attn.proj.bias, bbox_head.mask_head2.blocks.2.attn.linear_l1.0.weight, bbox_head.mask_head2.blocks.2.attn.linear_l1.0.bias, bbox_head.mask_head2.blocks.2.attn.linear_l2.0.weight, bbox_head.mask_head2.blocks.2.attn.linear_l2.0.bias, bbox_head.mask_head2.blocks.2.attn.linear_l3.0.weight, bbox_head.mask_head2.blocks.2.attn.linear_l3.0.bias, bbox_head.mask_head2.blocks.2.attn.linear.0.weight, bbox_head.mask_head2.blocks.2.attn.linear.0.bias, bbox_head.mask_head2.blocks.2.head_norm2.weight, bbox_head.mask_head2.blocks.2.head_norm2.bias, bbox_head.mask_head2.blocks.2.mlp.fc1.weight, bbox_head.mask_head2.blocks.2.mlp.fc1.bias, bbox_head.mask_head2.blocks.2.mlp.fc2.weight, bbox_head.mask_head2.blocks.2.mlp.fc2.bias, bbox_head.mask_head2.blocks.2.self_attention.qkv.weight, bbox_head.mask_head2.blocks.2.self_attention.qkv.bias, bbox_head.mask_head2.blocks.2.self_attention.proj.weight, bbox_head.mask_head2.blocks.2.self_attention.proj.bias, bbox_head.mask_head2.blocks.2.norm3.weight, bbox_head.mask_head2.blocks.2.norm3.bias, bbox_head.mask_head2.blocks.3.head_norm1.weight, bbox_head.mask_head2.blocks.3.head_norm1.bias, bbox_head.mask_head2.blocks.3.attn.q.weight, bbox_head.mask_head2.blocks.3.attn.q.bias, bbox_head.mask_head2.blocks.3.attn.k.weight, bbox_head.mask_head2.blocks.3.attn.k.bias, bbox_head.mask_head2.blocks.3.attn.v.weight, bbox_head.mask_head2.blocks.3.attn.v.bias, bbox_head.mask_head2.blocks.3.attn.proj.weight, bbox_head.mask_head2.blocks.3.attn.proj.bias, bbox_head.mask_head2.blocks.3.attn.linear_l1.0.weight, bbox_head.mask_head2.blocks.3.attn.linear_l1.0.bias, bbox_head.mask_head2.blocks.3.attn.linear_l2.0.weight, bbox_head.mask_head2.blocks.3.attn.linear_l2.0.bias, bbox_head.mask_head2.blocks.3.attn.linear_l3.0.weight, bbox_head.mask_head2.blocks.3.attn.linear_l3.0.bias, bbox_head.mask_head2.blocks.3.attn.linear.0.weight, bbox_head.mask_head2.blocks.3.attn.linear.0.bias, 
bbox_head.mask_head2.blocks.3.head_norm2.weight, bbox_head.mask_head2.blocks.3.head_norm2.bias, bbox_head.mask_head2.blocks.3.mlp.fc1.weight, bbox_head.mask_head2.blocks.3.mlp.fc1.bias, bbox_head.mask_head2.blocks.3.mlp.fc2.weight, bbox_head.mask_head2.blocks.3.mlp.fc2.bias, bbox_head.mask_head2.blocks.3.self_attention.qkv.weight, bbox_head.mask_head2.blocks.3.self_attention.qkv.bias, bbox_head.mask_head2.blocks.3.self_attention.proj.weight, bbox_head.mask_head2.blocks.3.self_attention.proj.bias, bbox_head.mask_head2.blocks.3.norm3.weight, bbox_head.mask_head2.blocks.3.norm3.bias, bbox_head.mask_head2.blocks.4.head_norm1.weight, bbox_head.mask_head2.blocks.4.head_norm1.bias, bbox_head.mask_head2.blocks.4.attn.q.weight, bbox_head.mask_head2.blocks.4.attn.q.bias, bbox_head.mask_head2.blocks.4.attn.k.weight, bbox_head.mask_head2.blocks.4.attn.k.bias, bbox_head.mask_head2.blocks.4.attn.v.weight, bbox_head.mask_head2.blocks.4.attn.v.bias, bbox_head.mask_head2.blocks.4.attn.proj.weight, bbox_head.mask_head2.blocks.4.attn.proj.bias, bbox_head.mask_head2.blocks.4.attn.linear_l1.0.weight, bbox_head.mask_head2.blocks.4.attn.linear_l1.0.bias, bbox_head.mask_head2.blocks.4.attn.linear_l2.0.weight, bbox_head.mask_head2.blocks.4.attn.linear_l2.0.bias, bbox_head.mask_head2.blocks.4.attn.linear_l3.0.weight, bbox_head.mask_head2.blocks.4.attn.linear_l3.0.bias, bbox_head.mask_head2.blocks.4.attn.linear.0.weight, bbox_head.mask_head2.blocks.4.attn.linear.0.bias, bbox_head.mask_head2.blocks.4.head_norm2.weight, bbox_head.mask_head2.blocks.4.head_norm2.bias, bbox_head.mask_head2.blocks.4.mlp.fc1.weight, bbox_head.mask_head2.blocks.4.mlp.fc1.bias, bbox_head.mask_head2.blocks.4.mlp.fc2.weight, bbox_head.mask_head2.blocks.4.mlp.fc2.bias, bbox_head.mask_head2.blocks.4.self_attention.qkv.weight, bbox_head.mask_head2.blocks.4.self_attention.qkv.bias, bbox_head.mask_head2.blocks.4.self_attention.proj.weight, bbox_head.mask_head2.blocks.4.self_attention.proj.bias, bbox_head.mask_head2.blocks.4.norm3.weight, bbox_head.mask_head2.blocks.4.norm3.bias, bbox_head.mask_head2.blocks.5.head_norm1.weight, bbox_head.mask_head2.blocks.5.head_norm1.bias, bbox_head.mask_head2.blocks.5.attn.q.weight, bbox_head.mask_head2.blocks.5.attn.q.bias, bbox_head.mask_head2.blocks.5.attn.k.weight, bbox_head.mask_head2.blocks.5.attn.k.bias, bbox_head.mask_head2.blocks.5.attn.v.weight, bbox_head.mask_head2.blocks.5.attn.v.bias, bbox_head.mask_head2.blocks.5.attn.proj.weight, bbox_head.mask_head2.blocks.5.attn.proj.bias, bbox_head.mask_head2.blocks.5.attn.linear_l1.0.weight, bbox_head.mask_head2.blocks.5.attn.linear_l1.0.bias, bbox_head.mask_head2.blocks.5.attn.linear_l2.0.weight, bbox_head.mask_head2.blocks.5.attn.linear_l2.0.bias, bbox_head.mask_head2.blocks.5.attn.linear_l3.0.weight, bbox_head.mask_head2.blocks.5.attn.linear_l3.0.bias, bbox_head.mask_head2.blocks.5.attn.linear.0.weight, bbox_head.mask_head2.blocks.5.attn.linear.0.bias, bbox_head.mask_head2.blocks.5.head_norm2.weight, bbox_head.mask_head2.blocks.5.head_norm2.bias, bbox_head.mask_head2.blocks.5.mlp.fc1.weight, bbox_head.mask_head2.blocks.5.mlp.fc1.bias, bbox_head.mask_head2.blocks.5.mlp.fc2.weight, bbox_head.mask_head2.blocks.5.mlp.fc2.bias, bbox_head.mask_head2.blocks.5.self_attention.qkv.weight, bbox_head.mask_head2.blocks.5.self_attention.qkv.bias, bbox_head.mask_head2.blocks.5.self_attention.proj.weight, bbox_head.mask_head2.blocks.5.self_attention.proj.bias, bbox_head.mask_head2.blocks.5.norm3.weight, bbox_head.mask_head2.blocks.5.norm3.bias, 
bbox_head.mask_head2.attnen.q.weight, bbox_head.mask_head2.attnen.q.bias, bbox_head.mask_head2.attnen.k.weight, bbox_head.mask_head2.attnen.k.bias, bbox_head.mask_head2.attnen.linear_l1.0.weight, bbox_head.mask_head2.attnen.linear_l1.0.bias, bbox_head.mask_head2.attnen.linear_l2.0.weight, bbox_head.mask_head2.attnen.linear_l2.0.bias, bbox_head.mask_head2.attnen.linear_l3.0.weight, bbox_head.mask_head2.attnen.linear_l3.0.bias, bbox_head.mask_head2.attnen.linear.0.weight, bbox_head.mask_head2.attnen.linear.0.bias
    
    missing keys in source state_dict: bbox_head.cls_thing_branches.0.weight, bbox_head.cls_thing_branches.0.bias, bbox_head.cls_thing_branches.1.weight, bbox_head.cls_thing_branches.1.bias, bbox_head.cls_thing_branches.2.weight, bbox_head.cls_thing_branches.2.bias, bbox_head.cls_thing_branches.3.weight, bbox_head.cls_thing_branches.3.bias, bbox_head.things_mask_head.blocks.0.head_norm1.weight, bbox_head.things_mask_head.blocks.0.head_norm1.bias, bbox_head.things_mask_head.blocks.0.attn.q.weight, bbox_head.things_mask_head.blocks.0.attn.q.bias, bbox_head.things_mask_head.blocks.0.attn.k.weight, bbox_head.things_mask_head.blocks.0.attn.k.bias, bbox_head.things_mask_head.blocks.0.attn.v.weight, bbox_head.things_mask_head.blocks.0.attn.v.bias, bbox_head.things_mask_head.blocks.0.attn.proj.weight, bbox_head.things_mask_head.blocks.0.attn.proj.bias, bbox_head.things_mask_head.blocks.0.attn.linear_l1.0.weight, bbox_head.things_mask_head.blocks.0.attn.linear_l1.0.bias, bbox_head.things_mask_head.blocks.0.attn.linear_l2.0.weight, bbox_head.things_mask_head.blocks.0.attn.linear_l2.0.bias, bbox_head.things_mask_head.blocks.0.attn.linear_l3.0.weight, bbox_head.things_mask_head.blocks.0.attn.linear_l3.0.bias, bbox_head.things_mask_head.blocks.0.attn.linear.0.weight, bbox_head.things_mask_head.blocks.0.attn.linear.0.bias, bbox_head.things_mask_head.blocks.0.head_norm2.weight, bbox_head.things_mask_head.blocks.0.head_norm2.bias, bbox_head.things_mask_head.blocks.0.mlp.fc1.weight, bbox_head.things_mask_head.blocks.0.mlp.fc1.bias, bbox_head.things_mask_head.blocks.0.mlp.fc2.weight, bbox_head.things_mask_head.blocks.0.mlp.fc2.bias, bbox_head.things_mask_head.blocks.1.head_norm1.weight, bbox_head.things_mask_head.blocks.1.head_norm1.bias, bbox_head.things_mask_head.blocks.1.attn.q.weight, bbox_head.things_mask_head.blocks.1.attn.q.bias, bbox_head.things_mask_head.blocks.1.attn.k.weight, bbox_head.things_mask_head.blocks.1.attn.k.bias, bbox_head.things_mask_head.blocks.1.attn.v.weight, bbox_head.things_mask_head.blocks.1.attn.v.bias, bbox_head.things_mask_head.blocks.1.attn.proj.weight, bbox_head.things_mask_head.blocks.1.attn.proj.bias, bbox_head.things_mask_head.blocks.1.attn.linear_l1.0.weight, bbox_head.things_mask_head.blocks.1.attn.linear_l1.0.bias, bbox_head.things_mask_head.blocks.1.attn.linear_l2.0.weight, bbox_head.things_mask_head.blocks.1.attn.linear_l2.0.bias, bbox_head.things_mask_head.blocks.1.attn.linear_l3.0.weight, bbox_head.things_mask_head.blocks.1.attn.linear_l3.0.bias, bbox_head.things_mask_head.blocks.1.attn.linear.0.weight, bbox_head.things_mask_head.blocks.1.attn.linear.0.bias, bbox_head.things_mask_head.blocks.1.head_norm2.weight, bbox_head.things_mask_head.blocks.1.head_norm2.bias, bbox_head.things_mask_head.blocks.1.mlp.fc1.weight, bbox_head.things_mask_head.blocks.1.mlp.fc1.bias, bbox_head.things_mask_head.blocks.1.mlp.fc2.weight, bbox_head.things_mask_head.blocks.1.mlp.fc2.bias, bbox_head.things_mask_head.blocks.2.head_norm1.weight, bbox_head.things_mask_head.blocks.2.head_norm1.bias, bbox_head.things_mask_head.blocks.2.attn.q.weight, bbox_head.things_mask_head.blocks.2.attn.q.bias, bbox_head.things_mask_head.blocks.2.attn.k.weight, bbox_head.things_mask_head.blocks.2.attn.k.bias, bbox_head.things_mask_head.blocks.2.attn.v.weight, bbox_head.things_mask_head.blocks.2.attn.v.bias, bbox_head.things_mask_head.blocks.2.attn.proj.weight, bbox_head.things_mask_head.blocks.2.attn.proj.bias, bbox_head.things_mask_head.blocks.2.attn.linear_l1.0.weight, 
bbox_head.things_mask_head.blocks.2.attn.linear_l1.0.bias, bbox_head.things_mask_head.blocks.2.attn.linear_l2.0.weight, bbox_head.things_mask_head.blocks.2.attn.linear_l2.0.bias, bbox_head.things_mask_head.blocks.2.attn.linear_l3.0.weight, bbox_head.things_mask_head.blocks.2.attn.linear_l3.0.bias, bbox_head.things_mask_head.blocks.2.attn.linear.0.weight, bbox_head.things_mask_head.blocks.2.attn.linear.0.bias, bbox_head.things_mask_head.blocks.2.head_norm2.weight, bbox_head.things_mask_head.blocks.2.head_norm2.bias, bbox_head.things_mask_head.blocks.2.mlp.fc1.weight, bbox_head.things_mask_head.blocks.2.mlp.fc1.bias, bbox_head.things_mask_head.blocks.2.mlp.fc2.weight, bbox_head.things_mask_head.blocks.2.mlp.fc2.bias, bbox_head.things_mask_head.blocks.3.head_norm1.weight, bbox_head.things_mask_head.blocks.3.head_norm1.bias, bbox_head.things_mask_head.blocks.3.attn.q.weight, bbox_head.things_mask_head.blocks.3.attn.q.bias, bbox_head.things_mask_head.blocks.3.attn.k.weight, bbox_head.things_mask_head.blocks.3.attn.k.bias, bbox_head.things_mask_head.blocks.3.attn.v.weight, bbox_head.things_mask_head.blocks.3.attn.v.bias, bbox_head.things_mask_head.blocks.3.attn.proj.weight, bbox_head.things_mask_head.blocks.3.attn.proj.bias, bbox_head.things_mask_head.blocks.3.attn.linear_l1.0.weight, bbox_head.things_mask_head.blocks.3.attn.linear_l1.0.bias, bbox_head.things_mask_head.blocks.3.attn.linear_l2.0.weight, bbox_head.things_mask_head.blocks.3.attn.linear_l2.0.bias, bbox_head.things_mask_head.blocks.3.attn.linear_l3.0.weight, bbox_head.things_mask_head.blocks.3.attn.linear_l3.0.bias, bbox_head.things_mask_head.blocks.3.attn.linear.0.weight, bbox_head.things_mask_head.blocks.3.attn.linear.0.bias, bbox_head.things_mask_head.blocks.3.head_norm2.weight, bbox_head.things_mask_head.blocks.3.head_norm2.bias, bbox_head.things_mask_head.blocks.3.mlp.fc1.weight, bbox_head.things_mask_head.blocks.3.mlp.fc1.bias, bbox_head.things_mask_head.blocks.3.mlp.fc2.weight, bbox_head.things_mask_head.blocks.3.mlp.fc2.bias, bbox_head.things_mask_head.attnen.q.weight, bbox_head.things_mask_head.attnen.q.bias, bbox_head.things_mask_head.attnen.k.weight, bbox_head.things_mask_head.attnen.k.bias, bbox_head.things_mask_head.attnen.linear_l1.0.weight, bbox_head.things_mask_head.attnen.linear_l1.0.bias, bbox_head.things_mask_head.attnen.linear_l2.0.weight, bbox_head.things_mask_head.attnen.linear_l2.0.bias, bbox_head.things_mask_head.attnen.linear_l3.0.weight, bbox_head.things_mask_head.attnen.linear_l3.0.bias, bbox_head.things_mask_head.attnen.linear.0.weight, bbox_head.things_mask_head.attnen.linear.0.bias, bbox_head.stuff_mask_head.blocks.0.head_norm1.weight, bbox_head.stuff_mask_head.blocks.0.head_norm1.bias, bbox_head.stuff_mask_head.blocks.0.attn.q.weight, bbox_head.stuff_mask_head.blocks.0.attn.q.bias, bbox_head.stuff_mask_head.blocks.0.attn.k.weight, bbox_head.stuff_mask_head.blocks.0.attn.k.bias, bbox_head.stuff_mask_head.blocks.0.attn.v.weight, bbox_head.stuff_mask_head.blocks.0.attn.v.bias, bbox_head.stuff_mask_head.blocks.0.attn.proj.weight, bbox_head.stuff_mask_head.blocks.0.attn.proj.bias, bbox_head.stuff_mask_head.blocks.0.attn.linear_l1.0.weight, bbox_head.stuff_mask_head.blocks.0.attn.linear_l1.0.bias, bbox_head.stuff_mask_head.blocks.0.attn.linear_l2.0.weight, bbox_head.stuff_mask_head.blocks.0.attn.linear_l2.0.bias, bbox_head.stuff_mask_head.blocks.0.attn.linear_l3.0.weight, bbox_head.stuff_mask_head.blocks.0.attn.linear_l3.0.bias, bbox_head.stuff_mask_head.blocks.0.attn.linear.0.weight, 
bbox_head.stuff_mask_head.blocks.0.attn.linear.0.bias, bbox_head.stuff_mask_head.blocks.0.head_norm2.weight, bbox_head.stuff_mask_head.blocks.0.head_norm2.bias, bbox_head.stuff_mask_head.blocks.0.mlp.fc1.weight, bbox_head.stuff_mask_head.blocks.0.mlp.fc1.bias, bbox_head.stuff_mask_head.blocks.0.mlp.fc2.weight, bbox_head.stuff_mask_head.blocks.0.mlp.fc2.bias, bbox_head.stuff_mask_head.blocks.0.self_attention.qkv.weight, bbox_head.stuff_mask_head.blocks.0.self_attention.qkv.bias, bbox_head.stuff_mask_head.blocks.0.self_attention.proj.weight, bbox_head.stuff_mask_head.blocks.0.self_attention.proj.bias, bbox_head.stuff_mask_head.blocks.0.norm3.weight, bbox_head.stuff_mask_head.blocks.0.norm3.bias, bbox_head.stuff_mask_head.blocks.1.head_norm1.weight, bbox_head.stuff_mask_head.blocks.1.head_norm1.bias, bbox_head.stuff_mask_head.blocks.1.attn.q.weight, bbox_head.stuff_mask_head.blocks.1.attn.q.bias, bbox_head.stuff_mask_head.blocks.1.attn.k.weight, bbox_head.stuff_mask_head.blocks.1.attn.k.bias, bbox_head.stuff_mask_head.blocks.1.attn.v.weight, bbox_head.stuff_mask_head.blocks.1.attn.v.bias, bbox_head.stuff_mask_head.blocks.1.attn.proj.weight, bbox_head.stuff_mask_head.blocks.1.attn.proj.bias, bbox_head.stuff_mask_head.blocks.1.attn.linear_l1.0.weight, bbox_head.stuff_mask_head.blocks.1.attn.linear_l1.0.bias, bbox_head.stuff_mask_head.blocks.1.attn.linear_l2.0.weight, bbox_head.stuff_mask_head.blocks.1.attn.linear_l2.0.bias, bbox_head.stuff_mask_head.blocks.1.attn.linear_l3.0.weight, bbox_head.stuff_mask_head.blocks.1.attn.linear_l3.0.bias, bbox_head.stuff_mask_head.blocks.1.attn.linear.0.weight, bbox_head.stuff_mask_head.blocks.1.attn.linear.0.bias, bbox_head.stuff_mask_head.blocks.1.head_norm2.weight, bbox_head.stuff_mask_head.blocks.1.head_norm2.bias, bbox_head.stuff_mask_head.blocks.1.mlp.fc1.weight, bbox_head.stuff_mask_head.blocks.1.mlp.fc1.bias, bbox_head.stuff_mask_head.blocks.1.mlp.fc2.weight, bbox_head.stuff_mask_head.blocks.1.mlp.fc2.bias, bbox_head.stuff_mask_head.blocks.1.self_attention.qkv.weight, bbox_head.stuff_mask_head.blocks.1.self_attention.qkv.bias, bbox_head.stuff_mask_head.blocks.1.self_attention.proj.weight, bbox_head.stuff_mask_head.blocks.1.self_attention.proj.bias, bbox_head.stuff_mask_head.blocks.1.norm3.weight, bbox_head.stuff_mask_head.blocks.1.norm3.bias, bbox_head.stuff_mask_head.blocks.2.head_norm1.weight, bbox_head.stuff_mask_head.blocks.2.head_norm1.bias, bbox_head.stuff_mask_head.blocks.2.attn.q.weight, bbox_head.stuff_mask_head.blocks.2.attn.q.bias, bbox_head.stuff_mask_head.blocks.2.attn.k.weight, bbox_head.stuff_mask_head.blocks.2.attn.k.bias, bbox_head.stuff_mask_head.blocks.2.attn.v.weight, bbox_head.stuff_mask_head.blocks.2.attn.v.bias, bbox_head.stuff_mask_head.blocks.2.attn.proj.weight, bbox_head.stuff_mask_head.blocks.2.attn.proj.bias, bbox_head.stuff_mask_head.blocks.2.attn.linear_l1.0.weight, bbox_head.stuff_mask_head.blocks.2.attn.linear_l1.0.bias, bbox_head.stuff_mask_head.blocks.2.attn.linear_l2.0.weight, bbox_head.stuff_mask_head.blocks.2.attn.linear_l2.0.bias, bbox_head.stuff_mask_head.blocks.2.attn.linear_l3.0.weight, bbox_head.stuff_mask_head.blocks.2.attn.linear_l3.0.bias, bbox_head.stuff_mask_head.blocks.2.attn.linear.0.weight, bbox_head.stuff_mask_head.blocks.2.attn.linear.0.bias, bbox_head.stuff_mask_head.blocks.2.head_norm2.weight, bbox_head.stuff_mask_head.blocks.2.head_norm2.bias, bbox_head.stuff_mask_head.blocks.2.mlp.fc1.weight, bbox_head.stuff_mask_head.blocks.2.mlp.fc1.bias, bbox_head.stuff_mask_head.blocks.2.mlp.fc2.weight, 
bbox_head.stuff_mask_head.blocks.2.mlp.fc2.bias, bbox_head.stuff_mask_head.blocks.2.self_attention.qkv.weight, bbox_head.stuff_mask_head.blocks.2.self_attention.qkv.bias, bbox_head.stuff_mask_head.blocks.2.self_attention.proj.weight, bbox_head.stuff_mask_head.blocks.2.self_attention.proj.bias, bbox_head.stuff_mask_head.blocks.2.norm3.weight, bbox_head.stuff_mask_head.blocks.2.norm3.bias, bbox_head.stuff_mask_head.blocks.3.head_norm1.weight, bbox_head.stuff_mask_head.blocks.3.head_norm1.bias, bbox_head.stuff_mask_head.blocks.3.attn.q.weight, bbox_head.stuff_mask_head.blocks.3.attn.q.bias, bbox_head.stuff_mask_head.blocks.3.attn.k.weight, bbox_head.stuff_mask_head.blocks.3.attn.k.bias, bbox_head.stuff_mask_head.blocks.3.attn.v.weight, bbox_head.stuff_mask_head.blocks.3.attn.v.bias, bbox_head.stuff_mask_head.blocks.3.attn.proj.weight, bbox_head.stuff_mask_head.blocks.3.attn.proj.bias, bbox_head.stuff_mask_head.blocks.3.attn.linear_l1.0.weight, bbox_head.stuff_mask_head.blocks.3.attn.linear_l1.0.bias, bbox_head.stuff_mask_head.blocks.3.attn.linear_l2.0.weight, bbox_head.stuff_mask_head.blocks.3.attn.linear_l2.0.bias, bbox_head.stuff_mask_head.blocks.3.attn.linear_l3.0.weight, bbox_head.stuff_mask_head.blocks.3.attn.linear_l3.0.bias, bbox_head.stuff_mask_head.blocks.3.attn.linear.0.weight, bbox_head.stuff_mask_head.blocks.3.attn.linear.0.bias, bbox_head.stuff_mask_head.blocks.3.head_norm2.weight, bbox_head.stuff_mask_head.blocks.3.head_norm2.bias, bbox_head.stuff_mask_head.blocks.3.mlp.fc1.weight, bbox_head.stuff_mask_head.blocks.3.mlp.fc1.bias, bbox_head.stuff_mask_head.blocks.3.mlp.fc2.weight, bbox_head.stuff_mask_head.blocks.3.mlp.fc2.bias, bbox_head.stuff_mask_head.blocks.3.self_attention.qkv.weight, bbox_head.stuff_mask_head.blocks.3.self_attention.qkv.bias, bbox_head.stuff_mask_head.blocks.3.self_attention.proj.weight, bbox_head.stuff_mask_head.blocks.3.self_attention.proj.bias, bbox_head.stuff_mask_head.blocks.3.norm3.weight, bbox_head.stuff_mask_head.blocks.3.norm3.bias, bbox_head.stuff_mask_head.blocks.4.head_norm1.weight, bbox_head.stuff_mask_head.blocks.4.head_norm1.bias, bbox_head.stuff_mask_head.blocks.4.attn.q.weight, bbox_head.stuff_mask_head.blocks.4.attn.q.bias, bbox_head.stuff_mask_head.blocks.4.attn.k.weight, bbox_head.stuff_mask_head.blocks.4.attn.k.bias, bbox_head.stuff_mask_head.blocks.4.attn.v.weight, bbox_head.stuff_mask_head.blocks.4.attn.v.bias, bbox_head.stuff_mask_head.blocks.4.attn.proj.weight, bbox_head.stuff_mask_head.blocks.4.attn.proj.bias, bbox_head.stuff_mask_head.blocks.4.attn.linear_l1.0.weight, bbox_head.stuff_mask_head.blocks.4.attn.linear_l1.0.bias, bbox_head.stuff_mask_head.blocks.4.attn.linear_l2.0.weight, bbox_head.stuff_mask_head.blocks.4.attn.linear_l2.0.bias, bbox_head.stuff_mask_head.blocks.4.attn.linear_l3.0.weight, bbox_head.stuff_mask_head.blocks.4.attn.linear_l3.0.bias, bbox_head.stuff_mask_head.blocks.4.attn.linear.0.weight, bbox_head.stuff_mask_head.blocks.4.attn.linear.0.bias, bbox_head.stuff_mask_head.blocks.4.head_norm2.weight, bbox_head.stuff_mask_head.blocks.4.head_norm2.bias, bbox_head.stuff_mask_head.blocks.4.mlp.fc1.weight, bbox_head.stuff_mask_head.blocks.4.mlp.fc1.bias, bbox_head.stuff_mask_head.blocks.4.mlp.fc2.weight, bbox_head.stuff_mask_head.blocks.4.mlp.fc2.bias, bbox_head.stuff_mask_head.blocks.4.self_attention.qkv.weight, bbox_head.stuff_mask_head.blocks.4.self_attention.qkv.bias, bbox_head.stuff_mask_head.blocks.4.self_attention.proj.weight, bbox_head.stuff_mask_head.blocks.4.self_attention.proj.bias, 
bbox_head.stuff_mask_head.blocks.4.norm3.weight, bbox_head.stuff_mask_head.blocks.4.norm3.bias, bbox_head.stuff_mask_head.blocks.5.head_norm1.weight, bbox_head.stuff_mask_head.blocks.5.head_norm1.bias, bbox_head.stuff_mask_head.blocks.5.attn.q.weight, bbox_head.stuff_mask_head.blocks.5.attn.q.bias, bbox_head.stuff_mask_head.blocks.5.attn.k.weight, bbox_head.stuff_mask_head.blocks.5.attn.k.bias, bbox_head.stuff_mask_head.blocks.5.attn.v.weight, bbox_head.stuff_mask_head.blocks.5.attn.v.bias, bbox_head.stuff_mask_head.blocks.5.attn.proj.weight, bbox_head.stuff_mask_head.blocks.5.attn.proj.bias, bbox_head.stuff_mask_head.blocks.5.attn.linear_l1.0.weight, bbox_head.stuff_mask_head.blocks.5.attn.linear_l1.0.bias, bbox_head.stuff_mask_head.blocks.5.attn.linear_l2.0.weight, bbox_head.stuff_mask_head.blocks.5.attn.linear_l2.0.bias, bbox_head.stuff_mask_head.blocks.5.attn.linear_l3.0.weight, bbox_head.stuff_mask_head.blocks.5.attn.linear_l3.0.bias, bbox_head.stuff_mask_head.blocks.5.attn.linear.0.weight, bbox_head.stuff_mask_head.blocks.5.attn.linear.0.bias, bbox_head.stuff_mask_head.blocks.5.head_norm2.weight, bbox_head.stuff_mask_head.blocks.5.head_norm2.bias, bbox_head.stuff_mask_head.blocks.5.mlp.fc1.weight, bbox_head.stuff_mask_head.blocks.5.mlp.fc1.bias, bbox_head.stuff_mask_head.blocks.5.mlp.fc2.weight, bbox_head.stuff_mask_head.blocks.5.mlp.fc2.bias, bbox_head.stuff_mask_head.blocks.5.self_attention.qkv.weight, bbox_head.stuff_mask_head.blocks.5.self_attention.qkv.bias, bbox_head.stuff_mask_head.blocks.5.self_attention.proj.weight, bbox_head.stuff_mask_head.blocks.5.self_attention.proj.bias, bbox_head.stuff_mask_head.blocks.5.norm3.weight, bbox_head.stuff_mask_head.blocks.5.norm3.bias, bbox_head.stuff_mask_head.attnen.q.weight, bbox_head.stuff_mask_head.attnen.q.bias, bbox_head.stuff_mask_head.attnen.k.weight, bbox_head.stuff_mask_head.attnen.k.bias, bbox_head.stuff_mask_head.attnen.linear_l1.0.weight, bbox_head.stuff_mask_head.attnen.linear_l1.0.bias, bbox_head.stuff_mask_head.attnen.linear_l2.0.weight, bbox_head.stuff_mask_head.attnen.linear_l2.0.bias, bbox_head.stuff_mask_head.attnen.linear_l3.0.weight, bbox_head.stuff_mask_head.attnen.linear_l3.0.bias, bbox_head.stuff_mask_head.attnen.linear.0.weight, bbox_head.stuff_mask_head.attnen.linear.0.bias
    
    opened by aviadmx 3
  • ImportError: libtorch_cpu.so: undefined symbol

    Thank you for this awesome work.

    Unfortunately, I can't run the training because I get the following error:

    ./tools/dist_train.sh ./configs/panformer/panformer_r50_24e_coco_panoptic.py 1
    + CONFIG=./configs/panformer/panformer_r50_24e_coco_panoptic.py
    + GPUS=1
    + PORT=29503
    ++ dirname ./tools/dist_train.sh
    ++ dirname ./tools/dist_train.sh
    + PYTHONPATH=./tools/..:
    + python -m torch.distributed.launch --nproc_per_node=1 --master_port=29503 ./tools/train.py ./configs/panformer/panformer_r50_24e_coco_panoptic.py --launcher pytorch --deterministic
    Traceback (most recent call last):
      File "/home/vision/anaconda3/envs/psf/lib/python3.7/runpy.py", line 183, in _run_module_as_main
        mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
      File "/home/vision/anaconda3/envs/psf/lib/python3.7/runpy.py", line 109, in _get_module_details
        __import__(pkg_name)
      File "/home/vision/anaconda3/envs/psf/lib/python3.7/site-packages/torch/__init__.py", line 197, in <module>
        from torch._C import *  # noqa: F403
    ImportError: /home/vision/anaconda3/envs/psf/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so: undefined symbol: _ZNK3c1010TensorImpl23shallow_copy_and_detachERKNS_15VariableVersionEb
    

    This is my environment:

    (environment screenshots omitted)

    opened by EnnioEvo 0
Owner
Nanjing University, China.