METER: Multimodal End-to-end TransformER

Overview

Code and pre-trained models will be released publicly soon.

Citation

@article{dou2021meter,
  title={An Empirical Study of Training End-to-End Vision-and-Language Transformers},
  author={Dou, Zi-Yi and Xu, Yichong and Gan, Zhe and Wang, Jianfeng and Wang, Shuohang and Wang, Lijuan and Zhu, Chenguang and Peng, Nanyun and Liu, Zicheng and Zeng, Michael},
  journal={arXiv},
  year={2021},
  url={https://arxiv.org/abs/2111.02387},
}

Acknowledgements

The code is based on ViLT, and parts of it are borrowed from CLIP and Swin-Transformer.

Comments
  • questions about VQA

    Hi, could you share the VQAv2 result from fine-tuning with an image resolution of 384? The result I reproduced is 76.52, based on your checkpoint pre-trained on COCO, SBU, VG, and CC3M.

    opened by Henry9805 20
  • Some questions for the paper

    What is the difference between the scores in Table 5 and Table 8? Table 5 reports 77.19 on the VQAv2 test-dev set, while Table 8 reports 77.68 on the same set.

    opened by wanng-ide 17
  • What is the per-GPU batch size?

    The total batch size is 4096 and there are 8 GPUs, so is the per-GPU batch size 512? I am using A100 GPUs, but the batch size can only be set to 16. (The gradient-accumulation arithmetic sketched below may explain this.)

    opened by qiao1025566574 5
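
    For what it's worth, ViLT-style configs usually reach the large effective batch size through gradient accumulation rather than a huge per-GPU batch, so per_gpu_batchsize=16 can still be valid. A minimal sketch of the arithmetic (function and argument names are illustrative, not the repo's exact code):

    def grad_accum_steps(batch_size: int, per_gpu_batchsize: int,
                         num_gpus: int, num_nodes: int) -> int:
        """Gradient-accumulation steps needed to reach the effective batch size."""
        world_size = num_gpus * num_nodes
        return max(batch_size // (per_gpu_batchsize * world_size), 1)

    # Example from the question: total 4096 on 8 GPUs with a per-GPU batch of 16
    print(grad_accum_steps(4096, 16, 8, 1))  # -> 32 accumulation steps
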
  • pretraining task

    Hello, great work! I'm curious whether you have tried adding image-text contrastive (ITC) learning to the pre-training tasks, since the ALBEF paper reported that ITC had a large impact on their results. (A generic sketch of the ITC objective follows below.)

    opened by mactavish91 4
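
    For context, a generic sketch of the image-text contrastive (ITC) objective as used in CLIP/ALBEF-style models; this is not METER's code, and the feature names are placeholders:

    import torch
    import torch.nn.functional as F

    def itc_loss(image_feats, text_feats, temperature=0.07):
        # Symmetric InfoNCE over in-batch pairs; row i of each tensor is a matched pair.
        image_feats = F.normalize(image_feats, dim=-1)
        text_feats = F.normalize(text_feats, dim=-1)
        logits = image_feats @ text_feats.t() / temperature  # (B, B) similarities
        targets = torch.arange(logits.size(0), device=logits.device)
        loss_i2t = F.cross_entropy(logits, targets)      # image -> text direction
        loss_t2i = F.cross_entropy(logits.t(), targets)  # text -> image direction
        return (loss_i2t + loss_t2i) / 2
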
  • Inference with Fine-tuned SNLI Model

    Hi,

    Thank you for the great work and the fine-tuned models, but I just wanted to ask how I should go about running inference with the fine-tuned model. Currently, I run into this error in my notebook:

    1 model = METERTransformerSS(cfg)
    ----> 2 model.load_state_dict(torch.load("/content/meter_clip16_288_roberta_snli.ckpt")['state_dict'])
    
    /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict)
       1050         if len(error_msgs) > 0:
       1051             raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
    -> 1052                                self.__class__.__name__, "\n\t".join(error_msgs)))
       1053         return _IncompatibleKeys(missing_keys, unexpected_keys)
       1054 
    
    RuntimeError: Error(s) in loading state_dict for METERTransformerSS:
    	Unexpected key(s) in state_dict: "vit_model.token_embedding.weight". 
    	size mismatch for vit_model.visual.positional_embedding: copying a param with shape torch.Size([577, 768]) from checkpoint, the shape in current model is torch.Size([197, 768]).
    

    I wonder whether this is due to how I configured the model. Is there a specific way I should create the config for inference? Thank you in advance. (One possible workaround is sketched below.)

    opened by sramshetty 4
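
    One possible workaround, assuming the mismatch comes from the fine-tuning resolution: 577 = (384/16)² + 1 patch tokens, while 197 = (224/16)² + 1, so the checkpoint appears to have been fine-tuned at 384×384. The config key follows the repo's conventions but is an assumption here:

    import torch

    cfg["image_size"] = 384  # match the checkpoint's fine-tuning resolution (assumption)
    model = METERTransformerSS(cfg)

    state_dict = torch.load("/content/meter_clip16_288_roberta_snli.ckpt",
                            map_location="cpu")["state_dict"]
    # strict=False skips keys the current model does not define,
    # e.g. "vit_model.token_embedding.weight", which appears to belong
    # to the CLIP text tower.
    missing, unexpected = model.load_state_dict(state_dict, strict=False)
    print("missing:", missing)
    print("unexpected:", unexpected)
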
  • The model meter_clip16_288_roberta_flickr.ckpt is inconsistent with the network weight parameter dimension

    Hi, thank you for your excellent work. May I use the model "METER-CLIP16-RoBERTa fine-tuned on Flickr30k IR/TR (resolution: 384^2)" as meter_clip16_288_roberta_flickr.ckpt? Why does the code report an error about inconsistent weight dimensions? Thank you for answering my question. (One generic way to reconcile the dimensions is sketched below.)

    opened by attutude 4
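
    If matching image_size in the config does not resolve it, a generic way to reconcile checkpoint and model shapes is to interpolate the ViT positional embeddings. A sketch under that assumption (not METER's exact code, which may handle this internally):

    import torch
    import torch.nn.functional as F

    def resize_pos_embed(pos_embed, new_grid):
        """Bilinearly interpolate ViT positional embeddings to a new patch grid,
        keeping the class token. pos_embed has shape (1 + old_grid**2, dim)."""
        cls_tok, patches = pos_embed[:1], pos_embed[1:]
        old_grid = int(patches.shape[0] ** 0.5)           # e.g. 24 for 577 tokens
        d = patches.shape[1]
        patches = patches.reshape(old_grid, old_grid, d).permute(2, 0, 1)[None]
        patches = F.interpolate(patches, size=(new_grid, new_grid),
                                mode="bilinear", align_corners=False)
        patches = patches[0].permute(1, 2, 0).reshape(new_grid * new_grid, d)
        return torch.cat([cls_tok, patches], dim=0)

    # Example: map a 384-resolution embedding (577 tokens) to 288 resolution
    # with 16x16 patches: new grid = 288 // 16 = 18, giving 325 tokens.
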
  • Unable to train models faster with more GPUs

    Hi, I am facing an issue where, on increasing the number of GPUs and nodes, the number of steps in each epoch does not change. For example, if I run

    python run.py with data_root=/data/datasets/meter_data_combined num_gpus=4 num_nodes=8 task_mlm_itm_clip_bert per_gpu_batchsize=64 clip16 text_roberta image_size=224 precision=16 datasets='["vg"]'

    the number of steps per epoch is nearly 150k. I observe that the number of steps is 150k both when num_gpus=1 num_nodes=1 and when num_gpus=4 num_nodes=8. I made sure that all GPUs were being utilized when I set num_gpus=4 num_nodes=8. I also observe that with num_gpus=4 num_nodes=8, the time for each epoch is ~160 hours in my case, while it is ~30 hours with num_gpus=1 num_nodes=1.

    Do you have any suggestions for this problem? (A quick sanity check is sketched below.)

    opened by HarmanDotpy 3
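
    Under PyTorch Lightning's DDP, each rank should receive a DistributedSampler shard, so steps per epoch should shrink roughly by the world size; if they do not, the sampler is worth checking. A hypothetical sanity check, run inside an initialized DDP process, where `dm` is the MTDataModule constructed in run.py:

    import torch.distributed as dist

    loader = dm.train_dataloader()
    print("world size:", dist.get_world_size())
    print("sampler:", type(loader.sampler).__name__)  # expect DistributedSampler under DDP
    print("steps per epoch on this rank:", len(loader))
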
  • GPU OOM when pretraining

    Hi, I'm trying to pre-train METER using 8 A100 GPUs with the recommended config:

    python run.py with num_gpus=8 num_nodes=1 task_mlm_itm_clip_bert per_gpu_batchsize=32 clip16 text_roberta image_size=288
    

    but the GPU ran out of memory.

    So, what is the exact per_gpu_batchsize? And how can I pre-train the model in about 8 days, as mentioned in the paper?

    By the way, will mixed-precision training (precision=16) cause a performance drop?

    Many thanks!

    opened by hi-zhenyu 3
  • Training settings for different pre-training datasets

    When I tried to reproduce the results in Table 17, I found that using the default learning rate with only the COCO pre-training dataset worked extremely poorly on downstream tasks.

    So, I would like to ask: do you set different training parameters (e.g., lr, bs, max epochs) for different pre-training datasets?

    opened by ShiYaya 2
  • question about the pre-trained weights

    Dear authors, thanks for the great work! I have downloaded the pre-trained weights of the ViT-B-16(224)+RoBERTa checkpoint from https://github.com/zdou0830/METER/releases/download/checkpoint2/meter_clip16_224_roberta_pretrain.ckpt, and found that the last layer of the visual encoder ("vit_model.visual.transformer.resblocks.11...") is not included in the ckpt file. Did I miss something? Could you please help me check? (A quick way to inspect the checkpoint is sketched below.)

    opened by Junction4Nako 2
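
    A quick way to check which residual blocks a checkpoint actually contains; the path and key prefix here mirror the question, but verify them against your local file:

    import torch

    state_dict = torch.load("meter_clip16_224_roberta_pretrain.ckpt",
                            map_location="cpu")["state_dict"]
    prefix = "vit_model.visual.transformer.resblocks."
    block_ids = sorted({int(k[len(prefix):].split(".")[0])
                        for k in state_dict if k.startswith(prefix)})
    print(block_ids)  # shows exactly which residual blocks the file contains
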
  • About license

    Thanks for the great work! The codebase is released under an MIT license (https://github.com/zdou0830/METER/blob/main/LICENSE) and an Apache License (https://github.com/zdou0830/METER/blob/main/ViLT_LICENSE).

    I want to know whether the pre-trained models are also released under the same licenses. Thanks.

    opened by WangWenhao0716 2
  • Pretrained weights of CLIP-ViT-224/32

    Hi,

    Thanks for the code! I wonder if you plan to release the pretrained weights of CLIP-ViT-224/32 (e.g., METER-CLIP32-RoBERTa (resolution: 224^2) pre-trained on GCC+SBU+COCO+VG)? It would be helpful for those who want to play with your model but don't have enough computational resources. Thanks!

    opened by bfshi 0
  • The last checkpoint or the best one on the Val split?

    Hi, I'm confused about which checkpoint to test with on the downstream tasks.

    I wonder which checkpoint I should use for evaluation: the last one, or the top-1 checkpoint saved on the val split? (A checkpointing sketch follows below.)

    opened by hi-zhenyu 3
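
    For reference, in PyTorch Lightning this choice is governed by the ModelCheckpoint callback; a sketch that keeps both the best validation checkpoint and the last one (the monitored metric name is a placeholder, use whatever the task logs):

    from pytorch_lightning.callbacks import ModelCheckpoint

    checkpoint_callback = ModelCheckpoint(
        monitor="val/the_metric",  # placeholder: use the metric the task actually logs
        mode="max",                # higher is better for accuracy-style metrics
        save_top_k=1,              # keep only the best checkpoint on the val split
        save_last=True,            # also keep the last one, e.g. for resuming
    )
    # trainer = pl.Trainer(callbacks=[checkpoint_callback], ...)
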
  • Why are the test results different on the same data?

    I used pl.seed_everything to set the seed,

    pl.seed_everything(_config["seed"], workers=True)
    

    but I still got different results when I tested the Flickr30k image-to-text retrieval task on a model I trained myself. First run:

    (tensor(0.7382), tensor(0.9274), tensor(0.9638), tensor(0.8965), tensor(0.9814), tensor(0.9941)) 0
    

    Second run:

    (tensor(0.7366), tensor(0.9294), tensor(0.9656), tensor(0.8975), tensor(0.9814), tensor(0.9941)) 0
    

    I made sure the config files are the same. Have you encountered this problem? (A reproducibility recipe is sketched below.)

    opened by qiao1025566574 1
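
    Seeding alone does not make every CUDA kernel deterministic, and retrieval differences at the third decimal place are typical of nondeterministic GPU ops. A general PyTorch recipe (not METER-specific) for forcing determinism, at some speed cost:

    import os
    import torch
    import pytorch_lightning as pl

    pl.seed_everything(0, workers=True)
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # required by some cuBLAS kernels
    torch.use_deterministic_algorithms(True)  # error out on nondeterministic ops
    torch.backends.cudnn.benchmark = False    # autotuning can pick different kernels per run

    # trainer = pl.Trainer(deterministic=True, ...)
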
  • ValueError and AttributeError

    Hi, I'm trying to make run.py work for pre-training, but I got a ValueError and an AttributeError and couldn't find a solution. Can you help me check it? Thank you very much! (A possible workaround for the connection error is sketched below.)

    Traceback (most recent call last):
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/sacred/experiment.py", line 312, in run_commandline
        return self.run(
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/sacred/experiment.py", line 276, in run
        run()
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/sacred/run.py", line 238, in __call__
        self.result = self.main_function(*args)
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/sacred/config/captured_function.py", line 42, in captured_function
        result = wrapped(*args, **kwargs)
      File "run.py", line 20, in main
        dm = MTDataModule(_config, dist=True)
      File "/home/T3090U3/PycharmProjects/hxf/METER/METER-main/meter/datamodules/multitask_datamodule.py", line 19, in __init__
        self.dm_dicts = {key: _datamodules[key](_config) for key in datamodule_keys}
      File "/home/T3090U3/PycharmProjects/hxf/METER/METER-main/meter/datamodules/multitask_datamodule.py", line 19, in <dictcomp>
        self.dm_dicts = {key: _datamodules[key](_config) for key in datamodule_keys}
      File "/home/T3090U3/PycharmProjects/hxf/METER/METER-main/meter/datamodules/coco_caption_karpathy_datamodule.py", line 7, in __init__
        super().__init__(*args, **kwargs)
      File "/home/T3090U3/PycharmProjects/hxf/METER/METER-main/meter/datamodules/datamodule_base.py", line 60, in __init__
        self.tokenizer = get_pretrained_tokenizer(tokenizer)
      File "/home/T3090U3/PycharmProjects/hxf/METER/METER-main/meter/datamodules/datamodule_base.py", line 25, in get_pretrained_tokenizer
        return RobertaTokenizer.from_pretrained(from_pretrained)
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1672, in from_pretrained
        resolved_vocab_files[file_id] = cached_path(
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/transformers/file_utils.py", line 1271, in cached_path
        output_path = get_from_cache(
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/transformers/file_utils.py", line 1494, in get_from_cache
        raise ValueError(
    ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "run.py", line 16, in <module>
        def main(_config):
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/sacred/experiment.py", line 190, in automain
        self.run_commandline()
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/sacred/experiment.py", line 347, in run_commandline
        print_filtered_stacktrace()
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/sacred/utils.py", line 493, in print_filtered_stacktrace
        print(format_filtered_stacktrace(filter_traceback), file=sys.stderr)
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/sacred/utils.py", line 528, in format_filtered_stacktrace
        return "".join(filtered_traceback_format(tb_exception))
      File "/home/T3090U3/anaconda3/envs/hxf/lib/python3.8/site-packages/sacred/utils.py", line 568, in filtered_traceback_format
        current_tb = tb_exception.exc_traceback
    AttributeError: 'TracebackException' object has no attribute 'exc_traceback'

    opened by huhuhud 3
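
    The ValueError is raised because RobertaTokenizer.from_pretrained could not reach the Hugging Face Hub and found no cached copy. One workaround is to download the tokenizer once on a machine with internet access and point the datamodule's tokenizer setting at the local directory (the local path below is an example):

    from transformers import RobertaTokenizer

    # On a machine with internet access, download and save a local copy once:
    tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
    tokenizer.save_pretrained("./roberta-base-local")

    # On the offline machine, load from the local directory instead of the Hub:
    tokenizer = RobertaTokenizer.from_pretrained("./roberta-base-local")
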
  • Pre-trained models for the Merged Attention Model?

    Thanks for the amazing repository; the code is really clean. If I understand correctly, the current implementation is the co-attention model, and the same goes for the pre-trained weights. I wanted to know if you have plans to release the merged-attention model weights as well! Thanks in advance!

    opened by TheShadow29 1
Owner
Zi-Yi Dou (窦子轶)