Moment-DETR

QVHighlights: Detecting Moments and Highlights in Videos via Natural Language Queries

Jie Lei, Tamara L. Berg, Mohit Bansal

For dataset details, please check data/README.md

Getting Started

Prerequisites

  1. Clone this repo

git clone https://github.com/jayleicn/moment_detr.git
cd moment_detr

  2. Prepare feature files

Download moment_detr_features.tar.gz (8GB), extract it under the project root directory:

tar -xf path/to/moment_detr_features.tar.gz

  3. Install dependencies.

This code requires Python 3.7, PyTorch, and a few other Python libraries. We recommend creating a conda environment and installing all the dependencies as follows:

# create conda env
conda create --name moment_detr python=3.7
# activate env
conda activate moment_detr
# install pytorch with CUDA 11.0
conda install pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch
# install other python packages
pip install tqdm ipython easydict tensorboard tabulate scikit-learn pandas

Training

Training can be launched by running the following command:

bash moment_detr/scripts/train.sh 

This will train Moment-DETR for 200 epochs on the QVHighlights train split, with SlowFast and OpenAI CLIP features. Training is fast: it can be done within 4 hours on a single RTX 2080Ti GPU. The checkpoints and other experiment log files will be written into results. To train under different settings, append additional command-line flags to the command above. For example, to train the model without the saliency loss (by setting the corresponding loss weight to 0):

bash moment_detr/scripts/train.sh --lw_saliency 0

For more configurable options, please check out our config file moment_detr/config.py.

Inference

Once the model is trained, you can use the following command for inference:

bash moment_detr/scripts/inference.sh CHECKPOINT_PATH SPLIT_NAME  

where CHECKPOINT_PATH is the path to the saved checkpoint and SPLIT_NAME is the split to run inference on, which can be either val or test.

Pretraining and Finetuning

Moment-DETR utilizes ASR captions for weakly supervised pretraining. To launch pretraining, run:

bash moment_detr/scripts/pretrain.sh 

This will pretrain the Moment-DETR model on the ASR captions for 100 epochs; the pretrained checkpoints and other experiment log files will be written into results. With a pretrained checkpoint PRETRAIN_CHECKPOINT_PATH, finetuning can be launched as:

bash moment_detr/scripts/train.sh  --resume ${PRETRAIN_CHECKPOINT_PATH}

Note that this finetuning process is the same as standard training except that it initializes weights from a pretrained checkpoint.
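A minimal sketch of what "initialize from a pretrained checkpoint" amounts to in plain PyTorch (an illustration only; the checkpoint path and the key holding the weights are assumptions, and the repo's --resume handling may differ):

import torch
from torch import nn

model = nn.Linear(4, 2)  # stand-in for the Moment-DETR model
ckpt = torch.load("results/pretrain/model_best.ckpt", map_location="cpu")  # hypothetical path
state = ckpt.get("model", ckpt)  # which key holds the weights depends on how the checkpoint was saved
model.load_state_dict(state, strict=False)  # strict=False tolerates mismatched heads when finetuning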

Evaluation and Codalab Submission

Please check standalone_eval/README.md for details.

Acknowledgement

We thank Linjie Li for the helpful discussions. This code is based on detr and TVRetrieval XML. We used resources from mdetr, MMAction2, CLIP, SlowFast and HERO_Video_Feature_Extractor. We thank the authors for their awesome open-source contributions.

LICENSE

The annotation files are under the CC BY-NC-SA 4.0 license, see ./data/LICENSE. All the code is under the MIT license, see LICENSE.

Comments
  • About experiments on CharadesSTA dataset

    Hi, I noticed that you also conduct experiments on the CharadesSTA dataset. I'm wondering how you prepared the video features for CharadesSTA. Could you share the feature files you prepared?

    opened by xljh0520 8
  • About the annotations

    Hi @jayleicn, thanks for your great work! I notice that in the annotation files, as shown below, the duration of a video (126s) does not match the actual duration (810s - 660s = 150s). May I ask whether I should crop the original video to 126s before processing in this case?

    {
        "qid": 8737, 
        "query": "A family is playing basketball together on a green court outside.", 
        "duration": 126, 
        "vid": "bP5KfdFJzC4_660.0_810.0", 
        "relevant_windows": [[0, 16]],
        "relevant_clip_ids": [0, 1, 2, 3, 4, 5, 6, 7], 
        "saliency_scores": [[4, 1, 1], [4, 1, 1], [4, 2, 1], [4, 3, 2], [4, 3, 2], [4, 3, 3], [4, 3, 3], [4, 3, 2]]
    }
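    For reference, a minimal sketch (not repo code) of how these fields relate, assuming the dataset's 2-second clip length: relevant_clip_ids index consecutive 2-sec clips, so ids 0-7 cover seconds [0, 16], which matches relevant_windows.

    CLIP_LEN = 2  # seconds per clip in QVHighlights annotations

    def clip_ids_to_window(clip_ids):
        """Convert consecutive 2-sec clip indices to a [start, end] window in seconds."""
        return [clip_ids[0] * CLIP_LEN, (clip_ids[-1] + 1) * CLIP_LEN]

    print(clip_ids_to_window([0, 1, 2, 3, 4, 5, 6, 7]))  # -> [0, 16]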
    
    opened by yeliudev 4
  • CodaLab Submission Error

    Hi, I recently generated the test and validation results and submitted them to CodaLab with the following structure:

    --Submit.zip
    ----hl_val_submission.jsonl
    ----hl_test_submission.jsonl
    

    CodaLab gave me the error IOError: [Errno 2] No such file or directory: '/tmp/codalab/tmphfqu8Q/run/input/res/hl_test_submission.jsonl'

    How can I solve this problem?

    opened by vateye 3
  • Video feature extraction

    Hi, thanks for your excellent work! I found that the provided video features include both CLIP features and SlowFast features. However, run_on_video/run.py only extracts the CLIP features. Is there a mistake here? Also, could you please provide a run.py that extracts both CLIP and SlowFast features? Thank you.

    opened by fxqzb 2
  • About paper

    Hi, we think Moment-DETR has great potential, but looking at Table 6 in the paper, the moment retrieval metrics on the Charades-STA dataset are not much higher than those of IVG-DCL (notably, IVG-DCL uses C3D features for video and GloVe for text embeddings, while your work uses CLIP + SlowFast features). Have you ever tested on other video grounding datasets, like ActivityNet?

    opened by BMEI1314 2
  • About dataset?

    Good job. I have read the paper and the GitHub repository, but I still don't understand how the features under the features folder, such as clip_features, clip_sub_features, clip_text_features, and slowfast_features, are extracted. Could you describe the extraction details if it is convenient?

    opened by dourcer 2
  • [Request for the approval in competition] Hello. can you approve the request?

    Hello.

    Thanks for the great work. Motivated by the work and the interesting topic, we sincerely hope to be approved to join the competition.

    Thank you! By the way, sorry for bothering you.

    Regards.

    opened by wjun0830 1
  • Meaning of GT saliency scores

    Thank you for your great work and open-source code.

    I have a question about the GT saliency scores (only localized 2-sec clips); could you please explain briefly what they mean? Also, how do the predicted saliency scores (for all 2-sec clips) correspond to them?

    Thanks!

    Best, Kevin

    Build models...
    Loading feature extractors...
    Loading CLIP models
    Loading trained Moment-DETR model...
    Run prediction...
    ------------------------------idx0
    >> query: Chef makes pizza and cuts it up.
    >> video_path: run_on_video/example/RoripwjYFp8_60.0_210.0.mp4
    >> GT moments: [[106, 122]]
    >> Predicted moments ([start_in_seconds, end_in_seconds, score]): [
        [49.967, 64.9129, 0.9421], 
        [66.4396, 81.0731, 0.9271], 
        [105.9434, 122.0372, 0.9234], 
        [93.2057, 103.3713, 0.2222], 
        ..., 
        [45.3834, 52.2183, 0.0005]
       ]
    >> GT saliency scores (only localized 2-sec clips):  # what it means?
        [[2, 3, 3], [2, 3, 3], ...]
    >> Predicted saliency scores (for all 2-sec clip):  # how this correspond to the GT saliency scores?
        [-0.9258, -0.8115, -0.7598, ..., 0.0739, 0.1068]  
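    As a rough illustration (an assumption, not repo code): each GT saliency entry holds three annotator ratings for one relevant 2-sec clip, while the model predicts a single raw, unnormalized score for every 2-sec clip of the video; the two are compared by ranking, so only relative order matters and negative predictions are fine.

    import numpy as np

    gt_scores = np.array([[2, 3, 3], [2, 3, 3]])        # per relevant clip: ratings from 3 annotators
    gt_per_clip = gt_scores.mean(axis=1)                 # one way to aggregate: average over annotators
    pred_scores = np.array([-0.9258, -0.8115, 0.1068])   # raw model scores for all clips (unbounded)
    top_clip = int(pred_scores.argmax())                 # highlight candidate = highest-scoring clip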
    
    opened by QinghongLin 1
  • How do I make my dataset ?

    Hi, congrats on the amazing work. I want to build a dataset similar to QVHighlights for my research direction, and I have a few questions:

    1. What annotation tools do you use, and what are the details of the annotation process?
    2. How do you use CLIP to extract the QVHighlights text features? Can you provide the specific code? (See the sketch below.)
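    On the second point, a minimal sketch (an assumption based on the public openai/clip package; the authors' actual extraction pipeline may differ, e.g., in model variant or in storing token-level rather than pooled features):

    import clip
    import torch

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, _ = clip.load("ViT-B/32", device=device)  # model variant is an assumption
    tokens = clip.tokenize(["A family is playing basketball together on a green court outside."]).to(device)
    with torch.no_grad():
        text_features = model.encode_text(tokens)  # (1, 512) pooled text embedding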

    opened by Yangaiei 1
  • About File missing in run_on_video

    Thank you for your wonderful work! However, when I tried to run your demo in the run_on_video folder, the file bpe_simple_vocab_16e6.txt.gz needed by the tokenizer was missing. Could you provide this file?

    FileNotFoundError: [Errno 2] No such file or directory: 'moment_detr/run_on_video/clip/bpe_simple_vocab_16e6.txt.gz'

    opened by lmfethan 1
  • The meaning of "tef"

    Hi, I have a question about the "tef" in the visual features:

    if self.use_tef:
        tef_st = torch.arange(0, ctx_l, 1.0) / ctx_l
        tef_ed = tef_st + 1.0 / ctx_l
        tef = torch.stack([tef_st, tef_ed], dim=1)  # (Lv, 2)
        if self.use_video:
            model_inputs["video_feat"] = torch.cat(
                [model_inputs["video_feat"], tef], dim=1)  # (Lv, Dv+2)
        else:
            model_inputs["video_feat"] = tef
    

    What does "tef" mean in the visual feature? Thanks in advance.
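    As a quick check of what the snippet computes, here is a minimal sketch (plain PyTorch, not repo code) for a video with ctx_l = 4 clips; each row of tef is a clip's normalized [start, end] position within the video, which gets concatenated onto that clip's visual feature:

    import torch

    ctx_l = 4
    tef_st = torch.arange(0, ctx_l, 1.0) / ctx_l   # tensor([0.00, 0.25, 0.50, 0.75])
    tef_ed = tef_st + 1.0 / ctx_l                  # tensor([0.25, 0.50, 0.75, 1.00])
    tef = torch.stack([tef_st, tef_ed], dim=1)     # shape (4, 2)
    print(tef)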

    opened by vateye 1
  • Slowfast config setting

    Hi, thanks for your good work and released code!

    I have a question regarding the feature extractor: which configuration did you adopt for the QVHighlights SlowFast features, e.g., SLOWFAST_8x8_R50?

    Thanks!

    Kevin

    opened by QinghongLin 0
  • predicted saliency scores

    1. How are the predicted saliency scores (for all 2-sec clips) calculated?

    >> Predicted saliency scores (for all 2-sec clip): 
        [-0.9258, -0.8115, -0.7598, ..., 0.0739, 0.1068]  

    2. Are they the average of the scores of the three annotators? And why are the predicted saliency scores negative?
    opened by Yangaiei 0