Code and data from the paper BERT Got a Date: Introducing Transformers to Temporal Tagging

Overview

BERT Got a Date: Introducing Transformers to Temporal Tagging

Satya Almasian*, Dennis Aumiller*, and Michael Gertz
Heidelberg University
Contact us via: <lastname>@informatik.uni-heidelberg.de

Code and data for the paper BERT Got a Date: Introducing Transformers to Temporal Tagging. Temporal tagging is the task of identifying temporal mentions in text; these expressions can be further divided into type categories, which is what we refer to as expression (type) classification. This repository describes two different types of transformer-based temporal taggers, both of which are additionally capable of expression classification. We follow the TIMEX3 schema definitions in their styling and expression classes (notably, the latter are one of TIME, DATE, SET, and DURATION). The available data sources for temporal tagging use the TimeML format, which is essentially a form of XML with tags encapsulating temporal expressions.
An example can be seen below:

Due to lockdown restrictions, 2020 might go down as the worst economic year in over <TIMEX3 tid="t2" type="DURATION" value="P1DE">a decade</TIMEX3>.
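
To give an idea of how such annotations can be consumed programmatically, the following minimal sketch (not part of the repository) extracts TIMEX3 spans from a TimeML-style string with a regular expression; note that real data may order the attributes differently or carry additional ones:

import re

# Illustrative pattern for TIMEX3 tags with tid, type and value attributes in this order.
TIMEX3_PATTERN = re.compile(
    r'<TIMEX3\s+tid="(?P<tid>[^"]*)"\s+type="(?P<type>[^"]*)"\s+value="(?P<value>[^"]*)">(?P<text>.*?)</TIMEX3>'
)

sentence = ('Due to lockdown restrictions, 2020 might go down as the worst economic year '
            'in over <TIMEX3 tid="t2" type="DURATION" value="P1DE">a decade</TIMEX3>.')

for match in TIMEX3_PATTERN.finditer(sentence):
    # Prints the annotated span together with its expression type and normalized value.
    print(match.group("text"), match.group("type"), match.group("value"))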

For more data instances, look at the content of data.zip. Refer to the README file in the respective unzipped folder for more information.
This repository contains code for data preparation and training of a seq2seq model (an encoder-decoder architecture initialized from encoder-only models, specifically BERT or RoBERTa), as well as three token classification encoders (BERT-based).
The output of the models discussed in the paper is in the results folder. Refer to the README file in the folder for more information.

Data Preparation

The scripts to generate training data are in the subfolder data_preparation. For more usage information, refer to the README file in the subfolder. The data used for training and evaluation is provided in zipped form in data.zip.

Evaluation

For evaluation, we use a slightly modified version of the TempEval-3 evaluation toolkit (original source here). We refactored the code to be compatible with Python 3 and incorporated additional evaluation metrics, such as a confusion matrix for type classification. We cross-referenced the results to ensure full backward compatibility: both versions produce exactly the same results on all runs. Our adjusted code, as well as the scripts to convert the output of the transformer-based tagging models, are in the evaluation subfolder. For more usage information, refer to the README file in the respective subfolder.
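
The confusion matrix itself is produced by the adjusted toolkit; purely as an illustration of the idea (this is not the toolkit's code), a matrix over the four TIMEX3 types could be computed with scikit-learn as follows, using hypothetical gold and predicted type lists:

from sklearn.metrics import confusion_matrix

# Hypothetical gold and predicted TIMEX3 types for a handful of matched expressions.
gold = ["DATE", "DURATION", "TIME", "DATE", "SET"]
pred = ["DATE", "DATE", "TIME", "DATE", "SET"]

labels = ["DATE", "TIME", "DURATION", "SET"]
print(confusion_matrix(gold, pred, labels=labels))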

Temporal models

We train and evaluate two types of setups for joint temporal tagging and classification:

  • Token Classification: We define three variants of simple token classifiers; all of them are based on Huggingface's BertForTokenClassification. We adapt their "token classification for named entity recognition script" to train these models. All the models are trained using bert-base-uncased as their pre-trained checkpoint.
  • Text-to-Text Generation (Seq2Seq): These models are encoder-decoder architectures using BERT or RoBERTa for initial weights. We use Huggingface's EncoderDecoder class for initialization of weights, starting from bert-base-uncased and roberta-base, respectively.

Seq2seq

To train the seq2seq models, use run_seq2seq_bert_roberta.py. Example usage is as follows:

python3 run_seq2seq_bert_roberta.py --model_name roberta-base --pre_train True \
--model_dir ./test --train_data ./data/seq2seq/train/tempeval_train.json \
--eval_data ./data/seq2seq/test/tempeval_test.json --num_gpu 2 --num_train_epochs 1 \
--warmup_steps 100 --seed 0 --eval_steps 200

This trains a roberta2roberta model (defined by model_name) for num_train_epochs epochs on the GPU with ID num_gpu. The random seed is set by seed and the number of warmup steps by warmup_steps. The training data is specified via train_data, and model_dir defines where the model is saved. Set eval_data if you want intermediate evaluation at the intervals defined by eval_steps. If the pre_train flag is set to true, the script loads the checkpoints from the Huggingface hub and fine-tunes on the given dataset. If pre_train is false, the script runs in fine-tuning mode, and you can provide the path to a pre-trained model with pretrain_path. We used the pre_train mode to train on weakly labeled data produced by the rule-based system HeidelTime, and set pre_train to false for fine-tuning on the benchmark datasets. If you simply wish to fine-tune on the benchmark datasets starting from the Huggingface checkpoints, you can set pre_train to true, as shown in the example above; a fine-tuning invocation from a locally pre-trained model is sketched below. For additional arguments such as the length penalty, the number of beams, early stopping, and other model specifications, please refer to the script.
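
A hypothetical fine-tuning invocation starting from a locally pre-trained checkpoint (the checkpoint and output paths below are illustrative placeholders, not files shipped with the repository) could look as follows:

python3 run_seq2seq_bert_roberta.py --model_name roberta-base --pre_train False \
--pretrain_path ./weak_label_model --model_dir ./fine_tuned \
--train_data ./data/seq2seq/train/tempeval_train.json \
--eval_data ./data/seq2seq/test/tempeval_test.json --num_gpu 0 --num_train_epochs 3 \
--warmup_steps 100 --seed 0 --eval_steps 200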

Token Classifiers

As mentioned above, all token classifiers are trained using an adaptation of the NER script from Huggingface. To train these models, use
run_token_classifier.py as in the following example:

python3 run_token_classifier.py --data_dir /data/temporal/BIO/wikiwars \
--labels ./data/temporal/BIO/train_staging/labels.txt \
--model_name_or_path bert-base-uncased \
--output_dir ./fine_tune_wikiwars/bert_tagging_with_date_no_pretrain_8epochs/bert_tagging_with_date_layer_seed_19 --max_seq_length 512 \
--num_train_epochs 8 --per_device_train_batch_size 34 --save_steps 3000 --logging_steps 300 --eval_steps 3000 \
--do_train --do_eval --overwrite_output_dir --seed 19 --model_date_extra_layer

We used bert-base-uncased as the base of all our token classification models for pre-training, as defined by model_name_or_path. For fine-tuning on the datasets, model_name_or_path should instead point to the path of the pre-trained model. The labels file is created during data preparation; for more information, refer to the subfolder. data_dir points to a folder that contains train.txt, test.txt, and dev.txt, and output_dir points to the saving location. You can define the number of epochs with num_train_epochs, set the seed with seed, and set the batch size per GPU with per_device_train_batch_size. For more information on the parameters, refer to the Huggingface script. In our paper, we introduce three variants of token classification, which are selected via flags in the script. If no flag is set, the script trains the vanilla BERT for token classification. The flag model_date_extra_layer trains the model with an extra date layer, and model_crf adds an extra CRF layer; an example fine-tuning invocation for the CRF variant is sketched below. To train the extra date embedding, you need to download the vocabulary file and specify its path in the date_vocab argument. The descriptions and model definitions of the BERT variants are in the folder temporal_models; please refer to its README file for further information. When training different model types on the same data, make sure to remove the cached dataset, since feature generation differs between model types.
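
For example, a hypothetical fine-tuning run of the CRF variant starting from a locally pre-trained checkpoint (all paths below are illustrative placeholders) could look like this:

python3 run_token_classifier.py --data_dir ./data/temporal/BIO/tempeval \
--labels ./data/temporal/BIO/train_staging/labels.txt \
--model_name_or_path ./pretrained_crf_model \
--output_dir ./fine_tune_tempeval/bert_crf_seed_19 --max_seq_length 512 \
--num_train_epochs 8 --per_device_train_batch_size 34 --save_steps 3000 --logging_steps 300 --eval_steps 3000 \
--do_train --do_eval --overwrite_output_dir --seed 19 --model_crf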

Load directly from the Huggingface Model Hub

We uploaded our best-performing version of each architecture to the Huggingface Model Hub. The weights for the other four seeding runs are available upon request. We upload the variants that were fine-tuned on the concatenation of all three evaluation sets for better generalization to various domains. The token classification models are the variants without pre-training. Both seq2seq models are pre-trained on the weakly labeled corpus and fine-tuned on the mixed data.

Overall, we upload the following five models. For other model configurations and checkpoints, please get in contact with us:

  • satyaalmasian/temporal_tagger_roberta2roberta: Our best performing model from the paper, an encoder-decoder architecture using RoBERTa. The model is pre-trained on weakly labeled news articles, tagged with HeidelTime, and fine-tuned on the train set of TempEval-3, Tweets, and Wikiwars.
  • satyaalmasian/temporal_tagger_bert2bert: Our second seq2seq model, an encoder-decoder architecture using BERT. The model is pre-trained on weakly labeled news articles, tagged with HeidelTime, and fine-tuned on the train set of TempEval-3, Tweets, and Wikiwars.
  • satyaalmasian/temporal_tagger_BERT_tokenclassifier: BERT for token classification, the vanilla BERT model from the paper. This model is only trained on the train set of TempEval-3, Tweets, and Wikiwars.
  • satyaalmasian/temporal_tagger_DATEBERT_tokenclassifier: BERT for token classification with an extra date embedding that encodes the reference date of the document. If a document does not have a reference date, it is best to avoid this model. Moreover, since the architecture is a modification of a default Huggingface model, the usage is not as straightforward and requires the classes defined in the temporal_models module. This model is only trained on the train set of TempEval-3, Tweets, and Wikiwars.
  • satyaalmasian/temporal_tagger_BERTCRF_tokenclassifier: BERT for token classification with a CRF layer on the output. Since the architecture is a modification of a default Huggingface model, the usage is not as straightforward and requires the classes defined in the temporal_models module. This model is only trained on the train set of TempEval-3, Tweets, and Wikiwars.

In the examples module, you can find two scripts, model_hub_seq2seq_examples.py and model_hub_tokenclassifiers_examples.py, for seq2seq and token classification examples using the Huggingface Model Hub. The examples load the models and use them to tag example sentences. The seq2seq example uses the pre-defined post-processing from the TempEval evaluation and contains rules for the cases we came across in the benchmark datasets. If you plan to use these models on new data, it is best to inspect the raw output of the first few samples to detect possible format problems, which are easily fixable. Further fine-tuning of the models is also possible. For seq2seq models, you can simply load the models with

from transformers import AutoTokenizer, EncoderDecoderModel

tokenizer = AutoTokenizer.from_pretrained("satyaalmasian/temporal_tagger_roberta2roberta")
model = EncoderDecoderModel.from_pretrained("satyaalmasian/temporal_tagger_roberta2roberta")
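
For a quick look at the raw, un-post-processed output, a minimal generation sketch along the following lines can be used with the objects loaded above (the example sentence and generation parameters are illustrative assumptions, not repository defaults):

text = "We sold the house two weeks ago."
inputs = tokenizer(text, return_tensors="pt")
# Generate the annotated output sequence; beam size and maximum length are illustrative.
output_ids = model.generate(inputs["input_ids"], attention_mask=inputs["attention_mask"],
                            num_beams=4, max_length=128, early_stopping=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))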

Then use the DataProcessor from temporal_models.seq2seq_utils to preprocess the JSON dataset; the model can be fine-tuned with Seq2SeqTrainer (the same as in run_seq2seq_bert_roberta.py). For token classifiers, the model and the tokenizer are loaded as follows:

from transformers import AutoTokenizer, BertForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("satyaalmasian/temporal_tagger_BERT_tokenclassifier", use_fast=False)
model = BertForTokenClassification.from_pretrained("satyaalmasian/temporal_tagger_BERT_tokenclassifier")

The classifiers need a BIO-tagged file that can be loaded using TokenClassificationDataset and fine-tuned with the Huggingface Trainer. For more information on the usage of these models, refer to their Model Hub pages.
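
As a rough illustration of inference with the token classifiers, the sketch below tags a single example sentence using the model and tokenizer loaded above (the sentence is illustrative; see model_hub_tokenclassifiers_examples.py for the full, supported version):

import torch

text = "We sold the house two weeks ago."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the predicted label ids back to the tag names stored in the model config.
predictions = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, label_id in zip(tokens, predictions):
    print(token, model.config.id2label[label_id])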

Citation

If you use our models in your work, we would appreciate attribution with the following citation:

@article{almasian2021bert,
  title={{BERT got a Date: Introducing Transformers to Temporal Tagging}},
  author={Almasian, Satya and Aumiller, Dennis and Gertz, Michael},
  journal={arXiv},
  year={2021}
}