A project for developing transformer-based models for clinical relation extraction

Overview

Clinical Relation Extraction with Transformers

Aim

This package enables researchers to easily use state-of-the-art transformer models for extracting relations from clinical notes. No prior knowledge of transformers is required. We handle the whole process, from data preprocessing to training to prediction.

Dependency

The package is built on top of the Transformers library developed by HuggingFace. We provide a requirement.txt that specifies the packages required to run the project.

Background

Our training strategy is inspired by the paper https://arxiv.org/abs/1906.03158. We only support a train-dev mode, but you can run 5-fold cross-validation yourself (see the sketch below).
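
For example, a minimal sketch for 5-fold CV (our own illustration, not part of the package) could split a full dataset in the 8-column TSV format described below into per-fold train.tsv/test.tsv files and run the training and prediction commands on each fold directory:

# sketch only: split a full TSV (8-column format, no header assumed) into 5 CV folds
from pathlib import Path

import pandas as pd
from sklearn.model_selection import KFold

df = pd.read_csv("all_data.tsv", sep="\t", header=None)  # hypothetical combined dataset
kf = KFold(n_splits=5, shuffle=True, random_state=13)
for i, (train_idx, test_idx) in enumerate(kf.split(df)):
    fold_dir = Path(f"fold_{i}")
    fold_dir.mkdir(exist_ok=True)
    df.iloc[train_idx].to_csv(fold_dir / "train.tsv", sep="\t", header=False, index=False)
    df.iloc[test_idx].to_csv(fold_dir / "test.tsv", sep="\t", header=False, index=False)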

Available models

  • BERT
  • XLNet
  • RoBERTa
  • ALBERT
  • DeBERTa
  • Longformer

We will keep adding new models.

Usage and examples

  • data format

See the sample_data directory (train.tsv and test.tsv) for the training and test data format.

The sample data is a small subset of the data prepared from the 2018 UMass MADE 1.0 challenge corpus.

# data format: tsv file with 8 columns:
1. relation_type: adverse
2. sentence_1: ALLERGIES : [s1] Penicillin [e1] .
3. sentence_2: [s2] ALLERGIES [e2] : Penicillin .
4. entity_type_1: Drug
5. entity_type_2: ADE
6. entity_id_1: T1
7. entity_id_2: T2
8. file_id: 13_10

note:
1) the entity between [s1] and [e1] is the first entity in a relation; the second entity in the relation is between [s2] and [e2]
2) even if the two entities appear in the same sentence, they must still be provided separately in the two sentence columns
3) in test.tsv, you can set all labels to neg, no_relation, or anything else, because the labels are not used for prediction
4) we recommend evaluating test performance in a separate process based on the predictions (see post-processing)
5) we recommend using the official evaluation scripts to make sure the reported results are reliable
  • preprocess data (see the preprocess.ipynb notebook for more details on usage)

We do not provide a script for training and test data generation. Instead, we provide a Jupyter notebook that preprocesses the 2018 n2c2 data as an example; you can follow it to generate your own dataset. An illustrative sketch of the expected output format follows.
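
As an illustration only (values are taken from the sample row above), creating one row in the expected 8-column format could look like this:

# sketch: append one training example to train.tsv in the 8-column format described above
# the tag strings ([s1]/[e1]/[s2]/[e2]) must match the special tags configured in config.py
import csv

row = [
    "adverse",                             # 1. relation_type
    "ALLERGIES : [s1] Penicillin [e1] .",  # 2. sentence_1 (first entity tagged)
    "[s2] ALLERGIES [e2] : Penicillin .",  # 3. sentence_2 (second entity tagged)
    "Drug",                                # 4. entity_type_1
    "ADE",                                 # 5. entity_type_2
    "T1",                                  # 6. entity_id_1
    "T2",                                  # 7. entity_id_2
    "13_10",                               # 8. file_id
]

with open("train.tsv", "a", newline="") as f:
    csv.writer(f, delimiter="\t").writerow(row)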

  • special tags

We use four special tags to mark the two entities in a relation.

# the default tags defined in the repo are

EN1_START = "[s1]"
EN1_END = "[e1]"
EN2_START = "[s2]"
EN2_END = "[e2]"

If you need to customize these tags, you can change them in config.py.
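
For example, to switch to XML-style markers you could edit the constants in config.py (a sketch; make sure your TSV files use the same markers):

# config.py -- change the tag strings if your data uses different markers
EN1_START = "<e1>"   # default: "[s1]"
EN1_END = "</e1>"    # default: "[e1]"
EN2_START = "<e2>"   # default: "[s2]"
EN2_END = "</e2>"    # default: "[e2]"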
  • training

Please refer to the wiki page for details of all parameter flags.

export CUDA_VISIBLE_DEVICES=1
data_dir=./sample_data
nmd=./new_model
pof=./predictions.txt
log=./log.txt

# NOTE: more options are available; check our wiki for more information
python ./src/relation_extraction.py \
		--model_type bert \
		--data_format_mode 0 \
		--classification_scheme 1 \
		--pretrained_model bert-base-uncased \
		--data_dir $data_dir \
		--new_model_dir $nmd \
		--predict_output_file $pof \
		--overwrite_model_dir \
		--seed 13 \
		--max_seq_length 256 \
		--cache_data \
		--do_train \
		--do_lower_case \
		--train_batch_size 4 \
		--eval_batch_size 4 \
		--learning_rate 1e-5 \
		--num_train_epochs 3 \
		--gradient_accumulation_steps 1 \
		--do_warmup \
		--warmup_ratio 0.1 \
		--weight_decay 0 \
		--max_num_checkpoints 1 \
		--log_file $log \
  • prediction
export CUDA_VISIBLE_DEVICES=1
data_dir=./sample_data
nmd=./new_model
pof=./predictions.txt
log=./log.txt

# for prediction you must set data_dir, new_model_dir, model_type, log_file, eval_batch_size, and data_format_mode
python ./src/relation_extraction.py \
		--model_type bert \
		--data_format_mode 0 \
		--classification_scheme 1 \
		--pretrained_model bert-base-uncased \
		--data_dir $data_dir \
		--new_model_dir $nmd \
		--predict_output_file $pof \
		--overwrite_model_dir \
		--seed 13 \
		--max_seq_length 256 \
		--cache_data \
		--do_predict \
		--do_lower_case \
		--eval_batch_size 4 \
		--log_file $log \
  • post-processing (we only support conversion to the brat format)
# see --help for more information
data_dir=./sample_data
pof=./predictions.txt

python src/data_processing/post_processing.py \
		--mode mul \
		--predict_result_file $pof \
		--entity_data_dir ./test_data_entity_only \
		--test_data_file ${data_dir}/test.tsv \
		--brat_result_output_dir ./brat_output
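
For reference, brat stores annotations in .ann files. For the sample row shown earlier (file_id 13_10), the post-processed output would contain something like the following; the entity lines come from the entity-only brat files, and the offsets and relation/argument naming here are illustrative only:

# illustrative content of brat_output/13_10.ann (offsets and naming are hypothetical)
T1	Drug 12 22	Penicillin
T2	ADE 0 9	ALLERGIES
R1	adverse Arg1:T1 Arg2:T2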

Using a JSON file for experiment config instead of the command line

  • to simplify using the package, we support using a JSON file for configuration
  • with JSON, you can define all parameters in a separate file instead of passing them via the command line
  • config_experiment_sample.json is a sample JSON file you can follow to develop yours
  • to run an experiment with a JSON config, follow run_json.sh:
export CUDA_VISIBLE_DEVICES=1

python ./src/relation_extraction_json.py \
		--config_json "./config_experiment_sample.json"
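
The keys in the JSON are assumed to mirror the command-line flags shown above; config_experiment_sample.json in the repo is the authoritative reference. A hypothetical way to generate such a config programmatically:

# sketch: build an experiment config equivalent to the training command above
# key names are assumptions based on the CLI flags; check config_experiment_sample.json
import json

config = {
    "model_type": "bert",
    "data_format_mode": 0,
    "classification_scheme": 1,
    "pretrained_model": "bert-base-uncased",
    "data_dir": "./sample_data",
    "new_model_dir": "./new_model",
    "predict_output_file": "./predictions.txt",
    "seed": 13,
    "max_seq_length": 256,
    "do_train": True,
    "do_lower_case": True,
    "train_batch_size": 4,
    "eval_batch_size": 4,
    "learning_rate": 1e-5,
    "num_train_epochs": 3,
    "log_file": "./log.txt",
}

with open("my_experiment.json", "w") as f:
    json.dump(config, f, indent=2)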

Baseline (baseline directory)

  • We also implemented some baselines for relation extraction using machine learning approaches
  • the baselines are for comparison only
  • the baseline is based on SVM
  • the extracted features may not be optimal for each dataset (they cover the most commonly used lexical and semantic features)
  • see baseline/run.sh for an example; a rough sketch of this kind of baseline follows
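
As a rough illustration (not the exact feature set used in baseline/run.sh), an SVM over bag-of-words features from the two tagged sentences could look like:

# sketch: a simple SVM relation classifier on the 8-column TSV data (illustrative only;
# the baseline in this repo uses richer lexical and semantic features)
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

cols = ["relation_type", "sentence_1", "sentence_2", "entity_type_1",
        "entity_type_2", "entity_id_1", "entity_id_2", "file_id"]
train = pd.read_csv("sample_data/train.tsv", sep="\t", header=None, names=cols)
test = pd.read_csv("sample_data/test.tsv", sep="\t", header=None, names=cols)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(train.sentence_1 + " " + train.sentence_2, train.relation_type)
print(clf.predict(test.sentence_1 + " " + test.sentence_2))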

Issues

Please raise an issue if you have problems.

Citation

Please cite our paper. We have a preprint at https://arxiv.org/abs/2107.08957

Clinical Pre-trained Transformer Models

We have a series of transformer models pre-trained on MIMIC-III. You can find them here:

Comments
  • prediction on large corpus

    The package will have issues dealing with prediction on a large corpus (e.g., thousands of notes). We need to develop a batch process to avoid OOM issues, and possibly parallelism to speed things up. Until then, a chunking sketch is shown below.
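
    One workaround (a sketch, assuming per-chunk predictions can simply be concatenated afterwards) is to split a large test.tsv into chunks and run the prediction command once per chunk directory:

    # sketch: split a large test.tsv into chunks so each prediction run fits in memory
    # pass each chunk directory to relation_extraction.py via --data_dir
    from pathlib import Path

    import pandas as pd

    chunk_size = 10000
    df = pd.read_csv("large_corpus/test.tsv", sep="\t", header=None)
    for i in range(0, len(df), chunk_size):
        out_dir = Path(f"large_corpus/chunk_{i // chunk_size}")
        out_dir.mkdir(parents=True, exist_ok=True)
        df.iloc[i:i + chunk_size].to_csv(out_dir / "test.tsv", sep="\t", header=False, index=False)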

    enhancement 
    opened by bugface 2
  • Not able to get the prediction for Test.csv

    Hi

    I am just trying to run the code to get the predictions for test.csv. I am trying with the pre-trained model at https://transformer-models.s3.amazonaws.com/mimiciii_bert_10e_128b.zip.

    While running the code I get the error: AttributeError: 'BertConfig' object has no attribute 'tags'

    [screenshot attached in the original issue]

    opened by vikasgoel2000 1
  • Binary classification with BCELoss or Focal Loss

    In binary mode we currently still use CrossEntropyLoss, but BCELoss is designed for binary classification. We need to add options to use BCELoss or focal loss in binary mode.
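
    For reference, a binary focal loss that could replace CrossEntropyLoss in binary mode might look like this sketch (not yet part of the package; alpha/gamma values are illustrative):

    # sketch: binary focal loss as a drop-in alternative to CrossEntropyLoss in binary mode
    import torch
    import torch.nn.functional as F

    def binary_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
        # logits: (batch,) raw scores; targets: (batch,) 0/1 labels
        bce = F.binary_cross_entropy_with_logits(logits, targets.float(), reduction="none")
        p_t = torch.exp(-bce)  # probability assigned to the true class
        alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
        return (alpha_t * (1 - p_t) ** gamma * bce).mean()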

    enhancement 
    opened by bugface 1
  • Confused on usage

    The input to the prediction model is a .tsv file where the first column is the relation type. So it is unclear to me why we need the model to predict the relation type again.

    Am I misunderstanding? For predicting relations for new data, will the first column be autofilled with NonRel?

    opened by jiwonjoung 1
  • roberta question

    Thank you for providing and actively maintaining this repository. I'm trying to run RoBERTa on the sample data, but I'm encountering an error (I have tested BERT and DeBERTa, and both worked well without any error).

    Here is the code I ran

    export CUDA_VISIBLE_DEVICES=1
    data_dir=./sample_data
    nmd=./roberta_re_model
    pof=./roberta_re_predictions.txt
    log=./roberta_re_log.txt
    
    python ./src/relation_extraction.py \
    		--model_type roberta \
    		--data_format_mode 0 \
    		--classification_scheme 2 \
    		--pretrained_model roberta-base \
    		--data_dir $data_dir \
    		--new_model_dir $nmd \
    		--predict_output_file $pof \
    		--overwrite_model_dir \
    		--seed 13 \
    		--max_seq_length 256 \
    		--cache_data \
    		--do_train \
    		--do_lower_case \
                    --do_predict \
    		--train_batch_size 4 \
    		--eval_batch_size 4 \
    		--learning_rate 1e-5 \
    		--num_train_epochs 3 \
    		--gradient_accumulation_steps 1 \
    		--do_warmup \
    		--warmup_ratio 0.1 \
    		--weight_decay 0 \
    		--max_num_checkpoints 1 \
    		--log_file $log \
    

    but I ran into this error:

    2022-05-12 06:07:50 - Transformer_Relation_Extraction - ERROR - Training error:
    Traceback (most recent call last):
      File "/content/drive/MyDrive/Colab Notebooks/ClinicalTransformer/src/relation_extraction.py", line 59, in app
        task_runner.train()
      File "/content/drive/MyDrive/Colab Notebooks/ClinicalTransformer/src/task.py", line 100, in train
        batch_output = self.model(**batch_input)
      File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1110, in _call_impl
        return forward_call(*input, **kwargs)
      File "/content/drive/MyDrive/Colab Notebooks/ClinicalTransformer/src/models.py", line 159, in forward
        output_hidden_states=output_hidden_states
      File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1110, in _call_impl
        return forward_call(*input, **kwargs)
      File "/usr/local/lib/python3.7/dist-packages/transformers/models/roberta/modeling_roberta.py", line 849, in forward
        past_key_values_length=past_key_values_length,
      File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1110, in _call_impl
        return forward_call(*input, **kwargs)
      File "/usr/local/lib/python3.7/dist-packages/transformers/models/roberta/modeling_roberta.py", line 133, in forward
        token_type_embeddings = self.token_type_embeddings(token_type_ids)
      File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1110, in _call_impl
        return forward_call(*input, **kwargs)
      File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/sparse.py", line 160, in forward
        self.norm_type, self.scale_grad_by_freq, self.sparse)
      File "/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py", line 2183, in embedding
        return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
    RuntimeError: Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got torch.cuda.FloatTensor instead (while checking arguments for embedding)
    
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/content/drive/MyDrive/Colab Notebooks/ClinicalTransformer/src/relation_extraction.py", line 181, in <module>
        app(args)
      File "/content/drive/MyDrive/Colab Notebooks/ClinicalTransformer/src/relation_extraction.py", line 63, in app
        raise RuntimeError()
    RuntimeError
    

    Any help would be much appreciated. Thanks for your project!

    opened by jeonge1 4
  • save trained model as a RE model and a core model with only transformer layers

    We need to save both the whole RE model and a core transformer model containing only the transformer layers, so that the core model can be reused for other training tasks.

    enhancement 
    opened by bugface 0
  • ELECTRA and GPT2 support

    Hi,

    I'm wondering how to add ELECTRA and GPT2 support to this module.

    Neither ELECTRA nor GPT2 has a pooled output, unlike BERT/RoBERTa-based models.

    I noticed that in models.py the model is implemented as follows:

            outputs = self.roberta(
                input_ids,
                attention_mask=attention_mask,
                token_type_ids=token_type_ids,
                position_ids=position_ids,
                head_mask=head_mask,
                output_attentions=output_attentions,
                output_hidden_states=output_hidden_states
            )
    
            pooled_output = outputs[1]
            seq_output = outputs[0]
            logits = self.output2logits(pooled_output, seq_output, input_ids)
    
            return self.calc_loss(logits, outputs, labels)
    

    There is no pooled_output for ELECTRA/GPT2 sequence classification models; only seq_output is in the outputs variable.

    How can I get around this limitation and get a working version of ELECTRA/GPT2? Thank you!
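
    One common workaround (a sketch, not something this repository implements) is to compute a pooled vector from the sequence output yourself, e.g. first-token pooling or a masked mean, and feed that to output2logits:

    # sketch: derive a pooled representation when the backbone has no pooler output
    # seq_output: (batch, seq_len, hidden); attention_mask: (batch, seq_len)
    import torch

    def pool_sequence_output(seq_output, attention_mask, mode="first"):
        if mode == "first":
            return seq_output[:, 0, :]  # first-token ("CLS-style") pooling
        mask = attention_mask.unsqueeze(-1).float()
        return (seq_output * mask).sum(1) / mask.sum(1)  # masked mean pooling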

    opened by Stochastic-Adventure 2