STonKGs is a Sophisticated Transformer that can be jointly trained on biomedical text and knowledge graphs

Overview


STonKGs is a Sophisticated Transformer that can be jointly trained on biomedical text and knowledge graphs. This multimodal Transformer combines structured information from KGs with unstructured text data to learn joint representations. While we demonstrated STonKGs on a biomedical knowledge graph (i.e., the INDRA KG), the model can be applied to other domains. The following sections describe the scripts that need to be run to train the model on any given dataset.

💪 Getting Started

Data Format

Since STonKGs operates on both text and KG data, the respective data files are expected to include columns for both modalities. More specifically, the expected data format is a pandas DataFrame (or a pickled pandas DataFrame for the pre-training script) in which each row contains one text-triple pair. The following columns are expected:

  • source: Source node in the triple of a given text-triple pair
  • target: Target node in the triple of a given text-triple pair
  • evidence: Text of a given text-triple pair
  • (optional) class: Class label for a given text-triple pair in fine-tuning tasks (does not apply to the pre-training procedure)

Note that both source and target nodes are required to be in the Biological Expression Language (BEL) format; more specifically, they need to be contained in the INDRA KG. For more details on the BEL format, see, for example, the INDRA documentation for the BEL processor and PyBEL.
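
For illustration, a minimal DataFrame in this format might be constructed (and pickled for the pre-training script) as follows; this is only a sketch, in which the BEL node names are example entries from the INDRA KG and the file path is arbitrary:

import pandas as pd

# One text-triple pair per row; for fine-tuning tasks, an additional
# class column would hold the label of each pair
rows = [
    [
        "p(HGNC:1748 ! CDH1)",
        "p(HGNC:2515 ! CTNND1)",
        "Some example sentence about CDH1 and CTNND1.",
    ],
]
df = pd.DataFrame(rows, columns=["source", "target", "evidence"])

# The pre-training script expects a pickled dataframe
df.to_pickle("pretraining_data.pkl")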

Pre-training STonKGs

Once you have installed STonKGs as a Python package (see below), you can start training STonKGs on your dataset by running:

$ python3 -m stonkgs.models.stonkgs_pretraining

The configuration of the model can easily be modified by altering the parameters of the pretrain_stonkgs method. The only argument that needs to be changed is PRETRAINING_PREPROCESSED_POSITIVE_DF_PATH, which should point to your dataset.
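
Alternatively, the entry point can be invoked from Python. The following is only a sketch, assuming pretrain_stonkgs exposes the dataframe path as a keyword argument; verify against the actual signature in stonkgs.models.stonkgs_pretraining before use:

from stonkgs.models.stonkgs_pretraining import pretrain_stonkgs

# Assumption: the preprocessed dataframe path is accepted as a keyword
# argument; all other parameters keep their defaults
pretrain_stonkgs(
    pretraining_preprocessed_positive_df_path="pretraining_data.pkl",
)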

Downloading the pre-trained STonKGs model on the INDRA KG

We released the pre-trained STonKGs models on the INDRA KG for possible future adaptations, such as further pre-training on other KGs. Both STonKGs150k as well as STonKGs300k are accessible through Hugging Face's model hub.

The easiest way to download and initialize the pre-trained STonKGs model is to use the from_default_pretrained() class method (with STonKGs150k being the default):

from stonkgs import STonKGsForPreTraining

# Download the model from the model hub and initialize it for pre-training 
# using from_default_pretrained
stonkgs_pretraining = STonKGsForPreTraining.from_default_pretrained()

Alternatively, since our code is based on Hugging Face's transformers package, the pre-trained model can be easily downloaded and initialized using the .from_pretrained() function:

from stonkgs import STonKGsForPreTraining

# Download the model from the model hub and initialize it for pre-training 
# using from_pretrained
stonkgs_pretraining = STonKGsForPreTraining.from_pretrained(
    'stonkgs/stonkgs-150k',
)

Extracting Embeddings

The learned embeddings of the pre-trained STonKGs models (or your own STonKGs variants) can be extracted in two simple steps. First, a given dataset with text-triple pairs (a pandas DataFrame, see Data Format) needs to be preprocessed using the preprocess_df_for_embeddings function. Then, one can obtain the learned embeddings using the preprocessed data and the get_stonkgs_embeddings function:

import pandas as pd

from stonkgs import get_stonkgs_embeddings, preprocess_df_for_embeddings

# Generate some example data
# Note that the evidence sentences are typically longer than in this example data
rows = [
    [
        "p(HGNC:1748 ! CDH1)",
        "p(HGNC:2515 ! CTNND1)",
        "Some example sentence about CDH1 and CTNND1.",
    ],
    [
        "p(HGNC:6871 ! MAPK1)",
        "p(HGNC:6018 ! IL6)",
        "Another example about some interaction between MAPK and IL6.",
    ],
    [
        "p(HGNC:3229 ! EGF)",
        "p(HGNC:4066 ! GAB1)",
        "One last example in which Gab1 and EGF are mentioned.",
    ],
]
example_df = pd.DataFrame(rows, columns=["source", "target", "evidence"])

# 1. Preprocess the text-triple data for embedding extraction
preprocessed_df_for_embeddings = preprocess_df_for_embeddings(example_df)

# 2. Extract the embeddings 
embedding_df = get_stonkgs_embeddings(preprocessed_df_for_embeddings)

Fine-tuning STonKGs

The most straightforward way of fine-tuning STonKGs on the original six classification tasks is to run the fine-tuning script (note that this script assumes that you have an MLflow logger specified, e.g., using the --logging_dir argument):

$ python3 -m stonkgs.models.stonkgs_finetuning

Moreover, using STonKGs for your own fine-tuning tasks (i.e., sequence classification tasks) in your own code is just as easy as initializing the pre-trained model:

from transformers import Trainer

from stonkgs import STonKGsForSequenceClassification

# Download the model from the model hub and initialize it for fine-tuning
stonkgs_model_finetuning = STonKGsForSequenceClassification.from_default_pretrained(
    num_labels=number_of_labels_in_your_task,
)

# Initialize a Trainer based on the training dataset
trainer = Trainer(
    model=stonkgs_model_finetuning,
    args=some_previously_defined_training_args,
    train_dataset=some_previously_defined_finetuning_data,
)

# Fine-tune the model to the moon 
trainer.train()
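
Here, some_previously_defined_training_args refers to a TrainingArguments object from Hugging Face's transformers package. A minimal sketch of how it might be defined (the directories and hyperparameters are illustrative, not prescribed by STonKGs):

from transformers import TrainingArguments

# Illustrative hyperparameters; adjust them for your task
some_previously_defined_training_args = TrainingArguments(
    output_dir="stonkgs_finetuning_output",
    num_train_epochs=3,
    per_device_train_batch_size=8,
    logging_dir="./logs",
)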

Using STonKGs for Inference

You can generate new predictions for previously unseen text-triple pairs (as long as the nodes are contained in the INDRA KG) based on either (1) the fine-tuned models used for the benchmark or (2) your own fine-tuned models. To do so, you first need to load/initialize the fine-tuned model:

from stonkgs.api import get_species_model, infer

model = get_species_model()

# Next, you want to use that model on your dataframe (consisting of at least source, target
# and evidence columns, see Data Format) to generate the class probabilities for each
# text-triple pair belonging to each of the specified classes in the respective fine-tuning task:
example_data = ...

# See Extracting Embeddings for the initialization of the example data
# This returns both the raw (transformers) PredictionOutput as well as the class probabilities 
# for each text-triple pair
raw_results, probabilities = infer(model, example_data)
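
For instance, the example data from Extracting Embeddings can serve as input; a minimal sketch (the BEL node names are illustrative entries from the INDRA KG):

import pandas as pd

from stonkgs.api import get_species_model, infer

# A small input dataframe with the columns described in Data Format
example_data = pd.DataFrame(
    [
        [
            "p(HGNC:1748 ! CDH1)",
            "p(HGNC:2515 ! CTNND1)",
            "Some example sentence about CDH1 and CTNND1.",
        ],
    ],
    columns=["source", "target", "evidence"],
)

model = get_species_model()

# Class probabilities for each text-triple pair
raw_results, probabilities = infer(model, example_data)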

ProtSTonKGs

It is possible to download the extension of STonKGs, the pre-trained ProtSTonKGs model, and initialize it for further pre-training on text, KG and amino acid sequence data:

from stonkgs import ProtSTonKGsForPreTraining

# Download the model from the model hub and initialize it for pre-training 
# using from_pretrained
protstonkgs_pretraining = ProtSTonKGsForPreTraining.from_pretrained(
    'stonkgs/protstonkgs',
)

Moreover, analogous to STonKGs, ProtSTonKGs can be used for fine-tuning sequence classification tasks as well:

from transformers import Trainer

from stonkgs import ProtSTonKGsForSequenceClassification

# Download the model from the model hub and initialize it for fine-tuning
protstonkgs_model_finetuning = ProtSTonKGsForSequenceClassification.from_default_pretrained(
    num_labels=number_of_labels_in_your_task,
)

# Initialize a Trainer based on the training dataset
trainer = Trainer(
    model=protstonkgs_model_finetuning,
    args=some_previously_defined_training_args,
    train_dataset=some_previously_defined_finetuning_data,
)

# Fine-tune the model to the moon 
trainer.train()

⬇️ Installation

The most recent release can be installed from PyPI with:

$ pip install stonkgs

The most recent code and data can be installed directly from GitHub with:

$ pip install git+https://github.com/stonkgs/stonkgs.git

To install in development mode, use the following:

$ git clone https://github.com/stonkgs/stonkgs.git
$ cd stonkgs
$ pip install -e .

Warning: Because stellargraph doesn't currently work on Python 3.9, this software can only be installed on Python 3.8.

Artifacts

The pre-trained models are hosted on Hugging Face. The fine-tuned models are hosted on the STonKGs community page on Zenodo, along with the other artifacts (node2vec embeddings, random walks, etc.).

Acknowledgements

⚖️ License

The code in this package is licensed under the MIT License.

📖 Citation

Balabin H., Hoyt C.T., Birkenbihl C., Gyori B.M., Bachman J.A., Komdaullil A.T., Plöger P.G., Hofmann-Apitius M., Domingo-Fernández D. STonKGs: A Sophisticated Transformer Trained on Biomedical Text and Knowledge Graphs (2021), bioRxiv, 2021.08.17.456616v1.

🎁 Support

This project has been supported by several organizations (in alphabetical order):

💰 Funding

This project has been funded by the following grants:

Funding Body  Program                                             Grant
DARPA         Automating Scientific Knowledge Extraction (ASKE)   HR00111990009

🍪 Cookiecutter

This package was created with @audreyfeldroy's cookiecutter package using @cthoyt's cookiecutter-snekpack template.

🛠️ Development

This final section of the README is for those who want to get involved by making a code contribution.

Testing

After cloning the repository and installing tox with pip install tox, the unit tests in the tests/ folder can be run reproducibly with:

$ tox

Additionally, these tests are automatically re-run with each commit in a GitHub Action.

📦 Making a Release

After installing the package in development mode and installing tox with pip install tox, the commands for making a new release are contained within the finish environment in tox.ini. Run the following from the shell:

$ tox -e finish

This script does the following:

  1. Uses BumpVersion to switch the version number in setup.cfg and src/stonkgs/version.py to not have the -dev suffix
  2. Packages the code in both a tar archive and a wheel
  3. Uploads to PyPI using twine. Be sure to have a .pypirc file configured to avoid the need for manual input at this step
  4. Pushes to GitHub. You'll need to make a release based on the commit where the version was bumped.
  5. Bumps the version to the next patch. If you made big changes and want to bump the version by minor, you can use tox -e bumpversion minor afterwards.