GT4SD, an open-source library to accelerate hypothesis generation in the scientific discovery process.

Overview

GT4SD (Generative Toolkit for Scientific Discovery)

The GT4SD (Generative Toolkit for Scientific Discovery) is an open-source platform to accelerate hypothesis generation in the scientific discovery process. It provides a library for making state-of-the-art generative AI models easier to use.

Installation

pip

You can install gt4sd directly from GitHub:

pip install git+https://github.com/GT4SD/gt4sd-core

Development setup & installation

If you would like to contribute to the package, we recommend the following development setup. Clone the gt4sd-core repository, create and activate the conda environment, and install the package in editable mode:

git clone [email protected]:GT4SD/gt4sd-core.git
cd gt4sd-core
conda env create -f conda.yml
conda activate gt4sd
pip install -e .

Learn more in CONTRIBUTING.md

Supported packages

Beyond implementing various generative modeling inference and training pipelines, GT4SD is designed to provide a high-level API that implements a harmonized interface for several existing packages:

  • GuacaMol: inference pipelines for the baseline models.
  • MOSES: inference pipelines for the baseline models.
  • TAPE: encoder modules compatible with the protein language models.
  • PaccMann: inference pipelines for all algorithms of the PaccMann family as well as training pipelines for the generative VAEs.
  • transformers: training and inference pipelines for generative models from HuggingFace Models.

Using GT4SD

Running inference pipelines

Running an algorithm is as easy as typing:

from gt4sd.algorithms.conditional_generation.paccmann_rl.core import (
    PaccMannRLProteinBasedGenerator, PaccMannRL
)
target = 'MVLSPADKTNVKAAWGKVGAHAGEYGAEALERMFLSFPTT'
# algorithm configuration with default parameters
configuration = PaccMannRLProteinBasedGenerator()
# instantiate the algorithm for sampling
algorithm = PaccMannRL(configuration=configuration, target=target)
items = list(algorithm.sample(10))
print(items)

Or you can use the ApplicationsRegistry to run an algorithm instance using a serialized representation of the algorithm:

from gt4sd.algorithms.registry import ApplicationsRegistry
target = 'MVLSPADKTNVKAAWGKVGAHAGEYGAEALERMFLSFPTT'
algorithm = ApplicationsRegistry.get_application_instance(
    target=target,
    algorithm_type='conditional_generation',
    domain='materials',
    algorithm_name='PaccMannRL',
    algorithm_application='PaccMannRLProteinBasedGenerator',
    generated_length=32,
    # include additional configuration parameters as **kwargs
)
items = list(algorithm.sample(10))
print(items)
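
If you are unsure which values of algorithm_type, algorithm_name or algorithm_application are available, the registry can also be inspected programmatically. A minimal sketch, assuming the ApplicationsRegistry exposes a list_available() helper (check the API of your installed version):

from gt4sd.algorithms.registry import ApplicationsRegistry

# each entry is a dictionary describing a registered application
for application in ApplicationsRegistry.list_available():
    print(application)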

Running training pipelines via the CLI command

GT4SD provides a trainer client based on the gt4sd-trainer CLI command. The trainer currently supports training pipelines for language modeling (language-modeling-trainer), PaccMann (paccmann-vae-trainer) and Granular (granular-trainer, multimodal compositional autoencoders).

$ gt4sd-trainer --help
usage: gt4sd-trainer [-h] --training_pipeline_name TRAINING_PIPELINE_NAME
                     [--configuration_file CONFIGURATION_FILE]

optional arguments:
  -h, --help            show this help message and exit
  --training_pipeline_name TRAINING_PIPELINE_NAME
                        Training type of the converted model, supported types:
                        granular-trainer, language-modeling-trainer, paccmann-
                        vae-trainer. (default: None)
  --configuration_file CONFIGURATION_FILE
                        Configuration file for the trainining. It can be used
                        to completely by-pass pipeline specific arguments.
                        (default: None)

To launch a training you have two options.

You can either specify the training pipeline and the path of a configuration file that contains the needed training parameters:

gt4sd-trainer  --training_pipeline_name ${TRAINING_PIPELINE_NAME} --configuration_file ${CONFIGURATION_FILE}

Or you can provide the needed parameters directly as arguments:

gt4sd-trainer  --training_pipeline_name language-modeling-trainer --type mlm --model_name_or_path mlm --training_file /path/to/train_file.jsonl --validation_file /path/to/valid_file.jsonl

To get more info on the arguments of a specific training pipeline, simply type:

gt4sd-trainer --training_pipeline_name ${TRAINING_PIPELINE_NAME} --help

References

If you use gt4sd in your projects, please consider citing the following:

@software{GT4SD,
author = {GT4SD Team},
month = {2},
title = {{GT4SD (Generative Toolkit for Scientific Discovery)}},
url = {https://github.com/GT4SD/gt4sd-core},
version = {main},
year = {2022}
}

License

The gt4sd codebase is under MIT license. For individual model usage, please refer to the model licenses found in the original packages.

Comments
  • cli-upload

    Add upload functionality to the command line. It gives the user the possibility to upload specific artifacts to a server.

    Given a specific version for an algorithm:

    • Check whether that version is already on the server, i.e. whether the folder bucket/algorithm_type/algorithm_name/algorithm_application/version/ exists.
    • If yes, tell the user and stop the upload.
    • If not, upload all the files in that version.

    cli-upload relies on minio and has been tested locally using docker-compose. cli-upload can be used to upload to a cloud or a local server.


    How to use cli-upload

    Following the example in the README (in the Saving a trained algorithm for inference via the CLI command section) and assuming a trained model in /tmp/test_cli_upload, run:

    gt4sd-upload --training_pipeline_name paccmann-vae-trainer --model_path /tmp/test_cli_upload --training_name fast-example --target_version fast-example-v0 --algorithm_application PaccMannGPGenerator

    opened by georgosgeorgos 15
  • MOSES VAE from Guacamol training reconstruction is "incorrect"

    Describe the bug The VAE in GT4SD uses the wrapper of the Moses VAE from Guacamol. Unfortunately, the decoding training step from the Moses VAE is bugged.

    More detail The problem arises from the definition of the forward_decoder method:

    def forward_decoder(self, x, z):
        lengths = [len(i_x) for i_x in x]
    
        x = nn.utils.rnn.pad_sequence(x, batch_first=True, padding_value=self.pad)
        x_emb = self.x_emb(x)
    
        z_0 = z.unsqueeze(1).repeat(1, x_emb.size(1), 1)
        x_input = torch.cat([x_emb, z_0], dim=-1)  # <--- PROBLEM 1
        x_input = nn.utils.rnn.pack_padded_sequence(x_input, lengths, batch_first=True)
    
        h_0 = self.decoder_lat(z)
        h_0 = h_0.unsqueeze(0).repeat(self.decoder_rnn.num_layers, 1, 1)
    
        output, _ = self.decoder_rnn(x_input, h_0)
    
        output, _ = nn.utils.rnn.pad_packed_sequence(output, batch_first=True)
        y = self.decoder_fc(output)
    
        recon_loss = F.cross_entropy(  # <--- PROBLEM 2
            y[:, :-1].contiguous().view(-1, y.size(-1)),
            x[:, 1:].contiguous().view(-1),
            ignore_index=self.pad
        )
    
        return recon_loss
    

    Namely, the reconstruction step is wrong in two spots:

    1. Construction of the true input: x_input = torch.cat([x_emb, z_0], dim=-1). In the visual representation of a typical RNN, the true token feeds in from the "bottom" of the cell and the previous hidden state from the "left". In this implementation, the reparameterized latent vector z is fed in both from the "left" (normal) and the "bottom" (atypical). Fix: this line should be removed.
    2. Calculation of the reconstruction loss: recon_loss = F.cross_entropy(...). This reconstruction loss is calculated as the per-token loss of the input batch (i.e., the mean over a batch of tokens) because the default reduction in F.cross_entropy is "mean". In turn, this results in reconstruction losses that are very low for the VAE, causing the optimizer to ignore the decoder and focus on the encoder. When a VAE focuses too hard on the encoder, you get mode collapse, and that is what happens with the Moses VAE. Fix: this line should be F.cross_entropy(..., reduction="sum") / len(x), as in the sketch below.
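
    To make the second fix concrete, here is a minimal, self-contained sketch of the proposed reconstruction-loss computation (tensor shapes, vocabulary size and pad index are hypothetical, chosen only for illustration):

    import torch
    import torch.nn.functional as F

    pad = 0
    y = torch.randn(8, 20, 30)         # decoder logits: (batch, sequence length, vocabulary size)
    x = torch.randint(1, 30, (8, 20))  # padded target token ids: (batch, sequence length)

    # sum the token-level losses and average over the molecules in the batch, instead of
    # averaging over all tokens (the default reduction="mean"), so the decoder term is not
    # under-weighted relative to the other loss terms
    recon_loss = F.cross_entropy(
        y[:, :-1].contiguous().view(-1, y.size(-1)),
        x[:, 1:].contiguous().view(-1),
        ignore_index=pad,
        reduction="sum",
    ) / len(x)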

    To reproduce

    1. Problem 1 is not a "problem" so much as it is highly atypical to structure a VAE like this. I can't say whether it results in any actual problems, but it simply shouldn't be there.
    2. Problem 2 can be observed with two experiments:
      1. Using PCA with two dimensions, plot the embeddings of a random batch z ~ q(z|x) and a sample from the standard normal distribution z ~ N(0, I). The embeddings from the encoder will look like a point at (0, 0) compared to the samples from the standard normal.
      2. Measure the reconstruction accuracy x_r ~ p(x | z ~ q(z | x_0)). In a well-trained VAE, sum(x_r == x_0 for x_0 in xs) / len(xs) should be above 50%. This VAE generally scores fairly low (in my experience).
    bug 
    opened by davidegraff 12
  • Improve CLA workflow

    Using actions to commit to other people's forks was not something super easy to do, so I'm settling for a bit more verbosity and automation.

    The issue will be closed with a comment linking to the commit that added the contributor. There is a notice to merge this into a PR.

    Therefore the issue is no longer assigned to anyone.

    Looks like this: https://github.com/C-nit/gt4sd-core/issues/9 and can also be triggered in a different way: https://github.com/C-nit/gt4sd-core/issues/11

    opened by C-nit 11
  • feat: Support in RT Trainer for multiple entities.

    Solving #143 by expanding the Regression Transformer trainer to support multi-entity discriminations, i.e., support the multientity_cg collator from the RT repo.

    Signed-off-by: Nicolai Ree [email protected]

    opened by NicolaiRee 9
  • feat: property_predictors in scorer

    • Implement PropertyPredictorScorer in domains.materials.property_scorer (using domains.materials.scorer for the implementation caused a circular import).
    • We are simply using the PropertyPredictorRegistry to select a property and its parameters by name and PropertyPredictionScorer to compute a score on a sample w.r.t. a target value.
    • Tests mimic the logic in properties.
    cla-signed 
    opened by georgosgeorgos 8
  • Training pipeline Regression Transformer

    Adding new training pipeline for RT

    • allows finetuning of existing models available in the toolkit
    • allows training models from scratch
    • patches LRSchedulers in torchdrug, which are needed for RT training and threw errors
    cla-signed 
    opened by jannisborn 6
  • Added toxicity and affinity to visum notebook

    Signed-off-by: Eduardo [email protected]

    Added toxicity (Tox21 model from https://github.com/PaccMann/paccmann_sarscov2) and affinity (PaccMann predictor) to the notebook.

    @drugilsberg, I am not sure about one specific step in the notebook and I would really appreciate it if you could help: when calling sample on PaccMannGP for the first time, the first line of the output is

    configuring optimization for target: {'qed': {'weight': 1.0}, 'sa': {'weight': 1.0}}

    However, on the second call to the same object (no reinitialization), in section "Sampling and Plotting Molecules with GT4SD", the first line reads:

    configuring optimization for target: {'qed': {}, 'sa': {}}

    Do you know if this has any influence on the molecules being generated? I attached a PDF file with the output for convenience.

    visum-2022-handson-generative-models.pdf

    @helenaMontenegro , the notebook now requires users to download a small model, but I don't think this is a problem.

    cla-signed 
    opened by edux300 5
  • Problem multiprocess in requirements

    The new multiprocess library version (0.70.13) gives problems when installing gt4sd-core in development mode. I had to pin multiprocess==0.70.12.2 to install the library.

    opened by georgosgeorgos 5
  • Torchdrug trainer pipeline

    Implemented torchdrug trainer pipeline. Models can be used via:

    gt4sd-trainer --training_pipeline_name torchdrug-gcpn-trainer -h
    gt4sd-trainer --training_pipeline_name torchdrug-graphaf-trainer -h
    

    Features:

    • [x] Support for the same two models that are available via inference: TorchDrugGCPN and TorchDrugGraphAF.
    • [x] Both models can be trained on all MoleculeDatasets from torchdrug.Datasets. Those are around 20 predefined datasets.
    • [x] Implemented a custom dataset where users can pass their own data.
    • [x] In addition to the unit tests, I verified the functionality from the CLI via gt4sd-trainer.

    Problems:

    • [ ] Property optimization does not work, due to instabilities in TorchDrug. I opened an issue and a PR, but we have to wait until they merge it and release a new version, and then bump our dependency. The code I wrote here already supports property optimization, but I disabled the unittest for the moment because it would fail due to the TorchDrug issue. See details: https://github.com/DeepGraphLearning/torchdrug/issues/83
    • [x] gt4sd-saving: I ran a test via the CLI but the saving failed. Not sure how problematic this is; here's the error:
    INFO:gt4sd.cli.saving:Selected configuration: ConfigurationTuple(algorithm_type='generation', domain='materials', algorithm_name='TorchDrugGenerator', algorithm_application='TorchDrugGCPN')
    INFO:gt4sd.cli.saving:Saving model version "fast" with the following configuration: <class 'gt4sd.algorithms.generation.torchdrug.core.TorchDrugGCPN'>
    INFO:gt4sd.algorithms.core:TorchDrugGCPN can not save a version based on TorchDrugSavingArguments(model_path='/Users/jab/.gt4sd/runs/', training_name='gcpn_test')
    
    enhancement cla-signed 
    opened by jannisborn 5
  • RT sampling_wrapper to specify a substructure or series of tokens to keep unmasked

    I would like to propose an upgrade on the feature demonstrated in this notebook: https://github.com/GT4SD/gt4sd-core/blob/main/notebooks/regression-transformer-demo.ipynb (see cells 12-14)

    In addition to explicitly specifying tokens_to_mask, one can imagine that a chemist might want to specify a substructure to mask or to "freeze" (keep unchanged, i.e. unmasked). It might be easier to specify tokens to freeze, as that amounts to just selecting a part of the string to be kept unmasked. A prototype example is given below.

        sampling_wrapper={
            'property_goal': {
                '<logp>': 6.123,
                '<scs>': 1.5
            },
            'fraction_to_mask': 0.6,
            # keep morpholino tail unchanged
            'tokens_to_freeze': ['N4CCOCC4']
        }
    

    If one could specify a substructure to freeze or to mask, that would potentially be even more advantageous, as it would remove ambiguities when a substructure can be expressed in more than one sequence.

        sampling_wrapper={
            'property_goal': {
                '<logp>': 6.123,
                '<scs>': 1.5
            },
            'fraction_to_mask': 0.6,
            # keep morpholino tail unchanged
            'substructure_to_freeze': ['N1CCOCC1'],
            # explicitly mask benzene ring moiety
            'substructure_to_mask':  ['C1=CC=CC=C1'],
        }
    

    One could use RDKit functionality to identify substructure tokens, as given here: https://www.rdkit.org/docs/Cookbook.html#substructure-matching

    Regarding the interpretation of 'fraction_to_mask', I would then imagine that it would best be applied to the remaining set of tokens (after tokens_to_freeze and explicit tokens_to_mask are excluded). I hope this makes sense, happy to clarify and exemplify further.
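
    As a rough illustration of the substructure matching suggested above, here is a minimal RDKit sketch (the molecule and substructure are hypothetical examples; mapping the matched atoms back to sequence tokens would still depend on the tokenizer used by the RT):

        from rdkit import Chem

        mol = Chem.MolFromSmiles('CC1=CC=CC=C1N1CCOCC1')  # example molecule with a morpholino tail
        pattern = Chem.MolFromSmarts('N1CCOCC1')          # substructure to freeze or mask
        atom_indices = mol.GetSubstructMatch(pattern)     # atom ids covered by the match
        print(atom_indices)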

    enhancement 
    opened by OleinikovasV 4
  • Artifact storage for property predictors

    Closes #116

    Now we can store artifacts also for property predictors.

    • New property predictors are tested.
    • One thing that remains to be done is to have functions under gt4sd.properties.molecules.functions. At the moment this is not yet supported, since it would yield circular imports.
    cla-signed 
    opened by jannisborn 4
  • RT saving pipeline

    Closes #169

    • gt4sd-saving now also supports the RT training pipeline. I implemented the get_filepath_mappings_for_training_pipeline_arguments method. The inference.json is now created inside the RT trainer and also saved in the model folder such that it can later be copied by gt4sd-saving. The Property class was needed as a helper for this, to track some attributes of each property.
    • Expanded the RT example. It now describes a full process of training/finetuning a model, saving it with gt4sd-saving, running inference on it and finally uploading it to the model hub.

    I tested everything with the example from the README

    Minors:

    • adding a method filter_stubbed to the molecular RT that removes stub-like molecules ("invalid SELFIES")
    • Bumping paccmann_gp dependency
    enhancement cla-signed 
    opened by jannisborn 0
  • RegressionTransformer saving pipeline

    Is your feature request related to a problem? Please describe. gt4sd-saving does not fully support the RT training pipeline.

    ToDo:

    • Implement get_filepath_mappings_for_training_pipeline_arguments
    • Save inference.json to model dir
    enhancement 
    opened by jannisborn 0
  • Disentangle properties from algorithms

    Is your feature request related to a problem? Please describe. Currently, the properties submodule imports from algorithms.core and thus also from that __init__. In the init, we register all the training pipelines, and thus one needs to have all those dependencies installed, including torchdrug, guacamol_baselines and other VCS requirements.

    Describe the solution you'd like Creating a submodule gt4sd.core that specifies base classes used by multiple submodules like gt4sd.algorithms or gt4sd.properties

    Describe alternatives you've considered Do the imports only when someone calls list_available_algorithms

    NOTE: When creating gt4sd.core we have to make sure that all the rest remains functional, including relative imports, Jupyter notebooks (should be fine since we barely import from algorithms.core directly) and in particular also the documentation.

    enhancement 
    opened by jannisborn 0
  • Add methods for artifact-based property predictors

    Is your feature request related to a problem? Please describe. Currently the artifact-based property predictors (like gt4sd.properties.molecules.core.Tox21) are not usable as functions via gt4sd.properties.molecules.tox_21, unlike all the non-artifact-based properties. Moving the functions there would yield circular import issues.

    Describe the solution you'd like A small refactor that goes around the circular imports

    enhancement 
    opened by jannisborn 0
  • Refactor AlgorithmConfiguration baseclass

    Inconsistent types between the AlgorithmConfiguration base class and the child ConfigurablePropertyAlgorithmConfiguration, concerning attributes like domain but also methods like ensure_artifacts_for_version (class methods in the base class but instance methods in the child class).

    A simple refactor into 3 instead of 2 classes should fix this.

    Originally posted by @jannisborn in https://github.com/GT4SD/gt4sd-core/pull/121#discussion_r943649339

    • So the ones in the constructor, for lines like self.domain=domain, say: error: Cannot assign to class variable "domain" via instance. That's because in the parent class (AlgorithmConfiguration) we set it as domain: ClassVar[str].
    • The ones in the signatures, like get_application_prefix which returns a str, are because in the parent class those are class methods, not instance methods. The error is Signature of "get_application_prefix" incompatible with supertype "AlgorithmConfiguration".

    It might be fixable by a refactor but I'm not sure it's worth it

    refactoring 
    opened by jannisborn 0