CHERRY is a python library for predicting the interactions between viral and prokaryotic genomes


CHERRY is a Python library for predicting the interactions between viral and prokaryotic genomes. It is based on a deep learning model that consists of a graph convolutional encoder and a link prediction decoder.

Overview

There are two kinds of tasks that CHERRY can perform:

  1. Host prediction for viruses
  2. Identifying viruses that infect pathogenic bacteria

Users can choose one of these tasks when running CHERRY. If you have any trouble installing or using CHERRY, please let us know by opening an issue on GitHub or emailing us ([email protected]).

Required Dependencies

  • Python 3.x
  • Numpy
  • Pytorch>1.8.0
  • Networkx
  • Pandas
  • Diamond
  • BLAST
  • MCL
  • Prodigal

All these packages can be installed using Anaconda.

If you want to use a GPU to accelerate the program, you will also need:

  • CUDA
  • PyTorch (GPU build)

An easier way to install

We recommend installing all the packages with Anaconda.

After cloning this repository, you can use Anaconda to create an environment from CHERRY.yaml. This installs all the packages you need in GPU mode (make sure CUDA is installed on your system to use the GPU version; otherwise, it will run in CPU mode). The command is: conda env create -f CHERRY.yaml
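For example, the full sequence after cloning might look like the following (a sketch; "cherry" is the environment name we assume CHERRY.yaml defines, so check the yaml if activation fails):

cd CHERRY
conda env create -f CHERRY.yaml
conda activate cherry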

  • For the CPU version of PyTorch: conda install pytorch torchvision torchaudio cpuonly -c pytorch
  • For the GPU version of PyTorch: search the PyTorch website for the install command that matches the CUDA version on your machine.

Note: we suggest installing all the packages with conda (both Miniconda and Anaconda are fine). We supply a CHERRY.yaml file for this purpose (see above).
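As a hedged example of a GPU install (this assumes CUDA 11.3; pick the command from pytorch.org that matches your own CUDA version and desired PyTorch release):

conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch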

Prepare the database

Due to GitHub's file size limits, we compressed the database. Before using CHERRY, you need to unpack it using the following commands.

cd CHERRY/dataset
bzip2 -d protein.fasta.bz2
bzip2 -d nucl.fasta.bz2
cd ../prokaryote
gunzip *
cd ..

Usage

1. Predicting hosts for viruses

If you want to predict hosts for viruses, the input should be a fasta file containing the viral sequences. We provide an example file named "test_contigs.fa" in the GitHub folder. Then, the only command you need to run is

python run_Speed_up.py [--contigs INPUT_FA] [--len MINIMUM_LEN] [--model MODEL] [--topk TOPK_PRED]

Options

  --contigs INPUT_FA
                        input fasta file
  --len MINIMUM_LEN
                        predict only for sequences >= len bp (default 8000)
  --model MODEL (pretrain or retrain)
                        predict hosts with pretrained or retrained parameters (default pretrain)
  --topk TOPK_PRED
                        report the top k host predictions by score (default 1)

Example

Prediction at the species level with pretrained parameters:

python run_Speed_up.py --contigs test_contigs.fa --len 8000 --model pretrain --topk 3

Note: Commonly, you do not need to retrain the model, especially if you do not have a GPU.

OUTPUT

The format of the output file is a csv file ("final_prediction.csv") which contain the prediction of each virus. Column contig_name is the accession from the input.

Since top-k predictions are reported, the output does not include the full taxonomic tree for each prediction. However, we supply a script to convert the predictions into a complete taxonomy tree. Use the following command to generate it:

python run_Taxonomy_tree.py [--k TOPK_PRED]

Because there are k predictions in the "final_prediction.csv" file, you need to specify k to generate the tree. The output of the program is 'Top_k_prediction_taxonomy.csv'.
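For example, if you ran the prediction above with --topk 3, the corresponding call would presumably be:

python run_Taxonomy_tree.py --k 3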

2. Predicting viruses that infect prokaryotes

If you want to predict which viruses infect your prokaryotes, you need to supply two kinds of inputs:

  1. Your prokaryotic genomes, placed in the new_prokaryote/ folder.
  2. A fasta file containing the virus sequences. The program will then output which viruses in your fasta file infect the prokaryotes in the new_prokaryote/ folder.

The command is similar to the previous one, but two more parameters are needed:

python run_Speed_up.py [--mode MODE] [--t THRESHOLD]

Example

python run_Speed_up.py --contigs test_contigs.fa --mode prokaryote --t 0.98

Options

  --mode MODE (prokaryote or virus)
                        switch task: virus for predicting hosts of viruses, prokaryote for identifying viruses that infect the given prokaryotes
  --t THRESHOLD
                        the confidence threshold for the prediction; the higher the threshold, the higher the precision (default 0.98)

OUTPUT

The output is a CSV file that contains the prediction for each virus. The prokaryote column is the accession of your given prokaryotic genome. The virus column is the list of viruses that might infect that genome.

Extending the prokaryotic genome database

Due to storage limitations on GitHub, we only provide the prokaryotes with known interactions (up to 2020) in the prokaryote folder. If you want to predict interactions with more species, place your prokaryotic genomes into the prokaryote/ folder and add an entry with their taxonomy information to dataset/prokaryote.csv. We also recommend adding only the prokaryotes of interest, to save computational resources and time: all the genomes in the prokaryote folder are used to generate the multimodal graph, which takes O(n^2) time in the number of genomes.

Example

Suppose you have metagenomic data and you know that only E. coli, Butyrivibrio fibrisolvens, and Faecalibacterium prausnitzii exist in it. Then you can place the genomes of these three species into prokaryote/ and add the corresponding entries to dataset/prokaryote.csv. An example entry looks like:

GCF_000007445,Bacteria,Proteobacteria,Gammaproteobacteria,Enterobacterales,Enterobacteriaceae,Escherichia,Escherichia coli

The corresponding header of the entry is: Accession,Superkingdom,Phylum,Class,Order,Family,Genus,Species. If you do not know the whole taxonomy, you can fill the unknown columns with a placeholder name of your choice; because CHERRY is a link prediction tool, it will use the given names directly for prediction.

Note: Since the program uses the accession to search and construct the knowledge graph, the file name of your genome's fasta file must match the given accession. For example, if your accession is GCF_000007445, your file name should be GCF_000007445.fa. Otherwise, the program cannot find the entry.
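A minimal sketch of these two steps, reusing the E. coli entry above (the source path of the genome file is illustrative):

cp /path/to/GCF_000007445.fa prokaryote/
echo "GCF_000007445,Bacteria,Proteobacteria,Gammaproteobacteria,Enterobacterales,Enterobacteriaceae,Escherichia,Escherichia coli" >> dataset/prokaryote.csv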

Extending the virus-prokaryote interaction database

If you know of more virus-prokaryote interactions than those used by our pre-trained model (given in Interactiondata), you can add them and train a custom model. The steps are:

  1. Add your viral genomes to the nucl.fasta file and run python refresh.py to generate new protein.fasta and database_gene_to_genome.csv files. They will automatically replace the old ones in the dataset/ folder.
  2. Add entries with the host taxonomy information to dataset/virus.csv. The corresponding header is: Accession (of the virus), Superkingdom, Phylum, Class, Order, Family, Genus, Species. Only the Species field is required; you can leave the others blank if you do not know them. The accession of the virus must be the same as in your fasta entry.
  3. Place your prokaryotic genomes into the prokaryote/ folder and add an entry to dataset/prokaryote.csv. The guideline is the same as in the previous section.
  4. Run the program with retrain as the value of the --model option (see the example below).
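A hedged example of a retraining run, assuming your viral contigs are again in test_contigs.fa:

python run_Speed_up.py --contigs test_contigs.fa --model retrain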

References

The paper has been submitted to Briefings in Bioinformatics.

The arXiv version can be found via: CHERRY: a Computational metHod for accuratE pRediction of virus-pRokarYotic interactions using a graph encoder-decoder model

Contact

If you have any questions, please email us: [email protected]

Notes

  1. If the program outputs the error (caused by your machine's environment): Error: mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL is incompatible with libgomp.so.1 library, you can run export MKL_SERVICE_FORCE_INTEL=1 before running run_Speed_up.py.
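For example (reusing the host-prediction command from above):

export MKL_SERVICE_FORCE_INTEL=1
python run_Speed_up.py --contigs test_contigs.fa --len 8000 --model pretrain --topk 3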