Stitch together Nanopore tiled amplicon data without polishing a reference

Overview


Stitch together Nanopore tiled amplicon data using a reference-guided approach

Tiled amplicon data, such as those produced from primers designed with primal scheme, are typically assembled by aligning the reads to a reference and polishing the reference into a sequence that represents the reads. This works very well for obtaining a genome with the SNPs and small indels present in the reads. However, in cases where the reads cannot be mapped well to the reference (e.g. genomes containing hypervariable regions between primers), or where large structural variants are expected, this method may fail, because polishing tools expect the reference to originate from the reads.

Lilo uses a reference only to assign reads to the amplicon they originated from and to order and orient the polished amplicons; no reference sequence is incorporated into the final assembly. Once reads are assigned to an amplicon, a read of roughly median length for that amplicon with high average base quality is selected as a reference and polished three times with medaka using up to 300x coverage. The polished amplicons have their primers removed with porechop (fork: https://github.com/sclamons/Porechop-1) and are then assembled with scaffold_builder.
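To make the read-selection step concrete, here is a minimal Python sketch of the idea (illustrative only, not Lilo's actual code); the Q15 quality floor and the function names are assumptions:

# Sketch (not Lilo's code): among the reads assigned to one amplicon,
# pick a read of roughly median length with high average base quality
# to act as the polishing target.
import gzip
import statistics

def mean_phred(qual):
    # Average Phred quality, assuming the standard offset of 33.
    return sum(ord(c) - 33 for c in qual) / len(qual)

def pick_representative(fastq_gz, min_quality=15.0):
    # min_quality is an illustrative floor, not a Lilo parameter.
    reads = []
    with gzip.open(fastq_gz, "rt") as fh:
        while True:
            header = fh.readline().rstrip()
            if not header:
                break
            seq = fh.readline().rstrip()
            fh.readline()              # '+' separator line
            qual = fh.readline().rstrip()
            reads.append((header, seq, mean_phred(qual)))
    median_len = statistics.median(len(seq) for _, seq, _ in reads)
    # Of the reads passing the quality floor, take the one closest to median length.
    candidates = [r for r in reads if r[2] >= min_quality] or reads
    header, seq, _ = min(candidates, key=lambda r: abs(len(r[1]) - median_len))
    return header, seq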

Lilo has been tested on SARS-CoV-2 with ARTIC V3 primers. It has also been tested on 7 kb and 4 kb amplicons with ~100-1000 bp overlaps for ASFV, PRRSV-1, and PRRSV-2; schemes for these will be made available in the near future.

Requirements not covered by conda

Install Conda :)
Install this fork of porechop and make sure it is in your path: https://github.com/sclamons/Porechop-1

Installation

git clone https://github.com/amandawarr/Lilo  
cd Lilo  
conda env create --file LILO.yaml  
conda env create --file scaffold_builder.yaml

Usage

Lilo assumes your reads are in a folder called raw/ and have the suffix .fastq.gz. Multiple samples can be processed at the same time.
Lilo requires a config file detailing the locations of a reference, a primer scheme (a primal scheme style BED file), and a primers.csv file (all described below).

conda activate LILO
snakemake -k -s /path/to/LILO --configfile /path/to/config.file --cores N

It is recommended to run with -k so that one sample with insufficient coverage will not stop the other jobs from completing.

Input specifications

  • config.file: an example config file has been provided.
  • Primer scheme: as output by primal scheme, with alt primers removed. A BED file of primer alignment locations. Columns: reference name, start, end, primer name, pool (pool name must end with 1 or 2).
  • Primers.csv: comma delimited, includes alt primers, with a header line. Columns: amplicon_name, F_primer_name, F_primer_sequence, R_primer_name, R_primer_sequence. If any of the primers contain many degenerate bases, it is recommended to expand these; the script expand.py will expand the csv described here into a longer csv with the IUPAC codes expanded (see the sketch after this list).
  • reference.fasta: the same reference used to make the scheme file.
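As an illustration of the IUPAC expansion step (the actual expand.py in the repository may differ), each degenerate base is replaced by every concrete base it encodes, so one degenerate primer becomes several plain ACGT primers:

# Illustrative sketch of IUPAC expansion (the repository's expand.py may differ).
from itertools import product

IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "S": "CG", "W": "AT", "K": "GT", "M": "AC",
    "B": "CGT", "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT",
}

def expand_primer(seq):
    # Yield every concrete ACGT sequence encoded by a degenerate primer.
    for combo in product(*(IUPAC[base] for base in seq.upper())):
        yield "".join(combo)

# A primer with one R (A/G) and one N (A/C/G/T) expands to 2 x 4 = 8 sequences:
print(list(expand_primer("ARN")))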

Output

Lilo uses the names from raw/ to name the output files. For a file named "sample.fastq.gz", the final assembly will be named "sample_Scaffold.fasta", and files produced during the pipeline will be in a folder called "sample". The output will contain the amplicons that had at least 40x full-length coverage; missing amplicons will be represented by Ns. Any ambiguity at overlaps will be indicated with IUPAC codes.
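Since missing amplicons are encoded as runs of Ns, a quick post-run check (a sketch, not part of the pipeline) can list the gap coordinates in the final scaffold:

# Sketch: report the N runs in the final scaffold, which mark amplicons
# that fell below 40x full-length coverage.
import re

def missing_regions(scaffold_fasta):
    with open(scaffold_fasta) as fh:
        seq = "".join(line.strip() for line in fh if not line.startswith(">"))
    return [(m.start(), m.end()) for m in re.finditer(r"N+", seq, re.IGNORECASE)]

for start, end in missing_regions("sample_Scaffold.fasta"):
    print(f"missing region: {start}-{end} ({end - start} bp)")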

Note

  • Use of the wrong fork of porechop will cause the pipeline to fail.
  • Lilo is a work in progress and has been tested on a limited number of references, amplicon sizes, and overlap sizes; I recommend checking the results carefully for each new scheme.
  • The pipeline currently assumes that any structural variants are contained between the primers of an amplicon and do not change the length of the amplicon by more than 5%. If an alt amplicon produces a product of a different length to the original amplicon, it may not be allocated to its amplicon. Making the pipeline work better with alt amplicons is on my to-do list.
  • Lilo should not be used with reads produced with rapid kits, as the pipeline assumes the reads are the length of the amplicons.
  • Do let me know if it destroys any cities or steals everyone's left shoe.
Comments
  • Error in rule reporechop

    Hello, while running the sample dataset I encountered the following error messages. I have made sure that porechop is installed correctly and is in the path.

    Any help is greatly appreciated.

    Error in rule reporechop:
        jobid: 2
        output: FAT94769_pass_barcode02_66883b35_0/polished_trimmed.fa
        shell:
            porechop --adapter_threshold 72 --end_threshold 70 --end_size 30 --extra_end_trim 5 --min_trim_size 3 -f ASFV.primers.csv -i FAT94769_pass_barcode02_66883b35_0/polished_clipped_amplicons.fa --threads 8 --no_split -o FAT94769_pass_barcode02_66883b35_0/polished_trimmed.fa
        (one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)

    opened by tboonf 1
  • Error while running LILO

    Dear all, I get the following error while running LILO. Any idea what the problem could be?

    /bin/bash: /home/minion/anaconda3/envs/LILO/etc/profile.d/conda.sh: No such file or directory
    
    CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
    To initialize your shell, run
    
        $ conda init <SHELL_NAME>
    
    Currently supported shells are:
      - bash
      - fish
      - tcsh
      - xonsh
      - zsh
      - powershell
    
    See 'conda init --help' for more information and options.
    
    IMPORTANT: You may need to close and restart your shell after running 'conda init'.
    
    
    /bin/bash: line 2: scaffold_builder.py: command not found
    sed: can't read reads_24h_Scaffold.fasta: No such file or directory
    [Wed Aug 10 11:12:28 2022]
    Error in rule scaffold:
        jobid: 1
        output: reads_24h_Scaffold.fasta
        shell:
            source $CONDA_PREFIX/etc/profile.d/conda.sh
                    conda activate scaffold_builder
                    scaffold_builder.py -i 75 -t 3693 -g 80000 -r /home/minion/lilo-test/ASFV.reference.fasta -q reads_24h/polished_trimmed.fa -p reads_24h
                    sed -i '1 s/^.*$/>reads_24h_Lilo_scaffold/' reads_24h_Scaffold.fasta
            (one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)
    
    Job failed, going on with independent jobs.
    Exiting because a job execution failed. Look above for error message
    Complete log: /home/minion/lilo-test/.snakemake/log/2022-08-10T111227.425486.snakemake.log
    

    Kind regards, Elisabeth

    opened by el-mat 1
  • LILO with SLURM

    Hi there,

    I'm trying to run LILO on a SLURM HPC and I'm not sure what the errors are related to. Do you have an idea? It seems very environment dependent, but maybe you have stumbled across something similar.

    Call:

    snakemake -k -s [...]/tools/Lilo/LILO --configfile $CONFIG --profile [...]/tools/config-snippets/snake-cookies/slurm
    

    Log:

    [...]
    MissingOutputException in line 84 of [...]/tools/Lilo/LILO:
    Job Missing files after 30 seconds:
    FAR95540_pass_unclassified_7f618209_73/split/amplicon51.fastq
    This might be due to filesystem latency. If that is the case, consider to increase the wait time with --latency-wait.
    Job id: 133673 completed successfully, but some output files are missing. 133673
    Trying to restart job 133673.
    [...]
    Error in rule assign:
        jobid: 133673
        output: FAR95540_pass_unclassified_7f618209_73/split/amplicon51.fastq
        shell:
            bedtools intersect -F 0.9 -wa -wb -bed -abam FAR95540_pass_unclassified_7f618209_73/alignments/reads_to_ref.bam -b amplicons.bed  | grep amplicon51 - | awk '{print $4}' - | seqtk subseq porechop/FAR95540_pass_unclassified_7f618209_73.fastq.gz - > FAR95540_pass_unclassified_7f618209_73/split/amplicon51.fastq
            (one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)
        cluster_jobid: 210115
    
    Error executing rule assign on cluster (jobid: 133673, external: 210115, jobscript: [...]/.snakemake/tmp.cssfeg5e/snakejob.assign.133673.sh). For error details see the cluster log and the log files of the involved rule(s).
    [...]
    Traceback (most recent call last):
      File "/scratch/lataretum/miniconda3/envs/LILO/lib/python3.8/site-packages/snakemake/__init__.py", line 701, in snakemake
        success = workflow.execute(
      File "/scratch/lataretum/miniconda3/envs/LILO/lib/python3.8/site-packages/snakemake/workflow.py", line 1077, in execute
        success = self.scheduler.schedule()
      File "/scratch/lataretum/miniconda3/envs/LILO/lib/python3.8/site-packages/snakemake/scheduler.py", line 441, in schedule
        self._error_jobs()
      File "/scratch/lataretum/miniconda3/envs/LILO/lib/python3.8/site-packages/snakemake/scheduler.py", line 557, in _error_jobs
        self._handle_error(job)
      File "/scratch/lataretum/miniconda3/envs/LILO/lib/python3.8/site-packages/snakemake/scheduler.py", line 615, in _handle_error
        self.running.remove(job)
    KeyError: assign
    

    I set --latency-wait 90; it again breaks after some time at an assign rule, with a KeyError: read_select from the snakemake scheduler.

    Let me know which input/config files might be interesting to solve this. :)

    opened by MarieLataretu 7
Releases

v0.2

Owner

Amanda Warr