
zellerlab/flexprofiler pipeline parameters

Taxonomic classification and profiling of metagenomic and 16S data

Input/output options

Define where the pipeline should find input data and save output data.

Parameter Description Type Default Required Hidden
input Path to comma-separated file containing information about the samples and libraries/runs.
HelpYou will need to create a design file with information about the samples and libraries/runs you want to include in your pipeline run. Use this parameter to specify its location. It has to be a comma-separated file with 6 columns, and a header row. See usage docs.
string True
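As an illustration only (the authoritative column names and requirements are defined in the usage docs; the headers below are assumptions based on common nf-core conventions), a 6-column samplesheet might look like:

```csv
sample,run_accession,instrument_platform,fastq_1,fastq_2,fasta
sampleA,run1,ILLUMINA,sampleA_run1_R1.fastq.gz,sampleA_run1_R2.fastq.gz,
sampleB,run1,OXFORD_NANOPORE,sampleB_run1.fastq.gz,,
```

Empty fields (e.g. fastq_2 for single-end or long-read data) are left blank, but the commas are kept.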
databases Path to comma-separated file containing information about databases and profiling parameters for each taxonomic profiler
HelpYou will need to create a design file with information about the samples in your experiment before running the pipeline. Use this parameter to specify its location. It has to be a comma-separated file with 4 columns, and a header row. See usage docs.

Profilers will only be executed if a corresponding database is supplied.

We recommend storing this database sheet somewhere central and accessible to other members of your lab/institution, as this file will likely be regularly reused.
string True
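Again purely illustrative (the real 4-column format is described in the usage docs; these column headers and the path are assumptions for the example), a database sheet might look like:

```csv
tool,db_name,db_params,db_path
motus,db_mOTU,,/path/to/databases/motus/db_mOTU
```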
save_untarred_databases Specify to save decompressed user-supplied TAR archives of databases
HelpIf input databases are supplied as gzipped TAR archives, in some cases you may want to move and re-use these for future runs. Specifying this parameter will save these to --outdir results/ under a directory called untar.
boolean
outdir The output directory where the results will be saved. You have to use absolute paths to storage on Cloud infrastructure. string True
email Email address for completion summary.
HelpSet this parameter to your e-mail address to get a summary e-mail with details of the run sent to you when the workflow exits. If set in your user config file (~/.nextflow/config) then you don't need to specify this on the command line for every run.
string
multiqc_title MultiQC report title. Printed as page header, used for filename if not otherwise specified. string

Preprocessing general QC options

Common options across both long and short read preprocessing QC steps

Parameter Description Type Default Required Hidden
skip_preprocessing_qc Specify to skip sequencing quality control of raw sequencing reads
HelpSkipping FastQC or Falco may be useful in cases where you are running with preprocessed data whose quality you already know (e.g. you are also skipping the short/long read QC steps).
boolean
preprocessing_qc_tool Specify the tool used for quality control of raw sequencing reads (accepted: fastqc|falco)
HelpFalco is designed as a drop-in replacement for FastQC but written in C++ for faster computation. We particularly recommend using falco when using long reads (due to reduced memory constraints), however it is also applicable to short reads.
string fastqc
save_preprocessed_reads Save reads from samples that went through the adapter clipping, pair-merging, and length filtering steps for both short and long reads
HelpThis saves the FASTQ output from the following tools:

- fastp
- AdapterRemoval
- Porechop
- Filtlong
- Nanoq

These reads will be a mixture of: adapter clipped, quality trimmed, pair-merged, and length filtered, depending on the parameters you set.
boolean
save_analysis_ready_fastqs Save only the final reads from all read processing steps (that are sent to classification/profiling) in results directory.
HelpThis flag will generate the directory results/analysis_ready_reads that contains the reads from the last preprocessing (QC, host removal, run merging etc.) step of the pipeline run.

This can be useful if you wish to re-use the final cleaned-up and prepared reads - the data actually used for the actual classification/profiling steps of the pipeline - for other analyses or purposes (e.g., to reduce redundant preprocessing between different pipelines, e.g. nf-core/mag).

In most cases this will be preferred over similar parameters e.g. --save_preprocessed_reads or --save_complexityfiltered_reads, unless you wish to explore in more detail the output of each specific preprocessing step independently.

Note if you do no preprocessing of any kind, nothing will be present in this directory.
boolean

Preprocessing short-read QC options

Options for adapter clipping, quality trimming, pair-merging, and complexity filtering

Parameter Description Type Default Required Hidden
perform_shortread_qc Turns on short read quality control steps (adapter clipping, complexity filtering etc.)
HelpTurns on short read quality control steps (adapter clipping, complexity filtering etc.)

This subworkflow can perform:

- Adapter removal
- Read quality trimming
- Read pair merging
- Length filtering
- Complexity filtering

Either with fastp or AdapterRemoval.

Removing adapters (if present) is recommended to reduce false-positive hits that may occur from 'dirty' or 'contaminated' reference genomes in a profiling database that contain accidentally incorporated adapter sequences. Note that some, but not all, tools support paired-end alignment (utilising information about the insert covered by the pairs). However, read pair merging can in some cases be recommended to increase read length (such as in aDNA). Length filtering and/or complexity filtering can speed up alignment by reducing the number of short unspecific reads that need to be aligned.
boolean
shortread_qc_tool Specify which tool to use for short-read QC (accepted: fastp|adapterremoval) string fastp
shortread_qc_skipadaptertrim Skip adapter trimming
HelpSkip the removal of sequencing adapters.

This can often be useful to speed up the run-time of the pipeline when analysing data downloaded from public databases such as the ENA or SRA, as adapters should already be removed (however we recommend checking FastQC results to ensure this is the case).
boolean
shortread_qc_adapter1 Specify adapter 1 nucleotide sequence
HelpSpecify a custom forward or R1 adapter sequence to be removed from reads.

If not set, the selected short-read QC tool's defaults will be used.

> Modifies tool parameter(s):
> - fastp: --adapter_sequence. fastp default: AGATCGGAAGAGCACACGTCTGAACTCCAGTCA
> - AdapterRemoval: --adapter1. AdapterRemoval2 default: AGATCGGAAGAGCACACGTCTGAACTCCAGTCACNNNNNNATCTCGTATGCCGTCTTCTGCTTG
string
shortread_qc_adapter2 Specify adapter 2 nucleotide sequence
HelpSpecify a custom reverse or R2 adapter sequence to be removed from reads.

If not set, the selected short-read QC tool's defaults will be used.

> Modifies tool parameter(s):
> - fastp: --adapter_sequence. fastp default: AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGT
> - AdapterRemoval: --adapter2. AdapterRemoval2 default: AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGTAGATCTCGGTGGTCGCCGTATCATT
string
shortread_qc_adapterlist Specify a list of all possible adapters to trim. Overrides --shortread_qc_adapter1/2. Formats: .txt (AdapterRemoval) or .fasta (fastp).
HelpAllows you to supply a file with a list of adapters (or adapter combinations) to remove from all files.

Overrides the --shortread_qc_adapter1/--shortread_qc_adapter2 parameters.

For AdapterRemoval this consists of a two-column table with a .txt extension: the first column represents the forward strand, the second the reverse strand. You must supply all possible combinations, one per line, and this list is applied to all files. See the AdapterRemoval documentation for more information.

For fastp this consists of a standard FASTA format with a .fasta/.fa/.fna/.fas extension. The adapter sequence in this file should be at least 6bp long, otherwise it will be skipped. fastp trims the adapters present in the FASTA file one by one.

> Modifies AdapterRemoval parameter: --adapter-list
> Modifies fastp parameter: --adapter_fasta
string
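To make the two formats concrete, here are minimal sketches built from the default adapter sequences quoted above (treat them as format illustrations, not recommended adapter lists). For AdapterRemoval, a two-column .txt file:

```
AGATCGGAAGAGCACACGTCTGAACTCCAGTCA    AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGT
```

For fastp, a standard FASTA file (each sequence at least 6 bp; the sequence names are arbitrary):

```
>adapter_read1
AGATCGGAAGAGCACACGTCTGAACTCCAGTCA
>adapter_read2
AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGT
```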
shortread_qc_mergepairs Turn on merging of read pairs for paired-end data
HelpTurn on the merging of read-pairs of paired-end short read sequencing data.

> Modifies tool parameter(s):
> - AdapterRemoval: --collapse
> - fastp: -m --merged_out
boolean
shortread_qc_includeunmerged Include unmerged reads from paired-end merging in the downstream analysis
HelpTurns on the inclusion of unmerged reads in resulting FASTQ file from merging paired-end sequencing data when using fastp and/or AdapterRemoval. For fastp this means the unmerged read pairs are directly included in the output FASTQ file. For AdapterRemoval, additional output files containing unmerged reads are all concatenated into one file by the workflow.

Excluding unmerged reads can be useful in cases where you prefer to have very short reads (e.g. aDNA), thus excluding longer reads or possibly faulty reads where one of the pair was discarded.

> Adds fastp option: --include_unmerged
boolean
shortread_qc_minlength Specify the minimum length of reads to be retained
HelpSpecifying a minimum read length filter can speed up profiling by reducing the number of short unspecific reads that need to be matched/aligned to the database.

> Modifies tool parameter(s):
> - fastp: --length_required
> - AdapterRemoval: --minlength
integer 15
shortread_qc_dedup Perform deduplication of the input reads (fastp only)
HelpThis enables the deduplication of processed reads during fastp adapter removal and/or merging. It removes identical reads that are likely artefacts from laboratory protocols (e.g. amplification), and provide no additional sequence information to the library.

Removing duplicates can increase runtime, but can improve the accuracy of abundance calculations.

> Modifies tool parameter(s):
> - fastp: --dedup
boolean
perform_shortread_complexityfilter Turns on nucleotide sequence complexity filtering
HelpTurns on sequence complexity filtering. Complexity filtering can be useful to reduce run-time by removing unspecific read sequences that do not provide any informative taxon ID.
boolean
shortread_complexityfilter_tool Specify which tool to use for complexity filtering (accepted: bbduk|prinseqplusplus|fastp) string bbduk
shortread_complexityfilter_entropy Specify the minimum sequence entropy level for complexity filtering
HelpSpecify the minimum 'entropy' value for complexity filtering for BBDuk or PRINSEQ++.

Note that this value will only be used for PRINSEQ++ if --shortread_complexityfilter_prinseqplusplus_mode is set to entropy.

Entropy here corresponds to the amount of sequence variation that exists within the read. Higher values correspond to more variety, and thus will likely result in more specific matching to a taxon's reference genome. The trade-off is that fewer reads (and hence less abundance information) remain available for a confident identification.


> Modifies tool parameter(s):
> - BBDuk: entropy=
> - PRINSEQ++: -lc_entropy

number 0.3
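To build intuition for what the entropy value measures, here is a small Python sketch of a normalised k-mer entropy over a read. This is illustrative only: BBDuk's and PRINSEQ++'s internal formulas may differ in detail, and the function name and default k-mer size here are our own choices.

```python
import math
from collections import Counter

def sequence_entropy(seq: str, k: int = 5) -> float:
    """Normalised Shannon entropy of k-mer counts in a read.

    Illustrative sketch only; BBDuk computes entropy within a sliding
    window (see --shortread_complexityfilter_bbduk_windowsize).
    Returns a value in [0, 1]: higher means more sequence variety.
    """
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    if not kmers:
        return 0.0
    counts = Counter(kmers)
    n = len(kmers)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    max_h = math.log2(n)  # upper bound: every observed k-mer unique
    return h / max_h if max_h > 0 else 0.0

# A homopolymer run scores 0; a varied read scores close to 1
low = sequence_entropy("A" * 50)
high = sequence_entropy("ACGTTGCAGTACGGTTACGATCCGATGCAATCGGATCAGTTACGGATCAT")
print(low, high)
```

Under this intuition, a filter with --shortread_complexityfilter_entropy 0.3 discards reads whose entropy falls below 0.3.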
shortread_complexityfilter_bbduk_windowsize Specify the window size for BBDuk complexity filtering
HelpSpecify the window size within which BBDuk calculates the entropy level.

> Modifies tool parameter(s):
> - BBDuk: entropywindow=
integer 50
shortread_complexityfilter_bbduk_mask Turn on masking rather than discarding of low complexity reads for BBDuk
HelpTurn on masking of low-complexity reads (i.e., replacement with N) rather than removal.

> Modifies tool parameter(s)
> - BBDuk: entropymask=
boolean
shortread_complexityfilter_fastp_threshold Specify the minimum complexity filter threshold of fastp
HelpSpecify the minimum sequence complexity value for fastp. This value corresponds to the percentage of bases that are different from their adjacent bases.

> Modifies tool parameter(s):
> - fastp: --complexity_threshold
integer 30
shortread_complexityfilter_prinseqplusplus_mode Specify the complexity filter mode for PRINSEQ++ (accepted: entropy|dust) string entropy
shortread_complexityfilter_prinseqplusplus_dustscore Specify the minimum dust score for PRINSEQ++ complexity filtering
HelpSpecify the minimum dust score below which low-complexity reads will be removed. A DUST score is based on how often different tri-nucleotides occur along a read.

> Modifies tool parameter(s):
> - PRINSEQ++: --lc_dust
number 0.5
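The trinucleotide idea can be sketched in Python. This is a simplified illustration of DUST-style scoring, not PRINSEQ++'s actual implementation; the normalisation used here is our own.

```python
from collections import Counter

def dust_like_score(seq: str) -> float:
    """Simplified DUST-style low-complexity score in [0, 1].

    Repetitive sequences (few distinct trinucleotides) score high,
    diverse sequences score low. Illustrative only; PRINSEQ++'s
    exact formula and scaling differ.
    """
    triplets = [seq[i:i + 3] for i in range(len(seq) - 2)]
    n = len(triplets)
    if n < 2:
        return 0.0
    counts = Counter(triplets)
    # Each trinucleotide seen c times contributes c*(c-1)/2 repeat pairs
    raw = sum(c * (c - 1) / 2 for c in counts.values())
    max_raw = n * (n - 1) / 2  # all trinucleotides identical
    return raw / max_raw

repetitive = dust_like_score("ATATATATATATATATATAT")  # scores high
diverse = dust_like_score("ACGTTGCAGTACGGTTACGA")     # scores low
print(repetitive, diverse)
```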
save_complexityfiltered_reads Save reads from samples that went through the complexity filtering step
HelpSpecify whether to save the final complexity filtered reads in your results directory (--outdir).
boolean

Preprocessing long-read QC options

Options for adapter clipping, quality trimming, and length filtering

Parameter Description Type Default Required Hidden
perform_longread_qc Turns on long read quality control steps (adapter clipping, length filtering etc.)
HelpTurns on long read quality control steps (adapter clipping, length and/or quality filtering).

Removing adapters (if present) is recommended to reduce false-positive hits that may occur from 'dirty' or 'contaminated' reference genomes in a profiling database that contain accidentally incorporated adapter sequences.

Length filtering and quality filtering can speed up alignment by reducing the number of unspecific reads that need to be aligned.
boolean
longread_adapterremoval_tool Specify which tool to use for adapter trimming. (accepted: porechop|porechop_abi)
HelpPorechop and Porechop_ABI perform similarly in terms of removing adapters. However, Porechop is no longer updated, whereas Porechop_ABI receives regular updates.
string porechop_abi
longread_qc_skipadaptertrim Skip long-read trimming
HelpSkip removal of adapters by Porechop. This can be useful in some cases to speed up run time, particularly when you are running on data downloaded from public databases such as the ENA/SRA that should already have adapters removed. We recommend that you check your FastQC results to confirm this is indeed the case.
boolean
longread_filter_tool Specify which tool to use for long reads filtering (accepted: filtlong|nanoq)
HelpNanoq is a filtering tool only for Nanopore reads. Nanoq is faster and more memory-efficient than Filtlong. Nanoq also provides a summary of input read statistics; see benchmarking.

Filtlong is a good option if you want to keep a certain percentage of reads after filtering, and you can also use it for non-Nanopore long reads.
string nanoq
longread_qc_skipqualityfilter Skip long-read length and quality filtering
HelpSkip length and quality filtering with Filtlong or Nanoq. This will skip length, percent-reads, and target-bases filtering (see other --longread_qc_qualityfilter_* parameters).
boolean
longread_qc_qualityfilter_minlength Specify the minimum length of reads to be retained
HelpSpecify the minimum length of reads to be kept for downstream analysis.

> Modifies tool parameter(s):
> - Filtlong: --min_length
> - Nanoq: --min-len
integer 1000
longread_qc_qualityfilter_keeppercent Specify the percent of high-quality bases to be retained
HelpDiscard the worst-quality reads so that only this percentage of bases is retained. This is measured by bp, not by read count: e.g. setting the parameter to 90 throws out the worst 10% of read bases. (Adapted from the Filtlong documentation.)

> Modifies tool parameter(s):
> - Filtlong: --keep_percent
integer 90
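The behaviour can be sketched as follows. This is an illustration of the idea only, not Filtlong's actual selection algorithm; the function name and the (sequence, mean_quality) tuple layout are invented for this example.

```python
def keep_best_bases(reads, keep_percent=90.0):
    """Sketch of Filtlong-style --keep_percent filtering (illustrative).

    Keeps the best-quality reads until at least `keep_percent` percent
    of the total bases are retained; the worst reads are discarded.
    `reads` is a list of (sequence, mean_quality) tuples.
    """
    total_bp = sum(len(seq) for seq, _ in reads)
    target_bp = total_bp * keep_percent / 100.0
    kept, kept_bp = [], 0
    # Take best-quality reads first until the base target is met
    for seq, qual in sorted(reads, key=lambda r: r[1], reverse=True):
        if kept_bp >= target_bp:
            break
        kept.append((seq, qual))
        kept_bp += len(seq)
    return kept

reads = [("A" * 100, 30.0), ("C" * 100, 20.0), ("G" * 100, 5.0)]
# keep_percent=66 keeps the two best reads (200 of 300 bp)
print(len(keep_best_bases(reads, keep_percent=66)))
```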
longread_qc_qualityfilter_targetbases Filtlong only: specify the number of high-quality bases in the library to be retained
HelpRemoves the worst reads until only the specified value of bases remain, useful for very large read sets. If the input read set is less than the specified value, this setting will have no effect. Modified from Filtlong documentation

> Modifies tool parameter(s):
> - Filtlong: --target_bases
integer 500000000
longread_qc_qualityfilter_minquality Nanoq only: specify the minimum average read quality filter (Q)
HelpRemove reads with an average quality score lower than the specified value.

> Modifies tool parameter(s):
> - Nanoq: --min-qual
integer 7

Redundancy Estimation

Estimate metagenome sequencing redundancy and coverage

Parameter Description Type Default Required Hidden
perform_shortread_redundancyestimation Turn on short-read metagenome sequencing redundancy estimation with nonpareil. Warning: only use for shallow short-read sequencing datasets.
HelpTurns on Nonpareil, a tool for estimating metagenome 'coverage', i.e., whether all genomes within the metagenome have had at least one read sequenced.

It estimates this by checking the read redundancy between a subsample of reads versus other reads in the library.

The more redundancy that exists, the more likely it is that all possible reads in the library have been sequenced and that 'redundant' reads are simply sequenced PCR duplicates.

The lower the redundancy, the more sequencing should be done until the entire metagenome has been captured. The output can be used to guide how much further sequencing is required.

Note this is not the same as genomic coverage, which is the number of times a base-pair is covered by unique reads on a reference genome.

Before using this tool please note the following caveats:

- It is not recommended to run this on deep sequencing data, or very large datasets
- Your shortest reads after processing should not go below 24bp (see warning below)
- It is not recommended to keep unmerged (--shortread_qc_includeunmerged) reads when using the calculation.
:::warning
On default settings, with 'kmer mode', you must make sure that your shortest processed reads do not go below 24 bp (the default kmer size).

If you have errors regarding kmer size, you will need to specify the kmer size in a custom config within a process block:

withName: NONPAREIL {
    ext.args = { "-k <NUMBER>" }
}

where <NUMBER> should be no larger than the length of the shortest read in your library
:::
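Put concretely, such an override can be supplied as a custom config passed with -c. A sketch (the filename and the kmer value 30 are purely illustrative; as the warning above notes, the value must suit your shortest processed read):

```groovy
// nonpareil_kmer.config (hypothetical filename)
process {
    withName: NONPAREIL {
        // -k value is illustrative; match it to your shortest read
        ext.args = { "-k 30" }
    }
}
```

It would then be supplied at runtime with, e.g., nextflow run zellerlab/flexprofiler -c nonpareil_kmer.config (other options omitted).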
boolean
shortread_redundancyestimation_mode Specify mode for identifying redundant reads (accepted: kmer|alignment)
HelpSpecify which read-comparison mode to use to check for redundancy.

k-mer mode is faster but less precise and is recommended for FASTQ files. Alignment mode is more precise but slower; it is recommended for FASTA files.

> Modifies tool parameter(s):
> - Nonpareil: -T
string kmer

Preprocessing host removal options

Options for pre-profiling host read removal

Parameter Description Type Default Required Hidden
perform_shortread_hostremoval Turn on short-read host removal
HelpTurns on the ability to remove short reads that derive from a known organism, using Bowtie2 and samtools.

This subworkflow is useful to remove reads that may come from a host, or a known contaminant like the human reference genome. Human DNA contamination of (microbial) reference genomes is well known, so removing these reads prior to profiling both reduces the risk of false positives and, in some cases, gives a faster runtime (as fewer reads need to be profiled).

Alternatively, you can include the reference genome within your profiling databases and turn off this subworkflow, with the trade-off of a larger taxonomic profiling database.
boolean
perform_longread_hostremoval Turn on long-read host removal
HelpTurns on the ability to remove long reads that derive from a known organism, using minimap2 and samtools.

This subworkflow is useful to remove reads that may come from a host, or a known contaminant like the human reference genome. Human DNA contamination of (microbial) reference genomes is well known, so removing these reads prior to profiling both reduces the risk of false positives and, in some cases, gives a faster runtime (as fewer reads need to be profiled).

Alternatively, you can include the reference genome within your profiling databases and turn off this subworkflow, with the trade-off of a larger taxonomic profiling database.
boolean
hostremoval_reference Specify path to single reference FASTA of host(s) genome(s)
HelpSpecify a path to the FASTA file (optionally gzipped) of the reference genome of the organism to be removed.

If you have two or more host organisms or contaminants you wish to remove, you can concatenate the FASTAs of the different taxa into a single one to provide to the pipeline.
string
shortread_hostremoval_index Specify path to the directory containing pre-made Bowtie2 indexes of the host removal reference
HelpSpecify the path to a directory containing pre-made Bowtie2 reference index files (i.e. the directory containing the .bt2 index files etc.). These should sit in the same directory alongside the reference file specified in --hostremoval_reference.

Specifying premade indices can speed up runtime of the host-removal step, however if not supplied the pipeline will generate the indices for you.
string
longread_hostremoval_index Specify path to a pre-made Minimap2 index file (.mmi) of the host removal reference
HelpSpecify path to a pre-made Minimap2 index file (.mmi) of the host removal reference file given to --hostremoval_reference.

Specifying a premade index file can speed up runtime of the host-removal step, however if not supplied the pipeline will generate the indices for you.
string
save_hostremoval_index Save mapping index of input reference when not already supplied by user
HelpSave the output files of the in-built indexing of the host genome.

This is recommend to be turned on if you plan to use the same reference genome multiple times, as supplying the directory or file to --shortread_hostremoval_index or --longread_hostremoval_index respectively can speed up runtime of future runs. Once generated, we recommend you place this file outside of your run results directory in a central 'cache' directory you and others using your machine can access and supply to the pipeline.
boolean
save_hostremoval_bam Save mapped and unmapped reads in BAM format from host removal
HelpSave the reads mapped to the reference genome and off-target reads in BAM format as output by the respective hostremoval alignment tool.

This can be useful if you wish to perform other analyses on the host organism (such as host-microbe interaction), however, you should consider whether the default mapping parameters of Bowtie2 (short-read) or minimap2 (long-read) are optimised to your context.
boolean
save_hostremoval_unmapped Save reads from samples that went through the host-removal step
HelpSave only the reads NOT mapped to the reference genome in FASTQ format (as exported from samtools view and fastq).

This can be useful if you wish to perform other analyses on the off-target reads from the host mapping, such as manual profiling or de novo assembly.
boolean

Preprocessing run merging options

Options for per-sample run-merging

Parameter Description Type Default Required Hidden
perform_runmerging Turn on run merging
HelpTurns on the concatenation of sequencing runs or libraries with the same sample name.

This can be useful to ensure you get a single profile per sample, rather than one profile per run or library. Note that in some cases comparing profiles of independent libraries may be useful, so this parameter may not always be suitable.
boolean
save_runmerged_reads Save reads from samples that went through the run-merging step
HelpSave the run- and library-concatenated reads of a given sample in FASTQ format.

> ⚠️ Only samples that went through the run-merging step of the pipeline will be stored in the resulting directory.

If you wish to save the files that go to the classification/profiling steps for samples that did not go through run merging, you must supply the appropriate upstream --save_<preprocessing_step> flag.

boolean

Profiling options

Parameter Description Type Default Required Hidden
run_motus Turn on profiling with mOTUs. Requires a database to be present in the CSV file passed to --databases boolean

Postprocessing and visualisation options

Parameter Description Type Default Required Hidden
run_profile_standardisation Turn on standardisation of taxon tables across profilers
HelpTurns on standardisation of output OTU tables across all tools.
boolean

Institutional config options

Parameters used to describe centralised config profiles. These should not be edited.

Parameter Description Type Default Required Hidden
custom_config_version Git commit id for Institutional configs. string master True
custom_config_base Base directory for Institutional configs.
HelpIf you're running offline, Nextflow will not be able to fetch the institutional config files from the internet. If you don't need them, then this is not a problem. If you do need them, you should download the files from the repo and tell Nextflow where to find them with this parameter.
string https://raw.githubusercontent.com/nf-core/configs/master True
config_profile_name Institutional config name. string True
config_profile_description Institutional config description. string True
config_profile_contact Institutional config contact information. string True
config_profile_url Institutional config URL link. string True

Generic options

Less common options for the pipeline, typically set in a config file.

Parameter Description Type Default Required Hidden
version Display version and exit. boolean True
publish_dir_mode Method used to save pipeline results to output directory. (accepted: symlink|rellink|link|copy|copyNoFollow|move)
HelpThe Nextflow publishDir option specifies which intermediate files should be saved to the output directory. This option tells the pipeline what method should be used to move these files. See Nextflow docs for details.
string copy True
email_on_fail Email address for completion summary, only when pipeline fails.
HelpAn email address to send a summary email to when the pipeline is completed - ONLY sent if the pipeline does not exit successfully.
string True
plaintext_email Send plain-text email instead of HTML. boolean True
max_multiqc_email_size File size limit when attaching MultiQC reports to summary emails. string 25.MB True
monochrome_logs Do not use coloured log outputs. boolean True
hook_url Incoming hook URL for messaging service
HelpIncoming hook URL for messaging service. Currently, MS Teams and Slack are supported.
string True
multiqc_config Custom config file to supply to MultiQC. string True
multiqc_logo Custom logo file to supply to MultiQC. File name must also be set in the MultiQC config file string True
multiqc_methods_description Custom MultiQC yaml file containing HTML including a methods description. string
validate_params Whether to validate parameters against the schema at runtime boolean True True
pipelines_testdata_base_path Base URL or local path to location of pipeline test dataset files string https://raw.githubusercontent.com/nf-core/test-datasets/ True
trace_report_suffix Suffix to add to the trace report filename. Default is the date and time in the format yyyy-MM-dd_HH-mm-ss. string True
help Display the help message. ['boolean', 'string']
help_full Display the full detailed help message. boolean
show_hidden Display hidden parameters in the help message (only works when --help or --help_full are provided). boolean