
lehtiolab/nf-labelcheck: Usage

Introduction

Nextflow handles job submission on SLURM and other environments, and supervises the running jobs. The Nextflow process must therefore keep running until the pipeline has finished. We recommend running the process in the background through screen, tmux or a similar tool. Alternatively, you can run Nextflow within a cluster job submitted to your job scheduler.
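For example, a minimal way to keep the run alive in a detachable session, assuming tmux is available (the session name is illustrative):

tmux new -s labelcheck
nextflow run lehtiolab/nf-labelcheck <your arguments>
# Detach with Ctrl-b d; reattach later with: tmux attach -t labelcheck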

It is recommended to limit the Nextflow Java virtual machine's memory. We recommend adding the following line to your environment (typically in ~/.bashrc or ~/.bash_profile):

NXF_OPTS='-Xms1g -Xmx4g'

Running the pipeline

The typical command for running the pipeline is as follows:

nextflow run lehtiolab/nf-labelcheck --mzmls '*.mzML' --tdb swissprot_20181011.fa --mods assets/mods.txt --isobaric tmt10plex -profile standard,docker

This will launch the pipeline with the standard and docker configuration profiles. See below for more information about profiles.

Note that the pipeline will create the following files in your working directory:

work            # Directory containing the nextflow working files
results         # Finished results (configurable, see below)
.nextflow.log   # Log file from Nextflow
# Other Nextflow hidden files, e.g. history of pipeline runs and old logs.

Updating the pipeline

When you run the above command, Nextflow automatically pulls the pipeline code from GitHub and stores it as a cached version. Subsequent runs will then always use the cached version if available, even if the pipeline has been updated since. To make sure that you're running the latest version of the pipeline, regularly update the cached version:

nextflow pull lehtiolab/nf-labelcheck

Reproducibility

It's a good idea to specify a pipeline version when running the pipeline on your data. This ensures that a specific version of the pipeline code and software are used when you run your pipeline. If you keep using the same tag, you'll be running the same version of the pipeline, even if there have been changes to the code since.

First, go to the lehtiolab/nf-labelcheck releases page and find the latest version number - numeric only (e.g. 1.3.1). Then specify this when running the pipeline with -r (one hyphen), e.g. -r 1.3.1.
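For example (the version number here is illustrative; check the releases page for the current one):

nextflow run lehtiolab/nf-labelcheck -r 1.3.1 --mzmls '*.mzML' --tdb swissprot_20181011.fa --mods assets/mods.txt --isobaric tmt10plex -profile standard,docker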

This version number will be logged in reports when you run the pipeline, so that you'll know what you used when you look back in the future.

Main arguments

-profile

Use this parameter to choose a configuration profile. Profiles can give configuration presets for different compute environments. Note that multiple profiles can be loaded, for example: -profile standard,docker - the order of arguments is important!

If -profile is not specified at all, the pipeline will be run locally and expects all software to be installed and available on the PATH.

  • awsbatch
    • A generic configuration profile to be used with AWS Batch.
  • conda
    • A generic configuration profile to be used with conda
    • Pulls most software from Bioconda
  • docker
    • A generic configuration profile to be used with Docker
    • Pulls software from dockerhub: nf-labelcheck
  • singularity
    • A generic configuration profile to be used with Singularity
  • test
    • A profile with a complete configuration for automated testing
    • Includes links to test data so needs no other parameters
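For example, to run the bundled test configuration with Docker (assuming Docker is installed and running):

nextflow run lehtiolab/nf-labelcheck -profile test,docker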

--mzmls

Use this to specify the location of your input mzML files. For example:

--mzmls 'path/to/data/sample_*.mzML'

The path must be enclosed in quotes when using wildcards like *.

--mzmldef

As an alternative to the above, you can use --mzmldef to pass a text file which contains the mzML specifications.

--mzmldef /path/to/data/mzmls.txt

This text file is tab-separated, has no header, and contains a single line per mzML file, specified as follows: /path/to/file<TAB>channel_name<TAB>sample_name. Channel and sample name are optional.
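As an illustration, a hypothetical mzmls.txt could look like this (paths, channel names and sample names are made up; columns are separated by real tab characters):

/path/to/data/sample_01.mzML	126	control
/path/to/data/sample_02.mzML	127N	treated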

--tdb

Target database. Decoy databases are created by the pipeline using tryptic reversal ("tryptic-reverse"), and searches run against a concatenated target-decoy database (T-TDC).

--tdb /path/to/Homo_sapiens.pep.all.fa

--mods

Modifications file for MSGF+, containing the peptide modifications allowed by the search engine. Two examples can be found in the assets folder.

--mods /path/to/assets/tmtmods.txt

--isobaric

The isobaric multiplexing chemistry used, e.g. tmt10plex or itraq8plex.

--isobaric tmt10plex

Job resources

Automatic resubmission

Each step in the pipeline has a default set of requirements for number of CPUs, memory and time. For most of the steps in the pipeline, if the job exits with an error code of 143 (exceeded requested resources) it will automatically be resubmitted with higher requests (2 x original, then 3 x original). If it still fails after three attempts, the pipeline is stopped.
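For reference, a minimal sketch of how such a retry strategy is typically expressed in a Nextflow configuration; the values below are illustrative, not the pipeline's exact settings:

process {
    // Retry jobs killed for exceeding requested resources (exit status 143)
    errorStrategy = { task.exitStatus == 143 ? 'retry' : 'finish' }
    // Allow two resubmissions, i.e. three attempts in total
    maxRetries = 2
    // Scale requests with each attempt: 1x, 2x, 3x the base values
    memory = { 8.GB * task.attempt }
    time = { 4.h * task.attempt }
}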

AWS Batch specific parameters

Running the pipeline on AWS Batch requires a couple of specific parameters to be set according to your AWS Batch configuration. Please use the awsbatch profile (-profile awsbatch) and then specify all of the following parameters.

--awsqueue

The JobQueue that you intend to use on AWS Batch.

--awsregion

The AWS region to run your job in. Default is set to eu-west-1 but can be adjusted to your needs.

Please make sure to also set the -w/--work-dir and --outdir parameters to an S3 storage bucket of your choice - you'll get an error message notifying you if you don't.
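Putting these together, an AWS Batch invocation could look like the following (queue name, region and bucket paths are placeholders):

nextflow run lehtiolab/nf-labelcheck -profile awsbatch \
    --awsqueue my-batch-queue \
    --awsregion eu-west-1 \
    -w s3://my-bucket/work \
    --outdir s3://my-bucket/results \
    --mzmls 's3://my-bucket/data/*.mzML' --tdb s3://my-bucket/uniprot.fa --mods assets/mods.txt --isobaric tmt10plex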

Other command line parameters

--activation

The MS fragmentation activation method, used by the IsobaricQuant program from OpenMS. Default is hcd, but cid and etd can also be used.

--instrument

The MS instrument type, passed to the MSGF+ search engine. Defaults to qe, but can also be one of [orbi, lowres, tof]. See the MSGF+ documentation for more info.

--outdir

The output directory where the results will be saved.

--email

Set this parameter to your e-mail address to get a summary e-mail with details of the run sent to you when the workflow exits. If set in your user config file (~/.nextflow/config), you don't need to specify it on the command line for every run.
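For example, a minimal entry in ~/.nextflow/config (the address is a placeholder):

params.email = 'your.name@example.com'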

-name or --name

Name for the pipeline run. If not specified, Nextflow will automatically generate a random mnemonic.

-resume

Specify this when restarting a pipeline. Nextflow will use cached results from any pipeline steps where the inputs are the same, continuing from where it got to previously.

You can also supply a run name to resume a specific run: -resume [run-name]. Use the nextflow log command to show previous run names.
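For example (the run name below is an illustrative Nextflow-generated mnemonic):

nextflow log
nextflow run lehtiolab/nf-labelcheck -resume mad_curie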

NB: Single hyphen (core Nextflow option)

-c

Specify the path to a specific config file (this is a core Nextflow option).

NB: Single hyphen (core Nextflow option)

Note - you can use this to override pipeline defaults.
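For example, a hypothetical custom.config raising the resources of a single process (the process name and values are illustrative, not taken from the pipeline):

process {
    withName: msgfPlus {
        cpus = 4
        memory = 16.GB
    }
}

Then run with nextflow run lehtiolab/nf-labelcheck -c custom.config plus your usual arguments.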

--custom_config_version

Provide a git commit ID for custom institutional configs hosted at nf-core/configs. This was implemented for reproducibility purposes. Default is set to master.

## Download and use config file with the following git commit ID
--custom_config_version d52db660777c4bf36546ddb188ec530c3ada1b96

--custom_config_base

If you're running offline, Nextflow will not be able to fetch the institutional config files from the internet. If you don't need them, then this is not a problem. If you do need them, you should download the files from the repo and tell Nextflow where to find them with the --custom_config_base option. For example:

## Download and unzip the config files
cd /path/to/my/configs
wget https://github.com/nf-core/configs/archive/master.zip
unzip master.zip

## Run the pipeline
cd /path/to/my/data
nextflow run /path/to/pipeline/ --custom_config_base /path/to/my/configs/configs-master/

Note that the nf-core/tools helper package has a download command that fetches all required pipeline files, singularity containers and institutional configs in one go, to make this process easier.

--max_memory

Use to set a top limit for the default memory requirement of each process. Should be a string in the format integer-unit, e.g. --max_memory '8.GB'.

--max_time

Use to set a top limit for the default time requirement of each process. Should be a string in the format integer-unit, e.g. --max_time '2.h'.

--max_cpus

Use to set a top limit for the default CPU requirement of each process. Should be an integer, e.g. --max_cpus 1.
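The three caps can be combined in one command, for example (file names are placeholders):

nextflow run lehtiolab/nf-labelcheck --mzmls '*.mzML' --tdb db.fa --mods assets/mods.txt --isobaric tmt10plex --max_memory '8.GB' --max_time '24.h' --max_cpus 4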

--plaintext_email

Set to receive plain-text e-mails instead of HTML-formatted ones.

--monochrome_logs

Set to disable colourful command line output and live life in monochrome.