Configuration

Resources

Usage limits

You may find that pipeline runs occasionally fail because a particular step requests more resources than are available on your system.

To avoid these failures, all nf-core pipelines check pipeline-step resource requests against the parameters --max_cpus, --max_memory and --max_time. These should be set to the maximum resources available on a single machine or node of your system.

Most pipelines will automatically retry jobs that fail due to a lack of resources, doubling the resource request on each attempt; these caps keep the doubled requests from getting out of hand and crashing the entire pipeline run. If a particular job exceeds the process-specific default resources and is retried, only the resource requests (CPUs, memory, or time) that have not yet reached the value set with --max_<resource> are increased on the retry.
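
For example, on a machine with 16 CPUs, 64 GB of memory and a 24-hour walltime limit, the caps could be set on the command line as in the sketch below (the pipeline name and profile are placeholders; adjust them for your setup):

nextflow run nf-core/<pipeline> -profile docker --max_cpus 16 --max_memory '64.GB' --max_time '24.h'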

caution

These parameters act only as a cap, to prevent Nextflow from submitting a single job that requests more resources than are available on your system.

Tuning

As with all Nextflow pipelines, the resources allocated to each process can be tuned in custom Nextflow configuration (.config) files. See the nf-core documentation on resource tuning for more information.
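
As a minimal sketch, assuming a hypothetical file custom.config and an example process named FASTQC, a per-process override could look like this:

// custom.config: example per-process resource override (process name and values are illustrative)
process {
    withName: 'FASTQC' {
        cpus   = 4
        memory = 16.GB
        time   = 8.h
    }
}

The file would then be supplied to the run with -c custom.config.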

Institutional configuration

The nf-core community has created a repository of configuration profiles for many institutions, which can be used to run nf-core pipelines on institutional compute clusters. Using an existing configuration can save you the work of getting the pipeline running optimally on your compute infrastructure.

You will find all the configurations, and information on how to use them, in the nf-core/configs repository (https://github.com/nf-core/configs).
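
For example, assuming your institution already has a profile in that repository, selecting it is typically just a matter of naming it on the command line (both names below are placeholders):

nextflow run nf-core/<pipeline> -profile <your_institution>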

Running in the background

Nextflow handles job submissions and supervises the running jobs. The Nextflow process must run until the pipeline is finished.

The Nextflow -bg flag launches Nextflow in the background, detached from your terminal so that the workflow does not stop if you log out of your session. The logs are saved to a file.
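
A minimal sketch of a detached launch, redirecting console output to a log file of your choice (the pipeline name, profile and log file name are placeholders):

nextflow -bg run nf-core/<pipeline> -profile docker > pipeline.log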

Alternatively, you can use screen / tmux or a similar tool to create a detached session which you can log back into at a later time. Some HPC setups also allow you to run Nextflow within a cluster job submitted to your job scheduler (from where it submits further jobs).
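
As a sketch of the tmux approach (screen works similarly; the session name, pipeline name and profile are placeholders):

# start a named session and launch the pipeline inside it, then detach with Ctrl-b d
tmux new -s nf_run
nextflow run nf-core/<pipeline> -profile docker
# later, reattach to check on the run
tmux attach -t nf_run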

Nextflow memory requirements

In some cases, the Nextflow Java virtual machine can start to request a large amount of memory. We recommend adding the following line to your environment to limit this (typically in ~/.bashrc or ~/.bash_profile); -Xms sets the initial Java heap size and -Xmx sets the maximum:

export NXF_OPTS='-Xms1g -Xmx4g'