
Configuration

Resources

Usage limits

You may find that pipeline runs occasionally fail due to a particular step of the pipeline requesting more resources than you have on your system.

To avoid these failures, all nf-core pipelines check pipeline-step resource requests against parameters called --max_cpus, --max_memory and --max_time. These should represent the maximum possible resources of a machine or node.

Most pipelines will attempt to automatically restart jobs that fail due to lack of resources, doubling the requests on each retry; these caps keep those requests from getting out of hand and crashing the entire pipeline run. If a particular job exceeds the process-specific default resources and is retried, only the resource requests (CPUs, memory, or time) that have not yet reached the value set with --max_<resource> will be increased during the retry.
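
For example, on a machine or node with 16 CPUs, 64 GB of memory and a 72-hour walltime limit, the caps might be set on the command line like this (the pipeline name and profile are placeholders; adjust the values to your own system):

nextflow run nf-core/<pipeline> -profile docker \
    --max_cpus 16 \
    --max_memory '64.GB' \
    --max_time '72.h'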

caution

These parameters act only as a cap, to prevent Nextflow from submitting a single job that requests more resources than are available on your system.

Tuning

As with all Nextflow pipelines, the resources allocated to each process can be tuned in Nextflow .config files. See the nf-core documentation on resource tuning for more information.
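
As a minimal sketch, a custom .config file passed with -c might raise the resources for a single process. The process name STAR_ALIGN below is only a hypothetical example; the real names can be found in the pipeline documentation or in the .nextflow.log file:

process {
    // override the resources for one named process only
    withName: 'STAR_ALIGN' {
        cpus   = 12
        memory = 72.GB
        time   = 24.h
    }
}

You can then supply this file on the command line with -c custom.config.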

Institutional configuration

In most cases, you will only need to create a custom config as a one-off. However, if you and others within your organisation are likely to be running nf-core pipelines regularly and need to use the same settings, it may be a good idea to request that your custom config file is uploaded to the nf-core/configs GitHub repository as a profile. Before you do this, test that the config file works with your pipeline of choice using the -c parameter. You can then create a pull request to the nf-core/configs repository that adds your config file and an associated documentation file (see examples in nf-core/configs/docs), and amends nfcore_custom.config to include your custom profile.
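
As a sketch, such a test run might look like the following, assuming the config is saved as institution.config; the pipeline name is a placeholder and --outdir may or may not be required depending on the pipeline:

nextflow run nf-core/<pipeline> -profile test,docker -c institution.config --outdir results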

See the main Nextflow documentation or the step-by-step guide for more information about creating your own configuration files.

If you have any questions or issues, please send nf-core a message on Slack in the #configs channel.

Running in the background

Nextflow handles job submissions and supervises the running jobs. The Nextflow process must run until the pipeline is finished.

The Nextflow -bg flag launches Nextflow in the background, detached from your terminal so that the workflow does not stop if you log out of your session. The logs are saved to a file.
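
A minimal example, with the pipeline name and profile as placeholders; redirecting standard output keeps a copy of the console messages alongside the usual .nextflow.log file:

nextflow run nf-core/<pipeline> -profile docker --outdir results -bg > pipeline.log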

Alternatively, you can use screen / tmux or a similar tool to create a detached session that you can log back into at a later time. Some HPC setups also allow you to run Nextflow within a cluster job submitted to your job scheduler (from where it submits more jobs).
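
For example, a detached screen session might look like this (tmux works in much the same way; the session name nf_run and the pipeline name are arbitrary placeholders):

# create a named session, start the run inside it, then detach
# with Ctrl-A D and reattach later with: screen -r nf_run
screen -S nf_run
nextflow run nf-core/<pipeline> -profile docker --outdir results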

Nextflow memory requirements

In some cases, the Nextflow Java virtual machines can start to request a large amount of memory. We recommend adding the following line to your environment to limit this (typically in ~/.bashrc or ~/.bash_profile):

export NXF_OPTS='-Xms1g -Xmx4g'