Due to the high computing requirements for post-processing of microscope images, we are testing Relion on the UCT HPC cluster.

Relion, as released, ships with job submission scripts for Torque / PBS, but not for SLURM, the job manager used on the UCT cluster.

There are a few places on the web detailing how to use Relion with SLURM, for example the pages from NIH.

Relion has a master GUI, which orchestrates all of the processing steps needed to take the data from raw images through to the final results.

To use it on the cluster, it is important to use the Schedule button rather than the Run! button for each step, so that the processing is submitted to the compute nodes rather than being run on the cluster head node.
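
For example (the project path below is only an illustration), start the GUI from inside a project directory on the login node, and then queue each job with Schedule so that it is executed on the compute nodes:

# Start the Relion GUI from inside the project directory on the login node;
# heavy processing is then queued with the Schedule button and executed on
# the compute nodes via SLURM rather than on the head node.
cd /scratch/$USER/my_relion_project   # hypothetical project path
relion &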

Relion has been set up both as a Singularity container and as a native binary install in /opt/exp_soft/emu/bin.
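
A quick sanity check of which install is being picked up (the container image path below is only a placeholder; use the actual location of the image):

# Native binary install: should be found under /opt/exp_soft/emu/bin
# once the PATH has been set up as described below
which relion
ls /opt/exp_soft/emu/bin

# Containerised install: run the same binaries through Singularity
# (the image path here is a placeholder only)
singularity exec /opt/exp_soft/emu/relion.simg which relion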

The first thing to do is to set up the environment: this adds the binaries to the PATH and applies a few tweaks for the Relion GUI.

Place the following in your ~/.bashrc:

# Add module calls etc here:
PATH=$PATH:/opt/exp_soft/emu/bin

export RELION_QSUB_TEMPLATE=/opt/exp_soft/emu/bin/relion_slurm.sh
export RELION_MOTIONCOR2_EXECUTABLE=/opt/exp_soft/emu/MotionCor2_1.3.0/MotionCor2_1.3.0-Cuda101

export RELION_QSUB_EXTRA_COUNT=1
export RELION_QSUB_EXTRA1='HPC Account'
export RELION_QSUB_EXTRA1_DEFAULT=emu
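
After editing ~/.bashrc, log in again or source the file, then check that the Relion-specific variables are set. These make the GUI pick up the SLURM submission template and the MotionCor2 binary by default, and add the extra 'HPC Account' field, which fills XXXextra1XXX in the template below:

source ~/.bashrc
echo $RELION_QSUB_TEMPLATE
echo $RELION_MOTIONCOR2_EXECUTABLE
env | grep RELION_QSUB_EXTRA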

The submission template referenced above looks like this:

#!/bin/bash
#SBATCH --partition=XXXqueueXXX
#SBATCH --nodes=XXXnodesXXX
#SBATCH --cpus-per-task=XXXdedicatedXXX
#SBATCH --error=XXXerrfileXXX
#SBATCH --output=XXXoutfileXXX
#SBATCH --account=XXXextra1XXX
# mpiexec -mca orte_forward_job_control 1 -n XXXmpinodesXXX XXXcommandXXX
srun XXXcommandXXX

All of the parameters surrounded by XXX are replaced dynamically with the corresponding entries from the Running tab of the Relion GUI when a job is scheduled or submitted.
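
For illustration only (the partition name, job directory and relion command below are made-up examples), a scheduled 3D classification job might end up being submitted with a rendered script along these lines:

#!/bin/bash
#SBATCH --partition=gpu
#SBATCH --nodes=1
#SBATCH --cpus-per-task=8
#SBATCH --error=Class3D/job010/run.err
#SBATCH --output=Class3D/job010/run.out
#SBATCH --account=emu
srun `which relion_refine_mpi` --o Class3D/job010/run --i Select/job009/particles.star [remaining options filled in by the GUI]

Note that XXXmpinodesXXX (the number of MPI processes set in the GUI) is only used by the commented-out mpiexec line in this template; if jobs need more MPI ranks than nodes under srun, it may be worth adding a line such as #SBATCH --ntasks=XXXmpinodesXXX, as some other published Relion SLURM templates do.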