
Frequently Asked Questions (FAQ)

I. SLURM

What's SLURM?

In the new build we have migrated from MARVIN's TORQUE/Maui scheduler chain to SLURM. This means you must update your legacy submission scripts to SLURM syntax! Note that SLURM claims to accept PBS directives, but this has not yet been tested here. Apologies, but both the torque_mom and maui daemons have shown health-check problems in the past. Besides, five of the leading ten supercomputers on the TOP500 list use SLURM. There are a number of great resources directly from the developers. Some of our favorites:


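As a first orientation for converting old scripts, common TORQUE/PBS directives map to SLURM roughly as follows (a sketch covering only the options that appear on this page; consult the SLURM documentation for the full mapping):

  # TORQUE/PBS                    SLURM equivalent (sketch)
  #PBS -N myjob             ->    #SBATCH -J myjob            # job name
  #PBS -q sgi               ->    #SBATCH -p sgi              # queue -> partition
  #PBS -l walltime=0:15:00  ->    #SBATCH -t 0-0:15           # wall-time limit
  #PBS -l nodes=1:ppn=4     ->    #SBATCH -N 1 -n 4           # nodes / tasks
  #PBS -l mem=4gb           ->    #SBATCH --mem=4G            # memory
  #PBS -m be                ->    #SBATCH --mail-type=begin,end
  qsub script.pbs           ->    sbatch script.slurm         # submission command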
II. GAUSSIAN

Gaussian is a proprietary software package - the license is owned by Peter Schwerdtfeger. Familiarize yourself with Gaussian - here is a brief introduction.
With Dr. Schwerdtfeger's approval, you'll be added to the gaussian user list.

How do I submit a Gaussian job?

A Gaussian SLURM script for a serial (one node) job looks similar to the following tetrafluoromethane example:

serial_gauss.slurm
#!/bin/bash
##################################
#SBATCH -J CF4
#SBATCH -p sgi
#SBATCH -t 0-0:15
#SBATCH -N 1
#SBATCH -n 4
#SBATCH --mem=4G
#SBATCH -o CF4.out
#SBATCH --mail-type=begin
#SBATCH --mail-type=end
#SBATCH --mail-user=you@massey.ac.nz
##################################
 
module load gaussian/sgi
source $g09root/g09/bsd/g09.profile
 
export GAUSS_SCRDIR=$SCRATCH
WORKDIR=/home/$USER/<path to CF4.inp and CF4.out files>
 
echo "This job was submitted from $SLURM_SUBMIT_HOST,"
echo "from the directory $SLURM_SUBMIT_DIR,"
echo "Running on node $HOSTNAME."
echo "The local scratch is on $SCRATCH."
 
echo START: `date`
g09 < $WORKDIR/CF4.inp
echo FINISH: `date`
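Once saved, the script is submitted and monitored with the standard SLURM commands (the script name is taken from the example above; `<jobid>` stands for the id sbatch reports):

  sbatch serial_gauss.slurm    # prints: Submitted batch job <jobid>
  squeue -u $USER              # list your pending and running jobs
  scancel <jobid>              # cancel a job if needed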

NOTE: For things you do very often, you can load modules and set environment variables in your bash startup files instead. For example, in the above script one might delete the two lines

  module load gaussian/sgi
  source $g09root/g09/bsd/g09.profile

and instead place them in the file /home/$USER/.bashrc. HOWEVER, don't do both: put them either in the SLURM script OR in your bash startup, never in both places.
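For example, the bash startup variant would look like this (a sketch; the module name and profile path are taken from the script above):

  # in /home/$USER/.bashrc -- load the Gaussian environment at every shell startup
  module load gaussian/sgi
  source $g09root/g09/bsd/g09.profile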

If you want to set a different path for your error/output file via the -e or -o options, you should always provide a filename. Setting just the directory path, the way it worked in PBS, will cause your job to crash. The %j keyword can be useful here, as it is replaced by the SLURM job id:

  #SBATCH -o /home/${USER}/err/job-%j

III. INTEL Composer XE


IV. VASP


V. Orca

Orca is distributed as binary files only. A sample script could look like this:

  #!/bin/bash
 
  #SBATCH --job-name=Au12Ih
  #SBATCH -t 999:00:00
  #SBATCH -N 1
  #SBATCH -n 4
  #SBATCH --mem=2G
  #SBATCH --mail-type=END
  #SBATCH --mail-type=FAIL
  #SBATCH -o path-to-error-file/orca-%j
  #SBATCH --mail-user=you@massey.ac.nz
  #SBATCH -p sgi
 
  echo This job was submitted from the computer:
  echo $SLURM_SUBMIT_HOST
  echo and the directory:
  echo $SLURM_SUBMIT_DIR
  echo
  echo It is running on the compute node:
  echo $HOSTNAME
  echo
 
  module load intel/compiler/64/15.0/2015.1.133
  module load openmpi/gcc/64/1.8.1
 
  cd $SLURM_SUBMIT_DIR
  echo Current directory:
  pwd
  echo
 
 
  echo "---- The Job is executed at $(date) on $(hostname) ----"
 
  /cm/shared/apps/orca/orca /home/trombach/test-simurg/Au12/Au12Ih.in > /home/trombach/test-simurg/Au12/Au12Ih.out
 
  echo "---- The Job has finished at $(date) ----"

Please note that if you want to run a parallel MPI job, you need to call the binary with its absolute path!
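In other words, invoke the binary as in the script above, not via a bare `orca` found on the `$PATH`, because ORCA needs its own absolute location to start its parallel helper binaries (the input filename here is illustrative):

  # works for parallel runs: absolute path to the binary
  /cm/shared/apps/orca/orca myjob.inp > myjob.out

  # fails for parallel (MPI) runs: relying on $PATH lookup
  # orca myjob.inp > myjob.out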

faq.txt · Last modified: 2015/03/27 14:05 by trombach