Gaussian#

Important

Access to Gaussian requires permission. Please open a help ticket by emailing help@smu.edu with “[HPC]” in the subject line.

Gaussian is a series of electronic structure programs used by chemists, chemical engineers, biochemists, physicists, and other scientists worldwide.

Gaussian homepage: gaussian.com/

Official Gaussian Manual: gaussian.com/man/

See also

For examples and tips on submitting jobs, see our SLURM documentation and Best Practices for Jobs

For compute resources, see HPC Queues

Using Gaussian on M3#

Files used in the example are available:

  • on github

  • on M3 at /hpc/m3/examples/gaussian/

Example submission script#

The following job script can be submitted using sbatch gaussian_cpu_example.sbatch.

This example will run in about 10 seconds and use less than 1 GB of memory. In general, more complicated simulations will take much longer and should typically use more cores and memory. Note that we know these numbers from having run the job: it is always a good idea to review the resources your jobs actually use and adjust future jobs to request resources more accurately.
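One way to review a finished job's actual usage is with SLURM's accounting tools; a sketch, where 12345 is a placeholder job ID:

```shell
# Compare requested vs. used resources for a completed job.
sacct -j 12345 --units=G -o JobID,Elapsed,NCPUS,ReqMem,MaxRSS,State

# seff, where installed, prints a one-line efficiency summary.
seff 12345
```

MaxRSS in particular shows peak memory per step, which is a good guide for sizing the `--mem` request of future jobs.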

This uses the dev queue which has a 2 hour time limit. In general, the dev queue should only be used for testing code and running interactive sessions. Most normal jobs should be submitted to the standard queues such as standard-s where longer runtimes are allowed.
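For a production run, you would typically change the queue and time limit in the job script's header; a sketch (the 24-hour limit is only an illustration, check the queue limits on M3):

```shell
#SBATCH -p standard-s   # standard queue, longer runtimes allowed
#SBATCH -t 24:00:00     # example: request 24 hours
```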

This example also writes output to $SCRATCH. $SCRATCH is a high-performance file system designed for temporary job files and data. After a job has finished running, any data you need to keep should be moved to a project directory or your $HOME directory. Files in $SCRATCH are subject to a 60-day purge policy: files older than 60 days may be automatically deleted without warning.
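As a sketch of how you might spot files at risk of the purge, the snippet below uses a throwaway demo directory standing in for your real $SCRATCH (GNU touch/find assumed):

```shell
# Stage a demo directory with one old file and one fresh file.
demo_scratch=$(mktemp -d)
touch -d "70 days ago" "$demo_scratch/old_output.log"
touch "$demo_scratch/fresh_output.log"

# Files matching this are close to (or past) the 60-day limit.
find "$demo_scratch" -type f -mtime +50    # prints only old_output.log

rm -rf "$demo_scratch"
```

On M3 you would point `find` at `$SCRATCH` itself and move anything you need to keep before it ages out.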

Note

This job script will not run without modification. In particular, you must change the account from peruna_project_0001 to the account name for an allocation you have access to.

#!/bin/bash
#SBATCH -J gaussian            # job name shown in queue
#SBATCH -o gaussian_%j.out     # output file, %j is the job id number
#SBATCH -p dev                 # request the dev queue
#SBATCH -c 16                  # request 16 CPU cores
#SBATCH --mem=1G               # request 1GB of memory
#SBATCH -t 00:01:00            # request 1 minute
#SBATCH -A peruna_project_0001 # example account

# specify the input file
input_file=gaussian_example.cpu

# module purge clears any existing modules so only the modules
# requested will be loaded. This improves reproducibility.
module purge

# Load gaussian. It is good practice to include the version;
# the default modules without a version may change when software
# is updated.
module load gaussian/g16c/zen3

# Create a temporary job directory in scratch,
# copy the input file into it, and cd into it.
# Note: scratch is intended for temporary job storage.
# You should move any files you need to keep when the job
# completes and delete unneeded files. Files in $SCRATCH
# are subject to automatic deletion after 60 days.
job_dir=${SCRATCH}/${SLURM_JOB_ID}
mkdir -p ${job_dir}
cp ${input_file} ${job_dir}
cd ${job_dir}
export GAUSS_SCRDIR=${job_dir}

# get the requested memory in GB
sleep 5 # to make sure the job info is available
mem=$(sacct -o "ReqMem" --units=G -j ${SLURM_JOB_ID} -n | xargs)

# Get the cores assigned to the job.
# This is M3 specific, though something similar will work on many systems.
cpus=$(cat /sys/fs/cgroup/system.slice/slurmstepd.scope/job_${SLURM_JOB_ID}/cpuset.cpus)

# make sure mem is an int with correct units
mem=${mem%.*}
mem=${mem//[!0-9]/}
mem="${mem}GB"

# this function fills in the cpu and memory information into the
# input file
cpu_mem() {
  sed -i -e "/^%CPU/c\%CPU=${1}" ${input_file}
  sed -i -e "/^%Mem/c\%Mem=${2}" ${input_file}
}

cpu_mem ${cpus} ${mem}

# run gaussian
g16 < ${input_file}

# Clean up the temporary $SCRATCH directory.
# This moves the rwf file from Gaussian to the
# directory you submitted the job from.
# Note: if the job runs out of time, memory, or
# fails in some other way, these commands may not be reached.

# You should make sure you are keeping the files you need;
# this is just an example of a plausible workflow.
# For instance, you may want to keep the checkpoint files.
mv *.rwf ${SLURM_SUBMIT_DIR}/
cd ${SLURM_SUBMIT_DIR}
rm -rf ${job_dir}
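The memory cleanup and the cpu_mem substitution in the script above can be traced outside SLURM with throwaway values (the sample input lines below are assumed, not the real contents of gaussian_example.cpu):

```shell
# Stand-in input file with Link 0 resource lines, as a Gaussian input would have.
input_file=$(mktemp)
printf '%s\n' '%CPU=0-1' '%Mem=1GB' '# route section would follow' > "$input_file"

# Same sanitization as the job script: "1G" from sacct becomes "1GB".
mem="1G"
mem=${mem%.*}          # drop anything after a decimal point
mem=${mem//[!0-9]/}    # keep only the digits -> "1"
mem="${mem}GB"         # -> "1GB"

# Same sed replacement as the job script's cpu_mem function.
cpu_mem() {
  sed -i -e "/^%CPU/c\%CPU=${1}" "$input_file"
  sed -i -e "/^%Mem/c\%Mem=${2}" "$input_file"
}

cpu_mem "0-15" "$mem"
grep '^%' "$input_file"    # prints %CPU=0-15 and %Mem=1GB
rm -f "$input_file"
```

In the real job, `${1}` is the cpuset range read from the cgroup file (e.g. 0-15), so the %CPU line ends up matching exactly the cores SLURM assigned.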