For this class, we have access to the Engineering Instructional partition on Campus Cluster; this is to be used for running parallel DFT calculations with the Quantum Espresso package.
Please consult Illinois Campus Cluster User Documentation for more information about how to use the Illinois Campus Cluster. This document is going to focus on how to use our instructional allocation to run Quantum Espresso calculations.
As a start, you should (1) log in to campus cluster, and (2) make sure you have access to our allocation:
ssh [your netID]@cc-login.campuscluster.illinois.edu
/projects/illinois/eng/shared/shared/examples/my-accounts-eng
The script will tell you what engineering accounts you have access to; it should include 25fa-mse598dt-eng. If it does not, please let me know ASAP.
The eng-instruction partition consists of:
A few quick links:
- srun options: these options go into your submission script
To access the instructional allocation, your submission scripts will need the two lines:
#SBATCH --partition=eng-instruction
#SBATCH --account=25fa-mse598dt-eng
The Quantum Espresso code (source, executables, pseudopotentials) is
all available in our shared project directory
/projects/illinois/eng/shared/shared/MSE598DT-FA25. We all
have access to this; however:
DO NOT WRITE ANYTHING IN THIS DIRECTORY! It is
strictly intended to be the reference for the code that everyone will be
able to access. You should write your output to your own home directory
or scratch (/scratch/[your netid]). Scratch space is erased
periodically; have codes write their output there, and then move the
items you want to keep into your home directory.
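As a sketch of that workflow (the directory and file names here are just placeholders):
mkdir -p /scratch/$(whoami)/al-test    # a run directory in scratch
cd /scratch/$(whoami)/al-test          # run your calculation from here; output lands in this directory
mkdir -p ~/al-test
cp Al.scf.out ~/al-test/               # copy back only what you want to keep; scratch gets erased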
To make your life a little simpler, you’ll likely also want to add the lines:
module load intel/umf intel/compiler-rt intel/tbb intel/compiler intel/mpi intel/mkl
export QE=/projects/illinois/eng/shared/shared/MSE598DT-FA25
to your ~/.bashrc or whatever initialization script you
run when you login. Quantum Espresso will need those modules loaded to
find the MPI (parallelization) and MKL (math) libraries, and then the
environment variable $QE will be a shortcut to our class
directory.
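For example, one way to do this from the command line (a sketch; adjust if you use a different shell or initialization file):
cat >> ~/.bashrc << 'EOF'
module load intel/umf intel/compiler-rt intel/tbb intel/compiler intel/mpi intel/mkl
export QE=/projects/illinois/eng/shared/shared/MSE598DT-FA25
EOF
source ~/.bashrc   # reload your shell configuration
ls $QE/bin         # quick check: pw.x should show up in this listing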
Quantum Espresso is a series of different packages to perform a wide
range of density-functional theory and related calculations. We will
primarily work with the planewave DFT code, PWscf. This has
been compiled to run on campus cluster; it is parallelized with MPI, and
takes advantage of the fast InfiniBand interconnects. It has not been
specifically optimized for GPUs, so you won't see much advantage to
running on GPU nodes.
A few quick links for running PWscf:
There are many other packages in the QE family; you are encouraged to look around the website to see what else is available if you’re interested in doing something particular.
From the “What can PWscf do?” website:
PWscf performs many different kinds of self-consistent calculations of electronic-structure properties within Density-Functional Theory (DFT), using a Plane-Wave (PW) basis set and pseudopotentials (PP). In particular:
- ground-state energy and one-electron (Kohn-Sham) orbitals, atomic forces, stresses;
- structural optimization, also with variable cell;
- molecular dynamics on the Born-Oppenheimer surface, also with variable cell;
- macroscopic polarization (and orbital magnetization) via Berry Phases;
- various forms of finite electric fields, with a sawtooth potential or with the modern theory of polarization;
- Effective Screening Medium (ESM) method;
- self-consistent continuum solvation (SCCS) model, if patched with ENVIRON (http://www.quantum-environment.org/).
PWscf works for both insulators and metals, in any crystal structure, for many exchange-correlation (XC) functionals (including spin polarization, DFT+U, meta-GGA, nonlocal and hybrid functionals), for norm-conserving (Hamann-Schluter-Chiang) PPs (NCPPs) in separable form or Ultrasoft (Vanderbilt) PPs (USPPs) or Projector Augmented Waves (PAW) method. Noncollinear magnetism and spin-orbit interactions are also implemented.
Please note that NEB calculations are no longer performed by
pw.x, but are instead carried out by neb.x (see main user guide), a dedicated code for path optimization which can use PWscf as computational engine.
To use PWscf, you’ll need (1) pseudopotentials for your
chemical species, (2) an input file for your calculation, and (3) a
submission script if you want to run it in parallel.
PWscf’s units: Bohr for distance, Rydberg for energy
1 Rydberg = 0.5 Hartree = 13.60569312299 eV. This is the energy unit if the kinetic energy operator is \(-\nabla^2\) instead of \(-\frac12\nabla^2\).
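As a quick worked example of these units, using values from the input file below: celldm(1) = 9.0 Bohr is \(9.0 \times 0.529177 \approx 4.76\) Å (1 Bohr \(\approx 0.529177\) Å), and ecutwfc = 15.0 Ry corresponds to \(15.0 \times 13.6057 \approx 204\) eV.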
You can access pseudopotentials from the pseudopotentials webpage at QE. From there, you can go to:
- the SSSP (Standard Solid State Pseudopotentials) library, or
- the original QE pseudopotential table.
The latter site has a large set to choose from, some more recent than
what is available through the SSSP. I’ve already downloaded the SSSP
potentials, and placed them in
/projects/illinois/eng/shared/shared/MSE598DT-FA25/SSSP
should you wish to use that. If you want to use a potential from the QE
table, copy the link and run wget [PPurl] like
wget https://pseudopotentials.quantum-espresso.org/upf_files/Al.pbe-n-kjpaw_psl.1.0.0.UPF
at the command line to download the file on campus cluster.
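If you collect your own potentials, one option (the ~/pseudo directory name is just an example) is:
mkdir -p ~/pseudo
cd ~/pseudo
wget https://pseudopotentials.quantum-espresso.org/upf_files/Al.pbe-n-kjpaw_psl.1.0.0.UPF
# then point pseudo_dir in your input file at this directory, or use './' and keep the UPF files next to your input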
PWscf uses an input file that contains all of the
information about the calculation to perform. The full documentation is
available here. Its
format follows many Fortran conventions (single quotes for strings,
.true. and .false. for true and false,
! for comments). Below is a short example of a calculation,
and discussion of what the different lines do:
&CONTROL
calculation = 'scf'
restart_mode='from_scratch',
prefix='aluminum',
tstress = .true.
tprnfor = .true.
pseudo_dir = '/projects/illinois/eng/shared/shared/MSE598DT-FA25/SSSP/',
outdir='/scratch/[YOUR NET ID]/'
/
&SYSTEM
ibrav= 2,
celldm(1)= 9.0,
nat= 1,
ntyp= 1,
ecutwfc = 15.0,
occupations = 'smearing',
smearing = 'methfessel-paxton',
degauss = 0.05,
/
&ELECTRONS
diagonalization='david'
mixing_mode = 'plain'
conv_thr = 1.0d-8
/
ATOMIC_SPECIES
Al 26.982 Al.pbe-n-kjpaw_psl.1.0.0.UPF
ATOMIC_POSITIONS alat
Al 0.00 0.00 0.00
K_POINTS automatic
4 4 4 0 0 0
PWscf's input is organized into namelists, which start with an
& (like &CONTROL), and cards (like ATOMIC_SPECIES). You can
introduce comments with a ! in a line, or a # or ! at the beginning of a
line.
The &CONTROL namelist is information about the overall calculation.
- calculation is the type of calculation (scf means a self-consistent field calculation)
- restart_mode should be from_scratch unless you are continuing from an interrupted calculation (see max_seconds for more information)
- prefix is prepended to the names of the output files
- tstress and tprnfor determine whether stress and forces are calculated
- pseudo_dir specifies where pseudopotentials can be found; if you download your own, you will likely want to change this to './'
- outdir is where output files will be written; scratch is a good place for this

The &SYSTEM namelist is information about the lattice and the calculation.
- ibrav specifies the Bravais lattice to use; see ibrav for a list
- celldm specifies the computational cell parameters; refer to ibrav for their interpretation. Remember that the units are Bohr. If you specify ibrav=0, then there is a CELL_PARAMETERS card instead.
- nat and ntyp are the number of atoms and the number of types of atoms
- ecutwfc is the planewave energy cutoff in Ryd. The cutoff for the charge density and potential is controlled by ecutrho; the default is 4 times ecutwfc and is typically reasonable for a PAW calculation.
- occupations and smearing determine how the occupations are determined (smearing vs. the tetrahedron method) and which smearing method to choose
- degauss is the smearing parameter, in Ryd.

The &ELECTRONS namelist is information about the iterative diagonalization.
- diagonalization is the algorithm choice (Davidson, CG, RMM-DIIS)
- mixing_mode should always be plain, for Broyden mixing
- conv_thr determines when self-consistency is reached; this is for the total energy in the cell, and is in Ryd.

The ATOMIC_SPECIES card gives a (1) label, (2) mass, and (3) pseudopotential file for each of the ntyp atomic species present.

The ATOMIC_POSITIONS card gives each atomic position with the label first, then the coordinates in the cell.

The K_POINTS card determines the k-points. automatic generates a Monkhorst-Pack mesh, where the first three numbers are the divisions along each direction, and the next three are the offsets.
For small runs, PWscf can be run at the command line. To
run a serial job,
$QE/bin/pw.x -in [input script] | tee [output file]
will run the input script, dump the output to the screen, and save it
to the output file specified by the tee command.
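For example (the file names Al.scf.in and Al.scf.out are placeholders for your own):
$QE/bin/pw.x -in Al.scf.in | tee Al.scf.out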
NOTE: You can test whether your input file is sane
by setting nstep = 0 in the &CONTROL namelist,
and then running at the command line. This can save you from the
heartbreak of waiting for a parallel job to start, only to find that you
have a typo in your input file.
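A sketch of that check (file names are placeholders):
# temporarily add "nstep = 0" to the &CONTROL namelist of Al.scf.in, then run:
$QE/bin/pw.x -in Al.scf.in | tee check.out
# if the input parses cleanly, remove nstep = 0 before running the real job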
You can run short parallel jobs at the command line (that will get killed if they run for too long…. this is not the recommended way to do a project!) by doing
mpirun $QE/bin/pw.x -in [input script]
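For instance, to cap a quick command-line test at a handful of MPI tasks (the task count here is arbitrary):
mpirun -np 8 $QE/bin/pw.x -in Al.scf.in | tee Al.scf.out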
To submit a parallel job, you will need a script that looks something like
#!/bin/bash
#
#SBATCH --time=00:10:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=64
#SBATCH --mem-per-cpu=3375 # memory per core
#SBATCH --job-name=Al-supercell
#SBATCH --partition=eng-instruction
#SBATCH --account=25fa-mse598dt-eng
#SBATCH --output=sbatch.o%j
#SBATCH --error=sbatch.e%j
#SBATCH --mail-user=[YOUR EMAIL ADDRESS]
#SBATCH --mail-type=ALL
# Load modules
module load intel/umf intel/compiler-rt intel/tbb intel/compiler intel/mpi intel/mkl
QE=/projects/illinois/eng/shared/shared/MSE598DT-FA25
SCRATCH=/scratch/$(whoami)
mpirun $QE/bin/pw.x -in [input file]
This is a regular shell script, with a series of Slurm directives at the beginning. To see the options, check out slurm srun options. This script asks for 64 tasks on one node (corresponding to the number of cores), and will email you when the job starts and stops. Once you have this script, you can submit it via
sbatch [slurm script]
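The job's screen output goes to the files named by the --output and --error lines in the script (sbatch.o[job ID] and sbatch.e[job ID], written in the directory you submitted from), so you can inspect a running or finished job with, for example:
less sbatch.o[job ID]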
You can see your job status with
squeue --me
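If you need to kill a queued or running job, scancel (standard Slurm) takes the job ID shown by squeue:
scancel [job ID]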
More information is available with the Campus Cluster Running Jobs documentation.