Quantum Espresso on Campus Cluster

For this class, we have access to the Engineering Instructional partition on Campus Cluster; this is to be used for running parallel DFT calculations with the Quantum Espresso package.

Campus Cluster

Please consult the Illinois Campus Cluster User Documentation for more information about how to use the Illinois Campus Cluster. This document focuses on how to use our instructional allocation to run Quantum Espresso calculations.

As a start, you should (1) log in to campus cluster, and (2) make sure you have access to our allocation:

  1. To log in, from a command line, ssh [your netID]@cc-login.campuscluster.illinois.edu
  2. Once logged into the Campus Cluster, execute the script /projects/illinois/eng/shared/shared/examples/my-accounts-eng

The script will tell you what engineering accounts you have access to; it should include 25fa-mse598dt-eng. If it does not, please let me know ASAP.

The eng-instruction partition consists of:

A few quick links:

To access the instructional allocation, your submission scripts will need these two lines:

#SBATCH --partition=eng-instruction
#SBATCH --account=25fa-mse598dt-eng

The Quantum Espresso code (source, executables, pseudopotentials) is all available in our shared project directory /projects/illinois/eng/shared/shared/MSE598DT-FA25. We all have access to this directory; however:

DO NOT WRITE ANYTHING IN THIS DIRECTORY! It is strictly intended to be the reference copy of the code that everyone can access. You should write your output to your own home directory or to scratch (/scratch/[your netid]). Scratch space is erased periodically; have your codes dump their output there, and then move the items you want to save into your home directory.
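
For example, a typical pattern looks like the sketch below (the al-test directory name and the file extensions are just placeholders):

mkdir -p /scratch/$(whoami)/al-test   # a run directory in scratch
cd /scratch/$(whoami)/al-test
# ... run your calculation here, with outdir in your input file pointing into scratch ...
mkdir -p ~/al-test
cp *.in *.out ~/al-test/              # afterwards, copy only the items you want to keep to your home directory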

To make your life a little simpler, you’ll likely also want to add the lines:

module load intel/umf intel/compiler-rt intel/tbb intel/compiler intel/mpi intel/mkl
export QE=/projects/illinois/eng/shared/shared/MSE598DT-FA25

to your ~/.bashrc or whatever initialization script you run when you log in. Quantum Espresso needs those modules loaded to find the MPI (parallelization) and MKL (math) libraries, and the environment variable $QE is then a shortcut to our class directory.
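
A minimal sketch of doing this, assuming you use bash and the default ~/.bashrc:

cat >> ~/.bashrc << 'EOF'
module load intel/umf intel/compiler-rt intel/tbb intel/compiler intel/mpi intel/mkl
export QE=/projects/illinois/eng/shared/shared/MSE598DT-FA25
EOF
source ~/.bashrc     # or log out and back in
ls $QE/bin/pw.x      # should list the PWscf executable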

Quantum Espresso

Quantum Espresso is a suite of packages that perform a wide range of density-functional theory and related calculations. We will primarily work with the planewave DFT code, PWscf. It has been compiled to run on the Campus Cluster; it is parallelized with MPI and takes advantage of the fast InfiniBand interconnects. This build has not been optimized for GPUs, so you won't see a particular advantage to running on GPU nodes.

A few quick links for running PWscf:

There are many other packages in the QE family; you are encouraged to look around the website to see what else is available if you're interested in doing something in particular.

From the “What can PWscf do?” website:

PWscf performs many different kinds of self-consistent calculations of electronic-structure properties within Density-Functional Theory (DFT), using a Plane-Wave (PW) basis set and pseudopotentials (PP). In particular:

PWscf works for both insulators and metals, in any crystal structure, for many exchange-correlation (XC) functionals (including spin polarization, DFT+U, meta-GGA, nonlocal and hybrid functionals), for norm-conserving (Hamann-Schluter-Chiang) PPs (NCPPs) in separable form or Ultrasoft (Vanderbilt) PPs (USPPs) or Projector Augmented Waves (PAW) method. Noncollinear magnetism and spin-orbit interactions are also implemented.

Please note that NEB calculations are no longer performed by pw.x, but are instead carried out by neb.x (see main user guide), a dedicated code for path optimization which can use PWscf as computational engine.

To use PWscf, you’ll need (1) pseudopotentials for your chemical species, (2) an input file for your calculation, and (3) a submission script if you want to run it in parallel.

PWscf’s units: Bohr for distance, Rydberg for energy

1 Rydberg = 0.5 Hartree = 13.60569312299 eV. This is the energy unit when the kinetic energy operator is written as \(-\nabla^2\) instead of \(-\frac12\nabla^2\).
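
For example, in the input file below, celldm(1) = 9.0 means a lattice parameter of \(9.0 \times 0.529177\,\text{Å} \approx 4.76\,\text{Å}\) (1 Bohr \(\approx\) 0.529177 Å), and ecutwfc = 15.0 means a planewave cutoff of \(15 \times 13.6057\,\text{eV} \approx 204\,\text{eV}\).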

1. Pseudopotentials

You can access pseudopotentials from the pseudopotentials webpage at QE. From there, you can go to either the SSSP library or the original QE pseudopotential table.

The latter site has a large set to choose from, some more recent than what is available through the SSSP. I've already downloaded the SSSP potentials and placed them in /projects/illinois/eng/shared/shared/MSE598DT-FA25/SSSP should you wish to use them. If you want to use a potential from the QE table, copy the link and run wget [PPurl] like

wget https://pseudopotentials.quantum-espresso.org/upf_files/Al.pbe-n-kjpaw_psl.1.0.0.UPF

at the command line to download the file on campus cluster.
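
If you collect pseudopotentials this way, one convenient (and entirely optional) pattern is to keep them in a single directory of your own and point pseudo_dir at it; the ~/pseudo name below is just an example:

mkdir -p ~/pseudo && cd ~/pseudo    # a personal pseudopotential directory (name is arbitrary)
wget https://pseudopotentials.quantum-espresso.org/upf_files/Al.pbe-n-kjpaw_psl.1.0.0.UPF
echo "$HOME/pseudo"                 # prints the absolute path to use as pseudo_dir in &CONTROL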

2. Input file

PWscf uses an input file that contains all of the information about the calculation to perform. The full documentation is available here. Its format follows many Fortran conventions (single quotes for strings, .true. and .false. for true and false, ! for comments). Below is a short example of a calculation, and a discussion of what the different lines do:

&CONTROL
  calculation = 'scf'                ! self-consistent field calculation
  restart_mode='from_scratch',
  prefix='aluminum',                 ! prefix for the files this run generates
  tstress = .true.                   ! compute the stress tensor
  tprnfor = .true.                   ! compute and print the forces
  pseudo_dir = '/projects/illinois/eng/shared/shared/MSE598DT-FA25/SSSP/',
  outdir='/scratch/[YOUR NET ID]/'   ! large working/output files go to scratch
/
&SYSTEM
  ibrav= 2,                          ! Bravais lattice: 2 = face-centered cubic
  celldm(1)= 9.0,                    ! lattice parameter, in Bohr
  nat=  1,                           ! number of atoms in the cell
  ntyp= 1,                           ! number of atomic species
  ecutwfc = 15.0,                    ! planewave cutoff, in Ry
  occupations = 'smearing',          ! smear occupations (needed for metals)
  smearing = 'methfessel-paxton',
  degauss = 0.05,                    ! smearing width, in Ry
/
&ELECTRONS
  diagonalization='david'            ! Davidson iterative diagonalization
  mixing_mode = 'plain'
  conv_thr =  1.0d-8                 ! scf convergence threshold, in Ry
/
ATOMIC_SPECIES
 Al  26.982  Al.pbe-n-kjpaw_psl.1.0.0.UPF
ATOMIC_POSITIONS alat
 Al 0.00 0.00 0.00
K_POINTS automatic
  4 4 4  0 0 0

A PWscf input file is organized into namelists (the sections starting with an &, such as &CONTROL, each terminated by a /) followed by what QE calls "cards" (such as ATOMIC_SPECIES and K_POINTS). You can introduce comments with a ! in a line, or a # or ! at the beginning of a line.

The &CONTROL namelist contains information about the overall calculation: what kind of calculation to run, where to find the pseudopotentials, and where to write output.

The &SYSTEM namelist describes the lattice and the electronic system: the Bravais lattice and lattice parameter, the number of atoms and species, the planewave cutoff, and how to treat occupations.

The &ELECTRONS namelist controls the electronic self-consistency loop: the iterative diagonalization, mixing, and convergence threshold.

The ATOMIC_SPECIES card gives a (1) label, (2) mass, and (3) pseudopotential file for each of the ntyp atomic species present.

The ATOMIC_POSITIONS card gives each atomic position with the label first, then the coordinates in the cell; the alat option means the coordinates are Cartesian, in units of the lattice parameter.

The K_POINTS card determines the k-points. automatic generates a Monkhorst-Pack mesh, where the first three numbers are the divisions along each direction, and the next three are the offsets (0 for an unshifted mesh, 1 to shift the mesh by half a grid step).

3. Submission script

For small runs, PWscf can be run at the command line. To run a serial job,

$QE/bin/pw.x -in [input script] | tee [output file]

will run the input file, print the output to the screen, and save it to the output file given to tee.
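
For example, if your input file is named Al.scf.in (both file names here are arbitrary):

$QE/bin/pw.x -in Al.scf.in | tee Al.scf.out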

NOTE: You can test whether your input file is sane by setting nstep = 0 in the &CONTROL namelist, and then running at the command line. This can save you from the heartbreak of waiting for a parallel job to start, only to find that you have a typo in your input file.

You can run short parallel jobs at the command line (they will get killed if they run for too long, so this is not the recommended way to do a project!) by doing

mpirun $QE/bin/pw.x -in [input script]
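
For instance, to keep such a test small, you can limit the number of MPI processes (the file names are again just placeholders):

mpirun -np 4 $QE/bin/pw.x -in Al.scf.in | tee Al.scf.out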

To submit a parallel job, you will need a script that looks something like

#!/bin/bash
#
#SBATCH --time=00:10:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=64
#SBATCH --mem-per-cpu=3375  # memory per core
#SBATCH --job-name=Al-supercell
#SBATCH --partition=eng-instruction
#SBATCH --account=25fa-mse598dt-eng
#SBATCH --output=sbatch.o%j
#SBATCH --error=sbatch.e%j
#SBATCH --mail-user=[YOUR EMAIL ADDRESS]
#SBATCH --mail-type=ALL

# Load modules
module load intel/umf intel/compiler-rt intel/tbb intel/compiler intel/mpi intel/mkl

QE=/projects/illinois/eng/shared/shared/MSE598DT-FA25  # class directory with the QE executables
SCRATCH=/scratch/$(whoami)                              # your scratch space, if you want to run or write output there

mpirun $QE/bin/pw.x -in [input file]

This is a regular shell script, with a series of Slurm directives (the #SBATCH lines) at the beginning. To see the options, check out slurm srun options. This script asks for 64 tasks on one node (corresponding to the number of cores on the node), and will email you when the job starts and stops. Once you have this script, you can submit it via

sbatch [slurm script]
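
For example, if you saved the script above as run-al.slurm (the name is arbitrary):

sbatch run-al.slurm

sbatch replies with a job ID ("Submitted batch job ..."); that ID is the %j that appears in the sbatch.o%j and sbatch.e%j file names.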

You can see your job status with

squeue --me

More information is available in the Campus Cluster Running Jobs documentation.