Abisko

This text describes how to run Amber on Abisko.
Homepage of Abisko: http://www.hpc2n.umu.se/resources/abisko

Samuel Genheden, 2012


If you are new to Abisko

On Abisko you should not store files in your home directory because, just as on Platon, it is very small. Instead, you should execute the following command in your home directory:
ln -s /pfs/nobackup$HOME $HOME/pfs

This will create a symbolic link to the pfs file system in your home directory. Now you can easily go to the place where you should store your files:
cd ~/pfs

Also, you can mount this directory directly on your desktop computer:
sshfs user@abisko.hpc2n.umu.se:/home/u/user/pfs Abisko

Here, user is your user name and u is the first letter of your user name.
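As a concrete illustration, a hypothetical user jdoe would mount the directory and later unmount it with fusermount -u (the standard way to release an sshfs mount on Linux):

mkdir Abisko
sshfs jdoe@abisko.hpc2n.umu.se:/home/j/jdoe/pfs Abisko   # jdoe is just an example user name
# ... work with the files ...
fusermount -u Abisko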

Abisko uses a different queue system (Slurm) than Platon, so all the commands are different. Check out this page for a comparison.
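The Slurm commands you will need most often are sbatch, squeue and scancel. For example (the script name and job id below are just placeholders):

sbatch run_amber.sh    # submit a batch script to the queue
squeue -u $USER        # list your pending and running jobs
scancel 123456         # cancel the job with job id 123456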

Executing Amber

I have installed Amber11 in /home/g/genheden/pfs/Amber11, i.e., in my directory, but everybody in the same project as me can read it.
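You can check from your own account that the installation is readable by listing the exe directory that the job scripts below put in the PATH; among other things you should see sander.MPI and pmemd.MPI:

ls /home/g/genheden/pfs/Amber11/exe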

Executing with 8 cores

#!/bin/bash
# Request 1 node with 8 cores, 24 h wall time, charged to project SNIC020-11-20
#SBATCH -N 1
#SBATCH -n 8
#SBATCH -t 24:00:00
#SBATCH -A SNIC020-11-20

# Load the compiler and MPI modules and point to the Amber installation
module add intel-fortran
module add openmpi/intel
export AMBERHOME=/home/g/genheden/pfs/Amber11
export PATH=$AMBERHOME/exe:$PATH

# Run sander in parallel on the 8 requested cores
srun --cpu_bind=rank -n 8 sander.MPI ...

Change 24:00:00 to your desired wall time and ... to the usual input arguments to sander. Note that sander.MPI can be replaced by pmemd.MPI to run pmemd instead.
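As an illustration only (the file names are hypothetical), a filled-in line for an 8-core pmemd run could look like this, using the standard -O/-i/-o/-p/-c/-r/-x options for overwriting output and specifying the input, output, topology, start-coordinate, restart and trajectory files:

srun --cpu_bind=rank -n 8 pmemd.MPI -O -i md.in -o md.out -p complex.prmtop -c complex.inpcrd -r md.restrt -x md.mdcrd

Save the script as, e.g., run_amber.sh and submit it with sbatch run_amber.sh.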


Executing with more cores

#!/bin/bash
#SBATCH -N 4
#SBATCH -n 32
#SBATCH -t 24:00:00
#SBATCH -A SNIC020-11-20

module add intel-fortran
module add openmpi/intel
export AMBERHOME=/home/g/genheden/pfs/Amber11
export PATH=$AMBERHOME/exe:$PATH

srun --cpu_bind=rank -n 32 sander.MPI ...

Here I have used 32 cores; I have not tested other core counts at the moment. Compared to 8 cores, you will get a speed-up of about 2.7. Note that Amber prefers core counts that are powers of two, i.e., 2, 4, 8, 16, 32, 64, etc. Note also that I have spread the job over several nodes (-N 4). If you try to run this on a single node (-N 1), the execution will be about 3 times slower.
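For reference, a hypothetical (untested) 16-core job following the same pattern would spread over two nodes:

#!/bin/bash
#SBATCH -N 2
#SBATCH -n 16
#SBATCH -t 24:00:00
#SBATCH -A SNIC020-11-20

module add intel-fortran
module add openmpi/intel
export AMBERHOME=/home/g/genheden/pfs/Amber11
export PATH=$AMBERHOME/exe:$PATH

srun --cpu_bind=rank -n 16 sander.MPI ...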