Amber

The current version is 22.
The home page is http://ambermd.org/.
From version 14, sander is in AmberTools (which is free); only pmemd is in the commercial package.

Note that on Cosmos you need to accept the licence rules:
Amber 22 is licensed software. To be able to use it, please read through the
license at https://ambermd.org/GetAmber.php#amber and review the
"ASSIGNMENT RESTRICTIONS" in particular.
We need you to confirm that you accept these rules, and then we will give you
access to the amber package.

We need that confirmation for every user who wants to use Amber. You can tell
your colleagues to create a support ticket confirming that they agree to the
license rules so that we can add them to the linux group that provides access
to the software.


Tetralith:

module add Amber/16-nsc1-intel-2017.u7-bare
module load Amber/18-nsc1-intel-2017.u7-bare

COSMOS:

module add GCC/11.2.0  OpenMPI/4.1.1 Amber/22.0-AmberTools-22.3
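
A minimal batch-script sketch for COSMOS, in the same style as the cluster scripts below (the project name and core count are assumptions; adjust them to your allocation):

#!/bin/bash
#SBATCH -N 1
#SBATCH -n 48
#SBATCH -t 48:00:00
#SBATCH -A <project>

module purge
module add GCC/11.2.0 OpenMPI/4.1.1 Amber/22.0-AmberTools-22.3

mpirun pmemd.MPI -O -i sander.in1 -o sander.out1 -p prmtop -c prmcrd -r mdrest1 -ref prmcrd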




Problems and solutions



Frequency calculations for QM projects
Entropies are calculated with thermo:
mkdir Freq
mimic>pdb
mv pdb pdb.in Freq
cp control coord Freq
cd Freq
changepdb <<EOF
pdb
am
prmcrd
q
EOF
changepdb<<EOF
pdb.in
md
prmcrd
w
pdb.in
q
EOF
tleap -f ../../../leap.in
nmode -i ../../../min.in -o min.out -c prmcrd -r mincrd
nmode -i ../../../freq.in -o freq.out -c mincrd
ambtoturbfreq<<EOF
freq.out
EOF
coord2dftd
\mv coord-dftd3 coord
thermo 100 298.15 1.0 c1 >thermo.out
grep 'G(T)' thermo.out
cd ../..

min.in

MM minimisation with nmode, from PON, UR 13/3-13
 &data
  ntrun = 4,
  nprint = 100,
  nsave = 1000,
  idiel = 1,
  scnb = 2.0,
  scee = 1.2,
  maxcyc = 1000,
  cut = 999.0
 &end

freq.in
Title
 &data
  ntx    = 1,
  ntrun  = 1,       nvect  = 0,
  drms   = 1,
  dielc  = 1.0,     idiel  = 1,
  scnb   = 2.0,     scee   = 1.2,
  cut    = 999.0,
 &end
END


With Amber 12, scee and scnb are no longer allowed in the sander input file (they are set by leap).
Three new sections are added in the prmtop file:
ATOMIC_NUMBER, SCEE_SCALE_FACTOR and SCNB_SCALE_FACTOR
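
If you need to inspect or change these factors in an existing prmtop, the parmed program in AmberTools has scee and scnb actions for exactly this (a sketch, assuming a recent AmberTools; prmtop_scaled is a hypothetical output name):

parmed prmtop <<EOF
scee 1.2
scnb 2.0
outparm prmtop_scaled
EOF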

EXECUTION

Amber22:
Dardel:
Amber is now installed on Dardel but you need to try it on the upgraded SS11
partition. Please see the following page for more info:
https://www.pdc.kth.se/about/pdc-news/how-to-test-software-on-the-parts-of-dardel-that-have-slingshot-11-1.1222491
The module name is 'amber/22-cpeGNU-22.06-ambertools-22', and you need to load
the 'PDCTEST' module before loading it.
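
In other words:

ml PDCTEST
ml amber/22-cpeGNU-22.06-ambertools-22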

Amber18

Aurora:
module spider Amber/18.13-AT-18.13-no-python
To get the GPU version, load GCC/7.3.0-2.30 CUDA/9.2.88 OpenMPI/3.1.1
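
That is, something like:

ml GCC/7.3.0-2.30 CUDA/9.2.88 OpenMPI/3.1.1
ml Amber/18.13-AT-18.13-no-python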

Amber 16

Kebnekaise (use preferably 28 cores):

#!/bin/bash
#SBATCH -N 1
#SBATCH -n 28
#SBATCH -t 68:00:00
#SBATCH --exclusive
#SBATCH -A SNIC2016-34-18

module purge
module load ifort/2017.1.132-GCC-5.4.0-2.26  CUDA/8.0.44  impi/2017.1.132
module load Amber/16-AmberTools-16-patchlevel-20-7-hpc2n

mpirun pmemd.MPI -O -i  sander.in00 -o sander.out00 -c prmcrd -r mdrest00 -ref prmcrd



GPUs on Kebnekaise

#!/bin/bash
#SBATCH -J MD
#SBATCH -A SNIC2017-12-46
#SBATCH --gres=gpu:k80:1
#SBATCH -n 2
#SBATCH -t 10:00:00

ml ifort/2017.1.132-GCC-5.4.0-2.26  
ml CUDA/8.0.44  
ml impi/2017.1.132
ml Amber/16-AmberTools-16-patchlevel-20-7-hpc2n

pmemd.cuda -O -i   sander.in1 -o sander.out1 -p prmtop -c prmcrd  -r mdrest1 -ref prmcrd
pmemd.cuda -O -i   sander.in2 -o sander.out2 -p prmtop -c mdrest1 -r mdrest2 -ref prmcrd
pmemd.cuda -O -i   sander.in3 -o sander.out3 -p prmtop -c mdrest2 -r mdrest3 -ref prmcrd
pmemd.cuda -O -i   sander.in4 -o sander.out4 -p prmtop -c mdrest3 -r mdrest4 -ref prmcrd
pmemd.cuda -O -i   sander.in5 -o sander.out5 -p prmtop -c mdrest4 -r mdrest5 -x mdcrd5


You can find more options at https://www.hpc2n.umu.se/resources/software/amber


To run cpptraj for GIST, for example, I had to add the "#SBATCH -c 2" line so that I get more than 4500 MB of memory for one task, which is sometimes necessary for the GIST analysis. An example file is below.

#!/bin/bash
#SBATCH -n 1
#SBATCH -A SNIC2016-34-18
#SBATCH -t 30:00:00
#SBATCH -c 2

ml intel/2017.01
ml Amber/16-AmberTools-16-patchlevel-20-7

cpptraj prmtop < gist.in
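
A minimal gist.in sketch (the gist keywords are from the cpptraj manual; the trajectory name, grid centre, and dimensions are placeholders that must be adapted to your system):

# gist.in
trajin mdcrd5
gist gridspacn 0.5 gridcntr 20.0 30.0 25.0 griddim 40 40 40
go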


Aurora (use preferably 20 (or 10) cores):

#!/bin/bash
#SBATCH -N 1
#SBATCH -n 20
#SBATCH --exclusive
#SBATCH -t  130:00:00

ml purge
ml iomkl/2017b Amber/16.12-AT-17.08

cd $SLURM_SUBMIT_DIR
mpirun -bind-to core -np 20 pmemd.MPI -O -i    prot-0.00-sander.in1 -o prot-0.00-sander.out1 -p prot.prm -c prot.rst          -r prot-0.00.mdrest1 -ref prot.rst
mpirun -bind-to core -np 20 pmemd.MPI -O -i    prot-0.00-sander.in2 -o prot-0.00-sander.out2 -p prot.prm -c prot-0.00.mdrest1 -r prot-0.00.mdrest2 -ref prot.rst
mpirun -bind-to core -np 20 pmemd.MPI -O -i    prot-0.00-sander.in3 -o prot-0.00-sander.out3 -p prot.prm -c prot-0.00.mdrest2 -r prot-0.00.mdrest3
mpirun -bind-to core -np 20 pmemd.MPI -O -i    prot-0.00-sander.in4 -o prot-0.00-sander.out4 -p prot.prm -c prot-0.00.mdrest3 -r prot-0.00.mdrest4 -x prot-0.00.mdcrd4



Old:
ml load iomkl/2017.01
ml load Amber


Amber14

Aurora (use preferably 20 (or 10) cores):

#!/bin/bash
#SBATCH -N 1
#SBATCH -n 20
#SBATCH --exclusive
#SBATCH -t  130:00:00

module load icc/2015.3.187-GNU-4.9.3-2.25
module load OpenMPI/1.8.8
module load Amber/14-AT-15

cd $SLURM_SUBMIT_DIR
mpirun -bind-to core -np 20 pmemd.MPI -O -i    prot-0.00-sander.in1 -o prot-0.00-sander.out1 -p prot.prm -c prot.rst          -r prot-0.00.mdrest1 -ref prot.rst
mpirun -bind-to core -np 20 pmemd.MPI -O -i    prot-0.00-sander.in2 -o prot-0.00-sander.out2 -p prot.prm -c prot-0.00.mdrest1 -r prot-0.00.mdrest2 -ref prot.rst
mpirun -bind-to core -np 20 pmemd.MPI -O -i    prot-0.00-sander.in3 -o prot-0.00-sander.out3 -p prot.prm -c prot-0.00.mdrest2 -r prot-0.00.mdrest3
mpirun -bind-to core -np 20 pmemd.MPI -O -i    prot-0.00-sander.in4 -o prot-0.00-sander.out4 -p prot.prm -c prot-0.00.mdrest3 -r prot-0.00.mdrest4 -x prot-0.00.mdcrd4


Abisko (note that the jobs must use a multiple of 6 cores):

#!/bin/bash
#SBATCH -N 1
#SBATCH -n 12
#SBATCH -t 68:00:00
#SBATCH -A SNIC2014-11-32

module rm openmpi/intel/1.8.1
module add intel/14.0.2
module add openmpi/intel/1.6.5
export AMBERHOME=/home/p/paulius/pfs/Amber13/
export PATH=$AMBERHOME/bin:$PATH

srun --cpu_bind=rank -n 12 pmemd.MPI -O -i prot-1.00-sander.in1 -o prot-1.00-sander.out1 -p prot.prm -c prot.rst          -r prot-1.00.mdrest1 -ref prot.rst
srun --cpu_bind=rank -n 12 pmemd.MPI -O -i prot-1.00-sander.in2 -o prot-1.00-sander.out2 -p prot.prm -c prot-1.00.mdrest1 -r prot-1.00.mdrest2 -ref prot.rst
srun --cpu_bind=rank -n 12 pmemd.MPI -O -i prot-1.00-sander.in3 -o prot-1.00-sander.out3 -p prot.prm -c prot-1.00.mdrest2 -r prot-1.00.mdrest3
srun --cpu_bind=rank -n 12 pmemd.MPI -O -i prot-1.00-sander.in4 -o prot-1.00-sander.out4 -p prot.prm -c prot-1.00.mdrest3 -r prot-1.00.mdrest4 -x prot-1.00.mdcrd4


Platon:

#!/bin/sh
#PBS -l nodes=1:ppn=8
#PBS -l walltime=168:00:00
#PBS -j oe

. use_modules
module add intel/14.0
module add mkl/11.1
module add openmpi/1.6.4/intel/14.0
module add amber/14

cd $PBS_O_WORKDIR
mpirun -np 8 pmemd.MPI -O -i sander.in1 -o sander.out1 -r mdrest1 -c prmcrd -ref prmcrd
mpirun -np 8 pmemd.MPI -O -i sander.in2 -o sander.out2 -r mdrest2 -c mdrest1 -ref mdrest1
mpirun -np 8 pmemd.MPI -O -i sander.in3 -o sander.out3 -r mdrest3 -c mdrest2
mpirun -np 8 pmemd.MPI -O -i sander.in4 -o sander.out4 -r mdrest4 -c mdrest3


Alarik (use a multiple of 8 or 16 cores):

#!/bin/bash
#SBATCH -N 1
#SBATCH -n 8
#SBATCH -A SNIC2014-11-32
#SBATCH -J MD-COX2-L01tl07
#SBATCH -t 68:00:00


module add mkl/11.2
module add intel/15.0
module add openmpi/1.8.3/intel/15.0
module add amber/14

cd $SLURM_SUBMIT_DIR
mpirun -np 8 pmemd.MPI -O -i   prot-0.00-sander.in1 -o prot-0.00-sander.out1 -p prot.prm -c prot.rst          -r prot-0.00.mdrest1 -ref prot.rst
mpirun -np 8 pmemd.MPI -O -i   prot-0.00-sander.in2 -o prot-0.00-sander.out2 -p prot.prm -c prot-0.00.mdrest1 -r prot-0.00.mdrest2 -ref prot.rst
mpirun -np 8 pmemd.MPI -O -i   prot-0.00-sander.in3 -o prot-0.00-sander.out3 -p prot.prm -c prot-0.00.mdrest2 -r prot-0.00.mdrest3
mpirun -np 8 pmemd.MPI -O -i   prot-0.00-sander.in4 -o prot-0.00-sander.out4 -p prot.prm -c prot-0.00.mdrest3 -r prot-0.00.mdrest4 -x prot-0.00.mdcrd4



Akka (any number of nodes can be used, but multiples of 8 are best)

#!/bin/bash
#SBATCH -n 8
#SBATCH -t  40:00:00
#SBATCH -A SNIC2014-11-32

module add amber/14

srun --cpu_bind=rank pmemd.MPI -O -i       prot-0.00-sander.in1 -o prot-0.00-sander.out1 -p prot.prm -c prot.rst          -r prot-0.00.mdrest1 -ref prot.rst
srun --cpu_bind=rank pmemd.MPI -O -i       prot-0.00-sander.in2 -o prot-0.00-sander.out2 -p prot.prm -c prot-0.00.mdrest1 -r prot-0.00.mdrest2 -ref prot.rst
srun --cpu_bind=rank pmemd.MPI -O -i       prot-0.00-sander.in3 -o prot-0.00-sander.out3 -p prot.prm -c prot-0.00.mdrest2 -r prot-0.00.mdrest3
srun --cpu_bind=rank pmemd.MPI -O -i       prot-0.00-sander.in4 -o prot-0.00-sander.out4 -p prot.prm -c prot-0.00.mdrest3 -r prot-0.00.mdrest4 -x prot-0.00.mdcrd4


Erik (GPU)

#!/bin/bash
#SBATCH -N 1
#SBATCH -t 08:00:00
#SBATCH -J Aln723-Gal3
#SBATCH --exclusive

module add intel/14.0
module add mkl/11.1
module add cuda/6.5
module add openmpi/1.8.3/intel/14.0_cuda
module add amber/14_experimental

pmemd.cuda -O -i sander.in1 -o sander.out1 -p prmtop -c prmcrd -r mdrest1 -ref prmcrd
pmemd.cuda -O -i sander.in2 -o sander.out2 -p prmtop -c mdrest1 -r mdrest2 -ref prmcrd
pmemd.cuda -O -i sander.in3 -o sander.out3 -p prmtop -c mdrest2 -r mdrest3 -ref prmcrd
pmemd.cuda -O -i sander.in4 -o sander.out4 -p prmtop -c mdrest3 -r mdrest4 -ref prmcrd
pmemd.cuda -O -i sander.in5 -o sander.out5 -p prmtop -c mdrest4 -r mdrest5 -x mdcrd5


Older
#!/bin/sh
#SBATCH -N 1
#SBATCH -t 02:00:00
#SBATCH -J pm-amber14

module add intel/14.0
module add mkl/11.1
module add cuda/6.5
module add openmpi/1.8.3/intel/14.0_cuda
module add amber/14_experimental

pmemd.cuda -O -i sander.in1 -o sander.out1 -p ferr-l01.prm -c ferr-l01.rst -r ferr-l01.restrt -x ferr-l01.mdcrd1
pmemd.cuda -O -i sander.in2 -o sander.out2 -p ferr-l01.prm -c ferr-l01.restrt -r ferr-l01.restrt2 -x ferr-l01.mdcrd2
pmemd.cuda -O -i sander.in3 -o sander.out3 -p ferr-l01.prm -c ferr-l01.restrt2 -r ferr-l01.restrt3 -x ferr-l01.mdcrd3
I tested Amber14 on Erik: ferritin for 1 ns, 54267 atoms. Erik was done
in 1 h, Alarik in 8 h.

|  Final Performance Info:
|     -----------------------------------------------------
|     Average timings for last    5000 steps:
|         Elapsed(s) =      32.02 Per Step(ms) =       6.40
|             ns/day =      26.98   seconds/ns =    3201.88
|
|     Average timings for all steps:
|         Elapsed(s) =    3212.82 Per Step(ms) =       6.43
|             ns/day =      26.89   seconds/ns =    3212.82
|     -----------------------------------------------------

This is with one GPU and amber/14_experimental, whereas I got 3.24 ns/day
on 16 cores on Alarik with amber/14.

Problems if velocities are not present in the mdrest files (Paulius 19/3-15).
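
The note gives no details, but the usual symptom is that a restart with irest=1 aborts, since irest=1 requires ntx=5 and a velocity section in the restart file. A common workaround (my assumption, not necessarily Paulius's fix) is to read coordinates only and regenerate the velocities, keeping the rest of the input unchanged:

 &cntrl
  irest = 0,        ! not a true restart; velocities are not read
  ntx   = 1,        ! read coordinates only from the restart file
  tempi = 298.15,   ! temperature for the regenerated velocities
 &end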


Amber 10
Example script by Samuel
#!/bin/sh
#PBS -l nodes=1:ppn=8
#PBS -l walltime=2:00:00
#PBS -A SNIC001-10-79

cd $PBS_O_WORKDIR
module add intel/11.1
module add openmpi/intel
export AMBERHOME=/sw/pkg/bio/Amber10
export PATH=$AMBERHOME/exe:$PATH
export PATH=$PBS_O_PATH:$PATH

mpirun -np 8 sander.MPI -O -i sander.in1 -o sander.out1 -r mdrest1 -c prmcrd -ref prmcrd

or
mpirun -np 2 sander.MPI -O -ng 2 -groupfile prot_l080_group1


(The commands below are for csh; on bash and ksh, replace 'setenv XXX YYY' with 'export XXX=YYY'.)

Sarek:
setenv AMBERHOME /kfs/home/t/throd/amber/amber8
Serial sander:
> module load pgi-compiler/5.2-4
> sander
Parallel sander:
> module load mpich mpich/1.2.5..12/gm/pgi
> module load pgi-compiler/5.2-4
> mpirun -np <number_of_cpus> mpi/sander
Parallel pmemd:
> module load mpich/1.2.5..12/gm/pgi
> module load gm/2.1.2 (maybe this is not necessary)
> module load pgi-compiler/5.2-4
> mpirun -np <number_of_cpus> mpi/pmemd (or psander)

Seth:
setenv AMBERHOME /kfs/home/t/throd/amber/amber8_seth
> module load intel-compiler/8.1
Serial sander:
> sander
Parallel sander:
> mpirun -np <number_of_cpus> mpi/sander (or psander)

Docenten:
setenv AMBERHOME /sw/pkg/bio/Amber8
> module load intel/8.1
Serial sander:
> sander
Parallel sander:
> module load mpich-intel8/1.2.5.2
> mpirun -np <number_of_cpus> mpi/sander (or psander)

Sigrid:
Amber is compiled with static libraries, so it should work on all the SweGrid clusters
setenv AMBERHOME /sw/pkg/bio/Amber8
Serial sander:
> sander
Parallel sander:
> module load mpich-intel8/1.2.5.2
> mpirun -np <number_of_cpus> mpi/sander (or psander)


toto7/whemim64:
. use_modules
module load intel/8.1
export AMBERHOME=/sw/amber/Amber8
export PATH=$AMBERHOME/exe:$PATH

Locally:
export LD_LIBRARY_PATH=/opt/intel/f_compiler81/lib:/opt/intel/cc_compiler81/lib:$LD_LIBRARY_PATH
export AMBERHOME=/home/bio/AMBER/Amber8

setenv LD_LIBRARY_PATH /opt/intel/f_compiler81/lib:/opt/intel/cc_compiler81/lib:$LD_LIBRARY_PATH
setenv AMBERHOME /home/bio/AMBER/Amber8


Amber 11 is now installed locally in /away/bio/AMBER/Amber11 and on platon in /sw/pkg/bio/Amber/Amber11. On platon, both a serial and a parallel version are installed. I haven't tested it much, but it seems to be working fine.


On platon, you can use the same queue script as for Amber10, which is shown on Ulf's homepage.

Some notes:
The parameters scnb and scee, which set the scaling factors for the 1-4 van der Waals and electrostatic interactions, are no longer supported in sander. For some reason they have been put in the topology file instead (my guess is that this has something to do with the fact that the CHARMM force field is supported, which does not use the same scaling parameters). When I tested my old sander input files where I set these parameters, it worked fine locally, but on platon sander crashed. So my recommendation is to remove them from the input file, if you are using old input files.
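
For input files like the ones above, where the two parameters sit on lines of their own, something like this should strip them (a sketch, untested; check the result before running):

sed -i '/scnb/d; /scee/d' sander.in1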

In the new release, pmemd is compiled together with the standard sander program. I haven't tried it, but it should be much faster than sander for a standard MD run.

Samuel 28/7-10



Amber 11 on Abisko

After a lot of testing, we have finally figured out how to run Amber properly on Abisko.

If you would like to run with only 8 cores as in Platon you have to make a script like this:
#!/bin/bash
#SBATCH -N 1
#SBATCH -n 8 
#SBATCH -t 20:00:00
#SBATCH -A SNIC020-11-20 

module add intel-fortran
module add openmpi/intel
export AMBERHOME=/home/g/genheden/pfs/Amber11

srun --cpu_bind=rank -n 8 $AMBERHOME/exe/sander.MPI -O -i r42_sander.in1 -o r42_sander.out1 -r r42_mdrest1 -p apo.prm -c apo.rst -ref apo.rst

If you would like to run with more cores, say 32, then you have to spread them out over several nodes, like this:
#!/bin/bash
#SBATCH -N 4
#SBATCH -n 32
#SBATCH -t 20:00:00
#SBATCH -A SNIC020-11-20 

module add intel-fortran
module add openmpi/intel
export AMBERHOME=/home/g/genheden/pfs/Amber11

srun --cpu_bind=rank -n 32 $AMBERHOME/exe/sander.MPI -O -i r42_sander.in1 -o r42_sander.out1 -r r42_mdrest1 -p apo.prm -c apo.rst -ref apo.rst

Samuel 2/2-12

INSTALLATION

Amber20 (18/11-20)
Downloaded from ambermd.org:
tar -xvf Amber20.tar.bz2
Downloaded and installed cmake from https://cmake.org/download/
(took the cmake-3.18.4-Linux-x86_64.sh version).
cd amber20_src/build
./run_cmake
Afterwards you need to source amber.sh.
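
Put together, the sequence looks roughly like this (a sketch; the install prefix is cmake's default and my assumption):

cd amber20_src/build
./run_cmake                     # review its options first
make install
source ../../amber20/amber.sh   # sets AMBERHOME and PATH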



Amber 16/17 (1/1-18)

export AMBERHOME=/temp4/bio/AMBER/Amber16
./configure gnu
. amber.sh
make install


Following another user's query (you are not the only Amber users we care for), I have rebuilt Amber 14 using a more modern compiler and patch level. It now performs a lot better on the test suite. The only failures I get are cases where the problem is not suitable to run on e.g. 8 cores. I named the module amber/14. Could you and your team please test it and switch if satisfied with the new module. If you encounter issues, please submit a formal user query (support request) and we will have a look into it.

Amber 14 (pmemd; parallel and GPU versions) was installed by Joachim Hein on Lunarc in /common/sw/alarik/pkg/amber/amber14/bin/pmemd.

tar xfj ../../../tarballs/amber/Amber14/AmberTools14.tar.bz2 
tar xfj ../../../tarballs/amber/Amber14/Amber14.tar.bz2 


export AMBERHOME=/common/sw/alarik/src/amber/amber14

cd $AMBERHOME

./configure --full-help

module load mkl/11.1

# export SSE_TYPES=SSE2,SSSE3,SSE4.1,SSE4.2  # Intel CPU
export MKL_HOME=$MKLROOT

module load intel/14.0

./configure intel

source $AMBERHOME/amber.sh
# prepend the amber library
export LD_LIBRARY_PATH="${AMBERHOME}/lib:${LD_LIBRARY_PATH}"

make install

make test

######################################################################

# This fails 2 tests - I still need to ask the "experts"

######################################################################

# Assuming this goes OK, we intend to build an MPI version

module load openmpi/1.8.1/intel/14.0
./configure -mpi intel


make install

export DO_PARALLEL="mpirun -np 2"

# this should be placed into a runscript asking for two cores
make test 


export DO_PARALLEL="mpirun -np 8"

# again, put this into a runscript
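
A sketch of such a runscript (project and wall time are placeholders):

#!/bin/bash
#SBATCH -N 1
#SBATCH -n 8
#SBATCH -t 04:00:00
#SBATCH -A <project>

module load intel/14.0 mkl/11.1 openmpi/1.8.1/intel/14.0
export AMBERHOME=/common/sw/alarik/src/amber/amber14
source $AMBERHOME/amber.sh
cd $AMBERHOME
export DO_PARALLEL="mpirun -np 8"
make test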

Ulf used the same instructions to recompile everything in AmberTools (including sander), 22/1-15.

With molsurf (e.g. in qmmmpbsa), I got
molsurf: error while loading shared libraries: libnetcdf.so.7: cannot open shared object file: No such file or directory
which was solved by setting in .bash_profile:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/common/sw/alarik/pkg/bio/AMBER/Amber14/lib

AmberTools 1.4 on platon

./configure gnu
Unable to run flex; this is recommended for NAB
make

Version 10.0 on platon

I used the files copied from milleotto, so all the modifications to Amber 10 are also included on platon.
export AMBERHOME=/sw/pkg/bio/Amber10/
export MPI_HOME=/sw/pkg/openmpi/1.4.1/intel/11.1/
module add openmpi/intel
module add intel/11.1
module add mkl/10.2

Installation of AmberTools

./configure_at gcc
make -f Makefile_at
Note! I could not get leap to work, so I excluded it from the compilation.

Installation of Amber

First serial:
./configure_amber ifort
make serial
... and then parallel:
make clean
./configure_amber -openmpi ifort
make parallel

The tests seem to work, with the same failures as for the installation on milleotto.

Version 10.0 on /home/bio/AMBER/Amber10

Downloaded the patches and invoked them.

1. Installation of AmberTools

Followed the manual:
./configure_at gcc
make -f Makefile_at

Make issued the warning "make: warning: Clock skew detected. Your build may be incomplete.", but this can be ignored.

The following failures were reported by the test-cases:

nab test:
=====================================================
Running test of randomized embedding

1c1
< radius of gyration: 7.340
---
> radius of gyration: 7.249
    FAILED (OK if gyration radius is about 7 or 8)
=====================================================
Running test to do simple lmod optimization
1c1
< Glob. min. E = -122.793 kcal/mol
---
> Glob. min. E = -129.345 kcal/mol
    FAILED (probably OK if energy is -115 to -125)

=====================================================
cd antechamber/ash && ./Run.ash
diffing ash.mol2.save with ash.mol2
possible FAILURE: check ash.mol2.dif
==============================================================
cd antechamber/sustiva && ./Run.sustiva
diffing sustiva.mol2.save with sustiva.mol2
possible FAILURE: check sustiva.mol2.dif
==============================================================
cd antechamber/fluorescein && ./Run.fluorescein
diffing fluorescein.mol2.save with fluorescein.mol2
possible FAILURE: check fluorescein.mol2.dif
===========================================================
cd amoeba; ./Run.amoeba_sol
diffing hpv.prmtop.save with hpv.prmtop
possible FAILURE: check hpv.prmtop.dif
=======================================================
cd addions; ./Run.addions
diffing glu.mol2.save with glu.mol2
possible FAILURE: check glu.mol2.dif
==============================================================

Some other failures were also issued, but these differences were not numerical and hence can probably be ignored.

2. Installation of Amber

./configure_amber -static g95
make serial

A few warnings were issued by the compilers but they could be ignored.

The following failures were reported by the test-cases:

cd LES && ./Run.PME_LES

    Amber 8 ADDLES and SANDER.LES test:

addles:
Killed
    ./Run.PME_LES: Program error
==============================================================
cd PIMD/part_nmpimd_water && ./Run.nmpimd
Killed
At line 1315 of file _ew_box.f (Unit 9 "fort.9")
Traceback: not available, compile with -ftrace=frame or -ftrace=full
Fortran runtime error: End of file
    ./Run.nmpimd: Program error

The test case test.ncsu.serial could not be completed since the simulation got stuck. This was also the case for tgtmd/change_target.ntr in test.sander.BASIC.

Version 10.0, parallel version on milleotto

AmberTools was installed as above; no new failures were reported, and a few of the earlier ones were no longer reported.

Amber was installed as described below for Amber 9.0. The installation did not produce any serious warnings and no failures were reported by the test cases. Both the parallel and the serial test cases were executed.

To execute the test cases a little bit more work is required:
 1. Create an interactive session
    qsub -I -l walltime=2:00:00,nodes=1:ppn=4
 2. Import various modules and set various environment variables
    module rm intel
    module add intel/9.1
    module add mkl/9.1
    module add mpich-intel9/1.2.7p1
    export AMBERHOME=/sw/pkg/bio/Amber10
    export PATH=$AMBERHOME/exe:$PATH
    export DO_PARALLEL='mpiexec -np 4'
 3. Run the test
    make test.parallel >& test.parallel.log

Version 9.0

Serial Amber 9 on alarik (20/2-15)

module add intel/14.0
export AMBERHOME=/common/sw/alarik/pkg/bio/AMBER/Amber9
export PATH=$AMBERHOME/exe:$PATH

cd $AMBERHOME/src
make clean (got lots of warnings and some errors)
./configure ifort_x86_64
make serial

Got
g++ -c  -o elsize.o elsize.cc
elsize.cc: In function ‘int main(int, char**)’:
elsize.cc:117:15: error: ‘exit’ was not declared in this scope
elsize.cc:124:30: error: ‘strcmp’ was not declared in this scope
elsize.cc:145:44: error: ‘strcmp’ was not declared in this scope
elsize.cc:160:49: error: ‘calloc’ was not declared in this scope
elsize.cc:171:14: error: ‘exit’ was not declared in this scope
elsize.cc:233:13: error: ‘exit’ was not declared in this scope
elsize.cc:295:11: error: ‘exit’ was not declared in this scope
elsize.cc:339:12: error: ‘exit’ was not declared in this scope
elsize.cc: In function ‘atom_count GetNumberOfAtoms(const char*, int)’:
elsize.cc:504:14: error: ‘exit’ was not declared in this scope
elsize.cc: In function ‘atom_count ReadAtomicCoordinates(const char*, int, atom_count, double*, double*, double*, double*, double*)’:
elsize.cc:565:14: error: ‘exit’ was not declared in this scope
make[1]: *** [elsize.o] Error 1
make[1]: Leaving directory `/common/sw/alarik/pkg/bio/AMBER/Amber9/src/etc'
make: *** [serial] Error 2
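
These are the classic missing-include errors that appear when old C++ code meets a newer g++: exit and calloc are declared in <cstdlib> and strcmp in <cstring>, and old compilers pulled those headers in implicitly. A sketch of a fix (untested on this tree):

cd $AMBERHOME/src/etc
# prepend the missing headers, then rebuild
sed -i '1i #include <cstdlib>' elsize.cc
sed -i '1i #include <cstring>' elsize.cc
cd $AMBERHOME/src
make serial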

On local system:

Install G95 compiler:
download g95-x86-linux.tgz from http://ftp.g95.org/
gunzip g95-x86-linux.tgz
tar -xvf g95-x86-linux.tar
cd /home/bio/Bin/Linux
ln ../../G95/bin/i686-pc-linux-gnu-g95 g95

070115:
export AMBERHOME=/home/bio/AMBER/Amber9
cd $AMBERHOME/src
./configure g95
make serial >mke.out
This worked (with several warnings), but only after reinstalling the complete program. sanderf comes from this (now deleted) compilation.

0608:
export AMBERHOME=/home/bio/AMBER/Amber9
cd $AMBERHOME/src
./configure gfortran

make serial >mke.out
Several warnings of the following type were ignored:
warning: passing arg 1 of `SortByDouble' from incompatible pointer type
A few other warnings were also ignored.
61 executables were built in Amber9/exe, including sander, antechamber, and tleap, but not pmemd.

070115: When doing it again, I now get:
 In file _lmod.f:259

   character(len=len(MVPM_FORWARD)) :: matrix_vector_product_method=MVPM_FORWARD                                                                  1
Error: 'matrix_vector_product_method' at (1) must have constant character length in this context
 In file _lmod.f:271

   character(len=len(MC_METHOD_QUICK_QUENCH)) :: Monte_Carlo_method &
                                                                  1
Error: 'monte_carlo_method' at (1) must have constant character length in this context
 In file _lmod.f:334

   character(len=len(XMIN_METHOD_LBFGS)) :: xmin_method = XMIN_METHOD_LBFGS
                                                      1
Error: 'xmin_method' at (1) must have constant character length in this context
make[1]: *** [lmod.o] Error 1
make: *** [serial] Error 2


cd ../test
make test.serial >tst.out
make: *** [test.sander.BASIC] Error 1

cd tgtmd/change_target; ./Run.tgtmd
SANDER: Targeted MD with changing target
diffing tgtmd.out.save with tgtmd.out
possible FAILURE:  check tgtmd.out.dif
==============================================================
cd tgtmd/change_target.rms; ./Run.tgtmd
SANDER: Targeted MD with changing target and fit/rmsd
        to different regions
diffing tgtmd.out.save with tgtmd.out
possible FAILURE:  check tgtmd.out.dif
==============================================================
cd tgtmd/conserve_ene; ./Run.tgtmd
SANDER: Targeted MD energy conservation test
diffing tgtmd.out.save with tgtmd.out
possible FAILURE:  check tgtmd.out.dif
==============================================================

These three are caused by:
71c71
<      Mask ":3-10@CA,N,C,O,H,HA" matches    47 atoms
---
>      Mask ":3-10@CA,N,C,O,H,HA" matches    58 atoms

cd tgtmd/minimize; ./Run.tgtmin
SANDER: Targeted minimization
  ./Run.tgtmin:  Program error
==============================================================
cd tgtmd/PME; ./Run.tgtPME
SANDER: Targeted MD with PME
  ./Run.tgtPME:  Program error
==============================================================
cd umbrella; ./Run.umbrella
  ./Run.umbrella:  Program error
==============================================================
cd noesy; ./Run.noesy
  ./Run.noesy:  Program error
cd jar; ./Run.jar
  ./Run.jar:  Program error

cd trajene; ./Run.trajene
  ./Run.trajene:  Program error

cd vancomycin_lmod; ./Run.vancomycin_lmod
  ./Run.vancomycin_lmod:  Program error


../../exe/psander: Command not found.
make: *** [test.psander] Error 1

../../exe/pmemd: Command not found.
make: *** [test.pmemd] Error 1


Amber 9 on Macintosh:

Used gfortran, which compiled with no problems

cd bintraj; ./Run.bintraj

sander and ptraj: test sander.netCDF output and ptraj netCDF input
 ./Run.bintraj: Program error
make: [test.sander.BASIC] Error 1 (ignored)

Serial Amber 9 on docenten:

module add intel/8.1
export AMBERHOME=/sw/pkg/bio/Amber9
export PATH=$PATH:$AMBERHOME/exe

cd $AMBERHOME/src
./configure ifort_x86_64
make serial

Tests
All tests went fine except for antechamber, where some failed.

Parallel Amber 9.0 on milleotto:

module rm intel/8.1
module add intel/9.1
module add mpich-intel9/1.2.7p1
module add mkl/9.1

export MKL_HOME=/sw/pkg/mkl/9.1
export AMBERHOME=/sw/pkg/bio/Amber9_jh

./configure -p4 ifort_x86_64

make serial

make clean
export MPI_HOME=/sw/pkg/mpich-intel9/1.2.7p1
The following lines (shown in red in the original notes) are added to the file mdfil.f in src/sander:
#ifdef MPI
      else if (arg(1:3) == '-p4') then
         iarg = iarg+1
      else if (arg == '-np') then
         iarg = iarg+1
      else if (arg == '-execer_id') then
         iarg = iarg+1
      else if (arg == '-master_host') then
         iarg = iarg+1
      else if (arg == '-my_hostname') then
         iarg = iarg+1
      else if (arg == '-my_nodenum') then
         iarg = iarg+1
      else if (arg == '-my_numprocs') then
         iarg = iarg+1
      else if (arg == '-total_numnodes') then
         iarg = iarg+1
      else if (arg == '-master_port') then
         iarg = iarg+1
      else if (arg == '-remote_info') then
         iarg = iarg+2

      else if (arg == '-mpedbg') then
         continue
      else if (arg == '-dbx') then
         continue
      else if (arg == '-gdb') then
         continue
#endif
./configure -mpich -p4 ifort_x86_64
make parallel

Jimmy, 7/8-07 and 9/8-07


Version 8.0

Tried to recompile it on alarik 6/3-15:
export AMBERHOME=/common/sw/alarik/pkg/bio/AMBER/Amber8/
cd /common/sw/alarik/pkg/bio/AMBER/Amber8/src
./configure ifort
gave this warning:
MKL_HOME is set to /common/sw/alarik/pkg/intel/14.0/composer_xe_2013_sp1.2.144/mkl
MKL libraries were not found !

The configuration file, config.h, was successfully created.

make serial
gcc -c -I/usr/X11R6/include -O2  -o tLeap.o tLeap.c
In file included from tLeap.c:92:0:
getline.h:8:14: error: conflicting types for ‘getline’
/usr/include/stdio.h:673:20: note: previous declaration of ‘getline’ was here
make[2]: *** [tLeap.o] Error 1
make[2]: Leaving directory `/common/sw/alarik/pkg/bio/AMBER/Amber8/src/leap/src/leap'
make[1]: *** [install] Error 2
make[1]: Leaving directory `/common/sw/alarik/pkg/bio/AMBER/Amber8/src/leap'
make: *** [serial] Error 2
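
The error comes from leap shipping its own getline declaration in getline.h, which clashes with the POSIX getline that modern glibc declares in stdio.h. A sketch of the usual workaround, renaming leap's copy (untested here):

cd $AMBERHOME/src/leap/src/leap
# rename leap's getline so it no longer collides with the glibc one
grep -rlw getline . | xargs sed -i 's/\bgetline\b/leap_getline/g'
cd $AMBERHOME/src
make serial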



Details of the not-so-straightforward installation are given below.

On sarek (AMD Opteron 848):

Serial:

> module load pgi-compiler/5.2-4 (5.2-2 is default, so maybe I should have taken that instead)
> setenv AMBERHOME /kfs/home/t/throd/amber/amber8

> cd $AMBERHOME/src
> ./configure -opteron pgf90 (./configure -help gives options)
This gives the config.h file, which doesn't work until we remove the '-tp k8-32' flag. Therefore:
> sed 's/-tp k8-32//g' config.h > tmp
> mv tmp config.h
Now we can compile:
> make serial

Next, we do the testing:
> cd ../test
> make test
This makes a series of tests. If there are deviations between the expected and the calculated results, a file ending in 'dif' is dumped. In this case the test crashes! Let's look at the dumped files:
> ls */*.dif

Inspect the files. In most of them the results only differ in the last digit. That is OK. In the last one, LES/LES.prmtop.dif, which looks something like a prmtop file, there are serious differences. Now we test the different programs individually to figure out what works and what doesn't.
> make test.sander
> make test.sander.LES
and so on.
The dif files are in e.g. sander/*/*.dif
and so on. Look in the $AMBERHOME/test/Makefile to see a list.
Everything works fine, except leap (make test.leap).

Next we test tleap:
> make test.tleap
We get the same problem as before, so the error is located in tleap (and fortunately not in sander).
So we do not use tleap before this has been resolved. Hence:
> rm ../exe/tleap

Parallel:

Only sander can be compiled in parallel and, unfortunately, the serial executables sander and sander.LES are overwritten unless we do something. I edited the Makefiles such that the executables are copied to $AMBERHOME/exe/mpi instead of $AMBERHOME/exe. First in $AMBERHOME/src/Makefile, line 47: -mkdir ../exe -> -mkdir ../exe/mpi, and next:
line 195 in $AMBERHOME/src/sander: /bin/mv sander$(SFX) sander.LES$(SFX) ../../exe -> /bin/mv sander$(SFX) sander.LES$(SFX) ../../exe/mpi.

> module load mpich/1.2.5..12/gm/pgi
This also loads pgi-compiler/5.2-2, so now I have two loaded PGI compilers!
> module unload pgi-compiler/5.2-2
> make clean
> ./configure -mpich -opteron pgf90
> sed 's/-tp k8-32//g' config.h > tmp
> mv tmp config.h
> setenv MPICH_HOME /lap/mpich/1.2.5..12/gm-2.1.2/pgi-5.2
> make parallel
> cd ../exe; ln -s mpi/sander psander (otherwise we cannot use the test)

PMEMD:
I have only compiled pmemd in parallel. First I downloaded the file pmemd_new_config.030505.tar from the Internet into $AMBERHOME/src/pmemd. The tar file contains newer configure files that also work with pgf90.

Before we start, we need to load the gm module:
> module load gm/2.1.2
> tar xvf pmemd_new_config.030505.tar
mpich is installed in /lap/mpich/1.2.5..12/gm-2.1.2/pgi-5.2 (verify by typing 'which mpirun') and gm is installed in /lap/gm/2.1.2.
> ./new_configure linux64_opteron pgf90 mpich_gm
and give the directories for mpich and gm when prompted.
> make install
> cd ../../exe; mv pmemd mpi
> ./configure -athlon -scali ifort (not sure that scali works with the intel compilers; otherwise change to PGI)

It is difficult to test the parallel versions using 'make test', because parallel jobs can only be run by submitting a job via PBS.


On seth:

serial:
> module load intel-compiler/8.1
> ./configure -athlon ifort
The Intel Math Kernel Libraries (MKL) are not installed on seth, so ignore the message
> make serial
> cd ../test
> make test
Fails in Run.lysine in qmmm/standard
> test.sander PASSED
> test.sander.LES PASSED
> test.sander.QMMM FAILED with program error
> test.nmode PASSED
> test.anal PASSED
> test.ptraj PASSED
> test.leap PASSED
> test.resp PASSED
> test.antechamber FAILED with program error
> test.pbsa FAILED with program error
> cd ../exe
> rm -f antechamber sander.QMMM pbsa

Parallel
cd $AMBERHOME/src
Edit Makefile as outlined above for sarek
> make clean
> ./configure
Now add '-DMPI' to the flags in line 48 in config.h
> make parallel
> cd ../exe
> ln -s mpi/sander psander

PMEMD.
No success!

Amber8 is installed on:

sarek.hpc2n.umu.se (parallel and serial by me. pgi5.4 versions, Myrinet mpich)
seth.hpc2n.umu.se (parallel and serial by me. intel8.1 versions, scali mpi)
toto7.lunarc.lu.se (serial by Francesco, intel8 version)
sigrid.lunarc.lu.se (serial and parallel by, intel8 versions, mpich)
docenten.lunarc.lu.se (serial by Francesco and parallel by me, intel8 versions, mpich)
in /home/bio (serial by Francesco, intel8 versions)

Details of how to run sander (and pmemd) are given below. Installation details are given at the end for seth and sarek.
Instructions on how to use Python with Amber are given on a separate page.

I have compiled the new program pmemd on sarek, but only in parallel.
Parallel executables of sander, sander.LES, and pmemd are in all cases in $AMBERHOME/exe/mpi, whereas serial versions are in $AMBERHOME/exe. $AMBERHOME/exe should be added to your path. I have added a link named psander in $AMBERHOME/exe, which points to mpi/sander.

With regard to pmemd: pmemd is designed exclusively to run MD with periodic boundary conditions and the particle mesh Ewald summation for the electrostatic interactions. It should be faster than sander, but probably not unless many nodes are used. I have not tested how well it performs (I would be pleased if somebody did), but until then I recommend continuing to use sander. Also, I encourage you to run parallel jobs on docenten, seth, and sarek, but make some tests first, since the parallel versions are NOT tested. All serial versions are tested with the 'make test' facility.

What does not work:
On seth: antechamber, sander.QMMM, pbsa, pmemd
On seth, toto7, docenten, sigrid, and locally: pmemd
On sarek: problems with sander.LES, more specifically with ADDLES.
On sarek, docenten, and seth: Results based on Antechamber deviate somewhat from what they should be.
On sigrid: Problems with sander.QMMM

Let me know, if something does not work.

Thomas (26/4-05)

Version 7.0

To run it, you need to set

on garm:
export AMBERHOME=/usr/local/software/Amber/Amber7/amber7
export PATH=$PATH:$AMBERHOME/exe

on husmodern:
export AMBERHOME=/home/bio/Amber/Amber7
export PATH=$PATH:$AMBERHOME/exe

on whenim64:
export AMBERHOME=/sw/amber/Amber7
export PATH=$PATH:$AMBERHOME/exe

We have on all computers made the 1999 force field the default:
cd $AMBERHOME/dat/leap/cmd
ln -fs leaprc.ff99 leaprc

Amber 7.0  is not present on signe or ask yet.

Amber 6.0 is also present on whenim64.

The source code is located in $AMBERHOME/src

The parameter files in $AMBERHOME/dat

The documentation is in $AMBERHOME/doc/amber7.pdf



Version 5.0

To run it, you need to set

on signe:
export OML_AMBER=/molcas/DIVPROG/lib/Amber
export PATH=$PATH:$OML_AMBER/exe

on ask:
export OML_AMBER=/molcas/DIVPROG/lib/Amber
export PATH=$PATH:$OML_AMBER/exe

Amber is not present on husmodern or garm yet.

The source code is located in: $OML_AMBER/src

The parameter files in $OML_AMBER/dat


Changes of the Amber code:

To write out four-decimal charges:
File /usr/local/software/Amber/Amber7/amber7/src/leap/src/leap/pdb_format.c
Lines 41-42.
Change
                "%6 %5d %4s%c%3s %c%4d%c   %8f%8f%8f%6f%6f %3d",
                "ATOM  %5d %-4s%C%-3s %C%4d%C   %8.3f%8.3f%8.3f%6.2f%6.2f %3D"
to
                "%6 %5d %4s%c%3s %c%4d%c   %8f%8f%8f%6f%8f %3d",
                "ATOM  %5d %-4s%C%-3s %C%4d%C   %8.3f%8.3f%8.3f%6.2f% 8.4 f %3D"
Then
make -f Makefile.tleap