Error EDDDAV: Call to ZHEGV failed on Ivy Bridge Xeon

I just purchased several Dell T620 machines, each node with two twelve-core Xeon E5-2695 processors. I am using CentOS 6.5 to support the InfiniBand links between the nodes. I have compiled VASP with ifort/impi/mkl both on these machines and on a 12-core Mac Pro (2012) with Xeon X5650 CPUs.
On the 12-core Mac Pro (under openSUSE 12.3), VASP runs without errors, but the same job fails on the new T620 machines with "Error EDDDAV: Call to ZHEGV failed. Returncode = 9 2 16". I am using the latest ifort on both machines (ifort (IFORT) 14.0.1 20131008), and this has me thrown for a loop. I compiled the code on both machines using the same Makefile, and both builds link successfully. On the Dell T620, VASP works on many input files (it does not always fail), but in my early testing it stops with the EDDDAV error relatively often.
Is this a known problem on Ivy Bridge machines? Has anyone had a similar set of problems and found a solution? It is very strange that the same Makefile works with the same compiler and libraries on one machine but not on a newer one.
Does anyone have any advice?
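For context, ZHEGV is LAPACK's generalized Hermitian eigensolver: a positive return code means either that the eigensolver failed to converge (INFO <= N) or that the overlap matrix is not positive definite (INFO = N + i). As a first sanity check of the libraries themselves, here is a minimal standalone test of MKL's ZHEGV that I can run on both machines; this is my own sketch (zhegv_check.f90 is not part of VASP) and assumes MKL is linked via -mkl:

! zhegv_check.f90 -- standalone sanity test of LAPACK/MKL ZHEGV (my own sketch)
! build: ifort zhegv_check.f90 -mkl  (or mpiifort with the same MKL link line as VASP)
program zhegv_check
  implicit none
  integer, parameter :: n = 2
  complex(8) :: a(n,n), b(n,n), work(4*n)
  real(8)    :: w(n), rwork(3*n-2)
  integer    :: info
  ! Hermitian A = [[2,-i],[i,3]] and positive-definite B = identity
  a = reshape([(2.d0,0.d0),(0.d0,1.d0),(0.d0,-1.d0),(3.d0,0.d0)],[n,n])
  b = reshape([(1.d0,0.d0),(0.d0,0.d0),(0.d0,0.d0),(1.d0,0.d0)],[n,n])
  ! solve A x = lambda B x; a nonzero info is the same failure mode VASP reports
  call zhegv(1, 'V', 'U', n, a, n, b, n, w, work, size(work), rwork, info)
  print *, 'info =', info, '  eigenvalues =', w
end program zhegv_check

The expected eigenvalues are (5 +/- sqrt(5))/2, i.e. about 1.382 and 3.618, with info = 0. If this small test already returns a nonzero info on the T620 but not on the Mac Pro, the problem lies in the MKL/compiler installation rather than in VASP itself.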
The output is as follows:
POSCAR, INCAR and KPOINTS ok, starting setup
FFT: planning ...
reading WAVECAR
random initialization beyond band 349
the WAVECAR file was read sucessfully
initial charge from wavefunction
entering main loop
N E dE d eps ncg rms rms(c)
Error EDDDAV: Call to ZHEGV failed. Returncode = 25 2 48
INCAR File
SIGMA = 0.100000
PREC = High
ISMEAR = 0
ISTART = 1
NELMIN = 10 # do a minimum of 10 electronic steps
EDIFF = 1E-5 # high accuracy for electronic ground state
EDIFFG = -0.01 # small tolerance for ions
NSW = 20 # 20 ion steps
MAXMIX = 80 # keep dielectric function fixed between ionic movements
IBRION = 1 # use RMM-DIIS algorithm for ionic movements
POTIM = 0.8 # scaling factor for minimization routine
ISIF = 4
QCUT = -1
LNONCOLLINEAR = .TRUE.
LCHARG = .TRUE.
IBRION = 2 # cg (note: this duplicates the IBRION tag set above)
LSORBIT = .TRUE.
LREAL = .FALSE.
POSCAR File
Ge Te
1.00000000000000
9.2300410661963603 -0.0000400546833307 0.0000734741114623
-4.6150552055355929 7.9935338386993635 0.0000169667530870
0.0001669465229778 0.0001467280811994 19.1797200965724244
Ge Te
24 24
Direct
-0.0001124291688844 -0.0000017174658231 0.2234582270284568
0.3333309041811137 0.1666649491841791 0.3901248936784587
0.1666642374811177 0.3333316158841749 0.0567915603784546
-0.0000024291688844 -0.0000017174658231 0.7234582270284563
0.3333309041811137 0.1666649491841791 0.8901248936784584
0.1666642374811177 0.3333316158841749 0.5567915603784542
-0.0000024291688844 0.4999982825341768 0.2234582270284568
0.3333309041811137 0.6666649491841785 0.3901248936784587
0.1666642374811177 0.8333316158841744 0.0567915603784546
-0.0000024291688844 0.4999982825341768 0.7234582270284563
0.3333309041811137 0.6666649491841785 0.8901248936784584
0.1666642374811177 0.8333316158841744 0.5567915603784542
0.4999975708311157 -0.0000017174658231 0.2234582270284569
0.8333309041811130 0.1666649491841791 0.3901248936784587
0.6666642374811171 0.3333316158841749 0.0567915603784546
0.4999975708311157 -0.0000017174658231 0.7234582270284563
0.8333309041811130 0.1666649491841791 0.8901248936784584
0.6666642374811171 0.3333316158841749 0.5567915603784542
0.4999975708311157 0.4999982825341768 0.2234582270284568
0.8333309041811130 0.6666649491841785 0.3901248936784587
0.6666642374811171 0.8333316158841744 0.0567915603784546
0.4999975708311157 0.4999982825341768 0.7234582270284563
0.8333309041811130 0.6666649491841785 0.8901248936784584
0.6666642374811171 0.8333316158841744 0.5567915603784542
0.0000024291688844 0.0000017174658231 0.0141417729715438
0.3333357625188821 0.1666683841158250 0.1808084396215457
0.1666690958188864 0.3333350508158209 0.3474751063215418
0.0000024291688844 0.0000017174658231 0.5141417729715442
0.3333357625188821 0.1666683841158250 0.6808084396215462
0.1666690958188864 0.3333350508158209 0.8474751063215421
0.0000024291688844 0.5000017174658236 0.0141417729715438
0.3333357625188821 0.6666683841158256 0.1808084396215457
0.1666690958188864 0.8333350508158215 0.3474751063215418
0.0000024291688844 0.5000017174658236 0.5141417729715442
0.3333357625188821 0.6666683841158256 0.6808084396215462
0.1666690958188864 0.8333350508158215 0.8474751063215421
0.5000024291688838 0.0000017174658231 0.0141417729715438
0.8333357625188829 0.1666683841158250 0.1808084396215457
0.6666690958188870 0.3333350508158209 0.3474751063215418
0.5000024291688838 0.0000017174658231 0.5141417729715442
0.8333357625188829 0.1666683841158250 0.6808084396215462
0.6666690958188870 0.3333350508158209 0.8474751063215421
0.5000024291688838 0.5000017174658236 0.0141417729715438
0.8333357625188829 0.6666683841158256 0.1808084396215457
0.6666690958188870 0.8333350508158215 0.3474751063215418
0.5000024291688838 0.5000017174658236 0.5141417729715442
0.8333357625188829 0.6666683841158256 0.6808084396215462
0.6666690958188870 0.8333350508158215 0.8474751063215421
My Makefile
.SUFFIXES: .inc .f .f90 .F
#-----------------------------------------------------------------------
# Makefile for Intel Fortran compiler for Pentium/Athlon/Opteron
# based systems
# we recommend this makefile for both Intel as well as AMD systems
# for AMD based systems appropriate BLAS (libgoto) and fftw libraries are
# however mandatory (whereas they are optional for Intel platforms)
# For Athlon we recommend
# ) to link against libgoto (and mkl as a backup for missing routines)
# ) odd enough link in libfftw3xf_intel.a (fftw interface for mkl)
# feedback is greatly appreciated
#
# The makefile was tested only under Linux on Intel and AMD platforms
# the following compiler versions have been tested:
# - ifc.7.1 works: somewhat slow, but reliable
# - ifc.8.1 fails to compile the code properly
# - ifc.9.1 recommended (both for 32 and 64 bit)
# - ifc.10.1 partially recommended (both for 32 and 64 bit)
# tested build 20080312 Package ID: l_fc_p_10.1.015
# the gamma only mpi version cannot be compiled
# using ifc.10.1
# - ifc.11.1 partially recommended (some problems with Gamma only and intel fftw)
# Build 20090630 Package ID: l_cprof_p_11.1.046
# - ifort.12.1 strongly recommended (we use this to compile vasp)
# Version 12.1.5.339 Build 20120612
#
# it might be required to change some of the library paths, since
# LINUX installations vary a lot
#
# Hence check ***ALL*** options in this makefile very carefully
#-----------------------------------------------------------------------
#
# BLAS must be installed on the machine
# there are several options:
# 1) very slow but works:
# retrieve the LAPACK package from ftp.netlib.org
# and compile the blas routines (BLAS/SRC directory)
# please use g77 or f77 for the compilation. When I tried to
# use pgf77 or pgf90 for BLAS, VASP hung when calling
# ZHEEV (however this was with lapack 1.1; now I use lapack 2.0)
# 2) more desirable: get an optimized BLAS
#
# the two most reliable packages around are presently:
# 2a) Intel's own optimized BLAS (PIII, P4, PD, PC2, Itanium)
# http://developer.intel.com/software/products/mkl/
# this is really excellent, if you use Intel CPU's
#
# 2b) probably fastest SSE2 (4 GFlops on P4, 2.53 GHz, 16 GFlops PD,
# around 30 GFlops on Quad core)
# Kazushige Goto's BLAS
# http://www.cs.utexas.edu/users/kgoto/signup_first.html
# http://www.tacc.utexas.edu/resources/software/
#
#-----------------------------------------------------------------------
# all CPP processed fortran files have the extension .f90
SUFFIX=.f90
#-----------------------------------------------------------------------
# fortran compiler and linker
#-----------------------------------------------------------------------
FC=ifort -I$(MKLROOT)/include/fftw
# fortran linker
FCL=$(FC)
#-----------------------------------------------------------------------
# whereis CPP ?? (I need CPP, can't use gcc with proper options)
# that's the location of gcc for SUSE 5.3
#
# CPP_ = /usr/lib/gcc-lib/i486-linux/2.7.2/cpp -P -C
#
# that's probably the right line for some Red Hat distribution:
#
# CPP_ = /usr/lib/gcc-lib/i386-redhat-linux/2.7.2.3/cpp -P -C
#
# SUSE X.X, maybe some Red Hat distributions:
CPP_ = ./preprocess <$*.F | /usr/bin/cpp -P -C -traditional >$*$(SUFFIX)
# this release should be fpp clean
# we now recommend fpp as preprocessor
# if this fails go back to cpp
CPP_=fpp -f_com=no -free -w0 $*.F $*$(SUFFIX)
#-----------------------------------------------------------------------
# possible options for CPP:
# NGXhalf charge density reduced in X direction
# wNGXhalf gamma point only reduced in X direction
# avoidalloc avoid ALLOCATE if possible
# PGF90 work around some PGF90 / IFC bugs
# CACHE_SIZE 1000 for PII,PIII, 5000 for Athlon, 8000-12000 P4, PD
# RPROMU_DGEMV use DGEMV instead of DGEMM in RPRO (depends on used BLAS)
# RACCMU_DGEMV use DGEMV instead of DGEMM in RACC (depends on used BLAS)
# tbdyn MD package of Tomas Bucko
#-----------------------------------------------------------------------
#CPP = $(CPP_) -DHOST=\"LinuxIFC\" \
# -DCACHE_SIZE=12000 -DPGF90 -Davoidalloc -DNGXhalf \
# -DRPROMU_DGEMV -DRACCMU_DGEMV
#-----------------------------------------------------------------------
# general fortran flags (there must be a trailing blank on this line)
# byterecl is strictly required for ifc, since otherwise
# the WAVECAR file becomes huge
#-----------------------------------------------------------------------
FFLAGS = -FR -names lowercase -assume byterecl -O2 -I$(MKLROOT)/include/intel64/lp64 -I$(MKLROOT)/include
DEBUG = -FR -O0
INLINE = $(OFLAG)
#-----------------------------------------------------------------------
# the following lines specify the position of BLAS and LAPACK
# we recommend using mkl; it is simple and most likely the
# fastest option on Intel based machines
#-----------------------------------------------------------------------
# mkl path for ifc 11 compiler
#MKL_PATH=$(MKLROOT)/lib/em64t
# mkl path for ifc 12 compiler
MKL_PATH=$(MKLROOT)/lib/intel64
MKL_FFTW_PATH=$(MKLROOT)/interfaces/fftw3xf/
# BLAS
# setting -DRPROMU_DGEMV -DRACCMU_DGEMV in the CPP lines usually speeds up program execution
# BLAS= -Wl,--start-group $(MKL_PATH)/libmkl_intel_lp64.a $(MKL_PATH)/libmkl_intel_thread.a $(MKL_PATH)/libmkl_core.a -Wl,--end-group -lguide
# faster linking and available from at least version 11
#BLAS= -lguide -mkl
#BLAS = -L$(MKLROOT)/lib/intel64 $(MKLROOT)/lib/intel64/libmkl_blas95_lp64.a $(MKLROOT)/lib/intel64/libmkl_lapack95_lp64.a -lmkl_scalapack_lp64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -lmkl_blacs_intelmpi_lp64 -liomp5 -lpthread -lm
#BLAS = $(MKLROOT)/lib/intel64/libmkl_blas95_lp64.a $(MKLROOT)/lib/intel64/libmkl_lapack95_lp64.a $(MKLROOT)/lib/intel64/libmkl_scalapack_lp64.a -Wl,--start-group $(MKLROOT)/lib/intel64/libmkl_cdft_core.a $(MKLROOT)/lib/intel64/libmkl_intel_lp64.a $(MKLROOT)/lib/intel64/libmkl_sequential.a $(MKLROOT)/lib/intel64/libmkl_core.a $(MKLROOT)/lib/intel64/libmkl_blacs_intelmpi_lp64.a -Wl,--end-group -lpthread -lm
BLAS = -L$(MKLROOT)/lib/intel64 -lmkl_scalapack_lp64 -lmkl_intel_lp64 -lmkl_core -lmkl_intel_thread -lmkl_blacs_intelmpi_lp64 -lpthread -lm
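# a variant worth trying for debugging (my own sketch, not from the VASP
# distribution): link MKL's sequential layer instead of the threaded one,
# to rule out OpenMP threading inside MKL as the source of the failures
#BLAS = -L$(MKLROOT)/lib/intel64 -lmkl_scalapack_lp64 -lmkl_intel_lp64 -lmkl_core -lmkl_sequential -lmkl_blacs_intelmpi_lp64 -lpthread -lm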
# LAPACK, use vasp.5.lib/lapack_double
#LAPACK= ../vasp.5.lib/lapack_double.o
# LAPACK from mkl, usually faster and contains scaLAPACK as well
#LAPACK= $(MKL_PATH)/libmkl_intel_lp64.a
# here a tricky version, link in libgoto and use mkl as a backup
# also needs a special line for LAPACK
# this is the best thing you can do on AMD based systems !!!!!!
#BLAS = -Wl,--start-group /opt/libs/libgoto/libgoto.so $(MKL_PATH)/libmkl_intel_thread.a $(MKL_PATH)/libmkl_core.a -Wl,--end-group -liomp5
#LAPACK= /opt/libs/libgoto/libgoto.so $(MKL_PATH)/libmkl_intel_lp64.a
#-----------------------------------------------------------------------
LIB = -L../vasp.5.lib -ldmy \
../vasp.5.lib/linpack_double.o /home/matstud/VASP/utilities/wannier90-1.2/libwannier.a $(LAPACK) \
$(BLAS)
# options for linking, nothing is required (usually)
LINK = -parallel
#-----------------------------------------------------------------------
# fft libraries:
# VASP.5.2 can use fftw.3.1.X (http://www.fftw.org)
# since this version is faster on P4 machines, we recommend using it
#-----------------------------------------------------------------------
#FFT3D = fft3dfurth.o fft3dlib.o /home/paulfons/fftw3/lib/libfftw3.a
# alternatively: fftw.3.1.X is slightly faster and should be used if available
#FFT3D = fftw3d.o fft3dlib.o /opt/libs/fftw-3.1.2/lib/libfftw3.a
# you may also try to use the fftw wrapper to mkl (but the path might vary a lot)
# it seems this is best for AMD based systems
#FFT3D = fftw3d.o fft3dlib.o $(MKL_FFTW_PATH)/libfftw3xf_intel.a
#INCS = -I$(MKLROOT)/include/fftw
FFT3D = fftw3d.o fft3dlib.o $(MKL_FFTW_PATH)/libfftw3xf_intel.a
INCS = -I$(MKLROOT)/include/fftw
#=======================================================================
# MPI section, uncomment the following lines until
# general rules and compile lines
# presently we recommend OPENMPI, since it seems to offer better
# performance than lam or mpich
#
# !!! Please do not send me any queries on how to install MPI, I will
# certainly not answer them !!!!
#=======================================================================
#-----------------------------------------------------------------------
# fortran linker for mpi
#-----------------------------------------------------------------------
#FC=mpif90
FC=mpiifort
#FCL=$(FC)
#-----------------------------------------------------------------------
# additional options for CPP in parallel version (see also above):
# NGZhalf charge density reduced in Z direction
# wNGZhalf gamma point only reduced in Z direction
# scaLAPACK use scaLAPACK (recommended if mkl is available)
# avoidalloc avoid ALLOCATE if possible
# PGF90 work around some PGF90 / IFC bugs
# CACHE_SIZE 1000 for PII,PIII, 5000 for Athlon, 8000-12000 P4, PD
# RPROMU_DGEMV use DGEMV instead of DGEMM in RPRO (depends on used BLAS)
# RACCMU_DGEMV use DGEMV instead of DGEMM in RACC (depends on used BLAS)
# tbdyn MD package of Tomas Bucko
#-----------------------------------------------------------------------
#-----------------------------------------------------------------------
#CPP = $(CPP_) -DMPI -DHOST=\"LinuxIFC\" -DIFC \
# -DCACHE_SIZE=4000 -DPGF90 -Davoidalloc -DNGZhalf \
# -DMPI_BLOCK=262144 -Duse_collective -DscaLAPACK \
# -DRPROMU_DGEMV -DRACCMU_DGEMV
CPP = $(CPP_) -DMPI -DHOST=\"LinuxIFC\" -DIFC \
      -DCACHE_SIZE=4000 -DPGF90 -Davoidalloc \
      -DMPI_BLOCK=262144 -Duse_collective -DscaLAPACK \
      -DRPROMU_DGEMV -DRACCMU_DGEMV -DVASP2WANNIER90
#-----------------------------------------------------------------------
# location of SCALAPACK
# if you do not use SCALAPACK simply leave this section commented out
#-----------------------------------------------------------------------
# usually simplest link in mkl scaLAPACK
#BLACS= -lmkl_blacs_openmpi_lp64
#SCA= $(MKL_PATH)/libmkl_scalapack_lp64.a $(BLACS)
# mpiifort uses Intel MPI, so the intelmpi BLACS layer (not openmpi) is required
SCA= -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64 -lmkl_core
#-----------------------------------------------------------------------
# libraries
#-----------------------------------------------------------------------
#LIB = -L../vasp.5.lib -ldmy \
# ../vasp.5.lib/linpack_double.o \
# $(SCA) $(LAPACK) $(BLAS)
#-----------------------------------------------------------------------
# parallel FFT
#-----------------------------------------------------------------------
# FFT: fftmpi.o with fft3dlib of Juergen Furthmueller
#FFT3D = fftmpi.o fftmpi_map.o fft3dfurth.o fft3dlib.o
# alternatively: fftw.3.1.X is slightly faster and should be used if available
FFT3D = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o /opt/intel/mkl/lib/intel64/libfftw3xf_intel.a
# you may also try to use the fftw wrapper to mkl (but the path might vary a lot)
# it seems this is best for AMD based systems
#FFT3D = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o $(MKL_FFTW_PATH)/libfftw3xf_intel.a
#INCS = -I$(MKLROOT)/include/fftw
#-----------------------------------------------------------------------
# general rules and compile lines
#-----------------------------------------------------------------------
BASIC= symmetry.o symlib.o lattlib.o random.o
SOURCE= base.o mpi.o smart_allocate.o xml.o \
constant.o jacobi.o main_mpi.o scala.o \
asa.o lattice.o poscar.o ini.o mgrid.o xclib.o vdw_nl.o xclib_grad.o \
radial.o pseudo.o gridq.o ebs.o \
mkpoints.o wave.o wave_mpi.o wave_high.o spinsym.o \
$(BASIC) nonl.o nonlr.o nonl_high.o dfast.o choleski2.o \
mix.o hamil.o xcgrad.o xcspin.o potex1.o potex2.o \
constrmag.o cl_shift.o relativistic.o LDApU.o \
paw_base.o metagga.o egrad.o pawsym.o pawfock.o pawlhf.o rhfatm.o hyperfine.o paw.o \
mkpoints_full.o charge.o Lebedev-Laikov.o stockholder.o dipol.o pot.o \
dos.o elf.o tet.o tetweight.o hamil_rot.o \
chain.o dyna.o k-proj.o sphpro.o us.o core_rel.o \
aedens.o wavpre.o wavpre_noio.o broyden.o \
dynbr.o hamil_high.o rmm-diis.o reader.o writer.o tutor.o xml_writer.o \
brent.o stufak.o fileio.o opergrid.o stepver.o \
chgloc.o fast_aug.o fock_multipole.o fock.o mkpoints_change.o sym_grad.o \
mymath.o internals.o npt_dynamics.o dynconstr.o dimer_heyden.o dvvtrajectory.o vdwforcefield.o \
nmr.o pead.o subrot.o subrot_scf.o \
force.o pwlhf.o gw_model.o optreal.o steep.o davidson.o david_inner.o \
electron.o rot.o electron_all.o shm.o pardens.o paircorrection.o \
optics.o constr_cell_relax.o stm.o finite_diff.o elpol.o \
hamil_lr.o rmm-diis_lr.o subrot_cluster.o subrot_lr.o \
lr_helper.o hamil_lrf.o elinear_response.o ilinear_response.o \
linear_optics.o \
setlocalpp.o wannier.o electron_OEP.o electron_lhf.o twoelectron4o.o \
mlwf.o ratpol.o screened_2e.o wave_cacher.o chi_base.o wpot.o \
local_field.o ump2.o ump2kpar.o fcidump.o ump2no.o \
bse_te.o bse.o acfdt.o chi.o sydmat.o dmft.o \
rmm-diis_mlr.o linear_response_NMR.o wannier_interpol.o linear_response.o
vasp: $(SOURCE) $(FFT3D) $(INC) main.o
	rm -f vasp
	$(FCL) -o vasp main.o $(SOURCE) $(FFT3D) $(LIB) $(LINK)
makeparam: $(SOURCE) $(FFT3D) makeparam.o main.F $(INC)
	$(FCL) -o makeparam $(LINK) makeparam.o $(SOURCE) $(FFT3D) $(LIB)
zgemmtest: zgemmtest.o base.o random.o $(INC)
	$(FCL) -o zgemmtest $(LINK) zgemmtest.o random.o base.o $(LIB)
dgemmtest: dgemmtest.o base.o random.o $(INC)
	$(FCL) -o dgemmtest $(LINK) dgemmtest.o random.o base.o $(LIB)
ffttest: base.o smart_allocate.o mpi.o mgrid.o random.o ffttest.o $(FFT3D) $(INC)
	$(FCL) -o ffttest $(LINK) ffttest.o mpi.o mgrid.o random.o smart_allocate.o base.o $(FFT3D) $(LIB)
kpoints: $(SOURCE) $(FFT3D) makekpoints.o main.F $(INC)
	$(FCL) -o kpoints $(LINK) makekpoints.o $(SOURCE) $(FFT3D) $(LIB)
clean:
	-rm -f *.g *.f *.o *.L *.mod ; touch *.F
main.o: main$(SUFFIX)
	$(FC) $(FFLAGS)$(DEBUG) $(INCS) -c main$(SUFFIX)
xcgrad.o: xcgrad$(SUFFIX)
	$(FC) $(FFLAGS) $(INLINE) $(INCS) -c xcgrad$(SUFFIX)
xcspin.o: xcspin$(SUFFIX)
	$(FC) $(FFLAGS) $(INLINE) $(INCS) -c xcspin$(SUFFIX)
makeparam.o: makeparam$(SUFFIX)
	$(FC) $(FFLAGS)$(DEBUG) $(INCS) -c makeparam$(SUFFIX)
makeparam$(SUFFIX): makeparam.F main.F
#
# MIND: I do not have a full dependency list for the include
# and MODULES: here are only the minimal basic dependencies
# if one structure is changed then touch_dep must be called
# with the corresponding name of the structure
#
base.o: base.inc base.F
mgrid.o: mgrid.inc mgrid.F
constant.o: constant.inc constant.F
lattice.o: lattice.inc lattice.F
setex.o: setexm.inc setex.F
pseudo.o: pseudo.inc pseudo.F
mkpoints.o: mkpoints.inc mkpoints.F
wave.o: wave.F
nonl.o: nonl.inc nonl.F
nonlr.o: nonlr.inc nonlr.F
$(OBJ_HIGH):
	$(CPP)
	$(FC) $(FFLAGS) $(OFLAG_HIGH) $(INCS) -c $*$(SUFFIX)
$(OBJ_NOOPT):
	$(CPP)
	$(FC) $(FFLAGS) $(INCS) -c $*$(SUFFIX)
fft3dlib_f77.o: fft3dlib_f77.F
	$(CPP)
	$(F77) $(FFLAGS_F77) -c $*$(SUFFIX)
.F.o:
	$(CPP)
	$(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)
.F$(SUFFIX):
	$(CPP)
$(SUFFIX).o:
	$(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)
# special rules
#-----------------------------------------------------------------------
# these special rules have been tested for ifc.11 and ifc.12 only
fft3dlib.o : fft3dlib.F
	$(CPP)
	$(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
fft3dfurth.o : fft3dfurth.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
fftw3d.o : fftw3d.F
	$(CPP)
	$(FC) -FR -lowercase -O1 $(INCS) -c $*$(SUFFIX)
fftmpi.o : fftmpi.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
fftmpiw.o : fftmpiw.F
	$(CPP)
	$(FC) -FR -lowercase -O1 $(INCS) -c $*$(SUFFIX)
wave_high.o : wave_high.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
# the following rules are probably no longer required (-O3 seems to work)
wave.o : wave.F
	$(CPP)
	$(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
paw.o : paw.F
	$(CPP)
	$(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
cl_shift.o : cl_shift.F
	$(CPP)
	$(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
us.o : us.F
	$(CPP)
	$(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
LDApU.o : LDApU.F
	$(CPP)
	$(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
A quick update. Before posting this, I had recompiled all sources with the optimization level set to -O2 and found the same problems. After posting, I recompiled all sources at -O1 and found, to my surprise, that the SCF loop worked. Now I would like to lower the optimization of just the files that require it. Does anyone have suggestions as to which these might be (perhaps davidson.F)? Thanks in advance for any advice.
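In the meantime, a per-file override looks like the cleanest way to test this. Here is a sketch, modeled on the special rules already at the bottom of my Makefile; that davidson.F is the culprit is only my guess, not confirmed:

davidson.o : davidson.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)

If that alone cures the EDDDAV failures, the miscompiled code is in the Davidson block; otherwise the same pattern can be applied file by file (for example to subrot.F or choleski2.F) to bisect the problem.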