Error with non-self-consistent calculations on Vasp 5.3.3

Posted: Mon Feb 25, 2013 8:21 pm
by duncand
I was using Vasp 5.2.2 on the Ranger supercomputer and never had issues with non-self-consistent calculations. However, since moving over to the newer Stampede system, which uses Vasp 5.3.3, the same input files no longer work for me. In particular, after NELM electronic iterations are performed, the program terminates unexpectedly before writing any output files.

For what it's worth, the end of the logfile looks something like this:

RMM: 50 -0.495020007092E+03 0.68650E-01 -0.10744E-01 223659 0.130E-01
TACC: MPI job exited with code: 1
TACC: Shutdown complete. Exiting.

Sometimes I have also gotten messages like this in the logfile. It does not always happen, and I am not sure whether it is correlated with my inability to do non-SC calculations:

WARNING in EDDRMM: call to ZHEGV failed, returncode = 8 4 **


A non-self-consistent calculation is done with new k-points after a self-consistent calculation to generate either a DOS or band structure. The program fails in the same manner regardless of what k-points I specify. Even reusing the same points that were used for the original self-consistent calculation results in the program terminating unexpectedly.
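
For context, the non-self-consistent step uses an INCAR along these lines. This is a minimal sketch rather than my exact input, and the tag values shown are just typical choices:

ICHARG = 11    ! read CHGCAR from the self-consistent run and keep the charge density fixed
ISTART = 1     ! read the existing WAVECAR if present
NELM   = 60    ! maximum number of electronic iterations
LORBIT = 11    ! write the projected DOS (for a band structure run, only the KPOINTS file changes)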

Can you give any guidance as to what I can try in order to fix this problem? If desired, I can provide my INCAR files as well.

Thanks in advance.

Dan

Error with non-self-consistent calculations on Vasp 5.3.3

Posted: Tue Feb 26, 2013 12:13 pm
by juhL
Concerning the ZHEGV error:

It often helps to switch IBRION to 1 if you are using 2. It might also help to switch the algorithm: if you are using RMM-DIIS, try Davidson.
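
For example, something along these lines in the INCAR (just a sketch, adjust to your own setup):

IBRION = 1       ! quasi-Newton ionic relaxation instead of conjugate gradient (IBRION = 2)
ALGO   = Normal  ! blocked Davidson instead of the RMM-DIIS-based schemes (ALGO = Fast / VeryFast)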

I cannot comment on the other messages, although I'm not quite sure those are really errors. Did you check what "code: 1" means?

Error with non-self-consistent calculations on Vasp 5.3.3

Posted: Tue Mar 12, 2013 12:11 am
by duncand
An update, in case others are having trouble with non-self-consistent calculations:

The crashes happened because Vasp ran out of memory. This was surprising, since my supercell has only ~150 atoms with a 2x2x2 k-mesh, and Stampede has 32 GB of memory per node. The issue was resolved by spreading the job over more nodes: instead of the original 32 cores on 2 nodes, the calculation only ran successfully when the 32 cores were spread across 8 nodes.
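
For anyone on Stampede hitting the same thing, the job request that worked for me looked roughly like this. It is only a sketch: the partition name and wall time are placeholders, and the VASP executable name will differ depending on how it is installed for your account:

#!/bin/bash
#SBATCH -J vasp-nonsc    # job name
#SBATCH -N 8             # 8 nodes, so the memory per task is spread out
#SBATCH -n 32            # still only 32 MPI tasks in total (4 per node)
#SBATCH -p normal        # queue/partition (placeholder)
#SBATCH -t 04:00:00      # wall time (placeholder)

ibrun vasp               # TACC's MPI launcher; adjust the executable name to your installation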

However, this doesn't explain why Vasp is requiring so much memory all of a sudden. If anyone knows of something that changed between 5.2.2 and 5.3.3 that might be contributing to memory problems, please let me know.