I'm looking for help with a technical question. I am running a series of three calculations, aiming for increasingly accurate energies, and consistently get an error on the first iteration of the third calculation. The calculation fails with the error "mpiexec noticed that process rank 35 with PID 219062 on node rs1933 exited on signal 11 (Segmentation fault)."
I am doing these calculations on thirty different metal-organic frameworks (MOFs), combining fifteen different metals with two different framework structures. Interestingly, one MOF structure (containing 54 atoms) succeeds with every metal, while the other (containing 156 atoms) fails on the third calculation with every metal; i.e., the failure does not depend on the metal at all and depends entirely on the MOF structure. I am using a gamma-centered 2x2x2 k-point mesh. When I used a 1x1x1 mesh, the calculations were successful for both MOF structures.
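For reference, a gamma-centered 2x2x2 KPOINTS file of the kind described above looks like the following (a minimal sketch; the first line is an arbitrary comment, "0" requests automatic mesh generation, and changing "2 2 2" to "1 1 1" reproduces the coarser mesh that worked for both structures):
-------------------------------------------------
Code:
Automatic mesh
0
Gamma
2 2 2
0 0 0
-------------------------------------------------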
The INCAR of the third calculation (the one that fails for the larger MOF structure but succeeds for the smaller one) is shown below:
-------------------------------------------------
Code:
SYSTEM = MgHKUST1pr, FO
ISTART = 1
ICHARG = 1
ISIF = 2
PREC = ACCURATE
ENCUT = 500.000
LVDW=.TRUE.
ISMEAR=1
SIGMA=0.2
MAXMIX=40
EDIFFG=-0.03
LREAL=.FALSE.
ISPIN=2
NELM=80
NSW=1000
IBRION=2
POTIM=0.5
NCORE=8
LPLANE=.TRUE.
LSCALU=.FALSE.
NSIM=4
The INCAR for the second calculation differs from the third only in the LREAL setting: the second calculation uses LREAL = AUTO instead of LREAL = .FALSE. The INCAR for the first calculation differs from the second in PREC (NORMAL instead of ACCURATE), ISTART (0), and ICHARG (0). Both the first and second calculations succeed for every system I am studying.
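To summarize, the three steps differ only in the following settings (everything else in the INCAR is identical across all three):
-------------------------------------------------
Code:
Calc 1: PREC = NORMAL     ISTART = 0   ICHARG = 0   LREAL = AUTO      (succeeds)
Calc 2: PREC = ACCURATE   ISTART = 1   ICHARG = 1   LREAL = AUTO      (succeeds)
Calc 3: PREC = ACCURATE   ISTART = 1   ICHARG = 1   LREAL = .FALSE.   (fails for the 156-atom MOF)
-------------------------------------------------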
The OUTCAR for the third calculation shows that iteration 1(1) begins, but only the POTLOK and SETDIJ subroutines complete; EDDAV never starts, and the file ends after SETDIJ. The first and last few lines of the OUTCAR are shown below:
-------------------------------------------------
Code:
vasp.5.2.12 11Nov11
complex
executed on LinuxIFC date 2014.06.29 20:26:34
running on 80 nodes
distr: one band on 1 nodes, 80 groups
--------------------------------------------------------------------------------------------------------
INCAR:
POTCAR: PAW_PBE Mg 05Jan2001
POTCAR: PAW_PBE C 08Apr2002
POTCAR: PAW_PBE H 15Jun2001
POTCAR: PAW_PBE O 08Apr2002
POTCAR: PAW_PBE N 08Apr2002
.
. (omitted for brevity)
.
.
 k-point   1 :   0.0000 0.0000 0.0000  plane waves:  119485
 k-point   2 :   0.5000 0.0000 0.0000  plane waves:  119366
 k-point   3 :   0.0000 0.5000 0.0000  plane waves:  119366
 k-point   4 :   0.5000 0.5000 0.0000  plane waves:  119460
 k-point   5 :   0.0000 0.0000 0.5000  plane waves:  119366
 k-point   6 :   0.5000 0.0000 0.5000  plane waves:  119460
 k-point   7 :   0.0000 0.5000 0.5000  plane waves:  119460
 k-point   8 :   0.5000 0.5000 0.5000  plane waves:  119366
 maximum and minimum number of plane-waves per node :    119485   119366
 maximum number of plane-waves:    119485
 maximum index in each direction:
   IXMAX=   34   IYMAX=   34   IZMAX=   34
   IXMIN=  -34   IYMIN=  -34   IZMIN=  -34
 NGX is ok and might be reduce to 138
 NGY is ok and might be reduce to 138
 NGZ is ok and might be reduce to 138
 serial   3D FFT for wavefunctions
 parallel 3D FFT for charge:
    minimum data exchange during FFTs selected (reduces bandwidth)
 total amount of memory used by VASP on root node  1098851. kBytes
========================================================================
   base      :      30000. kBytes
   nonl-proj :     608117. kBytes
   fftplans  :      21381. kBytes
   grid      :     252958. kBytes
   one-center:        970. kBytes
   wavefun   :     185425. kBytes

 Broyden mixing: mesh for mixing (old mesh)
   NGX = 69   NGY = 69   NGZ = 69
  (NGX  =280   NGY  =280   NGZ  =280)
  gives a total of 328509 points
 initial charge density was supplied:
 number of electron     633.9999997 magnetization       0.1233006
 keeping initial charge density in first step
--------------------------------------------------------------------------------------------------------
 Maximum index for augmentation-charges          924 (set IRDMAX)
--------------------------------------------------------------------------------------------------------
 First call to EWALD:  gamma=   0.106
 Maximum number of real-space cells 3x 3x 3
 Maximum number of reciprocal cells 3x 3x 3
    FEWALD:  cpu time    0.25: real time    0.25
----------------------------------------- Iteration    1(   1)  ---------------------------------------
    POTLOK:  cpu time    1.77: real time    1.79
    SETDIJ:  cpu time    0.06: real time    0.06
-------------------------------------------------
The VASP stdout file shows the initial setup lines, 'entering main loop', and the column headings, but no electronic or ionic relaxation output ever appears. The entire output file is shown below:
-------------------------------------------------
Code:
 running on   80 nodes
 distr:  one band on    1 nodes,   80 groups
 vasp.5.2.12 11Nov11 complex

 POSCAR found type information on POSCAR  Mg C  H  O  N
 POSCAR found :  5 types and     158 ions
 scaLAPACK will be used
 -----------------------------------------------------------------------------
|                                                                             |
|           W    W    AA    RRRRR   N    N  II  N    N   GGGG   !!!           |
|           W    W   A  A   R    R  NN   N  II  NN   N  G    G  !!!           |
|           W    W  A    A  R    R  N N  N  II  N N  N  G       !!!           |
|           W WW W  AAAAAA  RRRRR   N  N N  II  N  N N  G  GGG   !            |
|           WW  WW  A    A  R   R   N   NN  II  N   NN  G    G                |
|           W    W  A    A  R    R  N    N  II  N    N   GGGG   !!!           |
|                                                                             |
|      For optimal performance we recommend that you set                      |
|        NPAR = approx SQRT( number of cores)                                 |
|      This will greatly improve the performance of VASP for DFT.             |
|      The default NPAR=number of cores might be grossly inefficient          |
|      on modern multi-core architectures or massively parallel machines.     |
|      Unfortunately you need to use the default for hybrid, GW and RPA       |
|      calculations.                                                          |
|                                                                             |
 -----------------------------------------------------------------------------
 -----------------------------------------------------------------------------
|                                                                             |
|  ADVICE TO THIS USER RUNNING 'VASP/VAMP'   (HEAR YOUR MASTER'S VOICE ...):  |
|                                                                             |
|      You have a (more or less) 'large supercell' and for larger cells       |
|      it might be more efficient to use real space projection opertators     |
|      So try LREAL= Auto  in the INCAR   file.                               |
|      Mind: At the moment your POTCAR file does not contain real space       |
|       projectors, and has to be modified,  BUT if you                       |
|      want to do an extremely  accurate calculation you might also keep the  |
|      reciprocal projection scheme          (i.e. LREAL=.FALSE.)             |
|                                                                             |
 -----------------------------------------------------------------------------
 LDA part: xc-table for Pade appr. of Perdew
 found WAVECAR, reading the header
 POSCAR, INCAR and KPOINTS ok, starting setup
 FFT: planning ...(           1 )
 reading WAVECAR
 the WAVECAR file was read sucessfully
 charge-density read from file: MgHKUST1pr
 magnetization density read from file 1
 entering main loop
       N       E                     dE             d eps       ncg     rms          rms(c)
-------------------------------------------------
I have been running on 8 or 10 nodes, with 8 processors per node, on an institutional cluster.
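In case the launch configuration matters, the jobs are started with a standard mpiexec invocation along these lines (a sketch only; the executable path is a placeholder, and the rank count matches the 80-rank header in the output above):
-------------------------------------------------
Code:
mpiexec -np 80 /path/to/vasp
-------------------------------------------------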
Any help would be greatly appreciated!
Thank you.
Marie