Interfacial MLFF train and usage in LAMMPS

Queries about input and output files, running specific calculations, etc.


Moderators: Global Moderator, Moderator

suojiang_zhang1
Jr. Member
Posts: 53
Joined: Tue Nov 19, 2019 4:15 am

Interfacial MLFF train and usage in LAMMPS

#1 Post by suojiang_zhang1 » Wed Feb 19, 2025 2:00 am

Dear all,
I am working on an electrochemical study, in which the liquid-solid interface and the electric double layer (EDL) are central to understanding the process.
Classical force fields can treat the interface at the nano- or micro-scale, but some important interactions, for example electron transfer, are not included, so the calculation is not very accurate. I would therefore like to train an MLFF for the interface and then use LAMMPS to speed up extended, nano-scale simulations.
1. How should I train the interface? Do you have any instructions on the details? Or should I first train the solid, then the liquid, and finally the interface?
2. During training, should I constrain the solid atoms or relax them? How does this choice influence the result?
3. Should I train only in the NVT ensemble with ISIF = 2, or also in NpT with ISIF = 3? How should the cell boundary be treated?
Yours.


suojiang_zhang1
Jr. Member
Posts: 53
Joined: Tue Nov 19, 2019 4:15 am

Re: Interfacial MLFF train and usage in LAMMPS

#2 Post by suojiang_zhang1 » Thu Feb 20, 2025 2:15 am

Hi,
following up on my post above:
I have now trained a preliminary MLFF for the interfacial Au-IL system. The accuracy of the MLFF is of course not very high yet, but the tests look OK.
The INCAR looks like:

Code: Select all

SYSTEM = EMIMPF6_Au
NCORE=8
### Electronic structure part
ENCUT=450
GGA = RP
IVDW = 11
ALGO = F
LASPH = .T.
ISMEAR = 0
SIGMA = 0.5
ISPIN = 1
ISYM   = 0
LREAL = Auto
### MD part
IBRION = 0
MDALGO = 3
LANGEVIN_GAMMA = 10.0 10.0 10.0 10.0 10.0 10.0
#LANGEVIN_GAMMA_L = 10.0
NSW = 10000
POTIM = 1
ISIF = 2
TEBEG = 200
TEEND = 400
#PSTRESS = 0.001
PMASS=100
RANDOM_SEED =         486686595                0                0
### Output
LWAVE = .FALSE.
LCHARG = .FALSE.
#NBLOCK = 10
#KBLOCK = 10
################################
### MACHINE LEARNING         ###
################################
ML_LMLFF = .T.
ML_MODE=train
ML_DESC_TYPE = 1

Then I copied the ML_FFN file to ML_FF and generated a data.lammps file from the CONTCAR.
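For illustration, the CONTCAR → data.lammps mapping can be sketched in plain Python as below. This is only a minimal sketch assuming direct (fractional) coordinates and an orthogonal cell; the embedded toy CONTCAR is hypothetical, and in practice a tool such as ASE or atomsk handles this conversion far more robustly (triclinic cells, selective dynamics, etc.).

```python
# Minimal sketch: convert a VASP CONTCAR (direct coordinates, orthogonal
# cell) into a LAMMPS "atomic"-style data file. Toy input for illustration.

CONTCAR = """EMIM-PF6 on Au (toy example)
1.0
  10.0 0.0 0.0
  0.0 10.0 0.0
  0.0 0.0 30.0
H C
2 1
Direct
  0.10 0.10 0.10
  0.20 0.20 0.20
  0.30 0.30 0.30
"""

def contcar_to_lammps_data(text):
    lines = text.strip().splitlines()
    scale = float(lines[1])
    a = [float(x) * scale for x in lines[2].split()]
    b = [float(x) * scale for x in lines[3].split()]
    c = [float(x) * scale for x in lines[4].split()]
    counts = [int(x) for x in lines[6].split()]
    natoms = sum(counts)
    # LAMMPS atom type per atom, in POSCAR species order (1-based)
    types = [t + 1 for t, n in enumerate(counts) for _ in range(n)]
    coords = [[float(x) for x in ln.split()[:3]] for ln in lines[8:8 + natoms]]
    out = ["# generated from CONTCAR", "",
           f"{natoms} atoms", f"{len(counts)} atom types", "",
           f"0.0 {a[0]} xlo xhi", f"0.0 {b[1]} ylo yhi", f"0.0 {c[2]} zlo zhi",
           "", "Atoms # atomic", ""]
    for i, (t, (fx, fy, fz)) in enumerate(zip(types, coords), start=1):
        # fractional -> Cartesian, valid only for an orthogonal cell
        out.append(f"{i} {t} {fx * a[0]:.6f} {fy * b[1]:.6f} {fz * c[2]:.6f}")
    return "\n".join(out)

print(contcar_to_lammps_data(CONTCAR))
```

Note that the atom-type order produced here must match the element order given later in pair_coeff.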

The lammps.in looks like:

Code: Select all

variable        NSTEPS          equal 100000
variable        THERMO_FREQ     equal 100
variable        DUMP_FREQ       equal 1000
variable        TEMP            equal 300.000000
variable        PRES            equal 1.000000
variable        TAU_T           equal 0.100000
variable        TAU_P           equal 1.000000
variable        my_restart      equal 0  

units           metal
boundary        p p f
atom_style      atomic

neighbor        1.0 bin

box             tilt large
read_data       data.lammps
change_box      all triclinic
replicate       1 1 1

mass            1 1.000000
mass            2 12.000000
mass            3 14.000000
mass            4 19.000000
mass            5 31.000000
mass            6 197.00000

pair_style      vasp
pair_coeff      * * ML_FF H C N F P Au

thermo_style    custom step temp pe ke etotal press density vol lx ly lz
thermo          ${THERMO_FREQ}
dump            1 all xyz ${DUMP_FREQ} traj_3.xyz
dump_modify     1  sort id element H C N F P Au

min_style cg
minimize 1e-25 1e-25 1000 1000

if "${my_restart} == 0" then "velocity        all create ${TEMP} 91162"
fix             1 all nvt temp ${TEMP} ${TEMP} ${TAU_T}
timestep        0.0005000
run             ${NSTEPS}
write_restart   re.0001
write_data      data.001

But when I ran LAMMPS, I got the errors shown below, even though the same parameters run fine in the bulk phase.

Code: Select all

Setting up cg style minimization ...
  Unit style    : metal
  Current step  : 0

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   RANK 0 PID 3513773 RUNNING AT node8
=   KILLED BY SIGNAL: 9 (Killed)
===================================================================================
[node8:3513898:0:3513898] Caught signal 11 (Segmentation fault: address not mapped to object at address 0x8)
[node8:3513865:0:3513865] Caught signal 11 (Segmentation fault: address not mapped to object at address 0x8)
[... the same segmentation-fault message is repeated for the remaining MPI ranks]

andreas.singraber
Global Moderator
Posts: 265
Joined: Mon Apr 26, 2021 7:40 am

Re: Interfacial MLFF train and usage in LAMMPS

#3 Post by andreas.singraber » Thu Feb 20, 2025 9:09 am

Hello!

Although it is in principle possible to train a machine-learned force field for interfaces I doubt this is the correct approach in your case. As far as I know the application of MLFFs in electrochemistry and in particular for liquid-solid interfaces and electric double layers is an active field of research. The application of short-ranged MLFFs (such as the one currently implemented in VASP) may not capture the long-range interactions certainly present in such systems. I would suggest a careful analysis of the existing literature to determine whether your approach is correct.

Regarding your general training questions: Yes, I would suggest starting with the solid phase alone, then continuing with the liquid, and finally training the interface. We generally recommend using heating runs and, if possible, the NpT ensemble, as this improves the force field's robustness. However, there is much more to consider; please carefully read the tips and tricks on our best practices page:

https://www.vasp.at/wiki/index.php/Best ... rce_fields
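As a rough sketch (not an official template), the heating/NpT recommendation could translate into INCAR tags along these lines for a continued on-the-fly training run; the specific values are placeholders to adjust for your system:

```
### sketch: continue on-the-fly training with an NpT heating run
### (ML_MODE = train picks up an existing ML_AB if present)
ML_LMLFF = .T.
ML_MODE  = train
IBRION   = 0
MDALGO   = 3
ISIF     = 3            # cell shape and volume may change (NpT)
LANGEVIN_GAMMA   = 10.0 10.0 10.0 10.0 10.0 10.0
LANGEVIN_GAMMA_L = 10.0 # lattice friction, required for ISIF = 3
PMASS    = 100
TEBEG    = 300
TEEND    = 500          # heating run
```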

I would not recommend fixing any atom positions during training, because the (then always identical) forces on the fixed atoms would be overrepresented in the training data.

Regarding the LAMMPS error: I never tried a minimizer in LAMMPS before but gave it a try with your example script and one of my test systems. Unfortunately, I could not reproduce the error you received. The message hints at some fundamental issue (0x8 is not an address in memory where usually some user data resides, maybe some object was not properly created). To investigate further I would need all relevant input and output files!

All the best,
Andreas Singraber


suojiang_zhang1
Jr. Member
Posts: 53
Joined: Tue Nov 19, 2019 4:15 am

Re: Interfacial MLFF train and usage in LAMMPS

#4 Post by suojiang_zhang1 » Fri Feb 21, 2025 3:07 am

Thank you for your rapid reply.
I understand your suggestion; this seems closely related to the computational platform.
My supercomputer node provides 128 cores with MPI parallelism (LAMMPS was compiled via make mpi) and is partitioned by Slurm.
I found that LAMMPS runs with 1 or 2 cores, and even with 64 cores when I set a processors 8 8 1 grid in the input file, but when I set processors 8 8 2 to use all 128 cores, the run stops with the error I mentioned in my last post.
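For reference, the two decomposition settings described above look like this in the input file (the second one crashes; note that with boundary p p f the 8 8 2 grid splits the non-periodic z direction across ranks, which may be worth avoiding as a test):

```
# runs on 64 cores
processors 8 8 1

# crashes on 128 cores
processors 8 8 2
```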

I would appreciate any suggestions.
Yours.

