deepmd.tf.descriptor.se_atten_v2

Module Contents

Classes

DescrptSeAttenV2

Smooth version 2.0 descriptor with attention.

Attributes

log

deepmd.tf.descriptor.se_atten_v2.log
class deepmd.tf.descriptor.se_atten_v2.DescrptSeAttenV2(rcut: float, rcut_smth: float, sel: int, ntypes: int, neuron: List[int] = [24, 48, 96], axis_neuron: int = 8, resnet_dt: bool = False, trainable: bool = True, seed: int | None = None, type_one_side: bool = True, set_davg_zero: bool = False, exclude_types: List[List[int]] = [], activation_function: str = 'tanh', precision: str = 'default', uniform_seed: bool = False, attn: int = 128, attn_layer: int = 2, attn_dotr: bool = True, attn_mask: bool = False, multi_task: bool = False, **kwargs)

Bases: deepmd.tf.descriptor.se_atten.DescrptSeAtten

Smooth version 2.0 descriptor with attention.

Parameters:
rcut

The cut-off radius \(r_c\)

rcut_smth

The radius \(r_s\) from which the environment matrix is smoothed

sel : int

The maximum number of neighbor atoms allowed within the cut-off radius

neuron : List[int]

Number of neurons in each hidden layer of the embedding net \(\mathcal{N}\)

axis_neuron

Number of the axis neuron \(M_2\) (number of columns of the sub-matrix of the embedding matrix)

resnet_dt

Time-step \(dt\) in the ResNet construction: \(y = x + dt \cdot \phi(Wx + b)\)

trainable

If the weights of the embedding net are trainable.

seed

Random seed for initializing the network parameters.

type_one_side

If True, build N_types embedding nets; otherwise, build N_types^2 embedding nets.

exclude_types : List[List[int]]

The excluded pairs of types which have no interaction with each other. For example, [[0, 1]] means no interaction between type 0 and type 1.

set_davg_zero

Set the shift of embedding net input to zero.

activation_function

The activation function in the embedding net. Supported options are “relu”, “tanh”, “none”, “linear”, “softplus”, “sigmoid”, “relu6”, “gelu”, “gelu_tf”.

precision

The precision of the embedding net parameters. Supported options are “float32”, “default”, “float16”, “float64”.

uniform_seed

Only for backward compatibility: retrieves the old behavior of using the random seed

attn

The length of the hidden vector in the scaled dot-product attention computation.

attn_layer

The number of layers in the attention mechanism.

attn_dotr

Whether to gate the attention weights with the dot products of the neighbors' relative coordinates (see the sketch after this parameter list).

attn_mask

Whether to mask the diagonal in the attention weights.

multi_task

If the model has multiple fitting nets to train.
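
A minimal construction sketch (not taken from the upstream documentation): the hyperparameter values below are illustrative assumptions chosen for demonstration, not recommended settings.

from deepmd.tf.descriptor.se_atten_v2 import DescrptSeAttenV2

# Hedged usage sketch: all numeric values are illustrative assumptions.
descriptor = DescrptSeAttenV2(
    rcut=6.0,              # cut-off radius r_c
    rcut_smth=0.5,         # radius r_s where smoothing begins
    sel=120,               # maximum number of neighbors in the cut-off radius
    ntypes=2,              # number of atom types
    neuron=[25, 50, 100],  # embedding-net hidden layer sizes
    axis_neuron=16,        # number of axis neurons M_2
    attn=128,              # hidden vector length in the attention layers
    attn_layer=2,          # number of attention layers
    attn_dotr=True,        # gate attention weights with relative-coordinate dot products
    attn_mask=False,       # do not mask the diagonal of the attention weights
)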
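
To make attn, attn_dotr, and attn_mask concrete, the following is a hedged NumPy sketch of one gated, scaled dot-product attention step over a single atom's neighborhood. It is not the library's internal implementation; the function name, array shapes, and gating placement are assumptions made for exposition only.

import numpy as np

def gated_attention(q, k, v, r_hat, attn_dotr=True, attn_mask=False):
    # q, k, v: (nnei, attn) query/key/value rows for one atom's neighbors.
    # r_hat: (nnei, 3) unit vectors of the relative neighbor coordinates.
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)              # scaled dot-product scores
    if attn_mask:
        np.fill_diagonal(logits, -np.inf)      # mask the diagonal (self terms)
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)         # softmax over neighbors
    if attn_dotr:
        w = w * (r_hat @ r_hat.T)              # gate by dot products of relative coordinates
    return w @ v                               # attention-weighted values

In the descriptor itself this step would be repeated attn_layer times, with learned query/key/value projections of width attn.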