deepmd.pt.model.descriptor.repformer_layer
Module Contents
Classes
- Atten2Map
- Atten2MultiHeadApply
- Atten2EquiVarApply
- LocalAtten
- RepformerLayer
Functions
- _make_nei_g1
- _apply_nlist_mask
- _apply_switch
- _apply_h_norm — Normalize h by the std of vector length.
- deepmd.pt.model.descriptor.repformer_layer._make_nei_g1(g1_ext: torch.Tensor, nlist: torch.Tensor) torch.Tensor [source]
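The packaged body is not shown here, so the following is a minimal sketch of what a neighbor gather like this typically does: pick, for each local atom, the single-atom features of its neighbors out of the extended feature table. The function body below is an illustrative assumption, not the library's code.

```python
import torch

def make_nei_g1(g1_ext: torch.Tensor, nlist: torch.Tensor) -> torch.Tensor:
    """Gather per-neighbor single-atom features.

    g1_ext: nf x nall x ng1 extended single-atom channel
    nlist:  nf x nloc x nnei neighbor list (padded entries set to 0)
    returns: nf x nloc x nnei x ng1
    """
    nf, nloc, nnei = nlist.shape
    ng1 = g1_ext.shape[-1]
    # flatten the neighbor indices and broadcast them over the feature axis
    index = nlist.reshape(nf, nloc * nnei, 1).expand(-1, -1, ng1)
    gg1 = torch.gather(g1_ext, dim=1, index=index)
    return gg1.view(nf, nloc, nnei, ng1)
```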
- deepmd.pt.model.descriptor.repformer_layer._apply_nlist_mask(gg: torch.Tensor, nlist_mask: torch.Tensor) torch.Tensor [source]
- deepmd.pt.model.descriptor.repformer_layer._apply_switch(gg: torch.Tensor, sw: torch.Tensor) torch.Tensor [source]
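Both helpers amount to broadcast multiplications over the last (feature) axis: the mask zeroes out padded neighbors, and the switch function damps pair features smoothly toward the cutoff. A sketch under that assumption:

```python
import torch

def apply_nlist_mask(gg: torch.Tensor, nlist_mask: torch.Tensor) -> torch.Tensor:
    # gg: nf x nloc x nnei x ng2; nlist_mask: nf x nloc x nnei
    # (1 for a real neighbor, 0 for padding); zero out padded neighbors
    return gg * nlist_mask.unsqueeze(-1)

def apply_switch(gg: torch.Tensor, sw: torch.Tensor) -> torch.Tensor:
    # sw: nf x nloc x nnei smooth switch function, decaying from 1 to 0
    # between rcut_smth and rcut; modulates pair features for a smooth cutoff
    return gg * sw.unsqueeze(-1)
```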
- deepmd.pt.model.descriptor.repformer_layer._apply_h_norm(hh: torch.Tensor) torch.Tensor [source]
Normalize h by the standard deviation of the vector lengths. (The docstring notes it is unclear whether this is a good way to normalize.)
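One plausible reading of this normalization: divide the equivariant pair vectors by the per-atom standard deviation of their lengths, which is rotation-invariant and therefore preserves equivariance. The exact reduction axes and the epsilon are assumptions; the packaged implementation may differ.

```python
import torch

def apply_h_norm(hh: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    # hh: nf x nloc x nnei x 3 equivariant pair channel
    normh = torch.linalg.norm(hh, dim=-1)          # nf x nloc x nnei
    std = torch.std(normh, dim=-1, keepdim=True)   # nf x nloc x 1
    # dividing by a rotation-invariant scalar keeps hh equivariant
    return hh / (std.unsqueeze(-1) + eps)
```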
- class deepmd.pt.model.descriptor.repformer_layer.Atten2Map(ni: int, nd: int, nh: int, has_gate: bool = False, smooth: bool = True, attnw_shift: float = 20.0)[source]
Bases:
torch.nn.Module
Base class for all neural network modules.
Your models should also subclass this class.
Modules can also contain other Modules, allowing to nest them in a tree structure. You can assign the submodules as regular attributes:
    import torch.nn as nn
    import torch.nn.functional as F

    class Model(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv1 = nn.Conv2d(1, 20, 5)
            self.conv2 = nn.Conv2d(20, 20, 5)

        def forward(self, x):
            x = F.relu(self.conv1(x))
            return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.

Note: As per the example above, an __init__() call to the parent class must be made before assignment on the child.

- Variables:
    training (bool) – Boolean represents whether this module is in training or evaluation mode.
- class deepmd.pt.model.descriptor.repformer_layer.Atten2MultiHeadApply(ni: int, nh: int)[source]
Bases:
torch.nn.Module
- class deepmd.pt.model.descriptor.repformer_layer.Atten2EquiVarApply(ni: int, nh: int)[source]
Bases:
torch.nn.Module
- class deepmd.pt.model.descriptor.repformer_layer.LocalAtten(ni: int, nd: int, nh: int, smooth: bool = True, attnw_shift: float = 20.0)[source]
Bases:
torch.nn.Module
- class deepmd.pt.model.descriptor.repformer_layer.RepformerLayer(rcut, rcut_smth, sel: int, ntypes: int, g1_dim=128, g2_dim=16, axis_dim: int = 4, update_chnnl_2: bool = True, do_bn_mode: str = 'no', bn_momentum: float = 0.1, update_g1_has_conv: bool = True, update_g1_has_drrd: bool = True, update_g1_has_grrg: bool = True, update_g1_has_attn: bool = True, update_g2_has_g1g1: bool = True, update_g2_has_attn: bool = True, update_h2: bool = False, attn1_hidden: int = 64, attn1_nhead: int = 4, attn2_hidden: int = 16, attn2_nhead: int = 4, attn2_has_gate: bool = False, activation_function: str = 'tanh', update_style: str = 'res_avg', set_davg_zero: bool = True, smooth: bool = True)[source]
Bases:
torch.nn.Module
- _update_h2(g2: torch.Tensor, h2: torch.Tensor, nlist_mask: torch.Tensor, sw: torch.Tensor) torch.Tensor [source]
- _update_g1_conv(gg1: torch.Tensor, g2: torch.Tensor, nlist_mask: torch.Tensor, sw: torch.Tensor) torch.Tensor [source]
- _cal_h2g2(g2: torch.Tensor, h2: torch.Tensor, nlist_mask: torch.Tensor, sw: torch.Tensor) torch.Tensor [source]
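This contraction combines the invariant pair channel g2 with the equivariant pair channel h2 over the neighbor axis, yielding a per-atom equivariant summary. The sketch below is an assumption about the operation (where the mask and switch are applied, and the division by nnei are guesses), not the library's exact code.

```python
import torch

def cal_h2g2(g2: torch.Tensor, h2: torch.Tensor,
             nlist_mask: torch.Tensor, sw: torch.Tensor) -> torch.Tensor:
    # g2: nf x nloc x nnei x ng2 invariant pair channel
    # h2: nf x nloc x nnei x 3  equivariant pair channel
    # mask padded neighbors and apply the smooth switch to the invariant part
    g2 = g2 * (nlist_mask * sw).unsqueeze(-1)
    nnei = g2.shape[2]
    # contract over neighbors: result is nf x nloc x 3 x ng2 and rotates
    # with h2, i.e. it stays equivariant
    return torch.matmul(h2.transpose(-1, -2), g2) / nnei
```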
- _update_g1_grrg(g2: torch.Tensor, h2: torch.Tensor, nlist_mask: torch.Tensor, sw: torch.Tensor) torch.Tensor [source]
- _update_g2_g1g1(g1: torch.Tensor, gg1: torch.Tensor, nlist_mask: torch.Tensor, sw: torch.Tensor) torch.Tensor [source]
- forward(g1_ext: torch.Tensor, g2: torch.Tensor, h2: torch.Tensor, nlist: torch.Tensor, nlist_mask: torch.Tensor, sw: torch.Tensor)[source]
- Parameters:
    - g1_ext : nf x nall x ng1
      Extended single-atom channel.
    - g2 : nf x nloc x nnei x ng2
      Pair-atom channel, invariant.
    - h2 : nf x nloc x nnei x 3
      Pair-atom channel, equivariant.
    - nlist : nf x nloc x nnei
      Neighbor list (padded neighbors are set to 0).
    - nlist_mask : nf x nloc x nnei
      Masks of the neighbor list: 1 for a real neighbor, 0 otherwise.
    - sw : nf x nloc x nnei
      Switch function.
- Returns:
    - g1 : nf x nloc x ng1
      Updated single-atom channel.
    - g2 : nf x nloc x nnei x ng2
      Updated pair-atom channel, invariant.
    - h2 : nf x nloc x nnei x 3
      Updated pair-atom channel, equivariant.