deepmd.pt.optimizer.LKF

Module Contents

Classes

LKFOptimizer

Kalman-filter-based optimizer (LKF), built on torch.optim.Optimizer.

Functions

distribute_indices(total_length, num_workers)

deepmd.pt.optimizer.LKF.distribute_indices(total_length, num_workers)[source]
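The partitioning scheme is not documented on this page; the sketch below shows one plausible helper of this shape, assuming it splits total_length contiguous indices as evenly as possible across num_workers and returns one (start, end) range per worker. The function name, return format, and tie-breaking rule are illustrative assumptions, not the library's actual behavior.

# Hypothetical sketch; the real distribute_indices may return a different structure.
def distribute_indices_sketch(total_length, num_workers):
    base, remainder = divmod(total_length, num_workers)
    ranges = []
    start = 0
    for worker in range(num_workers):
        # The first ``remainder`` workers take one extra index each.
        length = base + (1 if worker < remainder else 0)
        ranges.append((start, start + length))
        start += length
    return ranges

# Example: 10 indices over 3 workers -> [(0, 4), (4, 7), (7, 10)]
print(distribute_indices_sketch(10, 3))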
class deepmd.pt.optimizer.LKF.LKFOptimizer(params, kalman_lambda=0.98, kalman_nue=0.9987, block_size=5120)[source]

Bases: torch.optim.optimizer.Optimizer

Kalman-filter-based optimizer (LKF), implemented as a subclass of torch.optim.Optimizer. The warning and parameter notes below are inherited from the base class.

Warning

Parameters need to be specified as collections that have a deterministic ordering that is consistent between runs. Examples of objects that don’t satisfy those properties are sets and iterators over values of dictionaries.

Parameters:
  • params (iterable) – an iterable of torch.Tensor objects or dict objects. Specifies which Tensors should be optimized.

  • defaults (dict) – a dict containing default values of optimization options (used when a parameter group doesn’t specify them).
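A minimal construction sketch, assuming an ordinary PyTorch module whose parameters are passed as a list (a deterministic ordering, per the warning above); the keyword values are simply the documented defaults and the toy model is a stand-in.

import torch
from deepmd.pt.optimizer.LKF import LKFOptimizer

# Toy stand-in model; any torch.nn.Module is handled the same way.
model = torch.nn.Linear(4, 1)

optimizer = LKFOptimizer(
    list(model.parameters()),   # a list, not a set: ordering must be deterministic
    kalman_lambda=0.98,
    kalman_nue=0.9987,
    block_size=5120,
)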

__init_P()[source]
__get_blocksize()[source]
__get_nue()[source]
__split_weights(weight)[source]
__update(H, error, weights)[source]
set_grad_prefactor(grad_prefactor)[source]
step(error)[source]

Performs a single optimization step (parameter update).

Parameters:

error – the error signal that drives the Kalman filter update of the parameters. Unlike most optimizers, step() takes this error directly rather than an optional closure.

Note

Unless otherwise specified, this function should not modify the .grad field of the parameters.
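A hedged training-loop sketch of how step(error) and set_grad_prefactor() might be used together. In DeePMD-kit this update is normally driven by a Kalman filter training wrapper; the dataloader, the prefactor value, the error construction, and the backward pass on the prediction are illustrative assumptions rather than the library's exact procedure.

# ``model`` and ``optimizer`` as constructed above; ``dataloader`` is hypothetical.
for inputs, labels in dataloader:
    optimizer.set_grad_prefactor(1.0)     # assumed prefactor value
    optimizer.zero_grad()

    predictions = model(inputs)
    # Assumed: backpropagate from the summed prediction so the .grad fields
    # hold d(prediction)/d(theta) for the Kalman update to consume.
    predictions.sum().backward()

    # Scalar error signal passed to step() in place of a closure.
    error = (labels - predictions.detach()).mean()
    optimizer.step(error)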

get_device_id(index)[source]