adagrad_optimiser_type

adagrad_optimiser_type(
   learning_rate=0.01,
   epsilon=1.e-8,
   num_params=1,
   regulariser=None,
   clip_dict,
   lr_decay
)

The adagrad_optimiser_type derived type provides a data structure that contains all optimisation and learning parameters for a network model. At its simplest, it defines the learning rate used to update the model's weights.

This type provides an implementation of the Adaptive Gradient Algorithm (AdaGrad) method.
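For reference, AdaGrad adapts the step size of each parameter individually by accumulating the squares of its past gradients; the standard form of the update is (the exact placement of epsilon relative to the square root may differ in the implementation):

   G_t = G_{t-1} + g_t^2
   \theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{G_t} + \epsilon} \, g_t

where \eta is learning_rate, \epsilon is epsilon, g_t is the gradient of a given weight at step t, and G_t is its accumulated squared gradient.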

Arguments

  • learning_rate: A real scalar. The rate of learning applied to the weights.
  • epsilon: A small real scalar. Added to the denominator of the update to avoid division by zero.
  • num_params: An integer scalar. The number of learnable parameters in the model.
  • regulariser: A derived type extended from the base_regulariser_type derived type.
  • clip_dict: A derived data type defining weight clipping parameters. These nested parameters can be set using the set_clip procedure.
  • lr_decay: A derived type extended from the base_lr_decay_type derived type.
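A minimal construction sketch follows. The module name athena is an assumption for illustration; the constructor call itself follows the signature documented above. The resulting optimiser would then be handed to the network's set-up routine, which is not shown here.

   program adagrad_example
      use athena, only: adagrad_optimiser_type
      implicit none
      type(adagrad_optimiser_type) :: optimiser

      ! construct the optimiser with an explicit learning rate;
      ! epsilon guards the division in the AdaGrad update
      optimiser = adagrad_optimiser_type( &
           learning_rate = 0.01, &
           epsilon = 1.e-8 &
      )
   end program adagrad_example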