batchnorm1d_layer_type
Ned Taylor edited this page Mar 10, 2024 · 4 revisions
```fortran
batchnorm1d_layer_type( &
    input_shape, &
    num_channels, &
    num_inputs, &
    batch_size, &
    momentum = 0.99, &
    epsilon = 0.001, &
    gamma_init_mean = 1.0, &
    gamma_init_std = 0.01, &
    beta_init_mean = 0.0, &
    beta_init_std = 0.01, &
    kernel_initialiser = 'ones', &
    bias_initialiser = 'zeros', &
    moving_mean_initialiser = 'zeros', &
    moving_variance_initialiser = 'ones' &
)
```
**Note: this layer type is only available in version 1.3.0 and later.**
The `batchnorm1d_layer_type` derived type provides a 0D or 1D batch normalisation layer. If `input_shape` is rank 1 (or only one of `num_channels` or `num_inputs` is given), then the input and output data shape will be C × B, where C = number of channels/inputs and B = batch size. If `input_shape` is rank 2 (or both `num_channels` and `num_inputs` are given), then the input and output data shape will be I × C × B, where I = number of features per channel, C = number of channels, and B = batch size.

This layer normalises each feature over the samples within a batch (then scales by gamma and shifts by beta), which improves training stability.
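As a sketch of the arithmetic this layer performs (not the library's implementation), the following standalone Fortran program normalises a single channel over a batch of samples and updates the running statistics with momentum; the variable names mirror the parameters documented here but are otherwise illustrative:

```fortran
program batchnorm_sketch
  implicit none
  real, parameter :: epsilon = 0.001, momentum = 0.99
  real :: x(4), y(4)
  real :: mean, variance, gamma, beta
  real :: moving_mean, moving_variance

  ! one channel of a batch of 4 samples (illustrative values)
  x = [1.0, 2.0, 3.0, 4.0]
  gamma = 1.0              ! kernel weight ('ones' initialiser)
  beta  = 0.0              ! bias ('zeros' initialiser)
  moving_mean     = 0.0    ! 'zeros' initialiser
  moving_variance = 1.0    ! 'ones' initialiser

  ! batch statistics over the samples in the batch
  mean     = sum(x) / size(x)
  variance = sum((x - mean)**2) / size(x)

  ! normalise, then scale and shift
  y = gamma * (x - mean) / sqrt(variance + epsilon) + beta

  ! momentum-based update of the running statistics (used at inference)
  moving_mean     = momentum * moving_mean     + (1.0 - momentum) * mean
  moving_variance = momentum * moving_variance + (1.0 - momentum) * variance

  print *, y
end program batchnorm_sketch
```

A momentum of 0.99 means the running mean and variance move only 1% of the way towards the current batch statistics on each update, so they change slowly and smoothly.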
- `input_shape`: The shape of the input data for one sample. Required only if this layer is the first (non-input) layer of the network.
- `num_inputs`: An integer scalar. The number of input features per channel.
- `num_channels`: An integer scalar. The number of channels in the input data.
- `batch_size`: An integer scalar. The number of samples in a batch. Optional; the enclosing network structure can provide it instead.
- `momentum`: A real scalar. The momentum used to update the moving mean and moving variance; values close to 1 make the running statistics change slowly.
- `epsilon`: A real scalar. A small positive number added to the variance to avoid division by zero.
- `gamma_init_mean`: The mean of the initial gamma values (for Gaussian random initialisation).
- `gamma_init_std`: The standard deviation of the initial gamma values (for Gaussian random initialisation).
- `beta_init_mean`: The mean of the initial beta values (for Gaussian random initialisation).
- `beta_init_std`: The standard deviation of the initial beta values (for Gaussian random initialisation).
- `kernel_initialiser`: Initialiser for the kernel weights, i.e. gamma (see Initialisers).
- `bias_initialiser`: Initialiser for the biases, i.e. beta (see Initialisers).
- `moving_mean_initialiser`: Initialiser for the moving mean, used in the momentum-based running statistics (see Initialisers).
- `moving_variance_initialiser`: Initialiser for the moving variance, used in the momentum-based running statistics (see Initialisers).
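A hypothetical usage sketch follows. The module name `athena` and the `network_type`/`add` interface are assumptions about the surrounding library, not confirmed by this page; only the `batchnorm1d_layer_type` constructor and its keyword arguments come from the signature above:

```fortran
program batchnorm_usage
  use athena            ! assumed module name
  implicit none
  type(network_type) :: network   ! assumed network container type

  ! add a batch normalisation layer for 32-channel rank-1 input;
  ! unspecified arguments take the defaults shown in the signature
  call network%add(batchnorm1d_layer_type( &
       num_channels = 32, &
       momentum = 0.99, &
       epsilon = 0.001 &
  ))
end program batchnorm_usage
```

Because only `num_channels` is given, the layer expects data of shape C × B (32 × batch size).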