conv3d_layer_type

conv3d_layer_type(
   input_shape,
   batch_size,
   num_filters=32,
   kernel_size=3,
   stride=1,
   padding="valid",
   activation_function="none",
   activation_scale=1.0,
   kernel_initialiser=empty,
   bias_initialiser=empty,
   calc_input_gradients=.true.
)

The conv3d_layer_type derived type provides a 3D convolution layer (e.g. spatial convolution over volumes).

This layer creates a convolution kernel that is convolved with the layer input to produce a tensor of outputs.
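For example, here is a minimal sketch of adding this layer as the first layer of a network. The use athena module name, the network_type derived type, and its add method are assumptions drawn from the wider library rather than this page, as are the illustrative input dimensions and their ordering (three spatial dimensions followed by channels):

```fortran
program conv3d_example
  use athena                     ! assumed top-level module name
  implicit none

  type(network_type) :: network  ! network_type and its add method are assumptions

  ! First (non-input) layer of the network, so input_shape is required.
  ! A 16x16x16 volume with a single channel is assumed purely for illustration.
  call network%add( conv3d_layer_type( &
       input_shape = [16, 16, 16, 1], &
       num_filters = 32, &
       kernel_size = 3, &
       stride      = 1, &
       padding     = "valid", &
       activation_function = "relu" ) )
end program conv3d_example
```

For any later layer in the network, input_shape can be omitted, as it is inferred from the preceding layer.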

Arguments

  • input_shape: The shape of the input data for one sample. This is required only if this layer is the first (non-input) layer of the network.
  • batch_size: Integer. The number of samples in a batch. This is optional (the enclosing network structure can handle it instead).
  • num_filters: Integer. The number of output filters in the convolution (i.e. the number of output channels).
  • kernel_size: An integer or a 1D array of 3 integers. Specifies the height, width, and depth of the convolution kernel. Providing a scalar integer specifies the same value for all dimensions (see the sketch after this list). Default = 3.
  • stride: An integer or a 1D array of 3 integers. Specifies the strides of the convolution along each spatial dimension. Providing a scalar integer specifies the same value for all dimensions. Default = 1.
  • padding: One of "valid", "full", "circular", "reflection", or "replication". The padding method assumed for the input data. WARNING: the input data must be padded before being passed to the network; this can be done using the pad_data procedure.
  • activation_function: Activation function for the layer (see Activation Functions).
  • activation_scale: A real scalar used to scale the activation function. Defaults to 1.0.
  • kernel_initialiser: Initialiser for the kernel weights (see Initialisers).
  • bias_initialiser: Initialiser for the biases (see Initialisers).
  • calc_input_gradients: Boolean. Whether or not to calculate the input gradients for the layer. If this is the first layer after the network inputs, the input gradients do not need to be calculated, as there is no earlier layer for them to be back-propagated to.
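
As referenced in the kernel_size argument above, here is a sketch of the scalar versus array forms of kernel_size and stride. It assumes the constructor returns a conv3d_layer_type value that can be assigned directly; the variable names are illustrative only:

```fortran
program kernel_stride_forms
  use athena   ! assumed top-level module name
  implicit none

  type(conv3d_layer_type) :: layer_iso, layer_aniso

  ! Scalar form: one integer applies to all three spatial dimensions,
  ! giving a cubic 3x3x3 kernel moved with unit stride.
  layer_iso = conv3d_layer_type( num_filters = 16, kernel_size = 3, stride = 1 )

  ! Array form: a 1D array of 3 integers sets each dimension separately,
  ! here a 3x3x5 kernel with a stride of 2 along the depth axis.
  layer_aniso = conv3d_layer_type( &
       num_filters = 16, &
       kernel_size = [3, 3, 5], &
       stride      = [1, 1, 2] )
end program kernel_stride_forms
```

Note that with any padding method other than "valid", the input data must already have been padded (e.g. via the pad_data procedure) before being passed to the network, as warned in the padding argument above.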