TFSimilarity.layers.GeneralizedMeanPooling

Computes the generalized mean (GeM) of each channel in a tensor. The pooling parameter p controls the focus of the pooling: p = 1 is equivalent to average pooling, and as p → ∞ the result approaches max pooling, so larger values of p emphasise the strongest activations. The remainder of this page documents behaviour inherited from the base Keras Layer class.

TFSimilarity.layers.GeneralizedMeanPooling(
    p: float = 3.0,
    data_format: Optional[str] = None,
    keepdims: bool = False,
    **kwargs
) -> None
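The generalized mean underlying this layer can be sketched in plain NumPy. `gem_pool` below is a hypothetical illustration of the math, not the layer's actual implementation; it assumes non-negative activations (e.g. post-ReLU), which is why inputs are clipped at a small epsilon before the fractional power:

```python
import numpy as np

def gem_pool(x, p=3.0, axis=(1, 2), eps=1e-6):
    """Generalized mean (GeM) pooling over the spatial axes.

    Assumes non-negative activations; eps avoids a zero base
    when raising to the 1/p power.
    """
    x = np.clip(x, eps, None)
    return np.mean(x ** p, axis=axis) ** (1.0 / p)

# A batch of 2 feature maps, 4x4 spatial, 3 channels.
x = np.random.rand(2, 4, 4, 3)

avg = gem_pool(x, p=1.0)        # p = 1 -> plain average pooling
gem = gem_pool(x, p=3.0)        # default p emphasises large activations
near_max = gem_pool(x, p=500.0) # very large p -> approaches max pooling
```

Each pooled output has shape (batch, channels); sweeping p interpolates between average and max pooling.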

A layer is a callable object that takes as input one or more tensors and that outputs one or more tensors. It involves computation, defined in the call() method, and a state (weight variables). State can be created in various places, at the convenience of the subclass implementer:

  • in `__init__()`;
  • in the optional build() method, which is invoked by the first call() to the layer, and supplies the shape(s) of the input(s), which may not have been known at initialization time;
  • in the first invocation of call(), with some caveats discussed below.

Users will just instantiate a layer and then treat it as a callable.
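The deferred-build behaviour described above (shape-dependent weights created on the first call, once the input shape is known) can be illustrated outside Keras with a small NumPy sketch. `MiniDense` is a hypothetical stand-in, not part of TF Similarity or Keras:

```python
import numpy as np

class MiniDense:
    """Toy layer mimicking Keras' lazy build: weights are created
    on the first call, when the input shape is finally known."""

    def __init__(self, units):
        # Only shape-independent state lives here.
        self.units = units
        self.built = False
        self.kernel = None

    def build(self, input_shape):
        # Shape-dependent state is deferred to build().
        self.kernel = np.zeros((input_shape[-1], self.units))
        self.built = True

    def __call__(self, inputs):
        if not self.built:
            self.build(inputs.shape)
        return inputs @ self.kernel

layer = MiniDense(4)
out = layer(np.ones((2, 3)))  # build() runs here: kernel gets shape (3, 4)
```

The real base Layer follows the same pattern: `__call__` triggers `build()` on first use, then dispatches to `call()`.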

Args

trainable Boolean, whether the layer's variables should be trainable.
name String name of the layer.
dtype The dtype of the layer's computations and weights. Can also be a tf.keras.mixed_precision.Policy, which allows the computation and weight dtype to differ. Default of None means to use tf.keras.mixed_precision.global_policy(), which is a float32 policy unless set to different value.
dynamic Set this to True if your layer should only be run eagerly, and should not be used to generate a static computation graph. This would be the case for a Tree-RNN or a recursive network, for example, or generally for any layer that manipulates tensors using Python control flow. If False, we assume that the layer can safely be used to generate a static computation graph.

We recommend that descendants of Layer implement the following methods:

  • `__init__()`: Defines custom layer attributes, and creates layer weights that do not depend on input shapes, using add_weight(), or other state.
  • build(self, input_shape): This method can be used to create weights that depend on the shape(s) of the input(s), using add_weight(), or other state. call() will automatically build the layer (if it has not been built yet) by calling build().
  • call(self, inputs, *args, **kwargs): Called in `__call__` after making sure build() has been called. call() performs the logic of applying the layer to the inputs. The first invocation may additionally create state that could not be conveniently created in build(); see its docstring for details. Two reserved keyword arguments you can optionally use in call() are:
    • training (boolean, whether the call is in inference mode or training mode). See more details in the layer/model subclassing guide
    • mask (boolean tensor encoding masked timesteps in the input, used in RNN layers). See more details in the layer/model subclassing guide. A typical signature for this method is call(self, inputs); users can optionally add training and mask if the layer needs them. *args and **kwargs are only useful as future extension points, for when more input parameters are planned.
  • get_config(self): Returns a dictionary containing the configuration used to initialize this layer. If the keys differ from the arguments in `__init__()`, then override from_config() as well. This method is used when saving the layer or a model that contains this layer.
Example (the ComputeSum layer from the base tf.keras.layers.Layer docstring):

```python
class ComputeSum(tf.keras.layers.Layer):
  """Tracks a running, non-trainable sum of its inputs."""

  def __init__(self, input_dim):
    super(ComputeSum, self).__init__()
    # Non-trainable state: updated in call(), ignored by backprop.
    self.total = tf.Variable(initial_value=tf.zeros((input_dim,)),
                             trainable=False)

  def call(self, inputs):
    self.total.assign_add(tf.reduce_sum(inputs, axis=0))
    return self.total

x = tf.ones((2, 2))
my_sum = ComputeSum(2)
y = my_sum(x)
print(y.numpy())  # [2. 2.]
y = my_sum(x)
print(y.numpy())  # [4. 4.]

assert my_sum.weights == [my_sum.total]
assert my_sum.non_trainable_weights == [my_sum.total]
assert my_sum.trainable_weights == []
```


For more information about creating layers, see the guide
- [Making new Layers and Models via subclassing](
  https://www.tensorflow.org/guide/keras/custom_layers_and_models)
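
The get_config()/from_config() round trip described above can be sketched without Keras at all. `ScaleBy` is a hypothetical toy class, not a real layer; it only shows the contract that the config dictionary must be enough to re-create the object:

```python
class ScaleBy:
    """Toy object illustrating the get_config round trip used
    when saving and reloading a layer (hypothetical example)."""

    def __init__(self, factor=2.0):
        self.factor = factor

    def __call__(self, x):
        return x * self.factor

    def get_config(self):
        # Everything needed to re-create this object from scratch.
        return {"factor": self.factor}

    @classmethod
    def from_config(cls, config):
        return cls(**config)

layer = ScaleBy(3.0)
clone = ScaleBy.from_config(layer.get_config())  # behaves like the original
```

If `__init__()` took arguments not present in the config keys, from_config() would need to translate between the two, which is exactly when Keras asks you to override it.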



<!-- Tabular view -->
 <table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2"><h2 class="add-link">Attributes</h2></th></tr>

<tr>
<td>
<b>name</b>
</td>
<td>
The name of the layer (string).
</td>
</tr><tr>
<td>
<b>dtype</b>
</td>
<td>
The dtype of the layer's weights.
</td>
</tr><tr>
<td>
<b>variable_dtype</b>
</td>
<td>
Alias of <b>dtype</b>.
</td>
</tr><tr>
<td>
<b>compute_dtype</b>
</td>
<td>
The dtype of the layer's computations. Layers automatically
cast inputs to this dtype which causes the computations and output to also
be in this dtype. When mixed precision is used with a
<b>tf.keras.mixed_precision.Policy</b>, this will be different from
<b>variable_dtype</b>.
</td>
</tr><tr>
<td>
<b>dtype_policy</b>
</td>
<td>
The layer's dtype policy. See the
<b>tf.keras.mixed_precision.Policy</b> documentation for details.
</td>
</tr><tr>
<td>
<b>trainable_weights</b>
</td>
<td>
List of variables to be included in backprop.
</td>
</tr><tr>
<td>
<b>non_trainable_weights</b>
</td>
<td>
List of variables that should not be
included in backprop.
</td>
</tr><tr>
<td>
<b>weights</b>
</td>
<td>
The concatenation of the lists trainable_weights and
non_trainable_weights (in this order).
</td>
</tr><tr>
<td>
<b>trainable</b>
</td>
<td>
Whether the layer should be trained (boolean), i.e. whether
its potentially-trainable weights should be returned as part of
<b>layer.trainable_weights</b>.
</td>
</tr><tr>
<td>
<b>input_spec</b>
</td>
<td>
Optional (list of) <b>InputSpec</b> object(s) specifying the
constraints on inputs that can be accepted by the layer.
</td>
</tr>
</table>
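
The relationship between compute_dtype and variable_dtype can be sketched with NumPy: under a mixed policy, weights are stored in the variable dtype while inputs (and weight copies) are cast to the compute dtype before the math. This is a simplified sketch of the casting behaviour, not the actual Keras implementation:

```python
import numpy as np

variable_dtype = np.float32  # weights are stored at full precision
compute_dtype = np.float16   # the math runs at reduced precision

kernel = np.ones((3, 2), dtype=variable_dtype)

def call(inputs):
    # The layer casts inputs and weights to compute_dtype, so the
    # computation and its output are both in that dtype.
    x = inputs.astype(compute_dtype)
    w = kernel.astype(compute_dtype)
    return x @ w

out = call(np.ones((2, 3), dtype=np.float32))  # float16 result
```

The stored kernel stays float32 (the variable dtype), which is what optimizer updates act on, while the forward pass runs in float16 (the compute dtype).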