
Commit
fix docs
DavidLandup0 committed Aug 23, 2023
1 parent c640fc9 commit f1b5ffa
Showing 2 changed files with 11 additions and 17 deletions.
2 changes: 1 addition & 1 deletion keras_cv/layers/overlapping_patching_embedding.py
@@ -37,7 +37,7 @@ def __init__(self, project_dim=32, patch_size=7, stride=4, **kwargs):
patch_size: integer, the size of the patches to encode.
Defaults to `7`.
stride: integer, the stride to use for the patching before
-    projection. Defaults to 5`.
+    projection. Defaults to `5`.
Basic usage:
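The hunk above fixes the documented default for `stride` in an overlapping patch-embedding layer (`patch_size=7`, `stride=4`). As a minimal sketch of the resulting patch-grid arithmetic, assuming the layer extracts patches with a strided convolution using `"same"` padding (an assumption about the implementation, not stated in this diff):

```python
def overlapping_patch_grid(height, width, patch_size, stride):
    """Number of overlapping patches per spatial axis.

    With "same" padding, a strided convolution produces
    ceil(dim / stride) outputs per axis; patch_size only controls
    how much adjacent patches overlap (patch_size - stride pixels),
    not how many patches there are.
    """
    rows = -(-height // stride)  # ceil division via negation trick
    cols = -(-width // stride)
    return rows, cols

# patch_size=7, stride=4 on a 64x64 input: a 16x16 grid of patches,
# with adjacent patches sharing 7 - 4 = 3 pixels.
print(overlapping_patch_grid(64, 64, 7, 4))  # (16, 16)
```

Because stride, not patch size, sets the grid, changing the documented default from `7` to the correct `5` (or using `4` as in the signature) changes the output resolution of the embedding.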
@@ -59,19 +59,16 @@ def __init__(
- [Based on the TensorFlow implementation from DeepVision](https://github.com/DavidLandup0/deepvision/tree/main/deepvision/models/classification/mix_transformer) # noqa: E501
Args:
backbone: `keras.Model`. The backbone network for the model that is
used as a feature extractor for the SegFormer encoder.
It is *intended* to be used only with the MiT backbone model, which
was created specifically for SegFormer. It should either be a
`keras_cv.models.backbones.backbone.Backbone` or a `tf.keras.Model`
that implements the `pyramid_level_inputs` property with keys
"P2", "P3", "P4", and "P5" and layer names as
values.
num_classes: int, the number of classes for the segmentation model,
including the background class.
projection_filters: int, number of filters in the
convolution layer projecting the concatenated features into
a segmentation map. Defaults to `256`.
include_rescaling: bool, whether to rescale the inputs. If set
to `True`, inputs will be passed through a `Rescaling(1/255.0)`
layer.
depths: the number of transformer encoders to be used per stage in the
network.
embedding_dims: the embedding dims per hierarchical stage, used as
the levels of the feature pyramid.
input_shape: optional shape tuple. Defaults to `(None, None, 3)`.
input_tensor: optional Keras tensor (i.e. output of `keras.layers.Input()`)
to use as image input for the model.
Examples:
@@ -84,9 +81,6 @@ def __init__(
images = np.ones(shape=(1, 96, 96, 3))
labels = np.zeros(shape=(1, 96, 96, 1))
backbone = keras_cv.models.MiTBackbone.from_preset("mit_b0_imagenet")
model = keras_cv.models.segmentation.SegFormer(
num_classes=1, backbone=backbone,
)
# Evaluate model
model(images)
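The `projection_filters` and `num_classes` arguments above describe how the decoder head turns concatenated pyramid features into a segmentation map. A shape-flow sketch, assuming pyramid levels P2 through P5 at strides 4, 8, 16, and 32, each upsampled to the P2 resolution and projected to `projection_filters` channels before concatenation (these structural details are illustrative assumptions, not taken from this diff):

```python
def segformer_head_shapes(h, w, num_stages=4, projection_filters=256,
                          num_classes=19):
    # Assumed flow: each stage's features are upsampled to the stride-4
    # (P2) resolution and projected to `projection_filters` channels,
    # then concatenated, fused back to `projection_filters` channels,
    # and finally classified into `num_classes` channels.
    ph, pw = h // 4, w // 4
    concat = (ph, pw, num_stages * projection_filters)
    fused = (ph, pw, projection_filters)
    logits = (ph, pw, num_classes)
    return concat, fused, logits

# A 512x512 input yields stride-4 logits, which a SegFormer-style head
# then upsamples back to the input resolution.
print(segformer_head_shapes(512, 512))
```

This also shows why the background class must be counted in `num_classes`: the final classifier emits exactly that many channels per pixel.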
