chore: remove duplicate words (huggingface#31853)
remove duplicate words
hattizai authored Jul 9, 2024
1 parent bd760cd commit 350aed7
Showing 4 changed files with 4 additions and 4 deletions.
2 changes: 1 addition & 1 deletion awesome-transformers.md
@@ -596,7 +596,7 @@ Keywords: Data-Centric AI, Data Quality, Noisy Labels, Outlier Detection, Active

## [BentoML](https://github.com/bentoml/BentoML)

- [BentoML](https://github.com/bentoml) is the unified framework for for building, shipping, and scaling production-ready AI applications incorporating traditional ML, pre-trained AI models, Generative and Large Language Models.
+ [BentoML](https://github.com/bentoml) is the unified framework for building, shipping, and scaling production-ready AI applications incorporating traditional ML, pre-trained AI models, Generative and Large Language Models.
All Hugging Face models and pipelines can be seamlessly integrated into BentoML applications, enabling the running of models on the most suitable hardware and independent scaling based on usage.

Keywords: BentoML, Framework, Deployment, AI Applications
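
Not part of this commit's diff, but as a rough illustration of the integration described above: the sketch below assumes BentoML's 1.x `bentoml.transformers` model-store API, and the `summarizer` tag and default summarization checkpoint are illustrative choices.

```python
import bentoml
from transformers import pipeline

# Hedged sketch (assumes BentoML 1.x): store a Hugging Face pipeline in the
# local BentoML model store under an illustrative tag name.
bentoml.transformers.save_model("summarizer", pipeline("summarization"))

# Load the stored pipeline back and run it like any other callable.
summarizer = bentoml.transformers.load_model("summarizer:latest")
print(summarizer(
    "BentoML packages Hugging Face pipelines for production serving, "
    "handling model storage, batching, and deployment around them."
)[0]["summary_text"])
```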
2 changes: 1 addition & 1 deletion docs/source/en/model_doc/prophetnet.md
@@ -51,7 +51,7 @@ The Authors' code can be found [here](https://github.com/microsoft/ProphetNet).

- ProphetNet is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than
the left.
- - The model architecture is based on the original Transformer, but replaces the “standard” self-attention mechanism in the decoder by a a main self-attention mechanism and a self and n-stream (predict) self-attention mechanism.
+ - The model architecture is based on the original Transformer, but replaces the “standard” self-attention mechanism in the decoder by a main self-attention mechanism and a self and n-stream (predict) self-attention mechanism.
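
Not from the file under change, but as a hedged sketch of the right-padding advice above (assuming the `microsoft/prophetnet-large-uncased` checkpoint):

```python
from transformers import AutoTokenizer

# ProphetNet uses absolute position embeddings, so pad batched inputs on the
# right; padding_side="right" makes that choice explicit.
tokenizer = AutoTokenizer.from_pretrained("microsoft/prophetnet-large-uncased", padding_side="right")
batch = tokenizer(
    ["a short source sentence", "a somewhat longer source sentence that forces padding"],
    padding=True,
    return_tensors="pt",
)
print(batch["input_ids"].shape)  # padding tokens appear at the end of the shorter row
```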

## Resources

2 changes: 1 addition & 1 deletion docs/source/en/preprocessing.md
@@ -471,7 +471,7 @@ from [`DetrImageProcessor`] and define a custom `collate_fn` to batch images tog

## Multimodal

- For tasks involving multimodal inputs, you'll need a [processor](main_classes/processors) to prepare your dataset for the model. A processor couples together two processing objects such as as tokenizer and feature extractor.
+ For tasks involving multimodal inputs, you'll need a [processor](main_classes/processors) to prepare your dataset for the model. A processor couples together two processing objects such as tokenizer and feature extractor.

Load the [LJ Speech](https://huggingface.co/datasets/lj_speech) dataset (see the 🤗 [Datasets tutorial](https://huggingface.co/docs/datasets/load_hub) for more details on how to load a dataset) to see how you can use a processor for automatic speech recognition (ASR):
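
A minimal sketch of that step, not the snippet from the file itself; it assumes the `facebook/wav2vec2-base-960h` checkpoint, 🤗 Datasets, and a recent transformers release where the processor accepts `text=` directly:

```python
from datasets import Audio, load_dataset
from transformers import AutoProcessor

# Load a small slice of LJ Speech and resample to the 16 kHz the model expects.
dataset = load_dataset("lj_speech", split="train[:10]")
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))

# The processor couples a feature extractor (audio) with a tokenizer (transcripts).
processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")

sample = dataset[0]
inputs = processor(sample["audio"]["array"], sampling_rate=16_000, return_tensors="pt")
labels = processor(text=sample["text"], return_tensors="pt").input_ids
```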

2 changes: 1 addition & 1 deletion docs/source/en/testing.md
@@ -1011,7 +1011,7 @@ slow models to do qualitative testing. To see the use of these simply look for *
grep tiny tests examples
```

- Here is a an example of a [script](https://github.com/huggingface/transformers/tree/main/scripts/fsmt/fsmt-make-tiny-model.py) that created the tiny model
+ Here is an example of a [script](https://github.com/huggingface/transformers/tree/main/scripts/fsmt/fsmt-make-tiny-model.py) that created the tiny model
[stas/tiny-wmt19-en-de](https://huggingface.co/stas/tiny-wmt19-en-de). You can easily adjust it to your specific
model's architecture.
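
As a hedged sketch of the same idea (this is not the linked fsmt script): shrink a config before instantiating the model so the weights are tiny and random.

```python
from transformers import BertConfig, BertModel

# Minimal sketch: a tiny random-weight model builds in milliseconds and is only
# meant for functional tests, not qualitative ones.
tiny_config = BertConfig(
    hidden_size=32,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=64,
    vocab_size=1024,
)
tiny_model = BertModel(tiny_config)
tiny_model.save_pretrained("tiny-bert-for-tests")  # illustrative local path
```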

