doc: add info about wav2vec2 bert in older wav2vec2 models. #31120

Merged: 3 commits, Jun 5, 2024
Changes from 1 commit
2 changes: 2 additions & 0 deletions docs/source/en/model_doc/wav2vec2-conformer.md
@@ -27,6 +27,8 @@ The Wav2Vec2-Conformer weights were released by the Meta AI team within the [Fai
This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten).
The original code can be found [here](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec).

Note: Meta (FAIR) released a new model, [Wav2Vec2-BERT 2.0](https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert), pretrained on 4.5M hours of audio. We especially recommend using it for fine-tuning tasks.

## Usage tips

- Wav2Vec2-Conformer follows the same architecture as Wav2Vec2, but replaces the *Attention*-block with a *Conformer*-block
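The note added above points readers to Wav2Vec2-BERT 2.0 for fine-tuning. As a minimal sketch outside the diff itself - assuming the `facebook/w2v-bert-2.0` checkpoint name and a silent dummy clip - loading the encoder and its feature extractor might look roughly like this:

```python
# Minimal sketch: load the recommended Wav2Vec2-BERT 2.0 encoder.
# The checkpoint name "facebook/w2v-bert-2.0" and the silent dummy clip are
# illustrative assumptions, not part of this PR.
import numpy as np
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2BertModel

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/w2v-bert-2.0")
model = Wav2Vec2BertModel.from_pretrained("facebook/w2v-bert-2.0")

# One second of silence at 16 kHz stands in for a real utterance.
waveform = np.zeros(16000, dtype=np.float32)
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
print(hidden_states.shape)
```

A task-specific head (for example a CTC head) would then be trained on top of these hidden states during fine-tuning.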
2 changes: 2 additions & 0 deletions docs/source/en/model_doc/wav2vec2.md
@@ -33,6 +33,8 @@ recognition with limited amounts of labeled data.*

This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten).

Note: Meta (FAIR) released a new model, [Wav2Vec2-BERT 2.0](https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert), pretrained on 4.5M hours of audio. We especially recommend using it for fine-tuning tasks.

## Usage tips

- Wav2Vec2 is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
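The usage tip above says Wav2Vec2 accepts a float array corresponding to the raw waveform. A hedged sketch - assuming the `facebook/wav2vec2-base-960h` checkpoint and a synthetic 16 kHz clip - of passing that array through the processor and CTC model:

```python
# Sketch of the raw-waveform interface: the float array goes straight into the
# processor, which prepares input_values for the model. The checkpoint and
# dummy audio are illustrative assumptions, not part of this PR.
import numpy as np
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

waveform = np.zeros(16000, dtype=np.float32)  # one second of 16 kHz audio
inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```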
2 changes: 2 additions & 0 deletions docs/source/en/model_doc/xlsr_wav2vec2.md
@@ -36,6 +36,8 @@ XLSR-53, a large model pretrained in 53 languages.*

The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/fairseq/models/wav2vec).

Note: Meta (FAIR) released a new model, [Wav2Vec2-BERT 2.0](https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert), pretrained on 4.5M hours of audio. We especially recommend using it for fine-tuning tasks.

## Usage tips

- XLSR-Wav2Vec2 is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.