
Commit

fix-copies
kashif committed Feb 6, 2024
1 parent 6acff0d commit d025cb4
Showing 8 changed files with 1,914 additions and 413 deletions.
32 changes: 16 additions & 16 deletions docs/source/en/index.md
@@ -94,7 +94,7 @@ Flax), PyTorch, and/or TensorFlow.
| [CLIPSeg](model_doc/clipseg) ||||
| [CLVP](model_doc/clvp) ||||
| [CodeGen](model_doc/codegen) ||||
| [CodeLlama](model_doc/code_llama) ||| |
| [CodeLlama](model_doc/code_llama) ||| |
| [Conditional DETR](model_doc/conditional_detr) ||||
| [ConvBERT](model_doc/convbert) ||||
| [ConvNeXT](model_doc/convnext) ||||
@@ -112,7 +112,7 @@ Flax), PyTorch, and/or TensorFlow.
| [Deformable DETR](model_doc/deformable_detr) ||||
| [DeiT](model_doc/deit) ||||
| [DePlot](model_doc/deplot) ||||
| [Depth Anything](model_doc/depth_anything) | |||
| [Depth Anything](model_doc/depth_anything) | |||
| [DETA](model_doc/deta) ||||
| [DETR](model_doc/detr) ||||
| [DialoGPT](model_doc/dialogpt) ||||
@@ -133,7 +133,7 @@ Flax), PyTorch, and/or TensorFlow.
| [ESM](model_doc/esm) ||||
| [FairSeq Machine-Translation](model_doc/fsmt) ||||
| [Falcon](model_doc/falcon) ||||
| [FastSpeech2Conformer](model_doc/fastspeech2_conformer) | |||
| [FastSpeech2Conformer](model_doc/fastspeech2_conformer) | |||
| [FLAN-T5](model_doc/flan-t5) ||||
| [FLAN-UL2](model_doc/flan-ul2) ||||
| [FlauBERT](model_doc/flaubert) ||||
@@ -162,17 +162,17 @@ Flax), PyTorch, and/or TensorFlow.
| [InstructBLIP](model_doc/instructblip) ||||
| [Jukebox](model_doc/jukebox) ||||
| [KOSMOS-2](model_doc/kosmos-2) ||||
| [LagLlama](model_doc/lagllama) | |||
| [LagLlama](model_doc/lagllama) | |||
| [LayoutLM](model_doc/layoutlm) ||||
| [LayoutLMv2](model_doc/layoutlmv2) ||||
| [LayoutLMv3](model_doc/layoutlmv3) ||||
| [LayoutXLM](model_doc/layoutxlm) ||||
| [LED](model_doc/led) ||||
| [LeViT](model_doc/levit) ||||
| [LiLT](model_doc/lilt) ||||
| [LLaMA](model_doc/llama) ||| |
| [Llama2](model_doc/llama2) ||| |
| [LLaVa](model_doc/llava) | |||
| [LLaMA](model_doc/llama) ||| |
| [Llama2](model_doc/llama2) ||| |
| [LLaVa](model_doc/llava) | |||
| [Longformer](model_doc/longformer) ||||
| [LongT5](model_doc/longt5) ||||
| [LUKE](model_doc/luke) ||||
@@ -191,8 +191,8 @@ Flax), PyTorch, and/or TensorFlow.
| [Megatron-BERT](model_doc/megatron-bert) ||||
| [Megatron-GPT2](model_doc/megatron_gpt2) ||||
| [MGP-STR](model_doc/mgp-str) ||||
| [Mistral](model_doc/mistral) ||| |
| [Mixtral](model_doc/mixtral) | |||
| [Mistral](model_doc/mistral) ||| |
| [Mixtral](model_doc/mixtral) | |||
| [mLUKE](model_doc/mluke) ||||
| [MMS](model_doc/mms) ||||
| [MobileBERT](model_doc/mobilebert) ||||
@@ -219,8 +219,8 @@ Flax), PyTorch, and/or TensorFlow.
| [OPT](model_doc/opt) ||||
| [OWL-ViT](model_doc/owlvit) ||||
| [OWLv2](model_doc/owlv2) ||||
| [PatchTSMixer](model_doc/patchtsmixer) | |||
| [PatchTST](model_doc/patchtst) | |||
| [PatchTSMixer](model_doc/patchtsmixer) | |||
| [PatchTST](model_doc/patchtst) | |||
| [Pegasus](model_doc/pegasus) ||||
| [PEGASUS-X](model_doc/pegasus_x) ||||
| [Perceiver](model_doc/perceiver) ||||
@@ -234,7 +234,7 @@ Flax), PyTorch, and/or TensorFlow.
| [ProphetNet](model_doc/prophetnet) ||||
| [PVT](model_doc/pvt) ||||
| [QDQBert](model_doc/qdqbert) ||||
| [Qwen2](model_doc/qwen2) | |||
| [Qwen2](model_doc/qwen2) | |||
| [RAG](model_doc/rag) ||||
| [REALM](model_doc/realm) ||||
| [Reformer](model_doc/reformer) ||||
@@ -249,11 +249,11 @@ Flax), PyTorch, and/or TensorFlow.
| [RWKV](model_doc/rwkv) ||||
| [SAM](model_doc/sam) ||||
| [SeamlessM4T](model_doc/seamless_m4t) ||||
| [SeamlessM4Tv2](model_doc/seamless_m4t_v2) | |||
| [SeamlessM4Tv2](model_doc/seamless_m4t_v2) | |||
| [SegFormer](model_doc/segformer) ||||
| [SEW](model_doc/sew) ||||
| [SEW-D](model_doc/sew-d) ||||
| [SigLIP](model_doc/siglip) | |||
| [SigLIP](model_doc/siglip) | |||
| [Speech Encoder decoder](model_doc/speech-encoder-decoder) ||||
| [Speech2Text](model_doc/speech_to_text) ||||
| [SpeechT5](model_doc/speecht5) ||||
@@ -285,7 +285,7 @@ Flax), PyTorch, and/or TensorFlow.
| [VAN](model_doc/van) ||||
| [VideoMAE](model_doc/videomae) ||||
| [ViLT](model_doc/vilt) ||||
| [VipLlava](model_doc/vipllava) | |||
| [VipLlava](model_doc/vipllava) | |||
| [Vision Encoder decoder](model_doc/vision-encoder-decoder) ||||
| [VisionTextDualEncoder](model_doc/vision-text-dual-encoder) ||||
| [VisualBERT](model_doc/visual_bert) ||||
@@ -298,7 +298,7 @@ Flax), PyTorch, and/or TensorFlow.
| [VITS](model_doc/vits) ||||
| [ViViT](model_doc/vivit) ||||
| [Wav2Vec2](model_doc/wav2vec2) ||||
| [Wav2Vec2-BERT](model_doc/wav2vec2-bert) | |||
| [Wav2Vec2-BERT](model_doc/wav2vec2-bert) | |||
| [Wav2Vec2-Conformer](model_doc/wav2vec2-conformer) ||||
| [Wav2Vec2Phoneme](model_doc/wav2vec2_phoneme) ||||
| [WavLM](model_doc/wavlm) ||||
