From 2f3ea08a077ba3133fa8a604b22436cad250b055 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=EA=B9=80=EC=A4=80=EC=9E=AC=5FT3056?= <55151385+junejae@users.noreply.github.com>
Date: Wed, 4 Oct 2023 03:20:22 +0900
Subject: [PATCH] docs: feat: add clip notebook resources from OSSCA community
 (#26505)

---
 docs/source/en/model_doc/clip.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docs/source/en/model_doc/clip.md b/docs/source/en/model_doc/clip.md
index 8c1e11c398c180..7b050296aeac2a 100644
--- a/docs/source/en/model_doc/clip.md
+++ b/docs/source/en/model_doc/clip.md
@@ -85,6 +85,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
 - A blog post on [How to fine-tune CLIP on 10,000 image-text pairs](https://huggingface.co/blog/fine-tune-clip-rsicd).
 - CLIP is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text).
+- A [notebook](https://colab.research.google.com/drive/1zip3zmrbuKerAfC1d2uS1mqQS-QykXnl?usp=sharing) on how to fine-tune the CLIP model with a Korean multimodal dataset. 🌎🇰🇷
 
 If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it. The resource should ideally demonstrate something new instead of duplicating an existing resource.
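
For orientation, a minimal sketch of the contrastive image-text API that the fine-tuning resources above build on; it assumes the `openai/clip-vit-base-patch32` checkpoint and the `CLIPModel`/`CLIPProcessor` classes from `transformers`, and is illustrative rather than part of the patched documentation:

```python
# Minimal sketch: score one image against candidate captions with CLIP.
# The fine-tuning resources swap in their own image-text pairs
# (e.g. a Korean multimodal dataset) instead of this toy input.
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Tokenize the captions and preprocess the image in one call.
inputs = processor(
    text=["a photo of a cat", "a photo of a dog"],
    images=image,
    return_tensors="pt",
    padding=True,
)

outputs = model(**inputs)
# logits_per_image holds the image-text similarity scores; softmax turns
# them into probabilities over the candidate captions.
probs = outputs.logits_per_image.softmax(dim=1)
print(probs)
```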