Transformers-Tutorials

Hi there!

This repository contains demos I made with the Transformers library by 🤗 HuggingFace.

NOTE: If you are not familiar with HuggingFace and/or Transformers, I highly recommend checking out our free course, which introduces you to several Transformer architectures (such as BERT, GPT-2, T5 and BART) and gives an overview of the HuggingFace libraries, including Transformers, Tokenizers, Datasets, Accelerate and the Hub.

Currently, it contains the following demos:

  • BERT (paper):
    • fine-tuning BertForTokenClassification on a named entity recognition (NER) dataset Open In Colab
  • LayoutLM (paper):
    • fine-tuning LayoutLMForTokenClassification on the FUNSD dataset Open In Colab
    • fine-tuning LayoutLMForSequenceClassification on the RVL-CDIP dataset Open In Colab
    • adding image embeddings to LayoutLM during fine-tuning on the FUNSD dataset Open In Colab
  • TAPAS (paper):
  • Vision Transformer (paper):
    • performing inference with ViTForImageClassification (see the sketch below) Open In Colab
    • fine-tuning ViTForImageClassification on CIFAR-10 using PyTorch Lightning Open In Colab
    • fine-tuning ViTForImageClassification on CIFAR-10 using the 🤗 Trainer Open In Colab
  • LUKE (paper):
    • fine-tuning LukeForEntityPairClassification on a custom relation extraction dataset using PyTorch Lightning Open In Colab
  • DETR (paper):
    • performing inference with DetrForObjectDetection Open In Colab
    • fine-tuning DetrForObjectDetection on a custom object detection dataset Open In Colab
    • evaluating DetrForObjectDetection on the COCO detection 2017 validation set Open In Colab
    • performing inference with DetrForSegmentation Open In Colab
    • fine-tuning DetrForSegmentation on COCO panoptic 2017 Open In Colab
  • T5 (paper):
    • fine-tuning T5ForConditionalGeneration on a Dutch summarization dataset on TPU using HuggingFace Accelerate Open In Colab
    • fine-tuning T5ForConditionalGeneration (CodeT5) for Ruby code summarization using PyTorch Lightning Open In Colab
  • LayoutLMv2 (paper):
    • fine-tuning LayoutLMv2ForSequenceClassification on RVL-CDIP Open In Colab
    • fine-tuning LayoutLMv2ForTokenClassification on FUNSD Open In Colab
    • fine-tuning LayoutLMv2ForTokenClassification on FUNSD using the 🤗 Trainer Open In Colab
    • performing inference with LayoutLMv2ForTokenClassification on FUNSD Open In Colab
    • true inference with LayoutLMv2ForTokenClassification (when no labels are available) + Gradio demo Open In Colab
    • fine-tuning LayoutLMv2ForTokenClassification on CORD Open In Colab
    • fine-tuning LayoutLMv2ForQuestionAnswering on DocVQA Open In Colab
  • CANINE (paper):
    • fine-tuning CanineForSequenceClassification on IMDb Open In Colab
  • GPT-J-6B (repository):
    • performing inference with GPTJForCausalLM to illustrate few-shot learning and code generation Open In Colab

... more to come! 🤗
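
To give an idea of what these notebooks look like, here is a minimal inference sketch for the ViT demo listed above. This is not code taken from the notebooks themselves; it assumes a recent version of Transformers (older releases use ViTFeatureExtractor instead of ViTImageProcessor, with the same usage here) and the publicly available google/vit-base-patch16-224 checkpoint:

```python
import requests
from PIL import Image
from transformers import ViTImageProcessor, ViTForImageClassification

# Load a sample image (a COCO validation image used in many Transformers docs)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# google/vit-base-patch16-224 is a ViT checkpoint fine-tuned on ImageNet-1k
processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")
model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")

# Preprocess the image into pixel values and run a forward pass
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)

# Logits have shape (batch_size, num_labels); take the argmax for the prediction
predicted_class_idx = outputs.logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```

The fine-tuning notebooks build on the same preprocessing step, replacing the single forward pass with a standard PyTorch Lightning or 🤗 Trainer training loop.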

If you have any questions regarding these demos, feel free to open an issue on this repository.

By the way, I was also the main contributor who added the following algorithms to the library:

  • TAbular PArSing (TAPAS) by Google AI
  • Vision Transformer (ViT) by Google AI
  • Data-efficient Image Transformers (DeiT) by Facebook AI
  • LUKE by Studio Ousia
  • DEtection TRansformer (DETR) by Facebook AI
  • CANINE by Google AI
  • BEiT by Microsoft Research
  • LayoutLMv2 (and LayoutXLM) by Microsoft Research

Each of them was an incredible learning experience. I can highly recommend contributing an AI algorithm to the library!
