# Fine-Tuning a Pre-trained Model

## Overview

This repository serves as practice for fine-tuning a pre-trained Transformers model from Hugging Face using various techniques.

## Points

- Practice preparing the foundations of the training environment: the tokenizer, model, dataset, and hyperparameters.
- Practice fine-tuning a model using the high-level `Trainer` API.
- Practice evaluating the model's performance after training using the `evaluate` library.
- Practice fine-tuning a model with a low-level training & evaluation loop.
- Practice fine-tuning a model with the `Accelerator` to enable distributed training on multiple GPUs or TPUs.
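The low-level training & evaluation loop mentioned above follows a standard PyTorch pattern. Here is a minimal sketch of that pattern, using a toy linear model and synthetic data in place of a real transformer and tokenized dataset (all names below are illustrative, not taken from this repository):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-ins for a tokenized dataset and a transformer model
# (hypothetical; a real run would use a Hugging Face model and dataset).
X = torch.randn(64, 4)
y = (X.sum(dim=1) > 0).long()
loader = DataLoader(TensorDataset(X, y), batch_size=8, shuffle=True)

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-2)
loss_fn = torch.nn.CrossEntropyLoss()

# Training loop: forward pass, loss, backward pass, optimizer step.
model.train()
for epoch in range(5):
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()

# Evaluation loop: no gradients, compute accuracy over the dataset.
model.eval()
with torch.no_grad():
    preds = model(X).argmax(dim=1)
accuracy = (preds == y).float().mean().item()
print(f"accuracy: {accuracy:.2f}")
```

The same loop structure carries over to a real transformer: only the model, the batches, and the metric computation change.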

## Usage

1. Clone this repository to your local machine:
   ```bash
   git clone git@github.com:IsmaelMousa/playing-with-finetuning.git
   ```
2. Navigate to the `playing-with-finetuning` directory:
   ```bash
   cd playing-with-finetuning
   ```
3. Set up a virtual environment:
   ```bash
   python3 -m venv .venv
   ```
4. Activate the virtual environment:
   ```bash
   source .venv/bin/activate
   ```
5. Install the required dependencies:
   ```bash
   pip install -r requirements.txt
   ```

## Important

To start training from the `train.py` file directly:

1. Answer the prompts from this command; your answers are saved in the configuration file:
   ```bash
   accelerate config
   ```
2. Launch the training:
   ```bash
   accelerate launch train.py
   ```