diff --git a/docs/source/en/add_new_model.md b/docs/source/en/add_new_model.md
index 93c8ff4407391c..50a49584c2db74 100644
--- a/docs/source/en/add_new_model.md
+++ b/docs/source/en/add_new_model.md
@@ -1,4 +1,4 @@
-
+
+To keep the code readable, there are never more than two levels of abstraction for any model. The example model here, BrandNewBert, traces its inheritance from `BrandNewBertPreTrainedModel` and [`PreTrainedModel`]. It is important that a new model only depends on [`PreTrainedModel`] because it allows a model to be loaded and saved with [`~PreTrainedModel.from_pretrained`] and [`~PreTrainedModel.save_pretrained`].
+
+Other important functions like the forward method are defined in the `modeling.py` file.
+
+Specific model heads (for example, for sequence classification or language modeling) should use the base model as a component that is called in the forward pass rather than inheriting from it, as shown in the sketch below. This keeps abstraction low.
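+
+A minimal sketch of this composition pattern (the head class and attribute names here are illustrative, not the generated code):
+
+```py
+from torch import nn
+
+class BrandNewBertForSequenceClassification(BrandNewBertPreTrainedModel):
+    def __init__(self, config):
+        super().__init__(config)
+        # the base model is a component that is called in the forward pass
+        self.model = BrandNewBertModel(config)
+        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
+
+    def forward(self, input_ids):
+        hidden_states = self.model(input_ids).last_hidden_state
+        # classify from the representation of the first token
+        return self.classifier(hidden_states[:, 0])
+```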
+
+New models require a configuration, for example `BrandNewBertConfig`, that is stored as an attribute of [`PreTrainedModel`].
+
+```py
model = BrandNewBertModel.from_pretrained("brandy/brand_new_bert")
-model.config # model has access to its config
+model.config
```
-Similar to the model, the configuration inherits basic serialization and deserialization functionalities from
-[`PretrainedConfig`]. Note that the configuration and the model are always serialized into two
-different formats - the model to a *pytorch_model.bin* file and the configuration to a *config.json* file. Calling
-the model's [`~PreTrainedModel.save_pretrained`] will automatically call
-the config's [`~PretrainedConfig.save_pretrained`], so that both model and configuration are saved.
+Like [`PreTrainedModel`], [`PretrainedConfig`] provides [`~PretrainedConfig.from_pretrained`] and [`~PretrainedConfig.save_pretrained`] methods.
+
+When you use [`~PreTrainedModel.save_pretrained`], it automatically calls the configuration's [`~PretrainedConfig.save_pretrained`] method so that both the model and configuration are saved together.
+A model is saved to a `model.safetensors` file and a configuration is saved to a `config.json` file.
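+
+For example, a single call writes both files to the given folder (the path here is illustrative).
+
+```py
+model.save_pretrained("path/to/brand_new_bert")
+# path/to/brand_new_bert now contains model.safetensors and config.json
+```
+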
### Code style
-When coding your new model, keep in mind that Transformers is an opinionated library and we have a few quirks of our
-own regarding how code should be written :-)
-
-1. The forward pass of your model should be fully written in the modeling file while being fully independent of other
- models in the library. If you want to reuse a block from another model, copy the code and paste it with a
- `# Copied from` comment on top (see [here](https://github.com/huggingface/transformers/blob/v4.17.0/src/transformers/models/roberta/modeling_roberta.py#L160)
- for a good example and [there](pr_checks#check-copies) for more documentation on Copied from).
-2. The code should be fully understandable, even by a non-native English speaker. This means you should pick
- descriptive variable names and avoid abbreviations. As an example, `activation` is preferred to `act`.
- One-letter variable names are strongly discouraged unless it's an index in a for loop.
-3. More generally we prefer longer explicit code to short magical one.
-4. Avoid subclassing `nn.Sequential` in PyTorch but subclass `nn.Module` and write the forward pass, so that anyone
- using your code can quickly debug it by adding print statements or breaking points.
-5. Your function signature should be type-annotated. For the rest, good variable names are way more readable and
- understandable than type annotations.
-
-### Overview of tokenizers
-
-Not quite ready yet :-( This section will be added soon!
-
-## Step-by-step recipe to add a model to π€ Transformers
-
-Everyone has different preferences of how to port a model so it can be very helpful for you to take a look at summaries
-of how other contributors ported models to Hugging Face. Here is a list of community blog posts on how to port a model:
-
-1. [Porting GPT2 Model](https://medium.com/huggingface/from-tensorflow-to-pytorch-265f40ef2a28) by [Thomas](https://huggingface.co/thomwolf)
-2. [Porting WMT19 MT Model](https://huggingface.co/blog/porting-fsmt) by [Stas](https://huggingface.co/stas)
-
-From experience, we can tell you that the most important things to keep in mind when adding a model are:
-
-- Don't reinvent the wheel! Most parts of the code you will add for the new π€ Transformers model already exist
- somewhere in π€ Transformers. Take some time to find similar, already existing models and tokenizers you can copy
- from. [grep](https://www.gnu.org/software/grep/) and [rg](https://github.com/BurntSushi/ripgrep) are your
- friends. Note that it might very well happen that your model's tokenizer is based on one model implementation, and
- your model's modeling code on another one. *E.g.* FSMT's modeling code is based on BART, while FSMT's tokenizer code
- is based on XLM.
-- It's more of an engineering challenge than a scientific challenge. You should spend more time creating an
- efficient debugging environment rather than trying to understand all theoretical aspects of the model in the paper.
-- Ask for help, when you're stuck! Models are the core component of π€ Transformers so we at Hugging Face are more
- than happy to help you at every step to add your model. Don't hesitate to ask if you notice you are not making
- progress.
-
-In the following, we try to give you a general recipe that we found most useful when porting a model to π€ Transformers.
-
-The following list is a summary of everything that has to be done to add a model and can be used by you as a To-Do
-List:
-
-β (Optional) Understood the model's theoretical aspects
-β Prepared π€ Transformers dev environment
-β Set up debugging environment of the original repository
-β Created script that successfully runs the `forward()` pass using the original repository and checkpoint
-β Successfully added the model skeleton to π€ Transformers
-β Successfully converted original checkpoint to π€ Transformers checkpoint
-β Successfully ran `forward()` pass in π€ Transformers that gives identical output to original checkpoint
-β Finished model tests in π€ Transformers
-β Successfully added tokenizer in π€ Transformers
-β Run end-to-end integration tests
-β Finished docs
-β Uploaded model weights to the Hub
-β Submitted the pull request
-β (Optional) Added a demo notebook
-
-To begin with, we usually recommend starting by getting a good theoretical understanding of `BrandNewBert`. However,
-if you prefer to understand the theoretical aspects of the model *on-the-job*, then it is totally fine to directly dive
-into the `BrandNewBert`'s code-base. This option might suit you better if your engineering skills are better than
-your theoretical skill, if you have trouble understanding `BrandNewBert`'s paper, or if you just enjoy programming
-much more than reading scientific papers.
-
-### 1. (Optional) Theoretical aspects of BrandNewBert
-
-You should take some time to read *BrandNewBert's* paper, if such descriptive work exists. There might be large
-sections of the paper that are difficult to understand. If this is the case, this is fine - don't worry! The goal is
-not to get a deep theoretical understanding of the paper, but to extract the necessary information required to
-effectively re-implement the model in π€ Transformers. That being said, you don't have to spend too much time on the
-theoretical aspects, but rather focus on the practical ones, namely:
-
-- What type of model is *brand_new_bert*? BERT-like encoder-only model? GPT2-like decoder-only model? BART-like
- encoder-decoder model? Look at the [model_summary](model_summary) if you're not familiar with the differences between those.
-- What are the applications of *brand_new_bert*? Text classification? Text generation? Seq2Seq tasks, *e.g.,*
- summarization?
-- What is the novel feature of the model that makes it different from BERT/GPT-2/BART?
-- Which of the already existing [π€ Transformers models](https://huggingface.co/transformers/#contents) is most
- similar to *brand_new_bert*?
-- What type of tokenizer is used? A sentencepiece tokenizer? Word piece tokenizer? Is it the same tokenizer as used
- for BERT or BART?
-
-After you feel like you have gotten a good overview of the architecture of the model, you might want to write to the
-Hugging Face team with any questions you might have. This might include questions regarding the model's architecture,
-its attention layer, etc. We will be more than happy to help you.
-
-### 2. Next prepare your environment
-
-1. Fork the [repository](https://github.com/huggingface/transformers) by clicking on the βFork' button on the
- repository's page. This creates a copy of the code under your GitHub user account.
-
-2. Clone your `transformers` fork to your local disk, and add the base repository as a remote:
-
- ```bash
- git clone https://github.com/[your Github handle]/transformers.git
- cd transformers
- git remote add upstream https://github.com/huggingface/transformers.git
- ```
-
-3. Set up a development environment, for instance by running the following command:
-
- ```bash
- python -m venv .env
- source .env/bin/activate
- pip install -e ".[dev]"
- ```
-
- Depending on your OS, and since the number of optional dependencies of Transformers is growing, you might get a
- failure with this command. If that's the case make sure to install the Deep Learning framework you are working with
- (PyTorch, TensorFlow and/or Flax) then do:
-
- ```bash
- pip install -e ".[quality]"
- ```
-
- which should be enough for most use cases. You can then return to the parent directory
-
- ```bash
- cd ..
- ```
-
-4. We recommend adding the PyTorch version of *brand_new_bert* to Transformers. To install PyTorch, please follow the
- instructions on https://pytorch.org/get-started/locally/.
-
- **Note:** You don't need to have CUDA installed. Making the new model work on CPU is sufficient.
-
-5. To port *brand_new_bert*, you will also need access to its original repository:
-
- ```bash
- git clone https://github.com/org_that_created_brand_new_bert_org/brand_new_bert.git
- cd brand_new_bert
- pip install -e .
- ```
-
-Now you have set up a development environment to port *brand_new_bert* to π€ Transformers.
-
-### 3.-4. Run a pretrained checkpoint using the original repository
-
-At first, you will work on the original *brand_new_bert* repository. Often, the original implementation is very
-βresearchyβ. Meaning that documentation might be lacking and the code can be difficult to understand. But this should
-be exactly your motivation to reimplement *brand_new_bert*. At Hugging Face, one of our main goals is to *make people
-stand on the shoulders of giants* which translates here very well into taking a working model and rewriting it to make
-it as **accessible, user-friendly, and beautiful** as possible. This is the number-one motivation to re-implement
-models into π€ Transformers - trying to make complex new NLP technology accessible to **everybody**.
-
-You should start thereby by diving into the original repository.
-
-Successfully running the official pretrained model in the original repository is often **the most difficult** step.
-From our experience, it is very important to spend some time getting familiar with the original code-base. You need to
-figure out the following:
-
-- Where to find the pretrained weights?
-- How to load the pretrained weights into the corresponding model?
-- How to run the tokenizer independently from the model?
-- Trace one forward pass so that you know which classes and functions are required for a simple forward pass. Usually,
- you only have to reimplement those functions.
-- Be able to locate the important components of the model: Where is the model's class? Are there model sub-classes,
- *e.g.* EncoderModel, DecoderModel? Where is the self-attention layer? Are there multiple different attention layers,
- *e.g.* *self-attention*, *cross-attention*...?
-- How can you debug the model in the original environment of the repo? Do you have to add *print* statements, can you
- work with an interactive debugger like *ipdb*, or should you use an efficient IDE to debug the model, like PyCharm?
+Transformers prefers a clean and readable code style over a more abstracted one. Some of the code style choices include:
+
-It is very important that before you start the porting process, you can **efficiently** debug code in the original
-repository! Also, remember that you are working with an open-source library, so do not hesitate to open an issue, or
-even a pull request in the original repository. The maintainers of this repository are most likely very happy about
-someone looking into their code!
-
-At this point, it is really up to you which debugging environment and strategy you prefer to use to debug the original
-model. We strongly advise against setting up a costly GPU environment, but simply work on a CPU both when starting to
-dive into the original repository and also when starting to write the π€ Transformers implementation of the model. Only
-at the very end, when the model has already been successfully ported to π€ Transformers, one should verify that the
-model also works as expected on GPU.
-
-In general, there are two possible debugging environments for running the original model
+- The forward pass is written in the `modeling.py` file, completely independent of other models in the library. To reuse a block from another model, copy the code and paste it with a `# Copied from` comment above it. For example, the `RobertaSelfAttention` class is copied from the `BertSelfAttention` class.
-- [Jupyter notebooks](https://jupyter.org/) / [google colab](https://colab.research.google.com/notebooks/intro.ipynb)
-- Local python scripts.
+ ```py
+ # Copied from transformers.models.bert.modeling_bert.BertSelfAttention with Bert->Roberta
+ class RobertaSelfAttention(nn.Module):
+ ```
-Jupyter notebooks have the advantage that they allow for cell-by-cell execution which can be helpful to better split
-logical components from one another and to have faster debugging cycles as intermediate results can be stored. Also,
-notebooks are often easier to share with other contributors, which might be very helpful if you want to ask the Hugging
-Face team for help. If you are familiar with Jupyter notebooks, we strongly recommend you work with them.
+ Refer to the [Check copies](./pr_checks#check-copies) section for more information about the `# Copied from` comment.
-The obvious disadvantage of Jupyter notebooks is that if you are not used to working with them you will have to spend
-some time adjusting to the new programming environment and you might not be able to use your known debugging tools
-anymore, like `ipdb`.
+- The code should be accessible to users from a non-native English background. Pick descriptive variable names and avoid abbreviations. For example, "activation" is preferred over "act". One-letter variable names are highly discouraged unless it's an index in a for loop.
-For each code-base, a good first step is always to load a **small** pretrained checkpoint and to be able to reproduce a
-single forward pass using a dummy integer vector of input IDs as an input. Such a script could look like this (in
-pseudocode):
+- Explicit code is preferred over shorter, more magical code, even if it's longer.
-```python
-model = BrandNewBertModel.load_pretrained_checkpoint("/path/to/checkpoint/")
-input_ids = [0, 4, 5, 2, 3, 7, 9] # vector of input ids
-original_output = model.predict(input_ids)
-```
+- Avoid subclassing [nn.Sequential](https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html). Subclass [nn.Module](https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module) instead so the code can be quickly debugged with print statements or breakpoints.
-Next, regarding the debugging strategy, there are generally a few from which to choose from:
+- Function signatures should be type-annotated (see the sketch after this list). Otherwise, good variable names are preferred because they're more readable and understandable.
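+
+A minimal sketch of the annotated-signature style (the layer and parameter names here are illustrative):
+
+```py
+from typing import Optional
+
+import torch
+from torch import nn
+
+class BrandNewBertLayer(nn.Module):
+    def forward(
+        self,
+        hidden_states: torch.Tensor,
+        attention_mask: Optional[torch.Tensor] = None,
+    ) -> torch.Tensor:
+        # descriptive variable names carry the meaning inside the body
+        return hidden_states
+```
+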
-- Decompose the original model into many small testable components and run a forward pass on each of those for
- verification
-- Decompose the original model only into the original *tokenizer* and the original *model*, run a forward pass on
- those, and use intermediate print statements or breakpoints for verification
+## Add a new model
-Again, it is up to you which strategy to choose. Often, one or the other is advantageous depending on the original code
-base.
+With some background knowledge about your model and the Transformers library, you're ready to add BrandNewBert now!
-If the original code-base allows you to decompose the model into smaller sub-components, *e.g.* if the original
-code-base can easily be run in eager mode, it is usually worth the effort to do so. There are some important advantages
-to taking the more difficult road in the beginning:
+> [!TIP]
+> Each contributor has a unique style and workflow for porting models to Transformers. It may be helpful to take a look at how [GPT2](https://medium.com/huggingface/from-tensorflow-to-pytorch-265f40ef2a28) and [WMT19](https://huggingface.co/blog/porting-fsmt) were ported.
-- at a later stage when comparing the original model to the Hugging Face implementation, you can verify automatically
- for each component individually that the corresponding component of the π€ Transformers implementation matches instead
- of relying on visual comparison via print statements
-- it can give you some rope to decompose the big problem of porting a model into smaller problems of just porting
- individual components and thus structure your work better
-- separating the model into logical meaningful components will help you to get a better overview of the model's design
- and thus to better understand the model
-- at a later stage those component-by-component tests help you to ensure that no regression occurs as you continue
- changing your code
+Some final tips to keep in mind are:
-[Lysandre's](https://gist.github.com/LysandreJik/db4c948f6b4483960de5cbac598ad4ed) integration checks for ELECTRA
-gives a nice example of how this can be done.
+- Don't reinvent the wheel! Take your time to explore existing models and tokenizers to see what you can copy and reuse. [Grep](https://www.gnu.org/software/grep/) and [ripgrep](https://github.com/BurntSushi/ripgrep) are great tools for this.
+- This is an engineering challenge more than a scientific one. Focus on the more practical aspects (set up an efficient debugging environment for example) instead of theoretical ones.
+- Don't be shy to ask for help! We are here to support you. π€
-However, if the original code-base is very complex or only allows intermediate components to be run in a compiled mode,
-it might be too time-consuming or even impossible to separate the model into smaller testable sub-components. A good
-example is [T5's MeshTensorFlow](https://github.com/tensorflow/mesh/tree/master/mesh_tensorflow) library which is
-very complex and does not offer a simple way to decompose the model into its sub-components. For such libraries, one
-often relies on verifying print statements.
+### Dev environment
-No matter which strategy you choose, the recommended procedure is often the same that you should start to debug the
-starting layers first and the ending layers last.
+Click on the **Fork** button on the [Transformers](https://github.com/huggingface/transformers) repository to create your own copy of it to work on. Then clone the repository to your local disk and add the base repository as the remote.
-It is recommended that you retrieve the output, either by print statements or sub-component functions, of the following
-layers in the following order:
+```bash
+git clone https://github.com/[your Github handle]/transformers.git
+cd transformers
+git remote add upstream https://github.com/huggingface/transformers.git
+```
-1. Retrieve the input IDs passed to the model
-2. Retrieve the word embeddings
-3. Retrieve the input of the first Transformer layer
-4. Retrieve the output of the first Transformer layer
-5. Retrieve the output of the following n - 1 Transformer layers
-6. Retrieve the output of the whole BrandNewBert Model
+Create a virtual environment and do an [editable install](./installation#editable-install) of the library with the "dev" or development dependencies.
-Input IDs should thereby consists of an array of integers, *e.g.* `input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19]`
+```bash
+python -m venv .env
+source .env/bin/activate
+pip install -e ".[dev]"
+```
-The outputs of the following layers often consist of multi-dimensional float arrays and can look like this:
+As Transformers grows, so does its number of optional dependencies, and this command may fail. In that case, install the "quality" dependencies instead. Also make sure you have a deep learning framework (PyTorch, TensorFlow, or Flax) installed.
+
+```bash
+pip install -e ".[quality]"
```
-[[
- [-0.1465, -0.6501, 0.1993, ..., 0.1451, 0.3430, 0.6024],
- [-0.4417, -0.5920, 0.3450, ..., -0.3062, 0.6182, 0.7132],
- [-0.5009, -0.7122, 0.4548, ..., -0.3662, 0.6091, 0.7648],
- ...,
- [-0.5613, -0.6332, 0.4324, ..., -0.3792, 0.7372, 0.9288],
- [-0.5416, -0.6345, 0.4180, ..., -0.3564, 0.6992, 0.9191],
- [-0.5334, -0.6403, 0.4271, ..., -0.3339, 0.6533, 0.8694]]],
+
+Return to the parent directory and clone and install the original BrandNewBert repository.
+
+```bash
+git clone https://github.com/org_that_created_brand_new_bert_org/brand_new_bert.git
+cd brand_new_bert
+pip install -e .
```
-We expect that every model added to π€ Transformers passes a couple of integration tests, meaning that the original
-model and the reimplemented version in π€ Transformers have to give the exact same output up to a precision of 0.001!
-Since it is normal that the exact same model written in different libraries can give a slightly different output
-depending on the library framework, we accept an error tolerance of 1e-3 (0.001). It is not enough if the model gives
-nearly the same output, they have to be almost identical. Therefore, you will certainly compare the intermediate
-outputs of the π€ Transformers version multiple times against the intermediate outputs of the original implementation of
-*brand_new_bert* in which case an **efficient** debugging environment of the original repository is absolutely
-important. Here is some advice to make your debugging environment as efficient as possible.
-
-- Find the best way of debugging intermediate results. Is the original repository written in PyTorch? Then you should
- probably take the time to write a longer script that decomposes the original model into smaller sub-components to
- retrieve intermediate values. Is the original repository written in Tensorflow 1? Then you might have to rely on
- TensorFlow print operations like [tf.print](https://www.tensorflow.org/api_docs/python/tf/print) to output
- intermediate values. Is the original repository written in Jax? Then make sure that the model is **not jitted** when
- running the forward pass, *e.g.* check-out [this link](https://github.com/google/jax/issues/196).
-- Use the smallest pretrained checkpoint you can find. The smaller the checkpoint, the faster your debug cycle
- becomes. It is not efficient if your pretrained model is so big that your forward pass takes more than 10 seconds.
- In case only very large checkpoints are available, it might make more sense to create a dummy model in the new
- environment with randomly initialized weights and save those weights for comparison with the π€ Transformers version
- of your model
-- Make sure you are using the easiest way of calling a forward pass in the original repository. Ideally, you want to
- find the function in the original repository that **only** calls a single forward pass, *i.e.* that is often called
- `predict`, `evaluate`, `forward` or `__call__`. You don't want to debug a function that calls `forward`
- multiple times, *e.g.* to generate text, like `autoregressive_sample`, `generate`.
-- Try to separate the tokenization from the model's *forward* pass. If the original repository shows examples where
- you have to input a string, then try to find out where in the forward call the string input is changed to input ids
- and start from this point. This might mean that you have to possibly write a small script yourself or change the
- original code so that you can directly input the ids instead of an input string.
-- Make sure that the model in your debugging setup is **not** in training mode, which often causes the model to yield
- random outputs due to multiple dropout layers in the model. Make sure that the forward pass in your debugging
- environment is **deterministic** so that the dropout layers are not used. Or use *transformers.utils.set_seed*
- if the old and new implementations are in the same framework.
-
-The following section gives you more specific details/tips on how you can do this for *brand_new_bert*.
-
-### 5.-14. Port BrandNewBert to π€ Transformers
-
-Next, you can finally start adding new code to π€ Transformers. Go into the clone of your π€ Transformers' fork:
+Return to your clone of Transformers to begin porting BrandNewBert.
```bash
cd transformers
```
-In the special case that you are adding a model whose architecture exactly matches the model architecture of an
-existing model you only have to add a conversion script as described in [this section](#write-a-conversion-script).
-In this case, you can just re-use the whole model architecture of the already existing model.
+> [!TIP]
+> If the model architecture is identical to an existing model, skip ahead to add a [conversion script](#conversion-script), because you can reuse the architecture of the existing model.
-Otherwise, let's start generating a new model. We recommend using the following script to add a model starting from
-an existing model:
+Run the command below to start the questionnaire and fill in some basic information about the new model. This command jumpstarts the process by automatically generating some model code that you'll need to adapt.
```bash
transformers-cli add-new-model-like
```
-You will be prompted with a questionnaire to fill in the basic information of your model.
+### Create a pull request
-**Open a Pull Request on the main huggingface/transformers repo**
+Before you start adapting the code, create a pull request to track your progress and get feedback from the Transformers team. Title your pull request "[WIP] Add BrandNewBert" so it's clear that this is a work in progress.
-Before starting to adapt the automatically generated code, now is the time to open a βWork in progress (WIP)β pull
-request, *e.g.* β[WIP] Add *brand_new_bert*β, in π€ Transformers so that you and the Hugging Face team can work
-side-by-side on integrating the model into π€ Transformers.
+Create a branch with a descriptive name from your main branch.
-You should do the following:
+```bash
+git checkout -b add_brand_new_bert
+```
-1. Create a branch with a descriptive name from your main branch
+Commit the code, and then fetch and rebase on the main branch.
- ```bash
- git checkout -b add_brand_new_bert
- ```
+```bash
+git add .
+git commit
+git fetch upstream
+git rebase upstream/main
+```
-2. Commit the automatically generated code:
+Push any changes to your branch and click on **Compare & pull request** to open a pull request on GitHub. Open the pull request as a *draft* to signal it's a work in progress.
- ```bash
- git add .
- git commit
- ```
+```bash
+git push -u origin add_brand_new_bert
+```
-3. Fetch and rebase to current main
+Include the GitHub handles of relevant Hugging Face team members in the pull request for questions, feedback, comments, and reviews. Direct team members to specific parts of the code by clicking on the **Files changed** tab, and then clicking on **+** to the left of the line number to add a comment. When a question or problem is solved, click on **Resolve** to indicate the issue is resolved. This keeps the conversation organized and clean.
- ```bash
- git fetch upstream
- git rebase upstream/main
- ```
+Remember to periodically commit and push your work, and update your work with the current main branch.
-4. Push the changes to your account using:
+```bash
+git fetch upstream
+git merge upstream/main
+```
- ```bash
- git push -u origin a-descriptive-name-for-my-changes
- ```
+### Run original checkpoint
+
+Before adapting any code, run a pretrained checkpoint in the *original* repository first. This is often the most difficult step because original implementations can be "researchy", with missing documentation and difficult to understand code. Spend some time figuring out where the pretrained weights are, how to load them into the model, how to run the tokenizer independently of the model, and which classes and functions are required for a single forward pass. A CPU and the smallest available checkpoint are enough; GPU support only needs to be verified at the very end.
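+
+A good first step is to reproduce a single forward pass with a dummy vector of input ids. In pseudocode, such a script could look like this.
+
+```py
+model = BrandNewBertModel.load_pretrained_checkpoint("/path/to/checkpoint/")
+input_ids = [0, 4, 5, 2, 3, 7, 9]  # vector of input ids
+original_output = model.predict(input_ids)
+```
+
+Make sure the model is **not** in training mode so dropout doesn't make the outputs random, and debug the model layer by layer, from the starting layers first to the ending layers last.
+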
-5. Once you are satisfied, go to the webpage of your fork on GitHub. Click on βPull requestβ. Make sure to add the
- GitHub handle of some members of the Hugging Face team as reviewers, so that the Hugging Face team gets notified for
- future changes.
+### Adapt the model code
-6. Change the PR into a draft by clicking on βConvert to draftβ on the right of the GitHub pull request web page.
+The `transformers-cli add-new-model-like` command should have generated a model and configuration file.
-In the following, whenever you have made some progress, don't forget to commit your work and push it to your account so
-that it shows in the pull request. Additionally, you should make sure to update your work with the current main from
-time to time by doing:
+- `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py`
+- `src/transformers/models/brand_new_bert/configuration_brand_new_bert.py`
-```bash
-git fetch upstream
-git merge upstream/main
-```
+The automatically generated code in the `modeling.py` file will have the same architecture as BERT if you answered that it's an encoder-only model, or the same architecture as BART if you answered that it's an encoder-decoder model. The generated code is just a starting point. Based on your research of the new model, you'll need to implement the model-specific changes by adapting the generated code. This may involve changes to the self-attention layer, the order of the normalization layer, and so on.
-In general, all questions you might have regarding the model or your implementation should be asked in your PR and
-discussed/solved in the PR. This way, the Hugging Face team will always be notified when you are committing new code or
-if you have a question. It is often very helpful to point the Hugging Face team to your added code so that the Hugging
-Face team can efficiently understand your problem or question.
-
-To do so, you can go to the βFiles changedβ tab where you see all of your changes, go to a line regarding which you
-want to ask a question, and click on the β+β symbol to add a comment. Whenever a question or problem has been solved,
-you can click on the βResolveβ button of the created comment.
-
-In the same way, the Hugging Face team will open comments when reviewing your code. We recommend asking most questions
-on GitHub on your PR. For some very general questions that are not very useful for the public, feel free to ping the
-Hugging Face team by Slack or email.
-
-**5. Adapt the generated models code for brand_new_bert**
-
-At first, we will focus only on the model itself and not care about the tokenizer. All the relevant code should be
-found in the generated files `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` and
-`src/transformers/models/brand_new_bert/configuration_brand_new_bert.py`.
-
-Now you can finally start coding :). The generated code in
-`src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` will either have the same architecture as BERT if
-it's an encoder-only model or BART if it's an encoder-decoder model. At this point, you should remind yourself what
-you've learned in the beginning about the theoretical aspects of the model: *How is the model different from BERT or
-BART?*". Implement those changes which often means changing the *self-attention* layer, the order of the normalization
-layer, etc⦠Again, it is often useful to look at the similar architecture of already existing models in Transformers to
-get a better feeling of how your model should be implemented.
-
-**Note** that at this point, you don't have to be very sure that your code is fully correct or clean. Rather, it is
-advised to add a first *unclean*, copy-pasted version of the original code to
-`src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` until you feel like all the necessary code is
-added. From our experience, it is much more efficient to quickly add a first version of the required code and
-improve/correct the code iteratively with the conversion script as described in the next section. The only thing that
-has to work at this point is that you can instantiate the π€ Transformers implementation of *brand_new_bert*, *i.e.* the
-following command should work:
-
-```python
-from transformers import BrandNewBertModel, BrandNewBertConfig
-
-model = BrandNewBertModel(BrandNewBertConfig())
-```
+At this point, your code doesn't have to be clean or even fully correct! It is more efficient to quickly create a first draft and then iteratively improve on it. The only thing that matters is that your model can be instantiated from Transformers. The command below creates a model from the configuration with random weights, verifying that the `__init__` method works.
-The above command will create a model according to the default parameters as defined in `BrandNewBertConfig()` with
-random weights, thus making sure that the `init()` methods of all components works.
+```py
+from transformers import BrandNewBertModel, BrandNewBertConfig
+model = BrandNewBertModel(BrandNewBertConfig())
+```
-Note that all random initialization should happen in the `_init_weights` method of your `BrandnewBertPreTrainedModel`
-class. It should initialize all leaf modules depending on the variables of the config. Here is an example with the
-BERT `_init_weights` method:
+Random initialization occurs in `BrandNewBertPreTrainedModel`'s `_init_weights` method. All leaf modules are initialized depending on the configuration's variables. Here is an example with BERT's `_init_weights` method.
```py
def _init_weights(self, module):
@@ -520,9 +237,9 @@ def _init_weights(self, module):
module.weight.data.fill_(1.0)
```
-You can have some more custom schemes if you need a special initialization for some modules. For instance, in
-`Wav2Vec2ForPreTraining`, the last two linear layers need to have the initialization of the regular PyTorch `nn.Linear`
-but all the other ones should use an initialization as above. This is coded like this:
+The initialization scheme can look different if you need to adapt it to your model. For example, [`Wav2Vec2ForPreTraining`] requires the last two linear layers to use the initialization of the regular PyTorch [nn.Linear](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html) while all other layers use the initialization above.
+
+The `_is_hf_initialized` flag makes sure a submodule is only initialized once. Setting `module.project_q._is_hf_initialized` and `module.project_hid._is_hf_initialized` to `True` ensures the custom initialization is not overridden later. The `_init_weights` function won't be applied to these modules.
```py
def _init_weights(self, module):
@@ -538,30 +255,34 @@ def _init_weights(self, module):
module.bias.data.zero_()
```
-The `_is_hf_initialized` flag is internally used to make sure we only initialize a submodule once. By setting it to
-`True` for `module.project_q` and `module.project_hid`, we make sure the custom initialization we did is not overridden later on,
-the `_init_weights` function won't be applied to them.
+### Conversion script
-**6. Write a conversion script**
+The original checkpoint must be converted to a Transformers-compatible checkpoint.
-Next, you should write a conversion script that lets you convert the checkpoint you used to debug *brand_new_bert* in
-the original repository to a checkpoint compatible with your just created π€ Transformers implementation of
-*brand_new_bert*. It is not advised to write the conversion script from scratch, but rather to look through already
-existing conversion scripts in π€ Transformers for one that has been used to convert a similar model that was written in
-the same framework as *brand_new_bert*. Usually, it is enough to copy an already existing conversion script and
-slightly adapt it for your use case. Don't hesitate to ask the Hugging Face team to point you to a similar already
-existing conversion script for your model.
+> [!TIP]
+> Try looking for an existing conversion script to copy, adapt, and reuse for your model!
+>
+> - If you're porting a model from TensorFlow to PyTorch, a good starting point may be the BERT [conversion script](https://github.com/huggingface/transformers/blob/7acfa95afb8194f8f9c1f4d2c6028224dbed35a2/src/transformers/models/bert/modeling_bert.py#L91).
+> - If you're porting a model from PyTorch to PyTorch, a good starting point may be the BART [conversion script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/convert_bart_original_pytorch_checkpoint_to_pytorch.py).
-- If you are porting a model from TensorFlow to PyTorch, a good starting point might be BERT's conversion script [here](https://github.com/huggingface/transformers/blob/7acfa95afb8194f8f9c1f4d2c6028224dbed35a2/src/transformers/models/bert/modeling_bert.py#L91)
-- If you are porting a model from PyTorch to PyTorch, a good starting point might be BART's conversion script [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/convert_bart_original_pytorch_checkpoint_to_pytorch.py)
+Make sure **all** required weights are initialized, and print out the checkpoint weights that weren't used for initialization to verify the model was converted correctly (see the sketch below).
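+
+A minimal sketch of such a check, assuming the converted weights live in a `converted_state_dict` dictionary (the name is illustrative):
+
+```py
+loaded_keys = set(converted_state_dict.keys())
+model_keys = set(model.state_dict().keys())
+
+# checkpoint weights that were never assigned to a model parameter
+print("Unused checkpoint weights:", loaded_keys - model_keys)
+# model parameters that kept their random initialization
+print("Uninitialized model weights:", model_keys - loaded_keys)
+```
+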
-In the following, we'll quickly explain how PyTorch models store layer weights and define layer names. In PyTorch, the
-name of a layer is defined by the name of the class attribute you give the layer. Let's define a dummy model in
-PyTorch, called `SimpleModel` as follows:
+You may encounter wrong shape statements or wrong name assignments during the conversion. This is most likely because of incorrect parameters in `BrandNewBertConfig`, the wrong architecture, a bug in the `__init__` method of your implementation, or because one of the checkpoint weights needs to be transposed.
-```python
-from torch import nn
+Keep iterating with the [Adapt the model code](#adapt-the-model-code) section until all the checkpoint weights are correctly loaded. Once you can load a checkpoint in your model, save it to a folder. This should contain a `model.safetensors` file and a `config.json` file.
+
+```py
+model.save_pretrained("/path/to/converted/checkpoint/folder")
+```
+
+To help with conversion, the next section briefly describes how PyTorch models store and define layer weights and names.
+
+#### PyTorch layer weights and names
+
+It is helpful to create a basic PyTorch model to understand how layer names are defined and weights are initialized.
+
+```py
+from torch import nn
class SimpleModel(nn.Module):
def __init__(self):
@@ -571,18 +292,11 @@ class SimpleModel(nn.Module):
self.layer_norm = nn.LayerNorm(10)
```
-Now we can create an instance of this model definition which will fill all weights: `dense`, `intermediate`,
-`layer_norm` with random weights. We can print the model to see its architecture
+PyTorch layer names are defined by the class attribute name of the layer (`dense`, `intermediate`, `layer_norm`). Create an instance of `SimpleModel` to fill all the layers with random weights.
-```python
+```py
model = SimpleModel()
-
print(model)
-```
-
-This will print out the following:
-
-```
SimpleModel(
(dense): Linear(in_features=10, out_features=10, bias=True)
(intermediate): Linear(in_features=10, out_features=10, bias=True)
@@ -590,16 +304,10 @@ SimpleModel(
)
```
-We can see that the layer names are defined by the name of the class attribute in PyTorch. You can print out the weight
-values of a specific layer:
+The weight values of a specific layer are randomly initialized.
-```python
+```py
print(model.dense.weight.data)
-```
-
-to see that the weights were randomly initialized
-
-```
tensor([[-0.0818, 0.2207, -0.0749, -0.0030, 0.0045, -0.1569, -0.1598, 0.0212,
-0.2077, 0.2157],
[ 0.1044, 0.0201, 0.0990, 0.2482, 0.3116, 0.2509, 0.2866, -0.2190,
@@ -622,270 +330,182 @@ tensor([[-0.0818, 0.2207, -0.0749, -0.0030, 0.0045, -0.1569, -0.1598, 0.0212,
0.2220, 0.2358]]).
```
-In the conversion script, you should fill those randomly initialized weights with the exact weights of the
-corresponding layer in the checkpoint. *E.g.*
+In the conversion script, the random weights should be replaced with the exact weights from the corresponding layer in the original checkpoint.
-```python
-# retrieve matching layer weights, e.g. by
-# recursive algorithm
+```py
+# retrieve matching layer weights with recursive algorithm
layer_name = "dense"
pretrained_weight = array_of_dense_layer
model_pointer = getattr(model, "dense")
-
model_pointer.weight.data = torch.from_numpy(pretrained_weight)
```
-While doing so, you must verify that each randomly initialized weight of your PyTorch model and its corresponding
-pretrained checkpoint weight exactly match in both **shape and name**. To do so, it is **necessary** to add assert
-statements for the shape and print out the names of the checkpoints weights. E.g. you should add statements like:
+Verify that each randomly initialized weight and its corresponding pretrained checkpoint weight have an identical **shape** and **name**. Add assert statements for the shape and print out the checkpoint weight names.
-```python
+```py
assert (
model_pointer.weight.shape == pretrained_weight.shape
), f"Pointer shape of random weight {model_pointer.shape} and array shape of checkpoint weight {pretrained_weight.shape} mismatched"
+
+logger.info(f"Initialize PyTorch weight {layer_name} from {pretrained_weight.name}")
```
-Besides, you should also print out the names of both weights to make sure they match, *e.g.*
+When the shape or name don't match, you may have assigned the incorrect checkpoint weight to a randomly initialized layer. An incorrect shape may be because BrandNewBert's configured parameters don't exactly match the original model's parameters. But it could also be that the PyTorch layer implementation requires the weights to be transposed first (see the sketch below).
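+
+A minimal sketch of such a transpose, assuming the original framework stores the 2D linear weights transposed relative to PyTorch:
+
+```py
+# transpose the checkpoint weight before assigning it to the PyTorch layer
+model_pointer.weight.data = torch.from_numpy(pretrained_weight).T
+```
+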
-```python
-logger.info(f"Initialize PyTorch weight {layer_name} from {pretrained_weight.name}")
+### Implement the forward pass
+
+With the model loading correctly, you can now implement the forward pass. It takes some inputs and returns the model output.
+
+```py
+model = BrandNewBertModel.from_pretrained("/path/to/converted/checkpoint/folder")
+input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19]
+output = model(input_ids).last_hidden_states
```
-If either the shape or the name doesn't match, you probably assigned the wrong checkpoint weight to a randomly
-initialized layer of the π€ Transformers implementation.
+Don't be discouraged if your forward pass doesn't produce output identical to the original model or if it throws an error! First check that the forward pass doesn't throw any errors. Errors are often caused by wrong dimensions (dimensionality mismatch) or by the wrong data type ([torch.long](https://pytorch.org/docs/stable/generated/torch.Tensor.long.html) instead of [torch.float32](https://pytorch.org/docs/stable/tensors.html)).
-An incorrect shape is most likely due to an incorrect setting of the config parameters in `BrandNewBertConfig()` that
-do not exactly match those that were used for the checkpoint you want to convert. However, it could also be that
-PyTorch's implementation of a layer requires the weight to be transposed beforehand.
+Your outputs should be equivalent to a precision of *1e-3*. First ensure the output shapes are identical, then check the output values. Common reasons why the outputs aren't identical include:
-Finally, you should also check that **all** required weights are initialized and print out all checkpoint weights that
-were not used for initialization to make sure the model is correctly converted. It is completely normal, that the
-conversion trials fail with either a wrong shape statement or a wrong name assignment. This is most likely because either
-you used incorrect parameters in `BrandNewBertConfig()`, have a wrong architecture in the π€ Transformers
-implementation, you have a bug in the `init()` functions of one of the components of the π€ Transformers
-implementation or you need to transpose one of the checkpoint weights.
+- Some layers were not added (an activation layer or a residual connection).
+- The word embedding matrix is not tied.
+- The wrong positional embeddings are used because the original implementation includes an offset.
+- Dropout is applied during the forward pass. Fix this error by making sure `model.training` is `False` and passing `self.training` to [torch.nn.functional.dropout](https://pytorch.org/docs/stable/nn.functional.html?highlight=dropout#torch.nn.functional.dropout) (see the sketch after this list).
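+
+A minimal sketch of the dropout call inside a module's forward pass (the attribute names here are illustrative):
+
+```py
+import torch.nn.functional as F
+
+# gating on self.training means dropout is disabled once model.eval() is called,
+# making the forward pass deterministic
+hidden_states = F.dropout(hidden_states, p=self.config.hidden_dropout_prob, training=self.training)
+```
+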
-This step should be iterated with the previous step until all weights of the checkpoint are correctly loaded in the
-Transformers model. Having correctly loaded the checkpoint into the π€ Transformers implementation, you can then save
-the model under a folder of your choice `/path/to/converted/checkpoint/folder` that should then contain both a
-`pytorch_model.bin` file and a `config.json` file:
+Compare the forward pass of the original model and your implementation to check if there are any differences. Ideally, debug and print out the intermediate outputs of both implementations of the forward pass to pinpoint where the original implementation differs from yours.
-```python
-model.save_pretrained("/path/to/converted/checkpoint/folder")
-```
+1. Make sure the hardcoded `input_ids` in both implementations are identical.
+2. Verify the outputs of the first transformation of `input_ids` (usually the word embeddings) are identical, and work your way through to the last layer.
-**7. Implement the forward pass**
+Any difference between the two implementations should point to the bug in your implementation.
-Having managed to correctly load the pretrained weights into the π€ Transformers implementation, you should now make
-sure that the forward pass is correctly implemented. In [Get familiar with the original repository](#3-4-run-a-pretrained-checkpoint-using-the-original-repository), you have already created a script that runs a forward
-pass of the model using the original repository. Now you should write an analogous script using the π€ Transformers
-implementation instead of the original one. It should look as follows:
+One of the best strategies is to add many print statements to the same positions in both implementations, and then successively remove them when they output identical values for the intermediate outputs.
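+
+For example, a sketch of matched print statements in both forward passes (the tensor names here are illustrative):
+
+```py
+# placed at the same position in the original and the Transformers implementation
+print("word embeddings:", embeddings[0, :5, :5])
+print("output of first layer:", first_layer_output[0, :5, :5])
+```
+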
-```python
-model = BrandNewBertModel.from_pretrained("/path/to/converted/checkpoint/folder")
-input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19]
-output = model(input_ids).last_hidden_states
+When both implementations produce the same output, verify the outputs are within the precision of *1e-3*.
+
+```py
+torch.allclose(original_output, output, atol=1e-3)
```
-It is very likely that the π€ Transformers implementation and the original model implementation don't give the exact
-same output the very first time or that the forward pass throws an error. Don't be disappointed - it's expected! First,
-you should make sure that the forward pass doesn't throw any errors. It often happens that the wrong dimensions are
-used leading to a *Dimensionality mismatch* error or that the wrong data type object is used, *e.g.* `torch.long`
-instead of `torch.float32`. Don't hesitate to ask the Hugging Face team for help, if you don't manage to solve
-certain errors.
-
-The final part to make sure the π€ Transformers implementation works correctly is to ensure that the outputs are
-equivalent to a precision of `1e-3`. First, you should ensure that the output shapes are identical, *i.e.*
-`outputs.shape` should yield the same value for the script of the π€ Transformers implementation and the original
-implementation. Next, you should make sure that the output values are identical as well. This one of the most difficult
-parts of adding a new model. Common mistakes why the outputs are not identical are:
-
-- Some layers were not added, *i.e.* an *activation* layer was not added, or the residual connection was forgotten
-- The word embedding matrix was not tied
-- The wrong positional embeddings are used because the original implementation uses on offset
-- Dropout is applied during the forward pass. To fix this make sure *model.training is False* and that no dropout
- layer is falsely activated during the forward pass, *i.e.* pass *self.training* to [PyTorch's functional dropout](https://pytorch.org/docs/stable/nn.functional.html?highlight=dropout#torch.nn.functional.dropout)
-
-The best way to fix the problem is usually to look at the forward pass of the original implementation and the π€
-Transformers implementation side-by-side and check if there are any differences. Ideally, you should debug/print out
-intermediate outputs of both implementations of the forward pass to find the exact position in the network where the π€
-Transformers implementation shows a different output than the original implementation. First, make sure that the
-hard-coded `input_ids` in both scripts are identical. Next, verify that the outputs of the first transformation of
-the `input_ids` (usually the word embeddings) are identical. And then work your way up to the very last layer of the
-network. At some point, you will notice a difference between the two implementations, which should point you to the bug
-in the π€ Transformers implementation. From our experience, a simple and efficient way is to add many print statements
-in both the original implementation and π€ Transformers implementation, at the same positions in the network
-respectively, and to successively remove print statements showing the same values for intermediate presentations.
-
-When you're confident that both implementations yield the same output, verify the outputs with
-`torch.allclose(original_output, output, atol=1e-3)`, you're done with the most difficult part! Congratulations - the
-work left to be done should be a cakewalk π.
-
-**8. Adding all necessary model tests**
-
-At this point, you have successfully added a new model. However, it is very much possible that the model does not yet
-fully comply with the required design. To make sure, the implementation is fully compatible with π€ Transformers, all
-common tests should pass. The Cookiecutter should have automatically added a test file for your model, probably under
-the same `tests/models/brand_new_bert/test_modeling_brand_new_bert.py`. Run this test file to verify that all common
-tests pass:
+This is typically the most difficult part of the process. Congratulations if you've made it this far!
+
+If you're stuck or struggling with this step, don't hesitate to ask for help on your pull request.
+
+### Add model tests
+
+While the model works, you still need to add tests to ensure it is compatible with Transformers and all the tests pass. Tests are important because they help users understand your work by looking at specific tests, and because they prevent your model from breaking in the future if any changes are made.
+
+[Cookiecutter](https://cookiecutter.readthedocs.io/en/stable/) should have added a test file for your model. Run the test file below to make sure all common tests pass.
```bash
pytest tests/models/brand_new_bert/test_modeling_brand_new_bert.py
```
-Having fixed all common tests, it is now crucial to ensure that all the nice work you have done is well tested, so that
+The integration tests should be added first because they serve the same purpose as the debugging scripts you used earlier to implement the new model in Transformers. A template of those model tests, `BrandNewBertModelIntegrationTests`, was added by Cookiecutter and only needs to be filled out by you. To ensure it passes, run the following command.
-- a) The community can easily understand your work by looking at specific tests of *brand_new_bert*
-- b) Future changes to your model will not break any important feature of the model.
-
-At first, integration tests should be added. Those integration tests essentially do the same as the debugging scripts
-you used earlier to implement the model to π€ Transformers. A template of those model tests has already added by the
-Cookiecutter, called `BrandNewBertModelIntegrationTests` and only has to be filled out by you. To ensure that those
-tests are passing, run
+
```bash
RUN_SLOW=1 pytest -sv tests/models/brand_new_bert/test_modeling_brand_new_bert.py::BrandNewBertModelIntegrationTests
```
-
-
-In case you are using Windows, you should replace `RUN_SLOW=1` with `SET RUN_SLOW=1`
-
-
+
+On Windows, replace `RUN_SLOW=1` with `SET RUN_SLOW=1`.
+
-Second, all features that are special to *brand_new_bert* should be tested additionally in a separate test under
-`BrandNewBertModelTester`/`BrandNewBertModelTest`. This part is often forgotten but is extremely useful in two
-ways:
+```bash
+SET RUN_SLOW=1 pytest -sv tests/models/brand_new_bert/test_modeling_brand_new_bert.py::BrandNewBertModelIntegrationTests
+```
-- It helps to transfer the knowledge you have acquired during the model addition to the community by showing how the
- special features of *brand_new_bert* should work.
-- Future contributors can quickly test changes to the model by running those special tests.
+
+All features unique to BrandNewBert should be tested in a separate test under `BrandNewBertModelTester/BrandNewBertModelTest`. This test is often overlooked, but it is extremely important because:
-**9. Implement the tokenizer**
+- it helps transfer the knowledge you acquired during the process to the community by showing how the novel features of the new model work
+- future contributors can quickly test changes to the model by running these special tests
-Next, we should add the tokenizer of *brand_new_bert*. Usually, the tokenizer is equivalent to or very similar to an
-already existing tokenizer of π€ Transformers.
+### Implement tokenizer
-It is very important to find/extract the original tokenizer file and to manage to load this file into the π€
-Transformers' implementation of the tokenizer.
+With the model out of the way, it's time to focus on the tokenizer. The tokenizer should be identical or very similar to an existing tokenizer in Transformers.
-To ensure that the tokenizer works correctly, it is recommended to first create a script in the original repository
-that inputs a string and returns the `input_ids`. It could look similar to this (in pseudo-code):
+Find and load the original tokenizer file into your implementation. Create a script in the original repository that inputs a string and returns the `input_ids`. The pseudocode should look similar to this.
-```python
+```py
input_str = "This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words."
model = BrandNewBertModel.load_pretrained_checkpoint("/path/to/checkpoint/")
input_ids = model.tokenize(input_str)
```
-You might have to take a deeper look again into the original repository to find the correct tokenizer function or you
-might even have to do changes to your clone of the original repository to only output the `input_ids`. Having written
-a functional tokenization script that uses the original repository, an analogous script for π€ Transformers should be
-created. It should look similar to this:
+You may need to search the original repository to find the correct tokenizer function or modify the existing tokenizer in your clone of the original repository to only return the `input_ids`. The script for your tokenizer should look something like this.
-```python
+```py
from transformers import BrandNewBertTokenizer
input_str = "This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words."
-
tokenizer = BrandNewBertTokenizer.from_pretrained("/path/to/tokenizer/folder/")
-
input_ids = tokenizer(input_str).input_ids
```
-When both `input_ids` yield the same values, as a final step a tokenizer test file should also be added.
+When both implementations have the same `input_ids`, add a tokenizer test file. Like the modeling test files, the tokenizer test files should contain a couple of hardcoded integration tests (sketched below).
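+
+A minimal sketch of such a hardcoded test, to live inside the generated tokenizer test class (the checkpoint name and expected ids here are illustrative; take the expected values from the original tokenizer):
+
+```py
+def test_tokenizer_integration(self):
+    tokenizer = BrandNewBertTokenizer.from_pretrained("brandy/brand_new_bert")
+    input_ids = tokenizer("This is a test string.").input_ids
+    # expected ids produced by the original tokenizer implementation
+    self.assertListEqual(input_ids, [0, 4, 5, 2, 3, 7, 9])
+```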
+
+### Run integration tests
+
+Now that you have a model and tokenizer, add end-to-end integration tests using both the model and tokenizer to `tests/models/brand_new_bert/test_modeling_brand_new_bert.py`.
+
+The test should provide a meaningful text-to-text example showing the model works as expected (see the sketch below). For example, you can include a source-to-target translation pair, an article-to-summary pair, or a question-to-answer pair.
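+
+A minimal sketch of such a test, assuming a summarization checkpoint (the checkpoint name, head class, and expected summary here are illustrative):
+
+```py
+def test_summarization_integration(self):
+    tokenizer = BrandNewBertTokenizer.from_pretrained("brandy/brand_new_bert")
+    model = BrandNewBertForConditionalGeneration.from_pretrained("brandy/brand_new_bert")
+
+    input_ids = tokenizer("A long news article ...", return_tensors="pt").input_ids
+    generated_ids = model.generate(input_ids)
+    summary = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
+
+    # expected summary produced by the original model
+    self.assertEqual(summary, "The expected summary from the original model.")
+```
+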
-Analogous to the modeling test files of *brand_new_bert*, the tokenization test files of *brand_new_bert* should
-contain a couple of hard-coded integration tests.
+If the checkpoint hasn't been finetuned on a downstream task, then the model tests will suffice.
-**10. Run End-to-end integration tests**
+Finally, try to make sure your tests can run on a GPU by adding `.to(self.device)` statements to the model's internal tensors. Don't worry if you don't have access to a GPU; we can take care of running those tests for you.
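+
+For example, a tensor created inside the forward pass defaults to CPU unless you explicitly place it on the model's device. A sketch of the kind of fix this usually involves (`position_ids` is just an illustrative name):
+
+```py
+# fails on GPU because the new tensor is created on CPU
+position_ids = torch.arange(seq_length)
+# follows the model's device instead
+position_ids = torch.arange(seq_length).to(self.device)
+```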
-Having added the tokenizer, you should also add a couple of end-to-end integration tests using both the model and the
-tokenizer to `tests/models/brand_new_bert/test_modeling_brand_new_bert.py` in 🤗 Transformers.
-Such a test should show on a meaningful
-text-to-text sample that the 🤗 Transformers implementation works as expected. A meaningful text-to-text sample can
-include *e.g.* a source-to-target-translation pair, an article-to-summary pair, a question-to-answer pair, etc… If none
-of the ported checkpoints has been fine-tuned on a downstream task it is enough to simply rely on the model tests. In a
-final step to ensure that the model is fully functional, it is advised that you also run all tests on GPU. It can
-happen that you forgot to add some `.to(self.device)` statements to internal tensors of the model, which in such a
-test would show in an error. In case you have no access to a GPU, the Hugging Face team can take care of running those
-tests for you.
+### Add documentation
-**11. Add Docstring**
+Your model is only useful if users know how to use it. This is why it's important to add documentation and docstrings. Cookiecutter added a template file, `docs/source/model_doc/brand_new_bert.md`, that you can fill out with information about your model.
-Now, all the necessary functionality for *brand_new_bert* is added - you're almost done! The only thing left to add is
-a nice docstring and a doc page. The Cookiecutter should have added a template file called
-`docs/source/model_doc/brand_new_bert.md` that you should fill out. Users of your model will usually first look at
-this page before using your model. Hence, the documentation must be understandable and concise. It is very useful for
-the community to add some *Tips* to show how the model should be used. Don't hesitate to ping the Hugging Face team
-regarding the docstrings.
+This is generally a user's first interaction with a model, so the documentation should be clear and concise. It is often very useful to add examples of how the model should be used.
-Next, make sure that the docstring added to `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` is
-correct and included all necessary inputs and outputs. We have a detailed guide about writing documentation and our docstring format [here](writing-documentation). It is always good to remind oneself that documentation should
-be treated at least as carefully as the code in 🤗 Transformers since the documentation is usually the first contact
-point of the community with the model.
+Make sure docstrings are added to `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` and that they describe all necessary inputs and outputs. Review our [guide](https://github.com/huggingface/transformers/tree/main/docs#writing-documentation---specification) for writing documentation and docstrings.
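+
+For example, an abbreviated sketch of how the forward method's inputs are typically documented (the exact arguments depend on your model):
+
+```py
+def forward(self, input_ids: torch.LongTensor, attention_mask: Optional[torch.Tensor] = None):
+    r"""
+    Args:
+        input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
+            Indices of input sequence tokens in the vocabulary.
+        attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
+            Mask to avoid performing attention on padding token indices.
+    """
+    ...
+```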
-**Code refactor**
+### Refactor
-Great, now you have added all the necessary code for *brand_new_bert*. At this point, you should correct some potential
-incorrect code style by running:
+Time to tidy things up and make sure the code style is consistent with the rest of the library. Run the following command to automatically fix incorrect styles.
```bash
make style
```
-and verify that your coding style passes the quality check:
+To verify the code style passes quality checks, run the command below.
```bash
make quality
```
-There are a couple of other very strict design tests in 🤗 Transformers that might still be failing, which shows up in
-the tests of your pull request. This is often because of some missing information in the docstring or some incorrect
-naming. The Hugging Face team will surely help you if you're stuck here.
+There may be other failing tests or checks (a missing docstring or incorrect naming) on your pull request due to Transformers' strict design tests. We can help you with these issues if you're stuck.
-Lastly, it is always a good idea to refactor one's code after having ensured that the code works correctly. With all
-tests passing, now it's a good time to go over the added code again and do some refactoring.
+After ensuring the code runs correctly, you may want to refactor it to make it more readable or cleaner.
-You have now finished the coding part, congratulation! 🎉 You are Awesome! 😎
+### Upload to the Hub
-**12. Upload the models to the model hub**
+Convert and upload all checkpoints to the [Hub](https://hf.co/models). Add a model card to provide more transparency and context about the model. The model card should highlight specific characteristics of a checkpoint, how the model was trained, and code examples of how to use it.
-In this final part, you should convert and upload all checkpoints to the model hub and add a model card for each
-uploaded model checkpoint. You can get familiar with the hub functionalities by reading our [Model sharing and uploading Page](model_sharing). You should work alongside the Hugging Face team here to decide on a fitting name for each
-checkpoint and to get the required access rights to be able to upload the model under the author's organization of
-*brand_new_bert*. The `push_to_hub` method, present in all models in `transformers`, is a quick and efficient way to push your checkpoint to the hub. A little snippet is pasted below:
+> [!TIP]
+> In many cases, adding an interactive notebook users can run is a great way to showcase how to use the model for inference or finetune it on a downstream task. While not mandatory, including a notebook can drive greater adoption of your model.
-```python
-brand_new_bert.push_to_hub("brand_new_bert")
-# Uncomment the following line to push to an organization.
-# brand_new_bert.push_to_hub("/brand_new_bert")
-```
-
-It is worth spending some time to create fitting model cards for each checkpoint. The model cards should highlight the
-specific characteristics of this particular checkpoint, *e.g.* On which dataset was the checkpoint
-pretrained/fine-tuned on? On what down-stream task should the model be used? And also include some code on how to
-correctly use the model.
+You should also consult with the Transformers team to decide on an appropriate name for the model and to get the required access rights to upload it.
-**13. (Optional) Add notebook**
+Use the [`~PreTrainedModel.push_to_hub`] method to upload the model.
-It is very helpful to add a notebook that showcases in-detail how *brand_new_bert* can be used for inference and/or
-fine-tuned on a downstream task. This is not mandatory to merge your PR, but very useful for the community.
+```py
+brand_new_bert.push_to_hub("brand_new_bert")
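+# to push to an organization instead of your personal namespace, include the
+# organization name in the repository id, for example (placeholder name):
+# brand_new_bert.push_to_hub("my-organization/brand_new_bert")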
+```
-**14. Submit your finished PR**
+Refer to the [Share](./model_sharing) guide for more information about uploading models to the Hub.
-You're done programming now and can move to the last step, which is getting your PR merged into main. Usually, the
-Hugging Face team should have helped you already at this point, but it is worth taking some time to give your finished
-PR a nice description and eventually add comments to your code, if you want to point out certain design choices to your
-reviewer.
+### Merge your model
-### Share your work!!
+You're finally ready to merge your pull request and officially add the model to Transformers! Make sure all the tests are passing and all comments and feedback have been addressed.
-Now, it's time to get some credit from the community for your work! Having completed a model addition is a major
-contribution to Transformers and the whole NLP community. Your code and the ported pre-trained models will certainly be
-used by hundreds and possibly even thousands of developers and researchers. You should be proud of your work and share
-your achievements with the community.
+Congratulations on adding a new model to Transformers! 🥳
-**You have made another model that is super easy to access for everyone in the community! 🤯**
+This is a very significant contribution. Your work here makes Transformers more accessible to developers and researchers around the world. You should be proud of your contribution and share your accomplishment with the community!
\ No newline at end of file
diff --git a/docs/source/en/custom_models.md b/docs/source/en/custom_models.md
index f4dcfc33645c34..ce96b9e6cbae45 100644
--- a/docs/source/en/custom_models.md
+++ b/docs/source/en/custom_models.md
@@ -90,7 +90,7 @@ Transformers' models follow the convention of accepting a `config` object in the
You'll create two ResNet models, a ResNet model that outputs the hidden states and a ResNet model with an image classification head.
-
+
Define a mapping between the block types and block classes. Everything else is created by passing the configuration class to the Resnet model class.
@@ -127,7 +127,7 @@ class ResnetModel(PreTrainedModel):
```
-
+
The `forward` method needs to be rewritten to calculate the loss for each logit if labels are available. Otherwise, the Resnet model class is the same.
@@ -185,7 +185,7 @@ pretrained_model = timm.create_model("resnet50d", pretrained=True)
resnet50d.model.load_state_dict(pretrained_model.state_dict())
```
-## AutoClass support
+## AutoClass
The [AutoClass](./models#autoclass) API is a shortcut for automatically loading the correct architecture for a given model. It may be convenient for your users if you add this API to your custom model.
@@ -202,7 +202,7 @@ AutoModel.register(ResnetConfig, ResnetModel)
AutoModelForImageClassification.register(ResnetConfig, ResnetModelForImageClassification)
```
-## Share a custom model on the Hub
+## Upload model
Upload a custom model to the [Hub](https://hf.co/models) to allow other users to easily load and use it.
@@ -289,5 +289,3 @@ resnet50d.push_to_hub("custom-resnet50d")
The pretrained weights, configuration in JSON format, `modeling.py` and `configuration.py` files should all be uploaded to the Hub now under a namespace and specified directory [here](https://hf.co/sgugger/custom-resnet50d).
Because a custom model doesn't use the same modeling code as a Transformers model, you need to add `trust_remote_code=True` in the [`~PreTrainedModel.from_pretrained`] method. Refer to the [custom models](./models#custom-models) loading section for more information.
-
-6401
\ No newline at end of file
diff --git a/docs/source/en/model_sharing.md b/docs/source/en/model_sharing.md
index 1aa193a1377b18..30fd5a2e30c7c1 100644
--- a/docs/source/en/model_sharing.md
+++ b/docs/source/en/model_sharing.md
@@ -43,7 +43,7 @@ notebook_login()
-## Model repository features
+## Repository features
@@ -73,7 +73,7 @@ The model repository also includes an inference [widget](https://hf.co/docs/hub/
Check out the Hub [Models](https://hf.co/docs/hub/models) documentation to learn more.
-## Convert a model for all frameworks
+## Model framework conversion
Reach a wider audience by converting a model to be compatible with all machine learning frameworks (PyTorch, TensorFlow, Flax). While users can still load a model if they're using a different framework, it is slower because Transformers converts the checkpoint on the fly. It is faster to convert the checkpoint beforehand.
@@ -122,7 +122,7 @@ There are several ways to upload a model to the Hub depending on your workflow p
-### Upload from Trainer
+### Trainer
The [`Trainer`], Transformers' training API, allows pushing a model directly to the Hub after training. Set `push_to_hub=True` in the [`TrainingArguments`] class and pass it to the [`Trainer`]. Once training is complete, call [`~transformers.Trainer.push_to_hub`] to upload the model.
@@ -155,7 +155,7 @@ push_to_hub_callback = PushToHubCallback(
model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=3, callbacks=push_to_hub_callback)
```
-### Upload from model
+### PreTrainedModel.push_to_hub
Call [`~PreTrainedModel.push_to_hub`] directly on a model to upload it to the Hub. It creates a repository under your namespace with the model name specified in [`~PreTrainedModel.push_to_hub`].
@@ -173,7 +173,7 @@ Your Hugging Face profile should now display the newly created model repository.
Refer to the [Upload files to the Hub](https://hf.co/docs/hub/how-to-upstream) guide for more details about pushing files to the Hub.
-### Upload from web interface
+### Hub web interface
For a no-code approach, upload a model with the Hub's web interface.