Commit 11d80d4 · 1T release for 3B and 7B, 13B 600BT preview
young-geng authored Jun 7, 2023 · 1 parent f334729

# OpenLLaMA: An Open Reproduction of LLaMA

**TL;DR**: we are releasing our public preview of OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B and 13B trained on the RedPajama dataset. Our model weights can serve as a drop-in replacement for LLaMA in existing implementations. We also provide a smaller 3B variant of the LLaMA model.


In this repo, we present a permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing 7B and 3B models trained on 1T tokens, as well as a preview of a 13B model trained on 600B tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparisons against the original LLaMA models. Stay tuned for our updates.

PyTorch weights for Hugging Face transformers:

- [OpenLLaMA 3B](https://huggingface.co/openlm-research/open_llama_3b)
- [OpenLLaMA 7B](https://huggingface.co/openlm-research/open_llama_7b)
- [OpenLLaMA 13B 600B tokens preview](https://huggingface.co/openlm-research/open_llama_13b_600bt)

JAX weights for [EasyLM](https://github.com/young-geng/EasyLM):

- [OpenLLaMA 3B for EasyLM](https://huggingface.co/openlm-research/open_llama_3b_easylm)
- [OpenLLaMA 7B for EasyLM](https://huggingface.co/openlm-research/open_llama_7b_easylm)
- [OpenLLaMA 13B 600B token preview for EasyLM](https://huggingface.co/openlm-research/open_llama_13b_600bt_easylm)


## Update 06/07/2023

We are happy to release the final 1T-token versions of OpenLLaMA 3B and 7B, along with updated evaluation results. We are also releasing a 600B-token preview of the 13B model, trained in collaboration with [Stability AI](https://stability.ai/).


## Update 05/22/2023
After receiving feedback from the community, we discovered that the tokenizer of …



## Weights Release, License and Usage

We release the weights in two formats: an EasyLM format to be used with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license.

### Loading the Weights with Hugging Face Transformers
The checkpoints can be loaded directly from the Hugging Face Hub. **Please note that we advise avoiding the Hugging Face fast tokenizer for now, as we’ve observed that the auto-converted fast tokenizer sometimes gives incorrect tokenizations.** This can be done either by using the `LlamaTokenizer` class directly, or by passing the `use_fast=False` option to the `AutoTokenizer` class. See the following example for usage.

```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

model_path = 'openlm-research/open_llama_3b'
# model_path = 'openlm-research/open_llama_7b'
# model_path = 'openlm-research/open_llama_13b_600bt'

tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map='auto',
)

# Generate a short completion for a sample prompt.
prompt = 'Q: What is the largest animal?\nA:'
input_ids = tokenizer(prompt, return_tensors='pt').input_ids

generation_output = model.generate(
    input_ids=input_ids, max_new_tokens=32
)
print(tokenizer.decode(generation_output[0]))
```
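
If you prefer `AutoTokenizer`, the same slow tokenizer can be obtained by disabling the fast implementation, for example:

```python
from transformers import AutoTokenizer

model_path = 'openlm-research/open_llama_3b'  # any of the checkpoints listed above
# use_fast=False avoids the auto-converted fast tokenizer and returns the
# slow SentencePiece-based LlamaTokenizer instead.
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
```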

For more advanced usage, please follow the [transformers LLaMA documentation](https://huggingface.co/docs/transformers/main/model_doc/llama).

### Evaluating with LM-Eval-Harness
The model can be evaluated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. This can be achieved by passing in `use_fast=False` to [this part of lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/blob/4b701e228768052cfae9043dca13e82052ca5eea/lm_eval/models/huggingface.py#LL313C9-L316C10), as shown in the example below:

```python
tokenizer = self.AUTO_TOKENIZER_CLASS.from_pretrained(
    pretrained if tokenizer is None else tokenizer,
    revision=revision + ("/" + subfolder if subfolder is not None else ""),
    use_fast=False
)
```
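
As a quick sanity check, you can compare the token ids produced by the slow tokenizer and the auto-converted fast tokenizer directly; the checkpoint and sample text below are only illustrative:

```python
from transformers import AutoTokenizer

model_path = 'openlm-research/open_llama_7b'  # example checkpoint

slow = AutoTokenizer.from_pretrained(model_path, use_fast=False)
fast = AutoTokenizer.from_pretrained(model_path, use_fast=True)

text = 'The quick brown fox\njumps over   the lazy dog.'
# If the fast tokenizer mis-handles a string, the two id sequences will differ;
# identical output means this particular string is unaffected.
print(slow.encode(text))
print(fast.encode(text))
```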

### Loading the Weights with EasyLM
To use the weights in our EasyLM framework, please refer to the [LLaMA documentation of EasyLM](https://github.com/young-geng/EasyLM/blob/main/docs/llama.md). Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch, so it is no longer necessary to obtain the original LLaMA tokenizer and weights. Also note that we use the BOS (beginning of sentence) token (id=1) during training, so it is best to prepend this token during few-shot evaluation for best performance.
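
For example, when constructing inputs by hand with the Hugging Face tokenizer, it is easy to verify that the sequence starts with the BOS token; the snippet below is a small illustrative check with an arbitrary prompt:

```python
from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained('openlm-research/open_llama_7b')

# LlamaTokenizer prepends BOS by default; when assembling token ids manually,
# make sure the sequence starts with tokenizer.bos_token_id (id=1).
input_ids = tokenizer('The capital of France is', return_tensors='pt').input_ids
assert input_ids[0, 0].item() == tokenizer.bos_token_id == 1
```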




## Dataset and Training

We train our models on the [RedPajama](https://www.together.xyz/blog/redpajama) dataset released by [Together](https://www.together.xyz/), which is a reproduction of the LLaMA training dataset containing over 1.2 trillion tokens. We follow exactly the same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs the RedPajama dataset rather than the one utilized by the original LLaMA.
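
For reference, the 7B-scale hyperparameters reported in the LLaMA paper, which our setup mirrors, are roughly as follows; the values below are quoted from the paper rather than from our training configs, so treat them as a summary only:

```python
# Approximate 7B training setup as described in the LLaMA paper (Touvron et al., 2023).
llama_7b_setup = {
    'n_layers': 32,
    'n_heads': 32,
    'hidden_dim': 4096,
    'context_length': 2048,
    'tokens_per_batch': 4_000_000,      # ~4M tokens per batch
    'peak_lr': 3e-4,                    # cosine schedule, decayed to 10% of peak
    'warmup_steps': 2000,
    'optimizer': 'AdamW (beta1=0.9, beta2=0.95)',
    'weight_decay': 0.1,
    'grad_clip': 1.0,
    'total_training_tokens': 1_000_000_000_000,  # 1T tokens
}
```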

We train the models on cloud TPU-v4s using [EasyLM](https://github.com/young-geng/EasyLM), a JAX based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and [fully sharded data parallelism (also known as ZeRO stage 3)](https://engineering.fb.com/2021/07/15/open-source/fsdp/) to balance the training throughput and memory usage. Overall we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model. The training loss can be seen in the figure below.

![](media/loss.png)
![](media/loss_13b.png)
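
Conceptually, the parallelism strategy above corresponds to laying the TPU devices out on a two-dimensional mesh with one data-parallel axis and one fully sharded axis. The JAX-style sketch below illustrates the idea only; the axis names and mesh shape are not EasyLM's actual configuration:

```python
import numpy as np
import jax
from jax.sharding import Mesh, NamedSharding, PartitionSpec

# Arrange all available devices on a (dp, fsdp) mesh; the shape here is arbitrary.
devices = np.array(jax.devices()).reshape(1, -1)
mesh = Mesh(devices, axis_names=('dp', 'fsdp'))

# Batches are split across both axes, while weights are sharded only along the
# 'fsdp' axis (ZeRO stage 3 style), which keeps per-device memory bounded.
batch_sharding = NamedSharding(mesh, PartitionSpec(('dp', 'fsdp'), None))
weight_sharding = NamedSharding(mesh, PartitionSpec('fsdp', None))
```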

## Evaluation
We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model on the same evaluation tasks. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe results from differences in the evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443). Additionally, we present the results of GPT-J, a 6B parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/).

The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.


| **Task/Metric** | GPT-J 6B | LLaMA 7B | OpenLLaMA 7B | OpenLLaMA 3B | OpenLLaMA 13B 600BT |
| ---------------------- | -------- | -------- | ------------ | ------------ | ------------------- |
| anli_r1/acc | 0.32 | 0.35 | 0.33 | 0.33 | 0.33 |
| anli_r2/acc | 0.34 | 0.34 | 0.36 | 0.32 | 0.35 |
| anli_r3/acc | 0.35 | 0.37 | 0.38 | 0.35 | 0.38 |
| arc_challenge/acc | 0.34 | 0.39 | 0.37 | 0.34 | 0.39 |
| arc_challenge/acc_norm | 0.37 | 0.41 | 0.38 | 0.37 | 0.42 |
| arc_easy/acc | 0.67 | 0.68 | 0.72 | 0.69 | 0.74 |
| arc_easy/acc_norm | 0.62 | 0.52 | 0.68 | 0.65 | 0.70 |
| ddboolq/acc | 0.50 | 0.56 | 0.53 | 0.49 | 0.71 |
| hellaswag/acc | 0.36 | 0.36 | 0.63 | 0.43 | 0.54 |
| hellaswag/acc_norm | 0.66 | 0.73 | 0.72 | 0.67 | 0.73 |
| openbookqa/acc | 0.29 | 0.29 | 0.30 | 0.27 | 0.30 |
| openbookqa/acc_norm | 0.38 | 0.41 | 0.40 | 0.40 | 0.41 |
| piqa/acc | 0.75 | 0.78 | 0.76 | 0.75 | 0.77 |
| piqa/acc_norm | 0.76 | 0.78 | 0.77 | 0.76 | 0.78 |
| record/em | 0.88 | 0.91 | 0.89 | 0.88 | 0.90 |
| record/f1 | 0.89 | 0.91 | 0.90 | 0.89 | 0.90 |
| rte/acc | 0.54 | 0.56 | 0.60 | 0.58 | 0.65 |
| truthfulqa_mc/mc1 | 0.20 | 0.21 | 0.23 | 0.22 | 0.22 |
| truthfulqa_mc/mc2 | 0.36 | 0.34 | 0.35 | 0.35 | 0.35 |
| wic/acc | 0.50 | 0.50 | 0.51 | 0.48 | 0.49 |
| winogrande/acc | 0.64 | 0.68 | 0.67 | 0.62 | 0.67 |
| Average | 0.51 | 0.53 | 0.55 | 0.52 | 0.56 |


We removed the tasks CB and WSC from our benchmark, as our model performs suspiciously well on these two tasks. We hypothesize that there could be benchmark data contamination in the training set.


## Contact
OpenLLaMA is developed by:
…
*Equal Contribution



## Acknowledgment

We thank the [Google TPU Research Cloud](https://sites.research.google/trc/about/) program for providing part of the computation resources. We’d like to specially thank Jonathan Caton from TPU Research Cloud for helping us organize the compute resources, and Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimize our training throughput. We’d also like to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback.

The OpenLLaMA 13B model was trained in collaboration with [Stability AI](https://stability.ai/), and we thank Stability AI for providing the computation resources. We’d like to especially thank David Ha and Shivanshu Purohit for coordinating the logistics and providing engineering support.


## Reference
