Add skip_verbose_naming in add_hook to give an option for skipping the naming #635
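For context, a sketch of how the proposed flag might be used. The `skip_verbose_naming` keyword is taken only from the PR title; its exact placement and semantics depend on the implementation in this diff, while the rest is standard TransformerLens API:

```python
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")

def print_shape(activation, hook):
    print(hook.name, activation.shape)
    return activation

# Hypothetical: with the proposed flag, add_hook would skip the verbose
# renaming it normally applies to the registered hook function.
model.add_hook("blocks.0.attn.hook_z", print_shape, skip_verbose_naming=True)
```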

Open. Wants to merge 21 commits into base: main.
Commits (21)
- eabc539 Add skip_verbose_naming in add_hook (verlocks, Jun 11, 2024)
- 3e15f9c Merge branch 'dev' into verlock-dev (bryce13950, Jul 5, 2024)
- fafe46f add transformer diagram (akozlo, Oct 7, 2024)
- 28eb2f7 Merge pull request #749 from akozlo/transformer-diagram (bryce13950, Oct 9, 2024)
- ca0f346 Demo colab compatibility (#752) (bryce13950, Oct 9, 2024)
- 70029b9 Add support for `Mistral-Nemo-Base-2407` model (#751) (ryanhoangt, Oct 15, 2024)
- 0dbc7a8 Merge pull request #755 from TransformerLensOrg/main (bryce13950, Oct 16, 2024)
- ab27ac5 added new block for recent diagram, and colab compatibility notebook … (bryce13950, Oct 16, 2024)
- 4e7e23e Add warning and halt execution for incorrect T5 model usage (#757) (vatsalrathod16, Oct 16, 2024)
- 15ae297 added template for reporting model compatibility (#759) (bryce13950, Oct 17, 2024)
- cb6ad8e Add configurations for Llama 3.1 models (Llama-3.1-8B and Llama-3.1-70… (vatsalrathod16, Oct 22, 2024)
- 8029d13 added logit comparator (#765) (curt-tigges, Oct 25, 2024)
- c7837fb Add support for NTK-by-Part Rotary Embedding & set correct rotary bas… (Hzfinfdu, Oct 26, 2024)
- d4c8612 fix the bug that attention_mask and past_kv_cache cannot work togethe… (yzhhr, Nov 15, 2024)
- 32b87c6 Set prepend_bos to false by default for Bloom model family (#775) (degenfabian, Nov 15, 2024)
- d9792a9 Fix that if use_past_kv_cache is set to True models from the Bloom fa… (degenfabian, Nov 16, 2024)
- e0a1787 added typeguard dependency (#786) (bryce13950, Nov 19, 2024)
- 7e2877c remove einsum in forward pass in AbstractAttention (#783) (degenfabian, Nov 21, 2024)
- b7c4dbd Colab compatibility bug fixes (#794) (degenfabian, Nov 25, 2024)
- 623407f remove einsum usage from create_alibi_bias function in AbstractAttent… (degenfabian, Nov 25, 2024)
- 31852b6 Merge branch 'dev' into verlock-dev (bryce13950, Nov 25, 2024)
35 changes: 35 additions & 0 deletions .github/ISSUE_TEMPLATE/compatibility.md
@@ -0,0 +1,35 @@
---
name: Compatibility Report
about: Submit a compatibility report
title: "[Compatibility Report] Model ID"

---

<!--
Use this template to report any issue found with model compatibility. Create one report per model ID, not per model family. Please include details of any options you used in TransformerLens, example generations both directly through transformers and through TransformerLens, and any information you can gather on the model's historical compatibility with TransformerLens.

To determine historical compatibility, first check the model's performance on the first release of TransformerLens in which the model was available. If the incompatibility does not exist on that first version, you then need to narrow down the last version where the model performed comparably to transformers and the first version where it became incompatible.

The process for finding the last compatible version number is fairly manual. It's a matter of picking a version, checking compatibility, and then deciding which version to check next based on the result: if the version you are testing is incompatible, check earlier releases of TransformerLens; if it is compatible, check newer ones. Repeat until you find two consecutive versions, one where the model was compatible and the next where it was incompatible. This process is tedious, but it greatly helps in fixing the underlying incompatibility.
-->

## Model

REPLACE_WITH_MODEL_ID

- [ ] This model was incompatible when it was introduced to TransformerLens

<!--
Remove the next block if the model in question did not work as expected on the first version of TransformerLens in which it was available.
-->

The model seems to have worked as of REPLACE_WITH_LAST_COMPATIBLE_VERSION_NUMBER. It first started
showing signs of incompatibility in REPLACE_WITH_FIRST_INCOMPATIBLE_VERSION_NUMBER.

### Example of some generations in transformers


### Code used to load the model in TransformerLens


### Example of some generations in TransformerLens
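The version-narrowing procedure described in the template's comment is, in effect, a binary search over releases. A minimal sketch of that search, assuming a hypothetical `is_compatible(version)` check that the reporter performs by hand or by script; neither the function nor the version list is part of the template itself:

```python
def find_boundary(versions, is_compatible):
    """Binary search over releases ordered oldest to newest, where
    versions[0] is known compatible and versions[-1] known incompatible.
    Returns (last compatible, first incompatible)."""
    lo, hi = 0, len(versions) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_compatible(versions[mid]):
            lo = mid  # still compatible: the break is in a newer release
        else:
            hi = mid  # already broken: the break is in an older release
    return versions[lo], versions[hi]
```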
1 change: 1 addition & 0 deletions .github/workflows/checks.yml
@@ -123,6 +123,7 @@ jobs:
# - "Activation_Patching_in_TL_Demo"
# - "Attribution_Patching_Demo"
- "ARENA_Content"
- "Colab_Compatibility"
- "BERT"
- "Exploratory_Analysis_Demo"
# - "Grokking_Demo"
265 changes: 265 additions & 0 deletions debugging/hf-tl-logit-comparator.ipynb
@@ -0,0 +1,265 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Logit Comparator for HuggingFace and TransformerLens Outputs\n",
"This notebook is a quick and dirty tool to compare the logit outputs of a HuggingFace model and a TransformerLens model via several different metrics. It is intended to help debug issues with the TransformerLens model, such as bugs in the model's implementation. If you identify any issues, please open an issue on the [GitHub repository](https://github.com/TransformerLensOrg/TransformerLens)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from transformers import AutoTokenizer, AutoModelForCausalLM\n",
"from transformer_lens import HookedTransformer\n",
"import torch\n",
"import torch.nn.functional as F\n",
"\n",
"if torch.backends.mps.is_available():\n",
" device = \"mps\"\n",
"else:\n",
" device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n",
"\n",
"torch.set_grad_enabled(False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Comparator Setup"
]
},
{
"cell_type": "code",
"execution_count": 51,
"metadata": {},
"outputs": [],
"source": [
"model_name = \"EleutherAI/pythia-2.8b\" # You can change this to any model name\n",
"sentence = \"The quick brown fox\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from huggingface_hub import login\n",
"login(token=\"\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Get Transformers Logits"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"from transformers import AutoTokenizer, AutoModelForCausalLM\n",
"\n",
"def load_model(model_name=\"gpt2\"):\n",
" tokenizer = AutoTokenizer.from_pretrained(model_name)\n",
" model = AutoModelForCausalLM.from_pretrained(model_name)\n",
" return model, tokenizer\n",
"\n",
"def get_logits(model, tokenizer, sentence, device):\n",
" # Tokenize the input sentence\n",
" inputs = tokenizer(sentence, return_tensors=\"pt\")\n",
" \n",
" # Move inputs to the device\n",
" inputs = {k: v.to(device) for k, v in inputs.items()}\n",
" \n",
" # Generate the logits\n",
" with torch.no_grad():\n",
" outputs = model(**inputs)\n",
" \n",
" # Get the logits for all tokens\n",
" logits = outputs.logits\n",
" \n",
" return logits\n",
"\n",
"model, tokenizer = load_model(model_name)\n",
"model = model.to(device)\n",
"\n",
"hf_logits = get_logits(model, tokenizer, sentence, device)[:, -1, :]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Get TransformerLens Logits"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model = HookedTransformer.from_pretrained_no_processing(model_name, device=device)\n",
"tokens = model.to_tokens(sentence, prepend_bos=False)\n",
"tl_logits = model(tokens)[:, -1, :]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Compare Logit Distributions\n",
"Various metrics are used to compare the logit distributions of the two models. We don't yet have standard values for what constitutes a \"good\" logit comparison, so we are working on establishing benchmarks."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Shape"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(f\"HF Logits Shape: {hf_logits.shape}\")\n",
"print(f\"TL Logits Shape: {tl_logits.shape}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Tensor Comparison"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"are_close = torch.allclose(tl_logits, hf_logits, rtol=1e-5, atol=1e-3)\n",
"print(f\"Are the logits close? {are_close}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Mean Squared Error"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Compare the logits with MSE\n",
"mse = torch.nn.functional.mse_loss(hf_logits, tl_logits)\n",
"print(f\"MSE: {mse}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Maximum Absolute Difference"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"max_diff = torch.max(torch.abs(tl_logits - hf_logits))\n",
"print(f\"Max Diff: {max_diff}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Cosine Similarity"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"cosine_sim = F.cosine_similarity(tl_logits, hf_logits, dim=-1).mean()\n",
"print(f\"Cosine Sim: {cosine_sim}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### KL Divergence"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def kl_div(logits1: torch.Tensor, logits2: torch.Tensor) -> torch.Tensor:\n",
" probs1 = F.softmax(logits1, dim=-1)\n",
" probs2 = F.softmax(logits2, dim=-1)\n",
" return F.kl_div(probs1.log(), probs2, reduction='batchmean')\n",
"\n",
"kl_tl_hf = kl_div(tl_logits, hf_logits)\n",
"kl_hf_tl = kl_div(hf_logits, tl_logits)\n",
"print(f\"KL(TL||HF): {kl_tl_hf}\")\n",
"print(f\"KL(HF||TL): {kl_hf_tl}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "sae-l",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
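For repeated debugging runs, the per-metric cells above could be collapsed into a single helper. A minimal sketch that only reuses computations already shown in the notebook; the name `compare_logits` is illustrative, not an existing TransformerLens API:

```python
import torch
import torch.nn.functional as F

def compare_logits(tl_logits: torch.Tensor, hf_logits: torch.Tensor) -> dict:
    """Summarize the notebook's metrics for two [batch, d_vocab] logit tensors."""
    log_p_tl = F.log_softmax(tl_logits, dim=-1)
    log_p_hf = F.log_softmax(hf_logits, dim=-1)
    return {
        "allclose": torch.allclose(tl_logits, hf_logits, rtol=1e-5, atol=1e-3),
        "mse": F.mse_loss(hf_logits, tl_logits).item(),
        "max_abs_diff": (tl_logits - hf_logits).abs().max().item(),
        "cosine_sim": F.cosine_similarity(tl_logits, hf_logits, dim=-1).mean().item(),
        # F.kl_div(input, target) computes KL(target || input), input in log-space
        "kl_tl_hf": F.kl_div(log_p_hf, log_p_tl.exp(), reduction="batchmean").item(),
        "kl_hf_tl": F.kl_div(log_p_tl, log_p_hf.exp(), reduction="batchmean").item(),
    }
```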