Add allenai/OLMoE-1B-7B-0924 #718

Conversation
It looks like Compatibility Checks (3.9) failed because of incompatible numpy versions.
There are a lot of issues in this PR due to dependency bumping. None of that has anything to do with what has been done here; there are just general issues with dependency versions at the moment. I started working on that in this PR. To add these models officially, we probably need to get that resolved first, so I will prioritize it a bit further up the line to let you finish what you are doing.
@joelburget I am working on https://github.com/jonasrohw/TransformerLens/tree/OLMo; I think your MoE is very similar. I found the issue you were facing: the tokenizer is called again after …
Hey @jonasrohw, thanks for looping me in. Your code looks much more complete than mine, so I want to make sure I understand the bit that you're suggesting we merge in (and how). The two things this implementation has that yours doesn't:
Are you suggesting I merge my …
@joelburget Exactly. You can also conditionally add the MoE weights import into the Olmo file, and include your model names, etc., in the preloading code with the exact model configurations for MoE.
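For concreteness, a minimal sketch of the kind of conditional branching being described; the `num_experts` marker, the input layout, and the state-dict key names are illustrative assumptions, not the actual TransformerLens conventions:

```python
from typing import Optional

# Hypothetical sketch, not the real TransformerLens conversion code: emit dense-MLP
# keys for OLMo and router/per-expert keys for OLMoE, based on an MoE marker in the config.
def convert_mlp_weights(hf_weights: dict, layer: int, num_experts: Optional[int]) -> dict:
    state_dict = {}
    if num_experts:
        # MoE variant (OLMoE): one router matrix plus a separate MLP per expert.
        state_dict[f"blocks.{layer}.mlp.W_gate"] = hf_weights["router"]
        for e in range(num_experts):
            state_dict[f"blocks.{layer}.mlp.experts.{e}.W_in"] = hf_weights["experts"][e]["up"]
            state_dict[f"blocks.{layer}.mlp.experts.{e}.W_out"] = hf_weights["experts"][e]["down"]
    else:
        # Dense variant (OLMo): a single MLP per layer.
        state_dict[f"blocks.{layer}.mlp.W_in"] = hf_weights["up"]
        state_dict[f"blocks.{layer}.mlp.W_out"] = hf_weights["down"]
    return state_dict
```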
Originally from TransformerLensOrg#718.
Thanks @jonasrohw. I opened jonasrohw#1. I still need to finish the one TODO and do testing, but I can hopefully finish this weekend.
Closing this in favor of #816.
Add allenai/OLMoE-1B-7B-0924

This is a new MoE model which I'd like to use with TL. Notes:
- `transformers` hasn't released a version with OLMoE support yet. We can update `pyproject.toml` to point to it instead of GitHub once it's released. Will leave as a draft until then.
- `router_aux_loss_coef` / `router_z_loss_coef`: I don't plan on training OLMoE in TL, so there's no need for these coefficients.
- `norm_topk_prob` defaults to `False` in `transformers` and I don't plan to use it (what the flag controls is sketched just after this list).
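For reference, here is a rough, self-contained sketch of top-k routing with and without `norm_topk_prob`. It only illustrates the flag's semantics as I understand them, not the `transformers` implementation; the function name and shapes are made up for the example.

```python
import torch

def route_tokens(router_logits: torch.Tensor, k: int, norm_topk_prob: bool = False):
    # router_logits: [n_tokens, n_experts]; returns per-token expert weights and indices.
    probs = torch.softmax(router_logits, dim=-1)
    topk_probs, topk_idx = probs.topk(k, dim=-1)
    if norm_topk_prob:
        # Renormalize the selected experts' weights so they sum to 1 for each token.
        topk_probs = topk_probs / topk_probs.sum(dim=-1, keepdim=True)
    return topk_probs, topk_idx

# OLMoE-1B-7B routes each token to 8 of 64 experts.
weights, experts = route_tokens(torch.randn(4, 64), k=8, norm_topk_prob=False)
```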
Commenting out `add_bos_token=True`

This is a temporary fix. When running without either location commented out:
Commenting out the location mentioned in the stack trace (`HookedTransformer.py:146`):

I'd appreciate advice on what's going wrong here. I'm a bit confused because I didn't change anything related to bos tokens (and e.g. the call to `AutoTokenizer.from_pretrained` in `HookedTransformer` always specifies `add_bos_token=True` but never `bos_token`).
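As a diagnostic (a sketch assuming the standard `transformers` tokenizer API, not a fix), one could load the OLMoE tokenizer directly and check what it reports for its BOS token, since `add_bos_token=True` can only take effect if the tokenizer actually defines a `bos_token`:

```python
from transformers import AutoTokenizer

# Diagnostic sketch: does the OLMoE tokenizer define a BOS token at all?
tok = AutoTokenizer.from_pretrained("allenai/OLMoE-1B-7B-0924", add_bos_token=True)
print("bos_token:", tok.bos_token)          # None here would explain downstream failures
print("bos_token_id:", tok.bos_token_id)
print("special tokens:", tok.special_tokens_map)
print("encoded:", tok("hello").input_ids)   # check whether a BOS id is actually prepended
```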
Type of change
Checklist: