[Bug]: unable to load upos-multi with SequenceTagger - AttributeError #3393
Comments
Update: the issue can be fixed by downgrading to torch==1.13.1, so it's an issue with torch.
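The workaround above can be sketched as a simple version gate. This is a hypothetical helper, not part of Flair or torch, and the 2.0.0 cutoff is an assumption drawn from this thread (the checkpoint loads under 1.13.1 and fails under 2.1.2):

```python
# Hypothetical helper, NOT part of Flair or torch: gate model loading on the
# installed torch version. The 2.0.0 cutoff is an assumption based on this
# thread (the checkpoint loads under torch 1.13.1 and fails under 2.1.2).

def parse_version(version: str) -> tuple:
    """Turn a version string like '2.1.2+cpu' into (2, 1, 2)."""
    core = version.split("+")[0]          # drop local tags such as '+cpu'
    return tuple(int(part) for part in core.split("."))

def model_loads_on(torch_version: str) -> bool:
    """True if the pre-update upos-multi checkpoint is expected to load."""
    return parse_version(torch_version) < (2, 0, 0)
```

In real code one would compare `torch.__version__` this way before attempting to load the old checkpoint, or simply rely on the re-trained model once it is merged on the Hub.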
It's probably an API change from PyTorch 1.x to 2.x, which means the model needs to be reconfigured or retrained. It looks like this has already been done for flair/upos-multi, since that one does work; are there any plans to also update upos-multi? This model still seems to be widely used, and it would be a shame for it to pin projects to an old PyTorch version.
Hi @saltlas , I re-trained the model with the latest Flair release and PyTorch 2.2.1; it took around 3 days on my GPU. @alanakbik am I allowed to prepare a PR on the Hugging Face Model Hub to update the existing model 🤔
@stefan-it that would be great!
PR on the Model Hub is here: https://huggingface.co/flair/upos-multi/discussions/3 :)
When trying to use this model, I get an error.
Could you try again? We just merged @stefan-it 's new version of the model.
Thanks guys, it works now :) |
Describe the bug
Hi all, I'm trying to load "upos-multi" using the demo code available at https://huggingface.co/flair/upos-multi . I notice this bug actually also happens when you try to use the built-in HuggingFace Inference API on this page. I'm getting the following error:
AttributeError: 'LSTM' object has no attribute '_flat_weights'. Did you mean: '_all_weights'?
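The traceback points at a renamed private attribute: a checkpoint serialized under PyTorch 1.x carries instance state that the PyTorch 2.x LSTM code no longer matches. A torch-free sketch of that failure mode (`OldLSTM`/`NewLSTM` are hypothetical stand-ins for the real torch internals, not their actual definitions):

```python
# Torch-free illustration of the AttributeError above: an object serialized
# under one library version is revived by class code that renamed a private
# attribute. OldLSTM/NewLSTM are hypothetical stand-ins for torch internals.

class OldLSTM:
    """Stand-in for the old class whose state ends up in checkpoints."""
    def __init__(self):
        self._all_weights = [[0.1, 0.2]]   # attribute name that got pickled

class NewLSTM:
    """Stand-in for the new class after the internal rename."""
    def __init__(self):
        self._flat_weights = [0.1, 0.2]
    def forward(self):
        return sum(self._flat_weights)     # new code expects the new name

# Unpickling a checkpoint skips __init__ and restores the *old* instance
# __dict__ onto an object driven by the *new* class code:
revived = NewLSTM.__new__(NewLSTM)
revived.__dict__.update(OldLSTM().__dict__)

try:
    revived.forward()
    message = "no error"
except AttributeError as exc:
    message = str(exc)                     # names the missing attribute
```

The mismatch only surfaces at call time, which is why the model appears to load and then fails inside the forward pass; re-saving the model under the new version (as done in the PR above) removes the stale state.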
I've tried (without success):
To Reproduce
Expected behavior
Expected code to work, I guess?
Logs and Stack traces
Screenshots
No response
Additional Context
No response
Environment
Versions:
Flair: 0.13.1
PyTorch: 2.1.2+cpu
Transformers: 4.36.2
GPU: False