Issues: huggingface/transformers
How can I disable legacy processing in llava-next (bug) #35457, opened Dec 30, 2024 by foreverpiano
Installation Error for transformers Package (🔥 maturin failed) (bug) #35454, opened Dec 29, 2024 by SauceChord
[Feature Request] Add beam search text streaming visualization feature (Feature request) #35451, opened Dec 29, 2024 by MosheOfer1
Support Constant Learning Rate with Cooldown (Feature request) #35449, opened Dec 29, 2024 by LoserCheems
Tokenizer does not split text according to newly added input tokens (bug, Core: Tokenization) #35447, opened Dec 29, 2024 by jiongjiongli
Should tokenizer be replaced with processing_class in Seq2SeqTrainer? (bug, Core: Tokenization) #35446, opened Dec 29, 2024 by zzaebok
Allow static cache to be larger than sequence length / batch size for encoder-decoder models (Feature request) #35444, opened Dec 29, 2024 by cptspacemanspiff
Compatibility Issue with Python 3.13 (bug, dependencies) #35443, opened Dec 29, 2024 by pocerberus
Missing weights are not properly initialized when using model.from_pretrained() (bug, Core: Modeling) #35437, opened Dec 27, 2024 by YifanXu74
Memory leak on Python 3.10.* (bug, dependencies) #35434, opened Dec 27, 2024 by KhoiTrant68
tokenizers.apply_chat_template with continue_final_message=True and trailing spaces in input (bug, Chat Template) #35433, opened Dec 27, 2024 by chuyishang
Apply transformers.SequenceBiasLogitsProcessor to a Qwen model (Feature request, Generation) #35432, opened Dec 27, 2024 by buptspig
GPT2Attention() class with _attn() method when add_cross_attention=True and therefore is_cross_attention=True (bug, Feature request) #35430, opened Dec 27, 2024 by CHLEE-Leo
Cannot customize warmup_min_lr of the DeepSpeed LR scheduler (bug, DeepSpeed) #35428, opened Dec 27, 2024 by SeunghyunSEO
model.config.to_diff_dict() delivers a different result than model.save_pretrained() (bug, Core: Modeling) #35426, opened Dec 27, 2024 by umarbutler
LLaVa 1.5 and 1.6 not working with text-only inputs (bug, VLM) #35424, opened Dec 26, 2024 by giobin
modular_model_converter cannot handle objects imported via try/except (bug, Modular) #35414, opened Dec 25, 2024 by HIT-cwh
Qwen2VLProcessor cannot handle odd number of video frames (bug, VLM) #35412, opened Dec 25, 2024 by DarkLight1337
AttributeError: 'SegformerFeatureExtractor' object has no attribute 'reduce_labels' still has no clear guide (bug, Vision) #35402, opened Dec 23, 2024 by deanAirre
Set output_attentions=True for model.generate (bug, Generation) #35393, opened Dec 23, 2024 by yiyexy