Issues: open-compass/VLMEvalKit
[Help Wanted] Supporting the chat_inner API for existing VLMs.
#323 opened Jul 27, 2024 by kennymckormick
Discrepancy in MM Math Evaluation Results: Possible Issue with Answer Extraction in VLMEvalKit
#638 opened Dec 1, 2024 by mantle2048
The evaluation device_map setting is not optimized for quantized models using the ExLlamaV2 backend.
#629 opened Nov 26, 2024 by WeiweiZhang1
Error: module 'torch.library' has no attribute 'register_fake'
#605 opened Nov 15, 2024 by tjasmin111
How soon is the 'News' section in the README updated after a new Benchmark or Model is added?
#590 opened Nov 10, 2024 by Baiqi-Li
Evaluations get stuck on the last 4 questions [awaiting confirm]
#588 opened Nov 9, 2024 by XuGW-Kevin
Qwen-VL-Max-0809: the celebrity subtask score on MME differs from the leaderboard result by about 10 points [awaiting confirm]
#571 opened Nov 2, 2024 by lihua8848
The result of GPT-4o on MathVista is 61.6, which is lower than the 62.7 on the leaderboard. [awaiting confirm]
#553 opened Oct 28, 2024 by kkk123698745
Different evaluation results across VLMEvalKit versions
#523 opened Oct 16, 2024 by terry-for-github
[Help Wanted] Alignment with official accuracy for Llama-3.2-Vision [Feature Request, Extra attention is needed]
#493 opened Sep 29, 2024 by droidXrobot
Reproducing Qwen2-VL Results on Video Benchmarks with VLMEvalKit
#484 opened Sep 23, 2024 by aniki-ly