Update llama_vision.py (EvolvingLMMs-Lab#431)
BugFix: the llama_vision processor expects a "text" key, not a "content" key.
Danielohayon authored and ZhaoCinyu committed Dec 9, 2024
1 parent bee8a34 commit 26091a3
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion lmms_eval/models/llama_vision.py
@@ -201,7 +201,7 @@ def generate_until(self, requests: List[Instance]) -> List[str]:

         for _ in range(len(images)):
             messages[-1]["content"].append({"type": "image"})
-        messages[-1]["content"].append({"type": "text", "content": contexts})
+        messages[-1]["content"].append({"type": "text", "text": contexts})
         prompt = self.processor.apply_chat_template(messages, add_generation_prompt=True)
         inputs = self.processor(images, prompt, return_tensors="pt").to(self.model.device)
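The one-line change matters because Hugging Face chat templates read the "text" field of a text content part; a part written as {"type": "text", "content": ...} would not have its text picked up when the prompt is rendered. A minimal sketch of the corrected message construction (the variable values here are hypothetical stand-ins for what generate_until receives):

```python
# Hypothetical stand-ins for the values generate_until works with.
contexts = "Describe the image."
images = [object()]  # placeholder for one PIL image

# Build the chat message the same way the fixed code does.
messages = [{"role": "user", "content": []}]
for _ in range(len(images)):
    messages[-1]["content"].append({"type": "image"})
# The fix: the text part must carry its string under the "text" key,
# matching the schema the processor's chat template expects.
messages[-1]["content"].append({"type": "text", "text": contexts})

print(messages[-1]["content"][-1]["text"])  # → Describe the image.
```

With the old "content" key, the template would render the image placeholder but drop the actual question, so the model was prompted without any text.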
