Commit
Update README to enhance task listing and provide usage instructions for MixEval-X
pufanyi committed Dec 5, 2024
1 parent ff992fd commit c9ea58e
Showing 1 changed file with 22 additions and 2 deletions.

lmms_eval/tasks/mix_evals/README.md
@@ -4,7 +4,7 @@

## Usage

-Tasks:
+Here is the list of tasks in MixEval-X:

```
mix_evals_image2text
@@ -25,8 +25,28 @@ mix_evals_audio2text_hard
└── mix_evals_audio2text_freeform_hard
```

You can run the evaluation with the following command:

```bash
lmms-eval --model=<MODEL> \
--model_args=<MODEL_ARGS> \
--tasks=<TASK> \
--batch_size=1 \
--log_samples \
--output_path=./logs/
```

Available models are listed [here](https://github.com/EvolvingLMMs-Lab/lmms-eval/blob/0589d0fba2efbcb526321f23ab0587599fd3c4c9/lmms_eval/models/__init__.py#L13).

For example, to evaluate `llava_vid` on `mix_evals_video2text` (including `mix_evals_video2text_freeform` and `mix_evals_video2text_mc`):

```bash
-lmms-eval --model=llava_vid --model_args=pretrained=lmms-lab/LLaVA-NeXT-Video-7B --tasks=mix_evals_video2text --batch_size=1 --log_samples --output_path=./logs/
+lmms-eval --model=llava_vid \
+    --model_args=pretrained=lmms-lab/LLaVA-NeXT-Video-7B \
+    --tasks=mix_evals_video2text \
+    --batch_size=1 \
+    --log_samples \
+    --output_path=./logs/
```
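The group task `mix_evals_video2text` runs both subtasks in one invocation. If you prefer to launch the subtasks one at a time, a small shell loop over the subtask names can generate the same command shape. This is a sketch, not part of the original README: it only echoes each command as a dry run, and it assumes `lmms-eval` and the model weights are installed before you remove the `echo`.

```shell
#!/bin/sh
# Sketch: print one lmms-eval command per video2text subtask (dry run).
# Subtask names come from the task tree above; drop `echo` to actually run.
MODEL="llava_vid"
MODEL_ARGS="pretrained=lmms-lab/LLaVA-NeXT-Video-7B"

for TASK in mix_evals_video2text_freeform mix_evals_video2text_mc; do
    echo lmms-eval --model="${MODEL}" \
        --model_args="${MODEL_ARGS}" \
        --tasks="${TASK}" \
        --batch_size=1 \
        --log_samples \
        --output_path=./logs/
done
```

Keeping the flags identical to the grouped command means per-subtask results still land under `./logs/` in the same layout.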

## Citation
