diff --git a/lmms_eval/tasks/mix_evals/README.md b/lmms_eval/tasks/mix_evals/README.md
index 2cee05b4..8d14cbeb 100644
--- a/lmms_eval/tasks/mix_evals/README.md
+++ b/lmms_eval/tasks/mix_evals/README.md
@@ -4,7 +4,7 @@
 
 ## Usage
 
-Tasks:
+Here is the list of tasks in MixEval-X:
 
 ```
 mix_evals_image2text
@@ -25,8 +25,28 @@ mix_evals_audio2text_hard
 └── mix_evals_audio2text_freeform_hard
 ```
 
+You can run an evaluation with a command of the following form, filling in the model, model arguments, and tasks:
+
+```bash
+lmms-eval --model=<model> \
+    --model_args=<model_args> \
+    --tasks=<tasks> \
+    --batch_size=1 \
+    --log_samples \
+    --output_path=./logs/
+```
+
+The available models are listed [here](https://github.com/EvolvingLMMs-Lab/lmms-eval/blob/0589d0fba2efbcb526321f23ab0587599fd3c4c9/lmms_eval/models/__init__.py#L13).
+
+For example, to evaluate `llava_vid` on `mix_evals_video2text` (which includes `mix_evals_video2text_freeform` and `mix_evals_video2text_mc`):
+
 ```bash
-lmms-eval --model=llava_vid --model_args=pretrained=lmms-lab/LLaVA-NeXT-Video-7B --tasks=mix_evals_video2text --batch_size=1 --log_samples --output_path=./logs/
+lmms-eval --model=llava_vid \
+    --model_args=pretrained=lmms-lab/LLaVA-NeXT-Video-7B \
+    --tasks=mix_evals_video2text \
+    --batch_size=1 \
+    --log_samples \
+    --output_path=./logs/
 ```
 
 ## Citation
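
For reference, the names accepted by `--model` come from the registry in `lmms_eval/models/__init__.py` linked in the diff above. A minimal sketch to print them, assuming that file exposes its registry as the `AVAILABLE_MODELS` dict (as the `#L13` link suggests):

```python
# Minimal sketch: print the model names lmms-eval accepts for --model.
# Assumes lmms_eval is installed and that lmms_eval/models/__init__.py
# exposes its registry as the AVAILABLE_MODELS dict (per the link above).
from lmms_eval.models import AVAILABLE_MODELS

for name in sorted(AVAILABLE_MODELS):
    print(name)  # e.g. "llava_vid"
```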