This folder contains the evaluation harness for the MiniWoB++ benchmark, powered by BrowserGym, which makes it easy to evaluate how well a browsing-capable agent performs on synthetic web browsing tasks.
Please follow the instructions here to set up your local development environment and LLM.
Open the MiniWoB URLs above in a browser and check that they load correctly.
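If you prefer to check the setup programmatically, here is a minimal sketch that loads a single task through BrowserGym. The task name and the `MINIWOB_URL` value are placeholders (not part of this harness), so adjust them to wherever you are serving the MiniWoB++ HTML files.

```python
# Minimal smoke test (not part of the harness): load one MiniWoB++ task
# through BrowserGym to confirm the environment is reachable.
# The task name and MINIWOB_URL value below are examples.
import os

import gymnasium as gym
import browsergym.miniwob  # noqa: F401  # registers the browsergym/miniwob.* tasks

os.environ.setdefault("MINIWOB_URL", "http://localhost:8080/miniwob/")

env = gym.make("browsergym/miniwob.click-test")
obs, info = env.reset()
print("Environment loaded; observation keys:", sorted(obs.keys()))
env.close()
```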
```bash
./evaluation/miniwob/scripts/run_infer.sh llm.claude-35-sonnet-eval
```
Results will be in `evaluation/evaluation_outputs/outputs/miniwob/`.
To calculate the average reward, run:
```bash
poetry run python evaluation/miniwob/get_success_rate.py evaluation/evaluation_outputs/outputs/miniwob/SOME_AGENT/EXP_NAME/output.jsonl
```
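This simply averages the per-task rewards. If you want to recompute them yourself, a rough sketch is below; the `test_result` → `reward` field layout is an assumption about the output format, so adjust it to match what your `output.jsonl` actually contains.

```python
# Hedged sketch: average the per-task rewards in an output.jsonl file.
# The "test_result" -> "reward" layout is an assumption; check your file.
import json
import sys

path = sys.argv[1]  # path to output.jsonl
rewards = []
with open(path) as f:
    for line in f:
        entry = json.loads(line)
        rewards.append(float(entry.get("test_result", {}).get("reward", 0.0)))

if rewards:
    print(f"{len(rewards)} tasks, average reward = {sum(rewards) / len(rewards):.3f}")
else:
    print("No entries found in", path)
```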
You can start your own fork of our Hugging Face evaluation outputs and submit a PR with your evaluation results, following the guide here.
Tested on BrowsingAgent V1.0
MiniWoB++, 125 tasks (3 runs each, since tasks are randomly initialized), max 10 steps per task:
- GPT-4o: 0.384, 0.416, 0.424 (average: 0.408)
- GPT-3.5: 0.288, 0.256, 0.272 (average: 0.272)
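As a quick sanity check, the reported averages can be re-derived from the three per-run scores above:

```python
# Re-derive the reported averages from the per-run scores listed above.
gpt_4o_runs = [0.384, 0.416, 0.424]
gpt_35_runs = [0.288, 0.256, 0.272]
print(f"GPT-4o avg:  {sum(gpt_4o_runs) / len(gpt_4o_runs):.3f}")  # 0.408
print(f"GPT-3.5 avg: {sum(gpt_35_runs) / len(gpt_35_runs):.3f}")  # 0.272
```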