
Categorise test cases #129

Conversation

pr4deepr
Contributor

This PR contains:

  • a new test-case for the benchmark
    • I hereby confirm that NO LLM-based technology (such as github copilot) was used while writing this benchmark
  • new dependencies in requirements.txt
    • The environment.yml file was updated using the command conda env export > environment.yml
  • new generator-functions allowing to sample from other LLMs
  • new samples (sample_....jsonl files)
  • new benchmarking results (..._results.jsonl files)
  • documentation update
  • bug fixes

Related github issue (if relevant): closes #112

Short description:
Group test cases into categories, making it easier to understand benchmark LLM performance.

How do you think this will influence the benchmark results?

  • It will make it easier to understand and benchmark models, and help identify areas where we need more test cases (see the sketch below).
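
For illustration, a minimal sketch of how such a category mapping could be used to aggregate benchmark results per category. The `task`/`passed` field names and the example mapping are assumptions for this sketch, not the actual schema of the `..._results.jsonl` files or of this PR:

```python
import json

# Hypothetical mapping from test-case name to category; the real PR may
# store the categorisation differently (e.g. inside the test cases themselves).
CATEGORIES = {
    "open_image": "file I/O",
    "save_image": "file I/O",
    "gaussian_blur": "filtering",
    "label_objects": "segmentation",
}

def pass_rate_per_category(results_path):
    """Aggregate pass rates from a results .jsonl file by test-case category."""
    totals = {}
    passed = {}
    with open(results_path) as f:
        for line in f:
            record = json.loads(line)
            # Fall back to "uncategorised" so unmapped test cases stand out.
            category = CATEGORIES.get(record["task"], "uncategorised")
            totals[category] = totals.get(category, 0) + 1
            passed[category] = passed.get(category, 0) + int(record["passed"])
    return {c: passed[c] / totals[c] for c in totals}
```

A per-category breakdown like this would directly surface patterns such as "all models fail at file I/O" that a single overall score hides.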

Why do you think it makes sense to merge this PR?

@ian-coccimiglio
Contributor

Interesting work here! This inspires me to look into my next set of test cases on "why are all these models failing at file I/O".

@haesleinhuepf haesleinhuepf changed the base branch from main to development-collecting-new-test-cases November 21, 2024 13:02
@haesleinhuepf
Owner

Thanks a lot @pr4deepr !

@haesleinhuepf haesleinhuepf merged commit d4ab9f8 into haesleinhuepf:development-collecting-new-test-cases Nov 21, 2024