
HumanEval Benchmark #362

Closed
L1aoXingyu opened this issue Jun 23, 2023 · 3 comments
Labels
enhancement New feature or request

Comments


L1aoXingyu commented Jun 23, 2023

🚀 Feature Request

I found that there is only the ICL benchmark in the eval folder, but HumanEval results are reported for MPT-30B. I want to reproduce those HumanEval results with llm-foundry.

So I would like llm-foundry to be integrated with the HumanEval benchmark.

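For context, HumanEval results like those reported for MPT-30B are typically given as pass@k scores. A minimal sketch of the unbiased pass@k estimator from the Codex paper (the sample counts below are hypothetical, purely for illustration):

```python
from math import comb


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n - c, k) / C(n, k),
    where n is the number of samples generated per problem and
    c is how many of those samples pass the unit tests."""
    if n - c < k:
        # Too few failing samples: every size-k subset contains a pass.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)


# Average pass@1 over a toy set of per-problem (n, c) counts.
results = [(10, 10), (10, 5), (10, 0)]  # hypothetical counts
score = sum(pass_at_k(n, c, k=1) for n, c in results) / len(results)
print(f"pass@1 = {score:.2f}")  # → pass@1 = 0.50
```

A harness integrated into llm-foundry would generate n completions per HumanEval problem, execute each against the problem's unit tests to get c, and aggregate with this estimator.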

@L1aoXingyu L1aoXingyu added the enhancement New feature or request label Jun 23, 2023
bmosaicml (Contributor) commented:

Hi! We are working on integrating HumanEval into the current coding suite. Thank you for your patience while we do so :)

L1aoXingyu (Author) commented:

@bmosaicml Hi, I want to train a code LLM using llm-foundry. Are there any tutorials or docs about it? Thank you so much!

@dakinggg dakinggg mentioned this issue Sep 16, 2023
dakinggg (Collaborator) commented Oct 5, 2023

This has been merged in #587

@dakinggg dakinggg closed this as completed Oct 5, 2023