
[Feature] Eval benchmark for China ecosystem #199

Closed
2 tasks done
joshuayao opened this issue Nov 12, 2024 · 1 comment
Comments

joshuayao (Collaborator) commented Nov 12, 2024

Specialize evaluation benchmarks tailored to Chinese language models, focusing on their performance and accuracy on Chinese datasets. Extend the RAGAS benchmark to support Chinese datasets such as CRUD.
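To make the request concrete, here is a minimal, hypothetical sketch of what one evaluation record in a RAGAS-style Chinese benchmark might look like, using the question/contexts/answer/ground_truth column layout common to RAG evaluation harnesses. The `char_recall` metric below is a toy stand-in (not a RAGAS metric) chosen because character-level comparison sidesteps Chinese word segmentation; all names and the sample data are illustrative assumptions, not OPEA or RAGAS APIs.

```python
def char_recall(answer: str, reference: str) -> float:
    """Fraction of unique reference characters that appear in the answer.

    A crude, illustrative stand-in for an answer-correctness metric on
    Chinese text, where word tokenization is non-trivial.
    """
    ref_chars = set(reference)
    if not ref_chars:
        return 0.0
    return len(ref_chars & set(answer)) / len(ref_chars)


# One hypothetical evaluation record in the shape RAGAS-style harnesses
# typically consume (question, retrieved contexts, answer, ground truth).
sample = {
    "question": "长城有多长？",
    "contexts": ["长城总长度约为21196公里。"],
    "answer": "长城总长约21196公里。",
    "ground_truth": "长城总长度约为21196公里。",
}

score = char_recall(sample["answer"], sample["ground_truth"])
print(f"char recall: {score:.2f}")
```

In an actual extension, records of this shape would be loaded from a Chinese dataset such as CRUD and scored with the real RAGAS metrics (faithfulness, answer relevancy, etc.) configured with a Chinese-capable judge model, rather than this toy score.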

@joshuayao joshuayao added the feature New feature or request label Nov 12, 2024
@joshuayao joshuayao added this to the v1.2 milestone Nov 12, 2024
@joshuayao joshuayao added this to OPEA Nov 12, 2024
joshuayao (Collaborator, Author) commented

Done: the PR has been merged.

@github-project-automation github-project-automation bot moved this to Done in OPEA Nov 22, 2024
Development: no branches or pull requests

2 participants