
feat: Add frontend for evaluation #25

Closed
wants to merge 11 commits

Conversation

binxuan39 commented Aug 8, 2024

Purpose

This feature integrates evaluation functionality into the existing frontend by adding a new "Evaluate" tab to the header. Users can upload their test data, select evaluation metrics, and configure detailed evaluation settings, then submit the evaluation request to the backend. The backend handles the request by running the evaluation scripts and returns the results to the frontend for users to download.
[Screenshot: the new "Evaluate" tab in the frontend header]

Does this introduce a breaking change?

When developers merge from main and run the server, azd up, or azd deploy, will this produce an error?
If you're not sure, try it out on an old environment.

  • Yes
  • No

Does this require changes to learn.microsoft.com docs?

This repository is referenced by this tutorial,
which includes deployment, settings, and usage instructions. If any text or screenshots in the tutorial need to change,
check the box below and notify the tutorial author. A Microsoft employee can do this for you if you're an external contributor.

  • Yes
  • No

Type of change

  • Bugfix
  • Feature
  • Code style update (formatting, local variables)
  • Refactoring (no functional changes, no api changes)
  • Documentation content changes
  • Other... Please describe:

Code quality checklist

See CONTRIBUTING.md for more details.

  • The current tests all pass (python -m pytest).
  • I added tests that prove my fix is effective or that my feature works
  • I ran python -m pytest --cov to verify 100% coverage of added lines
  • I ran python -m mypy to check for type errors
  • I either used the pre-commit hooks or ran ruff and black manually on my code.

binxuan39 marked this pull request as ready for review August 8, 2024 18:03
binxuan39 (Author) commented:

The current solution cannot pass the ruff check because the evaluation package cannot be imported properly.

binxuan39 force-pushed the evaluation-wip-integrate-frontend branch from c451412 to c8e2fd3 on August 10, 2024 15:27
GrayNekoBean and others added 9 commits August 13, 2024 13:00
- add /generate route
- add generate tab to the first step of the evaluation page
- fix error in setting number of questions
- change the request data format of generate to JSON
- fixed error message display issue
- fixed report generation failing bug
- updated .gitignore
- Separated GPT metrics and statistic metrics
- select all by default
- multiple style & layout fixes
binxuan39 closed this Aug 26, 2024