Issues: microsoft/rag-experiment-accelerator
Indexing step not working properly when "use_checkpoints" is turned on
#783 opened Oct 14, 2024 by yuvalyaron
rag-experiment-accelerator package installation fails on PromptFlow due to dependency conflicts
#755 opened Sep 24, 2024 by KKulma
[peer-review-pal] Repository Review for Commit #290bd0ecdde69903f6792d281f4afc6e6604f09c
#754 opened Sep 23, 2024 by julia-meshcheryakova
pip install custom_environment/rag_experiment_accelerator-0.9-py3-none-any.whl fails
#734 opened Sep 18, 2024 by KKulma
[Refactoring] Pass config to load_<format>_files() methods
Labels: Good have, hack
#732 opened Sep 17, 2024 by beandrad
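This refactoring asks for the experiment config to be passed into the loader methods rather than read from module-level state. A minimal sketch of what that signature could look like, assuming a small Config dataclass holding chunking settings; the names below are illustrative, not the accelerator's actual API:

```python
from dataclasses import dataclass
from pathlib import Path


@dataclass
class Config:
    # Illustrative subset of the chunking settings a loader might need.
    chunk_size: int = 512
    overlap: int = 128


def load_txt_files(data_dir: Path, config: Config) -> list[str]:
    """Load and chunk plain-text files using settings passed in via `config`
    instead of reading them from globals."""
    chunks: list[str] = []
    step = config.chunk_size - config.overlap
    for path in sorted(data_dir.glob("*.txt")):
        text = path.read_text(encoding="utf-8", errors="ignore")
        chunks.extend(text[i : i + config.chunk_size] for i in range(0, len(text), step))
    return chunks
```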
Add promptflow-evals quality metrics as an alternative to ragas
Labels: enhancement
#707 opened Sep 13, 2024 by prvenk
enable image and table data ingestion, indexing and search as part of Multi-modal RAG
Labels: Good have, hack
#703 opened Sep 9, 2024 by ritesh-modi
add new Search Features like Filtering (pre, post), generate dynamic rich metadata (like summary, title, keywords, entities, etc.) along with usage of Query Profiles
Labels: enhancement, hack
#685 opened Aug 29, 2024 by ritesh-modi
index_name is cut off to 128 characters
Labels: Must have
#684 opened Aug 29, 2024 by julia-meshcheryakova (2 tasks)
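Azure AI Search limits index names to 128 characters, which is why long generated names get cut off. One possible mitigation, sketched below with hypothetical helper and constant names rather than the accelerator's own code, is to truncate deterministically and append a short hash so distinct experiment configurations still map to distinct indexes:

```python
import hashlib

# Azure AI Search index names are limited to 128 characters.
MAX_INDEX_NAME_LEN = 128


def shorten_index_name(name: str, max_len: int = MAX_INDEX_NAME_LEN) -> str:
    """Return `name` unchanged when it fits; otherwise truncate it and append a
    short hash of the full name so different experiment configs keep distinct indexes."""
    if len(name) <= max_len:
        return name
    digest = hashlib.sha1(name.encode("utf-8")).hexdigest()[:8]
    return f"{name[: max_len - len(digest) - 1]}-{digest}"
```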
Integrate the best experiment (Prompt Flow) with LLMOps for automated deployment
Labels: enhancement, hack
#669 opened Aug 14, 2024 by ritesh-modi
Generate Prompt Flow inference pipeline based on the best experiment configuration
Labels: enhancement, hack
#668 opened Aug 14, 2024 by ritesh-modi
Extend metric reporting such that confidence intervals are also computed and displayed. Confidence interval computation should leverage recent frameworks like prediction-powered inference to combine AI-based metrics (like LLM-based metrics) with human evaluations.
#597 opened Jun 13, 2024 by dmavroeid
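For context, the prediction-powered inference (PPI) approach this issue refers to scores a large unlabeled set with the AI/LLM-based metric and corrects its bias using a small human-labeled set. A minimal sketch of the PPI mean estimate with a normal-approximation confidence interval; this is illustrative only, not the accelerator's code:

```python
import numpy as np
from scipy import stats


def ppi_mean_ci(y_labeled, yhat_labeled, yhat_unlabeled, alpha=0.05):
    """Prediction-powered estimate of a mean metric with a (1 - alpha) confidence interval.

    y_labeled:      human scores on the small labeled set
    yhat_labeled:   AI/LLM-based scores on the same labeled examples
    yhat_unlabeled: AI/LLM-based scores on the large unlabeled set
    """
    y = np.asarray(y_labeled, dtype=float)
    f = np.asarray(yhat_labeled, dtype=float)
    f_u = np.asarray(yhat_unlabeled, dtype=float)
    n, big_n = len(y), len(f_u)

    rectifier = y - f                        # measures the AI metric's bias against humans
    estimate = f_u.mean() + rectifier.mean()

    # Normal-approximation standard error combining both sources of noise.
    se = np.sqrt(f_u.var(ddof=1) / big_n + rectifier.var(ddof=1) / n)
    z = stats.norm.ppf(1 - alpha / 2)
    return estimate, (estimate - z * se, estimate + z * se)
```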
Separate each search type into its own mlflow run to allow comparison using Azure ML / mlflow
#540 opened May 9, 2024 by guybartal
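One way to structure this in mlflow is a parent run per experiment with one nested child run per search type, which Azure ML and the mlflow UI can then compare side by side. A minimal sketch, with evaluate_search and the listed search types standing in for the accelerator's actual evaluation step and configuration:

```python
import mlflow


def evaluate_search(search_type: str) -> dict[str, float]:
    """Stand-in for the accelerator's evaluation step; returns placeholder metric values."""
    return {"map_at_3": 0.0, "ndcg_at_3": 0.0}


search_types = ["text", "vector", "hybrid"]  # illustrative values

with mlflow.start_run(run_name="rag-experiment"):
    for search_type in search_types:
        # One nested child run per search type so runs can be compared directly.
        with mlflow.start_run(run_name=f"search={search_type}", nested=True):
            mlflow.log_param("search_type", search_type)
            mlflow.log_metrics(evaluate_search(search_type))
```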