Description
This issue keeps track of the main features we plan to deliver in the upcoming releases:
0.2.0
Features
- Azure as model provider (Feature/model provider setup #45)
- Provide support for Bedrock as model provider
- Aggregate `llm_complete` results with an `llm_reduce` function (Add llm_reduce Aggregate Function #53)
- `fusion_relative` for score normalization and max aggregation (fusion_relative Function for Score Normalization and Max Aggregation #65)

Performance Improvements
- Batching for `llm_embedding`; it is currently only supported for the LLM completion scalar functions. (Add the batching to the llm_embedding #54)
- Prediction caching based on exact prompt equality (in-memory vs. disk)
- Requests for unique rows (given relevant columns) per DataChunk
- Split the rows equally when a DataChunk needs more than one LLM prediction request (Performance Enhancement: Split equally the rows if a DataChunk needs more than 1 LLM prediction request #15)

Usability Improvements
- Further specialize certain functions and ship well-crafted, out-of-the-box prompts for common tasks (e.g., `llm_summarize`, `llm_extract_facts_and_verify_output`)
- Versioned `PROMPT`s (Add Support for Prompt Versioning #63)
- More flexible `llm_*` function calls (llm_* Function Call Structure for Enhanced Flexibility in Prompt Specification #61)

Bugs
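Two of the performance items above, prediction caching keyed on exact prompt equality and issuing one request per unique row in a DataChunk, can be sketched together. This is only an illustrative sketch; the names `PredictionCache`, `complete_chunk`, and the toy `llm_call` are hypothetical and not the extension's actual internals.

```python
# Hypothetical sketch: an in-memory prediction cache keyed on the exact
# prompt text, plus deduplication of identical rendered prompts within
# one chunk so each unique row triggers at most one LLM request.

class PredictionCache:
    def __init__(self):
        self._store = {}  # exact prompt text -> cached completion
        self.hits = 0

    def get_or_compute(self, prompt, compute):
        if prompt in self._store:
            self.hits += 1
            return self._store[prompt]
        result = compute(prompt)
        self._store[prompt] = result
        return result

def complete_chunk(rows, template, cache, llm_call):
    """Issue one request per *unique* rendered prompt in the chunk."""
    prompts = [template.format(**row) for row in rows]
    unique = {}  # prompt -> completion, computed once per chunk
    for p in prompts:
        if p not in unique:
            unique[p] = cache.get_or_compute(p, llm_call)
    return [unique[p] for p in prompts]

# Toy "model": echoes the prompt upper-cased.
cache = PredictionCache()
chunk1 = [{"name": "a"}, {"name": "b"}, {"name": "a"}]
out1 = complete_chunk(chunk1, "summarize {name}", cache, str.upper)
# A later chunk repeating an earlier prompt is served from the cache.
chunk2 = [{"name": "a"}]
out2 = complete_chunk(chunk2, "summarize {name}", cache, str.upper)
print(out1)        # ['SUMMARIZE A', 'SUMMARIZE B', 'SUMMARIZE A']
print(cache.hits)  # 1
```

A disk-backed variant would swap the dict for a persistent key-value store, which is the in-memory vs. disk trade-off noted above.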
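The splitting idea behind #15 — when a DataChunk needs more than one LLM request, size the requests evenly rather than filling batches to the limit and leaving a small remainder — can be sketched as follows. `split_evenly` and the fixed per-request row limit are assumptions for illustration, not the extension's real API.

```python
import math

def split_evenly(rows, max_per_request):
    """Split rows into the minimum number of requests, sized as evenly
    as possible, instead of full batches plus a tiny remainder."""
    if not rows:
        return []
    n_requests = math.ceil(len(rows) / max_per_request)
    base, extra = divmod(len(rows), n_requests)
    batches, start = [], 0
    for i in range(n_requests):
        # The first `extra` batches take one additional row each.
        size = base + (1 if i < extra else 0)
        batches.append(rows[start:start + size])
        start += size
    return batches

# 10 rows with a limit of 8: naive batching gives sizes [8, 2];
# even splitting gives [5, 5], balancing the two requests.
print([len(b) for b in split_evenly(list(range(10)), 8)])  # [5, 5]
print([len(b) for b in split_evenly(list(range(17)), 8)])  # [6, 6, 5]
```

Balanced request sizes keep per-request latency roughly uniform, which matters when the requests for one chunk are issued concurrently.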