Introduction to Retrieval-Augmented Generation (RAG)

---
## Frameworks

### [LangChain](https://github.com/langchain-ai/langchain)
LangChain is a framework for developing applications powered by large language models (LLMs).<br>
![](https://js.langchain.com/assets/images/langchain_stack_feb_2024-101939844004a99c1b676723fc0ee5e9.webp)
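
Below is a minimal, illustrative LCEL (LangChain Expression Language) chain. It is not taken from the post; it assumes the `langchain-openai` package is installed and `OPENAI_API_KEY` is set in the environment.<br>

```
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Compose a prompt, a chat model, and an output parser with LCEL's pipe syntax
prompt = ChatPromptTemplate.from_template("Explain {topic} in one sentence.")
model = ChatOpenAI(model="gpt-3.5-turbo")
chain = prompt | model | StrOutputParser()

print(chain.invoke({"topic": "Retrieval-Augmented Generation"}))
```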

---
### [LangGraph](https://langchain-ai.github.io/langgraph/)
**[Introduction to LangGraph](https://langchain-ai.github.io/langgraph/tutorials/introduction/)**<br>
![](https://github.com/rkuo2000/AI-course/blob/main/images/LangGraph_intro.png?raw=true)<br>

```
import json

from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from langgraph.checkpoint.sqlite import SqliteSaver
from langgraph.graph import END, MessageGraph
from langgraph.prebuilt.tool_node import ToolNode

# Decide whether to call a tool or finish, based on the last message
def should_continue(messages):
    last_message = messages[-1]
    # If the model made no tool call, then we finish
    if not last_message.tool_calls:
        return END
    else:
        return "action"

# Define a new graph
workflow = MessageGraph()

# The agent node is the Claude model with the Tavily search tool bound to it
tools = [TavilySearchResults(max_results=1)]
model = ChatAnthropic(model="claude-3-haiku-20240307").bind_tools(tools)
workflow.add_node("agent", model)
workflow.add_node("action", ToolNode(tools))

workflow.set_entry_point("agent")

# Conditional edge: agent -> action OR agent -> END
workflow.add_conditional_edges(
    "agent",
    should_continue,
)

# Always transition `action` -> `agent`
workflow.add_edge("action", "agent")

memory = SqliteSaver.from_conn_string(":memory:")  # here we only save in-memory

# interrupt_before=["action"] pauses the graph before every tool call
app = workflow.compile(checkpointer=memory, interrupt_before=["action"])
```
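
A minimal usage sketch (illustrative, not from the post): because the graph is compiled with a checkpointer and `interrupt_before=["action"]`, each call needs a `thread_id`, execution pauses just before the tool node, and passing `None` as the input resumes from the saved checkpoint. It assumes `ANTHROPIC_API_KEY` and `TAVILY_API_KEY` are set.<br>

```
from langchain_core.messages import HumanMessage

config = {"configurable": {"thread_id": "1"}}  # checkpoints are keyed by thread_id

# Runs until the graph interrupts right before the "action" (tool) node
app.invoke(HumanMessage(content="Search for recent news about LangGraph"), config)

# Passing None resumes from the checkpoint, so the tool call actually executes
messages = app.invoke(None, config)
print(messages[-1].content)
```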

---
### [LlamaIndex](https://github.com/run-llama/llama_index)
LlamaIndex (GPT Index) is a data framework for your LLM application.<br>
**Kaggle:** [https://www.kaggle.com/code/rkuo2000/llm-llamaindex](https://www.kaggle.com/code/rkuo2000/llm-llamaindex)<br>
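
Below is a minimal, illustrative LlamaIndex RAG sketch (not taken from the Kaggle notebook); it assumes a local `data/` folder of documents and an `OPENAI_API_KEY`, since LlamaIndex defaults to OpenAI models.<br>

```
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# Load local documents, embed them into a vector index, and query it
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
response = query_engine.query("What is Retrieval-Augmented Generation?")
print(response)
```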

---