Multi Agent Research 🚀👨‍👦‍👦 #467
-
The token count is over the limit.
-
This is great work, Assaf. Thank you for always pushing the frontier of what's possible. I've been toying with this and am curious to ask a few things.
-
@assafelovic Nice job! Have you considered making it capable of producing a `detailed_report`? Currently it only supports `research_report`. I find `detailed_report` superior in many cases. Also, it didn't respect all my guidelines: it produced a report with many subsections (as I instructed), but they contained only placeholder text, without going into details. Example:
Output:
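If it helps, the core library does expose a `detailed_report` report type; below is a minimal sketch of requesting it directly, assuming gpt-researcher's documented `GPTResearcher` constructor and methods (check the project README for the authoritative shape):

```python
from gpt_researcher import GPTResearcher

async def get_detailed_report(query: str) -> str:
    # "detailed_report" is the report type discussed above;
    # the default is "research_report".
    researcher = GPTResearcher(query=query, report_type="detailed_report")
    await researcher.conduct_research()   # gather sources
    return await researcher.write_report()  # generate the report text
```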
-
Hey @pax-k, it's supposed to produce detailed reports by design! I'll take a look at why it didn't output as expected. It looks like it did not do an actual web search. Have you added a Tavily API key? Maybe I should add that to the installation instructions.
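For reference, the default web retriever is Tavily, so the key has to be set before any search runs. A minimal sketch of setting the keys from Python (placeholder values, not real keys):

```python
import os

# Placeholder values -- substitute your own keys. TAVILY_API_KEY drives
# the default web search; OPENAI_API_KEY drives the LLM calls.
os.environ["TAVILY_API_KEY"] = "tvly-..."
os.environ["OPENAI_API_KEY"] = "sk-..."
```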
-
Great work! I tried it with a local model by modifying `agents/utils/llms.py`:

```python
from langchain.adapters.openai import convert_openai_messages
from langchain_openai import ChatOpenAI
# ollama
from langchain_community.chat_models import ChatOllama


async def call_model(
        prompt: list,
        model: str,
        backend_type: str = 'ollama',
        max_retries: int = 2,
        response_format: str = None,
        api_key: str = None
) -> str:
    """
    Args:
        backend_type (str): "ollama" or "openai"
    """
    valid_backends = ['ollama', 'openai']
    assert backend_type in valid_backends, (
        f'Invalid backend_type "{backend_type}". Valid backends are {valid_backends}')
    if backend_type == 'openai':
        optional_params = {}
        if response_format == 'json':
            optional_params = {
                "response_format": {"type": "json_object"}
            }
        lc_messages = convert_openai_messages(prompt)
        response = ChatOpenAI(model=model,
                              max_retries=max_retries,
                              api_key=api_key,
                              model_kwargs=optional_params).invoke(lc_messages).content
        return response
    elif backend_type == 'ollama':
        llm = ChatOllama(model=model, format=response_format, temperature=0)
        response = llm.invoke(prompt)
        return response.content
    raise ValueError('No Backend selected')
```

and changing the …

```text
MASTER: Starting the research process for query 'Automatic Speech Recoginition SOTA models?'...
RESEARCHER: Running initial research on the following query: Automatic Speech Recoginition SOTA models?
🔎 Starting the research task for 'Automatic Speech Recoginition SOTA models?'...
⚠️ Error in reading JSON, attempting to repair JSON
Error using json_repair: 'list' object has no attribute 'get'
Traceback (most recent call last):
File "/home/wakeb/text-workspace/gpt-researcher/gpt_researcher/master/actions.py", line 104, in choose_agent
agent_dict = json.loads(response)
^^^^^^^^^^^^^^^^^^^^
File "/home/wakeb/miniconda3/envs/gpt-research/lib/python3.11/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/wakeb/miniconda3/envs/gpt-research/lib/python3.11/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/wakeb/miniconda3/envs/gpt-research/lib/python3.11/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/wakeb/text-workspace/gpt-researcher/multi_agents/main.py", line 62, in <module>
asyncio.run(main())
File "/home/wakeb/miniconda3/envs/gpt-research/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/home/wakeb/miniconda3/envs/gpt-research/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/wakeb/miniconda3/envs/gpt-research/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/home/wakeb/text-workspace/gpt-researcher/multi_agents/main.py", line 57, in main
research_report = await chief_editor.run_research_task()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/wakeb/text-workspace/gpt-researcher/multi_agents/agents/master.py", line 64, in run_research_task
result = await chain.ainvoke({"task": self.task})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/wakeb/miniconda3/envs/gpt-research/lib/python3.11/site-packages/langgraph/pregel/__init__.py", line 1504, in ainvoke
async for chunk in self.astream(
File "/home/wakeb/miniconda3/envs/gpt-research/lib/python3.11/site-packages/langgraph/pregel/__init__.py", line 1333, in astream
_panic_or_proceed(done, inflight, step)
File "/home/wakeb/miniconda3/envs/gpt-research/lib/python3.11/site-packages/langgraph/pregel/__init__.py", line 1537, in _panic_or_proceed
raise exc
File "/home/wakeb/miniconda3/envs/gpt-research/lib/python3.11/site-packages/langgraph/pregel/retry.py", line 120, in arun_with_retry
await task.proc.ainvoke(task.input, task.config)
File "/home/wakeb/miniconda3/envs/gpt-research/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2535, in ainvoke
input = await step.ainvoke(input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/wakeb/miniconda3/envs/gpt-research/lib/python3.11/site-packages/langgraph/utils.py", line 117, in ainvoke
ret = await asyncio.create_task(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/wakeb/text-workspace/gpt-researcher/multi_agents/agents/researcher.py", line 43, in run_initial_research
return {"task": task, "initial_research": await self.research(query=query, verbose=task.get("verbose"),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/wakeb/text-workspace/gpt-researcher/multi_agents/agents/researcher.py", line 19, in research
await researcher.conduct_research()
File "/home/wakeb/text-workspace/gpt-researcher/gpt_researcher/master/agent.py", line 109, in conduct_research
self.agent, self.role = await choose_agent(
^^^^^^^^^^^^^^^^^^^
File "/home/wakeb/text-workspace/gpt-researcher/gpt_researcher/master/actions.py", line 109, in choose_agent
return await handle_json_error(response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/wakeb/text-workspace/gpt-researcher/gpt_researcher/master/actions.py", line 124, in handle_json_error
return json_data["server"], json_data["agent_role_prompt"]
~~~~~~~~~^^^^^^^^^^
KeyError: 'server'
```
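For what it's worth, the final `KeyError` happens because `choose_agent` indexes the model's JSON reply directly, and a local model here returned a list rather than the expected object. A minimal defensive sketch (a hypothetical helper, not the project's API) that falls back to a default agent instead of raising:

```python
import json

# Hypothetical fallback -- names and default text are illustrative only.
DEFAULT_AGENT = ("Default Agent",
                 "You are a helpful research assistant.")

def parse_agent_response(response: str) -> tuple[str, str]:
    """Return (server, agent_role_prompt), falling back to a default
    when the model's reply is not the expected JSON object."""
    try:
        data = json.loads(response)
    except json.JSONDecodeError:
        return DEFAULT_AGENT
    if isinstance(data, dict) and "server" in data and "agent_role_prompt" in data:
        return data["server"], data["agent_role_prompt"]
    return DEFAULT_AGENT
```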
```console
(gpt-research) wakeb@wakeb-GPU-SE04:~/text-workspace/gpt-researcher/multi_agents$ cat agents/
editor.py  __init__.py  master.py  publisher.py  __pycache__/  researcher.py  reviewer.py  reviser.py  utils/  writer.py
(gpt-research) wakeb@wakeb-GPU-SE04:~/text-workspace/gpt-researcher/multi_agents$ cat agents/utils/llms.py
```
```python
from langchain.adapters.openai import convert_openai_messages
from langchain_openai import ChatOpenAI
# ollama
from langchain_community.chat_models import ChatOllama


async def call_model(
        prompt: list,
        model: str,
        backend_type: str = 'ollama',
        max_retries: int = 2,
        response_format: str = None,
        api_key: str = None
) -> str:
    """
    Args:
        backend_type (str): "ollama" or "openai"
    """
    valid_backends = ['ollama', 'openai']
    assert backend_type in valid_backends, (
        f'Invalid backend_type "{backend_type}". Valid backends are {valid_backends}')
    if backend_type == 'openai':
        optional_params = {}
        if response_format == 'json':
            optional_params = {
                "response_format": {"type": "json_object"}
            }
        lc_messages = convert_openai_messages(prompt)
        response = ChatOpenAI(model=model,
                              max_retries=max_retries,
                              api_key=api_key,
                              model_kwargs=optional_params).invoke(lc_messages).content
        return response
    elif backend_type == 'ollama':
        llm = ChatOllama(model=model, format=response_format, temperature=0)
        response = llm.invoke(prompt)
        return response.content
    raise ValueError('No Backend selected')
```
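For anyone trying the same thing, here is a quick usage sketch of the `call_model` above; the model name and prompt are placeholders, and it assumes a local Ollama server with the model already pulled:

```python
import asyncio

# Placeholder prompt and model -- assumes an Ollama server is running
# locally and the model has been pulled (e.g. `ollama pull llama3`).
prompt = [
    {"role": "system", "content": "You are a research assistant."},
    {"role": "user", "content": "List SOTA ASR models. Reply in JSON."},
]

result = asyncio.run(call_model(prompt, model="llama3",
                                backend_type="ollama",
                                response_format="json"))
print(result)
```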
-
Hi there. It seems that when using local files and having …
Here is the …
Also, the env file (parts): …
I looked at the error and was able to "fix" it by adding:

```python
class ReviserAgent:
    def __init__(self, headers=None):
        self.headers = headers or {}

    async def revise_draft(self, draft_state: dict):
        """
        Review a draft article
        :param draft_state:
        :return:
        """
        review = draft_state.get("review")
        task = draft_state.get("task")
        draft_report = draft_state.get("draft")
        prompt = [{
            "role": "system",
            "content": "You are an expert writer. Your goal is to revise drafts based on reviewer notes."
        }, {
            "role": "user",
            "content": f"""Draft:\n{draft_report}\n\nReviewer's notes:\n{review}\n\n
You have been tasked by your reviewer with revising the following draft, which was written by a non-expert.
If you decide to follow the reviewer's notes, please write a new draft and make sure to address all of the points they raised.
Please keep all other aspects of the draft the same.
You MUST return nothing but a JSON in the following format:
{sample_revision_notes}
"""
        }]
        response = await call_model(prompt, model=task.get("model"), response_format='json',
                                    api_key=self.headers.get("openai_api_key"))
        return json.loads(response)

    async def run(self, draft_state: dict):
        print_agent_output("Rewriting draft based on feedback...", agent="REVISOR")
        revision = await self.revise_draft(draft_state)
        if draft_state.get("task").get("verbose"):
            print_agent_output(f"Revision notes: {revision.get('revision_notes')}", agent="REVISOR")
        return {"draft": revision.get("draft"),
                "revision_notes": revision.get("revision_notes")}
```
It does not give an error now, but it gets stuck in a loop and eventually LangGraph's recursion limit of 25 (maybe more?) is reached. I know it "works" because I see the comments of `reviser` on what it has done.
EDIT: added the reviser agent function above
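If the review/revise cycle genuinely needs more iterations, LangGraph reads a `recursion_limit` from the per-run config, so the limit can be raised for a single invocation. A minimal sketch, assuming the compiled graph is invoked as `chain` in `master.py` (per the traceback earlier in this thread):

```python
# LangGraph's default recursion limit is 25; raise it for this run only.
result = await chain.ainvoke(
    {"task": self.task},
    config={"recursion_limit": 50},
)
```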
-
This is one of the most exciting releases yet. Proud to introduce the latest GPTR x LangGraph integration, showcasing the power of flow engineering and multi-agent collaboration! Check out the full implementation in the new `multi_agents` directory.
By using LangGraph, the research process can be significantly improved in depth and quality by leveraging multiple agents with specialized skills. Inspired by the recent STORM paper, this example showcases how a team of AI agents can work together to conduct research on a given topic, from planning to publication. An average run generates a 5-6 page research report in multiple formats such as PDF, DOCX and Markdown.
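A minimal sketch of kicking off a run programmatically, mirroring `multi_agents/main.py` from the traceback earlier in this thread; the import path, the `ChiefEditorAgent` class name, and the task fields are assumptions, so check `main.py` for the exact shape:

```python
import asyncio

# Class name and import path are assumed; the traceback above shows
# multi_agents/main.py calling chief_editor.run_research_task().
from multi_agents.agents.master import ChiefEditorAgent

task = {
    "query": "Is AI in a hype cycle?",  # placeholder query
    "model": "gpt-4o",                  # placeholder model
    "verbose": True,
}

async def main():
    chief_editor = ChiefEditorAgent(task)
    research_report = await chief_editor.run_research_task()
    print(research_report)

asyncio.run(main())
```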
The Multi Agent Team
The research team is made up of 7 AI agents, one per module shown in the `agents/` listing earlier in this thread: Chief Editor (master), Researcher, Editor, Reviewer, Reviser, Writer, and Publisher.
Architecture
This discussion was created from the release Multi Agent Research 🚀👨‍👦‍👦.