
Error in calling LLM: unsupported operand type(s) for +: 'NoneType' and 'NoneType' with custom LLM #1001

Closed
calz1 opened this issue Nov 27, 2024 · 1 comment


calz1 commented Nov 27, 2024

Describe the bug
I am trying gpt-researcher for the first time. I am using an OpenAI-compatible API endpoint (a llama.cpp server) and followed the instructions for configuring a custom base URL and key. On my first query I get an error that seems related to the endpoint's response, but I'm not sure what it is expecting.

To Reproduce
Steps to reproduce the behavior:

  1. Follow the procedure to git clone the project and install requirements.
  2. Configure .env for the custom endpoint (see the sketch below).
  3. Start uvicorn.
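
For reference, the .env is shaped roughly like this (placeholder values; the variable names are my reading of the docs for OpenAI-compatible endpoints, so treat them as assumptions):

```
# Rough shape of the .env; all values below are placeholders
OPENAI_API_KEY=sk-anything               # llama.cpp ignores the key, but one must be set
OPENAI_BASE_URL=http://localhost:8080/v1 # the custom endpoint
FAST_LLM=openai:local-model              # "local-model" is a placeholder name
SMART_LLM=openai:local-model
```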

I also tried the process here by putting the code into tester.py. I get a similar error. Any suggestions for what to try?

(.venv) cal@cal-virtualbox:~/Projects/gpt-researcher$ python tester.py
Error in calling LLM: unsupported operand type(s) for +: 'NoneType' and 'NoneType'
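
To check what the endpoint actually returns, a raw request can show whether the usage fields come back null (URL, key, and model name below are placeholders):

```python
# Probe the OpenAI-compatible endpoint directly and inspect the usage object;
# integer token counts are expected, e.g. {"prompt_tokens": 5, ...}.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    headers={"Authorization": "Bearer sk-anything"},
    json={"model": "local-model", "messages": [{"role": "user", "content": "hi"}]},
    timeout=60,
)
print(resp.json().get("usage"))
```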

Expected behavior
Query runs and doesn't error out immediately :(

Logs

(.venv) cal@cal-virtualbox:~/Projects/gpt-researcher$ uvicorn main:app --reload
INFO:     Will watch for changes in these directories: ['/home/cal/Projects/gpt-researcher']
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO:     Started reloader process [17007] using StatReload
INFO:     Started server process [17009]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     127.0.0.1:54602 - "GET / HTTP/1.1" 200 OK
INFO:     127.0.0.1:54602 - "GET /site/styles.css HTTP/1.1" 304 Not Modified
INFO:     127.0.0.1:54618 - "GET /site/scripts.js HTTP/1.1" 304 Not Modified
INFO:     127.0.0.1:54618 - "GET /static/gptr-logo.png HTTP/1.1" 304 Not Modified
INFO:     ('127.0.0.1', 46332) - "WebSocket /ws" [accepted]
INFO:     connection open
⚠ Error in reading JSON, attempting to repair JSON
Error using json_repair: the JSON object must be str, bytes or bytearray, not NoneType
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/home/cal/Projects/gpt-researcher/gpt_researcher/actions/agent_creator.py", line 27, in choose_agent
    response = await create_chat_completion(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/cal/Projects/gpt-researcher/gpt_researcher/utils/llm.py", line 60, in create_chat_completion
    response = await provider.get_chat_response(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/cal/Projects/gpt-researcher/gpt_researcher/llm_provider/generic/base.py", line 116, in get_chat_response
    output = await self.llm.ainvoke(messages)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/cal/Projects/gpt-researcher/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 307, in ainvoke
    llm_result = await self.agenerate_prompt(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/cal/Projects/gpt-researcher/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 796, in agenerate_prompt
    return await self.agenerate(
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/cal/Projects/gpt-researcher/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 756, in agenerate
    raise exceptions[0]
  File "/home/cal/Projects/gpt-researcher/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 924, in _agenerate_with_cache
    result = await self._agenerate(
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/cal/Projects/gpt-researcher/.venv/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 826, in _agenerate
    return await run_in_executor(
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/cal/Projects/gpt-researcher/.venv/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 588, in run_in_executor
    return await asyncio.get_running_loop().run_in_executor(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/cal/.pyenv/versions/3.11.10/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/cal/Projects/gpt-researcher/.venv/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 579, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/cal/Projects/gpt-researcher/.venv/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 730, in _create_chat_result
    message.usage_metadata = _create_usage_metadata(token_usage)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/cal/Projects/gpt-researcher/.venv/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 2232, in _create_usage_metadata
    total_tokens = oai_token_usage.get("total_tokens", input_tokens + output_tokens)
                                                       ~~~~~~~~~~~~~^~~~~~~~~~~~~~~
TypeError: unsupported operand type(s) for +: 'NoneType' and 'NoneType'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/cal/Projects/gpt-researcher/.venv/lib/python3.11/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 242, in run_asgi
    result = await self.app(self.scope, self.asgi_receive, self.asgi_send)  # type: ignore[func-returns-value]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/cal/Projects/gpt-researcher/.venv/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/cal/Projects/gpt-researcher/.venv/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/home/cal/Projects/gpt-researcher/.venv/lib/python3.11/site-packages/starlette/applications.py", line 113, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/cal/Projects/gpt-researcher/.venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 152, in __call__
    await self.app(scope, receive, send)
  File "/home/cal/Projects/gpt-researcher/.venv/lib/python3.11/site-packages/starlette/middleware/cors.py", line 77, in __call__
    await self.app(scope, receive, send)
  File "/home/cal/Projects/gpt-researcher/.venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/home/cal/Projects/gpt-researcher/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    raise exc
  File "/home/cal/Projects/gpt-researcher/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
    await app(scope, receive, sender)
  File "/home/cal/Projects/gpt-researcher/.venv/lib/python3.11/site-packages/starlette/routing.py", line 715, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/cal/Projects/gpt-researcher/.venv/lib/python3.11/site-packages/starlette/routing.py", line 735, in app
    await route.handle(scope, receive, send)
  File "/home/cal/Projects/gpt-researcher/.venv/lib/python3.11/site-packages/starlette/routing.py", line 362, in handle
    await self.app(scope, receive, send)
  File "/home/cal/Projects/gpt-researcher/.venv/lib/python3.11/site-packages/starlette/routing.py", line 95, in app
    await wrap_app_handling_exceptions(app, session)(scope, receive, send)
  File "/home/cal/Projects/gpt-researcher/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    raise exc
  File "/home/cal/Projects/gpt-researcher/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
    await app(scope, receive, sender)
  File "/home/cal/Projects/gpt-researcher/.venv/lib/python3.11/site-packages/starlette/routing.py", line 93, in app
    await func(session)
  File "/home/cal/Projects/gpt-researcher/.venv/lib/python3.11/site-packages/fastapi/routing.py", line 383, in app
    await dependant.call(**solved_result.values)
  File "/home/cal/Projects/gpt-researcher/backend/server/server.py", line 110, in websocket_endpoint
    await handle_websocket_communication(websocket, manager)
  File "/home/cal/Projects/gpt-researcher/backend/server/server_utils.py", line 121, in handle_websocket_communication
    await handle_start_command(websocket, data, manager)
  File "/home/cal/Projects/gpt-researcher/backend/server/server_utils.py", line 28, in handle_start_command
    report = await manager.start_streaming(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/cal/Projects/gpt-researcher/backend/server/websocket_manager.py", line 66, in start_streaming
    report = await run_agent(task, report_type, report_source, source_urls, tone, websocket, headers = headers, config_path = config_path)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/cal/Projects/gpt-researcher/backend/server/websocket_manager.py", line 108, in run_agent
    report = await researcher.run()
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/cal/Projects/gpt-researcher/backend/report_type/basic_report/basic_report.py", line 41, in run
    await researcher.conduct_research()
  File "/home/cal/Projects/gpt-researcher/gpt_researcher/agent.py", line 92, in conduct_research
    self.agent, self.role = await choose_agent(
                            ^^^^^^^^^^^^^^^^^^^
  File "/home/cal/Projects/gpt-researcher/gpt_researcher/actions/agent_creator.py", line 44, in choose_agent
    return await handle_json_error(response)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/cal/Projects/gpt-researcher/gpt_researcher/actions/agent_creator.py", line 55, in handle_json_error
    json_string = extract_json_with_regex(response)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/cal/Projects/gpt-researcher/gpt_researcher/actions/agent_creator.py", line 71, in extract_json_with_regex
    json_match = re.search(r"{.*?}", response, re.DOTALL)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/cal/.pyenv/versions/3.11.10/lib/python3.11/re/__init__.py", line 176, in search
    return _compile(pattern, flags).search(string)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: expected string or bytes-like object, got 'NoneType'
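
If I'm reading the first traceback right, the endpoint's chat completion is coming back without usable token counts, so langchain_openai's fallback addition fails. A paraphrased, minimal reproduction of that line (not the library's exact code):

```python
# What langchain_openai's _create_usage_metadata does, paraphrased: when the
# server reports null/missing token counts, both .get() calls yield None and
# the fallback addition raises the TypeError seen above.
oai_token_usage = {"prompt_tokens": None, "completion_tokens": None}

input_tokens = oai_token_usage.get("prompt_tokens")        # None
output_tokens = oai_token_usage.get("completion_tokens")   # None
total_tokens = oai_token_usage.get("total_tokens", input_tokens + output_tokens)
# TypeError: unsupported operand type(s) for +: 'NoneType' and 'NoneType'
```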

Desktop: Lubuntu 24.04

@ElishaKay (Collaborator)

The error message is saying that your LLM isn't configured correctly.

Check out the Running with Ollama and Testing your LLM docs.
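
As a quick sanity check (a rough sketch, not the exact docs snippet), you can also point langchain's ChatOpenAI straight at your endpoint and see whether a plain call succeeds. Model name, URL, and key are placeholders for your setup:

```python
# Minimal direct test of the endpoint through langchain_openai.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="local-model",                  # must match what the server serves
    base_url="http://localhost:8080/v1",  # your custom endpoint
    api_key="sk-anything",                # llama.cpp typically ignores the key
)
print(llm.invoke("Say hello").content)
```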
