
[BUG]: Errors occurred during the pipeline run, see logs for more details. #583

Closed
Jumbo-zczlbj0 opened this issue Jul 16, 2024 · 10 comments
Labels
community_support Issue handled by community members

Comments

@Jumbo-zczlbj0

Jumbo-zczlbj0 commented Jul 16, 2024

Describe the bug

"Sometimes, running the command python3 -m graphrag.index --root ./ragtest results in the error 'Errors occurred during the pipeline run, see logs for more details,' even though no configuration changes were made. This issue may occur after restarting the computer. I have tried deleting the original environment and creating a new one. Sometimes it works well, and sometimes it doesn't."

The same setup was previously running smoothly and had successfully answered my questions. Ollama is functioning properly and the model has been downloaded.

I am a beginner, so I might not understand everything fully. Please bear with me.

Screenshot from 2024-07-16 04-19-19

settings.yaml:
Screenshot from 2024-07-16 04-12-24
Screenshot from 2024-07-16 04-12-15
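
Since the screenshots above may be hard to read, here is a rough sketch of how a settings.yaml of this kind (Ollama for chat, LM Studio for embeddings) is usually laid out. The api_base URLs, ports, and model identifiers below are assumptions, not an exact copy of my file:

```yaml
# Sketch only: a typical GraphRAG 0.1.x settings.yaml pointing chat at Ollama and
# embeddings at LM Studio. The api_base ports and model identifiers are assumptions.
llm:
  api_key: ${GRAPHRAG_API_KEY}          # local servers usually ignore the value, but it must be set
  type: openai_chat
  model: gemma2:latest
  model_supports_json: true
  api_base: http://localhost:11434/v1   # Ollama's OpenAI-compatible endpoint (default port)

embeddings:
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding
    model: nomic-embed-text-v1.5        # identifier as exposed by LM Studio
    api_base: http://localhost:1234/v1  # LM Studio local server default port
```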

log: (ragtest/output/20240716-034934/reports/indexing-engine.log)

Screenshot from 2024-07-16 04-16-05

Screenshot from 2024-07-16 04-16-15

indexing-engine.log

Steps to reproduce

  1. Run LM Studio

  2. chmod +x LM_Studio-0.2.27.AppImage

  3. python3 -m graphrag.index --root ./ragtest
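
For clarity, this is roughly the sequence the three steps above correspond to; it assumes Ollama serves the chat model and LM Studio's local server serves the embedding model (ports and file names are from my setup and may differ):

```bash
# Rough sketch of the full sequence; assumes Ollama on :11434 and LM Studio's
# local server for embeddings.
# ollama serve &                      # only if the Ollama service is not already running
ollama pull gemma2:latest             # make sure the chat model is present
chmod +x LM_Studio-0.2.27.AppImage    # only needed once, to make the AppImage executable
./LM_Studio-0.2.27.AppImage &         # start LM Studio, then enable its local server
python3 -m graphrag.index --root ./ragtest
```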

Expected Behavior

I expect the LLM and the embedding model to process my data correctly.

GraphRAG Config Used

No response

Logs and screenshots

No response

Additional Information

  • GraphRAG Version: 0.1.1 (pip install graphrag)
  • Operating System: Ubuntu 22.04
  • Python Version: 3.12.4
  • LLM: ollama/gemma2:latest
  • Embeddings: LM Studio (nomic-embed-text-v1.5.Q5_K_M.gguf)
@Jumbo-zczlbj0 Jumbo-zczlbj0 added bug Something isn't working triage Default label assignment, indicates new issue needs reviewed by a maintainer labels Jul 16, 2024
@Jumbo-zczlbj0
Author

  1. /ragtest/.env:

GRAPHRAG_API_KEY=<API_KEY>

  2. ragtest/output/20240716-035359/artifacts/stats.json:

Screenshot from 2024-07-16 04-25-14

@Jumbo-zczlbj0 Jumbo-zczlbj0 changed the title [Bug]: Errors occurred during the pipeline run, see logs for more details. [Issue]: Errors occurred during the pipeline run, see logs for more details. Jul 16, 2024
@Jumbo-zczlbj0 Jumbo-zczlbj0 changed the title [Issue]: Errors occurred during the pipeline run, see logs for more details. [BUG]: Errors occurred during the pipeline run, see logs for more details. Jul 16, 2024
@myyourgit

To Jumbo:
How do I run chmod +x LM_Studio-0.2.27.AppImage on Windows 10?

I cannot find an LM_Studio-0.2.27.AppImage file in the LM Studio installation directory. Is it a *.exe file?

Thanks

@myyourgit

myyourgit commented Jul 17, 2024

To Jumbo:
I used your settings.yaml and ran LM Studio, but python -m graphrag.index --root ./ragtest does not run correctly.

Any suggestions about settings.yaml?

In the logs, the api_key is not recognized:
"llm": {
"api_key": "REDACTED, length 6",
"type": "openai_chat",
"model": "gemma2:latest",

Thanks

@Jumbo-zczlbj0
Author

To Jumbo: How do I run chmod +x LM_Studio-0.2.27.AppImage on Windows 10?

I cannot find an LM_Studio-0.2.27.AppImage file in the LM Studio installation directory. Is it a *.exe file?

Thanks

I use LM Studio on an Ubuntu system. You can download the Windows version from the official website: https://lmstudio.ai

@myyourgit

To Jumbo: How do I run chmod +x LM_Studio-0.2.27.AppImage on Windows 10?
I cannot find an LM_Studio-0.2.27.AppImage file in the LM Studio installation directory. Is it a *.exe file?
Thanks

I use LM Studio on an Ubuntu system. You can download the Windows version from the official website: https://lmstudio.ai

I use Windows 10. On Windows, I just run LM Studio, and I think that is OK.

@Jumbo-zczlbj0
Author

To Jumbo: I used your settings.yaml and ran LM Studio, but python -m graphrag.index --root ./ragtest does not run correctly.

Any suggestions about settings.yaml?

In the logs, the api_key is not recognized: "llm": { "api_key": "REDACTED, length 6", "type": "openai_chat", "model": "gemma2:latest",

Thanks

I don't know if you have installed Ollama. I referred to this video: https://youtu.be/BLyGDTNdad0?si=Pvx1og9aD_5fmZa3
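
If it helps, a few quick checks (assuming Ollama's default port 11434) can confirm whether Ollama is installed and serving:

```bash
# Quick sanity checks; assumes Ollama's default port 11434.
ollama --version                      # is the CLI installed?
ollama list                           # has the gemma2 model been pulled?
curl http://localhost:11434/api/tags  # is the server up and listing models?
```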

@Jumbo-zczlbj0
Author

Jumbo-zczlbj0 commented Jul 17, 2024

I installed graphrag in Docker to avoid this bug.

I am using the official NVIDIA Docker image (CUDA 12.2, Ubuntu 22.04, devel).
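
Roughly, the container setup looks like the sketch below; the exact image tag and install steps are assumptions, and --network host is just the simplest way to let the container reach Ollama/LM Studio running on the host:

```bash
# Sketch only: image tag and install steps are assumptions, adjust as needed.
docker run --gpus all --network host -it \
  -v "$(pwd)/ragtest:/ragtest" \
  nvidia/cuda:12.2.0-devel-ubuntu22.04 bash

# inside the container
apt-get update && apt-get install -y python3 python3-pip
pip3 install graphrag
python3 -m graphrag.index --root /ragtest
```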

By the way, LM Studio can be replaced with llama.cpp: https://github.com/ggerganov/llama.cpp
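
For the embedding side, a llama.cpp server invocation along these lines should work; the binary name and flags vary between llama.cpp versions, so treat it as a sketch:

```bash
# Sketch: serve the embedding GGUF over an OpenAI-compatible endpoint with llama.cpp.
./llama-server -m nomic-embed-text-v1.5.Q5_K_M.gguf --embedding --port 8080
# settings.yaml would then point the embeddings api_base at http://localhost:8080/v1
```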

@myyourgit

To Jumbo: I used your settings.yaml and ran LM Studio, but python -m graphrag.index --root ./ragtest does not run correctly.
Any suggestions about settings.yaml?
In the logs, the api_key is not recognized: "llm": { "api_key": "REDACTED, length 6", "type": "openai_chat", "model": "gemma2:latest",
Thanks

I don't know if you have installed Ollama. I referred to this video: https://youtu.be/BLyGDTNdad0?si=Pvx1og9aD_5fmZa3

Hi Jumbo,
thanks.

In the ollama directory, I run
ollama pull gemma2:9b

ollama run gemma2:9b
and it works.

But the problem is that when running the command below,

curl http://localhost:11434/v1/chat/completions

the result is:
404 page not found

This means the gemma2 endpoint is not recognized.

@Jumbo-zczlbj0
Author

To Jumbo: I used your settings.yaml and ran LM Studio, but python -m graphrag.index --root ./ragtest does not run correctly.
Any suggestions about settings.yaml?
In the logs, the api_key is not recognized: "llm": { "api_key": "REDACTED, length 6", "type": "openai_chat", "model": "gemma2:latest",
Thanks

I don't know if you have installed Ollama. I referred to this video: https://youtu.be/BLyGDTNdad0?si=Pvx1og9aD_5fmZa3

Hi Jumbo, thanks.

In the ollama directory, I run ollama pull gemma2:9b

ollama run gemma2:9b and it works.

But the problem is that when running the command below,

curl http://localhost:11434/v1/chat/completions

the result is: 404 page not found

This means the gemma2 endpoint is not recognized.

Please refer to the official API docs: https://github.com/ollama/ollama/blob/main/docs/api.md

For example:
curl http://localhost:11434/api/chat -d '{
"model": "gemm2:latest",
"messages": [
{ "role": "user", "content": "hi" }
]
}'

IMG_7056
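
Also, as far as I know, recent Ollama versions do expose an OpenAI-compatible route at /v1/chat/completions, but it only answers POST requests with a JSON body, so a plain curl GET returns "404 page not found" even when the server is healthy. Something along these lines should get a response (treat it as a sketch for your version):

```bash
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemma2:latest",
    "messages": [{ "role": "user", "content": "hi" }]
  }'
```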

@natoverse
Collaborator

Consolidating alternate model issues here: #657

@natoverse natoverse closed this as not planned Won't fix, can't repro, duplicate, stale Jul 22, 2024
@natoverse natoverse added community_support Issue handled by community members and removed bug Something isn't working triage Default label assignment, indicates new issue needs reviewed by a maintainer labels Jul 22, 2024