[BUG]: Errors occurred during the pipeline run, see logs for more details. #583
Comments
To: Jumbo: I cannot find the LM_Studio-0.2.27.AppImage file in the LM Studio installation directory. Is it a *.exe file? Thanks.
To: Jumbo: Any suggestions about settings.yaml? In the logs, the api_key is not recognized. Thanks.
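Since the question above concerns settings.yaml and an unrecognized api_key, here is a minimal sketch of one common workaround, assuming the default settings.yaml reads the key from the GRAPHRAG_API_KEY environment variable (an assumption; check your own settings.yaml and .env). Local servers such as Ollama or LM Studio do not validate the key, but GraphRAG still expects a value to be present:

```bash
# Illustrative sketch: assumes settings.yaml contains `api_key: ${GRAPHRAG_API_KEY}`
# and points api_base at a local OpenAI-compatible server. Local servers ignore
# the key's value, but GraphRAG still needs one defined before indexing.
export GRAPHRAG_API_KEY=dummy-key
python3 -m graphrag.index --root ./ragtest
```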
I use LM Studio on an Ubuntu system. You can download the Windows version from the official website: https://lmstudio.ai
I use Windows 10. On Windows, I just run LM Studio; I think that should be fine.
I don't know whether you have installed Ollama. I referred to this video: https://youtu.be/BLyGDTNdad0?si=Pvx1og9aD_5fmZa3
I installed GraphRAG in Docker to avoid this bug. I am using the official NVIDIA Docker image (CUDA 12.2, Ubuntu 22.04, devel). By the way, LM Studio can be replaced with llama.cpp: https://github.com/ggerganov/llama.cpp
Hi, Jumbo: in the ollama directory, I run ollama run gemma2:9b, but the problem is that when I then run curl http://localhost:11434/v1/chat/completions, the result indicates that the gemma2 port is not recognized.
Please refer to the official API documentation: https://github.com/ollama/ollama/blob/main/docs/api.md For example:
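To illustrate what the api.md docs describe (a sketch, assuming the gemma2:9b model has already been pulled): the bare curl in the comment above sends no request body, so the endpoint cannot return a completion; both the native API and the OpenAI-compatible endpoint expect a POST with a JSON payload:

```bash
# Native Ollama API (documented in ollama/docs/api.md)
curl http://localhost:11434/api/generate -d '{
  "model": "gemma2:9b",
  "prompt": "Why is the sky blue?"
}'

# OpenAI-compatible endpoint, the kind of URL a local api_base would point at
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemma2:9b",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```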
Consolidating alternate model issues here: #657
Describe the bug
"Sometimes, running the command
python3 -m graphrag.index --root ./ragtest
results in the error "Errors occurred during the pipeline run, see logs for more details," even though no configuration changes were made. This issue sometimes appears after restarting the computer. I have tried deleting the original environment and creating a new one; sometimes it works well, and sometimes it doesn't. The model previously ran smoothly and successfully answered my questions, and my Ollama is functioning properly with the model already downloaded.
I am a beginner, so I might not understand everything fully. Please bear with me.
settings.yaml:
log: (ragtest/output/20240716-034934/reports/indexing-engine.log)
indexing-engine.log
Steps to reproduce
Run LM Studio
chmod +x LM_Studio-0.2.27.AppImage
python3 -m graphrag.index --root ./ragtest
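Because the failures seem to follow a reboot, one thing worth checking (an assumption on my part, not from the original report) is whether the local model server is actually listening before the index run; a minimal sketch:

```bash
# Verify Ollama is up and the model is present (assumes the default port 11434)
curl -s http://localhost:11434/api/tags
ollama list

# Then re-run the indexing pipeline
python3 -m graphrag.index --root ./ragtest
```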
Expected Behavior
I expect the LLM and embedding model to process my data correctly.
GraphRAG Config Used
No response
Logs and screenshots
No response
Additional Information