
When using Ollama, a long loop of Initializing Ollama Model happens before any output #443

Open
FGhrawi opened this issue Nov 20, 2024 · 10 comments
Labels: bug (Something isn't working), Need Feedback

Comments

@FGhrawi commented Nov 20, 2024

Describe the bug

When using Ollama, the model being used is loaded over and over in a loop (sometimes 10-15+ times) in the terminal before any output or agent decision-making happens.
Fwiw, only one model is configured at a time in .env, so no model swapping is being done.

To Reproduce

Use any model with the Ollama provider

Expected behavior

The model should load once, since Ollama is on keepalive and only one model is being used.

Screenshots
(Screenshot: terminal output from Nov 19, 2024 showing the repeated model initialization.)

Additional context
I am on a fresh clone of the repo with a basic character configuration.
Fwiw, I have 24 GB of VRAM, and this happens even with smaller models.
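
As a sanity check outside of eliza, the Ollama server's /api/ps endpoint lists the models it currently has resident in memory, which makes it easy to confirm that keepalive is working at the server level and that the repeated initialization is happening on the client side. A minimal sketch, assuming the default localhost:11434 install and Node 18+ (global fetch):

// Minimal sketch: list the models the Ollama server currently has loaded.
// Assumes the default server at http://localhost:11434.
async function listLoadedModels(): Promise<void> {
    const res = await fetch("http://localhost:11434/api/ps");
    const body = await res.json();
    // Each entry includes the model name and its keep-alive expiry time.
    for (const m of body.models ?? []) {
        console.log(m.name, "expires at", m.expires_at);
    }
}

listLoadedModels();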

FGhrawi added the bug label on Nov 20, 2024
drew-royster added a commit to drew-royster/eliza that referenced this issue on Nov 21, 2024
@drew-royster (Contributor)

@FGhrawi can you pull latest and confirm that this is fixed?

@yodamaster726 (Contributor)

I did a fresh clone of the latest code, and it keeps wanting to download and run the local llama.
I have OLLAMA_MODEL set in my .env. Here is what I have:

#Set to Use for New OLLAMA provider
OLLAMA_SERVER_URL= #Leave blank for default localhost:11434
OLLAMA_MODEL=hermes3
OLLAMA_EMBEDDING_MODEL= #default mxbai-embed-large
#To use custom model types for different tasks set these
SMALL_OLLAMA_MODEL= #default llama3.2
MEDIUM_OLLAMA_MODEL= #default hermes3
LARGE_OLLAMA_MODEL= #default hermes3:70b

This is what I have in my defaultCharacter.

modelProvider: ModelProviderName.OLLAMA,
settings: {
    secrets: {},
    voice: {
        model: "en_US-hfc_female-medium",
    },
    embeddingModel: "mxbai-embed-large"
},

The code also tries to load the wrong OLLAMA embedding model by default, hence having to set it here.
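
For illustration, the fallback being described would look something like this (a hypothetical helper, not the actual eliza code):

// Hypothetical sketch: resolve the Ollama embedding model, falling back to the
// documented default when the env var is unset. Not the real eliza implementation.
const ollamaEmbeddingModel: string =
    process.env.OLLAMA_EMBEDDING_MODEL?.trim() || "mxbai-embed-large";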

I'm debugging the problem now and working on a fix for it.

@lakshya404stc

Is this issue still valid?

@yodamaster726 (Contributor)

Yes, Ollama and llama-local got merged together, and the Ollama logic is not working right; I'm getting close to a fix.
For example, it downloads the local llama instead of using Ollama even when Ollama is configured.
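
In other words, the provider check needs to happen before any local model download is triggered. A minimal sketch of that guard (illustrative names only, not the actual eliza wiring):

// Illustrative only: route to the running Ollama server when the character's
// modelProvider is "ollama", and only let the "llamalocal" path download a
// local GGUF model. Names and shapes here are assumptions, not eliza's code.
type Provider = "ollama" | "llamalocal";

function resolveCompletionBackend(provider: Provider) {
    if (provider === "ollama") {
        // Use the already-running Ollama server; nothing should be downloaded here.
        return {
            kind: "ollama" as const,
            url: process.env.OLLAMA_SERVER_URL || "http://localhost:11434",
            model: process.env.OLLAMA_MODEL || "llama3.2",
        };
    }
    // Only the local-llama provider should ever trigger a model download.
    return { kind: "llamalocal" as const, modelPath: "./models/model.gguf" };
}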

@drew-royster (Contributor)

@yodamaster726 what is your character file like? I had Ollama as the model provider in my character and it seemed to use Ollama just fine.

@yodamaster726 (Contributor)

I tried the latest tag v0.1.3 and then tried the latest code from yesterday. My character file was the default one.

Updates to fix this problem: #521

@dr-fusion (Contributor)

Just tested with the latest code using ModelProviderName.LLAMALOCAL; it seems like it's still going in a loop.

@MERNinja

ModelProviderName.LLAMALOCAL is still looping with the alpha.1 tag release. Has anyone found a solution?

@AIFlowML (Collaborator) commented Jan 3, 2025

> ModelProviderName.LLAMALOCAL is still in loop with alpha.1 tag release. Anyone found a solution?

Did you try the latest version we just released?
If it is not working at all, I will look into the code.
Please let me know.

@Luucky commented Jan 9, 2025

I tried the following fix locally, which prevented the looping with the LLAMA_LOCAL model provider:
https://github.com/elizaOS/eliza/pull/1755/files
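
For context, the general shape of such a fix is to initialize the model once and reuse the handle instead of re-initializing on every request. A rough sketch of that pattern (hypothetical names, not necessarily what the linked PR does):

// Rough sketch of an initialize-once guard; hypothetical class and method names.
class LocalModelManager {
    private initPromise: Promise<void> | null = null;

    ensureInitialized(): Promise<void> {
        // Reuse the in-flight or completed initialization instead of starting
        // a new "Initializing ... Model" pass for every generation request.
        if (!this.initPromise) {
            this.initPromise = this.initializeModel();
        }
        return this.initPromise;
    }

    private async initializeModel(): Promise<void> {
        console.log("Initializing model (should appear once)");
        // ...load weights / warm up here...
    }
}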
