Installation went fine, but I get the following error when trying to invoke the assistant:
Sorry, there was a problem talking to the backend: RuntimeError('llama_decode returned 1')
Can you provide more information? What model were you using? How many entities do you have exposed? Were there any errors or warnings in the HA logs?
llama_decode is a function in llama.cpp, exposed through llama-cpp-python's low-level bindings: https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.llama_cpp.llama_decode

Per the llama.cpp header comments, a return value of 1 means the batch could not be fit into the KV cache, which usually points at a context-size problem (reduce the batch size or increase the context window).
Does your LLM model work if you call it directly (i.e. without Home Assistant, using the CLI or API of the runner)?
I have no idea how to do that, I'm afraid.
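For anyone following along, here is a minimal sketch of testing the model directly with the llama-cpp-python API (assumptions: the model is a local GGUF file; the model path, context size, and prompt below are placeholders to adjust for your setup):

```python
from llama_cpp import Llama

# Load the same GGUF file the Home Assistant integration points at.
# The path is a placeholder; set n_ctx to match your configured context size.
llm = Llama(model_path="/path/to/your-model.gguf", n_ctx=2048)

# Run a short completion. If the model or runner is broken, the same
# RuntimeError('llama_decode returned 1') should surface here as well.
output = llm("Q: What is the capital of France? A:", max_tokens=32)
print(output["choices"][0]["text"])
```

If the script prints a sensible completion, the model and runner are fine and the problem is likely in the integration's prompt/context handling; if it raises the same RuntimeError, the issue is in the model or llama.cpp setup itself. (If you built llama.cpp directly, its CLI binary can run the same check; depending on the version the binary is named `llama-cli` or `main`.)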