v0.2.16 - integration failed to configure bug #140
Can you post the contents of
Sure:

```
processor : 0
processor : 1
```
For what it's worth, I am having the exact same issue using v0.2.17:

```
Logger: homeassistant.config_entries
Error setting up entry LLM Model 'acon96/Home-3B-v3-GGUF' (llama.cpp) for llama_conversation
```

CPU is a Genuine Intel Celeron 2955U.
To follow up: I tried the new release and still have this error. I have tried manually adding various .whl files to the custom_components/llama_conversation/ directory and continuing the installation. All attempts result in the same error:

```
Logger: homeassistant.config_entries
Error setting up entry LLM Model 'acon96/Home-3B-v3-GGUF' (llama.cpp) for llama_conversation
```

CPU is a Genuine Intel Celeron 2955U.
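Prebuilt llama-cpp-python wheels are typically compiled assuming instruction sets like AVX/AVX2/FMA/F16C, which low-power Celeron parts often do not implement. A quick way to see which flags a machine actually reports is to parse `/proc/cpuinfo`; here is a minimal sketch (the required-flag list is illustrative, not taken from the project's build scripts):

```python
def missing_flags(cpuinfo_text, required=("avx", "avx2", "fma", "f16c")):
    """Return the entries from `required` absent from a /proc/cpuinfo dump."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return [f for f in required if f not in flags]

# Example flag line in the style of a Celeron without AVX (illustrative):
sample = "processor\t: 0\nflags\t\t: fpu sse sse2 ssse3 sse4_1 sse4_2\n"
print(missing_flags(sample))  # ['avx', 'avx2', 'fma', 'f16c']
```

On the affected box you would pass `open('/proc/cpuinfo').read()` instead of the sample string.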
Your CPU says that it supports all of the required instructions, but it keeps crashing because of a missing instruction. The way to get around this is to follow the directions here to build wheels that are compatible with the machine you are using: https://github.com/acon96/home-llm/blob/develop/docs/Backend%20Configuration.md#build-your-own
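For reference, the build-your-own route amounts to compiling llama-cpp-python with the unsupported instruction sets turned off via `CMAKE_ARGS`. A hedged sketch of composing that build command (the `LLAMA_*` CMake option names and the `FORCE_CMAKE` variable are assumptions based on llama-cpp-python's 0.2.x build conventions; check the linked doc for the exact invocation):

```python
import os
import shlex

# Assumed CMake options that disable AVX/AVX2/FMA/F16C in the llama.cpp build
cmake_args = "-DLLAMA_AVX=OFF -DLLAMA_AVX2=OFF -DLLAMA_FMA=OFF -DLLAMA_F16C=OFF"
cmd = ["pip", "wheel", "llama-cpp-python", "--no-deps", "-w", "dist/"]
env = dict(os.environ, CMAKE_ARGS=cmake_args, FORCE_CMAKE="1")

print(shlex.join(cmd))  # pip wheel llama-cpp-python --no-deps -w dist/
# subprocess.run(cmd, env=env, check=True)  # uncomment to actually build
```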
Hey! Thanks for the reply. I’m running Home Assistant OS supervised and the command line won’t let me execute docker or git commands. Guess I’ll have to find a different route. Thanks again!
Thanks a lot. I just generated it for my Intel(R) Celeron(R) N5095A @ 2.00GHz (see attachment below). Can you just tell us where to store it in HA, please? llama_cpp_python-0.2.77-cp312-cp312-musllinux_1_2_x86_64.zip
@pbn42 "Take the appropriate wheel and copy it to the custom_components/llama_conversation/ directory." (See https://github.com/acon96/home-llm/blob/develop/docs/Backend%20Configuration.md#wheels)
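One thing worth double-checking when a hand-built wheel still fails to load: the wheel's Python/ABI/platform tags have to match Home Assistant's bundled interpreter and its musl-based container (hence the musllinux tag on the attachment above). A small sketch for pulling the tags off a PEP 427 wheel filename:

```python
def parse_wheel_tags(wheel_name):
    """Split a PEP 427 wheel filename into its (python, abi, platform) tags."""
    stem = wheel_name[:-4] if wheel_name.endswith(".whl") else wheel_name
    return tuple(stem.split("-")[-3:])

tags = parse_wheel_tags("llama_cpp_python-0.2.77-cp312-cp312-musllinux_1_2_x86_64.whl")
print(tags)  # ('cp312', 'cp312', 'musllinux_1_2_x86_64')
```

A cp312 wheel only loads under Python 3.12, and a wheel built against glibc (manylinux) typically will not import inside a musl-based container, so a wheel built on an ordinary desktop distro can fail here even when the CPU flags are right.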
I followed the given instructions and placed the newly created wheel inside the directory, but I'm still getting the same error.
I have exactly the same problem with an Intel Celeron J4105. I first used a "noavx" prebuilt wheel, which did not work; then I built a custom wheel on my machine and placed it in the correct folder, but I still have the same error.
Describe the bug
Using v0.2.16, installation works fine, but when I finished creating the integration, I got a "failed to configure" message.
Expected behavior
The integration should start and appear as a conversation agent.
Logs
If applicable, please upload any error or debug logs output by Home Assistant.
Thanks a lot