
Failed to properly initialize llama-cpp-python. (Exit code 1.) #176

Open · 912-Cireap-Bogdan opened this issue Jun 20, 2024 · 2 comments
Labels: bug (Something isn't working)

@912-Cireap-Bogdan

Describe the bug
Installation from HACS worked fine; however, when initializing the integration I get a "Failed to set up" error in the HA UI.

Expected behavior
The integration installs and configures properly with the basic default settings.

Logs

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/local/lib/python3.12/multiprocessing/spawn.py", line 122, in spawn_main
    exitcode = _main(fd, parent_sentinel)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/multiprocessing/spawn.py", line 132, in _main
    self = reduction.pickle.load(from_parent)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/llama_conversation/__init__.py", line 7, in <module>
    import homeassistant.components.conversation as ha_conversation
  File "/usr/src/homeassistant/homeassistant/components/conversation/__init__.py", line 11, in <module>
    from homeassistant.config_entries import ConfigEntry
  File "/usr/src/homeassistant/homeassistant/config_entries.py", line 30, in <module>
    from .components import persistent_notification
  File "/usr/src/homeassistant/homeassistant/components/persistent_notification/__init__.py", line 14, in <module>
    from homeassistant.components import websocket_api
  File "/usr/src/homeassistant/homeassistant/components/websocket_api/__init__.py", line 14, in <module>
    from . import commands, connection, const, decorators, http, messages  # noqa: F401
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/homeassistant/homeassistant/components/websocket_api/http.py", line 15, in <module>
    from homeassistant.components.http import KEY_HASS, HomeAssistantView
  File "/usr/src/homeassistant/homeassistant/components/http/__init__.py", line 44, in <module>
    from homeassistant.helpers.network import NoURLAvailableError, get_url
  File "/usr/src/homeassistant/homeassistant/helpers/network.py", line 9, in <module>
    from hass_nabucasa import remote
  File "/usr/local/lib/python3.12/site-packages/hass_nabucasa/__init__.py", line 30, in <module>
    from .remote import RemoteUI
  File "/usr/local/lib/python3.12/site-packages/hass_nabucasa/remote.py", line 22, in <module>
    from .acme import AcmeClientError, AcmeHandler, AcmeJWSVerificationError
  File "/usr/local/lib/python3.12/site-packages/hass_nabucasa/acme.py", line 13, in <module>
    from acme import challenges, client, crypto_util, errors, messages
  File "/usr/local/lib/python3.12/site-packages/acme/challenges.py", line 24, in <module>
    from acme import crypto_util
  File "/usr/local/lib/python3.12/site-packages/acme/crypto_util.py", line 23, in <module>
    from acme import errors
  File "/usr/local/lib/python3.12/site-packages/acme/errors.py", line 52, in <module>
    class MissingNonce(NonceError):
  File "/usr/local/lib/python3.12/site-packages/acme/errors.py", line 62, in MissingNonce
    def __init__(self, response: requests.Response, *args: Any) -> None:
                                 ^^^^^^^^^^^^^^^^^
AttributeError: module 'requests' has no attribute 'Response'
2024-06-20 12:20:12.797 ERROR (MainThread) [homeassistant.config_entries] Error setting up entry LLM Model 'acon96/Home-3B-v3-GGUF' (llama.cpp) for llama_conversation
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/config_entries.py", line 594, in async_setup
    result = await component.async_setup_entry(hass, self)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/llama_conversation/__init__.py", line 83, in async_setup_entry
    await agent._async_load_model(entry)
  File "/config/custom_components/llama_conversation/agent.py", line 201, in _async_load_model
    return await self.hass.async_add_executor_job(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/llama_conversation/agent.py", line 805, in _load_model
    validate_llama_cpp_python_installation()
  File "/config/custom_components/llama_conversation/utils.py", line 132, in validate_llama_cpp_python_installation
    raise Exception(f"Failed to properly initialize llama-cpp-python. (Exit code {process.exitcode}.)")
Exception: Failed to properly initialize llama-cpp-python. (Exit code 1.)

Thanks in advance!

912-Cireap-Bogdan added the bug label on Jun 20, 2024
@bigboo3000

I think it's the same as #140

@Teagan42 (Contributor)

So, I haven't used this part of the component, but based on the code here:

def install_llama_cpp_python(config_dir: str):

When you add the integration to Home Assistant through the UI and select llama.cpp, it verifies (via a Home Assistant utility) that llama-cpp-python version 0.2.88 is installed. If it is not, it attempts to install it. First it examines your system's architecture (arm64, amd64, or x86) and, if it's amd64 or x86, its CPU capabilities:

if " avx512f " in cpu_features and " avx512bw " in cpu_features:
    instruction_extensions_suffix = "-avx512"
elif " avx2 " in cpu_features and \
     " avx " in cpu_features and \
     " f16c " in cpu_features and \
     " fma " in cpu_features and \
     (" sse3 " in cpu_features or " ssse3 " in cpu_features):
        instruction_extensions_suffix = ""

It will default to the noavx build if there are any issues determining the CPU features.
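For reference, the cpu_features string those checks run against is almost certainly the flags line from /proc/cpuinfo, padded with spaces so that substring checks match whole flag names. A minimal sketch of that idea (my reconstruction, not the actual helper from utils.py):

# Sketch only: assumes the component reads the "flags" line of /proc/cpuinfo
# on Linux; the real helper in custom_components/llama_conversation/utils.py
# may gather this differently.
def get_cpu_features() -> str:
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    # Pad with spaces so checks like `" avx2 " in cpu_features`
                    # only match whole flag names.
                    return " " + line.split(":", 1)[1].strip() + " "
    except OSError:
        pass
    return ""  # nothing matches, so the caller falls back to the noavx build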

Then it looks for a suitable whl file inside the config/custom_components/llama_conversation directory that matches your architecture and CPU features.
If a file is found there, it attempts to install it using the Home Assistant utility install_package.
If it is not found locally, it attempts to retrieve the whl from the releases section of this repository, https://github.com/acon96/home-llm/releases, and installs it via python3 -m pip install $url (again through a built-in Home Assistant utility).
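Roughly, the lookup flow looks like this (a sketch only: the wheel filename pattern and release tag below are illustrative guesses, not the component's actual naming scheme):

import os
import platform

RELEASES_URL = "https://github.com/acon96/home-llm/releases"

# Illustrative reconstruction of the flow described above; the real code is
# in install_llama_cpp_python(). The filename pattern and tag are hypothetical.
def find_or_fetch_wheel(config_dir: str, version: str, suffix: str) -> str:
    arch = platform.machine()  # e.g. "x86_64" or "aarch64"
    wheel_name = f"llama_cpp_python-{version}{suffix}-{arch}.whl"  # hypothetical
    local_path = os.path.join(
        config_dir, "custom_components", "llama_conversation", wheel_name
    )
    if os.path.exists(local_path):
        return local_path  # install the local file with install_package
    # Fall back to the release asset; using "v{version}" as the tag is an assumption.
    return f"{RELEASES_URL}/download/v{version}/{wheel_name}"

Either result then gets handed to pip, i.e. python3 -m pip install <path-or-url>.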

Finally, it attempts to import the llama-cpp-python package; if it can't, you get the error message you posted.
Considering the first error you posted states "requests has no attribute Response" and you appear to be using the Nabu Casa remote UI, I'm wondering if it's trying to install the wheel locally or remotely....
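For context, judging by your traceback (multiprocessing spawn followed by a check of process.exitcode), the validation step imports llama_cpp in a spawned child process so a hard crash can't take down Home Assistant itself. Something along these lines (my reading of the traceback, not the verbatim utils.py code):

import multiprocessing

def _try_import():
    import llama_cpp  # noqa: F401  # a crash here surfaces as a nonzero exit code

def validate_llama_cpp_python_installation():
    # "spawn" re-imports the parent module in the child, which is why the
    # traceback shows the homeassistant -> hass_nabucasa -> acme import chain
    # failing inside spawn_main before llama_cpp is ever reached.
    process = multiprocessing.get_context("spawn").Process(target=_try_import)
    process.start()
    process.join()
    if process.exitcode != 0:
        raise Exception(
            f"Failed to properly initialize llama-cpp-python. (Exit code {process.exitcode}.)"
        )

In your case the child process dies during that re-import (the requests/acme AttributeError), so the exit code is 1 before llama_cpp is even tried.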

So, to debug this issue, could you provide:

  1. What CPU architecture is your system?
  2. What are its CPU features?
  3. Which Python version?
  4. Which operating system?
  5. More logs, if available
  6. The output of pip freeze
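The snippet below (purely illustrative) gathers most of that in one go from inside the HA container:

# Run inside the Home Assistant container, e.g. `python3 debug_info.py`.
# Collects items 1-4 and 6 from the list above.
import platform
import subprocess

print("architecture:", platform.machine())
print("python:", platform.python_version())
print("os:", platform.platform())

try:
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                print("cpu features:", line.split(":", 1)[1].strip())
                break
except OSError:
    print("cpu features: unavailable")

print(subprocess.run(["python3", "-m", "pip", "freeze"],
                     capture_output=True, text=True).stdout)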
