
Installation borked after installing llm-gpt4all #565

Open · jeremymcmullin opened this issue Sep 3, 2024 · 5 comments

@jeremymcmullin

Background / context

  • I was originally following the Mac install instructions at https://simonwillison.net/2023/Aug/1/llama-2-mac/ - yeah, I should have spotted that this was an older post, but I didn't
  • MacBook Pro on Intel (x86_64), macOS Sonoma 14.6.1
  • Python 3.12.5
  • Relative noob to Python, and I haven't been a dev for 25 years; it has been a few weeks of hacking with Claude-Dev, so this could be "user error"

What I did

  1. Installed llm via Homebrew - brew install llm
  2. Installed the llm-llama-cpp plugin - llm install llm-llama-cpp
  3. Installed the Python bindings - llm install llama-cpp-python
  4. Listed the installed models - llm models (success: got 11 listed, all OpenAI)
  5. Instead of downloading a specific model, I opted to install the llm-gpt4all plugin - llm install llm-gpt4all (I didn't do this in a virtual env btw, just at the command line in Terminal)

Among other things I got this from the terminal:

Successfully installed charset-normalizer-3.3.2 gpt4all-2.8.2 llm-gpt4all-0.4 requests-2.32.3 urllib3-2.2.2

  6. Ran the models listing AGAIN - llm models (but this time got the error below)

Traceback (most recent call last):
  File "/usr/local/bin/llm", line 5, in <module>
    from llm.cli import cli
  File "/usr/local/Cellar/llm/0.15/libexec/lib/python3.12/site-packages/llm/__init__.py", line 18, in <module>
    from .plugins import pm
  File "/usr/local/Cellar/llm/0.15/libexec/lib/python3.12/site-packages/llm/plugins.py", line 17, in <module>
    pm.load_setuptools_entrypoints("llm")
  File "/usr/local/Cellar/llm/0.15/libexec/lib/python3.12/site-packages/pluggy/_manager.py", line 421, in load_setuptools_entrypoints
    plugin = ep.load()
             ^^^^^^^^^
  File "/usr/local/Cellar/[email protected]/3.12.5/Frameworks/Python.framework/Versions/3.12/lib/python3.12/importlib/metadata/__init__.py", line 205, in load
    module = import_module(match.group('module'))
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/Cellar/[email protected]/3.12.5/Frameworks/Python.framework/Versions/3.12/lib/python3.12/importlib/__init__.py", line 90, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/Cellar/llm/0.15/libexec/lib/python3.12/site-packages/llm_gpt4all.py", line 1, in <module>
    from gpt4all import GPT4All as _GPT4All
  File "/usr/local/Cellar/llm/0.15/libexec/lib/python3.12/site-packages/gpt4all/__init__.py", line 1, in <module>
    from .gpt4all import CancellationError as CancellationError, Embed4All as Embed4All, GPT4All as GPT4All
  File "/usr/local/Cellar/llm/0.15/libexec/lib/python3.12/site-packages/gpt4all/gpt4all.py", line 23, in <module>
    from ._pyllmodel import (CancellationError as CancellationError, EmbCancelCallbackType, EmbedResult as EmbedResult,
  File "/usr/local/Cellar/llm/0.15/libexec/lib/python3.12/site-packages/gpt4all/_pyllmodel.py", line 34, in <module>
    if subprocess.run(
       ^^^^^^^^^^^^^^^
  File "/usr/local/Cellar/[email protected]/3.12.5/Frameworks/Python.framework/Versions/3.12/lib/python3.12/subprocess.py", line 571, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['sysctl', '-n', 'sysctl.proc_translated']' returned non-zero exit status 1.

No llm-related commands seem to work, e.g. llm --help (I always get some traceback error).
Next step will be to try uninstalling llm via Homebrew... but I've not gone there yet. No idea if that will work anyway. And I wanted to see if the community here could help first. :)
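
In case it helps with diagnosis: that final sysctl command from the traceback can be run by hand. From what I can tell (happy to be corrected), sysctl.proc_translated only exists on Apple Silicon, where it reports whether the current process is running under Rosetta 2:

sysctl -n sysctl.proc_translated
# Apple Silicon, native arm64 process: prints 0, exit status 0
# Apple Silicon, x86_64 process under Rosetta 2: prints 1, exit status 0
# Intel Mac: "sysctl: unknown oid 'sysctl.proc_translated'", exit status 1
#            which would match the CalledProcessError above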

@jeremymcmullin

This link is blocked by bitly. Why not just use the original URL?

@jeremymcmullin

Seems like I got lucky: bit.ly's system flagged that last link as suspicious and I never reached the destination. Clever social engineering - the comment that contained it referenced "changing the compiler", making it seem relevant (there's a compile step in Step 3 above). The comment is gone now, which may have been GitHub's doing (?), and the profile is returning a 404 too. Reminder to NOT click links. Any real people here who can help?

@jeremymcmullin

According to this GH issue on the gpt4all project: nomic-ai/gpt4all#2744 (comment)

You are using an Intel x86_64 build of Python, which runs in Rosetta and does not support AVX instructions. ...
This is not a supported use of the GPT4All Python binding. Please install in an environment that uses an arm64 build of Python.

So I guess I'm screwed. Running file "$(which python)" at the terminal reports "Mach-O 64-bit executable x86_64" rather than the arm64 version.
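
For reference, either of these reports the architecture of the Python build you're running (the second is just an alternative way to ask the same question - same answer on my machine):

file "$(which python)"
# -> Mach-O 64-bit executable x86_64   (an arm64 build would report arm64)

python -c 'import platform; print(platform.machine())'
# -> x86_64 here; per the gpt4all comment above, it wants arm64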

@mrowland7

Same thing happened to me. brew uninstall llm + brew install llm reset things for me, because even undoing the gpt4all install with llm uninstall llm-gpt4all -y crashes.
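
The full recovery sequence, in case it helps anyone else (my understanding is that the Homebrew install keeps its own virtualenv under /usr/local/Cellar/llm/<version>/libexec - you can see it in the traceback paths above - so a clean reinstall drops the broken plugin too):

brew uninstall llm
brew install llm
llm models   # should work again, with only the default OpenAI models listed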

@jeremymcmullin

Yeah, same here. Had to resort to brew uninstall, as no other llm commands, including uninstall, would work. Worth including in the docs as a pre-req and warning for users on i386/Intel @simonw
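
Something like this pre-flight check in the docs would have saved me (hypothetical snippet, not anything official):

uname -m
# arm64 -> fine; x86_64 -> Intel hardware, or an x86_64 shell under Rosetta

python3 -c 'import platform; print(platform.machine())'
# if this prints x86_64, llm install llm-gpt4all may leave llm broken,
# as it did in this issue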

github-staff deleted a comment from maher-nakesh on Oct 1, 2024