Labels: bug (Something isn't working)
Description
Enabling the "Web Search" tool in a conversation crashes the app.
Expected behavior
I expect to be able to use "Web Search" in conversations. However, when I enable "Web Search" in a conversation, the app crashes.
Normal conversations work fine, and so does the "Terminal" tool.
Debugging information
I'm running version 8.3.1 with Ollama (managed).
flatpak run com.jeffser.Alpaca
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: CPU model buffer size = 1918.35 MiB
llama_init_from_model: model default pooling_type is [0], but [-1] was specified
llama_context: constructing llama_context
llama_context: n_seq_max = 1
llama_context: n_ctx = 4096
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch = 512
llama_context: n_ubatch = 512
llama_context: causal_attn = 1
llama_context: flash_attn = disabled
llama_context: kv_unified = false
llama_context: freq_base = 500000.0
llama_context: freq_scale = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_context: CPU output buffer size = 0.50 MiB
llama_kv_cache: CPU KV buffer size = 448.00 MiB
llama_kv_cache: size = 448.00 MiB ( 4096 cells, 28 layers, 1/1 seqs), K (f16): 224.00 MiB, V (f16): 224.00 MiB
llama_context: CPU compute buffer size = 256.50 MiB
llama_context: graph nodes = 1014
llama_context: graph splits = 1
time=2025-11-21T04:02:37.792+01:00 level=INFO source=server.go:1289 msg="llama runner started in 1.53 seconds"
time=2025-11-21T04:02:37.792+01:00 level=INFO source=sched.go:500 msg="loaded runners" count=1
time=2025-11-21T04:02:37.792+01:00 level=INFO source=server.go:1251 msg="waiting for llama runner to start responding"
time=2025-11-21T04:02:37.792+01:00 level=INFO source=server.go:1289 msg="llama runner started in 1.53 seconds"
[GIN] 2025/11/21 - 04:02:50 | 200 | 15.184962749s | 127.0.0.1 | POST "/api/chat"
flatpak-spawn: Invalid byte sequence in conversion input
Try "flatpak-spawn --help" for more information.
** (python3:2): ERROR **: 04:02:51.734: readPIDFromPeer: Unexpected short read from PID socket. (This usually means the auxiliary process crashed immediately. Investigate that instead!)
...
milad@fedora ~ [SIGTRAP]> ps aux | grep lam
milad 51053 2.8 0.4 2295028 242368 ? Ssl 04:22 0:01 ollama serve
milad 52714 286 4.5 4365288 2584600 ? Sl 04:23 0:47 /app/plugins/Ollama/bin/ollama runner --model /home/milad/.var/app/com.jeffser.Alpaca/data/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff --port 45437
milad 52981 0.0 0.0 231396 2876 pts/1 S+ 04:23 0:00 grep --color=auto lam
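The `flatpak-spawn: Invalid byte sequence in conversion input` line is a GLib conversion error: it appears when text containing bytes that are not valid UTF-8 is handed to the spawn helper, which then exits immediately — matching the `readPIDFromPeer: Unexpected short read` crash that follows. A minimal sketch of that failure mode, assuming (hypothetically) that the Web Search tool passes fetched page bytes through such a conversion; the byte value and variable names are illustrative, not taken from Alpaca's code:

```python
# Raw web page content is bytes and may contain sequences that are
# not valid UTF-8 (0xFF can never appear in well-formed UTF-8).
payload = b"search result \xff snippet"

# Strict decoding, as GLib's conversion does, rejects the input outright.
try:
    payload.decode("utf-8")
except UnicodeDecodeError as e:
    print(f"strict decode failed at byte offset {e.start}")

# A defensive fix is to sanitize before handing text to the spawn helper:
# invalid bytes become U+FFFD instead of aborting the conversion.
safe = payload.decode("utf-8", errors="replace")
print(safe)
```

If the crash only happens with Web Search enabled, it would be consistent with a particular page returning non-UTF-8 bytes that reach the spawn path unsanitized.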