packing default modelfiles + taking sys prompt from modelfile #76
Conversation
Caution: Review failed. The pull request is closed.

📝 Walkthrough

SYSTEM_PROMPT is removed from server config and StartRequest now requires a per-request `system_prompt`.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant MLX as MLX Runtime
    participant FS as FileSystem
    participant Server
    Client->>MLX: run(run_args {system_prompt, modelfile_path?, memory?})
    alt modelfile_path provided
        MLX->>FS: read modelfile at modelfile_path
        FS-->>MLX: modelfile
    else modelfile_path not provided
        MLX->>FS: get_default_modelfile(memory_mode)
        FS-->>MLX: default_modelfile
    end
    MLX->>MLX: determine system_prompt (modelfile.system || default_modelfile.system)
    MLX->>Server: start_server_daemon(modelfile, default_modelfile, system_prompt, memory_path)
    Server->>Server: initialize using provided system_prompt
    Server-->>MLX: server running / PID
    MLX-->>Client: Result(())
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 1 | ❌ 2
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings. ✨ Finishing touches
Actionable comments posted: 9
🤖 Fix all issues with AI agents
In `@scripts/install.sh`:
- Line 10: The comment for the MODELFILE_DIR variable is incorrect; update the
comment next to MODELFILE_DIR="$HOME/.local/lib/tiles/modelfiles" to accurately
describe that this path is for storing model files (the modelfiles directory),
not the Python server folder—ensure the comment clearly states it holds model
files for the application.
- Line 4: The ENV variable is currently set to "dev" which forces use of a local
tarball; change the ENV assignment (the ENV variable in the top-level script)
from "dev" to "prod" so the installer pulls releases from GitHub instead of
local artifacts, ensuring the default is production before merging.
In `@server/schemas.py`:
- Around line 59-62: The StartRequest schema allows an empty system_prompt;
update the StartRequest model to reject empty strings by adding a
constraint/validator for system_prompt (e.g., use pydantic.constr(min_length=1)
or Field(..., min_length=1) or add a `@validator` for "system_prompt" that raises
a ValueError if the value is empty/only-whitespace) so the Python schema matches
the Rust behavior and prevents blank system messages.
In `@tiles/src/commands/mod.rs`:
- Around line 10-11: The current run function in mod.rs is discarding the Result
from Runtime::run, hiding failures; update the run function to handle the Result
returned by Runtime::run (call Runtime::run on the provided runtime and await
it), and on Err(e) either propagate the error (change run's signature to return
Result<()> and return the result of runtime.run(...)) or log/print the error and
exit with a non-zero status (e.g., eprintln!("...: {}", e);
std::process::exit(1)); ensure you reference Runtime::run and the run function
in mod.rs when making the change.
In `@tiles/src/runtime/mlx.rs`:
- Around line 59-62: The code currently swallows parse errors by printing
"Invalid Modelfile" and returning Ok(()) in the Err(_err) arm; change this to
propagate the error so callers can detect failure — capture the error (use the
existing _err binding without underscore so it isn't ignored), include
contextual information if desired, and return it (e.g., convert into the
function's error type or use anyhow::Error) instead of returning Ok(()); update
the Err arm where "Invalid Modelfile" is printed to return Err(...) with the
original error (or use the ? operator at the call site) so the function fails
properly.
- Around line 440-444: The JSON body construction uses nested unwraps for the
system prompt
(modelfile.system.clone().unwrap_or(default_modelfile.system.clone().unwrap()))
which can panic if both are None; update the logic to safely provide a fallback
by using Option-safe methods on modelfile.system and default_modelfile.system
(e.g., use modelfile.system.clone().unwrap_or_else(||
default_modelfile.system.clone().unwrap_or_default()) or otherwise call
unwrap_or_default() on the inner Option) so the "system_prompt" value is never
produced via a double unwrap in mlx.rs when building body (refer to symbols
modelfile, default_modelfile, and body).
- Around line 47-54: The current code uses unwrap() on
default_modelfile_path.to_str() and encodes "no path" as Err("NOT PROVIDED"),
which can panic on invalid UTF-8 and mixes control flow with error strings;
change the logic to: obtain default_modelfile_path via get_default_modelfile(),
convert the Path/OsStr safely (e.g., to_string_lossy() or work with Path
directly) and attempt parse_from_file, and represent the optional override as
Option<Modelfile> instead of Result—i.e., if run_args.modelfile_path.is_some()
try parsing and return a Result<Modelfile, ParseError> for failures, otherwise
fall back to parsed default; update uses of modelfile_parse_result (and any
string checks for "NOT PROVIDED") to match on Option/Result so parsing failures
are handled explicitly and no unwrap() or sentinel error strings are used
(references: get_default_modelfile, tilekit::modelfile::parse_from_file,
run_args.modelfile_path, modelfile_parse_result).
In `@tiles/src/utils/config.rs`:
- Around line 104-109: The is_memory_model function is verbose and triggers a
Clippy warning for unnecessary returns; replace its body with a single
expression returning modelname.contains("mem") (i.e., remove the if block,
returns, and explicit braces so the function simply evaluates and returns
modelname.contains("mem")). Ensure the function signature and name
is_memory_model remain unchanged.
- Around line 63-75: get_lib_dir() uses XDG_DATA_HOME but falls back to
"~/.local/lib", which conflicts with the XDG spec and the project's
get_config_dir(); change the fallback for XDG_DATA_HOME to "~/.local/share" (so
the Err branch becomes home_dir.join(".local/share")) or explicitly use a
different env var if libraries are intentionally stored elsewhere, and ensure
the final returned path still appends "tiles"; also simplify is_memory_model()
by replacing its explicit boolean branches with a direct
modelname.contains("mem") expression.
🧹 Nitpick comments (1)
tiles/src/runtime/mlx.rs (1)
83-93: Confusing path manipulation with `pop()`.

The `server_dir` variable is mutated multiple times: first joined with `"server"`, then used to build `server_path`, then `pop()` is called to revert it. This is hard to follow and error-prone.

Suggested refactor for clarity

```diff
     let config_dir = get_config_dir()?;
-    let mut server_dir = get_lib_dir()?;
+    let lib_dir = get_lib_dir()?;
+    let server_dir = lib_dir.join("server");
     let pid_file = config_dir.join("server.pid");
     fs::create_dir_all(&config_dir).context("Failed to create config directory")?;
-    server_dir = server_dir.join("server");
     let stdout_log = File::create(config_dir.join("server.out.log"))?;
     let stderr_log = File::create(config_dir.join("server.err.log"))?;
     let server_path = server_dir.join("stack_export_prod/app-server/bin/python");
-    server_dir.pop();
     let child = Command::new(server_path)
         .args(["-m", "server.main"])
-        .current_dir(server_dir)
+        .current_dir(&lib_dir)
         .stdin(Stdio::null())
```
```diff
 INSTALL_DIR="$HOME/.local/bin" # CLI install location
-SERVER_DIR="$HOME/.local/share/tiles/server" # Python server folder
+SERVER_DIR="$HOME/.local/lib/tiles/server" # Python server folder
+MODELFILE_DIR="$HOME/.local/lib/tiles/modelfiles" # Python server folder
```
Misleading comment on MODELFILE_DIR.
The comment says "Python server folder" but this is the modelfiles directory. Appears to be a copy-paste error from line 9.
Suggested fix

```diff
-MODELFILE_DIR="$HOME/.local/lib/tiles/modelfiles" # Python server folder
+MODELFILE_DIR="$HOME/.local/lib/tiles/modelfiles" # Modelfiles folder
```
```python
class StartRequest(BaseModel):
    model: str
    memory_path: str
    system_prompt: str
```
Enforce non-empty system_prompt.
The field currently accepts empty strings, which would result in blank system messages. The Rust implementation explicitly rejects empty system prompts, and the Python schema should enforce this constraint consistently.
🔧 Proposed fix

```diff
 class StartRequest(BaseModel):
     model: str
     memory_path: str
-    system_prompt: str
+    system_prompt: str = Field(..., min_length=1)
```
```diff
 pub async fn run(runtime: &Runtime, run_args: RunArgs) {
-    runtime.run(run_args).await;
+    let _ = runtime.run(run_args).await;
 }
```
Don't swallow runtime errors.
Line 11 discards the Result from Runtime::run, which now returns Result<()>. This silently hides failures and allows the CLI to exit with success even when operations fail.
🔧 Proposed fix

```diff
 pub async fn run(runtime: &Runtime, run_args: RunArgs) {
-    let _ = runtime.run(run_args).await;
+    if let Err(err) = runtime.run(run_args).await {
+        eprintln!("Runtime failed: {err}");
+        std::process::exit(1);
+    }
 }
```
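The alternative the review mentions, propagating the `Result` instead of exiting, can be sketched with a stub `Runtime` (the stub below is an assumption for illustration; the real `Runtime::run` is async and takes `RunArgs`):

```rust
// Stub standing in for the real async Runtime::run(RunArgs).
struct Runtime {
    fail: bool,
}

impl Runtime {
    fn run(&self) -> Result<(), String> {
        if self.fail {
            Err("model not found".into())
        } else {
            Ok(())
        }
    }
}

// Propagation variant: the command layer returns the Result so the caller
// (e.g. main) decides the process exit code, instead of discarding errors.
fn run_command(runtime: &Runtime) -> Result<(), String> {
    runtime.run()
}

fn main() {
    assert!(run_command(&Runtime { fail: false }).is_ok());
    assert!(run_command(&Runtime { fail: true }).is_err());
}
```

Either variant is fine; the key point is that a failed run must not leave the CLI exiting with status 0.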
```diff
 let default_modelfile_path = get_default_modelfile()?;
 let default_modelfile =
     tilekit::modelfile::parse_from_file(default_modelfile_path.to_str().unwrap()).unwrap();
 let modelfile_parse_result = if let Some(modelfile_str) = &run_args.modelfile_path {
     tilekit::modelfile::parse_from_file(modelfile_str.as_str())
 } else {
-    tilekit::modelfile::parse(DEFAULT_MODELFILE)
+    Err("NOT PROVIDED".to_string())
 };
```
Avoid using error strings for control flow; handle parsing failures properly.
Several issues in this block:
- Line 49: Double `unwrap()` can panic if the path contains invalid UTF-8.
- Lines 52-53: Using `Err("NOT PROVIDED")` as a sentinel for "no path given" is fragile. Consider using `Option<Modelfile>` instead of `Result` to distinguish "not provided" from "parse failed".
- Line 58: String matching on `"NOT PROVIDED"` couples the logic to an error message.
Suggested refactor using Option

```diff
-    let default_modelfile =
-        tilekit::modelfile::parse_from_file(default_modelfile_path.to_str().unwrap()).unwrap();
-    let modelfile_parse_result = if let Some(modelfile_str) = &run_args.modelfile_path {
-        tilekit::modelfile::parse_from_file(modelfile_str.as_str())
-    } else {
-        Err("NOT PROVIDED".to_string())
-    };
-
-    let modelfile = match modelfile_parse_result {
-        Ok(mf) => mf,
-        Err(err) if err == "NOT PROVIDED" => default_modelfile.clone(),
-        Err(_err) => {
-            println!("Invalid Modelfile");
-            return Ok(());
-        }
-    };
+    let default_modelfile_str = default_modelfile_path
+        .to_str()
+        .context("Default modelfile path is not valid UTF-8")?;
+    let default_modelfile = tilekit::modelfile::parse_from_file(default_modelfile_str)
+        .map_err(|e| anyhow::anyhow!("Failed to parse default modelfile: {}", e))?;
+
+    let modelfile = match &run_args.modelfile_path {
+        Some(modelfile_str) => {
+            match tilekit::modelfile::parse_from_file(modelfile_str.as_str()) {
+                Ok(mf) => mf,
+                Err(err) => {
+                    return Err(anyhow::anyhow!("Invalid Modelfile: {}", err));
+                }
+            }
+        }
+        None => default_modelfile.clone(),
+    };
```
```diff
 pub fn get_lib_dir() -> Result<PathBuf> {
     if cfg!(debug_assertions) {
         let base_dir = env::current_dir().context("Failed to fetch CURRENT_DIR")?;
-        Ok(base_dir.join("server"))
+        Ok(base_dir)
     } else {
         let home_dir = env::home_dir().context("Failed to fetch $HOME")?;
         let data_dir = match env::var("XDG_DATA_HOME") {
             Ok(val) => PathBuf::from(val),
-            Err(_err) => home_dir.join(".local/share"),
+            Err(_err) => home_dir.join(".local/lib"),
         };
-        Ok(data_dir.join("tiles/server"))
+        Ok(data_dir.join("tiles"))
     }
 }
```
XDG_DATA_HOME fallback path is inconsistent with XDG spec and internal patterns.
Using XDG_DATA_HOME but falling back to .local/lib is incorrect per the XDG Base Directory spec, which specifies XDG_DATA_HOME should default to ~/.local/share. This inconsistency is evident in the codebase itself—get_config_dir() correctly uses .local/share as the fallback for the same environment variable. Either use the correct fallback path, use a different environment variable for library files, or remove the XDG check if a custom directory structure is intended.
Additionally, the is_memory_model() function (lines 104-108) can be simplified to modelname.contains("mem") to eliminate unnecessary explicit returns.
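The corrected fallback can be sketched as a pure function (the env value is taken as a parameter here purely for testability; the real `get_lib_dir()` reads `XDG_DATA_HOME` itself):

```rust
use std::path::PathBuf;

// Sketch of the spec-compliant fallback: XDG_DATA_HOME defaults to
// $HOME/.local/share when unset, and the app dir "tiles" is appended.
fn lib_dir(xdg_data_home: Option<&str>, home: &str) -> PathBuf {
    let data_dir = xdg_data_home
        .map(PathBuf::from)
        .unwrap_or_else(|| PathBuf::from(home).join(".local/share"));
    data_dir.join("tiles")
}

fn main() {
    assert_eq!(
        lib_dir(None, "/home/alice"),
        PathBuf::from("/home/alice/.local/share/tiles")
    );
    assert_eq!(
        lib_dir(Some("/custom/data"), "/home/alice"),
        PathBuf::from("/custom/data/tiles")
    );
}
```

If library files are intentionally kept outside the XDG data tree, a dedicated variable would be less surprising than honoring `XDG_DATA_HOME` with a non-spec fallback.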
```rust
pub fn is_memory_model(modelname: &str) -> bool {
    if modelname.contains("mem") {
        return true;
    }
    return false;
}
```
Fix clippy warning and simplify the function.
The function can be simplified to a single expression, which also addresses the pipeline failure flagging the unneeded return statement.
Proposed fix

```diff
 pub fn is_memory_model(modelname: &str) -> bool {
-    if modelname.contains("mem") {
-        return true;
-    }
-    return false;
+    modelname.contains("mem")
 }
```
🧰 Tools
🪛 GitHub Actions: Rust
[error] 108-108: clippy: unneeded return statement. Remove 'return false;' (use 'false' directly).
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@modelfiles/gpt-oss`:
- Around line 5-33: The prompt uses nonstandard channel markers
(<|channel|>analysis<|message|>, <|channel|>code<|message|>,
<|channel|>final<|message|>) that don't match the runtime parser which extracts
<reply> and <python>; update the code that parses the model output to also
accept the new markers (recognize and map <|channel|>analysis<|message|> /
<|channel|>code<|message|> / <|channel|>final<|message|> to the existing
internal channels) or alternatively modify the prompt to emit the existing tags
(<reply> and <python>) instead of the <|channel|>... markers so the parser sees
output; locate the parsing logic that looks for '<reply>' and '<python>' (or the
function that tokenizes output channels) and add support for the corresponding
'<|channel|>...<|message|>' sequences and ensure you preserve the rule about
always starting with the analysis channel.
In `@tiles/src/runtime/mlx.rs`:
- Around line 564-574: In get_default_modelfile(memory_mode: bool) update the
else branch so the default non-memory model path uses "modelfiles/gpt-oss"
instead of "modelfiles/mem-agent"; locate the function get_default_modelfile and
change the PathBuf created in the else branch (which currently calls
get_lib_dir()?.join("modelfiles/mem-agent")) to
get_lib_dir()?.join("modelfiles/gpt-oss") so non-memory mode no longer defaults
to the mem-agent.
- Around line 65-69: run() currently calls
run_model_with_server(...).await.inspect_err(...) which only logs errors and
still returns Ok(()), so failures are swallowed; change run() to propagate the
error from run_model_with_server instead of discarding it — remove inspect_err
and either use the ? operator (e.g. run_model_with_server(self, modelfile,
default_modelfile, &run_args).await?;) or capture the Err, log it, and return
Err(e) so run() returns the underlying error; update references in the run()
function and preserve the existing eprintln! logging if desired.
♻️ Duplicate comments (4)
tiles/src/utils/config.rs (2)
63-74: `XDG_DATA_HOME` fallback should be `~/.local/share` (or use a dedicated env var).

Falling back to `~/.local/lib` while still honoring `XDG_DATA_HOME` is inconsistent with the XDG defaults and with `get_data_dir()`, which can split Tiles data across locations. Consider aligning the fallback or switching to a distinct env var for library storage.

🔧 Suggested adjustment

```diff
 let data_dir = match env::var("XDG_DATA_HOME") {
     Ok(val) => PathBuf::from(val),
-    Err(_err) => home_dir.join(".local/lib"),
+    Err(_err) => home_dir.join(".local/share"),
 };
```
104-109: Simplify `is_memory_model` to a single expression.

This removes the unneeded `return` pattern and silences clippy.

♻️ Proposed simplification

```diff
 pub fn is_memory_model(modelname: &str) -> bool {
-    if modelname.contains("mem") {
-        return true;
-    }
-    false
+    modelname.contains("mem")
 }
```

tiles/src/runtime/mlx.rs (2)
46-62: Avoid panics and sentinel errors when selecting the modelfile.
`to_str().unwrap()` and `parse_from_file(...).unwrap()` can panic, and using `Err("NOT PROVIDED")` conflates "no path" with real parse failures. Also, the invalid-modelfile path returns `Ok(())`, hiding the error.

🛠️ Safer selection + error propagation

```diff
-    let default_modelfile =
-        tilekit::modelfile::parse_from_file(default_modelfile_path.to_str().unwrap()).unwrap();
-    let modelfile_parse_result = if let Some(modelfile_str) = &run_args.modelfile_path {
-        tilekit::modelfile::parse_from_file(modelfile_str.as_str())
-    } else {
-        Err("NOT PROVIDED".to_string())
-    };
-
-    let modelfile = match modelfile_parse_result {
-        Ok(mf) => mf,
-        Err(err) if err == "NOT PROVIDED" => default_modelfile.clone(),
-        Err(_err) => {
-            println!("Invalid Modelfile");
-            return Ok(());
-        }
-    };
+    let default_modelfile_str = default_modelfile_path
+        .to_str()
+        .context("Default modelfile path is not valid UTF-8")?;
+    let default_modelfile = tilekit::modelfile::parse_from_file(default_modelfile_str)
+        .map_err(|e| anyhow::anyhow!("Failed to parse default modelfile: {e}"))?;
+
+    let modelfile = match &run_args.modelfile_path {
+        Some(modelfile_str) => {
+            tilekit::modelfile::parse_from_file(modelfile_str.as_str())
+                .map_err(|e| anyhow::anyhow!("Invalid modelfile: {e}"))?
+        }
+        None => default_modelfile.clone(),
+    };
```
440-445: Guard against missing system prompts to avoid panic.

The nested unwrap will panic if both modelfiles lack a system prompt.

🛡️ Safer fallback

```diff
-    "system_prompt": modelfile.system.clone().unwrap_or(default_modelfile.system.clone().unwrap())
+    "system_prompt": modelfile.system.clone()
+        .or_else(|| default_modelfile.system.clone())
+        .unwrap_or_default()
```
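The panic-free chain behaves as follows (a minimal sketch, assuming both `system` fields are `Option<String>`):

```rust
// Panic-free system prompt resolution: prefer the user's modelfile,
// fall back to the default modelfile, and finally to an empty string.
fn resolve_system_prompt(model: &Option<String>, default: &Option<String>) -> String {
    model
        .clone()
        .or_else(|| default.clone())
        .unwrap_or_default()
}

fn main() {
    assert_eq!(resolve_system_prompt(&Some("a".into()), &Some("b".into())), "a");
    assert_eq!(resolve_system_prompt(&None, &Some("b".into())), "b");
    assert_eq!(resolve_system_prompt(&None, &None), "");
}
```

If an empty prompt should be an error rather than a silent default, the last step could instead return `Err(...)` and be propagated with `?`.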
- Added gpt-oss as part of the default modelfiles. NOTE: The idea is to run gpt-oss by default and use `-m` for the memory model. But the gpt-oss integration is not complete, so in both modes the memory model will be running for now.

refactor
4e804dc to ab8f516 (Compare)
No description provided.