Common issues and solutions when working with rlm-cli.
- Installation Issues
- Build Failures
- Runtime Errors
- Performance Issues
- Search Problems
- Embedding Issues
- Database Issues
Symptom:

```
error: failed to compile `rlm-cli` v1.2.4
```

Common causes:

- Rust version too old:

  ```bash
  # Check version
  rustc --version
  # Required: 1.88+

  # Update Rust
  rustup update stable
  ```

- Missing C++ compiler (usearch-hnsw feature):

  ```bash
  # Ubuntu/Debian
  sudo apt-get install build-essential

  # macOS
  xcode-select --install

  # Or install without HNSW
  cargo install rlm-cli --no-default-features --features fastembed-embeddings
  ```

- Network issues downloading dependencies:

  ```bash
  # Build against the pinned versions in Cargo.lock
  cargo install rlm-cli --locked

  # Or retry with verbose output
  cargo install rlm-cli -v
  ```

Symptom:
```
error: linking with `cc` failed
undefined reference to `onnxruntime_*`
```

Solution:

FastEmbed uses bundled ONNX binaries by default (no action needed). If you see this error, ensure you're not overriding default features:

```bash
# Correct - uses bundled ONNX
cargo build --release

# Incorrect - disables bundled binaries
cargo build --release --features fastembed-embeddings --no-default-features
```

Symptom:
```
error: failed to compile usearch-sys
C++ compilation failed
```

Solutions:

- Install a C++ compiler:

  ```bash
  # Ubuntu/Debian
  sudo apt-get install g++ clang

  # macOS
  xcode-select --install
  ```

- Build without HNSW:

  ```bash
  cargo build --release --features fastembed-embeddings
  ```

- Check the C++ compiler version (usearch requires C++17):

  ```bash
  g++ --version      # Should be 7.0+
  clang++ --version  # Should be 5.0+
  ```

Background:
rlm-cli uses usearch 2.23.0 from crates.io, pinned to <2.24 to avoid compilation issues on Windows.

Why version 2.24+ is excluded:

usearch v2.24.0 introduced a `MAP_FAILED` constant that is POSIX-only and breaks Windows compilation. See unum-cloud/USearch#715.

If you encounter version-related errors:

- Verify Cargo.lock uses 2.23.x:

  ```bash
  grep -A2 'name = "usearch"' Cargo.lock
  ```

  Expected output should show version 2.23.x.

- Clear the cache and rebuild:

  ```bash
  cargo clean
  rm -rf ~/.cargo/registry/cache/
  cargo build --release --features usearch-hnsw
  ```

- Check for git dependencies:

  Ensure Cargo.toml references the official crates.io version:

  ```toml
  usearch = { version = ">=2.23, <2.24", optional = true }
  ```

  Not a git dependency like:

  ```toml
  # INCORRECT - do not use git forks
  usearch = { git = "https://github.com/...", branch = "..." }
  ```

Symptom:
```
error: unwrap_used
  --> src/main.rs:42:18
   |
42 |     let x = y.unwrap();
   |               ^^^^^^
```

Solution:

This is expected - clippy is configured to deny unwraps. Fix by using `?` or proper error handling:

```rust
// Before
let x = y.unwrap();

// After
let x = y.map_err(|e| Error::Custom(e.to_string()))?;
```

Symptom:
```
Error: Database file not found: .rlm/rlm-state.db
```

Solution:

Initialize the database:

```bash
rlm-cli init
```

Or specify a custom path:

```bash
rlm-cli --db-path /path/to/db.sqlite init
rlm-cli --db-path /path/to/db.sqlite status
```

Set an environment variable for a persistent custom path:

```bash
export RLM_DB_PATH=/path/to/db.sqlite
rlm-cli status
```

Symptom:
```
Error: Permission denied (os error 13)
```

Solutions:

- Check directory permissions:

  ```bash
  ls -la .rlm/
  chmod 755 .rlm/
  chmod 644 .rlm/rlm-state.db
  ```

- Use a custom path with write access:

  ```bash
  mkdir -p ~/rlm-data
  rlm-cli --db-path ~/rlm-data/rlm.db init
  ```

Symptom:
```
Error: Buffer 'docs' not found
```

Solutions:

- List available buffers:

  ```bash
  rlm-cli list
  ```

- Load the buffer if missing:

  ```bash
  rlm-cli load document.md --name docs
  ```

- Check for typos in the buffer name (case-sensitive):

  ```bash
  # Wrong
  rlm-cli search "query" --buffer Docs

  # Correct
  rlm-cli search "query" --buffer docs
  ```

Symptom:
Loading a 100MB file takes >5 minutes

Solutions:

- Use parallel chunking:

  ```bash
  # Before: sequential chunking
  rlm-cli load large.txt --chunker fixed

  # After: parallel chunking
  rlm-cli load large.txt --chunker parallel
  ```

- Increase the chunk size (fewer chunks to embed):

  ```bash
  # Before: many small chunks
  rlm-cli load file.txt --chunk-size 50000   # More chunks

  # After: fewer large chunks
  rlm-cli load file.txt --chunk-size 200000  # Fewer chunks
  ```

- Disable embedding if not needed:

  ```bash
  # Build without embeddings
  cargo build --release --no-default-features

  # Then load without embedding overhead
  rlm-cli load file.txt
  ```

Performance comparison (100MB file):
| Configuration | Time | Chunks |
|---|---|---|
| Sequential, 50KB chunks | 4m 30s | 2000 |
| Parallel, 50KB chunks | 1m 15s | 2000 |
| Parallel, 200KB chunks | 25s | 500 |
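The chunk counts above follow directly from the file size divided by the chunk size (decimal units, matching the table); a quick sanity check:

```shell
# Chunk count = file size / chunk size
file_size=$((100 * 1000 * 1000))   # 100 MB
for chunk_size in 50000 200000; do
  echo "${chunk_size}-byte chunks -> $((file_size / chunk_size)) chunks"
done
```

Larger chunks mean fewer embedding calls, which is where most of the load time goes.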
Symptom:

Search takes >10 seconds for 50K chunks

Solutions:

- Enable the HNSW vector index:

  ```bash
  # Rebuild with HNSW support
  cargo build --release --features full-search

  # Search will use approximate nearest-neighbor lookup (much faster)
  rlm-cli search "query" --buffer docs
  ```

- Reduce top-k:

  ```bash
  # Before
  rlm-cli search "query" --top-k 100  # Slow

  # After
  rlm-cli search "query" --top-k 10   # Much faster
  ```

- Use BM25-only mode for keyword search:

  ```bash
  # Semantic search is slower
  rlm-cli search "exact keyword" --mode hybrid

  # BM25-only is faster
  rlm-cli search "exact keyword" --mode bm25
  ```

Search performance (50K chunks):
| Mode | Without HNSW | With HNSW |
|---|---|---|
| BM25 | 200ms | 200ms |
| Semantic (exact) | 5000ms | 5000ms |
| Semantic (HNSW) | N/A | 8ms |
| Hybrid | 5200ms | 220ms |
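A plausible reading of the hybrid timings (an assumption, not documented behavior) is a BM25 pass plus a semantic pass plus a small merge cost, which is why HNSW speeds hybrid search up so dramatically; a back-of-the-envelope check using the table's numbers, in milliseconds:

```shell
bm25=200
semantic_exact=5000
semantic_hnsw=8
echo "hybrid, exact semantic: ~$((bm25 + semantic_exact)) ms"
echo "hybrid, HNSW semantic:  ~$((bm25 + semantic_hnsw)) ms + merge overhead"
```

The exact-semantic sum matches the 5200 ms in the table; with HNSW, BM25 dominates and the hybrid cost stays near 220 ms.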
Symptom:

rlm-cli process using 8GB RAM

Solutions:

- Reduce the chunk count:

  ```bash
  # Increase chunk size
  rlm-cli load file.txt --chunk-size 500000  # Larger chunks
  ```

- Delete unused buffers:

  ```bash
  rlm-cli list
  rlm-cli delete old-buffer-1
  rlm-cli delete old-buffer-2
  ```

- Use BM25-only (no embedding memory):

  ```bash
  # Rebuild without embeddings
  cargo build --release --no-default-features
  ```

- Disable the HNSW index (significantly reduces memory; see the estimates below):

  ```bash
  cargo build --release --features fastembed-embeddings
  ```

Memory estimates:
| Configuration | 10K chunks | 50K chunks | 100K chunks |
|---|---|---|---|
| BM25-only | 50MB | 200MB | 400MB |
| + Embeddings (exact) | 250MB | 1.2GB | 2.4GB |
| + HNSW index | 450MB | 2.2GB | 4.4GB |
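A rough way to sanity-check these numbers: assuming 1024-dimensional f32 embeddings (the BGE-M3 dimension mentioned under the embedding-mismatch error below), the raw vectors alone cost 4 KB per chunk; the rest of the footprint is the ONNX runtime, model weights, HNSW graph links, and per-chunk bookkeeping:

```shell
dims=1024          # BGE-M3 embedding dimension
bytes_per_f32=4
for chunks in 10000 50000 100000; do
  mb=$((chunks * dims * bytes_per_f32 / 1000000))
  echo "${chunks} chunks -> ~${mb} MB of raw f32 vectors"
done
```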
Symptom:

```bash
rlm-cli search "query" --buffer docs
# No results found
```

Solutions:

- Check the buffer exists:

  ```bash
  rlm-cli list
  ```

- Check the buffer has chunks:

  ```bash
  rlm-cli chunk list docs
  ```

- Check embeddings are generated:

  ```bash
  rlm-cli chunk status
  # Shows embedding status for all buffers
  ```

- Try a different search mode:

  ```bash
  # Try BM25 keyword search
  rlm-cli search "query" --buffer docs --mode bm25

  # Try semantic-only
  rlm-cli search "query" --buffer docs --mode semantic
  ```

- Check query spelling:

  ```bash
  # Typo
  rlm-cli search "errro handling"   # No results

  # Correct
  rlm-cli search "error handling"   # Results found
  ```

Symptom:
Search returns irrelevant results

Solutions:

- Use hybrid search for the best results:

  ```bash
  # Better: combines semantic + keyword
  rlm-cli search "authentication flow" --mode hybrid
  ```

- Increase top-k to see more results:

  ```bash
  rlm-cli search "query" --top-k 20  # Instead of the default 10
  ```

- Try different query phrasing:

  ```bash
  # Too specific
  rlm-cli search "JWT authentication with refresh tokens"

  # More general
  rlm-cli search "authentication tokens"
  ```

- Check chunk boundaries:

  ```bash
  # View chunk content
  rlm-cli chunk get 42

  # Might need a different chunking strategy
  rlm-cli delete docs
  rlm-cli load document.md --name docs --chunker semantic
  ```

Symptom:
```bash
rlm-cli chunk status
# Embedded: 0/100 (0%)
```

Solutions:

- Generate embeddings manually:

  ```bash
  rlm-cli chunk embed docs
  ```

- Check the fastembed feature is enabled:

  ```bash
  rlm-cli --version
  # Should show: Features: fastembed-embeddings
  ```

- Rebuild with embeddings:

  ```bash
  cargo build --release --features fastembed-embeddings
  ```

Symptom:
```
Error: Failed to download embedding model
Network error: Connection timeout
```

Solutions:

- Check the internet connection:

  ```bash
  curl -I https://huggingface.co/
  ```

- Retry the download (the model is cached after the first success):

  ```bash
  rm -rf ~/.cache/fastembed/
  rlm-cli chunk embed docs
  ```

- Use an HTTP proxy if needed:

  ```bash
  export HTTP_PROXY=http://proxy.example.com:8080
  export HTTPS_PROXY=http://proxy.example.com:8080
  rlm-cli chunk embed docs
  ```

- Download the model manually:

  ```bash
  # Download the BGE-M3 model manually
  mkdir -p ~/.cache/fastembed/BAAI__bge-m3
  # Copy model files to this directory
  ```

Symptom:
```
Error: Embedding dimension mismatch: expected 1024, got 384
```

Cause:

The buffer was embedded with a different model (e.g., all-MiniLM-L6-v2 vs BGE-M3).

Solution:

Re-embed with the current model:

```bash
rlm-cli chunk embed docs --force
```

Symptom:
```
Error: database is locked
```

Solutions:

- Close other rlm-cli processes:

  ```bash
  # Check for running processes
  ps aux | grep rlm-cli

  # Kill if needed
  pkill rlm-cli
  ```

- Wait for the lock to release:

  SQLite locks are temporary - wait 5-10 seconds and retry.

- Check for a stale lock file:

  ```bash
  # Remove the .rlm directory entirely (WARNING: deletes all data)
  rm -rf .rlm/
  rlm-cli init
  ```

Symptom:
```
Error: database disk image is malformed
```

Solutions:

- Check database integrity:

  ```bash
  sqlite3 .rlm/rlm-state.db "PRAGMA integrity_check;"
  ```

- Export data before recovery:

  ```bash
  # Export buffers if possible
  rlm-cli export-buffers --output backup.json
  ```

- Reset the database (last resort - DESTROYS DATA):

  ```bash
  rm .rlm/rlm-state.db
  rlm-cli init
  ```

- Restore from backup:

  If you have backup.json from export-buffers, manually re-load the documents.

Symptom:
```
Error: No space left on device
```

Solutions:

- Check the database size:

  ```bash
  rlm-cli status
  # Shows: Database: .rlm/rlm-state.db (512 MB)
  ```

- Delete unused buffers:

  ```bash
  rlm-cli list
  rlm-cli delete old-buffer
  ```

- Vacuum the database:

  ```bash
  sqlite3 .rlm/rlm-state.db "VACUUM;"
  ```

- Check disk space:

  ```bash
  df -h .
  ```

If these solutions don't resolve your issue:
- Check existing issues: GitHub Issues

- Enable verbose output:

  ```bash
  rlm-cli --verbose <command>
  ```

- Collect diagnostic info:

  ```bash
  # Version and features
  rlm-cli --version

  # Database status
  rlm-cli status

  # System info
  uname -a
  rustc --version
  ```

- Open an issue: New Issue

Include:

- `rlm-cli --version` output
- Operating system and Rust version
- Full error message
- Steps to reproduce
- Features Guide - Understanding feature flags and build options
- Examples - Usage examples and workflows
- CLI Reference - Complete command documentation
- Architecture - Internal design and implementation