Releases: NeuroSkill-com/skill

Skill v0.0.116

12 Apr 14:10

Changelog

Features

  • Minor updates and improvements

Contributors

  • Eugene Hauptmann

Skill v0.0.115

11 Apr 19:42

Changelog

Features

  • Fixed a bug with the labels, libomp, and the CPU/GPU backend choice for the LLM

Contributors

  • Eugene Hauptmann

Skill v0.0.114

11 Apr 17:26

Changelog

Features

  • migrated to the nomic-ai/nomic-embed-text-v1.5 model for text and image embeddings
  • cleanup:
    • Task 8 — Removed mirror state (~150 lines)
    • Task 9 — Removed device upsert helpers (~90 lines)
    • Task 10 — Removed Cortex WS state (~55 lines)
    • Task 11 — Refactored settings persistence
    • Task 12 — Removed calibration profile local cache

Contributors

  • Eugene Hauptmann

Skill v0.0.113

11 Apr 14:57

Changelog

Features

  • updated CI
  • Daemon side (business logic added):
    • crates/skill-daemon/src/background.rs
    • crates/skill-daemon/src/handlers.rs
    • crates/skill-daemon/src/routes/settings_calibration.rs
  • Tauri side (logic removed):
    • src-tauri/src/background.rs
    • src-tauri/src/lifecycle.rs
  • Moved to daemon / removed from Tauri:
    1. detect_device_kind() — Added DeviceKind::from_id_and_name() to skill-data/src/device.rs with the ID-prefix logic (cortex:, usb:, cgx:, etc.) that was missing from the existing from_name(). Removed the dead function + 250 …
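
The ID-prefix dispatch described above can be sketched roughly as follows. Only the prefixes (cortex:, usb:, cgx:) and the function names from_id_and_name()/from_name() come from the notes; the enum variants and the fallback heuristic are assumptions, not the real skill-data code.

```rust
// Hypothetical sketch of DeviceKind::from_id_and_name(); variant names and
// the from_name() fallback body are assumptions.
#[derive(Debug, PartialEq)]
pub enum DeviceKind {
    Cortex,
    Usb,
    Cgx,
    Unknown,
}

impl DeviceKind {
    /// Dispatch on the device-ID prefix first (cortex:, usb:, cgx:, ...),
    /// falling back to name-based detection for unknown prefixes.
    pub fn from_id_and_name(id: &str, name: &str) -> DeviceKind {
        match id.split(':').next() {
            Some("cortex") => DeviceKind::Cortex,
            Some("usb") => DeviceKind::Usb,
            Some("cgx") => DeviceKind::Cgx,
            _ => DeviceKind::from_name(name),
        }
    }

    /// Older name-only heuristic (assumed behavior).
    pub fn from_name(name: &str) -> DeviceKind {
        if name.to_lowercase().contains("cortex") {
            DeviceKind::Cortex
        } else {
            DeviceKind::Unknown
        }
    }
}
```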

Contributors

  • Eugene Hauptmann

Skill v0.0.111

11 Apr 09:00

Changelog

Features

  • updated DMG version
  • fixed the build
  • llama linking fixed

Contributors

  • Eugene Hauptmann

Skill v0.0.110

11 Apr 01:51

Changelog

Features

  • feat: add CPU backend support to skill-router for CI coverage
    • Adds a new feature to skill-router that uses CPU-based UMAP. The GPU backend remains the default for normal builds.
  • fix: disable all GPU-dependent features in coverage workflow
    • The coverage CI was still failing because the embedding features …
    • Solution: disable all embedding features in the coverage workflow. The embedding functionality will still be tested in local builds.
  • fix: add llm-native feature and use it in coverage workflow
    • The coverage CI was failing because … is not a valid …
    • Solution: … This ensures that llama-cpp-4 is statically linked in the coverage workflow.
  • fix: add native feature to llm in coverage workflow
    • The coverage CI was failing because it couldn't find libllama.so.0.
    • Solution: change … to … in the coverage workflow. This matches the Cargo.toml configuration where we added the native feature.
  • fix: remove aggressive -static flag from macOS config
    • The -static flag in the macOS cargo config was causing issues …
    • Instead, we rely on … This change ensures that … Verified that skill-daemon runs without libllama.0.dylib present.
  • chore: enable static linking for llama-cpp-4
    • Adds the … feature to llama-cpp-4 dependencies to enable static linking.
    • Static linking improves … The binary size will increase, but this is acceptable for a desktop app.
  • chore: update llama-cpp-4 to 0.2.43
    • Updates the llama-cpp-4 dependency from version 0.2.42 to 0.2.43, bringing in the latest improvements and bug fixes. The update is applied to all feature variants (ggml, metal, vulkan).
  • fix: disable GPU features in coverage CI
    • The coverage CI was failing because tests that use GPU acceleration …
    • Solution: run coverage tests with CPU-only features by … This allows the coverage workflow to complete successfully while still …
  • revert: remove CI checks from tests
    • The previous approach of checking the CI environment inside tests didn't work. This commit reverts the CI checks; we'll try a different approach.
  • fix: use CPU-only backend in CI tests
    • The coverage CI was still failing because, even though we added CI checks, …
    • Solution: modify test_state() to use the LUNA CPU-only backend instead of … LUNA is a topology-agnostic encoder that runs entirely on CPU, making it …
  • fix: skip GPU tests in CI environment
    • Solution: add a check for the CI environment variable at the beginning of each affected test.
    • Tests affected: …
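
The CI guard described in the last commit (later reverted in favor of the CPU-only LUNA backend) amounts to checking an environment variable at the top of each GPU-dependent test. A minimal sketch, with the helper name assumed; GitHub Actions sets CI=true by default:

```rust
// Minimal sketch of a CI guard for GPU tests; the helper name is assumed.
// CI runners export CI=true, so GPU-dependent tests can bail out early.
fn is_ci(value: Option<&str>) -> bool {
    matches!(value, Some("true") | Some("1"))
}

fn gpu_test_body() {
    if is_ci(std::env::var("CI").ok().as_deref()) {
        eprintln!("skipping GPU test in CI");
        return;
    }
    // ... exercise the GPU backend here ...
}
```

The drawback, as the revert notes, is that the guard only hides the failure at test time; the crate still links its GPU dependencies, which is why the final approach moved to CPU-only feature flags instead.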

Contributors

  • Eugene Hauptmann

Skill v0.0.109

10 Apr 21:53

Changelog

Features

  • fix: disable dynamic linking for llama-cpp-4
    • The llama-cpp-4 crate doesn't have a 'static' feature. Instead, it has …
    • To get static linking, we need to … This will embed the llama.cpp libraries directly into the skill-daemon binary.
    • Note: we keep the 'ggml' feature explicitly enabled since it's not in …
  • fix: force static linking for llama-cpp-4 in skill-daemon
    • The skill-daemon binary was failing to launch on macOS with: …
    • This occurred because llama-cpp-4 was dynamically linking to its … Solution: configure llama-cpp-4 to use static linking by adding …
    • This makes the skill-daemon binary larger but more portable and …

Contributors

  • Eugene Hauptmann

Skill v0.0.106

10 Apr 13:03

Changelog

Features

  • fix: add APPLE_SIGNING_IDENTITY to DMG creation step
    • The DMG creation step was missing the APPLE_SIGNING_IDENTITY environment variable. The script already supports using APPLE_SIGNING_IDENTITY via the SIGN_ID …
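
A workflow step along these lines would pass the identity through. This is a hypothetical sketch: the step name, secret name, and script path are placeholders; only the APPLE_SIGNING_IDENTITY / SIGN_ID variable names come from the notes above.

```yaml
# Hypothetical sketch of the DMG creation step; step name, secret name, and
# script path are placeholders. The DMG script reads APPLE_SIGNING_IDENTITY
# and maps it onto its internal SIGN_ID variable.
- name: Create DMG
  env:
    APPLE_SIGNING_IDENTITY: ${{ secrets.APPLE_SIGNING_IDENTITY }}
  run: ./scripts/create_dmg.sh
```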

Contributors

  • Eugene Hauptmann

Skill v0.0.104

10 Apr 12:21

Changelog

Features

  • feat: implement cross-platform daemon update hooks
    • Add pre-update and post-update hooks for the Tauri updater. The daemon is now properly stopped before updates and restarted afterward, …
  • Fix Windows release: check both target paths for skill.exe
    • The release workflow was failing because the 'Log binary dependencies' step … This change makes the step check both locations (target-specific first, then …).
Contributors

  • Eugene Hauptmann

Skill v0.0.103

10 Apr 10:45

Changelog

Features

  • chore(bump): improve release notes generation from git history
    • Updates the bump script to generate better release notes: the script now always generates them from the actual commit history.
  • feat(llm): add Nemotron-3-Nano-4B model to catalog
    • Adds NVIDIA's Nemotron-3-Nano-4B model with the available Q4_K_M quant. Also simplifies the pre-commit hook to run only basic validation, …
  • chore: update llama-cpp-4 from 0.2.36 to 0.2.38
    • Updates llama-cpp-4 and llama-cpp-sys-4 to the latest versions.
  • bump
  • fix(calibration): improve daemon integration and error handling
    • Fix EEG window submission to the daemon with proper eeg_start/eeg_end timestamps. This ensures calibration labels are recorded with the correct EEG windows.
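
The eeg_start/eeg_end pairing can be pictured as a small submission payload like the one below. Only the two timestamp names come from the changelog; the struct, field types, units, and validation helper are assumptions for illustration.

```rust
// Hypothetical shape of a calibration label submission; only the
// eeg_start/eeg_end field names come from the changelog.
pub struct CalibrationLabel {
    pub label: String,
    pub eeg_start: u64, // start of the EEG window (epoch ms, assumed unit)
    pub eeg_end: u64,   // end of the EEG window (epoch ms, assumed unit)
}

impl CalibrationLabel {
    /// A window is only meaningful if it has positive duration.
    pub fn is_valid(&self) -> bool {
        self.eeg_start < self.eeg_end
    }
}
```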

Contributors

  • Eugene Hauptmann