
fix(disk-collector): fix storage reporting for NFS mounts#686

Open
bgauger wants to merge 2 commits into Crosstalk-Solutions:dev from bgauger:fix/disk-collector-nfs-multiline-df

Conversation

Collaborator

@bgauger bgauger commented Apr 8, 2026

Summary

  • df wraps long NFS device names across two lines, breaking the awk 'NR==2' parser in the disk collector and producing empty size/used/available values. Adding -P forces POSIX single-line output.
  • The easy-setup storage bar only checked physical block devices and /dev/* filesystems, so NFS-backed storage was never picked up. Now checks the /app/storage mount first.
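The wrapping failure described in the first bullet can be reproduced with canned `df` output (the NFS server name, path, and sizes below are made up for illustration):

```shell
#!/bin/sh
# Canned output mimicking GNU `df` wrapping a long NFS device name.
wrapped='Filesystem      1K-blocks    Used Available Use% Mounted on
nas01.example.lan:/mnt/tank/app-storage
                 10485760 5242880   5242880  50% /app/storage'

# The NR==2 parser lands on the wrapped device-name line, so the
# numeric fields come back empty.
echo "$wrapped" | awk 'NR==2 {print "size=" $2, "used=" $3, "avail=" $4}'

# `df -P` guarantees one output line per filesystem (POSIX format),
# so the same parser sees the numbers.
posix='Filesystem      1024-blocks    Used Available Capacity Mounted on
nas01.example.lan:/mnt/tank/app-storage 10485760 5242880 5242880 50% /app/storage'
echo "$posix" | awk 'NR==2 {print "size=" $2, "used=" $3, "avail=" $4}'
```

The first `awk` call prints empty fields; the second prints the real numbers, which is why forcing `-P` fixes the collector without touching the parser.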

Test plan

  • Verified on a host with NFS-mounted storage — disk collector JSON now reports correct values, easy-setup shows full NAS capacity
  • Verified on a bare-metal local-disk install — no regression, storage displays correctly
  • Dashboard storage devices section unaffected

Related: #373, #320, #384

Copilot AI review requested due to automatic review settings April 8, 2026 12:22

Copilot AI left a comment


Pull request overview

Fixes incorrect storage reporting when storage is backed by NFS (or other long device-name mounts) by making both the disk collector parsing and the easy-setup “storage projection” prefer stable, accurate filesystem data.

Changes:

  • Update disk-collector df calls to use POSIX output (df -P) to prevent line-wrapping from breaking parsing on long device names (e.g., NFS).
  • Update easy-setup primary storage detection to prefer the /app/storage mount (so NFS-backed storage is correctly detected and displayed).

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated no comments.

Reviewed files:

  • install/sidecar-disk-collector/collect-disk-info.sh: Uses df -P to ensure single-line output and reliable parsing for NFS/long device names.
  • admin/inertia/hooks/useDiskDisplayData.ts: Prefers /app/storage filesystem stats as the primary source for easy-setup storage projection.
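The hook-side change can be sketched as pure selection logic; the interface and function names below are illustrative, not the actual `useDiskDisplayData.ts` API:

```typescript
// Illustrative sketch: prefer the /app/storage mount (where the admin
// container bind-mounts host storage) over the old /dev/* filter, which
// an NFS device name never matches.
interface FsEntry {
  fs: string;    // device name, e.g. an NFS export path
  mount: string; // mount point as seen inside the container
  size: number;
  used: number;
}

const STORAGE_MOUNT = "/app/storage";

function pickPrimaryFs(filesystems: FsEntry[]): FsEntry | undefined {
  // Check the storage mount first; only fall back to /dev/* devices.
  return (
    filesystems.find((f) => f.mount === STORAGE_MOUNT) ??
    filesystems.find((f) => f.fs.startsWith("/dev/"))
  );
}

const sample: FsEntry[] = [
  { fs: "/dev/sda1", mount: "/", size: 100, used: 40 },
  { fs: "nas01:/mnt/tank/app", mount: STORAGE_MOUNT, size: 1000, used: 250 },
];
console.log(pickPrimaryFs(sample)?.fs);
```

With the mount-point check first, the NFS-backed entry wins even when `/dev/*` filesystems are also present; a local-disk install still resolves through the fallback.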


@bravosierra99

This is my exact issue... and fixed in exactly the same way.

@bravosierra99

Great fix! Two things we found testing this on NFS-backed storage (TrueNAS):

1. getAllDiskDisplayItems has the same bug — the settings/system page calls that function, not getPrimaryDiskInfo. Since it returns block-device data immediately when any disk is found, the NFS mount never appears there. Fix is to prepend the /app/storage entry when present, so both NAS and OS disk are shown.

2. Worth a comment on /app/storage — it looks surprising that we're checking the container path rather than the host path. A quick note explaining that si.fsSize() runs inside the admin container (where storage is bind-mounted at /app/storage) would help future readers.

Happy to open a PR against your branch with both fixes if useful. Verified working on NFS.

useDiskDisplayData.ts.txt
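The first suggested fix could be sketched like this (names are illustrative; `storageMountItem` stands in for whatever entry is built from the /app/storage filesystem stats):

```typescript
// Hypothetical sketch: prepend the /app/storage entry when present so
// both the NAS mount and the OS block devices appear in the list.
interface DiskDisplayItem {
  label: string;
  sizeBytes: number;
}

function withStorageMountFirst(
  blockDeviceItems: DiskDisplayItem[],
  storageMountItem: DiskDisplayItem | undefined,
): DiskDisplayItem[] {
  return storageMountItem
    ? [storageMountItem, ...blockDeviceItems]
    : blockDeviceItems;
}

const items = withStorageMountFirst(
  [{ label: "/dev/sda", sizeBytes: 500 }],
  { label: "NAS Storage", sizeBytes: 4000 },
);
console.log(items.map((i) => i.label));
```

When there is no storage-mount entry the input list passes through unchanged, so local-disk installs are unaffected.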

@bgauger
Collaborator Author

bgauger commented Apr 8, 2026

Nice catch on getAllDiskDisplayItems — I didn't hit that on my setup since the fsSize fallback picked it up, but it makes sense that bare metal with physical disks would skip right past the NFS mount. Feel free to PR against my branch with both fixes.

@bravosierra99

I've submitted the PR against your branch.

bgauger and others added 2 commits April 9, 2026 14:17
Co-Authored-By: Ben Smith <bravosierra99@gmail.com>
@bgauger bgauger force-pushed the fix/disk-collector-nfs-multiline-df branch from 2390291 to 7124dd4 on April 9, 2026 20:32
@bgauger
Collaborator Author

bgauger commented Apr 9, 2026

@bravosierra99 merged your fix in — thanks! One thing we found testing on the NFS box: when there are no block devices (like on a containerized setup with only NFS), validDisks is empty so the code falls through to the fsSize fallback path, which doesn't use the storageMountItem. The NFS mount still showed up but with the raw path as the label instead of "NAS Storage". Added a small fix to prepend storageMountItem in the fsSize fallback and exclude /app/storage from the generic list so it doesn't double up.
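The described fallback fix could look roughly like this (illustrative names; the "NAS Storage" labeling and `fsSizeFallbackItems` helper are assumptions based on the comment above):

```typescript
// Sketch of the fsSize fallback fix: label the /app/storage entry, put
// it first, and filter it out of the generic filesystem list so it
// isn't shown twice.
interface FsItem {
  label: string;
  mount: string;
  size: number;
}

const STORAGE_MOUNT = "/app/storage";

function fsSizeFallbackItems(allFs: FsItem[]): FsItem[] {
  const storage = allFs.find((f) => f.mount === STORAGE_MOUNT);
  const rest = allFs.filter((f) => f.mount !== STORAGE_MOUNT);
  return storage ? [{ ...storage, label: "NAS Storage" }, ...rest] : rest;
}

const result = fsSizeFallbackItems([
  { label: "/", mount: "/", size: 100 },
  { label: "/app/storage", mount: STORAGE_MOUNT, size: 1000 },
]);
console.log(result.map((i) => i.label));
```

This covers the NFS-only container case: even with zero block devices, the storage mount surfaces first with a friendly label and is excluded from the generic list.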

Collaborator

@chriscrosstalk chriscrosstalk left a comment


LGTM. This is a real fix for real user pain on NFS-backed storage, and the collaborative iteration with bravosierra99's follow-up on getAllDiskDisplayItems plus your NFS-only fallback covers the edge cases cleanly.

The -P flag on df is the textbook fix for the multi-line wrapping behavior with long NFS device names, and the frontend changes are well-scoped. Tested paths (NFS + bare-metal no-regression) match what I'd want to see.

One minor nit (non-blocking): the hardcoded "NAS Storage" label is slightly misleading for the edge case where /app/storage is a bind mount from a local disk rather than NFS. If you want to polish later, showing the filesystem type or actual device would be nicer. But it's a nit — ship it.

Flagging for inclusion in v1.31.1 as a patch fix — this is the kind of concrete user-facing improvement worth getting out quickly alongside the other hotfix candidates (#645, #649).

@chriscrosstalk chriscrosstalk added the Next Release This PR is staged for our next release. label Apr 9, 2026