ZFS Monitoring #3397
Replies: 22 comments
-
Testbed for Linux users (or / for developers without a ZFS pool). Also be sure to uncomment the following line in the Glances conf file:
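The conf line itself is not quoted in the thread. As an assumption based on current glances.conf layouts (verify against your installed version), the relevant knob is the allow option of the [fs] section, which lets additional filesystem types such as zfs appear in the file-system view:

```ini
[fs]
# Allow additional file system types (comma-separated list).
# Assumed to be the line referred to above -- it is not quoted in the thread.
allow=zfs
```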
-
First issue: only root could grab the ZFS pools' status:
-
There are other examples of data that require root (sensor info), no?
-
Nope, sensors do not need root rights... One workaround is to configure the sudoers file so it does not ask for a password when the 'sudo zpool status zsfpool' command line is run. Not a very big fan of that...
-
A decent workaround on this one is a sudoers.d conf file with no password on just that command. Anyway, I just found out about this project, and from a quick look it seems like a keeper.
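The sudoers.d approach described above could look like the following sketch. The filename and the 'glances' user are hypothetical; the pool name 'zsfpool' comes from the thread, and the zpool path may differ on your distro (check with 'which zpool'):

```
# /etc/sudoers.d/glances-zpool  (hypothetical filename)
# Let the hypothetical 'glances' user run exactly this one command
# as root, with no password prompt. Install with visudo -f to get
# syntax checking.
glances ALL=(root) NOPASSWD: /usr/sbin/zpool status zsfpool
```

Restricting NOPASSWD to the exact command line (rather than all of zpool) keeps the privilege grant as narrow as possible.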
-
Salutations! I've managed to edit my /etc/sudoers.d/zfs to allow it. Should I see something saying my pool is OK if there's nothing wrong, or will I see something only if something is wrong? I have set it up. Thanks for such an awesome project! -Travis
-
Hello, I am running a recent Linux Mint and setting up ZFS. I notice I am able to run:
-
I confirm that on Ubuntu 20.04 (out of the box) the zpool and zfs commands can be run as a regular user.
-
Is there a follow-up to this?
-
OK, tested on Ubuntu 20.04: the 'zpool status zsfpool' command line can be executed as a regular user. I need to understand what kind of additional information (specific to ZFS) you want to display in Glances. For the moment the pool is displayed as a standard mount point:
-
Perhaps for me the most critical would be:
then perhaps cool informational stuff would be:
-
Thanks @fusionstream
-
No problem. I'm using it solely in Home Assistant at this time, so full disclosure: the space issue is something I will not yet experience fully. What's a demo mockup and how can I help?
-
@fusionstream can you make a mockup using a basic text editor?
-
I'll take a stab at it. The following will be incomplete, but perhaps a useful start for discussion:

That last line would be the zpool name (truncated as required to fit); the following would be repeated for each zpool on the system. ONE of the following lines containing status states for the pool would follow; the xxxx's would be replaced with the actual numbers, typically, here. The zpool may be in various states of scrubbing or resilvering; one of the following groups would follow.

Following this would be the configuration of the zpool. This can be presented in several ways, and there should be some mechanism to toggle among the options here, or others. The following samples assume a pool of two mirrored VDEVs; some thought would be required to accommodate other configurations and vdev types (logs, RAIDZ, etc.). I am less familiar with those, so I am going to keep the scope limited in this mock-up. With large pools this may consume vertical real estate; how to handle that is a problem for later.

Another command available to non-root is the following: in these samples "_xxx..." is a truncated drive name. These may be in the form of sda, sdb, etc., or longer disk ID strings that will need truncation to fit. "nnn" is a number.

With no errors, capacity may be of primary concern for someone mucking about in configuring a zpool; someone keeping track of a pool in production might be interested to see IO performance, presented like the following, displaying operations: or the following as bandwidth:

If the pool status is not "ONLINE", config with states would likely be of interest: or the following, error counts on Read/Write/Checksum.

Following the display of one of the above configuration presentations, ZFS file systems would be displayed. Each one may or may not be mounted, which might be indicated by a color change or something. This information comes from the
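As a minimal sketch of how a UI like the mock-up above could get its per-pool data as a non-root user: 'zpool list' with -H (no header, tab-separated) and -p (parsable exact byte values) is easy to consume. The field selection and function names here are illustrative, not Glances code:

```python
# Sketch: feed a text UI (like the mock-up above) from `zpool list`.
# -H: no header, tab-separated; -p: exact values in bytes; -o: pick columns.
import subprocess

FIELDS = ("name", "size", "alloc", "free", "health")

def parse_zpool_list(text):
    """Parse `zpool list -H -p -o name,size,alloc,free,health` output."""
    rows = []
    for line in text.strip().splitlines():
        row = dict(zip(FIELDS, line.split("\t")))
        for key in ("size", "alloc", "free"):
            row[key] = int(row[key])  # -p prints raw byte counts
        rows.append(row)
    return rows

def zpool_list():
    """Run the command (works as a regular user on many distros, per the thread)."""
    out = subprocess.run(
        ["zpool", "list", "-H", "-p", "-o", ",".join(FIELDS)],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_zpool_list(out)
```

Keeping the parsing in a pure function makes it testable without a ZFS pool on the development box, which matters given the "/ for developers without a ZFS pool" testbed note earlier in the thread.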
-
Postponed, because the information needed could not be integrated into the current FS plugin. The proposal is to stay with the current feature in Glances v3.x:
The result is the following in the current develop branch: In Glances version 4 a dedicated plugin should be created (see branch https://github.com/nicolargo/glances/tree/glancesv4). Contributors are welcome.
-
For contributors: have a look at https://pypi.org/project/zpool-status/
-
@Zylatis Can you copy/paste (no screenshot) the result of 'zpool status' and 'df -kh' in order to test it locally? Thanks!
-
@nicolargo Totes:
-
Since zfsutil 2.3 there is support for JSON output from the various status commands: that seems like the cleanest solution for consuming the data, rather than integrating a custom parsing library like https://pypi.org/project/zpool-status/.




-
It would be great to have monitoring/alerting for ZFS pools. Set alerts for degraded state, watch scrub/repair status, etc.
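One lightweight way to get the alerting half of this request: 'zpool status -x' prints only pools with problems, and a fixed healthy message otherwise, so "anything other than the healthy message" is a usable alert condition. The function names are illustrative, not Glances code:

```python
# Sketch: treat any `zpool status -x` output other than the known
# healthy messages as an alert condition (degraded pool, scrub errors...).
import subprocess

HEALTHY_PREFIXES = ("all pools are healthy", "no pools available")

def pools_need_attention(status_x_output):
    """True when `zpool status -x` reports anything besides a healthy state."""
    text = status_x_output.strip().lower()
    return not text.startswith(HEALTHY_PREFIXES)

def check_pools():
    """Poll `zpool status -x` (summary mode: only unhealthy pools are listed)."""
    out = subprocess.run(
        ["zpool", "status", "-x"],
        capture_output=True, text=True, check=True,
    ).stdout
    return pools_need_attention(out)
```

This is coarse (it does not distinguish a degraded vdev from a running scrub), but it is a cheap first-level check that a monitoring loop could run every polling interval.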