[GH-ISSUE #7] How do you handle Replacing Disks and updating the Stonith Resource #5

Closed
opened 2026-05-05 03:28:14 -06:00 by gitea-mirror · 1 comment
Owner

Originally created by @nugsolot on GitHub (Mar 22, 2017).
Original GitHub issue: https://github.com/ewwhite/zfs-ha/issues/7

Thanks so much for your project here. It's been an amazing template for me to lay out something very similar, and I probably couldn't have gotten as far as I have without it.

A few questions, or maybe just to get your thoughts on how you would handle these things:

When a disk fails and needs to be replaced, how do you handle updating the STONITH resource to reflect the changed disk?

Have you implemented smartd to watch the disks for failures that might come up?

brian

Author
Owner

@ewwhite commented on GitHub (Mar 22, 2017):

I have several methods of handling ZFS disk and pool health monitoring.

My primary go-to is to install [zfswatcher](http://zfswatcher.damicon.fi), which provides all of the necessary pool and disk health alerts for ZFS deployments. Another option is to fully configure the ZED daemon by editing `/etc/zfs/zed.d/zed.rc` to taste ([here's an example](https://www.reddit.com/r/zfs/comments/4zf5ji/first_disk_failure_on_my_linux_zfs/)). Without modifying this file, hot spares will not work properly on ZFS.
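For reference, the hot-spare behavior mentioned above is governed by the `ZED_SPARE_ON_*` settings, which ship commented out in the stock `zed.rc` of that era (ZFS on Linux 0.6.x). This is a minimal sketch, not the author's actual config; the exact option names and thresholds should be verified against the `zed.rc` shipped with your ZFS version.

```shell
# /etc/zfs/zed.d/zed.rc -- excerpt (option names from ZoL 0.6.x; values are examples)

# Email address for ZED notifications; setting this enables email alerts.
ZED_EMAIL="root@localhost"

# Minimum seconds between repeat notifications for the same event class.
ZED_NOTIFY_INTERVAL_SECS=3600

# Activate a hot spare after this many checksum errors on a vdev.
ZED_SPARE_ON_CHECKSUM_ERRORS=10

# Activate a hot spare on the first I/O error.
ZED_SPARE_ON_IO_ERRORS=1
```

After editing, restart the daemon (`systemctl restart zfs-zed`) so the new settings take effect.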

Regarding updating STONITH resources when a drive fails, I make the change manually, either by updating the STONITH pcs resource or by going into the Cluster Manager GUI (https://ip.address:2224) and adding the new drive's Device Mapper address to the list of disks in the STONITH setup.
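The CLI route above can be sketched roughly as follows. The pool name `tank`, the STONITH resource name `fence-scsi`, and the device-mapper paths are all placeholders for your own setup, and note that `pcs stonith update` replaces the `devices` value rather than appending to it, so the full disk list must be given:

```shell
# 1. Replace the failed disk in the pool (old and new device paths are examples).
zpool replace tank /dev/mapper/35000c5001111111 /dev/mapper/35000c5002222222

# 2. Update the fence_scsi STONITH resource with the complete, updated device list.
#    This overwrites the previous "devices" value, so list every disk in the pool.
pcs stonith update fence-scsi devices="/dev/mapper/35000c5002222222,/dev/mapper/..."

# 3. Confirm the resource picked up the change.
pcs stonith show fence-scsi
```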
