[GH-ISSUE #7] How do you handle Replacing Disks and updating the Stonith Resource
Originally created by @nugsolot on GitHub (Mar 22, 2017).
Original GitHub issue: https://github.com/ewwhite/zfs-ha/issues/7
Thanks so much for your project here. It's been an amazing template for me to lay out something very similar, and I probably couldn't have gotten as far as I have without it.
A few questions, or maybe just to get your thoughts on how you would handle these things:
When you think about a disk failing and needing to be replaced, how do you handle updating the STONITH resource to reflect the changed disk?
Have you implemented smartd to watch the disks for failures that might come up?
brian
@ewwhite commented on GitHub (Mar 22, 2017):
I have several methods of handling ZFS disk and pool health monitoring.
My primary go-to is to install zfswatcher, which provides all of the necessary pool and disk health alerts for ZFS deployments. Another option is to fully configure the ZED daemon by editing /etc/zfs/zed.d/zed.rc to taste. Here's an example. Without modifying this, hot spares will not work properly on ZFS.
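A minimal sketch of the relevant zed.rc settings, assuming a stock ZFS on Linux install; the variable names come from the zed.rc template shipped with ZFS, but the thresholds here are illustrative, not taken from the example linked above:

```sh
# /etc/zfs/zed.d/zed.rc -- sourced by zed as a shell script

# Email address for ZED notifications; zed sends no mail without one.
ZED_EMAIL_ADDR="root"

# Minimum seconds between notifications for a repeating event.
ZED_NOTIFY_INTERVAL_SECS=3600

# Replace a failing vdev with a hot spare after this many checksum errors...
ZED_SPARE_ON_CHECKSUM_ERRORS=10

# ...or after this many I/O errors. Leaving these unset is why hot spares
# are never attached automatically on a default install.
ZED_SPARE_ON_IO_ERRORS=1
```

After editing, restart the ZED service (zfs-zed on systemd platforms) so the new settings take effect.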
Regarding updating STONITH resources when a drive fails, I make the change manually, either by updating the STONITH pcs resource or by going into the Cluster Manager GUI (https://ip.address:2224) and adding the new drive's Device Mapper address to the list of disks in the STONITH setup.
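A sketch of the pcs route, assuming a fence_scsi-based STONITH resource; the resource name (fence-scsi), pool name (vol0), and multipath WWIDs below are placeholders, not values from this setup:

```sh
# Identify the Device Mapper (multipath) name of the replacement disk.
multipath -ll

# Swap the failed disk out of the pool (old and new names are placeholders).
zpool replace vol0 /dev/mapper/35000c5008675e697 /dev/mapper/35000c50086e68467

# Rewrite the STONITH resource's device list so the new disk is covered by
# the SCSI persistent reservations; pcs replaces the whole list at once.
pcs stonith update fence-scsi \
    devices="/dev/mapper/35000c50086e68467,/dev/mapper/35000c500867edcbb"
```

The GUI route edits the same devices attribute on the resource, just through the web interface instead of the command line.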