[GH-ISSUE #15] Question about fence_scsi

Closed
opened 2026-05-05 03:28:43 -06:00 by gitea-mirror · 2 comments

Originally created by @intentions on GitHub (Oct 12, 2017).
Original GitHub issue: https://github.com/ewwhite/zfs-ha/issues/15

Hello,

I have a question about fence_scsi. My setup is similar to yours, but instead of everything being in one big pool, I have two pools in an active-active configuration: each head exports one pool unless there is a problem with one of the heads.

For fence_scsi, should I create two separate stonith objects (one for each pool), or one stonith object covering both pools?


@ewwhite commented on GitHub (Oct 12, 2017):

In this case, you would create multiple groups and associate a fence device with each pool.

You can set resource co-location constraints to pin each pool to the node you want, allowing failover of that workload to the other node.

```
 fence-vol1	(stonith:fence_scsi):	Started zfs3-node1
 Resource Group: group-vol1
     vol1	(ocf::heartbeat:ZFS):	Started zfs3-node1
     vol1-ip	(ocf::heartbeat:IPaddr2):	Started zfs3-node1
 fence-vol2	(stonith:fence_scsi):	Started zfs3-node2
 Resource Group: group-vol2
     vol2	(ocf::heartbeat:ZFS):	Started zfs3-node2
     vol2-ip	(ocf::heartbeat:IPaddr2):	Started zfs3-node2
```
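A layout like the status output above could be built with `pcs` roughly as follows. This is a sketch, not the exact configuration from the thread: the shared LUN device paths, the service IPs, and the netmask are placeholder assumptions, and it assumes the `ocf:heartbeat:ZFS` resource agent is installed on both nodes.

```shell
# Fencing: one fence_scsi device per pool, scoped to that pool's shared
# LUNs. Device paths below are placeholders -- adjust to your hardware.
pcs stonith create fence-vol1 fence_scsi \
    pcmk_host_list="zfs3-node1 zfs3-node2" \
    devices="/dev/mapper/vol1-lun" meta provides=unfencing
pcs stonith create fence-vol2 fence_scsi \
    pcmk_host_list="zfs3-node1 zfs3-node2" \
    devices="/dev/mapper/vol2-lun" meta provides=unfencing

# One resource group per pool: the ZFS import plus its service IP.
# The IPs here are example values.
pcs resource create vol1 ocf:heartbeat:ZFS pool=vol1 --group group-vol1
pcs resource create vol1-ip ocf:heartbeat:IPaddr2 \
    ip=192.168.1.101 cidr_netmask=24 --group group-vol1
pcs resource create vol2 ocf:heartbeat:ZFS pool=vol2 --group group-vol2
pcs resource create vol2-ip ocf:heartbeat:IPaddr2 \
    ip=192.168.1.102 cidr_netmask=24 --group group-vol2

# Location preferences pin each group to its home node while still
# allowing failover to the surviving node if that node goes down.
pcs constraint location group-vol1 prefers zfs3-node1=100
pcs constraint location group-vol2 prefers zfs3-node2=100
```

Grouping each pool with its IP already keeps those resources colocated and ordered; the location preferences then give each group a home node, which produces the active-active split shown above.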

@intentions commented on GitHub (Oct 19, 2017):

Thanks!
