[GH-ISSUE #35] Question about using a similar setup for FC Disk shelf connected directly to Qlogic FC HBA #36

Closed
opened 2026-05-05 03:32:14 -06:00 by gitea-mirror · 2 comments

Originally created by @brickcatena on GitHub (Sep 1, 2020).
Original GitHub issue: https://github.com/ewwhite/zfs-ha/issues/35

Thank you ewwhite for sharing your knowledge and experience with this stuff. Really helpful!

So this is not really an issue, more a question or a request for direction on using this methodology with a fibre-channel JBOD/ZFS pool.

My roadblock seems to be fence_scsi wanting features my setup doesn't support. Specifically, when the fence agent runs its first sg_persist command, it returns:
`sg_persist failed: Illegal request, Invalid opcode`

Not really having much experience with fencing, my specific question is: is there a different fence agent I could use instead? If so, I would hope to find a working example online. I'm really just trying to put a stack of hardware I already have to good use. My goal is to get one or more disk shelves connected to two controller nodes, allowing for controller maintenance and the redundancy that comes with that. I could abstract the controllers up a layer into VMs and do some kind of VM fencing, but I would want to pass through the HBAs, so I'm not sure that helps. Maybe fence_sbd? I'm open to ideas. Thanks for reading!
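For anyone hitting the same error: the failing step can be reproduced by hand with sg_persist to confirm whether the disks (or, more likely, the FC-to-SAS interposers in the shelf) actually pass SCSI-3 persistent reservation opcodes. This is a diagnostic sketch; the device path is a placeholder for one of your multipath devices:

```shell
# Sketch: check whether a disk honours SPC-3 persistent reservations.
# /dev/mapper/mpatha is a placeholder for one of your multipath devices.

# List registered PR keys. This fails with "Invalid opcode" if the
# PERSISTENT RESERVE IN command is not supported end-to-end:
sg_persist --in --read-keys --device=/dev/mapper/mpatha

# Ask the device which reservation capabilities it claims to support:
sg_persist --in --report-capabilities --device=/dev/mapper/mpatha
```

If these fail on every path, fence_scsi cannot work on this hardware regardless of cluster configuration.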

The specific setup I am using:
- 1 x 25-bay EMC VNGD disk shelf with FC-to-SAS interposers for each disk and 2 two-port HSSDC2 4 Gbit FC controllers in the shelf
- 25 x Seagate ST910006CLAR1000 SAS drives, SPC-3 compliant according to smartctl, using the FCP-2 transport protocol
- 2 x QLogic QLE2562 FC HBAs
- 2 x Cisco UCS C240 M3 servers
- Each host has 2 SFP-to-HSSDC2-style twinax cables connecting to the FC ports on the disk shelf
- Active/active multipath is working
- The zpool is built on /dev/mapper/ multipath devices
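Since both nodes already see the shelf, fence_sbd (shared-storage "poison pill" fencing) is a plausible alternative that does not depend on SCSI-3 persistent reservations, only on plain reads and writes to a shared LUN. A minimal sketch, assuming a small dedicated multipath device; the device name and values are placeholders:

```shell
# Sketch: SBD poison-pill fencing over a small shared LUN.
# /dev/mapper/sbd-lun is a placeholder; a dedicated ~10 MB device is enough.

# Initialise the SBD metadata on the shared device (run once, on one node):
sbd -d /dev/mapper/sbd-lun create

# Point the sbd daemon at the device on both nodes, e.g. in /etc/sysconfig/sbd:
#   SBD_DEVICE="/dev/mapper/sbd-lun"
#   SBD_WATCHDOG_DEV="/dev/watchdog"

# Register the fence agent with Pacemaker:
pcs stonith create fence-sbd fence_sbd devices=/dev/mapper/sbd-lun
```

Note that SBD also requires a working hardware watchdog on each node to self-fence if the node loses access to the device.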


@scramatte commented on GitHub (Mar 7, 2022):

Hi,

I've got similar requirement ... have you made any progress on it?

<!-- gh-comment-id:1060420785 -->

@ewwhite commented on GitHub (Mar 7, 2022):

This is a fairly non-standard use case, but I suppose you could fence with IPMI or power instead.
I have no way to test against this type of setup.
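On the IPMI route: the CIMC on UCS C240 servers speaks IPMI-over-LAN, so fence_ipmilan should be usable once that is enabled in CIMC. A sketch with placeholder addresses, credentials and node names:

```shell
# Sketch: power fencing via the servers' CIMC/IPMI interfaces.
# IPs, credentials and node names are placeholders.
pcs stonith create fence-node1 fence_ipmilan \
    ip=10.0.0.11 username=admin password=secret \
    lanplus=1 pcmk_host_list=node1
pcs stonith create fence-node2 fence_ipmilan \
    ip=10.0.0.12 username=admin password=secret \
    lanplus=1 pcmk_host_list=node2
```

Location constraints are typically added so each fence device runs on the node it does not fence.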

<!-- gh-comment-id:1060640642 -->