[GH-ISSUE #18] JBOD Redundancy #16

Closed
opened 2026-05-05 03:28:59 -06:00 by gitea-mirror · 3 comments

Originally created by @terminet85 on GitHub (Jul 17, 2018).
Original GitHub issue: https://github.com/ewwhite/zfs-ha/issues/18

What would be the best strategy for making the JBOD redundant as well?


@ewwhite commented on GitHub (Jul 17, 2018):

It depends on how you design the pool, your RAID level, and your risk profile.

For instance, if you use RAID mirrors, you can place each side of the mirror in a different JBOD.

SAS cabling is important. Multipath is important. But in practice, the incidence of complete JBOD failure is rare.
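
To make that concrete, here is a rough sketch of a pool layout that splits each mirror across two enclosures. The multipath device names (`mpatha` through `mpathd`) are placeholders for whatever your JBODs actually expose; adjust to your own device paths.

```sh
# Hypothetical layout: mpatha/mpathc live in JBOD 1, mpathb/mpathd in JBOD 2.
# Each mirror vdev pairs one disk from each enclosure, so losing a whole
# enclosure degrades the mirrors but keeps the pool online.
zpool create tank \
  mirror /dev/mapper/mpatha /dev/mapper/mpathb \
  mirror /dev/mapper/mpathc /dev/mapper/mpathd

# Verify the resulting vdev layout and that both SAS paths are active.
zpool status tank
multipath -ll
```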


@mwpastore commented on GitHub (Jul 18, 2018):

Keep in mind that most JBODs have quite a bit of redundancy built into them: redundant power supplies, redundant I/O modules, etc. The only "single point of failure" is the backplane itself, i.e. the circuit board that the drives and I/O modules plug into.
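
If you want to verify that built-in redundancy from the host side, the SES pages exposed by the enclosure can be read with `sg_ses` from sg3_utils. The `/dev/sg4` device below is only an example; use whatever `lsscsi -g` reports for your enclosure.

```sh
# Find the enclosure's SCSI generic device (type "enclosu").
lsscsi -g | grep -i enclosu

# Dump the enclosure status page: power supplies, cooling fans, and
# I/O modules should all report a healthy status.
sg_ses --page=es /dev/sg4
```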


@ewwhite commented on GitHub (Jul 18, 2018):

Right, and at a certain scale, you have to rely on the track record of the storage enclosures. It's difficult to plan/spread your vdevs across multiple cabinets in a way that would sustain enclosure failures.
