Mirror of https://github.com/ewwhite/zfs-ha.git (synced 2026-05-15 14:16:09 -06:00)
[GH-ISSUE #18] JBOD Redundancy #16
Originally created by @terminet85 on GitHub (Jul 17, 2018).
Original GitHub issue: https://github.com/ewwhite/zfs-ha/issues/18
What would be the best strategy for making the JBOD redundant as well?
@ewwhite commented on GitHub (Jul 17, 2018):
It depends upon how you design the pool, your RAID level, and your risk profile.
For instance, if you use RAID mirrors, you can place each side of each mirror in a different JBOD.
SAS cabling is important. Multipath is important. But in practice, the incidence of complete JBOD failure is rare.
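
A minimal sketch of that mirror layout (the pool name and the jbod-a-*/jbod-b-* multipath aliases are hypothetical, not from this thread; substitute your own /dev/disk/by-path or vdev names). Each mirror vdev pairs one disk from the first enclosure with one from the second, so losing an entire enclosure still leaves every vdev with one working leg:

```sh
# Hypothetical example: split each mirror across two JBOD enclosures.
# jbod-a-* / jbod-b-* are assumed dm-multipath aliases defined in
# /etc/multipath.conf.
zpool create tank \
  mirror /dev/mapper/jbod-a-slot0 /dev/mapper/jbod-b-slot0 \
  mirror /dev/mapper/jbod-a-slot1 /dev/mapper/jbod-b-slot1 \
  mirror /dev/mapper/jbod-a-slot2 /dev/mapper/jbod-b-slot2

# Verify the layout: each mirror vdev should list one disk per enclosure.
zpool status tank
```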
@mwpastore commented on GitHub (Jul 18, 2018):
Keep in mind that most JBODs have quite a bit of redundancy built into them: redundant power supplies, redundant I/O modules, etc. The only "single point of failure" is the backplane itself, i.e. the circuit board that the drives and I/O modules plug into.
@ewwhite commented on GitHub (Jul 18, 2018):
Right, and at a certain scale, you have to rely on the track record of the storage enclosures. It's difficult to plan/spread your vdevs across multiple cabinets in a way that would sustain enclosure failures.