mirror of
https://github.com/ewwhite/zfs-ha.git
synced 2026-05-15 14:16:09 -06:00
[GH-ISSUE #34] enclosure compatibility problem
Originally created by @darklinkworker on GitHub (Jul 5, 2020).
Original GitHub issue: https://github.com/ewwhite/zfs-ha/issues/34
So a few of my coworkers and I have been trying to get this HA ZFS system running, and we keep having drives drop out randomly. The drives are new, so there shouldn't be anything wrong with them. After some testing, it seems like the enclosures are the problem. It's a holiday weekend so everyone is taking a break right now, but after digging in some more I am wondering if we just have the wrong hardware altogether.

The enclosures don't seem to have a model number, but the modules in them do. They are HP AP844B. That only seems to match HP P2000 enclosures, so I am assuming that is what we got. We got a used HP StoreOnce B6200 rack, so everything was pre-cabled for HA, and the servers came with lots of RAM plus 2 HBAs and 2 10G NIC cards each. We assumed this prebuilt rack from HP would work. I found a few people online saying not every dual-port enclosure will work. Wondering now if I can swap out the modules for ones that will.

Sent from my phone, did not reread to check for spelling or grammar.
@ewwhite commented on GitHub (Jul 5, 2020):
Hello,
The P2000 LFF expansion chassis (AP843B) and associated modules (AP844B) aren't intended for this purpose.
These are meant to be cascaded expansion for the P2000 SAN array. They may or may not work with what you're doing.
If you're having drive drop-outs, there would certainly be SAS and SCSI bus errors. Look at the output of `dmesg` or include relevant logs. Also, what specific disks are being used?
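For anyone else hitting this, a rough sketch of how those bus errors could be pulled out of the kernel logs on Linux. The grep patterns below are illustrative assumptions on my part, not guaranteed strings: the exact messages depend on the HBA driver (mpt2sas/mpt3sas on most LSI-based HP SAS HBAs).

```shell
#!/bin/sh
# Sketch: surface SAS/SCSI error patterns that typically accompany
# drive drop-outs behind a flaky enclosure/expander.
# Pattern list is a guess; tune it to your HBA driver's messages.

PATTERN='mpt[23]sas|sas|scsi|I/O error|reset|task abort'

# Kernel ring buffer (may require root; errors ignored if restricted)
dmesg 2>/dev/null | grep -iE "$PATTERN" | tail -n 20

# Persistent kernel logs, if journald is available
if command -v journalctl >/dev/null 2>&1; then
    journalctl -k --no-pager 2>/dev/null | grep -iE "$PATTERN" | tail -n 20
fi

exit 0
```

Repeated link resets or task aborts clustered on a few targets usually point at the expander or cabling rather than the disks themselves.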
@darklinkworker commented on GitHub (Jul 5, 2020):
Thanks for the quick reply. I am still out at a lake cabin and had to borrow a laptop from one of my nephews to grab this data.

The disks we are using are SEAGATE ST4000NM0023. This HP storage rack came from a data center and was pulled by a local vendor who breaks down racks and resells them on eBay. We got this rack before they broke it down, so all the HP tags are still on it from when it was new. There are 16 enclosures, all with AP844B modules. There were 4 servers grouped in pairs: the top two servers are connected to the top 8 enclosures, and the same layout repeats for the bottom half of the rack. These are the messages we got on some of the drives that dropped out of the ZFS array
I thought all enclosures were the same, just a bunch of disks for a host bus adapter to read, but after doing some digging I guess they are not. Would I be able to swap out the modules for ones that support this ZFS HA system, or do I need to replace the whole enclosures with D2600/D2700 units?
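A way to check what you actually have before buying anything: a sketch, assuming a Linux host with `lsscsi` and `sg3_utils` installed (every step is guarded so it degrades gracefully if they aren't). A disk that is genuinely dual-ported for HA should show up once per SAS path with the same WWN.

```shell
#!/bin/sh
# Sketch: identify SES enclosure devices and crudely check whether
# disks are visible down two SAS paths, as dual-port HA cabling needs.

# Enclosure services devices; vendor/model strings reveal the module.
if command -v lsscsi >/dev/null 2>&1; then
    lsscsi | grep -i enclosu
fi

# Query each SCSI generic device for its SES configuration page.
if command -v sg_ses >/dev/null 2>&1; then
    for dev in /dev/sg*; do
        [ -e "$dev" ] || continue
        sg_ses --page=cf "$dev" 2>/dev/null | head -n 5
    done
fi

# WWNs appearing more than once indicate a device seen on two paths.
# NOTE: the WWN column position ($3) is an assumption; check your
# lsscsi version's output format.
if command -v lsscsi >/dev/null 2>&1; then
    lsscsi -w 2>/dev/null | awk '{print $3}' | sort | uniq -d
fi

exit 0
```

If the duplicate-WWN list is empty on hardware that is cabled for HA, the expander modules are likely not presenting the second port, which matches the P2000 expansion modules not being intended for this use.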
@darklinkworker commented on GitHub (Aug 5, 2020):
Figured I would give an update. We ended up replacing the enclosures with HP D2600s and our problems went away. Drives don't drop out anymore and the system has been stable for a few weeks now.