[GH-ISSUE #37] Spread ZFS Pools across nodes #35

Open
opened 2026-05-05 03:32:14 -06:00 by gitea-mirror · 3 comments

Originally created by @rbicelli on GitHub (Apr 3, 2021).
Original GitHub issue: https://github.com/ewwhite/zfs-ha/issues/37

Hi, could I create several resource groups (group-vol1, group-vol2, group-vol3, group-vol4) and spread them across the cluster nodes?


@ewwhite commented on GitHub (Apr 3, 2021):

Yes, you can have different pools assigned to different cluster nodes. A common setup with two hosts is to run one pool on host one and another pool on host two. It's a way of leveraging all of the available resources while still having reasonable failover in the event of a node's unavailability.
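
A minimal sketch of that two-pool layout, assuming the Pacemaker/pcs stack used by this project; the resource group and node names below are placeholders:

```sh
# Give each pool's resource group a preferred node (hypothetical names).
# The scores are only preferences: either group can still fail over
# to the other node if its preferred node becomes unavailable.
pcs constraint location group-vol1 prefers node1=100
pcs constraint location group-vol2 prefers node2=100

# Confirm the constraints and where the groups are currently running.
pcs constraint
pcs status
```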

Edmund White


@rbicelli commented on GitHub (Apr 15, 2021):

And could it work using two SAS controllers per host, each connected to a different enclosure chain? Like:

Controller 1, Host 1 -> Enclosure Chain 1
Controller 1, Host 2 -> Enclosure Chain 1
Controller 2, Host 1 -> Enclosure Chain 2
Controller 2, Host 2 -> Enclosure Chain 2

with each node serving two volumes per chain?
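
The thread doesn't answer this directly. As a hedged sketch, assuming the Pacemaker/ZFS setup from this repo (and that `lsscsi` is installed), the prerequisite for such a split is that every host can see both chains and import every pool, so failover in either direction stays possible:

```sh
# Run on each host before assigning pools: both enclosure chains should be
# reachable through the host's two SAS controllers.
lsscsi -g | grep -i enclosu             # expect one enclosure entry per chain
ls -l /dev/disk/by-path/ | grep -i sas  # disks grouped by controller PCI path

# Every pool must be importable (though normally not imported) from this host,
# otherwise that pool cannot fail over here.
zpool import                            # lists pools available for import
```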


@almereyda commented on GitHub (Oct 11, 2022):

In Kubernetes environments, one can also leverage OpenEBS cStor for replicated pools across multiple nodes:

- https://github.com/mayadata-io/cstor/wiki/Using-uZFS-for-storing-cStor-Volume-Data

> cStor Data engine makes it possible to run ZFS in user space and use a collection of such ZFS instances running on multiple nodes to provide a replicated storage resilient against node failures.