[GH-ISSUE #36] How best to handle locking over NFS #34

Closed
opened 2026-05-05 03:32:13 -06:00 by gitea-mirror · 2 comments

Originally created by @tullis on GitHub (Dec 1, 2020).
Original GitHub issue: https://github.com/ewwhite/zfs-ha/issues/36

Thanks @ewwhite for your hard work on this.
The system that I am building based on this guide is now in testing, but I have an issue regarding locking over NFS.

I have an NFS export defined for virtual machines, using the `sharenfs` parameter.

The options are: `sync,no_subtree_check,no_wdelay`.
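
For reference, a minimal sketch of that export setup; the dataset name `zflash/vm_disks` is an assumption inferred from the pool and mountpoint names in the log below:

```
# Sketch only: dataset name zflash/vm_disks is assumed from the log below.
# OpenZFS on Linux passes the sharenfs option string through to exportfs.
zfs set sharenfs="sync,no_subtree_check,no_wdelay" zflash/vm_disks

# Confirm what the kernel NFS server is actually exporting:
exportfs -v
```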

I have mounted this NFS share on a Linux client and I am using it to host a VM for test purposes.

When I try a managed failover, the ZFS resource in Pacemaker attempts to export the pool, but it fails because the pool is in use. For example:

```
notice: zflash_stop_0:43741:stderr [ umount: /srv/zflash_vm_disks: target is busy. ]
notice: zflash_stop_0:43741:stderr [ cannot unmount '/srv/zflash_vm_disks': umount failed ]
```

There are no other local processes on the machine, so I think that it is only the NFS kernel server that is holding the locks open. The protocol in use is NFS version 4.2.
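
A quick sketch of how to confirm that: knfsd holds its references in kernel space and never appears in these tools, so an empty result points at the NFS server itself (the path is taken from the log above):

```
# Check for userspace holders of the busy mountpoint.
# knfsd references live in the kernel, so they will not appear here.
fuser -vm /srv/zflash_vm_disks
lsof +f -- /srv/zflash_vm_disks
```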

Do you tend to use NFS version 3 for this kind of requirement, where `nolock` can be specified on the client?
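
For illustration, a client-side v3 mount with locking disabled might look like this sketch; the server name `nfs-server` and the target path are placeholders:

```
# Hypothetical client mount: force NFSv3 and disable NLM locking.
# "nfs-server" and the mount paths are placeholders.
mount -t nfs -o vers=3,nolock nfs-server:/srv/zflash_vm_disks /mnt/vm_disks
```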


@ewwhite commented on GitHub (Dec 1, 2020):

Hello. This works best with NFS3.
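
For illustration, one way to pin the server to NFSv3 (a sketch, assuming nfs-utils is new enough to read `/etc/nfs.conf`; older RHEL-style systems set `RPCNFSDARGS="-N 4"` in `/etc/sysconfig/nfs` instead):

```
# Sketch: disable NFSv4 on the server so clients negotiate v3.
# Assumes nfs-utils reads /etc/nfs.conf; adjust for your distribution.
cat >> /etc/nfs.conf <<'EOF'
[nfsd]
vers4=n
EOF
systemctl restart nfs-server
```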


@tullis commented on GitHub (Dec 1, 2020):

Thanks very much.
