This patch fixes the check on the return value that decides whether this was the last callback. The rv variable got overwritten by the return value of copy_result_to_user(), so the later DLM_DEQUEUE_CALLBACK_LAST check tested the wrong value. Fix it by introducing a second variable for the copy_result_to_user() return value so that rv is no longer overwritten.
Cc: stable@vger.kernel.org
Fixes: 61bed0baa4db ("fs: dlm: use a non-static queue for callbacks")
Reported-by: Valentin Vidić <vvidic@valentin-vidic.from.hr>
Closes: https://lore.kernel.org/gfs2/Ze4qSvzGJDt5yxC3@valentin-vidic.from.hr
Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
 fs/dlm/user.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/fs/dlm/user.c b/fs/dlm/user.c
index 695e691b38b3..9f9b68448830 100644
--- a/fs/dlm/user.c
+++ b/fs/dlm/user.c
@@ -806,7 +806,7 @@ static ssize_t device_read(struct file *file, char __user *buf, size_t count,
 	struct dlm_lkb *lkb;
 	DECLARE_WAITQUEUE(wait, current);
 	struct dlm_callback *cb;
-	int rv, copy_lvb = 0;
+	int rv, ret, copy_lvb = 0;
 	int old_mode, new_mode;
 
 	if (count == sizeof(struct dlm_device_version)) {
@@ -906,9 +906,9 @@ static ssize_t device_read(struct file *file, char __user *buf, size_t count,
 			trace_dlm_ast(lkb->lkb_resource->res_ls, lkb);
 	}
 
-	rv = copy_result_to_user(lkb->lkb_ua,
-				 test_bit(DLM_PROC_FLAGS_COMPAT, &proc->flags),
-				 cb->flags, cb->mode, copy_lvb, buf, count);
+	ret = copy_result_to_user(lkb->lkb_ua,
+				  test_bit(DLM_PROC_FLAGS_COMPAT, &proc->flags),
+				  cb->flags, cb->mode, copy_lvb, buf, count);
 
 	kref_put(&cb->ref, dlm_release_callback);
 
@@ -916,7 +916,7 @@ static ssize_t device_read(struct file *file, char __user *buf, size_t count,
 	if (rv == DLM_DEQUEUE_CALLBACK_LAST)
 		dlm_put_lkb(lkb);
 
-	return rv;
+	return ret;
 }
 
 static __poll_t device_poll(struct file *file, poll_table *wait)
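As a side note for readers less familiar with device_read(): the bug boils down to reusing one variable for two different return values. The snippet below is only a minimal user-space sketch, not the dlm code; dequeue_status(), copy_to_caller() and DEQUEUE_LAST are made-up stand-ins for the earlier dequeue step, copy_result_to_user() and DLM_DEQUEUE_CALLBACK_LAST. It just illustrates why a second variable is needed to keep the "last callback" status around.

/* Minimal stand-alone sketch (not the dlm code itself): dequeue_status(),
 * copy_to_caller() and DEQUEUE_LAST are made up.  It shows why reusing one
 * variable for both the dequeue status and the copy result loses the
 * "this was the last callback" information.
 */
#include <stdio.h>

#define DEQUEUE_LAST 1	/* stand-in for DLM_DEQUEUE_CALLBACK_LAST */

static int dequeue_status(void)
{
	return DEQUEUE_LAST;	/* pretend this was the last callback */
}

static int copy_to_caller(void)
{
	return 24;		/* pretend 24 bytes were copied */
}

int main(void)
{
	int rv, ret;

	rv = dequeue_status();

	/* Buggy pattern: rv = copy_to_caller(); would clobber the status,
	 * so the check below could no longer see DEQUEUE_LAST.  Keeping the
	 * copy result in a second variable preserves it, which is what the
	 * patch does.
	 */
	ret = copy_to_caller();

	if (rv == DEQUEUE_LAST)
		printf("final reference dropped\n");

	return ret > 0 ? 0 : 1;
}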
There was a wrong conversion to atomic counters in commit 75a7d60134ce ("fs: dlm: handle lkb wait count as atomic_t"): atomic_dec_and_test() decrements first and only then returns true when the counter hits zero. This means we miss one unhold_lkb() call for the last iteration. This patch fixes the issue, and when the last wait_count reference is dropped we also remove the lkb from the waiters list, as this is how it is supposed to work.
Cc: stable@vger.kernel.org
Fixes: 75a7d60134ce ("fs: dlm: handle lkb wait count as atomic_t")
Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
 fs/dlm/lock.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
index 652c51fbbf76..c30e9f8d017e 100644
--- a/fs/dlm/lock.c
+++ b/fs/dlm/lock.c
@@ -5070,11 +5070,13 @@ int dlm_recover_waiters_post(struct dlm_ls *ls)
 			/* drop all wait_count references we still
			 * hold a reference for this iteration.
			 */
-			while (!atomic_dec_and_test(&lkb->lkb_wait_count))
-				unhold_lkb(lkb);
-			mutex_lock(&ls->ls_waiters_mutex);
-			list_del_init(&lkb->lkb_wait_reply);
+			while (atomic_read(&lkb->lkb_wait_count)) {
+				if (atomic_dec_and_test(&lkb->lkb_wait_count))
+					list_del_init(&lkb->lkb_wait_reply);
+
+				unhold_lkb(lkb);
+			}
 			mutex_unlock(&ls->ls_waiters_mutex);
 
 			if (oc || ou) {
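To make the off-by-one easier to see outside the kernel, here is a minimal user-space sketch. It is not the dlm code: dec_and_test() is a made-up plain-int stand-in with the same semantics as atomic_dec_and_test(), and the counters are ordinary ints. With a wait count of 3, the old loop shape performs only two drops, while the fixed shape performs three and can do the list removal exactly once, on the final decrement.

/* Minimal user-space sketch of the off-by-one (plain ints, made-up
 * dec_and_test() with the same semantics as atomic_dec_and_test()).
 */
#include <stdio.h>

static int dec_and_test(int *v)
{
	return --(*v) == 0;	/* decrement first, report when zero is hit */
}

int main(void)
{
	int count, drops;

	/* old pattern: skips the drop belonging to the final decrement */
	count = 3;
	drops = 0;
	while (!dec_and_test(&count))
		drops++;
	printf("old loop: %d drops for 3 references\n", drops);	/* prints 2 */

	/* fixed pattern: one drop per reference, the last one also does
	 * the cleanup (the list removal in the real code)
	 */
	count = 3;
	drops = 0;
	while (count) {
		if (dec_and_test(&count))
			printf("last reference: remove from waiters list\n");
		drops++;
	}
	printf("new loop: %d drops for 3 references\n", drops);	/* prints 3 */

	return 0;
}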
On Tue, Mar 12, 2024 at 01:05:08PM -0400, Alexander Aring wrote:
> There was a wrong conversion to atomic counters in commit 75a7d60134ce
> ("fs: dlm: handle lkb wait count as atomic_t"): atomic_dec_and_test()
> decrements first and only then returns true when the counter hits zero.
> This means we miss one unhold_lkb() call for the last iteration. This
> patch fixes the issue, and when the last wait_count reference is dropped
> we also remove the lkb from the waiters list, as this is how it is
> supposed to work.
>
> Cc: stable@vger.kernel.org
> Fixes: 75a7d60134ce ("fs: dlm: handle lkb wait count as atomic_t")
> Signed-off-by: Alexander Aring <aahringo@redhat.com>
Tested-by: Valentin Vidić <vvidic@valentin-vidic.from.hr>