Check for NULL entries before checking the entry order, otherwise NULL is misinterpreted as a present pte conflict. The 'order' check needs to happen before the locked check, as an unlocked entry at the wrong order must fall back to a lookup at the correct order.
Reported-by: Jeff Smits <jeff.smits@intel.com>
Reported-by: Doug Nelson <doug.nelson@intel.com>
Cc: stable@vger.kernel.org
Fixes: 23c84eb78375 ("dax: Fix missed wakeup with PMD faults")
Cc: Jan Kara <jack@suse.cz>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 fs/dax.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/fs/dax.c b/fs/dax.c
index a71881e77204..08160011d94c 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -221,10 +221,11 @@ static void *get_unlocked_entry(struct xa_state *xas, unsigned int order)
 
 	for (;;) {
 		entry = xas_find_conflict(xas);
+		if (!entry || WARN_ON_ONCE(!xa_is_value(entry)))
+			return entry;
 		if (dax_entry_order(entry) < order)
 			return XA_RETRY_ENTRY;
-		if (!entry || WARN_ON_ONCE(!xa_is_value(entry)) ||
-				!dax_is_locked(entry))
+		if (!dax_is_locked(entry))
 			return entry;
 
 		wq = dax_entry_waitqueue(xas, entry, &ewait.key);
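For reference, here is how the whole loop reads with the fix applied. This is a reconstruction from the hunk above plus the unchanged wait/retry tail of get_unlocked_entry() as it stood in fs/dax.c at the time; treat the tail as a sketch rather than a verbatim copy of the tree:

static void *get_unlocked_entry(struct xa_state *xas, unsigned int order)
{
	void *entry;
	struct wait_exceptional_entry_queue ewait;
	wait_queue_head_t *wq;

	init_wait(&ewait.wait);
	ewait.wait.func = wake_exceptional_entry_func;

	for (;;) {
		entry = xas_find_conflict(xas);
		/* No conflict (or a corrupt non-value entry): report it
		 * before ever asking for its order */
		if (!entry || WARN_ON_ONCE(!xa_is_value(entry)))
			return entry;
		/* A real entry at too small an order: retry the lookup
		 * at the correct order */
		if (dax_entry_order(entry) < order)
			return XA_RETRY_ENTRY;
		/* A valid, correctly sized, unlocked entry: done */
		if (!dax_is_locked(entry))
			return entry;

		/* Otherwise sleep until the holder unlocks it, then
		 * re-walk the tree */
		wq = dax_entry_waitqueue(xas, entry, &ewait.key);
		prepare_to_wait_exclusive(wq, &ewait.wait,
					  TASK_UNINTERRUPTIBLE);
		xas_unlock_irq(xas);
		xas_reset(xas);
		schedule();
		finish_wait(wq, &ewait.wait);
		xas_lock_irq(xas);
	}
}

The ordering is the whole fix: xas_find_conflict() legitimately returns NULL when nothing occupies the index, and the old code fed that NULL to dax_entry_order() first, turning "no conflict" into "pte-order conflict" and a needless retry.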
On Sat, Oct 19, 2019 at 09:26:19AM -0700, Dan Williams wrote:
> Check for NULL entries before checking the entry order, otherwise
> NULL is misinterpreted as a present pte conflict. The 'order' check
> needs to happen before the locked check, as an unlocked entry at the
> wrong order must fall back to a lookup at the correct order.
> [..]
> diff --git a/fs/dax.c b/fs/dax.c
> index a71881e77204..08160011d94c 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -221,10 +221,11 @@ static void *get_unlocked_entry(struct xa_state *xas, unsigned int order)
>
>  	for (;;) {
>  		entry = xas_find_conflict(xas);
> +		if (!entry || WARN_ON_ONCE(!xa_is_value(entry)))
> +			return entry;
>  		if (dax_entry_order(entry) < order)
>  			return XA_RETRY_ENTRY;
> -		if (!entry || WARN_ON_ONCE(!xa_is_value(entry)) ||
> -				!dax_is_locked(entry))
> +		if (!dax_is_locked(entry))
>  			return entry;
Yes, I think this works. Should we also add:
static unsigned int dax_entry_order(void *entry)
{
+	BUG_ON(!xa_is_value(entry));
	if (xa_to_value(entry) & DAX_PMD)
		return PMD_ORDER;
	return 0;
}
which would have caught this logic error before it caused a performance regression?
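To see what the assertion buys, note that NULL fails xa_is_value() but still decodes "cleanly": xa_to_value(NULL) & DAX_PMD is 0, so the old ordering silently reported pte order for a missing entry instead of failing loudly. A minimal userspace sketch of that failure mode, with the xarray value-entry encoding and the DAX_PMD/PMD_ORDER constants mirrored from the kernel but illustrative here:

#include <assert.h>
#include <stdio.h>

#define DAX_PMD		(1UL << 1)	/* mirrors fs/dax.c */
#define PMD_ORDER	9		/* 2MB pages on x86-64 */

/* xarray value entries are tagged in bit 0: entry = (value << 1) | 1 */
static int xa_is_value(void *entry)
{
	return ((unsigned long)entry & 1) != 0;
}

static unsigned long xa_to_value(void *entry)
{
	return (unsigned long)entry >> 1;
}

static unsigned int dax_entry_order(void *entry)
{
	/* the proposed assertion; without it, NULL decays to order 0 */
	assert(xa_is_value(entry));
	if (xa_to_value(entry) & DAX_PMD)
		return PMD_ORDER;
	return 0;
}

int main(void)
{
	void *pmd_entry = (void *)((DAX_PMD << 1) | 1UL);

	printf("pmd entry order: %u\n", dax_entry_order(pmd_entry));
	/*
	 * dax_entry_order(NULL) now aborts instead of returning 0,
	 * which is exactly what would have flagged the misordered
	 * checks before they shipped.
	 */
	return 0;
}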
On Sat, Oct 19, 2019 at 1:50 PM Matthew Wilcox <willy@infradead.org> wrote:
> On Sat, Oct 19, 2019 at 09:26:19AM -0700, Dan Williams wrote:
> > [..]
> Yes, I think this works.  Should we also add:
>
> static unsigned int dax_entry_order(void *entry)
> {
> +	BUG_ON(!xa_is_value(entry));
> 	if (xa_to_value(entry) & DAX_PMD)
> 		return PMD_ORDER;
> 	return 0;
> }
>
> which would have caught this logic error before it caused a performance
> regression?
Sounds good, will add it to v2.
On Sat, Oct 19, 2019 at 4:09 PM Dan Williams <dan.j.williams@intel.com> wrote:
> On Sat, Oct 19, 2019 at 1:50 PM Matthew Wilcox <willy@infradead.org> wrote:
> > On Sat, Oct 19, 2019 at 09:26:19AM -0700, Dan Williams wrote:
> > > [..]
> > Yes, I think this works.  Should we also add:
> >
> > static unsigned int dax_entry_order(void *entry)
> > {
> > +	BUG_ON(!xa_is_value(entry));
> > 	if (xa_to_value(entry) & DAX_PMD)
> > 		return PMD_ORDER;
> > 	return 0;
> > }
> >
> > which would have caught this logic error before it caused a performance
> > regression?
> Sounds good, will add it to v2.
...except that there are multiple dax helpers that share the 'value' entry assumption. I'd rather annotate all of them in a separate patch, or none of them. It turns out that after this change, every dax_entry_order() invocation is already protected by an xa_is_value() assert earlier in the calling function.
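For illustration, the "all of them" variant Dan is declining to fold into this fix might look like the following, with the assertion added to each fs/dax.c helper that decodes a value entry. The helper names match the tree of that era, but the exact set and the choice of BUG_ON are assumptions here, not what was merged:

static unsigned int dax_entry_order(void *entry)
{
	BUG_ON(!xa_is_value(entry));	/* catch misuse at the decode site */
	if (xa_to_value(entry) & DAX_PMD)
		return PMD_ORDER;
	return 0;
}

static bool dax_is_locked(void *entry)
{
	BUG_ON(!xa_is_value(entry));
	return xa_to_value(entry) & DAX_LOCKED;
}

static unsigned long dax_is_pmd_entry(void *entry)
{
	BUG_ON(!xa_is_value(entry));
	return xa_to_value(entry) & DAX_PMD;
}

Dan's observation is that after the reordering above, the first assertion is redundant anyway: get_unlocked_entry() has already returned early on anything that is not a value entry.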
On Sat 19-10-19 09:26:19, Dan Williams wrote:
> Check for NULL entries before checking the entry order, otherwise
> NULL is misinterpreted as a present pte conflict. The 'order' check
> needs to happen before the locked check, as an unlocked entry at the
> wrong order must fall back to a lookup at the correct order.
> [..]
Good catch! The patch looks good to me. You can add:
Reviewed-by: Jan Kara <jack@suse.cz>
Honza
Dan Williams <dan.j.williams@intel.com> writes:
> Check for NULL entries before checking the entry order, otherwise
> NULL is misinterpreted as a present pte conflict. The 'order' check
> needs to happen before the locked check, as an unlocked entry at the
> wrong order must fall back to a lookup at the correct order.
Please include the user-visible effects of the problem in the changelog.
Thanks,
Jeff
On Mon, Oct 21, 2019 at 5:07 AM Jeff Moyer <jmoyer@redhat.com> wrote:
> Dan Williams <dan.j.williams@intel.com> writes:
> > [..]
> Please include the user-visible effects of the problem in the changelog.
Yup, I noticed that right after sending.