On Wed, Jul 03, 2019 at 02:28:41PM -0700, Dan Williams wrote:
On Wed, Jul 3, 2019 at 12:53 PM Matthew Wilcox <willy@infradead.org> wrote:
@@ -211,7 +215,8 @@ static void *get_unlocked_entry(struct xa_state *xas)
 	for (;;) {
 		entry = xas_find_conflict(xas);
 		if (!entry || WARN_ON_ONCE(!xa_is_value(entry)) ||
-				!dax_is_locked(entry))
+				!dax_is_locked(entry) ||
+				dax_entry_order(entry) < xas_get_order(xas))
Doesn't this potentially allow a locked entry to be returned to a caller that expects all returned value entries to be unlocked?
It only allows locked entries to be returned for callers which pass in an xas which refers to a PMD entry. This is fine for grab_mapping_entry() because it checks size_flag & is_pte_entry.
dax_layout_busy_page() only uses 0-order. __dax_invalidate_entry() only uses 0-order. dax_writeback_one() needs an extra fix:
	/* Did a PMD entry get split? */
	if (dax_is_locked(entry))
		goto put_unlocked;
dax_insert_pfn_mkwrite() checks for a mismatch of pte vs pmd.
So I think we're good for all current users.
+#ifdef CONFIG_XARRAY_MULTI
+	unsigned int sibs = xas->xa_sibs;
+
+	while (sibs) {
+		order++;
+		sibs /= 2;
+	}
Use ilog2() here?
Thought about it. sibs is never going to be more than 31, so I don't know that it's worth eliminating 5 add/shift pairs in favour of whatever the ilog2 instruction is on a given CPU. In practice, on x86, sibs is going to be either 0 (PTEs) or 7 (PMDs). We could also avoid even having this function by passing PMD_ORDER or PTE_ORDER into get_unlocked_entry().
It's probably never going to be noticeable in this scenario because it's the very last thing checked before we put ourselves on a waitqueue and go to sleep.