On Tue, Apr 03, 2018 at 01:30:41PM +0200, Michal Hocko wrote:
On Mon 02-04-18 09:50:26, Wei Yang wrote:
On Fri, Mar 30, 2018 at 01:57:27PM -0700, Andrew Morton wrote:
On Fri, 30 Mar 2018 11:30:55 +0800 Wei Yang <richard.weiyang@gmail.com> wrote:
memblock_search_pfn_nid() returns the nid and the [start|end]_pfn of the memory region in which the pfn sits. However, the calculation of start_pfn has a potential issue when the region's base is not page aligned.
For example, assume PAGE_SHIFT is 12 and the base is 0x1234. The current implementation would return 1 for start_pfn, which is not correct.
Why is this not correct? The caller might want the pfn of the page which covers the base?
Hmm... the only caller of memblock_search_pfn_nid() is __early_pfn_to_nid(), which returns the nid of a pfn and saves the [start_pfn, end_pfn] range of that memory region to a cache. So storing an inexact pfn range in the cache does not look like good practice.
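For context, the caching in the caller looks roughly like the sketch below. It is reconstructed from memory of __early_pfn_to_nid() in mm/page_alloc.c, so the struct and field names (mminit_pfnnid_cache, last_start, last_end, last_nid) may not match the tree exactly. The point is that the cached window is reused for later lookups, so a rounded-down start_pfn would let the cache answer for a pfn whose page is only partly inside the region.

/* Rough sketch of the caller's cache; names and details from memory. */
struct mminit_pfnnid_cache {
        unsigned long last_start;
        unsigned long last_end;
        int last_nid;
};

int __early_pfn_to_nid(unsigned long pfn, struct mminit_pfnnid_cache *state)
{
        unsigned long start_pfn, end_pfn;
        int nid;

        /* Fast path: pfn falls in the last cached [start_pfn, end_pfn) window. */
        if (state->last_start <= pfn && pfn < state->last_end)
                return state->last_nid;

        nid = memblock_search_pfn_nid(pfn, &start_pfn, &end_pfn);
        if (nid != -1) {
                /* Cache the whole region so later lookups skip the search. */
                state->last_start = start_pfn;
                state->last_end = end_pfn;
                state->last_nid = nid;
        }

        return nid;
}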
This patch fixes the calculation by using PFN_UP().
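To make the fix concrete, below is a from-memory sketch of memblock_search_pfn_nid() with the proposed PFN_UP() in place. It is not a verbatim diff; the helper memblock_search(), the region field accesses and the omitted annotations are written down as I recall them and may differ from the actual tree.

/*
 * Sketch only (reconstructed from memory, not copied from the tree):
 * the one-line change is rounding the region base up instead of down
 * when reporting start_pfn.
 */
int memblock_search_pfn_nid(unsigned long pfn,
                            unsigned long *start_pfn, unsigned long *end_pfn)
{
        struct memblock_type *type = &memblock.memory;
        int mid = memblock_search(type, PFN_PHYS(pfn));

        if (mid == -1)
                return -1;

        /*
         * Rounding down hands back a start_pfn whose page may be only
         * partially covered by the region; round up instead.
         */
        *start_pfn = PFN_UP(type->regions[mid].base);   /* was PFN_DOWN() */
        *end_pfn = PFN_DOWN(type->regions[mid].base +
                            type->regions[mid].size);

        return type->regions[mid].nid;
}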
The original commit is e76b63f80d93 ("memblock, numa: binary search node id"), which was merged in v3.12.
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: <stable@vger.kernel.org> # 3.12+
Please fully describe the runtime effects of a bug when fixing that bug. This description doesn't give enough justification for merging the patch into mainline, let alone -stable.
Since PFN_UP() and PFN_DOWN() differ when the address is not page aligned, in theory we may run into the two situations below.
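As a generic illustration of that premise, here is a minimal userspace sketch (the PFN_UP()/PFN_DOWN() definitions are assumed to match include/linux/pfn.h, and PAGE_SHIFT is fixed at 12 as in the changelog example): the two macros agree on a page aligned address and differ by exactly one pfn otherwise.

#include <assert.h>
#include <stdio.h>

#define PAGE_SHIFT      12
#define PAGE_SIZE       (1UL << PAGE_SHIFT)
#define PFN_UP(x)       (((x) + PAGE_SIZE - 1) >> PAGE_SHIFT)
#define PFN_DOWN(x)     ((x) >> PAGE_SHIFT)

int main(void)
{
        /* Page aligned base: both macros give the same pfn. */
        assert(PFN_UP(0x2000UL) == 2 && PFN_DOWN(0x2000UL) == 2);

        /* Unaligned base from the changelog: PFN_DOWN() gives 1, PFN_UP() gives 2. */
        printf("PFN_DOWN(0x1234) = %lu\n", PFN_DOWN(0x1234UL)); /* 1 */
        printf("PFN_UP(0x1234)   = %lu\n", PFN_UP(0x1234UL));   /* 2 */

        return 0;
}

Run through a plain C compiler, the 0x1234 case prints 1 and 2, matching the changelog example.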
Have you ever seen a HW that would report page unaligned memory ranges? Is this even possible?
No, so we don't need to handle this case?
--
Michal Hocko
SUSE Labs