On Thu 14-11-19 21:53:23, John Hubbard wrote:
> And get rid of the mmap_sem calls, as part of that. Note that
> get_user_pages_fast() will, if necessary, fall back to
> __gup_longterm_unlocked(), which takes the mmap_sem as needed.
> Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
> Reviewed-by: Ira Weiny <ira.weiny@intel.com>
> Signed-off-by: John Hubbard <jhubbard@nvidia.com>
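As a side note for anyone reading along, the reason the explicit down_read()/up_read() pairs can simply be deleted is that the locking moved inside the GUP code itself. Very roughly (a simplified sketch, not the actual gup.c implementation; walk_page_tables_lockless() is a made-up placeholder):

#include <linux/mm.h>
#include <linux/sched.h>

/*
 * Simplified sketch, not the real gup.c: the fast path walks page
 * tables without mmap_sem, and only the slow-path fallback takes
 * mmap_sem, internally, so callers such as ib_umem_get() never have
 * to take it themselves.
 */
static int gup_fast_sketch(unsigned long start, int nr_pages,
			   unsigned int gup_flags, struct page **pages)
{
	int nr_pinned;
	long ret;

	/* Fast path: lockless page-table walk, pins what it can. */
	nr_pinned = walk_page_tables_lockless(start, nr_pages, gup_flags,
					      pages);
	if (nr_pinned == nr_pages)
		return nr_pinned;

	/*
	 * Slow path for the remainder: mmap_sem is taken here, inside
	 * the GUP code, not by the caller.
	 */
	down_read(&current->mm->mmap_sem);
	ret = get_user_pages(start + nr_pinned * PAGE_SIZE,
			     nr_pages - nr_pinned, gup_flags,
			     pages + nr_pinned, NULL);
	up_read(&current->mm->mmap_sem);

	if (ret < 0)
		return nr_pinned ? nr_pinned : ret;
	return nr_pinned + ret;
}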
Looks good to me. You can add:
Reviewed-by: Jan Kara <jack@suse.cz>
Honza
>  drivers/infiniband/core/umem.c | 17 ++++++-----------
>  1 file changed, 6 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
> index 24244a2f68cc..3d664a2539eb 100644
> --- a/drivers/infiniband/core/umem.c
> +++ b/drivers/infiniband/core/umem.c
> @@ -271,16 +271,13 @@ struct ib_umem *ib_umem_get(struct ib_udata *udata, unsigned long addr,
>  	sg = umem->sg_head.sgl;
>
>  	while (npages) {
> -		down_read(&mm->mmap_sem);
> -		ret = get_user_pages(cur_base,
> -				     min_t(unsigned long, npages,
> -					   PAGE_SIZE / sizeof (struct page *)),
> -				     gup_flags | FOLL_LONGTERM,
> -				     page_list, NULL);
> -		if (ret < 0) {
> -			up_read(&mm->mmap_sem);
> +		ret = get_user_pages_fast(cur_base,
> +					  min_t(unsigned long, npages,
> +						PAGE_SIZE /
> +						sizeof(struct page *)),
> +					  gup_flags | FOLL_LONGTERM, page_list);
> +		if (ret < 0)
>  			goto umem_release;
> -		}
>
>  		cur_base += ret * PAGE_SIZE;
>  		npages   -= ret;
> @@ -288,8 +285,6 @@ struct ib_umem *ib_umem_get(struct ib_udata *udata, unsigned long addr,
>  		sg = ib_umem_add_sg_table(sg, page_list, ret,
>  			dma_get_max_seg_size(context->device->dma_device),
>  			&umem->sg_nents);
> -
> -		up_read(&mm->mmap_sem);
>  	}
>
>  	sg_mark_end(sg);
> --
> 2.24.0
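As an aside, the caller-side pattern this converges on (batched long-term pinning with get_user_pages_fast(), plus an unwind if a later batch fails) looks roughly like the sketch below. This is not the exact umem.c code: pin_user_buffer_sketch() and the flat pages[] array are made up for illustration; in the driver, the umem_release path plays the role of the unwind loop.

#include <linux/mm.h>

/*
 * Minimal sketch of the resulting caller pattern, not the umem.c code:
 * pin a long-term user buffer in batches without touching mmap_sem,
 * and drop everything already pinned if a later batch fails.
 */
static int pin_user_buffer_sketch(unsigned long addr, unsigned long npages,
				  struct page **pages)
{
	unsigned long pinned = 0;
	int ret;

	while (pinned < npages) {
		ret = get_user_pages_fast(addr + pinned * PAGE_SIZE,
					  npages - pinned,
					  FOLL_WRITE | FOLL_LONGTERM,
					  pages + pinned);
		if (ret < 0)
			goto release;
		pinned += ret;
	}
	return 0;

release:
	/* Undo the pins taken by earlier, successful batches. */
	while (pinned)
		put_page(pages[--pinned]);
	return ret;
}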