From: Tvrtko Ursulin <tvrtko.ursulin(a)igalia.com>
A long time ago, commit b3ac17667f11 ("drm/scheduler: rework entity
creation") made a change which prevented priority changes for entities
with only one assigned scheduler.
The commit reduced drm_sched_entity_set_priority() to simply updating the
entity's priority, but the run queue selection logic in
drm_sched_entity_select_rq() was never able to actually change the
originally assigned run queue.
In practice that only affected amdgpu, which is the only driver that can
do dynamic priority changes. An attempt to rectify it there appears to
have been made in 2316a86bde49 ("drm/amdgpu: change hw sched list on ctx
priority override").
A few problems remained unresolved, however. First, this only fixed
drm_sched_entity_set_priority() *if* drm_sched_entity_modify_sched() was
called first, which was not documented anywhere.
Secondly, this only works if drm_sched_entity_modify_sched() is actually
called, which in amdgpu's case today is true only for gfx and compute.
Priority changes for other engines with only one scheduler assigned, such
as jpeg and video decode, still do not work.
Note that this was also noticed in 981b04d96856 ("drm/sched: improve docs
around drm_sched_entity").
A completely different source of non-obvious confusion was that, whereas
drm_sched_entity_init() did not keep the passed-in list of schedulers
(courtesy of 8c23056bdc7a ("drm/scheduler: do not keep a copy of sched
list")), drm_sched_entity_modify_sched() disagreed with that and would
simply assign the single-item list.
That inconsistency appears to have been semi-silently fixed in ac4eb83ab255
("drm/sched: select new rq even if there is only one v3").
What was also not documented is why it was important not to keep the
list of schedulers when there is only one. I suspect it has something to
do with the fact that the passed-in array is on the stack for many
callers with just one scheduler. With more than one scheduler amdgpu is
the only caller, and there the container is not on the stack. Keeping a
stack-backed list in the entity would obviously be undefined behaviour
*if* the list were kept.
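For example, a typical single-scheduler caller passes an on-stack array
along these lines (a hypothetical but representative sketch, not lifted
from any specific driver):

	struct drm_gpu_scheduler *sched_list[] = { &ring->sched }; /* on the caller's stack */

	r = drm_sched_entity_init(entity, DRM_SCHED_PRIORITY_NORMAL,
				  sched_list, ARRAY_SIZE(sched_list), NULL);
	/* If the entity kept the sched_list pointer it would dangle as
	 * soon as this function returns. */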
Amdgpu, however, only stopped passing in a stack-backed container for the
more-than-one-scheduler case in 977f7e1068be ("drm/amdgpu: allocate
entities on demand"). Until then I suspect dereferencing freed stack
memory from drm_sched_entity_select_rq() was still possible.
In order to untangle all that and fix priority changes, this patch brings
back an entity-owned container for storing the passed-in scheduler list.
The container is now owned by the entity while the scheduler pointers
remain owned by the drivers. The list of schedulers is always kept,
including for the single-scheduler case.
The patch therefore also removes the single-scheduler special case, which
means that priority changes should now work (be able to change the
selected run queue) for all drivers and engines. In other words,
drm_sched_entity_set_priority() should now just work for all cases.
To enable the entity maintaining its own container, some API calls needed
to grow the capability of returning success/failure, a change which
percolates mostly through the amdgpu sources.
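With those changes a runtime scheduler/priority update from a driver ends
up looking roughly like this (a minimal sketch of the new calling
convention; new_sched_list, new_num_scheds and new_priority are
placeholders):

	r = drm_sched_entity_modify_sched(entity, new_sched_list,
					  new_num_scheds);
	if (r)
		return r;

	drm_sched_entity_set_priority(entity, new_priority);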
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin(a)igalia.com>
Fixes: b3ac17667f11 ("drm/scheduler: rework entity creation")
References: 8c23056bdc7a ("drm/scheduler: do not keep a copy of sched list")
References: 977f7e1068be ("drm/amdgpu: allocate entities on demand")
References: 2316a86bde49 ("drm/amdgpu: change hw sched list on ctx priority override")
References: ac4eb83ab255 ("drm/sched: select new rq even if there is only one v3")
References: 981b04d96856 ("drm/sched: improve docs around drm_sched_entity")
Cc: Christian König <christian.koenig(a)amd.com>
Cc: Alex Deucher <alexander.deucher(a)amd.com>
Cc: Luben Tuikov <ltuikov89(a)gmail.com>
Cc: Matthew Brost <matthew.brost(a)intel.com>
Cc: Daniel Vetter <daniel.vetter(a)ffwll.ch>
Cc: amd-gfx(a)lists.freedesktop.org
Cc: dri-devel(a)lists.freedesktop.org
Cc: <stable(a)vger.kernel.org> # v5.6+
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 31 +++++---
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c | 13 +--
drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c | 3 +-
drivers/gpu/drm/scheduler/sched_entity.c | 96 ++++++++++++++++-------
include/drm/gpu_scheduler.h | 16 ++--
6 files changed, 100 insertions(+), 61 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
index 5cb33ac99f70..387247f8307e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
@@ -802,15 +802,15 @@ struct dma_fence *amdgpu_ctx_get_fence(struct amdgpu_ctx *ctx,
return fence;
}
-static void amdgpu_ctx_set_entity_priority(struct amdgpu_ctx *ctx,
- struct amdgpu_ctx_entity *aentity,
- int hw_ip,
- int32_t priority)
+static int amdgpu_ctx_set_entity_priority(struct amdgpu_ctx *ctx,
+ struct amdgpu_ctx_entity *aentity,
+ int hw_ip,
+ int32_t priority)
{
struct amdgpu_device *adev = ctx->mgr->adev;
- unsigned int hw_prio;
struct drm_gpu_scheduler **scheds = NULL;
- unsigned num_scheds;
+ unsigned int hw_prio, num_scheds;
+ int ret = 0;
/* set sw priority */
drm_sched_entity_set_priority(&aentity->entity,
@@ -822,16 +822,18 @@ static void amdgpu_ctx_set_entity_priority(struct amdgpu_ctx *ctx,
hw_prio = array_index_nospec(hw_prio, AMDGPU_RING_PRIO_MAX);
scheds = adev->gpu_sched[hw_ip][hw_prio].sched;
num_scheds = adev->gpu_sched[hw_ip][hw_prio].num_scheds;
- drm_sched_entity_modify_sched(&aentity->entity, scheds,
- num_scheds);
+ ret = drm_sched_entity_modify_sched(&aentity->entity, scheds,
+ num_scheds);
}
+
+ return ret;
}
-void amdgpu_ctx_priority_override(struct amdgpu_ctx *ctx,
- int32_t priority)
+int amdgpu_ctx_priority_override(struct amdgpu_ctx *ctx, int32_t priority)
{
int32_t ctx_prio;
unsigned i, j;
+ int ret;
ctx->override_priority = priority;
@@ -842,10 +844,15 @@ void amdgpu_ctx_priority_override(struct amdgpu_ctx *ctx,
if (!ctx->entities[i][j])
continue;
- amdgpu_ctx_set_entity_priority(ctx, ctx->entities[i][j],
- i, ctx_prio);
+ ret = amdgpu_ctx_set_entity_priority(ctx,
+ ctx->entities[i][j],
+ i, ctx_prio);
+ if (ret)
+ return ret;
}
}
+
+ return 0;
}
int amdgpu_ctx_wait_prev_fence(struct amdgpu_ctx *ctx,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h
index 85376baaa92f..835661515e33 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h
@@ -82,7 +82,7 @@ struct dma_fence *amdgpu_ctx_get_fence(struct amdgpu_ctx *ctx,
struct drm_sched_entity *entity,
uint64_t seq);
bool amdgpu_ctx_priority_is_valid(int32_t ctx_prio);
-void amdgpu_ctx_priority_override(struct amdgpu_ctx *ctx, int32_t ctx_prio);
+int amdgpu_ctx_priority_override(struct amdgpu_ctx *ctx, int32_t ctx_prio);
int amdgpu_ctx_ioctl(struct drm_device *dev, void *data,
struct drm_file *filp);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
index 863b2a34b2d6..944edb7f00a2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
@@ -54,12 +54,15 @@ static int amdgpu_sched_process_priority_override(struct amdgpu_device *adev,
mgr = &fpriv->ctx_mgr;
mutex_lock(&mgr->lock);
- idr_for_each_entry(&mgr->ctx_handles, ctx, id)
- amdgpu_ctx_priority_override(ctx, priority);
+ idr_for_each_entry(&mgr->ctx_handles, ctx, id) {
+ r = amdgpu_ctx_priority_override(ctx, priority);
+ if (r)
+ break;
+ }
mutex_unlock(&mgr->lock);
fdput(f);
- return 0;
+ return r;
}
static int amdgpu_sched_context_priority_override(struct amdgpu_device *adev,
@@ -88,11 +91,11 @@ static int amdgpu_sched_context_priority_override(struct amdgpu_device *adev,
return -EINVAL;
}
- amdgpu_ctx_priority_override(ctx, priority);
+ r = amdgpu_ctx_priority_override(ctx, priority);
amdgpu_ctx_put(ctx);
fdput(f);
- return 0;
+ return r;
}
int amdgpu_sched_ioctl(struct drm_device *dev, void *data,
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
index 81fb99729f37..2453decc73c7 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
@@ -1362,8 +1362,7 @@ static int vcn_v4_0_5_limit_sched(struct amdgpu_cs_parser *p,
scheds = p->adev->gpu_sched[AMDGPU_HW_IP_VCN_ENC]
[AMDGPU_RING_PRIO_0].sched;
- drm_sched_entity_modify_sched(job->base.entity, scheds, 1);
- return 0;
+ return drm_sched_entity_modify_sched(job->base.entity, scheds, 1);
}
static int vcn_v4_0_5_dec_msg(struct amdgpu_cs_parser *p, struct amdgpu_job *job,
diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
index 58c8161289fe..cb5cc65f461d 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -45,7 +45,12 @@
* @guilty: atomic_t set to 1 when a job on this queue
* is found to be guilty causing a timeout
*
- * Note that the &sched_list must have at least one element to schedule the entity.
+ * Note that the &sched_list must have at least one element to schedule the
+ * entity.
+ *
+ * The individual drm_gpu_scheduler pointers have borrow semantics, i.e.
+ * they must remain valid during the entity's lifetime, while the containing
+ * array can be freed after this call returns.
*
* For changing @priority later on at runtime see
* drm_sched_entity_set_priority(). For changing the set of schedulers
@@ -69,27 +74,24 @@ int drm_sched_entity_init(struct drm_sched_entity *entity,
INIT_LIST_HEAD(&entity->list);
entity->rq = NULL;
entity->guilty = guilty;
- entity->num_sched_list = num_sched_list;
entity->priority = priority;
- /*
- * It's perfectly valid to initialize an entity without having a valid
- * scheduler attached. It's just not valid to use the scheduler before it
- * is initialized itself.
- */
- entity->sched_list = num_sched_list > 1 ? sched_list : NULL;
RCU_INIT_POINTER(entity->last_scheduled, NULL);
RB_CLEAR_NODE(&entity->rb_tree_node);
- if (num_sched_list && !sched_list[0]->sched_rq) {
- /* Since every entry covered by num_sched_list
- * should be non-NULL and therefore we warn drivers
- * not to do this and to fix their DRM calling order.
- */
- pr_warn("%s: called with uninitialized scheduler\n", __func__);
- } else if (num_sched_list) {
- /* The "priority" of an entity cannot exceed the number of run-queues of a
- * scheduler. Protect against num_rqs being 0, by converting to signed. Choose
- * the lowest priority available.
+ if (num_sched_list) {
+ int ret;
+
+ ret = drm_sched_entity_modify_sched(entity,
+ sched_list,
+ num_sched_list);
+ if (ret)
+ return ret;
+
+ /*
+ * The "priority" of an entity cannot exceed the number of
+ * run-queues of a scheduler. Protect against num_rqs being 0,
+ * by converting to signed. Choose the lowest priority
+ * available.
*/
if (entity->priority >= sched_list[0]->num_rqs) {
drm_err(sched_list[0], "entity with out-of-bounds priority:%u num_rqs:%u\n",
@@ -122,19 +124,58 @@ EXPORT_SYMBOL(drm_sched_entity_init);
* existing entity->sched_list
* @num_sched_list: number of drm sched in sched_list
*
+ * The individual drm_gpu_scheduler pointers have borrow semantics, i.e.
+ * they must remain valid during the entity's lifetime, while the containing
+ * array can be freed after this call returns.
+ *
* Note that this must be called under the same common lock for @entity as
* drm_sched_job_arm() and drm_sched_entity_push_job(), or the driver needs to
* guarantee through some other means that this is never called while new jobs
* can be pushed to @entity.
+ *
+ * Returns zero on success and a negative error code on failure.
*/
-void drm_sched_entity_modify_sched(struct drm_sched_entity *entity,
- struct drm_gpu_scheduler **sched_list,
- unsigned int num_sched_list)
+int drm_sched_entity_modify_sched(struct drm_sched_entity *entity,
+ struct drm_gpu_scheduler **sched_list,
+ unsigned int num_sched_list)
{
- WARN_ON(!num_sched_list || !sched_list);
+ struct drm_gpu_scheduler **new, **old;
+ unsigned int i;
- entity->sched_list = sched_list;
+ if (!(entity && sched_list && (num_sched_list == 0 || sched_list[0])))
+ return -EINVAL;
+
+ /*
+ * It's perfectly valid to initialize an entity without having a valid
+ * scheduler attached. It's just not valid to use the scheduler before
+ * it is initialized itself.
+ *
+ * Since every entry covered by num_sched_list should be non-NULL and
+ * therefore we warn drivers not to do this and to fix their DRM calling
+ * order.
+ */
+ for (i = 0; i < num_sched_list; i++) {
+ if (!sched_list[i]) {
+ pr_warn("%s: called with uninitialized scheduler %u!\n",
+ __func__, i);
+ return -EINVAL;
+ }
+ }
+
+ new = kmemdup_array(sched_list,
+ num_sched_list,
+ sizeof(*sched_list),
+ GFP_KERNEL);
+ if (!new)
+ return -ENOMEM;
+
+ old = entity->sched_list;
+ entity->sched_list = new;
entity->num_sched_list = num_sched_list;
+
+ kfree(old);
+
+ return 0;
}
EXPORT_SYMBOL(drm_sched_entity_modify_sched);
@@ -341,6 +382,8 @@ void drm_sched_entity_fini(struct drm_sched_entity *entity)
dma_fence_put(rcu_dereference_check(entity->last_scheduled, true));
RCU_INIT_POINTER(entity->last_scheduled, NULL);
+
+ kfree(entity->sched_list);
}
EXPORT_SYMBOL(drm_sched_entity_fini);
@@ -531,10 +574,6 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity)
struct drm_gpu_scheduler *sched;
struct drm_sched_rq *rq;
- /* single possible engine and already selected */
- if (!entity->sched_list)
- return;
-
/* queue non-empty, stay on the same engine */
if (spsc_queue_count(&entity->job_queue))
return;
@@ -561,9 +600,6 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity)
entity->rq = rq;
}
spin_unlock(&entity->rq_lock);
-
- if (entity->num_sched_list == 1)
- entity->sched_list = NULL;
}
/**
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 5acc64954a88..09e1d063a5c0 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -110,18 +110,12 @@ struct drm_sched_entity {
/**
* @sched_list:
*
- * A list of schedulers (struct drm_gpu_scheduler). Jobs from this entity can
- * be scheduled on any scheduler on this list.
+ * A list of schedulers (struct drm_gpu_scheduler). Jobs from this
+ * entity can be scheduled on any scheduler on this list.
*
* This can be modified by calling drm_sched_entity_modify_sched().
* Locking is entirely up to the driver, see the above function for more
* details.
- *
- * This will be set to NULL if &num_sched_list equals 1 and @rq has been
- * set already.
- *
- * FIXME: This means priority changes through
- * drm_sched_entity_set_priority() will be lost henceforth in this case.
*/
struct drm_gpu_scheduler **sched_list;
@@ -568,9 +562,9 @@ int drm_sched_job_add_implicit_dependencies(struct drm_sched_job *job,
bool write);
-void drm_sched_entity_modify_sched(struct drm_sched_entity *entity,
- struct drm_gpu_scheduler **sched_list,
- unsigned int num_sched_list);
+int drm_sched_entity_modify_sched(struct drm_sched_entity *entity,
+ struct drm_gpu_scheduler **sched_list,
+ unsigned int num_sched_list);
void drm_sched_tdr_queue_imm(struct drm_gpu_scheduler *sched);
void drm_sched_job_cleanup(struct drm_sched_job *job);
--
2.44.0
On the off chance that the clock value ends up being too high (by means
of skl_ddi_calculate_wrpll() having been called with a big enough
value of crtc_state->port_clock * 1000), one possible consequence
is that the result will not fit into a signed int.
Fix this, albeit unlikely, issue by first casting one of the operands
to u32, then to u64, thus avoiding an integer overflow.
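For illustration (hypothetical numbers, not taken from a report): with
crtc_state->port_clock = 594000 kHz the function is called with
clock = 594,000,000 Hz, and clock * 5 = 2,970,000,000 exceeds INT_MAX
(2,147,483,647), so the multiplication overflows in 32-bit signed
arithmetic before the result is assigned to the u64. With the cast the
multiplication is carried out in 64 bits:

	u64 afe_clock = (u64)(u32)clock * 5; /* 2,970,000,000 fits easily in a u64 */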
Found by Linux Verification Center (linuxtesting.org) with static
analysis tool SVACE.
Fixes: fe70b262e781 ("drm/i915: Move a bunch of stuff into rodata from the stack")
Cc: stable(a)vger.kernel.org
Signed-off-by: Nikita Zhandarovich <n.zhandarovich(a)fintech.ru>
---
Fixes: tag is not entirely correct, as I can't properly identify the
origin with all the code movement. I opted for using the most recent
topical commit instead.
drivers/gpu/drm/i915/display/intel_dpll_mgr.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/i915/display/intel_dpll_mgr.c b/drivers/gpu/drm/i915/display/intel_dpll_mgr.c
index 90998b037349..46d4dac6c491 100644
--- a/drivers/gpu/drm/i915/display/intel_dpll_mgr.c
+++ b/drivers/gpu/drm/i915/display/intel_dpll_mgr.c
@@ -1683,7 +1683,7 @@ skl_ddi_calculate_wrpll(int clock /* in Hz */,
};
unsigned int dco, d, i;
unsigned int p0, p1, p2;
- u64 afe_clock = clock * 5; /* AFE Clock is 5x Pixel clock */
+ u64 afe_clock = (u64)(u32)clock * 5; /* AFE Clock is 5x Pixel clock */
for (d = 0; d < ARRAY_SIZE(dividers); d++) {
for (dco = 0; dco < ARRAY_SIZE(dco_central_freq); dco++) {
From: Chris Wulff <crwulff(a)gmail.com>
Make sure the descriptor has been set before looking at maxpacket.
This fixes a NULL pointer panic in that case.
This may happen if the gadget doesn't properly set up the endpoint
for the current speed, or if the gadget descriptors are malformed and
the descriptor for the speed/endpoint is not found.
No current gadget driver is known to have this problem, but this
may cause a hard-to-find bug during development of new gadgets.
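A hypothetical sequence that would hit this path (not taken from any
in-tree gadget) is enabling an endpoint before config_ep_by_speed() has
assigned ep->desc:

	/* config_ep_by_speed(gadget, f, ep) missing or failed, so ep->desc == NULL */
	ret = usb_ep_enable(ep);
	/* before: NULL dereference in usb_endpoint_maxp();
	 * after this patch: WARN_ONCE() and -EINVAL
	 */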
Fixes: 54f83b8c8ea9 ("USB: gadget: Reject endpoints with 0 maxpacket value")
Cc: stable(a)vger.kernel.org
Signed-off-by: Chris Wulff <crwulff(a)gmail.com>
---
v2: Added WARN_ONCE message & clarification on causes
v1: https://lore.kernel.org/all/20240721192048.3530097-2-crwulff@gmail.com/
---
drivers/usb/gadget/udc/core.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
index 2dfae7a17b3f..81f9140f3681 100644
--- a/drivers/usb/gadget/udc/core.c
+++ b/drivers/usb/gadget/udc/core.c
@@ -118,12 +118,10 @@ int usb_ep_enable(struct usb_ep *ep)
goto out;
/* UDC drivers can't handle endpoints with maxpacket size 0 */
- if (usb_endpoint_maxp(ep->desc) == 0) {
- /*
- * We should log an error message here, but we can't call
- * dev_err() because there's no way to find the gadget
- * given only ep.
- */
+ if (!ep->desc || usb_endpoint_maxp(ep->desc) == 0) {
+ WARN_ONCE(1, "%s: ep%d (%s) has %s\n", __func__, ep->address, ep->name,
+ (!ep->desc) ? "NULL descriptor" : "maxpacket 0");
+
ret = -EINVAL;
goto out;
}
--
2.43.0
From: Xiubo Li <xiubli(a)redhat.com>
If a client sends out a cap update dropping caps with the prior 'seq'
just before an incoming cap revoke request, then the client may drop
the revoke because it believes it's already released the requested
capabilities.
This causes the MDS to wait indefinitely for the client to respond
to the revoke. It's therefore always a good idea to ack the cap
revoke request with the bumped up 'seq'.
Currently, if cap->issued equals the newcaps, check_caps() will do
nothing, so we should force flush the caps.
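The effect of the new flag is roughly the following (a sketch mirroring
the handle_cap_grant() hunk below):

	if (!revoke_wait && le32_to_cpu(grant->op) == CEPH_CAP_OP_REVOKE)
		flags |= CHECK_CAPS_FLUSH_FORCE;
	...
	/* later, with the flag set, check_caps() jumps straight to ack */
	ceph_check_caps(ci, flags | CHECK_CAPS_AUTHONLY | CHECK_CAPS_NOINVAL);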
Cc: stable(a)vger.kernel.org
Link: https://tracker.ceph.com/issues/61782
Signed-off-by: Xiubo Li <xiubli(a)redhat.com>
---
V3:
- Move the force check earlier
V2:
- Improved the patch to force send the cap update only when no caps
being used.
fs/ceph/caps.c | 35 ++++++++++++++++++++++++-----------
fs/ceph/super.h | 7 ++++---
2 files changed, 28 insertions(+), 14 deletions(-)
diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
index 24c31f795938..672c6611d749 100644
--- a/fs/ceph/caps.c
+++ b/fs/ceph/caps.c
@@ -2024,6 +2024,8 @@ bool __ceph_should_report_size(struct ceph_inode_info *ci)
* CHECK_CAPS_AUTHONLY - we should only check the auth cap
* CHECK_CAPS_FLUSH - we should flush any dirty caps immediately, without
* further delay.
+ * CHECK_CAPS_FLUSH_FORCE - we should flush any caps immediately, without
+ * further delay.
*/
void ceph_check_caps(struct ceph_inode_info *ci, int flags)
{
@@ -2105,7 +2107,7 @@ void ceph_check_caps(struct ceph_inode_info *ci, int flags)
}
doutc(cl, "%p %llx.%llx file_want %s used %s dirty %s "
- "flushing %s issued %s revoking %s retain %s %s%s%s\n",
+ "flushing %s issued %s revoking %s retain %s %s%s%s%s\n",
inode, ceph_vinop(inode), ceph_cap_string(file_wanted),
ceph_cap_string(used), ceph_cap_string(ci->i_dirty_caps),
ceph_cap_string(ci->i_flushing_caps),
@@ -2113,7 +2115,8 @@ void ceph_check_caps(struct ceph_inode_info *ci, int flags)
ceph_cap_string(retain),
(flags & CHECK_CAPS_AUTHONLY) ? " AUTHONLY" : "",
(flags & CHECK_CAPS_FLUSH) ? " FLUSH" : "",
- (flags & CHECK_CAPS_NOINVAL) ? " NOINVAL" : "");
+ (flags & CHECK_CAPS_NOINVAL) ? " NOINVAL" : "",
+ (flags & CHECK_CAPS_FLUSH_FORCE) ? " FLUSH_FORCE" : "");
/*
* If we no longer need to hold onto old our caps, and we may
@@ -2188,6 +2191,11 @@ void ceph_check_caps(struct ceph_inode_info *ci, int flags)
queue_writeback = true;
}
+ if (flags & CHECK_CAPS_FLUSH_FORCE) {
+ doutc(cl, "force to flush caps\n");
+ goto ack;
+ }
+
if (cap == ci->i_auth_cap &&
(cap->issued & CEPH_CAP_FILE_WR)) {
/* request larger max_size from MDS? */
@@ -3518,6 +3526,8 @@ static void handle_cap_grant(struct inode *inode,
bool queue_invalidate = false;
bool deleted_inode = false;
bool fill_inline = false;
+ bool revoke_wait = false;
+ int flags = 0;
/*
* If there is at least one crypto block then we'll trust
@@ -3713,16 +3723,18 @@ static void handle_cap_grant(struct inode *inode,
ceph_cap_string(cap->issued), ceph_cap_string(newcaps),
ceph_cap_string(revoking));
if (S_ISREG(inode->i_mode) &&
- (revoking & used & CEPH_CAP_FILE_BUFFER))
+ (revoking & used & CEPH_CAP_FILE_BUFFER)) {
writeback = true; /* initiate writeback; will delay ack */
- else if (queue_invalidate &&
+ revoke_wait = true;
+ } else if (queue_invalidate &&
revoking == CEPH_CAP_FILE_CACHE &&
- (newcaps & CEPH_CAP_FILE_LAZYIO) == 0)
- ; /* do nothing yet, invalidation will be queued */
- else if (cap == ci->i_auth_cap)
+ (newcaps & CEPH_CAP_FILE_LAZYIO) == 0) {
+ revoke_wait = true; /* do nothing yet, invalidation will be queued */
+ } else if (cap == ci->i_auth_cap) {
check_caps = 1; /* check auth cap only */
- else
+ } else {
check_caps = 2; /* check all caps */
+ }
/* If there is new caps, try to wake up the waiters */
if (~cap->issued & newcaps)
wake = true;
@@ -3749,8 +3761,9 @@ static void handle_cap_grant(struct inode *inode,
BUG_ON(cap->issued & ~cap->implemented);
/* don't let check_caps skip sending a response to MDS for revoke msgs */
- if (le32_to_cpu(grant->op) == CEPH_CAP_OP_REVOKE) {
+ if (!revoke_wait && le32_to_cpu(grant->op) == CEPH_CAP_OP_REVOKE) {
cap->mds_wanted = 0;
+ flags |= CHECK_CAPS_FLUSH_FORCE;
if (cap == ci->i_auth_cap)
check_caps = 1; /* check auth cap only */
else
@@ -3806,9 +3819,9 @@ static void handle_cap_grant(struct inode *inode,
mutex_unlock(&session->s_mutex);
if (check_caps == 1)
- ceph_check_caps(ci, CHECK_CAPS_AUTHONLY | CHECK_CAPS_NOINVAL);
+ ceph_check_caps(ci, flags | CHECK_CAPS_AUTHONLY | CHECK_CAPS_NOINVAL);
else if (check_caps == 2)
- ceph_check_caps(ci, CHECK_CAPS_NOINVAL);
+ ceph_check_caps(ci, flags | CHECK_CAPS_NOINVAL);
}
/*
diff --git a/fs/ceph/super.h b/fs/ceph/super.h
index b0b368ed3018..831e8ec4d5da 100644
--- a/fs/ceph/super.h
+++ b/fs/ceph/super.h
@@ -200,9 +200,10 @@ struct ceph_cap {
struct list_head caps_item;
};
-#define CHECK_CAPS_AUTHONLY 1 /* only check auth cap */
-#define CHECK_CAPS_FLUSH 2 /* flush any dirty caps */
-#define CHECK_CAPS_NOINVAL 4 /* don't invalidate pagecache */
+#define CHECK_CAPS_AUTHONLY 1 /* only check auth cap */
+#define CHECK_CAPS_FLUSH 2 /* flush any dirty caps */
+#define CHECK_CAPS_NOINVAL 4 /* don't invalidate pagecache */
+#define CHECK_CAPS_FLUSH_FORCE 8 /* force flush any caps */
struct ceph_cap_flush {
u64 tid;
--
2.45.1
The patch titled
Subject: selftests: mm: add s390 to ARCH check
has been added to the -mm mm-hotfixes-unstable branch. Its filename is
selftests-mm-add-s390-to-arch-check.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patche…
This patch will later appear in the mm-hotfixes-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: Nico Pache <npache(a)redhat.com>
Subject: selftests: mm: add s390 to ARCH check
Date: Wed, 24 Jul 2024 15:35:17 -0600
commit 0518dbe97fe6 ("selftests/mm: fix cross compilation with LLVM")
changed the env variable for the architecture from MACHINE to ARCH.
This prevents 3 required TEST_GEN_FILES from being included when
cross compiling for s390x and causes errors when trying to run the test
suite. This is due to the ARCH variable already being set and the arch
folder name being s390.
Add "s390" to the filtered list to cover this case and have the 3 files
included in the build.
Link: https://lkml.kernel.org/r/20240724213517.23918-1-npache@redhat.com
Fixes: 0518dbe97fe6 ("selftests/mm: fix cross compilation with LLVM")
Signed-off-by: Nico Pache <npache(a)redhat.com>
Cc: Mark Brown <broonie(a)kernel.org>
Cc: Albert Ou <aou(a)eecs.berkeley.edu>
Cc: Palmer Dabbelt <palmer(a)dabbelt.com>
Cc: Paul Walmsley <paul.walmsley(a)sifive.com>
Cc: Shuah Khan <shuah(a)kernel.org>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
tools/testing/selftests/mm/Makefile | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/tools/testing/selftests/mm/Makefile~selftests-mm-add-s390-to-arch-check
+++ a/tools/testing/selftests/mm/Makefile
@@ -110,7 +110,7 @@ endif
endif
-ifneq (,$(filter $(ARCH),arm64 ia64 mips64 parisc64 powerpc riscv64 s390x sparc64 x86_64))
+ifneq (,$(filter $(ARCH),arm64 ia64 mips64 parisc64 powerpc riscv64 s390x sparc64 x86_64 s390))
TEST_GEN_FILES += va_high_addr_switch
TEST_GEN_FILES += virtual_address_range
TEST_GEN_FILES += write_to_hugetlbfs
_
Patches currently in -mm which might be from npache(a)redhat.com are
selftests-mm-add-s390-to-arch-check.patch
The patch titled
Subject: crash: fix x86_32 crash memory reserve dead loop bug
has been added to the -mm mm-nonmm-unstable branch. Its filename is
crash-fix-x86_32-crash-memory-reserve-dead-loop-bug.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patche…
This patch will later appear in the mm-nonmm-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
------------------------------------------------------
From: Jinjie Ruan <ruanjinjie(a)huawei.com>
Subject: crash: fix x86_32 crash memory reserve dead loop bug
Date: Thu, 18 Jul 2024 11:54:42 +0800
Patch series "crash: Fix x86_32 memory reserve dead loop bug", v3.
Fix two bugs for x86_32 crash memory reserve, and prepare to apply generic
crashkernel reservation to 32bit system. Then use generic interface to
simplify crashkernel reservation for ARM32.
This patch (of 3):
On an x86_32 Qemu machine with 1GB of memory, the cmdline "crashkernel=1G,high"
causes the system to stall as below:
ACPI: Reserving FACP table memory at [mem 0x3ffe18b8-0x3ffe192b]
ACPI: Reserving DSDT table memory at [mem 0x3ffe0040-0x3ffe18b7]
ACPI: Reserving FACS table memory at [mem 0x3ffe0000-0x3ffe003f]
ACPI: Reserving APIC table memory at [mem 0x3ffe192c-0x3ffe19bb]
ACPI: Reserving HPET table memory at [mem 0x3ffe19bc-0x3ffe19f3]
ACPI: Reserving WAET table memory at [mem 0x3ffe19f4-0x3ffe1a1b]
143MB HIGHMEM available.
879MB LOWMEM available.
mapped low ram: 0 - 36ffe000
low ram: 0 - 36ffe000
(stall here)
The reason is that CRASH_ADDR_LOW_MAX is equal to CRASH_ADDR_HIGH_MAX
on x86_32, so the first high crash kernel memory reservation will fail,
after which the code goes into the "retry" loop and never comes out, as below.
-> reserve_crashkernel_generic() and high is true
-> alloc at [CRASH_ADDR_LOW_MAX, CRASH_ADDR_HIGH_MAX] fail
-> alloc at [0, CRASH_ADDR_LOW_MAX] fail and repeatedly
(because CRASH_ADDR_LOW_MAX = CRASH_ADDR_HIGH_MAX).
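A simplified sketch of the retry logic in reserve_crashkernel_generic()
that produces the stall (paraphrased, not the exact upstream code):

	search_base = CRASH_ADDR_LOW_MAX;
	search_end = CRASH_ADDR_HIGH_MAX;
retry:
	crash_base = memblock_phys_alloc_range(crash_size, CRASH_ALIGN,
					       search_base, search_end);
	if (!crash_base && high && search_end == CRASH_ADDR_HIGH_MAX) {
		/*
		 * Fall back to [0, CRASH_ADDR_LOW_MAX] -- but on x86_32
		 * LOW_MAX == HIGH_MAX, so the condition stays true and the
		 * loop never terminates.
		 */
		search_end = CRASH_ADDR_LOW_MAX;
		search_base = 0;
		goto retry;
	}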
Fix it by preventing crashkernel=,high from being parsed successfully on
32-bit systems, using an architecture-defined macro.
After this patch, 'crashkernel=,high' on a 32-bit system can't succeed
and never reaches reserve_crashkernel_generic(), therefore the issue on
x86_32 is solved.
Link: https://lkml.kernel.org/r/20240718035444.2977105-1-ruanjinjie@huawei.com
Link: https://lkml.kernel.org/r/20240718035444.2977105-2-ruanjinjie@huawei.com
Fixes: 9c08a2a139fe ("x86: kdump: use generic interface to simplify crashkernel reservation code")
Signed-off-by: Jinjie Ruan <ruanjinjie(a)huawei.com>
Signed-off-by: Baoquan He <bhe(a)redhat.com>
Tested-by: Jinjie Ruan <ruanjinjie(a)huawei.com>
Cc: Albert Ou <aou(a)eecs.berkeley.edu>
Cc: Andrew Davis <afd(a)ti.com>
Cc: Arnd Bergmann <arnd(a)arndb.de>
Cc: Borislav Petkov <bp(a)alien8.de>
Cc: Catalin Marinas <catalin.marinas(a)arm.com>
Cc: Chen Jiahao <chenjiahao16(a)huawei.com>
Cc: Dave Hansen <dave.hansen(a)linux.intel.com>
Cc: Dave Young <dyoung(a)redhat.com>
Cc: Eric DeVolder <eric.devolder(a)oracle.com>
Cc: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Cc: Hari Bathini <hbathini(a)linux.ibm.com>
Cc: Helge Deller <deller(a)gmx.de>
Cc: "H. Peter Anvin" <hpa(a)zytor.com>
Cc: Ingo Molnar <mingo(a)redhat.com>
Cc: Javier Martinez Canillas <javierm(a)redhat.com>
Cc: Linus Walleij <linus.walleij(a)linaro.org>
Cc: Palmer Dabbelt <palmer(a)dabbelt.com>
Cc: Paul Walmsley <paul.walmsley(a)sifive.com>
Cc: Rob Herring <robh(a)kernel.org>
Cc: Russell King <linux(a)armlinux.org.uk>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Vivek Goyal <vgoyal(a)redhat.com>
Cc: Will Deacon <will(a)kernel.org>
Cc: Zhen Lei <thunder.leizhen(a)huawei.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
arch/arm64/include/asm/crash_reserve.h | 2 ++
arch/riscv/include/asm/crash_reserve.h | 2 ++
arch/x86/include/asm/crash_reserve.h | 1 +
kernel/crash_reserve.c | 2 +-
4 files changed, 6 insertions(+), 1 deletion(-)
--- a/arch/arm64/include/asm/crash_reserve.h~crash-fix-x86_32-crash-memory-reserve-dead-loop-bug
+++ a/arch/arm64/include/asm/crash_reserve.h
@@ -7,4 +7,6 @@
#define CRASH_ADDR_LOW_MAX arm64_dma_phys_limit
#define CRASH_ADDR_HIGH_MAX (PHYS_MASK + 1)
+
+#define HAVE_ARCH_CRASHKERNEL_RESERVATION_HIGH
#endif
--- a/arch/riscv/include/asm/crash_reserve.h~crash-fix-x86_32-crash-memory-reserve-dead-loop-bug
+++ a/arch/riscv/include/asm/crash_reserve.h
@@ -7,5 +7,7 @@
#define CRASH_ADDR_LOW_MAX dma32_phys_limit
#define CRASH_ADDR_HIGH_MAX memblock_end_of_DRAM()
+#define HAVE_ARCH_CRASHKERNEL_RESERVATION_HIGH
+
extern phys_addr_t memblock_end_of_DRAM(void);
#endif
--- a/arch/x86/include/asm/crash_reserve.h~crash-fix-x86_32-crash-memory-reserve-dead-loop-bug
+++ a/arch/x86/include/asm/crash_reserve.h
@@ -26,6 +26,7 @@ extern unsigned long swiotlb_size_or_def
#else
# define CRASH_ADDR_LOW_MAX SZ_4G
# define CRASH_ADDR_HIGH_MAX SZ_64T
+#define HAVE_ARCH_CRASHKERNEL_RESERVATION_HIGH
#endif
# define DEFAULT_CRASH_KERNEL_LOW_SIZE crash_low_size_default()
--- a/kernel/crash_reserve.c~crash-fix-x86_32-crash-memory-reserve-dead-loop-bug
+++ a/kernel/crash_reserve.c
@@ -305,7 +305,7 @@ int __init parse_crashkernel(char *cmdli
/* crashkernel=X[@offset] */
ret = __parse_crashkernel(cmdline, system_ram, crash_size,
crash_base, NULL);
-#ifdef CONFIG_ARCH_HAS_GENERIC_CRASHKERNEL_RESERVATION
+#ifdef HAVE_ARCH_CRASHKERNEL_RESERVATION_HIGH
/*
* If non-NULL 'high' passed in and no normal crashkernel
* setting detected, try parsing crashkernel=,high|low.
_
Patches currently in -mm which might be from ruanjinjie(a)huawei.com are
crash-fix-x86_32-crash-memory-reserve-dead-loop-bug.patch
crash-fix-x86_32-crash-memory-reserve-dead-loop-bug-at-high.patch
arm-use-generic-interface-to-simplify-crashkernel-reservation.patch
The patch titled
Subject: mm: let pte_lockptr() consume a pte_t pointer
has been added to the -mm mm-hotfixes-unstable branch. Its filename is
mm-let-pte_lockptr-consume-a-pte_t-pointer.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patche…
This patch will later appear in the mm-hotfixes-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
------------------------------------------------------
From: David Hildenbrand <david(a)redhat.com>
Subject: mm: let pte_lockptr() consume a pte_t pointer
Date: Thu, 25 Jul 2024 20:39:54 +0200
Patch series "mm/hugetlb: fix hugetlb vs. core-mm PT locking".
Working on another generic page table walker that tries to avoid
special-casing hugetlb, I found a page table locking issue with hugetlb
folios that are not mapped using a single PMD/PUD.
For some hugetlb folio sizes, GUP will take different page table locks
when walking the page tables than hugetlb when modifying the page tables.
I did not actually try reproducing an issue, but looking at
follow_pmd_mask(), where we might be rereading a PMD value multiple times,
it's rather clear that concurrent modifications are rather unpleasant.
In follow_page_pte() we might be better off in that regard -- ptep_get()
does a READ_ONCE() -- but who knows what else could happen concurrently in
some weird corner cases (e.g., a hugetlb folio getting unmapped and freed).
This patch (of 2):
pte_lockptr() is the only *_lockptr() function that doesn't consume what
would be expected: it consumes a pmd_t pointer instead of a pte_t pointer.
Let's change that. The two callers in pgtable-generic.c are easily
adjusted. Adjust khugepaged.c:retract_page_tables() to simply do a
pte_offset_map_nolock() to obtain the lock, even though we won't actually
be traversing the page table.
This makes the code more similar to the other variants and avoids other
hacks to make the new pte_lockptr() version happy. pte_lockptr() users
reside now only in pgtable-generic.c.
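For illustration, the caller pattern described above ends up roughly like
this (a sketch condensed from the retract_page_tables() hunk below):

	pml = pmd_lock(mm, pmd);
	pte = pte_offset_map_nolock(mm, pmd, addr, &ptl); /* ptl comes from the PTE page now */
	if (!pte)
		goto unlock_pmd;
	if (ptl != pml)
		spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
	pte_unmap(pte); /* only the lock was needed, not the mapping */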
Maybe, using pte_offset_map_nolock() is the right thing to do because the
PTE table could have been removed in the meantime? At least it sounds
more future proof if we ever have other means of page table reclaim.
It's not quite clear if holding the PTE table lock is really required:
what if someone else obtains the lock just after we unlock it? But we'll
leave that as is for now, maybe there are good reasons.
This is a preparation for adapting hugetlb page table locking logic to
take the same locks as core-mm page table walkers would.
Link: https://lkml.kernel.org/r/20240725183955.2268884-1-david@redhat.com
Link: https://lkml.kernel.org/r/20240725183955.2268884-2-david@redhat.com
Fixes: 9cb28da54643 ("mm/gup: handle hugetlb in the generic follow_page_mask code")
Signed-off-by: David Hildenbrand <david(a)redhat.com>
Cc: Muchun Song <muchun.song(a)linux.dev>
Cc: Oscar Salvador <osalvador(a)suse.de>
Cc: Peter Xu <peterx(a)redhat.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
include/linux/mm.h | 7 ++++---
mm/khugepaged.c | 21 +++++++++++++++------
mm/pgtable-generic.c | 4 ++--
3 files changed, 21 insertions(+), 11 deletions(-)
--- a/include/linux/mm.h~mm-let-pte_lockptr-consume-a-pte_t-pointer
+++ a/include/linux/mm.h
@@ -2915,9 +2915,10 @@ static inline spinlock_t *ptlock_ptr(str
}
#endif /* ALLOC_SPLIT_PTLOCKS */
-static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
+static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pte_t *pte)
{
- return ptlock_ptr(page_ptdesc(pmd_page(*pmd)));
+ /* PTE page tables don't currently exceed a single page. */
+ return ptlock_ptr(virt_to_ptdesc(pte));
}
static inline bool ptlock_init(struct ptdesc *ptdesc)
@@ -2940,7 +2941,7 @@ static inline bool ptlock_init(struct pt
/*
* We use mm->page_table_lock to guard all pagetable pages of the mm.
*/
-static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
+static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pte_t *pte)
{
return &mm->page_table_lock;
}
--- a/mm/khugepaged.c~mm-let-pte_lockptr-consume-a-pte_t-pointer
+++ a/mm/khugepaged.c
@@ -1697,12 +1697,13 @@ static void retract_page_tables(struct a
i_mmap_lock_read(mapping);
vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
struct mmu_notifier_range range;
+ bool retracted = false;
struct mm_struct *mm;
unsigned long addr;
pmd_t *pmd, pgt_pmd;
spinlock_t *pml;
spinlock_t *ptl;
- bool skipped_uffd = false;
+ pte_t *pte;
/*
* Check vma->anon_vma to exclude MAP_PRIVATE mappings that
@@ -1739,9 +1740,17 @@ static void retract_page_tables(struct a
mmu_notifier_invalidate_range_start(&range);
pml = pmd_lock(mm, pmd);
- ptl = pte_lockptr(mm, pmd);
+
+ /*
+ * No need to check the PTE table content, but we'll grab the
+ * PTE table lock while we zap it.
+ */
+ pte = pte_offset_map_nolock(mm, pmd, addr, &ptl);
+ if (!pte)
+ goto unlock_pmd;
if (ptl != pml)
spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
+ pte_unmap(pte);
/*
* Huge page lock is still held, so normally the page table
@@ -1752,20 +1761,20 @@ static void retract_page_tables(struct a
* repeating the anon_vma check protects from one category,
* and repeating the userfaultfd_wp() check from another.
*/
- if (unlikely(vma->anon_vma || userfaultfd_wp(vma))) {
- skipped_uffd = true;
- } else {
+ if (likely(!vma->anon_vma && !userfaultfd_wp(vma))) {
pgt_pmd = pmdp_collapse_flush(vma, addr, pmd);
pmdp_get_lockless_sync();
+ retracted = true;
}
if (ptl != pml)
spin_unlock(ptl);
+unlock_pmd:
spin_unlock(pml);
mmu_notifier_invalidate_range_end(&range);
- if (!skipped_uffd) {
+ if (retracted) {
mm_dec_nr_ptes(mm);
page_table_check_pte_clear_range(mm, addr, pgt_pmd);
pte_free_defer(mm, pmd_pgtable(pgt_pmd));
--- a/mm/pgtable-generic.c~mm-let-pte_lockptr-consume-a-pte_t-pointer
+++ a/mm/pgtable-generic.c
@@ -313,7 +313,7 @@ pte_t *pte_offset_map_nolock(struct mm_s
pte = __pte_offset_map(pmd, addr, &pmdval);
if (likely(pte))
- *ptlp = pte_lockptr(mm, &pmdval);
+ *ptlp = pte_lockptr(mm, pte);
return pte;
}
@@ -371,7 +371,7 @@ again:
pte = __pte_offset_map(pmd, addr, &pmdval);
if (unlikely(!pte))
return pte;
- ptl = pte_lockptr(mm, &pmdval);
+ ptl = pte_lockptr(mm, pte);
spin_lock(ptl);
if (likely(pmd_same(pmdval, pmdp_get_lockless(pmd)))) {
*ptlp = ptl;
_
Patches currently in -mm which might be from david(a)redhat.com are
mm-let-pte_lockptr-consume-a-pte_t-pointer.patch
mm-hugetlb-fix-hugetlb-vs-core-mm-pt-locking.patch
The check_unaligned_access_emulated() function should have been called
during CPU hotplug to ensure that if all CPUs emulate unaligned
accesses, the new CPU does so as well.
This patch adds the call to check_unaligned_access_emulated() in
the hotplug path.
Fixes: 55e0bf49a0d0 ("RISC-V: Probe misaligned access speed in parallel")
Signed-off-by: Jesse Taube <jesse(a)rivosinc.com>
Cc: stable(a)vger.kernel.org
---
V5 -> V6:
- New patch
---
arch/riscv/kernel/unaligned_access_speed.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/riscv/kernel/unaligned_access_speed.c b/arch/riscv/kernel/unaligned_access_speed.c
index a9a6bcb02acf..b67db1fc3740 100644
--- a/arch/riscv/kernel/unaligned_access_speed.c
+++ b/arch/riscv/kernel/unaligned_access_speed.c
@@ -191,6 +191,7 @@ static int riscv_online_cpu(unsigned int cpu)
if (per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_UNKNOWN)
goto exit;
+ check_unaligned_access_emulated(NULL);
buf = alloc_pages(GFP_KERNEL, MISALIGNED_BUFFER_ORDER);
if (!buf) {
pr_warn("Allocation failure, not measuring misaligned performance\n");
--
2.45.2