From: Vladimir Oltean <vladimir.oltean(a)nxp.com>
[ Upstream commit 5f2b28b79d2d1946ee36ad8b3dc0066f73c90481 ]
There are actually 2 problems:
- deleting the last element doesn't require a memmove of elements
[i + 1, end) over it; element i + 1 is out of bounds there.
- the memmove itself should move size - i - 1 elements, because the last
element is out of bounds.
The out-of-bounds element remains out of bounds after being accessed, so
the problem is only that we touch it, not that it comes into active use.
But I suppose it can lead to issues if the out-of-bounds element is part
of an unmapped page.
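For illustration, here is a minimal userspace sketch of the corrected
deletion logic (hypothetical names, not the driver's code):

    #include <stdio.h>
    #include <string.h>

    /*
     * Delete element i from an array of count fixed-size entries.
     * Only the (count - i - 1) elements after i need to move down;
     * when i is the last element, nothing needs to move at all.
     */
    static void delete_entry(int *entries, int count, int i)
    {
            if (i + 1 < count)
                    memmove(&entries[i], &entries[i + 1],
                            (count - i - 1) * sizeof(*entries));
    }

    int main(void)
    {
            int a[] = { 10, 20, 30, 40 };

            delete_entry(a, 4, 1);  /* deletes 20; moves 30 and 40 down */
            delete_entry(a, 3, 2);  /* deletes the last element; no memmove */
            printf("%d %d\n", a[0], a[1]);  /* prints: 10 30 */
            return 0;
    }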
Fixes: 6666cebc5e30 ("net: dsa: sja1105: Add support for VLAN operations")
Signed-off-by: Vladimir Oltean <vladimir.oltean(a)nxp.com>
Reviewed-by: Simon Horman <horms(a)kernel.org>
Link: https://patch.msgid.link/20250318115716.2124395-4-vladimir.oltean@nxp.com
Signed-off-by: Jakub Kicinski <kuba(a)kernel.org>
Signed-off-by: Chen Yu <xnguchen(a)sina.cn>
---
drivers/net/dsa/sja1105/sja1105_static_config.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/net/dsa/sja1105/sja1105_static_config.c b/drivers/net/dsa/sja1105/sja1105_static_config.c
index baba204ad62f..2ac91fe2a79b 100644
--- a/drivers/net/dsa/sja1105/sja1105_static_config.c
+++ b/drivers/net/dsa/sja1105/sja1105_static_config.c
@@ -1921,8 +1921,10 @@ int sja1105_table_delete_entry(struct sja1105_table *table, int i)
if (i > table->entry_count)
return -ERANGE;
- memmove(entries + i * entry_size, entries + (i + 1) * entry_size,
- (table->entry_count - i) * entry_size);
+ if (i + 1 < table->entry_count) {
+ memmove(entries + i * entry_size, entries + (i + 1) * entry_size,
+ (table->entry_count - i - 1) * entry_size);
+ }
table->entry_count--;
--
2.17.1
The get_user/put_user change didn't spend time in linux-next and
seems a bit too risky to rush. I'm keeping it in my tree
and we'll get it in the next cycle.
The following changes since commit ac3fd01e4c1efce8f2c054cdeb2ddd2fc0fb150d:
Linux 6.18-rc7 (2025-11-23 14:53:16 -0800)
are available in the Git repository at:
https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git tags/for_linus
for you to fetch changes up to 205dd7a5d6ad6f4c8e8fcd3c3b95a7c0e7067fee:
virtio_pci: drop kernel.h (2025-11-30 18:02:43 -0500)
----------------------------------------------------------------
virtio,vhost: fixes, cleanups
Just a bunch of fixes and cleanups, mostly very simple. Several
features are merged through net-next this time around.
Signed-off-by: Michael S. Tsirkin <mst(a)redhat.com>
----------------------------------------------------------------
Alok Tiwari (3):
virtio_vdpa: fix misleading return in void function
vdpa/mlx5: Fix incorrect error code reporting in query_virtqueues
vdpa/pds: use %pe for ERR_PTR() in event handler registration
Kriish Sharma (1):
virtio: fix kernel-doc for mapping/free_coherent functions
Marco Crivellari (2):
virtio_balloon: add WQ_PERCPU to alloc_workqueue users
vduse: add WQ_PERCPU to alloc_workqueue users
Miaoqian Lin (1):
virtio: vdpa: Fix reference count leak in octep_sriov_enable()
Michael S. Tsirkin (11):
virtio: fix typo in virtio_device_ready() comment
virtio: fix whitespace in virtio_config_ops
virtio: fix grammar in virtio_queue_info docs
virtio: fix grammar in virtio_map_ops docs
virtio: standardize Returns documentation style
virtio: fix virtqueue_set_affinity() docs
virtio: fix map ops comment
virtio: clean up features qword/dword terms
vhost/test: add test specific macro for features
vhost: switch to arrays of feature bits
virtio_pci: drop kernel.h
Mike Christie (1):
vhost: Fix kthread worker cgroup failure handling
drivers/vdpa/mlx5/net/mlx5_vnet.c | 2 +-
drivers/vdpa/octeon_ep/octep_vdpa_main.c | 1 +
drivers/vdpa/pds/vdpa_dev.c | 2 +-
drivers/vdpa/vdpa_user/vduse_dev.c | 3 ++-
drivers/vhost/net.c | 29 +++++++++++-----------
drivers/vhost/scsi.c | 9 ++++---
drivers/vhost/test.c | 10 ++++++--
drivers/vhost/vhost.c | 4 ++-
drivers/vhost/vhost.h | 42 ++++++++++++++++++++++++++------
drivers/vhost/vsock.c | 10 +++++---
drivers/virtio/virtio.c | 12 ++++-----
drivers/virtio/virtio_balloon.c | 3 ++-
drivers/virtio/virtio_debug.c | 10 ++++----
drivers/virtio/virtio_pci_modern_dev.c | 6 ++---
drivers/virtio/virtio_ring.c | 7 +++---
drivers/virtio/virtio_vdpa.c | 2 +-
include/linux/virtio.h | 2 +-
include/linux/virtio_config.h | 24 +++++++++---------
include/linux/virtio_features.h | 29 +++++++++++-----------
include/linux/virtio_pci_modern.h | 8 +++---
include/uapi/linux/virtio_pci.h | 2 +-
21 files changed, 131 insertions(+), 86 deletions(-)
When a VM boots with one virtio-crypto PCI device and a builtin
backend, and an openssl benchmark is run with multiple processes, such as
openssl speed -evp aes-128-cbc -engine afalg -seconds 10 -multi 32
the openssl processes hang and an error like this is reported:
virtio_crypto virtio0: dataq.0:id 3 is not a head!
It seems that the data virtqueue needs protection when it is handled
in the virtio done notification path. With spinlock protection added
in virtcrypto_done_task(), the openssl benchmark with multiple
processes works well.
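For context, a rough sketch of the race (simplified; not the driver's
exact code): the submission path already manipulates the virtqueue
under data_vq->lock, while the completion tasklet did not, so both
could update the ring's bookkeeping concurrently:

    /* submission path (simplified): serialized by data_vq->lock */
    spin_lock_irqsave(&data_vq->lock, flags);
    err = virtqueue_add_sgs(data_vq->vq, sgs, num_out, num_in,
                            vc_req, GFP_ATOMIC);
    virtqueue_kick(data_vq->vq);
    spin_unlock_irqrestore(&data_vq->lock, flags);

    /* completion tasklet (before this patch): same vq, no lock,
     * racing with the submitter -> "id N is not a head!" */
    while ((vc_req = virtqueue_get_buf(vq, &len)) != NULL)
            vc_req->alg_cb(vc_req, len);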
Fixes: fed93fb62e05 ("crypto: virtio - Handle dataq logic with tasklet")
Cc: stable(a)vger.kernel.org
Signed-off-by: Bibo Mao <maobibo(a)loongson.cn>
---
drivers/crypto/virtio/virtio_crypto_core.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/crypto/virtio/virtio_crypto_core.c b/drivers/crypto/virtio/virtio_crypto_core.c
index 3d241446099c..ccc6b5c1b24b 100644
--- a/drivers/crypto/virtio/virtio_crypto_core.c
+++ b/drivers/crypto/virtio/virtio_crypto_core.c
@@ -75,15 +75,20 @@ static void virtcrypto_done_task(unsigned long data)
struct data_queue *data_vq = (struct data_queue *)data;
struct virtqueue *vq = data_vq->vq;
struct virtio_crypto_request *vc_req;
+ unsigned long flags;
unsigned int len;
+ spin_lock_irqsave(&data_vq->lock, flags);
do {
virtqueue_disable_cb(vq);
while ((vc_req = virtqueue_get_buf(vq, &len)) != NULL) {
+ spin_unlock_irqrestore(&data_vq->lock, flags);
if (vc_req->alg_cb)
vc_req->alg_cb(vc_req, len);
+ spin_lock_irqsave(&data_vq->lock, flags);
}
} while (!virtqueue_enable_cb(vq));
+ spin_unlock_irqrestore(&data_vq->lock, flags);
}
static void virtcrypto_dataq_callback(struct virtqueue *vq)
--
2.39.3
The below "No resource for ep" warning appears when a StartTransfer
command is issued for bulk or interrupt endpoints in
`__dwc3_gadget_ep_enable` while a previous StartTransfer on the same
endpoint is still in progress. Gadget function drivers can invoke
`usb_ep_enable` (which triggers a new StartTransfer command) before the
earlier transfer has completed. Because the previous StartTransfer is
still active, `dwc3_gadget_ep_disable` can skip the required
`EndTransfer` due to `DWC3_EP_DELAY_STOP`, leaving the endpoint
resources busy with the previous StartTransfer and triggering the "No
resource for ep" warning from the dwc3 driver.
To resolve this, add a check in `__dwc3_gadget_ep_enable` for the
`DWC3_EP_TRANSFER_STARTED` flag before issuing a new StartTransfer.
Preventing a second StartTransfer on an already busy endpoint
eliminates the resource conflict, removes the warning, and avoids
potential kernel panics caused by `panic_on_warn`.
------------[ cut here ]------------
dwc3 13200000.dwc3: No resource for ep1out
WARNING: CPU: 0 PID: 700 at drivers/usb/dwc3/gadget.c:398 dwc3_send_gadget_ep_cmd+0x2f8/0x76c
Call trace:
dwc3_send_gadget_ep_cmd+0x2f8/0x76c
__dwc3_gadget_ep_enable+0x490/0x7c0
dwc3_gadget_ep_enable+0x6c/0xe4
usb_ep_enable+0x5c/0x15c
mp_eth_stop+0xd4/0x11c
__dev_close_many+0x160/0x1c8
__dev_change_flags+0xfc/0x220
dev_change_flags+0x24/0x70
devinet_ioctl+0x434/0x524
inet_ioctl+0xa8/0x224
sock_do_ioctl+0x74/0x128
sock_ioctl+0x3bc/0x468
__arm64_sys_ioctl+0xa8/0xe4
invoke_syscall+0x58/0x10c
el0_svc_common+0xa8/0xdc
do_el0_svc+0x1c/0x28
el0_svc+0x38/0x88
el0t_64_sync_handler+0x70/0xbc
el0t_64_sync+0x1a8/0x1ac
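For illustration, the problematic sequence from a gadget function
driver's point of view is roughly (hypothetical, simplified):

    /* e.g. on an alt-setting change or link-state transition */
    usb_ep_disable(ep);     /* EndTransfer may be skipped because of
                             * DWC3_EP_DELAY_STOP, so the transfer
                             * resource stays allocated */
    usb_ep_enable(ep);      /* issues a new StartTransfer while the
                             * previous one still owns the resource:
                             * "No resource for ep" */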
Fixes: a97ea994605e ("usb: dwc3: gadget: offset Start Transfer latency for bulk EPs")
Cc: stable(a)vger.kernel.org
Signed-off-by: Selvarasu Ganesan <selvarasu.g(a)samsung.com>
---
Changes in v2:
- Removed change-id.
- Updated commit message.
Link to v1: https://lore.kernel.org/linux-usb/20251117152812.622-1-selvarasu.g@samsung.…
---
drivers/usb/dwc3/gadget.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
index 1f67fb6aead5..8d3caa71ea12 100644
--- a/drivers/usb/dwc3/gadget.c
+++ b/drivers/usb/dwc3/gadget.c
@@ -963,8 +963,9 @@ static int __dwc3_gadget_ep_enable(struct dwc3_ep *dep, unsigned int action)
* Issue StartTransfer here with no-op TRB so we can always rely on No
* Response Update Transfer command.
*/
- if (usb_endpoint_xfer_bulk(desc) ||
- usb_endpoint_xfer_int(desc)) {
+ if ((usb_endpoint_xfer_bulk(desc) ||
+ usb_endpoint_xfer_int(desc)) &&
+ !(dep->flags & DWC3_EP_TRANSFER_STARTED)) {
struct dwc3_gadget_ep_cmd_params params;
struct dwc3_trb *trb;
dma_addr_t trb_dma;
--
2.34.1
Currently, kvfree_rcu_barrier() flushes RCU sheaves across all slab
caches when a cache is destroyed. This is unnecessary; only the RCU
sheaves belonging to the cache being destroyed need to be flushed.
As suggested by Vlastimil Babka, introduce a weaker form of
kvfree_rcu_barrier() that operates on a specific slab cache.
Factor out flush_rcu_sheaves_on_cache() from flush_all_rcu_sheaves() and
call it from flush_all_rcu_sheaves() and kvfree_rcu_barrier_on_cache().
Call kvfree_rcu_barrier_on_cache() instead of kvfree_rcu_barrier() on
cache destruction.
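As a usage sketch (assuming this series is applied; my_cache is a
hypothetical cache), a module tearing down its cache now only waits
for its own in-flight kvfree_rcu() objects:

    static void __exit my_module_exit(void)
    {
            /* internally calls kvfree_rcu_barrier_on_cache(my_cache)
             * instead of flushing RCU sheaves of every cache */
            kmem_cache_destroy(my_cache);
    }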
The performance benefit was evaluated on a 12-core/24-thread AMD Ryzen
5900X machine (1 socket) by loading the slub_kunit module.
Before:
Total calls: 19
Average latency (us): 18127
Total time (us): 344414
After:
Total calls: 19
Average latency (us): 10066
Total time (us): 191264
Two performance regressions have been reported:
- stress module loader test's runtime increases by 50-60% (Daniel)
- internal graphics test's runtime on Tegra23 increases by 35% (Jon)
They are fixed by this change.
Suggested-by: Vlastimil Babka <vbabka(a)suse.cz>
Fixes: ec66e0d59952 ("slab: add sheaf support for batching kfree_rcu() operations")
Cc: <stable(a)vger.kernel.org>
Link: https://lore.kernel.org/linux-mm/1bda09da-93be-4737-aef0-d47f8c5c9301@suse.…
Reported-and-tested-by: Daniel Gomez <da.gomez(a)samsung.com>
Closes: https://lore.kernel.org/linux-mm/0406562e-2066-4cf8-9902-b2b0616dd742@kerne…
Reported-and-tested-by: Jon Hunter <jonathanh(a)nvidia.com>
Closes: https://lore.kernel.org/linux-mm/e988eff6-1287-425e-a06c-805af5bbf262@nvidi…
Signed-off-by: Harry Yoo <harry.yoo(a)oracle.com>
---
No code change; added proper tags and updated the changelog.
include/linux/slab.h | 5 ++++
mm/slab.h | 1 +
mm/slab_common.c | 52 +++++++++++++++++++++++++++++------------
mm/slub.c | 55 ++++++++++++++++++++++++--------------------
4 files changed, 73 insertions(+), 40 deletions(-)
diff --git a/include/linux/slab.h b/include/linux/slab.h
index cf443f064a66..937c93d44e8c 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -1149,6 +1149,10 @@ static inline void kvfree_rcu_barrier(void)
{
rcu_barrier();
}
+static inline void kvfree_rcu_barrier_on_cache(struct kmem_cache *s)
+{
+ rcu_barrier();
+}
static inline void kfree_rcu_scheduler_running(void) { }
#else
@@ -1156,6 +1160,7 @@ void kvfree_rcu_barrier(void);
void kfree_rcu_scheduler_running(void);
#endif
+void kvfree_rcu_barrier_on_cache(struct kmem_cache *s);
/**
* kmalloc_size_roundup - Report allocation bucket size for the given size
diff --git a/mm/slab.h b/mm/slab.h
index f730e012553c..e767aa7e91b0 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -422,6 +422,7 @@ static inline bool is_kmalloc_normal(struct kmem_cache *s)
bool __kfree_rcu_sheaf(struct kmem_cache *s, void *obj);
void flush_all_rcu_sheaves(void);
+void flush_rcu_sheaves_on_cache(struct kmem_cache *s);
#define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | \
SLAB_CACHE_DMA32 | SLAB_PANIC | \
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 84dfff4f7b1f..dd8a49d6f9cc 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -492,7 +492,7 @@ void kmem_cache_destroy(struct kmem_cache *s)
return;
/* in-flight kfree_rcu()'s may include objects from our cache */
- kvfree_rcu_barrier();
+ kvfree_rcu_barrier_on_cache(s);
if (IS_ENABLED(CONFIG_SLUB_RCU_DEBUG) &&
(s->flags & SLAB_TYPESAFE_BY_RCU)) {
@@ -2038,25 +2038,13 @@ void kvfree_call_rcu(struct rcu_head *head, void *ptr)
}
EXPORT_SYMBOL_GPL(kvfree_call_rcu);
-/**
- * kvfree_rcu_barrier - Wait until all in-flight kvfree_rcu() complete.
- *
- * Note that a single argument of kvfree_rcu() call has a slow path that
- * triggers synchronize_rcu() following by freeing a pointer. It is done
- * before the return from the function. Therefore for any single-argument
- * call that will result in a kfree() to a cache that is to be destroyed
- * during module exit, it is developer's responsibility to ensure that all
- * such calls have returned before the call to kmem_cache_destroy().
- */
-void kvfree_rcu_barrier(void)
+static inline void __kvfree_rcu_barrier(void)
{
struct kfree_rcu_cpu_work *krwp;
struct kfree_rcu_cpu *krcp;
bool queued;
int i, cpu;
- flush_all_rcu_sheaves();
-
/*
* Firstly we detach objects and queue them over an RCU-batch
* for all CPUs. Finally queued works are flushed for each CPU.
@@ -2118,8 +2106,43 @@ void kvfree_rcu_barrier(void)
}
}
}
+
+/**
+ * kvfree_rcu_barrier - Wait until all in-flight kvfree_rcu() complete.
+ *
+ * Note that a single argument of kvfree_rcu() call has a slow path that
+ * triggers synchronize_rcu() following by freeing a pointer. It is done
+ * before the return from the function. Therefore for any single-argument
+ * call that will result in a kfree() to a cache that is to be destroyed
+ * during module exit, it is developer's responsibility to ensure that all
+ * such calls have returned before the call to kmem_cache_destroy().
+ */
+void kvfree_rcu_barrier(void)
+{
+ flush_all_rcu_sheaves();
+ __kvfree_rcu_barrier();
+}
EXPORT_SYMBOL_GPL(kvfree_rcu_barrier);
+/**
+ * kvfree_rcu_barrier_on_cache - Wait for in-flight kvfree_rcu() calls on a
+ * specific slab cache.
+ * @s: slab cache to wait for
+ *
+ * See the description of kvfree_rcu_barrier() for details.
+ */
+void kvfree_rcu_barrier_on_cache(struct kmem_cache *s)
+{
+ if (s->cpu_sheaves)
+ flush_rcu_sheaves_on_cache(s);
+ /*
+ * TODO: Introduce a version of __kvfree_rcu_barrier() that works
+ * on a specific slab cache.
+ */
+ __kvfree_rcu_barrier();
+}
+EXPORT_SYMBOL_GPL(kvfree_rcu_barrier_on_cache);
+
static unsigned long
kfree_rcu_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
{
@@ -2215,4 +2238,3 @@ void __init kvfree_rcu_init(void)
}
#endif /* CONFIG_KVFREE_RCU_BATCHED */
-
diff --git a/mm/slub.c b/mm/slub.c
index 785e25a14999..7cec2220712b 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4118,42 +4118,47 @@ static void flush_rcu_sheaf(struct work_struct *w)
/* needed for kvfree_rcu_barrier() */
-void flush_all_rcu_sheaves(void)
+void flush_rcu_sheaves_on_cache(struct kmem_cache *s)
{
struct slub_flush_work *sfw;
- struct kmem_cache *s;
unsigned int cpu;
- cpus_read_lock();
- mutex_lock(&slab_mutex);
+ mutex_lock(&flush_lock);
- list_for_each_entry(s, &slab_caches, list) {
- if (!s->cpu_sheaves)
- continue;
+ for_each_online_cpu(cpu) {
+ sfw = &per_cpu(slub_flush, cpu);
- mutex_lock(&flush_lock);
+ /*
+ * we don't check if rcu_free sheaf exists - racing
+ * __kfree_rcu_sheaf() might have just removed it.
+ * by executing flush_rcu_sheaf() on the cpu we make
+ * sure the __kfree_rcu_sheaf() finished its call_rcu()
+ */
- for_each_online_cpu(cpu) {
- sfw = &per_cpu(slub_flush, cpu);
+ INIT_WORK(&sfw->work, flush_rcu_sheaf);
+ sfw->s = s;
+ queue_work_on(cpu, flushwq, &sfw->work);
+ }
- /*
- * we don't check if rcu_free sheaf exists - racing
- * __kfree_rcu_sheaf() might have just removed it.
- * by executing flush_rcu_sheaf() on the cpu we make
- * sure the __kfree_rcu_sheaf() finished its call_rcu()
- */
+ for_each_online_cpu(cpu) {
+ sfw = &per_cpu(slub_flush, cpu);
+ flush_work(&sfw->work);
+ }
- INIT_WORK(&sfw->work, flush_rcu_sheaf);
- sfw->s = s;
- queue_work_on(cpu, flushwq, &sfw->work);
- }
+ mutex_unlock(&flush_lock);
+}
- for_each_online_cpu(cpu) {
- sfw = &per_cpu(slub_flush, cpu);
- flush_work(&sfw->work);
- }
+void flush_all_rcu_sheaves(void)
+{
+ struct kmem_cache *s;
+
+ cpus_read_lock();
+ mutex_lock(&slab_mutex);
- mutex_unlock(&flush_lock);
+ list_for_each_entry(s, &slab_caches, list) {
+ if (!s->cpu_sheaves)
+ continue;
+ flush_rcu_sheaves_on_cache(s);
}
mutex_unlock(&slab_mutex);
--
2.43.0
From: Peter Zijlstra <peterz(a)infradead.org>
[ Upstream commit 1fe4002cf7f23d70c79bda429ca2a9423ebcfdfa ]
A KASAN build bloats these single load/store helpers such that
the compiler fails to inline them:
vmlinux.o: error: objtool: irqentry_exit+0x5e8: call to instruction_pointer_set() with UACCESS enabled
Make sure the compiler isn't allowed to do stupid.
Signed-off-by: Peter Zijlstra (Intel) <peterz(a)infradead.org>
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
Link: https://patch.msgid.link/20251031105435.GU4068168@noisy.programming.kicks-a…
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
LLM-generated explanations; may be completely bogus:
## Comprehensive Analysis: x86/ptrace: Always inline trivial accessors
### 1. COMMIT MESSAGE ANALYSIS
**Subject:** `x86/ptrace: Always inline trivial accessors`
**Key points from the commit message:**
- A **KASAN build** bloats these single load/store helpers such that
they fail to inline
- The result is an **objtool ERROR**: `vmlinux.o: error: objtool:
irqentry_exit+0x5e8: call to instruction_pointer_set() with UACCESS
enabled`
- The commit ensures "the compiler isn't allowed to do stupid"
**Author:** Peter Zijlstra (Intel) - a highly respected kernel developer
and maintainer
**Committer:** Ingo Molnar - x86 subsystem maintainer
**Missing tags:**
- No `Cc: stable(a)vger.kernel.org` tag
- No `Fixes:` tag
### 2. CODE CHANGE ANALYSIS
The diff changes 8 trivial accessor functions from `static inline` to
`static __always_inline`:
| Function | Purpose | Complexity |
|----------|---------|------------|
| `regs_return_value()` | Returns `regs->ax` | 1 line |
| `regs_set_return_value()` | Sets `regs->ax = rc` | 1 line |
| `kernel_stack_pointer()` | Returns `regs->sp` | 1 line |
| `instruction_pointer()` | Returns `regs->ip` | 1 line |
| `instruction_pointer_set()` | Sets `regs->ip = val` | 1 line |
| `frame_pointer()` | Returns `regs->bp` | 1 line |
| `user_stack_pointer()` | Returns `regs->sp` | 1 line |
| `user_stack_pointer_set()` | Sets `regs->sp = val` | 1 line |
**Technical mechanism of the bug:**
1. KASAN adds memory sanitization instrumentation to functions
2. Even trivial one-liner functions get bloated with KASAN checks
3. The compiler decides the bloated functions are "too big" to inline
4. These functions get called from `irqentry_exit()` in contexts where
UACCESS is enabled (SMAP disabled via STAC)
5. Objtool validates that no unexpected function calls happen with
UACCESS enabled (security/correctness requirement)
6. Result: **BUILD FAILURE** (error, not warning)
**Why `__always_inline` fixes it:**
```c
#define __always_inline inline __attribute__((__always_inline__))
```
This compiler attribute forces inlining regardless of any optimization
settings or instrumentation, ensuring these trivial accessors always
become inline code rather than function calls.
### 3. CLASSIFICATION
- **Category:** BUILD FIX
- **Type:** Fixes compilation error with KASAN on x86
- **Not a feature:** Simply enforces behavior that was intended
(functions should always inline)
- **Not a quirk/device ID/DT:** N/A
### 4. SCOPE AND RISK ASSESSMENT
**Scope:**
- 1 file changed: `arch/x86/include/asm/ptrace.h`
- 10 insertions, 10 deletions (only adding `__always_` prefix)
- Changes are purely compile-time
**Risk: VERY LOW**
- Zero runtime functional change when compiler already inlines
- Only forces the compiler to do what it was supposed to do
- Same pattern already successfully applied to other functions in the
same file:
- `user_mode()` - already `__always_inline`
- `v8086_mode()` - commit b008893b08dcc
- `ip_within_syscall_gap()` - commit c6b01dace2cd7
- `regs_irqs_disabled()` - already `__always_inline`
### 5. USER IMPACT
**Who is affected:**
- Anyone building x86 kernels with `CONFIG_KASAN=y`
- KASAN is used for memory debugging, commonly in development and CI
systems
- Enterprise distributions often enable KASAN in debug/test builds
**Severity:** HIGH (build failure = kernel cannot be compiled)
### 6. STABILITY INDICATORS
- **Reviewed-by:** None explicit, but committed through tip tree
- **Tested-by:** None explicit, but the error message shows it was
reproduced
- **Author credibility:** Peter Zijlstra is a top kernel maintainer
- **Committer credibility:** Ingo Molnar is the x86 maintainer
### 7. DEPENDENCY CHECK
**Dependencies:** NONE
- This is a standalone fix
- Does not depend on any other commits
- The affected code exists unchanged in all stable kernels (5.10, 5.15,
6.1, 6.6)
**Backport applicability verified:**
```
v5.10: static inline void instruction_pointer_set(...) ✓
v5.15: static inline void instruction_pointer_set(...) ✓
v6.1: static inline void instruction_pointer_set(...) ✓
v6.6: static inline void instruction_pointer_set(...) ✓
```
The patch should apply cleanly to all stable trees.
### 8. HISTORICAL CONTEXT
Similar fixes have been applied to this same file and other kernel
files:
| Commit | Description | Pattern |
|--------|-------------|---------|
| c6b01dace2cd7 | x86: Always inline ip_within_syscall_gap() | Same |
| b008893b08dcc | x86/ptrace: Always inline v8086_mode() | Same |
| cb0ca08b326aa | kcov: mark in_softirq_really() as __always_inline | Same (backported to stable) |
The KASAN + objtool UACCESS validation issue is a known pattern that has
been addressed multiple times with `__always_inline`.
### SUMMARY
**Strong YES signals:**
- ✅ Fixes a build failure (compilation error, not warning)
- ✅ Small, surgical fix with clear scope (only adds `__always_inline`)
- ✅ Obviously correct - trivial accessors should always inline
- ✅ Zero functional/runtime change
- ✅ No dependencies, applies cleanly to all stable trees
- ✅ Well-established fix pattern used elsewhere in kernel
- ✅ Authored by highly trusted maintainer (Peter Zijlstra)
- ✅ Committed through the proper channel (tip tree via Ingo Molnar)
**Weak NO signals:**
- ⚠️ No explicit `Cc: stable` tag
- ⚠️ No `Fixes:` tag
The absence of stable tags is not disqualifying for build fixes. The
stable kernel rules explicitly state that "build fixes that prevent
compilation" are acceptable. This is a clear-cut build fix that prevents
KASAN-enabled x86 kernels from compiling.
### CONCLUSION
This commit **should be backported** to stable kernel trees. It's a
textbook example of a build fix:
- Small, contained, obviously correct
- Fixes a real build failure affecting KASAN users
- Zero risk of regression (only forces intended behavior)
- No dependencies, clean backport
- Follows established patterns from similar successful fixes
**YES**
arch/x86/include/asm/ptrace.h | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/arch/x86/include/asm/ptrace.h b/arch/x86/include/asm/ptrace.h
index 50f75467f73d0..b5dec859bc75a 100644
--- a/arch/x86/include/asm/ptrace.h
+++ b/arch/x86/include/asm/ptrace.h
@@ -187,12 +187,12 @@ convert_ip_to_linear(struct task_struct *child, struct pt_regs *regs);
extern void send_sigtrap(struct pt_regs *regs, int error_code, int si_code);
-static inline unsigned long regs_return_value(struct pt_regs *regs)
+static __always_inline unsigned long regs_return_value(struct pt_regs *regs)
{
return regs->ax;
}
-static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)
+static __always_inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)
{
regs->ax = rc;
}
@@ -277,34 +277,34 @@ static __always_inline bool ip_within_syscall_gap(struct pt_regs *regs)
}
#endif
-static inline unsigned long kernel_stack_pointer(struct pt_regs *regs)
+static __always_inline unsigned long kernel_stack_pointer(struct pt_regs *regs)
{
return regs->sp;
}
-static inline unsigned long instruction_pointer(struct pt_regs *regs)
+static __always_inline unsigned long instruction_pointer(struct pt_regs *regs)
{
return regs->ip;
}
-static inline void instruction_pointer_set(struct pt_regs *regs,
- unsigned long val)
+static __always_inline
+void instruction_pointer_set(struct pt_regs *regs, unsigned long val)
{
regs->ip = val;
}
-static inline unsigned long frame_pointer(struct pt_regs *regs)
+static __always_inline unsigned long frame_pointer(struct pt_regs *regs)
{
return regs->bp;
}
-static inline unsigned long user_stack_pointer(struct pt_regs *regs)
+static __always_inline unsigned long user_stack_pointer(struct pt_regs *regs)
{
return regs->sp;
}
-static inline void user_stack_pointer_set(struct pt_regs *regs,
- unsigned long val)
+static __always_inline
+void user_stack_pointer_set(struct pt_regs *regs, unsigned long val)
{
regs->sp = val;
}
--
2.51.0
A new warning in Clang 22 [1] complains that @clidr passed to
get_clidr_el1() is an uninitialized const pointer. get_clidr_el1()
doesn't really care since it casts away the const-ness anyway.
Silence the warning by initializing the struct.
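A minimal standalone reproduction of the pattern the new diagnostic
flags (hypothetical names, not the KVM code):

    struct desc { int val; };

    static void reader(const struct desc *d) { (void)d; }

    void example(void)
    {
            struct desc d;  /* uninitialized */
            reader(&d);     /* Clang 22 warns: uninitialized object passed
                             * via const pointer; initializing d, e.g.
                             * with = {0}, silences it */
    }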
This patch won't apply to anything past v6.1, as this code section was
reworked in commit 7af0c2534f4c ("KVM: arm64: Normalize cache
configuration"). There is no upstream equivalent, so this patch only
needs to be applied (stable only) to 6.1.
Cc: stable(a)vger.kernel.org
Fixes: 7c8c5e6a9101e ("arm64: KVM: system register handling")
Link: https://github.com/llvm/llvm-project/commit/00dacf8c22f065cb52efb14cd091d44… [1]
Signed-off-by: Justin Stitt <justinstitt(a)google.com>
---
Resending this with Nathan's RB tag, an updated commit log and better
recipients from checkpatch.pl.
I've also sent a similar patch resend for 5.15.
---
arch/arm64/kvm/sys_regs.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f4a7c5abcbca..d7ebd7387221 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2948,7 +2948,7 @@ int kvm_sys_reg_table_init(void)
{
bool valid = true;
unsigned int i;
- struct sys_reg_desc clidr;
+ struct sys_reg_desc clidr = {0};
/* Make sure tables are unique and in order. */
valid &= check_sysreg_table(sys_reg_descs, ARRAY_SIZE(sys_reg_descs), false);
---
base-commit: 830b3c68c1fb1e9176028d02ef86f3cf76aa2476
change-id: 20250724-b4-clidr-unint-const-ptr-7edb960bc3bd
Best regards,
--
Justin Stitt <justinstitt(a)google.com>
The recent refactoring of where runtime PM is enabled, done in commit
f1eb4e792bb1 ("spi: spi-cadence-quadspi: Enable pm runtime earlier to
avoid imbalance"), means that when we do a pm_runtime_disable() in the
error paths of probe() we can trigger a runtime suspend, which in turn
results in duplicate clock disables. This is particularly likely to
happen when the DT description for the flashes attached to the
controller is missing or broken.
Early on in the probe function we do a pm_runtime_get_noresume(), since
the probe function leaves the device in a powered-up state, but in the
error path we can't assume that runtime PM is enabled, so we also
manually disable everything, including clocks. This means that when
runtime PM is active, both it and the probe function release the same
reference to the main clock for the IP, triggering warnings from the
clock subsystem:
[ 8.693719] clk:75:7 already disabled
[ 8.693791] WARNING: CPU: 1 PID: 185 at /usr/src/kernel/drivers/clk/clk.c:1188 clk_core_disable+0xa0/0xb
...
[ 8.694261] clk_core_disable+0xa0/0xb4 (P)
[ 8.694272] clk_disable+0x38/0x60
[ 8.694283] cqspi_probe+0x7c8/0xc5c [spi_cadence_quadspi]
[ 8.694309] platform_probe+0x5c/0xa4
Dealing with this issue properly is complicated by the fact that we
don't know if runtime PM is active, so we can't tell whether it will
disable the clocks or not. We can, however, sidestep the issue for the
flash descriptions by moving their parsing to when we parse the
controller properties, which also saves us doing a bunch of setup that
can never be used, so let's do that.
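For reference, the imbalance boils down to the same enable reference
being dropped twice (generic sketch, not the driver's exact flow):

    clk_prepare_enable(cqspi->clk);    /* probe() takes one reference */
    ...
    clk_disable_unprepare(cqspi->clk); /* error path drops it manually */
    /* the runtime PM suspend callback then drops it again:
     * "clk:75:7 already disabled" WARNING in clk_core_disable() */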
Reported-by: Francesco Dolcini <francesco(a)dolcini.it>
Closes: https://lore.kernel.org/r/20251201072844.GA6785@francesco-nb
Signed-off-by: Mark Brown <broonie(a)kernel.org>
Cc: stable(a)vger.kernel.org
---
Changes in v2:
- Switch to moving the DT parsing earlier so we avoid triggering the
clock referencing problems.
- Link to v1: https://patch.msgid.link/20251202-spi-cadence-qspi-runtime-pm-imbalance-v1-…
---
drivers/spi/spi-cadence-quadspi.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/spi/spi-cadence-quadspi.c b/drivers/spi/spi-cadence-quadspi.c
index af6d050da1c8..bdbeef05cd72 100644
--- a/drivers/spi/spi-cadence-quadspi.c
+++ b/drivers/spi/spi-cadence-quadspi.c
@@ -1845,6 +1845,12 @@ static int cqspi_probe(struct platform_device *pdev)
return -ENODEV;
}
+ ret = cqspi_setup_flash(cqspi);
+ if (ret) {
+ dev_err(dev, "failed to setup flash parameters %d\n", ret);
+ return ret;
+ }
+
/* Obtain QSPI clock. */
cqspi->clk = devm_clk_get(dev, NULL);
if (IS_ERR(cqspi->clk)) {
@@ -1988,12 +1994,6 @@ static int cqspi_probe(struct platform_device *pdev)
pm_runtime_get_noresume(dev);
}
- ret = cqspi_setup_flash(cqspi);
- if (ret) {
- dev_err(dev, "failed to setup flash parameters %d\n", ret);
- goto probe_setup_failed;
- }
-
host->num_chipselect = cqspi->num_chipselect;
if (ddata && (ddata->quirks & CQSPI_SUPPORT_DEVICE_RESET))
---
base-commit: cebdea5fc60642a39a76c237257a7e6662336006
change-id: 20251202-spi-cadence-qspi-runtime-pm-imbalance-657740cf7eae
Best regards,
--
Mark Brown <broonie(a)kernel.org>