Hongyu noticed that the nr_unaccepted counter kept growing even in the
absence of unaccepted memory on the machine.
This happens due to the commit that removed NR_BOUNCE: it dropped the
counter from enum zone_stat_item, but left its name in the vmstat_text
array.
As a result, all counters below nr_bounce in /proc/vmstat are
shifted by one line, causing the numa_hit counter to be labeled as
nr_unaccepted.
To fix this issue, remove nr_bounce from the vmstat_text array.
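For context, /proc/vmstat pairs each counter value with its name purely
by position, so a stale entry in vmstat_text[] shifts every following
label. A minimal sketch of the failure mode (hypothetical names, not the
actual kernel code):

#include <stdio.h>

/* The counter was removed from the enum... */
enum stat_item { NR_FOO, NR_BAR, NR_ITEMS };

/* ...but its name was left in the text array. */
static const char * const text[] = { "nr_foo", "nr_gone", "nr_bar" };

int main(void)
{
	unsigned long v[NR_ITEMS] = { 1, 2 };
	int i;

	/* nr_bar's value (2) now prints under the label "nr_gone". */
	for (i = 0; i < NR_ITEMS; i++)
		printf("%s %lu\n", text[i], v[i]);
	return 0;
}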
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reported-by: Hongyu Ning <hongyu.ning@linux.intel.com>
Fixes: 194df9f66db8 ("mm: remove NR_BOUNCE zone stat")
Cc: stable@vger.kernel.org
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Cc: Jens Axboe <axboe@kernel.dk>
---
mm/vmstat.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 4c268ce39ff2..ae9882063d89 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1201,7 +1201,6 @@ const char * const vmstat_text[] = {
"nr_zone_unevictable",
"nr_zone_write_pending",
"nr_mlock",
- "nr_bounce",
#if IS_ENABLED(CONFIG_ZSMALLOC)
"nr_zspages",
#endif
--
2.47.2
When we created the driver for the AXI PWMGEN IP block, we overlooked
the fact that it can optionally be configured to use an external clock
in addition to the AXI bus clock. This is easy to miss in testing
because the bus clock, which also drives other peripherals, is always
on.
Up to now, users were specifying the external clock if there was one and
the AXI bus clock otherwise. But the proper way is to always specify the
bus clock and to specify the external clock only if the IP block has been
configured to use it.
To fix this, we add clock-names to the devicetree bindings and change the
clocks property to allow one or two clocks.
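On the driver side, the shape of the change is roughly as follows (a
sketch only; the clock names "axi" and "ext" are assumptions here, and
the real logic is in patch 3):

#include <linux/clk.h>

static int axi_pwmgen_get_clocks(struct device *dev)
{
	struct clk *axi_clk, *ext_clk;

	/* The AXI bus clock is now always required. */
	axi_clk = devm_clk_get_enabled(dev, "axi");
	if (IS_ERR(axi_clk))
		return PTR_ERR(axi_clk);

	/* Returns NULL, not an error, when the clock is absent. */
	ext_clk = devm_clk_get_optional_enabled(dev, "ext");
	if (IS_ERR(ext_clk))
		return PTR_ERR(ext_clk);

	/* The real driver would stash these in its driver data. */
	return 0;
}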
---
Changes in v3:
- Fixed clock-names DT property restrictions (was failing dt_binding_check)
- Added Cc: stable
- Picked up trailers
- Link to v2: https://lore.kernel.org/r/20250522-pwm-axi-pwmgen-add-external-clock-v2-0-0…
Changes in v2:
- Consider this a fix rather than a new feature.
- Make clock-names required.
- Simplify the logic in the pwm driver to avoid needing to test if
clock-names is present in old dtbs that used the broken binding.
- Link to v1: https://lore.kernel.org/r/20250520-pwm-axi-pwmgen-add-external-clock-v1-0-6…
---
David Lechner (3):
dt-bindings: pwm: adi,axi-pwmgen: update documentation link
dt-bindings: pwm: adi,axi-pwmgen: fix clocks
pwm: axi-pwmgen: fix missing separate external clock
.../devicetree/bindings/pwm/adi,axi-pwmgen.yaml | 15 +++++++++++---
drivers/pwm/pwm-axi-pwmgen.c | 23 +++++++++++++++++++---
2 files changed, 32 insertions(+), 6 deletions(-)
---
base-commit: 484803582c77061b470ac64a634f25f89715be3f
change-id: 20250515-pwm-axi-pwmgen-add-external-clock-0364fbdf809b
Best regards,
--
David Lechner <dlechner@baylibre.com>
There are several issues in pud_free_pmd_page() and pmd_free_pte_page(),
which are used by vmalloc:
1. flush_tlb_kernel_range() does not flush paging-structure caches for
inactive PCIDs, but we're using it when freeing kernel page tables.
2. flush_tlb_kernel_range() only flushes paging-structure cache entries
that intersect with the flush range, and pud_free_pmd_page() passes in
the first PAGE_SIZE bytes of the area covered by the PMD table; but
pud_free_pmd_page() is actually designed to also remove PTE tables that
were installed in the PMD table, so we need to also flush those.
3. [theoretical issue] invlpgb_kernel_range_flush() expects an exclusive
range; it computes: `nr = (info->end - addr) >> PAGE_SHIFT;`
But pmd_free_pte_page() and pud_free_pmd_page() provide inclusive
ranges (`addr, addr + PAGE_SIZE-1`); see the arithmetic sketch below.
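Concretely, for a single-page flush the inclusive end makes the computed
page count come out one short:

/* exclusive end, as invlpgb_kernel_range_flush() expects: */
nr = ((addr + PAGE_SIZE) - addr) >> PAGE_SHIFT;		/* = 1 */
/* inclusive end, as currently passed in: */
nr = ((addr + PAGE_SIZE - 1) - addr) >> PAGE_SHIFT;	/* = 0 */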
To fix it:
1. Add a new helper flush_tlb_kernel_pgtable_range() for flushing paging
structure caches for kernel page tables, and use that.
2. Flush PUD_SIZE instead of PAGE_SIZE in pud_free_pmd_page().
3. Remove `-1` in end address calculations.
Note that since I'm not touching invlpgb_kernel_range_flush() or
invlpgb_flush_addr_nosync() in this patch, the flush on PMD table deletion
with INVLPGB might be a bit slow due to using PTE_STRIDE instead of
PMD_STRIDE. I don't think that matters.
Backport notes:
Kernels before 6.16 don't have invlpgb_kernel_range_flush(); for them,
kernel_tlb_flush_pgtable() should just unconditionally call
flush_tlb_all().
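For those kernels the helper would degrade to something like this (a
sketch of the suggested backport, not a tested patch):

static void kernel_tlb_flush_pgtable(struct flush_tlb_info *info)
{
	/*
	 * Without invlpgb_kernel_range_flush() there is no way to target
	 * paging-structure caches across all PCIDs, so throw everything
	 * away.
	 */
	flush_tlb_all();
}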
I am marking this as fixing commit 28ee90fe6048 ("x86/mm: implement free
pmd/pte page interfaces"); that is technically incorrect, since back then
no paging-structure cache invalidations were performed at all, but probably
makes the most sense here.
Cc: stable@vger.kernel.org
Fixes: 28ee90fe6048 ("x86/mm: implement free pmd/pte page interfaces")
Signed-off-by: Jann Horn <jannh@google.com>
---
arch/x86/include/asm/tlb.h | 4 ++++
arch/x86/include/asm/tlbflush.h | 1 +
arch/x86/mm/pgtable.c | 13 +++++++++----
arch/x86/mm/tlb.c | 37 +++++++++++++++++++++++++++++++++++++
4 files changed, 51 insertions(+), 4 deletions(-)
diff --git a/arch/x86/include/asm/tlb.h b/arch/x86/include/asm/tlb.h
index 866ea78ba156..56a4b93a0742 100644
--- a/arch/x86/include/asm/tlb.h
+++ b/arch/x86/include/asm/tlb.h
@@ -153,6 +153,10 @@ static inline void invlpgb_flush_all(void)
/* Flush addr, including globals, for all PCIDs. */
static inline void invlpgb_flush_addr_nosync(unsigned long addr, u16 nr)
{
+ /*
+ * Don't set INVLPGB_FLAG_FINAL_ONLY here without adjusting
+ * kernel_tlb_flush_pgtable().
+ */
__invlpgb(0, 0, addr, nr, PTE_STRIDE, INVLPGB_FLAG_INCLUDE_GLOBAL);
}
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index e9b81876ebe4..06a4a2b7a9f5 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -318,6 +318,7 @@ extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
unsigned long end, unsigned int stride_shift,
bool freed_tables);
extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
+void flush_tlb_kernel_pgtable_range(unsigned long start, unsigned long end);
static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
{
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 62777ba4de1a..0779ed02226c 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -745,8 +745,13 @@ int pud_free_pmd_page(pud_t *pud, unsigned long addr)
pud_clear(pud);
- /* INVLPG to clear all paging-structure caches */
- flush_tlb_kernel_range(addr, addr + PAGE_SIZE-1);
+ /*
+ * Clear paging-structure caches.
+ * Note that since this function can remove a PMD table together with the
+ * PTE tables it points to, we can't just flush the first PAGE_SIZE, we
+ * must flush PUD_SIZE!
+ */
+ flush_tlb_kernel_pgtable_range(addr, addr + PUD_SIZE);
for (i = 0; i < PTRS_PER_PMD; i++) {
if (!pmd_none(pmd_sv[i])) {
@@ -778,8 +783,8 @@ int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
pte = (pte_t *)pmd_page_vaddr(*pmd);
pmd_clear(pmd);
- /* INVLPG to clear all paging-structure caches */
- flush_tlb_kernel_range(addr, addr + PAGE_SIZE-1);
+ /* clear paging-structure caches */
+ flush_tlb_kernel_pgtable_range(addr, addr + PAGE_SIZE);
free_page((unsigned long)pte);
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 39f80111e6f1..26b78451a7ed 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1525,6 +1525,22 @@ static void kernel_tlb_flush_range(struct flush_tlb_info *info)
on_each_cpu(do_kernel_range_flush, info, 1);
}
+static void kernel_tlb_flush_pgtable(struct flush_tlb_info *info)
+{
+ /*
+ * The special thing about removing kernel page tables is that we can't
+ * use a flush that just removes global TLB entries; we need one that
+ * flushes paging structure caches across all PCIDs.
+ * With INVLPGB, that's explicitly supported.
+ * Otherwise, there's no good way (INVLPG and INVPCID can't specifically
+ * target one address across all PCIDs), just throw everything away.
+ */
+ if (cpu_feature_enabled(X86_FEATURE_INVLPGB))
+ invlpgb_kernel_range_flush(info);
+ else
+ flush_tlb_all();
+}
+
void flush_tlb_kernel_range(unsigned long start, unsigned long end)
{
struct flush_tlb_info *info;
@@ -1542,6 +1558,27 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
put_flush_tlb_info();
}
+/*
+ * Flush paging-structure cache entries for page tables covering the specified
+ * range and synchronize with concurrent hardware page table walks, in
+ * preparation for freeing page tables in the region.
+ * This must not be used for flushing translations to data pages.
+ */
+void flush_tlb_kernel_pgtable_range(unsigned long start, unsigned long end)
+{
+ struct flush_tlb_info *info;
+
+ /* We don't synchronize against GUP-fast. */
+ VM_WARN_ON(start < TASK_SIZE_MAX);
+
+ guard(preempt)();
+
+ info = get_flush_tlb_info(NULL, start, end, PMD_SHIFT, true,
+ TLB_GENERATION_INVALID);
+ kernel_tlb_flush_pgtable(info);
+ put_flush_tlb_info();
+}
+
/*
* This can be used from process context to figure out what the value of
* CR3 is without needing to do a (slow) __read_cr3().
---
base-commit: b1456f6dc167f7f101746e495bede2bac3d0e19f
change-id: 20250528-x86-fix-vmap-pt-del-flush-f9fa4f287c93
--
Jann Horn <jannh@google.com>
From: Lee Jones <joneslee@google.com>
This is the second attempt at achieving the same goal. This time, the
submission avoids forking the current code base, ensuring it remains
easier to maintain over time.
The set has been tested using the SCM_RIGHTS test suite [1] under QEMU
and has been seen to successfully mitigate a UAF on a top-tier
handset.
RESULTS:
TAP version 13
1..20
# Starting 20 tests from 5 test cases.
# RUN scm_rights.dgram.self_ref ...
# OK scm_rights.dgram.self_ref
ok 1 scm_rights.dgram.self_ref
# RUN scm_rights.dgram.triangle ...
# OK scm_rights.dgram.triangle
ok 2 scm_rights.dgram.triangle
# RUN scm_rights.dgram.cross_edge ...
# OK scm_rights.dgram.cross_edge
ok 3 scm_rights.dgram.cross_edge
# RUN scm_rights.dgram.backtrack_from_scc ...
# OK scm_rights.dgram.backtrack_from_scc
ok 4 scm_rights.dgram.backtrack_from_scc
# RUN scm_rights.stream.self_ref ...
# OK scm_rights.stream.self_ref
ok 5 scm_rights.stream.self_ref
# RUN scm_rights.stream.triangle ...
# OK scm_rights.stream.triangle
ok 6 scm_rights.stream.triangle
# RUN scm_rights.stream.cross_edge ...
# OK scm_rights.stream.cross_edge
ok 7 scm_rights.stream.cross_edge
# RUN scm_rights.stream.backtrack_from_scc ...
# OK scm_rights.stream.backtrack_from_scc
ok 8 scm_rights.stream.backtrack_from_scc
# RUN scm_rights.stream_oob.self_ref ...
# OK scm_rights.stream_oob.self_ref
ok 9 scm_rights.stream_oob.self_ref
# RUN scm_rights.stream_oob.triangle ...
# OK scm_rights.stream_oob.triangle
ok 10 scm_rights.stream_oob.triangle
# RUN scm_rights.stream_oob.cross_edge ...
# OK scm_rights.stream_oob.cross_edge
ok 11 scm_rights.stream_oob.cross_edge
# RUN scm_rights.stream_oob.backtrack_from_scc ...
# OK scm_rights.stream_oob.backtrack_from_scc
ok 12 scm_rights.stream_oob.backtrack_from_scc
# RUN scm_rights.stream_listener.self_ref ...
# OK scm_rights.stream_listener.self_ref
ok 13 scm_rights.stream_listener.self_ref
# RUN scm_rights.stream_listener.triangle ...
# OK scm_rights.stream_listener.triangle
ok 14 scm_rights.stream_listener.triangle
# RUN scm_rights.stream_listener.cross_edge ...
# OK scm_rights.stream_listener.cross_edge
ok 15 scm_rights.stream_listener.cross_edge
# RUN scm_rights.stream_listener.backtrack_from_scc ...
# OK scm_rights.stream_listener.backtrack_from_scc
ok 16 scm_rights.stream_listener.backtrack_from_scc
# RUN scm_rights.stream_listener_oob.self_ref ...
# OK scm_rights.stream_listener_oob.self_ref
ok 17 scm_rights.stream_listener_oob.self_ref
# RUN scm_rights.stream_listener_oob.triangle ...
# OK scm_rights.stream_listener_oob.triangle
ok 18 scm_rights.stream_listener_oob.triangle
# RUN scm_rights.stream_listener_oob.cross_edge ...
# OK scm_rights.stream_listener_oob.cross_edge
ok 19 scm_rights.stream_listener_oob.cross_edge
# RUN scm_rights.stream_listener_oob.backtrack_from_scc ...
# OK scm_rights.stream_listener_oob.backtrack_from_scc
ok 20 scm_rights.stream_listener_oob.backtrack_from_scc
# PASSED: 20 / 20 tests passed.
# Totals: pass:20 fail:0 xfail:0 xpass:0 skip:0 error:0
[0] https://lore.kernel.org/all/20250304030149.82265-1-kuniyu@amazon.com/
[1] https://lore.kernel.org/all/20240325202425.60930-16-kuniyu@amazon.com/
Kuniyuki Iwashima (24):
af_unix: Return struct unix_sock from unix_get_socket().
af_unix: Run GC on only one CPU.
af_unix: Try to run GC async.
af_unix: Replace BUG_ON() with WARN_ON_ONCE().
af_unix: Remove io_uring code for GC.
af_unix: Remove CONFIG_UNIX_SCM.
af_unix: Allocate struct unix_vertex for each inflight AF_UNIX fd.
af_unix: Allocate struct unix_edge for each inflight AF_UNIX fd.
af_unix: Link struct unix_edge when queuing skb.
af_unix: Bulk update unix_tot_inflight/unix_inflight when queuing skb.
af_unix: Iterate all vertices by DFS.
af_unix: Detect Strongly Connected Components.
af_unix: Save listener for embryo socket.
af_unix: Fix up unix_edge.successor for embryo socket.
af_unix: Save O(n) setup of Tarjan's algo.
af_unix: Skip GC if no cycle exists.
af_unix: Avoid Tarjan's algorithm if unnecessary.
af_unix: Assign a unique index to SCC.
af_unix: Detect dead SCC.
af_unix: Replace garbage collection algorithm.
af_unix: Remove lock dance in unix_peek_fds().
af_unix: Try not to hold unix_gc_lock during accept().
af_unix: Don't access successor in unix_del_edges() during GC.
af_unix: Add dead flag to struct scm_fp_list.
Michal Luczaj (1):
af_unix: Fix garbage collection of embryos carrying OOB with
SCM_RIGHTS
Shigeru Yoshida (1):
af_unix: Fix uninit-value in __unix_walk_scc()
include/net/af_unix.h | 49 ++-
include/net/scm.h | 11 +
net/Makefile | 2 +-
net/core/scm.c | 17 ++
net/unix/Kconfig | 5 -
net/unix/Makefile | 2 -
net/unix/af_unix.c | 120 +++++---
net/unix/garbage.c | 691 +++++++++++++++++++++++++++++-------------
net/unix/scm.c | 161 ----------
net/unix/scm.h | 10 -
10 files changed, 617 insertions(+), 451 deletions(-)
delete mode 100644 net/unix/scm.c
delete mode 100644 net/unix/scm.h
--
2.49.0.1112.g889b7c5bd8-goog
Hi Greg and Sasha,
Please find attached backports of commit 2e43ae7dd71c ("drm/i915/gvt:
fix unterminated-string-initialization warning") to address an instance
of -Wunterminated-string-initialization that appears in the i915 driver
with GCC 15 and Clang 21 due to the driver's use of -Wextra. Please let
me know if there are any issues.
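For context, the warning fires when a string literal exactly fills a
char array, leaving no room for the NUL terminator; a hypothetical
example (not the actual i915 code):

char name[4] = "abcd";	/* 4 bytes used, no room for the trailing '\0' */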
Cheers,
Nathan
From: Fedor Pchelkin <pchelkin@ispras.ru>
echo_skb_max should define the supported upper limit of echo_skb[]
allocated inside the netdevice's priv. The corresponding size value
provided by this driver to alloc_candev() is KVASER_PCIEFD_CAN_TX_MAX_COUNT
which is 17.
But later echo_skb_max is rounded up to the nearest power of two (for the
max case, that would be 32) and the tx/ack indices calculated further
during tx/rx may exceed the upper array boundary. KASAN reported this for
the ack case inside kvaser_pciefd_handle_ack_packet(), though the xmit
function has actually caught the same thing earlier.
BUG: KASAN: slab-out-of-bounds in kvaser_pciefd_handle_ack_packet+0x2d7/0x92a drivers/net/can/kvaser_pciefd.c:1528
Read of size 8 at addr ffff888105e4f078 by task swapper/4/0
CPU: 4 UID: 0 PID: 0 Comm: swapper/4 Not tainted 6.15.0 #12 PREEMPT(voluntary)
Call Trace:
<IRQ>
dump_stack_lvl lib/dump_stack.c:122
print_report mm/kasan/report.c:521
kasan_report mm/kasan/report.c:634
kvaser_pciefd_handle_ack_packet drivers/net/can/kvaser_pciefd.c:1528
kvaser_pciefd_read_packet drivers/net/can/kvaser_pciefd.c:1605
kvaser_pciefd_read_buffer drivers/net/can/kvaser_pciefd.c:1656
kvaser_pciefd_receive_irq drivers/net/can/kvaser_pciefd.c:1684
kvaser_pciefd_irq_handler drivers/net/can/kvaser_pciefd.c:1733
__handle_irq_event_percpu kernel/irq/handle.c:158
handle_irq_event kernel/irq/handle.c:210
handle_edge_irq kernel/irq/chip.c:833
__common_interrupt arch/x86/kernel/irq.c:296
common_interrupt arch/x86/kernel/irq.c:286
</IRQ>
Tx max count definitely matters for kvaser_pciefd_tx_avail(), but for seq
numbers' generation that's not the case - we're free to calculate them
however is most convenient, without taking the tx max count into account.
The only downside is that the size of echo_skb[] must then correspond to
the max seq number (not the tx max count), so in some situations a bit
more memory is consumed than strictly necessary.
Thus make the size of the underlying echo_skb[] sufficient for the rounded
max tx value.
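To illustrate the overrun (the masking shown here is the usual
power-of-two index pattern and is an assumption; the exact index math
lives in the driver):

/* echo_skb[] is sized by the value passed to alloc_candev(): 17 */
echo_skb_max = roundup_pow_of_two(KVASER_PCIEFD_CAN_TX_MAX_COUNT); /* 32 */
idx = seq & (echo_skb_max - 1);	/* up to 31, past the 17-entry array */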
Found by Linux Verification Center (linuxtesting.org) with Syzkaller.
Fixes: 8256e0ca6010 ("can: kvaser_pciefd: Fix echo_skb race")
Cc: stable@vger.kernel.org
Signed-off-by: Fedor Pchelkin <pchelkin@ispras.ru>
Link: https://patch.msgid.link/20250528192713.63894-1-pchelkin@ispras.ru
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
---
drivers/net/can/kvaser_pciefd.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/net/can/kvaser_pciefd.c b/drivers/net/can/kvaser_pciefd.c
index 7d3066691d5d..52301511ed1b 100644
--- a/drivers/net/can/kvaser_pciefd.c
+++ b/drivers/net/can/kvaser_pciefd.c
@@ -966,7 +966,7 @@ static int kvaser_pciefd_setup_can_ctrls(struct kvaser_pciefd *pcie)
u32 status, tx_nr_packets_max;
netdev = alloc_candev(sizeof(struct kvaser_pciefd_can),
- KVASER_PCIEFD_CAN_TX_MAX_COUNT);
+ roundup_pow_of_two(KVASER_PCIEFD_CAN_TX_MAX_COUNT));
if (!netdev)
return -ENOMEM;
@@ -995,7 +995,6 @@ static int kvaser_pciefd_setup_can_ctrls(struct kvaser_pciefd *pcie)
can->tx_max_count = min(KVASER_PCIEFD_CAN_TX_MAX_COUNT, tx_nr_packets_max - 1);
can->can.clock.freq = pcie->freq;
- can->can.echo_skb_max = roundup_pow_of_two(can->tx_max_count);
spin_lock_init(&can->lock);
can->can.bittiming_const = &kvaser_pciefd_bittiming_const;
base-commit: 271683bb2cf32e5126c592b5d5e6a756fa374fd9
--
2.47.2
From: Yanqing Wang <ot_yanqing.wang@mediatek.com>
The cause of the suspend/resume hang has been identified:
netif_carrier_off() is called during link state changes and becomes
stuck while executing linkwatch_work().
To resolve this issue, call netif_device_detach() during the Ethernet
suspend process to temporarily detach the network device from the
kernel and prevent the suspend/resume hang.
Fixes: 8c7bd5a454ff ("net: ethernet: mtk-star-emac: new driver")
Signed-off-by: Yanqing Wang <ot_yanqing.wang@mediatek.com>
Signed-off-by: Macpaul Lin <macpaul.lin@mediatek.com>
Signed-off-by: Biao Huang <biao.huang@mediatek.com>
---
drivers/net/ethernet/mediatek/mtk_star_emac.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/net/ethernet/mediatek/mtk_star_emac.c b/drivers/net/ethernet/mediatek/mtk_star_emac.c
index b175119a6a7d..b83886a41121 100644
--- a/drivers/net/ethernet/mediatek/mtk_star_emac.c
+++ b/drivers/net/ethernet/mediatek/mtk_star_emac.c
@@ -1463,6 +1463,8 @@ static __maybe_unused int mtk_star_suspend(struct device *dev)
if (netif_running(ndev))
mtk_star_disable(ndev);
+ netif_device_detach(ndev);
+
clk_bulk_disable_unprepare(MTK_STAR_NCLKS, priv->clks);
return 0;
@@ -1487,6 +1489,8 @@ static __maybe_unused int mtk_star_resume(struct device *dev)
clk_bulk_disable_unprepare(MTK_STAR_NCLKS, priv->clks);
}
+ netif_device_attach(ndev);
+
return ret;
}
--
2.45.2
On Thu, May 29, 2025 at 06:19:31AM +0000, Parav Pandit wrote:
> When the PCI device is surprise removed, requests may not be completed
> by the device as the VQ is marked as broken. Due to this, the disk
> deletion hangs.
>
> Fix it by aborting the requests when the VQ is broken.
>
> With this fix now fio completes swiftly.
> An alternative of relying on the I/O timeout has been considered;
> however, when the driver knows the block device is unresponsive, swiftly
> clearing the requests enables users and upper layers to react quickly.
>
> Verified with multiple device unplug iterations with pending requests in
> virtio used ring and some pending with the device.
>
> Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of virtio pci device")
> Cc: stable@vger.kernel.org
> Reported-by: Li RongQing <lirongqing@baidu.com>
> Closes: https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9b4741@bai…
> Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> Reviewed-by: Israel Rukshin <israelr@nvidia.com>
> Signed-off-by: Parav Pandit <parav@nvidia.com>
>
> ---
> v2->v3:
> - Addressed comments from Michael
> - updated comment for synchronizing with callbacks
>
> v1->v2:
> - Addressed comments from Stephan
> - fixed spelling to 'waiting'
> - Addressed comments from Michael
> - Dropped checking broken vq from queue_rq() and queue_rqs()
> because it is checked in lower layer routines in virtio core
>
> v0->v1:
> - Fixed comments from Stefan to rename a cleanup function
> - Improved logic for handling any outstanding requests
> in bio layer
> - improved cancel callback to sync with ongoing done()
Thanks!
Something else small to improve.
> ---
> drivers/block/virtio_blk.c | 82 ++++++++++++++++++++++++++++++++++++++
> 1 file changed, 82 insertions(+)
>
> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> index 7cffea01d868..d37df878f4e9 100644
> --- a/drivers/block/virtio_blk.c
> +++ b/drivers/block/virtio_blk.c
> @@ -1554,6 +1554,86 @@ static int virtblk_probe(struct virtio_device *vdev)
> return err;
> }
>
> +static bool virtblk_request_cancel(struct request *rq, void *data)
it reads more like
virtblk_request_complete_broken_with_ioerr
and maybe add a comment?
/*
* If the vq is broken, device will not complete requests.
* So we do it for the device.
*/
> +{
> + struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq);
> + struct virtio_blk *vblk = data;
> + struct virtio_blk_vq *vq;
> + unsigned long flags;
> +
> + vq = &vblk->vqs[rq->mq_hctx->queue_num];
> +
> + spin_lock_irqsave(&vq->lock, flags);
> +
> + vbr->in_hdr.status = VIRTIO_BLK_S_IOERR;
> + if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq))
> + blk_mq_complete_request(rq);
> +
> + spin_unlock_irqrestore(&vq->lock, flags);
> + return true;
> +}
> +
> +static void virtblk_broken_device_cleanup(struct virtio_blk *vblk)
and one goes: okay, what does it do exactly? Clean up a device in
a broken way? Turns out no, it cleans up a broken device.
An overview would be good. Maybe a small comment will help:
/*
* if the device is broken, it will not use any buffers and waiting
* for that to happen is pointless. We'll do it in the driver,
* completing all requests for the device.
*/
> +{
> + struct request_queue *q = vblk->disk->queue;
> +
> + if (!virtqueue_is_broken(vblk->vqs[0].vq))
> + return;
so one has to read it and understand that we did not need to call
it in the first place on a non-broken device.
Moving the test to the caller would be cleaner.
> +
> + /* Start freezing the queue, so that new requests keeps waiting at the
wrong style of comment for blk.
/* this is
* net style
*/
/*
* this is
* rest of the linux style
*/
> + * door of bio_queue_enter(). We cannot fully freeze the queue because
> + * freezed queue is an empty queue and there are pending requests, so
a frozen queue
> + * only start freezing it.
> + */
> + blk_freeze_queue_start(q);
> +
> + /* When quiescing completes, all ongoing dispatches have completed
> + * and no new dispatch will happen towards the driver.
> + * This ensures that later when cancel is attempted, then are not
they are not?
> + * getting processed by the queue_rq() or queue_rqs() handlers.
> + */
> + blk_mq_quiesce_queue(q);
> +
> + /*
> + * Synchronize with any ongoing VQ callbacks that may have started
> + * before the VQs were marked as broken. Any outstanding requests
> + * will be completed by virtblk_request_cancel().
> + */
> + virtio_synchronize_cbs(vblk->vdev);
> +
> + /* At this point, no new requests can enter the queue_rq() and
> + * completion routine will not complete any new requests either for the
> + * broken vq. Hence, it is safe to cancel all requests which are
> + * started.
> + */
> + blk_mq_tagset_busy_iter(&vblk->tag_set, virtblk_request_cancel, vblk);
> + blk_mq_tagset_wait_completed_request(&vblk->tag_set);
> +
> + /* All pending requests are cleaned up. Time to resume so that disk
> + * deletion can be smooth. Start the HW queues so that when queue is
> + * unquiesced requests can again enter the driver.
> + */
> + blk_mq_start_stopped_hw_queues(q, true);
> +
> + /* Unquiescing will trigger dispatching any pending requests to the
> + * driver which has crossed bio_queue_enter() to the driver.
> + */
> + blk_mq_unquiesce_queue(q);
> +
> + /* Wait for all pending dispatches to terminate which may have been
> + * initiated after unquiescing.
> + */
> + blk_mq_freeze_queue_wait(q);
> +
> + /* Mark the disk dead so that once queue unfreeze, the requests
... once we unfreeze the queue
> + * waiting at the door of bio_queue_enter() can be aborted right away.
> + */
> + blk_mark_disk_dead(vblk->disk);
> +
> + /* Unfreeze the queue so that any waiting requests will be aborted. */
> + blk_mq_unfreeze_queue_nomemrestore(q);
> +}
> +
> static void virtblk_remove(struct virtio_device *vdev)
> {
> struct virtio_blk *vblk = vdev->priv;
> @@ -1561,6 +1641,8 @@ static void virtblk_remove(struct virtio_device *vdev)
> /* Make sure no work handler is accessing the device. */
> flush_work(&vblk->config_work);
>
I prefer simply moving the test here:
if (virtqueue_is_broken(vblk->vqs[0].vq))
virtblk_broken_device_cleanup(vblk);
makes it much clearer what is going on, imho.
> del_gendisk(vblk->disk);
> blk_mq_free_tag_set(&vblk->tag_set);
>
> --
> 2.34.1