Hi stable maintainers,
The following upstream commits fix a couple of overlayfs handling problems
in Smack. Could you please cherry-pick them for 4.19.y, 5.4.y, 5.10.y,
5.15.y and 6.1.y?
#3 2c085f3a8f23 smack: Record transmuting in smk_transmuted
#2 3a3d8fce31a4 smack: Retrieve transmuting information in smack_inode_getsecurity()
#1 387ef964460f Smack:- Use overlay inode label in smack_inode_copy_up() # not needed for 6.1.y
Note that 4.19.y needs some adjustments. I'll send backported patches
separately.
I'm not the author of these commits, but I have been hitting these problems
with multiple kernels based on those trees, and they seem worth cherry-picking.
Perhaps it would be better to add this tag to the cherry-picked commits:
Fixes: d6d80cb57be4 ("Smack: Base support for overlayfs")
Regards,
Munehisa
Upstream commit 717c6ec241b5 ("can: sja1000: Prevent overrun stalls with
a soft reset on Renesas SoCs") fixes a reception issue with Renesas' own
SJA1000 CAN controller: the Rx buffer is only 5 messages deep, so when
the bus is loaded (e.g. a message every 50us), overruns may easily
happen. Upon an overrun, due to a possible internal crosstalk situation,
the controller enters a frozen state which (as determined experimentally)
can only be unlocked with a soft reset. The solution was to offload a
call to sja1000_start() to a threaded handler, as this operation needs to
sleep and must therefore run in process context. sja1000_start()
basically enters "reset mode", performs a proper software reset and
switches back into "normal mode".
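For reference, the general shape of such an offload looks roughly like the
following (an illustrative sketch only, not the driver's exact registration
code; the flags and error handling are placeholders):

	/* The hard IRQ handler runs in atomic context and cannot sleep; when
	 * a reset is needed it returns IRQ_WAKE_THREAD, so that the threaded
	 * handler runs in process context where sleeping (and thus the soft
	 * reset) is allowed.
	 */
	int err = request_threaded_irq(dev->irq,
				       sja1000_interrupt,	/* hard handler */
				       sja1000_reset_interrupt,	/* thread fn */
				       IRQF_SHARED, dev->name, dev);
	if (err)
		return -EAGAIN;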
Since this fix was introduced, we no longer observe any stalls in
reception. However, it was sporadically observed that the transmit path
would now freeze. Further investigation pointed at the fix mentioned
above, and especially at the reset operation. Reproducing the reset in a
loop helped identify what could go wrong. The sja1000 is a single Tx
queue device which leverages the netdev helpers to process one Tx message
at a time. The logic is: the queue is stopped, the message is handed to
the transceiver, and once it has been properly transmitted the controller
sets a status bit which triggers an interrupt; in the interrupt handler
the transmission status is checked and the queue is woken up.
Unfortunately, if an overrun happens, we might perform the soft reset
precisely between the transmission of the buffer to the transceiver and
the arrival of the transmission status bit. We would then stop the
transmission operation without re-enabling the queue, causing all further
transmissions to be ignored.
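Schematically, the problematic interleaving looks as follows (a simplified
sketch using the generic netdev helper names, not verbatim driver code):

	/*
	 *  Tx path (ndo_start_xmit)      threaded reset handler
	 *  ------------------------      ----------------------
	 *  netif_stop_queue()
	 *  hand the frame over to
	 *  the controller
	 *                                overrun -> soft reset wipes the
	 *                                pending transmission, the Tx
	 *                                status bit is never set
	 *  no Tx-done interrupt, so
	 *  netif_wake_queue() is never
	 *  called: the queue stays
	 *  stopped forever
	 */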
The reset interrupt can only happen while the device is "open", and after
a reset we want to resume normal operation anyway, no matter whether a
packet to transmit got dropped in the process, so we shall wake up the
queue. Restarting the device and waking up the queue is exactly what
sja1000_set_mode(CAN_MODE_START) does. In order to be consistent about
the queue state, we must acquire a lock both in the reset handler and in
the transmit path to ensure the two operations are serialized. It turns
out a lock is already held when entering the transmit path, so we can
just acquire/release it with the regular net helpers inside the threaded
interrupt handler as well, and this way we should be safe. As the reset
handler might still be called after a frame has been handed to the
transceiver but before it actually gets transmitted, we must ensure we
don't leak the skb, so we free it (the behavior is consistent whether or
not there was an skb on the stack).
Fixes: 717c6ec241b5 ("can: sja1000: Prevent overrun stalls with a soft reset on Renesas SoCs")
Cc: stable(a)vger.kernel.org
Signed-off-by: Miquel Raynal <miquel.raynal(a)bootlin.com>
---
Changes in v3:
* Fix new implementation by just acquiring the tx lock when required.
Changes in v2:
* As Marc suggested, use netif_tx_{,un}lock() instead of our own
spin_lock.
drivers/net/can/sja1000/sja1000.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/net/can/sja1000/sja1000.c b/drivers/net/can/sja1000/sja1000.c
index ae47fc72aa96..9531684d47cd 100644
--- a/drivers/net/can/sja1000/sja1000.c
+++ b/drivers/net/can/sja1000/sja1000.c
@@ -396,7 +396,13 @@ static irqreturn_t sja1000_reset_interrupt(int irq, void *dev_id)
 	struct net_device *dev = (struct net_device *)dev_id;
 
 	netdev_dbg(dev, "performing a soft reset upon overrun\n");
-	sja1000_start(dev);
+
+	netif_tx_lock(dev);
+
+	can_free_echo_skb(dev, 0);
+	sja1000_set_mode(dev, CAN_MODE_START);
+
+	netif_tx_unlock(dev);
 
 	return IRQ_HANDLED;
 }
--
2.34.1
Hi,
Could we please backport commit:
6d2779ecaeb56f92d7105c56772346c71c88c278
("locking/atomic: scripts: fix fallback ifdeffery")
... to the 6.5.y stable tree?
I forgot to Cc stable when I submitted the original patch, and had (mistakenly)
assumed that the Fixes tag was sufficient.
The patch fixes a dentry cache corruption issue observed on arm64 and which is
in theory possible on other architectures. I've received an off-list report
from someone who's hit the issue on the v6.5.y tree specifically.
Thanks,
Mark
On Wed, Oct 04, 2023 at 03:26:42PM +0900, Woo-kwang Lee wrote:
> This issue occurs when connecting a Galaxy S22 to an abnormal SEC Dex Adapter.
> When the abnormal adapter is connected, a kernel panic always occurs after a
> few seconds.
> This occurs because the BOS descriptor cannot be fetched, in which case
> usb_release_bos_descriptor() sets dev->bos = NULL.
>
> - usb_reset_and_verify_device
> - hub_port_init
> - usb_release_bos_descriptor
> - dev->bos = NULL;
>
> hub_port_connect_change() calls portspeed(), and portspeed() calls
> hub_is_superspeedplus().
> Finally, hub_is_superspeedplus() dereferences hdev->bos->ssp_cap.
> It needs to check whether hdev->bos is NULL to prevent a kernel panic.
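>
> A minimal sketch of the kind of guard meant here (illustrative only, not
> the exact hunk of this patch):
>
> 	/* hdev->bos may have been cleared by usb_release_bos_descriptor(),
> 	 * so bail out before dereferencing it
> 	 */
> 	if (!hdev->bos || !hdev->bos->ssp_cap)
> 		return false;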
>
> usb 3-1: new SuperSpeed Gen 1 USB device number 16 using xhci-hcd-exynos
> usb 3-1: unable to get BOS descriptor set
> usb 3-1: Product: USB3.0 Hub
> Unable to handle kernel NULL pointer dereference at virtual address 0000018
>
> Call trace:
> hub_port_connect_change+0x8c/0x538
> port_event+0x244/0x764
> hub_event+0x158/0x474
> process_one_work+0x204/0x550
> worker_thread+0x28c/0x580
> kthread+0x13c/0x178
> ret_from_fork+0x10/0x30
>
> - hub_port_connect_change
> - portspeed
> - hub_is_superspeedplus
>
> Fixes: 0cdd49a1d1a4 ("usb: Support USB 3.1 extended port status request")
> Signed-off-by: Woo-kwang Lee <wookwang.lee(a)samsung.com>
> ---
> drivers/usb/core/hub.h | 2 ++
> 1 file changed, 2 insertions(+)
Are you sure this isn't already fixed by commit f74a7afc224a ("usb: hub:
Guard against accesses to uninitialized BOS descriptors") in linux-next?
thanks,
greg k-h
The MediaTek DRM driver implements GEM PRIME vmap by fetching the
sg_table for the object, iterating through the pages, and then
vmapping them. In essence, unlike the GEM DMA helpers which vmap
when the object is first created or imported, the MediaTek version
does it on request.
Unfortunately, the code never correctly frees the sg_table contents.
This results in a kernel memory leak. On a Hayato device with a text
console on the internal display, this results in the system running
out of memory in a few days from all the console screen cursor updates.
Add sg_free_table() to correctly free the contents of the sg_table. This
was missing despite being explicitly required by mtk_gem_prime_get_sg_table().
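For reference, the cleanup pairing required for a table obtained from
mtk_gem_prime_get_sg_table() looks roughly like this (a sketch only, error
handling elided):

	struct sg_table *sgt = mtk_gem_prime_get_sg_table(obj);

	/* ... walk the entries / vmap the pages ... */

	sg_free_table(sgt);	/* releases the scatterlist arrays inside */
	kfree(sgt);		/* releases the sg_table struct itself */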
Fixes: 3df64d7b0a4f ("drm/mediatek: Implement gem prime vmap/vunmap function")
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Chen-Yu Tsai <wenst(a)chromium.org>
---
Please merge for v6.6 fixes.
Also, I was wondering why the MediaTek DRM driver implements a lot of
the GEM functionality itself, instead of using the GEM DMA helpers.
From what I could tell, the code closely follows the DMA helpers, except
that it vmaps the buffers only upon request.
drivers/gpu/drm/mediatek/mtk_drm_gem.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/gpu/drm/mediatek/mtk_drm_gem.c b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
index 9f364df52478..297ee090e02e 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_gem.c
+++ b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
@@ -239,6 +239,7 @@ int mtk_drm_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map)
 	npages = obj->size >> PAGE_SHIFT;
 	mtk_gem->pages = kcalloc(npages, sizeof(*mtk_gem->pages), GFP_KERNEL);
 	if (!mtk_gem->pages) {
+		sg_free_table(sgt);
 		kfree(sgt);
 		return -ENOMEM;
 	}
@@ -248,11 +249,13 @@
 	mtk_gem->kvaddr = vmap(mtk_gem->pages, npages, VM_MAP,
 			       pgprot_writecombine(PAGE_KERNEL));
 	if (!mtk_gem->kvaddr) {
+		sg_free_table(sgt);
 		kfree(sgt);
 		kfree(mtk_gem->pages);
 		return -ENOMEM;
 	}
 
 out:
+	sg_free_table(sgt);
 	kfree(sgt);
 	iosys_map_set_vaddr(map, mtk_gem->kvaddr);
--
2.42.0.582.g8ccd20d70d-goog
Hello!
I bought a new Fujitsu LifeBook laptop and everything is going fine
on Artix, just the Bluetooth makes trouble:
/var/log/error.log
Sep 30 18:43:48 nexus bluetoothd[2266]:
src/adapter.c:reset_adv_monitors_complete() Failed to reset Adv
Monitors: Failed (0x03)
Sep 30 18:43:48 nexus bluetoothd[2266]: Failed to clear UUIDs: Failed (0x03)
Sep 30 18:43:48 nexus bluetoothd[2266]: Failed to add UUID: Failed (0x03)
Sep 30 18:43:48 nexus bluetoothd[2266]: Failed to add UUID: Failed (0x03)
I searched the web a bit and found a recent commit on kernel.org that
causes the trouble:
https://bugs.archlinux.org/task/78980
follow the links inside the commits there, or read this one:
---------------before------------------------------------
	/* interface numbers are hardcoded in the spec */
	if (intf->cur_altsetting->desc.bInterfaceNumber != 0) {
		if (!(id->driver_info & BTUSB_IFNUM_2))
			return -ENODEV;
		if (intf->cur_altsetting->desc.bInterfaceNumber != 2)
			return -ENODEV;
	}
-----------after----------------------------------------------------
	if ((id->driver_info & BTUSB_IFNUM_2) &&
	    (intf->cur_altsetting->desc.bInterfaceNumber != 0) &&
	    (intf->cur_altsetting->desc.bInterfaceNumber != 2))
		return -ENODEV;
--------------------------------------------------------
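In practice the change means the following (a summary derived from the two
snippets above):

/*
 * before: interface 0 always probes; interface 2 probes only when the
 *         device has BTUSB_IFNUM_2; anything else -> -ENODEV
 * after:  devices with BTUSB_IFNUM_2 probe on interface 0 or 2;
 *         devices WITHOUT the flag now probe on any interface number
 */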
The commit just chains three conditions in a row with && where before
there were two checks nested inside one condition, and the comment was
removed.
Please reconsider this commit.
Yours
E
The patch titled
Subject: hugetlbfs: close race between MADV_DONTNEED and page fault
has been added to the -mm mm-hotfixes-unstable branch. Its filename is
hugetlbfs-close-race-between-madv_dontneed-and-page-fault.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patche…
This patch will later appear in the mm-hotfixes-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: Rik van Riel <riel(a)surriel.com>
Subject: hugetlbfs: close race between MADV_DONTNEED and page fault
Date: Tue, 3 Oct 2023 23:25:04 -0400
Malloc libraries, like jemalloc and tcmalloc, make decisions on when to
call madvise independently from the code in the main application.
This sometimes results in the application page faulting on an address,
right after the malloc library has shot down the backing memory with
MADV_DONTNEED.
Usually this is harmless, because we always have some 4kB pages sitting
around to satisfy a page fault. However, with hugetlbfs systems often
allocate only the exact number of huge pages that the application wants.
Due to TLB batching, hugetlbfs MADV_DONTNEED will free pages outside of
any lock taken on the page fault path, which can open up the following
race condition:
       CPU 1                           CPU 2

       MADV_DONTNEED
       unmap page
       shoot down TLB entry
                                       page fault
                                       fail to allocate a huge page
                                       killed with SIGBUS
       free page
Fix that race by pulling the locking from __unmap_hugepage_range_final
into helper functions called from zap_page_range_single. This ensures
page faults stay locked out of the MADV_DONTNEED VMA until the huge pages
have actually been freed.
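The resulting call pattern in the zap path looks roughly like this (a sketch
of the hunks below, with details elided):

	hugetlb_zap_begin(vma, &range.start, &range.end); /* lock out hugetlb faults */
	/* ... tlb_gather_mmu(), mmu notifiers ... */
	unmap_single_vma(&tlb, vma, address, end, details, false);
	mmu_notifier_invalidate_range_end(&range);
	tlb_finish_mmu(&tlb);            /* huge pages are actually freed here */
	hugetlb_zap_end(vma, details);   /* only now let page faults back in */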
Link: https://lkml.kernel.org/r/20231004032814.3108383-3-riel@surriel.com
Fixes: 04ada095dcfc ("hugetlb: don't delete vma_lock in hugetlb MADV_DONTNEED processing")
Signed-off-by: Rik van Riel <riel(a)surriel.com>
Cc: Matthew Wilcox <willy(a)infradead.org>
Cc: Mike Kravetz <mike.kravetz(a)oracle.com>
Cc: Muchun Song <muchun.song(a)linux.dev>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
include/linux/hugetlb.h | 35 +++++++++++++++++++++++++++++++++--
mm/hugetlb.c | 28 ++++++++++++++++------------
mm/memory.c | 13 ++++++++-----
3 files changed, 57 insertions(+), 19 deletions(-)
--- a/include/linux/hugetlb.h~hugetlbfs-close-race-between-madv_dontneed-and-page-fault
+++ a/include/linux/hugetlb.h
@@ -139,7 +139,7 @@ struct page *hugetlb_follow_page_mask(st
void unmap_hugepage_range(struct vm_area_struct *,
unsigned long, unsigned long, struct page *,
zap_flags_t);
-void __unmap_hugepage_range_final(struct mmu_gather *tlb,
+void __unmap_hugepage_range(struct mmu_gather *tlb,
struct vm_area_struct *vma,
unsigned long start, unsigned long end,
struct page *ref_page, zap_flags_t zap_flags);
@@ -246,6 +246,25 @@ int huge_pmd_unshare(struct mm_struct *m
void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
unsigned long *start, unsigned long *end);
+extern void __hugetlb_zap_begin(struct vm_area_struct *vma,
+ unsigned long *begin, unsigned long *end);
+extern void __hugetlb_zap_end(struct vm_area_struct *vma,
+ struct zap_details *details);
+
+static inline void hugetlb_zap_begin(struct vm_area_struct *vma,
+ unsigned long *start, unsigned long *end)
+{
+ if (is_vm_hugetlb_page(vma))
+ __hugetlb_zap_begin(vma, start, end);
+}
+
+static inline void hugetlb_zap_end(struct vm_area_struct *vma,
+ struct zap_details *details)
+{
+ if (is_vm_hugetlb_page(vma))
+ __hugetlb_zap_end(vma, details);
+}
+
void hugetlb_vma_lock_read(struct vm_area_struct *vma);
void hugetlb_vma_unlock_read(struct vm_area_struct *vma);
void hugetlb_vma_lock_write(struct vm_area_struct *vma);
@@ -298,6 +317,18 @@ static inline void adjust_range_if_pmd_s
{
}
+static inline void hugetlb_zap_begin(
+ struct vm_area_struct *vma,
+ unsigned long *start, unsigned long *end)
+{
+}
+
+static inline void hugetlb_zap_end(
+ struct vm_area_struct *vma,
+ struct zap_details *details)
+{
+}
+
static inline struct page *hugetlb_follow_page_mask(
struct vm_area_struct *vma, unsigned long address, unsigned int flags,
unsigned int *page_mask)
@@ -443,7 +474,7 @@ static inline long hugetlb_change_protec
return 0;
}
-static inline void __unmap_hugepage_range_final(struct mmu_gather *tlb,
+static inline void __unmap_hugepage_range(struct mmu_gather *tlb,
struct vm_area_struct *vma, unsigned long start,
unsigned long end, struct page *ref_page,
zap_flags_t zap_flags)
--- a/mm/hugetlb.c~hugetlbfs-close-race-between-madv_dontneed-and-page-fault
+++ a/mm/hugetlb.c
@@ -5348,9 +5348,9 @@ int move_hugetlb_page_tables(struct vm_a
return len + old_addr - old_end;
}
-static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
- unsigned long start, unsigned long end,
- struct page *ref_page, zap_flags_t zap_flags)
+void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
+ unsigned long start, unsigned long end,
+ struct page *ref_page, zap_flags_t zap_flags)
{
struct mm_struct *mm = vma->vm_mm;
unsigned long address;
@@ -5479,16 +5479,19 @@ static void __unmap_hugepage_range(struc
tlb_flush_mmu_tlbonly(tlb);
}
-void __unmap_hugepage_range_final(struct mmu_gather *tlb,
- struct vm_area_struct *vma, unsigned long start,
- unsigned long end, struct page *ref_page,
- zap_flags_t zap_flags)
+void __hugetlb_zap_begin(struct vm_area_struct *vma,
+ unsigned long *start, unsigned long *end)
{
+ adjust_range_if_pmd_sharing_possible(vma, start, end);
hugetlb_vma_lock_write(vma);
- i_mmap_lock_write(vma->vm_file->f_mapping);
+ if (vma->vm_file)
+ i_mmap_lock_write(vma->vm_file->f_mapping);
+}
- /* mmu notification performed in caller */
- __unmap_hugepage_range(tlb, vma, start, end, ref_page, zap_flags);
+void __hugetlb_zap_end(struct vm_area_struct *vma,
+ struct zap_details *details)
+{
+ zap_flags_t zap_flags = details ? details->zap_flags : 0;
if (zap_flags & ZAP_FLAG_UNMAP) { /* final unmap */
/*
@@ -5501,11 +5504,12 @@ void __unmap_hugepage_range_final(struct
* someone else.
*/
__hugetlb_vma_unlock_write_free(vma);
- i_mmap_unlock_write(vma->vm_file->f_mapping);
} else {
- i_mmap_unlock_write(vma->vm_file->f_mapping);
hugetlb_vma_unlock_write(vma);
}
+
+ if (vma->vm_file)
+ i_mmap_unlock_write(vma->vm_file->f_mapping);
}
void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
--- a/mm/memory.c~hugetlbfs-close-race-between-madv_dontneed-and-page-fault
+++ a/mm/memory.c
@@ -1692,7 +1692,7 @@ static void unmap_single_vma(struct mmu_
if (vma->vm_file) {
zap_flags_t zap_flags = details ?
details->zap_flags : 0;
- __unmap_hugepage_range_final(tlb, vma, start, end,
+ __unmap_hugepage_range(tlb, vma, start, end,
NULL, zap_flags);
}
} else
@@ -1737,8 +1737,12 @@ void unmap_vmas(struct mmu_gather *tlb,
start_addr, end_addr);
mmu_notifier_invalidate_range_start(&range);
do {
- unmap_single_vma(tlb, vma, start_addr, end_addr, &details,
+ unsigned long start = start_addr;
+ unsigned long end = end_addr;
+ hugetlb_zap_begin(vma, &start, &end);
+ unmap_single_vma(tlb, vma, start, end, &details,
mm_wr_locked);
+ hugetlb_zap_end(vma, &details);
} while ((vma = mas_find(mas, tree_end - 1)) != NULL);
mmu_notifier_invalidate_range_end(&range);
}
@@ -1762,9 +1766,7 @@ void zap_page_range_single(struct vm_are
lru_add_drain();
mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm,
address, end);
- if (is_vm_hugetlb_page(vma))
- adjust_range_if_pmd_sharing_possible(vma, &range.start,
- &range.end);
+ hugetlb_zap_begin(vma, &range.start, &range.end);
tlb_gather_mmu(&tlb, vma->vm_mm);
update_hiwater_rss(vma->vm_mm);
mmu_notifier_invalidate_range_start(&range);
@@ -1775,6 +1777,7 @@ void zap_page_range_single(struct vm_are
unmap_single_vma(&tlb, vma, address, end, details, false);
mmu_notifier_invalidate_range_end(&range);
tlb_finish_mmu(&tlb);
+ hugetlb_zap_end(vma, details);
}
/**
_
Patches currently in -mm which might be from riel(a)surriel.com are
hugetlbfs-extend-hugetlb_vma_lock-to-private-vmas.patch
hugetlbfs-close-race-between-madv_dontneed-and-page-fault.patch