blit_x and blit_y are u32, so fbcon currently cannot support fonts
larger than 32x32.
The 32x32 case also needs the shift to be performed on an unsigned
type to properly set bit 31: shifting a signed int into the sign bit
is undefined behavior and triggers "UBSAN: shift-out-of-bounds in
fbcon_set_font", as reported on:
http://lore.kernel.org/all/IA1PR07MB98308653E259A6F2CE94A4AFABCE9@IA1PR07MB…
Kernel Branch: 6.2.0-rc5-next-20230124
Kernel config: https://drive.google.com/file/d/1F-LszDAizEEH0ZX0HcSR06v5q8FPl2Uv/view?usp=…
Reproducer: https://drive.google.com/file/d/1mP1jcLBY7vWCNM60OMf-ogw-urQRjNrm/view?usp=…
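For illustration, a minimal sketch of the undefined shift (variable
names are made up for the example; in the kernel, BIT(nr) expands to
(1UL << (nr))):

	u32 blit_x = info->pixmap.blit_x;
	unsigned int width = 32;	/* a 32-pixel-wide font */

	/* Undefined behavior: the literal 1 is a signed int, so
	 * 1 << 31 shifts into the sign bit and UBSAN complains. */
	if (!(blit_x & (1 << (width - 1))))
		return -EINVAL;

	/* Well defined: BIT() performs an unsigned long shift. */
	if (!(blit_x & BIT(width - 1)))
		return -EINVAL;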
Reported-by: Sanan Hasanov <sanan.hasanov(a)Knights.ucf.edu>
Signed-off-by: Samuel Thibault <samuel.thibault(a)ens-lyon.org>
Fixes: 2d2699d98492 ("fbcon: font setting should check limitation of driver")
Cc: stable(a)vger.kernel.org
---
v1 -> v2:
- Use BIT macro instead of fixing bit test by hand.
- Add Fixes and Cc: stable headers.
Index: linux-6.0/drivers/video/fbdev/core/fbcon.c
===================================================================
--- linux-6.0.orig/drivers/video/fbdev/core/fbcon.c
+++ linux-6.0/drivers/video/fbdev/core/fbcon.c
@@ -2489,9 +2489,12 @@ static int fbcon_set_font(struct vc_data
h > FBCON_SWAP(info->var.rotate, info->var.yres, info->var.xres))
return -EINVAL;
+ if (font->width > 32 || font->height > 32)
+ return -EINVAL;
+
/* Make sure drawing engine can handle the font */
- if (!(info->pixmap.blit_x & (1 << (font->width - 1))) ||
- !(info->pixmap.blit_y & (1 << (font->height - 1))))
+ if (!(info->pixmap.blit_x & BIT(font->width - 1)) ||
+ !(info->pixmap.blit_y & BIT(font->height - 1)))
return -EINVAL;
/* Make sure driver can handle the font length */
From: Andrea Righi <andrea.righi(a)canonical.com>
Commit ebc5951eea499314f6fbbde20e295f1345c67330 upstream.
[ This fixes a performance issue we're seeing in AWS instances when
  running swapoff and using the global readahead algorithm. For a
  particular instance configuration, I/O throughput during swapoff
  is very low without this fix (about 15 MB/s); with this patch it
  reaches 500 MB/s. Tested swapoff with different workloads with
  this patch applied. 5.10 onwards already has this fix ]
In unuse_pte_range() we blindly swap in pages without checking if the
swap entry is already present in the swap cache.

By doing this, the hit/miss ratio used by the swap readahead heuristic
is not properly updated, which leads to suboptimal performance during
swapoff.
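A sketch of the fixed flow (identifiers as in the diff below, with
explanatory comments added):

	/* A swap cache hit here also registers a readahead "hit", so
	 * the swapin_nr_pages() heuristic sees an accurate hit/miss
	 * ratio and can grow the readahead window. */
	page = lookup_swap_cache(entry, vma, addr);
	if (!page) {
		/* Genuine miss: fall back to readahead as before. */
		vmf.vma = vma;
		vmf.address = addr;
		vmf.pmd = pmd;
		page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
					&vmf);
	}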
Tracing the distribution of the readahead size returned by the swap
readahead heuristic during swapoff shows that a small readahead size is
used most of the time as if we had only misses (this happens both with
cluster and vma readahead), for example:
r::swapin_nr_pages(unsigned long offset):unsigned long:$retval
COUNT EVENT
36948 $retval = 8
44151 $retval = 4
49290 $retval = 1
527771 $retval = 2
Checking if the swap entry is present in the swap cache first, instead,
properly updates the readahead statistics, and the heuristic behaves in
a better way during swapoff, selecting a bigger readahead size:
r::swapin_nr_pages(unsigned long offset):unsigned long:$retval
COUNT EVENT
1618 $retval = 1
4960 $retval = 2
41315 $retval = 4
103521 $retval = 8
In terms of swapoff performance the result is the following:
Testing environment
===================
- Host:
    CPU: 1.8GHz Intel Core i7-8565U (quad-core, 8MB cache)
    HDD: PC401 NVMe SK hynix 512GB
    MEM: 16GB

- Guest (kvm):
    8GB of RAM
    virtio block driver
    16GB swap file on ext4 (/swapfile)
Test case
=========
- allocate 85% of memory
- `systemctl hibernate` to force all the pages to be swapped-out to the
swap file
- resume the system
- measure the time that swapoff takes to complete:
# /usr/bin/time swapoff /swapfile
Result (swapoff time)
=====================
                    5.6 vanilla   5.6 w/ this patch
                    -----------   -----------------
cluster-readahead        22.09s              12.19s
vma-readahead            18.20s              15.33s
Conclusion
==========
The specific use case this patch is addressing is to improve swapoff
performance in cloud environments when a VM has been hibernated, resumed
and all the memory needs to be forced back to RAM by disabling swap.
This change better exploits the advantages of the readahead heuristic
during swapoff, and this improvement speeds up the resume process of
such VMs.
[andrea.righi(a)canonical.com: update changelog]
Link: http://lkml.kernel.org/r/20200418084705.GA147642@xps-13
Signed-off-by: Andrea Righi <andrea.righi(a)canonical.com>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Reviewed-by: "Huang, Ying" <ying.huang(a)intel.com>
Cc: Minchan Kim <minchan(a)kernel.org>
Cc: Anchal Agarwal <anchalag(a)amazon.com>
Cc: Hugh Dickins <hughd(a)google.com>
Cc: Vineeth Remanan Pillai <vpillai(a)digitalocean.com>
Cc: Kelley Nielsen <kelleynnn(a)gmail.com>
Link: http://lkml.kernel.org/r/20200416180132.GB3352@xps-13
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
---
mm/swapfile.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index f6964212c6c8..fe5995c38ea4 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1950,10 +1950,14 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
pte_unmap(pte);
swap_map = &si->swap_map[offset];
- vmf.vma = vma;
- vmf.address = addr;
- vmf.pmd = pmd;
- page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE, &vmf);
+ page = lookup_swap_cache(entry, vma, addr);
+ if (!page) {
+ vmf.vma = vma;
+ vmf.address = addr;
+ vmf.pmd = pmd;
+ page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
+ &vmf);
+ }
if (!page) {
if (*swap_map == 0 || *swap_map == SWAP_MAP_BAD)
goto try_next;
--
2.38.1
While running xfstests on the 4.14.304 kernel, we see a warning
generated in one of the ext4 tests, with the following stack trace:
WARNING: CPU: 4 PID: 15332 at mm/util.c:414
kvmalloc_node+0x67/0x70
ext4_expand_extra_isize_ea+0x2b4/0x870 [ext4]
__ext4_expand_extra_isize+0xcb/0x120 [ext4]
ext4_mark_inode_dirty+0x1a5/0x1d0 [ext4]
ext4_ext_truncate+0x1f/0x90 [ext4]
ext4_truncate+0x363/0x400 [ext4]
ext4_setattr+0x392/0xa00 [ext4]
notify_change+0x300/0x420
? ext4_xattr_security_set+0x20/0x20 [ext4]
do_truncate+0x75/0xc0
? ext4_release_file+0xa0/0xa0 [ext4]
path_openat+0x737/0x16f0
do_filp_open+0x9b/0x110
? __check_object_size+0xb4/0x190
? do_sys_open+0x1bd/0x250
do_sys_open+0x1bd/0x250
do_syscall_64+0x67/0x110
entry_SYSCALL_64_after_hwframe+0x59/0xbe
It seems the rebase to 4.14.304 brings in a bunch of ext4 changes.
Commit 73c44f61dab1 ("ext4: allocate extended attribute value in
vmalloc area") tries to allocate a buffer using kvmalloc() with
improper flags, which generates this warning.

To fix this, backport upstream commit 170f26afa048 ("mm: kvmalloc
does not fallback to vmalloc for incompatible gfp flags"). This
removes the WARN_ON and falls back to kmalloc if compatible flags
are not passed.
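The backported change is small; a sketch of the relevant logic in
kvmalloc_node() (paraphrased from the commit, not a verbatim copy):

	/*
	 * vmalloc uses GFP_KERNEL for some of its internal allocations
	 * (e.g. page tables), so falling back to it is only safe when
	 * the caller's flags are GFP_KERNEL-compatible.  For anything
	 * else (such as the GFP_NOFS-style flags filesystems pass),
	 * just attempt kmalloc instead of warning.
	 */
	if ((flags & GFP_KERNEL) != GFP_KERNEL)
		return kmalloc_node(size, flags, node);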
Michal Hocko (1):
mm: kvmalloc does not fallback to vmalloc for incompatible gfp flags
mm/util.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
--
2.38.1
Both f2fs and ext4 end up passing the ciphertext page to
wbc_account_cgroup_owner(). At the moment, the ciphertext page appears
to belong to no cgroup, so it is accounted to the root_mem_cgroup instead
of whatever cgroup the original page was in.
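A simplified model of why that happens (io_owner() is a hypothetical
helper for illustration; the real lookup lives in the writeback and
memcg code):

	/* Writeback attributes I/O to the cgroup owning the page.  A
	 * freshly allocated bounce page has memcg_data == 0, so it
	 * resolves to no memcg and is charged to root_mem_cgroup. */
	static struct mem_cgroup *io_owner(struct page *page)
	{
		return page_memcg(page) ?: root_mem_cgroup;
	}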
It's hard to say how far back this is a bug. The crypto code shared
between ext4 & f2fs was created in May 2015 with commit 0b81d0779072,
but neither filesystem did anything with memcg_data before then. memcg
writeback accounting was added to ext4 in July 2015 in commit 001e4a8775f6
and it wasn't added to f2fs until January 2018 (commit 578c647879f7).
I'm going with the ext4 commit since this is the first commit where
there was a difference in behaviour between encrypted and unencrypted
filesystems.
Fixes: 001e4a8775f6 ("ext4: implement cgroup writeback support")
Cc: stable(a)vger.kernel.org
Signed-off-by: Matthew Wilcox (Oracle) <willy(a)infradead.org>
---
fs/crypto/crypto.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/fs/crypto/crypto.c b/fs/crypto/crypto.c
index e78be66bbf01..a4e76f96f291 100644
--- a/fs/crypto/crypto.c
+++ b/fs/crypto/crypto.c
@@ -205,6 +205,9 @@ struct page *fscrypt_encrypt_pagecache_blocks(struct page *page,
}
SetPagePrivate(ciphertext_page);
set_page_private(ciphertext_page, (unsigned long)page);
+#ifdef CONFIG_MEMCG
+ ciphertext_page->memcg_data = page->memcg_data;
+#endif
return ciphertext_page;
}
EXPORT_SYMBOL(fscrypt_encrypt_pagecache_blocks);
--
2.35.1
After the qmp phy driver was split, it looks like 5.15.y stable kernels
aren't getting fixes like commit 7a7d86d14d07 ("phy: qcom-qmp-combo: fix
broken power on"), which is tagged for stable 5.10. Trogdor boards use
the qmp phy on 5.15.y kernels, so I backported the fixes I could find
that looked like something we might trip over at some point.

USB and DP work on my Trogdor/Lazor board with this set.
Changes from v1 (https://lore.kernel.org/r/20230113005405.3992011-1-swboyd@chromium.org):
* New patch for memleak on probe deferral to avoid compat issues
* Update "fix broken power on" patch for pcie/ufs phy
Johan Hovold (5):
phy: qcom-qmp-combo: disable runtime PM on unbind
phy: qcom-qmp-combo: fix memleak on probe deferral
phy: qcom-qmp-usb: fix memleak on probe deferral
phy: qcom-qmp-combo: fix broken power on
phy: qcom-qmp-combo: fix runtime suspend
drivers/phy/qualcomm/phy-qcom-qmp.c | 97 ++++++++++++++++++-----------
1 file changed, 61 insertions(+), 36 deletions(-)
Cc: Johan Hovold <johan+linaro(a)kernel.org>
Cc: Dmitry Baryshkov <dmitry.baryshkov(a)linaro.org>
Cc: Vinod Koul <vkoul(a)kernel.org>
base-commit: d57287729e229188e7d07ef0117fe927664e08cb
--
https://chromeos.dev
netvsc_dma_map() and netvsc_dma_unmap() currently check the cp_partial
flag and adjust the page_count so that pagebuf entries for the RNDIS
portion of the message are skipped when it has already been copied into
a send buffer. But this adjustment has already been made by code in
netvsc_send(). The duplicate adjustment causes some pagebuf entries to
not be mapped. In a normal VM, this doesn't break anything because the
mapping doesn't change the PFN. But in a Confidential VM,
dma_map_single() does bounce buffering and provides a different PFN.
Failing to do the mapping causes the wrong PFN to be passed to Hyper-V,
and various errors ensue.
Fix this by removing the duplicate adjustment in netvsc_dma_map() and
netvsc_dma_unmap().
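A worked example of the double adjustment (the numbers are made up):

	/* A packet with 5 pagebuf entries, one of them the RNDIS
	 * message (rmsg_pgcnt = 1), sent with cp_partial set.
	 * netvsc_send() has already skipped the RNDIS entry, so
	 * packet->page_buf_cnt is 4 by the time the DMA map runs. */
	u32 page_buf_cnt = 4;	/* already adjusted by netvsc_send() */
	u32 rmsg_pgcnt = 1;
	bool cp_partial = true;

	/* Buggy: subtracts rmsg_pgcnt a second time, so only 3 of the
	 * 4 remaining entries get mapped and one reaches Hyper-V with
	 * the wrong (unmapped) PFN. */
	u32 buggy = cp_partial ? page_buf_cnt - rmsg_pgcnt : page_buf_cnt;

	/* Fixed: trust the already-adjusted count and map all 4. */
	u32 fixed = page_buf_cnt;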
Fixes: 846da38de0e8 ("net: netvsc: Add Isolation VM support for netvsc driver")
Cc: stable(a)vger.kernel.org
Signed-off-by: Michael Kelley <mikelley(a)microsoft.com>
---
drivers/net/hyperv/netvsc.c | 9 ++-------
1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
index 661bbe6..8acfa40 100644
--- a/drivers/net/hyperv/netvsc.c
+++ b/drivers/net/hyperv/netvsc.c
@@ -949,9 +949,6 @@ static void netvsc_copy_to_send_buf(struct netvsc_device *net_device,
void netvsc_dma_unmap(struct hv_device *hv_dev,
struct hv_netvsc_packet *packet)
{
- u32 page_count = packet->cp_partial ?
- packet->page_buf_cnt - packet->rmsg_pgcnt :
- packet->page_buf_cnt;
int i;
if (!hv_is_isolation_supported())
@@ -960,7 +957,7 @@ void netvsc_dma_unmap(struct hv_device *hv_dev,
if (!packet->dma_range)
return;
- for (i = 0; i < page_count; i++)
+ for (i = 0; i < packet->page_buf_cnt; i++)
dma_unmap_single(&hv_dev->device, packet->dma_range[i].dma,
packet->dma_range[i].mapping_size,
DMA_TO_DEVICE);
@@ -990,9 +987,7 @@ static int netvsc_dma_map(struct hv_device *hv_dev,
struct hv_netvsc_packet *packet,
struct hv_page_buffer *pb)
{
- u32 page_count = packet->cp_partial ?
- packet->page_buf_cnt - packet->rmsg_pgcnt :
- packet->page_buf_cnt;
+ u32 page_count = packet->page_buf_cnt;
dma_addr_t dma;
int i;
--
1.8.3.1