Add camcc support and Regera PLL ops. Also, fix the pll post div mask.
Signed-off-by: Satya Priya Kakitapalli <quic_skakitap(a)quicinc.com>
---
Changes in v2:
- As per Konrad's comments, re-use the Zonda PLL code for Regera, as
  both are mostly the same.
- Fix the zonda_set_rate API and the pll_post_div shift used in the
  Trion PLL post-div set_rate API
- Link to v1: https://lore.kernel.org/r/20240229-camcc-support-sm8150-v1-0-8c28c6c87990@q…
---
Satya Priya Kakitapalli (5):
clk: qcom: alpha-pll: Fix the pll post div mask and shift
clk: qcom: clk-alpha-pll: Update set_rate for Zonda PLL
dt-bindings: clock: qcom: Add SM8150 camera clock controller
clk: qcom: Add camera clock controller driver for SM8150
arm64: dts: qcom: Add camera clock controller for sm8150
Taniya Das (1):
clk: qcom: clk-alpha-pll: Add support for Regera PLL ops
.../bindings/clock/qcom,sm8150-camcc.yaml | 77 +
arch/arm64/boot/dts/qcom/sa8155p.dtsi | 4 +
arch/arm64/boot/dts/qcom/sm8150.dtsi | 13 +
drivers/clk/qcom/Kconfig | 9 +
drivers/clk/qcom/Makefile | 1 +
drivers/clk/qcom/camcc-sm8150.c | 2159 ++++++++++++++++++++
drivers/clk/qcom/clk-alpha-pll.c | 56 +-
drivers/clk/qcom/clk-alpha-pll.h | 5 +
include/dt-bindings/clock/qcom,sm8150-camcc.h | 135 ++
9 files changed, 2455 insertions(+), 4 deletions(-)
---
base-commit: 20af1ca418d2c0b11bc2a1fe8c0c88f67bcc2a7e
change-id: 20240229-camcc-support-sm8150-d3f72a4a1a2b
Best regards,
--
Satya Priya Kakitapalli <quic_skakitap(a)quicinc.com>
The patch titled
Subject: mm: fix crashes from deferred split racing folio migration
has been added to the -mm mm-hotfixes-unstable branch. Its filename is
mm-fix-crashes-from-deferred-split-racing-folio-migration.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patche…
This patch will later appear in the mm-hotfixes-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: Hugh Dickins <hughd(a)google.com>
Subject: mm: fix crashes from deferred split racing folio migration
Date: Tue, 2 Jul 2024 00:40:55 -0700 (PDT)
Even on 6.10-rc6, I've been seeing elusive "Bad page state"s (often on
flags when freeing, yet the flags shown are not bad: PG_locked had been
set and cleared??), and VM_BUG_ON_PAGE(page_ref_count(page) == 0)s from
deferred_split_scan()'s folio_put(), and a variety of other BUG and WARN
symptoms implying double free by deferred split and large folio migration.
6.7 commit 9bcef5973e31 ("mm: memcg: fix split queue list crash when large
folio migration") was right to fix the memcg-dependent locking broken in
85ce2c517ade ("memcontrol: only transfer the memcg data for migration"),
but missed a subtlety of deferred_split_scan(): it moves folios to its own
local list to work on them without split_queue_lock, during which time
folio->_deferred_list is not empty, but even the "right" lock does nothing
to secure the folio and the list it is on.
Fortunately, deferred_split_scan() is careful to use folio_try_get(): so
folio_migrate_mapping() can avoid the race by calling
folio_undo_large_rmappable() while the old folio's reference count is
temporarily frozen to 0 - adding such a freeze in the !mapping case too
(originally, folio lock and unmapping and no swap cache left an anon folio
unreachable, so no freezing was needed there: but the deferred split queue
offers a way to reach it).
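Spelled out, the problematic interleaving looks roughly like this (a
simplified sketch of the description above, not an exact call trace):

	/*
	 * deferred_split_scan()              large folio migration
	 * ---------------------              ---------------------
	 * spin_lock(split_queue_lock)
	 * move folio to a local list
	 * folio_try_get(folio)
	 * spin_unlock(split_queue_lock)
	 * work on the local list,            mem_cgroup_migrate()
	 * without any lock held ...            folio_undo_large_rmappable(old)
	 *                                      unqueues a folio the scanner is
	 *                                      still using; the old folio is
	 *                                      then freed after migration
	 *                                      -> double free, bad page state
	 *
	 * The fix unqueues in folio_migrate_mapping() instead, while the
	 * old folio's refcount is frozen to 0: the freeze succeeds only if
	 * nobody else (such as the scanner) holds a reference, and while
	 * frozen, folio_try_get() fails, so the scanner cannot take the
	 * folio either.
	 */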
Link: https://lkml.kernel.org/r/29c83d1a-11ca-b6c9-f92e-6ccb322af510@google.com
Fixes: 9bcef5973e31 ("mm: memcg: fix split queue list crash when large folio migration")
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Reviewed-by: Baolin Wang <baolin.wang(a)linux.alibaba.com>
Cc: Barry Song <baohua(a)kernel.org>
Cc: David Hildenbrand <david(a)redhat.com>
Cc: Hugh Dickins <hughd(a)google.com>
Cc: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Cc: Matthew Wilcox (Oracle) <willy(a)infradead.org>
Cc: Nhat Pham <nphamcs(a)gmail.com>
Cc: Yang Shi <shy828301(a)gmail.com>
Cc: Zi Yan <ziy(a)nvidia.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/memcontrol.c | 11 -----------
mm/migrate.c | 13 +++++++++++++
2 files changed, 13 insertions(+), 11 deletions(-)
--- a/mm/memcontrol.c~mm-fix-crashes-from-deferred-split-racing-folio-migration
+++ a/mm/memcontrol.c
@@ -7823,17 +7823,6 @@ void mem_cgroup_migrate(struct folio *ol
/* Transfer the charge and the css ref */
commit_charge(new, memcg);
- /*
- * If the old folio is a large folio and is in the split queue, it needs
- * to be removed from the split queue now, in case getting an incorrect
- * split queue in destroy_large_folio() after the memcg of the old folio
- * is cleared.
- *
- * In addition, the old folio is about to be freed after migration, so
- * removing from the split queue a bit earlier seems reasonable.
- */
- if (folio_test_large(old) && folio_test_large_rmappable(old))
- folio_undo_large_rmappable(old);
old->memcg_data = 0;
}
--- a/mm/migrate.c~mm-fix-crashes-from-deferred-split-racing-folio-migration
+++ a/mm/migrate.c
@@ -415,6 +415,15 @@ int folio_migrate_mapping(struct address
if (folio_ref_count(folio) != expected_count)
return -EAGAIN;
+ /* Take off deferred split queue while frozen and memcg set */
+ if (folio_test_large(folio) &&
+ folio_test_large_rmappable(folio)) {
+ if (!folio_ref_freeze(folio, expected_count))
+ return -EAGAIN;
+ folio_undo_large_rmappable(folio);
+ folio_ref_unfreeze(folio, expected_count);
+ }
+
/* No turning back from here */
newfolio->index = folio->index;
newfolio->mapping = folio->mapping;
@@ -433,6 +442,10 @@ int folio_migrate_mapping(struct address
return -EAGAIN;
}
+ /* Take off deferred split queue while frozen and memcg set */
+ if (folio_test_large(folio) && folio_test_large_rmappable(folio))
+ folio_undo_large_rmappable(folio);
+
/*
* Now we know that no one else is looking at the folio:
* no turning back from here.
_
Patches currently in -mm which might be from hughd(a)google.com are
mm-fix-crashes-from-deferred-split-racing-folio-migration.patch
Since the introduction of mTHP, the documentation has stated that
khugepaged would be enabled when any mTHP size is enabled, and disabled
when all mTHP sizes are disabled. There are two problems with this:
(1) it is not what the code actually implements, and (2) it is not the
desirable behavior.
Desirable behavior is for khugepaged to be enabled when any PMD-sized
THP is enabled, anon or file. (Note that file THP is still controlled by
the top-level control so we must always consider that, as well as the
PMD-size mTHP control for anon). khugepaged only supports collapsing to
PMD-sized THP so there is no value in enabling it when PMD-sized THP is
disabled. So let's change the code and documentation to reflect this
policy.
Further, per-size enabled control modification events were not
previously forwarded to khugepaged to give it an opportunity to start or
stop. Consequently, the following sequence erroneously failed to
activate khugepaged:
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo always > /sys/kernel/mm/transparent_hugepage/hugepages-2048kB/enabled
Signed-off-by: Ryan Roberts <ryan.roberts(a)arm.com>
Fixes: 3485b88390b0 ("mm: thp: introduce multi-size THP sysfs interface")
Closes: https://lore.kernel.org/linux-mm/7a0bbe69-1e3d-4263-b206-da007791a5c4@redha…
Cc: stable(a)vger.kernel.org
---
Hi All,
Applies on top of today's mm-unstable (9bb8753acdd8). No regressions observed in
mm selftests.
When fixing this I also noticed that khugepaged doesn't get (and never has been)
activated/deactivated by `shmem_enabled=`. I'm not sure if khugepaged knows how
to collapse shmem - perhaps it should be activated in this case?
Thanks,
Ryan
Documentation/admin-guide/mm/transhuge.rst | 11 +++++------
include/linux/huge_mm.h | 13 +++++++------
mm/huge_memory.c | 7 +++++++
mm/khugepaged.c | 13 ++++++-------
4 files changed, 25 insertions(+), 19 deletions(-)
diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index 709fe10b60f4..fc321d40b8ac 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -202,12 +202,11 @@ PMD-mappable transparent hugepage::
cat /sys/kernel/mm/transparent_hugepage/hpage_pmd_size
-khugepaged will be automatically started when one or more hugepage
-sizes are enabled (either by directly setting "always" or "madvise",
-or by setting "inherit" while the top-level enabled is set to "always"
-or "madvise"), and it'll be automatically shutdown when the last
-hugepage size is disabled (either by directly setting "never", or by
-setting "inherit" while the top-level enabled is set to "never").
+khugepaged will be automatically started when PMD-sized THP is enabled
+(either of the per-size anon control or the top-level control are set
+to "always" or "madvise"), and it'll be automatically shutdown when
+PMD-sized THP is disabled (when both the per-size anon control and the
+top-level control are "never")
Khugepaged controls
-------------------
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 4d155c7a4792..ce1b47b49cc3 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -128,16 +128,17 @@ static inline bool hugepage_global_always(void)
(1<<TRANSPARENT_HUGEPAGE_FLAG);
}
-static inline bool hugepage_flags_enabled(void)
+static inline bool hugepage_pmd_enabled(void)
{
/*
- * We cover both the anon and the file-backed case here; we must return
- * true if globally enabled, even when all anon sizes are set to never.
- * So we don't need to look at huge_anon_orders_inherit.
+ * We cover both the anon and the file-backed case here; for
+ * file-backed, we must return true if globally enabled, regardless of
+ * the anon pmd size control status. So we don't need to look at
+ * huge_anon_orders_inherit.
*/
return hugepage_global_enabled() ||
- READ_ONCE(huge_anon_orders_always) ||
- READ_ONCE(huge_anon_orders_madvise);
+ test_bit(PMD_ORDER, &huge_anon_orders_always) ||
+ test_bit(PMD_ORDER, &huge_anon_orders_madvise);
}
static inline int highest_order(unsigned long orders)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 251d6932130f..085f5e973231 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -502,6 +502,13 @@ static ssize_t thpsize_enabled_store(struct kobject *kobj,
} else
ret = -EINVAL;
+ if (ret > 0) {
+ int err;
+
+ err = start_stop_khugepaged();
+ if (err)
+ ret = err;
+ }
return ret;
}
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 409f67a817f1..708d0e74b61f 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -449,7 +449,7 @@ void khugepaged_enter_vma(struct vm_area_struct *vma,
unsigned long vm_flags)
{
if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags) &&
- hugepage_flags_enabled()) {
+ hugepage_pmd_enabled()) {
if (thp_vma_allowable_order(vma, vm_flags, TVA_ENFORCE_SYSFS,
PMD_ORDER))
__khugepaged_enter(vma->vm_mm);
@@ -2462,8 +2462,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
static int khugepaged_has_work(void)
{
- return !list_empty(&khugepaged_scan.mm_head) &&
- hugepage_flags_enabled();
+ return !list_empty(&khugepaged_scan.mm_head) && hugepage_pmd_enabled();
}
static int khugepaged_wait_event(void)
@@ -2536,7 +2535,7 @@ static void khugepaged_wait_work(void)
return;
}
- if (hugepage_flags_enabled())
+ if (hugepage_pmd_enabled())
wait_event_freezable(khugepaged_wait, khugepaged_wait_event());
}
@@ -2567,7 +2566,7 @@ static void set_recommended_min_free_kbytes(void)
int nr_zones = 0;
unsigned long recommended_min;
- if (!hugepage_flags_enabled()) {
+ if (!hugepage_pmd_enabled()) {
calculate_min_free_kbytes();
goto update_wmarks;
}
@@ -2617,7 +2616,7 @@ int start_stop_khugepaged(void)
int err = 0;
mutex_lock(&khugepaged_mutex);
- if (hugepage_flags_enabled()) {
+ if (hugepage_pmd_enabled()) {
if (!khugepaged_thread)
khugepaged_thread = kthread_run(khugepaged, NULL,
"khugepaged");
@@ -2643,7 +2642,7 @@ int start_stop_khugepaged(void)
void khugepaged_min_free_kbytes_update(void)
{
mutex_lock(&khugepaged_mutex);
- if (hugepage_flags_enabled() && khugepaged_thread)
+ if (hugepage_pmd_enabled() && khugepaged_thread)
set_recommended_min_free_kbytes();
mutex_unlock(&khugepaged_mutex);
}
--
2.43.0
The for_each_child_of_node() macro requires an explicit call to
of_node_put() on early exits to decrement the child refcount and avoid a
memory leak.
The child node is not required outside the loop, and the reference must be
released before the function returns.
Add the missing of_node_put().
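For reference, the required pattern looks like this (a minimal sketch
of the rule, with found_what_we_need() standing in for whatever
condition triggers the early exit):

	for_each_child_of_node(np, child) {
		if (found_what_we_need(child)) {
			/* the iterator holds a reference on 'child';
			 * drop it by hand on every early exit */
			of_node_put(child);
			return true;
		}
	}
	/* normal termination: the macro already dropped the reference */
	return false;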
Cc: stable(a)vger.kernel.org
Fixes: 82e82130a78b ("usb: core: Set connect_type of ports based on DT node")
Signed-off-by: Javier Carrasco <javier.carrasco.cruz(a)gmail.com>
---
This bug was found while doing some code analysis, and I could not test
it with real hardware. Although the issue and its solution are
straightforward, any validation beyond compilation and static analysis
is always welcome.
---
drivers/usb/core/of.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/usb/core/of.c b/drivers/usb/core/of.c
index f1a499ee482c..763e4122ed5b 100644
--- a/drivers/usb/core/of.c
+++ b/drivers/usb/core/of.c
@@ -84,9 +84,12 @@ static bool usb_of_has_devices_or_graph(const struct usb_device *hub)
if (of_graph_is_present(np))
return true;
- for_each_child_of_node(np, child)
- if (of_property_present(child, "reg"))
+ for_each_child_of_node(np, child) {
+ if (of_property_present(child, "reg")) {
+ of_node_put(child);
return true;
+ }
+ }
return false;
}
---
base-commit: 62c97045b8f720c2eac807a5f38e26c9ed512371
change-id: 20240624-usb_core_of_memleak-79161623b62e
Best regards,
--
Javier Carrasco <javier.carrasco.cruz(a)gmail.com>
The acpi_cst_latency_cmp comparison function currently used for sorting
C-state latencies does not satisfy transitivity, causing incorrect
sorting results. Specifically, if there are two valid acpi_processor_cx
elements A and B and one invalid element C, it may occur that A < B,
A = C, and B = C. Sorting algorithms rely on transitivity, so from
A < B and A = C they may conclude C < B; when the comparator instead
reports B = C, the resulting order is unpredictable.
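To make the violation concrete, here is a minimal userspace sketch
(hypothetical latency values; the comparator logic mirrors the old
kernel code):

	#include <stdio.h>

	struct cx { int valid; unsigned int latency; };

	static int old_cmp(const struct cx *x, const struct cx *y)
	{
		if (!(x->valid && y->valid))
			return 0;
		if (x->latency > y->latency)
			return 1;
		if (x->latency < y->latency)
			return -1;
		return 0;
	}

	int main(void)
	{
		struct cx a = { 1, 20 }, b = { 1, 10 }, c = { 0, 0 };

		/* prints "1 0 0": A > B, yet A = C and B = C, so the
		 * comparator does not define a consistent total order */
		printf("%d %d %d\n", old_cmp(&a, &b), old_cmp(&a, &c),
		       old_cmp(&b, &c));
		return 0;
	}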
Given the small size of the array (<=8), we replace the library sort
function with a simple insertion sort that properly ignores invalid
elements and sorts valid ones based on latency. This change ensures
correct ordering of the C-state latencies.
Fixes: 65ea8f2c6e23 ("ACPI: processor idle: Fix up C-state latency if not ordered")
Cc: stable(a)vger.kernel.org
Reported-by: Julian Sikorski <belegdol(a)gmail.com>
Closes: https://lore.kernel.org/lkml/70674dc7-5586-4183-8953-8095567e73df@gmail.com/
Signed-off-by: Kuan-Wei Chiu <visitorckw(a)gmail.com>
---
v2 -> v3:
- Remove #include <linux/sort.h>
- Cc @stable
Note: I only performed a build test and a simple unit test verifying
that the latencies of valid elements are correctly sorted on randomly
generated data.
drivers/acpi/processor_idle.c | 36 ++++++++++++++---------------------
1 file changed, 14 insertions(+), 22 deletions(-)
diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
index bd6a7857ce05..17cc81340b4b 100644
--- a/drivers/acpi/processor_idle.c
+++ b/drivers/acpi/processor_idle.c
@@ -16,7 +16,6 @@
#include <linux/acpi.h>
#include <linux/dmi.h>
#include <linux/sched.h> /* need_resched() */
-#include <linux/sort.h>
#include <linux/tick.h>
#include <linux/cpuidle.h>
#include <linux/cpu.h>
@@ -386,25 +385,21 @@ static void acpi_processor_power_verify_c3(struct acpi_processor *pr,
acpi_write_bit_register(ACPI_BITREG_BUS_MASTER_RLD, 1);
}
-static int acpi_cst_latency_cmp(const void *a, const void *b)
+static void acpi_cst_latency_sort(struct acpi_processor_cx *arr, size_t length)
{
- const struct acpi_processor_cx *x = a, *y = b;
+ int i, j, k;
- if (!(x->valid && y->valid))
- return 0;
- if (x->latency > y->latency)
- return 1;
- if (x->latency < y->latency)
- return -1;
- return 0;
-}
-static void acpi_cst_latency_swap(void *a, void *b, int n)
-{
- struct acpi_processor_cx *x = a, *y = b;
-
- if (!(x->valid && y->valid))
- return;
- swap(x->latency, y->latency);
+ for (i = 1; i < length; i++) {
+ if (!arr[i].valid)
+ continue;
+ for (j = i - 1, k = i; j >= 0; j--) {
+ if (!arr[j].valid)
+ continue;
+ if (arr[j].latency > arr[k].latency)
+ swap(arr[j].latency, arr[k].latency);
+ k = j;
+ }
+ }
}
static int acpi_processor_power_verify(struct acpi_processor *pr)
@@ -449,10 +444,7 @@ static int acpi_processor_power_verify(struct acpi_processor *pr)
if (buggy_latency) {
pr_notice("FW issue: working around C-state latencies out of order\n");
- sort(&pr->power.states[1], max_cstate,
- sizeof(struct acpi_processor_cx),
- acpi_cst_latency_cmp,
- acpi_cst_latency_swap);
+ acpi_cst_latency_sort(&pr->power.states[1], max_cstate);
}
lapic_timer_propagate_broadcast(pr);
--
2.34.1