Hi Sasha,
On Sat, 13 Dec 2025 04:49:42 -0500
Sasha Levin <sashal(a)kernel.org> wrote:
> This is a note to let you know that I've just added the patch titled
>
> RAS: Report all ARM processor CPER information to userspace
>
> to the 6.18-stable tree which can be found at:
> http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
>
> The filename of the patch is:
> ras-report-all-arm-processor-cper-information-to-use.patch
> and it can be found in the queue-6.18 subdirectory.
>
> If you, or anyone else, feels it should not be added to the stable tree,
> please let <stable(a)vger.kernel.org> know about it.
You should also backport this patch(*):
96b010536ee0 efi/cper: align ARM CPER type with UEFI 2.9A/2.10 specs
It fixes a bug in the UEFI parser for the ARM Processor Error record:
basically, the specs were not clear about how the error type should be
reported. The kernel implementation was assuming that this is an enum,
but UEFI errata 2.9A makes it clear that the value is a bitmap.
So all kernels up to 6.18 are not parsing the field as expected:
only "Cache error" was properly reported; the other three types were
wrong.
(*) You may need to backport these patches as well:
a976d790f494 efi/cper: Add a new helper function to print bitmasks
8ad2c72e21ef efi/cper: Adjust infopfx size to accept an extra space
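To illustrate the difference, here is a minimal standalone sketch (not the
kernel's CPER parser; the type strings and bit positions are placeholders),
showing that decoding the field as an enum reports at most one error type,
while decoding it as a bitmap reports every type that is set:

```c
#include <stdio.h>

/*
 * Illustrative sketch only: the strings and bit positions below are not the
 * real CPER definitions, they just show why enum vs. bitmap decoding differs.
 */
static const char * const arm_err_types[] = {
	"Cache error",
	"TLB error",
	"Bus error",
	"Micro-architectural error",
};

/* Old (wrong) interpretation: the field is an enum, so at most one type. */
static void print_type_as_enum(unsigned int type)
{
	if (type < 4)
		printf("%s\n", arm_err_types[type]);
}

/* Errata 2.9A interpretation: the field is a bitmap, report every set bit. */
static void print_type_as_bitmap(unsigned int type)
{
	unsigned int i;

	for (i = 0; i < 4; i++)
		if (type & (1U << i))
			printf("%s\n", arm_err_types[i]);
}

int main(void)
{
	unsigned int type = 0x06;	/* two error types reported at once */

	print_type_as_enum(type);	/* prints nothing: 6 is not a valid enum */
	print_type_as_bitmap(type);	/* prints "TLB error" and "Bus error" */
	return 0;
}
```

With a combined value like the one above, an enum-style decoder misses the
types entirely, which is the kind of misreporting described above.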
Regards,
Mauro
On 12/15/25 09:37, Sasha Levin wrote:
> This is a note to let you know that I've just added the patch titled
>
> block: fix cached zone reports on devices with native zone append
>
> to the 6.18-stable tree which can be found at:
> http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
>
> The filename of the patch is:
> block-fix-cached-zone-reports-on-devices-with-native.patch
> and it can be found in the queue-6.18 subdirectory.
>
> If you, or anyone else, feels it should not be added to the stable tree,
> please let <stable(a)vger.kernel.org> know about it.
Sasha,
This is a fix for a new feature that was queued for 6.19 and has now been
merged there, so backporting it to stable and LTS kernels is not advisable.
--
Damien Le Moal
Western Digital Research
On 12/13/25 20:09, Sasha Levin wrote:
> This is a note to let you know that I've just added the patch titled
>
> block: mq-deadline: Remove support for zone write locking
>
> to the 6.6-stable tree which can be found at:
> http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
>
> The filename of the patch is:
> block-mq-deadline-remove-support-for-zone-write-lock.patch
> and it can be found in the queue-6.6 subdirectory.
>
> If you, or anyone else, feels it should not be added to the stable tree,
> please let <stable(a)vger.kernel.org> know about it.
Sasha,
Zone write locking in the mq-deadline scheduler was replaced with the generic
zone write plugging in the block layer in 6.10. That was not backported, as it
is a new feature. So removing zone write locking in 6.6 will break support for
SMR drives and other zoned block devices; removing it from 6.6 is thus not OK.
Please undo this.
> commit bf2022eaa2291ad1243b0711d5bd03ba4105ffbb
> Author: Damien Le Moal <dlemoal(a)kernel.org>
> Date: Mon Apr 8 10:41:21 2024 +0900
>
> block: mq-deadline: Remove support for zone write locking
>
> [ Upstream commit fde02699c242e88a71286677d27cc890a959b67f ]
>
> With the block layer generic plugging of write operations for zoned
> block devices, mq-deadline, or any other scheduler, can only ever
> see at most one write operation per zone at any time. There is thus no
> sequentiality requirements for these writes and thus no need to tightly
> control the dispatching of write requests using zone write locking.
>
> Remove all the code that implement this control in the mq-deadline
> scheduler and remove advertizing support for the
> ELEVATOR_F_ZBD_SEQ_WRITE elevator feature.
>
> Signed-off-by: Damien Le Moal <dlemoal(a)kernel.org>
> Reviewed-by: Hannes Reinecke <hare(a)suse.de>
> Reviewed-by: Christoph Hellwig <hch(a)lst.de>
> Reviewed-by: Bart Van Assche <bvanassche(a)acm.org>
> Tested-by: Hans Holmberg <hans.holmberg(a)wdc.com>
> Tested-by: Dennis Maisenbacher <dennis.maisenbacher(a)wdc.com>
> Reviewed-by: Martin K. Petersen <martin.petersen(a)oracle.com>
> Link: https://lore.kernel.org/r/20240408014128.205141-22-dlemoal@kernel.org
> Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
> Stable-dep-of: d60055cf5270 ("block/mq-deadline: Switch back to a single dispatch list")
> Signed-off-by: Sasha Levin <sashal(a)kernel.org>
>
> diff --git a/block/mq-deadline.c b/block/mq-deadline.c
> index 78a8aa204c156..23638b03d7b3d 100644
> --- a/block/mq-deadline.c
> +++ b/block/mq-deadline.c
> @@ -102,7 +102,6 @@ struct deadline_data {
> int prio_aging_expire;
>
> spinlock_t lock;
> - spinlock_t zone_lock;
> };
>
> /* Maps an I/O priority class to a deadline scheduler priority. */
> @@ -157,8 +156,7 @@ deadline_latter_request(struct request *rq)
> }
>
> /*
> - * Return the first request for which blk_rq_pos() >= @pos. For zoned devices,
> - * return the first request after the start of the zone containing @pos.
> + * Return the first request for which blk_rq_pos() >= @pos.
> */
> static inline struct request *deadline_from_pos(struct dd_per_prio *per_prio,
> enum dd_data_dir data_dir, sector_t pos)
> @@ -170,14 +168,6 @@ static inline struct request *deadline_from_pos(struct dd_per_prio *per_prio,
> return NULL;
>
> rq = rb_entry_rq(node);
> - /*
> - * A zoned write may have been requeued with a starting position that
> - * is below that of the most recently dispatched request. Hence, for
> - * zoned writes, start searching from the start of a zone.
> - */
> - if (blk_rq_is_seq_zoned_write(rq))
> - pos = round_down(pos, rq->q->limits.chunk_sectors);
> -
> while (node) {
> rq = rb_entry_rq(node);
> if (blk_rq_pos(rq) >= pos) {
> @@ -308,36 +298,6 @@ static inline bool deadline_check_fifo(struct dd_per_prio *per_prio,
> return time_is_before_eq_jiffies((unsigned long)rq->fifo_time);
> }
>
> -/*
> - * Check if rq has a sequential request preceding it.
> - */
> -static bool deadline_is_seq_write(struct deadline_data *dd, struct request *rq)
> -{
> - struct request *prev = deadline_earlier_request(rq);
> -
> - if (!prev)
> - return false;
> -
> - return blk_rq_pos(prev) + blk_rq_sectors(prev) == blk_rq_pos(rq);
> -}
> -
> -/*
> - * Skip all write requests that are sequential from @rq, even if we cross
> - * a zone boundary.
> - */
> -static struct request *deadline_skip_seq_writes(struct deadline_data *dd,
> - struct request *rq)
> -{
> - sector_t pos = blk_rq_pos(rq);
> -
> - do {
> - pos += blk_rq_sectors(rq);
> - rq = deadline_latter_request(rq);
> - } while (rq && blk_rq_pos(rq) == pos);
> -
> - return rq;
> -}
> -
> /*
> * For the specified data direction, return the next request to
> * dispatch using arrival ordered lists.
> @@ -346,40 +306,10 @@ static struct request *
> deadline_fifo_request(struct deadline_data *dd, struct dd_per_prio *per_prio,
> enum dd_data_dir data_dir)
> {
> - struct request *rq, *rb_rq, *next;
> - unsigned long flags;
> -
> if (list_empty(&per_prio->fifo_list[data_dir]))
> return NULL;
>
> - rq = rq_entry_fifo(per_prio->fifo_list[data_dir].next);
> - if (data_dir == DD_READ || !blk_queue_is_zoned(rq->q))
> - return rq;
> -
> - /*
> - * Look for a write request that can be dispatched, that is one with
> - * an unlocked target zone. For some HDDs, breaking a sequential
> - * write stream can lead to lower throughput, so make sure to preserve
> - * sequential write streams, even if that stream crosses into the next
> - * zones and these zones are unlocked.
> - */
> - spin_lock_irqsave(&dd->zone_lock, flags);
> - list_for_each_entry_safe(rq, next, &per_prio->fifo_list[DD_WRITE],
> - queuelist) {
> - /* Check whether a prior request exists for the same zone. */
> - rb_rq = deadline_from_pos(per_prio, data_dir, blk_rq_pos(rq));
> - if (rb_rq && blk_rq_pos(rb_rq) < blk_rq_pos(rq))
> - rq = rb_rq;
> - if (blk_req_can_dispatch_to_zone(rq) &&
> - (blk_queue_nonrot(rq->q) ||
> - !deadline_is_seq_write(dd, rq)))
> - goto out;
> - }
> - rq = NULL;
> -out:
> - spin_unlock_irqrestore(&dd->zone_lock, flags);
> -
> - return rq;
> + return rq_entry_fifo(per_prio->fifo_list[data_dir].next);
> }
>
> /*
> @@ -390,36 +320,8 @@ static struct request *
> deadline_next_request(struct deadline_data *dd, struct dd_per_prio *per_prio,
> enum dd_data_dir data_dir)
> {
> - struct request *rq;
> - unsigned long flags;
> -
> - rq = deadline_from_pos(per_prio, data_dir,
> - per_prio->latest_pos[data_dir]);
> - if (!rq)
> - return NULL;
> -
> - if (data_dir == DD_READ || !blk_queue_is_zoned(rq->q))
> - return rq;
> -
> - /*
> - * Look for a write request that can be dispatched, that is one with
> - * an unlocked target zone. For some HDDs, breaking a sequential
> - * write stream can lead to lower throughput, so make sure to preserve
> - * sequential write streams, even if that stream crosses into the next
> - * zones and these zones are unlocked.
> - */
> - spin_lock_irqsave(&dd->zone_lock, flags);
> - while (rq) {
> - if (blk_req_can_dispatch_to_zone(rq))
> - break;
> - if (blk_queue_nonrot(rq->q))
> - rq = deadline_latter_request(rq);
> - else
> - rq = deadline_skip_seq_writes(dd, rq);
> - }
> - spin_unlock_irqrestore(&dd->zone_lock, flags);
> -
> - return rq;
> + return deadline_from_pos(per_prio, data_dir,
> + per_prio->latest_pos[data_dir]);
> }
>
> /*
> @@ -525,10 +427,6 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd,
> rq = next_rq;
> }
>
> - /*
> - * For a zoned block device, if we only have writes queued and none of
> - * them can be dispatched, rq will be NULL.
> - */
> if (!rq)
> return NULL;
>
> @@ -549,10 +447,6 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd,
> prio = ioprio_class_to_prio[ioprio_class];
> dd->per_prio[prio].latest_pos[data_dir] = blk_rq_pos(rq);
> dd->per_prio[prio].stats.dispatched++;
> - /*
> - * If the request needs its target zone locked, do it.
> - */
> - blk_req_zone_write_lock(rq);
> rq->rq_flags |= RQF_STARTED;
> return rq;
> }
> @@ -736,7 +630,6 @@ static int dd_init_sched(struct request_queue *q, struct elevator_type *e)
> dd->fifo_batch = fifo_batch;
> dd->prio_aging_expire = prio_aging_expire;
> spin_lock_init(&dd->lock);
> - spin_lock_init(&dd->zone_lock);
>
> /* We dispatch from request queue wide instead of hw queue */
> blk_queue_flag_set(QUEUE_FLAG_SQ_SCHED, q);
> @@ -818,12 +711,6 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
>
> lockdep_assert_held(&dd->lock);
>
> - /*
> - * This may be a requeue of a write request that has locked its
> - * target zone. If it is the case, this releases the zone lock.
> - */
> - blk_req_zone_write_unlock(rq);
> -
> prio = ioprio_class_to_prio[ioprio_class];
> per_prio = &dd->per_prio[prio];
> if (!rq->elv.priv[0]) {
> @@ -855,18 +742,6 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
> */
> rq->fifo_time = jiffies + dd->fifo_expire[data_dir];
> insert_before = &per_prio->fifo_list[data_dir];
> -#ifdef CONFIG_BLK_DEV_ZONED
> - /*
> - * Insert zoned writes such that requests are sorted by
> - * position per zone.
> - */
> - if (blk_rq_is_seq_zoned_write(rq)) {
> - struct request *rq2 = deadline_latter_request(rq);
> -
> - if (rq2 && blk_rq_zone_no(rq2) == blk_rq_zone_no(rq))
> - insert_before = &rq2->queuelist;
> - }
> -#endif
> list_add_tail(&rq->queuelist, insert_before);
> }
> }
> @@ -901,33 +776,8 @@ static void dd_prepare_request(struct request *rq)
> rq->elv.priv[0] = NULL;
> }
>
> -static bool dd_has_write_work(struct blk_mq_hw_ctx *hctx)
> -{
> - struct deadline_data *dd = hctx->queue->elevator->elevator_data;
> - enum dd_prio p;
> -
> - for (p = 0; p <= DD_PRIO_MAX; p++)
> - if (!list_empty_careful(&dd->per_prio[p].fifo_list[DD_WRITE]))
> - return true;
> -
> - return false;
> -}
> -
> /*
> * Callback from inside blk_mq_free_request().
> - *
> - * For zoned block devices, write unlock the target zone of
> - * completed write requests. Do this while holding the zone lock
> - * spinlock so that the zone is never unlocked while deadline_fifo_request()
> - * or deadline_next_request() are executing. This function is called for
> - * all requests, whether or not these requests complete successfully.
> - *
> - * For a zoned block device, __dd_dispatch_request() may have stopped
> - * dispatching requests if all the queued requests are write requests directed
> - * at zones that are already locked due to on-going write requests. To ensure
> - * write request dispatch progress in this case, mark the queue as needing a
> - * restart to ensure that the queue is run again after completion of the
> - * request and zones being unlocked.
> */
> static void dd_finish_request(struct request *rq)
> {
> @@ -942,21 +792,8 @@ static void dd_finish_request(struct request *rq)
> * called dd_insert_requests(). Skip requests that bypassed I/O
> * scheduling. See also blk_mq_request_bypass_insert().
> */
> - if (!rq->elv.priv[0])
> - return;
> -
> - atomic_inc(&per_prio->stats.completed);
> -
> - if (blk_queue_is_zoned(q)) {
> - unsigned long flags;
> -
> - spin_lock_irqsave(&dd->zone_lock, flags);
> - blk_req_zone_write_unlock(rq);
> - spin_unlock_irqrestore(&dd->zone_lock, flags);
> -
> - if (dd_has_write_work(rq->mq_hctx))
> - blk_mq_sched_mark_restart_hctx(rq->mq_hctx);
> - }
> + if (rq->elv.priv[0])
> + atomic_inc(&per_prio->stats.completed);
> }
>
> static bool dd_has_work_for_prio(struct dd_per_prio *per_prio)
> @@ -1280,7 +1117,6 @@ static struct elevator_type mq_deadline = {
> .elevator_attrs = deadline_attrs,
> .elevator_name = "mq-deadline",
> .elevator_alias = "deadline",
> - .elevator_features = ELEVATOR_F_ZBD_SEQ_WRITE,
> .elevator_owner = THIS_MODULE,
> };
> MODULE_ALIAS("mq-deadline-iosched");
--
Damien Le Moal
Western Digital Research
From: Stefan Binding <sbinding(a)opensource.cirrus.com>
[ Upstream commit 826c0b1ed09e5335abcae07292440ce72346e578 ]
Laptops use 2 CS35L41 Amps with HDA, using External boost, with I2C
Signed-off-by: Stefan Binding <sbinding(a)opensource.cirrus.com>
Link: https://patch.msgid.link/20251205150614.49590-3-sbinding@opensource.cirrus.…
Signed-off-by: Takashi Iwai <tiwai(a)suse.de>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
LLM Generated explanations, may be completely bogus:
## Commit Analysis: ALSA: hda/realtek: Add support for ASUS UM3406GA
### 1. COMMIT MESSAGE ANALYSIS
The commit message is straightforward: it adds support for a specific
ASUS laptop model (UM3406GA) that uses 2 CS35L41 amplifiers connected
via I2C with external boost. The message describes the hardware
configuration, which is standard for such device ID additions.
No Fixes: or Cc: stable tags are present, but as noted, this is expected
for commits requiring manual review.
### 2. CODE CHANGE ANALYSIS
The entire change is a single line addition:
```c
SND_PCI_QUIRK(0x1043, 0x1584, "ASUS UM3406GA ",
ALC287_FIXUP_CS35L41_I2C_2),
```
This adds:
- Vendor ID: 0x1043 (ASUS)
- Device/Subsystem ID: 0x1584 (ASUS UM3406GA)
- Fixup: `ALC287_FIXUP_CS35L41_I2C_2` (an **existing** fixup already
used by many other ASUS models)
Looking at the surrounding code, multiple other ASUS laptops use the
same fixup:
- ASUS PM3406CKA (0x1454)
- ASUS G513PI/PU/PV (0x14e3)
- ASUS G733PY/PZ/PZV/PYV (0x1503)
- ASUS GV302XA/XJ/XQ/XU/XV/XI (0x1533)
- ASUS UM3402YAR (0x1683)
### 3. CLASSIFICATION
This is a **NEW DEVICE ID** addition - explicitly listed as an exception
that IS appropriate for stable backporting. The driver infrastructure
and fixup code already exist; this merely adds an ID to enable the
existing fix for new hardware.
### 4. SCOPE AND RISK ASSESSMENT
- **Lines changed**: 1 line
- **Files touched**: 1 file
- **Complexity**: Zero - table entry addition only
- **Risk**: Extremely low - this cannot affect any other hardware
- **No new code paths**: Uses pre-existing `ALC287_FIXUP_CS35L41_I2C_2`
fixup
### 5. USER IMPACT
- **Who is affected**: Owners of ASUS UM3406GA laptops
- **Problem without fix**: Audio (specifically the CS35L41 amplifiers)
won't function properly
- **Severity**: Non-working audio is a significant user-facing issue for
laptop users
### 6. STABILITY INDICATORS
- Signed-off by Takashi Iwai (ALSA maintainer at SUSE)
- Standard quirk addition pattern used extensively throughout this file
- Follows exact same format as dozens of other ASUS quirk entries
### 7. DEPENDENCY CHECK
- The fixup `ALC287_FIXUP_CS35L41_I2C_2` has been in the kernel for some
time, supporting multiple other ASUS models
- No other commits are required for this to work
- This should apply cleanly to stable trees that have the CS35L41
support infrastructure
### CONCLUSION
This commit is a textbook example of what should be backported to stable
trees:
1. **Falls under explicit exception**: Adding device IDs to existing
drivers is explicitly allowed
2. **Minimal change**: Single line, single table entry
3. **Zero regression risk**: Cannot affect any hardware except the
targeted laptop
4. **Uses existing infrastructure**: The fixup is already well-tested on
similar ASUS models
5. **Fixes real user problem**: Enables audio on a production laptop
6. **Obviously correct**: Identical pattern to surrounding entries
The risk-benefit analysis strongly favors backporting: virtually zero
risk with clear user benefit (working audio on a specific laptop model).
**YES**
sound/hda/codecs/realtek/alc269.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/sound/hda/codecs/realtek/alc269.c b/sound/hda/codecs/realtek/alc269.c
index b45fcc9a3785e..008bf9d5148e1 100644
--- a/sound/hda/codecs/realtek/alc269.c
+++ b/sound/hda/codecs/realtek/alc269.c
@@ -6752,6 +6752,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x1043, 0x1517, "Asus Zenbook UX31A", ALC269VB_FIXUP_ASUS_ZENBOOK_UX31A),
SND_PCI_QUIRK(0x1043, 0x1533, "ASUS GV302XA/XJ/XQ/XU/XV/XI", ALC287_FIXUP_CS35L41_I2C_2),
SND_PCI_QUIRK(0x1043, 0x1573, "ASUS GZ301VV/VQ/VU/VJ/VA/VC/VE/VVC/VQC/VUC/VJC/VEC/VCC", ALC285_FIXUP_ASUS_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1043, 0x1584, "ASUS UM3406GA ", ALC287_FIXUP_CS35L41_I2C_2),
SND_PCI_QUIRK(0x1043, 0x1652, "ASUS ROG Zephyrus Do 15 SE", ALC289_FIXUP_ASUS_ZEPHYRUS_DUAL_SPK),
SND_PCI_QUIRK(0x1043, 0x1662, "ASUS GV301QH", ALC294_FIXUP_ASUS_DUAL_SPK),
SND_PCI_QUIRK(0x1043, 0x1663, "ASUS GU603ZI/ZJ/ZQ/ZU/ZV", ALC285_FIXUP_ASUS_HEADSET_MIC),
--
2.51.0
From: ChenXiaoSong <chenxiaosong(a)kylinos.cn>
[ Upstream commit a1237c203f1757480dc2f3b930608ee00072d3cc ]
This was reported by the KUnit tests in the later patches.
See MS-ERREF 2.3.1 STATUS_NO_DATA_DETECTED. Keep it consistent with the
value in the documentation.
Signed-off-by: ChenXiaoSong <chenxiaosong(a)kylinos.cn>
Acked-by: Paulo Alcantara (Red Hat) <pc(a)manguebit.org>
Signed-off-by: Steve French <stfrench(a)microsoft.com>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
LLM Generated explanations, may be completely bogus:
## Analysis of Commit: smb/client: fix NT_STATUS_NO_DATA_DETECTED value
### 1. COMMIT MESSAGE ANALYSIS
- Subject clearly states "fix" indicating a bug fix
- References MS-ERREF 2.3.1 (Microsoft's official NTSTATUS
documentation) as the authoritative source
- Bug was discovered through KUnit tests
- Has Acked-by from Paulo Alcantara (Red Hat SMB maintainer) and
  Signed-off-by from Steve French (primary SMB/CIFS maintainer)
### 2. CODE CHANGE ANALYSIS
The change is a single-line fix in a header file:
```c
-#define NT_STATUS_NO_DATA_DETECTED 0x8000001c
+#define NT_STATUS_NO_DATA_DETECTED 0x80000022
```
**Critical bug identified**: The OLD value `0x8000001c` was
**duplicated** with another constant defined just a few lines above:
```c
#define NT_STATUS_MEDIA_CHANGED 0x8000001c
```
This is clearly a bug - two distinct Windows NT error status codes were
sharing the same numeric value. This would cause:
- Incorrect error code interpretation when servers return
STATUS_NO_DATA_DETECTED (0x80000022)
- Potential confusion between STATUS_MEDIA_CHANGED and
STATUS_NO_DATA_DETECTED
The new value `0x80000022` matches the official Microsoft MS-ERREF
specification for STATUS_NO_DATA_DETECTED.
### 3. CLASSIFICATION
- **Bug fix**: Yes - corrects a provably incorrect constant value
- **Feature addition**: No
- **New API**: No
- **Specification compliance fix**: Aligns with official Microsoft
documentation
### 4. SCOPE AND RISK ASSESSMENT
- **Size**: 1 line changed
- **Files**: 1 header file
- **Risk**: Extremely low - simply correcting a wrong numeric constant
- **Subsystem**: SMB client (commonly used for network file sharing)
This is about as low-risk as a fix can get - it's correcting a single
constant value to match official documentation. The previous value was
demonstrably wrong (duplicate of another constant).
### 5. USER IMPACT
SMB is widely used for file sharing across networks. Having correct
error status codes is important for proper error handling. While the
practical impact depends on how this constant is used, having correct
protocol constants is essential for interoperability.
### 6. STABILITY INDICATORS
- Acked by Red Hat's SMB maintainer
- Signed off by the primary CIFS/SMB maintainer (Steve French)
- KUnit tests caught this issue, indicating testing coverage
### 7. DEPENDENCY CHECK
- No dependencies on other commits
- Standalone fix to a header constant
- SMB client code exists in stable trees
### Summary
**Meets stable criteria:**
- ✅ Obviously correct (matches official MS-ERREF documentation)
- ✅ Fixes a real bug (duplicate constant value)
- ✅ Extremely small and contained (single line change)
- ✅ No new features or APIs
- ✅ Low risk (just correcting a constant value)
**Risk vs Benefit:**
- Risk: Minimal - changing a constant to its documented correct value
- Benefit: Correct SMB error handling, protocol compliance
This is a textbook example of a safe stable backport candidate: an
obviously wrong value is corrected to match official documentation, the
change is tiny, and there's no possibility of regression since the old
value was demonstrably incorrect (it was a duplicate of
NT_STATUS_MEDIA_CHANGED).
**YES**
fs/smb/client/nterr.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/smb/client/nterr.h b/fs/smb/client/nterr.h
index b3516c71cff77..09263c91d07a4 100644
--- a/fs/smb/client/nterr.h
+++ b/fs/smb/client/nterr.h
@@ -41,7 +41,7 @@ extern const struct nt_err_code_struct nt_errs[];
#define NT_STATUS_MEDIA_CHANGED 0x8000001c
#define NT_STATUS_END_OF_MEDIA 0x8000001e
#define NT_STATUS_MEDIA_CHECK 0x80000020
-#define NT_STATUS_NO_DATA_DETECTED 0x8000001c
+#define NT_STATUS_NO_DATA_DETECTED 0x80000022
#define NT_STATUS_STOPPED_ON_SYMLINK 0x8000002d
#define NT_STATUS_DEVICE_REQUIRES_CLEANING 0x80000288
#define NT_STATUS_DEVICE_DOOR_OPEN 0x80000289
--
2.51.0
Hi,
After a stable kernel update, the hwclock command no longer seems to work
on my SPARC system with an ST M48T59Y-70PC1 RTC:
# hwclock
[...long delay...]
hwclock: select() to /dev/rtc0 to wait for clock tick timed out
On prior kernels, there is no problem:
# hwclock
2025-10-22 22:21:04.806992-04:00
I reproduced the same failure on 6.18-rc2 and bisected to this commit:
commit 795cda8338eab036013314dbc0b04aae728880ab
Author: Esben Haabendal <esben(a)geanix.com>
Date: Fri May 16 09:23:35 2025 +0200
rtc: interface: Fix long-standing race when setting alarm
This commit was backported to all current 6.x stable branches,
as well as 5.15.x, so they all have the same regression.
Reverting this commit on top of 6.18-rc2 corrects the problem.
Let me know if you need any more info!
Thanks,
Nick
This reverts commit b3b274bc9d3d7307308aeaf75f70731765ac999a.
On the DragonBoard 820c (which uses APQ8096/MSM8996) this change causes
the CPUs to downclock to roughly half speed under sustained load. The
regression is visible both during boot and when running CPU stress
workloads such as stress-ng: the CPUs initially ramp up to the expected
frequency, then drop to a lower OPP even though the system is clearly
CPU-bound.
Bisecting points to this commit and reverting it restores the expected
behaviour on the DragonBoard 820c - the CPUs track the cpufreq policy
and run at full performance under load.
The exact interaction with the ACD is not yet fully understood and we
would like to keep ACD in use to avoid possible SoC reliability issues.
Until we have a better fix that preserves ACD while avoiding this
performance regression, revert the bisected patch to restore the
previous behaviour.
Fixes: b3b274bc9d3d ("clk: qcom: cpu-8996: simplify the cpu_clk_notifier_cb")
Cc: stable(a)vger.kernel.org # v6.3+
Link: https://lore.kernel.org/linux-arm-msm/20230113120544.59320-8-dmitry.baryshk…
Cc: Dmitry Baryshkov <dmitry.baryshkov(a)oss.qualcomm.com>
Signed-off-by: Christopher Obbard <christopher.obbard(a)linaro.org>
---
Hi all,
This series contains a single revert for a regression affecting the
APQ8096/MSM8996 (DragonBoard 820c).
The commit being reverted, b3b274bc9d3d ("clk: qcom: cpu-8996: simplify the cpu_clk_notifier_cb"),
introduces a significant performance issue where the CPUs downclock to
~50% of their expected frequency under sustained load. The problem is
reproducible both at boot and when running CPU-bound workloads such as
stress-ng.
Bisecting the issue pointed directly to this commit and reverting it
restores correct cpufreq behaviour.
The root cause appears to be related to the interaction between the
simplified notifier callback and ACD (Adaptive Clock Distribution).
Since we would prefer to keep ACD enabled for SoC reliability reasons,
a revert is the safest option until a proper fix is identified.
Full details are included in the commit message.
Feedback & suggestions welcome.
Cheers!
Christopher Obbard
---
drivers/clk/qcom/clk-cpu-8996.c | 30 +++++++++++-------------------
1 file changed, 11 insertions(+), 19 deletions(-)
diff --git a/drivers/clk/qcom/clk-cpu-8996.c b/drivers/clk/qcom/clk-cpu-8996.c
index 21d13c0841ed..028476931747 100644
--- a/drivers/clk/qcom/clk-cpu-8996.c
+++ b/drivers/clk/qcom/clk-cpu-8996.c
@@ -547,35 +547,27 @@ static int cpu_clk_notifier_cb(struct notifier_block *nb, unsigned long event,
{
struct clk_cpu_8996_pmux *cpuclk = to_clk_cpu_8996_pmux_nb(nb);
struct clk_notifier_data *cnd = data;
+ int ret;
switch (event) {
case PRE_RATE_CHANGE:
+ ret = clk_cpu_8996_pmux_set_parent(&cpuclk->clkr.hw, ALT_INDEX);
qcom_cpu_clk_msm8996_acd_init(cpuclk->clkr.regmap);
-
- /*
- * Avoid overvolting. clk_core_set_rate_nolock() walks from top
- * to bottom, so it will change the rate of the PLL before
- * chaging the parent of PMUX. This can result in pmux getting
- * clocked twice the expected rate.
- *
- * Manually switch to PLL/2 here.
- */
- if (cnd->new_rate < DIV_2_THRESHOLD &&
- cnd->old_rate > DIV_2_THRESHOLD)
- clk_cpu_8996_pmux_set_parent(&cpuclk->clkr.hw, SMUX_INDEX);
-
break;
- case ABORT_RATE_CHANGE:
- /* Revert manual change */
- if (cnd->new_rate < DIV_2_THRESHOLD &&
- cnd->old_rate > DIV_2_THRESHOLD)
- clk_cpu_8996_pmux_set_parent(&cpuclk->clkr.hw, ACD_INDEX);
+ case POST_RATE_CHANGE:
+ if (cnd->new_rate < DIV_2_THRESHOLD)
+ ret = clk_cpu_8996_pmux_set_parent(&cpuclk->clkr.hw,
+ SMUX_INDEX);
+ else
+ ret = clk_cpu_8996_pmux_set_parent(&cpuclk->clkr.hw,
+ ACD_INDEX);
break;
default:
+ ret = 0;
break;
}
- return NOTIFY_OK;
+ return notifier_from_errno(ret);
};
static int qcom_cpu_clk_msm8996_driver_probe(struct platform_device *pdev)
---
base-commit: c17e270dfb342a782d69c4a7c4c32980455afd9c
change-id: 20251202-wip-obbardc-qcom-msm8096-clk-cpu-fix-downclock-b7561da4cb95
Best regards,
--
Christopher Obbard <christopher.obbard(a)linaro.org>
When of_find_net_device_by_node() successfully acquires a reference to
a network device but the subsequent call to dsa_port_parse_cpu()
fails, dsa_port_parse_of() returns without releasing the reference
count on the network device.
of_find_net_device_by_node() increments the reference count of the
returned structure, which should be balanced with a corresponding
put_device() when the reference is no longer needed.
Found by code review.
Cc: stable(a)vger.kernel.org
Fixes: deff710703d8 ("net: dsa: Allow default tag protocol to be overridden from DT")
Signed-off-by: Ma Ke <make24(a)iscas.ac.cn>
---
Changes in v2:
- simplified the patch as suggestions;
- modified the Fixes tag as suggestions.
---
net/dsa/dsa.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c
index a20efabe778f..31b409a47491 100644
--- a/net/dsa/dsa.c
+++ b/net/dsa/dsa.c
@@ -1247,6 +1247,7 @@ static int dsa_port_parse_of(struct dsa_port *dp, struct device_node *dn)
struct device_node *ethernet = of_parse_phandle(dn, "ethernet", 0);
const char *name = of_get_property(dn, "label", NULL);
bool link = of_property_read_bool(dn, "link");
+ int err = 0;
dp->dn = dn;
@@ -1260,7 +1261,11 @@ static int dsa_port_parse_of(struct dsa_port *dp, struct device_node *dn)
return -EPROBE_DEFER;
user_protocol = of_get_property(dn, "dsa-tag-protocol", NULL);
- return dsa_port_parse_cpu(dp, conduit, user_protocol);
+ err = dsa_port_parse_cpu(dp, conduit, user_protocol);
+ if (err)
+ put_device(conduit);
+
+ return err;
}
if (link)
--
2.17.1
When the filesystem is being mounted, the kernel panics while the data
regarding the slot map allocation for the local node is being written to
disk. This occurs because the slot map buffer head block number, which
should be greater than or equal to `OCFS2_SUPER_BLOCK_BLKNO` (which
evaluates to 2), is less than it, indicating disk metadata corruption.
This triggers
BUG_ON(bh->b_blocknr < OCFS2_SUPER_BLOCK_BLKNO) in ocfs2_write_block(),
causing the kernel to panic.
Fix this by adding a check in ocfs2_update_disk_slot(), right before the
call to ocfs2_write_block(), for `bh->b_blocknr` being less than
`OCFS2_SUPER_BLOCK_BLKNO`. If it is, ocfs2_error() is called, which prints
an error log for debugging purposes, and its return value is passed back
to the caller of ocfs2_update_disk_slot(), i.e. ocfs2_find_slot(). If that
return value is zero, the error code -EIO is returned instead.
Reported-by: syzbot+c818e5c4559444f88aa0(a)syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=c818e5c4559444f88aa0
Tested-by: syzbot+c818e5c4559444f88aa0(a)syzkaller.appspotmail.com
Cc: stable(a)vger.kernel.org
Signed-off-by: Prithvi Tambewagh <activprithvi(a)gmail.com>
---
v1->v2:
- Remove usage of le16_to_cpu() from ocfs2_error()
- Cast bh->b_blocknr to unsigned long long
- Remove type casting for OCFS2_SUPER_BLOCK_BLKNO
- Fix Sparse warnings reported in v1 by kernel test robot
- Update title from 'ocfs2: Fix kernel BUG in ocfs2_write_block' to
'ocfs2: fix kernel BUG in ocfs2_write_block'
v1 link: https://lore.kernel.org/all/20251206154819.175479-1-activprithvi@gmail.com/…
fs/ocfs2/slot_map.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/fs/ocfs2/slot_map.c b/fs/ocfs2/slot_map.c
index e544c704b583..e916a2e8f92d 100644
--- a/fs/ocfs2/slot_map.c
+++ b/fs/ocfs2/slot_map.c
@@ -193,6 +193,16 @@ static int ocfs2_update_disk_slot(struct ocfs2_super *osb,
else
ocfs2_update_disk_slot_old(si, slot_num, &bh);
spin_unlock(&osb->osb_lock);
+ if (bh->b_blocknr < OCFS2_SUPER_BLOCK_BLKNO) {
+ status = ocfs2_error(osb->sb,
+ "Invalid Slot Map Buffer Head "
+ "Block Number : %llu, Should be >= %d",
+ (unsigned long long)bh->b_blocknr,
+ OCFS2_SUPER_BLOCK_BLKNO);
+ if (!status)
+ return -EIO;
+ return status;
+ }
status = ocfs2_write_block(osb, bh, INODE_CACHE(si->si_inode));
if (status < 0)
base-commit: 24172e0d79900908cf5ebf366600616d29c9b417
--
2.43.0