This is an automatically generated email to let you know that the following patch was queued:
Subject: media: vsp1: Fix BRx conditional path in WPF
Author: Kieran Bingham <kieran.bingham(a)ideasonboard.com>
Date: Mon Mar 26 09:29:17 2018 -0400
When a BRx is provided by a pipeline, the WPF must determine the master
layer. Currently the condition checks for pipe->bru ||
pipe->num_inputs > 1.
The code then moves on to dereference pipe->bru, so static analysers
flag the possibility that pipe->num_inputs could be greater than 1
without pipe->bru being set.
The reality is that the pipeline must have a BRx to support more than
one input, thus this could never cause a fault - however it also
identifies that the num_inputs > 1 check is redundant.
Remove the redundant check - and always configure the master layer
appropriately when we have a BRx configured in our pipeline.
Fixes: 6134148f6098 ("v4l: vsp1: Add support for the BRS entity")
Cc: stable(a)vger.kernel.org
Suggested-by: Mauro Carvalho Chehab <mchehab(a)s-opensource.com>
Signed-off-by: Kieran Bingham <kieran.bingham+renesas(a)ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart(a)ideasonboard.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab(a)s-opensource.com>
drivers/media/platform/vsp1/vsp1_wpf.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
---
diff --git a/drivers/media/platform/vsp1/vsp1_wpf.c b/drivers/media/platform/vsp1/vsp1_wpf.c
index f7f3b4b2c2de..8bd6b2f1af15 100644
--- a/drivers/media/platform/vsp1/vsp1_wpf.c
+++ b/drivers/media/platform/vsp1/vsp1_wpf.c
@@ -452,7 +452,7 @@ static void wpf_configure(struct vsp1_entity *entity,
: VI6_WPF_SRCRPF_RPF_ACT_SUB(input->entity.index);
}
- if (pipe->bru || pipe->num_inputs > 1)
+ if (pipe->bru)
srcrpf |= pipe->bru->type == VSP1_ENTITY_BRU
? VI6_WPF_SRCRPF_VIRACT_MST
: VI6_WPF_SRCRPF_VIRACT2_MST;
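For readers following the static-analysis angle, here is a minimal standalone
sketch (plain C, not vsp1 driver code; the struct and function names are made
up) of the flagged pattern: a condition of the form "a || b" followed by an
unconditional dereference of "a" admits a NULL dereference when only "b"
holds, which is exactly what narrowing the condition to "a" resolves.

/* Illustrative only -- not vsp1 driver code. */
#include <stdio.h>

struct bru { int type; };

static int pick_master(const struct bru *bru, unsigned int num_inputs)
{
	(void)num_inputs;	/* kept only to mirror the original signature */

	/*
	 * The old condition, "bru || num_inputs > 1", still allows
	 * bru == NULL at a dereference of bru->type, which is what the
	 * analyser flags. Checking bru alone removes that path.
	 */
	if (bru)
		return bru->type;
	return -1;
}

int main(void)
{
	struct bru b = { .type = 1 };

	printf("%d %d\n", pick_master(&b, 2), pick_master(NULL, 2));
	return 0;
}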
From: Xidong Wang <wangxidong_97(a)163.com>
Along the eb_lookup_vmas() error path, the return value from
kmem_cache_alloc() was freed using kfree(). Fix it to use the proper
kmem_cache_free() instead.
Fixes: d1b48c1e7184 ("drm/i915: Replace execbuf vma ht with an idr")
Signed-off-by: Xidong Wang <wangxidong_97(a)163.com>
Cc: Chris Wilson <chris(a)chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin(a)intel.com>
Cc: <stable(a)vger.kernel.org> # v4.14+
Reviewed-by: Chris Wilson <chris(a)chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris(a)chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_gem_execbuffer.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 8c170db8495d..0414228cd2b5 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -728,7 +728,7 @@ static int eb_lookup_vmas(struct i915_execbuffer *eb)
err = radix_tree_insert(handles_vma, handle, vma);
if (unlikely(err)) {
- kfree(lut);
+ kmem_cache_free(eb->i915->luts, lut);
goto err_obj;
}
--
2.16.3
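As a side note on the allocator pairing this patch restores, below is a hedged
kernel-context sketch (not the actual i915 code; lut_entry, lut_cache and the
err parameter are made up for illustration): an object carved out of a
dedicated slab cache must be returned with kmem_cache_free() on every exit
path, while kfree() only pairs with kmalloc()-family allocations.

/* Kernel-context sketch, illustrative only -- not i915_gem_execbuffer.c. */
#include <linux/slab.h>
#include <linux/types.h>
#include <linux/errno.h>

struct lut_entry { u32 handle; };	/* stand-in for the driver's LUT object */

static int lut_insert_example(struct kmem_cache *lut_cache, int err)
{
	struct lut_entry *lut = kmem_cache_alloc(lut_cache, GFP_KERNEL);

	if (!lut)
		return -ENOMEM;

	/* "err" stands in for a failing radix_tree_insert()-style call. */
	if (err) {
		/* Wrong: kfree(lut); -- the object came from lut_cache. */
		kmem_cache_free(lut_cache, lut);
		return err;
	}
	return 0;
}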
This is a note to let you know that I've just added the patch titled
bitmap: fix memset optimization on big-endian systems
to the 4.16-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
bitmap-fix-memset-optimization-on-big-endian-systems.patch
and it can be found in the queue-4.16 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 21035965f60b0502fc6537b232839389bb4ce664 Mon Sep 17 00:00:00 2001
From: Omar Sandoval <osandov(a)fb.com>
Date: Mon, 2 Apr 2018 15:58:31 -0700
Subject: bitmap: fix memset optimization on big-endian systems
From: Omar Sandoval <osandov(a)fb.com>
commit 21035965f60b0502fc6537b232839389bb4ce664 upstream.
Commit 2a98dc028f91 ("include/linux/bitmap.h: turn bitmap_set and
bitmap_clear into memset when possible") introduced an optimization to
bitmap_{set,clear}() which uses memset() when the start and length are
constants aligned to a byte.
This is wrong on big-endian systems; our bitmaps are arrays of unsigned
long, so bit n is not at byte n / 8 in memory. This was caught by the
Btrfs selftests, but the bitmap selftests also fail when run on a
big-endian machine.
We can still use memset if the start and length are aligned to an
unsigned long, so do that on big-endian. The same problem applies to
the memcmp in bitmap_equal(), so fix it there, too.
Fixes: 2a98dc028f91 ("include/linux/bitmap.h: turn bitmap_set and bitmap_clear into memset when possible")
Fixes: 2c6deb01525a ("bitmap: use memcmp optimisation in more situations")
Cc: stable(a)kernel.org
Reported-by: "Erhard F." <erhard_f(a)mailbox.org>
Cc: Matthew Wilcox <mawilcox(a)microsoft.com>
Cc: Rasmus Villemoes <linux(a)rasmusvillemoes.dk>
Cc: Andrew Morton <akpm(a)linux-foundation.org>
Cc: Arnd Bergmann <arnd(a)arndb.de>
Signed-off-by: Omar Sandoval <osandov(a)fb.com>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
include/linux/bitmap.h | 22 +++++++++++++++++-----
1 file changed, 17 insertions(+), 5 deletions(-)
--- a/include/linux/bitmap.h
+++ b/include/linux/bitmap.h
@@ -302,12 +302,20 @@ static inline void bitmap_complement(uns
__bitmap_complement(dst, src, nbits);
}
+#ifdef __LITTLE_ENDIAN
+#define BITMAP_MEM_ALIGNMENT 8
+#else
+#define BITMAP_MEM_ALIGNMENT (8 * sizeof(unsigned long))
+#endif
+#define BITMAP_MEM_MASK (BITMAP_MEM_ALIGNMENT - 1)
+
static inline int bitmap_equal(const unsigned long *src1,
const unsigned long *src2, unsigned int nbits)
{
if (small_const_nbits(nbits))
return !((*src1 ^ *src2) & BITMAP_LAST_WORD_MASK(nbits));
- if (__builtin_constant_p(nbits & 7) && IS_ALIGNED(nbits, 8))
+ if (__builtin_constant_p(nbits & BITMAP_MEM_MASK) &&
+ IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT))
return !memcmp(src1, src2, nbits / 8);
return __bitmap_equal(src1, src2, nbits);
}
@@ -358,8 +366,10 @@ static __always_inline void bitmap_set(u
{
if (__builtin_constant_p(nbits) && nbits == 1)
__set_bit(start, map);
- else if (__builtin_constant_p(start & 7) && IS_ALIGNED(start, 8) &&
- __builtin_constant_p(nbits & 7) && IS_ALIGNED(nbits, 8))
+ else if (__builtin_constant_p(start & BITMAP_MEM_MASK) &&
+ IS_ALIGNED(start, BITMAP_MEM_ALIGNMENT) &&
+ __builtin_constant_p(nbits & BITMAP_MEM_MASK) &&
+ IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT))
memset((char *)map + start / 8, 0xff, nbits / 8);
else
__bitmap_set(map, start, nbits);
@@ -370,8 +380,10 @@ static __always_inline void bitmap_clear
{
if (__builtin_constant_p(nbits) && nbits == 1)
__clear_bit(start, map);
- else if (__builtin_constant_p(start & 7) && IS_ALIGNED(start, 8) &&
- __builtin_constant_p(nbits & 7) && IS_ALIGNED(nbits, 8))
+ else if (__builtin_constant_p(start & BITMAP_MEM_MASK) &&
+ IS_ALIGNED(start, BITMAP_MEM_ALIGNMENT) &&
+ __builtin_constant_p(nbits & BITMAP_MEM_MASK) &&
+ IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT))
memset((char *)map + start / 8, 0, nbits / 8);
else
__bitmap_clear(map, start, nbits);
Patches currently in stable-queue which might be from osandov(a)fb.com are
queue-4.16/bitmap-fix-memset-optimization-on-big-endian-systems.patch
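The endianness issue is easy to reproduce outside the kernel. A standalone
sketch in plain userspace C (not kernel code) follows: setting bits 0..7 of an
unsigned long bitmap only coincides with a memset() of the first byte on
little-endian machines, which is why the byte-granular shortcut has to be
restricted to unsigned-long-aligned start/length on big-endian.

/* Illustrative only -- why byte-granular memset() breaks on big-endian. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	unsigned long bitmap = 0;
	unsigned char bytes[sizeof(bitmap)];
	int i;

	for (i = 0; i < 8; i++)		/* bitmap_set(&bitmap, 0, 8) semantics */
		bitmap |= 1UL << i;

	memcpy(bytes, &bitmap, sizeof(bytes));
	/*
	 * Little-endian: bytes[0] is 0xff, so memset(map, 0xff, 1) matches.
	 * Big-endian: bytes[0] is 0x00; the set bits sit in the last byte,
	 * so only an unsigned-long-aligned region is safe to memset().
	 */
	printf("first byte in memory: 0x%02x\n", bytes[0]);
	return 0;
}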
This is a note to let you know that I've just added the patch titled
bitmap: fix memset optimization on big-endian systems
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
bitmap-fix-memset-optimization-on-big-endian-systems.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 21035965f60b0502fc6537b232839389bb4ce664 Mon Sep 17 00:00:00 2001
From: Omar Sandoval <osandov(a)fb.com>
Date: Mon, 2 Apr 2018 15:58:31 -0700
Subject: bitmap: fix memset optimization on big-endian systems
From: Omar Sandoval <osandov(a)fb.com>
commit 21035965f60b0502fc6537b232839389bb4ce664 upstream.
Commit 2a98dc028f91 ("include/linux/bitmap.h: turn bitmap_set and
bitmap_clear into memset when possible") introduced an optimization to
bitmap_{set,clear}() which uses memset() when the start and length are
constants aligned to a byte.
This is wrong on big-endian systems; our bitmaps are arrays of unsigned
long, so bit n is not at byte n / 8 in memory. This was caught by the
Btrfs selftests, but the bitmap selftests also fail when run on a
big-endian machine.
We can still use memset if the start and length are aligned to an
unsigned long, so do that on big-endian. The same problem applies to
the memcmp in bitmap_equal(), so fix it there, too.
Fixes: 2a98dc028f91 ("include/linux/bitmap.h: turn bitmap_set and bitmap_clear into memset when possible")
Fixes: 2c6deb01525a ("bitmap: use memcmp optimisation in more situations")
Cc: stable(a)kernel.org
Reported-by: "Erhard F." <erhard_f(a)mailbox.org>
Cc: Matthew Wilcox <mawilcox(a)microsoft.com>
Cc: Rasmus Villemoes <linux(a)rasmusvillemoes.dk>
Cc: Andrew Morton <akpm(a)linux-foundation.org>
Cc: Arnd Bergmann <arnd(a)arndb.de>
Signed-off-by: Omar Sandoval <osandov(a)fb.com>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
include/linux/bitmap.h | 22 +++++++++++++++++-----
1 file changed, 17 insertions(+), 5 deletions(-)
--- a/include/linux/bitmap.h
+++ b/include/linux/bitmap.h
@@ -271,12 +271,20 @@ static inline void bitmap_complement(uns
__bitmap_complement(dst, src, nbits);
}
+#ifdef __LITTLE_ENDIAN
+#define BITMAP_MEM_ALIGNMENT 8
+#else
+#define BITMAP_MEM_ALIGNMENT (8 * sizeof(unsigned long))
+#endif
+#define BITMAP_MEM_MASK (BITMAP_MEM_ALIGNMENT - 1)
+
static inline int bitmap_equal(const unsigned long *src1,
const unsigned long *src2, unsigned int nbits)
{
if (small_const_nbits(nbits))
return !((*src1 ^ *src2) & BITMAP_LAST_WORD_MASK(nbits));
- if (__builtin_constant_p(nbits & 7) && IS_ALIGNED(nbits, 8))
+ if (__builtin_constant_p(nbits & BITMAP_MEM_MASK) &&
+ IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT))
return !memcmp(src1, src2, nbits / 8);
return __bitmap_equal(src1, src2, nbits);
}
@@ -327,8 +335,10 @@ static __always_inline void bitmap_set(u
{
if (__builtin_constant_p(nbits) && nbits == 1)
__set_bit(start, map);
- else if (__builtin_constant_p(start & 7) && IS_ALIGNED(start, 8) &&
- __builtin_constant_p(nbits & 7) && IS_ALIGNED(nbits, 8))
+ else if (__builtin_constant_p(start & BITMAP_MEM_MASK) &&
+ IS_ALIGNED(start, BITMAP_MEM_ALIGNMENT) &&
+ __builtin_constant_p(nbits & BITMAP_MEM_MASK) &&
+ IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT))
memset((char *)map + start / 8, 0xff, nbits / 8);
else
__bitmap_set(map, start, nbits);
@@ -339,8 +349,10 @@ static __always_inline void bitmap_clear
{
if (__builtin_constant_p(nbits) && nbits == 1)
__clear_bit(start, map);
- else if (__builtin_constant_p(start & 7) && IS_ALIGNED(start, 8) &&
- __builtin_constant_p(nbits & 7) && IS_ALIGNED(nbits, 8))
+ else if (__builtin_constant_p(start & BITMAP_MEM_MASK) &&
+ IS_ALIGNED(start, BITMAP_MEM_ALIGNMENT) &&
+ __builtin_constant_p(nbits & BITMAP_MEM_MASK) &&
+ IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT))
memset((char *)map + start / 8, 0, nbits / 8);
else
__bitmap_clear(map, start, nbits);
Patches currently in stable-queue which might be from osandov(a)fb.com are
queue-4.15/bitmap-fix-memset-optimization-on-big-endian-systems.patch
This is a note to let you know that I've just added the patch titled
bitmap: fix memset optimization on big-endian systems
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
bitmap-fix-memset-optimization-on-big-endian-systems.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 21035965f60b0502fc6537b232839389bb4ce664 Mon Sep 17 00:00:00 2001
From: Omar Sandoval <osandov(a)fb.com>
Date: Mon, 2 Apr 2018 15:58:31 -0700
Subject: bitmap: fix memset optimization on big-endian systems
From: Omar Sandoval <osandov(a)fb.com>
commit 21035965f60b0502fc6537b232839389bb4ce664 upstream.
Commit 2a98dc028f91 ("include/linux/bitmap.h: turn bitmap_set and
bitmap_clear into memset when possible") introduced an optimization to
bitmap_{set,clear}() which uses memset() when the start and length are
constants aligned to a byte.
This is wrong on big-endian systems; our bitmaps are arrays of unsigned
long, so bit n is not at byte n / 8 in memory. This was caught by the
Btrfs selftests, but the bitmap selftests also fail when run on a
big-endian machine.
We can still use memset if the start and length are aligned to an
unsigned long, so do that on big-endian. The same problem applies to
the memcmp in bitmap_equal(), so fix it there, too.
Fixes: 2a98dc028f91 ("include/linux/bitmap.h: turn bitmap_set and bitmap_clear into memset when possible")
Fixes: 2c6deb01525a ("bitmap: use memcmp optimisation in more situations")
Cc: stable(a)kernel.org
Reported-by: "Erhard F." <erhard_f(a)mailbox.org>
Cc: Matthew Wilcox <mawilcox(a)microsoft.com>
Cc: Rasmus Villemoes <linux(a)rasmusvillemoes.dk>
Cc: Andrew Morton <akpm(a)linux-foundation.org>
Cc: Arnd Bergmann <arnd(a)arndb.de>
Signed-off-by: Omar Sandoval <osandov(a)fb.com>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
include/linux/bitmap.h | 22 +++++++++++++++++-----
1 file changed, 17 insertions(+), 5 deletions(-)
--- a/include/linux/bitmap.h
+++ b/include/linux/bitmap.h
@@ -262,12 +262,20 @@ static inline void bitmap_complement(uns
__bitmap_complement(dst, src, nbits);
}
+#ifdef __LITTLE_ENDIAN
+#define BITMAP_MEM_ALIGNMENT 8
+#else
+#define BITMAP_MEM_ALIGNMENT (8 * sizeof(unsigned long))
+#endif
+#define BITMAP_MEM_MASK (BITMAP_MEM_ALIGNMENT - 1)
+
static inline int bitmap_equal(const unsigned long *src1,
const unsigned long *src2, unsigned int nbits)
{
if (small_const_nbits(nbits))
return !((*src1 ^ *src2) & BITMAP_LAST_WORD_MASK(nbits));
- if (__builtin_constant_p(nbits & 7) && IS_ALIGNED(nbits, 8))
+ if (__builtin_constant_p(nbits & BITMAP_MEM_MASK) &&
+ IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT))
return !memcmp(src1, src2, nbits / 8);
return __bitmap_equal(src1, src2, nbits);
}
@@ -318,8 +326,10 @@ static __always_inline void bitmap_set(u
{
if (__builtin_constant_p(nbits) && nbits == 1)
__set_bit(start, map);
- else if (__builtin_constant_p(start & 7) && IS_ALIGNED(start, 8) &&
- __builtin_constant_p(nbits & 7) && IS_ALIGNED(nbits, 8))
+ else if (__builtin_constant_p(start & BITMAP_MEM_MASK) &&
+ IS_ALIGNED(start, BITMAP_MEM_ALIGNMENT) &&
+ __builtin_constant_p(nbits & BITMAP_MEM_MASK) &&
+ IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT))
memset((char *)map + start / 8, 0xff, nbits / 8);
else
__bitmap_set(map, start, nbits);
@@ -330,8 +340,10 @@ static __always_inline void bitmap_clear
{
if (__builtin_constant_p(nbits) && nbits == 1)
__clear_bit(start, map);
- else if (__builtin_constant_p(start & 7) && IS_ALIGNED(start, 8) &&
- __builtin_constant_p(nbits & 7) && IS_ALIGNED(nbits, 8))
+ else if (__builtin_constant_p(start & BITMAP_MEM_MASK) &&
+ IS_ALIGNED(start, BITMAP_MEM_ALIGNMENT) &&
+ __builtin_constant_p(nbits & BITMAP_MEM_MASK) &&
+ IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT))
memset((char *)map + start / 8, 0, nbits / 8);
else
__bitmap_clear(map, start, nbits);
Patches currently in stable-queue which might be from osandov(a)fb.com are
queue-4.14/bitmap-fix-memset-optimization-on-big-endian-systems.patch
With the introduction of commit e39a97353e53 ("scsi: core: return
BLK_STS_OK for DID_OK in __scsi_error_from_host_byte()"), a command that
failed with hostbyte=DID_OK and driverbyte=DRIVER_SENSE but lacking
additional sense information will have a return code set to BLK_STS_OK.
This results in the request issuer seeing successful request execution
despite the failure. An example of such a case is an unaligned write on a
host managed ZAC disk connected to a SAS HBA with a malfunctioning SAT.
The unaligned write command gets aborted but has no additional sense
information.
sd 10:0:0:0: [sde] tag#3905 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
sd 10:0:0:0: [sde] tag#3905 Sense Key : Aborted Command [current]
sd 10:0:0:0: [sde] tag#3905 Add. Sense: No additional sense information
sd 10:0:0:0: [sde] tag#3905 CDB: Write(16) 8a 00 00 00 00 00 02 0c 00 01 00 00 00 01 00 00
print_req_error: I/O error, dev sde, sector 274726920
In scsi_io_completion(), the sense key handling does not change the request
error code, so success ends up being reported to the issuer.
Fix this by making sure that the error code always indicates an error
if scsi_io_completion() decides that the action to be taken for a failed
command is to not retry it but to terminate it immediately (ACTION_FAIL).
Signed-off-by: Damien Le Moal <damien.lemoal(a)wdc.com>
Fixes: e39a97353e53 ("scsi: core: return BLK_STS_OK for DID_OK in __scsi_error_from_host_byte()")
Cc: Hannes Reinecke <hare(a)suse.com>
Cc: <stable(a)vger.kernel.org>
---
drivers/scsi/scsi_lib.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index c84f931388f2..87579bfcc186 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1002,6 +1002,15 @@ void scsi_io_completion(struct scsi_cmnd *cmd, unsigned int good_bytes)
scsi_print_command(cmd);
}
}
+ /*
+ * The command failed and should not be retried. If the host
+ * byte is DID_OK, then __scsi_error_from_host_byte() returned
+ * BLK_STS_OK and error indicates a success. Make sure to not
+ * use that as the completion code and always return an
+ * I/O error.
+ */
+ if (error == BLK_STS_OK)
+ error = BLK_STS_IOERR;
if (!scsi_end_request(req, error, blk_rq_err_bytes(req), 0))
return;
/*FALLTHRU*/
--
2.14.3
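The shape of the guard, as a standalone sketch (plain C, not scsi_lib.c; the
enum and its values are illustrative): once the completion path has decided
not to retry a failed command, a translated status that still reads "OK" must
be overridden before completing the request.

/* Illustrative only -- not scsi_lib.c; status values are made up. */
#include <stdio.h>

enum toy_blk_status { TOY_STS_OK = 0, TOY_STS_IOERR = 1 };

static enum toy_blk_status complete_failed_cmd(enum toy_blk_status translated)
{
	/*
	 * ACTION_FAIL path: never hand a success status back for a
	 * command that is being terminated without retry.
	 */
	if (translated == TOY_STS_OK)
		translated = TOY_STS_IOERR;
	return translated;
}

int main(void)
{
	printf("%d\n", complete_failed_cmd(TOY_STS_OK));	/* prints 1, not 0 */
	return 0;
}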
From: "Steven Rostedt (VMware)" <rostedt(a)goodmis.org>
The commit "memcontrol: Prevent scheduling while atomic in cgroup code"
fixed this issue:
  refill_stock()
    get_cpu_var()
    drain_stock()
      res_counter_uncharge()
        res_counter_uncharge_until()
          spin_lock() <== boom
But commit 3e32cb2e0a12b ("mm: memcontrol: lockless page counters") replaced
the calls to res_counter_uncharge() in drain_stock() with the lockless
function page_counter_uncharge(). There is no more spinlock there and no
more reason to have that local lock.
Cc: <stable(a)vger.kernel.org>
Reported-by: Haiyang HY1 Tan <tanhy1(a)lenovo.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt(a)goodmis.org>
[bigeasy: That upstream commit appeared in v3.19 and the patch in
question in v3.18.7-rt2 and v3.18 seems still to be maintained. So I
guess that v3.18 would need the locallocks that we are about to remove
here. I am not sure if any earlier versions have the patch
backported.
The stable tag here is because Haiyang reported (and debugged) a crash
in 4.4-RT with this patch applied (which has get_cpu_light() instead of
the locallocks it gained in v4.9-RT).
https://lkml.kernel.org/r/05AA4EC5C6EC1D48BE2CDCFF3AE0B8A637F78A15@CNMAILEX…
]
Signed-off-by: Sebastian Andrzej Siewior <bigeasy(a)linutronix.de>
---
mm/memcontrol.c | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 493b4986d5dc..56f67a15937b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1925,17 +1925,14 @@ static void drain_local_stock(struct work_struct *dummy)
*/
static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
{
- struct memcg_stock_pcp *stock;
- int cpu = get_cpu_light();
-
- stock = &per_cpu(memcg_stock, cpu);
+ struct memcg_stock_pcp *stock = &get_cpu_var(memcg_stock);
if (stock->cached != memcg) { /* reset if necessary */
drain_stock(stock);
stock->cached = memcg;
}
stock->nr_pages += nr_pages;
- put_cpu_light();
+ put_cpu_var(memcg_stock);
}
/*
--
2.14.3
From: Thomas Gleixner <tglx(a)linutronix.de>
The timer start debug function is called before the proper timer base is
set. As a consequence the trace data contains the stale CPU and flags
values.
Call the debug function after setting the new base and flags.
Fixes: 500462a9de65 ("timers: Switch to a non-cascading wheel")
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Cc: stable(a)vger.kernel.org
Cc: rt(a)linutronix.de
Signed-off-by: Sebastian Andrzej Siewior <bigeasy(a)linutronix.de>
---
kernel/time/timer.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/time/timer.c b/kernel/time/timer.c
index a8246d79cb5a..6b322aea1c46 100644
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -838,8 +838,6 @@ __mod_timer(struct timer_list *timer, unsigned long expires,
if (!ret && pending_only)
goto out_unlock;
- debug_activate(timer, expires);
-
new_base = get_target_base(base, pinned);
if (base != new_base) {
@@ -854,6 +852,8 @@ __mod_timer(struct timer_list *timer, unsigned long expires,
base = switch_timer_base(timer, base, new_base);
}
+ debug_activate(timer, expires);
+
timer->expires = expires;
internal_add_timer(base, timer);
--
2.14.3
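The ordering problem generalizes: emitting a trace/debug event before the
object it describes has been updated records the stale values. A standalone
sketch (plain C, not kernel/time/timer.c; the toy_timer type is made up):

/* Illustrative only -- trace after the state is final, not before. */
#include <stdio.h>

struct toy_timer { int cpu; };

static void trace_activate(const struct toy_timer *t)
{
	printf("activated on cpu %d\n", t->cpu);
}

int main(void)
{
	struct toy_timer t = { .cpu = 0 };

	trace_activate(&t);	/* old order: records cpu 0 (stale) */
	t.cpu = 3;		/* stands in for switch_timer_base() */
	trace_activate(&t);	/* fixed order: records the real target */
	return 0;
}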
[BUG]
On some btrfs filesystems, fstrim only trims the unallocated space and does
not trim any space in existing block groups.
[CAUSE]
Before the fstrim_range is passed to btrfs_trim_fs(), it gets truncated to
the range [0, super->total_bytes).
So btrfs_trim_fs() will only be able to trim block groups in the range
[0, super->total_bytes).
However, for btrfs any bytenr aligned to the sector size is valid, since
btrfs uses its own logical address space and nothing limits where block
groups are placed.
On a btrfs filesystem that is balanced routinely, it is quite easy for all
block groups to end up with a bytenr beyond super->total_bytes.
In that case, btrfs will not trim the existing block groups.
[FIX]
Just remove the truncation in btrfs_ioctl_fitrim(), so btrfs_trim_fs()
can get the unmodified range, which is normally set to [0, U64_MAX].
Reported-by: Chris Murphy <lists(a)colorremedies.com>
Fixes: f4c697e6406d ("btrfs: return EINVAL if start > total_bytes in fitrim ioctl")
Cc: <stable(a)vger.kernel.org> # v4.0+
Signed-off-by: Qu Wenruo <wqu(a)suse.com>
---
fs/btrfs/extent-tree.c | 10 +---------
fs/btrfs/ioctl.c | 11 +++++++----
2 files changed, 8 insertions(+), 13 deletions(-)
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index f3b088665b7a..647691fc16e8 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -10994,19 +10994,11 @@ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
u64 start;
u64 end;
u64 trimmed = 0;
- u64 total_bytes = btrfs_super_total_bytes(fs_info->super_copy);
int bg_ret = 0;
int dev_ret = 0;
int ret = 0;
- /*
- * try to trim all FS space, our block group may start from non-zero.
- */
- if (range->len == total_bytes)
- cache = btrfs_lookup_first_block_group(fs_info, range->start);
- else
- cache = btrfs_lookup_block_group(fs_info, range->start);
-
+ cache = btrfs_lookup_first_block_group(fs_info, range->start);
for (; cache; cache = next_block_group(fs_info, cache)) {
if (cache->key.objectid >= (range->start + range->len)) {
btrfs_put_block_group(cache);
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index ac85e07f567b..761fba8d8f75 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -364,7 +364,6 @@ static noinline int btrfs_ioctl_fitrim(struct file *file, void __user *arg)
struct fstrim_range range;
u64 minlen = ULLONG_MAX;
u64 num_devices = 0;
- u64 total_bytes = btrfs_super_total_bytes(fs_info->super_copy);
int ret;
if (!capable(CAP_SYS_ADMIN))
@@ -388,11 +387,15 @@ static noinline int btrfs_ioctl_fitrim(struct file *file, void __user *arg)
return -EOPNOTSUPP;
if (copy_from_user(&range, arg, sizeof(range)))
return -EFAULT;
- if (range.start > total_bytes ||
- range.len < fs_info->sb->s_blocksize)
+
+ /*
+ * NOTE: Don't truncate the range using super->total_bytes.
+ * Bytenr of btrfs block group is in btrfs logical address space,
+ * which can be any sector size aligned bytenr in [0, U64_MAX].
+ */
+ if (range.len < fs_info->sb->s_blocksize)
return -EINVAL;
- range.len = min(range.len, total_bytes - range.start);
range.minlen = max(range.minlen, minlen);
ret = btrfs_trim_fs(fs_info, &range);
if (ret < 0)
--
2.16.3
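To see what the removed truncation costs, here is a standalone sketch (plain
C, not btrfs code; the sizes are hypothetical): clamping the requested trim
range to total_bytes excludes any block group whose logical bytenr has been
relocated above that offset, which is exactly the case the patch fixes.

/* Illustrative only -- not btrfs code; sizes are hypothetical. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t total_bytes = 100ULL << 30;	/* 100 GiB of device space */
	uint64_t bg_start = 120ULL << 30;	/* block group relocated past total_bytes */
	uint64_t range_start = 0, range_len = UINT64_MAX;

	/* Old behaviour: clamp the requested range to [0, total_bytes). */
	uint64_t clamped_len = range_len < total_bytes - range_start ?
			       range_len : total_bytes - range_start;

	printf("block group inside clamped range: %s\n",
	       bg_start < range_start + clamped_len ? "yes" : "no");	/* "no" */
	return 0;
}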