Hi Greg,
Commit 318aaf34f1179b39f ("scsi: libsas: defer ata device eh commands to
libata") fixes CVE-2018-10021. Its severity is disputed, but it is
nonetheless a real bug. Please consider applying it to the stable releases.
Thanks,
Guenter
Hi Andrew, please consider this series for 4.18.
For maintainability, as ZONE_DEVICE continues to attract new users,
it is useful to keep all users consolidated on devm_memremap_pages() as
the interface for creating "device pages".
The devm_memremap_pages() implementation was recently reworked to make
it more generic for arbitrary users, like the proposed peer-to-peer
PCI-E enabling. HMM pre-dated this rework and opted to duplicate
devm_memremap_pages() as hmm_devmem_pages_create().
Rework HMM to be a consumer of devm_memremap_pages() directly and fix up
the licensing on the exports given the deep dependencies on the mm.
The patches are based on v4.17-rc6, where there are no upstream consumers
of the HMM functionality.
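For reference, here is a hedged sketch of what a consumer looks like on the
post-rework interface (the v4.17-era struct dev_pagemap based API).
example_create_device_pages() is a made-up helper, the field usage should
be checked against the target tree, and this is not taken from any in-tree
driver:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/memremap.h>
#include <linux/pfn.h>
#include <linux/slab.h>

static struct page *example_create_device_pages(struct device *dev,
						struct resource *res,
						struct percpu_ref *ref)
{
	struct dev_pagemap *pgmap;
	void *addr;

	pgmap = devm_kzalloc(dev, sizeof(*pgmap), GFP_KERNEL);
	if (!pgmap)
		return ERR_PTR(-ENOMEM);

	pgmap->res = *res;	/* physical range to back with struct pages */
	pgmap->ref = ref;	/* percpu_ref that gates page lifetime */

	/* devm semantics: teardown is bound to @dev, no explicit destroy */
	addr = devm_memremap_pages(dev, pgmap);
	if (IS_ERR(addr))
		return ERR_CAST(addr);

	return pfn_to_page(PHYS_PFN(res->start));
}

The point of the consolidation is that HMM then follows the same lifetime
and error handling rules as every other ZONE_DEVICE user.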
---
Dan Williams (5):
mm, devm_memremap_pages: mark devm_memremap_pages() EXPORT_SYMBOL_GPL
mm, devm_memremap_pages: handle errors allocating final devres action
mm, hmm: use devm semantics for hmm_devmem_{add,remove}
mm, hmm: replace hmm_devmem_pages_create() with devm_memremap_pages()
mm, hmm: mark hmm_devmem_{add,add_resource} EXPORT_SYMBOL_GPL
Documentation/vm/hmm.txt | 1
include/linux/hmm.h | 4 -
include/linux/memremap.h | 1
kernel/memremap.c | 39 +++++-
mm/hmm.c | 297 +++++++---------------------------------------
5 files changed, 77 insertions(+), 265 deletions(-)
When the allocation process is scheduled back and the mapped hw queue has
changed, fake one extra wake up on the previous queue to compensate for the
missed wake up, so other allocations on the previous queue won't be starved.
This patch fixes a request allocation hang that is easily triggered with a
very low nr_requests.
Cc: <stable(a)vger.kernel.org>
Cc: Omar Sandoval <osandov(a)fb.com>
Signed-off-by: Ming Lei <ming.lei(a)redhat.com>
---
V3:
- fix comments as suggested by Jens
- remove the wrapper as suggested by Omar
V2:
- fix build failure
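For reviewers, here is a userspace toy model of the batched wake up
accounting and of the miss this patch compensates for; all names below are
made up for illustration and none of this is kernel code:

#include <stdbool.h>
#include <stdio.h>

struct toy_queue {
	int wake_batch;		/* frees needed to generate one wake token */
	int wait_cnt;		/* countdown to the next token */
	int waiters;		/* tasks currently sleeping on this queue */
};

static void toy_wake_up(struct toy_queue *q, const char *name)
{
	if (q->waiters > 0) {
		q->waiters--;
		printf("%s: woke a waiter, %d still sleeping\n",
		       name, q->waiters);
	} else {
		printf("%s: wake token wasted, nobody waiting\n", name);
	}
}

static void toy_free_tag(struct toy_queue *q, const char *name)
{
	if (--q->wait_cnt == 0) {
		q->wait_cnt = q->wake_batch;
		toy_wake_up(q, name);
	}
}

int main(void)
{
	/* two tasks T1 and T2 sleep on the previous hw queue */
	struct toy_queue prev = { .wake_batch = 2, .wait_cnt = 2, .waiters = 2 };
	bool compensate = true;	/* flip to false to reproduce the hang */

	/* two tag frees on prev generate one wake token, waking T1 */
	toy_free_tag(&prev, "prev");
	toy_free_tag(&prev, "prev");

	/*
	 * T1 finds its hw queue mapping changed and moves on to another
	 * queue: the token it consumed belonged to prev, so T2 keeps
	 * sleeping even though a freed tag sits unused there.  With a
	 * very low nr_requests no further frees may ever arrive.
	 */
	if (compensate)
		toy_wake_up(&prev, "prev");	/* the fix: one fake wake up */

	printf("prev: %d waiter(s) left sleeping\n", prev.waiters);
	return 0;
}

The toy collapses the per-ws wait queues into one counter; roughly
speaking, the real code round-robins wake tokens over SBQ_WAIT_QUEUES
wait queues, which is why a token consumed by a departing task is not
simply re-detected later.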
block/blk-mq-tag.c | 12 ++++++++++++
include/linux/sbitmap.h | 7 +++++++
lib/sbitmap.c | 22 ++++++++++++----------
3 files changed, 31 insertions(+), 10 deletions(-)
diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 336dde07b230..a4e58fc28a06 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -134,6 +134,8 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
ws = bt_wait_ptr(bt, data->hctx);
drop_ctx = data->ctx == NULL;
do {
+ struct sbitmap_queue *bt_prev;
+
/*
* We're out of tags on this hardware queue, kick any
* pending IO submits before going to sleep waiting for
@@ -159,6 +161,7 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
if (data->ctx)
blk_mq_put_ctx(data->ctx);
+ bt_prev = bt;
io_schedule();
data->ctx = blk_mq_get_ctx(data->q);
@@ -170,6 +173,15 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
bt = &tags->bitmap_tags;
finish_wait(&ws->wait, &wait);
+
+ /*
+ * If the destination hw queue has changed, fake one wake up
+ * on the previous queue to compensate for the missed wake up,
+ * so other allocations on the previous queue won't be starved.
+ */
+ if (bt != bt_prev)
+ sbitmap_queue_wake_up(bt_prev);
+
ws = bt_wait_ptr(bt, data->hctx);
} while (1);
diff --git a/include/linux/sbitmap.h b/include/linux/sbitmap.h
index 841585f6e5f2..bba9d80191b7 100644
--- a/include/linux/sbitmap.h
+++ b/include/linux/sbitmap.h
@@ -484,6 +484,13 @@ static inline struct sbq_wait_state *sbq_wait_ptr(struct sbitmap_queue *sbq,
void sbitmap_queue_wake_all(struct sbitmap_queue *sbq);
/**
+ * sbitmap_queue_wake_up() - Wake up some of the waiters in one
+ * waitqueue on a &struct sbitmap_queue.
+ * @sbq: Bitmap queue to wake up.
+ */
+void sbitmap_queue_wake_up(struct sbitmap_queue *sbq);
+
+/**
* sbitmap_queue_show() - Dump &struct sbitmap_queue information to a &struct
* seq_file.
* @sbq: Bitmap queue to show.
diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index e6a9c06ec70c..14e027a33ffa 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -335,8 +335,9 @@ void sbitmap_queue_resize(struct sbitmap_queue *sbq, unsigned int depth)
if (sbq->wake_batch != wake_batch) {
WRITE_ONCE(sbq->wake_batch, wake_batch);
/*
- * Pairs with the memory barrier in sbq_wake_up() to ensure that
- * the batch size is updated before the wait counts.
+ * Pairs with the memory barrier in sbitmap_queue_wake_up()
+ * to ensure that the batch size is updated before the wait
+ * counts.
*/
smp_mb__before_atomic();
for (i = 0; i < SBQ_WAIT_QUEUES; i++)
@@ -425,7 +426,7 @@ static struct sbq_wait_state *sbq_wake_ptr(struct sbitmap_queue *sbq)
return NULL;
}
-static void sbq_wake_up(struct sbitmap_queue *sbq)
+void sbitmap_queue_wake_up(struct sbitmap_queue *sbq)
{
struct sbq_wait_state *ws;
unsigned int wake_batch;
@@ -454,23 +455,24 @@ static void sbq_wake_up(struct sbitmap_queue *sbq)
*/
smp_mb__before_atomic();
/*
- * If there are concurrent callers to sbq_wake_up(), the last
- * one to decrement the wait count below zero will bump it back
- * up. If there is a concurrent resize, the count reset will
- * either cause the cmpxchg to fail or overwrite after the
- * cmpxchg.
+ * If there are concurrent callers to sbitmap_queue_wake_up(),
+ * the last one to decrement the wait count below zero will
+ * bump it back up. If there is a concurrent resize, the count
+ * reset will either cause the cmpxchg to fail or overwrite
+ * after the cmpxchg.
*/
atomic_cmpxchg(&ws->wait_cnt, wait_cnt, wait_cnt + wake_batch);
sbq_index_atomic_inc(&sbq->wake_index);
wake_up_nr(&ws->wait, wake_batch);
}
}
+EXPORT_SYMBOL_GPL(sbitmap_queue_wake_up);
void sbitmap_queue_clear(struct sbitmap_queue *sbq, unsigned int nr,
unsigned int cpu)
{
sbitmap_clear_bit_unlock(&sbq->sb, nr);
- sbq_wake_up(sbq);
+ sbitmap_queue_wake_up(sbq);
if (likely(!sbq->round_robin && nr < sbq->sb.depth))
*per_cpu_ptr(sbq->alloc_hint, cpu) = nr;
}
@@ -482,7 +484,7 @@ void sbitmap_queue_wake_all(struct sbitmap_queue *sbq)
/*
* Pairs with the memory barrier in set_current_state() like in
- * sbq_wake_up().
+ * sbitmap_queue_wake_up().
*/
smp_mb();
wake_index = atomic_read(&sbq->wake_index);
--
2.9.5
When the allocation process is scheduled back and the mapped hw queue has
changed, do one extra wake up on the original queue to compensate for the
missed wake up, so other allocations on the original queue won't be starved.
This patch fixes a request allocation hang that is easily triggered with a
very low nr_requests.
Cc: <stable(a)vger.kernel.org>
Cc: Omar Sandoval <osandov(a)fb.com>
Signed-off-by: Ming Lei <ming.lei(a)redhat.com>
---
V2:
fix build failure
block/blk-mq-tag.c | 13 +++++++++++++
include/linux/sbitmap.h | 7 +++++++
lib/sbitmap.c | 6 ++++++
3 files changed, 26 insertions(+)
diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 336dde07b230..77607f89d205 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -134,6 +134,8 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
ws = bt_wait_ptr(bt, data->hctx);
drop_ctx = data->ctx == NULL;
do {
+ struct sbitmap_queue *bt_orig;
+
/*
* We're out of tags on this hardware queue, kick any
* pending IO submits before going to sleep waiting for
@@ -159,6 +161,7 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
if (data->ctx)
blk_mq_put_ctx(data->ctx);
+ bt_orig = bt;
io_schedule();
data->ctx = blk_mq_get_ctx(data->q);
@@ -170,6 +173,16 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
bt = &tags->bitmap_tags;
finish_wait(&ws->wait, &wait);
+
+ /*
+ * If the destination hw queue has changed, wake up the
+ * original queue one extra time to compensate for the
+ * missed wake up, so other allocations on the original
+ * queue won't be starved.
+ */
+ if (bt != bt_orig)
+ sbitmap_queue_wake_up(bt_orig);
+
ws = bt_wait_ptr(bt, data->hctx);
} while (1);
diff --git a/include/linux/sbitmap.h b/include/linux/sbitmap.h
index 841585f6e5f2..b23f50355281 100644
--- a/include/linux/sbitmap.h
+++ b/include/linux/sbitmap.h
@@ -484,6 +484,13 @@ static inline struct sbq_wait_state *sbq_wait_ptr(struct sbitmap_queue *sbq,
void sbitmap_queue_wake_all(struct sbitmap_queue *sbq);
/**
+ * sbitmap_queue_wake_up() - Do one compensating wake up if the queue
+ * allocated from has changed after scheduling back.
+ * @sbq: Bitmap queue to wake up.
+ */
+void sbitmap_queue_wake_up(struct sbitmap_queue *sbq);
+
+/**
* sbitmap_queue_show() - Dump &struct sbitmap_queue information to a &struct
* seq_file.
* @sbq: Bitmap queue to show.
diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index e6a9c06ec70c..c6ae4206bcb1 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -466,6 +466,12 @@ static void sbq_wake_up(struct sbitmap_queue *sbq)
}
}
+void sbitmap_queue_wake_up(struct sbitmap_queue *sbq)
+{
+ sbq_wake_up(sbq);
+}
+EXPORT_SYMBOL_GPL(sbitmap_queue_wake_up);
+
void sbitmap_queue_clear(struct sbitmap_queue *sbq, unsigned int nr,
unsigned int cpu)
{
--
2.9.5
Hi Greg,
9 more patches against the 2018/05/23 linux-4.9.y stable branch.
This brings the Spectre defense in 4.9 up to date with the
current upstream tree. The upstream patches that remove the indirect
branches from the BPF JIT are included (these do not have a
CC:stable tag).
Martin Schwidefsky (9):
s390: add assembler macros for CPU alternatives
s390: move expoline assembler macros to a header
s390/crc32-vx: use expoline for indirect branches
s390/lib: use expoline for indirect branches
s390/ftrace: use expoline for indirect branches
s390/kernel: use expoline for indirect branches
s390: move spectre sysfs attribute code
s390: extend expoline to BC instructions
s390: use expoline thunks in the BPF JIT
arch/s390/crypto/crc32be-vx.S | 5 +-
arch/s390/crypto/crc32le-vx.S | 4 +-
arch/s390/include/asm/alternative-asm.h | 108 ++++++++++++++++++
arch/s390/include/asm/nospec-insn.h | 195 ++++++++++++++++++++++++++++++++
arch/s390/kernel/Makefile | 1 +
arch/s390/kernel/asm-offsets.c | 1 +
arch/s390/kernel/base.S | 24 ++--
arch/s390/kernel/entry.S | 105 ++++-------------
arch/s390/kernel/mcount.S | 14 ++-
arch/s390/kernel/nospec-branch.c | 43 ++++---
arch/s390/kernel/nospec-sysfs.c | 21 ++++
arch/s390/kernel/reipl.S | 7 +-
arch/s390/kernel/swsusp.S | 9 +-
arch/s390/lib/mem.S | 9 +-
arch/s390/net/bpf_jit.S | 16 ++-
arch/s390/net/bpf_jit_comp.c | 63 ++++++++++-
16 files changed, 488 insertions(+), 137 deletions(-)
create mode 100644 arch/s390/include/asm/alternative-asm.h
create mode 100644 arch/s390/include/asm/nospec-insn.h
create mode 100644 arch/s390/kernel/nospec-sysfs.c
--
2.16.3
Hi Greg,
9 more patches against the 2018/05/23 linux-4.14.y stable branch.
This brings the Spectre defense in 4.14 up to date with the
current upstream tree. The upstream patches that remove the indirect
branches from the BPF JIT are included (these do not have a
CC:stable tag).
Martin Schwidefsky (9):
s390: add assembler macros for CPU alternatives
s390: move expoline assembler macros to a header
s390/crc32-vx: use expoline for indirect branches
s390/lib: use expoline for indirect branches
s390/ftrace: use expoline for indirect branches
s390/kernel: use expoline for indirect branches
s390: move spectre sysfs attribute code
s390: extend expoline to BC instructions
s390: use expoline thunks in the BPF JIT
arch/s390/crypto/crc32be-vx.S | 5 +-
arch/s390/crypto/crc32le-vx.S | 4 +-
arch/s390/include/asm/alternative-asm.h | 108 ++++++++++++++++++
arch/s390/include/asm/nospec-insn.h | 195 ++++++++++++++++++++++++++++++++
arch/s390/kernel/Makefile | 1 +
arch/s390/kernel/asm-offsets.c | 1 +
arch/s390/kernel/base.S | 24 ++--
arch/s390/kernel/entry.S | 105 ++++-------------
arch/s390/kernel/mcount.S | 14 ++-
arch/s390/kernel/nospec-branch.c | 43 ++++---
arch/s390/kernel/nospec-sysfs.c | 21 ++++
arch/s390/kernel/reipl.S | 7 +-
arch/s390/kernel/swsusp.S | 10 +-
arch/s390/lib/mem.S | 13 ++-
arch/s390/net/bpf_jit.S | 16 ++-
arch/s390/net/bpf_jit_comp.c | 63 ++++++++++-
16 files changed, 490 insertions(+), 140 deletions(-)
create mode 100644 arch/s390/include/asm/alternative-asm.h
create mode 100644 arch/s390/include/asm/nospec-insn.h
create mode 100644 arch/s390/kernel/nospec-sysfs.c
--
2.16.3
Hi Greg,
Please queue up this series of patches for 4.16 if you have no objections.
These are mostly clean backports, but one or two required some fixing up,
hence the backport.
cheers
Mauricio Faria de Oliveira (2):
powerpc/pseries: Fix clearing of security feature flags
powerpc: Move default security feature flags
Michael Ellerman (11):
powerpc/rfi-flush: Always enable fallback flush on pseries
powerpc: Add security feature flags for Spectre/Meltdown
powerpc/pseries: Add new H_GET_CPU_CHARACTERISTICS flags
powerpc/pseries: Set or clear security feature flags
powerpc/powernv: Set or clear security feature flags
powerpc/64s: Move cpu_show_meltdown()
powerpc/64s: Enhance the information in cpu_show_meltdown()
powerpc/powernv: Use the security flags in pnv_setup_rfi_flush()
powerpc/pseries: Use the security flags in pseries_setup_rfi_flush()
powerpc/64s: Wire up cpu_show_spectre_v1()
powerpc/64s: Wire up cpu_show_spectre_v2()
Nicholas Piggin (1):
powerpc/64s: Add support for a store forwarding barrier at kernel
entry/exit
arch/powerpc/include/asm/exception-64s.h | 29 ++++
arch/powerpc/include/asm/feature-fixups.h | 19 +++
arch/powerpc/include/asm/hvcall.h | 3 +
arch/powerpc/include/asm/security_features.h | 85 ++++++++++
arch/powerpc/kernel/Makefile | 2 +-
arch/powerpc/kernel/exceptions-64s.S | 19 ++-
arch/powerpc/kernel/security.c | 237 +++++++++++++++++++++++++++
arch/powerpc/kernel/setup_64.c | 8 -
arch/powerpc/kernel/vmlinux.lds.S | 14 ++
arch/powerpc/lib/feature-fixups.c | 115 +++++++++++++
arch/powerpc/platforms/powernv/setup.c | 96 +++++++----
arch/powerpc/platforms/pseries/setup.c | 71 +++++---
12 files changed, 638 insertions(+), 60 deletions(-)
create mode 100644 arch/powerpc/include/asm/security_features.h
create mode 100644 arch/powerpc/kernel/security.c
--
2.14.1