On 03/07/18 12:25, James Hogan wrote:
> On Wed, Mar 07, 2018 at 12:11:41PM -0800, Frank Rowand wrote:
>> I initially misread the patch description (and imagined an entirely
>> different problem).
>>
>>
>> On 03/07/18 06:06, James Hogan wrote:
>>> On dtb files which contain hyphens, the dt_S_dtb command to build the
>>> dtb.S files (which allow DTB files to be built into the kernel) results
>>> in errors like the following:
>>>
>>> bcm3368-netgear-cvg834g.dtb.S: Assembler messages:
>>> bcm3368-netgear-cvg834g.dtb.S:5: Error: : no such section
>>> bcm3368-netgear-cvg834g.dtb.S:5: Error: junk at end of line, first unrecognized character is `-'
>>> bcm3368-netgear-cvg834g.dtb.S:6: Error: unrecognized opcode `__dtb_bcm3368-netgear-cvg834g_begin:'
>>> bcm3368-netgear-cvg834g.dtb.S:8: Error: unrecognized opcode `__dtb_bcm3368-netgear-cvg834g_end:'
>>> bcm3368-netgear-cvg834g.dtb.S:9: Error: : no such section
>>> bcm3368-netgear-cvg834g.dtb.S:9: Error: junk at end of line, first unrecognized character is `-'
>> Please replace the following section:
>>
>>> This is due to the hyphen being used in symbol names. Replace all
>>> hyphens with underscores in the dt_S_dtb command to avoid this problem.
>>>
>>> Quite a lot of dts files have hyphens, but it's only a problem on MIPS
>>> where such files can be built into the kernel. For example when
>>> CONFIG_DT_NETGEAR_CVG834G=y, or on BMIPS kernels when the dtbs target is
>>> used (in the latter case it admittedly shouldn't really build all the
>>> dtb.o files, but that's a separate issue).
>>
>> with:
>>
>> cmd_dt_S_dtb constructs the assembly source to incorporate a devicetree
>> FDT (that is, the .dtb file) as binary data in the kernel image.
>> This assembly source contains labels before and after the binary data.
>> The label names incorporate the file name of the corresponding .dtb
>> file. Hyphens are not legal characters in labels, so transform all
>> hyphens from the file name to underscores when constructing the labels.
>
> Thanks, that is clearer.
>
> I'll keep the paragraph about MIPS and the example configuration though,
> as I think its important information to reproduce the problem, and to
> justify why it wouldn't be appropriate to just rename the files (which
> was my first reaction).
Other than the part that says "it's only a problem on MIPS". That is
pedantically correct because no other architecture (that I am aware
of, not that I searched) currently has a devicetree source file name
with a hyphen in it where that file is compiled into the kernel as
an asm file. But it is potentially a problem on any architecture,
so it is misleading to label it as MIPS only.
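For illustration, here is roughly what the generated .dtb.S should end
up containing once the hyphens in the file name are mapped to
underscores (a minimal sketch; the label names follow the assembler
errors quoted above, while the include, section, and alignment
directives are recalled from scripts/Makefile.lib rather than taken
from the patch itself):

  /* sketch of bcm3368-netgear-cvg834g.dtb.S after the fix */
  #include <asm-generic/vmlinux.lds.h>
  .section .dtb.init.rodata,"a"
  .balign STRUCT_ALIGNMENT
  .global __dtb_bcm3368_netgear_cvg834g_begin
  __dtb_bcm3368_netgear_cvg834g_begin:
  .incbin "bcm3368-netgear-cvg834g.dtb"
  __dtb_bcm3368_netgear_cvg834g_end:
  .global __dtb_bcm3368_netgear_cvg834g_end
  .balign STRUCT_ALIGNMENT

Note that the hyphens in the .incbin path are harmless, since that is
just a quoted file name; only the symbol labels need the substitution.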
>
>> Reviewed-by: Frank Rowand <frowand.list(a)gmail.com>
>
> Thanks
> James
>
On Wed, 7 Mar 2018 22:43:42 +0100
Pavel Machek <pavel(a)ucw.cz> wrote:
> On Wed 2018-03-07 09:01:16, Richard Weinberger wrote:
> > Pavel,
> >
> > On Wednesday, 7 March 2018 at 00:18:05 CET, Pavel Machek wrote:
> > > On Sat 2018-03-03 11:45:54, Richard Weinberger wrote:
> > > > While UBI and UBIFS seem to work at first sight with MLC NAND, you will
> > > > most likely lose all your data upon a power-cut or due to read/write
> > > > disturb.
> > > > In order to protect users from bad surprises, refuse to attach to MLC
> > > > NAND.
> > > >
> > > > Cc: stable(a)vger.kernel.org
> > >
> > > That sounds like a _really_ bad idea for stable. All it does is
> > > remove support for hardware that somehow works.
> >
> > MLC is not supported and does not work. Full stop.
> > If someone manages to get it to work somehow, either with hardware or
> > software hacks, they are on their own.
> > Having it in stable is the only chance we have to get it into vendor
> > kernels.
>
> Can you show how it meets the stable kernel criteria? They are
> documented in tree. This should not be in stable.
>
> And I'd like to see the changelog improved. The real reason MLC is not
> supported is the upper/lower page pairing on MLC, and the real fix is
> to make UBI work with bigger pages.
Come on! Don't you think this would have been fixed already if it were
that easy?! Have you looked at an MLC datasheet to see how paired pages
are combined? If you had, you would know that paired pages are almost
never contiguous, which prevents the trick you're suggesting here.
Please inform yourself before making such presumptuous statements (have
a look at these slides if you want some details about why this is not
so simple [1]).
I'm definitely not saying supporting MLC NANDs in Linux is impossible,
and if you're interested in working on this topic I'd be happy to help.
But please don't block this patch without understanding what supporting
MLC NANDs implies.
[1] https://events.static.linuxfound.org/sites/events/files/slides/ubi-mlc.pdf
--
Boris Brezillon, Bootlin (formerly Free Electrons)
Embedded Linux and Kernel engineering
https://bootlin.com
Hi Pavel,
On Wed, Mar 7, 2018 at 1:43 PM, Pavel Machek <pavel(a)ucw.cz> wrote:
> On Wed 2018-03-07 09:01:16, Richard Weinberger wrote:
>> Pavel,
>>
>> On Wednesday, 7 March 2018 at 00:18:05 CET, Pavel Machek wrote:
>> > On Sat 2018-03-03 11:45:54, Richard Weinberger wrote:
>> > > While UBI and UBIFS seem to work at first sight with MLC NAND, you will
>> > > most likely lose all your data upon a power-cut or due to read/write
>> > > disturb.
>> > > In order to protect users from bad surprises, refuse to attach to MLC
>> > > NAND.
>> > >
>> > > Cc: stable(a)vger.kernel.org
>> >
>> > That sounds like a _really_ bad idea for stable. All it does is
>> > remove support for hardware that somehow works.
>>
>> MLC is not supported and does not work. Full stop.
>> If someone manages to get it to work somehow, either with hardware or
>> software hacks, they are on their own.
>> Having it in stable is the only chance we have to get it into vendor
>> kernels.
>
> Can you show how it meets the stable kernel criteria? They are
> documented in tree. This should not be in stable.
>
> And I'd like to see the changelog improved. The real reason MLC is not
> supported is the upper/lower page pairing on MLC, and the real fix is
> to make UBI work with bigger pages.
>
To clarify one thing: the reason for this is that MLC has never
actually been supported, nor has it ever worked properly. The fact that
it kinda worked was incidental, and the lack of clarity around that has
caused major problems for people. This patch only makes the situation
explicit and keeps people from mistakenly trying to use UBIFS on MLC
flash and risking their data and products. To me, that's what's
important.
This is an important patch, even if all it does is keep people from
losing data. It also changes the conversation from "I have a corrupted
UBIFS device, BTW it's on MLC..." to "What can we do to get UBIFS to
work on MLC?".
I don't know how the stable criteria apply to this patch. But what I do
know is that if it doesn't go back into the various stable trees, there
will be manufacturers who will continue to try to use UBIFS on MLC in
ignorance for the next several years, until the current stable kernels
EOL, despite there being a known patch that would make it immediately
obvious they shouldn't.
Thanks,
- Steve
On Wed, Mar 07, 2018 at 02:02:13PM -0800, Paul Lawrence wrote:
> Great! We need to make sure this gets backported to 4.4 and 4.9, and to
> 3.18 with the original dependency, please.
That will happen when it lands in Linus's tree, which should be later
this week if all goes well.
thanks,
greg k-h
This is a note to let you know that I've just added the patch titled
nvme-rdma: don't suppress send completions
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
nvme-rdma-don-t-suppress-send-completions.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From b4b591c87f2b0f4ebaf3a68d4f13873b241aa584 Mon Sep 17 00:00:00 2001
From: Sagi Grimberg <sagi(a)grimberg.me>
Date: Thu, 23 Nov 2017 17:35:21 +0200
Subject: nvme-rdma: don't suppress send completions
From: Sagi Grimberg <sagi(a)grimberg.me>
commit b4b591c87f2b0f4ebaf3a68d4f13873b241aa584 upstream.
The entire completions suppress mechanism is currently broken because the
HCA might retry a send operation (due to dropped ack) after the nvme
transaction has completed.
In order to handle this, we signal all send completions and introduce a
separate done handler for async events as they will be handled differently
(as they don't include in-capsule data by definition).
Signed-off-by: Sagi Grimberg <sagi(a)grimberg.me>
Reviewed-by: Max Gurtovoy <maxg(a)mellanox.com>
Signed-off-by: Christoph Hellwig <hch(a)lst.de>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/nvme/host/rdma.c | 54 ++++++++++++-----------------------------------
1 file changed, 14 insertions(+), 40 deletions(-)
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -88,7 +88,6 @@ enum nvme_rdma_queue_flags {
struct nvme_rdma_queue {
struct nvme_rdma_qe *rsp_ring;
- atomic_t sig_count;
int queue_size;
size_t cmnd_capsule_len;
struct nvme_rdma_ctrl *ctrl;
@@ -521,7 +520,6 @@ static int nvme_rdma_alloc_queue(struct
queue->cmnd_capsule_len = sizeof(struct nvme_command);
queue->queue_size = queue_size;
- atomic_set(&queue->sig_count, 0);
queue->cm_id = rdma_create_id(&init_net, nvme_rdma_cm_handler, queue,
RDMA_PS_TCP, IB_QPT_RC);
@@ -1232,21 +1230,9 @@ static void nvme_rdma_send_done(struct i
nvme_end_request(rq, req->status, req->result);
}
-/*
- * We want to signal completion at least every queue depth/2. This returns the
- * largest power of two that is not above half of (queue size + 1) to optimize
- * (avoid divisions).
- */
-static inline bool nvme_rdma_queue_sig_limit(struct nvme_rdma_queue *queue)
-{
- int limit = 1 << ilog2((queue->queue_size + 1) / 2);
-
- return (atomic_inc_return(&queue->sig_count) & (limit - 1)) == 0;
-}
-
static int nvme_rdma_post_send(struct nvme_rdma_queue *queue,
struct nvme_rdma_qe *qe, struct ib_sge *sge, u32 num_sge,
- struct ib_send_wr *first, bool flush)
+ struct ib_send_wr *first)
{
struct ib_send_wr wr, *bad_wr;
int ret;
@@ -1255,31 +1241,12 @@ static int nvme_rdma_post_send(struct nv
sge->length = sizeof(struct nvme_command),
sge->lkey = queue->device->pd->local_dma_lkey;
- qe->cqe.done = nvme_rdma_send_done;
-
wr.next = NULL;
wr.wr_cqe = &qe->cqe;
wr.sg_list = sge;
wr.num_sge = num_sge;
wr.opcode = IB_WR_SEND;
- wr.send_flags = 0;
-
- /*
- * Unsignalled send completions are another giant desaster in the
- * IB Verbs spec: If we don't regularly post signalled sends
- * the send queue will fill up and only a QP reset will rescue us.
- * Would have been way to obvious to handle this in hardware or
- * at least the RDMA stack..
- *
- * Always signal the flushes. The magic request used for the flush
- * sequencer is not allocated in our driver's tagset and it's
- * triggered to be freed by blk_cleanup_queue(). So we need to
- * always mark it as signaled to ensure that the "wr_cqe", which is
- * embedded in request's payload, is not freed when __ib_process_cq()
- * calls wr_cqe->done().
- */
- if (nvme_rdma_queue_sig_limit(queue) || flush)
- wr.send_flags |= IB_SEND_SIGNALED;
+ wr.send_flags = IB_SEND_SIGNALED;
if (first)
first->next = &wr;
@@ -1329,6 +1296,12 @@ static struct blk_mq_tags *nvme_rdma_tag
return queue->ctrl->tag_set.tags[queue_idx - 1];
}
+static void nvme_rdma_async_done(struct ib_cq *cq, struct ib_wc *wc)
+{
+ if (unlikely(wc->status != IB_WC_SUCCESS))
+ nvme_rdma_wr_error(cq, wc, "ASYNC");
+}
+
static void nvme_rdma_submit_async_event(struct nvme_ctrl *arg, int aer_idx)
{
struct nvme_rdma_ctrl *ctrl = to_rdma_ctrl(arg);
@@ -1350,10 +1323,12 @@ static void nvme_rdma_submit_async_event
cmd->common.flags |= NVME_CMD_SGL_METABUF;
nvme_rdma_set_sg_null(cmd);
+ sqe->cqe.done = nvme_rdma_async_done;
+
ib_dma_sync_single_for_device(dev, sqe->dma, sizeof(*cmd),
DMA_TO_DEVICE);
- ret = nvme_rdma_post_send(queue, sqe, &sge, 1, NULL, false);
+ ret = nvme_rdma_post_send(queue, sqe, &sge, 1, NULL);
WARN_ON_ONCE(ret);
}
@@ -1639,7 +1614,6 @@ static blk_status_t nvme_rdma_queue_rq(s
struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
struct nvme_rdma_qe *sqe = &req->sqe;
struct nvme_command *c = sqe->data;
- bool flush = false;
struct ib_device *dev;
blk_status_t ret;
int err;
@@ -1668,13 +1642,13 @@ static blk_status_t nvme_rdma_queue_rq(s
goto err;
}
+ sqe->cqe.done = nvme_rdma_send_done;
+
ib_dma_sync_single_for_device(dev, sqe->dma,
sizeof(struct nvme_command), DMA_TO_DEVICE);
- if (req_op(rq) == REQ_OP_FLUSH)
- flush = true;
err = nvme_rdma_post_send(queue, sqe, req->sge, req->num_sge,
- req->mr->need_inval ? &req->reg_wr.wr : NULL, flush);
+ req->mr->need_inval ? &req->reg_wr.wr : NULL);
if (unlikely(err)) {
nvme_rdma_unmap_data(queue, rq);
goto err;
Patches currently in stable-queue which might be from sagi(a)grimberg.me are
queue-4.14/nvme-rdma-don-t-suppress-send-completions.patch
This is a note to let you know that I've just added the patch titled
netlink: put module reference if dump start fails
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
netlink-put-module-reference-if-dump-start-fails.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From b87b6194be631c94785fe93398651e804ed43e28 Mon Sep 17 00:00:00 2001
From: "Jason A. Donenfeld" <Jason(a)zx2c4.com>
Date: Wed, 21 Feb 2018 04:41:59 +0100
Subject: netlink: put module reference if dump start fails
From: Jason A. Donenfeld <Jason(a)zx2c4.com>
commit b87b6194be631c94785fe93398651e804ed43e28 upstream.
Before, if cb->start() failed, the module reference would never be put,
because cb->cb_running is intentionally false at this point. Users are
generally annoyed by this because they can no longer unload modules that
leak references. Also, it may be possible to tediously wrap a reference
counter back to zero, especially since module.c still uses atomic_inc
instead of refcount_inc.
This patch expands the error path to simply call module_put if
cb->start() fails.
Fixes: 41c87425a1ac ("netlink: do not set cb_running if dump's start() errs")
Signed-off-by: Jason A. Donenfeld <Jason(a)zx2c4.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/netlink/af_netlink.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
--- a/net/netlink/af_netlink.c
+++ b/net/netlink/af_netlink.c
@@ -2258,7 +2258,7 @@ int __netlink_dump_start(struct sock *ss
if (cb->start) {
ret = cb->start(cb);
if (ret)
- goto error_unlock;
+ goto error_put;
}
nlk->cb_running = true;
@@ -2278,6 +2278,8 @@ int __netlink_dump_start(struct sock *ss
*/
return -EINTR;
+error_put:
+ module_put(control->module);
error_unlock:
sock_put(sk);
mutex_unlock(nlk->cb_mutex);
Patches currently in stable-queue which might be from Jason(a)zx2c4.com are
queue-4.9/netlink-put-module-reference-if-dump-start-fails.patch
This is a note to let you know that I've just added the patch titled
netlink: put module reference if dump start fails
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
netlink-put-module-reference-if-dump-start-fails.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From foo@baz Tue Mar 6 19:02:12 PST 2018
From: "Jason A. Donenfeld" <Jason(a)zx2c4.com>
Date: Wed, 21 Feb 2018 04:41:59 +0100
Subject: netlink: put module reference if dump start fails
From: "Jason A. Donenfeld" <Jason(a)zx2c4.com>
[ Upstream commit b87b6194be631c94785fe93398651e804ed43e28 ]
Before, if cb->start() failed, the module reference would never be put,
because cb->cb_running is intentionally false at this point. Users are
generally annoyed by this because they can no longer unload modules that
leak references. Also, it may be possible to tediously wrap a reference
counter back to zero, especially since module.c still uses atomic_inc
instead of refcount_inc.
This patch expands the error path to simply call module_put if
cb->start() fails.
Fixes: 41c87425a1ac ("netlink: do not set cb_running if dump's start() errs")
Signed-off-by: Jason A. Donenfeld <Jason(a)zx2c4.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/netlink/af_netlink.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
--- a/net/netlink/af_netlink.c
+++ b/net/netlink/af_netlink.c
@@ -2276,7 +2276,7 @@ int __netlink_dump_start(struct sock *ss
if (cb->start) {
ret = cb->start(cb);
if (ret)
- goto error_unlock;
+ goto error_put;
}
nlk->cb_running = true;
@@ -2296,6 +2296,8 @@ int __netlink_dump_start(struct sock *ss
*/
return -EINTR;
+error_put:
+ module_put(control->module);
error_unlock:
sock_put(sk);
mutex_unlock(nlk->cb_mutex);
Patches currently in stable-queue which might be from Jason(a)zx2c4.com are
queue-4.14/netlink-put-module-reference-if-dump-start-fails.patch
This is a note to let you know that I've just added the patch titled
x86/speculation: Use Indirect Branch Prediction Barrier in context switch
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-speculation-use-indirect-branch-prediction-barrier-in-context-switch.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 18bf3c3ea8ece8f03b6fc58508f2dfd23c7711c7 Mon Sep 17 00:00:00 2001
From: Tim Chen <tim.c.chen(a)linux.intel.com>
Date: Mon, 29 Jan 2018 22:04:47 +0000
Subject: x86/speculation: Use Indirect Branch Prediction Barrier in context switch
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
From: Tim Chen <tim.c.chen(a)linux.intel.com>
commit 18bf3c3ea8ece8f03b6fc58508f2dfd23c7711c7 upstream.
Flush indirect branches when switching into a process that marked itself
non dumpable. This protects high value processes like gpg better,
without having too high performance overhead.
If done naïvely, we could switch to a kernel idle thread and then back
to the original process, such as:
process A -> idle -> process A
In such scenario, we do not have to do IBPB here even though the process
is non-dumpable, as we are switching back to the same process after a
hiatus.
To avoid the redundant IBPB, which is expensive, we track the last mm
user context ID. The cost is to have an extra u64 mm context id to track
the last mm we were using before switching to the init_mm used by idle.
Avoiding the extra IBPB is probably worth the extra memory for this
common scenario.
For those cases where tlb_defer_switch_to_init_mm() returns true (non
PCID), lazy tlb will defer switch to init_mm, so we will not be changing
the mm for the process A -> idle -> process A switch. So IBPB will be
skipped for this case.
Thanks to the reviewers and Andy Lutomirski for the suggestion of
using ctx_id which got rid of the problem of mm pointer recycling.
Signed-off-by: Tim Chen <tim.c.chen(a)linux.intel.com>
Signed-off-by: David Woodhouse <dwmw(a)amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Cc: ak(a)linux.intel.com
Cc: karahmed(a)amazon.de
Cc: arjan(a)linux.intel.com
Cc: torvalds(a)linux-foundation.org
Cc: linux(a)dominikbrodowski.net
Cc: peterz(a)infradead.org
Cc: bp(a)alien8.de
Cc: luto(a)kernel.org
Cc: pbonzini(a)redhat.com
Link: https://lkml.kernel.org/r/1517263487-3708-1-git-send-email-dwmw@amazon.co.uk
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/include/asm/tlbflush.h | 2 ++
arch/x86/mm/tlb.c | 31 +++++++++++++++++++++++++++++++
2 files changed, 33 insertions(+)
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -68,6 +68,8 @@ static inline void invpcid_flush_all_non
struct tlb_state {
struct mm_struct *active_mm;
int state;
+ /* last user mm's ctx id */
+ u64 last_ctx_id;
/*
* Access to this CR4 shadow and to H/W CR4 is protected by
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -10,6 +10,7 @@
#include <asm/tlbflush.h>
#include <asm/mmu_context.h>
+#include <asm/nospec-branch.h>
#include <asm/cache.h>
#include <asm/apic.h>
#include <asm/uv/uv.h>
@@ -106,6 +107,28 @@ void switch_mm_irqs_off(struct mm_struct
unsigned cpu = smp_processor_id();
if (likely(prev != next)) {
+ u64 last_ctx_id = this_cpu_read(cpu_tlbstate.last_ctx_id);
+
+ /*
+ * Avoid user/user BTB poisoning by flushing the branch
+ * predictor when switching between processes. This stops
+ * one process from doing Spectre-v2 attacks on another.
+ *
+ * As an optimization, flush indirect branches only when
+ * switching into processes that disable dumping. This
+ * protects high value processes like gpg, without having
+ * too high performance overhead. IBPB is *expensive*!
+ *
+ * This will not flush branches when switching into kernel
+ * threads. It will also not flush if we switch to idle
+ * thread and back to the same process. It will flush if we
+ * switch to a different non-dumpable process.
+ */
+ if (tsk && tsk->mm &&
+ tsk->mm->context.ctx_id != last_ctx_id &&
+ get_dumpable(tsk->mm) != SUID_DUMP_USER)
+ indirect_branch_prediction_barrier();
+
if (IS_ENABLED(CONFIG_VMAP_STACK)) {
/*
* If our current stack is in vmalloc space and isn't
@@ -120,6 +143,14 @@ void switch_mm_irqs_off(struct mm_struct
set_pgd(pgd, init_mm.pgd[stack_pgd_index]);
}
+ /*
+ * Record last user mm's context id, so we can avoid
+ * flushing branch buffer with IBPB if we switch back
+ * to the same user.
+ */
+ if (next != &init_mm)
+ this_cpu_write(cpu_tlbstate.last_ctx_id, next->context.ctx_id);
+
this_cpu_write(cpu_tlbstate.state, TLBSTATE_OK);
this_cpu_write(cpu_tlbstate.active_mm, next);
Patches currently in stable-queue which might be from tim.c.chen(a)linux.intel.com are
queue-4.9/x86-speculation-use-indirect-branch-prediction-barrier-in-context-switch.patch
queue-4.9/x86-mm-give-each-mm-tlb-flush-generation-a-unique-id.patch